Noel Rappin Writes Here

Overriding Refinery, Extending Globalize, and Pow!

Uncategorized | Noel Rappin | 6 Comments
Here are a few random tips that have come up while working on an application using Refinery CMS, Globalize, and who the heck knows what else...

Extending stuff from Refinery

Refinery is about as extendable as a Rails framework gets. Although it provides you with a lot of default behavior, it's not that hard to override.

Refinery is made up of a collection of Rails Engines. For example, the main display of pages is in the refinerycms-pages sub-gem. If you want to override a controller or view element in these gems (which include Page and User), Refinery provides a Rake task refinery:override which copies the file into your main application, effectively overriding the default version.

It's a little different if you create your own data types. Refinery generates a Rails Engine for each type, which means that all the controller, model, and view files are actually in your vendor/engines directory and accessible to you (although they do require a development server restart when you change them...) However, a lot of the default CRUD behavior is defined by Refinery in the refinerycms-core gem via the Refinery crudify method, so if you want to change some default behaviors, you either need to look up and copy the existing Refinery behavior, or try and implement your new behavior as a before filter (preferred, if possible).

If you want to extend one of the existing Refinery models, most notably Page or User, you can re-open them in your app to monkey-patch them, but it took us a few tries to learn the magic require statement:

require Refinery::Pages::Engine.config.root + 'app' + 'models' + 'page'
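The pattern here is just ordinary Ruby class re-opening. Stripped of the Rails machinery, it works like this (a plain-Ruby sketch; the Page methods are invented for illustration):

```ruby
# Stand-in for Refinery's Page class, which the require above loads first.
class Page
  def title
    "A long page title that would overflow a breadcrumb"
  end
end

# Later, in the application, re-open the class to add behavior.
# The breadcrumb_title method is purely illustrative.
class Page
  def breadcrumb_title
    title[0, 12]
  end
end
```

The require matters because it guarantees Refinery's version of the class is loaded first, so that the second `class Page` block re-opens the existing class instead of defining a new, empty one.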


Extending Globalize

The Globalize3 gem is used by Refinery to manage i18n content. Globalize works by setting up a separate translation table tied to your model. In your model, you specify which columns are translated (plus you need a migration to create the translation table -- Globalize has a shortcut for that):

translates :description, :text

Behind the scenes, when you access the ActiveRecord object, Globalize joins the object with the row in the translation table corresponding to the current locale, and allows you to transparently access the translated versions of the fields corresponding to that locale. Very cool. (Note that once you have added the description and text fields to the translation table, you no longer need them in the main model table.)
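The mechanics are worth a sketch. This plain-Ruby toy (not the real gem, and none of these names are Globalize's) shows the basic idea: attribute reads and writes resolve through a per-locale translation store keyed on the current locale.

```ruby
# Toy version of the Globalize pattern: one "translation row" per locale,
# standing in for the real join table, with readers and writers that
# transparently resolve against the current locale.
class TranslatedPage
  class << self
    attr_accessor :locale
  end
  self.locale = :en

  def initialize
    @translations = Hash.new { |h, k| h[k] = {} }
  end

  def write(attr, value)
    @translations[self.class.locale][attr] = value
  end

  def read(attr)
    @translations[self.class.locale][attr]
  end
end
```

Globalize does the same thing with a real database table and a join, so the model's regular attribute readers just work, locale by locale.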

A couple of things to be clear on:

Setting locale

I'm not completely sure whether this comes from Refinery or Globalize, but there seem to be two sources of truth as to what locale is used. Rails uses ::I18n.locale, while Refinery uses Thread.current[:globalize_locale]. We had some difficulty with what appeared to be confusion over keeping these in sync, and at least temporarily have fixed the problem with a before filter, along the lines of what the Rails guide on i18n suggests:

before_filter :set_locale

def set_locale
  ::I18n.locale = params[:locale]
  Thread.current[:globalize_locale] = params[:locale]
end

That seems to solve our problems with a single source of truth, but it does feel a little string-and-sealing-wax, so there's probably a better way.

Adding functionality to Translation models

One nice side effect of the way Globalize manages locale-specific content is that you aren't limited to things that are technically translations; anything that is different in each locale can be added to the translates list. For example, if you need to track the author of each particular translation, you can add a :user_id column to the translates call, place the corresponding column in the MODEL_translations database table, and you are good to go.

Except for two things. First, if you want to actually treat that user_id as an ActiveRecord association, you need to add a belongs_to method to the Translation model. Unfortunately, that model is created by Globalize using some metaprogramming magic and doesn't have a physical location that you can use to update the class.

Happily, this is Ruby, and you can always re-open a class. It took me a few tries to figure out exactly the best place to open the class so that I could actually be opening the right class, but I eventually got to this:

class MyModel
  translates :text, :user_id

  class Translation
    belongs_to :user
  end
end

Yep. Nested classes. In Ruby. In essence, the outer class is doubling as a module, which implies the translation class could be defined elsewhere, but I'm not sure that's worth the extra time to track down.

The other issue is that the Globalize command to automatically create a translation table based on the translates call for a class does not like it if you have non-string columns, such as user_id, in the list of columns to translate. You need to create the translation table using a normal migration. This is even true if you add the user_id column later, after the initial creation: a user running the migrations from scratch after the user_id is added will still have problems.
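For example, a hand-written migration for such a translation table might look roughly like this (the model and column names are hypothetical; the shape mirrors what Globalize's shortcut generates for the string columns, plus the extra integer column):

```ruby
class CreateMyModelTranslations < ActiveRecord::Migration
  def self.up
    create_table :my_model_translations do |t|
      t.references :my_model
      t.string  :locale
      t.text    :text
      t.integer :user_id   # the non-string column the shortcut chokes on
      t.timestamps
    end
  end

  def self.down
    drop_table :my_model_translations
  end
end
```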


Pow

I started using Pow to manage my development server. I like it quite a bit: it gives me what I used to have with the Passenger preference pane, but with support for RVM and less overhead. I recommend the powder gem for a little more command-line support.

Anyway, I needed to debug a Facebook issue, which required me to have my development environment temporarily use the .com address that Facebook knows about in order to use Facebook Connect from my system. But how?

Here's how I did it using Pow:

  • Use the Mac application Gas Mask to change my /etc/hosts file to one where the app's .com domain points to localhost. Gas Mask just makes managing these files easier; you could also edit the file directly

  • Create a symlink in the .pow directory used by Pow named "www.myapp", pointing to my application directory. With Powder you can do this from the command line, in the root of the application, with the command powder link www.myapp. This is fine to do even if you already have another symlink from the .pow directory to the app

  • Create a .powconfig file in your home directory, add the line export POW_DOMAINS=dev,com, and save the file. This tells Pow to try and apply itself to .com requests from your machine. Pow seems to be smart enough to pass through .com requests that don't match its files, so you shouldn't be cutting yourself off from the entire internet.

  • Restart Pow. Unfortunately this is not like restarting Pow for a single dev server; you need to restart the whole Pow system so that the config file is re-read. As of right now, the only way to do this is to go to Activity Monitor, find the Pow process, and quit it from there, or do the command-line equivalent: use ps -ax | grep pow to find the PID and then kill it from the command line. (If I get around to it, a one-command restart would be my pull request for Powder)

And that should be it -- the .com address should be picked up by the Pow server and run in your dev environment, making Facebook Connect happy.

When you are done, comment out the line in .powconfig and restart Pow again. Also change the /etc/hosts file back. You don't need to remove the symlink in the .pow directory, but you can.

That's all I've got today. Hope that helps somebody.

Nebulous, or More iPad Text Editors. Really.

Uncategorized | Noel Rappin | Comment
Hey, guess what, I've got another iPad text editor or two to review.

The thing is... I really like writing on the iPad, with or without the bluetooth keyboard. It's a very lightweight, fun writing machine. But all the editors I've used have flaws that have been making them less than workable for me. I still like them, but I'm getting resigned to their limitations.

For example, iaWriter doesn't see subfolders, meaning that it doesn't work with Scrivener sync directories. Plus, its sync feature is annoying, and likes to recreate files that I have deleted. It also has a charming habit of not remembering the last open file, and choosing which file to start with seemingly at random. Textastic fixed some of its bugs, but typing is still laggy from the bluetooth keyboard, and the actual display of the text is awkward if you are doing text and not code. I've been using PlainText, which has a nice direct sync with Dropbox, but is deliberately not as fully featured as some of the other tools.

I've got a couple of new contenders. One is named Nebulous, which is a criminally low $2 on the App Store right now; the other is Notesy, which is $3. Without much more ado, let's see how they stack up against the functionality that I find useful in an iPad text editor.

Dropbox support:

Nebulous lets you read anything in your Dropbox folder, but it doesn't auto-sync. It does let you open files regardless of type or file extension, which is a big win. When you select a file, it goes into an "auto-saves" list, which is basically local storage -- the "auto" refers to automatically saved locally. You can upload a file as you work on it, or from the list of open auto-saved files. The setup is a little non-intuitive (I think calling the list "local" files would go a long way). Nebulous does, however, seem to automatically notice and update if the Dropbox version has changed. Overall, once you get used to the naming scheme, the sync seems to work pretty well. I'd rather have full auto-sync, but the implementation here seems clear and safe.

Notesy syncs with one folder on Dropbox, your choice, and automatically syncs when you change the active document if you are connected. I think it only sees .txt files. The auto sync is nice, but the folder constraint is a minus. However, Notesy does allow you to search the body of all the files in the folder, where Nebulous is limited to only a list sorted by name.


Keyboard features:

Both programs support TextExpander. Yay. As far as I can tell, Notesy has no other fancy keyboard features.

Nebulous has an upper row of extra characters, similar to Textastic, even down to the fact that the row can scroll, which I'm still not sure about. I am sure, though, that Nebulous adds the fantastic ability to customize the keys in the row, making them effectively one-key shortcuts to common snippets. This includes the ability to create macros that wrap selected text. Wow. Very useful. I can already see how I'd use that to make some killer HTML macros. Extra nice: the macro bar stays on-screen even if you are using the bluetooth keyboard.

Text Settings:

Nebulous has about 10 proportional and 5 monospaced fonts to choose from, lets you select size, and uses the pinch gesture to change font size while typing. There are also some themes, which are kind of pointless.

Notesy has a slightly larger set of fonts and lets you quickly change a single file to the default monospaced font from the variable font and vice versa.


Basic word count style information is more accessible in Notesy. On the other hand, Nebulous lets you preview in HTML or Markdown for any file, regardless of the file extension, which is handy.


Nebulous has pretty much taken over as my workhorse editor. It's more featured than iaWriter, and smoother than Textastic, although Textastic would probably still be my choice if I am actually editing code. I think, though, that it beats out iaWriter and PlainText, at least for the moment, though there are still features of each of those apps that I like. Notesy looks like it will replace Simplenote, since it lets me sync with NotationalVelocity via Dropbox, and Notesy seems faster for just creating a new, short file.

Numbers, Crunched, or Publishing Economics

Uncategorized | Noel Rappin | Comment
So, I've been writing technical books for about ten years. What can I say about that time overall?

Here are two pie charts representing my published books to date. I've thrown in the Lulu version of Rails Test Prescriptions just for the heck of it. Care to guess what the pie charts represent?

All sales
All money

Before I give the answer, please note that there is basically no correlation between the values in each chart. You'd expect that to mean that the two charts are about completely unrelated data, say time spent on each book versus best Amazon rank, or something.

The first chart (with all the green) represents absolute sales, through March 31. The second chart (with all the purple) represents the approximate dollar amount that I've been paid for each book, again, through March 31.

Now, you'd expect the sales and the money heading to the author to be roughly consistent with each other. At least, I would. Let me explain why this isn't so; I hope this will be a kind of interesting glance at the twilight economics of publishing, or why Pragmatic is different from other publishers.

By way of context, I should mention that the absolute total count of all these sales is a number of people that could comfortably fit in Wrigley Field.

You'd think that I'd be able to put my finger on the exact number of sales for each book. That's actually not the case. Most of the sales numbers are approximate in several different and interesting ways:

  • I get quarterly statements for Jython Essentials but they don't include a running total of how many books have sold -- the pie slice here is a stone cold guess.

  • I haven't seen a statement on wxPython in Action since the end of 2009. The statement is the thing the publisher sends that tells you how many books you have sold. I could be wrong, and I have no desire to look it up, but I believe the publisher is contractually obligated to send me this information.

  • I also haven't seen a statement on Pro Rails since the one and only I got right after the book's release in 2008. Not that I'm whining about it. Okay, I'm whining about it.

  • Lulu reported exact, up-to-the-minute sales. Yay. This version was on sale for five months.

  • Sales on Rails Test Prescriptions are obviously incomplete. Pragmatic gives up to the minute details for direct sales via the web site, so right now "incomplete" includes those sales, and an educated guess as to bookstore and Amazon sales in the first month of release. Over the next few months, I'd expect that red slice to get a little bigger.

I'm not 100% sure, but I have reason to believe that the Jython book was somewhat below the average sales for an O'Reilly Animal book at that time, although it's also pretty specialized, so I don't know what their expectations were. There were preliminary discussions about doing a second one that never went anywhere, so that might be a clue.

The wx book sold more or less what Manning expected (they told me their expectations right before launch), maybe a hair more by now. Pro Rails, frankly, was a disappointment (at least to me), although I think my expectations may have been unrealistic.

Similar context on the dollar numbers: the absolute value here, again, isn't all that high given that this is about a decade's worth of writing. You could kind of buy a car with it, but not a "chicks dig the car" kind of car. (I'm not complaining, mind you, I love writing technical books. Just pointing out that I'm not likely to abscond to the Bahamas on my writing income anytime soon.)

The discrepancy between sales and dollars is partially explained by co-authors. For Jython, the author royalties were split 50/50 with a co-author. For wx, I am on the large side of a 66/33 split. For both wx and Pro Rails, this total does not count the 15% that went to an agent. (Which was well earned, by the way, not least because neither book would have happened without the agency.) For the Pragmatic version of Rails Test Prescriptions, this is direct sales from the Pragmatic website only; it's too early to have gotten money from other sales.

The rest of what you are seeing here is that the author's per-book revenue is much greater with the Pragmatic model. Also, you are seeing that the Pro Rails book never earned back its advance, which makes its per-book rate seem unusually high.

I should back up. Most publishers pay authors an advance, which is typically paid in installments, with the last installment tied to the final approval of the book for publication. After publication, the author gets a royalty, which is a percentage of the publisher's net sales, and for tech books is typically around 10%. However, the author does not get any royalty money until after the royalty total exceeds the amount of the advance, which is literally an "advance" against future sales. When a book has reached the point where the author is getting royalties, the book is said to have earned out.
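To make the earn-out mechanics concrete, here's a toy calculation (all the numbers are invented for illustration; this isn't any real contract):

```ruby
# Toy model of the advance/earn-out arithmetic described above.
def author_income(copies_sold, royalty_per_copy, advance)
  royalties = copies_sold * royalty_per_copy
  # The author keeps the advance no matter what; royalty checks only
  # start once accumulated royalties exceed it ("earning out").
  [advance, royalties].max
end

# With a $5,000 advance and a 10% royalty on a $20 net price ($2 a copy),
# the book earns out at copy 2,500.
author_income(1_000, 2.0, 5_000.0)  # => 5000.0 (hasn't earned out)
author_income(4_000, 2.0, 5_000.0)  # => 8000.0 (earned out)
```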

The wx book earned out almost immediately, in part because the advance was relatively low compared to the price of the book. The Jython book, somewhat to my surprise, earned out after about seven years, and I now get about $10 a quarter from O'Reilly. Pro Rails will never earn out, unless the book becomes retro and hip.

Pragmatic works a little differently. They don't pay an advance. Instead, they take their net sales on the book, subtract per-book shipping costs and a couple of other one-time costs, and split the remainder 50/50 with the author. That means a significantly higher per-book payment for the author, and explains why Rails Test Prescriptions is the most money I've been paid for a book.

Here's why self-publishing is viable for technical books: In the Lulu self-publish model, I get 70% of the price of the book. Compare to, say, the Jython book, where I got half of 10% of the net cost, or about 2-3% of the gross sales price. The Lulu book earned me about as much as the Jython book. It's also a fairly quick turnaround, although the major ebook channels are not very good at managing the process if you continually update a book.
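The per-copy gap is stark if you put the two percentages side by side (the cover price here is invented; the percentages are the ones from the text):

```ruby
cover_price = 40.0

# Traditional model: half of a ~10% royalty on net, call it ~2.5% of gross.
traditional_per_copy = cover_price * 0.025  # about $1 a copy

# Lulu self-publishing model: 70% of the price goes to the author.
lulu_per_copy = cover_price * 0.70          # about $28 a copy
```

Roughly a 28x difference per copy, which is how the Lulu book could earn about as much as the Jython book on a small fraction of the sales.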

Here's why publisher-publishing is still viable for technical books: Even with making more money per-book, I still made more total money, and reached a lot more people through traditional publishers. Many things that I have no talent or desire to do -- editing, cover design, marketing, indexing -- were managed by my traditional publishers. (Well, Manning did make us do our own index.)

Testing Advice in Eleven Steps

Uncategorized, testing | Noel Rappin | 2 Comments
As it happens, my generic advice on Rails testing hasn't changed substantially, even though the tools I use on a daily basis have.

  • Any testing tool is better than no testing. Okay, that's glib. You can make an unholy mess in any tool. You can also write valuable tests in any tool. Focus on the valuable part.

  • If you've never tested a Rails application before, I still recommend you start with the out-of-the-box stuff: Test::Unit, even fixtures. Because it's simpler, and there's a very good chance you will be able to get help if you need it.

  • That said, if you are coming into a team that already has a house style to use a different tool, use that one. Again, because you'll be able to get support from those around you.

  • Whatever tool you choose, the important thing is to write a small test, make it pass with a small piece of code, and refactor. Let the code emerge from the tests. If you do that, you are ahead of the game, no matter what tool you are using.

  • At any given moment, the next test has some chance of costing you time in the short term. The problem is it's nearly impossible to tell which tests will cost the time. Play the odds, write the test. Over the long haul, the chance that the tests are really the bottleneck is, in my experience, quite small.

  • If you start with the out of the box test experience, you will likely experience some pain points as you test more and more. That's the time to add new tools, like a mock object package, a factory data package, or a context package. Do it when you have a clear sense that the new complexity will bring value.

  • Some people like the RSpec syntax and, for lack of a better word, culture. Others do not. If you are one of the people who doesn't like it, don't use it. Well, try it once. You never know.

  • I go back and forth on whether Test::Unit and RSpec are actually functionally equivalent, and eventually have decided it doesn't matter. You can write a good test suite in either, and if there is a particular bell or whistle on one of them that attracts you or repels you, go that way.

  • You really should do some kind of full-stack testing, especially once you've gotten good at unit testing. But whether it's the Test::Unit integration testing, the new Capybara syntax, or Steak, or Cucumber, is, again, less important than the idea that you are specifying behavior and automatically verifying that the code matches the specification. Most of what I said about RSpec above also applies to Cucumber.

  • This old joke that was repeated with relish on the XP mailing list circa 2000: "Doctor, it hurts when I do this". "Then don't do it".

  • And last, but not least, buy my book. Or buy Dave's book. Or Kent Beck's book. Or hang out on mailing lists. Ask questions on Twitter. If you want to get better at testing, there are all kinds of resources available.

Cucumber Rails 0.4: The De-Web-Step-ining

Cucumber, Rails, Uncategorized, testing | Noel Rappin | 2 Comments
Consider this part of an occasional series where I attempt to revisit tools discussed in Rails Test Prescriptions that have undergone some revision. (NOTE: Most of this was written before the DHH Twitter-storm about testing this week. For the purposes of this post, I'm choosing to pretend the whole thing didn't happen.)

The cucumber-rails gem released version 0.4 last week, which had some significant changes, and intensified what we might call the opinionated nature of Cucumber about what a Cucumber scenario should look like.

If you update cucumber-rails, you need to re-run the rails generate cucumber:install command to see the new stuff.

There are a couple of minor changes -- the default env.rb file is much simpler, the capybara date selector steps now work with Rails 3, that kind of thing. The biggest change, though, is conceptual, and comes in two parts.

Part one is best laid out by the new first line of the web_steps.rb file:


The header goes on to say that if you make use of these steps you will end up with verbose and brittle cucumber features. Also, your hair will fall out, and you will have seven years bad luck. The last may be more implied than stated.

Why would they do such a thing? And what's the "official" preferred way to use Cucumber now?

Well, it's not like the Cucumber dev team has me on speed-dial or anything like that, but since they subtly included in the web_steps.rb file links to, count 'em, three separate blog posts explaining how to best use Cucumber, I will follow that subtle, yet blazing, trail and try to put it together in some coherent way so that I can understand it.

(Note to Cucumber dev team: if you feel the need to link to this post in future versions of Cucumber, you should consider yourself as having permission to do so....)

Anyway, the Cucumber team is making a very opinionated statement about how to use Cucumber "with the grain", and I actually don't think that statement is "don't use the web_steps file" -- I think that some parts of the web_steps file have a place in the Cucumber world.

Here's the statement as I see it:

  • A Cucumber scenario is an acceptance test.

  • As such, the scenario should completely be in the domain of the user.

  • A Cucumber scenario should not have any reference to implementation details.

  • Implementation details include, but are not limited to: CSS selectors, class names, attribute names, and HTML display text.

As a good rule of thumb, if you are putting something in your Cucumber steps in quotation marks, you should at least think about whether your Cucumber scenario is at a high enough level. In the Cucumber world, the place for implementation-specific details is in the step definition files. If the acceptance criteria changes, the scenario should change, but if the implementation changes, only the step definitions should change.

This sharp separation between the acceptance test and the implementation is a feature, not a bug, in Cucumber. (By the way, you do not want bugs in your cucumbers. Yuck.) The separation is what makes Cucumber a true black-box test of your application, and not a black box riddled with holes.

That said, full-stack testing that is based on knowing implementation details -- which is "integration testing" rather than "acceptance testing" -- is a perfectly valid thing to do, especially in a case where there isn't an external customer that needs or wants to see the acceptance testing. But, if you are actually doing integration testing, then you don't need the extra level of indirection that Cucumber offers -- you should drop down to Steak, or Rails integration tests, or the new Capybara acceptance test DSL or something.

Okay, so. Acceptance testing is not integration testing, and if you are trying to do integration testing via Cucumber, you will be frustrated, because that's not what Cucumber is good at. To me, there's a value in acceptance testing, or in this case, acceptance test driven development, because it's helpful to try and describe the desired system behavior without any implementation details confusing the issue.

Which brings us back to the question of how you actually replace the web steps in your Cucumber scenarios. Essentially the idea is to replace implementation-based steps with steps that describe behavior more generically. You might have something like this:

Scenario: Updating a user profile
Given a user named "Noel" with a preference for "Cool Stuff"
When I go to the edit profile page
And I fill in "bananas" for "Favorite Food"
And I select "Comic Books" from "Preferences"
And I press "Submit"
Then I should see "Bananas"
And I should see "Comic Books"

That's not horrible, because it doesn't have any explicit CSS or code in it, but it's still very much about implementation details, such as the exact starting state of the user, the labels in the form, and the details of the output. On the plus side, the only step definition you'd need to write for this is for the first step; every other step is covered by an existing web step. But... I've written my share of Cucumber scenarios that look like this, and it's not the best way to go. It's hard to tell from this scenario what the most important parts are and what system behavior is actually being described.

The implicit version of the scenario looks more like this:

Scenario: Updating a user profile
Given I am an existing user with a partially completed profile
When I go to edit my profile
And I fill in new preferences
Then I see my new preferences on my profile page

Two questions to answer: why is this better, and how does it work?

The second question first. We need to write step definitions for all these steps. Normally, I write these in terms of the underlying Capybara or Webrat API rather than calling web steps. The second step doesn't need a full definition; it just needs an entry for /edit my profile/ in the paths.rb file (right now, it seems like that's about the only step in the web steps file that the Cucumber team is willing to use), but the other three steps need definitions. Here's what they might look like -- this might have a typo or syntax jumble; it's just the basic idea.

Given /^I am an existing user with a partially completed profile$/ do
  @user = Factory(:user)
  @user.profile = Factory(:profile, :preference => "Cool Stuff",
                          :favorite_food => nil)
end

When /^I fill in new preferences$/ do
  fill_in("Favorite Food", :with => "Bananas")
  select("Comic Books", :from => "Preferences")
end

Then /^I see my new preferences on my profile page$/ do
  with_scope("preference listing") do
    page.should have_selector(selector_for("bananas are my favorite food"))
    page.should have_selector(selector_for("comic books are my preference"))
  end
end

If you are used to Cucumber but haven't used the 0.4 Rails version yet, the last step will look unfamiliar. Bear with me for a second.

Why is the second version better? It's not because it's shorter -- it's a bit longer, although only a bit (the first version would need a step definition for the user step as well). However, the length is split into more manageable chunks. The Cucumber scenario is shorter, and more to the point, each step is more descriptive in terms of what it does and how it fits into the overall scenario. The new step definitions you need to write add a little complexity, but not very much, and my Cucumber experience is that at this size, the complexity of the step definitions is rarely the bottleneck. (For the record, the bottleneck is usually getting the object environment set up, followed by the inevitable point of intersection with implementation details, which is why I'm so keen to try and minimize intersection with the implementation.)

Yes, the scenario is something you could show a non-developer member of the team, but I also think it's easier for coders to comprehend, at least in terms of getting across the goals of the system. And this is supposed to be an acceptance test -- making the goals of the system explicit is the whole point.

Okay, either you believe me at this point or you don't. I suspect that some of you look at the step definitions and say "hey, I could string those seven lines of code together and call it a test all by itself". Again, if that's what works for you, fine. Any full-stack testing is probably better than no full-stack testing. Try it once, though. For me.

Back to the step definitions: the last one uses the selector_for method -- and I hope I'm using it right here, because I haven't gotten a chance to work with it yet, and the docs aren't totally clear to me. The idea behind selector_for is to be analogous to the path_to method, but instead of being a big long case statement that turns a natural language phrase into a path, it's a big long case statement that turns a natural language phrase into a CSS selector. The big long case statement is in a selectors.rb file in the support folder. The with_scope method uses the same big case statement to narrow the statements inside the block to DOM elements within the selected scope.
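If I'm reading the generated file right, support/selectors.rb has the same shape as paths.rb: a helper module wrapping one big case statement. A sketch (the specific entries here are hypothetical, and I may be off on details of the generated file):

```ruby
# Sketch of the support/selectors.rb pattern: natural-language phrases
# map to CSS selectors in one long case statement.
module HtmlSelectorsHelpers
  def selector_for(locator)
    case locator
    when "the page"
      "html > body"
    when "preference listing"
      "#preferences"
    else
      raise %{Can't find mapping from "#{locator}" to a selector.}
    end
  end
end
```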

As with the paths, the idea is to take information that is implementation-specific and likely to be duplicated and quarantine it in one particular location. As I said, I haven't really incorporated this into my Cucumber routine yet; my first thought is that it'll be nice to hide some of the complex CSS selectors I use in view testing, but I worry that the selectors.rb file will become a mess, and that a given selector is less likely to be duplicated than a path is.

I sure wish I had a rousing conclusion to reward you as this post nears the 1750 word mark. I like the direction that these changes are taking Cucumber, they are in line with what I've found to be the best use of the tool. Take a chance and try writing tests as implicitly as you can, as an exercise and see if it works for you.

Um, Hi? My book is out.

Uncategorized | Noel Rappin | 1 Comment
When I say that I'm really bad at self-marketing, one of the things that I mean is that I've left this blog basically dark for almost a month. This was an especially good idea because a) the TextMate post on Feb 10 became the most read post on this site ever by a factor of 5 over the previous three most popular posts (the PeepOpen review, the iaWriter review, and the thing about writing bad code, in case you care), and b) my book actually came out in print during this time.

So, obviously that's a good time for complete radio silence. Sigh.

Book Thing

I should mention that there's going to be a signing/event at Obtiva HQ on Tuesday, March 8th. We're going to be giving away about 25 copies, and if there are more people than that, we'll be selling copies at what I assume will be a discount off the cover price. There will also be food. If you are in the neighborhood, stop by, because otherwise this will be depressing, and I'd really prefer it be fun.


I have an article in this month's PragPub about testing against web services. It's 100% all-new material, none of it is in the book, and I kind of like it, even though I didn't know about the vcr gem that works with webmock to automate creating good test data.

The Book is out

I have a half-written, somewhat indulgent post on Rails Test Prescriptions being out that might go up someday soon. In the meantime, the book is out, a physical object that you can hold, write in the margins, buy from Amazon, and so on.

People ask me how it's selling, and I honestly don't know how to answer, because I don't really have any expectations -- only a few technical books sell what you might think of as objectively a lot of copies. It's sold far better than the Lulu version did, and it either has already outsold or will soon outsell my Wrox book. (I literally have no idea how many copies the Wrox book sold, because the only statement I saw came immediately after the book came out.) The people who have bought it have said nice things, and that's really the best part.

Coming Soon To A Hotel Conference Room Near You

UncategorizedNoel Rappin1 Comment
I have a couple of upcoming conference and training appearances that I don't think that I've mentioned on the blog before.

March 16, I'll be in Salt Lake City for Training Day, the day before the official start of MountainWestRubyConf. I'll be doing a full day of training: the morning will be on Improving Your Ruby Skills, and the afternoon will be Getting Started with TDD in a Legacy Environment. You can get more details, including location and pricing, at the MWRC site. I'm super-excited about this, so if you are planning on being in the neighborhood, please sign up.

The full schedule isn't online yet, but I'll be speaking at Red Dirt RubyConf in Oklahoma City, April 21 and 22. This is a talk about Jasmine and doing TDD in JavaScript from Ruby. I'm hoping to show off a new tool for linking Rails and Jasmine. This looks like it's going to be a great conference.

Also, Obtiva has updated their web site describing training courses that we're offering. Sign up for classes or contact Obtiva for more information on scheduling or coaching opportunities.

Text And Mate

UncategorizedNoel Rappin19 Comments
After a long time bouncing back and forth, I've come back to TextMate as my main editor. I realize that's starting to sound almost old-school these days, but it still works the best for me.

What I've come to realize about TextMate versus, say, Vim or RubyMine is that a) this is a genuinely personal decision, and different people are just more comfortable with some tools than others, and b) it comes down to what each tool makes easy and how useful that is.

For instance, RubyMine. RubyMine makes navigating a project super easy, which is great, since I do that all the time. It also makes refactoring easy, which is less useful because in practice I use the automated refactoring less. Vim makes manipulating text, if not easy, at least powerful, but again, I find myself doing that less. And the thing that Vim makes hard, having to keep track of modes, absolutely drives me crazy.

Anyway, TextMate. TextMate makes creating new snippets and triggers very easy, and doesn't make anything particularly hard. That said, I have seen in some of my pairing around that a lot of Ruby developers don't know about the tools that give TextMate some of the features of RubyMine and Vim. So here are a dozen or so things that you can do to make TextMate work for you.


Install the AckMate plugin. AckMate is a very fast project-wide text search with a nice interface. It's about a gazillion times nicer than TextMate's internal project search.

Ruby Amp

Install the ruby-amp bundle. Ruby-amp gives you a number of features. The one I use most often is a command-; shortcut for auto-completion against symbols in all open files. Very useful. You also get navigation to method, module, or class definitions with a keyboard shortcut.

Search Bundle Commands

Which reminds me: control-command-t is maybe the most important keyboard shortcut to learn. It gives you a pulldown menu of all bundle commands that you can fuzzy-text search. It changed how I work, and opened up a lot more of the power of the various bundles.

Zen Coding

Install the Zen Coding bundle. Zen Coding has a lot of features, but at its core, it's sort of a meta-shortcut for outputting HTML or XML. So you type in something like div#highlight>ul>(li*3>p), press command-e, and out comes fully formed HTML:

<div id="highlight">
    <ul>
        <li>
            <p></p>
        </li>
        <li>
            <p></p>
        </li>
        <li>
            <p></p>
        </li>
    </ul>
</div>

It's wildly awesome anytime you have to write highly structured text, not just HTML, but also XML or DocBook or anything like it.

Column Selection

Holding down the option key when you select text allows you to select text in a rectangle, which TextMate calls "column selection". What can you do with this? Insert a prefix on a bunch of lines. Remove an entire column of a Cucumber table in one action. Remove the colon at the beginning of a list of symbols. And that's just the stuff I did with it today...

Peep Open

PeepOpen costs a few bucks (technically $12), but it's a somewhat cleaner version of TextMate's file dialog, with slightly faster file search. The main downside is that it's still kind of beta-y in that it occasionally hangs.

Make Snippets

Learn to create your own snippets. For example, I use the RSpec let command all the time, but the RSpec bundle doesn't have a snippet for let. Sob. Go to the bundle editor, select the RSpec bundle, choose create snippet from the bottom left, and put let(:$1) { $0 } in the text box. Set the tab trigger to let, optionally set the scope to source.ruby.rspec, and boom, good to go. (The dollar signs indicate places where the cursor goes when you press tab, in numerical order, except that $0 is last. You can also set defaults if you want.) I'm only scratching the surface here, but it's super easy, and it would have been a real challenge to write RTP without defining custom snippets.
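As a sketch of that default-value syntax, here's a hypothetical snippet (not from any shipping bundle) for RSpec's subject:

```
subject { ${1:described_class.new} }$0
```

${1:...} pre-fills the first tab stop with placeholder text you can either keep or overtype; tab again and the cursor lands after the block.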


You can also create commands, which run scripts written in the scripting language of your choice. Like, say, Ruby. You have access to the current file. Also very easy and very helpful. Here's a dumb one I wrote that takes a URL in the text and turns it into a Markdown self-link. Again, you do this in the bundle editor:

#!/usr/bin/env ruby -w
input = STDIN.read.chomp  # TextMate pipes the selected text in on standard input
print "[#{input}](#{input})"

Assign that a key equivalent and a scope of text.html.markdown, and set the input to "Selected Text" and the output to "Replace Selected Text".

Here's a slightly less dumb one that converts soft-wrapped text to hard-wrapped text, indented to whatever level the first selected line is. I use this all the time when I write structured text:

#!/usr/bin/env ruby -wKU

text = STDIN.read  # the selected text, piped in by TextMate
initial_spaces = " " * (text.size - text.lstrip.size)
text = text.strip.gsub(/\s+/, ' ')

result = "#{initial_spaces}"

count = initial_spaces.size
text.split.each do |word|
  count += (word.size + 1)
  if count > 79
    result += "\n#{initial_spaces}"
    count = word.size + initial_spaces.size + 1
  else
    result += " " unless result.size == initial_spaces.size
  end
  result += word
end

print result


Learn About Scopes. TextMate is based on the concept of scopes, which are defined by the language files in each bundle and determine what key commands and bundle items are active at any time. This is what lets TextMate syntax-color multiple languages in the same file: it knows, for example, that the ERB <%= marker makes the code enter Ruby scope, so that Ruby syntax coloring and commands apply. Press control-shift-p at any point to show the scopes in play at that point in the text.

I learned about scopes from the TextMate book. It's a few years old now, and I haven't read it in a while, but it had a lot of good info on TextMate basics.

Random other Bundles

The default Cucumber bundle has a lot of useful stuff like goto step definition from feature, and so on. The git bundle has a useful visualization of git blame. The subtle gradient bundle has some useful grab bag stuff, including aligning ruby code.

Random other commands

Command-shift-v: paste previous contents of the clipboard

Control-t: transpose the two characters on either side of the cursor.

Command-shift-B: select enclosing brackets


Here's a git repo with a jillion TextMate themes. Find one you like. Also, it's easy to customize themes, especially once you get the hang of scopes and language definitions, since you can define text types in the bundle language definitions. See this article for more.

I hope that helps somebody. If you have a tip I don't, leave it as a comment.

Book Review: Among Others by Jo Walton

UncategorizedNoel RappinComment
Among Others is an evocative, subtle, and mostly brilliant fantasy novel on the themes of dealing with loss, growing up, learning to live, and how amazing the new Heinlein novel is. People who grew up inhaling SF and fantasy books are, by and large, going to recognize themselves pretty strongly. Not surprisingly, then, many SF writers who have reviewed the book online have raved. I'll rave too, with some quibbles that we'll get to in a bit.

It's 1979, and Mori is a fifteen-year-old Welsh girl being sent to boarding school in England. We quickly learn that her twin sister died in a car accident about a year earlier, and that Mori has been given to the care of a father she hasn't seen since she was a small child.

We also learn about the fairies. Turns out that Mori and her sister can see fairies across the Welsh countryside. And they can do magic. Mori's sister was killed when the two of them effectively saved the world from their evil witch of a mother.

In the novel's preface (five years earlier), we see Mori and her sister magically destroying a factory by dropping flowers in a pond (we never find out exactly why). They expect something flashy, and in fact believe they have failed, until the next day when they find out the factory has been closed.

Later, Mori explains how magic works, and it's not like it does in books.

It's like if you snapped your fingers and produced a rose but it was because someone on an aeroplane had dropped a rose at just the right time for it to land in your hand. There was a real person and a real aeroplane and a real rose, but that doesn't mean the reason you have the rose in your hand isn't because you did the magic.

[...] If it's like books at all, it's more like The Lathe of Heaven than anything. We thought the Phurnacite would crumble to ruins before our eyes when in fact the decisions to close it were taken in London weeks before, except they wouldn't have been if we hadn't dropped those flowers...It always works through things in the real world and it's always deniable.

It's hard for me to overstate how brilliant I think that paragraph is. It's evocative, describes magic that is genuinely uncanny and weird, and also has a lot of depth in the way it affects the story. Plus it has an SF reference, like nearly everything else Mori does.

There are many things I loved about this novel. Mori reacts to her situation and its loneliness by reading a lot of SF and fantasy, which she namechecks, reviews, and comments on continually. This part is a lot of fun, especially if you've read most of the books, and it's not a bad guide to future reading.

She's smart, sensitive, and clearly trying to figure out how to go about living after you've saved the world among a group of people who don't know and wouldn't believe you anyway. Eventually Mori works some magic, then spends a lot of time wondering exactly how disruptive she's been and the ethics thereof -- there's a lot of free-floating dread and eeriness because of the way magic works.

Walton deliberately subverts your expectations of plot in this book, not least by starting the book a year after the world-saving part, which is conventionally, you know, sort of the climax of these kinds of books. There are other cases, too, where we're led to believe that there might be some kind of conventional fantasy menace happening, only to have reality more or less take the air out of it. (That said, the ending, at least in terms of where Mori winds up, is quite satisfying.)

Ironically, given that I'm writing this, I wish I had read fewer reviews of the book -- I think it messed with my expectations a little (in particular, I expected things to be tied up more cleanly at the end). What you have is a very smart, somewhat nostalgic look at 1979 without a whole lot of conventional plot, and I think that your reaction to the book may depend on how willing you are to identify with the way in which Mori uses books. (In a weird way, the book reminded me of Almost Famous, in that both are about immersing yourself in a particular time and place as a fan of something -- the kid in the movie gets to live with his heroes, while Mori gets to actually live in a fantasy world. I'm way more interested in late-70s SF fandom, though, than mid-70s rock fandom.)

Much like Walton's Lifelode (also highly recommended), about a third of the way through this book I realized that I wasn't sure where the author was going, but I was enjoying the characters so much that I didn't care. I still felt that way at the end, but wished there was just a little more meat to the narrative. Still, I loved this book, and if you looked to SF and fantasy as a teenager as a way to go to places that were amazing, you'll probably love it too.

Rails Test Prescriptions is at the printer

UncategorizedNoel Rappin1 Comment
I suppose I should get this on the blog...

Rails Test Prescriptions was sent to the printer yesterday, actually a couple of days ahead of the schedule that we've been on through the last stages of production.

Here are the dates, as I understand them...

  • The book is scheduled to leave the printer on Thursday, Feb 17, headed for bookstores and warehouses. I'd expect that you would probably see it in bookstores early the following week. If you've ordered via Amazon,

  • I believe the production version of the ebook will become available a couple of days earlier, on the 14th or 15th

  • I think that's the same time that Pragmatic will take the book out of beta on the order page, meaning that you can buy the physical book by itself.

This has been a great experience. I know there are a few of you who have been waiting to order the book until its official debut -- I hope you like it.

How I became a Haml-tonian

UncategorizedNoel RappinComment
I mentioned on Twitter the other day that I was starting to like using Haml and it was surprising me because I used to dislike it so much. Colin Harris asked me to elaborate, but Twitter was too short, so here goes.

I assume that most people reading this have some idea of what Haml is. If you don't: it's an ERb replacement for view templating which uses Python-style indentation for blocks, and a couple of other tricks for simplifying the amount of markup you need to write in your view. Here's a sample from the Haml tutorial, which gives you the flavor:

#date= print_date
#address= current_user.address

I don't really intend to turn this into its own mini-tutorial, but suffice it to say that "=" indicates Ruby execution, and the "." and "#" allow you to specify CSS selectors (with a div tag being implied), with indentation indicating the extent of the outer tag.
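To show the shorthand and the indentation together, here's a small made-up fragment (the selector and method names are hypothetical, not from any real app):

```
.sidebar
  #user-info
    %h2= current_user.name
    - if current_user.admin?
      %span.badge Admin
```

The outer two lines become nested divs with a class and an id, "%h2" names an explicit tag, and "-" runs Ruby without output -- note that the if block needs no end, because the indentation closes it.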

So, how did I go from really hating Haml to mostly liking it? Well, here's how I came to dislike it.

My first exposure to Haml in use came on a legacy project that I was asked to take over. The project seemed to have been worked on by two people who rarely spoke, as evidenced by, among other things, the fact that the view code was half ERb, half Haml. And the Haml seemed to be tied to some of the ugliest code in the system.

As a longtime Python programmer, I have no problem with using whitespace to mark blocks. In fact, I kind of like it. However, you have to acknowledge that the side-effect of whitespace blocks is to keep your blocks short, and try not to let them get too deep. That's part of the point -- the structure of the code is supposed to guide you to keeping the code concise and well factored. If you don't pay attention, then the whitespace code -- Python or Haml -- becomes very hard to follow and maintain.

I'm sure you see where this is going. This first Haml project had long files, with deep nesting. It also, I realize now, wasn't particularly good at using Haml's strengths in specifying DOM elements. Every time I needed to touch that code it was a bad experience, and I got very averse to seeing Haml.

I also had a couple of other issues. Among my coding quirks, I tend to be aggressive about keeping line lengths under 80-90, and Haml 2.x didn't allow you to insert line breaks in executing Ruby code, which led to unavoidably long lines in some cases. (I realize that the point was to keep you from having long Ruby calls, but sometimes, especially in that code base, it was unavoidable.) To a much lesser extent, I was also not attracted to the "If you don't like Haml you have no poetry in your soul" tone of the Haml website. I've come to terms with that, too.

So what happened?

A couple of things. Haml 3.0 made it possible to break lines in Ruby code, and that was enough to get me to look at Haml again. The addition of SCSS over SASS for writing CSS helped a lot -- I love SCSS and never really got into the SASS syntax. I started working on projects that used Haml and were smarter about using CSS, making the Haml's easy creation of complex CSS selectors more attractive. I saw better examples of Haml code that showed it in a better light.

And one day, working on a totally different project that also happened to be half Haml, half ERb, I noticed that the ERb started to feel clunky and heavyweight. The Haml started to feel disturbingly natural.

I started to prefer Haml, using it as the default on new projects.

What do I like about it?

  • It's concise, but after you get used to it, completely readable (again, when written well). It did take me a little while to get used to it.

  • Haml makes it much more difficult to write invalid markup, effectively eliminating a source of error.

  • Haml encourages treating markup like code and writing it in small pieces.

  • Haml implies the use of SCSS, which is pretty unambiguously awesome.

  • As Brian Hogan pointed out on Twitter, Haml makes it easier to restructure the hierarchy of your view output because it's much easier to change the relative placement of tags.

It's not perfect. Like all tools that assume an HTML structure, it finds unstructured text challenging. Ruby blocks still look strange without endings. Other than that, I'm happy making Haml my default view layer. And I'm glad I got over it enough to give it another look.

Quick Rails Test Prescriptions Update

UncategorizedNoel Rappin1 Comment
It's been quiet on the Rails Test Prescriptions front. Those of you on the beta program should have gotten Beta 11 earlier this week. There are no major changes in this beta, but it does contain the final copyedit, a pass through the errata, and a couple of late-breaking reviewer comments.

At the moment, the book is being typeset, which means that non-typesetter changes to the source files are definitely contraindicated.

My understanding, which is guaranteed to be somewhat incorrect, is that the typesetter will be done early next week. At that point, we'll have a couple of days to make sure everything looks okay, and then it'll actually go to the printer. Once it goes to the printer, I think it will go out of beta for the purposes of buying the book. I expect to have more concrete dates once it's actually at the printer.

I'm not saying I'm excited about this finally being a physical book, but my first commit to the original, self-published repo was November 7th, 2008.

I Feel Textastic

iPadNoel RappinComment
So, back in the summer when I started my bizarre quest to edit my book on the iPad, I had three requirements.

  • Be able to read the book files from Dropbox

  • Support for editing HTML/XML files, syntax coloring, extra keys, something like that.

  • TextExpander integration to make it easier to type the markup tags.

It quickly became clear that I was the only person in the world looking for this exact set of features.
There wasn't any editor that met all those requirements, so I bought a bunch of other editors and obsessively reviewed them on this very site. Naturally, because sometimes the universe loves irony, a new editor meeting those requirements was released a mere week after I turned in the final draft. It's called Textastic, and I'm typing in it right now, though I'll probably edit this in MarsEdit later.

Summary: Textastic is nice. It's got some features that I really like and an overall feeling of being polished. I think it would work fine for basic editing on a code file, though it's obviously not TextMate or Vim. As a writing/blogging tool, it's close, but I think it's a point release or two away from being fully baked -- it's about 80% baked at the moment.

Now here's the part where I give enough details to let you know whether to spend your money. Textastic is $9.99, which isn't a lot in the absolute sense, but is more than some of the other tools that I'd be comparing it to.

I'm noticing a kind of conceptual split among the cloud/Dropbox text editors. Some, like PlainText, try to make it feel like you are actually editing the file on the remote server, primarily by automatically saving the file to the cloud. Others, like iaWriter, clearly want you to feel like you are editing on the iPad, only backing up to Dropbox when you want. Textastic is clearly in the latter group.

Pressing a globe-shaped icon from the main display flips the text edit pane over to reveal a dual-pane file browser. Local files on the left, remote files on the right. Remote files can come from Dropbox, FTP, or WebDAV, and you can download them individually or folder-by-folder. The file structure on the iPad does not need to match the file structure remotely, which is nice. Even better, Textastic keeps track of the remote source of each file. When you are editing the file, the standard-looking export icon allows you to easily upload or update the file being worked on, making simple what might otherwise be a pain. It's easy to download, say, an entire Rails project at once, but you are going to want to delete a lot of directories -- Textastic will even grab the Git subdirectory.

Actual file editing is largely what you'd expect. Easily reachable and usable settings allow you to choose from five monospaced fonts, set the font size, and turn auto-correct and TextExpander on or off. You can also pinch in the text edit field to zoom, effectively increasing or decreasing the font size. This is more effective than I would have thought, although it seems to have an unpredictable relationship with the font size as set in the settings dialog.

Textastic has syntax coloring for just about anything you'll want, and even better, allows you to specify the syntax for each file. There's a find and find-and-replace dialog, and line and column position is discreetly displayed in the upper right. The soft keyboard has an extra set of keys containing programmer specialty punctuation, including parens, brackets, braces, and greater-than less-than, making HTML typing possible. HTML and Markdown can be previewed in the app.

There are a lot of features here, and they are nicely tied together. I'm finding Textastic decent for tweaking existing code files, and I think I would have been happy to edit or write Prag book files (which are essentially XML) in it, especially with a little TextExpander love.

Using it as a blogging tool, though, exposes some weaknesses that will hopefully be addressed in point releases -- there are a couple of bugs around auto-correct, and some typographic issues with spacing when it's wrapping long lines. Both of those are kind of minor. One thing that is weird is that the text field is noticeably sluggish. Typing on the Bluetooth keyboard, I can easily get a word or two ahead of the display, and I can even get a little ahead of it on the soft keyboard -- I assume that the calculations for syntax coloring are slowing it down. This can be frustrating when combined with the auto-correct bugs.

Still, those are minor quibbles about what is overall a useful app. As an iPad code editor, it's the best I've seen, and it's very close to PlainText or iaWriter as a quick typing app. I'm looking forward to seeing where this one goes.

A tribute to the humble page number

eBooksNoel Rappin1 Comment
I've been doing a lot more reading on eBooks since I got the iPad. For the most part, I really like it.
I've been using three different eReader programs: Apple's iBooks, the Amazon Kindle, and the Barnes & Noble Nook. You'd think that an eReader would be basically similar between apps, but there are definitely differences in how the apps feel.

Here's one little example: how does each reader app mark your progress in the book? Physical books have this nice technology called the "page number", which, when coupled with "seeing how thick the book is on both the left and right side", gives you a good sense of how much you have to go before you need to find another book to read.

EBooks, of course, don't have pages or thickness (though, for some reason, when I hold my iPad as a reader, I put my finger between the iPad and the Apple case flap, as though I were folding over a paperback book). Each eReader that I use has a different metaphor for progress, and I'm not sure which I like best. It fascinates me that three different teams looked at this issue, which would not seem from the outside to be that complicated, and came up with three different answers.

The Kindle uses what they call "locations", which are basically virtual page numbers placed at (I assume) regular intervals in the text. The Kindle tells you what location you are at (well, it tells you which locations are currently displayed on the screen -- there is usually more than one at a time), and it tells you what percentage of the book you have completed.


There are a couple of advantages to this setup. The biggest is that the locations are stable if you change the font size, so location "2345" always refers to the same place in the book. Because of that, I assume you could actually use a Kindle location in a citation, if you were so inclined. (Okay, you probably don't care. But if you were a student, being able to cite based on an eBook seems increasingly important...) The percentage progress through the book is a reasonably user-friendly way to go. The downside, obviously, is that nobody knows what the hell a location is, so saying that you are on "location 2345" has relatively little actual meaning.
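As a toy model of why the two numbers behave so differently, you can think of a location as a fixed-size slice of the book's bytes, while the percentage is just position over length. The 128-byte interval below is my guess -- Amazon doesn't document the real one:

```ruby
# Hypothetical sketch of Kindle-style progress tracking.
# BYTES_PER_LOCATION is an assumption; the real interval is undocumented.
# The point: a location depends only on the text itself, never on font
# size or screen dimensions, which is why it's stable (and citable).
BYTES_PER_LOCATION = 128

def location_of(byte_offset)
  (byte_offset / BYTES_PER_LOCATION) + 1
end

def percent_complete(byte_offset, book_size_in_bytes)
  (byte_offset * 100.0 / book_size_in_bytes).round
end

puts location_of(300_000)                # => 2344
puts percent_complete(300_000, 500_000)  # => 60
```

A page number in the iBooks style would instead divide by however much text fits on the current screen, which changes with every font tweak -- hence the recalculation.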

Apple's iBooks app went a different way. iBooks takes the nuts-and-bolts approach that a screen's worth of text is a page and will tell you that you are on "page 45 of 456", along with a set of dots that shows progress through the book.


The nice part about this approach is that it is concrete and directly related to the device you are using: you know exactly how many screen taps remain. iBooks also tells you how many pages are left in the current chapter, which is a genuinely useful piece of information. The problem is the flipside of the Kindle setup -- the page numbers are dependent on font size and device screen. If you change the font, iBooks recalculates the page you are on. (It also takes a noticeable amount of time to calculate when you open a new book.) You can't use the page number to reference a permanent part of the book. Still, for casual reading, it's hard to deny that the page number iBooks presents is the most relevant and useful information. Strictly aesthetically, though, I don't love the row of dots; it doesn't read as a progress indicator quite as easily as the indicators in the other two apps.

The Barnes & Noble Nook app does something different still. It presents page numbers based, I think, on the pagination of the print book. In other words -- and I realize it's slightly insane to define a page number based on something else -- it's essentially the Kindle location, but with location markers placed at the start of each print page.


I can't decide whether I think that's the best of both worlds or the worst -- I like the look of it best, though. Like the Kindle's, the Nook's locations are permanent, and therefore citable. There's a certain kind of familiarity in keeping page numbers in a familiar range. That said, it's genuinely a little weird to turn the virtual Nook page and not see the number change -- the first time I noticed it, I thought it was a bug.

So that's one tiny little usability decision that turns out to be complicated enough to have three separate answers, and even having used all three, I'm not sure which one I like best. Superficially, I like the iBooks approach, but I also think that a permanent location marker is needed. As usual, it leaves me with respect for how complicated even easy decisions get once users are involved.

Rails Test Prescriptions Out Of Edit

UncategorizedNoel Rappin2 Comments
Very quick status update:

Rails Test Prescriptions is out of copyedit. It should head for typesetting on Monday, for a probable ship date in mid-February.

Right now, we're in the phase where I go over the copyedit and whine about things. Actually, this copyedit has been pretty clean, probably the cleanest I've ever had.

By way of contrast, when I did the wx book, the copyeditor did not realize that "Python" and "wxPython" were two different things, and decided to unilaterally change all instances of "Python" to "wxPython", apparently in the belief that I had made the same mistake 1500 times.

Though if there's one thing that copyediting proves, it's that I am capable of making the same mistake over and over again. Here are some things that keep coming up:

  • The copyeditor changed all instances of "plugin" to "plug-in", which may be the dictionary usage, but isn't the common Rails community usage. I've changed them all back, except that I changed a bunch of them to "gem".

  • Apparently I add extra commas to sentences a lot. In my head, I tend to think of a comma as indicating a pause when I read the sentence aloud, which leads me to put commas before words like "and" and "or" in cases where the structure of the sentence doesn't require a comma. I'm not arguing about any of these.

  • There's a pretty consistent change where I'll write something like "this only happens when" and it's changed to "this happens only when". I'm assured that the second form is correct, but it still sounds weird to me.

  • My probably-annoying tic of putting a) outline notation in b) a sentence was removed. I get why, and I didn't put them back, but I still like the slightly staccato rhythm and extra emphasis from adding those notations to a list. In a couple of cases I rewrote the sentence to get a similar emphasis a different way.

  • One surprise for anybody who has been following this book since the beginning is that we finally seem to have flushed out all the its/it's mistakes, or at least the copyeditor doesn't seem to be finding new ones. I consider that a triumph -- this was a mistake I made A LOT -- aided and abetted by a number of people who made corrections early on.

It's an open question how much of this makes a difference to a reader. I like to think that making each individual sentence clean and consistent lowers the amount of friction between the text and the reader's brain, but then, as an author I would say something like that.

Mock Me, Amadeus

UncategorizedNoel RappinComment
Nick Gauthier's post about mock testing got some attention, and I was kind of opining about it on Twitter, but it's well over 140 characters' worth, so lucky you, I've decided to take it to the blog. (Do read Nick's post, and then the comments, with Nick and Dave Chelimsky and others discussing the topic.)

I want to back up a bit from the "Mock testing is bad for you and hurts puppies" versus the "Mock testing will make your breath fresh and cause you to lose weight" split argument. (It's possible I might be oversimplifying the two sides of the argument). I've used mocks badly and I stopped using them for a while. I want to discuss how I've started to use mocks with what feels like effectiveness. I'm not all that confident that it's the best way -- I expect to have changed my mind again in six months -- but it's working for me and I like it.

Take a fairly typical Rails feature, maybe a list that's filtered based on certain criteria. I think we can have this discussion without specifying exactly what the criteria are; let's stay abstract.

From a user perspective, the only valid test is the acceptance test, which is "when I submit this form describing my filter, I see the objects that I expect".

From a developer's perspective, though, that's not enough information to fully test or code the feature. Specifically, it's too big a chunk of code to do in one TDD step, so we need to break it into smaller pieces that we can write developer tests against.

The TDD process is supposed to ensure that any change in program logic is driven by a failed test. In this particular case, we'll most likely be changing the program logic in two places.

  • The model object or objects being discussed need to change to take in parameters from the user request and return the expected set of objects.

  • The controller that is the target of the user request needs to convert the Rails request parameters to some kind of call to the model object.

I'm assuming a typical thin-controller Rails structure, where the controller basically makes a single call to a model that does most of the work.
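A bare sketch of that thin-controller shape, in plain Ruby rather than actual Rails, with invented names (`WidgetsController`, `Widget.matching`) standing in for whatever your app calls things:

```ruby
# Hypothetical model with a single filtering entry point.
class Widget
  def self.matching(term)
    ["red widget", "blue widget"].select { |name| name.include?(term.to_s) }
  end
end

# The controller does exactly one thing: hand the request
# parameter to the model and expose the result to the view.
class WidgetsController
  attr_reader :widgets

  def index(params)
    @widgets = Widget.matching(params[:filter])
  end
end
```

The one-call shape is what makes the conduit-style test below even possible; a controller that did real work of its own would be much harder to isolate.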

We're getting to mock objects, promise.

Okay, we've written a Cucumber acceptance test and it's failing, so we need to start working on the actual application. It doesn't matter technically whether we do the model or controller first, but for the purposes of storytelling, let's talk about the model first.

The model testing is straightforward. We have some set of cases, usually the happy-path case, plus exceptional cases like blank input. Maybe we'll test for case sensitivity or partial matches, or whatever the business logic requires.

For our purposes here, the main point is that we don't mock the calls within the model layer. The way I do it, even if this filter spans multiple model objects, I don't use mock objects within the model layer, for most of the reasons that Nick lays out in his post.

You might argue that the database itself is a different layer than the model layer, and you could mock the actual calls to the database itself so that your tests don't have the overhead of dealing with the database. I think that's a useful line of argument in some web frameworks, but in Rails I generally haven't found that helpful, except maybe for pure speed benefits. I do, however, try to write as many tests as I can without actually saving objects to the database.
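Model tests in that style hit the real logic directly, no mocks anywhere. A sketch of what that looks like, using stdlib Minitest to stay self-contained (`Product` and `filter_by_name` are invented for illustration, not from the post):

```ruby
require "minitest/autorun"

# Hypothetical model with the filtering logic under test.
class Product
  ITEMS = ["red shirt", "blue shirt", "red hat"]

  def self.filter_by_name(term)
    return [] if term.nil? || term.strip.empty?
    ITEMS.select { |name| name.include?(term) }
  end
end

# Happy path plus an exceptional case (blank input), with the
# real model code running -- nothing here is stubbed or mocked.
class ProductFilterTest < Minitest::Test
  def test_happy_path_returns_matches
    assert_equal ["red shirt", "red hat"], Product.filter_by_name("red")
  end

  def test_blank_input_returns_nothing
    assert_equal [], Product.filter_by_name("   ")
  end
end
```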

Okay, model tested. That brings us to the changes to the controller. (To keep this a little less complicated, I'm leaving the view test out of it; these days I tend to let Cucumber stand as my view testing, on the theory that if there's view logic that doesn't present an acceptance-testable change to the user, then it probably shouldn't be view logic.) So, what is the behavior of the controller that needs to be tested and built?

There are two different ways of describing the controller's behavior, each with a different implication for specifying that behavior. I'm not saying that one is right or one is wrong, but I do think that the kinds of other tests you are writing make one plan more useful than the other.

  • The controller's job is to take the user's input and present to the view the expected set of objects.

  • The controller's job is to take the user's input and send it to the model.

The difference, I guess, is between a conductor and a conduit -- in both cases, the controller dispatches the work elsewhere, but in the conductor view, the controller object has more perceived responsibility for the behavior of the system.

Or, to put this another way, if the model behavior is not correct, should the controller test also fail? Thinking of the controller in the first way, the answer is yes, the controller's job is to deliver objects to the view, and if the calls are incorrect, then the controller has also failed. In the narrower view of the controller's responsibilities, the controller's job is just to call the model layer. Whether or not the model is correct is somebody else's problem.

Before I started using Cucumber regularly, I tended toward the conductor metaphor, but when I also have Cucumber tests, controller tests written like that feel very redundant with the acceptance test. So now I'm more likely to use the conduit metaphor and just test that the controller has the expected interaction with the model.

Which means mocks. And that's largely how I use mocks within my application -- to prevent the controller layer tests from directly interacting with model code. (And I use mocks to avoid dealing with an external library, but that is a different story.)
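Here's what a conduit-style controller test can look like. The post is RSpec-era, but to keep this sketch dependency-free I'm using stdlib Minitest::Mock, and all the names (`SearchController`, `filter_by_name`) are invented. The only thing verified is that the controller makes the expected call with the expected argument -- the real model code never executes:

```ruby
require "minitest/mock"

# Hypothetical controller; the model is injected so the
# test can hand in a mock instead of the real class.
class SearchController
  attr_reader :results

  def initialize(model)
    @model = model
  end

  def index(params)
    @results = @model.filter_by_name(params[:filter])
  end
end

# Expect exactly one call: filter_by_name("red"), returning a stand-in.
mock_model = Minitest::Mock.new
mock_model.expect(:filter_by_name, [:fake_result], ["red"])

controller = SearchController.new(mock_model)
controller.index(filter: "red")

mock_model.verify  # raises if the expected call never happened
```

If the model's filtering logic were broken, this test would still pass -- which is exactly the trade-off discussed below.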

There are two potential problems with using mocks like this. First, the mocked controller test doesn't fail if the model is wrong or the model's API changes. If the mocked controller test is part of a larger test suite, though, this isn't an issue. The controller test won't fail, but something else will, either a model test or an integration test. You might even argue that limiting the number of failing tests from one code problem makes the issue easier to diagnose. (You might also argue that the Cucumber test makes the controller test irrelevant. I don't agree, but you might argue it.) So this issue doesn't bother me much.

The second thing to look out for is the inverse -- the mock test will fail if the API to the model changes, even if the actual behavior at the end hasn't changed. I have more trouble with this one, but it does tend to work out (while admitting that it has been annoying when I've gotten sloppy and used mocks to cover an uglier API). Assuming that the model API is reasonable, and that the controller's mission in life is to call a particular API, then if that API changes, in some sense the controller's behavior is changing and that should trigger a failed test.

However, in order to prevent this kind of problem, you really do need to use your tests to drive code structure and the clean API between the controller and the model. If the test mocking seems like it's becoming a burden, then that indicates that the code is not properly factored.

I'm not yet prepared to defend that all the way to the death; as I've said, I have been bitten by cases where mocks made the tests more brittle by exposing the internals of the model to the controller in more detail than necessary. Almost always, that happens on legacy systems that were written without tests and might not have a clean separation between model and view concerns. I have much, much less trouble with newer code that has a TDD-accented modular design.

You can mess yourself up badly with mocks. I for sure have, and it kept me away from RSpec for a long time. I've gotten better at using them, I think, and have started using them more frequently, but pretty much as I've described here. It's what works for me this week.

The Eternal Battle of the Keyboard and the Mouse, A Sidenote

UncategorizedNoel RappinComment
This week I did Obtiva's weekly Geekfest with a presentation/book report on the Making Software book. I'll write more about that book later; right now I want to expand briefly on something I referenced at the end, because it's one of my favorite tech/UI examples to talk about.

We were discussing the relationship between an expert's gut feeling about what works and what can and can't be shown using empirical evidence. Specifically, the difference, even for experts, between our perception of our effectiveness and our actual effectiveness.

I referenced something from Bruce Tognazzini's book Tog on Interface about how... well, here's the exact quote, because I didn't get it completely right from memory. This was originally written in August 1989 -- Tog is talking about the UI guidelines for the original Mac models.

We've done a cool $50 million of R&D on the Apple Human Interface. We discovered, among other things, two pertinent facts:

  • Test subjects consistently report that keyboarding is faster than mousing.

  • The stopwatch consistently proves that mousing is faster than keyboarding.

This contradiction between user-experience and reality apparently forms the basis for many user/developers' belief that the keyboard is faster.

One thing I love about mentioning this example to a room full of developers is that everybody leaps in with "But I use Vim, and my keyboard shortcuts really are faster..." Sure. You are a unique snowflake.

To be fair, Tog is talking about users new to the mouse, though that hardly weakens his point, and he's mostly talking about common tasks on the order of cut, paste, and save, not the kind of super-powered keyboard stuff that Vim users use (or, you know, TextMate users; it's not like I ignore keyboard shortcuts).

That said, the basic argument -- that the time spent trying to remember the keyboard shortcut is cognitively engaging, while the time spent on the mouse is boring and thus seems longer -- still sounds plausible even for Vim users. Frankly, I don't know what similar studies would show today. I'm not even sure you could design a similar study about Vim, since Vim expertise is such a huge component of effectiveness.

At Geekfest, I combined details of this with a different study about whether the Mac's top-of-the-screen menu bar was faster than the Windows-style in-window menu bar. (Hint: it is. Still.)

The point is the Tog book is great. Really dated now, but still a lot of fun to read. The other point is that it's worth checking your assumptions every now and then to make sure they are valid.

Tune in next week for more on Making Software and the whole project of empirically studying software development.

Me Break Weekly

UncategorizedNoel Rappin1 Comment
In the interests of full disclosure, I probably should start by saying that, technically, Rails Test Prescriptions was not actually featured on this week's MacBreak Weekly.

But it is a lot more fun to say that it was...

Here's the whole sordid, somewhat self-indulgent story:

I've been a regular listener to the MacBreak Weekly podcast almost since it started, and recently I've taken to listening to the show while walking to the train but switching to the video version when I get on the train.

This week, MBW host Leo Laporte chose the Ruby Pickaxe book as his pick of the week, calling attention to the $10 sale that was just ending. Leo shows the Pragmatic website on the screen and talks about how Pragmatic has the best programming books. Right as he does that, he scrolls down the Pragmatic home page, and by total luck, Rails Test Prescriptions is right there in the random carousel of books on the home page. Yay!


Hey, I know it's not a real mention or anything like that, and that I'm probably the only person in the world who even noticed. But it made me smile sitting on the train riding home this week. I'll take that.

Rails Test Prescriptions Status Update

UncategorizedNoel RappinComment
We've gone through the technical comments for Rails Test Prescriptions and I've made the substantial changes that I plan to make. At this point, we are about to enter the production process; we just missed getting it in before the Thanksgiving holiday, so I'd expect this to start next week.

I'd also expect beta 10 to come in next week, hopefully Monday, maybe Tuesday.

As for the timeline on the rest, I'm not 100% sure. I think that ordinarily Pragmatic expects indexing and copy edit to take 2-3 weeks, but I'm not sure how the holidays and vacation will affect that. I'm still expecting the final print version to be available sometime in January.

November 15: Getting Closer

UncategorizedNoel RappinComment
So. Um. I really didn't intend to be off the blog for quite this long. But, well, one thing leads to another, and sometimes the easiest thing in the world to do is Not Blog.

Some things that are going on:

Rails Test Prescriptions is through its full technical review. Comments were mostly positive, with a nice dusting of "why didn't I think of that", and "Oops", and "I should fix that", plus the occasional "We're just going to have to agree to disagree".

As far as timing goes, the plan is to respond to all outstanding comments and give the thing a good read-thru by this Friday. I'd expect that we'll put out a beta at that point (for one thing, the factories chapter has been updated to factory_girl 2, and I'd like to get that out). After that, I'm not sure how long the copyedit part of the process will take. The Pragmatic web site has mid-January as the publish date at the moment, while Amazon is about a week or two more pessimistic.

In other news, loosely defined, Scrivener 2 is out with handy sync to external folder support, meaning I can easily round-trip between Scrivener on the Mac and, say, PlainText on the iPad. Nifty, and I can't wait for iaWriter to support sub-folders so I can round-trip with that, too. Also, I'd love it if MarsEdit had the same kind of round-trip sync, not that anybody is asking.

In a slightly related story, I'm doing a kind of half-assed NaNoWriMo, half-assed because I haven't been able to do it anywhere near every day. But I'm picking up something I wrote and got pretty far with a few years ago before stopping, and I really would like to get it all the way to The End.

I also just obtained the new Making Software book, which is a collection of empirical software engineering research. I'd expect to see a blog post or two on that; based on what I've read so far, I think I'll have opinions on this.