Noel Rappin Writes Here

Master Space And Time Status Update

Self Publishing, mstjs · Noel Rappin

Here’s a quick status update on Master Space and Time With JavaScript, book 3…

Short version: expect an early beta of about 1/3 of the book to be out in about a week.

Longer version:

Work is proceeding steadily; the current draft just hit 30 pages toward a target of around 90, though about 10 of that is the same intro and outro that the other books have, so it’s more like 20 pages toward a target of 80.

The structure of Book 3 is one chapter on replacing the existing app homepage with a very simple Backbone clone, then another chapter building a different page using Backbone that has more events and more complexity, then a third chapter talking about communication between Backbone and the server, and anything else that comes up. The exact division between the second and third chapters may change.

I will release the first beta when the first chapter is draft complete — this will be rougher than the other books because it hasn’t gone through rounds of review and editing, but I like how it’s going. I also need to clear through reported errata on the first two books, and probably clean up the landing page in expectation of doing another marketing push.

Thanks for being patient and for your great feedback.

Oh, and you can still buy it!

Depending on jQuery and Perspective

JavaScript · Noel Rappin

The reported errata for Master Space and Time With JavaScript (buy it here) have been pretty light so far. A bunch of typos, some setup weirdness.

And one interesting issue worth exploring. What is a dependency, and maybe more to the point, where is a dependency?

This issue was raised by a reviewer whose name I’m not going to mention — that’s not a reflection on the reviewer, but rather a reflection on the fact that I’m going to put words in his mouth to expand on his brief comment on the issue, so my interpretation of his position may not actually be his position.

Anyway, the reviewer had a comment about when to convert to and from jQuery objects. He raised it in the context of the autocomplete example from the please-go-pay-for-it Book 2, but it also applies to the toggler example in the just-for-free Book 1, and really, to any case where a JavaScript object holds a jQuery object as part of its data.

Here’s the deal. I have an object — in this case it’s my autocomplete selector, but I’ll pretend it’s a generic widget because the exact object doesn’t matter — which is being initialized via a function call like so.

    widget = new Widget({parentSelector: "#autodiv"});

The key point here is the parentSelector: "#autodiv" option. In the eventual constructor, that #autodiv is immediately converted to a jQuery object. (The example in the book is more elaborate; I’m simplifying to focus on the issue at hand…)

    var Widget = function(options) {
        this.$domParent = $(options.parentSelector);
    };

The reviewer’s point was that he’d rather convert the selector to jQuery in the call to Widget and pass the argument to Widget already converted to a jQuery object, rather than have the Widget constructor do the conversion:

    widget = new Widget({domParent: $("#autodiv")});

    var Widget = function(options) {
        this.$domParent = options.domParent;
    };

I’m not convinced yet that one way is better than the other — I haven’t changed the book code, but I certainly respect this reviewer’s opinion on how to structure JavaScript. But I do think that the two options make different assumptions about the structure of the application that are worth teasing out.

In my structure, jQuery is treated as an implementation detail of the Widget. Since jQuery is an implementation detail, outside code wouldn’t be expected to know about it, so outside code communicates to the Widget via a selector. Now, I grant that there’s a little bit of fiction there, since the selector syntax is (somewhat) jQuery specific. If I really wanted to isolate jQuery, then the calling argument should just be the DOM ID, and the Widget would also be responsible for converting the DOM ID to a jQuery object, with the intermediate step of building a jQuery selector string:

    widget = new Widget({parentId: "autodiv"});

    var Widget = function(options) {
        this.$domParent = $("#" + options.parentId);
    };

The advantage of treating jQuery as an implementation detail of my widget is that the rest of my code does not need to know or care about it, and I can presumably even test other parts of my code without jQuery even being around. Also, if I choose to swap jQuery out for some reason, the rest of my code doesn’t need to know; only my widget needs to change. I would consider the conceptual point about jQuery being an implementation detail of the widget to be important even if I find it exceptionally unlikely that I would swap out jQuery. (For one thing, it also protects me against jQuery itself changing.)
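To make that concrete, here’s a hypothetical sketch (the names are mine, not the book’s): page-setup code that only ever sees selector strings can be specced with a fake widget factory, no jQuery loaded.

    var setupPage = function(widgetFactory) {
      return widgetFactory({parentSelector: "#autodiv"});
    };

    describe("setupPage", function() {
      it("hands the widget a selector string, not a jQuery object", function() {
        var received = null;
        setupPage(function(options) { received = options; });
        expect(received.parentSelector).toEqual("#autodiv");
      });
    });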

In my reviewer’s structure jQuery is a dependency of the application as a whole. Looked at from that perspective, it makes sense to convert everything to jQuery objects as early as possible, to maintain consistency of representation across the entire application. The code as a whole may be easier to read, since we aren’t continually having to worry about whether an object is a jQuery object or just some kind of DOM identification. If multiple widgets are all using the same jQuery object, then we might prevent some duplicate conversion to jQuery objects. This probably simplifies the internal code at the cost of making us more dependent on jQuery itself. As a practical matter, that tradeoff might be worth it — once we’ve decided to use jQuery, changing it is probably unlikely.

Essentially, it’s a question of where you draw your boundaries. I’m not sure there’s a long-term practical difference between these two structures, in that I don’t think one of them necessarily leads to better or more flexible code over time, especially given even rudimentary attention to practical details. But I do think you should be clear about which structure you are using — mixing the two by treating jQuery as a specific dependency of some parts of the code but a general dependency of other parts would probably lead to confusion later on.

Update:

A reddit commenter asked why I wasn’t passing a DOM element into the Widget, as in:

    new Widget({parentSelector: document.getElementById("autodiv")});

My response: I have nothing really against passing in a DOM element beyond it being a little verbose and an old-timey superstition against using getElementById. Using a DOM element still keeps the dependency in the widget.

Like this? Go ahead and read Master Space and Time With JavaScript.

How's it Going? MSTWJS Edition

Self Publishing, mstjs, self promotion · Noel Rappin

And now for a more inside-baseball post about how the self-publishing aspects of Master Space and Time With JavaScript are going. Did you know you can buy it?

Short answer: Pretty well, though I could always have done better. Still unclear how this will work over the long haul.

At this point, the book has been on sale for 10 days, plus the pre-sale to people who were on the mailing list. It’s clear that the initial burst of traffic from incoming links is slowing down, and I’m now entering the longer struggle to get people interested — not completely sure how to do that.

Anyway, a few disconnected points about the process so far:

One of the big things I miss about a larger publisher is the marketing reach. That said, there’s something really nice about how people feel a little bit of ownership concerning self-published projects that they like. I’ve gotten a couple of very nice copy-edit runs, for example.

I’ve generally been lucky in reviews (notable exception: the two-star review of RTP on Amazon that I check out whenever my Impostor’s Syndrome feels insufficiently pronounced…), and so far, the people who have commented on the books where I can see them have been positive.

I can be a little transparent with numbers. As I type this, traffic is still about double the level that I had generated in the past on days that I posted to the blog, and much, much higher than the ambient level of traffic when I hadn’t been posting.

Over the course of the ten days, about one in six people that have hit the landing page at /mstwjs and aliases convert to either the free version or one of the paid versions. Low day was 12%, high day was just under 23%. There doesn’t seem to be a consistent trend between traffic and conversion rate.

So that’s about one in six doing anything; of those that do choose, about one in six or so actually bite on one of the paid versions. That works out to a paid conversion rate in the 2-3% range (one sixth of one sixth is just under 3%). That rate seems to be slightly negatively correlated with traffic, which is actually in line with what I would expect.

As for the pricing strategy, which I thought was so clever… So far, only a very few people have bought Book 2 at the $7 level; most purchases have been of the whole book at $15. I’m not planning on changing the pricing (beyond my already-stated plan to raise the $15 price when Book 3 gets near-final). I want to see how this looks when there are more individual books for sale. But there’s a good chance this means that I outsmarted myself, and probably could have priced a little higher.

So far, as best as I can tell, under 3% of people who originally downloaded the free version came back to upgrade. That seems very low, but it might be higher — the way I’m counting this, if somebody gave me a bogus email for their freebie, I wouldn’t track it as an upgrade. Also, I assume this number goes up a touch as people read the book and as upgrade reminders go out. Still, as it stands, it’s not a great data point for the “give away free stuff to increase sales” school of internet marketing.

Over the course of the ten days, the number one referrer, by far, was Peter Cooper’s JavaScript Weekly email newsletter. It’s the biggest by about a factor of three over the next highest measurable referrer. Next up was Twitter, and I think the highest link out of those came from JavaScript daily, and I think the second one was from my feed. It’s really hard to track those for sure, though. Third was Google Reader, though I think that was mostly blog posts and not links to the landing page. Fourth was Reddit — my post there didn’t get much traffic, and fifth was the Ruby5 podcast and show notes. Rounding out the referrals so far is Mike Gunderloy’s Fresh Cup links, and then we also get some noise with internal referrals and things like Pocket, and the Ruby Rogues link, which just came out.

Another point of comparison: MSTWJS sales are about 150% of RTP sales over the first ten days. That’s less impressive than it sounds; I really struggled to get traction with RTP after the initial burst. (That’s the paid number for MSTWJS, and it includes the pre-sale to the list.) RTP had a free section as well, and I don’t have any stats on how often that was downloaded, but since it was just a link and not a shopping cart, I think it was pretty high.

That’s where we are — I hope that those of you that bought or downloaded the book are enjoying it, and if you are hoping to do your own self-publishing project, I hope this information is helpful.

Oh, and the book is still on sale.

The Origin of Master Space and Time With JavaScript

Self Publishing, mstjs, self promotion · Noel Rappin

I have a new book, Master Space and Time With JavaScript. You can buy it.

Here’s the secret origin.

This all started over a year ago. Rails Test Prescriptions had been complete for a few months, and I was getting a little antsy to take on a new project.

But what? I wanted it to be a project where I would learn something, and I wanted it to be something where I had a particular perspective to offer.

A couple of recent experiences pushed me toward view-level coding in general, and JavaScript in particular. About then, for the first time in a while, I worked on a project that had a reasonably serious JavaScript front-end on a team small enough that I was contributing some of the JavaScript.

It quickly became clear to me that JavaScript coding had changed dramatically, not just from my first pass at it (warning clients away from it circa 2000), or my second pass (avoiding it with RJS circa 2008). The tools were better, the idioms had changed, and the expectation was that future web applications would need to handle this stuff very well.

I was also working with a new Obtiva apprentice, who wanted to build a very JavaScript-heavy site, but didn’t really know much JavaScript. In searching for books to point him to, it seemed to me like there was an underserved part of the market, not for total beginners, not a description of a particular library, not a pronouncement from On High about The Right Way To Do Things, but a practical guide to writing and testing modern JavaScript to do cool stuff.

Which is what I set about to write. My original proposal for this book may well be one of the best pages of text I’ve ever written.

And yadda, yadda, yadda, here we are.

I knew I wanted the book to build up on a single web application example — I’ve always liked that style, even though it can be a pain in the neck to structure. I also knew that I wanted to have a lot of testing in the book. Not only is writing about testing in JavaScript something that seems pretty needed, it also felt like a perspective where I actually might have useful things to say. Plus, I had used the largely-test-first style of writing before (in the Wrox book), which I’m sure was appreciated by all three of the people who bought it. I thought I could do it again.

The idea of taking an application with no JavaScript and adding JavaScript features seemed like a good hook. I’d used the Time Travel Adventures travel agency before (for my testing legacy code workshop), and the idea of a time-traveling client who was confused about what modern web sites needed seemed like a suitably silly hook. (In the first draft, the client was named Emmett Brown, but I was guided away from using that name directly, hence the client becomes the mysterious Doctor What.)

And I was off…

Here’s what you get in the books.

Book 1 is largely concerned with a particular simple-seeming request: add a show/hide toggle link to each trip on the home page. Although this is simple, it actually winds up touching a fair amount of jQuery — using selectors to find elements, binding events, and manipulating element pieces.

We build up this toggle thing in a few stages. First is a quick pass writing a simple version of the toggle functionality test-first. The goal here is a sense of what a test-driven process looks like in JavaScript, plus the basics of how Jasmine and jQuery can be brought to bear to write the feature. It’s basically the book equivalent of “fast to green”. We solve the problem with the understanding that we’ll clean up the details later.
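To give a flavor of that rhythm, here’s a minimal sketch (not the book’s actual code; the fixture div and function name are inventions of this post):

    // spec: assumes the test page contains an empty <div id="sandbox">
    describe("trip toggler", function() {
      beforeEach(function() {
        $("#sandbox").html(
            '<a href="#" class="toggle">toggle</a><div class="detail"></div>');
        attachToggler();
      });

      it("hides the detail when the link is clicked", function() {
        $("#sandbox .toggle").trigger("click");
        expect($("#sandbox .detail").is(":hidden")).toBeTruthy();
      });
    });

    // the jQuery that makes the spec pass
    var attachToggler = function() {
      $(".toggle").click(function(event) {
        event.preventDefault();
        $(this).siblings(".detail").toggle();
      });
    };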

Then we go back with two more chapters that go more in-depth on first Jasmine, then jQuery. In these chapters, we don’t add new features as much as describe the library features that supported what we did and explore related functionality.

The final chapter of Part 1 covers the JavaScript object model and why it is confusing if you come to JavaScript from a more traditional Object-Oriented language. By the end of it, we’ve used the module pattern to build the most over-designed show/hide toggle ever. I’ve had some experience doing workshops based on the material in this chapter, and it seems like even people who have been doing JavaScript for a while get something new out of it.
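If you haven’t seen it, the module pattern boils down to something like this generic sketch (nowhere near as over-designed as the book’s version):

    var Toggler = (function() {
      var hiddenClass = "hidden";    // private: invisible outside the module

      var toggle = function(selector) {
        // toggle a CSS class that the stylesheet uses to hide elements
        $(selector).toggleClass(hiddenClass);
      };

      return {toggle: toggle};       // the public interface
    })();

    Toggler.toggle(".detail");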

And that’s Book 1. It’s available for the low, low price of zero dollars and zero cents. Worth every penny.

Book 2 is mostly about applying and extending what Book 1 covered. The first chapter is about building up a widget that combines an autocomplete text input with a list of currently chosen items. Test-first, of course. Then we throw some Ajax into the mix, extending our already gold-plated toggler with the ability to get data from a server. This also gives us an excuse to talk about Jasmine spies. Finally, we build up a rating widget, with the clickable stars and a histogram and stuff, which lets us talk about JSON and Mustache. There’s also a small, slightly out of date bit on the Chrome developer tools. I’ll catch up on that at some point.
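To give a taste of the spy material, here’s a hedged sketch (not the book’s example; loadTrips is a made-up name):

    var loadTrips = function() {
      $.ajax({url: "/trips", dataType: "json"});
    };

    describe("loading trips", function() {
      it("asks the server for trip data", function() {
        spyOn($, "ajax");    // the spy replaces $.ajax, so no server is needed
        loadTrips();
        expect($.ajax).toHaveBeenCalled();
        expect($.ajax.mostRecentCall.args[0].url).toEqual("/trips");
      });
    });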

Book 2 is a mere $7.

Or you can get Books 1 & 2, plus Books 3 & 4 when they come out, for $15. That $15 is a temporary price, and will go up when Book 3 gets closer to completion.

Book 3 is going to be about Backbone. I know the structure of most of it — first, we’ll recreate the front page using a Backbone structure. Next, we’ll build a buy page that allows you to make several calculations about differing trip purchase options on the client. Not sure about the last part, it will definitely include communicating back to the server, probably something else.

Book 3 is going to start appearing on the site in a couple of weeks, and will probably be draft complete by the end of September.

Book 4 will cover Ember.js. I’m not sure yet what we’ll build, though I want it to be a new part of the site, and not a recreation of the same things we do in Backbone. I’m hoping that Book 4 will be out by the end of 2012.

Oh — the title. I had a list of two word titles that were like POWERWORD JavaScript, but all of them were either taken, sounded ridiculously stentorian to me, or both. The original proposal titles were “Getting Things Done in JavaScript”, which nobody liked, or “JavaScript for People who Hate JavaScript”, which nobody liked (see the pattern).

I had Master Space and Time with JavaScript on my list as kind of a joke — a reference to the Time Travel conceit in the book. Plus I liked that it sounded like a pulp adventure novel, and would lend itself to a cover easily. I know it’s not the, like, SEO favorite title, but I’m hoping that people won’t forget it once they hear it.

The cover, by the way, is my own design, and I like it considerably more than many of the other things I’ve designed. (I’m also pretty happy with the PDF layout…) I think the cover particularly works well at thumbnail size, so you can see the difference between the individual books easily.

That’s the story. Hope you like it. Buy it, or tell all your friends (Twitter hashtag #mstwjs).

Thanks.

Rails, Objects, Tests, and Other Useful Things

Objects, Rails, Ruby, testing · Noel Rappin

For the first time in quite a while, I’ve been able to spend time working on a brand-new Rails application that’s actually a business thing and not a side project. It’s small. Okay, it’s really small. But at least for the moment it’s mine, mine, mine. (What was that about collective code ownership? I can’t hear you…)

This seemed like a good time to reflect on the various Object-Oriented Rails discussions, including Avdi’s book, DCI in Rails, fast Rails tests, Objectify, DHH’s skepticism about the whole enterprise, and even my little contribution to the debate. And while we’re at it, we can throw in things like Steve Klabnik’s post on test design and Factory Girl.

I’m not sure I have any wildly bold conclusion to make here, but a few things struck me as I went through my newest coding enterprise with all this stuff rattling around in my head.

A little background — I’ve actually done quite a bit of more formal Object-Oriented stuff, though it’s more academic than corporate enterprise. My grad research involved teaching object-oriented design, so I was pretty heavily immersed in the OO documents circa the mid-to-late 90s. So, it’s not like I woke up last May and suddenly realized that objects existed. That said, I’m as guilty as any Rails programmer at taking advantage of the framework’s ability to write big balls of mud.

Much of this discussion is effectively about how to manage complexity in an application. The thing about complexity is that while you can always add complexity to your system, you can’t always remove it. At some point, your code has to do what it has to do, and that puts a floor on how complex your system is. You can move the complexity around, and you can arguably make it easier to deal with. But… to some extent “easier to deal with” is subjective, and all these techniques have trade-offs. Smaller classes mean more classes; adding structure to make dependencies flexible often increases immediate cost. Adding abstraction simplifies individual parts of the system at the cost of making it harder to reason about the system as a whole. There are some sweet spots, I think, but a lot of this is a question of picking the Kool-Aid flavor you like best.

Personally, I like to start with simple and evolve to complex. That means small methods, small classes, and limited interaction between classes. In other words, I’m willing to accept a little bit of structural overhead in order to keep each individual piece of the code simple. Then the idea is to refactor aggressively, making techniques like DCI more a tool I use when I see complexity than a place I start from. Premature abstraction is in the same realm as premature optimization. (In particular, I find a lot of forms of Dependency Injection really don’t fit in my head; it takes a lot for me to feel like that flavor of flexible dependency is the solution to my problem.)

I can never remember where I saw this, but it was an early XP maxim that you should keep the 90% of your system that is simple as simple as possible, so that you can bring the maximum resources to bear on the 10% that is really hard.

To make this style work, you need good tests and you need fast tests; TDD is a critical part of building code this way. You need to be confident that you can refactor, and you need to be able to refactor in small steps and rerun tests. That’s why, while I think I get what Gregory Moeck is saying here, I can’t agree with his conclusion. I think “more testable” is just as valid an engineering goal as “fast” or “uses minimal memory”. I think if your abstraction doesn’t allow you to test, then you have the wrong abstraction. (Though I still think the example he uses is overbuilt…)

Fast tests are most valuable as a means to an end, with the end being understandable and easily changeable code. Fast tests help you get to that end because you can run them more often; ideally you can run them fast enough that you don’t break focus going back and forth between tests and code, so the transition is seamless. Also, an inability to write fast tests easily often means that there’s a flaw in your design. Specifically, it means that there’s too much interaction between multiple parts of your program, such that it’s impossible to test a single part in isolation.

One of the reasons that TDD works is that the tests become kind of a universal client of your code, forcing your code to have a lot of surface area, so to speak, and not a lot of hidden depth or interactions. Again, this is valuable because code without hidden depth is easier to understand and easier to change. If writing tests becomes hard or slow, the tests are trying to tell you that your code is building up interior space where logic is hiding — you need to break the code apart to expose the logic to a unit test.

The metric that matters here is how easily you can change your code. A quick guide to this is what kinds of bugs you get. A well-built system won’t necessarily have fewer bugs, but it will have shallower bugs that take less time to fix.

Isolation helps, and the Single Responsibility Principle helps. Both of these are good rules of thumb for keeping the simple parts of your code simple. But it also helps to understand that “single responsibility” is also a matter of perspective. (I like the guideline in GOOS that you should be able to describe what a class does without using “and” or “or”.)

Another good rule of thumb is that objects that are always used together should be split out into their own abstraction. Or, from the other direction, data that changes on different time scales should be in different abstractions.

In Rails, remember that “models” is not the same as “ActiveRecord models”. Business logic that does not depend on persistence is best kept in classes that aren’t also managing persistence. Fast tests are one side effect here, but keeping classes focused has other benefits in terms of making the code easier to understand and easier to change.

Actual minor Rails example: pulling logic related to start and end dates into a DateRange class. (Actually, in building this, I started with the code in the actual model, then refactored to a HasDateRange service module that was mixed in to the ActiveRecord model, then refactored to a DateRange class when it became clear that a single model might need multiple date ranges.) The DateRange class can be reused, and that’s great, but the reuse is a side effect of the isolation. The main effect is that it’s easier to understand where the date range logic is.
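A plausible sketch of the shape this takes (hypothetical code, not the actual app):

    class DateRange
      attr_reader :start_date, :end_date

      def initialize(start_date, end_date)
        @start_date, @end_date = start_date, end_date
      end

      def include?(date)
        (start_date..end_date).cover?(date)
      end

      def duration_in_days
        (end_date - start_date).to_i
      end
    end

    # The ActiveRecord model just hands off to the domain object
    class Trip < ActiveRecord::Base
      def travel_dates
        DateRange.new(start_date, end_date)
      end
    end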

I’ve been finding myself doing similar things with Rails associations, pulling methods related to the list of associated objects into a HasThings style module, then refactoring to a ThingCollection class.

You need to be vigilant for abstractions showing up in your code. Passing arguments, especially if you are passing the same argument sets to multiple methods, often means there’s a class waiting to be born. Using a lot of if logic or case logic often means there’s a set of objects that have polymorphic behavior, especially if you are using the same logical test multiple times. Passing around nil often means you are doing something sub-optimally.

Another semi-practical Rails example: I have no problem with an ActiveRecord model having class methods that create new objects of that model as long as the methods are simple. As soon as the methods get complex, I’ve been pulling them into a factory class, where they become instance methods. (I always have the factory be a class that is instantiated rather than having it be a set of class methods or a singleton — I find the code breaks much more cleanly as regular instance methods.) At that point, you can usually break the complicated factory method into a bunch of smaller methods with semantically meaningful names. These classes wind up being very similar to a DCI context class.
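In sketch form (hypothetical names, simplified logic), the end state looks something like:

    class TripFactory
      def initialize(params)
        @params = params
      end

      def create
        trip = Trip.new(@params)
        apply_default_dates(trip)
        apply_pricing(trip)
        trip.save
        trip
      end

      private

      # Each formerly inline chunk of the class method becomes a named step
      def apply_default_dates(trip)
        trip.start_date ||= Date.today
      end

      def apply_pricing(trip)
        trip.price ||= 0
      end
    end

    trip = TripFactory.new(params).create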

Which reminds me — if you are wondering whether the Extract Method refactoring is needed in a particular case, the answer is yes. Move the code to a method with a semantically meaningful name. Somebody will be thankful for it, probably you in a month.

Some of this is genuinely subjective. I never in a million years would have generated this solution; I’d be more likely to have a Null Object for Post if this started to bother me, because event systems don’t seem like simplifications to me.

I do worry how this kind of aggressive refactoring style, or any kind of structured style, plays out in a large team or even just a team with varying degrees of skill, or even just a team where people have different styles. It’s hard to aggressively refactor when three-dozen coders are dependent on something (though, granted, if you’ve isolated well you have a better shot). And it’s hard to overstate the damage that one team member who isn’t down with the program can do to your exquisite object model. I don’t have an answer to this, and I think it’s a really complicated problem.

You don’t know the future. Speculation about reuse gains and maintenance costs is just speculation. Reuse and maintenance are the side effects of good coding practices, but trying to build them in explicitly by starting with complexity has the same problems as any up-front design, namely that you are making the most important decisions about your system at the point when you know the least about the system. The TDD process can help you here.

Upcoming Me

self promotion · Noel Rappin

Updates, schedules, things, and stuff.

Scottish Ruby

The Scottish Ruby conference is having a charity workshop June 28, and I’m presenting my “Advanced Rails Design” workshop. This is the extended dance mix version of the workshop I did at Mountain West Ruby earlier this year. I thought it went really well (so did the attendees, I’m sure), and I’m very excited about this one. Details at http://scottishrubyconference.com/charity/ — you don’t need to be attending Scottish Ruby, but you do need to register in advance.

There are three workshops that day: a UX workshop that has sold out, a Dave Thomas advanced Ruby workshop that hasn’t, and mine. Let’s just say there are more tickets available right now for mine than I’d like, and I hope that if you are in the neighborhood, you’ll stop by. It’ll be worth it.

Windy City Rails

Much closer to home, I’m speaking at Windy City Rails this year. According to their schedule, my talk will be “Let’s Make Testing Fun Again”. This conference is always great, the venue this year looks outstanding, and the speaker list is — myself excluded — top-notch. Hope to see you there.

Ignite Rails

My Ignite RailsConf talk, “Manage Your Development Environment / Never Burn Another Burger”, is now available online at http://www.youtube.com/watch?v=VLCLBdFsSOE. I don’t think my other RailsConf thing is up yet, but I’m sure I’ll let you know.

Master Space and Time With JavaScript

It’s close to on schedule. I think that converting all the text for the first two parts of the book will be done next week, leaving me with a) a serious edit, b) cover and incidental design, c) cleanup for ePub and Mobi, and d) product sale logistics. Still hoping that it’ll go out in early July. If you’ve signed up at /mstjs-form, you’ll probably get an early look and a chance to help me work the kinks out.

And Hey,

You can still buy Rails Test Prescriptions. Much of it is still up-to-date…

Automator + Bash = Yay

Noel Rappin

At its best, working in Mac OS X combines the power of the Unix shell with the convenience of an actual interface.

Here’s a best case scenario:

As I may have mentioned here a few times, I’m writing a book. As part of my current workflow, I need to convert my text from its old format to my new format, which is Markdown. The old format is a custom XML-based language, the details of which don’t matter beyond the fact that it’s XML-based.

Moving the text over has two issues:

  • The obvious one is that there are XML tags in the body of the text for things like code literals and text italics that I want to replace with either the Markdown backtick or the Markdown underscore.
  • The less obvious one is that when writing XML text, I treat it like code, meaning I’m ridiculously insane about text layout. In my XML source, I really do still insist on an 80-character line, with hard returns and indentation. I’ve decided that I don’t need to do this when I write in Markdown, so when I move the text over, I need to get rid of all the hard returns and spacing.

What I’d like is a workflow where I can copy a paragraph of text to the clipboard, do magic, and then paste cleaned-up text in the new book file. Going a paragraph at a time is not a problem — it’s actually preferable, since I’m editing as I go, so moving a whole file at a time is not really what I want.

I toyed with the idea of doing this in Ruby, but that seemed like a pain, so I wrote a short shell script.

Understand: I never do this, and I pretty much know beans about the shell programs involved. But by googling things like “shell remove newlines” and with some helpful man pages, I cobbled together the following, with details of the XML fuzzed a bit.

    pbpaste | tr -d '\n' | tr -s ' ' | 
    sed -E 's/<\/*(literal|lit)>/\`/g' | 
    sed -E 's/<\/*(italic|bold)>/\*/g' | pbcopy

All of you who actually know shell scripting are invited to have a hearty laugh. While you are off chuckling, I’ll explain what this line noise does…

  • pbpaste takes the text from the clipboard, which is piped to…
  • tr -d '\n', which removes all the newlines, and pipes to…
  • tr -s ' ', which removes duplicate spaces, and pipes to…
  • sed -E 's/<\/*(literal|lit)>/\`/g', which takes all the XML tags for things that I want replaced with Markdown literal syntax and puts in a backtick, then pipes to…
  • sed -E 's/<\/*(italic|bold)>/\*/g', which takes the XML tags I want replaced with Markdown emphasis syntax and puts in an asterisk, then pipes to…
  • pbcopy, which copies the final text back into the clipboard

Works great. A bit of a mouthful to type, if I can mix metaphors. (The actual one is even more complicated, because I strip out some XML entities as well.) It’s a bit much to type at the terminal in my workflow. Creating an alias is a possibility, but that still requires a terminal.
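(For reference, the alias route would be something like this in a ~/.bash_profile; the function name is my own invention.)

    xml2md() {
      pbpaste | tr -d '\n' | tr -s ' ' |
        sed -E 's/<\/*(literal|lit)>/`/g' |
        sed -E 's/<\/*(italic|bold)>/*/g' | pbcopy
    }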

There’s another option in Mac OS X, though: Automator.

You may not have played with Automator, because you do not fully appreciate Automator. Automator is an OS X application that lets you chain together predefined actions (very similar to the actions exposed via AppleScript) using a GUI, and save the result as an application or an OS X service, among other options.

This isn’t an Automator tutorial, because what I want to do here is really simple. One of the available actions in Automator is “Run Shell Script”.

Hmm…

I created a new Automator document as a Service, added the Run Shell Script action to it, pasted my big shell script into the body of the action. The action doesn’t need input. Even better, I can have it work not on the clipboard, but on the selected text in the open application, which saves me a step in my workflow. The shell script is already putting the output on the clipboard, so I don’t need to deal with that in automator either.

Okay, big deal, it runs a shell script. But, since I’ve saved it as a service, I can assign a global key command to it. In the System Preferences for Keyboard, my new service is somewhere in the Keyboard Shortcuts tab (under services). From there, I can assign a keyboard shortcut, which is available in any application that exposes itself to services, which is many applications.

Now I just select a paragraph of old text in my text editor, hit my new key combination, and I can paste the cleaned up text in my new editor window. A little Unix, a little Mac, and a lot of time saved.

Master Space And Time Release Plan

Self Publishing, mstjs · Noel Rappin

There is a plan.

It goes like this:

Master Space and Time With JavaScript will be split into four parts.

Part one and part two will be available sometime in July. I’d say July 1st, but I’ll still be in Scotland. It’d be sooner, but there are still logistics to be managed around the actual layout of the book and getting the payment gateway in order. Plus, I need to actually finish the text.

Part one is an introduction to Jasmine, jQuery, and the JavaScript object model. It will be available for free.

Part two will be more advanced jQuery, Jasmine, and JavaScript examples. It will be available for, most likely, $7.

Each part will be on the order of 50-75 PDF pages. The exact split between the two parts will depend on the final page count of the chapters.

You will also have the option to pre-purchase the entire book — that’s an immediate download of parts one and two, and an eventual update with parts three and four. That bundle will most likely be $15. I’m not sure yet how long that option will remain available.

Part three will cover Backbone.js. It will come out something like four to six weeks after parts one and two. Part three will also be $7.

Part four, which will cover Ember.js, will come out some time in the future, same deal.

There are two reasons why the Ember.js part might take a few months. One is that I’m still kind of waiting for Ember.js to settle.

But the main reason is that I plan on doing other things.

More mini-books, in the 30-75 page range, for $5 to $10 each. I’m going to do at least one, maybe two, before the Ember.js part. (Yes, I’ll say what they are. Eventually).

And that’s the real plan: an outlet for me to release these mini-books on a regular basis. To write these ideas that I have that aren’t quite long enough for full books.

I’m excited about this. I have a few topics I’ve wanted to write about for a long time, and this is a way for me to get them out to people who I think will find them useful, interesting, and fun.

A couple of other logistical things:

  • In case it’s not clear, all releases will be digital with PDF, ePub, Mobi, maybe HTML. iBooks Author is a possibility, but probably not in the immediate future.
  • I’ll update previous books to correct errors and typos and that kind of thing.
  • I’d like to have some way to offer something like a subscription. I need to find out what the payment tool I’ll be using makes possible.
  • I’ll probably have some sort of umbrella name for the series, but I don’t know what it will be yet.

I think this is a good plan. I hope you agree. I’d love to hear your thoughts, and if you want to get an email when the first two parts are available, please fill out the interest form.

Master Space And Time With JavaScript Update: The First Couple of Chapters

mstjs · Noel Rappin

It’s not much of an exaggeration to say that I’ve been writing the same two or three chapters for six months. I think — I hope — this is the last time.

This is a more or less weekly update on the manuscript currently known as Master Space & Time in JavaScript. Today, the update is about the first few chapters and how they change over time. These chapters cover a lot of ground, and getting the order and beats right has been a struggle. But I think I’ve got a good handle on it now.

(If all you want is a pure update: I’m making progress. The biggest factor in when the book actually goes on sale is how much I decide to have completed before I sell it. But I think we’re still looking at a few weeks out.)

The problem with the first couple chapters is that there are a lot of topics to cover right at the beginning. The first part of the book introduces jQuery, but I also want to introduce testing as a practice, so Jasmine comes in immediately as well. I want to use the examples in the book to model a good test-first practice (for the two or three of you that are familiar with my Wrox book, you know that I tried something similar there).

However well intentioned my plan to cover testing is, the fact is that for a reader unfamiliar with both, simultaneous exposure to Jasmine and jQuery has a lot of potential to be confusing.

In my first draft, the reader was presented with a suite of three or four Jasmine tests, which were fully explained, then with the jQuery code. The example is super-simple: just making a detail show and hide based on a user click. That’s on purpose, so nobody gets bogged down in the details of the example. Even though it’s simple, it does touch on a lot of basic jQuery features, which were explained in some detail, while also referring back to the Jasmine tests.

Some early reviewers found the back and forth confusing, so I tried a few different ways to clean it up:

  • I really like starting books with quick demos that show off things that will be fully explained later. So I added one that covered the JavaScript console. More on that in a moment.
  • I added a lot of text explaining exactly where the reader was, why we were talking about the thing we were talking about, and so on. A lot of this was necessary and stuck around, though at one point I wound up with a two-page chapter introduction that was (accurately) described as “apologetic”.

Eventually, though, I tried to separate the chapters, presenting the jQuery code first and then the Jasmine tests after. On the plus side, separating the two led me to write much better explanations of each library. On the down side, the order just felt really wrong to me; it wasn’t really the way I wanted the book to go.

As I tried to bring the material together for the new version, I came up with an answer that I think keeps all the draft material I like, and scraps the stuff I don’t. I mentioned that I really like quick-march introductions for my books. It finally occurred to me that the small example actually works as a quick introduction. So the plan is to work through that example in a kind of strict test-first way, showing the rhythm of a BDD process without a lot of library detail. Then go back and describe Jasmine in more detail, then jQuery in more detail.

Which makes this the current table of contents:

  • Introduction. Mostly logistics, explanation of what’s there and so on.
  • First Look: The quick walk through Jasmine and jQuery.
  • Jasmine in more detail.
  • jQuery in more detail — at least the basic selector and element manipulation parts.
  • JavaScript functions and objects. How they work. We take the simple code from the first example and apply different JavaScript module and class patterns to it. I like this chapter.
  • Developer tools, the console and the WebKit package
  • Pulling the first few chapters together on a more complex example — a multi-select autocomplete widget.
  • Ajax. How to do it and how to test it.
  • JSON. This is an Amazon-style rating widget, so pulling in everything so far plus Ajax, plus JSON.
  • Backbone. Probably 2-3 chapters here, covering Underscore and Mustache. This is the point where we get to things that haven’t been written yet, though I do have an outline.
  • Depending on how ambitious I feel at this point, a similar treatment of Ember.js is possible.

The initial release is most likely everything up to the JSON chapter, though it’s possible I may start putting it out there once I finish reworking the autocomplete widget chapter.

If this sounds interesting to you, let me know by signing up. There’s still time to suggest other tools you want covered. Or leave a note in the comments here.

Self-Publishing Workflow Update

Self Publishing, mstjs · Noel Rappin

Next up on the Master Space and Time With JavaScript status report is the workflow that takes my words and turns them into a PDF. And an HTML file. And an ePub. And don’t forget Kindle.

As you can imagine, this is something of a minefield, although there are a lot more tools available than there were three years ago when I did this the last time — here’s an overview of the process I used then. That article talks about the process that I used on Rails Test Prescriptions for as long as it was self-published.

Things have gotten more complicated. Most obviously, there are more devices and formats to support. The Kindle’s mobi format and ePub are different, and every ePub device has its own quirks. On the plus side, there are a lot more tools and libraries available than there were, though figuring out what they all do is a challenge in itself. Plus, a lot of the existing tools produce documents that are, well, kind of dull-looking, especially noticeable in the PDF versions. (If you are a self-publishing author, I don’t mean you, your stuff looks great. Those other people, though… can you believe them?)

Over the last few years, I’ve gotten addicted to being able to sort of see what the text looks like all laid out fancy and the like. So it was important to me to get at least the semblance of a tool chain in place before I started rewriting in earnest, not least because I needed to figure out what format the text was going to be.

Here’s what I’ve got. It starts with Markdown, because Markdown is pretty simple, I’ve been using it for years, and there are like a jillion editors that support it. Because I’m using Markdown, I’m also experimenting with writing the book in a more writerly editor like Byword, rather than a programmer editor like Sublime. In theory, this will make the book less… I don’t know, less programmerish, or less sublime? Dunno, but right now, I’m enjoying the change of scenery.

Then it starts to get complicated. There are multiple implementations of Markdown or Markdown-like libraries, and they all are subtly different. Right now, I’m using Multi-Markdown, since it has footnotes and cross-references, though there’s some possibility I’d switch at some point in the future. (I love footnotes, and I’d add one here, but I don’t think this blog engine uses a version of Markdown with footnotes.)

Markdown’s weakness is that it’s hard to specify custom styles or HTML classes. I’m working around that by running my Markdown text through a pre-processor where I can put in some custom directives. I can, for example, write:

    !!!sidebar
    :title A Sidebar
    The sidebar content goes here

The pre-processor catches that, and converts the code into some specially styled HTML. (Yes, I’ll be open-sourcing the tool at some point in the future.)
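The pre-processor doesn’t have to be fancy; a stripped-down sketch of the idea (much simpler than the real tool) might be:

    # Convert a three-line "!!!sidebar" directive to styled HTML
    # before the Markdown conversion runs.
    def preprocess(text)
      output = []
      lines = text.lines.to_a
      until lines.empty?
        line = lines.shift
        if line.start_with?("!!!sidebar")
          title = lines.shift.sub(/^:title\s*/, "").strip
          body = lines.shift.to_s.strip
          output << %{<div class="sidebar"><h4>#{title}</h4><p>#{body}</p></div>\n}
        else
          output << line
        end
      end
      output.join
    end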

I’ve also added something that I’ve wanted for a long time, namely the ability to insert code from a file at an arbitrary branch in a git repository, meaning that I can show code from multiple successive versions of the same file just by having them in different git branches. That means I can distribute the sample code in a git repository and have it not look too awkward.
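Git does most of the work there; the guts are probably no more than a shell-out like this sketch (the branch and path names are hypothetical):

    # Fetch a file's contents as it exists on a given branch
    def code_from_branch(branch, path)
      `git show #{branch}:#{path}`
    end

    code_from_branch("chapter-02", "js/toggler.js")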

I also have the ability to post-process Markdown’s HTML, which I think I’m eventually going to need to reconcile the rest of the tools.

Once I have the converted HTML, I need to generate e-book files. For PDF, I’m going back to using PrinceXML. There are a lot of things I like about this tool. For one, it has a lot of the features that you would normally associate with actual books, like section numbering, footnotes, cross references, and page headers and footers. I think it produces nice-looking files, and it’s controllable with CSS, a technology that I (mostly) understand, so it’s easy and fun to tinker with.

I didn’t have any existing tools for ePub or Mobi, so I looked at somebody who I thought was creating pretty nice files, specifically Avdi Grimm, who did a pretty great job with the Objects on Rails ebooks. Avdi created his own tool called OrgPress, and for ePub and Mobi, he uses the command line interface to Calibre. Even though OrgPress uses Emacs Org Mode, Make, and Awk — three tools that, to put it mildly, I feel no particular pull to tinker in — I was able to take advantage of Avdi’s hard-fought war with Calibre to get a set of command line flags that basically work, though I’m going to have to tweak them a bit before they are salable. I’m also looking at Rpub, since it’s in Ruby.

One nice side effect of all these tools is that it’s easy for me to have an almost all iPad workflow — I turn on watchr on the laptop, write using Byword on the iPad, and when Dropbox syncs the file, the ebooks are all updated, and I can view them on the iPad via Dropbox. Not bad.

I know this isn’t done yet. For one thing, I suspect that Multimarkdown and PrinceXML are going to disagree on the format for a footnote, and I’m probably going to have to referee. Later note: it’s worse than that… Multimarkdown’s footnote format actually crashes the Mac Adobe and Nook ePub readers, though that seems to be a bug on their end, and it does work in iBooks.

I’m eventually going to need custom styles for each format — the sidebar CSS that looks great in PDF looks awful in iBooks. And I’ll need a cover.

And, you know, content.

May 9, 2012: The Random Link Post Returns

JavaScript, Music, RSpec, Ruby, Self Publishing · Noel Rappin

And now, the return of the semi-occasional link post. I’m going to try to do this at least once a week, but who knows.

If you are writing JavaScript, you should be looking at Justin Searls and his JavaScript testing tools. Justin posted the slides for what looks like a great talk on JavaScript testing. These slides made me happy.

In random media sales, the audio book of World War Z is on sale for a mere six bucks.

A couple of Ruby posts. Steve Klabnik argues that merely splitting code into modules doesn’t reduce complexity. Instead he argues that you need encapsulation. I think splitting code is probably better than nothing, but not a lot better.

Meanwhile, Avdi Grimm describes the Ruby idiom for type conversion which I have to admit, I never thought of as an idiom before.

In a story that I think of as a cautionary tale about pricing and value, the LA Times writes about the history of American Airlines customers who bought unlimited tickets. And then, you know, they used them, unlimitedly.

I always like to see plucky programmers trying to self-publish books about testing. So I’m glad that Aaron Sumner is LeanPubbing a book on testing Rails and RSpec. Good luck, Aaron!

Pretty much everybody who blogs or writes or tries to explain things to people should read this XKCD

Finally, a random music recommendation. I don’t recommend music much, but I do have a weakness for lyric-heavy, earnest, catchy music. Which brings me to the Lisps and their recent musical project Futurity. The musical is a Steampunky kind of thing that concerns a Civil War vet who tries to build a “Steam Brain” with the help of Ada Lovelace. It’s clever and I like it. Album Link. In a similar vein, there’s their song Singularity from their previous album Are We At The Movies.

Master Space And Time With JavaScript Status 5-08

JavaScript, mstjs · Noel Rappin

Now that the new book is public, I’m going to start doing more frequent status updates. It’s going to be weird for me, after keeping the project under wraps for so long, but I’m sure we will all get by.

When the book, shall we say, reverted back to me, I had two immediate questions: what to write, and how to deliver it to a (hopefully) desiring public. Let’s talk about the content first, though in practice, I needed to make sure I had a tool chain I liked before proceeding.

At peak length, the manuscript was about 150 PDF pages, give or take, comprising something like 60% of my outline. Great! Ready to start selling. Except for a few things:

  • Because the last few months consisted of a few different writing experiments on different parts of the book, the pages that I have aren’t consistent.
  • Some of the text was started but not finished once review became more important… There is a chapter or two that I haven’t even read in months.
  • Plus, life marches on. Ember.js wasn’t even a thing when I started; now I’m increasingly feeling like I need to cover it.
  • The existing code is in an XML-esque format, and my preferred toolchain starts with Markdown.

Those points are all annoyances. The real problem is harder to articulate. Maybe the best way to put it is that the review process made me kind of cautious when writing, and that the book needs to be bolder and more sure of itself in order to succeed.

So the content part of the last week has involved writing a new, short introduction; I think I’ll probably revisit that once I have more of the rest of the book in place. I’ve also started transporting the first real chapter, but it’s more of a re-imagining than a copy and paste.

The way the book is set up, it uses the Time Travel Travel Agency example I’ve used in a couple of workshops — hence, the name — and you are contacted by a possibly time-traveling client who asks you to continually make JavaScript changes to his application. It’s a little silly, but I like it, and it keeps some focus on writing JavaScript in a real, if smallish, application. I’m refocusing the content on teaching good BDD practices, part of what I’m doing right now is trying to balance learning BDD with learning the JavaScript.

And that’s where I am. I’ll keep updating status regularly. If this interests you, remember to sign up.

Welcome

Noel Rappin

Hello, and welcome to my new site, noelrappin.com.

I have a new site because now that I have another book to promote, having the site be named after the previous book seemed perhaps not in keeping with the best marketing practices.

Speaking of the new book, it’s called Master Space and Time With JavaScript, and you can find out more information about it. The book should go on sale in June; if you’d like to be notified, please fill out the handy interest form.

Expect to hear more about the book in the upcoming weeks. It’s going to be great.

Thanks,
Noel

Setting Up Fast No-Rails Tests

Rails, testing · Noel Rappin

The key to fast tests is simple: don’t do slow things.

Warning: this post is a kind of long examination of a problem, namely, how to integrate fast non-Rails tests and slow Rails tests in the same test suite. This may be a problem nobody is having. But having seen a sample of how this might work, I was compelled to try and make it work in my toy app. You’ve been warned, hope you like it.

In a Rails app, “don’t do slow things” largely means “don’t load Rails”. Which means that the application logic that you are testing should be separable from Rails implementation details like, say, ActiveRecord. One way to do that is to start putting application logic in domain objects that use ActiveRecord as an implementation detail for persistence.

By one of those coincidences that aren’t really coincidences, not only does separating logic from persistence give you fast tests, it also gives you more modular, easier to maintain code.

To put that another way, in a truly test-driven process, if the tests are hard to write, that is assumed to be evidence that the code design is flawed. For years, most of the Rails testing community, myself included, have been ignoring the advice of people like Jay Fields and Michael Feathers, who told us that true unit tests don’t touch the database, and we said, “but it is so easy to write a model test in Rails that hits the database, we are sure it will be fine.” And we’ve all, myself included, been stuck with test suites that take way too long to run, wondering how we got there.

Well, if the tests get hard to write or run, we’re supposed to consider the possibility that the code is the issue. In this case, that our code is too entangled with ActiveRecord. Hence, fast tests. And better code.

Anyway, I built a toy app placing logic in domain objects for the Mountain West workshop. In building this, I wanted to try a whole bunch of domain patterns at once, fast tests, DCI, presenters, dependency injection. There are a lot of things that I have to say about messing around with some of the domain object patterns floating around, but first…

Oh. My. God. It is great to be back in a code base where the tests ran so fast that I didn’t have time to lose focus while the tests ran. It occurred to me that it is really impossible to truly do TDD if the tests don’t run fast, and that means we probably have a whole generation of Rails programmers who have never done TDD, who only know tests as the multi-minute slog they need to get through to check in their code, and don’t know how much fun fast TDD is.

Okay, at some unspecified future point, I’ll talk about some of the other patterns. Right now, I want to talk about fast tests, and some ideas about how to make them run. While the basic idea of “don’t do slow things” is not hard, there are some logistical issues about managing Rails-stack and non-Rails-stack tests in the same code base that are non-obvious. Or at least they weren’t obvious to me.

One issue is file logistics. Basically, in order to run tests without Rails, you just don’t load Rails. In a typical Rails/RSpec setup, that means not requiring spec_helper into the test file. However, even without spec_helper, you still need some of the same functionality.

For instance, you still need to load code into your tests. This is easy enough, where spec_helper loaded Rails and triggered the Rails auto load, you just need to explicitly require the files that you need for each spec file. If your classes are really distributing responsibility, you should only need to require the actual class under test and maybe one or two others. I also create a fast_spec_helper.rb file, which starts like this:

    $: << File.expand_path("app")
    require 'pry'
    require 'awesome_print'

Pry and Awesome Print are there because they are useful in troubleshooting, the addition to the load path is purely a convenience when requiring my domain classes.

There is another problem, which is that your domain classes still need to reference Rails and ActiveRecord classes. This is a little messier.

I hope it’s clear why this is a problem – even if you are separating domain logic from Rails, the two layers still need to interact, even if it’s just of the load/save variety. So your non-Rails tests and the code they call may still reference ActiveRecord objects, and you need to not have your tests blow up when that happens. Ideally, you also don’t want the tests to load Rails, either, since that defeats the purpose of the fast test.

Okay, so you need a structure for fast tests that allows you to load the code you need, and reference the names of ActiveRecord objects without loading Rails itself.

Very broadly speaking, there are two strategies for structuring fast tests. You can put your domain tests in a new top-level directory – Corey Haines used spec-no-rails in his shopping cart reference application. Alternatively, you can put the domain tests with everything else in the spec directory, in subdirectories like spec/presenters and the like, and just have those files load your fast_spec_helper. About a month ago, Corey mentioned on Twitter and GitHub that he had moved his code in this direction.

There are tradeoffs. The separate top-level approach enforces a much stricter split between Rails tests and domain tests – in particular, it makes it easier to run just the domain tests without loading Rails. On the other hand, the directory structure is non-standard, and a whole ecosystem of testing tools basically assumes that you have a single test directory.

It’s not hard to support multiple spec directories with a few custom rake tasks, though it is a little awkward. And since your Rails objects are never loaded in the domain object test suite, it’s very easy to stub them out with dummy classes that are only used by the domain object tests.
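
For what it’s worth, something along these lines is enough to run the suites separately or together – a sketch, assuming RSpec’s bundled rake task and the usual :spec task that rspec-rails defines:

    # lib/tasks/fast_specs.rake -- a sketch, not anyone's canonical setup
    require 'rspec/core/rake_task'

    RSpec::Core::RakeTask.new(:spec_fast) do |t|
      t.pattern = 'spec_fast/**/*_spec.rb'
    end

    desc "Run the fast domain specs, then the full Rails suite"
    task :all_specs => [:spec_fast, :spec]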

As I mentioned, Corey has also shown an example with all the tests under a single directory, plus some namespacing magic. I’m not 100% sure whether I like the single directory better. But I can explain how he got it to work.

With everything under the same top-level directory, it’s easier to run the whole suite, but harder to run just the fast tests (not very hard, just harder). Where it gets weird is when your domain objects reference Rails objects. As mentioned before, even though your domain objects shouldn’t need ActiveRecord features, they may need to reference the name of an ActiveRecord class, often just to call find or save methods. Often, “fast” tests get around this by creating a dummy class with the same name as the ActiveRecord class.

Anyway, if you are running your fast and slow tests together, you’re not really controlling the order of test runs. Specifically, you don’t know if the ActiveRecord version of your class is available when your fast test just wants the dummy version. So you need dummy versions of your ActiveRecord classes that are only available from the fast tests, while the real ActiveRecord objects are always visible from the rest of the test suite.

I think I’m not explaining this well. Let’s say I have an ActiveRecord object called Trip. I’ve taken the logic for purchasing a trip and placed it in a domain object, called PurchaseTripContext. All that’s fine, and I can test PurchaseTripContext in a domain object test without Rails right up until the point where it actually needs to reference the Trip class because it needs to create one.

The thing is, you don’t actually need the entire Trip class to test the PurchaseTripContext logic, you just need something named Trip that you can create, set some attributes on, and save. It’s kind of a fancy mock. And if you just require the existing Trip, that pulls in ActiveRecord, and with it the rest of Rails – which is what we are trying to avoid.

There are a few ways to solve this access problem:

If you have a separate spec_fast directory that only runs on its own, then this is easy. You can create a dummy class called Trip – I make the dummy class a subclass of OpenStruct, which works tolerably well: class Trip < OpenStruct; end.

You could also use regular stubs, but there are, I think, two reasons why I found that less helpful. First, the stubs need to be recreated for each test, whereas a dummy class gets declared once. Second, OpenStruct lets you hold on to a little state, which – for me – makes these tests easier to write.
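
To show what I mean by holding state – the dummy takes whatever attributes you throw at it and gives them back (Trip here is the OpenStruct dummy, not the ActiveRecord class):

    require 'ostruct'

    class Trip < OpenStruct; end

    trip = Trip.new(:name => "Vegas")
    trip.name             # => "Vegas"
    trip.purchased = true # the dummy happily remembers this
    trip.purchased        # => true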

Anyway, if your domain logic tests are mixed into the single spec directory, then the completely separate dummy class doesn’t work – the ActiveRecord class might already be loaded. Worse, you can’t depend on the ActiveRecord class being there, because you’d like to run your domain tests standalone without loading Rails. You can still create your own dummy Trip class, but it requires a little bit of Ruby module munging – more on that in a second.

If you want to get fancy, you can use some form of dependency injection to make the relationship between PurchaseTripContext and Trip dynamic, and use any old dummy class you want. One warning – it’s common when using low-ceremony dependency injection to make the injected class a parameter of the constructor with a default, as in def initialize(user, trip_class = Trip). That’s fine, but note that Ruby evaluates the default at call time, so any caller that relies on the default still needs the constant Trip to resolve to some value.
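
Here’s a minimal sketch of that style of injection – the names are mine, not from the workshop app:

    require 'ostruct'

    class PurchaseTripContext
      def initialize(user, trip_class = Trip)
        @user = user
        @trip_class = trip_class
      end

      def purchase
        @trip_class.new # the fast test never touches the real Trip
      end
    end

    # In a fast spec, inject a dummy and the Trip constant is never
    # looked up, because the default argument is never evaluated:
    FakeTrip = Class.new(OpenStruct)
    context = PurchaseTripContext.new("some user", FakeTrip)
    context.purchase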

Or you could bite the bullet and bring the Rails stack into the test because of the dependency. For the moment, I reject this out of hand.

This isn’t an exhaustive list, there are any number of increasingly insane inheritance or metaprogramming things on the table. Or under the table.

So, if we choose a more complicated test setup with multiple directories, we get an easy way to specify these dummy classes. If we want the easier single-directory test setup, then we need to do something fancier to make the dummy classes work for the fast tests but be ignored by the Rails-specific tests.

At this point, I’m hoping this makes sense. Okay, the problem is that we want a class to basically have selective visibility. Here’s the solution I’m trying – this is based on a gist that Corey Haines posted a while back. I think I’m filling in the gaps to make this a full solution.

For this to work, we take advantage of a quirk in the way Ruby looks up class and module names. Ruby class and module names are just like any other Ruby constant. When you refer to a constant without any scope information – say, the class name Trip – Ruby first looks in the current module, and only if the current module doesn’t contain the constant does Ruby look in the global scope. (That’s why you sometimes see a constant prefixed with ::, as in ::Trip – the :: forces the lookup to the global scope.)

That’s perfect for us, as it allows us to put a Trip class in a module and have it shadow the ActiveRecord Trip class in the global scope. There’s one catch, though – the spec, the domain class, and the dummy object all have to be part of the same local module for them all to use the same dummy class.
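
In miniature, the shadowing looks like this, with a plain top-level Trip standing in for the ActiveRecord class:

    require 'ostruct'

    class Trip; end                  # stand-in for the ActiveRecord class

    module Contexts
      class Trip < OpenStruct; end   # the dummy

      class PurchaseTripContext
        def trip_class
          Trip                       # unqualified, so Contexts::Trip wins
        end
      end
    end

    Contexts::PurchaseTripContext.new.trip_class # => Contexts::Trip
    ::Trip                                       # => the top-level class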

After some trial and error (lots of error, actually), here’s a way that I found which works with both the fast tests and the naming conventions of Rails autoload. I’m not convinced this is the best way, so I’m open to suggestions.

So, after 2000 words of prologue, here is a way to make fast tests run in the same spec directory in the same spec run as your Rails tests.

Step 1: Place all your domain-specific logic classes in submodules.

I have subdirectories app/travel/presenters, app/travel/roles, and the like, where travel is the name of the Rails application. I’m not in love with the convention of putting all the domain-specific directories at a separate level, but it’s what you need to do in Rails to allow autoloaded classes to be inside a module.

So, my PurchaseTripContext class, for example, lives at app/travel/contexts/purchase_trip_context.rb, and starts out:

    module Contexts
      class PurchaseTripContext
        # stuff
      end
    end

Step 2: Place your specs in the same module

The spec for this lives at spec/contexts/purchase_trip_context_spec.rb (yes, that’s an inconsistency in the directory structure between the spec and app directories.) The spec also goes inside the module:

    module Contexts
      describe PurchaseTripContext do
        it "creates a purchase" do
          # stuff
        end
      end
    end

Step 3: Dummy objects

The domain objects are in a module, the specs are in a module, now for the dummy classes. Basically, I just put something like this in my fast_spec_helper.rb file:

    require 'ostruct'   # OpenStruct isn't loaded by default

    module Contexts
      class Trip < OpenStruct; end
      class User < OpenStruct; end
    end

This solves the problem, for some definition of “solves” and “problem”. The fast tests see the dummy class, the Rails tests see the Rails class. The tests can be run all together or in any smaller combination. The cost is a little module overhead that’s only slightly off-putting in terms of finding classes. I’m willing to pay that for fast tests. One place this falls down, though, is if more than one of my submodules needs dummy classes – each submodule then needs its own set, which does get a little ugly. I suspect there’s a way to clean that up that I haven’t found yet.
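
One direction that might cut the duplication – an untested sketch for fast_spec_helper.rb, with Presenters as a made-up second module – is to generate the dummies rather than declare each set by hand:

    require 'ostruct'

    module Contexts; end
    module Presenters; end   # hypothetical second domain module

    # Mirror one shared list of dummy classes into each domain module.
    [Contexts, Presenters].each do |domain_module|
      [:Trip, :User].each do |name|
        domain_module.const_set(name, Class.new(OpenStruct))
      end
    end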

In fact, I wonder if there’s a way to clean up the whole thing. I half expect to post this and have somebody smart come along and tell me I’m overcomplicating everything – it wouldn’t be the first time.

Next up, I’ll talk a little bit about how some of the OO patterns for domain objects work, and how they interact with testing.

Self-assessment

JavaScript, Me, self promotionNoel RappinComment

Here’s what I’ve got.

2 chapters introducing jQuery and Jasmine via a walkthrough of a simple piece of JavaScript functionality.

1 need to convert all my text from its current proprietary format to something more Markdown-based.

1 genuinely silly conceit tying together the application that gets built in the book. And I mean that in the best way. It should be silly, there’s no reason not to be bold. There is even a twist ending. I think.

1 slightly dusty self-publishing toolchain that converts a directory of Markdown files into HTML, with syntax-colored code. It’s possible that there’s a better library for some of the features these days.

1 chapter on converting that simple piece of jQuery into various patterns of JavaScript object. I quite like this one, actually.

1 website, which is currently hosted by WordPress – at one point, I had to abandon the railsrx.com site that actually sold stuff, and WordPress was easy. I think I’ll need to upgrade that a bit.

1 Intro chapter covering JavaScript basics and the Chrome developer tools. Not sure if this is at the right level for the audience I expect.

1 Prince XML license for converting said HTML files into PDF. No idea if that’s still the best tool for the job. Or even if my license is current.

1 chapter on building a marginally complex auto complete widget in jQuery and Jasmine. I like this example.

1 copy of most of the book’s JavaScript code in CoffeeScript. Not sure when I thought this was the right idea for the book, beyond an excuse to use CoffeeScript.

1 chapter on jQuery and Ajax.

0 toolchains for generating epub and mobi files. I know I can find this.

1 case of impostor’s syndrome, not helped by rereading the harsh review of Rails Test Prescriptions on Amazon. That was dumb, why would I do that?

1 chapter on using JSON. As far as I can remember, this chapter never went to edit.

3 people who mentioned on Twitter that they’d buy a self-published book. Don’t worry, I won’t hold you to it.

1 plan for writing two or three chapters on Backbone.js.

5 people who reviewed the last version who I feel should get free copies when this comes out. It’s not their fault.

4 viewings of Ze Frank’s “Invocation for Beginnings”

So. Ready to go. Watch this space.

A Brief Announcement About A Book

JavaScript, Me, self promotionNoel RappinComment

So… The JavaScript book that I had contracted to do with Pragmatic will no longer be published by them.

I need to be careful as I write about this. I don’t want to be defensive – I’m proud of the work I did, and I like the book I was working on. But I don’t want to be negative either. Everybody that I worked with at Pragmatic was generous with their time and sincere in their enthusiasm for the project. Sometimes it doesn’t work out, despite the best intentions.

I haven’t spoken about this project publicly in a while because it was so up in the air. And also because I’m not sure what to say about it without sounding whiny or mean. And also because I was afraid of jinxing things, which is obviously less of an issue now.

Since November, the book has been in review and I’ve gone through a few cycles with Pragmatic trying to get things just right. The issues had more to do with the structure and presentation of the material than with the content or writing itself. I’m not completely sure what happened, but I think it’s fair to say that the book I was writing did not match the book they wanted in some way or another.

Anyway, that’s all water under the bridge. I have full rights to the text I’ve already produced. Self-publishing is clearly an option, though the phrase “sunk costs” keeps echoing in my head. It’s hard to resist the irony of starting with a Pragmatic contract and moving to self-publishing after having done it the other way around with Rails Test Prescriptions. I’m hoping to blog more – in addition to being a time sink, not being able to write about this book was kind of getting in my head about any blogging.

Thanks for listening, and watch this space for further announcements. I was excited about this project, and while this is disappointing, I’ll be excited about it again in a few days. Thanks to the people I worked with at Pragmatic for the shot, and thanks to all the people who have been supportive of this project.

Control Your Development Environment And Never Burn Another Hamburger

developmentNoel Rappin4 Comments

Everything I know about the world of fine dining I know from watching Top Chef and from eating at Five Guys. But I do know this: chefs have the concept of mise en place (which does not mean Mice In Place), the idea that everything the chef is going to need to prepare the food is done ahead of time and laid out for easy access. The advantages of a good mise en place include time savings from getting common prep done together, ease of putting together meals once the prep is done, and ease of cleanup. Again, I have no idea what I’m talking about, but it seems like the quality and care a chef puts into their mise en place has a direct and significant effect on how well they are able to do their job.

You probably know where I’m going with this, but I’m becoming increasingly convinced that one of the biggest differences between expert and novice developers is control over the environment and tools they use. I came to this the hard way – for a long time I was horrible about command lines; it was actually something that kept me away from Rails for a little bit. It’s not like I’m Mr. Showoff Bash Script guy these days, but I know what I need to know to make my life easier.

Let me put it another way. Once upon a time I read a lot of human factors kind of materials. I don’t remember where I read it, but I read once about an expert short-order cook at a burger joint. His trick was that he would continually shift the burgers to the right as he cooked them, such that by the time they got to the end of the griddle, they were done. Great trick (though admittedly you need to be a bit of an expert to pull it off). Not only did the cook know when things were done without continually checking, but it was easy to put on a burger that needed a different amount of doneness simply by starting it farther left along the griddle.

What does that mean?

The name of the game in being an expert developer is reducing cognitive load.

Try and list all the things you need to keep in your head to get through your day. Think about how much of that you could offload onto your environment, so that you can see it at a glance, like the short-order cook, and not have to check. What are the repetitive tasks you do that could be made easier by your environment? What can you do so that your attention and short-term memory, which are precious and limited, are focused on the important parts of your problem, and not on remembering the exact syntax of some git command?

This is not about which editor you use, but it is about picking an editor because you like it and understand how to customize it, not because all the cool kids use it. If you are going to use Vim, really learn how to use Vim – it’s not that hard. But if you use Vim, and don’t learn the Vim features that actually make it useful, then it’s not helping you, and it’s probably hurting you. Is Vim (or TextMate, or whatever) making your life easier or not? Vim drives me nuts, but I’ve seen people fly supersonically on it.

I’m getting a little cranky about seeing people’s environments – I’m not normally a big You're Doing It Wrong kind of guy, but, well, I’m getting a little bit cranky.

If you’re doing web development, there are probably three things that you care about at any given time: your text editor, a command line, and a web browser. Every man, woman, and child developer at my fine corporation has a laptop along with a second monitor that is larger than a decent surfboard. If you can’t see at least two of those three things at all times, try and figure out how much time you spend flipping between windows that you could be seeing together. If you are running Vim in a terminal in such a way that you can never see an editor and a command line at the same time, I think you can probably do better.

If you use Git, for the love of God, please put your git branch and status in your shell prompt. RVM status, too. And it’s not hard to add autocompletion of Git branches, too. Or, go all the way to zsh, which has fantastic autocompletion. Again, reducing cognitive load – you can see your environment without checking, you can type complex things without knowing all the syntax.

Use a clipboard history tool. I use Alfred, which is generally awesome as a launcher and stuff, but it’s not the only one.

Use some tool that converts frequently used text into shorter, easier-to-remember snippets. Shell aliases do this; I also use TextExpander, which has the advantage that TextExpander shortcuts are usable in SSH sessions.

The thing is, I don’t know what the most important advice is for you. You need to be aware of what you are doing, and strangely, lower your tolerance to your own pain. What do you do all the time that is harder than it needs to be? Is there a tool that makes it easier or more visible? How hard would it be to create or customize something? Are you the cook who can look at the griddle and know exactly when everything will be done, or are you the guy constantly flipping burgers to check?

iaWriter and iCloud, You Know, In The Cloud

Editors, iPadNoel RappinComment

If I don’t write about iOS editors every few months, then it’s harder for me to justify continuing to mess around with them…

The thing that’s changed my editor use in the last couple of months is iaWriter for Mac and iOS adding iCloud support, even more deeply integrated than Apple’s own applications. iaWriter is the first writing program I use that has moved to the iCloud future (though some games and other programs already sync via iCloud).

At a technical level, the integration is fantastic. In iaWriter, iCloud shows up as a storage location, on par with internal iPad and Dropbox storage. If you are just using the iPad version then there is not much difference between iCloud and Dropbox. iCloud saves automatically, but Dropbox lets you use subfolders. (As a side note, iaWriter has improved its Dropbox sync from “show-stoppingly bad” to “works with a couple of annoyances”, the main annoyance being that it doesn’t remember your place in the Dropbox file hierarchy.)

Where the iCloud thing gets really cool is if you are running iaWriter on both iPad and Mac. On iaWriter Mac, you get a command in the file menu for iCloud, which has a sublisting of all the files iaWriter is managing in iCloud, along with commands to move the current file to or from iCloud.

When you make a change to an iCloud file (on the Mac side, an explicit save; on the iPad side, an automatic local save), it is automatically sent to iCloud and pushed to the other side. No different from Dropbox, you say. True, except that the iCloud sync behaves much better if a file is simultaneously open in both apps. The changes just appear in the other app. You can put the iPad and Mac next to each other and go back and forth between the two with only a very slight pause while they sync up.

I haven’t quite gotten that level of integration from Dropbox. In particular, if Dropbox changes a file behind a Lion-aware app’s back, the original Mac file continues to be displayed, with a filename indicating that it is a deleted version. You then need to close the Mac file and reopen it. I’m not sure I’ve seen an iOS editor that polls Dropbox for changes, though one of the auto-sync ones (Elements, WriteRoom) might.

This may seem esoteric, but since I tend to have several blog posts in progress in open windows on my laptop, I do wind up regularly using the iPad to edit an open file. The iaWriter iCloud sync is noticeably less annoying.

It’s not all sweetness and light, especially if you are a really heavy creator of text files. There is no such thing as a folder in iCloud land, which will eventually become an organizational problem. Worse, there’s an implied lock-in to using iCloud that seems to miss the point of using text files in the first place.

When you move a file to iCloud from the Mac, it moves the file to the hidden iCloud directory, which I think is somewhere in the Library directory. Although the file doesn’t technically vanish from your hard drive – if you can find it, you can open it in another application (for what it’s worth, the Alfred launcher can find the files) – the clear intent is that the file is in a place not to be touched by applications other than iaWriter.

On the iPad side, the situation is worse. If a file is in iaWriter’s iCloud storage, then no other iPad app can see it. (To be fair, it is relatively easy for iaWriter to move a file from iCloud to Dropbox on either device.) I don’t know whether sharing files between applications will be possible when more applications support iCloud, or whether iCloud is strictly sandboxed.

And hey, for a lot of things, this limitation isn’t an issue. If you are using a tool with its own format, then it is less of an issue that other applications can’t see it. Even with something like text, if you aren’t the kind of crazy that needs to open a file in a gajillion different editors, you are probably okay. If you are using text specifically because it’s easy to move around between different programs, and you have a workflow where a file will commonly be touched by different apps, then iCloud is going to get in your way a little.

As for me, the iCloud support has made me use iaWriter more often for blogs and short notes. (Though I still use Textastic for more structured stuff on iPad.) I always liked iaWriter, but for a while it was just really bad at sync compared to other iOS editors. So, despite some quibbles about what happens in iCloud when I have dozens of files that I want to share among different apps, right now, the sync is good enough to make it valuable.

Getting Back to Smalltalk

SmalltalkNoel Rappin11 Comments
This week I wound up trying to put together a sample “real-world” problem suitable for use in an automated web thing aimed at potential new hires. Of course, any actual “real-world” problem would have too many extraneous details to make it suitable given the constraints of the web thing, but trying to create the illusion of real-world-ness was kind of fun.

Since this seemed like sort of a weird way to spend a day or two, I decided to make it weirder by implementing both my solution to the problem and the code to generate test data in a language I don’t use every day. Just for the heck of it, I picked Smalltalk (Specifically, Pharo), the first time I’ve written Smalltalk code in anger for… hmm. Let’s see… I wrote the intro chapter to this in about 2000, and even then it had been a couple of years since I really wrote something in Smalltalk. (For the record, my other choice was Clojure, but I seem to have some weird mental block about getting started on a Clojure project. Someday soon, though.)

Anyway, Smalltalk was interesting both because the environment is so different from anything else going, and because it’s been so long since I used it – I was curious to see whether my reactions to the Smalltalk environment would be any different after all this time. For example, the last time I worked in Smalltalk, SUnit and TDD weren’t a thing.

Overall, successful experience. I was able to actually write the program. It probably took me a little longer than it would have in Ruby, just because I was a bit rusty and the Pharo UI was a little unfamiliar. But it works, and most of the code isn’t bad – it got a little ragged toward the end. Here are a few random notes about the experience:

Language Structure


Whatever the line between a “scripting” language and a “normal” language is, Smalltalk is on the “not a scripting language” side of it. Smalltalk is very much a purist language with a small, consistent syntax, and not a lot of the sugar that you get used to in the Ruby/Python/Perl neighborhoods. Method names tend to be longish, and although the environment encourages short methods, basically there’s more typing in Smalltalk than in the equivalent Ruby.

Smalltalk was created almost completely outside the C/C++/Java family of programming languages, which combined with its pure structure makes it feel somewhat alien. And I say that as somebody who likes Smalltalk and genuinely enjoyed working in the environment.

For example:
* The first element of an array is index 1, not index 0. Which, honestly, makes more sense.
* Smalltalk doesn’t have operator precedence and evaluates strictly left-to-right, so a mathematical expression like 2 + 3 * 4 evaluates to 20, not the 14 you would get in, say, every other programming language ever.
* Smalltalk doesn’t use dot-notation for method calls; instead, the dot is a statement separator. It has keyword arguments, as later borrowed by Objective-C. Strings use single quotes; double-quoted strings are comments.

Then you get other things, like all the boolean and loop logic being managed by the library rather than the syntax. Really, the only things the syntax manages in Smalltalk are order of operations and a very small amount of literal syntax. There’s a rich set of collection classes, and the names and behaviors are just slightly off from what you are used to. It takes some immersion to get the feel.

Workflow


The other really unfamiliar thing about Smalltalk is the environment and workflow. Smalltalk source isn’t normally kept in files. Instead, you run a binary image of all the code in the Smalltalk environment. If I’m remembering correctly, a Smalltalker working on multiple projects would have a separate image for each project. (There are mechanisms for teams working together, but I don’t think they are very standard, and I’m not all that familiar with them.)

The thing about Smalltalk is that you are always inside the environment. The idea, for example, that Ruby is super-flexible in what it allows you to do at run-time is a trivial point in Smalltalk. Everything is at run-time, because the phone call is coming from inside the house, so to speak. Of course you can create classes and methods at run-time. If you couldn’t, the whole system would be unworkable.

The image contains your code, the entire Smalltalk system, and the state of all the objects therein. Smalltalk typically also has mechanisms for export and import of source, and for internal version control. I think it’s fair to say that dealing with the image is at best counterintuitive when you are used to dealing with text editors and files.

Your main interface with the Smalltalk system is the code browser, which allows you to view any class and method in the system. One method at a time. Quick Quiz – what’s the Smalltalk keyword to define a method? Trick question. Smalltalk doesn’t have one. Smalltalk just knows that the things you type into the code browser are methods, and the first line of a method is its name. Code browsers are very neat, but again, using them effectively is a skill – a big step is realizing you can have more than one of them open at a time.

What’s interesting to me are the places where the Smalltalk environment is still ahead of my regular code environments, and where it’s behind. On the plus side:

  • When you save a Smalltalk method, the code browser verifies that it’s syntactically correct before allowing the save. If you are calling a method or using a variable that doesn’t exist, you have to okay it – in some circumstances, you even have the ability to create the method or variable as part of the save process.

  • The browser has easy access to all old versions of a method that you are looking at.

  • It’s very easy to see places that might call the method you are writing.

  • The browser also has some powerful refactoring functionality built in.

  • Integration is amazing, since everything is running. To give one example, the code browser knows which methods are SUnit tests, and they display in the browser with a red/yellow/green/gray indicator of their current state (error/failure/success/not run). Individual tests can easily be run separately in the browser, and since the environment is already loaded, the tests run in an eye blink.

  • On a related note, the debugger is famously awesome. From a breakpoint, you can see the state of all variables, edit values, edit code, all nice and easy.

  • You can also create workspaces, which are just blank code windows where you can execute arbitrary snippets – like an IRB session, but a bit more flexible.


But there are some missing pieces:

  • The code editor itself isn’t as powerful as you are used to if you’ve been using Vim or TextMate or, hell, anything. I’m a big “typing is not the bottleneck” kind of guy, but still, I would have loved some macros or snippets or something.

  • The one-method on display per code browser limitation is, well, limiting. You can have multiple code browsers open, but that takes up a lot of real estate. Tabbed browsers, split windows, that kind of thing would be kind of nice.

  • Intellectually I know that it’s really hard to lose changes in a Smalltalk image, but it still feels fragile to me not to have files readable by an external tool. UPDATE: I could have been clearer about this. Smalltalk doesn't save changes in the image as such; instead, it records all your changes and lets you play them back. I was trivially able to recover about 30 minutes of work after a Pharo crash, simply because all the changes were recorded and easily placed back in the image. The Smalltalk setup is effective, but it feels different from the way I tend to think about projects.


One interesting feature of the browser based editing environment is that it’s very easy to create a new method that is based on an existing method. You don’t even have to cut and paste, just browse to the existing method, change its name and save. Smalltalk will create a new method based on the new name.

This is very nice from a getting-code-into-the-system standpoint, but it seems to have the side effect of encouraging more duplication than I might otherwise be comfortable with. The effect is compounded by the fact that you can only see one method at a time in the browser, making it difficult to scan for duplication.

What’s not getting across here is how smooth the workflow in the small is when you really get the rhythm going. It’s an environment that really suits a “make a small change, evaluate the change” process. In some ways, it’s not surprising that the test-first movement would have gotten serious momentum in Smalltalk, test-first is just a formalization of the way a good Smalltalk programmer interacts with the system. Plus, I’m a purist in so many ways, and even though the code takes a little getting used to, I like the way it turns out.

So, successful test. Would try again. I still want to try something with Seaside one of these days just to see what that’s like.

Things that Should Be Metaphors, Part 1

UncategorizedNoel RappinComment

I present to you two things that sound like they should be metaphors for project issues. Except for two things:

  1. They are real

  2. I have no idea what they might be metaphors for

The Library Book Priority Conundrum

I read a lot. In general, I purchase books I’m very excited about reading, and books that I’m less sure about I check out from my local library. The problem is that I always have more books than time to read, and the library books have the nasty property that the library expects them back sooner or later.

The practical effect is that, aside from a few books that are “drop everything” when they come out, I’m forever reading library books first, before the books I’ve purchased, because the library books have to go back home, whereas the books I own I get to keep.

Since I’ve already said that I’m usually more excited about the books I’ve purchased and get to keep, this is clearly not optimal. But even knowing this problem, a book I actually purchased (Ganymede by Cherie Priest) has been waiting behind a bunch of library books (That Is All by John Hodgman and The Kingdom of Gods by N. K. Jemisin, among others). Okay, those books are great too. But still…

The Last Clementine Problem

I go to the grocery store and buy a bag of clementines, because they are yummy and easier to peel than grapefruits. Every bag has one or two clementines that are clearly a little less good than the others, so naturally I save those for last. However, as the clementines get yuckier, they become less desirable than anything else in my kitchen, and I stop eating them entirely – the pretzels are more enticing. I don’t get new clementines, because I already have clementines, and I don’t throw out the bad clementines until they are well and truly no longer food. The upshot is that I can go weeks without clementines, even though I like them.