Noel Rappin Writes Here

Agile

Velocity, Agile Estimation, And Trust

Agile · Noel Rappin · 1 Comment

Charles Max Wood posted this on Twitter:

After trying like six times to fit my response in a tweet, I gave up and remembered that I had this web site where once upon a time I wrote things that were more than 140 characters.

Disclaimer: I have no idea how Charles’ team is working and what might have been said in planning meetings or anything else.

That said, here’s what I think.

The goal of agile project management is accepting the inevitability of change through continual feedback, continual improvement, and a realistic sense of progress. In an agile project, things that are hard when done in bulk — testing, integration, estimating — are done continually, in smaller pieces, to reduce complexity and risk.

Yes, in a functional agile project, velocity is set by team pace. Ideally, you had a meeting at the beginning of the sprint where you estimated velocity based on past performance, and determined the stories you hope to get done based on point estimates of the stories and that velocity. (Point estimates, remember, are measures of complexity, not time…)
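Just to make that arithmetic concrete, here's a minimal Ruby sketch with made-up numbers (this is the general mechanism, not a claim about how any particular team plans): velocity is an average of recent sprints, and the sprint commitment is whatever stories fit under it.

    # Illustrative numbers only: velocity is the average of recent sprints, and
    # the sprint commitment is the set of stories whose points fit under it.
    past_sprint_points = [21, 18, 23]
    velocity = past_sprint_points.sum / past_sprint_points.size   # => 20

    backlog = {
      "checkout flow"  => 8,
      "search filters" => 5,
      "admin report"   => 5,
      "email copy"     => 3,
      "audit log"      => 5
    }

    committed = []
    points_planned = 0
    backlog.each do |story, points|
      next if points_planned + points > velocity
      committed << story
      points_planned += points
    end

    committed       # => ["checkout flow", "search filters", "admin report"]
    points_planned  # => 18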

It’s possible to be behind in a sprint, in some sense, if it doesn’t look like you are going to complete the agreed-upon stories. For example, a story may turn out to be way more complex than estimated. Or a bug may have turned up. (Although agile point estimates are robust against bugs as long as bug fixes remain a roughly consistent percentage of your time.)

Remedies for this problem might include changing the point value of a story when new information turns up, splitting the story, lowering velocity estimates going forward, or moving a story to the next sprint. Retrospective meetings are a great place to figure out why a story was mis-estimated.

However, trying to assess the state of the project in the middle of a sprint can be a little misleading. There can be a kind of optical illusion if a lot of stories start at the same time, where progress is being made but not booked because the stories aren’t finished. (Sometimes this means you need more granular stories.) Often, it’s helpful to organize the daily standup by outstanding story to give visibility to how stories are moving.

Charles is right that one of the points of Agile project management is not to work by wishful thinking of when you hope things will get done. If a story is more complex than we thought, then it just is, and you need to adjust to that by dealing with the software triangle — change scope, change time, change budget.

Granted #1: This requires a fair amount of trust between the team and the management that when the devs say something is taking longer than expected, that’s assumed to come from a place of expertise, not ignorance. You build trust by being right, and admitting it when you are not right.

Granted #2: Velocity and story points are robust against misestimation, as long as you are consistent. What will happen is that your velocity will settle at a point that factors in the bias. However, if specific stories or types of stories are continually taking more time than expected, it’s worth trying to figure out why. If, for one reason or another, velocity isn’t being allowed to settle to that new state, that’s where the wishful thinking comes in. (Although if your actual velocity is continually dropping, that indicates a problem, too.)

Granted #3: Sometimes there really are business needs for particular deadlines. That doesn’t change the laws of software physics, but it does determine what a reasonable response from the team looks like.

Okay, that’s more than 600 words, and I don’t know if I’ve answered the question.

  • In an agile project, velocity should be related to past performance, not hoped-for results.
  • If velocity is being determined top-down, for example by guessing the total story points in the project and dividing by the number of sprints before the deadline, that’s not really in the spirit of the thing.
  • It’s possible to miss expectations for a sprint, but the appropriate response to that is usually not “type faster”.
  • This all critically depends on trust between the project managers and engineers.

Wow, it’s been a while since I blasted out a blog post that quickly. Felt good. Hope this helps.

September 7, 2010: On Writing Bad Code

Agile · Noel Rappin · 1 Comment
I've been working on my tutorial session for WindyCityRails (tickets still available...). The session is about how to test when you are working in a legacy app that doesn't have tests.

Naturally, that requires some legacy code for the attendees to work with during the tutorial. My own worst Rails messes are either back in the 1.2.x time frame or I don't have access to them any more. I don't have the right to distribute legacy code that I have inherited, and most of those people wouldn't want me calling their code a junkpile in public.

So I've been writing a faux-legacy application, or at least enough of one to make the needed points in the tutorial. The idea stumped me for a bit because the app needs to be both complex enough to plausibly show the issues in legacy testing and simple enough so that setup and changes can actually happen in a short workshop.

Eventually, I hit on the following guidelines for writing deliberately bad code:


  • Aggressively cut corners on features that aren't essential to the presentation.

  • Don't look anything up and don't use gems or plugins, not least to avoid setup issues for attendees.

  • Make no effort to put things in the "right" place.

  • Work quickly, without design, and never go back to clean up a mess.

  • Every so often, do something a little less elegantly than normal. Oh, and some metaprogrammy Ruby things were off limits, on the assumption that I was writing as somebody who didn't know Ruby that well.



And I think I got some nicely tangled code rather quickly.

At this point I think I'm supposed to say one of two things:


  • Boy, I sure was able to write that code fast without those pesky rules! I guess that TDD stuff isn't that great after all.

  • Boy, I sure wrote nasty code without those pesky rules! I guess that TDD stuff really is great after all.



I think I believe the second point more than the first. It's hard to look at this code and not see some major pain coming in the future. That said, you have to acknowledge the emotional power of seeming to write fast code.

Because I did go pretty fast here, and I got a satisfying amount of app built in a relatively short number of hours, with a near-continuous novelty burst in my head from seeing new things appear in the browser.

The temptation is to say, "I was deliberately writing ugly code. If I just stopped doing that, then boy, I could go really fast without TDD and still keep the bugs under control." And the thing is, that'll be true for a while. Maybe a long time, if you're pretty good and working by yourself.

This is related to the very seductive idea that your project doesn't need to use Agile methods because you can control your changes up front. In both cases, you go quickly mostly by ignoring the inevitability of anything changing in the future (who cares how tangled the code is if nobody ever has to modify it...)

In the end, though, change is coming. So the trick to working in a legacy environment is taking code that was never written to allow change and making it more amenable to change.
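For what it's worth, a common first move on code like that (not necessarily what the tutorial covers) is a characterization test: pin down what the code does today before trying to change it. A rough sketch, with a hypothetical InvoiceTotaler standing in for the legacy class:

    require "test/unit"
    require_relative "invoice_totaler"  # hypothetical legacy class

    class InvoiceTotalerCharacterizationTest < Test::Unit::TestCase
      # The assertion records whatever the legacy code currently returns; run
      # the code once, write down the answer, and future changes can't
      # silently break it.
      def test_total_matches_current_behavior
        invoice = InvoiceTotaler.new([100, 250, 75])
        assert_equal 425, invoice.total
      end
    end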

XP or not XP, that is the question. The answer is XP.

Agile · Noel Rappin
While I'm commemorating anniversaries this summer, I just remembered another one.

Ten years ago this summer was when I first read the original Kent Beck "white book", Extreme Programming Explained, which is one of only a couple of books that completely changed the way I approach whatever it is that I do. Considering that I've spent most of the last ten years practicing, advocating for, and writing about XP and Agile development, it's not an overstatement to say that Kent's book, and the ideas about how to be a professional programmer, changed my professional life.

Here's the setup.

I spent most of the winter, spring, and early summer of 2000 working on a largish web project for a Fortune 500 company that you've definitely heard of, but which made a regular practice of scaring the hell out of any vendors who might want to ever advertise the fact of their association.

This company was large enough to have a technical conference/fair for the IT departments of its various subsidiaries. I remember attending as a vendor, and being in a presentation from the company's legal department that prominently featured the comment, "you vendors have the silly idea that your liability is limited to the size of your contract with us".

Anyway... the project was a collaboration among several subsidiaries that had never worked together before. Not only was this the largest project I had been on by far, but there were also about a half dozen customer companies that all had different ideas about how the project should run -- it wasn't unusual to have conference calls with about three dozen client representatives to about five of us. I always kind of thought that the reason our tiny web company was picked for the project was to act as an impartial referee.

Unsurprisingly, we made every mistake there was. We picked tools that were not adequate to the task, and although our application design wasn't bad, regressions were a constant feature of our staging deployments. The customer kept changing the goals of the site, not to mention tiny details, and we had no way to organize or manage the changes. We had an ongoing argument with one of the internal teams who wanted us to change our entire database schema. Our own management grabbed half the development team to completely redo the interface less than a week before the deadline.

We did deliver on time, but only after my one-and-only real death march to date, over a month of 70-80 hour weeks. (At which point the customer froze the site and didn't deploy, but only looked at it internally for two weeks. Never did learn why.)

In the wake of the initial delivery, my company did some retrospective sessions. I had heard about XP a little at that point, and was looking for some way to vocalize my feeling that the project had been a structural failure, even though we had more or less delivered on time. Which is how I came to read the book in the first place.

Unsurprisingly, XP's emphasis on sustainability, clear descriptions of priority and responsibility, testing, refactoring, and quality code resonated with me pretty strongly. It seemed like a rebuttal to every pain point we had in the project. I remember reading directly from the book at a retrospective meeting, prompting some wisecrack from a manager about my academic bookishness. Guilty, I guess.

With one thing and another, it was several years before I was in a position to impose a full Agile/XP project structure on a project. Until then, what I was able to do was test -- even in places where that wasn't part of the team's normal process. The fact that test-first seemed so helpful made the whole Agile structure seem more plausible, and when I was finally able to put an Agile project in place, I was ready to make all new mistakes.

July 16, 2010: Why Not Four?

Agile · Noel Rappin · 3 Comments
Not much time this morning, not many accumulated links. So just a little bit today.

Book Status



Still writing the new parts of the legacy coding chapter; last night it was a little bit on removing dependencies. I think only one more section to go before that's a complete draft. Next up, I think, is making the code samples Rails 3 compatible.

One quick thing



Sometimes you don't realize how weird something is until you try to explain it. I had this conversation last week about Agile planning meetings, with another person who is not familiar with programming details:

Me: And then we estimate the story, and we give it points, we can make it 1, 2, 3, or 5 points.

Other person: Why not 4?

Me: We use Fibonacci numbers, 4 isn't a Fibonacci number.

Other person: Why do you do that?

Me: [Long pause] Well, um, that's a good question.

And scene.

I do actually understand, in theory, why you might want to limit the range of options for story points. You want to make the estimates deliberately coarse so as not to get a false sense that they are more accurate than they are. I also understand why it's not worth debating whether a story is a 10 or an 11.
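A coarse scale, in other words, means that whatever raw number is in your head gets snapped to the nearest allowed value, so there's nothing to argue about between 10 and 11. A tiny illustration (the allowed values are whatever your team agrees on, not an official list):

    ALLOWED_POINTS = [1, 2, 3, 5, 8, 13]

    # Snap a gut-feel number to the nearest value on the team's scale.
    def snap_to_scale(raw_estimate)
      ALLOWED_POINTS.min_by { |points| (points - raw_estimate).abs }
    end

    snap_to_scale(4)   # => 3 (ties go to the smaller value here)
    snap_to_scale(11)  # => 13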

That said, Fibonacci points seem to be something that got popular when I wasn't paying attention, and it kind of seems to me that, theory aside, 4 is a useful concept in practice for, you know, marking a story that is bigger than 3 points but not quite 5 points. On our team, we have at least one story in that slot every iteration, and somebody (okay, me) always bemoans that the story should be a 4.

Those of you who want to join this little "Why Not Four?" crusade should start chanting "More Four! More Four!" or "What do we want? Four. When do we want it? Three to five days from now" at your iteration planning game events. Together, we can see a brave new world without an awkward pause between three and five.

(Have I beaten the joke into the ground yet?)

Pair Programming, or Two of a Kind

Agile · Noel Rappin · 5 Comments
Repeating yourself is clearly an occupational hazard of blogging. I've been trying to put together a post about pair programming for a while. Somewhat randomly, I found myself wandering through my blog archives at Pathfinder, and I came across this little essay, which was the last thing I wrote at Pathfinder before, shall we say, Events Transpired, so I probably blocked it a bit. I definitely blocked the responses that were on the Pathfinder blog the following week, because, well, Events Were Transpiring.

Anyway, at the time, I was not pairing particularly consistently at Pathfinder (small team sizes being the main culprit), but I have spent most of the last eight months at a client site that does aspire to constant pairing. I find that my basic opinion -- pairing is valuable to a point, but you can over-pair to the team's detriment -- hasn't changed, but it has sharpened a little. A few observations:

First off, a real problem I have when writing about pair programming is that a significant percentage of the people I've been pairing with recently are likely to read this. I don't mean anything by it -- it isn't you it's me. I apologize in advance, and all that.


  • No programming technique is always appropriate, and that goes for pair programming, too. Some tasks, especially the rote, boring yak-shaving ones, don't need a pair, and the team should have leeway to make that decision.


  • Have a plan with your pair. The Pomodoro 25 minutes on/ 5 minutes off thing is effective, but you need to agree to it (or some variation) when you start. However, a project with a long build time or test suite run time really kills any kind of Pomodoro-like schedule.


  • Pair programming is part of the agile bet -- in my experience it's almost always a short term time loss with the hope of better performance over the long haul. I know there are studies that say that two programmers pairing are faster than two programmers working alone. All I can say is that isn't consistently my subjective experience. It is common to work in a pair that is short-term faster than I would be on my own, but I don't always feel like the pair is faster than both programmers would be on their own separately. Again, that's in the short term.


  • The bet is that the investment will pay off over time in lower defect rates and better knowledge transfer. And I do think that's reasonable, but it's by no means assured.


  • I think the best long-term goal of pairing is to keep knowledge from being siloed within the team. Anytime you are forced to say something like, "that's a search problem, Fred has to work on it", that becomes a potential bottleneck for your team. So rotating pairs such that everybody gets to do everything is an important part of what makes pairing work.


  • That said, people external to the team love knowing that if it's a search problem, then Fred needs to work on it, so they hate it when pairs rotate, because they don't know who to talk to about issues having to do with a particular feature. If you are rotating pairs frequently, people outside the team need to have some way of discovering who the current contact is for a story or set of stories.


  • You'll often hear the highly irritating argument that pair programming works because it's less likely that one programmer will slack if there's a pair. Maybe. My experience, both direct and through observation, is that the effect works in reverse, too, which is to say that the slacking pair member can drag down a working member. (I'm totally not going to say whether I'm the slacking pair or the working pair.)


  • There's a strong personality component here -- some people really love pair programming, some people genuinely don't. (As for me, I'm mostly neutral, but there's definitely a point where I just want to put the headphones on and grind away on a problem...)


  • Logistics are a major pain in pairing. You pay for misalignment of time. One person gets in a half-hour earlier. They eat lunch at different times. One person gets pulled into a meeting (a particular problem if you are pairing senior and junior members); hell, one person goes to the bathroom more. Every disconnect gets felt when a team is pairing. Mandating specific hours and eliminating work at home is emphatically not in keeping with the spirit of agile as I understand it, so what's the solution?


  • Pairing, by definition, leads to a loud work environment -- pairs need to talk. Some people will definitely find this to be a problem, and I think it can work against concentrating on a problem.


  • Ultimately, I think it's a problem if pairing is the only way that team members are allowed to work, both for logistical reasons and for the loud-work-environment reasons above. Which leads to the radical mush-mouth position of "pair when it's appropriate, don't when it's not", which is a really annoying conclusion to come to at the end of a blog post like this.


July 9, 2010: Beta 4 Released and More

Agile, Dropbox, Kent Beck, Mocha, Typing, iPad · Noel Rappin

Update



Beta 4 of Rails Test Prescriptions is now available, with two new chapters, one on Rcov and coverage in general, and one on writing better tests. Buy here.

While I'm in self-promotion mode, the book is also available for pre-order at Amazon and other exciting locations.

More Promotion



And while I'm here, I should mention that Obtiva has updated their training schedule. Obtiva offers a 4-day Boot Camp for learning Rails and TDD that will next be offered August 2nd through 5th. There's a brand-new Advanced Rails class that will be August 30th through September 2nd, and a version of the Boot Camp that meets weekly on Mondays starting September 20th.

Obtiva also offers private versions of our courses. See http://www.obtiva.com for more info.

Links



For some time, Joe Ferris had a fork of the Mocha mock-object tool that allowed mock spies, which let you separate the definition of a stub from the expectation of how the stubbed method will be called. This makes it easier to use mocked objects in a single-assertion test structure, and also makes mock tests easier to read. Anyway, Ferris has split the spy part out into a separate gem called Bourne, which should be easier to install alongside the standard Mocha gem. Yay.
(via Larkware)
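If you haven't seen the spy style, here's roughly the difference, sketched from memory of how Mocha and Bourne read (Mailer and Order are hypothetical stand-ins, and the exact matcher syntax may differ from the gem's README):

    # spec sketch: classic expectation vs. spy-style assertion
    describe "completing an order" do
      let(:order) { Order.new }  # Order and Mailer are hypothetical

      it "notifies the customer (classic Mocha: expectation declared up front)" do
        Mailer.expects(:order_confirmation).with(order)
        order.complete!
      end

      it "notifies the customer (spy style: stub first, assert afterwards)" do
        Mailer.stubs(:order_confirmation)
        order.complete!
        Mailer.should have_received(:order_confirmation).with(order)
      end
    end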

Corey Haines would like us all to learn to type better, and toward that end is starting a community event for next week (July 12-18) for some collective support for learning. For as much as "Typing is not the Bottleneck", which is true but not necessarily germane here, my experience is that programmers who are not great typists tend to be much more concerned with making their code concise at the expense of readability. Anything that eliminates friction between your head and the code is basically a good thing. (For the record, I mostly touch type, but I could stand to get faster and more accurate.)

Kent Beck has finished up his series on survey results about practices with a post on commit frequency and one on general practices. The general practices one is interesting, showing that about 50% of all respondents claim TDD, while about 70% claim to use iterations.

All of which sort of reminds me... a few years ago, when I worked at a major telecom company with a reputation for major waterfall development, I attended the "agile" track of an in-house seminar on software techniques in use at the company. Everybody in attendance claimed to be agile, but only about 1/3 (by show of hands) were doing any kind of automated testing. It was, to say the least, a weird flavor of Agile. But I guess you do what you can and what works for your team.

Finally, I've been yammering here about iPad text editing and Dropbox for a few weeks, so I should mention Droptext, which seems to be the first iOS editor that saves to and reads from Dropbox. It's rather minimal at the moment (and kind of crashy), but I suspect the mere fact that it reads Dropbox files means that it's automatically useful.



June 16, 2010: What Shoulda We Do?

Agile, Git, PDF, RSpec, Relevance, Shoulda · Noel Rappin

Top Story



Thoughtbot talks about their plans for Shoulda moving forward. The big takeaway is that, while the context library will be separated out for use in Test::Unit, both Shoulda style and Shoulda effort will be focused on RSpec integration.

I have some complicated thoughts about this one. I'm thrilled that Shoulda is being maintained -- it's a tool I've used a lot, and I was starting to get worried. And they should move their open source tool in any direction they want. But, of course, I can't help thinking about how this affects me, and having Shoulda be primarily a complement to, rather than an alternative to, RSpec has an interesting effect on the book I'm in the middle of writing.

It's so funny how these things change. It's been about eighteen months since I started writing what would become Rails Test Prescriptions. At the time, I was not a big fan of RSpec, largely because I didn't like the heavily mocked style that seemed to go along with it. With the emergence of Shoulda and the factory tools, Test::Unit had gained basic functional parity with RSpec. It also seemed like Shoulda/Test::Unit was really starting to gain community mindshare.
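For anyone who hasn't seen it, the Shoulda side of that parity looks roughly like this: contexts and should blocks layered on top of a plain Test::Unit test case. (A sketch only; User here is a hypothetical model.)

    require "test/unit"
    require "shoulda"

    class UserTest < Test::Unit::TestCase
      context "a newly created user" do
        setup do
          @user = User.new(:name => "Noel")
        end

        should "not be an admin by default" do
          assert !@user.admin?
        end
      end
    end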

So, I wrote the book intending it to be a basically tool-agnostic guide to Rails testing, but with most of the examples in Test::Unit on the grounds that a) Test::Unit is part of the core Ruby and Rails stack, so it's always around, b) it's what the core team uses, c) I personally was using Test::Unit, and d) RSpec already had a book, so it seemed prudent for many reasons to find my own space. Those of you who bought the Lulu version will remember that it has longish chapters on Shoulda and RSpec, treating them more-or-less equally as alternative mature test frameworks.

In the interim, tools have ebbed and flowed. Cucumber came out, with very strong RSpec support (especially at first), starting a bit of a trend of new tools supporting RSpec over Test::Unit. The single-assertion style from Shoulda seemed to gain some traction among RSpec users. I started actually using RSpec more, and liking it.

At the same time, some things haven't changed. I'd still like the book to be framework agnostic to the extent possible. Test::Unit is still the Rails default, and is probably still easier for somebody new to testing to understand. But I think I have some re-writing in my future.

Oh yeah, other links



Martin Fowler on what makes a good team room for an agile project.

Speaking of RSpec 2, here's one of what I hope will be many posts from David Chelimsky about a new RSpec feature, metadata and filtering.

Gitbox is a new Mac OS X interface for Git. It currently seems less full-featured than the GitX fork that I use, but it does seem like a nice start.

Relevance announces PDFKit, a new library for PDF generation, along the lines of PrinceXML. I don't see this as a replacement for Prawn at all, though. There will always be cases where direct generation makes more sense. And there will always be cases where conversion makes sense. I think doing a book with Prawn would have been challenging, for example.
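For contrast, direct generation with Prawn looks something like this minimal sketch (the contents are made up):

    require "prawn"

    Prawn::Document.generate("hello.pdf") do
      text "Rails Test Prescriptions", :size => 24, :style => :bold
      move_down 12
      text "Generated directly in Ruby, with no HTML-to-PDF conversion step."
    end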

Finally, here's a simple little survey of the Ruby community. I note parenthetically that RSpec has 42% of the vote for Preferred Testing Framework, with Shoulda and Test::Unit having a combined 31%.




May 17: The Happy Streets of Wilmette

Agile, Consulting, Gems, Onion, RailsRx, Rubinius · Noel Rappin

Book Status



The Cucumber chapter is nearing final edit for beta. I cleared up a handful of errata, of which probably the most serious was a mistake on how to get the fixture data to pass the first test in the book. I'm hoping to get Beta 3 out later this week, and then I have to decide which direction for beta 4.

Oh, and the book: still on sale.

Agile Working



A few links about being a Rails Developer:

Jake Scruggs asks if you are really doing Agile development.

Mike Gunderloy makes a list of the services and tools that he finds useful in his development business. A great post if you are a small consulting firm.

Harri Kauhanen posts about how to sell Ruby on Rails projects. I've heard most of these in sales meetings over the last few years, though my impression was that it was getting better.

And Then...



The Rubinius Ruby interpreter reached a 1.0 milestone on Friday, as noted by nearly every Ruby person on Twitter. So far, I haven't used it, and my RVM install of it failed.

In a less interesting story, Ruby Gems 1.3.7 was also released.

And Finally...



This Onion article about David Simon doing a series in Wilmette, IL, cracked me up, but mostly because I grew up there.

May 13, 2010: The Rules of Agile Estimation

Agile, Apple, Estimates, JRuby, Kent Beck, Newton, Potterverse, RailsConf, iPad · Noel Rappin · 1 Comment

Top Story


JRuby 1.5 is out. Highlights include improved Rails 3 support, better support for Windows, better FFI support, better startup time (yay!) and a lot of other tweaks and fixes.

Book Update


Still Cucumbering, hope to finish today.

The book is still on sale, of course. And I'd still love to see more comments in the forum.

I'll be talking at Chicago Ruby on June 1, exact topic TBD (let me know if you have a preference), but I'm leaning toward talking about how to avoid test problems and write good, robust tests.

And Then...


As an unrepentant old Newton fan, I loved this compare-and-contrast of a recent iPad ad with an old Newton ad. The Newton, flaws and all, was way ahead of the market back then.

If you are going to RailsConf, first of all have fun and wish I could be there. Second, if you are wondering about the difference between the two Rails 3 tutorials, wonder no more.

Kent Beck is publishing some old pieces again, including one about how the original XP book made the mistake of equating "the team" with "the developers".

Fred and George Weasley are marketing experts.

And Finally


The Rules of Agile Estimation:

1. Estimates are always wrong

2. If you think spending more time on estimates is a good idea, see rule 1.

3. On average, an experienced developer is not going to improve on his or her gut reaction by thinking it over.

4. Team estimates are important; one person may see something that everybody else missed. Just keep it quick.

5. People are much better at estimating how big tasks are relative to each other than at estimating the absolute time a task takes.

6. Separate the problem into smaller chunks; the more estimates you make, the better the chance that the law of averages will help you (a quick simulation follows this list).

7. Decomposition into roughly equal sized tasks is pretty much the whole ballgame.
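Here's the quick simulation promised in rule 6, with made-up numbers: estimation errors on many small, independent tasks partially cancel, so the total comes out proportionally closer to the truth than any single estimate tends to be.

    # Thirty 3-point tasks, each actually off by up to +/-50% in either direction.
    estimates = Array.new(30) { 3 }
    actuals   = estimates.map { |e| e * (0.5 + rand) }

    total_error = (actuals.sum - estimates.sum.to_f).abs / estimates.sum
    puts "total off by #{(total_error * 100).round}%"   # usually single digits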

April 15, 2010: The Library of Congress Recommends the Following Tweets

Agile, Bundler, Library of Congress, Pair Programming, Pragmatic, Ruby, Twitter, iPad · Noel Rappin

Top Story



As part of the Chirp conference, Twitter and the Library of Congress jointly announced that the Library will be storing Twitter's entire public archive.

I'm sure you're expecting an easy joke about how many sandwiches the LoC now knows about in their archive, or about how scholarly papers about the archive will be limited to 140 characters. (Or, for a more academic joke, limited to 140 authors...) All that aside, though, I think archiving and making all this available is pretty neat.

Book Status



Still messing with Capybara and Webrat. Somewhat hampered by the fact that most of the usage of these tools is via Cucumber, so there's not a lot of documentation on them as standalone tools (particularly Capybara). Muddling through, though.

Also, the prags announced the beta release of the fourth edition of Agile Web Development with Rails.

Tab Dump



Rails Dispatch has their second post, which is another overview of Bundler and library management.

Two Ruby tip articles caught my eye. This one, by Alan Skorkin, is a complete overview of serializing objects with Ruby. And here's a small tip from Ruby Quick Tips about ensuring that an incoming option hash has only certain keys. I think the benefit of doing that is as much in revealing intention as anything else.
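The option-hash idea amounts to something like this sketch (the method and its keys are made up); the raise is as much documentation of intent as error handling:

    def create_widget(options = {})
      allowed = [:name, :color, :size]
      unknown = options.keys - allowed
      raise ArgumentError, "unknown options: #{unknown.inspect}" unless unknown.empty?
      # ... actually build the widget here ...
    end

    create_widget(:name => "sprocket", :color => "blue")    # fine
    # create_widget(:name => "sprocket", :colour => "blue") # => ArgumentError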

In our Agile section, we've got Nicolas Alpi reviewing what it's like to pair program. I have a half-written post on pairing floating around MarsEdit here... I'm still ambivalent about it even after largely pairing for the last six months. I think some of that is a personality thing, but I also think the idea that pairs do a better job of staying on track has been a little oversold -- it happens, but there are other offsetting issues. I really should finish that other post.

Also, part two of Kent Beck unedited and screencasting about testing. I admit I haven't watched this part yet.


And Finally,



Don't take your iPad to Israel (insert holy tablet joke here); as of a couple of days ago, the Israeli government has been blocking them at customs pending approval of their WiFi and security standards. In the meantime, those people who tried are being charged for every day their iPads sit in customs, and have also agreed to pay a 16% Value Added Tax under Israeli law.

April 13, 2010: iAd, youAd, weAll Ad

Agile, Apple, Bundler, JRuby, Ruby, Yehuda, iPad, standup · Noel Rappin

Top Story



iPads. Lots of them popping up in and around work. Probably some more coherent impressions coming later.

Wait, once again, Twitter has a big announcement after I start writing this. This time, they are going to start placing ads in the Twitter stream in various ways to be announced today. My quick reactions: a) I long suspected this day was coming, b) if the ads in clients are any guide, they aren't particularly burdensome, c) implementation details will decide how irritating this is.

Book Status



Still working on Webrat and Capybara. Still waiting for a cover. Somewhat doubtful that the beta will happen this week, but I haven't been told that for sure.

Tab Dump



Charles Nutter puts out an open call for help with the pure Java port of the Nokogiri XML parser for use with JRuby.

Confused by ==, equal?, and === in Ruby? You won't be after this article.
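The short version, for reference (plain Ruby, nothing Rails-specific):

    a = "agile"
    b = "agile"

    a == b         # => true  -- value equality
    a.equal?(b)    # => false -- object identity; two different String objects
    a.equal?(a)    # => true

    # === is what case/when uses behind the scenes:
    String === a      # => true  (class membership)
    (1..5) === 3      # => true  (range inclusion)
    /agi/ === "agile" # => true  (regexp match)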

Hey, it's another big-time Agile founder: Ward Cunningham being interviewed. Pull quote: "When you're doing it well it feels a little plodding, you're not racing ahead like you might do on your own. But what happens is that it never slows down." Can I get that on a T-Shirt?

Yehuda Katz is turning his attention to more Bundler documentation, with two articles that went up as I started typing this. The first one lays out the problems bundler tries to solve, and the second talks a bit more about problems specifying the order of require statements.

Rails Rx Standup: April 12, 2010

Agile, Apple, Git, RSpec, This American Life, Twitter, standup, testing · Noel Rappin

Top Story



For a while, it looked like the top story was going to be Apple's new developer Rule 3.3.1, described here by John Gruber. More on that in a second.

But the real top story is the news that Twitter has bought Tweetie, intending to rebrand it as Twitter for iPhone, and dropping the price to a low, low, free. Eventually, it will be the core of Twitter for iPad. Wow.

Tweetie is probably the only case where I actually prefer the iPhone experience to the desktop experience, but I'd also be very sad if Tweetie for Mac was orphaned. (Not least because I just bought the MacHeist bundle in part as a way to get the Tweetie Mac beta sooner...). Later update: Tweetie developer Loren Brichter said on the MacHeist forum that the next Tweetie/Mac beta will come out.

I actually suspect that at least some of the existing iPhone Twitter clients will be able to continue -- there's clearly room in the ecosystem for apps that have much different opinions than Tweetie. It depends on how aggressive Twitter is planning to be. Dropping Tweetie's price to free strikes me as aggressive, although it may just be that the Twitter team is averse to direct ways of making money.

As for the Apple story, it's a familiar space. Apple does something -- in this case, blocking apps not originally written in C, C++, or Objective-C -- that might have a reasonable user or branding component (keeping the iPhone platform free of least-common-denominator cross-platform apps), but takes it just too far for users or developers to be comfortable with it. That's, of course, an understatement, as a lot of developers are really angry. Gruber's point about the Kindle apps is good (and was later cited by Steve Jobs), but on the whole, I think this is a bit too far for Apple. Or maybe I'm just upset that the door seems to have been slammed on MacRuby apps for the iPhone ever being feasible.

Book Update



Still working on the Webrat/Capybara chapter. Describing two tools that are so similar is really challenging for me -- when there's a difference, it's hard to keep clear which tool is under discussion.

Also, it looks likely that I'll have an article in an upcoming issue of the Pragmatic Magazine. This will probably be based on material from the book, but edited to fit the magazine article format. Probably either factory tools or mocks. Or maybe Ajax testing. Haven't decided yet.

Tab Dump



Don't think I've mentioned this yet, but here is a cool presentation of RSpec tricks. Some of these don't work in RSpec 2, though.

While we're on the presentation kick, here's a nice intro to Git from James Edward Gray.

If you've ever tried to deploy Agile in a hostile environment, then the recent This American Life episode about the General Motors/Toyota NUMMI plant will resonate for you.

And Finally



A comparison of a boatload of Ruby test frameworks, being used in Iron Ruby to test some .NET code. I admit that I was not familiar with all the frameworks used here.

The Agile Bet

Agile, Rants · Noel Rappin
And now some testimony from Brother Nicely-Nicely Johnson, I mean, James Turner, from O'Reilly Radar:


The Cult of Scrum:
If Agile is the teachings of Jesus, Scrum is every abuse ever perpetrated in his name. In many ways, Scrum as practiced in most companies today is the antithesis of Agile, a heavy, dogmatic methodology that blindly follows a checklist of "best practices" that some consultant convinced the management to follow.

Endless retrospectives and sprint planning sessions don't mean squat if the stakeholders never attend them, and too many allegedly Agile projects end up looking a lot like Waterfall projects in the end. If companies won't really buy into the idea that you can't control all three variables at once, calling your process Agile won't do anything but drive your engineers nuts.


If there's one thing I've learned from my several attempts to implement Agile teams inside large, non-agile corporations it's that you can have total developer buy-in, test-driven development, daily standups, continuous integration, story points, and iterations, and still not have anything that feels like a functioning Agile project.

Look at it this way. There are two fundamental ways to deal with change on a software project:


  • You can try, up front, to perfectly analyze the project such that there will be no need to change requirements during development. This approach leads to various artifacts associated with waterfall processes, such as design documents, UML diagrams, and the like.

  • You can try to set up a structure that assumes that change will happen, and will allow you to lower the impact of changes when they come. This approach leads to Agile practices such as short iterations, TDD, and continuous integration.



Agile, then, is a bet. A bet that you can skip a substantial percentage of the time being spent trying to pre-analyze and start coding much sooner in the process. The bet says that if you use the right coding practices and the right process management, your project will deliver more value more rapidly and more sustainably than it would have with the pre-analysis approach. That's what "responding to change over following a plan" means in the agile manifesto.

At first glance, though, the two options don't look mutually exclusive. And up front analysis is so seductive. Sure, we can do all our great agile stuff, but what's the harm in also writing a UML diagram up front? We can be test-driven, but why not also have a rigorous acceptance process? The more stakeholders there are, and the further they are from the development team, the more pressure there is to write things down, get buy-in up front, and generally increase the ceremony around change management. Somebody on every team always thinks they can eliminate future change by getting the right plan up front. Sometimes it's me.

And look, it might work for you. Every project is different. But there are at least some issues to look out for when you mix waterfall and agile processes together.

You're hedging your bet. In essence, an Agile project spreads its design time across the length of the project rather than having most of it up front. Agile, by design, accepts what might be somewhat slower initial development practices (pairing, TDD), because they lead to a lower cost of change over time. Again, having some design can help, but there comes a point where adding back in the time-intensive design tasks gets you the worst of both worlds -- the initial development costs of Agile and the high cost of change associated with a waterfall project.

Worse, the existence of a lot of documentation surrounding changes tends to make the agile project management less effective. The more documentation and the more people that have to sign off on change requests, the less flexible the project is. The less flexible the project, the harder it is to realign priorities in response to development and business realities as they change. It's hard to reorganize priorities in a planning game meeting if three levels of management have to sign off on scope changes.


The Point of it All

Agile, Over Proclaiming, Projects · Noel Rappin · 1 Comment
In true blog form, a declarative statement:

Hear ye, hear ye! Any so-called Agile team that ever tries to translate "points" into actual units of time is presumed dysfunctional until proven otherwise.

You've done it, I've done it, we've all done it. Doesn't make it a good idea.

In the spirit of my last post, allow me to over-explain.

A typical Agile project handles estimation by splitting the project up into smallish user stories and assigning each one a point value, typically between one and five, though different teams have different standards. During each iteration, typically on the order of two weeks, you count up how many points you accomplished, and that becomes your velocity, the estimate of how many points you can do in an iteration. Then that velocity, applied to the points remaining, gives you an estimate of how many iterations are left in your project. (I oversimplify, of course, but that's the idea.)

At first glance it seems ridiculous to try to estimate time without using time units, but in fact the idea is genius. It actually doesn't matter how accurate you are, as long as you are consistent. Always underestimate things by 50%? You'll do fewer points in the first iteration than you'd expect, but once you establish a baseline, it doesn't matter; you'll still do about the same number of points each iteration. Don't know whether an estimate includes testing or refactoring? Don't want to estimate bugs? That's fine. As long as bug fixing or testing or whatever takes a reasonably consistent amount of time from week to week, the point method will still work.
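Here's a worked version of that consistency argument, with made-up numbers:

    # The team consistently underestimates by half, so every "point"
    # secretly costs two units of real effort.
    real_effort_per_point       = 2.0
    real_capacity_per_iteration = 40.0

    velocity = real_capacity_per_iteration / real_effort_per_point  # => 20.0 points

    remaining_points = 120
    iterations_left  = remaining_points / velocity                  # => 6.0

    # Every individual estimate was "wrong", but because the error was
    # consistent, the schedule prediction still comes out right.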

You've replaced a very hard task -- estimating exactly how long every programming job will take -- with a much easier and more accurate one: estimating the relative size of some of your programming tasks.

There are only a couple of problems with the point/velocity method. One is that you don't get good estimates until the project is a few weeks underway. Of course, with a more traditional method you don't really have a good estimate either, but you do have the illusion of one.

Also, there are at least two classes of people who really, really don't like point estimates: people who pay you by the hour, and people with Gantt charts.

People who pay by the hour are a legitimate issue -- they have a reasonable need to understand what their costs are going to be. Interfacing an agile project with a client can be complicated, but it's also not the issue I want to write about today.

People with Gantt charts, by which I mean anybody outside the technical team who is unwilling to accept point-based estimates, can be a real menace to an agile team. In my experience, this starts with the innocent-sounding question, "What does a point stand for?" I maintain that the correct answer to this question is, "I don't know, but we do ten of them a week." If you are feeling generous, you can add, "and we have 75 of them left to do."

Sometimes that will be enough; either the managerial type will trust that you know what you are doing as long as you hit your deadlines, or the person will do the math themselves and fit it into their own process as needed. For instance, my various managers at Motorola were generally okay with me using points internally on my team as long as I was able to provide them a completion date for an entire release, which was not a problem.

Other places will require an estimate in "real" hours or days for reasons of varying legitimacy. You might get asked to provide estimates for each story in real hours or days rather than points, or somebody might come up with a conversion factor between points and days. The origin of an agile story point is the idea of an "ideal day" of development without distraction, so the thought that a simple conversion between days and points might exist has a certain intuitive appeal.

If you are doing this conversion at the behest of people outside the development team, you have at least two problems right now.

First off, you've lost the simplicity of the point estimating task, and you're back to the complexity of absolute estimating. I know it doesn't seem that way, and I suppose in theory it's possible to use hours in a point-style estimate. In my experience, though, when external forces want hours, they want exact hour estimates for each task.

The value of points is that they carry a lot of implicit assumptions about consistency: time spent on bugs, time spent on non-developer tasks. That works in the points world because it's relatively abstract. But talking about estimates in hours creates a powerful expectation that you need to separate that time back out into each individual task. So even though hours and points seem similar, in practice it's very difficult to work with them similarly.

Secondly, if you work in points and then translate that to days, you now have two reasonably similar sets of estimates floating around. Confusion is inevitable. Motorola used to distinguish between "unloaded" and "loaded" estimates, where an unloaded estimate was ideal developer time and loaded estimates made the somewhat optimistic assumption that 1/3 of a developer's time would be spent on non-developer tasks. What happened? We were forever asking each other whether an estimate was loaded or unloaded, and forever forgetting which it was. I'm pretty sure that some estimates were converted to loaded multiple times.
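The loading arithmetic, spelled out with the 1/3 figure from that story: if a third of a developer's time goes to non-development work, only two-thirds of a calendar day is "ideal" time, so a loaded estimate is the unloaded one divided by 2/3.

    unloaded_days = 10.0
    loaded_days   = unloaded_days / (1 - 1.0 / 3)   # => 15.0

    # Carry both numbers around and you're one forgotten conversion away
    # from "loading" the 15 again and calling the same work 22.5 days.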

All this is indicative of the larger problem that you probably have if you have been asked to start converting points to hours. Ideally, in an agile project, it's the technology team's responsibility to say how much a feature costs, and the business team's responsibility to say how valuable the feature is. The combination of the two factors is what makes up the agile project plan. You don't want the business team determining the estimates any more than you want the technology team determining business value.

If the business team is driving the format and structure of the estimates, then it's the project equivalent of a code smell. It's not impossible that your project could still be running smoothly, but it is strongly suggestive that the business side is looking to own the cost side of the equation. In which case you potentially have a very large problem, the most likely manifestation of which is strong pressure to claim lower estimates to provide the illusion of being on schedule.

Which brings us, at long last, back to the beginning. An agile project that is converting points into real units of time is potentially dysfunctional because it's a sign that a) the advantages of the points/velocity system aren't being fully utilized or b) the business team is having too much influence on the creation of estimates, or possibly worse. It's possible that things are still okay, but the presumption is that this estimation style does not encourage good agile practice.