Thursday, August 19, 2010

How Deep are your Pockets?

This is just a quick micro-blog introducing the Software ROI picture that I draw for executives to help them understand how deep their capital investment commitments need to be on a Waterfall project versus an Agile one.  It's nothing revolutionary and you've probably seen it before.  The message is really quite simple.  Here's the picture:


The longer you wait to deliver value and receive money, the longer you keep spending (investing) and the deeper that investment gets.  Instead of trying to release the entire system at once, we identify smaller features that can be released early to establish a return on investment sooner in the lifecycle.  The smaller you can make your releases, the less total investment you need to succeed.  Think about the current project you are working on.  How big do your pockets need to be for success?  Can you identify earlier release points that let you work with less investment and risk to the project?
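To make that concrete, here's a quick back-of-the-envelope sketch in Ruby.  The numbers are completely hypothetical (a 12-month build burning $50K a month, with each released slice returning $25K a month) - they're not from any real project - but they show how much shallower your pockets can be when value starts flowing early:

BURN    = 50_000   # hypothetical monthly spend while the team is building
REVENUE = 25_000   # hypothetical monthly return from each released slice

# How deep do the pockets need to be?  Walk the cumulative cash position
# month by month and report the lowest point it reaches.
def deepest_pocket(release_months, horizon = 24)
  cash = 0
  low  = 0
  (1..horizon).each do |month|
    cash -= BURN if month <= 12                                # still paying the team
    release_months.each { |r| cash += REVENUE if month >= r }  # each shipped slice earns
    low = cash if cash < low
  end
  -low
end

puts deepest_pocket([13])       # one big-bang release after month 12 => $600,000
puts deepest_pocket([5, 9, 13]) # three smaller releases              => $300,000

In this toy example the three-release plan needs roughly half the capital of the big-bang plan, simply because the early slices start paying for the later ones.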

Thursday, January 7, 2010

Portable Personal Kanban

I organize my tasks using a personal kanban and since I move around the building a lot, I like to have my personal kanban with me.  So rather than trucking around a Franklin-Covey day planner all day, I truck around a manila folder that is my Portable Personal Kanban.


Open it up and Shazam!  My kanban is right there in front of me to keep me focused on Getting Things Done!

 

For note taking, I've recently been using a Livescribe tablet.  It lets me annotate my writing with voice notes and then syncs to my laptop, where I can transcribe the voice notes into text notes or whatever.  I've been playing around with attaching voice notes to cards on my personal kanban when necessary.  Not sure if it's just a gimmick I'm enjoying or not, but it keeps me from packing a Franklin-Covey around all day.

How do *YOU* do personal kanban?  Please leave a comment - I'd love to hear more personal kanban stories.  For more on Personal Kanban, take a look at Jim Benson's Personal Kanban website.

Wednesday, January 6, 2010

Automate Your AJAX Web Acceptance Tests with a Domain Language

I've noticed a recurring surge of advice from my Lean/Agile colleagues that goes something like this:
"Don't Automate Acceptance Tests through the User Interface. These tests tend to be brittle and they are expensive to build and maintain."
The underlying thinking is that the user interface tends to fluctuate a lot during development, and this frequently breaks automated tests.  It doesn't have to be this way, though; it's largely a matter of how you approach the problem.  The two most common approaches I've seen to automating web acceptance tests on Agile teams are:
  • Use a Capture/Replay tool to record user events into a test script that can be replayed as an acceptance test.
  • Use unit testing / BDD tools like JUnit or RSpec to test user interface components in isolation, and design a "thin" view layer into the user interface that you bypass with automated acceptance tests.
The first approach, using a capture/replay tool, rarely lasts for long on a project.  In fact, I think there is probably a direct correlation between the amount of money an organization spends on a capture/replay tool and the time it takes to abandon that tool.  When you're spending tens of thousands of dollars a year on tool licenses and paying several people to maintain the growing suite of brittle tests, it's hard to admit you made a mistake.  Capture/Replay testing feels good at first, but it's a false sense of security that eventually falls apart - often at the times you need it the most, later in the project.

The "thin layer" approach, on the other hand, has been sound advice given the tools we've had available at the time.  I've been giving this advice myself for over a decade.  However, like the saying goes, all good things must come to an end. Sometimes it's a decade of sound, pragmatic advice. 


The problem I've always had with the thin-layer approach is that my experience and respectable studies both tell me to expect over 50% of the code in an interactive application to be in the user interface.  Compound that fact with the growing trend to build complex, AJAX Web User Interfaces and that can be a lot of code that's not getting acceptance tested before a release.  I never liked bypassing all that user interface code anyway.  I really want to test the application through the UI.

Over the past couple of years, as I've experimented with Domain Specific Languages for testing web applications, my mind has changed.  A third, pragmatic option has emerged and it's time to challenge the assumption that we can't effectively automate testing through the user interface.

With a little abstraction help from a DSL, we can effectively automate acceptance tests through the UI and deploy software with confidence that the *ENTIRE* application has been automatically tested prior to a release.  That's the hypothesis in this article.  Let's explore it further.

What makes UI Testing Brittle? 
Your classical capture/replay tool creates tests that depend on too much detail, and that detailed information gets duplicated quickly. For example, consider RAD/Track, an open-source kanban tool that I created. In RAD/Track you can drag cards across swimlanes on a kanban board. If you used a capture/replay tool to record the dragging of a card from one swimlane to another, you might end up with something like this:
mousedown 120, 138
mousedrag_to 230, 344
mouseup
After recording dozens of scenarios that involve dragging cards around on boards, you'd end up with that kind of detailed information duplicated into a lot of tests. Tests that contain this level of detail are bound to become a nuisance. For example, what happens when we change the position of a board on a web page and proudly re-run our suite of capture/replay tests?  Wham! All of a sudden, those beautiful, "so-easy-an-intern-could-create-them" tests are crashing like the lead pack in the last 60 seconds of a NASCAR race!  The "mousedown 120, 138" commands now need to become "mousedown 135, 144", and they're not sitting in one, easy-to-change location.  Your fancy capture/replay tool has been faithfully duplicating these nasty details across hundreds of man-hours of automated test development.

The problem isn't testing through the UI. The problem is writing tests that violate the Dependency Inversion Principle.  Capture/Replay tests create dependencies on details and when those details change, breakage happens.

Enter the Dragon: Domain Specific Language to the Rescue
So how can we automate tests that will effectively exercise the entire application through the user interface?  We need to create an abstraction layer that exposes user tasks and hides the details of the user interface implementation.  In our world, that layer leverages a Domain Specific Language (DSL) -- a language that describes how a system should behave under use.  Let's take a look at how it works using Cucumber with Selenium, but you could just as easily be doing this with FitNesse for the test framework and/or Watir for the web browser automation.
Let's go back to our example. Rather than describing the act of moving a card in pixel coordinates with something like
mousedown 120, 138
mousedrag_to 230, 344
mouseup
we can write a DSL statement like this:
When I drag card "13" to the "Done" swimlane
That's called a "step" in Cucumber and we write a little code behind it that will run when we execute that step in a test. The code might look like this:

When /I drag card "(.*)" to the "(.*)" swimlane/ do |card, swimlane|
    # Locate the card's <li> element by id and drop it on the target swimlane
    @browser.drag_and_drop_to_object "//li[@id='card_#{card}']", swimlane
end

Now what happens when we change the user interface layout? Nothing. And if there *were* any changes to be made, you'd make them in one simple location and all your tests would keep passing!
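If more and more steps eventually need to know how cards and swimlanes are rendered, one way to keep that "one simple location" property is to pull the locator knowledge into a shared helper module.  Here's a sketch of the same step definition reworked that way - the module name and the swimlane markup it assumes are hypothetical, not the actual RAD/Track step code:

# All knowledge of the page structure lives here; step definitions only
# speak the domain language.  (Locator details below are hypothetical.)
module KanbanLocators
  def card_locator(card_id)
    "//li[@id='card_#{card_id}']"
  end

  def swimlane_locator(name)
    "//ul[@title='#{name}']"    # assumes each swimlane <ul> carries its name as a title
  end
end
World(KanbanLocators)           # Cucumber mixes these helpers into every step definition

When /I drag card "(.*)" to the "(.*)" swimlane/ do |card, swimlane|
  @browser.drag_and_drop_to_object card_locator(card), swimlane_locator(swimlane)
end

Now a page-layout change means touching this one module; every scenario that drags cards keeps reading, and passing, exactly as before.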

Just like creating SOLID code, building automated acceptance tests requires that we keep it simple and adhere to basic design principles. Done properly, you can effectively test your entire application through the GUI - you just have to think about it at a user task level, not a point-and-click level.

Monday, December 14, 2009

radtrack: Kanban Goodness meets Open-Source

First, a little History ...
Like all software craftsmen, I usually have a few personal programming projects going on. My current interest is to continue increasing my TDD skills with Ruby/Rails, Scala/Lift and JavaScript/jQuery. I find the current world of building highly scalable web applications fascinating and have become hooked on the mojo of building high-performance, scalable solutions with code that is extremely simple and concise.

radtrack is a project that keeps me challenged in these technologies.  It's also been a project that lets me experiment with new ways to visualize and promote lean project workflow with distributed teams. Yes, yes, I know what you are about to say: "Distributed teams?  That's not Agile."  You're right.  So what?  I probably don't need the reminder.  I'd be the first to bust your chops and question why you'd suggest such tomfoolery on one of my projects.

The reality is that there are still many large companies out there who are striving to become agile.  They're trying real hard, and I can't be the one going around confusing 'em with a singular focus on co-location or spoiling their day with reminders of its observable benefits.

What I *can* do, in the interest of making their world a little bit brighter, is offer up an effective collaboration/kanban tool for free.  A tool that some companies might charge an arm and a leg for.  Arms and legs are essential to the bureaucracy.  They're expensive too.  We can't have that.

Although I use radtrack for my own software projects, there is really nothing software-specific about it.  Here are some high-level screenshots of what I've got running at radtrack.com today.

The Kanban Board
If you've seen some of the other fine tools that are currently available, there's nothing revolutionary going on here -- except maybe mine's prettier. :-)  

It's your basic board of swim lanes that represent your value stream (workflow).  Each swimlane has cards which can be dragged to other swimlanes, flowing left to right. 
 


Cards have Tasks
When a card on the kanban board is supposed to represent business value, more than one person will likely have tasks to perform in order to make that card flow through the system.  In radtrack, we call each of those units of individual work a "task", and a card can have many tasks.  In the picture below, you'll see the popup dialog for a card and the tasks for that card.  You can edit the tasks directly in the popup card.




Hover to Get a Quick View of a Card's Details











 
Personal Dashboard

This is my favorite view.  It gives you a picture of your personal work queues.  You have the option to limit the view to only those tasks in the current project, or you can choose to see *ALL* of your tasks from every project combined into one view.
My idea with this view is to encourage you to limit your active tasks, in the spirit of Getting Things Done and Personal Kanban.

The Unstarted Tasks list shows tasks that you have signed up for but have not yet started.  If you see several tasks in this list, it might be an indication that you are hoarding tasks.  You don't want to do this.  Nobody wants to be known as the task hoarder.  Hoarding tasks creates bottlenecks and limits other people's opportunity to help out when they are freed up and looking for more work.  radtrack makes it easy to spot and eliminate task hoarding on your projects.


The Active Tasks list shows your current work queue.  Too many tasks in this list is an indication that you may not be as productive as you could be - you're probably spending too much time task-switching.  You need to quit starting tasks and start finishing tasks!  Strive to see only a single task in progress at a time during the day, keep the number of active tasks very small, and be very afraid when you have a lot of them.

The Finished Tasks list is the one list that you *DO* want to see get long.  The longer this list, the better you'll feel. Keep your focus on finishing tasks, not starting tasks. 



Clicking on any task in this view will popup the card with that task as shown in the following screenshot:



Keep an eye on your teammates
There is a Team Members tab which gives you a quick view of all members in the current project and a list of currently active tasks that are assigned to each user.  Here's what that tab looks like:




Some Whacky Future Ideas
I have a lot of futuristic ideas on where radtrack can go, but I'd better stay focused on some more basic features, like putting WIP limits on swimlanes.  If you want to stay abreast of what's happening with radtrack, I'm afraid you'll have to do it the old-fashioned way: follow the radtrack twitter feed.

 

Share the Love
Of course, this all comes with great tidings of yuletide joy.

I'm going to make all of this radtrack goodness available free to the public.  With the help of Karman Blake, an excellent Ruby on Rails developer, we're putting some finishing touches on radtrack and planning to release it under an open-source license at github.com by the end of this month.

Even though the radtrack source code is freely available, I will continue to offer limited-free and paid radtrack project hosting; however, my goal is to grow an open-source kanban planning tool that is always useful and free to the masses.

Follow my blog here, or you can follow me on twitter.