Thursday, January 7, 2010

Portable Personal Kanban

I organize my tasks using a personal kanban, and since I move around the building a lot, I like to have my personal kanban with me. So rather than trucking around a Franklin-Covey day planner all day, I truck around a manila folder that is my Portable Personal Kanban.


Open it up and Shazam!  My kanban is right there in front of me to keep me focused on Getting Things Done!


For note taking, I've recently been using a Livescribe tablet.  It lets me annotate my writing with voice notes and then syncs everything to my laptop, where I can transcribe the voice notes into text notes or whatever.  I've been playing around with attaching voice notes to cards on my personal kanban when necessary.  Not sure if it's just a gimmick I'm enjoying or not, but it keeps me from packing a Franklin-Covey around all day.

How do *YOU* do personal kanban?  Please leave a comment - I'd love to hear more personal kanban stories.  For more on Personal Kanban, take a look at Jim Benson's Personal Kanban website.

Wednesday, January 6, 2010

Automate Your AJAX Web Acceptance Tests with a Domain Language

I've noticed a recurring surge of advice from my Lean/Agile colleagues that goes something like this:
"Don't Automate Acceptance Tests through the User Interface. These tests tend to be brittle and they are expensive to build and maintain."
The underlying thinking is that the user interface tends to fluctuate a lot during development, and that churn frequently breaks automated tests.  It doesn't have to be this way, though; it's largely a matter of how you approach the problem.  The two most common approaches I've seen to automating web acceptance tests on Agile teams are:
  • Use a Capture/Replay tool to record user events into a test script that can be replayed as an acceptance test.
  • Use unit testing / BDD tools like JUnit or RSpec to test user interface components in isolation, and design a "thin" view layer into the user interface that you bypass with automated acceptance tests (a rough sketch of this option follows below).
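To make that second option concrete, here's a minimal sketch of the thin-layer idea, assuming a hypothetical BoardPresenter class that a Rails view would do nothing but render. The names are made up for illustration; the point is that the test drives application logic directly and never touches HTML or a browser:

require 'rspec'

# A hypothetical presenter that holds the logic a "thin" view would otherwise contain.
class BoardPresenter
    def initialize(board)
        @board = board
    end

    # Cards grouped by swimlane, ready for the view to render verbatim.
    def cards_by_swimlane
        @board.cards.group_by { |card| card.swimlane }
    end
end

# The spec exercises the presenter directly, bypassing the user interface entirely.
describe BoardPresenter do
    it "groups cards by swimlane" do
        board = double("board", :cards => [
            double("card", :swimlane => "Doing"),
            double("card", :swimlane => "Done")])
        presenter = BoardPresenter.new(board)
        presenter.cards_by_swimlane.keys.sort.should == ["Doing", "Done"]
    end
end

Of course, everything the presenter hands to the real view - the HTML, the JavaScript, the drag-and-drop behavior - goes untested by specs like this, and that gap is exactly what this article is about.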
The first approach, using a capture/replay tool, rarely lasts long on a project.  In fact, I think there is probably a direct correlation between the amount of money an organization spends on a capture/replay tool and the time it takes to abandon that tool.  When you're spending tens of thousands of dollars a year on tool licenses and paying several people to maintain the growing suite of brittle tests, it's hard to see that you made a mistake.  Capture/Replay testing feels good at first, but it's a false sense of security that eventually falls apart - often at the times you need it the most, later in the project.

The "thin layer" approach, on the other hand, has been sound advice given the tools we've had available.  I've been giving this advice myself for over a decade.  However, as the saying goes, all good things must come to an end - even a decade of sound, pragmatic advice.


The problem I've always had with the thin-layer approach is that both my experience and respectable studies tell me to expect over 50% of the code in an interactive application to live in the user interface.  Compound that with the growing trend toward complex AJAX web user interfaces, and that's a lot of code that isn't getting acceptance tested before a release.  I never liked bypassing all that user interface code anyway.  I really want to test the application through the UI.

Over the past couple of years, as I've experimented with Domain Specific Languages for testing web applications, my mind has changed.  A third, pragmatic option has emerged, and it's time to challenge the assumption that we can't effectively automate testing through the user interface.

With a little abstraction help from a DSL, we can effectively automate acceptance tests through the UI and deploy software with confidence that the *ENTIRE* application has been automatically tested prior to a release.  That's the hypothesis in this article.  Let's explore it further.

What Makes UI Testing Brittle?
Your classic capture/replay tool creates tests that depend on too much detail, and that detailed information gets duplicated quickly. For example, consider RAD/Track, an open source kanban tool that I created. In RAD/Track you can drag cards across swimlanes on a kanban board. If you used a capture/replay tool to record the dragging of a card from one swimlane to another, you might end up with something like this:
mousedown 120, 138
mousedrag_to 230, 344
mouseup
After recording dozens of scenarios that involve dragging cards around on boards, you'd end up with that kind of detailed information duplicated across a lot of tests. Tests that contain this level of detail are bound to become a nuisance. For example, what happens when we change the position of a board on a web page and proudly re-run our suite of capture/replay tests?  Wham! All of a sudden, those beautiful, "so-easy-an-intern-could-create-them" tests are crashing like the lead pack in the last 60 seconds of a NASCAR race!  The "mousedown 120, 138" commands now need to become "mousedown 135, 144", and they're not sitting in one easy-to-change location.  Your fancy capture/replay tool has been faithfully duplicating these nasty details across hundreds of man-hours of automated test development.

The problem isn't testing through the UI. The problem is writing tests that violate the Dependency Inversion Principle.  Capture/Replay tests create dependencies on details and when those details change, breakage happens.

Enter the Dragon.  Domain Specific Language to the Rescue
So how can we automate tests that exercise the entire application through the user interface?  We need to create an abstraction layer that exposes user tasks and hides the details of the user interface implementation.  In our world, that layer leverages a Domain Specific Language (DSL) -- a language that describes how a system should behave under use.  Let's take a look at how it works using Cucumber with Selenium, but you could just as easily do this with FitNesse as the test framework and/or Watir for the web browser automation.
Let's go back to our example. Rather than describing the act of moving a card in pixel coordinates with something like
mousedown 120, 138
mousedrag_to 230, 344
mouseup
we can write a DSL statement like this:
When I drag card "13" to the "Done" swimlane
That's called a "step" in Cucumber and we write a little code behind it that will run when we execute that step in a test. The code might look like this:

When /I drag card "(.*)" to the "(.*)" swimlane/ do |card, swimlane|
    # Drag the card's list item and drop it on the target swimlane element.
    @browser.drag_and_drop_to_object "//li[@id='card_#{card}']", swimlane
end
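
For context, here's roughly how that step reads inside a full Cucumber scenario. The feature wording, the card number, and the swimlane names below are made up for illustration - the point is that the scenario speaks in user tasks rather than pixels:

Feature: Moving cards on a kanban board
    Scenario: Finishing a work item
        Given a board with card "13" in the "Doing" swimlane
        When I drag card "13" to the "Done" swimlane
        Then I should see card "13" in the "Done" swimlane

The @browser object has to come from somewhere, too. One way to wire it up - a sketch, assuming the selenium-client gem and a Selenium server running on localhost, so your setup may differ - is a pair of hooks in features/support/env.rb:

require 'selenium/client'

Before do
    # Open a fresh browser session against the app under test for each scenario.
    @browser = Selenium::Client::Driver.new(
        :host => "localhost",
        :port => 4444,
        :browser => "*firefox",
        :url => "http://localhost:3000",
        :timeout_in_second => 60)
    @browser.start_new_browser_session
end

After do
    # Shut the browser down so scenarios stay independent.
    @browser.close_current_browser_session
end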

Now what happens when we change the user interface layout? Nothing. And if there *were* any changes to be made, you'd make them in one simple location and all your tests would keep passing!
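To make that concrete, suppose a front-end refactoring renames the card elements - say the id scheme becomes kanban_card_13 instead of card_13 (a hypothetical change). The only thing that moves is the locator inside that single step definition; every scenario that drags cards stays exactly as written:

When /I drag card "(.*)" to the "(.*)" swimlane/ do |card, swimlane|
    # Only the locator changes to follow the new markup; the scenarios don't.
    @browser.drag_and_drop_to_object "//li[@id='kanban_card_#{card}']", swimlane
end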

Just like creating SOLID code, building automated acceptance tests requires that we keep it simple and adhere to basic design principles. Done properly, you can effectively test your entire application through the GUI - you just have to think about it at a user task level, not a point-and-click level.