May 27, 2015

Intensive TDD Workshop, London, Sat July 25th

Just a quick hoorah to announce the next great-value weekend TDD workshop, which will be in South Wimbledon on Saturday July 25th. (Corrected from July 18th in previous version of this post, BTW.)

You can find out more and grab your extremely reasonably-priced ticket here.



May 26, 2015

Ditch The Backlog and Start Iterating!

Goals.

Yes, those.

The old ways have a habit of sneaking back in through the back door. And so it is with Agile Software Development that, despite all our protestations about being iterative and open to feedback and change, The Big Plan™ found its way back cunningly disguised as the backlog.

The reality is that most Agile teams are not iterating their designs in rapid feedback cycles, but instead are incrementally working their way through a plan for a solution that was cooked up right at the start by what we used to call "requirements analysts" - generally speaking, people who talk to the customer to find out what they want and draw up a specification.

The backlog on many teams doesn't change much. And this is because the goal of each small frequent delivery is not to try out the software and see how it can be made better in the next delivery, but to test each delivery to check that it conforms to The Big Plan™.

The box-ticking exercise of user acceptance testing usually just asks "is that what we agreed?" The software isn't tested for real, by real users, working on real problems to ask "is that what we really need?"

And so it is that many Agile teams still get that skip-ful of feedback when the "iterated" solution finds its way into the real world. To all intents and purposes, that's a Big Bang release. Y'know, the kind we thought we'd stopped doing.

Better to get that kind of feedback throughout. Better also to shift focus from The Big Plan™ to actual end user goals, not a list of system features that someone believed would meet those goals (if they ever thought to ask what those goals were).

Imagine we're working for an airline. We turn up for work and are presented with a backlog of feature requests for an online check-in facility. Dutifully, we work our way through this backlog, agreeing acceptance tests with our customer and ticking them off one by one. Eventually, the system goes live. At which point we discover that, because all of the flights we operate are long-haul, and therefore almost all our passengers need to check in baggage for the hold, we've had almost zero impact on the time it takes to check in.

What we could have been doing, instead of working our way through The Big Plan™, is working our way towards reducing check-in times. If that was the original goal, then that's what we should have been iterating towards.

This is actually a founding principle of Agile, before it was called "Agile". Tom Gilb's ideas about evolutionary project management, dating back to the late 1980s, clearly highlight the need to focus on goals, not plans. Each iteration needs to bring us closer to the goal, and we need to test and measure progress not by software delivered against plan - I mean, damn, there was a major clue right there! - but by progress towards reaching our goals.

Instead of putting all our faith in the online check-in solution that was presented to us, we could have been focusing on those baggage check-in queues and streamlining them. The solution might not even involve software. In which case we swallow our pride and acknowledge we don't have the answer, instead of wasting a big chunk of time and money pretending we do.

This requires a different relationship with the customer, where developers like us are just one part of a cross-discipline team tasked with solving problems that may or may not involve software. We should be incentivised to write software that really achieves something, and to be prepared to change direction when we learn that we're on the wrong track.

The first step in that journey is to ditch the backlog. Put a match to The Big Plan™.

Instead of plans, have goals; a handful of headline requirements that really are requirements - to reduce check-in times, to detect and treat heart disease sooner, to save 1p on the cost of manufacturing a widget, to get 20% more children in Africa through school, or whatever the goal is that someone thinks is valuable enough to justify the kind of money it costs to create and maintain software. We ain't done until the goal's been achieved at least to some extent, or until we've abandoned the goal.

That requires developers to play an integral part in a wider - and probably longer-term - game. We are not actors who turn up and say the lines someone else wrote on a set someone else built. We write our lines and build our sets and then act them out to an audience whose feedback should determine what happens next in the story.




May 13, 2015

Working Backwards From Test Assertions




The second of six special screencasts to celebrate 6 years since Codemanship was formed.



April 25, 2015

Non-Functional Tests Can Help Avoid Over-Engineering (And Under-Engineering)

Building on the topic of how we tackle non-functional requirements like code quality, I'm reminded of those times when my team has evolved an architecture whose rationale the developers taking over from us didn't understand.

More than once, I've seen software and systems scrapped and new teams start again from scratch because they felt the existing solution was "over-engineered".

Then, months later, someone on the new team reports back to me that, over time, their design has had to necessarily evolve into something similar to what they scrapped.

In these situations it can be tricky: a lot of software really is over-engineered and a simpler solution would be possible (and desirable in the long term).

But how do we tell? How can we know that the design is the simplest thing that a team could have done?

For that, I think, we need to look at how we'd know that software was functionally over-complicated, and see if we can project any lessons we learn onto non-functional complexity.

A good indicator of whether code is really needed is to remove it and see if any acceptance tests fail. You'd be surprised how many features and branches in code find their way in there without the customer asking for them. This is especially true when teams don't practice test-driven development. Developers make stuff up.

Surely the same goes for the non-functional stuff? If I could simplify the design, and my non-functional tests still pass, then it's probable that the current design is over-engineered. But in order to do that, we'd need a set of explicit non-functional tests. And most teams don't have those. Which is why designs can so easily get over-engineered.
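
To illustrate what I mean by an explicit non-functional test, here's a minimal sketch in Java with JUnit - CheckoutService and Order are hypothetical stand-ins, and the 2-second budget is plucked out of the air:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class CheckoutThroughputTest {

    // An explicit non-functional test. If a simplified design still
    // passes it, the complexity we removed probably wasn't needed;
    // if it starts failing, the "over-engineering" was load-bearing.
    @Test
    public void processesOneThousandOrdersWithinTwoSeconds() {
        CheckoutService checkout = new CheckoutService();

        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            checkout.process(new Order("SKU-" + i, 1));
        }
        long elapsedMillis = System.currentTimeMillis() - start;

        assertTrue("Took " + elapsedMillis + "ms", elapsedMillis < 2000);
    }
}
```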

Just a thought.


Continuous Inspection Screencast

It's been quite a while since I did a screencast. Here's a new one about Continuous Inspection, which is a thing. (Oh yes.)





April 23, 2015

The Big Giant Object Oriented Analysis & Design Blog Post

Having been thwarted in my plan to speak at CraftConf in Budapest this week, I find myself with the luxury of time to blog about a topic that comes up again and again.

Object oriented analysis and design (OOA/D) is not a trendy subject these days. Arguably, it had its heyday in the late 1990s, when UML dinosaurs ruled the Earth. But the fact remains that most software being written today is, at least to some extent, object oriented. And it's therefore necessary and wise to be able to organise our code effectively in an object oriented way. (Read my old, old paper about OOA/D to give yourself the historical context on this.)

Developers, on the whole, seem to really struggle getting from functional requirements (expressed, for example, as acceptance tests; the 21st century equivalent of Use Case scenarios) to a basis for an object oriented design that will satisfy those requirements.

There's a simple thought process to it; so simple that I see a lot of developers struggling mainly to take the leap of faith that it really can be that simple.

The thought process can be best outlined with a series of questions:

1. Who will be using this software, and what will they be using it to do?

2. How will they interact with the software in order to do what they want?

3. What are the outcomes of each user interaction?

4. What knowledge is involved in doing this work?

5. Which objects have the knowledge necessary to do each piece of the work?

6. How will these objects co-ordinate the work between themselves?

In Extreme Programming, we answer the first question with User Stories like the one below.



A video library member wants to donate a DVD to the library. That is who is using the software, and what they will be using it to do.

In XP, we take a test-driven approach to design. So when we sit down with the customer (in this case, the video library member) to flesh out this requirement - remember that a user story isn't a requirements specification, it's just a placeholder to have a conversation about the requirements - we capture their needs explicitly as acceptance tests, like this one:

Given a copy of a DVD title that isn’t in the library,

When a member donates their copy, specifying the name of the DVD title

Then that title is added to the library
AND their copy is registered against that title so that other members can borrow it,
AND an email alert is sent to members who specified an interest in matching titles,
AND the new title is added to the list of new titles for the next member newsletter
AND the member is awarded priority points



This acceptance test, expressed in the trendy Top Of The Pops-style "given...when...then..." format, reveals information about how the user interacts with the software, contained in the when clause. This is the action that the user performs that triggers the software to do the work.


The then clause is the starting point for our OO design. It clearly sets out all of the individual outcomes of performing that action. Outcomes are important. They describe the effect the user action has on the data in the system. In short, they describe what the user can expect to get when they perform that action. From this point on, we'll refer to these outcomes as the work that the software does.

The "ANDs" in this example are significant. They help us to identify 5 unique outcomes - 5 individual pieces of work that the software needs to do in order to pass this test. And whatever design we come up with, first and foremost, it must pass this test. In other words, the first test of a good OO design is that it works.


The essence of OO design is assigning the work we want the software to do to the objects that are best-placed to do that work. By "best-placed", I mean that the object has most, if not all, of the knowledge required to do that work.

Knowledge, in software terms, is data. Let's say we want to calculate a person's age in years; what do we need to know in order to do that calculation? We need to know their date of birth, and we need to know what today's date is, so we can calculate how much time has elapsed since they were born. Who knows a person's date of birth?
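
Just to pin down the work itself, here's the calculation as a couple of lines of Java (using the standard java.time API); the interesting question - which object should this live on? - is where we're heading next:

```java
import java.time.LocalDate;
import java.time.Period;

class AgeCalculation {

    // The work: calculating an age in years. It needs two pieces of
    // knowledge - a date of birth, and today's date.
    static int ageInYears(LocalDate dateOfBirth, LocalDate today) {
        return Period.between(dateOfBirth, today).getYears();
    }
}
```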

Now, this is where a lot of developers come unstuck. We seem to have an in-built tendency to separate agency from data. This leads to a style of design where objects that know stuff (data objects) are acted upon by objects that do stuff (services, commands, managers, etc.). In order to do stuff, objects have to get the necessary knowledge from the objects that know stuff.



So we can end up with lots of low-level coupling between the doing objects and the knowing objects. This drags us into deep waters when it comes to accommodating change later, because of the "ripple effect" that tighter coupling amplifies, where a tiny change to one part of the code can ripple out through the dependencies and become a major re-write.

A key goal of good OO design is to minimise coupling between modules, and we achieve this by - as much as possible - encapsulating both the knowledge and the work in the same place, like this:
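
Something like this minimal sketch, sticking with the age example - the first two classes show the separated style we're trying to avoid, the third shows knowledge and work together (the class names are mine, purely for illustration):

```java
import java.time.LocalDate;
import java.time.Period;

// Knowledge and work separated: the "doing" object has to ask the
// "knowing" object for its data before it can do the work.
class PersonData {
    private final LocalDate dateOfBirth;

    PersonData(LocalDate dateOfBirth) {
        this.dateOfBirth = dateOfBirth;
    }

    LocalDate getDateOfBirth() {
        return dateOfBirth;
    }
}

class AgeService {
    int ageOf(PersonData person, LocalDate today) {
        return Period.between(person.getDateOfBirth(), today).getYears();
    }
}

// Knowledge and work together: we tell the person, who knows their
// own date of birth, to do the calculation themselves.
class Person {
    private final LocalDate dateOfBirth;

    Person(LocalDate dateOfBirth) {
        this.dateOfBirth = dateOfBirth;
    }

    int ageInYears(LocalDate today) {
        return Period.between(dateOfBirth, today).getYears();
    }
}
```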



This is sometimes referred to as the "Tell, Don't Ask" style of OO design because, instead of asking for an object's data in order to do some work, we tell the object that has that data to do the work itself. The end result is fewer, higher-level couplings between objects, and smaller ripples when we make changes.

If we're aiming for a loosely coupled design - and we are - then the next step in the OO design process, where we assign responsibility for each piece of work, requires us to put the work where the knowledge is. For that, we need to map out in our heads or on paper what this knowledge is.

Now, I'm very much a test-driven sort of dude, and as such I find that design thinking works best when we work from concrete examples. The acceptance test above isn't concrete enough for my tastes. There are still too many questions: which member, what DVD title, who has registered an interest in matching titles, and so on?

To make an acceptance test executable, we must add concrete examples - i.e., test data. So our hand-wavy test script from above becomes:

Given a copy of The Abyss, which isn’t in the library,

When Joe Peters donates his copy, specifying the name of the title, that it was directed by James Cameron and released in 1989

Then The Abyss is added to the library
AND his copy is registered against that title so that other members can borrow it,
AND an email alert with the subject “New DVD title” is sent to Bill Smith and Jane Jones, who specified an interest in titles matching “the abyss” (case-insensitive), stating “Dear , Just to let you know that another member has recently donated a copy of The Abyss (dir: James Cameron, 1989) to the library, and it is now available to borrow.”
AND The Abyss is added to the list of new titles for the next member newsletter
AND Joe Peters receives 10 priority points for making a donation



Now it's all starting to come into focus. From this, if I feel I need to - and earlier in development, when my domain knowledge is just beginning to build, I find it more useful - I can draw a knowledge map based on this example.




It's by no means scientific or exhaustive. It just lays out the objects I think are involved, and what these objects know about. The library, for example, knows about members and titles. (If you're UML literate, you might think this is an object diagram... and you'd be right.)


So, now we know what work needs to be done, and we know what objects might be involved and what these objects know. The next step is to put the work where the knowledge is.


This is actually quite a mechanical exercise; we have all the information we need. My tip - as an old pro - is to start with the outcomes, not the objects. Remember: first and foremost, our design must pass the acceptance test.


So, take the first piece of work that needs to be done:

The Abyss is added to the library


...and look for the object we believe has the knowledge to do this. The library knows about titles, so maybe the library should have responsibility for adding this title to itself.


Work through the outcomes one at a time, and assign responsibility for that work to the object that has the knowledge to do it.


Class Responsibility Collaboration cards are a neat and simple way of modeling this information. Write the name of the type of object doing the work at the top, and on the left-hand side list what it is responsible for knowing and what it is responsible for doing.

(HINT: you shouldn't end up with more CRC cards than outcomes. An outcome may indeed involve a subsystem of objects, but better to hide that detail behind a clean interface like Email Alert, and drill down into it later.)
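
To give a flavour of where a card ends up in code, here's a rough sketch of a Library class skeleton, assuming Member and Title classes from the same design: what it's responsible for knowing becomes fields, what it's responsible for doing becomes methods (the details here are illustrative, not definitive):

```java
import java.util.ArrayList;
import java.util.List;

class Library {
    // Responsible for knowing: members, titles, and the new titles
    // for the next member newsletter
    private final List<Member> members = new ArrayList<Member>();
    private final List<Title> titles = new ArrayList<Title>();
    private final List<Title> newTitles = new ArrayList<Title>();

    // Responsible for doing: adding a donated title to itself...
    void add(Title title) {
        titles.add(title);
    }

    // ...and adding it to the list of new titles
    void addToNewTitles(Title title) {
        newTitles.add(title);
    }
}
```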





Now we have a design that tells us what objects are involved, and which objects are doing which piece of work. We're almost there. There's only one piece of the OO jigsaw left; we need to decide how these objects collaborate with each other in order to co-ordinate the work. Objects, by themselves, don't do anything unless they're told to. The work is co-ordinated by objects telling each other to do their bit.

If the work happens in the order it's listed in the test, then that pretty much takes care of the collaborations. We start with the library adding the new title to itself. That's our entry point: someone - e.g., a GUI controller, or web service, or unit test - tells the library to add the title.


Once that piece of work is done, we move on to the next piece of work: registering a default copy to that title for members to borrow. Who does that? The title does it. We're thinking here about the thread of control - fans of sequence diagrams will know exactly what this is - where, like a baton in a relay race, control is passed from one object ("worker") to the next by sending a message. The library tells the new title to register a copy against itself. And on to the next piece of work, until all the work's been done, and we've passed the acceptance test.


Again, this is a largely mechanical exercise, with a pinch of good judgement thrown in, based on our understanding of how best to manage dependencies in software. For example, we may choose to avoid circular dependencies that might otherwise naturally fall out of the order in which the work is done. In this case, we don't have Title tell Library to add the title to the list of "new titles" - Library's second piece of work - because that would set up a circular dependency between those two objects that we'd like to avoid. Instead, we allow control to be returned by default to the caller, Library, after Title has done its work.
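
Here's that thread of control as a code sketch. I've assumed Library tells Member to award the points once control returns to it - your CRC cards might say otherwise - and EmailAlert's send() is a stand-in for the whole email subsystem:

```java
import java.util.ArrayList;
import java.util.List;

class Library {
    private final List<Title> titles = new ArrayList<Title>();
    private final List<Title> newTitles = new ArrayList<Title>();

    // The entry point: a GUI controller, web service or unit test
    // tells the library to accept the donation.
    void donate(Title title, Member donor) {
        titles.add(title);             // 1. title added to the library
        title.registerCopy(donor);     // 2-3. baton passed to Title
        newTitles.add(title);          // 4. control has returned, so no
                                       //    Library<->Title circular dependency
        donor.awardPriorityPoints(10); // 5. Member, bottom of the call stack
    }
}

class Title {
    private final List<Copy> copies = new ArrayList<Copy>();
    private final EmailAlert alert;

    Title(EmailAlert alert) {
        this.alert = alert;
    }

    void registerCopy(Member donor) {
        copies.add(new Copy(this, donor)); // 2. copy registered against title
        alert.send();                      // 3. tell Email Alert to send itself
    }
}

class Copy {
    private final Title title;
    private final Member donor;

    Copy(Title title, Member donor) {
        this.title = title;
        this.donor = donor;
    }
}

class Member {
    private int priorityPoints;

    void awardPriorityPoints(int points) {
        priorityPoints += points;
    }
}

class EmailAlert {
    void send() {
        // a whole subsystem may be hiding behind this interface
    }
}
```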


On a CRC card, we capture information about collaborations on the right-hand side.






Note that Member has no collaborators. This means that Member doesn't tell any other objects to do any work. This is how we can tell it's at the bottom of the call stack. Library, on the other hand, has no objects that tell it to do anything, which places it at the top of the call stack: Library is the outermost object; our entry point for this scenario.


Note also that I've got a question mark on the collaborators side of the Email Alert object. This is because I believe there may be a whole can of worms hiding behind its inscrutable interface - potentially a whole subsystem dedicated to sending emails. I have decided to defer thinking about how that will work. For now, it's enough to know that Title tells Email Alert to send itself. We can fake it 'til we make it.


So, in essence, we now have an object oriented design that we believe will pass the acceptance test.


The next step would be to implement it in code.


Again, being a test-driven sort of cat, I would seek to implement it - and drive out all the low-level/code-level detail - in a test-driven way.


There are different ways we can skin this particular rabbit. We could start at the bottom of the call stack and test-drive an implementation of Member to check that it does indeed award itself priority points when we tell it to. Once we've got Member working as desired, we could move up the call stack to Title, and test-drive an implementation of that using our real Member, and a fake Email Alert as a placeholder. Then, when we get Title working, we could finish up by wiring it all together and test-driving an implementation of Library with our real Title, our real Member, and our fake Email Alert. Then we could go away and get to work on designing and implementing the email subsystem.
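
A first test at the bottom of the stack might look like this sketch - a test about the work, pure state, no collaborators (I've assumed Member exposes a priorityPoints() query for the test's benefit):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MemberTest {

    @Test
    public void awardsItselfPriorityPointsWhenToldTo() {
        Member joe = new Member();

        joe.awardPriorityPoints(10);

        assertEquals(10, joe.priorityPoints());
    }
}
```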


Or we could work top-down (or "outside-in", as some prefer it), by test-driving an implementation of Library using mock objects for its collaborators Title and Member, wiring them together by writing tests that will fail if Library doesn't tell those collaborators to do their bit. Once we get Library working, we then move down the stack and test-drive implementations of Title and Member, again with a placeholder (e.g., a mock) for Email Alert so we can defer that part until we know what's involved.
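
With a mocking framework like Mockito, the outside-in version might start with an interaction test along these lines - a sketch that fails if Library doesn't tell its collaborators to do their bit:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class LibraryDonationTest {

    @Test
    public void tellsItsCollaboratorsToDoTheirPartOfTheWork() {
        Title title = mock(Title.class);
        Member donor = mock(Member.class);

        new Library().donate(title, donor);

        verify(title).registerCopy(donor);
        verify(donor).awardPriorityPoints(10);
    }
}
```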


A CRC card implies two different kinds of test:


1. Tests that fail because work was not done correctly

2. Tests that fail because an object didn't tell a collaborator to do its part





I tend to find that, in implementing OO designs end-to-end, both kinds of test come in handy. The important thing to remember is whether the test you're writing is about the work, or about the collaborations. Tests should only have one reason to fail - so that when they do, it's easier to pinpoint what went wrong - which means they should never be about both.


Also, sometimes, the test for the work can be implied by an interaction test when parameter values we expect to be passed to a mock object have been calculated by the caller. Tread very carefully here, though. Implicit in this can be two reasons for the test to fail: because the interaction was incorrect (or missing), or because the parameter value was incorrectly calculated. Just as it can be wise not to mix actions with queries, it can also be wise not to mix the types of tests.


Finally, and to reiterate for emphasis, the final arbiter of whether your design works is whether or not it passes the acceptance test. So as you implement it, keep going back to that test. You're not done until it passes.


And there you have it: a test-driven, Agile approach to object oriented analysis and design. Just like in the '90s. Only with fewer boxes and arrows.






April 17, 2015

Generating Domain-Driven Design 'Name Palettes' Using Tag Clouds

Here's a quick idea for anyone who's interested in Domain-driven Design (DDD), and believes that the design of code should be driven by our understanding of the problem space.

Take a requirements specification - say, a user acceptance test - and run the plain text of it through a tag cloud generator like the one at tagcrowd.com.

Here, I ran the following acceptance test - written in the trendy "given... when... then... " style of Behaviour-driven Development (BDD) - through the tag cloud generator:

Given a copy of a DVD title that isn’t in the library,

When a member donates their copy, specifying the name of the DVD title

Then that title is added to the library
AND their copy is registered against that title so that other members can borrow it,
AND an email alert is sent to members who specified an interest in matching titles,
AND the new title is added to the list of new titles for the next member newsletter
AND the member is awarded priority points


It generates the following tag cloud:




created at TagCrowd.com




Now, when I'm thinking about the internal design - either up-front or after-the-fact through refactoring - I have a palette of words, drawn directly from the problem space, upon which units of code can be hung.

Need a name for a new class? Take a look in the tag cloud and see if there's anything suitable. Need a name for a new method? Ditto. And so on.

Working in reverse, you might also find it enlightening to take code that's been stripped of all language features (reserved words etc) so that only identifiers remain, and feed that through the tag cloud generator so it can be compared to the tag cloud generated from the requirements. It's by no means a scientific approach to DDD, but it could at least give you a quick visual way of seeing how closely your code matches the problem domain.
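
As a rough sketch of how you might strip a Java source file down to its identifiers and count them (the reserved-word list here is deliberately partial; a real version would want the full set):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class IdentifierCloud {

    // A partial list of Java reserved words, for illustration only
    private static final Set<String> RESERVED = new HashSet<String>(
            Arrays.asList("public", "private", "protected", "class",
                    "interface", "void", "int", "long", "boolean", "new",
                    "return", "static", "final", "if", "else", "for",
                    "while", "import", "package", "this", "extends"));

    public static void main(String[] args) throws Exception {
        String source = new String(Files.readAllBytes(Paths.get(args[0])));

        // Split on anything that isn't a letter, leaving identifiers
        // (plus any words in comments and strings) behind
        Map<String, Integer> frequencies = new TreeMap<String, Integer>();
        for (String word : source.split("[^A-Za-z]+")) {
            if (word.isEmpty() || RESERVED.contains(word.toLowerCase())) {
                continue;
            }
            Integer count = frequencies.get(word);
            frequencies.put(word, count == null ? 1 : count + 1);
        }

        // Paste this output into your favourite tag cloud generator
        System.out.println(frequencies);
    }
}
```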



April 15, 2015

Reality-Driven Development - Do We Need To Train Ourselves To Tune Into Real-World Problems?

The world is full of problems.

That is to say, the world is full of people who are hindered in their efforts to achieve their goals, no matter how small and everyday those goals might be.

I'm no different to anyone else in that respect; every day I hit barriers, overcome obstacles, and occasionally give up trying. It might be something as prosaic as being unable to find an item in the shops when I'm in town, or wondering if there's a café or a pub nearby that has seating available at that specific moment in time. Or it could be something as astronomically important as wondering if aliens have ever visited our solar system, and have they left any hardware behind.

Problems, problems, problems. The real world's full of interesting problems.

And so it comes as some surprise that when my apprentice and I take a moment to think about ideas for a software project, we struggle to think of any real-world problems that we could have a go at solving.

It struck me in that moment that maybe, as a species, we software bods are a bit crap at noticing problems that have nothing to do with developing software. Typically, when I ask developers to think about ideas for a project, the suggestions have a distinctly technical bent. Maybe a new testing tool, or a plug-in for Visual Studio, or Yet Another MVC Framework, or something that does something to do with builds (and so on.)

A lifetime of minor frustrations, some of which software might be able to help with, pops clean out of our heads. And I ask: are we just not tuned in to real-world problems?

There are people who walk into a room, and notice that the scatter cushions complement the carpet. I don't. I'm just not tuned in to that sort of thing. And if you give them some money, and say "improve this room", they'll have all sorts of ideas about furniture and bookshelves and paintings and curtains and pot plants, while I would buy a Massive F***ing TV™ and a Playstation 4. I'm tuned into technology in ways they're not, and they're tuned into home furnishings in ways I'm not.

Do we need to mindfully practice noticing everyday problems where software might help? Do we need to train ourselves to tune in to real-world problems when we're thinking about code and what we can potentially do with it? Do we need to work at noticing the colour of the scatter cushions when we walk into a room?

I've suggested that Will and I keep notes over the next two weeks about problems we come up against, so that when we next pair, there might be some inspiration for us to draw from.

Which gives me an idea for an app....



April 8, 2015

Reality-driven Development - Creating Software For Real Users That Solves Real Problems In the Real World

It's a known fact that software development practices cannot be adopted until they have a pithy name to identify the brand.

Hence it is that, even though people routinely acknowledge that it would be a good idea for development projects to connect with reality, very few actually do because there's no brand name for connecting your development efforts with reality.

Until now...

Reality-driven Development is a set of principles and practices aimed at connecting development teams to the underlying reality of their efforts so that they can create software that works in the real world.

RDD doesn't replace any of your existing practices. In some cases, it can short-circuit them, though.

Take requirements analysis, for example: the RDD approach compels us to immerse ourselves in the problem in a way traditional approaches just can't.

Instead of sitting in meeting rooms talking about the problem with customers, we go out to where the problem exists and see and experience it for ourselves. If we're tasked with creating a system for call centre operatives to use, we spend time in the call centre, we observe what call centre workers do - pertinent to the context of the system - and most importantly, we have a go at doing what the call centre workers do.

It never ceases to amaze me how profound an effect this can have on the collaboration between developers and their customers. Months of talking can be compressed into a day or two of real-world experience, with all that tacit knowledge communicated in the only way that tacit knowledge can be. Requirements discussions take on a whole different flavour when both parties have a practical, first-hand appreciation of what they're talking about.

Put the shoe on the other foot (and that's really what this practice is designed to do): imagine your customer is tasked with designing software development tools, based entirely on an understanding they've built about how we develop software purely based on our description of the problem. How confident are you that we'd communicate it effectively? How confident are you that their solutions would work on real software projects? You would expect someone designing dev tools to have been a developer at some point. Right? So what makes us think someone who's never worked in a call centre will be successful at writing call centre software? (And if you really want to see some pissed off end users, spend an hour in a call centre.)

So, that's the first practice in Reality-driven Development: Real-world Immersion.

We still do the other stuff - though we may do it faster and more effectively. We still gather user stories as placeholders for planning and executing our work. We still agree executable acceptance tests. We still present working software to the customer when we want feedback. We still iterate our designs. But all of these activities are now underpinned with a much more solid and practical shared understanding of what it is we're actually talking about. If you knew just how much of a difference this can make, it would be the default practice everywhere.

Just exploring the problem space in a practical, first-hand way can bridge the communication gap in ways that none of our existing practices can. But problem spaces have to be bounded, because the real world is effectively infinite.

The second key practice in Reality-driven Development is to set ourselves meaningful Real-world Goals: that is, goals that are defined in and tested in the real world, outside of the software we build.

Observe a problem in the real world. For example, in our real-world call centre, we observe that operatives are effectively chained to their desks, struggling to take regular comfort breaks, and struggling to get away at the end of a shift. We set ourselves the goal of every call centre worker getting at least one 15-minute break every 2 hours, and working a maximum of 15 minutes' unplanned overtime at the end of a day. This goal has nothing to do with software. We may decide to build a feature in the software they use that manages breaks and working hours, and diverts calls that are coming in just before their break is due. It would be the software equivalent of when the cashier at the supermarket checkout puts up one of those little signs to dissuade shoppers from joining their queue when they're about to knock off.

Real-world Goals tend to have a different flavour to management-imposed goals. This is to be expected. If you watch any of those "Back to the floor" type TV shows, where bosses pose as front-line workers in their own businesses, it's very often the case that the boss doesn't know how things really work, and what the real operational problems are. This raises natural cultural barriers and issues of trust. Management must trust their staff to drive development and determine how much of the IT budget gets spent. This is probably why almost no organisation does it this way. But the fact remains that, if you want to address real-world problems, you have to take your cues from reality.

Important, too, is the need to strike a balance in your Real-world Goals. While we've long had practices for discovering and defining business goals for our software, they tend to suffer from a rather naïve 1-dimensional approach. Most analysts seek out financial goals for software and systems - to cut costs, or increase sales, and so on - without looking beyond that to the wider effect the software can have. A classic example is music streaming: while businesses like Spotify make a great value proposition for listeners, and for major labels and artists with big back catalogues, arguably they've completely overlooked 99.9% of small and up-and-coming artists, as well as writers, producers and other key stakeholders. A supermarket has to factor in the needs of suppliers, or their suppliers go out of business. Spotify has failed to consider the needs of the majority of musicians, choosing to focus on one part of the equation at the expense of the other. This is not a sustainable model. Like all complex systems, dynamic equilibrium is usually the only viable long-term solution. Fail to take into account key variables, and the system tips over. In the real world, few problems are so simple as to only require us to consider one set of stakeholders.

In our call centre example, we must ask ourselves about the effect of our "guaranteed break" feature on the business itself, on its end customers, and anyone else who might be affected by it. Maybe workers get their breaks, but not without dropping calls, or not without a drop in sales. All of these perspectives need to be looked at and addressed, even if by addressing them we end up knowingly impacting people in a negative way. Perhaps we can find some other way to compensate them. But at least we're aware.

The third leg of the RDD table - the one that gives it the necessary balance - is Real-world Testing.

Software testing has traditionally been a standalone affair. It's vanishingly rare to see software tested in context. Typically, we test it to see if it conforms to the specification. We might deploy it into a dedicated testing environment, but that environment usually bears little resemblance to the real-world situations in which the software will be used. For that, we release the software into production and cross our fingers. This, as we all know, pisses users off no end, and rapidly eats away at the goodwill we rely on to work together.

Software development does have mechanisms that go back decades for testing in the real world. Alpha and Beta testing, for example, are pretty much exactly that. The problem with that kind of small, controlled release testing is that it usually doesn't have clear goals, and lacks focus as a result. All we're really doing is throwing the software out there to some early adopters and saying "here, waddaya think?" It's missing a key ingredient - real-world testing requires real-world tests.

Going back to our Real-world Goals, in a totally test-driven approach, where every requirement or goal is defined with concrete examples that can become executable tests, we're better off deploying new versions of the software into a real-world(-ish) testing environment that we can control completely, where we can simulate real-world test scenarios in a repeatable and risk-free fashion, as often as we like.

A call centre scenario like "Janet hasn't taken a break for 1 hour and 57 minutes, there are 3 customers waiting in the queue, they should all be diverted to other operators so Janet can take a 15-minute break. None of the calls should be dropped" can be simulated in what we call a Model Office - a recreation of all or part of the call centre, into which multiple systems under development may be deployed for testing and other purposes.
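
In code, that scenario might become a repeatable test along these lines. To be clear, CallCentreSimulation and its whole API here are hypothetical - the point is only that a real-world scenario becomes something we can execute over and over in the Model Office:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class GuaranteedBreaksModelOfficeTest {

    @Test
    public void janetGetsHerBreakAndNoCallsAreDropped() {
        CallCentreSimulation office = new CallCentreSimulation();
        Operator janet = office.operator("Janet");
        office.minutesSinceLastBreak(janet, 117);
        office.callersWaiting(3);

        office.advanceMinutes(3); // Janet's break falls due

        assertTrue(janet.isOnBreak());
        assertEquals(0, office.droppedCalls());
    }
}
```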

Our call centre model office simulates the real environment faithfully enough to get meaningful feedback from trying out software in it, and should allow us to trigger scenarios like this over and over again. In particular, model offices enable us to exercise the software in rare edge cases and under unusually high peak loads that Alpha and Beta testing are less likely to throw up. (e.g., what happens if everyone is due a break within the next 5 minutes?)

Unless you're working on flight systems for fighter aircraft or control systems for nuclear power stations, it doesn't cost much to set up a testing environment like this, and the feedback you can get is worth far more.

The final leg of the RDD table is Real-world Iterating.

So we immerse ourselves in the problem, find and agree real-world goals and test our solutions in a controlled simulation of the real world. None of this, even taken together with existing practices like ATDD and Real Options, guarantees that we'll solve the problem - certainly not first time.

Iterating is, in practice, the core requirements discipline of Agile Software Development. But too many Agile teams iterate blindly, making the mistake of believing that the requirements they've been given are the real goals of the software. If they weren't elucidated from a real understanding of the real problem in the real world, then they very probably aren't the real goals. More likely, what teams are iterating towards is a specification for a solution to a problem they don't understand.

The Agile Manifesto asks us to value working software over comprehensive documentation. Reality-driven Development widens the context of "working software" to mean "software that testably solves the user's problem", as observed in the real world. And we iterate towards that.

Hence, we ask not "does the guaranteed break feature work as agreed?", but "do operatives get their guaranteed breaks, without dropping sales calls?" We're not done until they do.

This is not to say that we don't agree executable feature acceptance tests. Whether or not the software behaves as we agreed is the quality gate we use to decide if it's worth deploying into the Model Office at all. The software must jump the "it passes all our functional tests" gate before we try it on the "but will it really work, in the real world?" gate. Model Office testing is more complex and more expensive, and ties up our customers. Don't do it until you're confident you've got something worth testing in it.

And finally, Real-world Testing wouldn't be complete unless we actually, really tested the software in the real real world. At the point of actual deployment into a production environment, we can have reasonably high confidence that what we're putting in front of end users is going to work. But that confidence must not spill over into arrogance. There may well be details we overlooked. There always are. So we must closely observe the real software in real use by real people in the real world, to see what lessons we can learn.

So there you have it: Reality-driven Development

1. Real-world Immersion
2. Real-world Goals
3. Real-world Testing
4. Real-world Iterating


...or "IGTI", for short.


April 6, 2015

Acceptance Tests - The Difference Between Outcomes & Implications Of Outcomes

A little thought to end Easter Sunday.

When writing acceptance tests, I encourage people to distinguish between outcomes and the implications of outcomes.

It's a subtle difference. Imagine we have a test for winning the lottery jackpot. The outcome is that the prize money is credited to the bank balance of the winner. Once they have the money, they can buy a yacht.

Buying a yacht is not an outcome of winning the lottery. But winning the lottery makes it possible to buy a yacht.

Similarly, the outcome of registering as a member of a video library is not that you can borrow videos. Being a member makes it possible for you to borrow videos. But the outcome is that you are now on the list of members who can borrow videos.

Do you see what I'm getting at?

Outcomes in acceptance tests should describe the changes in state of the system's data - a change to a bank balance, or a change to the list of registered video library members.

Such changes may make pre-conditions of other use cases true, like borrowing videos. But that's not an actual outcome. That's enabled by the outcome.
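
In test code, the distinction looks something like this sketch (VideoLibrary and Member are hypothetical):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class MemberRegistrationTest {

    @Test
    public void registeredMemberAppearsOnTheListOfMembers() {
        VideoLibrary library = new VideoLibrary();

        Member jane = library.register("Jane Jones");

        // The outcome: a change to the system's data
        assertTrue(library.members().contains(jane));

        // We don't assert that Jane can borrow a video here - that's
        // enabled by this outcome, and belongs to the pre-conditions
        // of another use case's tests
    }
}
```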

So, the next time you're writing acceptance tests, and detailing the outcomes of a user action or system event, look for the changes, not the implications of the changes.