April 25, 2015

Non-Functional Tests Can Help Avoid Over-Engineering (And Under-Engineering)

Building on the topic of how we tackle non-functional requirements like code quality, I'm reminded of those times where my team has evolved an architecture that developers taking over from us didn't understand the reasons or rationale for.

More than once, I've seen software and systems scrapped and new teams start again from scratch because they felt the existing solution was "over-engineered".

Then, months later, someone on the new team reports back to me that, over time, their design has had to necessarily evolve into something similar to what they scrapped.

In these situations it can be tricky: a lot of software really is over-engineered and a simpler solution would be possible (and desirable in the long term).

But how do we tell? How can we know that the design is the simplest thing that a team could have done?

For that, I think, we need to look at how we'd know that software was functionally over-complicated and see if we can project any lessons we learn onto non-functional complexity.

A good indicator of whether code is really needed is to remove it and see if any acceptance tests fail. You'd be surprised how many features and branches in code find their way in there without the customer asking for them. This is especially true when teams don't practice test-driven development. Developers make stuff up.

Surely the same goes for the non-functional stuff? If I could simplify the design, and my non-functional tests still pass, then it's probable that the current design is over-engineered. But in order to do that, we'd need a set of explicit non-functional tests. And most teams don't have those. Which is why designs can so easily get over-engineered.
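For illustration, this is roughly what I mean by an explicit non-functional test - a minimal sketch in C# assuming NUnit, with a made-up ReportGenerator standing in for whatever part of the design the requirement actually applies to:

```csharp
using System.Diagnostics;
using System.Threading;
using NUnit.Framework;

// Stand-in for the component the non-functional requirement applies to;
// in real life this would be the class (or system) whose design we're questioning.
public class ReportGenerator
{
    public void GenerateMonthlyReport()
    {
        Thread.Sleep(50);  // simulate some work
    }
}

[TestFixture]
public class NonFunctionalTests
{
    // An explicit, executable statement of a non-functional requirement:
    // the monthly report must be generated in under two seconds. If a simpler
    // design still passes tests like this, the current design may be over-engineered.
    [Test]
    public void Monthly_report_is_generated_within_two_seconds()
    {
        var generator = new ReportGenerator();
        var stopwatch = Stopwatch.StartNew();

        generator.GenerateMonthlyReport();

        stopwatch.Stop();
        Assert.Less(stopwatch.ElapsedMilliseconds, 2000);
    }
}
```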

Just a thought.


Continuous Inspection Screencast

It's been quite a while since I did a screencast. Here's a new one about Continuous Inspection, which is a thing. (Oh yes.)





April 23, 2015

The Big Giant Object Oriented Analysis & Design Blog Post

Having been thwarted in my plan to speak at CraftConf in Budapest this week, I find myself with the luxury of time to blog about a topic that comes up again and again.

Object oriented analysis and design (OOA/D) is not a trendy subject these days. Arguably, it had its heyday in the late 1990s, when UML dinosaurs ruled the Earth. But the fact remains that most software being written today is, at least to some extent, object oriented. And it's therefore necessary and wise to be able to organise our code effectively in an object oriented way. (Read my old, old paper about OOA/D to give yourself the historical context on this.)

Developers, on the whole, seem to really struggle getting from functional requirements (expressed, for example, as acceptance tests; the 21st century equivalent of Use Case scenarios) to a basis for an object oriented design that will satisfy that requirement.

There's a simple thought process to it; so simple that I see a lot of developers struggling mainly to take the leap of faith that it really can be that simple.

The thought process can be best outlined with a series of questions:

1. Who will be using this software, and what will they be using it to do?

2. How will they interact with the software in order to do what they want?

3. What are the outcomes of each user interaction?

4. What knowledge is involved in doing this work?

5. Which objects have the knowledge necessary to do each piece of the work?

6. How will these objects co-ordinate the work between themselves?

In Extreme Programming, we answer the first question with User Stories like the one below.



A video library member wants to donate a DVD to the library. That is who is using the software, and what they will be using it to do.

In XP, we take a test-driven approach to design. So when we sit down with the customer (in this case, the video library member) to flesh out this requirement - remember that a user story isn't a requirements specification, it's just a placeholder to have a conversation about the requirements - we capture their needs explicitly as acceptance tests, like this one:

Given a copy of a DVD title that isn’t in the library,


When a member donates their copy, specifying the name of the DVD title


Then that title is added to the library
AND their copy is registered against that title so that other members can borrow it,
AND an email alert is sent to members who specified an interest in matching titles,
AND the new title is added to the list of new titles for the next member newsletter
AND the member is awarded priority points



This acceptance test, expressed in the trendy Top Of The Pops-style "given...when...then..." format, reveals information about how the user interacts with the software, contained in the when clause. This is the action that the user performs that triggers the software to do the work.


The then clause is the starting point for our OO design. It clearly sets out all of the individual outcomes of performing that action. Outcomes are important. They describe the effect the user action has on the data in the system. In short, they describe what the user can expect to get when they perform that action. From this point on, we'll refer to these outcomes as the work that the software does.

The "ANDs" in this example are significant. They help us to identify 5 unique outcomes - 5 individual pieces of work that the software needs to do in order to pass this test. And whatever design we come up with, first and foremost, it must pass this test. In other words, the first test of a good OO design is that it works.


The essence of OO design is assigning the work we want the software to do to the objects that are best-placed to do that work. By "best-placed", I mean that the object has most, if not all, of the knowledge required to do that work.

Knowledge, in software terms, is data. Let's say we want to calculate a person's age in years; what do we need to know in order to do that calculation? We need to know their date of birth, and we need to know what today's date is, so we can calculate how much time has elapsed since they were born. Who knows a person's date of birth?

Now, this is where a lot of developers come unstuck. We seem to have an in-built tendency to separate agency from data. This leads to a style of design where objects that know stuff (data objects) are acted upon by objects that do stuff (services, commands, managers, etc.). In order to do stuff, objects have to get the necessary knowledge from the objects that know stuff.



So we can end up with lots of low-level coupling between the doing objects and the knowing objects, and this drags us into deep waters when it comes to accommodating change later because of the "ripple effect" that tighter coupling amplifies, where a tiny change to one part of the code can ripple out through the dependencies and become a major re-write.

A key goal of good OO design is to minimise coupling between modules, and we achieve this by - as much as possible - encapsulating both the knowledge and the work in the same place, like this:



This is sometimes referred to as the "Tell, Don't Ask" style of OO design because, instead of asking for an object's data in order to do some work, we tell the object that has that data to do the work itself. The end result is fewer, higher-level couplings between objects, and smaller ripples when we make changes.
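To make that concrete with the age example - a minimal sketch in C#, with all the names invented for illustration - compare the "ask" style with the "tell" style:

```csharp
using System;

// "Ask" style: a service pulls the data out of the knowing object,
// coupling itself to how that data is stored and exposed.
public class PersonData
{
    public DateTime DateOfBirth { get; set; }
}

public class AgeCalculator
{
    public int AgeInYears(PersonData person, DateTime today)
    {
        var age = today.Year - person.DateOfBirth.Year;
        if (person.DateOfBirth > today.AddYears(-age)) age--;
        return age;
    }
}

// "Tell, Don't Ask" style: the object that knows the date of birth does the
// age calculation itself, so the data stays encapsulated.
public class Person
{
    private readonly DateTime dateOfBirth;

    public Person(DateTime dateOfBirth)
    {
        this.dateOfBirth = dateOfBirth;
    }

    public int AgeInYears(DateTime today)
    {
        var age = today.Year - dateOfBirth.Year;
        if (dateOfBirth > today.AddYears(-age)) age--;
        return age;
    }
}
```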

If we're aiming for a loosely coupled design - and we are - then the next step in the OO design process, where we assign responsibility for each piece of work, requires us to put the work where the knowledge is. For that, we need to map out in our heads or on paper what this knowledge is.

Now, I'm very much a test-driven sort of dude, and as such I find that design thinking works best when we work from concrete examples. The acceptance test above isn't concrete enough for my tastes. There are still too many questions: which member, what DVD title, who has registered an interest in matching titles, and so on?

To make an acceptance test executable, we must add concrete examples - i.e., test data. So our hand-wavy test script from above becomes:

Given a copy of The Abyss, which isn’t in the library,

When Joe Peters donates his copy, specifying the name of the title, that it was directed by James Cameron and released in 1989

Then The Abyss is added to the library
AND his copy is registered against that title so that other members can borrow it,
AND an email alert with the subject “New DVD title” is sent to Bill Smith and Jane Jones, who specified an interest in titles matching “the abyss”(non-case-sensitive), stating “Dear , Just to let you know that another member has recently donated a copy of The Abyss (dir: James Cameron, 1989) to the library, and it is now available to borrow.”
AND The Abyss is added to the list of new titles for the next member newsletter
AND Joe Peters receives 10 priority points for making a donation



Now it's all starting to come into focus. From this, if I feel I need to - and earlier in development, when my domain knowledge is just beginning to build, I find it more useful - I can draw a knowledge map based on this example.




It's by no means scientific or exhaustive. It just lays out the objects I think are involved, and what these objects know about. The library, for example, knows about members and titles. (If you're UML literate, you might think this is an object diagram... and you'd be right.)


So, now we know what work needs to be done, and we know what objects might be involved and what these objects know. The next step is to put the work where the knowledge is.


This is actually quite a mechanical exercise; we have all the information we need. My tip - as an old pro - is to start with the outcomes, not the objects. Remember: first and foremost, our design must pass the acceptance test.


So, take the first piece of work that needs to be done:

The Abyss is added to the library


...and look for the object we believe has the knowledge to do this. The library knows about titles, so maybe the library should have responsibility for adding this title to itself.


Work through the outcomes one at a time, and assign responsibility for that work to the object that has the knowledge to do it.


Class Responsibility Collaboration cards are a neat and simple way of modeling this information. Write the name of the type of object doing the work at the top, and on the left-hand side list what it is responsible for knowing and what it is responsible for doing.

(HINT: you shouldn't end up with more CRC cards than outcomes. An outcome may indeed involve a subsystem of objects, but better to hide that detail behind a clean interface like Email Alert, and drill down into it later.)
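To give a flavour of where this leads in code - and this is just my own sketch, with invented method names, not the finished design - the responsibilities side of Title's CRC card might translate into a class skeleton along these lines:

```csharp
using System.Collections.Generic;

public class Copy { }

// Title's CRC card, sketched as a class skeleton.
// Knows: its name, director, year, and the copies registered against it.
// Does: registers a donated copy against itself so that members can borrow it.
public class Title
{
    private readonly string name;
    private readonly string director;
    private readonly int year;
    private readonly List<Copy> copies = new List<Copy>();

    public Title(string name, string director, int year)
    {
        this.name = name;
        this.director = director;
        this.year = year;
    }

    public void RegisterCopy(Copy copy)
    {
        copies.Add(copy);
    }
}
```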





Now we have a design that tells us what objects are involved, and which objects are doing which piece of work. We're almost there. There's only one piece of the OO jigsaw left; we need to decide how these objects collaborate with each other in order to co-ordinate the work. Objects, by themselves, don't do anything unless they're told to. The work is co-ordinated by objects telling each other to do their bit.

If the work happens in the order it's listed in the test, then that pretty much takes care of the collaborations. We start with the library adding the new title to itself. That's our entry point: someone - e.g., a GUI controller, or web service, or unit test - tells the library to add the title.


Once that piece of work is done, we move on to the next piece of work: registering a default copy to that title for members to borrow. Who does that? The title does it. We're thinking here about the thread of control - fans of sequence diagrams will know exactly what this is - where, like a baton in a relay race, control is passed from one object ("worker") to the next by sending a message. The library tells the new title to register a copy against itself. And on to the next piece of work, until all the work's been done, and we've passed the acceptance test.


Again, this is a largely mechanical exercise, with a pinch of good judgement thrown in, based on our understanding of how best to manage dependencies in software. For example, we may choose to avoid circular dependencies that might otherwise naturally fall out of the order in which the work is done. In this case, we don't have Title tell Library to add the title to the list of "new titles" - Library's second piece of work - because that would set up a circular dependency between those two objects that we'd like to avoid. Instead, we allow control to be returned by default to the caller, Library, after Title has done its work.
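Sketched in code - again with invented method names, and with the assignment of work being just one plausible reading of the cards - the thread of control for this scenario might look something like this:

```csharp
using System.Collections.Generic;

public class Member
{
    private int priorityPoints;

    // The donor awards itself priority points when told to.
    public void AwardPriorityPoints(int points)
    {
        priorityPoints += points;
    }
}

public class Title
{
    // Simplified for this sketch: register the donated copy, tell the
    // Email Alert to send itself (omitted here), then tell the donor to do its bit.
    public void RegisterDonatedCopy(Member donor)
    {
        donor.AwardPriorityPoints(10);
    }
}

public class Library
{
    private readonly List<Title> titles = new List<Title>();
    private readonly List<Title> newTitles = new List<Title>();

    // Entry point: a GUI controller, web service or unit test tells the
    // library to accept the donation.
    public void AcceptDonation(Member donor, Title title)
    {
        titles.Add(title);                 // the title is added to the library
        title.RegisterDonatedCopy(donor);  // control passes to Title, then on to Member
        newTitles.Add(title);              // control returns to Library - no circular dependency
    }
}
```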


On a CRC card, we capture information about collaborations on the right-hand side.






Note that Member has no collaborators. This means that Member doesn't tell any other objects to do any work. This is how we can tell it's at the bottom of the call stack. Library, on the other hand, has no objects that tell it to do anything, which places it at the top of the call stack: Library is the outermost object; our entry point for this scenario.


Note also that I've got a question mark on the collaborators side of the Email Alert object. This is because I believe there may be a whole can of worms hiding behind its inscrutable interface - potentially a whole subsystem dedicated to sending emails. I have decided to defer thinking about how that will work. For now, it's enough to know that Title tells Email Alert to send itself. We can fake it 'til we make it.


So, in essence, we now have an object oriented design that we believe will pass the acceptance test.


The next step would be to implement it in code.


Again, being a test-driven sort of cat, I would seek to implement it - and drive out all the low-level/code-level detail - in a test-driven way.


There are different ways we can skin this particular rabbit. We could start at the bottom of the call stack and test-drive an implementation of Member to check that it does indeed award itself priority points when we tell it to. Once we've got Member working as desired, we could move up the call stack to Title, and test-drive an implementation of that using our real Member, and a fake Email Alert as a placeholder. Then, when we get Title working, we could finish up by wiring it all together and test-driving an implementation of Library with our real Title, our real Member, and our fake Email Alert. Then we could go away and get to work on designing and implementing the email subsystem.


Or we could work top-down (or "outside-in", as some prefer it), by test-driving an implementation of Library using mock objects for its collaborators Title and Member, wiring them together by writing tests that will fail if Library doesn't tell those collaborators to do their bit. Once we get Library working, we then move down the stack and test-drive implementations of Title and Member, again with a placeholder (e.g., a mock) for Email Alert so we can defer that part until we know what's involved.


A CRC card implies two different kinds of test:


1. Tests that fail because work was not done correctly

2. Tests that fail because an object didn't tell a collaborator to do its part





I tend to find that, in implementing OO designs end-to-end, both kinds of tests come in handy. The important thing is to remember whether the test you're writing is about the work or about the collaborations. Tests should only have one reason to fail - so that when they do, it's easier to pinpoint what went wrong - which means they should never be about both.
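A minimal sketch of the two kinds, assuming NUnit and the Moq mocking library; the types here are simplified stand-ins (with an IMember interface extracted so the collaboration can be mocked), not the finished design:

```csharp
using Moq;
using NUnit.Framework;

// Minimal illustrative types, invented for this example.
public interface IMember
{
    void AwardPriorityPoints(int points);
}

public class Member : IMember
{
    public int PriorityPoints { get; private set; }

    public void AwardPriorityPoints(int points)
    {
        PriorityPoints += points;
    }
}

public class Title
{
    public void RegisterDonatedCopy(IMember donor)
    {
        // ...register the copy, trigger the email alert...
        donor.AwardPriorityPoints(10);
    }
}

[TestFixture]
public class WorkAndCollaborationTests
{
    // 1. A test that fails if the work isn't done correctly.
    [Test]
    public void Awarding_priority_points_increases_the_members_balance()
    {
        var member = new Member();

        member.AwardPriorityPoints(10);

        Assert.AreEqual(10, member.PriorityPoints);
    }

    // 2. A test that fails if Title doesn't tell its collaborator to do its part.
    [Test]
    public void Registering_a_donated_copy_tells_the_donor_to_award_themselves_points()
    {
        var donor = new Mock<IMember>();
        var title = new Title();

        title.RegisterDonatedCopy(donor.Object);

        donor.Verify(m => m.AwardPriorityPoints(10));
    }
}
```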


Also, sometimes, the test for the work can be implied by an interaction test when parameter values we expect to be passed to a mock object have been calculated by the caller. Tread very carefully here, though. Implicit in this can be two reasons for the test to fail: because the interaction was incorrect (or missing), or because the parameter value was incorrectly calculated. Just as it can be wise not to mix actions with queries, it can also be wise not to mix the types of tests.


Finally, and to reiterate for emphasis, the final arbiter of whether your design works is whether or not it passes the acceptance test. So as you implement it, keep going back to that test. You're not done until it passes.


And there you have it: a test-driven, Agile approach to object oriented analysis and design. Just like in the '90s. Only with fewer boxes and arrows.






April 17, 2015

Generating Domain-Driven Design 'Name Palettes' Using Tag Clouds

Here's a quick idea for anyone who's interested in Domain-driven Design (DDD), and believes that the design of code should be driven by our understanding of the problem space.

Take a requirements specification - say, a user acceptance test - and run the plain text of it through a tag cloud generator like the one at tagcrowd.com.

Here, I ran the following acceptance test - written in the trendy "given... when... then... " style of Behaviour-driven Development (BDD) - through the tag cloud generator:

Given a copy of a DVD title that isn’t in the library,

When a member donates their copy, specifying the name of the DVD title

Then that title is added to the library
AND their copy is registered against that title so that other members can borrow it,
AND an email alert is sent to members who specified an interest in matching titles,
AND the new title is added to the list of new titles for the next member newsletter
AND the member is awarded priority points


It generates the following tag cloud:




created at TagCrowd.com




Now, when I'm thinking about the internal design - either up-front or after-the-fact through refactoring - I have a palette of words, drawn directly from the problem space, on which units of code can be hung.

Need a name for a new class? Take a look in the tag cloud and see if there's anything suitable. Need a name for a new method? Ditto. And so on.

Working in reverse, you might also find it enlightening to take code that's been stripped of all language features (reserved words etc.) so that only identifiers remain, and feed that through the tag cloud generator so it can be compared to the tag cloud generated from the requirements. It's by no means a scientific approach to DDD, but it could at least give you a quick visual way of seeing how closely your code matches the problem domain.
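As a rough illustration of what I mean - a sketch, not production code - something like this would strip out C# reserved words and leave just the identifiers to paste into a tag cloud generator:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static class IdentifierExtractor
{
    // A (deliberately incomplete) set of C# reserved words to filter out.
    private static readonly HashSet<string> ReservedWords = new HashSet<string>
    {
        "public", "private", "class", "void", "int", "string", "return",
        "new", "if", "else", "for", "foreach", "while", "using", "namespace"
    };

    public static IEnumerable<string> ExtractIdentifiers(string sourceCode)
    {
        // Pull out anything that looks like an identifier, then drop reserved words;
        // what's left can be fed straight into the tag cloud generator.
        return Regex.Matches(sourceCode, @"[A-Za-z_][A-Za-z0-9_]*")
                    .Cast<Match>()
                    .Select(m => m.Value)
                    .Where(word => !ReservedWords.Contains(word));
    }
}
```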



April 15, 2015

Reality-Driven Development - Do We Need To Train Ourselves To Tune Into Real-World Problems?

The world is full of problems.

That is to say, the world is full of people who are hindered in their efforts to achieve their goals, no matter how small and everyday those goals might be.

I'm no different to anyone else in that respect; every day I hit barriers, overcome obstacles, and occasionally give up trying. It might be something as prosaic as being unable to find an item in the shops when I'm in town, or wondering if there's a café or a pub nearby that has seating available at that specific moment in time. Or it could be something as astronomically important as wondering if aliens have ever visited our solar system, and have they left any hardware behind.

Problems, problems, problems. The real world's full of interesting problems.

And so it comes as some surprise that when my apprentice and I take a moment to think about ideas for a software project, we struggle to think of any real-world problems that we could have a go at solving.

It struck me in that moment that maybe, as a species, we software bods are a bit crap at noticing problems that have nothing to do with developing software. Typically, when I ask developers to think about ideas for a project, the suggestions have a distinctly technical bent. Maybe a new testing tool, or a plug-in for Visual Studio, or Yet Another MVC Framework, or something that does something to do with builds (and so on.)

A lifetime of minor frustrations, some of which software might be able to help with, pops clean out of our heads. And I ask: are we just not tuned in to real-world problems?

There are people who walk into a room and notice that the scatter cushions complement the carpet. I don't. I'm just not tuned in to that sort of thing. And if you give them some money, and say "improve this room", they'll have all sorts of ideas about furniture and bookshelves and paintings and curtains and pot plants, while I would buy a Massive F***ing TV™ and a Playstation 4. I'm tuned into technology in ways they're not, and they're tuned into home furnishings in ways I'm not.

Do we need to mindfully practice noticing everyday problems where software might help? Do we need to train ourselves to tune in to real-world problems when we're thinking about code and what we can potentially do with it? Do we need to work at noticing the colour of the scatter cushions when we walk into a room?

I've suggested that Will and I keep notes over the next two weeks about the problems we come up against, so that when we next pair, there might be some inspiration for us to draw from.

Which gives me an idea for an app....



April 8, 2015

Reality-driven Development - Creating Software For Real Users That Solve Real Problems In the Real World

It's a known fact that software development practices cannot be adopted until they have a pithy name to identify the brand.

Hence it is that, even though people routinely acknowledge that it would be a good idea for development projects to connect with reality, very few actually do because there's no brand name for connecting your development efforts with reality.

Until now...

Reality-driven Development is a set of principles and practices aimed at connecting development teams to the underlying reality of their efforts so that they can create software that works in the real world.

RDD doesn't replace any of your existing practices. In some cases, it can short-circuit them, though.

Take requirements analysis, for example: the RDD approach compels us to immerse ourselves in the problem in a way traditional approaches just can't.

Instead of sitting in meeting rooms talking about the problem with customers, we go out to where the problem exists and see and experience it for ourselves. If we're tasked with creating a system for call centre operatives to use, we spend time in the call centre, we observe what call centre workers do - pertinent to the context of the system - and most importantly, we have a go at doing what the call centre workers do.

It never ceases to amaze me how profound an effect this can have on the collaboration between developers and their customers. Months of talking can be compressed into a day or two of real-world experience, with all that tacit knowledge communicated in the only way that tacit knowledge can be. Requirements discussions take on a whole different flavour when both parties have a practical, first-hand appreciation of what they're talking about.

Put the shoe on the other foot (and that's really what this practice is designed to do): imagine your customer is tasked with designing software development tools, based entirely on an understanding they've built about how we develop software purely based on our description of the problem. How confident are you that we'd communicate it effectively? How confident are you that their solutions would work on real software projects? You would expect someone designing dev tools to have been a developer at some point. Right? So what makes us think someone who's never worked in a call centre will be successful at writing call centre software? (And if you really want to see some pissed off end users, spend an hour in a call centre.)

So, that's the first practice in Reality-driven Development: Real-world Immersion.

We still do the other stuff - though we may do it faster and more effectively. We still gather user stories as placeholders for planning and executing our work. We still agree executable acceptance tests. We still present the software to the customer when we want feedback. We still iterate our designs. But all of these activities are now underpinned by a much more solid and practical shared understanding of what it is we're actually talking about. If you knew just how much of a difference this can make, it would be the default practice everywhere.

Just exploring the problem space in a practical, first-hand way can bridge the communication gap in ways that none of our existing practices can. But problem spaces have to be bounded, because the real world is effectively infinite.

The second key practice in Reality-driven Development is to set ourselves meaningful Real-world Goals: that is, goals that are defined in and tested in the real world, outside of the software we build.

Observe a problem in the real world. For example, in our real-world call centre, we observe that operatives are effectively chained to their desks, struggling to take regular comfort breaks, and struggling to get away at the end of a shift. We set ourselves the goal of every call centre worker getting at least one 15-minute break every 2 hours, and working a maximum of 15 minutes' unplanned overtime at the end of a day. This goal has nothing to do with software. We may decide to build a feature in the software they use that manages breaks and working hours, and diverts calls that are coming in just before their break is due. It would be the software equivalent of when the cashier at the supermarket checkout puts up one of those little signs to dissuade shoppers from joining their queue when they're about to knock off.

Real-world Goals tend to have a different flavour to management-imposed goals. This is to be expected. If you watch any of those "Back to the floor" type TV shows, where bosses pose as front-line workers in their own businesses, it's very often the case that the boss doesn't know how things really work, and what the real operational problems are. This raises natural cultural barriers and issues of trust. Management must trust their staff to drive development and determine how much of the IT budget gets spent. This is probably why almost no organisation does it this way. But the fact remains that, if you want to address real-world problems, you have to take your cues from reality.

Important, too, is the need to strike a balance in your Real-world Goals. While we've long had practices for discovering and defining business goals for our software, they tend to suffer from a rather naïve 1-dimensional approach. Most analysts seek out financial goals for software and systems - to cut costs, or increase sales, and so on - without looking beyond that to the wider effect the software can have. A classic example is music streaming: while businesses like Spotify make a great value proposition for listeners, and for major labels and artists with big back catalogues, arguably they've completely overlooked 99.9% of small and up-and-coming artists, as well as writers, producers and other key stakeholders. A supermarket has to factor in the needs of suppliers, or their suppliers go out of business. Spotify has failed to consider the needs of the majority of musicians, choosing to focus on one part of the equation at the expense of the other. This is not a sustainable model. Like all complex systems, dynamic equilibrium is usually the only viable long-term solution. Fail to take into account key variables, and the system tips over. In the real world, few problems are so simple as to only require us to consider one set of stakeholders.

In our call centre example, we must ask ourselves about the effect of our "guaranteed break" feature on the business itself, on its end customers, and on anyone else who might be affected by it. Maybe workers get their breaks, but not without dropping calls, or not without a drop in sales. All of these perspectives need to be looked at and addressed, even if by addressing them we end up knowingly impacting people in a negative way. Perhaps we can find some other way to compensate them. But at least we're aware.

The third leg of the RDD table - the one that gives it the necessary balance - is Real-world Testing.

Software testing has traditionally been a standalone affair. It's vanishingly rare to see software tested in context. Typically, we test it to see if it conforms to the specification. We might deploy it into a dedicated testing environment, but that environment usually bears little resemblance to the real-world situations in which the software will be used. For that, we release the software into production and cross our fingers. This, as we all know, pisses users off no end, and rapidly eats away at the goodwill we rely on to work together.

Software development does have mechanisms that go back decades for testing in the real world. Alpha and Beta testing, for example, are pretty much exactly that. The problem with that kind of small, controlled release testing is that it usually doesn't have clear goals, and lacks focus as a result. All we're really doing is throwing the software out there to some early adopters and saying "here, waddaya think?" It's missing a key ingredient - real-world testing requires real-world tests.

Going back to our Real-world Goals, in a totally test-driven approach, where every requirement or goal is defined with concrete examples that can become executable tests, we're better off deploying new versions of the software into a real-world(-ish) testing environment that we can control completely, where we can simulate real-world test scenarios in a repeatable and risk-free fashion, as often as we like.

A call centre scenario like "Janet hasn't taken a break for 1 hour and 57 minutes, there are 3 customers waiting in the queue, they should all be diverted to other operators so Janet can take a 15-minute break. None of the calls should be dropped" can be simulated in what we call a Model Office - a recreation of all or part of the call centre, into which multiple systems under development may be deployed for testing and other purposes.
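To show the shape of such a test, here's a minimal sketch in C# with NUnit. Everything in it is invented for the example, and the ModelOffice class here is a toy in-memory stand-in - in practice it would drive the real simulated call centre - but it illustrates how a real-world scenario becomes an executable, repeatable test:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// A deliberately tiny, in-memory stand-in for the model office - just enough
// to express the scenario as an executable test.
public class ModelOffice
{
    public class Operator
    {
        public string Name { get; set; }
        public TimeSpan TimeSinceLastBreak { get; set; }
        public bool IsOnBreak { get; set; }
        public int CallsHandled { get; set; }
    }

    private readonly List<Operator> operators = new List<Operator>();
    private int queuedCalls;
    public int DroppedCalls { get; private set; }

    public Operator AddOperator(string name, TimeSpan timeSinceLastBreak)
    {
        var op = new Operator { Name = name, TimeSinceLastBreak = timeSinceLastBreak };
        operators.Add(op);
        return op;
    }

    public void QueueIncomingCalls(int count)
    {
        queuedCalls += count;
    }

    public void AdvanceClock(TimeSpan elapsed)
    {
        foreach (var op in operators)
        {
            op.TimeSinceLastBreak += elapsed;
            if (op.TimeSinceLastBreak >= TimeSpan.FromHours(2))
                op.IsOnBreak = true;   // break is due: take this operator off the phones
        }

        // Divert waiting calls to operators who aren't on a break, rather than dropping them.
        var available = operators.Where(o => !o.IsOnBreak).ToList();
        while (queuedCalls > 0)
        {
            if (!available.Any())
            {
                DroppedCalls += queuedCalls;
                queuedCalls = 0;
                break;
            }
            foreach (var op in available)
            {
                if (queuedCalls == 0) break;
                op.CallsHandled++;
                queuedCalls--;
            }
        }
    }
}

[TestFixture]
public class GuaranteedBreakTests
{
    [Test]
    public void Operator_due_a_break_is_relieved_without_any_calls_being_dropped()
    {
        var office = new ModelOffice();
        var janet = office.AddOperator("Janet", TimeSpan.FromMinutes(117));
        var bill = office.AddOperator("Bill", TimeSpan.FromMinutes(30));
        office.QueueIncomingCalls(3);

        office.AdvanceClock(TimeSpan.FromMinutes(3));   // Janet's break becomes due

        Assert.IsTrue(janet.IsOnBreak);
        Assert.AreEqual(0, office.DroppedCalls);
        Assert.AreEqual(3, bill.CallsHandled);
    }
}
```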

Our call centre model office simulates the real environment faithfully enough to get meaningful feedback from trying out software in it, and should allow us to trigger scenarios like this over and over again. In particular, model offices enable us to exercise the software in rare edge cases and under unusually high peak loads that Alpha and Beta testing are less likely to throw up. (e.g., what happens if everyone is due a break within the next 5 minutes?)

Unless you're working on flight systems for fighter aircraft or control systems for nuclear power stations, it doesn't cost much to set up a testing environment like this, and the feedback you can get is worth far more.

The final leg of the RDD table is Real-world Iterating.

So we immerse ourselves in the problem, find and agree real-world goals, and test our solutions in a controlled simulation of the real world. None of this, even taken together with existing practices like ATDD and Real Options, guarantees that we'll solve the problem - certainly not the first time.

Iterating is, in practice, the core requirements discipline of Agile Software Development. But too many Agile teams iterate blindly, making the mistake of believing that the requirements they've been given are the real goals of the software. If they weren't elucidated from a real understanding of the real problem in the real world, then they very probably aren't the real goals. More likely, what teams are iterating towards is a specification for a solution to a problem they don't understand.

The Agile Manifesto asks us to value working software over comprehensive documentation. Reality-driven Development widens the context of "working software" to mean "software that testably solves the user's problem", as observed in the real world. And we iterate towards that.

Hence, we ask not "does the guaranteed break feature work as agreed?", but "do operatives get their guaranteed breaks, without dropping sales calls?" We're not done until they do.

This is not to say that we don't agree executable feature acceptance tests. Whether or not the software behaves as we agreed is the quality gate we use to decide if it's worth deploying into the Model Office at all. The software must jump the "it passes all our functional tests" gate before we try it on the "but will it really work, in the real world?" gate. Model Office testing is more complex and more expensive, and ties up our customers. Don't do it until you're confident you've got something worth testing in it.

And finally, Real-world Testing wouldn't be complete unless we actually, really tested the software in the real real world. At the point of actual deployment into a production environment, we can have reasonably high confidence that what we're putting in front of end users is going to work. But that confidence must not spill over into arrogance. There may well be details we overlooked. There always are. So we must closely observe the real software in real use by real people in the real world, to see what lessons we can learn.

So there you have it: Reality-driven Development

1. Real-world Immersion
2. Real-world Goals
3. Real-world Testing
4. Real-world Iterating


...or "IGTI", for short.


April 6, 2015

Acceptance Tests - The Difference Between Outcomes & Implications Of Outcomes

A little thought to end Easter Sunday.

When writing acceptance tests, I encourage people to distinguish between outcomes and the implications of outcomes.

It's a subtle difference. Imagine we have a test for winning the lottery jackpot. The outcome is that the prize money is credited to the bank balance of the winner. Once they have the money, they can buy a yacht.

Buying a yacht is not an outcome of winning the lottery. But winning the lottery makes it possible to buy a yacht.

Similarly, the outcome of registering as a member of a video library is not that you can borrow videos. Being a member makes it possible for you to borrow videos. But the outcome is that you are now on the list of members who can borrow videos.

Do you see what I'm getting at?

Outcomes in acceptance tests should describe the changes in state of the system's data - a change to a bank balance, or a change to the list of registered video library members.

Such changes may make pre-conditions of other use cases true, like borrowing videos. But that's not an actual outcome. That's enabled by the outcome.
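In test code, the difference might look like this - a small sketch with invented class names, asserting the change of state rather than the enabled use case:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Minimal illustrative classes - invented for this example only.
public class Member
{
    public string Name { get; }
    public Member(string name) { Name = name; }
}

public class VideoLibrary
{
    public List<Member> Members { get; } = new List<Member>();

    public Member Register(string name)
    {
        var member = new Member(name);
        Members.Add(member);
        return member;
    }
}

[TestFixture]
public class MemberRegistrationTests
{
    [Test]
    public void Registering_adds_the_person_to_the_list_of_members()
    {
        var library = new VideoLibrary();

        var member = library.Register("Joe Peters");

        // The outcome: a change to the system's data - the members list.
        Assert.IsTrue(library.Members.Contains(member));

        // We would NOT assert here that the new member can borrow a video;
        // that's a separate use case enabled by this outcome, not part of it.
    }
}
```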

So, the next time you're writing acceptance tests, and detailing the outcomes of a user action or system event, look for the changes, not the implications of the changes.

April 4, 2015

Why Software Craftsmanship Needs To Stop Worrying About The Colour Of The Bike Shed & Focus On The Big Issues

Why do software development efforts run into trouble?

When we examine the factors that tend to make development harder, and that raise the risk of failure - that is, failure for development to achieve its goals - we find the same culprits again and again.

1. Lack of customer involvement and "management support"

2. Lack of a clear vision or goal(s)

3. Unrealistic expectations

4. Projects that are too big and complicated to succeed

5. Failure to get feedback & incorporate learning throughout development

6. Lack of care taken to deal with issues early and cheaply

7. Overstaffing of teams to make them go faster

8. Hiring under-skilled teams

9. Being driven by the technology rather than the problem

10. Too much emphasis on the process (and its artefacts) and not enough on outcomes (and the proof)

These risk factors often can be seen conspiring to create some almighty shitstorms.

My overriding experience as a freelance developer and tech lead, going from project to project as we do, was of giant overstaffed projects executed by developers who were largely hired on price rather than ability and encouraged to hack code out with minimal attention to quality, with stratospheric ambitions squeezed into tiny schedules and arbitrary release dates cooked up by absentee customers who never gave us a clear idea of what it actually was we were setting out to achieve in the first place - though the teams had some pretty strong ideas about which technologies we should use (usually dictated by what they'd like to add to their CVs).

That, it would appear, is the norm in software development.

The inverse of that: small projects staffed by small and highly skilled teams, with clear and testable business goals from a committed customer whose involvement is pretty much daily, iterating towards working (tested) solutions on timescales decided by real progress towards our goals, using whatever technology is best to get the job done.

Parkinson's Law of Triviality (the "bike shed effect", named after a nuclear reactor design committee obsessing over a bike shed while ignoring major issues) predicts that teams will focus on the things that matter least but that they're most comfortable talking about. Hence, we can devote a lot of time to debating the deployment architecture while none of us dares ask "but where is our customer?" It's understandable; many developers do not feel empowered, or lack the confidence, to tackle such matters as lack of customer involvement, dysfunctional hiring policy or unrealistic deadlines. So we obsess about Lambda expressions and Maven instead. That's stuff we know; that's stuff we can control.

But, in the grand scheme of things, Lambda expressions and Maven don't amount to a hill of beans compared to the big stuff. A project can hobble along quite nicely using while loops and Ant scripts. But a project that's too big is almost certain to fail, crushed under its own weight.

We're mistaken, though, that we have no power to influence these factors. In actual fact, as the people who make the software happen - without whom, no software will happen - we have the ultimate power. We can say "no".

When the customer is absent, stop development until you've got the access you need.

When the goals aren't clear and realistic, stop development until you have at least one testable and achievable goal.

When the boss insists on hiring a developer who's not up to snuff, stop development until they're removed from the team.

If these risks can't be addressed, the project has likely failed anyway. You're doing everyone a favour by effectively canning it from the developer's side. It's within our power to do these things.

Other professions can and do do this. Some are legally obliged to, and can go to prison if they don't. I don't advocate legislating to make developers more professional. But if we continue to roll over on these issues, eventually someone somewhere will.

Now, the meeker and milder among us, who find themselves retching at the mere thought of taking a stand, will say "I would have a conversation with them first". And of course you should. But here's the thing, and we've all been there: there will be times when we raise our concerns and the boss says "do it anyway". So what then?

I believe that knowingly choosing failure is bad. It's wrong. It really shouldn't be allowed. But it is, and teams do it all the time. They still get paid, of course. And that's one of the reasons why long schedules with release dates months or years away are still popular. It's how we make a living from failure. The longer we can make it before the shit hits the proverbial fan, the easier it is to sustain a living effectively throwing shit at fans.

Now, if you're the only person on the team who thinks this way, then realistically you're choosing between staying and probably failing, and walking. People with mortgages and school fees and car payments do not like this option. Which is why I coined the term mortgage-driven development, specifically to describe an approach to software development that knowingly embraces failure to ensure continuity of income.

And that, too, is pretty much the norm in software development. You are very likely in a minority on your team if you think that these big risks must be addressed. You are outnumbered by mortgage-driven developers who will accept working without clear goals, but with unrealistic expectations, in inexperienced and probably overstaffed teams, with absentee customers, and all the other risk factors that sink projects - for as long as they keep paying them.

Until that balance changes, choosing failure will continue to be the default approach.

I suppose, secretly, I hoped the software craftsmanship movement would tip that balance. I hoped that more teams would take a stand. But I've seen no evidence of this happening. Mortgage-driven Development is as ubiquitous as ever. Only now, it's more like "Test-driven Mortgage-driven Development (with Angular.js and Microservices)", building on that old tradition of focusing on the things that matter least.

That's not to say that craftsmanship - or rather, the narrow reinterpretation of craftsmanship that's dominated the last 8 years or so - isn't important. It's important enough that I choose to focus my business on it almost exclusively. But it's only one piece - a vital piece - of the puzzle that should be software craftsmanship.

As you may know, I'm no fan of renaming things. The fact that the craftsmanship community has been "doing craftsmanship wrong" all these years does not mean that we need a new name for software craftsmanship to make it explicit that it should be about taking care over all aspects of getting software right, including the big issues we prefer not to tackle. I want us to keep the name, and the emphasis on the practical that's characterised craftsmanship in the UK and Europe in particular - it should still be about creating working software, and not just be a talking shop about "what it means to be a craftsman" (we've had enough navel-gazing to last us a while).

There's a tendency for movements in software development to tip us off-balance. We seem to lurch from one over-emphasis (e.g., project management) to another (e.g., code quality), never achieving an actual balance of all these forces. Unbalanced Forces is itself considered a software project anti-pattern. We've actually woven unbalanced forces into the fabric of how we progress as a profession, constantly rediscovering ideas we knew about decades ago and then forgetting them again as the next shiny new idea comes along.

There just needs to be greater emphasis now on issues like hiring, project size, agreeing goals, and all the stuff that hopefully will lead to us writing Clean Code on projects that give a damn. We've seen hints of it; and some clever approaches to making it hands-on and practical, as it should be.

There. I've said it.





April 2, 2015

"It Works" Should Mean "It Works In the Real World"

In software development, what we call "testing" usually refers to the act of checking whether what we created conforms to the specification for what we were supposed to build.

To illustrate, let's project that idea onto some other kind of functional product: motor cars.

A car designer comes up with a blueprint for a car. The specification is detailed, setting out the exact body shape and proportions, down to what kind of audio player it should have.

The car testers come in and inspect a prototype, checking off all the items in the specification. Yes, it does indeed have those proportions. Yes, it does have the MP3 player with iPod docking that the designer specified.

They do not start the car. They do not take it for a test drive. They do not stick it in a wind tunnel. If it conforms to the specification, then the car is deemed roadworthy and mass manufacturing begins.

This would, of course, be very silly. Dangerous, in fact.

Whether or not the implementation of the car conforms to the specification is only a meaningful question if we can be confident that the specification is for a design that will work.

And yet, software teams routinely make the leap from "it satisfies the spec" to "okay, ship it" without checking that the design actually works. (And these are the mature teams. Many more teams don't even have a specification to test from. Shipping for them is a complete act of faith and/or indifference.)

Shipping software then becomes itself an act of testing to see if the design works. Which is why software has a tendency to become useful in its second or third release, if it ever does at all.

This is wrongheaded of us. If we were making cars - frankly, if we were making cakes - this would not be tolerated. When I buy a car, or a cake, I can be reasonably assured that somebody somewhere drove it or tasted it before mass manufacture went ahead.

What seduces us is this notion that software development doesn't have a "manufacturing phase". We ship unfinished products because we can change them later. Customers have come to expect it. So we use our customers as unpaid testers to figure out what it is we should have built. Indeed, not only are they unpaid testers, many of them are paying for the privilege of being guinea pigs for our untried solutions.

This should not be acceptable. When a team rolls out a solution across thousands of users in an organisation, or millions of users on the World Wide Web, those users should be able to expect that it was tested not just for conformance to a spec, but tested to find out if the design actually works in the real world.

But it's vanishingly rare that teams bother to pay their customers this basic courtesy. And so, we waste our users' time with software that may or may not work, making it their responsibility to find out.

My way of doing it requires that teams continuously test their software by deploying each version into an environment similar to a motor vehicle test track. If we're creating, say, a sales automation solution, we create a simulated sales office environment - e.g., a fake call centre with a couple of trained operators, real telephones and mock customers triggering business scenarios gleaned from real life - and put the software through its paces on the kinds of situations that it should be able to handle. Or, rather, that the users should be able to handle using the software.

Like cars, this test track should be set up to mimic the environment in which the software's going to be used as faithfully as we can. If we were test driving a car intended to be used in towns and cities, we might create a test track that simulates high streets, pedestrian crossings, school runs, and so on.

We can stretch these testing environments, testing our solutions to destruction. What happens if our product gets a mention on primetime TV and suddenly the phones are ringing off the hook? Is the software scalable enough to handle 500 sales enquiries a minute? We can create targeted test rigs to answer those kinds of questions, again driven by real-world scenarios, as opposed to a design specification.

There are hard commercial imperatives for this kind of testing, too. Iterating designs by releasing software is an expensive and risky business. How many great ideas have been killed by releasing to a wide audience too early? We obsess over the opportunity cost of shipping too late, but we tend to be blind to the "embarrassment cost" of shipping products that aren't good enough yet. If anything, we err on the side of the former, and should probably learn a bit more patience.

By the time we ship software, we ought to be at least confident that it will work 99% of the time. Too often, we ship software that doesn't work at all.

Yes, it passes all the acceptance tests. But it's time for us to move on from that to a more credible definition of "it works"; one that means "it will work in the real world". As a software user, I would be grateful for it.


March 27, 2015

Component-based & Microservice Architecture: Swappability Happens On The Client Side

Lunchtime teleconference looms; just enough time to spew out these thoughts about distributed component architectures (you young hip folk may know them as "microservices", which is the trendy cool Dubstep name for them).

The key to distributed components is swappability - actually, that's kind of the whole point of components generally, distributed or in-process, so take this as generally applicable advice. Or don't. See if I care.

What our bearded young hipster friends often forget to mention is that the swappability in component-based design really happens on the client side of component-to-component (or service-to-service) collaborations. Not, as you may have been led to believe, on the server side.

Sure, we could make pretty much any kind of component present the same, say, REST API. But we still run the risk of binding our client to that API, and to the details of how to consume RESTful services. (Which can be, ironically, anything but restful to work with.)

Nope. To make that service truly swappable, we have to hide all of the details from the client.

UML 2.0, that leviathan of the component-based era, introduced the notion of components and connectors. It's a pretty neat idea: basically, we cleanly separate the logical conversation held between two components from the dirty business of the medium through which that conversation takes place.

To connect the idea with the real world, let's say I asked the prime minister if he's ever eaten three Shredded Wheat and he replied "Yes". Now, I didn't say how I asked him. Maybe I went to Number 10 and asked him to his face. Maybe I emailed him. Maybe I had the words branded onto a poor person and paraded said pauper through the streets so that the PM would see the question when they reported the news on his tellybox.

What matters is that I asked, and he replied.

In component architectures, we seek to separate the logic of component interactions from the protocols through which they physically connect.

Let's say we have a Video Library web application that wants to display 3rd-party reviews of movies.

We could look for reviews on IMDB, or on Rotten Tomatoes (or on both). We want to ask the logical question: what reviews have been written for this movie (identified by the movie's title and year)?

We can codify the logic of that interaction with interfaces, and package those interfaces in a component the client knows about - in this case, the VideoReviews .NET assembly, which contains the interfaces IReviewService and IReview.



The VideoTitle class that consumes these services doesn't need to know where the reviews are coming from, or how they're being marshalled. It just wants to ask the logical question. So we present it with a logical interface through which to do that.
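Sketched in C# - the type names come from the diagrams, but the members are my own invention - the abstractions and the consuming class might look something like this:

```csharp
using System.Collections.Generic;

// The VideoReviews assembly: purely logical abstractions, with no knowledge
// of IMDB, Rotten Tomatoes, REST, or any other connection details.
public interface IReview
{
    string Reviewer { get; }
    string Text { get; }
    int Rating { get; }
}

public interface IReviewService
{
    IEnumerable<IReview> FetchReviews(string title, int year);
}

// In the VideoLibrary assembly: the client asks the logical question only.
public class VideoTitle
{
    private readonly string title;
    private readonly int year;
    private readonly IReviewService reviewService;

    public VideoTitle(string title, int year, IReviewService reviewService)
    {
        this.title = title;
        this.year = year;
        this.reviewService = reviewService;  // injected from outside - no hard-wired dependency
    }

    public IEnumerable<IReview> GetReviews()
    {
        return reviewService.FetchReviews(title, year);
    }
}
```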



By injecting the review service into the VideoTitle, it becomes possible to dynamically bind implementations of those interfaces that know how to connect and interact with the remote server (e.g., the Rotten Tomatoes API), and unpacks the data that comes back, translating into a form that VideoTitle can use easily.

All of that is done behind the scenes: VideoTitle knows nothing about the details. And because it knows nothing about the details, and because we're injecting it into the VideoTitle from outside - e.g., in its constructor, or as a method parameter when VideoTitle is told to get reviews - it becomes possible to swap in different connectors to get reviews from other services.
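A connector living in its own component might implement that logical interface by talking to one particular provider. Here's a rough sketch, building on the interfaces above and deliberately hand-waving the real HTTP plumbing (this is not the actual Rotten Tomatoes API, just the shape of a connector):

```csharp
using System.Collections.Generic;
using System.Linq;

// Lives in its own connector component. VideoTitle never sees this class;
// it only ever sees IReviewService.
public class RottenTomatoesReviewService : IReviewService
{
    public IEnumerable<IReview> FetchReviews(string title, int year)
    {
        // In real life: call the provider's web API here, then translate the
        // response into the logical form the client understands. The details -
        // URLs, authentication, parsing - stay hidden behind this class.
        var providerData = CallRemoteApi(title, year);

        return providerData.Select(d => (IReview)new Review(d.Author, d.Body, d.Score));
    }

    private IEnumerable<ProviderReviewData> CallRemoteApi(string title, int year)
    {
        // Placeholder for the actual HTTP call and deserialisation.
        return Enumerable.Empty<ProviderReviewData>();
    }

    private class ProviderReviewData
    {
        public string Author;
        public string Body;
        public int Score;
    }

    private class Review : IReview
    {
        public Review(string reviewer, string text, int rating)
        {
            Reviewer = reviewer;
            Text = text;
            Rating = rating;
        }

        public string Reviewer { get; }
        public string Text { get; }
        public int Rating { get; }
    }
}
```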



All of this can be wired together from above (e.g., we could instantiate an implementation of IReviewService when the application starts up), or with a dependency injection framework, and so on. The possibilities are many and varied for runtime hi-jinks and dynamic larks.
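Wiring it together from above can be as simple as this (or the equivalent registration in your dependency injection framework of choice) - again, a sketch with invented names, using the classes from the sketches above:

```csharp
public static class CompositionRoot
{
    public static VideoTitle CreateVideoTitle()
    {
        // Swapping providers means changing one line here (or one container
        // registration) - VideoTitle itself never changes.
        IReviewService reviews = new RottenTomatoesReviewService();

        return new VideoTitle("The Abyss", 1989, reviews);
    }
}
```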

The component dependencies are crucial: our app logic (in the VideoLibrary component) only depends on the abstractions in the VideoReviews library. It depends in no way on the external services. It does not even know there are external services involved.

All component dependencies point towards the abstractions, satisfying the Stable Abstractions package design principle.

It now becomes possible to do clever things with swappability, like pooling connectors that point to different service instances to provide basic load-balancing or fail-over, or giving the end user the choice of which service to use at runtime.

Most importantly, though, it gives us the ability to vary the logic of our applications and the details of how they connect to external services independently of each other. If a new movie review site came along, we would simply have to write a new connector for it, and wouldn't have to rely on them implementing the same web API as our existing providers. Because that, my friends, is beyond our control.

So, succeeding with components is about swappability, and swappability is about programming the logic of our applications against clean interfaces that we control.

The REST is details.