August 17, 2015

The Small Teams Manifesto

So.... grrr.... arrrgh... etc.

Another week, another software project set up to fail right from the start.

What angers me most is that the key factors that can severely damage our chances of delivering software of value are well understood.

The main one is size: big projects usually fail.

We know pretty empirically that the development effort required to deliver software grows exponentially as the software grows. If it takes a team 2 hours to deliver 10 lines of working software, it might take them 4 hours to deliver 20 lines of working software, and 8 hours to deliver 30 lines, and so on.

There's no economy of scale in software development, and we know for a fact that throwing more bodies at the problem just makes things worse. A team of 2 might deliver a solution in six months. A team of 10 might take a year, because so much of their time will be taken up by being in a team of 10.

The evidence strongly suggests that the probability of project failure grows rapidly as project size increases, and that projects costing more than $1 million are almost certain to run into severe difficulties.

In an evidence-loaded article from 1997 called Less Is More, Steve McConnell neatly sums it all up. For very good reasons, backed up with very good evidence, we should seek to keep projects as small as possible.

Not only does that tend to give us exponentially more software for the same price, but it also tends to give us better, more reliable software, too.

But small teams have other big advantages over large teams; not least is their ability to interact more effectively with the customer and/or end users. Two developers can have a close working relationship with a customer. Four will find it harder to get time with him or her. Ten developers will inevitably end up with one of them owning that working relationship and becoming a bottleneck within the team, because they're effectively just a proxy customer - I've seen it so many times.

A close working relationship with our customers is routinely cited as the biggest factor in software development success. The more people competing for the customer's time, the less time everyone gets with the customer. Imagine an XP team picking up user stories at the beginning of an iteration; if there's only one or two pairs of developers, they can luxuriate in time spent with the customer agreeing acceptance tests. If there are a dozen pairs, then the customer has to ration that time quite harshly, and pairs walk away with a less complete understanding of what they're being asked to build (or have to wait a day or two to get that understanding.)

Another really good reason why small teams tend to be better is that small teams are easier to build. Finding one good software developer takes time. Finding a dozen is a full-time job for several months. I know, I've tried and I've watched many others try.

So, if you want more reliable software, designed in close collaboration with the customer, at an exponentially cheaper price (and quite possibly in less time, too), you go with small teams. Right?

So why do so many managers still opt for BIG projects staffed by BIG teams?

I suspect the reasons are largely commercial. First of all, you don't see many managers boasting about how small their last project was, and the trend is that the more people you have reporting to you, the more you get paid. Managers are incentivised to go big, even though it goes against their employer's interests.

Also, a lot of software development is outsourced these days, and there's an obvious incentive for the sales people running those accounts to go as big as possible for as long as possible. Hence massively overstaffed Waterfall projects are still the norm in the outsourcing sector - even when they call it "Agile". (We tend to hear all sorts of euphemisms for it in this sector - "enterprise Agile", "scaling up Agile", and so on.)

So there are people who do very well by perpetuating the myth of an economy of scale in software development.

But in the meantime, eye-popping amounts of time and money are being wasted on projects that have the proverbial snowball's chance in hell of delivering real value. I suspect it's so much money - tens of billions of pounds a year in Britain alone, I'd wager - and so much time wasted that it's creating a drag effect on the economy we're supposed to be serving.

Which is why I believe - even though, like many developers, I might have a vested interest in perpetuating The Mythical Man-Month - it's got to stop.

I'm pledging myself to a sort of Small Teams Manifesto, that goes something like this:

We, the undersigned, believe that software teams should be small, highly skilled and working closely with customers

Yep. That's it. Just that.

I will use the hashtag #smallteams whenever I mention it on social media, and I will be looking to create a sort of "Small Teams avatar" to use on my online profiles to remind me, and others, that I believe software development teams should be small, highly-skilled and working closely with customers.

You, of course, can join in if you wish. Together, we can beat BIG teams!



August 10, 2015

A Hierarchy Of Software Design Needs

Design is not a binary proposition. There is no clear dividing line between a good software design and bad software design, and even the best designs are compromises that seek to balance competing forces like performance, readability, testability, reuse and so on.

When I refactor a design, it can sometimes introduce side-effects - namely, other code smells - that I deem less bad than what was there before. For example, maybe I have a business object that renders itself as HTML - bad, bad, bad! Right?

The HTML format is likely to change more often than the object's data schema, and we might want to render it to other formats, too. So it makes sense to split out the rendering part into a separate object. But in doing so, we end up creating "feature envy" - an unhealthily high coupling between our renderer and the business object, so it can get the data it needs - in the process.

I consider the new feature envy less bad than the dual responsibility, so I live with it.
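
To make that concrete, here's a minimal sketch of the kind of trade-off I mean (the Invoice and InvoiceHtmlRenderer names are purely illustrative, not from a real project):

class Invoice {
    private final String customerName;
    private final double total;

    Invoice(String customerName, double total) {
        this.customerName = customerName;
        this.total = total;
    }

    String getCustomerName() { return customerName; }
    double getTotal() { return total; }
}

class InvoiceHtmlRenderer {
    // Feature Envy: this method is all about Invoice's data, but at least
    // the HTML knowledge no longer lives inside the business object
    String render(Invoice invoice) {
        return "<div>" + invoice.getCustomerName()
                + ": " + invoice.getTotal() + "</div>";
    }
}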

In fact, there tends to be a hierarchy of needs in software design, where one design issue will take precedence over another. It's useful, when starting out, to know what that hierarchy of needs is.

Now, the needs may differ depending on the requirements of our design - e.g., on a small-memory device, memory footprint matters way more than it does for desktop software usually - but there is a fairly consistent pattern that appears over and over in the majority of applications.

There is, of course, a whole universe of qualities we may need to balance. But let's deal with the top six to get you thinking:

1. The Code Must Work

Doesn't matter how good you think the design is if it doesn't do what the customer needs. Good design always comes back to "yes, but does it pass the acceptance tests?" If it doesn't, it's de facto a bad design, regardless.

2. The Code Must Be Easy To Understand

By far the biggest factor in the maintainability of code is whether or not programmers can understand it. I will gladly sacrifice less vital design goals to make code more readable. Put more effort into this. And then put even more effort into it. However much attention you're paying to readability, it's almost certainly not enough. C'mon, you've read code. You know it's true.

But if the code is totally readable, but doesn't work, then spend more time on 1.

3. The Code Must Be As Simple As We Can Make It

Less code generally means a lower cost of maintenance. But beware; you can take simplicity too far. I've seen some very compact code that was almost intractable to human eyes. Readability trumps simplicity. And, yes, functional programmers, I'm particularly looking at you.

4. The Code Must Not Repeat Itself

The opposite of duplication is reuse. Yes it is: don't argue!

Duplication in our code can often give us useful clues about generalisations and abstractions that may be lurking in there that need bringing out through refactoring. That's why "removing duplication" is a particular focus of the refactoring step in Test-driven Development.

Having said that, code can get too abstract and too general at the expense of readability. Not everything has to eventually turn into the Interpreter pattern, and the goal of most projects isn't to develop yet another MVC framework.

In the Refuctoring Challenge we do on the TDD workshops, over-abstracting often proves to be a sure-fire way of making code harder to change.

5. Code Should Tell, Not Ask

"Tell, Don't Ask" is a core pillar of good modular -notice I didn't say "object oriented" - code. Another way of framing it is to say "put the work where the knowledge is". That way, we end up with modules where more dependencies are contained and fewer dependencies are shared between modules. So if a module knows the customer's date of birth, it should be responsible for doing the work of calculating the customer's current age. That way, other modules don't have to ask for the date of birth to do that calculation, and modules know a little bit less about each other.

It goes by many names: "encapsulation", "information hiding" etc. But the bottom line is that modules should interact with each other as little as possible. This leads to modules that are more cohesive and loosely coupled, so when we make a change to one, it's less likely to affect the others.

But it's not always possible, and I've seen some awful fudges when programmers apply Tell, Don't Ask at the expense of higher needs like simplicity and readability. Remember simply this: sometimes the best way is to use a getter.

6. Code Should Be S.O.L.I.D.

You may be surprised to hear that I put OO design principles so far down my hierarchy of needs. But that's partly because I'm an old programmer, and can vaguely recall writing well-designed applications in non-OO languages. "Tell, Don't Ask", for example, is as do-able in FORTRAN as it is in Smalltalk.

Don't believe me? Then read the chapter in Bertrand Meyer's Object Oriented Software Construction that deals with writing OO code in non-OO languages.

From my own experiments, I've learned that coupling and cohesion have the bigger impact on the cost of changing code. A secondary factor is substitutability of dependencies - the ability to insert a new implementation in the slot of an old one without affecting the client code. That's mostly what S.O.L.I.D. is all about.
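
For example - and this is a made-up illustration, not code from any of those experiments - substitutability just means the client binds to an abstraction, so a new implementation can slot in without the client changing:

interface PaymentProcessor {
    void takePayment(double amount);
}

class CardPayments implements PaymentProcessor {
    public void takePayment(double amount) {
        System.out.println("charging card: " + amount);
    }
}

class PaypalPayments implements PaymentProcessor {
    public void takePayment(double amount) {
        System.out.println("charging PayPal account: " + amount);
    }
}

class Checkout {
    private final PaymentProcessor payments;

    // Checkout depends only on the abstraction, so either implementation
    // can be substituted without any change to this client code
    Checkout(PaymentProcessor payments) {
        this.payments = payments;
    }

    void settle(double total) {
        payments.takePayment(total);
    }
}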

This is the stuff that we can really only do in OO languages that directly support polymorphism. And it's important, for sure. But not as important as coupling and cohesion, lack of duplication, simplicity, readability and whether or not the code actually works.

Luckily, apart from the "S" in S.O.L.I.D. (Single Responsibility), the O.L.I.D. is fairly orthogonal to these other concerns. We don't need to trade off between substitutability and Tell, Don't Ask, for example. They're quite compatible, as are the other design needs - if you do it right.

In this sense, the trade-off is more about how much time I devote to thinking about S.O.L.I.D. compared to other, more pressing concerns. Think about it: yes. Obsess about it: no.


Like I said, there are many, many more things that concern us in our designs - and they vary depending on the kind of software we're creating - but I tend to find these 6 are usually at the top of the hierarchy.

So... What's your hierarchy of design needs?









Intensive TDD Workshop, London, Sat Oct 10th

Just a quick note about the next public Intensive TDD workshop, which will be in SW London on Saturday October 10th.

The same unbeatably low price of £49 for a fun, packed, challenging and educational day without the frills.

All of the workshops have sold out this year, so book soon to avoid disappointment.







August 7, 2015

Taking Baby Steps Helps Us Go Faster

Much has been written about this topic, but it comes up so often in pairing that I feel it's worth repeating.

The trick to going faster in software development is to take smaller steps.

I'll illustrate why with an example from a different domain: recording music. As an amateur guitar player, I attempt to make recorded music. Typically, what I do is throw together a skeleton for a song - the basic structure, the chord progressions, melody and so on - using a single sequenced instrument, like a nice synth patch. That might take me an afternoon for a 5 minute piece of music.

Then I start working out guitar parts - if it's going to be that style of arrangement - and begin recording them (musos usually call this "tracking").

Take a fiddly guitar solo, for example; a 16-bar solo might last 30 seconds at ~120 beats per minute. Easy, you might think, to record it in one take. Well, not so much. I'm trying to get the best take possible, because it's metal and standards are high.

I might record the whole solo as one take, but it will take me several takes to get one I'm happy with. And even then, I might really like the performance on take #3 in the first 4 bars, really like the last 4 bars of take #6, and be happy with the middle 8 from take #1. I can edit them together - it's a doddle these days - to make one "super take" that's a keeper.

Every take costs time: at least 30 seconds if I let my audio workstation software loop over those 16 bars writing a new take each time.

To get the takes I'm happy with, it cost me 6 x 30 seconds (3 minutes).

Now, imagine I recorded those takes in 4-bar sections. Each take would last 7.5 seconds. To get the first 4 bars so I'm happy with them, I would need 3 x 7.5 seconds (22.5 seconds). To get the last 4 bars, 6 x 7.5 seconds (45 seconds), and to get the middle 8, just 15 seconds.

So, recording it in 4 bar sections would cost me 1m 22.5 seconds.

Of course, there would be a bit of an overhead to doing smaller takes, but what I tend to find is that - overall - I get the performances I want sooner if I bite off smaller chunks.

A performance purist, of course, would insist that I record the whole thing in one take for every guitar part. And that's essentially what playing live is. But playing live comes with its own overhead: rehearsal time. When I'm recording takes of guitar parts, I'm essentially also rehearsing them. The line between rehearsal and performance has been blurred by modern digital recording technology. Having a multitrack studio in my home that I can spend as much time recording in as I want means that I don't need to be rehearsed to within an inch of my life, like we had to be back in the old days when studio time cost real money.

Indeed, the lines between composing, rehearsing, performing and recording have been completely blurred. And this is much the same as in programming today.

Remember when compilers took ages? Some of us will even remember when compilers ran on big central computers, and you might have to wait 15-30 minutes to find out if your code was syntactically correct (let alone if it worked.)

Those bad old days go some way to explaining the need for much up-front effort in "getting it right", and fuelled the artificial divide between "designing" and "coding" and "testing" that sadly persists in dev culture today.

The reality now is that I don't have to go to some computer lab somewhere to book time on a central mainframe, any more than I have to go to a recording studio to book time with their sound engineer. I have unfettered access to the tools, and it costs me very little. So I can experiment. And that's what programming (and recording music) essentially is, when all's said and done: an experiment.

Everything we do is an experiment. And experiments can go wrong, so we may have to run them again. And again. And again. Until we get a result we're happy with.

So biting off small chunks is vital if we're to make an experimental approach - an iterative approach - work. Because bigger chunks mean longer cycles, and longer cycles mean we either have to settle for less - okay, the first four bars aren't that great, but it's the least worst take of the 6 we had time for - or we have to spend more time to get enough iterations (movie directors call it "coverage") to better ensure that we end up with enough of the good stuff.

This is why live performances generally don't sound as polished as studio performances, and why software built in big chunks tends to take longer and/or not be as good.

In guitar, the more complex and challenging the music, the smaller the steps we should take. I could probably record a blues-rock number in much bigger takes, because there's less to get wrong. Likewise in software, the more there is that can go wrong, the better it is to take baby steps.

It's basic probability, really. Guessing a 4-digit number is orders of magnitude easier if we guess one digit at a time: at most 40 guesses (10 per digit) instead of up to 10,000 for the whole number in one go.










August 3, 2015

Refactoring Abstract Feature Envy

Feature Envy is a code smell that describes how a method of one object is really about a different object, evidenced by numerous feature calls to that second object. It suggests we've assigned this responsibility to the wrong object, and - in the simplest examples - we just need to move the method to where it really belongs.

A fiddlier variation on Feature Envy is what I call Abstract Feature Envy, illustrated by this example below:
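
(The original code screenshots aren't reproduced here, so what follows is a rough sketch of the idea - the getFavouriteFood() and eat() methods are made up for illustration.)

interface Animal {
    String getFavouriteFood();
    void eat(String food);
}

class Tiger implements Animal {
    public String getFavouriteFood() { return "steak"; }
    public void eat(String food) { System.out.println("Tiger eats " + food); }
}

class Zebra implements Animal {
    public String getFavouriteFood() { return "grass"; }
    public void eat(String food) { System.out.println("Zebra eats " + food); }
}

class Zookeeper {
    // Abstract Feature Envy: this method is all about Animal, but Animal is
    // an interface, so there's no instance target to move the method to
    public void feedAnimal(Animal animal) {
        animal.eat(animal.getFavouriteFood());
    }
}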



The feedAnimal() method on Zookeeper is really not about the Zookeeper, it's about the Animal. But Animal is an interface. We can't move this method, because that requires an instance target to move it to, and we can't know until runtime what the parameter animal is an instance of - a Tiger, perhaps, or a Zebra?

To move feedAnimal, we have to add it to the interface, and have somewhere for the implementation that contains the feature envy to go. So we have to do a bit of a dance.

I might start by extracting a super-class from Tiger and Zebra, which would be where I'd eventually move the method:
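
Sticking with the sketch above, that step might look like this (Tiger and Zebra keep their existing bodies):

class AnimalBase {
    // deliberately empty - this is the future home for feedAnimal()
}

class Tiger extends AnimalBase implements Animal {
    public String getFavouriteFood() { return "steak"; }
    public void eat(String food) { System.out.println("Tiger eats " + food); }
}

class Zebra extends AnimalBase implements Animal {
    public String getFavouriteFood() { return "grass"; }
    public void eat(String food) { System.out.println("Zebra eats " + food); }
}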



Note that AnimalBase has no methods yet. That's just what I want. This will be the target for moving feedAnimal.

Okay, the next mini-step would be to add an instance of AnimalBase as a parameter to feedAnimal, passing in a new instance from the client code, so we have a target to move an instance method to:
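
In the sketch, that means widening the signature and updating the calling code - something like:

class Zookeeper {
    // the AnimalBase parameter is temporary scaffolding: it gives us an
    // instance target to move this method to in the next step
    public void feedAnimal(Animal animal, AnimalBase target) {
        animal.eat(animal.getFavouriteFood());
    }
}

// and in the client/test code:
//   zookeeper.feedAnimal(tiger, new AnimalBase());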



Okay, now we can move the method to AnimalBase easy peasy. I left a delegate method to keep things simple at the client end:
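
Continuing the sketch:

class AnimalBase {
    // the envious logic now lives here...
    public void feedAnimal(Animal animal) {
        animal.eat(animal.getFavouriteFood());
    }
}

class Zookeeper {
    // ...and Zookeeper just delegates, so the client code barely changes
    public void feedAnimal(Animal animal, AnimalBase target) {
        target.feedAnimal(animal);
    }
}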



Next, we make both parameter values the same target object - tiger or zebra - so that the object feedAnimal is acting on is effectively itself:
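
A test in the sketch might now read (this fragment assumes the Zookeeper and Tiger classes shown earlier):

@Test
public void zookeeperFeedsTigerItsFavouriteFood() {
    Zookeeper zookeeper = new Zookeeper();
    Tiger tiger = new Tiger();
    // both arguments are the same object, so feedAnimal() on AnimalBase
    // is effectively acting on itself
    zookeeper.feedAnimal(tiger, tiger);
}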



Next, we make it so AnimalBase implements the Animal interface - which requires us to make it abstract, because we don't want it to implement the other methods - and remove that implements declaration from the sub-classes Tiger and Zebra:
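
In the sketch:

abstract class AnimalBase implements Animal {
    // abstract, because AnimalBase shouldn't have to implement the rest
    // of the Animal interface itself
    public void feedAnimal(Animal animal) {
        animal.eat(animal.getFavouriteFood());
    }
}

class Tiger extends AnimalBase {   // no longer declares "implements Animal" itself
    public String getFavouriteFood() { return "steak"; }
    public void eat(String food) { System.out.println("Tiger eats " + food); }
}

class Zebra extends AnimalBase {
    public String getFavouriteFood() { return "grass"; }
    public void eat(String food) { System.out.println("Zebra eats " + food); }
}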



Next, we pull up feedAnimal from AnimalBase to the Animal interface:
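
So, in the sketch, the interface now owns the method:

interface Animal {
    String getFavouriteFood();
    void eat(String food);
    void feedAnimal(Animal animal);   // pulled up from AnimalBase
}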



Phew. Nearly there.

Next, since we made them the same object when we pass them in from the tests, I can substitute references to the animal parameter in the implementation with this:
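
Which, in the sketch, leaves AnimalBase acting on itself:

interface Animal {
    String getFavouriteFood();
    void eat(String food);
    void feedAnimal();   // the animal parameter is gone
}

abstract class AnimalBase implements Animal {
    // the parameter was always "this", so it can go
    public void feedAnimal() {
        eat(getFavouriteFood());
    }
}

class Zookeeper {
    // the delegate now just forwards the call (its signature gets tidied up next)
    public void feedAnimal(Animal animal, AnimalBase target) {
        target.feedAnimal();
    }
}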



Notice that I also removed the animal parameter, as it's no longer needed.

Then, last, we change the signature of the delegate method on Zookeeper back to what it was originally, as our temporary placeholder for AnimalBase (which is now the same thing as Animal as far as the tests are concerned) is not needed any more:
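
And the sketch ends up back where it started at the client end, minus the Feature Envy:

class Zookeeper {
    // original signature restored - the real work now happens on the Animal
    public void feedAnimal(Animal animal) {
        animal.feedAnimal();
    }
}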



There will, no doubt, be slicker and quicker ways of achieving the same end result. And we might also like to think about how we might refactor to a "containment and delegation" solution instead, for the purists out there. But this gives us a half-way house to do that, should we wish to.






August 2, 2015

Mocking Without Mocks - The Birth of JMicroMock

I was playing around with a code example to illustrate for trainees the difference between mocks and stubs. I've found a good way to get the message across is to show them how hand-rolled interaction test doubles are, strictly speaking, "mocks", even though we didn't use a mocking framework to create them. (And, conversely, objects that return test data are "stubs" even when we use a mocking framework to create them.)

In experimenting with mocking without mocks, I initially came up with this:
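
(The original code isn't shown here, so this is a rough reconstruction of the idea - a hand-rolled mock built from an anonymous inner class and a boolean flag, in the spirit of the video library example. The EmailAlert and VideoLibrary names are guesses at the original.)

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class VideoLibraryTest {

    interface EmailAlert {
        void send(String message);
    }

    @Test
    public void membersAreAlertedWhenDonatedTitleIsAdded() {
        final boolean[] alertWasSent = { false };

        // no mocking framework: an anonymous inner class records the interaction,
        // which is what makes this a mock rather than a stub
        EmailAlert emailAlert = new EmailAlert() {
            public void send(String message) {
                alertWasSent[0] = true;
            }
        };

        // VideoLibrary is the (hypothetical) class under test
        VideoLibrary library = new VideoLibrary(emailAlert);
        library.donate("Jaws");

        assertTrue("expected an email alert to be sent to members", alertWasSent[0]);
    }
}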



What struck me is that this code looks not much more complex than the equivalent code might using a mocking framework like Mockito or EasyMock. Even so, I couldn't resist taking it a bit further, thinking about how consistent and helpful failure messages might be generated by a dedicated MockExpectation class instead of the rather more brittle technique of boolean flags and hard-coded strings:
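
Something along these lines, perhaps - a sketch of the shape, not the actual code:

import java.util.Arrays;

public class MockExpectation {

    private final String methodName;
    private final Object[] expectedParams;
    private boolean satisfied = false;
    private Object[] actualParams;

    public MockExpectation(String methodName, Object... expectedParams) {
        this.methodName = methodName;
        this.expectedParams = expectedParams;
    }

    // called from inside the anonymous inner class when the mocked method is invoked
    public void invoked(Object... actualParams) {
        this.actualParams = actualParams;
        this.satisfied = Arrays.equals(expectedParams, actualParams);
    }

    // called at the end of the test, in place of the boolean-flag assertion
    public void verify() {
        if (!satisfied) {
            throw new AssertionError("Expected " + methodName
                    + Arrays.toString(expectedParams)
                    + " but was invoked with " + Arrays.toString(actualParams));
        }
    }
}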



Then it struck me that MockExpectation was possibly doing too many things - more than one reason to change - so I refactored it into 3 classes that checked the expectation, compared arrays of expected and actual parameter values, and built the failure message using the available information. It also occurred to me that the hard-coded method name was possibly a little brittle - how soon might it get out of step with the code? So I added an extra check to make sure the method invoked matched the name of the method we expected to be invoked, so that if they got out of step, the tests would fail:
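
Again, a sketch of the sort of split described, not the real JMicroMock source:

import java.util.Arrays;

class ParameterMatcher {
    boolean match(Object[] expected, Object[] actual) {
        return Arrays.equals(expected, actual);
    }
}

class FailureMessageBuilder {
    String build(String expectedMethod, Object[] expected,
                 String actualMethod, Object[] actual) {
        return "Expected " + expectedMethod + Arrays.toString(expected)
                + " but " + actualMethod + Arrays.toString(actual) + " was invoked";
    }
}

class MockExpectation {
    private final String expectedMethod;
    private final Object[] expectedParams;
    private boolean satisfied = false;
    private String failureMessage = "expected method was never invoked";

    MockExpectation(String expectedMethod, Object... expectedParams) {
        this.expectedMethod = expectedMethod;
        this.expectedParams = expectedParams;
    }

    void invoked(String actualMethod, Object... actualParams) {
        // the invoked method name must match the expected one, so the
        // hard-coded name can't silently drift out of step with the code
        satisfied = expectedMethod.equals(actualMethod)
                && new ParameterMatcher().match(expectedParams, actualParams);
        if (!satisfied) {
            failureMessage = new FailureMessageBuilder()
                    .build(expectedMethod, expectedParams, actualMethod, actualParams);
        }
    }

    void verify() {
        if (!satisfied) {
            throw new AssertionError(failureMessage);
        }
    }
}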



And then I moved all these new classes into their own Eclipse project. And hey, waddaya know? It's a sort of teeny-tiny mocking framework (well, sort of). So I christened it JMicroMock.

Naturally, having just a handful of basic use cases, it's a very limited mocking framework. Much of the work is actually handed back to the Java language, in the use of anonymous inner classes. We can only mock what we can override. We can only match exact parameter values. We cannot set up more sophisticated expectations, like "This method should not be invoked."

And, of course, if we have to set up a large number of expectations, it will become very cumbersome.

But you know what? I can't help feeling these limitations might be a good thing, since they encourage me to limit the interactions between objects and to favour binding to abstractions.

Anyhoo, I don't expect - certainly, I heartily recommend you don't - folk to use JMicroMock for real. It's just an experiment, and probably fundamentally flawed in the implementation. Also, it has no tests of its own as a project, because it grew organically out of my original Video Library test code.

But it has been a useful reminder to me about how frameworks and libraries can evolve out of our everyday code. The use cases for JMicroMock were real use cases. I didn't make them up, I discovered them in solving a real-world problem and then refactoring out some reusable bits and bobs - the stuff I could use on other test code for other problems.

There's a lesson here about code reuse: in order for code to be reusable, it must first be useful.

It's also a warning about just how readily frameworks and libraries can appear out of nowhere. I solved the problem once, and quickly, but then spent twice as much time generalising one part of it. In this particular case, I've added heaps more complexity to my original code. It would need to pay for itself by being reused a fair amount, and that may involve dealing with new use cases for it.

Before you know it, JMicroMock is taking up all your time. I've seen it happen all too often - teams who were supposed to be building business applications devoting the lion's share of their effort to working on some developer framework or other. But if nobody did, then we'd have none of these lovely frameworks.

The compromise is to restrict their use cases to what has value in the current project. So, maybe I could add support for more sophisticated parameter value matching (e.g., with Hamcrest), but if I don't need it, then perhaps I shouldn't. Let whoever has those use cases adapt the framework to their needs on their time.

Unless, of course, the plan is to keep development of the framework completely proprietary, in which case, you're on your own...






August 1, 2015

My First, Last & Only Blog Post About #NoEstimates

I've been keeping one eye on the whole #NoEstimates debate on Twitter, and folk have asked me my opinion quite a few times. So here it is.

I believe, very firmly, that the problem with estimation stems from us asking the wrong question.

In fact, this is where many big problems in software development arise; by asking the customer "What software would you like us to build?"

This naturally leads to a shopping list of features, and then a request to know "How much will all that cost and how long will it take?"

If we asked instead "What problem are we trying to solve, and how will we know when we've solved it?" - together with accompanying questions like "When do you need this solution?", "What is a solution worth to you?" and "How much money do you have to invest in solving it?" - we can set out on a different journey.

I believe software development needs to be firmly grounded in reality, and the reality is that it's R&D. At the start, the honest answer to questions like "What features are needed?", "How much will it cost?" and "How long will it take?" is I Don't Know.

Pretending to know the unknowable is what lands us in hot water in the first place. We don't know if we can solve the problem with the budget and the time available.

In the management quest for accounting certainties, though, nobody wants to hear that, and no developer with a mortgage to pay wants to admit it. So we go with the fairy tale instead.

Once we're in the fairy tale - where we know if we deliver this list of features, it will solve the customer's problem, and we can predict how long and how much it will take - it's almost impossible to get out of it. Budgets are committed. Deadlines are agreed. Necks are on chopping blocks.

So, what we do instead, is we wait for the reality to unfold, and then when it no longer matches the fairy tale, there's a major shitstorm of blame and recrimination. Typically, the finger is pointed at everyone and everything except that first mistake; the original sin of software projects: pretending to know the future.

After getting their fingers burned once, the customer's and manager's instinct is to "fix" the problem by "improving" estimates next time around. This is fixing the fairy tale by inventing an even more elaborate fairy tale, to try and disguise the fact that it's fantasy. This is the management equivalent of sacrificing virgins to make it rain.

The only way out of the estimating nightmare is to call "bullshit" on it, and publicly accept - indeed, embrace - the uncertainty that's inherent in what we're doing.

Yes, you might lose the business if you start out saying "I don't know", but consider that the business you're losing is the same old Death March teams have been suffering for decades. That's not work. That's just passing the time for money.

By all means offer a guess, so the customer can budget realistically. But you must be absolutely 100% crystal clear with them that, at the end of the day, we don't know. We just don't know. It's a punt.

Sell yourself on what you do know. What's your track record as a team? What have you delivered in the past? How much did that cost? How long did that take? And - most importantly, but regrettably least asked - did it work?

When a movie studio hires a director, the director makes no guarantees that this new film will be a commercial success, or that it will cost no more than budgeted, or be completed dead on time. The history of cinema is littered with amazingly good, and often very successful, movies that cost more and took longer than planned. But somehow, James Cameron seems to have no trouble getting movies off the ground. That's because of his track record, not his ability to accurately predict production costs and schedules.

Studios gamble with huge sums of money, and - yes - they do ask for estimates, and things do get hairy when schedules slip and costs overrun, but fundamentally they know what game they're in.

It's time we did, too.




July 31, 2015

Triangulating Your Test Code

While we're triangulating our solutions in TDD, our source code ought to be getting more general with each new test case.

But it's arguably not just the solution that should be getting more general; our test code could probably be generalised, too.

Take a look at this un-generalised code for the first two tests in a TDD'd implementation of a Fibonacci sequence generator:
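
(The screenshot isn't reproduced here, but the two tests would look something like this - the Fibonacci class and its getNumber() method are a guess at the original API.)

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FibonacciTest {

    @Test
    public void firstNumberInSequenceIsZero() {
        assertEquals(0, new Fibonacci().getNumber(0));
    }

    @Test
    public void secondNumberInSequenceIsOne() {
        assertEquals(1, new Fibonacci().getNumber(1));
    }
}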



Jumping in at this point, we see that our solution is still hard-coded. The trick to triangulation is to spot the pattern. The pattern for the first two Fibonacci numbers is that they are the same as their index in the sequence (assuming a zero-based array).

We can generalise our hard-coded solution into a loop that generates the list using the pattern (see Bob Martin's post on the Transformation Priority Premise, or what I more simply call triangulation patterns).

But we can also generalise our test code into a single parameterised test, using the pattern as the test name, so it reads more like the specification we hope our tests in TDD will become:
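
With JUnit's Parameterized runner, for instance, the sketch might become (again, a guess at the shape rather than the original code):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class FibonacciTest {

    @Parameters
    public static Collection<Object[]> indexes() {
        return Arrays.asList(new Object[][] { { 0 }, { 1 } });
    }

    private final int index;

    public FibonacciTest(int index) {
        this.index = index;
    }

    // the pattern is the test name: the first two numbers equal their index
    @Test
    public void firstTwoNumbersAreTheSameAsTheirIndex() {
        assertEquals(index, new Fibonacci().getNumber(index));
    }
}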



Now, because all subsequent tests are going to follow the same pattern (we provide an index and check what the expected Fibonacci number is at that index), we could carry on reusing this parameterised test for the rest of the problem.

BUT...

Then we'd have to generalise the name of the test - a key part of our test-driven specification - to the point where every single pattern (every rule) is summarised in one test. I no likey. It's much harder to read, and when a test case fails, it's not entirely clear which rule was broken.

So, what I like to do is keep a bit of duplication in order to have one generalised test for each pattern/rule in the specification.

So, continuing on, I might end up with:
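
The original used a parameterised test per rule; here's a simpler sketch of the same idea, with one test per rule and the knowledge of how to create and talk to the object under test factored into a shared helper:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FibonacciTest {

    @Test
    public void firstTwoNumbersAreTheSameAsTheirIndex() {
        assertFibonacciNumberAt(0, 0);
        assertFibonacciNumberAt(1, 1);
    }

    @Test
    public void subsequentNumbersAreTheSumOfThePreviousTwo() {
        assertFibonacciNumberAt(2, 1);
        assertFibonacciNumberAt(3, 2);
        assertFibonacciNumberAt(4, 3);
        assertFibonacciNumberAt(5, 5);
    }

    // the duplicated knowledge of how to create and interact with the
    // object under test lives in one place
    private void assertFibonacciNumberAt(int index, int expected) {
        assertEquals(expected, new Fibonacci().getNumber(index));
    }
}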



Notice that, although these two test methods duplicate each other's structure, I've taken the step of refactoring out the duplicated knowledge of how to create and interact with the object being tested. This kind of duplication in test code tends to hurt us most. Many teams report how tight coupling between tests and objects under test led to interfaces being much more expensive to change. So I feel this is a small compromise that aids readability while not sacrificing too much to duplication.




July 9, 2015

The IT Manager, The Mythical Knight Shortage & The Thing That's Like A Sword, But Not As Good

Here's a quick bedtime fairy-tale to send you to sleep.

This tale is called "The IT Manager & The Mythical Knight Shortage (& The Thing That's Like A Sword, But Not As Good)".

Once upon a time, there was an IT manager who lived in a big castle made of steel and glass somewhere near Old Street roundabout.

One day, he needed some knights to fight for the honour and glory of his King, the CEO. They would have to be brave, they would have to be strong, and they would have to have had recent experience of developing microservices in an Agile team environment.

He called for the King's treasurer, and asked him to count out piles of gold, one for each of these brave knights to reward them for their valour should they be selected.

Then he commanded his messengers to ride out to every town and village and hamlet across the land, spreading the word that he was looking for brave knights willing to fight to the death for honour and riches.

Eventually, the best knights were assembled, and put through a string of challenges to prove their worthiness to fight for the King; and from those, the best candidates were offered the chance to serve in the kingdom.

The IT manager showed them the gold they would receive should they accept the honour of serving the King. And one by one, they all turned it down. Not enough gold, they said. For what we do, they said, we know we're worth more. Without us knights, they said, the King would lose all his gold and the kingdom would fall to his enemies.

This made the IT manager angry. Again, he sent out his messengers, this time to seek out all the other IT managers in all the other kingdoms across the land. They gathered for a meeting, where they agreed that there was a chronic shortage of brave, strong, Agile knights.

Something must be done, they exclaimed. And lo, it was decided that every child in the land should train to be a knight. Starting at the age of five.

The knights themselves, persuaded that there was indeed a chronic shortage of knights, and fearful that kingdoms would indeed fall as a result, shouted "We will help you train them!"

And so it came to pass that the following year was declared to be the "Year of Chivalry", and someone who had never held a sword or slain a dragon or rescued a fair maiden (or fair maidman, because this is an equal opportunities fairy-tale) was appointed to tell the people that they could learn to be a knight in a day. A budget of a sack of potatoes, two goats and some straw was allocated to pay for this grand initiative - plenty, they thought, to train millions of knights throughout the land.

A year passed, and still the IT manager could not find brave knights who would serve his King for the gold on offer. At his wits' end, he prayed to the Gods of Information, Education and Entertainment, who deliberated for quite a while, and then - just when he'd forgotten all about them - they answered his prayers.

"Behold," they roared from atop Mount Broadcasting House, as thunder and lightning shattered the heavens around them, leaving the IT manager fearful and awed. "Behold... we have heard your prayers for a solution to your chronic shortage of knights, and we bring you.... THIS!"

There was an almighty explosion, and through the flames appeared a sort of sword doodah kind of thing. Well, it was like a sword, but smaller, and not as good. But it had a cool logo.

"We will send a million of these amazing sword wotisname-thingummyjigs - that are like a sword, only not as good - to all children of an arbitrary age in the land, so that they will be inspired to learn how to fight. Just as long as they already have a proper full-size sword, because this thing literally does nothing by itself of any real use."

"Yeah, thanks for that." said the IT manager.

And they all lived happily ever after.






June 25, 2015

Mocks vs. Stubs - It Has Nothing To Do With The Implementation

An area of confusion I see quite often is on the difference between mocking and stubbing.

I think this confusion arises from the frameworks we use to create mock objects, which can also be used to create stubs.

In practice, the difference is very straightforward.

If we want to test some logic that requires data that - for whatever reason - comes via a collaborator, we use a stub. Put simply, a stub returns test data.

If we want to test that an interaction with a collaborator happens the way we expect it should, we use a mock. Mock objects test interactions.

Take a look at these example tests, just to illustrate the difference more starkly:
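
(The original screenshots aren't reproduced here, so this is a sketch of the two kinds of test described - the Trade/StockPricer API is a guess at the example, not the actual code.)

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class TradeTest {

    interface StockPricer {
        double getPrice(String stock);
    }

    // STUB: the test is about the calculation, so the test double
    // simply supplies the data the calculation needs
    @Test
    public void totalCostIsSharePriceTimesQuantity() {
        StockPricer pricer = mock(StockPricer.class);
        when(pricer.getPrice("XYZ")).thenReturn(10.0);

        // Trade is the (hypothetical) class under test
        Trade trade = new Trade(pricer, "XYZ", 100);

        assertEquals(1000.0, trade.getTotalCost(), 0.0);
    }

    // MOCK: the test is about the interaction - was the price asked for
    // exactly once? - so the hand-rolled double records the calls
    @Test
    public void asksThePricerForThePriceExactlyOnce() {
        MockStockPricer pricer = new MockStockPricer();

        new Trade(pricer, "XYZ", 100).getTotalCost();

        assertEquals(1, pricer.timesGetPriceWasCalled);
    }

    static class MockStockPricer implements StockPricer {
        int timesGetPriceWasCalled = 0;

        public double getPrice(String stock) {
            timesGetPriceWasCalled++;
            return 10.0;
        }
    }
}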



The first test uses Mockito to create an object that returns test data, so I can test that the total cost of a trade is calculated correctly using the stock price my StockPricer stub returns.

The second test uses a hand-rolled object to test how many times the Trade object calls GetPrice() on the mock StockPricer.

Whether or not we used a mock objects framework has no bearing on whether the test double is a mock or a stub. The kind of test we're writing decides that.

If we want our test to fail because external data supplied by a collaborator was used incorrectly, it's a stub.

If we want our test to fail because a method was not invoked (correctly) on a collaborator, it's a mock.

Simples.

(And, yes, back in the days before mock objects, we'd have called MockStockPricer a "spy".)