October 1, 2015

Peeple: A Clear-Cut Case for When Developers Should Say "No"

Two things are happening simultaneously this morning: firstly, I'm trying to get into Python, but I'm strapped for time at the moment, so I'm getting up an hour earlier to go through tutorials. Secondly, I'm keeping one eye on this whole Peeple debacle that's exploded onto Twitter.

Putting Python aside - except to say, expect to see some popping up on the blog soon, so the Python gurus among you can have a good laugh - I'm genuinely gobsmacked that two actual human beings dreamed up the idea for Peeple, and that other human beings gave them actual real money to make it happen.

If you haven't heard already, Peeple is arguably either the dumbest, the cruellest or the most cynical start-up idea in Web history. It allows people to rate other people, and leave reviews about them, as if they were restaurants or hotels.

Apparently, if you have someone's cell phone number, you can create a profile for them, and there's no way for that person to opt out yet. It's got "dystopia" written all over it.

But what concerns me even more than the fact that human beings came up with the idea, and human beings funded its execution, is that human beings wrote the code that makes it work.

This, to me, has to be one of those situations where developers should just say "no". No, I am not going to create software that turns people into reviewable, rate-able products. Teh Internets are bad enough without bringing such a blindingly obviously abuse-able system into being.

Given the backlash, and the extremely strong and almost universal revulsion that's being expressed on social and in traditional media this morning, one can only hope that purveyors of apps like Apple, Google and Microsoft see sense (or at least see a PR nightmare looming) and refuse to allow Peeple to distribute their app through the official channels. It's so utterly toxic, with such massive potential to cause harm to vulnerable people, that it truly beggars belief that it even exists at all in any shippable form.

And it really, really bugs me that software developers enabled this insanity. They should be thoroughly ashamed of themselves. I'll wager it won't be going on their CVs.

September 22, 2015

Why Tech Entrepreneurs Need To Be Programmers

I worry - I really do - about the trend for the X Factor-ization of software development that start-up culture has encouraged.

Especially troubling is how the media is now attempting to woo impressionable young people into the industry with promises of untold riches, often creating the impression that "no coding is required". No need for skillz: we can use the VC money to pay some code monkey to do all that trivial stuff like actually making ideas happen.

Far from empowering them to take their destinies into their own hands with "tech", this pushes them in the exact opposite direction. If you cannot write the code for your idea, you have to pay someone to do it for you, and that means you have to raise a lot of money just to get to the most basic working prototype.

This means that you will need to go far into the red just to figure out if your idea is feasible. And here's the thing about start-ups: you'll probably need to try many, many ideas before you find The OneTM. If trying one idea comes at such a high cost... well, you do the maths.

Whereas a young person - or an older person, let's not forget - with the right programming skillz, a cheap laptop and a half-decent Internet connection could do it themselves in their own time, as thousands upon thousands of software developers are doing right now.

What makes software so special is that the economic and social barriers to entry are so low. You don't need hundreds of thousands of pounds and special connections from that posh school you went to to start writing programs. Whereas, all the evidence suggests, you do need to be in a privileged position to get funding to pay someone else to do it. Take a long hard look at our most successful tech entrepreneurs; not many kids from the wrong side of the tracks among that demographic.

Of course, programming is hard. It takes a long time to learn to do it well enough to realise even moderately ambitious ideas. Which is why such a small proportion of the people who try it end up doing it for a living. What makes software so special is that the personal and intellectual barriers to entry are so high. No wonder kids are relieved to hear you can be a tech rockstar without all that fiddly business of learning to play guitar.

But anyone with a programmable computer and Internet access can create applications and have them hosted for a few dollars on cloud services that - should they get lucky - can scale pretty effortlessly with demand. Anyone with a programmable computer can create applications that could end up listed on app stores and let computing giants take care of the retail side at any scale imaginable. It's not like selling widgets: a million orders for a new widget could put a small business in real cashflow trouble. A million sales of your iOS app needs neither extra manufacturing nor storage capacity. That's the beauty of digital.

The reality, though, is that the majority of software start-ups never reach such dizzying heights of commercial success. Which is why the vast majority of professional programmers work for someone else. And in my humble opinion, someone who learns how to make a good living writing point-of-sale systems for a supermarket is every bit as empowered as someone who's trying to get their POS-as-a-Service start-up off the ground.

And that's why I'm worried; for every successful tech entrepreneur, there are thousands more doing just fine thank you very much writing software for a living, having as much fun and being just as fulfilled by it. By all means, reach for the stars, but let's not lose sight of the much wider set of opportunities young people might be missing if their gaze is 100% focused on The Big Shiny Success.

And even if you harbour such grand ambitions - and someone has to, at the end of the day - your odds of getting an idea off the ground can be massively improved if you can create that all-important working prototype yourself. Because, in software, a working prototype is the real thing. Making shit real is a very undervalued skill in tech.

September 12, 2015

TDD Katas Too Easy For You? Try the Codemanship Team Dojo

One thing I hear regularly is how the kinds of practical exercises we do in training workshops and pair programming interviews are "too trivial" to test real developers.

Curiously, and without exception, it turns out that the people who make such claims are unable to complete the exercises to a high standard in the allotted time, leading me to think that just maybe we overestimate ourselves sometimes. And it's not escaped my attention that those who brag the loudest tend to do them least well.

But if you really want a bigger challenge - one that's more befitting of your programming genius - then there's always my Team Dojo.

The exercise presents a number of user stories, with executable acceptance tests, for a social network for programmers, through which users can search for developers and build teams based on a number of criteria.

The exercise is undertaken from a standing start. All you get is your computers and a network connection. You'll have to decide what language(s) and platforms you'll be developing for, set up version control if you think you need it (which, of course, you do), build automation, CI, all of that good stuff - none of it is provided.

Once you've got the sausage machine up and running, you then need to work through the user stories, designing and implementing working code in order to have someone outside your team verify that it does indeed pass each acceptance test on a machine that isn't yours.

Give yourself a maximum of a standard working day (8 hours) to complete it. Afterwards, assess the quality of the implementation code for readability, simplicity, lack of duplication etc. Give yourself a percentage score for Code Cleanliness, and then multiply the points you picked up from passing acceptance tests by that (e.g., 10 acceptance test points x 80% cleanliness = a final score of 8).

Most good developers can do it in under a day. Curiously, teams of 3 or more tend to struggle to complete it in 8 hours. The rare great teams can do it in under 4 hours. Go figure!

You will learn LOTS, though you may well wish for the naïve simplicity of FizzBuzz by the time you get half-way through...

September 11, 2015

Tests Should Test One Thing (?)

The received wisdom - which chimes strongly with my own experience, and what I observe in teams - is that unit tests should have only one reason to fail.

There are several good reasons for this:

1. Small tests tend to be easier to pass, facilitating faster micro-iterations ("baby steps")

2. Small tests tend to be easier to understand, serving as clearer specifications

3. There tend to be fewer reasons for small tests to fail, making it easier to pinpoint the problem when one does

But, judging by discussions about an example in my previous blog post where I recommended not testing all the rules of FizzBuzz in a single test, some people disagree. They argue that a single test asserting the entire FizzBuzz output string in one go is more readable than several individual tests, each tackling a specific rule, like:

assertEquals("1", fizzBuzzer.fizzBuzz().split(",")[0]);

There are a couple of things to note in these examples. Firstly, they are presented so as to require minimal context to understand. In reality, I would expect a good developer to refactor the test code to remove duplicate expressions for accessing individual elements in the FizzBuzz string. So tests might read more like:

assertEquals("1", fizzBuzzElementAtIndex(0));

I'd also expect the tests to be parameterised at some point, each test method named to document a specific FizzBuzz rule, like:

@Parameters({"3", "6"})
public void numbersDivisibleByThreeReplacedWithFizz(int index) {
    assertEquals("Fizz", fizzBuzzElementAtIndex(index));
}

Someone looking to understand what FizzBuzz does - not how it works, but what it does - could simply read the test method names and see a list of the rules of FizzBuzz:

  • oneHundredIntegersSeparatedByCommas
  • integersDivisibleByThreeReplacedWithFizz
  • integersDivisibleByFiveReplacedWithBuzz
  • integersDivisibleByThreeAndFiveReplacedWithFizzBuzz
  • remainingNumbersAreUnchanged

I don't know about you, but I find that more readily accessible than oneHundredIntegersSeparatedByCommasWithAnyDivisibleByThreeReplacedWithFizzAndAnyDivisibleByFiveWithBuzzAndAnyDivisibleByThreeandFiveWithFizzBuzz, and more meaningful than the time-honoured cop-out: fizzBuzzStringShouldBeCorrect.

    What do you think?

    What Not To Do In a TDD Pair Programming Interview

    A few quick thoughts this morning after a fairly concentrated run of pair programming interviews for several clients, particularly on Test-driven Development (TDD).

    If, as a candidate, you're lined up to pair with someone like me, and "TDD" is being requested as a key skill, here are some things you probably shouldn't do when we pair:

    1. Start by writing implementation code

    The Golden Rule of TDD is "Don't write any implementation code until there's a failing test that requires it". So if we're writing, say, a program to convert degrees Celsius into degrees Fahrenheit, and we're supposed to be doing TDD "by the book", then I'm going to be disappointed if you start by declaring your TemperatureConverter class first, and then start writing a test for it.

    Yes, it's true: the test code needs a TemperatureConverter class to compile and run, but the whole idea of test-driven approaches to design is that the test comes first, and the declaration follows. Simply imagine you have such a class, write a test that needs it, and as soon as your IDE gives you that red squiggly line because it's not compiling... that's your permission to declare it.
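
    To make that order of events concrete, here's a minimal sketch of the Celsius-to-Fahrenheit example written test-first (the class and method names are my own invention, not from any candidate's code):

```java
// Test-first sketch: the test is written as if TemperatureConverter already
// exists; the class is only declared once the compiler complains about it.
public class TemperatureConverterTest {
    public static void main(String[] args) {
        // Step 1: write the test first. This won't compile until
        // TemperatureConverter is declared - and that's the point.
        double fahrenheit = TemperatureConverter.celsiusToFahrenheit(100.0);
        if (fahrenheit != 212.0) throw new AssertionError(fahrenheit);
        System.out.println("test passed");
    }
}

// Step 2: declared only after the failing (non-compiling) test demanded it.
class TemperatureConverter {
    static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```

    The test code drives the declaration into existence; the class never appears before a test needs it.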

    2. Introduce speculative generality

    The second most common gotcha in a TDD pairing interview is creating code we don't need to pass the tests. For example, surprisingly often candidates will start by declaring an interface to be implemented by the class under test. I'll ask "what do we need the interface for?", and typically the answer will be about some possible unspoken need in the future. e.g., "So we can mock it" or "In case we need to use this dependency injection framework".

    And, just so you know, adding a dependency injection framework - in most TDD exercises - is a Big Red Flag. Just create the implementation code you'll need to pass the tests. Everything else falls under Y.A.G.N.I. (You Ain't Gonna Need It).

    3. Write weak or meaningless tests

    Some candidates have read somewhere that you need to run the test to see it fail. This is because it helps us to check that our test is valid - i.e., it would fail if the result was wrong.

    But writing fail() to see the test fail is just testing fail(). Enough said.

    Likewise with assertions that leave the range of potential solutions wide open. A classic example is checking the length of a list when what we really should be asking is whether the item we added can be found in the list where we say it should be.
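
    To make that concrete, here's a hypothetical contrast between a weak length check and an assertion that pins the behaviour down (the basket code is invented purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of a weak assertion vs. a stronger one.
// The "basket" stands in for whatever list the code under test builds.
public class AssertionStrength {
    static List<String> addToBasket(List<String> basket, String item) {
        basket.add(item);
        return basket;
    }

    public static void main(String[] args) {
        List<String> basket = addToBasket(new ArrayList<>(), "apple");

        // Weak: leaves the range of potential solutions wide open - this
        // passes even if the code added the wrong item entirely.
        if (basket.size() != 1) throw new AssertionError("size check failed");

        // Stronger: asks whether the item we added is where we said it would be.
        if (!"apple".equals(basket.get(0))) throw new AssertionError("wrong item");

        System.out.println("ok");
    }
}
```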

    4. Writing redundant tests

    Harking back to the first point, we don't declare classes until our failing tests require them. But that doesn't mean we specifically write tests in order to declare classes in our implementation. I see this too often: a candidate thinks "I'm going to need a FizzBuzzer class", and so they write test code:

    FizzBuzzer fizzBuzzer = new FizzBuzzer();

    If fizzBuzzer were null, then attempting to use its methods in a test would fail anyway. There's no need to specifically test that it's not null; that's just testing runtime object creation.

    Tests - including unit tests - should be about behaviour. Don't test that a Car has Wheels, or that a Customer has a Name. Tests should be about the work that our code does, not the structure it has in order to do it.

    5. Not running the tests

    Yes, it actually needs saying. I constantly have to remind candidates to run their tests after making changes to the code. That's what automated unit tests are for. This makes me worry about the candidate's habits.

    Of course, they may have become used to letting a continuous testing tool like JUnitMax or Infinitest run the tests for them in the background. But such tools are not always available, and I find that we need to work with them switched off on a regular basis to keep reinforcing that habit for when we find ourselves working in a technology that doesn't have tools like that. And for when we're doing pair programming interviews, of course, because I will ask you to turn the tool off so I can see you swimming without the armbands.

    6. Not refactoring when it's obviously needed

    TDD has 3 key activities: we're either writing a test that fails, or writing the simplest code to pass that test, or refactoring the code to keep it clean and maintainable. The vast majority of developers have to be reminded about that third - vitally important - activity.

    Alarm bells go off when candidates don't refactor. There could be a number of reasons why they don't, all of which are troubling:

    a. They're not in the habit of refactoring
    b. They don't recognise code smells when they see them
    c. They don't know how to refactor

    Typically, it's all three.

    7. Hacking away at the code when you're "refactoring"

    Refactoring is a discipline. You make one small, atomic change to the code (e.g., extract a method), and then you run the tests. Then, if necessary, you do another refactoring. Watching someone just start typing, deleting, cutting, pasting and generally hacking away at the code - without running the tests frequently - is like watching someone doing their makeup or shaving while they're driving a car. If pair programming interviews were driving tests, we'd fail you on the spot. Indeed, maybe we should.
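
    For example, a single atomic Extract Method step, followed immediately by running the tests, might look like this (the Receipt code and its 25% tax rate are invented for illustration):

```java
// Refactoring-in-small-steps sketch: one Extract Method, then run the tests.
public class Receipt {
    double price;
    int quantity;

    Receipt(double price, int quantity) {
        this.price = price;
        this.quantity = quantity;
    }

    // Before: total() computed the tax inline. After one atomic Extract
    // Method step, the tax calculation has a name of its own.
    double total() {
        return price * quantity + salesTax();
    }

    double salesTax() {
        return price * quantity * 0.25; // 25% rate, invented for the example
    }

    public static void main(String[] args) {
        // ...and immediately re-run the tests before the next refactoring.
        if (new Receipt(10.0, 2).total() != 25.0) throw new AssertionError();
        System.out.println("tests still green");
    }
}
```

    One small change, one test run; only then the next refactoring.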

    Refactoring is the best tell-tale I've seen for distinguishing good developers from the rest. That is to say, I've noticed how developers who refactor well tend to do lots of stuff well.

    Get some practice at refactoring, and learn the common code smells and the toolset of refactorings you'll need to fix them. Get to know the refactoring menu in your IDE. If it doesn't have one, or you're working in a language that makes many refactorings difficult to automate, then get practice at doing them safely by hand.

    And don't forget to keep running those tests!

    8. Writing one test that asks all the questions

    So, here's my first test for a FizzBuzzer:
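
    The original snippet didn't survive republishing here, so below is a hypothetical reconstruction of the kind of test I mean, written as plain, runnable Java (the names and the abbreviated expected string are mine):

```java
// Reconstruction of the missing example: one test that asserts (a slice of)
// the entire FizzBuzz output in a single go. The fizzBuzz() implementation
// exists only so this sketch runs standalone.
public class OneBigFizzBuzzTest {
    static String fizzBuzz() {
        StringBuilder sb = new StringBuilder();
        for (int i = 1; i <= 100; i++) {
            if (i > 1) sb.append(",");
            if (i % 15 == 0) sb.append("FizzBuzz");
            else if (i % 3 == 0) sb.append("Fizz");
            else if (i % 5 == 0) sb.append("Buzz");
            else sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // One assertion exercising every rule at once (first 15 elements
        // shown here; the original asserted all 100).
        String expected = "1,2,Fizz,4,Buzz,Fizz,7,8,Fizz,Buzz,11,Fizz,13,14,FizzBuzz";
        if (!fizzBuzz().startsWith(expected + ","))
            throw new AssertionError(fizzBuzz());
        System.out.println("one big test passed");
    }
}
```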


    That's actually a whole bunch of tests in one go. All the rules of FizzBuzz are tested in this one assertion. To pass this test, we can either hardcode that entire return string, or we can write a complete FizzBuzzer algorithm.

    If we hardcode it and then move on to the next test, we can end up painting ourselves into a corner: either we hardcode the entire output from 1...100, or we have to make a big leap from a hardcoded response to the full algorithm. Triangulation it ain't!

    Ask one question at a time, and generalise your solution with each new test case:

    assertEquals("1", fizzBuzzer.fizzBuzz().split(",")[0]);

    Remember: BABY STEPS

    Of course, there are other things candidates do in pair programming interviews that might suggest they're not as familiar or experienced at TDD as their CV suggests. But these are the ones I see most often.

    Shameless Plug: join us in London on Saturday October 10th to brush up your TDD discipline on the last Codemanship Intensive TDD workshop of 2015. Amazingly good value at just £49

    September 5, 2015

    Turning Non-Terminating JUnit Tests Into Failing Tests

    So, I'm noodling with an implementation of a square root algorithm in Java, and after I get the main logic sort of working, methinks to meself "Oh, I should probably catch inputs that are negative numbers, because the algorithm won't converge in those cases".

    So, I writes me a test to catch the expected exception I plan to throw, and then - being a fairly disciplined TDD sort of a dude - I run the test to see it fail. Lo and behold, it does exactly what I expect - it doesn't converge. It just goes around and around in the loop, and JUnit hangs.

    Now, having a unit test that doesn't terminate presents us with a problem. Yes, technically, not terminating is a kind of failing, but if this is one test in a suite, how do we draw a line under it so we can move on to running the other tests in the suite?

    The simplest fix is the optional timeout parameter of JUnit's @Test annotation: if the test runs for longer than the specified number of milliseconds, JUnit aborts it and reports a failure instead of hanging.
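
    A sketch of what that looks like (JUnit 4 syntax; the Sqrt class here is invented to stand in for my noodling):

```java
import org.junit.Test;

// If sqrt(-1) spins forever, the timeout parameter (in milliseconds) makes
// JUnit kill the test after half a second and report it as a failure,
// rather than hanging the whole suite.
public class SqrtTest {
    @Test(timeout = 500, expected = IllegalArgumentException.class)
    public void sqrtOfNegativeNumberThrows() {
        Sqrt.sqrt(-1.0);
    }
}

// The guard clause is what makes the test above pass quickly; without it,
// the Newton-Raphson loop would never converge for negative inputs.
class Sqrt {
    static double sqrt(double input) {
        if (input < 0) throw new IllegalArgumentException("input must be >= 0");
        double guess = input / 2 + 1;
        for (int i = 0; i < 100; i++) guess = (guess + input / guess) / 2;
        return guess;
    }
}
```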

    NUnit has a similar attribute you can use. I imagine some of the other xUnit implementations do, too.

    August 17, 2015

    The Small Teams Manifesto

    So.... grrr.... arrrgh... etc.

    Another week, another software project set up to fail right from the start.

    What angers me most is that the key factors we know can severely damage our chances of succeeding in delivering software of value are well understood.

    The main one is size: big projects usually fail.

    We know pretty empirically that the development effort required to deliver software grows exponentially as the software grows. If it takes a team 2 hours to deliver 10 lines of working software, it might take them 4 hours to deliver 20 lines of working software, and 8 hours to deliver 30 lines, and so on.

    There's no economy of scale in software development, and we know for a fact that throwing more bodies at the problem just makes things worse. A team of 2 might deliver a system in six months. A team of 10 might take a year to deliver the same system, because so much of their time will be taken up by being in a team of 10.

    The evidence strongly suggests that the probability of project failure grows rapidly as project size increases, and that projects costing more than $1 million are almost certain to run into severe difficulties.

    In an evidence-loaded article from 1997 called Less Is More, Steve McConnell neatly sums it all up. For very good reasons, backed up with very good evidence, we should seek to keep projects as small as possible.

    Not only does that tend to give us exponentially more software for the same price, but it also tends to give us better, more reliable software, too.

    But small teams have other big advantages over large teams; not least their ability to interact more effectively with the customer and/or end users. Two developers can have a close working relationship with a customer. Four will find it harder to get time with him or her. With ten developers, one person inevitably ends up owning that working relationship and becomes a bottleneck within the team, because they're just a proxy customer - I've seen it so many times.

    A close working relationship with our customers is routinely cited as the biggest factor in software development success. The more people competing for the customer's time, the less time everyone gets with the customer. Imagine an XP team picking up user stories at the beginning of an iteration; if there's only one or two pairs of developers, they can luxuriate in time spent with the customer agreeing acceptance tests. If there are a dozen pairs, then the customer has to ration that time quite harshly, and pairs walk away with a less complete understanding of what they're being asked to build (or have to wait a day or two to get that understanding.)

    Another really good reason why small teams tend to be better is that small teams are easier to build. Finding one good software developer takes time. Finding a dozen is a full-time job for several months. I know, I've tried and I've watched many others try.

    So, if you want more reliable software, designed in close collaboration with the customer, at an exponentially cheaper price (and quite possibly in less time, too), you go with small teams. Right?

    So why do so many managers still opt for BIG projects staffed by BIG teams?

    I suspect the reasons are largely commercial. First of all, you don't see many managers boasting about how small their last project was, and the more people you have reporting to you, the more you tend to get paid. Managers are incentivised to go big, even though it goes against their employer's interests.

    Also, a lot of software development is outsourced these days, and there's an obvious incentive for the sales people running those accounts to go as big as possible for as long as possible. Hence massively overstaffed Waterfall projects are still the norm in the outsourcing sector - even when they call it "Agile". (All sorts of euphemisms like "enterprise Agile", "scaling up Agile" etc etc, which we tend to see in this sector more often.)

    So there are people who do very well by perpetuating the myth of an economy of scale in software development.

    But in the meantime, eye-popping amounts of time and money are being wasted on projects that have the proverbial snowball's chance in hell of delivering real value. I suspect it's so much money - tens of billions of pounds a year in Britain alone, I'd wager - and so much time wasted that it's creating a drag effect on the economy we're supposed to be serving.

    Which is why I believe - even though, like many developers, I might have a vested interest in perpetuating The Mythical Man-Month - it's got to stop.

    I'm pledging myself to a sort of Small Team Manifesto, that goes something like this:

    We, the undersigned, believe that software teams should be small, highly skilled and working closely with customers

    Yep. That's it. Just that.

    I will use the hashtag #smallteams whenever I mention it on social media, and I will be looking to create a sort of "Small Teams avatar" to use on my online profiles to remind me, and others, that I believe software development teams should be small, highly-skilled and working closely with customers.

    You, of course, can join in if you wish. Together, we can beat BIG teams!

    August 10, 2015

    A Hierarchy Of Software Design Needs

    Design is not a binary proposition. There is no clear dividing line between a good software design and bad software design, and even the best designs are compromises that seek to balance competing forces like performance, readability, testability, reuse and so on.

    When I refactor a design, it can sometimes introduce side-effects - namely, other code smells - that I deem less bad than what was there before. For example, maybe I have a business object that renders itself as HTML - bad, bad, bad! Right?

    The HTML format is likely to change more often than the object's data schema, and we might want to render it to other formats. So it makes sense to split out the rendering part into a separate object. But in doing so, we end up creating "feature envy" - an unhealthily high coupling between our renderer and the business object, so that the renderer can get the data it needs - in the process.

    I consider the new feature envy less bad than the dual responsibility, so I live with it.

    In fact, there tends to be a hierarchy of needs in software design, where one design issue will take precedence over another. It's useful, when starting out, to know what that hierarchy of needs is.

    Now, the needs may differ depending on the requirements of our design - e.g., on a small-memory device, memory footprint matters way more than it does for desktop software usually - but there is a fairly consistent pattern that appears over and over in the majority of applications.

    There is, of course, a universe of qualities we may need to balance. But let's deal with the top six to get you thinking:

    1. The Code Must Work

    Doesn't matter how good you think the design is if it doesn't do what the customer needs. Good design always comes back to "yes, but does it pass the acceptance tests?" If it doesn't, it's de facto a bad design, regardless.

    2. The Code Must Be Easy To Understand

    By far the biggest factor in the maintainability of code is whether or not programmers can understand it. I will gladly sacrifice less vital design goals to make code more readable. Put more effort into this. And then put even more effort into it. However much attention you're paying to readability, it's almost certainly not enough. C'mon, you've read code. You know it's true.

    But if the code is totally readable, but doesn't work, then spend more time on 1.

    3. The Code Must Be As Simple As We Can Make It

    Less code generally means a lower cost of maintenance. But beware; you can take simplicity too far. I've seen some very compact code that was almost intractable to human eyes. Readability trumps simplicity. And, yes, functional programmers, I'm particularly looking at you.

    4. The Code Must Not Repeat Itself

    The opposite of duplication is reuse. Yes it is: don't argue!

    Duplication in our code can often give us useful clues about generalisations and abstractions that may be lurking in there that need bringing out through refactoring. That's why "removing duplication" is a particular focus of the refactoring step in Test-driven Development.

    Having said that, code can get too abstract and too general at the expense of readability. Not everything has to eventually turn into the Interpreter pattern, and the goal of most projects isn't to develop yet another MVC framework.

    In the Refuctoring Challenge we do on the TDD workshops, over-abstracting often proves to be a sure-fire way of making code harder to change.

    5. Code Should Tell, Not Ask

    "Tell, Don't Ask" is a core pillar of good modular - notice I didn't say "object oriented" - code. Another way of framing it is to say "put the work where the knowledge is". That way, we end up with modules where more dependencies are contained and fewer dependencies are shared between modules. So if a module knows the customer's date of birth, it should be responsible for doing the work of calculating the customer's current age. That way, other modules don't have to ask for the date of birth to do that calculation, and modules know a little bit less about each other.
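
    As a minimal sketch of that date-of-birth example (the Customer class and its names are invented for illustration):

```java
import java.time.LocalDate;
import java.time.Period;

// Tell, Don't Ask sketch: the Customer knows its date of birth, so the
// Customer does the work of calculating age. Callers ask for the answer,
// not for the raw data. (Class and method names are illustrative.)
public class Customer {
    private final LocalDate dateOfBirth;

    public Customer(LocalDate dateOfBirth) {
        this.dateOfBirth = dateOfBirth;
    }

    // The work lives where the knowledge lives...
    public int ageInYears(LocalDate today) {
        return Period.between(dateOfBirth, today).getYears();
    }

    // ...so other modules never need a getDateOfBirth() just to compute age.
}
```

    A caller writes customer.ageInYears(LocalDate.now()) instead of fetching the date of birth and doing the arithmetic itself.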

    It goes by many names: "encapsulation", "information hiding" etc. But the bottom line is that modules should interact with each other as little as possible. This leads to modules that are more cohesive and loosely coupled, so when we make a change to one, it's less likely to affect the others.

    But it's not always possible, and I've seen some awful fudges when programmers apply Tell, Don't Ask at the expense of higher needs like simplicity and readability. Remember simply this: sometimes the best way is to use a getter.

    6. Code Should Be S.O.L.I.D.

    You may be surprised to hear that I put OO design principles so far down my hierarchy of needs. But that's partly because I'm an old programmer, and can vaguely recall writing well-designed applications in non-OO languages. "Tell, Don't Ask", for example, is as do-able in FORTRAN as it is in Smalltalk.

    Don't believe me? Then read the chapter in Bertrand Meyer's Object Oriented Software Construction that deals with writing OO code in non-OO languages.

    From my own experiments, I've learned that coupling and cohesion have a bigger impact on the cost of changing code than S.O.L.I.D. compliance does. A secondary factor is the substitutability of dependencies - the ability to insert a new implementation in the slot of an old one without affecting the client code. That's mostly what S.O.L.I.D. is all about.

    This is the stuff that we can really only do in OO languages that directly support polymorphism. And it's important, for sure. But not as important as coupling and cohesion, lack of duplication, simplicity, readability and whether or not the code actually works.

    Luckily, apart from the "S" in S.O.L.I.D. (Single Responsibility), the O.L.I.D. is fairly orthogonal to these other concerns. We don't need to trade off between substitutability and Tell, Don't Ask, for example. They're quite compatible, as are the other design needs - if you do it right.

    In this sense, the trade-off is more about how much time I devote to thinking about S.O.L.I.D. compared to other, more pressing concerns. Think about it: yes. Obsess about it: no.

    Like I said, there are many, many more things that concern us in our designs - and they vary depending on the kind of software we're creating - but I tend to find these 6 are usually at the top of the hierarchy.

    So... What's your hierarchy of design needs?

    Intensive TDD Workshop, London, Sat Oct 10th

    Just a quick note about the next public Intensive TDD workshop, which will be in SW London on Saturday October 10th.

    The same unbeatably low price of £49 for a fun, packed, challenging and educational day without the frills.

    All of the workshops have sold out this year, so book soon to avoid disappointment.


    August 7, 2015

    Taking Baby Steps Helps Us Go Faster

    Much has been written about this topic, but it comes up so often in pairing that I feel it's worth repeating.

    The trick to going faster in software development is to take smaller steps.

    I'll illustrate why with an example from a different domain: recording music. As an amateur guitar player, I attempt to make recorded music. Typically, what I do is throw together a skeleton for a song - the basic structure, the chord progressions, melody and so on - using a single sequenced instrument, like a nice synth patch. That might take me an afternoon for a 5-minute piece of music.

    Then I start working out guitar parts - if it's going to be that style of arrangement - and begin recording them (musos usually call this "tracking").

    Take a fiddly guitar solo, for example; a 16-bar solo might last 30 seconds at ~120 beats per minute. Easy, you might think, to record it in one take. Well, not so much. I'm trying to get the best take possible, because it's metal and standards are high.

    I might record the whole solo as one take, but it will take me several takes to get one I'm happy with. And even then, I might really like the performance on take #3 in the first 4 bars, and really like the last 4 bars of take #6, and be happy with the middle 8 from take #1. I can edit them together - it's a doddle these days - to make one "super take" that's a keeper.

    Every take costs time: at least 30 seconds if I let my audio workstation software loop over those 16 bars writing a new take each time.

    So, getting the takes I was happy with cost me 6 x 30 seconds (3 minutes).

    Now, imagine I recorded those takes in 4-bar sections. Each take would last 7.5 seconds. To get the first 4 bars so I'm happy with them, I would need 3 x 7.5 seconds (22.5 seconds). To get the last 4 bars, 6 x 7.5 seconds (45 seconds), and to get the middle 8, just 15 seconds.

    So, recording it in 4-bar sections would cost me 1m 22.5 seconds.
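    Since I'm learning Python anyway, the arithmetic above as a quick sketch (timings as stated: 16 bars is roughly 30 seconds, so 4 bars is 7.5 seconds and the middle 8 is 15):

```python
# Cost of whole-solo takes vs. sectioned takes, using the numbers above.

whole_take = 30                  # seconds per full 16-bar take
whole_cost = 6 * whole_take      # 6 takes before every section was covered

first4_cost = 3 * 7.5            # first 4 bars: keeper on take #3
last4_cost = 6 * 7.5             # last 4 bars: keeper on take #6
middle8_cost = 1 * 15            # middle 8: keeper on take #1
section_cost = first4_cost + last4_cost + middle8_cost

print(whole_cost, section_cost)  # 180.0 vs 82.5 - i.e. 3m vs 1m 22.5s
```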

    Of course, there would be a bit of an overhead to doing smaller takes, but what I tend to find is that - overall - I get the performances I want sooner if I bite off smaller chunks.

    A performance purist, of course, would insist that I record the whole thing in one take for every guitar part. And that's essentially what playing live is. But playing live comes with its own overhead: rehearsal time. When I'm recording takes of guitar parts, I'm essentially also rehearsing them. The line between rehearsal and performance has been blurred by modern digital recording technology. Having a multitrack studio in my home that I can spend as much time recording in as I want means that I don't need to be rehearsed to within an inch of my life, like we had to be back in the old days when studio time cost real money.

    Indeed, the lines between composing, rehearsing, performing and recording have been completely blurred. And this is much the same as in programming today.

    Remember when compilers took ages? Some of us will even remember when compilers ran on big central computers, and you might have to wait 15-30 minutes to find out if your code was syntactically correct (let alone if it worked.)

    Those bad old days go some way to explaining the need for much up-front effort in "getting it right", and fuelled the artificial divide between "designing" and "coding" and "testing" that sadly persists in dev culture today.

    The reality now is that I don't have to go to some computer lab somewhere to book time on a central mainframe, any more than I have to go to a recording studio to book time with their sound engineer. I have unfettered access to the tools, and it costs me very little. So I can experiment. And that's what programming (and recording music) essentially is, when all's said and done: an experiment.

    Everything we do is an experiment. And experiments can go wrong, so we may have to run them again. And again. And again. Until we get a result we're happy with.

    So biting off small chunks is vital if we're to make an experimental approach - an iterative approach - work. Because bigger chunks mean longer cycles, and longer cycles mean we either have to settle for less - okay, the first four bars aren't that great, but it's the least worst take of the 6 we had time for - or we have to spend more time to get enough iterations (movie directors call it "coverage") to better ensure that we end up with enough of the good stuff.

    This is why live performances generally don't sound as polished as studio performances, and why software built in big chunks tends to take longer and/or not be as good.

    In guitar, the more complex and challenging the music, the smaller the steps we should take. I could probably record a blues-rock number in much bigger takes, because there's less to get wrong. Likewise in software, the more there is that can go wrong, the better it is to take baby steps.

    It's basic probability, really. Guessing a 4-digit number is orders of magnitude easier if we guess one digit at a time: at most 40 guesses, instead of up to 10,000 for the whole number at once.
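    Or, in a couple of lines of Python, the worst-case guess counts:

```python
# Worst-case guesses: the whole 4-digit number at once, vs. one digit at a time.
digits = 4
all_at_once = 10 ** digits      # every combination: up to 10,000 guesses
digit_by_digit = 10 * digits    # 10 tries per digit: at most 40 guesses
print(all_at_once, digit_by_digit)  # 10000 40
```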