February 17, 2015
Clean Code is a Requirements Discipline

Good morning, and welcome to my World of Rant.
Today's rant is powered by Marmite.
If you're one of those keeerrrraaazy developers who thinks that code should be reliable and maintainable, and have expressed that wrong-headed thought out loud and in public, then you've probably run into much more rational, right-thinking people - often people who aren't developers, because who would want to do that for a living? (yuck!) - who counter that it's more important to satisfy users' needs than to write Clean Code, and that a focus on the latter must necessarily be at the expense of the former.
i.e., people who focus on details must be losing sight of the Bigger Picture (TM).
I call bullshit.
First of all, the mentality that cares about making their software reliable and easy to understand and to change tends to just care, generally. I don't know about you, but I don't get much job satisfaction from writing beautiful code that nobody uses.
Secondly of all, it's a false dichotomy, like we have to choose between useful software and Clean Code, and it's not possible to take care of building the right thing and building it right at the same time.
Good software developers are able to dive into the detail and then step back to look at the Bigger Picture (TM) with relative ease. The requirements discipline is part and parcel of software craftsmanship. It's misinformed to suggest that craftsmanship is all about code quality, just as it is to suggest that TDD is all about unit tests and internal design.
I'd go so far as to say that, not only are Clean Code and the Bigger Picture (TM) perfectly compatible, but in actuality they go together like Test-first and Refactoring. Teams really struggle to achieve one without the other.
Maintainable code requires, first and foremost, that the code be easy to read and understand. Literate programming demands that our code clearly "tells the story" of what the software does in response to user input, and in the user's language. That is to say, to write code that makes sense, we must have sense to make of it.
And, more importantly, how clean our code is has a direct impact on how easy it will be for us to change that code based on user feedback.
People tend to forget that iterating is the primary requirements discipline: with the best will in the world, and all the requirements analysis and acceptance testing voodoo we can muster, we're still going to need to take a few passes at it. And the more passes we take, the more useful our software is likely to become.
Software that gets used changes. We never get it right first time. Even throwaway code needs to be iterated. (Which is why the "But this code doesn't need to last, so we don't need to bother making it easy to change" argument holds no water.)
Software development is a learning process, and without Clean Code we severely hinder that learning. What's the point in getting feedback if it's going to be too costly to act on it? I shudder to think how many times I've watched teams get stuck in that mud, bringing businesses to their knees sometimes.
Which is why I count Clean Code as primarily a requirements discipline. It is getting the details right so that we can better steer ourselves towards the Bigger Picture (TM).
To think that there's a dichotomy between the two disciplines is to fundamentally misunderstand both.
February 13, 2015
Intensive TDD, Continuous Inspection Recipes & Crappy Remote Collaboration Tools

A mixed bag for today's post, while I'm at my desk.
First up, after the Intensive TDD workshop on March 14th sold out (with a growing waiting list), I've scheduled a second workshop on Saturday April 11th, with places available at the insanely low price of £30. Get 'em while they're hot!
Secondly, I'm busy working on a practical example for a talk I'm giving at NorDevCon on Feb 27th about Continuous Inspection.
What I'm hoping to do is work through a simple example based on my Dependable Dependencies Principle, where I'll rig up an automated code analysis wotsit to find the most complex, most depended upon and least tested parts of some code to give early warning about where it might be most likely to be broken and might need better testing and simplifying.
To run this metric, you need 3 pieces of information:
* Cyclomatic Complexity of methods
* Afferent couplings per method
* Test coverage per method
Now, test coverage could mean different things. But for a short demonstration, I should probably keep it simple and fairly brute force - e.g., % LOC reached by the tests. Not ideal, but in a short session, I don't want to get dragged into a discussion about coverage metrics. It's also a readily-available measure of coverage, using off-the-shelf tools, so it will save me time in preparing and allow viewers to try it for themselves without too much fuss and bother.
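The scoring formula itself can be trivially simple. As an illustration only - this particular weighting is my own sketch, not a standard metric - risk might grow with complexity and afferent couplings, and shrink as coverage improves:

```java
public class RiskMetric {

    // complexity: cyclomatic complexity of the method
    // afferentCouplings: how many other methods depend on it
    // coverage: fraction of its lines reached by tests, 0.0 to 1.0
    public static double riskScore(int complexity, int afferentCouplings, double coverage) {
        // Risk grows with complexity and with how much depends on the
        // method, and shrinks as test coverage approaches 100%
        return complexity * afferentCouplings * (1.0 - coverage);
    }

    public static void main(String[] args) {
        // Complex, heavily depended-upon and barely tested: flag it
        System.out.println(riskScore(15, 8, 0.2));
        // Simple, standalone and well tested: leave it alone
        System.out.println(riskScore(2, 1, 0.9));
    }
}
```

The point isn't the formula; it's that whatever formula the team agrees becomes an executable, testable quality gate.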
What's more important is to demonstrate the process going from identifying a non-functional requirement (e.g., "As the Architect, I want early warning about code that presents a higher risk of being unreliable so that I can work with the developers to get better assurance for it"), to implementing an executable quality gate using available tools in a test-driven manner (everybody forgets to agree tests for their metrics!), to managing the development process when the gate is in place. All the stuff that constitutes effective Continuous Inspection.
At time of writing, tool choice is split between a commercial code analysis tool called JArchitect, and SonarQube. It's a doddle to rig up in JArchitect, but the tool costs £££. It's harder to rig up in SonarQube, but the tools are available for free. (Except, of course, nothing's ever really free. Extra time taken to get what you want out of a tool also adds up to £££.) We'll see how it goes.
Finally, after a fairly frustrating remote pairing session on Wednesday where we were ultimately defeated by a combination of Wi-Fi, Skype, TeamViewer and generally bad mojo, it's occurred to me that we really should be looking into remote collaboration more seriously. If you know of more reliable tools for collaboration, please tweet me at @jasongorman.
February 9, 2015
Mock Abuse: How Powerful Mocking Tools Can Make Code Even Harder To Change

Conversation turned today to that perennial question about mock abuse; namely that there are some things mocking frameworks enable us to do that we probably shouldn't ought to.
In particular, as frameworks have become more powerful, they've made it possible for us to substitute the un-substitutable in our tests.
Check out this example:
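The code for this example seems to have gone missing from this copy of the post, so here's a minimal reconstruction consistent with the description that follows (the Order class's fields and the filtering logic are my assumptions; the class and method names come from the text):

```java
import java.util.ArrayList;
import java.util.List;

class Order {
    String customerId; // field assumed for illustration
    Order(String customerId) { this.customerId = customerId; }
}

class CustomerData {
    // Static data access method - in the real thing, this hits the database
    static List<Order> getAllOrders() {
        throw new UnsupportedOperationException("requires a live database connection");
    }
}

class Orders {
    List<Order> ordersFor(String customerId) {
        List<Order> result = new ArrayList<>();
        // The static call is hard-wired: no way to substitute it from outside
        for (Order order : CustomerData.getAllOrders()) {
            if (order.customerId.equals(customerId)) {
                result.add(order);
            }
        }
        return result;
    }
}
```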
Because Orders invokes the static database access method getAllOrders(), it's not possible for us to use dependency injection to make it so we can unit test Orders without hitting the database. Boo! Hiss!
Along comes our mocking knight in shining armour, enabling me to stub out that static method to give a test-specific response:
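The test code for this step is also missing from this copy. With a PowerMock-style tool, stubbing the static `CustomerData.getAllOrders()` described above typically looks something like this - a sketch only, assuming JUnit 4 plus PowerMock on the classpath, so it won't run standalone:

```java
import static org.mockito.Mockito.when;

import java.util.Arrays;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(CustomerData.class)
public class OrdersTest {

    @Test
    public void filtersOrdersByCustomer() {
        // The tool rewrites the class so that even a static method can be stubbed
        PowerMockito.mockStatic(CustomerData.class);
        when(CustomerData.getAllOrders())
            .thenReturn(Arrays.asList(new Order("alice"), new Order("bob")));

        // Now the unit test runs without touching the database
        org.junit.Assert.assertEquals(1, new Orders().ordersFor("alice").size());
    }
}
```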
Problem solved. Right?
Well, for now, maybe yes. But the mocking tool has not solved the problem that I still couldn't substitute CustomerData.getAllOrders() in the actual design if I wanted to (say, to use a different kind of back-end data store or a web service). So it's solved the "how do I unit test this?" problem, but not in a way that buys me any flexibility or solves the underlying design problem.
If anything, it's made things a bit worse. Now, if I want to refactor Orders to make the database back end swappable, I've got a bunch of test code that also depends on that static method (and arguably in a bigger way - now even more code depends on that internal dependency, if you catch my drift).
I warn very strongly against using tools and techniques like these to get around inherent internal dependency problems, because - when it comes to refactoring (and what's the point in having fast-running unit tests if we can't refactor?) - all that extra test code can actually bake in the design problems.
Multiply this one toy example by 1,000 to get a sense of the scale at which I sometimes see this in real code bases. This approach can make rigid and brittle designs even more rigid and more brittle. In the long term, it's better to make the code unit-testable by fixing the dependency problem, even if this means living with slow-running (or even - gasp! - manual) tests for a while.
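For contrast, here's a sketch of what fixing the underlying dependency problem might look like - the interface and fake names here are mine, invented for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Order {
    String customerId;
    Order(String customerId) { this.customerId = customerId; }
}

// The swappable abstraction: a database today, a web service tomorrow
interface OrderStore {
    List<Order> getAllOrders();
}

class Orders {
    private final OrderStore store;

    // The dependency is injected, so callers decide which implementation to use
    Orders(OrderStore store) { this.store = store; }

    List<Order> ordersFor(String customerId) {
        List<Order> result = new ArrayList<>();
        for (Order order : store.getAllOrders()) {
            if (order.customerId.equals(customerId)) {
                result.add(order);
            }
        }
        return result;
    }
}

// A hand-rolled fake for unit tests - no bytecode wizardry required
class FakeOrderStore implements OrderStore {
    public List<Order> getAllOrders() {
        return Arrays.asList(new Order("alice"), new Order("bob"), new Order("alice"));
    }
}
```

Now the tests depend on the abstraction rather than on an internal static call, so swapping the back end no longer breaks a pile of test code.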
February 4, 2015
Why Distribution & Concurrency Can Be A Lethal Cocktail For The Unwitting Dev TeamPicture the scene: it's Dec 31st 1990, a small town in upper state New York. I'm at a New Year's Eve party, young, stupid and eager to impress. The host mixes me my first ever Long Island Iced Tea. It tastes nice. I drink three large ones, sitting at their kitchen table, waxing eloquent about life, the universe and everything in my adorable English accent, and feeling absolutely fine. Better than fine.
And then, after about an hour, I get up to go to the bathroom. I'm not fine. Not fine at all. I appear to have lost the use of my legs, and developed an inner-ear problem that's affecting my normally balletically graceful poise and balance.
I proceed to be not-fine-at-all into the bathroom sink and several other receptacles, arguably none of which were designed for the purpose I'm now putting them to.
Long Island Iced Tea is a pretty lethal cocktail. Mixed properly, it tastes like a mildly alcoholic punch with a taste not dissimilar to real iced tea (hence the name), but one look at the ingredients puts paid to that misunderstanding: rum, gin, vodka, tequila, triple sec - ingredients that have no business being in the same glass together. It is a very alcoholic drink. Variants on the name, like "Three Mile Island" and "Adios Motherf***er", provide further clues that this is not something you serve at a child's birthday party.
I end the evening comatose on a water bed in a very hot room. This completes the effect, and Jan 1st 1991 is a day I have no memory of.
Vowing never to be suckered into a false sense of security by something that tastes nice and makes me feel better-than-fine for a small while, I should have known better than to get drawn like a lamb to the slaughter into the distributed components craze that swept software development in the late 1990's.
It went something like this:
Back in the late 1990's, aside from the let's-make-everything-a-web-site gold rush that was reaching a peak, there was also the let's-carve-up-applications-that-we-can't-even-get-working-properly-when-everything's-in-one-memory-address-space-and-there's-no-concurrency-and-distribute-the-bits-willy-nilly-adding-network-deficiencies-distributed-transactions-and-message-queues fad.
This was enabled by friendly technology that allowed us to componentise our software without the need to understand how all the underlying plumbing worked. Nice in theory. You carve it, apply the right interfaces, deploy to your application server and everything's taken care of.
Except that it wasn't. It's very easy to get up and running with these technologies, but we found ourselves continually having to dig down into the underlying detail to figure out why stuff wasn't working the way it was supposed to. "It just works" was a myth easily dispelled by looking at how many books on how this invisible glue worked were lying open on people's desktops.
To me, with the benefit of hindsight, object request brokers, remote procedure calls, message queues, application servers, distributed transactions, web services... these are the hard liquor of software development. The exponential increase in complexity - the software equivalent of alcohol units - can easily put unwitting development teams under the table.
I've watched so many teams merrily downing pints of lethal-but-nice-tasting cocktails of distribution and concurrency, feeling absolutely fine - better than fine - and then when it's time for the software to get up and walk any kind of distance... oh dear.
It turns out, this stuff is hard to get right, and the tools don't help much in that respect. They make it easy to mix these cocktails and easy to drink as much as you think you want, but they don't hold your hand when you need to go to the bathroom.
These tools are not your friends. They are the host mixing super-strength Long Island Iced Teas and ruining your New Year with a hangover that will never go away.
Know what's in your drink, and drink in moderation.
February 2, 2015
Stepping Back to See The Bigger Picture

Something we tend to be bad at in the old Agile Software Development lark is the Big Picture problem.
I see time and again teams up to their necks in small details - individual users stories, specific test cases, individual components and classes - and rarely taking a step back to look at the problem as a whole.
The end result can often be a piecemeal solution that grows one detail at a time to reveal a patchwork quilt of a design, in the worst way.
While each individual piece may be perfectly rational in its design, when we pull back to get the bird's eye view, we realise that - as a whole - what we've created doesn't make much sense.
In other creative disciplines, a more mature approach is taken. Painters, for example, will often sketch out the basic composition before getting out their brushes. Film producers will go through various stages of "pre-visualisation" before any cameras (real or virtual) start rolling. Composers and music producers will rough out the structure of a song before worrying about how the kick drum should be EQ'd.
In all of these disciplines, the devil is just as much in the detail as in software development (one vocal slightly off-pitch can ruin a whole arrangement, one special effect that doesn't quite come off can ruin a movie, one eye slightly larger than the other can destroy the effect of a portrait in oil - well, unless you're Picasso, of course; and there's another way in which we can be like painters, claiming that "it's meant to be like that" when users report a bug.)
Perhaps in an overreaction to the Big Design Up-Front excesses of the 1990's, these days teams seem to skip the part where we establish an overall vision. Like a painter's initial rough pencil sketch, or a composer's basic song idea mapped out on an acoustic guitar, we really do need some rough idea of what the thing we're creating as a whole might look like. And, just as with pencil sketches and song ideas recorded on Dictaphones, we can allow it to change based on the feedback we get as we flesh it out, but still maintain an overall high-level vision. They don't take long to establish, and I would argue that if we can't sketch out our vision, then maybe - just maybe - it's because we don't have one yet.
Another thing they do that I find we usually don't (anywhere near enough) is continually step back and look at the thing as a whole while we're working on the details.
Watch a painter work, and you'll see they spend almost as much time standing back from the canvas just looking at it as they do up close making fine brush strokes. They may paint one brush stroke at a time, but the overall painting is what matters most. The vase of flowers may be rendered perfectly by itself, but if its perspective differs from its surroundings, it's going to look wrong all the same.
Movie directors, too, frequently step back and look at an edit-in-progress to check that the footage they've shot fits into a working story. They may film movies one shot at a time, but the overall movie is what matters most. An actor may perform a perfect take, but if the emotional pitch of their performance doesn't make sense at that exact point in the story, it's going to seem wrong.
When a composer writes a song, they will often play it all the way through up to the point they're working on, to see how the piece flows. A great melody or a killer riff might seem totally wrong if it's sandwiched in between two passages that it just doesn't fit in with.
This is why I recommend that developers, too, routinely take a step back and look at how the detail they're currently working on fits in with the whole. Run the application, walk through user journeys that bring you to that detail and see how it feels in context. It may seem like a killer feature to you, but a killer feature in the wrong place is just a wrong feature.
What's The Problem With Static Methods?

Quick brain dump before bed.
When I'm pairing with people, quite often they'll declare a method as static. And I'll ask "Why did we make that static?" And they'll say "Well, because it doesn't have any state" or "It'll be faster" or "So I can invoke it without having to write new WhateverClassName()".
And then, sensing my disapproval as I subtly bang my head repeatedly against the desk and scream "Why? Why? Why?!", they ask me "What's the problem with static methods?"
I've learned, over the years, not to answer this question using words (or flags or interpretive dance). It's not that it's difficult to explain. I can do it in one made-up word: composibility.
If a method's static, it's not possible to replace it with a different implementation without affecting the client invoking it. Because the client knows exactly which implementation it's invoking.
Instance methods open up the possibility of substituting a different implementation, and it turns out that flexibility is the whole key to writing software with great composibility.
Composibility is very important when we want our code to accommodate change more easily. It's the essence of Inversion of Control: the ability to dynamically rewire the collaborators in a software system so that overall behaviour can be composed from the outside, and even changed at runtime.
A classic example of this is the need to compose objects under test with fake collaborators (e.g., mock objects or stubs) so that we can test them in isolation.
You may well have come across code that uses static methods to access external data, or web session state. If we want to test the logic of our application without hitting the file system or a database or having to run that code inside a web server, then those static methods create a real headache for us. They'd be the first priority to refactor into instance methods on objects that can be dependency injected into the objects we want to unit test.
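To illustrate with a made-up example (not from the original post): the static version hard-wires every caller to one implementation, while the instance version lets behaviour be composed from the outside:

```java
// Static version: every caller is hard-wired to this one implementation
class TaxUtils {
    static double addTax(double net) { return net * 1.20; }
}

// Instance version: the implementation becomes swappable
interface TaxPolicy {
    double addTax(double net);
}

class StandardVat implements TaxPolicy {
    public double addTax(double net) { return net * 1.20; }
}

class TaxExempt implements TaxPolicy {
    public double addTax(double net) { return net; }
}

class Invoice {
    private final TaxPolicy taxPolicy;

    // Inversion of Control: the collaborator is wired in from outside
    Invoice(TaxPolicy taxPolicy) { this.taxPolicy = taxPolicy; }

    double total(double net) { return taxPolicy.addTax(net); }
}
```

The same Invoice can now be composed with a real tax policy in production and a stub in a unit test, without Invoice itself having to change.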
The alternatives are high maintenance solutions: fake in-memory file systems, in-memory databases, fake web application servers, and so on. The dependency injection solution is much simpler and much cleaner, and in the long run, much, much cheaper.
But unit testability is just the tip of the iceberg. The "flex points" - to use a term coined in Steve Freeman and Nat Pryce's excellent Growing Object Oriented Software Guided By Tests book - we introduce to make our objects more testable also tend to make our software more composable in other ways, and therefore more open to accommodating change.
And if the static method doesn't have external dependencies, the unit testing problem is a red herring. It's the impact on composibility that really hurts us in the long run.
And so, my policy is that methods should be instance methods by default, and that I would need a very good reason to sacrifice that composibility for a static method.
But when I tell people whose policy is to make methods static if they don't need to access instance features this, they usually don't believe me.
What I do instead is pair with them on some existing code that uses static methods. The way static methods can impede change and complicate automated testing usually reveals itself quite quickly.
So, the better answer to "What's the problem with static methods?" is "Write some unit tests for this legacy code and then ask me again."
* And, yes, I know that in many languages these days, pointers to static functions can be dependency-injected into methods just like objects, but that's not what I'm talking about here.
January 30, 2015
It's True: TDD Isn't The Only Game In Town. So What *Are* You Doing Instead?

The artificially induced clickbait debate "Is TDD dead?" continues at developer events and in podcasts and blog posts and commemorative murals across the nations, and the same perfectly valid point gets raised every time: TDD isn't the only game in town.
They're absolutely right. Before the late 1990's, when the discipline now called "Test-driven Development" was beginning to gain traction at conferences and on Teh Internets, some teams were still somehow managing to create reliable, maintainable software and doing it economically.
If they weren't doing TDD, then what were they doing?
The simplest alternative to TDD would be to write the tests after we've written the implementation. But hey, it's pretty much the same volume of tests we're writing. And, for sure, many TDD practitioners go on to write more tests after they've TDD'd a design, to get better assurance.
And when we watch teams who write the tests afterwards, we tend to find that the smart ones don't write them all at once. They iteratively flesh out the implementation, and write the tests for it, one or two scenarios (test cases) at a time. Does that sound at all familiar?
Some of us were using what they call "Formal Methods" (often confused with heavyweight methods like SSADM and the Unified Process, which aren't really the same thing.)
Formal Methods is the application of rigorous mathematical techniques to the design, development and testing of our software. The most common approach was formal specification: teams would write a precise, mathematical and testable specification, write code specifically to satisfy it, and then follow up with tests created from the specification to check that the code actually works as required.
We had a range of formal specification languages, with exotic names like Z (and Object Z), VDM, OCL, CSP, RSVP, MMRPG and very probably NASA or some such.
Some of them looked and worked like maths. Z, for example, was founded on formal logic and set theory, and used many of the same symbols (since all programming is set theoretic.)
Programmers without maths or computer science backgrounds found mathematical notations a bit tricky, so people invented formal specification languages that looked and worked much more like the programming languages we were familiar with (e.g., the Object Constraint Language, which lets us write precise rules that apply to UML models.)
Contrary to what you may have heard, the (few) teams using formal specification back in the 1990's were not necessarily doing Big Design Up-Front, and were not necessarily using specialist tools either.
Much of the formal specification that happened was scribbled on whiteboards to adorn simple design models to make key behaviours unambiguous. From that, teams might have written unit tests (that's how I learned to do it) for a particular feature, and those tests pretty much became the living specification. Labyrinthine Z or OCL specifications were not necessarily being kept and maintained.
It wasn't, therefore, such a giant leap for teams like the ones I worked on to say "Hey, let's just write the tests and get them passing", and from there to "Hey, let's just write a test, and get that passing".
But it's absolutely true that formal specification is still a thing that some teams still do - you'll find most of them these days alive and well in the Model-driven Development community (and they do create complete specifications and all the code is generated from those, so the specification is the code - so, yes, they are programmers, just in new languages.)
Watch Model-driven Developers work, and you'll see teams - well, the smarter ones - gradually fleshing out executable models one scenario at a time. Sound familiar?
So there's a bunch of folk out there who don't do TDD, but - by jingo! - it sure does look a lot like TDD!
Other developers used to embed their specifications inside the code, in the form of assertions, and then write tests suites (or drive tests in some other way) that would execute the code to see if any of the assertions failed.
So their tests had no assertions. It was sort of like unit testing, but turned inside out. Imagine doing TDD, and then refactoring a group of similar tests into a single test with a general assertion (e.g., instead of assert(balance == 100) and assert(balance == 200), it might be assert(balance == oldBalance + creditAmount)).
Now go one step further, and move that assertion out of the test and into the code being tested (at the end of that code, because it's a post-condition). So you're left with the original test cases to drive the code, but all the questions are being asked in the code itself.
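A sketch of that inside-out arrangement (my example, not from the original post):

```java
class Account {
    private double balance;

    Account(double openingBalance) { balance = openingBalance; }

    void credit(double amount) {
        double oldBalance = balance;
        balance = balance + amount;
        // The general post-condition lives in the code, not in the tests.
        // (Run the JVM with -ea while testing to enable it; leave it off in live.)
        assert balance == oldBalance + amount : "credit lost some of the amount";
    }

    double balance() { return balance; }
}
```

The test code just drives scenarios - new Account(100), credit(50), and so on - while the assertion inside credit() checks correctness for every scenario that passes through it.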
Most programming languages these days include a built-in assertion mechanism that allows us to do this. Many have build flags that allow us to turn assertion checking on or off (on if testing, off if deploying to live.)
When you watch teams working this way, they don't write all the assertions (and all the test automation code) at once. They tend to write just enough to implement a feature, or a single use case scenario, and flesh out the code (and the assertions in it) scenario by scenario. Sound familiar?
Of course, some teams don't use test automation at all. Some teams rely on inspections, for example. And inspections are a very powerful way to debug our code - more effective than any other technique we know of today.
But they hit a bit of a snag as development progresses, namely that inspecting all of the code that could be broken after a change, over and over again for every single change, is enormously time-consuming. And so, while it's great for discovering the test cases we missed, as a regression testing approach, it sucks ass Gangnam Style.
But, and let's be clear about this, these are the techniques that are - strictly speaking - not TDD, and that can (even if only initially, like in the case of inspections) produce reliable, maintainable software. If you're not doing these, then you're doing something very like these.
Unless, of course... you're not producing reliable, maintainable code. Or the code you are producing is so very, very simple that these techniques just aren't necessary. Or if the code you're creating simply doesn't matter and is going to be thrown away.
I've been a software developer for approximately 700 million years (give or take), so I know from my wide and varied experience that code that doesn't matter, code that's only for the very short-term, and code that isn't complicated, are very much the exceptions.
Code that gets used tends to stick around far longer than we planned. Even simple code usually turns out to be complicated enough to be broken. And if it doesn't matter, then why in hell are we doing it? Writing software is very, very expensive. If it's not worth doing well, then it's very probably - almost certainly - not worth doing.
So what is the choice that teams are alluding to when they say "TDD isn't the only game in town"? Do they mean they're using Formal Methods? Or perhaps using assertions in their code? Or do they rely on rigorous inspections to make sure they at least get it right the first time around?
Or are they, perhaps, doing none of these things, and the choice they're alluding to is the choice to create software that's not good enough and won't last?
I suspect I know the answer. But feel free to disagree.
My apprentice, Will Price, has emailed me with a very good question:
"Why don't we embed our assertions in our code and just have the tests exercise the code?
What benefit did we gain from moving them out into tests?
It seems to me that having assertions inside the code is much nicer than having them in tests because it acts as an additional form of documentation, right there when you're reading the code so you can understand what preconditions and postconditions a particular method has, I should imagine you could even automate collection of these to put them into documentation etc so developers have a good idea of whether they can reuse a method (i.e. have they satisfied the preconditions, are the postconditions sufficient etc)"
My reply to him (copied and pasted):
That's a good question. I think the answer may lie in tradition. Generally, folks writing unit tests weren't using assertions in code, and folks using assertions in code weren't writing automated tests. (For example, many relied on manual testing, or random testing, or other ways of driving the code in test runs).
So, people coming from the unit testing tradition have ended up with assertions in their tests, and folk from the assertions tradition (e.g., Design By Contract) have ended up with no test suites to speak of. Model checking is an example of this: folks using model checkers would often embed assertions in code (or in comments in the code), and the tool would exercise the code using white-box test case generation.
This mismatch is arguably the biggest barrier at the moment to merging these approaches because the tools don't quite match up. I'm hoping this can be fixed.
Thinking a bit more about it, I also believe that asserting correct behaviour inside the code helps with inspections.
January 26, 2015
Intensive Test-driven Development, London, Saturday March 14th - Insanely Good Value!

Just a quick note to mention that the Codemanship 1-day Intensive TDD training workshop is back, and at the new and insanely low price of just £20.
You read that right; £20 for a 1-day TDD training course (a proper one!)
Places are going fast. Visit our EventBrite page to book your place now.
January 21, 2015
My Solution To The Dev Skills Crisis: Much Smaller Teams

Putting my Iconoclast hat on temporarily, I just wanted to share a thought that I've harboured almost my entire career: why aren't very small teams (1-2 developers) the default model in our industry?
I think back to products I've used that were written and maintained by a single person, like the guy who writes the guitar amp and cabinet simulator Recabinet, or my brother, who wrote a 100,000 line XBox game by himself in a year, as well as doing all the sound, music and graphic design for it.
I've seen teams of 4-6 developers achieve less with more time, and teams of 10-20 and more achieve a lot less in the same timeframe.
We can even measure it somewhat objectively: my Team Dojo, for example, when run as a one day exercise seems to be do-able for an individual but almost impossible for a team. I can do it in about 4 hours alone, but I've watched teams of very technically strong developers fail to get even half-way in 6 hours.
People may well counter: "Ah, but what about very large software products, with millions of lines of code?" But when we look closer, large software products tend to be interconnected networks of smaller software products presenting a unified user interface.
The trick to a team completing the Team Dojo, for example, is to break the problem down at the start and do a high-level design where interfaces and contracts between key functional components are agreed and then people go off and get their bit to fulfil its contracts.
Hence, we don't need to know how the spellcheck in our word processor works, we just need to know what the inputs and expected outputs will be. We could sketch it out on paper (e.g., with CRC cards), or we could sketch it out in code with high-level interfaces, using mock objects to defer the implementation design.
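For example (all names invented for illustration), the word processor team might agree a contract like this, with a hand-rolled mock standing in until the real spellchecker exists:

```java
import java.util.Arrays;
import java.util.List;

// The agreed contract: callers only need to know inputs and expected outputs
interface SpellChecker {
    List<String> misspeltWords(String text);
}

class Document {
    private final SpellChecker spellChecker;

    Document(SpellChecker spellChecker) { this.spellChecker = spellChecker; }

    boolean hasSpellingErrors(String text) {
        return !spellChecker.misspeltWords(text).isEmpty();
    }
}

// A mock defers the implementation design: it just gives canned answers
class MockSpellChecker implements SpellChecker {
    public List<String> misspeltWords(String text) {
        return Arrays.asList("teh");
    }
}
```

Whoever takes on the spellcheck component only has to honour the SpellChecker contract; everyone else codes against the interface.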
There'll still be much need for collaboration, though. It's especially important to integrate your code frequently in these situations, because there's many a slip 'twixt cup and microservice.
As with multithreading (see previous blog post), we can aim to limit the "touch points" in component-based/service-oriented/microservice architectures so that - as much as possible - each component is self-contained, presents a simple interface and can be treated as a black box by everyone who isn't working on its implementation.
Here's the thing, though: what we tend to find with teams who are trying to be all hifalutin and service-oriented and enterprisey-wisey is that, in reality, what they're working on is a small application that would probably be finished quicker and better by 1-2 developers (1 on her own, or 2 pair programming).
You only get an economy of scale with hiding details behind clean interfaces when the detail is sufficiently complex that it makes sense to have people working on it in parallel.
Do you remember from school biology class (or physics, if you covered this under thermodynamics) the lesson about why small mammals lose heat faster than large mammals?
It's all about the surface area-to-volume ratio: a teeny tiny mouse presents a large surface area proportional to the volume of its little body, so more of its insides are close to the surface, and therefore it loses heat through its skin faster than, say, an elephant, which has a massive internal volume proportional to its surface area, so that most of its insides are away from the surface.
It may be stretching the metaphor to breaking point, but think of interfaces as the surface of a component, and the code behind the interfaces as the internal volume. When a component is teeny-tiny, like a wee mouse, the overhead in management, communication, testing and all that jazz in splitting off developers to try to work on it in parallel makes it counterproductive to do that. Not enough of the internals are hidden to justify it. And so much development effort is lost through that interface as "heat" (wasted energy).
Conversely, if designed right, a much larger component can still hide all the detail behind relatively simple interfaces. The "black box-iness" of such components is much higher, in so much as the overhead for the team in terms of communication and management isn't much larger than for the teeny-tiny component, but you get a lot more bang for your buck hidden behind the interfaces (e.g., a clever spelling and grammar checker vs. a component that formats dates).
And this, I think, is why trying to parallelise development on the majority of projects (the average size of a business code base is ~100,000 lines of code) is on a hiding to nothing. Sure, if you're creating an OS, with a kernel, and a graphics subsystem, and a networking subsystem, etc. etc., it makes sense to a point. But when we look at OS architectures, like Linux for example, we see networks of "black-boxy", weakly-interacting components hidden behind simple interfaces, each of which does rather a lot.
For probably 9 out of 10 projects I've come into contact with, it would in practice have been quicker and cheaper to put 1 or 2 strong developers on it.
And this is my solution to the software development skills crisis.
January 16, 2015
Can Restrictive Coding Standards Make Us More Productive?So, I had this long chat with a client team yesterday about coding standards, and I learned that they had - several years previously - instituted a small set of very rigorously enforced standards. We discussed the effects of this, especially on their ability to get things done.
They've built a quality gate for their product that runs as part of their integration builds (though you can save yourself time by running it on your desktop before you try to commit).
To give you an example of the sort of thing this quality gate catches, and their policies for dealing with it, try this one for size...
Necessarily, parts of their code are multi-threaded and acting on shared data. Before we begin, let's just remind ourselves that this is something that all programming languages have to deal with. Functional programming uses sleight of hand to make it look like multiple threads aren't acting on shared data, but they do this typically by pushing mutable state out into some transactional mechanism where our persistent data is managed. And hence, at some point, we must deal with the potential consequences of concurrency.
So, where was I? Oh yeah...
Anyway, they find that unavoidably there must be some multithreading in their code. Their standard, however, is to do as little multithreading as possible.
So their quality gate catches commits that introduce new threads. Simple as that: check in code that spawns new threads, and the gate slams shut and traps it in "commit purgatory" to be judged by a higher power.
The "higher power", in this case, is the team. Multithreading has to be justified to an impromptu panel of your peers. If they can think of a way to live without it, it gets rejected and you have to redo your code without introducing new threads.
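The mechanics of a gate like that can be surprisingly simple. Here's a hedged sketch in Python - the pattern list and function name are mine, not the client's actual tool - that scans the added lines of a diff for thread-spawning calls:

```python
import re

# Hypothetical patterns for thread-spawning calls in a few languages.
THREAD_SPAWN = re.compile(r"new Thread\(|threading\.Thread\(|pthread_create")

def gate(diff_lines):
    """Return added lines that introduce new threads. A non-empty result
    means the commit is held in 'purgatory' pending review by the team."""
    return [line for line in diff_lines
            if line.startswith("+") and THREAD_SPAWN.search(line)]

offenders = gate([
    "+int total = count(items);",
    "+new Thread(worker).start();",
])
print(offenders)  # only the thread-spawning line gets trapped by the gate
```

A real gate would hook into the build server and parse proper unified diffs, but the principle is the same: the tool only detects; the team decides.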
Why go to all this trouble?
Well, we've all seen what unnecessary complexity does to our code. It makes it harder to understand, harder to change, and more likely to contain errors. Multithreading adds arguably the most pernicious kind of complexity, creating an explosion in the possible ways our code can go wrong.
So every new thread, when it's acting on shared data, piles on the risk, and piling on risk piles on cost. Multithreading comes at a high price.
The team I spoke to yesterday recognised this high price - referring to multithreaded logic as "premium code" (because you have to pay a premium to make it work reliably enough) - and took steps to limit its use to the absolute bare minimum.
In the past, I've encouraged a system of tariffs for introducing "premium code" into our software. For example, fill a small jam jar with tokens of some kind labelled "1 Thread", and every time you think you need to write code that spawns a new thread, you have to go and get a token from the jar. This is a simple way to strictly limit the amount of multithreaded code by limiting the number of available tokens.
This can also serve to remind developers that introducing multithreaded code is a big deal, and they shouldn't do it lightly or thoughtlessly.
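In code form, the jar is just a counter. A playful sketch (the class name and token count are mine, invented for illustration):

```python
class ThreadTokenJar:
    """A hard limit on the amount of multithreaded 'premium code'."""
    def __init__(self, tokens=5):
        self.tokens = tokens

    def take(self):
        # You must hold a token before spawning a new thread.
        if self.tokens == 0:
            raise RuntimeError("Jar empty: remove an existing thread first")
        self.tokens -= 1

    def put_back(self):
        # Removing multithreaded code returns a token to the jar.
        self.tokens += 1

jar = ThreadTokenJar(tokens=2)
jar.take()
jar.take()
# A third take() would raise, forcing exactly the conversation
# the tariff is designed to provoke.
```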
Of course, if you hit your limit and you're faced with a problem where multithreading is unavoidable, that can force you to look at how existing multithreaded code could be removed. Maybe there's a better way of doing what we want that we didn't think of at the time.
In the case of multithreading - and remember that this is just an example (e.g., we could do something similar whenever someone wants to introduce a new dependency on an external library or framework) - it can also help enormously to know where in our code the multithreaded bits are. These are weak points in our code - likely to break - and we should make sure we have extra-strong scaffolding (for example, better unit tests) around those parts of the code to compensate.
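What that extra-strong scaffolding might look like, sketched in Python - `SafeCounter` here is a hypothetical stand-in for whatever shared state your premium code guards:

```python
import threading

class SafeCounter:
    """A tiny piece of 'premium code': shared state behind a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:
            self.value += 1

def hammer(counter, n=1000):
    for _ in range(n):
        counter.increment()

# The scaffolding: many threads hammering the shared state at once.
counter = SafeCounter()
threads = [threading.Thread(target=hammer, args=(counter,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 8000  # drop the lock and this will eventually fail
```

It's not a proof of thread-safety - no test is - but a test like this pins the weak point down so regressions get caught early rather than in production.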
But the real thrust of our discussion was about the impact rigorously enforced coding standards can have on productivity. Standards are rules, and rules are constraints.
A constraint limits the number of allowable solutions to a problem. We might instinctively assume that limiting the solution space will have a detrimental effect on the time and effort required to solve a problem (for the same reason it's easier to throw a 7 with two dice than a 2 or a 12 - more ways to achieve the same goal).
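The dice arithmetic checks out, and is easy to verify by brute force: of the 36 equally likely rolls of two dice, six sum to 7, but only one each sums to 2 or 12.

```python
from itertools import product

def ways(total):
    # Count the ordered pairs of two dice faces summing to the given total.
    return sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == total)

print(ways(7), ways(2), ways(12))
# -> 6 1 1
```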
But the team didn't find this to be the case. If anything, to a point, they find the reverse is true: the more choices we have, the longer it seems to take us.
So we ruminated on that for a while, because it's an interesting problem. Here's what I think causes it:
PROGRAMMERS ARE PARALYSED BY TOO MANY CHOICES
There, I've said it.
Consider this: you are planning a romantic night in with your partner. You will cook a romantic dinner, with a bottle of wine and then settle down in front of the TV to watch a movie.
Scenario #1: You go to the shopping mall to buy ingredients for the meal, the wine and a DVD
Scenario #2: You make do with what's in your fridge, wine rack and DVD collection right now
I've done both, and I tend to find that I spend a lot of time browsing and umm-ing and ah-ing when faced with shelf after shelf of choices when I visit the mall. Too much choice overpowers my feeble mind, and I'm caught like a rabbit in the headlights.
Open up the fridge, though, and there might be ingredients for maybe 3-4 different dishes in it right now. And I have 2 bottles of wine in the rack, both Pinot Noir. (Okay, I do, however, have a truly massive DVD collection, which is why when I decide to have a quiet night in watching a movie, the first half hour can be spent choosing which movie.)
Now, not all dishes are born equal, and not all wines are good, and not all movies are enjoyable.
But I generally don't buy food I don't like, or wine I won't drink (and that leaves a very wide field, admittedly), or movies I don't want to watch. So, although my choices are limited at home, they're limited to stuff that I like.
In a way, I have pre-chosen so that, when I'm hungry, or thirsty or in need of mindless entertainment, the limited options on offer are at least limited to stuff that will address those needs adequately.
The trick seems to be to allow just enough choice so that most things are possible. We restrict our solution space to the things we know we often need, just like I restrict my DVD collection to movies I know I'll want to watch again. I'll still want to buy more DVDs, but I don't go to the shop every time I want to watch a movie.
This is the effort we put into establishing standards in the first place (and it's an ongoing process, just like shopping.)
Over the long-term, at our leisure, we limit ourselves to avoid being overwhelmed by choice in the heat of battle, when decisions may need to be made quickly. But we limit ourselves to a good selection of choices - or should I say, a selection of good choices - that will work in most situations. Just as Einstein limited his wardrobe so he could get on with inventing gravity or whatever it was that he did.
Harking back to my crappy DVD library analogy - and I know this is something friends do, too, from what they tell me - I will not watch a particular movie I own on DVD for years, and then it'll be shown on TV, and I will sit there and watch it all the way through, adverts and all, and enjoy it.
This might also have the effect of compartmentalising trying out new solutions (new programming languages, new frameworks, new build tools and so on) - what we might call "R&D" (though all programming is R&D, really) - from solving problems using the selection of available solutions we land upon. This could be a double-edged sword. Making "trying out new stuff" a more explicit activity could have the undesired effect of creating an option for non-technical managers to say "no" that wasn't there before. Like refactoring, it's probably wise to make this none of their business.
And, sure, from time to time we'll find ourselves facing a problem for which our toolkit is not prepared. And then we'll have to improvise like we always do. Then we can switch into R&D mode. In Extreme Programming, we call these "spikes".
But I can't help feeling that we waste far too much time getting bogged down in choices when we have a perfectly good solution we could apply straight away.
Oftentimes, we just need to pick a solution and then get on with it.
I look forward to your outrage.