December 19, 2014

Software Build Quality - Why Are We Still Not Building It Right?

It's nearly 2015. This is the future we're living in now, where we all live in Bacofoil houses and drive round in hover cars and our robotic assistants are powered by the juice of lemons.

Disappointing though it is that some of the key predictions of 20th century science fiction still haven't come to pass - no holidays on the Moon, no super-intelligent talking computers, and no end to disease, poverty and war like in Star Trek - the thing that disappoints me most (and this is probably a sign that I need to get a life) is that the kinds of silly programming errors we had to put up with 30 years ago are the very same silly programming errors we commit today.

Argue all you like that the undesired behaviour the users are complaining about is really a "feature", but it's very hard to build that case for software that throws unhandled exceptions, or software that stops responding, or software that unexpectedly exits while you're in the middle of doing something important.

We all know the root causes of these programming boo-boos: trying to invoke methods on null objects, trying to reference elements in arrays that aren't that long, loops that never finish looping, threads that get deadlocked, and so on.

There are no excuses, really, in this day and age for releasing code with quite so many programming errors in it as ours seems to have.

Tools and techniques exist to find these kinds of problems quickly, and also to prevent them from occurring in the first place.

I'm not naïve enough to believe we can eliminate them completely; like the bubonic plague, there will be very rare outbreaks that we need to stay vigilant for. But with what's available to us, we can certainly get 99.9% of the way there.

And the real tragedy is that it really wouldn't cost us much, if any, more to do it.

One of my key focuses through Codemanship in 2015 and beyond, therefore, is going to be Software Build Quality.

Sure, maybe we built the wrong thing, but that's no excuse for not building it right. We iterate to address the former, not the latter.

Expect me to give your team a hard time about it from January onwards. You have been warned.


December 18, 2014

A Simple "Trick" For Getting More Bang From Your Unit Test Bucks

I'm finishing off my year with a little bit of R&D.

My personal project is to try and carve out a practical progression from vanilla TDD using good old-fashioned one-unit-test-per-test-case triangulation, generalising the test code as I go. The aim is to find a way to go from the basics up to more heavyweight kinds of unit testing that potentially offer much higher assurance than even TDD can give us.

Typically, if I have two or more unit tests that are essentially the same test, but with different inputs and expected results, I'd refactor them into a single parameterised test, with unique test data for each of the original tests.
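
For illustration, with JUnitParams that refactoring might end up looking something like this (the Fibonacci class and the test data here are made up for the example):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.junit.runner.RunWith;

    import junitparams.JUnitParamsRunner;
    import junitparams.Parameters;

    @RunWith(JUnitParamsRunner.class)
    public class FibonacciTest {

        // Several near-identical tests collapsed into one parameterised test;
        // each row of test data is one of the original test cases.
        @Test
        @Parameters({
            "0, 0",
            "1, 1",
            "6, 8"
        })
        public void calculatesNthFibonacciNumber(int n, int expected) {
            assertEquals(expected, Fibonacci.calculate(n));
        }
    }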

So far, so normal.

But I could go further and turn these parameterised tests into something that a tool like, say, JCheck could exploit to test against a potentially massive number of unique test data sets, randomly generated by the tool.

And in playing with this, I've discovered a useful little "trick" you can do in JUnit that allows us to have our cake and eat it - standard parameterised tests using JUnitParams when we need them, and the same tests run with JCheck when we want that.





Just add the annotations for either framework as desired/required, and then control which ones are applied by sub-classing the test fixture and using @RunWith() to specify whether these will be run as JUnitParams tests, or as JCheck tests.
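
To make that concrete, here's a rough sketch of the shape of it. The JUnitParams runner is the real one; the JCheck runner's package and class name are my assumption here, so check the JCheck documentation for the exact import. (Each class would live in its own file.)

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.junit.runner.RunWith;

    import junitparams.JUnitParamsRunner;
    import junitparams.Parameters;

    // Base fixture: the test methods carry the annotations both frameworks
    // need, but no @RunWith of their own.
    public abstract class FibonacciTestBase {

        @Test
        @Parameters({"0, 0", "1, 1", "6, 8"})
        public void calculatesNthFibonacciNumber(int n, int expected) {
            // NB: hard-coded expected values only make sense for the
            // parameterised runs - see the note below about calculating
            // expected results when inputs are randomly generated.
            assertEquals(expected, Fibonacci.calculate(n));
        }
    }

    // Run the tests against our hand-picked parameterised data sets...
    @RunWith(JUnitParamsRunner.class)
    public class FibonacciParameterisedTest extends FibonacciTestBase {}

    // ...or against randomly generated inputs (runner class name assumed).
    @RunWith(org.jcheck.runners.JCheck.class)
    public class FibonacciRandomisedTest extends FibonacciTestBase {}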

Simples. But potentially very powerfuls.

Two things to note before I sign off: firstly, it becomes necessary to generalise our test assertions so that expected results are calculated rather than given as a parameter of each test case. This comes at a price - duplication of implementation logic in particular - but I believe, when we need to go this extra mile or three, it can be a price worth paying. There are pitfalls with calculated expected results, though, that are worth avoiding if you can. Most importantly, there's a risk in using the same algorithm as the implementation: if the implementation's design is wrong, then using the same logic to calculate an expected answer is folly. Try to find a different way of computing what the answers should be.
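
To sketch what I mean, dropping into the same fixture as before: the expected Fibonacci number can be computed iteratively, on the assumption that the implementation under test takes a different (say, recursive) route.

    @Test
    @Parameters({"0", "1", "6", "20"})
    public void calculatesNthFibonacciNumber(int n) {
        // Calculate the expected result by a different algorithm than the
        // implementation, so a design error isn't simply mirrored in the test.
        long previous = 1, current = 0;
        for (int i = 0; i < n; i++) {
            long next = previous + current;
            previous = current;
            current = next;
        }
        assertEquals(current, Fibonacci.calculate(n));
    }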

The second thing is that it is practically possible to get from vanilla tests to parameterised tests to JCheck tests in small, safe steps (i.e., refactoring). Don't be tempted to jump into the deep end and start re-writing the tests in one big, nasty chunk. It's quite a dance, especially with some of the "features" of JUnitParams and the way we feed it non-primitive parameter values like objects and collections, but it can be done with a bit of fancy refactoring footwork, I'm finding.







December 17, 2014

Implicit Contracts - A Simple Technique For Code Inspections

Just a quick bedtime note about a simple code inspection technique I use when the milk and cookies have been keeping me awake.

To cut a long story short, all executable code - from a function all the way down to a single expression - has a contract.

Take this little example:
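
Something along these lines - I've reconstructed it from the description that follows, so treat the details as illustrative:

    public static String join(int[] items) {
        String joinedItems = String.valueOf(items[0]);
        for (int i = 1; i < items.length; i++) {
            joinedItems += ", " + items[i];
        }
        return joinedItems;
    }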



If I wanted to do a bit of what we call "whitebox testing", and inspect this code for potential bugs, I could break it down into all the little executable chunks that make up the whole. (Not sure if a chunk of code is executable? Hint: could you extract it into its own method?)

For example, the first line:
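
Going by the reconstruction above, that's:

    String joinedItems = String.valueOf(items[0]);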



If we think about it, this line of code has an input, and it has an output, just like a function. The input is the array called items, as it's declared earlier as a method parameter and only referenced here. The output is the string joinedItems, since it's declared and assigned to here, and referenced in code further down.

It also has pre- and post-conditions, just like a function. The array items must have at least one element for the code to work, otherwise we'll get an exception thrown when we try to reference the first element. And, provided there is at least one element in items, the post-condition is that joinedItems will be assigned the value of that first integer element as a string.

We can see already, by considering the contract implicit in this first line of code, that there may be a problem. What if items is empty? We must determine if that could ever happen during the correct functioning of the program as a whole. Could join() ever be called with an empty array as a result of valid user input or other software events?

Let's imagine it could happen, and we need to handle that scenario meaningfully. Perhaps we decide that our method should return an empty string when the array is empty. In which case, our code needs to change.
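
In terms of the reconstruction above, that change could be as simple as a guard clause (again, purely illustrative):

    public static String join(int[] items) {
        if (items.length == 0) {
            return "";  // meet the new post-condition for an empty array
        }
        String joinedItems = String.valueOf(items[0]);
        for (int i = 1; i < items.length; i++) {
            joinedItems += ", " + items[i];
        }
        return joinedItems;
    }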

That's one potential bug. Using this technique, can you spot any more?




December 10, 2014

Make It Real

One piece of advice I give teams on the subject of exploring customers' requirements is "Make it real".

Far too much time is wasted sitting in meeting rooms talking in the abstract, drawing pictures from memory and generally waving our arms around instead of getting a good hard look at the real thing.

There's no substitute for seeing it for yourself. Better still, doing it for yourself.

This works as much with code - our own problem domain - as it does with banking or TV or healthcare or whatever business problem we happen to be working on. Don't sit in a room and have a discussion about, say, the architecture. Bring in a laptop with a copy of the code, hook it to a projector, and look at the code.

Making it real cuts out a lot of the crap, short-circuits a mountain of unnecessary, lossy and often misleading exposition about the way things are, and gives us the chance to explore ideas in a practical way.

The main reason why so much software fails to deliver value isn't because we didn't spend enough time agreeing goals, or because we didn't spend enough time drawing boxes and arrows, or because we didn't have executable acceptance tests. The main reason so much software fails to deliver value is purely and simply because we didn't understand the problem, and traditional analysis techniques have failed to bridge that gap.

This is why, say, legal case management software written by lawyers who happen to be so-so programmers can turn out to be of much more value than legal case management software written by dedicated expert programmers who just happen to be working in law firms.

Of course, in an ideal world, we'd have dedicated expert programmers who are lawyers. That way, our solution might be reliable and maintainable enough to remain valuable in the long term.

Domain knowledge is key, but most of it is tacit, complex, ambiguous, and gets lost in translation, and the people who know the domain better than anyone else are the ones who live there. To gain the necessary domain expertise to really solve a problem, we need to live there for a while, too.

Software solutions should begin with us immersing ourselves in the problem. We need to soak up all that complex information that only being there and doing that can convey.

The right environment for understanding the requirements for a trading application is not a meeting room with a couple of whiteboards - frankly, it doesn't matter how much whiteboard space you've got. The right environment for understanding the requirements for a trading application is a trading environment. Ditto a patient management system is best understood by managing patients, and a newspaper delivery system is best understood by delivering newspapers.

I continue, year after year, and mostly in vain, to strongly recommend that teams build what we call a "model office"; a dedicated analysis and testing environment that - as faithfully as possible - recreates the real world problem domain in which our software is intended to be used, so we can play out usage scenarios, try out ideas, show each other what we really mean when we say "the user does X", and test our software in a context that has meaning, and allows us to really ask the question "but will it work?"

And year after year, teams ignore the advice, and carry on down the sitting-in-rooms-talking-about-user-stories-and-business-goals-and-drawing-pictures-and-writing-cucumber-tests route that has proven largely fruitless these last 7 decades when it comes to creating software that solves real problems in the real world.

With a new year almost upon us, though, it's time again to dish out that advice. So, for what it's worth, here it is:

Make. It. Real.



November 29, 2014

Automated Refactoring Tools Don't Exempt Us From Re-testing

You can find some pretty dodgy advice out there on software development.

This morning, I had an email waiting for me in my inbox from someone who'd attended Codemanship courses, with a worrying question about the discipline of refactoring.

Refactoring legacy code, certainly at the start, can be difficult because of the lack of automated regression tests. She had heard in a presentation at a conference that if you use an automated refactoring tool to perform a refactoring, you don't need to run the tests afterwards.

This is troubling advice. It places more faith in such tools than I believe, from experience, is warranted. They don't always work quite as we expect them to. For sure, using an automated refactoring tool is no guarantee that behaviour will be preserved.

It's also the case that the set of automated refactorings these tools offer - without exception at the moment - does not get us seamlessly from A to B all of the time without the need for us to do some refactoring by hand. (That's an area of research, by the way, that I think would have much value. I'd love to be able to do refactoring completely using automated tools. It would be quicker and safer.)

So, given that these tools aren't 100% reliable, and given that they don't enable 100% automated refactoring, the need to retest our code is not significantly diminished by their use.

In practice, when the code we want to refactor isn't (sufficiently) covered by automated tests, we have two sensible choices:

1. Write some automated tests for it that will give us enough confidence we didn't break the software

2. Test it manually, if the current design (e.g., because of dependency issues that can only be resolved by refactoring) makes automating tests too difficult





November 27, 2014

S.T.O.L.I.D. OO Design Principles

Just time, while we wait for folk to arrive for today's TDD training, to capture my thoughts about a discussion that's going on in the room as I write.

The question is "are SOLID principles sufficient as an approach to OO design?" The answer, of course, is no.

If we are to build from the assertion that OO design is about managing dependencies (which I happen to subscribe to), then SOLID principles omit a key part of the picture.

From conducting various experiments using simplified models of dependency networks, I learned that coupling and cohesion at the class level are major factors in what we might call the "ripple effect" - where a small change to one class "ripples" out to impact other classes. Code where coupling between classes is high tends to result in wider ripples, and therefore a higher cost of change. Code where more dependencies are encapsulated inside classes tends to result in smaller ripples and a lower cost of change. It's simple physics.

SOLID doesn't address encapsulation, which, in my opinion, is a glaring omission.

A design principle that does is Tell, Don't Ask. Neatly summarised in that one pithy expression is the notion of classes that know stuff being given responsibility for doing the work that requires that knowledge. In other words, put the behaviour where the data is (put the methods where the fields are). This tends to lead to classes that are more cohesive, and reduces the need for coupling between classes - sharing of knowledge.
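
A tiny made-up example of the difference (the BankAccount here is hypothetical):

    public class BankAccount {
        private int balance;

        // Tell, Don't Ask: the class that owns the balance does the work
        // that needs it, so callers never pull the data out to decide.
        public void withdraw(int amount) {
            if (amount <= balance) {
                balance -= amount;
            }
        }
    }

    // The "ask" style this replaces would look something like:
    //
    //     if (account.getBalance() >= amount) {
    //         account.setBalance(account.getBalance() - amount);
    //     }
    //
    // which drags knowledge of the balance out into every caller.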

Jokingly, I propose SOLID becomes S.T.O.L.I.D., the dictionary definition of which is: "calm, dependable, and showing little emotion or animation".

So, it now goes:

Single Responsibility
Tell, Don't Ask
Open-Closed
Liskov Substitution
Interface Segregation
Dependency Inversion

To me, this seems like a more complete set of principles, addressing as it does 3 of my 4 principles of dependency management:

1. Minimise dependencies
2. Localise Dependencies
3. Stabilise Dependencies
4. Abstract Dependencies


Minimising dependencies is best achieved by writing less code, so I add two final design principles that trump all others:

Simple Design (KISS)
Don't Repeat Yourself


But I couldn't think of a workable acronym that included those...






November 25, 2014

Continuous Inspection II - Planning & Executing CInsp

In this second blog post about Continuous Inspection (CInsp, for short), I want to look at how we might manage the CInsp process to get the most value from it.

While some development teams are now using CInsp tools to analyse their code to get early warnings about code quality problems when they're easier and cheaper to fix, it's fair to say that this area of the development discipline has to date evaded the principles that we apply to other kinds of requirements.

Typically, as a kind of work, CInsp is ad hoc, unplanned, untracked and most teams who do it have only a very vague idea of what kind of cost it has and what kind of benefits they're reaping from it.

CInsp is rarely prioritised, leaving the field wide open to waste a lot of time and effort on activities that add little or no value.

Non-functional requirements obey the same laws as functional ones, which is why we need to attack them using the same principles and techniques.

In this post, I want to examine how we plan and execute CInsp on projects starting from scratch. (In a future post, I'll talk about applying CInsp to existing code bases with a build-up of code quality issues.)

Continuous Inspection Requirements.

There are an infinite number of properties we could look for in our code, but only some of them are worth finding; most aren't. Rather than waste our time arbitrarily searching our code for "stuff", it's important we have a clear idea of what it is we're looking for and why.

Extreme Programming, for example, has a perfectly usable mechanism for describing the things we want to inspect for, and the benefits of catching those kinds of code quality problems early.

A Code Quality Story is a non-functional user story that briefly summarises a code quality "bug" we wish to avoid and the pay-off we might expect if we can avoid introducing it into our code.



Note first of all that I've chosen here to use a blue index card. This might be in a system where we write functional user stories on green cards, report bugs on red cards, and record other outcomes - "miscellaneous tasks", like setting up the build and implementing code quality gates - on blue cards.

Why do this? Well, I've found it very useful to know roughly how much of a team's time is split between delivering working features, fixing bugs (ideally, zero time), and "shaving yaks" when the yaks being shaved are sufficiently large and not part of the work of delivering specific features.

The importance of the effort split becomes apparent as time goes by and the software evolves. A healthy project is one where the proportion of effort devoted to delivering working features remains relatively constant. What typically happens on teams who set out at an unsustainable pace is that they begin development with their time devoted mostly to the green cards, and after a few months most of their time is spent tackling red cards and making a lot less progress on new features. This is a good indicator of the rising cost of change we're seeking to avoid, so we can sustain the pace of development and deliver value for longer. This information will help us better judge how well-spent the time devoted to things like CInsp is.

So we have a placeholder for our code quality requirement in the form of a blue index card. What next?

Planning Continuous Inspection

This is where I, and a lot of teams, have gone wrong in the past. What we should never, ever do is allow the customer to choose when and whether we tackle non-functional requirements. And in "customer" I include proxy customers like business analysts and project managers. The overwhelmingly common experience of development teams is that purely technical issues, like code quality, get sidelined by non-technical stakeholders.

We must not give them the chance to drop our Feature Envy story in favour of a story about, say, sorting columns in an HTML table if we strongly believe, as professionals, that avoiding Feature Envy is important. If, as the evidence suggests, care taken over code quality helps to maintain productivity and deliver greater value over time, then we risk presenting customers with a confusing false dichotomy between work that enhances quality and work that directly delivers working features.

The analogy I use is to pretend we're running a restaurant using the planning practices of Extreme Programming.

Every job that needs doing gets written on a card, and placed into a backlog of outstanding work. There will be user stories like "Take table 3's order" and "Serve french fries and beer to table 7" and "Get the bill for table 12". These are stories about work that will make the restaurant money.

There will also be stories like "Wash the dishes in the sink" and "Clean out pizza oven" and "Repaint sign over door". These are about tasks that cost money, but don't directly bring in revenue by themselves.

If we allowed our restaurant's shareholders - who have never worked in a restaurant themselves, but who have a stake in it as a business - to prioritise which stories get done at the expense of others in a world where backlogs always outweigh the available time and resources, then there's a very real danger that the kitchen will rarely get cleaned, the sign above the door will fade until nobody can see it, and we'll run out of clean plates halfway through service.

The temptation for teams who are driven solely by the priorities of non-technical stakeholders is to only tackle non-functional issues like code quality when a crisis emerges that blocks progress on functional requirements. i.e., we don't wash up until we run out of plates, we don't clean the kitchen until the inspector shuts us down, and we don't repaint the sign until the customers have stopped coming in.

One thing we've learned about writing software is that it's cheaper and easier to tackle problems proactively and catch them earlier. Sadly, too many teams are left lurching from one urgent crisis to the next, never getting the chance to get ahead of the issues.

For this reason, I strongly advise against involving non-technical stakeholders in planning CInsp. (As well as other technical work.)

Now put yourself in the diner's shoes: you pick up the menu, and every dish lists all of the tasks restaurant staff have to do in order to deliver it. Let's say we charge £11 for fish and chips, and that price has to cover cleaning the grill, mopping the floors, cashing up that evening, doing the accounts, getting up early to take delivery of fresh fish, and so on.

Two questions:

1. If we hadn't told them, would the diner even care?

2. If we make it the diner's business, are we inviting them to negotiate the price of the fish and chips down by itemising what goes in to running the restaurant? ("I'll have the fish & chips, but I'm not paying for your trainee chef's college course" etc)

The world is full of work that needs doing, but nobody thinks they should pay for. In order for the world to keep turning, for fish & chips to appear on our dining tables, this work has to get done one way or another, and it has to be paid for.

The way a restaurant squares this circle is to build it into the cost of the meal and to not present diners with a choice. Their choice is simple: don't like the price, don't order the dish.

Likewise in software development, there's a universe of tasks that need doing that do not directly end with a working feature being delivered to the customer's table. We must build this work into the price ("feature X will take 3 days to deliver") and avoid presenting the customer with bewildering choices that, in reality, aren't choices at all.

So planning Continuous Inspection is something that happens within the team among technical stakeholders who understand the issues and will be doing the work. This is good advice for any non-functional requirements, be they about build automation, internal training or hiring developers. This is just "stuff that has to happen" so we can deliver working software reliably, economically and sustainably.

The key thing, to avoid teams disappearing up their own backsides with the technical stuff, is to make sure we're all absolutely clear about why we're doing it. Why are we automating the build? Why are we writing a tool that generates code? Why are we sending half the team to the Software Craftsmanship conference? (Some companies send entire teams.) And the answer should always be something of value to the customer, even if that value might not be realised for months or years.

In practice, we have planning meetings - especially in the early stages of a project - that are for technical stakeholders only. Lock the doors. Close the blinds. Don't tell the boss. (I have literally experienced running around offices looking for rooms where the developers can have these discussions in private, chased by the project manager who insists on sitting in. "Don't mind me. I won't interfere." Two seconds later...)

Such meetings give teams a chance to explicitly discuss code quality and to thrash out what they mean by "good code" and "bad code" and establish a shared set of priorities over code quality. It's far better to have these meetings - and all the inevitable disagreements - at the start, when we can take steps to prevent issues, than to have them later when we can only ask "what went wrong?"

Executing Continuous Inspection

On new software, the effort in Continuous Inspection tends to be front-loaded, and with good reason.

As I've mentioned a few times already, it tends to be far cheaper to tackle code quality "bugs" early - the earlier the better. This means that adding new code quality requirements later in development tends to catch problems when they're much more expensive to fix, so it makes sense to set the quality bar as high as we can at the start.

There's good news and there's bad news. First, the bad news: on a new project, from a standing start, it's going to take considerable effort to get automated code inspections in place. It will vary greatly, depending on the technology stack, availability of tools, experience levels in the team, and so on. But it's not going to take an afternoon. So you may be faced with having to hide a big chunk of effort from non-technical stakeholders if you attempt to start development (from their perspective, when they're actively involved) at the same time as putting CInsp in place. (Same goes for builds, CI, and a raft of other stuff that we need to get up and running early on.)

Another very strong recommendation from me: have at least one iteration before you involve the customer. Get the development engine running smoothly before you wind down the window and shout "Where to, guv'nor?" They may be less than impressed to discover that you just need to build the engine before you can set off. Delighting customers is as much about expectations as it is about actual delivery.

Going back to the restaurant analogy, consider why restaurants distinguish between "service" and "preparation". Service may start at 6pm, but the chefs have probably been there since 9am getting things ready for that. If they didn't, then those first orders might take hours to reach the table. Too many development teams attempt the equivalent of starting service as the ingredients are being delivered to the kitchen. We need to do prep, too, before we can start taking orders.

Now, for the good news: the kinds of code quality requirements we might have on one, say, JEE project are likely to be similar on another JEE project. CInsp practitioners tend to find that they can get a lot of reuse out of code quality gates they've already developed for previous projects. So, over months and years, the overall cost of getting CInsp up and running tends to decrease quite significantly. If your technology stack remains fairly stable over the years, you may well find that getting things up and running can eventually become an almost push-button process. It takes a lot of investment to get there, though.

Code Quality stories work the same way as user stories in their execution. We plan what stories we're going to tackle in the current timebox in the same way. We tackle them in pairs, if possible. We treat them purely as placeholders to have a conversation with the person asking for each story. And, most importantly, we agree...

Continuous Inspection Acceptance Tests

Going back to our Feature Envy code quality story, what does the developer who wrote that story mean by "Feature Envy"?

Here's the definition from Martin Fowler's Refactoring book:

"A classic [code] smell is a method that seems more interested in a class other than the one it is in. The most common focus of the envy is the data."

It's all a bit handwavy, as is usually the case with software design wisdom. A human being using their intelligence, experience and judgement might be able to read this, look at some code and point to things that seem to them to fit the description.

Programming a computer to do it, on the other hand...

This is where we can inhabit our customer's world for a little while. When we ask our customer to precisely describe a business rule, we're putting them on the spot every bit as much as a computable definition of Feature Envy might put you and me on the spot. In cold, hard, computable terms: we don't quite know what we mean.

When the business problem we're solving is about, say, mortgages or video rentals or friend requests, we ask the customer for examples that illustrate the rule. Using examples, we can establish a shared vocabulary - a language for expressing the rule - explore the boundaries, and pin down a precise computable understanding of it (if there is one.)

We shouldn't be at all surprised that this technique also works very well for rules about our code. Ask the owner of a code quality story to track down some classic examples of code that breaks the rule, as well as code that doesn't (even if it looks at first glance like it might).

This is where the real skill in CInsp comes into play. To win at Continuous Inspection, development teams need to be skilled at reasoning about code. This is not a bad skill for a developer to have generally. It helps us communicate better, it helps us visualise better, it makes us better at design, at refactoring, at writing tools that work with code. Code is our domain model - the business objects of programming.

Using our code reasoning skills, applied to examples that will form the basis of acceptance tests, we can drive out the design of the simplest tool possible that will sound the alarm when the "bad" examples are considered, while silently allowing the "good" examples to pass through the quality gate.
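
As a sketch of the kind of executable acceptance test I mean - the FeatureEnvyDetector is entirely hypothetical, and the "good" and "bad" snippets stand in for the examples the story owner tracked down:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class FeatureEnvyQualityGateTest {

        // Hypothetical quality gate, driven out from the agreed examples.
        private final FeatureEnvyDetector detector = new FeatureEnvyDetector();

        @Test
        public void flagsMethodThatEnviesAnotherClass() {
            String bad =
                "class Statement {                                           " +
                "    String print(Account account) {                         " +
                "        return account.getOwner() + \": \"                  " +
                "            + account.getBalance() + account.getCurrency(); " +
                "    }                                                       " +
                "}";
            assertTrue(detector.hasFeatureEnvy(bad));
        }

        @Test
        public void allowsMethodThatMindsItsOwnBusiness() {
            String good =
                "class Account {                                             " +
                "    private int balance;                                    " +
                "    void deposit(int amount) { balance += amount; }         " +
                "}";
            assertFalse(detector.hasFeatureEnvy(good));
        }
    }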

As with functional user stories, we're not done until we have a working automated quality gate that satisfies our acceptance tests and can be applied to new code straight away.

In the next blog post, we'll be rolling up our sleeves with an example Continuous Inspection quality gate, implementing it using a variety of tools to demonstrate that there's often more than one way to skin the code quality cat.








November 22, 2014

Continuous Inspection I - Why Do We Need It?

This is the first of a series of posts about Continuous Inspection. My goal here is to give you something to think about, rather than to present a complete hands-on guide. The range (and maturity) of tools and techniques we can apply to Continuous Inspection (I'll call it CInsp from now on to save a few keystrokes) is such that I could write 1,000 blog posts and still not cover it all. So here I'll just focus on general CInsp principles and illustrate with cherry-picked examples.

In this first post, I want to summarise what I mean by "Continuous Inspection" and argue that there's a real need for it on most software development teams.

Continuous Inspection is the practice of - and stop me if I'm getting too technical here - continuously inspecting your code to detect non-functional issues in the software.

CInsp is just another kind of Continuous Testing, which is a cornerstone of Continuous Delivery. To have our software always in a shippable state, we must take steps to assure ourselves that the software is always working.

If we follow the thinking behind continuous testing (and re-testing) of our software to check that it still works, the benefit is that we never stray more than a few minutes from having something we could ship if the business wanted us to.

To date, the only practical way we've found to achieve Continuous Testing is to automate those tests as much as possible, so they can be run quickly and economically. If it takes you 2 weeks to re-test your software, then after each change you make to the code, you are at least 2 weeks away from knowing if the software still works. Manual testing makes Continuous Delivery impractical.

In recent years, automated testing - and especially automated unit testing - has grown in popularity, and the effects can be seen in teams delivering more reliably and more sustainably as a result.

But only to a point.

What I've observed across hundreds of teams over the last decade or more is that, even with high levels of automated testing, the pace of delivery still slows to unacceptable levels.

In order to sustain the pace of change, the code itself needs to remain open to change. Being able to quickly regression test our software is a boon in this respect, no doubt. But it doesn't address the whole picture.

There are other things that can hamper change in our code. If the code's complicated, for example, it will be more likely to break when we change it. If there's duplication in our code - if we've been a bit trigger-happy with Copy+Paste - then that can multiply the cost of making a change. If we've not paid attention to the dependencies in our code, small changes can cause big ripples through the code and amplify the cost.

As we make progress in delivering functionality we tend also to make a mess inside the software, and that mess can get in our way and impede future progress. To maintain the pace of innovation over months and years and get the most out of our investment over the lifetime of a software product, we need to keep our code clean.

Experienced developers view design issues that impede progress in their code as bugs, and they can be every bit as serious as bugs in the functionality of the software.

And, just like functional bugs, these code quality bugs (often referred to as "code smells", because they're indicative of your code "rotting" as it grows) have a tendency to get harder and more expensive to fix the longer we leave them.

Duplication has a tendency to grow, as does complexity. We build more dependencies on top of our dependencies. Switch statements get longer. Long parameter lists get longer. Big classes get bigger. And so on.

Here's what I've discovered from examining hundreds of code bases over the years: code smells that get committed into the code are very likely to remain for the lifetime of the software.

There seems to be a line that, once we've crossed it, our mistakes are likely to live forever (and impede us forever). From observation, I've found that this line is the moment we move on.

In the Test-driven Development cycle, for example, I've seen that when developers move on to the next failing test, any code smells they leave behind will likely not get addressed later. In programming, "later" is a distant and alien land where all our little TO-DO's never get done. "Later" might as well be "Narnia".

Even more so, when developers commit their code to a shared repository, at that point code smells "petrify", and remain forever trapped in the amber of all the other code that surrounds them. 90% of code smells introduced in committed code never get fixed.

This is partly because most teams have no processes for identifying and addressing code quality problems. But even the ones who do tend to find that their approach, while better than nothing, is not up to the task of keeping the code as clean as it needs to be to maintain the pace of change the customer needs.

Why? Well, let's look at the kinds of techniques teams these days use:

1. Code Reviews

There's a joke that goes something like this: "Ask a developer what's wrong with a line of code, and she'll give you a list. Ask her what's wrong with 500 lines of code, and she'll tell you it's fine."

Code reviews have a tendency to store up large amounts of code - potentially containing large numbers of issues - for consideration. The problem here is seeing the wood for the trees. A lot of issues get overlooked in the confusion.

But even if code reviews identified all of the code quality issues, the economics of fixing those issues is working against us. Fixing bugs - functional or non-functional - tends to get exponentially more expensive the longer we leave them in the code, and for precisely the same reasons (longer feedback cycles).

In practice, while rigorous code reviews would be a step forward for many teams who don't do them at all, they are still very much shutting the stable door after the horse has bolted.

2. Pair Programming

In theory, pair programming is a continuous code review where the "navigator" is being especially vigilant to code quality issues and points them out as soon as they spot them. In some cases, this is pretty much how it works. But, sad to say, in the majority of pairs, code quality issues are not high on anyone's agenda.

This is for two good reasons: firstly, most developers are not all that aware of code smells. They don't figure high in our list of priorities. Code quality isn't sexy, and doesn't get you hired at IronicBeards.com.

Secondly, with the best will in the world, people have limitations. When Codemanship does pairing to assess a developer's skill level in certain practices, the level of focus required on what the other person's doing is really quite intense. You don't take your eye off the screen in case you miss something. But there are dozens of code smells we need to be vigilant for, and even with all my experience and know-how, I can't catch them all. My mind will have to skip between lots of competing concerns, and when my remaining brain cells are tied up trying to remember how to do something with Swing, I'm likely to take my eye off the code quality ball. It's also very difficult to maintain that level of focus hour after hour, day-in and day-out. It hurts my brain.

Pair programming, as an approach to guarding against code smells, is good when it's done well. But it's not that good that we can be assured code written in this way will be maintainable enough.

3. Design Authorities

By far the least effective route to ensuring code quality is to make it someone else's job.

Hiring architects or "technical design authorities" suffers from all the shortcomings of code reviews and pair programming, and then adds a big bunch of new shortcomings.

Putting aside the fact that almost every architect or TDA I've ever met has been mostly focused on "the big picture", and that I've seen 1,000-line switch statements waved through the quality gate by people obsessing over whether classes implement certain interfaces they've prescribed, turning design authorities into design quality testers never seems to end well. Who wants to spend their day scouring other people's code for examples of Feature Envy?

I'll say no more, except to summarise by observing that the code I've seen produced by teams with dedicated design authorities counts amongst the worst for code quality.

4. Coding Standards

In theory, a team's coding standards are a codification of what we all agree we mean by "good code".

Typically, these are written down in documents that nobody ever reads, and suffer from the same practical drawbacks as architecture documents and company mission statements. They're aspirational affirmations at best. But, in practice, everybody just ignores them.

Even on those more disciplined teams that try to adhere to coding standards, they still have major drawbacks, all relating back to things we've already discussed.

Firstly, coding standards are a list of "stuff" we need to be thinking about along with all the other "stuff" we have to think about. So they tend to take a lower priority and often get overlooked.

Secondly, as someone who's studied a lot of coding standards documents (and what joy they bring!), I find they have a tendency to be both arbitrary and by no means universally agreed upon. Often they've been written by some kind of design or development authority, usually with little or no input from the team they're being imposed on. It's rare for issues that affect maintainability to be addressed in a coding standards document. Programmers are a funny bunch: we care deeply about some weird stuff while Elephants In The Room creep in without being questioned and sit on us. Naming conventions, for example, dominate these documents yet have little relation to how easy the code will be to read and understand. And it's rare to see duplication, dependencies, complexity and so on even being hinted at. As long as all your instances have names beginning with obj and all your private member variables begin with "m_", the gods of code goodness will be appeased.

And then there's the question of how and when we enforce coding standards. And we're back to the hard physics of software development - time, money and cost. Knowing what we should be looking for is only the tip of the code quality iceberg.

What's needed is the ability to do code reviews so frequently, and do them in a way that's so effective, that we never stray more than a few minutes from clean code. For this, we need code reviewers who miss very little, who are constantly looking at the code, and who never get tired or distracted.

For that, thankfully, we have computers.

Program code is like any other domain model; we can write programs to reason about the design of other programs, expressed in terms of the structure of code itself.

Code quality rules are just like any other computable business rules. If the rule is that a block of code in one class should not make copious references to features in another class ("Feature Envy"), it's possible to write an automated test that reads code and looks at those references to determine if that block of code is in the right place.

Let's illustrate with a technology example. Imagine we're working in Java in, say, Eclipse. We could write code for a plug-in that, whenever we make a change to the code document we're working on, reads the code's Abstract Syntax Tree (basically, a code DOM) and does a calculation for the ratio of internal and external dependencies in that Java method we just changed. If the ratio is too low, it could flag it up as a warning while we're writing the code.
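
Here's a rough sketch of the kind of check such a plug-in might run, using the Eclipse JDT's AST API outside the IDE for brevity. The "envy ratio" heuristic here - counting calls with no target expression as internal and everything else as external - is a crude stand-in for whatever measure the team actually agrees on:

    import org.eclipse.jdt.core.dom.AST;
    import org.eclipse.jdt.core.dom.ASTParser;
    import org.eclipse.jdt.core.dom.ASTVisitor;
    import org.eclipse.jdt.core.dom.CompilationUnit;
    import org.eclipse.jdt.core.dom.MethodInvocation;

    public class EnvyRatioCheck {

        // Crude heuristic: method calls with no target expression (implicit
        // "this") count as internal; calls on other objects count as external.
        public static double internalToExternalRatio(String sourceCode) {
            ASTParser parser = ASTParser.newParser(AST.JLS8);
            parser.setKind(ASTParser.K_COMPILATION_UNIT);
            parser.setSource(sourceCode.toCharArray());
            CompilationUnit unit = (CompilationUnit) parser.createAST(null);

            final int[] internal = {0};
            final int[] external = {0};
            unit.accept(new ASTVisitor() {
                @Override
                public boolean visit(MethodInvocation node) {
                    if (node.getExpression() == null) {
                        internal[0]++;
                    } else {
                        external[0]++;
                    }
                    return true;
                }
            });
            return external[0] == 0
                    ? Double.MAX_VALUE
                    : (double) internal[0] / external[0];
        }
    }

A plug-in would run something like this over the method we just edited and raise a warning whenever the ratio drops below whatever threshold the team has agreed.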

The computational power of computers is such today that this sort of continuous background code reviewing is practically possible, and there have already been some early attempts to create just such plug-ins.

In the article I wrote a few years ago for Visual Studio Journal, Ever-decreasing Cycles, I speculate about the impact such short code quality feedback loops might have on the economics of development.

It's my belief that, just as continuous automated unit testing has had a profound effect on the "bottom line" of software development for many teams and businesses, so too would Continuous Inspection.

In the next blog post, I'll talk about the CInsp process and look at practical ways of managing CInsp requirements, test automation and how we action the code quality problems it can throw up.




November 19, 2014

In 2015, I Are Be Mostly Talking About... Continuous Inspection

Just a quick FYI, for event organisers: after focusing this year on software apprenticeships, in 2015 I'll be focusing on Continuous Inspection.

A critically overlooked aspect of Continuous Delivery is the need to maintain the internal quality of our software to enable us to sustain the pace of innovation. Experience teaches us that Continuous Delivery is not sustainable without Clean Code.

Traditional and Agile approaches to maintaining code quality, like code reviews and Pair Programming, have shown themselves to fall short of the level of rigour teams need to apply. While we place great emphasis on automated testing to ensure functional quality, we fall back on ad hoc and highly subjective approaches for non-functional quality, with predictable results.

Just as with functional bugs, code quality "bugs" are best caught early, and for this we find we need some kind of Continuous Testing approach to raise the alarm as soon after code smells are introduced as possible.

Continuous Inspection is the missing discipline in Continuous Delivery. It is essentially continuous non-functional testing of our code to ensure that we will be able to change it later.

In my conference tutorials, participants will learn how to implement Continuous Inspection using readily available off-the-shelf tools like Checkstyle, Simian, Emma, JDepend/NDepend and Sonar, as well as rigging up our own bespoke code quality tests using more advanced techniques with reflection and parser generators like ANTLR.

They will also learn about key Continuous Inspection practices that can be used to more effectively manage the process and deliver more valuable results, like Non-functional Stories, Clean Code Check-ins, Build Inspections and Rising Tides (a practice that can be applied to incrementally improving the maintainability of legacy code.)

If you think your audience might find this interesting, drop me a line. I think this is an important and undervalued practice, and want to reach as many developers as possible in 2015.




November 6, 2014

Personal Craftsmanship Coaching Sessions To Raise Money To Feed People In Need This Winter

As part of an Indiegogo campaign to raise £1,000 for hot meals for people in need this winter, I'm offering 10x 2-hour personal coaching sessions for developers in TDD, refactoring, SOLID and other aspects of software craftsmanship.

These would be online pairing sessions, and we can work in any language you like on any problem you like, just as long as the emphasis is on craftsmanship.

The money raised from one coaching session would pay for 100 hot meals for people struggling with poverty this winter.

So if you fancy a bit of one-on-one coaching to help you with any questions or obstacles you're wrestling with in your quest to become a better code crafter, and would like to help people in real need at the same time, please take a look.