July 26, 2016

Seeking Guinea Pigs for TDD Book

I'm writing a book to accompany the Codemanship TDD training workshop, which is undergoing a major upgrade over the summer.

I've written about 70% of it as a first draft, and it's now at a stage where I think:

a. It could prove useful to folk, and
b. It's well-formed enough to get useful feedback on

I'm seeking a handful of enthusiastic learners to give the book's current draft a spin and see what they think.

I'll be hoping for a range of TDD experience, from absolute beginners to old hands, to get a broad spectrum of perspectives.

It's in A5 format and currently runs to 151 pages, a lot of which is Java code examples, screen grabs and so on.

If you'd like to give it a go, drop me a line and tell me a little bit about yourself and your TDD experience.

You can tackle the exercises in any OO language you like, but will need to be able to follow Java to understand the book. (It's a lot like C#, in case you were wondering.)

The final product will be a physical book (made out of paper and everything!), accompanied by a PDF version. Everyone who comes on a Codemanship TDD workshop will have the option to add a copy to their order. The plan is that volunteer reviewers will get a copy for free when it's finished, as well as other merchandise I'm working on (e.g., a t-shirt).



July 23, 2016

On The Compromises of Acceptance Test-Driven Development

I'm currently writing a book on Test-Driven Development to accompany the redesigned training workshop. Having thought very hard about TDD for many years, the first 140 pages were very easy to get out.

But things have - predictably - slowed down now that I'm on the chapter on end-to-end TDD and driving internal designs from customer tests.

The issue is that the ways we currently tackle this are all compromises: there are many gods that need appeasing, and just as many ways that folk go about it.

Some developers will write, say, a failing FitNesse test and come up with an implementation to pass that test. Some will write a failing automated customer test and then drive an internal design using unit tests and "classic TDD". Some will write a failing automated customer test that makes all the assertions about desired outcomes (e.g., "the donated DVD should be in the library"), and rely entirely on interaction tests to drive out the internal design using mock objects. Some will use test doubles only for external dependencies, ensuring their automated customer test runs faster. Some will include external dependencies and use their automated customer test to do integration testing as well. Some will drive the UI with their automated customer tests, effectively making them complete end-to-end system tests. Some will drive the application through controllers or services, excluding the UI as well as external back-end dependencies, so they can concentrate on the internal design.

And, of course, some won't automate their customer tests at all, relying entirely on their own developer tests for design and regression testing, and favouring manual by-eye confirmation of delivery by the customer herself.

And many will use a combination of some or all of these approaches, as required.

In my own approach, I observe that:

a. You cannot automate customer acceptance. The most important part of ATDD is agreeing the test examples and getting the customer's test data. Making those tests executable through automation helps to eliminate ambiguity, but really we're only doing it because we know we'll be running those tests many times, and automating will save us time and money. We still have to let the dog see the rabbit to get confirmation of acceptance. The customer has to step through the tests with working software and see it for themselves at least once.

b. Non-executable customer tests can be ambiguous, and manually reconciling customer-provided data with unit test parameters can be hit-and-miss.

c. The customer rarely, if ever, gets involved with writing "customer tests" using the available tools like FitNesse and Cucumber. We're probably kidding ourselves that we even need a special set of tools, distinct from the xUnit frameworks we'd use for other kinds of tests, because - chances are - we're going to be writing those tests ourselves anyway.

d. Customer tests executed using these tools tend to run slowly, even when external dependencies are excluded.

e. Relying entirely on top-level tests to check that the work got done right can - and usually does - lead to problems with maintainability later. We might identify a class that could be split off into a component to be reused in other applications, but where are its functional tests? Imagine we could only test a car radio when it's installed in a Ford Mondeo. This is especially pertinent for teams thinking about breaking down monolithic architectures into component-based or service-based designs.

f. When you exclude the UI and external dependencies, you are still a long way from "done" after your customer test has passed. There's many a slip twixt cup and lip.

g. Once we've established a design that passes the customer's test, the main purpose of having automated tests is to catch regressions as the code evolves. For this, we want to be able to test as much of our code as quickly and cheaply as possible. Over-reliance on slower-running customer tests can be at odds with this goal.

With all this in mind, and revisiting the original goal of driving designs directly from the customer's examples, it's difficult to craft a workable single narrative about how we might approach this.

I tend to automate a "happy path" test at the entry point to the domain model, drive an internal design mostly through "classic" TDD, and use test doubles (stubs, mocks and dummies) to exclude external dependencies (as well as to fake complex components I don't want to get into yet - "fake it 'til you make it".) A lot of edge cases get dealt with only in unit tests and with by-eye customer testing. I will work to pass one customer test assertion at a time, running the FitNesse test to get feedback before moving on to the next assertion.

This does lead to three issues:

1. It's not a system test, so there's still more TDD to do after passing the customer's test

2. It produces some duplication of test code, as the customer test will usually ask some of the same questions as the unit tests I write for specific behaviours

3. Even excluding the UI and external dependencies, customer tests still run much slower than unit tests

I solve issue #3 by adapting my FitNesse fixtures to also be JUnit tests that can be run by me as part of continuous regression testing (see an example at https://gist.github.com/jasongorman/74f6a0a049e03b7030ab46e8b01128e7 ). That test is absolutely necessary, because it's typically the only place that checks that we get all of the desired outcomes from a user action. It's the customer test that drives me to wire the objects doing the work together. I prefer to drive the collaborations this way rather than use mock objects, because I have found over the years that an over-reliance on mocks can lead to maintainability issues. I want as few tests as possible that rely on the internal design.
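To give a rough flavour of what I mean - this is a simplified sketch, not the actual gist, and the class and method names are invented - a SLIM-style FitNesse fixture is just a plain Java class, so nothing stops us adding a JUnit test method to it that exercises the same code with the customer's example data:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical fixture for a "donate a DVD" customer test.
    // FitNesse drives it through the setter and query methods;
    // JUnit runs the @Test method as part of continuous regression testing.
    public class DonateDvdFixture {

        private final Library library = new Library(); // assumed domain class
        private String title;

        // Called from the FitNesse test table
        public void setTitle(String title) {
            this.title = title;
        }

        public void donate() {
            library.donate(title);
        }

        public boolean titleIsInTheLibrary() {
            return library.contains(title);
        }

        // The same fixture doubles as a fast JUnit regression test
        @Test
        public void donatedDvdShouldBeInTheLibrary() {
            setTitle("The Abyss");
            donate();
            assertTrue(titleIsInTheLibrary());
        }
    }

The point isn't the specific fixture style; it's that the same class can be wired into the FitNesse wiki and into the JUnit suite, so the customer's examples get run on every build without maintaining two sets of test code.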

Being honest, I don't know how to easily solve issue #2. It would require the ability to compose tests so that we can apply the same assertions to different set-ups and actions. I did experiment with an Assertion interface with a check() method, but ending up with every assertion having its own implementation just got kerrrazy. I think what's actually needed is a DSL of some kind that hides all of that complexity.
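For what it's worth, the experiment looked roughly like this - a sketch from memory, with invented names - and you can see where the explosion of little classes comes from:

    // One reusable assertion about a desired outcome...
    public interface Assertion {
        void check();
    }

    // ...but every expectation ends up needing its own implementation
    public class DonatedTitleIsInLibrary implements Assertion {
        private final Library library;   // assumed domain class
        private final String title;

        public DonatedTitleIsInLibrary(Library library, String title) {
            this.library = library;
            this.title = title;
        }

        @Override
        public void check() {
            if (!library.contains(title)) {
                throw new AssertionError("Expected '" + title + "' to be in the library");
            }
        }
    }

The idea was that the same list of Assertion objects could be run after a customer-test set-up or after a unit-test set-up, but every new expectation meant another little class to write and wire up - hence the thought that a DSL is what's really needed.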

On issue #1, I've long understood that passing an automated customer test does not mean that we're finished. But there is a strong need to separate the concerns of our application's core logic from its user interface and from external dependencies. Most UIs can actually be unit tested, and if you implement an abstraction for the UI logic, the amount of actual code that directly depends on the UI framework tends to be minimal. All you're really doing is checking that logical views are rendered correctly, and that user actions map correctly onto their logical event handlers. The small sliver of GUI code that remains can be driven by integration tests, usually.
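To illustrate the kind of separation I mean - a sketch only, with invented names, not a prescription - the UI logic can be written against a logical view interface and unit tested without a GUI framework anywhere in sight:

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.Arrays;
    import java.util.List;

    // Logical view - the only thing the real GUI code has to implement
    interface SearchResultsView {
        void render(List<String> titles);
        void showMessage(String message);
    }

    // UI logic, with no dependency on any GUI framework
    class SearchResultsPresenter {
        private final SearchResultsView view;

        SearchResultsPresenter(SearchResultsView view) {
            this.view = view;
        }

        // The handler a button click would be mapped onto
        void searchCompleted(List<String> matchingTitles) {
            if (matchingTitles.isEmpty()) {
                view.showMessage("No titles found");
            } else {
                view.render(matchingTitles);
            }
        }
    }

    // Unit test: checks the logical view is rendered correctly
    public class SearchResultsPresenterTest {
        @Test
        public void rendersMatchingTitlesWhenThereAreAny() {
            FakeView view = new FakeView();
            new SearchResultsPresenter(view).searchCompleted(Arrays.asList("The Abyss"));
            assertEquals(Arrays.asList("The Abyss"), view.rendered);
        }

        private static class FakeView implements SearchResultsView {
            List<String> rendered;
            public void render(List<String> titles) { this.rendered = titles; }
            public void showMessage(String message) { }
        }
    }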

I don't write system tests to test logic of any kind. The few that I will write - complicated and cumbersome as they usually are - really just check that, once the car is assembled, when you turn the key in the ignition, it starts. A dozen or more "smoke tests" tend to suffice to check that the thing works when everything's plugged in.

So I continue to iterate this chapter, refining the narrative down to "this is how I would do it", but I suspect I will still be dissatisfied with the result until there's a workable solution to the duplication issue.


July 17, 2016

Oodles of Free Legacy UML Tutorials

See how we used to do things back in Olden Times by visiting the legacy UML tutorials section of the Codemanship website (the content from the highly-popular-with-your-granddad-back-in-the-day parlezuml.com).



I maintain that:

a. Visual modelling & UML is still useful and probably due for a comeback, and

b. Visual modelling and Agile Software Development can work well together when applied sparingly and sensibly

Check it out.






July 16, 2016

Taking Agile To The Next Level

As with all of my Codemanship training workshops, there's a little twist in the tail of the Agile Software Development course.

Teams learn all about the Agile principles, and the Agile manifesto, Extreme Programming, and Scrum, as you'd expect from an Agile Software Development course.

But they also learn why all of that ain't worth a hill of beans in reality. The problem with Agile, in its most popular incarnations, is that teams iterate towards the wrong thing.

Software doesn't exist in a vacuum, but XP, Scrum, Lean and so forth barely pay lip-service to that fact. What's missing from the manifesto, and from the implementations of the manifesto, is end goals.

In his 1989 book, Principles of Software Engineering Management, Tom Gilb introduced us to the notion of an evolutionary approach to development that iterates towards testable goals.

It had a big influence on me, and I devoted a lot of time in the late 90s and early 00s to exploring and refining these ideas.

On the course, I ask teams to define their goals last, after they've designed and started building a solution. Invariably, more than 50% of them discover they're building the wrong thing.

Going beyond the essential idea that software should have testable goals - based on my own experiences trying to do that - I soon learned that not all goals are created equal. It became very clear that, when it comes to designing goals and ways of testing them (measures), we need to be careful what we wish for.

Today, the state of the art in this area - still relatively unexplored in our industry - is a rather naïve and one-dimensional view of defining goals and associated tests.

Typically, goals are just financial, and a wider set of perspectives isn't taken into account (e.g., we can reduce the cost of manufacture, but will that impact product quality or customer satisfaction?)

Typically, goals are not caveated by obligations on the stakeholder that benefits (e.g., the solution should reduce the cost of sales, but only if every sales person gets adequate training in the software).

Typically, the tests ask the wrong questions (e.g., the airline that measured speed of baggage handling without noticing the increase in lost or damaged property and insurance claims, and then mandated that every baggage handling team at every airport copy how the original team hit their targets.)

Now, don't get me wrong: a development team with testable goals is a big improvement on the vast majority of teams who still work without any goals other than "build this".

But that's just a foundation on which we have to build. Setting the wrong goals, implemented unrealistically and tested misleadingly, can do just as much damage as having no goals at all. Ask any developer who's worked under a regime of management metrics.

Going beyond Gilb's books, I explored the current thinking from business management on goals and measures.

Balancing Goals

First, we need to identify goals from multiple stakeholder perspectives. It's not just what the bean counters care about. How often have we seen companies ruined by an exclusive focus on financial numbers, at the expense of retaining the best employees, keeping customers happy, being kind to the environment, and so on? We're really bad at considering wider perspectives. The law of unintended consequences can be greatly magnified by the unparalleled scalability of software, and there may always be side-effects. But we could at least try to envisage some of the most obvious ones.

Conceptual tools like the Balanced Scorecard and the Performance Prism can help us to do this.

Back in the early 00s, I worked with people like Mike Bourne, Professor of Business Performance Innovation, to explore how these ideas could be applied to software development. The results were highly compatible, but still - more than a decade later - before their time, evidently.

Pre-Conditions

If business goals are post-conditions, then we - above all others - should recognise that many of them will have pre-conditions that constrain the situations in which our strategy or solution will work. A distributed patient record solution for hospitals cannot reduce treatment errors (e.g., giving penicillin to an unconscious patient who is allergic) if the computers they're using can't run our software.

For every goal, we must consider "when would this not be possible?" and clearly caveat for that. Otherwise we can easily end up with unworkable solutions.

Designing Tests

Just as with software or system acceptance tests, to completely clarify what is meant by a goal (e.g., improve customer satisfaction) we need to use examples, which can be worked into executable performance tests. Precise English (or French, or Chinese, etc.) just isn't precise enough.

Let's run with my example of "improve customer satisfaction"; what does that mean, exactly? How can we know that customer satisfaction has improved?

Imagine we're running a chain of restaurants. Perhaps we could ask customers to leave reviews, and grade their dining experience out of 10, with 1 being "very poor" and 10 being "perfect".

Such things exist, of course. Diners can go online and leave reviews for restaurants they've eaten at. As can the people who own the restaurant. As can online "reputation management" firms who employ armies of paid reviewers to make sure you get a great average rating. So you could be a very highly rated restaurant with very low customer satisfaction, and the illusion of meeting your goal is thus created.

Relying solely on online reviews could actively hurt your business if they invite a false sense of achievement. Why would service improve if it's already "great"?

If diners were really genuinely satisfied, what would the real signs be? They'd come back. Often. They'd recommend you to friends and family. They'd leave good tips. They'd eat all the food you served them.

What has all this got to do with software? Let's imagine a tech example: an online new music discovery platform. The goal is to amplify good bands posting good music, giving them more exposure. Let's call it "Soundclown", just for jolly.

On Soundclown, listeners can give tracks a thumbs-up if they like them, and a thumbs-down if they really don't. Tracks with more Likes get promoted higher in the site's "billboards" for each genre of music.

But here's the question: just because a track gets more Likes, does that mean more listeners really liked it? Not necessarily. As a user of many such sites, I see how mechanisms for users interacting with music and musicians get "gamed" for various purposes.

First and foremost, most sites identify to the musician the listener who Liked their track. This becomes a conduit for unsolicited advertising. Your track may have a lot of Likes, but that could just be because a lot of users would like to sell you promotional services. In many cases, it's evident that they haven't even listened to the track that they're Liking (when you notice it has more Likes than it's had plays.)

If I were designing Soundclown, I'd want to be sure that the music being promoted was genuinely liked. So I might measure how many times a listener plays the track all the way through, for example. The musical equivalent of "but did they eat it all?" and "did they come back for more?"
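As a sketch of the sort of measure I mean - the names are invented, this isn't a real Soundclown API - ranking could count complete plays and returning listeners rather than raw Likes:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical track that records listening events rather than Likes
    public class Track {
        private final List<Listen> listens = new ArrayList<>();

        public void registerListen(String listenerId, double fractionPlayed) {
            listens.add(new Listen(listenerId, fractionPlayed));
        }

        // "Did they eat it all?" - plays heard (nearly) all the way through
        public long completePlays() {
            return listens.stream().filter(l -> l.fractionPlayed >= 0.9).count();
        }

        // "Did they come back for more?" - listeners with more than one complete play
        public long returningListeners() {
            return listens.stream()
                    .filter(l -> l.fractionPlayed >= 0.9)
                    .collect(Collectors.groupingBy(l -> l.listenerId, Collectors.counting()))
                    .values().stream().filter(count -> count > 1).count();
        }

        private static class Listen {
            final String listenerId;
            final double fractionPlayed;

            Listen(String listenerId, double fractionPlayed) {
                this.listenerId = listenerId;
                this.fractionPlayed = fractionPlayed;
            }
        }
    }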

We might also ask for a bit more proof than clicking a thumbs-up icon. One website I use keeps a list of my "fans", but are they really fanatical about my music? Judging by the tumbleweed when I alert my "fans" to new music, the answer is evidently "nope". Again, we could learn from the restaurant, and allow listeners to "tip" artists, conveying some kind of reward that actually costs the listener something.

Finally, we might consider removing all unintended backwards marketing channels. Liking my music shouldn't be an opportunity for you to try and sell me something. That very much lies at the heart of the corruption of most social networks: "I'm really interested in you. Now buy my stuff!"

This is Design. So, ITERATE!

The lesson I learned early is that, no matter how smart we think we've been about setting goals and defining tests, we always need to revisit them - probably many times. This is a design process, and should be approached the same way we design software.

We can make good headway using a workshop format I designed many years ago.

Maintaining The Essence

Finally, but most important of all, our goals need to be expressed in a way that doesn't commit us to any solution. If our goal is to promote the best musicians, then that's our goal. We must always keep our eyes on that prize. It takes a lot of hard work and diligence not to lose sight of our end goals when we're bogged down in technical solution details. Most teams fail in that respect, and let the technical details become the goal.






July 12, 2016

It's Time Development Teams Chose Their Own Working Environments

I'm a great believer in software development teams taking back control. For too long now, the way we work has been dictated by people who aren't software developers. No more so is this evident than in the office environments in which we're expected to work.

Typically, developers have no say in their working environment. After the tyranny of cubicles came the dystopia of open-plan offices.

For the past two decades, open-plan has dominated the scene, inflicting a soul-destroying face-time culture and countless concentration-destroying distractions on developers across the globe.

A quick poll on Twitter suggests that very few of us think open-plan is an appropriate work environment for what we do when it's applied exclusively. More of us would prefer to work in our own office, or from home.




But the vast majority of us would appear to want the right working environment for when it suits what we're doing. Sometimes we want to be a team and collaborate. Sometimes we want to lock ourselves away and think hard. Sometimes we don't want to come into the office at all.

Most of the developers I speak to - and this mirrors my own experience - report getting a lot more work done when the office is empty, or when they're at home on their own. Most people I speak to who pair program find that it works best when the pair aren't distracted by what anyone else is doing. Again, I get many reports of high productivity pairing in their own office, or remotely from home offices.

Some Agile teams report highest productivity when they're all co-located in the same room, but only when there's nobody else working in that room (e.g., they're not stuck in a room with people from accounts or marketing).

What can we make of all of this? Well, it would seem there is no one ideal working environment for software developers. For sure, writing software is a very different occupation to selling, marketing, accounting, and a whole host of other professions. But on the surface, it looks just like office work. Which it is, of course. So it's natural that the Gods of Facilities Management might assume we need a similar kind of working environment.

For example, if you think developers should have their own offices, check out the prices (e.g., a 4m x 4m office for one person costs roughly £4000/year more than a workstation in an open-plan office here in London), do the sums, and make the business knowingly choose to throw money away on false economies like saving space.

A customised working space, with small private offices and bigger collaborative spaces, with better facilities for secure and reliable remote working, doesn't cost as much as maybe some people think. For sure, if they just threw the money they're currently spending in total over the wall and got working software back, would they even care how much was spent on office space?

Again, the problem here is that the decision is in the wrong hands. It's up to us to push back and challenge that assumption. Getting the working environment you need can have a profound effect on your productivity.



Reminder: All Codemanship Training Bookings Half-Price in July



Just a quick reminder about the special offer Codemanship's running this month. Any UK training workshop booked in July is half price.

Beat the Brexit blues by treating your development teams to a hands-on code craft boost for as little as £100 per person.

To find out more, visit our website.




July 6, 2016

After Brexit, the UK Could Lose 1 in 3 Software Developers

One thing that has largely gone unreported in the mainstream media is the potential - and already very real - impact of Brexit on software development in the UK.

Our membership of the EU plays a big role in Britain's software development community. According to a poll I conducted on Twitter, roughly 1 in 3 developers working in the UK is an EU migrant. The sample size of 850 suggests this figure has to be at least in the ballpark of reality, and it chimes with my experiences training and coaching teams across the country.




1 in 3 software developers adds up to about 130,000 highly educated, highly skilled people helping to build UK plc's digital infrastructure. Hotly tipped to be the new Prime Minister, Theresa May has refused to give any assurances about their right to remain in the UK. So, right now, about 130,000 software developers, and the organisations they work for, have a sword of Damocles hanging over their futures. Nobody knows if they'll still be here in 2-3 years' time.

130,000 is a lot of developers to lose. Some of us might say "Excellent - higher pay for us locals!" But the reality is likely to play out differently. We're already hearing about IT projects being shelved indefinitely, as well as tech businesses that have either resolved to move, or are in the process of deciding whether to move, out of the UK to where there's a bigger pool of available talent in the EU.

Imagine if the UK's construction industry lost 1 in 3 tradespeople. Indeed, construction in the UK is already taking a big knock from the Brexit vote. Without skilled EU workers, a heck of a lot of stuff here would not have been built in the last decade. International employers have the choice to go elsewhere. And they are choosing, it would seem.

But an office complex in London has to be built in London. Not so for software. There's really nothing stopping our biggest IT spenders moving IT to places like Paris, Brussels or Berlin. We've spent the last 25 years turning IT into a moveable feast. Now, it would appear, the feast is moving.

What worries me most, though, is less the loss of IT projects and jobs, more the loss of economic growth attached to these projects in the wider world. IT projects aren't being shelved in isolation. They usually exist for a purpose, and the much bigger business investment projects they're part of are being put on ice, too. The likely result is a cooling effect on the entire British economy.

Software developers - along with every other person in the UK - woke up to a hefty overnight pay cut on June 25th, thanks to the devalued pound. A Brexit-fuelled brain drain is going to hit software development hard. We can't just magic 130,000 developers out of thin air, and every non-EU country we might look to to bridge the gap has their own chronic shortages to deal with.

Of course, with tit-for-tat, there'll be a lot of British developers moving back to the UK. But nothing like 130,000.

I'm very much hoping that sanity will be restored, and both Right To Remain and freedom of movement will form a core part of any deal the Brexit negotiators do with the EU. But the hardened rhetoric from the likes of May, Leadsom and other prospective PMs makes me fearful that it's going to be a messy and acrimonious divorce, and that - as always - the children, including our young industry, will suffer most.

UPDATE: Yesterday, a motion proposed by Labour in the House of Commons that EU citizens currently living in the UK should be given the right to remain was passed by an overwhelming majority. It's not legally binding and has no direct impact on government policy, but it least shows that the will is there to resolve this problem. We must hope that the government realises the damage being done by prolonging the uncertainty and takes decisive action soon. This will be politically very difficult for them, though. The Leave campaign very strongly implied that immigrants would be leaving, even if they've rowed back on that since the referendum.







July 2, 2016

What Does Brexit Mean for UK Software Development?

The effects of last week's momentous referendum result that has set the UK (well, England and Wales, at any rate) on a path to exiting the EU are already being felt in British software development.

For several months now, as Leavers and Remainers campaigned, the uncertainty created by the referendum about Britain's economic future has seen many employers put a freeze on unnecessary spending.

Anecdotally, I've been hearing since January about IT projects that have been put on ice, recruitment drives shelved, and - most pertinently to my business - training and mentoring deprioritised, as companies seek to reduce spending in case the unthinkable happened.

Now that the unthinkable has happened, what will it mean for the medium and long-term future of British software development?

Undoubtedly, we've relied on our EU membership to staff teams. Good developers are hard to find. It just got a lot harder. Since last Friday, I've seen several European developers I follow on Twitter say they'll be leaving the UK. This is not good. Indeed, there are entire companies here that have been started by EU migrants. The British economy has benefitted from their brains and their hard work. I quite understand why they might now be thinking "Why should I stay if I'm obviously not wanted?" The effect of a brain drain back to the EU will be noticeable.

And while many of us are calling for the status of existing EU migrants to be guaranteed, we may not know for quite some time if that's actually possible. It's likely to generate a lot of anger among Leave voters who thought they were voting to "send them home". And even if their status could be protected, the rise in anti-immigrant rhetoric, including a step-change in attacks and abuse in the streets, is unlikely to have no effect on choices to stay.

Some of our biggest employers have already made it clear that they are looking into potential other locations for their businesses. If banking, in particular, decides London is no longer the place to be, that will put an end to the largest concentration of software developers in Europe - the City of London. "Tech City", just up the road, would be likely to feel the shock waves.

And being out of the EU is likely to have an impact on investment for tech start-ups, too. For years, the UK was considered to be one of the best places to start a software company. Being in the EU was part of the attraction.

Meanwhile, UK citizens - born and educated here - are investigating other possibilities, like Scotland (if it becomes independent), Ireland and Canada. Before our final exit from the EU, we can probably expect a further brain drain, losing thousands of great software developers who have stronger liberal and internationalist leanings.

But, with no signs of a social democrat resurgence on the horizon (with the main opposition doing their level best to ensure we don't get that choice), the writing's on the wall for a future Britain that's both much diminished economically, and farther to the right politically than the average software developer may be comfortable with.

Software developers understand that you can't run a high-tech 21st century economy on industrial 19th century principles. Evidently, our political class - for all their talk of "high-tech, high-wage" economics - are stuck in the past. Few if any of them know what it means, and right now none of them seem to appreciate what damage has been done to this vision for Britain's future.

For decades, Britain has punched above her weight. But after several years of clamouring to "get kids coding", the EU referendum may just have closed the door on Britain as a software development powerhouse.

Most worrying of all is the failure of politicians to appreciate that our ability to create and adapt software is now a very significant limiting factor on our entire economy. Just ask how many developers working on big government projects at the moment are EU migrants, for example.


Software development in the UK will limp on, no doubt. But, the way things are shaping up, it could be much diminished, and the UK economy with it.

May 31, 2016

Squaring The Circle of "Tell, Don't Ask" vs. Test As Close To Implementation As Possible

One of the toughest circles to square in a test-driven approach to designing software is how to achieve loosely-coupled modules while keeping test assertions as close to the code they apply to as possible, so that they: a. are better at pinpointing failures, b. run faster, and c. can travel wherever the code they exercise travels.

The interaction-focused approach to TDD that allows us to write unit tests that are more about the telling and less about the asking (Tell, Don't Ask) can indeed lead to designs that are more effectively modular. But, as Martin Fowler also mentions in his post on Tell, Don't Ask, it can lead to developers becoming somewhat ideological "getter eradicators".



These example classes from a community video library system break our Tell, Don't Ask principle by exposing internal data through getters.
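Something along these lines - a reconstruction for illustration, so the details may differ from the original:

    // A title in the library, exposing internal data through a getter
    public class Title {
        private int copies;

        public void registerCopy() {
            copies++;
        }

        public int getCopies() {
            return copies;
        }
    }

    // A library member, likewise exposing internal data
    public class Member {
        private int rewardPoints;

        public void awardPoints(int points) {
            rewardPoints += points;
        }

        public int getRewardPoints() {
            return rewardPoints;
        }
    }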

But is that, in itself, a bad thing? Remember that our goal is to share as little information about objects as possible among the classes in our system. So our focus should be on the clients that use Title and Member, and on what they know about them.



Library, in this example, isn't bound to the implementation classes that have the getters. It only knows about the interfaces Copyable and Rewardable, which only include the methods Library uses. (See the Interface Segregation Principle).
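Roughly like this - again, a reconstruction rather than the original code:

    // Client-specific interfaces exposing only what Library needs
    public interface Copyable {
        void registerCopy();
    }

    public interface Rewardable {
        void awardPoints(int points);
    }

    // Title implements Copyable and Member implements Rewardable, but Library
    // knows nothing about either implementation class - or their getters
    public class Library {
        public void donate(Copyable title, Rewardable donor) {
            title.registerCopy();
            donor.awardPoints(10);
        }
    }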



The tests for Library know nothing of Member or Title.
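They only need test doubles for the two interfaces - something like this sketch:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Library's tests know only the Copyable and Rewardable interfaces
    public class LibraryTest {
        @Test
        public void donatingRegistersACopyAndRewardsTheDonor() {
            FakeCopyable title = new FakeCopyable();
            FakeRewardable donor = new FakeRewardable();

            new Library().donate(title, donor);

            assertEquals(1, title.copiesRegistered);
            assertEquals(10, donor.pointsAwarded);
        }

        private static class FakeCopyable implements Copyable {
            int copiesRegistered;
            public void registerCopy() { copiesRegistered++; }
        }

        private static class FakeRewardable implements Rewardable {
            int pointsAwarded;
            public void awardPoints(int points) { pointsAwarded += points; }
        }
    }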



And the assertions about the work that Member and Title do are isolated in test classes that are specifically about those implementations.
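For example - illustrative sketches, not the originals:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // The work Title does is checked in Title's own tests...
    public class TitleTest {
        @Test
        public void registeringACopyIncreasesTheCopyCount() {
            Title title = new Title();
            title.registerCopy();
            assertEquals(1, title.getCopies());
        }
    }

    // ...and likewise for Member
    public class MemberTest {
        @Test
        public void awardedPointsAreAddedToTheMembersTotal() {
            Member member = new Member();
            member.awardPoints(10);
            assertEquals(10, member.getRewardPoints());
        }
    }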

In practice, this turns out to be preferable to the favoured Tell, Don't Ask approach in TDD that puts all the assertions about the work in acceptance tests and relies mostly on interaction testing at the unit level. Just because Library told Copyable to register one copy, that doesn't mean Title actually registered one copy. Someone, somewhere, has to ask that question. Ask. And to ask that question, the number of copies has to be accessible somehow. Of course, we could cheat, and preserve our getter-less implementation by accessing state by the back door (e.g., going direct to a database), but this is far worse than using a getter, IMHO. We're still sharing the knowledge, but we're pushing that dependency out of the code, beyond where our compiler can see it.

Just as functional programming's "dirty little secret" is that - somewhere, somehow - state changes, Tell, Don't Ask's is that eventually we have to ask about the internal state of objects (even if we're not going to those classes in our code to do it). It may be for use in an acceptance test, or in the user interface, or in a report - but if we don't make object state accessible somehow, then our OO applications can have no O in their I/O.

Having said all that, I do find that Tell, Don't Ask and interaction tests can lead us to more decoupled designs, in that they can lead us to minimal interfaces through which object collaborations happen.

One last thought on this: if the only reason we have methods for accessing state is for testing implementations (which, as mentioned, is not really the case, but let's run with it for the sake of argument), greater information hiding could be achieved by internalising those assertions.
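A sketch of what that might look like - illustrative only:

    // The check lives inside Title, so no getter is needed; the test just
    // drives the object and asks it to verify its own internal state.
    public class Title {
        private int copies;

        public void registerCopy() {
            copies++;
        }

        public void assertCopies(int expected) {
            if (copies != expected) {
                throw new AssertionError(
                    "Expected " + expected + " copies, but there are " + copies);
            }
        }
    }

    public class TitleTest {
        @org.junit.Test
        public void registeringACopyIncreasesTheCopyCount() {
            Title title = new Title();
            title.registerCopy();
            title.assertCopies(1);
        }
    }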



This would satisfy both needs of having the work tested as close to where it's done as possible, and exposing as little internal state as possible. Our tests would not actually ask any questions, they would simply be drivers that exercise our objects with various inputs.






May 25, 2016

How Many Bugs In Your Code *Really*?

A late night thought, courtesy of the very noisy foxes outside my window.

How many bugs are lurking in your code that you don't know about?

Your bug tracking database may suggest you have 1 defect per thousand lines of code (KLOC), but maybe that's because your tests aren't very thorough. Or maybe it's because you deter users from reporting bugs. I've seen it all, over the years.

But if you want to get a rough idea of how many bugs there are really, you can use a kind of mutation testing.

Create a branch of your code and deliberately introduce 10 bugs. Do your usual testing (manual, automated, whatever it entails), and keep an eye on bugs that get reported. Stop the clock at the point you'd normally be ready to ship it. (But if shipping it *is* your usual way of testing, then *start* the clock there and wait a while for users to report bugs.)

How many of those deliberate bugs get reported? If all 10 do, then the bug count in your database is probably an accurate reflection of the actual number of bugs in the code.

If 5 get reported, then double the bug count in your database. If your tracking says 1 bug/KLOC, you probably have about 2/KLOC.
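The arithmetic is just capture-recapture, the same trick ecologists use to estimate population sizes. A quick sketch of the sums - illustrative only:

    // Capture-recapture estimate of the real defect count: if our testing
    // caught 5 of the 10 bugs we seeded, assume it catches roughly half of
    // all bugs, so scale up the known bug count accordingly.
    public class DefectEstimate {

        public static double estimatedRealBugCount(int knownBugs, int seededBugs, int seededBugsFound) {
            if (seededBugsFound == 0) {
                throw new IllegalArgumentException(
                    "No seeded bugs were found, so the real count can't be estimated this way");
            }
            return knownBugs * ((double) seededBugs / seededBugsFound);
        }

        public static void main(String[] args) {
            // 100 known bugs in a 100 KLOC system (1/KLOC), 5 of 10 seeded bugs caught
            System.out.println(estimatedRealBugCount(100, 10, 5)); // 200.0, i.e. ~2/KLOC
        }
    }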

If none get reported, then your code is probably riddled with bugs you don't know about (or have chosen to ignore.)