September 24, 2016

Learn TDD with Codemanship

10 Great Reasons to Learn TDD

1. Rising Demand - Test-Driven Development has been growing in popularity over the last decade, with demand rising by a fairly steady 6% each year here in the UK. In 2016, roughly a quarter of all developer jobs advertised ask for TDD skills and experience. It's not unthinkable that within the next decade, TDD will become a mandatory skill for software developers.

2. Better Pay - according to data on itjobswatch.co.uk, the median salary for a software developer in the UK is £40,000. For devs with TDD skills, it rises a whopping 27.5% to £51,000. Overall, developer pay in Britain has been stagnating, with a real-terms fall in earnings over the last decade. Salaries for jobs requiring TDD skills have consistently outperformed inflation, rising about 4.5% every year for the last 3 years.

3. Better Software - the jury's not out on this one. Studies into the effects of TDD show quite conclusively that test-driven code is less buggy - by as much as 90% in some studies. But that's not the only effect on code quality; studies done at the BBC and by Keith Braithwaite of Zuhlke Engineering show clear gains in code maintainability, with test-driven code being simpler and containing less duplication. Other studies show test-driven code tends to have lower class coupling and higher class cohesion.

4. Faster Cycle Times - the Holy Grail of Continuous Delivery is code that's always shippable, so when a feature's ready, the business can decide to release it straight away, instead of having to wait for a long and tortuous acceptance testing and stabilisation phase to make it fit for purpose. TDD, done well, delivers on this promise by continuously testing the code to ensure it always works. It's no coincidence that all the teams doing Continuous Delivery successfully are also doing TDD. TDD enables Continuous Delivery.

5. Sustainable Innovation - not only can TDD help teams deliver value sooner, it can also help them deliver value on the same code base for longer. I've seen some big organisations brought to their knees by the crippling cost of changing software products and systems, and having to rewrite software from scratch many times just to make small gains in new functionality. The technical benefits of TDD - continuous testing and cleaner code - tend to flatten out the cost of change over time, which typically rises exponentially otherwise.

6. It Doesn't Cost More - despite what some TDD naysayers claim, it doesn't actually cost significantly more to test-drive the design of your code, and in some cases it even improves productivity. Many teams report a slowdown when adopting TDD, and this is because of the not-inconsiderable learning curve. It takes 4-6 months to get the hang of TDD, and a lot of teams don't make it that far, which is why I strongly recommend adopting TDD "under the radar".

7. Puts The Problem Horse Before The Solution Cart - one of the biggest failings of software development teams is our tendency to be solution-driven and not problem-driven. TDD forces us to start by explicitly articulating the problem we want to solve, and then encourages us to find the simplest solution. As a result, test-driven code has a tendency to be more useful.

8. Free of Fear - I've seen first-hand the profound effect TDD can have on a developer's confidence. Firstly, using precise examples helps us to clear the fog that usually hangs over requirements, obscuring our view going forward. And continuous automated regression testing - a nifty side-effect of doing TDD - gives us confidence looking back that a change we make hasn't broken the software. The net effect is to make us more courageous as we work, more willing to try new ideas, less frightened of failing. Writing software is a learning process, and - to quote from Dune - "fear is the mind-killer".

9. Getting Everyone "On The Same Page" - exploring customer requirements using test examples is a very powerful way to clear up ambiguities and potential misunderstandings, by making everything clear and concrete. And not just for developers: testers, ops teams, UX designers, security experts, technical documentation writers, marketers... we can all benefit from these examples, getting the entire team on the same page.

10. A Gateway To High-Integrity Code - TDD, done well, can open the door to more advanced kinds of software testing at relatively low extra cost. For example, if you're in the habit of refactoring duplicate tests into parameterised tests, a few extra lines of code can add an exhaustive data-driven test. Increasingly, tools and techniques normally associated with high-integrity code (e.g., model checking) are being brought into the unit testing arena. NASA's Java Pathfinder model checker, for example, can be used to drive tests written as JUnit Theories. And there are ports of Haskell's QuickCheck for many other languages and xUnit implementations. Finally, after years of high-integrity software being a largely "academic" domain, we stand on the cusp of it going mainstream.
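
To make that "few extra lines" point concrete, here's a minimal sketch using JUnit's Theories runner. (The square root example and the choice of input range are mine, purely for illustration.) Where an ordinary parameterised test might check three or four hand-picked cases, generating the data points makes the same test exhaustive over a whole range:

    import static org.junit.Assert.assertEquals;

    import org.junit.experimental.theories.DataPoints;
    import org.junit.experimental.theories.Theories;
    import org.junit.experimental.theories.Theory;
    import org.junit.runner.RunWith;

    @RunWith(Theories.class)
    public class SquareRootTest {

        // Exhaustively enumerate every input in a chosen range - only a
        // few more lines than a parameterised test with hand-picked cases
        @DataPoints
        public static int[] inputs() {
            int[] values = new int[10000];
            for (int i = 0; i < values.length; i++) {
                values[i] = i;
            }
            return values;
        }

        @Theory
        public void squareOfRootEqualsInput(int input) {
            double root = Math.sqrt(input);
            assertEquals((double) input, root * root, 0.0001);
        }
    }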

If you're interested in learning TDD, or are a TDD practitioner looking to distinguish yourself and advance your skills, visit www.codemanship.com






September 13, 2016

Learn TDD with Codemanship

4 Things You SHOULDN'T Do When The Schedule's Slipping

It takes real nerve to do the right thing when your delivery date's looming and you're behind on your plan.

Here are four things you really should avoid when the schedule's slipping:

1. Hire more developers

It's been over 40 years since the publication of Fred P. Brooks' 'The Mythical Man-Month'. This means that our industry has known for almost my entire life that adding developers to a late project makes it later.

Not only is this borne out by data on team size vs. productivity, but we also have a pretty good idea what the causal mechanism is.

Like climate change, people who reject this advice should not be called "skeptics" any more. In the face of the overwhelming evidence, they're Small Team Deniers.

Hiring more devs when the schedule's slipping is like prescribing cigarettes, boxed sets and bacon for a patient with high blood pressure.

2. Cut corners

Counterintuitive as it may seem to most software managers, the relationship between software quality and the time and cost of delivery is not what most of us assume.

Common sense might lead us to believe that more reliable software takes longer, but the mountain of industry data on this clearly shows the opposite in the vast majority of cases.

To a point - and it's a point 99% of teams are in no danger of crossing - it actually takes less effort to deliver more reliable software.

Again, the causal mechanism for this is well understood. And, again, anyone who rejects the evidence is not a "skeptic"; they're a Defect Prevention Denier.

The way to go faster on 99% of projects is to slow down, and take more care.

3. Work longer hours

Another management myth that's been roundly debunked by the evidence is that, when a software delivery schedule's slipping significantly, teams can get back on track by working longer hours.

The data very clearly shows that - for most kinds of work - longer hours is a false economy. But it's especially true for writing software, which requires a level of concentration and focus that most jobs don't.

Short spurts of extra effort - maybe the odd weekend or late night - can make a small difference in the short term, but day-after-day, week-after-week overtime will burn your developers out faster than you can say "get a life". They'll make stupid, easily avoidable mistakes. And, as we've seen, mistakes cost exponentially more to fix than to avoid. This is why teams who routinely work overtime tend to have lower overall productivity: they're too busy fighting their own self-inflicted fires.

You can't "cram" software development. Like your physics final exams, if you're nowhere near ready a week before, then you're not gong to be ready, and no amount of midnight oil and caffeine is going to fix that.

You'll get more done with teams who are rested, energised, feeling positive, and focused.

4. Bribe the team to hit the deadline

Given the first three points we've covered here, promising to shower the team with money and other rewards to hit a deadline is just going to encourage them to make those mistakes for you.

Rewarding teams for hitting deadlines fosters a very 1-dimensional view of software development success. It places extra pressure on developers to do the wrong things: to grow the size of their teams, to cut corners, and to work silly hours. It therefore has a tendency to make things worse.

The standard wheeze, of course, is for teams to pretend that they hit the deadline by delivering something that looks like finished software. The rot under the bonnet quickly becomes apparent when the business then expects a second release. Now the team are bogged down in all the technical debt they took on for the first release, often to the extent that new features and change requests become out of the question.

Yes, we hit the deadline. No, we can't make it any better. You want changes? Then you'll have to pay us to do it all over again.


Granted, it takes real nerve, when the schedule's slipping and the customer is baying for blood, to keep the team small, to slow down and take more care, and to leave the office at 5pm.

Ultimately, the fate of teams rests with the company cultures that encourage and reward doing the wrong thing. Managers get rewarded for managing bigger teams. Developers get rewarded for being at their desk after everyone else has gone home, and appearing to hit deadlines. Perversely, as an industry, it's easier to rise to the top by doing the wrong thing in these situations. Until we stop rewarding that behaviour, little will change.








September 8, 2016

Learn TDD with Codemanship

Introduction to TDD - "Brown Bag" Sessions

With my new training companion book, TDD, launching next month, I thought it might be fun to offer some convenient "brown bag" sessions where folks can get a quick practical introduction to TDD and a copy of the book to take away.



Feedback via the Twitters suggests some of you are interested, and now I want to flesh the idea out a bit with some more details.

I want to get folk fired up about learning TDD, and the book can help them take the next steps on that journey.

The basic idea is that you invite me into your office in London* between 12pm and 2pm, or after work between 6pm and 8pm. You'll need a room or space for everyone, with a projector or big TV I can plug my laptop into. We'll do some TDD together, so you'll need at least one computer between every two people, as you'll be working in pairs. I'll code and talk, and we'll get straight into it - no time for dilly-dallying.

The session will last one hour, and attendees will get a copy of the TDD book, worth £30.

I would suggest there needs to be a minimum of 4 attendees, and pricing would be as follows:

For 4 people - £95/person

5-8 people - £85/person

9+ people - £75/person

In that hour, we'll cover:

* Why do TDD?
* Red-Green-Refactor basics
* The Golden Rule
* Working backwards from assertions
* Refactoring to parameterised tests
* Testing your tests

There'll be NO SLIDES. It'll be 100% hands-on. I'll do the practical stuff in Java or C#, but you can do it in any programming language you like, provided you have the appropriate tools (an IDE/editor and an implementation of xUnit - preferably one that supports parameterised tests, or has an add-on that does). Mocking frameworks will not be required for this introduction.

You can grab yourself a free preview of the book, including the first 7 chapters, from http://codemanship.co.uk/tdd.html

More details soon on how to book. But if you're interested in me running a TDD "brown bag" where you work, drop me a line and we can get the ball rolling now.



* If there's enough demand, I'll do tours of other cities






September 6, 2016

Learn TDD with Codemanship

Empowered Teams Can Make Decisions The Boss Disagrees With

Coming into contact, as I do, with many software development teams across a wide range of industries, I've begun to recognise patterns.

One such pattern that - once I noticed it - I realised is very prevalent in Agile Software Development is what I call the Empowered Straitjacket. Teams are "empowered" to make their own decisions, but when the boss doesn't like a decision they've made, he or she overrules it.

Those who remember their set theory will know that if the set of all possible decisions a team is allowed to make can only include decisions the boss agrees with, then the team's decisions are effectively a subset of the decisions the boss would have made anyway.

That is not empowerment. Just in case you were wondering.

To have truly empowered development teams, bosses have to recognise that just being the boss doesn't necessarily make them right, and disagreeing with a decision doesn't necessarily make it a bad decision.

Unfortunately, the notion that decisions are made from above by more enlightened individuals has an iron grip on corporate culture.

Moving away from that means that managers have to reshape their role from someone who makes decisions to someone who facilitates the decision-making process (and then accepts the outcome graciously and with energy and enthusiasm.)

Once we recognise that there are other - more democratic and more objective - ways of making decisions, and that those decisions are just as likely to be right as our own (if not more so), then we have a golden opportunity to scale decision-making far beyond what traditional command-and-control hierarchies are capable of.

To scale Agile, you must learn to let go and let teams be the masters of their own destinies. You have no superhuman ability to make better decisions than a team full of highly-educated professionals.

The flipside of this is that developers and teams have to allow themselves to be empowered. With great empowerment comes great responsibility. And developers who've been cushioned from the responsibility of making decisions for a long time can run a mile when it comes a'knocking. Like prisoners who can't cope on the outside after a long stretch of regimented days doing exactly what they're told exactly when they're told, devs who are used to management "taking care of all that" can panic when someone hands them a company credit card and says "if you need anything, use this". It reminds me of how my grandmother's hands used to shake when she had to write a cheque. Granddad took care of all that sort of thing.

This can lead to developers lacking confidence, which leads to them being afraid to take initiative. They may have learned their craft in environments where failure is not tolerated, and learned a survival strategy of not being responsible for anything that matters.

In this situation, developers rise up through the ranks - usually by length of service - and perpetuate the cycle by micromanaging their teams.

Based on my own subjective experiences leading dev teams (and being led): ultimately, developers empower themselves. The maxim "it's easier to ask for forgiveness than permission" applies here.

Now, 'twas ever thus. But the rise of Agile Software Development has forced many managers to at least pretend to empower their teams. (And, let's face it, the majority are just pretending. Scrum Masters are not project managers, BTW.)

That's your cue to seize the day. They may not like it. But they'll have to pretend they do.





August 31, 2016

Learn TDD with Codemanship

Slow Running/Unmaintainable Automated Tests? Don't Ditch Them. Optimise Them.

It's easy for development teams to underestimate the effort they need to invest in optimising their automated tests for both maintainability and performance.

If your test code is difficult to change, your software is difficult to change. And if your tests take a long time to run - it's not unheard of for teams to have test suites that take hours - then you won't run them very often. Slow-running tests can be a blocker to Continuous Delivery, because Continuous Delivery requires Continuous Testing, to be confident that the software is always shippable.

It's very tempting, when your tests are slow and/or difficult to change, to delete the source of your pain. But the reality is that these unpleasant side effects pale in comparison to the effects of not having automated tests.

We know from decades of experience that bugs are more likely to appear in code that is tested less well and less often. Ditching your automated tests opens the floodgates, and many times I've seen code bases rapidly deteriorate after teams threw away their test suites. I would much rather have a slow, clunky suite of tests to run overnight than no automated tests at all. The alternative is wilful ignorance, which doesn't make the bugs go away, regrettably.

Don't give in to this temptation. You'll end up jumping out of the frying pan and into the fire.

Instead, look at how the test code could be refactored to better accommodate change. In particular, focus on where the test code is directly coupled to the classes (or services, or UIs) under test. Time after time, I see massive amounts of duplicated interaction code. Refactoring this duplication so that interactions with classes under test happen in one place can dramatically reduce the cost of change.
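
As a minimal sketch of that refactoring - the VideoLibrary class here is invented purely for illustration - the trick is to funnel every interaction with the class under test through one helper method, so an API change means editing one line instead of every test:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class DonateDvdTest {

        // Invented, minimal class under test
        static class VideoLibrary {
            private final java.util.Set<String> titles = new java.util.HashSet<String>();
            void donate(String title) { titles.add(title); }
            boolean contains(String title) { return titles.contains(title); }
        }

        private final VideoLibrary library = new VideoLibrary();

        // Before refactoring, every test method invoked donate() directly;
        // now the interaction with the class under test lives in one place
        private void donate(String title) {
            library.donate(title);
        }

        @Test
        public void donatedDvdIsAddedToLibrary() {
            donate("The Abyss");
            assertTrue(library.contains("The Abyss"));
        }
    }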

And if you think you have too many individual tests, parameterised tests are a criminally under-utilised tool for consolidating multiple test cases into a single test method. You can buy yourself quite staggering levels of test assurance with surprisingly little test code.
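
Here's a sketch of that consolidation using JUnit's Parameterized runner - the late fee rule is an invented example - where what might have been a dozen near-identical test methods collapses into one method plus a table of cases:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class LateFeeTest {

        // Invented rule: £1.50 per day late
        static class LateFeeCalculator {
            double fee(int daysLate) { return daysLate * 1.5; }
        }

        @Parameters
        public static Iterable<Object[]> cases() {
            return Arrays.asList(new Object[][] {
                    // daysLate, expectedFee
                    { 0, 0.0 }, { 1, 1.5 }, { 10, 15.0 }
            });
        }

        private final int daysLate;
        private final double expectedFee;

        public LateFeeTest(int daysLate, double expectedFee) {
            this.daysLate = daysLate;
            this.expectedFee = expectedFee;
        }

        @Test
        public void chargesPerDayLate() {
            assertEquals(expectedFee, new LateFeeCalculator().fee(daysLate), 0.001);
        }
    }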

When tests run slow, that's usually down to external dependencies. System tests, for example, tend to bring all of the end-to-end architecture into play as they're executed: databases, files, web services, and so on.

A clean separation of concerns in your test code can help bring the right amount of horsepower to bear on the logic of your software. A system test that checks a calculation is done correctly using data from a web service doesn't really need that web service in order to do the check. Indeed, it doesn't need to be a system test at all. A fast-running unit test for the module that does that work will be just spiffy.

Fetching the data and doing the calculation with it are two separate concerns. Have a test for one that gets test data from a stub. And another test - a single test - that checks that the implementation of that stub's interface fetches the data correctly.
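
A minimal sketch of that separation (all the names here are invented): the calculation gets unit tested against a hand-rolled stub of the data-fetching interface, while the real web service implementation of that same interface gets its own single integration test elsewhere:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AverageRainfallTest {

        // The one seam between the calculation and the outside world
        interface RainfallData {
            double[] dailyRainfall();
        }

        // Invented class under test; in production it would be constructed
        // with an implementation of RainfallData that calls the web service
        static class RainfallCalculator {
            private final RainfallData data;
            RainfallCalculator(RainfallData data) { this.data = data; }
            double average() {
                double total = 0;
                double[] days = data.dailyRainfall();
                for (double day : days) total += day;
                return total / days.length;
            }
        }

        @Test
        public void averageIsSumDividedByCount() {
            RainfallData stub = new RainfallData() {
                public double[] dailyRainfall() { return new double[] { 1.0, 2.0, 3.0 }; }
            };
            assertEquals(2.0, new RainfallCalculator(stub).average(), 0.001);
        }
    }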

We tend to find that interactions with external dependencies form a small minority of our software's logic, and should therefore only require a small minority of the tests. If the design of the software doesn't allow separation of concerns (stubbing, mocking, dummies), refactor it until it does. And, again, the mantra "Don't Repeat Yourself" can dramatically reduce the amount of integration code that needs to be tested. You don't need database connection code for every single entity class.

Of course, for a large test suite, this can take a long time. And too many teams - well, their managers, really - balk at the investment they need to make, choosing instead to either live with slow and unmaintainable tests for the lifetime of the software (which is typically a far greater cost), or worse still, to ditch their automated tests.

Because teams who report "we deleted our tests, and the going got easier" are really only reporting a temporary relief.






August 30, 2016

Learn TDD with Codemanship

TDD 2.0 - Training Bookings & Book Preview

Just time to mention some Codemanship news.

I'm now taking advanced client bookings for the new & improved TDD 2.0 training workshop.

Incorporating lessons from 7 years delivering the original workshop to over 2,000 developers, TDD 2.0 is more practical, in-depth and hands-on than ever.

There's more on refactoring, more on design principles, and... well, just more!

I've ditched the PowerPoint slides, beefed up the demonstrations and turbo-charged the exercises. The workshop's available in a 1, 2 or 3-day version to suit budgets and time constraints.



Attendees also get an exclusive 200-page book that goes into even greater depth, with a stack more exercises you can use to hone your TDD craft after the workshop. Ongoing practice is all-important.

You can find out more about the workshop, and grab a free preview of the first 7 chapters of the TDD book, by visiting the website.

I'm taking bookings now for delivery from Oct 10th and beyond.






August 25, 2016

Learn TDD with Codemanship

Employers: Don't Use GitHub Stats To Judge Developers





I'm seeing a growing trend among employers to use GitHub statistics as a measure of a developer's capability (or, at least, one of them).

I advise employers very strongly against going down this road. There's absolutely no statistical correlation between those stats and the quality of a developer's work, or their ability to "get shit done" under commercial constraints of time and resources.

GitHub - an extremely useful tool for developers to share and collaborate on projects - is also a social network, and like all social networks it can be gamed. In many ways, it's little different to online services like SoundCloud. Would we measure a musician's worth by how many tracks they upload, or how long those tracks are? Indeed, given the tricks that can be deployed to boost a track's apparent popularity, can we even trust the number of track plays and likes as a metric?

Developers who are very active on GitHub are very active on GitHub. That's all we can reasonably conclude. I know many really spiffy devs who don't even have GitHub accounts. And I've stumbled across more than a few who have amazing-looking GitHub profiles who actually suck as developers.

The only thing employers can usefully get from looking at a developer's GitHub profile is the actual code itself. Does it have automated tests? Is it readable? Is it simple? Can you see lots of obvious duplication? Is it basically a C program, but written in Ruby? You can tell a lot about a developer by what their code's like.

GitHub commits are social interactions. They carry information about the committer, just as track Likes on SoundCloud do. And any interaction on a social network that carries information about the sender can, and eventually will, become just another channel for self-promotion, every bit as much as Facebook "likes", LinkedIn endorsements, and Twitter "follows".

So, yes, if you're a developer, stick some of your code up on a site like GitHub so we can take a peek and see what it's like. But if you're an employer, take those stats with a big pinch of salt. If you want to know how good a developer really is, there are much better ways.







August 9, 2016

Learn TDD with Codemanship

TDD 2.0 - London Saturday Oct 8th

Just a quick plug for the launch of the new and improved Codemanship Test-Driven Development workshop in London on Saturday October 8th.



Incorporating all the lessons learned from years of training and coaching developers in TDD, plus nearly 2 decades of real-world TDD experience, the improved workshop comes with an exclusive new 200-page book that goes into even greater depth and takes you to places that a 1 or 2-day workshop simply doesn't have time for.

The average price of a 2-day TDD training course in the UK is £1,300, and they're typically delivered by less experienced contractors rather than by the person who actually designed the course.

We're offering this intensive 1-day workshop for a mindbogglingly affordable £99, for which you get a packed day of learning, direct from the person who created the workshop, with a book worth £30 included in the price. The only thing a £1,300 course offers that we don't is catering. That's a very expensive lunch!

And we're running it on a Saturday, so you don't even have to get permission from the boss. I'd rather be surrounded by keen developers than be rich any day!

October 8th is the launch of the improved workshop, and the book, so join us and be the first to get your hands on it.

To find out more, and book your place, visit our Eventbrite page.





July 26, 2016

Learn TDD with Codemanship

Seeking Guinea Pigs for TDD Book

I'm writing a book to accompany the Codemanship TDD training workshop, which is undergoing a major upgrade over the summer.

I've written about 70% of it as a first draft, and it's now at a stage where I think:

a. It could prove useful to folk, and
b. It's well-formed enough to get useful feedback on

I'm seeking a handful of enthusiastic learners to give the book's current draft a spin and see what they think.

I'll be hoping for a range of TDD experience, from absolute beginners to old hands, to get a broad spectrum of perspectives.

It's in A5 format and currently has 151 pages, many of which are Java code examples, screen grabs, etc.

If you'd like to give it a go, drop me a line and tell me a little bit about yourself and your TDD experience.

You can tackle the exercises in any OO language you like, but will need to be able to follow Java to understand the book. (It's a lot like C#, in case you were wondering.)

The final product will be a physical book (made out of paper and everything!), accompanied by a PDF version. Everyone who comes on a Codemanship TDD workshop will have the option to add a copy to their order. The plan is that volunteer reviewers will get a copy for free when it's finished, as well as other merchandise I'm working on (e.g., a t-shirt).



July 23, 2016

Learn TDD with Codemanship

On The Compromises of Acceptance Test-Driven Development

I'm currently writing a book on Test-Driven Development to accompany the redesigned training workshop. Having thought very hard about TDD for many years, the first 140 pages were very easy to get out.

But things have - predictably - slowed down now that I'm on the chapter on end-to-end TDD and driving internal designs from customer tests.

The issue is that the ways we currently tackle this are all compromises, and there are many gods that need appeasing, just as there are many ways that folk do it.

* Some developers will write, say, a failing FitNesse test and come up with an implementation to pass that test.
* Some will write a failing automated customer test and then drive an internal design using unit tests and "classic TDD".
* Some will write a failing automated customer test that makes all the assertions about desired outcomes (e.g., "the donated DVD should be in the library"), and rely entirely on interaction tests, using mock objects, to drive out the internal design.
* Some will use test doubles only for external dependencies, ensuring their automated customer test runs faster.
* Some will include external dependencies and use their automated customer test to do integration testing as well.
* Some will drive the UI with their automated customer tests, effectively making them complete end-to-end system tests.
* Some will drive the application through controllers or services, excluding the UI as well as external back-end dependencies, so they can concentrate on the internal design.

And, of course, some won't automate their customer tests at all, relying entirely on their own developer tests for design and regression testing, and favouring manual by-eye confirmation of delivery by the customer herself.

And many will use a combination of some or all of these approaches, as required.

In my own approach, I observe that:

a. You cannot automate customer acceptance. The most important part of ATDD is agreeing the test examples and getting the customer's test data. Making those tests executable through automation helps to eliminate ambiguity, but really we're only doing it because we know we'll be running those tests many times, and automating will save us time and money. We still have to let the dog see the rabbit to get confirmation of acceptance. The customer has to step through the tests with working software and see it for themselves at least once.

b. Non-executable customer tests can be ambiguous, and manually reconciling customer-provided data with unit test parameters can be hit-and-miss

c. The customer rarely, if ever, gets involved with writing "customer tests" using the available tools like FitNesse and Cucumber. We're probably kidding ourselves that we even need a special set of tools, distinct from the xUnit frameworks we would use for other kinds of tests, because - chances are - we developers will be the ones writing those tests anyway

d. Customer tests executed using these tools tend to run slow, even when external dependencies are excluded

e. Relying entirely on top-level tests to check that the work got done right can - and usually does - lead to problems with maintainability later. We might identify a class that could be split off into a component to be reused in other applications, but where are its functional tests? Imagine we could only test a car radio when it's installed in a Ford Mondeo. This is especially pertinent for teams thinking about breaking down monolithic architectures into component-based or service-based designs.

f. When you exclude the UI and external dependencies, you are still a long way from "done" after your customer test has passed. There's many a slip twixt cup and lip.

g. Once we've established a design that passes the customer's test, the main purpose of having automated tests is to catch regressions as the code evolves. For this, we want to be able to test as much of our code as quickly and cheaply as possible. Over-reliance on slower-running customer tests can be at odds with this goal.

With all this in mind, and revisiting the original goal of driving designs directly from the customer's examples, it's difficult to craft a workable single narrative about how we might approach this.

I tend to automate a "happy path" test automated at entry point to the domain model, drive an internal design mostly through "classic" TDD, and use test doubles (stubs, mocks and dummies) to exclude external dependencies (as well as fake complex components I don't want to get into yet - "fake it 'til you make it".) A lot of edge cases get dealt with only in unit tests and with by-eye customer testing. I will work to pass one customer test assertion at a time, running the FitNesse test to get feedback before moving on to the next assertion.

This does lead to three issues:

1. It's not a system test, so there's still more TDD to do after passing the customer's test

2. It produces some duplication of test code, as the customer test will usually ask some of the same questions as the unit tests I write for specific behaviours

3. Even excluding the UI and external dependencies, they still run much slower than a unit test

I solve issue #3 by adapting my FitNesse fixtures to also be JUnit tests that can be run by me as part of continuous regression testing (see an example at https://gist.github.com/jasongorman/74f6a0a049e03b7030ab46e8b01128e7 ). That test is absolutely necessary, because it's typically the only place that checks that we get all of the desired outcomes from a user action. It's the customer test that drives me to wire the objects doing the work together. I prefer to drive the collaborations this way rather than use mock objects, because I have found over the years that an over-reliance on mocks can lead to maintainability issues. I want as few tests as possible that rely on the internal design.
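
The gist linked above has the real code; the shape of the idea, sketched here with invented names, is a Fit fixture whose check is also annotated as a JUnit test, so the same class runs under FitNesse and in the continuous regression suite:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    import fit.ColumnFixture;

    public class DonateDvdFixture extends ColumnFixture {

        // Invented, minimal entry point to the domain model
        static class VideoLibrary {
            private final java.util.Set<String> titles = new java.util.HashSet<String>();
            void donate(String title) { titles.add(title); }
            boolean contains(String title) { return titles.contains(title); }
        }

        private final VideoLibrary library = new VideoLibrary();

        // Input column, bound from the FitNesse table
        public String title;

        // Output column, checked by the FitNesse table
        public boolean dvdIsInLibrary() {
            library.donate(title);
            return library.contains(title);
        }

        // The same check, run by JUnit as part of continuous regression testing
        @Test
        public void donatedDvdShouldBeInLibrary() {
            title = "The Abyss";
            assertTrue(dvdIsInLibrary());
        }
    }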

Being honest, I don't know how to easily solve issue #2. It would require the ability to compose tests so that we can apply the same assertions to different set-ups and actions. I did experiment with an Assertion interface with a check() method, but every assertion ended up needing its own implementation, which just got kerrrazy. I think what's actually needed is a DSL of some kind that hides all of that complexity.

On issue #1, I've long understood that passing an automated customer test does not mean that we're finished. But there is a strong need to separate the concerns of our application's core logic from its user interface and from external dependencies. Most UIs can actually be unit tested, and if you implement an abstraction for the UI logic, the amount of actual code that directly depends on the UI framework tends to be minimal. All you're really doing is checking that logical views are rendered correctly, and that user actions map correctly onto their logical event handlers. The small sliver of GUI code that remains can be driven by integration tests, usually.
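
To sketch what that abstraction might look like (again, all the names here are invented): a logical view interface stands in for the GUI framework, so both the rendering of the view and the mapping of a user action onto its logical event handler can be checked by plain, fast-running unit tests:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class LibraryPresenterTest {

        // Logical view - no GUI framework types in sight
        interface LibraryView {
            void showTitles(List<String> titles);
        }

        // Logical event handler for the "search" user action
        static class LibraryPresenter {
            private final LibraryView view;
            private final List<String> catalogue;

            LibraryPresenter(LibraryView view, List<String> catalogue) {
                this.view = view;
                this.catalogue = catalogue;
            }

            void onSearch(String keyword) {
                List<String> matches = new ArrayList<String>();
                for (String title : catalogue) {
                    if (title.toLowerCase().contains(keyword.toLowerCase())) {
                        matches.add(title);
                    }
                }
                view.showTitles(matches);
            }
        }

        // Fake view records what would have been rendered
        static class FakeView implements LibraryView {
            List<String> shown;
            public void showTitles(List<String> titles) { shown = titles; }
        }

        @Test
        public void searchRendersOnlyMatchingTitles() {
            FakeView view = new FakeView();
            new LibraryPresenter(view, Arrays.asList("The Abyss", "Alien")).onSearch("aby");
            assertEquals(Arrays.asList("The Abyss"), view.shown);
        }
    }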

I don't write system tests to test logic of any kind. The few that I will write - complicated and cumbersome as they usually are - really just check that, once the car is assembled, when you turn the key in the ignition, it starts. A dozen or more "smoke tests" tend to suffice to check that the thing works when everything's plugged in.

So I continue to iterate this chapter, refining the narrative down to "this is how I would do it", but I suspect I will still be dissatisfied with the result until there's a workable solution to the duplication issue.