June 20, 2018

Learn TDD with Codemanship

Design Principles Are The Key To A Testing Pyramid

On the 3-day Codemanship TDD workshop, we discuss the testing pyramid and why optimising your test suites for fast execution is critical to achieving continuous delivery.



The goal with the pyramid is to be able to test as much of our software as possible as quickly as possible, so we can re-test and reassure ourselves that our code is shippable very frequently (i.e., continuously).

If our tests take hours to run, then we can only run them every few hours. Those are hours during which we don't know if the software's shippable.

So the bulk of our automated tests - the base of the testing pyramid - should be fast-running "unit" tests. This typically means tests that have no external dependencies. (That's my working definition of "unit" test, for the purposes of making the argument for excluding file systems, databases, web services and the like from the majority of our tests.)

The purpose of our automated tests is to detect when code is broken. Every time we change a line of code, it can break the software. Therefore we need a test to catch every potential broken LOC.

The key to a good testing pyramid is to minimise the tests that have external dependencies, and the key to that is minimising the amount of code that has external dependencies.

I explain in the workshop how our design principles help us achieve this - and three in particular:

* Single Responsibility
* Don't Repeat Yourself
* Dependency Inversion

Take the example of a module that has a method which:

1. Formats a SQL string using data from a business object
2. Connects to a database to execute that query
3. Unpacks the response (recordset or array) into a business object

To test any part of this logic, we must include a trip to the database. If we break it up into 3 methods, each with a distinct responsibility, then it becomes possible to test 1. and 3. without including 2. That's a third as many "integration" tests.
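To make that concrete, here's a minimal sketch of the split in C# - the class, table and column names are invented for illustration, and parameterised queries are omitted to keep it short:

    using System.Data;
    using System.Data.SqlClient;

    public class Customer
    {
        public int Id { get; }
        public string Name { get; }
        public Customer(int id, string name) { Id = id; Name = name; }
    }

    public class CustomerDao
    {
        private readonly string connectionString;

        public CustomerDao(string connectionString) { this.connectionString = connectionString; }

        // 1. Pure logic: unit-testable with no database at all
        public string FormatSelectSql(int customerId)
        {
            return $"SELECT id, name FROM customers WHERE id = {customerId}";
        }

        // 2. The only method with an external dependency: the "integration" tests live here
        public DataTable Execute(string sql)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(sql, connection))
            {
                var results = new DataTable();
                adapter.Fill(results);
                return results;
            }
        }

        // 3. Pure logic again: unit-testable by handing it a pre-built DataRow
        public Customer Unpack(DataRow row)
        {
            return new Customer((int)row["id"], (string)row["name"]);
        }
    }

Now only Execute() needs a trip to the database; the formatting and unpacking logic can be covered by fast unit tests.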

In a similar vein, imagine we have data access objects, each like our module above. Each can format a SQL string using an object's data - e.g., CustomerDAO, InvoiceDAO, OrderDAO. Each connects to the database to fetch and save that object type's data. Each knows how to unpack the database response into the corresponding object type.

There's repetition in this design: connecting to the database. If we consolidate that code into a single module, we again reduce the number of integration tests we need.
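A rough sketch of that consolidation, again with invented names: one gateway owns the "connect and execute" code, and each DAO keeps only its own formatting and unpacking logic.

    using System.Data;
    using System.Data.SqlClient;

    // The one module that knows how to talk to the database -
    // and the one place that needs integration tests for connection logic
    public class DatabaseGateway
    {
        private readonly string connectionString;

        public DatabaseGateway(string connectionString) { this.connectionString = connectionString; }

        public DataTable Execute(string sql)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(sql, connection))
            {
                var results = new DataTable();
                adapter.Fill(results);
                return results;
            }
        }
    }

    public class Invoice
    {
        public int Id { get; }
        public decimal Total { get; }
        public Invoice(int id, decimal total) { Id = id; Total = total; }
    }

    // CustomerDAO, InvoiceDAO and OrderDAO all delegate to the same gateway
    public class InvoiceDao
    {
        private readonly DatabaseGateway database;

        public InvoiceDao(DatabaseGateway database) { this.database = database; }

        // Still needs the gateway, but there's only one Execute() in the whole design now
        public Invoice FetchById(int id)
        {
            var rows = database.Execute($"SELECT id, total FROM invoices WHERE id = {id}");
            return Unpack(rows.Rows[0]);
        }

        // Pure logic: unit-testable without a database
        public Invoice Unpack(DataRow row)
        {
            return new Invoice((int)row["id"], (decimal)row["total"]);
        }
    }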

Finally, we have to consider the call stack in which database connections are being made. Consider this poor design for a video rental system:



When we examine the code, we see that the methods that have direct external dependencies are not swappable within the overall call stack.



We cannot test pricing a video rental without paying a visit to the external video ratings service. We cannot test rentals without trips to the database, either.

To exclude these external dependencies from a set of tests for Rental, we have to turn those dependencies upside-down (make them swappable by dependency injection, basically).
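Here's a rough sketch of what turning those dependencies upside-down might look like - the interfaces, method names and prices are invented for illustration, not taken from the actual workshop code:

    // Abstractions standing in for the external dependencies
    public interface IRatingsService
    {
        int FetchRating(string videoTitle);   // a web service call in production
    }

    public interface IRentalStore
    {
        void Save(string customerId, string videoTitle, decimal price);   // a database write in production
    }

    public class Rental
    {
        private readonly IRatingsService ratings;
        private readonly IRentalStore store;

        // Both dependencies are injected, so tests can swap in stubs or mocks
        public Rental(IRatingsService ratings, IRentalStore store)
        {
            this.ratings = ratings;
            this.store = store;
        }

        public decimal Rent(string customerId, string videoTitle)
        {
            // Pricing logic is now unit-testable with a stubbed ratings service
            var price = ratings.FetchRating(videoTitle) >= 4 ? 3.95m : 1.95m;
            store.Save(customerId, videoTitle, price);
            return price;
        }
    }

In the unit tests, IRatingsService and IRentalStore are stand-ins (hand-rolled stubs or mock objects); only a handful of integration tests need the real implementations.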



This is often what people mean when they talk about "testable" code. In effect, it means there's enough "swappability" in our design to allow us to test the bulk of the logic by mocking or stubbing external dependencies. The win-win here is that we not only get a better-proportioned testing pyramid, we also get a more flexible design that can more readily accommodate change (e.g., getting video ratings from Rotten Tomatoes instead).


June 10, 2018

Learn TDD with Codemanship

Only This Week - Save Up To 65% On Codemanship Training




For one week only, we’re offering a veritable picnic of on-site code craft training at never-to-be repeated prices.

Save up to 65%, and train your developers in key skills like TDD, refactoring and OO design for as little as £40 per person per day. That's for full, action-packed, hands-on days of code craft training.

Book any Codemanship training course before June 17th and save a whopping 50%. Book all four of our courses and save 65%. That’s a massive £12,000.


Find out more by visiting www.codemanship.com




June 8, 2018

Learn TDD with Codemanship

The Entire Codemanship TDD Course Book - Absolutely Free

Changes are afoot with my code craft training and coaching company, Codemanship, and as part of that, I'm making my 222-page TDD course book available to download as a spiffy full-colour PDF for free.



It covers everything from the basics of Red-Green-Refactor, through software design principles to apply to your growing code, all the way up to advanced topics other TDD books and courses don't reach, like mutation testing, property-based and data-driven testing and Continuous Inspection. Many people who've read the book have commented on how straightforward and to-the-point it is. Shorter than most TDD/code craft books, but covers more, all in practical detail.

Of course, to get the best from the book, you should try the exercises.

Better still, try the exercises with the guy who wrote the book in the room to guide you.





May 25, 2018

Learn TDD with Codemanship

Ever-Decreasing Cycles - I Called It Right

I'm right about something roughly once in a decade, if I'm lucky. Looking back over 13 years of blog posts, I nominate this little gem as a candidate for "That Thing I Called Right", which predicted that - as our computers grew ever more powerful - continuous background code review would become a thing.

The progression seemed perfectly logical. At the time I wrote it, we'd seen the advent of continuous background code compilation, giving us instant feedback when we make silly syntax errors. Younger developers may not be aware of just what a difference that made to those of us who remember when compiling the code involved going away to get a coffee (or lunch, or dinner and a show). So much time saved!

With less brain power dedicated to "does it run?", we were freed up to think about a higher question: does it work? In 2008, continuous background testing tools like Infinitest and JUnitMax were becoming more popular. Today, I see them quite widely used, and can easily foresee us all using them within the next decade.

So we've progressed from "does it run?" to "does it work?" as our computers have increased their processing power, and the next evolution I predicted was to continuously ask "will it be easy to change?" At the time, the majority of code analysis tools took too long to do what they did to be running continuously in the background alongside compilation and functional testing. (There were one or two adventurous experimental tools, but we haven't heard much from them in the meantime.)

With Microsoft's Roslyn compiler, continuous background code review is now finally a thing. We can write code quality checks and build them into the compilation pipeline, creating feedback on things like variable names, method size and complexity, couplings, and all that stuff we care about for maintainability, in real time, as we type the code. I suspect such a capability will be added to other compiler platforms in the next decade or so.
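To give a flavour, here's a minimal sketch of the kind of check that's now possible - the rule ID, the 10-statement threshold and the class name are all invented for illustration:

    using System.Collections.Immutable;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using Microsoft.CodeAnalysis.Diagnostics;

    // Warns about long methods as you type, via the Roslyn analyzer pipeline
    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class LongMethodAnalyzer : DiagnosticAnalyzer
    {
        private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
            id: "CRAFT001",
            title: "Method is too long",
            messageFormat: "Method '{0}' has {1} statements; consider extracting smaller methods",
            category: "Maintainability",
            defaultSeverity: DiagnosticSeverity.Warning,
            isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
            ImmutableArray.Create(Rule);

        public override void Initialize(AnalysisContext context)
        {
            context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
            context.EnableConcurrentExecution();
            context.RegisterSyntaxNodeAction(AnalyzeMethod, SyntaxKind.MethodDeclaration);
        }

        private static void AnalyzeMethod(SyntaxNodeAnalysisContext context)
        {
            var method = (MethodDeclarationSyntax)context.Node;
            var statementCount = method.Body?.Statements.Count ?? 0;

            if (statementCount > 10)
            {
                context.ReportDiagnostic(Diagnostic.Create(
                    Rule, method.Identifier.GetLocation(), method.Identifier.Text, statementCount));
            }
        }
    }

Package that as an analyzer and the warning shows up in the IDE and the build, right alongside the compiler's own errors.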

Sure, it's still early days, and my experiments with it suggest computing power needs maybe one or two more iterations to rise to meet the number-crunching challenge. But - just like for those plucky pioneers who ventured out with Infinitest in its early days - it's here, in a practical form that we can begin using today. There'll be a learning curve. Start climbing it now, is my recommendation.

My hope for continuous background code review is that it will yet again free up our minds to focus on more important questions, like "is this what they really need?"

And that will be a great day for software.


* And, yes, I had hoped I'd been right about high-integrity software becoming mainstream, but interest in that has flat-lined these past 20 years. Maybe next year... Ho hum.



April 28, 2018

Learn TDD with Codemanship

8 Rules of Maintainable Code: A Handy Cut-Out-And-Keep Chart

If you've been on the Codemanship TDD training course, you may vaguely recall the first afternoon when we discuss design principles and how they can shape our code as it emerges.

I posit 8 principles that I ask participants to apply to the exercises, drawing from Simple Design, "Tell, Don't Ask" and S.O.L.I.D. These 8 factors are interrelated, and form a kind of virtuous - if somewhat complex - circle.

Code that's easier to change tends to be easier to test quickly. Fast-running tests make refactoring easier. Which helps us make our code easier to change. And around we go.

We don't do slides on the course (hoorah!), but I'm trying this morning to visualise these 8 principles and how they relate to each other in a single graphic.

There's the simple version:



And this is my latest iteration, to print off and hang on your toilet wall or put on a spiffy t-shirt. All non-profit uses are fine.



Going beyond maintainability, there's also a relationship between Clean code and reliability. Code that can be tested very quickly tends to have far fewer bugs. And code that's simpler and easier to understand is less likely to get broken when we change it. So, it's more of a virtuous triangle, really.




April 11, 2018

Learn TDD with Codemanship

The Foundation of a Dev Profession Should Be Mentoring

What makes something like engineering or law or medicine a "profession"? Ask me 20 years ago, and I'd have said it was standards and ethics, policed by some kind of professional body and/or the law. There are certain things, say, an electronic engineer isn't supposed to do, certain things you can't ask your doctor for, certain things a lawyer would end up in jail for doing.

Ask me today, and my answer would be this: a profession is a community of people following a vocation - like writing software or teaching children - that professes how it works to people who want to learn how to do it.

Experienced school teachers show people who are learning to be school teachers how to teach. They pass on the benefit of their experience, including all the stuff an even more experienced teacher passed on to them.

I still very much believe that standards and ethics must be part of a profession of software development. But I'm increasingly convinced that the bedrock of any such profession would be mentoring. I think of all the time I wasted in my early years of programming, and all the things that would have helped enormously to know back then. Even programming for fun in my teenage bedroom would have been made easier with some basic code craft like unit testing and rudimentary version control.

I was very lucky to be exposed to much more experienced "software engineers" who nudged me firmly in the direction of rigorous user-centred iterative software development, mentioning books I should read, newsgroups I should visit, courses I should go on, and showing me with their day-to-day examples techniques I still apply - and teach - today.

I make it my business today to pass on the benefits of the mentoring I received. And that, to my mind, should be the basis for a profession of software development.

For that to work, though, it's necessary that developers stay developers. "Use it or lose it" has never been more true than in software. I see developers I coached 10 years ago get promoted into management roles - sheesh, I know a lot of CTOs, according to LinkedIn - and quickly lose their coding abilities and fall behind with the technology. Their experience might be invaluable to someone starting out, but it's hard to lead by example if the last programming you did was in Visual C++ 6.0 and your junior devs are working in F#.

So, another pillar of this professional foundation must necessarily be parallel career progression - up to CTO equivalent - for developers. Looking for work for the first time in a decade has left me in little doubt that - with a handful of glorious exceptions that I'm exploring - many employers don't want older (i.e., more expensive) developers, and even the most senior dev roles typically pay a lot less than management equivalents. I meet a lot of senior managers who are reluctantly in these roles because they have big mortgages and school fees to pay. They'd much rather have stayed hands-on. If the best potential mentors are disappearing into meeting rooms all day, it will always be impossible to square this circle.

The idea's been floated before - including by me - but I think it's finally time to start a software developer's guild, with a specific purpose of championing long-term mentoring and parallel career progression for devs who want to stay devs.

Who's with me?




April 6, 2018

Learn TDD with Codemanship

Could Refactoring (& Refuctoring) Help Us Test Claims About Benefits of Clean Code?

One of the more frustrating things about teaching developers about code craft and "Clean Code" is the lack of credible hard evidence from respectable sources about the claimed benefits of it.

Not only does this make code craft a tougher sell to skeptics - and there was a time when I was one of them, decades ago - but it also calls into question whether the alleged benefits are real.

The biggest barrier to doing research in this area has been twofold:

1. The lack of data points. Most software engineering academic studies take data from a handful of projects. If this were, say, medical research, we'd never get our medicines on to the market.

2. The problem of comparing apples with apples. There are so many factors in software development that it's pretty much impossible to isolate one and rule out all others. Studies into the effects of adopting TDD can't account for the variations in experience and ability, for example. Teams new to TDD tend to have to deal with a steep learning curve before they become productive again.

When I consider some of the theories about what makes code harder to change - the central plank of the code craft thesis - for some we have strong evidence to back them up; for others... not so much.

I've had a bit of a brainwave in this area that might help researchers. Take a code base, then specifically vary it along a single dimension. e.g., refactor to remove duplication, or "refuctor" to introduce duplication (by inlining functions and modules). The resulting variants should all be functionally equivalent, but you could fine-grain the levels of variation. Then ask developers to make changes to the logic, and measure how much code had to be edited to achieve those changes. Automated acceptance tests would ensure that every change was logically equivalent.
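To make the idea concrete, here's the sort of single-dimension variation I have in mind (an invented example): two functionally identical pricing classes, one with the duplication refactored out, one "refuctored" by inlining it back in.

    // Variant A - duplication removed: the discount rule lives in one place
    public class PricerA
    {
        public decimal PriceBook(decimal listPrice) { return ApplyDiscount(listPrice); }
        public decimal PriceDvd(decimal listPrice) { return ApplyDiscount(listPrice); }

        private static decimal ApplyDiscount(decimal listPrice)
        {
            return listPrice > 20m ? listPrice * 0.9m : listPrice;
        }
    }

    // Variant B - "refuctored" by inlining: functionally identical, but the rule
    // is duplicated, so changing the discount means editing two methods
    public class PricerB
    {
        public decimal PriceBook(decimal listPrice)
        {
            return listPrice > 20m ? listPrice * 0.9m : listPrice;
        }

        public decimal PriceDvd(decimal listPrice)
        {
            return listPrice > 20m ? listPrice * 0.9m : listPrice;
        }
    }

The same acceptance tests pass against both variants; the experiment would then measure how many edits it takes to change the discount rule in each.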

I can easily envisage how refactoring (and its evil twin, refuctoring) could be used to vary readability, complexity, duplication, coupling and cohesion (e.g., by moving methods between classes to introduce or eliminate feature envy), "swappability" (e.g., by introducing dependency injection, or by undoing dependency inversion with explicit references to concrete implementations of interfaces) and a range of other code qualities. Automated tests could ensure that every variant still works exactly the same way on the outside.

And the tests themselves could be varied. For example, you could manipulate test suite execution time so that in some cases developers had to wait an hour for feedback, while others only need wait seconds for the same feedback.

I think I might be on to something. What do you think?


March 24, 2018

Learn TDD with Codemanship

Code Craft: What Is It, And Why Do You Need It?

One of my missions at the moment is to spread the word about the importance of code craft to organisations of all shapes and sizes.

The software craftsmanship (now "software crafters") movement may have left some observers with the impression that a bunch of prima donna programmers were throwing their toys out of the pram over "beautiful code".

For me, nothing could be further from the truth. It's always been clear in my mind - and I've tried to be clear when talking about craft - that it's not about "beautiful code", or about "masters and apprentices". It has always been about delivering software that works - does what end users need - and that can be easily changed to solve new problems.

I learned early on that iterating our designs was the ultimate requirements discipline. Any solution of any appreciable complexity is something we're unlikely to get right first time. That would be the proverbial "hole in one". We should expect to need multiple passes at it, each pass getting it less wrong.

Iterating software designs requires us to be able to keep changing the code over and over. If the code's difficult to change, then we get fewer throws of the dice. So there's a simple business truth here: the harder our code is to change, the less likely we are to deliver a good working solution. And, as time goes on, the less able we are to keep our working solution working, as the problem itself changes.

For me, code craft's about delivering the right thing in the short-to-medium term, and about sustaining the pace of innovation to keep our solution working in the long term.

The factors involved here are well-understood.

1. The longer it takes us to re-test our software, the bigger the cost of fixing anything we broke. This is supported by a mountain of evidence collected from thousands of projects over several decades. The cost of fixing bugs rises exponentially the longer they go undetected. So a comprehensive suite of good fast-running automated tests is an essential ingredient in minimising the cost of changing code. I see it being a major bottleneck for many organisations, and see the devastating effect long testing feedback loops can have on a business.

2. The harder it is to understand the code, the more likely it is we'll break it if we change it.

3. The more complex our code is, the harder it is to understand and the easier it is to break. More ways for it to be wrong, basically.

4. Duplication in our code multiplies the cost of changing common logic.

5. The more the different units* in our software depend on each other, the wider the potential impact of changing one unit on other units. (The "ripple effect").

6. When units aren't easily swappable, changing one unit can break other units that interact with it.

* Where a "unit" could be a function, a module, a component, or a service. A unit of reusable code, essentially.

So, six key factors determine the cost of changing code:

* Test Assurance & Execution Time
* Readability
* Complexity
* Duplication
* Coupling
* Abstraction of Dependencies

Added to these, a few other factors can make a big difference.

Firstly, the amount of "friction" in the delivery pipeline. I'd classify "friction" here as "steps in releasing or deploying working software into production that take a long time and/or have a high cost". Manually testing the software before a release would be one example of high friction. Manually deploying the executable files would be another.

The longer it takes, the more it costs and the more error-prone the delivery process is, the less often we can deliver. When we deliver less often, we're iterating more slowly. When we iterate more slowly, we're back to my "fewer throws of the dice" metaphor.

Frequency of releases is also directly related to the size of each release. Releasing changes in big batches has other drawbacks, too. Most importantly - because software either works as a whole or it doesn't - big releases incorporating many changes present us with an all-or-nothing choice. If change X is wrong, we now have to carefully rework that one thing with all the other changes still in place. So much easier to do a single release for change X by itself, and if it doesn't work, roll it back.

Another factor to consider, as an aside, is how easy it is to undo mistakes if necessary. If my big refactoring goes awry, can I easily get back to the last good state of the code? If a release goes pear-shaped, can we easily roll it back to a working version, with minimal disruption to our end customer?

Small releases help a lot in this respect, as do Version Control and Continuous Integration. VCS and CI are like seatbelts for programmers: they can significantly reduce lost time if we have a little accident.

So, I add:

* Small & Frequent Releases
* Frictionless Delivery Processes (build-test-deploy automation)
* Version Control
* Continuous Integration

To my working definition of "code craft".

Note that there's more to delivering software than these things. There's requirements, there's UX, there's InfoSec, there's data management, and a heap of other considerations. Which is why I'm careful to distinguish between code craft and software development.

Organisations who depend on software need code that works and that can change and stay working. My belief is that anyone writing software for a living needs to get to grips with code craft.

As software continues to "eat the world", this need will grow. I've watched $multi-billion businesses brought to their knees because their software and systems couldn't change fast enough. As the influence of code spreads into every facet of life, our ability to change code becomes more and more a limiting factor on what we can achieve.

To borrow from Peter McBreen's original book on software craftsmanship, there's a code craft imperative.



March 20, 2018

Learn TDD with Codemanship

Why I Won't Take Automated "Hacker Tests" To Get Job Interviews

I'm back on the contract market - give me a shout if you're in the London area (or looking for remote-workers) and could use a very experienced Java and/or C# bod - and it's been a looong time since I looked for regular work.

Much seems to be the same as it was when I was last contracting: the junior recruiters randomly filtering out candidates because their CV doesn't specifically mention that version of Spring, the dispiriting job ads that effectively say "It's a shitstorm here, but you get foosball and free breakfast!", the banks who make us wait 6 weeks for an interview date, the ever-growing lists of languages, tools and frameworks we're expected to have 500 years experience of. Yep. It's all as I left it.

But there's something new among all this. More and more of us are apparently being asked to take some kind of automated online coding test before the employer will even consider speaking to us. I was asked this week to take a hackerrank test that lasted 90 minutes. The recruiter said the client was "very positive" about my CV. But, it turns out, this step in the recruitment process was non-negotiable.

I have no problem with being asked to demonstrate technical competence. I kind of do it for a living. I code in front of other developers on training workshops, at conferences, on YouTube, and via my Github account. I'm not hiding anything. If you want to pair program with me on a problem to see the cut of my jib, I'm okay with that.

But I draw the line at these online timed tests. The focus on them is necessarily very narrow, for a start. Maths puzzles and algorithms and "stuff about programming languages". That sort of thing. Is this the new "whiteboard interview"? (Flashbacks to interviews where someone wrote some Java on a board and asked "Will that compile?" I'm sorry, I wasn't aware we'd be compiling this software in our heads.)

I think the focus has to be narrow, because there's a limit to what can be scored automatically. Basically, "this is what we know how to measure". I understand that a lot of these tests focus on algorithms. You're asked to solve a problem, and then scored on passing acceptance tests (easy to automate) and execution time (again, easy to automate).

While I agree that passing acceptance tests is kind of important, I worry about the next biggest factor being Big O-style algorithmic efficiency. Maybe my solution is slower, but easier to understand, for example. And it's sneaky, too. If there are performance criteria, we should be told what they are up-front. I'm not in the business of making code faster than it needs to be just as a matter of course.

I also worry about the competition element of some of these tests, especially given the narrow focus. I do not rank "hackers" by their ability to create efficient algorithms alone, or by their in-depth knowledge of Java syntax. Let's measure something else to illustrate what I mean; in my Team Dojo, developers have to work together to solve a set of non-trivial problems. They also score points for passing acceptance tests. And what I've learned from watching hundreds of teams take this test is that individual technical ability is a poor predictor of team performance. Teams of coding ninjas are routinely outperformed by teams of average devs who just worked together better. It's quite inspiring to watch.

My other objection to taking these tests is the time candidates are expected to invest speculatively, just to be considered for interview. If you're on the market for work, you may be making multiple applications every week. What if they all ask you to take one of these tests, just to be considered? This creates a big overhead for candidates. If you're coming to the end of a contract, have young kids at home, or are caring for a relative, or have other time commitments, where are you going to find 90 minutes in your day just to prove that you know LINQ. Every. Single. Time. You. Apply?

I would be in favour of a website where devs can once-and-for-all demonstrate their competence in something. Not every time an employer says "dance for me!" I thought this site was called "Github", but that shows what I know.

But I'm not in favour of this cookie-cutter-one-size-fits-all approach to filtering. I guess my real gripe about being asked by a well-known Agile consultancy to take a hackerrank test is not that they asked, but that there was simply no other way of demonstrating my coding chops that they'd consider.

In discussing this with other developers, it seems as if there's a "horses for courses" situation here. Not everyone codes in their spare time, not everyone has a portfolio of stuff (e.g., on Github) they can point to. Not everyone shines when they're put on the spot. Not everyone likes to take stuff away and work alone. There's no one single way that will give every developer a chance to show us what they can really do.

Perhaps what I'm saying here is that we should let the candidate choose how they demonstrate technical competence. I might say "take a look at my screencasts" or "let me fire up Zoom.us and pair with one of your devs" or "how about I come in and run a little hands-on workshop?" Someone else might want to do the hackerrank test, perhaps because they lack job experience and need to demonstrate some raw ability, or maybe they just get nervous with new people. Someone else might want to do a - gulp - whiteboard interview because they worry they'll mess up coding in front of other people, but can demonstrate how much they've learned.

The point is that I can tell shit from shinola in any of these ways. If you suck, and you have a portfolio, I'll know it from looking at that. If you suck and we pair, I'll know soon enough. If you suck and take a hackerrank test... I'll still want to see the code. But eventually, I'll know. (So might as well look at your Github.) And if you suck and we get around a whiteboard, I'll get it from that, too.

It seems to me that these automated coding tests are an attempt to remove the "it takes one to know one" element from filtering candidates. My contention is that you can't. That kind of machine intelligence is still decades away. Meanwhile, we're stuck with people assessing other people. And it helps enormously if those people know what they're looking at (and what they're looking for.) That's what needs fixing here.

There's no economy of scale in software development. Why would we believe there's economy of scale in software developer recruitment? That's the problem these online tests claim to solve, but - evidently - they haven't. They just filter out experienced candidates like the many developers I've spoken to.

So we might as well let candidates put their best foot forward and let them decide which foot that is.

Otherwise the end result is you filter out a lot of good people who'd be great additions to your team, but who just don't fit in your recruiting process.

Perhaps we need a Dev Recruitment Manifesto?





March 16, 2018

Learn TDD with Codemanship

Lamenting the Golden Age of High-Integrity Software That Never Came

When I was a much younger programmer, I read a paper that had a big impact on the way I thought about software integrity.

Up to then, I - like so many - believed that "software has bugs". It seemed inevitable. Because all the software I'd seen had bugs. And all the software I'd written had bugs. We just have to live with it, right?

And then along came this paper on a thing called Cleanroom Software Engineering, and my mind was blown.

IBM wrote a COBOL pre-compiler that had about 85,000 lines of code and zero bugs reported in production. Not one. Ever. And what really struck me is that - bearing in mind how primitive dev tools were in the 1980s - it only took a team of six, achieving an average dev productivity that was measurably higher than the industry average. Also, the cost of maintaining the product - typically a lot higher than the cost of initial development - was relatively low; just one developer-year per year. Because nobody was bug fixing.

Now, of course, compared to software today 85 KLOC isn't much. But it's not insignificant, statistically. Maybe an equivalent product today would have 20x as much code. But what's 20x zero?

A single paper turned my whole worldview about software integrity (vs. productivity) upside-down. I've been lucky enough to experience this kind of approach - not specifically Cleanroom, but along similar lines - since, and seen the results for myself. Seeing is believing, and - praise Knuth! - I'm a believer!

So you can probably imagine my frustration to see how, 20 years later, the "software has bugs" paradigm still dominates. Who out there is producing very high-integrity code? Vanishingly few. I've waited and waited for high-integrity development techniques to catch on. I've even stirred the pot a few times myself with attempts at training products and talks with various publishers about a book that updates the ideas for the hipster Agile generation. To no avail. Still, vanishingly few are interested.

It's not as if there isn't a compelling business case. More reliable code, for little to no extra cost (you might even save time and money)? Lower maintenance costs? Happier customers? A world of digital stuff we can rely on? What's not to like? It's not as if these techniques are incompatible with Agile, either. I've done both at the same time, for real.

But for every person like me out there selling the dream, there are 10 more actively briefing against it. "Quick and dirty". "Move fast and break stuff". "Perfection is the enemy of good enough." Etc etc etc.

It's an easy sell to managers who don't understand the relationship between quality, time and cost. Cut some corners, get there sooner, save some money. A much harder proposition is "take more care, get there sooner, save some money". Bosses don't believe it. Heck, most devs don't believe it, despite the mountain of strong evidence to back it up.

I still live in hope that - one day - high-integrity software will go mainstream. The tools and techniques are not, despite what you may have heard, rocket science. Most devs are smart, and most devs could learn to do this. I did, so it can't be that difficult.