February 20, 2019
A Tale of Two Agiles

There are few of us left who rode the first wave of what we now call "Agile Software Development" who don't think something has gone very wrong with "Agile".
At the tail-end of the 1990's, it was all about small self-organising teams working closely with their customers to rapidly evolve working solutions and deliver value sooner and more sustainably with technical discipline. Extreme Programming, for example, represented a balance of forces - people, process and technology - all of which had to be addressed to produce valuable working software that met customers' needs today, tomorrow, and on the N+1th day.
Since the signing of the Agile Manifesto at Snowbird, Utah in 2001, that balance has been slowly drifting further and further off-kilter, towards Process. Today, Agile is almost unrecognisable from the small, adaptive set of values and principles celebrated in the manifesto.
Big Process once again dominates. Values and principles be damned. The micromanagers who made teams' lives hell in the 90s are back with new job titles like "Head of Agile", "Head of Delivery" and other heads of things nobody envisaged you could possibly need a head of in 2001.
"Product Managers" act as the new chasm between developers and customers, who are as far removed from each other as ever they were.
Small teams are now organised into major programmes of change and "transformations" under the cosh of "scaled Agile" processes that attempt to achieve conformity where none is either desirable or, frankly, possible. Among the most experienced dev practitioners, these processes have little credibility. By what magic have the scaled Agile wizards solved the problem of large-scale software development? The answer's simple: they haven't. The problem they have solved is how to make Big IT money from something that's supposed to be small.
And the technical side of it is all but forgotten. I recently reviewed more than 100 CVs for an "Agile leadership" role a client was advertising. More than 70% of candidates had never written code. Most of the rest hadn't written code in more than a decade. Pioneers of early lightweight methods were always clear about this: decisions should be made by people with the necessary expertise and experience to make those decisions.
Meanwhile, armies of "Agile Coaches" - who are also mostly without technical experience - guide development teams in the prescribed process. Agile "maturity models" abound. It's big business now.
And for every 10 Agile coaches, you might find one developer guiding teams in the technical practices. Teams get very little help in things like unit testing, TDD, refactoring, CI/CD, architecture & design, etc. I think, from my own experiences working in this space, this is because most managers are from non-technical backgrounds and find things like Scrum, Kanban and SAFe accessible. They see the value in them. They don't understand code craft, and don't understand the value in it. So they don't invest anywhere near as much in it. Our people-process-technology tripod tries to walk with one leg much more developed than the other two, and tips over.
The upshot of all this is that we're back where we started. Oversized teams (and teams of teams), micromanaged in deep command-and-control hierarchies, measured by arcane heavyweight processes. All the emphasis is back on "Did we do it the right way?", and "Did we do the right thing?" is as immaterial as it was when the Unified Process ruled the waves.
None of this would be a problem if organisations also invested seriously in the other two legs of the Agile tripod, especially their People.
None of this would be a problem if every non-technical Agile manager or Agile coach was balanced with an equivalent technical manager or coach.
None of this would be a problem if we could get back to the original vision of small, self-organising teams working directly with customers to solve problems together.
I'm not saying everyone involved has to be a software developer. I'm saying organisations need to restore the balance between those forces to succeed at "delivering sustainable value" through software (to borrow the vernacular).
Whatever it is successful dev teams are doing today, I'm loath to hang the label of "Agile" around its neck. In 2019, that might be considered an insult.
February 19, 2019
When Should New Programmers Be Introduced To Code Craft?

When I look back at my journey to becoming a software developer, and talk to many others who have made similar journeys, it strikes me that there was a point fairly early on when I could really have benefitted from a bit of code craft.
Programs of a few dozen lines like I used to cobble together on my Mattel Aquarius and Commodore 64 probably don't need automated tests or refactoring or a CI server.
But as I grew more confident with code, and the amount of main memory and disk space grew from kilobytes to megabytes, my hobby projects grew too from dozens of lines to hundreds to thousands.
During my coding adolescence I really could have used some automated unit tests. My code really would have benefitted from being simpler and clearer. It really would have helped to break things down into smaller, more modular chunks. It really would have made a difference if my code hadn't repeated itself quite so much. (When I discovered Copy & Paste in Turbo Pascal, I was like a kid who'd found his Dad's gun.) And it would have really, really helped if I'd used version control.
Without those foundational things, my programs experienced growing pains. As with all adolescents, there ought to be a time to take programmers aside and explain the Facts of Code:
1. You'll be spending most of your time trying to understand code, and that includes code you wrote.
2. Code gets real complicated real fast. All the "moving parts" are interconnected. You'd be surprised how many ways just a few lines of code can be broken.
3. Code will need to change many times, mostly because it takes many tries to build something good. It's all one big experiment.
4. Changing code often breaks code, including code you didn't even touch. Like I said, it's all interconnected.
5. The longer our code is broken, the longer it takes to fix. So much so that, after a while, all we seem to be doing is fixing code. That's no fun.
6. There'll be many times you're going to want to go back: to the last time your code worked, to the last time you were happy with the design, to see what was in the code in that version that the user reported a bug in, etc
7. Just because it worked on your computer doesn't mean it'll work on someone else's
8. Just because you think it's good, it doesn't mean end users will
Whether you're coding for fun or for money - above a basic level of complexity - these things are almost always true of the code we write.
Writing computer programs can be fun and rewarding. Burning the midnight oil banging your head against the wall and crying "Why isn't it working?!" or having to write reams of code all over again because you totally borked it and your last back-up was 24 hours ago... not so much.
I would have had more fun and got more done if I'd kept my code clean, if I'd been able to re-test it speedily, and if I'd been able to go back to any point in its evolution easily. And when I started my professional career, I have no doubt at all that I would have written better software, and got more done. I'm 100% sure of that.
So I still firmly believe that code craft should be introduced once programmers have progressed beyond small and simple programs of < 100 lines of code. The aims are twofold: to enable hobbyists to have more fun and waste less time on not-fun activities like debugging and fixing code that didn't need to be broken in the first place, and to produce a new generation of more confident and more capable software developers among those who "go career" (including the millions of people who write code as part of their job but don't consider themselves "software developers" - e.g., scientists and engineers).
And I would introduce code craft in this order:
1. Version Control. This is seatbelts for programmers. If you're planning to take the car out of the drive, it's time to put them on. The ability to go back to any working version of your code is the ultimate Undoability.
2. Unit Tests. Knowing if your code works is really, really useful. The sooner you know you broke the code, the sooner you can fix it, and the easier it is to keep it working. It turns out working code is a good thing. You'd be surprised just how profound an impact fast-running automated tests can have on everything else. Programmers can experiment with the confidence that if their experiment broke the code, they'd know straight away. Fast-running automated tests make us brave. Programming without fear is way more fun. I know this from 13 years programming without unit tests, and 24 years with them. Night and day.
3. Writing Readable Code. And, no, teachers: I don't mean writing more comments in your code. I mean writing code that clearly communicates what it's doing. If you're looking at a block of code and wondering "what does this do?", maybe it should be in a function with a name that tells that story.
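A tiny sketch of the idea (the function and field names are invented for illustration):

```javascript
// Before: a bare conditional that makes the reader ask "what does this check?"
//   if (order.total > 100 && order.customer.joinedYear < 2018) { ... }

// After: the same condition in a function whose name tells the story.
function qualifiesForLoyaltyDiscount(order) {
  return order.total > 100 && order.customer.joinedYear < 2018;
}

const order = { total: 150, customer: { joinedYear: 2016 } };
console.log(qualifiesForLoyaltyDiscount(order)); // true
```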
4. Refactoring Code. The ability to make code easier to understand and easier to change without breaking it is super, super useful. Refactoring is arguably one of the most undervalued skills in programming. A woefully small percentage of professional software developers can do it, and the alarming cost of changing commercial software is the result. I've seen multi-billion dollar businesses brought to their knees by their inability to change their code. Buy me a beer and I might even name some of them.
4. Part II. Basic Principles for Writing Code That's Easy To Change. Hand-in-hand with refactoring is knowing what to refactor, and why. What makes code harder to change?
a. Code that's hard to understand is more likely to get broken by programmers who don't understand it
b. Duplicated code may need the same change made in multiple places, multiplying the cost (and risk) of making that change
c. Complex code is more likely to break because there are more ways for it to be broken
d. Changing one line of code might break other code that's connected to it (and, in turn, code that's connected to that code). We have to manage the connections in our code to localise the impact of making changes. This means our code needs to be effectively modular, built out of parts that:
i. Do one job
ii. Know as little about each other as possible
iii. Can be easily changed or swapped without affecting the rest of the code
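A minimal JavaScript sketch of those three properties (all names invented for illustration):

```javascript
// A part that does one job: format a single report line.
function formatLine(name, amount) {
  return `${name}: £${amount.toFixed(2)}`;
}

// A part that knows as little as possible about its collaborator:
// report() accepts any formatting function with the same signature,
// so the formatter can be changed or swapped without touching report().
function report(items, format) {
  return items.map(item => format(item.name, item.amount)).join('\n');
}

console.log(report([{ name: 'Widgets', amount: 9.99 }], formatLine));
```

Swapping in a different formatter - plain text, CSV, whatever - changes nothing in report() itself, which is the point.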
5. Making It Easy To Get Your Working Code Out Into The World. They say "there's many a slip 'twixt cup and lip", and so many times as a hobbyist programmer (and an early professional developer) I fell at the final hurdle of getting the code I slaved over for hours on my computer to work on other people's computers.
This was because it was a manual process that involved me compiling my code, gathering together all the files my program needed, zipping it all up onto floppy disks (or copying it all across the network), unpacking all that on the target machine, changing some settings on that machine, starting some background programs and services that my program needed, and so on.
As artisanal and cool as "hand-deployed software" might sound, aside from the time this took, installs would often go wrong, and I often found myself saying those immortal words "Well, it worked on my machine!"
The whole process of going from my machine to potentially thousands of target machines needs to be speedy and reliable and - very importantly - reversible. Like all information processes that need to be speedy, reliable and reversible, that means it needs to be completely automated.
Programmers would benefit greatly from automating the building, testing (to make sure it really does work on other machines) and deployment of their programs to whatever target platforms they need them to work on. This will enable them to release their code often, which means they can get useful updates and fixes to their end users sooner and they can learn from the feedback faster. This is the secret sauce of software development. It's a learning process. Most of the value in code is added as a result of user feedback. More feedback, more often, means our programs improve faster, and we mature as programmers sooner.
It also helps us to engage with the people who use our programs more frequently and build those communities more effectively. This is the final way in which code craft adds to the fun...
Another hobby I had as a kid that followed me into adult life is playing the guitar. Playing the guitar in your bedroom is fun. Playing guitar in a band in front of real audiences is really fun. And a little scary, of course. But until you do it in front of other people, it's very hard to know if you're doing something good. The Beatles didn't get great in their bedrooms. They got great by playing hundreds and hundreds of gigs. They'd probably played thousands before their first single was released.
Now, please remember: I'm not suggesting that all people who try programming must learn code craft. Most people will give it a try, maybe get something out of it, but then leave it there. Just as most people who try the guitar learn a few chords and are perfectly happy leaving it at that.
But a proportion who try coding will really enjoy it, and they'll naturally get more ambitious with their projects. These are the people who need a bit of code craft to make that journey easier and ultimately more rewarding. Fun requires the fundamentals.
Teachers and code club organisers should be looking for the signs, identifying who these people are, and encouraging them to learn about code craft. As a profession, we should be ready to help in any way we can.
February 16, 2019
"Oh, really?" I reply. Let's enumerate the 8 principles for Clean Code I teach on Codemanship training courses and see if that's true.
1. It has to work. Can we at least agree that code should do what it's supposed to, regardless of the programming paradigm? Thanks. Tick.
2. It must be easy to understand. Again, that's paradigm-agnostic, is it not? Tick.
3. It should be low in duplication - unless removing the duplicate code makes it harder to understand. No objects here. Tick.
4. It should be made from simple parts. That could mean simple classes and simple methods, or it could mean simple functions. Tick.
5. Single Responsibility. The accepted definition of the SRP is "Classes should only have one reason to change", but it doesn't take a genius to extrapolate the need to build our software out of parts that do one job. That could mean classes, but it could mean functions or modules. The design benefits of composing our software out of single-purpose functions (which all pure functions should be) are the same as composing it out of objects that only do one job. A function abc() that does three jobs can only be used one way, but a(), b() and c() could be combined 15 ways: a(), b(), c(), a() then b(), a() then c(), b() then a(), and so on. The possibilities are greatly increased.
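A quick sketch of that composability in JavaScript (the three functions are trivial placeholders):

```javascript
// Split into single-purpose functions, the parts compose many ways.
const a = x => x + 1;
const b = x => x * 2;
const c = x => x - 3;

console.log(b(a(1)));    // a then b: (1 + 1) * 2 = 4
console.log(a(c(b(5)))); // b, then c, then a: ((5 * 2) - 3) + 1 = 8
console.log(c(10));      // just c: 10 - 3 = 7
```

A single function that did all three steps in one lump could only ever be used in that one order.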
Say we had a function that fetches an IMDB rating from a web API and then calculated a rental price for that title based on the rating.
What if we want to use the IMDB rating in other functions? With this non-SRP-compliant code, we can't. If we refactor fetching the rating into its own function, we get an increase in composability.
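A sketch of what that refactoring might look like in JavaScript (the API URL and the pricing rule are invented for illustration):

```javascript
// Before: one function does two jobs - fetching the rating and pricing.
async function priceTitle(title) {
  const response = await fetch(`https://api.example.com/imdb?title=${title}`);
  const rating = (await response.json()).rating;
  return rating > 7 ? 4.99 : 2.99;
}

// After: fetching the rating is its own function, usable from any other code...
async function fetchRating(title) {
  const response = await fetch(`https://api.example.com/imdb?title=${title}`);
  return (await response.json()).rating;
}

// ...and pricing becomes a separate, simpler job.
function price(rating) {
  return rating > 7 ? 4.99 : 2.99;
}
```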
6. Swappable Dependencies (Open-Closed + Dependency Inversion + Liskov Substitution). Again, I think what confuses folk here is the explicit use of the word "class" in the definitions of these principles. Reworded, they make much more sense. If we said that modules and functions should be closed for modification, for example, then we have a principle that makes sense. If we said that high-level functions should not depend directly on low-level functions, again, that makes sense. If we said that we should be able to substitute a function with another implementation of that function (e.g., a function that calls a web service with a stub implementation), then that also makes sense. More generally, can we add or replace functions without modifying existing functions?
This leads us on to the mechanism by which we make dependencies easily swappable: dependency injection. And this might be the root of the misunderstanding. OO terminology has dominated discussions about dependencies for so long that I think maybe some programmers only recognise "dependencies" in OO terms. That is to say, a function using another function isn't a dependency. But, of course, it is. 100%. If a() calls b() and I rename b() to c(), then a() breaks.
Back to our movie rental pricer: what if we want to unit test the pricing logic without making a web API request?
A refactored design injects that dependency as a function parameter, making it composable from the outside (e.g., from the test code).
Now we can stub the IMDB rating and turn our integration test into a unit test that executes much faster.
(NB: Of course, we could make the price() function pure by removing any dependency on the IMDB API and just passing in a rating. But to illustrate making functional dependencies swappable, I haven't done that.)
So, swappable dependencies: Tick.
7. Client-Specific Interfaces. If we're working in a functional style, each function is equivalent to an interface with just one method. So this doesn't really apply in FP. But the intent of the Interface Segregation Principle is that modules shouldn't depend on things they're not using. When I review JS code, one of my bugbears is unused imports. In the cut and thrust of coding, dependencies change, and not all developers are fastidious about cleaning up their imports as they change.
Let's say after I introduced dependency injection for the fetch_rating() function I forgot to remove the import for that module from pricer.js:
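Something like this hypothetical pricer.js (the leftover import is shown as a comment here so the sketch runs standalone):

```javascript
// pricer.js - a hypothetical sketch. The import below is leftover from
// before the refactoring: nothing in this file uses it any more, but if
// imdb_api.js were renamed or moved, pricer.js would still break.
//
//   const fetchRating = require('./imdb_api');

function price(title, fetchRating) {
  return fetchRating(title) > 7 ? 4.99 : 2.99;
}
```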
If the name of the imported module changes (or the file moves), then pricer.js is broken. So, the functional, dynamic equivalent of the ISP might be "Modules shouldn't reference things they don't use."
8. Tell, Don't Ask. This is often oversimplified to "don't use getters and setters", which is why it's typically interpreted as an object oriented design principle. But the spirit of encapsulation lives in all modular programming paradigms, including functional. Our aim is to have our modules and our functions know as little about each other as possible. They should present the smallest surface area through which clients interact with them. Function parameters can be thought of as setters. Every extra parameter creates an extra reason why the client might break.
Imagine we have a function for fetching scalar values from database tables, for example. It requires information to connect to the right database, like its URL, a user name and password, the name of the table, the name of the column, and the unique key of the row that contains the data we want. That's a lot of surface area!
If this were a class, we could provide most of that information in a constructor, leaving pricer.js with little it needs to know. In functional programming, we do have an equivalent: closures. I can create an outer function that accepts most of those parameters, and an inner function that just needs the unique key for the required row.
Now, with the introduction of dependency injection, I can construct this closure outside of pricer.js - e.g., in my test code:
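A sketch of that closure in JavaScript, with an in-memory stub standing in for the real database (all names invented):

```javascript
// The outer function accepts the connection details; the closure it
// returns needs only the unique key of the row.
function makeRatingFetcher(connect, table, column) {
  return key => connect().lookup(table, column, key);
}

// In test code, "connect" to an in-memory stub instead of a real database.
const stubDb = {
  lookup: (table, column, key) => (key === 'Jaws' ? 8.1 : 5.0)
};
const fetchRating = makeRatingFetcher(() => stubDb, 'movies', 'imdb_rating');

// pricer.js now sees just: key => rating. It knows nothing about
// URLs, credentials, tables or columns.
console.log(fetchRating('Jaws'));
```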
And pricer.js is presented with a function signature that requires it to know a lot less about how fetching scalar values from database tables works. In fact, it knows nothing about that. It just knows how to get IMDB movie ratings.
And, yes, that would be swappable with the function that fetches ratings from the IMDB web API. So it's a Win-Win.
So, to recap, with a tiny amount of reframing, the eight design principles I teach through Codemanship's code craft courses most definitely can be applied to functional code. FP isn't magic. Even pure FP doesn't, purely by itself, solve all your correctness, readability, duplication and complexity problems. And it most certainly doesn't eliminate the need to manage dependencies.
The irony is I can remember - once upon a time - programmers telling me about Dijkstra and Parnas etc: "These design principles don't apply to us, because we do object oriented programming".
February 8, 2019
10 Years of Codemanship

2019 marks the 10th anniversary of Codemanship, my training and coaching company for developers.
Since 2009, I've trained more than 3,000 developers on public courses and client-site workshops, and hundreds of thousands more have downloaded the free tutorials, watched the free screencasts and read the TDD course book online.
I've been lucky enough to work with some great clients, including the BBC, Sage, UBS, Elsevier, John Lewis, Samsung, ASOS, Ordnance Survey and many more great companies, and worked all over the UK, as well as in Ireland, France, Germany, Spain, Portugal, the Netherlands, Norway, Sweden, Finland, Romania and Poland.
I'm also proud to have started the original Software Craftsmanship conference that inspired many more, to have - with your help - raised tens of thousands of pounds for Bletchley Park, and for maths and coding clubs. I've produced a sell-out West End comedy show, and even collaborated on an album that made it into Amazon's dance top 20!
In this time of rapid change and uncertainty, who knows what the next 10 years of Codemanship may bring. One thing's for sure: it won't be dull!
To celebrate 10 years of Codemanship, I'm offering 10 exclusive 1-day code craft training workshops to be run on client sites.
You can choose from:
- Software Design Principles
- Unit Testing
And any one of 10 Tuesdays or Thursdays coming up between now and Brexit (if you're in the EU, get 'em while you can!!!)
Just pick your date, and choose your course, whip out the company credit card, and Bob's your uncle. (Payment by invoice is also available.)
Each 1-day workshop costs £1,795 for up to 20 people - that's as little as £90 per person. (Normal price for a 1-day code craft workshop is £3,995.)
February 5, 2019
Evolutionary Design - What Most Dev Teams Get Wrong

One of the concepts a lot of software development teams struggle with is evolutionary design. It's the foundation of Agile Software Development, but also something many teams attempting to be more agile get wrong.
Evolution is an iterative problem solving algorithm. Each iteration creates a product that users can test and give feedback on. This feedback drives changes to improve the design in the next iteration. It may require additional features. It may require refinements to existing features.
To illustrate, consider the evolution of the guitar.
The simplest design for a guitar could be a suitably straight stick of wood with a piece of string fastened taut at both ends, with some kind of container - like a tin can - to amplify the sound it makes when we pluck the string.
That might be our first iteration of a guitar. Wouldn't take long to knock up, and we could probably get a tune out of it.
Anyone who's tried playing that kind of design will probably have struggled with fretting the correct notes, so maybe in the next iteration we add dots to the stick to indicate where key notes should be fretted.
Perhaps in the next iteration we take strips of metal and embed them in our stick to make fretting even easier and more accurate.
In the next iteration, we might replace the stick with a plank and add more strings, tuned at different musical intervals so we can play chords.
We might find that, with extensive use, the strings lose their tautness and our guitar goes out of tune, so we add a way to adjust the tension with "tuners" at the far end of the plank. Also, occasionally, strings break and we need to be able to replace them easily, so we make it so that replacement strings can be fastened to a "bridge" near the can.
Up close, our guitar sounds okay. But in a larger venue, it's very difficult to hear the sound amplified by the tin can. So we replace that with a larger resonating chamber: a cigar box, perhaps.
Travelling extensively with our cigar-box guitar, we realise that it's not a very robust design. So maybe we can recreate the basic design concepts in a better-crafted wooden neck and body, with properly engineered hardware for the bridge and the tuners. And perhaps it's time to move from using strings to something that will last longer and stay in tune better, like thin metal wires.
News of our guitar has spread, and we find ourselves playing much larger venues where - even with the larger resonating chamber - it's hard to be heard over the rest of the band. For a while we use a well-placed microphone to amplify the sound, but we find that restricts our movement and prevents us from doing all the cool rock poses we've been inventing. So we create "pickups" that generate an electrical signal when the metal strings move within their magnetic field at the frequency of the fretted note. That signal is then sent to an amplifier that can go as loud as we need.
What we find, though, is that the resonance of our guitar generates a lot of electronic feedback. We realise that we don't actually need a resonating chamber any more, since the means by which we're now generating musical tone is no longer acoustic. We could use a solid body instead.
The pickups are still a bit noisy, though. And the strings still go out of tune over an hour or more of playing. So we develop noiseless pickups, and invent a bridge that detects the tuning and autocorrects the tension in the strings continuously, so the guitar's always in tune.
Then we add some cool LED lights, because rock and roll.
And so on.
The evolution of the guitar neatly illustrates the concept of iterative design. We start with the simplest solution possible, play it, and see how it can be improved in the next iteration of the design. Each iteration may add a feature (e.g., add more strings), or refine an existing feature (e.g., make the neck wider) to solve a problem that the previous iteration raised.
Very importantly, though, every iteration is a working solution to the headline problem. Every iteration of the guitar was a working guitar. You could get a tune out of it.
The mistake many teams make is, instead of starting with the simplest solution possible and then iteratively improving on it to solve problems, they start with a concept for a complex and complete solution and incrementally work their way through its long feature list.
Instead of starting with a stick, a string and a tin can, they set out to build a Framus Stormbender high-end custom guitar with all the bells and whistles like locking tuners, an Evertune bridge, noiseless Fishman Fluence pickups and a fretboard that lights up (because rock and roll).
This is not iterative, evolutionary design. It's incremental construction of a completed design. The question then is: do we really need the locking tuners? Do we really need the Evertune bridge? Do we really need the Fishman Fluence pickups? Because the Stormbender is a very high-spec guitar, and that makes it very expensive compared to, say, a perfectly usable standard Fender Stratocaster.
The emphasis in evolutionary design must be on solving problems. We're iterating towards the right solution, improving with each pass until the design is good enough for our needs. Each iteration is therefore defined by a goal (ideally one per iteration), not by a list of features. Make it so you can play a tune. Make it so it's easy to fret the right notes. Make it so you can adjust the tuning. Make it so you can play chords. Make it so you can hear it in a large room. Make it so it doesn't fall to pieces in transit. Make it so it can be heard above the drums. Make it so there's less feedback. Make it so it's always in tune. And so on and so on.
Of course, when Framus construct a Stormbender, they don't start with a stick and a piece of string. They incrementally construct it, because they already know what the finished design is.
And when they designed the Stormbender, they didn't start with a stick and a piece of string, either. They started with the benefit of hundreds of years of guitar design progress and many problems pre-solved. Likewise, I don't start every software product with "First, I'm going to need an AND gate" and work my way up from there. Many of the problems have already been solved. When Google set out to create their own operating system, they didn't start by creating a simple BASIC interpreter. Many of the problems had already been solved. They started where others left off and solved new problems for the mobile age.
My point is that the process of solving those problems was evolutionary. Computing didn't start with Windows 10. It started with basic logical operations on 1s and 0s. Likewise, when we're faced with problems for which there are no pre-made solutions, we start with the simplest solution we can think of and iteratively improve on that until it's good enough for our needs.
January 9, 2019
Team Craft - New Codemanship Workshop for 2019

I'm delighted to announce the launch of a new Codemanship workshop for 2019. Team Craft is a one-day workshop for 6-12 people that stretches your skills in collaborative design and continuous delivery.
It's the product of 10 years running team exercises as part of Codemanship training workshops, as well as at conferences and meet-ups around the world. These experiences have been consolidated and refined into an action-packed day that will turbo-charge the Team Fu of new and established teams alike.
It promotes technical team skills like:
- Collaborative Design
- Mob Programming
- Trunk-based Development
- Continuous Integration and Continuous Delivery
Team Craft is a completely hands-on workshop that reaches the parts other workshops don't.
Over the course of the day, participants will design and deliver a working solution to a realistic real-world problem as a team.
From a standing start, they will:
- Break down the requirements
- Assign work to individuals and/or pairs
- Choose the technology stack
- Set up version control
- Set up a CI server to build and test the solution as it takes shape
- Agree on a high-level design and establish the interfaces and contracts between components/services
- Test-drive the implementation of the components
- Demonstrate a working end product to their "customer" (the trainer)
The emphasis is on learning by doing: a 45-minute introduction offers key tips on how to succeed as a team, followed by the 5-hour practical exercise, and we end with a 45-minute retrospective where we draw out the lessons learned.
To find out more, visit http://codemanship.co.uk/teamcraft.html
December 17, 2018
The Santa Present Delivery Route Optimisation Kata

The holiday season is upon us, and one of the upsides of running a training company is that - while your clients run themselves ragged trying to hit seasonal deadlines - you get to relax and enjoy the inverse level of busyness.
This gives me time to imagine a fun but challenging new code kata: The Santa Present Delivery Route Optimisation Kata. It's a TDD challenge, but also a Third-Generation Testing challenge, as you'll see.
Santa has to visit the ten biggest cities in the US, listed here with their populations and coordinates:
Santa's sleigh travels at a constant speed of 3,000 km per hour between cities, and the delivery of each present takes him 0.001 seconds (i.e., he can deliver 3.6 million presents an hour in the same city.)
He has between midnight and 6 a.m. to deliver as many presents in the US as possible. He can start in any of the 10 cities.
Using this information, create a program that will efficiently calculate the route that delivers the greatest number of presents in those 6 hours.
Using parameterised tests, check your program's solution against the result of an exhaustive search of all possible routes between the 10 cities (assuming each city is visited only once).
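The kata's city table hasn't survived in this copy, so the sketch below uses a small set of invented cities - the names, populations and coordinates are placeholders, not the real top-ten data - while keeping the kata's rules: a constant sleigh speed, 0.001 seconds per present, and each city visited at most once. It shows the exhaustive search you'd check a cleverer solution against.

```python
import itertools
import math

SPEED_KMH = 3000.0
SECONDS_PER_PRESENT = 0.001
DELIVERY_WINDOW_SECONDS = 6 * 3600

# Placeholder cities: (name, population, latitude, longitude).
# The real kata uses the ten biggest US cities.
CITIES = [
    ("A", 2_000_000, 40.7, -74.0),
    ("B", 1_500_000, 34.0, -118.2),
    ("C", 1_000_000, 41.9, -87.6),
    ("D", 800_000, 29.8, -95.4),
]

def distance_km(a, b):
    """Great-circle distance between two cities (haversine formula)."""
    _, _, lat1, lon1 = a
    _, _, lat2, lon2 = b
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def presents_delivered(route):
    """Total presents delivered along a route within the 6-hour window."""
    seconds_left = DELIVERY_WINDOW_SECONDS
    total = 0
    previous = None
    for city in route:
        if previous is not None:
            # Travel time to the next city eats into the delivery window.
            seconds_left -= distance_km(previous, city) / SPEED_KMH * 3600
        if seconds_left <= 0:
            break
        # Deliver one present per inhabitant, capped by the time remaining.
        deliverable = int(seconds_left / SECONDS_PER_PRESENT)
        delivered = min(city[1], deliverable)
        total += delivered
        seconds_left -= delivered * SECONDS_PER_PRESENT
        previous = city
    return total

def best_route(cities):
    """Exhaustive search: score every possible ordering of the cities."""
    return max(itertools.permutations(cities), key=presents_delivered)

route = best_route(CITIES)
print([city[0] for city in route], presents_delivered(route))
```

With ten cities there are 10! = 3,628,800 orderings, so the exhaustive search is still feasible as a test oracle - which is exactly the role the parameterised tests give it here.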
Now generalise your solution for the following variables:
1. The list of cities
2. The amount of available delivery time
3. The speed of Santa's sleigh
4. The number of concurrent Santas
December 10, 2018
The Gaps Between The Tools
On a training course I ran for a client last week, I issued my usual warning about immediately upgrading to the latest versions of dev frameworks. Wait for the ecosystem to catch up, I always say.
This, naturally, drew some scepticism from developers who like to keep bang up-to-date with their toolsets. On this occasion, I was vindicated the very next day when one participant realised they couldn't get Cucumber to work with JUnit 5.
I abandoned an attempt to be bang up-to-date using JUnit 5 for a training workshop in Sweden the previous month. The workshop was all about "Third-Generation Testing", which required integration between unit testing and a number of other tools for generating test cases, tracking critical path code coverage, and parallelising test execution. I couldn't get any of them to work with the new JUnit Jupiter model.
So I reverted to safe old JUnit 4. And it all worked just spiffy.
No doubt, at some point, the tools ecosystem will catch up with JUnit 5. But we're not there today. So I'm sticking with JUnit 4. And NUnit 2.x. And .NET Framework 3.5. And the list goes on and on. Basically, take the latest version, subtract 1 from the major release number, and that's where you'll find me.
For sure, the newer versions have newer features, which may or may not prove useful to me. But I'm more concerned about the overall development workflow, and that's why compatibility and interoperability mean more to me than new features.
We're notoriously bad at building dev tools and frameworks that "play nice" with each other. Couple that with a gung-ho attitude to backwards compatibility, and you end up with a very heterogeneous landscape of tools that can barely co-exist within much wider development practices and processes. It's our very own Tower of Babel.
In other technical design disciplines, like electronic engineering, tool developers worked hard to make sure things work together. Simulation tools plug seamlessly into ECAD tools, which talk effortlessly with manufacturing tools and even accounting solutions to provide a relatively frictionless workflow from initial concept to finished product. The latest release of your ASIC design tool may have some spiffy new features, but if it won't work with that expensive simulator you invested in, then upgrading will just have to wait.
Given that many of us are engaged professionally in integrating software to provide our customers with end-to-end processes, it's surprising that we ourselves invest so little in getting our own house in order.
Looking at the average software build pipeline, it tends to be a Heath Robinson affair of clunky adaptors, fudges and workarounds to compensate for the fact that - for example - every test automation tool produces a different output for what is essentially the exact same information. And it boggles the mind why we need 1,001 different adaptors to run the tests in the first place; every combination of build tool + test framework imaginable.
If test automation tools all supported the same basic command interface, and produced their outputs in the same standard formats, we could focus on the task in hand instead of wasting time reinventing the same plumbing over and over again. JUnit 5 would already work with Cucumber. No need to wait for someone to patch them back together again.
And if you're a tool or framework developer protesting "But how will the tools evolve if nobody upgrades?", my advice is to stop breaking my workflows when you release new versions. They're more important than your point solution.
I vote we start focusing more on the gaps between the tools.
December 9, 2018
Big Dependency Problems Lie In The Small Details
Just a quick thought about dependencies. Quite often, when we talk about dependencies in software, we mean dependencies between modules, or between components or services. But I think perhaps that can blinker us to a wider and more useful understanding.
A dependency is a relationship between two pieces of code where a change to one could break the other.
If we consider these two lines of code, deleting the first line would break the second line. The expression x + 2 depends on the declaration of x.
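The original snippet isn't reproduced in this copy, but from the names in the text it would be a pair of lines along these lines (a minimal sketch; only x and the expression x + 2 come from the post):

```python
x = 1       # a declaration...
y = x + 2   # ...and an expression that depends on it
```

Delete the first line and the second refers to a name that no longer exists - the change to one breaks the other.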
Dependencies increase the cost of changing code by requiring us, when we change one thing in our code, to then change all the code that depends on it. (Which, in turn can force us to have to change all the code that depends on that. And so on.)
If our goal is to keep the cost of changing code low, one of the ways we can achieve that is to try to localise these ripples so that - as much as possible - they're contained within the same module. We do this by packaging dependent code in the same module (cohesion), and minimising the dependencies between code in different modules (coupling). The general term for this is encapsulation.
If I move x and y into different classes, we necessarily have to live with a dependency between them.
Now, deleting x in Foo will break y in Bar. Our ripple spreads across the class boundary.
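Sketched in code (the class names Foo and Bar and the members x and y come from the text; everything else is assumed):

```python
class Foo:
    x = 1

class Bar:
    y = Foo.x + 2   # Bar now depends on Foo across the class boundary
```

Remove x from Foo and Bar fails the moment it's loaded: the same ripple as before, now crossing a class boundary.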
Of course, in order to organise our code into manageable modules, we can't avoid having some dependencies that cross the boundaries. This is often what people mean when they say that splitting up a class "introduces dependencies". It doesn't, though. It redistributes them. The dependency was always there. It just crosses a boundary now.
And this is important to remember. We've got to write the code we've got to write. And that code must have dependencies - unless you're smart enough to write lines of code that in no way refer to other lines of code, of course.
Remember: dependencies between classes are actually dependencies between the code in those classes.
As we scale up from modules to components or services, the same applies. Dependencies between components are actually dependencies between modules in those components, which are actually dependencies between code inside those modules. If I package Foo in one microservice and Bar in another: hey presto! Microservice dependencies!
I say all of this because I want to encourage developers, when faced with dependency issues in large-scale architecture, to consider looking at the code itself to see if the solution might actually lie at that level. You'd be surprised how often it does.
December 8, 2018
True Agile Requirements: Get It Wrong Quickly, Then Iterate
I'm going to be arguing in this post that our emphasis in the software design process tends to be wrong. To explain this, I'm going to show you some code. Bear with me.
This is a simple algorithm for calculating square roots. It's iterative. It starts with a very rough guess for the square root - half the input - and then refines that guess over multiple feedback cycles, getting it progressively less wrong with each pass, until it converges on a solution.
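The post's listing isn't shown in this copy, but an algorithm matching the description - an initial guess of half the input, refined over repeated feedback cycles until it converges - might look like this (a sketch using Heron's method; the initial_guess parameter and iteration counter are mine, added to make the point about starting guesses easy to see):

```python
def square_root(n, initial_guess=None, tolerance=1e-9):
    """Heron's method: start with a rough guess, then refine it
    over repeated feedback cycles until it converges."""
    guess = initial_guess if initial_guess is not None else n / 2
    iterations = 0
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2   # each pass gets less wrong
        iterations += 1
    return guess, iterations

root, steps = square_root(100.0)
wild_root, wild_steps = square_root(100.0, initial_guess=100.0 * 1_000_000)
print(root, steps, wild_root, wild_steps)
```

Even starting from a guess a couple of million times too big, the loop converges on the same answer; it just takes a handful of extra iterations.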
I use this algorithm when I demonstrate mutation testing, deliberately introducing errors to check whether our test suite catches them. When I introduce a small error into the line that makes the initial guess, the tests still pass. In fact, I can change the initial guess wildly, multiplying the input by a million instead of halving it.
And the tests still pass. They take a little longer to run is all. This is because, even with an initial guess 2 million times bigger, it just requires an extra few iterations to converge on the right answer.
What I take from this is that, in an iterative problem solving process, the feedback loops can matter far more than the initial input. It's the iterations that solve the problem.
When I see teams, including the majority of agile teams, focusing on the initial inputs and not on the feedback cycles, I can't help feeling they're focusing on the wrong thing. I believe we could actually start out with a set of requirements that are way off the mark, but with rapid iterating of the design, arrive at a workable solution anyway. It would maybe take an extra couple of iterations.
For me, the more effective requirements discipline is testable goals + rapid iterations. You could start with a design for a word processor, but if your goal is to save on heating bills, and you rapidly iterate the design based on customer feedback from real-world testing (i.e., "Nice spellchecker, but our gas bill isn't going down!"), you'll end up with a workable smart meter.
This is why I so firmly believe that the key to giving customers what they need is to focus on the factors that affect the speed of iterating and how long we can sustain the pace of evolution. The cost of changing software is a big factor in that. To me, iterating is the key requirements discipline, and therefore the cost of changing software is a requirements issue.
Time spent trying to get the spec right, for me, is time wasted. I'd rather get it wrong quickly and start iterating.