March 8, 2014
Blind Developer Interviews Through Anonymised Remote Pairing - An Experiment
On Monday, I'll be conducting a recruitment experiment for a brave and progressive client who, sadly, wishes at this point to remain anonymous. Which, coincidentally, is exactly how this experiment is intended to work.
I will be pairing remotely, as I often do for clients, with 6 candidates via That Skype That They Have Nowadays. Each candidate will log in at a fixed time to a stock Skype account we've set up especially for this experiment. I will not know who they are, what they look like, where they're from, when they were born, what experience or qualifications they have, or even - thanks to instant messaging - what they sound like.
I'll be completely blind except for what happens in the code, and what they communicate via IM. They are under strict instructions to give nothing personal away during this process. The transcripts of chats will be forwarded to the client along with my interpretation of how the session went, and a video of my desktop while it was happening.
The design of the experiment is the culmination of a year's thought and research on my part, and it will be interesting to find out what happens.
I will not know the gender, the age, the educational background, the ethnicity, the location or the haircuts of any of the candidates as we pair. All I will have to go on - if they stick to the rule about revealing nothing personal via text - is what they are like when they are programming.
Of course, the client will know all of these things, having selected this shortlist. The second part of the experiment will be blind job advertising, which will be much harder because we'll be relying on candidates to follow strict instructions when applying.
Their CVs will need to be scrubbed of all personal information and presented in a distinctly sterile fashion, listing just their key skills, rated by how good they think they are. They will be expected to tackle a simple 1-2 hour programming problem that we hope will weed out the obvious blaggers. The results of that, along with the anonymised CV, will be the basis for shortlisting candidates.
Naturally, we are mindful that, at some point, we'll need to know more about them. For example, have they contributed to open source projects we can go and look at? Do they have a blog? That sort of thing. So the process can't be completely blind all the way along.
But the hope is that by whittling down candidates purely on technical skill first it will be harder to ignore a great developer for the wrong reasons.
February 25, 2014
Why Code Inspections Need To Be Egalitarian
The debate rages on about Uncle Bob's blog post advocating what he calls a "foreman" on software development teams who takes responsibility for the quality of commits by team members.
Rob Bowley of 7digital goes on to suggest that he no longer needs to inspect code quality, choosing instead to measure the effects of code quality on the reliability, frequency and sustainability of releases. This echoes a discussion I had with Rob a few years ago, when he showed me the suite of metrics he'd published - a brave move - on 7digital's software development performance. Would that other software organisations were prepared to be so transparent.
The assertion goes back to my "Software Craftsmanship Imperative" keynote that was doing the rounds in 2010-2011.
The point is this: your customer or boss is not going to care about "clean code", no matter how much you try to persuade him or her. I've always maintained that craftsmanship is not an end in itself. There's a reason why code quality is important, and I've learned from bitter experience that you need to focus on that reason with people who aren't writing the code.
Having said that, smart developers who understand the causal link will inspect their code continuously and be ever-vigilant about things that might hinder progress later. So it's quite common to find practices like pair programming, code inspections, static analysis and all that malarkey going on in high-functioning teams.
But let's be clear about who the audience is for these two distinct but closely related pictures: code metrics etc. are for the people writing the code, providing early warning of maintainability "bugs" and serving as a tool for learning to write more maintainable code. Release statistics are for everyone (including developers) who cares about the sustainability of innovation.
To use another broken metaphor, code quality is data about the working of your engine, whereas release statistics are about the progress of your journey. High-functioning teams build a picture that can show how tinkering with the engine can improve progress on a long journey, which most software development turns out to be (even when the boss insists it's just going to be a short trip to the shops.)
My own experiences of being asked by managers to impose the wrong kind of picture on the wrong kind of audience have made me extremely wary of doing that, especially in the last 4-5 years.
Most importantly, I've learned that - when it comes to inspections and code quality - you can lead a horse to water, but you can't make it report untested code. The developers have got to want the information, because they believe it will help them, and have got to seek it for themselves. This is perhaps exemplified by the experimental TDD "apprenticeship" scheme we ran at the BBC in their TV Platforms team.
The same applies if one person on the team (call him/her a "foreman" if you like) tries to impose such a regime on the other - probably less willing - members. It just doesn't work.
Not only are the team likely to resent having their code's pants pulled down in such a manner for all to see, but - if they've not been paying attention to code quality as much as they should - the picture revealed is likely to dishearten them and damage team morale.
Once you've handed their code its arse, what then? So now they know it sucks. What are they going to do about it? Do they want to do anything about it? Can they do anything about it? Would they know what to do about it?
Much as I'd like to believe I have the power as a coach to make developers who don't care about code quality care about code quality, the reality is that the best I can do is to make them aware of its existence as a thing that some developers care about. And then we're back to the horse and the water and the drinking.
As a software development coach in the same organisation where Rob Bowley and I met, I kind of did both. I made the mistake of imposing code quality metrics on some teams. But I also discovered something that has completely changed my whole outlook on what it is I do for software organisations.
I made a deliberate choice right at the start of my time in that organisation to focus more on developer culture. I immediately instigated internal events - totally voluntary - aimed at developers, roped in some inspiring names to come in and rally the troops, and gradually encouraged the developers there to see themselves as a community. More importantly, as a community that cares about software development.
I was told unequivocally by the people who hired me that there was no point. These people were not motivated. They didn't care, and couldn't be made to care. And they were right. From above, or from the outside, you cannot make people care. But you can build a culture in which it's easier to care than it is not to care.
From that wellspring came much nascent talent that had been languishing in a command-and-control culture. Some have become software development coaches themselves in the intervening years; others lead successful development organisations. So much for "don't care, won't care".
Ultimately, my point is this: as with all technical decisions that can be made for a development team, it works best when the team makes it. You can't force people, con people, bribe people or blackmail them into caring. And if they don't care, you can point out all their code quality shortcomings as much as you like; they're not going to fix them.
February 24, 2014
Why Development Teams Need To Be Egalitarian
In a recent blog post, Robert C. Martin proposes that teams need a technical authority who guards the code against below-par commits.
Uncle Bob argues that you wouldn't fly on a plane without a captain, or live in a house built without a general contractor overseeing the work to make sure it's up to snuff.
I feel I'm a little closer to this problem, having specialised in these "technical authority" roles for most of my freelance career, and having been embedded with teams in the recent past.
The statistics teach us that once code's checked in, the ultimate cost of it to maintain could be 7 times higher than it cost to write in the first place. So Uncle Bob's right, of course; someone should be casting a helpfully critical eye over commits and protecting the code from work that is, for whatever reason, not good enough.
Where we differ is on who that someone should be.
Here's the thing; leading software developers is like herding cats (except that cat herders would probably count themselves lucky if they knew how difficult it is to lead developers, frankly.)
Even the most junior software developers tend to be highly educated, highly intelligent people. These are not bricklayers, or co-pilots. What we do is orders of magnitude more complex and intellectually demanding. Better, I think, to compare software developers to, say, scientists.
Who keeps scientists in check? While professional scientists no doubt have supervisors (e.g., in a laboratory, or the head of a faculty), these individuals are not the arbiters of "good science", or of what's true and what isn't.
The equivalent of a software commit might be submitting a paper to a peer-reviewed journal. There, the key word is "peer". The claims made by a scientist are subjected to exactly the kind of egalitarian scrutiny Bob rails against (no pun intended.)
My own experience, and what I've observed of other software development "leaders" is that when someone takes that responsibility, it can lead to one of two outcomes:
1. The team resents it (or, at least, some of them do), and much of your time is spent dealing with the politics of being the guy who says "no". Don't underestimate how much time this costs.
2. The team accepts it, and resigns responsibility. This is no longer their code, no longer their design. It's your code, your design, and they're "just following orders". Again, never underestimate how readily even the most educated and intelligent people will let themselves become "institutionalised", losing the desire to take charge of their own work.
As others responding to Uncle Bob's blog post have already suggested, it can work better if the technical authority is the team.
As the team leader, my effort is better spent marshalling and organising the team as a democratic body. I assume - rightly or wrongly - that, collectively, the team knows more than I do. When questions need answering, and decisions need making, put it to the team. Have deliberate mechanisms in place for these kinds of decisions. Be clear what's a team decision and what's a decision for the individual. If it helps, agree a team contract right at the start that sets the "constitution" of the team.
If it all gets a bit "he said, she said", then the team can also collect evidence upon which to base decisions. In particular, they can collect data about the effects of the decisions they make as a team. In this respect, I encourage teams to treat everything they do - every decision they make - as an experiment from which they can potentially learn.
But all of this is highly distinct from being a coach to teams. I absolutely 100% reject the idea that software development coaches should be acting in the way Bob suggests, because I absolutely 100% reject the notion of the coach being part of the team.
Been there, done that. The moment the coach starts making decisions for a team, they become the "go-to guy" for those kinds of decisions. There's a world of difference between coaching a team on, say, automated builds, and being "the build guy" on the team. Because that's what happens to coaches. I've seen it happen hundreds of times. Coach or consultant comes in to help a team solve a problem; coach or consultant becomes the solution.
I like to think of what I do as being like a driving instructor (and, warning, here comes another broken metaphor):
When I was 17, my Dad paid for me to have driving lessons. My instructor was a nervous man, and just kept grabbing the wheel every time he saw me doing something even slightly wrong. I had 6 lessons with this guy, and barely learned a thing. Because you cannot learn to drive by being told or being shown how to drive. You must drive yourself. A few years later, when I really needed my licence for work, I found a local instructor who had a completely different style. I would drive, and we would chat. Occasionally, he would point out something I could improve on. Even after just one lesson, my confidence grew enormously. I passed my test first time.
That has always stuck with me. The instructor was there to grab the wheel if I did something really stupid or dangerous, but mostly he kept his hands off the wheel and let me drive.
I'm the same way with teams as a coach. I try not to grab the wheel on their projects, letting them drive as much as possible. This is their code, their responsibility.
And if commits aren't up to snuff, that is their problem.
February 19, 2014
Programming Laws & Reality (& Why Most Teams Remain Flat Earthers)
An article on Dr Dobb's by Capers Jones has been doing the rounds on That Twitter, all about whether the famous "programming laws" we hold dear stand up to scrutiny against real-world data.
The answer from Capers is: yes, we do know what we think we know. For the most part, these programming laws are backed up by the available data.
For example, Fred Brooks' law that adding programmers to a late project makes it later is mostly true, for teams above a certain size. Adding one good programmer to a team of two probably won't cause delays. Adding another programmer to a team of 50 probably will. Also, adding an inexperienced or less capable programmer to any team will probably slow that team down.
This should come as no surprise. We've known this for decades. It's Software Development 101.
And yet, when a project is overrunning, what do 99% of managers do? They hire more programmers. Still. To this day.
Not only do they hire more programmers, but many insist on hiring cheap junior programmers, so they can hire more of them. This tends to compound the problem. The schedule slips further, and they prescribe more of the same medicine.
It's a classic management mistake: the perceived solution is actually the cause of the problem, and the worse the problem gets, the more fuel gets thrown on the fire to try and put it out. It creates a vicious cycle that can spiral out of control. Hence you will find enormous teams of hundreds of developers barely achieving what a team of four could.
If Brooks' Law is so well known, why do so many managers continue to do the exact opposite of what it implies?
The same goes for Jones' own law of software defect removal - that teams who are better at removing defects before testing tend to be more productive than teams who are worse at it - and for Peter Senge's law, which simply states: faster is slower.
The jury's really not out about the relationship between quality and time and cost. As Jones reminds us:
"Empirical data from about 20,000 projects supports this law."
To wit, the way to go faster is to take more care over quality. Teams moan about "not having time" for defect prevention practices like developer testing or inspections, and yet there's a mountain of evidence suggesting that if they did more of these things, they'd actually get done quicker. Again, it's a vicious cycle: we don't have time, so we skimp on quality, which creates costly delays downstream, which eat up more of our time. Rinse and repeat.
It's another Software Development 101. Being a software developer and not believing it is like being a doctor who doesn't believe in germs, or an astronomer who believes the Earth is flat.
And, yet again, the vast majority of teams are encouraged to do the exact opposite - sometimes even rewarded for doing it.
The evidence tells us that, when the schedule's slipping and we're up against it, the right thing to do is keep the team small and highly skilled - indeed, maybe even move some developers off the team - and focus more on quality.
That's the tragedy of our industry: higher quality, more reliable software is not just compatible with commercial realities, it can actually improve them.
One can't help but wonder why. What motivates teams and their managers to wilfully pursue what we might call "Flat Earth" strategies - strategies that are known to be likely to fail because the Earth is, in fact, round?
In considering how we might bring techniques for more reliable software into the mainstream, perhaps we need to devote more time to thinking about why the industry has ignored the facts for so long.
February 18, 2014
TDD, Architecture & Non-Functional Goals - All Of These Things Belong Together
One of the most enduring myths about Test-driven Development is that it is antithetical to good software or system architecture. Teams who do TDD, the myth goes, don't plan design up-front, don't look at the bigger picture, and don't give enough consideration to key non-functional design quality attributes.
Like many of these truisms, TDD gets this reputation from teams doing it poorly. And, yes, I know "you must not have been doing it right" is the go-to excuse of many a snake-oil salesman, so let me qualify my highfalutin claims.
First of all, there's this perception that when we do TDD, we pick up a user story, agree a few acceptance tests and then dive straight into the code without thinking about, visualising or planning design at a higher level.
This is not true.
The first thing I recommend teams do when they pick up user stories - especially in the earlier stages of development on a new product or system - is to go to the whiteboards. Collaborative design is the key to software development. It's all about communicating - this is one of my core principles of writing software.
Scott Ambler's book, Agile Modeling (now roughly 300 years old, or something like that), painted a very clear picture of where collaborative analysis and design could fit into, say, Extreme Programming.
The fact is, TDD is a design discipline that can be effectively applied at every level of abstraction.
There are teams using test cases to plan their high-level architecture - which I highly recommend, as designs are best when they answer real needs, which can be clearly expressed using examples/test cases. Just because they're not writing code yet doesn't mean they're not being test-driven.
And, as most of us will probably have seen by now, test-driving the implementation of our high-level designs is a great way to drive out the details and "build it right".
At the other end of the detail spectrum, there are teams out there test-driving the designs of FPGAs, ASICs and other systems on silicon.
If it can be tested, it can be test-driven. And that applies to a heck of a lot of things, including package architectures and systems of systems.
As to the non-functional considerations of design and architecture, I've learned from 15 years of TDD-ing a wide variety of software that these are best expressed as tests.
I'm no great believer in the Dark Arts of software architecture. Like a sort of object oriented Gandalf, I've worn that cloak and carried that magical staff more times than I care to remember, and I know from acres of experience that the best design authorities are empiricists (i.e., testers).
How many times have you heard an architect say "this design is wrong" when what they really mean is "that's not how I would have designed it"? I try to steer clear of that kind of highly subjective voodoo.
Better, I find, to express design goals using unambiguous tests that the software needs to pass. It could be a very simple design rule that says "methods shouldn't make more than one decision", or something more demanding, like "the software should occupy a memory footprint of no more than 100KB per user session".
Be it about complexity, dependencies, performance, scalability, or even how much entropy an algorithm generates, we can be more scientific about these things if we're of a mind to.
My own experiences have taught me that, when design goals are ambiguous and/or not tested frequently, the software doesn't meet those goals. Because you get what you measure.
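By way of illustration - a sketch only, with a deliberately crude decision count standing in for a real complexity metric - the "no more than one decision per method" rule could be made executable in Python using nothing but the standard ast module:

```python
import ast

# Branching constructs counted as "decisions". A crude stand-in for a
# real complexity metric; note that nested functions are counted into
# their enclosing function here.
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.Try, ast.BoolOp)

def decisions_per_function(source):
    """Map each function in the source to the number of decisions it makes."""
    counts = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            counts[node.name] = sum(
                isinstance(child, BRANCH_NODES) for child in ast.walk(node))
    return counts

def assert_one_decision_per_function(source):
    """The executable design rule: fail if any function makes
    more than one decision."""
    offenders = {name: n for name, n in decisions_per_function(source).items()
                 if n > 1}
    assert not offenders, "too many decisions: %s" % offenders
```

Run as part of the build, a rule like this gives the unambiguous pass/fail verdict argued for above; the threshold, and the choice of which nodes count as decisions, are of course up for negotiation.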
Both of these things are not just doable in a test-driven approach; I'd highly recommend teams do them. Not only do they make TDD more effective at the bigger-picture level, they also become more effective themselves by being test-driven. That's a win-win in my book.
February 14, 2014
Seeking London Venue for Monthly "High-Integrity Thursday Club" Meet-up
I'm putting the finishing touches to a plan for a new monthly meet-up for people interested in how we might bring tools and techniques for creating more reliable software into the mainstream.
This will run on the second Thursday of each month, starting on March 13th, venue TBC.
The aim is to bring together likeminded people - developers, academics, tool makers etc - to explore ways in which high-integrity software might become more the norm than the exception.
Each meeting will feature an hour of presentations or tutorials from 6:30-7:30pm, followed by Q&A, then off to a nearby hostelry by 8pm for some high-integrity beer to continue the conversation.
I'll kick us off in the first meet-up with a session called "Who's afraid of Formal Methods?" just to get the ball rolling.
But first, we'll need a venue. If you have (or know someone who might have) a suitable meeting room in central London you might be generous enough to let us use between 6-8pm on the second Thursday of each month, please get in touch. It will need to seat about 30 people, and have a data projector (or massive telly.)
February 13, 2014
Computing For Kids - Three Of The Best Non-Profits
I'm a firm believer in the potential power of non-profits in spaces where the profit motive can be self-defeating. This is why I want to champion non-profits in our efforts to nurture a new generation of software innovators.
In particular, I want to draw your attention - either as a teacher or a parent looking for help learning about computing, or as a potential volunteer or supporter - to what I think are three of the best non-profits working in this space, targeted at three aspects of the problem.
Computer Science & Formal Computing Education
The most active and established organisation working on getting computing into the classroom is Computing At School. Founded in 2008, CAS have done amazing work on defining a curriculum for computing that goes much deeper than the "mucking about with PowerPoint" that much of ICT seemed to have devolved into.
Their curriculum includes not just some computer programming, but useful theory for computing and computational thinking. It's well-supported by their large network of teachers, computer scientists and professional software developers, and backed by the British Computer Society, as well as Google and Microsoft.
The CAS community has developed a strong set of resources for teachers and schools, and is well worth the £0 entry fee.
Computer Programming & Extracurricular Learning
Many children have been learning the practical elements of computing by joining a local programming club. Code Club is a network of such clubs run by volunteers across the UK. Founded in 2012, Code Club is still quite new, but has already grown to be by far the largest and most successful computing club network.
In Code Club, kids will learn basic programming using tools like MIT's Scratch, and can progress on to more "grown-up" programming in languages like Python.
With their emphasis on the practical, and on having fun and being creative with code, Code Club complements and adds a lot of value to the academic route.
Software Development & The Real World
The discipline of software development is to programming what movie making is to working a camera.
Software development brings together a wide range of complex disciplines, of which programming is just a part (albeit a vital one). Creating valuable, usable, reliable, scalable, secure software that meets the needs of real users, under commercial and other real-world constraints, is a world away from the computing kids get to do in classrooms or in clubs.
At the time of writing, Apps for Good are the only non-profit organisation that helps kids get to grips with this very wide and challenging set of disciplines, covering everything from defining a winning vision for your software to automating the release process for end users.
Backed up by 400 experts in every field of software development, from product strategy to system testing, Apps for Good raises the game for teenagers and young adults and helps bridge that gap between academic computing and the real world, where making working software is orders of magnitude more complicated.
The relationship between these three fine non-profit organisations can probably best be explained by a musical metaphor: Code Club teach kids to play guitar, Computing At School helps kids with useful music theory and can help them gain a recognised qualification, and Apps for Good help these kids transition from practicing guitar in their bedrooms to becoming a professional recording and touring band.
So there it is: three great non-profit organisations who can help get children coding, be it for fun, for qualifications, or for the real world.
Computing At School - http://www.computingatschool.org.uk/
Code Club - https://www.codeclub.org.uk
Apps for Good - http://www.appsforgood.org/
February 12, 2014
The Erosion Of The Status Of Software Developers As Professionals
This last week of Year Of Code shenanigans in social and traditional media has reminded me of something that touches a nerve - call it an "old war wound".
In my 20 years writing software for a living, and teaching others to write software, I've witnessed a slow erosion of the stature and esteem in which people who write software are held.
In the early 1990s, it was widely understood that this was a highly specialised and complex technical discipline (and this is back when it was much simpler than it is today). It takes years to gain enough practical mastery of the whole business - not just programming - of creating useful, valuable software.
As computing entered the mainstream during the 1990s (by 1999, most homes had at least one computer, and using a computer had become pretty much the norm in many lines of work), enter, stage right, a growing army of armchair experts.
This accelerated rapidly during the dotcom boom of the late 90's, as pretty much anyone who could claim to program at all was drafted in to help float a million and one really stupid business start-up ideas.
Soon, these people, armed with a teeny bit of VBScript and an unstoppable thirst for dotcom spondulicks, coupled with an almost total lack of ambition to become better programmers, started to climb their way up various greasy poles into senior non-technical roles in project management, business analysis, test management, product strategy, IT management and more than a few CTO vacancies.
If you're ambitious and good at office politics and self-promotion, it's entirely possible - maybe even easier - to pass from junior programmer to executive management without actually passing through a stage of technical competence. It's a sort of turbo-charged Peter Principle that operates in the software industry.
Meanwhile, around our industry has grown a whole society of professional commentators, self-proclaimed experts ("social media guru", anybody?) and general busy-bodies who, it transpires, wish to make busy with the bodies of the people who know how to make working software happen. Most visible of these is a new elite class of "non-technical tech entrepreneurs" who have risen to prominence in places like London. These are often well-educated people from comfortable, sometimes wealthy backgrounds who have the resources and the contacts to start technology businesses. But thanks to a classical education that tends to look down on things like computer programming (I went to one of those schools, which currently doesn't offer computing at GCSE or A-Level, but does have its own theatre - so I know the score here), these people, who in another universe would perhaps be pursuing careers in politics and PR like their peers did, rely on grease monkeys like me and you to make their ideas a reality.
Fast forward to 2014, and it's self-evident that, once you step outside the bubble of the software development community, nobody else cares what we think about software development or things related to it. The Year Of Code debate has largely played out in the mainstream media without taking into account what software developers think. For the most part, the people asked to comment are not software developers. Indeed, some journalists have actively denigrated the opinions of experienced software developers, dismissing them as "code snobbery". Why should anyone believe us when we say it's more complicated than that, or that learning to program is hard? What do we know?
And I don't doubt that this blog post - were it ever read by someone who isn't a software developer - would be as casually dismissed as the biased opinions of a bitter old hacker just looking out for his own future earnings. Anyone who knows me, however, will know I don't do "Mortgage-driven Development". My opinions are based on long, deep and wide experience, and I express them stridently and passionately because I genuinely care. If anything, given my current occupation as a trainer and coach of software developers, it must surely be in my financial interest to flood the market with incompetent developers. Graduate trainees are my biggest market.
This all mirrors the gradual erosion of other highly skilled professions, in particular teaching. Teachers are another group who are woefully underrepresented in the debate.
In this whole matter, we have three distinct groups of people who lie at the heart of it all: programmers, teachers and children. There's no working solution that fails to address our concerns, and any strategy will ultimately rely on us to gain any real traction, because this is where the rubber meets the road.
February 11, 2014
My World-Famous Test-driven Development Kitchen Design Metaphor
When people ask me "what is Test-driven Development?", I have a go-to kitchen metaphor which I find quite useful.
Imagine you're asked to design a kitchen. There are different ways to go about it.
You could take a "shopping list" approach, and think of all the features your kitchen should have. e.g., It should have a fan-assisted oven, it should have a microwave, it should have a sink with hot and cold running taps, it should have power sockets, it should have work surfaces, cupboards, a small horse, some sausages, and so on.
There are many, many things your kitchen could have - an infinite universe of possible kitchen designs.
Some will be better than others. But what do we mean by "better"?
Ultimately, kitchens are best designed for people to use. So, in this context, "better" may well mean "more useful".
So we can begin to narrow down the choice of designs by asking the question: who will be using this kitchen, and what will they be using it to do?
Instead of listing things this kitchen should have, we begin by thinking about ways this kitchen will be used.
We identify who will be using the kitchen - e.g., Mum, Dad, the kids, the dog - and then we work with them (okay, maybe the dog gets a business analyst) to understand how the kitchen can meet their functional needs.
A powerful way to do this is to gather together real examples of the kinds of things they would want to do: e.g., prepare a cooked breakfast for four people, make a midnight snack of cheese on toast for one, give a dinner party for 8 guests with a specific starter, main and dessert etc.
We can think of these examples as tests. Our kitchen design is only correct if it passes these tests.
Working one test at a time, we then design the kitchen specifically to pass each test.
We might start with the simplest test - making cheese on toast - and ask ourselves "what's the least we need in our kitchen to do that?" Our initial design might then be a toaster and a grill, one plate, one knife and a cheese slicer. That's all we need to make cheese on toast for one person.
Then we move on to another test. What would our kitchen need to make cheese on toast for one AND to prepare a cooked breakfast of fried eggs, bacon, sausages, mushrooms and toast with butter? We might add a frying pan and a stove with one ring to heat it on, as well as plates and cutlery for the whole family as needed.
And on we go, adding bit by bit to the design one test at a time, always checking as the design evolves that all our tests are still passing and that we have a working kitchen.
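Translated out of the metaphor, the same one-test-at-a-time rhythm can be sketched in a few lines of Python. (The `Kitchen` class, its `can_make` method and the equipment names are all invented here purely for illustration - a sketch of the rhythm, not a prescription for how your tests should look.)

```python
class Kitchen:
    """A design grown one test (one example of usage) at a time."""

    def __init__(self):
        # Just enough equipment to pass the first test: cheese on toast for one.
        self.equipment = {"toaster", "grill", "plate", "knife", "cheese slicer"}

    def can_make(self, dish):
        needs = {
            "cheese on toast": {"toaster", "grill", "knife", "cheese slicer"},
            "cooked breakfast": {"frying pan", "stove", "toaster"},
        }
        return needs[dish] <= self.equipment


kitchen = Kitchen()

# Test 1 passes with the minimal design...
assert kitchen.can_make("cheese on toast")

# ...but the cooked breakfast test goes red, driving us to grow the design:
assert not kitchen.can_make("cooked breakfast")
kitchen.equipment |= {"frying pan", "stove"}

# Both tests now pass - and must keep passing as the design evolves.
assert kitchen.can_make("cheese on toast")
assert kitchen.can_make("cooked breakfast")
```

The point is the order of events: each new example of usage fails first, and only then do we add the minimum the design needs to make it pass, re-running the older tests as we go.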
As with all design, the devil's in the detail. Not all toasters are born equal. Not all stoves perform in the same way. As part of our design process, we may well wish to isolate these elements of our design and check independently that - in the context we intend to use them - they will do what we need. Your cooked breakfast could be ruined if the toast keeps burning, or it could end up a lukewarm gloop if your hob isn't hot enough.
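In code, checking a design element independently usually means testing through a substitute. Here's a hypothetical Python sketch - `make_cheese_on_toast`, `StubToaster` and their behaviour are invented names, standing in for whatever component you'd isolate in a real system:

```python
class StubToaster:
    """Test double standing in for a real toaster, so the recipe can be
    checked in isolation from any particular appliance."""

    def __init__(self, burns=False):
        self.burns = burns

    def toast(self, bread):
        return ("burnt " if self.burns else "toasted ") + bread


def make_cheese_on_toast(toaster):
    # The recipe depends on the toaster only through its toast() method,
    # so any toaster-shaped object will do for this check.
    toast = toaster.toast("bread")
    if toast.startswith("burnt"):
        raise RuntimeError("breakfast ruined: " + toast)
    return "cheese on " + toast


# The recipe works with a well-behaved toaster...
assert make_cheese_on_toast(StubToaster()) == "cheese on toasted bread"

# ...and we find out before breakfast that a misbehaving one ruins it.
try:
    make_cheese_on_toast(StubToaster(burns=True))
    raise AssertionError("should have raised")
except RuntimeError:
    pass
```

Because the recipe and the toaster are checked separately, a burnt breakfast points us straight at the faulty component rather than at the whole kitchen.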
We can think of the design details as a chain of dependencies - things that have to happen in a certain order (e.g., we must put cheese on the toast before we put it under the grill, we must toast the bread before we put cheese on it, we must slice the cheese before putting it on the toast, etc.).
So, rather than tackle each example in one big design "lump" - where many details might potentially go wrong - it's desirable to drill down into the design, driving out the details of how our kitchen will pass the test (e.g., is the hob big enough for the frying pan?), repeating the test-driven process down through the chain of dependencies involved until we have a process that works end-to-end.
So, one test at a time - one example of usage at a time - we "grow" our kitchen design, always checking that every addition or change we make to the design hasn't broken any of our tests.
Of course, such growth, if entirely piecemeal, could lead to some pretty crappy designs. We may spot as time goes on that we've somehow ended up with two fridges, or that the power sockets are right above the sink where water's likely to splash, or that our kitchen consumes enough electricity to power a small town, or that the only place for the dishwasher is on top of the washing machine. And one thing we must be particularly vigilant about throughout is how difficult the design will be to change in the future - since, no matter how good your kitchen design is, there will always be room for improvement, always needs we didn't anticipate, and, well, hey... things change.
To avoid making a mess of our design as it emerges, we will need to continuously revisit design decisions in light of a variety of non-functional requirements like running costs, ease of maintenance, safety, energy consumption, ergonomics, and even aesthetics. The test-driven approach requires us to continuously restructure and refine our kitchen design so that it not only does what they need, but does it in a way they can live with potentially for years, and that will allow them to adapt and extend the design to meet future needs.
In essence, that's Test-driven Development.
February 7, 2014
Year Of Code & The Myth Of The Programmer Shortage
2014 is the UK government's Year Of Code.
With a stated aim to get millions of people "coding" (whatever that is) this year, it's an initiative being championed by the Chancellor of the Exchequer, George Osborne.
Behind all this is the belief that Britain and the British economy suffer from a growing shortage of programmers.
But this simply isn't true.
Britain actually has far more programmers than it needs. Don't believe me? Place an ad on Jobserve, the UK's leading source of IT vacancies, and - if it's reasonably well-paid - see how many responses you get.
I help clients to recruit software developers, and I know that for every role that involves programming, you may get hundreds of applicants.
Of those hundreds of applicants, perhaps a dozen will be worth interviewing. Of those dozen, perhaps just one or two will actually be what enlightened employers consider to be good enough to let loose on their critical business systems and software products.
While there may not be a shortage of programmers, there's most definitely a chronic shortage of good software developers. That's something that doesn't seem to have crossed anyone's mind: there's much more to it than just programming, for a start, and it takes years to develop the skills and knowledge needed to build good, valuable, reliable, maintainable, secure, scalable software.
From what I've observed, what holds businesses back is not the availability of developers to write new software, but the ability of developers to evolve and adapt existing software to keep pace with the competition. Hiring shitty programmers is a false economy. Most developers are working on legacy code that was - in many cases - written badly by bad programmers. I've watched £billion businesses sunk by their inability to maintain the pace of IT innovation.
And I've also observed that most development teams are overstaffed, and that a majority of software projects fail to deliver any real value at all. Hiring smaller teams of more skilled people - focused on projects and products that have a real point to them (you know, proper actual business goals), writing code that is easier to change and costs less to support and maintain - would quite probably lead to a smaller, leaner profession that makes a more positive impact on UK plc.
Gold rushes, like the dotcom boom of the late 1990s, tend to flood the market with average or below-average programmers. And, since the bar is set dangerously low in our profession, "average" really means "not competent".
When the market is flooded with not-competent programmers, and recruitment is handled by people who can't tell the difference (and, let's face it, simply don't care in many cases, just as long as the price is right), it gets even harder to find the good ones.
Year of Code smacks of another gold rush, built on the myth that Britain needs millions of programmers, and promoted by people who evidently haven't the faintest idea what a software developer does.
Check out this interview with the executive director of Year Of Code on BBC's Newsnight. It starts badly (with Jeremy Paxman referring to programming as "keyboard skills"), and goes downhill from there. "This gobbledygook..." etc etc.
Underlying Year Of Code is another, much more dangerous myth: that "coding" can be learned in a day - or even an hour.
I really struggle with this. Non-technical folk going on these "Learn To Code In A Day" courses think I'm making it up when I tell them that you really can't learn to program to any level of proficiency in just a few hours.
Programming - let alone software development - is being devalued as a skill by this repeated mantra that it's that easy to master. It creates very unrealistic expectations among the learners, who either become crushingly disappointed that they're not grokking it by tea time and lose patience (and learning to program takes a lot of patience), or walk away with a highly inflated sense of their achievement on the day and simply will not be told otherwise.
I've already locked horns with two non-technical managers who went on a one-day course and subsequently started interfering in technical decisions they simply weren't qualified to make because "they know all about coding now".
I've also seen at least 40 examples of people walking away from learning to program when it became apparent that it was going to take many hours over weeks and months to get to a working knowledge of just one programming language.
I view the unconscious incompetents as the most dangerous. Indeed, they now appear to be in charge.
By the bye, with so little money being invested (£500,000 to train thousands of teachers?!) and with the whole thing being underpinned by dangerous myths, Year Of Code has already attracted skepticism and some derision from that now quite marginalised group of people who actually know what they're talking about.
No doubt, there is money to be made by people willing to play up to the myths and offer quick fixes, and that's in no small part what it's all really about. Politicians and business leaders don't want to hear about long-term solutions. They need good PR today and something to crow about in time for the next election. Who cares if it makes a real difference?
Now, don't get me wrong. I'm all for kids learning to program, and Mr Osborne should be commended for being on board with that. But after the very, very basics have been taught, where do they go next? How will barely competent teachers provide support to the few kids who really get it? What happens to all that potential if everyone they look to for guidance is stuck at "hello, world!"?
And programming should be FUN! I believe it's a very grave mistake to introduce programming and computer science as academic subjects to children as young as five. Let the kids play and explore. If they discover they enjoy it, and have an aptitude for it, there'll be plenty of time for Big O notation later in their education when they've been given the choice to dig deeper. Speaking for myself, CS lessons at age 5 might have put me off programming for life. I learned to program by myself in my own time because I wanted to.
Between me as a software professional today and my first ever "Hello, world!" 30+ years ago has been a long, long journey of everyday practice, reading, conferences, more reading, experimentation and yet more reading - thousands upon thousands of hours to get me to the point where I'm just beginning to feel like I'm getting the hang of it.
If we're to succeed at growing a new generation of good software developers who create real value, then how we set our expectations will be crucial. It does not serve us well to treat this as a problem that can be solved with half a million quid (that works out at about £3 per teacher, by the way) and one day's training.
Imagine we had a shortage of pianists, and Year Of Piano was the proposed solution: £500,000 to train 100,000+ piano teachers - people would naturally call it absurd. You can't learn the piano in a day, certainly not well enough to teach it. Why on earth would anyone believe programming is orders of magnitude easier?
But at the end of it, will our chronic shortage of good software developers be solved? My fear is that, as with previous gold rushes, it will just make them even harder to find.
It's my firm belief that, instead of focusing all our energies on bringing more warm bodies into software development, the government would make a much bigger impact by focusing on raising standards in the profession and improving the signal-to-noise ratio.