April 30, 2016

Goals vs. Constraints

A classic source of tension and dysfunction in software teams - well, probably all kinds of teams, really - is the relationship between goals and constraints.

Teams often mistake constraints for goals. A common example is when teams treat a design specification as a goal, and lose sight of where that design came from in the first place.

A software design is a constraint. There may be countless ways of solving a problem, but we chose this one. That's the very definition of constraining.

On a larger scale, I've seen many tech start-ups lose sight of why they're doing what they're doing, and degenerate into focusing 100% on raising or making the money to keep doing whatever it is they're doing. This is pretty common. Think of those charities that started out with a clear aim to "save the cat" or whatever, but fast-forward a few years and most - if not all - of the charities' efforts end up being dedicated to raising the funds to pay everybody and keep the charity going.

Now, you could argue that a business's goal is to make money, and that they make money in exchange for helping customers to satisfy their goals. A restaurant's goal is to make money. A diner's goal is to be fed. I give you money. You stop me from being hungry.

Which is why - if your organisation's whole raison d'être is to make a profit - it's vitally important to have a good, deep understanding of your customer's goals or needs.

That's quite a 19th century view of business, though. But even back then, some more progressive industrialists saw aims above and beyond just making a profit. At their best, businesses can provide meaning and purpose for employees, enrich their lives, enrich communities and generally add to the overall spiffiness of life in their vicinity.

But I digress. Where was I? Oh yes. Goals vs. constraints.

Imagine you're planning a trip from your home in Los Angeles to San Francisco. Your goal is to visit SF. A constraint might be that, if you're going to drive, you'll need enough gasoline for the journey.

So you set out raising money for gas. You start a lemonade stall in your front yard. It goes well. People like your lemonade, and thanks to the convenient location of your home, there are lots of passers-by with thirsts that need quenching. Soon you have more than enough money for gas. But things are going so well on your lemonade stall that you've been too busy thinking about that, and not about San Francisco. You make plans to branch out into freshly squeezed orange juice, and even smoothies. You get a bigger table. You hire an assistant, because there's just so much to be done. You buy a bigger house on the same street, with a bigger yard and more storage space. Then you start delivering your drinks to local restaurants, where they go down a storm with diners. 10 years later, you own a chain of lemonade stalls spanning the entire city.

Meanwhile, you have never been to San Francisco. In fact, you're so busy now, you may never go.

Now, if you're a hard-headed capitalist, you may argue "so what?" Surely your lemonade business is ample compensation for missing out on that trip?

Well, maybe it is, and maybe it isn't. As I get older, I find myself more and more questioning "Why am I doing this?" I know too many people who got distracted by "success" and never took those trips, never tried those experiences, never built that home recording studio, never learned that foreign language, and all the other things that were on their list.

For most of us - individuals and businesses alike - earning money is a means to an end. It's a constraint that can enable us to achieve our goals, or prevent us from achieving them.

As teams, too, we can too easily get bogged down in the details and lose sight of why we're creating the software and systems that we do in the first place.

So, I think, a balance needs to be struck here. We have to take care of the constraints to achieve our goals, but losing sight of those goals potentially makes all our efforts meaningless.

Getting bogged down in constraints can also make it less likely that we'll achieve our goals at all.

Constraints constrain. That's sort of how that works. If we constrain ourselves to a specific route from LA to San Francisco, for example, and then discover halfway that the road is out, we need other options to reach the destination.

Countless times, I've watched teams bang their heads against the brick wall trying to deliver on a spec that can't - for whatever reason - be done. It's powerful voodoo to be able to step back and remind ourselves of where we're really headed, and ask "is there another way?" I've seen $multi-million projects fail because there was no other way - deliver to the spec, or fail. It had to be Oracle. It had to be a web service. It had to be Java.

No. No it didn't. Most constraints we run into are actually choices that someone made - maybe even choices that we made for ourselves - and then forgot that it was a choice.

Yes, try to make it work. But don't mistake choices for goals.

April 26, 2016

SC2016 Codemanship Stocking Filler Mini-Projects

Software Craftsmanship 2016 - Codemanship Stocking Filler Mini-Projects

Screencast A Habit

Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any that's screencast-able


Record a screencast to demonstrate a single good coding habit you believe is important (e.g., "always run the tests after every refactoring").

Demonstrate how not to do it (and what the consequences of not doing might be), as well as how to do it well.

Test your screencast on your fellow SC2016 participants.

Please do NOT upload your screencast to the web until after the event. The Wi-Fi won't take it!

Learn Your IDE's Shortcuts - The Hard Way

Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any


Disable the mouse and/or trackpad on your computer, and attempt a TDD kata (here are some the Web made earlier - https://www.google.co.uk/?q=tdd+katas) using only the keyboard.

Adversarial Pairing - Programmers At War!!!

Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any


Choose a TDD kata (here are some the Web made earlier - https://www.google.co.uk/?q=tdd+katas).

Person A starts by writing the first failing test.

Person B takes over and writes the simplest evil code that passes the test, but is obviously not what was intended.

Person A must improve the test - or add another test - to steer it back to the original intent.

If Person B can still see a way to do evil with the improved tests, they should do it.

And rinse and repeat until the intended behaviour or rule has been correctly implemented - that is to say, for any valid input, the code will produce the correct result for that behaviour or rule.

Then swap over for the next behaviour or rule in the kata (if there is one), so Person B starts by writing a test, and Person A writes the simplest evil code to pass it.

If time allows, and you have access to a mutation testing tool for your programming language, subject your project code to mutation testing to see how well tested it is.

If I Ruled The (Coding) World...

Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any


The programmer ballots have been counted, and you have been elected the Programmer President of The World.

You can - by decree - change any one thing about the way we all write software. JUST ONE THING.

What would it be?

Use pictures, sample code, fake screenshots, plasticine expression or interpretative dance to illustrate your single decree as Programming President. Stick them on the web where we can see them.

Extra Spaces At Software Craftsmanship 2016 (London)

Good news for code crafters in the London area; we've made extra space for 10 more people at SC2016 on Saturday May 14th.

Bring a laptop, find someone to pair with (or someones to mob with if a projector is free), pick a mini-project from our menu of fun, challenging and crafty code excursions, and do what you do best!

Details and tickets can be found at https://www.eventbrite.co.uk/e/software-craftsmanship-2016-london-tickets-21666084843

April 25, 2016

Mutation Testing & "Debuggability"

More and more teams are waking up to the benefit of checking the levels of assurance their automated tests give them.

Assurance, as opposed to coverage, answers a more meaningful question about our regression tests: if the code was broken, how likely is it that our tests would catch that?

To answer that question, you need to test your tests. Think of bugs as crimes in your code, and your tests as police officers. How good are your code police at detecting code crimes? One way to check would be to deliberately commit code crimes - deliberately break the code - and see if any tests fail.

This is a practice called mutation testing. We can do it manually, while we pair - I'm a big fan of that - and we can do it using one of the increasingly diverse (and rapidly improving) mutation testing tools available.

For Java, for example, there are tools like Jester and PIT. What they do is take a copy of your code (with unit tests), and "mutate" it - that is, make a single change to a line of code that (theoretically) should break it. Examples of automated mutations include turning a + into a -, or a < into <=, or ++ into --, and so on.

After it's created a "mutant" version of the code, it runs the tests. If one or more tests fail, then they are said to have "killed the mutant". If no test fails, then the mutant survives, and we may need to have a think about whether that line of code that was mutated is being properly tested. (Of course, it's complicated, and there will be some false positives where the mutation tool changed something we don't really care about. But the results tend to be about 90% useful, which is a boon, IMHO.)

Here's a mutation testing report generated by PIT for my Combiner spike:

Now, a lot of this may not be news for many of you. And this isn't really what this blog post is about.

What I wanted to draw your attention to is that - once I've identified the false positives in the report - the actual level of assurance looks pretty high (about 95% of mutations I cared about got killed.) Code coverage is also pretty high (97%).

While my tests appear to be giving me quite high assurance, I'm worried that may be misleading. When I write spikes - intended as proofs of concept and not to be used in anger - I tend to write a handful of tests that work at a high level.

This means that when a test fails, it may take me some time to pinpoint the cause of the problem, as it may be buried deep in the call stack, far removed from the test that failed.

For a variety of good reasons, I believe that tests should stick close to the behaviour being tested, and have only one reason to fail. So when they do fail, it's immediately obvious where and what the problem might be.

Along with a picture of the level of assurance my tests give me, I'd also find it useful to know how far removed from the problem they are. Mutation testing could give me an answer.

When tests "kill" a mutant version of the code, we know:

1. which tests failed, and
2. where the bug was introduced

Using that information, we can calculate the depth of the call stack between the two. If multiple tests catch the bug, then we take the shallowest depth out of those tests.

This would give me an idea of - for want of a real word - the debuggability of my tests (or rather, the lack of it). The shallower the depth between bugs and failing tests, the higher the debuggability.
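The calculation itself would be trivial - the hard part is extracting the data. As a hypothetical sketch (the data structure here is invented; a real tool would have to derive it from the mutation report and the failing tests' stack traces):

```python
def debuggability(kills):
    """kills maps a mutated location to a list of (failing_test, stack_depth)
    pairs. Returns the shallowest depth per mutant - lower means a failing
    test sits closer to the bug, i.e. higher debuggability."""
    return {mutant: min(depth for _test, depth in caught)
            for mutant, caught in kills.items() if caught}
```

Averaged over all the killed mutants, that shallowest-depth figure would give a rough score for how far my tests sit from the problems they catch.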

I also note a relationship between debuggability and assurance. In examining mutation testing reports, I often find that the problem is that my tests are too high-level, and if I wrote more focused tests closer to the code doing that work, they would catch edge cases I didn't think about at that higher level.

April 23, 2016

Does Your Tech Idea Pass The Future Dystopia Test?

One thing that at times fascinates and at times appals me is the social effect that web applications can have on us.

Human beings learn fast, but evolve slowly. Hence we can learn to program a video recorder, but living a life that revolves around video recorders can be toxic to us. For all our high-tech savvy, we are still basically hominids, adapted to run from predators and pick fleas off of each other, but not adapted for Facebook or Instagram or Soundcloud.

But the effects of online socialisation are now felt in the Real World - you know, the one we used to live in? People who, just 3-4 years ago, were confined to expressing their opinions on YouTube are now expressing them on my television and making pots of real money.

Tweets are building (and ending) careers. Soundcloud tracks are selling out tours. Facebook viral posts are winning elections. MySpace users are... well, okay, maybe not MySpace users.

For decades, architects and planners have obsessed over the design of the physical spaces we live and work in. The design of a school building, they theorise, can make a difference to the life chances of the students who learn in it. The design of a public park can increase or decrease the chances of being attacked in it. Pedestrianisation of a high street can breathe new life into local shops, and an out-of-town shopping mall can suck the life out of a town centre.

Architects must actively consider the impact of buildings on residents, on surrounding communities, on businesses, on the environment, when they create and test their designs. Be it for a 1-bed starter home, or for a giant office complex, they have to think about these things. It's the law.

What thought, then, do software developers give to the social, economic and environmental impact of their application designs?

With a billion users, a site like Facebook can impact so many lives just by adding a new button or changing their privacy policy.

Having worked on "Web 2.0" sites of all shapes and sizes, I have yet to see teams and management go out of their way to consider such things. Indeed, I've seen many occasions when management have proposed features of such breath-taking insensitivity to wider issues, that it's easy to believe that we don't really think much about it at all. That is, until it all goes wrong, and the media are baying for our blood, and we're forced to change to keep our share price from crashing.

This is about more than reliability (though reliability would be a start).

Half-jokingly, I've suggested that teams put feature requests through a Future Dystopia Test; can we imagine a dark, dystopian, Philip K Dick-style future in which our feature has caused immense harm to society? Indeed, whole start-up premises fail this test sometimes. Just hearing some elevator pitches conjures up Blade Runner-esque and Logan's Run-ish images.

I do think, though, that we might all benefit from devoting a little time to considering the potential negative effects of what we're creating before we create it, as well as closely monitoring those effects once it's out there. Don't wait for that hysterical headline "AcmeChat Ate My Hamster" to appear before asking yourself if the fun hamster-swallowing feature the product owner suggested might not be such a good thing after all.

This blog post is gluten free and was not tested on animals

April 21, 2016

SC2016 Mini-Project: Connect Four Bot

Software Craftsmanship 2016 - Mini-Project

Connect Four Bot

Estimated Duration: 1-2 hours

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any that can support a suitable UI


Test-drive a bot that can play Connect 4 against a human opponent.

Connect 4 is a game for 2 players, each player having round pieces of a specific colour (e.g., red or yellow). It presents players with a vertical game grid of 7 columns, each 6 slots deep. Players take it in turns to insert one of their pieces into one of the columns at the top. That piece then falls down the column to occupy the lowest empty slot.

When a column is full, players can no longer insert pieces into it.

The goal is to place four of your pieces in an unbroken line - horizontal, vertical or diagonal - before your opponent does. If neither player achieves a line of 4, the game is a draw.
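To make the rules concrete, here's a minimal sketch of the win check, assuming (my assumption, not part of the brief) that the board is represented as a list of 7 columns, each a list of pieces ('R' or 'Y') from bottom to top:

```python
ROWS, COLS = 6, 7

def piece_at(board, col, row):
    # Return the piece in a given cell, or None if the cell is
    # empty or off the board.
    if not (0 <= col < COLS and 0 <= row < ROWS):
        return None
    column = board[col]
    return column[row] if row < len(column) else None

def has_won(board, piece):
    # Try every cell as the start of a line of four, in each of the
    # four distinct directions: right, up, and the two diagonals.
    for col in range(COLS):
        for row in range(ROWS):
            for dc, dr in ((1, 0), (0, 1), (1, 1), (-1, 1)):
                if all(piece_at(board, col + dc * i, row + dr * i) == piece
                       for i in range(4)):
                    return True
    return False
```

Your bot's move-selection logic is the interesting part, of course - but a clean, well-tested rules model like this is what lets you separate the bot from the game and the UI.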

To potentially facilitate bot tournaments, ensure that your bot is cleanly separated from the game and UI and can be deployed in a microservice if necessary.


Create a Connect Four tournament web server that pits bots deployed as JSON microservices against each other.

April 20, 2016

A* - A Truly Iterative Development Process

Much to my chagrin, having promoted the idea for so many years, software development still hasn't caught on to the idea that what we ought to be doing is iterating towards goals.

NOT working through a queue of tasks. NOT working through a queue of features.

Working towards a goal. A testable goal.

We, as an industry, have many names for working through queues: Agile, Scrum, Kanban, Feature-driven Development, the Unified Process, DSDM... All names for "working through a prioritised list of stuff that needs to be done or delivered". Of course, the list is allowed to change depending on feedback. But the goal is usually missing. Without the goal, what are we iterating towards?

Ironically, working through a queue of items to be delivered isn't iterating - something I always understood to be the whole point of Agile. But, really, iterating means repeating a process, feeding back the results of each cycle, until we reach some goal. Reaching the goal is when we're done.

What name do we give to "iterating towards a testable goal"? So far, we have none. Buzzword Bingo hasn't graced the door of true iterative development yet.

Uncatchy names like goal-driven development and competitive engineering do exist, but haven't caught on. Most teams still don't have even a vague idea of the goals of their project or product. They're just working through a list that somebody - a customer, a product owner, a business analyst - dreamed up. Everyone's assuming that somebody else knows what the goal is. NEWSFLASH: They don't.

The Codemanship way compels us to ditch the list. There is no release plan. Only business/user goals and progress. Features and change requests only come into focus for the very near future. The question that starts every rapid iteration is "where are we today, and what's the least we could do today to get closer to where we need to be?" Think of development as a graph algorithm: we're looking for the shortest path from where we are to some destination. There are many roads we could go down, but we're particularly interested in exploring those that bring us closer to our destination.
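Since the name is borrowed from the A* search algorithm, a minimal sketch might make the analogy concrete (the graph, costs and place names below are invented for illustration):

```python
import heapq

def a_star(start, goal, neighbours, heuristic):
    # neighbours(node) yields (next_node, step_cost) pairs;
    # heuristic(node) estimates the remaining cost to the goal,
    # steering the search toward routes that look most promising.
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                 # cheapest route found
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in neighbours(node):
            heapq.heappush(frontier, (cost + step + heuristic(nxt),
                                      cost + step, nxt, path + [nxt]))
    return None                         # the goal is unreachable
```

Note that both the goal and the heuristic - "are we getting warmer or colder?" - are essential inputs. Take either away and the search has nothing to steer by.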

Now imagine a shortest-path algorithm that has no concept of destination. It's just a route map, a plan - an arbitrary sequence of directions that some product owner came up with that we hope will take us somewhere good, wherever that might be. Yup. It just wouldn't work, would it? We'd have to be incredibly lucky to end up somewhere good - somewhere of value.

And so it is - in my quest for a one-word name to describe "iteratively seeking the shortest (cheapest) path to a testable goal" - I propose simply A*.

As in:

"What method are we following on this project?"


Of course, there are prioritised lists in my A* method: but they are short and only concern themselves with what we're doing next to TRY to bring us closer to our goal. Teams meet every few days (or every day, if you're really keen), assess progress made since last meeting, and come up with a very short plan, the results of which will be assessed at the next meeting. And rinse and repeat.

In A*, the product owner has no vision of the solution, only a vision of the problem, and a clear idea of how we'll know when that problem's been solved. Their primary role is to tell us if we're getting warmer or colder with each short cycle, and to help us identify where to aim next.

They don't describe a software product, they describe the world around that product, and how it will be changed by what we deliver. We ain't done until we see that change.

This puts a whole different spin on software development. We don't set out with a product vision and work our way through a list of features, even if that list is allowed to change. We work towards a destination - accepting that some avenues will turn out to be dead-ends - and all our focus is on finding the cheapest way to get there.

And, on top of all that, we embrace the notion that the destination itself may be a moving target. And that's why we don't waste time and effort mapping out the whole route beyond the near future. Any plan that tries to look beyond a few days ends up being an expensive fiction that we become all too easily wedded to.

April 19, 2016

SC2016 Mini-Project: Light-Speed Startup - A Serious Team Dojo

Software Craftsmanship 2016 - Mini-Project

Light-Speed Start-Up: A Serious Team Dojo

Estimated Duration: 8+ hours

Author: Jason Gorman, Codemanship

Language(s)/stacks: Web Full Stack / Cloud Services


Could you start an online business in a single day?

From a standing start, come up with a simple idea for an online business, and then implement a Minimum Viable Product that customers could use straight away, and that can be easily adapted based on their feedback and some basic metrics your application will collect.

Your product must be fully *automatically* tested, and under version control. Build and deployment must also be fully automated.

Your product should provide a secure mechanism for customers to register as users/members, and make payments for services. If monetised through advertising, the advertising mechanism (and facility to buy ad space) must be in place. The challenge is to have a functioning business by the end of the day that could actually trade.

You may find a credit card useful for this mini-project to buy cloud services, and may even wish to consider - if you and your co-founder(s) think it's a really good idea - putting your business on a legal footing by incorporating a company or partnership and issuing shares to your co-founders. Company formation can be done online, and costs about £11-£50, depending on the options you choose.

HINT: The secret to this exercise is agreeing on things and making sound decisions quickly

April 18, 2016

SC2016 Mini-Project: Code Risk Heat Map

Software Craftsmanship 2016 - Mini-Project

Code Risk Heat Map

Estimated Duration: 2-4 hours

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any


Create a tool that produces a "heat map" of your code, highlighting the parts (methods, classes, packages) that present the highest risk of failure.

Risk should be classified along 3 variables:

1. Potential cost of failure

2. Potential likelihood of failure

3. Potential system impact of failure (based on dependencies)

Use colour coding/gradation to visually highlight the hotspots (e.g., red = highest risk, green = lowest risk)

Also, produce a prioritised list of individual methods/functions in the code with the same information.
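As a hedged sketch of the scoring and colour-banding step - assuming (my assumption, not part of the brief) each method has already been rated 1-10 on the three risk variables:

```python
def risk_score(cost, likelihood, impact):
    # Combine the three ratings into one score. A simple product means
    # any very low rating damps the overall risk of that method.
    return cost * likelihood * impact

def heat_colour(score, max_score=1000):
    # Map the score onto a coarse red/amber/green gradation;
    # the band thresholds here are arbitrary.
    ratio = score / max_score
    if ratio >= 0.5:
        return "red"
    if ratio >= 0.2:
        return "amber"
    return "green"
```

Sorting methods by `risk_score` in descending order would also give you the prioritised list the brief asks for.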


April 15, 2016

You Can Tell A Lot About A Developer By What They're Like

Forgive me for adapting an old Harry Hill joke. But there's a universe of truth in it.

Over the last couple of years, I've noticed an increasing reliance on tertiary signifiers when employers evaluate software developers. While we pooh-poohed certifications as meaningless, we were busy falling into a very similar trap ourselves.

It's true that when a developer has a blog, and regularly attends dev events and participates in dev communities, and contributes to open source projects and shares their code on sites like GitHub - which has miraculously transformed itself from a hubris-inspired version control platform into a social network for programmers (with all the accompanying social problems that can happen when people congregate in spaces real and virtual) - this can be an indication of "passion" and "commitment" to their craft. And passion and commitment can be an indication that they put a lot of time into learning their craft. Which can lead to them being better software developers.


But then it becomes mandatory that you do these things, because you've heard that employers are looking for this now.

And then what you get is the same thing you get when people hear that a certain certification can get them a better-paid gig, and then - before you can say "sprint" - the market's flooded with people who are just making sure they tick that box.

GitHub accounts and blogs and speaker credentials and book credits aren't the gold standard they used to be as a result. And so, as 'twas ever thus, we have to go back to looking at their code, and at the candidates themselves, and judging them on what kind of software developer they are.

There are a lot of great developers who haven't put themselves out there like you and I might have. They hide their light under a bushel, and this is why I'm trying these days to look beyond the tertiary signifiers, afraid that I might miss some real hidden gems just because they're not "playing the game".