April 26, 2016

SC2016 Codemanship Stocking Filler Mini-Projects

Software Craftsmanship 2016 - Codemanship Stocking Filler Mini-Projects




Screencast A Habit



Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any that's screencast-able

Summary:



Record a screencast to demonstrate a single good coding habit you believe is important (e.g., "always run the tests after every refactoring").

Demonstrate how not to do it (and what the consequences of not doing it might be), as well as how to do it well.

Test your screencast on your fellow SC2016 participants.

Please do NOT upload your screencast to the web until after the event. The Wi-Fi won't take it!


Learn Your IDE's Shortcuts - The Hard Way



Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any

Summary:



Disable the mouse and/or trackpad on your computer, and attempt a TDD kata (here are some the Web made earlier - https://www.google.co.uk/?q=tdd+katas) using only the keyboard.


Adversarial Pairing - Programmers At War!!!



Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any

Summary:



Choose a TDD kata (here are some the Web made earlier - https://www.google.co.uk/?q=tdd+katas).

Person A starts by writing the first failing test.

Person B takes over and writes the simplest evil code that passes the test, but is obviously not what was intended.

Person A must improve the test - or add another test - to steer it back to the original intent.

If Person B can still see a way to do evil with that improved test, they should do it.

And rinse and repeat until the intended behaviour or rule has been correctly implemented - that is to say, for any valid input, the code will produce the correct result for that behaviour or rule.

Then swap over for the next behaviour or rule in the kata (if there is one), so Person B starts by writing a test, and Person A writes the simplest evil code to pass it.

If time allows, and you have access to a mutation testing tool for your programming language, subject your project code to mutation testing to see how well tested it is.
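
To make the back-and-forth concrete, here's a minimal sketch of one round - in Java with JUnit 4, using a hypothetical leap year kata; the class and test names are my own invention, not part of the exercise:

// Person A's opening test:
import static org.junit.Assert.*;
import org.junit.Test;

public class LeapYearTest {
    @Test
    public void yearDivisibleByFourIsALeapYear() {
        assertTrue(new LeapYear().isLeap(2016));
    }

    // Person A's comeback: a test the evil implementation below can't survive.
    @Test
    public void yearNotDivisibleByFourIsNotALeapYear() {
        assertFalse(new LeapYear().isLeap(2015));
    }
}

// Person B's "simplest evil code" that passed the first test:
class LeapYear {
    boolean isLeap(int year) {
        return true; // every year is a leap year - passes the first test, fails the second
    }
}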


If I Ruled The (Coding) World...



Estimated Duration: 30-60 mins

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any

Summary:



The programmer ballots have been counted, and you have been elected the Programmer President of The World.

You can - by decree - change any one thing about the way we all write software. JUST ONE THING.

What would it be?

Use pictures, sample code, fake screenshots, plasticine expression or interpretative dance to illustrate your single decree as Programming President. Stick them on the web where we can see them.



Extra Spaces At Software Craftsmanship 2016 (London)



Good news for code crafters in the London area: we've made room for 10 more people at SC2016 on Saturday, May 14th.



Bring a laptop, find someone to pair with (or someones to mob with if a projector is free), pick a mini-project from our menu of fun, challenging and crafty code excursions, and do what you do best!

Details and tickets can be found at https://www.eventbrite.co.uk/e/software-craftsmanship-2016-london-tickets-21666084843






April 25, 2016

Mutation Testing & "Debuggability"

More and more teams are waking up to the benefit of checking the levels of assurance their automated tests give them.

Assurance, as opposed to coverage, answers a more meaningful question about our regression tests: if the code was broken, how likely is it that our tests would catch that?

To answer that question, you need to test your tests. Think of bugs as crimes in your code, and your tests as police officers. How good are your code police at detecting code crimes? One way to check would be to deliberately commit code crimes - deliberately break the code - and see if any tests fail.

This is a practice called mutation testing. We can do it manually, while we pair - I'm a big fan of that - and we can do it using one of the increasingly diverse (and rapidly improving) mutation testing tools available.

For Java, for example, there are tools like Jester and PIT. What they do is take a copy of your code (with unit tests), and "mutate" it - that is, make a single change to a line of code that (theoretically) should break it. Examples of automated mutations include turning a + into a -, or a < into <=, or ++ into --, and so on.

After it's created a "mutant" version of the code, it runs the tests. If one or more tests fail, then they are said to have "killed the mutant". If no test fails, then the mutant survives, and we may need to have a think about whether that line of code that was mutated is being properly tested. (Of course, it's complicated, and there will be some false positives where the mutation tool changed something we don't really care about. But the results tend to be about 90% useful, which is a boon, IMHO.)
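
To make that concrete, here's a tiny invented example (the Discount class and its test are mine, not from PIT's documentation):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Production code: a 10% discount.
class Discount {
    double apply(double price) {
        return price - (price * 0.1);
    }
}

// A mutation tool might flip the '-' to a '+', so apply(100.0) returns 110.0.
// This test kills that mutant, because it pins down the exact expected value:
public class DiscountTest {
    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(90.0, new Discount().apply(100.0), 0.001);
    }
}

// A weaker test - say, one that only asserted the result isn't negative -
// would let the same mutant survive, and that's the gap the report points at.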

Here's a mutation testing report generated by PIT for my Combiner spike:



Now, a lot of this may not be news for many of you. And this isn't really what this blog post is about.

What I wanted to draw your attention to is that - once I've identified the false positives in the report - the actual level of assurance looks pretty high (about 95% of mutations I cared about got killed.) Code coverage is also pretty high (97%).

While my tests appear to be giving me quite high assurance, I'm worried that may be misleading. When I write spikes - intended as proofs of concept and not to be used in anger - I tend to write a handful of tests that work at a high level.

This means that when a test fails, it may take me some time to pinpoint the cause of the problem, as it may be buried deep in the call stack, far removed from the test that failed.

For a variety of good reasons, I believe that tests should stick close to the behaviour being tested, and have only one reason to fail. So when they do fail, it's immediately obvious where and what the problem might be.

Along with a picture of the level of assurance my tests give me, I'd also find it useful to know how far removed from the problem they are. Mutation testing could give me an answer.

When tests "kill" a mutant version of the code, we know:

1. which tests failed, and
2. where the bug was introduced

Using that information, we can calculate the depth of the call stack between the two. If multiple tests catch the bug, then we take the shallowest depth out of those tests.

This would give me an idea of - for want of a real word - the debuggability of my tests (or rather, the lack of it). The shallower the depth between bugs and failing tests, the higher the debuggability.
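
Here's a rough sketch of how that might be computed, assuming we could capture a stack trace at the moment the mutated line executes under each failing test - which isn't something PIT gives you out of the box, so treat this as illustration rather than a working tool:

import java.util.List;

class Debuggability {
    // Given stack traces captured at the mutated line (top of each trace) while
    // each failing test was running, count the frames between the mutated code
    // and the test method, and return the shallowest depth found.
    static int shallowestDepth(List<StackTraceElement[]> tracesAtMutatedLine,
                               String testClassName) {
        int shallowest = Integer.MAX_VALUE;
        for (StackTraceElement[] trace : tracesAtMutatedLine) {
            for (int depth = 0; depth < trace.length; depth++) {
                if (trace[depth].getClassName().equals(testClassName)) {
                    shallowest = Math.min(shallowest, depth);
                    break;
                }
            }
        }
        return shallowest; // lower = more "debuggable"
    }
}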

I also note a relationship between debuggability and assurance. In examining mutation testing reports, I often find that the problem is that my tests are too high-level, and if I wrote more focused tests closer to the code doing that work, they would catch edge cases I didn't think about at that higher level.



April 23, 2016

Does Your Tech Idea Pass The Future Dystopia Test?

One thing that at times fascinates and at times appals me is the social effect that web applications can have on us.

Human beings learn fast, but evolve slowly. Hence we can learn to program a video recorder, but living a life that revolves around video recorders can be toxic to us. For all our high-tech savvy, we are still basically hominids, adapted to run from predators and pick fleas off of each other, but not adapted for Facebook or Instagram or Soundcloud.

But the effects of online socialisation are now felt in the Real World - you know, the one we used to live in? People who, just 3-4 years ago, were confined to expressing their opinions on YouTube are now expressing them on my television and making pots of real money.

Tweets are building (and ending) careers. Soundcloud tracks are selling out tours. Facebook viral posts are winning elections. MySpace users are... well, okay, maybe not MySpace users.

For decades, architects and planners have obsessed over the design of the physical spaces we live and work in. The design of a school building, they theorise, can make a difference to the life chances of the students who learn in it. The design of a public park can increase or decrease the chances of being attacked in it. Pedestrianisation of a high street can breathe new life into local shops, and an out-of-town shopping mall can suck the life out of a town centre.

Architects must actively consider the impact of buildings on residents, on surrounding communities, on businesses, on the environment, when they create and test their designs. Be it for a 1-bed starter home, or for a giant office complex, they have to think about these things. It's the law.

What thought, then, do software developers give to the social, economic and environmental impact of their application designs?

With a billion users, a site like Facebook can impact so many lives just by adding a new button or changing their privacy policy.

Having worked on "Web 2.0" sites of all shapes and sizes, I have yet to see teams and management go out of their way to consider such things. Indeed, I've seen many occasions when management have proposed features of such breathtaking insensitivity to wider issues that it's easy to believe we don't really think much about it at all. That is, until it all goes wrong, and the media are baying for our blood, and we're forced to change to keep our share price from crashing.

This is about more than reliability (though reliability would be a start).

Half-jokingly, I've suggested that teams put feature requests through a Future Dystopia Test; can we imagine a dark, dystopian, Philip K Dick-style future in which our feature has caused immense harm to society? Indeed, whole start-up premises fail this test sometimes. Just hearing some elevator pitches conjures up Blade Runner-esque and Logan's Run-ish images.

I do think, though, that we might all benefit from devoting a little time to considering the potential negative effects of what we're creating before we create it, as well as closely monitoring those effects once it's out there. Don't wait for that hysterical headline "AcmeChat Ate My Hamster" to appear before asking yourself if the fun hamster-swallowing feature the product owner suggested might not be such a good thing after all.


This blog post is gluten free and was not tested on animals






April 21, 2016

SC2016 Mini-Project: Connect Four Bot

Software Craftsmanship 2016 - Mini-Project



Connect Four Bot



Estimated Duration: 1-2 hours

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any that can support a suitable UI

Summary:





Test-drive a bot that can play Connect 4 against a human opponent.

Connect 4 is a game for 2 players, each player having round pieces of a specific colour (e.g., red or yellow). It presents players with a vertical game grid of 7 columns of 6 slots. Players take it in turns to insert one of their pieces into one of the columns at the top. The piece then falls down the column to occupy the lowest empty slot.

When a column is full, players can no longer insert pieces into it.

The goal is to place four of your pieces in an unbroken line - horizontal, vertical or diagonal - before your opponent does. If neither player achieves a line of four, the game is a draw.

To potentially facilitate bot tournaments, ensure that your bot is cleanly separated from the game and UI and can be deployed as a microservice if necessary.
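
One way (of many) to keep that separation clean is to reduce the bot to a single function from board state to chosen column. The interface below is just a sketch of mine, not part of the project spec:

// Board is a 6x7 grid of cells: 0 = empty, 1 = the bot's piece, 2 = the opponent's.
public interface ConnectFourBot {
    // Returns the index (0-6) of the column to drop a piece into.
    // Must be a legal move: the chosen column must not already be full.
    int chooseColumn(int[][] board);
}

Because the bot is just "board in, column out", the same implementation can sit behind a JSON endpoint for the advanced tournament-server option, without the game or UI knowing anything about how it decides.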

ADVANCED:

Create a Connect Four tournament web server that pits bots deployed as JSON microservices against each other.











April 20, 2016

A* - A Truly Iterative Development Process

Much to my chagrin - having promoted the idea for so many years - the software development industry still hasn't caught on to the notion that what we ought to be doing is iterating towards goals.

NOT working through a queue of tasks. NOT working through a queue of features.

Working towards a goal. A testable goal.

We, as an industry, have many names for working through queues: Agile, Scrum, Kanban, Feature-driven Development, the Unified Process, DSDM... All names for "working through a prioritised list of stuff that needs to be done or delivered". Of course, the list is allowed to change depending on feedback. But the goal is usually missing. Without the goal, what are we iterating towards?

Ironically, working through a queue of items to be delivered isn't iterating - something I always understood to be the whole point of Agile. But, really, iterating means repeating a process, feeding back the results of each cycle, until we reach some goal. Reaching the goal is when we're done.

What name do we give to "iterating towards a testable goal"? So far, we have none. Buzzword Bingo hasn't graced the door of true iterative development yet.

Uncatchy names like goal-driven development and competitive engineering do exist, but haven't caught on. Most teams still don't have even a vague idea of the goals of their project or product. They're just working through a list that somebody - a customer, a product owner, a business analyst - dreamed up. Everyone's assuming that somebody else knows what the goal is. NEWSFLASH: They don't.

The Codemanship way compels us to ditch the list. There is no release plan. Only business/user goals and progress. Features and change requests only come into focus for the very near future. The question that starts every rapid iteration is "where are we today, and what's the least we could do today to get closer to where we need to be?" Think of development as a graph algorithm: we're looking for the shortest path from where we are to some destination. There are many roads we could go down, but we're particularly interested in exploring those that bring us closer to our destination.

Now imagine a shortest-path algorithm that has no concept of destination. It's just a route map, a plan - an arbitrary sequence of directions that some product owner came up with that we hope will take us somewhere good, wherever that might be. Yup. It just wouldn't work, would it? We'd have to be incredibly lucky to end up somewhere good - somewhere of value.
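
For anyone who hasn't bumped into it, A* is the classic shortest-path algorithm that works exactly this way: explore routes in order of cost so far plus an estimate of the remaining distance to the goal. A bare-bones sketch in Java, with the details invented purely for illustration:

import java.util.*;

class AStar {
    // Explore routes in order of (cost so far + estimated distance to the goal).
    // Take away the goal and the estimate, and all that's left is a plan:
    // an ordering of steps with no destination.
    static List<String> route(String start, String goal,
                              Map<String, Map<String, Integer>> edges,
                              Map<String, Integer> distanceToGoal) {
        Map<String, Integer> costSoFar = new HashMap<>();
        Map<String, String> cameFrom = new HashMap<>();
        PriorityQueue<String> frontier = new PriorityQueue<>(
                Comparator.comparingInt(n -> costSoFar.get(n) + distanceToGoal.get(n)));
        costSoFar.put(start, 0);
        frontier.add(start);
        while (!frontier.isEmpty()) {
            String current = frontier.poll();
            if (current.equals(goal)) break; // we're done when we reach the goal
            for (Map.Entry<String, Integer> step : edges.getOrDefault(current, Map.of()).entrySet()) {
                int newCost = costSoFar.get(current) + step.getValue();
                if (newCost < costSoFar.getOrDefault(step.getKey(), Integer.MAX_VALUE)) {
                    costSoFar.put(step.getKey(), newCost); // found a cheaper way there
                    cameFrom.put(step.getKey(), current);
                    frontier.remove(step.getKey());
                    frontier.add(step.getKey());
                }
            }
        }
        List<String> path = new ArrayList<>(); // walk back from the goal
        for (String n = goal; n != null; n = cameFrom.get(n)) path.add(0, n);
        return path;
    }
}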

And so it is - in my quest for a one-word name to describe "iteratively seeking the shortest (cheapest) path to a testable goal" - I propose simply A*.

As in:

"What method are we following on this project?"

"A*"

Of course, there are prioritised lists in my A* method: but they are short and only concern themselves with what we're doing next to TRY to bring us closer to our goal. Teams meet every few days (or every day, if you're really keen), assess progress made since last meeting, and come up with a very short plan, the results of which will be assessed at the next meeting. And rinse and repeat.

In A*, the product owner has no vision of the solution, only a vision of the problem, and a clear idea of how we'll know when that problem's been solved. Their primary role is to tell us if we're getting warmer or colder with each short cycle, and to help us identify where to aim next.

They don't describe a software product, they describe the world around that product, and how it will be changed by what we deliver. We ain't done until we see that change.

This puts a whole different spin on software development. We don't set out with a product vision and work our way through a list of features, even if that list is allowed to change. We work towards a destination - accepting that some avenues will turn out to be dead-ends - and all our focus is on finding the cheapest way to get there.

And, on top of all that, we embrace the notion that the destination itself may be a moving target. And that's why we don't waste time and effort mapping out the whole route beyond the near future. Any plan that tries to look beyond a few days ends up being an expensive fiction that we become all too easily wedded to.






April 19, 2016

SC2016 Mini-Project: Light-Speed Startup - A Serious Team Dojo

Software Craftsmanship 2016 - Mini-Project



Light-Speed Start-Up: A Serious Team Dojo



Estimated Duration: 8+ hours

Author: Jason Gorman, Codemanship

Language(s)/stacks: Web Full Stack / Cloud Services

Summary:



Could you start an online business in a single day?

From a standing start, come up with a simple idea for an online business, and then implement a Minimum Viable Product that customers could use straight away, and that can be easily adapted based on their feedback and some basic metrics your application will collect.

Your product must be fully *automatically* tested, and under version control. Build and deployment must also be fully automated.

Your product should provide a secure mechanism for customers to register as users/members, and make payments for services. If monetised through advertising, the advertising mechanism (and facility to buy ad space) must be in place. The challenge is to have a functioning business by the end of the day that could actually trade.

You may find a credit card useful for this mini-project to buy cloud services, and may even wish to consider - if you and your co-founder(s) think it's a really good idea - putting your business on a legal footing by incorporating a company or partnership and issuing shares to your co-founders. Company formation can be done online, and costs about £11-£50, depending on the options you choose.

HINT: The secret to this exercise is agreeing on things and making sound decisions quickly





April 18, 2016

SC2016 Mini-Project: Code Risk Heat Map

Software Craftsmanship 2016 - Mini-Project



Code Risk Heat Map



Estimated Duration: 2-4 hours

Author: Jason Gorman, Codemanship

Language(s)/stacks: Any

Summary:



Create a tool that produces a "heat map" of your code, highlighting the parts (methods, classes, packages) that present the highest risk of failure.

Risk should be classified along 3 variables:

1. Potential cost of failure

2. Potential likelihood of failure

3. Potential system impact of failure (based on dependencies)

Use colour coding/gradation to visually highlight the hotspots (e.g., red = highest risk, green = lowest risk)

Also, produce a prioritised list of individual methods/functions in the code with the same information.
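
As a sketch of the kind of scoring that could sit behind the heat map - the weighting and thresholds here are assumptions of mine, not part of the brief:

// Combine the three risk variables into one score per method/class/package,
// then map the score onto a colour for the heat map.
class RiskScore {
    // Each input normalised to the range 0.0 (negligible) .. 1.0 (severe).
    static double score(double costOfFailure,
                        double likelihoodOfFailure,
                        double impactViaDependencies) {
        return (costOfFailure + likelihoodOfFailure + impactViaDependencies) / 3.0;
    }

    // Simple gradation; a smoother red-to-green gradient would work just as well.
    static String colour(double score) {
        if (score >= 0.66) return "red";
        if (score >= 0.33) return "amber";
        return "green";
    }
}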


PLEASE INSTALL ANY METRICS OR CODE ANALYSIS TOOLS YOU MIGHT WANT TO USE *BEFORE* THE EVENT

April 15, 2016

You Can Tell A Lot About A Developer By What They're Like

Forgive me for adapting an old Harry Hill joke. But there's a universe of truth in it.

Over the last couple of years, I've noticed an increasing reliance on tertiary signifiers when employers evaluate software developers. While we pooh-poohed certifications as meaningless, we were busy falling into a very similar trap ourselves.

It's true that when a developer has a blog, and regularly attends dev events and participates in dev communities, and contributes to open source projects and shares their code on sites like GitHub - which has miraculously transformed itself from a hubris-inspired version control platform into a social network for programmers (with all the accompanying social problems that can happen when people congregate in spaces real and virtual) - this can be an indication of "passion" and "commitment" to their craft. And passion and commitment can be an indication that they put a lot of time into learning their craft. Which can lead to them being better software developers.

Until...

...it becomes mandatory that you do these things, because you've heard that employers are looking for this now.

And then what you get is the same thing you get when people hear that a certain certification can get them a better-paid gig, and then - before you can say "sprint" - the market's flooded with people who are just making sure they tick that box.

GitHub accounts and blogs and speaker credentials and book credits aren't the gold standard they used to be as a result. And so, as 'twas ever thus, we have to go back to looking at their code, and at the candidates themselves, and judging them on what kind of software developer they are.

There are a lot of great developers who haven't put themselves out there like you and I might have. They hide their light under a bushel, and this is why I'm trying these days to look beyond the tertiary signifiers, afraid that I might miss some real hidden gems just because they're not "playing the game".



The Rise & Rise Of Test-Driven Development

As an advocate of Test-Driven Development, having practiced it myself for most of my professional career and seen what results are possible - both in terms of software quality, and the economic benefits of simple, maintainable code that's always shippable - I'm keen not to ram it down the throat of anyone who simply isn't interested in doing it.

TDD is not compulsory. Do it. Don't do it. We can judge the end result on the - er - end result.

But I would be remiss not to point out, to developers who care about their careers and want to keep up with the ever-changing demands of employers, that - although TDD isn't compulsory - it is rapidly becoming de rigueur.

It took a long while for TDD to enter the mainstream, but since about 2006-7, demand for developers who can do it - or at least claim they can do it (that's a whole other blog post right there) - has been showing pretty strong, consistent growth. In 2006, almost no employers asked for TDD on a candidate's CV. In 2016, about 1 in 5 developer jobs advertised in the UK requires it.

That trend shows little sign of slowing, so it's entirely possible that in 10 years' time half the developer jobs going will require you to have TDD skills and experience. And the other 50% would probably be jobs you really don't want to apply for.

The demand for TDD skills and experience is further evidenced by the fact that the average pay for a developer with TDD skills is about 10% higher than for one without.

So, yes, TDD isn't compulsory. But organisations are increasingly viewing it as a foundation for cool stuff they really want, like Continuous Delivery. Keeping your code in a continuously shippable state requires the ability to regression test it quickly, cheaply and very frequently. One of the neato side-benefits of TDD is - when done well - good, fast-running automated regression tests.

And yes, you could write those tests after you've written the code if TDD's not your cup of tea. But - guess what? - I've found that teams who tackle test automation after the fact have a tendency not to be anywhere near as thorough, leading to lower confidence that the code really does work.

A straw poll I ran on Twitter a couple of days ago found that only 4% of respondents didn't already know how to do TDD and had no plans to learn. It's a self-selecting audience, of course. But even in my social media echo chamber, 96% is strikingly high.

TDD has already crossed the chasm, and now it's waving a 10% pay rise at you from the other side.