April 22, 2017

Learn TDD with Codemanship

20 Dev Metrics - 5. Impact of Failure

The fifth in my Twitter series 20 Dev Metrics builds on a previous metric, Feature Usage, to estimate the potential impact on end users when a piece of code fails.

Impact of Failure can be calculated by determining the critical path through the code (the call stack, basically) in a usage scenario and then assigning the value of Feature Usage to each method (or block of code, if you want to get down to that level of detail).



We do this for each usage scenario or use case, and add Feature Usage to methods every time they're in the critical path for the current scenario.

So, if a method is invoked in many high-traffic scenarios, its Impact of Failure score will be very high.
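To make that concrete, here's a rough sketch in Python of the accumulation. The scenario data and method names below are invented purely for illustration - in practice they'd come from your own usage stats and call traces.

    from collections import defaultdict

    def impact_of_failure(scenarios):
        """scenarios: list of (feature_usage, methods_in_critical_path) pairs."""
        impact = defaultdict(int)
        for usage, methods in scenarios:
            for method in methods:
                impact[method] += usage   # every method in the path inherits this scenario's usage
        return dict(impact)

    # Hypothetical usage data: (uses per week, methods exercised in that scenario)
    scenarios = [
        (5000, ["Order.place", "Payment.authorise", "Stock.reserve"]),
        (1200, ["Order.cancel", "Payment.refund"]),
        (5000, ["Basket.checkout", "Order.place", "Stock.reserve"]),
    ]
    print(impact_of_failure(scenarios))
    # Order.place and Stock.reserve score 10000; Payment.refund only 1200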

What you'll get at the end is a kind of "heat map" of your code, showing which parts of your code could do the most damage if they broke. This can help you to target testing and other quality assurance activities at the highest-value code more effectively.

This is, of course, a non-trivial metric to collect. You'll need a way to record what methods get invoked when, say, you run your customer tests. And each customer test will need an associated value for Feature Usage. Ideally, this would be a feature of customer testing tools. But for now, you'll have some DIY tooling to do, using whatever instrumentation and/or meta-programming facilities your language has available. You could also use a code coverage reporting tool, generating reports one customer test at a time to see which code was executed in that scenario.
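For instance, if your customer tests can be driven from Python, one crude DIY option is to hook the interpreter's trace facility while a scenario runs. This is a sketch, not production tooling, and the run_scenario callable is assumed to be one of your customer tests:

    import sys

    def methods_invoked(run_scenario):
        """Run one customer test and return the names of the functions it called."""
        called = set()
        def tracer(frame, event, arg):
            if event == "call":
                called.add(frame.f_code.co_name)   # co_qualname on Python 3.11+ gives Class.method
            return tracer
        sys.settrace(tracer)
        try:
            run_scenario()                         # execute the customer test
        finally:
            sys.settrace(None)                     # always switch tracing off again
        return called

Pair each returned set of methods with that scenario's Feature Usage value and you have the raw input for the heat map described above.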

In the next metric, we'll look at another factor in code risk that we can use to help us really pinpoint QA efforts.




April 21, 2017

Learn TDD with Codemanship

20 Dev Metrics - 4. Feature Usage

The next few metrics in my 20 Dev Metrics Twitter series are going to help us understand which parts of our code present the greatest risk of failure.

One metric that I'm always amazed to learn teams aren't collecting is Feature Usage. For a variety of reasons, it's useful to know which features of your software are used all the time and which features are used rarely (if ever).



When it comes to the quality of our code, feature usage is important because it guides our efforts towards things that matter more. It's one component of risk of failure that we need to know in order to effectively gauge how reliable any line of code (or method, or module, or component/service) needs to be. (We'll look at other components soon.)

Keeping logs of which features are being used is relatively straightforward; for a web application, your web server's logs might already reveal that information (if the action being performed can be identified from the URL). If you can't get that information for free, though, adding usage logging needs some thought.

What you definitely don't want to do is add logging code willy-nilly all across your code. That way lies madness. Look for ways to encapsulate usage logging in a single part of the code. For example, for a desktop application, using the Command pattern offers an opportunity to start with a Command base class or template class that logs an action, then invokes the helper method that does the actual work. Logs can be stored in memory during the user's session and occasionally sent as a small batch to a shared data store if performance is a problem.
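As a sketch of that idea - in Python rather than any specific desktop framework, with an invented command name - the base class records the usage and then hands off to the subclass:

    class Command:
        """Base class: record the usage, then delegate the real work to the subclass."""
        usage_log = []                 # in-memory buffer, flushed to a shared store periodically

        def execute(self):
            Command.usage_log.append(type(self).__name__)   # which feature was used
            self.do_execute()                               # template method

        def do_execute(self):
            raise NotImplementedError

    class PlaceOrder(Command):         # hypothetical concrete command
        def do_execute(self):
            print("placing order...")

    PlaceOrder().execute()             # usage_log now contains ["PlaceOrder"]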

For a web application or web service, the Front Controller pattern can be used to implement logging before any work is done. (e.g., for an ASP.NET application, a custom logging HttpModule.) Whatever the architecture, there's usually a way.
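To illustrate the same front controller idea outside ASP.NET, here's a minimal WSGI middleware sketch in Python; the wrapped application and the usage_log list are assumptions:

    class UsageLoggingMiddleware:
        """Record every request before handing off to the real application."""
        def __init__(self, app, usage_log):
            self.app = app
            self.usage_log = usage_log

        def __call__(self, environ, start_response):
            self.usage_log.append(environ.get("PATH_INFO", ""))   # which feature was requested
            return self.app(environ, start_response)

    # usage_log = []
    # application = UsageLoggingMiddleware(application, usage_log)   # wrap your existing WSGI app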

And feature usage is good to know for other reasons, of course. Marketing and product management would be interested, for a start. We're all guessing at what will have value. As soon as we deliver working software that people are using, it's time to test our theories.



April 20, 2017

Learn TDD with Codemanship

Still Time to Grab Your TDD 2.0 Tickets

Just a quick reminder about my upcoming Codemanship TDD training workshop in London on May 10-13. It's quite possibly the most hands-on TDD training out there, and great value at half the price of competing TDD courses.






Learn TDD with Codemanship

20 Dev Metrics - 3. Dev Effort Breakdown

The third in my Twitter series 20 Dev Metrics is Development Effort Breakdown.



I've come across many, many organisations who have software and systems that, with each release, require more and more time fixing bugs instead of adding or changing features. Things can get so bad that some products end up with almost all the available time being spent on fixes, and users having to wait months or years for requested changes - if they ever come at all.

This can get very expensive, too. A company who made a call centre management product found themselves needing multiple teams maintaining multiple versions of the product that were in use, all devoting most of their time just to bug fixes.

I'm suggesting this metric instead of the classic "defects per thousand lines of code", because what matters most about bugs - after the trouble they cause end users - is how they soak up developer time, leaving less time to add value to the product.

To track this metric, teams need to record roughly how much time they've spent completing work (with the usual caveat about definitions of "done"). If you're using story cards, a simple system is for the developers to take a moment to record on the card itself how many person-days or hours were spent on it. Your project admin can then tot up the numbers in a spreadsheet.

Remember to include bugs reported in testing as well as production. If someone reported it, and someone had to fix it, then it counts.
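Once you've got those rough numbers, the arithmetic is trivial. A minimal sketch, with made-up work items:

    def effort_breakdown(work_items):
        """work_items: (kind, person_days) pairs, where kind is 'feature' or 'bug fix'."""
        totals = {}
        for kind, days in work_items:
            totals[kind] = totals.get(kind, 0.0) + days
        grand_total = sum(totals.values())
        return {kind: round(100.0 * days / grand_total, 1) for kind, days in totals.items()}

    # A hypothetical month of story cards:
    print(effort_breakdown([("feature", 8), ("feature", 5), ("bug fix", 4), ("bug fix", 6)]))
    # -> {'feature': 56.5, 'bug fix': 43.5}

Tracked release after release, it's the trend in that percentage that tells the story.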



April 19, 2017

Learn TDD with Codemanship

20 Dev Metrics - 2. Cost of Changing Code

The second in my Twitter series 20 Dev Metrics is the Cost of Changing Code. This is a brute force metric - anything involving measuring lines of code tends to be - so handle with care. The temptation will be for teams to game this, and it's very easily gamed. As with all metrics, cost of changing code should be used as a diagnostic tool, not weaponised as a target or used as a stick to beat dev teams with.

NB: My advice is to collect these metrics under the radar, and use them to provide hindsight when establishing a link between the way you write code and the results the business gets.

The aim is to establish the trend for cost of change, so we can link it to our first metric, Lead Time to Delivery. There are two things we need to track in order to calculate this simple metric.

1. How much code is changing (lines added, modified and deleted) - often called "code churn". There are tools available for various VCS solutions that can do this for you.



2. How much that change cost (in developer time or wages). Your project admin should have this information available in some form. If there's one thing teams are measuring, it's usually cost.



Divide developer cost by code churn, and - hey presto - you have the cost of changing a line of code.
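If your code lives in Git, a rough-and-ready way to get the churn half of the equation is to total the added and deleted lines reported by git log --numstat. A sketch - the date range and the cost figure are things you'd supply yourself:

    import subprocess

    def code_churn(since, until):
        """Total lines added + deleted across all commits in the period."""
        output = subprocess.run(
            ["git", "log", "--numstat", "--pretty=format:",
             "--since=" + since, "--until=" + until],
            capture_output=True, text=True, check=True).stdout
        churn = 0
        for line in output.splitlines():
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():   # skips binary files
                churn += int(parts[0]) + int(parts[1])
        return churn

    # cost_per_line_changed = dev_cost_for_quarter / code_churn("2017-01-01", "2017-03-31")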

All teams experience an increase in the cost of changing code, but better teams tend to experience a flatter gradient. The worst teams have a "hockey stick" trend, where change becomes prohibitively expensive alarmingly soon. For example, one software company, after 8 years, had a cost of change 40x higher than at the start. Teams like that also tend to see lead times growing exponentially, as delivery gets slower and slower.

I dare your boss not to care about these first two metrics!




April 18, 2017

Learn TDD with Codemanship

20 Dev Metrics - 1. Lead Time to Delivery

On the Codemanship Twitter account, I've started posting a series of daily 'memes' called 20 Metrics for Dev Teams.

These are based on the health check some clients ask me to do to establish where teams are when I first engage with them as a coach. My focus, as a business, is on sustaining the pace of innovation, so these metrics are designed to build a picture of exactly that.

The first metric is, in my experience, the most important in that regard: how long does it take, after a customer asks the team for a change to the software, before it's delivered in a usable state? This is often referred to as the "lead time" to software delivery.

Software development's a learning process, and in any learning process, the time lag before we get useful feedback has a big impact on how fast we learn. While others may focus on the value delivered in each drop, the reality is that "value" is very subjective, and impossible to predict in advance. When the customer says "This feature has 10x the value of that feature to my business", it's just an educated guess. We still have to put working software in front of real end users to find out what really has value. Hence, the sooner we can get that feedback, the more we can iterate towards higher value.

This is why Lead Time to Delivery is my number one metric for dev teams.

To measure it, you need to know two things:



1. When did the customer request the change?

2. When did the customer have working software available to use (i.e., tested and deployed) that incorporated the completed change?

Many teams have repositories of completed user stories (e.g., in Jira - yes, I know!) from which this data can be mined historically. But the picture can be confusing, as too many teams have woolly definitions of "done". My advice is to start recording these dates explicitly, and to firm up your definition of "done" to mean specifically "successfully in production and in use".

As with weather and climate, what we should be most interested in is the trend of lead time as development progresses.
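Once you have the two dates for each completed story, charting that trend takes only a few lines of code. A sketch with invented dates:

    from datetime import date

    def lead_times(stories):
        """stories: (requested, delivered) date pairs; returns lead times in days, in delivery order."""
        ordered = sorted(stories, key=lambda story: story[1])
        return [(delivered - requested).days for requested, delivered in ordered]

    def rolling_average(values, window=5):
        """Smooths the series so the underlying trend is easier to see."""
        averages = []
        for i in range(len(values)):
            recent = values[max(0, i - window + 1):i + 1]
            averages.append(sum(recent) / len(recent))
        return averages

    # Hypothetical stories:
    stories = [(date(2017, 1, 2), date(2017, 1, 12)),    # 10 days
               (date(2017, 1, 20), date(2017, 2, 9)),    # 20 days
               (date(2017, 2, 10), date(2017, 3, 12))]   # 30 days
    print(rolling_average(lead_times(stories)))          # [10.0, 15.0, 20.0]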



Sustaining innovation means keeping lead times short. The majority of dev teams struggle to achieve this, and as the software grows and the unpaid technical debt piles up, lead times tend to grow. To some extent this is inevitable - entropy gets us all in the end - but our goal is to keep the trend as flat as we can, so we can keep delivering and our customer can keep learning their way to success.




April 17, 2017

Learn TDD with Codemanship

Do It Anyway

In my previous post, I suggested that developers take the necessary technical authority that managers are often unwilling to give.

If I could summarise my approach to software development - my "method", if you could call it that - it would be Do It Anyway™.

An example that recently came up in a Twitter conversation about out-of-hours support - something I have mixed feelings about - was how temporary fixes applied at 2am to patch up a broken system often end up being permanent, because the temptation's too great for managers to assign the fixer to seemingly more valuable work the next day.

Think of out-of-hours fixes as being like battlefield surgery. The immediate aim is to stop the patient from dying. The patient is then transported to a proper hospital for proper treatment. If the patient's intestines are being held in with sticky tape, that is not a lasting fix.

We've all seen what happens when the code fills up with quick'n'dirty fixes. It becomes less reliable and less maintainable. The cumulative effect increases failures in production, creates a barrier to useful change, and hastens the system's demise considerably.

My pithy - but entirely serious - advice in that situation is Do It Anyway™.

There are, of course, obligations implied when we Do It Anyway™. What we're doing must be in the best interests of stakeholders. Do It Anyway™ is not a Get Out of Jail Free card for any decision you might want to justify. We are making informed decisions on their behalf. And then doing what needs to be done. Y'know. Like professionals.

And this is where the rubber meets the road: if managers and customers don't trust us to make good decisions on their behalf, they'll try to stop us making decisions. It's a perfectly natural response. And to some extent we brought it on ourselves - as a profession - through decades of being a bit, well, difficult. We don't always do things in the customer's interests. Sometimes - gasp - we do stuff because it benefits us (e.g., adopting a fashionable technology), or because it's all we know to do (e.g., not writing automated tests because we don't know how), or just because it's the new cool thing (and we love new cool things!).

If we can learn to curb our excesses and focus more on the customer's problems instead of our solutions, then maybe managers and customers can learn to trust us to make technical decisions like whether or not a hot fix needs to be done properly later.




April 15, 2017

Learn TDD with Codemanship

IT's Original Sin: Separating Authority & Expertise

I'm recalling a contract I did a few years ago, where I was leading a new team on an XML publishing sort of thing. The company I was working for had been started by two entrepreneurs who had zero experience of the software industry. Which is fine, if you can hire experts who have the necessary experience.

On our little team, things were generally good, except for one fly in the ointment. The guy they'd trained up in this specific database technology we were using had his own ideas about building the whole thing in XQuery, and refused to go along with any technical decision made by the team.

Without his specific skillset, we couldn't deliver anything, and so he had the team over a barrel. It should have been easily fixed, though. I just had to go to the bosses, and get an executive decision. Case closed.

Except the bosses didn't know whether to believe me or believe him, because they knew nothing about what we were trying to do. And they were unwilling to concede authority to the team to make the decisions.

This is a common anti-pattern in the way organisations are run, and it seems to be getting worse. Our whole society now seems to be organised in a way that separates decision-making authority from expertise. Indeed, the more authority someone has, the less they tend to understand the ramifications of the decisions they're making.

In Britain, this is partly - I think - down to an historical belief in a "leadership class". Leaders go to certain good (fee-paying) schools, they get classical educations that focus on ancient languages, history, literature etc, and they study purely academic non-technical subjects at university like Politics or History. They're prepared for the burden of command by their upbringing - "character building" - rather than their actual education or training. They're not actually qualified to do anything. They're not scientists, or doctors, or nurses, or engineers, or designers, or programmers, or builders, or anything drearily practical like that. They're leaders.

These are the people who tend to end up in the boardrooms of Britain's biggest companies. These are the people who end up in charge of government departments. These are the people who are in charge of TV & radio.

And hence we get someone making the big decisions about healthcare who knows nothing about medicine or about running hospitals or ambulance services. And we get someone in charge of all the schools who knows nothing about teaching or running a school. And we get someone in charge of a major software company whose last job was being in charge of a soft drinks company. And so on.

Again, this is fine, if they leave the technical decisions to the technical experts. And that's where it all falls down, of course. They don't.

The guy in charge of the NHS insists on telling doctors and nurses how they should do their jobs. The woman in charge of UK schools insists on overriding the expertise of teachers. The guy in charge of a major software company refuses to listen to the engineers about the need for automated testing. And so on.

This is the Dunning-Kruger effect writ large. CEOs and government ministers are brimming with the overconfidence of someone who doesn't know that they don't know.

Recently, I got into a Twitter conversation with someone attending the launch of the TechNation report here in the UK. It was all about digital strategy, and I had naively asked if there were any software developers at the event. His response was "Why would we want software developers involved in digital strategy?" As far as he was concerned, it was none of our goddamned business.

I joked that it was like having an event called "HealthNation" and not inviting any doctors, or "SchoolNation" and not inviting any teachers. At which point, doctors and teachers started telling me that this has in actual fact happened, and happens often.

The consequences for everyone when you separate authority and expertise can be severe. If leaders can't learn humility and leave the technical decisions to the technical experts - if that authority won't be willingly given - then authority must be taken. The easiest way - for the people who do the actual work and make the wheels turn - is to simply not ask for permission. That was our first mistake.





March 22, 2017

Learn TDD with Codemanship

Digital Strategy: Why Are Software Developers Excluded From The Conversation?

I'm running another Twitter poll - yes, me and my polls! - spurred on by a conversation I had with a non-technical digital expert (not sure how you can be a non-technical digital expert, but bear with me) in which I asked if there were any software developers attending the government's TechNation report launch today.




This has been a running bugbear for me: the apparent unwillingness to include the people who create the tech in discussions about creating tech. Imagine a 'HealthNation' event where doctors weren't invited...

The response spoke volumes about how our profession is viewed by the managers, marketers, recruiters, politicians and other folk sticking their oars into our industry. Basically, strategy is none of our beeswax. We just write the code. Other geniuses come up with the ideas and make all the important stuff happen.

I don't think we're alone in this predicament. Teachers tell me that there are many events about education strategy where you'll struggle to find an actual educator. Doctors tell me likewise that they're often excluded from the discussion about healthcare. We professionals are just there to do as we're told, it seems. Often by people with no in-depth understanding of what it is they're telling us to do.

This particular person asserted that software development isn't really that important to the economy. This flies in the face of newspaper story after newspaper story about just how critical it is to the functioning of our modern economy. The very same people who see no reason to include developers in the debate also tell us that in decades to come, all businesses will be digital businesses.

So which is it? Is digital technology - that's computers and software to you and me - now so central to modern life that our profession is as critical today as masons were in medieval times? Or can any fool 'code' software, and what's really needed is better PR people?

What's clear is that highly skilled professions like software development - and teaching, and medicine - have had their status undermined to the point where nobody cares what the practitioner thinks any more. Except other practitioners, perhaps.

This runs hand-in-hand with a gradual erosion in real terms of earnings. A developer today, on average, earns 25% less than she did 10 years ago. This trend goes against the grain of the "skills crisis" narrative, and more accurately portrays the real picture, where developers are seen as cogs in the machine.

If we're not careful, and this trend continues, software development could become such a low-status profession in the UK that the smartest people will avoid it. This, coupled with a likely brain drain after Brexit, will have the reverse effect to what initiatives like TechNation are aiming for. Despite managers and marketers seeing no great value in software developers, where the rubber meets the road, and some software has to be created, we need good ones. Banks need them. Supermarkets need them. The NHS needs them. The Ministry of Defence needs them. Society needs them.

But I'm not sure they're going about the right way of getting them. Certainly, ignoring the 400,000 people in the UK who currently do it doesn't send the best message to smart young people who might be considering it. They could be more attracted to working in tech marketing, or for a VC firm, or in the Dept of Trade & Industry formulating tech policy.

But if we lack the people to actually make the technology work, what will they be marketing, funding and formulating policy about? They'd be a record industry without any music.

Now, that's not to say that marketing and funding and government policy don't matter. It's all important. But if it's all important, why are the people who really make the technology excluded from the conversation?






March 21, 2017

Learn TDD with Codemanship

Poll Indicates Possibly Epic Brexit Brain Drain

A small poll I ran on the Codemanship Twitter account paints a pretty grim picture for software development in the UK after Brexit.



Of the 265 people who responded, 12% said they had already left the UK. 10% had plans to leave. And a very worrying 26% were considering leaving.

These numbers aren't entirely surprising, when you consider an earlier poll showed about 80% of developers opposed Brexit, and nearly half of devs working in Britain are immigrants, many of whom suddenly don't feel very welcome and are struggling to live with the uncertainty about whether or not they'll be allowed to stay.

The Codemanship take on this is that we shouldn't be surprised when a profession as necessarily international as ours suffers under nationalism. It simply isn't possible for any nation to go it alone in computing.

But my fears are worse for the potential knock-on effects on the wider economy of a major brain drain. Think of all the investment projects - both in the private and public sectors - that have a significant computing component.

The short-termist may think "hoorah, less foreign devs = more work and higher pay for me!", but I fear they're just not thinking it through. If projects are too difficult to staff - and building good dev teams is already hard enough - then the natural (and perfectly legal) response from investors will be to take those projects elsewhere.

Government is finally catching on to just how reliant UK plc is on IT, but has yet to make the leap to recognising just how reliant UK plc is on people who write software. Even a brain drain of 10% would be likely to hurt economic performance. 25%+ is likely to be a disaster. And as more of the better devs leave, more good devs will be encouraged to follow.

They'll say, of course, that they're investing in addressing the skills shortage. But when you look at how much is being invested, you realise it's just a bit of PR, really. That budget needs several extra zeros adding to it to have any noticeable impact. And even when it does, the gap won't be filled overnight. It takes many years to grow a software developer. Increasingly, 2019 looks like a cliff edge.

The obvious solution, of course, is to remain in the single market and continue to accept freedom of movement. But this is looking unlikely now.

What are we to do?

I believe there could be a way forward, but it would require us to jump some hurdles that software managers have traditionally balked at. In particular, it will require us to favour smaller teams of better developers. And we know that most dev teams could be working smarter, paying more attention to quality, doing smaller releases more often, and so on. In other words, we might be able to ride out the brain drain by getting better at software development.