May 22, 2017


20 Dev Metrics - 19. Progress

Some folk have - quite rightly - asked "Why bother with a series on metrics?" Hopefully, I've vindicated myself with a few metrics you haven't seen before. And number 19 in the series of 20 Dev Metrics is something that I have only ever seen used on teams I've led.

When I reveal this metric, you'll roll your eyes and say "Well, duh!" and then go back to your daily routine and forget all about it, just like every other developer always has. Which is ironic, because - out of all the things we could possibly measure - it's indisputably the most important.



The one thing that dev teams don't measure is actual progress towards a customer goal. The Agile manifesto claimed that working software is the primary measure of progress. This is incorrect. The real measure of progress is vaguely alluded to with the word "value". We deliver "value" to customers, and that has somehow become confused with working software.

Agile consultants talk of the "flow of value", when what they really mean is the flow of working software. But let's not confuse buying lottery tickets with winning jackpots. What has value is not the software itself, but what can be achieved using the software. All good software development starts there.

If an app to monitor blood pressure doesn't help patients to lower their blood pressure, then what's the point? If a website that matches singles doesn't help people to find love, then why bother? If a credit scoring algorithm doesn't reduce financial risk, it's pointless.
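
To make that concrete, here's a hypothetical sketch - the class, numbers and names are all invented - of measuring progress against the blood pressure goal itself, rather than against features shipped:

    // Hypothetical sketch: progress towards the customer's goal, computed
    // from measured outcomes. Baseline and target are the customer's numbers.
    public class GoalProgress {

        // e.g., average systolic BP: 160 at baseline, customer's target of 120
        static double progressTowards(double baseline, double target, double current) {
            return (baseline - current) / (baseline - target);
        }

        public static void main(String[] args) {
            double progress = progressTowards(160, 120, 150); // latest measurement: 150
            System.out.printf("Progress towards goal: %.0f%%%n", progress * 100); // 25%
        }
    }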

At the heart of IT's biggest problems lies this failure of almost all development teams to address customers' end goals. We ask the customer "What software would you like us to build?", and that's the wrong question. We effectively make them responsible for designing a solution to their problem, and then - at best - we deliver those features to order. (Although, let's face it, most teams don't even do that.)

At the foundations of Agile Software Development, there's this idea of iterating rapidly towards a goal. Going back as far as the mid-1970s, with the germ of Rapid Development, and the late 1980s, with Tom Gilb's ideas of an evolutionary approach to software design driven by testable goals, the message was always there. But it got lost under a pile of daily stand-ups and burndown charts and weekly show-and-tells.

So, number 19 in my series is simply Progress. Find out what it is your customer is trying to achieve. Figure out some way of regularly testing to what extent you've achieved it. And iterate directly towards each goal. Ditch the backlog, and stop measuring progress by tasks completed or features delivered. It's meaningless.

Unless, of course, you want the value of what you create to be measured by the yard.




May 19, 2017


A Clean Code Language?

Triggered by a tweet about the designers of Python boasting that functions can now accept 255 parameters - I mean, why, really? - my magpie mind has been buzzing with the notion of how language designs (and compilers) could enforce clean code.



For example, what if the maximum number of parameters was just three? Need more data for your method than that? Maybe a parameter object is required. Or maybe that method does too much, and that's why it has so many parameters. You would have to fix the underlying problem.
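
For example (a sketch - all the names are invented), the fix might be Introduce Parameter Object:

    // Before: six parameters - a smell the compiler would now force us to fix
    // void createAccount(String firstName, String lastName, String email,
    //                    String street, String city, String postcode)

    // After: data that travels together becomes an object
    record Name(String first, String last) {}
    record Address(String street, String city, String postcode) {}

    class Accounts {
        void createAccount(Name name, String email, Address address) {
            // ...three parameters, and a design that tells us more
        }
    }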

And what if the maximum number of branches or loops in a method was one? Need another branch? You'd have to create another method for it and compose your conditionals that way. Or maybe replace conditionals with a polymorphic solution.
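
Again, a sketch of what that rule might force on us (invented names) - a conditional replaced with polymorphism:

    // Before: a method with a switch on a "shipping type" field would no
    // longer compile. After: each branch becomes a class, and each method
    // has zero branches.
    interface Shipping {
        double costFor(double weightKg);
    }

    class StandardShipping implements Shipping {
        public double costFor(double weightKg) { return 3.0 + 0.5 * weightKg; }
    }

    class ExpressShipping implements Shipping {
        public double costFor(double weightKg) { return 8.0 + 1.0 * weightKg; }
    }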

And what if objects that aren't at the top of the call stack weren't allowed to instantiate other objects? You'd have to pass their collaborators in through a constructor or another method.
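
In other words, dependency injection would stop being optional. A minimal sketch (names invented):

    interface Mailer { void deliver(String message); }

    class InvoiceSender {
        private final Mailer mailer;

        // No "new SmtpMailer()" in here - the collaborator is passed in
        InvoiceSender(Mailer mailer) { this.mailer = mailer; }

        void send(String invoiceText) { mailer.deliver(invoiceText); }
    }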

And what if untested code caused a compile error?

And so on. Hopefully, you get the picture. Now, I'm just thinking out loud, but I think this could be at the very least a valuable thought experiment. By articulating design rules in ways that a compiler (or pre-compiler) might be able to enforce, I'm clarifying in my own mind what those rules really are.

What rules would your Clean Code language have?





20 Dev Metrics - 18. External Dependencies

18th in my series 20 Dev Metrics is External Dependencies.



If our code relies too much on other people's APIs, we can end up wasting a lot of time fixing things that are broken when the contracts change. (Anyone who's written code that consumes the Facebook API will probably know exactly what I mean.)

In an ideal world, APIs would remain backwards-compatible. But in the real world, where 3rd-party developers aren't as disciplined as we are, they change all the time. So our code has to keep changing to continue to work.

I would argue that, with the way our tools have evolved, it's too easy these days to add external dependencies to our software.

It helps to be aware of the burden we're creating as we suck in each new library or web service, lest we fall prey to the error of buying the whole Mercedes just for the cigarette lighter.

The simplest metric is just to count the number of dependencies. The more there are, the more unstable our code will become.

It's also worth knowing how much of our code has direct dependencies on external APIs. Maybe we only depend on JDBC, but if 50% of our code directly references JDBC interfaces, we still have a problem.
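
If you want rough numbers, even a crude scan of the source tree will do. Here's a minimal sketch - it assumes a src folder of Java files and treats any non-java.* import as external, so adapt it to your own project:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;
    import java.util.stream.*;

    public class DependencyCount {
        public static void main(String[] args) throws IOException {
            List<Path> sources;
            try (Stream<Path> walk = Files.walk(Paths.get("src"))) {
                sources = walk.filter(p -> p.toString().endsWith(".java"))
                              .collect(Collectors.toList());
            }
            Set<String> external = new TreeSet<>();
            int filesTouchingExternal = 0;
            for (Path file : sources) {
                boolean touches = false;
                for (String line : Files.readAllLines(file)) {
                    // anything imported that isn't in java.* counts as external here
                    if (line.startsWith("import ") && !line.startsWith("import java.")) {
                        external.add(line.replaceAll("import\\s+|;.*", "").trim());
                        touches = true;
                    }
                }
                if (touches) filesTouchingExternal++;
            }
            System.out.println("Distinct external imports: " + external.size());
            System.out.printf("Files with direct external dependencies: %d of %d%n",
                    filesTouchingExternal, sources.size());
        }
    }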

You should aim to have as little of your code as possible directly depending on 3rd-party APIs, and to use as few different APIs as you can get away with.


(And, yes, I'm including GUI frameworks etc. in my definition of "external dependencies".)



May 18, 2017


20 Dev Metrics - 17. Test Execution Time

The 17th in my 20 Dev Metrics series can have a profound effect on our ability to sustain the pace of development - Test Execution Time.



When it takes too long to get feedback from tests, we have to test less often, which means more changes to the code in between test runs. The economics of defect removal are stark: the longer a problem goes undetected, the more expensive it is to fix - exponentially so. If we break the code and discover it minutes later, fixing the problem is quick and easy. If we break the code and only discover it hours later, that cost goes up. Days later, and we're into code-and-fix territory.

So it's in our interest to make the tests run as fast as possible. Teams who strive for a testing pyramid, where the base of the pyramid - the bulk of the tests - is made up of fast-running unit tests, can usually get good test feedback in minutes or even seconds. Teams whose testing pyramid is upside-down, with the bulk of their tests being slow-running system or integration tests, tend to find test execution a barrier to progress.
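
One simple way to keep the base of the pyramid runnable on its own is to tag tests by speed. A sketch using JUnit 5 - the "fast"/"slow" tag names and the Pricing class are just for illustration:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class Pricing {
        static double discounted(double price) { return price * 0.9; }
    }

    class PricingTest {

        @Tag("fast")
        @Test
        void tenPercentDiscountApplied() {
            assertEquals(90.0, Pricing.discounted(100.0), 0.001); // pure logic, no I/O
        }

        @Tag("slow")
        @Test
        void priceSurvivesARoundTripToTheDatabase() {
            // talks to a real database - belongs in the less frequent suite
        }
    }

Your build tool can then run just the "fast" group on every change, and save the "slow" group for less frequent runs.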

Teams should be putting continual effort into performance engineering their test suites as they grow from dozens to hundreds to thousands of tests. Be aware of how long test execution takes, and when it's too long, optimise the test architecture or execution environment. My 101 TDD Tips e-book contains a tip about optimising test performance that you might find useful.

Basically, the more often you want to run a test suite, the faster it needs to run. Simples.


May 17, 2017


20 Dev Metrics - 16. Dev Pay Market Percentile

The 16th metric in my series 20 Dev Metrics is a simple but powerful one, which can determine how easy (or hard) it could be for you to hire and retain the developers you need.

As someone who gets asked to help clients find developers, I can attest that Dev Pay Market Percentile is an accurate predictor of how long you'll have to search to find the right person.

Online sources of advertised salaries and contract rates, like itjobswatch.co.uk, can show you how much your competitors are offering for specific skills like Java and TDD, in specific locations and specific industries (e.g., London, banking).



You'd be amazed how many employers scratch their heads wondering why they can't find the ace-whizzo developer they need for their dev team, whilst offering an average (or below average) salary.

Want good developers? Aim for the upper quartile on pay. Want great developers who'll stay? Aim for the upper tenth percentile.

I've lost count of the times the "skill shortage" mysteriously disappeared when the employer upped the offer. And I've also lost count of the times that demand for a skill quietly ramped up, leaving employers wondering why all of their long-serving devs suddenly upped and left.

And, of course, if you are a developer, it's worth knowing where you sit in the pay range. If your bosses are poor payers, they may be relying on you not knowing.


May 16, 2017


My Obligatory Annual Rant About People Who Warn That You Can Take Quality Too Far Like It's An Actual Thing That Happens To Dev Teams

If you teach developers TDD, you're guaranteed to bump into people who'll warn you of the dangers of taking quality too far (dun-dun-duuuuuuun!).

"We don't write the tests first because it protects us from over-testing our code", said one person recently. Ah, yes. Over-testing. A common problem in software.

"You need to be careful not to refactor your code too much", said another. And many's the time I've looked at code and thought "This program is just too easy to understand!"

I can't help recalling the time a UK software company, whose main product had literally thousands of open bugs, hired a VP of Quality and sent him around the dev teams warning them that "perfection is the enemy of good enough". Because that was their problem; the software was just too good.

It seems to still pervade our industry's culture, this idea that quality is the enemy of getting things done, despite mountains of very credible evidence that - in the vast majority of cases - the reverse is true. Most dev teams would deliver sooner if they delivered better software. "Not aiming for perfection is the enemy of getting shit done" more accurately sums up the relationship between quality and productivity in our line of work.

That's not to say that no team has ever taken it too far. In safety-critical software, the costs ramp up very quickly for very modest improvements in reliability. But the fact is that 99.9% of teams are so far away from this asymptote that, from where they're standing, good enough and too good are essentially the same destination.

Worry about wasting time on silly misunderstandings about the requirements. Worry about wasting time fixing avoidable coding errors. Worry about wasting time trying to hack your way through incomprehensible spaghetti code to make changes. Worry about wasting your time doing the same repeatable tasks manually over and over again.

But you very probably needn't worry about over-testing your code. Or about doing too much refactoring. Or about making the software too good. You're almost certainly not in any immediate danger of that.







May 13, 2017


"Test-Driven Development" - The Clue's In The Name

When devs tell me that they do Test-Driven Development "pragmatically", I'm immediately curious as to what they mean by that. Anyone with sufficient experience in software development will be able to tell you that "pragmatic" is secret code for "not actually doing it".

Most commonly, "pragmatic" TDD turns out to mean declaring implementation classes and interfaces first, and then writing tests for them. This is not TDD, pragmatic or otherwise. The clue's in the name.

Alarmingly, quite a few online video tutorials in TDD do exactly this. So it's understandable how thousands of devs can end up thinking it's the correct way to do it.

But when someone tells you that you don't need to start by writing a failing test, what they're really saying is you don't have to do TDD. And they're right. You don't.

But if you're doing TDD, then putting the test first is kind of the whole point.
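
For the avoidance of doubt, here's the shape of it - a minimal sketch, with Wallet invented for illustration. The test is written first, fails first, and only then does the implementation appear:

    // Step 1: a failing test, written before Wallet exists ("red")
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class WalletTest {
        @Test
        void depositIncreasesBalance() {
            Wallet wallet = new Wallet(); // won't even compile yet
            wallet.deposit(10);
            assertEquals(10, wallet.balance());
        }
    }

    // Step 2: the simplest Wallet that makes the test pass ("green")
    class Wallet {
        private int balance = 0;
        void deposit(int amount) { balance += amount; }
        int balance() { return balance; }
    }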

It's like telling someone that it's okay to have a pork pie if you're a vegan. What they mean is "You don't have to be vegan".

If you're going vegan, pork pies are out. And if you're doing TDD, writing implementation code first is a no-no.

Good. I'm glad we got that sorted.


May 9, 2017


What Makes a Software Developer? The Rule of Two

I've been thinking a lot recently about what might qualify someone as a "software developer". For sure, it's not just someone who can code. (Any more than a builder is just someone who can lay bricks.)

One rule of thumb I've used with some success over the years is a rule of two: a software developer, in my experience, is someone who has practical hands-on skills in at least two of everything.

* 2 different programming paradigms (e.g., structured & OO)
* 2 different technology stacks (e.g., LAMP and .NET)
* 2 different kinds of application (e.g., desktop and mobile)
* 2 different problem domains (e.g., banking and medicine)
* 2 different approaches to development (e.g., Extreme Programming & RUP)

Essentially, someone who's been around the block at least twice in their career.

The reason why I think this matters is that folk tend to need to see multiple examples of something before they can start to draw some key underlying insights. If you've only ever done BDD, you may not be aware that almost all approaches to requirements specification are example- or scenario-driven. If you've only ever worked in, say, Java, you may miss the fact that encapsulation isn't exclusively an OO concept. (Yes, you can have loosely coupled modules in Pascal, C, etc., too.)

I'd also be interested in the responsibilities a developer has taken on in their career. I've been a programmer, a tech lead, an architect, a requirements analyst, a methodologist, a strategist, a coach, a conference speaker, a conference organiser, an author, a business owner, and a trainer in my twelvety five years in this career. Wearing multiple hats - like working in multiple languages - can bring insights that decades working exclusively at the code face would probably miss.

So now, as my attention turns to the whole question of long-term mentoring for would-be software developers, the rule of two may well be a key part of identifying who might make good mentors, as well as potentially providing a roadmap for the mentoring itself. Essentially, we'd be looking to guide rookies at least twice around the block, allowing them a chance to build those insights for themselves.


May 8, 2017


How To Avoid The TDD Slowdown

Both personal experience and several empirical studies have taught me that TDD works. By "works", I mean that it can help us to deliver more useful, reliable software that's easier to change, and at little or no extra cost in time and effort.

But...

That describes the view from the top of the TDD hill. To enjoy the view, you've got to climb the hill. And this may be where TDD gets its reputation for taking longer and slowing teams down.

First of all, TDD's learning curve should not be underestimated. I try to make it crystal clear to the developers I train and mentor not to expect amazing results overnight. Plan for a journey of 4-6 months before you get the hang of TDD. Plan for a lead time of maybe a year or more before your business starts to notice tangible results. Adopting TDD is not for the impatient.

Instead, give yourself time to learn TDD. Make a regular appointment with yourself to sit down and mindfully practice it on increasingly ambitious problems. Perhaps start with simpler TDD katas, and then maybe try test-driving one or two personal projects. Or set aside one day a week where your focus will be on TDD and getting it right, while the other four days you "get shit done" the way you currently know how.

Eventually, developers make the transition to test-driving most of their code most of the time, with no apparent loss of productivity.

After this rookie period, the next obstacle teams tend to hit is the unmaintainability of their test code. It's quite typical for newly minted Test-Driven Developers to under-refactor their test code, and over time the tests themselves become a barrier to change. However much refactoring you're doing, you should probably do more. I say that with high confidence, because I've never actually seen test code that was cleaner than it needed to be. (Though I've seen plenty that was over-engineered - let's not get those two problems mixed up!)
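
The most common fix is mundane: extract the duplicated set-up. A sketch (names invented):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class Customer {
        Customer(String name) { }
    }

    class Order {
        Order(Customer customer) { }
        int itemCount() { return 0; }
    }

    class OrderTest {
        // Before: every test said "new Order(new Customer(...))", so one
        // constructor change broke dozens of tests.

        // After: one creation method, one place to change.
        private Order anOrder() {
            return new Order(new Customer("Test Customer"));
        }

        @Test
        void newOrderHasNoItems() {
            assertEquals(0, anOrder().itemCount());
        }
    }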

Refactoring is one of the most undervalued skills in software development, but it is hard to learn. And employers routinely make the mistake of not emphasising it when they're hiring. Your refactoring skills need to be well-developed if you want to sustain TDD. More bluntly, you cannot learn TDD if you don't learn refactoring.

The other barrier I'm increasingly seeing teams hit is slow-running tests. I think this is in part a result of teams relying exclusively on acceptance tests using tools like Cucumber and FitNesse, leading to test suites that can - in extreme cases - take hours to run. To sustain the pace of change, we need feedback far sooner. Experienced TDD-ers endeavour to test as much of the logic of their code as possible using fast-running unit tests (or "developer tests", if you like) that exclude external dependencies and don't rely on layers of interpretation or external test data files.

Learn to organise your tests into a pyramid, with the base of the pyramid - the vast bulk of the tests - being fast-running unit tests that we can run very frequently to check our logic. Experienced TDD-ers treat acceptance tests as... well... acceptance tests. Not regression tests.

Another pitfall is over-mocking. When too many of our tests know too much about the internal interactions within the objects they're testing, we can end up baking in a bad design. When we try to refactor, a bunch of tests can get broken, even though we haven't changed the logic at all. Used as an interface design tool, mocks can help us achieve a loosely-coupled "Tell, Don't Ask" style of design. Abused as a testing crutch to get around dependency issues, however, mocks can hurt us. I tend to use them sparingly, typically at system or component or service boundaries, to help me design the interfaces for my integration code.

(And, to be clear, I'm talking here specifically about mock objects in the strictest sense: not stubs that return test data, or dummies.)

So, if you want to avoid the TDD slowdown:

1. Make a realistic plan to learn and practice

2. Work on those refactoring muscles, and keep your test code clean

3. Aim for a pyramid of tests, with the bulk being fast-running unit tests

4. Watch those mocks!



May 6, 2017


Not All Test Doubles Make Test Code Brittle

Much talk out there in Interweb-land about when to use test doubles, when not to use test doubles, and when to confuse mocks with stubs (which almost every commentator seems to).

Robert C. Martin blogs about how he uses test doubles sparingly, and makes a good case for avoiding the very real danger of "over-mocking", where all your unit tests expose internal details of interactions between the object they're testing and its collaborators. This can indeed lead to brittle test code that has to be rewritten often as the design evolves.

But mocks are only one kind of test double, and they definitely have their place. And let's also not confuse mock objects with mocking frameworks. Just because we created it using a mocking tool, that doesn't necessarily mean it's a mock object.

I'm always as clear as I can be that a mock object is one that's used to test an interaction with a collaborator; one that allows us to write a test that fails when the interaction doesn't happen. They're a tool for designing interfaces, really. And you don't need a mocking framework to write mock objects.

I, too, use mock objects sparingly. Typically, for two reasons:

1. Because the object being interacted with has direct external dependencies (e.g. a database) that I don't want to include in the execution of the unit test

2. Because the object being interacted with doesn't exist yet - in terms of an implementation. "Fake it 'til you make it."

In both cases, I'm clear in my own mind that it's only a mock object if the test is specifically about the interaction. A test double that pretends to fetch data from a SQL database is a stub, not a mock. Test doubles that provide test data are stubs. Test doubles that allow us to test interactions are mocks.

Mocks necessarily require our tests to specify an internal interaction. What method should be invoked? What parameter values should be passed? I tend to ask those kinds of questions less often.

Stubs don't necessarily have to expose those internal details in the test code. Knowledge of how the object under test asks for the data can be encapsulated inside a general-purpose stub implementation and left out of the actual test itself.
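
Here's the sort of thing I mean - a sketch, with all the names invented for illustration:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.List;

    class EmailAlertTest {
        @Test
        void alertsMembersWhoRegisteredAnInterestInMatchingNewTitles() {
            // test data goes in through the stub's constructor...
            InterestedMembers members =
                    new InterestedMembersStub(List.of(new Member("joe@example.com")));
            EmailAlert alert = new EmailAlert(members);

            // ...and the test never says how EmailAlert asks the stub for it
            assertEquals(List.of("joe@example.com"), alert.recipientsFor("Jaws"));
        }
    }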



In this example, I'm stubbing an object that knows about video library members who expressed an interest in newly added titles that match a certain string. This is one of those "fake it 'til you make it" examples. We haven't built the component that manages those lists yet.

The stub is parameterised, and we pass in the test data to its constructor. It's not revealed in the test how EmailAlert gets that data from the stub.
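
The stub itself might look something like this - again, a sketch, with the supporting types included so it hangs together:

    import java.util.List;

    // The collaborator we haven't built yet - "fake it 'til you make it"
    interface InterestedMembers {
        List<Member> interestedIn(String matchingTitle);
    }

    record Member(String email) {}

    // The stub: test data in through the constructor, canned answer out.
    // How the object under test asks for the data is encapsulated here.
    class InterestedMembersStub implements InterestedMembers {
        private final List<Member> members;

        InterestedMembersStub(List<Member> members) { this.members = members; }

        @Override
        public List<Member> interestedIn(String matchingTitle) {
            return members;
        }
    }

    // Minimal object under test, just so the sketch compiles
    class EmailAlert {
        private final InterestedMembers members;

        EmailAlert(InterestedMembers members) { this.members = members; }

        List<String> recipientsFor(String title) {
            return members.interestedIn(title).stream().map(Member::email).toList();
        }
    }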



This stub code, of course, is test code, too. But using this technique, we don't have to repeat the knowledge of how the stub provides its data to the object under test. So if that detail changes, we only need to change it in one place.

Another thing I do sometimes is use a mocking framework to create a dummy: an object whose interactions we're not interested in, and which provides no test data, but which needs to be there - and needs an object identity - for our test.
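
A sketch of that (Library and Title are invented; mock() here is Mockito's, used only to conjure up a dummy):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import java.util.*;

    interface Title { }

    class Library {
        private final Set<Title> titles = new HashSet<>();
        void donate(Title title) { titles.add(title); }
        boolean contains(Title title) { return titles.contains(title); }
    }

    class LibraryTest {
        @Test
        void donatedTitleIsAddedToTheLibrary() {
            Title title = mock(Title.class); // a dummy: nothing stubbed, nothing verified
            Library library = new Library();

            library.donate(title);

            assertTrue(library.contains(title)); // identity is all we needed
        }
    }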



In this example, Title doesn't need to be a real implementation. We're not interested in any interactions with Title, but we do need to know if it's in the library at the end. This test code doesn't expose any internal details of how Library.donate() works.

If you check out the code for my array combiner spike, you'll notice that there's no use of test doubles at all. This is because of its architectural nature. There are no external dependencies: no database, no use of web services, etc. And there are no components of the design that were so complex that I felt the need to fake them until I made them.

So, to summarise, in my experience over-reliance on mocks can bake in a bad design. (Although, used wisely, they can help us produce a much cleaner design, so there's a balance to be struck.) But I thought I should just qualify "test double", because not all uses of them have that same risk.