July 28, 2014

More Rage On ORM Abuses & How I Do It

So, a day later, and the rage about my last blog post, about how I see many developers abusing Object-Relational Mapping frameworks like Hibernate to build old-fashioned database-driven architectures, continues.

My contention is that we shouldn't use what is essentially a persistence layer to build another persistence layer. We've already got one; we just have to know how to use it.

The litmus test for object-relational persistence - well, any kind of object persistence - is whether we're able to write our application in such a way that if we turned persistence off, the application would carry on working just dandy - albeit with a memory that only lasts as long as the running process.

If persistence occurs at the behest of objects in the core application's logic - and I'm not just talking about domain objects, here - then we have lost that battle.

Without throwing out a platform-specific example - because that way can lead to cargo cults ("Oh, so that's THE way to do it!" etc) - let's illustrate with a pseudo-example:

Mary is writing an application that runs the mini-site for a university physics department. She needs it to do things like listing academic staff, listing courses and modules, and accepting feedback from students.

In a persistence-agnostic application, she would just use objects that live in memory. Perhaps at the root there is a Faculty object. Perhaps this Faculty has a collection of Staff. Perhaps each staff member teaches specific Course Modules. Hey, stranger things have happened.

In the purely OO version, the root object is just there. There's only one Faculty. It's a singleton. (Gasps from the audience!)

So Mary writes it so that Faculty is instantiated when the web application starts up as an app variable.

She adds staff members to the faculty, using an add() method on the interface of Faculty. Inside, it inserts the staff member into the staff collection that faculty holds.
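
To make the pseudo-example a bit more concrete - and purely as an illustration, not as THE way to do it - here's a minimal sketch in Java of what Mary's in-memory model might look like. Only Faculty, its staff collection and add() come from the example above; everything else is an assumption:

    import java.util.ArrayList;
    import java.util.List;

    public class Faculty {

        private final List<StaffMember> staff = new ArrayList<StaffMember>();

        public void add(StaffMember member) {
            staff.add(member);
        }

        public List<StaffMember> getStaff() {
            return staff; // the staff listing page just iterates over this
        }
    }

    class StaffMember {

        private final String name;
        private final List<CourseModule> modules = new ArrayList<CourseModule>();

        StaffMember(String name) {
            this.name = name;
        }

        void teaches(CourseModule module) {
            modules.add(module);
        }

        String getName() { return name; }

        List<CourseModule> getModules() { return modules; }
    }

    class CourseModule {

        private final String title;

        CourseModule(String title) {
            this.title = title;
        }

        String getTitle() { return title; }
    }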

The staff listing page just iterates through that collection and builds a listing for each member. Simples.

Clicking on the link for a staff member takes you to their page, which lists the course modules they teach.

So far, no database. No DAOs. No SQL. No HQL. It's all happening in memory, and when we shut the web app down, all that data is lost.

But, and this is the important point, while the app is running, it does work. Mary knocks up a bit of code to pre-populate the application with staff members and course modules, for testing purposes. To her, this is no different to writing a SQL script that pre-populates a database. Main memory is her database - it's where the data is to be found.
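
Staying with the hypothetical Java version, that start-up wiring and pre-population might look something like the sketch below, assuming a plain servlet container. The listener class and the test data are, of course, made up:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class FacultyBootstrap implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent event) {
            Faculty faculty = new Faculty();

            // Test data - playing the same role a SQL set-up script would
            StaffMember curie = new StaffMember("Marie Curie");
            curie.teaches(new CourseModule("Radioactivity 101"));
            faculty.add(curie);

            // Main memory is the database: park the root object in application state
            event.getServletContext().setAttribute("faculty", faculty);
        }

        @Override
        public void contextDestroyed(ServletContextEvent event) {
            // When the web app shuts down, all of this data is simply lost
        }
    }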

Now, Mary wants to add persistence so she can deploy this web app into the real world.

Her goal is to do it, ideally, without changing a single line of the code she's already written. Her application works. It just forgets, is all.

So now, instead of building the test objects in a bit of code, she creates a database that has a FACULTY table (with only one record in it, the singleton), a STAFF_MEMBER table and a COURSE_MODULE table with associated foreign keys.

She creates a mapping file for her ORM that maps these tables onto their corresponding classes. Then she writes a sliver of code to fetch Faculty from the database. And, sure, if you want to call the object where that code resides a "FacultyRepository" then be my guest. Importantly, it lives apart from the logic of the application. It is not part of the domain model.
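
For illustration, assuming Hibernate, that sliver of code might be little more than this - the ID constant, the helper method and the class name are all assumptions:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    public class FacultyRepository {

        private static final Long FACULTY_ID = 1L; // the one and only FACULTY record

        private final SessionFactory sessionFactory;

        public FacultyRepository(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public Faculty fetch() {
            // The mapping file tells Hibernate how FACULTY, STAFF_MEMBER and
            // COURSE_MODULE map onto Faculty, StaffMember and CourseModule
            Session session = sessionFactory.getCurrentSession();
            return (Faculty) session.get(Faculty.class, FACULTY_ID);
        }
    }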

That's pretty much it, bar the shouting. The ORM takes care of the navigations. If we add a staff member to the faculty, another sliver of code - executed, say, after server page processing - persists the Faculty, and the ORM cascades that down through its relationships (provided we've specified the mapping that way).

By using, say, an Http filter (or an HttpModule, if that's what your tech architecture calls it) we can remove persistence concerns completely from the core logic of the application and handle them as a completely separate, done-in-one-place-only concern.

Hibernate's session-per-request pattern, for example, allows us to manage the scope of persistent transactions - fetching, saving, detaching and re-attaching persistent objects and so on - before and after page processing. Our Http filters just need to know where to find the persistent root objects. (Where is Faculty? It's in application state.) The ORM and our mapping take care of the rest.
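
A rough sketch of such a filter, again assuming Hibernate and the servlet API; HibernateUtil here is an assumed helper that builds the SessionFactory from the mapping configuration (it isn't part of Hibernate itself), and error handling is pared right back:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    public class PersistenceFilter implements Filter {

        private SessionFactory sessionFactory;

        @Override
        public void init(FilterConfig config) {
            // Assumed helper, not part of Hibernate: builds the SessionFactory
            // from the mapping configuration
            sessionFactory = HibernateUtil.getSessionFactory();
        }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            Session session = sessionFactory.getCurrentSession();
            session.beginTransaction();
            try {
                // The server pages run here, blissfully unaware of persistence
                chain.doFilter(request, response);
                // Committing flushes any changes; the mapping cascades them down
                // through the persistent object graph
                session.getTransaction().commit();
            } catch (RuntimeException e) {
                session.getTransaction().rollback();
                throw e;
            }
        }

        @Override
        public void destroy() {
            // SessionFactory clean-up omitted for brevity
        }
    }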

And so it is, without changing a line of code in any of her server pages, or her domain model, and without writing any DAOs or that sort of nonsense, Mary is able to make her application state persistent. And she can even, with a quick change to a config file, turn persistence on and off, and swap one method of persistence with another.

The controllers - the server pages - in Mary's web app need know nothing about persistence. They access state in much the same way they would anyway, either locally as variables on each server page, or as session and application variables. For them, the objects are just there when they're needed. And new objects they trigger the creation of are automatically inserted into the database by the ORM (ditto deletions and updates).

Now, we can mix and match and slice and dice this approach. Mary could have used a pre-processing Http filter to load Faculty into session state at the start of a session, making it a session-specific singleton, and the persistence Http filters could be looking in session state for persistent objects. Or she could load it anew on each page load.

The important thing to remember is that the server page is none the wiser. All it needs to know is where to look to find these objects in memory. They are just there.

This is my goal when tackling object persistence. I want it to look like magic; something that just happens and we don't need to know how. Of course, someone needs to know, and, yes, there is more to it than the potted example I've given you. But the principles remain the same: object persistence should be a completely separate concern, not just from the domain model, but from all the core application code, including UI, controllers, and so on.







July 27, 2014

Object-Relational Mapping Should Be Felt & Not Seen

Here's a hand-wavy general post about object-relational persistence anti-patterns that I still keep seeing popping up in many people's code.

First, let me set out what the true goal of ORMs should be: an ORM is designed to allow us to build good old-fashioned object oriented applications where the data in our objects can outlive the processes they run in by storing said persistent data in a relational database.

Back in the bad old days, we did this by writing what we called "Data Access Objects" (DAOs) for each type of persistent object - often referred to as entities, or domain objects, or even "Entity Beans" (if your Java code happened to have disappeared up that particular arse in the late 1990s.)

This was very laborious, and often took up half the effort of development.

Many development teams working on web and "enterprise" applications were coming from a 2-tier database-driven background, and were most familiar and comfortable with the notion that the "model" in Model-View-Controller was the database itself. Hence, their applications tended to treat the SQL Server, Oracle or wotnot back-end as main memory and transact every state change of objects against it pretty much immediately. "Middle tier" objects existed purely as gateways to this on-disk relational memory. Transactions and isolation of changes were handled by the database server itself.

Not only did this lead to applications that could only be run meaningfully - including for testing - with that database server in place, but it also very tightly coupled the database to the application's code, making it rigid and difficult to evolve. If every third line of code involves a trip to the database, and if objects themselves aren't where the data is to be found most of the time - except to display it on the user's screen - then you still have what is essentially a database-driven application, albeit with a fancy hifalutin "middle tier" to create the illusion that it isn't.

Developers coming from an object oriented background suffered exactly the opposite problem. We knew how to build an application using objects where the data lived in memory, but struggled with persisting that data to a relational database. Quite naturally, we just wanted that to sort of happen by magic, without us having to make any changes to our pristine object oriented code and sully it with DAOs and Units of Work and repositories and SQL mappers and transaction handling and blah blah blah.

And, whether you've heard it or not, frameworks like Hibernate allow us to do pretty much exactly that; but only if we choose to do it that way.

Sadly, just as a FORTRAN programmer can write FORTRAN code in any programming language you give them, 2-tier database-driven programmers can write 2-tier database-driven code with even the most sophisticated ORMs.

Typically, what I see - and this is possibly built on a common misinterpretation of the advice given in Domain-driven Design about persistence architectures - is developers writing DAOs using ORMs. So, they'll take a powerful framework like Hibernate - which enables us to write our persistent objects as POJOs that hold application state in memory (so that the logic will work even if there's no database there), just like in the good old days - and convert their SQL mappers into HQL mappers that use the Hibernate Query Language to access data in the same chatty, database-as-main-memory way they were doing it before. Sure, they may be disguising it using domain root object "repositories", and that earns them some protection; for example, allowing us to mock repositories so we can unit test the application code. But when they navigate relationships by taking the ID of one object and using HQL to find its collaborator in the database, it all starts to get a bit dicey. A single web page request can involve multiple trips to the database, and if we take the database away, depending on how complicated the object graph is, we can end up having to weave a complex web of mock objects to recreate that object graph, since the database is the default source of application state. After all, it's main memory.
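
To illustrate the smell - and only the smell - the kind of HQL-flavoured DAO I'm talking about might look something like this (the classes and the query are made up):

    import java.util.List;
    import org.hibernate.Session;

    public class OrderDAO {

        private final Session session;

        public OrderDAO(Session session) {
            this.session = session;
        }

        // Navigation by ID: every hop along the object graph is another query,
        // and the database quietly becomes main memory again
        @SuppressWarnings("unchecked")
        public List<Order> findOrdersForCustomer(long customerId) {
            return session.createQuery("from Order o where o.customer.id = :id")
                          .setParameter("id", customerId)
                          .list();
        }
    }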

Smarter developers rely on an external mapping file and the in-built capabilities of Hibernate to take care of as much of that as possible.

They also apply patterns that allow them to cleanly separate the core logic of their applications from persistence and transactions. For example, the session-per-request pattern can be implemented for a web application by attaching persistent objects to a standard collection in session state, and managing the scope of transactions outside of main page request processing (e.g., using Http modules in ASP.NET to re-attach persistent objects in session state before page processing begins, and to commit changes after processing has finished.)

If we allow it, the navigation from a customer to her orders can be as simple as customer.orders. The ORM should take care of the fetching for us, provided we've mapped that relationship correctly in the configuration. If we add a new order, it should know how to take care of that, too. Or if we delete an order. It should all be taken care of, and ideally as a single transaction that effectively synchronises all the changes we made to our objects in memory with the data stored in the DB.
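
In code, something like the sketch below, say - assuming Customer and Order classes mapped with a one-to-many relationship and cascading configured; all the names here are illustrative:

    import org.hibernate.Session;

    public class CustomerNavigationExample {

        public void navigate(Session session, long customerId) {
            Customer customer = (Customer) session.get(Customer.class, customerId);

            // Navigation is just property access; the ORM fetches the orders
            // (lazily, if mapped that way) - no HQL, no DAO
            for (Order order : customer.getOrders()) {
                System.out.println(order.getTotal());
            }

            // Adding to the collection is enough; with cascading configured in
            // the mapping, the insert happens when the transaction commits
            customer.getOrders().add(new Order("ORD-123"));
        }
    }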

The whole point of an ORM is to generate all that stuff for us. To take something like Hibernate, and use it to write a "data access layer" is kind of missing that point.

We should not need a "CustomerRepository" class, nor a "CustomerDAO". We should need none of that, and that's the whole point of ORMs.

As much as possible in our code, Object-Relational Mapping should be felt, and not seen.







July 16, 2014

What Level Should We Automate Most Of Our Tests At?

So this blog post has been a long time in the making. Well, a long time in the procrastinating, at any rate.

I have several clients who have hit what I call the "front-end automated test wall". This is when teams place greatest emphasis on automating acceptance tests, preferring to verify the logic of their applications at the system level - often exercised through the user interface using tools like Selenium - and rely less (or not at all, in some cases) on unit tests that exercise the code at a more fine-grained level.

What tends to happen when we do this is that we end up with large test suites that require much set-up - authentication, database stuff, stopping and starting servers to reset user sessions and application state, and all the fun stuff that comes with system testing - and run very slowly.

So cumbersome can these test suites become that they slow development down, sometimes to a crawl. If it takes half an hour to regression test your software, that's going to make the going tough for Clean Coders.

The other problem with these high-level tests is that, when they fail, it can take a while to pin down what went wrong and where it went wrong. As a general rule of thumb, it's better to have tests that only have one reason to fail, so when something breaks it's already pretty well pinpointed. Teams who've hit the wall tend to spend a lot of time debugging.

And then there's the modularity/reuse issue: when the test for a component is captured at a much higher level, it can be tricky to take that chunk and turn it into something reusable. Maybe the risk calculation component of your web application could also be a risk calculation component of a desktop app, or a smartwatch app. Who knows? But when its contracts are defined through layers of other stuff like web pages and wotnot, it can be difficult to spin it out into a product in its own right.

For all these reasons, I follow the rule of thumb: Test closest to the responsibility.

One: it's faster. Every layer of unnecessary wotsisname the tests have to go through to get an answer adds execution time and other overheads.

Two: it's easier to debug. Searching for lost car keys gets mighty complicated when your car is parked three blocks away. If it's right outside the front door, and you keep the keys in a bowl in the hallway, you should find them more easily.

Three: it's better for componentising your software. You may call them "microservices" these days, but the general principle is the same. We build our applications by wiring together discrete components that each have a distinct responsibility. The tests that check if a component fulfils its responsibility need to travel with that component, if at all possible. If only because it can get horrendously difficult to figure out what's being tested where when we scatter rules willy-nilly. The risk calculation test wants to talk to the Risk Calculator component. Don't make it play Chinese Whispers through several layers of enterprise architecture.

Sometimes, when I suggest this, developers will argue that unit tests are not acceptance tests, because unit tests are not written from the user's perspective. I believe - and find from experience - that this is founded on an artificial distinction.

In practice, an automated acceptance test is just another program written by a programmer, just like a unit test. The programmer interprets the user's requirements in both cases. One gives us the illusion of it being the customer's test, if we want it to be. But it's all smoke and mirrors and given-when-then flim-flam in reality.

The pattern, known of old, of sucking test data provided by the users into parameterised automated tests is essentially what our acceptance test automation tools do. Take Fitnesse, for example. The customer enters their Risk Calculation inputs and expected outputs into a table on a Wiki. We write a test fixture that inserts data from the table into program code that we write to test our risk calculation logic.

We could ask the users to jot those numbers down onto a napkin, and hardcode them into our test fixture. Is it still the same test? Is it still an automated acceptance test? I believe it is, to all intents and purposes.
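
A napkin-driven version of such a fixture might look like this in JUnit - RiskCalculator, its inputs and the expected outputs are invented purely for illustration:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class RiskCalculationTest {

        @Parameters
        public static Collection<Object[]> customerExamples() {
            // These rows could just as easily have been sucked out of a
            // Fitnesse table on the Wiki
            return Arrays.asList(new Object[][] {
                { 25, 0, 0.1 },
                { 45, 2, 0.35 },
                { 70, 5, 0.8 }
            });
        }

        private final int age;
        private final int previousClaims;
        private final double expectedRisk;

        public RiskCalculationTest(int age, int previousClaims, double expectedRisk) {
            this.age = age;
            this.previousClaims = previousClaims;
            this.expectedRisk = expectedRisk;
        }

        @Test
        public void calculatesRiskFromTheCustomersExamples() {
            assertEquals(expectedRisk, new RiskCalculator().calculate(age, previousClaims), 0.001);
        }
    }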

And it's not the job of the user interface or our MVC implementation or our backend database to do the risk calculation. There's a distinct component - maybe even one class - that has that responsibility. The rest of the architecture's job is to get the inputs to that component, and marshal the results back to the user. If the Risk Calculator gets the calculation wrong, the UI will just display the wrong answer. Which is correct behaviour for the UI. It should display whatever output the Risk Calculator gives it, and display it correctly. But whether or not it's the correct output is not the UI's problem.

So I would test the risk calculation where the risk is calculated, and use the customer's data from the acceptance test to do it. And I would test that the UI displays whatever result it's given correctly, as a separate test for the UI. That's what we mean by "separation of concerns"; works for testing, too. And let's not also forget that UI-level tests are not the same thing as system or end-to-end tests. I can quite merrily unit test that a web template is rendered correctly using test data injected into it, or that an HTML button is disabled running inside a fake web browser. UI logic is UI logic.
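
The UI-side test can then be as humble as the sketch below, where RiskPageTemplate is a made-up stand-in for whatever view or templating mechanism is actually in play:

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class RiskPageTemplateTest {

        @Test
        public void rendersWhateverRiskValueItIsGiven() {
            // The template calculates nothing; it just has to format and
            // display the number it is handed
            String html = new RiskPageTemplate().render(0.42);

            assertTrue(html.contains("42%"));
        }
    }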

And I know some people cry "foul" and say "but that's not acceptance testing", and "automated acceptance tests written at the UI level tend to be nearer to the user and therefore more likely to accurately reflect their requirements."

I say "not so fast".

First of all, you cannot automate user acceptance testing. The clue is in the name. The purpose of user acceptance testing is to give the user confidence that we delivered what they asked for. Since our automated tests are interpretations of those requirements - every bit as much as the implementations they're testing - then, if it were my money, I wouldn't settle for "well, the acceptance tests passed". I'd want to see those tests being executed with my own eyes. Indeed, I'd want to execute them myself, with my own hands.

So we don't automate acceptance tests to get user acceptance. We automate acceptance tests so we can cheaply and effectively re-test the software in case a change we've made has broken something that was previously working. They're automated regression tests.

The worry that the sum total of our unit tests might deviate from what the users really expected is mitigated by having them manually execute the acceptance tests themselves. If the software passes all of their acceptance tests AND passes all of the unit tests, and that's backed up by high unit test assurance - i.e., it is very unlikely that the software could be broken from the user's perspective without any unit tests failing - then I'm okay with that.

So I still have user acceptance test scripts - "executable specifications" - but I rely much more on unit tests for ongoing regression testing, because they're faster, cheaper and more useful in pinpointing failures.

I still happily rely on tools like Fitnesse to capture users' test data and specific examples, but the fixtures I write underneath very rarely operate at a system level.

And I still write end-to-end tests to check that the whole thing is wired together correctly and to flush out configuration and other issues. But they don't check logic. They just check that the engine runs when you turn the key in the ignition.

But typically I end up with a peppering of these heavyweight end-to-end tests, a feathering of tests that are specifically about display and user interaction logic, and the rest of the automated testing iceberg is under the water in the form of fast-running unit tests, many of which use example data and ask questions gleaned from the acceptance tests. Because that is how I do design. I design objects directly to do the work to pass the acceptance tests. It's not by sheer happenstance that they pass.

And if you simply cannot let go of the notion that you must start by writing an automated acceptance test and drive downwards from there, might I suggest that as new objects emerge in your design, you refactor the test assertions downwards also and push them into new tests that sit close to those new objects, so that eventually you end up with tests that only have one reason to fail?

Refactorings are supposed to be behaviour-preserving, so - if you're a disciplined refactorer - you should end up with a cluster of unit tests that are logically directly equivalent to the original high-level acceptance test.

There. I've said it.






July 9, 2014

What Problem Does This Solve?

It seems that, every year, the process of getting started with a new application development becomes more and more complicated and requires ever steeper learning curves.

The root of this appears to be the heterogeneity of our development tools, which grows exponentially as more and more developers - fuelled by caffeine-rich energy drinks and filled with the kind of hubris that only a programmer seems to be capable of - flex their muscles by doing what are effectively nothing more than "cover versions" of technologies that already exist and are usually completely adequate at solving the problem they set out to solve.

Take, for example, Yet Another Test Automation Tool (YATAT). The need for frameworks that remove the donkey work of wiring together automated tests suites and running all the tests is self-evident. Doing it the old-fashioned way, in the days before xUnit, often involved introducing abstractions that look very xUnit-ish and having to remember to write the code to execute each new test.

Tools like JUnit - which apply convention over that kind of manual configuration - make adding and running new tests a doddle. Handy user-friendly test-runner GUIs are the icing on the cake. Job done now.
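
As a reminder of what that convention buys us: with JUnit, adding a test is just adding an annotated method - no registration code, no hand-rolled runner. RefundCalculator below is made up for the sake of the example:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class RefundCalculationTest {

        @Test // discovered and run by convention - nothing else to wire up
        public void refundsFullAmountWithinCoolingOffPeriod() {
            assertEquals(100.0, new RefundCalculator().refundFor(100.0, 7), 0.001);
        }
    }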

For a bit of extra customer-centric mustard, add on the ability to suck test data for parameterised tests out of natural language descriptions of tests written by our customers. We cracked that one many moons ago, when heap big Turbo C++ compilers roamed the earth and programmer kill many buffalo etc. Ah yes, the old "merge the example data with the parameterised test" routine...

Given that the problem's solved, and many times over, what need, asks I, to solve it again, and again? And then solve it again again?

The answer is simple: because we can. Kent Beck learns new programming languages by knocking up a quick xUnit implementation in it. Pretty much any programmer beyond a certain rudimentary ability can do it. And they do. xUnit implementations are the Stairway To Heaven of programming solutions.

Likewise, MVC frameworks. They demonstrate a rudimentary command of a programming language and associated UI frameworks. Just as many rock guitar players have at some point a few weeks into learning the instrument mastered "The Boys Are Back In Town", many developers with an ounce of technical ability have gone "Look, Ma! I done made a MVC!" ("That's nice, dear. Now run outside and rig up an IoC container with your nice friends.")

But most cover versions of Stairway To Heaven (and The Boys Are Back In Town) are not as good as the originals. And even if they were, what value do they add?

Unless you're imbuing your xUnit implementation with something genuinely new, and genuinely useful, surely it's little more than masturbation to do another one?

Now, don't get me wrong: masturbation has a serious evolutionary purpose, no doubt. It's practice for the real thing, it keeps the equipment in good working order, and it's also enjoyable in its own right. But what it's not any good for is making babies. (Unless it's immediately followed by some kind of turkey baster-type arrangement.)

It's actually quite satisfying to put together something like an xUnit implementation, or an MVC framework, or a Version Control System, or a new object oriented programming language that's suspiciously like C++.

The problems start when some other developers say "Oh, look, a new shiny thing. Let's ditch the old one and start using this one that does exactly the same thing and no better, so we shall."

Now, anyone looking to work with that team has got to drop X and start learning X', so they can achieve exactly what they were achieving before. ("But... it's got monads...")

And thusly we find ourselves climbing a perpetually steepening learning curve, but one that doesn't take us any higher. I shudder to think just how much time we're spending learning "new" technologies just to stand still.

And, yes, I know that we need an xUnit implementation for x=Java and x=C# and x=Object Pascal and so on, but aren't these in themselves self-fulfilling prophecies? A proliferation of sort-of-similar programming languages giving rise to the need for a proliferation of Yet Another 3rd Generation Programming Language xUnit ports?

Genuinely new and genuinely useful technologies come along relatively rarely. And while there are no doubt tweaks and improvements that could be made to make them friendlier, faster, and quite possibly more purple, for the most part the pay-off is at the start, when developers find we can do things we were never able to do before.

And so I respectfully request that, before you inflict Yet Another Thing That's Like The Old Thing Only Exactly The Same (YATTLTOTOETS - pronounced "yattle-toe-totes"), you stop and ask yourself "What problem does this solve? How does this make things better?" and pause for a while to consider if the learning curve you're about to subject us to is going to be worth the extra effort. Maybe it's really not worth the effort, and the time you spend making it and the cumulative time we all spend learning it would be better spent doing something like - just off the top of my head - talking to our customers. (Given that lack of customer involvement is the primary cause of software development failure. Unless you've invented a tool that can improve that. And, before anybody says anything, I refer you back to the "sucking customer's test data into parameterised tests" bit earlier. Been there. Done that. Got a new idea?)

Brought to you by Yet Another Blog Management System Written In PHP That's Not Quite As Good As The Others






June 23, 2014

What's My Problem With Node.js?

So you may have guessed by now, if you follow me on The Twitters, that I'm not the biggest fan of Node.js.

Putting aside that it's got ".js" on the end, and is therefore already committing various cardinal sins in my book - the chief one being that it's written in JavaScript, the programming language equivalent of a Victorian detective who falls through a mysterious space-time warp into 1970s New York and has to hastily adapt to hotpants, television and disco in order to continue solving crimes - my main problem with Node.js is that it makes it easier to do something that most development teams probably shouldn't ought to be doing. Namely, distributed concurrent programming.

If programming is hard to get right, then distributed concurrent programming is - relatively speaking - impossible to get right. You will almost certainly get it wrong. And the more you do of it, the more wronger what it do be.

The secret to getting concurrency right is to do as little of it as you can get away with. Well-designed applications that achieve this tend to have small, isolated and very heavily tested islands of concurrency. Often they have signs on the shore warning travellers to "turn back, dangerous waters!", "beware of rabid dogs!", "danger: radiation!" and "Look out! Skeletor!" You know; stuff that tends to send right-minded folk who value their lives running in the opposite direction.

Node.js is a great big friendly sign that says "Come on in. Hot soup. Free Wi-Fi.", and it's left to salvage specialists like me to retrieve the broken wrecks.

So, yes, Node.js does make it easier to do distributed concurrency, in much the same way that a hammer makes it easier to drive nails into your head. And both are liable to leave you with a hell of a headache in the morning.





June 15, 2014

A Hippocratic Oath for Software Developers - What Would Yours Be?

Good folk who take the whole notion of a profession of software development seriously are fond of comparing us to medical doctors.

For sure, a professional developer needs to keep abreast of a very wide and growing body of knowledge on tools, techniques, principles and practices, just like a good doctor must.

And, for sure, a professional developer - a true professional - would take responsibility for the consequences of their decisions and the quality of their work, just like doctors must - especially in countries where patients have access to good lawyers who charge on a "no win, no fee" basis.

To my mind, though, what would truly set us apart as a profession would be a strong sense of ethics.

Take, for example, the whole question of user privacy: we, as a profession, seem to suffer from extreme cognitive dissonance on this issue. Understandable, when you consider that we're simultaneously users and creators of systems that might collect user data.

As users, we would wish to choose what information about us is collected, stored and shared. We would want control, and want to know every detail of what's known about us and who gets to see that data. Those of us who've had our fingers burned, or have seen those close to us get burned, by a lax attitude to user privacy, tend to err on the side of caution. We want to share as little personal information as possible, we want as few eyes looking at that data as possible, and we want to know that those eyes are attached to the brains of trustworthy people who have our best interests at heart.

What we've learned in recent years is that none of this is true. We share far more data about ourselves than we realise, and that data seems to be attracting the gaze of far too many people who've shown themselves to be untrustworthy. So, quite rightly, we rail against it all and make a big fuss about it. As users.

But, wearing our other hats, as developers, when we're asked to write code that collects and shares personal data, we don't seem to give it a second thought. "Duh, okay." seems to be the default answer.

I've done it. You've done it. We've all done it.

And we did it because, in our line of work, we pay scant attention to ethics and the public good most of the time. At best, we're amoral: too wrapped up in the technical details of how some goal can be achieved for our employers to step back and ask whether it should be achieved at all. Just because we can do it, that doesn't mean that we should.

When was the last time your team had a passionate debate about the ethics of what you were doing? I've watched teams go back and forth for hours over, say, whether or not they should use .NET data binding in their UI, while blithely skimming over ethical issues, barely giving them a second thought.

And just because the guy or gal writing the cheques told us to do it, that doesn't mean that we must. Sure; one way to interpret "professional" is to think it's just someone who does something for money. But some of us choose to interpret it as someone who conforms to the standards of their profession. The only problem being that, in software development, we don't have any.

So, if we could take such a thing as a Hippocratic Oath for software development, what would it be?

I suspect, given recent revelations, privacy might figure quite largely in it now. As might user safety in those instances where the software we write might cause harm - be it physical or psychological. You could argue that applications like Twitter and Facebook, for example, have the potential to cause psychological harm; an accidental leak of personal information might ruin someone's life. And then we're back to privacy again.

But what other ethical issues might such an oath need to cover? Would it have anything to say about - just off the top of my head - arbitrarily changing or even withdrawing an API that hundreds of small businesses were relying on? Would it have anything to say about having conflicting financial interests in a development project? Should someone who profits from sales of licenses for Tool X be allowed to influence the decision of whether to buy Tool X? And so on.

What would your oath be?


June 13, 2014

Real Long-Term Apprenticeships In The UK - We Found One!!! Are There More Like This?

So, I'm inspired again.

Someone sent me a link to this web page detailing software apprenticeships in the South West of England and in Wales for Renishaw Plc.

I think this might be the first genuine, long-term apprenticeship I've seen offered by a real, credible employer in the UK. You may know of other examples, but more about that in a minute.

Years of personal research into this had led me to conclude that the apprenticeships on offer - what little there were - simply were not credible. They tended to be short. They tended to be technology or vendor-specific. They tended to be, well, a bit "Mickey Mouse", preferring to skip the tough stuff and the disciplines of writing software under commercial constraints in favour of "cool stuff" that may play well in digital agencies but is a poor foundation for a resurgence in UK tech.

There are three things I'd look for in an apprenticeship:

1. Is it long enough and in-depth enough for such a complex discipline?

2. Does it lead to a degree or equivalent internationally-recognised qualification? (Let's face it, even the best devs - without a degree - can hit a glass ceiling.)

3. Does the employer pay for it - or, at the very least, not charge for it?

The Renishaw software apprenticeship appears to tick all three boxes.

I'm not aware of any other schemes that currently do. Perhaps you are?

I'd like to hear about them if you are, so I can compile an online resource for potential apprentices interested in learning to be a good software developer.




June 12, 2014

Next Public Test-driven Development Course, London July 19th

Just a short hoorah to mention I'll be running another public TDD workshop on Saturday July 19th, aimed at and priced for self-starters who self-fund their own training. (Though you're more than welcome to come if the boss is paying, too.)

Still an inflation-beating £99 for a packed day of craftsy goodness. Places are limited. Hope you can join us.




June 8, 2014

Reliability & Sustaining Value Are Entirely Compatible Goals

This is a short blog post about having your cake and eating it.

The Agile Software Development movement has quite rightly shifted the focus in what we do from delivering to meet deadlines to delivering sustainable value.

A key component in sustaining the delivery of value through software is how much it costs to change our code.

The Software Craftsmanship schtick identifies the primary factors in the cost of changing software, namely:

1. How easy is it to understand the code?

2. How complicated is the code?

3. How much duplication is there in the code?

4. How interdependent are all the things in the code?

5. How soon can we find out if the change we made broke the code?

By taking more care over these factors, we find that it's possible to write software in a way that not only delivers value today, but doesn't impede us from delivering more value tomorrow. In the learning process that is software development, this can be critical to our success.

And it's a double win. Because, as it turns out, when we take more care over readability, simplicity, removing duplication, managing dependencies and automating tests, we also make our software more reliable in the first instance.

Let us count the ways:

1. Code that's easier to understand is less likely to suffer from bugs caused by misunderstandings.

2. Code that is simpler tends to have fewer ways to go wrong - fewer points of failure - to achieve the same goals.

3. Duplicated code can include duplicated bugs. Anyone who's ever "reused" code from sites like The Code Project will know what I mean.

4. Just as changes can propagate through dependencies, so can failures. If a critical function is wrong, and that function is called in many places and in many scenarios, then we have a potential problem. It's possible for a single bug in a single line of code to bring down the entire system. We call them "show-stoppers". It's for this reason I dreamed up the Dependable Dependencies Principle for software design.

5. High levels of automated test assurance - notice I didn't say "coverage" - tends to catch more programming errors, and sooner. This makes it harder for bugs to slip unnoticed into the software, which can also have economic benefits.


So there's your cake. Now eat it.






May 30, 2014

From Where We're Standing, "Perfect" And "Good Enough" Are The Same Destination

This morning, I heard another tale of woe from a developer working at a company that had hired a "quality manager" who goes around warning developers not to overdo software quality.

Demotivational posters have gone up with slogans like "Perfection is the enemy of Good Enough".

I make this point often, but I guess I'll just have to keep making it: for more than 99% of teams, there is no immediate danger of making their software too good.

Indeed, so far are the majority of teams from even nearing perfection, that - from their vantage point - Perfect and Good Enough are effectively the same destination.

Imagine the quality of your software is Miami, Florida. And let's imagine that Perfection for your software is Manhattan, New York.

"Good Enough" would probably be somewhere around Brooklyn, NY. That is to say, if you're in Miami, Florida, the answers to the questions "How do I get to Manhattan?" and "How do I get to Brooklyn?" are essentially the same.

Fear not perfect software. Just because you aim for it, it doesn't mean it will ever happen. But, for the vast bulk of development teams, falling just short of it could well put them where they need to be.

So, here's my demotivational poster about quality:

Aim for perfection, because Good Enough is on the way