April 17, 2015
Generating Domain-Driven Design 'Name Palettes' Using Tag Clouds
Here's a quick idea for anyone who's interested in Domain-driven Design (DDD), and believes that the design of code should be driven by our understanding of the problem space.
Take a requirements specification - say, a user acceptance test - and run the plain text of it through a tag cloud generator like the one at tagcrowd.com.
Here, I ran the following acceptance test - written in the trendy "given... when... then... " style of Behaviour-driven Development (BDD) - through the tag cloud generator:
Given a copy of a DVD title that isn’t in the library,
When a member donates their copy, specifying the name of the DVD title
Then that title is added to the library
AND their copy is registered against that title so that other members can borrow it,
AND an email alert is sent to members who specified an interest in matching titles,
AND the new title is added to the list of new titles for the next member newsletter
AND the member is awarded priority points
It generates the following tag cloud:
added alert awarded borrow copy donates dvd email given interest library list matching member name newsletter points priority registered sent specified title
Now, when I'm thinking about the internal design - either up-front or after-the-fact through refactoring - I have a palette of words, drawn directly from the problem space, upon which units of code can be hung.
Need a name for a new class? Take a look in the tag cloud and see if there's anything suitable. Need a name for a new method? Ditto. And so on.
Working in reverse, you might also find it enlightening to take code that's been stripped of all language features (reserved words etc.) so that only identifiers remain, and feed that through the tag cloud generator so it can be compared to the tag cloud generated from the requirements. It's by no means a scientific approach to DDD, but it could at least give you a quick visual way of seeing how closely your code matches the problem domain.
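The first half of that idea is simple enough to script yourself. Here's a minimal Python sketch of a tag-cloud-style word count over the acceptance test above; the stop-word list and the `name_palette` function name are my own inventions for illustration, not part of TagCrowd or any standard tool:

```python
import re
from collections import Counter

# Connectives and BDD scaffolding to ignore - they aren't domain language.
# This stop-word list is illustrative; extend it to suit your own specs.
STOP_WORDS = {
    "given", "when", "then", "and", "the", "that", "their", "a", "an",
    "is", "are", "to", "of", "in", "for", "so", "it", "who", "can",
    "other", "against", "next", "isn",
}

def name_palette(text, top=20):
    """Return the most frequent domain words in a requirements spec."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if len(w) > 2 and w not in STOP_WORDS)
    return counts.most_common(top)

spec = """Given a copy of a DVD title that isn't in the library,
When a member donates their copy, specifying the name of the DVD title
Then that title is added to the library
AND their copy is registered against that title
so that other members can borrow it"""

for word, count in name_palette(spec):
    print(word, count)
```

Run the same function over your code's identifiers (split on camel case and underscores first) and you have the two word lists to compare side by side.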
April 15, 2015
Reality-Driven Development - Do We Need To Train Ourselves To Tune Into Real-World Problems?
The world is full of problems.
That is to say, the world is full of people who are hindered in their efforts to achieve their goals, no matter how small and everyday those goals might be.
I'm no different to anyone else in that respect; every day I hit barriers, overcome obstacles, and occasionally give up trying. It might be something as prosaic as being unable to find an item in the shops when I'm in town, or wondering if there's a café or a pub nearby that has seating available at that specific moment in time. Or it could be something as astronomically important as wondering if aliens have ever visited our solar system, and have they left any hardware behind.
Problems, problems, problems. The real world's full of interesting problems.
And so it came as some surprise that when my apprentice and I took a moment to think about ideas for a software project, we struggled to think of any real-world problems that we could have a go at solving.
It struck me in that moment that maybe, as a species, we software bods are a bit crap at noticing problems that have nothing to do with developing software. Typically, when I ask developers to think about ideas for a project, the suggestions have a distinctly technical bent. Maybe a new testing tool, or a plug-in for Visual Studio, or Yet Another MVC Framework, or something that does something to do with builds (and so on.)
A lifetime of minor frustrations, some of which software might be able to help with, pops clean out of our heads. And I ask: are we just not tuned in to real-world problems?
There are people who walk into a room and notice that the scatter cushions complement the carpet. I don't. I'm just not tuned in to that sort of thing. And if you give them some money and say "improve this room", they'll have all sorts of ideas about furniture and bookshelves and paintings and curtains and pot plants, while I would buy a Massive F***ing TV™ and a PlayStation 4. I'm tuned into technology in ways they're not, and they're tuned into home furnishings in ways I'm not.
Do we need to mindfully practice noticing everyday problems where software might help? Do we need to train ourselves to tune in to real-world problems when we're thinking about code and what we can potentially do with it? Do we need to work at noticing the colour of the scatter cushions when we walk into a room?
I've suggested that Will and I keep notes over the next two weeks about problems we come up against, so that when we next pair, there might be some inspiration for us to draw from.
Which gives me an idea for an app....
April 8, 2015
Reality-driven Development - Creating Software For Real Users That Solves Real Problems In the Real World
It's a known fact that software development practices cannot be adopted until they have a pithy name to identify the brand.
Hence it is that, even though people routinely acknowledge that it would be a good idea for development projects to connect with reality, very few actually do because there's no brand name for connecting your development efforts with reality.
Reality-driven Development is a set of principles and practices aimed at connecting development teams to the underlying reality of their efforts so that they can create software that works in the real world.
RDD doesn't replace any of your existing practices. In some cases, it can short-circuit them, though.
Take requirements analysis, for example: the RDD approach compels us to immerse ourselves in the problem in a way traditional approaches just can't.
Instead of sitting in meeting rooms talking about the problem with customers, we go out to where the problem exists and see and experience it for ourselves. If we're tasked with creating a system for call centre operatives to use, we spend time in the call centre, we observe what call centre workers do - pertinent to the context of the system - and most importantly, we have a go at doing what the call centre workers do.
It never ceases to amaze me how profound an effect this can have on the collaboration between developers and their customers. Months of talking can be compressed into a day or two of real-world experience, with all that tacit knowledge communicated in the only way that tacit knowledge can be. Requirements discussions take on a whole different flavour when both parties have a practical, first-hand appreciation of what they're talking about.
Put the shoe on the other foot (and that's really what this practice is designed to do): imagine your customer is tasked with designing software development tools, working entirely from an understanding of how we develop software built purely on our description of the problem. How confident are you that we'd communicate it effectively? How confident are you that their solutions would work on real software projects? You would expect someone designing dev tools to have been a developer at some point. Right? So what makes us think someone who's never worked in a call centre will be successful at writing call centre software? (And if you really want to see some pissed off end users, spend an hour in a call centre.)
So, that's the first practice in Reality-driven Development: Real-world Immersion.
We still do the other stuff - though we may do it faster and more effectively. We still gather user stories as placeholders for planning and executing our work. We still agree executable acceptance tests. We still present it to the customer when we want feedback. We still iterate our designs. But all of these activities are now underpinned by a much more solid and practical shared understanding of what it is we're actually talking about. If you knew just how much of a difference this can make, it would be the default practice everywhere.
Just exploring the problem space in a practical, first-hand way can bridge the communication gap in ways that none of our existing practices can. But problem spaces have to be bounded, because the real world is effectively infinite.
The second key practice in Reality-driven Development is to set ourselves meaningful Real-world Goals: that is, goals that are defined in and tested in the real world, outside of the software we build.
Observe a problem in the real world. For example, in our real-world call centre, we observe that operatives are effectively chained to their desks, struggling to take regular comfort breaks, and struggling to get away at the end of a shift. We set ourselves the goal of every call centre worker getting at least one 15-minute break every 2 hours, and working a maximum of 15 minutes' unplanned overtime at the end of a day. This goal has nothing to do with software. We may decide to build a feature in the software they use that manages breaks and working hours, and diverts calls that are coming in just before their break is due. It would be the software equivalent of when the cashier at the supermarket checkout puts up one of those little signs to dissuade shoppers from joining their queue when they're about to knock off.
Real-world Goals tend to have a different flavour to management-imposed goals. This is to be expected. If you watch any of those "Back to the floor" type TV shows, where bosses pose as front-line workers in their own businesses, it is very often the case that the boss doesn't know how things really work, and what the real operational problems are. This raises natural cultural barriers and issues of trust. Management must trust their staff to drive development and determine how much of the IT budget gets spent. This is probably why almost no organisation does it this way. But the fact remains that, if you want to address real-world problems, you have to take your cues from reality.
Important, too, is the need to strike a balance in your Real-world Goals. While we've long had practices for discovering and defining business goals for our software, they tend to suffer from a rather naïve 1-dimensional approach. Most analysts seek out financial goals for software and systems - to cut costs, or increase sales, and so on - without looking beyond that to the wider effect the software can have. A classic example is music streaming: while businesses like Spotify make a great value proposition for listeners, and for major labels and artists with big back catalogues, arguably they've completely overlooked 99.9% of small and up-and-coming artists, as well as writers, producers and other key stakeholders. A supermarket has to factor in the needs of suppliers, or their suppliers go out of business. Spotify has failed to consider the needs of the majority of musicians, choosing to focus on one part of the equation at the expense of the other. This is not a sustainable model. Like all complex systems, dynamic equilibrium is usually the only viable long-term solution. Fail to take into account key variables, and the system tips over. In the real world, few problems are so simple as to only require us to consider one set of stakeholders.
In our call centre example, we must ask ourselves about the effect of our "guaranteed break" feature on the business itself, on its end customers, and on anyone else who might be affected by it. Maybe workers get their breaks, but not without dropping calls. Or not without a drop in sales. All of these perspectives need to be looked at and addressed, even if by addressing them we end up knowingly impacting people in a negative way. Perhaps we can find some other way to compensate them. But at least we're aware.
The third leg of the RDD table - the one that gives it the necessary balance - is Real-world Testing.
Software testing has traditionally been a standalone affair. It's vanishingly rare to see software tested in context. Typically, we test it to see if it conforms to the specification. We might deploy it into a dedicated testing environment, but that environment usually bears little resemblance to the real-world situations in which the software will be used. For that, we release the software into production and cross our fingers. This, as we all know, pisses users off no end, and rapidly eats away at the goodwill we rely on to work together.
Software development does have mechanisms that go back decades for testing in the real world. Alpha and Beta testing, for example, are pretty much exactly that. The problem with that kind of small, controlled release testing is that it usually doesn't have clear goals, and lacks focus as a result. All we're really doing is throwing the software out there to some early adopters and saying "here, waddaya think?" It's missing a key ingredient - real-world testing requires real-world tests.
Going back to our Real-world Goals: in a totally test-driven approach, where every requirement or goal is defined with concrete examples that can become executable tests, we're better off deploying new versions of the software into a real-world(-ish) testing environment that we can control completely, where we can simulate real-world test scenarios in a repeatable and risk-free fashion, as often as we like.
A call centre scenario like "Janet hasn't taken a break for 1 hour and 57 minutes, there are 3 customers waiting in the queue, they should all be diverted to other operators so Janet can take a 15-minute break. None of the calls should be dropped" can be simulated in what we call a Model Office - a recreation of all or part of the call centre, into which multiple systems under development may be deployed for testing and other purposes.
Our call centre model office simulates the real environment faithfully enough to get meaningful feedback from trying out software in it, and should allow us to trigger scenarios like this over and over again. In particular, model offices enable us to exercise the software in rare edge cases and under unusually high peak loads that Alpha and Beta testing are less likely to throw up. (e.g., what happens if everyone is due a break within the next 5 minutes?)
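To make the shape of such a scenario concrete, here's a hedged Python sketch of the diversion rule and the "Janet" scenario. Everything here - the `Operator` and `CallRouter` names, the 5-minute divert window - is invented for illustration; a real model office would exercise the actual call centre software against real phones and trained operators, not a toy model like this:

```python
from dataclasses import dataclass, field

# Illustrative policy numbers, taken from the goal above, not from any real system.
BREAK_DUE_AFTER = 120   # a break is due after 2 hours on the phones
DIVERT_WINDOW = 5       # stop routing new calls this many minutes before it's due

@dataclass
class Operator:
    name: str
    minutes_since_break: int = 0
    on_break: bool = False
    calls: list = field(default_factory=list)

class CallRouter:
    """Routes incoming calls away from operators whose break is imminent."""
    def __init__(self, operators):
        self.operators = operators
        self.dropped = 0

    def route(self, call):
        # Prefer operators who are neither on a break nor about to take one...
        available = [op for op in self.operators if not op.on_break
                     and op.minutes_since_break < BREAK_DUE_AFTER - DIVERT_WINDOW]
        # ...but fall back to anyone at their desk rather than drop the call.
        if not available:
            available = [op for op in self.operators if not op.on_break]
        if not available:
            self.dropped += 1
            return None
        least_busy = min(available, key=lambda op: len(op.calls))
        least_busy.calls.append(call)
        return least_busy

# The scenario: Janet is 3 minutes from a due break, 3 customers are waiting.
janet = Operator("Janet", minutes_since_break=117)
sam, priya = Operator("Sam"), Operator("Priya")
router = CallRouter([janet, sam, priya])
for call in ("call-1", "call-2", "call-3"):
    router.route(call)
print("Janet's new calls:", len(janet.calls), "- dropped:", router.dropped)
```

The point of the model office is that this same scenario gets triggered against the real system, over and over, with the pass criteria ("Janet takes her break, no calls dropped") observed in the simulated environment rather than asserted against a spec.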
Unless you're working on flight systems for fighter aircraft or control systems for nuclear power stations, it doesn't cost much to set up a testing environment like this, and the feedback you can get is worth far more.
The final leg of the RDD table is Real-world Iterating.
So we immerse ourselves in the problem, find and agree real-world goals and test our solutions in a controlled simulation of the real world. None of this, even taken together with existing practices like ATDD and Real Options, guarantees that we'll solve the problem - certainly not first time.
Iterating is, in practice, the core requirements discipline of Agile Software Development. But too many Agile teams iterate blindly, making the mistake of believing that the requirements they've been given are the real goals of the software. If they weren't elucidated from a real understanding of the real problem in the real world, then they very probably aren't the real goals. More likely, what teams are iterating towards is a specification for a solution to a problem they don't understand.
The Agile Manifesto asks us to value working software over comprehensive documentation. Reality-driven Development widens the context of "working software" to mean "software that testably solves the user's problem", as observed in the real world. And we iterate towards that.
Hence, we ask not "does the guaranteed break feature work as agreed?", but "do operatives get their guaranteed breaks, without dropping sales calls?" We're not done until they do.
This is not to say that we don't agree executable feature acceptance tests. Whether or not the software behaves as we agreed is the quality gate we use to decide if it's worth deploying into the Model Office at all. The software must jump the "it passes all our functional tests" gate before we try it on the "but will it really work, in the real world?" gate. Model Office testing is more complex and more expensive, and ties up our customers. Don't do it until you're confident you've got something worth testing in it.
And finally, Real-world Testing wouldn't be complete unless we actually, really tested the software in the real real world. At the point of actual deployment into a production environment, we can have reasonably high confidence that what we're putting in front of end users is going to work. But that confidence must not spill over into arrogance. There may well be details we overlooked. There always are. So we must closely observe the real software in real use by real people in the real world, to see what lessons we can learn.
So there you have it: Reality-driven Development
1. Real-world Immersion
2. Real-world Goals
3. Real-world Testing
4. Real-world Iterating
...or "IGTI", for short.
April 6, 2015
Acceptance Tests - The Difference Between Outcomes & Implications Of Outcomes
A little thought to end Easter Sunday.
When writing acceptance tests, I encourage people to distinguish between outcomes and the implications of outcomes.
It's a subtle difference. Imagine we have a test for winning the lottery jackpot. The outcome is that the prize money is credited to the bank balance of the winner. Once they have the money, they can buy a yacht.
Buying a yacht is not an outcome of winning the lottery. But winning the lottery makes it possible to buy a yacht.
Similarly, the outcome of registering as a member of a video library is not that you can borrow videos. Being a member makes it possible for you to borrow videos. But the outcome is that you are now on the members' list of people who can borrow videos.
Do you see what I'm getting at?
Outcomes in acceptance tests should describe the changes in state of the system's data - a change to a bank balance, or a change to the list of registered video library members.
Such changes may make pre-conditions of other use cases true, like borrowing videos. But that's not an actual outcome. That's enabled by the outcome.
So, the next time you're writing acceptance tests, and detailing the outcomes of a user action or system event, look for the changes, not the implications of the changes.
April 4, 2015
Why Software Craftsmanship Needs To Stop Worrying About The Colour Of The Bike Shed & Focus On The Big Issues
Why do software development efforts run into trouble?
When we examine the factors that tend to make development harder, and that raise the risk of failure - that is, failure for development to achieve its goals - we find the same culprits again and again.
1. Lack of customer involvement and "management support"
2. Lack of a clear vision or goal(s)
3. Unrealistic expectations
4. Projects that are too big and complicated to succeed
5. Failure to get feedback & incorporate learning throughout development
6. Lack of care taken to deal with issues early and cheaply
7. Overstaffing of teams to make them go faster
8. Hiring under-skilled teams
9. Being driven by the technology rather than the problem
10. Too much emphasis on the process (and its artefacts) and not enough on outcomes (and the proof)
These risk factors often can be seen conspiring to create some almighty shitstorms.
My overriding experience as a freelance developer and tech lead, going from project to project as we do, was of giant overstaffed projects executed by developers hired largely on price rather than ability, encouraged to hack code out with minimal attention to quality, with stratospheric ambitions squeezed into tiny schedules and arbitrary release dates cooked up by absentee customers who never gave us a clear idea of what it actually was we were setting out to achieve in the first place. The teams, meanwhile, had some pretty strong ideas about what technologies we should use (usually dictated by what they would like to add to their CVs).
That, it would appear, is the norm in software development.
The inverse of that: small projects staffed by small and highly skilled teams, with clear and testable business goals from a committed customer whose involvement is pretty much daily, iterating towards working (tested) solutions on timescales decided by real progress towards our goals, using whatever technology is best to get the job done.
Parkinson's Law of Triviality (the "bike shed effect", named after a nuclear reactor's design committee obsessing over a bike shed while ignoring major issues) predicts that teams will focus on the things that matter least but that they are most comfortable talking about. Hence, we can devote a lot of time to debating the deployment architecture while none of us dares ask "but where is our customer?" It's understandable; many developers do not feel empowered or confident in tackling such matters as lack of customer involvement, dysfunctional hiring policy or unrealistic deadlines. So we obsess about Lambda expressions and Maven instead. That's stuff we know; that's stuff we can control.
But, in the grand scheme of things, Lambda expressions and Maven don't amount to a hill of beans compared to the big stuff. A project can hobble along quite nicely using while loops and Ant scripts. But a project that's too big is almost certain to fail, crushed under its own weight.
We're mistaken, though, that we have no power to influence these factors. In actual fact, as the people who make the software happen - without whom, no software will happen - we have the ultimate power. We can say "no".
When the customer is absent, stop development until you've got the access you need.
When the goals aren't clear and realistic, stop development until you have at least one testable and achievable goal.
When the boss insists on hiring a developer who's not up to snuff, stop development until they're removed from the team.
If these risks can't be addressed, the project has likely failed anyway. You're doing everyone a favour by effectively canning it from the developer's side. It's within our power to do these things.
Other professions can and do do this. Some are legally obliged to, and can go to prison if they don't. I don't advocate legislating to make developers more professional. But if we continue to roll over on these issues, eventually someone somewhere will.
Now, the meeker and milder among us, who find themselves retching at the mere thought of taking a stand, will say "I would have a conversation with them first". And of course you should. But here's the thing, and we've all been there: there will be times when we raise our concerns and the boss says "do it anyway". So what then?
I believe that knowingly choosing failure is bad. It's wrong. It really shouldn't be allowed. But it is, and teams do it all the time. They still get paid, of course. And that's one of the reasons why long schedules with release dates months or years away are still popular. It's how we make a living from failure. The longer we can make it before the shit hits the proverbial fan, the easier it is to sustain a living effectively throwing shit at fans.
Now, if you're the only person on the team who thinks this way, then realistically you're choosing between staying and probably failing, and walking. People with mortgages and school fees and car payments do not like this option. Which is why I coined the term mortgage-driven development, specifically to describe an approach to software development that knowingly embraces failure to ensure continuity of income.
And that, too, is pretty much the norm in software development. You are very likely in a minority on your team if you think that these big risks must be addressed. You are outnumbered by mortgage-driven developers who will accept working without clear goals, but with unrealistic expectations, in inexperienced and probably overstaffed teams, with absentee customers, and all the other risk factors that sink projects - for as long as they keep paying them.
Until that balance changes, choosing failure will continue to be the default approach.
I suppose, secretly, I hoped the software craftsmanship movement would tip that balance. I hoped that more teams would take a stand. But I've seen no evidence of this happening. Mortgage-driven Development is as ubiquitous as ever. Only now, it's more like "Test-driven Mortgage-driven Development (with Angular.js and Microservices)", building on that old tradition of focusing on the things that matter least.
That's not to say that craftsmanship - or rather, the narrow reinterpretation of craftsmanship that's dominated the last 8 years or so - isn't important. It's important enough that I choose to focus my business on it almost exclusively. But it's only one piece - a vital piece - of the jigsaw puzzle that should be software craftsmanship.
As you may know, I'm no fan of renaming things. The fact that the craftsmanship community has been "doing craftsmanship wrong" all these years does not mean that we need a new name for software craftsmanship to make it explicit that it should be about taking care over all aspects of getting software right, including the big issues we prefer not to tackle. I want us to keep the name, and the emphasis on the practical that's characterised craftsmanship in the UK and Europe in particular - it should still be about creating working software, and not just a talking shop about "what it means to be a craftsman" (we've had enough navel-gazing to last us a while).
There's a tendency for movements in software development to tip us off-balance. We seem to lurch from one over-emphasis (e.g., project management) to another (e.g., code quality), never achieving an actual balance of all these forces. Unbalanced Forces is itself considered a software project anti-pattern. We've actually woven unbalanced forces into the fabric of how we progress as a profession, constantly rediscovering ideas we knew about decades ago but then forgot as the next shiny new idea came along.
There just needs to be greater emphasis now on issues like hiring, project size, agreeing goals, and all the stuff that hopefully will lead to us writing Clean Code on projects that give a damn. We've seen hints of it; and some clever approaches to making it hands-on and practical, as it should be.
There. I've said it.
April 2, 2015
"It Works" Should Mean "It Works In the Real World"
In software development, what we call "testing" usually refers to the act of checking whether what we created conforms to the specification for what we were supposed to build.
To illustrate, let's project that idea onto some other kind of functional product: motor cars.
A car designer comes up with a blueprint for a car. The specification is detailed, setting out the exact body shape and proportions, down to what kind of audio player it should have.
The car testers come in and inspect a prototype, checking off all the items in the specification. Yes, it does indeed have those proportions. Yes, it does have the MP3 player with iPod docking that the designer specified.
They do not start the car. They do not take it for a test drive. They do not stick it in a wind tunnel. If it conforms to the specification, then the car is deemed roadworthy and mass manufacturing begins.
This would, of course, be very silly. Dangerous, in fact.
Whether or not the implementation of the car conforms to the specification is only a meaningful question if we can be confident that the specification is for a design that will work.
And yet, software teams routinely make the leap from "it satisfies the spec" to "okay, ship it" without checking that the design actually works. (And these are the mature teams. Many more teams don't even have a specification to test from. Shipping for them is a complete act of faith and/or indifference.)
Shipping software then becomes itself an act of testing to see if the design works. Which is why software has a tendency to become useful in its second or third release, if it ever does at all.
This is wrongheaded of us. If we were making cars - frankly, if we were making cakes - this would not be tolerated. When I buy a car, or a cake, I can be reasonably assured that somebody somewhere drove it or tasted it before mass manufacture went ahead.
What seduces us is this notion that software development doesn't have a "manufacturing phase". We ship unfinished products because we can change them later. Customers have come to expect it. So we use our customers as unpaid testers to figure out what it is we should have built. Indeed, not only are they unpaid testers, many of them are paying for the privilege of being guinea pigs for our untried solutions.
This should not be acceptable. When a team rolls out a solution across thousands of users in an organisation, or millions of users on the World Wide Web, those users should be able to expect that it was tested not just for conformance to a spec, but tested to find out if the design actually works in the real world.
But it's vanishingly rare that teams bother to pay their customers this basic courtesy. And so, we waste our users' time with software that may or may not work, making it their responsibility to find out.
My way of doing it requires that teams continuously test their software by deploying each version into an environment similar to a motor vehicle test track. If we're creating, say, a sales automation solution, we create a simulated sales office environment - e.g., a fake call centre with a couple of trained operators, real telephones and mock customers triggering business scenarios gleaned from real life - and put the software through its paces on the kinds of situations that it should be able to handle. Or, rather, that the users should be able to handle using the software.
Like cars, this test track should be set up to mimic the environment in which the software's going to be used as faithfully as we can. If we were test driving a car intended to be used in towns and cities, we might create a test track that simulates high streets, pedestrian crossings, school runs, and so on.
We can stretch these testing environments, testing our solutions to destruction. What happens if our product gets a mention on primetime TV and suddenly the phones are ringing off the hook? Is the software scalable enough to handle 500 sales enquiries a minute? We can create targeted test rigs to answer those kinds of questions, again driven by real-world scenarios, as opposed to a design specification.
There are hard commercial imperatives for this kind of testing, too. Iterating designs by releasing software is an expensive and risky business. How many great ideas have been killed by releasing to a wide audience too early? We obsess over the opportunity cost of shipping too late, but we tend to be blind to the "embarrassment cost" of shipping products that aren't good enough yet. If anything, we err on the side of the former, and should probably learn a bit more patience.
By the time we ship software, we ought to be at least confident that it will work 99% of the time. Too often, we ship software that doesn't work at all.
Yes, it passes all the acceptance tests. But it's time for us to move on from that to a more credible definition of "it works"; one that means "it will work in the real world". As a software user, I would be grateful for it.
March 27, 2015
Component-based & Microservice Architecture: Swappability Happens On The Client Side
Lunch time teleconference looms; just enough time to spew out these thoughts about distributed component architectures (you young hip folk may know them as "microservices", which is the trendy cool Dubstep name for them).
The key to distributed components is swappability - actually, that's kind of the whole point of components generally, distributed or in-process, so take this as generally applicable advice. Or don't. See if I care.
What our bearded young hipster friends often forget to mention is that the swappability in component-based design really happens on the client side of component-to-component (or service-to-service) collaborations. Not, as you may have been led to believe, on the server side.
Sure, we could make pretty much any kind of component present the same, say, REST API. But we still run the risk of binding our client to that API, and to the details of how to consume RESTful services. (Which can be, ironically, anything but restful to work with.)
Nope. To make that service truly swappable, we have to hide all of the details from the client.
UML 2.0, that leviathan of the component-based era, introduced the notion of components and connectors. It's a pretty neat idea: basically, we cleanly separate the logical conversation held between two components from the dirty business of the medium through which that conversation takes place.
To connect the idea with the real world, let's say I asked the prime minister if he's ever eaten three Shredded Wheat, and he replied "Yes". Now, I didn't say how I asked him. Maybe I went to Number 10 and asked him to his face. Maybe I emailed him. Maybe I had the words branded onto a poor person and paraded said pauper through the streets so that the PM would see the question when they reported the news on his tellybox.
What matters is that I asked, and he replied.
In component architectures, we seek to separate the logic of component interactions from the protocols through which they physically connect.
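To make that separation concrete, here's a minimal sketch (in Python, though the later examples in this post are .NET) of the Shredded Wheat conversation: the logical question lives in an interface, and each "connector" is just a different implementation of it. Every name here is invented purely for illustration.

```python
from abc import ABC, abstractmethod

# The logical conversation: the question and its answer,
# with no mention of the medium it travels over.
class PrimeMinisterQuestions(ABC):
    @abstractmethod
    def has_eaten_three_shredded_wheat(self) -> bool:
        ...

# Connectors: different media carrying the same conversation.
class FaceToFaceAtNumber10(PrimeMinisterQuestions):
    def has_eaten_three_shredded_wheat(self) -> bool:
        return True  # asked in person

class EmailToDowningStreet(PrimeMinisterQuestions):
    def has_eaten_three_shredded_wheat(self) -> bool:
        return True  # asked by email

# The asker binds only to the logical interface, never to the medium.
def ask(pm: PrimeMinisterQuestions) -> bool:
    return pm.has_eaten_three_shredded_wheat()
```

What matters to `ask` is only that the question gets asked and answered; swapping the connector changes nothing about the caller.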
Let's say we have a Video Library web application that wants to display 3rd-party reviews of movies.
We could look for reviews on IMDB, or on Rotten Tomatoes (or on both). We want to ask the logical question: what reviews have been written for this movie (identified by the movie's title and year)?
We can codify the logic of that interaction with interfaces, and package those interfaces in a component the client knows about - in this case, the VideoReviews .NET assembly, which contains the interfaces IReviewService and IReview.
The VideoTitle class that consumes these services doesn't need to know where the reviews are coming from, or how they're being marshalled. It just wants to ask the logical question. So we present it with a logical interface through which to do that.
By injecting the review service into the VideoTitle, it becomes possible to dynamically bind implementations of those interfaces that know how to connect and interact with the remote server (e.g., the Rotten Tomatoes API), and unpack the data that comes back, translating it into a form that VideoTitle can use easily.
All of that is done behind the scenes: VideoTitle knows nothing about the details. And because it knows nothing about the details, and because we're injecting the service into the VideoTitle from outside - e.g., in its constructor, or as a method parameter when VideoTitle is told to get reviews - it becomes possible to swap in different connectors to get reviews from other services.
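As a rough sketch of that arrangement (the post describes .NET interfaces named IReviewService and IReview; the Python names and the hard-coded connector below are illustrative only, not the actual implementation):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Logical abstractions owned by the client component
# (mirroring the post's IReview / IReviewService).
@dataclass
class Review:
    reviewer: str
    text: str

class ReviewService(ABC):
    @abstractmethod
    def reviews_for(self, title: str, year: int) -> list:
        ...

# The client binds only to the abstraction; the service is injected.
class VideoTitle:
    def __init__(self, title: str, year: int, review_service: ReviewService):
        self.title = title
        self.year = year
        self._reviews = review_service

    def get_reviews(self) -> list:
        return self._reviews.reviews_for(self.title, self.year)

# A connector that would, in real life, call the Rotten Tomatoes API
# and translate the response. Hard-coded here to keep the sketch
# self-contained.
class RottenTomatoesConnector(ReviewService):
    def reviews_for(self, title: str, year: int) -> list:
        return [Review("A. Critic", f"{title} ({year}) is certified fresh")]
```

Swapping in an IMDB connector - or a test double - is then just a matter of passing a different ReviewService implementation to VideoTitle's constructor.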
All of this can be wired together from above (e.g., we could instantiate an implementation of IReviewService when the application starts up), or with a dependency injection framework, and so on. The possibilities are many and varied for runtime hi-jinks and dynamic larks.
The component dependencies are crucial: our app logic (in the VideoLibrary component) only depends on the abstractions in the VideoReviews library. It depends in no way on the external services. It doesn't even know there are external services involved.
All component dependencies point towards the abstractions, satisfying the Stable Abstractions package design principle.
It now becomes possible to do clever things with swappability, like pooling connectors that point to different service instances to provide basic load-balancing or fail-over, or giving the end user the choice of which service to use at runtime.
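For example, a fail-over connector might implement the same logical interface by delegating to a pool of underlying sources, trying the next one when one fails. A speculative Python sketch (all names invented for illustration):

```python
# Stand-ins for real connectors: one that is down, one that works.
class FlakySource:
    def reviews_for(self, title, year):
        raise ConnectionError("review service is down")

class WorkingSource:
    def reviews_for(self, title, year):
        return [f"{title} ({year}): two thumbs up"]

class FailoverReviewService:
    """Implements the client-side abstraction by delegating to a pool
    of connectors, falling back to the next when one raises an error."""
    def __init__(self, *sources):
        self._sources = sources

    def reviews_for(self, title, year):
        last_error = None
        for source in self._sources:
            try:
                return source.reviews_for(title, year)
            except Exception as error:
                last_error = error
        raise last_error
```

The client still sees one review service; the pooling and fail-over are hidden entirely inside the connector.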
Most importantly, though, it gives us the ability to vary the logic of our applications and the details of how they connect to external services independently of each other. If a new movie review site came along, we would simply have to write a new connector for it, and wouldn't have to rely on them implementing the same web API as our existing providers. Because that, my friends, is beyond our control.
So, succeeding with components is about swappability, and swappability is about programming the logic of our applications against clean interfaces that we control.
The REST is details.
March 19, 2015
Requirements 2.0 - Make It RealThis is the second post in a series to float radical ideas for changing the way we handle requirements in software development. The previous post was Ban Feature Requests.
In my previous post, I put forward the idea that we should ban customers from making feature requests so that we don't run the risk of choosing a solution too early. For example, in a user story, we'd get rid of most of the text, just leaving the "So that..." clause to describe why the user wants the software changed.
Another area where there's great risk of pinning our colours to a specific solution is in the collaboration between a customer and a UI/UX designer. The issue here is that things like wireframes and UI mock-ups tend to be the first concrete discussion points we put in front of customers. Up to this point, it's all very handwavy and vague. But seeing a web page with a text box and a list and some buttons on it can make it real enough to have a more meaningful discussion about the problem we're trying to solve.
This would be fine if we didn't get so attached to those designs. But, let's face it, we do. We get very attached to them, and then the goal of development transforms into "what must we do in order to realise that design?", when in reality, we're still exploring the problem space.
So, we need some way to make our ideas concrete, so we can have meaningful discussions about the problem, without presenting the customer with a design for a solution.
Here's what I do, when the team and the customer are willing to play ball:
I make it real by... well... making it real. I call this Tactile Modeling. (No doubt by tomorrow afternoon, some go-getting young hipster will have renamed it "Illustrating Requirements Using Things You Can See and Hold In Your Hand-driven Development". But for now, it's Tactile Modeling.)
Now, I'm old enough to remember when we were all so young and stupid we really thought that visual models in notations like UML would serve this purpose. Yeah, I know. It's like watching old movies of women smoking next to their babies. Boy, were we dumb!
But the idea of being able to concretely explore examples and business scenarios in a practical way can carry real power to break down the communication barriers; far more effectively than our current go-to techniques like agreeing acceptance tests in some airless meeting room with a customer who is pulling domain facts out of thin air half the time.
So, if we're talking about a system for managing a video library, let's create a video library and explore real-world systems for managing it. Let's get some videos. Let's get some shelves to put them on. Let's get some boxes and folders and sticky-tape and elastic bands and build a video library management system out of real actual atoms and stuff, and explore how it works in different scenarios.
And instead of drawing boxes and arrows and wireframes and wizardry up on the whiteboard or in a modelling tool (like PowerPoint, for example), let's whip out our camera phones and take snaps at key steps and take videos to show how a process works and stick them in the Wiki for everyone to see.
And let's not sit in meeting rooms going "blah blah blah must be scalable etc etc", let's have our discussions inside this environment we've created, so we're surrounded by the problem domain, and at any point requiring clarification, the clarifier can jump up and show us what they mean, so that we can all see it (using our eyes).
As our understanding evolves, and we start to create software to be used in some of these scenarios to help the end users in their work, we can deploy that software into this fake video library and gradually swap out the belt-and-braces information systems with slick software, all the while testing to see that we're achieving our goals.
Now, I know what some of you are thinking: "but our problem domain is all abstract concepts like 'currency', 'option' and 'ennui'. " Well, here's the good news. Movies are an abstract concept. Sure, they come in boxes sometimes, or on cassettes. But that's just the physical representation - the medium - through which that concept is expressed. It's the same movie whether we download it as a file, buy it on a disc or get someone to paint it as a mural. That's what separates us from the beasts of the jungle. Well, that and the electrified fence around our compound. But mostly, it's our ability to express abstract concepts like money, employment contract and stock portfolio that we've built our entire civilisation on. Money can be represented by little pieces of paper with numbers written on them. (A radical idea, I know, but worth a try sometime.) And so on.
There is always a way to make it practical: something we can pick up and look at and manipulate and move to model the information in the system, be it information about hospital patients, or about chemical components in self-replicating molecules, or about single adults who are looking for love.
Of course, there's more to it than that. But you get the gist, I'm sure. And we'll look at some of that in the next post, no doubt. In particular, the idea of a model office: a simulated testing and learning environment into which we should be deploying our software to see how it fares in something approaching the real world.
Wanna have a meaningful conversation about requirements? Then make it real.
Requirements 2.0 - Ban Feature RequestsThis is the first post in a series that challenges the received wisdom about how we handle requirements in software development.
A lot of the problems in software development start with someone proposing a solution too early.
User stories in Agile Software Development are symptomatic of this: customers request features they want the software to have, qualifying them with a "so that..." clause that justifies the feature with a benefit.
Some pundits recommend turning the story format around, so the benefit comes first, a bit like writing tests by starting with the assertion and working backwards.
I'm going to suggest something more radical: I believe we should ban feature requests altogether.
My format for a user story would only have the "so that..." clause. Any mention of how that would be achieved in the design of the software would be excluded. The development team would figure out the best way to achieve that in the design, and working software would be iterated until the user's goal has been satisfied.
It's increasingly my belief that the whole requirements discipline needs to take a step back from describing solutions and their desired features or properties, to painting a vivid picture of what the user's world will look like with the software in it, with a blank space where the software actually goes.
Imagine trying to define a monster in a horror movie entirely through reaction shots. We see the fear, we hear the screams, but we never actually see the monster. That's what requirements specs, in whatever form they're captured, should be like. All reaction shots and no monster.
Why ban feature requests? Well, three reasons:
1. All too often, we find ourselves building a solution to a problem that's never been clearly articulated. Iterating designs only really works when we iterate towards clear goals. Taking away the ability to propose solutions (features) early forces customers (and developers) to explicitly start by thinking about the problem they're trying to solve. We need to turn our thinking around.
2. The moment someone says "I want a mobile app that..." or "When I click on the user's avatar..." or even, in some essential way, "When I submit the mortgage application..." they are constraining the solution space unnecessarily to specific technologies, workflows and interaction designs. Keeping the solution space as wide open as possible gives us more choices about how to solve the user's problem, and therefore a greater chance of solving it in the time we have available. On many occasions when my team's been up against it time-wise, banging our heads against a particular technical brick wall, we took a step back and asked "What are we actually trying to achieve here?", and the breakthrough came when we chose an easier route to giving the users what they really needed.
3. End users generally aren't software designers. For the exact same reason that it's not such a great idea to specify a custom car for me by asking "What features do you want?" or for my doctor to ask me "What drugs would you like?", it's probably best if we don't let users design the software. It's not their bag, really. They understand the problem. We do the design. We play to our strengths.
So there you have it. Ban feature requests.
March 11, 2015
Distributed Architecture - "Swappability" Is Enabled On The Client Side, Not The ServerIt's a common mistake; developers building applications out of multiple distributed components often fall into this trap.
The trick with distributed component-based designs is to recognise that the protocols we use to wire components ("services") together are a detail, not an abstraction.
The goal is swappability, and we achieve this goal on the client's side of a distributed interaction, not on the supplier's side.
So, for example, the JSON interface of a "microservice" isn't the swappable abstraction that makes it possible for us to easily replace that component with a different implementation.
You will note how mature component technologies involve tools that generate clean client-side abstractions that we bind the client logic to. The details of how exactly that interaction takes place (e.g., via HTTP) are hidden behind it. When the details aren't hidden, we risk binding our client logic to a specific way of communicating, when it should be focusing on the meaning of the conversation.
In this example, our Trade object needs an up-to-date stock price. It does not need to know where this stock price comes from. By abstracting the conversation on the client side using a StockPriceService interface, it becomes possible for us to dynamically substitute sources of stock prices - including test sources - without having to recompile, re-test and re-deploy our Trade object's component.
It could be that we want to switch to a different supplier of stock information - e.g., switch from Reuters to Bloomberg. Or, indeed, present the user with a choice at runtime. Or we might want to test that the total price is calculated correctly by swapping in a stub StockPriceService implementation. Or write a better implementation of our StockPriceService.
Having this abstraction available on the client side makes that swappability much easier than binding to the interface presented by the supplier (e.g., a web service).
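A minimal sketch of the Trade example (the post implies a .NET design; this Python version, with its invented ticker and prices, is illustrative only):

```python
from abc import ABC, abstractmethod

# Client-side abstraction: Trade asks for a price,
# but never knows where prices come from.
class StockPriceService(ABC):
    @abstractmethod
    def price_of(self, ticker: str) -> float:
        ...

class Trade:
    def __init__(self, ticker: str, quantity: int, prices: StockPriceService):
        self.ticker = ticker
        self.quantity = quantity
        self._prices = prices

    def total_price(self) -> float:
        return self.quantity * self._prices.price_of(self.ticker)

# A stub source for testing; a ReutersConnector or BloombergConnector
# would implement the same interface behind the scenes.
class StubPriceService(StockPriceService):
    def price_of(self, ticker: str) -> float:
        return 10.0
```

Testing the total-price calculation then needs no network at all - we just inject the stub.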
So remember, folks: real swappability is enabled at the client end, not on the server.