January 23, 2018
Without Improving Code Craft, Your Agile Transformation Will Fail

"You must be really busy!" is what people tend to say when I tell them what I do.
It stands to reason. If software is "eating the world", then code craft skills must be highly in demand, and therefore training and coaching for developers in those skills must be selling like hotcakes.
Well, you'd think so, wouldn't you?
The reality, though, is that code craft is critically undervalued. The skills needed to deliver reliable, maintainable software at a sustainable pace - allowing businesses to maintain the pace of innovation - are not in high demand.
We can see this both in the quality of the code the majority of teams produce, and in where organisations focus their attention and choose to invest when developing skills and capabilities.
"Agile transformations" are common. Some huge organisations are attempting them on a grand scale, sending their people on high-priced training courses and drafting in hundreds of Agile coaches - mostly Scrum-certified - to assist, at great expense.
Only a small minority invest in code craft at the same time, and typically they invest a fraction of the time, effort and money they budget for Agile training and coaching.
The end result is software that's difficult to change, and an inability to respond to new and changing requirements. Which is kind of the whole point of Agile.
Let me spell it out in bold capital letters:
IF CODE CRAFT ISN'T A SIGNIFICANT PART OF YOUR AGILE TRANSFORMATION, YOU WILL NEVER ACHIEVE AGILITY.
You can't be responsive to change if your code is expensive to change. It's that simple.
While you build your capability in product management, agile planning and all that scrummy agile goodness, you also need to be addressing the factors that increase the cost of changing code. Skills like unit testing, TDD, refactoring, SOLID, CI/CD are a vital part of agility. They are hard skills to crack. A 3-day Certified Code Crafter course ain't gonna cut the mustard. Developers need ongoing learning and practice, with the guidance of experienced code crafters. I was lucky enough to get that early in my career. Many other developers are not so lucky.
That's why I built Codemanship: to help developers get to grips with the code-facing skills that few other training and coaching companies focus on.
But, I'll level with you: even though I love what I'm doing, commercially it's a struggle. The reason so few others offer this kind of training and coaching is that there's little money in it. Decision makers don't have code craft on their radars. There have been many occasions when I've thought "May as well just get Scrum-certified". I'm not going to go down without a fight, but what I really need (apart from them cancelling Brexit) is a shift in the priorities of businesses that are currently investing millions in Agile transformations while all but ignoring this crucial area.
Of course, those are my problems, and I made my choices. I'm very happy doing what I'm doing. But it's indicative of a wider problem that affects us all. Getting from A to B is about more than just map reading and route planning. You need a well-oiled engine to get you there, and to get you wherever you want to go next. Too many Agile transformations end up broken down by the side of the road, unable to go anywhere.
January 21, 2018
Delegating "Junior" Development Tasks. (SPOILER ALERT: It doesn't work)When I first took on a leadership role on a software development team 20 years ago, from the reading I did, I learned that the key to managing successfully was apparently delegation.
I would break the work down - GUI, core logic, persistence, etc - and assign it to the people I believed had the necessary skills. The hard stuff I delegated to the most experienced and knowledgeable developers. The "easy" stuff I left to the juniors.
It only took me a few months to realise that this model of team management simply doesn't work for software development. In code, the devil is in the detail. To delegate a task, I had to explain precisely what I wanted that code to do, and how I wanted it to be (in terms of coding standards, our architecture, and so on).
If the task was trivial enough to give to a "junior" dev, it was usually quicker to do it myself. I spent a lot more time cleaning up after them than I thought I was saving by delegating.
So I changed my focus. I delegated work in big enough chunks to make it worthwhile, which meant it was no longer "junior" work.
Looking back with the benefit of 20 years of hindsight, I realise now that delegating "junior" dev tasks is absurd. It's like a lead screenwriter delegating the easy words to a junior screenwriter. It would also probably be a very frustrating learning experience for them. I'm very glad I never went through a phase in my early career of doing "junior" work (although I probably wrote plenty of "junior" code!)
The value in bringing inexperienced developers into a team is to give them the opportunity to learn from more seasoned developers. I got that chance, and it was invaluable. Now, I recommend to managers that their noobs pair up with the old hands on proper actual software development, and allow for the fact that it will take them longer.
This necessitates - if you want the team to be productive as a whole - that the experienced developers outnumber the juniors. Actually, let's not call them that. The trainees.
Over time - months and years - the level of mentoring required will fall, until eventually they can be left to get on with it. And to mentor new developers coming in.
But I still see and hear from many, many people who are stuck in the hell of a Thousand Junior Programmers, where senior people - often called "architects" - are greatly outnumbered by people still wet behind the ears, to whom all the "painting by numbers" is delegated. This mindset is deeply embedded in the cultures of some major software companies. The result is invariably software that's much worse, and costs much more.
It also leads to some pretty demoralised developers. This is not the movie industry. We don't need runners to fetch our coffee.
ADDENDUM: It also just occurred to me, while I'm recalling, that whenever I examined those "junior" dev tasks more closely, their entire existence was caused by a problem in the way we were doing things (e.g., bugginess, lack of separation of presentation and logic, duplication in data access code, etc). These days, when it feels like "grunt" work - repetitive grind - I stop and ask myself why.
January 20, 2018
10 Classic TDD Mistakes

20 years of practicing Test-Driven Development, and training and coaching a few thousand developers in it, has taught me this is not a trivial skillset to learn. There are many potential pitfalls, and I've seen many teams dashed on the rocks by some classic mistakes.
You can learn from their misfortunes, and hopefully steer a path through these treacherous waters. Here are ten classic mistakes I've seen folk make with TDD.
1. Underestimating The Learning Curve
Often, when developers try to adopt TDD, they have unrealistic expectations about the results they'll be getting in the short term. "Red-Green-Refactor" sounds simple enough, but it hides a whole world of ideas, skills and habits that need to be built to be effective at it. If I had a pound for every team that said "we tried TDD, and it didn't work"... Plan for a journey that will take months and years, not days and weeks.
2. Confusing TDD with Testing
The primary aim of TDD is to come up with a good design that will satisfy our customer's needs. It's a design discipline that just happens to use tests as specifications. A lot of people still approach TDD as a testing discipline, and focus too much on making sure everything is tested when they should be thinking about the design. If you're rigorous about applying the Golden Rule (only write solution code when a failing test requires it), your coverage will be high. But that isn't the goal. It's a side benefit.
3. Thinking TDD Is All The Testing They'll Ever Need
If you practice TDD fairly rigorously, the resulting automated tests will probably be sufficient much of the time. But not all of the time. Too many teams pay no heed to whether high risk code needs more testing. (Indeed, too many teams pay no heed to high risk code at all. Do you know where your load-bearing code is?) And what about all those scenarios you didn't think of? It's rare to see a test suite that covers every possible combination of user inputs. More work has to be done to explore the edges of what was specified.
4. Not Starting With Failing Customer Tests
In all approaches to writing software, how we collaborate with our customers is critically important. Designs should be driven directly from testable specifications that we've explicitly agreed with them. In TDD, unsurprisingly, these testable specifications come in the form of... erm... tests. The design process starts by working closely with the customer to flesh out executable acceptance tests that fail. We do not start writing code until we have those failing customer tests. We do not stop writing code until those tests are passing. But a lot of teams still set out on their journey with only the vaguest sense of the destination. Write all the unit tests you want, but without failing executable customer tests, you're just being super-precise about your own assumptions of what the customer wants.
5. Confusing Tools With Practices
Just because tests are written using a customer test specification tool like Cucumber or FitNesse doesn't mean they're customer tests. They could be automated using JUnit, and still be customer tests. What makes them customer tests is that you wrote them with the customer, codifying their examples of how the software will be used. Similarly, just because you used a mock objects framework, that doesn't mean you are mocking. Mocking is a technique for discovering the design of interfaces by writing failing interaction tests. Just because you're writing JUnit tests doesn't mean you're doing TDD. Just because you use ReSharper doesn't mean you're refactoring. Just because you're running Jenkins doesn't mean you're doing Continuous Integration. Kubernetes != Continuous Delivery. And the list goes on (and on and on). Far too many developers think that using certain tools will automatically produce certain results. The tools will not do your thinking for you. As far as I'm aware, RSpec doesn't discuss the requirements with the customer and write the tests itself. You have to talk to the customer.
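To make that concrete, here's a minimal hand-rolled sketch - all the names are hypothetical, and no framework is needed - of the kind of interaction test that drives out the design of an interface:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Writing this interaction test first is how we discover that
    // Payments needs some kind of PaymentListener collaborator
    interface PaymentListener {
        void paymentAccepted(double amount);
    }

    class MockPaymentListener implements PaymentListener {
        boolean notified = false;
        double amount;

        public void paymentAccepted(double amount) {
            notified = true;
            this.amount = amount;
        }
    }

    class Payments {
        private final PaymentListener listener;

        Payments(PaymentListener listener) {
            this.listener = listener;
        }

        void accept(double amount) {
            listener.paymentAccepted(amount);
        }
    }

    class PaymentsTest {
        @Test
        void tellsTheListenerWhenAPaymentIsAccepted() {
            MockPaymentListener listener = new MockPaymentListener();

            new Payments(listener).accept(99.99);

            assertTrue(listener.notified);
            assertEquals(99.99, listener.amount);
        }
    }

The mock exists to record the interaction so the test can assert it happened; the interface exists because the test demanded it.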
6. Not Actually Doing TDD. At All.
When I run the Codemanship TDD training workshop, I often start the first day by asking for a show of hands from people who think they've done TDD. At the end of the first day I ask them to raise their hands if they still think they've done TDD. The number is always considerably lower. Here's the thing: I know from experience that 9 out of 10 developers who put "TDD" on their CV really mean "unit testing". Many don't even know what TDD is. I know this sounds basic, but if you're going to try doing TDD, try doing TDD. Google it. Read an introduction. Watch a tutorial or three. Buy a book. Come on a course.
7. Skimping On Refactoring
To produce code that's as clean as I feel it needs to be, I find I tend to spend about 50% of my time refactoring. Most dev teams do a lot less. Many do none at all. Now, I know many will say "enough refactoring" is subjective, and the debate rages on social media about whether anyone is doing too much refactoring, but let's be frank: the vast majority of us are simply not doing anywhere near enough. The effects of this are felt soon enough, as the going gets harder and harder. Refactoring's a very undervalued skill; I know from my training orders. For every ten TDD courses I run, I might be asked to run one refactoring course. Too little refactoring makes TDD unsustainable. Typical outcome: "We did TDD for 6 months, but our tests got so hard to change that we threw them away."
8. Making The Tests Too Big
The granularity of tests is key to making TDD work as a design discipline, as well as determining how effective your test suites will be at pinpointing broken code. When our tests ask too many questions (e.g., "What are the first 10 Fibonacci numbers?"), we find ourselves having to make a bunch of design decisions before we get feedback. When we work in bigger batches, we make more mistakes. I like to think of it like crossing a stream using stepping stones; if the stones are too far apart, we have to make big, risky leaps, increasing the risk of falling in. Start by asking "What's the first Fibonacci number?".
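A sketch of what that first stepping stone might look like (JUnit 5 here; the Fibonacci class is hypothetical, and doesn't exist yet - which is the point):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class FibonacciTest {
        // One small question, one quick answer - then on to the next
        // stone: "What's the second Fibonacci number?", and so on
        @Test
        public void firstFibonacciNumberIsOne() {
            assertEquals(1, Fibonacci.number(1));
        }
    }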
9. Making The Tests Too Small
Conversely, I also often see people writing tests that focus on minute details that would naturally fall out of passing a more interesting higher-level test. For example, I see people writing tests for getters and setters that really only need to exist because they're used in some interesting behaviour that the customer wants. I've even seen tests that create an object and then assert that it isn't null. Those kinds of tests are redundant. I can kind of see where the thinking comes from, though. "I want to declare a BankAccount class, but the Golden Rule of TDD is I can't until I have a failing test that requires it. So I'll write one." But this is coming at it from the wrong direction. In TDD, we don't write tests to force the design we want. We write tests for behaviour that the customer wants, and discover the design by passing it (and by refactoring afterwards if necessary). We'll need a BankAccount class to test crediting an account, for example. We'll need a getter for the balance to check the result. Focus on behaviour and let the details follow. There's a balance to be struck on test granularity that comes with experience.
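For illustration (hypothetical names again), one behavioural test is enough to force the BankAccount class, its credit() method and the balance getter into existence - no separate getter test required:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class BankAccountTest {
        // The customer cares about crediting accounts; the class, the
        // credit() method and getBalance() all fall out of passing this
        @Test
        public void creditingAnAccountIncreasesTheBalance() {
            BankAccount account = new BankAccount();

            account.credit(50);

            assertEquals(50, account.getBalance());
        }
    }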
10. Going Into "Design Autopilot"
Despite what you may have heard, TDD doesn't take care of the design for you. You can follow the discipline to the letter, and end up with a crappy design.
TDD helps by providing frequent "beats" in development to remind us to think about the design. We're thinking about what the code should do when we write our failing test. We're thinking about how it should do it when we're passing the test. And we're thinking about how maintainable our solution is after we've passed the test as we refactor the code. It's all design, really. But it's not magic.
YOU STILL HAVE TO THINK ABOUT THE DESIGN. A LOT.
So, there you have it: 10 classic TDD mistakes. But all completely avoidable, with some thought, some practice, and maybe a bit of help from an old hand.
January 15, 2018
Refactoring to the xUnit Pattern

16 days left to get my spiffy on-site Unit Testing training workshop at half-price. It's jam-packed with unit testy goodness. Here's a little taste of the kind of stuff we cover.
In the introductory part of the workshop, we look at the anatomy of unit test suites and see how - from the most basic designs - we eventually arrive by refactoring at the xUnit design pattern for unit testing frameworks.
If you've been programming for a while, there's a good chance you've written test code in a Main() method, like this:
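(A minimal sketch in Java - the BankAccount class is a hypothetical stand-in for whatever our library does.)

    public class Program {
        public static void main(String[] args) {
            // Arrange: put the object under test into a known initial state
            BankAccount account = new BankAccount(100);

            // Act: invoke the method we want to test
            account.debit(25);

            // Assert: check the action had the desired effect
            // (run with java -ea so assertions are enabled)
            assert account.getBalance() == 75 : "expected balance of 75";

            System.out.println("Test passed");
        }
    }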
This saves us the bother of having to run an entire application to get quick feedback while we're adding or changing code in, say, a library.
Notice that there are three components to this test:
Arrange - we set up the object(s) we're going to use to be in the initial state we need for this particular test
Act - we invoke the method we want to test
Assert - we ask questions about the final state of our test object(s) to see if the action has had the desired effect
Of course, a real-world application might need hundreds or even thousands of such tests. Our Main() method is going to get pretty big and unwieldy if we keep adding more and more test cases.
So we can break it down into multiple test methods, one for each test case. The name of each test method can clearly describe what the test is.
Our original Main() method just calls all of our test methods.
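Continuing the hypothetical sketch:

    public class Program {
        public static void main(String[] args) {
            testCreditIncreasesBalance();
            testDebitDecreasesBalance();
            System.out.println("Tests passed");
        }

        static void testCreditIncreasesBalance() {
            BankAccount account = new BankAccount(0);
            account.credit(100);
            assert account.getBalance() == 100 : "expected balance of 100";
        }

        static void testDebitDecreasesBalance() {
            BankAccount account = new BankAccount(100);
            account.debit(25);
            assert account.getBalance() == 75 : "expected balance of 75";
        }
    }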
But still, when there are hundreds or thousands of test methods, we can end up with one ginormous class. That too can be broken down, grouping related test methods (e.g., all the tests for a bank account) into smaller test fixtures.
Note that each test fixture has a method that invokes all of its test methods, so our original main method doesn't need to invoke them all itself.
The final piece of the unit testing jigsaw is the class that tells all of our test fixtures to run their tests. We call this a test suite.
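Sticking with the same sketch, it might end up looking like this:

    // A test fixture groups related test methods together
    class BankAccountTests {
        void runTests() {
            testCreditIncreasesBalance();
            testDebitDecreasesBalance();
        }

        void testCreditIncreasesBalance() { /* as before */ }
        void testDebitDecreasesBalance() { /* as before */ }
    }

    class PaymentTests {
        void runTests() {
            testAcceptedPaymentNotifiesListener();
        }

        void testAcceptedPaymentNotifiesListener() { /* ... */ }
    }

    // The test suite tells every fixture to run its tests
    public class AllTests {
        public static void main(String[] args) {
            new BankAccountTests().runTests();
            new PaymentTests().runTests();
            System.out.println("Suite complete");
        }
    }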
At the most basic level, this simple design gives us the ability to write, organise and run large numbers of tests quickly.
As time goes on, we may add a few bells and whistles to streamline the process and make it more useful and usable.
For example, in our current design, when an assertion fails (using .NET's built-in Debug.Assert() method), it will halt execution. If the first test fails in a suite of 1,000 tests, it won't run the other 999. So we might write our own assertion methods to check and report test failures without halting execution.
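(Java's assert keyword halts in just the same way.) A minimal sketch of the alternative - an assertion method that records failures and lets the run carry on:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Objects;

    class TestReport {
        private final List<String> failures = new ArrayList<>();

        // Record the failure and carry on, instead of halting the run
        void assertEquals(Object expected, Object actual, String testName) {
            if (!Objects.equals(expected, actual)) {
                failures.add(testName + ": expected " + expected
                        + " but was " + actual);
            }
        }

        void summarise() {
            failures.forEach(System.out::println);
            System.out.println(failures.isEmpty()
                    ? "All tests passed"
                    : failures.size() + " test(s) failed");
        }
    }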
And we might want to make the output more user friendly and display more helpful results, so we may add a custom formatter/reporter to write out test results.
And - I can attest from personal experience - it can be a real pain in the you-know-what to have to remember to write code to invoke every test method on every test fixture. So we might create a custom test runner - not just a Main() method - that automates the process of test discovery and execution.
We could, for example, invert the dependencies in our test suite on individual test fixtures by extracting a common interface that all fixtures must implement for running its tests. Then we could use reflection or search through the source code for all classes that implement that interface and build the suite automatically.
Likewise, we could specify that test methods must have a specific signature (e.g., start with "Test", a void return type, and have no parameters) and search for all test methods that match.
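A sketch of that kind of runner, using reflection (the naming convention here - public void, parameterless methods starting with "test" - is just one possible choice):

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;

    class TestRunner {
        // Discover and run every public void, parameterless method
        // whose name starts with "test" on the given fixture class
        static void run(Class<?> fixtureClass) throws Exception {
            Object fixture = fixtureClass.getDeclaredConstructor().newInstance();
            for (Method method : fixtureClass.getMethods()) {
                if (method.getName().startsWith("test")
                        && method.getParameterCount() == 0
                        && method.getReturnType() == void.class) {
                    try {
                        method.invoke(fixture);
                        System.out.println("PASS: " + method.getName());
                    } catch (InvocationTargetException e) {
                        System.out.println("FAIL: " + method.getName()
                                + " - " + e.getCause());
                    }
                }
            }
        }
    }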
In my early career, I wrote several unit testing frameworks, and they tended to end up with a similar design. Thousands more had the same experience, and that commonality of experience is captured in the xUnit design pattern for unit testing frameworks.
The original implementation of this pattern was done in Smalltalk ("SUnit") by Kent Beck, and many more have followed in pretty much every programming language you can think of.
In the years since, some useful advanced features have been added, which we'll explore later in the workshop. But, under the hood, they're all pretty much along these lines.
January 9, 2018
Test Granularity Matters. Ask Any Accountant.

It's that time of year when I have to make sure my company's accounts are all up to date and tickety-boo, and I got a useful reminder about why the granularity of our tests really matters.
In my spreadsheet for bank payments and receipts, I have a formula that calculates the closing balance at the end of the financial year. Today, I realised that calculated balance was about £1200 short. Evidently, I had entered one or more payments or receipts incorrectly.
I had to go back through all the bank statements for the year double-checking every line item against the spreadsheet.
Now, if I'd had a formula for the balance at the end of every line item, I could simply have checked the closing balances on each statement to see where they diverged.
I've experienced similar pain when relying on tests that check logic at too high a level (e.g., system tests or API tests). When a test fails, I have to go rummage through the call stack to figure out where it went wrong - the equivalent of reading all my bank statements looking for the line item that doesn't match. Much time is spent in the debugger: a red flag.
I strongly encourage teams to rely more on small, focused tests that - ideally - have only one reason to fail, and to write those tests as close to the module that's doing that piece of work as they can. So when a test fails, it's easy to deduce that "the problem is this, and the problem is here".
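For example (hypothetical names), a focused test like this sits right next to the module doing the work:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class DiscountCalculatorTest {
        // If this fails, the problem is the quantity discount rule, and
        // it's in DiscountCalculator - no call stack to rummage through
        @Test
        public void ordersOfTenOrMoreItemsEarnTenPercentDiscount() {
            assertEquals(0.1, new DiscountCalculator().discountFor(10), 0.0001);
        }
    }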
January 7, 2018
Do Your Automated Tests Give You Confidence In Your Code?

I ran a little poll on the @codemanship Twitter account asking:
How much confidence do your automated tests give you that your software really works? - Codemanship (@codemanship) January 6, 2018
The responses suggest many developers don't put a lot of faith in their automated tests for detecting bugs. The aim of test automation is to dramatically lower the cost and execution time of regression testing our code so that we're alerted to new bugs sooner rather than later.
The ultimate goal is to have high confidence at any point in time that the software works, and is therefore fit for release. This is a foundational requirement of Continuous Delivery - software should always be shippable.
Examining many test suites, as I do every year, I think I have some insight into this problem. Firstly, most teams that have automated tests don't have particularly good test suites. Much of the code isn't reached by them. Many of the tests ask loose questions, leaving big gaps in their assertions that you could drive a bus-load of bugs through.
Teams quickly learn, after the first few releases, that just because their tests are passing, that doesn't mean the code is working. But there seems to be little appetite for beefing up their test suites to plug the leaks that bugs are pouring in through.
Very few teams test their tests to see how effective they are at catching bugs. Even fewer teams target more exhaustive testing at "load-bearing" code, or even have any awareness of which parts of the code present the highest risk.
Happy Path thinking still dominates the developer mindset. Most of us don't think like testers. We want to show that our code works, not that it doesn't in certain edge cases. So our tests tend to skip over the edge cases.
In code reviews - for those teams that do them on any regular basis - test assurance tends not to be one of the things reviewers look for. At best, line coverage is checked. If the coverage report shows the new or changed code is executed in a test, that's spiffy for most dev teams. And, to be fair, most teams don't even check for that. You'd be shocked at how many teams are genuinely surprised to learn how low their coverage is. "But we do TDD...!" Evidently not much of the time.
Teams that practice TDD fairly rigorously tend to have test suites they can put more faith in. But, even as a TDD trainer and mentor with two decades of experience doing it, I regularly feel the need to take testing further after my design is complete.
I'm a big fan of guided inspection, reading the code carefully, looking for test cases I may have missed. I'm also big on parameterised testing, because it can buy you potentially massive amounts of test coverage with surprisingly little extra test code.
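For instance, with JUnit 5's parameterised tests (the Fibonacci class is hypothetical), six cases cost barely more test code than one:

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class FibonacciTest {
        // Each row runs as a separate test case - a lot of extra
        // coverage for very little extra test code
        @ParameterizedTest
        @CsvSource({"1, 1", "2, 1", "3, 2", "4, 3", "5, 5", "6, 8"})
        public void nthFibonacciNumber(int n, int expected) {
            assertEquals(expected, Fibonacci.number(n));
        }
    }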
And, believe it or not, to some extent you can also automate exploratory testing. One example is the simple Java prototype for generating combinations of inputs for use in JUnit tests that I threw together last year. Another example is tools that can randomly generate input data, like Haskell's QuickCheck (and its many language-specific ports, like JCheck).
I also find simple test analysis techniques like truth tables, decision tables, state transition models and program flow models very useful for discovering edge cases I might have missed. Think you're thinking like a tester? Read the first few chapters of Robert Binder's Testing Object Oriented Systems and think again.
So, if you're one of the 58% who said they don't have high confidence in their automated tests, it may be time to take your automated testing to the next level.
January 4, 2018
The Impact of Fast-Running Unit Tests Can Be Profound

The most common issue I find that holds dev teams back is the testing bottleneck. How long it takes to check that your software still works before rolling it out is a major factor in how often you can do releases.
Consider a rudimentary "maturity model" for checking that our code is fit for release. It's a spectrum: at the lowest level (let's call it Level 0), we don't test at all and just release it for the users to "test" in the field; at the highest level (call it Level 5), we test continuously, trying to ensure bugs don't survive the next 10 minutes, let alone make it into a production release.
And there's every level in between 0 and 5. You might be manually testing before a big release. You might be manually testing iteratively, every couple of weeks. You might be running automated GUI tests overnight. You might have a suite of, say, Cucumber tests that take an hour to run. Or a mix of 50/50 GUI and unit tests. Or a bunch of "unit" tests that hit databases, making them integration tests. And so on.
There are 3 axes for our maturity model:
x. How effective our tests are at detecting bugs
y. How quickly they run
z. How often we run them
These factors all interrelate. Catching more bugs often means running more tests, which takes longer. And the longer the tests take to run, the less often we're likely to run them.
Together, they answer the question: how long before a bug is likely to be detected?
Teams have to climb the maturity model if they want to release more reliable code more often and reap the business benefits of Continuous Delivery.
They not only have to improve at writing fast-running automated tests, which is a non-trivial skillset that takes years to master, but also at test analysis and design, so the tests they write are asking more of the right questions. (Yes, it's not all about automation.)
Slow-running tests (manual or automated) are a very common bottleneck I find in dev teams, who wrestle with the much higher cost of removing bugs that are caught much later. I've watched teams go round and round in circles trying to stabilise their product to make it acceptable for a major release, sometimes for many months and at a cost of millions. Such costs are typically dwarfed by the knock-on opportunity cost to the business waiting for critical updates to their software and systems.
I also come into contact with a lot of teams who've been writing automated tests for years, but have remained at a fairly low level of testing maturity. Their tests run slow (hours). Their tests miss a bunch of stuff. While these teams don't suffer from prolonged "stabilisation phases" before releases, they still feel like they're wading through treacle to get working code out of the door. High productivity at the birth of a new code base quickly drops to a trickle of new features and a great deal of bug fixing.
The aim for teams striving for sustainable Continuous Delivery is to be able to re-test their code every single time a change is made. Make one change. Run the tests. Fix the one thing you broke if you broke it. Then on to the next baby step.
This means that your tests must run in seconds, not hours, or days, or weeks. And you need high confidence that if you broke the code, a test would show that.
The effect of tightening up test execution can be profound for dev teams, and for the businesses relying on them. I've witnessed some miracles in my time where organisations that were on their knees trying to evolve their legacy systems eventually managed to stand up and walk, even run, as their testing cycles accelerated.
So, for a developer, writing effective fast-running automated tests is a key skill. It's something we should learn early, and continue to improve on throughout our careers.
If you or your team needs to work on your unit testing chops, I've designed a jam-packed 1-day training workshop that'll kickstart things. And this month, bookings are half-price.
January 3, 2018
Professionalism & the "Customer"Just a few words to add to a post I wrote a few days ago about TDD & "professionalism". I scribbled a quick Venn diagram to illustrate my ideas about stuff software development "professionals" should aim for.
A few good folk have understandably raised objections, which is the natural consequence of saying stuff on the Internet. In particular, some folk object to the idea that a "professional" doesn't write code the customer didn't ask for.
What if the customer doesn't know what they want? Should we build something and see if they like it? Call it an "experiment". We could do that. But before we do that, we could discuss it with the customer and seek their input before we build what we're planning to build. A mock-up, a storyboard, or other lo-fi prototype could clue them in as to what exactly it is we're planning to try for them.
And what if we're building software for the general public? How do we seek permission to try ideas?
This is the problem with words.
What exactly is a "customer"? Different teams will be working in different situations with different kinds of "customer". And there are many understandings of what that word means.
To me, the "customer" is whoever decides what the money gets spent on. In relation to professionalism, we can look at our relationship with our "customer" in many ways.
Think of doctors and patients: the doctor doesn't ask the patient "What medicine would you like me to prescribe?" Instead, she examines the patient, diagnoses the illness, and proposes a treatment. But she still seeks permission from the patient to try it. (Unless the patient is unable to give consent.) Arguably, it would be "unprofessional" of a doctor to administer a treatment without telling the patient what it is, what it's supposed to do, and what side effects it might have. There is a dialogue, then there is consent. The patient decides yay or nay, usually.
Or think of it as gambling. In the casino of software development, decisions are made to bet sums of money on features and changes. Some bets will be bigger than others. Some features will have a potentially larger pay-out than others. In that scenario, where we don't know what the outcome is going to be (which is - let's be honest - how it really is in software development anyway), who are we? Are we the gambler? Or are we the croupier? Do we take their money and tell them to go to the bar while we place bets on their behalf? Or do we ask them to sit at the table, and at least seek consent for every bet before it's placed?
And when it's us deciding what features to try, aren't we the "customer"? In this situation, it's our money we're gambling with. Do we randomly write code and see how it turns out? Or do we take aim before we fire? I've found it to be a bad idea to start writing code without a clear idea of what that code's supposed to do, regardless of whether this is decided in a conversation with a "customer", or in a conversation with myself.
One thing is clear to me (and feel free to disagree): all software development is an experiment. So, personally, I don't distinguish between a "spike" and a "finished solution". They're all spikes. I've found I'm genuinely no quicker producing working code when I cut corners. So my spikes have automated tests, and the code's maintainable. (I rarely even write sample code (e.g., for blog posts) without tests any more.) And they're preceded by a conversation in which the purpose of the spike is explicitly agreed, and consent - even if it's my own consent - is given to do it.
Now, like I said in the original post: I don't find discussions about professionalism very helpful. Words are difficult. However I spin it, some folk will object. And that's fine. Don't wanna do it my way? Don't do it. I'm not in charge of anyone except myself.
And isn't that, after all is said and done, the real definition of a "professional"?
January 2, 2018
It's ALL User Experience

Inspired by a tweet that appeared on the @codemanship timeline, I wanted to take a moment to spitball some thoughts about user experience and how it fits within the software development process.
I've always believed that UX is a central pillar of software design and development. When you think about it, it's all about the user's experience.
You might ask "What has Clean code got to do with user experience?" Well, I can think of many examples of software I was relying on that had very infrequent updates. In fact, in some cases, the updates never came. I got very heavily into a software app for recording guitar-based music, for example, and was forced to abandon it after a long-promised major release - with critical enhancements and bug fixes I was counting on - never materialised.
In an indirect - but no less frustrating way - end users feel the effects of code smells, poor automated test support, lack of CI, and all that kind of gnarly business.
Just as we feel the effects of architectures that don't scale. The exact same company had a website for users to post their music on via the app. Often unresponsive. Frequently unavailable. Eventually pulled altogether. LinkedIn is another application where I've experienced quite frequent unavailability. And Twitter, of course. (You can't substitute an 'Oops, something went wrong' message for good scalable architecture, much as you might like to.)
And millions of unlucky end users have felt the effects of unsecured code on their bank and credit card accounts.
As developers, pretty much everything we do - every decision we make - can be felt by our end users. It may be a direct, visible effect, like an unhandled exception, or an indirect effect, like rising prices to cover ballooning development costs. But, one way or another, they feel it. Just as surely as, when you dine at a restaurant, you feel the effects of a decision not to invest in better ovens, or to use frozen pasta instead of making it fresh.
And then there's planned UX: the stuff we intended our end users to experience. The user's journey, as they navigate the features of our applications through the visual languages (GUIs) we establish for them, is the Alpha and Omega of good software design. It all starts and ends there.
The best software is designed from the user's perspective (just as the best APIs are designed from the perspective of the client code). Put the user at the heart of the process, and track them all the way through from conception to delivery and beyond.
This is why I firmly believe that UX should be a key component of any development team's activities, and why it should not be a shared or outsourced function. UX is part of the team.
And, yes, I believe this too about security, about architecture, about testing, about operations, and all the other key activities that go into delivering good working software. Vertical roles just don't cut it in software development. But it's especially true of UX, because - ultimately - all of these things contribute to the user's experience.
January 1, 2018
New Year 2018 Special Offer - 50% off Unit Testing training

A good suite of fast-running unit tests (tests that don't have external dependencies) is essential to our ability to sustain the pace of delivering clean, reliable code.
But unit testing practices are still not as widespread as they should be. Many development teams still rely on automated system/GUI testing, and far too many rely totally on expensive and slow manual testing. This is the most common cause of the slow release cycles and major delays we see on a daily basis. If your team is stuck in "stabilisation phase hell", fast-running unit tests may very well be part of the solution.
Unit tests are the foundation for code craft, and developers and teams looking for a place to start find my 1-day unit testing workshop very helpful.
To celebrate New Year 2018, we're offering a whopping 50% discount all the way through January. Confirm your booking by Jan 31st and get it half-price.
It comes in Java and .NET flavours, using the most popular unit testing tools, and covers everything you'll need to get started, plus more advanced techniques like mocking and stubbing, fluent assertions, parameterised and data-driven testing, as well as unit testing patterns and architectures you'll find useful as your test suites grow. We end by introducing you to foundational Test-Driven Development, opening the door to further code craft learning.
You can find out more at http://www.codemanship.com/unittesting.html