February 4, 2018

Learn TDD with Codemanship

Don't Bake In Yesterday's Business Model With Unmaintainable Code

I'm running a little poll on the Codemanship Twitter account asking whether code craft skills are something every professional developer should have.




I've always seen these skills as foundational for a career as a developer. Once we've learned to write code that kind of works, the next step in our learning should be to develop the skills needed to write reliable and maintainable code. The responses so far suggest that about 95% of us agree (more than 70% of us strongly).

Some enlightened employers recognise the need for these skills, and address the lack of them when taking on new graduates. Those new hires are the lucky ones, though. Most employers offer no training in unit testing, TDD, refactoring, Continuous Integration or design principles at all. They also often have nobody more experienced who could mentor developers in those things. It's still sadly very much the case that many software developers go through their careers without ever being exposed to code craft.

This translates into a majority of code being less reliable and less maintainable, which has a knock-on effect in the wider economy caused by the dramatically higher cost of changing that code. It's not the actual £ cost that has the impact, of course. It's the "drag factor" that hard-to-change code has on the pace of innovation. Bosses routinely cite IT as being a major factor in impeding progress. I'm sure we can all think of businesses that were held back by their inability to change their software and their systems.

For all our talk of "business agility", only a small percentage of organisations come anywhere close. It's not because they haven't bought into the idea of being agile. The management magazines are now full of chatter about agility. No shortage of companies that aspire to be more responsive to change. They just can't respond fast enough when things change. The code that helped them scale up their operations simultaneously bakes in a status quo, making it much harder to evolve the way they do business. Software giveth, and software taketh away. I see many businesses now achieving ever greater efficiencies at doing things the way they needed to be done 5, 10 or 20 years ago, but unable to adapt to the way things are today and might be tomorrow.

I see this in finance, in retail, in media, in telecoms, in law, in all manner of private sector organisations. And I see it in the public sector, too. "IT delays" is increasingly the reason why government policies are massively delayed or fail to be rolled out altogether. It's a pincer movement: we can't do X at the scale we need to without code, and we can't change the code to do X+1 for a rapidly changing business landscape.

I've always maintained that code craft is a business imperative. I might even go as far as to say a societal imperative, as software seeps into every nook and cranny of our lives. If we don't address issues like how easy our code is to change, we risk baking in the past, relying on inflexible and unreliable systems that are as anachronistic to the way things need to be in the future as our tired old and no-longer-fit-for-purpose systems of governance. An even bigger risk is that other countries will steal a march on us, in much the same way that more agile tech start-ups can steam ahead of established market players simply because they're not dragging millions of lines of legacy code behind them.

While the fashion today is for "digital transformations", encoding all our core operations in software, we must be mindful that legacy code = legacy business model.

So what is your company doing to improve its code craft?






February 1, 2018

Learn TDD with Codemanship

BDD & Specification By Example - Where Did We Go Wrong?

I've been saving this post up for a while, but with a bit of pre-dinner free time I wanted to put it out there now.

I meet a lot of teams, and one thing many of them tell me is that the "customer tests" they've been driving their designs from are actually written by the developers, not the customer.



Sure, they're written using a "Behaviour-Driven Development" or "Acceptance Testing" tool like Cucumber or FitNesse. But just because you've built a "granny annex" on your house, if there's no granny living in it, it's just an "annex".

We've dropped the ball on this. The CHAOS report, published every year by the Standish Group, consistently cites lack of customer involvement as the number one factor in project failure. A tool won't fix that.

Especially when that tool wasn't designed with customer collaboration in mind. When your "Getting Started" guide begins "First, install Visual Studio..." or requires your customer to learn a mark-up language or to use version control, arguably you're bound to have a hard time getting them to engage in the process.

Increasingly, I work with teams who want to somehow connect the way their customer actually prefers to capture examples with the way devs like to automate tests. 90% of the time, that means pulling data out of Excel spreadsheets - still the most widely used tool in both communities - into unit tests. Some unit testing frameworks even have that facility built in (e.g., MSTest for .NET). But reading data from spreadsheets is child's play for most developers. With OLE DB or JDBC, for example, a spreadsheet's just a database.
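For example, a minimal sketch (assuming the Microsoft ACE OLE DB provider is installed and the workbook contains a named range):

    using System.Data;
    using System.Data.OleDb;

    public static class SpreadsheetReader
    {
        // A named range in an Excel workbook can be queried like a
        // database table.
        public static DataTable ReadRange(string workbookPath, string rangeName)
        {
            var connectionString =
                "Provider=Microsoft.ACE.OLEDB.12.0;" +
                "Data Source=" + workbookPath + ";" +
                "Extended Properties='Excel 12.0;HDR=YES'";

            var table = new DataTable();
            using (var connection = new OleDbConnection(connectionString))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM " + rangeName, connection))
            {
                adapter.Fill(table);  // Fill opens and closes the connection itself
            }
            return table;
        }
    }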

But, regardless of the tools, the problem most teams need to solve is a people problem. I've found that close customer involvement is so critical to the chances of a team succeeding at solving the customer's problems that I actually stop development until they engage at the level we need them to. No play? No code.

The mistake many of us make is to give them a choice. "Would you like to spend a lot of time with us discussing requirements and playing with candidate releases and giving us feedback?" "No thanks, ta very much. See you in a year's time."

We made a rod for our own backs by allowing them to be absentee partners while we tried to figure out what they want and need on their behalf. Specification By Example presents us with an opportunity to make the relationship clearer. The customer has to be "trained" to understand that if they haven't agreed a test for it, they ain't gonna get it.



Learn TDD with Codemanship

A Bit of Old School BDD with NUnit & MS Excel

I'm going Old School this morning with my pairing partner, and while she's popped out for a meeting, I thought I'd quickly jot down what we've been working on.

Back in the good old days before BDD/ATDD frameworks, when we wanted to automate customer tests we just captured the customer's example data in something like MS Excel and then wrote a bit of code to read that data into a unit test. (That, essentially, is what SBE tools do, just with some bells and whistles.)

For example, imagine our customer wants to be able to calculate square roots using the software. We could agree an acceptance test, in the trendy hipster "Given...When...Then..." style, and put that in a spreadsheet, like so.
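Something like this, say (illustrative values):

    Given the user wants the square root of a number
    When they calculate the square root of <input>
    Then the result should be <square root>

    | input | square root |
    | 4     | 2           |
    | 6.25  | 2.5         |
    | 9     | 3           |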



If we name the cell range containing the example data "examples" (for ease of extracting using OLE DB), and save this spreadsheet in the root directory of our Visual Studio test project, then we can relatively easily suck out that data to provide NUnit test cases for a parameterised test with arguments that match the data in the table.

Here's a complete source listing for our basic spike.
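A minimal version might look like this (a sketch assuming NUnit 3's TestCaseSource and the Microsoft ACE OLE DB provider; the Maths class under test is illustrative):

    using System;
    using System.Collections.Generic;
    using System.Data.OleDb;
    using System.IO;
    using NUnit.Framework;

    // Illustrative class under test.
    public static class Maths
    {
        public static double SquareRoot(double input)
        {
            return Math.Sqrt(input);
        }
    }

    [TestFixture]
    public class SquareRootTests
    {
        // Reads the named range "examples" from the spreadsheet, which
        // is copied to the test output directory on build.
        private static IEnumerable<TestCaseData> Examples()
        {
            var path = Path.Combine(
                TestContext.CurrentContext.TestDirectory, "SquareRoot.xlsx");
            var connectionString =
                "Provider=Microsoft.ACE.OLEDB.12.0;" +
                "Data Source=" + path + ";" +
                "Extended Properties='Excel 12.0;HDR=YES'";

            using (var connection = new OleDbConnection(connectionString))
            {
                connection.Open();
                var select = new OleDbCommand("SELECT * FROM examples", connection);
                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        yield return new TestCaseData(
                            Convert.ToDouble(reader[0]),   // input
                            Convert.ToDouble(reader[1]));  // expected square root
                    }
                }
            }
        }

        [Test, TestCaseSource(nameof(Examples))]
        public void CalculatesSquareRoot(double input, double expectedRoot)
        {
            Assert.AreEqual(expectedRoot, Maths.SquareRoot(input), 0.000001);
        }
    }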



(We're going to try and refine this a bit, and see if it can't be made more general. One of the downsides of using a custom TestCaseSource is that we can't parameterise it easily to specify different Excel files and different ranges. Though why such a mechanism doesn't already exist is a bit of a mystery, after 15+ years of NUnit.)



January 26, 2018

Learn TDD with Codemanship

Good Code Speaks The Customer's Language

Something we devote time to on the Codemanship TDD training course is the importance of choosing good names for the stuff in our code.

Names are the best way to convey the meaning - the intent - of our code. A good method name clearly and concisely describes what that method does. A good class name clearly describes what that class represents. A good interface name clearly describes what role an object's playing when it implements that interface. And so on.

I strongly encourage developers to write code using the language of the customer. Not only should other developers be able to understand your code, your customers should be able to follow the gist of it, too.

Take this piece of mystery code:
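For illustration, here's a hypothetical stand-in for the kind of code I mean (all names invented):

    public class Place
    {
        public string AllocatedTo { get; private set; }

        public void AllocateTo(string name)
        {
            AllocatedTo = name;
        }
    }

    public interface PlaceRepository
    {
        Place Fetch(int id, char groupRef, int itemRef);
    }

    public class PlaceManager
    {
        private readonly PlaceRepository placeRepository;

        public PlaceManager(PlaceRepository placeRepository)
        {
            this.placeRepository = placeRepository;
        }

        public void Allocate(int id, char groupRef, int itemRef, string name)
        {
            placeRepository.Fetch(id, groupRef, itemRef).AllocateTo(name);
        }
    }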



What is this for? What the heck is a "Place Repository" when it's at home? For whom or for what are we "allocating" places?

Perhaps a look at the original user story will shed some light.

The passenger selects the flight they want to reserve a seat on.
They choose the seat by row and seat number (e.g., row A, seat 1) and reserve it.
We create a reservation for that passenger in that seat.


Now the mist clears. Let's refactor the code so that it speaks the customer's language.
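Applied to the hypothetical version above, that might give us:

    public class Seat
    {
        public string ReservedFor { get; private set; }

        public void ReserveFor(string passengerName)
        {
            ReservedFor = passengerName;
        }
    }

    public interface FlightRepository
    {
        Seat FindSeat(int flightNumber, char row, int seatNumber);
    }

    public class ReservationManager
    {
        private readonly FlightRepository flights;

        public ReservationManager(FlightRepository flights)
        {
            this.flights = flights;
        }

        public void Reserve(int flightNumber, char row, int seatNumber, string passengerName)
        {
            flights.FindSeat(flightNumber, row, seatNumber).ReserveFor(passengerName);
        }
    }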



This code does exactly what it did before, but makes a lot more sense now. The impact of choosing better names can be profound, in terms of making the code easier to understand and therefore easier to change. And it's something we all need to work much harder at.


January 23, 2018

Learn TDD with Codemanship

Without Improving Code Craft, Your Agile Transformation Will Fail

"You must be really busy!" is what people tend to say when I tell them what I do.

It stands to reason. If software is "eating the world", then code craft skills must be highly in demand, and therefore training and coaching for developers in those skills must be selling like hotcakes.

Well, you'd think so, wouldn't you?

The reality, though, is that code craft is critically undervalued. The skills needed to deliver reliable, maintainable software at a sustainable pace - allowing businesses to maintain the pace of innovation - are not in high demand.

We can see this both in the quality of code being produced by the majority of teams, and in where organisations focus their attentions and where they choose to invest in developing skills and capabilities.

"Agile transformations" are common. Some huge organisations are attempting them on a grand scale, sending their people on high-priced training courses and drafting in hundreds of Agile coaches - mostly Scrum-certified - to assist, at great expense.

Only a small minority invest in code craft at the same time, and typically they invest a fraction of the time, effort and money they budget for Agile training and coaching.

The end result is software that's difficult to change, and an inability to respond to new and changing requirements - and responding to change is kind of the whole point of Agile.

Let me spell it out in bold capital letters:

IF CODE CRAFT ISN'T A SIGNIFICANT PART OF YOUR AGILE TRANSFORMATION, YOU WILL NEVER ACHIEVE AGILITY.

You can't be responsive to change if your code is expensive to change. It's that simple.

While you build your capability in product management, agile planning and all that scrummy agile goodness, you also need to be addressing the factors that increase the cost of changing code. Skills like unit testing, TDD, refactoring, SOLID, CI/CD are a vital part of agility. They are hard skills to crack. A 3-day Certified Code Crafter course ain't gonna cut the mustard. Developers need ongoing learning and practice, with the guidance of experienced code crafters. I was lucky enough to get that early in my career. Many other developers are not so lucky.

That's why I built Codemanship: to help developers get to grips with the code-facing skills that few other training and coaching companies focus on.

But, I'll level with you: even though I love what I'm doing, commercially it's a struggle. The reason so few others offer this kind of training and coaching is that there's little money in it. Decision makers don't have code craft on their radars. There have been many occasions when I've thought "May as well just get Scrum-certified". I'm not going to go down without a fight, but what I really need (apart from Brexit being cancelled) is a shift in the priorities of the businesses currently investing millions in Agile transformations that all but ignore this crucial area.

Of course, those are my problems, and I made my choices. I'm very happy doing what I'm doing. But it's indicative of a wider problem that affects us all. Getting from A to B is about more than just map reading and route planning. You need a well-oiled engine to get you there, and to get you wherever you want to go next. Too many Agile transformations end up broken down by the side of the road, unable to go anywhere.


January 21, 2018

Learn TDD with Codemanship

Delegating "Junior" Development Tasks. (SPOILER ALERT: It doesn't work)

When I first took on a leadership role on a software development team 20 years ago, from the reading I did, I learned that the key to managing successfully was apparently delegation.

I would break the work down - GUI, core logic, persistence, etc - and assign it to the people I believed had the necessary skills. The hard stuff I delegated to the most experienced and knowledgeable developers. The "easy" stuff, I left to the juniors.

It only took me a few months to realise that this model of team management simply doesn't work for software development. In code, the devil is in the detail. To delegate a task, I had to explain precisely what I wanted that code to do, and how I wanted it written (in terms of coding standards, our architecture, and so on).

If the task was trivial enough to give to a "junior" dev, it was usually quicker to do it myself. I spent a lot more time cleaning up after them than I thought I was saving by delegating.

So I changed my focus. I delegated work in big enough chunks to make it worthwhile, which meant it was no longer "junior" work.

Looking back with the benefit of 20 years of hindsight, I realise now that delegating "junior" dev tasks is absurd. It's like a lead screenwriter delegating the easy words to a junior screenwriter. It would also probably be a very frustrating learning experience for them. I'm very glad I never went through a phase in my early career of doing "junior" work (although I probably wrote plenty of "junior" code!).

The value in bringing inexperienced developers into a team is to give them the opportunity to learn from more seasoned developers. I got that chance, and it was invaluable. Now, I recommend to managers that their noobs pair up with the old hands on proper actual software development, and allow for the fact that it will take them longer.

This necessitates - if you want the team to be productive as a whole - that the experienced developers outnumber the juniors. Actually, let's not call them that. The trainees.

Over time - months and years - the level of mentoring required will fall, until eventually they can be left to get on with it. And to mentor new developers coming in.

But I still see and hear from many, many people who are stuck in the hell of a Thousand Junior Programmers, where senior people - often called "architects" - are greatly outnumbered by people still wet behind the ears, to whom all the "painting by numbers" is delegated. This mindset is deeply embedded in the cultures of some major software companies. The result is invariably software that's much worse, and costs much more.

It also leads to some pretty demoralised developers. This is not the movie industry. We don't need runners to fetch our coffee.


ADDENDUM: It also just occurred to me, while I'm recalling, that whenever I examined those "junior" dev tasks more closely, their entire existence was caused by a problem in the way we were doing things (e.g., bugginess, lack of separation of presentation and logic, duplication in data access code, etc). These days, when it feels like "grunt" work - repetitive grind - I stop and ask myself why.

January 20, 2018

Learn TDD with Codemanship

10 Classic TDD Mistakes

20 years of practicing Test-Driven Development, and training and coaching a few thousand developers in it, has taught me this is not a trivial skillset to learn. There are many potential pitfalls, and I've seen many teams dashed on the rocks by some classic mistakes.

You can learn from their misfortunes, and hopefully steer a path through these treacherous waters. Here are ten classic mistakes I've seen folk make with TDD.

1. Underestimating The Learning Curve

Often, when developers try to adopt TDD, they have unrealistic expectations about the results they'll be getting in the short term. "Red-Green-Refactor" sounds simple enough, but it hides a whole world of ideas, skills and habits that need to be built to be effective at it. If I had a pound for every team that said "we tried TDD, and it didn't work"... Plan for a journey that will take months and years, not days and weeks.

2. Confusing TDD with Testing

The primary aim of TDD is to come up with a good design that will satisfy our customer's needs. It's a design discipline that just happens to use tests as specifications. A lot of people still approach TDD as a testing discipline, and focus too much on making sure everything is tested when they should be thinking about the design. If you're rigorous about applying the Golden Rule (only write solution code when a failing test requires it), your coverage will be high. But that isn't the goal. It's a side benefit.

3. Thinking TDD Is All The Testing They'll Ever Need

If you practice TDD fairly rigorously, the resulting automated tests will probably be sufficient much of the time. But not all of the time. Too many teams pay no heed to whether high risk code needs more testing. (Indeed, too many teams pay no heed to high risk code at all. Do you know where your load-bearing code is?) And what about all those scenarios you didn't think of? It's rare to see a test suite that covers every possible combination of user inputs. More work has to be done to explore the edges of what was specified.

4. Not Starting With Failing Customer Tests

In all approaches to writing software, how we collaborate with our customers is critically important. Designs should be driven directly from testable specifications that we've explicitly agreed with them. In TDD, unsurprisingly, these testable specifications come in the form of... erm... tests. The design process starts by working closely with the customer to flesh out executable acceptance tests that fail. We do not start writing code until we have those failing customer tests. We do not stop writing code until those tests are passing. But a lot of teams still set out on their journey with only the vaguest sense of the destination. Write all the unit tests you want, but without failing executable customer tests, you're just being super-precise about your own assumptions of what the customer wants.

5. Confusing Tools With Practices

Just because your tests are written using a customer test specification tool like Cucumber or FitNesse does not mean they're customer tests. They could be automated using JUnit, and be customer tests. What makes them customer tests is that you wrote them with the customer, codifying their examples of how the software will be used. Similarly, just because you used a mock objects framework, that doesn't mean that you are mocking. Mocking is a technique for discovering the design of interfaces by writing failing interaction tests. Just because you're writing JUnit tests doesn't mean you're doing TDD. Just because you use ReSharper doesn't mean you're refactoring. Just because you're running Jenkins doesn't mean you're doing Continuous Integration. Kubernetes != Continuous Delivery. And the list goes on (and on and on). Far too many developers think that using certain tools will automatically produce certain results. The tools will not do your thinking for you. As far as I'm aware, RSpec doesn't discuss the requirements with the customer and write the tests itself. You have to talk to the customer.

6. Not Actually Doing TDD. At All.

When I run the Codemanship TDD training workshop, I often start the first day by asking for a show of hands from people who think they've done TDD. At the end of the first day I ask them to raise their hands if they still think they've done TDD. The number is always considerably lower. Here's the thing: I know from experience that 9 out of 10 developers who put "TDD" on their CV really mean "unit testing". Many don't even know what TDD is. I know this sounds basic, but if you're going to try doing TDD, try doing TDD. Google it. Read an introduction. Watch a tutorial or three. Buy a book. Come on a course.

7. Skimping On Refactoring

To produce code that's as clean as I feel it needs to be, I find I tend to spend about 50% of my time refactoring. Most dev teams do a lot less. Many do none at all. Now, I know many will say "enough refactoring" is subjective, and the debate rages on social media about whether anyone is doing too much refactoring, but let's be frank: the vast majority of us are simply not doing anywhere near enough. The effects of this are felt soon enough, as the going gets harder and harder. Refactoring's a very undervalued skill; I know from my training orders. For every ten TDD courses I run, I might be asked to run one refactoring course. Too little refactoring makes TDD unsustainable. Typical outcome: "We did TDD for 6 months, but our tests got so hard to change that we threw them away."

8. Making The Tests Too Big

The granularity of tests is key to making TDD work as a design discipline, as well as determining how effective your test suites will be at pinpointing broken code. When our tests ask too many questions (e.g., "What are the first 10 Fibonacci numbers?"), we find ourselves having to make a bunch of design decisions before we get feedback. When we work in bigger batches, we make more mistakes. I like to think of it like crossing a stream using stepping stones; if the stones are too far apart, we have to make big, risky leaps, increasing the risk of falling in. Start by asking "What's the first Fibonacci number?".
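For instance (a hypothetical NUnit illustration - at this point the Fibonacci class doesn't exist yet, which is rather the point):

    using NUnit.Framework;

    public class FibonacciTests
    {
        // Too big: ten questions in one test, forcing a raft of design
        // decisions before we get any feedback.
        [Test]
        public void FirstTenFibonacciNumbers()
        {
            Assert.AreEqual(new[] { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 },
                Fibonacci.Sequence(10));
        }

        // A stepping stone: one question, fast feedback.
        [Test]
        public void FirstFibonacciNumberIsOne()
        {
            Assert.AreEqual(1, Fibonacci.Number(1));
        }
    }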

9. Making The Tests Too Small

Conversely, I also often see people writing tests that focus on minute details that would naturally fall out of passing a more interesting higher-level test. For example, I see people writing tests for getters and setters that really only need to exist because they're used in some interesting behaviour that the customer wants. I've even seen tests that create an object and then assert that it isn't null. Those kinds of tests are redundant. I can kind of see where the thinking comes from, though. "I want to declare a BankAccount class, but the Golden Rule of TDD is I can't until I have a failing test that requires it. So I'll write one." But this is coming at it from the wrong direction. In TDD, we don't write tests to force the design we want. We write tests for behaviour that the customer wants, and discover the design by passing it (and by refactoring afterwards if necessary). We'll need a BankAccount class to test crediting an account, for example. We'll need a getter for the balance to check the result. Focus on behaviour and let the details follow. There's a balance to be struck on test granularity that comes with experience.
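For example, instead of test-driving a getter into existence for its own sake, let the behaviour pull it in (a hypothetical NUnit sketch):

    using NUnit.Framework;

    public class BankAccountTests
    {
        // The BankAccount class, its Credit() method and its Balance
        // getter are all required - and so discovered - by a behaviour
        // the customer actually asked for.
        [Test]
        public void CreditingAnAccountIncreasesTheBalance()
        {
            var account = new BankAccount();
            account.Credit(100m);
            Assert.AreEqual(100m, account.Balance);
        }
    }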

10. Going Into "Design Autopilot"

Despite what you may have heard, TDD doesn't take care of the design for you. You can follow the discipline to the letter, and end up with a crappy design.

TDD helps by providing frequent "beats" in development to remind us to think about the design. We're thinking about what the code should do when we write our failing test. We're thinking about how it should do it when we're passing the test. And we're thinking about how maintainable our solution is after we've passed the test as we refactor the code. It's all design, really. But it's not magic.

YOU STILL HAVE TO THINK ABOUT THE DESIGN. A LOT.


So, there you have it: 10 classic TDD mistakes. But all completely avoidable, with some thought, some practice, and maybe a bit of help from an old hand.


January 15, 2018

Learn TDD with Codemanship

Refactoring to the xUnit Pattern

16 days left to get my spiffy on-site Unit Testing training workshop at half-price. It's jam-packed with unit testy goodness. Here's a little taste of the kind of stuff we cover.

In the introductory part of the workshop, we look at the anatomy of unit test suites and see how - from the most basic designs - we eventually arrive by refactoring at the xUnit design pattern for unit testing frameworks.

If you've been programming for a while, there's a good chance you've written test code in a Main() method, like this:
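Something like this, perhaps (a minimal sketch - the BankAccount class here is just an illustration):

    using System.Diagnostics;

    // Minimal class under test, for illustration.
    public class BankAccount
    {
        public decimal Balance { get; private set; }

        public void Credit(decimal amount) { Balance += amount; }

        public void Debit(decimal amount) { Balance -= amount; }
    }

    public class Program
    {
        public static void Main()
        {
            // Arrange: set up the object in the initial state we need
            var account = new BankAccount();

            // Act: invoke the method we want to test
            account.Credit(100m);

            // Assert: check the action had the desired effect
            Debug.Assert(account.Balance == 100m, "Crediting should increase the balance");
        }
    }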



This saves us the bother of having to run an entire application to get quick feedback while we're adding or changing code in, say, a library.

Notice that there are three components to this test:

Arrange - we set up the object(s) we're going to use to be in the initial state we need for this particular test

Act - we invoke the method we want to test

Assert - We ask questions about the final state of our test object(s) to see if the action has had the desired effect

Simples!

Of course, a real-world application might need hundreds or even thousands of such tests. Our Main() method is going to get pretty big and unwieldy if we keep adding more and more test cases.

So we can break it down into multiple test methods, one for each test case. The name of each test method can clearly describe what the test is.
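Perhaps along these lines (BankAccount as before):

    using System.Diagnostics;

    public class Program
    {
        public static void Main()
        {
            CreditingAnAccountIncreasesTheBalance();
            DebitingAnAccountDecreasesTheBalance();
        }

        private static void CreditingAnAccountIncreasesTheBalance()
        {
            var account = new BankAccount();
            account.Credit(100m);
            Debug.Assert(account.Balance == 100m);
        }

        private static void DebitingAnAccountDecreasesTheBalance()
        {
            var account = new BankAccount();
            account.Credit(100m);
            account.Debit(50m);
            Debug.Assert(account.Balance == 50m);
        }
    }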



Our original Main() method just calls all of our test methods.

But still, when there are hundreds or thousands of test methods, we can end up with one ginormous class. That too can be broken down, grouping related test methods (e.g., all the tests for a bank account) into smaller test fixtures.
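For example (the same two bank account tests, grouped into a fixture):

    public class BankAccountTests
    {
        public void RunTests()
        {
            CreditingAnAccountIncreasesTheBalance();
            DebitingAnAccountDecreasesTheBalance();
        }

        private void CreditingAnAccountIncreasesTheBalance()
        {
            var account = new BankAccount();
            account.Credit(100m);
            Debug.Assert(account.Balance == 100m);
        }

        private void DebitingAnAccountDecreasesTheBalance()
        {
            var account = new BankAccount();
            account.Credit(100m);
            account.Debit(50m);
            Debug.Assert(account.Balance == 50m);
        }
    }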



Note that each test fixture has a method that invokes all of its test methods, so our original main method doesn't need to invoke them all itself.

This is the final piece of the unit testing jigsaw: the class that tells all of our test fixtures to run their tests. We call this a test suite.
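Something like this (PaymentTests being another imagined fixture):

    public class AllTests
    {
        public static void Main()
        {
            new BankAccountTests().RunTests();
            new PaymentTests().RunTests();  // another imagined fixture
        }
    }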



At the most basic level, this simple design gives us the ability to write, organise and run large numbers of tests quickly.

As time goes on, we may add a few bells and whistles to streamline the process and make it more useful and usable.

For example, in our current design, when an assertion fails (using .NET's built-in Debug.Assert() method), it will halt execution. If the first test fails in a suite of 1,000 tests, it won't run the other 999. So we might write our own assertion methods to check and report test failures without halting execution.
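A sketch of one possible shape for that:

    using System;

    public static class Check
    {
        public static int Failures { get; private set; }

        // Unlike Debug.Assert, a failed check is reported and counted,
        // and execution carries on so the rest of the suite still runs.
        public static void That(bool condition, string description)
        {
            if (condition) return;
            Failures++;
            Console.WriteLine("FAILED: " + description);
        }
    }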

And we might want to make the output more user friendly and display more helpful results, so we may add a custom formatter/reporter to write out test results.

And - I can attest from personal experience - it can be a real pain in the you-know-what to have to remember to write code to invoke every test method on every test fixture. So we might create a custom test runner - not just a Main() method - that automates the process of test discovery and execution.

We could, for example, invert the dependencies in our test suite on individual test fixtures by extracting a common interface that all fixtures must implement for running its tests. Then we could use reflection or search through the source code for all classes that implement that interface and build the suite automatically.

Likewise, we could specify that test methods must have a specific signature (e.g., start with "Test", have a void return type, and take no parameters) and search for all methods that match.
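A rough sketch of how such discovery might work with .NET reflection (the marker interface and naming convention here are illustrative):

    using System;
    using System.Linq;
    using System.Reflection;

    // Marker interface that all test fixtures implement.
    public interface TestFixture { }

    public static class TestRunner
    {
        // Finds every concrete TestFixture in the assembly and invokes
        // every public, void, parameterless method named "Test...".
        public static void RunAll(Assembly assembly)
        {
            var fixtureTypes = assembly.GetTypes()
                .Where(t => typeof(TestFixture).IsAssignableFrom(t)
                         && t.IsClass && !t.IsAbstract);

            foreach (var fixtureType in fixtureTypes)
            {
                var fixture = Activator.CreateInstance(fixtureType);
                var testMethods = fixtureType.GetMethods()
                    .Where(m => m.Name.StartsWith("Test")
                             && m.ReturnType == typeof(void)
                             && m.GetParameters().Length == 0);

                foreach (var testMethod in testMethods)
                    testMethod.Invoke(fixture, null);
            }
        }
    }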

In my early career, I wrote several unit testing frameworks, and they tended to end up with a similar design. Thousands more had the same experience, and that commonality of experience is captured in the xUnit design pattern for unit testing frameworks.



The original implementation of this pattern was done in Smalltalk ("SUnit") by Kent Beck, and many more have followed in pretty much every programming language you can think of.

In the years since, some useful advanced features have been added, which we'll explore later in the workshop. But, under the hood, they're all pretty much along these lines.







January 9, 2018

Learn TDD with Codemanship

Test Granularity Matters. Ask Any Accountant.

It's that time of year when I have to make sure my company's accounts are all up to date and tickety-boo, and I got a useful reminder about why the granularity of our tests really matters.

In my spreadsheet for bank payments and receipts, I have a formula for calculating the closing balance at the end of the financial year. Today, I realised that calculated balance was about £1200 short. Evidently, I had entered one or more payments - or receipts - incorrectly.

I had to go back through all the bank statements for the year double-checking every line item against the spreadsheet.

Now, if I'd had a formula for the balance at the end of every line item, I could simply have checked the closing balances on each statement to see where they diverged.

I've experienced similar pain when relying on tests that check logic at too high a level (e.g., system tests or API tests). When a test fails, I have to go rummage through the call stack to figure out where it went wrong - the equivalent of reading all my bank statements looking for the line item that doesn't match. Much time is spent in the debugger: a red flag.

I strongly encourage teams to rely more on small, focused tests that - ideally - have only one reason to fail, and to write those tests as close to the module that's doing that piece of work as they can. So when a test fails, it's easy to deduce that "the problem is this, and the problem is here".


January 7, 2018

Learn TDD with Codemanship

Do Your Automated Tests Give You Confidence In Your Code?

I ran a little poll on the @codemanship Twitter account asking:




The responses suggest many developers don't put a lot of faith in their automated tests for detecting bugs. The aim of test automation is to dramatically lower the cost and execution time of regression testing our code so that we're alerted to new bugs sooner rather than later.

The ultimate goal is to have high confidence at any point in time that the software works, and is therefore fit for release. This is a foundational requirement of Continuous Delivery - software should always be shippable.


Examining many test suites, as I do every year, I think I have some insight into this problem. Firstly, most teams that have automated tests don't have particularly good test suites. Much of the code isn't reached by them. Many of the tests ask loose questions, leaving big gaps in their assertions that you could drive a bus-load of bugs through.

Teams quickly learn, after the first few releases, that just because their tests are passing, that doesn't mean the code is working. But there seems to be little appetite for beefing up their test suites to plug the leaks that bugs are pouring in through.

Very few teams test their tests to see how effective they are at catching bugs. Even fewer teams target more exhaustive testing at "load-bearing" code, or even have any awareness of which parts of the code present the highest risk.

Happy Path thinking still dominates the developer mindset. Most of us don't think like testers. We want to show that our code works, not that it doesn't in certain edge cases. So our tests tend to skip over the edge cases.

In code reviews - for those teams that do them on any regular basis - test assurance tends not to be one of the things reviewers look for. At best, line coverage is checked. If the coverage report shows the new or changed code is executed in a test, that's spiffy for most dev teams. And, to be fair, most teams don't even check for that. You'd be shocked at how many teams are genuinely surprised to learn how low their coverage is. "But we do TDD...!" Evidently not much of the time.

Teams that practice TDD fairly rigorously tend to have test suites they can put more faith in. But, even as a TDD trainer and mentor with two decades of experience doing it, I regularly feel the need to take testing further after my design is complete.

I'm a big fan of guided inspection, reading the code carefully, looking for test cases I may have missed. I'm also big on parameterised testing, because it can buy you potentially massive amounts of test coverage with surprisingly little extra test code.
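For example, in NUnit, every TestCase attribute on a parameterised test is another test case for the price of one line (Maths here is a stand-in for the class under test):

    using NUnit.Framework;

    public class SquareRootTests
    {
        [TestCase(0, 0)]
        [TestCase(1, 1)]
        [TestCase(4, 2)]
        [TestCase(6.25, 2.5)]
        [TestCase(9, 3)]
        public void CalculatesSquareRoot(double input, double expected)
        {
            Assert.AreEqual(expected, Maths.SquareRoot(input), 0.000001);
        }
    }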

And, believe it or not, to some extent you can also automate exploratory testing. One example is the simple Java prototype for generating combinations of inputs for use in JUnit tests that I threw together last year. Another example is tools that can randomly generate input data, like Haskell's QuickCheck (and its many language-specific ports, like JCheck).

I also find simple test analysis techniques like truth tables and decision tables, state transition and program flow models very useful for discovering edge cases I might have missed. Think you're thinking like a tester? Read the first few chapters of Robert Binder's Testing Object-Oriented Systems and think again.

So, if you're one of the 58% who said they don't have high confidence in their automated tests, it may be time to take your automated testing to the next level.