October 19, 2018


How Not To Use An ORM

An anti-pattern I see often is applications - often referred to as "enterprise" applications - that have database transactions baked into their core logic via a "data access layer".

It typically goes something like this:

"When the order page loads, we fetch the order via an Order repository. Then we take the ID of that order and use that to fetch the list of order items via an Order Item repository. Then we load the order item product descriptions via a Product repository. We load the customer information for the order, using the customer ID field of the order, via a Customer repository. And then the customer's address via an Address repository.

"It's all nicely abstracted. We have proper separation of concerns between business logic and data access because we're using repositories, so we can stub out all the data access for testing.

"Yes, it does run a little slow, now that you ask. I wonder why that is?"

Then, behind the repositories, there's usually a query constructed using the object's key or foreign keys - to retrieve the result of what ought to be a simple object navigation: order.items is implemented as orderItemRepository.items(orderId). You may believe that you've abstracted the database because you're going through a repository interface, and probably using an object-relational mapping tool to fetch the entities, but if you're writing code that stitches object graphs together using keys and foreign keys, then you are writing the ORM tool. You're just using the off-the-shelf ORM as an xDBC substitute. It's the old "we used an X tool to build an X tool" problem. (See also "MVC frameworks built using MVC frameworks".)
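
To make that concrete, here's a minimal sketch - with hypothetical class and method names - of the kind of stitching code I mean:

    import java.util.List;

    // Hypothetical domain and repository types, pared down to illustrate the point.
    class Product {}

    class OrderItem {
        long productId;
        Product product;
    }

    class Order {
        long id;
    }

    interface OrderRepository { Order findById(long orderId); }
    interface OrderItemRepository { List<OrderItem> findByOrderId(long orderId); }
    interface ProductRepository { Product findById(long productId); }

    // The anti-pattern: "core logic" stitching the object graph together by hand,
    // using keys and foreign keys - i.e., doing the ORM's job for it.
    class OrderPageLoader {
        private final OrderRepository orders;
        private final OrderItemRepository items;
        private final ProductRepository products;

        OrderPageLoader(OrderRepository orders,
                        OrderItemRepository items,
                        ProductRepository products) {
            this.orders = orders;
            this.items = items;
            this.products = products;
        }

        List<OrderItem> loadItems(long orderId) {
            Order order = orders.findById(orderId);                     // query 1
            List<OrderItem> lineItems = items.findByOrderId(order.id);  // query 2
            for (OrderItem item : lineItems) {
                item.product = products.findById(item.productId);       // N more queries
            }
            return lineItems;
        }
    }

Every navigation becomes another repository call, and - typically - another round trip to the database. Small wonder the order page runs a little slow.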

The goal of an ORM is to make the mapping from tables and joins to object graphs Somebody Else's Problem™. That's a simpler way of defining true separation of concerns. As such, we should aim to write our core logic in the simplest object-oriented way we can, so that - ideally - the whole thing could run in memory with no database at all. Saving and fetching stored objects just happens. Not a foreign key or object repository in sight. It can vastly simplify the code (including test code).
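
For contrast, here's a sketch of a domain model where the mapping really is Somebody Else's Problem - assuming JPA-style annotations of the kind Hibernate supports, with table and column details mostly elided:

    import javax.persistence.*;
    import java.util.List;

    // The domain model is just objects and associations; the ORM does the fetching.
    @Entity
    @Table(name = "orders") // ORDER is a reserved word in SQL
    class Order {
        @Id @GeneratedValue Long id;

        @ManyToOne
        Customer customer;

        @OneToMany(mappedBy = "order")
        List<OrderItem> items;
    }

    @Entity
    class OrderItem {
        @Id @GeneratedValue Long id;
        @ManyToOne Order order;
        @ManyToOne Product product;
    }

    @Entity
    class Product {
        @Id @GeneratedValue Long id;
        String description;
    }

    @Entity
    class Customer {
        @Id @GeneratedValue Long id;
        @Embedded Address address;
    }

    @Embeddable
    class Address {
        String street;
        String city;
    }

Now order.items really is just an object navigation, and the core logic never sees a key.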

The most powerful and flexible ORMs - like Hibernate - make this possible. I've written entire "enterprise" applications that could be run in memory, with the mapping and persistence happening entirely outside the core logic. In terms of hexagonal architecture, I treat data access as an external dependency and try to minimise it as much as possible. I don't write data access "layers".

Teams that go down the "layered" route tend to end up with heaps of code that depends directly on the ORM they're using (to write an ORM). It's a similar - well, these days, identical - problem to Java teams who do dependency injection using Spring and end up massively dependent on Spring - to the extent that their code can only be run in a Spring context.

At best, they end up with thousands of tests that have to stub and mock the data access layer so they can test their core logic. At worst, they end up only being able to test their core logic with a database attached.

The ORM's magic doesn't come for free, of course. Yes, there's a heap of tweaking you need to do to make a completely separated persistence/mapping component work. Many decisions have to be made (e.g., lazy loading vs. pre-emptive fetching vs. SQL views vs. second-level caching, and so on) to make it performant - but you were making those decisions anyway. You just weren't using the ORM to handle them, because you were too busy writing your own.
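
With a full-featured ORM, those decisions end up expressed declaratively in the mapping, instead of scattered through the core logic. A sketch of the kind of choices I mean - again assuming JPA-style annotations, and with the caveat that the right answers depend entirely on your access patterns:

    import javax.persistence.*;
    import java.util.List;

    @Entity
    @Table(name = "orders")
    class Order {
        @Id @GeneratedValue Long id;

        // Load the items only when they're actually navigated to - the JPA
        // default for collections...
        @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
        List<OrderItem> items;

        // ...but pull the customer in the same round trip, if the pages that
        // load orders always need it.
        @ManyToOne(fetch = FetchType.EAGER)
        Customer customer;
    }

    @Entity
    class OrderItem {
        @Id @GeneratedValue Long id;
        @ManyToOne Order order;
    }

    @Entity
    class Customer {
        @Id @GeneratedValue Long id;
        String name;
    }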








October 12, 2018


TDD Training - Part I (Classic TDD), London, Sat Dec 1st

My flagship Codemanship TDD training course returns in a series of 3 standalone Saturday workshops aimed at self-funding learners.

It's the exact same highly popular training we've delivered to more than 2,000 developers since 2009, with 100% hands-on learning reinforced by our jam-packed 200-page TDD course book.

Part I is on Saturday Dec 1st in central London, and it's amazingly good value at just £99.

Part I goes in-depth on "classic" TDD, the super-important refactoring discipline, and the software design principles you can apply to your code as it grows and evolves - keeping it easy to change so you can maintain the pace of development.

  • Why do TDD?

  • An introduction to TDD

  • Red, Green, Refactor

  • The Golden Rule

  • Working backwards from assertions

  • Testing your tests

  • One reason to fail

  • Writing self-explanatory tests

  • Speaking the customer's language

  • Triangulating designs

  • The Refactoring discipline

  • Software Design Principles
    • Simple Design

    • Tell, Don’t Ask

    • S.O.L.I.D.




The average price of a public 1-day dev training course, per person, is around £600-800. This is fine if your company is picking up the tab.

But we've learned over the years that many devs get no training paid for by their employer, so we appreciate that many of you are self-funding your professional development. Our Saturday workshops are priced to be accessible to professional developers.

In return, developers who've attended our weekend workshops have recommended us to employers and colleagues, and most of the full-price client-site training and coaching we do comes via these referrals.

Please be advised that we do not allow corporate bookings on our workshops for self-funders. Group bookings are limited to a maximum of 4 people. If you would like TDD training for your team(s), please contact me at jason.gorman@codemanship.com to discuss on-site training.

Find out more at the Eventbrite course page


October 6, 2018


Be The Code You Want To See In The World

It's no big secret that I'm very much from the "Just Do It" school of thought on how to apply good practices to software development. I meet teams all the time who complain that they've been forbidden to do, say, TDD by their managers. My answer is always "Next time, don't ask".

After 25 years doing this for a living, much of that devoted to mentoring teams in the developer arts, I've learned two important lessons:

1. It's very difficult to change someone's mind once it's made up. I wasted a lot of time "selling" the benefits of technical practices like unit testing and refactoring to people for whom no amount of evidence or logic was ever going to make them try it. It's one of the reasons I don't do much conference speaking these days.

2. The best strategies rely on things within our control. Indeed, strategies that rely on things beyond our control aren't really strategies at all. They're just wishful thinking.

The upshot of all this is an approach to working that has two core tenets:

1. Don't seek permission

2. Do what you can do

Easy to say, right? It does imply that, as a professional, you have control over how you work.

Here's the thing: as a professional, you have control over how you work. It's not so much a matter of getting that control as recognising that - in reality - because you're the one writing the code, you already have it. Your boss is very welcome to write the code themselves if they want it done their way.

Of course, with great power comes great responsibility. You want control? Take control. But be sure to be acting in the best interests of your customer and other stakeholders, including the other developers on your team. Code is something you inflict on people. Do it with kindness.

And so there you have it. A mini philosophy. Don't rant and rave about how code should be done. Just do it. Be the code you want to see in the world.

Plenty of developers talk a good game, but their software tells a different story. It's often the case that the great and worthy and noble ideas you see presented in books and at conferences bear little resemblance to how their proponents really work. I've been learning, through Codemanship, that it's more effective to show teams what you do. Talk is cheap. That's why my flagship TDD workshop doesn't have any slides. Every idea is illustrated with real code, every practice demonstrated right in front of you.

And there isn't a single practice in any Codemanship course I haven't applied many times on real software for real businesses. It's all real, and it all really works in the real world.

What typically prevents teams from applying these practices isn't their practicality, or how difficult they are to learn. (Although don't underestimate the learning curves.) The obstacles are usually whether teams have the will to give them a proper try and - tied up in that - whether they're allowed to.

My advice is simple: learn to do it under the radar, in the background, under the bedsheets with a torch, and then the decision to apply it on real software in real teams for real customers will be entirely yours.




October 1, 2018


50% Off Codemanship Training for Start-ups and Charities

One of the most fun aspects of running a dev training company is watching start-ups I helped a few years ago go from strength to strength.

The best part is seeing how some customers are transforming their markets (I don't use the "d" word), and reaping the long-term benefits of being able to better sustain the pace of innovation through good code craft.

I want to do more to help new businesses, so I've decided that - as of today - start-ups less than 5 years old, with fewer than 50 employees, will be able to buy Codemanship code craft training half-price.

I'm also extending that offer to non-profits. Registered charities will also be able to buy Codemanship training for just 50% of the normal price.


September 28, 2018


Micro-cycles & Developing Your Inner Egg Timer

When I'm coaching developers in TDD and refactoring, I find it important to stress the benefits of keeping one foot on the path of working code at all times.

I talk about Little Red Riding Hood, and how she was warned not to stray off the path into the deep dark forest. Bad things happen in the deep dark forest. Similarly, I warn devs to stay on that path of code that works - code that's shippable - and not go wandering off into the deep dark forest of code that's broken.

Of course, in practice, we can't change code without breaking it. So the real skill is in learning how to make the changes we need to make by briefly stepping off the path and stepping straight back on again.

This requires developers to build a kind of internal egg timer that nudges them when it's been too long since they last saw their tests pass.



An exercise I've used to develop my internal egg timer uses a real egg timer (or the timer on my smartphone). When I'm mindfully practising refactoring, for example, I'll set a timer to count down from 60 seconds, and start it the moment I edit any code.

The moment a source file goes "dirty" - no longer compiles or no longer passes the tests - the countdown starts. I have to get back to passing tests before the sands run out (or the alarm goes off).

I'll do that for maybe 10-15 minutes, then I'll drop the countdown to 50 seconds and do another 10-15 minutes. Then 40 seconds. Then 30. Always trying, as best I can, to get what I need to do done and get back to passing tests before the countdown ends.

I did this every day for about 45-60 minutes for several months, and what I found at the end was that I'd grown a sort of internal countdown. Now, when I haven't seen the tests pass for a few minutes, I get a little knot in my stomach. It makes me genuinely uncomfortable.

I do a similar exercise with TDD, but the countdowns apply the moment I have a failing test. I have 60 seconds to make the test pass. Then 50. Then 40. Then 30. This encourages me to take smaller steps, in tighter micro-cycles.

If my test requires me to take too big a leap, I have to scale back or break it down to simpler steps to get me where I want to go.

The skill is in making progress with one foot firmly on the path of working code at all times. Your inner egg timer is the key.



September 25, 2018


Third-Generation Testing - Øredev 2018, Malmö, November 22nd

If you're planning on coming to Øredev in Sweden this November, I'm running a brand new training workshop on the final day about Third-Generation Software Testing.

First-generation testing was manual: running the program and clicking the buttons ourselves. We quickly learned that this was slow and often patchy, creating a severe bottleneck in development cycles.

Second-generation testing removed that bottleneck by writing code to test our code.

But what about the tests we didn't think of?

Exploratory testing brought us back to a manual process of discovering what else might be possible - what combinations of inputs, user actions and pathways we hadn't anticipated - using the software we delivered, outside of the behaviours encoded in our automated tests.

Manual exploratory testing suffers from the same setbacks as any kind of manual testing, though. It's slow, and can miss heaps of cases in complex logic.

Third-generation testing automates the generation of the test cases themselves, enabling us to explore much wider state spaces than a manual process could ever hope to achieve. With a little extra test code, and a bit of ingenuity, you can explore thousands, tens of thousands, hundreds of thousands and even millions of extra test cases - combinations, paths, random inputs and ranges - using tools you already know.
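
To give a flavour of the idea, here's a sketch using plain JUnit 5 - with a toy example of my own choosing, not one from the workshop:

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.MethodSource;

    import java.util.Random;
    import java.util.stream.Stream;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class ThirdGenerationTest {

        // Generate 10,000 random inputs instead of hand-picking a handful.
        static Stream<Double> randomInputs() {
            return new Random(42).doubles(10_000, 0, 1_000_000).boxed();
        }

        // Check a general property of the output, rather than one expected value.
        @ParameterizedTest
        @MethodSource("randomInputs")
        void squareOfSquareRootIsTheOriginalInput(double input) {
            double root = Math.sqrt(input);
            assertEquals(input, root * root, input * 1e-9);
        }
    }

Same framework, same assertion - but one test now explores ten thousand cases, and the seed, range and count are ours to crank up.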

In this workshop, we'll explore some simple techniques for adapting and reusing our existing unit tests to exhaustively test our critical code. We'll also look at techniques for identifying what code might need us to go further, and how we can use Cloud technology to execute millions of extra tests in minutes.

You can find out more and book your place at http://oredev.org/2018/sessions/third-generation-software-testing



September 24, 2018


Why I Throw Away (Most Of) My Customer Tests

There was a period about a decade ago, when BDD frameworks were all new and shiny, when some dev teams experimented with relying entirely on their customer tests. This predictably led to some very slow-running test suites, and an upside-down test pyramid.

It's very important to build a majority of fast-running automated tests to maintain the pace of development. Upside-down test pyramids become a severe bottleneck, slowing down the "metabolism" of delivery.

But it is good to work from precise, executable specifications, too. So I still recommend teams work with their customers to build a shared understanding of what's to be delivered, using tools like Cucumber and FitNesse.

What happens to these customer tests after the software's delivered, though? We've invested time and effort in agreeing them and then automating them. So we should keep them, right?

Well, not necessarily. Builders invest a lot of time and effort into erecting scaffolding, but after the house is built, the scaffolding comes down.

The process of test-driving an internal design with fast-running unit tests - by which I mean tests that ask one question and don't involve external dependencies - tends to leave us with the vast majority of our logic tested at that level. That's the base of our testing pyramid, and as it should be.
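
For illustration, this is the kind of test I mean - a hypothetical example, but note that it asks exactly one question and touches nothing outside the objects under test:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A hypothetical domain, pared down: plain objects, no database, no network.
    class OrderItem {
        private final int quantity;
        private final double unitPrice;

        OrderItem(int quantity, double unitPrice) {
            this.quantity = quantity;
            this.unitPrice = unitPrice;
        }

        double subtotal() { return quantity * unitPrice; }
    }

    class Order {
        private final OrderItem[] items;

        Order(OrderItem... items) { this.items = items; }

        double total() {
            double total = 0;
            for (OrderItem item : items) total += item.subtotal();
            return total;
        }
    }

    public class OrderTotalTest {
        @Test // one question, answered in microseconds
        void totalIsSumOfItemSubtotals() {
            Order order = new Order(new OrderItem(2, 10.00), new OrderItem(1, 5.50));
            assertEquals(25.50, order.total(), 0.001);
        }
    }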

So I now have customer tests and unit tests asking the same questions. One of them is surplus to requirements for regression testing, and it makes most sense to retain the fastest tests and discard the slowest.

I keep a cherry-picked selection of customer tests just to check that everything's wired together right in my internal design - maybe a few dozen key happy paths. The rest get archived and quite possibly never run again, or certainly not on a frequent basis. They aren't maintained, because those features or changes have been delivered. Move on.




August 26, 2018


Yes, Developers Should Learn Ethics. But That's Only Half The Picture.

Given the negative impact that some technology start-ups have had on society, and how prominent that sentiment is in the news these days, it's no surprise that more and more people are suggesting that the people who create this technology develop their sense of humanity and ethics.

I do not deny that many of us in software could use a crash course in things like ethics, philosophy, law and history. Ethics in our industry is a hot potato at the moment.

But I do not believe that it should all be on us. When I look at the people in leadership positions - in governments, in key institutions, and in the boardrooms - who are driving the decisions that are creating the wars, the environmental catastrophes, the growing inequality, and the injustice and oppression that we see daily in the media - it strikes me that the problem isn't that the world is run by scientists or engineers. Society isn't ruled by evidence and logic.

As well as STEM graduates needing a better-developed sense of ethics, I think the world would also be improved if the rest of the population had more effective bullshit detectors. Taking Brexit as a classic example, voters were bombarded with campaign messages that were demonstrably false, and promises that were provably impossible to deliver. Leave won by appealing to voters' feelings about immigration, about globalisation and about Britain's place in the EU. Had more voters checked the facts, I have no doubt the vote would have swung the other way.

Sure, this post-truth world we seem to be living in now was aided and abetted by new technology, and the people who created that technology should have said "No". But, as far as I can tell, it never even occurred to them to ask those kinds of questions.

But let's be honest, it wasn't online social media advertising that gifted a marginal victory to the British far-right and installed a demagogue in the White House, any more than WWII was the fault of the printing presses that churned out copy after copy of Mein Kampf. Somebody made a business decision to let those social media campaigns run and take the advertisers' money.

Rightly, IMHO, this has turned a spotlight on social media that was long overdue. I'm not arguing that technology shouldn't be subject to ethics. Quite the reverse.

What I'm saying, I guess, is that a better understanding of the humanities among scientists and engineers is only half the picture. If we think the world's problems will be solved because a coder said "I'm not going to track that cookie, it's unethical" to their bosses, we're going to be terribly disappointed.


August 6, 2018


Agile Baggage

In the late 1940s, a genuine mystery gripped the world as it rebuilt after WWII. Thousands of eye witnesses - including pilots, police officers, astronomers, and other credible observers - reported seeing flying objects that had performance characteristics far beyond any known natural or artificial phenomenon.

These "flying saucers" - as they became popularly known - were the subject of intense study by military agencies in the US, the UK and many other countries. Very quickly, the extraterrestrial hypothesis - that these objects were spacecraft from another world - caught the public's imagination, and "flying saucer" became synonymous with Little Green Men.

In an attempt to outrun that pop culture baggage, serious studies of these objects adopted the less sensational term "Unidentified Flying Object". But that, too, soon became shorthand for "alien spacecraft". These days, you can't be taken seriously if you study UFOs, because it lumps you in with some very fanciful notions, and some - how shall we say? - rather colourful characters. Scientists don't study UFOs any more. It's not good for the career.

These days, scientific studies of strange lights in the sky - like the Ministry of Defence's Project Condign - use the term Unidentified Aerial Phenomena (UAP) in an attempt to outrun the cultural baggage of "UFOs".

The fact remains, incontrovertibly, that every year thousands of witnesses see things in the sky that conform to no known physical phenomena, and we're no closer to understanding what it is they're seeing after 70 years of study. The most recent scientific studies, in the last 3 decades, all conclude that a portion of reported "UAPs" are genuine unknowns, that they are of real defence significance, and that they are worthy of further scientific study. But well-funded studies never seem to materialise, because of the connotation that UFOs = Little Green Men.

The well has been poisoned by people who claim to know the truth about what these objects are, and who'll happily reveal all in their latest book or DVD - just £19.95 from all good stores (buy today and get a free Alien Grey lunch box!). If these people would just 'fess up that, in reality, they don't know what they are either - or, certainly, that they can't prove their theories - the scientific community could get back to trying to find out, as it attempted to in the late 1940s and early 1950s.

Agile Software Development ("agile" for short) is also now dragging a great weight of cultural baggage behind it, much of it generated by a legion of people also out to make a fast buck by claiming to know the "truth" about what makes businesses successful with technology.

Say "agile" today, and most people think you're talking about Scrum (and its scaled variations). The landscape is very different to 2001, when the term was coined at a ski resort in Utah. Today, there are about 20,000 agile coaches in the UK alone. Two thirds of them come from non-technical backgrounds. Like the laypeople who became "UFO researchers", many agile coaches apply a veneer of pseudoscience to what is - in essence - a technical persuit.

The result is an appearance of agility that often lacks the underlying technical discipline to make it work. Things like unit tests, continuous integration, design principles, refactoring: they're every bit as important as user stories and stand-up meetings and burndown charts.

Many of us saw it coming years ago. Call it "frAgile", "Cargo Cult agile", or "WAgile" (Waterfall-Agile) - it was on the cards as soon as we realised Agile Software Development was being hijacked by management consultants.

Post-agilism was an early response: an attempt to get back to "doing what works". Software Craftsmanship was a more defined reaction, reaffirming the need for technical discipline if we're to be genuinely responsive to change. But these, too, accrued their baggage. Software craft today is more of a cult of personality, dominated by a handful of the most vocal proponents of what has become quite a narrow interpretation of the technical disciplines of writing software. Post-agilism devolved into a pseudo-philosophical talking shop, never quite getting down to the practical detail. Their wells, too, have been poisoned.

But teams are still delivering software, and some teams are more successfully delivering software than others. Just as with UFOs, beneath the hype, there's a real phenomenon to be understood. It ain't Scrum and it ain't Lean and it certainly ain't SAFe. But there's undeniably something that's worthy of further study. Agile has real underlying insights to offer - not necessarily the ones written on the Manifesto website, though.

But, to outrun the cultural baggage, what shall we call it now?




August 3, 2018


Keyhole APIs - Good for Microservices, But Not for Unit Testing

I've been thinking a lot lately about what I call keyhole APIs.

A keyhole API is the simplest possible API: one that presents the smallest "surface area" to clients for its complete use. This means there's a single function exposed, with the smallest number of primitive input parameters - ideally one - and a single, simple output.

To illustrate, I had a crack at TDD-ing a solution to the Mars Rover kata, writing tests that only called a single method on a single public class to manipulate the rover and query the results.

You can read the code on my Github account.
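
To give you the flavour without spoiling the kata, here's a sketch of the same shape - not the code from the repo:

    // One public class, one public method, one primitive input, one simple output.
    public class MarsRover {

        private static final String HEADINGS = "NESW";
        private int x = 0, y = 0;
        private int heading = 0; // index into HEADINGS

        // The entire surface area exposed to clients - and to my tests.
        public String go(String instructions) {
            for (char instruction : instructions.toCharArray()) {
                switch (instruction) {
                    case 'L': heading = (heading + 3) % 4; break;
                    case 'R': heading = (heading + 1) % 4; break;
                    case 'M': move(); break;
                }
            }
            return x + " " + y + " " + HEADINGS.charAt(heading);
        }

        private void move() {
            switch (HEADINGS.charAt(heading)) {
                case 'N': y++; break;
                case 'E': x++; break;
                case 'S': y--; break;
                case 'W': x--; break;
            }
        }
    }

Every test boils down to a call like new MarsRover().go("MMRM") and an assertion on the returned string - in this case, "1 2 E".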

This produces test code that's very loosely coupled to the rover implementation. I could have written test code that invokes multiple methods on multiple implementation classes. That would have made it easier to debug, for sure, because tests would pinpoint the source of errors more closely - but it would also have coupled my tests tightly to the implementation.

If we're writing microservices, keyhole APIs are - I believe - essential. We have to hide as much of the implementation as possible. Clients need to be as loosely coupled to the microservices they use as possible, including microservices that use other microservices.

I encourage developers to create these keyhole APIs for their components and services more and more these days. Even if they're not going to go down the microservice route, it's helpful to partition our code into components that could be turned into microservices easily, should the need arise.

Having said all that, I don't recommend unit testing entirely through such an API. I draw a distinction there: unit tests are an internal thing, a sort of grey-box testing. Especially important is the ability to isolate units under test from their external dependencies - e.g., by using mocks or stubs - and this requires the test code to know a little about those dependencies. I deliberately avoided that in my Mars Rover tests, and so ended up with a design where dependencies weren't easily swappable in this way.
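
To illustrate the distinction, here's a hypothetical grey-box design, where the test knows about a dependency precisely so it can swap in a stub:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A dependency the unit test is allowed to know about...
    interface Navigator {
        int[] nextPosition(int x, int y, char heading);
    }

    // ...so it can be replaced with a stub that gives canned answers.
    class StubNavigator implements Navigator {
        public int[] nextPosition(int x, int y, char heading) {
            return new int[] { 99, 99 };
        }
    }

    class Rover {
        private final Navigator navigator;
        private int x = 0, y = 0;
        private char heading = 'N';

        Rover(Navigator navigator) { this.navigator = navigator; }

        void move() {
            int[] next = navigator.nextPosition(x, y, heading);
            x = next[0];
            y = next[1];
        }

        String position() { return x + " " + y; }
    }

    public class RoverTest {
        @Test
        void movingAsksTheNavigatorForTheNextPosition() {
            Rover rover = new Rover(new StubNavigator());
            rover.move();
            assertEquals("99 99", rover.position());
        }
    }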

So, in summary: keyhole APIs can be a good thing for our architectures, but keyhole developer tests... not so much.