November 21, 2017

Learn TDD with Codemanship

What Can We Learn About Dev Team Performance from Distributed System Design?

There are all sorts of analogies for software development teams ("teams are like a box of chocolates" etc), and one I find very useful is to picture them as distributed information processing systems.

Each worker process (person) has a job to do. Each job has information inputs and outputs. Each job requires data (knowledge). And the biggest overhead is typically not the time or effort required to process the information in each process, but the communication overhead between processes. This is why, as we add more people (worker processes), performance starts to degrade dramatically.

But, ironically, most of the available thinking about dev team performance focuses on optimising the processes, and not the communication between the processes.

Following this analogy, if we apply performance patterns for distributed computing to dev teams, we can arrive at some basic principles. In particular, if we seek to minimise the communication overhead without harming outcomes, we can significantly improve team performance.

Processes communicate to share data. The less data they need to share, the lower the communication overhead. And this is where we make our classic mistake; the original sin of software development, if you like.

Imagine our processes can act on data at a low level, but all the conditional logic is executed by external management processes that coordinate the workflow. So every time a worker process needs a decision to be made, it must communicate with a management process and wait for a response. Yes, this would be a terrible design for a distributed system. And yet, this is exactly how most dev teams operate. Teamwork is coordinated at the task level - the level of details. A more performant design would be to give individual worker processes a goal, and then let them make any decisions required to achieve that goal. Tell them the what and then let them figure out the how for themselves.

And I can attest from personal experience that dev teams that empower their developers to make the technical decisions perform much better.

But, as any developer who's worked on a team will tell you, there still needs to be coordination between developers to reach consensus on the how. A classic example is how many teams fail to reach a consensus on how they implement model-view-controller, each one coming up with their own architecture.

Often, the amount of coordination and consensus needed can be front-loaded. Most of the key technical decisions will need to be made in the first few weeks of development. So maybe just take the hit and have a single worker process (the whole team) work together to establish baseline data: a skeleton of technical and logical architecture, and the technical standards and common protocols (e.g., check-in etiquette) on which everyone can build mostly autonomously later. I've been doing this with teams since the days when affordable portable data projectors first became available. These days they call it "mob programming".

And, of course, there's unavoidably one shared piece of mutable data all processes have no choice but to act on in parallel: the code.

Much has been said on the subject of distributed version control of source code, most of it focusing on entirely the wrong problem. Feature Branching, for example, tries to achieve more autonomy between developers by isolating their code changes from the rest of the team for longer. If every check-in is a database transaction (which it is - don't say it isn't), then this is entirely the wrong lever to be pulling on to speed things up. When many processes are committing transactions to a shared database, making the transactions bigger and longer-lived rarely speeds the system up. We're aiming not to break the data. The only way to be sure of that is to lock the data while the transaction's being written to the database. (Or to partition the data so that it's effectively no longer shared - more on that in a moment.)

To avoid blocking the rest of the worker processes, we need transactions to be over as soon as possible. So our check-ins need to be smaller and more frequent. In software development, we call this Continuous Integration.

It also helps if we split the shared data up, so each blob of data is accessed by fewer worker processes. More simply, the smaller the shared codebase, the lower the CI overhead. Partition systems into smaller work products.

But, just as partitioning software systems into - say - microservices - can increase the communication overhead (what were once method calls are now remote procedure calls), partitioning shared codebases creates a much greater overhead of communication between teams. So it's also vitally important that the various codebases are as decoupled as possible.

I rail against developers who add third-party dependencies to their software for very simple pieces of work. I call it "buying the Mercedes to use the cigarette lighter". In the world of microservices, system components need to be largely responsible for doing their own work. Only add a dependency when the development cost of writing the code to do that bit of work is significantly greater than the potential ongoing communication overhead. You have to be merciless about minimising external dependencies. Right now, developers tend to add dependencies far too lightly, giving the additional costs little or no thought. And our tools make it far too easy to add dependencies. I'm looking at you, Maven, NuGet, Docker etc.

So, to summarise, here are my tips for optimising the performance of development teams:

1. Give them clear goals, not detailed tasks

2. Make developers as autonomous as possible. They have the technical data, let them make the technical decisions.

3. Accept that, initially, parallelism of work will be very difficult and risky. Start with mob programming to establish the technical approach going forward.

4. Small and frequent merging of code speeds up team performance. Long-lived code branches tend to have the reverse effect to that intended.

5. Partition your architectures so you can partition the code.

6. Manage dependencies between codebases ruthlessly. Duplicated logic can be cheaper to live with than inter-team communication.




November 20, 2017

Learn TDD with Codemanship

10 Days Left to Book Half-Price TDD Training

A quick reminder about the special offer I'm running this month to help teams whose training budgets have been squeezed by Brexit uncertainty.



If you confirm your booking for a 1, 2 or 3-day TDD training workshop this month (for delivery before end of Feb 2018), you'll get a whopping 50% off.

This is our flagship course - refined through years delivering TDD training to thousands of developers - and is probably the most hands-on and comprehensive TDD and code craft training workshop you can get... well, pretty much anywhere. There are no PowerPoint presentations, just live demonstrations and practical exercises to get your teeth into.

As well as the basics, we cover BDD and Specification by Example, refactoring, software design principles, Continuous Integration and Continuous Delivery, end-to-end test-driven design, mocking, stubbing, data-driven and property-based unit testing, mutation testing and a heap more besides. It's so much more than a TDD course!

And every attendee gets a copy of our exclusive 200-page TDD course book, rated 5 stars on goodreads.com, which goes into even more detail, with oodles of extra practical exercises to continue your journey with.

If you want to know more about the course, visit http://www.codemanship.com/tdd.html, or drop me a line.


November 19, 2017

Learn TDD with Codemanship

Everything Else Is Details

For pretty much all my freelancing and consulting career, I've strongly advocated driving software development directly from testable end user goals. I'm not talking here about use cases, or the "so that..." art of a user story. I'm talking actual goals. Not "reasons to use the software", but "reasons to build it in the first place".

Although the Agile movement has started to catch up, with ideas like "business stories" and "impact mapping", it's still very much the exception not the rule that teams set out on their journey with a clear destination in mind.

Once goals have been established, the next step is to explore and understand the current model. How do users currently do things? And this is where I see another classic mistake being made by dev teams. They build an understanding of the existing processes, and then just reproduce those as they currently are in code. This can bake in the status quo, making it doubly hard for businesses to adapt and improve.

The rubber meets the road when we work with our customers to build a shared vision of how things will work when our software has been delivered. And, most importantly, how that will help us achieve our goals.

The trick to this - a skill that's sadly still vanishingly rare in our industry - is to paint a clear picture of how the world will look with our software in it, without describing the software itself. A true requirements specification does not commit in any way to the implementation design of a solution. It merely defines the edges of the solution-shaped hole into which anything we create will need to fit.

I think we're getting better at this. But we're still very naïve about it. Goals are still very one-dimensional - typically just focusing on financial objectives - and fail to balance multiple stakeholder perspectives. The Balanced Scorecard has yet to arrive in software development. Goals are usually woolly and vague, too, with no tests we could use to measure how we're doing. And - arguably our biggest crime as an industry - goals are often confused with strategies and solutions. 90% of the requirements specs I read are, in fact, solution designs masquerading as business requirements.

This ought to be the job of a business analyst. Not to tell us what software to build, but instead to describe what problem we need to solve. What we need from them is a clear, testable vision of how the world will be different because of our software. What needs to change? Then our job is to figure out how - if possible - software could help change it. Does your team have this vision?

I continue to strongly recommend that dev teams ditch the backlogs (and any other forms of long-term plans or blueprints), sit down with their customers and other key stakeholders, and work to define a handful of clear, testable business goals.

Everything else is details.




November 7, 2017

Learn TDD with Codemanship

Why Agile's Not For Me

There's a growing consensus among people who've been involved with Agile Software Development since the early (pre-Snowbird) days that something is rotten in the state of Agile.

Having slowly backed out of the Agile movement over the last decade or more (see my semi-jocular posts on Post-Agilism from 2007), I approach the movement as a fairly skeptical observer.

Talking with folk both inside and outside the Agile movement - and many with one foot in and one foot out - has highlighted for me where the wheels came off, so to speak. And it's a story that's by no means unique to Agile Software Development. Like all good ideas in software, it's never long before the money starts taking an interest and the pure ideas that it was founded on get corrupted.

1. Too Much Emphasis On Working Software

But, arguably, Agile Software Development was fundamentally flawed straight out of the gate (or straight out of the ski resort, more accurately). If I look for a foundation for Agile, it clearly has its roots in the concept of evolutionary software development. Evolution is a goal-seeking algorithm that searches for an optimum solution by iterating designs rapidly - the more rapidly the better - and feeding back in what we learn with each iteration to improve our solution.

There are two key words in that description: iterating and goal-seeking. There is no mention of goals in the original Agile Manifesto. The manifesto stipulates that the measure of progress is "working software". It does not address the question of why we should build that software in the first place.

And so, many Agile teams - back in the days when Extreme Programming was still a thing - focused on iterating software designs to solve poorly-defined - or not defined at all, let's face it - business problems. This is pretty much guaranteed to fail. But, bless our little cotton socks, because we set ourselves the goal of delivering "working software", we tended to walk away thinking we'd succeeded. Our customers... not so much.

This was the crack in Agile through which the project office snuck back in. (More about them later.)

2. Not Enough Emphasis On Working Software

As Agile evolved as a brand, more and more of us tried to paint ourselves in the colours of management consultants. Because, let's be frank, that's where the big bucks are. People who would once have been helping you to fix your build script were now suddenly self-professed McKinsey-style business gurus telling you how to "maximise the flow of value" in your enterprise, often to comic effect because nobody outside of the IT department took us seriously.

And then, one day - to everyone's horror - somebody outside the IT department did start taking us seriously, and suddenly it wasn't funny any more. Agile "crossed the chasm", and now people were talking about "going Agile" in the boardroom. Management and business magazines now routinely run articles about Agile, typically seeking input from people I've certainly never heard of who are now apparently world-leading experts. None of these people has heard of Kent Beck or Ward Cunningham or Brian Marick or any other signatory of the original Agile Manifesto. Agile today is very much in the hands of the McKinseys of this world. A classic "be careful what you wish for" moment for those from the IT department who aspired to be dining at the top table of consulting.

Agile's now Big Business. And the business of Agile is going BIG. Like every good and pure thing that falls into the hands of management consultants, Agile has mutated from a small, beautiful bird singing a twinkly tune to a bloated enterprise albatross with a foghorn.

3. We Didn't Nuke The Project Office From Orbit To Be Sure

I'm often found hanging around on street corners muttering to myself incoherently about the leadership class. Well, it's good to have a hobby.

Across the world - and especially in the UK - we have a class of people who have no actual practical skills or specific expertise to speak of, but a compelling sense of entitlement that they should be in charge, often of things they barely understand.

In the pre-Agile Manifesto world, IT was ruled by the leadership class. There was huge emphasis on processes, driven by the creation of documents, for the benefit of people who were neither using the software nor writing it. This was a non-programmer's idea of what programming should be. In the late 1990s, the project office was the Alpha and the Omega of software and systems development. People who'd never written a line of code in their lives telling people who do it day-in and day-out how it should be done.

Because, if they let programmers make the decisions, they'll do it wrong!!! And, to be fair, we often did do it wrong. We built the wrong thing, and we built it wrong. It was our fault. We let the project office in by frequently disappointing our customers. But their solution just meant that we still did it wrong, only now we did it wrong on a much grander scale.

And just as we developers kidded ourselves that, because we delivered working software, that meant we had succeeded, managers deluded themselves that - because the team followed the prescribed processes - the customer's needs had been met.

Well, nope. We ticked the boxes while the customer got ticked off.

It turns out that the working relationship between software developers and their customers is, and always has been, the crux of the problem. Teams that work closely and communicate effectively with customers tend to build the right thing, at least. There's no process, standard or boxes-and-arrows diagram that can fix a dysfunctional developer-customer relationship. CMMi all you like. It doesn't help in the end. And, as someone who specialised in software process engineering and wore the robes and pointy hat of a Chief Architect, I would know.

The Agile Manifesto was a reaction to the Big Process top-heavy approach that had failed us so badly in the previous decades. Self-organising teams should work directly with customers and do the simplest things to deliver value. Why write a big requirements specification when we can have a face-to-face conversation with the customer? Why create a 200-page architecture document when developers can just gather round a whiteboard when they need to talk about design?

XP in particular seemed to be a welcome death knell for value-sucking Plan-Driven, Big Architecture, Big Process roles. It was the end for those projects like the one where I was the only developer but for some reason reported to three project managers, spending a full day every week travelling the country helping them to revise their constantly out-of-date Gantt charts.

And, for a while, it was working. The early noughties was a Golden Age for me of working on small teams, communicating directly with customers, making the technical decisions that needed to be made, and doing it our way.

But the project office wasn't going to just slink away and die in a corner. People with power rarely relinquish it voluntarily. And they have the power to make sure they don't need to.

Just as before, we let them back in by disappointing our customers. A lack of focus on end business goals - real customer needs - and too much focus initially on the mechanics of delivering working software created the opportunity for people who don't write code to proclaim "Look, the people writing the code are doing Agile wrong!"

And, again, their solution is more processes, more management, more control. And, hey presto, our 6-person XP projects transformed into beautiful multi-team Enterprise Agile butterflies. Money. That's what I want.

Back To Basics

Agile today is completely dominated by management. It's no longer about software development, or about helping customers achieve real goals. It's just as top-heavy, process-oriented and box-ticky as it ever was in the 1990s. And it's therefore not for me.

Working closely with customers to solve real problems by rapidly iterating working software on small self-organising teams very much is, still. But I fear the word for that has had its meaning so deeply corrupted that I need to start calling it something else.

How about "software development"?





November 6, 2017

Learn TDD with Codemanship

Half-Price TDD Training - Special Offer



Just a quick plug for a special offer Codemanship are doing for November. To help companies whose training budgets are being squeezed by Brexit uncertainty, if you confirm your booking for a 1, 2 or 3-day TDD training workshop this month (for delivery before end of Feb 2018), you'll get a whopping 50% off.

This is our flagship course - refined through years delivering TDD training to thousands of developers - and is probably the most hands-on and comprehensive TDD and code craft training workshop you can get... well, pretty much anywhere. There are no PowerPoint presentations, just live demonstrations and practical exercises to get your teeth into.

As well as the basics, we cover BDD and Specification by Example, refactoring, software design principles, Continuous Integration and Continuous Delivery, end-to-end test-driven design, mocking, stubbing, data-driven and property-based unit testing, mutation testing and a heap more besides. It's so much more than a TDD course!

And every attendee gets a copy of our exclusive 200-page TDD course book, rated 5 stars on goodreads.com, which goes into even more detail, with oodles of extra practical exercises to continue your journey with.

If you want to know more about the course, visit http://www.codemanship.com/tdd.html, or drop me a line.


October 31, 2017

Learn TDD with Codemanship

Safer Duck Typing: Could We Have Our Cake & Eat It?

I've been thinking a lot lately about how we might reconcile the flexibility of duck typing, like in Python and Ruby, with the safety of static type checking like we have in Java and C#.

For many, the swappable-by-default flexibility you get with duck typing is a boon for an evolving domain model. And for me... well, there's a reason why people invented static type checking, and I would prefer not to go back to the bad old days of having to keep that model in my head.

Is there a "having our cake and eating it" arrangement possible here? I think there just might be.

In programming, "type safety" means there's a low risk of trying to invoke methods that the target object doesn't support. Static type systems like Java's enforce this as part of the design. All objects have a type, and that type describes what methods they support. Part of the "drag" of programming in Java is having to declare this for every single object we want to use. Want to pass an object as a parameter in a method call? You'll need to specify what type that object is. So it's not possible to invoke a method that an object doesn't support - well, not easily, anyway.

It seems to me that, in languages like Python, we have that information in our code. We just need to apply it somehow.



In this example, two methods are invoked on a parameter of the calculate_net_salary() function. Whatever is passed in as that parameter value must implement those two methods with matching signatures. But the Python compiler doesn't check that. If I passed in an instance of a class that didn't implement those methods, it would be perfectly happy.

Scala introduced a concept called structural typing, where we can specify with function parameters what methods any object passed in as that parameter value must support. e.g.,


def quacker(duck: {def quack(value: String): String}) {
println (duck.quack("Quack"))
}


I like how the required interface is defined by the client - I think this is how it ought to be. But I hate that I need to define this at all. It's duplicated information, and extra work that we have to remember to do. It, too, requires me to carry the model in my head. And when these contracts change, it's more code we have to change.

As far as I can see, the information's already in the function's implementation. We could examine the body of a function or method and build a table of the methods it calls. Then we trace back the source of the target instance. (This may require a bit of fancy footwork, but it's doable, surely?)

In my calculate_net_salary() I can trace the target of the invocations of income_tax() and national_insurance() to the parameter tax_calc. Following that back up the call stack to code that invokes calculate_net_salary(), I see that the class of the target instance is TaxCalculator.
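A hypothetical sketch of that calling code (again, the class body and rates are invented; only the names come from the text):

```python
# Invented illustration: the target of the method calls inside
# calculate_net_salary() traces back to a TaxCalculator instance.
class TaxCalculator:
    def income_tax(self, gross):
        return gross * 0.25        # illustrative flat rate, not real tax rules

    def national_insurance(self, gross):
        return gross * 0.125       # illustrative flat rate

def calculate_net_salary(gross, tax_calc):
    return gross - tax_calc.income_tax(gross) - tax_calc.national_insurance(gross)

# The call site: a checker that traced tax_calc back to here would know
# the target's class is TaxCalculator...
net = calculate_net_salary(40000.0, TaxCalculator())

# ...and could then examine that class for the required methods:
required = ["income_tax", "national_insurance"]
supported = all(callable(getattr(TaxCalculator, m, None)) for m in required)
```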



Now I can examine this class to see if it does indeed support those methods.

This is something I've been saying all along about duck typing: the type is there, it's just implied.

I think what's needed is a belt-and-braces prototype to see if this could work in practice. If it really could work, then I - personally - would find something like this very useful. Would you? Do you know of something that already does this that my Googling has overlooked?





October 24, 2017

Learn TDD with Codemanship

Don't Want to TDD? Don't TDD. Just Be Honest About Your Reasons


A growing body of evidence strongly suggests that Test-Driven Development produces code that is - on average - more reliable (fewer bugs per KLOC) and more maintainable (simpler, less duplication, more modular, and far faster and cheaper to re-test). And it can do this with little or no extra effort.

Hooray for our side!

But, of course, I'm always at pains to be clear that TDD is not compulsory. If you don't want to do it, then don't do it.

But if not TDD, then what else are you doing to ensure your code is reliable and maintainable? Perhaps you're writing automated tests after you've written the code, for example. Perhaps you're writing assertions inside the code and using a tool like QuickCheck to drive test cases. Perhaps you're doing Design by Contract. Perhaps you're using a model checker. What matters is the end result. TDD is optional.

You'll have to forgive my skepticism when people tell me they choose not to do TDD, though. Usually - it transpires upon seeing the resulting software - what they really mean is "we choose not to write reliable and maintainable code" and "we choose not to worry about the release after this one." The world is full of legacy production code that wasn't intended to last, much of it decades old.

So, by all means, do it some other way if that's what floats your boat. But be honest with yourself about your reasons. Eventually, whatever your stated justification, your code will tell the real story.






October 18, 2017

Learn TDD with Codemanship

12 Things a Professional Computer Programmer Needs to Learn

The last few years have seen an explosion of great learning resources for people interested in getting into computer programming.

But alongside that, I've noticed a growing number of people, who have ambitions to work in the industry as programmers, being bamboozled into believing all it takes is a few weeks of self-paced JavaScript tutorials to reach a professional level.

Nothing could be further from the truth, though. Programming languages are just one small aspect of writing software as a professional (albeit a crucial one).

When learners ask me "What else do I need to know how to do?", I'm typically unprepared to answer. Unhelpfully, I might just say "Loads!"

Here, I'm going to attempt to structure some thoughts on this.

1. Learn to code. Well, obviously. This is your starter for 10. You need to be able to make computers do stuff to order. There's no getting around that, I'm afraid. If you want to do it for a living, you're probably best off learning programming languages that are in demand. As unhip and uncool as they may be these days, languages like Java and C# are still very much in demand. And JavaScript is at the top of the list. To become an in-demand "full-stack" software developer, you're going to need to learn several languages, including JavaScript. Research the kinds of applications you want to work on. Find out what technologies are used to create them. Those are the languages you need to learn.

2. Learn to use Version Control. Version Control Systems (VCSs) are seatbelts for programmers. If your code has a nasty accident, you want to be able to easily go back to a version of it that worked. And most professional developers collaborate with other developers on the same source code, so to do it for a living you'll want to know how to use VCSs like Git and Mercurial to manage collaboration effectively without tripping over each other.

3. Learn to work with customers. Typically, when we're learning to code, we tackle our own projects, so - in essence - we are the customer. It gets a bit more complicated when we're creating software for someone else. We need to get a clear understanding of their requirements, and so it's important to learn some simple techniques for exploring and capturing those requirements. Look into use cases and user stories to get you started. Then learn about Specification by Example.

4. Learn to test software. There's more to making sure our code works than running the application and randomly clicking buttons. You'll need to understand how to turn requirement specifications into structured test scripts that really give the code a proper, in-depth workout. How do we make sure every requirement is satisfied? How do we make sure every line of code is put through its paces? How do we identify combinations of inputs that the code can't handle?

5. Learn to write automated tests. Automated tests are commonly used in Specification by Example to really nail down exactly what the customer wants. They are also crucial to maintaining our code as it grows. Without a decent set of fast-running automated tests, changing code becomes a very risky and expensive business. We're likely to break it and not find out for a long time. Learn how to write automated unit tests for your code, and how to automate other kinds of tests (like system tests that check the whole thing through the user interface or API, and integration tests that check system components work together).
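By way of illustration, here's what a minimal automated unit test looks like using Python's built-in unittest module - the function under test is a made-up example:

```python
import unittest

def total_price(items):
    """Sum a basket of (quantity, unit_price) pairs - the code under test."""
    return sum(qty * price for qty, price in items)

class TotalPriceTest(unittest.TestCase):
    def test_empty_basket_costs_nothing(self):
        self.assertEqual(0, total_price([]))

    def test_totals_quantity_times_unit_price(self):
        # 2 x 5 + 3 x 4 = 22
        self.assertEqual(22, total_price([(2, 5), (3, 4)]))

# Run with: python -m unittest <this_module>
```

Fast-running tests like these are the ones you can afford to run after every small change.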

6. Learn to write code that's easy to change. On average, software costs 7-10x as much to maintain over its lifetime as it did to write in the first place. And if there's one thing we've learned from 70 years of writing software, it's that it'll need to change. But, even though we call it "software" - as opposed to "hardware" - because it's easier to change than the design of, say, a printed circuit board, it can still be pretty hard to change code without breaking it. You'll need to learn which things we do in code tend to make it harder to change and easier to break, and how to avoid doing them. Learn about writing code that's easy to read. Learn about simple design. Learn how to avoid writing "spaghetti code", where the logic gets complicated and tangled. Learn how to shield modules in your code from knowing too much about each other, creating a dense web of dependencies in which even the smallest changes can have catastrophic impact. Learn how to use abstractions to make it easier to swap out different parts of the code when they need to be replaced or extended.

7. Learn to improve the code without breaking it. We call this skill "refactoring", and it's really, really important. Good programmers can write code that works. Great programmers can improve the code - to make it easier to understand and easier to change - in ways that ensure it still works. A function getting too complicated to understand? Refactor it into smaller functions. A module doing too much? Refactor it into multiple modules that do one job. This skill is very closely connected to #5 and #6. You need to know bad code when you see it, and know how to make it better. And you need to be able to re-test the code quickly to make sure you haven't broken anything. Automated Tests + Design Smarts + Refactoring form a Golden Circle for code that works today and can be easily changed tomorrow to meet new customer requirements.
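A tiny invented Python illustration of that trio in action - a function refactored into smaller functions that each do one job, with a quick check that behaviour is preserved:

```python
# Before: one function doing two jobs - parsing and totalling.
def report_total_before(csv_line):
    return sum(int(field) for field in csv_line.split(","))

# After: the same behaviour, refactored so each function does one job.
# The extracted step now has a name and can be reused and tested alone.
def parse_fields(csv_line):
    return [int(field) for field in csv_line.split(",")]

def report_total(csv_line):
    return sum(parse_fields(csv_line))

# The re-test that makes refactoring safe: both versions must agree.
assert report_total("1,2,3") == report_total_before("1,2,3") == 6
```

The assertion at the end stands in for the automated tests from #5: without them, you can't know the refactoring preserved behaviour.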

8. Learn to automate donkeywork like building and deploying the software. Good software developers don't manually copy and paste files to production servers, run database scripts, and all of that repetitive stuff, when they want to create test or production builds of their systems and deploy them to a live environment. They program computers to do it. Learn how to automate builds, to do Continuous Integration, and to automate your deployments, so that the whole delivery process becomes as easy and as frictionless as possible.

9. Learn about software architecture. Is your application a mobile app? A website? A Cloud service? Does it need huge amounts of data to be stored? Does it need to be super-secure? Will some features be used by millions of users every day? Will it have a GUI? An API? Is the data really sensitive (e.g., medical records)? We have 7 decades of knowledge - accumulated through trial and error - about how to design software and systems. We have principles for software architecture and the different qualities we might need our software to have: availability, speed, scalability, security, and many more. And there are hundreds of architectural patterns we can learn about that encapsulate much of this knowledge.

10. Learn to manage your time (and yourself). You might enjoy the occasional late night working on your own projects as a beginner, but a professional programmer's in this for the long haul. So you need to learn to work at a sustainable pace, and to prioritise effectively so that the important stuff gets done. You need to learn what kinds of environments you work best in, and how to change your working environment to maximise your productive time. For example, I tend to work best in the morning, so I like to get an early start. And I rarely spend more than 7-8 hours in a day programming. Learn to manage your time and get the best out of yourself, and to avoid burning out. Work smarter, not harder, and pace yourself. Writing software's a marathon, not a sprint.

11. Learn to collaborate effectively. Typically, writing software is a team sport. Teams that work well together get more done. I've seen teams made up of programmers who are all individually great, but who couldn't work together. They couldn't make decisions, or reach a consensus, and stuff didn't get done because they were too busy arguing and treading on each other's toes. And I've seen teams where everyone was individually technically average, but as a single unit they absolutely shone. Arguably, this is the hardest skill, and the one that takes the longest to master. You may think code's hard. But people are way harder. Much has been written about managing software teams over the decades, but one author I highly recommend is Tom DeMarco (author of "Peopleware"). In practice, this is something you can really only learn from lots and lots of experience. And increasingly important is your ability to work well with diverse teams. The days when computer programming was predominantly a pursuit for Western, white, middle class heterosexual men are thankfully changing. If you're one of those people who thinks "girls can't code", or that people from third-world countries are probably not as educated as you, or that people with disabilities probably aren't as smart, then I heartily recommend a different career.

12. Learn to learn. For computer programmers, there's 70 years of learning to catch up on, and a technology landscape that's constantly evolving. This is not a profession where you can let the grass grow under your feet. People with busy lives and limited time have to be good at making the most of their learning opportunities. If you thought you'd learned to learn at college... oh boy, are you in for a shock. So many professional programmers I know said they learned more in the first 6 months doing it for a living than they did in 3-4 years of full-time study. But this is one of the reasons I love this job. It never gets boring, and there's always something more to learn. But I've had to work hard to improve how I learn over the years. So will you. Hopefully, the more you learn, the clearer the gaps that need filling will become.

So, those are the twelve things I think you need to learn to be a professional computer programmer. Agree? What would be on your list? You can tweet your thoughts to @jasongorman.






October 17, 2017

Learn TDD with Codemanship

Manual Refactoring : Convert Static Method To Instance Method

In the previous post, I demonstrated how to introduce dependency injection to make a hard-coded dependency swappable.

This relies on the method(s) our client wants to invoke being instance methods. But what if they're not? Before we can introduce dependency injection, we may need to convert a static method (or a function) to an instance method.

Consider this Ruby example. What's stopping us from stubbing video ratings is that we're getting them via a static fetchRating() method.
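The original example appeared as an image, so here's a minimal sketch of what it might have looked like. The ImdbRatings and VideoPricer names and the fetchRating() method come from the post; the canned ratings data and the pricing rules (an assumed £3 base, a quid either way) are stand-ins for the real code, which would call the IMDB API.

```ruby
class ImdbRatings
  # A static method - note the self. prefix. The real version would call
  # the IMDB API; canned data here so the sketch runs standalone.
  def self.fetchRating(title)
    { "Jaws" => 8.1, "The Room" => 3.6 }[title]
  end
end

class VideoPricer
  def price(title)
    # A static call - nothing here that a unit test can swap for a stub.
    rating = ImdbRatings.fetchRating(title)
    price = 3                     # assumed base price in pounds
    price += 1 if rating > 7      # premium for highly-rated titles
    price -= 1 if rating < 4      # knock a quid off poorly-rated ones
    price
  end
end
```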



Converting it to an instance method - from where we can refactor to dependency inject - is straightforward, and requires two steps.

1. Find and replace ImdbRatings.fetchRating( with ImdbRatings.new().fetchRating( wherever the static method is called.

2. Change the declaration of fetchRating() to make it an instance method. (In Ruby, static method names are preceded by self. - which strikes me as rather counterintuitive, but there you go.)
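Since the post's code appeared as images, here's a hedged sketch of what the result of those two steps might look like (the method body is an assumed stand-in for the real IMDB API call):

```ruby
class ImdbRatings
  # Step 2: the self. prefix is gone - fetchRating() is now an instance method.
  def fetchRating(title)
    { "Jaws" => 8.1 }[title]  # stand-in for the real IMDB API call
  end
end

# Step 1: every call site instantiates the class first:
rating = ImdbRatings.new().fetchRating("Jaws")
```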



NOW RUN THE TESTS!

If fetchRating() was just a function (for those of us working in languages that support them), we'd have to do a little more.

1. Find and replace fetchRating( with ImdbRatings.new().fetchRating( wherever that function is called.

2. Surround the declaration of fetchRating() with a declaring class ImdbRatings, making it an instance method.
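A sketch of the function variation, with the same caveat that the body is an invented stand-in for the real lookup:

```ruby
# Before: a free-standing function (body elided in the original post):
#
#   def fetchRating(title)
#     ...
#   end

# Step 2: surround the declaration with a declaring class,
# making it an instance method:
class ImdbRatings
  def fetchRating(title)
    { "Jaws" => 8.1 }[title]  # assumed stand-in for the real lookup
  end
end

# Step 1: call sites change from fetchRating("Jaws") to:
rating = ImdbRatings.new().fetchRating("Jaws")
```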

(AND RUN THE TESTS!)

Now, for completeness, it would make sense to demonstrate how to convert an instance method back into a static method or function. But, you know what? I'm not going to.

When I think about refactoring, I'm thinking about solving code maintainability issues, and I can't think of a single maintainability issue that's solved by introducing non-swappable dependencies.

When folk protest "Oh, but if the method's stateless, shouldn't we make it static by default?" I'm afraid I disagree. That's kind of missing the whole point. Swappability is the key to managing dependencies, so I preserve that by default.

And anyway, I'm sure you can figure out how to do it, if you absolutely insist ;)



October 16, 2017

Learn TDD with Codemanship

Manual Refactoring : Dependency Injection



One of the most foundational object oriented design patterns is dependency injection. Yes, dependency injection is a design pattern. (Not a framework or an architectural philosophy.)

DI is how we can make dependencies easily swappable, so that a client doesn't know what specific type of object it's collaborating with.

When a dependency isn't swappable, we lose flexibility. Consider this Ruby example where we have some code that prices video rentals based on their IMDB rating, charging a premium for highly-rated titles and knocking a quid off for poorly-rated ones.
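The original code was shown as an image; here's a minimal sketch of what it plausibly looked like. The class names come from the post, but the canned ratings data and the exact prices (an assumed £3 base, plus or minus a quid) are my stand-ins; the real ImdbRatings would call the IMDB API.

```ruby
class ImdbRatings
  def fetchRating(title)
    # Stand-in for the real IMDB API call.
    { "Jaws" => 8.1, "The Room" => 3.6 }[title]
  end
end

class VideoPricer
  def price(title)
    imdbRatings = ImdbRatings.new()  # hard-coded - always the IMDB source
    rating = imdbRatings.fetchRating(title)
    price = 3                        # assumed base price in pounds
    price += 1 if rating > 7         # premium for highly-rated titles
    price -= 1 if rating < 4         # knock a quid off poorly-rated ones
    price
  end
end
```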



What if we wanted to write a fast-running unit test for VideoPricer? The code as it is doesn't enable this, because we can't swap the imdbRatings dependency - which always connects to the IMDB API - with a stub that pretends to.

What if we wanted to get video ratings from another source, like Rotten Tomatoes? Again, we'd have to rewrite VideoPricer every time we wanted to change the source. Allowing a choice of ratings source at runtime would be impossible.

This dependency needs to be injected so the calling code can decide what kind of ratings source to use.

This refactoring's pretty straightforward. First of all, let's introduce a field for imdbRatings and initialise it in a constructor.
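A sketch of that first step (pricing details and canned data are assumptions, as the original code was an image):

```ruby
class ImdbRatings
  def fetchRating(title)
    { "Jaws" => 8.1 }[title]  # stand-in for the real IMDB API call
  end
end

class VideoPricer
  def initialize
    # The dependency is still hard-coded, but it now lives in one place.
    @imdbRatings = ImdbRatings.new()
  end

  def price(title)
    rating = @imdbRatings.fetchRating(title)
    price = 3
    price += 1 if rating > 7
    price -= 1 if rating < 4
    price
  end
end
```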



NOW RUN THE TESTS!

Next, introduce a parameter for the expression ImdbRatings.new().
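Sketched out (again, with an invented method body standing in for the real API call), the constructor now takes the ratings source as a parameter:

```ruby
class ImdbRatings
  def fetchRating(title)
    { "Jaws" => 8.1 }[title]  # stand-in for the real IMDB API call
  end
end

class VideoPricer
  # The ImdbRatings.new() expression has moved out to the caller.
  def initialize(imdbRatings)
    @imdbRatings = imdbRatings
  end

  def price(title)
    rating = @imdbRatings.fetchRating(title)
    price = 3
    price += 1 if rating > 7
    price -= 1 if rating < 4
    price
  end
end
```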



So the calling code decides which kind of ratings source to instantiate.
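For example (a self-contained sketch - the injected VideoPricer and the stub class here are my assumed versions of the refactored code):

```ruby
class ImdbRatings
  def fetchRating(title)
    { "Jaws" => 8.1 }[title]  # stand-in for the real IMDB API call
  end
end

class VideoPricer
  def initialize(imdbRatings)
    @imdbRatings = imdbRatings
  end

  def price(title)
    rating = @imdbRatings.fetchRating(title)
    price = 3
    price += 1 if rating > 7
    price -= 1 if rating < 4
    price
  end
end

# Production code chooses the real ratings source:
pricer = VideoPricer.new(ImdbRatings.new())

# A fast-running unit test can swap in a stub that never touches the network:
class StubRatings
  def fetchRating(title)
    9.0  # pretend every title is highly rated
  end
end

stubbed_pricer = VideoPricer.new(StubRatings.new())
```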



AND RUN THE TESTS!

Now, technically, this is all we need to do in a language with duck typing like Ruby to make it swappable. In a language like, say, C# or C++ we'd have to go a bit further and introduce an abstraction for the ratings source that VideoPricer would bind to.

Some, myself included, favour introducing such abstractions even in duck-typed languages to make it absolutely clear what methods a ratings source requires, and help the readability of the code.

Let's extract a superclass from ImdbRatings and make the superclass and the fetchRating() method abstract. (Okay, so in C# or Java, this would be an interface. Same thing; an abstract class with only abstract methods.)
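In Ruby that might look something like this (VideoRatings is my assumed name for the extracted superclass; the subclass body is a stand-in for the real API call):

```ruby
# The extracted superclass makes the required interface explicit.
# Ruby has no abstract keyword, so raising NotImplementedError is the
# conventional way to mark a method as "subclass responsibility".
class VideoRatings
  def fetchRating(title)
    raise NotImplementedError, "subclass responsibility"
  end
end

class ImdbRatings < VideoRatings
  def fetchRating(title)
    { "Jaws" => 8.1 }[title]  # stand-in for the real IMDB API call
  end
end
```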



DON'T FORGET TO RUN THE TESTS!


One variation on this is when the dependency is on a method that isn't an instance method (e.g., a static method). In the next post, we'll talk about converting between instance and static methods (and functions).