August 21, 2017


Codemanship Code Craft FxCop Rules

So, here they are. Hot from the oven, my FxCop code rules for the upcoming Codemanship Code Craft "Driving Test".


Some rubbish code, yesterday.

If you're signed up to be one of our valiant guinea pigs for the trial driving test on Sept 16th, I heartily recommend you download them and get a bit of practice. Try writing code that breaks each of the 11 rules, and then refactoring that code to make the nasty messages go away.

There are versions for Visual Studio 2013, 2015 and 2017, plus instructions on installing and using the rules with your own projects.

And even if you're not doing the driving test on Sept 16th, have a go anyway. Your code may not be as clean as you think ;)

Any bugs or false positives, drop me a line.



August 19, 2017


Time For Learning - An Inconvenient Truth

I've watched many tweet debates ("twebates"?) recently on the subject of finding time for learning in software development.

In the culture of the code craft movement, the consensus has been that you have to put in the hours. And by that, they tend to mean your own hours, outside of the day job. I've seen many job ads stipulating that candidates would need to show evidence of this extra-curricular commitment: blogs, speaking at conferences, OSS contributions, personal projects and all that.

The counter argument comes chiefly from people advocating greater diversity in software. Single parents, for example, have a lot on their plate that makes popping along to the Extreme Tuesday Club or speaking at a conference in, say, Norway, logistically difficult. Where's the time in their day/week/year to read all three volumes of the Art of Computer Programming?

My perspective on all this, I'm afraid, is cold and sobering. It takes a lot of reading and talking and sharing and experimentation (also known as "trying new stuff") to get good at writing software - and to stay good at it.

That's an inescapable reality. It's an inconvenient truth about software development. Everyone wants those skills, but nobody's willing to pay to develop them. Cui bono? Demonstrably, the employer benefits from more skilled developers. So they should make a contribution to building those skills. Simples.

What we're really debating is where that time comes from. Most employers aren't willing to support learning out of their own budgets. They expect developers to arrive fully formed, which means that anything beyond direct on-the-job experience is down to us to learn in our own time. And on-the-job learning alone is wholly inadequate to the task, because we can only learn things that have immediate relevance to what we're doing.

Imagine if doctors had to learn everything that way. "Well, Mr Gorman, I'm afraid you have a burst appendix. This hasn't come up before, so I'm going on a course to learn how to treat it. See you in 2 weeks."

This also excludes people whose backgrounds and situations make finding those extra hours every week very difficult. This is why I believe offering developers "10% time" or "20% time" is really very necessary if we want a more diverse profession. This is another inconvenient truth about software development. Job ads that demand large amounts of extra hours of "elective" work are effectively restricting applications to people with not a lot else going on in their lives.

In practice, the code crafter landscape is still pretty homogeneous. When I run public events, we still get about 85% men, and most of those are white. Very occasionally, someone with a disability comes along - and I always try to make sure the event's accessible, and advertise that fact.

But the fact remains that there are a lot of potentially great developers out there who, much as they'd like to, can't get along to a Saturday workshop, and whose employers won't let them take time out for learning during the working week.

Those are the people we in the dev community rarely see. But we shouldn't assume that they're not there because they don't want to learn.

If your job ads say you're committed to increasing diversity, and then demand a large portfolio of extra-curricular activities, you have a cognitive dissonance.

So, my question is this: how do we square this circle? My current belief is that we must adapt the very nature of our jobs so that time for learning and deliberate practice is built into the working week. I believe that this should become the norm, whereas today it's very much the exception.

I come from the school of "if this needs to happen, then let's just do it". We made the mistake, as professionals, of letting other people manage our time. If we're to move forward then that needs to stop. As "prima donna" as this sounds, we should take that time, and not ask for permission.

Because if we ask for permission, we know what the answer will be.




August 17, 2017


Your House, Your (Code Quality) Rules


Picking up where I left off on the custom FxCop rules for the Codemanship Code Craft "Driving Test" has reminded me of something that's vitally important.

This morning I wrote a class that enumerates a type's collaborators. The code currently looks like this:



Codified in this class is an understanding of what I mean by a "collaborator", for the purposes of the driving test.

First of all, I'm not including non-project types. This is a judgement call to keep design rules realistic. My rule will limit the number of collaborating types to 3. If that includes core library types etc, it's going to be really tough.

I'm including fields, parameters and local variables. I'm also including the declaring types of any bound members. So if I call a method that returns an object and then I call a method on that, it'll be included.

I'm also not counting base classes as collaborators. Again, it's a judgement call.
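To give a flavour of what I mean, here's a simplified, reflection-based sketch of that kind of enumeration. It's illustrative only - the real rule works on FxCop's introspection model, and plain reflection can't see local variables or bound members, so this version only looks at fields and parameters.

using System;
using System.Collections.Generic;
using System.Reflection;

public class CollaboratorEnumerator
{
    private readonly Assembly projectAssembly;

    public CollaboratorEnumerator(Assembly projectAssembly)
    {
        this.projectAssembly = projectAssembly;
    }

    public ISet<Type> CollaboratorsOf(Type type)
    {
        var collaborators = new HashSet<Type>();
        var flags = BindingFlags.Instance | BindingFlags.Static |
                    BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.DeclaredOnly;

        // Fields count as collaborators
        foreach (var field in type.GetFields(flags))
            Include(collaborators, type, field.FieldType);

        // So do the parameters of the type's own methods
        foreach (var method in type.GetMethods(flags))
            foreach (var parameter in method.GetParameters())
                Include(collaborators, type, parameter.ParameterType);

        return collaborators;
    }

    private void Include(ISet<Type> collaborators, Type owner, Type candidate)
    {
        if (candidate == owner) return;                      // a type isn't its own collaborator
        if (candidate.Assembly != projectAssembly) return;   // non-project types don't count
        if (candidate.IsAssignableFrom(owner)) return;       // neither do base classes (or other supertypes)
        collaborators.Add(candidate);
    }
}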

I'm working alone, so I get to make the rules. But in a team setting that absolutely should not happen. Don't send the "tools dev" away to work in isolation on quality gates for Continuous Inspection. Because what will happen is, when they return and unleash their rules on the rest of the team, there'll be tears before bedtime.

The whole team needs to be involved. This is a great candidate for mob programming, in my experience. While you're waiting for business requirements in the early stages of a project/product, here's what the team could be doing to get the delivery engine up and running.

It will require the team to have discussions about code quality with a level of precision they've probably never had before. I think this is a good thing.



August 11, 2017


Update: Code Craft "Driving Test" FxCop Rules



I've been continuing work on a tool to automatically analyse .NET code submitted for the Code Craft "Driving Test".

Despite tying myself in knots for the first week trying to build a whole code analysis and reporting framework - when will I ever learn?! - I'm making good progress with Plan B this week.

Plan B is to go with the Visual Studio code analysis infrastructure (basically, FxCop). It's been more than a decade since I wrote FxCop rules, and it's been an uphill battle wrapping my head around it again (along with all the changes they've made since 2006).

But I now have 8 code quality rules that are kind of sort of working. Some work well. Some need more thought.

1. Methods can't be longer than 10 LOC

2. Methods can't have > 2 branches

3. Identifiers must be <= 20 characters. (Plan is to exempt test fixture/method names. TO-DO.)

4. Classes can't have > 8 methods (so max class size is 80 LOC)

5. Methods can't use more than one feature of another class. (My very basic interpretation of "Feature Envy". Again, TO-DO to improve that. Again, test code may be exempt.)

6. Boolean parameters are not allowed

7. Methods can't have > 3 parameters

8. Methods can't instantiate project types, unless they are factory or builder methods that return an abstract type. (The beginning of my Dependency Inversion "pincer movement". 2 more rules to come preventing invocation of static project methods, and methods that aren't virtual or abstract. Again, factories and builders will be exempt, as well as test code.)
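To give a flavour of rule 8, here's a made-up example (the names are invented for illustration, not taken from the actual rule's tests). The only place a project type gets instantiated is a factory method that returns an abstraction; everything else is handed its dependencies ready-made.

public interface IVideoCatalogue
{
    string Find(string title);
}

public class SqlVideoCatalogue : IVideoCatalogue
{
    public string Find(string title)
    {
        return title; // stand-in implementation, just for the sketch
    }
}

public static class VideoCatalogueFactory
{
    // Exempt from rule 8: a factory method may instantiate a project type,
    // because what it returns is the abstract type
    public static IVideoCatalogue Create()
    {
        return new SqlVideoCatalogue();
    }
}

public class LibraryService
{
    private readonly IVideoCatalogue catalogue;

    // Ordinary methods never instantiate project types themselves -
    // the dependency arrives ready-made
    public LibraryService(IVideoCatalogue catalogue)
    {
        this.catalogue = catalogue;
    }

    public string Borrow(string title)
    {
        return catalogue.Find(title);
    }
}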

What's been really fun about the last couple of weeks has been eating my own dog food. As each new rule emerges, I've been applying it frequently to my own code. I'm a great believer in the power of Continuous Inspection, and this has been a timely reminder of just how powerful it can be.

Red-Green-INSPECT-Refactor


After passing every test, and performing every refactoring, I run a code analysis that will eventually systematically check all my code for 15 or so issues. I fix any problems it raises there and then. I don't commit or push code that fails code analysis.



In Continuous Inspection, this is the equivalent of all my tests being green. Of course, as with functional tests, the resulting code quality may only be as good as the code quality tests. And I'm refining them with more and more examples, and applying them to real code to see what designs they encourage. So far, not so shabby.

And for those inevitable occasions when blindly obeying the design rules would make our code worse, the tool will have a mechanism for opting out of a rule. (Probably a custom attribute that you can apply to classes and fields and methods etc, specifying which rule you're breaking and - most importantly - documenting why. Again, a TO-DO.) In the Driving Test, I'm thinking candidates will get 3 such "hall passes".
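As a rough sketch of what that opt-out might look like (the attribute's name and shape here are pure guesswork, since it's still a TO-DO):

using System;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Field,
                AllowMultiple = true)]
public sealed class BreaksCodeCraftRuleAttribute : Attribute
{
    public BreaksCodeCraftRuleAttribute(string rule, string justification)
    {
        Rule = rule;
        Justification = justification;
    }

    // Which rule is being broken...
    public string Rule { get; private set; }

    // ...and - most importantly - why
    public string Justification { get; private set; }
}

// Spending one of the three "hall passes" might look like:
// [BreaksCodeCraftRule("MethodLength", "Parser state machine reads better as one method")]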

If you want to see the work so far, and try it out for yourself, the source code's at https://github.com/jasongorman/CodeCraft.FxCop

And I've made a handful more tickets available for the trial Code Craft "Driving Test" for C# developers on Sept 16th. It's free this time, in exchange for your adventurous and forgiving participation in this business experiment :)







August 9, 2017


Clean Code isn't a Programming Luxury. It's a Business Necessity

I'm not going to call out the tweeter, to spare their blushes, but today I saw one of those regular tweets denigrating the business value of "clean code".

This is an all-too-common sentiment I see being expressed at all levels in our industry. Clean Code is a luxury. A nice-to-have. It's just prima donna programmers making their code pretty. Etc etc.

Nothing could be further from the truth. Just from direct personal experience, I've seen several major businesses brought to their knees by their inability to adapt and evolve their software.

There was the financial services company who struggled for nearly a year to produce a stable release of the new features their sales people desperately needed.

There was the mobile software giant who was burning $100 million a year just fixing bugs while the competition was racing ahead.

There was the online recruitment behemoth who rewrote their entire platform multiple times from scratch because the cost of changing it became too great. Every. Single. Time.

I see time and time again businesses being held back by delays to software releases. In an information economy - which is what we now find ourselves in (up to our necks) - the ability to change software and systems is a fundamental factor in a business's ability to compete.

Far from being a luxury, Clean Code - code that's easy to change - is a basic necessity. It is to the Information Age what coal was to the Industrial Age.

No FTSE500 CEO these days can hatch a plan to change their business that doesn't involve someone somewhere changing software. And the lack of responsiveness from IT is routinely cited as a major limiting factor on how their business performs.

Code craft is not about making our code pretty. It's about delivering value today without blocking more value tomorrow. It's that simple.





August 6, 2017


What *Exactly* Is "Feature Envy"?

I'm currently writing some custom FxCop rules for the trial Codemanship Code Craft "driving test" on Sept 16th. The aim is that not only will I be able to automatically check candidates' code, but they'll be able to check it themselves while they're writing it, too. The power of Continuous Inspection!

One of the rules is that methods of one class must not display Feature Envy for another class. Typically, Feature Envy's defined as:

A method accesses the features of another class more than its own.


And this might seem trivial to check for using a tool like FxCop. Look at all the member bindings inside a method. If there are more bindings to members of other types than to members of the type on which this method's declared, then we've got Feature Envy. To fix it, we can just move the method to the focus of its envy.

But I'm not sure it's quite that simple. This example might be an open-and-shut case:
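Picture something like this, say (a made-up HolidayBooking/Account example):

public class Account
{
    public void Debit(decimal amount) { }
    public void RecordTransaction(string description, decimal amount) { }
    public void NotifyOwner() { }
}

public class HolidayBooking
{
    // Every feature this method uses belongs to Account, not to HolidayBooking,
    // so the method clearly wants to live on Account
    public void Charge(Account account, decimal amount)
    {
        account.Debit(amount);
        account.RecordTransaction("Holiday booking", amount);
        account.NotifyOwner();
    }
}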



But how about this?
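Say, something along these lines, where MarkAsPaid, NotifyCustomer, UpdateAvailability, LogBooking and ScheduleConfirmationEmail are all HolidayBooking's own private methods:

public void Charge(Account account, decimal amount)
{
    account.Debit(amount);
    account.RecordTransaction("Holiday booking", amount);
    MarkAsPaid();
    NotifyCustomer();
    UpdateAvailability();
    LogBooking();
    ScheduleConfirmationEmail();
}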



The majority of feature calls in this method are to methods of the same class. But that code smell we saw in the first example is still here, on lines 3 and 4. Proof? What if we extract those 2 lines into their own method?
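Sketching that refactoring out, it might look like this:

public void Charge(Account account, decimal amount)
{
    obviousFeatureEnvy(account, amount);
    MarkAsPaid();
    NotifyCustomer();
    UpdateAvailability();
    LogBooking();
    ScheduleConfirmationEmail();
}

private void obviousFeatureEnvy(Account account, decimal amount)
{
    // Every call in here is to Account's features
    account.Debit(amount);
    account.RecordTransaction("Holiday booking", amount);
}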



The method obviousFeatureEnvy now completely satisfies our definition of Feature Envy and should be moved to the other class.

I think this leads me to a better definition of Feature Envy:

Feature Envy is when any unit of executable code - a method, a block, a statement or an expression - uses features of another class more than features of its own class


Basically, if you can extract any portion of the code into a method that displays the original, "classic" Feature Envy, then the original code has Feature Envy too.

But wait; there's more. Take a look at this example:
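Again, an invented illustration of the kind of thing I mean:

public void Charge(Account account, decimal amount)
{
    Settle(account, amount);
}

private void Settle(Account account, decimal amount)
{
    Debit(account, amount);
    NotifyCustomer();
}

private void Debit(Account account, decimal amount)
{
    // Only this leaf method technically has Feature Envy
    account.Debit(amount);
    account.RecordTransaction("Holiday booking", amount);
    account.NotifyOwner();
}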



Technically, only one of these methods satisfies our definition of Feature Envy, but if we were to inline the call stack, we'd end up with one method with very obvious Feature Envy.

It's much more complex than I thought. But, for the driving test, I'll probably keep it simple and stick with the classic - and much easier - definition of Feature Envy.

But one day, when I've got time...



August 1, 2017


Codemanship Code Craft Driving Test - Trial Run



In my last blog post, I talked about the code quality criteria for our planned "driving test" for code crafters.

If you're interested in taking the test, I'm planning a trial run on Saturday Sept 16th. Initially, it will be just for C# programmers.

Entry is free (this time). You'll need a decent Internet connection, Visual Studio (2013 or later, Community is fine), and a GitHub account. Resharper is highly recommended (you can download a trial version from https://www.jetbrains.com/resharper/download/ - but don't do it now, because it will time out before the trial!) You'll be writing tests with NUnit 2.6.x. You can use whichever .NET mocking framework you desire. No other third-party libraries should be required.

The project we'll set you should take 4-8 hours, and you'll have 24 hours to submit your solution and your screencast link after we begin.

If you've been on Codemanship training courses, the quality criteria should make sense to you, and you'll know what we're looking for in your screencast.

If you haven't, then I recommend twisting the boss's arm and sending them a link to http://www.codemanship.com :)

You can register using the form below.




July 31, 2017


Codemanship Code Craft "Driving Test" - Code Quality Criteria

Much pondering today about the code quality standards that should be applied in the Codemanship Code Craft Driving Test we'll be trialling at the end of the summer.

Since it's a test, I think it's fair to apply more rigorous and unyielding standards, provided that these are laid out unambiguously in advance.

We can divide it up into 7 key areas, with some slightly different criteria for test code to allow for more verbose method names and a bit more code duplication:

1. It Works

* Your solution passes all of the customer tests we'll give you
* Your solution also survives a more exhaustive suite of tests to hunt for any lurking bugs

2. It's Readable

* The Conceptual Correlation between your code and the requirements is > 80%
* None of your identifiers contains > 20 characters (except for test method names)
* No line of your code contains > 100 characters

3. It's Low in Duplication

* A check using Simian will reveal no more than 15% code duplication (We'll give you the precise Simian options so you can check for yourself)

4. It's Made of Simple Parts

* No method will contain > 10 LOC
* No method will have > 2 branches
* No method will have > 3 parameters
* No method will have Boolean parameters
* No class will have > 6 methods

5. It's Made of Swappable Parts

* Excluding your tests, all dependencies on other classes in your solution will be swappable by dependency injection. Use of DI frameworks will also be forbidden. (Dependency Inversion)
* No non-test class will invoke any method on an instance of another class in the solution that can't be easily extended or swapped (e.g., in C#, only methods on interfaces or virtual methods can be invoked)

6. The Parts Are Loosely Coupled

* No class will depend on > 3 other classes in your solution
* No method will exhibit Feature Envy (when a method of one class uses more than one method of another) for other classes in your solution (Tell, Don't Ask)
* No class or interface will expose features to another class that it doesn't use (Interface Segregation)
* No class will invoke methods on solution classes which are not direct collaborators (i.e., fields or parameters) (Law of Demeter)

7. Test Code Quality

* No unit test will make more than one assertion (or mock object equivalent)
* There will be exactly one unit test method per requirements rule, and the name of the test will clearly describe the rule
* All of the unit tests will pass without any external dependencies
* There will be a maximum of 10% integration test code, packaged separately
* The tests will run in < 10 seconds
* Tests will contain < 25% code duplication
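To make the test code criteria a little more concrete, here's a hedged sketch (with an invented Rental class so it stands alone) of the shape we're after - one test per requirements rule, named after the rule, with a single assertion and no external dependencies:

using NUnit.Framework;

[TestFixture]
public class RentalChargeTests
{
    // One unit test per requirements rule, named after the rule, with one assertion
    [Test]
    public void StandardRentalIsChargedTwoPoundsPerDay()
    {
        var rental = new Rental(daysRented: 3);

        Assert.AreEqual(6.0m, rental.Charge());
    }
}

// Minimal made-up production class so the sketch stands alone
public class Rental
{
    private readonly int daysRented;

    public Rental(int daysRented)
    {
        this.daysRented = daysRented;
    }

    public decimal Charge()
    {
        return 2.0m * daysRented;
    }
}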


Now, even though there's quite a lot of meat on these bones, these criteria may change, of course. But probably not much.

In the trial, I'll be verifying many of them by hand. This will give me a chance to validate them and iron out any conceptual kinks.

The long-term intention is that most - if not all - of these checks will be automated. Initially, I'm working on doing that in C# for the .NET developer community.

The code quality criteria will form half the score for the driving test. To pass it, you'll also need to demonstrate your practices and habits, and explain why you're doing them, so we can evaluate how much insight you have into code craft and the reasons for it. This will be done by recording a 30-minute screencast at some point during the test that we can assess.

More news soon.



July 30, 2017


Clean# - My Imaginary C# Compiler Extension


Almost a decade ago, I wrote an article for the now-defunct itarchitect.co.uk that predicted the rise of real-time code quality analysis as we type.

What we were waiting for was more number-crunching power, and I think perhaps that day has arrived. So, too, has the technology we'd need to achieve it. Microsoft's Roslyn compiler platform offers the ability to perform real-time ("live") code analysis and report it in much the same way editors of old reported syntax errors.

This raises the prospect of detecting common code smells as they appear, and even fixing some of them automatically.

Consider an example: Long Parameter Lists. As we type our method declaration, we type one parameter, then another parameter, then another, and then... as we type the fourth parameter, a squiggly line appears underneath it. Hovering over that shows a warning: "Too many parameters". And then potential fixes would drop down, like introducing a parameter object, or accessing the data through a private getter, if that's possible, or even splitting the method into two methods to be called in succession (assuming the client code is in the same class, otherwise we'd be introducing Feature Envy).
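As a rough sketch of how that check might be wired up on Roslyn (the rule ID, message and the threshold of 3 parameters are all my own invention here):

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class LongParameterListAnalyzer : DiagnosticAnalyzer
{
    // Hypothetical rule ID, category and message - not part of any shipping analyzer
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "CLEANSHARP001",
        title: "Long Parameter List",
        messageFormat: "Method '{0}' has {1} parameters; consider introducing a parameter object",
        category: "CleanSharp",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
    {
        get { return ImmutableArray.Create(Rule); }
    }

    public override void Initialize(AnalysisContext context)
    {
        // Runs as we type, on every method declaration that changes
        context.RegisterSyntaxNodeAction(CheckParameterCount, SyntaxKind.MethodDeclaration);
    }

    private static void CheckParameterCount(SyntaxNodeAnalysisContext context)
    {
        var method = (MethodDeclarationSyntax)context.Node;
        var parameterCount = method.ParameterList.Parameters.Count;

        if (parameterCount > 3)
        {
            context.ReportDiagnostic(Diagnostic.Create(
                Rule, method.Identifier.GetLocation(), method.Identifier.Text, parameterCount));
        }
    }
}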

Or how about this? We type a call to a method on an instance of a collaborating class. Then we type a call to a second method on the same object. At which point that whole expression, statement or code block (or entire method, if all calls are to this other object) lights up with a message warning "Feature Envy". We're then offered fixes like extracting the offending code into a separate method and moving it to the other class.

I can see a range of code smells being detected as we type: long methods, too many branches/loops, too many dependencies, large classes, and many more. I can even envision a strict mode, where we not only get warnings about certain code smells, but the code actually won't compile until they've been fixed.

Let's call this imaginary C# compiler extension Clean#, because everything has to have a cool name even when it doesn't exist yet.

And as computing power continues to increase (16-core, 32-core, etc etc), I can see the analysis getting much more sophisticated. As we type, code could be analysed to check if it's reachable from unit tests, for example.

The future is here. Now, where's my hover car?






July 28, 2017


How To Backwards Compatibility (And Why For)

One of my biggest bugbears as a software developer is how casually many developers of tools, libraries, services and frameworks make changes that aren't backwards compatible.

It breaks my first rule of software development:

Thou shalt not break shit that was working


I acknowledge there are times - rare occasions - when it's unavoidable. But we seem to do it with careless abandon, creating untold hours of needless work that adds no value for client code developers.

I know from experience that backwards compatibility is not that hard to get right. You just have to choose to do it.

Here are 5 ideas that I've had success with that you might like to incorporate into your development practices.

1. Run The Old Tests Against The New API

This is a no brainer. Some of your automated tests will work at the API level. Keep those in their own folder or project, so you can run them by themselves. In your builds, as well as checking the code passes the new API tests, check they also pass the old API tests. It can be as simple as merging the API tests from the previous release with the latest code.

If the tests won't compile (e.g., because an API method has an added parameter), you've broken syntactic compatibility. FAIL. Do not pass GO. Do not collect £200.

If the tests compile, but when you run them some of them fail, you've broken semantic compatibility. FAIL. Go directly to jail.

2. Apply The Rules of Extending Classes in Design by Contract

Bertrand Meyer's Design by Contract has strict rules about how we can extend classes so that we don't break the original contracts with client code.

Pre-conditions to calling methods can be weakened. That is, that method will work in the same or a wider set of circumstances. Otherwise, clients may find themselves invoking the method in situations that are now forbidden.

For example, in a video library service, let's say that originally any member could borrow any video. If we add ratings to the system and require that members must be old enough to borrow a video, potentially a whole bunch of client code breaks when it tries to borrow a copy of The Exorcist for a 9-year-old member.

Post-conditions of methods can be strengthened. That is, the client will get the same or more than they did before.

In our video library example, we could include a free pizza when members borrow 5 or more videos. They get the videos AND free pizza. But if we changed it so that we can substitute titles (let's say they thought they were borrowing all of the Harry Potter titles for a marathon fan weekend, and then we send them 8 Star Trek movies instead), we've broken the original contract - and ruined their Potter Party.

Whenever you change an API, ask yourself "are we breaking the original contract?"

3. Overload API Methods Instead of Changing The Originals

If we want to change the way borrowing videos works so that members can specify they want it for an extended rental (e.g., if the usual rental period is 48 hours, but they want to take it on a holiday for a week), we could just add a new parameter for that to the original API method. But that will break all the clients.

Instead, overload the method for borrowing a video so that new clients - who are aware of this new feature - can take advantage, but existing clients will still work with the default rental period.
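In code, that might look something like this (a hypothetical video library API, with stub types so the sketch stands alone):

public class Member { }
public class Video { }

public class Rental
{
    public Rental(Member member, Video video, int rentalHours) { }
}

public class VideoLibrary
{
    private const int DefaultRentalHours = 48;

    // The original method keeps its signature, so existing clients still compile
    // and still get the default rental period they always had
    public Rental Borrow(Member member, Video video)
    {
        return Borrow(member, video, DefaultRentalHours);
    }

    // New clients that know about extended rentals use the new overload
    public Rental Borrow(Member member, Video video, int rentalHours)
    {
        return new Rental(member, video, rentalHours);
    }
}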

4. Minimise The Pain

On those occasions when we really can't see another way, we can remove a lot of the pain and extra work for client code developers.

First of all, give them plenty of warning. And don't make the change in a single step. Put your replacement API method in place first, and deprecate the one you're getting rid of several versions ahead of actually removing it. Steer people writing new client code away from the old method, but keep old client code working for as long as possible.

And then, when it's time for the old API method to go, seriously consider giving client code developers an automatic way of migrating their code. 9 times out of 10, a regular expression will do the job. Match the signature of the old API method and replace it with the new one, including default values for new parameters.
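For instance, sticking with the made-up video library API, adding a mandatory rental period parameter could be patched into most client code with a crude find-and-replace along these lines:

using System.Text.RegularExpressions;

public static class BorrowCallMigrator
{
    // Rewrites old two-argument Borrow() calls so they pass the new parameter's
    // default value explicitly. Crude - it won't cope with nested brackets -
    // but good enough for the common case.
    public static string Migrate(string clientSource)
    {
        return Regex.Replace(
            clientSource,
            @"\.Borrow\(([^,()]+),\s*([^()]+)\)",
            ".Borrow($1, $2, 48)");
    }
}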

5. Offer Real Value in Migrating. Avoid Change For The Sake of Change

I could probably live with API updates that break my client code if there was a good enough pay-off for the extra work. What really annoys me is when there's no real pay-off. The API's developers, of course, will list a whole bunch of "benefits" of the new version. But they've usually confused benefits with features.

We really do have to get over this culture of "but there's gotta be a new version with new features!"

No there doesn't. Consider carefully whether the benefits of the changes you're making really will outweigh the benefits of leaving client code working.


ADDENDUM: It occurred to me this afternoon, after hearing from devs who have chosen not to upgrade their unit testing framework because of the lack of backwards compatibility (which means they'd lose the use of a range of tools relying on the previous API), that it's also a very risky product strategy to put out a non-backwards-compatible release.

If you raise the prospect of a migration path and extra work for client developers, this opens the door to competing products. Better, I think, to just develop an all-new product with a clean slate.