Monday, August 25, 2014

Unit testing – you’re measuring it wrong

I’ve been having this problem ever since I started teaching (and preaching) SCRUM, clean code, unit testing, TDD and other software development practices.
When implementing a change - how do you measure its success?
For that matter – how can you measure a developer’s productivity?

A few years ago I worked with a “brilliant” manager who tried to measure developers’ productivity by counting the number of bugs a developer fixed on a given day. Being my argumentative old self, I explained that no two bugs are created equal and that, besides, we should aspire not to write these bugs in the first place.
My comment was not well received – I was branded a subversive element. He felt that I wanted to prevent him from assessing developers’ work (which I didn’t).
Over the years several attempts have been made to calculate developers’ productivity – with similar results, such as counting lines of code:
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”
- Bill Gates
And it only gets trickier when faced with the need to measure unit testing efforts.

How to measure unit testing effectiveness

The problem is that it’s impossible to measure both how much time was saved and how many bugs were prevented. The only viable way is to have two developers write the same feature and compare the results – and I’ve yet to find a company willing to throw money down the drain.
A few years ago I found a study (which I cite all the time) that did something similar: two teams worked on the same project, one using TDD and one not; coding time was measured and bug density was checked afterwards (guess who had fewer bugs). Unfortunately I cannot reproduce that experiment at every single company I work with in order to show that unit testing/TDD improved the way they do things.
There are two “metrics” that seem to interest managers where unit testing is concerned:
  1. Test count (i.e. how many tests we have)
  2. Test coverage (i.e. what percentage of the code was tested)
Both are inherently flawed.
Test count does not tell me anything about the project’s state, and it’s impossible to set a number of tests per feature developed – how many tests are “enough”? 10, 100, 1,000 or maybe a million?
As for code coverage – it’s a tool, a very effective tool that helps me daily to find code I need to test and to understand how healthy my unit testing suite is. To put it in the words of a former boss of mine:
What do you mean 20% code coverage? We had 75% three months ago…
I know that I need an excellent reason to have less than 50% code coverage for a component, but how much coverage is enough coverage? 80%, 95% or 100%?
And just like any other tool code coverage can be gamed and easily abused to show good results for very bad code and tests.
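To make the “coverage can be gamed” point concrete, here is a minimal sketch (the PriceCalculator class and the NUnit test below are hypothetical illustrations, not from any real project): a test that merely executes code without asserting anything earns full line coverage while verifying nothing at all.

```csharp
using NUnit.Framework; // assuming NUnit; any test framework games coverage the same way

public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent) =>
        price - (price * percent / 100m);
}

[TestFixture]
public class PriceCalculatorTests
{
    // This "test" runs every line of ApplyDiscount - 100% coverage -
    // yet asserts nothing, so it passes even if the math is dead wrong.
    [Test]
    public void ApplyDiscount_FullCoverage_NoVerification()
    {
        new PriceCalculator().ApplyDiscount(100m, 10m);
    }
}
```

A coverage report would score this suite perfectly; only reading the tests (or mutation testing) reveals that nothing is actually checked.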

Both code coverage and test count are good at showing quantity, not quality. They can be used to measure progress to some extent (we had 100 tests and 50% coverage a week ago; now we have 1,000 tests and 85% coverage) – but both should be handled with care.

Why do you want to know?

There’s always a reason behind the need to measure or predict success. Usually when I’m asked about measuring unit tests it’s due to a need to estimate future effort (which we still don’t know how to do). Someone up the food chain needs to fill in reports, build Gantt charts and approve expenses, and needs to know what will change in the next X days/weeks/months. Or perhaps the person who wanted to start using unit tests needs to justify it to his superiors or investors.
The problem is that any metric given would be inaccurate – I can show you two different projects with roughly the same number of unit tests and 90% code coverage: one a success and one an utter and complete failure.
From time to time I’m forced to provide these guesstimates – since people need to know where they’re heading and how much work is planned. When doing so I do my best to explain that these are educated guesses, and that the real benefits are greater than these numbers but hard to predict and quantify – but no one ever listens…
It all comes back to the fact that I cannot show time saved nor bugs which were not created due to the change – the two factors that I find most interesting.

Success is measurable – in retrospect

I promise you one thing – after a short while the team will feel the change, for better or worse (usually for the better). Adding features becomes a breeze, fewer bugs are introduced with each new feature, shipping a working product becomes easier and faster, and developers tend to be more effective in their work (and usually happier).

I usually use NDepend to show the current code complexity and how various coding violations disappeared after a few tests were written and refactorings were made.
Looking back after a few weeks, I can show real, tangible improvement that I couldn’t possibly have predicted with an accurate number.
And so I keep collecting these numbers in the hope that someday, when asked to predict or set measurable targets for development, I’ll have a good answer…

Thursday, July 24, 2014

Designing with Tests talk at IASA

Last week I had the pleasure of presenting at the local IASA (International Association of Software Architects). I talked about how to use unit tests to design software and the role of the architect when using TDD.

It was a good talk with a lot of good questions from the audience. We discussed when to use TDD and its limits, and I got to show a full demo of developing a real application using TDD from scratch.

Afterwards I attended a panel with Gil Zilberfeld and Lior Israel, where we talked about TDD and BDD, discussed how the architect can use those methodologies as part of his architecture handout to the development team, and answered questions from the audience.

The session was recorded and published on YouTube (in Hebrew).

I’d like to thank IASA for this opportunity - I enjoyed giving this talk and I hope I get more opportunities to discuss it in the future.

Monday, June 30, 2014

TDD != Unit Tests (and vice versa)

It’s been a busy week that started somewhere three months ago, and I’ve missed most of the whole “TDD is dead” argument.

I finally had some time to sit and watch the discussions on the topic between Kent Beck, Martin Fowler and David Heinemeier Hansson.

If you’re interested in unit testing and TDD (and you should be) – this is a great opportunity to listen to great minds and learn what they think of the subject. I know I learnt a lot and plan to continue watching till the end.

Although I’ve only gotten as far as the 2nd part, I can see a pattern emerging – it seems that every argument for or against TDD is actually about unit testing.

It seems that the discussion is not about “why TDD is good/bad” but about whether or not to use unit tests (use!), the unnecessary layers of abstraction they might introduce (the dreaded design for testability), and how people completely miss the point by tying themselves to implementation details through over-mocking their application.


A quick search on the internet showed me why at least one of the participants thinks so:


DHH has a good point – obviously how you write unit tests directly affects the way you practice TDD.

But there’s more to TDD than just writing unit tests, just like there’s more to software development than writing code. I always argue that the unit tests are a byproduct of TDD and not the other way around – we use unit tests to drive our code design, and at the end of the process we’re left with pretty decent test coverage that can prevent regression. But we’re also left with a very specific design – which is the objective of the whole process.

Knowing how to write good unit tests is crucial for TDD to succeed – but you also need good coding practices, you need to know how to refactor your code, and you need to know a thing or two about software design in order to be successful.

Because TDD is not about writing unit tests – it’s about design. The tests are there to help with that and could theoretically be deleted as soon as you’re done writing your code – although I’ve yet to find someone who would give up the benefit of keeping the unit tests.

I hope that the next 3 parts of the hangout will concentrate on TDD as a design methodology and leave the unit testing discussion behind.

Thursday, June 26, 2014

My DevGeekWeek sessions

I had fun today at DevGeekWeek, where I got to talk about writing clean code and unit testing best practices.

I’d like to thank the participants of today’s seminar – you were a great audience and I enjoyed speaking with you and learning about your experience with Agile, unit testing and writing code.

Just in case you weren’t there today – here are the slides:

Monday, June 09, 2014

DevGeekWeek 2014

Those of you who know me (or read my blog) know that I’m passionate (with a capital P) about software development, clean code and, of course, unit testing.

And I’m happy to be given the opportunity to talk about these topics as part of the DevGeekWeek 2014 conference.

The DevGeekWeek is a week of all things software development. It is scheduled for June 22–26 and will be held at the Daniel Hotel in Herzliya, Israel.

CodeValue is responsible for the Extreme .NET with C# track with great speakers including yours truly.


I’ll be delivering two sessions on the last day at the Code Quality, Testing & Automation with Visual Studio & TFS seminar:

We’ll start with a session by Alon Fliess about Architecting Code For Quality, then my colleague Haim Kabesa with Building Coded UI Tests with Visual Studio and Test Manager, and after lunch I get to present two of my favorite topics:

  • Writing Clean Code in C# and .NET
  • Building Unit Tests correctly with Visual Studio 2013

I’ll talk about code and readability, avoiding stupid bugs and unit testing for the .NET developer – using Visual Studio to make it all happen.


See you there!

Monday, May 26, 2014

What’s wrong with TDD

A while ago I was asked to talk about the problems of using TDD – being me, I decided to do the exact opposite, and the session was named “What’s wrong with TDD”.
I felt that one of the major issues is that TDD looks weird – it’s counterintuitive, and convincing developers to actually try it is hard and requires a mental leap of faith.
And so I created the talk aiming to help those developers who wish to teach their team a new methodology (because TDD is a tool, not a religion) and need “ammunition” – answers for their co-workers’ questions, suspicions, rants and excuses for not using Test Driven Development.
Since then I’ve had the chance to present this talk several times, both as a 15-minute lightning talk and as a 2-hour session (and everything in between) – and so I thought I’d write a blog post summarizing these sessions: the reasons for not using TDD, and why they are wrong…

I don’t like tests/I don’t have enough time

I believe TDD and “unit tests” have been done a great injustice by not being given a cooler name – preferably one that doesn’t have the word “test” in it – because it’s a PR disaster!
A few years ago I was invited to talk about “TDD in the real world” at a local user group. Around the half-hour mark one guy who sat in the front row raised his hand and asked me – “does that mean that developers write these tests?”
It’s easy to understand why some developers think that “tests” are not their job – after all they are called “developers” and not “testers”.
Usually this lack of enthusiasm for writing these strange little tests hides behind a more “acceptable” argument: I really want to write unit tests (wink, wink), but unfortunately I do not have enough time to do so – obviously such developers are developing the only project in the world that has a deadline.
There is truth in the “I don’t have enough time” argument – writing code with unit tests takes more time than writing code without. I call this development phase the “hitting the keyboard” phase, and it could take from 15% to 35% (!) more time – even more on hard-to-test projects.
But the benefits of TDD are much greater; in fact, using TDD reduces the time it takes to get your code to the client (the one who uses your code) by having fewer bugs (around 40% to 91% fewer) and more maintainable code. I wrote about it back when I worked at Typemock – The Cost of Test Driven Development.
[Realizing quality improvement through test driven development: results and experiences of four industrial teams]

In conclusion – writing tests costs time, but overall development takes less time. It’s an investment, and like all investments you can choose not to make it – it’s up to you. Just make sure you’re aware of the cost of that decision.

Writing tests before the code is counterintuitive

Since most developers cannot predict the future – how can you know which tests to write before actually writing the code?
I think there are two real reasons not to want to write tests before the code:
It requires leaving your comfort zone
Up until this point you were taught to think of all the possible scenarios and find the optimal solution, using the power of design methodologies learnt over a long and successful career. And now you need to “not think about design” and develop one test at a time – and it’s supposed to fail!
Planning for failure
Deep down it’s hard to see something I wrote break – even if it’s just for a little moment. And so writing a failing test, a.k.a. the first step of TDD, is hard for some.
So what would a developer who is not willing to try (to exit the comfort zone) do? Write unit tests after writing the code – and try to sell this practice as TDD.
This is usually a bad idea – most experienced TDD practitioners can tell whether the unit tests were written before or after the code. And writing unit tests for existing code is harder, much harder, than writing the tests before.
A developer who writes unit tests after writing his code is missing the whole point – TDD is a design methodology; the unit tests are just a by-product of the process.
It’s about growing your code, writing only what you need, and about emergent design – not about writing unit tests.

Not everything is testable

This is actually a good reason – when this is really the case.
There are pieces of code that cannot be run as part of unit tests. I’m not talking about badly written code – there are ways to handle that. I’m talking about UI, video streaming, and some closed architectures that make unit testing almost impossible.
Needless to say if you cannot write unit tests – you cannot use TDD (duh).
But make damn sure that this is the case – because hard to test is not untestable:
  • By following MV* patterns (MVP, MVC, MVVM etc.) you can still test the UI business logic and drive its design using TDD.
  • Mocking frameworks are there to help you stub/mock external dependencies (Web, DB, 3rd party components)
  • When all else fails write integration tests (yes – as part of TDD)
In fact there are many ways to test this so-called “untestable code”, books have been written about it and there’s a plethora of tools to help with unit testing less than trivial code.
In the end you might not have 100% test coverage (whatever that means) but at least you’ll be able to easily maintain and develop your business logic.
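The MV* bullet above can be sketched in code. This is a hedged illustration under assumed names (ILoginService, LoginViewModel and the stub class are inventions for this example, not from the post): by pushing the UI’s decision logic into a view model and injecting its dependency behind an interface, the “untestable” screen logic becomes plain, unit-testable code.

```csharp
// Hypothetical names, for illustration only.
public interface ILoginService
{
    bool Authenticate(string user, string password);
}

public class LoginViewModel
{
    private readonly ILoginService _service;

    public LoginViewModel(ILoginService service) { _service = service; }

    public string Error { get; private set; }

    // All of the screen's decision logic lives here - no actual UI is
    // needed to exercise it from a unit test.
    public bool Login(string user, string password)
    {
        if (_service.Authenticate(user, password))
            return true;

        Error = "Invalid user name or password";
        return false;
    }
}

// In a test, a hand-rolled stub (or a mocking framework) stands in for the
// real service, so no network or database is touched:
public class AlwaysFailsLoginService : ILoginService
{
    public bool Authenticate(string user, string password) { return false; }
}
```

The view (WPF window, web page, etc.) is reduced to dumb data binding over this class, which is exactly the part you can afford not to unit test.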

TDD locks design

I usually get this halfway through teaching a unit testing/TDD course – what would happen if we write all these tests and then need to change the design?
It feels like a big waste of time re-writing tests over and over again just because code (being code) was changed to fix a bug or add functionality.
Once I even got a complaint from a co-worker that “writing the code took one hour but fixing the tests took half a day”.
The good news is that there are principles of good test design that prevent having to change your tests with every single code change – this is where “one assert per test” and “don’t test private methods” come from (and there are many others).
Ideally only a requirement change or a bug should cause your tests to break. This is not always the case, but after writing a few hundred tests and reading a good book (or blog) you’ll find what to avoid and how to reduce the cost of maintaining your unit tests.
Good tests are simple, robust and easy to update if the need arises – bad tests get deleted and replaced.
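As a small sketch of the “one assert per test” principle mentioned above (using NUnit and the built-in Stack&lt;T&gt; purely as an example), each test checks a single behavior, so a design change breaks only the tests that genuinely depend on it:

```csharp
using System.Collections.Generic;
using NUnit.Framework; // assuming NUnit; the idea is framework-agnostic

[TestFixture]
public class StackTests
{
    // One logical assertion per test: when a behavior breaks, exactly one
    // test fails, and its name points straight at what went wrong.
    [Test]
    public void Push_SingleItem_CountIsOne()
    {
        var stack = new Stack<int>();
        stack.Push(42);
        Assert.AreEqual(1, stack.Count);
    }

    [Test]
    public void Push_ThenPeek_ReturnsThePushedItem()
    {
        var stack = new Stack<int>();
        stack.Push(42);
        Assert.AreEqual(42, stack.Peek());
    }
}
```

Had both assertions lived in one test, a failure in Count would mask a failure in Peek, and any change to either behavior would force edits to one bloated test.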


TDD is not perfect:
  • It requires time investment
  • Seems counterintuitive at times
  • A new skill to learn
  • It might not work for all scenarios
But none of the reasons above is a good reason not to give it a try – it’s a powerful methodology that has helped me combat “analysis paralysis” and create robust, maintainable code, and there’s the added benefit of the resulting unit tests, which provide a safety net against regression bugs.
So why don’t you give it a try?

Monday, May 12, 2014

Things I learnt reading C# specifications (2)

The story so far: after reading Jon Skeet’s excellent C# in Depth (again), I decided to try and actually read the C# language specification.
You can read about it in a previous post of mine:
Things I learnt reading C# specifications (#1)
And so I continue to read the C# specification along with the excellent comments by many industry leaders (I own the annotated version), and I keep discovering cool stuff about the language I’ve been using for more than a decade.
Here is the second list of things I found out while reading the C# specification:

args can never be null

Any .NET developer who ever created a console application knows what I’m talking about – there’s always a Main method with one of the four possible signatures:
  • static void Main()
  • static void Main(string[] args)
  • static int Main()
  • static int Main(string[] args)
Two of these pass the command-line arguments (if any) that were used.
I’ve always been a bit defensive about args, maybe due to scars I have from writing C++ programs – although in C++ the arguments would never be NULL either, but that’s a different story.
In any case according to the C# specs:
The string[] argument is never null, but may have length of zero if no command-line arguments were specified
So there you have it – one less thing to check for.
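To illustrate, a minimal console program (the program and its messages are an invented example) can rely on that guarantee and check only Length, never null:

```csharp
using System;

class Program
{
    // args is guaranteed non-null by the C# spec, so a Length check is
    // all the guarding this entry point needs.
    static int Main(string[] args)
    {
        if (args.Length == 0)
        {
            Console.WriteLine("usage: mytool <input-file>");
            return 1;
        }

        Console.WriteLine("Processing " + args[0]);
        return 0;
    }
}
```

Note that this guarantee covers the args parameter of Main only; an args-style array you build or receive elsewhere can of course still be null.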

C# uses banker’s rounding

Ever needed a value rounded in .NET? It might happen when you use the decimal type or when you explicitly call Math.Round. Ever wondered what the result of such a round would be?
The fact is that usually it behaves as expected, i.e. rounds to the closest integer, which means 0.9 becomes 1 while 4.2 becomes 4. But what happens when rounding a number that ends in exactly .5 – would it round up or down?
The answer is “it depends” – in fact both 42.5 and 41.5 round to 42!
This happens partly because 42 is the answer to the ultimate question, but mostly because the rounding method used rounds to the closest even number. This is known as banker’s rounding, and it has many nice properties:
Over the course of many roundings performed, you will average out that all .5's end up rounding equally up and down. This gives better estimations of actual results if you are for instance, adding a bunch of rounded numbers. I would say that even though it isn't what some may expect, it's probably the more correct thing to do.
[From StackOverflow]
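A quick demo of the behavior described above – midpoints go to the nearest even number, and Math.Round’s MidpointRounding parameter lets you opt back into the familiar “schoolbook” rounding:

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        Console.WriteLine(Math.Round(42.5)); // 42 - midpoint goes to the nearest even number
        Console.WriteLine(Math.Round(41.5)); // 42 - same even neighbor
        Console.WriteLine(Math.Round(43.5)); // 44

        // Opting out of banker's rounding restores "round half up" behavior:
        Console.WriteLine(Math.Round(42.5, MidpointRounding.AwayFromZero)); // 43
    }
}
```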

Boxing and “is” operator

All of us .NET developers know about boxing and unboxing; those of us who used the early 1.1 version used them extensively with collections (we were young and needed the money – and besides, generics were not implemented yet).
Luckily these days I don’t use boxing/unboxing much, but from time to time a value type (e.g. int) needs to be passed as a reference type, and boxing occurs.
But is it still a value type? Of course not! It’s a reference wrapping a copy of the value we’ve been using. So how come the following code writes “True”?
int myInt = 42;
object aBox = (object) myInt;

Console.WriteLine(aBox is int);

Why’s that? Well, it’s part of the spec (§4.3), and besides, it’s the way I subconsciously expected it to work. In fact I’ve been using this behavior without noticing for years – I had an int, so please don’t confuse me with boxing; in my eyes it’s still an int.
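There is a flip side worth showing: because a box carries its exact runtime type, unboxing must name that exact type. This short sketch extends the snippet above (the widening step is my own illustration, not from the spec text quoted here):

```csharp
using System;

class BoxingDemo
{
    static void Main()
    {
        int myInt = 42;
        object aBox = myInt;            // boxing - the explicit cast is optional

        Console.WriteLine(aBox is int); // True: the box remembers its exact type

        // Unboxing must name that exact type; (long)aBox would throw
        // InvalidCastException at runtime. Unbox to int first, then widen:
        long widened = (int)aBox;
        Console.WriteLine(widened);     // 42
    }
}
```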

That’s it for now. I’ll keep reading the spec and will probably produce a few more posts along the way as I find interesting facts about the language I’ve been using for more than a decade.

Until then – happy coding…