Automocking fields using NUnit

Tuesday, June 30, 2015

From time to time I get to teach and mentor Java developers on the fine art of unit testing. Unit testing in Java and .NET has many similarities, but the differences between the two are more interesting.

Faking objects in Java using Mockito

One of the most widely used Java isolation (mocking) frameworks is Mockito, which is easy to use and has an API that reminds me of FakeItEasy, Isolator or Moq (as well as others). Creating a new fake object using Mockito is as simple as writing mock(SomeDependency.class).
Mockito also enables automatically creating fakes, spies (partial fakes) and classes with dependencies using annotations (Java's equivalent of .NET attributes):
public class ArticleManagerTest extends SampleBaseTestCase {

    @Mock
    private ArticleCalculator calculator;

    @Mock(name = "database")
    private ArticleDatabase dbMock; // note the mock name attribute

    @Spy
    private UserProvider userProvider = new ConsumerUserProvider();

    @InjectMocks
    private ArticleManager manager;

    @Test
    public void shouldDoSomething() {
        // calculator and dbMock are fakes; manager is a real object
        // with the fakes injected into it
    }
}

public class SampleBaseTestCase {

    @Before
    public void initMocks() {
        MockitoAnnotations.initMocks(this);
    }
}
In the example above, annotations are used to create the fake objects as well as a real object that uses them. The end result is very similar to using an auto-mocking container.
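To get a feel for what this annotation processing does under the hood, here is a minimal, hand-rolled sketch in plain Java. The Fake annotation and initFakes helper are illustrative stand-ins, not Mockito's actual implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.lang.reflect.Proxy;

public class AutoFakeDemo {

    // Hypothetical marker annotation, standing in for Mockito's @Mock
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Fake {}

    interface ArticleDatabase {
        String findTitle(int id);
    }

    static class ArticleManagerTest {
        @Fake ArticleDatabase dbFake; // auto-populated by initFakes
        ArticleDatabase notFaked;     // no annotation - stays null
    }

    // Scan the target's fields and inject a dynamic proxy into each @Fake
    // field - roughly the mechanism behind MockitoAnnotations.initMocks(this).
    static void initFakes(Object target) {
        for (Field field : target.getClass().getDeclaredFields()) {
            if (!field.isAnnotationPresent(Fake.class)) {
                continue;
            }
            Object fake = Proxy.newProxyInstance(
                    field.getType().getClassLoader(),
                    new Class<?>[] { field.getType() },
                    (proxy, method, args) -> null); // every call returns null
            field.setAccessible(true);
            try {
                field.set(target, fake);
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
```

Real frameworks do a lot more (recording calls, stubbing return values), but the field-scanning-and-injection part really is this small.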

What about NUnit?

I wanted to see whether I could create similar functionality in the .NET world. At first I tried using PostSharp, but unfortunately the test runners had problems finding the newly injected code. Instead I decided to use NUnit's ITestAction – a simple AOP mechanism for unit tests that seemed like a good fit.

NUnit has had the ability to execute code upon these events by decorating fixture classes and methods with the appropriate NUnit-provided attributes. Action Attributes allow the user to create custom attributes that encapsulate specific actions to run before or after any test.

I created a simple attribute (FakeItAttribute) to mark the fields I wanted faked, and a base attribute that enables discovery and creation of fake objects for fields carrying it:
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Interface, AllowMultiple = false)]
public abstract class AutoFakeAttributeBase : Attribute, ITestAction
{
    private readonly IFakeHelper _fakeHelper;
    private IEnumerable<FieldInfo> _testFields;

    protected AutoFakeAttributeBase(IFakeHelper fakeHelper)
    {
        _fakeHelper = fakeHelper;
    }

    public void BeforeTest(TestDetails details)
    {
        var isTestFixture = details.Method == null;

        if (isTestFixture)
        {
            DiscoverFieldsToFake(details);
        }

        foreach (var testField in _testFields)
        {
            var fakeObject = _fakeHelper.DynamicallyCreateFakeObject(testField.FieldType);

            testField.SetValue(details.Fixture, fakeObject);
        }
    }

    public void AfterTest(TestDetails details)
    {
    }

    private void DiscoverFieldsToFake(TestDetails details)
    {
        _testFields = details.Fixture.GetType()
            .GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
            .Where(testField => testField.CustomAttributes.Any(data => data.AttributeType == typeof(FakeItAttribute)));
    }

    public ActionTargets Targets
    {
        get { return ActionTargets.Test | ActionTargets.Suite; }
    }
}
I then inherited from that attribute with AutoFakeItEasyAttribute, which uses FakeItEasy and reflection to create the fake objects.
Now I can write tests that automatically create fake objects just by adding that attribute:
[TestFixture, AutoFakeItEasy]
public class UsingSimpleClassTests
{
    [FakeIt]
    private IDependency _fakeDependency;

    // Not faked
    private IDependency _uninitializedDependency;

    [Test]
    public void FakesCreatedAutomatically()
    {
        Assert.That(_fakeDependency, Is.Not.Null);
    }

    [Test]
    public void FieldsWithoutAttributesAreNotInitialized()
    {
        Assert.That(_uninitializedDependency, Is.Null);
    }
}
Quite cool, and yes it’s on GitHub.

Now what

I'm not sure I like the use of fields in unit tests – I've seen it misused to create unreadable (and unmaintainable) tests more often than not – but at least now the fake fields will be re-initialized between test runs. Now that I know it's possible to create auto-faking using attributes, I'm considering adding more features until I fully implement AutoFakes.

Until then – happy coding…
In my not-so-distant past I needed to write a component which would control an air conditioning unit.
My application needed to send commands to the external device (on, off, set temperature) and read information from it (room temperature, humidity etc.)
And so I came up with the following design:
It's a bit simplistic, but the idea was that the client would be responsible for communication with the device (protocols, timeouts) while the manager would handle the "business logic" end of things (if temp > 10°C then turn on the cooling unit).
Commands from the manager would get translated to an unholy stream of bytes and sent to the device (and re-sent – numerous times), while data arriving from the device would be converted to strongly typed C# classes and passed to the manager, where additional actions could be taken. Some of that data would end up in the application's data model according to the business rules inside the manager.
Obviously I developed the client and manager using TDD. While writing my tests (and later my classes) I kept a clear division between the manager and the client – each had its own set of tests and each had clear inputs and outputs (or so I thought).
A week later a new requirement arrived that caused a change in my client, and so I needed to rewrite both the client's tests and the manager's tests. A few days later a bug was found – one that my tests should have caught but missed, since it was caused by the collaboration between the client and the manager. After a while I noticed that I was doing something wrong – my tests would break whenever I changed one of the methods between my client and manager. And so I decided to test my manager and client as one "unit", using fake objects only to fake the device out of my tests. And it worked! I managed to reduce the number of tests while increasing my functionality – and there was much rejoicing.
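The "one unit" approach can be sketched like this (with hypothetical names, not the original code): only the physical device is faked, while the client and manager are exercised together:

```java
import java.util.ArrayList;
import java.util.List;

public class AcUnitDemo {

    // The only seam we fake: the physical air-conditioning device.
    interface Device {
        void send(byte[] command);
        double readTemperature();
    }

    // Client: owns the communication details (protocol bytes, retries, timeouts).
    static class Client {
        private final Device device;
        Client(Device device) { this.device = device; }
        void turnOnCooling() { device.send(new byte[] { 0x01 }); }
        double roomTemperature() { return device.readTemperature(); }
    }

    // Manager: the business logic ("if temp > 10C then turn on the cooling unit").
    static class Manager {
        private final Client client;
        Manager(Client client) { this.client = client; }
        void regulate() {
            if (client.roomTemperature() > 10.0) {
                client.turnOnCooling();
            }
        }
    }

    // Hand-rolled fake device - tests drive client + manager as one unit through it.
    static class FakeDevice implements Device {
        double temperature;
        final List<byte[]> sentCommands = new ArrayList<>();
        public void send(byte[] command) { sentCommands.add(command); }
        public double readTemperature() { return temperature; }
    }
}
```

A test sets the fake's temperature, calls regulate() and asserts on the commands the fake recorded – the manager/client boundary can now change freely without breaking any test.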
That is, until one day a co-worker of mine saw my tests – and told me that "those are not unit tests, since you're testing more than a single class". It made me think about the actual definition of unit tests.

Maybe we should not call them unit tests anymore?

It all came back to me yesterday when I read a post by Simon Brown: "Unit and integration are ambiguous names for tests". In his post Simon explains that there is no clear definition of unit/integration tests (which is dead on the money) and comes up with his own definitions, which are clearer and also help in understanding which automated tests a developer should write for each scope of the project – nicely aligned with the architecture.
Simon's definitions are great. They help align the developer's tests with the project architecture – something most of the projects I've witnessed lack.
But there's a small problem with splitting the tests into class-, component- and system-level tests. While it sets clear rules about scope, what to fake, test size and speed – it does not help avoid testing the wrong thing…
I think that "class tests" falls short of explaining what we want to test – namely the behavior of the code, not the actual implementation. I fear that calling a test a "class test" will result in developers writing unit tests for every single class (and every single method). Those over-specified tests tend to test trivial functionality while being extremely fragile.

Is there a definition for unit tests?

Since I needed an actual definition I went to the definitive unit testing encyclopedia – xUnit Test Patterns – and got the following definition for my troubles:
A test that verifies the behavior of some small part of the overall system. What makes a test a unit test is that the system under test (SUT) is a very small subset of the overall system and may be unrecognizable to someone who is not involved in building the software. The actual SUT may be as small as a single object or method that is a consequence of one or more design decisions although its behavior may also be traced back to some aspect of the functional requirements. There is no need for unit tests to be readable, recognizable or verifiable by the customer or business domain expert. Contrast this with a customer test which is derived almost entirely from the requirements and which should be verifiable by the customer. In eXtreme Programming, unit tests are also called developer tests or programmer tests.
[From: XUnit test patterns – Unit test]
So… only special people (those involved in building the software) can know what the subject under test is?
It does state that a unit test can test a single class – or more…
There's an excellent blog post by Martin Fowler – titled simply UnitTest – in which he tries to define unit tests, but it seems that the only definite rule there is that unit tests should be fast. How fast? It depends!

What I use

I don’t have a clear answer here but I seem to go back to three different types of developer tests:
Unit Tests
Test a single unit of work (one class or more), and employ fake objects to avoid running external dependencies (DB, server, 3rd party and other classes). They are fast – usually running in a fraction of a second – and they are relatively short (tens of lines of code) and simple.
Unit tests are independent from one another and each run on any machine should provide the exact same result.
Integration tests
They look similar to my unit tests and are used to test interaction with external dependencies. I usually use integration tests when I need to test logic inside an external dependency (think stored procedures) or the interaction between my code and that dependency.
They usually require setup (and cleanup) between test runs and a specific environment in place to pass. Fake objects can be used to avoid running other dependencies. They will be slower than unit tests (seconds to minutes) and might fail for reasons outside the scope of the test (permissions, server down, etc.)
Acceptance tests
Those are application-, system- or scenario-level tests that make sure business requirements are implemented according to spec.
They are slow and cross component boundaries and usually do not employ fake objects of any kind.


What I like about Simon's methodology is that it's easy to explain and sets very good ground rules for the kinds of tests a good project should have. I still think that the definition of class tests does not change the original definition (or non-definition) of unit tests – while it's about focusing on the operation of a single class, it does not mean that other classes are not exercised as well.
It's hard to define what a "good" or "correct" unit test is. I've noticed that there is no one rule for every project out there. And so I'll keep on using my flawed yet practical definition of "unit tests" – until I find a better one.

And until then – Happy coding…

Broken windows and software development

Sunday, June 14, 2015

In 1969 a car with no license plates was parked with its hood up in the Bronx. Within minutes it was vandalized and stripped. That car was part of an experiment testing a theory called "broken windows".

The Broken windows theory

The original broken windows theory was introduced in 1982:
Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it's unoccupied, perhaps become squatters or light fires inside.
Or consider a pavement. Some litter accumulates. Soon, more litter accumulates. Eventually, people even start leaving bags of refuse from take-out restaurants there or even break into cars.
[Wilson, James Q; Kelling, George L (Mar 1982), "Broken Windows: The police and neighborhood safety", The Atlantic]
The bottom line is that we need to fix the small issues quickly in order to prevent bigger issues down the road. By fixing these problems we avoid escalation and make sure that the neighborhood does not become a crime-ridden cesspool, or the inspiration for the next Fallout game.

What does it have to do with my code?

Code can also have broken windows – bad design, leftover comments that don't mean anything anymore, and dead code. I've seen it happen countless times – long methods become longer (instead of being refactored into smaller methods) and messy code becomes harder to read and maintain as time goes by.
I remember one code review in which the developer didn’t bother with writing unit tests for a new feature since “the existing code did not have any unit tests in place”…
When code issues are left unattended, development of new features becomes harder (or even impossible) and every change introduces new and exotic bugs. Weird governing processes are created – I've seen one place where it was common practice to have the equivalent of two weeks of meetings before a single line of code was written.
Broken windows in code lead to fear – fear of changing code, fear of writing new code and, overall, fear of the codebase itself.
The funny thing is that fixing broken windows in code is usually simple – refactoring, when applied correctly, can solve most issues before they become an impossible mess. Since most modern IDEs support at least a subset of the common refactorings (such as rename and extract method), these refactorings can be done safely and without breaking existing code.

Other broken windows in software development

But there are other ways in which we can unintentionally create a broken window that has nothing to do with our code. At one job we had to use a tool that made building a new version impossible to automate. No one knew why we were using it and no one could explain the merits of using it, but every single time I tried to replace it I was met with fierce resistance – since it was the tool that all the big players were using.
If releasing a new version takes three days and a hundred steps of manual work – this is a problem that can escalate quickly, and before you know it a wrong version is shipped to your customer.
And it could be the small things, such as the version number of a dependency that seems to change without any logic behind it – this pseudo-randomness can cause big problems once you need to revisit a change (hotfix, anyone?) or distribute a new version using an automatic update tool.
The solution for most of these ALM-related issues is education and automation, which seems like a lot of work at the beginning but once in place saves quite a lot of time every single day.


Broken windows are everywhere – it's the small things that can become huge problems if left unchecked. They become a real problem once the development team (or the whole organization) accepts them as a fact of life (a.k.a. "it's the way it was always done here").
In the book "Clean Code", Bob Martin explains the "boy scout rule":
If we all checked-in our code a little cleaner than when we checked it out, the code simply could not rot. The cleanup doesn't have to be something big. Change one variable name for the better, break up one function that's a little too large, eliminate one small bit of duplication, clean up one composite if statement.
Can you imagine working on a project where the code simply got better as time passed? Do you believe that any other option is professional? Indeed, isn't continuous improvement an intrinsic part of professionalism?
So pay attention to the small pains – they usually mean that you can fix something today instead of waiting until it is impossible to change. At one company we had a rule – if something annoyed us more than three times, we would change it. I still find this a useful way of finding ways to improve my code as well as the development process.

Happy coding…

TDD is like riding a bicycle

Thursday, May 14, 2015

From time to time I get to teach unit testing and TDD to developers. And every single time I get to learn something new.
During one such class we got to the part where I talk about TDD. When I explained writing tests before code as a design activity – nobody objected. When we did a step-by-step FizzBuzz kata together – everything was just rainbows and unicorns. Since the class seemed to grasp the concept of TDD, I decided to get their hands dirty with another TDD exercise – to do on their own (pair programming style) – and all hell broke loose… Actually it was fine, but it was obviously harder for the students. Some did better than others; keep in mind that it was the end of (a very long) day, it was the first time they had ever attempted TDD, and these guys had only found out about unit testing that same morning.
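For reference, the FizzBuzz kata we built test-first boils down to something like this (one possible end result, shown here in Java):

```java
public class FizzBuzz {
    // Multiples of 3 -> "Fizz", multiples of 5 -> "Buzz",
    // multiples of both -> "FizzBuzz", otherwise the number itself.
    static String say(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }
}
```

The value of the kata isn't the code – it's arriving at it one failing test at a time.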
I did get very interesting results – as well as comments from the class.
After writing a few tests (doing TDD) one guy looked at me and said: "This is quite a lot of work, I think I can solve the problem faster without any unit tests..." I told him to try, but it made me think about TDD and learning new skills. Any new skill requires practice in order to master it – think playing guitar, juggling, or the ability to sleep late regardless of the amount of noise in your house (and an angry spouse trying to wake you up). In the beginning it's hard and feels like a lot of effort for little gain, but over time, as you improve, it becomes easier and easier, until one day you might become so good at what you do that it actually looks easy to someone watching from the side.
To me it was reminiscent of learning to ride a bicycle – at first you might have trouble making the wheels turn; at this point you'll probably move faster (and get where you want to go) without the bike (a.k.a. walking) – just like the guy in my class, who at this point has a better chance of solving his problem without TDD (with or without bugs). The next step is learning how to make your semi-trusty vehicle stop; again, doing what you did until now is the simplest possible solution – just put your legs down! But if you persist you learn that using the brakes is better. After some time your brain translates "I need to stop" into whatever you need to do to make your bike stop.
As you get better at making your new mode of transportation go where you want, it becomes easier and easier, until one day you notice that you no longer think about the actual operation of riding the bicycle – you just do it. At that point it's quicker to use your trusty (Hello Kitty pink?) bicycle to get wherever you need to be. At this stage you discover that you prefer riding your bike as opposed to the old way of "walking there".
The same goes for TDD – in the beginning it might be hard and even seem like a waste of time – time you could have used to do something better (drink more coffee?) – but as you progress you might find that it saves you time, until one day you discover that it's the way your brain solves design problems – automatically.
There is another aspect in which TDD is similar to riding bike:
Life is like riding a bicycle. To keep your balance, you must keep moving.
Albert Einstein
Just like one of the smartest guys who ever lived said – you need to keep moving if you want to avoid falling – which also reminds me of the red-green cycle: if you want it to succeed you have to keep to short, quick iterations and try to avoid making too many stops along the way – but that's a subject for another blog post.

Until then – Happy coding…

AssertHelper V1 released

Friday, April 24, 2015

Exactly one year and four months have passed since my first try at fixing the state of asserts in NUnit. You can read all about it in my post – One assert to rule them all.
My intent was to create one assert that would automatically choose the right way to check a test result (I didn't invent the idea, but the old project that did has been discontinued).
Since the initial release I’ve used AssertHelper on several occasions (projects), taking requests and adding support along the way. Finally AssertHelper V1 is ready to be tested in the real world.
I’ve added MSTest support and a last minute entry – print the “offending” lambda that caused the assert to fail.
So why don’t you give it a try - AssertHelper is available via NuGet (Install-Package AssertHelper).

And please let me know what you think.

I’m a software developer - not a lawyer

Wednesday, April 15, 2015

A long time ago, not that far away, I was hired by Omni-Corp to work on a new and shiny product. We had talent, budget and cool technology on our side, and yet that project was going to crash and burn (and ultimately be cancelled) in less than a year.
Nobody’s perfect - we had our share of problems, some technical and some not. One of which was the way that requirements were managed:
The product guy (or gal) would create a Word document describing a new feature. A few meetings would happen and when development agreed on the specifics of that feature, coding would begin. Lastly, testers would use the Word document to create test plans and validate that the requirements were met.
It was a good process, easy to explain (and follow) with clear stages (Requirements –> Development –> Testing), and clear outcome of each stage.
Like all best laid plans – this one did not survive its meet-up with the real world.
Since the product people were "business people" and not "software developers", they did not fully understand the literal, logical way in which developers read the requirements; and since developers were being "developers", we tended to come up with our own solutions whenever we hit a "logic hole" in the requirements document.

And mistrust and chaos would follow…

We (devs) would claim that since something was not in the document we were free to implement it as we saw fit – and so a young, bright manager came up with a solution: quickly update the document five minutes before a meeting and then point out that "it was definitely in the document!".
Now we had a new problem – it was impossible to write software against constantly changing requirements, and so the "requirement freeze" came to be.

Requirement freeze!?

As you might have guessed, at a specific point in time the requirements document would be locked and could not be updated again. And just like a code freeze – it didn't last, once a product guy discovered that he could make a copy of the frozen document, update it and link to it.
At this point it was just ridiculous as well as counterproductive – mistrust grew as each team felt that the other was trying to cheat it. The development team felt that product was trying to shove in half-baked, constantly changing features, while the product team felt as if the developers were always on the lookout for new excuses to avoid work.
And so every single stage in the process felt like a negotiation between opposing sides – I got to a point where I refused to sign off on requirements before reading them several times and then asking my boss to read them, to make sure they were complete and did not contain any hidden conditions.
I felt like a lawyer! There's nothing wrong with being a lawyer (some of my best friends are lawyers) – as long as it's in your job description.

And it was all our fault

We (both teams) forgot that it was our job to create software according to the demands of our customers.
Looking back at that time I realized something – requirements change. Even in a perfect project customers tend to change their minds, bugs are fixed, and new data can cause us to look at our product differently. I have worked for many companies before and since, and not once did I meet the mythical 100%-true, never-changing-requirements project – even with the best of the best.
We were fighting reality – projects tend to change over time. If only we had used the same passion and energy to make sure that we could change our code just as easily.
The truth is that it should be easy to change your code. Refactoring, unit testing, code reviews and other software development best practices can help you get there, and as good, experienced software developers it was our job to use the right practices in order to provide our customers with new features and bug fixes as quickly as possible – instead of complaining about the fact that the requirements always changed…
It can be very hard to make changes in large complicated code bases. When we make changes it's important to know that we are making them one at a time. Too often we think we are changing only one thing but instead we end up changing other things unintentionally: we end up introducing bugs.
Michael Feathers – working effectively with legacy code
It's as simple as that – simple code is easy-to-change code. So make sure you're doing your job before trying to change the world.

Happy (& clean) coding…

Electronics 101 - Getting started with Arduino

Wednesday, April 01, 2015

These days everybody talks about IoT. Connecting your toaster to the internet has become a nationwide priority. Finally the barriers to entry for hobbyist/home electronics have fallen, and anyone can hack together a hardware solution using cheap and simple components.
And putting together a simple circuit controlled by an Arduino/Raspberry Pi/whatever is easy – it's just a matter of connecting a few components and writing a few lines of code.
I have always enjoyed writing software that affects the real world, and this new wave seemed like a good opportunity to dust off my 12th-grade electronics classes of old. Having acquired an Arduino Yun, I was ready to create my very own "hello world" example – in this case, making a LED (Light Emitting Diode) blink.


For the purpose of this simple demo you will need the following:
  • Arduino software – download it, install and you're ready to go.
  • Arduino - I’m using Arduino Yun but for the purpose of this demo any Arduino will do.
  • Breadboard – this nifty "board" holds the components in place, enabling us to connect components without soldering them together. It has a bunch of holes in it, with "wires" connecting the holes to one another.
    The idea is that all the components in the same row are connected to one another. Just make sure that you do not "cross the streams" – each component has at least two leads and they (usually) need to be connected to different rows.
  • LED (pick a color, any color)
    The star of this demo – it lights up when current passes through it. You might have noticed that our friend here has two leads, one longer than the other. The longer one is the positive lead and should be connected to where the current comes from; the other, shorter lead should be connected to ground.
  • Resistor
    Since we don’t want to burn the LED we need to add a resistor to the mix.
    It has a bunch of lines on it in pretty colors. Those tell us the resistor's resistance, but don't worry about that just yet. In this case I've used a 1K resistor.
  • 2 wires – to connect stuff together.
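As an aside, the minimum resistor value can be worked out from Ohm's law. Assuming a 5 V pin, roughly a 2 V forward drop across the LED and a target current of about 20 mA (typical textbook values – check your LED's datasheet):

```java
public class LedResistor {

    // Ohm's law applied to the series circuit: R = (Vsupply - Vled) / I
    static double minResistance(double supplyVolts, double ledDropVolts, double currentAmps) {
        return (supplyVolts - ledDropVolts) / currentAmps;
    }

    public static void main(String[] args) {
        // (5 - 2) / 0.02 = 150 ohm minimum; a larger value such as the 1K
        // used here is safe - it just dims the LED a little.
        System.out.println(Math.round(minResistance(5.0, 2.0, 0.02)) + " ohm");
    }
}
```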

Writing the application

Writing code for Arduino is easy. A program is called a "sketch" and it uses a C-like syntax. We have comments, variables and functions. The bare minimum is two methods: setup and loop.
  • void setup() runs once and this is where you add your initialization logic
  • void loop() runs continuously – this is where the program logic should be written.
You can add additional methods if the need arises, but this is as simple as it gets.
Let's write our program. We'll use pin 13 for output. The reason for choosing pin 13 is that most Arduino boards have an on-board LED connected to it.
The code for this first simple program looks something like this:

const int ledPin = 13;

void setup() {
    pinMode(ledPin, OUTPUT);
}

void loop() {
    digitalWrite(ledPin, HIGH);
    delay(1000);
    digitalWrite(ledPin, LOW);
    delay(1000);
}

Seems simple enough, but just in case let’s go over the code:
  • The first line declares a constant value which we're going to use as the output pin
  • In the setup method we initialize that pin as an output pin
  • The loop starts by setting pin 13 to HIGH, which supplies 5 volts to that pin –> the LED goes on
  • Delay for a second (1000 ms) so that we'll be able to see the LED lit
  • Set pin 13 to 0 volts (LOW) –> the LED goes off
  • Another delay
And that's it – the loop runs continuously, turning the LED on and off.
Use Ctrl+R to compile and verify the sketch. You can also upload and run without connecting any components and see the little L13 LED on the board light up – but what’s the fun in that?

Connecting the board

Looking at the Arduino you'll see it has a few numbers and weird words printed on its side.
The numbers are the digital inputs/outputs; near number 13 we have GND (ground), which we will also need.
Connecting to the outputs is as simple as putting a wire through a hole.
  • Since we used pin 13 as the output, we'll connect one wire to 13. The other wire will be connected to GND.
  • Connect the first wire to the resistor
  • Connect the resistor to the LED's longer lead
  • Connect the shorter lead to the wire that goes back to GND (ground).

In the real world it would look something like this:
For the uninitiated it might look a bit "wireless" – the components do not appear to be connected. Just remember that the breadboard has wires running underneath which essentially connect the whole row.
Here is the same circuit with the “hidden connections” marked.


Connect your Arduino to your computer and Upload the sketch.
Note: Don’t forget to choose the correct port and board before trying to upload.
And after a short while you should see the LED lighting up!


Troubleshooting: If your LED does not light up (but the on-board LED does), check that all the components are connected in the right order; make sure that you've connected the positive side of the LED (the long lead) to pin 13 and the negative side to GND.
If you feel comfortable enough with this simple example – why not try to implement a binary counter?
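The heart of that exercise is mapping a counter value onto individual LEDs, one per bit. That bit logic is independent of any Arduino calls, so you can sketch (and check) it separately before wiring anything – for example, for a three-LED counter:

```java
public class BinaryCounter {

    // The LED for a given bit is lit when that bit of the counter is 1.
    static boolean ledOn(int count, int bit) {
        return ((count >> bit) & 1) == 1;
    }

    // Renders the three LEDs for a count, most significant bit first
    // ('*' = lit, '.' = off) - handy for eyeballing the pattern.
    static String leds(int count) {
        StringBuilder sb = new StringBuilder();
        for (int bit = 2; bit >= 0; bit--) {
            sb.append(ledOn(count, bit) ? '*' : '.');
        }
        return sb.toString();
    }
}
```

In the sketch itself, loop() would increment the counter once a second and call digitalWrite on each LED's pin according to ledOn.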

What’s next

That was a simple electric circuit using an Arduino Yun. I hope that it looked simple enough and that you'll be able to use this post to build your own. I hate to see software developers shy away from electronics just because it's outside their comfort zone (being "hardware").
As for me – I’m waiting for a big shipment of electronic goodies – and have plans for future hobby projects that I can use them in. I might even write a few more posts on the subject.

But until then – happy coding…