It’s been a while since my last local appearance – too long in my opinion. Luckily I have a chance to remedy that:
August 19th: I’ll be speaking at the IL .NET Developers user group (IDNDUG) about Navigating the TDD alphabet soup – TDD/BDD/ATDD, their origins, how (and when) to use them and which is better. I enjoyed presenting this session at the last DevWeek (UK) and can’t wait to do so again.
September 7th: I’ll be co-presenting a Unit testing for legacy and concurrent code session at the Clean Code Alliance (IL). I’ll be speaking on unit testing concurrent and/or asynchronous and/or multi-threaded code. My session is right after Itzik Saban’s talk about unit testing legacy code.
September 8th: IL Tech Days. It seems that unit testing + multi-core is in high demand, so I have another opportunity to speak about Unit testing Concurrent code. It has been a popular session – I’ve gotten to present it many times, and yet I keep learning new ideas and techniques each time I present this topic.
And finally, on October 13th I’ll be introducing Clean code at a newly formed group – the Generalist Engineer.
It should be an interesting & fun session – and I’m coming right after Dr. Adi Avidor’s talk on Analytical disabstraction.

And now for something completely different

In November I’ll be speaking at the Agile Testing Days.
I’m going to Navigate the xDD alphabet soup once again – this time twice as fast!
It’s a great conference with excellent speakers and I can’t wait to go.


That’s it for now. So in case you come to one (or all) of my sessions – please say hi and let me know what you think.

Automocking fields using NUnit


Tuesday, June 30, 2015

From time to time I get to teach and mentor Java developers on the fine art of unit testing. There are many similarities between unit testing in Java and in .NET, but the differences between the two are more interesting.

Faking objects in Java using Mockito

One of the widely used Java isolation (mocking) frameworks is Mockito, which is easy to use and has an API that reminds me of FakeItEasy, Isolator or Moq (among others). Creating a new fake object using Mockito is as simple as writing mock(SomeDependency.class).
Mockito can also automatically create fakes, spies (partial fakes) and classes with dependencies using annotations (Java’s equivalent of .NET attributes):
import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.mockito.Spy;

public class ArticleManagerTest extends SampleBaseTestCase {

    @Mock 
    private ArticleCalculator calculator;

    @Mock(name = "database") 
    private ArticleDatabase dbMock; // note the mock name attribute
    
    @Spy 
    private UserProvider userProvider = new ConsumerUserProvider();
    
    @InjectMocks 
    private ArticleManager manager;

    @Test 
    public void shouldDoSomething() {
         manager.initiateArticle();
 
         verify(dbMock).addListener(any(ArticleListener.class)); // verify on the fake created for the "database" field above
    }
}

public class SampleBaseTestCase {

    @Before
    public void initMocks() {
        MockitoAnnotations.initMocks(this);
    }
}
In the example above, annotations are used to create the fake objects as well as a real object that uses them. The end result is very similar to using an AutoMocking container.
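For readers who haven’t met the concept, this is roughly what an automocking container looks like on the .NET side. The sketch below uses AutoFixture with its Moq integration, and the interface and class are stand-ins I made up to mirror the Java example above – it illustrates the idea, it’s not code from any real project:

using Moq;
using NUnit.Framework;
using Ploeh.AutoFixture;
using Ploeh.AutoFixture.AutoMoq;

public interface IArticleCalculator
{
    int CountWords(string article);
}

public class ArticleManager
{
    private readonly IArticleCalculator _calculator;

    public ArticleManager(IArticleCalculator calculator)
    {
        _calculator = calculator;
    }

    public int Measure(string article)
    {
        return _calculator.CountWords(article);
    }
}

[TestFixture]
public class AutomockingContainerExample
{
    [Test]
    public void ContainerCreatesSutWithFakedDependencies()
    {
        var fixture = new Fixture().Customize(new AutoMoqCustomization());

        // Freeze the fake so the same instance gets injected into the SUT below
        var calculator = fixture.Freeze<Mock<IArticleCalculator>>();
        calculator.Setup(c => c.CountWords("some article")).Returns(42);

        // ArticleManager is built by the container, with its dependencies automatically faked
        var manager = fixture.Create<ArticleManager>();

        Assert.That(manager.Measure("some article"), Is.EqualTo(42));
    }
}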

What about NUnit?

I wanted to see if I could create similar functionality in the .NET world. At first I tried using PostSharp, but unfortunately the test runners had problems finding the newly injected code. Instead I decided to use NUnit’s ITestAction – a simple AOP mechanism for unit tests that seemed like a good fit.

NUnit has long had the ability to execute code before and after tests by decorating fixture classes and methods with the appropriate NUnit-provided attributes. Action Attributes take this one step further: they allow the user to create custom attributes that encapsulate specific actions to run before or after any test.

I’ve created a simple attribute (FakeItAttribute) to mark the fields I want faked, and another attribute that discovers fields carrying it and creates fake objects for them:
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Interface, AllowMultiple = false)]
public abstract class AutoFakeAttributeBase : Attribute, ITestAction
{
    private readonly IFakeHelper _fakeHelper;

    protected AutoFakeAttributeBase(IFakeHelper fakeHelper)
    {
        _fakeHelper = fakeHelper;
    }

    private IEnumerable<FieldInfo> _testFields;

    public void BeforeTest(TestDetails details)
    {
        // Method is null when the action runs at suite (fixture) level -
        // that's our chance to scan the fixture for [FakeIt] fields.
        var isTestFixture = details.Method == null;

        if (isTestFixture)
        {
            DiscoverFieldsToFake(details);

            return;
        }

        // Before each test, give every marked field a fresh fake object
        foreach (var testField in _testFields)
        {
            var fakeObject = _fakeHelper.DynamicallyCreateFakeObject(testField.FieldType);

            testField.SetValue(details.Fixture, fakeObject);
        }
    }
    }

    public void AfterTest(TestDetails details)
    {
    }

    private void DiscoverFieldsToFake(TestDetails details)
    {
        _testFields = details.Fixture.GetType()
            .GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
            .Where(testField => testField.CustomAttributes.Any(data => data.AttributeType == typeof(FakeItAttribute)));
    }

    public ActionTargets Targets
    {
        get { return ActionTargets.Test | ActionTargets.Suite; }
    }
}
I’ve derived an AutoFakeItEasyAttribute from that base attribute; it uses FakeItEasy and reflection to create the fake objects.
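The derived attribute ends up tiny. Here’s a minimal sketch of what it might look like – the helper’s name and the exact reflection call are assumptions on my part (and IFakeHelper’s signature is inferred from the base class above); the real implementation lives in the GitHub repo:

using System;
using FakeItEasy;

public class FakeItEasyHelper : IFakeHelper
{
    public object DynamicallyCreateFakeObject(Type typeToFake)
    {
        // Close the generic A.Fake<T>() method over the field's type and invoke it
        var fakeMethod = typeof(A).GetMethod("Fake", Type.EmptyTypes)
                                  .MakeGenericMethod(typeToFake);

        return fakeMethod.Invoke(null, null);
    }
}

public class AutoFakeItEasyAttribute : AutoFakeAttributeBase
{
    public AutoFakeItEasyAttribute() : base(new FakeItEasyHelper())
    {
    }
}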
And now I can write tests whose fake objects are created automatically, just by adding that attribute:
[TestFixture, AutoFakeItEasy]
public class UsingSimpleClassTests
{
    [FakeIt]
    private IDependency _fakeDependency;

    // Not faked
    private IDependency _uninitializedDependency;

    [Test]
    public void FakesCreatedAutomatically()
    {
        Assert.That(_fakeDependency, Is.Not.Null);
    }

    [Test]
    public void FieldsWithoutAttributesAreNotInitialized()
    {
        Assert.That(_uninitializedDependency, Is.Null);
    }
}
Quite cool, and yes it’s on GitHub.

Now what

I’m not sure if I like the use of fields in unit tests – I’ve seen it misused to create unreadable (and unmaintainable) tests more often than not – but at least now the fake fields will be re-initialized between test runs. Now that I know it’s possible to create AutoFaking using attributes, I’m considering adding more features until I fully implement AutoFakes.

Until then – happy coding…
In my not-so-distant past I needed to write a component which would control an air conditioning unit.
My application needed to send commands to the external device (on, off, set temperature) and read information back from it (room temperature, humidity, etc.).
And so I came up with the following design:
[Diagram: the manager sits on top of the client, which communicates with the air conditioning device]
It’s a bit simplistic, but the idea was that the client would be responsible for communication with the device (protocols, timeouts) while the manager would handle the “business logic” end of things (if temp > 10°C then turn on the cooling unit).
Commands from the manager would get translated into an unholy stream of bytes and sent to the device (and re-sent – numerous times), while data arriving from the device would be converted to strongly typed C# classes and passed to the manager, where additional actions could be taken. Some of that data would end up in the application’s data model, according to the business rules inside the manager.
Obviously I developed the client and manager using TDD. While writing my tests (and later my classes) I kept a clear division between the manager and the client – each had its own set of tests and each had clear inputs and outputs (or so I thought).
A week later a new requirement arrived which caused a change in my client, and so I needed to rewrite both the client’s tests and the manager’s tests. A few days later a bug was found – one that my tests should have caught but missed, since it was caused by the collaboration between the client and the manager. After a while I noticed that I was doing something wrong – my tests would break whenever I changed one of the methods between my client and manager. And so I decided to test my manager and client as one “unit”, using fake objects only to fake the device out of my tests. And it worked! I managed to reduce the number of tests while increasing my functionality – and there was much rejoicing.
[Diagram: the manager and client tested together as one unit, with only the device faked]
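To make the idea concrete, here is roughly what such a test looks like. All of the type and member names below are made up for the sake of the example (the real project’s code was different): the manager and the client are both real objects, and only the device is faked.

using FakeItEasy;
using NUnit.Framework;

public interface IAirConditionerDevice
{
    void Send(byte[] command);
}

public class DeviceClient
{
    private readonly IAirConditionerDevice _device;

    public DeviceClient(IAirConditionerDevice device)
    {
        _device = device;
    }

    public void TurnCoolingOn()
    {
        _device.Send(new byte[] { 0x01 }); // the "unholy stream of bytes"
    }
}

public class AirConditionerManager
{
    private readonly DeviceClient _client;

    public AirConditionerManager(DeviceClient client)
    {
        _client = client;
    }

    public void ReportRoomTemperature(double celsius)
    {
        // The "business logic" end of things
        if (celsius > 10)
        {
            _client.TurnCoolingOn();
        }
    }
}

[TestFixture]
public class ManagerAndClientTests
{
    [Test]
    public void HighRoomTemperature_TurnsCoolingOn()
    {
        var fakeDevice = A.Fake<IAirConditionerDevice>();
        var manager = new AirConditionerManager(new DeviceClient(fakeDevice)); // real manager, real client

        manager.ReportRoomTemperature(25);

        A.CallTo(() => fakeDevice.Send(A<byte[]>._)).MustHaveHappened();
    }
}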
That is, until one day a co-worker of mine saw my tests and told me that “those are not unit tests since you’re testing more than a single class”. It made me think about the actual definition of unit tests.

Maybe we should not call them unit tests anymore?

It all came back to me yesterday when I read a post by Simon Brown: “Unit and integration are ambiguous names for tests”. In his post Simon explains that there is no clear definition of unit/integration tests (which is dead on the money) and comes up with his own definitions, which are clearer and also help you understand which automated tests a developer should write for each scope of the project – nicely aligned with the architecture.
Simon’s definitions are great. They help align the developer’s tests with the project architecture – something most of the projects I’ve witnessed lack.
But there’s a small problem with splitting the tests into class-, component- and system-tests. While it sets clear rules about their scope, what to fake, test size and speed – it does not help avoid testing the wrong thing…
I think that “class tests” falls short of explaining what we want to test – namely the behavior of the code, not the actual implementation. I fear that calling a test a “class test” would result in developers writing unit tests for every single class (and every single method). Such over-specified tests tend to test trivial functionality while being extremely fragile.

Is there a definition for unit tests?

Since I needed an actual definition I went to the definitive unit testing encyclopedia – xUnit Test Patterns – and got the following definition for my troubles:
A test that verifies the behavior of some small part of the overall system. What makes a test a unit test is that the system under test (SUT) is a very small subset of the overall system and may be unrecognizable to someone who is not involved in building the software. The actual SUT may be as small as a single object or method that is a consequence of one or more design decisions although its behavior may also be traced back to some aspect of the functional requirements. There is no need for unit tests to be readable, recognizable or verifiable by the customer or business domain expert. Contrast this with a customer test which is derived almost entirely from the requirements and which should be verifiable by the customer. In eXtreme Programming, unit tests are also called developer tests or programmer tests.
[From: XUnit test patterns – Unit test]
So… only special people (someone who is involved in building the software) can know what the subject under test is?
It does state that a unit test can test a single class – or more…
There’s an excellent blog post by Martin Fowler – titled simply UnitTest – in which he tries to define unit tests, but it seems the only definite conclusion is that unit tests should be fast. How fast? It depends!

What I use

I don’t have a clear answer here but I seem to go back to three different types of developer tests:
Unit Tests
Test a single unit of work (one class or more), and employ fake objects to prevent running external dependencies (DB, server, 3rd party and other classes). They are fast – usually running in a fraction of a second – and they are relatively short (tens of LOC) and simple.
Unit tests are independent of one another, and each run on any machine should produce the exact same result.
Integration tests
They look similar to my unit tests and are used to test interaction with external dependencies. I usually use integration tests when I need to test logic inside an external dependency (think stored procedures) or the interaction between my code and that external dependency.
They usually require setup (and cleanup) between test runs and a specific environment in place in order to pass. Fake objects can be used to avoid running other dependencies. They will be slower than unit tests (seconds to minutes) and might fail for reasons outside the scope of the test (permissions, server down, etc.).
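As an illustration, a typical integration test of mine looks something like the sketch below. The connection string, table and repository are made up for the example (and the Orders.Total column is assumed to be an INT) – the point is the setup/cleanup around each run and the dependency on a real database being available.

using System.Data.SqlClient;
using NUnit.Framework;

public class OrderRepository
{
    private readonly SqlConnection _connection;

    public OrderRepository(SqlConnection connection)
    {
        _connection = connection;
    }

    public int GetOrderTotal(int id)
    {
        using (var command = new SqlCommand("SELECT Total FROM Orders WHERE Id = @id", _connection))
        {
            command.Parameters.AddWithValue("@id", id);

            return (int)command.ExecuteScalar();
        }
    }
}

[TestFixture]
public class OrderRepositoryIntegrationTests
{
    private SqlConnection _connection;

    [SetUp]
    public void OpenConnectionAndInsertTestData()
    {
        // Requires a specific environment - a local SQL Server with an OrdersTest database
        _connection = new SqlConnection(@"Server=.\SQLEXPRESS;Database=OrdersTest;Integrated Security=true");
        _connection.Open();

        using (var command = new SqlCommand("INSERT INTO Orders (Id, Total) VALUES (42, 100)", _connection))
        {
            command.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void CleanUpTestData()
    {
        using (var command = new SqlCommand("DELETE FROM Orders WHERE Id = 42", _connection))
        {
            command.ExecuteNonQuery();
        }

        _connection.Dispose();
    }

    [Test]
    public void GetOrderTotal_ReturnsTheRowInsertedDirectlyIntoTheDatabase()
    {
        var repository = new OrderRepository(_connection); // real repository, real database

        Assert.That(repository.GetOrderTotal(42), Is.EqualTo(100));
    }
}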
Acceptance tests
Those are application-, system- or scenario-level tests that make sure business requirements are implemented according to spec.
They are slow, cross component boundaries, and usually do not employ fake objects of any kind.

Conclusion

What I like about Simon’s methodology is that it’s easy to explain and sets very good ground rules for the kinds of tests a good project should have. I still think that the definition of class tests does not change the original definition (or dis-definition) of unit tests – while it’s about focusing on the operation of a single class, it does not mean that other classes are not exercised as well.
It’s hard to define what a “good” or “correct” unit test is. I’ve noticed that there is no one rule for every project out there. And so I’ll keep on using my flawed yet accurate definition of “unit tests” – until I find a better one.

And until then – Happy coding…

Broken windows and software development


Sunday, June 14, 2015

In 1969 a car with no license plates was parked with its hood up in the Bronx. Within minutes it was vandalized and stripped. That car was part of an experiment that would later inspire a theory called “broken windows”.

The Broken windows theory

The original broken windows theory was introduced in 1982:
Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it's unoccupied, perhaps become squatters or light fires inside.
Or consider a pavement. Some litter accumulates. Soon, more litter accumulates. Eventually, people even start leaving bags of refuse from take-out restaurants there or even break into cars.
[Wilson, James Q; Kelling, George L (Mar 1982), "Broken Windows: The police and neighborhood safety", The Atlantic]
The bottom line is that we need to fix the small issues quickly in order to prevent bigger issues down the road. By fixing these problems we avoid escalation and make sure that the neighborhood does not become a crime-ridden cesspool, or the inspiration for the next Fallout game.

What does it have to do with my code?

Code can also have broken windows – bad design, leftover comments that don’t mean anything anymore, and dead code. I’ve seen it happen countless times – long methods become longer (instead of being refactored into smaller methods) and messy code becomes harder to read and maintain as time goes by.
I remember one code review in which the developer didn’t bother with writing unit tests for a new feature since “the existing code did not have any unit tests in place”…
When code issues are left unattended, development of new features becomes harder (or even impossible) and every change introduces new and exotic bugs. Weird governing processes are created – I’ve seen one place where it was common practice to have the equivalent of two weeks of meetings before a single line of code was written.
Broken windows in code lead to fear – fear of changing code, fear of writing new code, and an overall fear of touching the codebase at all.
The funny thing is that fixing broken windows in code is usually simple – refactoring, when applied correctly, can solve most issues before they become an impossible mess. Since most modern IDEs support at least a subset of the common refactorings (such as rename and extract method), these refactorings can be done safely and without breaking existing code.

Other broken windows in software development

But there are other ways in which we can unintentionally create a broken window that has nothing to do with our code. At one job we had to use a tool that made building a new version impossible to automate; no one knew why we were using it and no one could explain the merits of using it, but every single time I tried to replace it I was met with fierce resistance – since it was the tool that all the big players are using.
If releasing a new version takes three days and a hundred steps of manual work – this is a problem that can escalate quickly, and before you know it the wrong version is shipped to your customer.
And it could be the small things, such as a dependency’s version number that seems to change without any logic behind it (yesterday’s version was 1.0.0.111, today’s is 1.0.0.9) – this pseudo-randomness can cause big problems once you need to revisit a change (hotfix, anyone?) or distribute a new version using an automatic update tool.
The solution for most of these ALM-related issues is education and automation, which seems like a lot of work at the beginning but once in place saves quite a lot of time every single day.

Summary

Broken windows are everywhere – it’s the small things that can become huge problems if left unchecked. It becomes a real problem once the development team (or the whole organization) accepts them as a fact of life (a.k.a. “it’s the way it was always done here”).
In the book Clean Code, Bob Martin explains the “boy scout rule”:
If we all checked-in our code a little cleaner than when we checked it out, the code simply could not rot. The cleanup doesn't have to be something big. Change one variable name for the better, break up one function that's a little too large, eliminate one small bit of duplication, clean up one composite if statement.
Can you imagine working on a project where the code simply got better as time passed? Do you believe that any other option is professional? Indeed, isn't continuous improvement an intrinsic part of professionalism?
So pay attention to the small pains – it usually means you can fix something today instead of waiting until it is impossible to change. At one company we had a rule – if something annoyed us more than three times, we would change it. I still find this to be a useful way of finding opportunities to improve my code as well as the development process.

Happy coding…

TDD is like riding a bicycle


Thursday, May 14, 2015

From time to time I get to teach unit testing and TDD to developers. And every single time I get to learn something new.
During one such class we got to the part where I talk about TDD. When I explained writing tests before code as a design activity – nobody objected. When we did a step-by-step FizzBuzz kata together – everything was just rainbows and unicorns. Since the class seemed to grasp the concept of TDD I decided to get their hands dirty with another TDD exercise – to do on their own (pair programming style) – and all hell broke loose… Actually it was fine, but obviously harder for the students. Some did better than others – keep in mind that it was the end of a (very long) day, that it was the first time they had ever attempted TDD, and that these guys had only found out about unit testing that same morning.
I did get very interesting results – as well as comments from the class.
After writing a few tests (doing TDD) one guy looked at me and said: “This is quite a lot of work, I think I can solve the problem faster without any unit tests...” I told him to try, but it made me think about TDD and learning new skills. Any new skill requires practice in order to master it – think playing guitar, juggling or the ability to sleep late regardless of the amount of noise in your house (and an angry spouse trying to wake you up). In the beginning it’s hard and feels like a lot of effort for little gain, but as time passes and you improve it becomes easier and easier, until one day you might become so good at what you do that it actually looks easy to someone watching from the side.
To me it was reminiscent of learning to ride a bicycle – at first you might have trouble making the wheels turn; at this point you’ll probably move faster (and get where you want to go) without the bike (a.k.a. walking) – just like the guy in my class, who at this point has a better chance of solving his problem without TDD (with or without bugs). The next step is learning how to make your semi-trusty vehicle stop; again, doing what you did until now is the simplest possible solution – just put your legs down! But if you persist you learn that using the brakes is better. After some time your brain translates “I need to stop” into whatever you need to do in order to make your bike stop.
As you get better at making your new mode of transportation go where you want, it becomes easier and easier, until one day you notice that you no longer think about the actual operation of riding the bicycle – you just do it. At that point it’s quicker to use your trusty (Hello Kitty pink?) bicycle to get wherever you need to be, and at this stage you discover that you prefer riding your bike as opposed to the old way of “walking there”.
Same goes for TDD – in the beginning it might be hard and even seem like a waste of time – time you could have used to do something better (drink more coffee?) – but as you progress you might find that it saves you time, until one day you discover that it’s the way your brain solves design problems – automatically.
There is another aspect in which TDD is similar to riding a bike:
Life is like riding a bicycle. To keep your balance, you must keep moving.
Albert Einstein
Just like one of the smartest guys who ever lived said – you need to keep moving if you want to avoid falling – which also reminds me of the Red-Green cycle: if you want it to succeed you have to keep to short, quick iterations and try to avoid making too many stops along the way – but that’s a subject for another blog post.

Until then – Happy coding…

AssertHelper V1 released


Friday, April 24, 2015

Exactly one year and four months have passed since my first try at fixing the state of asserts in NUnit. You can read all about it in my post – One assert to rule them all.
My intent was to create one assert that would automatically choose the right way to check the test result (I didn’t invent the idea, but the old project has been discontinued).
Since the initial release I’ve used AssertHelper on several occasions (projects), taking requests and adding support along the way. Finally AssertHelper V1 is ready to be tested in the real world.
I’ve added MSTest support and a last-minute entry – printing the “offending” lambda that caused the assert to fail.
[Screenshot: a failing assert printing the offending lambda]
So why don’t you give it a try - AssertHelper is available via NuGet (Install-Package AssertHelper).
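In a nutshell, usage boils down to a single entry point that takes a plain boolean lambda and picks the matching NUnit assert for you. The snippet below is only a rough sketch – the exact entry point and namespace shown here are an approximation, so check the project’s README for the real syntax:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class AssertHelperUsageSketch
{
    [Test]
    public void OneAssertToRuleThemAll()
    {
        var actual = 2 + 3;
        var names = new List<string> { "NUnit", "MSTest" };

        // The library inspects each lambda and translates it to the appropriate assert,
        // printing the lambda itself when the check fails (entry point name is approximate)
        Expect.That(() => actual == 5);
        Expect.That(() => names.Contains("NUnit"));
    }
}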

And please let me know what you think.

I’m a software developer - not a lawyer


Wednesday, April 15, 2015

A long time ago, not that far away, I was hired by Omni-Corp to work on a new and shiny product. We had talent, budget and cool technology on our side, and yet that project was going to crash and burn (and ultimately be cancelled) in less than a year.
Nobody’s perfect – we had our share of problems, some technical and some not. One of them was the way that requirements were managed:
The product guy (or gal) would create a Word document describing a new feature. A few meetings would happen, and once development agreed on the specifics of that feature, coding would begin. Lastly, testers would use the Word document to create test plans and validate that the requirements were met.
It was a good process, easy to explain (and follow) with clear stages (Requirements –> Development –> Testing), and clear outcome of each stage.
Like all best-laid plans – this one did not survive its meeting with the real world.
Since the product people were “business people” and not “software developers”, they did not fully understand the logical way in which developers read the requirements; and since developers were being “developers”, we tended to come up with our own solutions whenever we met a “logic hole” in the requirements document.

And mistrust and chaos would follow…

We (devs) would claim that since it was not in the document, we had to implement it as we saw fit – and so a bright young manager came up with the solution: quickly update the document five minutes before a meeting and then point out that “it was definitely in the document!”.
Now we had a new problem – it was impossible to write software against constantly changing requirements, and so the “requirement freeze” came to be.

Requirement freeze!?

As you might have guessed, at a specific point in time the requirements document would be locked and could not be updated again. And just like a code freeze – it didn’t last, not once a product guy discovered that he could make a copy of the frozen document, update it and link to that instead.
At this point it was just ridiculous as well as counterproductive – mistrust grew as both teams felt that the other was trying to cheat them. The development team felt that product was trying to shove in half-baked, constantly changing features, while the product team felt as if the developers were always on the lookout for new excuses to avoid work.
And so every single stage in the process felt like a negotiation between opposing sides – I got to a point where I refused to sign off on requirements before reading them several times and then asking my boss to read them as well, to make sure they were complete and did not contain any hidden conditions.
I felt like a lawyer! There’s nothing wrong with being a lawyer (some of my best friends are lawyers) – as long as it’s in your job description.

And it was all our fault

We (both teams) forgot that it was our job to create software according to the demands of our customers.
Looking back at that time I realized something – requirements change. Even in a perfect project customers tend to change their minds, bugs are fixed, and new data can cause us to look at our product differently. I’ve worked for many companies before and since, and not once did I meet the mythical 100%-true, never-changing-requirements project – even with the best of the best.
We were fighting reality – and in reality projects tend to change over time. If only we had used the same passion and energy to make sure that we could change our code just as easily.
The truth is that it should be easy to change your code. Refactoring, unit testing, code reviews and other software development best practices can help you get there, and as good, experienced software developers it was our job to use the right practices in order to provide our customers with new features and bug fixes as quickly as possible – instead of complaining about the fact that the requirements always changed…
It can be very hard to make changes in large complicated code bases. When we make changes it's important to know that we are making them one at a time. Too often we think we are changing only one thing but instead we end up changing other things unintentionally: we end up introducing bugs.
Michael Feathers – Working Effectively with Legacy Code
It’s as simple as that – simple code equals easy to change. So make sure you’re doing your job before trying to change the world.

Happy (& clean) coding…