MSTest V2 - First impressions


Thursday, August 11, 2016

It’s been a while since I’ve tried a new Unit Testing framework. It seemed that between NUnit, XUnit & MSTest I had enough to choose from. I’ve always tried to be pragmatic when choosing a test framework for a new project and when suggesting one to a new client.

Although all .NET unit testing frameworks are pretty similar, there are some differences between them – and I’m not talking about what they call the “Test” attribute.

It always frustrated me that MSTest didn’t seem to change much after it was introduced back in 2005, while both XUnit & NUnit improved over time with new ideas that made unit testing easier to adopt.

One of the features I missed the most was parameterized unit tests – the ability to write a test once and run it several times with different inputs. Since MSTest is widely used, it was frustrating to see good developers write bad tests just because that feature was missing.

Over the years I grew tired of waiting for that support and I tried implementing it myself – with some success, but not without problems.

And then those solutions stopped working and I stopped using them.

That’s why I was happy to find out that not only is Microsoft working on a new “MSTest V2”, but that it will also have the parameterized tests I’ve always wanted.

On top of that, getting started with it is simple. You no longer have to create a special “Unit Testing” project – any class library will do.

[Screenshot: creating a new class library project]

And then just add MSTest.TestFramework & MSTest.TestAdapter using NuGet and you’re ready to go (don’t forget to check “Include prerelease”).

[Screenshot: adding MSTest.TestFramework and MSTest.TestAdapter from NuGet]

And writing tests works just like before – even better, now we have the ability to write row tests:

[TestClass]
public class TestFrameworkTest
{
    [TestMethod]
    public void SimpleTest()
    {
        Assert.IsTrue(false); // always fails
    }

    [DataTestMethod]
    [DataRow(1, 2, 3)]
    [DataRow(2, 2, 4)]
    [DataRow(3, 2, 6)] // this row fails: 3 + 2 != 6
    [DataRow(5, 2, 7)]
    public void RowTest(int a, int b, int result)
    {
        Assert.AreEqual(result, a + b);
    }
}

And it works! I ran the tests and each row ran as a separate test.

[Screenshot: test results – each row shown as a separate test]

At the moment there is no way to run only one row – you have to run them all. This means that I cannot debug a single row, which can become very painful with multiple rows. Although NUnit (and XUnit) still have better row-testing functionality, at least now I can write (and teach how to write) proper tests even when a client has chosen to use MSTest (for other good reasons).
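
For comparison, here is a minimal sketch of the same test in NUnit – each TestCase appears as its own entry in the test runner, so a single row can be run (and debugged) on its own:

[TestFixture]
public class NUnitRowTests
{
    // Each TestCase is listed (and runnable) individually
    [TestCase(1, 2, 3)]
    [TestCase(2, 2, 4)]
    [TestCase(3, 2, 5)]
    public void RowTest(int a, int b, int result)
    {
        Assert.AreEqual(result, a + b);
    }
}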

It seems that MSTest is changing, and I can’t wait to see what other new features come next.

 

Until then – Happy coding…

Implementing Soundex using LINQ (with help from OzCode)


Wednesday, June 08, 2016

A while ago I came across the very interesting Soundex algorithm. It’s a way to find similarity between words based on how they sound – I’ll let Wikipedia explain:

Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling. The algorithm mainly encodes consonants; a vowel will not be encoded unless it is the first letter. Soundex is the most widely known of all phonetic algorithms (in part because it is a standard feature of popular database software such as DB2, PostgreSQL, MySQL, Ingres, MS SQL Server and Oracle) and is often used (incorrectly) as a synonym for "phonetic algorithm". Improvements to Soundex are the basis for many modern phonetic algorithms.

So basically Soundex can help you fix spelling mistakes by finding the word you meant to use based on how the words sound – so if you accidentally search the internet for “Drawer Hellber” you’ll still be able to find my blog:

[Screenshot: search results for “Drawer Hellber”]

Actually you won’t, but you get the point

It’s fairly easy to follow the steps of the algorithm (as defined by Wikipedia):

  1. Retain the first letter of the name and drop all other occurrences of a, e, i, o, u, y, h, w.
  2. Replace consonants with digits as follows (after the first letter):
    • b, f, p, v → 1
    • c, g, j, k, q, s, x, z → 2
    • d, t → 3
    • l → 4
    • m, n → 5
    • r → 6
  3. If two or more letters with the same number are adjacent in the original name (before step 1), only retain the first letter; also two letters with the same number separated by 'h' or 'w' are coded as a single number, whereas such letters separated by a vowel are coded twice. This rule also applies to the first letter.
  4. If you have too few letters in your word that you can't assign three numbers, append with zeros until there are three numbers. If you have more than 3 letters, just retain the first 3 numbers.

In a clear case of “when you have a hammer, everything looks like a nail” I thought to myself – why not implement this algorithm in LINQ? And so I came up with the following code:

public string Encode(string word)
{
    if (IsNullOrEmpty(word))
    {
        return Empty;
    }

    return word
        .Select((ch, index) => EncodeCharacter(word, ch, index))
        .Where((encodedChar, index) => 
                    encodedChar.IsValidEncoding && encodedChar.Curr != encodedChar.Prev)
        .Select(arg => arg.Curr)
        .Concat(Enumerable.Repeat("0", MaxEncodedLength))
        .Take(MaxEncodedLength)
        .Aggregate((i, j) => i + j);
}
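
The query leans on a few helpers that aren’t shown here – EncodeCharacter, IsValidEncoding, Curr, Prev and MaxEncodedLength (and the bare IsNullOrEmpty/Empty suggest a “using static System.String” at the top of the file). Here is a minimal sketch of what those helpers might look like; the real versions live in Encoder.cs, everything beyond the names used in the query above is my assumption, and this sketch ignores the ‘h’/‘w’ subtlety of rule #3:

private const int MaxEncodedLength = 4; // e.g. "D660"
private const string InvalidEncoding = "*";

private class EncodedChar
{
    public string Curr { get; set; } // encoding of the current character
    public string Prev { get; set; } // encoding of the character before it
    public bool IsValidEncoding
    {
        get { return Curr != InvalidEncoding; }
    }
}

private EncodedChar EncodeCharacter(string word, char ch, int index)
{
    return new EncodedChar
    {
        // rule #1: keep the first letter as-is, encode the rest
        Curr = index == 0 ? char.ToUpper(ch).ToString() : ToDigit(ch),
        Prev = index == 0 ? InvalidEncoding : ToDigit(word[index - 1])
    };
}

// rule #2: consonants map to digits; vowels, 'y', 'h' and 'w' are invalid
private static string ToDigit(char ch)
{
    switch (char.ToLower(ch))
    {
        case 'b': case 'f': case 'p': case 'v':
            return "1";
        case 'c': case 'g': case 'j': case 'k':
        case 'q': case 's': case 'x': case 'z':
            return "2";
        case 'd': case 't':
            return "3";
        case 'l':
            return "4";
        case 'm': case 'n':
            return "5";
        case 'r':
            return "6";
        default:
            return InvalidEncoding;
    }
}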

If you’re interested, the entire Encoder.cs code file can be found here.
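
A quick, hypothetical way to sanity-check the encoder (assuming the Encode method sits on a plain class named Encoder) – the expected values come from Wikipedia’s example plus the two words used below:

var encoder = new Encoder();
Console.WriteLine(encoder.Encode("Drawer")); // D660
Console.WriteLine(encoder.Encode("Dror"));   // D660
Console.WriteLine(encoder.Encode("Robert")); // R163 (Wikipedia's example)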

Although the algorithm seems simple enough, I had four bugs in my initial implementation, but they were quickly squashed using OzCode with its new LINQ debugging feature (in the Early Access Preview). Now that I’ve got it working, I’m going to use OzCode to show you how the Soundex algorithm processes “Drawer” and “Dror” (one of which is not my name) and check that they both produce the same result.

Let’s start with “Drawer” – it should be encoded as D660.

Stopping the debugger at the beginning of the LINQ query shows us that the result of the query is indeed “D660”, but it also shows numbers at the end of each operator – those numbers indicate how many items each LINQ operator returned.

[Screenshot: the LINQ query in OzCode, with item counts next to each operator]

Looking at these numbers, I can tell that of the 6 characters we started with, only three were left after Where. The rest were either invalid or the same as the letter before (rules 1–3). Then we concatenated four ‘0’s, took the first four characters (Take), and from there it was a simple case of aggregating all of the letters into a single string.

Let’s see it step by step – using OzCode’s detailed LINQ tool window:

Step 1: the first letter is kept and the rest of the letters are encoded according to rule #2 (‘*’ means invalid encoding):

[Screenshot: Select – each letter and its encoding]

As you can see, ‘D’ was kept and the two ‘r’s were encoded as ‘6’s while ‘a’, ‘w’ and ‘e’ were replaced with stars (invalid character).

Next we get rid of invalid & duplicate letters:

[Screenshot: Where – invalid and repeated encodings filtered out]

As you can tell by the red X’s all of the invalid characters were thrown away.

[Screenshot: Select – converting each result into its string encoding]

The Select is a simple conversion from my result type into simple strings.

With your permission I’ll jump directly to Aggregate where I stitch the strings together:

[Screenshot: Aggregate – stitching the strings into “D660”]

Simple.

In the case of “Dror” the result is similar:

[Screenshot: the same query encoding “Dror” as D660]

How cool is that?

If you want to find out more about OzCode and LINQ debugging – try the EAP, or better yet come visit us at the OzCode booth at NDC Oslo – we’ve just landed and we plan on having a great week.

The real difference between NUnit and XUnit


Tuesday, May 31, 2016

I’ve just started yet another pet project and wanted to pick a unit testing framework (.NET). On a whim I soon came to regret, I tried googling “NUnit vs. XUnit” and read the first 10 posts I got. They were informative and mostly correct; unfortunately, all of them completely missed the one big difference between those two excellent unit testing frameworks…
Consider the following two test fixtures:
NUnit
[TestFixture]
public class NUnitTwoTests
{
    private int _myInt = 0;

    [Test]
    public void Test1()
    {
        _myInt++;

        Assert.That(_myInt, Is.EqualTo(1));
    }

    [Test]
    public void Test2()
    {
        _myInt++;

        Assert.That(_myInt, Is.EqualTo(1));
    }
}
XUnit
public class XUnitTwoTests
{
    private int _myInt = 0;

    [Fact]
    public void Test1()
    {
        _myInt++;

        Assert.Equal(1, _myInt);
    }

    [Fact]
    public void Test2()
    {
        _myInt++;

        Assert.Equal(1, _myInt);
    }
}
Both look almost the same, and yet if you run them you’ll notice that they are in fact quite different:
[Screenshot: test results – one of the NUnit tests fails while both XUnit tests pass]

The same result will happen regardless of the unit test runner used – you can try Visual Studio or the command line – the only change will be which of the two NUnit tests fails, depending on which test runs first.

The reason that one of the NUnit tests failed is that NUnit runs all of the tests in the same fixture (a.k.a. class) using the same instance, while XUnit creates a new instance per test. This means that if you have fields (just like _myInt) in your tests, they may cause problems in other tests due to shared state – also, please don’t keep state in fields in your unit tests – ever!

Running each test as a separate instance makes sure (read: reduces the chance) that one test won’t cause another test to fail. Although you can always prevent shared state in tests using other methods and tools, it is another layer of isolation.
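
For example, in NUnit the usual way to get a clean slate despite the shared instance is to reset the state in a [SetUp] method, which runs before every test – a minimal sketch:

[TestFixture]
public class NUnitTwoTestsWithSetUp
{
    private int _myInt;

    [SetUp]
    public void ResetState()
    {
        _myInt = 0; // runs before each test, even on the shared instance
    }

    [Test]
    public void Test1()
    {
        _myInt++;
        Assert.That(_myInt, Is.EqualTo(1));
    }

    [Test]
    public void Test2()
    {
        _myInt++;
        Assert.That(_myInt, Is.EqualTo(1));
    }
}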
Does this mean you should use XUnit and not NUnit? That’s up to you (I use both).

Can you guess how Microsoft’s MSTest behaves? Try the same experiment and find out.

Until then – happy coding…

Why is the build broken?


Sunday, May 29, 2016

Let’s pretend we’re in the house-building industry (c’tors Inc.). One day, while you’re getting fresh air and working hard, the building inspector comes along, climbs to the top of our not-yet-complete structure, yells “There’s something wrong with the left side of the building!” and goes away. As a pretend construction professional – what do you think are the chances that someone will fix the problem?

If the scenario above sounds crazy to you – that’s OK; unfortunately, I see it unfold daily.

Most companies these days have some kind of automatic build process (and I use the term loosely): files get checked-in/submitted/pushed (all of the above?) to source control, and a server tries its best to build the new source (and maybe run some tests) according to a preconfigured trigger – anything from immediately to the next day. The problem starts when that process fails. At that point there are two possible outcomes: someone fixes the problem and makes the build pass again (a.k.a. green), or everybody ignores that server for a long time – which could become forever.

I’ve noticed that software developers who ignore a broken build usually do not do so out of malice or laziness.

Unfortunately, a broken build means that although someone (perhaps you) took the time to automate parts (or all) of the build/test process, all of that hard work is wasted because no one will fix the damn build.

I’ve noticed that when the build system is left broken for a long time, it usually happens due to one of the following reasons:
  • No/little build visibility
  • Lack of knowledge
  • No definition of Individual responsibility

Build Visibility

Ideally, every relevant member of the team should know when a build fails. Better yet, the whole company should have easy access to the current build state.
Consider the following:
  1. All of the team has access to the build server by URL
  2. An email is sent to the relevant person(s) when a build fails
  3. 60-inch screen in the middle of the dev room shows the current build status
  4. When a build fails a big red light mounted in the dev room/hallway blinks
  5. When a build fails a picture of the person who broke the build shows in every screen in every conference room
I think #5 is going too far, but you get the point.

If you think that installing a build server and making the URL available to the whole company is good enough – I’ve got news for you: people are way too busy to go to that URL and try to understand what the build server is showing them. Adding an email in case of failure is also a good idea, but not sufficient – after a few of those, some (read: most) developers learn to ignore them. If you add email notifications on successful builds you’ll only make this process (of ignoring builds) happen faster.

A failing build should be visible and impossible to ignore.

At one company I worked with, some developers didn’t even know what the build URL was, let alone how to find out why the build had just failed…

Another important factor is how easy (or hard) it is to discover why the build failed. Not all build servers were created equal – some do a better job of showing the root cause of the failure, and some require reading 10 pages of logs. My point is: fixing a broken build happens when you need to be doing something else (developing software), and as such it should be as simple and painless as possible.

Missing knowledge

This is usually a problem when the build script does too many things. Let’s go back to our imaginary scenario, where the building inspector shouts about a problem in one of the build’s components – and I’m not familiar with that component, or I don’t have the right expertise to fix that particular problem. In that case I’m going to continue working as if nothing happened – or go and grab a cup of coffee until the problem resolves itself.

The problem with big build scripts that do a lot of things is that it’s hard to tell why a specific step (or 100 tests) just failed, and then everyone on the team gets a bad case of “it’s somebody else’s problem”.

After fixing the visibility problem we know the build has failed, and with some investigation we can tell why – and yet none of it matters if the problem domain is so complex that no one can understand the reason for the failure.

For example, imagine a three-hour build process that combines C++ components, C# logic exposed as web services and some JavaScript client. If that build fails, developers will have a hard time finding the reason for the failure – and with a build that big there will usually be more than one commit involved, which only makes things harder.

The right solution is to try and split the build into several individual builds, where each team (and each team member) knows exactly where their responsibility (development-wise) starts and ends.

Individual Responsibility

At the heart of a healthy process lie personal responsibility and integrity.

When a build fails, the last person to commit code is responsible for making sure that the build passes again as quickly as possible. Anyone affected by this failure is responsible for not making the problem worse by blindly committing more code, and for helping if asked. As simple as that. This kind of personal integrity can only be achieved if build failures are visible and easy to investigate. Some teams need a manager to tell them so, and some need a simple reminder from time to time. It usually helps if there is someone who is passionate about the build, although this is a team effort, not “Joe’s” problem.

I would avoid shaming (e.g. showing the build breaker’s name on all the conference-room screens) and instead try to understand why people don’t care that the build is broken. Usually it has something to do with one of the previous points, and not with lack of commitment.

Conclusion

A broken build is not a pretty sight and should be fixed as quickly as you can. The good news is that it’s easily solved with the proper tools, education and plain old nagging – as long as you take the time to understand the reasons other talented developers seem content to leave it broken.

Try it out – you might be surprised to find out that you’re not the only one who cares.

Until then – happy coding…

VS15 can add conditions to exceptions!


Wednesday, April 06, 2016

Last week at (or rather during) the //Build conference a new Visual Studio was unveiled. You can download the preview right now – I’ll wait…

Getting a new Visual Studio feels like Christmas morning (if I celebrated Christmas). There are always cool features to explore and new things to play with, an improved shade of purple for the infinity icon – good stuff!

I was browsing the VS15 release notes (not to be confused with the previous VS2015) when I saw a cool feature I’ve always wanted. In fact, I was asked for that specific feature during one of the many OzCode demos. That feature is the ability to add a condition for when (or rather where) to break on an exception.

.NET exceptions are great – they help me understand when my code misbehaves, and I use them quite a lot. The problem is catching those tricky bastards. Usually when I try to squash a bug that causes an exception, I want to break when that exception is thrown. To do so I would usually open the Exception Settings dialog and tell Visual Studio to break when a specific exception is thrown.

So, if I have a console application that uses two external libraries both with bugs, but only one I care about:

static void Main(string[] args)
{
    var mineAllMine = new MyClassLibrary.MyClass();
    var buggyCode = new OtherClassLibrary.ExternalClass();

    try
    {
        buggyCode.SomeMethod(); // throws - but it's not a bug I can fix
    }
    catch (Exception)
    {
        Console.WriteLine("Buggy code exception");
    }

    try
    {
        mineAllMine.AmazingMethod(); // throws - this is the exception I want to break on
    }
    catch (Exception)
    {
        Console.WriteLine("Performed according to spec!");
    }
}

And since I cannot fix the external class (and its buggy code), I want to break only on exceptions thrown from my own superior (and amazing) method.

Doing so with VS15 is simple: open Visual Studio’s Exception Settings, choose – or better yet, search for – the exception you want, mark it, and then press the small pencil icon to add a condition.

[Screenshot: Exception Settings – adding a condition with the pencil icon]

Right now we can only choose which module to break in (or which module not to break in).

And it works like a charm.

IMHO debugging just got better, and hopefully it will continue to improve.

 

Until then – happy coding…

A few days ago Microsoft finally announced that some of the old(er) Windows phones would soon get the new shiny OS. That left a few Windows phone owners a little disappointed. I remember that only a few months ago it was clear that all (or at least most) Windows phone devices would be updated to Windows 10, and now it seems that they will in fact keep running Windows 8.1.

As the proud owner of one Lumia 925 I feel a little cheated, but not surprised. I have used the insider builds of the latest and greatest for a few weeks, and while I enjoyed using the new OS, it was not “production ready” just yet.

This is not the first time I’ve tried the insider builds – about a year ago, when I had just got my phone, I immediately installed the newest version I could get my hands on – and then removed it after less than 4 hours (and 3 failed attempts). It was buggy, crashed all the time, and basic capabilities (a.k.a. calling other people) would not work.

This time around the new version worked better – I got most of the functionality (the call app still crashed, though), I liked what I saw, and I was ready for the next version that would squash the bugs I kept on reporting – which I now know will likely never happen.

Part of being in the insider program was that my phone kept asking me if I was happy with the current experience and whether I would recommend this particular build to a friend. I would answer that I was happy, but that a few glitches still prevented me from telling someone else to try it out. My aim was to provide feedback so that the things that bothered me the most would be addressed. Another thing I’ve learned since is that Microsoft was also using this feedback to decide which devices would get the new OS and which would be left out.

Don’t get me wrong – I think that from a business perspective this is a good path to take. The next version of Windows mobile should be released, and decreasing the scope and/or the number of supported devices is one way to do so.

There is another aspect to this decision: one thing I have cared about for all of the phones I’ve owned in the past (I tend to break them – a lot) is that no matter the OS, I wanted it to keep getting updated.

I want the latest and greatest – even with an older phone. In the past I bought a Nexus 4 because at the time it was the only Android phone that consistently got updated, and for the same reason I got my latest phone as well. As a user I hate the practice of some vendors who don’t bother updating their phones’ firmware just because it’s not economically viable – or is it?

I’ve done some work in the Windows Phone 8, then UWP (Universal Windows Platform) 8.1 and 10 space, and decisions like this make me a bit worried.

I worked for a company that made an investment and created a Windows 8 app. Before we managed to complete it, along came //Build, announcing that the way to go from now on was to write Universal apps (8.1). We made an effort and managed to convert most of the code to the new platform, and we were waiting for some crucial functionality when the next //Build came along and kindly explained that the old Universal apps (a.k.a. 8.1) were over and there was a new platform now. At that point the managers got a little worried.

It seemed that every year there was a change, and the new shiny thing we were working on was now obsolete. I guess we were lucky we hadn’t started a year earlier working on Windows 7 apps (remember those?)…

I love my phone – it works amazingly well, has good battery life and I love the user experience (tiles!), but when I’m asked about it I feel the need to apologize, because there are not a lot of applications for it. As a developer I completely understand why: companies don’t want to invest in a new platform, and by the time it seems stable enough – it gets replaced. The low market share of Windows phones doesn’t help either, but users won’t buy a phone that doesn’t have their favorite apps – do you see the vicious cycle yet?

The next //Build conference starts this Wednesday – and as a software developer and (proud) Windows phone owner I’m both excited and worried. I want to learn about the new, cool stuff, but I fear that just like last time I will also learn which technologies are about to kick the bucket.

Until then – Happy coding…

Comparing Two objects using Assert.AreEqual()


Monday, March 21, 2016

Anyone who has ever googled (binged?) about unit testing has heard of the “one assert per test” rule. The idea is that every unit test should have only one reason to fail. It’s a good rule that helps me write good, robust unit tests – but like all such rules of thumb it’s not always right (just most of the time).

If you’ve been using unit tests for some time you might have come to the conclusion that using multiple asserts is not always a bad idea – in fact, for some tests it’s the only way to go…
Consider the following class:
public class SomeClass
{
    public int MyInt { get; set; }
    public string MyString { get; set; }
}
And now imagine a test in which SomeClass is the result of the code under test – what assert would you write?
[TestMethod]
public void CompareTwoAsserts()
{
    var actual = new SomeClass { MyInt = 1, MyString = "str-1" };

    Assert.AreEqual(1, actual.MyInt);
    Assert.AreEqual("str-1", actual.MyString);
}
Using two asserts works, at least for a time. The problem is that failing the first assert causes an exception to be thrown, leaving us with no idea whether the second assert would have passed or failed.

We can solve this issue by splitting the test into two tests – one test per assert – which seems like overkill in this case: we’re not asserting two different, unrelated “things”; we’re in fact testing one SomeClass that happens to have two properties.

Ideally I would have liked to write the following test:
[TestMethod]
public void CompareTwoObjects()
{
    var actual = new SomeClass { MyInt = 1, MyString = "str-1" };
    var expected = new SomeClass { MyInt = 1, MyString = "str-1" };

    Assert.AreEqual(expected, actual);
}
Unfortunately it fails. The reason is that deep down inside, our assert has no idea what an “equal” object is, so it calls Object.Equals and throws an exception in case of failure. Since the default behavior of Equals for classes is to compare references, the result is a fail.

Due to this behavior there are many (myself included) who suggest overriding Equals to make sure that the actual values are compared – which could be a problem if our production code cannot be changed just to accommodate our tests. There are ways around this limitation – such as using a helper class (ahem) that does the heavy lifting by inheriting (or not) from the original class and adding custom Equals code.
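
For reference, here is a minimal sketch of what that Equals override might look like on SomeClass (my illustration – and exactly the kind of test-driven change to production code that isn’t always possible):

public class SomeClass
{
    public int MyInt { get; set; }
    public string MyString { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as SomeClass;
        return other != null
               && MyInt == other.MyInt
               && MyString == other.MyString;
    }

    // Equals and GetHashCode should be overridden together
    public override int GetHashCode()
    {
        return MyInt;
    }
}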
I propose another option – one that could be useful, especially when there’s a need to compare different properties in different tests.

Using Fake objects to compare real objects

In order to change the way two objects are compared in an assert, we only need to change the behavior of one of them – the expected value (which one might vary depending on the unit testing framework). And who is better at changing the behavior of objects in tests than your friendly neighborhood mocking framework?

And so, using FakeItEasy, I was able to create the following code:
[TestMethod]
public void CompareOnePropertyInTwoObjects()
{
    var actual = new SomeClass { MyInt = 1, MyString = "str-1" };
    var expected = new SomeClass { MyInt = 1, MyString = "str-1" };

    var fakeExpected = A.Fake<SomeClass>(o => o.Wrapping(expected));

    A.CallTo(() => fakeExpected.Equals(A<object>._)).ReturnsLazily(
        call =>
        {
            var other = call.GetArgument<SomeClass>(0);

            return expected.MyInt == other.MyInt;
        });

    Assert.AreEqual(fakeExpected, actual);
}
What we have here is a new fake object, a.k.a. fakeExpected, which runs custom code when its Equals method is called.

The new Equals returns true if MyInt is the same in the two objects. I’ve also created the fake using Wrapping, so that the original methods of the class are still called – I really care about ToString, which I would override to produce a meaningful assertion message.

Now all I needed to do was compare the fakeExpected with the actual result from the test.

In a similar way I’ve created an extension method that compares the properties of two classes:
// requires: using System.Collections; using System.Reflection; using FakeItEasy;
public static T ByProperties<T>(this T expected)
{
    var fakeExpected = A.Fake<T>(o => o.Wrapping(expected));

    var properties = expected.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public);

    A.CallTo(() => fakeExpected.Equals(A<object>._)).ReturnsLazily(
        call =>
        {
            var actual = call.GetArgument<object>(0);

            if (ReferenceEquals(null, actual))
                return false;
            if (ReferenceEquals(expected, actual))
                return true;
            if (actual.GetType() != expected.GetType())
                return false;

            return AreEqualByProperties(expected, actual, properties);
        });

    return fakeExpected;
}

private static bool AreEqualByProperties(object expected, object actual, PropertyInfo[] properties)
{
    foreach (var propertyInfo in properties)
    {
        var expectedValue = propertyInfo.GetValue(expected);
        var actualValue = propertyInfo.GetValue(actual);

        if (expectedValue == null || actualValue == null)
        {
            if (expectedValue != null || actualValue != null)
            {
                return false;
            }
        }
        else if (typeof (System.Collections.IList).IsAssignableFrom(propertyInfo.PropertyType))
        {
            if (!AssertListsEquals((IList) expectedValue, (IList) actualValue))
            {
                return false;
            }   
        }
        else if (!expectedValue.Equals(actualValue))
        {
            return false;
        }
    }

    return true;
}

private static bool AssertListsEquals(IList expectedValue, IList actualValue)
{
    if (expectedValue.Count != actualValue.Count)
    {
        return false;
    }

    for (int i = 0; i < expectedValue.Count; i++)
    {
        if (!Equals(expectedValue[i], actualValue[i]))
        {
            return false;
        }
    }

    return true;
}
And now I can use the following to compare my expected value with the value returned by the test:
[TestMethod]
public void CompareTwoObjectsByProperties()
{
    var actual = new SomeClass { MyInt = 1, MyString = "str-1" };
    var expected = new SomeClass { MyInt = 1, MyString = "str-1" };

    Assert.AreEqual(expected.ByProperties(), actual);
}
Simple(ish), isn’t it? I prefer this method since I no longer need to make changes to my production code (e.g. SomeClass), and I can still use a plain-vanilla unit testing framework.

What do you think?