Book review: Real-World Functional Programming


Wednesday, June 30, 2010


I’ve picked up Real-World Functional Programming because I wanted to learn F# and, despite numerous attempts on my part, I just couldn’t get it. I understood the F# syntax and I knew how to write applications using F#, but I just didn’t see the point – I always thought it was only suited for math majors, and it seemed like a lot of trouble just to achieve the same results I can get using C#.

After reading this book I know I was wrong! Now I know two things:

  1. F# can be used to solve real-world problems in my domain of work.
  2. My C# code can benefit from functional programming.

Some examples in the book are shown in C#, some in F# and many in both – which helps you learn functional programming and not just F#.

The book is divided into four parts, from the simplest tutorial (yes - “hello world”) to the most complicated graphics/animation/multi-threaded application.

Part 1 Learning to think functionally

This part of the book is an introduction to the world of functional programming. Reading it made something in my mind “click” - I finally understood what the whole fuss was about. It has mostly C# examples that look a bit weird at the beginning (e.g. using readonly (immutable) fields), but by the end of it you might start to consider how to apply these simple ideas to refactor your own code.

The basic F# “types” are introduced, namely tuples and lists. The last chapter of this part shows how to integrate new F# code with an old C# application – how to call the .NET libraries from your F# code and how to use the interactive console as a development tool.

You’ll learn about the benefits of declarative code, immutable state and recursion, to name a few. By the end of this part you’ll be able to read and write simple F# code and know a lot more about declarative programming than you did before picking up this book.

Part 2 Fundamental functional techniques

This is where the fun begins. After functional programming was explained and you were shown a few simple examples in the first part, it’s time to really learn how to use all of the functional programming goodies.

This part starts with building a simple application and improves it using more advanced language features such as discriminated unions, generic types and lambda expressions.

The last two chapters explain how to design data-centric and behavior-centric applications.

Part 3 Advanced F# programming techniques

This part shows several unique capabilities of the F# programming language, namely .NET integration. The efficiency of data structures is explained, along with tips on how to optimize F# code. There is a good chapter on refactoring and testing of functional programs – which could have been a bit more detailed in my opinion – and LINQ and monads are explained as well.

Part 4 Applied functional programming

This part is the one you’ve been looking for when you started to learn about F# and functional programming – asynchronous operations, parallel programming, composite applications and reactive programming (!) are the topics discussed. This part is all about harnessing the power of functional programming to solve complicated problems with elegant and simple solutions.



Functional programming is neither simple nor trivial, and Real-World Functional Programming does an excellent job of explaining what it’s all about.

Real-World Functional Programming teaches a paradigm using a functional language, and not the other way around. There are better books for learning all of the ins and outs of F#, but reading them without understanding functional programming would be just like reading a book on C# without understanding OOP (Object Oriented Programming).


So if you’re a software developer who’s looking to improve your abilities – even if you’re not interested in F#, heck, even if you’re not a .NET developer – read this book!

How to run your unit tests as part of TFS build


Thursday, June 24, 2010

Writing unit tests is good, having a build server that runs the unit tests on each commit/check-in is great!


In the past I’ve used TeamCity and FinalBuilder to administer my builds and run my tests; it was easy and painless and it worked. Unfortunately we cannot always decide our organization’s build strategy, and if you’re writing .NET code you might need to use Team Foundation Server (TFS).

Faced with this new challenge you have two options: either complain and try to convince everyone that it would be better to change how things work, or learn how to harness the new tool to do your bidding – I chose the latter.

Running MSTest unit tests in TFS

Running Microsoft’s testing framework tests is the easier of the two – because it’s fully supported by TFS (2008). In order to run the tests you need to open the TFS project file (TFSBuild.proj) and change the RunTest element.
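In TFSBuild.proj this amounts to setting the property to true:

```xml
<RunTest>true</RunTest>
```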


Congratulations, you’ve just told TFS to run your unit tests - now let’s tell it which tests to actually run. Find the related ItemGroup; you’ll recognize it by the remark:

If the RunTest property is set to true then the following test arguments will be used to run
tests. Tests can be run by specifying one or more test lists and/or one or more test containers.

Defining which tests to run can be done in three different ways, by adding one of the following under the ItemGroup:

1. Running test assembly/assemblies:

<TestContainer Include="$(OutDir)\MyProject1.Tests.dll" />
<TestContainer Include="$(OutDir)\MyProject2.Tests.dll" />
<TestContainer Include="$(OutDir)\MyProject3.Tests.dll" />

2. Running all assemblies in a path:
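A wildcard include covers every test assembly in the output directory (the `*.Tests.dll` naming pattern here is an assumption about your project naming):

```xml
<TestContainer Include="$(OutDir)\*.Tests.dll" />
```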


3. Running according to the .vsmdi file:

<MetaDataFile Include="$(SolutionRoot)\MyProject\MyProject.vsmdi">
  <TestList>MyTestList</TestList>
</MetaDataFile>

Note that the TestList parameter is optional – if you do not specify a test list, all of the tests in your project will run.


That’s it, your unit tests should run as part of the build and their results will be reported to-whom-it-might-concern.


Running NUnit unit tests in TFS

Running NUnit tests is a bit trickier; the problem is not running the tests (which can be done using Exec) but publishing the results to TFS.

Luckily for us this problem was already solved by the NUnit for Team Build project.

What you need to do is run the tests and then use NUnit for Team Build to convert the NUnit results XML file into an MSTest results (trx) file and publish it to the TFS server.

Another handy component you want to have (although not a must) is MSBuild.Community.Tasks, with its NUnit task that, believe it or not, runs NUnit tests.


If you cannot install both tools on your build server, or you prefer a self-contained build process, you can copy both locally and add the following at the beginning of the .proj file:

<PropertyGroup>
  <MSBuildCommunityTasksPath>$(SolutionRoot)\MyBuildTools</MSBuildCommunityTasksPath>
</PropertyGroup>
<Import Project="$(MSBuildCommunityTasksPath)\MSBuild.Community.Tasks.targets" />

Instead of \MyBuildTools just put the path to where you’ve installed the community tasks.

The following was more or less copied from the example that came with NUnit for Team Build:

<Target Name="AfterCompile">
  <Message Text="Running NUnit tests with custom task" />

  <!-- Create a Custom Build Step -->
  <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="NUnitTestStep" Message="Running NUnit Tests">
    <Output TaskParameter="Id" PropertyName="NUnitStepId" />
  </BuildStep>

  <ItemGroup>
    <TestAssemblies Include="$(OutDir)\MyProject.Tests.NUnit.dll" />
  </ItemGroup>

  <!-- Run NUnit and check the result -->
  <NUnit ContinueOnError="true" Assemblies="@(TestAssemblies)" OutputXmlFile="$(OutDir)nunit_results.xml">
    <Output TaskParameter="ExitCode" PropertyName="NUnitResult" />
  </NUnit>

  <BuildStep Condition="'$(NUnitResult)'=='0'" TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(NUnitStepId)" Status="Succeeded" />
  <BuildStep Condition="'$(NUnitResult)'!='0'" TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(NUnitStepId)" Status="Failed" />

  <!-- Regardless of NUnit success/failure merge the results into the build -->
  <Exec Command="&quot;$(NUnitTFSDirectory)\NUnitTFS.exe&quot; -n &quot;$(OutDir)nunit_results.xml&quot; -t &quot;$(TeamProject)&quot; -b &quot;$(BuildNumber)&quot; -f &quot;%(ConfigurationToBuild.FlavorToBuild)&quot; -p &quot;%(ConfigurationToBuild.PlatformToBuild)&quot; -x &quot;$(NUnitTFSDirectory)\NUnitToMSTest.xslt&quot;" />

  <!-- If NUnit failed it's time to error out -->
  <Error Condition="'$(NUnitResult)'!='0'" Text="Unit Tests Failed" />
</Target>

And that’s it… hopefully.


I’ve just received a new T510 two days ago, and started installing my favorite applications on it immediately. I’ve noticed that there are some applications that I could afford not to install on the new machine while there were others I could not do without.

The list below was compiled for the benefit of two parties – you the reader and myself, because I want to keep track of the applications I use for the next time I need to install a new dev/home machine.

General Productivity

  1. Dropbox – One sync to rule them all! I use Dropbox to synchronize files between my computers (duh). My EBooks, reference projects, presentations and of course my Keepass data file (see below).
  2. Evernote – for all my note taking needs. I use it to save PDFs and code snippets as well as interesting blog posts and lists
  3. Everything Search Engine – this free, fast and (very) small utility helps me find the files on my machine. After a quick indexing session it was able to find any file I looked for. Unlike many desktop search utilities it does not bother me with annoying dialogs, nor does it degrade performance.
  4. Keepass password safe - it seems that nowadays every site I use requires me to create a user name. I have quite a few passwords and I want to have them on my machine just in case I need to log in to a site I last used a year ago. I keep all of my passwords and serial numbers in Keepass and I use Dropbox to sync the data file between all of my machines. Now I only need to remember a single password.
  5. Mozilla Firefox – Currently my browser of choice - because IE is just not good enough.
  6. Live Writer – for writing blog posts, just like this one.

Development Environment

  1. Visual Studio – I’m a C++/.NET developer and this is my main tool of work. Right now I use VS2008 for most of my development needs and I’m constantly moving my projects to VS2010. I’ve installed F#, WIX and AXUM templates for my pet/hobby projects and research.
  2. Expresso Regular Expression Tool – if you’ve ever needed to write (or read) a RegEx you know you might need some help. There are other good tools for RegEx authoring, such as Regulator and Regulazy, but I prefer Expresso simply because it works the way I need it to.
  3. JetBrains Resharper – the ultimate Visual Studio add-in. It enhances my productivity and improves my Visual Studio experience.
  4. NDepend – this tool floods you with information about your project and its health. Using it you will probably learn a few things about your project that you weren’t aware of.
  5. NUnit – the unit testing framework I use for projects where I don’t use MSTest.
  6. Sysinternals Suite – No development machine should go without it. It’s not a development tool per se but I find I use Process Explorer, File Monitor and PSTools quite a lot during software development.
  7. Tortoise SVN – most of my pet projects are stored in repositories that have Subversion support. If I want to view the source of a project on Assembla, CodePlex or a similar site, I know I can do it using Tortoise SVN.
  8. Typemock Isolator – do I really need to explain? The best Isolation framework I know from the inside out.

That’s my list, it’s not complete by any means and I plan to keep updating it from time to time.

Now it’s your turn - What applications do you install on a new machine?



When to use the SetUp attribute


Monday, June 07, 2010

I want to share with you a debate we had at work today:

Which one of the following is a better test – this one?

public class MyClassTests
{
    private MyClass myClass;

    [SetUp]
    public void Initialize()
    {
        var arg1 = //...
        var arg2 = //...
        // More initialization logic
        myClass = new MyClass(arg1, arg2);
    }

    [Test]
    public void MyTest()
    {
        // Assert something on my class
    }
}

Or perhaps this test is better?

public class MyClassTests
{
    [Test]
    public void MyTest()
    {
        var arg1 = //...
        var arg2 = //...
        // More initialization logic
        var myClass = new MyClass(arg1, arg2);

        // Assert something on my class
    }
}

Don’t worry, I’m not trying to make you find all of the differences between the two tests – the main difference is that the first test uses SetUp to initialize the class under test while the second test initializes the class as part of the test – that’s it.

So which is better? I’d go for the second test – the one without a setup – in a heartbeat!

This may seem strange at first – mainly because the test that uses a separate method to set up the object used throughout all of the tests seems to be “more correct”. You may argue that without a setup method my tests are bound to repeat the same initialization logic over and over again – and we all know that “writing the same code twice is writing it once too many”, as my university professor used to say.

Well I prefer to write the same code twice as long as it makes my test more readable. I want to be able to look at a failing test and understand why it failed simply by reading it – without traversing the code I’m testing and without debugging it - I want to be able to know what went wrong in a single glance.

Although it seems that the example above is clear enough with or without a setup method, consider what will happen if (or shall I say when) we have 10 or 20 or 100 tests in the same file using the same setup method. We’ll probably see one or both of the following outcomes:

  1. We won’t know/remember we have a setup method and what it does – and even if we do, it can become very tiresome very fast if we need to re-read that method in order to understand what every single test in that file does.

  2. Some of the tests may need the class under test to be initialized differently, either with different arguments, or we may need to create certain fake objects and pass them in a specific way in order for test X to pass.

But what about refactoring? If I make the slightest change to the class I’m testing I need to go over many tests and re-write them. What would happen if I add another parameter to the class’s c’tor or change an existing one?

Well there is a simple solution to that problem that will not hinder the test’s readability:

public class MyClassTests
{
    private MyClass CreateMyClass()
    {
        var arg1 = //...
        var arg2 = //...
        // More initialization logic
        return new MyClass(arg1, arg2);
    }

    [Test]
    public void MyTest()
    {
        var myClass = CreateMyClass();

        // Assert something on my class
    }
}

That’s right – by using a method that creates and initializes the class, we make sure that you only need to change a single method and not every test in your suite. When we need to see the creation logic we can do so by navigating to the Create method – and now there is no chance you’ll forget it exists, like you might have with the SetUp method.

And the added benefit is that, unlike the testing framework’s built-in setup, this method can receive arguments – so it can really be re-used, and in case completely different logic is needed, just write another function and use it instead.


In fact, I only use setup/teardown methods to handle environment creation/destruction in my integration tests. If I need a specific file to be copied and I don’t care how it got there, or if I need to set up a database or a registry value – that’s when I use the SetUp attribute.

Poor C++ developer’s performance profiler


Thursday, June 03, 2010

A while back I wrote a post about how I used Stopwatch to profile .NET applications. This post is similar: it shows how “micro-benchmarking” can be done in C++ and, more importantly, how we can create a “using block” in C++.



Igal T. has written an excellent post on how to analyze the results using Excel.

The unmanaged Stopwatch

Benchmarking in C++ can be done using the following code (courtesy of Igal T):
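The listing itself didn’t survive in this copy of the post, so here is a sketch based on the description below. Note that std::chrono is used here as a portable stand-in for the QueryPerformanceCounter/QueryPerformanceFrequency calls the text describes, and the names are assumptions:

```cpp
#include <chrono>
#include <cstdio>
#include <stack>

// Sketch of the performance counter described below: two static
// methods (Start and End) and a stack of start times. The original
// stored LARGE_INTEGER values from QueryPerformanceCounter.
class Performancer
{
public:
    static void Start()
    {
        // Capture the current time and shove it onto the stack
        counters.push(std::chrono::steady_clock::now());
    }

    static double End(const char* name)
    {
        // Capture the end time, pop the matching start time
        const auto end = std::chrono::steady_clock::now();
        const auto start = counters.top();
        counters.pop();

        // Calculate the seconds that passed and print them
        const double seconds =
            std::chrono::duration<double>(end - start).count();
        std::printf("%s took %f seconds\n", name, seconds);
        return seconds;
    }

private:
    static std::stack<std::chrono::steady_clock::time_point> counters;
};

std::stack<std::chrono::steady_clock::time_point> Performancer::counters;
```

Because the start times live on a stack, Start/End pairs can be nested and each End matches the most recent Start.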


The performance counter is very simple –

  • It has two static methods – Start and End
  • And a stack of LARGE_INTEGER values


The implementation is also very simple – as long as you know your WinAPI:

When Start is called:

  1. Get the current performance count
  2. Push the value onto the stack

When End is called:

  1. Get the current performance count
  2. Get the frequency – this can be done only once
  3. Pop the topmost value from the counter stack
  4. Calculate the seconds that passed between the call to Start and now
  5. Mix and print

Note: the only problem with this approach is that you must make sure your code runs on the same processor throughout the application’s execution – otherwise you might get very weird results. So either run your benchmark on a single-processor machine or set the processor affinity, either in code or from Task Manager.

What about “Using”

The Performancer class is simple to use and it supports nested calls to Start, but there is still something missing: in the .NET example I used IDisposable and using to make my life easier by “automatically” calling the End method. C++ does not have IDisposable but it does have destructors that can be used in the same way.

First let’s update the Performancer class:
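The updated listing is also missing here; a sketch of the stack-less version (again with std::chrono standing in for the WinAPI counter calls, names assumed) might look like this:

```cpp
#include <chrono>
#include <cstdio>

// Stack-less Performancer sketch: one instance per measurement,
// so a single start time replaces the stack.
class Performancer
{
public:
    void Start()
    {
        m_start = std::chrono::steady_clock::now();
    }

    double End(const char* name)
    {
        const double seconds = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - m_start).count();
        std::printf("%s took %f seconds\n", name, seconds);
        return seconds;
    }

private:
    std::chrono::steady_clock::time_point m_start;
};
```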

Because we only need one of these per call – I’ve removed the stack.

Now for the interesting part – I present before you my interpretation of Stopwatch:
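The StopWatch listing didn’t survive either; based on the description below, a sketch could look like this (the stack-less Performancer is repeated so the snippet is self-contained, and std::chrono again stands in for the WinAPI calls):

```cpp
#include <chrono>
#include <cstdio>

// Minimal stand-in for the stack-less Performancer from above.
class Performancer
{
public:
    void Start() { m_start = std::chrono::steady_clock::now(); }
    double End(const char* name)
    {
        const double seconds = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - m_start).count();
        std::printf("%s took %f seconds\n", name, seconds);
        return seconds;
    }
private:
    std::chrono::steady_clock::time_point m_start;
};

// Starts timing on creation, stops and prints when it goes out of scope.
class StopWatch
{
public:
    // The d'tor calls End - this is our "using block"
    ~StopWatch()
    {
        m_performancer.End(m_methodName);
    }

    // A static Create method that makes everything "tick"
    static StopWatch Create(const char* methodName)
    {
        return StopWatch(methodName);
    }

private:
    // A private c'tor that receives a method name and calls Start
    explicit StopWatch(const char* methodName)
        : m_methodName(methodName)
    {
        m_performancer.Start();
    }

    Performancer m_performancer;  // old trusty Performancer
    const char* m_methodName;     // the method's name
};
```

Returning the StopWatch by value from Create relies on copy elision, so only one instance is created and the destructor fires exactly once when it leaves scope.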

For simplicity’s sake I’ve written the implementation in the header file – please don’t do that in any C++ project I might need to work on ;)

So what do we have here:

  1. Two fields - an instance of old trusty Performancer and the method’s name
  2. A private c’tor that receives a method name and calls Start
  3. A d’tor that calls End
  4. A static Create method that makes everything “tick”

How do I use it?

Just call StopWatch::Create to start counting; the counter stops working when you go out of scope and the destructor gets called. It also means that you need to store the StopWatch instance in some variable, like so:

StopWatch sw = StopWatch::Create("MyMethod");
Happy coding…

