In my not-so-distant past I needed to write a component which would control an air conditioning unit.
My application needed to send commands to the external device (on, off, set temperature) and read information back from it (room temperature, humidity, etc.).
And so I came up with the following design:
It’s a bit simplistic, but the idea was that the client would be responsible for communication with the device (protocols, timeouts) while the manager would handle the “business logic” end of things (if temp > 10°C then turn on the cooling unit).
Commands from the manager would get translated into an unholy stream of bytes and sent to the device (and re-sent – numerous times), while data arriving from the device would be converted to strongly typed C# classes and passed to the manager, where additional actions could be taken. Some of that data would end up in the application’s data model, according to the business rules inside the manager.
Obviously I developed the client and manager using TDD. While writing my tests (and later my classes) I kept a clear division between the manager and the client – each had its own set of tests, and each had clear inputs and outputs (or so I thought).
A week later a new requirement came along which caused a change in my client, and so I needed to re-write both the client’s tests and the manager’s tests. A few days later a bug was found – one that my tests should have caught but missed, since it was the collaboration between the client and the manager that caused it. After a while I noticed that I was doing something wrong – my tests would break whenever I changed one of the methods between my client and manager. And so I decided to test my manager and client as one “unit”, using fake objects only to fake the device out of my tests. And it worked! I managed to reduce the number of tests while covering more functionality – and there was much rejoicing.
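To make the shape of that “one unit” test concrete, here is a minimal sketch – in Python for brevity (the original code was C#), with every class and method name invented for illustration. Only the device is faked; the real client and manager are exercised together:

```python
class FakeDevice:
    """Stands in for the real air-conditioning unit - the only fake in the test."""

    def __init__(self):
        self.commands = []                 # bytes the code under test sent us
        self.next_reading = b"12.0,55"     # canned "temperature,humidity" reply

    def write(self, payload):
        self.commands.append(payload)

    def read(self):
        return self.next_reading


class Client:
    """Handles the wire protocol: encoding commands, decoding readings."""

    def __init__(self, device):
        self._device = device

    def set_cooling(self, on):
        self._device.write(b"cooling:on" if on else b"cooling:off")

    def read_temperature(self):
        temp, _humidity = self._device.read().decode().split(",")
        return float(temp)


class Manager:
    """Holds the business rule; knows nothing about bytes or timeouts."""

    def __init__(self, client, threshold=10.0):
        self._client = client
        self._threshold = threshold

    def check(self):
        if self._client.read_temperature() > self._threshold:
            self._client.set_cooling(True)


def test_turns_cooling_on_when_too_warm():
    device = FakeDevice()                # fake the device only
    Manager(Client(device)).check()      # client + manager tested as one unit
    assert device.commands == [b"cooling:on"]
```

The point of the shape is that the assertion is about observable behavior at the device boundary, so renaming or reshaping the methods between client and manager no longer breaks the test.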
That is, until one day a co-worker of mine saw my tests – and told me that “those are not unit tests, since you’re testing more than a single class”. It made me think about the actual definition of unit tests.
Maybe we should not call them unit tests anymore?
It all came back to me yesterday when I read a post by Simon Brown: “Unit and integration are ambiguous names for tests”. In his post Simon explains that there is no clear definition of unit/integration tests (which is dead on the money) and comes up with his own definitions, which are clear (or at least clearer) and also help in understanding which automatic tests a developer should write for each scope of the project – nicely aligned with the architecture.
Simon’s definitions are great. They help align the developer’s tests with the project architecture – something most of the projects I’ve witnessed lack.
But there’s a small problem with splitting the tests into class-, component- and system-tests. While it sets clear rules about their scope, what to fake, and test size and speed – it does not help avoid testing the wrong thing…
I think that “class tests” falls short of explaining what we want to test – namely the behavior of the code, not the actual implementation. I fear that calling a test a “class test” would result in developers writing unit tests for every single class (and every single method). Those over-specified tests tend to test trivial functionality while being extremely fragile.
Is there a definition for unit tests?
Since I needed an actual definition I went to the definitive unit testing encyclopedia – xUnit Test Patterns – and got the following definition for my troubles:
A test that verifies the behavior of some small part of the overall system. What makes a test a unit test is that the system under test (SUT) is a very small subset of the overall system and may be unrecognizable to someone who is not involved in building the software. The actual SUT may be as small as a single object or method that is a consequence of one or more design decisions although its behavior may also be traced back to some aspect of the functional requirements. There is no need for unit tests to be readable, recognizable or verifiable by the customer or business domain expert. Contrast this with a customer test which is derived almost entirely from the requirements and which should be verifiable by the customer. In eXtreme Programming, unit tests are also called developer tests or programmer tests.
So… only special people (those involved in building the software) can know what the subject under test is?
[From: xUnit Test Patterns – Unit test]
It does state that a unit test can test a single class – or more…
There’s an excellent blog post by Martin Fowler – titled simply: UnitTest – in which he tries to define unit tests, but it seems that the only definite rule there is that unit tests should be fast. How fast? It depends!
What I use
I don’t have a clear answer here, but I seem to keep coming back to three different types of developer tests:
Unit tests
Test a single unit of work (one class or more), and employ fake objects to keep external dependencies (DB, server, 3rd party code and other classes) out of the test. They are fast – usually running in a fraction of a second – and they are relatively short (tens of lines of code) and simple.
Unit tests are independent of one another, and each run, on any machine, should produce the exact same result.
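As an illustration of that determinism (a Python sketch with invented names, not code from the post): faking a clock dependency makes the test produce the same result on any machine, at any time of day.

```python
import datetime


class Greeter:
    """Picks a greeting based on an injected clock (a callable returning a datetime)."""

    def __init__(self, clock):
        self._clock = clock

    def greeting(self):
        return "Good morning" if self._clock().hour < 12 else "Good evening"


# The fake clock removes the test's dependency on the machine's real time,
# so every run, everywhere, yields the exact same result.
def test_greets_in_the_morning():
    fixed = lambda: datetime.datetime(2013, 1, 1, 9, 0)
    assert Greeter(fixed).greeting() == "Good morning"


def test_greets_in_the_evening():
    fixed = lambda: datetime.datetime(2013, 1, 1, 21, 0)
    assert Greeter(fixed).greeting() == "Good evening"
```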
Integration tests
They are similar to my unit tests in look, and are used to test interaction with external dependencies. I usually use integration tests when I need to test logic inside an external dependency (think stored procedures) or the interaction between my code and that external dependency.
They usually require setup (and cleanup) between test runs, and a specific environment in place in order to pass. Fake objects can still be used to avoid running other dependencies. They are slower than unit tests (seconds to minutes) and might fail for reasons outside the scope of the test (permissions, server down etc.)
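A sketch of that setup/cleanup shape, using an in-memory SQLite database purely as a stand-in for a real external dependency (nothing below comes from the original post; names are invented):

```python
import sqlite3


def add_reading(conn, temperature):
    """Code under test: persists a temperature reading."""
    conn.execute("INSERT INTO readings (temperature) VALUES (?)", (temperature,))
    conn.commit()


def max_temperature(conn):
    """Code under test: a query whose logic lives inside the dependency."""
    return conn.execute("SELECT MAX(temperature) FROM readings").fetchone()[0]


def test_max_temperature():
    # Setup: the test needs a real schema in place in order to pass.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (temperature REAL)")
    try:
        add_reading(conn, 21.5)
        add_reading(conn, 30.0)
        assert max_temperature(conn) == 30.0
    finally:
        # Cleanup: leave no state behind for the next test run.
        conn.close()
```

A real integration test would point at an actual database server, which is exactly why it can fail for environmental reasons the test itself does not control.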
Acceptance tests
Those are application-, system-wide or scenario-level tests that make sure business requirements are implemented according to spec.
They are slow, they cross component boundaries, and they usually do not employ fake objects of any kind.
What I like about Simon’s methodology is that it’s easy to explain and sets very good ground rules for the kinds of tests a good project should have. I still think that the definition of class tests does not change the original definition (or lack of a definition) of unit tests – while it’s about focusing on the operation of a single class, it does not mean that other classes are not exercised as well.
It’s hard to define what a “good” or “correct” unit test is. I’ve noticed that there is no single rule that fits every project out there. And so I’ll keep on using my flawed yet accurate definition of “unit tests” – until I find a better one.
And until then – Happy coding…
Labels: Thoughts, Unit tests