Code ownership in Agile teams

Friday, July 23, 2010

I have been practicing Agile and Scrum for the better part of my career, and I have seen it done in many ways – and abused from time to time. Although there are clear guidelines on how Scrum should be done, every company does it differently.

Many Scrum practitioners believe that “all team members are created equal”, and that it therefore benefits the project if every single developer can take on any task. This approach has many benefits:

  • No single developer can become a bottleneck.
  • The project’s tasks can be distributed among the team optimally.
  • If one of the developers is sick or on vacation, it does not mean that some feature won’t be implemented on time.

Making sure that every single developer on the team has a good understanding of the project is a challenge – one that can be addressed by pair programming and code reviews. But even with the best of intentions, it’s impossible to guarantee that all of the developers understand all of the code all of the time.

The practice described above has one important implication – the team owns the code, meaning that no single developer owns it. A developer is responsible for a task only until it is finished; afterwards it becomes part of the “team’s responsibility”.

Most of the time this is not an issue – but what if the problem you’re trying to solve isn’t contained within the development team’s realm of control?

Let’s say the team needs to add a dialog that opens a web page on the company site, so that users can submit feedback on the new product. One of the team members implements the required functionality (i.e. code that opens a specific URL), sends an email to the person responsible for the website, and moves the task to done.

As far as the developer is concerned, he has finished the task. Unfortunately, no one checks with the web admin to make sure that the web page has actually been added and that the data is being stored according to spec – big problem.

I’ve seen tasks that require integration with other teams or with a contractor fall between the cracks because of this lack of ownership.


On the other hand, what happens if each developer on your team owns a part of the team’s code? In that case you know that the developer responsible for the feature will make sure the integration actually works. You lose the ability to swap tasks between team members, but you gain a sense of responsibility that is hard to create artificially when “everybody” (read: nobody) owns the feature.

Code ownership doesn’t come free – when someone “owns” code, it means he is the only person who can change it. But there are ways to make sure that no developer becomes the project’s bottleneck – namely code reviews and pair programming; the only difference is that there is a clear owner of the code.

The 2nd ALT.NET tool night

Thursday, July 15, 2010

On Monday I attended the 2nd ALT.NET tool night (IL) – and it was a blast! There were 20-30 of us .NET developers who decided to take the time and learn about new tools. 

And so I came, I saw, I ate pizza, and in between I got to talk with some really talented folks.

The following tools were demoed:

Test Lint

TestLint is a new addition to Typemock’s unit testing tools. It parses your code and finds common unit testing problems, such as tests without any assertions, logic inside your unit tests, and so on.

TestLint has a free Visual Studio (currently 2010) add-in; a commercial command-line version (that you can run on your build server) currently costs $199.


CodeRush

I’m a Resharper person myself, but nevertheless I was amazed at how many productivity features CodeRush has. In the right hands you can make this tool “go to 11”. The audience enjoyed the colored markers, arrows, and jumping code that accompanied every single refactoring.



It costs about $249 – there is also a free version and a 60-day trial you can try.


nDepend

I gave a short overview of nDepend. I used NUnit as the target project and let nDepend analyze it. I showed the many metrics it offers and how CQL can be used to create new rules. For more details, have a look at the review I wrote.



Pricing details are available on the nDepend site. I suggest you download the fully functional evaluation and see what it says about your project – you might be surprised.

Process Explorer

If you don’t know about Sysinternals and Process Explorer – shame on you. I can’t count the times this tool has helped me. Ariel did a good job of showing the ins and outs of Process Explorer, and even managed to explain most of the data it shows.





IronRuby

Although IronRuby is not a tool per se, Shay made a good case for how it can help us static-language developers with everyday tasks:

  • Using the console (IR.exe) to investigate new classes, e.g. System::IO::Path.methods – Class.methods
  • Build administration – NAnt is a pain; why not administer your build with code, using Rake?
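
To make the Rake suggestion concrete, here is a minimal sketch of what such a Rakefile-style build script could look like. The task names and the commented-out build commands are hypothetical examples of mine, not something shown at the talk:

require 'rake'
include Rake::DSL

# Hypothetical build tasks - plain Ruby code instead of NAnt's XML.
task :clean do
  rm_rf 'build'            # FileUtils helpers come with Rake's DSL
end

task :compile => :clean do
  mkdir_p 'build'
  # here you would shell out to the compiler, e.g. sh 'msbuild MySolution.sln'
end

# 'rake test' first runs :clean and :compile, then the tests.
task :test => :compile do
  # sh 'nunit-console MyTests.dll'
end

Because tasks are ordinary Ruby code, loops, conditions, and helper methods come for free – exactly what XML-based build files make painful.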


Price: absolutely nothing.


Testify

A tool for the first time you have to start a new project: all you have to do is pick a project type, choose a name, and press Generate.

Testify creates the tree structure, along with a few well-known open source tools.

You get an installer, a build script, and 30 unit tests (what??).

But that’s not all! Just in case you’re the type who gets his kicks from running stuff from the command line, you also get a bunch of batch files for repetitive tasks such as opening the NUnit GUI.

But seriously – this tool addresses a real need: when you’re starting a new project and not sure what to put where, or how to create the initial build script or installer, give it a try – and if you don’t like what you get, customize it!


Free – open source

So far so good

It was a good meet-up and I hope not the last.

If you’re a .NET developer who wants to learn more about tools and practices – join us at ALT.NET, either through the Facebook page or by signing up for email notifications and discussions at the Google group (or both).

When the going gets tough – automate it!

Tuesday, July 06, 2010

[Image: Corolla Assembly Line, Fremont CA 2000, by CanadaGood]

Let me tell you a story about my first job: a long time ago I was hired for my very first software development job. I worked with some really talented people on cutting-edge technology, and it was fun.

After a few weeks at work I noticed something odd – although I was hired to write code, I spent more than 50% of my time doing other things: administering development machines, creating test environments, building installers, and such.

Being a junior developer, I accepted this as a necessary evil that had to be endured in order to produce value for the organization…

Fast forward a few years: I’m no longer a junior developer, and the overhead is still the same – but something has changed, and I no longer spend my valuable time doing repetitive tasks.


The change didn’t happen overnight. I wasn’t even aware that I had a problem until, while complaining about the hassle of releasing a new product and how long it took, my team lead asked me why I didn’t automate the process.

A few words about the release process: it was well documented and consisted of multiple stages – from building multiple installers based on the product offering to uploading the results to our site using FTP.

Automating this process seemed like a lot of work, most of it pretty trivial when done by hand – it looked like a bad tradeoff to spend two days creating an automatic release process instead of two to four hours releasing the product manually. To my amazement, my team lead insisted I drop what I was doing and do exactly that. After two days I had a release process that ran at the click of a button – fire-and-forget style. Since then I’ve never needed to perform the manual release tasks again – and looking back, I know today that the manual release was an error-infested waste of time.

I learned my lesson – nowadays, when I find myself frustrated with a task, I immediately look for a way to automate it. Not only are manual steps a waste of my time, they also need to be documented, while an automated script is self-documenting.


One example of a process that must be automated is continuous integration. It always amazes me when a development team fails to automate its CI process – instead of trusting all of the developers on all of the teams to remember to run all of the unit tests every time, have a server that gets the latest version of the code and runs all of the tests for you. It’s still a good practice to run the tests on your machine before commit/check-in, but we’re human, and so we err.
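
The core of that “get the latest code and run the tests” loop is simple enough to sketch in a few lines. This is a toy illustration of mine, not a real CI server – the commands passed in stand for things like “svn update”, “msbuild”, and “nunit-console”, and a real product (CruiseControl.NET, TeamCity, etc.) adds triggers, queuing, and notifications on top:

# Minimal sketch of one CI cycle: update, build, test, report.
# Each argument is a placeholder shell command; `system` returns
# true when the command exits successfully.
def ci_cycle(update_cmd, build_cmd, test_cmd)
  return :update_failed unless system(update_cmd)
  return :build_failed  unless system(build_cmd)
  system(test_cmd) ? :passed : :tests_failed
end

A server would run this cycle on every check-in and email the team on any result other than :passed.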


My advice to you: the next time you feel you’re wasting your time on a boring, repetitive task, just ask yourself how you can automate it…

I was never a big fan of “coding standards” – although I always thought that the same style should be kept throughout a project, or even an entire company’s code base, the idea of forcing developers to write the same code based on a document nobody ever reads seemed just wrong.

Fast forward a few years, and suddenly I’m responsible for making sure my team’s code is written according to the company’s coding standard.

At first I thought it shouldn’t be too hard – everybody knows the coding standard – and boy, was I wrong. The coding standard document had been copied from a previous document, and even the developers who did read it couldn’t remember all of its 20+ pages of rules and ideas.

It was clear that I needed help – preferably in the form of a tool that would process my team’s code. After looking around a bit, I found that tool – StyleCop.

What is StyleCop

StyleCop is a free static code analysis tool from Microsoft that checks C# code for conformance to StyleCop's recommended coding styles and a subset of Microsoft's .NET Framework Design Guidelines. StyleCop analyzes the source code, allowing it to enforce a different set of rules from FxCop. The rules are classified into the following categories:

  • Documentation
  • Layout
  • Maintainability
  • Naming
  • Ordering
  • Readability
  • Spacing

StyleCop includes both GUI and command line versions of the tool. It is possible to create new rules to be used.

StyleCop was re-released as an open source project in April 2010.

[From Wikipedia]

On the process of implementing custom StyleCop rules

Back to the problem at hand – although my company has a coding standard, it’s not exactly the same as Microsoft’s, so I needed to develop some custom rules. Luckily, this topic had already been covered by my fellow bloggers.

A custom rule would look something like this:

public class NamingRules : SourceAnalyzer
{
    public override void AnalyzeDocument(CodeDocument document)
    {
        var csdocument = (CsDocument)document;

        if (csdocument.RootElement != null && !csdocument.RootElement.Generated)
        {
            csdocument.WalkDocument(VisitElement, null, null);
        }
    }

    private bool VisitElement(CsElement element, CsElement parentElement, object context)
    {
        if (element.Generated)
        {
            return true;
        }

        if (element.ElementType == ElementType.Class && !(element.Parent is Class))
        {
            var csClass = (Class)element;

            var fileName = csClass.Document.SourceCode.Name;
            var fileNameWithoutExtension = string.Format("class {0}", Path.GetFileNameWithoutExtension(fileName));
            var className = csClass.GetShortName();

            if (fileNameWithoutExtension.Equals(className) == false)
            {
                AddViolation(element, element.LineNumber, "FileNameMustMatchClassName");
            }
        }

        return true;
    }
}
In case you were wondering, the code above checks that a class resides in a file with the same name.


But there is a problem with writing style rules – they look trivial at first, but tend to accumulate corner cases as you progress. My solution was to find a way to test the custom rules I had written, so that I wouldn’t accidentally break them during my work.

The added value of using unit tests is that I no longer needed to test my new rules manually – a process that consists of the following steps:

  1. Implement a new style rule

  2. Compile the custom rule assembly

  3. Copy the assembly to StyleCop’s folder

  4. Open a new instance of Visual Studio

  5. Write code to test the new rule

  6. Run the new rule

  7. More often than not – find a bug, close Visual Studio, and go back to step #1

Instead I got the following:

  1. Write failing test

  2. Run test

  3. Implement a new style rule

  4. Run the test again

  5. If the test still fails, go back to step #1

Better, Simpler, Faster

Writing unit tests for my custom rules

I wanted to be able to parse an actual file and analyze it using StyleCop and my new rules. Using some Reflector magic I discovered how StyleCop works, and I was able to write the following “helper” method:

   1:  public static CodeDocument ParseDocument(string codeFileName, string projectFileName)
   2:  {
   3:     var parser = new CsParser();
   4:     var configuration = new Configuration(null);
   5:     var project = new CodeProject(projectFileName.GetHashCode(), projectFileName, configuration);
   6:     var sourceCode = new CodeFile(codeFileName, project, parser);
   7:
   8:     CodeDocument document = new DummyCodeDocument(sourceCode);
   9:
  10:     parser.ParseFile(sourceCode, 0, ref document);
  11:
  12:     return document;
  13:  }

The method receives a file name and a project name and creates StyleCop’s representation of that file.


  • Lines 3–6: Creation of StyleCop’s types I needed to parse the code file.

  • Line 8: Due to a design fluke I needed a CodeDocument to pass to the parser. Unfortunately, CodeDocument is an abstract class, so I simply inherited from it in a dummy class I created. There is no need to implement anything, because the parser replaces this instance anyway (note the ref parameter).

  • Line 10: Parse the source file – and that’s it.

Armed with a method that enables me to parse code files, I was almost able to test my new rule – I still needed to fake the call to AddViolation and verify that it got called, and for that I used Typemock Isolator:

   1:  [TestMethod]
   2:  [DeploymentItem(@"..\..\..\.StyleCop.Rules.Tests.TestClasses\FileNameMustMatchClassNameRule.cs")]
   3:  public void AnalyzeDocument_FileNameDoesNotMatchClassNameRule_AddViolationCalledWithCorrectRule()
   4:  {
   5:     const string projectFileName = @"StyleCop.Rules.Tests.TestClasses.csproj";
   6:     const string codeFileName = @"FileNameMustMatchClassNameRule.cs";
   7:
   8:     var document = TestHelpers.ParseDocument(codeFileName, projectFileName);
   9:
  10:     var namingRules = new NamingRules();
  11:
  12:     Isolate.WhenCalled(() => namingRules.AddViolation(null, 0, string.Empty)).IgnoreCall();
  13:
  14:     namingRules.AnalyzeDocument(document);
  15:
  16:     Isolate.Verify.WasCalledWithArguments(() => namingRules.AddViolation(null, 0, string.Empty))
  17:        .Matching(objects => {
  18:           var ruleName = objects[2].ToString();
  19:           return ruleName.Equals("FileNameMustMatchClassNameRule");
  20:        });
  21:  }


  • Line 2: I created a new project that contains my test classes, and used MSTest’s DeploymentItem to make sure the file I want is copied to the place where the tests run.

  • Lines 5–6: I don’t really need to explain that one – right?

  • Line 8: Invoke the helper method.

  • Line 10: Create a new instance of the class that holds the custom rules.

  • Line 12: Use Isolator to make sure AddViolation does not actually get called. Calling the real thing would have required initializing a lot more of StyleCop’s environment; instead I used this one simple line.

  • Line 14: Call the method under test.

  • Lines 16–19: Use Isolator to verify that AddViolation was called and that the correct rule name was passed.


That’s enough code for now. Using this method I was able to write tests for all of the custom rules I implemented.

As for my opinion about the need for an official coding style – it has changed. After fixing a few (thousand) lines, the code actually looks better, and more importantly, it’s more readable.
