Dennis Burton's Develop Using .NET

Change is optional. Survival is not required.

How many times have you attended, or given, a presentation where the speaker threw out the phrase: “The UI looks horrible, I am not a front end person.” If you don’t want to be that guy, this book is a good start for you! Web Design for Developers by Brian P. Hogan helps someone who has had to utter that phrase understand the fundamentals of design. The book has an easy-to-read quality that allowed even a slow reader like me to finish it on a flight from Detroit to Seattle.

Part 1 – The Basics of Design

Developers often perceive the design side of things as an artistic rather than tactical endeavor. This section shows that there are rules and principles to design that, if followed, lead to decent looking results. Exposure to this material alone would make most of the prototypes and presentations I have seen more effective. The initial chapter covered the purpose of the site that would be designed over the course of the book, presenting compelling arguments for pencil-and-paper sketching along the way. If you have ever been in one of those early design reviews where the client is focused on the color of the buttons instead of the flow of the application, you know exactly where the author is coming from.

That brings us to the topic of color. There was so much great information in here, starting with the fundamentals of using the color wheel as well as the differences in choosing color for web vs. print media. One thing I learned from this chapter was the concept of taking a picture of something in nature and using it as the basis for a color scheme. I now find myself looking at colors that occur naturally together and saving them for future use.

Typography was covered next. This was another area where the author called out some of the older practices and idioms that were based on print media but change a bit for the web. The mind-blowing topic for me in this chapter was establishing a layout grid based on the dimensions of the fonts chosen for the site. This is one of those things that seemed so obvious once the author explained it; I wondered why it had never occurred to me. This is a great idea that I plan on incorporating into future designs.

Part 2 – Adding Graphics

This section contained a bunch of good advice on choosing graphics as well as some of the mechanics of using Photoshop. I have to admit that I don’t use Photoshop, simply because my talent level is well covered by Expression, which comes with my MSDN subscription and has a lot of community support. The concepts covered here applied directly to the tools that I use regularly and taught me to use them better. The section on layers and building graphics took another topic I knew and gave me new skills to work with. Layering is one of those fundamental concepts that exponentially expands what you can do with a graphics tool.

Part 3 – Building the Site

This section covers material that has become very popular over the last couple of years. The importance of separating semantic HTML from the visual aspects in the CSS is explained well here. The concept of separation of concerns is not new to developers, but compromises seem to come quickly when a developer is building HTML and CSS. I think this has more to do with exposure to good practices than anything else. The approach covered here will lead to sites that are much easier to extend, and much easier to enhance with features such as dynamic content without postback. If you are an ASP.NET classic developer used to drag-and-drop design, please read this chapter. Considerable flexibility has been added in ASP.NET 4, and this chapter will help you understand why that flexibility is important. Since reading this book, this is the section that I find I reference most often.

Part 4 – Preparing for Launch

No book that covers web design would be complete without covering the 800 pound gorilla. The very important topic of dealing with IE in its various releases gets its own chapter. If you have done any web development at all, this is no shock to you. If you do not know the tricks and traps of dealing with IE, it will consume a good portion of your time during the development process. This is an important chapter to be aware of not only for its contents, but also for the references mentioned.

Accessibility is another topic that is popular in design/web development circles, but rarely discussed in the world of the developer. I was first exposed to this topic by my good friend’s book on Testing ASP.NET Web Applications. If you have not been exposed to this topic, you will be amazed at the impact of your design choices on those with disabilities. I have yet to have to code for 508 compliance as a requirement, but these two books would be right by my side if I did.


Web Design for Developers by Brian P. Hogan is an exceptionally well thought out and timely book. With so many developers heading towards MVC-based web authoring, well-constructed HTML and CSS are at a premium. The tactical side of this book covers a lot of important ground, but more importantly, the material in Part 1 on theory is some of the best I have seen. Make no mistake; you will not change professions from back-end developer to designer based on this book alone. But you will have a solid enough basis for creating things that look professional. No longer will your UI skills be the focal point of your applications, demos, and prototypes. Understanding the basics covered here will allow your core competencies to take center stage. If you can’t tell by now, I highly recommend this book!

Tags: diabetes

On October 4th and October 10th, my family will be participating in fundraising walks to raise money for research into better management, and someday a cure, for diabetes. Similar to many of the technical conferences I attend, these events provide a great place to pick up tips and tricks and to share stories of common experiences. As parents, we walk away from these events refreshed with new ideas and comforted that our struggles are shared. It is also a good place for my son Drew to be around other kids who have diabetes--a place where it is not unusual to do a blood sugar test before lunch.

Since last year, Drew has moved on to first grade. He has become a voracious reader, loves launching model rockets, riding his bike, and of course, pestering his little brother. He lives a very full life, just like any other 6-year-old boy, with a bit more ceremony around eating, exercise, and bedtime. One of his favorite activities is driving, whether it is Power Wheels, the lawn tractor, or popping dad’s car out of gear and releasing the emergency brake for a fun-filled ride into the ditch; he relishes every minute behind the wheel. As any 6-year-old should, he now knows the tune to The Victors and most of the words. In short, this little guy does not let anything keep him from enjoying being a kid.

Through all of the cake-and-ice-cream-laden birthday parties, hormone changes, and marathon play sessions, control has been pretty good so far this year. The last three A1c readings have been at 7.1%. The goal that we have been given by the endocrinologist is 8%. We try very hard, as parents, to learn and teach how to deal with life with diabetes so that Drew will have all the knowledge he needs to significantly reduce the risk of common complications. Through events like this walk, my hope is that in my lifetime, management of this disease will not be a 24/7 activity. If you would like to help support us in this endeavor, you can sponsor our team here:

JDRF Team Donation Page

Tags: aspnetmvc | jquery

The model binders in ASP.NET MVC represent a fantastic example of Coding by Convention. If you choose to abide by the naming rules, large amounts of work can be performed for you, but if you stray off of the path, you will have a lot of code to write. Let’s start with the simplest possible case, a model with a single property and a view with a single textbox for populating that property.

public class SimpleModel
{
    public string SimpleProperty { get; set; }
}

In order for a Create view to bind to this property, all that is required is that the name attribute of any input element (like a textbox) match the name of the property in the model you intend to populate, in this case SimpleProperty.

<input name="SimpleProperty" />

When the form is submitted, the Create action on the SimpleModelController is fired with a SimpleModel object as a parameter. This object has the SimpleProperty set with the value from the <input> with the same name. Pretty cool, but where is the massive amount of work being done for me?
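For completeness, here is a sketch of what the receiving action might look like; the action body is illustrative, not from the original post.

```csharp
public class SimpleModelController : Controller
{
    // The default model binder constructs the SimpleModel argument
    // from the posted form values before this action runs.
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Create(SimpleModel model)
    {
        // model.SimpleProperty now holds the value from the matching <input>.
        return RedirectToAction("Index");
    }
}
```

Nothing in the action has to parse the request; matching names are the entire contract.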

Nested Classes

Now, let's make the model class a bit more interesting. This time, the model will be an outer class containing a nested class.

public class OuterClass
{
    public string SomeProperty { get; set; }
    public NestedClass Nested { get; set; }
}

public class NestedClass
{
    public string SomeOtherProperty { get; set; }
}

Now, if you use the Add View wizard, the resulting view will not contain any reference to the NestedClass, but really, who keeps those around anyway? (Until v2, when you can customize the template.) Just as in the previous sample, we need an <input> for each property that should be populated with values from the POST. SomeProperty is the same case as the previous sample; Nested and SomeOtherProperty are slightly different. To understand the convention applied to this binding, think of how you would access SomeOtherProperty from an instance of an OuterClass. You would write outerClass.Nested.SomeOtherProperty; but remember, outerClass is our model and does not need to be named, so we are left with Nested.SomeOtherProperty.

<input name="SomeProperty"/> 
<input name="Nested.SomeOtherProperty" />

Again, when the form is submitted, the Create action on the OuterClassController will have a fully populated OuterClass as a parameter, including an instance of NestedClass with SomeOtherProperty set. Now, I am starting to get impressed. The techniques used to map controls to objects within WebForms are starting to look pretty clunky. The default model binder is a powerful tool as long as you follow the naming rules.

Nested Lists

This next scenario is where it all clicked for me. I was trying to bind a dynamic list of objects to a property of my model. This is a simplified version of the scenario I was looking at:

public class LogEntry
{
    public int BloodSugar { get; set; }
    public List<FoodEntry> FoodEntries { get; set; }
}

public class FoodEntry
{
    public string Name { get; set; }
    public string Carbs { get; set; }
}

Just as with the simple nested property, the default strongly-typed view creator will not add any UI elements for the nested class. That is fine; fewer lines of code for us to delete. To set up the view correctly, think of the description given in the last sample for the naming convention and the Rule of Least Surprise. If you think of how to access each individual element of FoodEntries, you would have FoodEntries[N].Name and FoodEntries[N].Carbs. These are the names required of our input elements on the view.

<div class="logEntry">
  <div class="bloodSugarEntry">
    <label for="BloodSugar">BloodSugar:</label>
    <input type="text" name="BloodSugar" />
  </div>
  <div class="foodEntries">
    <div class="foodEntry">
      <label for="FoodEntries[0].Name">Name:</label>
      <input type="text" name="FoodEntries[0].Name" />
      <label for="FoodEntries[0].Carbs">Carbs:</label>
      <input type="text" name="FoodEntries[0].Carbs" />
    </div>
  </div>
  <input type="button" value="Add More" name="FoodEntries_AddMore" />
  <input type="submit" value="Log It!" />
</div>
Now that there is some structure in the HTML that we can leverage, we can start adding some dynamic client-side behavior. In this case, I want to be able to add more food items to the form without having to postback or even perform any AJAX requests. jQuery will be leveraged to inject more food entries into the DOM. The following code runs in the click event of the FoodEntries_AddMore button.

$('input[name="FoodEntries_AddMore"]').click(function() {
    var nextFoodEntryIndex = $('.foodEntry').size();
    var nextFoodEntryNamespace = 'FoodEntries[' + nextFoodEntryIndex + ']';

    var newFoodEntry = $('<div class="foodEntry">' +
                         '<label for="' + nextFoodEntryNamespace + '.Name">Name:</label>' +
                         '<input type="text" id="' + nextFoodEntryNamespace + '_Name" ' +
                         'name="' + nextFoodEntryNamespace + '.Name" />' +
                         '<label for="' + nextFoodEntryNamespace + '.Carbs">Carbs:</label>' +
                         '<input type="text" id="' + nextFoodEntryNamespace + '_Carbs" ' +
                         'name="' + nextFoodEntryNamespace + '.Carbs" />' +
                         '</div>');

    newFoodEntry.appendTo('.foodEntries');
});

This handler counts the food entries already on the page and uses that count as the index in the names of the next row of food entries. The first click results in adding FoodEntries[1].Name and FoodEntries[1].Carbs. These names follow the naming convention established earlier, so the default model binder recognizes them and populates the LogEntry object with as many FoodEntry items as have been created on the page.
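On the server side, nothing extra is needed to receive the dynamically added rows. Here is a sketch of the receiving action; the controller name follows the convention from the post, and the action body is illustrative.

```csharp
public class LogEntryController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Create(LogEntry logEntry)
    {
        // logEntry.FoodEntries contains one FoodEntry per FoodEntries[N].*
        // group in the posted form, however many rows were added client side.
        return RedirectToAction("Index");
    }
}
```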


All of this auto-magic binding assumes that you are willing and able to follow the naming conventions necessary to have the default model binder do the work for you. If you are facing an exceptional situation where the default model binder will not work for you, use a FormCollection in the Create method and do all of the parsing yourself or create a custom model binder. As you can imagine, these tasks can become complicated and unreadable. So if at all possible, try to follow the path that does the work for you. It is worth noting that this is not at all like the drag and drop designers that have a habit of creating poor, hard-to-maintain code. All that is happening here is a form element to object mapping with the code on either end being as elegant as you like.

Where is the code?

Code for this post can be found on GoogleCode at


Speaking at CodeStock 2009

This year CodeStock is shaping up to be a conference like no other. Most regional conferences have around 25 sessions with some attempt to do Open Spaces on the side. CodeStock 2009 will certainly blow the doors off the norm with a community-driven event that has something to offer for everyone. To start with, it is the first conference that I know of that allowed early-registering attendees to choose the direction for more than 50 hour-long sessions as well as 6 extended hands-on sessions. Then, to kick it up one more notch, Alan Stevens will be facilitating the Open Spaces. Make no mistake: Alan knows how to do Open Spaces right. Last year’s CodeStock was the moment the community took notice of the way Alan’s particular talent set meshes with coordinating an exceptionally effective Open Space event. This is truly an event with something for everyone.

Given the community selection process, it is even more of an honor to be selected to speak at this event. I will be giving the PatternsInTesting presentation that has already been well received in the Midwest area.

Indy Code Camp 2009

I was privileged to be able to give the PatternsInTesting presentation at Indy Code Camp this year. There were many really good testing-related conversations before and after the presentation. I am encouraged by the amount of interest the community is taking in improving the adoption of test-driven design. I also got to see a few top notch presentations:

  • Tim Wingfield - Care About Your Craft: A very motivating presentation on doing the right thing
  • Philip Japikse - CRUD Sucks! NHibernate to the rescue: Phil has an impressive in depth knowledge of NHibernate. I was most fascinated by all of the extensions that he has written to really make NHibernate hum.
  • Jon Fuller - Dealing with Dependencies: This was my favorite presentation of the day. Jon had only enough slides for an overview and went directly in to writing code. It was an in depth, working look at using DI tools.
  • Michael Eaton - Developing Solid WPF Applications: A very informative view into a real world WPF application and its development evolution

Greater Lansing .net User Group

A big thank you to GLUGnet for letting me do the PatternsInTesting presentation for the group. This is a group that I frequently attend, so it is nice to be able to present on a topic that I talk about so much with the members.

Tags: patterns | programming | testing

Your mission, should you choose to accept it, is to observe the interaction with an object and verify that this interaction is in your best interest.

The Scenario

Some objects that you have to consume are just poorly written. The ones that are most egregious always seem to be the ones you have no control over. That lack of control may be because you have no access to the source, it may be because it would be a political minefield to change the source, or a lack of tests makes the team afraid to change the source. It seems like every time I go to a new client, these objects exist (as well as the political minefields). The developers have a mysterious set of incantations that they have memorized for interaction with these objects in order to avoid bugs. Often, no one knows where these "rules" came from and they are usually not written down.

The Vocabulary

The name of this pattern is the Spy. The spy captures information about interaction with an object and takes action only if the need arises. A spy object looks just like the object that you need to interact with (i.e., it implements the same public interface), so your code should not even notice it is there; but, behind the scenes, it performs validation and gives the useful feedback you wish the original object had implemented in the first place. This spy, or validation wrapper, is commonly implemented by holding a reference to the original object. This allows the spy to call the actual implementation in order to preserve the original behavior.

The ideal world

In an ideal world, the kinds of interactions that you are trying to validate with a spy would be captured by the actual object you are interacting with, rather than the spy. Having a wrapper object whose only function is validation is a massive code smell. If you have the ability to fix the original code by adding the relevant validation, that is by far a better solution than creating a spy object.

The real world

In reality, you do not have access to change the source of third-party libraries, even if sometimes that third party is a couple of buildings or even cubes away. The first thing you should do when you run into these bizarre incantations “required” for successful object interaction is to ask “why?”; be persistent and dig deep. You may be (not so) surprised that most of the reasons have long since gone away. If you do find that some of the hidden rules are indeed valid, you need a way to validate that your code is following the rules.


Consider the following example, where MethodToObserve will throw an uninformative exception if PropertyToObserve has not yet been set.

public interface IInterfaceToUse
{
    void MethodToObserve();
    List<string> PropertyToObserve { get; set; }
}

public class ClassToUse : IInterfaceToUse
{
    public void MethodToObserve()
    {
        PropertyToObserve.ForEach(str =>
            Console.WriteLine("Calling the MethodToObserve:" + str));
    }

    private List<string> propertyToObserve;
    public List<string> PropertyToObserve
    {
        get
        {
            Console.WriteLine("Calling the PropertyToObserve getter");
            return propertyToObserve;
        }
        set
        {
            Console.WriteLine("Calling the PropertyToObserve setter");
            propertyToObserve = value;
        }
    }
}

As mentioned previously, the ideal solution is to fix the implementation. If your only access to this code is Reflector, or you are just not authorized to change it, the next best thing is to protect yourself (a flaming email to the author of the code is, of course, optional). Our protection, or at least better information, will come from a class implementing IInterfaceToUse just like the original, only this time the implementation will provide the consumer with information that they can act on.

public class ValidatingObserver : IInterfaceToUse
{
    private IInterfaceToUse _observedClass;

    public ValidatingObserver(IInterfaceToUse observedClass)
    { _observedClass = observedClass; }

    public void MethodToObserve()
    {
        if (PropertyToObserve == null)
            throw new ArgumentNullException("PropertyToObserve",
                                            "Property must be set prior to calling Method");
        // perform observations
        Console.WriteLine("The spy is watching: MethodToObserve");

        // pass through to implementing object
        _observedClass.MethodToObserve();
    }

    public List<string> PropertyToObserve
    {
        get
        {
            Console.WriteLine("The spy is watching: PropertyToObserve getter");
            return _observedClass.PropertyToObserve;
        }
        set
        {
            Console.WriteLine("The spy is watching: PropertyToObserve setter");
            _observedClass.PropertyToObserve = value;
        }
    }
}

Note that this time, instead of the “oops, I forgot something” exception, known in .NET as “Object reference not set to an instance of an object,” we get meaningful information about what is missing and even some hint as to how to fix it. The error now clearly states that PropertyToObserve should be set prior to calling MethodToObserve.

Using the spy

There are many ways to create a spy object; I chose containment for this post, but you may also use derivation to create your wrapper. Derivation will get you up and running faster, and you will not have maintenance work to do if you add a method to the interface, but this comes at a cost. Containment allows you to swap out the actual implementation of the object with a mock implementation at some time in the future. As always, consider your needs before choosing a spy implementation.
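For comparison, a derivation-based spy might look like the sketch below. Note the assumption: this only compiles if MethodToObserve were declared virtual on ClassToUse, which it is not in the sample above; that requirement is often the first obstacle to the derivation approach.

```csharp
// Hypothetical sketch: assumes ClassToUse.MethodToObserve is virtual.
public class DerivedSpy : ClassToUse
{
    public override void MethodToObserve()
    {
        if (PropertyToObserve == null)
            throw new ArgumentNullException("PropertyToObserve",
                "Property must be set prior to calling Method");
        // perform observations, then fall through to the real behavior
        Console.WriteLine("The spy is watching: MethodToObserve");
        base.MethodToObserve();
    }
}
```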

The test below shows how to use the spy created in this post:

[TestFixture]
class ManualObserverTests
{
    [Test]
    public void MethodCallSpy()
    {
        var observedClass = new ClassToUse();
        var validatingObserver = new ValidatingObserver(observedClass);

        Assert.Throws<ArgumentNullException>(validatingObserver.MethodToObserve);
    }
}

Tool Support

All of the previous posts in this series have mentioned leveraging tools to assist in creating these test objects. The spy object, however, is a strange beast; the demands it places on the tools turn out to create as much code as the manually coded version. If you are so inclined, you can use Rhino for a spy object. What is required is taking advantage of the Do extension method. Do takes a delegate as a parameter that matches the signature of the method being called. So what you end up establishing is an expectation that a method will be called and, when it is, Do the operation specified by the delegate.
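A rough sketch of that approach follows; the exact syntax varies between Rhino Mocks versions, so treat this as an illustration rather than a recipe.

```csharp
// Build a Rhino-based spy: expect the call, and Do the observation when it occurs.
var spy = MockRepository.GenerateMock<IInterfaceToUse>();
spy.Expect(s => s.MethodToObserve())
   .Do((Action)(() => Console.WriteLine("The spy is watching: MethodToObserve")));

// The consuming code calls through the interface as usual...
spy.MethodToObserve();

// ...and the expectation confirms the interaction actually happened.
spy.VerifyAllExpectations();
```

As the post notes, this is roughly the same amount of code as the hand-rolled ValidatingObserver, which is why the spy is the one pattern where tooling buys you little.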


The spy object reminds me of the Broken Windows section of the Pragmatic Programmer. It clearly states not to live with broken windows, but if you cannot fix the window, at least put a board over the window. In the case of third party code where you cannot change the source code, a validation wrapper is the board you need to keep further damage from occurring and show other developers in the area that you still care about the quality of code.

The Series

PatternsInTesting[2] - Stub Pattern
PatternsInTesting[4] - Mock Pattern
Tags: hardware

I love my MX Revolution mouse. It is the most comfortable mouse I have used to date. For me, this is the first mouse whose wireless connection works without glitches; the scroll wheel has virtually no resistance, and there is a pile of programmable features on well-placed buttons. As a desktop mouse, I would suggest anyone give it a try.

As a presentation mouse, however, it did not work so well. The scroll wheel with no resistance would scroll under its own weight, causing my slides to advance and back up in very rapid fashion. This is a great example of a highly desired feature in one context being a complete hindrance in another. Now off I go to find a device well suited for presentations.

Tags: community | patterns | tdd | testing
I will be presenting items from the PatternsInTesting series as well as some additional content in the Test Driven is Driving me Insane talk at the Great Lakes .net User Group on 3/18/2009 and at the Northwest Ohio .net User Group on 4/21/2009. This has been a really fun talk so far and I have enjoyed the conversation it generates. Stop by if you can make it.

Tags: community | patterns | tdd | testing

I will be presenting items from the PatternsInTesting series at the Greater Lansing .net User Group Flint meeting. I was compelled to put this blog series and presentation together to address the pain many organizations experience when trying to incorporate automated testing into their development process. The content is based on the insights and lessons learned that I have picked up going through the same transition in multiple organizations. Participants in this presentation will walk away with tools for writing more effective tests and techniques for identifying issues in tests.

Tags: patterns | programming | testing

The Scenario

On our continuing quest to create unit tests that exercise only the class under test, we look at another common scenario that occurs while writing tests. As with the Dummy, our class under test has a dependency on another class, only this time the dependent class has an active role in our test. Our testing needs are about the logic of the class under test and not the interaction with the dependent object. In order to create a good test, the class under test must be isolated from the dependent object.

An example of this scenario might look like this:

public interface ICalculator
{
    int Add(int left, int right);
}

public class Fib
{
    private ICalculator _calculator;
    public ICalculator Calculator
    {
        get { return _calculator; }
        set { _calculator = value; }
    }

    public int Next(int i, int j)
    { return Calculator.Add(i, j); }
}

The class under test in this scenario is Fib, which has a dependency on an ICalculator. The test's objective is to validate that the Next method returns the correct result for some well-known examples.

The Vocabulary

The name of this pattern is the Stub. A stub stands in the place of the actual object in use and provides known answers and predictable behavior. If you are doing any sort of evolutionary development, chances are that the initial versions of your classes more resemble stubs than real code. Why? The goal is the same: you wrote stub functionality to allow you to focus your development efforts on different parts of the system. This is exactly what we are doing with stub tests: isolating one part of the system from another by providing known results.

The solution without tools

Unlike with a Dummy, providing a class that throws a NotImplementedException in the Add method does not meet our needs. Since the functionality of ICalculator is outside the scope of this test, we assume that it is working correctly (and hopefully under test elsewhere). Since we are not testing the calculator, a simplistic implementation of ICalculator that returns fixed results will work nicely.

[Test]
public void NextResultIsCorrect()
{
    Fib fib = new Fib();
    fib.Calculator = new StubCalculator();

    Assert.That(fib.Next(2, 3), Is.EqualTo(5));
}

public class StubCalculator : ICalculator
{
    public int Add(int left, int right)
    { return 5; }
}

The test will check that the Next method returns the correct result given the StubCalculator. What we end up doing here is fully exercising the Fib class with known values from its dependent classes. The stub gives us the proper level of isolation for this test.

The solution with Rhino Mocks

For this version, we leverage Rhino Mocks to keep from having to hand-code physical versions of the Stub class. Using Rhino's fluent interface, this reads as: expect a call on Calculator with the parameters 2 and 3, and when making this call, return 5 as a result.

[Test]
public void NextNumberIsCorrect()
{
    Fib fib = new Fib();
    fib.Calculator = MockRepository.GenerateStub<ICalculator>();
    fib.Calculator.Expect(calc => calc.Add(2, 3)).Return(5);

    Assert.That(fib.Next(2, 3), Is.EqualTo(5));
}

The tools advantage

Just as discussed with IAmComplicated in the Dummy sample, adding methods to ICalculator does not require any additional maintenance of this test. However, unlike the Dummy sample, calls into the stub return the expected value. Additional benefits pile up quickly. Consider adding multiple calls to Add with different parameters. The hand-coded version would need some sort of conditional logic to determine what to return based on the calling parameters. Complexity adds up fast, even in the simple example listed here. Using Rhino, one concise and readable line of code can add an expectation for new parameters, including the expected result, and Rhino deals with matching up the parameters with the correct result. This is just a glimpse into the functionality offered by Rhino; the upcoming patterns will cover even more capability.
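To see the complexity creep in, here is what a hand-coded stub with two expected calls might look like; the second parameter pair and its result are invented for illustration.

```csharp
public class ConditionalStubCalculator : ICalculator
{
    public int Add(int left, int right)
    {
        // Each additional expected call adds another branch to maintain.
        if (left == 2 && right == 3) return 5;
        if (left == 3 && right == 5) return 8;
        throw new ArgumentException("No stubbed result for these parameters");
    }
}
```

With Rhino, the same growth is one Expect line per additional parameter/result pair.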

Isolation effects

Should a test that checks the result of the Next method fail if there is something wrong with the ICalculator implementation? The answer, as always, is "it depends." This test should fail if we were writing an integration test—a test that ensures all the pieces of a system are working together. This test should not fail based on ICalculator if it is a unit test, focused only on the result of Next. Isolating ICalculator from Fib helps build a set of unit tests that can quickly identify the location of errors introduced into a system. The stub is a common pattern of isolation, and using it will make a marked and immediate improvement in your tests.

The Series

PatternsInTesting[4] - Mock Pattern
Tags: patterns | programming | testing

The Scenario

One of the characteristics of a good unit test is that the object under test is the only object being exercised. The problem in this scenario is that the object under test requires a dependent object, even though the functionality of the dependent object is not used in the test. To make a good test, the dependent object needs to be isolated from the class under test.

An example of this scenario might look like this:

public interface IAmComplicated
{ void DoStuff(); }

public class ClassUnderTest
{
    private IAmComplicated complicated;
    public double circumference;
    public double radius;
    public ClassUnderTest(IAmComplicated externalComplicated)
    {
        if (externalComplicated == null)
            throw new ArgumentNullException("externalComplicated",
                                            "complicated is required");
        complicated = externalComplicated;
    }
    public double DoInternalStuff()
    {
        return circumference / (2 * radius);
    }
}

The dependency in ClassUnderTest is that its constructor requires an instance of an object that implements IAmComplicated. The rather simple objective of this test is to validate that DoInternalStuff returns Pi within a reasonable amount of rounding error.

The Vocabulary

The name of this pattern is the Dummy. It is unclear to me whether this is a reference to the object that enables isolation for the test or a reference to the original author of the code. It seems to be a code smell for this scenario to even occur. However, sometimes you need to use an external library and do not have the liberty of changing the code.

The solution without tools

Using only Visual Studio, add a class that implements IAmComplicated: right-click the interface name and choose Implement Interface. Every method in the class will throw a NotImplementedException. This class meets the needs of the test because none of its methods are ever called; its only purpose is to exist. This is your Dummy.

    public class ComplicatedDummy : IAmComplicated
    {
        public void DoStuff()
        {
            throw new NotImplementedException();
        }
    }

This Dummy class allows us to create the following test:

    [Test]
    public void DoStuffValidates()
    {
        ClassUnderTest cut = new ClassUnderTest(new ComplicatedDummy());
        cut.circumference = 314;
        cut.radius = 50;
        Assert.AreEqual(Math.PI, cut.DoInternalStuff(), 0.01d);
    }

The solution with Rhino Mocks

Rhino Mocks has the capability to create an object at runtime by reflecting the IAmComplicated interface. This gives us the capability we need without having to maintain another class in the test project. Since Rhino is reflecting the interface at runtime, adding a method to the interface at a later date does not require changes to the test code. There are several different ways Rhino could give us a placeholder for IAmComplicated. Here we will use a simple one-line call to GenerateStub.

    MockRepository.GenerateStub<IAmComplicated>();

This one line saves us an entire file of maintenance, and our test only requires minor modification to use this technique. Place the call to Rhino as the parameter to the ClassUnderTest constructor instead of creating a new ComplicatedDummy.

    [Test]
    public void DoStuffValidates()
    {
        ClassUnderTest cut = new ClassUnderTest(MockRepository.GenerateStub<IAmComplicated>());
        cut.circumference = 314;
        cut.radius = 50;
        Assert.AreEqual(Math.PI, cut.DoInternalStuff(), 0.01d);
    }

Isolation of Failure

One of the primary goals of good unit tests is to identify exactly which unit caused an error. By applying this pattern, tests that cover ClassUnderTest no longer require a working implementation of IAmComplicated in order to run. This shortens your bug-hunting cycle by narrowing the reported failures down to the unit where the real error occurred. As you attain higher levels of isolation and better-defined units, you will find you spend much less time in the debugger. That means spending less time finding problems and more time fixing them.

The Series

PatternsInTesting[2] - Stub Pattern
PatternsInTesting[4] - Mock Pattern
