Dennis Burton's Develop Using .NET

Change is optional. Survival is not required.
Tags: patterns | programming | testing

Your mission, should you choose to accept it, is to observe the interaction with an object and verify that this interaction is in your best interest.

The Scenario

Some objects that you have to consume are just poorly written. The most egregious ones always seem to be the ones you have no control over. That lack of control may be because you have no access to the source, because it would be a political minefield to change the source, or because a lack of tests makes the team afraid to change it. It seems like every time I go to a new client, these objects exist (as do the political minefields). The developers have a mysterious set of memorized incantations for interacting with these objects in order to avoid bugs. Often, no one knows where these "rules" came from, and they are usually not written down.

The Vocabulary

The name of this pattern is the Spy. A spy captures information about interactions with an object and takes action only if the need arises. A spy object looks just like the object you need to interact with (i.e., it implements the same public interface), so your code should not even notice it is there; behind the scenes, however, it performs validation and gives the useful feedback you wish the original object had implemented in the first place. This spy, or validation wrapper, is commonly implemented by holding a reference to the original object, which allows the spy to call the actual implementation and preserve the original behavior.

The ideal world

In an ideal world, the kinds of interactions you are trying to validate with a spy would be captured by the actual object you are interacting with, rather than by the spy. Having a wrapper object whose sole function is validation is a massive code smell. If you have the ability to fix the original code by adding the relevant validation, that is by far a better solution than creating a spy object.

The real world

In reality, you do not have access to change the source of third-party libraries, even when that third party is a couple of buildings, or even cubes, away. The first thing you should do when you run into these bizarre incantations "required" for successful object interaction is to ask "why?" Be persistent and dig deep. You may be (not so) surprised to find that most of the reasons have long since gone away. If you do find that some of the hidden rules are indeed valid, you need a way to validate that your code is following them.

Example

Consider the following example, where MethodToObserve will throw an uninformative exception if PropertyToObserve has not yet been set.

public interface IInterfaceToUse
{
    void MethodToObserve();
    List<string> PropertyToObserve { get; set; }
}

public class ClassToUse : IInterfaceToUse
{
    public void MethodToObserve()
    {
        PropertyToObserve.ForEach(str =>
            Console.WriteLine("Calling the MethodToObserve:" + str));
    }

    private List<string> propertyToObserve;
    public List<string> PropertyToObserve
    {
        get
        {
            Console.WriteLine("Calling the PropertyToObserve getter");
            return propertyToObserve;
        }
        set
        {
            Console.WriteLine("Calling the PropertyToObserve setter");
            propertyToObserve = value;
        }
    }
}

As mentioned previously, the ideal solution is to fix the implementation. If your only access to this code is through Reflector, or you are just not authorized to change it, the next best thing is to protect yourself (a flaming email to the author of the code is, of course, optional). Our protection, or at least better information, will come from a class implementing IInterfaceToUse just like the original, only this time the implementation will provide the consumer with information they can act on.

public class ValidatingObserver : IInterfaceToUse
{
    private IInterfaceToUse _observedClass;

    public ValidatingObserver(IInterfaceToUse observedClass)
    { _observedClass = observedClass; }

    public void MethodToObserve()
    {
        if (PropertyToObserve == null)
            throw new ArgumentNullException("PropertyToObserve",
                                            "Property must be set prior to calling Method");
        // perform observations
        Console.WriteLine("The spy is watching: MethodToObserve");

        // pass through to implementing object
        _observedClass.MethodToObserve();
    }

    public List<string> PropertyToObserve
    {
        get
        {
            Console.WriteLine("The spy is watching: PropertyToObserve getter");
            return _observedClass.PropertyToObserve;
        }
        set
        {
            Console.WriteLine("The spy is watching: PropertyToObserve setter");
            _observedClass.PropertyToObserve = value;
        }
    }
}

Note that this time, instead of the "oops, I forgot something" exception, known in .NET as "Object reference not set to an instance of an object," we get meaningful information about what is missing and even a hint about how to fix it. The error now clearly states that PropertyToObserve must be set prior to calling MethodToObserve.

Using the spy

There are many ways to create a spy object; I chose containment for this post. You may also use derivation to create your wrapper. Derivation will get you up and running faster, and you will not have maintenance work to do if a method is added to the interface, but this comes at a cost. Containment allows you to swap out the actual implementation of the object for a mock implementation at some point in the future. As always, consider your needs before choosing a spy implementation.
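For contrast, a minimal sketch of the derivation approach is below. It assumes the members of ClassToUse are virtual, which the version shown earlier does not declare, so treat it as illustrative only:

public class DerivedSpy : ClassToUse
{
    public override void MethodToObserve()
    {
        // validate first, then defer to the original behavior
        if (PropertyToObserve == null)
            throw new ArgumentNullException("PropertyToObserve",
                                            "Property must be set prior to calling Method");
        base.MethodToObserve();
    }
}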

The test below shows how to use the spy created in this post.

[TestFixture]
class ManualObserverTests
{
    [Test]
    public void MethodCallSpy()
    {
        var observedClass = new ClassToUse();
        var validatingObserver = new ValidatingObserver(observedClass);

        Assert.Throws<ArgumentNullException>(validatingObserver.MethodToObserve);
    }
}

Tool Support

All of the previous posts in this series have mentioned leveraging tools to assist in creating these test objects. The spy object, however, is a strange beast; the demands it places on the tools turn out to produce as much code as the manually coded version. If you are so inclined, you can use Rhino for a spy object. What is required is taking advantage of the Do extension method. Do takes as a parameter a delegate that matches the signature of the method being called. What you end up establishing is an expectation that a method will be called and, when it is, to Do the operation specified by the delegate.
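A hedged sketch of that approach using the types from this post; the exact syntax varies between Rhino Mocks versions, so treat this as illustrative rather than definitive:

var real = new ClassToUse();
var spy = MockRepository.GenerateMock<IInterfaceToUse>();

// expect the call, and when it arrives, Do the validation and pass through
spy.Expect(s => s.MethodToObserve())
   .Do((Action)(() =>
   {
       if (real.PropertyToObserve == null)
           throw new ArgumentNullException("PropertyToObserve");
       real.MethodToObserve();
   }));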

Summary

The spy object reminds me of the Broken Windows section of The Pragmatic Programmer. It clearly states not to live with broken windows; if you cannot fix the window, at least put a board over it. In the case of third-party code where you cannot change the source, a validation wrapper is the board you need to keep further damage from occurring and to show other developers in the area that you still care about the quality of the code.

The Series

PatternsInTesting[2] - Stub Pattern
PatternsInTesting[4] - Mock Pattern
Tags: patterns | programming | testing

The Scenario

On our continuing quest to create unit tests that exercise only the class under test, we look at another common scenario that occurs while writing tests. As with the Dummy, our class under test has a dependency on another class, only this time the dependent class has an active role in our test. Our testing needs are about the logic of the class under test and not the interaction with the dependent object. In order to create a good test, the class under test must be isolated from the dependent object.

An example of this scenario might look like this:

public interface ICalculator
{
    int Add(int left, int right);
}

public class Fib
{
    private ICalculator _calculator;
    public ICalculator Calculator
    {
        get { return _calculator; }
        set { _calculator = value; }
    }

    public int Next(int i, int j)
    { return Calculator.Add(i, j); }
}

The class under test in this scenario is Fib, which has a dependency on an ICalculator. The test's objective is to validate that the Next method returns the correct result for some well-known examples.

The Vocabulary

The name of this pattern is the Stub. A stub stands in the place of the actual object in use and provides known answers and predictable behavior. If you are doing any sort of evolutionary development, chances are that the initial versions of your classes resemble stubs more than real code. Why? The goal is the same: you wrote stub functionality so you could focus your development efforts on different parts of the system. This is exactly what we are doing with stub tests: isolating one part of the system from another by providing known results.

The solution without tools

Unlike with a Dummy, providing a class that throws a NotImplementedException in the Add method does not meet our needs. Since the functionality of ICalculator is outside the scope of this test, we assume that it is working correctly (and, hopefully, is under test itself). Since we are not testing the calculator, a simplistic implementation of ICalculator that returns fixed results will work nicely.

[Test]
public void NextResultIsCorrect()
{
    Fib fib = new Fib();
    fib.Calculator = new StubCalculator();

    Assert.That(fib.Next(2, 3), Is.EqualTo(5));
}

public class StubCalculator : ICalculator
{
    public int Add(int left, int right)
    { return 5; }
}

The test will check that the Next method returns the correct result given the StubCalculator. What we end up doing here is fully exercising the Fib class with known values from its dependent classes. The stub gives us the proper level of isolation for this test.

The solution with Rhino Mocks

For this version, we leverage Rhino Mocks to keep from having to hand-code physical versions of the stub class. Using Rhino's fluent interface, this reads as: expect a call on Calculator with the parameters 2 and 3, and when making this call, return 5 as the result.

[Test]
public void NextNumberIsCorrect()
{
    Fib fib = new Fib();
    fib.Calculator = MockRepository.GenerateStub<ICalculator>();
    fib.Calculator.Expect(calc => calc.Add(2, 3)).Return(5);

    Assert.That(fib.Next(2, 3), Is.EqualTo(5));
}

The tools advantage

Just as discussed with IAmComplicated in the Dummy sample, adding methods to ICalculator does not require any additional maintenance of this test. However, unlike the Dummy sample, calls into the stub return the expected value. Additional benefits pile up easily. Consider adding multiple calls to Add with different parameters. The hand-coded version would need some sort of conditional logic to determine what to return based on the calling parameters; complexity adds up fast, even in the simple example listed here. Using Rhino, one concise and readable line of code can establish an additional expectation with new parameters, including the expected result, and Rhino deals with matching up the parameters with the correct result. This is just a glimpse into the functionality offered by Rhino; the upcoming patterns will cover even more capability.
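For example, covering a second set of parameters is one more line; the values here are hypothetical:

fib.Calculator.Expect(calc => calc.Add(3, 5)).Return(8);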

Isolation effects

Should a test that checks the result of the Next method fail if there is something wrong with the ICalculator implementation? The answer, as always, is "it depends." This test should fail if we were writing an integration test—a test that ensures all the pieces of a system are working together. This test should not fail based on ICalculator if it is a unit test, focused only on the result of Next. Isolating ICalculator from Fib helps build a set of unit tests that can quickly identify the location of errors introduced into a system. The stub is a common pattern of isolation, and using it will make a marked and immediate improvement in your tests.

The Series

PatternsInTesting[4] - Mock Pattern
Tags: patterns | programming | testing

The Scenario

One of the characteristics of a good unit test is that the object under test is the only object being exercised. The problem in this scenario is that the object under test requires a dependent object, even though the functionality of the dependent object is not used in the test. To make a good test, the dependent object needs to be isolated from the class under test.

An example of this scenario might look like this:

public interface IAmComplicated
{ void DoStuff(); }

public class ClassUnderTest
{
    private IAmComplicated complicated;
    public double circumference;
    public double radius;
    public ClassUnderTest(IAmComplicated externalComplicated)
    {
        if (externalComplicated == null)
            throw new ArgumentNullException("externalComplicated", "complicated is required");
        complicated = externalComplicated;
    }
    public double DoInternalStuff()
    {
        return circumference / (2 * radius);
    }
}

The dependency in ClassUnderTest is that its constructor requires an instance of an object that implements IAmComplicated. The rather simple objective of this test is to validate that DoInternalStuff returns Pi within a reasonable amount of rounding error.

The Vocabulary

The name of this pattern is the Dummy. It is unclear to me whether this is a reference to the object that enables isolation for the test or to the original author of the code. It seems to be a code smell for this scenario to even occur; however, sometimes you need to use an external library and do not have the liberty of changing the code.

The solution without tools

Using only Visual Studio, add a class that implements IAmComplicated, right-click the interface name, and choose Implement Interface. Every method in the class will throw a NotImplementedException. This class meets the needs of the test because none of its methods are ever called; its only purpose is existence. This is your Dummy.

public class ComplicatedDummy : IAmComplicated
{
    public void DoStuff()
    {
        throw new NotImplementedException();
    }
}

This Dummy class allows us to create the following test:

[Test]
public void DoStuffValidates()
{
    ClassUnderTest cut = new ClassUnderTest(new ComplicatedDummy());
    cut.circumference = 314;
    cut.radius = 50;
    Assert.AreEqual(Math.PI, cut.DoInternalStuff(), 0.01d);
}

The solution with Rhino Mocks

Rhino Mocks has the capability to create an object at runtime by reflecting over the IAmComplicated interface. This gives us the capability we need without having to maintain another class in the test project. Since Rhino reflects over the interface at runtime, adding a method to the interface at a later date does not require changes to the test code. There are several different ways Rhino could give us a placeholder for IAmComplicated; here we will use a simple one-line call to GenerateStub.

MockRepository.GenerateStub<IAmComplicated>();

This one line saves us an entire file of maintenance, and our test only requires minor modification to use this technique. Place the call to Rhino as the parameter to the ClassUnderTest constructor instead of creating a new ComplicatedDummy.

[Test]
public void DoStuffValidates()
{
    ClassUnderTest cut = new ClassUnderTest(MockRepository.GenerateStub<IAmComplicated>());
    cut.circumference = 314;
    cut.radius = 50;
    Assert.AreEqual(Math.PI, cut.DoInternalStuff(), 0.01d);
}

Isolation of Failure

One of the primary goals of good unit tests is to identify exactly which unit caused an error. By applying this pattern, tests that cover ClassUnderTest no longer require an implementation of IAmComplicated in order to instantiate without failure. This helps shorten your bug-hunting cycle by narrowing the areas that indicate an error down to where the real error occurred. As you attain higher levels of isolation and better definition of units, you will find you spend much less time in the debugger. That means spending less time finding problems and more time fixing them.

The Series

PatternsInTesting[2] - Stub Pattern
PatternsInTesting[4] - Mock Pattern
Tags: patterns | programming | testing

It does not matter whether your testing philosophy is Test Driven Development, Test Eventually Development, or "I write tests occasionally." Very early in your testing experience, you will want to start isolating portions of your application or libraries. Without this isolation, you will end up with a fragile set of tests that make it challenging to track down the point of failure. In order to reach the level of isolation your tests need, you will need to introduce mocks (or test doubles) to your testing toolbox.

The Testing Story

Someone on your team (extra geek cred if it was you) brings the concept of adding tests into the development process. No doubt there is some confusion at first. What are tests? How do we write them? This phase quickly passes as the team finds things it understands how to test. This initial period of enlightenment yields great results. Everyone is happy that the code is doing what it should be doing. Occasionally, the team celebrates when a broken test flags an unintended change. Clearly, development will never be the same again.

As time goes on and the test library gets larger, you start to notice that a breaking change often causes quite a few tests to fail. The team finds this annoying but decides that having tests is better than developer life before tests. At some point, though, the tests start to feel burdensome. Simple changes start to propagate through the whole system of tests. Discouragement and discontentment set in, and some of your team members want to abandon developing with tests.

Enter Mocks

At this point, the prominent options are to ignore the tests (the blue pill) or to find a way to make the tests better (the red pill). This series will cover some of the test development patterns used to isolate areas of your code, in the hope that you will choose the red pill. The goal for the initial four or five posts will be to cover the types of scenarios that occur in test code and what common solutions look like. After that, I hope to cover some of the patterns for dealing with legacy code, as I am currently reading through Working Effectively with Legacy Code and it seems relevant to the client I am working with.

The Mock Controversy

There has been considerable conversation in the community about the usage of the term mock. Some prominent members of the development community have been pushing to change the vocabulary to test doubles, where mock, stub, fake, and spy are particular types of test doubles. My take is that these test double types each describe the scenario where they are useful and the technique for solving the problem; that is describing a pattern. Good names are the key to effective patterns, and we have a good set of names for pattern description. It really does not matter what you call the overall concept (mocking | doubling | faking | test doubles) if the pattern names describe the scenarios effectively.

What's next?

This series of posts will explore these test double patterns and decompose a few alternate implementations.

The Series

PatternsInTesting[4] - Mock Pattern
Tags: programming | tdd

RowTests are a great tool for consolidating tests that have the same logic but differ by the parameters and the expected values. I did a post a while back on using RowTest in MbUnit, and Michael Eaton recently posted on using RowTest with NUnit. Reading Eaton's post reminded me that I had just run into a new feature in the NUnit 2.5 drop. One of my favorite features of competitive open source tools is that they usually add at least one feature more than the other guy with each release. For the 2.5 release of NUnit, one of those features is the ValuesAttribute. Granted, MbUnit had this feature in concept with the CombinatorialTestAttribute, but the usage of the ValuesAttribute in NUnit seems much cleaner to me.

Tests with no [Values]

Looking at the previous posts on RowTests, you can see that passing many values into a specific test is not that difficult. So why would we need anything else? In certain scenarios, the number of permutations you would have to write Row attributes for could be pretty cumbersome and, if it is truly a permutation scenario, difficult to maintain. The test case for this post is that the test should be called with all possible combinations of 1, 2, 3 for one parameter and 10, 20, 30 for another parameter. Even with this simple example, we have already blown up to 9 rows, and adding one more variant to either of the parameters would leave us with 12 rows to maintain, as the sketch below illustrates.
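For contrast, here is roughly what those 9 combinations look like written out as rows. This is a hedged, MbUnit-style sketch in the spirit of the earlier RowTest posts:

[RowTest]
[Row(1, 10)] [Row(1, 20)] [Row(1, 30)]
[Row(2, 10)] [Row(2, 20)] [Row(2, 30)]
[Row(3, 10)] [Row(3, 20)] [Row(3, 30)]
public void Use_Row_Attributes(int param1, int param2)
{
    r.DoStuffWithParms(param1, param2);
}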

Giving your tests [Values]

The Values attribute allows you to express all 9 of these combinations in a very concise syntax. This lines up better with the expression of your requirements; instead of 9 Row attributes, you would now have the following:

[Test]
public void Use_Value_Attributes([Values(1,2,3)] int param1, [Values(10,20,30)] int param2)
{
    // r is the object under test, assumed to be created in the fixture's setup
    r.DoStuffWithParms(param1, param2);
}

This nicely wraps up in one line what would have been expressed with 9 [Row] attributes in the past. Bring this up in the NUnit GUI to verify that the tests you were looking for were in fact generated.


What do you use it for?

Now that you have such a powerful tool in your toolset, start to think about how it could be useful. The real-world case where I use this feature ensures that interaction with an external library is correct, both in the number and order of calls, when the actual parameters from the system are applied. That test is purely focused on the behavior of the system, not the expectation of a mathematical result.

What wouldn't you use it for?

It would be very difficult to tie an expected value to the generated permutations. So if you are going after a specific output from an algorithm, a RowTest will work much better for you, as in the sketch below. Keep in mind that the RowTest feature has been around a while, so it is likely that you will use it more often.
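A hedged sketch of that case; the calculator instance and the values are hypothetical, but each row carries its inputs and the specific output the algorithm should produce:

[RowTest]
[Row(2, 3, 5)]
[Row(10, 20, 30)]
public void Add_Returns_Expected(int left, int right, int expected)
{
    Assert.AreEqual(expected, calculator.Add(left, right));
}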

What do you do when the limitations of SQL Management Studio will not show all of your results?

Our application works with an XML file defined by an external industry specification. We found that a significant portion of this file is not required by our application, so when loading the file into our system, we remove the portions we do not care about and stuff the rest into a database. What is left can still be a pretty good-sized XML string; common usage shows the reduced size to be around 300-500 KB.

Recently, we needed to investigate the contents of one of these modified files, as well as the history of what was loaded. Following the standard developer playbook, the first response is to quickly fire up SQL Management Studio and craft a query to get the ntext column. At that point, the 65K limit on a column in Management Studio yields about a quarter of the file.

It would be pretty easy to pull up Visual Studio and put together a quick utility to extract the XML into a file. As a developer, I would likely keep that project around for an excessively long period of time; after all, it is now part of my code toolbox. So, rather than squirrel away a project file, source file, and all the related baggage of a VS project, I chose PowerShell as the hammer for this nail. The C# code that would have been created in Visual Studio is, in this case, simply a language accessing functionality in the .NET Framework. Since PowerShell has access to the .NET Framework, the code is very similar, simply written in a different language. Additionally, the PowerShell script is a single file without the weight of a VS project. The only thing I keep in my toolbox is the code required to complete the task.

What is required is the pretty common code of opening a connection, executing a query, and using the results. Below is how this common C# task is represented in PowerShell. We are pulling back several rows that contain the large XML field and tagging the results by upload date.

param( $theId, $server,$db )


$connection = New-Object System.Data.SqlClient.SqlConnection("data source=$server;initial catalog=$db;Integrated Security=SSPI")
$command = New-Object System.Data.SqlClient.SqlCommand("select * from MyTable where theItemId=$theId", $connection)

$connection.Open()

$reader = $command.ExecuteReader()
while( $reader.Read()  ) {
    $fileName = "Item_{0}_DateLoaded_{1:yyyy_MM_dd_HH_mm_ss}.xml" -f $theId,$reader['dStart']
    Out-File  -FilePath $fileName -InputObject $reader['reallyBigXMLText']
}

$reader.Close()
$connection.Close()

Save that text into a .ps1 file, and that is all that is necessary to extract the large text field. I am not one who has fully cut over from the classic command line to PowerShell. At the same time, it is important to know when this tool can be helpful.
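Invocation might look like the line below; the script file name is hypothetical, and the parameter names come straight from the param block above:

.\ExtractXml.ps1 -theId 42 -server MYSERVER -db MyDatabase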

Tags: programming | spec# | SpecSharp

The next topic in this series was supposed to be invariants. While that topic is still coming up, the coverage of nulls was not yet complete. The last post covered mostly parameters and method signatures. Since parameters are not the only variables in a system that can be null, it makes sense to talk a little about class fields. Consider the following class member from the Sentence class discussed in Part 1.

List<string> words = new List<string>();

Several methods and properties in the Sentence class perform operations on this variable by dereferencing it and calling its methods and properties. Spec# notices this and warns of a potential null dereference. At first this seemed like a bit of an inconvenience, but if you think about it for a second, this warning covers every location where "object reference not set to an instance of an object" can occur. Wow, removing that error from the development cycle would no doubt be a huge boost to productivity. A simple change to the declaration eliminates the warning.

List<string>! words = new List<string>();

So what if you are not instantiating the object the same way for each class creation? Try removing the new from the declaration, and a whole different error shows up. This time the constructors all get flagged: Spec# notices that a field marked as non-null has not been assigned in the constructor. New up the list in the constructor, and the validation engine is happy again.
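A minimal sketch of what the checker is after, assuming the Sentence class from Part 1:

class Sentence
{
    List<string>! words;

    public Sentence()
    {
        // assigning the non-null field in the constructor satisfies the checker
        words = new List<string>();
    }
}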

There is an impressive depth to the null checks that can lead to a much more solid code base. In the strongly typed world, where compiler errors are preferred, Spec# strengthens the type checking, pushing potential errors earlier in the development cycle, to the point where some of them show up in the editor before compile time!

Tags: programming | spec# | SpecSharp

One of the key characteristics of a professional-quality application is defensive coding. While necessary, defensive code can hide the meaningful portions of the logic. It can also lead to many tests that are more related to the chosen implementation than to the business rules. Take this simple piece of code to add a word to a sentence.

public void AddWord(string word)
{
    if (word == null) throw new ArgumentNullException("word");
    if (word.Length == 0) throw new ArgumentException("word");
    words.Add(word);
}

The validation code dominates this simple function. Additionally, it does not provide the consumer of the method with any hints about the parameter's constraints. Sure, XML docs can provide some documentation, but that still does not help at design time. A set of tests checking for the expected exceptions, along the lines of the sketch below, would also be common for this block of code. There is plenty to debate about these non-functional tests, but it is safe to assume that this kind of testing is common.
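A hedged sketch of what those exception tests typically look like in NUnit; the sentence instance is assumed to be created in the fixture's setup:

[Test]
public void AddWordRejectsNull()
{
    Assert.Throws<ArgumentNullException>(() => sentence.AddWord(null));
}

[Test]
public void AddWordRejectsEmptyString()
{
    Assert.Throws<ArgumentException>(() => sentence.AddWord(""));
}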

The Spec# team is doing some incredible work to make constraints like this either enforced by the language or more visible by placing them within the method signature.

The first parameter check relates to nullness. Spec# provides an operator for forcing non-null. The signature of AddWord would now look like:

public void AddWord(string! word)

This tells the compiler and Visual Studio that this parameter cannot be null. If you even try to pass a string that can be null to this method, a warning will appear at design time. A compiler error will also result. This really cranks up the visibility of the constraint. The user of this method can also see the constraint right in the signature.

The second parameter check relates to some data constraint on the string. In this case, it is required to be a non-empty string. What is really going on here is that this method has a pre-condition relating to the data it is going to work with. Preconditions in Spec# are specified with the requires keyword. So our method now looks like:

public void AddWord(string! word)
    requires word.Length > 0;
{
    words.Add(word);
}

Now the requirements are part of the method signature and show up in IntelliSense. This alone is a big help to the development process. The precondition is validated at runtime, kicking out an exception if the condition is not met. It is worth noting that you can map the stock RequiresException back to an argument exception via the otherwise keyword.

requires word.Length > 0 otherwise ArgumentException;

The big gain here is evident: the signature of the method contains all of the relevant constraints. With the constraints in the signature, even more assistance can be provided at design and compile time. Best of all, the body of the method is left with only the logic the method is responsible for.

There is plenty more to cover with the direction Spec# is taking. Next up: validating the internal state of an object as part of the class definition.

Tags: CodeRush | community | programming | tdd

One of the things I really enjoy about the local developer groups is the sharing of ideas and usage patterns for common tools. At tonight's Ann Arbor .NET Developers Group, the topic of CodeRush templates came up. I commonly use a set of templates centered around the Rhino Mocks framework. As I was describing these templates, Jay Wren claimed they should be posted. The irony is that the first time I saw someone using CodeRush was in a presentation Jay was giving on IoC. As a result of that presentation, we are using Castle Windsor in our application and CodeRush as a productivity tool.

Creating a Mock Instance

To use Rhino Mocks to create a mock instance, you write something like:

MyType myType = (MyType)mocks.DynamicMock(typeof(MyType));

This is an exceptionally redundant exercise that just screams for a template. The core part of the template takes a string from the Get string provider (named Type) for the type of the instance to create. It will also name the variable with the same name as the type, in a slightly different format. Even though this variable name is a linked field, you can break the link by pressing Ctrl-Enter while the cursor is on the variable name. This is commonly required when creating multiple mock instances in the same scope.

#MockInstance#: Base template not intended to be called directly

«Caret»«Link(«?Get(Type)»)»«BlockAnchor»«Link(«?Get(Type)»,FormatLocalName,PropertyNameFromLocal)»= («Link(«?Get(Type)»)»)mocks.DynamicMock(typeof(«Link(«?Get(Type)»)»));

mk: the type name is initialized to MyType

«?Set(Type,MyType)»«:#MockInstance#»

mk\: the type name is initialized from the Clipboard

«?Set(Type,«?Paste»)»«:#MockInstance#»

Setting up the Tests

The mock instance templates are not much use without having the MockRepository set up and test methods to call. These templates come in handy as well.

mtf: the test fixture

[TestFixture]
public class «Caret»My«BlockAnchor»Test
{
 private MockRepository mocks;

 [SetUp]
 public void Setup()
 {
  mocks = new MockRepository();
 }
 
 «Marker»
}


mt: the test

[Test]
public void «Caret»Test«BlockAnchor»()
{
 «Marker»
 mocks.ReplayAll();
}

Hopefully these templates will prove useful and maybe even kick off some ideas for new ones.

UPDATE: Yea, it would be easier if I added the export file.

CSharp_Custom.xml (12.45 KB)

PowerShell has been a topic of interest for me for the last couple of weeks. I have been squeezing in some reading on it here and there for a little over a week. Then, out of the blue, up pops one of those issues at work that does not involve making an elegant object model, but rather digging through log files. How cool is that? For more years than I care to mention, it seems like every time I start looking into a new topic, something comes up where that research saves a ton of time. So there I am whipping up this script, when SharpReader pops up toast that says Using an IDE to write PowerShell Scripts. If you are doing any PowerShell work at all, go ahead, stop reading this post and go get that tool; the post will be here when you get back. The PowerShell Suite has been discounted for two weeks in connection with Hanselman's post, and there is a free version for non-commercial use.

The issue was that users in Germany were getting access denied during a particular time frame. Since this time frame overlapped with the nightly import of user information, there was some speculation that there was a collision with the import. We wanted to get a list of the users that got the access denied and correlate it with the users that were imported. That leaves us with 500MB per 2 hours on each server; 8 web servers and a 4-hour period in question yields 16GB worth of log files to mine. Definitely an automation task.

The log file looks like:

... cut
[8364/18416][Tue Feb 07 04:30:27 2008][..\..\..\CSmHttpPlugin.cpp:402][INFO:1] PLUGIN: ProcessResource - Resolved Url '/security/accessdenied.aspx?ReturnUrl=index.aspx'.
[8364/18416][Tue Feb 07 04:30:27 2008][..\..\..\CSmHttpPlugin.cpp:515][INFO:1] PLUGIN: ProcessResource - Resolved Method 'GET'.
[8364/18416][Tue Feb 07 04:30:27 2008][..\..\..\CSmHttpPlugin.cpp:566][INFO:2] PLUGIN: ProcessResource - Resolved cookie domain '.SITENAMEWASHERE.com'.
[8364/18416][Tue Feb 07 04:30:27 2008][..\..\..\CSmHttpPlugin.cpp:3727][INFO:1] PLUGIN: EstablishSession - Decoded SMSESSION cookie -  - User = 'uid=USERIDHERE,dcxdealercode=DEALERCODEWASHERE,ou=dealerships,ou=dealers,o=DOMAINWASHERE.com', IP address = 'IPADDRESSWASHERE'.
... cut

What we needed was the occurrence of access denied where the source was the home page. A few lines later in the log, the SiteMinder user information is present. The script therefore needs two modes: looking for access denied, and looking for the user info. Get-Content provides reader-like functionality, kicking out a line of text at a time from the provided file. Piping to a ForEach calls the associated script block for each line of text in the file. Some regex magic finds the correct string and extracts information from the user information line. The result of the regex is where I ran into the biggest hang-up. PowerShell has an automatic variable, $matches, that is populated when a match is found (when a match is found, not when a match is attempted), so I had to clear out the $matches variable after processing a match. The only output is the parsed user information, which could be passed off to Brian, who deals with the imports. It is a pretty simple piece of script that looks like:

param ( $infile )

$lookingForSiteMinderCookie = $FALSE
$matches = $null
Get-Content $infile | ForEach-Object {
  if( $lookingForSiteMinderCookie )
  {
    if ( $_ -match '^.*\[.*\](\[.*\])\[.*\]\[.*\].*uid=([a-zA-Z0-9]*).*dcxdealercode=([0-9]*).*$' )
    {
      "TimeStamp: {0}`tUser: {1}`tDealer: {2}" -f $matches[1],$matches[2],$matches[3]
      $matches = $null
      $lookingForSiteMinderCookie = $FALSE
    }
  }
  else
  {
    if ( $_ -match '^.*accessdenied\.aspx\?ReturnUrl=\%2fhome\%2fmain\.aspx.*$' )
    { 
      $matches = $null
      $lookingForSiteMinderCookie = $TRUE 
    }
  }
}

 

This was saved as LogParse.ps1.

From the PowerShell command line:

  • Use Get-ChildItem with a filter that matches the log file names to get a list of log files
  • Iterate through each file using ForEach, calling LogParse
  • Pipe the output to Out-File to save the results

Get-ChildItem logfilepattern | ForEach-Object { .\LogParse $_ } | Out-File deniedUsers.results

I have found PowerShell to be a very useful tool for doing quick-and-dirty tasks, and I can certainly see that, with a bit more experience, a command-line .NET interpreter could be very handy. Check it out; you can rarely go wrong by having another tool in the toolbox. I do want to point out that this was primarily an exercise in using PowerShell because I wanted to learn it better. The power of PowerShell is not going to come in text processing, where many tools that have been around for a very long time do that job very well. Many shells use strings when piping between commands, and passing a string object is not that much better. The real power comes when you start using objects, and even more when you use objects in the pipeline. That is the feature unique to PowerShell.
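As a small, hedged illustration: each stage below receives FileInfo objects rather than text, so properties like Length and LastWriteTime are available directly in the pipeline:

# Each command passes FileInfo objects, not strings, down the pipeline
Get-ChildItem *.log | Where-Object { $_.Length -gt 1MB } | Sort-Object LastWriteTime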

Tags: programming

Predicate: Logic. that which is affirmed or denied concerning the subject of a proposition.

In programming terms, a predicate is a function that takes a subject as a parameter and returns true or false. So Predicate<T> is really just a specific signature applied to a delegate. A method compliant with this signature looks like:

bool method(T item)

The collection classes in System.Collections.Generic have several methods that take a predicate as a parameter. I really like this separation of concerns: the responsibility of iterating through the collection is owned by the collection; the responsibility of making a decision based on the data in an object is owned by the object. What seemed to be missing was the common operation of combining boolean results with logical operators. What I really wanted was to pass a set of predicate functions to the collection methods. Of course, the collection authors could not know how I would want the boolean results combined. This seemed to be the responsibility of something else: a class that could contain a list of predicates and, at the same time, expose a predicate function that is the logical combination of their results.

The PredicateList class is a container of Predicate<T> delegates. The methods that can be used as a predicate are the logical AND and OR operations. These methods iterate through the contained predicates and short-circuit upon arriving at a value that determines the result of the operation.

using System;
using System.Collections.Generic;

namespace Utilities
{
  public class PredicateList<T> : List<Predicate<T>>
  {
    public PredicateList(params Predicate<T>[] predicates)
    {  AddRange(predicates);  }
    public bool And(T item)
    {
      foreach (Predicate<T> pred in this)
      {
        if (!pred(item)) return false;
      }
      return true;
    }
    public bool Or(T item)
    {
      foreach (Predicate<T> pred in this)
      {
        if (pred(item)) return true;
      }
      return false;
    }
  }
}

Using this class with one of the collection methods would look like the following (FindAll here is assumed to be called from within, or on, a List<T>):

PredicateList<T> filters = new PredicateList<T>(IsThis, IsThat, IsTheOtherThing);
return FindAll(filters.And);
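For context, the predicate methods themselves are just ordinary methods matching the Predicate<T> signature. These are hypothetical examples against a hypothetical Order type:

static bool IsThis(Order o) { return o.IsOpen; }
static bool IsThat(Order o) { return o.Total > 100; }
static bool IsTheOtherThing(Order o) { return o.Region == "West"; }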
