Translation Tester, part 4 – High Level Design

In the fourth part of my Translation Tester series I’d like to give a brief description of the overall mechanism I’m planning to implement for testing translations. The main aim is to test a class that does a translation between two types; this breaks down into the following tests:

  • Test that all properties have been mapped (or excluded from the test)
  • Test that all the specified mappings are fulfilled by the translator
  • Test that the types being translated have not had changes which invalidate the translator.

The Translation Tester will allow a developer to create a specification for the translation; the translation will be based on public properties of the types being translated (may extend this to public fields and methods in the future) and will be found using reflection when the specification is created.

A specification will then be built up by adding mappings between properties on the types. Several types of mapping will be supported, the most basic being where one property is directly assigned to another property of the same type. Complex mappings, such as where one property maps to several other properties after going through various modifications, will be supported by allowing the developer to specify predicate-style delegates that will be called to test the mapping.

The test for whether there are any unmapped properties will be based solely on the ‘From’ type, as a translation will be treated as unidirectional, and I believe it only really makes sense that the ‘From’ type be fully specified in the translation.
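To make the mechanism concrete, here is a minimal, self-contained sketch of the idea, not the actual TranslationTester API: all class, method, and property names (TranslationSpecification, AddMapping, ExcludeProperty, CustomerDto, and so on) are invented for illustration. It shows reflection discovering the ‘From’ type’s public properties when the specification is created, and the unmapped-property check based solely on the ‘From’ type.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only -- names are assumptions, not the real API.
public class TranslationSpecification<TFrom, TTo>
{
    private readonly HashSet<string> unaccounted;

    public TranslationSpecification()
    {
        // Reflection finds the 'From' type's public properties when the
        // specification is created.
        unaccounted = new HashSet<string>(
            typeof(TFrom).GetProperties().Select(p => p.Name));
    }

    // A mapped property is accounted for.
    public void AddMapping(string fromProperty) { unaccounted.Remove(fromProperty); }

    // An excluded property is deliberately not translated, but still accounted for.
    public void ExcludeProperty(string fromProperty) { unaccounted.Remove(fromProperty); }

    // 'From' properties that are neither mapped nor excluded -- these would
    // fail the "all properties mapped" test.
    public IEnumerable<string> UnmappedProperties { get { return unaccounted; } }
}

public class CustomerDto { public string Name { get; set; } public int InternalId { get; set; } }
public class Customer { public string FullName { get; set; } }

public static class Demo
{
    public static void Main()
    {
        var spec = new TranslationSpecification<CustomerDto, Customer>();
        spec.AddMapping("Name"); // Name -> FullName is covered by the translator
        // InternalId is neither mapped nor excluded, so the test would flag it.
        Console.WriteLine(string.Join(",", spec.UnmappedProperties.ToArray()));
    }
}
```

The real tool would of course track the ‘To’ side of each mapping and verify the translator’s output as well; this sketch only covers the bookkeeping for the unmapped-property test.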

Posted in development, requirements, TranslationTester

Supporting plural and singular parameters

This problem relates to just about any API, I would say, but I’ll give an example. You have a service layer, a business layer, and a data access layer. An operation on the service layer for getting instances of a class by Id can either support passing in a single Id, or passing in plural Ids (in an array, IList, ICollection, etc.). The same choice is also available at the business layer, and again at the data access layer. So what do you choose at each layer?

If you choose just the singular then at some point someone is going to want to query for multiple Ids, and instead of creating a plural version of the method they may just call the singular version multiple times (very chatty).

If you treat all invocations as plural operations then the client is forced to create a plural parameter even though they may only have one value.

If you provide both at the service layer, should you have two versions of the operation all the way down the stack (which will almost certainly lead to duplication), or do you convert all calls into multiple singular stacks, or into a single plural stack? If you treat all calls as one or the other, you either miss the performance improvements available through batch processing, or you add the complication of batch processing even when 100% of the calls are actually singular.

I’m not sure there is a ‘correct’ answer.

  • I think you should only create an operation at the service layer when it is needed, so if you only need the singular operation, start with that. As soon as someone needs the batch version, the additional method signature can be created.
  • If you do need both, I think I would be tempted to have both versions only at the service layer (where we want to reduce network traffic). At lower levels I would begin by treating batch calls as multiple calls to the singular stack. The justification for this is that singular versions are almost always simpler, and that means more to me than some premature optimisation. As soon as treating batch calls this way becomes a performance bottleneck, I would look at either having two routes down the stack or treating all calls as batch calls, with the added complexity of the batch operations having to consider the singular case.
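The second bullet can be sketched in a few lines. All the names here (WidgetService, WidgetRepository, GetWidget) are invented for illustration; the point is simply that the plural service operation starts life as a loop over the singular stack.

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch: both singular and plural operations at the service layer, but the
// plural version initially just delegates to the singular stack.
public class WidgetService
{
    private readonly WidgetRepository repository = new WidgetRepository();

    public Widget GetWidget(int id)
    {
        return repository.Load(id);
    }

    // The batch version keeps the network API convenient for callers, while
    // the implementation stays simple singular calls until profiling shows a
    // bottleneck worth a true batch route down the stack.
    public IList<Widget> GetWidgets(IEnumerable<int> ids)
    {
        return ids.Select(GetWidget).ToList();
    }
}

public class WidgetRepository
{
    public Widget Load(int id) { return new Widget { Id = id }; }
}

public class Widget { public int Id { get; set; } }
```

If GetWidgets later becomes a hot path, only its body needs to change to a genuine batch query; callers are unaffected.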
Posted in guidelines, web service

My development stack

A couple of days ago I started developing my TranslationTester. To do this I needed to get my home PC up to speed in terms of a development stack. At work I have a pretty much all-Microsoft stack (Rhino Mocks being about the only exception), but I wanted to see what the open source offerings would be like. So currently I have:

  • IDE – SharpDevelop (3.0 Beta 2)
  • .Net Framework – (3.5 SP1)
  • Windows/.Net SDK – (6.1)
  • Code/Style Analysis – FxCop (1.36), StyleCop (4.3)
  • Testing – NUnit (2.4.8)
  • Source Control Client – Tortoise SVN (1.5.3)
  • Source Control Server – Google Code
  • Issue Tracking – Google Code

So far I’m pretty impressed, particularly with #Develop. The integration between them all is top notch. One of my worries with Subversion was that getting integrated source control in Visual Studio requires a commercial plugin (as far as I’m aware); with #Develop and TortoiseSVN there’s great integration, and the IDE tracks and indicates which files you’ve changed.

I’ve set up a Google Code project for the TranslationTester; I’ll start linking to it soon, once I’ve got a bit more up there.

Posted in agile, development, open source, TranslationTester

Extending unit testing frameworks with the Translation Tester

While I’ve been working on my TranslationTester project, I’ve been wondering whether the tool should be a stand-alone tool, or whether it would make sense to contribute it to one of the many unit testing frameworks out there (NUnit, MbUnit, xUnit). I don’t think it’s ‘frameworkey’ enough to contribute to the main products, but perhaps it could be contributed as some sort of extension or plugin.

My current thinking is that I’ll create it framework-independent, and then maybe write simple wrapper classes that allow it to be used within each framework in a more integrated fashion, e.g. define a base TestFixture class for NUnit.
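The NUnit wrapper could be as thin as a base class that surfaces the framework-independent tester’s findings as assertions. This is only a sketch of the idea, and every name in it (TranslationFixture, GetUnmappedProperties) is an assumption, not a published API.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Sketch: the tester core stays framework-independent; this thin NUnit base
// class turns its results into test failures.
public abstract class TranslationFixture
{
    // Concrete fixtures supply the framework-independent tester's findings
    // for their particular translator.
    protected abstract ICollection<string> GetUnmappedProperties();

    [Test]
    public void AllPropertiesAreMappedOrExcluded()
    {
        ICollection<string> unmapped = GetUnmappedProperties();
        Assert.AreEqual(0, unmapped.Count,
            "Unmapped properties: " + string.Join(", ", new List<string>(unmapped).ToArray()));
    }
}
```

A developer would then just inherit from TranslationFixture, and similar one-class wrappers could be written for MbUnit or xUnit without the core taking a dependency on any of them.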

Posted in testing, TranslationTester

Rationalising mocked dependencies using concrete types

This really is amazingly obvious and simple, but for some reason it never occurred to me.

The situation I’ve been getting myself into is that I have business logic actions that each do a very discrete bit of logic; these have dependencies on data access repositories and, potentially, on other business logic actions. I was happily using interfaces for the data access dependencies and then continued to use them for the business logic dependencies. This led to a bit of a mass of interfaces, and I also ended up with tests that mocked all these dependencies, missing lots of problems that occur in the interaction between business logic actions.

What I really want is for all data access dependencies to be replaceable by mocks, while the ‘real’ instances of all the business logic actions are used. This seemed hard to achieve as we are not using an IoC container for our dependency injection (what a pain!), so each class has a default constructor that just provides real implementations. If we use a real instance of a business logic dependency, we either have to know about its dependencies (resulting in massive constructors covering all possible dependencies in the object graph), or we fall back on the default constructors and lose the ability to mock the data access layer.

The solution I hit upon today seems kind of obvious now (and of course is completely irrelevant if you have an IoC container). Specify all data access dependencies by interface so they can be mocked out. Specify all business logic dependencies as concrete dependencies (still in the constructor). Use default constructors to provide default instances of all the dependencies. In tests, create all the business logic objects for the required object graph using dependency injection, with mocks for the data access dependencies and real instances for the business logic actions.

Demo class diagram


public class Class1
{
  private IDataAccess1 dataAccess1Dependency;
  private Class2 class2Dependency;

  // The default constructor wires up the real implementations.
  public Class1()
    : this(new DataAccess1(), new Class2())
  { }

  public Class1(IDataAccess1 dataAccess1, Class2 class2)
  {
    this.dataAccess1Dependency = dataAccess1;
    this.class2Dependency = class2;
  }

  public int MethodToTest()
  {
    return class2Dependency.Method() + dataAccess1Dependency.GetInt();
  }
}
public class Class2
{
  private IDataAccess2 dataAccess2Dependency;

  // The default constructor wires up the real implementation.
  public Class2()
    : this(new DataAccess2())
  { }

  public Class2(IDataAccess2 dataAccess2)
  {
    this.dataAccess2Dependency = dataAccess2;
  }

  public int Method()
  {
    return dataAccess2Dependency.GetInt();
  }
}
/// <summary>
/// A test for MethodToTest.
/// </summary>
[TestMethod()]
public void MethodToTestTest()
{
  MockRepository mocks = new MockRepository();
  // The data access dependencies are mocked...
  IDataAccess1 mockDA1 = mocks.DynamicMock<IDataAccess1>();
  IDataAccess2 mockDA2 = mocks.DynamicMock<IDataAccess2>();
  // ...but the business logic dependency is a real instance, constructed with
  // its mocked data access dependency injected.
  Class2 class2Instance = new Class2(mockDA2);

  Class1 target = new Class1(mockDA1, class2Instance);
  using (mocks.Record())
  {
    Expect.Call(mockDA1.GetInt()).Return(5);
    Expect.Call(mockDA2.GetInt()).Return(6);
  }
  using (mocks.Playback())
  {
    int expected = 11;//5+6
    int actual = target.MethodToTest();
    Assert.AreEqual(expected, actual);
  }
}
Posted in dependency injection, development, IoC, mocks, MSTest, testing

Translation Tester, part 3 – Requirements

In the third part of the Translation Tester series I intend to come up with some initial requirements for the product. These will then be used to create a product backlog and drive development via ‘Test Driven Development’. I’m going to try to express the requirements as ‘user stories’.

The users

First, a quick introduction to the users. I realise that I should probably have some real users providing my user stories, and in due time I hope to supplement my initial stories with real end-user stories.

The Traditional Developer is a developer who writes code and then writes some unit tests afterwards to test parts of their code.

The Test Driven Developer uses tests to drive the design and development of their code.

The term Developer will be used to refer to all developers.

The Development Manager manages a team of potentially changing developers over a period of time, potentially on several projects.

The Build Engineer manages and monitors a product’s automated build.

The Stories

So without further ado, let’s start writing some stories. Although they’ll probably go from simple to complex, no prioritisation or ordering is intended at this stage.

As a Developer
I want to test my translator classes
So that I have confidence that they work

As a Developer
I want to specify that a property should not be translated
So that I can have minimal classes

As a Developer
I want to be able to clearly see properties that were excluded from the translation
So that as requirements change I can easily identify the changes to be made and bug fixing should be easier

As a Developer
I want to know when my code change has broken another area of the code
So that I can have confidence in making changes without the risk of regression

As a Developer
I want to reduce the amount of ‘boilerplate’ code for testing translators
So that I can work more efficiently

As a Developer
I want to be able to specify complex translations where necessary
So that as much as possible of the translation can be tested

As a Developer
I want the tests for my translator to specify the desired output, not how the output is achieved
So that my tests are more robust, and don’t just duplicate the production code

As a Developer
I want each test to test one aspect of the translator
So that a failed test clearly indicates the reason for the failure and other failures are not hidden

As a Developer
I want to exercise the translator with a wide range of inputs
So that I can test the translator handles a wide range of inputs correctly

As a Developer
I want to be able to use a mocking framework to test that a call was made as part of the translation
So that I can test parts of the translation that do not easily allow for state based testing

As a Developer
I do not want to be forced to use a specific mocking framework
So that I can work the way I (and my team) work.

As a Development Manager
I do not want to be tied to any other dependencies (such as on a specific unit testing framework)
So that I can choose what tools the team will use and change my mind in the future

As a Build Engineer
I want to automate running of the translation tests
So that I can be notified of a failure without any manual intervention

As a Test Driven Developer
I want to be able to start from a small skeleton translator and test class
So that I can get quick feedback on my development

As a Test Driven Developer
I want each small test to indicate the work that must be done in the translator to make the test pass
So that I can use the tests to drive my development.

Posted in acceptance testing, agile, requirements, testing, TranslationTester, Uncategorized

Combined context menus in a CAB application

As with my previous post on context menus in CAB applications, this post may well only really apply to the following scenario.

The application is a composite-style application using the Microsoft Composite Application Block (CAB) libraries. There are several independent modules that can be included or excluded without affecting the other modules. There is a common Domain Model that represents the pure business domain of the application; each module largely just acts as a UI/view for an aspect of the model. The Shell of the application makes use of the Infragistics ToolbarManager (Office 2007 ribbon style).

The next problem I encountered was that although a domain entity should have a common context menu for all modules, some additional options should only be available from some of the modules; for example one module supported multiple selection of the entity whereas another didn’t, therefore the ‘Filter on selected’ command only really made sense in one place.

After some thought I decided that the reason for the differences in the context menu options is that in each place there is a slightly different context: there is the context of the selected domain entity, along with the context of the control that was clicked on. Therefore I decided to split the context menu into different sections (each section having its own heading label to indicate the context that the tools relate to); then, when a right click is detected, the module makes a call to the ContextMenuService to show a combined context menu, passing in the keys of all the context menu sections to show.

Previously, with the ‘domain entity context menus’, the definition of the tools lived in the domain model; with the new local context menus the definition of the tools resides in the module, clearly indicating that this context menu is not common and relates chiefly to the module.

A problem I encountered with this approach is that the Infragistics toolbar manager does not have a built-in method for showing multiple PopupMenuTools as a single context menu. Therefore my solution was that when a call is made to show a combined context menu, a key is created from the combination of all the context menus to be shown. A check is then made to see whether this specific combined context menu has already been created; if not, a new PopupMenuTool is created, simply taking the tools from all the individual menus and adding them, in order, to the new menu.
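The combining-and-caching logic can be sketched independently of the toolbar library. In this sketch the sections and tools are just keyed lists standing in for the Infragistics PopupMenuTool objects; the actual Infragistics calls to create and show the menus are deliberately not shown, and the ContextMenuService shape here is an illustration rather than the application’s real class.

```csharp
using System.Collections.Generic;

// Sketch of the combined-context-menu caching described above.
public class ContextMenuService
{
    private readonly Dictionary<string, List<string>> sectionTools =
        new Dictionary<string, List<string>>();
    private readonly Dictionary<string, List<string>> combinedMenus =
        new Dictionary<string, List<string>>();

    public void RegisterSection(string sectionKey, params string[] toolKeys)
    {
        sectionTools[sectionKey] = new List<string>(toolKeys);
    }

    // Builds (or reuses) the combined menu for this particular set of sections.
    public List<string> GetCombinedMenu(params string[] sectionKeys)
    {
        // The combined key is derived from the section keys, so the same
        // combination always maps to the same cached menu.
        string combinedKey = string.Join("+", sectionKeys);

        List<string> menu;
        if (!combinedMenus.TryGetValue(combinedKey, out menu))
        {
            menu = new List<string>();
            foreach (string sectionKey in sectionKeys)
            {
                // Each section's tools are appended in order, so the sections
                // appear one after another in the final context menu.
                menu.AddRange(sectionTools[sectionKey]);
            }
            combinedMenus[combinedKey] = menu;
        }
        return menu;
    }
}
```

The cache is also what produces the stale-menu issue discussed below: a tool added to a section after a combined menu has been built will not appear in that cached combination.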

This solution has worked remarkably well. There is one outstanding issue: if a tool is later added to one of the individual context menus, it will not be picked up by any combined context menus already created. If this functionality were ever required, it would be relatively simple to catch when a tool is added to a context menu and make sure it is added to all the relevant combined menus at the same time. In my current application the context menus are defined at startup and never changed, so this problem never occurs.

Posted in CAB, Composite Application Block, development, Infragistics