How does the DRY principle relate to a heavily layered architecture?

Today I’ve been thinking about how the DRY principle (Don’t Repeat Yourself) applies to an SOA or heavily layered architecture. A colleague of mine pointed out that our architecture violates this principle quite a lot. We didn’t really get the chance to dig into it, but I’ve been thinking about it a lot since.

The scenario we’re dealing with is a shared database (shared with legacy apps), a shared back-end WCF service (split into many services, each with a service, business and data access layer), and several client applications. With an architecture like this I think it’s very easy to fall into duplication, but it’s also worth differentiating between logic and definition.

As far as logic goes I think we’re pretty good at DRY; any real business logic goes in the WCF services and the clients are (in theory) pretty thin. When it comes to definitions I think we’re repeating ourselves a lot, but I’m not sure if this is necessarily a bad thing, because as soon as two areas of code share the same definition of something they’re both bound to that definition and therefore to each other (at least in a small way).

So let’s take an example of a definition of an aircraft that’s ‘repeated’ in several places, and in each place see if we can justify ‘repeating’ the definition:

  1. Database – The aircraft exists in the database; let’s take this as our initial declaration.
  2. Data access – In the WCF service the aircraft exists in a DataSet XSD. This acts as the mapping definition between the database and the business entities, and is pretty much auto-generated anyway.
  3. Business entity – In the WCF service the aircraft is represented as a POCO business entity. This is separate from the data access so that the business logic is decoupled from the persistence technology. It’s also a much richer representation of the data that can be manipulated in code.
  4. Translator – Although not strictly its own definition, a translator class translates between the business entity and the DTO, and ends up having to ‘know’ about the definition of an aircraft.
  5. DTO – At the service layer the aircraft is represented as one or more data contracts. This is the contract that the WCF service exposes, so it’s a good idea to separate it from the business entity so that the internals of the service can change without necessarily modifying the contract. Also, a DTO is just for transferring the data, whereas the business entity can be a much richer representation.
  6. Client business entity – Some of the clients convert the DTO into a locally defined business entity; this is because some of the clients have quite rich behaviour themselves, and decoupling from the DTOs (which are shared with the service and other clients) is advantageous.
  7. Client-side translator – Again, a translator is needed to convert between the client-side business entity and the DTO.
  8. View – Sometimes the UI needs a slightly different representation of an aircraft to display slightly different things; this occasionally leads to one last definition of aircraft.

Looking at the list it does initially look like we’ve repeated the definition a few too many times, and I’d probably concede that there are a few too many definitions. However, none of the definitions is a straightforward repeat: each definition is there to provide decoupling, and is usually slightly different from the others. Even where two definitions happen to be the same, that’s just ‘coincidence’ – as requirements change, the different layers can diverge in what they think an aircraft looks like.

There are some places where I can see potential for reducing duplication. I think the two translators could be replaced with an auto-mapping tool (such as AutoMapper), and the dataset could be replaced with a proper ORM, possibly one that uses convention over configuration.
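To illustrate the translator idea, here’s a minimal sketch of what an auto-mapping tool could replace a hand-written translator with. The Aircraft/AircraftDto types are hypothetical, and the API shown is AutoMapper-style convention mapping, not code from our system:

using AutoMapper;

public class Aircraft    { public string Registration { get; set; } }
public class AircraftDto { public string Registration { get; set; } }

public static class AircraftTranslation
{
    public static AircraftDto ToDto(Aircraft aircraft)
    {
        // In real code this would be configured once at startup; properties
        // are matched by name by convention, so the hand-written translator
        // class largely disappears.
        Mapper.CreateMap<Aircraft, AircraftDto>();
        return Mapper.Map<Aircraft, AircraftDto>(aircraft);
    }
}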

So in summary, I think the DRY principle is ‘good’, but it needs to be actually thought about; there are times when you want to ‘repeat’ yourself for another reason, such as achieving decoupling.

Posted in development, guidelines, Uncategorized

Specifying properties using Lambda expression trees instead of Reflection

For my TranslationTester project I need users to be able to specify properties on classes, so that they can say that one property maps to another, and indicate which property they are mapping (or even excluding).

Up until now I’ve only been able to achieve this using strings that match the property names. This is bad because a) it’s difficult to use, and b) it doesn’t support refactoring very well (although the tests give helpful exceptions when a property has been renamed, the strings would still need to be manually corrected).

Whilst attending DDD7 I saw a demo on LINQ by Jon Skeet, where a type of lambda expression I’d not really seen before was being used to specify a key on a class. This got me thinking that it’s basically what I want to do, and looking into it I found a few other resources explaining how to do it.

Here’s what I came up with:

target.AddMapping(f => f.Property1, t => t.Property1);

which is achieved by:

public SimpleMapping<TFrom, TTo> AddMapping<TProp>(
  Expression<Func<TFrom, TProp>> fromProp,
  Expression<Func<TTo, TProp>> toProp)
{
  // Extract the property names from the expression trees, then delegate
  // to the existing string-based overload.
  var fromName = fromProp.ToPropertyName();
  var toName = toProp.ToPropertyName();
  return AddMapping(fromName, toName);
}

...

using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

public static class ExpressionExtensions
{
    public static string ToPropertyName<TFrom, TProp>(this Expression<Func<TFrom, TProp>> propertyExpression)
    {
      // Normalise the expression, then pull the name off the member access.
      var lambda = ToLambda(propertyExpression);
      var prop = lambda.Body as MemberExpression;
      if (prop != null)
      {
        var info = prop.Member as PropertyInfo;
        if (info != null)
        {
          return info.Name;
        }
      }
      throw new ArgumentException(
        "The expression must be a simple property access, e.g. f => f.Property1",
        "propertyExpression");
    }

    private static LambdaExpression ToLambda(Expression expression)
    {
      var lambda = expression as LambdaExpression;
      if (lambda == null)
      {
        throw new ArgumentException("The expression must be a lambda expression", "expression");
      }
      // The compiler sometimes wraps the member access in a Convert node
      // (e.g. a boxing conversion); unwrap it so the body is the bare
      // member access.
      var convertLambda = lambda.Body as UnaryExpression;
      if (convertLambda != null
          && convertLambda.NodeType == ExpressionType.Convert)
      {
        lambda = Expression.Lambda(convertLambda.Operand, lambda.Parameters.ToArray());
      }
      return lambda;
    }
}

Note: some of this is borrowed from (and very heavily inspired by) Moq, particularly the handling of lambda Convert expressions.

One of the nifty things about it is that the signatures of the two expressions enforce a lot of the requirements on the property types: because both expressions share TProp, the properties referenced in them must be of the same type (or share a common base type).
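For example, with hypothetical Name/Id properties, a type mismatch simply fails to compile because no single TProp can be inferred:

target.AddMapping(f => f.Name, t => t.Name);  // compiles: both properties are strings
// target.AddMapping(f => f.Name, t => t.Id); // compile error: string vs int, no common TProp to infer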

I’ve been using the new way of referring to properties and I must say I’m really happy with it!

Posted in .net, development, techniques, TranslationTester

Casting reflected types – What I learnt

For my Translation Tester I originally wanted a user to be able to add a mapping between two properties of different types, so for example they could say that a property of type Int16 would be directly assigned to a property of type Int32 by the translator. This turned out to be really tricky to get working, and after much analysis I’m not sure it should even be done as part of a ‘simple mapping’.

The problem is that the types of the properties are found using reflection, so you need a way to tell whether there is a commonly supported conversion from one type to another, and then a way to actually perform the conversion to verify that the mapping was fulfilled.

To expand the problem let’s list some of the possible scenarios:

  • Primitive type that can be implicitly cast to another primitive type (e.g. Int16 assigned to Int32)
  • Primitive type that can be explicitly cast to another primitive type (e.g. Int32 to Int16)
  • Primitive/Struct/Class to Primitive/Struct/Class where an implicit cast operator has been added
  • Primitive/Struct/Class to Primitive/Struct/Class where an explicit cast operator has been added

This gets pretty confusing, particularly with all the permutations for the last two.

IConvertible/Convert.ChangeType

When dealing solely with the primitive types there is an interface, IConvertible, that they all implement, which allows conversion between the primitive types. The Convert.ChangeType() method simply provides a nice wrapper around the IConvertible interface. One problem here is that several of the primitive types implement the whole interface even though individual conversions will always fail at run-time (e.g. decimal to DateTime).

So this facility can be used for primitive types at comparison time; but to determine whether a simple mapping should be allowed, you’d have to actually attempt a dummy conversion, creating the dummy value with something like Activator.CreateInstance().
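A quick sketch of both points, using only the framework calls described above:

using System;

class ConvertDemo
{
    static void Main()
    {
        // Primitive-to-primitive conversion via the IConvertible machinery:
        object widened = Convert.ChangeType((short)42, typeof(int)); // boxed int 42
        Console.WriteLine(widened);

        // A dummy value created with Activator.CreateInstance can be used
        // to probe whether a conversion actually works at run-time:
        object dummy = Activator.CreateInstance(typeof(decimal)); // 0m
        try
        {
            Convert.ChangeType(dummy, typeof(DateTime));
        }
        catch (InvalidCastException)
        {
            // decimal implements IConvertible, but this particular
            // conversion always fails.
            Console.WriteLine("decimal -> DateTime is not supported");
        }
    }
}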

TypeDescriptor.GetConverter

I posted a question on StackOverflow.com, and one of the answers suggested I look into TypeConverters. From what I can see the built-in TypeConverters are focused on converting to and from strings. This facility could be used to provide converters for primitive types, and even to allow users to add converters for custom types, but it all seems rather heavyweight.
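A small example of that string focus (again just framework calls, nothing project-specific):

using System;
using System.ComponentModel;

class TypeConverterDemo
{
    static void Main()
    {
        TypeConverter converter = TypeDescriptor.GetConverter(typeof(int));

        // The built-in numeric converters are geared towards strings:
        Console.WriteLine(converter.CanConvertFrom(typeof(string))); // True
        Console.WriteLine(converter.ConvertFromString("42"));        // 42

        // ...but not towards arbitrary non-string types:
        Console.WriteLine(converter.CanConvertFrom(typeof(short)));  // False
    }
}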

Dynamic Cast

This post contains an interesting implementation of dynamic casting, based on looking for implicit (and, if necessary, explicit) cast operator methods on the reflected types and using them to perform a cast. One problem is that the primitive types don’t use cast operators to convert between themselves; rather, they use the IConvertible interface, so this wouldn’t work for primitive-to-primitive conversions.
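That post’s code isn’t reproduced here, but the heart of the technique is something like this sketch (user-defined cast operators are compiled to static op_Implicit/op_Explicit methods, which reflection can find):

using System;
using System.Linq;
using System.Reflection;

public static class CastOperatorFinder
{
    // Returns a user-defined cast operator from 'fromType' to 'toType',
    // preferring implicit over explicit, or null if neither is defined.
    public static MethodInfo Find(Type fromType, Type toType)
    {
        return FindNamed("op_Implicit", fromType, toType)
            ?? FindNamed("op_Explicit", fromType, toType);
    }

    private static MethodInfo FindNamed(string name, Type fromType, Type toType)
    {
        // The operator may be declared on either the source or target type.
        return fromType.GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Concat(toType.GetMethods(BindingFlags.Public | BindingFlags.Static))
            .FirstOrDefault(m => m.Name == name
                && m.ReturnType == toType
                && m.GetParameters().Length == 1
                && m.GetParameters()[0].ParameterType == fromType);
    }
}

// Note: Find(typeof(Int16), typeof(Int32)) returns null - which is exactly
// the primitive-to-primitive limitation described above.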

What I’ve done

This all seemed to be getting really confusing, so what I’ve done for the moment is only allow simple mappings to be added where the types are identical. This makes the code a lot simpler, and also means no magic is hidden from the user: if they want to map a property to a property of another type, they can specify this using a complex mapping (predicate).

What I might do

Having written all this down, I think it might be possible to do what I originally intended, using the following steps to add a mapping (i.e. to confirm that the mapping is possible) – a sketch of this decision follows the two lists below:

  • If both types are primitives, consult a local dictionary of valid mappings.
  • Else, use the dynamic cast method to determine whether there is a cast operator specified.
  • If no cast operator is specified, see if there is a custom converter (perhaps based on TypeConverter).

Then the following steps to verify the mapping:

  • If both types are primitives, use Convert.ChangeType() to convert the ‘from’ value to the ‘to’ type.
  • Else, use the specified cast operator or custom converter.
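A rough sketch of how the ‘add a mapping’ decision could look; the dictionary contents and names here are illustrative, not implemented code:

using System;
using System.Collections.Generic;

public static class MappingRules
{
    // Illustrative subset of the 'valid primitive mappings' dictionary;
    // splitting it into implicit and explicit sets would support limiting
    // mappings to lossless conversions only.
    private static readonly Dictionary<Type, Type[]> ValidPrimitiveMappings =
        new Dictionary<Type, Type[]>
        {
            { typeof(short), new[] { typeof(int), typeof(long), typeof(float), typeof(double) } },
            { typeof(int),   new[] { typeof(long), typeof(double) } },
        };

    public static bool CanAddSimpleMapping(Type fromType, Type toType)
    {
        // Primitives: consult the dictionary.
        if (fromType.IsPrimitive && toType.IsPrimitive)
        {
            Type[] targets;
            return ValidPrimitiveMappings.TryGetValue(fromType, out targets)
                && Array.IndexOf(targets, toType) >= 0;
        }

        // Otherwise: look for a cast operator (see the dynamic cast sketch
        // above); a custom converter lookup would slot in as a final step.
        return CastOperatorFinder.Find(fromType, toType) != null;
    }
}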

Another thought I’ve had is that you might want to limit the allowed conversions to just implicit conversions (where you won’t lose precision/data); this could be achieved by splitting the primitive mappings into two lists (implicit and explicit), and separating the dynamic cast into two checks.

Posted in .net, development, testing, TranslationTester

Translation Tester – Progress update

It’s been a little while since I blogged anything about my TranslationTester project; I have, however, been hard at work doing some coding for it. The source is now all on Google Code, as are the user stories that I originally blogged, along with some new ones. I’ve done a reasonable amount of work and now have something that’s semi-usable, although it’s currently limited to simple mappings.

So far the user stories I’ve done are:

  • Exclude property
  • Simple mapping
  • Verify ‘from’ instance

I’m currently working on the facility to automatically add mappings based on identical property names.

Posted in development, TranslationTester

How to collapse all projects in a Visual Studio 2008 solution

For ages my team and I have wanted an easy way to navigate all the projects in our Visual Studio solutions. Admittedly we have far too many projects, but forgetting that for a moment, it becomes a real pain to expand/collapse the relevant solution folders and projects. We found the occasional macro that partially did the trick, but nothing that really satisfied the requirement.

But recently a colleague discovered the “PowerCommands for Visual Studio 2008”. One of the commands is:

Collapse Projects
This command collapses a project or projects in the Solution Explorer starting from the root selected node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folders and project nodes respectively.

Another great tool!

Posted in development, Visual Studio 2008

How to modify project files (csproj) in Visual Studio in XML format

Recently I wanted to modify the XML that makes up a C# project file (csproj) so that Code Analysis would use the British English (en-GB) dictionary for its spelling checks (as explained here). The problem I found was that there’s no easy way to do this without leaving the Visual Studio IDE; you can try ‘open file’ and point it at the csproj, but this ‘throws a wobbly’ as the project is already open as a project (assuming you have the solution open). To get around this you have to close the solution or unload the project; what a faff!
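For context, the change itself is just an MSBuild property in the csproj; something like this minimal example (the exact PropertyGroup will vary per project):

<PropertyGroup>
  <!-- Use the British English dictionary for Code Analysis spelling rules -->
  <CodeAnalysisCulture>en-GB</CodeAnalysisCulture>
</PropertyGroup>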

Then the very next day a colleague showed me a plugin for Visual Studio 2008 called “PowerCommands for Visual Studio 2008”. One of its features is “Edit Project File”, which is basically just a shortcut for ‘unload project’ followed by ‘open file’.

Simple but awesome!

What would be even better would be a power command (or other plugin) for setting the CodeAnalysisCulture for a project with a single click, or better still, for every project in a solution.

Posted in development, Visual Studio 2008

Branching build scripts for production branches

At work we recently switched to using Team Foundation Server with a ‘proper’ branch/merge strategy. We have also improved our build process: although it’s still partly manual, a document leads you through it, and it’s repeatable because all the build scripts are source controlled and each build starts from a known starting point (a VM with pre-requisites and build software).

Until very recently the builds were always run against ‘Main’, but we’ve now got a production branch, and I’ve been asked to modify the build scripts to get the source from the production branch rather than Main. To do this properly I think I need to do the following:

  1. In the production branch modify the source controlled build scripts to point to the correct source location.
  2. Create a copy of the build instructions specifically for this branch that says where to get the build scripts from (the production branch)

To allow builds from Main and any production branches, both the document and the build scripts need to be branched.

Posted in build, guidelines, methodology

Translation Tester, part 4 – High Level Design

In the fourth part of my Translation Tester series I’d like to give a brief description of the overall mechanism I’m planning to implement for testing translations. The main aim is to test a class that does a translation between two types; this breaks down into the following tests:

  • Test that all properties have been mapped (or excluded from the test)
  • Test that all the specified mappings are fulfilled by the translator
  • Test that the types being translated have not had changes which invalidate the translator.

The Translation Tester will allow a developer to create a specification for the translation. The translation will be based on the public properties of the types being translated (I may extend this to public fields and methods in the future), which will be found using reflection when the specification is created.

A specification will then be built up by adding mappings between properties on the types. Several types of mapping will be supported, the most basic being where one property is directly assigned to another property of the same type. Complex mappings, such as where one property maps to several other properties after going through various modifications, will be supported by allowing the developer to specify Predicate-type delegates that are called to test the mapping.

The test for whether there are any unmapped properties will be based solely on the ‘From’ type, as a translation will be treated as unidirectional, and I believe it only really makes sense that the ‘From’ type be fully specified in the translation.
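To make that concrete, here’s a hypothetical sketch of how I imagine a specification reading; all the names here are illustrative, not the implemented API:

// Hypothetical usage - illustrative names only.
var tester = new TranslationTester<Aircraft, AircraftDto>();

// Simple mapping: direct assignment between same-typed properties.
tester.AddMapping(f => f.Registration, t => t.Registration);

// A property that is deliberately not translated.
tester.ExcludeProperty(f => f.InternalId);

// Complex mapping: a predicate-style delegate tests that the
// translation was fulfilled.
tester.AddComplexMapping((f, t) => t.DisplayName == f.Manufacturer + " " + f.Model);

// Fails if any public property on the 'From' type is neither
// mapped nor excluded.
tester.VerifyAllPropertiesMapped();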

Posted in development, requirements, TranslationTester

Supporting plural and singular parameters

This problem relates to all APIs I would say, but I’ll give an example. You have a service layer, a business layer, and a data access layer. An operation on the service layer for getting instances of a class by Ids can either support passing in a single Id, or passing in plural Ids (in an array, IList, ICollection etc). The same choice is also available at the business layer, and again at the data access layer. So what do you choose at each layer?

If you choose just the singular then at some point someone is going to want to query for multiple Ids, and instead of creating a plural version of the method they may just call the singular version multiple times (very chatty).

If you treat all invocations as plural operations then the client is forced to create a plural parameter even though they may only have one value.

If you provide both at the service layer, should you have two versions of the operation all the way down the stack (which will almost certainly lead to duplication), or do you convert all calls to multiple singular stacks, or to a single plural stack? If you treat all calls as one or the other, you end up either missing the performance improvements available from batch processing, or adding the complication of batch processing even when 100% of the calls are actually singular.

I’m not sure there is a ‘correct’ answer.

  • I think you should only create the operation at the service layer when it is needed, so if you only need the singular operation, start with that. As soon as someone needs the batch version, the additional method signature should be created.
  • If you do need both, I think I would be tempted to only have both versions at the service layer (where we want to reduce network traffic). At lower levels I would begin by treating batch calls as multiple calls to the singular stack (see the sketch below). The justification is that singular versions are almost always simpler, and that means more to me than some premature optimisation. As soon as there is a performance bottleneck from treating batch calls this way, I would look at either having two routes down the stack, or treating all calls as batch calls, with the added complexity of the batch operations having to handle the singular case.
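A minimal sketch of that shape, with hypothetical types and names:

using System.Collections.Generic;
using System.Linq;

public class Aircraft { public int Id { get; set; } }

public class AircraftBusinessLayer
{
    public Aircraft GetAircraft(int id) { return new Aircraft { Id = id }; }
}

public class AircraftService
{
    private readonly AircraftBusinessLayer businessLayer = new AircraftBusinessLayer();

    // Both shapes at the service layer, so clients never have to be chatty.
    public Aircraft GetAircraft(int id)
    {
        return businessLayer.GetAircraft(id);
    }

    public IList<Aircraft> GetAircraft(IEnumerable<int> ids)
    {
        // Below the service layer, a batch call is just repeated singular
        // calls until profiling shows this is a bottleneck.
        return ids.Select(id => businessLayer.GetAircraft(id)).ToList();
    }
}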
Posted in guidelines, web service

My development stack

A couple of days ago I started developing my TranslationTester. To do this I needed to get my home PC up to speed in terms of a development stack. At work I have a pretty much all-Microsoft stack (Rhino Mocks being about the only exception), but I wanted to see what the open-source offerings would be like. So currently I have:

  • IDE – SharpDevelop (3.0 Beta 2)
  • .NET Framework – 3.5 SP1
  • Windows/.NET SDK – 6.1
  • Code/Style Analysis – FxCop (1.36), StyleCop (4.3)
  • Testing – NUnit (2.4.8)
  • Source Control Client – TortoiseSVN (1.5.3)
  • Source Control Server – Google Code
  • Issue Tracking – Google Code

So far I’m pretty impressed, particularly with #Develop. The integration between them all is top notch. One of my worries with Subversion was that getting integrated source control in Visual Studio required a commercial plugin (as far as I’m aware); with #Develop and TortoiseSVN there’s great integration, and the IDE tracks and indicates which files you’ve changed.

I’ve set up a Google Code project for the TranslationTester; I’ll start linking to it soon, once I’ve got a bit more up there.

Posted in agile, development, open source, TranslationTester