Wednesday, September 28, 2011

Silverlight Cookbook: Switching to another IoC Framework

The Rationale

As long as I have been using the Dependency Inversion Principle, Microsoft Unity has always been my preferred Inversion-of-Control framework. So it’s no surprise that the Silverlight Cookbook has been using Unity 2 in both its WCF/REST layer and in the Silverlight client. I never even bothered looking at other frameworks, with the exception of Ninject, which I used in a Windows Mobile 5 project, and StructureMap, about which I learned a lot while reading Jeremy D. Miller’s many posts on design patterns…

…Until I read this post in which Philip Mateescu compared the performance of the most popular IoC frameworks…and declared Autofac the clear winner…

Let’s be honest though. I’ve always liked Unity and managed to use it to solve all my IoC and AOP problems without too much hassle. However, an analysis of some performance issues in my latest project revealed that its AOP features were not the fastest available. Even so, I would never simply change my strategy based on a single post. In fact, some claim that the comparison was faulty in the first place. But after browsing through the Autofac documentation I became quite fond of Autofac’s approach. I’ve always abided by the “Microsoft, unless…” philosophy, but my engineering heart couldn’t resist the temptation to introduce Autofac into the Silverlight Cookbook.

So what are the advantages?

Separation between configuration and resolution.

To be more precise, you use a ContainerBuilder to set up the dependencies like this:
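A minimal sketch of such a registration; the type names (IRecipeRepository and friends) are illustrative, not the Cookbook’s actual registrations:

```csharp
var builder = new ContainerBuilder();

// Register a concrete type against the interface it implements.
builder.RegisterType<RecipeRepository>()
    .As<IRecipeRepository>()
    .InstancePerLifetimeScope();

// Register a pre-built instance as a singleton.
builder.RegisterInstance(new CookbookSettings())
    .SingleInstance();
```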


And use its Build() method to construct an IContainer that you cannot change anymore. (Well, strictly speaking you can, but that doesn’t mean you should). Notice the fluent interface, one of the aspects of Autofac I like quite a lot.
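Building and resolving then look like this (again with illustrative type names):

```csharp
// Build() "freezes" the configuration into an immutable container.
IContainer container = builder.Build();

// Resolution happens separately, typically at the composition root.
var repository = container.Resolve<IRecipeRepository>();
```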


This separation is actually the biggest reason why it took me so much time to migrate the Cookbook. You have to organize your setup code so that all registrations happen in the same place. But by doing so, I found that my design actually became cleaner, with a better separation of concerns. So, unlike what you might expect, I see this as an advantage rather than a disadvantage. Still, be aware of it when you consider migrating from Unity to Autofac.

Implicit support for factory methods

Check out the updated version of the AddNewRecipeHandler:
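A sketch of what such a handler can look like; apart from the handler’s name, the types involved are illustrative:

```csharp
public class AddNewRecipeHandler
{
    private readonly Func<IUnitOfWork> unitOfWorkFactory;

    // Autofac automatically injects a Func<T> for any registered T;
    // no extra registration is needed for the factory itself.
    public AddNewRecipeHandler(Func<IUnitOfWork> unitOfWorkFactory)
    {
        this.unitOfWorkFactory = unitOfWorkFactory;
    }

    public void Handle(AddNewRecipeCommand command)
    {
        // Each invocation of the delegate yields a fresh instance.
        IUnitOfWork unitOfWork = unitOfWorkFactory();
        // ... use the unit of work to store the new recipe
    }
}
```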


So instead of introducing a dedicated factory interface, you can simply add a dependency on a Func<T>, where T is your actual dependency. Autofac will automatically inject a delegate that you can use to create new instances of that dependency at will, and you don’t have to configure anything for it; it’s the caller that decides what kind of dependency it needs. And that’s not all. Autofac will keep track of any object you create that requires explicit disposal through its IDisposable interface. If you don’t want that, simply replace Func<T> with Owned<T>, which is Autofac’s way of handing that control back to you. If you want to make this permanent for a particular registration, append ExternallyOwned() to the registration.
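Both options sketched with illustrative types: with Owned<T>, disposal is no longer tracked by the container but handed to the caller.

```csharp
public class ImportRecipesHandler
{
    private readonly Func<Owned<IUnitOfWork>> unitOfWorkFactory;

    public ImportRecipesHandler(Func<Owned<IUnitOfWork>> unitOfWorkFactory)
    {
        this.unitOfWorkFactory = unitOfWorkFactory;
    }

    public void Handle()
    {
        using (Owned<IUnitOfWork> owned = unitOfWorkFactory())
        {
            IUnitOfWork unitOfWork = owned.Value;
            // ... the unit of work is disposed when the using block ends
        }
    }
}

// Or have the container always leave disposal to the caller:
builder.RegisterType<NHibernateUnitOfWork>()
    .As<IUnitOfWork>()
    .ExternallyOwned();
```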


Collections of dependencies

Another feature I’ve always missed in Unity is support for taking a dependency on all instances of some type. A big difference compared to Unity is that you can register as many implementations and instances of an interface as you want, without having to specify a unique name for each.
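For instance (the validator types are illustrative):

```csharp
// Several implementations of the same interface; no names needed.
builder.RegisterType<TitleValidator>().As<IRecipeValidator>();
builder.RegisterType<RatingValidator>().As<IRecipeValidator>();

// Any component can then take a dependency on all of them at once:
public class RecipeService
{
    private readonly IEnumerable<IRecipeValidator> validators;

    public RecipeService(IEnumerable<IRecipeValidator> validators)
    {
        // Contains one instance of every registered implementation.
        this.validators = validators;
    }
}
```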


Again, you don’t have to think about that during registration. It’s all part of Autofac’s extensive support for relationship types, which includes things like Lazy<T>, IIndex<K, V>, or even Func<X, Y, B> if you need to parameterize the dependency somehow. And obviously you can combine those types to create some pretty advanced dependencies (although I wonder whether you should).
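Two of those relationship types sketched with illustrative names: Lazy<T> defers creation until first use, and a Func with parameters lets the caller supply constructor arguments (matched by type) at resolution time.

```csharp
public class RecipeSearcher
{
    private readonly Lazy<ISearchIndex> index;
    private readonly Func<string, IRecipeQuery> queryFactory;

    public RecipeSearcher(
        Lazy<ISearchIndex> index,
        Func<string, IRecipeQuery> queryFactory)
    {
        this.index = index;
        this.queryFactory = queryFactory;
    }

    public void Search(string term)
    {
        // The string is passed on to the resolved component's constructor.
        IRecipeQuery query = queryFactory(term);

        // index.Value is only created here, on first access.
        index.Value.Execute(query);
    }
}
```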


You may wonder: if you have to combine all type registrations in a single location, isn’t that code going to be very difficult to understand (and maintain)? Well, no. Autofac includes the notion of grouping registrations in so-called Modules. In the Cookbook I’ve used that mechanism to combine everything related to creating units of work with NHibernate. So rather than code like this:


I can now do something like this:
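A sketch of what such a module could look like; the registrations inside are illustrative, not the module’s actual contents.

```csharp
public class CookbookUnitOfWorkModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<NHibernateUnitOfWork>()
            .As<IUnitOfWork>()
            .InstancePerLifetimeScope();

        builder.RegisterType<SessionFactoryBuilder>()
            .SingleInstance();
    }
}

// Registering the whole group then becomes a one-liner:
builder.RegisterModule(new CookbookUnitOfWorkModule());
```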


Although the CookbookUnitOfWorkModule contains two more complex classes in its hierarchy, which I introduced to simplify working with both SQL Server and SQLite, the module concept makes it a breeze to work with.

Assembly Scanning

What I particularly liked in the Managed Extensibility Framework is its support for scanning a directory for assemblies and automatically registering specific types. In the previous version of the Cookbook, I wrote code by hand to find my command handlers automatically, to overcome Unity’s lack of such functionality. Luckily, Autofac does include assembly scanning out of the box, so now I can do this:
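Along these lines; the naming convention in the Where clause is an assumption, not the Cookbook’s actual filter:

```csharp
// Scan an assembly and register every command handler in one go.
builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
    .Where(type => type.Name.EndsWith("CommandHandler"))
    .AsImplementedInterfaces();
```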



No attributes

Yes, Autofac does not contain an equivalent of Unity’s [Dependency] attribute, and I think that’s great. In fact, Autofac’s preferred dependency injection mechanism is constructor injection. It will never inject (unset) properties unless you explicitly configure it to do so upon registration.
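That opt-in happens per registration (the view type here is illustrative):

```csharp
// Only this registration gets its settable properties injected.
builder.RegisterType<RecipeDetailsView>()
    .PropertiesAutowired();
```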

I’ve never liked property/setter injection because it allows an object to accumulate many dependencies. According to my own coding guidelines and those stated in Clean Code, no member should ever have more than three parameters, not even the constructor. If you need more, chances are your class has too much responsibility. So beware: this is another pitfall when you switch from Unity to Autofac.

Surely not everything is that great?

Well, at the beginning of my migration attempt I was a bit put off by the lack of property injection, the fact that you cannot create interception proxies at just any point in your code, and the explicit separation of configuration and resolution. However, after refactoring my code to accommodate those changes, I noticed that I actually started to like those requirements. All in all, my code base has improved significantly because of them. By now, though, it should be clear that a migration to Autofac may not be the best move for most projects unless you’ve been clearly separating responsibilities from the start. Looking back at all the code bases I’ve seen in my career, I think that ideal situation is not something you’ll encounter often…

This article is part of a series of posts dealing with all the choices and solutions used in the Silverlight Cookbook. If you have any comments, let me know by commenting or sending me a tweet on @ddoomen.

Monday, September 26, 2011

In Retrospect: About Requirements Management

This is the first of several posts in which I’d like to share some of the things we decided throughout 14 sprint retrospectives. Some of them might seem obvious, but I wish I had known or thought about them before I started that project. Judging by the mistakes a team of nine developers and one tester made over a period of 12 months, they apparently aren’t that obvious.

To provide some context: I’m talking about a project involving the development of a suite of configurable and extensible products, developed in ASP.NET WebForms and executed using Scrum and XP. I was the Scrum Master, architect and lead developer (although those latter roles don’t officially exist in Scrum).

Groom your product backlog
The product backlog should be your single source of truth in terms of which functionality to build in what order. You should not keep any lists other than that backlog, particularly if you’re the product owner of the team (like ours). In other words, whenever anyone asks you about the status of the project, you should be able to deduce it from the backlog. You should maintain the order of the stories continuously, not only before a sprint planning meeting or when some manager requests a status. As a team member, make sure you never work on something that is not on the product backlog. And yes, that means that even technical work needs to be represented by a story.

Create a user interface mockup
Early in our project, we discovered that most developers are generally poor UI designers. Not only did they create rather unintuitive interfaces, those interfaces also required a lot of rework. At that point we decided that a user interface mock-up was a requirement for every story involving non-trivial UI changes. That helped not only during the sprint itself, but also during the sprint planning meetings; having a visual representation of a requirement can significantly improve the estimates.
One note though. Even if your product owner is a master in Photoshop (ours was), beware that a detailed mockup might cause your team to spend too much time getting the actual product pixel-perfect. That’s why I prefer tools like Balsamiq; they leave much more room for working towards a good-enough user interface.

Worship the Ubiquitous Language
The Ubiquitous Language is a practice originating from Domain-Driven Design. It forces all stakeholders involved in a project to use the same terms for the same concepts, and different terms for different concepts, everywhere. That sounds rather trivial, but in reality it isn’t; especially within larger corporations it is quite common to see those rules violated. We’ve learned that investing in this requires quite a lot of discipline, but it has saved us from many interpretation problems. Just make sure that whenever a term changes (e.g. because of new insights), you change it in documentation, in code, in unit tests, everywhere.

Don’t waste time on a glossary
Considering the importance of the Ubiquitous Language, we thought that having a glossary with all those terms (and the differences between our customers) was worthwhile. However, after a few sprints that glossary became hopelessly out-of-date. And after discussing this in the retrospective, we found out that most people in the group were using the coded domain model as the definitive source of any concept or term. So if you go that direction, make sure the domain model is properly documented.

Define the what, who and why of each story
The best understood stories were those that listed the what, the who and the why. In other words: what person or role is going to benefit from a specific feature, what does it allow him or her to do, and why does he or she need it? Don’t underestimate that last crucial part. More than once, it allowed us to come up with a better solution than the product owner initially thought of. In fact, we even managed to get rid of some stories because another story already allowed that person to do the same thing in a better way.

All stories must have a proper How-to-Demo
The how-to-demo is a short list of steps that illustrates what a story is about. Initially our stories didn’t include a how-to-demo, and the product owner had to spend a lot of time explaining to the team how a story affected the system. Worse, it was not uncommon for the team to lose track of the story’s purpose later on. But after introducing the how-to-demo for every new story, the situation improved a lot. In fact, once we started discussing the how-to-demo during the sprint planning meetings, the team managed to reveal several inconsistencies and oversights in the product’s feature set.

Schedule regular product backlog meetings
One of the most notable aspects of this project was how little time the product owner had available for the team, and the pressure his customer was putting on him to deliver new functionality. This resulted in two symptoms. First, the stories scheduled for the sprint planning meeting were not of sufficient quality (e.g. missing how-to-demos, functionality that wasn’t thought through, or stories that still required a mockup). Second, at random and usually very inconvenient moments, he wanted to discuss change requests and get preliminary estimates for them. We worked around this by introducing weekly backlog meetings that we used to discuss new change requests and enhance the backlog with missing details.

Well, that’s it for this first short episode in this series of blog posts. Remember that these are the experiences of me and my team, so they might not work as well for you as they did for us. Nevertheless, I’m really interested in learning about your experiences, so let me know by commenting on this post or tweeting me at @ddoomen. Next time, I’ll be sharing my best practices for efficient sprint planning meetings.

Saturday, September 17, 2011

So what does Windows 8 mean for .NET developers?

Last updated on September 22nd

Unfortunately, the Microsoft Build conference conflicted with our company's 5-year anniversary and the associated sailing trip in Greece.

Fortunately, the blogosphere and Twitter provided plenty of opportunities for trying to grasp what the stuff Sinofsky and his team have been sharing in Anaheim means for us developers. I tried to read as much as is available right now, and this is my interpretation of that news. But first, consider this improved diagram (created by Doug Seven) from his interpretation of the Windows 8 story.

Metro apps

  • Are supposed to exist side-by-side with desktop apps (at least, for the next few Windows versions).
  • Run in a kind of sandbox and don't have access to desktop apps
  • Are suspended within 5 seconds after the user has switched to another app. This should increase battery life and keep the system responsive.
  • Cannot use overlapping dialog boxes
  • Can only be distributed through the upcoming Microsoft app store, and require verification and signing before any app is allowed. I don’t know whether there are any other mechanisms for distribution.
  • Can implement specific APIs for easy exchange of files and other data streams (e.g. you don't have to download a file to a disk first to use it in another app)
  • Can be built using three sets of technologies, all running against the new WinRT API. Consequently, Metro apps can be built regardless of which technology you have been investing in:
    • C/C++ with XAML
    • C#/VB with XAML and .NET 4.5/CLR
    • Javascript with HTML5/CSS
  • Microsoft introduced a specialized version of Expression Blend for HTML that you can use for building HTML/CSS/JavaScript apps.
  • Microsoft apparently didn't mention JQuery, but it seems to work after all.
  • The JavaScript apps will run on the Internet Explorer 10 engine, which doesn’t allow any plug-ins to run. This has nothing to do with IE10 as you would use it as a desktop app; there, all plug-ins still work, including Silverlight, Adobe and other traditional plug-ins.


  • The following diagram extracted from the Lap Around the Windows Runtime session by Martyn Lovell provides a more in-depth view of the runtime.
  • WinRT is a fully object-oriented API next to Win32 that talks directly to the Windows Kernel. It is based on a modern version of COM, but that fact is completely hidden away.
  • WinRT is created for the fast and fluid experience required for Metro apps, so any operation that might take longer than 50 ms to complete is only available through an asynchronous model.
  • The WinRT objects are exposed using language projections at compile-time (C++ and .NET) and/or run-time (JavaScript and .NET)
  • WinRT provides language independent primitive types for integers, enums, (immutable) strings, arrays and interfaces. They are projected to the corresponding language-specific types.
  • WinRT uses a metadata system similar to .NET Reflection based on COM’s IUnknown and a new IInspectable interface.
  • Apps cannot expose their objects to other apps, other than through the earlier-mentioned communication contracts.
  • If you develop in C#/VB, you'll be running against the full .NET Framework, but the API is filtered to what WinRT can provide, similar to how the .NET Framework Client Profile works. You could still use Reflection to access the hidden parts, but such apps will not be accepted by the app store.
  • You can use C++ and .NET to build WinRT components and then use them from all three development models (including JavaScript). Doing this does impose some limitations on your classes.
  • The UI runs on a single non-reentrant thread, but an app can still use the thread pool.
  • Existing Silverlight apps only require a few minor changes to some namespaces and any networking code to be able to run as Metro apps.
  • Many open-source Silverlight libraries are already working on supporting WinRT. Caliburn Micro, for instance, already supports some parts of it.
  • Make sure you read the excellent in-depth analysis of WinRT by Miguel de Icaza and the threats/opportunities analysis by Steven Smith.

This also means that Metro apps have nothing to do with Silverlight, WPF or any other part of .NET for full-blown desktop apps. In fact, Windows 8 ships with the .NET Framework 4.5 which includes a shipload of new improvements, even to WPF. Everything discussed around WinRT and Metro is about building specialized apps specifically targeted to Windows 8. As far as I'm concerned, any posts or discussions talking about the death of the .NET framework or Silverlight (shame on you InfoQ!) don’t (want to) get the whole picture.

Wednesday, September 14, 2011

Yes, the Silverlight Cookbook is still alive

Although it might not seem like it, I am still working on the Silverlight Cookbook. However, I’ve just moved to a new house, so I’ve been short on time recently. Fortunately, my colleague Martin Opdam is actively working on Fluent Assertions, so I only have to divide my free time between the cookbook and my attempts to blog about the things I’ve learned in my latest agile project.

As we speak, I’m trying to replace Unity with Autofac, but that has not been an easy task. Not because Autofac is difficult to work with, but because Unity has allowed me to sneak in some solutions that don’t really match the SOLID principles.

Regardless, to give you an idea of the things I’ve been planning, here’s my internal backlog.

  • Figure out how to pass exceptions in the Command/Query agent to the current VM
  • Keep client exceptions in IsolatedStorage and retry sending the error to the server
  • Refactor the CommandService into a HandlerRegistry and a CommandExecutor
  • Support functional keys in the AttributeMappedCommandHandler
  • Module loading
  • Authentication
  • Authorisation
  • Introduce a domain specific object for setting the recipe rating
  • Unwrap TargetInvocationException
  • Properly deal with transactions
  • Support automapping using overloaded aggregate root methods
  • Intercept database connection problems
  • Asynchronous validation of title uniqueness via INotifyDataError
  • Introduce NotifyDataErrorBase
  • Introduce a mechanism for client exception handling
  • Support warnings
  • Silverlight 5

Speaking at Developer Developer Developer North

I’m honored to have been selected to speak at my first event in the United Kingdom, Developer Developer Developer North, hosted at the University of Sunderland, near Newcastle.

Our job as software developers seems to revolve mostly around programming languages, frameworks and Visual Studio. And to be honest, most of us have our hands full with that already. However, our profession includes a whole range of best practices that can seriously improve the efficiency and effectiveness of you and your teams, help you deliver at a higher quality, or involve the business even more.

What am I talking about? Well, think about Unit Testing (and TDD/BDD), Peer Reviews, Daily Builds & Continuous Integration, Brown Paper Sessions, Coding Standards, Common Code Layout, Static Code Analysis, Refactoring, Evolutionary Design, Checklists and Pair Programming. At DDD North I’ll be talking about what those practices entail and why you should include them in your toolbox.