Wednesday, November 30, 2011

Feedback Requested: Is there any valid usage for the ‘new’ keyword?

Yesterday I ended up being part of a discussion about using the ‘new’ keyword to hide base-class members. A colleague of mine used it to alter a base-class property in a derived class with the purpose of making it more strongly typed.

I’ve always rationalized guideline AV1010 (Don’t hide inherited members with the new keyword) by referring to the Liskov Substitution Principle and claiming that if you need it, then you’re probably facing a design smell. In this particular case that was indeed the issue, so after fixing it, the keyword wasn’t necessary at all.

But his arguments did make sense. In fact, he was so convinced that he sent me a proposal for an exception to the C# Coding Guidelines. As part of the discussion, he also sent me some background info on the keyword's origin, written by Eric Lippert of the C# team.

This is the example he claims is a valid and useful exception to the guideline. The purpose was to ensure that a manager always has a smart phone rather than any other type of phone.

    public class Phone
    {
    }

    public class SmartPhone : Phone
    {
    }

    public class Employee
    {
        public Employee(Phone phone)
        {
            Phone = phone;
        }

        public virtual Phone Phone { get; private set; }
    }

    public class Manager : Employee
    {
        public Manager(SmartPhone phone) : base(phone)
        {
        }

        public new virtual SmartPhone Phone
        {
            get { return (SmartPhone)base.Phone; }
        }
    }

As far as I see it, the Manager class is violating the contract defined by the Employee class, and thus violates the LSP. If an Employee can have any type of phone and a Manager cannot, then clearly a Manager is not an Employee. They may share some characteristics, but that could be solved with a common base class.
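To see why this trips up callers, consider accessing a Manager through an Employee reference. Member hiding is resolved at compile time, so the static type of the reference decides which property is used (a minimal sketch; this consuming code is mine, not part of the original example):

    Manager manager = new Manager(new SmartPhone());
    Employee employee = manager;

    SmartPhone smartPhone = manager.Phone;  // resolves to Manager.Phone
    Phone phone = employee.Phone;           // resolves to Employee.Phone; unlike a
                                            // virtual override, the hiding member is bypassed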


Small Liskov Substitution Principle poster


I don’t easily loosen guidelines unless there’s a really sensible exception, but I do like to give any suggestion serious consideration. So please provide me your feedback. Is there any valid reason for using the ‘new’ keyword other than for legacy purposes (where you can’t change some base-class code) and which is not a design smell?


Let me know by commenting on this post, by sending me email or tweeting me at @ddoomen.

Friday, November 04, 2011

Almost five years at Aviva Solutions and still enjoying it every minute

I know, I know, I’m still 3 months away from it. But without doubt, on the 1st of February, I will be celebrating my first 5-year anniversary in the 15 years of my professional career. That might sound silly, but I’ve always had a problem of getting restless after a few years. It was never a problem with the people around me, and in fact, from an employee’s point of view I’ve never had anything to complain about at all. It has just been my desire to move on at some point and discover a fresh new environment.

Strangely enough, I don’t have that problem at all currently. Some may claim it’s my age and the corresponding desire to finally settle down, but I’m quite sure it’s the uniqueness of my current employer, Aviva Solutions. I think this is well illustrated by this photo collage (click for a bigger picture).

Collage Summer Event XL 2011

Obviously our annual social trip to a warm place close to the beach is only barely sufficient to keep me happy (pun intended). But the fact that we have practically no hierarchy, that I enjoy the company of some very experienced colleagues, and that our two CEOs value each and every employee by investing heavily in their personal and technical knowledge and skills might be more important.

To clarify my added value to the company: I try to lead the themes Custom Solutions & ALM and Cloud Solutions. That means I spend four days a week on (long-term) consultancy jobs and one day making sure we as a company invest enough in current and new Microsoft technologies, practices and software development methodologies.

BTW, the other themes include Application Integration, E-Commerce & CMS, Business Intelligence and Portals & Collaboration. This doesn’t mean anybody is fixed to a particular theme for life. It’s just to allow them to focus a bit and prevent them from drowning in the elaborate world of Microsoft. In fact, everybody is expected to have thorough knowledge of the .NET Framework and a clear understanding of the practices from Extreme Programming and Agile methodologies such as Scrum or Kanban.

We’re only 40 professionals right now, and we’re still looking for new colleagues who match the following characteristics:

  • Are passionate about the software development profession
  • Like to share their opinions
  • Are capable of conveying that opinion to the lesser gods (read: managers)
  • Are not afraid to propose alternate or better solutions to a problem
  • Have a natural tendency of trying to improve themselves and the people around them
  • Speak Dutch fluently
  • Don’t have anything against an annual weekend at a warm sunny beach and a drink here and there

If you think you fit that description in any way, let me know by tweeting me at @ddoomen or emailing me at dennis.doomen@avivasolutions.nl. We might be drinking a beer at next year’s annual event.

Silverlight Cookbook: Looking for a great UI design

Why? Because even though I think I have a reasonable idea of when a user interface is consistent and user friendly, I suck at raw design skills. Just check out the current ‘design’ if you don’t believe me.

[screenshot of the current Silverlight Cookbook design]

I’m actually looking for something that resembles the Cosmopolitan theme or MetroTwit:

[screenshots of the Cosmopolitan theme and MetroTwit]

One or two screenshots are enough to get me started, but I’m currently a bit out of inspiration to produce something interesting myself. Obviously you’ll get all the credits in my tweets, posts, etc.

Anybody?

Thursday, November 03, 2011

In Retrospect: About Bugs

This is the third of several posts in which I’d like to share some of the things we learned throughout more than 14 sprints of Agile development using Scrum. Some of them might seem like stating the obvious, but I wish I had known or thought about them before I started that project. Judging by the mistakes a team of 9 developers and one tester made in a period of 12 months, they apparently aren’t that obvious. So after having elaborately discussed the way we handle the sprint planning meeting, let’s briefly talk about bugs.

Strive for a zero-bug policy

In other words, try to keep your list of bugs empty and don’t start a new story until all bugs are solved. The reason is that the longer you wait to fix a bug, the more work it is likely to take. And be reluctant to move a bug to the product backlog for reprioritizing. If you’re doing that, then it probably isn’t a bug after all and should have been filed as a user story instead. A side-effect of this practice is that it makes it very visible when your team is suffering from a lot of bugs. That should give you a natural tendency to change your way of working so that bugs are less likely to occur.

Reduce the focus factor to deal with regressions

We never assign story points to bugs that are solved within the same sprint. Instead, the estimated focus factor of your sprint should account for the average number of bugs typically found in a sprint. And beware: if you’re not applying automated testing to your entire codebase, then the larger the codebase, the bigger the chance that regression issues will occur.

Schedule left-overs in the sprint planning

When not all bugs have been solved at the end of the sprint, move them onto the agenda of the next sprint planning meeting and estimate them just like you do with stories. This makes it clear that the quality level of the work in the last sprint was not at the expected level, and it protects the next sprint from a false start.

Make sure a bug is really a bug

I lost count of how many times somebody in the team started working on a bug and found out that it was really a disguised change request. Especially HTML/CSS-related improvements have a tendency to end up as bugs and require a lot of time to fix. Just move them to the product backlog as a user story storyotyped as UI Enhancement. Misinterpretations and oversights in business rules have popped up as bugs once in a while as well, especially if the product owner didn’t spend enough time evaluating the story in the previous sprint. If that happens, we simply change it into a user story and ask the PO to reprioritize it for the next sprint. If you do this often enough, it might encourage the PO to allocate a bit more time to the evaluation next time.

Use a tool to keep track of issues

In the beginning of the project, we simply created a Microsoft OneNote page to keep track of small issues that our tester found throughout the sprint. We thought the low threshold of OneNote would keep the administration to a minimum. But we found that some descriptions were a bit vague, especially if the test professional who created them was out of the office. We introduced some specific requirements for each issue, such as what he or she did to trigger the problem, what the expected behavior was, etc. Then we discovered that occasionally two developers were working on the same issue. So we agreed to highlight an issue in the issue list as soon as a developer started to work on it, and to add his name in a dedicated column. The next problem was the length of that list, in particular because of our lack of automated UI tests. We tried to solve that by moving the fixed issues to another OneNote page, all to keep away from too much administration.

But you can guess how this story ends. We finally started to file real bugs in our Team Foundation Server environment. Our test professionals are now using Microsoft Test Manager to do structured and exploratory testing, so for them it’s little trouble to create a bug. In fact, we now get a lot more details about the circumstances under which the test was executed. Those really help in tracking down a particular problem.

Remember that these are the experiences of me and my team, so they might not work as well for you as they did for us. Nevertheless, I’m really interested in learning about your own experiences, so let me know by commenting on this post or tweeting me at @ddoomen. Next time, I’ll be discussing the things we did to get through the sprints as efficiently as possible.

Sunday, October 30, 2011

Fluent Assertions is finally gaining some momentum

Indeed it is, in particular within the part of the .NET community that believes test-first development is non-negotiable. We receive more and more suggestions, contributions and questions, and we’ve started to notice some blog posts here and there.

It’s not being downloaded thousands of times per month, but since its first release in February 2010 it has been downloaded 1738 times through CodePlex. The biggest increase came through NuGet though: since we uploaded our first NuGet package in January this year, it has counted 2863 downloads. That’s more than enough to make us happy.

Anyway, after having tested several intermediate versions in one of our major projects, we’ve finally released version 1.6.0. And yes, we’re doing semantic versioning, so compared to 1.5.0 this version only adds new functionality and bug fixes, with no breaking changes. All credits for this release go to my colleague and close friend Martin Opdam. He spent more actual development time on it than I did, and I’m very happy with that because it allows me to keep my focus on the Silverlight Cookbook for a while longer. We are also happy to see the contributions coming in. For instance, Urs Enzler has been quite active and provided various patches. He’s even trying to set us up with a continuous integration server based on TeamCity.

So what's new?

  • And() extension method to TimeSpanConversionExtensions to support 4.Hours().And(30.Minutes()).
  • More TimeSpan extensions to fluently create a TimeSpan like 23.Hours(59.Minutes()).And(20.Seconds()).
  • MSpec support, contributed by Urs Enzler.
  • Support for the ComparisonMode to assert inner exception messages as well. Also added ComparisonMode Equivalent and EquivalentSubstring to assert that the message of an (inner) exception matches a certain case-insensitive phrase.
  • Guid assertions like Be(), NotBe(), BeEmpty() and NotBeEmpty().
  • Support for recursively comparing the properties of nested objects using ShouldHave().AllProperties().IncludingNestedObjects().EqualTo().
  • Type and MethodInfo assertions for asserting class members are virtual or decorated with specific attributes.
  • Before() and After() extension methods for TimeSpans.
  • Should().Be() and NotBe() extensions to the TypeAssertions.
  • Added PDB files to the release build as another contribution by Urs Enzler.
  • Added the name of the property to the ShouldFirePropertyChanged extension method failure message, also contributed by Urs Enzler.
  • Added missing comments to some of the assertion classes.
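To give an idea of how some of these additions read in a test, here is a minimal sketch; the subject objects (someGuid, actualRecipe, expectedRecipe) are hypothetical, but the assertion names come straight from the list above:

    // Fluent TimeSpan composition
    TimeSpan duration = 23.Hours().And(59.Minutes()).And(20.Seconds());

    // The new Guid assertions
    someGuid.Should().NotBeEmpty();

    // Recursive comparison of the properties of nested objects
    actualRecipe.ShouldHave().AllProperties().IncludingNestedObjects().EqualTo(expectedRecipe);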

What did we fix?

  • Fixed a stack overflow exception due to a recursive call between the various overloads of the floating-point extension method BeApproximately().
  • While comparing two collections for equality, FA didn't check any superfluous items in the expected collection.
  • Boolean assertions did not properly check against null values.
  • Fixed a stack overflow exception while creating a displayable representation of an object that contains circular references.
  • Fixed some potential memory leaks in MonitorEvents(), using a patch provided by Remo Gloor.
  • Sometimes the wrong name of the property or type was reported in a failure message.
  • ShouldHave().AllProperties().EqualTo() sometimes treated two objects that are functionally equivalent according to their Equals() override as different, simply because they were not of the same type.
  • Fixed the detection of collection items that appear in the wrong order in Should().ContainInOrder().

Release 1.6.0 can be downloaded from its CodePlex site, but I suggest you start using NuGet as your primary delivery mechanism.

Wednesday, October 12, 2011

In Retrospect: About the Sprint Planning

This is the second of several posts in which I’d like to share some of the things we learned throughout more than 14 sprints of Agile development using Scrum. Some of them might seem like stating the obvious, but I wish I had known or thought about them before I started that project. Judging by the mistakes a team of 10 made in a period of 12 months, they apparently aren’t that obvious. So after having discussed requirements management, let’s talk about the sprint planning.


Don’t ignore holidays and days off throughout a sprint

When calculating the velocity for a sprint, it’s quite common to determine the number of working days and multiply those by the estimated focus factor for that sprint. So if somebody’s holiday ends somewhere during the sprint, it seems you have everything covered. But don’t forget that people need time to get up to speed on what’s going on in that sprint. Many aspects are not written down, especially during sprint planning meetings, so somebody else has to spend time updating that person. So either remove an extra day from the available working days or adjust the focus factor.

Account for the (structured) absence of senior team members

This one is really about two scenarios. First off, any software development book will tell you that part-time team members are less productive than equally experienced full-time team members. The fact that somebody is always out of the office on Tuesday means he or she needs some time to get up to date on Wednesday. He may even have been working on something on Monday and not found the time to transfer the task to somebody else. That in itself may cause some slight delay, especially if somebody else depends on it.

The other scenario happens when the most senior developers are out of the office for a few days. Normally, those developers are the ones most capable of catching subtle but important design violations or critical bugs during peer reviews. However, when they’re not in, somebody else takes over the review tasks (hence: peer reviews), and they might not detect these subtle problems. If it’s a bug and you’re lucky, chances are that the skilled tester in your team will find it. But design issues (a.k.a. technical debt) may not become a problem until later in the sprint. Worse, it may even appear that the focus factor was higher than usual. But the fact of the matter is that those decisions will haunt you later in the project. Deal with that by reserving some time for ad-hoc reviewing or refactoring.

Have shorter planning meetings earlier in the day

Every sprint planning meeting we had to schedule after lunch was somehow less successful than the ones before lunch. Obviously, the lunch itself is a primary reason for that. But most people in my office start early, so by the time lunch has finished, they’ve been working for 4-5 hours already. By then, don’t expect them to stay focused for another 2-3 hours.

Everyone is responsible for keeping it short

Yes, it’s the Scrum Master who should help the team focus the sprint planning meeting on its purpose. But that doesn’t relieve anyone else from keeping the meeting short. Don’t start a lengthy discussion on how to functionally solve a problem if that’s not absolutely necessary. If the product owner hasn’t done his homework properly, force him to move that story to the next sprint. Also beware of technical people who like to let everybody know how much they know. Just propose to continue the discussion outside the meeting (they may still have something useful to share), or cut them off if they’re not willing to comply. Also, don't discuss details that won’t affect the estimation. Remember, a user story is a placeholder for having a more detailed conversation with the product owner at the last possible moment.

Help the product owner to focus the meeting

Here are a few things you can do to help prevent the product owner from wasting the time of the team in sprint planning meetings.

1. Don’t allow him to propose technical solutions

2. Don’t allow him to influence the estimations

3. Force him to use a checklist while preparing the user stories so that they include important aspects such as the How-to-Demo and the impact on other stories, and to make sure they comply with INVEST as much as possible.

4. Let him read this excellent summary of product owner responsibilities


Schedule breaks every 45 minutes

Regardless of the actual length of the meeting, schedule a short break regularly. People are simply not capable of staying focused for a prolonged time. Even if it seems the discussion is almost over and you ‘only need to discuss two more stories’, it never is. As an extra tip, make sure there is sufficient water and/or soda on the table. I’ve found that it can help extend the time people can stay focused.

Don’t discuss the technical solution during the meeting

We experimented a lot with the length of the meeting, with whether or not to discuss the technical details, and with when to start doing story point estimations. We even tried splitting it up into two parts, as explained in the excellent Scrum/XP from the Trenches. But what worked best for us is to schedule a single meeting in the morning, lasting about 2-3 hours, in which the entire team participates. We go through the stories selected by the product owner and try to identify the important aspects that might impact the effort required. We then estimate the size of the story in story points, and continue with the next story until the sprint is full enough.

Right after the meeting – but typically after lunch – 2 or 3 seniors sit together and scan the sprint backlog for the most risky or technically complex stories, and discuss those in more detail. They may even write down some notes about the proposed solution or note some functional questions that weren’t covered in the sprint planning. Then they use that new information to reassess the original estimate. Even though we do like to involve the entire team, we noticed that only the more experienced developers participate in the technical discussions anyhow.

Re-estimate the remaining stories from last sprint

Ideally your team will be doing everything they can to get stories to the done-done state. Unfortunately, in reality you are often faced with unexpected problems that prevent you from doing that. Obviously, those stories don’t count towards the final velocity calculation, but you still need to finish them. In earlier sprints we roughly reserved the first one or two days of the next sprint to finish those stories. But quite often we didn’t know exactly what still needed to be done. What has worked a lot better for us is to simply reschedule and re-estimate those incomplete stories just like any other product backlog item.

Consider combining user stories that deal with similar business rules

I remember a situation where we had to realize four different but related business rules. Because of their relative complexity, we decided to treat them as separate stories (using the Business Rule storyotype, obviously). We did consider potential conflicts, but it all seemed fine at the time of the meeting. You can guess what happened. Somewhere after implementing the fourth business rule we discovered some serious functional conflicts that we couldn’t solve without extensive discussions with the business and some significant rework. For this particular situation, we concluded that if we had combined them into one single story, we would have detected this much earlier.

Only accept stories that are well defined

As I mentioned before, a badly prepared user story can seriously hamper the sprint planning meeting as well as the sprint itself. We were forced to accept such stories a few times in the beginning of the project, and some of them seriously screwed up our sprints. Be especially suspicious when a product owner claims he still needs to discuss ‘some minor items’ with the end-user. It’s not uncommon for such a story to linger throughout the entire sprint and never reach the done-done state. Also beware of stories whose name resembles ‘various changes’. That’s why we decided not to accept any of those stories anymore. Again, to help the product owner, participate in regular backlog meetings or propose a user story checklist.

Include only a few bigger stories

Having too many bigger stories in a sprint increases the risk that your team won’t be able to finish the last story at the end of the sprint. A bigger story usually involves more review work, more rework and more testing effort, and suffers from a higher chance of estimation deficiencies. If you can’t get it to done, you won’t be allowed to include its story points in the sprint’s velocity. Therefore, make sure you have a few small low-risk user stories that you can use to finish your sprint in a clean way.

Don’t include too many critical stories

At some point, one of our sprints included quite a lot of technically complex stories that could only be done by the most senior developers. According to our product owner, those stories were essential for some business demo. For the same reasons I mentioned in the previous practice, we ended up not finishing an important one that also happened to be important for some of the other, less critical, stories. During the retrospective we decided that next time we would keep the number of critical stories low and move them to the beginning of the sprint. Additionally, we decided to require pair programming for all critical stories.

Plan a story for every deployment

Throughout the project we occasionally had to deploy a snapshot of the system to an online demonstration server. As we were using automated deployments anyhow, that usually wasn’t supposed to take a lot of time. We didn’t get the time to build a sufficient level of code coverage for the presentation logic, so we always explicitly noted that bugs might still occur. But somehow the product owner almost always managed to demand some last-minute changes, such as modified test data, disabling some features that weren’t working yet, and other minor changes he deemed necessary for a successful demo. Even if you employ automated testing of the entire code base, I would still schedule some time for that demo. If you don’t need it, you’re lucky and you might finish an additional story. If you do need it, at least you’ve covered your bases.

Don’t let anyone influence estimations

Yes, guilty as charged. In fact, I remember a particular story that involved some non-trivial changes to the architecture. As the architect, I had already spent some time doing a bit of preliminary design upfront and had a fairly good idea how to solve that particular problem. So during the sprint planning, I explained the rough outlines of that solution and asked for a group estimation. The team responded with a much larger figure than I had expected, which prompted me to explain why I thought it was “quite easy to do”. Obviously I badly influenced the re-estimation with that. You can guess how well that went for us… Anyway, beware of words like "just", "only", "nothing more than", etc. And don’t allow people to overreact to other people's estimations ("What?? That much!"), as this will surely influence the estimations.


Postpone the task breakdown until the moment the story is picked up

In the beginning of the project we still broke down a story into tasks as part of the sprint planning meeting. Then, right after that meeting, we refined all those tasks and assigned estimations in hours matching the number of story points assigned to that story. However, we quickly discovered that those tasks were typically not well defined, or did not match the real work well enough. That made us change our tactics (and we still do it like this) by postponing the breakdown until somebody actually starts working on the story. At that point, a minimum of two developers discuss the details and manage to produce a much better breakdown. To keep the benefit of a burn-down chart, we do assign a placeholder task to every story with the total number of hours available for that story.

Don’t assign all working hours to the story

Suppose you have estimated or determined your team’s focus factor to be 50%; then each story point takes 16 man-hours to complete (an ideal day of 8 hours divided by the 50% focus factor). You might be tempted to apply the same calculation to the tasks of a particular story. However, we’ve found it wiser to assign only 75% of those hours to the tasks and keep 25% as slack for any continuous or intermittent activities. In our team, it also appeared to have a nice side-effect: developers tend to try to stay within the assigned hours of a task. So even if you have a setback, it won’t immediately blow up your sprint. As an example, consider a story that is estimated at 5 story points with a focus factor of 50%. We will divide 5 SP * 8 hours-per-day / 50% * 75% = 60 man-hours over the tasks of that story.

Reserve a task for the peer review

Peer reviews should be part of the ordinary work required to finish a story, and thus also part of the definition-of-done. Earlier in our project we didn’t explicitly reserve any time for them and assumed the developers would account for that themselves. But after being constantly faced with a lot of rework after all tasks were supposedly done, we started reserving about 25% of the story’s hours for a dedicated task named Review & Rework. The complexity of the story ultimately determines whether we assign the entire 25% or reduce that value a bit.

Complete the sprint planning meeting with a sanity check

At the end of the sprint planning, ask the team to reevaluate the sprint backlog for a minute. Let each of them decide for themselves whether the entire list of stories selected for that sprint still seems feasible. We usually use this moment of reflection to see if there are any undesired dependencies, or if we need to reorder stories to move some complexities to the beginning of the sprint.

And with completing the sprint planning I’ve completed part two of this series. Remember that these are the experiences of me and my team, so they might not work as well for you as they did for us. Nevertheless, I’m really interested in learning about your own experiences, so let me know by commenting on this post or tweeting me at @ddoomen. Next time, I’ll be talking about how we deal with bugs.

Wednesday, September 28, 2011

Silverlight Cookbook: Switching to another IoC Framework

The Rationale

As long as I have been using the Dependency Inversion Principle, Microsoft Unity has always been my preferred Inversion-of-Control framework. So it’s no surprise that the Silverlight Cookbook has been using Unity 2, both in its WCF/REST layer and within the Silverlight client. I never even bothered looking at other frameworks, with the exception of Ninject, which I used in a Windows Mobile 5 project, and StructureMap, of which I learned a lot while reading Jeremy D. Miller’s many posts on design patterns…

…until I read this post in which Philip Mateescu compared the performance of the most popular IoC frameworks…and declared Autofac the clear winner…

Let’s be honest though. I’ve always liked Unity and managed to use it to solve all my IoC and AOP problems without too much hassle. However, an analysis of some performance issues in my latest project revealed that its AOP features were not the fastest available. Still, I would never simply change my strategy based on a single post. In fact, some claim that the comparison was faulty in the first place. But after browsing through the Autofac documentation I became quite fond of Autofac’s approach. I’ve always abided by the “Microsoft, unless…” philosophy, but my engineering heart couldn’t resist the temptation to try to introduce Autofac into the Silverlight Cookbook.

So what are the advantages?

Separation between configuration and resolution

To be more precise, you use a ContainerBuilder to set up the dependencies like this:

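The original screenshot is lost, but a typical registration looks roughly like this sketch (the registered types are hypothetical, not the Cookbook’s actual ones):

    var builder = new ContainerBuilder();

    // Each registration maps an implementation to the service it exposes
    builder
        .RegisterType<RecipeRepository>()
        .As<IRecipeRepository>()
        .InstancePerLifetimeScope();

    builder.RegisterType<AddNewRecipeHandler>().AsImplementedInterfaces();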

And then you use its Build() method to construct an IContainer that you can no longer change. (Well, strictly speaking you can, but that doesn’t mean you should.) Notice the fluent interface, one of the aspects of Autofac I like quite a lot.

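Here too the screenshot is gone; assuming the builder from the previous sketch, the resolution side looks like this:

    IContainer container = builder.Build();

    // From here on you only resolve; all configuration happened before Build()
    var repository = container.Resolve<IRecipeRepository>();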

This separation is actually the biggest reason why it took me so much time to migrate the Cookbook. You have to organize your setup code so that all registrations happen in the same place. But by doing so, I found that my design actually became cleaner, with better separation of concerns. So, unlike what you might expect, I see this as an advantage rather than a disadvantage. But do beware of this when you consider migrating from Unity to Autofac.

Implicit support for factory methods

Check out the updated version of the AddNewRecipeHandler:

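The screenshot didn’t survive, but based on the description, the handler now takes a Func<T> dependency along these lines (the body is my own hypothetical reconstruction):

    public class AddNewRecipeHandler
    {
        private readonly Func<IUnitOfWork> uowFactory;

        // Autofac injects this factory delegate automatically; nothing to configure
        public AddNewRecipeHandler(Func<IUnitOfWork> uowFactory)
        {
            this.uowFactory = uowFactory;
        }

        public void Handle(AddNewRecipeCommand command)
        {
            // Every call to the delegate yields a fresh unit-of-work,
            // and Autofac tracks its disposal
            IUnitOfWork unitOfWork = uowFactory();
        }
    }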

So instead of introducing a dedicated factory interface, you can simply take a dependency on a Func<T>, where T is your actual dependency. Autofac will automatically inject a delegate that you can use to create new instances of that dependency at will, and you don’t have to configure anything for that. It’s the caller that decides what kind of dependency it needs. And that’s not all: Autofac will keep track of any object you create that requires explicit disposal through its IDisposable interface. If you don’t want that, simply replace Func<T> with Owned<T>, which is Autofac’s way of giving you control. If you want to make this permanent for a particular registration, append ExternallyOwned() to the registration.

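A sketch of those two variants (hypothetical types again; ownedUowFactory would be an injected Func<Owned<IUnitOfWork>>):

    // Owned<T>: the caller now controls disposal explicitly
    using (Owned<IUnitOfWork> uow = ownedUowFactory())
    {
        // uow.Value exposes the actual unit-of-work
    }

    // Or opt out per registration, so Autofac never disposes these instances
    builder.RegisterType<NHibernateUnitOfWork>().As<IUnitOfWork>().ExternallyOwned();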

Collections of dependencies

Another feature I’ve always missed in Unity is support for taking a dependency on all registered instances of some type. And a big difference compared to Unity is that you can register as many implementations and instances of an interface as you want, without the need to specify a unique name for each.

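As a substitute for the lost example, a sketch with made-up notifier types:

    // During configuration; no unique names needed per implementation
    builder.RegisterType<EmailNotifier>().As<INotifier>();
    builder.RegisterType<TwitterNotifier>().As<INotifier>();

    // Any consumer can then take a dependency on all of them at once
    public NotificationService(IEnumerable<INotifier> notifiers)
    {
        this.notifiers = notifiers;
    }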

Again, you don’t have to think about that during registration. It’s all part of Autofac’s extensive support for relationship types, which includes things like Lazy<T>, IIndex<T> or even Func<X, Y, B> if you need to parameterize the dependency somehow. And obviously you can combine those types to create some pretty advanced dependencies (although I wonder if you should).

Modules

If you have to combine all type registrations in a single location, you may wonder, isn’t that code going to be very difficult to understand (and maintain)? Well, no. Autofac includes the notion of combining registrations in so-called Modules. In the Cookbook I’ve used that mechanism to combine everything related to supporting the creation of units-of-work using NHibernate. So rather than code like this:

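The ‘before’ screenshot is lost; presumably it spelled out every NHibernate-related registration inline, along these lines (hypothetical type names):

    builder.RegisterType<SessionFactoryBuilder>().As<ISessionFactoryBuilder>().SingleInstance();
    builder.RegisterType<NHibernateUnitOfWork>().As<IUnitOfWork>();
    // ...and several more unit-of-work related registrations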

I can now do something like this:

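And the ‘after’ version, with those registrations folded into the CookbookUnitOfWorkModule (its body below is my sketch):

    public class CookbookUnitOfWorkModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            builder.RegisterType<SessionFactoryBuilder>().As<ISessionFactoryBuilder>().SingleInstance();
            builder.RegisterType<NHibernateUnitOfWork>().As<IUnitOfWork>();
        }
    }

    // A single call now replaces the whole block of registrations
    builder.RegisterModule(new CookbookUnitOfWorkModule());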

Although the CookbookUnitOfWorkModule contains two more complex classes in its hierarchy, which I introduced to simplify working with both SQL Server and SQLite, the module concept makes it a breeze to work with.

Assembly Scanning

What I particularly liked about the Managed Extensibility Framework is its support for scanning a directory of assemblies and automatically registering specific types. In the previous version of the Cookbook, I wrote code by hand to automatically find my command handlers, to overcome Unity’s lack of such functionality. Luckily, Autofac does include assembly scanning out-of-the-box, and now I can do this:

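In place of the lost screenshots, a typical scanning registration looks roughly like this (the handler naming convention is my assumption):

    // Requires using System.Reflection;
    builder
        .RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
        .Where(type => type.Name.EndsWith("CommandHandler"))
        .AsImplementedInterfaces();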

No attributes

Yes, Autofac does not contain an equivalent of Unity’s [Dependency] attribute, and I think that’s great. In fact, Autofac’s preferred dependency injection mechanism is constructor injection. It will never inject (unset) properties unless you explicitly configure it to do so upon registration.

I’ve never liked property/setter injection because it allows an object to have too many dependencies. According to my own coding guidelines and those stated by Clean Code, no member should ever have more than three parameters, not even the constructor. If you need more than that, chances are your class has too much responsibility. So beware: that is another pitfall when you switch from Unity to Autofac.

Surely not everything is that great?

Well, in the beginning of my migration attempt I was a bit taken aback by the lack of property injection, the fact that you cannot create proxies of objects to intercept at any point in your code, and the explicit separation of configuration and resolution. However, after refactoring my code to accommodate those changes, I noticed that I actually started to like those constraints. All in all, my code base has significantly improved because of them. But by now, it should be clear that a migration to Autofac may not be the best thing to do for most projects, unless you’ve been clearly separating responsibilities from the start. Looking back at all the code bases I’ve seen in my career, I think that ideal situation is not something you’ll encounter often…

This article is part of a series of posts dealing with all the choices and solutions used in the Silverlight Cookbook. If you have any comments, let me know by commenting or sending me a tweet on @ddoomen.

Monday, September 26, 2011

In Retrospect: About Requirements Management

This is the first of several posts in which I’d like to share some of the things we decided throughout 14 sprint retrospectives. Some of them might seem like stating the obvious, but I wish I had known or thought about them before I started that project. Judging by the mistakes a team of 9 developers and one tester made in a period of 12 months, they apparently aren’t that obvious.

To provide some context: this was a project involving the development of a suite of configurable and extensible products, built with ASP.NET WebForms and executed using Scrum and XP. I was the Scrum Master, architect and lead developer (although those latter roles don’t officially exist in Scrum).

Groom your product backlog
The product backlog should be your single source of truth in terms of which functionality to build in what order. You should not keep any other lists than that backlog, particularly if you’re the product owner of the team (like ours). In other words, whoever asks you about the status of the project, and whenever they ask, you should be able to deduce it from the backlog. You should maintain the order of the stories continuously, not only before a sprint planning meeting or when some manager requests a status. As a team member, beware that you never work on something that is not on the product backlog. And yes, that means that even technical work needs to be represented by a story.

Create a user interface mockup
Early in our project, we discovered that most developers are generally poor UI designers. Not only did they create rather non-intuitive interfaces, those interfaces also required a lot of rework. At that point we decided that a user interface mock-up was a requirement for every story involving non-trivial UI changes. That helped not only during the sprint itself, but also during the sprint planning meetings. Having a visual representation of a requirement can significantly improve the estimations.
One note though. Even if your product owner is a master in Photoshop (ours was), beware that a detailed mockup might cause your team to spend too much time on getting the actual product pixel-perfect. That’s why I prefer tools like Balsamiq. They leave much more room for working towards a good-enough user interface.

Worship the Ubiquitous Language
The Ubiquitous Language is a practice originating from Domain-Driven Design that forces all stakeholders involved in a project to use the same terms for the same concepts, and different terms for different concepts, everywhere. That sounds rather trivial, but in reality it isn’t. Especially within larger corporations it is quite common to see those rules violated. We’ve learned that investing in this requires quite a lot of discipline, but it has saved us from many interpretation problems. Just make sure that whenever a term changes (e.g. because of new insights), you change it in documentation, in code, in unit tests, everywhere.

Don’t waste time on a glossary
Considering the importance of the Ubiquitous Language, we thought that keeping a glossary with all those terms (and the differences between our customers) was worthwhile. However, after a few sprints that glossary became hopelessly out-of-date. When we discussed this in the retrospective, we found out that most people in the group were using the coded domain model as the definitive source for any concept or term. So if you go that direction, make sure the domain model is properly documented.

Define the what, who and why of each story
The best understood stories were those that listed the what, the who and the why. In other words: what person or role is going to benefit from a specific feature, what does it allow him or her to do, and why does he or she need it. Don’t underestimate that last crucial part. More than once, it allowed us to come up with a better solution than the product owner initially thought of. In fact, we even managed to get rid of some stories because some other story already allowed that person to do the same thing in a better way.

All stories must have a proper How-to-Demo
The how-to-demo is a short list of steps that illustrate what a story is about. Initially our stories didn’t include a how-to-demo, and the product owner had to spend a lot of time explaining to the team how a story affected the system. Worse, it was not uncommon for the team to lose track of the story’s purpose later on. But after introducing the how-to-demo for every new story, the situation improved a lot. In fact, when we started discussing the how-to-demo during the sprint planning meetings, the team managed to reveal several inconsistencies and/or oversights in the product’s feature set.

Schedule regular product backlog meetings
One of the most defining aspects of this project was how little time the product owner had available for the team, combined with the pressure his customer was applying on him to deliver new functionality. This resulted in two symptoms. First, the stories scheduled for the sprint planning meeting were not of sufficient quality (e.g. missing how-to-demos, functionality that wasn’t thought through, or stories that still required a mockup). Secondly, at random and usually very inconvenient moments, he wanted to discuss change requests and get preliminary estimations for them. We worked around this by introducing weekly backlog meetings that we used to discuss new change requests and/or enhance the backlog with missing details.

Well, that’s it for this first short episode in this series of blog posts. Remember that these are the experiences of me and my team, so they might not work as well for you as they did for us. Nevertheless, I’m really interested in learning about your own experiences, so let me know by commenting on this post or tweeting me at @ddoomen. Next time, I’ll be sharing my best practices for efficient sprint planning meetings.

Saturday, September 17, 2011

So what does Windows 8 mean for .NET developers?

Last updated on September 22nd

Unfortunately, the Microsoft Build conference conflicted with our company's 5-year anniversary and the associated sailing trip in Greece.

Fortunately, the blogosphere and Twitter-space provided plenty of opportunities for trying to grasp what the stuff Sinofsky and his team have been sharing in Anaheim means for us developers. I tried to read as much as is available right now, and this is my interpretation of that news. But first, consider this improved diagram (created by Doug Seven) illustrating his interpretation of the Windows 8 story.

Metro apps

  • Are supposed to exist side-by-side with desktop apps (at least, for the next few Windows versions).
  • Run in a kind of sandbox and don't have access to desktop apps
  • Are suspended within 5 seconds after the user has switched to another app. This should increase battery life and keep the system responsive.
  • Cannot use overlapping dialog boxes
  • Can only be distributed through the upcoming Microsoft app store, and require verification and signing before any app is allowed. I don’t know whether there are any other mechanisms for distribution.
  • Can implement specific APIs for easy exchange of files and other data streams (e.g. you don't have to download a file to a disk first to use it in another app)
  • Can be built using three sets of technologies, all running against the new WinRT API. Consequently, Metro apps can be built regardless of which technology you have been investing in:
    • C/C++ with XAML
    • C#/VB with XAML and .NET 4.5/CLR
    • JavaScript with HTML5/CSS
  • Microsoft introduced a specialized version of Expression Blend for HTML that you can use for building HTML/CSS/JavaScript apps.
  • Microsoft apparently didn't mention jQuery, but it seems to work after all.
  • JavaScript apps will run using the Internet Explorer 10 engine, which doesn't allow any plug-ins to run. This has nothing to do with IE10 as you would use it as a desktop app; there, all plug-ins still work, including Silverlight, Adobe and other traditional plug-ins.

WinRT

  • The following diagram extracted from the Lap Around the Windows Runtime session by Martyn Lovell provides a more in-depth view of the runtime.
  • WinRT is a fully object-oriented API next to Win32 that talks directly to the Windows Kernel. It is based on a modern version of COM, but that fact is completely hidden away.
  • WinRT is created for the fast and fluid experience required for Metro apps, so any operation that might take longer than 50ms to complete is only available through an asynchronous model (see the sketch after this list).
  • The WinRT objects are exposed using language projections at compile-time (C++ and .NET) and/or run-time (JavaScript and .NET)
  • WinRT provides language independent primitive types for integers, enums, (immutable) strings, arrays and interfaces. They are projected to the corresponding language-specific types.
  • WinRT uses a metadata system similar to .NET Reflection based on COM’s IUnknown and a new IInspectable interface.
  • Apps cannot expose their objects to other apps, other than through the earlier mentioned communication contracts.
  • If you develop in C#/VB, you'll be running against the full .NET Framework, but the API is filtered to what WinRT can provide, similar to how the .NET Framework Client Profile works. You could still use Reflection to access the hidden parts, but such apps will not be accepted by the app store.
  • You can use C++ and .NET to build WinRT components and then use them from all three development models (including JavaScript). Doing so does impose some limitations on your classes.
  • The UI runs on a single non-reentrant thread, but an app can still use the thread pool.
  • Existing Silverlight apps only require a few minor changes to some namespaces and to any networking code to be able to run as Metro apps.
  • Many open-source Silverlight libraries are already working on supporting WinRT. Caliburn Micro, for instance, already supports some parts of WinRT.
  • Make sure you read the excellent in-depth analysis of WinRT by Miguel de Icaza and the threats/opportunities analysis by Steven Smith.
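To give a feel for the asynchronous-only model mentioned in the list above, here's a minimal C# sketch based on the developer preview bits; the exact API surface may still change before release:

    // Assumes the await support in .NET 4.5; picking a file can be slow,
    // so WinRT only exposes it asynchronously
    private async void OnOpenPhoto()
    {
        var picker = new Windows.Storage.Pickers.FileOpenPicker();
        picker.FileTypeFilter.Add(".jpg");

        Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();
    }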

This also means that Metro apps have nothing to do with Silverlight, WPF or any other part of .NET for full-blown desktop apps. In fact, Windows 8 ships with the .NET Framework 4.5, which includes a shipload of improvements, even to WPF. Everything discussed around WinRT and Metro is about building specialized apps specifically targeting Windows 8. As far as I'm concerned, any posts or discussions talking about the death of the .NET Framework or Silverlight (shame on you, InfoQ!) don’t (want to) get the whole picture.

Wednesday, September 14, 2011

Yes, the Silverlight Cookbook is still alive

Although it might not seem like it, I am still working on the Silverlight Cookbook. However, I’ve just moved to a new house, so I’ve been short on spare time recently. Fortunately, my colleague Martin Opdam is actively working on Fluent Assertions, so I only have to divide my free time between the cookbook and my attempt to blog about the things I’ve learned in my latest Agile project.

As we speak, I’m trying to replace Unity with Autofac, but that has not been an easy task. Not because Autofac is difficult to work with, but because Unity has allowed me to sneak in some solutions that don’t really match the SOLID principles.

Regardless, to give you an idea of the things I’ve been planning, here’s my internal backlog.

  • Figure out how to pass exceptions in the Command/Query agent to the current VM
  • Keep client exceptions in IsolatedStorage and retry sending the error to the server
  • Refactor the CommandService into a HandlerRegistry and a CommandExecutor
  • Support functional keys in the AttributeMappedCommandHandler
  • Module loading
  • Authentication
  • Authorization
  • Introduce a domain specific object for setting the recipe rating
  • Unwrap TargetInvocationException
  • Properly deal with transactions
  • Support automapping using overloaded aggregate root methods
  • Intercept database connection problems
  • Asynchronous validation of title uniqueness via INotifyDataError
  • Introduce NotifyDataErrorBase
  • Introduce a mechanism for client exception handling
  • Support warnings
  • Silverlight 5

Speaking at Developer Developer Developer North

I’m honored to have been selected to speak at my first event in the United Kingdom, Developer Developer Developer North, hosted at the University of Sunderland, near Newcastle.

Our job as software developers seems to revolve mostly around programming languages, frameworks and Visual Studio. And to be honest, most of us have our hands full with that already. However, our profession includes a whole bunch of best practices that can seriously improve the efficiency and effectiveness of you and your teams, help you deliver at a higher quality, or involve the business even more.

What am I talking about? Well, think about Unit Testing (and TDD/BDD), Peer Reviews, Daily Builds & Continuous Integration, Brown Paper Sessions, Coding Standards, Common Code Layout, Static Code Analysis, Refactoring, Evolutionary Design, Checklists and Pair Programming. At DDD North I’ll be talking about what those practices entail and why you should include them in your toolbox.

Friday, July 29, 2011

Why I created Fluent Assertions in the first place

A few weeks ago I read The value of open-source is the vision not the source code, and it made me think about my own reasons for starting Fluent Assertions, now more than a year ago. In the light of that article, let’s briefly go over my own goals for Fluent Assertions.
The intention of a unit test must be clear from the code
An intention-revealing unit test has the following characteristics:
  1. The arrange, act and assertion parts are clearly distinguishable
  2. The name represents the functional scenario the test verifies
  3. The assertion provides sufficient context to understand why a specific value is expected
For instance, consider this small test.

    [TestMethod]
    public void When_assigning_the_endorsing_authorities_it_should_update_the_property()
    {
        //-----------------------------------------------------------------
        // Arrange
        //-----------------------------------------------------------------
        Permit permit = new PermitBuilder().Build();

        //-----------------------------------------------------------------
        // Act
        //-----------------------------------------------------------------
        permit.AssignEndorsingAuthorities(new[] { "john", "jane" });

        //-----------------------------------------------------------------
        // Assert
        //-----------------------------------------------------------------
        permit.EndorsingAuthorities.Should().BeEquivalentTo(new[] { "john", "jane" });
    }
Yes, I agree that it's a very simple test and that this scenario should be tested as part of a larger scope. Nonetheless, it clearly illustrates my three points. Now consider this test.

    [TestMethod]
    public void When_a_substance_is_specified_that_is_not_required_it_should_throw()
    {
        //-----------------------------------------------------------------
        // Arrange
        //-----------------------------------------------------------------
        var permit = new PermitBuilder().Build();
        var someUser = new UserBuilder().Build();

        var dataMapper = new InMemoryDataMapper(permit, someUser);
        var service = new CommandServiceBuilder().Using(dataMapper).Build();

        //-----------------------------------------------------------------
        // Act
        //-----------------------------------------------------------------
        try
        {
            service.Execute(new AddMeasurementsCommand
            {
                Id = permit.Id,
                Version = permit.Version,
                Username = someUser.Username,
                Measurements = new[]
                {
                    new MeasurementData("Oxygen", 1.1d, new DateTime(2011, 4, 13, 16, 30, 0))
                },
            });

            Assert.Fail("The expected exception was not thrown");
        }

        //-----------------------------------------------------------------
        // Assert
        //-----------------------------------------------------------------
        catch (InvalidOperationException exc)
        {
            Assert.IsTrue(exc.Message.Contains("not required"));
        }
    }
It tests a particular business action against one of its business rules. It's quite easy to understand, but there's still a lot of noise caused by the try…catch construction and the additional Assert.Fail() needed to assert that an exception was thrown at all. Less important, but still a bit obscure, is the usage of the DateTime constructor. With Fluent Assertions, we can do better.
    //-----------------------------------------------------------------
    // Act
    //-----------------------------------------------------------------
    Action action = () => service.Execute(new AddMeasurementsCommand
    {
        Id = permit.Id,
        Version = permit.Version,
        Username = someUser.Username,
        Measurements = new[]
        {
            new MeasurementData("Oxygen", 1.1d, 13.April(2011).At(16, 30)),
        },
    });

    //-----------------------------------------------------------------
    // Assert
    //-----------------------------------------------------------------
    action
        .ShouldThrow<InvalidOperationException>()
        .WithMessage("not required", ComparisonMode.Substring);

Something similar is also possible for verifying that events have been raised, with special support for the INotifyPropertyChanged interface that is so common in Silverlight and WPF projects. I blogged about that earlier this year.
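As a rough idea of how that reads (a sketch only; the view model is hypothetical and the exact signatures may differ from the post I'm referring to):

    var viewModel = new RecipeViewModel();
    viewModel.MonitorEvents();

    viewModel.Title = "Saltimbocca";

    viewModel.ShouldFirePropertyChanged("Title");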
A unit test that fails should explain as clearly as possible what went wrong
Nothing is more annoying than a unit test that fails without clearly explaining why. More often than not, you need to set a breakpoint and fire up the debugger to figure out what went wrong. Jeremy D. Miller once gave the advice to "keep out of the debugger hell" and I can only agree with that.
For instance, only test a single condition per test case. If you don't, and the first condition fails, the test engine will not even try to test the other conditions. And if any of the others fail, you're on your own to figure out which one. I often run into this problem when developers combine multiple related tests that exercise a member with different parameters into one test case. If you really need to do that, consider using a parameterized test that is called by several clearly named test cases.
Obviously I designed Fluent Assertions to help you in this area. Not only by using clearly named assertion methods, but also by making sure the failure message provides as much information as possible. Consider this example:
"1234567890".Should().Be("0987654321");

This will be reported as:

[screenshot of the failure message]

The fact that both strings are displayed on separate lines is deliberate, and happens when either of them is longer than 8 characters. However, if that's not enough, all assertion methods take an optional formatted reason with placeholders, similar to String.Format, that you can use to enrich the failure message. For instance, the assertion

new[] { 1, 2, 3 }.Should().Contain(item => item > 3, "at least {0} item should be larger than 3", 1);
will fail with:

[screenshot of the failure message]
The code itself should be a great example of what high-quality code should look like
This goal should be rather obvious. I don't just want to deliver a great framework; I also want people to learn from it. And isn't that the single biggest reason why people join open-source projects? However, the article I mentioned before states that the vision behind an open-source project should be much more important than the quality of the source code. As you might expect, I don't agree with that. In fact, one of the challenges I ran into was my desire to control all code contributions so that they complied with Clean Code, my own coding guidelines and my ideas about unit testing.
At first the article made me doubt the approach to take, but then I decided that the quality of the code was just as important. I now respond to all contribution requests with some information on the way I'd like to see FA evolve. Additionally, I've set up a dedicated contribution branch that contributors can use. Depending on the quality of their contribution, merging it into the main branch requires a corresponding amount of work on my side. I know that this might keep some contributors away, but up to now, most of them agreed with my approach and were willing to deliver high-quality code.

Tuesday, July 05, 2011

Fluent Assertions 1.5 is done! Now it's time for summer.

In the last couple of months, my colleague Martin Opdam and I have spent a considerable amount of time on improving the reporting capabilities of Fluent Assertions as well as fixing and incorporating various community contributions. Because of the many changes required, a very busy client project, and my endeavors around moving to a new house, it took us over four months to complete this version. So let's quickly highlight the changes.


So what's new?

A contributor working under the CodePlex account MatFiz added the missing counterpart of string.Should().Contain(), consistently named string.Should().NotContain(). And while he was doing that, he also included the case-insensitive versions string.Should().ContainEquivalentOf() and string.Should().NotContainEquivalentOf(). He also included fluent equivalents of string.IsNullOrWhiteSpace() through string.Should().BeBlank() and string.Should().NotBeBlank().
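For example (the strings here are purely illustrative):

  "Hello World".Should().NotContain("Universe");
  "Hello World".Should().ContainEquivalentOf("hello");   // case-insensitive
  "   ".Should().BeBlank();
  "Hello".Should().NotBeBlank();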

Asserting that a string matches a particular wildcard pattern can now be done using string.Should().Match() and the case-insensitive MatchEquivalentOf(), both supporting the familiar * and ? characters. You might wonder why we chose ordinary wildcards rather than regular expressions, and the answer has everything to do with keeping your tests intention revealing. I haven't met a lot of developers who know all the regex escape codes by heart. But I do know a few who are capable of writing some very obscure (but technically correct) expressions. And that isn't going to help the majority of developers.
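A quick sketch of how that reads (* matches any number of characters, ? matches exactly one):

  "hello world".Should().Match("hello*");
  "Hello World".Should().MatchEquivalentOf("hello?world");   // case-insensitive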

Another request involved asserting that an object can be serialized and then deserialized using the binary or XML formatters. I never felt the need for something like that myself, but I decided to include object.Should().BeBinarySerializable() and object.Should().BeXmlSerializable() anyhow. Both of them will serialize an object to a MemoryStream using the corresponding serializer and then use FA's property comparison assertions to compare the properties of the deserialized object with those of the original. Notice that this only works for objects that support full round-trip serialization.
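A minimal sketch of how you might use it (the Settings class is made up for this example and must support full round-trip serialization):

  [Serializable]
  public class Settings
  {
      public string Name { get; set; }
  }

  // Serializes to a MemoryStream, deserializes, and compares all properties
  new Settings { Name = "test" }.Should().BeBinarySerializable();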

Martin introduced the possibility to execute various assertions on IDictionary<TKey, TValue>, including checking for specific keys, values or a combination of both. At roughly the same time, I added support for IComparable<T> through methods such as comparable.Should().BeLessThan(), BeGreaterOrEqual(), BeNull() and BeInRange().
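For example (illustrative only):

  var ages = new Dictionary<string, int> { { "john", 30 } };
  ages.Should().ContainKey("john");
  ages.Should().ContainValue(30);

  5.Should().BeLessThan(6);
  5.Should().BeInRange(1, 10);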

One of the things that has always annoyed me is that when you assert that an exception was thrown with a particular message, you have to specify the entire message, including punctuation and whitespace. Usually I don't really care about those specifics and only need to verify that part of the message matches. Since we introduced wildcard-based string matching in this release anyway, it didn't require a lot of work to support this:

  action.ShouldThrow<ArgumentOutOfRangeException>().WithMessage(
      "code InvalidCode does not match a known", ComparisonMode.Substring);

Or this:

  action.ShouldThrow<AssertFailedException>().WithMessage(
      "Expected object*World*to be less than*City*because a city is smaller than the world.",
      ComparisonMode.Wildcard);

Another neat little new feature is a fluent API for specifying dates and times. For example, consider this:

  var period = new Period(
      new DateTime(2011, 2, 1, 8, 0, 0), new DateTime(2011, 2, 1, 18, 0, 0));

If you're into fluent interfaces like me (if not, why would you be here anyway :-)), this is a much better read, don't you think?

  var period = new Period(1.February(2011).At(08, 00), 1.February(2011).At(18, 00));

What property?

Because I don't like spending time in my debugger, the majority of this release was spent on the reporting side. I've tried to make sure that FA's failure reports are as clear and intention revealing as possible.

For instance, object.ShouldHave().AllProperties().EqualTo(object) now internally uses the equality assertions appropriate for the type of property. This significantly improves the details reported for a string or IEnumerable<T> property.

[Screenshot of the improved failure details for a string property]

The property assertion also throws with a clearer explanation when the types of equally named properties are not convertible.
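As a sketch of the API involved (the Customer type is made up for this example):

  var actual = new Customer { Name = "John", Birthday = 20.September(1973) };
  var expected = new Customer { Name = "John", Birthday = 20.September(1973) };

  // Compares every public property, using type-appropriate equality assertions
  actual.ShouldHave().AllProperties().EqualTo(expected);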

Where's the difference?

Another area that has been improved is the way string differences are reported. This applies both to exception message assertions and to direct string comparisons. The exception assertions will always display the expected and the actual messages on separate lines so that it is easier to spot the difference.

[Screenshot of the expected and actual exception messages on separate lines]

String assertions will do the same, but only if one of the involved strings contains a newline or is longer than 6 characters. As you've probably already noticed, line breaks and other special characters are escaped now.

An object is not an object

Something I already benefited from while dogfooding an intermediate release in a client project is that FA will now display the public properties of the involved objects whenever an assertion failure occurs.

[Screenshot of a failure message listing the object's public properties]

Before this change, the only thing you got was a message along the lines of "Expected object classname to be equal to classname, but they weren't". This will surely help in tracking down problems in your production code. Notice that FA will only do this if the object involved doesn't override ToString(). And since the collection assertions now use the same formatting infrastructure as the rest of FA, collections of objects will also be displayed using the actual structure of the object. In fact, it even supports nested collections.
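As an illustration, assuming a made-up Customer class that does not override ToString(), a failing assertion like this will list the public properties of both objects in its message:

  var john = new Customer { Name = "John" };
  var jane = new Customer { Name = "Jane" };

  // Fails; the message will show the Name property of both objects
  john.Should().Be(jane);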

Did I break anything?

Upon specific request from a current user, I changed the behavior of the collection.Be(Not)SubsetOf() method so that an empty set is now treated as a subset of any set. Something similar happened to the collection.Be(Not)EquivalentTo() method; an empty set is now treated as equivalent to another empty set. So if you find that some of your tests start to fail after the upgrade, make sure they don't rely on the old behavior.
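In other words, assertions like these now pass (illustrative):

  new int[0].Should().BeSubsetOf(new[] { 1, 2, 3 });
  new int[0].Should().BeEquivalentTo(new int[0]);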

Also, if you have been extending FA using the Execute class, pay particular attention to the release notes. We've changed the way you need to refer to the reason (the "because" argument) of an assertion and marked the old API as obsolete.

How do I get started?

Just install NuGet and download the latest version of Fluent Assertions from its corresponding NuGet page. If you don't want to use NuGet, then download it directly from CodePlex.