Sunday, November 02, 2008

Visual Studio 2010 is huge!

During this week’s PDC 2008, I’ve been paying particular attention to sessions related to the new features of Visual Studio 2010 Team System and Team Foundation Server 2010. The one statement almost every session started with was that this release is huge, and I totally agree! I’ve not seen so many new development environment features since the introduction of Visual Studio .NET in 2002. And best of all, almost all of them are intended to integrate the many roles team members typically work in.

Microsoft .NET 4.0 Logo

To give you a feeling for the many new features, I compiled a comprehensive list. Note that it only includes things related to Visual Studio itself or the Team System features; I’ve tried to keep application- and technology-specific items out of it.


General IDE

  • Custom folders within the Reports, Builds and Work Items nodes.
  • Most toolbars have a revamped design or have been split up so that they only appear when necessary.
  • Multi-monitor support
  • VSTS now uses WPF as the technology for rendering the user interface, allowing some great visual effects such as a rich editor for code comments and better integration with bug tracking.
  • More advanced code generation à la ReSharper (generate class, generate stub). Its intention is to facilitate TDD by introducing shortcuts for generating stubbed classes, methods and other members. Hopefully, JetBrains will start working on a 2010-compatible version soon.

Work Item Tracking

  • Work item queries support hierarchical views to see parent-child relationships. You can even use drag-and-drop to change the order in which tasks occur.
  • The work item description and history fields now support rich editing features.
  • The layout of the bug entry form has been redesigned to use the available desktop space more efficiently. The history and the description are on the same page now, and many fields and drop-downs have been shrunk to give a better overview.
  • The work item query editor now supports a filter that specifies which child work items should be displayed in the tree view. That same filter can also be used to show only work items that do (or do not) have specific child work items.
  • You can now filter work items on groups of users, so you no longer need to use Areas to organize your work items by team.
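Conceptually, the child-based filtering described in the last two bullets boils down to evaluating a predicate over each work item’s children. A minimal sketch of that idea (in Python; the class and function names are mine, not the TFS API):

```python
# Hypothetical model of a hierarchical work item query filter.
# Not the TFS API -- just a sketch of the "has (or has no) matching
# children" semantics described above.

class WorkItem:
    def __init__(self, title, kind, children=None):
        self.title = title
        self.kind = kind          # e.g. "User Story", "Task", "Bug"
        self.children = children or []

def matches(item, child_filter, require_child=True):
    """Keep 'item' if it does (or does not) have a matching child."""
    has_match = any(child_filter(c) for c in item.children)
    return has_match if require_child else not has_match

story = WorkItem("Checkout flow", "User Story", [
    WorkItem("Fix rounding bug", "Bug"),
    WorkItem("Write tests", "Task"),
])
empty = WorkItem("Reporting", "User Story")

# Only user stories that still have an open bug underneath them.
stories_with_bugs = [s for s in (story, empty)
                     if matches(s, lambda c: c.kind == "Bug")]
```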

Source Control

  • Branched folders use a new icon in the Source Control Explorer to distinguish them from ordinary folders.
  • Annotate has been enhanced to show the change set of the branch from where it originated, instead of the change set at which it was merged.
  • New branch viewer that shows the branches and their merge points over time or in a hierarchy. It even supports dragging and dropping a branch onto another to initiate a merge.
  • Merge conflicts during check-in or branch merges now appear in the Pending Changes window instead of a blocking modal dialog.


Architecture & Modeling

  • Dependency Viewer to explore the dependencies between assemblies, namespaces and classes in different views (tree view, zoomable and navigable graphs, jump to code). For example, if you see a Death Star pattern, you know you have a god class.
  • Reverse engineer a UML 2.0 sequence diagram from an arbitrary location in the code to understand how some pieces of code work together. You can also create your own and generate code from it (optional). Violations of the Law of Demeter become obvious in such a diagram.
  • UML Class Diagrams, including UML profiles and packages.
  • Also Component Diagrams, Use Case Diagrams and Activity Diagrams. Use Case Diagrams can be associated with Word documents and work items.
  • One great addition is the Layer Diagram, which allows you to model the architecture of your solution. It specifies which classes, assemblies or namespaces are allowed to use each other. Whenever you build the solution, MSBuild verifies whether any dependencies are violated.

Project Management

  • There is a new process template called VSTS for Agile Development. It has different work item types which contain more information, and it might just be very helpful in bridging the gap between the current MSF Agile and MSF CMMI templates.
  • All work item queries have received a Create Report in Excel context-menu item that will create a nice Excel worksheet with figures on current values or trends which can be fully customized.
  • I’ve tried an excellent Excel worksheet for planning available team capacity against the planned work for one or more iterations. It supports dealing with holidays and sports various graphs for detecting planning issues. It can even let TFS calculate the average productivity of a previous iteration.
  • New SharePoint (MOSS/WSS) team site with special TFS web parts to display various customizable Excel reports, graphs, lists of bugs and tasks, and KPIs. Visual Studio Web Access is now integrated into that site.
  • All existing reports have been cleaned up and improved, and each now includes a short list of the questions it is intended to answer. Moreover, each report lists the query parameters used, a reference to online help, and a few links that bring you to related reports using the same query parameters (such as the date range, area and iterations).
  • Conditional formatting or extra columns in MS Excel lists are retained after synchronizing a work item list with TFS.
  • VSWA has received a new layout as well, including the tree views introduced in VSTS.
  • Full synchronization between Microsoft Project and the corresponding work items in TFS, including hour estimations, scheduling, ordering between tasks, and much more.
  • Team sites can be created as a site collection or as an individual site.


Testing

  • Double-clicking on a test result jumps to the code instead of the test result details.
  • There is a new tool window that shows the tests that are affected by the current pending changes.
  • Codename “Camano” is a standalone test suite for managing test cases and executing them in a set of managed virtual servers (through Hyper-V, System Center or ESX), with the ability to access the test environment, take checkpoints, record videos, etc. It is integrated with TFS to link test cases with requirements and bugs. The goal? To make bugs easy to reproduce, because all the information needed to investigate a defect is right there.
  • Part of “Camano” is a standalone manual test runner tool that testers can use to get everything they do recorded as a video. It also records detailed system info and a comprehensive stack dump of the system-under-test that allows debugging the application from your development PC as if it were running locally. The current codename for this is Historical Debugging. No more ‘can’t reproduce the bug’: if a tester creates a bug, everything is attached to it.
  • .NET 4.0 introduces a partial implementation of Spec# called the Contract Library, which allows you to include statements that specify the pre- and post-conditions of your methods, as well as class invariants, and have those validated both at compile time and at run time.
  • There is a new UI recorder that you can use to record an interactive test of a WPF, Windows Forms or ASP.NET application. It generates unit tests (annotated with the new [CodedUITest] attribute) that talk to generated code to simulate the interaction with the application-under-test.
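The pre-/post-condition style the Contract Library brings to .NET can be illustrated with a small conceptual analog. The sketch below is Python, not the actual C# API; it only mirrors the idea of run-time validated contracts, and the decorator names are my own:

```python
# Conceptual analog of run-time validated pre-/post-conditions.
# Not the .NET Contract Library -- just the same design-by-contract
# idea, expressed as Python decorators.

def requires(pred, msg):
    """Check a precondition on the arguments before calling."""
    def deco(fn):
        def wrapper(*args, **kwargs):
            assert pred(*args, **kwargs), f"Precondition failed: {msg}"
            return fn(*args, **kwargs)
        return wrapper
    return deco

def ensures(pred, msg):
    """Check a postcondition on the result after returning."""
    def deco(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert pred(result), f"Postcondition failed: {msg}"
            return result
        return wrapper
    return deco

@requires(lambda x: x >= 0, "x must be non-negative")
@ensures(lambda r: r >= 0, "result must be non-negative")
def integer_sqrt(x):
    """Largest r with r * r <= x; contracts guard both ends."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

The appeal of the .NET version is that the same declarations can also be checked at compile time, which a sketch like this obviously cannot do.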

Building & Deployment

  • It includes a tray tool that serves as a build notification agent.
  • VSTS introduces a new check-in policy named Gated Check-In. Whenever a developer attempts to check in a new change set, the policy automatically shelves that change set and starts a Buddy Build on the configured build machine. Only if that build succeeds does the change become available to the rest of the team.
  • Also, whenever a team member wants to verify his changes against a number of other shelve sets, he can start a Buddy Build himself. He simply selects the shelve sets to include and lets the result build on the build machine.
  • The build log has been redesigned from scratch into a hierarchical view with collapsible regions. It provides much more detail on each individual step, such as its exact timing; hosts clickable actions to jump right into the corresponding code or to perform common tasks such as opening the drop folder; and includes a mini map (like the document structure map in Word) plus a small histogram showing the duration and success rate of the last couple of builds.
  • New build reports showing a better overview of the success rate, unit tests and code coverage.
  • New option to index and share symbol files on a central symbol share, in case you need to debug release binaries.
  • If you have multiple build agents, you can tag those agents with one or more tags that describe the agent’s capabilities. Whenever you set up a new build, you can then specify the tags that your build needs. Visual Studio will then automatically try to find an available build machine that matches the requirements.
  • The internal structure of a build, previously represented by an MSBuild project file, is now expressed as a fully customizable Windows Workflow. This essentially allows you to modify or tweak every step that occurs within a typical build. For instance, you can now change the way build IDs are generated, or execute specific steps within a build in parallel.
  • You can delete builds without losing any labels or test results related to them.
  • Brian Harry demonstrated the cooperation of Eclipse / Teamprise with Visual Studio. With the right tools, you can let Team Build build Java projects and even run unit tests using MSTest. Obviously, the corresponding results and build logs are available from within Team Explorer.
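The agent-tagging scheme mentioned above amounts to a simple subset check: a build declares the tags it needs, and any agent whose tag set covers them qualifies. A rough sketch, with illustrative names that are not the actual Team Build API:

```python
# Sketch of tag-based build agent matching: a build specifies the
# capabilities it requires, and we pick any agent whose tag set
# covers them. Agent names and tags below are made up.

agents = {
    "build01": {"windows", "x64", "sql"},
    "build02": {"windows", "x86"},
    "build03": {"windows", "x64", "wix"},
}

def eligible_agents(required_tags):
    """Return all agents whose tags are a superset of the requirement."""
    return sorted(name for name, tags in agents.items()
                  if required_tags <= tags)

# A build needing 64-bit Windows plus WiX can only go to build03.
print(eligible_agents({"x64", "wix"}))
```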

Friday, October 31, 2008

What is “Oslo”?

On the second official day of PDC 2008, many colleagues from the Dutch .NET community and I attended the introduction session titled “A lap around Oslo”. Up to this session, Oslo was something nobody really knew much about, other than that it was somewhat related to the next evolution of SOA and Domain Specific Languages.

So what is Oslo really? Well, the answer is not that simple. As a matter of fact, after the session, Dennis Vroegop, Alex Thissen and I were still a bit unsure about what it really was. And even after attending the subsequent session by Don Box, we still had a lot of questions. But after some hefty discussions we managed to get our ideas aligned, at least as far as we could tell. So to make sure we were right, we went to the Pavilion, where Microsoft had set up many technical booths, and talked to Douglas Purdy, the host of this session. And as a matter of fact, it looks like we were right indeed. “So then, tell me”, I hear you think…

At a high level, it is the combination of two tools codenamed “IntelliPad” and “Quadrant” (see screenshot), a few compilers, the “M” programming language and a database repository. M can be used to write a text-based Domain Specific Language (DSL) consisting of data structures and language grammar, and let that compile, through some intermediate steps, into a XAML file representing an object graph consisting of expressions and statements. For every DSL, you need to create a runtime engine that knows what to do with that XAML file. Moreover, a shared characteristic of all those DSLs is that both the structure of the language as well as the domain-specific data is stored in a central SQL Server repository.


An example of such a DSL is what Microsoft calls “mschema”. This ‘language’ allows you to ‘sketch’ the structure of a business domain model by specifying values and relationships using a very simple syntax. The corresponding compiler will infer the formal data structures from that and generate a XAML file representing the values and structure of the domain model. The compiler will also execute the necessary T-SQL to set up the corresponding tables and columns for you. The runtime can then be used to execute various LINQ-style queries on the data, or to insert new data using another DSL called “mgraph”.
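To get a feel for what such schema inference involves, here is a rough analog of the idea: derive typed columns from example values and emit matching T-SQL. This is my own Python sketch, not Oslo’s actual toolchain or the “mschema” syntax:

```python
# Rough analog of "sketching" a domain model by example: infer a
# typed structure from sample values and emit the T-SQL that the
# mschema compiler would generate. Entirely illustrative.

def infer_sql_type(value):
    """Map a sample Python value to a plausible SQL Server type."""
    if isinstance(value, bool):   # check bool before int (bool is an int)
        return "BIT"
    if isinstance(value, int):
        return "INT"
    return "NVARCHAR(100)"

def create_table(name, sample_rows):
    """Build a CREATE TABLE statement from example rows."""
    columns = {col: infer_sql_type(val)
               for row in sample_rows for col, val in row.items()}
    cols = ",\n  ".join(f"[{c}] {t}" for c, t in columns.items())
    return f"CREATE TABLE [{name}] (\n  {cols}\n)"

rows = [{"Name": "Contoso", "Employees": 250, "Active": True}]
print(create_table("Company", rows))
```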

Another example of a working DSL is “mservice”, which was what we were waiting for when we entered the session. It is basically a textual domain-specific language for defining WCF services where the implementation is handled by a Windows Workflow. From what we saw, it is very powerful: with just a few lines you can set up a few operations and an endpoint, and define a fairly straightforward workflow without the need to use any activities or other WF-specific artifacts. Here, the runtime is responsible for dynamically creating a WCF service, setting up the endpoint and required binding configuration, hosting the WF runtime environment and injecting the workflow matching the language. Even better, the entire implementation of the service operations is handled by that same runtime.

So how do Quadrant and IntelliPad fit into this? Well, IntelliPad is a textual tool to write the DSL itself, to query for or update/insert domain-specific data, and to preview how this all ends up in XAML. Quadrant is a very powerful graphical tool written in WPF that you can use to visualize the domain-specific data in an almost infinite number of ways, even if it originates from a completely different DSL. So if you combine, for instance, “mschema” and “mservice”, you can build a running service-oriented architecture just by using the visual designer.

As a matter of fact, the entire tool, including configuration settings, UI elements such as the Ribbon, and its visualization engine, can be modified by drilling down through the models that make up the application. It’s like the ultimate meta-meta tool. So apparently, Microsoft has used their own “M” language to create DSLs for the different aspects of this application. However, as of now, it is still a bit vague what role “Quadrant” is really intended to play. It is at a minimum a developer tool, but it should be possible to customize it (again, visually, not with coding) to turn it into a very powerful exploratory data viewer.

Is this all? Not really. In fact, even after the many discussions we had, there are still some aspects we are not entirely confident we understand yet.

  • The “M” language specifies ways of defining the data structures that you can use in your DSL. But is that the same as “mschema”? And if so, can you use “mschema” in your own DSL to have a nice composable way of reusing DSLs?
  • How does “mgraph” fit into this? Is it just another DSL written with “M”? Or is it part of the “M” language specification itself?

If anybody could enlighten me on this, I’d love to hear from you…

Tuesday, October 28, 2008

Windows 7 at the PDC

This morning, during the second keynote of this PDC, some of the Microsoft guys demoed Windows 7, and it is awesome. I’m not going to repeat all the nice features, because the Windows 7 team has already blogged about them with some nice screenshots. Check it out here.

Windows 7 Desktop

Monday, October 27, 2008

Windows Azure announced

On Monday morning, at the PDC 2008 keynote, Microsoft announced Windows Azure, a new platform for hosting and enabling what they call ‘services in the cloud’. It was, as seems to be traditional for keynotes, an awful presentation, but the idea is quite interesting. In essence (from what I understand of all this), they want to move service development to a higher level by offloading the operational aspects to a managed hosting environment and introducing tools so that we can use Visual Studio to develop and model services and their operational requirements as if they were first-class citizens.

The new “M” language intends to continue what the Web Service Software Factory: Modeling Edition started: being able to design services using DSL-style graphical models. However, it does not seem to be a traditional programming language, because it appears to be tightly linked to SQL Server as a repository for storing the models. Windows Azure is the online platform that Microsoft offers for hosting and managing these services, including Windows Workflows and the full suite of SQL Server features such as reporting and analysis services. Even the differences in identity management are taken care of, although I don’t quite understand how that is going to work. They mentioned that these services are hosted at Microsoft, but I wonder whether other hosting providers will be allowed to do this as well. And since the management interfaces interoperate with the next edition of Microsoft System Center, you can treat the hosted environment as if it were part of your own infrastructure.

Since the most complex tasks in developing services are the operational aspects, such as security, or technical decisions like where to host the Windows Workflow runtime, Windows Azure could have a serious impact on our work. However, I really doubt whether organizations are willing to hand over their business-critical data to a company like Microsoft. All my customers genuinely believe that the only place their data is safe is at their own premises. I do know, however, that most companies have a lot of trouble setting up a secure environment for that. And if there is one company that knows something about security and has invested heavily in it, it is Microsoft.

It is all still a bit vague and many things are open for interpretation, but things will become much clearer in the next couple of weeks (and hopefully during the rest of the PDC).

Live from LA

Well, here I am. After an agonizing 11-hour flight from Amsterdam to LA in a Boeing 747-400, I’m finally in the City of Angels, also known as Los Angeles, California. Fortunately, the entire Dutch .NET community joined me on this KLM flight, so we spent loads of time discussing architecture….NOT. In fact, the only things people were discussing were the hotels they were staying at, what places to see, and the external hard drive with Windows 7 we were supposedly going to receive this week. However, Alex Thissen, Maurice de Beijer and I did discuss some of the design choices I’ve made in my demo code from the SDC presentations.

Anyhow, it’s Sunday right now, and I’m currently attending the pre-conference day. Even though I tried to go to bed as late as possible to make sure I was really sleepy, I’ve had a horrible first night at the otherwise really decent Westin Bonaventure hotel. There were multiple Halloween parties, and after having dinner with some other colleagues, Olaf Conijn and I went to have a drink at a lounge bar on the rooftop of his hotel, The Standard. From there, the view of the surrounding hotels and skyscrapers (like the AON Center and US Bank Tower) is awesome, and for somebody with a slight fear of heights, quite tricky.

For the pre-conference, I chose a rather interactive session by Brian A. Randell, an MVP on Team Foundation Server. He shared quite a lot of his best practices on infrastructure, working in teams and getting the most out of the suite of products. I sure took a lot of notes on things to check out when I’m back home. The day was concluded with a panel discussion hosted by Brian Harry, the Product Manager for the entire Visual Studio Team System team. This allowed us to ask questions about certain design decisions and future improvements. One particular feature I liked is that you can divide your team into smaller sub-teams (security groups) and have work item queries that filter by group. Previously, you had to use areas for that, and assigning a work item to another team usually also required changing the area.

By the way, check out the pictures at the PDC site to get an impression of this first day.

Wednesday, October 08, 2008

Slides and demo code from the SDN Conference 2008

For the attendees of my two sessions on building a service-oriented architecture using architectural best practices: check out the introduction slides here and the slides with best practices over here. Also, build 2187 of the demo application can be downloaded from this location.

In the next couple of months, I’ll continue updating the demo code with additional best practices which I’ve already collected but have not found the time to include yet. If you have suggestions or questions, don’t hesitate to contact me by email. Also, some of my colleagues will start building several front-ends for the demo application using the Web Client Software Factory, the new ASP.NET MVC Framework, and of course, Silverlight 2.0. I’ll use this blog to notify anyone who’s interested about new builds.

Finally, I noticed a lot of discussions and questions during the sessions, and there was simply not enough time to answer or complete all of them. Therefore, I’d like to organize a so-called Chalk ’n’ Talk session (a.k.a. a pizza session) at Aviva Solutions in Leiden, in which we can continue an open discussion on software factories, architecture, design patterns and the stuff Patterns & Practices provides. No date has been planned yet, but it will happen soon after PDC 2008, so if you’d like to be included, let me know.

Update 10-10-2008: I forgot to update the web.config, so the services did not work. Please get the updated .zip from the same location.

Friday, August 15, 2008

Soon, in a theater near you

Well, not a real theater, but nevertheless I'll be speaking at two events in the next two months.

  • On Saturday the 6th of September, the Dutch .NET community will be organizing CodeCamp, an interactive event intended for developers by developers. I'll be hosting a Chalk 'n' Talk session on software architecture in .NET and everything related to that. Because the intention of this session is to have an open and lively technical discussion on the impact of all those recent .NET technologies (P&P factories, SOA, Silverlight, MVC) on software architecture, it is mostly hosting I'll be doing (rather than boring you with my voice). My session will start at 16:15, and most attendees will be quite tired by then (I know I will be), so if you think you'll have trouble staying awake, join me.
  • Next is the SDN Conference 2008 on the 6th and 7th of October. This event expects between 400 and 500 attendees and hosts many national and international speakers, including myself. I'll be doing two sessions on building a service-oriented system using the components and guidance Patterns & Practices offers. The first is the introduction session (advanced level): it explains the constraints and goals of such a system, elaborates on the technical design (layering, choices) and shows how P&P facilitates them. The second session (expert level) continues on the introduction and discusses many technical best practices for getting the most out of the .NET Framework and the Enterprise Library 4.0.

Oh, and if you get a chance, try to convince your manager to allow you to visit this year's Professional Developers Conference in Los Angeles, the most important conference of all. It was canceled last year, so this one promises to be especially spectacular.

Tuesday, August 12, 2008

Service Pack 1 of Visual Studio 2008, Team Foundation Server 2008 and .NET 3.5 have been released

If you have, like me, been regularly checking the MSDN blogs, you must have noticed the dozens of posts on yesterday's release of Visual Studio 2008 Service Pack 1, .NET Framework 3.5 Service Pack 1, and Team Foundation Server Service Pack 1. So, I'm not going to repeat the list of what's new over here, but just point you to the right places. Make sure you download the corresponding version of the Training Kit since it contains updated PowerPoint presentations and sample code for the changes.

Oh, and don't worry: we've updated our entire team to the latest version of Visual Studio (both Developer and Team Suite editions) and upgraded the servers to .NET 3.5 SP1, and we have not observed any issues (yet). What's nice to know is that the update of Visual Studio took only half an hour, even though the installer is twice the size of the corresponding service pack for Visual Studio 2005. However, if you installed the beta of this service pack a few months ago, you'll need to run a patch to uninstall any residual pieces of that beta.

Monday, June 09, 2008

Guidebook: Microsoft Patterns & Practices

At the beginning of this year, Nucleus Research interviewed several architects from the .NET community with the intention of writing a whitepaper on the added value of Microsoft's patterns & practices. As an expert advisor to the Web Service Software Factory: Modeling Edition, I was interviewed as well, and I'm happy to see that some of my quotes have been used in the paper. Unfortunately, the writers forgot to properly spell out our company name and mentioned Aviva only. Anyway, check out this link to read the paper.

Wednesday, May 28, 2008

First impressions with NHibernate 2.0

Almost two months ago, Ayende Rahien announced the release of the first alpha of a new generation of NHibernate. NHibernate's feature set has always been a subset of its Java equivalent, Hibernate 3.x. With the new NHibernate 2.0 release, the team has aligned its feature set with that of Hibernate 3.2. It's almost at the same level now.

In his announcement, Ayende also stated that this alpha version had been tested thoroughly in a production environment. Given Ayende's reputation, we decided to try it as well, and I must agree: other than the initial breaking changes, we have not run into any problems yet. Since the documentation is not up to date yet, we have not started using any of the new features and limited the upgrade to getting the existing system running on 2.0 alpha 1. Some of the more noticeable things to take into account:

  • You cannot wrap a binding variable in a lower() function anymore. I don't know why this has changed, but you now have to call ToLower() while passing the value to the SetParameter() method of IQuery.
  • The SysCache provider is no longer part of the NHibernate distribution. You have to get it from SourceForge yourself, but getting the sources is painfully difficult without using Subversion. For your convenience, you can get a compiled version compatible with NH 2.0 alpha 1 from here.
  • The IQuery.SetParameterList method has received an overload that takes an ICollection in addition to the one that takes an array. Consequently, the compiler will start to complain about ambiguous calls. You need to explicitly specify the overload you need.
  • The ValueValueType (from the NHibernate.Type namespace) has been phased out. We used it to create a custom NHibernate type that supports storing a boolean as a J/N character (Dutch for yes and no). The best alternative is the ImmutableType.
  • The NHibernate.Expression namespace has been renamed to NHibernate.Criterion
  • We use a modified connection provider that passes some additional information to the database whenever a connection is created. It requires a few configuration settings that we made part of the <hibernate> configuration section. NH 2.0's DriverConnectionProvider changes the Configure method parameter from an IDictionary to a generic collection, and it has become very picky about the contents of the <hibernate> section (it validates it against an XSD). We had to move our configuration data to the <appSettings> section.
  • NH 2.0 will validate all named HQL and SQL queries when it's loading the mapping files. We discovered quite a few unused queries, so that was a nice side-effect.
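As an aside, the boolean-to-J/N mapping our phased-out custom type implemented is simple to sketch. The snippet below shows the two-way conversion in Python purely for illustration; the real thing is an NHibernate custom type written in C#:

```python
# Illustrative analog of our custom NHibernate type that stores a
# boolean as a 'J'/'N' character (Dutch ja/nee). Not NHibernate code,
# just the conversion logic such a type has to implement.

def to_db(value):
    """Convert a bool to its 'J'/'N' database representation."""
    return "J" if value else "N"

def from_db(char):
    """Convert a 'J'/'N' database character back to a bool."""
    if char not in ("J", "N"):
        raise ValueError(f"Unexpected flag value: {char!r}")
    return char == "J"
```

The important property is that the conversion round-trips exactly and rejects anything other than the two legal characters, so bad data surfaces immediately instead of silently becoming False.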

Another aspect that makes this new version quite interesting is that the approaching release of the ADO.NET Entity Framework (now part of Visual Studio 2008 Service Pack 1 Beta) reveals that NHibernate easily wins on the feature and flexibility level. I myself was considering moving over to the Entity Framework whenever a suitable project passed by, but after having heard of the lack of lazy-loading support and the amount of impact it has on my domain model (the generated business classes), I'm not sure anymore. Hopefully the NHibernate team will continue with the LINQ-to-NHibernate project as soon as possible. Since Ayende seems to track blog posts referring to NHibernate, he may answer this question soon enough.

Thursday, May 22, 2008

Willkommen in der Welt von Schiller… English: Welcome to the world of Schiller. Say what?

Well, as a starter, take an excellent German composer/producer whose origins lie in the trance scene, add some elements from well-known artists like Jean-Michel Jarre and Genesis, and finally complement it with awesome male and female vocalists such as Kim Sanders and Peter Heppner, and you get the ultimate mix of pop, ambient, trance and lounge music. This more or less characterizes the music Christopher von Deylen produces under the name Schiller.

After having been a fan since he released his first album, Zeitgeist, in 1999, last week I finally went to see him perform at the Palladium in Cologne (Köln). Although my girlfriend is not a big fan, she does enjoy quite a few of the more vocally oriented songs, so she was kind enough to join me. Obviously, we used the concert as an opportunity to visit Köln again, and luckily the city was having its wine festivals that week :-)


Since the Palladium can handle an audience of 4000 people, you won't be surprised that the queue we had to wait in continued along the pavement for multiple blocks. Surprisingly, the Germans are not as blunt or bold as the Dutch and waited patiently. Nobody tried to cut to the front of the line. Anyway, the concert was well worth the wait.

For an ambient-style concert it was quite spectacular. Check out this video for some footage from last year's performance, although I do think that video does not quite capture the power of the real thing. Christopher was accompanied by another keyboard player, two guitar players (both acoustic and electric) and two independent drummers. All added a specific touch to the originally synthetic songs, but especially the electric guitar and the two drummers playing in parallel added an unparalleled twist to the already awesome Schiller music.

I know that this kind of music does not excite everybody, but for me, this was an unforgettable experience. And once again, my respect for musicians has increased significantly. If you want to find out more about Schiller, check out this website or look for some examples on YouTube.

Tuesday, April 29, 2008

Are software factories worth it?

Last week, I read an article published by Microsoft in cooperation with Siemens that discussed the business case for investing in software factories. In the conclusion, the writers claim that a well-designed software factory can improve your productivity by a factor of 10. However, they also state that you need to complete somewhere between 10 and 20 projects to get a sufficient return on investment. Since software factories are a major part of my job as a Microsoft consultant, I have developed a strong opinion on the approach companies should take. Unfortunately, the ultimate answer is… "it depends".

Unfortunately, the term software factory is not well defined. I clearly remember a discussion that Don Smith (Patterns & Practices), Olaf Conijn and I had in Barcelona while preparing for our presentation at last year's TechEd Europe. It appeared that when Microsoft uses this term, they usually refer to the wizards, project templates and DSLs that P&P ships as part of their Software Factory packages. On the other hand, we at Aviva Solutions usually include all aspects of a typical software development lifecycle process. In other words, we include not only the tools, but also the process methodology, coaching & training, architecture and infrastructure.

I often run into customers who think that a software factory is something you deliver in a box. But contrary to popular belief, introducing a software factory is really a project on its own. Because of its scope, it requires careful consideration of the culture, configuration, and experience of the team that intends to use it. Moreover, it's not a one-off investment. Typically, the request to introduce a software factory at a customer site co-exists with a pilot project that is used to prove the choices made. In order to gain maximum efficiency from it, you need to be prepared to make adjustments while executing the first few projects.

An essential decision in the process of introducing such a factory is the level of automation and/or customization you desire. It is tempting to build the perfect factory that automates every aspect of your software development lifecycle. But many architects forget that every customization may add a considerable amount of maintenance overhead. It may seem easy to automate yet another cumbersome manual task, but you'd be surprised how seldom you actually gain from that in the long run. As an experienced developer with a tendency towards a pure architecture, I know firsthand that the dark side of engineering lurks around every corner. Nonetheless, if you really plan to execute at least 10 to 20 projects with (more or less) the same factory, then investing in a decent factory can really boost your team's productivity. I don't believe in the huge factors mentioned in the opening article, but an increase of 30-50% should be feasible.

On the other hand, I've recently received signals from the .NET community that more and more customers are getting fed up with their highly customized factories. Every self-respecting IT service provider is offering one these days. But as soon as that architect has left the building, the customer is stuck with a factory that offers no upgrade path whatsoever. To solve that, some have considered offering a standard off-the-shelf software factory that allows no customization at all and offers supported upgrade paths to future releases. Though a welcome promise, up until now I have not encountered a customer whose software development lifecycle did not require extensive customization. In fact, suggesting that you can compare one customer with another may even be interpreted as an insult.

So what about Aviva Solutions? What do I do to overcome these dilemmas? Well, in fact, again the answer is "it depends". But I believe that using what's already there and staying pragmatic is a good approach.

For instance, Microsoft Patterns & Practices offers a lot of free tools for architecting web and Windows applications using a service-oriented architecture. The Web Service Software Factory is one of those, and we have been working closely with the P&P team to include many of our best practices. Moreover, they provide lots of guidance on building properly secured and high-performance systems. Experience has made me a bit conservative, so if and how I customize depends entirely on the company culture, experience and involvement of the team executing the project.

I also refrain from forcing a particular process guidance on a team; instead, I gradually introduce the elements of MSF, TDD and Scrum that work for the team. On the other hand, I always suggest using Team Foundation Server to allow integrated support for work item tracking, project reporting and source control. And I'm really fond of using UML for describing functional requirements. I've seen many customers fail while trying to use tools like Word, whereas use cases and business class diagrams proved to be very understandable for all stakeholders.

Monday, March 31, 2008

Presentation slides for C# 3.0 and Rhino Mocks

Last Friday, I took the honors of filling one of the presentation slots at the first Software Development Network Event of 2008. As an enthusiastic user of Rhino Mocks, I demonstrated the power of Rhino Mocks and C# 3.0 combined.

Even though my session started right after lunch, concurrently with a LINQ-to-SQL session presented by Microsoft Regional Director Anko Duizer, the number of attendees was quite impressive.


In addition to the specific Rhino Mocks examples, we had several very interesting discussions on unit testing in general and the invasive effect it can have on the original source code. Unit testing is definitely something that many developers are struggling with.

For the people who were there, many thanks for attending my session. Please find the presentation over here, and the corresponding sources over here.

Friday, February 22, 2008

.NET community news bulletin issue 2

Yes, I do remember that I promised to write this bulletin on a regular basis, but probably due to my busy schedule and my occasional laziness, I failed horribly. Anyhow, here are some highlights from the last couple of weeks.

  • Patterns & Practices have released a second CTP of a Visual Studio 2008-compatible version of the Service Factory: Modeling Edition. Fortunately it fixes all the bugs we filed since the first CTP, and it introduces a very helpful Order All Data Members recipe.
  • In addition to the new Service Factory release, two Dutch community members have written a nice introductory article on extending the factory. Check it out on MSDN.
  • And if you have not seen me and Olaf Conijn co-hosting Don Smith's presentation on Building Your Own Software Factory at TechEd Developers 2007 in Barcelona, check out this MSDN spotlight.
  • They've also released a new version of GAX and GAT (Guidance Automation Extensions and Toolkit). I suspect both the Service Factory and Web Client teams are waiting for this before they release their final VS2008 factories, since this version of GAX adds better support for it. Moreover, it finally allows installing it without the need to first uninstall ALL software factories and guidance packages. We're nearing the end of February, and Glenn Block, the product manager for the Web Client Software Factory, promised us a new factory around the end of this month...
  • It seems that P&P is very busy these days. In a first step towards Enterprise Library 4, they've released a first, but very promising CTP of the next installment of the ObjectBuilder dependency injection framework. It's called Unity and closely resembles other DI frameworks such as Spring and Castle Windsor.
  • Scott Guthrie has released some of Microsoft's plans for building .NET client applications. Check out his blog post.
  • Even though Visual Studio 2008 is not yet physically available in stores, Microsoft has already released a hotfix. Check out the details on Scott's blog.
  • JetBrains have started releasing nightly builds of Resharper 4.0 for Visual Studio 2008. Check out the release notes to see what awesome new features have been added. Since we are using LINQ very heavily, I've started working with these early builds immediately. I must say, I'm quite impressed with the stability and the huge productivity boost it gives.
  • While searching for more information on how to customize the Team Foundation Server reports, I ran into something called the Scenario Coverage Analyzer. It's a handy add-on that introduces a .NET attribute that creates a relation between a particular part of your code and the corresponding TFS scenario. Using a custom MSBuild task, it can generate a report providing statistics on aspects like code coverage, ordered by scenario.
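
To give an idea of the concept, here is a minimal sketch of how such a code-to-scenario attribute could look. Note that the attribute name, its signature and the sample class are my own illustration, not the actual API of the Scenario Coverage Analyzer.

```csharp
using System;

// Hypothetical illustration only; the real add-on defines its own attribute.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public sealed class ScenarioAttribute : Attribute
{
    public ScenarioAttribute(int workItemId)
    {
        WorkItemId = workItemId;
    }

    // The TFS work item (scenario) this piece of code contributes to.
    public int WorkItemId { get; private set; }
}

public class OrderService
{
    [Scenario(1234)] // links this method to the (fictitious) TFS scenario 1234
    public void SubmitOrder()
    {
        // ...
    }
}
```

A build task can then scan the compiled assemblies for these attributes and correlate coverage figures with the scenarios they reference.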

Friday, January 11, 2008

Moving to Visual Studio 2008 and Team Foundation Server 2008

During the Christmas holiday, I started migrating my customer's development environment to the newest set of Microsoft development products. We had already planned to do this a bit earlier, but since we had to wait for the first VS2008 CTP of the Patterns & Practices Web Service Software Factory: Modeling Edition, we simply couldn't. We are now two weeks further, so I decided to share some of my experiences since then.



  • Well, I did not run into any problems after the upgrade to .NET 3.5 while converting the existing Visual Studio 2005 solutions. I know about the advice to first upgrade the solutions and only then plan the introduction of .NET 3.5, but I couldn't resist.
  • If you choose to upgrade your solution to .NET 3.5 during the automatic conversion, it seems that only the web application projects (or web site projects) are modified to target .NET 3.5. I'm not entirely sure about this, but at least, that is what I've observed. Unfortunately, you have to go through each and every project one by one and change its framework version.
  • Beware that if you upgrade an existing ASP.NET 2.0 web site (or web application) to .NET 3.5, Visual Studio will automatically add an assembly redirect to your web.config that forwards any dependencies on the 1.0.61025.0 version of the System.Web.Extensions assembly to the corresponding .NET 3.5 assemblies. We did not notice any differences (yet), but if you do, you may need to look in that area. All our existing control libraries (Obout, Telerik and the ASP.NET Ajax Control Toolkit) kept working without a glitch. There is a .NET 3.5-specific update of the AJAX Control Toolkit though.
  • You can't update existing WCF Service References (the .map files created from within VS2005) anymore. You need to re-add the reference from scratch. See this post for more information.
  • Although the installation of .NET 3.5 should not cause any trouble, we did notice an anomaly after we installed Visual Studio 2008 next to Visual Studio 2005. The reason for this is that the .NET 3.5 installer also installs two service packs on top of .NET 2.0 and 3.0. Apparently, we were using something that was wrong in the first place, but not detected by the compiler. Luckily, we invested heavily in unit tests and this one issue surfaced almost immediately.
  • The best thing you can do after upgrading to Visual Studio 2008 is read about LINQ. We are using NHibernate in our data access layer and don't need LINQ-to-SQL, but still, LINQ is the best thing since generics. Even if you don't like the query expression syntax, simply import the System.Linq namespace and enjoy the power of the many extension methods it adds to your arrays, lists and other collection classes.
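
As a small taste of that last point, here is a minimal C# 3.0 sketch of what the System.Linq extension methods add to a plain array, without using the query expression syntax at all:

```csharp
using System;
using System.Collections.Generic;
using System.Linq; // brings the extension methods into scope

class Program
{
    static void Main()
    {
        int[] numbers = { 8, 3, 12, 5, 7 };

        // Plain method calls instead of a query expression.
        List<int> smallSorted = numbers
            .Where(n => n < 10)   // keep 8, 3, 5, 7
            .OrderBy(n => n)      // sort ascending
            .ToList();

        foreach (int n in smallSorted)
        {
            Console.WriteLine(n); // prints 3, 5, 7, 8
        }
    }
}
```

The same Where/OrderBy/Select methods work on any IEnumerable&lt;T&gt;, which is exactly why they are so useful even without LINQ-to-SQL.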

Add-ins & Tools

  • Beware that the Guidance Automation Extensions installer can only be installed once. You either have to choose to install it on Visual Studio 2005 or on Visual Studio 2008, but not both. Consider that if you're thinking about gradually migrating to 2008.
  • The Web Client Software Factory does not officially work with VS2008 yet, but it appeared that the hack suggested by this knowledge base article works quite well. A more official version is expected in February.
  • I discovered that the same trick also works for the Enterprise Library 3.1 installer. Obviously, EntLib is not compiled against .NET 3.5, but so far I have not found any issues related to that; after all, the only change affecting EntLib is the service pack that the .NET Framework 3.5 installer applies to .NET Framework 2.0.
  • The TFS Administration Tool is a beautiful little utility that allows you to add or remove user accounts from TFS, WSS and SQL Reporting Services in a single click. Unfortunately, if you chose to upgrade Windows SharePoint Services to 3.0 while upgrading from TFS 2005 to 2008, the tool does not work with WSS 3.0. For the time being, you have to fall back on the WSS Site Settings, but this will change soon.
  • After installing the Guidance Automation Extensions of May 2007 and the software factories, some of my colleagues using the Team Developer edition of Visual Studio 2008 ran into multiple occurrences of the "System.IO.FileLoadException: Could not load file or assembly 'Microsoft.VisualStudio.TemplateWizardInterface, Version=". As explained in this post, removing a single redirect from the devenv.exe.config solved the problem. It did not occur on a Team Suite install.
  • I personally believe that JetBrains' Resharper is the very best tool since Visual Studio. Version 3.1 does work with Visual Studio 2008, but its excellent code analysis features have some trouble with the new C# 3.0 keywords. Until the Early Access Program for Resharper 4.0 opens, I found an acceptable workaround that gives (most of) the best of both worlds.
    Simply disable Resharper's Code Analysis function and let it use Visual Studio's IntelliSense. You may have to re-enable the IntelliSense settings under Tools->Options->Text Editor->C#->IntelliSense. Even though Visual Studio's IntelliSense cannot compete with Resharper's, assigning the Alt-Enter keyboard shortcut to View.ShowSmartTag gives you a bit of Resharper-style behavior.

Unit testing & Team Build

  • After upgrading, we noticed that while running unit tests, the Enterprise Library Logging Application Block could not find its enterpriselibrary.config (we moved the EntLib settings into a dedicated .config). At first, I thought it was a compatibility issue between .NET 3.5 and EntLib, but we also noticed that log4net was suffering from the same thing. After further investigation, I discovered that this is in fact a bug in Visual Studio 2008. As a workaround, we now have every test start with the following hack.
    AppDomain.CurrentDomain.SetData("APPBASE", Environment.CurrentDirectory);
  • Right after upgrading your test project, you may notice that the ExpectedException attribute does not seem to work anymore. This usually happens because your test project will still try to load version 8.x of the Microsoft.VisualStudio.QualityTools.UnitTestFramework assembly (which is part of Visual Studio 2005). Simply remove it and then reference the correct version 9.x of the assembly again. See also this post for more information.
  • If you have, like me, set up your own workspaces from within your Team Build .proj file, you can now remove that plumbing. The Build Definition dialog box allows you to configure exactly which part of your source control tree should be included in the build.
  • In Visual Studio 2005, it was not possible to configure Team Build to run all the unit tests that are part of an assembly without falling back on a .vsmdi test file. However, Buck Hodges provided a nice replacement for MSBuild's TestToolsTask that did the job quite nicely. Fortunately, Microsoft integrated this functionality into Team Foundation Server 2008 out of the box. Simply define an <ItemGroup> with a <TestContainer> that includes a wildcard pattern for the assemblies that should be included. For instance:
    <TestContainer Include="$(OutDir)\%2a%2a\%2aTest.dll" />
  • Don't forget to go to Tools->Options->Test Tools->Test Execution and limit the number of Test Results during unit testing. Oh, and check out the new shortcut keys for starting/debugging the unit tests visible within the current context.
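
The APPBASE workaround from the logging bullet above can be centralized so that individual tests don't have to repeat it. A sketch using the MSTest attributes already mentioned in this post; the base class name is my own invention:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Derive your test classes from this base class so every test
// runs the workaround before executing.
public abstract class ConfigAwareTestBase
{
    [TestInitialize]
    public void RestoreAppBase()
    {
        // Works around the VS2008 test-runner issue by pointing the
        // AppDomain's base directory back at the directory that
        // contains the deployed .config files.
        AppDomain.CurrentDomain.SetData("APPBASE", Environment.CurrentDirectory);
    }
}
```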
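
For completeness, the TestContainer item from the Team Build bullet sits inside an &lt;ItemGroup&gt; in your TfsBuild.proj. The `%2a` sequences are MSBuild's escape for `*` (ASCII 0x2A), which defers wildcard expansion until $(OutDir) has a value at build time; the `*Test.dll` naming pattern is just an example based on our own convention:

```xml
<!-- Illustrative TfsBuild.proj fragment: run every *Test.dll the build produces. -->
<ItemGroup>
  <TestContainer Include="$(OutDir)\%2a%2a\%2aTest.dll" />
</ItemGroup>
```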

Obviously, this is not all. During the next weeks, I'll try to update this post with new experiences and solutions to common issues.