Sunday, March 22, 2015

Bringing the power of PowerShell to your build scripts

If you look back at the last couple of years, you'll notice increasing attention to best practices that should make us more professional software developers. We design our classes using Test Driven Development, we review our code in pairs, and we apply all kinds of architectural principles such as those represented by the S.O.L.I.D. acronym. In other words, we really care about our code. But what about our build scripts? Do we care as much about those as we do about our regular code? I doubt it, but shouldn't we?

About a year ago we switched from Microsoft Team Foundation Server's build environment to JetBrains TeamCity, mostly because my client was moving to Git and GitHub. After being used to the cryptic MSBuild XML documents (I never bothered with the Windows Workflow Foundation stuff), being able to use TeamCity's elaborate build step system seemed like an attractive approach. But with that, we almost made the mistake of treating build scripts as something exotic again.

Luckily, my (then new) colleague Damian Hickey convinced us to look at how the open-source community solves the build problem. Quite a lot of these projects use either a .bat file that directly invokes the msbuild executable or use PowerShell to do the same thing. As the teams didn't have any prior experience with PowerShell, being able to use TeamCity's graphical user interface sounded like the way to go.

Tying the build process to TeamCity has some drawbacks, however.
  • You can't run a build outside TeamCity. Being able to fully verify locally whether your changes work correctly is an invaluable capability that you really start to appreciate once you have it.
  • A codebase evolves over time, potentially involving changes in the way the code is built, tested and/or deployed. You wouldn't want to maintain separate build definitions for different branches, or worse, have to keep the build definition manually aligned with changes in the code base.
  • Being able to treat your build scripts as first-class citizens of your code base also enables the capabilities you are used to, such as merging changes from different contributors, supporting pull requests, and analyzing the history of your script. Although TeamCity supports auditing, it'll be a completely different experience.
  • Although TeamCity is a state-of-the-art build engine, especially compared to Microsoft Team Build, keeping the option open to switch to another build product (such as AppVeyor) is a big advantage.

So putting your build definition in source control, next to the code base it builds, was a big advantage for us. And though we didn't have much prior PowerShell experience, being able to use .NET classes from it provides a lot of flexibility. We decided to use an open-source PowerShell library called PSake (pronounced as sake) that combines the concepts of the old make build language with the power of PowerShell.

After unzipping the release and adding the files to your source control repository, a simple default.ps1 file might look like this:
task default -depends Compile

task Compile -description "Compiles the solution" {
    exec { msbuild /v:m /p:Platform="Any CPU" TestApplication.sln /p:Configuration=Release /t:Rebuild }
}


Notice that both msbuild and exec are wrapper functions provided by PSake. The first ensures that the right version of msbuild for the appropriate .NET Framework is used. The second ensures that a non-zero exit code of a command-line executable is converted into a proper PowerShell exception. You can run this build script using psake.ps1 or psake.cmd, depending on whether you're running it from a PowerShell console or a Command Prompt.


Since the default behavior is to run the default task in the default.ps1 script, you don't need to provide any parameters. If you want, you can specify an explicit task to run. Just run psake.cmd -help or psake.ps1 -help to get the very extensive help page.
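
For example, invoking the script could look like this. This is a minimal sketch; the exact parameter names (such as -taskList) may differ slightly between PSake versions:

# From a PowerShell console, run the default task of default.ps1
.\psake.ps1

# Or run a specific task explicitly
.\psake.ps1 -taskList Compile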

Now suppose you want to parameterize the project's solution configuration (from release to debug). PSake supports script properties for which you can pass values as part of the Psake.cmd call.
properties {
    $Configuration = "Release"
}

task default -depends Compile

task Compile {
    exec { msbuild /v:m /p:Platform="Any CPU" TestApplication.sln /p:Configuration=$Configuration /t:Rebuild }
}


Now when you run the following it'll build the debug version of the solution.


.\psake.ps1 -properties @{"Configuration"="Debug"}

In most projects, your build script will consist of multiple tasks each depending on other tasks. In the previous example, the default task depends on the Compile task. Obviously, much more elaborate dependencies are possible, including conditions.


properties {
    $PackageName = ""
}

task default -depends Compile, BuildNugetPackage

task Compile {
}

task BuildNugetPackage -depends Compile -precondition { $PackageName -ne "" } {
}

So in addition to having the precondition check that the $PackageName variable has been specified as a property, you can also run the BuildNugetPackage task directly. But since it depends on the Compile task, the net result is the same as just running the default task, which is why it's good practice to declare those essential dependencies explicitly. BTW, if you want to make that package name property mandatory, you can write the task like this:



task BuildNugetPackage -depends Compile -requiredVariables $PackageName { }
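
Invoking it could then look something like this (the package name below is just a made-up example, and the -taskList parameter is the same assumption as before):

.\psake.ps1 -taskList BuildNugetPackage -properties @{"PackageName"="MyCompany.MyComponent"}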

Notice that when you run this build script from a build product, the PowerShell output is simply routed to the build output. However, both TeamCity and AppVeyor have ways to enable deeper integration. For instance, TeamCity will recognize certain phrases in the output and use those to determine the phase of the build or the severity of certain build problems:


task Compile { "##teamcity[blockOpened name='Compilation']" # compilation steps "##teamcity[blockClosed name='Compilation']" }

But since PSake will scan and load PowerShell modules (.psm1 files) from under the Modules folder, you can make this a lot more readable by importing the TeamCity extensions provided by the Psake Contrib project:
task Compile {
    TeamCity-Block "Compiling the solution" {
    }
}


AppVeyor offers PowerShell extensions to interact with their build system as well. And don't underestimate the capability of using PowerShell modules here. This is what will allow you to treat your PSake scripts as real code including opportunities for refactoring, code reuse, accepting open-source contributions, etc.
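
As a rough sketch of what such a module could look like (the file name and function below are hypothetical), consider something like Modules\BuildHelpers.psm1:

# Modules\BuildHelpers.psm1 (hypothetical example)
function Get-SolutionFiles {
    param([string]$Path = ".")

    # Return every Visual Studio solution file under the given path,
    # so individual tasks don't have to repeat this logic.
    Get-ChildItem -Path $Path -Filter *.sln -Recurse
}

Export-ModuleMember -Function Get-SolutionFiles

Any task in the build script can then call Get-SolutionFiles like any other PowerShell function, and changes to it can be reviewed, reused and merged like regular code.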

Anyway, this post was in no way meant to be a comprehensive discussion of Psake. Instead, I recommend you check out the many examples in the GitHub repository or look at the build script that I'm using to build Fluent Assertions on CodeBetter's TeamCity environment.

So how do you organize your build environment? Does what I'm proposing here make sense? Let me know by commenting below or tweeting me at @ddoomen.

Thursday, March 12, 2015

A scalable software development organization? This is how you do it!

Hey architect! Building a complex system with four talented developers is one thing. Building one with 40 developers is a whole different league.

The quote above is closer to the truth than you might expect. Many large organizations develop software systems with hundreds of software developers. But how often does that go well? Not that often… And what is the better approach? Defining all aspects of the system upfront? Or postponing decisions as late as possible? Let's look at both…

A strict approach

How difficult is it to design, in all its details, a system that is completely unique (they always are) and for which many functional details will change along the way? Quite difficult if you ask me. Moreover, many stakeholders think they know exactly what they are asking from the development teams, but still, they keep on asking for new requirements as soon as the building starts. A solution many organizations choose is to hire shiploads of project managers whose single task is to tightly control those changes. This happens a lot in organizations where a hierarchical structure is still the norm and where architects live in the proverbial ivory towers. What the project managers do to the functional scope, the architects do to the technical design: restrict change.

But are those changes really something you want to avoid? Aren't these changes exactly what is needed to end up with a software system that supports the end users as well as possible? I believe they are.

I guess you might be wondering how to approach this problem then.

A flexible approach

At the other end of the spectrum you'll find those organizations that follow the agile principles. Agile projects can be characterized by time-boxed, well-scoped periods of two to three weeks in which multi-disciplinary teams complete several functional increments, including functional design, construction, testing, and in some cases, even delivery. The function of an architect doesn't really exist here, so team members will take that up as one of their roles. The big challenge in this approach is that no big upfront detailed functional design is ever done. Instead, a team will do as much design as is necessary to estimate and complete the work planned for the upcoming period of two or three weeks. That doesn't mean the team just jumps into the project. Usually they work with a high-level reference architecture that defines the boundaries and responsibilities of the team and evolves along the way.

Agile teams have a lot of autonomy and need to be empowered to make the decisions they think are needed to complete the functionality they committed to. But as soon as the number of teams increases, this approach tends to be susceptible to uncoordinated architecture changes and to overlap, where teams solve the same problems in different ways or duplicate each other's work. A solution that you see quite often is to introduce one or more architects whose single task is to try to keep up with what the teams are doing. Not a scalable approach if you ask me. Then what?

Best of both worlds

Is it possible to combine the strengths of these two approaches? Yes, you can!

First, you need somebody who'll sit with the most experienced representatives from the teams to define a reference architecture that can solve the problem at hand. Then those people will need to break down the system into smaller autonomous components which the teams can own and develop in relative isolation. Note though that this somebody, often called an agile architect, not only needs to look at the problem from a technological perspective, but also needs to consider the combination of people, processes and tools. Proper architecture involves all of these aspects.

But next to that, each component should be owned by a team with enough autonomy to make decisions without the involvement of some kind of global architect. At the same time, such a component should be open enough so that other teams can contribute improvements and features. In fact, they should be able to work with a temporary copy of that component as long as their contribution hasn't been fully accepted by the owning team. Only then can you avoid dependencies in which one team has to wait for another before they can continue their own work.

But that's not all. Teams should be able to look at a component's version and immediately tell whether that component contains breaking changes. To accomplish that, they'll need to agree on a versioning scheme that explicitly differentiates between bug fixes, backwards compatible functional improvements or breaking changes. Ideally, they use a distribution tool that understands that scheme, provides notifications about new versions, and empowers them to select a version that best suits their need.

In short, autonomy, clear responsibilities and boundaries, an unambiguous versioning scheme, and the right tools are a great recipe for scaling agile software development.

What else can you do?

In the sixties, Melvin Conway completed studies on the relationship between architecture and the physical organization of a company. His main conclusion was that, eventually, the structure of a complex architecture will adapt to the way teams are physically located within an organization. If you think about it, that's not a strange conclusion at all. If teams experience any barriers while trying to talk with other teams, such as physical separation, they tend to introduce rules of conduct for how they exchange information. Introducing a well-defined protocol with which components communicate is a practical example of this. Since then, many software practitioners have confirmed Conway's observations. I particularly favor co-located teams, i.e. teams that sit together in the same open space, because that guarantees short communication lines. Over time, however, I've learned first-hand that this can also result in large monolithic systems. Instead, creating physical boundaries that closely align with the desired architectural boundaries might be just what you need to prevent that in the first place. In other words, don't deny Conway's Law.

Ok, now back to reality

So far so good. Now that I've covered a lot of theory, let's see if this is even remotely possible to do in reality. First of all, no off-the-shelf solution or product can help you here. The trick is to collect the right tools and services and combine them in a smart way.

Git & GitHub

To begin with, you're going to need Git, an open-source version control system that, unlike Microsoft's own Team Foundation Server, has been designed to be used in a distributed, occasionally connected environment. Using Git, each component identified in the prior discussion ends up in a separate repository where only the owning team can make changes. This gives those teams the power they need to control what they own. As an example of one of the many cloud services that can host Git repositories, we've been using GitHub for both private and public repositories for over a year now.

Forks & Pull Requests

Two very crucial concepts that GitHub offers are forks and pull requests. A fork, a term originating from the Unix world, allows you to make a personal or team-specific copy of an existing repository while retaining a reference to its origin. This allows a team to make changes to an existing component in isolation, without needing write access to the original repository. When you combine this with the pull request concept, those same changes can be contributed back to the owning team by sending a request to pull the local changes back into the owning repository. This is not a requirement though. It's perfectly fine for a team to fork the other team's repository and continue from there. But if you do use pull requests, you can use them as a central hub for code reviews, discussions and rework, all to make sure the owning team can incorporate the changes without too much hassle. In a way, the owning team gains maximum control over their components, without holding back any other team.
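
To make that a bit more tangible, a typical fork-based contribution from the consuming team's side could look roughly like this (repository URLs and branch names are purely illustrative):

# Clone your team's fork and keep a reference to the owning team's repository
git clone https://github.com/consuming-team/shared-component.git
cd shared-component
git remote add upstream https://github.com/owning-team/shared-component.git

# Make the changes on a dedicated branch and push them to the fork
git checkout -b improve-logging
git commit -am "Improve logging around retries"
git push origin improve-logging

# Finally, open a pull request against the owning team's repository on GitHub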

NuGet & MyGet

I didn't mention yet the form in which components are shared between teams. Yes, you can share the original source code, but that's a great recipe for losing control quickly and creating way too many dependencies between the teams. A much better option, and the de-facto standard within the open-source .NET community, is to employ NuGet packages, a ZIP-based packaging format that combines binary DLLs with metadata about the supported .NET versions, release notes and such.

Obviously, you don't want your internal corporate packages to appear publicly on nuget.org. That's why a couple of guys have built MyGet, a commercial offering that allows you to share NuGet packages in a secure and controlled environment. MyGet offers a hosted solution, but you can also run it on-premises. If you do work on an open-source project, MyGet even allows you to use it as a staging area for intermediate versions, before you publish your package to nuget.org using MyGet's one-click publishing features.
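
As an illustration, packaging a component and pushing it to a private MyGet feed could boil down to something like this (the package name, feed URL and API key are placeholders):

# Create the package from a .nuspec file
nuget pack MyCompany.MyComponent.nuspec -Version 1.2.0

# Push it to the team's private MyGet feed
nuget push MyCompany.MyComponent.1.2.0.nupkg -Source https://www.myget.org/F/mycompany/api/v2/package -ApiKey <your-api-key>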

Semantic Versioning & GitFlow

Both NuGet and MyGet have a versioning system that supports the notion of pre-release components and gives you fine-grained control over how one version of a component relates to another. There are several schemes that you can use, but I'm pretty fond of one called semantic versioning. This scheme contains unambiguous rules on how to version minor and major changes as well as bug fixes and patches, and it creates clarity for the teams using a component. But determining the version number of a particular component is still a manual step. As an example, consider version 1.2 of a component. Compared to version 1.1, it should be fully backwards compatible and just add new functionality that doesn't affect existing consumers. Version 2.0 is different though. It is not backwards compatible and requires changes to the components that use it. This may sound trivial at first, but you might be amazed how often version numbers are incremented just for commercial or marketing purposes. Also, wouldn't it be nice to have some kind of Git branching strategy that could generate version numbers automatically based on conventions? Indeed, if you use the GitFlow branching strategy (which gives special meaning to the master, develop and release- branches) and combine this with GitVersion, your component versions will be derived from the branch name.
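
As a sketch, a GitFlow history and the kind of versions a convention-based tool such as GitVersion might derive from it could look like this (the exact pre-release labels depend on your configuration):

git checkout -b develop master        # ongoing work towards 1.2.0, pre-release builds
git checkout -b release-1.2 develop   # stabilization of the upcoming 1.2.0 release
git checkout master
git merge release-1.2                 # the stable, released 1.2.0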

Even Microsoft has seen the light

Doesn't that sound great? But if all these tools and practices work so well, I hear you asking, why isn't every organization doing this already? Well, more and more companies are adopting this approach. In fact, even Microsoft has seen the light and decided to make almost all parts of the .NET platform open-source. And not just through Microsoft's own CodePlex or Visual Studio Online services. No, they've dropped everything on GitHub and now even accept pull requests from the community. I gather you didn't think that would have been possible five years ago…

So what do you do to scale your agile software development teams? Let me know by commenting below or tweeting me at @ddoomen.

Tuesday, March 03, 2015

What kind of software architect are you? Strategist or tactician?

If there's one term, next to agile, that's so overloaded that nobody knows what to expect, it must be architect. Yes, you can specialize it by prefixing it with software, as in software architect. And with that, you would be able to determine that he or she is probably designing some kind of software system. But you still wouldn't be able to understand what that means exactly. You have solution architects, system architects and enterprise architects, and to make matters worse, you even have agile architects. And within that realm, you can also designate a seniority level. So you could start as a junior software architect, then move on to become a senior software architect, then become the lead architect and eventually end up being the chief architect. But nothing means less than somebody introducing himself with "I am the chief architect" without explaining what that means.


Also, many people think that being an architect is similar to being a project manager or some other organizational function. And some organizations still treat it as such, where being promoted from senior software developer to architect is the only way for a developer to get a pay raise. In contrast to that, you'll see that organizations which embrace the agile principles treat an architect more as a role or responsibility that every experienced software developer should have.

So what kind of architect am I? Well, I'm definitely a software architect. You won't see me designing buildings anytime soon. I also consider myself to be an agile architect. For me that means that I'll try to avoid doing any big design upfront, and instead provide the teams with a high-level design that defines the major responsibilities and includes the architectural constraints they have to work with. But I wasn't always like that. I used to have a lot of trust issues delegating design tasks to teams and always insisted on reviewing every little decision. I don't have to tell you that this is something nobody benefits from. It's not a scalable approach and it has cost me a lot of energy. These days, we ask for volunteers to take on the sub-system owner role. They get involved in new design challenges and review certain aspects of the code base, all based on their personal skills or interests. It's a much more scalable approach and allows me to do some real coding throughout the day (which I still love to do). Good architects should do some real coding. You simply can't judge the feasibility of a design if you're not actively using new technologies or trying new programming languages yourself.

Another aspect you can use to characterize architects is how they approach their design process. Typically you'll see them balance between tactical and strategic decisions. Until I read Jeremy D. Miller's recent discussion on long-lived codebases, I didn't realize this difference. On numerous occasions, I've wondered whether my tendency to go for a pragmatic solution (especially where others choose a much more elaborate and complicated solution) is me being too naïve, or others ignoring the KISS and YAGNI principles. In my opinion, if a (potentially) wrong decision can be easily reverted at a later point in time (it has high reversibility), a quick decision is fine, as long as common sense is being used and no technical debt is being introduced. Only if a wrong decision would have widespread consequences is a more thorough analysis and/or spike warranted. With that knowledge, you could say I'm more of a tactical architect. I've even grown an allergy to strategic architects, especially if they can't keep themselves from that endless search for the perfect solution (which obviously doesn't exist). On the other hand, I do realize I can learn a lot from them. So when the circumstances are there and a problem with low reversibility needs to be solved, I won't hesitate to get a strategic architect involved.

So what kind of architect are you? Let me know by commenting below or tweeting me at @ddoomen.

Thursday, February 26, 2015

A beacon of light in the shadow of failing builds

As long as I can remember, I've been using an automatic build system to regularly verify the quality of the code base I've been working on. First using Team Foundation Server, but since a year or so, JetBrains' TeamCity. In my current project we use continuous integration builds for running the 12,000 .NET and JavaScript unit tests, builds for delivering NuGet packages, builds to deploy the latest version of our system on multiple virtual machines, and builds to verify that the Oracle and SQL Server databases can be upgraded from one version to the other. If one of these fails, we usually don't need a lot of effort to track down the developer that needs to fix it.

But we have another set of builds that never gets the same amount of love the normal builds get: our UI automation tests. Since our system is web-based, we've been investing a lot in automated end-to-end browser tests using WaTiN and SpecFlow. In spite of our efforts to build a reliable test automation framework on top of that, stability remains a big issue. Sometimes the timing of such a test is an issue, sometimes it's an issue with the combination of the browser and WaTiN, and sometimes it is just a temporary infrastructure problem.

Regardless, having developers actively monitor and analyze those builds is painful to say the least. We encourage developers to either configure an email notification or install TeamCity's tray icon to get notified about failing builds. We even put up two big 40-inch TV monitors on the wall displaying the build status. Still, we regularly observe builds that have been failing for an hour or so without anybody noticing. We tried to introduce a code of conduct, talked many times to the scrum masters, attended retrospective meetings to get feedback from the teams and organized department-wide intervention meetings. It usually helps for a couple of weeks, but then the routine kicks in again.


I don't remember where, but at some point I read a post on some agile blog where they used a real-life traffic light to signal the build quality. They used red and green for the fail/pass status and yellow to signal that a failing build is being investigated. So when I started looking for something similar, I ran into the company Delcom, which ships coffee-cup-sized, LED-based USB visual indicators that you can stick to any surface. And depending on the model you buy, it even gets a button and a buzzer.

Since we use TeamCity, I quickly found a little piece of open-source software, TeamFlash, that allows you to connect the two together. Unfortunately, it was kind of a one-time thing, and looking at the commit history, not much had happened in the last two years. But the code was on GitHub, so I decided to become an active contributor and submitted my first very small pull request. Long story short, it took me several tweets, a couple of direct messages and a month of patience to get my pull request accepted. Considering this was a pretty important thing to me, and forking would not make a lot of sense, I convinced myself to reboot that project.

I named it Beacon. When you run it, it will do things like turning orange when a failing build is being investigated.


It is available as a ZIP file on GitHub or as a Chocolatey package that you can install using choco install beacon. Its functionality is a bit rudimentary right now, but I've planned several improvements for the coming weeks. Being able to use TeamCity guest accounts and fine-grained control over the colors, power levels and the flashing mode are just a few of them. If you can't wait, intermediate release candidates will be available through MyGet. And since this project is open-source, feel free to provide me with ideas, feedback or even pull requests.

So what do you do to get your teams to care about your builds? Let me know by commenting below or tweeting me at @ddoomen.

Thursday, February 19, 2015

Fluent Assertions just a got a little bit better

A quick post to let you all know that I've just published a new version of Fluent Assertions with a load of little improvements that will make your life as a unit test developer a little bit easier.

New features

  • Added CompareEnumsAsString and CompareEnumsAsValue to the options taken by ShouldBeEquivalentTo to specify how enumerations are compared.
  • Added ShouldThrowExactly and WithInnerExceptionExactly to assert a specific exception was thrown rather than the default of allowing sub-classes of those exceptions. #176
  • Introduced a new static AssertionOptions class that can be used to change the defaults used by ShouldBeEquivalentTo, alter the global collection of IEquivalencySteps that are used internally, and change the rules that are used to identify value types. #134
  • ShouldBeEquivalentTo will now also include public fields. Obviously, this can be changed using a set of new members on the EquivalencyAssertionOptions<T> class that the equivalency API takes.
  • Extended the collection assertions with StartWith, EndWith, HaveElementPreceding and HaveElementSucceeding.
  • Added the methods ThatAreDecoratedWith, ThatAreInNamespace, ThatAreUnderNamespace, ThatDeriveFrom and ThatImplement to filter types from assemblies that need to comply with certain prerequisites.
  • Added BeAssignableTo that directly applies to Type objects.

Minor improvements and fixes

  • Extended the time-conversion convenience methods with 4.Ticks()
  • When an object implements IDictionary<T,K> more than once, ShouldBeEquivalentTo will fail rather than pick a random implementation. Likewise, if a dictionary only implements IDictionary<,> explicitly, it will still be treated as a dictionary. Finally, ShouldBeEquivalentTo will now respect the declared type of a generic dictionary.
  • A null reference in a nested collection wasn't properly detected by ShouldBeEquivalentTo.
  • Corrected the remaining cases where ShouldBeEquivalentTo did not respect the declared type. #161
  • Added an overload of collection.ContainSingle() that takes no arguments.
  • Included the time zone offset when displaying a DateTimeOffset. #160
  • collection.Should().BeEmpty() now properly reports the collection items it found unexpectedly. #224
  • Made the fallback AssertFailedException serializable to help in certain cross-AppDomain unit tests. #214
  • Better support for rendering a TimeSpan's MinValue and MaxValue without causing stack overflow exceptions. #212
  • Fixed an issue where the Windows 8.1 test framework detection code would run into a deadlock when using a [UITestMethod]. #223
  • Fixed an issue where ShouldBeEquivalentTo would throw an internal exception on an unset byte[] property. #165

Internal changes

  • We now use StyleCop to improve the quality level of the code.
  • The first steps have been taken to deprecate IAssertionRule.
  • The internal assertion API has been changed to allow chaining complex assertions using a fluent API. This should make it a lot easier to extend Fluent Assertions. You can read more about that in this blog post.
  • We've started to use Chill to improve the readability of the more behavioral unit tests.

As usual, you can get the bits from NuGet or through the main landing page. Tweet me at @ddoomen for questions or post them on StackOverflow. If you find any issues, post them to the GitHub repository.
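
For completeness, getting the latest bits into a test project from the Visual Studio Package Manager Console is a one-liner (assuming the default nuget.org feed):

Install-Package FluentAssertions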