Thursday, February 26, 2015

A beacon of light in the shadow of failing builds

For as long as I can remember, I've been using an automated build system to regularly verify the quality of the code base I've been working on. First with Team Foundation Server, and for the last year or so with JetBrains' TeamCity. In my current project we use continuous integration builds to run the 12,000 .NET and JavaScript unit tests, builds to deliver NuGet packages, builds to deploy the latest version of our system to multiple virtual machines, and builds to verify that the Oracle and SQL Server databases can be upgraded from one version to the next. If one of these fails, it usually takes little effort to track down the developer who needs to fix it.

But we have another set of builds that never gets the same amount of love the normal builds get: our UI automation tests. Since our system is web-based, we've invested a lot in automated end-to-end browser tests using WaTiN and SpecFlow. In spite of our efforts to build a reliable test automation framework on top of that, stability remains a big issue. Sometimes the timing of such a test is the problem, sometimes it's the combination of the browser and WaTiN itself, and sometimes it is just a temporary infrastructure hiccup.

Regardless, having developers actively monitor and analyze those builds is painful to say the least. We encourage developers to either configure an email notification or install TeamCity's tray icon to get notified about failing builds. We even put up two big 40-inch TV monitors on the wall displaying the build status. Still, we regularly observe builds that have been failing for an hour or so without anybody noticing. We tried to introduce a code of conduct, talked many times to the scrum masters, attended retrospective meetings to get feedback from the teams and organized department-wide intervention meetings. Each of these helps for a couple of weeks, but then the old routine kicks in again.


I don't remember where, but at some point I read a post on an agile blog where they used a real-life traffic light to signal the build status: red and green for fail/pass, and yellow to signal that a failing build was being investigated. So when I started looking for something similar, I ran into the company Delcom, which ships coffee-cup-sized, LED-based USB visual indicators that you can stick to any surface. Depending on the model you buy, it even comes with a button and a buzzer.

Since we use TeamCity, I quickly found a little piece of open-source software, TeamFlash, that connects the two. Unfortunately, it seemed to have been a one-time effort; looking at the commit history, not much had happened in the last two years. But the code was on GitHub, so I decided to become an active contributor and submitted my first, very small pull request. Long story short, it took several tweets, a couple of direct messages and a month of patience to get that pull request accepted. Considering this was pretty important to me, and forking would not make a lot of sense, I convinced myself to reboot the project.

I named it Beacon. When you run it, it does things like turning the light orange when a failing build is being investigated.
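
Conceptually, all it does is map the TeamCity build status onto a light color, along the lines of the sketch below. To be clear, this is not Beacon's actual code; the type and member names are made up purely for illustration.

public enum BuildStatus { Passing, Investigating, Failing }

public static class StatusToColor
{
    // Map a build status to a light color, like a traffic light.
    public static string For(BuildStatus status)
    {
        switch (status)
        {
            case BuildStatus.Passing:
                return "green";
            case BuildStatus.Investigating:
                return "orange";
            default:
                return "red";
        }
    }
}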


It is available as a ZIP file on GitHub or as a Chocolatey package that you can install using choco install beacon. Its functionality is a bit rudimentary right now, but I've planned several improvements for the coming weeks: support for TeamCity guest accounts and fine-grained control over the colors, power levels and flashing mode are just a few of them. If you can't wait, intermediate release candidates will be available through MyGet. And since this project is open source, feel free to send me ideas, feedback or even pull requests.

So what do you do to get your teams to care about your builds? Let me know by commenting below or tweeting me at @ddoomen.

Thursday, February 19, 2015

Fluent Assertions just got a little bit better

Just a quick post to let you all know that I've published a new version of Fluent Assertions with a load of small improvements that should make your life as a unit test developer a little bit easier.

New features

  • Added CompareEnumsAsString and CompareEnumsAsValue to the options taken by ShouldBeEquivalentTo to specify how enumerations are compared.
  • Added ShouldThrowExactly and WithInnerExceptionExactly to assert that a specific exception type was thrown, rather than the default behavior of also allowing sub-classes of that exception. #176
  • Introduced a new static AssertionOptions class that can be used to change the defaults used by ShouldBeEquivalentTo, alter the global collection of IEquivalencySteps that are used internally, and change the rules that are used to identify value types. #134
  • ShouldBeEquivalentTo will now also include public fields. Obviously, this can be changed using a set of new members on the EquivalencyAssertionOptions<T> class that the equivalency API takes.
  • Extended the collection assertions with StartWith, EndWith, HaveElementPreceding and HaveElementSucceeding (see the short sketch after this list).
  • Added the methods ThatAreDecoratedWith, ThatAreInNamespace, ThatAreUnderNamespace, ThatDeriveFrom and ThatImplement to filter types from assemblies that need to comply with certain prerequisites.
  • Added BeAssignableTo, which applies directly to Type objects.
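
To give a feel for a couple of these, here's a rough usage sketch; illustrative only, so the exact overloads and failure messages may differ slightly from the release itself.

// Illustrative only; exact overloads may differ.
Action act = () => { throw new ArgumentNullException("name"); };

act.ShouldThrowExactly<ArgumentNullException>()   // a derived exception type would fail this assertion
   .WithMessage("*name*");

var numbers = new[] { 1, 2, 3, 4 };
numbers.Should().StartWith(1);
numbers.Should().EndWith(4);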

Minor improvements and fixes

  • Extended the time-conversion convenience methods with 4.Ticks() (illustrated in the snippet after this list).
  • When an object implements IDictionary<T,K> more than once, ShouldBeEquivalentTo will fail rather than pick a random implementation. Likewise, if a dictionary only implements IDictionary<,> explicitly, it will still be treated as a dictionary. Finally, ShouldBeEquivalentTo will now respect the declared type of a generic dictionary.
  • A null reference in a nested collection wasn't properly detected by ShouldBeEquivalentTo.
  • Corrected the remaining cases where ShouldBeEquivalentTo did not respect the declared type. #161
  • Added a parameterless overload of collection.ContainSingle().
  • Included the time zone offset when displaying a DateTimeOffset. #160
  • collection.Should().BeEmpty() now properly reports the collection items it found unexpectedly. #224
  • Made the fallback AssertFailedException serializable to help in certain cross-AppDomain unit tests. #214
  • Better support for rendering TimeSpan's MinValue and MaxValue without causing stack overflow exceptions. #212
  • Fixed an issue where the Windows 8.1 test framework detection code would run into a deadlock when using a [UITestMethod]. #223
  • Fixed an issue where ShouldBeEquivalentTo would throw an internal exception on an unset byte[] property. #165
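
Two of the smaller conveniences from this list in action (illustrative only):

// Illustrative only.
TimeSpan fourTicks = 4.Ticks();          // the new time-conversion convenience method
new[] { 42 }.Should().ContainSingle();   // the new parameterless overload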

Internal changes

  • We now use StyleCop to improve the quality level of the code.
  • The first steps have been taken to deprecate IAssertionRule.
  • The internal assertion API has been changed to allow chaining complex assertions using a fluent API. This should make it a lot easier to extend Fluent Assertions. You can read more about that in this blog post.
  • We've started to use Chill to improve the readability of the more behavioral unit tests.

As usual, you can get the bits from NuGet or through the main landing page. Tweet me at @ddoomen for questions or post them on StackOverflow. If you find any issues, post them to the GitHub repository.

Sunday, February 15, 2015

Evolving your agile architecture without losing any people

I'm a big fan of just-in-time architecture (a.k.a. agile architecture), because it prevents me from trying to predict the future, which, frankly, I suck at. So although I generally start out with a reference architecture, a lot of changes happen after that point, as the functional requirements gain more clarity. If you're working with a couple of developers, you don't need to do much to keep everybody up-to-date. But if, like us, your organization comprises numerous teams, that quickly becomes a challenge. Even though all developers get a thorough introduction to the high-level architecture and its most important principles, we need some way to convey those changes. This is how we do it…

One of the first practices we introduced was to send an email for every architectural change and copy those into a kind of change log in OneNote. This worked reasonably well for reference purposes, but we discovered that not all developers read all these emails. Unfortunately, scanning through a single OneNote notebook wasn't practical either, even though OneNote has a pretty brilliant indexing engine. After asking why some developers never read the emails, we learned that it caused too much of a context switch for them.

Regardless, a lot of people kept feeling their time was wasted on those emails. In a way, we were really emulating a blog of some sort, so why not use a real blog? And indeed, that's what we have been doing for a year now. All architectural changes are posted online with clear categories and version tracking. This gives developers the freedom to manually track the blog, use an RSS feed, set up an email alert, or get a notification through Flowdock.

Other, more direct techniques for sharing architectural changes are tech sessions and architecture forum meetings. The former are used to share upcoming architectural changes and present current technical projects. The latter are more like an open forum (hence the name) where developers can submit topics which they feel need further clarification. This is also the place where we gauge new ideas, discuss new coding and design guidelines, or resolve disputes about certain solutions. I myself tend to use these meetings to discuss design mistakes, coding errors and similar issues that I ran into while doing ad-hoc code reviews.

Within the organization we have the notion of a subsystem owner, or SSO for short: a developer whose skills align well with a certain aspect of our code base and who is freed up for half a day a week to safeguard that aspect. For instance, the JavaScript SSO knows everything about writing high-performance, memory-safe JavaScript as well as writing Jasmine unit tests. So ideally, he scrutinizes all JavaScript code written within a project. However, most of the SSOs rely on somebody pinging them with questions and don't feel they have time to just start doing ad-hoc reviews.

That's why pull requests on GitHub are such a nice mechanism. Whoever reviews the PR can easily refer to an SSO by using that person's handle, thereby making sure the right people get a notification. For all new components and GitHub repositories the use of PRs is already required, and it's just a matter of time until all repositories are governed by this pull request policy.

Ideally, you would like to catch certain architectural violations while a developer is still coding. Some kind of static code analysis can help, but that is usually limited to verifying proper use of the .NET Framework. And because it would just slow down compilation, we don't use Visual Studio projects to separate architectural layers. Instead, we've been introducing NDepend as a means to detect certain violations as part of the CI build. The learning curve is a bit on the high side, but there are plenty of samples to start from. And the beauty is that you can include the NDepend analysis in your CI builds (which our developers can run from their development box thanks to PSake), so early feedback is guaranteed.
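
To give an idea of the kind of rule I mean, a layering check in NDepend's CQLinq could look something like the sketch below. Treat it as pseudo-code: the namespace names are made up and the exact CQLinq helpers may differ, so check the NDepend documentation for the real thing.

// Rough, pseudo-CQLinq sketch: flag types in the (hypothetical) UI layer
// that use the data-access layer directly. Helper names may differ.
warnif count > 0
let uiTypes  = Application.Namespaces.WithNameLike("MyApp.UI").ChildTypes()
let dalTypes = Application.Namespaces.WithNameLike("MyApp.DataAccess").ChildTypes()
from t in uiTypes.UsingAny(dalTypes)
select t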

So what happens if a developer doesn't know who to address? Or what if he can't find that blog post he remembers reading about a particular topic? That's when Flowdock comes in handy. We have specific flows per topic where he or she can ask for help, guidance or ideas. And if someone doesn't know the specific flow to post in, there's always the development flow for general-purpose development discussions. I often use Flowdock to read back on the original discussion that happened before a certain design decision was made.

So what do you do to keep your fellow developers up-to-date on an evolving agile architecture? Let me know by commenting below or by tweeting me at @ddoomen.

Sunday, February 01, 2015

Continuous refactoring in its natural habitat

Over the last three weeks or so, after the kids went to bed, I've been working on some new features for Fluent Assertions. While doing so I went off track several times in an attempt to improve several APIs and internal designs that didn't feel quite right. Since this thought process is representative of the way I approach professional software development, and my blog is about continuous improvement, I decided to try to capture that thought process in a post.

Before I go into the deep end, let me share some context for the problem at hand. Fluent Assertions (let's call it FA from now on) has an API to compare two object graphs, which internally uses a collection of implementations of the IEquivalencyStep interface. As part of the next release, I wanted to allow people to directly affect the API's behavior by adding, removing or replacing steps with their own. Before that change, the EquivalencyValidator class had a GetSteps method that provided it with the out-of-the-box equivalency steps.

private IEnumerable<IEquivalencyStep> GetSteps()
{
    yield return new TryConversionEquivalencyStep();
    yield return new ReferenceEqualityEquivalencyStep();
    yield return new RunAllUserStepsEquivalencyStep();
    yield return new GenericDictionaryEquivalencyStep();
    yield return new DictionaryEquivalencyStep();
    yield return new GenericEnumerableEquivalencyStep();
    yield return new EnumerableEquivalencyStep();
    yield return new StringEqualityEquivalencyStep();
    yield return new SystemTypeEquivalencyStep();
    yield return new EnumEqualityStep();
    yield return new StructuralEqualityEquivalencyStep();
    yield return new SimpleEqualityEquivalencyStep();
}

So I started to move those defaults into a new static AssertionOptions class. Yes, I know, it's static, but it's supposed to be global and should affect all usages of FA.

public static class AssertionOptions
{
    private static List<IEquivalencyStep> steps = new List<IEquivalencyStep>();

    static AssertionOptions()
    {
        steps.AddRange(GetDefaultSteps());
    }

    private static IEnumerable<IEquivalencyStep> GetDefaultSteps()
    {
        yield return new TryConversionEquivalencyStep();
        yield return new ReferenceEqualityEquivalencyStep();
        // …left out for brevity
    }
}

Practicing object-oriented design

But then my obsession with maintainable code kicked in. Why would I burden the AssertionOptions class with the responsibility and knowledge of where to insert new steps in relation to the built-in steps? So let's apply rule 4 of Object Calisthenics, also known as First Class Collections:

Any class that contains a collection should contain no other member variables. If you have a set of elements and want to manipulate them, create a class that is dedicated for this set.

I cannot stress this enough. Whenever your class contains multiple private fields, please consider extracting those into dedicated collection classes or value types. It might feel like unnecessary refactoring, but it really makes your code more object-oriented and maintainable. Anyway, after refactoring, all this logic ended up in a new EquivalencyStepCollection that is used like this:

public static class AssertionOptions
{
    private static EquivalencyAssertionOptions defaults = new EquivalencyAssertionOptions();

    static AssertionOptions()
    {
        EquivalencySteps = new EquivalencyStepCollection(GetDefaultSteps());
    }

    public static EquivalencyStepCollection EquivalencySteps { get; private set; }
}

The collection class really behaves as a collection and implements, at a minimum, IEnumerable:

public class EquivalencyStepCollection : IEnumerable<IEquivalencyStep>
{
    private readonly List<IEquivalencyStep> steps = new List<IEquivalencyStep>();

    public EquivalencyStepCollection(IEnumerable<IEquivalencyStep> defaultSteps)
    {
        steps.AddRange(defaultSteps);
    }

    public IEnumerator<IEquivalencyStep> GetEnumerator()
    {
        return steps.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

I didn't mention it before, but I wouldn't be taking myself seriously if I weren't practicing Test Driven Development. So one of those tests specifies the behavior of adding a step and makes sure it ends up just before the built-in step that does a simple comparison using Object.Equals:

[TestClass]
public class When_appending_a_step : Given_temporary_equivalency_steps
{
    public When_appending_a_step()
    {
        When(() =>
        {
            Steps.Add<MyEquivalencyStep>();
        });
    }

    [TestMethod]
    public void Then_it_should_precede_the_final_builtin_step()
    {
        IEquivalencyStep builtinStep = Steps.LastOrDefault(s => s is SimpleEqualityEquivalencyStep);
        IEquivalencyStep addedStep = Steps.LastOrDefault(s => s is MyEquivalencyStep);

        int builtinStepIndex = Steps.LastIndexOf(builtinStep);
        int addedStepIndex = Steps.LastIndexOf(addedStep);

        addedStepIndex.Should().Be(builtinStepIndex - 1);
    }
}

Intention-revealing unit tests

Did you notice that I'm hiding some of the complexity needed to reset the static AssertionOptions class in a base class? I'm not in favor of test base classes, especially because they tend to get misused pretty quickly. But with the help of Chill, a project by Erwin van der Valk, I decided to use one anyhow, simply because it helps clarify the intent of my test. I think it was Jeremy D. Miller who once said "If it's not important for the unit test, it is very important not to show it", and cleaning up after a test is not important for understanding the test. This is what the base class looks like. Notice Chill's GivenWhenThen class.

public class Given_temporary_equivalency_steps : GivenWhenThen
{
    protected override void Dispose(bool disposing)
    {
        Steps.Reset();
        base.Dispose(disposing);
    }

    protected static EquivalencyStepCollection Steps
    {
        get { return AssertionOptions.EquivalencySteps; }
    }
}

Did you also notice the implementation of Then_it_should_precede_the_final_builtin_step? It's basically a copy of the internal implementation of the Add method, so it hardly helps the intention-revealing side of my story; I'm sure you'll agree. This is where my code quality obsession kicked in again, so I decided to extend FA with some specialized extension methods that would help me make those tests a bit more intention revealing.

But wait! I surely don't want to pollute my current changes with even more refactoring, do I? No, I definitely prefer small commits and a clean, tidy commit history. But switching to another branch without committing those half-finished changes won't let me start with a clean slate. Sure, I could stash my changes, but that requires me to think of some unique name. And yes, I do have a second clone somewhere on my SSD, but I'd rather create a temporary commit that I can use to rebase onto those new assertions at a later point. Well, that's just what Phil Haack's git save and git undo aliases do.

After installing those aliases, executing git save from your favorite Git Bash or PowerShell console (don't forget Posh-Git if you use the latter) takes the local changes and commits them as a local commit named SAVEPOINT. Now I can safely switch to a new branch (git cob does just that) and work on those extensions.

One of the first assertion methods I implemented was the collection.Should().StartWith() method. After the first spec representing the happy path it looked like this:

public AndConstraint<TAssertions> StartWith(object element, string because = "", params object[] becauseArgs)
{
    object first = Subject.Cast<object>().FirstOrDefault();

    Execute.Assertion
        .ForCondition(first.IsSameOrEqualTo(element))
        .BecauseOf(because, becauseArgs)
        .FailWith("Expected {context:collection} to start with {0}{reason}, but found {1}.", element, first);

    return new AndConstraint<TAssertions>((TAssertions)this);
}

Finding a better assertion API

But after finishing all the other paths as part of practicing TDD, it ended up like this.

public AndConstraint<TAssertions> StartWith(object element, string because = "", params object[] becauseArgs)
{
    bool succeeded = Execute.Assertion
        .ForCondition(!ReferenceEquals(Subject, null))
        .BecauseOf(because, becauseArgs)
        .FailWith("Expected {context:collection} to start with {0}{reason}, but the collection is {1}.", element, null);

    if (succeeded)
    {
        succeeded = Execute.Assertion
            .ForCondition(Subject.Cast<object>().Any())
            .BecauseOf(because, becauseArgs)
            .FailWith("Expected {context:collection} to start with {0}{reason}, but the collection is empty.", element);
    }

    if (succeeded)
    {
        object first = Subject.Cast<object>().FirstOrDefault();

        Execute.Assertion
            .ForCondition(first.IsSameOrEqualTo(element))
            .BecauseOf(because, becauseArgs)
            .FailWith("Expected {context:collection} to start with {0}{reason}, but found {1}.", element, first);
    }

    return new AndConstraint<TAssertions>((TAssertions)this);
}

This implementation is quite representative of most of the other extension methods in FA, but somehow it didn't feel right. I was planning to include EndWith and HaveElementPreceding as well, but I wasn't looking forward to more of these monstrosities. In particular, the constructs with the succeeded variable don't make the code any easier to understand. You might expect FailWith to throw some kind of exception when the condition is not met, and usually it does. But the structural equivalency API uses an AssertionScope to collect all assertion failures and throw them as one failure at the end. In fact, anybody can build extensions to FA and use the AssertionScope in more advanced scenarios.
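
As a rough sketch of that idea (assuming your version exposes the scope like this; person is a made-up object under test), wrapping multiple assertions in an AssertionScope makes them report as a single failure:

// All failures raised inside the scope are collected and reported
// as one combined exception when the scope is disposed.
using (new AssertionScope())
{
    person.Name.Should().Be("John");
    person.Age.Should().Be(36);
}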

Anyway, I decided to commit those changes and give myself a couple of days to come up with a better approach. I already knew I was going to create some kind of fluent API, but I needed a bit of time to chew on it. This is what I ended up with:

public AndConstraint<TAssertions> StartWith(object element, string because = "", params object[] becauseArgs)
{
    Execute.Assertion
        .BecauseOf(because, becauseArgs)
        .WithExpectation("Expected {context:collection} to start with {0}{reason}, ", element)
        .ForCondition(!ReferenceEquals(Subject, null))
        .FailWith("but the collection is {0}.", (object)null)
        .Then
        .Given(() => Subject.Cast<object>())
        .ForCondition(subject => subject.Any())
        .FailWith("but the collection is empty.")
        .Then
        .Given(objects => objects.FirstOrDefault())
        .ForCondition(first => first.IsSameOrEqualTo(element))
        .FailWith("but found {0}.", first => first);

    return new AndConstraint<TAssertions>((TAssertions)this);
}

What's important to know is that the Given and Then members are not even invoked if the previous condition was not met. Granted, it's more than my typical maximum of 7 statements, but it allowed me to get rid of those intermediate boolean variables and avoid repeating the expectation message. And with that, implementing the other extension methods became pretty easy.

Getting back on track

So, after committing those changes back to develop, my main development branch (I'm using the Gitflow branching strategy), it was time to backtrack to the global AssertionOptions API I began this post with. I had started that work on a separate branch whose head now pointed to the temporary commit I created using git save. Getting my working directory back to the state it was in before I side-tracked, but including the new extension methods, was just a matter of doing a git pull develop --rebase to replay the changes on my feature branch on top of develop, followed by git undo to restore my work-in-progress from that temporary commit. I don't understand how I ever got anything done without those aliases.

Anyway, this is one of those final unit tests.
public class When_appending_a_step : Given_temporary_equivalency_steps
{
    public When_appending_a_step()
    {
        When(() =>
        {
            Steps.Add<MyEquivalencyStep>();
        });
    }

    [TestMethod]
    public void Then_it_should_precede_the_final_builtin_step()
    {
        var equivalencyStep = Steps.LastOrDefault(s => s is SimpleEqualityEquivalencyStep);
        var subjectStep = Steps.LastOrDefault(s => s is MyEquivalencyStep);

        Steps.Should().HaveElementPreceding(equivalencyStep, subjectStep);
    }
}

I'm doing all of this in my private hours, so side-tracking from my original goal this much is not a typical situation for me either; I fully realize it is usually not an option in real projects. Regardless, if you ask me, you should strive for continuous improvement every single day. One practical way of tracking these kinds of refactorings is to create checklists on GitHub or in OneNote. Another method I'm experimenting with is to insert dedicated comments to mark code as smelly or to suggest possible refactorings. You can read more about this workflow in the article Natural Cause of Refactoring. Whatever you do, please never forget The Boy Scout Rule:
Always leave the campground cleaner than you found it
So what do you do to continuously improve your code base? Let me know by commenting below or tweeting me at @ddoomen.