Wednesday, June 24, 2015
As part of stabilizing an upcoming release, I always dogfood a beta package against the 12,000 unit tests in one of our bigger projects. In the early days, that would surface all kinds of edge cases I had never thought of. In every single case, the first thing I would do was add a new unit test to Fluent Assertions to make sure that edge case stayed covered from that point on. But over the last couple of releases, finding a failing unit test had become pretty rare. What an unpleasant surprise it was to encounter about 150 failing unit tests when running against a beta of v3.4.
The cause of this was FA's most powerful extension method, ShouldBeEquivalentTo, which performs a recursive comparison of two object graphs. To determine which properties have to be included in the comparison, it is supposed to use, by default, the compile-time type (a.k.a. the declared type) of the objects in the graph. However, earlier releases didn't behave entirely consistently in that respect. Both top contributor Adam Voss and I have reworked the internals many times, slowly improving that consistency with every release. In v3.4 we applied some more improvements, but those unfortunately surfaced a false positive introduced by an earlier release.
In our particular case, we were comparing a collection of events, declared as Event, with another collection. Because of that existing bug (I'm not sure when it was introduced), FA would still use the run-time type of each event during the comparison. The call to ShouldBeEquivalentTo excluded the properties of the Event base class, and since v3.4 now includes only the properties defined by the Event class, the net result is the following InvalidOperationException:
No members were found for comparison. Please specify some members to include in the comparison or choose a more meaningful assertion.
This particular exception can be worked around by using the IncludingAllRuntimeProperties option. But be prepared for unit tests that suddenly fail because some nested property's value doesn't match the expectation. In all cases where this happened in our code base, the unit test failed correctly. In other words, we had some pretty serious false positives.
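Fluent Assertions is a .NET library, but the effect of the bug can be sketched language-neutrally. The Event and OrderPlacedEvent types and the members_for_comparison helper below are hypothetical illustrations of declared-type versus run-time-type member selection, not FA's actual implementation:

```python
class Event:
    """Hypothetical base class, mirroring the Event base class in the story."""
    def __init__(self):
        self.version = 1


class OrderPlacedEvent(Event):
    """Hypothetical subclass with one extra member."""
    def __init__(self):
        super().__init__()
        self.order_id = 42


def members_for_comparison(chosen_type, exclude_base=None):
    """Collect the member names a structural comparison would look at,
    optionally excluding the members declared by a base class."""
    members = set(vars(chosen_type()).keys())
    if exclude_base is not None:
        members -= set(vars(exclude_base()).keys())
    return members


# Pre-v3.4 (buggy): the run-time type was used, so the subclass member
# survived even though the Event base-class members were excluded.
print(members_for_comparison(OrderPlacedEvent, exclude_base=Event))  # {'order_id'}

# v3.4: the declared (compile-time) type is used; excluding Event's own
# members leaves nothing to compare, which is what triggers FA's
# "No members were found for comparison" InvalidOperationException.
print(members_for_comparison(Event, exclude_base=Event))  # set()
```

The empty set in the second case is exactly why the exception message suggests either including more members or choosing a more meaningful assertion.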
So what does this have to do with semantic versioning? Well, Fluent Assertions' release strategy is based on semantic versioning, which requires me to carefully think about how the changes I make in a particular release affect the version number. To elaborate on that a bit, assume the current version is v3.4.0. If I change some internal logic without affecting the public API, or fix a bug in a backwards-compatible fashion, I'm supposed to increment the version to v3.4.1. If instead I'm adding new extension methods, additional overloads, or marking APIs obsolete, the version should get bumped to v3.5. And finally, if I were to drop those obsolete APIs or a particular .NET Framework variant, I must change the version to v4.0. In short, the versioning strategy is not a marketing decision, but is derived purely from the changes made in that release. By strictly adhering to this, people using v3.3 should be able to upgrade to any other v3.x version with confidence.
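The increment rules above can be sketched as a small function. The next_version name and the change-kind labels are my own illustration, not part of any SemVer tooling:

```python
def next_version(current, change):
    """Derive the next semantic version purely from the kind of change.

    change: 'breaking' -> dropped/changed public API (major bump)
            'additive' -> new backwards-compatible API (minor bump)
            'fix'      -> internal change or backwards-compatible bug fix (patch bump)
    """
    major, minor, patch = (int(part) for part in current.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "additive":
        return f"{major}.{minor + 1}.0"
    if change == "fix":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change kind: {change}")


print(next_version("3.4.0", "fix"))       # 3.4.1
print(next_version("3.4.0", "additive"))  # 3.5.0
print(next_version("3.4.0", "breaking"))  # 4.0.0
```

Note that a minor or major bump resets the lower components to zero, which is why an additive change on v3.4.0 yields v3.5.0 rather than v3.5.1.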
So looking back at this false positive, the question remains whether this release demands a major, minor, or patch increment. Since the release also includes new backwards-compatible API changes, it's going to be at least a minor increment; treating it as a patch release is out of the question. However, when somebody updates their FA version from v3.3 to v3.4, chances are it'll break some unit tests. You could argue that's a breaking change they'd want to postpone to a later point in time. On the other hand, we're talking about false positives here, and I expect people would like to know that their production code contains a problem Fluent Assertions didn't previously catch. Having said that, I've decided to treat this fix as a normal bug fix.
Considering this story, I can imagine more gray areas exist in the semantic versioning realm. I couldn't find a nice spot to discuss those topics, neither on Gitter nor on Jabber. So until somebody comes up with a better place, I've created a Gitter room. Please join me there if you have questions about semantic versioning.
Sunday, June 21, 2015
A recurring topic in every software project I've been involved with is what to document, when to do that, and where to store it. So it wasn't a big surprise that at a recent event, somebody asked me how we track and communicate design decisions. I initially pointed him to an article I wrote in February, but then I realized that it only covers documentation in the context of an evolving architecture. So let me elaborate on documentation a bit more.
In short, we make a distinction between structured and unstructured documentation. We had lots of discussions about that, in particular because some people would really like to introduce a single tool for both of these categories. But I don't believe in silver bullets and prefer to use the right tool for the job. Neither Microsoft's SharePoint nor Atlassian's Confluence does both well.
Let's start with unstructured documentation. This category covers the notes you make during development, usually tightly coupled to a user story in Atlassian's JIRA, our tool of choice for agile work item tracking. For those kinds of notes we still use Microsoft OneNote. A great example of where OneNote shines is a detailed checklist of things that still need to happen within a user story.
We generally start off with a couple of items, but this gets expanded heavily during development. Since we practice Test-Driven Development, we track the refactorings, the edge cases, and any leftovers here, including the blog post we need to write about some important change we introduced. In my team, having unchecked checkboxes means we still have work to do. So anything that pops up during discussions or sprint demos ends up here. Even if we decide not to do something, I expect the team to track it here. I really feel uncomfortable when open ends are discussed that are not in OneNote. And yes, I still frown upon people who use Notepad to track a couple of notes…
Discussions are also treated as unstructured documentation. For this we use Flowdock, a team-oriented threaded discussion platform, about which I blogged before. Obviously we prefer face-to-face discussions, but with an increasing group of professionals who are less vocal or working remotely, Flowdock has become invaluable to us. Even after a face-to-face discussion, we try to add a summary to the respective discussion flow, just to allow those who are out of office to catch up the next day. Flowdock is particularly strong because it can serve as an efficient replacement for those long email threads: anybody who has access can read along or decide to participate. You'd be surprised by the valuable contributions from people who would never speak up during a live conversation.
Related to these discussions, we have documentation with semi-structured characteristics: blog posts. Whenever a major architectural, product-wise, or process-related change is introduced, the team involved is supposed to write a blog post about it on a SharePoint blog. SharePoint's WYSIWYG editor leaves a lot to be desired and doesn't support Markdown; as soon as we find a way to migrate all those posts to Confluence, we will. Note that most of the people I know who take those posts seriously write them in OneNote first, so that the team can contribute and provide feedback. We then use Microsoft Word's blog editor to publish the final result to SharePoint.
Structured documentation ends up somewhere else. But within that category, we treat product-specific documentation differently from project and team documentation. The former is tightly coupled to the version or variant of the product and should follow its development lifecycle. Examples are installation guides, (web service and REST) API documentation, and release notes. Because of that, we really want people to be able to update that documentation as part of the product or architecture changes happening in the appropriate source control branch. So for that category we use Markdown as well, especially because it's a text-based markup in which merging concurrent changes is pretty painless. For editing we use either GitHub's built-in Markdown support or MarkdownPad 2.
Documentation not directly related to stories, products, or systems ends up as Confluence pages. Although Confluence can be a bit sluggish at times (I suspect Atlassian is still catching up with its popularity), its editing and collaboration features are marvelous. I love getting those emails with recent changes and such; they really allow me to see what's going on within the organization. Architectural or high-level PowerPoint presentations also end up here, but need a special document library to track them. I have the feeling Confluence's integration with Office is not as strong as SharePoint's, but it's acceptable for now. By the way, did I mention that we often create cardboard posters from those slides? We use them during discussions or as part of the introduction of new people, and we also put them on the walls for everyone to see. Those really spark off interesting discussions with new candidates, prospects, or existing clients…
So how do you approach software documentation? Let me know by commenting below or tweeting me at @ddoomen.