Sunday, November 06, 2016

The three mental modes of working with unit tests

The other day, while pairing up on some unit tests, I started to realize that I generally have three modes of looking at my unit tests.

The Writing Mode

While writing, I mostly focus on the mechanics of getting the test to pass. By then, I usually have a mental model and a particular scenario in mind, and my thoughts mostly focus on finding the most elegant syntax and structure to get my test from red to green. Since I already know the exact scenario, I don't pay too much attention to the name. If I'm really in the flow, the edge cases and alternative scenarios just pop into the back of my mind without me needing to really think about them. In this mode, I also spend a lot of thought on finding opportunities to refactor the test itself or the underlying constructs. For instance, is the scope of my test correct? Doesn't the subject-under-test have too many dependencies? Since I practice Test Driven Development, some of these refactoring opportunities surface quickly enough when my set-up code explodes, or when my test code doesn't communicate its intent anymore.
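To make that red-to-green step concrete, here is a hypothetical example using xUnit and Fluent Assertions (both assumed to be referenced via NuGet); the Order class and its API are invented purely for illustration.

```csharp
// Hypothetical example of a first red-to-green TDD step. The Order class,
// its API and the test scenario are illustrative assumptions, not code
// from the post. Assumes the xUnit and FluentAssertions NuGet packages.
using FluentAssertions;
using Xunit;

public class Order
{
    public decimal Total { get; private set; }

    public void AddItem(decimal price, int quantity) =>
        Total += price * quantity;
}

public class OrderSpecs
{
    [Fact]
    public void When_adding_an_item_it_should_increase_the_total()
    {
        // Arrange
        var order = new Order();

        // Act
        order.AddItem(price: 10m, quantity: 3);

        // Assert
        order.Total.Should().Be(30m);
    }
}
```

While in writing mode, a test like this immediately raises the follow-up scenarios mentioned above: what about a quantity of zero, or a negative price?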

The Review Mode

While reviewing somebody's pull request I switch to review mode, in which I use the unit tests to understand the scope, the responsibilities and the dependencies of a class or set of classes. To understand those responsibilities, I pay particular attention to the names of the tests, completely ignoring the implementation of the test itself. With the names as my only source of truth, I try to understand the observable behavior of the subject-under-test (SUT) under different scenarios. They should make me wonder about possible alternative scenarios or certain edge cases. In other words, they should make it possible for me to look at the code from a functional perspective. That doesn't mean they need to be understandable by business analysts or product owners, but they must help me understand the bigger picture.

Only when I'm satisfied that the developer considered all the possible scenarios do I start to look at the implementation details of particular test cases. What dependencies does the SUT have? Are there any I didn't expect? If so, did I understand the test case correctly, or is the test hiding important details? Are all the dependencies I did expect there? If not, where are they? Is everything I see important for understanding the test? If not, what aspects could be moved to a base class (for BDD-style tests), or is a Test Data Builder or Object Mother a better solution? Do all assertion statements make sense? Did the author use any constant values that are difficult to reason about? Is each test case testing a single thing? What if the test fails? Does it give the developer a proper message about what went wrong, functionally or technically? A proper assertion framework can help here, because what use is an error like "Expected true, but found false"?
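As a sketch of the Test Data Builder option mentioned above, consider something like the following; the Customer class and all its members are hypothetical, but the pattern is the point: the builder supplies sensible defaults so that a test only has to mention the details that actually matter for its scenario.

```csharp
// Hypothetical Test Data Builder. It hides irrelevant set-up behind
// defaults, so a test reads as "a preferred customer" rather than a wall
// of property assignments. All names here are illustrative.
public class Customer
{
    public string Name { get; set; } = "";
    public string Country { get; set; } = "";
    public bool IsPreferred { get; set; }
}

public class CustomerBuilder
{
    private string name = "Anonymous";
    private string country = "NL";
    private bool isPreferred;

    public CustomerBuilder Named(string value) { name = value; return this; }
    public CustomerBuilder From(string value) { country = value; return this; }
    public CustomerBuilder Preferred() { isPreferred = true; return this; }

    public Customer Build() => new Customer
    {
        Name = name,
        Country = country,
        IsPreferred = isPreferred
    };
}

// In a test, only the aspect relevant to the scenario is visible:
// var customer = new CustomerBuilder().Preferred().Build();
```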

The Analysis Mode

Now, suppose a test fails and I'm the one who needs to analyze the cause. In this debugging mode, I first need to understand what this test was supposed to verify. For this, I need a name that clearly explains the specifics of the test case on a functional level. Again, I won't let my thoughts be distracted by the implementation. The name should help me understand what the expected behavior is and help me make up my mind on whether that scenario makes sense at all. After I conclude that the test case indeed makes sense, I'll start studying the implementation to determine if the code really does what the test name suggests. Does it bring the context into the right state? Does it set up the dependencies correctly (either explicitly or through some kind of mocking framework)? Does it invoke the SUT using the right parameters? And does the assertion code expect something that makes sense to me, considering the initial state and the action performed? Only after I've confirmed that the implementation is correct is it time to launch a debugger.

I know the world is not perfect, but staying out of debugger hell should be a primary concern for the test writer. This is a difficult endeavour and requires the developer to ensure the intent of a unit test is crystal clear. Naming conventions, hiding the irrelevant stuff, and a clear cause and effect are paramount to prevent you from shooting yourself in the foot in the long run. If you're looking for tips to help you with this, consider reading my earlier post on writing maintainable unit tests.

So what do you think? Do you recognize yourself in these modes? What do you think is important to be successful in unit testing and/or Test Driven Development? I'd love to know what you think by commenting below. Oh, and follow me at @ddoomen to get regular updates on my everlasting quest for better tests.

Sunday, October 30, 2016

Principles for Successful Package Management

A couple of months ago I shared some tips & tricks to help you avoid ending up in NuGet dependency hell. As a big fan of the SOLID principles, I've always wondered why nobody thought of applying these principles at the package level. If SOLID can help you build cohesive, loosely coupled components which do one thing only and do it well, why can't we do the same thing at the package level? As it happens, my colleague Jonne enthusiastically referred me to the book Principles of Package Design by Matthias Noback. It's available from Leanpub and does exactly that, offering a couple of well-named guidelines inspired by SOLID that will help you design better packages for NuGet, npm or whatever package management solution you use.

The first half of the 268 pages provides an excellent refresher on the SOLID principles. He even does a decent job of explaining the inversion of control principle (although I would still refer to the original to really grasp that often misunderstood principle). After that he carefully dives into the subtleties of cohesion as a guiding principle before moving on to the actual package design principles. The examples are all in PHP (yeah, really), but the author clearly explains how they would apply to other platforms. Notice that this post is mostly an exercise for me to see if I got the principles right, so I would highly recommend buying the .epub, .mobi or PDF from Leanpub. It's only 25 USD and well worth your money. So let's briefly discuss the actual principles.

The Release/Reuse Equivalency Principle

IMHO, the first principle has a rather peculiar name. Considering its purpose, it could have been called the Ship a Great Package Principle. The gist of this principle is that you should not ship a package if you don't have the infrastructure in place to properly support it. This means that the package should follow some kind of clear (semantic) versioning strategy, have proper documentation, a well-defined license, proper release notes, and be covered by unit tests. The book goes to great lengths to help you with techniques and guidance on ensuring backwards compatibility. Considering how recent the book is and the fact that it mentions Semantic Versioning, I would have expected some coverage of GitFlow and GitHubFlow. Most of the stuff mentioned here should be obvious, but you'd be surprised how often I run into an unmaintainable and undocumented package.
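The gist of such a semantic versioning strategy (MAJOR.MINOR.PATCH, as the Semantic Versioning convention defines it) can be sketched in a few lines; the enum and helper below are my own illustration, not something from the book.

```csharp
// A minimal sketch of the Semantic Versioning bump rules:
// breaking change -> new major, new feature -> new minor, bug fix -> new patch.
// The ChangeKind enum and Bump method are hypothetical helpers.
using System;

public enum ChangeKind { Breaking, Feature, Fix }

public static class SemVer
{
    public static Version Bump(Version current, ChangeKind change) => change switch
    {
        ChangeKind.Breaking => new Version(current.Major + 1, 0, 0),
        ChangeKind.Feature  => new Version(current.Major, current.Minor + 1, 0),
        _                   => new Version(current.Major, current.Minor, current.Build + 1),
    };
}
```

So a breaking change against version 1.4.2 yields 2.0.0, which is exactly the signal a consumer of your package needs before upgrading.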

The Common Reuse Principle

The purpose of the second principle is much clearer. It states that classes and interfaces that are almost always used together should be packaged together. Consequently, classes and interfaces that don't meet that criterion don't have a place in that package. This has a couple of implications. Users of your package shouldn't need to take the entire package if they just need a couple of classes. Even worse, if they use a subset of the package's contents, they should not be confronted with additional package dependencies that have nothing to do with the original package. And if that specific package has a dependency, it should be an explicit dependency. A nice side effect of this principle is that it makes packages Open for Extension and Closed for Modification.

I've seen packages that don't seem to have any dependencies until you use certain classes that employ dynamic loading. NHibernate is a clear violator of this principle, in contrast to the well-defined purpose of the Owin NuGet package. My own open-source library, Fluent Assertions, also seems to comply. When a contributor proposed building a Json extension to my library, I offered to take in the code and ship the two NuGet packages from the same repository. So if somebody doesn't care about Json, they can use the core package only, without any unexpected dependencies on Newtonsoft.Json.
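In NuGet terms, that split boils down to the extension package declaring its dependencies explicitly while the core package declares none; the .nuspec fragment below is a hypothetical illustration of that idea (the version numbers are made up).

```xml
<!-- Hypothetical .nuspec fragment for a Json extension package: only
     consumers who opt into the extension get the Newtonsoft.Json
     dependency; the core package remains dependency-free. -->
<package>
  <metadata>
    <id>FluentAssertions.Json</id>
    <version>1.0.0</version>
    <dependencies>
      <dependency id="FluentAssertions" version="4.0.0" />
      <dependency id="Newtonsoft.Json" version="9.0.1" />
    </dependencies>
  </metadata>
</package>
```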

The Common Closure Principle

The third principle is another one that needs examples to really grasp its meaning. Even the definition doesn't help that much:

The classes in a package should be closed against the same kinds of changes. A change that affects a package affects all the classes in that package.

According to the many examples in the book, the idea is that packages should not require changes (and thus a new release) for unrelated changes. Any change should affect the smallest number of packages possible, preferably only one. Conversely, a change to a particular package is very likely to affect all classes in that package. If it only affects a small portion of the package, or it affects more than one package, chances are you have your boundaries wrong. Applying this principle might help you decide which class belongs in which package. Reflecting on Fluent Assertions again made me realize that even though I managed to follow the Common Reuse Principle, I can't release the core and Json packages independently. A fix in the Json package means that I also need to release the core package.

The Acyclic Dependencies Principle

For once, the fourth principle discussed in this book is well described by its definition:

The dependency structure between packages must be a directed acyclic graph, that is, there must be no cycles in the dependency structure.

In other words, your package should not depend on a package whose dependencies would eventually result in a cyclic dependency. At first thought, this looks like an open door. Of course you don't want a dependency like that! However, that cyclic dependency might not be visible at all. Maybe your dependency depends on something else that ultimately depends on a package hidden in the obscurity of all the other indirect dependencies. In such a case, the only way to detect it is to carefully analyze each dependency and create a visual dependency graph.
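That analysis is easy to automate. The sketch below detects a cycle in a package dependency graph using a plain depth-first search; the package names and the shape of the input are my own assumptions for illustration.

```csharp
// A minimal sketch of cycle detection over a package dependency graph.
// Each key maps a package to the packages it depends on directly.
using System.Collections.Generic;
using System.Linq;

public static class DependencyGraph
{
    public static bool HasCycle(Dictionary<string, string[]> dependsOn)
    {
        var visiting = new HashSet<string>();  // packages on the current DFS path
        var done = new HashSet<string>();      // packages fully explored

        bool Visit(string package)
        {
            if (visiting.Contains(package)) return true;   // back edge: a cycle
            if (done.Contains(package)) return false;

            visiting.Add(package);
            bool cycle = dependsOn.TryGetValue(package, out var deps) &&
                         deps.Any(Visit);
            visiting.Remove(package);
            done.Add(package);
            return cycle;
        }

        return dependsOn.Keys.Any(Visit);
    }
}
```

Feeding it a graph like A → B → C → A would report a cycle, even though none of the individual packages looks suspicious on its own.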

Another type of dependency that the book doesn't really cover is the diamond dependency (named for the shape of the dependency graph). Within the .NET realm this is quite a common thing. Just consider the enormous number of NuGet packages that depend on Newtonsoft's Json.NET. So for any non-trivial package, it's quite likely that more than one dependency eventually depends on that infamous Json library. Now consider what happens if those dependencies depend on different versions of it.
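The book doesn't cover it, but a common way to cope with that situation in the .NET world is an assembly binding redirect, which forces all consumers onto a single version at runtime. A hypothetical app.config fragment (the version range and target version are illustrative; the public key token is Newtonsoft.Json's well-known one):

```xml
<!-- Hypothetical app.config fragment: redirects every requested version of
     Newtonsoft.Json, whatever part of the diamond it comes from, to one
     version actually deployed with the application. -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json"
                          publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-9.0.0.0" newVersion="9.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

This only papers over the conflict, of course; it works as long as the redirected-to version is backwards compatible with what each dependency expects.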

The book offers a couple of in-depth approaches and solutions to get yourself out of this mess. Extracting an adapter or mediator interface to hide an external dependency behind is one. Using inversion of control so that your packages only depend on abstract constructs is another. Since the book is written by a PHP developer, it's no surprise that it doesn't talk about ILMerge or its open-source alternative ILRepack. Both are solutions that will merge an external .NET library into the main DLL of your own package. This essentially allows you to treat that dependency as internal code without any visible or invisible DLL dependencies. An alternative to merging your .NET libraries is to use a source-only NuGet package. This increasingly popular technique allows you to take a dependency on a NuGet package that only contains, surprise, source code that is compiled into your main package. LibLog, TinyIoc and even my own caching library FluidCaching use this approach. It greatly reduces the dependency chain of your package.
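The adapter approach the book suggests can be sketched in a few lines; all names below are hypothetical. The point is that the package's own code depends only on its own tiny abstraction, while the real logging (or serialization, or caching) library becomes an implementation detail the consumer plugs in.

```csharp
// A minimal sketch of hiding an external dependency behind an adapter.
// The package code (Importer) knows only ILogAdapter; a consumer could
// implement it on top of NLog, Serilog, etc. All names are illustrative.
using System.Collections.Generic;

public interface ILogAdapter
{
    void Info(string message);
}

// One trivial implementation, also handy in tests: it just records messages.
public class RecordingLogAdapter : ILogAdapter
{
    public List<string> Messages { get; } = new List<string>();

    public void Info(string message) => Messages.Add(message);
}

public class Importer
{
    private readonly ILogAdapter log;

    public Importer(ILogAdapter log) => this.log = log;

    public void Run() => log.Info("Import completed");
}
```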

The Stable Dependencies Principle

The name of the principle is quite self-explanatory, but the definition is even clearer.

The dependencies between packages in a design should be in the direction of the stability of the packages. A package should only depend upon packages that are more stable than it is.

In other words, you need to make sure you only depend on stable packages. The more stable your dependencies, the more stable your package is going to look to your consumers. Determining whether a package is stable isn't an exact science. You need to do a bit of digging for that. For instance, try to figure out how often a dependency has introduced breaking changes. And if it did, did it use Semantic Versioning to make that clear? How many other public packages depend on that package? The more dependents, the higher the chance that the package owners will try to honor the existing API contracts. And how many dependencies does that package have? The more dependencies, the higher the chance that some of those dependencies introduce breaking changes or instability. And finally, check out its code and judge how well that package follows the principles mentioned in this post. The book doesn't mention this, but my personal rule of thumb for deciding whether I will use a package as a dependency is to consider what would happen if the main author abandoned the project. The code should either be good enough for me to maintain it myself/ourselves, or the project should be backed by a large enough group of people to ensure continuity.

The Stable Abstractions Principle

Now if you understand (and agree with) the Stable Dependencies Principle, you'll most definitely understand and agree with the Stable Abstractions Principle. After all, what's more stable: an interface, an abstract type or a concrete implementation? An interface does not have any behavior that can change, so it is the most stable type you can depend on. That's why a well-designed library often uses interfaces to connect components and quite often provides you with an interface-only package. For the same reason, the Inversion of Control principle tries to nudge you in the same direction. In fact, in the .NET world even interfaces are sometimes frowned upon and replaced with old-fashioned delegate types. These represent a very tiny and very focused interface, so it doesn't get any more stable than that. And because of their compatibility with C#'s lambda syntax, you don't even need to use a mocking library.
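The delegate-instead-of-interface style can be sketched as follows; the domain and all names are hypothetical, but it shows why no mocking library is needed: a test simply passes a lambda.

```csharp
// A sketch of depending on a delegate rather than an interface: the
// contract is a single, very focused method signature. All names here
// are illustrative.
using System;

public delegate decimal ExchangeRateProvider(string fromCurrency, string toCurrency);

public class PriceCalculator
{
    private readonly ExchangeRateProvider getRate;

    public PriceCalculator(ExchangeRateProvider getRate) => this.getRate = getRate;

    public decimal ConvertPrice(decimal amount, string from, string to) =>
        amount * getRate(from, to);
}

// In a test, a lambda replaces the whole mock set-up:
// var calculator = new PriceCalculator((from, to) => 1.1m);
// calculator.ConvertPrice(100m, "EUR", "USD");  // 110.0
```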

So what about you?

The names of the principles are not always catchy and easy to remember, mostly because they use very similar wording, but the underlying philosophy makes a lot of sense to me. I've already started to re-evaluate the design decisions in my own projects. The only thing I was hoping to read more about is the explicit consequences of building a component or package as a library versus building it as a framework. This is something that heavily influences the way I'm building LiquidProjections, my next open-source project.

So what do you think? Do you see merit in these principles? Do they feel as helpful as the original SOLID principles? I'd love to know what you think by commenting below. Oh, and follow me at @ddoomen to get regular updates on my everlasting quest for better designs.

Thursday, October 06, 2016

The magic of keeping a band of developers together

As I work as a consultant for Aviva Solutions, and the nature of my job is to be involved in moderately long-running client projects, I don't get to the office that often. And if I do, it's on different days of the week. Over the last year or so, our locations in Koudekerk aan de Rijn and Eindhoven have grown by almost 10 new developers. Since we haven't had any company-wide meetings since May, it should not come as a surprise that I arrived at the office one day, only to conclude that I could not recall the names of those new guys. When our company was only 15 people strong, any new colleague joining was a big thing. And it still is a big thing; it's just becoming less visible. Sound familiar? It's even more embarrassing considering that I'm usually the one who creates the new GitHub and Flowdock accounts. But even if I do remember a new name, I often can't map it to somebody's face. That's why I ask all new employees to add a little picture of themselves to their Flowdock profile. In a way, I'm cheating a bit.

A similar challenge exists on the other side of the hiring spectrum. When somebody joins a company like Aviva, it's important for that person to feel valued and be able to identify with the company's culture. The only way to do that is to engage with your coworkers: finding out who knows what, understanding the chain of command (which we don't have), and discovering the areas where you can add value. The problem in 2016 is that competitors are looking for people as well, and recruiters are getting more aggressive by the day. So how do you keep a group of people like that together? Sure, you can offer them more money, give them expensive company cars, laptops and phones. But that's never really going to help. If they don't feel connected to the company, they might leave for the first deal they get.

So how do we stay connected? Well, twice a year, we have company-wide meetings, either at our HQ, our office in Eindhoven, a restaurant or some other venue. This usually involves a formal part where the two owners and the social committee share an update on sales, running projects, HR, the financial outlook and any upcoming social events. Then we have dinner, an occasional drink and some kind of activity (for those who want that). For example, when the Eindhoven office had just opened, the meet-up was organized at that office, and all co-workers were offered an overnight stay so they could have a fun night out in downtown Eindhoven. This not only allowed the people who joined the Eindhoven office to meet everybody else, but also ensured that everybody has seen the new office and knows where to find it. We really encourage people to work from a different office occasionally, whether you're involved in an internal project or working for a client.

We also regularly get together for pizza sessions where we exchange project experiences, explore new technologies and run trial talks for public events. These are very informal evenings where everybody can share their findings or have discussions, whether or not you have presentation skills, and whether you're a senior developer or just joined the company. Quite often, these evenings are visited by colleagues from other companies or people who just happen to have heard about the topic (we usually tweet and post on Facebook about them). Sometimes these events turn into something bigger, when we work together with the .NET community to organize public events.

I particularly like Flowdock as a low-threshold collaboration tool that is accessible as a web site, a desktop app or a smartphone app. We have different channels (or flows) for trolling around, getting technical support, or having deep discussions on technical topics. Next to that, all our projects have dedicated flows, so everybody can read along and learn about the daily trials and tribulations of their co-workers, even if you're stationed at a client's office. Flowdock is probably the most engaging platform I've come across. Neither Skype, Jabber nor any other platform has helped us so much to keep in touch with each other. It also allowed us to avoid those long email threads that nobody is waiting for. And since we heavily use GitHub for our projects' sources, we can directly see what's going on from inside Flowdock.

Now, all of this helps a lot to keep the group together, but the ultimate trick to turn this group into a team is to stuff the entire company in a plane and fly us to a warm place in Spain, a nice city like Prague or, like this year's 10-year-anniversary special edition trip, Ibiza. Yes, it sounds like pure luxury and spoiling your employees. And yes, we did have a lot of fun, took a trip on a catamaran and explored the island driving around in those old 2CVs. But for 12 people, this was the first time they joined us on our annual trip. It allowed them to get to know their co-workers, find people with similar interests, share some frustration about that last project, get advice on how to resolve a technical or other work-related challenge, or receive tips on advancing their careers. Some would even use the weekend to debate technical solutions or new technological hypes. Nonetheless, the point is that before that weekend they were just employees. After that weekend they were colleagues, and in some unique cases, friends. What more can you expect from a company?

Aviva Summer Event Ibiza

So what do you think? Are you a junior, medior or senior .NET or front-end developer? Did you just graduate from university or a polytechnic? Does a company like this appeal to you? Comment below, or even better, contact me by email, Twitter or phone to visit one of the upcoming events, or join us for a coffee and taste the unique atmosphere of an office full of passionate people…

Bonus Content: We’ve compiled a little compilation of our trip. Check it out on YouTube.