Thursday, December 24, 2015

10 more things professional software developers do

Since my last post in August about 12 things I believe professional software developers should do, I have been taking note of other behaviors, skills, and characteristics I like or miss in the people I run into while working on complex software projects. In addition to that, I received quite a few suggestions through Twitter. So let's discuss some more of these.

They don't write code for themselves
A very common response I get when I ask an experienced developer to refactor some long method is that they have no problem reading their own code. It gets even worse when you work with one who is interested in functional programming and who loves to write pages of nested lambdas calling other lambdas. Just search for some OWIN middleware code and you'll get my point. When I told a guy like that about my objections, he responded by saying that his code was meant to be maintained by the best only. I guess there might be some code out there that requires special skills to understand, and extracting that into a separate component is probably wise. But I still believe that high-quality code starts with accepting that you're not alone.

They do not cherry-pick the fun work. They do what needs to be done.
A very common situation is to see the technical lead in the team do all the fun and challenging work and delegate all the repetitive work to the 'common folks'. Although the tone of that phrase is wrong, it's not that surprising to see. The lead probably became the lead because he's proven to have great skills in adopting new technology within an existing code base. He's probably the best person to analyze the hardest problems as quickly as possible. But I don't think it's a surprise that this attitude is not the best for the team's morale, nor for knowledge sharing.

If I look at my current role as lead architect, I'm sure I've been guilty of this myself. But next to an occasional 'fun' spike to prepare some work for the team, I regularly fix quite a lot of hard bugs, both those found in the development pipeline and those reported by clients. These tend to be ignored by teams because they are difficult to reproduce or because they are not directly related to the work they are doing. If I believe these need structural fixes (rather than patches), and my team members are occupied, I pick them up myself, even if that keeps me from working on the fun stuff.

They work hard to share their knowledge with less experienced developers.
Some would say that knowledge is power, and at first it may sound like a good idea to be the one who has all the knowledge. But with great power comes great responsibility (yeah, I know, it's cheesy): the responsibility to share that knowledge with the less experienced developers. And I'm not talking about showing them how much you know. No, you need to make an effort to explain to them how you think, how you approach complex problems, where you get your information, how to structure their work, and who to talk to about what topics. I'm quite an impatient person, so I sometimes have to make an effort to really explain how I came to a certain conclusion. Being able to do that makes a world of difference. It's not the specific knowledge at that point in time that counts for them; it's the process of how you got there.

They can put themselves in the shoes of less experienced developers.
This is something different than sharing your knowledge. The point here is to be able to make decisions that take into account the lack of experience of your fellow developers. Just assuming that they will understand is not enough. A common mistake, for example, is to overwhelm them with technobabble. Even if they do appear to understand what you're saying, try to get some confirmation by asking the right questions. They might be afraid to tell you they don't know something, or they might just be overconfident. But don't ask questions like "Do you understand what I mean?". Instead, ask questions like "Is there any part of what we've just discussed that requires more clarification?" or "Did I miss anything?". These are open questions that might put them at ease. And once again (I'm now talking to myself), be patient…

They give credit to those who deserve it
In my opinion, one of the most annoying habits a software developer, architect or anybody else in our profession can have is to pretend he or she came up with an idea themselves. If I post an interesting article on Twitter, I'll try to include the Twitter handle of the author. If I got that article through somebody I know, I include a "via @whoever". This is my way of saying thank you to the author or referrer. I do the same in my blog posts. For instance, I learned a lot about OWIN from my colleague Damian Hickey. Similarly, Yves Reynhout has been my go-to guy on Event Sourcing and Domain-Driven Design. Even the reboot of my blog was inspired by the book Soft Skills by John Sonmez. Within the scope of a client, I try to give credit to individual developers in our internal blogs or FlowDock discussions. And that is particularly important for the confidence of less experienced developers. Being appreciated for an idea or an accomplishment can make a world of difference.

They adhere to coding conventions
It should not come as a surprise that I'm a strong advocate of coding guidelines (or standards if you will). I've been involved in these for years, and in the light of the points I've discussed above, I still believe they are very valuable. They don't exist to annoy developers or restrict their creativity. Instead, they are there to help them prevent common mistakes, promote object-oriented development, and ensure a code base with consistent naming conventions and layout. Some architects believe that each component or project should be able to define its own conventions, but I don't agree with that. In reality, the same people move from one project to another, so consistency between different code bases can be very helpful. And between the two of us, I'm pretty sure they only disagree with me because they want to use their own conventions…

They know that not every problem is a nail if they have a hammer
It's incredible how often you read about some new technique, framework or library that is supposed to solve all your problems. I still remember when KnockoutJS was introduced. Then AngularJS came along as the ultimate client-side framework. And now we have Facebook's React. No single tool, framework or library will solve all your problems, nor will it solve a specific problem in every situation. Exceptions such as Git or OWIN exist and they have had a profound effect on the way we build software, but be realistic. Just make sure you understand the added value of such a solution, e.g. the context it applies to, the situations where it is a no-go, and keep an eye open for alternatives.

In general, you may notice a pattern whenever something new pops up. At first, you treat it as just another solution to the same problem, and kind of refuse to use it or see its added value. Then you start to get excited about it, you notice you're telling everybody about it, and it turns into that well-known hammer. In the next phase, all problems become nails for your shiny new hammer. You try to use it everywhere, simply because you haven't figured out the limits of the technology yet. But there is light at the end of that tunnel. Eventually, you learn where and when that new thing is the right thing to use, and when it's not. It becomes just another tool in your toolbox. This is where you become effective. I'm pretty sure this pattern has been coined by somebody, but I can't recall that person's name anymore. If you do, let me know.

They are not afraid to share their accomplishments
So you've been working on some awesome project for a couple of months and it's supposed to be a next-level foundation for some other project or product. Now is the time to share your work and do a bit of (internal) marketing. Being able to effectively 'talk' about your accomplishment is just as important as building the thing in the first place. The worst thing that could happen is that your beautiful shiny new toy is not going to be used by anybody, simply because you failed to create awareness. And you don't necessarily need to do a formal presentation to a large audience. Instead, you could ask for feedback in some kind of demo session. Or you could write a couple of (internal) blog posts about it, including a similar invitation. Heck, you could even ask somebody else in your team to do that part of the work for you. The point is, sharing your work is a crucial part of your profession.

They refrain from criticizing the status quo until they understand its nature and history
One of the biggest mistakes I think I've made is to join an organization and criticize the way they did their work, or built something in a certain way, without really, really understanding how they ended up that way. Since then, I've learned that people generally try to do the best they can under the circumstances they're in. So if they made decisions that you disagree with, consider that it might be more than just a lack of experience. I didn't realize this until I was the one being criticized (or to be more specific, my work was, not me personally). In some cases, your criticism might be correct and changes are needed. But in a lot of other cases, they might have made plans to take the right actions, but those got side-tracked by project or commercial pressure. Even worse, they might have gotten overruled by management that quite often doesn't really understand why software development involves more than just adding functionality.

They think for themselves
A blog post about how you should behave that tells you to think for yourself. Ironic, isn't it? But I concluded this post with that on purpose. There are so many people in our profession who tell you how to behave, how to code, how to design, what practices to use, or what technology to use. So even though I really believe the 10 things mentioned here represent the habits of highly effective professional software developers, it's up to you to decide whether or not to agree. I just hope I made you (re)think about your own habits and how you participate in your profession.

So what do you think? Does this resonate with your opinions as a professional software developer? Did I miss anything? Do you object to anything I said? Let me know by commenting below. And follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.

Sunday, December 13, 2015

Why I am abandoning GitFlow

Now that Fluent Assertions is about to cross the magic number of 1 million downloads, and the library is quite feature complete, it is time to rethink the release strategy. Since its inception we've always used a separate branch for working on future features and improvements, and then used dogfooding before the final release was done. After we switched to Git, GitHub, and PSake, we automated the entire development pipeline and embraced Vincent Driessen's GitFlow as a formal release strategy. In particular, we used the develop branch for upcoming features, a release branch for dogfooding or beta releases, and the master branch to represent what's on NuGet.




Technically, shipping a new version requires nothing more than merging a pull request and tagging a commit on the master branch. Versioning, packaging and everything else that is needed to build a NuGet package is completely automated. And there are more than enough unit tests to guarantee the quality, so dogfooding is an exception these days. The only thing left is to create some release notes, but even that can be automated these days.

The fact of the matter is that new releases of Fluent Assertions aren't as massive as they used to be. We still get feature requests and fix the occasional bug obviously, but they are relatively small and have low impact. For the same reason, we're not working on the project as actively as we used to, so collecting all improvements until it's time for a major release doesn't make sense anymore.

So as of today, we'll be shipping releases of Fluent Assertions much more often, albeit with far fewer changes than prior releases. In essence, we will be practicing a form of GitHub Flow (but without the deployment part). We'll still use Semantic Versioning, so you will be able to tell the impact of a release from its version number.

So what do you think? Do you agree with my decision? Let me know by participating in the Disqussions below. And please follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.

Sunday, November 22, 2015

An opinionated definition of a unit test

During the same C# code reviews that triggered last week's blog post about writing great unit tests, another discussion tends to pop up, particularly with new joiners (both experienced and junior):

"What's the difference between a unit test and an integration test?"

Well, in my opinion (which is heavily influenced by The Art of Unit Testing), a unit test has the following characteristics:

  • Fast (as in a couple of milliseconds)
  • Runs in-memory
  • Has no dependencies on external I/O-bound resources such as networks, configuration files, databases and such
  • Can be run an infinite number of times without failing
  • Can be run in any order with the other tests
  • Doesn't cause any side-effects that might affect other tests or require manual clean-up
  • Covers one or more closely related classes

If a unit test does not meet ALL of these characteristics, it's an integration test.

But what about subcutaneous tests, automated tests that cover everything from just below the user interface layer (which is usually technology-specific) all the way to the database? The goal of such a test is to cover as much as possible without relying on front-end technologies. So the question whether it's a unit test is kind of irrelevant. It's more useful to discuss whether or not to include them in a CI build of some sort. And that all depends on how well they meet the earlier mentioned criteria.

And what about tests that employ temporary in-memory databases such as SQLite or RavenDB? Some would argue that the mere fact they rely on a database disqualifies them as unit tests. However, in our current project, we have hundreds of these tests. They run fast, cause no side-effects and don't have any external dependencies. So yes, I usually treat these as unit tests as well.

Another example in the grey area of unit tests is a test that uses OWIN to test the entire pipeline from the HTTP API all the way to the database level. In most cases, we'll use the same approach as for subcutaneous tests and use an embedded SQLite or RavenDB database. Technically they meet most of the characteristics of a unit test and we always run them as part of the CI build. However, whether I call them unit or integration tests depends highly on the scope of the test. If the purpose is to just test an HTTP API without involving the other layers, I tend to call them unit tests. Which brings me to the topic of the 'unit' in unit test.

Of all the things you need to think of while writing automated tests or practicing Test-Driven Development, defining the scope of the unit you're testing is the hardest one. I still think Jeremy D. Miller's Third Law of TDD, in which he advises to test small before you test big, makes sense. But small doesn't mean a single class. Instead, I recommend you evaluate the collaboration between a couple of closely related classes and consider which of these change together when requirements change. Those might be a good candidate for the unit in a unit test. At the same time, having a couple of end-to-end integration tests is just as useful. Just don't follow any naïve advice in which you need to choose between one or the other. Focusing on unit tests only might bring you into a situation where it's difficult to refactor or redesign the internals of your codebase without breaking your tests. Similarly, focusing on integration tests only will most definitely bring you into debugger hell. Always choose the right tool for the right issue.

So what do you think? Does this definition make sense? If not, I'm very interested in hearing your thoughts. Let me know by participating in the Disqussions below. Oh and follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.

Wednesday, November 11, 2015

12 tips to write unit tests that don't cripple your codebase

Over the last months, I've been involved in more and more code reviews, mostly because we've increased the level of quality required for code changes to our code base. While doing that, I started to track my most frequently used review comments, intended to improve our unit tests.

  1. If the dependencies of your subject are not important for a particular test, use a Test Data Builder to build the subject-under-test (SUT). However, always have at least one test that clearly shows those dependencies. Hiding them increases the chance your subject ends up with more dependencies than what's good for you.
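For those unfamiliar with the pattern, a minimal test data builder might look something like this (the Equipment type and its default values are made up for illustration):

```csharp
// Illustrative domain class; in practice the subject comes from the code base.
public class Equipment
{
    public Equipment(string code, string owner)
    {
        Code = code;
        Owner = owner;
    }

    public string Code { get; private set; }
    public string Owner { get; private set; }
}

// A minimal test data builder: sensible defaults for everything, so a test
// only has to specify the aspects it actually cares about.
public class EquipmentBuilder
{
    private string code = "TheCode";
    private string owner = "TheOwner";

    public EquipmentBuilder WithOwner(string owner)
    {
        this.owner = owner;
        return this;
    }

    public Equipment Build()
    {
        return new Equipment(code, owner);
    }
}
```

A test that doesn't care about ownership just calls new EquipmentBuilder().Build(), whereas a test that does can chain .WithOwner(...) and thereby make that one dependency explicit.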

  2. Don't repeat the action you're executing against the SUT in the name of the test class. For instance, tests for a query handler should not start with When_executing_…. That's superfluous information that does not help set the purpose of the test apart.

  3. Be very conservative in applying the typical Don't Repeat Yourself (DRY) principle to unit tests. It tends to obscure the cause and effect, especially if you're moving stuff into some kind of base class (or worse, a hierarchy of base classes).

  4. Don't introduce constants and such at the top of the test, unless you're dealing with a magic number. For example, I often use literals like "TheIdentifier" and "OtherIdentifier" to set them apart and communicate intent. Even numeric values like 1123 don't need constants, even if you use them multiple times; it's quite obviously not just a number. But numbers like 1 and 0 don't say much, and usually benefit from a well-named constant.

  5. Hide everything that is not important to see in that particular test. In other words, if it's not important for the test, it's very important NOT to show it. Test Data Builders or a simple With extension method can help hide the unimportant. But again, be careful not to hide design errors by adding intelligence to the test data builders. That's the software equivalent of sweeping the dirt under the carpet.

  6. Ensure that the cause and effect of the test is crystal clear. Nothing is more annoying than a failing test that makes no sense at all, neither by looking at the name, nor the implementation. If you're practicing Test Driven Development well, the tests can serve as documentation for your API. So make sure your tests read like a book.

  7. If the test fails, ensure that the failure message is so clear that no debugging sessions are necessary. Using a proper assertion library is not a luxury anymore. One note though. If you love FakeItEasy like I do, don't use constructions like this:

    // SomeCommand is a stand-in for the actual command type
    A.CallTo(() => commandService.Execute(A<SomeCommand>.That.Matches(cmd =>
        (cmd.Site == new SiteName(equipment.Owner.ToString())) &&
        (cmd.Identity == equipment.Code))))
        .MustHaveHappened();


    If the Site and Identity properties of the command contain anything but the expected values, you'll need a debugger to figure out what happened.
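    An alternative that does produce useful failure messages is to capture the command and assert each property on its own line afterwards. A sketch with a hand-rolled test double (all type names here are made up; FakeItEasy's Invokes can achieve the same):

    ```csharp
    // Made-up command and service types, mirroring the shape of the example above.
    public class SomeCommand
    {
        public string Site { get; set; }
        public string Identity { get; set; }
    }

    public interface ICommandService
    {
        void Execute(SomeCommand command);
    }

    // A test double that simply records the last command it received, so the
    // test can assert on each property individually afterwards.
    public class RecordingCommandService : ICommandService
    {
        public SomeCommand LastCommand { get; private set; }

        public void Execute(SomeCommand command)
        {
            LastCommand = command;
        }
    }
    ```

    After the act phase, asserting LastCommand.Site and LastCommand.Identity separately means a failure tells you exactly which value was wrong, and what it actually was.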

  8. Don't stick to a single style. I generally make a distinction between tests that do simple state testing and those that are more orchestration-oriented. For the former, I prefer the AAA syntax, whereas the latter is more suitable for a BDD-style test, e.g. using Chill or something else. But be consistent. Don't mix more than two styles in a single code base (or git repository).

  9. Whether to test small or test big is a topic that is a source of heated debate. I see value in both, so depending on the thing I'm building you might see me start off with an end-to-end integration test and then drill down into the code base to add more fine-grained unit tests. Reversing this process is perfectly fine as well. However, if you don't start off with such an end-to-end test, you might discover that you didn't design for it, making it difficult to add one later on.

  10. If you need to verify that a certain string value or exception message matches the expectation, never verify the exact message. Use wildcards to verify only the specific parts that are relevant to the expected behavior. It will save you from unexpectedly failing tests if you decide to refine the text involved.
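    Fluent Assertions supports this through wildcards in WithMessage, but the idea is library-agnostic. A minimal sketch of such a wildcard check (the helper name is mine):

    ```csharp
    using System.Text.RegularExpressions;

    public static class WildcardMatcher
    {
        // Translates a simple '*'-style pattern into an anchored regular
        // expression, so a test can verify only the relevant fragment of a
        // message and ignore the surrounding wording.
        public static bool Matches(string actual, string pattern)
        {
            string regex = "^" + Regex.Escape(pattern).Replace(@"\*", ".*") + "$";
            return Regex.IsMatch(actual, regex, RegexOptions.Singleline);
        }
    }
    ```

    Asserting against "*could not be found*" rather than the full message keeps the test green when you later reword the rest of the text.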

  11. Don't refer to technical names such as the names of methods, variables and UI elements in the name of a test. In my opinion, the name of a test should explain the functional scenario, whereas the cause and effect should clearly illustrate the technicalities of that scenario. If some class or member is renamed, it shouldn't affect the functional name of the test.

  12. Be careful with libraries like AutoFixture. They have value, but using them for everything increases the chance your test starts to fail on something that is irrelevant for that specific test. Moreover, they usually have a negative impact on performance. And again, if the value of a certain object is important to understand the test scenario, it is VERY important to show that.

As usual, these opinions are mine, so feel free to comment below. Oh and follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.

Sunday, October 25, 2015

9 simple practices for writing better object-oriented code

Consider a fantasy game that must track a collection of items, each having a certain amount of quality (or value) that increases or decreases after time passes. This collection contains the following six items:


  •  +5 Dexterity Vest
  • Aged Brie
  • Elixir of the Mongoose
  • Sulfuras, Hand of Ragnaros
  • Backstage passes to a TAFKAL80ETC concert
  • Conjured Mana Cake


How much each item's quality increases or decreases over time depends on an intricate algorithm that involves the quality and sell-in (days) properties of each item. Fortunately, a prior C# developer encoded this for us. Let's see what this algorithm looks like:

private static void UpdateQuality(Item item)
{
    if (item.Name != "Aged Brie" && item.Name != "Backstage passes to a TAFKAL80ETC concert")
    {
        if (item.Quality > 0)
        {
            if (item.Name != "Sulfuras, Hand of Ragnaros")
            {
                item.Quality = item.Quality - 1;
            }
        }
    }
    else
    {
        if (item.Quality < 50)
        {
            item.Quality = item.Quality + 1;

            if (item.Name.Equals("Backstage passes to a TAFKAL80ETC concert"))
            {
                if (item.SellIn < 11)
                {
                    if (item.Quality < 50)
                    {
                        item.Quality = item.Quality + 1;
                    }
                }

                if (item.SellIn < 6)
                {
                    if (item.Quality < 50)
                    {
                        item.Quality = item.Quality + 1;
                    }
                }
            }
        }
    }

    if (!item.Name.Equals("Sulfuras, Hand of Ragnaros"))
    {
        item.SellIn = item.SellIn - 1;
    }

    if (item.SellIn < 0)
    {
        if (item.Name != "Aged Brie")
        {
            if (item.Name != "Backstage passes to a TAFKAL80ETC concert")
            {
                if (item.Quality > 0)
                {
                    if (item.Name != "Sulfuras, Hand of Ragnaros")
                    {
                        item.Quality = item.Quality - 1;
                    }
                }
            }
            else
            {
                item.Quality = item.Quality - item.Quality;
            }
        }
        else
        {
            if (item.Quality < 50)
            {
                item.Quality = item.Quality + 1;
            }
        }
    }
}

So? What do you think? If you want, you can find the full sources of this version here. Is it readable enough for you? I hope not, because that would render this entire article useless. So let's use this opportunity to introduce the first practice: use only one level of indentation per method. In other words, a method can have an if statement as long as it is not nested in another construct, other than the class member itself. Obviously the code snippet above violates this practice big time, so let's rewrite the algorithm.

private static void UpdateQuality(Item item)
{
    if ((item.Name != "Aged Brie") && (item.Name != "Backstage passes to a TAFKAL80ETC concert"))
    {
        ReduceQualityUnlessNameIsSulfuras(item);
    }
    else
    {
        HandleInsufficientQuality(item);
    }

    if (!item.Name.Equals("Sulfuras, Hand of Ragnaros"))
    {
        item.SellIn = item.SellIn - 1;
    }

    if (item.SellIn < 0)
    {
        HandleInsufficientSellIn(item);
    }
}

private static void ReduceQualityUnlessNameIsSulfuras(Item item)
{
    if ((item.Quality > 0) && (item.Name != "Sulfuras, Hand of Ragnaros"))
    {
        item.Quality = item.Quality - 1;
    }
}

private static void HandleInsufficientQuality(Item item)
{
    if (item.Quality < 50)
    {
        item.Quality = item.Quality + 1;

        HandleBackstagePasses(item);
    }
}

private static void HandleBackstagePasses(Item item)
{
    if (item.Name.Equals("Backstage passes to a TAFKAL80ETC concert"))
    {
        UpdateQualityBasedOnQualityAndSellIn(item);
    }
}

private static void UpdateQualityBasedOnQualityAndSellIn(Item item)
{
    if ((item.SellIn < 11) && (item.Quality < 50))
    {
        item.Quality = item.Quality + 1;
    }

    if ((item.SellIn < 6) && (item.Quality < 50))
    {
        item.Quality = item.Quality + 1;
    }
}

private static void HandleInsufficientSellIn(Item item)
{
    if (item.Name != "Aged Brie")
    {
        HandleItemsThatAreNotAgedBrie(item);
    }
    else
    {
        HandleOtherItems(item);
    }
}

private static void HandleItemsThatAreNotAgedBrie(Item item)
{
    if (item.Name != "Backstage passes to a TAFKAL80ETC concert")
    {
        IfNotSulfurasAndQualityIsSufficient(item);
    }
    else
    {
        item.Quality = item.Quality - item.Quality;
    }
}

private static void IfNotSulfurasAndQualityIsSufficient(Item item)
{
    if ((item.Quality > 0) && (item.Name != "Sulfuras, Hand of Ragnaros"))
    {
        item.Quality = item.Quality - 1;
    }
}

private static void HandleOtherItems(Item item)
{
    if (item.Quality < 50)
    {
        item.Quality = item.Quality + 1;
    }
}

I tried to replace all the nested constructs with a simple Extract Method refactoring. Did it help? I suspect you've discovered by now that backstage passes increase in value the closer you get to their end-of-life. You may also have noticed that Sulfuras never loses its value and that the value of an item never exceeds 50. But we can do better. Let's introduce the practices Don't Use The Else Keyword and Use Only One Dot Per Line.

Most of the dots in the code snippet above are caused by references to the item variable, which is a sign of Feature Envy. Similarly, you could say most of the elses are caused by the procedural nature of this code example. And what better solution to this than introducing a class per item? This is the revised implementation for the Elixir of the Mongoose item.

public class ElixirOfTheMongoose : Item
{
    public ElixirOfTheMongoose(int sellIn, int quality)
        : base("Elixir of the Mongoose", sellIn, quality)
    {
    }

    public override void UpdateQuality()
    {
        if (Quality > 0)
        {
            Quality = Quality - 1;
        }

        SellIn = SellIn - 1;

        if (SellIn < 0 && Quality > 0)
        {
            Quality = Quality - 1;
        }
    }
}

And now the algorithm is clear; each day the quality decreases gradually until the end-of-life has been reached, after which it decreases twice as fast. You could say that these little practices force you to consider a more object-oriented design, don't you agree? But if you take a closer look you'll notice something else. Apparently, the quality can never become negative.

You could have solved this by using an unsigned int, but by applying the Wrap All Primitives and Strings practice you can do better. By wrapping the two numeric properties of the Item base-class in classes with value-type semantics such as overloads for Equals, GetHashCode, various equality operators and by adding intention revealing members like Decrease, IsOverdue and Decrement, you can rewrite the UpdateQuality method as:

public override void UpdateQuality()
{
    Quality = Quality.Decrease();

    SellInDays = SellInDays.Decrement();

    if (SellInDays.IsOverdue)
    {
        Quality = Quality.Decrease();
    }
}
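For completeness, such a Quality type might look something like this. The member names follow the article, but the exact implementation is my own guess:

```csharp
using System;

public struct Quality : IEquatable<Quality>
{
    private const int MaxValue = 50;
    private readonly int value;

    public Quality(int value)
    {
        // Clamp to the domain rules: never negative, never above 50.
        this.value = Math.Max(0, Math.Min(MaxValue, value));
    }

    public Quality Decrease()
    {
        return new Quality(value - 1);
    }

    public Quality Increase()
    {
        return new Quality(value + 1);
    }

    public bool Equals(Quality other)
    {
        return value == other.value;
    }

    public override bool Equals(object obj)
    {
        return (obj is Quality) && Equals((Quality)obj);
    }

    public override int GetHashCode()
    {
        return value;
    }

    public static bool operator ==(Quality left, Quality right)
    {
        return left.Equals(right);
    }

    public static bool operator !=(Quality left, Quality right)
    {
        return !left.Equals(right);
    }
}
```

Notice how the clamping logic now lives in one place, so no caller can ever produce a negative quality or push it beyond 50.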

We're left with four lines of code. Few enough to understand the algorithm almost instantaneously. But we're not done yet. Suppose we want to extract some characteristics from a collection of items, such as the highest-valued item as well as the items that are overdue. We could take that List<Item> from the GildedRose class and do the following.

Item highestValueItem = items.OrderBy(i => i.Quality).Last();
IEnumerable<Item> itemsOverdue = items.Where(i => i.SellInDays.IsOverdue);

But then we're not only violating Use Only One Dot Per Line again. The mere fact that we use a simple List is a violation of another practice, the one named First Class Collections. And now that I think of it, what would one call a collection of items that need to be sold? Ah, what about an inventory? So let's create the Inventory class.

public class Inventory : IEnumerable<Item>
{
    private readonly HashSet<Item> items = new HashSet<Item>();

    public Item HighestValued
    {
        get { return items.OrderBy(i => i.Quality).Last(); }
    }

    public IEnumerable<Item> Overdue
    {
        get { return items.Where(i => i.SellInDays.IsOverdue).ToArray(); }
    }

    public IEnumerator<Item> GetEnumerator()
    {
        return items.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public void Add(Item item)
    {
        items.Add(item);
    }

    public void HandleDayChanges(int nrOfDays)
    {
        for (int i = 0; i < nrOfDays; i++)
        {
            foreach (var item in items)
            {
                item.HandleDayChange();
            }
        }
    }
}

Which means we can rewrite the original two calls like this:

Item highestValueItem = inventory.HighestValued;
IEnumerable<Item> itemsOverdue = inventory.Overdue;

Wrapping simple collections in first-class types is actually a very smart thing to do. How many times have your classes exploded in size with private methods whose only task was to modify or query your private collection field? If you had applied the practices I mentioned up to now, I'm pretty sure you agree you would have ended up with a better distribution of responsibilities. It might all feel like overkill, but since I've learned about these practices, they have really helped me create better object-oriented code.

Another practice, aptly called Don't Abbreviate, tells us not to abbreviate the names of classes, members and variables. This practice can be confusing, because the question you should ask yourself is why you want to abbreviate names in the first place. I guess it's either because you couldn't find a shorter name to describe your class, or because you are trying to avoid repeating the same words over and over. The former usually happens to me if I'm trying to name a class that has too many responsibilities (hence the Single Responsibility Principle in SOLID). The latter could be, and usually is, caused by not following First Class Collections and Wrap All Primitives and Strings.

It's awesome to realize how these practices work together so well to accomplish another practice named Keep All Entities Small. Of course, small is not an objective measurement, but I highly doubt anyone would think the code example I started this post with is small. And that's the purpose of this practice. Whenever your class turns into anything but small, start applying the previous practices to make it small again. One heuristic that can help you detect this as early as possible is to practice No Classes With More Than Two Instance Variables. I know that is going to be difficult to meet. Hey, you might even call it unrealistic and overkill. Regardless, I challenge you to consider the other practices whenever you pass the first two instance fields.

Now consider the following implementation of the Item class.

using System;

public abstract class Item
{
    private readonly string name;

    protected Item(string name, SellInDays sellIn, Quality quality)
    {
        this.name = name;
        Quality = quality;
        SellInDays = sellIn;
    }

    public SellInDays SellInDays { get; set; }

    public Quality Quality { get; set; }

    public string Name
    {
        get { return name; }
    }

    public int DaysOverdue
    {
        get { return SellInDays.DaysOverdue; }
    }

    public abstract void HandleDayChange();

    public override string ToString()
    {
        return name;
    }
}

Technically, this class declares only a single field (instance variable in C#), but the properties used here are just compiler-generated getters and setters around hidden backing fields. So in reality this class has three instance variables and exceeds the two-instance-variable practice by one. But do we need these getters and setters at all? If you look closely at the code, you'll see that those properties are either used for sorting or filtering, as illustrated in the previous practice, or to compare items in unit tests. Quite often, properties open up the possibility for calling code to apply algorithms that really belong inside the class that exposes those properties. So you could say that they are generally not a favorable solution, hence the No Getters/Setters/Properties practice.

In our case, this theory actually holds. For instance, two items are semantically equivalent if their Quality, SellInDays and Name are the same, which is what the unit tests rely on. We can easily replace that behavior with a proper implementation of Equals. Likewise, sorting items by quality can be replaced by exposing an implementation of IComparer<Item>. The only property we can't really remove is DaysOverdue. And I don't mind too much either. Being able to ask an item how many days it's overdue sounds like something you should be able to ask in this domain. Anyway, this is the current implementation, although I discover new refactoring opportunities every time I look at the code.

public abstract class Item
{
    private readonly string name;
    private Days shelfLife;
    private Quality quality;

    protected Item(string name, Days shelfLife, Quality quality)
    {
        this.name = name;
        this.quality = quality;
        this.shelfLife = shelfLife;
    }

    public bool IsExpired
    {
        get { return shelfLife < new Days(0); }
    }

    public Days DaysOverdue
    {
        get { return (shelfLife > new Days(0)) ? new Days(0) : -shelfLife; }
    }

    public static IComparer<Item> ByQualityComparer
    {
        get { return new QualityComparer(); }
    }

    public abstract void OnDayHasPassed();

    public override string ToString()
    {
        return string.Format("{0} (quality {1}, sell in {2} days)", name, quality, shelfLife);
    }

    protected bool Equals(Item other)
    {
        return shelfLife.Equals(other.shelfLife) && string.Equals(name, other.name) && quality.Equals(other.quality);
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj))
        {
            return false;
        }

        if (ReferenceEquals(this, obj))
        {
            return true;
        }

        if (!(obj is Item))
        {
            return false;
        }

        return Equals((Item)obj);
    }

    public override int GetHashCode()
    {
        unchecked
        {
            int hashCode = shelfLife.GetHashCode();
            hashCode = (hashCode * 397) ^ name.GetHashCode();
            hashCode = (hashCode * 397) ^ quality.GetHashCode();
            return hashCode;
        }
    }

    protected void IncreaseQuality()
    {
        quality = quality.Increase();
    }

    protected void DecreaseQuality()
    {
        quality = quality.Decrease();
    }

    protected void Devaluate()
    {
        quality = new Quality(0);
    }

    protected void ReduceShelfLife()
    {
        shelfLife = shelfLife.ReduceByOneDay();
    }

    protected bool IsDueWithin(Days days)
    {
        return shelfLife < days;
    }

    private class QualityComparer : IComparer<Item>
    {
        public int Compare(Item x, Item y)
        {
            return x.quality.CompareTo(y.quality);
        }
    }
}
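For completeness, here is what the calling side looks like once Equals and the exposed comparer replace the public properties. SimpleItem is a cut-down, hypothetical stand-in for the Item class above (which is abstract and depends on the Days and Quality types), so the names are mine; only the pattern is the point.

```csharp
using System.Collections.Generic;

// Hypothetical, cut-down version of Item: no public getters, but Equals
// for semantic equivalence and an exposed IComparer<T> for sorting.
public class SimpleItem
{
    private readonly string name;
    private readonly int quality;

    public SimpleItem(string name, int quality)
    {
        this.name = name;
        this.quality = quality;
    }

    // Callers sort through this comparer instead of reading a Quality property.
    public static IComparer<SimpleItem> ByQualityComparer
    {
        get { return Comparer<SimpleItem>.Create((x, y) => x.quality.CompareTo(y.quality)); }
    }

    public override bool Equals(object obj)
    {
        var other = obj as SimpleItem;
        return other != null && name == other.name && quality == other.quality;
    }

    public override int GetHashCode()
    {
        return name.GetHashCode() ^ quality;
    }

    public override string ToString()
    {
        return name;
    }
}
```

A unit test can now compare two items with plain equality, and sort a List<SimpleItem> by calling Sort(SimpleItem.ByQualityComparer), without ever touching the items' internals.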


So what do you think? Do these practices, collectively called Object Calisthenics, make sense? Did I motivate you to try them right away? If so, tell me your success stories by participating in the Disqussions below. And please follow me at @ddoomen to get regular updates on my everlasting quest for better solutions. 

Friday, October 09, 2015

The real challenges of building the right thing right

Suppose you're in the following situation.

  • You and your team are supposed to start working on some important architectural changes that were long due.
  • Your team is then asked by your manager to help build an important feature that requires the unique skills of your team, and which you think will take 5-6 weeks to build plus two weeks to support quality assurance. That feature is the final check in the box for the company to send a big fat invoice.
  • Accepting the task will mean that those architectural improvements will get postponed even more.

Now suppose that, understanding the need for this and knowing management has no other options, you agree and commit to eight weeks. Over those six weeks of development, you make a lot of little technical decisions, some good, some bad. For instance, introducing a couple of HTTP APIs on top of ASP.NET controllers rather than using a proper OWIN pipeline. Or struggling with a client-side control and the right framework to go with it, resulting in JavaScript code that's difficult to test with Jasmine.

After delivering the feature, it's time to focus on those architectural improvements again. But wait, shouldn't we first repair the bad decisions we made during the last six weeks? You wonder: what's more important? Well, those architectural improvements will open up all kinds of significant opportunities to improve the performance and scalability of the system. On the other hand, using an OWIN pipeline for those HTTP APIs would serve as an example for other teams to build more efficient code. Then let's ignore that untested JavaScript for now. But you can't really do that either. You yourself have been instructing teams to design all JavaScript code in a testable fashion. "So what," you think. You had a deadline, didn't you? But do you remember what you told that team when they had the same excuse?

Although this is a bit of a contrived scenario, it's a situation I'm pretty sure you will recognize. So how would I deal with such a situation? Well, I'd probably skip the OWIN pipeline conversion. Although it's nice to have a reference example for the other developers, I might be able to pick it up as part of a future story. Or, if the opportunity arises, I might be able to convince another developer needing similar functionality to include it in his work. But refactoring the JavaScript code to make it testable is not something I would postpone. How can I ask other developers to follow those rules if I don't follow them myself? I really believe in practicing what you preach and building the right thing right. So only after I've finished that would I switch my focus to the architectural changes.

So what do you think? Would you follow the same line of thought as I would? Let me know by participating in the Disqussions below. Oh, and follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.