Jekyll2024-01-10T10:25:46+00:00https://www.continuousimprover.com/feed.xmlThe Continuous ImproverOn an everlasting quest for better solutionsDennis Doomendennis.doomen@avivasolutions.nl22 reasons to ditch Azure DevOps and switch to GitHub as soon as possible2024-01-07T00:00:00+00:002024-01-07T00:00:00+00:00https://www.continuousimprover.com/2024/01/github-vs-azdo<p>As an open-source maintainer for over 15 years, and the author of an <a href="https://fluentassertions.com/">open-source project</a> with over <a href="https://www.nuget.org/packages/FluentAssertions">300 million downloads on NuGet</a>, I like to think I know what it takes to have large numbers of people contribute to a code-base efficiently. Next to that, I’ve been a consultant for almost 27 years, helping organizations get the most out of modern software development efforts. As such, I regularly work with Azure DevOps (AZDO), GitHub and even BitBucket, and have been able to experience their differences first-hand. In this post, I’m going to give you 22 reasons why you should switch to GitHub for your source control as soon as possible.</p>
<h2 id="collaboration">Collaboration</h2>
<h3 id="lack-of-forks">Lack of forks</h3>
<p>I’ve <a href="https://www.continuousimprover.com/2020/03/keep-source-control-history-clean.html#dealing-with-code-review-comments">written</a> about this before, but suffice it to say I care a lot about a clean source control history. Having feature branches mixed up with shared branches like main, develop and hotfix/x.x isn’t just noisy; it often seriously obscures the visual graph of your commit history. The obvious solution to that is to use personal forks and create pull requests to bring your changes back into the main repository. AZDO <em>does</em> support a form of forks, but they are really just additional repositories in the same project as the main repo. It clearly was an afterthought. Just imagine the clutter in a project with 50 developers. Compare this to GitHub, where forks are completely hidden unless you look for them.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-forks.png" class="align-center" /></p>
<h3 id="inner-sourcing">Inner Sourcing</h3>
<p>I’m a big proponent of Inner Sourcing, where teams contribute to each other’s repositories, just like people do in open-source projects. It’s also the perfect model for platform teams, where shared components and infrastructure are built for other teams. Being able to fork <em>any</em> repository within the organization, fix a bug or add a feature, and submit a pull request for review without prior permission is crucial for this. In AZDO, everything is locked down by default. Repositories are created under a project, and nobody has access to that project unless access was granted. And not only that: because AZDO treats forks as ordinary repositories, you must have permission to create a repository in that project. You can work around all of this to a certain extent, but for me these are all demotivating factors for adopting Inner Sourcing.</p>
<h2 id="pull-requests-and-reviews">Pull Requests and Reviews</h2>
<h3 id="visual-real-estate">Visual real estate</h3>
<p>The first thing I noticed when I had to review a pull request in AZDO is how little space is left for the actual file diff. The view below shows AZDO with as many parts of the UI collapsed as possible.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/azdo-view.png" class="align-center" /></p>
<p>The toolbar on the left and the entire top header section (including the Side-by-Side button) remain visible, even if you scroll down.</p>
<p>Now compare this to the same file and same revision on GitHub. Notice that the entire top bar with your profile and commit information is hidden to keep as much screen real estate as possible available for the diff. Also notice the readability of the changed lines compared to AZDO. For reference, both screenshots were made with the same browser with zooming set to 100%.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-diff.png" class="align-center" /></p>
<p>Only when you scroll the view all the way back to the top does the rest of the information reappear. GitHub is full of UX optimizations like that and keeps improving them.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-diff2.png" class="align-center" /></p>
<h3 id="reviewing-commit-by-commit">Reviewing commit by commit</h3>
<p>While implementing a feature or fixing a bug, I almost always run into refactoring opportunities and potential naming improvements. To avoid polluting the functional changes with those refactorings, I try to separate those changes into separate commits. The idea is that it’ll make it easier for the reviewer to understand my changes, resulting in a quicker and more thorough review. Unfortunately, AZDO doesn’t properly support that. You <em>can</em> create a pull request with multiple commits, but the review comments on those commits will <em>not</em> be visible on the resulting pull request. In GitHub you can review one or more commits at the same time and easily browse back and forth between them.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-commits.png" class="align-center" /></p>
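<p>As a sketch of that workflow in plain git (the repository, file and branch names are made up for illustration), the refactoring and the functional change each get their own commit:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Illustrative only: keep refactorings and functional changes in
# separate commits so a reviewer can step through them one by one.
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email "demo@example.com"
git config user.name "Demo"
echo "class UserService {}" > UserService.cs
git add .
git commit -qm "Initial commit"

git checkout -qb feature/lookup-by-email

# Commit 1: a pure refactoring, no behavior change
echo "class UserLookupService {}" > UserService.cs
git commit -qam "Refactor: rename UserService to UserLookupService"

# Commit 2: the actual functional change
echo "class UserLookupService { /* FindByEmail */ }" > UserService.cs
git commit -qam "Add FindByEmail to UserLookupService"

# The reviewer now sees two focused commits instead of one mixed-up diff
COMMITS=$(git log --oneline main..feature/lookup-by-email | wc -l)
echo "$COMMITS"
</code></pre></div></div>

<p>On GitHub, a reviewer can then walk through those two commits one at a time instead of untangling a single combined diff.</p>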
<h3 id="stacking-branches">Stacking branches</h3>
<p>As a single pull request with multiple commits doesn’t really work, I tried another approach where I push my changes to a branch, create a pull request, and then create a new branch from the previous one. By stacking those branches on top of each other, I can continue delivering my changes in small chunks for easy reviewing and a clean history. But even that isn’t properly supported in AZDO. You <em>can</em> create a pull request from the branch that was stacked on top of the previous one, but as soon as the original pull request is merged, AZDO gets confused. It’ll show the correct commits between the feature branch and the target branch of the pull request in the Commits tab, but the Files tab keeps showing files from the previous pull request this branch was based on. The only workaround is to recreate the pull request from scratch. And that is a pain if you tend to add an extensive rationale to every pull request.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/azdo-files-tab.png" class="align-center" /></p>
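<p>For reference, the stacking workflow can be sketched in plain git (branch names invented); the <code class="language-plaintext highlighter-rouge">git rebase --onto</code> at the end is the retargeting step that happens after the first pull request merges, which is exactly where AZDO’s Files tab gets confused:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Illustrative sketch of stacked branches (names are made up).
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email "demo@example.com"
git config user.name "Demo"
echo base > app.cs
git add .
git commit -qm "Initial commit"

# First small pull request
git checkout -qb feature/part-1
echo part1 >> app.cs
git commit -qam "Part 1: extract validation"

# Second pull request, stacked on top of the first
git checkout -qb feature/part-2
echo part2 >> app.cs
git commit -qam "Part 2: add email validation rule"

# Simulate merging the first pull request into main
git checkout -q main
git merge -q --no-ff feature/part-1 -m "Merge feature/part-1"

# Retarget the stacked branch so it only carries its own commit
git rebase -q --onto main feature/part-1 feature/part-2

REMAINING=$(git log --oneline main..feature/part-2 | wc -l)
echo "$REMAINING"
</code></pre></div></div>

<p>After the rebase, the second branch only contains its own commit on top of main, which is what the Files tab of its pull request should reflect.</p>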
<h3 id="grouping-code-review-comments">Grouping code-review comments</h3>
<p>A code review is a serious process that requires the reviewer to thoroughly understand the context of the changes and the changes themselves, and to provide well-written and thoughtful comments. For example, I often use emojis to emphasize my intention and keep myself from nitpicking too much. GitHub allows you to submit individual comments, just like in AZDO, but it’s much more common to first complete the review and <em>then</em> submit the comments as one batch. Because of this, GitHub understands which comments belong together and will visually group them to keep the comments from different reviewers organized.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-comment-panel.png" class="align-center" /></p>
<p>A nice extra is that you can revise a previous comment before you submit the entire review. In fact, you can edit multiple comments at the same time. Sometimes I realize something while adding a code review comment and want to quickly update a previous comment before finalizing the current one.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-resolved-comments.png" class="align-center" /></p>
<h3 id="supporting-non-review-comments">Supporting non-review comments</h3>
<p>AZDO doesn’t allow me to make a distinction between review comments that need to be resolved and general comments on the pull request. So if I want to ping somebody to make them aware of the pull request, or leave any other kind of comment like a link to a related pull request or issue, that comment will need to be “resolved” to unblock the pull request. In GitHub, you can make that decision per comment.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-start-review.png" class="align-center" /></p>
<h3 id="tracking-force-pushes">Tracking force pushes</h3>
<p>As I often park my changes in a temporary commit using <code class="language-plaintext highlighter-rouge">git save</code>, I keep amending and force-pushing follow-up changes to that same commit. I do the same when processing code review comments and pushing <a href="https://www.continuousimprover.com/2020/03/keep-source-control-history-clean.html">fix-up commits</a>. In AZDO, there’s no way to see what changes such a force push overwrote. Compare this to GitHub, which adds a nice clickable link to see the differences.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-force-push-diff.png" class="align-center" /></p>
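<p>A sketch of that fixup workflow in plain git (branch and file names invented; the final force push is only shown as a comment because it needs a real remote):</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Illustrative sketch of the fixup/force-push workflow (names invented).
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email "demo@example.com"
git config user.name "Demo"
echo base > app.cs
git add .
git commit -qm "Initial commit"

git checkout -qb feature/retry-logic
echo feature > app.cs
git commit -qam "Add retry logic"

# Process a review comment as a fixup of the original commit
echo "feature, reviewed" > app.cs
git commit -qa --fixup HEAD

# Squash the fixup back into its target without opening an editor
GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash main

# The branch is back to a single clean commit; on a real remote you
# would now run: git push --force-with-lease
COMMITS=$(git log --oneline main..feature/retry-logic | wc -l)
echo "$COMMITS"
</code></pre></div></div>

<p>That final force push is the moment where GitHub records a before/after link, while AZDO silently discards the overwritten history.</p>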
<h3 id="expanding-the-diff-view">Expanding the diff view</h3>
<p>Both AZDO and GitHub will hide the unchanged lines in the diff viewer. This is nice and makes it easier to review the relevant changes. But sometimes, you also want to see the context of the change. GitHub allows you to incrementally expand the diff to show the lines above or below.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-expand-diff.png" class="align-center" /></p>
<h3 id="multi-line-links">Multi-line links</h3>
<p>In AZDO, creating a link to a specific line in a file that is part of a pull request is cumbersome. Also, there’s no way to create a link to multiple lines of text in such a PR. I often use that to refer to similar changes from a comment. In GitHub, you can SHIFT-click on a set of lines to get a friendly URL you can share.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-block-link.png" class="align-center" /></p>
<h3 id="editing-files-during-review">Editing files during review</h3>
<p>In GitHub, you can directly edit a file that is part of a pull request, even if its source branch is on a different fork. It’ll open a new window to edit the file and push a new commit directly to the source branch.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-edit-file.png" class="align-center" /></p>
<p>And if you don’t have write access, GitHub will ask you whether you want to create a branch instead and use a pull request to the contributor’s fork.</p>
<h3 id="emoji-support">Emoji support</h3>
<p>As I use emojis a lot to help me categorize the code review comments (based on <a href="https://github.com/erikthedeveloper/code-review-emoji-guide">this article</a>), it’s slightly annoying that AZDO doesn’t auto-complete and understand emojis like <code class="language-plaintext highlighter-rouge">:wrench:</code>, <code class="language-plaintext highlighter-rouge">:question:</code> or <code class="language-plaintext highlighter-rouge">:seedling:</code>. To be fair, on Windows, you can use the WIN + . pop-up as a workaround.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-emojis.png" class="align-center" /></p>
<h3 id="smart-linking">Smart linking</h3>
<p>If you paste a link to another pull request or issue into a pull request, AZDO will do just that: treat it as a link. GitHub is smart enough to recognize that the link points to something native to GitHub and renders it as a shortened reference, just like any other in-product link. In fact, GitHub will also show you a summary of that issue or pull request in a pop-up when you hover your mouse over it.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-hover.png" class="align-center" /></p>
<h2 id="other-features">Other features</h2>
<h3 id="symbol-navigation">Symbol navigation</h3>
<p>GitHub’s C# parser has a deep understanding of the language (more on that in the section on syntax highlighting below). Because of that, it can provide a list of symbols within a class on a side panel.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-symbols.png" class="align-center" /></p>
<p>But not only that: when you select a symbol such as the method below, it’ll immediately show you where that method is defined and where it is used. Not a critical feature, but it has helped me avoid the need to open up an IDE on many occasions.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-references.png" class="align-center" /></p>
<p>And if you really need the power of a real IDE in your browser, just press the . (dot) key to open up the repository in <a href="https://code.visualstudio.com/docs/editor/vscode-web">Visual Studio Code for the Web</a>.</p>
<h3 id="builds-vs-repositories">Builds vs repositories</h3>
<p>Although not a big issue, I do prefer GitHub’s direct connection between a repository and its build pipelines. I do see the advantage of AZDO’s model, where a pipeline can be associated with multiple repositories, but then make it visually clear how to find the pipeline for a repo. Now I have to manually add a badge to the README.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/azdo-badge.png" class="align-center" /></p>
<h3 id="auto-updating-dependencies">Auto-updating dependencies</h3>
<p>One of the biggest maintenance challenges most software projects face these days is keeping up with new versions of the open-source NPM and NuGet packages they depend on. I’ve seen my fair share of projects that neglected this and ran into breaking changes or hard-to-solve dependency conflicts at the most inconvenient time. And don’t get me started on the vulnerabilities this introduces. GitHub has something called <a href="https://github.com/features/security">Dependabot</a> that will automatically create pull requests to update your NuGet and NPM packages. It’s extremely smart, understands semantic versioning and is configurable enough to group updates to related packages.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-dependabot.png" class="align-center" /></p>
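<p>Enabling it boils down to adding a <code class="language-plaintext highlighter-rouge">.github/dependabot.yml</code> file to the repository. A minimal sketch for a NuGet project (the group name and package patterns are invented for illustration):</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      # Hypothetical group: fold related test packages into one PR
      test-packages:
        patterns:
          - "xunit*"
          - "coverlet*"
</code></pre></div></div>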
<h3 id="push-and-create">Push and create</h3>
<p>When you push a new branch from the CLI, GitHub gives you a link you can click to directly create a pull request on the website.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-cli-pr.png" class="align-center" /></p>
<h3 id="the-github-cli">The GitHub CLI</h3>
<p>GitHub introduced a command-line tool called the <a href="https://cli.github.com/">GitHub CLI</a> that allows you to do practically everything you can do through the website: creating forks, opening issues, checking out pull requests locally, and much more.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-cli.png" class="align-center" /></p>
<p>I use <code class="language-plaintext highlighter-rouge">gh</code> a lot to review pull requests locally, something which requires quite some git magic on AZDO.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-cli-pr-checkout.png" class="align-center" /></p>
<h3 id="release-notes-generation">Release notes generation</h3>
<p>Another awesome GitHub feature that AZDO doesn’t have is the ability to generate release notes from pull requests. In Fluent Assertions, we use this heavily, and it results in something like this:</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-release-notes.png" class="align-center" /></p>
<p>Notice how it groups the pull requests, adds the links, avatars and names of the contributors, and even mentions first-time contributors. And it’s all heavily customizable. Just check out <a href="https://github.com/fluentassertions/fluentassertions/blob/develop/.github/release.yml">the configuration</a> Fluent Assertions uses.</p>
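<p>Such a configuration lives in a <code class="language-plaintext highlighter-rouge">.github/release.yml</code> file that maps pull request labels to sections. A trimmed-down sketch (the label names here are invented and not the actual Fluent Assertions setup):</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>changelog:
  exclude:
    labels:
      - skip-release-notes
  categories:
    - title: Breaking Changes
      labels:
        - breaking-change
    - title: New Features
      labels:
        - new-feature
    - title: Fixes
      labels:
        - bug
    - title: Other Changes
      labels:
        - "*"
</code></pre></div></div>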
<h3 id="repository-insights">Repository insights</h3>
<p>GitHub offers extensive insights into the activity of a repository, something unavailable in AZDO. Especially in larger organizations with 100+ repositories, it’s a nice way to see which repositories are still actively maintained.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-insights.png" class="align-center" /></p>
<h3 id="compare-anything">Compare anything</h3>
<p>In GitHub, you can compare any combination of commits, branches or tags. In AZDO, you can only compare tags with tags or branches with branches. This can be quite annoying if you’re, for example, trying to figure out what changed between a feature branch and the last released tag. And the URL for comparing is quite human-readable. For example, <code class="language-plaintext highlighter-rouge">https://github.com/fluentassertions/fluentassertions/compare/6.12.0...develop</code> will give you something like this.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-compare.png" class="align-center" /></p>
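<p>Locally, the same question maps onto git’s range syntax. A quick sketch (the tag, branch and commit names are invented):</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Illustrative: list what a branch adds on top of a release tag,
# the local equivalent of GitHub's compare page (names invented).
cd "$(mktemp -d)"
git init -q
git checkout -qb develop
git config user.email "demo@example.com"
git config user.name "Demo"
echo v1 > app.cs
git add .
git commit -qm "Release 1.0.0"
git tag 1.0.0

echo change1 >> app.cs
git commit -qam "Fix null handling"
echo change2 >> app.cs
git commit -qam "Add new matcher"

# Everything on develop that is not in the 1.0.0 tag
NEW_COMMITS=$(git log --oneline 1.0.0..develop | wc -l)
echo "$NEW_COMMITS"
</code></pre></div></div>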
<h3 id="syntax-highlighting">Syntax highlighting</h3>
<p>Although AZDO does have some level of syntax highlighting for most file types, GitHub generally supports more file types and has a better understanding of C# files. Compare, for example, the left screenshot from GitHub with the similar one from AZDO on the right. Although the difference isn’t that big, you can see that the highlighter in GitHub really understands C#.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2024/github-syntax.png" class="align-left" style="max-width: 400px" />
<img src="https://www.continuousimprover.com/assets/images/posts/2024/azdo-syntax.png" class="align-right" style="max-width: 400px" /></p>
<h2 id="wrap-up">Wrap-up</h2>
<p>This post turned out bigger than I initially expected. Regardless, if you care about clean source control, quick and thorough reviews and intensive collaboration between teams, I urge you to drop Azure DevOps for source control and switch to GitHub. Even more so if you care about up-to-date dependencies and reducing the risk of vulnerabilities. IMO, Dependabot alone is reason enough to switch.</p>
<p>GitHub offers such a refined experience, it’s just not fair to other source control providers. It uses human-friendly URLs all over the place and its user interface is continuously being improved. In AZDO, it feels like the team is doing the minimum amount of work to support the most requested features and without considering proper interaction design. In GitHub, everything feels so well-thought-out.</p>
<p>If you’re considering dropping AZDO altogether, know that GitHub’s work item tracking is not yet on par with AZDO’s. Although, looking at the pace at which new issue tracking features are added, I’m sure it will get there pretty fast. And if you can’t wait for that and want to completely drop AZDO, switch to JIRA. I have nothing but great experiences with JIRA.</p>
<p>It hurts to work with AZDO. Not because of personal feelings, but because I’ve seen first-hand how it holds back the teams I work with from collaborating efficiently and committing code with a high level of traceability.</p>
<p>So if you see the chance to try GitHub, do it. You’ll never look back.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 27 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a>, <a href="https://mastodon.social/@ddoomen">Mastodon</a> and <a href="https://bsky.app/profile/ddoomen.bsky.social">Blue Sky</a>.</p>Dennis Doomendennis.doomen@avivasolutions.nlHow Azure DevOps holds back the teams I work with from collaborating efficiently and committing code that has a high level of traceability, and how GitHub fixes thatMonetizing open-source development and supporting the community2023-06-23T00:00:00+00:002023-06-23T00:00:00+00:00https://www.continuousimprover.com/2023/06/funding-open-source<p>I was recently interviewed by a popular .NET podcast and we ended up discussing how to get companies to support the open-source community. So here’s me calling all developers and tech enthusiasts! It’s time to take action and support the open-source projects that drive innovation and empower so many projects. I invite you to join me in an attempt to monetize open-source development and ensure the sustainability of these valuable initiatives. 
As the creator of <a href="https://fluentassertions.com/">Fluent Assertions</a>, a popular .NET open-source project that has crossed 250 million downloads on nuget.org, I understand firsthand the impact and challenges of maintaining open source. Together, I hope we can make a difference and foster a thriving open-source ecosystem.</p>
<h2 id="requesting-financial-support-from-your-organization">Requesting financial support from your organization</h2>
<p>Let’s start by approaching our managers or CTOs and requesting financial support for open-source projects. Highlight the benefits of investing in open-source, such as improved software quality, enhanced security, and the opportunity to collaborate with a vibrant developer community. I encourage you to share the success stories of open-source projects like Fluent Assertions, xUnit or Identity Server, demonstrating the tangible value they bring to organizations. Urge your company to consider donating a portion of their budget each month to support these projects and the wider ecosystem.</p>
<h2 id="creating-a-sponsorship-selection-committees">Creating a sponsorship selection committee</h2>
<p>To ensure fairness in distributing funds, propose the idea of creating a sponsorship selection committee (or something less formal). Such a committee could consist of representatives from different teams and (potentially) departments. Together, they can evaluate and prioritize open-source projects that require financial backing. By involving a diverse range of perspectives, you can make informed decisions that align with our organization’s goals and ensure that funds are allocated where they will have the greatest impact.</p>
<h2 id="using-tools-for-project-assessment-and-distribution">Using tools for project assessment and distribution</h2>
<p>Efficiently managing financial contributions to open source projects is crucial. You could utilize tools and services like Black Duck, Dependabot or others that help assess the open-source projects your project or department relies on and distribute funds accordingly. These tools provide valuable insights into project popularity, usage statistics, and development activity. By leveraging such data, you can make sensible decisions about where to allocate financial support.</p>
<h2 id="sponsoring-your-favorite-open-source-project-yourself">Sponsoring your favorite open-source project yourself</h2>
<p>Let’s take some personal responsibility for monetizing open-source development by sponsoring the projects we are passionate about. I believe Fluent Assertions has been mostly successful thanks to the support and contributions from the community. If there’s an open-source project that you love, consider sponsoring it yourself. Even a small monthly contribution, like 5 EUR per month, can make a significant difference when combined with the support of others. By sponsoring projects like that, we empower their maintainers and encourage them to continue their valuable work.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a>, <a href="https://mastodon.social/@ddoomen">Mastodon</a> and <a href="https://bsky.app/profile/ddoomen.bsky.social">Blue Sky</a>.</p>Dennis Doomendennis.doomen@avivasolutions.nlSome thoughts on how we could convince companies to support the open-source community.What’s the “unit” in unit testing and why is it not a class2023-04-24T00:00:00+00:002023-04-24T00:00:00+00:00https://www.continuousimprover.com/2023/04/unit-testing-scope<h2 id="why-care-about-the-scope-of-testing">Why care about the scope of testing?</h2>
<p>Somewhere in 2018, I asked my Twitter friends for advice on a heuristic for finding the right scope of your unit tests. This resulted in some interesting discussions, but I still remember two responses that somehow stuck. I particularly liked the humor in the first one:</p>
<blockquote>
<p>When someone else can modify your code safely, without you getting sweaty armpits, the scope of your unit test is okay</p>
</blockquote>
<p>The other one sounded more thoughtful and wise:</p>
<blockquote>
<p>Our unit test should be large enough that you can assert something meaningful, but small enough that you can quickly read &amp; assess it</p>
</blockquote>
<p>You may wonder why we should care about this in the first place. Well, I hope you agree with the value of unit testing. In my experience, it can help produce code that can be changed by any developer in the team without fear and with confidence. But unit tests do not come for free. They can easily extend the <em>initial</em> development time by 50%. But I promise you, your return on investment will be significant. You’ll end up with happier developers and happier clients.</p>
<p>But that’s not what I meant by “free”. The “dark side” of unit testing and Test Driven Development, as some like to call it, is that you can do it wrong. And if you do, it will hurt all <em>subsequent</em> development in such a way that you’ll regret adopting unit testing in the first place. Fortunately for you, I’ve already shot myself in the foot extensively and thus have a lot of experience to share. This already led to my <a href="/2021/10/laws-test-driven-development.html">recent post</a> on the “laws” of test-driven development. But I never elaborated on how to find the right scope for automated testing.</p>
<h2 id="a-real-life-example-involving-databases">A real-life example involving databases</h2>
<p>Let’s start with the first example. Consider a type whose main purpose is to provide general database management operations. It has a method that will check whether a particular table exists, and if not, create it.</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">_databaseManager</span><span class="p">.</span><span class="nf">EnsureTableExists</span><span class="p">(</span><span class="s">"users"</span><span class="p">);</span>
</code></pre></div></div>
<p>Now ask yourself, what should be the scope of the automated tests?</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/uml-databasemanager.png" class="align-center" style="max-width: 500px" /></p>
<p>If you follow the guidance by some of the books on this topic, every type should be covered by a separate set of tests. So one set for the <code class="language-plaintext highlighter-rouge">DatabaseManager</code> and assuming the factory can be covered in one go, one set for the <code class="language-plaintext highlighter-rouge">SqlDatabaseAdapter</code>. However, I don’t think you can make that decision without understanding the relationship between those types. Are the adapter and its factory part of the same layer or module? Is any of them supposed to be reusable or are they just implementation details to make the code easier to change in the future? What were the original requirements that led to this design?</p>
<p>After consulting with the developer who designed this, it turned out that there’s really only one implementation of the <code class="language-plaintext highlighter-rouge">IDatabaseAdapter</code>. He added those interfaces just to “be ready for the future” or to “be SOLID”. There was no requirement to support any other database than SQL Server, and as far as we know, there never will be. In fact, the existing <code class="language-plaintext highlighter-rouge">DatabaseManager</code> tests were creating a mock of the <code class="language-plaintext highlighter-rouge">IDatabaseAdapterFactory</code> that returns a mock of the <code class="language-plaintext highlighter-rouge">IDatabaseAdapter</code>. This is the result of the manager delegating all the “dirty” interaction with SQL Server to the adapter. In other words, those tests were only ensuring that a call to <code class="language-plaintext highlighter-rouge">EnsureTableExists</code> resulted in a call to <code class="language-plaintext highlighter-rouge">IDatabaseAdapter.EnsureTableExists</code>. The actual adapter wasn’t covered at all. Since the primary purpose of that manager is to interact with the database, testing only the mocks is quite wasteful. So for these <em>specific tests</em> I would just <a href="/2023/03/docker-in-tests.html">use a Linux test container</a> running SQL Server to cover everything the <code class="language-plaintext highlighter-rouge">DatabaseManager</code> is supposed to do.</p>
<p>In my opinion, the original developer didn’t understand the subtleties behind SOLID and applied the guidelines rather dogmatically. Given the requirements at that time, it could have all been a single class. Only if a real need arose to support multiple database vendors would I have considered refactoring and introducing the Adapter pattern. And that’s my point. Even if those abstractions <em>were</em> needed at some point, they would be the result of refactoring. The original purpose of the <code class="language-plaintext highlighter-rouge">DatabaseManager</code> wouldn’t change. And you shouldn’t need to rewrite your tests if you decide to refactor the implementation from a single class into multiple classes. Refactoring shouldn’t change the purpose, nor the behavior. That’s why testing too small is such a bad practice. It can completely kill your ability to move fast.</p>
<h2 id="another-example-from-fluentassertions">Another example from FluentAssertions</h2>
<p>As you may know, <a href="https://fluentassertions.com/">FluentAssertions</a> has a feature to compare two object graphs, even if the types in those graphs differ. This capability, available through the <code class="language-plaintext highlighter-rouge">BeEquivalentTo</code> method, allows you to do something like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">eventMonitor</span><span class="p">.</span><span class="n">OccurredEvents</span><span class="p">.</span><span class="nf">Should</span><span class="p">().</span><span class="nf">BeEquivalentTo</span><span class="p">(</span><span class="k">new</span><span class="p">[]</span>
<span class="p">{</span>
<span class="k">new</span>
<span class="p">{</span>
<span class="n">EventName</span> <span class="p">=</span> <span class="s">"PropertyChanged"</span><span class="p">,</span>
<span class="n">TimestampUtc</span> <span class="p">=</span> <span class="n">utcNow</span> <span class="p">-</span> <span class="m">1.</span><span class="nf">Hours</span><span class="p">(),</span>
<span class="n">Parameters</span> <span class="p">=</span> <span class="k">new</span> <span class="kt">object</span><span class="p">[]</span> <span class="p">{</span> <span class="s">"third"</span><span class="p">,</span> <span class="s">"first"</span><span class="p">,</span> <span class="m">123</span> <span class="p">}</span>
<span class="p">},</span>
<span class="k">new</span>
<span class="p">{</span>
<span class="n">EventName</span> <span class="p">=</span> <span class="s">"NonConventionalEvent"</span><span class="p">,</span>
<span class="n">TimestampUtc</span> <span class="p">=</span> <span class="n">utcNow</span><span class="p">,</span>
<span class="n">Parameters</span> <span class="p">=</span> <span class="k">new</span> <span class="kt">object</span><span class="p">[]</span> <span class="p">{</span> <span class="s">"first"</span><span class="p">,</span> <span class="m">123</span><span class="p">,</span> <span class="s">"third"</span> <span class="p">}</span>
<span class="p">}</span>
<span class="p">},</span> <span class="n">o</span> <span class="p">=></span> <span class="n">o</span><span class="p">.</span><span class="nf">WithStrictOrdering</span><span class="p">());</span>
</code></pre></div></div>
<p>It executes a recursive comparison member by member. And it does that in a smart way. For instance, types that have members themselves and do not override <code class="language-plaintext highlighter-rouge">Equals</code> are compared by recursively traversing their members. Dictionaries are equivalent if they have the same keys and their values are equivalent (again by running a nested recursive comparison). And collections are equivalent when they contain the same equivalent objects in any order (unless you use something like <code class="language-plaintext highlighter-rouge">WithStrictOrdering</code>). And it doesn’t stop there. Here’s a class diagram showing just a subset of the implementation.</p>
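<p>For example, here’s a small, hypothetical illustration of those rules in action. The anonymous types keep the sketch self-contained; <code>Scores</code> is compared without respecting order because that’s the default for collections.</p>

```csharp
using System.Collections.Generic;
using FluentAssertions;

var actual = new
{
    Name = "Jane",
    Scores = new[] { 3, 1, 2 },
    Metadata = new Dictionary<string, string> { ["role"] = "admin" }
};

// Members are traversed recursively, dictionaries are matched by key,
// and collections may appear in any order unless you opt into
// WithStrictOrdering.
actual.Should().BeEquivalentTo(new
{
    Name = "Jane",
    Scores = new[] { 1, 2, 3 },
    Metadata = new Dictionary<string, string> { ["role"] = "admin" }
});
```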
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/uml-fluentassertions.png" class="align-center" /></p>
<p><code class="language-plaintext highlighter-rouge">BeEquivalentTo</code> is a method on a class that is returned from the <code class="language-plaintext highlighter-rouge">Should</code> extension method. When I started the implementation of this API almost ten years ago, there was only a single class to implement the behavior: <code class="language-plaintext highlighter-rouge">EquivalentValidator</code>. But over the years, I added more and more capabilities and needed to refactor the implementation to break it down into smaller and well-focused supporting classes. And that’s exactly what the original authors of the Design Patterns book meant when they said they should have named the book “Refactoring towards Design Patterns”. And just like the previous example, refactoring my code shouldn’t affect the behavior, the API, and more importantly, the tests. Applying the test-per-class strategy would have completely screwed up my ability to refactor.</p>
<h2 id="a-less-trivial-example">A less trivial example</h2>
<p>Consider the following call:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">user</span> <span class="p">=</span> <span class="k">await</span> <span class="n">_httpClient</span><span class="p">.</span><span class="nf">GetAsync</span><span class="p">(</span><span class="s">"/api/users/1234"</span><span class="p">,</span> <span class="n">body</span><span class="p">);</span>
</code></pre></div></div>
<p>In the .NET world, this is most likely implemented like this:</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/uml-usermodule.png" class="align-center" style="max-width: 500px" /></p>
<p>Now what should the test scope be here? A developer who just started with TDD would probably write individual tests for the <code class="language-plaintext highlighter-rouge">UsersController</code> and the <code class="language-plaintext highlighter-rouge">SqlUserRepository</code>. But given what I’ve been trying to tell you in this post, I guess your default answer would be to test the controller and repository in one go.</p>
<p>Well, I think the correct answer is “it depends”. Are the controller and repository part of the same module or functional slice and specifically built for that? If so, I would most likely cover both in one set of tests (possibly using a SQL Server docker container). But if this is part of some kind of Onion or Hexagon Architecture (and thus using the Dependency Inversion Principle), it is very likely that the <code class="language-plaintext highlighter-rouge">IUserRepository</code> is a specific interface owned by the same module that owns the <code class="language-plaintext highlighter-rouge">UsersController</code>. Other modules may have their own version of that interface. In that case, the <code class="language-plaintext highlighter-rouge">SqlUserRepository</code> is implementing those interfaces, and by definition, lives outside the scope of the controller. And because of this, I would most definitely test the repository separately.</p>
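<p>If the controller and repository do belong to the same functional slice, a single test scope could look like the sketch below. It assumes the <code>Microsoft.AspNetCore.Mvc.Testing</code> package and xUnit; the <code>Program</code> entry point and the route are placeholders for whatever your application actually exposes.</p>

```csharp
// Sketch only: assumes Microsoft.AspNetCore.Mvc.Testing and xUnit;
// Program and the /api/users route are hypothetical.
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class UserApiSpecs : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> factory;

    public UserApiSpecs(WebApplicationFactory<Program> factory)
    {
        this.factory = factory;
    }

    [Fact]
    public async Task Returns_an_existing_user()
    {
        // One scope that covers the controller, the repository and
        // everything in between, invoked through the real HTTP pipeline.
        var client = factory.CreateClient();

        var response = await client.GetAsync("/api/users/1234");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```

Swapping the real database for a docker container (or a repository test double, if the repository lives outside the module) is then a hosting concern, not something the test itself needs to know about.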
<p>With these three real-world examples, I hope you see my point that the default test-per-class idea is rubbish. But if that’s the case, how do you find the right scope then? Unfortunately there aren’t simple rules and guidelines to determine that scope. It really depends on the architecture and the internal boundaries of your code base. But in my next post, I’ll give you some heuristics and smells that will help identify the right boundaries.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>Dennis Doomendennis.doomen@avivasolutions.nlIf you choose the wrong unit testing scope, you'll regret adopting unit testing and TDD in the first place20 questions to determine whether your teams are mature enough2023-04-17T00:00:00+00:002023-04-17T00:00:00+00:00https://www.continuousimprover.com/2023/04/team-predictability<p>With more than 26 years of experience as a consultant, I help organizations in the .NET space to professionalize their entire software development efforts, from idea to production. During such visits, I get to scrutinize their development practices, quality standards, design principles, the tools they use, their deployment pipeline, the team dynamics, the requirements process and much more. In this series of short posts, I’ll share some of the most common pain points I run into.</p>
<p>In the <a href="/2023/04/signals-unknown-architecture.html">previous post</a> of this series, I’ve provided you with a couple of angles that can help you determine whether your architecture, your code and your documentation are a consistent whole. And with that, I completed the more technical part of this series of posts. In this final post, I’m going to change direction and talk about the predictability and maturity of your development team(s). Here’s a bunch of questions you can ask to help you assess the situation.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/team-maturity.png" class="align-center" /></p>
<ul>
<li>
<p>Are your development teams already working under the principles of Scrum, Kanban or a combination of those, but they never managed to achieve a stable <em>velocity</em>? Do you think this is a problem or do you think it is an inherent aspect of software development in general?</p>
</li>
<li>
<p>Are teams struggling to break down the work in appropriately-sized chunks that allow them to work together using swarming or pair programming?</p>
</li>
<li>
<p>Do you struggle trying to capture the more technical aspects of software development into (functional) user stories?</p>
</li>
<li>
<p>Do bigger technical changes generally fit in a user story, or do they often tend to run over the sprint boundaries? And do you see friction in the original definition of what a user story is supposed to mean? Or do you apply ideas like <em>skeleton stories</em> or stories with <em>project value</em> to capture the technical work?</p>
</li>
<li>
<p>Do teams know exactly what is expected before they start developing a feature? And do they also know what is expected at the end?</p>
</li>
<li>
<p>How accurate and useful are your team’s estimates? Do you see a lot of deviations and extremes? Do they still try to convert the story points back to hours?</p>
</li>
<li>
<p>Do you see estimates as a goal by itself? Or is their only value that they make developers think about the size and complexity of the work so that they can create a decent work breakdown?</p>
</li>
<li>
<p>What about sprint goals? Are these just ceremonies with artificial goals that usually end up being about delivering the most important backlog item? Or do they really make a difference to the focus a team has?</p>
</li>
<li>
<p>How often are the stories <em>almost</em> ready at the end of the sprint? Is this a common thing (like in many teams), or do team members really work together efficiently to deliver what was planned?</p>
</li>
<li>
<p>Do your <em>burn down charts</em> also look like block diagrams or show steadily <em>increasing</em> work-in-progress? Or do yours really burn <em>down</em> in a gradual and predictable manner? And do you even look at them during the stand-ups to track and adjust priorities?</p>
</li>
<li>
<p>Do you often notice that your teams start to lose focus on the most important backlog item that the sprint should deliver? For example, by working on less important stuff (so they can keep working in isolation)? Or one team struggling to get that important feature delivered, whereas the other teams mind their own business without offering help?</p>
</li>
<li>
<p>Are your stand-ups meetings where people actively engage and offer each other help? Or do you always see the same people speak and the rest being completely passive? Do you manage to finish your stand-ups within 5 minutes or do they always linger on too long?</p>
</li>
<li>
<p>Do you often see backlog items on the board that seem to be stuck? And what about too many backlog items being in the active state at the same time? And do your Kanban teams even have a <em>WIP</em> limit that they honor?</p>
</li>
<li>
<p>Do teams have a mindset that makes them work together to get those backlog items moving over the board from left to right, even if that means they have to do work outside their comfort zone?</p>
</li>
<li>
<p>How well do you manage to balance technical work, getting rid of technical debt and delivering customer value? Or is this a continuous fight between product owners, architects, developers and even managers?</p>
</li>
<li>
<p>Can teams focus on the work they planned to work on? Or are they continuously disturbed by outside sources? And if it’s the latter, have you found ways to prevent and/or funnel this?</p>
</li>
<li>
<p>Do development teams understand how their work contributes to the wider goals of the project or organization? And do they understand the goal of their current assignment? If it is a POC or MVP, do they realize that the goal is not to deliver a rock-solid product, but just something to open up the market?</p>
</li>
<li>
<p>Do the developers see the development process and the associated ceremonies as useful? Or do they feel like it is just a waste of time that keeps them from hitting that keyboard? Is anybody trying to sabotage the process a little bit?</p>
</li>
<li>
<p>Are (sprint) retrospectives just routine meetings where people mumble a bit on what happened without any concrete actions? Or are they joyful, surprising and effective meetings that really help achieve the agile mindset of inspecting-and-adapting?</p>
</li>
</ul>
<p>Do you recognize any of the typical struggles I just listed? Do you think they apply to your organization? Let me know by commenting below.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>Dennis Doomendennis.doomen@avivasolutions.nlIn this final post of this series, I'm going to change direction and talk about the predictability and maturity of your development team(s)6 signals that your architecture is not visible enough2023-04-11T00:00:00+00:002023-04-11T00:00:00+00:00https://www.continuousimprover.com/2023/04/signals-unknown-architecture<p>With more than 26 years of experience as a consultant, I help organizations in the .NET space to professionalize their entire software development efforts, from idea to production. During such visits, I get to scrutinize their development practices, quality standards, design principles, the tools they use, their deployment pipeline, the team dynamics, the requirements process and much more. In this series of short posts, I’ll share some of the most common pain points I run into.</p>
<p>In the <a href="/2023/03/coding-smells.html">previous post</a> of this series, I’ve been diving into the depths of coding practices that have a smell. This time, I’d like to focus on the availability and discoverability of technical information about architecture. Some teams are more than willing to write documentation, but what’s lacking is clarity on where to find that documentation, how it relates to the architecture and how up-to-date it still is. Here are some consequences of that.</p>
<ul>
<li>
<p>It is often not obvious where in the system some piece of functionality is implemented. This is particularly an issue when the code is structured along the technical aspects of the system instead of aligning with functional modules. This is frequently caused by developers that don’t understand the internal boundaries and architectural seams of the system, either because they are too focused on the details, or because the architecture itself has been obfuscated.</p>
</li>
<li>
<p>And if you look at the system from the code’s perspective, you should be able to deduce the architecture. In fact, in an ideal world, the code should make the architecture evident just by looking at the folder names. The industry likes to call that a <em>screaming architecture</em>. Even if you decide to look at the system from the UI’s perspective, you should get a decent understanding of where certain sections of the UI are implemented. And it shouldn’t matter whether it’s a desktop application or a modern single-page web application.</p>
</li>
</ul>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/architecture-visibility.png" class="align-center" /></p>
<ul>
<li>
<p>Even if a developer found a suitable place for some new piece of code, I’ve noticed that they aren’t always aware how that place fits in the bigger picture. And if you don’t know that, then you probably also don’t know how that place relates to other parts of the system and what kind of coupling is allowed or not. Having some kind of visual representation can really make a difference. Unfortunately, developers that are very experienced with a code base have often built a mental map of the architecture and know their way instinctively. Or worse, they may reject the need for such a visual representation because it may surface the inconsistency of the code base. All issues that don’t help protect the internal architectural boundaries.</p>
</li>
<li>
<p>It doesn’t help if documentation is not up-to-date, especially if it is supposed to cover the architecture and its underlying principles. Some organizations ensure writing documentation is part of the Definition of Done. That’s a good thing, but doesn’t guarantee that you’ll end up with useful documentation. Documentation needs to have purpose, scope, and more importantly, the right audience and a clear indication of how current it is. It should be completely obvious where to find it and how to navigate through related content. And all of this also applies to technical and architectural documentation. That’s one of the main reasons why I don’t believe in UML tools like Sparx Enterprise Architect. Architecture diagrams should only exist as visual support to tell a story, even those using the <a href="https://c4model.com/">C4 Model</a>.</p>
</li>
<li>
<p>Now imagine a beautiful world where the architecture, the code and the documentation are all nicely aligned. Unfortunately (or fortunately) things change. New functionality, new environments, new technical insights and refactoring needs are all inevitable in a successful system. And that’s fine, provided that the developers involved keep the rest of the developers and stakeholders posted about those changes. In a previous post, I’ve mentioned internal blog posts already, but technical sessions, architecture decision logs and similar techniques seem to be a bridge too far for many teams.</p>
</li>
<li>
<p>A small, but not unimportant part of the architecture is the code itself. Does every code project (such as a GitHub or AZDO repository) have a clear read-me explaining the purpose of the project, how it is released, who owns it, where to find the build artifacts, what you need to compile the project and how it is deployed in production? I’ve seen too many projects, both within companies as well as in the open-source world, that lack that kind of information.</p>
</li>
</ul>
<p>Those were only six points. I’ve seen quite some teams that ran into structural problems because they were convinced that “the code is documentation enough” or “I know the architecture by heart”. And it really isn’t that hard to identify the necessary ingredients to make people aware of the architecture, both in code as well as at the documentation level.</p>
<p>Do you recognize one or more of these signals? Do you think they apply to your organization? Let me know by commenting below.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>Dennis Doomendennis.doomen@avivasolutions.nlI've been reflecting on common issues that make it hard for the developers to understand the architectureAre you at your Plateau of Productivity yet?2023-04-02T00:00:00+00:002023-04-02T00:00:00+00:00https://www.continuousimprover.com/2023/04/plateau-of-productivity<p>Here’s a little story:</p>
<blockquote>
<p>Imagine you’ve been attending that conference where you first learned about that cool new thing, let’s say, something like Test Driven Development. Your first reaction might have been “Meh, not for me”. You were not seeing the value just yet and could already imagine yourself having to explain your manager why development of a new feature is taking so much time. But then, over time, the idea of TDD starts to sink in and you start to experiment with writing tests first.</p>
<p>Week by week, you become more enthusiastic and really start to grasp how powerful TDD is. You start to tell your colleagues about it, offer to run some internal demos, and even decide to write a couple of blog posts about it. Since you’re not afraid to talk in public, you manage to convince your boss that you doing a full-blown presentation is a great idea. And since it is, and you did a great job, you even got accepted at a local event to convince the community about how great TDD is. And hey, and since you’re a millennial, you record a YouTube video of that presentation and decide to invest in building a real Pluralsight training on the many advantages of TDD.</p>
<p>In the meantime, you’ve been applying this test-first mindset quite rigorously and have discovered that it is much harder than you thought. You and your team wrote tons of small and focused unit tests and the number of regressions decreased significantly. So even though your manager doesn’t like the extra time it costs to build a new feature, he does like the improved quality. On the other hand, a lot of your colleagues start to complain about having to rewrite a lot of those tests every time they change some functionality. In fact, it gets so bad that you start to believe that in retrospect, adopting TDD wasn’t such a great idea after all.</p>
<p>As a consequence of this, you start to talk to other teams and recommend them not to even bother with TDD. You mention the time it’ll cost you to write tests before writing code, and particularly complain about all the rewriting you have to do all the time. And as you’ve become part of the community, you write a blog post about it, build a presentation titled “The Dark Side of TDD” and record a popular YouTube video with the catchy title “10 reasons why shouldn’t use TDD”. From a proponent, you’ve completely switched to become an opponent.</p>
<p>But then, many years later, you’ve finally discovered that TDD is really as great as you initially thought. In fact, you can’t even write code without first writing some tests anymore. It’s really like driving without seatbelts for you. You just needed to make sure you stay away from any dogmatic ideas. You now know a class is not the default scope for a unit test and that TDD is equally applicable to JavaScript and TypeScript as it is to C# and Java. Also, you finally realized that understanding the internal seams of your system is crucial to be successful with TDD. But you’ve accepted that first trying to sketch out the overall design of your classes without writing a single test is totally fine. You know quite well that first defining the responsibilities, the ownership of data and how those classes will work together is how software development works. You’ve finally found the sweet spot.</p>
</blockquote>
<p>Recognize anything in this not-so-made-up history lesson? There’s a pattern in this that happens a lot when discovering some new technology, practice or principle. Gartner made this very visual through their Hype Cycle for Technology Adoption. It looks like this:</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/gartner-hype-cycle.png" class="align-center" /></p>
<p>You can clearly see the points in time as they happened in my little story. The Peak of Inflated Expectations is when you became 100% convinced that TDD is “the way” and tried to convince everybody of the same “fact”. The Trough of Disillusionment is when you discovered the flipside of TDD and started to tell everybody to stay away from it. The Slope of Enlightenment is that period of time where you learned the pros and cons, ditched any dogmatic beliefs and finally became a proficient practitioner of TDD.</p>
<p>So the next time you become overly zealous about some new technology, principle or practice, ask yourself where you are at the technology hype cycle. Whether you’re looking at the cloud as a solution for your scalability problems, microservices to move away from that monolith, Event Sourcing, Onion Architectures or the next JavaScript framework, keep those feet on the ground and challenge yourself. It puts things in perspective and helps you avoid going down the rabbit hole.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>Dennis Doomendennis.doomen@avivasolutions.nlA little story about the typical adoption process of TDD and how that works for other tools, principles and practices9 coding practices that have a smell2023-03-27T00:00:00+00:002023-03-27T00:00:00+00:00https://www.continuousimprover.com/2023/03/coding-smells<p>With more than 26 years of experience as a consultant, I help organizations in the .NET space to professionalize their entire software development efforts, from idea to production. During such visits, I get to scrutinize their development practices, quality standards, design principles, the tools they use, their deployment pipeline, the team dynamics, the requirements process and much more. In this series of short posts, I’ll share some of the most common pain points I run into.</p>
<p>In the <a href="/2022/01/symptoms-traceability.html">last post</a> of this series, we looked at both the traceability of technical and architectural decisions as well as the micro-decisions that were made at the code-level and (hopefully) captured in your source control system. So the logical follow-up to that is to look at the readability and maintainability of the code itself. There’s a lot to cover there, so let’s see what I tend to run into.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/code-smells.png" class="align-center" /></p>
<ul>
<li>
<p>Code that may be readable but whose purpose is unclear doesn’t make anybody happy. If it seems to have a bug or something needs to change functionally, being able to understand the original intention of the developer that wrote it is going to be crucial. And that doesn’t necessarily mean that that’s what the code is doing right now. The issues usually come from improper naming, the way the code is organized and the lack of functional documentation at the code level. Especially that last category is often difficult. Either it’s too technical or repeats unnecessary context.</p>
</li>
<li>
<p>It becomes even more of a challenge if there’s a bug in the automated tests themselves. When a test fails, your first thought is usually that the code-under-test must behave incorrectly. But I’m not making things up if I say that I’ve encountered quite some test cases that were testing the wrong thing. That’s why it’s so important to treat test code as real code including a crystal clear intention.</p>
</li>
<li>
<p>Talking about intention, another issue I see a lot is that code is not written in such a way that it clarifies the algorithm that a method tries to execute. Methods, functions and operations are ways to encapsulate complexity and provide an abstraction that makes it easy to work with. But as soon as you want to understand its behavior, it’s going to be important to “see” the algorithm. Mixing invocations to other methods and low-level statements isn’t helping with that.</p>
</li>
<li>
<p>Code documentation is a sensitive topic as well and often a source of passionate debates. I generally see a lot of extremes here. On one side you’ll find code bases that have no code-level documentation at all. Strange arguments I’m often hearing are “code <em>is</em> the documentation” or “when the code changes, I have to rewrite the documentation”. But the other side, where people dogmatically require all code to be documented, isn’t the right way either. Just like everything else, you’ll need to find the right balance. Understanding the conceptual difference between intention and implementation details is an important capability here.</p>
</li>
<li>
<p>An issue that’s much simpler to fix is when developers force you to scroll up and down the file just to understand the flow of the code. That’s like ordering the paragraphs of a page in a novel in some random order. And yes, I know that analogy isn’t 100% perfect, but I’m sure you get my point.</p>
</li>
<li>
<p>Another practice that has both lovers and haters is refactoring. I’ve seen teams where refactoring is always under pressure from being abandoned because developers have to request permission from their product owner or manager before doing any kind of refactoring. And even though I strongly disagree with that, I’ve seen developers misunderstanding the difference between refactoring and redesign. The former should be part of their day-to-day work, but the latter is something that needs to be planned. So you can imagine this doesn’t help establish trust between developers and management.</p>
</li>
<li>
<p>With respect to class design I’ve seen my fair share of bad examples. Having a derived class that only makes sense after you jump around through multiple layers of base and derived classes is one of those. But equally worrisome are classes with names like <code class="language-plaintext highlighter-rouge">Manager</code> or <code class="language-plaintext highlighter-rouge">Helper</code>. I see those as an excuse to have some place to group all kinds of unrelated technical functionality. And what about all these “senior” developers who love to introduce layers, abstractions and design patterns “because that’s SOLID” or “to be prepared for the future”? You’ll immediately recognize those by looking at the number of interfaces whose name is just the implementing class prefixed with <code class="language-plaintext highlighter-rouge">I</code>.</p>
</li>
<li>
<p>And it doesn’t stop with design. I’ve seen plenty of code smells that make me frown. Think of deeply nested structures, often found in very long methods, full of “magic numbers” and variables with cryptic names. And if you’re unlucky and somebody has been dogmatically using <code class="language-plaintext highlighter-rouge">var</code> in C#, those cryptic names are the only thing you have. And what about boolean parameters that make it impossible to understand a call’s purpose, e.g. what does <code class="language-plaintext highlighter-rouge">true</code> or <code class="language-plaintext highlighter-rouge">false</code> really mean? And don’t forget the need for multiple dots to de-reference a deeply nested property or field; that’s a sign of bad design too.</p>
</li>
<li>
<p>Are you using <a href="https://www.sonarsource.com/products/sonarqube/">SonarQube</a> to automatically verify your C#, TypeScript or JavaScript code against best practices from the industry? And if so, did you go beyond the default rule set, or did you just disable most of the rules? What about using <a href="https://eslint.org/">ESLint</a> for JavaScript/TypeScript? Or StyleCop/FxCop and the <a href="https://github.com/bkoelman/CSharpGuidelinesAnalyzer">free Roslyn</a> analyzers for C#? Do you use an IDE that <a href="https://www.jetbrains.com/rider/">understands those</a>, provides automatic formatting based on <code class="language-plaintext highlighter-rouge">.editorconfig</code> or a <a href="https://prettier.io/">Prettier</a> configuration, and may even help you write clean code with its out-of-the-box features? Considering the many tools at our disposal that can detect code smells, design smells and even architectural smells, it’s surprising that developers still produce so many of them.</p>
</li>
</ul>
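<p>The boolean-parameter smell from that list is one of the easiest to fix mechanically. Here’s a minimal sketch (all names invented for illustration) that replaces an unreadable <code class="language-plaintext highlighter-rouge">true</code>/<code class="language-plaintext highlighter-rouge">false</code> argument with an enum, so the call site explains itself:</p>

```csharp
using System;

public enum Visibility
{
    Public,
    Internal
}

public static class DocumentService
{
    // Before: CreateDocument("report.pdf", true) — what does `true` mean?
    // After: the enum makes the intention explicit at every call site.
    public static string CreateDocument(string name, Visibility visibility)
    {
        return $"Created {name} with {visibility} visibility";
    }
}

public static class Program
{
    public static void Main()
    {
        // Reads as plain English; no need to look up the parameter's meaning
        Console.WriteLine(DocumentService.CreateDocument("report.pdf", Visibility.Internal));
    }
}
```

The same trick works for any pair of boolean parameters that tend to travel together: a two-value enum per concept beats a row of anonymous <code class="language-plaintext highlighter-rouge">true, false, true</code> arguments.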
<p>Those are just a few of the many, sometimes dogmatic, best practices from our industry, and a lot has been said and written about them. Finding a good trade-off is hard, in particular because of the many strong opinions that senior developers often have. Waving around yet another book doesn’t help there, so I totally get that these are sensitive topics within any organization.</p>
<p>Do you recognize any of these smells? What do you do to mitigate them? And what about your own smells? Care to share them here? Let me know by commenting below.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET, as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, and <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections, and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>
<h1><a href="https://www.continuousimprover.com/2023/03/test-naming">How I keep my test names short and functional</a> (2023-03-20)</h1>
<p>There are plenty of topics in software development land that cause passionate (and sometimes almost religious) debates. Tabs vs spaces is most definitely one of them. And in .NET land there’s always some team debating the use of <code class="language-plaintext highlighter-rouge">_</code> in private fields and <code class="language-plaintext highlighter-rouge">var</code> for local variables. Fortunately, naming unit tests isn’t such a hot potato, and a lot of variations are widely accepted. But I still care about having functional and self-explanatory names.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/naming-tests.png" class="align-center" /></p>
<p>I never really liked the idea that (I think) was introduced by Roy Osherove in his famous book <a href="https://www.artofunittesting.com/">The Art of Unit Testing</a>. He proposed to use a convention like <code class="language-plaintext highlighter-rouge">[UnitOfWork]_[StateUnderTest]_[ExpectedBehavior]</code>. This results in test names like <code class="language-plaintext highlighter-rouge">Sum_NegativeNumberAs1stParam_ExceptionThrown</code>. Although I totally dig the use of underscores, those names are way too technical and cryptic to me. As I wrote in an <a href="/2016/11/the-three-mental-modes-of-working-with.html">earlier post</a>, I see the name of the test as an important piece of information. For me, it should explain the functional scenario that the test is trying to assert and not a description of how the test behaves.</p>
<p>For a long time I have preferred the more functional <code class="language-plaintext highlighter-rouge">When_[scenario]_it_should_[expected_behavior]</code>. For example <code class="language-plaintext highlighter-rouge">When_the_same_objects_are_expected_to_be_the_same_it_should_not_fail</code>. It’s functionally correct, but I’m seeing this style more and more as noisy and verbose. So let’s look at how we can make it shorter. Consider the tests below from <a href="https://fluentassertions.com/">Fluent Assertions</a>:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">ReferenceTypeAssertionsSpecs</span>
<span class="p">{</span>
<span class="p">[</span><span class="n">Fact</span><span class="p">]</span>
<span class="k">public</span> <span class="k">void</span> <span class="nf">When_the_same_objects_are_expected_to_be_the_same_it_should_not_fail</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">// Arrange</span>
<span class="kt">var</span> <span class="n">subject</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ClassWithCustomEqualMethod</span><span class="p">(</span><span class="m">1</span><span class="p">);</span>
<span class="kt">var</span> <span class="n">referenceToSubject</span> <span class="p">=</span> <span class="n">subject</span><span class="p">;</span>
<span class="c1">// Act / Assert</span>
<span class="n">subject</span><span class="p">.</span><span class="nf">Should</span><span class="p">().</span><span class="nf">BeSameAs</span><span class="p">(</span><span class="n">referenceToSubject</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Given that test is part of the <code class="language-plaintext highlighter-rouge">ReferenceTypeAssertionsSpecs</code> class (and notice the <code class="language-plaintext highlighter-rouge">Specs</code> postfix I’m always using), you can safely assume that this is one of the test sets covering the fluent APIs that deal with reference types. In this particular example, it’s covering the <code class="language-plaintext highlighter-rouge">BeSameAs</code> method, and that’s just one of many.</p>
<p>A common pattern I’ve been adopting lately is to group related tests using a nested class, essentially providing more context about that group. By doing that, I can immediately remove some superfluous information from the test name: the fact that we’re expecting objects to be “the same”.</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">ReferenceTypeAssertionsSpecs</span>
<span class="p">{</span>
<span class="k">public</span> <span class="k">class</span> <span class="nc">BeSameAs</span>
<span class="p">{</span>
<span class="p">[</span><span class="n">Fact</span><span class="p">]</span>
<span class="k">public</span> <span class="k">void</span> <span class="nf">When_two_variables_are_referring_to_the_same_object_it_should_not_fail</span><span class="p">()</span>
<span class="p">{}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Although the names read much more naturally, given the context of <code class="language-plaintext highlighter-rouge">BeSameAs</code>, they are still quite long. So let’s try some versions where we drop terms like <em>should</em> and/or <em>when</em>:</p>
<ol>
<li><code class="language-plaintext highlighter-rouge">Should_succeed_when_the_variables_refer_to_the_same_object</code></li>
<li><code class="language-plaintext highlighter-rouge">Succeeds_when_the_variables_refer_to_the_same_object</code></li>
<li><code class="language-plaintext highlighter-rouge">The_variables_must_refer_to_the_same_object</code></li>
<li><code class="language-plaintext highlighter-rouge">Must_refer_to_the_same_object</code></li>
</ol>
<p>I think the fourth is a bit too short, so let’s continue with option 3. But what if there’s another test that tests the opposite and that throws if the two variables do not point to the same object? Technically, <code class="language-plaintext highlighter-rouge">The_variables_must_refer_to_the_same_object</code> would fit here as well, so we need some way to make the distinction between the happy and the unhappy paths. Let’s try to capture the “fact” that we’re really trying to assert here instead of describing the behavior of the test:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">References_to_the_same_object_are_valid</code></li>
<li><code class="language-plaintext highlighter-rouge">References_to_different_objects_are_invalid</code></li>
</ul>
<p>Pretty short, right? And I think it’s concise enough to still sound like proper English. Is it possible to make this shorter? Sure, and I’m convinced I’ll come up with something better in a couple of months. But it’s good enough like this. Naming things remains one of the hardest problems in our profession.</p>
<p>To wrap up this post, here’s the final version:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">ReferenceTypeAssertionsSpecs</span>
<span class="p">{</span>
<span class="k">public</span> <span class="k">class</span> <span class="nc">BeSameAs</span>
<span class="p">{</span>
<span class="p">[</span><span class="n">Fact</span><span class="p">]</span>
<span class="k">public</span> <span class="k">void</span> <span class="nf">References_to_the_same_object_are_valid</span><span class="p">()</span>
<span class="p">{}</span>
<span class="p">[</span><span class="n">Fact</span><span class="p">]</span>
<span class="k">public</span> <span class="k">void</span> <span class="nf">References_to_different_objects_are_invalid</span><span class="p">()</span>
<span class="p">{}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>So what naming conventions do you use for your tests? And what do you think of what I’m proposing here? Let me know by commenting below.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET, as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, and <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections, and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>
<h1><a href="https://www.continuousimprover.com/2023/03/test-http-contracts">How to properly test your HTTP API contracts in .NET</a> (2023-03-13)</h1>
<p>As I’m a Test Driven Development practitioner (with conviction), I regularly have to create automated tests that include the HTTP API of a module or component. Whether you call that a unit, integration or component test is debatable, but beside the point of this post. What I do care about is that your tests only interact with the surface area designed for your production code. So if a particular part of your system is only invoked through an HTTP API, then your test should be doing the same thing. Directly invoking a method on an ASP.NET <code class="language-plaintext highlighter-rouge">Controller</code> class would violate that idea.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/api-testing.png" class="align-center" /></p>
<p><a href="/2021/10/laws-test-driven-development.html">Another important principle</a> which I follow in my testing endeavors is to ensure you only assert what is relevant for that particular test case. If you expect an exception, ensure it is the right type of exception, that its properties have the right value, and that its message matches your expectation. But with respect to the exception message, you only want to assert the relevant parts of that message. <a href="https://fluentassertions.com/">Fluent Assertions</a>’ <code class="language-plaintext highlighter-rouge">WithMessage</code> assertion takes a wildcard for that exact reason. Similarly, if your API is supposed to return a particular HTTP error code, only assert that it does and ignore the payload. If your test covers a particular path where only a specific property of the body is relevant, ignore the rest. This avoids failing tests for unrelated issues.</p>
<p>I’ve seen a lot of developers that reuse the JSON-serializable type from the production code in the test code to deserialize from. A common argument developers give for that is that it makes the code more refactoring-friendly. In other words, changing the name of a property on that type (often used as a Data Transfer Object) would not break the test. But in my opinion, that <em>should</em> break the test. The route, the headers and the specific JSON returned by an HTTP API <em>are</em> the contract, and thus should be treated as such.</p>
<p>But how do you do that? There are two common ways: using raw JSON, or deserializing the body to an anonymous type with a particular structure. Using raw JSON is the purest and most thorough option, but it becomes ugly when you only want to assert that the relevant parts match the expectation.</p>
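<p>To see why the raw-JSON route gets clunky, here’s a minimal BCL-only sketch (the body is a made-up example; in a real test it would come from the HTTP call) that digs a single relevant property out of the response with <code class="language-plaintext highlighter-rouge">JsonDocument</code>:</p>

```csharp
using System;
using System.Text.Json;

public static class Program
{
    public static void Main()
    {
        // Hypothetical response body with more properties than the test cares about
        string body = "[{\"State\":\"Active\",\"Count\":1,\"LastModified\":\"2023-03-01\"}]";

        using JsonDocument document = JsonDocument.Parse(body);

        // Manually navigating to the one property we want to assert...
        string state = document.RootElement[0].GetProperty("State").GetString();

        // ...works, but every additional property means more of this plumbing
        Console.WriteLine(state);
    }
}
```

That plumbing multiplies with every element and property you care about, which is exactly what the anonymous-type approach avoids.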
<p>Deserializing to an anonymous type can be done like this when you use <code class="language-plaintext highlighter-rouge">Newtonsoft.Json</code> and Fluent Assertions:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">IHost</span> <span class="n">host</span> <span class="p">=</span> <span class="nf">GetTestClient</span><span class="p">();</span>
<span class="n">HttpResponseMessage</span> <span class="n">response</span> <span class="p">=</span> <span class="k">await</span> <span class="n">host</span><span class="p">.</span><span class="nf">GetAsync</span><span class="p">(</span>
<span class="s">$"http://localhost/statistics/metrics/CountsPerState?country=</span><span class="p">{</span><span class="n">countryCode</span><span class="p">}</span><span class="s">&amp;kind=Filming"</span><span class="p">);</span>
<span class="kt">string</span> <span class="n">body</span> <span class="p">=</span> <span class="k">await</span> <span class="n">response</span><span class="p">.</span><span class="n">Content</span><span class="p">.</span><span class="nf">ReadAsStringAsync</span><span class="p">();</span>
<span class="kt">var</span> <span class="n">expectation</span> <span class="p">=</span> <span class="k">new</span><span class="p">[]</span>
<span class="p">{</span>
<span class="k">new</span>
<span class="p">{</span>
<span class="n">State</span> <span class="p">=</span> <span class="s">"Active"</span><span class="p">,</span>
<span class="n">Count</span> <span class="p">=</span> <span class="m">1</span>
<span class="p">}</span>
<span class="p">};</span>
<span class="kt">var</span> <span class="n">actual</span> <span class="p">=</span> <span class="n">JsonConvert</span><span class="p">.</span><span class="nf">DeserializeAnonymousType</span><span class="p">(</span><span class="n">body</span><span class="p">,</span> <span class="n">expectation</span><span class="p">);</span>
<span class="n">actual</span><span class="p">.</span><span class="nf">Should</span><span class="p">().</span><span class="nf">BeEquivalentTo</span><span class="p">(</span><span class="n">expectation</span><span class="p">);</span>
</code></pre></div></div>
<p>What we’re doing here is setting up the <code class="language-plaintext highlighter-rouge">expectation</code> with specific values and then using <code class="language-plaintext highlighter-rouge">DeserializeAnonymousType</code> to tell <code class="language-plaintext highlighter-rouge">Newtonsoft.Json</code> to deserialize the JSON into an anonymous object whose structure is defined by that same <code class="language-plaintext highlighter-rouge">expectation</code> object. We complete the test by using <code class="language-plaintext highlighter-rouge">BeEquivalentTo</code> to do a deep comparison between <code class="language-plaintext highlighter-rouge">expectation</code> and <code class="language-plaintext highlighter-rouge">actual</code>, where the <code class="language-plaintext highlighter-rouge">expectation</code> defines the properties we care about.</p>
<p>If you prefer <code class="language-plaintext highlighter-rouge">System.Text.Json</code>, we can achieve the same result like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">IHost</span> <span class="n">host</span> <span class="p">=</span> <span class="nf">GetTestClient</span><span class="p">();</span>
<span class="n">HttpResponseMessage</span> <span class="n">response</span> <span class="p">=</span> <span class="k">await</span> <span class="n">host</span><span class="p">.</span><span class="nf">GetAsync</span><span class="p">(</span>
<span class="s">$"http://localhost/statistics/metrics/CountsPerState?country=</span><span class="p">{</span><span class="n">countryCode</span><span class="p">}</span><span class="s">&amp;kind=Filming"</span><span class="p">);</span>
<span class="kt">string</span> <span class="n">body</span> <span class="p">=</span> <span class="k">await</span> <span class="n">response</span><span class="p">.</span><span class="n">Content</span><span class="p">.</span><span class="nf">ReadAsStringAsync</span><span class="p">();</span>
<span class="kt">var</span> <span class="n">expectation</span> <span class="p">=</span> <span class="k">new</span><span class="p">[]</span>
<span class="p">{</span>
<span class="k">new</span>
<span class="p">{</span>
<span class="n">State</span> <span class="p">=</span> <span class="s">"Active"</span><span class="p">,</span>
<span class="n">Count</span> <span class="p">=</span> <span class="m">1</span>
<span class="p">}</span>
<span class="p">};</span>
<span class="kt">object</span> <span class="n">actual</span> <span class="p">=</span> <span class="n">JsonSerializer</span><span class="p">.</span><span class="nf">Deserialize</span><span class="p">(</span><span class="n">body</span><span class="p">,</span> <span class="n">expectation</span><span class="p">.</span><span class="nf">GetType</span><span class="p">(),</span> <span class="k">new</span> <span class="n">JsonSerializerOptions</span>
<span class="p">{</span>
<span class="n">PropertyNameCaseInsensitive</span> <span class="p">=</span> <span class="k">true</span>
<span class="p">});</span>
<span class="n">actual</span><span class="p">.</span><span class="nf">Should</span><span class="p">().</span><span class="nf">BeEquivalentTo</span><span class="p">(</span><span class="n">expectation</span><span class="p">);</span>
</code></pre></div></div>
<p>You could encapsulate most of this logic into a custom <code class="language-plaintext highlighter-rouge">BeEquivalentTo</code> that acts on an <code class="language-plaintext highlighter-rouge">HttpResponseMessage</code>, like I did <a href="https://github.com/dennisdoomen/EffectiveTddDemo/blob/master/Tests/DocumentManagement.Specs/13_SimplerDeserialization_NewtonSoft/HttpClientExtensions.cs#L8">here</a> for <code class="language-plaintext highlighter-rouge">Newtonsoft.Json</code> and <a href="https://github.com/dennisdoomen/EffectiveTddDemo/blob/master/Tests/DocumentManagement.Specs/14_SimplerDeserialization_SystemText/HttpClientExtensions.cs#L8">here</a> for <code class="language-plaintext highlighter-rouge">System.Text.Json</code>. But you could also use the <code class="language-plaintext highlighter-rouge">Should().BeAs()</code> provided by a community library called <a href="https://github.com/adrianiftode/FluentAssertions.Web#fluentassertionsweb-examples">FluentAssertions.Web</a>.</p>
<p>What do you think about this approach? Do agree with the principles? And do you test your HTTP APIs like this too? Let me know by commenting below.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET, as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, and <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections, and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>
<h1><a href="https://www.continuousimprover.com/2023/03/docker-in-tests">Using docker to write automated tests against a real database</a> (2023-03-05)</h1>
<p>Whether you work with SQL Server, PostgreSQL or some other database that can’t run in-memory, you inevitably end up needing automated tests that cover the specific queries and database operations your product relies on. Whether you call those “unit” or “integration” tests is irrelevant to me, but I often see developers introduce abstractions like a generic <code class="language-plaintext highlighter-rouge">IPermitRepository</code>, an <code class="language-plaintext highlighter-rouge">IDatabaseManager</code> (with <code class="language-plaintext highlighter-rouge">IDatabaseManagerFactory</code>), or worse, their own expression-based abstraction on top of LINQ.
Introducing abstractions is not a bad thing by itself and can help create natural seams in your architecture. But I’ve seen plenty of examples where people overengineer their code “because of SOLID”.</p>
<p><img src="https://www.continuousimprover.com/assets/images/posts/2023/docker.png" class="align-center" /></p>
<p>That being said, in the ideal world you want to make sure your tests also cover the database. Entity Framework supports <a href="https://learn.microsoft.com/en-us/ef/core/testing/">both a simplified in-memory provider and a built-in SQLite provider</a> to help you with that. And using SQLite is not a bad idea at all. In a previous life, where we were using the excellent <a href="https://nhibernate.info/">NHibernate</a>, we did the same thing. But again, it’s not the real deal, and it’s 2023, so what if you could run SQL Server in a Linux docker container just before your tests start?</p>
<p>With the open-source library <a href="https://dotnet.testcontainers.org/">Testcontainers for .NET</a>, this becomes rather trivial. Consider the below code snippet.</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">_sqlServerContainer</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">ContainerBuilder</span><span class="p">()</span>
<span class="p">.</span><span class="nf">WithImage</span><span class="p">(</span><span class="s">"mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04"</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithPortBinding</span><span class="p">(</span><span class="m">1433</span><span class="p">,</span> <span class="n">assignRandomHostPort</span><span class="p">:</span> <span class="k">true</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithEnvironment</span><span class="p">(</span><span class="s">"ACCEPT_EULA"</span><span class="p">,</span> <span class="s">"Y"</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithEnvironment</span><span class="p">(</span><span class="s">"SA_PASSWORD"</span><span class="p">,</span> <span class="n">Password</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithCleanUp</span><span class="p">(</span><span class="n">cleanUp</span><span class="p">:</span> <span class="k">true</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithWaitStrategy</span><span class="p">(</span><span class="n">Wait</span><span class="p">.</span><span class="nf">ForUnixContainer</span><span class="p">()</span>
<span class="p">.</span><span class="nf">UntilOperationIsSucceeded</span><span class="p">(()</span> <span class="p">=></span> <span class="nf">HealthCheck</span><span class="p">(</span><span class="n">CancellationToken</span><span class="p">.</span><span class="n">None</span><span class="p">).</span><span class="nf">GetAwaiter</span><span class="p">().</span><span class="nf">GetResult</span><span class="p">(),</span>
<span class="m">10</span><span class="p">))</span>
<span class="p">.</span><span class="nf">Build</span><span class="p">();</span>
<span class="k">await</span> <span class="n">_sqlServerContainer</span><span class="p">.</span><span class="nf">StartAsync</span><span class="p">();</span>
</code></pre></div></div>
<p>This will build and start a new Ubuntu Linux container running SQL Server 2019, with internal port 1433 mapped to the first available host port. Then it will wait until the custom <code class="language-plaintext highlighter-rouge">HealthCheck</code> operation succeeds, trying at most 10 times. In our case, we use the <code class="language-plaintext highlighter-rouge">HealthCheck</code> method to try to connect to the container-hosted SQL Server instance. And how do we get the port to connect to? Simple: using <code class="language-plaintext highlighter-rouge">_sqlServerContainer.GetMappedPublicPort(1433)</code>.</p>
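<p>Building the connection string is then just a matter of plugging in that mapped port. A small sketch (the port value and password are placeholders; in real code the port comes from <code class="language-plaintext highlighter-rouge">GetMappedPublicPort(1433)</code>):</p>

```csharp
using System;

public static class Program
{
    public static string BuildConnectionString(ushort hostPort, string password)
    {
        // In real code: ushort hostPort = _sqlServerContainer.GetMappedPublicPort(1433);
        return $"Server=localhost,{hostPort};Database=master;User Id=sa;" +
               $"Password={password};TrustServerCertificate=True";
    }

    public static void Main()
    {
        // 49160 is an arbitrary example of a randomly assigned host port
        Console.WriteLine(BuildConnectionString(49160, "<your-sa-password>"));
    }
}
```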
<p>And what about cleaning up after ourselves? Well, another nice feature is that <code class="language-plaintext highlighter-rouge">WithCleanUp</code> will launch a second container whose only task is to monitor the first one for inactivity. If the SQL Server container has been inactive for a certain amount of time, all containers will be shut down automatically.</p>
<p>To use this in your tests, we’ve created a nice wrapper around all of this called <code class="language-plaintext highlighter-rouge">DockerMsSqlServerDatabase</code>. It’ll create and start the container, create an empty database, expose the connection string, and delete the database on dispose. If another test needs a database before the inactivity timeout expires, it’ll reuse the running container; otherwise a new one will be created on another free port. You can find the full implementation <a href="https://gist.github.com/dennisdoomen/9a97e07a4c4a8f2eef3af5ac293d6759">here</a>. With that, you can use the wrapper in your tests like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">DiagramIndexServiceSpecs</span> <span class="p">:</span> <span class="n">IAsyncLifetime</span>
<span class="p">{</span>
<span class="k">private</span> <span class="n">DockerMsSqlServerDatabase</span> <span class="n">_databaseServer</span><span class="p">;</span>
<span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">InitializeAsync</span><span class="p">()</span>
<span class="p">{</span>
<span class="n">_databaseServer</span> <span class="p">=</span> <span class="k">await</span> <span class="n">DockerMsSqlServerDatabase</span><span class="p">.</span><span class="nf">Create</span><span class="p">();</span>
<span class="p">}</span>
<span class="k">public</span> <span class="k">async</span> <span class="n">Task</span> <span class="nf">DisposeAsync</span><span class="p">()</span>
<span class="p">{</span>
<span class="k">await</span> <span class="n">_databaseServer</span><span class="p">.</span><span class="nf">DisposeAsync</span><span class="p">();</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>So how do you run automated tests that need a real database? Do you think my proposal is useful? Let me know by commenting below.</p>
<h2 id="about-me">About me</h2>
<p>I’m a Microsoft MVP and Principal Consultant at <a href="https://avivasolutions.nl/">Aviva Solutions</a> with 26 years of experience under my belt. As a coding software architect and/or lead developer, I specialize in building or improving (legacy) full-stack enterprise solutions based on .NET, as well as providing coaching on all aspects of designing, building, deploying and maintaining software systems. I’m the author of <a href="https://www.fluentassertions.com">Fluent Assertions</a>, a popular .NET assertion library, and <a href="https://www.liquidprojections.net">Liquid Projections</a>, a set of libraries for building Event Sourcing projections, and I’ve been maintaining <a href="https://www.csharpcodingguidelines.com">coding guidelines for C#</a> since 2001. You can find me on <a href="https://twitter.com/ddoomen">Twitter</a> and <a href="https://mastodon.social/@ddoomen">Mastodon</a>.</p>