Thursday, October 06, 2016

The magic of keeping a band of developers together

As I work as a consultant for Aviva Solutions, and the nature of my job is to be involved in moderately long-running client projects, I don't get to come to the office that often. And if I do, it's on different days of the week. Over the last year or so, our locations in Koudekerk aan de Rijn and Eindhoven have grown by almost 10 new developers. Since we hadn't had any company-wide meetings since May, it should not come as a surprise that I arrived at the office one day, only to conclude that I could not recall the names of those new guys. When our company was only 15 people strong, any new colleague joining was a big thing. And it's still a big thing. It's just becoming less visible. Sound familiar? It's even more embarrassing considering that I'm usually the one who creates the new GitHub and Flowdock accounts. But even if I do remember a new name, I often can't map it to somebody's face. That's why I ask all new employees to add a little picture of themselves to their Flowdock profile. In a way, I'm cheating a bit.

Aviva Summer Event Ibiza

A similar challenge happens on the other side of the hiring spectrum. When somebody joins a company like Aviva, it's important for that person to feel valuable and be able to identify with the company's culture. The only way to do that is to engage with your coworkers, find out who knows what, understand the chain of command (which we don't have), and surface the areas where you can add value. The problem in 2016 is that competitors are looking for people as well, and recruiters are getting more aggressive by the day. So how do you keep a group of people like that together? Sure, you can offer them more money, give them expensive company cars, laptops and phones. But that's never really going to help. If they don't feel connected to the company, they might leave for the first deal they get.

So how do we stay connected? Well, twice a year, we have company-wide meetings, either at our HQ, our office in Eindhoven, a restaurant or some other venue. This usually involves a formal part where the two owners and the social committee share an update on sales, running projects, HR, the financial outlook and any upcoming social events. Then we have dinner, an occasional drink and some kind of activity (for those that want that). For example, when the Eindhoven office was just opened, the meet-up was organized at that office, and all co-workers were offered an overnight stay to go have a fun night out in downtown Eindhoven. This not only allowed the people who joined the Eindhoven office to meet the others, but it also ensured that everybody had seen the new office and knew where to find it. We really encourage people to work in a different office occasionally, whether you're involved in an internal project or working for a client.

We also regularly get together for pizza sessions where we exchange project experiences, explore new technologies and run trial talks for public events. These are very informal evenings where everybody can share their findings or have discussions, regardless of whether you have presentation skills, are a senior developer or have just joined the company. Quite often, these little evenings are visited by colleagues from other companies or people who just happened to hear about the topic (we usually tweet and post on Facebook about them). Sometimes, these events turn into something bigger when we work together with the .NET community to organize public events.

I particularly like Flowdock as a low-threshold collaboration tool that is accessible as a web site, a desktop app or a smartphone app. We have different channels (or flows) for trolling around, getting technical support, or having deep discussions on technical topics. Next to that, all our projects have dedicated flows, so everybody can read along and learn about the daily troubles and tribulations of their co-workers, even if they're stationed at a client's office. Flowdock is probably the most engaging platform I've come across. Neither Skype, Jabber nor any other platform has helped us so much to keep in touch with each other. It has also allowed us to avoid those long email threads that nobody wants. And since we heavily use GitHub for our projects' sources, we can see what's going on directly from inside Flowdock.

Now, all of this helps a lot to keep the group together, but the ultimate trick to make this group a team is to stuff the entire company in a plane and fly us to a warm place in Spain, a nice city like Prague or, like this year's 10-year-anniversary special edition trip, Ibiza. Yes, it sounds like pure luxury and spoiling your employees rotten. And yes, we did have a lot of fun, took a trip on a catamaran and explored the island driving around in those old 2CVs. But for 12 people, this was the first time they joined us on our annual trip. It allowed them to get to know their co-workers, find people with similar interests, share some frustration about that last project, get advice on how to resolve a technical or other work-related challenge, or receive tips on advancing their careers. Some would even use the weekend to debate technical solutions or the latest technology hypes. Nonetheless, the point is that before that weekend they were just employees. After that weekend they were colleagues, and in some unique cases, friends. What more can you expect from a company?

Aviva Summer Event Ibiza

So what do you think? Are you a junior, medior or senior .NET or front-end developer? Did you just graduate from university or a polytechnic college? Does a company like this appeal to you? Comment below, or even better, contact me by email, Twitter or phone to visit one of the upcoming events or join us for a coffee and taste the unique atmosphere of an office with passionate people….

Bonus Content: We’ve put together a little compilation of our trip. Check it out on YouTube.

Tuesday, August 30, 2016

Continuous Delivery within the .NET realm

Continuous what?

Well, if you browse the internet regularly, you will encounter two different terms that are used rather inconsistently: Continuous Delivery and Continuous Deployment. In my words, Continuous Delivery is a collection of various techniques, principles and tools that allow you to deploy a system into production with a single press of a button. Continuous Deployment takes that to the next level by completely automating the process of putting code changes that were committed to source control into production, all without human intervention. These concepts are not trivial to implement and involve both technological innovations as well as some serious organizational changes. In most projects involving the introduction of Continuous Delivery, an entire cultural shift is needed. This requires some great communication and coaching skills. But sometimes it helps to build trust within the organization by showing the power of technology. So let me use this post to highlight some tools and techniques that I use myself.

What do you need?
As I mentioned, Continuous Delivery involves a lot more than just development effort. Nonetheless, these are a few of the practices I believe you need to be successful.

  • As much of your production code as possible must be covered by automated unit tests. One of the most difficult parts of that is determining the right scope for those tests. Practicing Test Driven Development (TDD), a test-first design methodology, can really help you with this. After trying both traditional unit testing as well as TDD, I can tell you that it is really hard to add maintainable and fast unit tests after you've written your code.
  • If your system consists of multiple distributed subsystems that can only be tested after they've been deployed, then I would strongly recommend investing in acceptance tests. These 'end-to-end' tests should cover a single subsystem and use test stubs to simulate the interaction with the other systems.
  • Any manual testing should be banned. Period. Obviously I realize that this isn't always possible due to legacy reasons. So if you can't do that for certain parts of the system, document which parts and do a short analysis of what is blocking you.
  • A release strategy as well as a branching strategy are crucial. Such a strategy defines the rules for shipping (pre-)releases, how to deal with hot-fixes, when to apply labels, and what version-numbering scheme to use.
  • Build artifacts such as DLLs or NuGet packages should be versioned automatically without the involvement of any development effort.
  • During the deployment, the administrator often has to tweak web/app.config settings such as database connection strings and other infrastructure-specific settings. This has to be automated as well, preferably by parametrizing deployment builds.
  • Build processes, if they exist at all, are quite often tightly integrated with build engines like Microsoft's Team Build or JetBrains' TeamCity. But many developers forget that the build script changes almost as often as the code itself. So in my opinion, the build script itself should be part of the same branching strategy that governs the code and be independent of the build product. This allows you to commit any changes needed to the build script together with the actual feature. An extra benefit of this approach is that developers can test the build process locally.
  • Few people are loathed more by developers than DBAs. A DBA who needs to manually review and apply database schema changes is a frustrating bottleneck that makes true agile development impossible. Instead, use a technique where the system uses metadata to automatically update the database schema during the deployment.
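
To make the configuration point above concrete, here is a minimal sketch of a parametrized deployment step. The file names, the placeholder syntax and the connection string are all invented for the illustration; in a real pipeline the value would arrive as a build-engine parameter rather than a hard-coded variable.

```shell
#!/bin/sh
# Hypothetical sketch: fill environment-specific placeholders in a
# Web.config template at deployment time. Placeholder syntax and file
# names are assumptions, not tied to any particular build engine.
set -e

# The template is committed to source control next to the code.
cat > Web.config.template <<'EOF'
<configuration>
  <connectionStrings>
    <add name="Main" connectionString="__DB_CONNECTION__" />
  </connectionStrings>
</configuration>
EOF

# In a real pipeline this value would come in as a build parameter.
DB_CONNECTION="Server=prod-sql01;Database=Shop;Integrated Security=true"

# Substitute the placeholder; plain sed keeps the tooling footprint small.
sed "s|__DB_CONNECTION__|$DB_CONNECTION|" Web.config.template > Web.config

grep "connectionString" Web.config
```

Because the substitution is scripted, the same deployment build can target any environment by changing only its parameters, which removes the error-prone manual editing step.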

What tools are available for this?

Within the .NET open-source community a lot of projects have emerged that have revolutionized the way we build software.

  • OWIN is an open standard for building components that expose some kind of HTTP end-point and that can be hosted anywhere. WebAPI, RavenDB and ASP.NET Core MVC are all OWIN-based, which means you can build NuGet packages that expose HTTP APIs and host them in IIS, a Windows Service or even a unit test, without the need to open a port at all. Since you have full control of the internal HTTP pipeline, you can even add code to simulate network connectivity issues or high-latency networks.
  • Git is much more than a version control system. It changes the way developers work at a fundamental level. Many of the more recent tools, such as those for automatic versioning and generating release notes, have been made possible by Git. Git even triggered de facto release strategies such as GitFlow and GitHub Flow that directly align with Continuous Delivery and Continuous Deployment. In addition to that, online services like GitHub and Visual Studio Team Services add concepts like Pull Requests that are crucial for scaling software development departments.
  • xUnit is a parallel-executing unit test framework that will help you build software that runs well in highly concurrent systems. Just try converting existing unit tests built with more traditional test frameworks like MSTest or NUnit to xUnit. It'll surface all kinds of concurrency issues that you normally wouldn't detect until you run your system in production under a high load.
  • Although manual testing of web applications should be minimized and superseded by JavaScript unit tests using Jasmine, you can't entirely do without a couple of automated end-to-end tests. These smoke tests can really help you get a good feeling for the overall end-to-end behavior of the system. If this involves automated tests against a browser and you've built them using the Selenium UI automation framework, then BrowserStack would be the recommended online service. It allows you to test your web application against various browser versions and provides excellent diagnostic capabilities.
  • Composing complex systems from small components maintained by individual teams has proven to be a very successful approach for scaling software development. MyGet offers (mostly free) online NuGet-based services that encourage teams to build, maintain and release their own components and libraries, and distribute them using NuGet, all governed by their own release calendar. In my opinion, this is a crucial part of preventing a monolith.
  • PSake is a PowerShell-based, make-inspired build system that allows you to keep your build process in your source code repository, just like all your other code. Not only does this allow you to evolve your build process with new requirements and commit it together with the code changes, it also allows you to test your build in complete isolation. How cool is it to be able to test your deployment build from your local PC?
  • So if your code and your build process can be treated as first-class citizens, why can't we do the same with your infrastructure? You can, provided you take the time to master PowerShell DSC and/or modern infrastructure platforms like Terraform. Does your new release require a newer version of the .NET Framework (and you're not using .NET Core yet)? Simply commit an updated DSC script and your deployment server is re-provisioned automatically.
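
As a small illustration of the Git and versioning points above, the following sketch derives a pre-release version number from the latest tag, in the spirit of what GitVersion automates. The repository, tag and version format are invented for the demo; GitVersion itself offers far richer, branch-aware rules.

```shell
#!/bin/sh
# Sketch: derive an artifact version from Git history instead of from a
# manually maintained number. All repository contents are demo data.
set -e

rm -rf /tmp/version-demo && mkdir /tmp/version-demo && cd /tmp/version-demo
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "v1" > Program.cs && git add . && git commit -qm "Initial commit"
git tag 1.2.0                       # release 1.2.0 shipped from this commit

echo "v2" > Program.cs && git commit -aqm "Post-release fix"

# 'git describe' yields <last-tag>-<commits-since>-g<sha>; turn that
# into a pre-release number suitable for a package feed.
DESCRIBE=$(git describe --tags)
BASE=$(echo "$DESCRIBE" | cut -d- -f1)
COMMITS_SINCE=$(echo "$DESCRIBE" | cut -d- -f2)
VERSION="$BASE-beta.$COMMITS_SINCE"
echo "$VERSION"
```

Every commit now gets a unique, reproducible version without anyone editing a version number by hand, which is exactly the "no development effort" property the versioning practice asks for.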

Where do you start?

By now, it should be clear that introducing Continuous Delivery or Deployment isn't for the faint of heart. And I didn’t even talk about the cultural aspects and the change management skills you need to have for that. On the other hand, the .NET realm is flooded with tools, products and libraries that can help you to move in the right direction. Provided I managed to show you some of the advantages, where do you start?

  • Switch to Git as your source control system. All of the above is quite possible without it, but using Git makes a lot of it a lot easier. Just try to monitor multiple branches and pull requests with Team Foundation Server based on a wildcard specification (hint: you can't).
  • Start automating your build process using PSake or something similar. As soon as you have a starting point, it'll become much easier to add more and more of the build process and have it grow with your code-base.
  • Identify all configuration and infrastructural settings that deployment engineers normally change by hand and add them to the build process as parameters that can be provided by the build engine. This is a major step in removing human errors.
  • Replace any database scripts with some kind of library like Fluent Migrator or the Entity Framework that allows you to update the schema through code. By doing that, you could even decide to support downgrading the schema in case a (continuous) deployment fails.
  • Write so-called characterization tests around the existing code so that you have a safety net for the changes needed to facilitate continuous delivery and deployment.
  • Start the refactoring efforts needed to be able to automatically test more chunks of the (monolithic) system in isolation. Also consider extracting those parts into a separate source control project to facilitate isolated development, team ownership and a custom life cycle.
  • Choose a versioning and release strategy and strictly follow it. Consider automating the version number generation using something like GitVersion.
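
The migration step above can be illustrated with a toy runner. This is only a sketch of the idea behind tools like Fluent Migrator: numbered scripts are applied in order and the applied versions are recorded, so a deployment only runs what is new. A plain text file stands in for the database's version table, and the SQL is never actually executed.

```shell
#!/bin/sh
# Toy migration runner: apply numbered schema scripts exactly once.
# All file names and SQL statements are invented for the illustration.
set -e

rm -rf /tmp/migration-demo && mkdir -p /tmp/migration-demo/migrations
cd /tmp/migration-demo

cat > migrations/001_create_orders.sql <<'EOF'
CREATE TABLE Orders (Id INT PRIMARY KEY);
EOF
cat > migrations/002_add_customer.sql <<'EOF'
ALTER TABLE Orders ADD CustomerId INT;
EOF

touch applied.txt
for script in migrations/*.sql; do
  if ! grep -qx "$script" applied.txt; then
    echo "Applying $script"     # a real runner would execute the SQL here
    echo "$script" >> applied.txt
  fi
done

# Running the loop a second time would apply nothing: deployments stay
# idempotent, so a failed deployment can be retried safely.
wc -l < applied.txt
```

The same bookkeeping is what lets a real migration library support downgrades: each recorded version can carry a corresponding "down" script.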

Let's get started

Are you still building, packaging and deploying your projects manually? How much time have you lost trying to figure out what went wrong, only to find out you forgot some setting or important step along the way? If this sounds familiar, hopefully this post has given you some nice starting points. And if you still have questions, don't hesitate to contact me on Twitter or by reaching out to me at TechDays 2016.

Monday, July 25, 2016

Scaling a growing organization by reorganizing the teams

During this year's QCon conference held in New York, I attended a full-day workshop on the scalability challenges a growing organization faces, hosted by Randy Shoup. In my previous two posts I discussed a model to understand the needs of an organization in its different life phases, as well as a migration strategy for getting from a monolith to a set of well-defined microservices.

The Universal Scalability Law…again

However, Randy also talked about people, or more specifically, how to reorganize the teams for scalability without ignoring the Universal Scalability Law. What this means is that you should be looking for a way to have lots of developers in your organization working on things in isolation (thereby reducing contention) without the need for a lot of communication (a.k.a. coherence). So any form of team structuring that involves a lot of coordination between teams is obviously out of the question, particularly skill-based teams, project-based teams or large teams.

For the same reason, Randy advises against geographically split teams or outsourcing to so-called job shops. Not only do those involve a lot of coordination, but the local conversations also disrupt the melding of a team. Just like Randy, I find face-to-face discussions crucial for effective teams. But if your team is not co-located, those local conversations will never reach the rest of the team. Yes, you may persist in posting a summary of each discussion on some kind of team wiki, Flowdock/Slack or other team collaboration tool, but the others will still miss the discussions that led to that summary. Even a permanent video-conferencing set-up doesn't always solve that, particularly if the people in the team don't share the same native language (which is already a problem for co-located teams).

The ideal team

He also said something about the effect of getting more people into the organization. In his view, 5 people is the ideal team size. That many people can sit around a table, benefit from high-bandwidth communication, and keep roles fluid. When you reach about 20 people, you need structure, which, in turn, can become a trough of productivity and motivation. When you reach 100 people, you must shift your attention from coordinating individuals to coordinating teams. A clear team structure and well-defined responsibilities become critical. Knowing this, it's kind of expected that Randy likes to size his teams using the "2 pizza rule": the number of people you can feed with 2 pizzas. So a team consisting of 4-6 people, in a mix of junior and senior and (obviously) co-located, has his preference.

Ideally, he wants such a team to take ownership of a component or service, including maintenance and support as well as the roadmap for that component or service. This implies that all teams are full-stack from a technological perspective and are capable of supporting their component or service all the way into production. But Randy emphasizes that managers shouldn't see teams like this as software factories. Teams should have an identity and be able to build up pride of ownership. This also implies taking responsibility for the quality of those services. He pointedly mentioned the problem of teams not having the time to do their work right and taking shortcuts because of (perceived) pressure from management or other stakeholders. In his opinion, this is the wrong thing to do, since it means you'll need to do the work twice. The more constrained a team is in time, the more important it is to do things right the first time.

The effect of team structure on architecture

Another argument for his ideas is provided by Conway's Law. Melvin Conway observed that in a lot of organizations the structure of the software system closely follows the structure of the organization. This isn't a big surprise, since quite often, cross-team collaboration requires some kind of agreed way of working, both on the communication level as well as on the technical level. Quite often, architectural seams like API contracts or modules emerge from this. Based on that observation, he advises organizations to structure their teams along the boundaries they want to see in their software architecture. And this is how Conway's Law is usually used. But in this workshop, Randy had already steered us towards the notion of using microservices for scaling the organization. So does Conway's Law apply here? Each team owns one or more microservices, or rather, the API contracts I just discussed. They work in isolation, but negotiate about the features provided by their services. I would say that's a resounding yes!

All things considered, it should not come as a surprise that he believes microservices are the perfect architecture for scaling an organization on both the people and the technical level. So what do you think? Are microservices the right technology to scale teams? Or is this the world upside down? Let me know by commenting below. Oh, and follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.