Friday, November 22, 2013

QCon: The Culture Engine; How to build High Performance Teams

Sometimes you run into a workshop that you didn't have any particular expectations for and then your mind is completely blown. That happened to me on the first day of tutorials here at QCon San Francisco. "By whom?" I hear you asking. Well, by Steve Peha and Amr Elssamadisy and heavily supported by Ashley Johnson as part of their full-day workshop on The Culture Engine.


The goal of this workshop was to learn how to build high-performance teams, something we would all like to achieve, wouldn't we? I myself have learned the hard way that tools, techniques, best practices and a lot of talent aren't enough to reach that goal. In fact, the biggest problems are generally caused by cultural differences that create invisible impediments to open and respectful communication. I don't think that observation is really shocking, but what I liked about this workshop is that those guys provided a model that helps us understand the behavior caused by those very same cultural differences, as well as tools to overcome them.

So what are those impediments? The first one they discussed is the lack of safety. How likely am I to participate in solving a problem if I don't feel safe enough to speak my mind or take chances, whether it involves a colleague or a supervisor? Of the many tools Amr and Steve shared with us, I liked leveraging your vulnerability. For instance, share your origin story. Where did you come from? What are your goals in life? What choices did you make? Sharing information like that breaks down hierarchies, improves respect and levels the playing field. And if that all feels too soft, consider the foreverness of your current situation. Do you really want to be in this situation forever? Considering that might fuel your sense of safety just enough to do something about it.


The second impediment is lack of respect. How likely am I to participate in solving a problem if you think I'm an idiot? Because of my tendency to get frustrated quickly when somebody doesn't meet my expectations, this one resonated strongly with me. An essential mindset when dealing with people is to consider that it is not about how you talk with somebody, but how you see that person. In fact, seeing others as 'other' significantly impedes our ability to work together well. Looking at respect as "re-spect", or "look again", might also trigger you to evaluate the relationship you have with a person once more. Amr claimed that if you don't like somebody, you haven't talked enough with that person and you should retry. That, and the statement that respect is a gift, was already a key takeaway for me.

So what about some tools to help you regain respect for somebody? Well, be curious and try to focus on the similarities. Know their stories and understand their pride. And make sure you look at that person's character without involving the context in which he or she is operating right now. Somebody might expose certain behavior not because of his or her intrinsic character, but because of the circumstances that apply.

The third impediment is lack of intention, or in other words: how likely am I to participate in solving a problem if I don't know what you want? Does that make sense? It happened to me many times; getting disappointed or irritated because a co-worker didn't act or behave as I expected, while never realizing that I never told them what I expected in the first place. Suffice it to say: be open and clear about what you expect from other people.

The last but not least impediment is lack of ownership. How likely am I to participate in solving a problem if I think it's not my problem? Ownership is a very important phase in a relationship. In fact, it is the strongest state of mind in Christopher Avery's model. And although it may potentially be an expensive way to go, it is always worth the investment. It's this impediment that my current project is suffering from. It's really difficult to expect people to take ownership if you don't give them the freedom to make their own choices. Unfortunately there are no easy tools or strategies to overcome this impediment. However, there are ways of working together that increase the chance that somebody does take ownership, such as agreeing to work by agreement…


Working together means that you need to be able to trust each other. Trust generally increases when people do what you expected them to do, but gets destroyed very quickly when they don't. Quite often this fails horribly because people didn't know what was expected from them in the first place. That's why the act of making agreements is so important. And don't you think doing that is much easier without the impediments mentioned earlier? At the same time, making agreements that both parties adhere to might actually increase the respect and ownership those parties feel. That's why Amr and Steve conclude that making, keeping, confronting, and renegotiating agreements is the engine of cultural change.

Before explaining how to make agreements, it's important to understand what is not considered an agreement.

  • Agreements that are supposed to be common sense, or 'default' agreements.
  • Agreements that are implied to be part of a role, situation or circumstance.
  • Expectations you might have that you have never explicitly stated.
  • Coercions such as when your boss tells you "You are going to do that for me, right?".
  • Rules introduced without you being there to agree or disagree with.
  • Missions, visions and value statements.


Knowing that, making agreements shouldn't be a big hassle, as long as you make the intention clear and both parties agree and commit to adhering to it. Doing that with a larger group might be a bit trickier though, and generally involves a lot of fruitless and unconstructive discussion. Worse, you might end up with an agreement that only some people support, while a lot of others (who didn't speak up) sabotage it later on, without you having the possibility to confront them about the agreement. A nice technique to overcome that is the Decider Protocol, one of the Core Protocols. Basically it works by you proposing to agree on something and asking for a vote on it. Voters may ask questions for clarification, but discussion is not allowed. Then, once all questions have been answered, each and every person in the group has three options.

  • Thumbs up; the voter fully agrees with the proposal and will commit to it.
  • Thumbs side-ways; the voter has some doubts, but will go along and commit to whatever the outcome of the voting process is.
  • Thumbs down; the voter either disagrees with the proposal and will provide an alternative, or would like to postpone the discussion to a later moment in time.

That last part is an essential difference from traditional voting techniques; the voter cannot just say no to a proposal. He or she must provide an alternative (and potentially better) proposal instead, or just go along and commit to the outcome. Of course, sometimes voters just need some time to think about the proposal before they can make a final decision. The absence of open-ended discussion is another major advantage, since such discussion is a very common culprit for meetings that end without any decision at all.
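To make the tallying rules concrete, here is a toy sketch of the Decider Protocol's decision logic in JavaScript. This is purely my own illustration, not an official Core Protocols implementation; the function and field names are invented for the example.

```javascript
// Toy sketch of the Decider Protocol's tallying rules (my own illustration).
// Each vote is "up", "sideways" or "down". Any "down" vote blocks the
// proposal, and that voter owes the group an alternative (or a postponement).
function decide(votes) {
  const blockers = votes.filter(v => v.vote === "down");
  if (blockers.length === 0) {
    // Everyone is either fully on board ("up") or willing to commit
    // to the outcome anyway ("sideways"), so the proposal passes.
    return { accepted: true };
  }
  // The proposal does not pass; collect the alternatives the blockers owe.
  return {
    accepted: false,
    alternatives: blockers.map(v => v.alternative || "postpone"),
  };
}

// Example: one sideways vote still lets the proposal pass.
console.log(decide([
  { voter: "Ann", vote: "up" },
  { voter: "Bob", vote: "sideways" },
]).accepted); // true

// Example: a single down vote blocks it, but must come with an alternative.
console.log(decide([
  { voter: "Ann", vote: "up" },
  { voter: "Cee", vote: "down", alternative: "discuss next week" },
]));
```

Note how the "sideways" vote needs no special handling: it only matters socially (the voter commits to the outcome), not in the tally itself.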


As you might know yourself, keeping agreements isn't always that easy. On the other hand, it is much better to tell somebody early that you cannot meet the agreement anymore and renegotiate a new one, than to just break it. The latter will seriously harm the trust the other party has in you, whereas the former might actually signal that you do value the agreement, but just couldn't meet it. Similarly, if the other person is the one not meeting the agreement, you can either ignore it, or confront him or her. You can't correct what you aren't willing to confront. And don't worry, confronting somebody is not the same as a confrontation. You just want to understand why the other party didn't meet the agreement and whether renegotiation is required. To prepare for that, consider the following advice:

  • Make sure you feel safe enough to confront, have respect for the person, feel ownership for the relationship with that person, and make your intention crystal clear.
  • Favor face-to-face communication.
  • Confront in your own unique way (so don't try to behave differently just because that person has a different character).
  • Don't diminish your concern just because you like the person or because of the authority involved.
  • Confronting is personal so don't make it impersonal.
  • Always be closing on a (renewed) agreement.
  • If you can't agree now, agree to agree at another time.
  • Strive for a long-term agreement.

Well, by now, I hope you understand how well safety, respect, ownership and intention work together with making and keeping agreements. It isn't strange that Amr and Steve call this the 'engine of cultural change'. And for me, that really makes sense. Think of it as learning a new habit. You will break some agreements, have compassion for yourself and others, confront and renegotiate those broken agreements quickly, and then get back to work.

Just make it a habit for yourself, for your team and for your organization...

Friday, November 15, 2013

Noticeable quotes from QCon San Francisco 2013

As with any conference, speakers and attendees tend to make a lot of interesting, inspiring or funny statements. QCon 2013 was no different. Some key takeaways:

  • “Can't tune until you have real users.”  (Stephen Rylander)
  • "If someone comes in suggesting that we move a release date… we mock them." at the Facebook release process talk (Rich Schoenrock)
  • "We've really worked hard to get rid of the 'hero' inside Twitter." @raffi on twitter's eng team organization (Pete Soderling)
  • “Gain productivity and performance by tuning the use of your stack, not by changing your stack” (Paul McCallick)
  • “Your product should be cutting edge not the technology” in the GitHub talk (Prateek Jain)
  • ”Anyone that wants to write multithreaded code should not be allowed to.” so true. (Christopher Frenning)
  • “No code should be considered completed until someone else is capable of maintaining it.” (Mauro Botelho)
  • "Roll your own solution to your hardest problems, not your easiest ones." (_wee_)
  • “Rewrites will always fail (that doesn't stop people from trying, though)” (Ronald Harmsen)

These made me smile:

  • "Stability is sexy" (Justin Swartsel)
  • “Following #qconsf makes me feel like I'm on the wrong side of the globe right now. Have fun over there!” (Ralph Winzinger)
  • "The purpose of a change management process is to make sure nothing ever changes" by @tpbrown and @jezhumble (Dave Kichler)
  • “I don't buy iPads. I just go to conferences and win them. Thanks” (Matt O'Keefe)
  • "Once you understand monads, you immediately become incapable of explaining them to anyone else" by Gilad Bracha (Navis)
  • "Software development in teams is all about feelings" by @stevepeha
  • "My 2 year old son can understand this" says @headinthebox about a recursive function in Haskell (Jon Norton)
  • “Not sure whether I’m happy that #qconsf is over or want 3 more days :)” (Vladimir Syerik)
  • “You don’t need to know Category Theory, but we mention it to show how cool we are.” Gilad Bracha (Rick Warren)
  • "Almost every problem in life can be solved by a Broadway musical" by @stevepeha
  • “Just had to do the weirdest thing on a conference ever...looking your neighbor in the eyes for 60 seconds without speaking.” during the culture engine tutorial

QCon Day 3: About JavaScript, Scaling GitHub and Twitter, and Cultural Diversity

Man, it really feels like we've been around for ages, but this is just the 3rd day of QCon San Francisco. After a quick breakfast at the local Starbucks we dropped in on another keynote, right after the daily introduction by all track owners. The keynote was done by Brendan Eich, the inventor of JavaScript, and dealt with the present and future of web development. However, he didn't hide the fact that he was essentially trying to convince us that the next versions of EcmaScript are going to be feature rich, perform like race cars, and solve all the issues JavaScript opponents have been throwing at it. In fact, looking at the amount of bullet points on his slides, it is as if the JavaScript committee has been trying to add every single feature of every other programming language around. I did notice how well some of the things align with Microsoft's TypeScript, and support for lambda expressions is indeed an awesome little gem. On a side note, he did manage to impress us with something really cool: running Unreal in WebGL (provided you have Firefox 22).


I also attended a follow-up session on Ember, another JavaScript alternative to Knockout and AngularJs. Tom Dale, the author of Ember, did a pretty good job highlighting some of the conventions and elegance he was missing in the other frameworks. Within our current project we've already decided to go for AngularJs, but maybe we should re-evaluate that decision once again.

And just to stay on the JavaScript topic, I also attended a talk on Reactive Extensions for JavaScript, delivered by Jafar Husain, an extremely fast-speaking Netflix architect. It seems that Netflix is doing an impressive job, because a lot of the talks on scalability and high-performance services were done by Netflix developers. Anyway, I was happy to see that somebody finally managed to find some good use for Rx. What I particularly liked about his approach is to think of events as collections that you can execute operations on. I've recently attended an internal talk at my employer Aviva Solutions that dealt with Rx for .NET and I was pretty impressed, but since we're mostly doing ASP.NET web sites, we haven't really found a practical use for it yet. That notion of events as collections might make me think about Rx again the next time I deal with them. Oh, and for those .NET developers that have been using Rx for .NET already, the JavaScript version has practically the same API.
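To illustrate that "events as collections" idea, here is a minimal, hand-rolled sketch in plain JavaScript. To be clear, this is not the actual RxJS API; `eventStream` and its methods are my own invented names, just to show how an event source can be queried with the same `map`/`filter` operations you'd use on an array.

```javascript
// A tiny event stream that supports array-like operators (my own sketch,
// not RxJS). Each operator returns a new stream fed by the previous one.
function eventStream() {
  const subscribers = [];
  return {
    emit(value) { subscribers.forEach(fn => fn(value)); },
    subscribe(fn) { subscribers.push(fn); },
    map(project) {
      const out = eventStream();
      this.subscribe(v => out.emit(project(v)));
      return out;
    },
    filter(predicate) {
      const out = eventStream();
      this.subscribe(v => { if (predicate(v)) out.emit(v); });
      return out;
    },
  };
}

// Example: pretend these are mouse click events with an x coordinate.
const clicks = eventStream();
const received = [];
clicks
  .filter(e => e.x > 100)  // only clicks on the right-hand side
  .map(e => e.x)           // project each event to the value we care about
  .subscribe(x => received.push(x));

clicks.emit({ x: 50 });   // filtered out
clicks.emit({ x: 150 });  // passes the filter, mapped to 150
console.log(received);    // [ 150 ]
```

The appeal is that the consuming code reads like a LINQ or array query over values that happen to arrive over time, instead of a tangle of event handler callbacks.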


All that JavaScript was interesting, but what really made this day were the talks from Twitter, GitHub and the Open Space on cultural diversity. For instance, GitHub's Zach Holman shared the many trials and tribulations of moving from a few people to the 250 they have right now. Some of the things they do to deal with the many small and distributed teams are really interesting. As an example, they try to limit the number of meetings that require in-person contact and instead facilitate collaboration by recording all internal talks (they have a dedicated team for that), and providing always-on video conferencing.

But what really struck me is that they don't have any managers. Everything is based on trust. Definitely something a lot of companies can learn from. If you can get your hands on the slides, make sure you check them out. They contain a lot of fresh ideas, but one that really stood out for me is: "Your product should be cutting edge, not your technology". It really covers the no-nonsense mentality that GitHub is showing.

What's interesting is that GitHub puts a lot of emphasis on their values, even though the company has been growing ridiculously fast. And that's something that resonates well with the stuff Pedram Keyani shared with us in his talk on evolving culture and values. Just like Zach said, you should allow failure as well as define your company's values, even if it means making a tradeoff, and make sure your entire company supports those values. I've never thought much about these things, but being part of a company that is growing fast, it's easy to see how much those values can help new employees understand the culture of the company. In fact, if you're hiring new people, you should be verifying whether each person aligns well with your values. If you don't, you risk ending up with people who are fully disconnected from the company's culture.


We got a similar talk from Raffi Krikorian of Twitter, and the same ground rules apply there as well. They put a lot of focus on teams and the learning experience. For instance, the teams (5-7 people) get a lot of freedom to execute an assignment in whatever way they want, using whatever process works for them. They publicly accept an assignment and then act fully autonomously. Moreover, it's the team itself that is accountable for everything, never the individual person. In fact, bonuses are always granted to the team as well; only your salary is affected by individual performance. And if you don't feel comfortable in the team anymore, just apply the "vote with your feet" rule and move to a different team.

Everything else Twitter does is there to support those teams. You want to know what the other teams are doing? Just join one of the many Open Beer sessions where teams show each other what they plan to do and when, or wander through the corridors and look at the posters teams put up to share their plans. You need certain skills? Just talk to the full-time in-house teachers to set up a training (which is then recorded and posted on YouTube for the rest of the world). In fact, Raffi told us that although they do hire A-class developers, they prefer to train their in-house developers instead (which also keeps it interesting for them). In other words, continuous training is part of their values.

So how do they protect the quality of the code? Well, failure is allowed, provided that you learn from it, but hacks are not. Even better, teams are required to spend 10% of their time on removing technical debt. "You don't have technical debt? I don't believe you!" would be Raffi's reaction. Next to that, a global architecture team exists to assist and advise the teams on anything required to maintain quality. Raffi told us that Twitter wants to be the best company in the world for developers to work for, and I must admit, it does sound like an awesome company that a lot of others can learn from.


As if the day couldn't get better, I joined another Open Space session, this time on cultural diversity. I attended two discussions, the first of which tried to identify ways to deal with differing skill levels within teams. We concluded that while the more common solutions like brown paper sessions, peer reviews and pair programming can really help, at the end of the day it's much better to accept the differences and allow room for failure.

The other one was proposed by myself, to get some feedback on how to help team members who are afraid to ask questions, or who are too focused on solving a problem without verifying they are actually solving the right problem. The overall consensus is that you should start by recognizing the differences, asking specific questions, and trying to understand the people in your teams on a personal or private level. One way to do that is to organize a kind of recess outside the confines of the office to meet up with colleagues and talk about the job, the company or whatever you like. Dana Caulder also pointed out that asking people to name two questions about something that causes them confusion might help break through the often seen "Yes, I'm doing fine. I fully understand". Another thing that I'm trying to do a bit more often myself is to get back to people after a meeting to see if there were any concerns or questions, especially for those who find it difficult to speak out in a large group.

And just in case you don't have anything to do, people recommended to me the books Good to Great, Quiet: The Power Of Introverts and Blindspot: Hidden Biases of Good People, as well as a training called Crucial Conversations.

With that, I must conclude that these three days of QCon have brought me an astonishing amount of new insights and ideas, and a lot of inspiration. But wait, we're not done yet. We still have two full days of workshops ahead. Pff, my head is already exploding with new knowledge.

Wednesday, November 13, 2013

QCon Day 2: Persons, Groups and Teams

Although I did get the feeling that QCon isn't at the same quality level as the last time I was here, I did manage to pick up a considerable amount of new ideas and insights on the first day. Nothing mind-blowing, but still worth the trip. So let's see what the 2nd day brought us.


The day started with a keynote by Keith Adams, a Facebook founding member. He claimed that only 25% of the success of software can be attributed to brilliant teams, tools, frameworks, disciplines and programming languages; the other 75% comes from tuning the system. I'm not sure I fully agree, but I do see that you don't really have a clear understanding of your system's performance until you put it in production. And that's something we do agree on, because he said that if you don't have users, you're not tuning yet (and you shouldn't be). I've noticed myself that benchmarks and heuristics don't give you enough real data to really make your system shine. On the other hand, I think you're already lucky if you manage to pull off a performance or load test in your project. He also started talking about machine learning, but that's a topic that doesn't work for me (yet).

For the second slot, I decided to attend a talk on mobile platforms that should enable you to create mobile apps that can run on iOS, Android and Windows Phone. Since that's a topic that is becoming quite relevant within my current project, I was really looking forward to it. The speaker, Alex Gaber, gave us a short overview of the many tools, platforms and products, but forgot to actually give any decent advice. Worse, he finished within the first 30 minutes of the slot. I couldn't resist the feeling that he didn't really prepare his talk. Nevertheless, I used the remaining time to do some research myself and look at PhoneGap, Appcelerator and Xamarin. It seems that going native is still the preferred approach to get all the functionality of the platform and still have a smooth experience. For your reference, I ran into this nice little article that compares Appcelerator to the other platforms. And now that Microsoft has announced a closer cooperation with Xamarin, that platform promises to become even more interesting.

During the break I learned that I had missed a great talk on happiness in virtual teams, with a lot of valuable tips for ordinary agile teams, and I wanted to learn a bit more about UX design. Consequently, attending the UX virtual team talk sounded like a good idea. I learned about the Design Studio process, an agile approach for UX design tasks, and how they use it with virtual teams. Not my cup of tea, but at least I got to bring home another book title I need to read at some point in time: The Lean Startup. I was surprised though that they actually used a group chat for their distributed daily Scrum meeting. I would reckon not being able to see each other or read each other's body language is a big impediment.


After lunch I moved to another virtual-team-related session hosted by Ashley Johnson, a very experienced agile coach who proved to be capable of involving the audience in a unique fashion. He basically gave the audience short assignments that you were supposed to complete with your neighbor. Even better, at some point he even managed to get an entire row of seats to work together to complete a little task. And that's also the most important message of this track: the only way to turn a group of people into a team is to give them a task to complete. That may sound like an open door, but it's quite a fundamental notion that really helped me understand why our teams are still not real teams.

We practice Kanban and use a feature-driven approach. We have been doing Scrum for 45 3-week sprints and decided to move to Kanban to better support the continuous nature of our work. However, now teams are missing some of the pressure they were used to in Scrum and a clear goal/purpose. With the insights from this session it is clear we really need to look at finding a way to reintroduce an explicit task or assignment.

Another great analogy that made me rethink some of our teams' issues is to consider what happens if the wheels of a car are not aligned properly. You don't need an engineering degree in the automotive industry to understand that a slight misalignment is enough to really spoil the driving experience. In our project we've introduced a lot of principles, rules and guidelines, but I think we've failed to really help the teams understand why we are doing this. E.g. what's the architectural vision? Why did we need to introduce so much complexity? What kind of challenges are ahead of us that might affect our technical design? I'm definitely going to give some attention to that once I'm back at the office. Oh, Ashley also recommended another book, titled Agile Adoption Patterns.

For the record, the audience agreed that these were important characteristics of the best teams: humility, helping each other, autonomy, tasks, and clear success criteria. Similarly, they also identified some of the worst things a team can suffer from such as: nano management, fear of failure, personal agendas, tasks per person, competition for leadership or an uneven workload.

Part of the same track was a talk by Dana Caulder on how they managed to overcome some of the cultural and time zone differences when working with teams in the US and Poland. Dana was unlucky to follow Ashley's track, and she mostly confirmed some of the stuff Ashley had been showing us, but I did learn something from her. In her experience, a successful team knows each other well. So a great tip is to always use the time spent waiting for each other during meetings for small talk about weekend plans, someone's family, their pets or traditions, or to make jokes. It can really increase the cohesion of the teams. Hopefully my colleagues will not look at me funny the next time I'm asking those kinds of questions…


Since the last slot of the day didn't include a single decent topic, my last session of the day involved another Open Space discussion. I proposed a topic on how to promote pride of ownership within the teams, as well as how to encourage developers to feel more responsible for the code base and the product. I was lucky to have both Dana and Ashley in our little group. Ashley provided some background on the psychology behind responsibility and basically told us that it is very hard to make people feel responsible for their choices from the outside. It should come from the person him/herself, as part of moving through the stages of a responsibility process. Generally, people feel responsible for the choices they made themselves or the things they've opted into. Obviously that resonates well with Ashley's earlier observation that tasks help transform groups of people into a team.

All in all a very interesting second day. Now it's time for a drink…

Tuesday, November 12, 2013

QCon Day 1: Musicians, Google, Open Space & Netflix

So after all that fun during the first weekend (about which I'll blog separately), it's now time to finally shift our attention to the reason we're in San Francisco in the first place: InfoQ's highly rated international software conference QCon. Since my previous visit to QCon was such an impressive experience, my expectations were high. After dropping off our rental car at Alamo's, we strolled along Market Street to the new venue, the Hyatt Regency at Embarcadero Center, a longer walk than I expected.

As usual, the keynote was preceded by an introduction of each of the tracks, hosted by the corresponding track owners. However, I couldn't resist the notion that some of them were really not feeling comfortable on stage. In fact, it all sounded like some kind of obligation they couldn't get away from. Fortunately, the keynote speaker, Rich Hickey, was much better suited for this. He elaborated on a great analogy between software developers and professional musicians. He explained how musicians always start with ideas about short melodies, cadences and rhythms, and use those to compose a harmonious song. In our profession, the equivalent would be creating small autonomous components and using those to 'compose' a system 'in harmony'. If those components end up being too complex, then you should really work hard to take them apart. Just accept that a large system will eventually become too complex to grasp completely. Instead, focus on the small parts first and then look at the bigger picture.



But he went even further than that. He explained to us that instruments are always created for experienced musicians, simply because you're only a novice for a short time. So why do we still optimize our code for beginners? Just like musicians practice every day, rather than making our 'instruments' (our tools) simpler, we should practice hard as well. Definitely a very inspiring talk, in particular due to the many references to the origins of electronic music (which, as a former fan of Jean-Michel Jarre, really resonated with me).

The 2nd session of the day was done by a Netflix architect, Jeremy Edberg, and dealt with how they managed to build such a scalable and reliable solution, something you take for granted when using their services. The big 'secret' is that they always design their systems with the assumption that something will eventually go wrong. He mostly showed us how they build custom tools to monitor all their environments, but he definitely impressed us with the vast amount of energy they've invested in making their systems so resilient against failures. And now that I think of it, I noticed that all those companies start to create their own tools after they've grown to a certain scale. Definitely something to remember the next time you're looking for an off-the-shelf tool to solve your problem.

The slot just before lunch was reserved for an Open Space on architecture. As somebody who spends a great deal of his time on agile and architecture, I can tell you it was a great experience. First I joined a discussion on agile architecture and how to get the business to understand how important bringing down technical debt is, and why we need time to work on architectural changes. I was glad to hear that this challenge is a universal problem in our profession (and not only mine :-)). The main takeaway for me was that you should really try to avoid running in stealth mode and spend more energy explaining the cost of technical debt to the business.


I couldn't resist proposing a discussion on Event Sourcing and using it to build occasionally connected systems. It was fun to explain to a group of experienced architects how and why we used events as a unit of synchronization. The big question was whether I would choose this architecture style again, given similar requirements. After some contemplation, my answer is 'most definitely yes'. However, I did emphasize the fact that I underestimated the complexity it introduces, even though I've been talking about this topic for several years now.

After having selected the wrong talk in an already bad slot (the speaker didn't get to his point until after 30 minutes), I decided to check out that new noBackend hype. It was mostly about how a solution that has virtually no backend isn't as susceptible to NSA hacks. By the way, did you hear the story that some AT&T engineers found a network splitter in their server room and discovered it was used by the NSA to route all data to their servers for analysis? The speaker, Parker Higgens, works for a foundation that is supposed to help companies deal with this NSA issue, but I was mainly surprised that they could even talk about all this in public.

A talk I was looking forward to was the one in which Rachel Laycock would explain to us how to adapt your architecture to facilitate continuous delivery. She discussed a lot of architectural and design principles, in particular Conway's Law, but what I missed is how she would approach teaching existing teams about these principles in such a way that they'd really get it. Don't get me wrong though; everything she said makes sense and aligns perfectly with my own ideas, but it's convincing other people of it that's the difficult part of our job. I'm definitely going to have a chat with her somewhere this week.


One session I did get a lot out of is the one on how Google has set up their developer workflow. In our current project we've recently started using feature branches to move towards a more stable trunk. So imagine my surprise when I learned Google has about 10,000 developers all checking in on the trunk, about 20 times a minute. But after learning about their aggressive testing strategy and how they've optimized their automated build-and-test pipeline, I've gained a renewed goal to improve ours as well. They don't even have a traditional QA group anymore. Developers are held responsible for quality, and Google facilitates the testing efforts by injecting a test engineering professional into each team. In fact, test evangelism is a quintessential aspect of the developer workflow. A good example is the Testing On The Toilet practice, a single page of new ideas and experiences that developers are encouraged to read on the toilet. How's that for dedication…


After all those beers were served, I was surprised to see how many attendees appeared at the post-day keynote. Then again, that proves once more that the QCon audience generally has a lot of passion. Well, the fact that we're still here at 19:30 proves ours as well…

Wednesday, August 28, 2013

It took almost a year, but Fluent Assertions 2.1 is done

It has been way too long since I last released a new version of Fluent Assertions, but somehow my intention to deliver at least every three months has once again been thwarted by the obligations a working husband and father of two has. Nonetheless, Fluent Assertions 2.1 is a fact. And although it isn't as big a release as 2.0, it has accumulated a lot of nice improvements. As always you'll find the detailed release notes on the NuGet landing page, but just for the fun of it, let me provide some background on some of the changes 2.1 introduces.

For instance, the primary reason why this release took so long was the amount of work required to add the following two improvements to the structural equality assertions: reporting all differences and order independence. Those two required me to almost completely rewrite the internal engine. You may think "How difficult can that be?", but order independence between collections requires FA to compare each item from the subject collection with each item from the expected collection. Now consider that comparing two items might actually involve comparing two object graphs as well. If you find an exact match, all is fine. But what if there's no exact match? Which object graph should FA use for reporting the differences? I decided to solve this problem by selecting the object graph with the smallest number of differences compared to the expectation. It will not always give you the perfect result (what if two items result in the exact same number of differences?), but chances are you'll get enough information to fix your code. As an example, consider the following scenario:

var subject = new
{
    Property1 = "A",
    Property2 = "B",
    SubType1 = new
    {
        SubProperty1 = "C",
        SubProperty2 = "D",
    }
};

var expectation = new
{
    Property1 = "1",
    Property2 = "2",
    SubType1 = new
    {
        SubProperty1 = "3",
        SubProperty2 = "D",
    }
};
Calling subject.ShouldBeEquivalentTo(expectation) will result in the following test failure:

Expected property Property1 to be "1", but "A" differs near "A" (index 0).
Expected property Property2 to be "2", but "B" differs near "B" (index 0).
Expected property SubType1.SubProperty1 to be "3", but "C" differs near "C" (index 0).

With configuration:
- Select all declared properties
- Match property by name (or throw)
- Invoke Action<DateTime> when info.RuntimeType.IsSameOrInherits(System.DateTime)
- Invoke Action<String> when info.RuntimeType.IsSameOrInherits(System.String)
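The "fewest differences" selection described above can be sketched as follows. This is a toy illustration of the idea only, using flat property dictionaries in place of full object graphs; it is not FA's actual engine:

```csharp
// Toy illustration: match a subject against the expectation candidate
// that yields the fewest property-level differences.
using System;
using System.Collections.Generic;
using System.Linq;

public static class BestMatch
{
    // Counts the keys whose values differ (or are missing) between the
    // subject and an expectation candidate.
    public static int CountDifferences(
        IDictionary<string, object> subject,
        IDictionary<string, object> expectation)
    {
        return expectation.Count(pair =>
            !subject.TryGetValue(pair.Key, out var actual) ||
            !Equals(actual, pair.Value));
    }

    // Returns the candidate with the smallest number of differences.
    public static IDictionary<string, object> FindBestMatch(
        IDictionary<string, object> subject,
        IEnumerable<IDictionary<string, object>> candidates)
    {
        return candidates
            .OrderBy(candidate => CountDifferences(subject, candidate))
            .First();
    }
}
```

Ties (two candidates with the same number of differences) are resolved arbitrarily here, which mirrors the imperfect-result caveat mentioned above.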

Supporting aggregated exceptions was another of those little challenges. What I tried to accomplish is that the various ShouldThrow and ShouldNotThrow overloads would intercept any AggregateException instances and apply the assertion on the exceptions within. So from an end-user perspective it shouldn't matter if some expected or unexpected exception is first wrapped in an AggregateException. The less trivial part involved adding that behavior without breaking the .NET 3.5 and Silverlight versions (they share the same extension methods). Using the Strategy Pattern by means of an IExtractExceptions interface allowed me to plug in a framework-specific version of Fluent Assertions.
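The strategy idea can be sketched like this. The IExtractExceptions name comes from FA itself, but the signature and implementation below are my own simplification, not the library's actual code:

```csharp
// Sketch of an exception-extraction strategy: on frameworks that know
// about AggregateException, unwrap it so assertions can run against
// the inner exceptions; otherwise pass the exception through as-is.
using System;
using System.Collections.Generic;

public interface IExtractExceptions
{
    IEnumerable<T> OfType<T>(Exception actualException) where T : Exception;
}

public class AggregateExceptionExtractor : IExtractExceptions
{
    public IEnumerable<T> OfType<T>(Exception actualException) where T : Exception
    {
        if (actualException is AggregateException aggregate)
        {
            // Flatten() collapses nested AggregateExceptions into one level.
            foreach (Exception inner in aggregate.Flatten().InnerExceptions)
            {
                if (inner is T match)
                {
                    yield return match;
                }
            }
        }
        else if (actualException is T direct)
        {
            yield return direct;
        }
    }
}
```

A framework-specific extractor like this can then be plugged in only for the .NET 4.0+ builds, leaving the .NET 3.5 and Silverlight versions untouched.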

And while I was looking at exceptions anyway, I decided to change the way exception messages are asserted. In version 2.0 you had the option to specify how FA should interpret the WithMessage extension method. Having worked with this for a while in our own project (with 6000 unit tests), I came to the conclusion that you should never check the exception message using an exact case-sensitive match. Doing that only results in very brittle unit tests. Consequently, as of version 2.1 the ComparisonMode is obsolete and any assertion against the exception message is treated as a case-insensitive wildcard match.
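A wildcard match like that can be approximated by translating the pattern into a regular expression. This is a minimal sketch of the idea; FA's actual implementation may well differ:

```csharp
// Minimal sketch: case-insensitive wildcard matching where '*' matches
// any run of characters (including none).
using System;
using System.Text.RegularExpressions;

public static class WildcardMatcher
{
    public static bool IsMatch(string message, string pattern)
    {
        // Escape regex metacharacters first, then turn the escaped
        // wildcard '\*' into the regex '.*'.
        string regex = "^" + Regex.Escape(pattern).Replace("\\*", ".*") + "$";

        return Regex.IsMatch(message, regex,
            RegexOptions.IgnoreCase | RegexOptions.Singleline);
    }
}
```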

So what's next? Well, before including those little feature requests waiting on the issue list, I have two important steps to complete.

Move to GitHub
I've long hoped that Microsoft's embrace of the open-source mindset would provide the CodePlex team with the resources to make it a first-class hub for open-source projects. Support for Git was a major step, but the lack of any big improvements for over a year is what made me decide to move to GitHub. The source code and binaries have already been moved to their new home, but I still need to clean up the documentation and find a definite place for downloads.

Switch to Portable Class Libraries
Supporting multiple versions of FA has always been a pain, even while using linked source files. Especially during the many internal redesigns of 2.1, I got sick of constantly having to fix up renames of classes or copy an added file into all other projects. I'm not 100% sure PCLs will solve all problems, but I will give them a try anyhow.

Tuesday, March 12, 2013

Entity Framework 5 and 6 vs NHibernate 3 – The State of Affairs

It has been almost two years since I last compared NHibernate and Entity Framework, so with the recent alpha version of EF 6, it's about time to look at the current state of affairs. I've been using NHibernate for more than six years, so obviously I'm a bit biased. But I can't ignore that EF's feature list is growing and that some of the things I like about the NHibernate ecosystem, such as code-based mappings and automatic migrations, have found a place in EF. Moreover, EF is now open source, so they're accepting pull requests as well.

Rather than doing a typical feature-by-feature comparison, I'll be looking at those aspects of an object-relational mapper that I think are important when building large-scale enterprise applications. So let's see how those frameworks match up. Just for your information, I've been looking at Entity Framework 6 Alpha 3 and NHibernate 3.3.1GA.

Support for rich domain models
When you're practicing Domain Driven Design it is crucial to be able to model your domain using the right object-oriented principles. For example, you should be able to encapsulate data and only expose properties if the functional requirements demand it. If you model an association using a UML qualifier, you should be able to implement it using an IDictionary<TKey, TValue>. Similarly, collection properties should be based on IEnumerable<T> or any of the newer read-only collections introduced in .NET 4.5, so that your collections are protected from external changes.
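To make that concrete, here is a minimal sketch of the kind of encapsulation meant above (class and member names are my own, purely illustrative): the collection is backed by a private field and exposed only as IEnumerable<T>, so callers cannot mutate it directly:

```csharp
// Illustrative entity: the items collection is encapsulated behind a
// private field; an ORM that supports field mapping can persist it
// without the class exposing a mutable collection.
using System.Collections.Generic;

public class Order
{
    private readonly List<string> items = new List<string>();

    // Read-only view; consumers cannot Add/Remove through this property.
    public IEnumerable<string> Items => items;

    // Mutations go through intention-revealing methods instead.
    public void AddItem(string item)
    {
        items.Add(item);
    }
}
```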

NHibernate supports all these requirements and adds quite a lot of flexibility, such as ordered and unordered sets. Unfortunately, neither EF5 nor EF6 supports mapping private fields (yet), nor can you directly use a dictionary class. In fact, EF only supports ICollections of entities, so collections of value objects are out of the question. One notable type that still isn't fully supported is the enum. It was introduced in EF5, but only if you target .NET 4.5. EF6 fortunately fixes this so that it is also available in .NET 4.0 applications.

A good ORM should also allow your domain model to be as persistence ignorant as possible. In other words, you shouldn't need to decorate your classes with attributes or subclass some framework-provided base-class (something you might remember from Linq2Sql). Both frameworks impose some limitations such as protected default constructors or virtual members, but that's not going to be too much of an issue.

Vendor support
Although Microsoft would have us believe that corporate clients only use SQL Server or SQL Azure, we all know that the opposite is much closer to the truth. A big advantage of NH over EF is that NH has all the providers built in, so whenever a new version of the framework is released you don't have to worry about vendor support.

Both EF5 and NH 3.3 support various flavors of SQL Server/Azure, SQLite, PostgreSQL, Oracle, Sybase, Firebird and DB2. Most of EF's providers originate from EF4, though, so they don't support code-first (migrations) or the new DbContext façade. EF6 is still an alpha release and its provider model seems to contain some breaking changes, so don't expect support for anything other than Microsoft's own databases anytime soon.

Support switching databases for automated testing purposes
Our architecture uses a repository pattern implementation that allows swapping the actual data mapper on-the-fly. Since we're heavily practicing Test Driven Development, we use this opportunity to approach our testing in different ways.

  1. We use an in-memory Dictionary for unit tests where the subject-under-test simply needs some data to be setup in a specific way (using Test Data Builders).
  2. We use an in-memory SQLite database when we want to verify that NHibernate can process the LINQ query correctly and performs sufficiently using NHProf.
  3. We use an actual SQL Server for unit tests that verify that our mapping against the database schema is correct.
  4. We have some integration code that interacts with a third-party Oracle system that is tested on SQL Server on a local development box, but uses Oracle on our automated SpecFlow build.

So you can imagine switching between database providers without changing the mapping code is quite essential for us.
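To illustrate, here is a hypothetical sketch of that swappable-repository idea (the interface and names are my own, not our actual code): unit tests get the in-memory variant, while an NHibernate- or EF-backed implementation plugs into the same interface:

```csharp
// Sketch: a data-mapper-agnostic repository abstraction with an
// in-memory implementation suitable for fast unit tests.
using System;
using System.Collections.Generic;

public interface IRepository<TEntity> where TEntity : class
{
    void Add(Guid id, TEntity entity);
    TEntity Find(Guid id);
}

public class InMemoryRepository<TEntity> : IRepository<TEntity>
    where TEntity : class
{
    private readonly Dictionary<Guid, TEntity> store =
        new Dictionary<Guid, TEntity>();

    public void Add(Guid id, TEntity entity) => store[id] = entity;

    // Returns null for unknown ids, mimicking a missed database lookup.
    public TEntity Find(Guid id) =>
        store.TryGetValue(id, out var entity) ? entity : null;
}
```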

During development, we decided that we did not care about the actual classes that represented the integration tables, so we tried to use the Entity Framework model-first approach. Unfortunately, when you do that, you're basically locking yourself into a particular database. After switching back to our normal NHibernate approach, changing the connection string during deployment was enough to switch between SQL Server and Oracle. Fortunately this has also been possible since EF 4.1; Jason Short wrote a good blog post about that.

Automatic schema migration
When you're practicing an agile methodology such as Scrum, you'll probably try to deliver a potentially shippable release at the end of every sprint. Part of being agile is that functionality can be added at any time where some of that might be affecting the database schema. The most traditional way of dealing with that is to generate or hand-write SQL scripts that are applied during deployment. The problem with SQL scripts is that they are tedious to write, might contain bugs, and are often closely coupled to the database vendor. Wouldn't it be great if the ORM framework would support some way of figuring out what version of the schema is being used and automatically upgrade the database scheme as part of your normal development cycle? Or what about the ability to revert the schema to an older version?

The good news is that this exists for both frameworks, but with a caveat. NHibernate doesn't support this out of the box (although you can generate the initial schema), but with the help of another open-source project, Fluent Migrations, you can get very far. We currently use it in an enterprise system and it works like a charm. The caveat is that support for the various databases is not always at the same level. For instance, SQLite doesn't allow renaming a column, and Fluent Migrations doesn't work around that (although theoretically it could create a new column, copy the old data over, and drop the old column). As an example of a fluent migration supporting both an upgrade and a rollback, check out this snippet.

public class TestCreateAndDropTableMigration : Migration
{
    public override void Up()
    {
        // Note: the table definition here is illustrative; the original
        // snippet only preserved the Insert statement.
        Create.Table("TestTable")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(255).NotNullable();

        Insert.IntoTable("TestTable").Row(new { Name = "Test" });
    }

    public override void Down()
    {
        Delete.Table("TestTable");
    }
}
Entity Framework has had something similar built in since version 5. It's called Code-First Migrations and looks surprisingly similar to Fluent Migrations. Just like the NHibernate solution, EF's has its limitations, the big one being vendor support. At the time of this writing not a single third-party vendor supports Code-First Migrations. On the other hand, if you're only using SQL Server, SQL Express, SQL Compact or SQL Azure, there's nothing stopping you from using it.

Code-based mapping
If you remember the old days of NHibernate, you might recall those ugly XML files that were needed to configure the mapping of your .NET classes to the underlying database. Fluent NHibernate has been offering a very nice fluent API for replacing those mappings with code. Not only does this prevent errors in the XML, it is also a very refactor-friendly approach. We've been using it for years and the extensive (and customizable) convention-based mapping engine even allows auto-mapping entities to tables without the need of explicit mapping code.

Strangely enough, NHibernate 3.2 introduced a brand new fluent API that directly competes with Fluent NHibernate. Because of the lack of documentation, I never bothered to look at it, especially since Fluent NHibernate has been doing its job remarkably well. But during my research for this post I noticed that Adam Bar has written a very extensive series on the new API, and he actually managed to raise a renewed interest in it.

Until Entity Framework 4.1 the only way to set up the mapping was through its data model designer (not to be confused with an OO designer). But apparently the team behind it learned from Fluent NHibernate and decided to introduce their own code-first approach, surprisingly named Code-First. In terms of convention-based mapping it was quite limited, especially compared to Fluent NHibernate, but EF6 is going to introduce a lot of hooks for changing the conventions, both at the property level and at the class level.

Supporting custom types and collections
One of the guidelines in my own coding guidelines is to consider wrapping primitive types in more domain-specific types. Coincidentally, it is also one of the rules of Object Calisthenics. In Domain Driven Design these types are called Value Objects, and their purpose is to encapsulate all the data and behavior associated with a recurring domain concept. For instance, rather than having two separate DateTime properties to represent a period and separate methods for determining whether some point in time occurs within that period, I would prefer a dedicated Period class that contains all that logic. This approach results in a design that contains less duplication and is easier to understand.
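As a sketch of what such a Value Object could look like (purely illustrative, not taken from any framework):

```csharp
// Illustrative Value Object: wraps two DateTimes and owns the
// containment logic that would otherwise be scattered around the model.
using System;

public struct Period
{
    public Period(DateTime start, DateTime end)
    {
        Start = start;
        End = end;
    }

    public DateTime Start { get; }
    public DateTime End { get; }

    // True when the given moment falls within the period (inclusive).
    public bool Contains(DateTime moment) => moment >= Start && moment <= End;
}
```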

Contrary to NHibernate, Entity Framework doesn't offer anything like this and, as far as I know, doesn't plan to. NH, on the other hand, offers a myriad of options for creating custom types, custom collections or even composite types. Granted, you have to do a bit of digging to find the right documentation (StackOverflow is your friend here), but if you do, it really helps to enrich your domain model.

Query flexibility
Some would argue that EF's LINQ support is much more mature, and until NH 3.2 I would have agreed. But since then, NH's LINQ support has improved substantially. For instance, during development we use an in-memory SQLite database in our query-related unit tests to make sure the query can actually be executed by NH. Before 3.2, we regularly ran into strange cast exceptions or exceptions because of unsupported expressions. Since 3.2, we've never seen those anymore.

I haven't tried to run all our existing queries against EF, but I have no doubt it would handle them without any issues. In terms of non-LINQ querying, EF supports Entity SQL as well as native SQL (although I don't know if all vendors are supported). NHibernate offers the HQL, QueryOver and Criteria APIs next to native vendor-specific SQL. Both frameworks support stored procedures. All in all, plenty of flexibility.

Extensibility
EF 6 uses the service locator pattern to allow replacing certain aspects of the framework at runtime. This is a good starting point for extensibility, but unfortunately the team always demonstrates this by replacing the pluralization service. As if someone would actually want to do that. Nonetheless, I'm sure the team's plan is to expose more extension points in the near future.

NH has a very extensive set of observable collections called listeners that can be used to hook into virtually every part of the framework. We've been using it for cross-cutting concerns, for hooking up auditing services and also for some CQRS related aspects. You can also tweak a lot of NH's behavior through configuration properties (although you'll have to Google…eh…Bing for the right examples).

Other notable features
Each of the frameworks has some unique features that don't fit in any of the other topics I've discussed up to now. A short summary:

  • Entity Framework 6 adds async/await support, a feature that NHibernate may never get due to the impact it has on the entire architecture.

  • It also has built-in support for automatically reconnecting to the database, which is particularly useful for a known issue with SQL Azure.

  • Both NHibernate and Entity Framework support .NET 4.5, but only the latter gains significant performance improvements from it.

  • NHibernate has a unique selling point and that is its very advanced and pluggable 2nd level cache. This has allowed significant performance improvements in one of our projects, simply by caching reference data.

  • NHibernate offers another unique feature called Futures that you can use to compose a set of queries and send them to the database as a single request.

  • The Entity Framework allows creating a DbContext with an existing open connection. As far as I know that's not possible in NHibernate.

  • Version 6 of the Entity Framework adds spatial support, something for which you need a 3rd-party library in NHibernate. Pedro Sousa wrote an in-depth blog post series about that.

The big difference between Entity Framework and NHibernate from a developer's perspective is that the former offers an integrated set of services, whereas the latter requires combining several open-source libraries. That in itself is not a big issue - that's why we have NuGet, don't we? - but we've noticed that those libraries are not always updated soon enough when new NHibernate versions are released.

From that same perspective, NHibernate offers a lot of flexibility and clearly shows its maturity. On the other hand, you could also see that as a potential barrier for new developers; it's just much easier to get started with Entity Framework than with NH. The documentation on Entity Framework is quite comprehensive, and even the new functionality in version 6 is extensively documented using feature specifications. The NHibernate documentation has always lagged behind a bit. For instance, the new mapping system is not even mentioned, even though the reference documentation covers the correct version. The information is available; you just have to search a bit.

The fact that EF is being developed in a Git source control repository is also a big plus. Just look at the many pull requests they've been taking in. On the other hand, to my surprise, somebody moved the NHibernate source code to GitHub while I wasn't paying attention. So in that respect they are equals.

And does NHibernate have a future at all? Some would argue it is dead already. I don't agree, though. Just look at the statistics on GitHub: 240 forks, almost 200 pull requests and a lot of commits in the last few months. I do agree that NoSQL solutions like RavenDB are extremely powerful and offer a lot of fun and flexibility during development, but the fact of the matter is that they are still not widely accepted by enterprises with a history in SQL Server or Oracle.

Nevertheless, the RAD aspect of EF cannot be ignored and is important for small short-running projects where SQL Server is the norm. And for those projects, I would wholeheartedly recommend EF. But for the bigger systems where a NoSQL solution is not an option, especially those based on Domain Driven Design, NHibernate is still the king in town.

As usual, I might have overlooked a feature or misinterpreted some aspect of both frameworks. If so, leave a comment, email me or drop me a tweet at @ddoomen.

Wednesday, January 02, 2013

About ideas that stick

A while ago, a businessman from the US who travels a lot as part of his job was sitting in his airline's business lounge having a drink. Right after finishing his second, an attractive woman approached him and offered him a drink in exchange for somebody to talk to. Somewhere halfway through that drink the man passed out, and he awoke an indeterminate number of hours later… in a bathtub filled with ice.

Totally confused, and without a clue about where he was or what he was doing in this ice-filled bathtub, he noticed a chair next to the tub. On the chair were a hand-written sign saying "Don't Move. Call 911" and a cell phone. After dialing 911 and explaining the situation, the woman on the phone asked him: "Is there a tube of some kind protruding from your lower back?". The man reached for his back and, to his shock, discovered that there was indeed something exiting his back. "Try to remain calm, sir, but they've probably stolen one of your kidneys"…

This story is one of the many examples from the book titled Made to Stick, Why Some Ideas Survive and Others Die by Chip and Dan Heath. I didn't even have to reread the original version from the book; it just stuck with me. Apparently there's an entire theory that deals with the characteristics a story must have to stick. Don't worry, I'm not going to spoil the reading experience by sharing those characteristics right here.


If you are a professional like me who regularly needs to convey some kind of message through a blog post, a presentation or even a tweet, you might learn a thing or two from this book as well. I'm not saying I'm a changed man, but it sure contained some very useful tips that can help you bring a message back to its core. And if it doesn't bring you anything new after all, the book is still a lot of fun to read. It's packed with anecdotes and real-life stories that will most definitely surprise you.

Oh, and for the record, the above story is a myth. :-)