Event Sourcing from the Trenches: Mixed Feelings


While visiting QCon New York this year, I realized that a lot of the architectural problems discussed there could benefit from the Event Sourcing architecture style. Since I've been in charge of architecting such a system for several years now, I started to reflect on the work we've done: what worked for us, what I would do differently next time, and what I still haven't made up my mind about. So after having discussed my thoughts on projections, I still have a couple of doubts to discuss.

Don't write to more than one aggregate per transaction
I know Vaughn Vernon stated this rule in his posts on effective aggregate design, but I still don't see its real pragmatic value. As long as you postpone dispatching the resulting events until all aggregates have been updated, you should be fine. Strictly sticking to a single aggregate per transaction means you need to design your aggregates so perfectly that every business action affects only one aggregate, and I seriously doubt most people will manage that. And if you can't, keeping to the rule means building retry and compensation logic for the cases where other parts of the bounded context are interested in those events. However, never ever use transactions across bounded contexts.
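
To make that concrete, here's a minimal sketch of what I mean by postponing the dispatch. The IEventStoreSession, IEventDispatcher, Aggregate and UnitOfWork types are hypothetical stand-ins for this example, not the actual API we use.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical abstractions; an actual event store and bus will look different.
public interface IEventStoreSession : IDisposable
{
    void Append(Guid streamId, IEnumerable<object> events);
    void Commit();
}

public interface IEventDispatcher
{
    void Dispatch(IEnumerable<object> events);
}

// Minimal stand-in for an aggregate that records the events it raises.
public abstract class Aggregate
{
    public Guid Id { get; } = Guid.NewGuid();
    public List<object> UncommittedEvents { get; } = new List<object>();
}

public class UnitOfWork
{
    private readonly Func<IEventStoreSession> openSession;
    private readonly IEventDispatcher dispatcher;

    public UnitOfWork(Func<IEventStoreSession> openSession, IEventDispatcher dispatcher)
    {
        this.openSession = openSession;
        this.dispatcher = dispatcher;
    }

    // Saves the changes of multiple aggregates in one transaction, and only
    // dispatches their events after that transaction has succeeded.
    public void SaveAndDispatch(params Aggregate[] aggregates)
    {
        var pendingEvents = new List<object>();

        using (IEventStoreSession session = openSession())
        {
            foreach (Aggregate aggregate in aggregates)
            {
                session.Append(aggregate.Id, aggregate.UncommittedEvents);
                pendingEvents.AddRange(aggregate.UncommittedEvents);
            }

            session.Commit();
        }

        // Only now do the events reach projections, process managers, and
        // other interested parts of the bounded context.
        dispatcher.Dispatch(pendingEvents);
    }
}
```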

Separation of the aggregate root class and its state
Inspired by Lokad.CQRS, we use a separate class to contain the When methods that change the internal state as a result of an event, both when an aggregate root method raises a new event and during rehydration from the stream. However, a separate state class results in rather cumbersome code, with properties that constantly have to point to the state object. Putting them on the main AR class instead would make it very big, although maybe using a partial class makes sense there.
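
As an illustration, a minimal sketch of that split might look like the code below; the invoice example and its members are made up for this post, not taken from our production code.

```csharp
using System;
using System.Collections.Generic;

public class PaymentReceived
{
    public decimal Amount { get; set; }
}

// The state class owns the When methods that mutate the internal state,
// both when the aggregate raises a new event and when it is rehydrated
// from its stream.
public class InvoiceState
{
    public decimal AmountPaid { get; private set; }

    public void When(PaymentReceived @event)
    {
        AmountPaid += @event.Amount;
    }
}

// The aggregate root contains the behavior, but every read of its own data
// has to go through the state object, which is what makes it cumbersome.
public class Invoice
{
    private readonly InvoiceState state = new InvoiceState();
    private readonly List<object> changes = new List<object>();

    public void ReceivePayment(decimal amount, decimal totalAmount)
    {
        if (state.AmountPaid >= totalAmount)
        {
            throw new InvalidOperationException("The invoice has already been paid in full");
        }

        Apply(new PaymentReceived { Amount = amount });
    }

    // Raises a new event: updates the state and records the change.
    private void Apply(object @event)
    {
        state.When((dynamic)@event);
        changes.Add(@event);
    }

    // Rebuilds the aggregate from its historical events.
    public void Rehydrate(IEnumerable<object> history)
    {
        foreach (object @event in history)
        {
            state.When((dynamic)@event);
        }
    }
}
```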

Functional vs unique identifiers
For aggregates that have a natural key, we use that key to identify the stream their events belong to. Greg Young once mentioned that using a Guid or some other surrogate is probably better, but somehow that never aligned with what I've learned to value from Pat Helland's old article Data on the Inside, Data on the Outside. Maybe you should do both?
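
Doing both could look roughly like this sketch, where the stream is keyed by a surrogate Guid and the natural key stays an ordinary property that a lookup maps to the stream. The shipment example and its names are purely illustrative.

```csharp
using System;
using System.Collections.Generic;

// The stream is identified by a surrogate Guid, while the natural key is a
// separate property that a small lookup (or read model) maps to the stream
// id. The business can then correct the natural key later without having to
// re-identify or rewrite the stream.
public class Shipment
{
    public Guid StreamId { get; } = Guid.NewGuid();

    // The natural (functional) key the business actually talks about.
    public string TrackingNumber { get; private set; }

    public Shipment(string trackingNumber)
    {
        TrackingNumber = trackingNumber;
    }
}

public class ShipmentLookup
{
    private readonly Dictionary<string, Guid> streamIdsByTrackingNumber =
        new Dictionary<string, Guid>();

    public void Register(Shipment shipment) =>
        streamIdsByTrackingNumber[shipment.TrackingNumber] = shipment.StreamId;

    public Guid GetStreamId(string trackingNumber) =>
        streamIdsByTrackingNumber[trackingNumber];
}
```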

Share by contract vs by type
Whether to share events as a binary package or through some platform-agnostic mechanism (e.g. JSON Schema) is a difficult one for me. Some people argue that sharing the binary package is going to cause an enormous amount of coupling, but I would think that sharing just a JSON Schema still ties you to that contract. For instance, if you're in the .NET space, being able to use a NuGet package that contains only the events from a bounded context, ready to be consumed by another context, sounds very convenient. The only thing a schema-based representation really buys you is that it forces you to add a transformation step from that schema into some internal type, which gives you a bit more flexibility to decouple versioning differences. But somehow, I'm not convinced the added complexity is worth it (yet).
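
To illustrate that transformation step, here is a hedged sketch of the schema-based route; the contract, event and translator names are invented for the sake of the example.

```csharp
using System;

// DTO that mirrors the published, platform-agnostic contract (e.g. what a
// JSON Schema for version 1 of the event would describe).
public class OrderShippedContractV1
{
    public string OrderNumber { get; set; }
    public string ShippedAtUtc { get; set; }
}

// The internal representation this bounded context actually works with.
public class OrderShipped
{
    public string OrderNumber { get; set; }
    public DateTime ShippedAt { get; set; }
}

public static class OrderShippedTranslator
{
    // The explicit transformation step: extra code, but also the seam where
    // versioning differences between contexts can be absorbed.
    public static OrderShipped Translate(OrderShippedContractV1 contract)
    {
        return new OrderShipped
        {
            OrderNumber = contract.OrderNumber,
            ShippedAt = DateTime.Parse(contract.ShippedAtUtc)
        };
    }
}
```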

Feedback, please!
So what do you think? Do my thoughts make sense? Am I being too pragmatic here? Are you using Event Sourcing yourself? If so, care to share some experiences? I'd really love to hear your thoughts in the comments below. Oh, and follow me at @ddoomen to get regular updates on my everlasting quest for better solutions.
