Putting it all together – From Monolith to Cloud-Native

There’s an underlying theme to the posts I’ve been making lately, namely that I am beginning the process of migrating an existing application to an Event-Driven Architecture (EDA). This has posed some interesting technical challenges, and the overall process can be used to break an existing monolithic application down into component microservices based on Domain-Driven Design (DDD). Let’s start at the beginning and look at the challenges we’ll face and the work required.

1. The Domain Model

As with any DDD project, you’ll begin with the domain model for a given domain. Follow DDD best practices to build this domain model.
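To make this concrete, here is a minimal sketch of what such a domain model aggregate might look like. All type and member names here are hypothetical illustrations, not the actual application’s model; the key idea is that state changes only happen through behavior methods that record events.

```csharp
using System;
using System.Collections.Generic;

// Illustrative events; a real model would have one per business action.
public abstract record DomainEvent;
public record OrderOpened(Guid OrderId) : DomainEvent;
public record ItemAdded(string Sku, int Quantity) : DomainEvent;

public class Order
{
    private readonly List<DomainEvent> _pending = new();
    public Guid Id { get; private set; }
    public Dictionary<string, int> Items { get; } = new();

    // State changes go through methods that raise events,
    // never through public property setters.
    public void Open(Guid id) => Apply(new OrderOpened(id));
    public void AddItem(string sku, int qty) => Apply(new ItemAdded(sku, qty));

    private void Apply(DomainEvent e)
    {
        When(e);          // mutate in-memory state
        _pending.Add(e);  // record the event for persistence
    }

    private void When(DomainEvent e)
    {
        switch (e)
        {
            case OrderOpened o:
                Id = o.OrderId;
                break;
            case ItemAdded a:
                Items[a.Sku] = Items.GetValueOrDefault(a.Sku) + a.Quantity;
                break;
        }
    }

    // Events not yet written to the event store.
    public IReadOnlyList<DomainEvent> PendingEvents => _pending;
}
```

The `Apply`/`When` split is a common event-sourcing convention: `When` is pure state mutation (so it can also be used to replay history), while `Apply` additionally queues the event for writing.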

2. Data Migration

There’s a fundamental difference between the static state stored by a traditional application in a relational database, and the aggregate state stored by an event-sourced system in (e.g.) Event Store DB. Storing the static state is like storing a document – you only get the latest version of the document. However, if you turn on “Track Changes”, you get a different document, one with a history of revisions. This document with the revision history is what is stored in the Event Store DB.

This is challenge #1: how do we convert the static state of an existing persisted entity into the required aggregate state of a new event stream?

The migration is doubly difficult because the legacy application uses Entity Framework 6.0 as its persistence layer, while the new application will run on .NET 6 using EF Core 6. So we can’t just copy the entity model verbatim: any fluent customizations will have to be rewritten for EF Core 6. The good news is that the existing data annotation attributes are honored by EF Core 6 as well, so copying the model verbatim is a good start; we can tweak it as we go along.

With the copy of the EF6 model in place, we can rewrite any fluent customizations that are necessary (e.g. many-to-many relationships). This is also a good time to replace unnecessary fluent customizations with the corresponding attributes. Test your new .NET 6 EF Core model and ensure you can iterate through the data.
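As one example of such a rewrite (entity and table names here are hypothetical), an EF6 many-to-many mapping done with `HasMany(...).WithMany(...).Map(...)` might become the following in EF Core 5+, which supports many-to-many directly. Pointing it at the existing join table keeps the legacy schema working:

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Post
{
    public int Id { get; set; }
    public List<Tag> Tags { get; set; } = new();
}

public class Tag
{
    public int Id { get; set; }
    public List<Post> Posts { get; set; } = new();
}

public class BlogContext : DbContext
{
    public DbSet<Post> Posts => Set<Post>();
    public DbSet<Tag> Tags => Set<Tag>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // EF Core maps the relationship itself; we only need to name
        // the join table so it matches what EF6 already created.
        modelBuilder.Entity<Post>()
            .HasMany(p => p.Tags)
            .WithMany(t => t.Posts)
            .UsingEntity(j => j.ToTable("PostTags"));
    }
}
```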

Now that we can read in our static entities, the challenge is met by essentially reverse engineering the current state of each entity into calls to the methods in the domain model. That is, we load the entity into memory and construct a domain object, using the methods we defined during DDD, that has the same “value” as the entity. See the example below:

var entities = db.Entities.ToList();
foreach (var entity in entities)
{
    // Build the aggregate using its domain methods, not property setters.
    var aggregate = new Aggregate();
    aggregate.SetProperty1(entity.Property1);
    foreach (var item in entity.Collection1)
        aggregate.AddItem(item);
    // aggregate's current state now equals entity
    // write the aggregate to the aggregate store (Event Store DB)
}

Note that we call aggregate.SetProperty1() instead of assigning aggregate.Property1 directly. This is the important part of the migration: we must use the methods defined in the domain model to achieve the desired state. When we do, we will have created an aggregate suitable for storing in Event Store DB. Repeat the process for every aggregate identified during domain-driven design, and you have migrated your data to an event store.

Important: you will probably not use the same ID values in the new system. Add a property to your legacy entity to store the ID value from the new system; this will become necessary later on, when we build a “bridge” back to the existing application.

Of course, the event store is not suitable for querying data. For that, we need to project the event store data into a form suitable for reading.

3. Projections

Projections are code that is called in response to an incoming event. Any service can subscribe to the stream of events being written to the event store. The service will then choose which events to respond to, usually by writing data to another database that can be used for queries. Good choices here are SQL Server and CosmosDB. This is one of the big performance gains we get by separating the “read side” and “write side” of the database: instead of doing join queries on a relational database in SQL Server, we can write materialized query results in JSON documents to CosmosDB instead. While not as space-efficient, the performance gains in read speed far exceed the additional space required.
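A minimal sketch of a projection is shown below. The types are illustrative, and the subscription plumbing (e.g. the Event Store DB client delivering events) is omitted; what matters is that the handler picks the events it cares about and updates a denormalized read model, here an in-memory dictionary standing in for a CosmosDB container or SQL table:

```csharp
using System;
using System.Collections.Generic;

public abstract record DomainEvent(Guid AggregateId);
public record ItemAdded(Guid AggregateId, string Sku, int Quantity) : DomainEvent(AggregateId);

// The materialized read model: one precomputed document per order,
// instead of a join query at read time.
public class OrderSummary
{
    public Guid Id { get; set; }
    public int TotalItems { get; set; }
}

public class OrderSummaryProjection
{
    // Stand-in for the read database.
    private readonly Dictionary<Guid, OrderSummary> _readModel = new();

    public void Handle(DomainEvent e)
    {
        // Respond only to the events this projection cares about.
        if (e is ItemAdded added)
        {
            if (!_readModel.TryGetValue(added.AggregateId, out var summary))
                _readModel[added.AggregateId] = summary = new OrderSummary { Id = added.AggregateId };
            summary.TotalItems += added.Quantity;
        }
    }

    public OrderSummary? Get(Guid id) => _readModel.GetValueOrDefault(id);
}
```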

4. The “Bridge”

Challenge #2: How to incrementally release a monolithic application?

We do not want to wait for the entire new application to be written before we start using the new system, so we need a way to run both applications in parallel and migrate users to the new application as features become available for their team. This requires that both applications work on the same data. Of course, at this point the data is in two separate databases! This is not an insurmountable difficulty, however: we can write a projection that writes back to SQL Server. Since we already created the .NET 6 entity model when we migrated the data to the event store, it is actually a rather simple task to write a projection that targets SQL Server instead of CosmosDB. And you can largely copy the code that writes to SQL Server from your old application, since both use the same entity model (albeit on different platforms).

These projections back to SQL Server are the key. With them in place, it is a simple matter of fetching the legacy entity by its stored v2 ID and performing the updates described by the incoming event.
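The bridge handler might be sketched as follows. `LegacyOrder` and the in-memory list are hypothetical stand-ins for the legacy entity model and SQL Server; a real bridge would query and save through the EF Core DbContext instead. The `V2Id` property is the new-system ID stored on the legacy entity during the data migration step:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record ItemAdded(Guid AggregateId, string Sku, int Quantity);

public class LegacyOrder
{
    public int Id { get; set; }      // legacy SQL Server identity value
    public Guid V2Id { get; set; }   // ID assigned by the new system
    public int ItemCount { get; set; }
}

public class SqlServerBridgeProjection
{
    private readonly List<LegacyOrder> _orders;

    public SqlServerBridgeProjection(List<LegacyOrder> orders) => _orders = orders;

    public void Handle(ItemAdded e)
    {
        // Fetch the legacy entity by the new system's ID...
        var order = _orders.Single(o => o.V2Id == e.AggregateId);
        // ...and perform the update the incoming event describes.
        order.ItemCount += e.Quantity;
        // (A real projection would call db.SaveChanges() here.)
    }
}
```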

The interesting thing is that this bridge need not be temporary. While it is certainly possible to remove the bridge once the new application is complete, there are cases when it may be desirable to keep some or all of the bridge to facilitate external processes that may prefer SQL Server to CosmosDB.

5. Microservices and Kubernetes

The goal is to run the microservices on Kubernetes (or some other, as-yet-unchosen container orchestrator). This requires that the application run on Linux, which is what necessitates the upgrade to .NET 6. We expect the infrastructure savings from using an orchestrator to show up over time as a reduced overall core count (Microsoft reports a 50% reduction in core count from moving the AAD gateway from .NET Framework to .NET Core) and more efficient use of resources.

6. Conclusions

This is, in a nutshell, the process used to break down a monolithic legacy application based on Entity Framework. .NET 6 is a stable, multi-platform framework target that allows you to containerize your .NET applications. The upgrade to .NET 6 should take advantage of all the platform and language features available, and use modern design techniques to build a lasting architecture that minimizes maintenance and maximizes readability.

