
Brian Richardson's Blog

  • Data Architecture – Too much normalizing?

    July 13th, 2022

    I’m slowly breaking out of the mentality that all data must be normalized as much as possible. As I work through creating an event sourced system from an existing application, I find that a lot of data ends up in the same place even though it’s not the same data. Let’s look at a simple example, a country with provinces. In a relational database, you’d probably have a lookup table for each of these, with a foreign key on the province table pointing at its country so we know which province belongs to which country. And then we’d have an address table with foreign keys to both the country and province tables. Compare this to a JSON document that simply stores all the provinces as an array of objects:

    {
      "Country": {
        "Id": "CA",
        "Name": "Canada",
        "Provinces": [
          {
            "Id": "AB",
            "Name": "Alberta"
          },
          {
            // ...
          }
        ]
      }
    }
    

    Since the province has no document of its own, the address can’t store a reference id to it; it must embed some minimal amount of information about the province within the address document itself:

    {
      "StreetAddress": "123 Fake Street",
      "City": "Calgary",
      "Province": {
        "Abbreviation": "AB",
        "Name": "Alberta"
      },
      "PostalCode": "H0H 0H0"
    }
    

    So, really, what’s wrong with this? I guess the main criticism is that we don’t have a central location to update the name of the province. But that’s easy enough to deal with:

    public static async Task UpdateMultipleItems<T>(
        this IMongoCollection<T> collection,
        Expression<Func<T, bool>> query,
        Func<T, Task> update) where T : IId
    {
        // Load each matching document, apply the update delegate, and queue a
        // full-document replacement keyed on the document's id.
        var writes = new List<ReplaceOneModel<T>>();
        foreach (var item in collection.AsQueryable().Where(query))
        {
            await update(item);
            writes.Add(new ReplaceOneModel<T>(Builders<T>.Filter.Eq(e => e.Id, item.Id), item));
        }

        // BulkWriteAsync throws if the request list is empty, so bail out early.
        if (writes.Count == 0)
            return;

        await collection.BulkWriteAsync(writes);
    }
    
    

    When we receive an event that a province name was updated, we simply update all documents that contain that province. If this is something we plan to do regularly, an index would certainly help.
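
    To make that concrete, a handler for a hypothetical ProvinceNameChanged event might look something like the sketch below (the event shape and the Address read model are my own assumptions for illustration; Address would need to implement IId):

    // Sketch only: fan a province rename out to every address document that
    // embeds the old province. ProvinceNameChanged and Address are assumed types.
    public async Task Handle(ProvinceNameChanged evt, IMongoCollection<Address> addresses)
    {
        await addresses.UpdateMultipleItems(
            a => a.Province.Abbreviation == evt.ProvinceId,
            a =>
            {
                a.Province.Name = evt.NewName;
                return Task.CompletedTask;
            });
    }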

    I think it’s important to stick with DDD principles here. If something isn’t an aggregate root, it shouldn’t have its own documents. The province, here, is not an aggregate root as it doesn’t even exist outside the context of its containing country. So, we’ll never see provinces as more than a property of some other aggregate root. Given the size of a province, it seems easiest just to store its value inline.

    Ok, so most small entities can be dealt with in this fashion. We do have _some_ need for normalizing, though. Consider the case where there is a relationship between two aggregate roots. In this case, we can simply store a reference id for this property, and use a lookup to get the associated document. But why not take some of what we’ve learned above? For example, instead of merely storing the id value, also store a name or description value as well. And you’re not limited to a single field. Perhaps there’s a LastUpdated field or similar that you’d want to retrieve without loading the entire linked document. Yes, you will have to use the same technique as above to update that field when it changes, but in a lot of cases, you won’t need anything more than a text identifier until a user actually triggers loading of the entire document.
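
    As a sketch, such a reference might be stored as a small value object rather than a bare id (the type and field names here are my own; Description and LastUpdated are the kinds of denormalized fields discussed above):

    // Assumed shape of a denormalized reference to another aggregate root:
    // enough to render a list entry or a link without loading the full document.
    public class DocumentReference
    {
        public string Id { get; set; } = default!;
        public string Description { get; set; } = default!;
        public DateTimeOffset LastUpdated { get; set; }
    }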

    I believe this to be a sound approach. We are already working with eventually consistent databases, so a slight delay in updating subdocuments shouldn’t have a profound impact. I’ll need to work with much larger datasets before I have any basis for comparison, but there are benefits to working this way:

    • Documents remain logically separated. We don’t link things together simply because one entity has the text we want to use. An address belonging to one type of entity is not necessarily the same thing as an address belonging to another type of entity, and indeed may have different business rules.
    • We gain speed at the expense of space. This is generally a tradeoff most people are willing to make these days.
    • The document for any given aggregate root is human-readable. It is not necessary to perform multiple lookups to obtain the necessary information.

    The flip side of this is that we do repeat ourselves, at least superficially. In the address example, there are two sets of POCOs that represent addresses. That is not itself an indication that it’s wrong, but you may need to further consider whether those addresses are, in fact, aggregate roots themselves. However, if they’re not, then I’d continue to argue that the values should be stored inline. I’ll look into the performance implications of this position and follow up. For now, though, it would seem that we are normalizing too much, and much clarity is to be gained by duplicating storage of similar information.

  • Full-text Searching with CosmosDB

    July 7th, 2022

    While I settled on CosmosDB as the final destination for the document database in my solution, I did early work on the application using the MongoDB docker container. I was happy with how easy it was to write a search method for MongoDB by defining some full-text search indexes and querying those indexes. It was an easy search, since MongoDB did all the hard work. Something like:

    // _collection is assumed to be an IMongoCollection<T> field on a generic repository.
    public async Task<List<T>> Search(string search)
    {
        var query = Builders<T>.Filter.Text(search);
        var cursor = await _collection.FindAsync(query);
        return await cursor.ToListAsync();
    }
    

    However, when creating those same full-text search indexes on CosmosDB, you will find that it is not supported. So, what’s the analogous solution in Azure?

    Azure does support the idea of full-text indexes, but at a larger scale. Azure Cognitive Search allows you to index a CosmosDB collection for full-text search, as well as filtering, sorting, suggestions and facets. The concept is much the same: a full-text index is defined in Cognitive Search, and it is applied to a specific data source. An indexer process is configured and triggered every time a change is projected to CosmosDB. This isn’t quite automatic, so let’s look at the process:

    Define the data source, index, and an indexer in Cognitive Search. The process of creating data sources, indexes and indexers is well-documented.

    Ensure that the defined data source includes the high watermark change detection. We are going to disable periodic indexing and use an on-demand approach instead, and need to ensure that the indexing is incremental.

    "dataChangeDetectionPolicy": {
            "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
            "highWaterMarkColumnName": "_ts"
        },
    

    I should note that the instructions given in the documentation use REST API calls from Postman or VS Code. This is not currently necessary, as the Azure Portal supports all the necessary interface elements for defining a data source, index, and indexer for use with CosmosDB MongoDB.

    Once the search components have been defined, it will now be possible to glue it all together:

    Run the indexer and verify that your documents appear as intended in the search index. Note that the search index doesn’t need to contain the whole document. Only the fields marked as retrievable will be transferred to the index. I was able to use the same read model to query the search index, with the understanding that not all fields would be available for use when retrieving search results.

    Verify that your searches work. The Cognitive Search resource has a search explorer where you can choose an index and run queries against it.

    Update your write activities on searchable collections to also run the indexer. The code below is from an Event Sourced system which has a distinct read and write side. A projection is a persistent subscription to the event store that duplicates changes into the document database. I’ve updated the Projection constructor to optionally allow passing a search endpoint and indexer to run.

        public Projection(
            IMongoClient mongo, 
            Projector projector, 
            SecretClient secrets, 
            string? searchEndpoint = null, 
            string? indexer = null)
        {
            _mongo = mongo;
            _projector = projector;
            _secrets = secrets;
            _searchEndpoint = searchEndpoint;
            _indexer = indexer;
        }
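
    Wiring this up might then look something like the following (the indexer name is a placeholder; only projections for searchable collections pass the optional arguments):

    // Projections for non-searchable collections simply omit the search arguments.
    var projection = new Projection(
        mongoClient,
        projector,
        secretClient,
        searchEndpoint: configuration["Azure:Cognitive:SearchEndpoint"],
        indexer: "mymodel-indexer");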
    
    

    Then, after completing the projection to the document database:

            if (!string.IsNullOrWhiteSpace(_searchEndpoint))
            {
                // run the indexer if it's been provided
                var key = await _secrets.GetSecretAsync("CognitiveSearch-ApiKey");
                var indexClient = new SearchIndexerClient(new Uri(_searchEndpoint), 
                    new AzureKeyCredential(key.Value.Value));
                await indexClient.RunIndexerAsync(_indexer);
            }
    
    

    That’s the read side taken care of. Every projection to the read side will result in the new or updated document being indexed and very quickly available to search. “Very quickly” here means that the human delay between triggering persistence of the document and issuing a search query is more than sufficient for the indexer to do its work. There is _some_ delay, but in practical terms it is real-time.

    I’m not fond of having to pull the API key out of the secret vault every time, but RBAC access to the search endpoint is in public preview and not yet supported by the SDK. At some point we will presumably be able to provide an access token instead of the administrative key and reduce the permissions allowed by the application to the search service.

    Now we simply need to replace the MongoDB search code with something that queries the Cognitive Search index:

    [HttpGet("search")]
    public async Task<IReadOnlyList<ReadModels.MyModel>> Search([FromQuery] string search)
    {
        var endpoint = _configuration["Azure:Cognitive:SearchEndpoint"];
        var index = _configuration["Azure:Cognitive:SearchIndexName:MyModel"];
        var key = await _secrets.GetSecretAsync("CognitiveSearch-ApiKey");
        var credential = new AzureKeyCredential(key.Value.Value);
        var searchClient = new SearchClient(new Uri(endpoint), index, credential);
        var results = await searchClient.SearchAsync<ReadModels.MyModel>(search);
        var list = await results.Value.GetResultsAsync().Select(result => result.Document).ToListAsync();
        return list.AsReadOnly();
    }
    

    There is the end-to-end solution. Any time an aggregate is persisted to the event store, the resulting projection will also run the indexer and index the newly updated document. Because our search index schema is a subset of our read model, we can use the same model classes with the caveat that not all fields will be available from the results of a search.

    Some final notes:

    While the Azure Portal API for importing data supports a connection string using ApiKind=MongoDb, this is not enabled by default. It is necessary to join the public preview (which is linked under the Cognitive Search documentation referring to CosmosDB MongoDB API) for now to enable ApiKind in the connection string. Once it is enabled, however, you should be able to translate the instructions provided in the Cognitive Search documentation for use in the Azure Portal.

    This is obviously a more involved solution than having the full-text index stored directly in the database, but I think the benefits outweigh the costs. A search index is a relatively inexpensive thing (you get 15 of them for $100USD/mo) and provides full-text search of documents of arbitrary complexity. Depending on your search tier, you may also have unlimited numbers of documents in your index. This is an extremely scalable, customizable, and easy-to-use search that is useful in many applications. There are many features you will find useful in your own application, and the initial setup is really very easy.

  • Service Location on Kubernetes using System Environment

    July 5th, 2022

    I unexpectedly came across the table of service environment variables in the AKS documentation.

    Apologies for this tangent, but I found this very difficult to troubleshoot, and I want to put it here in case it helps someone. I first noticed this when I was attempting to start my EventStore DB container, and kept getting this very strange error:

    Error while parsing options: The option ServicePortEventstore is not a known option. (Parameter ‘ServicePortEventstore’)

    This was incredibly confusing to me, since nowhere was I setting this option, as far as I could see. After examining the EventStore documentation, I noticed that EventStore was able to configure itself from the environment by simply turning any environment variable prefixed with EVENTSTORE_ into an option. So, for example, EVENTSTORE_HTTP_PORT would translate to the command-line option --http-port.

    I concluded, then, that there must be an environment variable EVENTSTORE_SERVICE_PORT_EVENTSTORE being set somewhere. What a strange variable to be set! But, the documentation linked above says that a service named ‘eventstore’ would cause an environment variable EVENTSTORE_SERVICE_PORT to be created. Since there is no --service-port option, the existence of this environment variable causes EventStore to be unable to start up. The answer, then, is to not use the name “eventstore” in your deployments or services. I replaced the string eventstore with esdb, and the error went away.

    I’d note, though, that these variables could potentially be quite useful for service discovery. You can enumerate through all the environment variables to determine the service names defined in the cluster, and then use the same environment variables to find endpoint information. The example given in the documentation shows construction of a URL from these environment variables, but I’m sure your imagination can think of better 🙂
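
    As a small illustration of that idea, a process running in the cluster could build a service map from its own environment, since Kubernetes injects {SERVICE}_SERVICE_HOST and {SERVICE}_SERVICE_PORT variables for each active service (a minimal sketch, not a full discovery mechanism):

    using System.Collections;

    // Enumerate the *_SERVICE_HOST variables and pair each with its *_SERVICE_PORT.
    var services = Environment.GetEnvironmentVariables()
        .Cast<DictionaryEntry>()
        .Select(e => (string)e.Key)
        .Where(k => k.EndsWith("_SERVICE_HOST"))
        .Select(k => k[..^"_SERVICE_HOST".Length])
        .Select(name => new
        {
            Name = name,
            Host = Environment.GetEnvironmentVariable($"{name}_SERVICE_HOST"),
            Port = Environment.GetEnvironmentVariable($"{name}_SERVICE_PORT")
        });

    foreach (var svc in services)
        Console.WriteLine($"{svc.Name} -> {svc.Host}:{svc.Port}");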

  • Time Zones in Blazor WASM

    July 5th, 2022

    Any system that has clients in multiple time zones is always problematic. With the rise in cloud services and containerized applications, it is frequently the case that your web or desktop clients do not use the same time as your server components (which are more frequently running in UTC). Typically this is dealt with by using DateTimeOffset instead of DateTime. But I’ve noticed that it doesn’t quite work as expected in Blazor WASM. While the actual DateTime value itself works as expected, the times are always displayed in GMT. Asking clients to work in GMT simply because the server or database does is not going to be acceptable.

    The odd thing is that Blazor WASM seems to know the correct time zone offset:

    <p>
        Local Time Zone: @TimeZoneInfo.Local.DisplayName<br />
        Local Time Offset: @TimeZoneInfo.Local.BaseUtcOffset<br />
        Local Time: @DateTimeOffset.Now.ToString("R")
    </p>
    
    

    But, the Local Time displayed here is in GMT! Why? I haven’t really been able to find an answer, but plenty of similar complaints. The real oddity is as follows:

    <p>
        Local Time: @DateTimeOffset.Now.LocalDateTime.ToString("g")
    </p>
    

    I use the “g” format here because LocalDateTime is a DateTime property, without time zone information. This displays the correct local time! So why doesn’t DateTimeOffset.Now contain the correct offset information? This seems to remain a mystery.

    Still, this is at least workable. The correct time is stored in the database and it is possible to display the local user’s time for DateTimeOffset values using the LocalDateTime property. A combination of this, plus the (correct, thankfully) data in System.TimeZoneInfo should provide all of the necessary components to display the time as desired.
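
    A small helper along those lines (the extension method name is mine) could be:

    // Workaround sketch: format via LocalDateTime, which Blazor WASM does convert
    // correctly, instead of formatting the DateTimeOffset (and its offset) directly.
    public static class DateTimeOffsetExtensions
    {
        public static string ToLocalDisplay(this DateTimeOffset value, string format = "g") =>
            value.LocalDateTime.ToString(format);
    }

    In a component, something like @order.CreatedAt.ToLocalDisplay() then renders the user’s local time, and TimeZoneInfo.Local is available if the zone name should be shown alongside it.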

  • An Event Store DB Container on AKS

    July 3rd, 2022

    I am perhaps a little stubborn and impatient, but I don’t want to go through the whole provisioning process for a staging Event Store cluster. Indeed, a single pod attached to some storage should be sufficient for the needs of the staging cluster. However, support for Event Store DB on Kubernetes is in a bit of limbo. Because Event Store (understandably) does not want to support a production virtualized cluster, the Helm chart has been deprecated in favor of an as-yet-to-be-developed cluster operator. So, here are the steps to get this working on an Azure Kubernetes cluster:

    1. Create persistent volume and persistent volume claim
    2. Change the ownership of the volume mount
    3. Create deployment
    4. Create service
    5. Test

    Persistent Volume

    We want to host the Event Store DB data on a managed disk. Begin by creating a managed disk of the desired size in the Azure portal, in the correct resource group. You will then create a file, pv-azuredisk.yaml to apply to your cluster. The file will look like this:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-eventstore-azuredisk
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: managed-csi
      csi:
        driver: disk.csi.azure.com
        readOnly: false
        volumeHandle: /subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group-name>/providers/Microsoft.Compute/disks/<my-managed-disk-name>
        volumeAttributes:
          fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-eventstore-azuredisk
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      volumeName: pv-eventstore-azuredisk
      storageClassName: managed-csi
    

    You should replace the fields in the volumeHandle with your values, and update the storage size request as desired. I created a separate namespace to hold the Event Store deployment:

    kubectl create ns eventstore

    Then deploy the PersistentVolumeClaim with

    kubectl apply -n eventstore -f pv-azuredisk.yaml

    Change Ownership of Volume Mount

    The volume mount will have the default ownership that the filesystem was created with, namely root:root. Since the Event Store DB Docker image runs as UID 1000:1000, you’ll need to change the ownership of the volume mount or the image won’t start up. We can do this by creating a “shell” deployment with the same volume mount and changing the ownership from there. This only needs to be done once.

    Ubuntu Deployment

    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ubuntu-deployment
      labels:
        app: ubuntu
    spec:
      selector:
        matchLabels:
          app: ubuntu
      replicas: 1
      template:
        metadata:
          labels:
            app: ubuntu
        spec:
          containers:
            - name: ubuntu
              image: ubuntu:latest
              stdin: true
              tty: true
              volumeMounts:
                - name: eventstore-azure
                  mountPath: /mnt/eventstore
          volumes:
            - name: eventstore-azure
              persistentVolumeClaim:
                claimName: pvc-eventstore-azuredisk

    After applying this:

    kubectl apply -n eventstore -f shell.yaml

    you can now attach to a bash shell:

    kubectl get pods -n eventstore
    kubectl attach -n eventstore -it <ubuntu-pod-name>

    Since we mounted the Event Store volume mount at /mnt/eventstore, we can simply:

    cd /mnt/eventstore
    chown -R 1000:1000 .

    The shell has served its purpose for now, so delete the deployment:

    kubectl delete -n eventstore -f shell.yaml

    Now we can deploy Event Store with the following file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: eventstore-deployment
      labels:
        app: eventstore
    spec:
      selector:
        matchLabels:
          app: eventstore
      replicas: 1
      template:
        metadata:
          labels:
            app: eventstore
        spec:
          restartPolicy: Always
          containers:
            - name: eventstore
              image: eventstore/eventstore:latest
              ports:
                - containerPort: 2113
              env:
                - name: EVENTSTORE_CLUSTER_SIZE
                  value: "1"
                - name: EVENTSTORE_RUN_PROJECTIONS
                  value: "All"
                - name: EVENTSTORE_START_STANDARD_PROJECTIONS
                  value: "true"
                - name: EVENTSTORE_HTTP_PORT
                  value: "2113"
                - name: EVENTSTORE_INSECURE
                  value: "true"
                - name: EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP
                  value: "true"
              volumeMounts:
                - name: eventstore-azure
                  mountPath: /var/lib/eventstore
          volumes:
            - name: eventstore-azure
              persistentVolumeClaim:
                claimName: pvc-eventstore-azuredisk
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: eventstore
    spec:
      type: ClusterIP
      selector:
        app: eventstore
      ports:
        - name: eventstore
          port: 2113
          targetPort: 2113

    Apply the above file to your cluster, and it should start.

    You can verify that it is running:

    kubectl get all -n eventstore

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/eventstore-deployment-5cff847b4f-xcjps   1/1     Running   0          29m
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    service/eventstore   ClusterIP   10.2.0.219   <none>        2113/TCP   21m
    
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/eventstore-deployment   1/1     1            1           29m
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/eventstore-deployment-5cff847b4f   1         1         1       29m

    You can then access the dashboard for Event Store using port-forward:

    kubectl port-forward -n eventstore --address 127.0.0.1 pod/eventstore-deployment-5cff847b4f-xcjps 2113:2113

    The dashboard will be accessible through http://localhost:2113.

    I should note that this configuration is not recommended for production. It is a fair bit of setup to get Event Store deployed to your test cluster, but once it is, you won’t have to redo it. You can keep the shell.yaml file in case you want to reconnect to the volume mount and, say, erase all the data. It is theoretically possible to use access mode ReadWriteMany with host caching off to allow for more than 1 replica, but then you might as well deploy a bare metal cluster.

    Hopefully Event Store will renew their efforts to create a cluster operator soon. I would imagine that the steps look similar to the manual process I’ve outlined above, but I can understand that supporting multiple cloud environments makes this difficult, as well as the caveat that this is not intended for production.

  • Starting an Architecture Practice in your Organization

    June 29th, 2022

    I think I can speak for many developers when I say that there comes a point in every project when things just seem to get stuck. And from that day forward, you never look at the project the same way again. So you move on, hoping that you won’t get stuck again. And, perhaps you won’t, but why keep moving around only to get disappointed again? I’d propose an alternative to moving on: making development fun again.

    Development is a job that is really dependent on the work itself for fulfillment. Working on a great project with great people is often worth more than making more money doing something boring or tedious; or worse, downright painful. If you find yourself in the latter category, it might be time to look for bigger-picture solutions rather than keep running into the same issues over and over. So, as a developer, sometimes the best thing you can do for yourself is find a different perspective. To that end, starting or joining an architecture practice at your organization can provide you with both new opportunities and the chance to really “build it right”.

    I started in architecture with a value proposition to my CIO. I didn’t claim that I was going to save tons of money, or that I was going to revolutionize anything. But I did have solutions to problems the organization was currently facing. It was his buy-in that made everything possible. Executive support is a critical step in establishing the architecture practice, so take your time with your value proposition. Really think about the challenges your organization faces, and how you can provide not only technological solutions, but business solutions as well. IT Architecture is about bridging the gap between business and IT – to understand business problems in technical terms, and to present technical solutions in business terms.

    The next step I took was to build a roadmap. In the case where you are just establishing an architecture practice, there’s probably a lot of chaos. It’s important to have a vision of where you want to get to. Research is important here. Finding out about best practices, modern designs, and new techniques is the bread and butter of being an architect. Once you have an idea, however nebulous it might be right now, it’s time to figure out how to get there. Current State – Future State – Gap Analysis. That’s the essence of a roadmap. Here’s where you communicate your plan to the executive and present clear steps and timelines to achieve goals. Don’t worry about solving everything at once – establish some key principles and focus on those. Architecture is an iterative process, and you’ll hopefully get a chance to build another roadmap.

    5 years ago I wrote my first roadmap. 6 months ago I wrote my second. Now I am in charge of making it happen. It is far better for me to have taken a different path, and it has been far better for the organization. This all started because I thought: “there must be a better way”. If you think there’s a better way, don’t be afraid to show its value and find a champion for your ideas. You just might get to implement them.

  • Game Review – Sherlock Chapter 1

    June 29th, 2022

    Frogwares brings back their successful line of Sherlock Holmes games as an open world adventure. Many familiar gameplay elements are combined into an enjoyable and immersive adventure. Cases abound, from the very small, helping the inept local police with some crime scene reconstructions – to the very large: how did Violet Holmes die?

    Holmes arrives on the small island of Cordona seeking closure as he visits his mother’s grave. This is as early as we’ve ever seen Holmes in an adventure, coming well before Holmes meets Watson. But, Sherlock’s childhood friend Jon is there to fill the void. Mystery never follows far behind, and Holmes finds that closure is not so simple.

    Jon will put his own comments about your performance into his diary for you to peruse. Acting like a dullard or behaving in unsavory ways is sure to evoke something pithy. But, he is equally generous with his praise when you dazzle as Holmes is known to.

    Your mind palace will still allow you to combine clues to form deductions and find new avenues of investigation. As with the previous incarnation of the mind palace, some deductions can have multiple implications, and choosing the wrong one will lead to the wrong conclusion.

    I am fond of a map function that requires you to actually read the map and the clues. While most games hold your hand and give you modern conveniences such as mini-maps, pathfinding and immediate knowledge of all locations, this game will have you read the clues and scour the map for the correct location. We see the same sort of map function in The Sinking City as well, also by Frogwares. I think it helps add to the immersiveness.

    Your casebook is very well-organized. Holmes is nothing if not methodical, and you will find all the evidence laid out neatly, with icons suggesting the necessary actions to be taken. Actions include: crime scene reconstructions, asking people about specific evidence, disguising yourself, following footprints and other traces. While the game guides you toward new clues, there is still plenty of opportunity to use your brain and come to satisfying conclusions worthy of Holmes himself.

    Holmes needs to learn how to defend himself in this episode. While armed with a revolver and a box of pepper snuff, Holmes still has a revulsion for murder, and combat is geared toward arresting and subduing your opponents, not simply killing them. Frogwares helpfully provides you with a number of bandit lairs to hone your combat skills. I probably spent 2-3 hours in one of these lairs alone (and did not complete it, alas). My combat skills improved significantly though by pursuing this side activity.

    Overall, this episode presents very well as an open world adventure, giving a lot of freedom to pursue your own agenda, but having a number of cases and adventures to pursue at your leisure. The stories of Holmes’ early life are enjoyable, and a plausible picture of Holmes’ youth emerges. If you’ve enjoyed Frogwares’ previous episodes, you will love this one. It boasts excellent writing, thoughtful puzzles and a beautiful world to explore through the eyes of the ever-curious Sherlock Holmes.

  • Modern Design Techniques – Event Sourcing

    June 20th, 2022

    I’ve spent the last few months studying Alexey Zimarev’s excellent book, Hands-On Domain Driven Design with ASP.NET Core (2017). While 5 years old now, it was clearly cutting edge at the time, since the Event Sourced system he demonstrates is now becoming more popular. I’ve found several reliable sources who refer to Event Sourcing by name, but very few actual implementations even today. Zimarev’s remains by far the most complete.

    The Event Sourced system precipitates some properties I haven’t seen before. First, it comes with a natural audit log. Every action taken on the domain is stored in the event store along with our choice of metadata, most obviously the user taking the action. The event store is not queryable, necessitating the use of a projection into a readable form. While I’ve designed software using logical CQRS, this system is the first implementation I’ve seen that has different capabilities on the command and query sides, and allows many choices to fulfill these capabilities. Zimarev uses RavenDB + Event Store DB, a good choice today as they can both be deployed in cloud clusters to the AWS, GCP or Azure region of your choice. The specific choices for command and query side are quite involved themselves, and the subject for another article.

    The next thing that jumps out at me is that the system is truly asynchronous. I had to move away from the idea that I present an object to the server for processing, and wait for some kind of result back (e.g. a database ID). Instead, the client does all of these id generations in advance and simply issues commands to the server to keep server state in sync. (Side note – look into ULIDs instead of GUIDs for universal IDs). This has its own challenges that must be understood. For example, the queryable database is now the “query” (“read”) side. However, there is latency between when an event enters the store, and when the read side is updated. This only matters if we insist on re-reading the object from the database to get something back. Universal id generation is important here, and the relationship between client and server is flipped on its head. The client holds the master state, and issues commands to the server to persist it.
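
    For illustration, a client-side create might look like this (the command type and the ICommandSender gateway are hypothetical, not taken from Zimarev’s implementation):

    // The client generates the aggregate id itself (a ULID library could be swapped
    // in for Guid) and fires the command without waiting for a database id to come back.
    public interface ICommandSender
    {
        Task SendAsync<TCommand>(TCommand command);
    }

    public record CreateAddress(Guid AddressId, string StreetAddress, string City, string PostalCode);

    public class AddressEditor
    {
        private readonly ICommandSender _commands;

        public AddressEditor(ICommandSender commands) => _commands = commands;

        public Task CreateAsync(string street, string city, string postalCode) =>
            _commands.SendAsync(new CreateAddress(Guid.NewGuid(), street, city, postalCode));
    }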

    Eventual consistency is a new challenge to developers. We should not avoid systems that are not immediately consistent, however. We simply need to learn how to build them such that we can truly fire and forget our commands, and be notified only if something goes wrong. The performance potential here is enormous: how much database effort is spent on joins and indexes to split object data into multiple tables, when we could simply store the data the way it is actually used? While some data (accounting, e.g.) seems to be better suited to tabular data, other data never needs to be anything but an object, and it is easiest to store it as such as a JSON document in a document database.

    The biggest challenge that I face right now is the migration of the legacy system. It is simply too big to flip the switch on day 1. It needs to incrementally migrate to a new system, which means that the existing version must be able to write to the event store, and the new version must project back to the original database in addition to the document database. Of course, this is a lot of work that will be immediately obsolete, but we see this technique in bridge maintenance as well. If we wish to upgrade a bridge while allowing people to continue to cross the river, we need to build a temporary bridge first before closing the bridge for upgrades.

    I don’t know what my performance expectations are at this point. Intuitively, it feels like a better design than using a relational database for absolutely everything. I know I no longer have to write error-prone mapping code to shove the data into an ORM. The more I work with document databases, the more I feel that relational databases are not appropriate persistence for object-based data. An interesting thought is that the databases will do less work. Writing to event streams is not compute-intensive work, nor is reading a selection of documents from a document database. The costs are already lower for the two databases, but I haven’t seen the resource requirements yet. The cost is simply the latency between write and read sides, which is handled by building the system asynchronously and making the application client-centric rather than server-centric. The projection, of course, is handled by the application server, so the latency can be managed to an extent by increasing the resources available to the application server.

    Those are my initial thoughts about an Event Sourced system. With any luck, I’ll have a chance to design one soon as our company updates their applications to be cloud-native. I hope to see some tangible results this year.

  • Game Review – Diablo Immortal

    June 5th, 2022

    Diablo Immortal is Blizzard’s latest offering in the Diablo world of Sanctuary. For the first time, however, this game has been written from the ground up for mobile. While you may think that this means it’s another phone-based “ARPG”, it’s anything but. While there is a phone app as well, the iPad app is a different version again, and Blizzard opened up a PC port on June 2. I’ve tried it out on all of the platforms and offer my thoughts below.

    Running on iPad Pro 2020 model, the game runs smoothly at 60fps and looks great. The iPad and iPhone versions are currently in their release states, while the PC version is in open beta. The game is best played using a gamepad, which is supported on all platforms. The phone UI is surprisingly good given the size of the screen, but I certainly wouldn’t want to have to use it for serious combat, or on anything smaller than the iPhone 12. Again, you’ll probably want a gamepad with a stand. The differences between the platforms are minimal, though. This is a true “play anywhere” experience.

    I was myself skeptical of a game that targeted mobile when I first heard about it, but having played it on iPad especially, it really is one of the best native games for the iPad and iPhone. And really, Blizzard hasn’t sacrificed much of the original game to make it a mobile MMO. Complaints in-game seem to mostly be related to the “Pay-to-Win” aspect, which I haven’t yet seen myself. You can buy a limited selection of in-game items that improve combat rating, but most of what I’ve seen has been cosmetic.

    Once you get through the initial hand-holding, the combat starts to have a Diablo feel about it. A primary attack, 4 secondary attacks, and an “ultimate” ability make up your arsenal. If you like the previous Diablo games, you shouldn’t be disappointed by either the gameplay or the level of difficulty. For serious Diablo players, difficulty can be set once you surpass level 60 into the Paragon levels. It doesn’t take long to get rolling, and soon you’ll be blasting your way through hordes of Hell’s minions non-stop.

    The story itself takes place between the events of Diablo 2 and Diablo 3, and you will see a lot of familiar friends, enemies, and places. The plot centers on the fragments of the Worldstone that were scattered when Tyrael shattered it. Even a fragment of the Worldstone is a powerful artifact, and many ambitious agents of Hell are racing to retrieve them and seize the power contained within them. You, of course, are the hero to stop them.

    Many familiar gameplay elements are present: rifts and bounties. Immortal adds many familiar MMO elements, such as dungeons and raids (which are done in warbands of 8 players, a considerable improvement on the 4-player limit of previous games), crafting of various types of equipment and leaderboards.

    The monetization of the game is 100% microtransactions. There are monthly benefits such as the empowered battle pass that give you additional rewards for progress through the game guide, as well as the Horn of Plenty, which provides additional login rewards. These are mostly convenience items, though some rarer crafting components and the crests required for Elder Rifts are also provided. I didn’t spend a lot of time in the store – there seem to be “troves” available after every major story point and cosmetic items galore. If you want to spend money, there’s no shortage of things to buy. But free-to-play is honestly just that. You won’t get hit with ads beyond the store ads at login and the notifications of new “troves” being available.

    Overall, Blizzard has achieved their goal of making the Diablo universe available on both desktop and mobile platforms. I’ve found it quite enjoyable and immersive, and the social aspect has been thus far non-toxic, with very few spambots and very little truly objectionable conversation. There are, of course, the usual trolls, but the signal-to-noise ratio of the chats is pretty high. I’m not a particularly social player, but the game guide encourages finding a warband to accomplish the higher-level raid content, and it is relatively easy to join one. Without the massive size of the “newb” guilds on other MMOs, it feels like you can actually get to know people and not get lost in a sea of other “newbs”. A warband seems to be a good size, and the game provides many goals for them. Worth checking out for newcomers and Diablo veterans alike.
