Brian Richardson's Blog

  • Time Zones in Blazor WASM

    July 5th, 2022

    Any system with clients in multiple time zones is problematic. With the rise of cloud services and containerized applications, it is frequently the case that your web or desktop clients do not share a time zone with your server components (which increasingly run in UTC). Typically this is dealt with by using DateTimeOffset instead of DateTime. But I’ve noticed that it doesn’t quite work as expected in Blazor WASM: while the underlying DateTime value is correct, times are always displayed in GMT. Asking clients to work in GMT simply because the server or database does is not acceptable.

    The odd thing is that Blazor WASM seems to know the correct time zone offset:

    <p>
        Local Time Zone: @TimeZoneInfo.Local.DisplayName<br />
        Local Time Offset: @TimeZoneInfo.Local.BaseUtcOffset<br />
        Local Time: @DateTimeOffset.Now.ToString("R")
    </p>
    
    

    But the Local Time displayed here is in GMT! Why? I haven’t been able to find an answer, only plenty of similar complaints. The real oddity is as follows:

    <p>
    Local Time: @DateTimeOffset.Now.LocalDateTime.ToString("g")
    </p>
    

    I use the “g” format here because LocalDateTime is a DateTime property, without time zone information. This displays the correct local time! So why doesn’t DateTimeOffset.Now contain the correct offset information? This seems to remain a mystery, though it is worth noting that the “R” (RFC 1123) format specifier always converts its value to UTC by definition, which may account for the GMT display above.

    Still, this is at least workable. The correct time is stored in the database, and the local user’s time can be displayed for DateTimeOffset values using the LocalDateTime property. A combination of this, plus the (thankfully correct) data in System.TimeZoneInfo, should provide all of the necessary components to display the time as desired.
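
    For display purposes, a small extension method can centralize this workaround. A minimal sketch (the ToLocalDisplay name and the “g” format are my own choices, not from any library):

    public static class DateTimeOffsetExtensions
    {
        // In Blazor WASM, DateTimeOffset.Now reports a GMT offset, but the
        // LocalDateTime property (and TimeZoneInfo.Local) are correct, so we
        // route all display formatting through LocalDateTime.
        public static string ToLocalDisplay(this DateTimeOffset value) =>
            value.LocalDateTime.ToString("g");
    }

    In a component this reads naturally, e.g. @item.CreatedAt.ToLocalDisplay() (item and CreatedAt are hypothetical stand-ins for your own model).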

  • An Event Store DB Container on AKS

    July 3rd, 2022

    I am perhaps a little stubborn and impatient, but I don’t want to go through the whole provisioning process for a staging Event Store cluster. Indeed, a single pod attached to some storage should be sufficient for the needs of the staging cluster. However, support for Event Store DB on Kubernetes is in a bit of limbo: understandably not wanting to support virtualized Event Store clusters in production, the team has deprecated the Helm chart in favor of an as-yet-to-be-developed cluster operator. So, here are the steps to get this working on an Azure Kubernetes cluster:

    1. Create persistent volume and persistent volume claim
    2. Change the ownership of the volume mount
    3. Create deployment
    4. Create service
    5. Test

    Persistent Volume

    We want to host the Event Store DB data on a managed disk. Begin by creating a managed disk of the desired size in the Azure portal, in the correct resource group. Then create a file, pv-azuredisk.yaml, to apply to your cluster. The file will look like this:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-eventstore-azuredisk
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: managed-csi
      csi:
        driver: disk.csi.azure.com
        readOnly: false
        volumeHandle: /subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group-name>/providers/Microsoft.Compute/disks/<my-managed-disk-name>
        volumeAttributes:
          fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-eventstore-azuredisk
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      volumeName: pv-eventstore-azuredisk
      storageClassName: managed-csi
    

    You should replace the fields in the volumeHandle with your values, and update the storage size request as desired. I created a separate namespace to hold the Event Store deployment:

    kubectl create ns eventstore

    Then deploy the PersistentVolume and PersistentVolumeClaim with

    kubectl apply -n eventstore -f pv-azuredisk.yaml

    Change Ownership of Volume Mount

    The volume mount will be owned by root:root, the default ownership the filesystem was created with. Since the Event Store DB Docker image runs as UID 1000 (group 1000), you’ll need to change the ownership of the volume mount or the container won’t start. We can do this by creating a “shell” deployment with the same volume mount and changing the ownership from there. This only needs to be done once.

    Ubuntu Deployment

    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ubuntu-deployment
      labels:
        app: ubuntu
    spec:
      selector:
        matchLabels:
          app: ubuntu
      replicas: 1
      template:
        metadata:
          labels:
            app: ubuntu
        spec:
          containers:
            - name: ubuntu
              image: ubuntu:latest
              stdin: true
              tty: true
              volumeMounts:
                - name: eventstore-azure
                  mountPath: /mnt/eventstore
          volumes:
            - name: eventstore-azure
              persistentVolumeClaim:
                claimName: pvc-eventstore-azuredisk

    After applying this:

    kubectl apply -n eventstore -f shell.yaml

    you can now attach to a bash shell:

    kubectl get pods -n eventstore
    kubectl attach -n eventstore -it <ubuntu-pod-name>

    Since we mounted the Event Store volume at /mnt/eventstore, we can simply:

    cd /mnt/eventstore
    chown -R 1000:1000 .

    The shell has served its purpose for now, so delete the deployment:

    kubectl delete -n eventstore -f shell.yaml

    Now we can deploy Event Store with the following file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: eventstore-deployment
      labels:
        app: eventstore
    spec:
      selector:
        matchLabels:
          app: eventstore
      replicas: 1
      template:
        metadata:
          labels:
            app: eventstore
        spec:
          restartPolicy: Always
          containers:
            - name: eventstore
              image: eventstore/eventstore:latest
              ports:
                - containerPort: 2113
              env:
                - name: EVENTSTORE_CLUSTER_SIZE
                  value: "1"
                - name: EVENTSTORE_RUN_PROJECTIONS
                  value: "All"
                - name: EVENTSTORE_START_STANDARD_PROJECTIONS
                  value: "true"
                - name: EVENTSTORE_HTTP_PORT
                  value: "2113"
                - name: EVENTSTORE_INSECURE
                  value: "true"
                - name: EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP
                  value: "true"
              volumeMounts:
                - name: eventstore-azure
                  mountPath: /var/lib/eventstore
          volumes:
            - name: eventstore-azure
              persistentVolumeClaim:
                claimName: pvc-eventstore-azuredisk
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: eventstore
    spec:
      type: ClusterIP
      selector:
        app: eventstore
      ports:
        - name: eventstore
          port: 2113
          targetPort: 2113

    Apply the above file to your cluster, and it should start.
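
    Assuming the manifest was saved as eventstore.yaml (the filename is arbitrary):

    kubectl apply -n eventstore -f eventstore.yaml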

    You can verify that it is running:

    kubectl get all -n eventstore

    NAME                                         READY   STATUS    RESTARTS   AGE
    pod/eventstore-deployment-5cff847b4f-xcjps   1/1     Running   0          29m
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    service/eventstore   ClusterIP   10.2.0.219   <none>        2113/TCP   21m
    
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/eventstore-deployment   1/1     1            1           29m
    
    NAME                                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/eventstore-deployment-5cff847b4f   1         1         1       29m

    You can then access the dashboard for Event Store using port-forward:

    kubectl port-forward -n eventstore --address 127.0.0.1 pod/eventstore-deployment-5cff847b4f-xcjps 2113:2113

    The dashboard will be accessible through http://localhost:2113.
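
    As a final smoke test, you can write an event through the forwarded port from C#. This is a minimal sketch using the official EventStore.Client (gRPC) NuGet package; the stream and event names are throwaway choices of mine, and tls=false matches the EVENTSTORE_INSECURE setting above:

    using System.Text;
    using EventStore.Client;

    // Connect to the port-forwarded, insecure single-node instance.
    var settings = EventStoreClientSettings.Create("esdb://localhost:2113?tls=false");
    using var client = new EventStoreClient(settings);

    // Append a throwaway test event.
    var evt = new EventData(
        Uuid.NewUuid(),
        "smoke-test",
        Encoding.UTF8.GetBytes("{\"ok\":true}"));
    await client.AppendToStreamAsync("test-stream", StreamState.Any, new[] { evt });

    // Read it back to confirm the round trip.
    var events = client.ReadStreamAsync(Direction.Forwards, "test-stream", StreamPosition.Start);
    await foreach (var resolved in events)
        Console.WriteLine(resolved.Event.EventType);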

    I should note that this configuration is not recommended for production. It is a fair bit of setup to get Event Store deployed to your test cluster, but once it is, you won’t have to redo it. You can keep the shell.yaml file in case you want to reconnect to the volume mount and, say, erase all the data. It is theoretically possible to use access mode ReadWriteMany with host caching off to allow more than one replica, but at that point you might as well deploy a bare-metal cluster.

    Hopefully Event Store will renew their efforts to create a cluster operator soon. I would imagine the steps look similar to the manual process I’ve outlined above, but I can understand that supporting multiple cloud environments makes this difficult, as does the caveat that this is not intended for production.

  • Starting an Architecture Practice in your Organization

    June 29th, 2022

    I think I can speak for many developers when I say that there comes a point in every project when things just seem to get stuck. And from that day forward, you never look at the project the same way again. So you move on, hoping that you won’t get stuck again. And, perhaps you won’t, but why keep moving around only to get disappointed again? I’d propose an alternative to moving on: making development fun again.

    Development is a job that depends heavily on the work itself for fulfillment. Working on a great project with great people is often worth more than making more money doing something boring or tedious, or worse, downright painful. If you find yourself in the latter category, it might be time to look for bigger-picture solutions rather than keep running into the same issues over and over. So, as a developer, sometimes the best thing you can do for yourself is find a different perspective. To that end, starting or joining an architecture practice at your organization can provide you with both new opportunities and the chance to really “build it right”.

    I started in architecture with a value proposition to my CIO. I didn’t claim that I was going to save tons of money, or that I was going to revolutionize anything. But I did have solutions to problems the organization was currently facing. It was his buy-in that made everything possible. Executive support is a critical step in establishing the architecture practice, so take your time with your value proposition. Really think about the challenges your organization faces, and how you can provide not only technological solutions, but business solutions as well. IT Architecture is about bridging the gap between business and IT – to understand business problems in technical terms, and to present technical solutions in business terms.

    The next step I took was to build a roadmap. In the case where you are just establishing an architecture practice, there’s probably a lot of chaos. It’s important to have a vision of where you want to get to. Research is important here. Finding out about best practices, modern designs, and new techniques is the bread and butter of being an architect. Once you have an idea, however nebulous it might be right now, it’s time to figure out how to get there. Current State – Future State – Gap Analysis. That’s the essence of a roadmap. Here’s where you communicate your plan to the executive and present clear steps and timelines to achieve goals. Don’t worry about solving everything at once – establish some key principles and focus on those. Architecture is an iterative process, and you’ll hopefully get a chance to build another roadmap.

    5 years ago I wrote my first roadmap. 6 months ago I wrote my second. Now I am in charge of making it happen. It is far better for me to have taken a different path, and it has been far better for the organization. This all started because I thought: “there must be a better way”. If you think there’s a better way, don’t be afraid to show its value and find a champion for your ideas. You just might get to implement them.

  • Game Review – Sherlock Chapter 1

    June 29th, 2022

    Frogwares brings back their successful line of Sherlock Holmes games as an open world adventure. Many familiar gameplay elements are combined into an enjoyable and immersive adventure. Cases abound, from the very small, helping the inept local police with some crime scene reconstructions – to the very large: how did Violet Holmes die?

    Holmes arrives on the small island of Cordona seeking closure as he visits his mother’s grave. This is the earliest we’ve ever seen Holmes in an adventure, well before he meets Watson. But Sherlock’s childhood friend Jon is there to fill the void. Mystery is never far behind, and Holmes finds that closure is not so simple.

    Jon will put his own comments about your performance into his diary for you to peruse. Acting like a dullard or behaving in unsavory ways is sure to evoke something pithy. But, he is equally generous with his praise when you dazzle as Holmes is known to.

    Your mind palace will still allow you to combine clues to form deductions and find new avenues of investigation. As with the previous incarnation of the mind palace, some deductions can have multiple implications, and choosing the wrong one will lead to the wrong conclusion.

    I am fond of the map function, which requires you to actually read the map and the clues. While most games hold your hand with modern conveniences such as mini-maps, pathfinding and immediate knowledge of all locations, this game has you read the clues and scour the map for the correct location. The same sort of map function appears in The Sinking City, also by Frogwares. I think it adds to the immersion.

    Your casebook is very well organized. Holmes is nothing if not methodical, and you will find all the evidence laid out neatly, with icons suggesting the necessary actions to be taken: crime scene reconstructions, asking people about specific evidence, disguising yourself, and following footprints and other traces. While the game guides you toward new clues, there is still plenty of opportunity to use your brain and come to satisfying conclusions worthy of Holmes himself.

    Holmes needs to learn how to defend himself in this episode. Though armed with a revolver and a box of pepper snuff, Holmes has a revulsion for murder, and combat is geared toward arresting and subduing your opponents, not simply killing them. Frogwares helpfully provides a number of bandit lairs in which to hone your combat skills. I probably spent 2-3 hours in one of these lairs alone (and did not complete it, alas), but pursuing this side activity improved my combat skills significantly.

    Overall, this episode presents very well as an open world adventure, giving a lot of freedom to pursue your own agenda, but having a number of cases and adventures to pursue at your leisure. The stories of Holmes’ early life are enjoyable, and a plausible picture of Holmes’ youth emerges. If you’ve enjoyed Frogwares’ previous episodes, you will love this one. It boasts excellent writing, thoughtful puzzles and a beautiful world to explore through the eyes of the ever-curious Sherlock Holmes.

  • Modern Design Techniques – Event Sourcing

    June 20th, 2022

    I’ve spent the last few months studying Alexey Zimarev’s excellent book, Hands-On Domain Driven Design with ASP.NET Core (2017). While five years old now, it was clearly cutting edge at the time: the Event Sourced system he demonstrates is only now becoming popular. I’ve found several reliable sources that refer to Event Sourcing by name, but very few actual implementations even today. Zimarev’s remains by far the most complete.

    An Event Sourced system exhibits some properties I haven’t seen before. First, it comes with a natural audit log: every action taken on the domain is stored in the event store along with our choice of metadata, most obviously the user taking the action. The event store is not queryable, necessitating a projection into a readable form. While I’ve designed software using logical CQRS, this system is the first implementation I’ve seen that has different capabilities on the command and query sides, and allows many choices to fulfill these capabilities. Zimarev uses RavenDB + Event Store DB, a good choice today as both can be deployed in cloud clusters to the AWS, GCP or Azure region of your choice. The specific choices for the command and query sides are quite involved themselves, and a subject for another article.
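
    To make the projection idea concrete, here is a minimal sketch with hypothetical event and read-model types of my own (not code from the book). A projection is just a fold: it replays an append-only stream of events into a document the query side can serve directly.

    // Hypothetical domain events, as appended to the event store.
    public record OrderPlaced(Guid OrderId, decimal Total);
    public record OrderShipped(Guid OrderId);

    // The read model: a plain document for the query side.
    public record OrderSummary(Guid OrderId, decimal Total, bool Shipped);

    public static class OrderProjection
    {
        // Fold one event into the read model. A projection host would call
        // this for each event in order, then upsert the resulting document
        // into the query-side database.
        public static OrderSummary? Apply(OrderSummary? state, object evt) => evt switch
        {
            OrderPlaced e => new OrderSummary(e.OrderId, e.Total, Shipped: false),
            OrderShipped when state is not null => state with { Shipped = true },
            _ => state // ignore events this projection doesn't handle
        };
    }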

    The next thing that jumps out at me is that the system is truly asynchronous. I had to move away from the idea of presenting an object to the server for processing and waiting for some kind of result back (e.g. a database ID). Instead, the client generates all IDs in advance and simply issues commands to the server to keep server state in sync. (Side note: look into ULIDs instead of GUIDs for universal IDs.) This has its own challenges that must be understood. For example, the queryable database is now the “query” (“read”) side, and there is latency between when an event enters the store and when the read side is updated. This only matters if we insist on re-reading the object from the database to get something back. Universal ID generation is important here, and the relationship between client and server is flipped on its head: the client holds the master state and issues commands to the server to persist it.
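
    A sketch of that flipped relationship, with hypothetical names of my own (ICommandBus stands in for whatever transport carries commands to the server, HTTP or a queue):

    public record PlaceOrder(Guid OrderId, decimal Total);

    public interface ICommandBus
    {
        Task SendAsync(object command);
    }

    public class OrderClient
    {
        private readonly ICommandBus _bus;

        public OrderClient(ICommandBus bus) => _bus = bus;

        // The client mints the ID itself (a ULID also works, and sorts nicely),
        // so it never waits for the database to hand one back: the command is
        // fired off, and local state is treated as the master copy.
        public Guid Place(decimal total)
        {
            var orderId = Guid.NewGuid();
            _ = _bus.SendAsync(new PlaceOrder(orderId, total)); // fire and forget
            return orderId;
        }
    }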

    Eventual consistency is a new challenge for developers, but we should not avoid systems that are not immediately consistent. We simply need to learn how to build them so that we can truly fire and forget our commands and be notified only if something goes wrong. The performance potential here is enormous: how much database effort is spent on joins and indexes to split object data across multiple tables, when we could simply store the data the way it is actually used? While some data (accounting, for example) seems better suited to tabular form, other data never needs to be anything but an object, and it is easiest to store it as such, as a JSON document in a document database.
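
    Persisting an object that way is nearly trivial with the RavenDB client mentioned earlier. A minimal sketch, assuming a local RavenDB server on its default port and a hypothetical Orders database (OrderSummary is the same illustrative read model as above):

    using Raven.Client.Documents;

    // Store the document exactly as the application uses it; no mapping code.
    using var store = new DocumentStore
    {
        Urls = new[] { "http://localhost:8080" },
        Database = "Orders"
    };
    store.Initialize();

    using var session = store.OpenSession();
    session.Store(new OrderSummary(Guid.NewGuid(), 99.95m, Shipped: false));
    session.SaveChanges();

    public record OrderSummary(Guid OrderId, decimal Total, bool Shipped);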

    The biggest challenge that I face right now is the migration of the legacy system. It is simply too big to flip the switch on day 1. It needs to incrementally migrate to a new system, which means that the existing version must be able to write to the event store, and the new version must project back to the original database in addition to the document database. Of course, this is a lot of work that will be immediately obsolete, but we see this technique in bridge maintenance as well. If we wish to upgrade a bridge while allowing people to continue to cross the river, we need to build a temporary bridge first before closing the bridge for upgrades.

    I don’t know what my performance expectations are at this point. Intuitively, it feels like a better design than using a relational database for absolutely everything. I know I no longer have to write error-prone mapping code to shove the data into an ORM. The more I work with document databases, the more I feel that relational databases are not appropriate persistence for object-based data. An interesting thought is that the databases will do less work. Writing to event streams is not compute-intensive work, nor is reading a selection of documents from a document database. The costs are already lower for the two databases, but I haven’t seen the resource requirements yet. The cost is simply the latency between write and read sides, which is handled by building the system asynchronously and making the application client-centric rather than server-centric. The projection, of course, is handled by the application server, so the latency can be managed to an extent by increasing the resources available to the application server.

    Those are my initial thoughts about an Event Sourced system. With any luck, I’ll have a chance to design one soon as our company updates their applications to be cloud-native. I hope to see some tangible results this year.

  • Game Review – Diablo Immortal

    June 5th, 2022

    Diablo Immortal is Blizzard’s latest offering set in the Diablo world of Sanctuary. For the first time, however, the game has been written from the ground up for mobile. While you may think this means it’s another phone-based “ARPG”, it’s anything but. There is a phone app, the iPad app is a different version again, and Blizzard opened up a PC port on June 2. I’ve tried it out on all of the platforms and offer my thoughts below.

    Running on a 2020 iPad Pro, the game runs smoothly at 60fps and looks great. The iPad and iPhone versions are currently in their release states, while the PC version is in open beta. The game is best played with a gamepad, which is supported on all platforms. The phone UI is surprisingly good given the size of the screen, but I certainly wouldn’t want to use it for serious combat, or on anything smaller than an iPhone 12; again, you’ll probably want a gamepad, plus a stand. The differences between the platforms are minimal, though. This is a true “play anywhere” experience.

    I was skeptical myself of a game targeting mobile when I first heard about it, but having played it on iPad especially, it really is one of the best native games for the iPad and iPhone. And really, Blizzard hasn’t sacrificed much of the original game to make it a mobile MMO. In-game complaints seem mostly related to the “Pay-to-Win” aspect, which I haven’t yet seen myself. You can buy a limited selection of in-game items that improve combat rating, but most of what I’ve seen has been cosmetic.

    Once you get through the initial hand-holding, the combat starts to have a Diablo feel about it. A primary attack, 4 secondary attacks, and an “ultimate” ability make up your arsenal. If you like the previous Diablo games, you shouldn’t be disappointed by either the gameplay or the level of difficulty. For serious Diablo players, difficulty can be set once you surpass level 60 into the Paragon levels. It doesn’t take long to get rolling, and soon you’ll be blasting your way through hordes of Hell’s minions non-stop.

    The story takes place between the events of Diablo 2 and Diablo 3, and you will see a lot of familiar friends, enemies, and places. It centers on the fragments of the Worldstone that were scattered when Tyrael shattered it. Even a fragment of the Worldstone is a powerful artifact, and many ambitious agents of Hell are racing to retrieve them and seize the power contained within. You, of course, are the hero to stop them.

    Many familiar gameplay elements are present, such as rifts and bounties. Immortal also adds many familiar MMO elements: dungeons and raids (done in warbands of 8 players, a considerable improvement on the 4-player limit of previous games), crafting of various types of equipment, and leaderboards.

    The monetization of the game is 100% microtransactions. There are monthly benefits such as the empowered battle pass that give you additional rewards for progress through the game guide, as well as the Horn of Plenty, which provides additional login rewards. These are mostly convenience items, though some rarer crafting components and the crests required for Elder Rifts are also provided. I didn’t spend a lot of time in the store – there seem to be “troves” available after every major story point and cosmetic items galore. If you want to spend money, there’s no shortage of things to buy. But free-to-play is honestly just that. You won’t get hit with ads beyond the store ads at login and the notifications of new “troves” being available.

    Overall, Blizzard has achieved their goal of making the Diablo universe available on both desktop and mobile platforms. I’ve found it quite enjoyable and immersive, and the social aspect has so far been non-toxic, with very few spambots and little truly objectionable conversation. There are, of course, the usual trolls, but the signal-to-noise ratio of the chats is pretty high. I’m not a particularly social player, but the game guide encourages finding a warband to tackle the higher-level raid content, and it is relatively easy to join one. Without the massive “newb” guilds of other MMOs, it feels like you can actually get to know people and not get lost in a sea of other newcomers. A warband seems to be a good size, and the game provides many goals for them. Worth checking out for newcomers and Diablo veterans alike.
