Brian Richardson's Blog

  • Back to the CLI

    October 1st, 2022

    My fingers hurt. I’ve spent the last few hours bemoaning the fact that I left my notebook charger at the office 30km away, and that the replacement won’t arrive until tomorrow. Not content to sit around, I realized that while most iPadOS remote desktop/VNC clients suck, there are plenty of good SSH clients. So, I set about learning how to use Neovim, and a dash of Lua for good measure. I won’t post the config file, since there’s already plenty to search for, but I will point out why Neovim is better.

    So, first: Lua. If you’ve tried to make any kind of useful Vim configuration, you’ve probably seen that it can get ugly fast. Lua allows for modular configuration and is a rather pleasant configuration language. It reads well, indents well and generally communicates intent pretty clearly. Configuration with Lua is much cleaner, so point #1 for Neovim.

    I wanted to get the most basic setup possible, spend as little time as possible configuring things, and end up with a very specific result: autocompletion, syntax checking and navigation. I’ve struggled with getting all of these to work together in Vim, but Neovim uses fewer plugins to accomplish the same task, and thus has fewer configuration issues.

    But, this is not about Neovim, as awesome as it is, but more of a philosophical reflection on how hard it is to get away from the command line. It’s where I started so many (I won’t even tell you!) years ago. And it’s where I’m going back to. Somewhere around when Windows Vista came out, I made a conscious effort to learn how to use a GUI, because my fingers hurt then too.

    But, it turns out that you need language, not gestures, to express complex thoughts. And so interfaces go back to the CLI. And though I can still navigate certain pieces of complex software (VS, e.g.) extremely well with the GUI, notice that all of them have added back a text interface to allow for more complex commands (the command palette in VS Code, e.g.). And there are options that exist only in the Azure CLI that you can’t find in the Portal. No, no matter how hard I try, the CLI just keeps coming back. And now my fingers hurt again. Where’s my voice-controlled computer that understands spoken C# and bash? 😂

  • An Intro to Apache Kafka

    September 26th, 2022

    I’ve been evaluating queues and storage for an event-sourced system lately, and I seem to have found what I am looking for in Apache Kafka. Kafka is used in a surprising number of places. I only learned today, for example, that Azure Event Hubs has a Kafka surface, and can be used as a Kafka cluster itself.

    I have the following requirements:

    • Topic-based subscriptions
    • Event-based
    • Infinite storage duration
    • Schema validation
    • MQTT connection for web and mobile clients

    I’ve tried out a number of different solutions, but the one I am thinking about right now is based on Confluent Cloud, Confluent’s managed Kafka platform. It is relatively easy to set up a cluster with the above requirements. Confluent has a nice clear option to turn on infinite storage duration, and provides a schema registry which supports multiple definition languages, such as JSON Schema, Avro and Protobuf. Schema registries are a nice way of ensuring your event streams stay clean, preventing buggy messages from even entering the queue.

    I say “event-based”, but really we just need to be able to identify the schema type and use JSON. That’s pretty standard, but it needed to be mentioned.
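
    To make that concrete, here’s a rough sketch of what the producer side might look like with Confluent’s .NET client (the Confluent.Kafka and Confluent.SchemaRegistry.Serdes.Json packages). The event type, topic name and endpoints below are placeholders of my own, not anything from a real cluster:

    using System.Threading.Tasks;
    using Confluent.Kafka;
    using Confluent.SchemaRegistry;
    using Confluent.SchemaRegistry.Serdes;

    // Hypothetical event type; the serializer derives a JSON Schema from it
    // and registers/validates it against the schema registry.
    public record OrderPlaced(string OrderId, decimal Total);

    public static class OrderProducer
    {
        public static async Task PublishAsync(OrderPlaced evt)
        {
            using var registry = new CachedSchemaRegistryClient(new SchemaRegistryConfig
            {
                Url = "https://<schema-registry-endpoint>",         // placeholder
                BasicAuthUserInfo = "<sr-api-key>:<sr-api-secret>"  // placeholder
            });

            var config = new ProducerConfig
            {
                BootstrapServers = "<bootstrap-servers>",           // placeholder
                SecurityProtocol = SecurityProtocol.SaslSsl,
                SaslMechanism = SaslMechanism.Plain,
                SaslUsername = "<cluster-api-key>",
                SaslPassword = "<cluster-api-secret>"
            };

            using var producer = new ProducerBuilder<string, OrderPlaced>(config)
                .SetValueSerializer(new JsonSerializer<OrderPlaced>(registry))
                .Build();

            // If the payload doesn't match the registered schema, it never reaches the topic.
            await producer.ProduceAsync("orders", new Message<string, OrderPlaced>
            {
                Key = evt.OrderId,
                Value = evt
            });
        }
    }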

    MQTT looked like it might be a bit of a challenge, but it really wasn’t. I’d recommend checking out CloudMQTT, a simple site for deploying cloud-based Mosquitto instances. Setting up the MQTT broker took 2 minutes, and then it was off to Kafka Connect to hook it up. Adding the MQTT source is as easy as expected: provide the URL and credentials and the rest just happens automatically. You can additionally subscribe to topics to push back to MQTT. This works perfectly for web and mobile clients, whose tasks are to push events and receive notifications. MQTT allows for a very nice async request/response mechanism that doesn’t use HTTP and doesn’t have timeouts.
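
    On the client side, the MQTT part is only a few lines. Here’s a minimal sketch using the MQTTnet library (my choice here, not a requirement); the broker host, credentials and topic names are placeholders:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using MQTTnet;
    using MQTTnet.Client;

    var client = new MqttFactory().CreateMqttClient();

    var options = new MqttClientOptionsBuilder()
        .WithTcpServer("<instance>.cloudmqtt.com", 1883)   // placeholder broker; use TLS and the proper port in reality
        .WithCredentials("<user>", "<password>")
        .Build();

    // React to notifications pushed back out of Kafka via the MQTT sink topic.
    client.ApplicationMessageReceivedAsync += e =>
    {
        Console.WriteLine($"{e.ApplicationMessage.Topic}: {e.ApplicationMessage.ConvertPayloadToString()}");
        return Task.CompletedTask;
    };

    await client.ConnectAsync(options, CancellationToken.None);
    await client.SubscribeAsync("notifications/<client-id>");

    // Push an event; the Kafka Connect MQTT source picks it up from this topic.
    await client.PublishStringAsync("events/orders", "{\"type\":\"OrderPlaced\",\"orderId\":\"42\"}");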

    Finally, as I mentioned, Azure Event Hubs has a Kafka surface, so you can even push certain topics (auditing, e.g.) out to Azure Event Hubs to eventually make their way into the SIEM. There are a number of useful connectors for Kafka, but I haven’t really looked at them yet except to note that there’s a connector for MongoDB.

    Kafka is a publish/subscribe-based event broker that includes storage. This makes it ideal for storing DDD aggregates. Having the broker and the database in the same place simplifies the infrastructure, and it’s a natural role for the broker to fill.
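
    Because the events are retained forever, any service can rebuild its state by replaying a topic from the beginning. A sketch with Confluent.Kafka (topic and group names are placeholders):

    using System;
    using Confluent.Kafka;

    var config = new ConsumerConfig
    {
        BootstrapServers = "<bootstrap-servers>",       // placeholder
        GroupId = "aggregate-rehydrator",               // placeholder
        AutoOffsetReset = AutoOffsetReset.Earliest,     // start from the beginning of the (infinitely) retained log
        EnableAutoCommit = false                        // a replay shouldn't move committed offsets
    };

    using var consumer = new ConsumerBuilder<string, string>(config).Build();
    consumer.Subscribe("orders");

    while (true)
    {
        var result = consumer.Consume(TimeSpan.FromSeconds(5));
        if (result == null) break;                      // no more messages within the timeout: we're caught up
        // Apply the event to the aggregate / read model here.
        Console.WriteLine($"{result.Message.Key}: {result.Message.Value}");
    }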

    The net result of this architecture is that we no longer need to talk HTTP once the Blazor WASM SPA has loaded. All communication with the back-end system is done via event publishing and subscribing over MQTT.

    I’m happy with this architecture. Confluent seems to be reasonably priced for what you get (the options I have chosen run about $3 USD per hour). CloudMQTT is almost not worth mentioning price-wise, and Kafka Connect leaves open a lot of integration possibilities with other event streams. As it is a WASM application, a lot of the processing is offloaded to the client, and the HTTP server backend stays quiet. The microservices subscribe to their topics and react accordingly, and everything that ever touches the system is stored indefinitely.

  • Some Thoughts on k8s on WSL2

    September 23rd, 2022

    It’s getting to the point now where I rarely touch a PowerShell prompt. My WSL2 system has taken over the computer, and it thinks it runs Linux now. The last thing for me to get running on my local “Linux” dev environment is Kubernetes. I use the docker-desktop Kubernetes cluster, since I am already using Docker Desktop and it’s as simple as checking a box. Minikube and k3s also come recommended.

    I initially installed the snap for kubectl as per the suggestion from the CLI when I typed the non-existent kubectl command. I’d suggest installing it from the package repository instead. In any case, obtaining kubectl is not that hard; just don’t use the snap.

    The real difficulty with running a local k8s environment is that your localhost interface fills up awfully fast with ports. It would be much easier if you could assign your external IPs to an aliased interface, so that you didn’t have to worry about port collisions as much. Enter MetalLB, a virtual load balancer for your k8s cluster. Installation instructions are at https://metallb.org/. MetalLB will create the necessary virtual interfaces for you, and assign all of your LoadBalancer services addresses from one or more address pools. This is great – all of your load balancers in one place.

    The only problem is that Windows can’t reach them by default. At least our MetalLB configuration is based on static addresses; our WSL address, on the other hand, will move every time WSL2 starts up. Not only that, but we’ll get an address in a different network! This makes setting up static routes a little difficult, but thankfully not impossible:

    # Extract the IPv4 address of the "vEthernet (WSL)" gateway interface from the netsh output.
    $wsl_gw=$($(netsh interface ipv4 show addresses "vEthernet (WSL)" | findstr "IP") -replace "^.*?(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?).*$","`$1.`$2.`$3.`$4")
    
    # Fire and forget: drop any stale route to the MetalLB pool, then add it back via the WSL gateway.
    try {
        route delete 10.100.0.0/24 *> $null   # *> $null discards all output streams
    } catch {
    }
    
    try {
        route add 10.100.0.0/24 $wsl_gw *> $null
    } catch {
    }
    

    The magic comes from the first line, a one-liner for grabbing the IP address of the WSL gateway interface. The regex is excruciating, but it is correct and it works every time. It’s probably worth unpacking the declaration a bit:

    Show the address information for the interface named “vEthernet (WSL)”. This is the default name of the virtual network installed to host WSL. The output looks like:

    Configuration for interface "vEthernet (WSL)"
        DHCP enabled:                         No
        IP Address:                           172.17.192.1
        Subnet Prefix:                        172.17.192.0/20 (mask 255.255.240.0)
        InterfaceMetric:                      5000

    Now that we have the output, the address we are interested in is all by its lonesome on the line labelled “IP Address”. This sounds like a job for a regular expression. We use findstr to narrow the output down to that single line, then match the line against a pattern containing a single IP address and replace the whole line with just the address.

    And finally, we fire and forget a couple of network commands that should (occasionally) give us a route from our Windows machine to our MetalLB address pool.

    There’s a lot that can go wrong here. Most notably, we cannot run anything related to WSL without the user logging in. So, we’re stuck with startup scripts in the Windows Startup folder to get the job done, and the one above goes in there.

    Second, we need to trigger the Linux-side scripting on startup too. One clever thing you can do is to install systemd-genie and then start Windows Terminal upon login. Set Ubuntu as your default terminal profile, and logging in will cause systemd to kick in and start those services. With any luck, both the route and the additional interface will be allocated without issue. But if you expect it to work without occasional (frequent) intervention, you’re crazy. Microsoft, if you’re listening, please have your systemd guy give us a real systemd! 🙂

    But, as expected, other than the fiddling with IPs and ports, k8s runs very well on “WSL2” (remember – it’s actually running in a separate Linux distribution on your Windows host). The Docker Desktop implementation of Kubernetes is fine.

    One last thought. Deployment to Kubernetes is usually achieved using a CI/CD pipeline. I’d suggest you do the same for your local k8s cluster as well. Install an Azure DevOps agent in WSL2 and connect it to your DevOps project. Now you can manage your k8s builds from DevOps instead of tinkering from the command line. This turns out to be excellent for other reasons – it lets you break the build dozens of times in a pipeline whose stats don’t matter. Then you can simply clone the pipeline and update your variable groups for test/prod.

    As usual, I’m late to the party on Kubernetes. But I believe it’s a key architectural component that allows us to avoid vendor lock-in. Building cloud-native apps for Kubernetes increases agility and reduces lock-in, because you can deploy your application to another cloud rapidly and with minimal disruption. This leaves you in a much better negotiating position with your current cloud vendor.

  • A Deeper Dive with Terraform

    September 19th, 2022

    Having spent the weekend building infrastructure scripts, I can now say that I like Terraform. My initial foray into the Infrastructure-as-Code (IaC) arena was with Pulumi. This appealed to me because it supported declarations in C#. After not very long, however, I found that Pulumi’s C# support was rather lacking, and I put IaC on the shelf.

    IaC was resurrected when my boss told me that all infrastructure I built for this project would have to be scripted. One of my team members dove into Terraform, and recently gave me a crash course. The language itself is easy to pick up, and I think the development environment is very good as well.

    As I posted earlier, I’m using Terraform Cloud both as a state provider for my development workspace, and as CI/CD for my dev/test and production workspaces. This produces a very nice development workflow.

    1. Create a feature branch
    2. Make the feature branch work
    3. PR to QA branch
    4. Terraform Cloud (TC) plans the updated code
    5. Confirm plan in TC
    6. PR to …
    7. …

    Infrastructure workflows are the same as Git workflows, with all of the features that Azure DevOps provides, including work item association and approvals. The pipeline itself is triggered upon push, but that’s OK: the code review happens first, since the merge to the QA branch requires approval, and TC will not apply changes unless instructed to. Making infrastructure changes requires confirmation from a user with the appropriate access.

    That’s the development workflow. It works very well, and it was not that difficult to set up given the extensive Terraform documentation. Personally I run it on WSL2 using VS Code with the Terraform extension. Infrastructure dev environments should probably run some flavor of Unix (WSL2, macOS, Linux), and nothing more than VS Code is required.

    Code organization is very important. Terraform is a declarative language with very few opportunities for reuse and almost no control structures. This can lead to unreadable code very quickly. Organizing into workspaces and modules is the only way to keep larger projects under control. It’s probably easiest to explain with an example, so let’s put down some requirements.

    The project in question has 4 environments in total: 3 non-prod and 1 prod. The 3 non-prod environments share a single Azure Dev/Test subscription, and the production environment has its own subscription. Costs should be minimized without sacrificing code readability. Development should plan and apply from the CLI or web UI only. QA should plan and apply upon push to the qa branch, and UAT should plan and apply upon push to the uat branch. Finally, production should plan and apply upon push to the main branch.

    The first thing to note from these requirements is the cost minimization. It’s always easiest to duplicate, and it would be much simpler to build three copies of all the infrastructure in dev/test. But that’s expensive: the search service alone costs $300/mo per instance. Having multiple key vaults when one will suffice is kind of annoying, and you don’t need multiple storage accounts when multiple containers will do. Upon analyzing the infrastructure requirements, it was clear that the Redis cache, the Key Vault, the Storage Account and the Container Registry could all be shared among the three non-prod environments.

    So how to achieve this? My first attempt simply included the same module in multiple projects. Alas, this just led to cycles of create/recreate as updates were made to the various branches. Creating and recreating the core infrastructure is absolutely the opposite of what we want. So, the first step was to split the shared infrastructure out into its own folder and add the Terraform files. Then the shared infrastructure modules were moved and the root Terraform definition updated. Finally, the shared infrastructure’s main.tf and variables.tf were updated.

    Now, the workspace layout. First, we need a workspace to hold the shared infrastructure. I created a single workspace with a -devtest suffix and initialized it. From the shared directory, run terraform plan and terraform apply. This creates the shared infrastructure in its own workspace so that there are no conflicts between the environments. This workspace should be CLI-driven. The development environment should always be CLI-driven, and the shared infrastructure for development also belongs to QA and UAT. Therefore, no other workspace is needed for the Dev/Test core infrastructure. Note that a corresponding production workspace will need to be created later.

    Next, we need workspaces and variable sets for each environment. The dev workspace will be marked as CLI-driven, and the other two (QA and UAT) will be attached to Azure DevOps Git and trigger runs upon push. The variable sets will be created and associated accordingly.

    That’s the gist of it. Terraform really saves a great deal of effort and reduces mistakes, as well as providing a lot more process around infrastructure development. Good use of workspaces and modules can make for reasonably organized layouts for even more complex infrastructure requirements.

  • An Introduction to Terraform Cloud

    September 17th, 2022

    I’ve been given a crash course in Terraform lately, and the first thing I did was to get a Terraform Cloud account to help manage all of the variables and workspaces. The general workflow of Terraform remains the same:

    terraform init
    terraform plan
    terraform apply
    

    However, we can optionally connect Terraform Cloud to our Git repository and trigger runs based on commits to specific branches. So, the workflow I used was as follows:

    Create an organization and workspace connected to my Git repository

    Switch the workspace to Local

    Configure main.tf as follows:

    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "2.86.0"
        }
      }
      cloud {
        organization = "my-organization"
        token = "my-api-key"
        workspaces {
          name = "my-workspaces"
          #tags = ["my-app"]
        }
      }
    }
    

    Create a Terraform API token with the required permissions. Note that we have to put the API key in this file 😦 and I’d really rather not do that.

    Initialize Terraform

    Develop your Terraform plan and verify that it is working as usual. I’d suggest that you do NOT commit your code with the token value above: remove it before committing and provide the API key to each developer (or have each developer run terraform login, which stores the token in a local credentials file instead of in main.tf). Be careful! Code reviews are a good thing.

    Switch your Terraform Cloud workspace from Local to Remote

    Commit your code

    Trigger run on Terraform Cloud, either automatically upon push, or manually

    You no longer have to trigger runs manually from your local machine; you can simply push your code and let the run happen in the Cloud console. I’m no stranger to a CLI, and Terraform has a pretty nice one. But I prefer the web interface. Additionally, we get teams, users and permissions. We can, for example, assign a different team to the production environment than to the development environment.

    Think of this as the CD half of your infrastructure DevOps. Using Azure DevOps (or similar), you will be able to enforce code reviews before commits land on the branches used by Terraform Cloud. Terraform will then execute automatically once the code is committed, saving you from managing a DevOps pipeline to run it.

    There are a lot of advanced features here that I haven’t looked at yet. For example, you can apply organization-wide policies that are checked whenever any developer in the organization checks in; Terraform will fail the run if these policies are breached. My initial thoughts are that this will simplify the workflow and management of an infrastructure DevOps process.

  • Putting it all together – From Monolith to Cloud-Native

    September 9th, 2022

    There’s an underlying theme to the posts I’ve been making lately, namely that I am beginning the process of migrating an existing application to an Event-Driven Architecture (EDA). This has posed some interesting technical challenges, and the overall process can be used to break down an existing monolithic application into component microservices based on Domain-Driven Design. Let’s start at the beginning and look at the challenges that will be faced, and the work that is required.

    1. The Domain Model

    As with any DDD project, you’ll begin with the domain model for a given domain. Follow DDD best practices to build this domain model.

    2. Data Migration

    There’s a fundamental difference between the static state stored by a traditional application in a relational database, and the aggregate state stored by an event-sourced system in (e.g.) Event Store DB. Storing the static state is like storing a document – you only get the latest version of the document. However, if you turn on “Track Changes”, you get a different document, one with a history of revisions. This document with the revision history is what is stored in the Event Store DB.

    This is challenge #1: how do we convert the static state of an existing persisted entity into the required aggregate state of a new event stream?

    The migration is doubly difficult, because the legacy application uses Entity Framework 6.0 as its persistence layer. The new application will run on .NET 6, using EF Core 6. So, we can’t just copy the entity model over unchanged: any fluent customizations will have to be rewritten for EF Core 6. The good news is that the existing attributes are respected by EF Core 6 as well, so copying the model over is still a good start, and we can tweak it as we go along.

    With the copy of the EF6 model in place, we can rewrite any fluent customizations that are necessary (e.g. many-to-many relationships). This is also a good time to replace unnecessary fluent customizations with the corresponding attributes. Test your new .NET 6 EF Core model and ensure you can iterate through the data.
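
    As an example of the kind of rewrite involved (entity names here are hypothetical, not from the real model): an EF6 many-to-many that was mapped with Map(...) translates to EF Core 6 roughly like this, and with conventional names the join entity can even be inferred without any configuration at all:

    using System.Collections.Generic;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical entities standing in for the real model.
    public class Post
    {
        public int Id { get; set; }
        public List<Tag> Tags { get; set; } = new();
    }

    public class Tag
    {
        public int Id { get; set; }
        public List<Post> Posts { get; set; } = new();
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Post> Posts => Set<Post>();
        public DbSet<Tag> Tags => Set<Tag>();

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // EF Core 5+ infers the join entity by convention; the fluent call is only
            // needed here to keep the legacy join table name from the old EF6 Map() call.
            modelBuilder.Entity<Post>()
                .HasMany(p => p.Tags)
                .WithMany(t => t.Posts)
                .UsingEntity(j => j.ToTable("PostTags"));
        }
    }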

    Now that we can read in our static entities, the challenge is met by essentially reverse engineering the current state of the entity into its component methods in the domain model. That is, we load the entity into memory and construct a domain object that has the same “value” as the entity, using the methods we defined in DDD. See the example below:

    var entities = db.Entities.ToList();
    foreach (var entity in entities)
    {
        // Rebuild the aggregate through its domain methods rather than by assigning properties.
        var aggregate = new Aggregate();
        aggregate.SetProperty1(entity.Property1);
        foreach (var item in entity.Collection1)
            aggregate.AddToCollection1(item);

        // aggregate's current state now equals entity;
        // write the aggregate to the aggregate store here
    }

    Note that we use aggregate.SetProperty1() instead of aggregate.Property1 = . This is the important part of the migration – we must use the methods defined in the domain model to achieve the desired state. When we do so, we will have created an aggregate suitable for storing in Event Store DB. Repeat the process for all aggregates identified during DDD. You have now migrated your data to an event store.

    Important: You will probably not use the same ID values in the new system. You should create a property in your legacy entity to store the ID value from the new system. This will become necessary later on when we must build a “bridge” back to the existing application.

    Of course, the event store is not suitable for querying data. For that, we need to project the event store data into a form suitable for reading.

    3. Projections

    Projections are code that is called in response to an incoming event. Any service can subscribe to the stream of events being written to the event store. The service then chooses which events to respond to, usually by writing data to another database that can be used for queries. Good choices here are SQL Server and CosmosDB. This is one of the big performance gains we get by separating the “read side” and the “write side” of the database: instead of doing join queries against a relational database in SQL Server, we can write materialized query results as JSON documents to CosmosDB. While not as space-efficient, the gain in read speed far outweighs the additional space required.
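
    A projection can be as small as a handler that deserializes an event and upserts a document. Here’s a rough sketch using the EventStoreDB gRPC client and the Cosmos SDK; the event, the document shape and the connection strings are all placeholders, not production code:

    using System;
    using System.Text.Json;
    using System.Threading;
    using System.Threading.Tasks;
    using EventStore.Client;
    using Microsoft.Azure.Cosmos;

    // Hypothetical event and read-model document (the container is assumed to be partitioned on /id).
    public record ItemRenamed(Guid ItemId, string NewName);
    public record ItemDocument(string id, string Name);

    public static class ItemProjection
    {
        public static async Task RunAsync()
        {
            var store = new EventStoreClient(
                EventStoreClientSettings.Create("esdb://localhost:2113?tls=false"));   // placeholder
            var cosmos = new CosmosClient("<cosmos-connection-string>");               // placeholder
            var container = cosmos.GetContainer("read-models", "items");

            // Watch everything written to the store and react only to the events we care about.
            using var sub = await store.SubscribeToAllAsync(
                FromAll.Start,
                async (_, resolved, ct) =>
                {
                    if (resolved.Event.EventType != nameof(ItemRenamed))
                        return;

                    var evt = JsonSerializer.Deserialize<ItemRenamed>(resolved.Event.Data.Span)!;

                    // Materialize the query result as a JSON document keyed by aggregate id.
                    var doc = new ItemDocument(evt.ItemId.ToString(), evt.NewName);
                    await container.UpsertItemAsync(doc, new PartitionKey(doc.id), cancellationToken: ct);
                });

            await Task.Delay(Timeout.Infinite);
        }
    }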

    4. The “Bridge”

    Challenge #2: How to incrementally release a monolithic application?

    We do not want to wait for the entire new application to be written before we start using the new system. Therefore, we need a way to run both applications in parallel and migrate users to the new application as new features become available for their team. This requires that both applications work on the same data. Of course, at this point, the data is in two separate databases! However, this is not an insurmountable difficulty: we can write a projection that handles writing back to SQL Server. Since we already created our entity model for .NET 6 when we migrated the data to the event store, it is actually a rather simple task to write a projection that writes to SQL Server instead of CosmosDB. And, you can largely copy the code that writes to SQL Server from your old application, since it uses the exact same entity model (albeit on a different platform).

    These projections back to SQL Server are the key. With these projections in place, it is a simple matter of fetching an entity by v2 ID and performing the desired updates as per the incoming event.
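
    In code, a bridge handler ends up looking something like this (all names here are hypothetical; the real events and entities come from the domain model and the migrated EF Core model):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical event and legacy entity; V2Id is the id from the new system,
    // added to the legacy table during the data migration.
    public record OrderRenamed(Guid OrderId, string NewName);

    public class Order
    {
        public int Id { get; set; }
        public Guid V2Id { get; set; }
        public string Name { get; set; } = "";
    }

    public class LegacyDbContext : DbContext
    {
        public LegacyDbContext(DbContextOptions<LegacyDbContext> options) : base(options) { }
        public DbSet<Order> Orders => Set<Order>();
    }

    public class OrderBridgeProjection
    {
        private readonly LegacyDbContext _db;

        public OrderBridgeProjection(LegacyDbContext db) => _db = db;

        // Called by the projection host whenever an OrderRenamed event arrives.
        public async Task HandleAsync(OrderRenamed evt, CancellationToken ct)
        {
            // Fetch the legacy row by the v2 id stored during migration...
            var order = await _db.Orders.SingleAsync(o => o.V2Id == evt.OrderId, ct);

            // ...and apply the change exactly as the old application would have.
            order.Name = evt.NewName;
            await _db.SaveChangesAsync(ct);
        }
    }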

    The interesting thing is that this bridge need not be temporary. While it is certainly possible to remove the bridge once the new application is complete, there are cases when it may be desirable to keep some or all of the bridge to facilitate external processes that may prefer SQL Server to CosmosDB.

    5. Microservices and Kubernetes

    The goal is to run the microservices on Kubernetes (or some other, perhaps as-yet unwritten, container orchestrator). This requires that the application run on Linux, which is what necessitates the upgrade to .NET 6. We expect that the infrastructure savings over time from using an orchestrator should result in a reduced core count overall (Microsoft reports a 50% reduction in core count from moving the AAD gateway from .NET Framework to .NET Core) and more efficient use of resources.

    6. Conclusions

    This is, in a nutshell, the process used to break down a monolithic legacy application based on Entity Framework. .NET 6 is a stable, multi-platform framework target that will allow you to containerize your .NET applications. The upgrade to .NET 6 should take advantage of all the platform and language features available, and use modern design techniques to build a lasting architecture that minimizes maintenance and maximizes readability and understanding for the reader.

  • The Importance of Naming Conventions

    August 31st, 2022

    I recently migrated an Entity Framework model from EF6 to EF Core/.NET 6. This was made considerably more difficult by the fact that the original developers had not taken advantage of Entity Framework’s conventions. Not only that, but a few small conventions changed between the two versions. And it would have been so easy had properties been named correctly.

    It got me to thinking about just how important naming conventions have become. Automation such as Infrastructure-as-Code, code generation tools, and your own automation based on reflection: all of these are based on having predictable names. But simply being predictable is not enough. It is also necessary that you be able to reconstruct the name the same way every time. And finally, you should probably be able to type it, or fit more than one identifier on a line of code. This was the main flaw of the model I was working with: the names were so descriptive that following a naming convention led to excessively long names, so abbreviations were made that broke the convention.

    It seems that a middle ground needs to be taken. I grew up on C, where identifiers were typically one- and two-letter abbreviations that probably only meant anything in the mind of the author. Calling C code a “technical specification” is probably pretty generous given the lack of descriptive nouns and verbs within the spec. A webcomic I saw a long time ago, and alas cannot find now, makes the point that well-written code _is_ a technical specification. The characters were discussing how great it would be if it were possible to create a specification detailed enough that a computer could write a program. And yet, that is what software developers do every day – write higher-level constructs that specify the software to the point that a compiler can create executable machine language. That sounds like a specification to me.

    So let’s treat it like one. Instead of the glory days of C, where programmers competed with each other to create the most obfuscated code possible, let’s use the code also as a document that can clearly trace back to business requirements written in business language. Optimizing for readability should be a thing! Or at the very least, we should demand that developers create readable specs that can be easily reviewed by an architect or other technically-capable businessperson.

    That’s where the Ubiquitous Language of Domain-Driven Design comes in. If the business calls something by a name, we should also use that name in our specification. It will allow us to clearly trace our code back to our business requirements. It will allow us to desk check business rules without wondering what certain variables refer to. Indeed, a modern language like C# is fluid enough that you can express thoughts in a very readable fashion. A well-organized, well-named code base is easy to work with. You can use your IDE’s code completion features much more easily if you don’t have to guess at multiple names.

    Other naming conventions that are important are the Entity Framework names I mentioned above. For example, a property named Id is always going to be considered the primary key for an entity. Additionally, a property named <Entity>Id is also considered the primary key if Id doesn’t exist. Foreign keys can be inferred in much the same way. While annotations are provided in EF Core to allow you to override the default behavior, it’s so much easier if you work _with_ the conventions rather than _against_ them.
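
    A quick illustration with a couple of hypothetical entities: as long as the conventional names are used, no keys or foreign keys need to be configured at all.

    using System.Collections.Generic;

    // "Id" (or "<Entity>Id") is picked up as the primary key, and CustomerId plus
    // the Customer navigation is inferred as the foreign key, with no annotations.
    public class Customer
    {
        public int Id { get; set; }                       // primary key by convention
        public List<Order> Orders { get; set; } = new();
    }

    public class Order
    {
        public int OrderId { get; set; }                  // also recognized as the primary key
        public int CustomerId { get; set; }               // foreign key to Customer by convention
        public Customer Customer { get; set; } = null!;
    }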

    Terraform will get you thinking about names too, in terms of input variables. All of our names are going to look programmatic, because they are concatenations of input variables. Still, we want to be careful that names don’t get too long, since our cloud provider is going to have limits. So, what many have adopted is a variant of Hungarian notation (made popular during those glory days of C/C++!) where common abbreviations are adopted by those working with the technology. For example, a resource group always gets a -rg suffix, a container registry -acr, and a Kubernetes cluster -aks.

    So, I guess I’d make the following recommendations about naming in your own programs:

    1. Descriptive is good. Long is bad. Find a balance between the two. Hungarian notation can really help with this.
    2. It’s still OK to use one- and two-letter abbreviations for temporary/local variables.
    3. Think of names in terms of input variables and how you can combine them programmatically, as Terraform does.
    4. Use the Ubiquitous Language from business requirements when available.
    5. Take the time to do Domain-Driven Design when it’s not available.

    Remember that you are not just writing software for an end-user. You are also writing documentation of business processes and rules for an organization. When you win the lottery, the organization still needs to be able to understand what has been written and what the current state of the business is. So, it is important that some conventions be put in place and enforced via peer review.

  • Review – Steam Deck

    August 24th, 2022

    It was a long wait, but worth it. I reserved my Steam Deck something like six months ago, and finally got the notification to complete the purchase at the beginning of the month. It is a rather nice piece of hardware. Both the graphics and the CPU are sufficient to play plenty of modern games at 1280×800. The battery life is pretty good for a portable device with 3D graphics and a decent CPU: I was going through about 10% battery per 30 minutes, giving a total battery life of about 5 hours per charge. Charging is done via a USB-C connection.

    That is the first point to make: it is not a 1080p display. This may be a bit of a surprise to people who are used to gaming on their phone at 1080p, but the graphics do not disappoint. Some games are too difficult to play at this resolution (most of my strategy games require too much screen real estate to play well), but there are plenty of good ones to keep me going. Stellaris, for example, is surprisingly playable at lower resolutions, and the community control scheme works very well.

    Funnily enough, I ditched the Steam Controller I bought when I first picked up a Steam Link; I found the mouse controls too finicky to use well. However, I am back to Steam Controller-style input, because that’s essentially what the Deck uses. I’ve gotten reasonably proficient with the mouse controls, and I don’t find that they particularly limit what I decide to play.

    A nice touch is that the Deck can boot into Desktop mode, giving you a KDE shell on the underlying Linux OS. Most people won’t use this, but there are plenty who will find it appealing.

    I guess the final point to make is that because it is a Linux-based device, it won’t run all your games. Windows games run under Proton, Valve’s Wine-based compatibility layer. The Steam Store now has Steam Deck compatibility notes for all titles (though not all have been tested yet; untested titles have a compatibility rating of “Unknown”). Many titles are “verified”, meaning that they are ideal games to play on the Deck in terms of control scheme and hardware requirements, as well as the lower resolution. Most titles will not be “verified”, but upon reading the notes, you can decide to install them anyway.

    Common concerns include: small text, requirement to invoke on-screen keyboard, no official control scheme. Most of these concerns are trivial: there’s a magnifier available, the on-screen keyboard is easily invoked, and there are many good community control schemes. It’s worth going through your library to see what works.

    Overall, I was skeptical of the lower resolution, but I have found many titles that run well under this constraint. I had some cooling issues during a recent heat wave, but since the heat wave has subsided, I’ve had no further issues. The Deck is maybe a little larger than I would have expected, but it’s still a good size for a handheld device. It’s very usable, and it’s seen some use since I’ve gotten it. I’d definitely recommend it at the current price; it’s hard to see how it could be much cheaper and still make Valve any money.

  • Initial Thoughts on .NET MAUI

    August 18th, 2022

    I had to run into it sometime. .NET MAUI has generated a bit of buzz in the community. For React people, it’s nothing new, I know. But the Microsoft answer to React Native is really quite nice. It allows the use of both XAML and Blazor (!) pages, and produces an application that runs on Windows, macOS, Android and iOS. This puts mobile and utility development in a whole new light. Now we can:

    • Create utility programs as easily as writing a Blazor WASM app. MAUI allows me to write a desktop app as a hosted SPA within the application itself. This is so much nicer than a console program, for no extra effort.
    • Create native applications that run on tablets, smartphones and desktop PCs
    • Learn one framework for web, desktop and mobile

    A new, practical use for Blazor WASM! You can use the exact same controls you use in your web application inside a desktop app as well. And, since I’m on the subject, I’ll also add a plug for Radzen Blazor, a wonderful UI library for use with Blazor: the controls are beautiful, easy to use, and completely free!
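
    The wiring for hosting Blazor inside a MAUI app is only a few lines in MauiProgram, roughly what the MAUI Blazor template generates (App and the implicit usings come from the template; the XAML MainPage then hosts a BlazorWebView pointing at your root component):

    public static class MauiProgram
    {
        public static MauiApp CreateMauiApp()
        {
            var builder = MauiApp.CreateBuilder();
            builder
                .UseMauiApp<App>()   // App comes from the project template
                .ConfigureFonts(fonts => fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular"));

            // This is the piece that lets a XAML page host Blazor components via a BlazorWebView.
            builder.Services.AddMauiBlazorWebView();

            return builder.Build();
        }
    }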

    If you haven’t tried out .NET MAUI yet, try writing your next utility using it. I think you’ll never go back to console again. I am hoping that Linux support will be added as well at some point in the future, though I can see why it might be more of a challenge than Windows or MacOS.

  • Full-Text Searching w/CosmosDB (cont…)

    July 26th, 2022

    It turns out that full-text searching requires that you enable the “Accept connections from within public Azure datacenters” option in the CosmosDB networking blade. The Cognitive Search service is not hosted in the VNET (although you can enable a private endpoint for security purposes, it doesn’t use this as its outgoing network). This presents a slight security risk that may not be tolerable for sensitive data. That said, the ability to find the exact CosmosDB you are looking for, if you were so inclined, is practically non-existent. Trying to brute-force multiple CosmosDB services is likely to set off some alarms in the datacenter, and still won’t get you in (the keys are really quite difficult to break).

    So, practically speaking, I don’t feel like this represents any significant risk in terms of organizational data. But I hate checking off boxes that allow more unsolicited traffic.
