I’ve become disillusioned with the big-name music distributors, and I wanted to find a good replacement without sacrificing quality or selection. Price was not a concern provided it was competitive. Enter TIDAL Music, a high-quality music service at $29 CAD/mo for a family. That’s slightly higher than other services, but the premium subscription means more of the money goes to the artists than it does elsewhere.
In terms of offerings, I was able to find all of my (somewhat) obscure music and far more. A large percentage of the content is available in “Master” quality, a treat if you have a nice sound system or a good pair of headphones. There’s also a smattering of Dolby Atmos and Spatial audio. If you’re used to the high-quality offerings on Prime and Apple, you won’t be disappointed by the fidelity of TIDAL tracks.
For me, perhaps the most important feature of a music service is its ability to function as a DJ throughout the day. TIDAL makes this really easy. It provides the standard “artist” radios with non-stop streams based on a particular artist’s style, and a “track” radio that does the same thing based on a specific track. But the feature that matters most to me is the daily curated music, and here TIDAL seems to be really smart. I spend my mornings going through the Discovery Mix, and from the input I provide there (liking tracks, albums and artists), I get a curated set of mixes covering the distinct styles of music within my collection.
The day it separated my music into dance music and industrial/EBM, I was happy. It seemed intelligent to keep the distinct styles separate. Two days later, there were 2 new categories: 80’s Synthpop and German Metal. Truly impressive, in my opinion. I thought Apple was pretty good at choosing music, but I find that TIDAL is the best DJ of them all.
I’d definitely recommend thinking about switching if you’re tired of your current music provider, or just want to support the little guy and put a few more bucks in the artists’ pockets.
Happy New Year! I hope you had a relaxing and enjoyable holiday season. My second Pulsar post is still in the works, but I haven’t written anything in a while, so I’m going to put up what I’ve been doing lately: working on my Stellaris opening. I’ve got a few hours under my belt with it, and the strategy is highly adaptable and resilient.
2200-2210
The goal by the end of 2210 is to have both of your guaranteed habitable planets founded. You’ll need a second science ship to ensure you find both in time, and the second science ship will need a leader. On console (where I’m playing) you still pay energy for leaders, so selling 100 food will give you a leg up in saving for the scientist. Selling off your excess monthly food will take care of the rest and ensure you have two science ships immediately. If you are playing a derivative of the Earth democracy, as I am, you will find your habitable planets in Sirius and Alpha Centauri. Sirius is a binary system and Alpha Centauri is a trinary system. Usually you can pick them out pretty easily. During this time, you will want to take the Discovery tradition, and also the To Boldly Go tradition. Once you have taken those two in Discovery, proceed to take Expansion, Colonization Fever, and A New Life. If you have enough unity (and you should make it a priority unless you start with a ton of it), you can have Colonization Fever taken before your first planet is complete, giving you an extra pop.
2210-2220
The goal by the end of 2220 is to have 20 corvettes and 100 monthly science. It’s tempting to reduce or delay expansion until you have these corvettes, because if you don’t build them, the AI will eat you for breakfast. However, generous use of the market should allow you to buy enough alloys to continue expanding at the pace of your influence, and still buy the 17 ships you need prior to 2220. To get to 100 monthly science, you will need one more research lab. This itself requires more goods, which in turn require more minerals. You’ll need planetside minerals from your first two planets, and one more farm. I build only as many cities as I need in the new colony (housing is the key thing here) until the planetary administration is in place.
Now that the two production centres have been added, it’s extremely important to improve both goods and alloys production. At the first opportunity, build another industrial district, preferably on one of the new planets. Industrial districts really suck up the minerals, so ensure you have enough of a surplus before building it. Once your goods production is high enough, you’ll want the additional research lab. The second building slot is unfortunately going to have to go to an administrative building. If you are not playing a megacorp, you’ll need to build the administrative building before the research lab.
Use the market to manage your surpluses and deficits. While you can’t overcome huge deficits, selling off some surplus goods and food can give you a little extra energy. Every little bit helps here. The 2220 deadline for your 20 corvettes is pretty hard if anyone else is nearby. Even the least aggressive empires will attack you for being so weak. I once had a pacifist empire offer to protect me, and then in 2220, offer me forced vassalization. So, get those ships up, and don’t stop expanding to do so.
2220-2240
So, awesome. You got your ships in time, you have 2 new planets well underway, and you’re sciencing away, looking for cool new systems to expand to, looking for your precursor empire and other goodies. Guess what? You have to do it again. The goal by the end of 2240 is 200 science and 40 corvettes. If you neglect to build the second fleet of corvettes, you will suffer, because odds are, someone else will build theirs. Thankfully it’s a little easier to get the additional naval capacity by taking the Superiority tradition, but you can still get there the old-fashioned way by building a Stronghold and 2 anchorages. In fact, I do both. I avoid taking Superiority early on, though, because there are bigger fish to fry.
After taking Colonization Fever and A New Life from the Expansion tree, I move to the Prosperity tree, because Earth is quickly going to run out of jobs without Interstellar Franchising. As a megacorp, I don’t have the early unity to afford taking additional traditions – Interstellar Franchising must be taken with haste. That said, early unity is critical. The timing on some of these traditions in this opening cuts very close – Colonization Fever is often taken just barely before the colony is finished, and Earth is on the verge of running out of districts by the time I take Interstellar Franchising. Losing unity to unemployment and unhappiness is simply not going to be tolerated.
To get to 200 science, you’ll need a research lab on both new planets. Again, make sure you’re building planetside minerals and industrial districts to build up your goods production. Make sure you keep buying alloys and push hard for the second fleet of corvettes. You can’t really breathe until you have them.
General Advice
Stellaris is a game of patience. Planets take a long time to grow, and it’s very easy to overbuild. I try not to build new districts and buildings until there’s only 1 job available. This way I know that all of the districts I’ve built to-date have been filled, and I won’t get weird shifting of workers between all the possible resources they could be working. Additionally, this will keep the sprawl to a minimum, always a good thing, because every administrative building you build takes up a valuable building slot and consumes valuable goods.
The market is your best friend. You’ll never get a perfect distribution of resources, so you’ll need to trade for the optimal solution. I try to keep my food surplus between 0-10 and my goods surplus between 0-5. Anything more can be sold to buy alloys. I also try not to have much more than 500 goods or food on hand unless I’m saving for colonies. It really doesn’t serve any purpose other than to back up your income. If you have 15k food and goods at the end of the game, you’re doing it wrong. Again, you can sell off food in quantities of 500 to keep your reserves low. I’d add that you can really make bank off the market if you get the Galactic Market (market fee -10%) and take the market fee reduction in the Diplomacy tradition (market fee -10%). At only 10% market fee, you trade with great efficiency, and with a global market, being a net exporter is awesome. You can also have some fun by messing with prices. If you have the cash and no need to buy (say) alloys, you can push the price of alloys through the roof and make it painful for empires that must buy it.
Kick the AI’s ass with science. If you can hold them off early on by being an unattractive target (and make nice with diplomacy), you will get a ton of early game bonuses by researching anomalies and excavating archaeology sites. However, you still need to survey all of your nearby systems, so you’ll need to balance the need to keep moving with the need to research some of the higher-level anomalies. I usually use 300 days as a cut-off: if it’s more than 300 days, I leave it for later (or for when I have very little to do). By the end of 40 years, you should start pulling ahead of the AI, and be well-positioned to transition into the middle game with a lot of options available to you. At the very least, you will win your fight for survival. There’s a good chance you will be able to target 300 science and a fleet of destroyers for 2260.
Starbase placement is key. You don’t have many starbases, so you need to make the best use of them. You won’t get starholds for a while, so you’ll need a combination of starbases and trade hubs that capture all of your planets. Be extremely mindful of gaps: if there are gaps in your trade protection, you will invite pirates, and that will suck for you because you can’t afford the ships or the loss of income. So make sure there are no gaps. You can put starbases beside each other, use trade hubs to expand your trade range, and use guns/missiles/hangar bays to extend trade protection range. There’s usually a configuration for your starting planet that will provide 100% trade protection without the need for patrols. I make it a policy not to expand to a planet until the trade network is in place.
That’s about all the concrete tips I can offer about the early game. It’s not hard, but it’s important that you have 20 ships in 20 years and 40 ships in 40 years to stave off AI aggression. It’s also important to ensure you can afford each and every research lab you build. You actually don’t need that many to win the game, so be patient and make sure all of your districts are fully staffed before adding the research labs. I hope these help! My latest game has me with 9 planets and 25 destroyers before 2260 (in addition to the 40 corvettes), with 330 science to boot. I’d kick my neighbour’s ass except that we like each other and I’m making good money off of him. Good strategy leads to good RNG, and truly enjoyable games.
Some buzz came my way recently about Apache Pulsar, specifically Apache Pulsar hosted by StreamNative. Given that it has a free offering with no CC requirement, I decided to have a look. I’ll start by saying this blog post can’t possibly do it justice, but I’ll try to cover the basics at least. The documentation is reasonably good, and enough time spent going through it should get you the answers you need. I’ll walk through a simple application using Pulsar, and add some commentary at the end.
I chose to connect a Blazor WASM application to Pulsar, since that’s what most of my client apps will be. There’s a bit of disappointment there, since it’s not possible to connect the WASM client directly to Pulsar and thus avoid the delays associated with proxying. Still, that’s a lot to ask for, so it’s not a huge deal. Proxying through the server API is more secure anyway, since credentials don’t have to be sent to the client side.
Create the application using dotnet new blazorwasm --hosted -n HelloPulsar -o HelloPulsar. Open the resulting solution file in your favourite IDE and clean it up by removing the Weather Forecast page, cleaning up navigation, etc. In the server project, add the Pulsar.Client NuGet package. This package is produced by the F# community and attempts to mimic the Java API verbatim, unlike the official .NET client, DotPulsar, which follows its own conventions. Add the following initialization to Program.cs:
// The builder type comes from the Pulsar.Client package.
var pulsarClient = await new PulsarClientBuilder()
    .ServiceUrl("pulsar+ssl://<my-cluster>.<my-org-namespace>.snio.cloud:6651")
    .Authentication(AuthenticationFactoryOAuth2.ClientCredentials(
        new Uri("https://auth.streamnative.cloud/"),
        "urn:sn:pulsar:<my-namespace>:<my-instance>",
        new Uri("file:///path/to/my/keyfile.json")))
    .BuildAsync();

builder.Services.AddSingleton(pulsarClient);
This will add the PulsarClient singleton to the application to allow us to create producers. Note that the service account used for authentication must have producer access to the tenant, namespace, or topic.
Now, in a controller within the server API (say, PulsarController), create a method that says hello:
[HttpPost("Hello")]
public async Task<IActionResult> Hello([FromBody] string? name = "world")
{
    var greeting = $"Hello, {name}!";

    // Build a producer for the topic. In anything beyond a demo you'd create
    // the producer once and reuse it, rather than building one per request.
    var producer = await _pulsarClient.NewProducer(Schema.STRING())
        .Topic("persistent://public/default/hello")
        .CreateAsync();

    await producer.SendAsync(greeting);
    return Ok();
}
This will send a greeting using the posted string to Pulsar. You will need to create the topic first in the StreamNative portal. You can verify that the message was received in the StreamNative portal by creating a subscription and peeking the message.
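You can also peek from code. A small consumer against the same topic, using the same Pulsar.Client package and the registered client, would look roughly like this (the subscription name is arbitrary):

// Sketch: subscribe to the topic and read back the greeting.
var consumer = await pulsarClient.NewConsumer(Schema.STRING())
    .Topic("persistent://public/default/hello")
    .SubscriptionName("hello-sub")          // any unused subscription name works
    .SubscribeAsync();

var message = await consumer.ReceiveAsync();
Console.WriteLine(message.GetValue());      // "Hello, world!"
await consumer.AcknowledgeAsync(message.MessageId);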
Ok, this works. But why would I want to choose it over Kafka? I can see some glimmers of why, and I’ll be keeping an eye on things to make a choice. The killer feature for me, I think, would be to use my own OAuth server for authentication. There’s some indication this is possible, though not yet with the StreamNative offering.
The community seems kind of small, and the StreamNative product rather minimal at present. On the other hand, if Pulsar offers more features (especially the custom OAuth server) it could be worth migrating new projects from Kafka. More to come…
I’ve spent a bit of time working with KeyCloak lately. It’s been some time since I looked in the Open Source world for an OIDC/OAuth2 solution, and when I found KeyCloak, I thought, “How did I miss this?”. I’ve been working with an ancient OIDC framework available for .NET, whose name escapes me right now. Later on, I came across IdentityServer4, now IdentityServer5, available as Duende IdentityServer under commercial license.
But KeyCloak was developed quietly by Red Hat, and seems to have gained some traction. Indeed, it is a highly capable authentication server, supporting the OIDC protocol, SSO complete with user provisioning via SCIM, and a complete OAuth2 implementation for more advanced scenarios. For this article, I’ll discuss the more basic approach of RBAC using KeyCloak and the built-in authn/authz available in .NET 6.
Unzip/untar the binary package in a system directory somewhere
Create a service that launches the KeyCloak server from the distribution
I’ll give instructions for Linux, but I imagine it should work equally well on any machine with a reasonably recent version of Java. I untarred under /opt, which creates a directory /opt/keycloak-20.0.1. I link this to /opt/keycloak.
We have to bootstrap the service before we can install it. We will need an initial username and password, and those will be set in the environment variables KEYCLOAK_ADMIN and KEYCLOAK_ADMIN_PASSWORD respectively. Before we start that, though, we need to install and configure Java and MySQL. I’ll leave this for the reader, as it’s usually just a simple matter of installing the openjdk-18-jdk and mysql-server packages.
Next, we need to modify /opt/keycloak/conf/keycloak.conf as follows:
# postgres is supported too if you prefer
db=mysql
# Create this user
db-username=keycloak
db-password=keycloak
# The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor.
db-url=jdbc:mysql://localhost:3306/keycloak
# HTTPS - requires root privileges or change of port
https-protocols=TLSv1.3,TLSv1.2
https-port=443
# The file path to a server certificate or certificate chain in PEM format.
https-certificate-file=${kc.home.dir}/conf/server.crt.pem
# The file path to a private key in PEM format.
https-certificate-key-file=${kc.home.dir}/conf/server.key.pem
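With the configuration in place, bootstrap the server once from a terminal so that the initial admin account is created from the environment. Something like this (start-dev runs in development mode on HTTP port 8080, which is fine for this first boot):

# First run only: the initial admin account is created from these variables.
export KEYCLOAK_ADMIN=admin
export KEYCLOAK_ADMIN_PASSWORD=change-me
/opt/keycloak/bin/kc.sh start-dev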
This will enable you to log in to the service at http://localhost:8080 with the username and password set in the environment. Your first order of business should be to create a new administrative user with a secure password and disable the default admin user.
You can now stop the running service (Ctrl-C in the terminal in which you ran the kc.sh command). It is time to replace it with a proper service file and have KeyCloak start automatically on boot.
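A minimal unit along these lines should do it (the keycloak user, the paths and the MySQL dependency are assumptions about your install). Save it as /etc/systemd/system/keycloak.service, then run systemctl daemon-reload and systemctl enable --now keycloak:

[Unit]
Description=Keycloak authorization server
After=network-online.target mysql.service

[Service]
Type=exec
User=keycloak
Group=keycloak
# Needed to bind port 443 without running as root
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/opt/keycloak/bin/kc.sh start
Restart=on-failure

[Install]
WantedBy=multi-user.target

Production mode (kc.sh start) picks up the HTTPS settings from keycloak.conf; depending on your setup you may also need to pass --hostname=<your-host>.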
The first thing we will need is a new realm to manage our application’s users. Create a realm by logging into the server (hopefully you took the time to configure HTTPS!). In the top left corner, there is a dropdown that lists all the current realms.
You can add a new realm from the button on the bottom. Give it a name, and save it. This will bring you to the main configuration screen for your new realm.
There’s a lot here to configure, but don’t worry about most of it for now. A lot of the options are related to security policies and automatic enforcement of scopes and permissions using OAuth2 Resource Server flow. This is an advanced topic that this article will not cover.
For our purposes, we will configure just the name. We will use the Client settings to configure our RBAC. So, create a new Client by selecting Clients on the left navigation, and clicking Create. Fill in a name, leave the protocol on openid-connect. You don’t need to fill in the Root URL, but you can if you like.
Now you are at the main configuration screen for your new client.
We are only interested in roles. Go to the Roles tab and add any roles you might need (I used Administrator and User, making Administrator a composite role that contained User as well). You can then assign these roles to individual users in their details screen.
So adding users with roles is easy enough. How do we inform our application of those roles? We need to put a claim in the access token that will declare our roles to the application. KeyCloak’s built-in mapper for User Client Role will put those roles in a JSON block within the access token, keyed by client ID, roughly as follows (abbreviated):
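"resource_access": {
  "bookstore": {
    "roles": [
      "Administrator",
      "User"
    ]
  }
}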
Unfortunately, .NET 6 won’t interpret these roles out-of-the-box, so we need to give it a little help. Help comes in the form of a class extending AccountClaimsPrincipalFactory<RemoteUserAccount>. The base class provides a virtual method, CreateUserAsync(), that constructs the ClaimsIdentity given an access token (well, more specifically a token accessor – more on that below). The entire class looks like this:
public class KeycloakClaimsPrincipalFactory : AccountClaimsPrincipalFactory<RemoteUserAccount>
{
    public KeycloakClaimsPrincipalFactory(IAccessTokenProviderAccessor accessor) : base(accessor)
    {
    }

    public override async ValueTask<ClaimsPrincipal> CreateUserAsync(RemoteUserAccount account, RemoteAuthenticationUserOptions options)
    {
        var user = await base.CreateUserAsync(account, options);
        if (user.Identity is ClaimsIdentity identity)
        {
            var tokenRequest = await TokenProvider.RequestAccessToken();
            if (tokenRequest.Status == AccessTokenResultStatus.Success)
            {
                if (tokenRequest.TryGetToken(out var token))
                {
                    // Parse the raw JWT and pull out KeyCloak's resource_access claim.
                    var handler = new JwtSecurityTokenHandler();
                    var parsedToken = handler.ReadJwtToken(token.Value);
                    var json = parsedToken.Claims.SingleOrDefault(c => c.Type == "resource_access");
                    if (json?.Value != null)
                    {
                        var obj = JsonConvert.DeserializeObject<dynamic>(json.Value);
                        var roles = (JArray?) obj?["bookstore"]["roles"];
                        if (roles != null)
                            foreach (var role in roles)
                                identity.AddClaim(new Claim(ClaimTypes.Role, role.ToString()));
                    }
                }
            }
        }
        return user;
    }
}
Note that we use the TokenProvider provided by the base class. This is an IAccessTokenProvider, which will use the IdP token endpoint to fetch a fresh access token. This is important to note, because if we are not yet authenticated, we obviously cannot get an access token, hence the need to ensure that we are receiving a valid token response prior to proceeding.
The key line here is var roles = (JArray?) obj?["bookstore"]["roles"] (where bookstore is the client ID we created earlier). A JArray works very much like a JavaScript array, and can dereference multiple levels of a hierarchy using array notation. Once we have the roles, we simply add the claim to the identity using the expected claim type and return the updated identity.
Now that we have an access token with the proper claims, we should be able to simply use the following service declaration:
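In the WASM client’s Program.cs, that declaration should look roughly like this (the "Oidc" section name is my own choice, and AddAccountClaimsPrincipalFactory is what wires in the factory above):

builder.Services.AddOidcAuthentication(options =>
{
    // Bind the provider options from the "Oidc" section of appsettings.json (name is arbitrary).
    builder.Configuration.Bind("Oidc", options.ProviderOptions);
}).AddAccountClaimsPrincipalFactory<KeycloakClaimsPrincipalFactory>();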
(Note – this is Blazor WASM; you will need the equivalent package for Blazor Server to do the same thing.) You will also need an appsettings.json along the following lines, where Authority is your realm’s URL and ClientId matches the client created earlier:
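{
  "Oidc": {
    "Authority": "https://<keycloak-host>/realms/<realm>",
    "ClientId": "bookstore",
    "ResponseType": "code"
  }
}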
You may have come across one or more of these products in recent months: GitHub Codespaces or JetBrains Space. So what, you say: why would I care about coding in my browser? I have a good computer; why do I need to screw around with remote connections, etc.? As it turns out, this journey for me started from a desire to code in my browser…
I like my iPad. It’s really good at many, many things, but it simply doesn’t have any native coding tools, nor is it likely to ever have them. Originally, I came across a project on GitHub called code-server. That project has since morphed into a couple of different forks, and I don’t even know whether the original still works, since it seems to have spawned commercial offerings. In any case, this program allowed me to run VS Code full screen in Safari. That seemed to work reasonably well, but it felt pretty clunky.
Codespaces was a lot better than code-server. I could add individual projects to my iPad home screen and get a near-native experience that was quite usable. I more or less left it at that until I came across JetBrains Space. Space offered the same thing, but for JetBrains IDEs. The client, however, is not a simple HTML5 client; it’s a full thick client written in Java, which will never run on the iPad. I put it aside since it didn’t seem to serve any immediate purpose.
But then secure coding discussions came up. How do we keep source code from leaving the organization? How can we support developers as the infrastructure behind our growing application grows along with it? I remember when I first implemented RabbitMQ and provided instructions to developers on how to install it locally, there was a lot of grumbling. And, to be fair, they had a point. “Local development workstations” are not really a thing anymore, given that many resources required by cloud applications don’t even have local analogues, or have very limited ones.
I thought about this for a long time. Remote development seemed like a good fit, but it just felt so clunky. Now there was a reason for it, though, so I tried again. After I figured out what the roles of the various JetBrains products are (Space Desktop, Gateway, IntelliJ-based IDEs), it actually seemed like a pretty good solution. I was able to run the Rider IDE on an Azure D2. Latency through OpenVPN hovered between 70 and 100 ms, with the occasional spike over 100 ms. System load for the IDE with a basic project is very reasonable for the debugging cycle. I could see needing a D4 for larger projects, though. Still, 4 cores is a relatively small VM and reasonably inexpensive when running Linux (under $200 CAD/mo).
It’s important to put aside concerns about the idle desktop PC. That is not the problem we are trying to solve. We’re not even trying to figure out how to code on the iPad anymore (though that is a nice-to-have). The problem we are trying to solve is one of security and technical support. Security and Support have requirements that impose on developers slightly. It’s our job to make sure that imposition is as painless as possible.
One of the ways we can do that is to build shared infrastructure that developers no longer have to support. In my RabbitMQ experience above, I ended up having to do just that: build shared infrastructure so I didn’t have to support multiple developer workstations. Instead, I only had to support a single development cluster and come up with a way the application could share a single configuration file and cluster, while still allowing for easy developer onboarding.
So, our massive local development workstation turns into a series of smaller purpose-built VMs and PaaS services and a netbook (or a developer’s personal device, if corporate security can accommodate it). I’m not going to pretend this is cheaper than buying a laptop every couple of years. Indeed, a quick estimate shows that the cloud environment is about twice as expensive. This is a trade-off that you’ll have to decide on yourself. In our circumstances, the security and infrastructure support concerns win out; YMMV.
It took me a long time to see the benefits of remote development. For the individual, beyond the very narrow use-case of coding in a browser, it’s not really there. It’s not until you start worrying about your code and data walking out the door, or how you’re going to support that developer in India that any of these costs start to make sense. But, remote development does provide solutions to both of those concerns that are not easily satisfied by issuing physical hardware to developers.
To conclude, it looks like Microsoft has hopped on this bandwagon too with vscode-server, currently in private preview. If it’s anything like code-server, it should provide a similar in-browser experience, hopefully with less of the clunkiness. Having used github.dev, vscode.dev, JetBrains Space and Gateway, I think I can say I’ve covered most of the gamut of possibilities. The JetBrains offering definitely feels the most integrated right now, but doesn’t run on the iPad. Maybe one day I’ll get both!
I’ve spent some time now in the Steam Deck CLI, and gotten a good feel for Arch Linux from the perspective of a long-time Ubuntu user.
Honestly, I like it a lot. It’s really a unique OS that offers a lot to many different users. For your average Steam Deck user who dabbles in the desktop, the Discover software center, based on Flatpak, will be all you need. For the power user, though, it’s a little more complicated, depending on what you need. There are four tools you can use to install software: the Discover software center, pacman, yay and makepkg. All of these serve different purposes.
The basis of the native packaging tools is makepkg. Without going into technical detail, makepkg works by reading a build specification and producing a binary package. This is similar to other distributions that provide a “build-from-source” mechanism, such as source RPMs. pacman and yay seem to have a fair bit of overlap, though pacman reads only from the system repositories. I like to think of pacman as the super-user yay. For installing system libraries and upgrades, it is ideal. Crash course:
pacman -Ss <search> – search for a package
pacman -S <packagename> – install a package
pacman -Sy <packagename> – update spec from repository and sync package
pacman -Syu – update all specs from repository and upgrade all packages
The Arch User Repository (AUR) contains user-contributed build specs from around the world. Packages from the AUR typically build from source code on your machine, based on those build specs, rather than installing prebuilt binaries. You install these packages with the yay tool, which has the same invocations as pacman but is not invoked with sudo; you will be prompted for sudo credentials if required.
Finally, you can make your own packages with makepkg. From a directory with a build spec, simply type makepkg -si (no sudo required; -s pulls in the build dependencies and -i installs the resulting package).
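If you’re curious what a build spec looks like, a PKGBUILD is just a small shell file. A stripped-down, purely illustrative one (the hello-example package and its URL are made up) looks something like this:

# PKGBUILD – illustrative only; this package and URL don't exist
pkgname=hello-example
pkgver=1.0
pkgrel=1
arch=('x86_64')
url="https://example.org"
license=('MIT')
source=("https://example.org/hello-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "hello-$pkgver"
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install
}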
Overall, I find the AUR to be too unstable. I know I harp on Neovim, but it’s an important tool for me. The “stable” release of Neovim (i.e. from pacman) is only at 0.6. I need a minimum of 0.7. Theoretically the “unstable” release of Neovim (neovim-git), is at 0.7. But because it is a git repository, the package information contained in the AUR is wrong – the git repository has since been updated to 0.8. So, yay -S neovim-git doesn’t work. yay -S neovim is too old. That leaves only flatpak, which thankfully has Neovim 0.8.
From all of this, I’ve learned that the Flatpak repository is probably the best way to go for software. While the pacman system repositories are stable, the AUR most definitely is not. I’d consider it a last resort before building it from source yourself. Flatpak is also stable, and allows installation of important userspace applications.
System Services
Arch Linux is also built on systemd and uses systemctl to enable, disable, start, and stop services.
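The muscle memory carries over directly from Ubuntu; for example:

# Enable a service now and on every boot
sudo systemctl enable --now sshd
# Check what it's up to
systemctl status sshd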
Desktop
While there’s no “official” desktop for either distribution, SteamOS defaults to KDE Plasma 5, and Ubuntu to GNOME. Arch Linux itself is not a graphical distribution, and allows installation of many desktops. This is the first time I’ve used KDE Plasma, and my first impressions are very good. I haven’t used KDE in a long time, and I’m surprised to see how stable and shiny it is.
Support
I think that if you require commercial support, you’ll want to stick with Ubuntu. Arch Linux is very much a DIY OS, and isn’t really advisable from a business perspective. However, most sites that have Linux software provide instructions for Arch Linux; or, the AUR contains a build spec.
Conclusion
Arch Linux goes on my “approved” list. I have a good deal of flexibility in how to get software onto the machine. It is similar enough to Ubuntu that I didn’t get lost, but is clearly a unique OS that is meant more for power users than casual users. It’s a decent choice on Valve’s part, one that has turned a great purchase in the Steam Deck into a very viable development PC.
I think I am obsessed with Neovim. After finding out that the default Neovim currently distributed with SteamOS is only 0.6, I became determined to install the current git version, 0.8. This led me down an interesting rabbit hole, which while ultimately unsuccessful, may be possible in the future.
The idea is simple enough: install Arch Linux and use the linux-steamos AUR package to get the full hardware support. But it just didn’t work out that way. Installing Arch Linux is simple enough. Just image a USB and boot from it. On the Steam Deck, this is accomplished by plugging a bootable USB into the dock and holding down the volume down key while powering on until you hear the chime.
Following the Arch Linux installation instructions was easy enough too, once all the required packages were installed. First, install yay-bin from GitHub, then use yay to install update-grub (apparently just a convenient alias for grub-mkconfig).
Getting it to boot the default OS is easy enough, and so is installing the KDE Plasma desktop (yay -S plasma xorg sddm plasma-wayland-session). Unfortunately, the current Arch Linux kernel doesn’t support the Steam Deck audio or bluetooth hardware. This, of course, makes it quite unusable for gaming.
So, there’s a package in the AUR called linux-steamos. This is theoretically the answer to the problem above, since it would have all the necessary hardware support. However, after several hours, I found I was unable to get the system to boot using the new kernel.
On my first attempt I used an ext4 root file system, but continued to get error messages about an unknown root file system type. And indeed, there’s no ext4 module for grub to insmod. I recalled that the default SteamOS uses btrfs, so attempt #2 was to use btrfs. Again, I got complaints about an unknown filesystem type, even after dropping into the grub shell and manually entering the boot commands:
set root=(hd0,gpt2)
linux (hd0,gpt1)/vmlinuz-linux-steamos root=/dev/nvme0n1p2 rw
initrd (hd0,gpt1)/initramfs-linux-steamos.img
insmod btrfs
boot
This has me stumped, and I’ve given up and gone back to stock SteamOS. I guess I’ll have to make do with Neovim 0.6 until the next SteamOS update. For posterity, I had one note about building linux-steamos: the build will fail on linking vmlinux (i.e. right at the end, after 2 hours!) unless you modify the file scripts/link-vmlinux.sh with the change discussed here: https://lore.kernel.org/bpf/20220916171234.841556-1-yakoyoku@gmail.com/
So, it’s possible to boot Arch Linux on kernel 6.0, but the hardware support for all parts of the Steam Deck isn’t there, and I can’t get the linux-steamos package to boot no matter what I try. So, for now at least, stock Arch Linux is out of the picture. For the most part, this doesn’t matter, as Valve maintains their packages. However, it does mean you have to wait on Valve for reasonably recent packages. Consider this blog post my only complaint to Valve about the Steam Deck 😛
For those of us who have been using WSL for a long time, we have all been quite frustrated by the lack of a functioning systemd. I’ve personally resorted to a package called systemd-genie that did some gymnastics to fool WSL into thinking it had booted properly (with systemd at PID 1). This mostly worked but was exceedingly brittle. Trying to remember whether you were in or out of the bottle was a pain (VS Code integrated terminal – out of the bottle). I changed my shell to always be in the bottle, but it just didn’t quite work out.
To my surprise, while walking another developer through my WSL setup instructions, I pointed him at systemd-genie, only to find a message on its page: Microsoft is previewing proper systemd support!
So. The bad news. Your WSL installation is probably borked beyond repair if you’ve been using systemd-genie or similar. Back it up and blow it away. Get the latest release from the [Microsoft WSL GitHub repository](https://github.com/Microsoft/WSL/releases). Run wsl --update to get the latest updates, and install one or more distributions from the Microsoft Store. Boot the distribution, and edit the file /etc/wsl.conf. Add the lines:
[boot]
systemd=true
Restart the distribution. When you boot back up, you’ll find that systemd is now running. Try snap install … or start some services with systemctl. It’s been a long journey, but Microsoft Windows 11 is now Linux!
My fingers hurt. I’ve spent the last few hours bemoaning the fact that I left my notebook charger at the office 30 km away, and that the replacement won’t arrive until tomorrow. Not content to sit around, I realized that while most iPadOS remote desktop/VNC clients suck, there are plenty of good SSH clients. So, I set about learning how to use Neovim, and a dash of Lua for good measure. I won’t post the file, since there’s already plenty to search for, but I will point out why Neovim is better.
So, first: Lua. If you’ve tried to make any kind of useful Vim configuration, you’ve probably seen that it can get ugly fast. Lua allows for modular configuration and is a rather pleasant configuration language. It reads well, indents well and generally communicates intent pretty clearly. Configuration with Lua is much cleaner, so point #1 for Neovim.
I wanted to get the most basic setup possible, spend as little time as possible configuring things, and end up with a very specific result: autocompletion, syntax checking and navigation. I’ve struggled with getting all of these to work together in Vim, but Neovim uses fewer plugins to accomplish the same task, and thus has fewer configuration issues.
But this post is not really about Neovim, as awesome as it is; it’s more of a philosophical reflection on how hard it is to get away from the command line. It’s where I started so many (I won’t even tell you!) years ago, and it’s where I’m going back to. Somewhere around when Windows Vista came out, I made a conscious effort to learn how to use a GUI, because my fingers hurt then too.
But, it turns out that you need language, not gestures, to express complex thoughts. And so interfaces go back to the CLI. And though I can still navigate certain pieces of complex software (VS, e.g.) extremely well with the GUI, notice that all of them have added back a text interface to allow for more complex commands (the command palette in VS Code, e.g.). And there are options that exist only in the Azure CLI that you can’t find in the Portal. No, no matter how hard I try, the CLI just keeps coming back. And now my fingers hurt again. Where’s my voice-controlled computer that understands spoken C# and bash? 😂
I’ve been evaluating queues and storage for an event-sourced system lately, and I seem to have found what I am looking for in Apache Kafka. Kafka is used in a surprising number of places. I only learned today, for example, that Azure Event Hubs exposes a Kafka-compatible endpoint, and can be used as a Kafka cluster itself.
I have the following requirements:
Topic-based subscriptions
Event-based
Infinite storage duration
Schema validation
MQTT connection for web and mobile clients
I’ve tried out a number of different solutions, but the one I am thinking about right now is based on the Confluent platform, a cloud-managed Kafka cluster. It is relatively easy to set up a cluster with the above requirements. Confluent has a nice clear option to turn on infinite storage duration, and provides a schema registry that supports multiple definition languages, such as JSON Schema, Avro and Protobuf. Schema registries are a nice way of ensuring your event streams stay clean, preventing buggy messages from ever entering the queue.
I say “event-based”, but really we just need to be able to identify the schema type and use JSON. That’s pretty standard, but it needed to be mentioned.
MQTT looked like it might be a challenge, but it really wasn’t. I’d recommend checking out CloudMQTT, a simple site for deploying cloud-based Mosquitto instances. Setting up the MQTT broker took 2 minutes, and then it was off to Kafka Connect to hook it up. Adding the MQTT source is as easy as expected: provide the URL and credentials, and the rest just happens automatically. You can additionally subscribe to topics to push back out to MQTT. This works perfectly for web and mobile clients, whose tasks are to push events and to receive notifications. MQTT allows for a very nice async request/response mechanism that doesn’t use HTTP and doesn’t have timeouts.
Finally, as I mentioned, Azure Event Hubs exposes a Kafka-compatible endpoint, so you can even push certain topics (auditing, e.g.) out to Event Hubs to eventually make their way into the SIEM. There are a number of useful connectors for Kafka, but I haven’t really looked at them yet, except to note that there’s a connector for MongoDB.
Kafka is a publish/subscribe based event broker that includes storage. This makes it ideal for storing DDD aggregates. Having the broker and the database in the same place simplifies the infrastructure, and it’s a natural role for the broker to fill.
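As a rough sketch of what publishing an aggregate’s events looks like from .NET (this uses the Confluent.Kafka client; the topic name, credentials and OrderPlaced event are placeholders, not anything from my actual system):

using System.Text.Json;
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "<bootstrap-servers>",   // from the Confluent Cloud cluster settings
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "<api-key>",
    SaslPassword = "<api-secret>"
};

using var producer = new ProducerBuilder<string, string>(config).Build();

// Key by aggregate id so every event for a given aggregate lands in the same
// partition, preserving order for that aggregate.
var evt = new { Type = "OrderPlaced", OrderId = "1234", Total = 42.50m };
await producer.ProduceAsync("orders", new Message<string, string>
{
    Key = evt.OrderId,
    Value = JsonSerializer.Serialize(evt)
});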
The net result of this architecture is that we no longer need to talk HTTP once the Blazor WASM SPA has loaded. All communication with the back-end system is done via event publishing and subscribing over MQTT.
I’m happy with this architecture. Confluent seems to be reasonably priced for what you get (the options I have chosen run about $3USD per hour). CloudMQTT is almost not worth mentioning price-wise, and Kafka Connect leaves open a lot of integration possibilities with other event streams. As it is a WASM application, a lot of the processing is offloaded to the client, and the HTTP server backend stays quiet. The microservices subscribe to their topics and react accordingly, and everything that ever touches the system is stored indefinitely.