I am perhaps a little stubborn and impatient, but I don’t want to go through the whole provisioning process for a staging Event Store cluster. Indeed, a single pod attached to some storage should be sufficient for the needs of the staging cluster. However, support for Event Store DB on Kubernetes is in a bit of limbo: not wanting to support a production virtualized Event Store cluster (understandable), the maintainers have deprecated the Helm chart in favor of an as-yet-to-be-developed cluster operator. So, here are the steps to get this working on an Azure Kubernetes cluster:
- Create persistent volume and persistent volume claim
- Change the ownership of the volume mount
- Create deployment
- Create service
- Test
Persistent Volume
We want to host the Event Store DB data on a managed disk. Begin by creating a managed disk of the desired size in the Azure portal, in the correct resource group. If you would rather script it, something like the following az CLI call should also work (the SKU is just an example; keep the placeholder names consistent with the manifest below):
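az disk create \
  --resource-group <my-resource-group-name> \
  --name <my-managed-disk-name> \
  --size-gb 50 \
  --sku StandardSSD_LRS

You will then create a file, pv-azuredisk.yaml, to apply to your cluster. The file will look like this: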
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-eventstore-azuredisk
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: managed-csi
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group-name>/providers/Microsoft.Compute/disks/<my-managed-disk-name>
    volumeAttributes:
      fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eventstore-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: pv-eventstore-azuredisk
  storageClassName: managed-csi
You should replace the fields in the volumeHandle with your values, and update the storage size request as desired. I created a separate namespace to hold the Event Store deployment:
kubectl create ns eventstore
Then deploy the PersistentVolume and PersistentVolumeClaim with
kubectl apply -n eventstore -f pv-azuredisk.yaml
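To confirm that the claim bound to the volume (both should report a Bound status; PersistentVolumes are cluster-scoped, so the first command takes no namespace):

kubectl get pv
kubectl get pvc -n eventstore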
Change Ownership of Volume Mount
The volume mount will have the default permissions that the filesystem was created with, namely root:root. Since the Event Store DB Docker image runs as user 1000:1000, you’ll need to change the ownership of the volume mount or the container won’t start up. We can do this by creating a “shell” deployment with the same volume mount and changing the ownership from there. This only needs to be done once.
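As an aside, Kubernetes can often handle this chown for you: setting fsGroup in the pod securityContext tells the kubelet to adjust the group ownership of the volume on mount. I haven’t verified it against this exact setup, but adding something like this to the Event Store pod spec below may let you skip the shell deployment entirely:

spec:
  securityContext:
    fsGroup: 1000

If that works for you, the rest of this section is unnecessary; the manual route follows.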
Ubuntu Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
  labels:
    app: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu:latest
          stdin: true
          tty: true
          volumeMounts:
            - name: eventstore-azure
              mountPath: /mnt/eventstore
      volumes:
        - name: eventstore-azure
          persistentVolumeClaim:
            claimName: pvc-eventstore-azuredisk
After saving this as shell.yaml and applying it:
kubectl apply -n eventstore -f shell.yaml
you can look up the pod name and attach to a bash shell:
kubectl get pods -n eventstore
kubectl attach -n eventstore -it <ubuntu-pod-name>
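If attach doesn’t give you a prompt (it attaches to the container’s existing process rather than starting a new one), kubectl exec into a fresh shell works just as well:

kubectl exec -n eventstore -it <ubuntu-pod-name> -- bash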
Since we mounted the Event Store volume at /mnt/eventstore, we can simply:
cd /mnt/eventstore
chown -R 1000:1000 .
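Before tearing the shell down, you can double-check the result; ls with numeric IDs should now report 1000 1000 as the owner and group:

ls -lnd /mnt/eventstore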
The shell has served its purpose for now, so delete the deployment:
kubectl delete -n eventstore -f shell.yaml
Now we can deploy Event Store with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventstore-deployment
  labels:
    app: eventstore
spec:
  selector:
    matchLabels:
      app: eventstore
  replicas: 1
  template:
    metadata:
      labels:
        app: eventstore
    spec:
      restartPolicy: Always
      containers:
        - name: eventstore
          image: eventstore/eventstore:latest
          ports:
            - containerPort: 2113
          env:
            - name: EVENTSTORE_CLUSTER_SIZE
              value: "1"
            - name: EVENTSTORE_RUN_PROJECTIONS
              value: "All"
            - name: EVENTSTORE_START_STANDARD_PROJECTIONS
              value: "true"
            - name: EVENTSTORE_HTTP_PORT
              value: "2113"
            - name: EVENTSTORE_INSECURE
              value: "true"
            - name: EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP
              value: "true"
          volumeMounts:
            - name: eventstore-azure
              mountPath: /var/lib/eventstore
      volumes:
        - name: eventstore-azure
          persistentVolumeClaim:
            claimName: pvc-eventstore-azuredisk
---
apiVersion: v1
kind: Service
metadata:
  name: eventstore
spec:
  type: ClusterIP
  selector: # without a selector the Service has no endpoints
    app: eventstore
  ports:
    - name: eventstore
      port: 2113
      targetPort: 2113
Apply the above file to your cluster, and it should start.
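Assuming you saved the manifests above as eventstore.yaml (the name is arbitrary), that is:

kubectl apply -n eventstore -f eventstore.yaml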
You can verify that it is running:
kubectl get all -n eventstore
NAME                                         READY   STATUS    RESTARTS   AGE
pod/eventstore-deployment-5cff847b4f-xcjps   1/1     Running   0          29m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/eventstore   ClusterIP   10.2.0.219   <none>        2113/TCP   21m

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/eventstore-deployment   1/1     1            1           29m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/eventstore-deployment-5cff847b4f   1         1         1       29m
You can then access the dashboard for Event Store using port-forward:
kubectl port-forward -n eventstore --address 127.0.0.1 pod/eventstore-deployment-5cff847b4f-xcjps 2113:2113
The dashboard will be accessible through http://localhost:2113.
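Beyond the dashboard, you can exercise the HTTP API through the same port-forward. Endpoints vary a little between Event Store versions, but on recent releases a liveness probe and a test write over AtomPub (which we enabled above) look something like the following; the stream name, event type, and UUID are arbitrary examples:

curl http://localhost:2113/health/live
curl -i http://localhost:2113/streams/test-stream \
  -H "Content-Type: application/vnd.eventstore.events+json" \
  -d '[{"eventId":"fbf4a1a1-b4a3-4dfe-a01f-ec52c34e16e4","eventType":"TestEvent","data":{"hello":"world"}}]'

A 201 Created response means the write landed, and the new test-stream should show up in the dashboard’s Stream Browser.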
I should note that this configuration is not recommended for production. It is a fair bit of setup to get Event Store deployed to your test cluster, but once it is, you won’t have to redo it. You can keep the shell.yaml file in case you want to reconnect to the volume mount and, say, erase all the data. It is theoretically possible to use the ReadWriteMany access mode with host caching off to allow more than one replica, but at that point you might as well deploy a bare-metal cluster.
Hopefully Event Store will renew their efforts to create a cluster operator soon. I would imagine the steps would look similar to the manual process I’ve outlined above, but I can understand that supporting multiple cloud environments makes this difficult, as does the caveat that it is not intended for production.