There is a lot of confusing information about running KeyDB with Podman, with many people stating that it is not possible to get it to work. I have gotten it to work, and here is how.
In this blog I will be deploying KeyDB on a server running a web application that requires it. KeyDB
will be running 8 nodes in total: 4 masters and 4 replicas. Each node will run in its own container,
and all containers will run inside a single pod on EuroLinux 9.
I am not going to discuss why I am running a cluster setup rather than a single instance; I just hope that
these notes will make your life easier should you ever need to deploy this kind of setup.
# All the following commands are run as root user
#
# Install podman and related packages on EuroLinux 9
dnf install container-tools
# Prepare directories
cd ~
mkdir -p keydb/{29130..29137}/data
touch keydb/{29130..29137}/{node,keydb}.conf
chcon -Rt container_file_t keydb
chown 999 keydb/*/node.conf
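To make sure the labels and ownership came out as intended, a quick listing of one of the node directories can serve as a sanity check (the exact SELinux label string may look slightly different on your system):
# Verify SELinux context and ownership of one node directory
ls -lZ keydb/29130/
# node.conf should be owned by UID 999 and carry the container_file_t type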
# Create a temporary container instance
podman run \
--detach \
--name=keydb \
--tz=local \
--publish=29121:6379 \
--rm \
eqalpha/keydb
# Copy the default configuration file into the new configuration file
podman exec keydb bash -c "cat /etc/keydb/keydb.conf" > keydb/default.conf
# Generate a pseudorandom password and copy it
#
# These notes assume that the generated password is 608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d
podman exec -it keydb /usr/local/bin/keydb-cli
ACL GENPASS 512
EXIT
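If you prefer to avoid the interactive session, keydb-cli (like redis-cli) also accepts the command as arguments, so the password can be captured straight into a shell variable. This is just a sketch, and the KEYDB_PASSWORD variable name is my own choice:
# Non-interactive alternative: capture the generated password in a variable
KEYDB_PASSWORD="$(podman exec keydb /usr/local/bin/keydb-cli ACL GENPASS 512)"
echo "$KEYDB_PASSWORD"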
The default configuration file can be used as a template: each node requires its own configuration file,
which is identical to every other node's except for the port directive.
The following directives are added if missing, or have their values changed if they are already set. In
my case, port, protected-mode and appendonly were already set to some value.
port 29130
protected-mode yes
cluster-enabled yes
cluster-config-file /etc/keydb/node.conf
cluster-node-timeout 5000
appendonly yes
requirepass 608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d
masterauth 608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d
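Rather than editing eight files by hand, the per-node configuration can be derived from the template in a loop. The following is only a sketch under the assumptions above: it relies on the directory layout created earlier, strips the directives that were already present in my default.conf, and appends the cluster settings together with the generated password.
# Hypothetical helper loop: build each node's keydb.conf from the template
PASSWORD=608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d
for PORT in $(seq 29130 29137); do
  # Drop any existing port/protected-mode/appendonly lines from the template
  grep -vE '^(port|protected-mode|appendonly) ' keydb/default.conf > keydb/${PORT}/keydb.conf
  # Append the per-node and cluster directives
  cat >> keydb/${PORT}/keydb.conf <<EOF
port ${PORT}
protected-mode yes
cluster-enabled yes
cluster-config-file /etc/keydb/node.conf
cluster-node-timeout 5000
appendonly yes
requirepass ${PASSWORD}
masterauth ${PASSWORD}
EOF
done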
I want my 8 containers to use ports ranging from 29130 to 29137, which is why I created directories
with those names inside the /root/keydb directory. The data directory is for persisting data;
instead of using a volume I opt for bind mounts, because that allows me to move from a containerized
instance to a native instance whenever I want.
Now it is time to create the pod that will contain all 8 containers. The pod makes it easier
to start the containers as a systemd service, and it also simplifies the configuration a great deal:
the cluster bus ports will not be exposed but remain internal to the pod, and only the command
ports are published to the host.
# Create a pod
podman pod create \
--name=keydb-cluster \
--publish=29130-29137:29130-29137
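A quick check that the pod exists and publishes the expected port range never hurts:
# Confirm the pod was created and inspect its port mappings
podman pod ps --filter name=keydb-cluster
podman pod inspect keydb-cluster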
# Run the final container instances
podman run \
--detach \
--name=keydb-node0 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29130/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29130/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29130/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node1 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29131/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29131/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29131/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node2 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29132/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29132/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29132/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node3 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29133/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29133/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29133/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node4 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29134/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29134/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29134/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node5 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29135/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29135/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29135/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node6 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29136/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29136/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29136/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
podman run \
--detach \
--name=keydb-node7 \
--label="io.containers.autoupdate=registry" \
--tz=local \
--mount=type=bind,src=/root/keydb/29137/node.conf,dst=/etc/keydb/node.conf \
--mount=type=bind,src=/root/keydb/29137/keydb.conf,dst=/etc/keydb/keydb.conf \
--mount=type=bind,src=/root/keydb/29137/data,dst=/data \
--pod=keydb-cluster \
--rm \
eqalpha/keydb
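The eight commands above differ only in the container name and the directory used for the bind mounts, so they can also be collapsed into a loop if you prefer; this sketch assumes the directory layout created earlier:
# Equivalent loop: node index 0..7 maps to ports 29130..29137
for I in $(seq 0 7); do
  PORT=$((29130 + I))
  podman run \
    --detach \
    --name=keydb-node${I} \
    --label="io.containers.autoupdate=registry" \
    --tz=local \
    --mount=type=bind,src=/root/keydb/${PORT}/node.conf,dst=/etc/keydb/node.conf \
    --mount=type=bind,src=/root/keydb/${PORT}/keydb.conf,dst=/etc/keydb/keydb.conf \
    --mount=type=bind,src=/root/keydb/${PORT}/data,dst=/data \
    --pod=keydb-cluster \
    --rm \
    eqalpha/keydb
done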
# Configure the pod as a service
podman generate systemd --new --name --files keydb-cluster
# Copy the generated service files to the correct location
cp pod-keydb-cluster.service container-keydb-node*.service /etc/systemd/system/
# Reload systemd, enable and start the service
systemctl daemon-reload
systemctl enable --now pod-keydb-cluster
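Before joining the nodes into a cluster, it is worth confirming that the service is up, all containers are running inside the pod, and only the command ports are listening on the host; something along these lines works:
# Verify the service, the pod and the published ports
systemctl status pod-keydb-cluster
podman pod ps
podman ps --pod
ss -tlnp | grep 2913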
# Once all the nodes are running they must be joined into a cluster
podman exec -it keydb-node0 /usr/local/bin/keydb-cli \
-a "608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d" \
--cluster create \
127.0.0.1:29130 \
127.0.0.1:29131 \
127.0.0.1:29132 \
127.0.0.1:29133 \
127.0.0.1:29134 \
127.0.0.1:29135 \
127.0.0.1:29136 \
127.0.0.1:29137 \
--cluster-replicas 1
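Once the create command finishes, the cluster state can be checked from any node; cluster_state should report ok, all 16384 slots should be assigned, and cluster nodes should list 4 masters and 4 replicas. Note that -p is needed here because the nodes do not listen on the default port:
# Check cluster health
podman exec -it keydb-node0 /usr/local/bin/keydb-cli \
-p 29130 \
-a "608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d" \
cluster info
# Show the master/replica layout
podman exec -it keydb-node0 /usr/local/bin/keydb-cli \
-p 29130 \
-a "608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d" \
cluster nodes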
KeyDB nodes will now be linked and working. The local application can access KeyDB on any of the exposed
ports at localhost or 127.0.0.1.
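As a final smoke test, you can write and read a key through any node; the -c flag makes keydb-cli follow cluster redirects, so the key can land on whichever master owns its slot. The key name below is arbitrary:
# Smoke test: set and get a key through the cluster
podman exec -it keydb-node0 /usr/local/bin/keydb-cli \
-c -p 29130 \
-a "608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d" \
set demo-key "hello"
podman exec -it keydb-node0 /usr/local/bin/keydb-cli \
-c -p 29130 \
-a "608fe9278d074fabdee80b314d9d1cd3c7d266efb9d2ca490641f8242ef2aec2f6e24584995e79d2527451a30f12a5f15c3baaedf13a28a326d9e121755ea21d" \
get demo-key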