In this guide we will install an S3-compatible object storage service based on MinIO on a Kubernetes cluster in Origo OS. We will use the Kubernetes stack to quickly fire up a Kubernetes cluster, prepare the virtual disks that MinIO will use, and then install MinIO on the cluster.
Install a Kubernetes cluster by following this guide.
IMPORTANT: Make sure you install at least 4 nodes instead of the default 2 (as shown in the image above). Also make sure you name the cluster “MinIO”, as this naming is assumed for the commands below (specifically regarding the formatting of drives).
After the Kubernetes cluster is up and running, you must set a password for the stabile user in the “security” tab, and allow ssh access from your current IP address.
Once you have done this, open your favourite ssh terminal and ssh to the stack administration server (the one running the control plane); all the commands in the steps below are executed there.
The administration server running the control plane is the one with a name ending in “.0” – find its IP address in the dashboard (as shown in the image above).
To use the data disks attached to each of the nodes in the Kubernetes cluster, we must resize the disks, reset them and disable mounting of them (because going forward they will be managed by MinIO). We will do this from the ssh terminal as the user “stabile”, using the utility “stabile-helper”, which allows us to execute the same command in parallel on all servers in a stack:
First resize the data disks of all the nodes to the desired capacity – we use 100GB in this example:
stabile-helper runcommand "stabile-helper resizestorage 100GB vdb1"
Change fstab on all nodes to prevent them from trying to mount the data disks, as they will no longer contain a regular file system:
stabile-helper runcommand "sed -i 's/\/dev\/vdb1.*//' /etc/fstab"
Then unmount and wipe the data disks on all nodes, so they are ready for MinIO:
stabile-helper runcommand "umount /mnt/local"
stabile-helper runcommand "wipefs -a /dev/vdb"
To verify the data disks have been resized, type:
stabile-helper runcommand lsblk
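If you only want to see the data disks, you can limit the output to the device in question (a convenience, not a required step); each node should report /dev/vdb at 100G:
stabile-helper runcommand "lsblk /dev/vdb"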
References:
https://docs.min.io/minio/k8s/reference/minio-kubectl-plugin.html
https://docs.min.io/minio/k8s/deployment/deploy-minio-operator.html
To install the MinIO operator to your control plane, execute as user “stabile”:
wget https://github.com/minio/operator/releases/download/v4.2.7/kubectl-minio_4.2.7_linux_amd64 -O kubectl-minio
chmod +x kubectl-minio
sudo mv kubectl-minio /usr/local/bin/
kubectl minio init
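If you want to confirm the operator came up, you can list its pods; the pod names will differ, but they should reach the Running state after a short while:
kubectl get pods -n minio-operator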
The procedure to install the MinIO client is very similar:
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin
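You can verify the client is on your path by asking for its version:
mc --version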
Reference:
https://krew.sigs.k8s.io/docs/user-guide/setup/install/
We install the very nice plug-in manager Krew by executing:
OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
KREW="krew-${OS}_${ARCH}" &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
tar zxvf "${KREW}.tar.gz" && ./"${KREW}" install krew
sudo bash -c "ln -s /home/stabile/.krew/bin/kubectl-krew /usr/local/bin/"
Reference:
https://github.com/minio/operator/issues/420
The Kubernetes CA certificate needs to be added to the system’s trusted certificates for TLS validation to work properly:
sudo cp /etc/kubernetes/pki/ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
Reference:
https://github.com/minio/direct-csi/blob/master/docs/installation.md
We use direct-csi to access the data disks. Install it by executing:
kubectl krew install direct-csi
Make direct-csi available to kubectl:
sudo ln -s /home/stabile/.krew/store/direct-csi/v1.4.6/kubectl-direct_csi /usr/local/bin
Install the driver to the cluster and format the drives:
kubectl direct-csi install --crd
kubectl direct-csi drives format --drives /dev/vdb --nodes minio.*
List the drives being used for MinIO:
kubectl direct-csi drives list
If not all of your drives are listed, repeat the previous format command; you need at least 4 drives in the “Ready” state to proceed.
The MinIO tenant is what actually exposes the S3 service, so naturally we want to install a tenant. First create a namespace for it:
kubectl create namespace minio-tenant-1
We now create the MinIO tenant, which provides the actual S3 services. We want to capture the output, since the admin password is only shown once, upon tenant creation. However, MinIO writes the password directly to the TTY, which prevents capturing it by simply piping stdout; presumably this is a security measure. We work around this with the script utility, since we really need the password.
script -c "kubectl minio tenant create minio-tenant-1 --servers 4 --volumes 16 --capacity 100G --storage-class direct-csi-min-io --namespace minio-tenant-1" tenant.out
cat tenant.out | sed -n -e 's/.*Password: //p' | grep -oh "\S*" | tee tenant-secret.out 2>&1
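You can check that the password was captured by printing the file (keep it to yourself, obviously):
cat tenant-secret.out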
We need to add an entry to /etc/hosts in order for certificate validation to work properly:
echo "`kubectl describe svc minio --namespace minio-tenant-1 | sed -n -e 's/.*IP:\s*//p'` minio.minio-tenant-1.svc.cluster.local" | sudo tee -a /etc/hosts
Now configure mc with the password we put in “tenant-secret.out” above:
mc alias set minio/ https://minio.minio-tenant-1.svc.cluster.local admin <tenant-secret.out
If this command fails, please wait a few minutes and try again.
Then test that mc actually works:
mc admin info minio
mc mb minio/test-bucket
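As a final end-to-end test, you can copy a small file into the new bucket and list its contents; the file name here is just an example:
echo "hello from MinIO" > hello.txt
mc cp hello.txt minio/test-bucket/
mc ls minio/test-bucket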
To access the operator web UI we can temporarily (or permanently in a start-up script) start a proxy:
kubectl minio proxy -n minio-operator
To access the tenant console web UI we can temporarily (or permanently in a start-up script) port-forward traffic to the service:
kubectl port-forward service/minio-tenant-1-console 9443:9443 --namespace minio-tenant-1 --address 0.0.0.0
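If you want the console to stay reachable after you close your ssh session, one simple approach (adapt it to your own start-up mechanism) is to run the same port-forward in the background:
nohup kubectl port-forward service/minio-tenant-1-console 9443:9443 --namespace minio-tenant-1 --address 0.0.0.0 >/dev/null 2>&1 &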
Now point a browser to https://&lt;administration-server-IP-address&gt;:9443 and log in with “admin” and the password found in tenant-secret.out.
After this it’s a good idea to remove the files containing the password:
rm tenant*.out
That’s it. You should now have a working MinIO object storage service. Happy S3’ing!