Deploy HA Vault Cluster with Integrated Storage (Raft)


Published: July 02, 2022 Author: Saad Ali

WARNING! When following this article, improvise if necessary. Your environment may be different from mine. I am not responsible if you screw up!

In a production environment in the Cloud, you can use a variety of tools to make Vault highly available and use auto-unseal methods native to Cloud services. If you are not running in the Cloud, or don't want to leverage Cloud services to auto-unseal Vault, you can also auto-unseal using the Transit Secrets Engine.

This tutorial sets up a Vault cluster that doesn't use any of the above auto-unseal methods. Instead, it uses a Bash script run by a Kubernetes CronJob to make sure that Vault is initialized, unsealed, and that all nodes are part of the cluster. This method doesn't require any manual intervention. The root token and the unseal keys are saved as a Kubernetes secret.


Setup a Kind Cluster

I'll use the following Kind cluster configuration to start with:

# four node cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: vault
networking:
  ipFamily: ipv4
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.11.0.0/16"
nodes:
  - role: control-plane
    image: kindest/node:v1.22.9
    kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        name: "control-plane"
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
    extraPortMappings:
    - { containerPort: 80, hostPort: 80, protocol: TCP }
    - { containerPort: 443, hostPort: 443, protocol: TCP }

  - role: worker
    image: kindest/node:v1.22.9
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        name: "worker-01"

  - role: worker
    image: kindest/node:v1.22.9
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        name: "worker-02"

  - role: worker
    image: kindest/node:v1.22.9
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        name: "worker-03"

Install Ingress Controller

Install ingress NGINX Controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/helm-chart-4.1.4/deploy/static/provider/kind/deploy.yaml
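
You can optionally wait for the ingress controller pods to be ready before continuing:

kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s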

Install Cert-Manager

Install cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
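
Give cert-manager a moment to come up; you can wait for its pods with:

kubectl wait --namespace cert-manager --for=condition=ready pod --all --timeout=120s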

Since we are running in Kind, we will use a self-signed cluster certificate issuer, defined in selfsigned-cluster-issuer.yaml as follows:

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}

Apply it with:

kubectl apply -f selfsigned-cluster-issuer.yaml
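
You can confirm the issuer exists with:

kubectl get clusterissuer selfsigned-issuer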

Setup Vault

We'll use a combination of Kustomize and Helm known as kustomized-helm, following the approach described by Mickaël Canévet in his blog post, where he used it with ArgoCD.

I'll be using some configuration files from my GitHub repository NIXKnight/ArgoCD-Demo.

In a clean directory, create the following structure. These files are also present in the GitHub repository linked above, so you can copy them from there.

|-- vault-bootstrap
|   |-- cronjob.yaml
|   |-- kustomization.yaml
|   |-- rolebinding.yaml
|   |-- role.yaml
|   |-- serviceaccount.yaml
|   `-- vault-bootstrap.sh
|-- kustomization.yaml
`-- values.yaml

The top-level kustomization.yaml file uses the vault-bootstrap directory as a base and additionally pulls in the resources in manifest.yaml, which we will generate with the helm template command.
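
For reference, a minimal top-level kustomization.yaml along these lines could look like the following sketch (the actual file is in the repository, so prefer that version):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - vault-bootstrap
  - manifest.yaml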

The values.yaml file is a Helm values file present in the repository. We will slightly modify it as follows:

---
global:
  enabled: true
ui:
  enabled: true
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 120
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  standalone:
    enabled: false
  ingress:
    enabled: true
    ingressClassName: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: selfsigned-issuer
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    hosts:
      - host: vault.local.nixknight.pk
    tls:
      - secretName: vault-tls-certificate
        hosts:
          - vault.local.nixknight.pk

In the above directory structure, the vault-bootstrap directory contains all the Kubernetes resources required to run the vault-bootstrap.sh script as a Kubernetes CronJob. The script uses kubectl to query the cluster for the number of Vault replicas. If Vault is not initialized, it initializes it and stores the init JSON output (which contains the root token and unseal keys) as a Kubernetes secret. It then uses that secret to unseal Vault and join all Raft nodes to the leader.

Add the Helm repository for Vault:

helm repo add hashicorp https://helm.releases.hashicorp.com
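
Then refresh your local chart cache:

helm repo update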

Create a namespace for Vault:

kubectl create ns vault

Switch context to the namespace:

kubectl config set-context --current --namespace=vault

Instead of installing via the helm install command, we will generate manifest.yaml using helm template, providing it our Helm values file:

helm template vault hashicorp/vault --namespace vault -f values.yaml --include-crds > manifest.yaml

At this point, we run:

kustomize build

This builds the kustomization target defined by kustomization.yaml in the current directory, combining both the Helm-generated manifest and the vault-bootstrap manifests as follows. This is the final manifest that gets applied to Kubernetes:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector
  namespace: vault
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-discovery-role
  namespace: vault
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - update
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-bootstrap
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - update
  - get
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-clusterrole
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  verbs:
  - get
  - list
  - watch
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-discovery-rolebinding
  namespace: vault
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-discovery-role
subjects:
- kind: ServiceAccount
  name: vault
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-bootstrap
subjects:
- kind: ServiceAccount
  name: vault-bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vault-agent-injector-clusterrole
subjects:
- kind: ServiceAccount
  name: vault-agent-injector
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault
  namespace: vault
---
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true

    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }

    storage "raft" {
      path = "/vault/data"
    }

    service_registration "kubernetes" {}
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-config
  namespace: vault
---
apiVersion: v1
data:
  vault-bootstrap.sh: |
    #!/usr/bin/env bash

    # +------------------------------------------------------------------------------------------+
    # + FILE: vault-bootstrap.sh                                                                 +
    # +                                                                                          +
    # + AUTHOR: Saad Ali (https://github.com/NIXKnight)                                          +
    # +------------------------------------------------------------------------------------------+

    VAULT_INIT_OUTPUT_FILE=/tmp/vault_init
    VAULT_INIT_K8S_SECRET_FILE=/tmp/vault-init-secret.yaml

    # Get total number of pods in Vault StatefulSet
    VAULT_PODS_IN_STATEFULSET=$(expr $(kubectl get statefulsets -o json | jq '.items[0].spec.replicas') - 1)

    # Setup all Vault pods in an array
    VAULT_PODS=($(seq --format='vault-%0g' --separator=" " 0 $VAULT_PODS_IN_STATEFULSET))

    # Raft leaders and followers
    VAULT_LEADER=${VAULT_PODS[0]}
    VAULT_FOLLOWERS=("${VAULT_PODS[@]:1}")

    # Wait for pod to be ready
    function waitForPod {
      while [[ $(kubectl get pods $1 -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]] ; do
        echo "Waiting for pod $1 to be ready..."
        sleep 1
      done
    }

    # Initialize Vault
    function vaultOperatorInit {
      waitForPod $1
      kubectl exec $1 -c vault -- vault operator init -format "json" > $VAULT_INIT_OUTPUT_FILE
    }

    # Create Kubernetes secret for Vault Unseal Keys and Root Token
    function createVaultK8SInitSecret {
      cat <<EOF > $1
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: vault-init
    type: Opaque
    data:
    $(
      local VAULT_INIT_JSON_BASE64=$(cat $2 | base64 -w0)
      echo -e "  VAULT_INIT_JSON: $VAULT_INIT_JSON_BASE64"
    )
    EOF
      kubectl apply -f $1
    }

    function unsealVault {
      local VAULT_INIT_JSON_BASE64_DECODED=$(kubectl get secrets/vault-init --template={{.data.VAULT_INIT_JSON}} | base64 -d)
      for VAULT_UNSEAL_KEY in $(jq -r '.unseal_keys_b64[]' <<< ${VAULT_INIT_JSON_BASE64_DECODED}) ; do
        waitForPod $1
        echo -e "Unsealing Vault on pod $1"
        sleep 5
        kubectl exec -it $1 -- vault operator unseal $VAULT_UNSEAL_KEY
      done
    }

    function joinRaftLeader() {
      waitForPod $1
      kubectl exec $1 -- vault operator raft join http://$VAULT_LEADER.vault-internal:8200

    }

    # Get Vault initialization status
    waitForPod $VAULT_LEADER
    VAULT_INIT_STATUS=$(kubectl exec $VAULT_LEADER -c vault -- vault status -format "json" | jq --raw-output '.initialized')

    # If vault initialized, check if it sealed. If vault is sealed, unseal it.
    # If vault is uninitialized, initialize it, create vault secret in Kubernetes
    # and unseal it. Do it all on the raft leader pod (this will be vault-0).
    if $VAULT_INIT_STATUS ; then
      VAULT_SEAL_STATUS=$(kubectl exec $VAULT_LEADER -c vault -- vault status -format "json" | jq --raw-output '.sealed')
      echo -e "Vault is already initialized on $VAULT_LEADER"
      if $VAULT_SEAL_STATUS ; then
        echo -e "Vault sealed on $VAULT_LEADER"
        unsealVault $VAULT_LEADER
      fi
    else
      echo -e "Initializing Vault on $VAULT_LEADER"
      vaultOperatorInit $VAULT_LEADER
      echo -e "Creating Vault Kubernetes Secret vault-init"
      createVaultK8SInitSecret $VAULT_INIT_K8S_SECRET_FILE $VAULT_INIT_OUTPUT_FILE
      unsealVault $VAULT_LEADER
    fi

    # For all other pods, check unseal status and check if the pod is part
    # of the raft cluster. If either condition is false, then do the needful.
    for POD in "${VAULT_FOLLOWERS[@]}" ; do
      VAULT_TOKEN=$(kubectl get secrets/vault-init --template={{.data.VAULT_INIT_JSON}} | base64 -d | jq -r '.root_token')
      RAFT_NODES_JSON=$(kubectl exec $VAULT_LEADER -c vault -- /bin/sh -c "VAULT_TOKEN=$VAULT_TOKEN vault operator raft list-peers -format \"json\"")
      RAFT_NODES=$(echo $RAFT_NODES_JSON | jq '.data.config.servers[].address' -r)
      waitForPod $POD
      VAULT_SEAL_STATUS=$(kubectl exec $POD -c vault -- vault status -format "json" | jq --raw-output '.sealed')
      if [[ ${RAFT_NODES[@]} =~ $POD ]] ; then
        echo -e "Pod $POD is already part of raft cluster"
      else
        joinRaftLeader $POD
      fi
      if $VAULT_SEAL_STATUS ; then
        echo -e "Vault sealed on $POD"
        unsealVault $POD
      fi
    done
kind: ConfigMap
metadata:
  name: vault-bootstrap-kb2cg72675
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-active
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
    vault-active: "true"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-svc
  namespace: vault
spec:
  ports:
  - name: https
    port: 443
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault-agent-injector
    component: webhook
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-internal
  namespace: vault
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-standby
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
    vault-active: "false"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-ui
    helm.sh/chart: vault-0.20.1
  name: vault-ui
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
    component: webhook
  name: vault-agent-injector
  namespace: vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault-agent-injector
      component: webhook
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: vault
        app.kubernetes.io/name: vault-agent-injector
        component: webhook
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/instance: vault
                app.kubernetes.io/name: vault-agent-injector
                component: webhook
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - agent-inject
        - 2>&1
        env:
        - name: AGENT_INJECT_LISTEN
          value: :8080
        - name: AGENT_INJECT_LOG_LEVEL
          value: info
        - name: AGENT_INJECT_VAULT_ADDR
          value: http://vault.vault.svc:8200
        - name: AGENT_INJECT_VAULT_AUTH_PATH
          value: auth/kubernetes
        - name: AGENT_INJECT_VAULT_IMAGE
          value: hashicorp/vault:1.10.3
        - name: AGENT_INJECT_TLS_AUTO
          value: vault-agent-injector-cfg
        - name: AGENT_INJECT_TLS_AUTO_HOSTS
          value: vault-agent-injector-svc,vault-agent-injector-svc.vault,vault-agent-injector-svc.vault.svc
        - name: AGENT_INJECT_LOG_FORMAT
          value: standard
        - name: AGENT_INJECT_REVOKE_ON_SHUTDOWN
          value: "false"
        - name: AGENT_INJECT_CPU_REQUEST
          value: 250m
        - name: AGENT_INJECT_CPU_LIMIT
          value: 500m
        - name: AGENT_INJECT_MEM_REQUEST
          value: 64Mi
        - name: AGENT_INJECT_MEM_LIMIT
          value: 128Mi
        - name: AGENT_INJECT_DEFAULT_TEMPLATE
          value: map
        - name: AGENT_INJECT_TEMPLATE_CONFIG_EXIT_ON_RETRY_FAILURE
          value: "true"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: hashicorp/vault-k8s:0.16.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTPS
          initialDelaySeconds: 5
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 5
        name: sidecar-injector
        readinessProbe:
          failureThreshold: 2
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTPS
          initialDelaySeconds: 5
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 5
        securityContext:
          allowPrivilegeEscalation: false
      hostNetwork: false
      securityContext:
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 100
      serviceAccountName: vault-agent-injector
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
  name: vault
  namespace: vault
spec:
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault
      component: server
  serviceName: vault-internal
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: vault
        app.kubernetes.io/name: vault
        component: server
        helm.sh/chart: vault-0.20.1
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/instance: vault
                app.kubernetes.io/name: vault
                component: server
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - "cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;\n[
          -n \"${HOST_IP}\" ] && sed -Ei \"s|HOST_IP|${HOST_IP?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${POD_IP}\" ] && sed -Ei \"s|POD_IP|${POD_IP?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${HOSTNAME}\" ] && sed -Ei \"s|HOSTNAME|${HOSTNAME?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${API_ADDR}\" ] && sed -Ei \"s|API_ADDR|${API_ADDR?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${TRANSIT_ADDR}\" ] && sed -Ei \"s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g\"
          /tmp/storageconfig.hcl;\n[ -n \"${RAFT_ADDR}\" ] && sed -Ei \"s|RAFT_ADDR|${RAFT_ADDR?}|g\"
          /tmp/storageconfig.hcl;\n/usr/local/bin/docker-entrypoint.sh vault server
          -config=/tmp/storageconfig.hcl \n"
        command:
        - /bin/sh
        - -ec
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: VAULT_K8S_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: VAULT_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: VAULT_ADDR
          value: http://127.0.0.1:8200
        - name: VAULT_API_ADDR
          value: http://$(POD_IP):8200
        - name: SKIP_CHOWN
          value: "true"
        - name: SKIP_SETCAP
          value: "true"
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: VAULT_CLUSTER_ADDR
          value: https://$(HOSTNAME).vault-internal:8201
        - name: HOME
          value: /home/vault
        image: hashicorp/vault:1.10.3
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - sleep 5 && kill -SIGTERM $(pidof vault)
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /v1/sys/health?standbyok=true
            port: 8200
            scheme: HTTP
          initialDelaySeconds: 120
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        name: vault
        ports:
        - containerPort: 8200
          name: http
        - containerPort: 8201
          name: https-internal
        - containerPort: 8202
          name: http-rep
        readinessProbe:
          failureThreshold: 2
          httpGet:
            path: /v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204
            port: 8200
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
        securityContext:
          allowPrivilegeEscalation: false
        volumeMounts:
        - mountPath: /vault/data
          name: data
        - mountPath: /vault/config
          name: config
        - mountPath: /home/vault
          name: home
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 100
      serviceAccountName: vault
      terminationGracePeriodSeconds: 10
      volumes:
      - configMap:
          name: vault-config
        name: config
      - emptyDir: {}
        name: home
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-bootstrap
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - command:
            - /vault-bootstrap/vault-bootstrap.sh
            image: bitnami/kubectl:1.22-debian-10
            imagePullPolicy: IfNotPresent
            name: vault-bootstrap
            volumeMounts:
            - mountPath: /vault-bootstrap
              name: vault-bootstrap
          restartPolicy: OnFailure
          serviceAccountName: vault-bootstrap
          volumes:
          - configMap:
              defaultMode: 493
              name: vault-bootstrap-kb2cg72675
            name: vault-bootstrap
  schedule: '*/2 * * * *'
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault
      component: server
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-issuer
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
spec:
  ingressClassName: nginx
  rules:
  - host: vault.local.nixknight.pk
    http:
      paths:
      - backend:
          service:
            name: vault-active
            port:
              number: 8200
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - vault.local.nixknight.pk
    secretName: vault-tls-certificate
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  name: vault-server-test
  namespace: vault
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "Checking for sealed info in 'vault status' output"
      ATTEMPTS=10
      n=0
      until [ "$n" -ge $ATTEMPTS ]
      do
        echo "Attempt" $n...
        vault status -format yaml | grep -E '^sealed: (true|false)' && break
        n=$((n+1))
        sleep 5
      done
      if [ $n -ge $ATTEMPTS ]; then
        echo "timed out looking for sealed info in 'vault status' output"
        exit 1
      fi

      exit 0
    env:
    - name: VAULT_ADDR
      value: http://vault.vault.svc:8200
    image: hashicorp/vault:1.10.3
    imagePullPolicy: IfNotPresent
    name: vault-server-test
    volumeMounts: null
  restartPolicy: Never
  volumes: null
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-cfg
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    caBundle: ""
    service:
      name: vault-agent-injector-svc
      namespace: vault
      path: /mutate
  failurePolicy: Ignore
  matchPolicy: Exact
  name: vault.hashicorp.com
  objectSelector:
    matchExpressions:
    - key: app.kubernetes.io/name
      operator: NotIn
      values:
      - vault-agent-injector
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
  sideEffects: None
  timeoutSeconds: 30

Notice that in this output, all resources generated by the Helm chart have a namespace defined in their metadata. That is because we used the helm template command with --namespace vault while generating the manifest.yaml file. However, the vault-bootstrap resources built and merged by kustomize do not have a namespace defined in their metadata. Since we have already switched our context to the vault namespace, we do not need to define the namespace explicitly: when we apply these resources, Kubernetes automatically sets the namespace (where not defined) to the current namespace context. This also means that if you switch context before applying the manifests, you do not strictly need --namespace vault when generating the manifest with helm template.

Let's apply this and see what happens:

kustomize build | kubectl apply -f -

After a while, you will see all resources fully created and the vault-bootstrap container running for the first time, initializing Vault, unsealing it and joining the nodes to the cluster:

kubectl get all
NAME                                        READY   STATUS      RESTARTS   AGE
pod/vault-0                                 1/1     Running     0          2m8s
pod/vault-1                                 1/1     Running     0          2m8s
pod/vault-2                                 1/1     Running     0          2m8s
pod/vault-agent-injector-5b5889ffb4-86q7b   1/1     Running     0          2m8s
pod/vault-bootstrap-27612546--1-gv874       1/1     Running     0          10s
pod/vault-server-test                       0/1     Completed   0          2m8s

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/vault                      ClusterIP   10.11.149.250   <none>        8200/TCP,8201/TCP   2m9s
service/vault-active               ClusterIP   10.11.20.119    <none>        8200/TCP,8201/TCP   2m9s
service/vault-agent-injector-svc   ClusterIP   10.11.206.107   <none>        443/TCP             2m9s
service/vault-internal             ClusterIP   None            <none>        8200/TCP,8201/TCP   2m9s
service/vault-standby              ClusterIP   10.11.195.72    <none>        8200/TCP,8201/TCP   2m8s
service/vault-ui                   ClusterIP   10.11.46.22     <none>        8200/TCP            2m8s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/vault-agent-injector   1/1     1            1           2m8s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/vault-agent-injector-5b5889ffb4   1         1         1       2m8s

NAME                     READY   AGE
statefulset.apps/vault   3/3     2m8s

NAME                            SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/vault-bootstrap   */2 * * * *   False     1        10s             2m8s

NAME                                 COMPLETIONS   DURATION   AGE
job.batch/vault-bootstrap-27612546   0/1           10s        10s

You can check the logs of the vault-bootstrap-27612546--1-gv874 pod using:

kubectl logs --follow pod/vault-bootstrap-27612546--1-gv874

Per the CronJob schedule, the script runs every 2 minutes to verify that the Vault cluster is operational.

At this point, Vault with Raft is ready to use!
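
As a final sanity check, you can read the root token back from the vault-init secret and list the Raft peers, mirroring what the bootstrap script does internally (this assumes kubectl and jq are available on your machine):

VAULT_TOKEN=$(kubectl get secrets/vault-init --template={{.data.VAULT_INIT_JSON}} | base64 -d | jq -r '.root_token')
kubectl exec vault-0 -c vault -- /bin/sh -c "VAULT_TOKEN=$VAULT_TOKEN vault operator raft list-peers"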
