Set Up Kubernetes Authentication in Vault


Published: August 04, 2022 | Author: Saad Ali

WARNING! Follow this article at your own risk and improvise where necessary. Your environment may differ from mine. I am not responsible if you screw up!

This post is a continuation of my previous post, Deploy HA Vault Cluster with Integrated Storage (Raft). We will use Ansible to configure Vault so that pods can read Vault secrets using their Kubernetes ServiceAccounts.

This tutorial uses a Bash script coupled with an Ansible playbook and an Ansible extravars file, all run inside a Kubernetes CronJob. The job makes sure that the Vault configuration, policies, and roles defined via Ansible stay in sync with the running Vault cluster. We will use the Vault root token stored as a Kubernetes Secret, as demonstrated in my previous post.
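If you want to have a look at that token yourself, you can pull it out of the vault-init Secret created by the bootstrap job. This is just a quick sanity check, assuming jq is available on your machine:

# Decode the init JSON stored by vault-bootstrap and print the root token
kubectl -n vault get secret vault-init -o jsonpath='{.data.VAULT_INIT_JSON}' | base64 -d | jq -r '.root_token'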

Set Up the Vault ACL Directory Structure

I will reuse the existing directory structure from my previous blog post and add a new directory named vault-acl, pulling files from my repo. The updated directory tree looks as follows:

|-- vault-acl
|   |-- ansible
|   |   |-- extravars.yaml
|   |   |-- vault-acls-and-roles-playbook.yaml
|   |   `-- vault-acls-and-roles-wrapper.sh
|   |-- cronjob.yaml
|   |-- kustomization.yaml
|   |-- rolebinding.yaml
|   |-- role.yaml
|   `-- serviceaccount.yaml
|-- vault-bootstrap
|   |-- cronjob.yaml
|   |-- kustomization.yaml
|   |-- rolebinding.yaml
|   |-- role.yaml
|   |-- serviceaccount.yaml
|   `-- vault-bootstrap.sh
|-- kustomization.yaml
`-- values.yaml

As with the vault-bootstrap directory, the vault-acl directory contains standard Kubernetes resources. The CronJob uses the nixknight/kube-utils container image; you can have a look at its source here.

Ansible Playbook

The Ansible playbook vault-acls-and-roles-playbook.yaml under vault-acl/ansible/ is run via vault-acls-and-roles-wrapper.sh. The wrapper script exists to fetch and export VAULT_TOKEN and VAULT_ADDR in the CronJob pod environment so that we don't have to pass both values to every Ansible playbook task; the modules from ansible-modules-hashivault read these two variables from the environment automatically. The playbook reads its variables from extravars.yaml and ensures the following:

  • Enable the Vault Kubernetes authentication method.
  • Configure Vault to talk to Kubernetes.
  • Enable the Vault kv-v2 secrets engine.
  • Configure policies and roles.

As this runs via a Kubernetes CronJob, you can make changes to these tasks and apply them via kustomized-helm, which will update the ConfigMap (defined in the vault-acl/kustomization.yaml file) used by the CronJob.
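Kustomize appends a content hash to generated ConfigMap names (you will see vault-acl-7hm229f65c in the manifest below) and rewrites the CronJob's volume reference to match, so editing any of the Ansible files rolls out a fresh ConfigMap on the next apply. You can list the generated ConfigMaps to see this in action:

# Each content change yields a new hash-suffixed ConfigMap
kubectl -n vault get configmaps | grep vault-acl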

Currently, the playbook configures a policy and a role in Vault that grant ArgoCD read-only access to its secrets.
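Once the job has run, you can verify both objects from any shell where VAULT_ADDR and a sufficiently privileged VAULT_TOKEN are exported, the same way the wrapper script sets them:

# Show the rendered ArgoCD policy and the Kubernetes auth role bound to it
vault policy read argocd
vault read auth/kubernetes/role/argocd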

Generating and Applying the Manifest

The outermost kustomization.yaml has been updated to include the vault-acl directory as a base. Let's generate the Vault resources through kustomized-helm as follows:

helm template vault hashicorp/vault --namespace vault -f values.yaml --include-crds > manifest.yaml && kustomize build

This generates the final manifest below, which later gets applied to Kubernetes:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector
  namespace: vault
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-acl
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-discovery-role
  namespace: vault
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - update
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-acl
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-bootstrap
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - update
  - get
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-clusterrole
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  verbs:
  - get
  - list
  - watch
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-discovery-rolebinding
  namespace: vault
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-discovery-role
subjects:
- kind: ServiceAccount
  name: vault
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-acl
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-acl
subjects:
- kind: ServiceAccount
  name: vault-acl
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-bootstrap
subjects:
- kind: ServiceAccount
  name: vault-bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vault-agent-injector-clusterrole
subjects:
- kind: ServiceAccount
  name: vault-agent-injector
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault
  namespace: vault
---
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true

    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }

    storage "raft" {
      path = "/vault/data"
    }

    service_registration "kubernetes" {}
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-config
  namespace: vault
---
apiVersion: v1
data:
  extravars.yaml: |
    kubernetes_host: "https://kubernetes.default.svc.cluster.local"
    kubernetes_ca_cert: "{{lookup('ansible.builtin.file', '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt') }}"

    vault_secret_backends:
      - name: "kv_secrets"
        backend: "kv-v2"

    vault_default_capabilities: 'capabilities = [ "read", "list" ]'

    vault_roles_and_policies:
      - role:
          name: "argocd"
          bound_service_account_names:
            - "argocd-repo-server"
          bound_service_account_namespaces:
            - "argoproj"
        policies:
          - name: "argocd"
            rules: |
              path "kv_secrets/data/apps/" {
                {{ vault_default_capabilities }}
              }
              path "kv_secrets/data/apps/*" {
                {{ vault_default_capabilities }}
              }
  vault-acls-and-roles-playbook.yaml: |
    ---
    - name: Configure Vault Authentication and ACLs
      connection: local
      hosts: localhost
      gather_facts: yes
      become: False
      vars_files:
        - "{{ playbook_dir }}/extravars.yaml"
      tasks:
        - name: Enable Vault Kubernetes Authentication Method
          hashivault_auth_method:
            method_type: "kubernetes"

        - name: Configure Vault to Talk to Kubernetes
          hashivault_k8s_auth_config:
            kubernetes_host: "{{ kubernetes_host }}"
            kubernetes_ca_cert: "{{ kubernetes_ca_cert }}"


        - name: Enable Vault kv-v2 Secret Engine
          hashivault_secret_engine:
            name: "{{ item.name }}"
            backend: "{{ item.backend }}"
          with_items: "{{ vault_secret_backends }}"

        - name: Write Policies to Vault
          hashivault_policy:
            name: "{{ item.name }}"
            rules: "{{ item.rules }}"
            state: "present"
          with_items: "{{ vault_roles_and_policies | map(attribute='policies') }}"

        - name: Write Roles to Vault
          hashivault_k8s_auth_role:
            name: "{{ item.role.name }}"
            policies: "{{ item.policies | map(attribute='name') }}"
            bound_service_account_names: "{{ item.role.bound_service_account_names }}"
            bound_service_account_namespaces: "{{ item.role.bound_service_account_namespaces }}"
          with_items: "{{ vault_roles_and_policies }}"
  vault-acls-and-roles-wrapper.sh: |
    #!/usr/bin/env bash

    # +------------------------------------------------------------------------------------------+
    # + FILE: vault-acls-and-roles-wrapper.sh                                                    +
    # +                                                                                          +
    # + AUTHOR: Saad Ali (https://github.com/NIXKnight)                                          +
    # +------------------------------------------------------------------------------------------+

    export K8S_API_SERVER="https://$KUBERNETES_SERVICE_HOST"
    export K8S_SERVICEACCOUNT_DIR="/var/run/secrets/kubernetes.io/serviceaccount"
    export K8S_NAMESPACE="$(cat $K8S_SERVICEACCOUNT_DIR/namespace)"
    export K8S_AUTH_TOKEN="$(cat $K8S_SERVICEACCOUNT_DIR/token)"
    export K8S_AUTH_SSL_CA_CERT="$K8S_SERVICEACCOUNT_DIR/ca.crt"

    export VAULT_ADDR="http://$VAULT_SERVICE_HOST:$VAULT_SERVICE_PORT"
    export VAULT_TOKEN=$(curl --cacert $K8S_AUTH_SSL_CA_CERT --header "Authorization: Bearer $K8S_AUTH_TOKEN" -X GET $K8S_API_SERVER/api/v1/namespaces/vault/secrets/vault-init | jq -r ".data.VAULT_INIT_JSON" |base64 -d | jq -r '.root_token')

    export ANSIBLE_STDOUT_CALLBACK="debug"
    export ANSIBLE_CALLBACKS_ENABLED="profile_tasks"

    cd /vault-acl/
    ansible-playbook vault-acls-and-roles-playbook.yaml -vv
kind: ConfigMap
metadata:
  name: vault-acl-7hm229f65c
---
apiVersion: v1
data:
  vault-bootstrap.sh: |
    #!/usr/bin/env bash

    # +------------------------------------------------------------------------------------------+
    # + FILE: vault-bootstrap.sh                                                                 +
    # +                                                                                          +
    # + AUTHOR: Saad Ali (https://github.com/NIXKnight)                                          +
    # +------------------------------------------------------------------------------------------+

    VAULT_INIT_OUTPUT_FILE=/tmp/vault_init
    VAULT_INIT_K8S_SECRET_FILE=/tmp/vault-init-secret.yaml

    # Get total number of pods in Vault StatefulSet
    VAULT_PODS_IN_STATEFULSET=$(expr $(kubectl get statefulsets -o json | jq '.items[0].spec.replicas') - 1)

    # Setup all Vault pods in an array
    VAULT_PODS=($(seq --format='vault-%0g' --separator=" " 0 $VAULT_PODS_IN_STATEFULSET))

    # Raft leaders and followers
    VAULT_LEADER=${VAULT_PODS[0]}
    VAULT_FOLLOWERS=("${VAULT_PODS[@]:1}")

    # Wait for pod to be ready
    function waitForPod {
      while [[ $(kubectl get pods $1 -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]] ; do
        echo "Waiting for pod $1 to be ready..."
        sleep 1
      done
    }

    # Initialize Vault
    function vaultOperatorInit {
      waitForPod $1
      kubectl exec $1 -c vault -- vault operator init -format "json" > $VAULT_INIT_OUTPUT_FILE
    }

    # Create Kubernetes secret for Vault Unseal Keys and Root Token
    function createVaultK8SInitSecret {
      cat <<EOF > $1
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: vault-init
    type: Opaque
    data:
    $(
      local VAULT_INIT_JSON_BASE64=$(cat $2 | base64 -w0)
      echo -e "  VAULT_INIT_JSON: $VAULT_INIT_JSON_BASE64"
    )
    EOF
      kubectl apply -f $1
    }

    function unsealVault {
      local VAULT_INIT_JSON_BASE64_DECODED=$(kubectl get secrets/vault-init --template={{.data.VAULT_INIT_JSON}} | base64 -d)
      for VAULT_UNSEAL_KEY in $(jq -r '.unseal_keys_b64[]' <<< ${VAULT_INIT_JSON_BASE64_DECODED}) ; do
        waitForPod $1
        echo -e "Unsealing Vault on pod $1"
        sleep 5
        kubectl exec -it $1 -- vault operator unseal $VAULT_UNSEAL_KEY
      done
    }

    function joinRaftLeader() {
      waitForPod $1
      kubectl exec $1 -- vault operator raft join http://$VAULT_LEADER.vault-internal:8200

    }

    # Get Vault initialization status
    waitForPod $VAULT_LEADER
    VAULT_INIT_STATUS=$(kubectl exec $VAULT_LEADER -c vault -- vault status -format "json" | jq --raw-output '.initialized')

    # If vault initialized, check if it sealed. If vault is sealed, unseal it.
    # If vault is uninitialized, initialize it, create vault secret in Kubernetes
    # and unseal it. Do it all on the raft leader pod (this will be vault-0).
    if $VAULT_INIT_STATUS ; then
      VAULT_SEAL_STATUS=$(kubectl exec $VAULT_LEADER -c vault -- vault status -format "json" | jq --raw-output '.sealed')
      echo -e "Vault is already initialized on $VAULT_LEADER"
      if $VAULT_SEAL_STATUS ; then
        echo -e "Vault sealed on $VAULT_LEADER"
        unsealVault $VAULT_LEADER
      fi
    else
      echo -e "Initializing Vault on $VAULT_LEADER"
      vaultOperatorInit $VAULT_LEADER
      echo -e "Creating Vault Kubernetes Secret vault-init"
      createVaultK8SInitSecret $VAULT_INIT_K8S_SECRET_FILE $VAULT_INIT_OUTPUT_FILE
      unsealVault $VAULT_LEADER
    fi

    # For all other pods, check unseal status and check if the pod is part
    # of the raft cluster. If either condition is false, then do the needful.
    for POD in "${VAULT_FOLLOWERS[@]}" ; do
      VAULT_TOKEN=$(kubectl get secrets/vault-init --template={{.data.VAULT_INIT_JSON}} | base64 -d | jq -r '.root_token')
      RAFT_NODES_JSON=$(kubectl exec $VAULT_LEADER -c vault -- /bin/sh -c "VAULT_TOKEN=$VAULT_TOKEN vault operator raft list-peers -format \"json\"")
      RAFT_NODES=$(echo $RAFT_NODES_JSON | jq '.data.config.servers[].address' -r)
      waitForPod $POD
      VAULT_SEAL_STATUS=$(kubectl exec $POD -c vault -- vault status -format "json" | jq --raw-output '.sealed')
      if [[ ${RAFT_NODES[@]} =~ $POD ]] ; then
        echo -e "Pod $POD is already part of raft cluster"
      else
        joinRaftLeader $POD
      fi
      if $VAULT_SEAL_STATUS ; then
        echo -e "Vault sealed on $POD"
        unsealVault $POD
      fi
    done
kind: ConfigMap
metadata:
  name: vault-bootstrap-kb2cg72675
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-active
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
    vault-active: "true"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-svc
  namespace: vault
spec:
  ports:
  - name: https
    port: 443
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault-agent-injector
    component: webhook
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-internal
  namespace: vault
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault-standby
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  - name: https-internal
    port: 8201
    targetPort: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
    vault-active: "false"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-ui
    helm.sh/chart: vault-0.20.1
  name: vault-ui
  namespace: vault
spec:
  ports:
  - name: http
    port: 8200
    targetPort: 8200
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/name: vault
    component: server
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
    component: webhook
  name: vault-agent-injector
  namespace: vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault-agent-injector
      component: webhook
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: vault
        app.kubernetes.io/name: vault-agent-injector
        component: webhook
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/instance: vault
                app.kubernetes.io/name: vault-agent-injector
                component: webhook
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - agent-inject
        - 2>&1
        env:
        - name: AGENT_INJECT_LISTEN
          value: :8080
        - name: AGENT_INJECT_LOG_LEVEL
          value: info
        - name: AGENT_INJECT_VAULT_ADDR
          value: http://vault.vault.svc:8200
        - name: AGENT_INJECT_VAULT_AUTH_PATH
          value: auth/kubernetes
        - name: AGENT_INJECT_VAULT_IMAGE
          value: hashicorp/vault:1.10.3
        - name: AGENT_INJECT_TLS_AUTO
          value: vault-agent-injector-cfg
        - name: AGENT_INJECT_TLS_AUTO_HOSTS
          value: vault-agent-injector-svc,vault-agent-injector-svc.vault,vault-agent-injector-svc.vault.svc
        - name: AGENT_INJECT_LOG_FORMAT
          value: standard
        - name: AGENT_INJECT_REVOKE_ON_SHUTDOWN
          value: "false"
        - name: AGENT_INJECT_CPU_REQUEST
          value: 250m
        - name: AGENT_INJECT_CPU_LIMIT
          value: 500m
        - name: AGENT_INJECT_MEM_REQUEST
          value: 64Mi
        - name: AGENT_INJECT_MEM_LIMIT
          value: 128Mi
        - name: AGENT_INJECT_DEFAULT_TEMPLATE
          value: map
        - name: AGENT_INJECT_TEMPLATE_CONFIG_EXIT_ON_RETRY_FAILURE
          value: "true"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: hashicorp/vault-k8s:0.16.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTPS
          initialDelaySeconds: 5
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 5
        name: sidecar-injector
        readinessProbe:
          failureThreshold: 2
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTPS
          initialDelaySeconds: 5
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 5
        securityContext:
          allowPrivilegeEscalation: false
      hostNetwork: false
      securityContext:
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 100
      serviceAccountName: vault-agent-injector
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
  name: vault
  namespace: vault
spec:
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault
      component: server
  serviceName: vault-internal
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: vault
        app.kubernetes.io/name: vault
        component: server
        helm.sh/chart: vault-0.20.1
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/instance: vault
                app.kubernetes.io/name: vault
                component: server
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - "cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;\n[
          -n \"${HOST_IP}\" ] && sed -Ei \"s|HOST_IP|${HOST_IP?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${POD_IP}\" ] && sed -Ei \"s|POD_IP|${POD_IP?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${HOSTNAME}\" ] && sed -Ei \"s|HOSTNAME|${HOSTNAME?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${API_ADDR}\" ] && sed -Ei \"s|API_ADDR|${API_ADDR?}|g\" /tmp/storageconfig.hcl;\n[
          -n \"${TRANSIT_ADDR}\" ] && sed -Ei \"s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g\"
          /tmp/storageconfig.hcl;\n[ -n \"${RAFT_ADDR}\" ] && sed -Ei \"s|RAFT_ADDR|${RAFT_ADDR?}|g\"
          /tmp/storageconfig.hcl;\n/usr/local/bin/docker-entrypoint.sh vault server
          -config=/tmp/storageconfig.hcl \n"
        command:
        - /bin/sh
        - -ec
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: VAULT_K8S_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: VAULT_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: VAULT_ADDR
          value: http://127.0.0.1:8200
        - name: VAULT_API_ADDR
          value: http://$(POD_IP):8200
        - name: SKIP_CHOWN
          value: "true"
        - name: SKIP_SETCAP
          value: "true"
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: VAULT_CLUSTER_ADDR
          value: https://$(HOSTNAME).vault-internal:8201
        - name: HOME
          value: /home/vault
        image: hashicorp/vault:1.10.3
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - sleep 5 && kill -SIGTERM $(pidof vault)
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /v1/sys/health?standbyok=true
            port: 8200
            scheme: HTTP
          initialDelaySeconds: 120
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        name: vault
        ports:
        - containerPort: 8200
          name: http
        - containerPort: 8201
          name: https-internal
        - containerPort: 8202
          name: http-rep
        readinessProbe:
          failureThreshold: 2
          httpGet:
            path: /v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204
            port: 8200
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        securityContext:
          allowPrivilegeEscalation: false
        volumeMounts:
        - mountPath: /vault/data
          name: data
        - mountPath: /vault/config
          name: config
        - mountPath: /home/vault
          name: home
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 100
      serviceAccountName: vault
      terminationGracePeriodSeconds: 10
      volumes:
      - configMap:
          name: vault-config
        name: config
      - emptyDir: {}
        name: home
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-acl
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - command:
            - /vault-acl/vault-acls-and-roles-wrapper.sh
            image: nixknight/kube-utils:latest
            imagePullPolicy: IfNotPresent
            name: vault-acl
            volumeMounts:
            - mountPath: /vault-acl
              name: vault-acl
          restartPolicy: OnFailure
          serviceAccountName: vault-acl
          volumes:
          - configMap:
              defaultMode: 493
              name: vault-acl-7hm229f65c
            name: vault-acl
  schedule: '*/2 * * * *'
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-bootstrap
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - command:
            - /vault-bootstrap/vault-bootstrap.sh
            image: bitnami/kubectl:1.22-debian-10
            imagePullPolicy: IfNotPresent
            name: vault-bootstrap
            volumeMounts:
            - mountPath: /vault-bootstrap
              name: vault-bootstrap
          restartPolicy: OnFailure
          serviceAccountName: vault-bootstrap
          volumes:
          - configMap:
              defaultMode: 493
              name: vault-bootstrap-kb2cg72675
            name: vault-bootstrap
  schedule: '*/2 * * * *'
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault
      component: server
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-issuer
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.20.1
  name: vault
  namespace: vault
spec:
  ingressClassName: nginx
  rules:
  - host: vault.local.nixknight.pk
    http:
      paths:
      - backend:
          service:
            name: vault-active
            port:
              number: 8200
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - vault.local.nixknight.pk
    secretName: vault-tls-certificate
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  name: vault-server-test
  namespace: vault
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "Checking for sealed info in 'vault status' output"
      ATTEMPTS=10
      n=0
      until [ "$n" -ge $ATTEMPTS ]
      do
        echo "Attempt" $n...
        vault status -format yaml | grep -E '^sealed: (true|false)' && break
        n=$((n+1))
        sleep 5
      done
      if [ $n -ge $ATTEMPTS ]; then
        echo "timed out looking for sealed info in 'vault status' output"
        exit 1
      fi

      exit 0
    env:
    - name: VAULT_ADDR
      value: http://vault.vault.svc:8200
    image: hashicorp/vault:1.10.3
    imagePullPolicy: IfNotPresent
    name: vault-server-test
    volumeMounts: null
  restartPolicy: Never
  volumes: null
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-agent-injector
  name: vault-agent-injector-cfg
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    caBundle: ""
    service:
      name: vault-agent-injector-svc
      namespace: vault
      path: /mutate
  failurePolicy: Ignore
  matchPolicy: Exact
  name: vault.hashicorp.com
  objectSelector:
    matchExpressions:
    - key: app.kubernetes.io/name
      operator: NotIn
      values:
      - vault-agent-injector
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
  sideEffects: None
  timeoutSeconds: 30

Switch your kubectl context to the vault namespace to ensure that any resources without an explicit namespace definition get applied to the vault namespace only:

kubectl config set-context --current --namespace=vault

Let's apply the above manifest and see what happens:

kustomize build | kubectl apply -f -

After a while, you can see all resources in the vault namespace using kubectl get all:

NAME                                        READY   STATUS      RESTARTS        AGE
pod/vault-0                                 1/1     Running     0               12m
pod/vault-1                                 1/1     Running     1 (10m ago)     12m
pod/vault-2                                 1/1     Running     1 (9m50s ago)   12m
pod/vault-acl-27644226--1-xplx5             0/1     Completed   0               5m54s
pod/vault-acl-27644228--1-dxwjx             0/1     Completed   0               3m54s
pod/vault-acl-27644230--1-6xhnk             0/1     Completed   0               114s
pod/vault-agent-injector-5b5889ffb4-2zdnr   1/1     Running     0               12m
pod/vault-bootstrap-27644226--1-qdrb2       0/1     Completed   0               5m54s
pod/vault-bootstrap-27644228--1-qvrnc       0/1     Completed   0               3m54s
pod/vault-bootstrap-27644230--1-6cx2w       0/1     Completed   0               114s
pod/vault-server-test                       0/1     Completed   0               12m

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/vault                      ClusterIP   10.11.3.34      <none>        8200/TCP,8201/TCP   12m
service/vault-active               ClusterIP   10.11.118.149   <none>        8200/TCP,8201/TCP   12m
service/vault-agent-injector-svc   ClusterIP   10.11.233.139   <none>        443/TCP             12m
service/vault-internal             ClusterIP   None            <none>        8200/TCP,8201/TCP   12m
service/vault-standby              ClusterIP   10.11.78.44     <none>        8200/TCP,8201/TCP   12m
service/vault-ui                   ClusterIP   10.11.27.162    <none>        8200/TCP            12m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/vault-agent-injector   1/1     1            1           12m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/vault-agent-injector-5b5889ffb4   1         1         1       12m

NAME                     READY   AGE
statefulset.apps/vault   3/3     12m

NAME                            SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/vault-acl         */2 * * * *   False     0        114s            12m
cronjob.batch/vault-bootstrap   */2 * * * *   False     0        114s            12m

NAME                                 COMPLETIONS   DURATION   AGE
job.batch/vault-acl-27644226         1/1           6s         5m54s
job.batch/vault-acl-27644228         1/1           6s         3m54s
job.batch/vault-acl-27644230         1/1           6s         114s
job.batch/vault-bootstrap-27644226   1/1           32s        5m54s
job.batch/vault-bootstrap-27644228   1/1           4s         3m54s
job.batch/vault-bootstrap-27644230   1/1           5s         114s

The ACL job has already run multiple times. I managed to capture the full log of its first run before running kubectl get all; here is the job output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 3372  100  3372    0     0   548k      0 --:--:-- --:--:-- --:--:--  548k
ansible-playbook [core 2.13.1]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.1.2
  libyaml = True
No config file found; using defaults
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
redirecting (type: callback) ansible.builtin.debug to ansible.posix.debug
redirecting (type: callback) ansible.builtin.debug to ansible.posix.debug
redirecting (type: callback) ansible.builtin.profile_tasks to ansible.posix.profile_tasks
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: vault-acls-and-roles-playbook.yaml ***********************************
1 plays in vault-acls-and-roles-playbook.yaml

PLAY [Configure Vault Authentication and ACLs] *********************************

TASK [Gathering Facts] *********************************************************
task path: /vault-acl/vault-acls-and-roles-playbook.yaml:2
Sunday 24 July 2022  09:04:25 +0000 (0:00:00.016)       0:00:00.016 *********** 
ok: [localhost]
META: ran handlers

TASK [Enable Vault Kubernetes Authentication Method] ***************************
task path: /vault-acl/vault-acls-and-roles-playbook.yaml:10
Sunday 24 July 2022  09:04:26 +0000 (0:00:00.688)       0:00:00.704 *********** 
ok: [localhost] => {
    "changed": false,
    "created": false,
    "rc": 0,
    "state": "enabled"
}

TASK [Configure Vault to Talk to Kubernetes] ***********************************
task path: /vault-acl/vault-acls-and-roles-playbook.yaml:14
Sunday 24 July 2022  09:04:26 +0000 (0:00:00.315)       0:00:01.019 *********** 
changed: [localhost] => {
    "changed": true,
    "keys_updated": [
        "kubernetes_host",
        "kubernetes_ca_cert"
    ],
    "rc": 0
}

TASK [Enable Vault kv-v2 Secret Engine] ****************************************
task path: /vault-acl/vault-acls-and-roles-playbook.yaml:20
Sunday 24 July 2022  09:04:27 +0000 (0:00:00.360)       0:00:01.380 *********** 
changed: [localhost] => (item={'name': 'kv_secrets', 'backend': 'kv-v2'}) => {
    "ansible_loop_var": "item",
    "changed": true,
    "created": true,
    "item": {
        "backend": "kv-v2",
        "name": "kv_secrets"
    },
    "rc": 0
}

TASK [Write Policies to Vault] *************************************************
task path: /vault-acl/vault-acls-and-roles-playbook.yaml:26
Sunday 24 July 2022  09:04:27 +0000 (0:00:00.361)       0:00:01.742 *********** 
changed: [localhost] => (item={'name': 'argocd', 'rules': 'path "kv_secrets/data/apps/" {\n  capabilities = [ "read", "list" ]\n}\npath "kv_secrets/data/apps/*" {\n  capabilities = [ "read", "list" ]\n}\n'}) => {
    "ansible_loop_var": "item",
    "changed": true,
    "item": {
        "name": "argocd",
        "rules": "path \"kv_secrets/data/apps/\" {\n  capabilities = [ \"read\", \"list\" ]\n}\npath \"kv_secrets/data/apps/*\" {\n  capabilities = [ \"read\", \"list\" ]\n}\n"
    },
    "rc": 0
}

TASK [Write Roles to Vault] ****************************************************
task path: /vault-acl/vault-acls-and-roles-playbook.yaml:33
Sunday 24 July 2022  09:04:27 +0000 (0:00:00.324)       0:00:02.067 *********** 
changed: [localhost] => (item={'role': {'name': 'argocd', 'bound_service_account_names': ['argocd-repo-server'], 'bound_service_account_namespaces': ['argoproj']}, 'policies': [{'name': 'argocd', 'rules': 'path "kv_secrets/data/apps/" {\n  capabilities = [ "read", "list" ]\n}\npath "kv_secrets/data/apps/*" {\n  capabilities = [ "read", "list" ]\n}\n'}]}) => {
    "ansible_loop_var": "item",
    "changed": true,
    "item": {
        "policies": [
            {
                "name": "argocd",
                "rules": "path \"kv_secrets/data/apps/\" {\n  capabilities = [ \"read\", \"list\" ]\n}\npath \"kv_secrets/data/apps/*\" {\n  capabilities = [ \"read\", \"list\" ]\n}\n"
            }
        ],
        "role": {
            "bound_service_account_names": [
                "argocd-repo-server"
            ],
            "bound_service_account_namespaces": [
                "argoproj"
            ],
            "name": "argocd"
        }
    },
    "rc": 0
}
META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Sunday 24 July 2022  09:04:28 +0000 (0:00:00.339)       0:00:02.407 *********** 
=============================================================================== 
Gathering Facts --------------------------------------------------------- 0.69s
/vault-acl/vault-acls-and-roles-playbook.yaml:2 -------------------------------
Enable Vault kv-v2 Secret Engine ---------------------------------------- 0.36s
/vault-acl/vault-acls-and-roles-playbook.yaml:20 ------------------------------
Configure Vault to Talk to Kubernetes ----------------------------------- 0.36s
/vault-acl/vault-acls-and-roles-playbook.yaml:14 ------------------------------
Write Roles to Vault ---------------------------------------------------- 0.34s
/vault-acl/vault-acls-and-roles-playbook.yaml:33 ------------------------------
Write Policies to Vault ------------------------------------------------- 0.32s
/vault-acl/vault-acls-and-roles-playbook.yaml:26 ------------------------------
Enable Vault Kubernetes Authentication Method --------------------------- 0.32s
/vault-acl/vault-acls-and-roles-playbook.yaml:10 ------------------------------

Subsequent runs will report no changed items unless you change anything in the tasks or their properties.
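If you don't want to wait for the next scheduled run, you can trigger one manually and follow its log (vault-acl-manual is just an arbitrary Job name):

# Create a one-off Job from the CronJob template and tail its output
kubectl -n vault create job vault-acl-manual --from=cronjob/vault-acl
kubectl -n vault logs -f job/vault-acl-manual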

At this point, according to our policies, ArgoCD should be able to read Vault secrets under kv_secrets/apps/.
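As a rough sketch of what happens under the hood when a pod authenticates, here is the login flow against the raw Vault API. This assumes curl and jq are available in the pod, it must run as the argocd-repo-server ServiceAccount in the argoproj namespace (the only identity our role allows), and apps/myapp is a hypothetical secret path:

# Exchange the pod's ServiceAccount token for a Vault token via the argocd role
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
VAULT_TOKEN=$(curl -s --request POST \
  --data "{\"jwt\": \"$JWT\", \"role\": \"argocd\"}" \
  http://vault.vault.svc:8200/v1/auth/kubernetes/login | jq -r '.auth.client_token')

# Read a kv-v2 secret; note the /data/ segment in the API path
curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
  http://vault.vault.svc:8200/v1/kv_secrets/data/apps/myapp | jq '.data.data'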

Tagged as: EKS Kubernetes K8s Containers Kind IaC Kubectl Kustomize Helm Vault ansible