Ingress-nginx CVE-2025-1974: What It Is and How to Fix It

Abhimanyu Saharan

Today, when I woke up and checked my RSS feed, one headline stood out immediately: CVE-2025-1974. It's a new critical-severity vulnerability in ingress-nginx, and it's bad. Like "any pod in your cluster can potentially take over your entire Kubernetes environment without credentials" bad.

Naturally, I dropped everything and dug into the advisory.

So, What’s Going On?

The ingress-nginx maintainers just dropped patches for five security vulnerabilities. The most critical among them—CVE-2025-1974—has a CVSS score of 9.8. It allows configuration injection via the Validating Admission Controller, meaning any workload on the Pod network could potentially compromise your cluster.

Let that sink in: even if a user or service doesn’t have permission to create Ingress objects, it could still exploit this path—just by being on the same network.

For context, ingress-nginx is one of the most widely used ingress controllers for Kubernetes, deployed in over 40% of clusters. It translates Ingress resources into NGINX configuration that routes external traffic to internal services. It's flexible and easy to deploy, and now, evidently, a big attack surface.
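
If you haven't looked at that translation closely before, here's a minimal sketch of the kind of Ingress object the controller consumes; the hostname, backend service, and annotation are illustrative only, not tied to this CVE's exploit:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # annotations like this are how Ingress objects carry nginx-specific config
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80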

Why This One Is Especially Dangerous

Ingress controllers usually run with elevated privileges; ingress-nginx in particular can typically read Secrets across the entire cluster. This flaw means the Validating Admission Controller is reachable from the Pod network, so even low-privileged pods can send it crafted requests it was never meant to receive. Combine that with the other config-handling issues patched today, and you're looking at a cluster-wide exposure scenario.
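
A quick way to check whether you're exposed is to look for the controller and its webhook registration; the label and webhook name below assume a default install, so adjust for yours:

# Find ingress-nginx controller pods (label assumes the standard chart/manifests)
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx

# Confirm the admission webhook is registered (name assumes a default install)
kubectl get validatingwebhookconfiguration ingress-nginx-admission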

Here's What You Must Do: Upgrade Immediately (Recommended)

✅ Best-case fix: Upgrade to a patched version.

Patched versions available:

  • v1.12.1
  • v1.11.5

Upgrading ensures you're protected not just from CVE-2025-1974 but also from four other related vulnerabilities fixed in this release.
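
For a Helm install, the upgrade itself is short. The chart version below is my assumption of the release that ships controller v1.12.1, so confirm it against the chart's changelog before running:

helm repo update
helm upgrade <release-name> ingress-nginx/ingress-nginx \
  --namespace <namespace> \
  --version 4.12.1 \
  --reuse-values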

Pros

  • Full protection from all known CVEs.
  • No feature regression; you can continue using the Validating Admission Controller.
  • Long-term fix—no need for workaround maintenance.

Cons

  • Requires rolling out a new controller version, which may be gated by change management in large orgs or restricted environments.
  • May cause temporary downtime depending on your deployment strategy.

Can't Upgrade Right Now? Disable the Validating Admission Controller (Workaround)

If you’re unable to upgrade but still need to take action immediately, the next best thing is to disable the Validating Admission Controller.

🔧 If installed via Helm:

helm upgrade <release-name> ingress-nginx/ingress-nginx \
  --reuse-values \
  --set controller.admissionWebhooks.enabled=false

🔧 If installed manually:

Delete the ValidatingWebhookConfiguration (named ingress-nginx-admission in a default install) and remove the --validating-webhook container args and associated webhook configuration from your Deployment or DaemonSet.
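
For a stock manifest install, that might look like the following; the resource names and flags assume the default manifests, so verify them against your own cluster:

# Stop the API server from calling the webhook
kubectl delete validatingwebhookconfiguration ingress-nginx-admission

# Then edit the controller and remove the webhook-related args, e.g.
# --validating-webhook, --validating-webhook-certificate, --validating-webhook-key
kubectl -n ingress-nginx edit deployment ingress-nginx-controller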

⚠️ Reminder

Re-enable the Validating Admission Controller after you’ve upgraded to a patched version—it’s there for a reason: to prevent misconfigured Ingress resources from being applied.
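
With Helm, re-enabling it after the upgrade is just the inverse of the workaround:

helm upgrade <release-name> ingress-nginx/ingress-nginx \
  --reuse-values \
  --set controller.admissionWebhooks.enabled=true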

Pros

  • Quick to apply.
  • Mitigates the exploit path.

Cons

  • Disables important guardrails; invalid Ingress resources will not be caught.
  • Leaves the other, less severe CVEs unpatched until you upgrade.
  • Requires manual tracking to ensure it’s re-enabled later.

Still Can't Upgrade or Disable? Block the Exploit at the Network Layer

If you're locked out of upgrading and cannot disable the webhook due to organizational or technical constraints, there’s still a fallback:

Use a DaemonSet to block external access to the webhook port (8443) using iptables, while allowing kube-apiserver traffic through.

Here’s how it works:

  • A DaemonSet runs on every node.
  • It fetches the Kubernetes API server’s IP.
  • It applies an iptables rule to drop external traffic to port 8443 (the webhook).
  • It allows only the API server to talk to the webhook.

YAML Snippet

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: allow-get-kubernetes-endpoint
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["kubernetes"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-get-kubernetes-endpoint-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: kube-system
roleRef:
  kind: Role
  name: allow-get-kubernetes-endpoint
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: webhook-port-blocker
  namespace: kube-system
  labels:
    app: webhook-port-blocker
spec:
  selector:
    matchLabels:
      app: webhook-port-blocker
  template:
    metadata:
      labels:
        app: webhook-port-blocker
    spec:
      hostNetwork: true
      serviceAccountName: default
      volumes:
        - name: shared
          emptyDir: {}
        - name: sa-token
          projected:
            sources:
              - serviceAccountToken:
                  path: token
                  expirationSeconds: 3600
              - configMap:
                  name: kube-root-ca.crt
              - downwardAPI:
                  items:
                    - path: namespace
                      fieldRef:
                        fieldPath: metadata.namespace
      initContainers:
        - name: fetch-apiserver-ips
          image: bitnami/kubectl:1.20
          command:
            - sh
            - -c
            - |
              echo "[+] Fetching all Kubernetes API server IPs..."
              kubectl get endpoints kubernetes -n default -o jsonpath='{.subsets[0].addresses[*].ip}' | tr ' ' '\n' > /shared/apiserver-ips

              echo "[+] IPs written to /shared/apiserver-ips:"
              cat /shared/apiserver-ips
          volumeMounts:
            - name: shared
              mountPath: /shared
            - name: sa-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              readOnly: true
      containers:
        - name: iptables
          image: alpine:3.19
          securityContext:
            privileged: true
          command:
            - sh
            - -c
            - |
              apk add --no-cache iptables ipset >/dev/null

              echo "[+] Waiting for /shared/apiserver-ips..."
              while [ ! -s /shared/apiserver-ips ]; do sleep 1; done

              echo "[+] Creating or flushing ipset 'apiserver-ips'..."
              ipset create apiserver-ips hash:ip 2>/dev/null || ipset flush apiserver-ips

              echo "[+] Adding allowed IPs to ipset..."
              while read -r IP; do
                ipset add apiserver-ips "$IP" 2>/dev/null || true
              done < /shared/apiserver-ips

              echo "[+] Removing old iptables rules for port 8443..."
              iptables -L INPUT --line-numbers -n | grep dpt:8443 | sort -r -n | awk '{print $1}' | while read -r line; do
                iptables -D INPUT "$line"
              done

              echo "[+] Inserting new iptables rule using ipset..."
              iptables -I INPUT -p tcp --dport 8443 -m set ! --match-set apiserver-ips src -j DROP

              echo "[+] Final iptables rules:"
              iptables -L INPUT -n --line-numbers | grep 8443

              tail -f /dev/null
          volumeMounts:
            - name: shared
              mountPath: /shared
          resources:
            requests:
              cpu: 25m
              memory: 32Mi
            limits:
              cpu: 100m
              memory: 64Mi
      tolerations:
        - operator: Exists
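
After rolling this out, it's worth verifying that the rule actually landed. The probe below is a sketch; <webhook-pod-ip> is a placeholder for your ingress-nginx controller pod's IP:

# Check the blocker's logs for the final iptables rule it printed
kubectl -n kube-system logs -l app=webhook-port-blocker --tail=5

# From an ordinary pod, the webhook port should now be unreachable
kubectl run nettest --rm -it --image=busybox --restart=Never -- \
  sh -c 'nc -z -w 3 <webhook-pod-ip> 8443 && echo OPEN || echo BLOCKED'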

Pros

  • Doesn’t require disabling core features.
  • Blocks unauthorized traffic from within the cluster.
  • Does not rely on ingress-nginx release cycle or upstream fixes.

Cons

  • More complex to deploy and maintain.
  • Requires privileged containers and host networking.
  • Does not fully mitigate other related CVEs.
  • May break in clusters with dynamic API server IPs or advanced CNI setups.

Final Thoughts

This one’s serious. If you’re running ingress-nginx—especially if it has access to Secrets (which is often the case)—take action now.

Patches are out. Mitigations exist. But the window for exploitability remains wide open until you lock it down.

If you want to dive deeper, check out the GitHub issues for each CVE.

Stay safe out there.