Helm Upgrade Failed After v1.25 Due to PDB API

May 10, 2025
  • Kubernetes
Read time: 2 minutes
Abhimanyu Saharan
We recently started upgrading one of our oldest Kubernetes clusters. This cluster had been running reliably on v1.19 for years without issue. Now that we’re preparing to bring it up to v1.31, we decided to step through the intermediate versions. Everything was smooth—until we hit v1.25.

That’s when Helm threw an unexpected error and blocked the upgrade:

    Error: UPGRADE FAILED: resource mapping not found for name: "<object-name>" namespace: "<object-namespace>" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
    ensure CRDs are installed first

No application code had changed. The chart logic was untouched. Just the Kubernetes version was bumped. So what happened?

The Gotcha: Removed API in Helm Metadata

We had overlooked one detail: one of our internal Helm charts still referenced the PodDisruptionBudget (PDB) API version policy/v1beta1, which was removed in Kubernetes v1.25.

Even after we corrected the chart to use the updated policy/v1 API, the upgrade still failed. That’s because Helm’s release metadata was still referencing the removed API.

Helm stores the rendered manifests of every release revision, and if any of those stored resources refers to an API version the cluster no longer serves, Helm refuses to proceed with the upgrade.
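
One way to confirm the stale reference is to look at what Helm has stored for the release rather than at the chart source. A minimal check, using the same release-name and namespace placeholders as the commands below:

    # Print the manifest Helm stored for the latest release revision and
    # search it for the removed PDB API version.
    helm get manifest <release-name> --namespace <namespace> | grep -n -B 1 "policy/v1beta1"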

Enter: helm-mapkubeapis

helm-mapkubeapis is a Helm plugin built specifically to address this issue. It scans Helm release metadata for deprecated or removed Kubernetes APIs and updates them in-place to supported versions.
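
Under the hood, the plugin rewrites APIs according to a bundled mapping file. The snippet below is only a rough sketch of what a PDB entry in that file looks like; treat the exact field names and layout as an approximation, not the plugin's authoritative format:

    # Approximate shape of a mapping entry: the removed API string, its
    # replacement, and the versions where it was deprecated and removed.
    mappings:
      - deprecatedAPI: "apiVersion: policy/v1beta1\nkind: PodDisruptionBudget"
        newAPI: "apiVersion: policy/v1\nkind: PodDisruptionBudget"
        deprecatedInVersion: "v1.21"
        removedInVersion: "v1.25"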

Installation

    helm plugin install https://github.com/helm/helm-mapkubeapis

Be sure to install v0.4.1 or later, as earlier versions don’t fully support resource removal.
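
After installing, it is worth confirming which version actually landed:

    # List installed Helm plugins and their versions to verify
    # mapkubeapis is at v0.4.1 or newer.
    helm plugin list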

How I Fixed the Problem

Once the plugin was installed, I followed these two steps:

1. Dry Run

Check what changes would be applied:

    helm mapkubeapis --dry-run <release-name> --namespace <namespace>

The plugin correctly identified the policy/v1beta1 reference in the metadata.
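
If you want to see the raw records the plugin is inspecting, Helm v3 keeps them as Secrets in the release namespace, labeled with the release name and owner=helm. A quick way to list them (placeholders as above):

    # Helm v3 stores each release revision as a Secret in the release
    # namespace; these are the records mapkubeapis rewrites.
    kubectl get secrets --namespace <namespace> -l "owner=helm,name=<release-name>"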

2. Apply the Rewrite

Update the release metadata in-place:

    helm mapkubeapis <release-name> --namespace <namespace>

A new release revision was created with the corrected references—and the Helm upgrade succeeded.
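
To double-check before re-running the upgrade, you can list the release history and confirm the latest revision carries the rewritten metadata:

    # Show the release's revision history after the rewrite.
    helm history <release-name> --namespace <namespace>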

Important: Update Your Charts Too

The plugin fixes Helm’s internal records, but your actual chart templates still need to be updated manually. Here’s the change we made:

    # BEFORE
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget

    # AFTER
    apiVersion: policy/v1
    kind: PodDisruptionBudget

Make sure all of your manifests—Deployments, CRDs, PDBs—use API versions supported by your cluster.
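
One way to catch this before an upgrade is to render the chart and let the target cluster validate the output with a server-side dry run. The chart path and values file below are placeholders for your own layout:

    # Render the chart locally, then ask the API server to validate the result
    # without persisting anything; removed API versions will fail the dry run.
    helm template <release-name> ./chart --namespace <namespace> -f values.yaml \
      | kubectl apply --dry-run=server --namespace <namespace> -f -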

Key Takeaways

• Kubernetes v1.25 removes policy/v1beta1, not just deprecates it.
• Helm stores API references from past releases. Even if your charts are up to date, metadata can block upgrades.
• Use helm-mapkubeapis to rewrite Helm release history safely and unblock upgrades.

We’ve now added this plugin to our standard upgrade procedure to avoid similar issues in future migrations.
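
For clusters with many releases, a small loop makes that part of the procedure repeatable. A sketch, assuming jq is available; drop --dry-run once you are happy with the reported changes:

    # Dry-run mapkubeapis against every Helm release in the cluster.
    helm list --all-namespaces --output json \
      | jq -r '.[] | "\(.name) \(.namespace)"' \
      | while read -r release namespace; do
          helm mapkubeapis "$release" --namespace "$namespace" --dry-run
        done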

Have you hit similar snags upgrading across Kubernetes versions? I’d love to hear how you handled them.
