Helm Upgrade Failure: When Managed Fields Bite Back
Helm upgrade failed due to a Kubernetes managedFields conflict. Learn why the spec looked fine, yet patching caused errors, and how to fix it.
Recently I ran into a Helm upgrade failure that at first looked baffling:
```
UPGRADE FAILED: cannot patch "webhook" with kind Deployment:
spec.template.spec.containers[0].env[0].valueFrom:
Invalid value: "": may not have more than one field specified at a time
```
Naturally, my first assumption was that something in the Deployment spec was invalid: perhaps both value and valueFrom had been set on the same environment variable.
But when I reviewed the manifest, the spec looked completely fine. Only valueFrom was present.
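For context, this is the rule the error message enforces: a container environment variable may set value or valueFrom, but never both. A minimal YAML illustration (the variable and Secret names are hypothetical):

```yaml
# Invalid: an env entry may not specify both value and valueFrom at once.
env:
  - name: API_TOKEN              # hypothetical variable name
    value: "plain-text-token"
    valueFrom:
      secretKeyRef:
        name: webhook-secrets    # hypothetical Secret
        key: token

# Valid: only valueFrom is set, which is all my manifest contained.
env:
  - name: API_TOKEN
    valueFrom:
      secretKeyRef:
        name: webhook-secrets
        key: token
```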
So what was really going on?
The Hidden Culprit: Managed Fields¶
Kubernetes maintains managedFields internally to track which manager (Helm, kubectl, etc.) last set which fields. In my case, I had manually patched the Deployment some time ago. That patch introduced a value field, and even though it was no longer visible in the spec, it persisted in managedFields.
When Helm attempted an upgrade, the API server merged Helm’s valueFrom with the previously recorded value from managedFields. The result was an invalid object containing both.
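This history is easy to inspect. Recent kubectl versions hide managedFields from get output by default, but the --show-managed-fields flag prints it. A rough sketch of the kind of split ownership that produces this conflict (the manager names and env var key below are illustrative, not taken from my cluster):

```bash
# Print the Deployment with its managedFields (hidden by default in recent kubectl).
kubectl get deployment webhook -o yaml --show-managed-fields

# Abbreviated illustration of split ownership over the same env entry:
# one manager owns valueFrom, another still owns value from an old patch.
#
#   managedFields:
#   - manager: helm
#     operation: Update
#     fieldsV1:
#       f:spec: ... k:{"name":"API_TOKEN"}: { f:valueFrom: {...} }
#   - manager: kubectl-patch
#     operation: Update
#     fieldsV1:
#       f:spec: ... k:{"name":"API_TOKEN"}: { f:value: {} }
```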
Why This Matters¶
The error didn’t come from the manifest itself; it came from how Kubernetes reconciled historical state. This is one of those subtle areas where Helm and kubectl patch operations can collide.
Fixes and Best Practices¶
- Avoid manual patches on Helm-managed resources unless absolutely necessary.
- If you must patch, remember that you may need to clean up the patched fields later, before the next Helm upgrade runs into them.
To resolve the issue, either:
- Delete the Deployment and let Helm recreate it cleanly, or
- Patch out the conflicting field explicitly with a kubectl patch --type=json command, as sketched below.
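A minimal sketch of both options, assuming the Deployment is named webhook (as in the error) and that the stale value sits on the first env entry of the first container; adjust the names and JSON-Pointer indices to match your own spec:

```bash
# Option 1: delete the Deployment and let Helm recreate it on the next upgrade.
kubectl delete deployment webhook
# ...then re-run the helm upgrade that originally failed.

# Option 2: remove the stale value field explicitly with a JSON patch,
# then retry the upgrade. The path below is an assumption about where the
# conflicting field lives; check your spec first.
kubectl patch deployment webhook --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/env/0/value"}]'
```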
Takeaway¶
Not all errors originate from the visible spec. Sometimes it’s the invisible history, managedFields, that causes problems. Understanding this distinction can save a lot of debugging time during upgrades.