Kubernetes has introduced a powerful scheduling enhancement, matchLabelKeys and mismatchLabelKeys, to improve how pods honor affinity and anti-affinity rules, especially during rolling updates and in multi-tenant isolation scenarios.
During rolling updates, pods from different revisions (e.g., older and newer ReplicaSets) may co-exist. Without a way to differentiate them, Kubernetes might incorrectly schedule new pods, either failing due to affinity constraints or underutilizing available nodes.
Similarly, in multi-tenant clusters, there was no clean way to enforce pod placement boundaries unless the full label values were known in advance—an assumption that doesn't hold in dynamic environments.
When a pod is created, the API server evaluates matchLabelKeys and mismatchLabelKeys, fetches the corresponding values from the pod's own labels, and automatically merges them into the affinity term's labelSelector.
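As a sketch of what this looks like in a manifest, the snippet below shows an affinity term that co-locates a Deployment's pods by revision. The app label, names, and topologyKey are illustrative; pod-template-hash is the label Kubernetes adds automatically to Deployment-managed pods.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values: [web]
              topologyKey: kubernetes.io/hostname
              # The incoming pod's pod-template-hash value is merged into
              # labelSelector, so new pods only co-locate with pods from
              # the same revision.
              matchLabelKeys:
                - pod-template-hash
      containers:
        - name: web
          image: nginx   # illustrative image
```

Note that no hash value appears anywhere in the manifest; it is resolved per pod at creation time.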
1. Rolling updates: Schedule pods alongside only those with the same pod-template-hash, avoiding clashes between old and new revisions.
2. Multi-tenancy: Ensure tenant pods are scheduled together but isolated from pods of other tenants, without hardcoding tenant names in manifests.
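The multi-tenancy case above can be sketched as a pod spec that combines both fields. The tenant label key, names, and topologyKey here are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tenant-workload   # illustrative name
  labels:
    tenant: tenant-a      # illustrative tenant label
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Attract pods whose "tenant" value matches this pod's value.
        - matchLabelKeys:
            - tenant
          topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Repel pods that carry a different "tenant" value.
        - mismatchLabelKeys:
            - tenant
          labelSelector:
            matchExpressions:
              - key: tenant
                operator: Exists
          topologyKey: kubernetes.io/hostname
  containers:
    - name: app
      image: nginx        # illustrative image
```

The same manifest works for every tenant: the actual tenant value is read from each pod's labels at admission time, so nothing tenant-specific is hardcoded.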
The introduction of matchLabelKeys and mismatchLabelKeys is a subtle yet powerful enhancement to Kubernetes scheduling. It offers a cleaner, more dynamic way to define pod co-location and separation without relying on hardcoded label values. Whether you're managing rolling updates or enforcing tenant-level isolation, this feature gives you more control and flexibility without altering existing behavior—making it a valuable addition for production-grade workloads.
FAQs
What are matchLabelKeys and mismatchLabelKeys in Kubernetes PodAffinity?
These are new optional fields in PodAffinityTerm that dynamically inject label values from the incoming Pod into the affinity's labelSelector. This allows flexible co-location or separation of Pods based on matching or mismatching keys without hardcoding label values.
What problem do these fields solve in Pod scheduling?
They address challenges during rolling updates and multi-tenant isolation. Without them, affinity rules couldn’t distinguish between different versions of a deployment or tenants unless label values were known in advance. These new fields eliminate that static requirement.
How does this feature work at runtime?
When a Pod is created, Kubernetes inspects the labels of that Pod and uses the specified matchLabelKeys or mismatchLabelKeys to construct a labelSelector. This selector is then used by the scheduler to find Pods with matching or differing values on the target nodes.
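To make the transformation concrete, here is a before/after sketch (the label values are illustrative): a pod labeled pod-template-hash: abc123 whose affinity term lists that key in matchLabelKeys is evaluated as if the key-value pair had been appended to the selector.

```yaml
# What the manifest declares:
matchLabelKeys:
  - pod-template-hash
labelSelector:
  matchLabels:
    app: web
# What the scheduler effectively evaluates for a pod
# labeled pod-template-hash: abc123 (value illustrative):
#
# labelSelector:
#   matchLabels:
#     app: web
#     pod-template-hash: abc123
```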
What are the main use cases for matchLabelKeys and mismatchLabelKeys?
Rolling updates: Ensure new pods co-locate only with pods from the same ReplicaSet revision (e.g., same pod-template-hash).
Multi-tenancy: Dynamically group tenant pods together while keeping them isolated from others, without hardcoding tenant IDs.
What is the feature status of matchLabelKeys and how do I enable it?
As of Kubernetes v1.33, the feature is beta and enabled by default, controlled by the MatchLabelKeysInPodAffinity feature gate. It is safe to use in production, and existing manifests remain unaffected unless explicitly configured.
```yaml
matchLabelKeys:    # affinity applies to pods with the same key-value as the incoming pod
mismatchLabelKeys: # anti-affinity applies to pods with different key-values
```