What’s New in Kubernetes 1.34
Kubernetes 1.34 brings GA features, scheduler speedups, kubelet and networking updates, plus security and performance boosts for production clusters.
Kubernetes 1.34 (released late August 2025 and code-named “Of Wind & Will”) is packed with enhancements across the board. This release includes 58 notable changes: 23 features graduating to Stable (GA), 22 new Beta features, and 13 new Alpha capabilities. In this post, we’ll cover the highlights of what’s new: from major new features, to performance improvements, and updates in key components like the kubelet, scheduler, and the networking stack.
Major New Features and Enhancements
Kubernetes 1.34 introduces a variety of new features (at varying stages) that developers and cluster operators will find exciting. Here are some of the most impactful updates in this release:
- Dynamic Resource Allocation (DRA) reaches GA – The core of Dynamic Resource Allocation for hardware devices is now Stable. DRA allows Pods to request specialized resources (GPUs, TPUs, high-performance NICs, FPGAs, etc.) via a new resourceClaims field in Pod specs. Backed by ResourceClaim objects and other APIs under resource.k8s.io/v1, DRA provides a flexible, opaque mechanism (inspired by CSI storage provisioning) to allocate and configure devices dynamically. With DRA graduating to GA, cluster operators can reliably use it by default to manage complex device resources (see the sketch after this list).
- Projected Service Account Tokens for Image Pulls (Beta) – The kubelet can now use short-lived, Pod-scoped ServiceAccount tokens when pulling images from private registries. Previously, image pull secrets were long-lived and shared across pods or nodes, posing security risks. In 1.34, the kubelet’s image credential provider can request ephemeral, audience-bound tokens tied to the Pod’s identity, eliminating the need for node-level static Secrets. This greatly improves security and simplifies credential management for pulling private images.
- KYAML – a Safer Kubernetes YAML Dialect (Alpha) – Kubernetes 1.34 introduces KYAML, a Kubernetes-specific YAML subset, as a new kubectl output format. KYAML aims to reduce YAML foot-guns (significant whitespace, implicit type conversions, etc.) by using a constrained YAML syntax better suited for Kubernetes configs. All KYAML is valid standard YAML, but kubectl v1.34 can now emit output in KYAML format (with kubectl get -o kyaml) when the env var KUBECTL_KYAML=true is set. This provides an option for more robust, less ambiguous config files going forward.
- Pod-Level Resource Requests & Limits (Beta) – A long-requested feature for simpler resource management: you can now specify total CPU/memory requests and limits at the Pod level, instead of only per container. This was introduced in 1.32 and graduates to Beta in 1.34. Defining an overall resource “budget” for a Pod makes it easier to manage multi-container workloads – the scheduler ensures the sum of container resources stays within the Pod’s limit. This leads to more intuitive resource planning and more efficient cluster utilization (see the sketch after this list). The Horizontal Pod Autoscaler also now supports Pod-level resources in Beta.
- Mutating Admission Policies (Beta) – Kubernetes 1.34 introduces MutatingAdmissionPolicy as a built-in, declarative alternative to custom mutating webhooks. This feature (powered by the Common Expression Language) allows cluster admins to define in-cluster mutation rules (using JSON patches/merge logic) for incoming objects, simplifying admission control configuration. Originally an alpha in 1.32, it’s now Beta and enabled by default. In short, you can enforce custom mutations (e.g. defaulting or transforming fields) without maintaining an external webhook service (see the sketch after this list).
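To make the DRA request flow concrete, here is a minimal sketch of the Pod side of the API. The claim name single-gpu, the Pod name, and the image are illustrative placeholders; the ResourceClaim (or a ResourceClaimTemplate) would be created separately against the resource.k8s.io/v1 API and is assumed to already exist.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dra-example
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
    resources:
      claims:
      - name: gpu                   # refers to the entry in spec.resourceClaims below
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu   # a pre-created ResourceClaim, assumed to exist
```

Using resourceClaimTemplateName instead of resourceClaimName lets Kubernetes generate a dedicated claim per Pod.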
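Next, a hedged sketch of Pod-level requests and limits: a single budget under spec.resources is shared by both containers. It assumes the PodLevelResources feature gate (Beta in 1.34) is enabled; the names and images are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-budget
spec:
  resources:                  # Pod-wide budget shared by all containers
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
  - name: web
    image: nginx:1.27         # containers may omit their own requests/limits
  - name: helper
    image: busybox:1.36
    command: ["sleep", "infinity"]
```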
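Finally, a rough sketch of a declarative mutation. It assumes the Beta API is served as admissionregistration.k8s.io/v1beta1 and keeps the JSONPatch-in-CEL shape from the alpha; the policy name, label, and match rules are invented for illustration, so treat this as a starting point rather than a definitive manifest.

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: add-environment-label
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Ignore
  reinvocationPolicy: IfNeeded
  mutations:
  - patchType: JSONPatch
    jsonPatch:
      # CEL expression returning a list of JSON patch operations;
      # assumes the incoming Pod already has a metadata.labels map.
      expression: >
        [
          JSONPatch{op: "add", path: "/metadata/labels/environment", value: "dev"}
        ]
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicyBinding
metadata:
  name: add-environment-label-binding
spec:
  policyName: add-environment-label
```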
These are just a few highlights – other improvements include delayed Job Pod replacement policy reaching GA (Jobs can now wait for a failing Pod to fully terminate before starting a replacement, avoiding resource contention), enhancements to storage (e.g. recover from volume expansion failures now GA), and more. Be sure to check the full release notes for the complete list of features and their stages.
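As a quick illustration of the Job change mentioned above, the behavior is controlled by the Job’s podReplacementPolicy field. Here is a minimal sketch; the completions, parallelism, and workload command are placeholders.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: replace-only-when-failed
spec:
  podReplacementPolicy: Failed   # wait for a Pod to fully terminate before creating its replacement
  completions: 3
  parallelism: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo working && sleep 30"]   # placeholder workload
```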
Performance and Scalability Improvements
Many of the updates in Kubernetes 1.34 are aimed at improving performance, efficiency, and scalability of the control plane:
- Non-Blocking Scheduler Operations – The kube-scheduler is now smarter about how it interacts with the API server during scheduling cycles. In 1.34, a new mechanism allows the scheduler to perform asynchronous (non-blocking) API calls when binding Pods. Instead of stalling the scheduling thread while waiting on API persistence, the scheduler can queue and deduplicate requests and continue scheduling other Pods in parallel. This reduces scheduling latency and prevents throughput bottlenecks, especially under high load, by ensuring the scheduler isn’t idle during slow API responses.
- Faster Pod Rescheduling via Plugin Callbacks – In a related improvement now GA, the scheduler’s plugins can register fine-grained callback functions to better decide when an unschedulable Pod should be retried. Rather than using a fixed back-off for all situations, scheduler plugins (e.g. for resources, affinity, DRA, etc.) can signal that a specific cluster event likely makes a previously rejected Pod schedulable, prompting an immediate requeue. This avoids needless delays and speeds up scheduling in clusters with dynamic resources – certain plugins can even bypass the usual backoff if it’s safe to do so. The result is higher scheduling throughput and faster placement of Pods when cluster state changes.
- Streaming List API Responses – The Kubernetes API server can now stream large list responses instead of assembling huge objects in memory. Previously, listing thousands of objects (Pods, CRs, etc.) could consume a lot of memory and even destabilize the apiserver, since it would gather the entire result set before sending it. In 1.34, the JSON and protobuf list endpoints use a streaming encoder (now a stable feature) so that the apiserver sends objects incrementally without buffering the whole list. This greatly reduces memory pressure and makes large-scale queries more efficient, improving cluster stability under heavy client load.
- More Efficient Informers (Watch List) – Building on the above, streaming informers (introduced in 1.32) see further refinements in 1.34. Controllers (kube-controller-manager) and other clients can now leverage the WatchList mechanism by default, meaning they receive list results as a stream of watch events. This keeps memory usage steady and avoids spikes when controllers list a large number of objects. In practice, this makes the control plane more predictable and reliable under scale, by not loading entire resource lists into memory at once.
- Snapshottable API Cache for Historical Reads – The API server’s watch cache (which caches recent state from etcd) gets a new enhancement: it can now serve reads of slightly older data directly from cache. If a client requests a list at an older resourceVersion, the apiserver can often serve it from a cached snapshot instead of hitting etcd. This reduces etcd load and latency for controllers that list resources with a slightly stale resourceVersion. Under the hood, the watch cache creates lightweight snapshots on each watch event and keeps them for a short window, so historical paginated lists can be satisfied cheaply. This feature is on by default in 1.34 (Beta) and helps large-scale controllers.
Overall, these improvements mean faster scheduling, more efficient API calls, and better scalability for large clusters. Teams running big or busy clusters will notice smoother performance thanks to these optimizations in the scheduler and API machinery.
Kubelet and Node Updates
Several of the new features in 1.34 involve the kubelet (node agent) and node-level behavior, which are important for operators and DevOps engineers managing nodes:
- Swap Support is Now GA – Kubernetes historically did not support swap on nodes, but that changes in 1.34. The feature to allow limited swap usage (per-node opt-in) has graduated to Stable. In the default “NoSwap” mode, nothing changes (swap remains off for workloads). But cluster admins can configure a node’s kubelet with LimitedSwap mode to let Pods use swap space within their memory limits (see the kubelet config sketch after this list). This can improve resilience for workloads with infrequently used memory pages, avoiding OOM kills by using a portion of swap instead. It’s a niche but important capability for certain workloads, now officially supported (first introduced in v1.22, now GA).
- Graceful Node Shutdown on Windows (Beta) – The kubelet on Windows nodes now supports graceful node shutdown, similar to Linux. This means that when a Windows node is shutting down or rebooting, the kubelet will proactively start evicting Pods and allow them to terminate gracefully, rather than abruptly killing them. It uses Windows system shutdown notifications to trigger the same safe Pod termination logic that Linux nodes have had. This improvement, enabled by default in 1.34 (Beta), ensures workloads on Windows get a chance to clean up and save state on shutdown, improving reliability during maintenance reboots or updates.
- Kubelet OpenTelemetry Tracing (GA) – The kubelet now includes built-in OpenTelemetry tracing support, which has reached GA. This means the node can emit distributed tracing spans (for operations like pod sandbox setup, image pull, etc.) to an OpenTelemetry collector, helping operators get deeper visibility into node performance and pod startup time. Tracing integration makes debugging and monitoring complex issues easier.
- Lifecycle Hook “Sleep” Action (GA) – The kubelet has a new trick for container lifecycle management: a Sleep action for PostStart and PreStop hooks is now stable. This lets you specify that a container should pause for a set duration after starting or before shutting down. It’s a simple but useful addition – for example, a PreStop hook can sleep for a few seconds to delay termination until a load balancer stops sending traffic (see the sketch after this list). (Introduced in 1.29, zero-duration support added in 1.32, now GA.)
- Auto-Detect Cgroup Driver (Deprecation) – Configuring the container runtime’s cgroup driver is becoming simpler. Kubernetes 1.28 introduced auto-detection for the cgroup driver, where the kubelet queries the CRI runtime to pick the correct driver. In 1.34 this has gone GA, and manual configuration is deprecated. The kubelet will automatically use the driver reported by containerd/CRI-O. The old flag (and kubelet config field) will be removed in a future release (not before 1.36).
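To tie the swap feature to configuration, here is a minimal kubelet config sketch for the item above. It assumes a cgroup v2 Linux node and that you opt in per node; adjust to your own kubelet config management.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false            # let the kubelet start on a node that has swap enabled
memorySwap:
  swapBehavior: LimitedSwap  # Burstable Pods may use swap, bounded by their memory limits
```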
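And a minimal sketch of the Sleep lifecycle action mentioned above; the five-second duration, Pod name, and image are arbitrary examples.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-drain
spec:
  containers:
  - name: web
    image: nginx:1.27
    lifecycle:
      preStop:
        sleep:
          seconds: 5   # pause before termination so the load balancer can drain traffic
```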
Scheduler and Control Plane Updates
Kubernetes 1.34 also brings improvements to the scheduler and various control-plane controllers beyond just performance tweaks:
- Better Coordination with Cluster Autoscaler – A new alpha feature helps the scheduler communicate intent to external components. The scheduler can now use the Pod’s .status.nominatedNodeName field to indicate an intended placement even when the Pod is not yet bound (previously this field was only used for preemption scenarios). With the NominatedNodeNameForExpectation feature gate on, the scheduler will set this field as a hint of where it plans to schedule the Pod. This allows the Cluster Autoscaler to recognize that a node is about to receive a Pod and avoid mistakenly removing that node for being “empty”. In short, it prevents the autoscaler from racing against the scheduler when scaling down, improving reliability in scale-down decisions.
- Device Binding Conditions in Scheduling (Alpha) – For workloads that rely on external devices or persistent volumes, scheduling is getting more transactional. In 1.34, the scheduler will delay binding a Pod to a Node until required external device resources are confirmed to be ready (alpha feature). This happens in the scheduler’s PreBind phase: if a Pod uses Dynamic Resource Allocation or attachable volumes that aren’t ready, the scheduler can hold off on final binding. This prevents situations where a Pod is scheduled to a node but then fails immediately because a device couldn’t attach in time. It makes scheduling more robust and predictable for device-heavy workloads.
- Ordered Namespace Deletion (GA) – While not specific to the scheduler, this change affects cluster controllers and overall safety. Namespace deletion is now deterministic and ordered. In prior releases, when you deleted a namespace, Kubernetes would wipe out all resources inside in a semi-random order. That could lead to awkward scenarios – for example, Pods might linger after their NetworkPolicies were gone, leaving a short window where isolation rules weren’t enforced. Kubernetes 1.34 fixes this by ensuring a structured deletion sequence: certain resource types (like Pods) are removed before others (like Policies, CRDs, etc.). This makes namespace teardown safer and closes potential security gaps (it also addresses a specific CVE). Introduced in 1.33, this behavior is now stable in 1.34.
- External SA Token Signing (Beta) – The API server now supports outsourcing ServiceAccount token signing to an external service. In 1.34, the ExternalJWTSigner gRPC API is Beta and enabled by default. This lets you integrate Kubernetes with external Key Management Systems or HSMs for signing ServiceAccount JWTs, instead of using a static key file on disk. Clusters can thus rotate keys or use hardware-backed signing for better security.
There are many more under-the-hood improvements as well. For example, declarative API validation tooling (continuing in Beta) uses CEL-based rules to make it easier for contributors to define and review API validations, and in-place Pod resource resizing sees further refinements in Beta (including memory downsize support). These help polish Kubernetes’ internal mechanics and user experience.
Networking and Service Updates
Finally, let’s look at what’s new in the networking stack and Service APIs in Kubernetes 1.34:
- Relaxed DNS Search Path Rules (GA) – Kubernetes has eased a long-standing restriction on DNS config. In previous versions, there were strict rules on the content of the DNS search path for Pods, which could block some advanced network setups. Now, Pods can specify a single . (dot) as the first search domain to effectively turn off automatic search path expansion for external lookups. This change (alpha in 1.32, now GA) lets you avoid having internal cluster domains appended to every DNS query, which in some setups caused unnecessary DNS traffic or resolution errors. For example, Pods that need to query external services can set their DNS config to use search: [".", "other.internal.domain"] so that non-cluster hostnames won’t get the cluster suffix (see the sketch after this list). This is especially useful in hybrid environments and prevents leaks of internal DNS queries.
- Windows Networking – Direct Server Return (GA) – Direct Server Return (DSR) for Services is now fully supported on Windows and has graduated to GA. Windows nodes in LoadBalancer or NodePort scenarios can use DSR to send responses directly back to clients, improving service performance and achieving parity with Linux’s capabilities.
- Topology-Aware Service Routing Updates – Kubernetes 1.34 deprecates the old PreferClose option for service routing and replaces it with clearer choices. The Service.spec.trafficDistribution field now supports PreferSameZone and PreferSameNode as the preference values. In fact, PreferSameZone is effectively an alias for what PreferClose meant (favor same-zone endpoints), but with a less ambiguous name. And PreferSameNode is a new option to prefer endpoints on the same node for Service traffic when possible. This gives an even stronger locality preference for scenarios like DaemonSets or node-local proxies. The feature gate PreferSameTrafficDistribution is Beta in 1.34 and enabled by default, so clusters can start using these new Service traffic routing hints (see the sketch after this list).
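To show the relaxed DNS rules from the first item above, here is a minimal Pod sketch that opts out of search-path expansion. The resolver address, domain, and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: external-lookups
spec:
  dnsPolicy: "None"          # supply the full DNS config ourselves
  dnsConfig:
    nameservers:
    - 10.96.0.10             # placeholder resolver (e.g. the cluster DNS Service IP)
    searches:
    - "."                    # stop automatic search-path expansion for unqualified names
    - other.internal.domain
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "infinity"]
```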
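And a sketch of the new traffic distribution preference from the second item; the Service name, selector, and port are placeholders, and PreferSameNode assumes the Beta PreferSameTrafficDistribution gate is on.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-cache
spec:
  selector:
    app: cache
  ports:
  - port: 6379
    targetPort: 6379
  trafficDistribution: PreferSameNode   # or PreferSameZone (the clearer successor to PreferClose)
```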
Aside from those, no major changes to core networking APIs (like Ingress or CNI) were introduced in 1.34. The focus this release was on refinements and performance in networking – e.g. better DNS config and faster service routing on Windows. It’s worth noting that if you use multi-zonal clusters or multi-node services, the new trafficDistribution options can help optimize network latency by keeping traffic local when possible.
Conclusion and Further Reading
Kubernetes 1.34 delivers a wide array of improvements from enhanced device support and security features to significant boosts in performance and reliability of the control plane. Developers and platform teams can start taking advantage of these features by enabling the new beta/alpha features where appropriate (for testing), and by upgrading to benefit from the stable enhancements.
Happy upgrading!