Python 3.14 Adds compression.zstd for Zstandard Support
Python 3.14 adds a native Zstandard module under compression.zstd, unifying high-performance zstd support and existing stdlib compressors under one namespace.
With Python 3.14, the standard library gains a first-party wrapper for Zstandard (zstd), a modern compression algorithm renowned for its high compression ratios and rapid decompression speeds. To avoid clashing with existing PyPI packages named zstd or zstandard, PEP 784 consolidates all compression modules under a single namespace, while preserving the legacy imports you already use.
Zstandard has emerged as the industry standard for performance-sensitive compression. Benchmarks consistently show:
Higher compression ratios than zlib (DEFLATE) and bzip2
Faster decompression than lzma
Hardware and filesystem support, including ZFS and Btrfs
Projects from Conda to network protocols now rely on zstd. By bundling it into the standard library, Python enables faster installs, smaller archives, and consistent APIs without external dependencies.
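One concrete payoff shows up in archiving: the 3.14 release also documents zstd support in tarfile, zipfile, and shutil. A minimal sketch, assuming tarfile's new "w:zst"/"r:zst" modes and guarded so it degrades gracefully on older interpreters:

```python
import io
import tarfile

data = b"hello from a zstd tarball"
buf = io.BytesIO()
try:
    # Write a zstd-compressed tarball entirely in memory.
    with tarfile.open(fileobj=buf, mode="w:zst") as tar:
        info = tarfile.TarInfo("greeting.txt")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

    # Read it back and verify the round trip.
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r:zst") as tar:
        restored = tar.extractfile("greeting.txt").read()
    assert restored == data
except tarfile.CompressionError:
    # Pre-3.14 interpreters don't recognize the "zst" compression type.
    print("zstd tarfile support requires Python 3.14+")
```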
Rather than shadow existing modules, PEP 784 introduces a new top-level compression package:
compression.zstd — the new Zstandard module
compression.gzip, compression.bz2, compression.lzma, and compression.zlib — aliases for the existing top-level modules
This structure prevents naming collisions with third-party packages (zstd, zstandard) and lays the groundwork for future additions (e.g., LZ4) without import conflicts.
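As a sketch, assuming the aliases re-export the legacy modules' public APIs, code can opt into the new namespace while falling back cleanly on older interpreters (compression.gzip here is the PEP's alias for the existing gzip module, not a new implementation):

```python
# Prefer the PEP 784 namespace; fall back to the classic module name
# on interpreters older than 3.14.
try:
    from compression.gzip import compress, decompress
except ImportError:
    from gzip import compress, decompress

payload = b"namespace demo " * 100
blob = compress(payload)
assert decompress(blob) == payload
```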
All new C extensions undergo AddressSanitizer and libFuzzer testing. The upstream zstd library is itself well-fuzzed and covered by a bug-bounty program, minimizing memory-safety risks.
By integrating Zstandard under a clear, conflict-free namespace and extending archive, streaming, and dictionary APIs, Python 3.14 empowers developers with a high-performance compression toolkit, while ensuring existing code and imports remain fully supported.
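The dictionary APIs mentioned above can be sketched as follows, assuming the train_dict() helper and zstd_dict parameters documented for compression.zstd (guarded so the snippet degrades gracefully on pre-3.14 interpreters):

```python
try:
    from compression.zstd import train_dict, compress, decompress

    # Train a shared dictionary from many small, similar samples;
    # dictionaries shine when payloads are tiny but structurally alike.
    samples = [
        (b"sensor-%03d: temperature=%d, status=OK; " % (i, 20 + i % 10)) * 8
        for i in range(128)
    ]
    zdict = train_dict(samples, 1024)  # target a ~1 KiB dictionary

    # Compressing with the dictionary shrinks each payload further;
    # the same dictionary must be supplied for decompression.
    msg = b"sensor-042: temperature=22, status=OK; " * 8
    blob = compress(msg, zstd_dict=zdict)
    assert decompress(blob, zstd_dict=zdict) == msg
except ImportError:
    print("compression.zstd requires Python 3.14+")
```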
FAQs
What is Zstandard and why is it added to Python 3.14?
Zstandard (zstd) is a modern compression algorithm offering high compression ratios and fast decompression speeds. Python 3.14 adds it to the standard library to support performance-sensitive use cases and eliminate the need for external dependencies when using zstd.
How is Zstandard integrated into Python’s standard library?
Zstandard is added under the new compression namespace as compression.zstd, per PEP 784. This avoids naming conflicts with third-party PyPI packages like zstd or zstandard, and paves the way for other algorithms (like LZ4) under the same namespace.
Can I still use the existing compression modules like gzip or lzma?
Yes. Existing top-level imports such as gzip, bz2, lzma, and zlib remain unchanged and fully supported. The compression namespace is an addition, not a replacement.
What APIs does compression.zstd provide?
One-shot APIs for simple compress() and decompress() operations
Streaming interfaces (compressobj, decompressobj) for chunked data
File wrappers for working with .zst files using familiar I/O patterns
How is compatibility and security handled in this new module?
Python on Windows ships with vendored libzstd
Unix builds detect libzstd at build time
The C extension is tested with AddressSanitizer, libFuzzer, and benefits from upstream bug bounty coverage for zstd
Dual-version compatibility can be maintained with fallback imports if needed
```python
from compression.zstd import compress, decompress

raw = b"example data"
zipped = compress(raw, level=5)
assert decompress(zipped) == raw
```
```python
from compression.zstd import ZstdCompressor, ZstdDecompressor

# Prepare some test data
full_data = b"Hello, world! " * 10_000  # ~140 KiB of data
chunk_size = 64 * 1024  # 64 KiB
data_chunks = [
    full_data[i : i + chunk_size]
    for i in range(0, len(full_data), chunk_size)
]

# Incremental compression
compressor = ZstdCompressor(level=3)
chunks = [compressor.compress(chunk) for chunk in data_chunks]
chunks.append(compressor.flush())

# Incremental decompression
decompressor = ZstdDecompressor()
reassembled = b"".join(decompressor.decompress(part) for part in chunks)
assert reassembled == full_data
print("Round-trip successful, size compressed →", sum(len(c) for c in chunks))
```
```python
from compression.zstd import ZstdFile

# 1. Prepare some "large_bytes" (e.g. ~100 KiB of repeating text)
large_bytes = (b"Python PEP 784: Zstd in stdlib! " * 1_000)[:100_000]

# 2. Compress to disk
with ZstdFile("example.zst", "wb", level=10) as out:
    out.write(large_bytes)

# 3. Read it back
with ZstdFile("example.zst", "rb") as inp:
    restored = inp.read()

assert restored == large_bytes
print(f"Success: wrote and read back {len(restored)} bytes.")
```
```python
try:
    from compression.lzma import LZMAFile
except ImportError:
    from lzma import LZMAFile
```