AMD DGF SuperCompression cuts geometry storage size by up to 22%

by Chief Editor

The End of the Polygon Limit: How AMD’s DGF is Reshaping Game Worlds

For decades, game developers have played a constant game of compromise. You want a hyper-realistic statue or a sprawling gothic cathedral? You have to “bake” the detail into textures or use clever tricks to hide the fact that the model is actually quite low-poly. But the industry is hitting a tipping point where “polygon counts” are becoming a legacy metric.

AMD’s recent push with the Dense Geometry Format (DGF) and its new SuperCompression (DGFS) isn’t just a minor software update—it’s a glimpse into a future where geometry is nearly “infinite,” and the bottleneck shifts from how many triangles a GPU can push to how efficiently we can move that data from the SSD to the screen.

Did you know? A single DGF meshlet packs up to 64 vertices and 64 triangles into a tiny 128-byte block. This granular approach allows GPUs to stream only the geometry that is actually visible, rather than loading entire massive scenes into VRAM.
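To get a feel for how aggressive that packing is, here is a back-of-the-envelope comparison against a naive uncompressed meshlet (3 x 32-bit floats per vertex, 3 x 32-bit indices per triangle). The bit budget below is illustrative only; it is not the actual DGF block layout, which uses quantized, variable-width encodings.

```python
BLOCK_BYTES = 128
BLOCK_BITS = BLOCK_BYTES * 8          # 1024 bits per DGF block
MAX_VERTS = 64
MAX_TRIS = 64

# Naive uncompressed storage for the same meshlet:
# 3 float32 positions per vertex + 3 uint32 indices per triangle.
naive_bits = MAX_VERTS * 3 * 32 + MAX_TRIS * 3 * 32

print(naive_bits // 8)                # 1536 bytes uncompressed
print(naive_bits / BLOCK_BITS)        # 12.0x denser in a 128-byte block
```

Even before any entropy coding, fitting that payload into 128 bytes implies roughly a 12x density gain over raw buffers, which is why DGF blocks are so friendly to streaming.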

Solving the Storage Crisis: The Magic of DGFS

High-fidelity assets are massive. As we move toward 4K and 8K gaming, the sheer size of geometry data can bloat a game’s installation size and choke memory bandwidth. This is where DGF SuperCompression (DGFS) comes into play.


According to recent test data, DGFS can shrink raw DGF data by roughly 30%. For example, a complex “Dragon” model that previously took up 29.25MB can be squeezed down to 20.15MB. When paired with GDeflate compression, the savings remain significant, with some assets seeing a reduction of up to 22.22%.
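The “Dragon” numbers from the article check out against the quoted ~30% figure, as a quick calculation shows:

```python
# Compression ratio for the article's "Dragon" model figures.
raw_mb, dgfs_mb = 29.25, 20.15

reduction = 1 - dgfs_mb / raw_mb      # fraction of storage saved
print(f"{reduction:.1%}")             # 31.1%
```

A 31.1% reduction on raw DGF data; the smaller 22.22% figure applies after GDeflate has already squeezed out some redundancy.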

The real brilliance here is the flexibility. DGFS acts as a storage layer; it can be reconstructed back into original DGF blocks for future hardware or decoded into conventional vertex and index buffers for older GPUs. In other words, developers can create one “master” asset that works across multiple generations of hardware without needing to store five different versions of the same rock or character.
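That dual-target idea can be sketched as a toy dispatcher: one stored asset, two decode paths. The function names and 64-triangle regrouping below are illustrative assumptions, not the real DGF SDK API.

```python
# Toy model of DGFS as a single storage layer with two decode targets.
def decode(stored_tris, target):
    if target == "dgf_blocks":
        # Future-hardware path: regroup triangles into 64-triangle blocks,
        # mimicking reconstruction of DGF blocks for native consumption.
        return [stored_tris[i:i + 64] for i in range(0, len(stored_tris), 64)]
    if target == "buffers":
        # Legacy path: flatten into a conventional index buffer.
        return [idx for tri in stored_tris for idx in tri]
    raise ValueError(f"unknown target: {target}")

tris = [(i, i + 1, i + 2) for i in range(100)]  # 100 dummy triangles
blocks = decode(tris, "dgf_blocks")   # 2 blocks: 64 + 36 triangles
indices = decode(tris, "buffers")     # 300 flat indices
```

The point is that the storage format stays singular; only the decode target changes per hardware generation.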

AMD DGF vs. NVIDIA RTX Mega Geometry: A Different Philosophy

In the arms race for visual fidelity, NVIDIA has its RTX Mega Geometry, and AMD has DGF. While they both aim to solve the “too many triangles” problem, they approach it from different angles.

  • NVIDIA RTX Mega Geometry focuses heavily on the acceleration structure—essentially optimizing how the GPU searches for intersections in a ray-traced scene.
  • AMD DGF is fundamentally a geometry compression format. It optimizes how the data is stored and streamed, making it a hardware-friendly way to handle dense meshes.

This distinction is critical. By focusing on compression and an open-source SDK, AMD is positioning DGF as a vendor-neutral standard that can be implemented via DirectX 12 and Vulkan, potentially bringing these benefits to a wider range of hardware beyond just the RDNA 5 architecture.

Pro Tip: If you’re a developer or a tech enthusiast, keep an eye on mesh shading. DGF’s block-based approach is the perfect companion to mesh shaders, allowing the GPU to discard invisible geometry before it ever hits the rasterizer, drastically boosting FPS in dense environments.
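The culling the Pro Tip describes is easy to see in miniature: each meshlet carries a bounding volume, and the GPU rejects whole 64-triangle blocks against the view before rasterization. The sphere-vs-plane test below is a simplified CPU sketch of that idea, not actual mesh-shader code.

```python
# Cull meshlets (bounding sphere: center, radius) against a single plane
# defined by unit normal plane_n and offset plane_d. A real mesh shader
# would test all six frustum planes per meshlet on the GPU.
def cull_meshlets(meshlets, plane_n, plane_d):
    visible = []
    for center, radius in meshlets:
        # Signed distance from sphere center to the plane.
        dist = sum(c * n for c, n in zip(center, plane_n)) + plane_d
        if dist >= -radius:           # sphere touches the visible half-space
            visible.append((center, radius))
    return visible

meshlets = [((0, 0, 5), 1.0), ((0, 0, -5), 1.0)]
print(cull_meshlets(meshlets, (0, 0, 1), 0))  # only the front meshlet survives
```

Because each test discards up to 64 triangles at once, the cost of culling is amortized across the whole block, which is exactly why DGF's fixed-size meshlets pair well with mesh shaders.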

The “Nanite” Effect and the Future of Real-Time Rendering

We’ve already seen a preview of this future with Unreal Engine 5’s Nanite. Nanite allows artists to import cinema-quality assets with millions of polygons without worrying about manual LODs (Levels of Detail). However, Nanite primarily uses software rasterization for its smallest triangles, which can create challenges for ray tracing.


AMD’s DGF is designed to bridge this gap. By providing direct hardware support for compressed dense geometry, future GPUs (like the upcoming RDNA 5 series) can handle these micro-polygons natively. This means ray-traced reflections and shadows will look significantly more accurate on complex surfaces, as the “proxy” geometry used for ray tracing will be much closer to the actual visual model.

The performance is already promising. Tests on a Radeon RX 9070 XT show that a 10-million-triangle model can be decoded in as little as 0.15 seconds using a single CPU core. Once this process is fully moved to the GPU, decode latency should become negligible.
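Those figures translate into a per-core decode rate worth spelling out:

```python
# Single-core decode throughput implied by the article's test numbers.
tris, seconds = 10_000_000, 0.15

rate = tris / seconds                 # triangles decoded per second
print(f"{rate / 1e6:.1f} M tris/s")   # 66.7 M tris/s
```

Roughly 67 million triangles per second on one CPU core, so a massively parallel GPU decoder has plenty of headroom to hide this work entirely.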

Frequently Asked Questions

Q: Do I need an RDNA 5 GPU to use DGF?
A: No. While future GPUs will have direct hardware acceleration for DGF, the SDK is open-source and supports current GPUs via Vulkan and DirectX 12 through software decoding.

Q: Will DGFS make game download sizes smaller?
A: Yes. By reducing the storage footprint of geometry data by up to 22-30%, DGFS helps keep game installs from ballooning as visual fidelity increases.

Q: Is this the same as DLSS or FSR?
A: No. DLSS and FSR are upscaling technologies that handle pixels. DGF/DGFS is a geometry technology that handles the 3D shapes and triangles that make up the scene.

What do you think? Will the move toward “infinite geometry” finally kill the concept of the “polygon count,” or will memory bandwidth always be the ultimate ceiling? Let us know in the comments below, or subscribe to our newsletter for the latest deep dives into GPU architecture.

Want to dive deeper into the technical side? Check out our guides on Mesh Shading and the Evolution of Ray Tracing.

