The big flex came in the form of a “trillion triangle” jungle ruins scene, path traced at 1440p and hitting a respectable 30 frames per second with what looks suspiciously like a poor man’s version of Nvidia’s ray reconstruction.
That demo was powered by Intel’s Arc B580, which turns out to be more than just marketing fodder. The rendering trickery includes resampled importance sampling, quasi-Monte Carlo histograms, blue noise patterns and a spatiotemporal neural network denoiser. It all sounds very Cyberpunk 2077, just without the part where it melts your GPU.
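For the curious, resampled importance sampling is less exotic than it sounds: spray out a pile of cheap candidate light samples per pixel, then keep just one, weighted by how much it actually contributes. A rough C++ sketch of that reservoir trick is below; the light-sampling plumbing is a placeholder for illustration, not Intel’s code.

```cpp
// Illustrative sketch of resampled importance sampling (RIS) for direct
// lighting: draw M cheap candidate light samples, then keep one of them
// with probability proportional to its resampling weight. The candidate
// generation here is a stand-in, not Intel's actual implementation.
#include <random>

struct LightSample { float contribution; float sourcePdf; };

struct Reservoir {
    LightSample picked{};   // the sample that survives the stream
    float weightSum = 0.f;  // running sum of resampling weights
    int   count     = 0;    // candidates seen so far

    void update(const LightSample& s, float w, std::mt19937& rng) {
        weightSum += w;
        ++count;
        std::uniform_real_distribution<float> u(0.f, 1.f);
        if (u(rng) * weightSum < w)  // keep s with probability w / weightSum
            picked = s;
    }
};

// Resample one light sample from M candidates for a single pixel.
Reservoir resampleDirectLight(int M, std::mt19937& rng) {
    Reservoir r;
    for (int i = 0; i < M; ++i) {
        // Placeholder candidate; a real renderer would sample a light here.
        LightSample s{1.f, 1.f};
        float pHat = s.contribution;       // target function for this pixel
        float w    = pHat / s.sourcePdf;   // importance-resampling weight
        r.update(s, w, rng);
    }
    // The picked sample's unbiased contribution weight would be
    // weightSum / (count * pHat(picked)); omitted here for brevity.
    return r;
}
```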
According to Chipzilla, the target is high-end visuals on low-power iGPUs, using techniques usually reserved for top-tier AAA titles. The goal is to deliver ray-traced eye candy without sending your laptop into thermal lockdown.
One of the key weapons in this PR offensive is Open Image Denoise 2, an open-source library that’s gaining popularity across vendors. It now comes with optimisations for Intel, NVIDIA and AMD GPUs alike, showing that Intel’s finally grasped the idea that people like cross-platform things.
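Plugging Open Image Denoise 2 into a renderer is refreshingly undramatic. A minimal sketch of its C API looks roughly like this, with frame dimensions and the actual data upload left as placeholders:

```cpp
// Minimal Open Image Denoise 2 usage sketch: hand the library a noisy
// beauty image plus albedo and normal guide buffers and let the generic
// "RT" filter clean it up. Frame plumbing is a placeholder and error
// handling is kept to the bare minimum.
#include <OpenImageDenoise/oidn.h>
#include <cstdio>

int main() {
    const size_t width = 1920, height = 1080;
    const size_t bytes = width * height * 3 * sizeof(float);

    // Let OIDN pick whichever device it likes (CPU or a supported GPU).
    OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
    oidnCommitDevice(device);

    // Device-visible buffers; a real renderer would copy its 1-spp colour,
    // albedo and normal AOVs into these.
    OIDNBuffer color  = oidnNewBuffer(device, bytes);
    OIDNBuffer albedo = oidnNewBuffer(device, bytes);
    OIDNBuffer normal = oidnNewBuffer(device, bytes);
    OIDNBuffer output = oidnNewBuffer(device, bytes);

    // The "RT" filter is the general-purpose ray-tracing denoiser.
    OIDNFilter filter = oidnNewFilter(device, "RT");
    oidnSetFilterImage(filter, "color",  color,  OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
    oidnSetFilterImage(filter, "albedo", albedo, OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
    oidnSetFilterImage(filter, "normal", normal, OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
    oidnSetFilterImage(filter, "output", output, OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
    oidnSetFilterBool(filter, "hdr", true);
    oidnCommitFilter(filter);
    oidnExecuteFilter(filter);

    const char* message;
    if (oidnGetDeviceError(device, &message) != OIDN_ERROR_NONE)
        std::fprintf(stderr, "OIDN error: %s\n", message);

    oidnReleaseFilter(filter);
    oidnReleaseBuffer(color);  oidnReleaseBuffer(albedo);
    oidnReleaseBuffer(normal); oidnReleaseBuffer(output);
    oidnReleaseDevice(device);
    return 0;
}
```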
What sets Intel’s demo apart, besides the trillion triangles, is its focus on handling scene complexity: vegetation, dynamic shadows and exotic materials, all rendered with one sample per pixel. The denoiser then swoops in to clean up the noise using AI magic, similar to DLSS 3.5’s ray reconstruction or AMD’s upcoming Ray Regeneration gimmick.
All this wizardry comes with artefacts. Intel admits to the usual suspects: flickering, ghosting, moiré patterns, shadow errors, disocclusion messes and janky reflections. Its fix is to train the denoising model harder and throw more diverse samples at it until it behaves.
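Most of those artefacts trace back to the temporal half of a spatiotemporal denoiser reusing history that no longer matches the scene. As a generic illustration of the problem, not Intel’s pipeline, the classic reproject-and-reject loop looks something like this:

```cpp
// Illustrative sketch of why ghosting and disocclusion artefacts happen:
// a spatiotemporal denoiser blends each pixel with its reprojected history
// and must throw that history away when the surface it came from no longer
// matches. Thresholds and blend factor are arbitrary, for illustration only.
#include <cmath>

struct Pixel { float color[3]; float depth; float normal[3]; };

// Blend the current 1-spp shading with reprojected history, rejecting
// history on disocclusion (depth/normal mismatch) to avoid ghosting.
void temporalAccumulate(const Pixel& current, const Pixel& history,
                        bool historyValid, float outColor[3]) {
    float alpha = 0.1f;  // how much of the new frame to trust per pixel

    // Disocclusion test: if the reprojected surface differs too much,
    // the history belongs to something else and must be discarded.
    float depthDiff = std::fabs(current.depth - history.depth);
    float normalDot = current.normal[0] * history.normal[0]
                    + current.normal[1] * history.normal[1]
                    + current.normal[2] * history.normal[2];
    bool disoccluded = !historyValid || depthDiff > 0.01f * current.depth
                                     || normalDot < 0.9f;

    if (disoccluded) alpha = 1.0f;  // fall back to the noisy current sample

    for (int c = 0; c < 3; ++c)
        outColor[c] = alpha * current.color[c]
                    + (1.0f - alpha) * history.color[c];
}
```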
Under the hood, Intel is pushing hardware-accelerated neural texture compression, or TSNC, bundled with DirectX Cooperative Vectors. The combo offers up to a 47x speedup over older FMA-based implementations, and even entry-level hardware like the Arc 140V and B580 clocks in faster than the old BC6 compression standard.
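The rough idea behind neural texture compression is to store a small latent per texel region and reconstruct the actual texel with a tiny neural network at sample time. Cooperative vectors let the GPU’s matrix hardware chew through those layers; without them, the shader grinds through scalar fused multiply-adds, which is the slow path Intel is comparing against. A toy C++ sketch of that fallback, with layer sizes and the latent layout invented purely for illustration:

```cpp
// Toy sketch of the scalar FMA path that neural texture decompression
// falls back on without cooperative vectors: a tiny per-texel MLP that
// turns a latent feature vector into an RGBA texel. Layer sizes, weights
// and the latent layout are made up for illustration only.
#include <algorithm>
#include <array>
#include <cmath>

constexpr int kLatent = 16;   // features fetched per texel (made up)
constexpr int kHidden = 32;   // hidden layer width (made up)

struct TinyMlp {
    float w0[kHidden][kLatent]; float b0[kHidden];
    float w1[4][kHidden];       float b1[4];
};

// Decode one texel: every multiply-add here is a scalar FMA, which is
// exactly the work hardware matrix units (via cooperative vectors) replace.
std::array<float, 4> decodeTexel(const TinyMlp& net,
                                 const float latent[kLatent]) {
    float hidden[kHidden];
    for (int i = 0; i < kHidden; ++i) {
        float acc = net.b0[i];
        for (int j = 0; j < kLatent; ++j)
            acc = std::fma(net.w0[i][j], latent[j], acc);
        hidden[i] = std::max(acc, 0.0f);  // ReLU
    }
    std::array<float, 4> rgba{};
    for (int i = 0; i < 4; ++i) {
        float acc = net.b1[i];
        for (int j = 0; j < kHidden; ++j)
            acc = std::fma(net.w1[i][j], hidden[j], acc);
        rgba[i] = acc;
    }
    return rgba;
}
```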
It is a clear pivot from Chipzilla, which seems desperate to convince the world that it is not just clinging to legacy x86 chips and boardroom nostalgia. With its Xe2 architecture and open-source tools in tow, Intel might just be making iGPUs interesting again.