
AMD bangs open source drum to prise devs away from Cuda

27 November 2025


Pitches ROCm as the antidote to Nvidia’s walled garden

AMD reckons it can win over AI developers by offering an open software ecosystem rather than locking everything behind a proprietary moat.

The company has been puffing up its Instinct datacentre GPUs and Ryzen processors for years, but its AI software boss Anush Elangovan told Computer Weekly that silicon is only half the story. The real play is ROCm, AMD’s open-saucey platform that aims to squeeze every drop of performance out of its hardware without building a gilded cage around it.

Elangovan said: “We could try to build something that’s closed source, but we won’t get the velocity of an open ecosystem. Instead, we want to leverage everyone’s capabilities to move the industry forward. It’s like the Linux kernel, where everyone collaborates and moves really fast.”

He argues that ROCm’s openness gives Asia Pacific outfits a common baseline for building robust AI capabilities, rather than being trapped in someone else’s software fortress.

He said he has watched companies across the region scale up datacentres running AMD kit and sees ROCm as the leveller that lets them compete in model development and infrastructure.

AMD is pushing a “ROCm everywhere” scheme to give developers the same experience on a cheap laptop as on a monster supercomputer. Students and start-ups can muck about on consumer hardware and then scale smoothly when they have cash to burn. The approach plays to AMD’s chiplet architecture, which Elangovan claims has advantages in inference jobs where density and bandwidth matter more than showing off.

ROCm 7 arrived in September 2025 and added native support for the Instinct MI350 and MI325X accelerators, the chips designed to handle massive generative AI workloads. AMD promised developers improved efficiency for modern language models by adding full support for low-precision formats such as FP4 and FP8, delivering up to a 3.5x inference speed-up over earlier releases.

The update broadened support to Windows systems and consumer-grade Radeon cards, meaning bedroom coders can prototype on a gaming box before shoving their work into the cloud. AMD also claimed day-zero support for PyTorch and vLLM to stop developers from wondering when the libraries would catch up.

Elangovan said AMD’s kit is designed with high memory bandwidth to run colossal AI models on a single system. He said this often avoids the cost of pricey liquid-cooling retrofits.

He said, “You can go for a little less density so that you can do air-cooled infrastructure versus liquid-cooled infrastructure, and then still get the capabilities that are top of the line.”

Alongside LLMs, he said organisations are running more text-to-image and text-to-video workloads. He pointed to Luma Labs and said its Ray3 video generation model is “fully trained and serving on AMD platforms.”

Elangovan wants developers to stop viewing AMD as purely a hardware shop. He said: “AMD is increasingly building and shipping software as a software company. This means [developers] should see it as a platform they can trust and build on long after the current wave of GPUs has been dumped on the scrapheap.”
