AMD boosts AI with ROCm 6.0 and Radeon GPUs

Building on our previously announced support of the AMD Radeon RX 7900 XT, XTX and Radeon PRO W7900 GPUs with AMD ROCm 5.7 and PyTorch, we are now expanding our client-based ML development offering on both the hardware and software sides with AMD ROCm 6.0.

First, AI researchers and ML engineers can now also develop on Radeon PRO W7800 and Radeon RX 7900 GRE GPUs. By supporting such a broad product portfolio, AMD is helping the AI community access desktop graphics cards at even more price points and performance levels.

Furthermore, we are complementing our solution stack with support for ONNX Runtime. ONNX, short for Open Neural Network Exchange, is an intermediary Machine Learning format used to convert AI models between different ML frameworks. As a result, users can now run inference on models originating from a wider range of ML frameworks on local AMD hardware. This also adds INT8 via MIGraphX – AMD's own graph inference engine – to the available data types, alongside FP32 and FP16.
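To make the data-type story concrete, the sketch below illustrates symmetric INT8 quantization, the kind of float-to-integer mapping an INT8 inference path relies on. This is a framework-independent illustration only; the helper names are hypothetical and are not part of MIGraphX or ONNX Runtime.

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: map floats onto the INT8 range [-127, 127].

    (Illustrative helper, not an AMD API.)
    """
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from INT8 codes and the shared scale."""
    return [qi * scale for qi in q]

# Example: quantize a small set of hypothetical model weights.
weights = [0.5, -1.0, 0.25, 0.8]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# Each recovered value is within one quantization step (scale) of the original.
```

Storing weights and activations as 8-bit integers like this is what lets an inference engine trade a small, bounded precision loss for lower memory traffic and faster integer math.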

With AMD ROCm 6.0, we are continuing our support for the PyTorch framework, bringing mixed-precision FP32/FP16 training to Machine Learning workflows.
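The point of mixed precision is that FP16 halves storage and bandwidth at the cost of a 10-bit mantissa, so frameworks keep FP32 where accuracy matters. As a framework-independent illustration of that trade-off, Python's standard `struct` module can round a value to IEEE 754 half precision; `to_fp16` is a hypothetical helper written for this sketch, not part of any AMD or PyTorch API.

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision (FP16) value
    by packing and unpacking it with the 'e' (binary16) format code.
    """
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 keeps only 10 mantissa bits, so many values lose precision:
lossy = to_fp16(0.1)   # 0.0999755859375, not 0.1
# ...while values with short binary expansions survive exactly:
exact = to_fp16(0.5)   # 0.5
```

In PyTorch itself, this per-operation precision choice is typically driven automatically (e.g. via its autocast mechanism) rather than by hand, which is what makes FP32/FP16 mixed-precision training practical on Radeon GPUs.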

These are exciting times for anyone starting to work on AI. ROCm for AMD Radeon desktop GPUs is a great solution for AI engineers, ML researchers and enthusiasts alike, and is no longer exclusive to those with large budgets. AMD is determined to keep broadening hardware support and adding more capabilities to our Machine Learning development solution stack over time.