AMD and Hugging Face partner up to innovate on AI language models
AMD and Hugging Face have teamed up to innovate the AI language model space.
AMD was excited to announce a partnership with the AI language model firm Hugging Face. The announcement was made officially at the recent AMD Data Center & AI Technology Premiere. Here's how AMD and Hugging Face are partnering to innovate on AI language models.
If you think this is cool, you should check out AMD’s Genoa-X CPUs.
What is the AMD and Hugging Face partnership about?

AMD and Hugging Face are partnering to deliver state-of-the-art transformer performance on AMD CPUs and GPUs. The partnership also lets the Hugging Face community benefit from the latest AMD platforms for training and inference.
AMD and Hugging Face hardware collaboration
The partnership focuses on optimizing performance on supported hardware platforms. On the GPU side, the collaboration covers the Instinct MI2xx, MI3xx, and Radeon Navi3x families. Initial testing has shown that AMD's MI250 trains BERT-Large 1.2x faster and GPT2-Large 1.4x faster than competing GPUs.
AMD and Hugging Face will explore optimizing client Ryzen and server EPYC CPUs. CPUs are well-suited for transformer inference, especially when combined with quantization techniques. Moreover, the collaboration includes the high-performance Alveo V70 AI accelerator, known for its lower power requirements.
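To give a sense of the quantization techniques mentioned above, here is a minimal sketch of dynamic int8 quantization in PyTorch, the common approach for speeding up transformer inference on CPUs. The stand-in layers are illustrative, not part of any AMD/Hugging Face code; this assumes only that PyTorch is installed.

```python
# Sketch: dynamic quantization for CPU inference (illustrative layers, not
# an official AMD/Hugging Face recipe).
import torch
import torch.nn as nn

# Stand-in for the linear layers that dominate transformer compute.
layer = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))

# Dynamic quantization converts Linear weights to int8 once, up front;
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    layer, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.inference_mode():
    out = quantized(x)

print(out.shape)  # torch.Size([1, 768])
```

The appeal on CPUs is that int8 weights shrink memory traffic and map onto fast vector integer instructions, which is why quantization pairs well with inference on Ryzen and EPYC parts.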
The partnership aims to support various model architectures and frameworks, including transformer architectures for natural language processing, computer vision, and speech. It will support popular models like BERT, DistilBERT, RoBERTa, Vision Transformer, CLIP, and Wav2Vec2, as well as generative AI models such as GPT2, GPT-NeoX, T5, OPT, LLaMA, and Hugging Face's own BLOOM and StarCoder models.
Hugging Face will work closely with AMD to optimize key models and integrate the AMD ROCm SDK seamlessly into its open-source libraries. Starting with the transformers library, the collaboration will focus on ensuring the models work well out of the box on AMD platforms, with future plans to create an Optimum library dedicated to AMD platforms.
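A big part of why models can "work out of the box" is that ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda API as NVIDIA hardware, so existing device-selection code needs no changes. A minimal sketch (the small linear layer stands in for any transformers model; nothing here is AMD-specific code):

```python
# Sketch: the same device-selection code path serves ROCm and CUDA builds,
# because ROCm PyTorch reports AMD GPUs via torch.cuda. The nn.Linear below
# is a stand-in for a full Hugging Face model.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(16, 4).to(device)          # a transformers model moves the same way
tokens = torch.randn(2, 16, device=device)   # stand-in for tokenized input

with torch.inference_mode():
    logits = model(tokens)

print(logits.shape)  # torch.Size([2, 4])
```

Because the API surface is identical, libraries like transformers can treat an MI250 the way they treat any other accelerator, which is what makes the seamless-integration goal plausible.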
AMD and Hugging Face partner up to innovate on AI language models: Final Word
This partnership opens up new hardware platforms for Hugging Face users. AMD provides them with cost-effective performance benefits for training and inference tasks. It represents an exciting opportunity for Hugging Face to leverage AMD’s world-class hardware solutions and further enhance the capabilities of its platform.
That's how AMD and Hugging Face are partnering to innovate on AI language models. If you missed the AMD Data Center & AI Technology Premiere, you can watch it on YouTube.