AMD and Hugging Face partner up to innovate on AI language models
AMD and Hugging Face have teamed up to innovate in the AI language model space.
AMD was excited to announce that it has partnered with AI language firm Hugging Face. The announcement was officially made at the recent AMD Data Center & AI Technology Premiere. Here’s how AMD and Hugging Face are partnering to innovate on AI language models.
If you think this is cool, you should check out AMD’s Genoa-X CPUs.
What is the AMD and Hugging Face partnership about?
AMD and Hugging Face are partnering to deliver state-of-the-art transformer performance on AMD CPUs and GPUs. This also enables the Hugging Face community to benefit from the latest AMD platforms for both training and inference.
AMD and Hugging Face hardware collaboration
The partnership focuses on optimizing performance across supported hardware platforms. On the GPU side, the collaboration covers the Instinct MI2xx and MI3xx families as well as the Radeon Navi3x family. Initial testing has shown that AMD’s MI250 trains BERT-Large 1.2x faster and GPT2-Large 1.4x faster than its direct competitor.
AMD and Hugging Face will also explore optimizations for client Ryzen and server EPYC CPUs. CPUs are well-suited to transformer inference, especially when combined with quantization techniques. The collaboration additionally covers the Alveo V70 AI accelerator, which delivers high performance at lower power requirements.
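To give a rough idea of the quantization techniques mentioned above, here is a minimal sketch using PyTorch’s dynamic INT8 quantization on a toy feed-forward block. The model and its sizes are placeholders for illustration; this is not code from AMD or Hugging Face, just the general approach used to speed up transformer inference on CPUs.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer feed-forward block (placeholder sizes).
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))
model.eval()

# Dynamic quantization converts Linear weights to INT8 ahead of time;
# activations are quantized on the fly during inference, shrinking the
# model and typically speeding up CPU inference.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.no_grad():
    out = qmodel(x)
print(out.shape)
```

Dynamic quantization is attractive for CPU inference because it needs no calibration data and no retraining, which is one reason quantized transformers run well on server-class CPUs like EPYC.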
The partnership aims to support a wide range of model architectures and frameworks, including transformer architectures for natural language processing, computer vision, and speech. It will support popular models like BERT, DistilBERT, RoBERTa, Vision Transformer, CLIP, and Wav2Vec2, as well as generative AI models such as GPT2, GPT-NeoX, T5, OPT, LLaMA, and Hugging Face’s own BLOOM and StarCoder models.
Hugging Face will work closely with AMD to optimize key models and integrate the AMD ROCm SDK seamlessly into its open-source libraries. Starting with the transformers library, the collaboration will focus on ensuring models work well out of the box on AMD platforms, with future plans to create an Optimum library dedicated to AMD hardware.
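Part of what makes "out of the box" feasible is that PyTorch’s ROCm builds reuse the familiar `torch.cuda` namespace, so code written for NVIDIA GPUs generally runs unchanged on AMD Instinct hardware. A minimal sketch of device selection, assuming only standard PyTorch behavior (no AMD-specific API):

```python
import torch

def pick_device() -> torch.device:
    # On ROCm builds of PyTorch, torch.cuda.is_available() reports AMD GPUs,
    # and torch.version.hip is set instead of torch.version.cuda.
    if torch.cuda.is_available():
        backend = "ROCm/HIP" if torch.version.hip else "CUDA"
        print(f"Using GPU via {backend}")
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
# A Hugging Face model moved with .to(device) then runs on whichever
# backend is present -- the same code path for NVIDIA and AMD GPUs.
```

This is why existing transformers code can target AMD platforms without a separate code path: the framework abstracts the GPU backend underneath the same device API.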
AMD and Hugging Face partner up to innovate on AI language models: Final Word
This partnership opens up new hardware platforms for Hugging Face users. AMD provides them with cost-effective performance benefits for training and inference tasks. It represents an exciting opportunity for Hugging Face to leverage AMD’s world-class hardware solutions and further enhance the capabilities of its platform.
That’s how AMD and Hugging Face are partnering to innovate on AI language models. If you missed the AMD Data Center & AI Technology Premiere, you can watch it on YouTube.