AI Solutions

How to Get Started on the AMD Developer Cloud

June 30, 2025

Driving Virtualization Forward: The Power of AMD and HPE Morpheus VM Essentials

As organizations rethink infrastructure virtualization, many face steep licensing changes and limited flexibility. These shifts are prompting IT…

June 25, 2025

AMD in Driver and Occupant Monitoring Systems

Keywords: AMD Driver Monitoring System (DMS), Occupant Monitoring System (OMS), Euro NCAP, NCAP26, NCAP29, SAE Level 2, SAE Level 2+, SAE Level 3, autonomy, ADAS, Zyn…

June 23, 2025

Enabling Real-Time Context for LLMs: Model Context Protocol (MCP) on AMD GPUs — ROCm Blogs

Learn how to leverage Model Context Protocol (MCP) servers to provide real-time context to LLMs, illustrated with a chatbot example on AMD GPUs.

June 19, 2025

Silo AI Continued Pretraining Provides Blueprint for GLA

AMD Silo AI and TurkuNLP release Poro 2, performant open-weights Finnish models that enable organizations to adapt powerful LLMs to their native languages.

June 18, 2025

Fine-Tuning LLMs with GRPO on AMD MI300X: Scalable RLHF with Hugging Face TRL and ROCm — ROCm Blogs

Fine-tune LLMs with GRPO on AMD MI300X, leveraging ROCm, Hugging Face TRL, and vLLM for efficient reasoning and scalable RLHF.

June 17, 2025

Continued Pretraining: A Practical Playbook for Language-Specific LLM Adaptation — ROCm Blogs

A step-by-step guide to adapting LLMs to new languages via continued pretraining, with Poro 2 boosting Finnish performance using Llama 3.1 and AMD GPU…

June 17, 2025

Maximizing AI Performance: The Role of AMD EPYC 9575F CPUs in Latency-Constrained Inference Serving

What are the key benefits of running AI inference on a CPU? How can CPUs be used for AI, and are GPUs truly required? Join us as we explore CPU infere…

June 12, 2025

Related Recommendations