This video features several speakers from the Agentic AI Summit discussing various aspects of AI agents and their underlying technologies.

Open Source & Standards (0:00): Matt White from the PyTorch Foundation discusses "completeness" and "openness" in open-source AI, highlighting the importance of transparency and unrestricted use for building an "agentic future."

vLLM for LLM Inference (5:00): Michael Goen from Red Hat introduces vLLM, an open-source LLM inference and serving engine, explaining its importance for efficient scheduling, memory management (the KV cache), and optimizations such as PagedAttention, automatic prefix caching, quantization, and speculative decoding.

MCP Protocol (15:52): Jason Kim from Anthropic explains MCP (Model Context Protocol) as a foundational protocol for agents. It standardizes how AI applications and agents access external context (tools, data, memory), aiming to reduce duplicated integration work and speed up development.

Ray for Agent Infrastructure (24:48): Sumant Hedge from Anyscale presents Ray, a scalable open-source project for training and deploying AI agents. He highlights how Ray orchestrates complex distributed RL workflows by managing processes, scheduling, data communication, and autoscaling.

Training Agents with RL (35:00): Daniel Han Chen from Unsloth discusses using Reinforcement Learning (RL), specifically GRPO (Group Relative Policy Optimization), to train intelligent AI agents. He explains how agents learn to maximize rewards in an environment, contrasting this with traditional fine-tuning.

A2A Protocol (47:02): Chitra Vattadi from Google introduces the A2A (Agent-to-Agent) protocol, an open standard designed to enable seamless, secure communication and collaboration between different AI agents so they do not operate in silos (e.g., a flight-booking agent talking to a hotel-booking agent).

AMD GPUs for AI Agents (54:56): Mati Ghazi from AMD discusses using AMD GPUs, specifically the MI300 series, for AI development and agent workloads.
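The core idea behind vLLM's PagedAttention, as described in the talk, is to manage the KV cache in fixed-size blocks drawn from a shared pool rather than one contiguous buffer per sequence, which bounds memory waste and lets many requests share GPU memory. A minimal, illustrative sketch of that bookkeeping (toy data structures, not vLLM's actual API):

```python
# Toy sketch of PagedAttention-style KV-cache block management.
# All class and variable names here are illustrative; vLLM's real
# implementation operates on GPU tensors and is far more involved.

BLOCK_SIZE = 16  # tokens stored per KV-cache block


class BlockAllocator:
    """Hands out fixed-size KV-cache blocks from a shared pool."""

    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))

    def allocate(self) -> int:
        if not self.free_blocks:
            raise MemoryError("KV cache exhausted")
        return self.free_blocks.pop()

    def free(self, block_id: int) -> None:
        self.free_blocks.append(block_id)


class Sequence:
    """Tracks which pool blocks hold this sequence's KV cache (its block table)."""

    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []
        self.num_tokens = 0

    def append_token(self) -> None:
        # A new block is allocated only when the last one is full, so
        # waste is at most one partially filled block per sequence.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def release(self) -> None:
        # Finished sequences return their blocks to the shared pool.
        for b in self.block_table:
            self.allocator.free(b)
        self.block_table.clear()


allocator = BlockAllocator(num_blocks=64)
seq = Sequence(allocator)
for _ in range(40):          # generate 40 tokens
    seq.append_token()
print(len(seq.block_table))        # 40 tokens fit in ceil(40/16) = 3 blocks
seq.release()
print(len(allocator.free_blocks))  # all 64 blocks are back in the pool
```

Because blocks are uniform and pooled, the scheduler can admit new requests whenever free blocks exist, and automatic prefix caching falls out naturally: sequences sharing a prompt prefix can point their block tables at the same blocks.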
He highlights AMD's support for open-source AI frameworks like PyTorch and Hugging Face, and their collaboration with vLLM and SGLang.

The key takeaways from the summit are:

Openness is Crucial: The speakers consistently emphasize the importance of open-source principles, open standards, and open protocols (projects like vLLM and protocols like MCP and A2A) for fostering collaboration, transparency, and accelerating the development of robust AI agents (0:22, 1:29, 16:57, 49:13).

Agentic AI Requires Infrastructure: Building powerful AI agents necessitates advanced underlying infrastructure for efficient model inference, training, and deployment. Tools like vLLM for fast inference and Ray for scalable training and deployment are highlighted (5:00, 24:48).

Context and Collaboration are Key: AI models perform better with the right context. Protocols like MCP are designed to provide models with external tools, data, and memory (17:32). Furthermore, agents need to collaborate securely and seamlessly, which is where the A2A protocol comes in (49:03).

Reinforcement Learning for Agent Training: Reinforcement Learning (RL), specifically methods like GRPO, is a critical paradigm for training agents to maximize rewards and learn complex behaviors, moving beyond simple instruction following (35:10).

Hardware Drives Performance: Powerful hardware, such as AMD's MI300 GPUs, is essential for handling the demanding computational and memory requirements of large language models and complex agent workloads (59:34).
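The RL takeaway can be made concrete. GRPO scores each sampled completion against the mean reward of the other completions generated for the same prompt, so no separate value model is needed. A simplified sketch of that group-relative advantage computation (illustrative only, not Unsloth's implementation; the example reward values stand in for a real reward function or verifier):

```python
# Illustrative sketch of GRPO's group-relative advantage: each completion's
# reward is normalized against the group of completions sampled for the
# same prompt. Hypothetical helper, not a real library function.
from statistics import mean, pstdev


def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Advantage of each sample = (reward - group mean) / (group std + eps)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]


# Suppose 4 completions for one prompt were scored 1.0 (answer passes a
# check) or 0.0 (it fails):
rewards = [1.0, 0.0, 0.0, 1.0]
advantages = group_relative_advantages(rewards)
print([round(a, 2) for a in advantages])  # passing answers get positive advantage
```

During training, these advantages weight the policy-gradient update for each completion's tokens, pushing the model toward behaviors that earn higher reward, rather than merely imitating instruction-following data as in traditional fine-tuning.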