Hot Chips 2025

Hot Chips 2025 will be held Sunday, August 24 through Tuesday, August 26, 2025, at Memorial Auditorium, Stanford University, Palo Alto, CA.

Advance Program

Tutorials: Sunday, August 24, 2025

Breakfast/Registration
Tutorial 1: Datacenter Racks
Coffee Break (1/2 hr)
Tutorial 1: Datacenter Racks (cont.)
Lunch (1 hr 15 min)
Tutorial 2: Kernel Programming
Coffee Break (1/2 hr)
Tutorial 2: Kernel Programming (cont.)
Reception

Conference Day 1: Monday, August 25, 2025

Breakfast/Registration

Welcome
General Chair Welcome (Jan-Willem van de Waerdt, General Chair)
Program Co-Chairs Welcome (Ian Bratt & Nhon Quach, PC Co-Chairs)

CPU (Chair: Gabriel Southern)
Cuzco: A High-Performance RISC-V RVA23 Compatible CPU IP (Condor Computing)
PEZY-SC4s: The Fourth Generation MIMD Many-core Processor with High Energy Efficiency and Flexibility for HPC and AI Applications (PEZY Computing)
IBM’s Next Generation Power Microprocessor (IBM)
Introducing the Next Generation Intel® Xeon® Processor with Efficiency Cores (Intel)

Coffee Break

Security (Chair: Greg Papadopoulos)
Presto: A RISC-V-Compatible SoC for Unified Multi-Scheme FHE Acceleration over Module Lattice (Tsinghua University)
Azure Secure Hardware Architecture: Establishing a Robust Security Foundation for Cloud Workloads (Microsoft)

Lunch (1 hr 15 min)

Keynote #1 (Chair: Cliff Young)
Predictions for the Next Phase of AI (Noam Shazeer, Google)

Graphics (Chair: Lavanya Subramanian)
AMD RDNA 4 and Radeon RX 9000 Series GPU (AMD)
RTX 5090: Designed for the Age of Neural Rendering (NVIDIA)
Specialized SoC Enabling Low-Power ‘World Lock Rendering’ in Augmented and Mixed Reality Devices (Meta)

Coffee Break (1/2 hr)

Networking (Chair: Sherry Xu)
Intel Mount Morgan Infrastructure Processing Unit (IPU) (Intel)
AMD Pensando™ Pollara 400 AI NIC Architecture and Application (AMD)
NVIDIA ConnectX-8 SuperNIC: A Programmable RoCE Architecture for AI Data Centers (NVIDIA)
High Performance Ethernet for HPC/AI Networking (Broadcom)

Reception

Conference Day 2: Tuesday, August 26, 2025

Breakfast/Registration

Photonics (Chair: Borivoje Nikolic)
Celestial AI Photonic Fabric Module (PF Module) - The world’s first SoC with in-die Optical IO (Celestial AI)
A UCIe Optical I/O Retimer Chiplet for AI Scale-up Fabrics (Ayar Labs)
Passage M1000: 3D Photonic Interposer for AI (Lightmatter)
Co-Packaged Silicon Photonics Switches for Gigawatt AI Factories (NVIDIA)

Coffee Break

Power / Methodology (Chair: Jae W. Lee)
ECAM Enabled Advanced Thermal Management Solutions for the AI Data Center (Fabric8Labs)
Everactive Self-Powered SoC with Energy Harvesting, Wakeup Receiver, and Energy-Aware Subsystem (Everactive)
Taping Out Three Class Chips per Semester in Intel 16 Technology (UC Berkeley)

Lunch (1 hr 15 min)

Keynote #2 (Chair: Yasuo Ishii)
Up and Running with Rapidus: How Japan and Cutting-Edge Technologies are Transforming Semiconductor Manufacturing (Dr. Atsuyoshi Koike, Rapidus)

Machine Learning 1 (Chair: Ronny Krashinsky)
Memory: (Almost) the Only Thing That Matters (Marvell)
UB-Mesh: Huawei’s Next-Gen AI SuperComputer with a Unified-Bus Interconnect and nD-FullMesh Architecture (Huawei)
Corsair - An In-memory Computing Chiplet Architecture for Inference-time Compute Acceleration (d-Matrix)

Coffee Break (1/2 hr)

Machine Learning 2 (Chair: Pradeep Dubey)
NVIDIA’s GB10 SoC: AI Supercomputer On Your Desk (NVIDIA)
4th Gen AMD CDNA™ Generative AI Architecture Powering AMD Instinct™ MI350 Series Accelerators and Platforms (AMD)
Ironwood: Delivering best-in-class perf, perf/TCO, and perf/Watt for reasoning model training and serving (Google)

Closing Remarks (Larry Yang, Vice Chair)

Posters

HyperAccel Adelia: A 4nm LLM Processor for Efficient Generative AI Inference (HyperAccel)
Basilisk: A 34 mm² End-to-End Open-Source 64-bit Linux-Capable RISC-V SoC in 130nm BiCMOS (ETH Zurich)
Tunable Resist Photonics (TRP): Rethinking Silicon Photonics Light (Multinary)
Bit-Separable Transformer Accelerator Leveraging Output Activation Sparsity for Efficient DRAM Access (Kyungpook National University)
Multi-modal Few-step Diffusion Model Accelerator with Mixed-Precision and Reordered Group-Quantization for On-device Generative AI (KAIST)
An Energy-Efficient Spatial Computing SoC for Real-time Interactable-Rendering and Modeling with Surface-aware 3D Gaussian Splatting (KAIST)
BROCA: A Low-power and Low-latency Conversational Agent RISC-V System-on-Chip for Voice-interactive Mobile Devices (KAIST)
MEGA.mini: A NPU with Novel Heterogeneous AI Processing Architecture Balancing Efficiency, Performance, and Intelligence for the Era of Generative AI (Chung-Ang University)
High Density Si-IPD Technologies as Enabler for High-Performance and Low-Power Consumption Processor Chips (Murata)
Clo-HDnn: Continual On-Device Learning Accelerator with Hyperdimensional Computing via Progressive Search (UC San Diego)
A 4.69mW LLM Processor with Binary/Ternary Weights for Billion-Parameter Llama Model (Yonsei University)
KLIMA: Low-latency Mixed-signal In-Memory Computing Accelerator for Solving Arbitrary-order Boolean Satisfiability (UC Santa Barbara)