
Upcoming Talks


Multi-AI Agents for Chip Design with Distilled Knowledge Debugging Graph, Task Graph Solving, and Multi-Modal Capabilities - Nov 9, 2024

Speaker: Chia-Tung Ho

Biography of the speaker:

Chia-Tung Ho is a senior research scientist at NVIDIA Research. He received his Ph.D. in electrical and computer engineering from the University of California, San Diego, USA, in 2022. Chia-Tung has several years of experience in the EDA industry. Before moving to the US, he worked for IDM and EDA companies in Taiwan, developing in-house design-for-manufacturing (DFM) flows at Macronix, as well as fastSPICE solutions at Mentor Graphics and Synopsys. During his Ph.D., he collaborated with the Design Technology Co-Optimization (DTCO) team at Synopsys and served as an AI resident at X, the Moonshot Factory (formerly Google X). His recent work focuses on developing LLM agents for chip design and integrating advanced knowledge extraction, task graph solving, and reinforcement learning techniques for debugging and design optimization.

Abstract:

Hardware design presents numerous challenges due to its complexity and rapidly advancing technologies. The stringent requirements for performance, power, area, and cost (PPAC) in modern complex designs, which can contain billions of transistors, make hardware design increasingly demanding compared to earlier generations. These challenges result in longer turnaround times (TAT) for optimizing PPAC during RTL synthesis, simulation, verification, physical design, and reliability processes. In this talk, we introduce multi-AI agents built on top of AutoGen to improve efficiency and reduce TAT in the chip design process. The talk explores the integration of novel distilled knowledge debugging graphs, task graph solving, and multimodal capabilities within multi-AI agents to address tasks such as timing debugging, Verilog debugging, and Design Rule Check (DRC) code generation. Based on these studies, multi-AI agents demonstrate promising improvements in performance, productivity, and efficiency in chip design.
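To make the multi-agent framing concrete, here is a minimal sketch of a group chat built on the AutoGen (pyautogen 0.2) API. The agent roles, system messages, and example timing query below are illustrative placeholders, not the system presented in the talk.

```python
# Minimal AutoGen group-chat sketch with two illustrative chip-design agents
# and a user proxy. Roles and prompts are hypothetical, for demonstration only.
import autogen

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

timing_debugger = autogen.AssistantAgent(
    name="timing_debugger",
    system_message="You analyze timing reports and suggest likely root causes.",
    llm_config=llm_config,
)
verilog_reviewer = autogen.AssistantAgent(
    name="verilog_reviewer",
    system_message="You review Verilog snippets and propose concrete fixes.",
    llm_config=llm_config,
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

group_chat = autogen.GroupChat(
    agents=[user_proxy, timing_debugger, verilog_reviewer],
    messages=[],
    max_round=6,
)
manager = autogen.GroupChatManager(groupchat=group_chat, llm_config=llm_config)

# A hypothetical debugging request; the agents take turns until max_round.
user_proxy.initiate_chat(
    manager,
    message="Path clk->q on module alu violates setup by 0.12 ns; suggest fixes.",
)
```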

Sign Up: https://discord.gg/8WFKcVN2?event=1300487847681196082

Integrating Foundation Models and Symbolic Computing for Next-Generation Robot Planning - Nov 18, 2024

Speaker: Yongchao Chen

Biography of the speaker:

Yongchao Chen is a PhD student in Electrical Engineering at Harvard SEAS and MIT LIDS. He is currently working on robot planning with foundation models under the guidance of Prof. Chuchu Fan and Prof. Nicholas Roy at MIT, and is co-advised by Prof. Na Li at Harvard. He also does research in AI for physics and materials, and is particularly interested in applying robotics and foundation models to AI4Science. Yongchao interned at Microsoft Research in the summer of 2024 and has been working with the MIT-IBM Watson AI Lab since spring 2023.

Abstract:

State-of-the-art language models, like GPT-4o and O1, continue to face challenges in solving tasks with intricate constraints involving logic, geometry, iteration, and optimization. While it's common to query LLMs to generate a plan purely through text output, we stress the importance of integrating symbolic computing to enhance general planning capabilities. By combining LLMs with symbolic planners and solvers, or guiding LLMs to generate code for planning, we enable them to address complex decision-making tasks for both real and virtual robots. This approach extends to various applications, including task and motion planning for drones and manipulators, travel itinerary planning, website agent design, and more.
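As a rough illustration of the "guide the LLM to generate code for planning" idea, the sketch below asks a model for a small planning function, executes it, and checks the constraints before accepting the result. The model name, prompt, and toy itinerary task are assumptions for demonstration, not the pipeline from the talk.

```python
# Illustrative LLM-writes-planning-code loop: request a Python planner,
# run the generated code, and verify the constraints before trusting the plan.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = (
    "Write a Python function plan() that returns an ordered list of the "
    "cities ['A', 'B', 'C', 'D'] such that 'B' is visited before 'D' and "
    "'C' is visited last. Reply with a single ```python``` code block."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": TASK}],
)
content = response.choices[0].message.content
code = re.search(r"```python\n(.*?)```", content, re.S).group(1)

namespace = {}
exec(code, namespace)      # run the generated planner (sandbox this in practice)
plan = namespace["plan"]()

# Constraint check standing in for a symbolic verifier.
assert plan.index("B") < plan.index("D") and plan[-1] == "C", "constraints violated"
print("verified plan:", plan)
```

The same pattern generalizes: the LLM proposes executable plans, and a solver or checker enforces the logical, geometric, or optimization constraints the model alone tends to miss.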

Sign Up: https://discord.gg/Swn3DmBV?event=1303162642298306681

How to keep up with the latest talks?

Join our community Discord (https://discord.gg/sUkGceyd) to be the first to know about amazing upcoming talks!

Connect: shaokunzhang529@gmail.com