Hyung Won Chung (Korean: 정형원) is a South Korean artificial intelligence research scientist recognized for his contributions to the development and scaling of large language models (LLMs). He is currently a member of Meta Superintelligence Labs and has previously held research positions at OpenAI and Google Brain, where he contributed to prominent models and frameworks such as PaLM, Flan-T5, T5X, and OpenAI's o1. [1] [2]
Chung is originally from South Korea. He currently resides in Mountain View, California, a key hub for the technology industry. [1]
Hyung Won Chung earned his PhD from the Massachusetts Institute of Technology (MIT). His academic background provided the foundation for his subsequent research career in machine learning and artificial intelligence. [2]
Chung began his industry career as a research scientist at Google Brain, where his work centered on overcoming challenges related to the scaling of large AI models. He was a key contributor to T5X, a JAX-based framework designed to facilitate large-scale training of models, and was involved in training major models like the Pathways Language Model (PaLM). His research also significantly advanced the field of instruction fine-tuning, leading to the development of the Flan-PaLM and Flan-T5 model families, which improved the ability of LLMs to follow user instructions. [1]
In February 2023, Chung moved to OpenAI, where his research focused on enhancing the reasoning capabilities of AI systems and developing autonomous agents. He was a foundational contributor to several of the organization's major initiatives, including o1-preview (September 2024), the full o1 model (December 2024), and the Deep Research project (February 2025). During this time, he also led the training of Codex mini, a smaller, specialized version of the code-generation model. [1] [2]
In July 2025, Chung joined Meta's Superintelligence Labs as an AI Research Scientist. He made the move from OpenAI alongside his colleague Jason Wei, with whom he had a close working relationship at both Google and OpenAI. [4] [5]
Chung has co-authored numerous influential papers in machine learning and natural language processing, published in top-tier journals and presented at major conferences. His publications highlight his focus on model scaling, instruction tuning, and the practical application of large language models. [1]
Chung frequently shares his research and insights with the broader academic and technical communities through invited lectures and seminars at universities. His presentations cover topics such as the evolution of large language models, the principles of instruction fine-tuning, Reinforcement Learning from Human Feedback (RLHF), and high-level perspectives on paradigm shifts in AI research.
These lectures are often made publicly available and serve as educational resources for students and researchers in the field. [1] [3]