Chengxu Zhuang

Chengxu Zhuang is an artificial intelligence research scientist at Meta. He is known for his work in natural language processing, computer vision, and computational neuroscience, including contributions to OpenAI's ChatGPT and his current research at Meta. [1] [2]

Education

Zhuang attended Tsinghua University from 2011 to 2016, where he earned a Bachelor of Engineering in Electronic Engineering with a second major in Mathematics (Bachelor of Science). He then pursued doctoral studies at Stanford University from 2016 to 2022, obtaining a Ph.D. in Psychology advised by Daniel Yamins. Following his Ph.D., Zhuang was an ICoN Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) from 2022 to 2024, where he worked with researchers Ev Fedorenko and Jacob Andreas. [1] [5]

Career

Zhuang began his post-doctoral career at MIT's EvLab, focusing on the intersection of language and brain sciences. He later joined OpenAI, where he contributed to the development of the ChatGPT Advanced Voice Mode. In 2025, he transitioned to a role as an AI Research Scientist at Meta, joining a recently formed team assembled to advance research in artificial general intelligence. The team includes numerous researchers and engineers from prominent AI organizations such as OpenAI and Google DeepMind.

While at Stanford, Zhuang served as a teaching assistant for several courses, including Statistical Methods for Behavioral and Social Sciences, Experimental Methods, Large-Scale Neural Network Models for Neuroscience, and High-Dimensional Methods for Behavioral and Neural Data. [1] [3] [4] [5] [7]

Research and Publications

Zhuang's research interests include natural language processing, language acquisition, computer vision, computational neuroscience, and deep learning. His work often explores the connections between biological intelligence and artificial intelligence models.

He has co-authored numerous papers presented at major AI and neuroscience conferences. His publications cover topics such as unsupervised learning from video, the development of neural network models for the ventral visual stream, and the use of visual grounding to improve language modeling. [1] [6]

Selected Publications

  • "Visual Grounding Helps Learn Word Meanings in Low-Data Regimes" (2024): Co-authored with Evelina Fedorenko and Jacob Andreas, this paper was presented at the North American Chapter of the Association for Computational Linguistics (NAACL) conference, where it received a Best Paper Award.
  • "Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling" (2024): This work was published in the Findings of the Association for Computational Linguistics (ACL).
  • "How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning?" (2022): Presented at the NeurIPS Datasets and Benchmarks Track, this paper investigates the parallels between unsupervised learning in AI and human learning processes.
  • "Unsupervised neural network models of the ventral visual stream" (2021): Published in the Proceedings of the National Academy of Sciences (PNAS), this research explores the use of unsupervised models to simulate the brain's visual processing system.
  • "Local Aggregation for Unsupervised Learning of Visual Embeddings" (2019): This paper, presented at the International Conference on Computer Vision (ICCV), received a Best Paper Award Nomination.
  • "Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System" (2017): An early work presented at the NIPS conference that aimed to create neural network models inspired by the sensory systems of rodents.

A list of his major publications is available on his personal website. [1] [6]

References
