Chenxi Liu is a research scientist specializing in artificial intelligence, computer vision, and deep learning. He is a member of Meta's Superintelligence Labs and previously worked at Google DeepMind, where he was a core contributor to the Gemini family of multimodal models. [1] [2]
Liu completed his undergraduate studies at Tsinghua University, earning a bachelor's degree in automation. He continued his education in the United States, obtaining a master's degree in statistics from the University of California, Los Angeles (UCLA). He then pursued doctoral studies at Johns Hopkins University, where he was advised by Bloomberg Distinguished Professor Alan Yuille, and earned a PhD in computer science with research focused on computer vision and deep learning. [3] [4] [7]
After completing his PhD, Liu began his professional career as a senior researcher at Waymo, the autonomous driving technology company. He later joined Google DeepMind as a staff research scientist. At DeepMind, he was part of the team responsible for the post-training of the Gemini series of large language models and is listed as a core contributor on the technical reports for Gemini, Gemini 1.5, and Gemini 2.5.
In 2025, Liu joined Meta as a research scientist in the company's newly established Superintelligence Labs. His recruitment was part of a broader initiative by Meta to assemble a team of prominent researchers from organizations like OpenAI and DeepMind to focus on the development of artificial general intelligence (AGI). [6] [2] [1] [5] [7]
Liu's research interests include computer vision, deep learning, neural architecture search (NAS), and vision-language models. He has published numerous papers at major AI conferences such as the Conference on Computer Vision and Pattern Recognition (CVPR), the European Conference on Computer Vision (ECCV), and the International Conference on Learning Representations (ICLR).
His work at Google DeepMind involved significant contributions to the Gemini project, a family of highly capable multimodal models. Before his work on large language models, Liu's research focused on automatically discovering efficient neural network architectures. His work on "Progressive Neural Architecture Search" (PNAS) introduced a method for learning the structure of a convolutional network that was more efficient than previous approaches. Another notable work, "Auto-DeepLab," presented a hierarchical neural architecture search algorithm for semantic image segmentation. [1] [8]
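The general idea behind a progressive architecture search can be illustrated with a short sketch. The code below is a simplified illustration rather than the published PNAS algorithm: candidate architectures are grown one block at a time, ranked by a stand-in surrogate scorer, and only the top few are kept at each step. The operation names, beam width, and scoring function are assumptions made for this sketch.

```python
# Illustrative sketch of a progressive (beam-search style) neural architecture
# search loop. This is NOT the published PNAS algorithm; the operation names,
# the surrogate scorer, and all constants are simplified assumptions.
import itertools
import random

OPS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]  # assumed candidate operations
BEAM_WIDTH = 4   # number of partial architectures kept at each step
MAX_BLOCKS = 3   # grow architectures up to this many blocks

def surrogate_score(architecture):
    """Hypothetical surrogate predictor: stands in for a learned model that
    estimates validation accuracy without fully training the candidate."""
    random.seed(hash(architecture) % (2**32))  # same architecture -> same pseudo-score
    return random.random()

def progressive_search():
    # Start from all single-block architectures.
    beam = [(op,) for op in OPS]
    for _ in range(MAX_BLOCKS - 1):
        # Rank current candidates with the surrogate and keep the best few.
        beam = sorted(beam, key=surrogate_score, reverse=True)[:BEAM_WIDTH]
        # Expand each survivor by one additional block (progressive growth).
        beam = [arch + (op,) for arch, op in itertools.product(beam, OPS)]
    # Final ranking of the fully grown candidates.
    return max(beam, key=surrogate_score)

if __name__ == "__main__":
    best = progressive_search()
    print("Best architecture found (illustrative):", " -> ".join(best))
```

In practice, the surrogate would be a learned predictor trained on architectures that have already been evaluated; the placeholder scorer above only keeps the example self-contained.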
His major works, including "Progressive Neural Architecture Search" and "Auto-DeepLab," represent only a small portion of his extensive publication record. [1] [8]
In 2019, Liu was named a Google AI Fellow, an award created to recognize and support outstanding graduate students conducting exceptional research in computer science and related fields. He was one of 54 students selected for the fellowship, which provided two years of paid tuition and fees. At the time of the award, Liu stated his research goal was "to let the machines spontaneously and efficiently discover architectures that best facilitate various kinds of visual intelligence tasks." [4]