James Lee-Thorp is an artificial intelligence researcher specializing in Transformer theory and AI alignment, currently working as a Research Scientist on Meta's Superintelligence team. He is known for his work on efficient Transformer models, including the FNet architecture. [1] [2]
Lee-Thorp earned Bachelor of Science and Master of Science degrees in Mathematics from the University of Cape Town. He later moved to the United States, where he completed a Ph.D. in Mathematics at Columbia University between 2011 and 2016. [1] [3] [5]
After completing his doctorate, Lee-Thorp held a postdoctoral position at New York University from 2016 to 2017. His early career also included a role as a Software Engineer at Goldman Sachs. He then moved to Google, where he worked as a researcher and software engineer and was a key contributor to research on efficient Transformer architectures. In 2025, Lee-Thorp joined Meta as a Research Scientist on the company's newly formed "Superintelligence" team.
His work focuses on AI alignment, which aims to ensure that AI systems act in accordance with human intentions and values. This includes research into Reinforcement Learning from Human Feedback (RLHF) and the use of human cognitive signals, such as eye-tracking, to refine AI reward models. His expertise is considered a significant part of Meta's strategy to address the safety and controllability of advanced AI systems.
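To illustrate the reward-modeling step in RLHF mentioned above, the sketch below shows a generic Bradley-Terry-style pairwise preference loss, under which a reward model is trained to score a human-preferred response above a rejected one; signals such as eye-tracking could, in principle, supply additional preference evidence. This is a minimal sketch of the general technique, not a description of Lee-Thorp's specific methods; the function name and inputs are illustrative assumptions.

```python
import numpy as np

def pairwise_reward_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Bradley-Terry-style loss for fitting a reward model from human
    preference pairs: the chosen response should receive a higher reward
    score than the rejected one. Inputs are reward scores of shape (batch,)."""
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    margin = r_chosen - r_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))

# Illustrative usage with made-up reward scores for three preference pairs.
chosen = np.array([1.2, 0.4, 2.0])
rejected = np.array([0.3, 0.9, 1.1])
print(pairwise_reward_loss(chosen, rejected))  # lower loss = better ranking
```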
Lee-Thorp has co-authored several influential papers in natural language processing and machine learning. His research centers on making large-scale AI models more computationally efficient, scalable, and better understood.
In 2022, Lee-Thorp and his co-authors received the "Best efficient NLP paper" award at the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) for their paper, "FNet: Mixing Tokens with Fourier Transforms." [4] [1] [2] [3] [5] [6]
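The core idea of the FNet paper is to replace the Transformer's self-attention sublayer with parameter-free discrete Fourier transforms applied along the hidden and sequence dimensions, keeping only the real part of the result. The NumPy sketch below illustrates that token-mixing step; the tensor shapes and surrounding usage are illustrative assumptions rather than the paper's reference implementation.

```python
import numpy as np

def fourier_token_mixing(x: np.ndarray) -> np.ndarray:
    """FNet-style token mixing: apply a discrete Fourier transform along the
    hidden dimension, then along the sequence dimension, and keep only the
    real part. No learned parameters are involved.
    x: array of shape (seq_len, hidden_dim)."""
    return np.real(np.fft.fft(np.fft.fft(x, axis=-1), axis=-2))

# Illustrative usage: mix a toy sequence of 4 tokens with hidden size 8.
x = np.random.randn(4, 8)
mixed = fourier_token_mixing(x)
print(mixed.shape)  # (4, 8) -- same shape as the input, with tokens now mixed
```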