Alexander Ku

I'm a PhD student in Psychology and Neuroscience at Princeton, where I work with Tom Griffiths and Jon Cohen, and a Research Scientist at Google DeepMind. Before that, I received my BA and MS in Computer Science from UC Berkeley.

My research focuses on the computational principles by which adaptive systems, including artificial neural networks and human cognition, manage the tradeoff between flexibility and efficiency under complex, dynamic conditions. A key part of this tradeoff lies in how these systems internally represent information: flexible representations support rapid, context-sensitive behavior, while specialized representations underpin long-term efficiency and generalization. I investigate the costs of maintaining and processing flexible representations, the mechanisms that drive their consolidation into more efficient forms, and how these processes are optimized across different timescales.

Keywords: Deep Learning, Meta-Learning, In-Context Learning, Cognitive Control, Automaticity, Intertemporal Choice

Email / Google Scholar / GitHub / Twitter / CV

Selected Papers

Recent or representative papers:

Using the Tools of Cognitive Science to Understand Large Language Models at Different Levels of Analysis
Alexander Ku, Declan Campbell, Xuechunzi Bai, Jiayi Geng, Ryan Liu, Raja Marjieh, R. Thomas McCoy, Andrew Nam, Ilia Sucholutsky, Veniamin Veselovsky, Liyi Zhang, Jian-Qiao Zhu, Thomas L. Griffiths

Predictability Shapes Adaptation: An Evolutionary Perspective on Modes of Learning in Transformers
Alexander Y. Ku, Thomas L. Griffiths, Stephanie C.Y. Chan

On the Generalization of Language Models from In-Context Learning and Finetuning: A Controlled Study
Andrew K. Lampinen, Arslan Chaudhry, Stephanie C.Y. Chan, Cody Wild, Diane Wan, Alex Ku, Jörg Bornschein, Razvan Pascanu, Murray Shanahan, James L. McClelland