Alexander Ku

I'm a PhD candidate at Princeton and a research scientist at Google DeepMind, working with Tom Griffiths, Jon Cohen, and Mike Mozer.

My research uses insights from cognitive science to inform artificial intelligence, and vice versa. I focus on three questions: (1) how the mind combines familiar parts to solve unfamiliar problems (compositionality), (2) how those parts are represented and processed, and (3) how those representations adapt over time to make familiar problems easier to solve (automaticity). I also explore how methodologies from cognitive science, particularly rational analysis, can provide a framework for evaluating the capabilities and limitations of frontier models.

Research interests: continual learning, meta-learning, meta-reasoning

Email / Google Scholar / GitHub / Twitter / CV

Selected papers

Recent and representative papers:

Levels of analysis for large language models
Alexander Ku, Declan Campbell, Xuechunzi Bai, Jiayi Geng, Ryan Liu, Raja Marjieh, R. Thomas McCoy, Andrew Nam, Ilia Sucholutsky, Veniamin Veselovsky, Liyi Zhang, Jian-Qiao Zhu, Thomas L. Griffiths

Predictability shapes adaptation: An evolutionary perspective on modes of learning in transformers
Alexander Y. Ku, Thomas L. Griffiths, Stephanie C.Y. Chan

On the generalization of language models from in-context learning and finetuning: A controlled study
Andrew K. Lampinen, Arslan Chaudhry, Stephanie C.Y. Chan, Cody Wild, Diane Wan, Alex Ku, Jörg Bornschein, Razvan Pascanu, Murray Shanahan, James L. McClelland