Alexander Ku

I'm a PhD student at Princeton, working with Tom Griffiths and Jon Cohen, and a research scientist at Google DeepMind. My research focuses on the principles underlying the flexibility and efficiency of human cognition, and on whether these principles extend to large language models. Specifically, I study how the mind combines familiar concepts to solve unfamiliar problems (compositionality), how those concepts are represented and processed, and how those representations adapt over time to make solving familiar problems easier (automaticity).

Email / Google Scholar / GitHub / Twitter / CV

Selected papers

Recent or representative papers:

Levels of analysis for large language models
Alexander Ku, Declan Campbell, Xuechunzi Bai, Jiayi Geng, Ryan Liu, Raja Marjieh, R. Thomas McCoy, Andrew Nam, Ilia Sucholutsky, Veniamin Veselovsky, Liyi Zhang, Jian-Qiao Zhu, Thomas L. Griffiths

Predictability shapes adaptation: An evolutionary perspective on modes of learning in transformers
Alexander Y. Ku, Thomas L. Griffiths, Stephanie C.Y. Chan

On the generalization of language models from in-context learning and finetuning: A controlled study
Andrew K. Lampinen, Arslan Chaudhry, Stephanie C.Y. Chan, Cody Wild, Diane Wan, Alex Ku, Jörg Bornschein, Razvan Pascanu, Murray Shanahan, James L. McClelland