Harvard’s Kempner Institute Hosts “Frontiers in NeuroAI” Symposium

The Kempner Institute at Harvard University hosted the Frontiers in NeuroAI symposium on June 5–6, 2025, in the Science and Engineering Complex (SEC) in Allston, Massachusetts. More than a hundred in-person attendees, along with numerous virtual participants, convened to explore research at the intersection of neuroscience and artificial intelligence. Sessions included podium talks, a poster reception, interactive Q&As, and networking opportunities.


Day One Highlights

  • Surya Ganguli (Stanford University) opened the symposium with “Theories of Learning, Imagination and Reasoning: Of Mice and Machines,” comparing biological learning in animals to algorithmic strategies in deep networks.
  • SueYeon Chung (Flatiron Institute/NYU), in her talk “Computing with Neural Manifolds,” illustrated how statistical physics and geometry reveal low-dimensional structure in both neural population data and artificial networks.
  • Konrad Körding (University of Pennsylvania) addressed causal inference in “Causality: Why Most Claims to Causality Are Bogus and How to Sometimes Get It Nonetheless,” and Karel Svoboda (Allen Institute for Neural Dynamics) surveyed biologically plausible synaptic learning in “Illuminating Synaptic Learning.”
  • Next, Mackenzie Mathis (EPFL) presented “Using AI to Measure and Model Neural Dynamics & Behavior,” introducing CEBRA, a manifold-learning framework that aligns noisy neural recordings across animals into a shared low-dimensional space. She explained how inverted Jacobians yield neuron-level attribution scores that link individual neurons to latent components, aiding cell-type discovery. Mathis then described LLaVAction, a mixed-modal vision-language model for video captioning and action prediction, demonstrating its strong performance on benchmarks such as EPIC-Kitchens. She concluded with a vision of an integrated “brain-behavior operating system” in which foundation models for neural encoding, pose estimation, and vision-language interoperate dynamically, using prediction errors to trigger fine-tuning.
Mackenzie Mathis at the Frontiers in NeuroAI Symposium (June 5–6, 2025), presenting her CEBRA manifold-learning framework.
  • Yilun Du (Harvard University) followed with “Learning Compositional Models of the World,” showing how energy-based generative models enable zero-shot generalization for embodied agents. By training separate energy functions for object identity, physics constraints, and rewards, agents can compose these energies at test time for long-horizon planning without retraining. He demonstrated zero-shot navigation tasks up to 512 steps, outperforming offline reinforcement-learning baselines. Du also described a bi-level sampler for multimodal planning—combining text, video, and action energies—enabling a robot to execute novel instructions (e.g., stacking blocks in unseen configurations). He explained how per-clause energy verifiers solve symbolic problems like graph coloring and SAT zero-shot on instances larger than training examples, highlighting the power of decomposed reasoning.
Yilun Du at the Frontiers in NeuroAI Symposium (June 5–6, 2025), presenting “Learning Compositional Models of the World.”
  • João Sacramento (Google) spoke on “Sequence Prediction through Local Learning,” presenting a theoretical framework in which autoregressive models perform in-context learning via layers driven by local learning objectives. Cengiz Pehlevan (Harvard University) closed the afternoon with “Summary Statistics of Learning: Linking Changing Neural Representations to Behavior,” arguing that a handful of low-dimensional summary statistics can predict network performance and guide analysis of large-scale neural recordings.

Day Two Highlights

  • Luke Zettlemoyer (University of Washington & Meta) opened day two with “Mixed-modal Language Modeling: Chameleon, Transfusion, and Mixture of Transformers,” advocating early-fusion hybrid architectures that process interleaved text and images while addressing modality competition.
  • Ellie Pavlick (Brown University & Google DeepMind) followed with “What Came First, the Sum or the Parts? Emergent Compositionality in Neural Networks,” reporting empirical studies showing neural networks often display a middle-ground form of emergent associativity rather than strict symbolic compositionality.
  • Mark Andermann (Harvard Medical School) delivered “Offline Cortical Reactivations of Recent Experiences,” reporting calcium imaging experiments in mouse visual cortex that revealed stimulus-specific reactivations predicting representational drift within and across days, suggesting a role for offline replay in optimizing neural representations. Asma Ghandeharioun (Google DeepMind) then introduced “Model Interpretability: From Illusions to Opportunities,” unveiling Patchscopes, a framework using language models to interpret another model’s hidden representations, which reveals and corrects multistep reasoning errors in large language models.
  • Mason Kamb (Stanford University) presented “An Analytic Theory of Creativity in Convolutional Diffusion Models,” unveiling a patch-based theoretical framework that shows how locality and equivariance biases enable diffusion models to generate novel images by recombining training-set patches.
  • Mozes Jacobs (Harvard University) followed with “Can Your Neurons Hear the Shape of an Object?”, showing how traveling-wave dynamics in recurrent networks integrate global spatial context and rival non-local U-Nets on semantic segmentation tasks.
  • Veronica Chelu (McGill University), with her talk “Excitatory-Inhibitory Dynamics in Adaptive Decision-Making,” demonstrated that biologically inspired E/I balance in recurrent networks supports continual reinforcement learning by dynamically modulating the speed-accuracy trade-off.
  • Fernanda Viégas (MIT/Harvard) finished with “What Do AI Chatbots Think About Us? Implications for User Transparency and Control,” showcasing a dashboard that exposes—and allows editing of—implicit social biases in conversational AI.

Bridging Disciplines and Future Directions

Organized by Sham Kakade (Harvard), Elise Porter (Kempner Institute), Kanaka Rajan (Harvard Medical School), Bernardo Sabatini (Harvard Medical School/HHMI), and Martin Wattenberg (Harvard), the event aimed to break down silos between neuroscience and AI. Talks ranged from synaptic learning rules to compositional generative models, illustrating how insights in one field drive progress in the other. Attendees noted that the interdisciplinary format spurred conversations that might not have occurred at single-discipline conferences.


The Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University is an interdisciplinary research center launched in December 2021, funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg. Named in honor of Zuckerberg’s mother, Karen Kempner, the institute aims to discover the principles underlying both biological and machine intelligence.

The Conf is a platform that reports on scholarly conferences, symposia, roundtables, book talks, and other academic events. It is managed by a group of students from leading American and European universities and is published by Alma Mater Europaea University, Vienna.
