We are broadly interested in how nervous systems process information across scales. Our research aims to elucidate principles of neural computation by analyzing how the brain acquires, stores, and manipulates information to support adaptive behavior. To this end, we integrate techniques from statistical physics, dynamical systems, machine learning, and information theory, and work closely with experimental collaborators to build mechanistic, interpretable models. We emphasize principles of neural computation that are shared across systems and that can generalize to the design of more efficient artificial intelligence systems. Some of the core questions we are addressing include:
- How does the brain sense and represent the world around us?
- How does it store, update, and organize memories over time?
- How can we translate these biological insights into better learning algorithms?
Neural Computation in the Olfactory System
The olfactory system is a powerful window into brain function. Odors are high-dimensional signals, yet animals detect, recognize, and respond to them with speed and accuracy.
Key directions:
- Compressed sensing in the nose: We’ve shown that olfactory neurons use efficient coding strategies to represent vast numbers of odors with relatively few sensory receptors (PNAS 2019); see the sketch after this list.
- Early processing circuits: We model how local circuits in the insect and mammalian olfactory systems (like the antennal lobe and olfactory bulb) shape responses. These circuits adapt based on context—such as whether the animal is hungry (Frontiers Comp Neuro 2021, Sci. Adv. 2022).
- Associative learning and inference: In the fruit fly Drosophila, we are exploring how the brain links different odors and forms expectations, even in the absence of direct rewards or punishments.
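To make the compressed-sensing idea above concrete, here is a minimal, self-contained sketch. It is not the actual model from the PNAS 2019 paper; it simply shows how a sparse "odor" vector can be recovered from far fewer receptor measurements than there are possible odorants, using a generic L1-regularized solver (iterative soft-thresholding). All sizes and parameters are illustrative assumptions.

```python
# Compressed-sensing sketch: recover a sparse odor vector from few receptor
# responses. Illustrative only; not the model from the PNAS 2019 paper.
import numpy as np

rng = np.random.default_rng(0)
n_odorants, n_receptors, n_active = 200, 50, 5   # hypothetical sizes

# Sparse odor: only a few odorants are present at nonzero concentration
odor = np.zeros(n_odorants)
odor[rng.choice(n_odorants, n_active, replace=False)] = rng.uniform(1, 2, n_active)

# Random receptor affinities play the role of the sensing matrix
A = rng.normal(0, 1 / np.sqrt(n_receptors), (n_receptors, n_odorants))
y = A @ odor                                     # receptor responses (noiseless)

# ISTA: gradient step on the least-squares term, then soft-threshold (L1 prior)
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / largest eigenvalue of A^T A
x = np.zeros(n_odorants)
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

# Relative recovery error is typically small despite n_receptors << n_odorants
print("relative recovery error:", np.linalg.norm(x - odor) / np.linalg.norm(odor))
```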
We’re also building detailed circuit models of higher olfactory centers, like the mushroom body and piriform cortex, to understand how the brain forms and updates odor memories.
Memory Dynamics and Lifelong Learning
Animals excel at learning throughout life without overwriting old memories—a challenge for current AI systems.
Our work addresses:
- Representational drift: Brain activity patterns change over time, even for familiar tasks. We model this “drift” as a natural result of learning in noisy biological networks, and explore how it might help the brain remain flexible and robust (Nature Neuro 2023, Biol. Cybern. 2022); a toy simulation after this list illustrates the idea.
- Memory interactions: Memories are not static. We study how they link, compete, and update each other across timescales—how one memory may influence another, and how new learning reshapes what’s already known.
- Schemas and structured knowledge: We investigate how the brain organizes memories into generalizable frameworks (schemas), supporting fast learning and transfer of knowledge.
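As an illustration of the drift idea above, here is a toy simulation, not the model from the papers cited: a redundant two-layer linear network keeps solving the same fixed task while its weights receive ongoing noise and slow decay. The hidden representation decorrelates between two late time points, yet readout error stays low, because most of the drift lies in task-irrelevant directions. Network size, noise level, and decay rate are illustrative assumptions.

```python
# Toy representational-drift sketch: noisy learning in a redundant network
# keeps performance stable while the underlying representation wanders.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 10, 30
W_task = rng.normal(size=(1, n_in))            # fixed linear task: y = W_task @ x

W1 = 0.25 * rng.normal(size=(n_hidden, n_in))  # hidden "representation"
W2 = 0.10 * rng.normal(size=(1, n_hidden))     # readout
lr, noise, decay = 0.05, 0.005, 2e-4
snaps = {}

for t in range(1, 40001):
    E = W2 @ W1 - W_task                       # error in the input-output map
    # task-driven plasticity + independent synaptic noise + slow weight decay
    W2 += -lr * E @ W1.T - decay * W2 + noise * rng.normal(size=W2.shape)
    W1 += -lr * W2.T @ E - decay * W1 + noise * rng.normal(size=W1.shape)
    if t in (20000, 40000):
        snaps[t] = (W1.copy(), np.linalg.norm(E) / np.linalg.norm(W_task))

(W1_a, err_a), (W1_b, err_b) = snaps[20000], snaps[40000]
overlap = np.sum(W1_a * W1_b) / (np.linalg.norm(W1_a) * np.linalg.norm(W1_b))
print(f"hidden-weight overlap between the two time points: {overlap:.2f}")
print(f"relative task error at those times: {err_a:.3f}, {err_b:.3f}")
```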
Brain-Inspired Learning Algorithms
We aim to bridge neuroscience and machine learning by designing algorithms inspired by how the brain learns.
Areas of focus:
- Biologically plausible learning rules: We’ve developed new learning algorithms that mimic how neurons could actually learn in the brain—relying only on local, realistic signals (unlike backpropagation) (Neur. Comp. 2021); the first sketch after this list illustrates the general idea.
- Learning continuous representations: We’re exploring how neural networks can learn stable, continuous patterns of activity—key to representing time, space, and motion in the brain.
- Lifelong and continual learning in AI: Taking inspiration from memory drift and synaptic dynamics, we’re developing machine learning systems that can adapt over time without forgetting earlier knowledge; the second sketch after this list illustrates one common mechanism.
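Below is a minimal sketch of what "local learning signals" means, using standard feedback alignment (Lillicrap et al. 2016) rather than the rule from our Neur. Comp. 2021 paper: the hidden layer never uses the transpose of the output weights; instead, the output error is broadcast back through a fixed random matrix, so each weight update depends only on presynaptic activity, postsynaptic activity, and that broadcast error. The task, network size, and learning rate are illustrative.

```python
# Feedback-alignment sketch on XOR: errors reach the hidden layer through a
# fixed random matrix B instead of W2.T, keeping every update local.
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

n_hidden = 20
W1 = rng.normal(0, 0.5, (2, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, 1))
B = rng.normal(0, 0.5, (1, n_hidden))                # fixed random feedback
lr = 0.2

for epoch in range(20000):
    H = np.tanh(X @ W1)                              # hidden activity
    err = H @ W2 - Y                                 # output error (broadcast)
    # local updates: presynaptic activity times a postsynaptic error term
    W2 -= lr * H.T @ err / len(X)
    delta_h = (err @ B) * (1 - H ** 2)               # random feedback, not W2.T
    W1 -= lr * X.T @ delta_h / len(X)

# Predictions should end up close to the XOR targets [0, 1, 1, 0]
pred = np.tanh(X @ W1) @ W2
print("XOR targets:   ", Y.ravel())
print("FA predictions:", pred.ravel().round(2))
```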
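And here is a toy illustration of protecting old knowledge during new learning, using a generic synaptic-consolidation penalty in the spirit of elastic weight consolidation; it is not one of our lab's methods. Two regression tasks are learned in sequence, and a quadratic anchor on the parameters that mattered for the first task reduces forgetting when the second task is learned. All quantities are illustrative.

```python
# Continual-learning sketch: a consolidation penalty reduces forgetting of
# task A while task B is learned. Generic illustration, not our method.
import numpy as np

rng = np.random.default_rng(3)
d = 20
wA, wB = rng.normal(size=d), rng.normal(size=d)      # ground-truth weights

def make_task(w_true, n=200):
    X = rng.normal(size=(n, d))
    return X, X @ w_true

def train(w, X, y, anchor=None, importance=None, lam=0.0, lr=0.01, steps=2000):
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)            # gradient of the task loss
        if anchor is not None:
            grad += lam * importance * (w - anchor)  # pull back on "important" weights
        w = w - lr * grad
    return w

XA, yA = make_task(wA)
XB, yB = make_task(wB)

# Task A, then task B with no protection (naive sequential training)
w = train(np.zeros(d), XA, yA)
w_naive = train(w.copy(), XB, yB)

# Task A, then task B with a consolidation penalty; "importance" here is the
# diagonal input second moment for task A, a Fisher-information-like proxy
importance = np.mean(XA ** 2, axis=0)
w_cons = train(w.copy(), XB, yB, anchor=w, importance=importance, lam=5.0)

def err(w_hat, X, y):
    return np.mean((X @ w_hat - y) ** 2)

print("task-A error after learning B (naive):       ", round(err(w_naive, XA, yA), 3))
print("task-A error after learning B (consolidated):", round(err(w_cons, XA, yA), 3))
```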
To learn more, see our publications and Google Scholar profile.