Speaker: Mayank Mehta, Ph.D.

Time: April 18, 15:00-16:00

Venue: Room B101, Lvzhihe Building (吕志和楼)

Host: Prof. Si Wu


The brain is perhaps the most complex entity known: it is composed of an astronomically large number of dynamic components that interact via free parameters that cannot be measured in vivo. Most theories of emergent neural dynamics are based on the Ising model of interacting spins, e.g. various Hopfield attractor models. These models can explain neural dynamics qualitatively but not quantitatively, partly because they often make assumptions that violate crucial neurobiological facts. The challenge is to develop a fundamental, physics-style theory of network dynamics that respects those facts. Further, the theory should use only a handful of equations and free parameters, both to maintain predictability and generalization and to reveal fundamental principles. Yet it should explain a wide array of experimental data quantitatively, not just qualitatively. We have recently developed such a theory of the ground state of interacting networks, along with a technique to test the theory quantitatively in vivo. About a dozen in vivo observations quantitatively match this simple theory with just two free parameters. The theory-experiment combination reveals a novel type of memory, generated by interacting networks, that is both dynamic and energy efficient.
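For readers unfamiliar with the Hopfield attractor models mentioned above, the following is a minimal illustrative sketch, not the speaker's theory: a Hopfield network stores ±1 patterns in a Hebbian weight matrix and recalls them as fixed points of a sign-update dynamics. All parameter values (network size, number of patterns, noise level) are arbitrary choices for illustration.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: W_ij = (1/N) * sum over patterns of x_i * x_j
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=20):
    # Iterate deterministic sign updates; stored patterns are attractors
    state = state.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0  # break ties consistently
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # three random +/-1 memories
W = train_hopfield(patterns)

# Corrupt the first memory by flipping 10 of its 64 bits, then let the
# network settle back toward the stored attractor
noisy = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
noisy[flip] *= -1
recovered = recall(W, noisy)
print(recovered @ patterns[0] / 64)  # overlap with the stored pattern
```

The Hebbian weights make the energy function of an Ising spin system, so recall is descent toward a local energy minimum; the abstract's point is that such models capture this behavior only qualitatively.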


Mayank Mehta earned his Ph.D. in quantum field theory. Since then he has combined theory and experiment to understand how networks of neurons interact and learn. He is a professor at UCLA in the Departments of Physics, Neurology, and ECE. He directs the Keck Center for Neurophysics and the Center for Physics of Life, and spearheads the Neuro-AI initiative at UCLA. More information: Mayank Mehta / TED talk