Mapping the Structure of Thought

Understanding the structure and function of the nervous system is an exceptionally complex task: the system consists of vast numbers of cells, each connected to thousands of others, in microscopic networks that extend over large volumes and exhibit a seemingly endless variety of behaviors. We believe that mapping such networks at the level of synaptic connections, and understanding how their connectivity and geometry relate to function, will play a key role in unraveling the mystery of thought.

Our group’s goal is to create, based on such microscopic connectivity and functional data, new mathematical models explaining how neural tissue computes. Our modeling spans the connectomics gamut, from the behavior of individual neurons in small circuits to collections of neurons in increasingly complex networks. We collaborate with neurobiologists to design experiments based on our theoretical models, and work extensively to analyze the resulting data in order to confirm or refute our theoretical predictions.



  • June 3, 2016, MIT Commencement: Congratulations to new graduates,
    Gergely Odor and Hayk Saribekyan!
    (Pictured: Hayk Saribekyan and Professor Nir Shavit.)



High Throughput Connectomics
The current design trend in large-scale machine learning is to use distributed clusters of CPUs and GPUs with MapReduce-style programming. Some have been led to believe that this type of horizontal scaling can reduce or even eliminate the need for traditional algorithm development, careful parallelization, and performance engineering. This paper is a case study showing the contrary: that the benefits of algorithms, parallelization, and performance engineering can sometimes be so vast that it is possible to solve “cluster-scale” problems on a single commodity multicore machine.

Connectomics is an emerging area of neurobiology that uses cutting-edge machine learning and image processing to extract brain connectivity graphs from electron microscopy images. It has long been assumed that processing connectomics data will require mass storage and farms of CPUs/GPUs, and will take months (if not years) of processing time. We present a high-throughput connectomics-on-demand system that runs on a multicore machine with fewer than 100 cores and extracts connectomes at the terabyte-per-hour pace of modern electron microscopes.
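The throughput argument above rests on keeping every core of one machine busy on independent pieces of the image stack. As a purely illustrative sketch (not the paper's actual pipeline), the pattern can be shown with a worker pool over image tiles; `segment_tile` is a hypothetical stand-in for the real segmentation step, here reduced to toy thresholding:

```python
from multiprocessing import Pool

def segment_tile(tile):
    # Toy stand-in for segmentation: label pixels above a threshold.
    # The real system runs learned classifiers over each EM image tile.
    return [[1 if px > 128 else 0 for px in row] for row in tile]

def process_stack(tiles, workers=4):
    # Tiles are independent, so they can be segmented in parallel
    # across the cores of a single machine.
    with Pool(workers) as pool:
        return pool.map(segment_tile, tiles)

if __name__ == "__main__":
    # Eight tiny 2x2 "tiles" of pixel intensities.
    tiles = [[[0, 200], [130, 50]] for _ in range(8)]
    labeled = process_stack(tiles)
    print(labeled[0])  # [[0, 1], [1, 0]]
```

The point of the single-machine approach is that, once per-tile work is fast and cores are saturated, the aggregate rate can keep up with the microscope without cluster-scale infrastructure.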


Graduate Students

  • Lu Mi
  • Jonathan Rosenfeld
  • David Budden
  • Jonathan Stoller
  • Gergely Odor
  • Victor Jakubiuk
  • Quan Nguyen
  • Robert Radway




No resources yet! Check back soon.