Annual Report 2018

Building a Network That Learns Like We Do

Flatiron Institute

At each instant, our senses gather oodles of information, yet somehow our brains reduce that fire hose of input to simple realities: A car is honking. A bluebird is flying.

How does this happen?

One part of simplifying visual information is ‘dimensionality reduction.’ The brain, for instance, takes in an image made up of thousands of pixels and labels it ‘teapot.’ One such simplification strategy shows up repeatedly in the brain, and recent work from a team led by Dmitri Chklovskii, group leader for neuroscience at the Center for Computational Biology, suggests the strategy may be no accident. 
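
To make the idea concrete, here is a minimal sketch of dimensionality reduction in code. It uses classical principal component analysis on random stand-in data; this illustrates the general technique, not the brain’s algorithm, and the array sizes are ours.

    import numpy as np

    # 500 fake 'images' of 64 x 64 = 4,096 pixels each (random stand-in data).
    rng = np.random.default_rng(0)
    images = rng.random((500, 4096))

    # Classical dimensionality reduction (PCA via the singular value
    # decomposition): summarize each 4,096-pixel image with 3 coordinates.
    centered = images - images.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:3].T

    print(coords.shape)  # (500, 3): thousands of numbers per image reduced to three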

Consider color. In the brain, one neuron may fire when a person looks at a green teapot, whereas another fires at a blue teapot. Neuroscientists say that these cells have localized receptive fields, as each neuron responds strongly to one hue, collectively spanning the entire rainbow. Similar setups allow us to distinguish aural pitches. 
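
As a toy illustration (our construction, not recorded data), a population with localized receptive fields can be modeled with bell-shaped tuning curves: each model neuron fires strongly only near its own preferred hue, and together the preferred hues span the whole range.

    import numpy as np

    n_neurons = 8
    preferred_hues = np.linspace(0.0, 1.0, n_neurons)  # one preferred hue per neuron
    width = 0.08                                       # tuning width (assumed)

    def population_response(hue):
        # Gaussian tuning: a neuron fires strongly only near its preferred hue.
        return np.exp(-(hue - preferred_hues) ** 2 / (2 * width ** 2))

    print(np.round(population_response(0.30), 2))  # a greenish input: a few neurons fire
    print(np.round(population_response(0.75), 2))  # a bluish input: a different few fire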

Mimicking how the human brain processes information can improve machine-learning techniques. This diagram shows how a biologically inspired neural network can identify data clusters and manifolds. Each neuron outputs a rectified sum of its inputs, in turn influencing its neighboring neurons. Blue circles represent excitatory neurons, whereas red circles represent inhibitory neurons. The strength of each connection between neurons is adjusted according to biologically plausible local learning rules. Illustration adapted from A. Sengupta et al./Advances in Neural Information Processing Systems 2018

Conventional artificial neural networks accomplish similar tasks, such as classifying images, but these algorithms work completely differently from those in the brain. Many artificial networks, for instance, tweak the connections between neurons by using information from distant neurons. In a real brain, however, the strength of a connection depends almost entirely on the activity of nearby neurons.
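
These ingredients can be sketched in code. The toy network below is our construction, in the spirit of classic Hebbian/anti-Hebbian circuits rather than the authors’ exact algorithm: each output neuron computes a rectified sum of its inputs, neighboring neurons inhibit one another, and every synapse is updated using only the activities of the two neurons it connects.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out, eta = 20, 5, 0.02
    W = rng.normal(scale=0.1, size=(n_out, n_in))  # excitatory feedforward weights
    M = np.zeros((n_out, n_out))                   # inhibitory lateral weights

    def respond(x, n_steps=200, dt=0.1):
        # Leaky recurrent dynamics: each neuron's output is a rectified sum
        # of its feedforward drive minus inhibition from neighboring neurons.
        y = np.zeros(n_out)
        for _ in range(n_steps):
            y = np.maximum(0.0, y + dt * (W @ x - M @ y - y))
        return y

    for _ in range(2000):  # a stream of inputs, one at a time
        x = rng.random(n_in)
        y = respond(x)
        # Local plasticity: each update uses only the activities of the two
        # neurons that a synapse connects, never a distant error signal.
        W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)  # Hebbian, with decay
        M += eta * (np.outer(y, y) - M)                      # anti-Hebbian inhibition
        np.fill_diagonal(M, 0.0)                             # no self-inhibition

Every quantity in the two update lines is available at the synapse itself; that locality is exactly what conventional artificial networks give up.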

However, by extending a tradition of emulating biological learning, Chklovskii and his collaborators developed an approach that is not only biologically plausible but also powerful. “It basically explains how these systems, even though the agents are doing their own things with little information about others, can collectively organize as a system and learn something,” says Cengiz Pehlevan, a theoretical neuroscientist at Harvard University who was a Flatiron Institute research scientist until early 2019.

The typical neural network uses training data to tweak parameters until it churns out correct results. The new framework, presented at NeurIPS 2018, instead starts by expressing three biological truths about how a network ought to function in mathematical form: Neuronal activity should never be negative. (Real neurons can’t do anything less than not fire.) Similar inputs should produce similar outputs. (Put two teapots in, get two teapots out.) And different inputs should yield different outputs. (A teapot and a kettle should not come out as two teapots.)
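
One standard way to fold those three truths into a single formula, in the spirit of the group’s similarity-matching line of work, is to demand that pairwise similarities among the outputs match pairwise similarities among the inputs while the outputs stay nonnegative. A sketch, with our own variable names:

    import numpy as np

    def objective(X, Y):
        # X holds the inputs, one column per stimulus; Y holds the network's
        # outputs, with matching columns. Y must be elementwise nonnegative,
        # because real neurons cannot fire negatively.
        assert (Y >= 0).all()
        input_similarity = X.T @ X    # how alike each pair of inputs is
        output_similarity = Y.T @ Y   # how alike each pair of outputs is
        # Minimizing this mismatch makes similar inputs produce similar
        # outputs and different inputs produce different outputs.
        return np.sum((input_similarity - output_similarity) ** 2)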

When the team optimized this mathematical expression, known as the objective function, the resulting network repeatedly developed the same architecture seen in the human brain: It divided the input space into overlapping sections and assigned one neuron to handle each chunk. In one instance, it learned to recognize a rotating teapot using neurons that each fire at specific viewing angles.
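
A toy version of that experiment (our construction) places inputs on a circle, as if photographing an object from every angle, and minimizes the objective above by projected gradient descent, with the nonnegativity constraint enforced by clipping. In the team’s analysis, the optimal outputs are localized, each neuron active only over a contiguous arc of angles.

    import numpy as np

    rng = np.random.default_rng(1)
    angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    X = np.vstack([np.cos(angles), np.sin(angles)])  # 100 views around a circle
    Y = 0.1 * rng.random((10, 100))                  # 10 neurons, 100 views

    lr = 1e-3
    for _ in range(5000):
        grad = 4.0 * Y @ (Y.T @ Y - X.T @ X)  # gradient of the mismatch above
        Y = np.maximum(0.0, Y - lr * grad)    # gradient step, then clip to Y >= 0

    # Localized receptive fields show up as each neuron responding to only a
    # small, contiguous arc of the 100 views.
    print((Y > 0.05).sum(axis=1))  # number of views that drive each neuron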

In other words, the same localized receptive fields that help people parse what they see and hear emerged on their own from the team’s network. “We had a different expectation of what this algorithm would do,” says Anirvan Sengupta, a visiting scholar at the Flatiron Institute and a systems neuroscientist at Rutgers University in New Jersey. “It emerged despite us.”

The work is the group’s latest in a series deriving optimal networks for learning various tasks. The results hint that the way the brain simplifies inputs is efficient and perhaps borders on inevitable. 

Chklovskii’s team will continue to reverse engineer learning in biological networks. “The fact that you keep on getting localized receptive fields for many different objective functions representing the same spirit,” Sengupta says, “seems to tell me there is something bigger there.”