I’m not going to describe the article and the research myself, because it would take me too long to grasp them well enough to explain them.
I bet Google loves this kind of research, too.
Yonatan Zunger, David Amerland, Gideon Rosenblatt, Hans Youngmann
Originally shared by Greg Batmarx
…She and colleagues found that cognitive function and abstract thought exist as an agglomeration of many cortical sources, ranging from areas close to the sensory cortices to areas much deeper in the brain connectome, or connection wiring diagram.
Siegelmann is director of the Biologically Inspired Neural and Dynamical Systems Laboratory at UMass Amherst and one of 16 recipients in 2015 of the National Science Foundation’s (NSF) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) program, initiated by President Obama to advance understanding of the brain.
The authors say their work not only demonstrates the basic operational paradigm of cognition, but shows that all cognitive behaviors exist on a hierarchy, starting with the most tangible behaviors such as finger tapping or pain, moving through consciousness, and extending to the most abstract thoughts and activities such as naming. This hierarchy of abstraction is related to the connectome structure of the whole human brain, they add.
For this study, the researchers took a data-science approach. They first defined a physiological directed network of the whole brain, starting at input areas and labeling each brain area with its distance, or “depth,” from the sensory inputs. They then processed a massive repository of fMRI data. “The idea was to project the active regions for a cognitive behavior onto the network depth and describe that cognitive behavior in terms of its depth distribution,” says Siegelmann. “We momentarily thought our research failed when we saw that each cognitive behavior showed activity through many network depths. Then we realized that cognition is far richer; it wasn’t the simple hierarchy that everyone was looking for. So, we developed our geometrical ‘slope’ algorithm.”
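To make the depth-labeling step concrete, here is a minimal Python sketch, assuming the connectome is available as a directed adjacency list and the sensory input areas are known; the region names and connections below are invented placeholders, not data from the study.

```python
# Hypothetical sketch of the "depth" labeling step: breadth-first search
# from sensory input areas over a directed connectome graph.
# Region names and edges are made-up placeholders for illustration only.
from collections import deque

connectome = {                      # directed adjacency list (placeholder)
    "V1": ["V2"], "A1": ["A2"],
    "V2": ["parietal"], "A2": ["parietal"],
    "parietal": ["prefrontal"], "prefrontal": [],
}
sensory_inputs = ["V1", "A1"]       # depth 0: areas receiving sensory input

depth = {area: 0 for area in sensory_inputs}
queue = deque(sensory_inputs)
while queue:
    area = queue.popleft()
    for target in connectome[area]:
        if target not in depth:     # shortest distance from any input area
            depth[target] = depth[area] + 1
            queue.append(target)

print(depth)  # e.g. {'V1': 0, 'A1': 0, 'V2': 1, 'A2': 1, 'parietal': 2, 'prefrontal': 3}
```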
To illustrate, she suggests imagining a balance in which the right pan holds brain activity at the shallowest depths, closest to the sensory inputs, and the left pan holds activity in the deepest brain areas, most removed from those inputs. If the balance arm describes the total brain activity for a particular cognitive behavior, the right pan will be lower, creating a negative slope, when most activity is in shallow areas, and the left pan will go lower, creating a positive slope, when most activity is deep. The balance arm’s slope thus describes the relative shallow-to-deep brain activity for any behavior.
“Our geometric algorithm works on this principle, but instead of two pans, it has many,” she says. The researchers summed all neural activity for a given behavior over all related fMRI experiments, then analyzed it using the slope algorithm. “With a slope identifier, behaviors could now be ordered by their relative depth activity with no human intervention or bias,” she adds. They ranked the slopes for all cognitive behaviors in the fMRI databases from negative to positive and found that the behaviors ordered from the most tangible to the most abstract. An independent test with an additional 500 study participants supported the result.
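A minimal sketch of how such a slope could be computed, assuming each behavior’s summed fMRI activity has already been binned by network depth; this is a simplified illustration, and the study’s actual binning, weighting, and normalization may differ.

```python
# Hypothetical slope computation: fit a line to summed activity as a
# function of network depth. A negative slope means mostly shallow
# activity; a positive slope means mostly deep activity.
# The activity numbers below are invented examples.
import numpy as np

def depth_slope(activity_by_depth):
    depths = np.arange(len(activity_by_depth), dtype=float)
    activity = np.asarray(activity_by_depth, dtype=float)
    activity = activity / activity.sum()          # normalize total activity
    slope, _intercept = np.polyfit(depths, activity, 1)
    return slope

behaviors = {
    "finger tapping": [0.9, 0.6, 0.3, 0.1],       # activity concentrated near inputs
    "naming":         [0.1, 0.3, 0.6, 0.9],       # activity concentrated deep in the network
}
ranked = sorted(behaviors, key=lambda b: depth_slope(behaviors[b]))
print(ranked)  # tangible behaviors (negative slope) first, abstract ones last
```

Sorting behaviors by this slope reproduces, in spirit, the tangible-to-abstract ordering described above.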
Siegelmann says this work will have great impact in computer science, especially in deep learning. “Deep learning is a computational system employing a multi-layered neural net, and is at the forefront of artificial intelligence (AI) learning algorithms,” she explains. “It bears similarity to the human brain in that higher layers are agglomerations of previous layers, and so provide more information in a single neuron.
“But the brain’s processing dynamic is far richer and less constrained because it has recurrent interconnection, sometimes called feedback loops. In current human-made deep learning networks that lack recurrent interconnections, a particular input cannot be related to other recent inputs, so they can’t be used for time-series prediction, control operations, or memory.”
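To make the feedforward-versus-recurrent contrast concrete, here is a toy NumPy sketch; it is an illustration under simplified assumptions, not the architecture her lab is building. A feedforward layer maps each input independently, while a simple recurrent cell carries a hidden state so its output also depends on earlier inputs in a sequence.

```python
# Toy contrast between a feedforward layer and a recurrent cell (NumPy).
# Weights and sizes are arbitrary; this is only an illustration, not the
# "massively recurrent" network described in the article.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.standard_normal((4, 3))   # input -> hidden weights
W_rec = rng.standard_normal((4, 4))  # hidden -> hidden (recurrent) weights

def feedforward(x):
    # Output depends only on the current input x.
    return np.tanh(W_in @ x)

def recurrent_step(x, h):
    # Output also depends on the previous hidden state h,
    # so earlier inputs in the sequence influence the result.
    return np.tanh(W_in @ x + W_rec @ h)

sequence = rng.standard_normal((5, 3))   # five time steps of 3-dimensional input
h = np.zeros(4)
for x in sequence:
    h = recurrent_step(x, h)             # hidden state carries memory across steps

print(feedforward(sequence[-1]))         # depends on the last input only
print(h)                                 # depends on the whole sequence
```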
Her lab is now creating a “massively recurrent deep learning network,” she says, for a more brain-like and superior learning AI. Another interesting outcome of this research will be a new geometric data-science tool, which is likely to find widespread use in other fields where massive data is difficult to view coherently due to data overlap…
http://www.umass.edu/newsoffice/article/how-brain-architecture-leads-abstract