Is Geometry a Language That Only Humans Know?

Probing further, the researchers tried to replicate the performance of humans and baboons with artificial intelligence, using neural-network models that are inspired by basic mathematical ideas of what a neuron does and how neurons are connected. These models — statistical systems powered by high-dimensional vectors, matrices multiplying layers upon layers of numbers — successfully matched the baboons’ performance but not the humans’; they failed to reproduce the regularity effect. However, when the researchers built a souped-up model with symbolic elements — the model was given a list of properties of geometric regularity, such as right angles and parallel lines — it closely replicated the human performance.
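
To make the contrast concrete, here is a minimal sketch of the sort of symbolic input the second model was reportedly given: discrete regularity properties (right angles, parallel sides, equal sides) computed directly from a shape's vertex coordinates. The function name, feature set, and tolerances below are illustrative assumptions, not the researchers' actual code; a plain neural network would instead consume raw pixels or coordinates and have to discover such regularities on its own.

```python
# Illustrative sketch only, not the researchers' model: count the discrete
# geometric regularities (right angles, parallel sides, equal sides) of a
# polygon given as a list of (x, y) vertices.
import math

def regularity_features(vertices, tol=1e-6):
    """Count geometric regularities in a polygon from its vertex coordinates."""
    n = len(vertices)
    # Edge vectors, walking once around the polygon.
    edges = [(vertices[(i + 1) % n][0] - vertices[i][0],
              vertices[(i + 1) % n][1] - vertices[i][1]) for i in range(n)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]

    # Right angles: consecutive edges whose dot product is approximately zero.
    right_angles = sum(
        1 for i in range(n)
        if abs(edges[i][0] * edges[(i + 1) % n][0] +
               edges[i][1] * edges[(i + 1) % n][1]) < tol
    )
    # Parallel side pairs: cross product of edge directions approximately zero.
    parallel_pairs = sum(
        1 for i in range(n) for j in range(i + 1, n)
        if abs(edges[i][0] * edges[j][1] - edges[i][1] * edges[j][0]) < tol
    )
    # Equal-length side pairs.
    equal_pairs = sum(
        1 for i in range(n) for j in range(i + 1, n)
        if abs(lengths[i] - lengths[j]) < tol
    )
    return {"right_angles": right_angles,
            "parallel_pairs": parallel_pairs,
            "equal_pairs": equal_pairs}

# A square scores high on every regularity; a generic quadrilateral scores low.
print(regularity_features([(0, 0), (1, 0), (1, 1), (0, 1)]))
# -> {'right_angles': 4, 'parallel_pairs': 2, 'equal_pairs': 6}
print(regularity_features([(0, 0), (1.3, 0.2), (0.9, 1.1), (-0.2, 0.8)]))
# -> {'right_angles': 0, 'parallel_pairs': 0, 'equal_pairs': 0}
```

Feeding a classifier features like these, rather than raw images, is one plausible way a model's difficulty could track a shape's regularity, which is the gradient the purely statistical networks failed to show.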

These results, in turn, set a challenge for artificial intelligence. “I love the progress in A.I.,” Dr. Dehaene said. “It’s very impressive. But I believe that there is a deep aspect missing, which is symbol processing” — that is, the ability to manipulate symbols and abstract concepts, as the human brain does. This is the subject of his latest book, “How We Learn: Why Brains Learn Better Than Any Machine … for Now.”

Yoshua Bengio, a computer scientist at the University of Montreal, agreed that current A.I. lacks something related to symbols or abstract reasoning. Dr. Dehaene’s work, he said, presents “evidence that human brains are using abilities that we don’t yet find in state-of-the-art machine learning.”

That’s especially so, he said, in how we combine symbols, composing and recomposing pieces of knowledge in ways that help us generalize. This gap could explain the limitations of current A.I. — a self-driving car, for instance, proves inflexible when faced with environments or scenarios that differ from its training repertoire. And it’s an indication, Dr. Bengio said, of where A.I. research needs to go.

Dr. Bengio noted that from the 1950s to the 1980s, symbolic-processing strategies dominated the field — the era of “good old-fashioned A.I.” But these approaches were motivated less by the desire to replicate the abilities of human brains than by logic-based reasoning (for example, verifying a theorem’s proof). Then came statistical A.I. and the neural-network revolution, beginning in the 1990s and gaining traction in the 2010s. Dr. Bengio was a pioneer of this deep-learning method, which was directly inspired by the human brain’s network of neurons.

His latest research proposes expanding the capabilities of neural networks by training them to generate, or imagine, symbols and other representations.

It’s not impossible to do abstract reasoning with neural networks, he said; “it’s just that we don’t know yet how to do it.” Dr. Bengio has a major project lined up with Dr. Dehaene (and other neuroscientists) to investigate how human conscious processing powers might inspire and bolster next-generation A.I. “We don’t know what’s going to work and what’s going to be, at the end of the day, our understanding of how brains do it,” Dr. Bengio said.
