Harvard University has been granted $28 million to try to figure out why human brains are better at processing and absorbing information than artificial intelligence (A.I.). The award, provided by the Intelligence Advanced Research Projects Activity (IARPA), could make A.I. systems more like the living tissue of biological brains than the cold circuitry of silicon.
A typical computer already rivals the storage capacity of a human brain; however, the brain's ability to learn and connect patterns remains unmatched by any machine. Researchers need a better understanding of how neurons are wired together if they want to develop more complex A.I.
The general consensus among neuroscientists is that the human brain's storage capacity ranges between 10 and 100 terabytes; other estimates push that number closer to 2.5 petabytes. In terms of functionality, the human brain performs a myriad of tasks, including recollection, synthesizing information, generating abstractions, and testing the structure and consistency of the world.
Getting at the meat of information processing
Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS), Center for Brain Science (CBS) and Department of Molecular and Cellular Biology will join forces to document activity in the visual cortex. The aim is for neural imaging to help researchers understand how brain cells work together, with the end goal of creating a more accurate and complex artificial intelligence system. The research could pave the way for the first generation of computer systems capable of learning information as quickly and as efficiently as human beings.(1)
These computers may be able to read MRIs, drive cars, recognize network invasions and perform other tasks typically reserved for people. The research will create more than a petabyte of data, which will be reviewed by a set of algorithms that produce a detailed 3D neural map.(1)
“The pattern-recognition and learning abilities of machines still pale in comparison to even the simplest mammalian brains,” said Hanspeter Pfister, professor of computer science at Harvard. “The project is not only pushing the boundaries of brain science, it is also pushing the boundaries of what is possible in computer science. We will reconstruct neural circuits at an unprecedented level from petabytes of structural and functional data. It requires us to make new advances in data management, high-performance computing, computer vision and network analysis,” he added.(1)
Neurological algorithms to supersede conventional computer algorithms
After the neural map is created, researchers will attempt to understand how interactions between neural connections enable the brain to process information. The computational rules derived from these interactions, grounded in the actual workings of the brain, could be replicated and implemented to improve computer perception, navigation and recognition.
“This is a moonshot challenge,” noted team leader David Cox, assistant professor of molecular and cellular biology and computer science. “The scientific value of recording the activity of so many neurons and mapping their connections alone is enormous, but that is only the first half of the project.(1)
“As we figure out the fundamental principles governing how the brain learns, it’s not hard to imagine that we’ll eventually be able to design computer systems that can match, or even outperform, humans.”(1)
“We have a huge task ahead of us in this project, but at the end of the day, this research will help us understand what is special about our brains,” he added. “One of the most exciting things about this project is that we are working on one of the great remaining achievements for human knowledge — understanding how the brain works at a fundamental level.”(1)
Source used:
(1) Wired.co.uk