Artificial intelligence has achieved impressive things so far, but it needs huge volumes of data to achieve them. In contrast, the human brain can often learn from a small number of examples. New research shows that borrowing the architectural principles of the brain can help artificial intelligence approach our visual abilities.
Deep learning rests on the premise that the more data an algorithm is trained on, the better it learns. And in the age of Big Data, gathering that data is easier than ever, especially for the big data-driven tech companies that do cutting-edge research on artificial intelligence.
The biggest deep learning models that exist today, like OpenAI's GPT-3 and Google's BERT, are trained on billions of data points, and even more modest models require large amounts of data. Collecting these datasets and investing the computing resources to process them is a significant barrier, especially for academic laboratories with fewer resources.
As a result, artificial intelligence is much less flexible than natural intelligence. While a person only needs to see a few examples of an animal, a tool, or an object to be able to distinguish it, most AIs need to be trained on many examples of an object to be able to recognize it.
There is, however, a subfield of AI that aims at what is known as "one-shot" or "few-shot" learning, where algorithms are designed to learn from very few examples. But these approaches are still largely experimental and cannot yet come close to the human brain.
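One way to picture one-shot learning: classification reduces to comparing a new image's feature vector against a single stored example per class. A minimal sketch in Python (the labels and vectors below are invented for illustration and are not from the research described):

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_shot_classify(query, support):
    """Assign `query` the label of the closest support example.

    `support` maps each class label to ONE feature vector: the single
    example the model is allowed to see ("one-shot").
    """
    return min(support, key=lambda label: euclidean(query, support[label]))

# Toy 3-D feature vectors (hypothetical).
support = {"cat": [0.9, 0.1, 0.2], "chair": [0.1, 0.8, 0.7]}
print(one_shot_classify([0.85, 0.2, 0.15], support))  # -> cat
```

The hard part, and the focus of the research below, is producing feature vectors good enough that such a simple comparison works.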
Two scientists wanted to see if they could design an artificial intelligence that could learn from a small amount of data by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explain that the approach significantly enhances AI's ability to learn new visual concepts from a few examples.
"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," said Maximilian Riesenhuber of Georgetown University Medical Center. "We can make computers learn much better from a few examples by leveraging prior learning in a way that we think mirrors what the brain does."
Several decades of research in neuroscience suggest that the brain's ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on a few facts. In terms of visual comprehension, this may be based on similarities in shape, structure, or color, but the brain can also utilize abstract visual concepts believed to be encoded in an area of the brain called the anterior temporal lobe (ATL).
The researchers set out to recreate this ability, helping artificial intelligence learn new concepts quickly by drawing on previously learned categories of images.
Deep learning algorithms work by stacking layers of artificial neurons that learn increasingly complex features of an image or other type of data, which are then used to categorize new data. For example, the initial layers will look for simple features such as edges, while later layers may look for more complex ones, such as noses, faces, or even higher-level features.
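The layered pipeline described above can be sketched in miniature: each stage consumes the output of the layer below it and produces a more abstract summary. The transforms below are toy stand-ins for illustration, not real convolutional filters:

```python
# Each "layer" is a function that transforms the features produced by the
# layer below it.  The names and operations are purely illustrative.
def edges(pixels):
    # Early layer: local contrast between neighboring pixels -> edge map.
    return [abs(a - b) for a, b in zip(pixels, pixels[1:])]

def parts(edge_map):
    # Middle layer: combine pairs of edges into larger parts.
    return [sum(edge_map[i:i + 2]) for i in range(0, len(edge_map), 2)]

def descriptor(part_map):
    # Late layer: pool parts into a compact, abstract descriptor.
    return [max(part_map), sum(part_map) / len(part_map)]

def network(pixels):
    # The full hierarchy: pixels -> edges -> parts -> abstract descriptor.
    return descriptor(parts(edges(pixels)))

print(network([0, 1, 1, 0, 1, 0, 0, 1, 1]))
```

The key idea is that the deeper you go, the less the representation looks like raw pixels and the more it resembles an abstract summary of what is in the image.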
They first trained the AI on 2.5 million images across 2,000 different categories from the popular ImageNet dataset. They then extracted features from various layers of the network, including the last layer before the output layer. These are referred to as "conceptual features" because they are the highest-level representations the network has learned and are closest to the abstract concepts that may be encoded in the ATL.
They then used these different feature sets to train the AI to learn new concepts from 2, 4, 8, 16, 32, 64, and 128 examples. They found that the AI using conceptual features performed much better than the AI trained on lower-level features when given a small number of examples.
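The evaluation protocol described above, training a simple classifier on n examples per class and measuring accuracy as n grows, can be mimicked on synthetic data. In this sketch, lower noise stands in for more class-discriminative ("conceptual") features; the synthetic data and the nearest-centroid classifier are assumptions for illustration, not the paper's actual setup:

```python
import random

random.seed(0)

def centroid(vectors):
    """Mean feature vector of a list of examples."""
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def nearest_centroid(query, centroids):
    """Label whose class centroid is closest (squared distance) to `query`."""
    return min(centroids, key=lambda lbl: sum(
        (q - c) ** 2 for q, c in zip(query, centroids[lbl])))

def make_examples(mean, n, noise):
    """Synthetic feature vectors: class mean plus Gaussian noise.

    Smaller `noise` plays the role of more discriminative "conceptual"
    features; larger `noise` plays the role of low-level features.
    """
    return [[m + random.gauss(0, noise) for m in mean] for _ in range(n)]

MEANS = {"a": [1.0, 0.0], "b": [0.0, 1.0]}

def accuracy(n_train, noise, n_test=200):
    """Few-shot accuracy: fit centroids on n_train examples, test on fresh data."""
    cents = {lbl: centroid(make_examples(m, n_train, noise))
             for lbl, m in MEANS.items()}
    hits = 0
    for lbl, m in MEANS.items():
        for q in make_examples(m, n_test, noise):
            hits += nearest_centroid(q, cents) == lbl
    return hits / (n_test * len(MEANS))

for n in [2, 4, 8, 16]:
    print(n, accuracy(n, noise=0.2), accuracy(n, noise=2.0))
```

With few training examples, the low-noise ("conceptual") features yield near-perfect accuracy while the noisy ones hover closer to chance, qualitatively echoing the trend the researchers report.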
While the researchers acknowledged that the task posed to the AI was relatively simple and covered only one aspect of the complex process of visual reasoning, they said that using a biologically plausible approach to the problem opens up many promising new avenues.
"Our findings not only suggest techniques that could help computers learn faster and more efficiently, but can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is still not well understood," said Riesenhuber. As the researchers note, the human visual system remains the gold standard for understanding the world around us. Borrowing from its design principles may prove to be a fruitful direction for future research.