Google announced Pathways, a new artificial intelligence (AI) architecture designed to work more like a human brain and to learn more efficiently than previous approaches.
With Pathways, Google has significantly changed how it approaches AI computation. The new architecture is built around a single model that can be trained to do millions of tasks, making it function more like a mammalian brain. By contrast, AI today is typically used to train a machine to do one thing very well, such as identifying images or understanding animal sounds.
Google said, “we’re crafting the kind of next-generation AI system that can quickly adapt to new needs and solve new problems all around the world as they arise, helping humanity make the most of the future ahead of us.”
Under that approach, two separate AI models are required to understand sight and sound. Pathways takes a different route: it teaches a single model generalizable skills that can be transferred across tasks. It is also meant to mirror the brain's efficiency: the human brain uses only a small portion of its processing power for a given task, activating only the specific parts it needs rather than the entire network.
Google explained, “we’d like to train one model that can not only handle many separate tasks but also draw upon and combine its existing skills to learn new tasks faster and more effectively. That way what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task — say, predicting how floodwaters will flow through that terrain. We want a model to have different capabilities that can be called upon as needed, and stitched together to perform new, more complex tasks – a bit closer to the way the mammalian brain generalizes across tasks.”
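The idea of activating only the parts of a network a task needs, rather than the whole model, can be illustrated with a toy sparse-routing sketch. This is purely conceptual, not Google's implementation; every name and number here (`NUM_EXPERTS`, `TOP_K`, the random "experts") is a hypothetical illustration of routing each input through only a few small sub-networks.

```python
# Conceptual sketch (NOT Google's Pathways code): sparse "expert" routing,
# the general idea behind activating only parts of a large network per input.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8  # small sub-networks; a real system would have far more
TOP_K = 2        # only this many experts run per input (sparse activation)
DIM = 16         # toy feature dimension

# Each "expert" is just a random linear layer in this toy example.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # scores experts per input

def sparse_forward(x):
    """Route input x through only the TOP_K highest-scoring experts."""
    scores = x @ router                # one relevance score per expert
    top = np.argsort(scores)[-TOP_K:]  # indices of the chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over just the chosen few
    # Combine outputs of the selected experts; the others never run at all.
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

x = rng.standard_normal(DIM)
out, used = sparse_forward(x)
print(f"Experts activated: {sorted(used.tolist())} out of {NUM_EXPERTS}")
```

Because only `TOP_K` of the `NUM_EXPERTS` sub-networks execute for any given input, compute cost stays roughly constant even as the total number of experts (and thus the model's overall capacity) grows, which is the efficiency argument the quoted passage gestures at.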
In addition to being more energy-efficient, Google’s Pathways AI is expected to learn more and accomplish tasks faster than older models. Pathways is intended to be more than a single-purpose system: according to Google, it can be adapted to solve many kinds of problems, including ones we have not yet encountered.