For some in the AI industry, the ultimate achievement is creating a system with Artificial General Intelligence (AGI) — the ability to understand and learn any task that a human can. It has been suggested that AGI would bring about systems that can reason, plan, learn, represent knowledge and communicate in natural language.
DeepMind, the Alphabet-backed research lab, took a step toward it this week with the release of an AI system called Gato.
Gato is what DeepMind describes as a “general-purpose” system — one that can be taught to perform many different types of tasks. Researchers at DeepMind trained Gato to complete 604 tasks, to be exact, including captioning images, engaging in dialogue, stacking blocks with a real robot arm and playing Atari games.
Scott Reed, a research scientist at DeepMind and co-creator of Gato, said the significance of Gato is mainly that a single agent with a single model can do hundreds of very different tasks, including controlling a real robot and doing basic captioning and chat. Gato does not necessarily do these tasks well all the time. But on 450 of the 604 aforementioned tasks, DeepMind claims that Gato performs better than an expert more than half the time.
Perhaps even more remarkably, Gato is orders of magnitude smaller in parameter count than single-task systems such as GPT-3. DeepMind researchers kept Gato purposefully small so the system could control a robot arm in real time. But they hypothesize that — if scaled up — Gato could tackle any “task, behavior, and embodiment of interest.”