Google’s new AI will play video games with you — but not to win – The Verge


Google DeepMind trained its video game playing AI agent on games like Valheim, No Man’s Sky, and Goat Simulator.

Google logo with colorful shapes. Illustration: The Verge

Google DeepMind unveiled SIMA, an AI agent being trained to learn gaming skills so that it plays more like a human than like an overpowered AI that does its own thing. SIMA, which stands for Scalable, Instructable, Multiworld Agent, is currently a research project only.

SIMA is meant to eventually learn how to play any video game, including open-world games and games with no linear path to an ending. It isn't intended to replace existing game AI; think of it more as another player that meshes well with your party. It combines natural language instruction with an understanding of 3D worlds and image recognition.

“SIMA isn’t trained to win a game; it’s trained to run it and do what it’s told,” said Google DeepMind researcher and SIMA co-lead Tim Harley during a briefing with reporters. 

Google worked with eight game developers, including Hello Games, Embracer, Tuxedo Labs, and Coffee Stain, to train and test SIMA. Researchers plugged SIMA into games like No Man’s Sky, Teardown, Valheim, and Goat Simulator 3 to teach the AI agent the basics of playing them. In a blog post, Google said that SIMA doesn’t need a custom API or access to source code to play the games.

Harley said the team chose games focused more on open play than on narrative to help SIMA learn general gaming skills. If you’ve played or watched a playthrough of Goat Simulator, you know that doing random, spontaneous things is the point of the game, and Harley said it was this kind of spontaneity they hoped SIMA would learn.

To do this, the team first built a new environment in the Unity engine in which agents had to create sculptures, testing their understanding of object manipulation. Then, Google recorded pairs of human players — one controlling the game, the other giving instructions on what to do next — to capture language instructions. Afterward, players played independently to show what led to their actions in the game. All of this data was used to teach the SIMA agents to predict what would happen next on the screen.

SIMA currently has about 600 basic skills, such as turning left, climbing a ladder, and opening the menu to use a map. Eventually, Harley said, SIMA could be instructed to perform more complex functions within a game, but tasks like “find resources and build a camp” are still difficult for AI agents to carry out.

SIMA isn’t meant to be an AI-powered NPC like the ones from Nvidia and Convai, but rather another player in the game whose actions affect the outcome. SIMA project co-lead Frederic Besse said it’s too early to tell what uses AI agents like it could bring to gaming outside the research sphere.

Like AI NPCs, however, SIMA may eventually learn to talk, though it’s far from that point. SIMA is still learning how to play games and adapt to ones it hasn’t played before. Google said that with more advanced AI models, SIMA may eventually be able to handle more complex tasks and become the perfect AI party member to lead you to victory.
