Google DeepMind is using Gemini to train agents inside Goat Simulator 3


Google DeepMind has unveiled SIMA 2, an updated version of its game-playing agent that is now hooked up to the company's Gemini model. The researchers claim that SIMA 2 can carry out a range of more complex tasks inside virtual worlds, figure out how to solve certain challenges by itself, and chat with its users. It can also improve itself by attempting harder tasks multiple times and learning through trial and error.

“Games have been a driving force behind agent research for quite a while,” Joe Marino, a research scientist at Google DeepMind, said in a press conference this week. He noted that even a simple action in a game, such as lighting a lantern, can involve multiple steps: “It’s a really complex set of tasks you need to solve to progress.”

The ultimate aim is to develop next-generation agents that are able to follow instructions and carry out open-ended tasks inside more complex environments than a web browser. In the long run, Google DeepMind wants to use such agents to drive real-world robots. Marino claimed that the skills SIMA 2 has learned, such as navigating an environment, using tools, and collaborating with humans to solve problems, are essential building blocks for future robot companions.

Unlike previous work on game-playing agents such as AlphaGo, which beat a Go grandmaster in 2016, or AlphaStar, which outperformed 99.8% of ranked human players at the video game StarCraft II in 2019, the idea behind SIMA is to train an agent to play open-ended games without preset goals. Instead, the agent learns to carry out instructions given to it by people.

Humans control SIMA 2 via text chat, by talking to it out loud, or by drawing on the game’s screen. The agent takes in a video game’s pixels frame by frame and figures out what actions it needs to take to carry out its tasks.
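To make that pixels-in, actions-out loop concrete, here is a minimal conceptual sketch of how such an agent loop could be wired up. The environment interface and the propose_action stub are illustrative assumptions for this article, not DeepMind's actual SIMA 2 code.

    # Hypothetical sketch of a pixels-in, actions-out agent loop; not SIMA 2's real API.
    from dataclasses import dataclass

    @dataclass
    class Action:
        keys: list[str]          # keyboard keys to hold this frame
        mouse: tuple[int, int]   # cursor movement in pixels (dx, dy)

    def propose_action(frame, instruction: str) -> Action:
        # Stand-in for the learned policy: this is where an agent would map the
        # current screen and the user's instruction to keyboard/mouse inputs.
        return Action(keys=["w"], mouse=(0, 0))

    def run_agent(env, instruction: str, max_steps: int = 1000) -> None:
        frame = env.reset()                                    # first frame of pixels
        for _ in range(max_steps):
            action = propose_action(frame, instruction)        # decide from pixels + text
            frame, done = env.step(action.keys, action.mouse)  # send inputs, get next frame
            if done:                                           # task finished or episode ended
                break

In the real system the decision step is a learned model connected to Gemini rather than a stub, but the control flow, one frame in, one set of inputs out, is the loop described above.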

Like its predecessor, SIMA 2 was trained on footage of humans playing eight commercial video games, including No Man’s Sky and Goat Simulator 3, as well as three virtual worlds created by the company. From that footage, the agent learned to match keyboard and mouse inputs to the actions it sees on screen.
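As a rough illustration of learning from recordings of human play, the sketch below shows a tiny behavioral-cloning setup in PyTorch: given a frame, predict which keys the human pressed. The network size, the 16-key output, and the training data format are assumptions made for the example, not DeepMind's pipeline.

    # Illustrative behavioral cloning: predict which keys a human pressed from the game frame.
    # Model size, key count, and data format are assumptions, not DeepMind's actual setup.
    import torch
    import torch.nn as nn

    class PixelToInputPolicy(nn.Module):
        def __init__(self, num_keys: int = 16):
            super().__init__()
            self.encoder = nn.Sequential(                       # small CNN over the RGB frame
                nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.LazyLinear(num_keys)                 # one logit per key

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            return self.head(self.encoder(frames))

    def train_step(policy, optimizer, frames, pressed_keys):
        # Supervised imitation: penalize the policy when its predicted key presses
        # differ from the keys the human actually pressed on these frames.
        logits = policy(frames)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, pressed_keys)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()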

The researchers claim that, hooked up to Gemini, SIMA 2 is far better than its predecessor at following instructions (asking questions and providing updates as it goes) and at figuring out for itself how to perform certain more complex tasks.

Google DeepMind tested the agent inside environments it had never seen before. In one set of experiments, researchers asked Genie 3, the latest version of the firm’s world model, to produce environments from scratch and dropped SIMA 2 into them. They found that the agent was able to navigate and carry out instructions there.


