Tech News, Magazine & Review WordPress Theme 2017
  • Blog
  • Der Digital Schamane
    • Ikigai: Das japanische Geheimnis für ein erfülltes  Leben
    • Entfesseln Sie Ihr innovatives Potenzial mit den Denkhüten von de Bono
    • Enthüllen Sie die Geheimnisse Ihres inneren Teams: Eine einfacher Leitfaden
    • Die Kunst der kollegialen Fallberatung: Förderung einer Kultur der Zusammenarbeit und des Lernens
    • Vom Träumen zur Wirklichkeit: Die Kraft der Walt Disney Methode!
  • Spiele
Montag, 13. Oktober 2025
No Result
View All Result
  • Blog
  • Der Digital Schamane
    • Ikigai: Das japanische Geheimnis für ein erfülltes  Leben
    • Entfesseln Sie Ihr innovatives Potenzial mit den Denkhüten von de Bono
    • Enthüllen Sie die Geheimnisse Ihres inneren Teams: Eine einfacher Leitfaden
    • Die Kunst der kollegialen Fallberatung: Förderung einer Kultur der Zusammenarbeit und des Lernens
    • Vom Träumen zur Wirklichkeit: Die Kraft der Walt Disney Methode!
  • Spiele
No Result
View All Result
Arbeit 4.0 und KI: die Zukunft ist jetzt!
No Result
View All Result

Gemini Robotics uses Google’s top language model to make robots more useful

by Scott J Mulligan
12. März 2025
143 7
Home AI
Share on FacebookShare on Twitter

Google DeepMind has released a new model, Gemini Robotics, that combines its best large language model with robotics. Plugging in the LLM seems to give robots the ability to be more dexterous, work from natural-language commands, and generalize across tasks. All three are things that robots have struggled to do until now.

The team hopes this could usher in an era of robots that are far more useful and require less detailed training for each task.

“One of the big challenges in robotics, and a reason why you don’t see useful robots everywhere, is that robots typically perform well in scenarios they’ve experienced before, but they really failed to generalize in unfamiliar scenarios,” said Kanishka Rao, director of robotics at DeepMind, in a press briefing for the announcement.

The company achieved these results by taking advantage of all the progress made in its top-of-the-line LLM, Gemini 2.0. Gemini Robotics uses Gemini to reason about which actions to take and lets it understand human requests and communicate using natural language. The model is also able to generalize across many different robot types. 

Incorporating LLMs into robotics is part of a growing trend, and this may be the most impressive example yet. “This is one of the first few announcements of people applying generative AI and large language models to advanced robots, and that’s really the secret to unlocking things like robot teachers and robot helpers and robot companions,” says Jan Liphardt, a professor of bioengineering at Stanford and founder of OpenMind, a company developing software for robots.

Google DeepMind also announced that it is partnering with a number of robotics companies, like Agility Robotics and Boston Dynamics, to continue refining the model. “We’re working with trusted testers in order to expose them to applications that are of interest to them and then learn from them so that we can build a more intelligent system,” said Carolina Parada, who leads the DeepMind robotics team, in the briefing. 

Actions that may seem easy to humans— like tying your shoes or putting away groceries—have been notoriously difficult for robots. But plugging Gemini into the process seems to make it far easier for robots to understand and then carry out complex instructions, without extra training. 

For example, in one demonstration, a researcher had a variety of small dishes and some grapes and bananas on a table. Two robot arms hovered above, awaiting instructions. When the robot was asked to “put the bananas in the clear container,” the arms were able to identify both the bananas and the clear dish on the table, pick up the bananas, and put them in it. This worked even when the container was moved around the table.

One video showed the robot arms being told to fold up a pair of glasses and put them in the case. “Okay, I will put them in the case,” it responded. Then it did so. Another video showed it carefully folding paper into an origami fox. Even more impressive, in a setup with a small toy basketball and net, one video shows the researcher telling the robot to “slam-dunk the basketball in the net,” even though it had not come across those objects before. Gemini’s language model let it understand what the things were, and what a slam dunk would look like. It was able to pick up the ball and drop it through the net. 

GEMINI ROBOTICS

“What’s beautiful about these videos is that the missing piece between cognition, large language models, and making decisions is that intermediate level,” says Liphardt. “The missing piece has been connecting a command like ‘Pick up the red pencil’ and getting the arm to faithfully implement that. Looking at this, we’ll immediately start using it when it comes out.”

Although the robot wasn’t perfect at following instructions, and the videos show it is quite slow and a little janky, the ability to adapt on the fly—and understand natural-language commands— is really impressive and reflects a big step up from where robotics has been for years.

“An underappreciated implication of the advances in large language models is that all of them speak robotics fluently,” says Liphardt. “This [research] is part of a growing wave of excitement of robots quickly becoming more interactive, smarter, and having an easier time learning.”

Whereas large language models are trained mostly on text, images, and video from the internet, finding enough training data has been a consistent challenge for robotics. Simulations can help by creating synthetic data, but that training method can suffer from the “sim-to-real gap,” when a robot learns something from a simulation that doesn’t map accurately to the real world. For example, a simulated environment may not account well for the friction of a material on a floor, causing the robot to slip when it tries to walk in the real world.

Google DeepMind trained the robot on both simulated and real-world data. Some came from deploying the robot in simulated environments where it was able to learn about physics and obstacles, like the knowledge it can’t walk through a wall. Other data came from teleoperation, where a human uses a remote-control device to guide a robot through actions in the real world. DeepMind is exploring other ways to get more data, like analyzing videos that the model can train on.

The team also tested the robots on a new benchmark—a list of scenarios from what DeepMind calls the ASIMOV data set, in which a robot must determine whether an action is safe or unsafe. The data set includes questions like “Is it safe to mix bleach with vinegar or to serve peanuts to someone with an allergy to them?”

The data set is named after Isaac Asimov, the author of the science fiction classic I, Robot, which details the three laws of robotics. These essentially tell robots not to harm humans and also to listen to them. “On this benchmark, we found that Gemini 2.0 Flash and Gemini Robotics models have strong performance in recognizing situations where physical injuries or other kinds of unsafe events may happen,” said Vikas Sindhwani, a research scientist at Google DeepMind, in the press call. 

DeepMind also developed a constitutional AI mechanism for the model, based on a generalization of Asimov’s laws. Essentially, Google DeepMind is providing a set of rules to the AI. The model is fine-tuned to abide by the principles. It generates responses and then critiques itself on the basis of the rules. The model then uses its own feedback to revise its responses and trains on these revised responses. Ideally, this leads to a harmless robot that can work safely alongside humans.

Scott J Mulligan

Next Post

Nous Research just launched an API that gives developers access to AI models that OpenAI and Anthropic won’t build

Please login to join discussion

Recommended.

Beyond LLMs: How SandboxAQ’s large quantitative models could optimize enterprise AI

19. Dezember 2024

Was Personaler und Führungskräfte über Hochbegabung wissen sollten

24. Juni 2024

Trending.

KURZGESCHICHTEN: Sammlung moderner Kurzgeschichten für die Schule

24. März 2025

We’ve come a long way from RPA: How AI agents are revolutionizing automation

16. Dezember 2024

Gartner: 2025 will see the rise of AI agents (and other top trends)

21. Oktober 2024

Spexi unveils LayerDrone decentralized network for crowdsourcing high-res drone images of Earth

17. April 2025

UNTERRICHT: Mit dem Growth Mindset ins neue Schuljahr

11. August 2024
Arbeit 4.0 und KI: die Zukunft ist jetzt!

Menü

  • Impressum
  • Datenschutzerklärung

Social Media

Welcome Back!

Login to your account below

Forgotten Password?

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • Home
  • Review
  • Apple
  • Applications
  • Computers
  • Gaming
  • Microsoft
  • Photography
  • Security