Gemini hit with 100,000+ prompts in cloning attempt



Google says its flagship artificial intelligence chatbot, Gemini, has been inundated by “commercially motivated” actors who are trying to clone it by repeatedly prompting it, sometimes with thousands of different queries — including one campaign that prompted Gemini more than 100,000 times.

In a report published Thursday, Google said it has increasingly come under “distillation attacks,” or repeated questions designed to get a chatbot to reveal its inner workings. Google described the activity as “model extraction,” in which would-be copycats probe the system for the patterns and logic that make it work. The attackers appear to want to use the information to build or bolster their own AI, it said.
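In simplified terms, distillation works by harvesting a teacher model's outputs at scale and using the prompt-response pairs as training data for a copycat "student" model. The sketch below is a minimal illustration of that pattern only, not any actual attack; the `teacher_model` stub and all other names here are hypothetical stand-ins for a real chatbot API.

```python
# Minimal sketch of model distillation: query a "teacher" model many times
# and collect its responses as training data for a copycat "student" model.
# The teacher here is a local stub; a real campaign would hit a remote API.

def teacher_model(prompt: str) -> str:
    """Hypothetical stand-in for a proprietary chatbot."""
    return f"answer({prompt})"

def build_distillation_set(prompts):
    """Harvest (prompt, response) pairs: the raw material for training a clone."""
    return [(p, teacher_model(p)) for p in prompts]

# Tens of thousands of probing queries, on the scale Google describes.
prompts = [f"probe query {i}" for i in range(100_000)]
dataset = build_distillation_set(prompts)
print(len(dataset))
```

The attacker's student model would then be fine-tuned on `dataset`, approximating the teacher's behavior without access to its weights, which is why providers treat high-volume, systematically varied querying as a signal of extraction.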

The company believes the culprits are mostly private companies or researchers looking to gain a competitive advantage. A spokesperson told NBC News that Google believes the attacks have come from around the world but declined to share additional details about what was known about the suspects.

The scope of the attacks on Gemini indicates that such attacks most likely are, or soon will be, common against smaller companies’ custom AI tools as well, said John Hultquist, the chief analyst of Google’s Threat Intelligence Group.

“We’re going to be the canary in the coal mine for far more incidents,” Hultquist said. He declined to name suspects.

The company considers distillation to be intellectual property theft, it said.

Tech companies have spent billions of dollars racing to develop their AI chatbots, or large language models, and consider the inner workings of their top models to be extremely valuable proprietary information.

Even though they have mechanisms to try to identify distillation attacks and block the people behind them, major LLMs are inherently vulnerable to distillation because they are open to anyone on the internet.

OpenAI, the company behind ChatGPT, accused its Chinese rival DeepSeek last year of conducting distillation attacks to improve its models.

Many of the attacks were crafted to tease out the algorithms that help Gemini “reason,” or decide how to process information, Google said.

Hultquist said that as more companies design their own custom LLMs trained on potentially sensitive data, they become vulnerable to similar attacks.

“Let’s say your LLM has been trained on 100 years of secret thinking of the way you trade. Theoretically, you could distill some of that,” he said.
