5 AI-developed malware families analyzed by Google fail to work and are easily detected



The assessments provide a strong counterargument to the exaggerated narratives trumpeted by AI companies, many of which are seeking new rounds of venture funding, that AI-generated malware is widespread and part of a new paradigm posing a current threat to traditional defenses.

A typical example is Anthropic, which recently reported its discovery of a threat actor that used its Claude LLM to “develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.” The company went on to say: “Without Claude’s assistance, they could not implement or troubleshoot core malware components, like encryption algorithms, anti-analysis techniques, or Windows internals manipulation.”

Security firm ConnectWise recently said that generative AI was “lowering the bar of entry for threat actors to get into the game.” The post cited a separate report from OpenAI that found 20 separate threat actors using its ChatGPT AI engine to develop malware for tasks including identifying vulnerabilities, developing exploit code, and debugging that code. Bugcrowd, meanwhile, said that in a survey of self-selected individuals, “74 percent of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join the fold.”

In some cases, the authors of such reports acknowledge the same limitations noted in this article. Wednesday’s report from Google says that in its analysis of AI tools used to develop code for managing command-and-control channels and obfuscating its operations, “we did not see evidence of successful automation or any breakthrough capabilities.” OpenAI said much the same thing. Still, these disclaimers are rarely given prominence and are often downplayed in the resulting frenzy to portray AI-assisted malware as posing a near-term threat.

Google’s report provides at least one other useful finding. One threat actor that abused the company’s Gemini AI model was able to bypass its guardrails by posing as a white-hat hacker doing research for a capture-the-flag competition. These competitive exercises are designed to teach and demonstrate effective cyberattack strategies to both participants and onlookers.

Such guardrails are built into all mainstream LLMs to prevent them from being used for malicious purposes, such as carrying out cyberattacks or facilitating self-harm. Google said it has since fine-tuned the countermeasures to better resist such ploys.

Ultimately, the AI-generated malware that has surfaced to date appears to be mostly experimental, and the results aren't impressive. The events are worth monitoring for developments that show AI tools producing capabilities that were previously unknown. For now, though, the biggest threats continue to rely predominantly on old-fashioned tactics.


