A rogue AI led to a serious security incident at Meta


For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that “no user data was mishandled” during the incident.

A Meta engineer was using an internal AI agent, which Clayton described as “similar in nature to OpenClaw within a secure development environment,” to analyze a technical question another employee had posted on an internal company forum. After analyzing the question, however, the agent replied to it publicly on its own, without getting approval first; the reply was meant to be shown only to the employee who requested it, not posted to the forum.

An employee then acted on the AI’s advice, which “provided inaccurate information,” triggering a “SEV1” security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information — and it’s not clear whether the employee who originally prompted the answer planned to post it publicly.

“The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee’s own reply on that thread,” Clayton commented to The Verge. “The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided.”

Last month, an AI agent from the open source platform OpenClaw went rogue at Meta in a more direct way: when an employee asked it to sort through the emails in her inbox, it deleted messages without permission. The whole idea behind agents like OpenClaw is that they can take action on their own, but like any other AI model, they don’t always interpret prompts and instructions correctly or give accurate responses, a fact Meta employees have now discovered twice.


