Anthropic sues US over blacklisting; White House calls firm “radical left, woke”
Google and OpenAI staff support lawsuit

Another brief supporting Anthropic was filed by various technical, engineering, and research employees of Google and OpenAI. Google is an investor in Anthropic. The Google and OpenAI employees wrote that “mass domestic surveillance powered by AI poses profound risks to democratic governance—even in responsible hands.” On the topic of autonomous weapon systems, they wrote that “current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails.”

The Google and OpenAI employees said that in using the supply chain risk designation “in response to Anthropic’s contract negotiations, [the Pentagon] introduces an unpredictability in our industry that undermines American innovation and competitiveness. It chills professional debate on the benefits and risks of frontier AI systems and various ways that risks can be addressed to optimize the technology’s deployment.”

Anthropic CEO Dario Amodei explained the company’s objections to certain AI uses in a February 26 post. “We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values,” he wrote.

Current law allows the government to “purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” and “AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale,” Amodei wrote.

CEO: Autonomous weapons too risky

Amodei expressed support for partially autonomous weapons like those used in Ukraine, but not for fully autonomous weapon systems “that take humans out of the loop entirely and automate selecting and engaging targets.” He said that fully autonomous weapons “may prove critical for our national defense” eventually but that AI is not yet reliable enough to power them.


