The Pentagon is making plans for AI companies to train on classified data, defense official says


Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: the Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings, and is implementing a new agenda to become an “AI-first” warfighting force as the conflict with Iran escalates. (The Pentagon had not commented on its AI training plans as of publication time.)

Training would be done in a secure data center that’s accredited to host classified government projects, and where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies with appropriate security clearances might in rare cases access the data, the official said. 

Before allowing this new training, though, the official said the Pentagon intends to first evaluate how accurate and effective models are when trained on non-classified data, like commercially available satellite imagery. 

The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.

Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks. 


