OpenAI is looking for a new Head of Preparedness


OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.

In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they are beginning to find critical vulnerabilities.”

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.

OpenAI’s listing for the Head of Preparedness role describes the job as responsible for executing the company’s Preparedness Framework, which it calls “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”

The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.

Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.

The company also recently updated its Preparedness Framework, stating that it might “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.


As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny around their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and to connect users to real-world support.)


