
That aspect of OpenAI’s vision requires firms like OpenAI to develop safety systems, among other efforts, that will help improve public trust in AI. The public should trust that those systems work, OpenAI seems to suggest, and regulators should interfere with these firms only when actual dangers are looming.
“As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls,” OpenAI said.
When that day arrives, OpenAI opined, there should be a global network in place to communicate emerging risks. However, only the firms with the most advanced models should be subjected to rigorous audits, so that smaller firms can still compete. That’s the path to ensuring that no firm can abuse a dominant position to unfairly shut down rivals or weaken democratic values, OpenAI said, while insisting that public input is vital to AI’s success.
Altman has previously persuaded “a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities,” The New Yorker reported. But for a public already reporting alleged harms from OpenAI models, it might be getting harder to entertain lofty ideas from a company led by “the greatest pitchman of his generation,” as the magazine put it.
One OpenAI researcher told The New Yorker that Altman’s promises can sometimes seem like a stopgap to overcome criticism until he reaches the next benchmark. Some optimistic experts think superintelligence could arrive within two years, longer than Elon Musk stayed at OpenAI before famously criticizing Altman’s leadership and leaving to start his own AI firm.
Altman “sets up structures that, on paper, constrain him in the future,” the OpenAI researcher told The New Yorker. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”