The United States and China will discuss guardrails on artificial intelligence, including establishing a protocol for keeping powerful A.I. models out of the hands of nonstate actors, Treasury Secretary Scott Bessent said on Thursday.
Mr. Bessent, who was speaking from Beijing in an interview with CNBC, did not give more details, including when these discussions would take place. But Xi Jinping, China’s leader, and President Trump had been expected to discuss A.I. during their summit in the Chinese capital.
If these talks happen, it would be the first time the two countries formally take up the issue during Mr. Trump’s second term. The capabilities and usage of A.I. have grown rapidly, and so have concerns that this technology could be weaponized by hackers and terrorists, or spiral out of human control.
“The two A.I. superpowers are going to start talking,” Mr. Bessent said. “We’re going to set up a protocol in terms of, how do we go forward with best practices for A.I. to make sure nonstate actors don’t get ahold of these models.”
Still, Mr. Bessent made clear that the fierce competition between the United States and China for supremacy in A.I. — which has been a major hurdle to cooperation on safety — remained front of mind for U.S. policymakers. Officials and experts in both countries have argued that they cannot slow technological development and risk losing out to their rivals.
Mr. Bessent said that the United States was willing to cooperate with China on A.I. safety because “the Chinese are substantially behind us” in terms of the technology’s development.
“I do not think we would be having the same discussions if they were this far ahead of us. So we’re going to put in U.S. best practices, U.S. values, on this, and then roll those out to the world,” Mr. Bessent said.
Experts have suggested that China’s A.I. models may be a few months behind the leading U.S. models.
Another hurdle to the United States and China working together on A.I. safety is that they have generally focused on different potential threats.
American experts have generally highlighted existential risks, such as the possibility of artificial general intelligence or super-intelligence, systems whose capabilities would exceed those of humans. Chinese researchers and officials have more often highlighted risks related to social stability and information control, such as the possibility of chatbots producing content that challenges China’s leadership and policies.
Still, researchers in both countries have highlighted some shared risks, such as the possibility of A.I. being used to develop new biological weapons.