This is The Takeaway from today’s Morning Brief, which you can sign up to receive in your inbox every morning.
When Slack or Outlook or whatever office software goes down, there’s a sigh of relief to go alongside the corporate headache. Office life stops, and after a brief walk outside, people are back in business.
But what would happen if banking platforms went offline — or even worse, were tampered with or digitally wiped?
The risks of advanced AI exploiting vulnerabilities in the financial world were top of mind for Treasury Secretary Scott Bessent and Fed Chair Jerome Powell, who summoned bank CEOs to Washington this week to warn of a grave new era of cyber threats ushered in by Anthropic’s latest AI model, Mythos.
Leaders of Citigroup, Bank of America, Morgan Stanley, Wells Fargo, and Goldman Sachs were among those present, Bloomberg reported.
The urgent huddle highlighted the risks of AI-driven cyberattacks. But for all the talk around the software apocalypse and AI outmaneuvering corporate America, one of the biggest threats the technology poses is to the global financial system itself. Yes, it’s bad if AI takes your job. But it’s even worse if it takes all your money — or simply zeroes out your account balances.
The tech turmoil might be seizing the attention of investors, but finance still reigns supreme. Add systemic risk to the list of things to worry about because of AI.
But the anxiety around Mythos — it will be offered only to a few dozen companies to limit exposure — is hard to separate from the years of not-so-subtle marketing by AI firms that tap into fears of disruption and catastrophe.
It has become a trope for AI labs and their constellation of developers, practitioners, and prognosticators to warn of AI unleashing the end times while simultaneously peddling their latest model and strategic planning as appropriate remedies for the problems they cause.
Situating a donut shop right next to a gym isn’t inherently wrong. But it would be peculiar to learn that the businesses are owned by the same entity. Taking that peculiar feeling to the extreme, manufacturing a disease and selling the cure is absolutely a Bond villain business plan.
But is what Anthropic is doing in its underground lair all that different? The trepidation emanating from Treasury and the Fed is instructive. If anything, we’re accustomed to a more blasé response from the government. This feels different.
The AI labs, of course, don’t see a problem with the new skills of their creations or the market for shields they’re creating. In their telling, the development of AI is inevitable, and providing solutions to the problems it causes is both morally right and, conveniently, good business. Though this week’s emergency meeting may be a harbinger of regulation, you can imagine an AI-centered argument that the best outcome for society is for a responsible and transparent organization — one that’s also American — to be the steward of a new, powerful technology.