I Am Begging AI Companies to Stop Naming Features After Human Processes


Anthropic just announced a new feature called “Dreaming” at the company’s developer conference in San Francisco. It’s part of Anthropic’s recently launched AI agent infrastructure designed to help users manage and deploy tools that automate software processes. This “dreaming” aspect sorts through the transcript of what an agent recently completed and attempts to glean insights to improve the agent’s performance.

Folks using AI agents often send them on multi-step journeys, like visiting a few websites or reading multiple files, to complete online tasks. This new “dreaming” feature allows agents to look for patterns in their activity log and improve their abilities based on those insights.

The feature’s name immediately calls to mind Philip K. Dick’s seminal sci-fi novel, Do Androids Dream of Electric Sheep?, which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools come nowhere close to the machines in the book, I’m ready to draw the line right here, right now: no more generative AI features with names that rip off human cognitive processes.

“Together, memory and dreaming form a robust memory system for self-improving agents,” reads Anthropic’s blog post about the launch of this research preview for developers. “Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”

Courtesy of Claude

Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full tilt into naming aspects of generative AI tools after what goes on in the human brain. OpenAI released its first “reasoning” model back in 2024, a chatbot that needed “thinking” time. The company described the release at the time as “a new series of AI models designed to spend more time thinking before they respond.” Numerous startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage typically referred to as a computer’s “memory,” these are much more human-like nuggets of information: he lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe.

It’s a consistent marketing approach used by AI leaders, who have continued to lean into branding that blurs the line between what humans do and what machines can do. Even the way these companies develop chatbots, like Claude, with distinct “personalities” can make users feel as if they’re talking with something that has the potential for a deep inner life, something that might go on dreaming even after my laptop is closed.

At Anthropic, this anthropomorphizing runs deeper than just marketing strategies. “We also discuss Claude in terms normally reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads a portion of Anthropic’s constitution describing how it wants Claude to behave. “We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.” The company even employs a resident philosopher to try to make sense of the bot’s “values.”


