Anthropic has announced a new feature called “Dreaming” at the company’s developer conference in San Francisco. It is part of the company’s recently launched agent infrastructure, designed to help users manage and deploy tools that automate application processes. The “Dreaming” feature organizes transcripts of what an agent has recently completed and mines them for insights to improve the agent’s performance.
People who use AI agents often send them on multi-step journeys, such as visiting several websites or reading through many files, to complete online tasks. The new “Dreaming” feature lets agents look for patterns in their activity logs and improve their capabilities based on those insights.
The name of this feature immediately recalls Philip K. Dick’s science-fiction novel Do Androids Dream of Electric Sheep?, which explores the characteristics that truly separate humans from powerful machines. While our current AI tools come nowhere near the machines in the book, I’m willing to draw the line right here, right now: no more naming AI features after human cognitive processes.
“Together, memory and dreams form a robust memory system for self-enhancing agents,” reads an Anthropic blog post about the launch of this research preview for developers. “Memories allow each agent to capture what it learns as it works. Dreaming refines that memory between sessions, extracting shared learnings for agents and updating it.”
Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full steam ahead naming aspects of AI products after what goes on in the human brain. OpenAI released its first “reasoning” model back in 2024, in which the chatbot needed time to “think.” The company described that release at the time as “a new series of AI models designed to spend more time thinking before they respond.” Many startups also describe their chatbots as having “memory” about the user. Rather than the fast storage commonly known as computer memory, these hold more human-like bits of information: the user lives in San Francisco, enjoys afternoon baseball games, and hates eating melons.
It’s a strong marketing tactic used by AI leaders, who have continued to rely on branding that blurs the line between what humans do and what machines can do. Even the way these companies develop chatbots like Claude, with a certain “character,” can make users feel as if they are talking to something with an inner life, something that might dream even when the laptop is closed.
At Anthropic, this anthropomorphizing goes beyond mere marketing strategy. “We also discuss Claude in terms normally reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads part of Anthropic’s constitution, which explains how the company wants Claude to act. “We do this because we expect human concepts to apply to Claude by default, given the role of human-generated text in Claude’s training; and we think encouraging Claude to embrace some human-like characteristics would be very desirable.” The company even employs a resident philosopher who tries to understand the “morality” of the bot.