AI Conversations Move Too Fast To Be Comprehensible


You hear wild things all the time. Like this story that Nat Friedman, the former CEO of GitHub, recently told at a conference. Friedman uses OpenClaw, an autonomous AI agent that runs on his computer and acts as a personal assistant. One day, his OpenClaw decided he wasn’t drinking enough water; Friedman told the agent to “do whatever it takes” to make sure he stayed hydrated. According to Friedman, the bot eventually directed him to go to the kitchen and drink a bottle of water, and informed him that it was watching through a camera connected to his home. “I’ll check to make sure you do,” the bot said. Friedman did as he was told, and, moments later, the bot sent him a frame of himself drinking from the bottle and told him “good job.” “I felt like I did a good job,” Friedman said.

The world is only a few years into the AI boom, and this strange cocktail of hype, utility, and intrigue is the norm. On X—arguably the heart of the AI insider conversation—investors, influencers, programmers, researchers, advertisers, and a host of other hangers-on reach through the algorithm to shake you by the shoulders. Claude “broke down my whole life with terrifying accuracy. No stars. No tarot. Just pure AI,” one post reads. Another crows: “Our team is stunned. We gave Claude Opus 4.6 by @AnthropicAI $10k to trade on @Polymarket. He now has an account value of $70,614.59.” The post includes a chart with a small asterisk indicating that the trade was part of a trading simulation and was not done with real money.

The defining feature of all this evangelism is its dizzying pace. Unless you pay close attention to the daily churn of AI talk, most of the conversation is nearly incomprehensible. From week to week, the narrative shifts. A new prompting trick “WILL CHANGE HOW YOU BUILD WITH AI FOREVER”; no, wait, prompting is dead. Claude “CHANGES EVERYTHING”; actually, it’s all about OpenAI’s Codex now. We’re vibe coding sites now. Scratch that: We’re vibe-working now, making money while we sleep.

It’s all moving so fast that veterans of the AI discourse are only half joking when they pine for the good old days… of 2022.

I have previously written that one of the lasting cultural effects of AI is to make people feel like they’re being outsmarted. Some of that is due to extreme fandom, or to the way the technology is explicitly positioned to eliminate jobs. But lately, I believe, it’s the breakneck pace of the AI boom that’s driving people everywhere a little crazy. The discourse about the technology and its deployment is dominated by a logic of escalation: Intelligence, revenue, capability, all of it, boosters insist, is supposed to keep compounding. Promising new breakthroughs are touted, then immediately underscored by the reminder that this is the worst the technology will ever be. Because AI systems have seeped into every domain of our culture and economy, it is very difficult to assess the technology’s impact except on a case-by-case basis. That you can’t begin to wrap your mind around the AI boom or orient yourself within it is a feature, not a bug, for those building the technology. But for anyone just trying to adjust, it’s hard not to feel resentful or left behind. Silicon Valley is trying to accelerate its way into the future, and it’s alienating the rest of us in the process.

The whiplash itself has been around for several years. Since the arrival of ChatGPT, the AI discourse has lurched along the “It’s over”–“We’re back” axis, with the industry seemingly falling short of its own mythology, then heralding yet another paradigm shift. But the latest shift, from chatbots to coding agents—self-directed tools like the one that apparently policed Friedman’s water intake—has supercharged that whiplash. Optimists see agents, as opposed to chatbots, as a convincing step toward AI practitioners’ predictions that the technology could eliminate many white-collar jobs and reshape the nature of work. Adoption of tools such as Claude Code and OpenAI’s Codex has grown exponentially, along with revenue. The bubble talk has (for now) cooled, and CEOs are saying things like “Think of this as the dawn of a new Atomic Age.” We are, apparently, very much back.

In AI research, a popular view holds that there is a “jagged frontier” in AI use and adoption: AI tools can be unexpectedly good at some human tasks and unexpectedly bad at others. As those fault lines harden, they seem to push people further into their preconceived notions of AI, so much so that AI evangelists and critics appear to live in different worlds. On Reddit and LinkedIn, employees lament managers who have pet names for their bots and who demand that every marketing brief be run through Microsoft Copilot. Some workers say they write their memos themselves and pass them off as chatbot output, just to retain some agency in their work.

Elsewhere on the internet, programmers are beginning to describe relationships with coding agents that veer into unhealthy territory. “I wake up at 2 a.m. on Tuesdays,” Anita Kirkovska, the head of growth at an AI company, wrote recently, “not because I have a deadline, but because Claude Code made it so easy to keep going that I forgot to stop.” She describes the “mastery addiction” induced by the tools that make her so effective: “You hit fast, the agent succeeds, you get dopamine. The agent fails hard, you get adrenaline. Both are reinforcing. Both keep you at the workstation.” Kirkovska says she sees this among all kinds of AI power users: an unsustainable state of flow in which decision-making begins to falter and people grow careless as they tire.

Mat Honan of MIT Technology Review describes the feeling that too much is changing, too quickly, as “AI malaise.” You’re starting to see it in surveys: a recent Gallup poll found that only 18 percent of Gen Zers said they are optimistic about AI (down 9 percent over the past year), and an NBC News poll put AI’s favorability rating at 26 percent. It shows up in the real world, in the 20 data-center projects canceled amid local opposition in the first quarter of this year, or in the college commencement ceremony where students booed a speaker hailing AI as “the next Industrial Revolution.” You can see it in a few isolated, unprovoked acts of violence, such as the homemade bomb thrown at the home of OpenAI CEO Sam Altman.

I’d say that the most common feeling about AI right now is a kind of unease: an undertone of anxiety that’s hard to pin down, the product of loud people constantly insisting that the future will look very little like the present and that nothing, not your job, not the social contract, is immune to change.

The apocalyptic messaging of the AI industry taps into this sentiment. Even when AI executives try to lower the temperature of the discourse, as Altman did in a recent blog post after the attack, the language is grave. “Fear and concern about AI are the same,” he wrote. “We are in the process of witnessing the biggest change to society in a long time, and maybe ever.” Similar notes were struck with the release of Anthropic’s Mythos, a new model that the company claimed was so powerful that Anthropic could not release it widely, out of concern that it would trigger a global cybersecurity crisis. Should you be excited or scared by the idea that the internet as we know it may no longer work? (Anthropic, of course, has a history of AI doomerism and a clear financial interest in making its products seem historically powerful.)

Even as the industry has warned about the dangers of AI, it has done a poor job of articulating the positive vision of the future it wants to build. Its attempts have been vague at best, and at worst verge on the paternalistic. In April, OpenAI published a 13-page paper on “Industrial Policy for the Age of Intelligence” with a wonderful-sounding subtitle: “The Idea of Putting People First.” Perhaps the most thoughtful (or at least the longest) articulation of what AI could do for good, a roughly 14,000-word essay by Anthropic CEO Dario Amodei titled “Machines of Loving Grace,” is more of a wish list than a plan. And even at its most sincere, Amodei’s vision still comes across as alienating, even dystopian. Near the end of the piece, Amodei imagines a scenario in which AI has rendered the current economic system irrelevant. One solution, he suggests, could be to create a new system in which economic decisions, including resource allocation, are not completely outsourced to AI. He then nods to “the need for a broader social conversation about how the economy should be organized.” Left unanswered is who gets to participate in that conversation. On X, the writer Noah Smith asked the follow-up question without mincing words: “In 20 or 50 years, will AI giants rule the world?”

Everything is arriving faster than most people can process it. Last week, Jack Clark, a co-founder of Anthropic, posted on X that he now believes there is a 60 percent chance that, by the end of 2028, “AI systems may soon be able to build themselves.” AI CEOs have made plenty of wrong predictions about advanced capabilities before, so should any of us believe that such systems are 18 months away? And what is one supposed to do with that information? Buy stocks? Buy a gun? Maybe don’t bother learning to code. Here we are in 2026, living in a moment when some people are bracing for a world run by computers, while many others are worried about gas prices and just trying to get through the day.

About the only thing that’s clear at this point is that a power struggle over who gets to define the coming years is looming. It is a struggle between AI labs and between nations. The White House has hinted that it could become a struggle between the government and Silicon Valley, though Silicon Valley’s influence there suggests otherwise. But for most of us, navigating the jagged frontier will feel personal. What might look like a civilizational, high-stakes war game to AI CEOs will feel to others like Silicon Valley handing their boss one more reason to eliminate their job, or a loved one’s.

For the past decade, popular tech platforms—many of them built or run by the same people now creating modern AI tools—have favored speed over focus. They conditioned us to operate by that logic, often as the worst and loudest versions of ourselves. Over time, these tools flattened our arguments, our politics, our culture, collapsing them into the same endless fight, until everyone retreated into their own truth.

The same dynamics govern the AI discourse. AI development is cast as a race, a gold rush, and the gap between AI true believers and the jaded masses is widening. In the same feed, you can read a blind item about AI researchers taking up smoking because they believe AI will cure lung cancer, and reporting on the shared sense of urgency gripping America and China. Silicon Valley leaders pay lip service to a societal conversation about what comes next, but their actions say otherwise: Get on board or get left behind. Humanity rewriting the social contract together sounds nice; less so when you have a gun to your head. Time is of the essence, we are told. Maybe that’s true. But how can we build the future if we cannot agree on the present? A cynic might conclude that our input isn’t wanted at all.


