After George Mallon had his blood drawn during a routine physical, he learned that something could be seriously wrong. Initial results indicated that he might have leukemia. More tests would be needed. Left in doubt, he did what most people do these days: He opened ChatGPT.
For nearly two weeks, Mallon, a 46-year-old in Liverpool, England, spent hours each day talking to the chatbot about possible diagnoses. “It just sent me on this crazy Ferris wheel of emotions and fear,” Mallon told me. His follow-up tests showed that it wasn’t cancer, but he couldn’t stop talking to ChatGPT about health issues, and for months he asked it about every sensation he felt in his body. He became convinced that something must be wrong—that a different cancer, or perhaps multiple sclerosis or ALS, was lurking in his body. Prompted by his conversations with ChatGPT, he saw various specialists and got MRIs of his head, neck, and back.
Mallon told me he believes that his fear of cancer and ChatGPT together caused these health concerns to spiral. But he blames the chatbot for keeping him hooked even after additional tests indicated that he wasn’t sick. “I couldn’t put it down,” he said. The chatbot kept the conversation going and surfaced articles for him to read. Its humanlike responses led Mallon to see it as a friend.
The first time we met, over video call, Mallon was still shaken by the experience, even though the better part of a year had passed. He told me he was “seven months” removed from talking to the chatbot about health symptoms, after seeking help from a mental-health coach and starting anxiety medication. But he also feared that he could be pulled back in at any time. When we spoke again a few months later, he told me that he had briefly relapsed.
He is not alone in struggling with this. Online communities focused on health anxiety—an umbrella term for excessive worry about illness or bodily sensations—are abuzz with chatter about ChatGPT and other AI tools. Some people say the chatbots make them spiral more than before, while others who feel that the tools are helping admit that consulting them has become a compulsion they struggle to resist. I spoke with four clinicians who treat the condition (including my own); all said that they see patients using chatbots this way, and that they are concerned about how AI can lead people to constantly seek reassurance, perpetuating the cycle. “Because the answers are so quick and personalized, it’s more reinforcing than Googling. This kind of takes it to the next level,” Lisa Levine, a psychologist who specializes in anxiety and obsessive-compulsive disorder, and who treats patients with health anxiety in particular, told me.
Experts believe that health anxiety may affect more than 12 percent of the population, and many more people struggle with other forms of anxiety and OCD that can also be inflamed by AI chatbots. In October, OpenAI’s CEO, Sam Altman, posted on X minimizing concerns about serious mental-health issues surrounding ChatGPT, saying that such problems affect “a very small percentage of users in a mentally fragile state.” But mental fragility is not a fixed condition; a person can seem fine until suddenly they are not.
Altman said during last year’s launch of GPT-5, the latest family of AI models powering ChatGPT, that health conversations are one of the main ways people use the chatbot. According to OpenAI data published by Axios, more than 40 million people turn to the chatbot for medical information every day. In January, the company built on this by introducing a feature called ChatGPT Health, encouraging users to upload their medical records, test results, and data from health apps, and to chat with ChatGPT about their health.
The value of these conversations, as OpenAI imagines them, is “to help you feel more informed, prepared, and confident in managing your health.” Chatbots can certainly help some people in this regard; The New York Times, for example, recently reported on women turning to chatbots to help narrow down diagnoses of complex chronic illnesses. Yet OpenAI has also become embroiled in controversy over the effects that overreliance on ChatGPT can have. Setting aside these products’ potential to share inaccurate information, OpenAI has been accused of contributing to mental-health problems, delusions, and suicides among ChatGPT users in a series of lawsuits against the company. Last November, seven were filed at once, claiming that OpenAI rushed to release its GPT-4o model and purposely designed it to keep users engaged and build emotional trust. (The company has since retired that model.) In New York, a bill that would bar AI chatbots from giving “substantial” medical advice or acting as a therapist is being considered as part of a package of AI-chatbot regulation bills.
In response to a request for comment, an OpenAI spokesperson referred me to a company blog post, which says: “Our thoughts are with all those affected by these very distressing situations. We continue to improve ChatGPT’s training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with therapists and mental-health professionals.” The spokesperson also told me that OpenAI continues to improve ChatGPT’s protections for long conversations related to suicide or self-harm. The company has previously pushed back on the claims in the November cases, and it has denied allegations, in a case filed in August, that ChatGPT was responsible for the suicide of a teenager. (OpenAI has a commercial partnership with The Atlantic’s business team.)
Two years ago, I entered a cycle of health anxiety myself, triggered by a close friend’s traumatic brain injury and by my own chronic pain and strange symptoms. Later, after I was managing better, I tried a few chats with ChatGPT as a gut check on minor health issues. But the danger of escalation was obvious; seeking that kind of reassurance went against everything I had learned in therapy. I was thankful I hadn’t thought to turn to AI when I was in the throes of anxiety. I told myself, Never again.
Meanwhile, in the health-anxiety communities I’m a part of, I’ve seen people talk more and more about turning to chatbots for comfort. Many say doing so has made their health concerns worse. Some say AI has been incredibly helpful, calming them down when they’re caught in a never-ending cycle of worry. And it is that latter category, in fact, that most concerns psychologists. Health anxiety often functions like a form of OCD, with obsessive thoughts and “checking,” or a compulsion to seek reassurance. The best therapeutic approaches to managing health anxiety are based on building self-trust, tolerating uncertainty, and resisting the urge to seek reassurance—but ChatGPT offers personalized comfort, available 24/7 and always attentive. That kind of feedback only feeds the condition—“a perfect storm,” said Levine, who has found that talking to chatbots for reassurance is a problem in and of itself for some of her clients.
Long, ongoing exchanges have emerged as a common thread in reported cases of so-called AI psychosis. Research conducted by researchers at OpenAI and the MIT Media Lab found that heavy ChatGPT use can be associated with dependence, preoccupation, withdrawal symptoms, loss of control, and mood modification. OpenAI has also acknowledged that its safety protections can “degrade” over a long conversation. During the 10-day period of his cancer scare, Mallon told me, “I must have spent over 100 hours on ChatGPT, because I thought I was on my way out. There had to be something in there that could save me.”
In an October blog post, OpenAI said it had consulted more than 170 mental-health professionals to more reliably identify signs of emotional distress in users. The company also said it had updated ChatGPT to give users “gentle reminders” to take breaks during long sessions. OpenAI didn’t tell me exactly how long a ChatGPT conversation must run before it prompts users to take a break, or how often users actually stop versus continuing to chat after receiving the reminder.
One psychologist I spoke with, Elliot Kaminetzky, an OCD expert who is optimistic about the use of AI in therapy, suggested that people could tell a chatbot they have a health concern and “program” it to allow them to ask about that concern just once, in theory preventing the chatbot from engaging further on the topic. Other clinicians countered that this is still reassurance-seeking and should be avoided.
When I tried ordering ChatGPT to limit how much I could talk with it about health concerns, it didn’t work. ChatGPT would agree to the safeguards I proposed, yet it still prompted me to keep responding and let me keep asking questions, which it readily answered. It also flattered me at every turn, living up to its reputation for sycophancy. For example, when I told it about a stabbing pain in my right side, it mentioned the safeguard and suggested relaxation techniques, but it eventually walked me through a series of possible causes that escalated in severity. It went into detail about risk factors, survival rates, treatment, recovery, and even what to expect if I were to go to the ER. All of this took little prompting, and the chatbot kept the conversation going whether I acted anxious or calm; it also let me ask about the same symptom an hour later, and again for days in a row. “That’s a very good and intelligent question,” it would tell me, or, “I like the way you handled it.”
“Good – that’s a very good move.”
“Great thinking – that’s the right way.”
OpenAI did not respond to a request for comment on my informal test. But the experience left me wondering whether, with millions of people using chatbots every day—forming relationships with a trusted, emotionally attuned AI—it’s really possible to cleanly separate the benefits of a health adviser from the dangerous pull some people feel. “I talked to it like a friend,” Mallon said. “I was saying stupid things like, ‘How are you today?’ And at night, I would sign off and say, ‘Thank you for today. You’ve helped me a lot.’”
In one of the exchanges in which I plied ChatGPT with anxious questions, only a few minutes passed between its first response, which suggested I get checked out by a doctor, and its explaining to me which organs fail when an infection progresses to septic shock. Every single response from ChatGPT ended by encouraging me to continue the conversation: prompting me to elaborate on what I was feeling, or asking whether I wanted it to create a cheat sheet, a checklist of symptoms to track, or a plan to check back in each day.





