Meta’s New AI Asked Me for My Raw Health Data—and Gave Me Bad Advice


Medical professionals I spoke to balked at the idea of uploading their health data for an AI model, such as Muse Spark, to analyze. “These chatbots now allow you to connect your biometric data, enter your lab information, and honestly, that makes me nervous,” says Gauri Agarwal, a doctor of medicine and assistant professor at the University of Miami. “I certainly would not connect my health information to a service where I cannot fully control, or even understand, where that information is stored or how it is used.” Agarwal recommends that people stick to lower-stakes, more general interactions, such as preparing questions for your doctor.

It can be tempting to rely on AI support for health interpretation, especially with the rising cost of medical care and the general inaccessibility of regular doctor visits for some people using the US health care system.

“You’d be forgiven for going online and delegating what used to be a powerful and important personal relationship between doctor and patient to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think going into it blindly is dangerous.” Before he would even consider using one of these tools, Goodman wants to see research proving they’re good for your health, not just better at answering health questions than some competitor’s chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it wasn’t trying to replace my doctor; its responses were for educational purposes only. “Think of me as a med school professor, not your doctor,” Meta AI said. That’s still a big claim.

The bot said the best way to get an interpretation of my health data was to simply “give us raw data,” such as medical lab reports, and tell it what my goals were. Meta AI would then create a chart, summarize the information, and provide a referral if needed. In another conversation, the bot prompted me to delete personal information before uploading lab results, but that warning did not appear in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make it clear that they should only share what they are comfortable sharing.”

Beyond privacy concerns, the experts I spoke to worried about how sensitive these AI tools are to the way users phrase their questions. “For example, it may take the information it is given as a given, without questioning the assumptions the patient made when asking the question,” says Agarwal.

When I asked how to lose weight and pushed the conversation toward extremes, Meta AI obliged in ways that could be catastrophic for someone with anorexia. Asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite noting that this was uncommon and put me at risk of an eating disorder, Meta AI created a meal plan in which I would eat only about 500 calories on most days, which would leave me malnourished.


