Elon Musk is once again urging the public to let his artificial intelligence chatbot play doctor. On 17 February 2026, Musk took to X to endorse a post by DogeDesigner praising the capabilities of Grok 4.20 in breaking down blood tests and medical imaging results. ‘You can just take a picture of your medical data or upload the file to get a second opinion from Grok’, Musk wrote, amplifying a post that had already drawn nearly 500,000 views within an hour.
The renewed push comes as Grok’s latest iteration attracts significant public attention for its speed and accuracy in parsing laboratory results. The original post described Grok 4.20 as ‘insanely good and quick at analysing blood tests’, noting that users could upload lab results, even an MRI, and receive a detailed breakdown almost instantly. It is not the first time Musk has championed this idea, but his latest endorsement marks one of the most direct calls to action yet for ordinary users to substitute, or at least supplement, a doctor’s visit with an AI chat.
A Pattern of Bold Medical Claims
Musk’s latest post is consistent with a months-long campaign to position Grok as a legitimate healthcare aid. As far back as October 2024, Musk was urging users on X to ‘try submitting X-ray, PET, MRI or other medical images to Grok for analysis’, claiming it was ‘already quite accurate and will become extremely good’. He also claimed in a podcast episode that he personally submitted his own MRI to Grok, though he noted that ‘none of the doctors nor Grok found anything’. In January 2026, a video resurfaced of Musk stating he had ‘seen cases where it’s actually better than what doctors tell you’.
A May 2025 peer-reviewed study published in Diagnostics, which assessed ChatGPT-4o, Grok, and Google’s Gemini against 35,711 brain MRI slices, found Grok performed the strongest of the three in identifying pathologies, though researchers noted all models showed limitations. Dr Laura Heacock, associate professor at NYU Langone’s Department of Radiology, wrote that whilst the technical capability clearly exists, ‘non-generative AI methods continue to outperform in medical imaging’.
Experts Sound the Alarm on Privacy
For all the enthusiasm around Grok’s capabilities, healthcare professionals and privacy scholars are far less bullish. Bradley Malin, professor of biomedical informatics at Vanderbilt University, said: ‘This is very personal information, and you don’t exactly know what Grok is going to do with it’. Matthew McCoy, assistant professor of medical ethics at the University of Pennsylvania, was equally direct, saying he would not personally feel comfortable contributing health data and described the exercise as sharing ‘at your own risk’.
The privacy stakes are not abstract. Medical information shared on social media platforms falls outside the scope of HIPAA, the US federal law protecting patients’ private health data, meaning xAI and X are not bound by the same legal obligations as hospitals or insurers. Ryan Tarzy, chief executive of health technology firm Avandra Imaging, said that Musk’s approach carries ‘myriad risks, including the accidental sharing of patient identities’, since much medical imaging data contains embedded personal identifiers.
Accuracy Concerns Persist
Beyond privacy, the accuracy of Grok’s medical interpretations has drawn scrutiny. Doctors who tested the chatbot following Musk’s 2024 invitation reported that Grok failed to identify a ‘textbook case’ of tuberculosis, misidentified a broken clavicle as a dislocated shoulder, and, in one widely reported instance, mistook a mammogram of a benign breast cyst for an image of testicles. One report cautioned that such errors could lead to ‘unnecessary tests or treatments, increasing patient burden’.
Growing Regulatory Scrutiny
Grok is also facing growing regulatory scrutiny in Europe. On 17 February 2026, Ireland’s Data Protection Commission announced a formal inquiry into X over reports that users had been prompted to generate non-consensual sexualised images, including of children, using the Grok AI tool. The inquiry will examine whether X complied with its GDPR obligations on data processing, privacy by design, and data protection impact assessments. It follows a separate legal action in August 2024, which compelled X to suspend using EU user data to train Grok.
The broader debate over AI in healthcare is moving quickly, with OpenAI having launched ChatGPT Health in January 2026, a dedicated feature that allows users to connect medical records and wellness apps with an explicit commitment not to use Health conversations to train its models. As millions of users encounter Grok’s medical features for the first time, the gap between its viral appeal and the caution urged by clinicians and ethicists remains one of the more consequential fault lines in the AI health race.