Across America's bustling hospitals, a quiet revolution is underway. Healthcare is going all-in on artificial intelligence, deploying it for everything from interpreting patient scans to battling insurance claim rejections. This rapid adoption is turning major hospital systems into the primary testing ground for AI's capabilities—and its significant limitations.
The AI Speed Boost: From 75 Seconds to 45
The impact is palpable for doctors on the front lines. Take the experience of Samir Abboud, chief of emergency radiology at Northwestern Medicine in Chicago. Before AI, his meticulously honed routine, aided by voice dictation, allowed him to finish an X-ray report in a brisk 75 seconds. That changed in 2024 when his health system introduced generative AI designed to analyze scans and draft reports.
Now, with Abboud reviewing the AI's work for potential changes, the process has accelerated to about 45 seconds per report. The sheer speed was both breathtaking and startling. "It was the first time I felt like there was a clock on my career," Abboud admitted. Yet he emphasizes that human oversight remains non-negotiable. The faster pace also brings tangible benefits for patient care, alleviating the guilt of stepping away when "hundreds of patients are waiting for our read, and any one of them could be one that's actively dying."
Widespread Adoption Amidst Workforce Challenges
This shift is not isolated. A recent survey by Menlo Ventures and Morning Consult reveals that 27% of health systems are now paying for commercial AI licenses. This rate is triple the average across the broader U.S. economy, highlighting healthcare's aggressive pivot to technology.
The drivers are clear: rising healthcare needs from aging populations, persistent clinician shortages leading to burnout, and the constant search for operational efficiency. AI has found particularly strong footing in unglamorous but critical daily tasks: documenting patient notes, managing phone calls, and processing insurance claims. These are precisely the kind of "labor-dependent, rote processes done thousands of times" where AI can add immense value, notes Rupal Malani, a senior partner at McKinsey.
In diagnostics, AI's potential is profound. A University of California, Los Angeles study found AI could identify subtle signs of breast cancer missed between routine screenings, potentially reducing such cases by 30%. At Northwestern, an AI review of a million scans flagged 70 that humans missed; a manual check confirmed five needed further follow-up.
The Dark Side: Fabricated Data and 'Deskilling' Fears
However, the path is fraught with cautionary tales. Paul A. Friedman, a cardiologist at Mayo Clinic, asked ChatGPT for case studies on a complex defibrillator implantation. The AI provided convincing references to medical journal reports that, upon investigation, were completely fabricated. Friedman now adheres to a strict "trust but verify" approach, always checking references. OpenAI, ChatGPT's creator, stated its latest models are significantly better at providing accurate health information and that the tool is not a substitute for professional guidance.
Another deep concern is the potential for deskilling among medical professionals. An October study in The Lancet found physicians using AI to spot growths during colonoscopies detected fewer growths once the tool was removed. "I'm constantly worried about myself with deskilling," says Anthony Cardillo, a pathologist in New York. "Any time I outsource my thoughts... I'm worried I'm going to lose that muscle memory."
Transforming the Back Office: Insurance and Administration
Beyond diagnostics, AI is streamlining administrative burdens. Epic Systems launched a generative AI tool in 2024 to draft appeal letters for insurance denials. About 1,000 hospitals now use it. At Northwestern, this has cut the time spent on each denied claim by 23%. Similarly, New York's Mount Sinai reported a 3% increase in overturned insurance denials, netting an extra $12 million annually.
For doctors like Cheryl Wilkes of Northwestern Medicine, AI transcription and summarization of patient visits have slashed her daily paperwork from 2-3 hours to just 30 minutes of review. Yet not all implementations are smooth. Mount Sinai paused an Epic AI tool for drafting patient message responses after doctors found the drafts unhelpful and in need of excessive rewriting. In one mishap, a patient asking for a walker was told the system couldn't help; in another, a headache inquiry triggered a verbose response covering everything from minor issues to a brain tumor.
As Dean Seehusen, a family medicine doctor in Augusta, Georgia, notes, the ultimate fear is that AI might "further degrade mainstream confidence in medicine, and actually leads to a kind of Wild West for patients," especially with the rise of patient self-diagnosis via AI. Despite the risks, the industry sees no alternative. "When you think about the tsunami of need that's coming... technology is one of the only levers we have to pull," concludes Doug King, Northwestern Medicine's chief digital and innovation officer. The experiment in America's hospitals and beyond is full-speed ahead, balancing breathtaking promise with sobering reality.