Deloitte's $1.6M Canadian Healthcare Report Faces AI Errors Scrutiny

Global consulting giant Deloitte finds itself embroiled in another controversy as its $1.6 million healthcare report for the Canadian provincial government of Newfoundland and Labrador faces serious allegations of artificial intelligence-related errors. This marks the second such incident within weeks, following a similar scandal involving the firm's work for the Australian government.

What Went Wrong with Deloitte's Canadian Healthcare Report?

The easternmost Canadian province commissioned Deloitte to produce a comprehensive 526-page healthcare report, which was released in May 2025. The provincial government paid close to $1.6 million for the analysis, which was intended to address critical healthcare staffing shortages and provide guidance on topics including the impact of COVID-19 on healthcare workers, retention incentives, and virtual care implementation.

However, the report has come under intense scrutiny after the Canadian publication the Independent revealed multiple serious errors potentially linked to AI usage. Among the most concerning findings were citations of fictional academic papers used to support cost-analysis conclusions. The investigation also uncovered citations attributing work to authors who had no involvement with the referenced papers, and even citations naming coauthors who had never actually collaborated.

Gail Tomblin Murphy, an adjunct professor in the School of Nursing at Dalhousie University in Nova Scotia, told the Independent that these findings strongly suggest Deloitte is "heavily using AI to generate work". Murphy was herself listed as the author of an academic paper that does not exist, underscoring the severity of the citation problems.

Deloitte's Response and Previous AI Scandal

In response to the allegations, a Deloitte Canada spokesperson provided a statement to Fortune maintaining that the firm "stands behind the recommendations put forward" in the controversial report. The spokesperson clarified that "AI was not used to write the report; it was selectively used to support a small number of research citations" and acknowledged that the company is "revising the report to make a small number of citation corrections" while insisting these changes do not impact the core findings.

This Canadian incident comes just one month after Deloitte faced a nearly identical situation in Australia. In October, the consulting firm agreed to refund part of its $440,000 fee to the Australian government after admitting it used generative AI to help produce a report for the Department of Employment and Workplace Relations (DEWR). That report, which assessed the targeted compliance framework and its supporting IT system, was found to contain multiple inaccuracies, including non-existent references and fabricated citations, after being exposed by the Australian Financial Review.

Broader Implications for Consulting Industry

The back-to-back incidents raise serious questions about the use of artificial intelligence in professional consulting services, particularly in government contracts worth millions of dollars. Both Newfoundland and Labrador's Department of Health and Community Services and the Office of the Premier have remained silent on the matter, neither responding to media queries nor issuing public statements.

In both the Canadian and Australian cases, Deloitte has maintained that the use of AI did not alter the "substantive content, findings or recommendations" of their reports. However, the pattern of errors involving fabricated citations and references to non-existent academic work suggests significant quality control issues in how AI tools are being implemented within the consulting workflow.

These incidents occur at a time when governments worldwide are increasingly relying on external consultants for critical policy recommendations, making the accuracy and reliability of these expensive reports a matter of public concern. The recurrence of similar AI-related errors within weeks across different continents indicates this may be a systemic issue rather than an isolated incident.