Elon Musk Criticizes Google AI After It Forgets the Year, Says 'Room for Improvement'
Musk Reacts to Google AI Overview's 2027 Year Mix-Up

Elon Musk, the tech billionaire behind companies like Tesla and SpaceX, has publicly called out Google's artificial intelligence system after it made a glaring factual error about the current year. This incident adds another chapter to the ongoing saga of AI chatbots providing unreliable information to users.

Google's AI Overview Gets the Year Wrong

The controversy began when a user on the social media platform X, formerly known as Twitter, shared a screenshot of a Google Search query. The user had asked Google's AI-powered search feature, "is it 2027 next year". In response, the AI Overview feature generated an incorrect answer, stating, "No, 2027 is not next year. 2026 is next year." The response was factually wrong: the current year is 2026, which makes 2027 the next year.

Responding to the post, Elon Musk wrote a brief but pointed critique: "Room for improvement." Interestingly, Musk, who is usually quick to tout the capabilities of his own AI chatbot, Grok, did not use the opportunity to promote it. That restraint may owe something to Grok's own history of generating controversial and problematic content.

A History of AI Blunders and Controversies

This is far from the first misstep for Google's AI Overview. The feature sparked widespread concern shortly after its launch when it was found advising users to add glue to pizza sauce and to eat rocks for essential vitamins. While Google appeared to make progress in fixing such inaccuracies with its advanced Gemini models, problems persisted.

In another instance, the AI wrongly claimed that 'Call of Duty: Black Ops 7' was a fake game. Following the latest year mix-up, Google appears to have temporarily disabled the AI Overview for the specific query "is it 2027 next year." However, adding the term "AI Overview" to the search still triggers the incorrect response.

Notably, asking the same question to Google's AI Mode, which is powered by the more advanced Gemini 3 model, does not produce the same error. This suggests the flaw may be isolated to a specific implementation. However, a recent investigation by The Guardian revealed that AI Mode has its own serious issues, particularly in providing dangerous health advice.

The investigation found the AI gave inaccurate and potentially harmful medical information. In one alarming example, it wrongly advised pancreatic cancer patients to avoid high-fat foods, which is the opposite of expert guidance and could increase the risk of death. It also provided bogus information on liver function tests and completely wrong details about women's cancer screenings, which could lead people to dismiss serious symptoms.

Musk's Grok AI Is No Stranger to Trouble

Elon Musk's decision not to highlight Grok in this context is telling. His chatbot has been embroiled in controversies of its own. In the past, Grok has labeled Musk and US President Donald Trump the "biggest threat to America." More recently, it faced severe criticism after being used to generate sexually explicit deepfakes from images of women and children, raising major ethical concerns about AI safety and misuse.

The repeated failures of major AI systems from leading tech companies highlight the significant challenges that remain in deploying this technology reliably. From basic factual errors to life-threatening medical misinformation, these incidents underscore the urgent need for more robust safeguards, better training data, and continuous human oversight.

For users in India and worldwide, the episode serves as a crucial reminder to treat information from AI chatbots with a healthy dose of skepticism and to always verify critical details, especially those concerning health and finance, with trusted expert sources.