AI Chatbots Cite Elon Musk's Grokipedia, Raising Misinformation Concerns
Major AI chatbots, including OpenAI's ChatGPT and Anthropic's Claude, have been found to repeatedly cite Elon Musk's Grokipedia as a source in their responses. The trend has raised concerns about how misinformation can circulate across AI platforms, creating self-reinforcing feedback loops that could compromise the reliability of these increasingly popular tools.
The Grokipedia Citation Pattern
According to a detailed report by The Guardian, GPT-5.2, the latest large language model powering ChatGPT, has been observed citing Musk-owned xAI's Grokipedia as a source in response to a wide range of user queries. The AI-powered Wikipedia challenger was reportedly cited nine times in ChatGPT's responses to more than a dozen questions covering diverse topics including political structures in Iran and discussions about Holocaust deniers.
Beyond ChatGPT, Anthropic's Claude chatbot has also been found referencing Grokipedia on subjects ranging from petroleum production to Scottish ales. The pattern suggests Grokipedia is emerging as a fast-growing rival to traditional knowledge sources like Wikipedia, but one built on fundamentally different mechanisms that raise questions about information accuracy.
Fundamental Concerns About AI-Generated Content
The increasing reliance on Grokipedia by major AI systems has heightened concerns about the spread of misinformation. Unlike Wikipedia, which relies on human editors and established verification processes, Grokipedia is generated entirely by large language models, which are prone to hallucinations and factual inaccuracies. This difference in how content is created poses a significant challenge to maintaining information integrity.
Shortly after Grokipedia's launch in October 2025, Wikipedia co-founder Jimmy Wales expressed serious reservations about using LLM-powered chatbots for fact-finding tasks. "The LLMs he [Musk] is using to write it are going to make massive errors. We know ChatGPT and all the other LLMs are not good enough to write wiki entries," Wales stated, emphasizing the limitations of current AI technology in handling complex factual information.
The Self-Reinforcing Misinformation Loop
When AI chatbots cite AI-generated content, flawed or misleading information becomes difficult to trace, correct, or fully remove once it has filtered into AI systems. The result is a self-reinforcing feedback loop in which misinformation can circulate and amplify across multiple AI platforms, gaining apparent credibility through repeated citation.
An OpenAI spokesperson addressed these concerns by explaining that their AI model's web search "aims to draw from a broad range of publicly available sources and viewpoints. We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations." The company is reportedly working on additional projects to filter out low-credibility information and influence campaigns.
Selective Citation Patterns
Notably, the report found that ChatGPT cites Grokipedia selectively. The chatbot did not cite it when prompted to repeat certain types of misinformation, such as claims about the January 6 insurrection or media bias against Donald Trump. However, when asked about more obscure topics, including claims of the Iranian government's links to MTN-Irancell, ChatGPT reportedly cited the Wikipedia clone and answered more assertively.
In another concerning instance, the chatbot cited Grokipedia and repeated previously debunked misinformation about Sir Richard Evans' work as an expert witness in David Irving's trial, demonstrating how historical inaccuracies can be perpetuated through AI systems.
Understanding Grokipedia's Structure
Grokipedia represents a fundamentally different approach to knowledge aggregation compared to traditional encyclopedias. When users search for topics on the platform, they receive a list of available articles, all of which are "Fact-checked by Grok" according to xAI's claims, with timestamps indicating when the AI last updated the content.
Unlike Wikipedia's collaborative editing model, Grokipedia users cannot directly edit posts but can suggest edits or flag false information through a pop-up form. Some content on the platform carries a disclaimer stating "The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License," indicating a complex relationship with existing knowledge sources.
The Philosophical Debate
On his social media platform X, Elon Musk has previously argued that an AI-generated encyclopedia is "super important for civilisation" because the absence of human authors theoretically eliminates bias toward any particular political ideology or school of thought. This perspective marks a fundamental philosophical departure from traditional knowledge curation and raises questions about how society should verify information in the age of artificial intelligence.
The growing integration of AI-generated content into mainstream AI systems represents a critical juncture for the technology industry, requiring careful consideration of how to balance innovation with information integrity and reliability.