Elon Musk and OpenAI CEO Sam Altman are at loggerheads over Trump’s new AI infrastructure development project ‘Stargate’. (Reuters photo)
Elon Musk and Sam Altman have been trading barbs on social media for months even as the AI products built by their respective companies become increasingly intertwined, particularly in the information sources used by OpenAI’s ChatGPT.
GPT-5.2, the latest large language model (LLM) powering ChatGPT, has been found to cite Musk-owned xAI’s Grokipedia as a source in response to a wide range of queries, according to a report by The Guardian. The Wikipedia challenger was reportedly cited nine times in ChatGPT’s responses to more than a dozen questions on various topics such as the political structures in Iran and Holocaust deniers.
Besides GPT-5.2, Anthropic’s Claude chatbot also referenced Grokipedia in its responses on topics such as petroleum production and Scottish ales. The growing number of citations suggests Grokipedia is emerging as a rival to Wikipedia. But the trend has also raised concerns about the spread of misinformation because, unlike Wikipedia, Grokipedia is written entirely by LLMs, which are prone to hallucinations.
Shortly after the launch of Grokipedia in October 2025, Wikipedia co-founder Jimmy Wales expressed serious concerns about using LLM-powered chatbots for fact-finding tasks. “The LLMs he [Musk] is using to write it are going to make massive errors. We know ChatGPT and all the other LLMs are not good enough to write wiki entries,” Wales said.
Additionally, ChatGPT and Claude citing Grokipedia in their responses underscores how misinformation can easily circulate across AI systems, creating a self-reinforcing feedback loop. Flawed or misleading information also becomes difficult to trace, correct, or fully remove once it has filtered into an AI chatbot.
The AI model’s web search “aims to draw from a broad range of publicly available sources and viewpoints […] We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations,” an OpenAI spokesperson was quoted as saying. The AI startup is also reportedly working on projects to filter out low-credibility information and influence campaigns.
ChatGPT did not cite Grokipedia when prompted to repeat misinformation about the January 6 insurrection or media bias against Donald Trump, as per the report. But when asked about more obscure topics, such as claims of the Iranian government’s links to MTN-Irancell, ChatGPT reportedly cited the Wikipedia clone and gave more assertive responses.
The chatbot also cited Grokipedia while repeating previously debunked misinformation about Sir Richard Evans’ work as an expert witness in David Irving’s trial.
When you search for a topic on Grokipedia, it shows a list of articles available on the website. xAI says all of these articles are “Fact-checked by Grok” and carry a timestamp showing when the AI last updated them.
Unlike Wikipedia, Grokipedia does not let visitors edit posts directly, but they can suggest edits or flag false information using a pop-up form. Some content on the platform also carries a disclaimer: “The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.”
On his social media platform X, Musk has previously said that an AI-generated encyclopedia is “super important for civilisation” because, without human authors, it will carry no bias towards any political ideology.
© IE Online Media Services Pvt Ltd
