A surprising development has emerged in the AI landscape: OpenAI's ChatGPT is now pulling information from Grokipedia, the controversial AI-generated encyclopedia created by Elon Musk's xAI company.

What is Grokipedia?

Grokipedia launched in October 2025, positioned as an alternative to Wikipedia. The project emerged after Musk repeatedly criticized Wikipedia for what he perceived as bias against conservative viewpoints. However, the platform quickly drew scrutiny for its content quality and ideological slant.

Early investigations revealed troubling patterns. While many articles appeared to be directly copied from Wikipedia, some entries contained deeply problematic content. Reports highlighted claims that pornography contributed to the AIDS crisis, the inclusion of what observers called "ideological justifications" for slavery, and the use of denigrating language toward transgender individuals.

The encyclopedia is associated with Grok, a chatbot that previously made headlines for describing itself using Nazi imagery and being exploited to create sexualized deepfakes on X (formerly Twitter).

How ChatGPT is Using Grokipedia

Recent testing revealed that GPT-5.2 cited Grokipedia nine times across various queries. Interestingly, the AI system avoided referencing Grokipedia for topics where its inaccuracy has been widely documented, such as the January 6 insurrection or the HIV/AIDS epidemic.

Instead, ChatGPT pulled from Grokipedia on more obscure subjects. One notable instance involved claims about Sir Richard Evans that had previously been fact-checked and debunked by journalists. The pattern suggests that AI systems may be inadvertently amplifying misinformation on lesser-known topics where scrutiny is less intense.

It's not just OpenAI, either: Anthropic's Claude has also been observed citing Grokipedia in response to certain queries.

OpenAI's Response

When questioned about this development, an OpenAI spokesperson explained that the company strives to draw from a diverse range of publicly available sources and perspectives. This approach, while aimed at comprehensiveness, raises questions about how AI systems should handle sources with documented accuracy issues.

The Broader Implications

This situation highlights a critical challenge facing the AI industry: how to balance diverse viewpoints with factual accuracy. As large language models increasingly shape how people access information, the sources these systems rely on become crucial.

When an AI encyclopedia created in response to perceived political bias itself becomes a cited source in major AI systems, it creates a feedback loop that could spread inaccurate or ideologically skewed information further into the digital ecosystem.

The development underscores the need for greater transparency in how AI companies select and evaluate their information sources, particularly as these systems become primary knowledge tools for millions of users worldwide.

What This Means for Users

For anyone using AI chatbots for research or information gathering, this serves as an important reminder to verify claims, especially on politically sensitive or obscure topics. Cross-referencing information against multiple reliable sources remains essential, since even sophisticated AI systems can project confidence regardless of the quality of their underlying sources.

As AI-generated content increasingly populates the internet, and AI systems cite that same content, the potential for amplifying both accurate information and misinformation grows exponentially. The Grokipedia situation may be an early warning sign of challenges to come.