Jimmy Wales on Grokipedia: Wikipedia Founder Says Musk’s AI Encyclopedia Isn’t Likely to Be Very Useful

NEW YORK — Wikipedia founder Jimmy Wales voiced skepticism about Elon Musk’s latest artificial intelligence venture, “Grokipedia,” a self-described rival to the online encyclopedia, calling it “unlikely to create anything very useful right now.”

Speaking Tuesday at the CNBC Technology Executive Council Summit in New York City, Wales questioned the reliability of large language models (LLMs) as trustworthy sources of public information.

“I haven’t had the time to really look at Grokipedia,” Wales said, “but apparently it has a lot of praise about the genius of Elon Musk in it. So I’m sure that’s completely neutral.”

The remarks underscored growing tensions between traditional, community-driven knowledge platforms and AI-generated information sources that are emerging as part of the broader digital information economy.

Musk introduced Grokipedia earlier this week, describing it on X, formerly known as Twitter, as an alternative to Wikipedia that would surpass it “by several orders of magnitude in breadth, depth and accuracy.” 

The project stems from xAI, Musk’s artificial intelligence company, and follows the launch of Grok, a chatbot integrated into the X social platform.

Wikipedia, launched in 2001, remains one of the internet’s most visited websites and a pillar of online knowledge. 

It operates under the nonprofit Wikimedia Foundation, which relies on community editors and verified sources rather than algorithmic text generation. 

Wales emphasized that distinction, dismissing claims that Wikipedia carries a “woke bias.” “He is mistaken about that,” Wales said in response to Musk’s criticisms. 

“His complaints about Wiki are that we focus on mainstream sources, and I am completely unapologetic about that. We don’t treat random crackpots the same as The New England Journal of Medicine, and that doesn’t make us woke.”

Experts say the debate between Wales and Musk highlights a larger question facing the internet: whether AI can replace human editorial oversight in producing accurate, neutral information.

“AI models are powerful at synthesizing data, but they are still unreliable when it comes to factual accuracy,” said Dr. Lina Park, a computational linguist at Columbia University. 

“They generate content that sounds right but often isn’t verifiable. That’s a major issue when trying to build a knowledge repository like Wikipedia.”

Artificial intelligence models such as OpenAI’s ChatGPT, Anthropic’s Claude, and now xAI’s Grok have demonstrated both the promise and pitfalls of automated text generation. 

While capable of generating readable and coherent summaries, they are also known for “hallucinations”: false or invented information presented as fact.

“Even when trained on vast datasets, LLMs have no inherent understanding of truth,” said Dr. Omar Qureshi, a data ethics researcher at Oxford University. 

“They can replicate patterns of misinformation that exist in their training data, which makes them unsuitable as the sole curators of public knowledge.”

Wikipedia operates with an estimated annual technology budget of about $175 million, according to Wales, a fraction of the billions invested annually in AI research and infrastructure by major technology firms.

Analysts project that global AI spending will exceed $550 billion next year, driven largely by cloud and hyperscale computing investments from companies like Microsoft, Google, and Amazon.

“Despite the disparity in funding, Wikipedia has achieved a level of reliability that no machine learning model has matched,” said tech analyst Maria Gutierrez of the Boston-based consultancy FutureGrid.

“The community verification process, combined with transparency in sourcing, makes it far less prone to systematic misinformation.”

Wales offered examples illustrating why he remains wary of AI-generated content. He said that when he asks chatbots to identify his wife, a “not famous but known” figure in British politics, the results are always “plausible but wrong.”

He also cited a German Wikipedia contributor who discovered that certain book citations in wiki entries were fabricated by ChatGPT. 

“It just very happily makes up books for you,” Wales said, underscoring the difficulty of distinguishing fact from fiction in AI generated material.

Among Wikipedia’s volunteer editors, Wales’s comments resonated as a defense of human-centered fact-checking.

“People underestimate how much discussion and review goes into every sentence,” said Berlin-based editor Hans Keller, who has contributed to the German-language Wikipedia for over a decade.

“An AI might write faster, but accuracy takes judgment and debate, something algorithms can’t replicate.”

Some users of X, however, expressed interest in Musk’s Grokipedia as a potential innovation.

“If Grokipedia can correct the bias people see in traditional sources, it might gain traction,” said tech entrepreneur Laila Ahmed in Karachi. “But it has to earn trust, and that’s not easy.”

Industry observers suggest that both platforms could coexist, serving different purposes. “Wikipedia is the public library of the web; Grokipedia might aim to be the AI research lab,” said digital strategist Brandon Liu.

“But credibility will decide which one endures.”

Wales acknowledged that AI could still play a role within Wikipedia, particularly in limited domains where automation can support human editors.

“Maybe it helps us do our work faster,” he said, noting ongoing experiments to use AI tools to surface new information from existing sources.

However, he added that developing a proprietary LLM for Wikipedia remains cost-prohibitive. “We are really happy Wiki is now part of the infrastructure of the world,” Wales said.

“It’s a heavy burden, and maintaining neutrality is central to that mission.”

Musk, meanwhile, remains confident in Grokipedia’s prospects, suggesting it will outstrip Wikipedia in accuracy and scope.

Yet experts caution that building a trusted, community-driven information resource cannot be achieved through automation alone. “Trust takes years of transparent collaboration,” said Park. “That’s something no LLM can replicate overnight.”

As the AI era accelerates, the clash between Grokipedia and Wikipedia reflects deeper tensions over the future of digital knowledge: between algorithmic speed and human scrutiny, between data synthesis and editorial rigor.

Whether Musk’s Grokipedia can fulfill its ambitious claims remains uncertain, but Wales’s warning was clear: the pursuit of truth online still depends on people who know how to verify it.
