Wikipedia has taken a firm stance on generative AI, drawing clear boundaries around how editors can use it, El.kz reports, citing Interesting Engineering.
The English-language version of the platform now prohibits contributors from using large language models (LLMs) to generate or rewrite article content.
The move reflects growing concern about accuracy, reliability, and the integrity of community-driven knowledge.
The decision comes as AI tools rapidly spread across everyday workflows.
From emails to long-form content, machine-generated text has become harder to distinguish from human writing. For a platform built on verifiable information, that shift presents a serious challenge.
AI use faces limits
Under the new policy, editors cannot rely on AI to produce or significantly rewrite Wikipedia articles. The platform argues that such use often conflicts with its core content standards, especially around verifiability and sourcing.
However, Wikipedia has carved out two narrow exceptions.
Editors can use AI tools to refine their own writing, similar to grammar or style assistants. They must review all changes carefully before publishing.
The policy explains why caution is necessary. LLMs can introduce subtle distortions even when given clear instructions. It states that they “can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”
Editors may also use AI for translation. In these cases, the tool can generate a draft, but the editor must be fluent in both languages. They must verify accuracy and correct any errors before publication.
The policy reflects broader resistance within online communities to the rapid expansion of AI.
Wikipedia administrator Chaotic Enby framed the move as part of a larger cultural shift.
“My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent,” they wrote.
They also described the policy as a “pushback against enshittification and the forceful push of AI by so many companies in these last few years.”
The path to this decision was not simple. Earlier proposals for sweeping AI rules failed to gain consensus. According to Chaotic Enby, “Prior proposals for an immediate, all-encompassing community guideline on LLMs have failed due to the standard issues of addressing complex, large-scale issues at once.”
They added, “Consensus has existed on the idea of change, but not on the implementation of change.”
Detection challenges persist
Despite the new restrictions, enforcement remains difficult. Reliably identifying AI-generated text is far from straightforward, and Wikipedia acknowledges that detection tools lag behind rapidly improving models.
The platform has issued guidance to help moderators spot machine-written content. Still, it admits that some human editors may write in ways that resemble AI output.
That uncertainty raises the risk of both false positives and missed violations. Less active pages face an even higher chance of slipping through moderation gaps.
Importantly, Wikipedia operates as a network of independent communities. The English version’s policy does not apply globally. Other versions may adopt different approaches.
Spanish Wikipedia, for example, has already implemented a stricter rule. It bans LLM use entirely, including for refinement and translation.
As AI continues to evolve, Wikipedia’s decision signals a cautious approach. The platform aims to protect credibility, even as the line between human and machine writing grows harder to see.