The online encyclopedia updated its guidelines on AI-generated content last week. “Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies,” it said. The update applies to the English version of Wikipedia, the report added.
“Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” Wikipedia said.
The updated guidance also allows editors to use AI to translate articles from other language editions of Wikipedia into English. However, editors must adhere to the platform’s guidelines on LLM-assisted translations, which require them to have sufficient knowledge of the source language to verify accuracy, the report added.
The platform also cautioned that some editors’ writing styles may resemble those of LLMs, adding that stylistic or linguistic cues alone are not sufficient evidence to justify sanctions.
“It is best to consider the text’s compliance with core content policies and recent edits by the editor in question,” it said.
The policy update comes after months of deliberation by Wikipedia editors over AI-generated articles. Last year, the platform also implemented a policy allowing the “speedy deletion” of poorly written articles to limit AI use, according to The Verge. Editors have also launched WikiProject AI Cleanup, an initiative aimed at tackling AI-generated content and helping others identify it, the report added.