Wikipedia:Large language models

While large language models (often known as "chatbots") are very useful, machine-generated text (much like human-generated text) can contain errors, be useless, or both, all while seeming accurate.

Specifically, asking an LLM to "write a Wikipedia article" can sometimes produce outright fabrication, complete with fictitious references. The output may be biased, may libel living people, or may violate copyrights. Thus, all text generated by LLMs should be verified by editors before use in articles.

Editors who are not fully aware of these risks, or who are unable to overcome the limitations of these tools, should not edit with their assistance. LLMs should not be used for tasks with which the editor does not have substantial familiarity, and their outputs should be rigorously scrutinized for compliance with all applicable policies. In any case, editors should avoid publishing content on Wikipedia obtained by asking LLMs to write original content. Even if such content has been heavily edited, alternatives that do not use machine-generated content are preferable. As with all edits, an editor is fully responsible for their LLM-assisted edits.

Wikipedia is not a testing ground. Using LLMs to write one's talk page comments or edit summaries in a non-transparent way is strongly discouraged. Any use of an LLM to generate or modify text should be mentioned in the edit summary, even if the LLM's terms of service do not require it.