Wikipedia:Large language models
Revision as of 17:41, 3 April 2023

This policy covers how large language models (LLMs) may and may not be used on Wikipedia to generate new text or modify existing text.

LLMs are natural language processing computer programs that use deep learning and neural networks to generate text and source code. Some notable ones are GPT-3, GPT-4, LaMDA, BLOOM, and LLaMA. LLMs power many applications, such as AI chatbots and AI search engines, and they are used for a growing number of features in common applications such as word processors and spreadsheets. In this policy, the terms "LLM" and "LLM output" refer to all such programs and applications and their outputs, present or future. The policy's applicability is never affected by claims that, due to technological advances, a particular LLM automatically complies with Wikipedia's policies and guidelines.

LLM-generated content can be biased or non-verifiable, may constitute original research, and may violate copyrights. Editors who are not fully aware of these risks must not edit with the assistance of these tools. LLMs must not be used for tasks or in subject areas with which the editor does not have substantial familiarity. Their outputs must be rigorously scrutinized for compliance with all applicable policies. As with all their edits, an editor is fully responsible for their LLM-assisted edits.

Furthermore, in-text attribution to the LLM provider is required for non-trivial changes to articles and drafts.

Risks and relevant policies

The use of LLMs to produce content on Wikipedia is associated with various risks. This section clarifies key issues that arise from LLM use on Wikipedia and the specific policies that apply.

LLM copyright violations

Relevant policy: Wikipedia:Copyrights
Tip: If you want to import text that you have found elsewhere or that you have co-authored with others (including LLMs), you can only do so if it is available under terms that are compatible with the CC BY-SA license.
Further: Wikipedia:Large language models and copyright

An LLM can generate copyright-violating material.[a] Generated text may include verbatim non-free content or be a derivative work. In addition, using LLMs to summarize copyrighted content (like news articles) may produce excessively close paraphrases. The copyright status of LLMs trained on copyrighted material is not yet fully understood. Their output may not be compatible with the CC BY-SA license and the GNU Free Documentation License used for text published on Wikipedia.

LLM-generated original research and "hallucinations"

Relevant policy: Wikipedia:No original research
Tip: Wikipedia articles must not contain original research – i.e. facts, allegations, and ideas for which no reliable, published sources exist. This includes any analysis or synthesis of published material that serves to reach or imply a conclusion not stated by the sources. To demonstrate that you are not adding original research, you must be able to cite reliable, published sources that are directly related to the topic of the article and directly support the material being presented.

While LLMs may give accurate answers to some questions, they may also generate responses that are biased or false, sometimes in subtle ways and sometimes not. For example, if asked to write an article on the benefits of eating crushed glass, they will sometimes do so. This can be dangerous; editors using LLMs to assist with writing Wikipedia content must therefore be especially vigilant about not adding such LLM-generated original research to the encyclopedia.

LLMs are pattern completion programs: they generate text by outputting the words most likely to come after the previous ones, based on their training data, which includes a wide variety of content from the Internet and elsewhere, including works of fiction, conspiracy theories, propaganda, and so on. Because of this, LLMs can make things up; such fabrications, in addition to being considered original research, are also called hallucinations.

Asking LLMs about obscure subjects or complicated questions, or giving them tasks to which they are not suited (i.e. tasks that require extensive knowledge or analysis), makes these types of errors much more likely.

And because LLMs answer with an air of confidence, their mistakes are easily mistaken for fact.

Unsourced or unverified LLM-generated content

Relevant policy: Wikipedia:Verifiability
Tip: Readers must be able to check that any of the information within Wikipedia articles is not just made up. This means all material must be attributable to reliable, published sources. Additionally, quotations and any material challenged or likely to be challenged must be supported by inline citations.

LLMs do not follow Wikipedia's policies on verifiability and reliable sourcing. LLMs have been known to exclude citations altogether, cite sources that don't meet Wikipedia's reliability standards (including citing Wikipedia as a source), and hallucinate citations of non-existent references by making up titles and URLs.

LLM-hallucinated content, in addition to being original research as explained above, also breaks the verifiability policy: because it is made up, there are no sources against which it can be verified.

Biases and POVs in LLM-generated text

Relevant policy: Wikipedia:Neutral point of view
Tip: Articles must not take sides, but should explain the sides, fairly and without editorial bias. This applies to both what you say and how you say it.

LLMs can produce content that is neutral-seeming in tone, but not necessarily in substance. This concern is especially strong for biographies of living persons.

Using LLMs

Writing articles

Large language models can be used to copy edit or expand existing text, to generate ideas for new or existing articles, or to create new content. Every change to an article must comply with all applicable policies and guidelines: you must become familiar with relevant sources for the content in question, and then carefully evaluate the output text for its verifiability, neutrality, absence of original research, compliance with copyright, and compliance with all other applicable policies and guidelines. Compliance with copyright includes respecting the copyright licensing policies of all sources, as well as those of the AI provider. As part of providing a neutral point of view, you must not give undue prominence to irrelevant details or minority viewpoints. If citations are generated as part of the output, you must verify that the corresponding sources are non-fictitious, reliable, relevant, and suitable, and check for text–source integrity.

Equally, raw LLM outputs must not be pasted directly into drafts. Although drafts are works in progress and their initial versions routinely fall well short of the standard required for articles, they should still be free of the serious problems outlined in the 'Risks and relevant policies' section above, in particular copyright problems and original research. Enabling editors to develop article content by starting from an unaltered LLM-outputted initial version is not one of the purposes of draft space or user space.

Using sources with LLM-generated text

All sources used for writing an article must be reliable, as described at Wikipedia:Verifiability § Reliable sources. Before using any source written by a large language model, you must verify that the content was evaluated for accuracy.

Talk pages

While you may include an LLM's raw output in your talk page comments for the purposes of discussion, you should not use LLMs to "argue your case for you" in talk page discussions. Communication among human editors is at the root of core Wikipedia processes like building and reaching consensus, and it is presumed that editors contributing to the English-language Wikipedia possess the ability to communicate with other editors in edit summaries and talk pages.

Be constructive

Wikipedia relies on volunteer efforts to review new content for compliance with our core content policies. This is often time-consuming. The informal social contract on Wikipedia is that editors will put significant effort into their contributions, so that other editors do not need to "clean up after them". Editors must ensure that their LLM-assisted edits are a net positive to the encyclopedia and do not increase the maintenance burden on other volunteers.

Wikipedia is not a testing ground for LLM development; for example, editors must not run experiments or trials on Wikipedia for that sole purpose. Edits to Wikipedia are made to advance the encyclopedia, not a technology. This is not meant to prohibit editors from responsibly experimenting with LLMs in their userspace for the purposes of improving Wikipedia.

Repeated misuse of LLMs forms a pattern of disruptive editing and may lead to a block or ban.

Attribution

For content added to articles and drafts, in-text attribution is necessary. If an LLM by OpenAI was used, this can be achieved by adding the following template to the bottom of the article: {{OpenAI|[GPT-3, ChatGPT etc.]}}. Additionally, the template {{AI generated notification}} may be added to the talk page of the article.
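
As an illustrative sketch only, the wikitext below shows where such attribution might be placed, assuming ChatGPT was the model used (the model name stands in for the bracketed placeholder above; consult the templates' documentation for the parameters they actually accept):

  <!-- At the bottom of the article's wikitext: -->
  {{OpenAI|ChatGPT}}

  <!-- On the article's talk page: -->
  {{AI generated notification}}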

Experience is required

LLMs are assistive tools, and cannot replace human judgment. Careful judgment is needed to determine whether such tools fit a given purpose. Editors using LLMs are expected to familiarize themselves with a given LLM's inherent limitations and then must overcome these limitations, to ensure that their edits comply with relevant guidelines and policies. To this end, prior to using an LLM, editors should have gained substantial experience doing the same or a more advanced task without LLM assistance.[b]

Editors should have enough familiarity with the subject matter to recognize when an LLM is providing false information—if an LLM is asked to paraphrase source material or existing article content, having some understanding of the topic will help identify whether the meaning has changed along with the wording.

Experience is required not just with Wikipedia practices but also with the proper use of LLMs, for example knowing how to formulate good prompts.

High-speed editing

Human editors are expected to pay attention to the edits they make, and ensure that they do not sacrifice quality in the pursuit of speed or quantity. For the purpose of dispute resolution, it is irrelevant whether high-speed or large-scale edits that a) are contrary to consensus or b) cause errors an attentive human would not make are actually being performed by a bot, by a human assisted by a script, or even by a human without any programmatic assistance. No matter the method, the disruptive editing must stop or the user may end up blocked. However, merely editing quickly, particularly for a short time, is not by itself disruptive. Consequently, if you are using LLMs to edit Wikipedia, you must do so in a manner that complies with Wikipedia:Bot policy, specifically WP:MEATBOT.

Handling suspected LLM-generated content

Identification and tagging

Editors who identify LLM-originated content that does not comply with our core content policies should consider placing {{AI-generated|date=June 2024}} at the top of the affected article or draft, unless they are capable of immediately resolving the identified issues themselves.

This template should not be used in biographies of living persons. In BLPs, such non-compliant content should be removed immediately and without waiting for discussion.
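
As a minimal sketch of the tagging step, assuming the affected page is not a biography of a living person, the tag goes at the very top of the page's wikitext (the date parameter conventionally records the month the tag was added):

  <!-- First line of the affected article or draft: -->
  {{AI-generated|date=June 2024}}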

Verification

All suspected LLM output must be checked for accuracy and is assumed to be fabricated until proven otherwise. LLMs are known to falsify sources such as books, journal articles, and web URLs, so be sure to first check that the referenced work actually exists. All factual claims must then be verified against the provided sources. LLM-originated content that is contentious or fails verification must be removed immediately.

Deletion

If removal as described above would result in deletion of the entire contents of the article or draft, it then becomes a candidate for deletion.[c] If the entire page appears to be factually incorrect or relies on fabricated sources, speedy deletion via WP:G3 (Pure vandalism and blatant hoaxes) may be appropriate.
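
For illustration only, one way such a nomination might look in wikitext, assuming the {{db-g3}} speedy-deletion tag remains the current means of nominating pages under WP:G3 (check the criterion's documentation before using it):

  <!-- First line of the page being nominated for speedy deletion: -->
  {{db-g3}}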

See also

(Image caption: An application of the transformer model, and therefore a subfield of deep learning, LLMs also (only) partially intersect with artificial intelligence.)

Demonstrations

Related articles

Notes

  a. ^ This also applies to cases in which the AI model is in a jurisdiction where works generated solely by AI are not copyrightable.
  b. ^ E.g., someone skilled at dealing with vandalism but doing very little article work should probably not start creating articles using LLMs before they have gained actual experience at article creation without the assistance of these models; the same logic applies to creating modules and templates, using talk pages, etc.
  c. ^ As long as the title indicates a topic that has some potential merit, it may be worth it to stubify and possibly draftify, or blank-and-redirect, articles. Likewise, drafts about viable new topics may be convertible to "skeleton drafts", i.e. near-blanked, by leaving only a brief definition of the subject. Creators of such pages should be suitably notified or warned. Whenever suspected LLM-generated content is concerned, editors are strongly discouraged from contesting instances of removal through reversion without discussing first. When an alternative to deletion is considered, editors should still be mindful of any outstanding copyright or similar critical issues which would necessitate deletion.
