Wikipedia:Wikipedia Signpost/2023-12-24/Recent research

Revision as of 23:00, 23 December 2023

Recent research

"LLMs Know More, Hallucinate Less" with Wikidata


A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.


"Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata"

Overview of how the authors' "WikiSP" semantic parser is used to answer a user's question:
"An entity linker is used to link entities in the user query to their unique ID in Wikidata; e.g. “A Bronx Tale” is linked to entity ID “Q1130705”. The query and entity linker outputs are fed to the WikiSP semantic parser to produce a modified version of SPARQL, where property IDs (e.g. “P915”) are replaced by their unique string identifiers (e.g. “filming_location”). If applying the [SPARQL] query to Wikidata fails to return a result, we default to [OpenAI's large language model] GPT-3, labeling the result as a GPT-3 guess. Returned answers are presented in the context of the query, so the user can tell if the answer is acceptable; if not, we also show the guess from GPT-3. Here WikiSP mistakenly uses “filming_location” instead of “narrative_location”; the user detects the mistake, thumbs down the answer, and the GPT-3 answer is provided."

This paper[1] (by five graduate students at Stanford University's computer science department and Monica S. Lam as last author) sets out to show that

While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs to improve their factuality.

To do this, the paper "presents WikiSP, a few-shot sequence-to-sequence semantic parser for Wikidata that translates a user query, along with results from an entity linker, directly into SPARQL queries [to retrieve information from Wikidata]." It is obtained by fine-tuning the LLaMA large language model.

For example, the user question "What year did giants win the world series?" is supposed to be converted into the query SELECT DISTINCT ?x WHERE {?y wdt:sports_season_of_league_or_competition wd:Q265538; wdt:winner wd:Q308966; wdt:point_in_time ?x. }. The paper uses a modified SPARQL syntax that replaces numerical property IDs (here, P3450) with their English-language label (here, "sports season of league or competition"). The authors motivate this choice by observing that "While zero-shot LLMs [e.g. ChatGPT] can generate SPARQL queries for the easiest and most common questions, they do not know all the PIDs and QIDs [property and item IDs in Wikidata], and nor is it possible to include them in a prompt."
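The modified syntax can be mapped back to standard, executable SPARQL with a simple label-to-PID lookup before the query is sent to Wikidata. A minimal sketch of that substitution step — note that the `LABEL_TO_PID` table below covers only this one example, and the PIDs for "winner" (P1346) and "point in time" (P585) come from our own lookup of Wikidata properties, not from the paper:

```python
# Map the readable property labels used by WikiSP's output syntax back to
# Wikidata's numeric PIDs, so the query runs on the standard SPARQL endpoint.
# A real system would load the full Wikidata property table; this covers
# only the "giants win the world series" example.
LABEL_TO_PID = {
    "sports_season_of_league_or_competition": "P3450",  # stated in the article
    "winner": "P1346",          # our lookup, not from the paper
    "point_in_time": "P585",    # our lookup, not from the paper
}

def to_standard_sparql(query: str) -> str:
    """Replace wdt:<label> tokens with wdt:<PID> tokens."""
    for label, pid in LABEL_TO_PID.items():
        query = query.replace(f"wdt:{label}", f"wdt:{pid}")
    return query

readable = (
    "SELECT DISTINCT ?x WHERE {?y "
    "wdt:sports_season_of_league_or_competition wd:Q265538; "
    "wdt:winner wd:Q308966; wdt:point_in_time ?x. }"
)
print(to_standard_sparql(readable))
```

The substitution runs in the opposite direction at training time: the authors replace PIDs with labels precisely because the fine-tuned model handles meaningful tokens better than opaque IDs.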

To evaluate the performance of "WikiSP", and as a second contribution of the paper, the authors present

[...] WikiWebQuestions, a high-quality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world data with SPARQL annotation. [...]

Despite being the most popular large knowledge base for a long time, existing benchmarks on Wikidata with labeled SPARQL queries are unfortunately either small or of low quality. On the other hand, benchmarks over the deprecated Freebase still dominate the KBQA research with better-quality data.

Using this new benchmark, "Our experimental results demonstrate the effectiveness of [WikiSP], establishing a strong baseline of 76% and 65% answer accuracy in the dev and test sets of WikiWebQuestions, respectively." However, the paper's "Limitations" section hints that despite the impressive "12 billion facts" factoid that the paper opens with, Wikidata's coverage may be too limited to answer most user questions in a satisfying manner:

Even though knowledge bases are an important source of facts, a large portion of the knowledge available in digital form (e.g. Wikipedia, news articles, etc.), is not organized into knowledge bases. As such, the results of this paper can be considered complementary to the larger body of fact-checking research based on free text.

To address this weakness, the authors combine this Wikidata-based setup with a standard LLM that provides the answer if the Wikidata query fails to return a result. They state that

By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96% of the questions in dev.
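The control flow described — try the parsed Wikidata query first, fall back to a GPT-3 guess only when it returns nothing, and label the two kinds of answer differently — can be sketched as below. The function names and return shape are illustrative assumptions, not the authors' released code:

```python
from typing import Callable, List, Optional

def answer(question: str,
           parse_to_sparql: Callable[[str], str],
           run_sparql: Callable[[str], Optional[List[str]]],
           llm_guess: Callable[[str], str]) -> dict:
    """Semantic parser + Wikidata first; labeled LLM guess as fallback."""
    sparql = parse_to_sparql(question)
    results = run_sparql(sparql)
    if results:
        # Verifiable answer grounded in Wikidata; the query is shown to the
        # user so a wrong parse (e.g. filming_location vs. narrative_location)
        # can be detected and thumbed down.
        return {"answer": results, "source": "wikidata", "query": sparql}
    # No result from Wikidata: fall back to the LLM, flagged as a guess.
    return {"answer": llm_guess(question), "source": "llm_guess", "query": sparql}
```

Keeping the two answer sources distinct is what makes the combined system's 96% figure meaningful: only the Wikidata-backed portion is verifiable.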

Data and evaluation code from the paper have been released in a GitHub repo, where the authors state that "We are now working on releasing fine-tuned models."

The paper's endeavour bears some similarity to a paper authored by a different team of Stanford graduate students with professor Lam that sought to use Wikipedia (rather than Wikidata) to reduce LLM hallucinations; see the review in our July issue: "Wikipedia-based LLM chatbot 'outperforms all baselines' regarding factual accuracy".

Briefly

  • See the page of the monthly Wikimedia Research Showcase for videos and slides of past presentations.
  • US-based editors wanted for workshop on research ethics: For a research project titled "Beyond the Individual: Community-Engaged Design and Implementation of a Framework for Ethical Online Communities Research", a team from the University of Minnesota's GroupLens lab is seeking US-based Wikipedia editors to participate in a 2-hour remote workshop to discuss "ways that research can help or harm the community" (following up on a previous workshop with non-US-based English Wikipedia editors). Interested users can sign up on the project's research talk page on Meta-wiki.


Other recent publications

Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.

"Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata"

From the abstract:[2]

"In this work, we explore the use of Large Language Models (LLMs) for knowledge engineering tasks in the context of the ISWC 2023 LM-KBC Challenge. For this task, given subject and relation pairs sourced from Wikidata, we utilize pre-trained LLMs to produce the relevant objects in string format and link them to their respective Wikidata QIDs. [...] The method achieved a macro-averaged F1-score of 0.701 across the properties, with the scores varying from 1.00 to 0.328. These results demonstrate that the knowledge of LLMs varies significantly depending on the domain and that further experimentation is required to determine the circumstances under which LLMs can be used for automatic Knowledge Base (e.g., Wikidata) completion and correction. The investigation of the results also suggests the promising contribution of LLMs in collaborative knowledge engineering. LLMKE won Track 2 of the challenge.


"Large language models learn to organize concepts in ways that are strikingly similar to how concepts are organized in [Wikidata]"

From the abstract:[3]

"Knowledge bases such as WikiData provide large-scale, high-quality representations of inferential semantics and world knowledge. We show that large language models learn to organize concepts in ways that are strikingly similar to how concepts are organized in such knowledge bases. Knowledge bases model collective, institutional knowledge, and large language models seem to induce such knowledge from raw text. We show that bigger and better models exhibit more human-like concept organization, across four families of language models and three knowledge graph embeddings."


"Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples" from Wikidata

From the abstract:[4]

[...] we explore methods to make better use of the multilingual annotation and language agnostic property of KG [ knowledge graph ] triples, and present novel knowledge based multilingual language models (KMLMs) trained directly on the knowledge triples. We first generate a large amount of multilingual synthetic sentences using the Wikidata KG triples. Then based on the intra- and inter-sentence structures of the generated data, we design pretraining tasks to enable the LMs to not only memorize the factual knowledge but also learn useful logical patterns. Our pretrained KMLMs demonstrate significant performance improvements on a wide range of knowledge-intensive cross-lingual tasks, including named entity recognition (NER), factual knowledge retrieval, relation classification, and a newly designed logical reasoning task.
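The first step the abstract describes — verbalizing Wikidata KG triples into synthetic training sentences — can be sketched roughly as below. The template and example triples are illustrative; the paper's actual generation handles multilingual labels and richer intra-sentence structure:

```python
def verbalize(subject: str, relation: str, obj: str) -> str:
    """Turn one (subject, relation, object) triple into a synthetic sentence."""
    return f"{subject} {relation.replace('_', ' ')} {obj}."

# Illustrative triples, not drawn from the paper's data.
triples = [
    ("Berlin", "capital_of", "Germany"),
    ("Marie Curie", "educated_at", "University of Paris"),
]
for t in triples:
    print(verbalize(*t))
```

Because the same triple carries labels in many languages, the identical template yields parallel sentences across languages — the "language agnostic property" the authors exploit.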


"KGConv, a Conversational Corpus grounded in Wikidata"

From the abstract:[5]

"We present KGConv, a large, conversational corpus of 71k conversations where each question-answer pair is grounded in a Wikidata fact. Conversations contain on average 8.6 questions and for each Wikidata fact, we provide multiple variants (12 on average) of the corresponding question using templates, human annotations, hand-crafted rules and a question rewriting neural model. We provide baselines for the task of Knowledge-Based, Conversational Question Generation. [...]"


"WikiDialog" dataset: "Dialog inpainting" using Wikipedia

From the abstract[6] of a paper presented by a team of Google researchers at last year's ICML conference:

"[...] conversational question answering (ConvQA) systems have long been stymied by scarce training data that is expensive to collect. To address this problem, we propose a new technique for synthetically generating diverse and high-quality dialog data: dialog inpainting. Our approach takes the text of any document and transforms it into a two-person dialog between the writer and an imagined reader: we treat sentences from the article as utterances spoken by the writer, and then use a dialog inpainter to predict what the imagined reader asked or said in between each of the writer's utterances. By applying this approach to passages from Wikipedia and the web, we produce WikiDialog and WebDialog, two datasets totalling 19 million diverse information-seeking dialogs -- 1,000x larger than the largest existing ConvQA dataset. Furthermore, human raters judge the answer adequacy and conversationality of WikiDialog to be as good or better than existing manually-collected datasets."

As "a real example of a dialog inferred from a Wikipedia passage using dialog inpainting" the paper presents the following (abridged) exchange between an "imagined reader" of the Freshman 15 article and a Wikipedia "Writer" who (after the initial greeting) always answers with excerpts from the article, with all other sentences filled in by the inpainter:

  • Writer: "I'm an automated assistant. I can tell you about Freshman 15."
  • Imagined Reader: "How does the freshman 15 relate to eating habits?"
  • Writer: "In parental-supervised diets, students also usually ingest the proper proportion [...]"
  • Imagined Reader: "What is the cause of this?"
  • Writer: "This is because when students go off to college, they face an independence [...]"
  • Imagined Reader: "Do people tend to eat healthier or less healthy when they are away from home?"
  • Writer: "Research has shown that over 60 percent of college students commonly ingest [...]"
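The construction behind an exchange like the one above — writer turns taken verbatim from the article, reader turns predicted in between — can be sketched as a simple interleaving. Here `inpaint_reader_turn` stands in for the trained inpainter model and is an assumption, not the authors' API:

```python
from typing import Callable, List, Tuple

def inpaint_dialog(sentences: List[str],
                   inpaint_reader_turn: Callable[[List[str]], str],
                   topic: str) -> List[Tuple[str, str]]:
    """Interleave article sentences (writer) with predicted reader turns."""
    dialog = [("writer", f"I'm an automated assistant. I can tell you about {topic}.")]
    for sentence in sentences:
        # The inpainter predicts what the imagined reader asked or said
        # before this sentence, conditioned on the dialog so far.
        context = [turn for _, turn in dialog]
        dialog.append(("reader", inpaint_reader_turn(context)))
        dialog.append(("writer", sentence))
    return dialog
```

Each writer utterance is guaranteed to be grounded in the source passage; only the reader side is synthetic, which is why the authors can scale the approach to millions of Wikipedia passages.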


References

  1. ^ Xu, Silei; Liu, Shicheng; Culhane, Theo; Pertseva, Elizaveta; Wu, Meng-Hsi; Semnani, Sina; Lam, Monica (December 2023). "Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata". In Bouamor, Houda; Pino, Juan; Bali, Kalika (eds.). Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. EMNLP 2023. Singapore: Association for Computational Linguistics. pp. 5778–5791. doi:10.18653/v1/2023.emnlp-main.353. Data and evaluation code
  2. ^ Zhang, Bohui; Reklos, Ioannis; Jain, Nitisha; Peñuela, Albert Meroño; Simperl, Elena (2023-09-15), Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata, arXiv code
  3. ^ Gammelgaard, Mathias Lykke; Christiansen, Jonathan Gabel; Søgaard, Anders (2023-08-29), Large language models converge toward human-like concept organization, arXiv
  4. ^ Liu, Linlin; Li, Xin; He, Ruidan; Bing, Lidong; Joty, Shafiq; Si, Luo (December 2022). "Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples". In Goldberg, Yoav; Kozareva, Zornitsa; Zhang, Yue (eds.). Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. EMNLP 2022. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. pp. 6878–6890. doi:10.18653/v1/2022.emnlp-main.462.
  5. ^ Brabant, Quentin; Lecorve, Gwenole; Rojas-Barahona, Lina M.; Gardent, Claire (2023-08-29), KGConv, a Conversational Corpus grounded in Wikidata, arXiv
  6. ^ Dai, Zhuyun; Chaganty, Arun Tejasvi; Zhao, Vincent Y.; Amini, Aida; Rashid, Qazi Mamunur; Green, Mike; Guu, Kelvin (2022-06-28). "Dialog Inpainting: Turning Documents into Dialogs". Proceedings of the 39th International Conference on Machine Learning. International Conference on Machine Learning. PMLR. pp. 4558–4586. Dataset, poster presentation

