Community Wishlist Survey 2015/Bots and gadgets
This page is translatable, but you can only vote on the English version.
Voting instructions:
- Voting begins on Monday, November 30th and will end on Monday, December 14th.
- Any user with at least 100 edits on any project is eligible to vote. See participation requirements for more details.
- Positive votes marked with Support and a signature will be counted toward the proposal's tally. There's no limit to the number of proposals for which you may cast support votes.
- Comments marked Neutral or Oppose are acceptable, in order to ask clarifying questions or raise potential problems for discussion, but they will not be counted as negative votes.
- Please do not add new proposals to this page; the proposals phase ended on November 22nd.
Article assessor gadget/extension
Create an easy-to-use interface for adding WikiProject assessment templates to articles. It would probably require some sort of JSON config page for listing the available templates, and would work similarly to the WikiLove extension (which adds barnstars to talk pages). Kaldari (talk) 17:30, 19 May 2015 (UTC)
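As an illustration only (the keys, template names, and overall shape below are hypothetical, not part of the proposal), such a JSON config page might look something like this:

```json
{
  "templates": [
    {
      "label": "WikiProject Medicine",
      "template": "WikiProject Medicine",
      "parameters": { "class": null, "importance": null }
    },
    {
      "label": "WikiProject Military history",
      "template": "WPMILHIST",
      "parameters": { "class": null }
    }
  ]
}
```

The gadget would read a list like this to build its template picker, much as WikiLove can be configured on-wiki.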
Earlier discussion and endorsements
Votes
- Oppose WMF specific and well within the capability of volunteer editors. There is no need for this to be an extension, the gadgets above should be sufficient. This proposal would also be made even more unnecessary by having a global repository of gadgets. MER-C (talk) 09:52, 30 November 2015 (UTC)
- Support Anything that makes adding assessment templates easier is a Good Thing. Allows us to look at overviews of the development of subjects. Casliber (talk) 05:04, 1 December 2015 (UTC)
- Support--Shizhao (talk) 09:32, 1 December 2015 (UTC)
- Support although I wouldn't use the Rater script as it currently exists as it snags the load of some pages. Something more advanced without bugs, yes. Stevie is the man! Talk • Work 13:52, 1 December 2015 (UTC)
- Support Sadads (talk) 15:41, 1 December 2015 (UTC)
- Support Goombiis (talk) 16:17, 1 December 2015 (UTC)
- Support★ Sethtalk 16:51, 1 December 2015 (UTC)
Improve the "copy and paste detection" bot
Currently we have a bot that analyses "all" new edits to en WP for copyright concerns. The output is here. And there is the potential for it to work in a number of other languages.
The problem is that it is not up as reliably as it should be. The presentation of the concerns could also be improved. I would love to see the output turned into an extension and formatted similarly to en:wp:Special:NewPagesFeed
Currently the output is sort-able by WikiProject. It would be nice to create WikiProject specific modules to go on individual project pages.
Doc James (talk · contribs · email) 03:45, 4 November 2015 (UTC)
Earlier discussion and endorsements
Votes
- Support 4nn1l2 (talk) 03:00, 30 November 2015 (UTC)
- Support --Tobias1984 (talk) 11:17, 30 November 2015 (UTC)
- Support Lugnuts (talk) 12:00, 30 November 2015 (UTC)
- Support This is one of the most amazing and Wikipedia-changing automated tools to come to editor attention in some years. Having automated copyright detection should be a priority because of the time that it saves experienced editors and the credibility that it gives to Wikimedia projects. Blue Rasberry (talk) 16:34, 30 November 2015 (UTC)
- Support Great idea which is going to save a lot of time. Bharatiya29 (talk) 17:37, 30 November 2015 (UTC)
- Support This is very important. Tryptofish (talk) 18:15, 30 November 2015 (UTC)
- Support Armbrust (talk) 22:29, 30 November 2015 (UTC)
- Support --Isacdaavid (talk) 02:06, 1 December 2015 (UTC)
- Support Risker (talk) 04:21, 1 December 2015 (UTC)
- Support Casliber (talk) 05:03, 1 December 2015 (UTC)
- Support Doc James (talk · contribs · email) 09:23, 1 December 2015 (UTC)
- Support other languages--Shizhao (talk) 09:33, 1 December 2015 (UTC)
- Support, especially "WikiProject specific modules to go on individual project pages". Perhaps this could also coordinate with the bot that creates cleanup listings for WikiProjects. Stevie is the man! Talk • Work 14:05, 1 December 2015 (UTC)
- Support --Arnd (talk) 14:41, 1 December 2015 (UTC)
- Support Mbch331 (talk) 14:48, 1 December 2015 (UTC)
- Support --Natkeeran (talk) 14:50, 1 December 2015 (UTC)
- Support as an extension that can be easily enabled on other wikis/projects. -- Dave Braunschweig (talk) 15:09, 1 December 2015 (UTC)
- Support it would be a great save of time! --Nastoshka (talk) 15:34, 1 December 2015 (UTC)
- Support Cavamos (talk) 15:34, 1 December 2015 (UTC)
- Support Goombiis (talk) 16:18, 1 December 2015 (UTC)
- Support --Jarekt (talk) 17:11, 1 December 2015 (UTC)
- Support This is an important issue. --Frmorrison (talk) 17:13, 1 December 2015 (UTC)
- Support --SucreRouge (talk) 17:40, 1 December 2015 (UTC)
- Support --Wesalius (talk) 18:49, 1 December 2015 (UTC)
- Support StevenJ81 (talk) 21:49, 1 December 2015 (UTC)
- Support--Jey (talk) 22:03, 1 December 2015 (UTC)
- Support, it would be extremely useful, and it is not just a Wikipedia tool, as it should cover all languages and all wikis (copyvio is also a big problem for Wikibooks, Wikinews, Wikiversity) — NickK (talk) 23:37, 1 December 2015 (UTC)
- Support Spencer (talk) 01:05, 2 December 2015 (UTC)
- Support Good idea. Beyond My Ken (talk) 02:09, 2 December 2015 (UTC)
- Support --Chaoborus (talk) 02:18, 2 December 2015 (UTC)
- Support --Rosiestep (talk) 02:34, 2 December 2015 (UTC)
- Support --Shubha (talk) 04:41, 2 December 2015 (UTC)
- Support --Jasonzhuocn (talk) 06:58, 2 December 2015 (UTC)
- Support Litlok (talk) 08:10, 2 December 2015 (UTC)
- Support Anything that might help reduce the rampant copy-pasting on India/Pakistan-related articles has to be A Good Thing. - Sitush (talk) 08:44, 2 December 2015 (UTC)
- Support Amen Sitush, amen. Bgwhite (talk) 09:40, 2 December 2015 (UTC)
Machine-learning tool to reduce toxic talk page interactions
- Proposal
Build an AI tool to identify occurrences of apparent talk page abuse in the English Wikipedia in real time, building on existing en:WP functions such as tags and edit filters.
- Envisaged benefits
- An edit filter could warn users before posting that their comment may need to be refactored to be considered appropriate:
- Cutting down on the number of abusive talk page messages actually posted.
- Editors could check recent changes for tagged edits:
- Bringing much-needed third eyes to talk pages where an editor may be facing sexual harassment or other types of abuse.
- Improving response times and relieving victims of the burden of having to ask an admin for help.
- Prevention of talk page escalation.
- Improvement of talk page culture.
- Enhanced editor retention.
Some prior discussion of this idea can be found at https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(proposals)#Proposed:_Tag_.2F_edit_filter_for_talk_page_abuse
As User:Denny pointed out on the Wikimedia-l mailing list yesterday, a similar project has reportedly been run in the League of Legends online gaming community to improve the quality of social interactions, with considerable success: occurrences of verbal abuse in that community are reported to have dropped by more than 40 percent.
Another interesting finding from that project was: "87 percent of online toxicity came from the neutral and positive citizens just having a bad day here or there. [...] We had to change how people thought about online society and change their expectations of what was acceptable." That is something that seems to apply to Wikipedia as well.
The game designers and scientists working on this project started out by compiling a large dataset of interactions community members deemed counterproductive (toxic behaviour, harassment, abuse) and then applied machine learning to this dataset to be able to provide near real-time feedback to participants on the quality of their interaction. (They're also looking at identifying positive, collaborative behaviours.)
I would love to see the Foundation explore if this approach could be adapted to address the very similar problems in the Wikipedia community. The totality of revision-deleted and oversighted talk page posts in the English Wikipedia could provide an initial dataset, for example; like the League of Legends community, the Foundation could invite outside labs and academic institutes to help analyse this dataset.
There are considerable difficulties involved in building a system sophisticated enough to avoid unacceptable numbers of false positives, but this is a challenge familiar from ClueBot programming, and one the League of Legends team seems to have mastered: "Just classifying words was easy, but what about more advanced linguistics such as whether something was sarcastic or passive-aggressive? What about more positive concepts, like phrases that supported conflict resolution? To tackle the more challenging problems, we wanted to collaborate with world-class labs. We offered the chance to work on these datasets and solve these problems with us. Scientists leapt at the chance to make a difference and the breakthroughs followed. We began to better understand collaboration between strangers, how language evolves over time and the relationship between age and toxicity; surprisingly, there was no link between age and toxicity in online societies."
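As an illustration only, and not part of the proposal itself: the core technique described here, training a classifier on a corpus of messages that humans have labelled acceptable or unacceptable, can be sketched in a few lines of Python. The toy training data and the class name below are invented for demonstration; a real system would train on a far larger labelled dataset (e.g. revision-deleted posts) and use far more sophisticated features.

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes over bag-of-words features."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)          # documents per class
        self.word_counts = defaultdict(Counter)      # word counts per class
        self.vocab = set()
        for text, label in zip(texts, labels):
            for w in text.lower().split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        scores = {}
        for label in self.class_counts:
            # log prior + sum of log likelihoods with add-one smoothing
            score = math.log(self.class_counts[label] / total_docs)
            label_total = sum(self.word_counts[label].values())
            for w in text.lower().split():
                score += math.log(
                    (self.word_counts[label][w] + 1)
                    / (label_total + len(self.vocab))
                )
            scores[label] = score
        return max(scores, key=scores.get)

# Invented toy data for demonstration only.
train_texts = [
    "you are an idiot", "shut up you moron", "nobody wants you here",
    "thanks for the fix", "good point well argued", "happy to help here",
]
train_labels = ["toxic", "toxic", "toxic", "ok", "ok", "ok"]
clf = TinyNaiveBayes().fit(train_texts, train_labels)
```

A production system would of course need to handle sarcasm, context, and multiple languages, which is exactly the part the proposal suggests delegating to outside labs.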
A successful project of this type could subsequently be offered to other Wikimedia projects as well. It would address a long-standing and much-discussed problem in the English Wikipedia, and put the Foundation at the leading edge of internet culture. Andreas JN466 20:03, 14 November 2015 (UTC)
Earlier discussion and endorsements
Votes
- Comment: This is something I mentioned to an OS during Wikimania actually. Basically, imagine a system that automatically hides suspicious edits pending OS review. Think of it like FlaggedRevs but with a stricter set of rules. Certain OSable content is fairly trivial to spot: emails, phone numbers, social security numbers etc. follow a certain pattern detectable with a simple regex, for example. AI would learn such rules. We often see the same content being reposted as well. I would propose an OSed content campaign where clustered content would be hand-labelled by humans (perhaps oversight users, to avoid privacy concerns). Humans would go through the unsupervised clusters and label them (this one looks like a phone number so it should be hidden, that one looks OK so it doesn't need to be hidden) and we can then use this to train AI (clustering or multi-class classification). The AI would develop a level of confidence of "bad content" such that if a post is above a certain threshold it would be hidden until reviewed. -- とある白い猫 chi? 19:47, 30 November 2015 (UTC)
- Support Anthonyhcole (talk) 09:18, 1 December 2015 (UTC)
- Support · · · Peter (Southwood) (talk): 13:50, 1 December 2015 (UTC)
- Neutral I wouldn't mind seeing a pilot done on particular parts of the English Wikipedia where negative talk page interactions tend to be a big issue, but I will withhold support pending the results of said pilot. This idea sounds like "easier said than done", but I'm willing to be open-minded and see what can be done. At any rate, I would oppose any full rollout until any serious kinks are worked out. Stevie is the man! Talk • Work 14:14, 1 December 2015 (UTC)
- Oppose ORES is a better way to do this. After all, edit filters on enwiki (the database name for en.wikipedia) have for a long time pretty much used up the limits they have.--Snaevar (talk) 16:30, 1 December 2015 (UTC)
- Oppose Difficult in Chinese. --Temp3600 (talk) 16:38, 1 December 2015 (UTC)
- Neutral I did not notice this to be a big problem. --Jarekt (talk) 17:12, 1 December 2015 (UTC)
- Support on an experimental basis. We'll never know if this will work unless we try it. Eman235/talk 21:01, 1 December 2015 (UTC)
- Oppose - We've got too much of the Friendly Space baloney going on already without adding a highly fallible bot layer. These things don't work elsewhere and won't work here. Scunthorpe, anyone? - Sitush (talk) 08:41, 2 December 2015 (UTC)
Migrate dead links to Wayback Machine
- See also: mw:Archived Pages.
Most external links have an average lifespan of about 7 years before they go dead. As Wikipedia ages, the dead external links problem grows exponentially. Internet Archive has partnered with Wikipedia to ensure all new external links have a Wayback cache. However, there has been no formal process for adding the Wayback links to Wikipedia (via the cite web |archiveurl= feature, for example). There have been attempts to automate this with various bots (see en:WP:Link rot), but the coding is non-trivial and multiple volunteer efforts have stalled. Likely what will be required is a team of programmers working full-time, something that is beyond the scope of a few volunteers working in their spare time. It's the sort of coding work that MediaWiki could sponsor and make a big difference in the quality of content, impacting every article. -- Green Cardamom (talk) 19:27, 7 November 2015 (UTC)
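As an illustration only (the helper names are invented; this is a sketch, not any existing bot's code): the lookup step such a bot would need is served by the Internet Archive's public Wayback Availability API, which, given a URL and an optional target date, returns JSON describing the closest archived snapshot.

```python
import urllib.parse

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a Wayback Availability API request URL for a (dead) link.

    timestamp, if given, is a YYYYMMDD-style string asking for the
    snapshot closest to the date the source was originally cited.
    """
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(response):
    """Extract the closest archived snapshot URL from the API's parsed
    JSON response, or None if no usable snapshot exists (e.g. the page
    was never crawled, or access is blocked by robots.txt)."""
    snap = response.get("archived_snapshots", {}).get("closest", {})
    if snap.get("available"):
        return snap.get("url")
    return None
```

A bot would fetch the query URL, parse the JSON, and, if `closest_snapshot` returns a URL, fill in the citation's |archiveurl= (and the matching archive date) accordingly.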
Earlier discussion and endorsements
Votes
- Support 4nn1l2 (talk) 03:01, 30 November 2015 (UTC)
- Support Jenks24 (talk) 10:16, 30 November 2015 (UTC)
- Support Lugnuts (talk) 12:01, 30 November 2015 (UTC)
- Support Debresser (talk) 12:55, 30 November 2015 (UTC)
- Support Wildthing61476 (talk) 13:22, 30 November 2015 (UTC)
- Support MrX (talk) 15:08, 30 November 2015 (UTC)
- Support TeriEmbrey (talk) 15:55, 30 November 2015 (UTC)
- Support Internet Archive should be Wikimedia's best friend! Blue Rasberry (talk) 16:35, 30 November 2015 (UTC)
- Support Daniel Case (talk) 17:17, 30 November 2015 (UTC)
- Support Bharatiya29 (talk) 17:37, 30 November 2015 (UTC)
- Support BethNaught (talk) 17:43, 30 November 2015 (UTC)
- Support PresN (talk) 17:48, 30 November 2015 (UTC)
- Oppose Bots notifying/helping with a migration to archive.org links where appropriate are already available. A fully automated process, which is likely to cause errors as well, is imho a rather bad idea. Money for professional development is better spent elsewhere on many other pressing issues. For link management, a half-automated approach is the way to go imho.--Kmhkmh (talk) 19:38, 30 November 2015 (UTC)
- Oppose Isn't it ironic that a community that uses the most flexible content management system you could possibly think of longs for becoming a museum of sorts, offering its readers the state that used to be seven or eight years ago? Wikipedia was successful against its competitors because it provided up to date content in any way you want. Now, it's becoming a museum. At best, an interface to the internet archive's wayback machine. But that's already available next door. Go and get moving, look for what's up now, instead. The future does not lie in the world we had yesterday.--Aschmidt (talk) 20:10, 30 November 2015 (UTC)
- Nothing stops editors from adding fresher references. Stevie is the man! Talk • Work 14:38, 1 December 2015 (UTC)
- Support --YodinT 02:00, 1 December 2015 (UTC)
- Support --Isacdaavid (talk) 02:06, 1 December 2015 (UTC)
- Support. This is a basic function for keeping the links usable. DGG (talk) 02:07, 1 December 2015 (UTC)
- Support - Not all links last forever, and I've already been converting dead links to archived versions, so it'd be better if a bot could do it. –Davey2010Talk 02:43, 1 December 2015 (UTC)
- Support some good stuff gets archived and disappears every day. This is a good idea. Casliber (talk) 05:05, 1 December 2015 (UTC)
- Support--Kippelboy (talk) 05:30, 1 December 2015 (UTC)
- Support--Gbeckmann (talk) 09:14, 1 December 2015 (UTC)
- Neutral-- Has been done. Is a great idea. Should be running in the next few weeks from what I understand. Doc James (talk · contribs · email) 09:24, 1 December 2015 (UTC)
- Support--Shizhao (talk) 09:34, 1 December 2015 (UTC)
- Support--Purodha Blissenbach (talk) 10:19, 1 December 2015 (UTC)
- Support · · · Peter (Southwood) (talk): 13:52, 1 December 2015 (UTC)
- Support. When sources are cited and then vanish, credibility suffers. LLarson (talk) 14:00, 1 December 2015 (UTC)
- Support as long as this is well-tested enough to not cause new headaches for editors. Also, I don't want to see perfectly good original links replaced with archive links from the reader's point of view -- this replacement should only occur if the original link is dead. Another concern is that sometimes webpages migrate without proper redirects instead of truly going dead -- might we also have a tool that hunts around for where the webpage moved to, so we can maintain a fresh original link? Stevie is the man! Talk • Work 14:35, 1 December 2015 (UTC)
- Support --Arnd (talk) 14:42, 1 December 2015 (UTC)
- Support --Natkeeran (talk) 14:50, 1 December 2015 (UTC)
- Neutral It's a little hard to support this as long as IA maintains their policy of retroactively applying robots.txt. In the worst case, some domain is archived for years and the archive links are used, then the registration expires, the domain is scooped up by a squatter or some other company, they put up a robots.txt denying access, and boom, the existing archives for the former site under that domain are inaccessible. Or a site is bought out by some other company, and the new company redirects every URL from the old site to their existing homepage and throws up a robots.txt denying access to everything on the old site so as to prevent Google potentially penalizing it as a SEO trick, and boom, the existing archives for the old site are inaccessible. Or a site just reorganizes everything and puts up a robots.txt blocking access to the old URLs to get them out of Google searches, and boom, the archives for the old pages are inaccessible. Anomie (talk) 14:53, 1 December 2015 (UTC)
- Support--KRLS (talk) 15:12, 1 December 2015 (UTC)
- Support --Andyrom75 (talk) 15:12, 1 December 2015 (UTC)
- Support This is a very important issue for me. People should always be able to check the source and to learn more about the topics mentioned. DanGong (talk) 15:14, 1 December 2015 (UTC)
- Support While the Internet Archive is an excellent 'catch all' source, I'd also like to see this solution be able to address the fact that many national libraries already perform web-archiving of their national domain. For example, the Pandora service of the National Library of Australia has a far more professional and consistent archive of Australian content than IA does, but it is done on a 'permission' basis due to local law. It would be good if any tool built to solve this request could be made to search other notable web-archives too. Wittylama (talk) 15:15, 1 December 2015 (UTC)
- Support--Bramfab (talk) 15:26, 1 December 2015 (UTC)
- Support -- but work is already happening with The Wikipedia Library, Internet Archive, and the Citoid team to support work on the en:w:User:Cyberbot II implementation of archiveurls. If implemented, we need to build on the existing work and conversations with these teams. Sadads (talk) 15:43, 1 December 2015 (UTC)
- Support Goombiis (talk) 16:19, 1 December 2015 (UTC)
- Support JohanahoJ (talk) 16:42, 1 December 2015 (UTC)
- Support This is a serious issue which definitely needs to be worked on, and is, in the end, going to need a much more permanent and inventive fix. I don't know if a full-on editing team is required -- maybe just an improved bot, or a section on some kind of common page (like the Wikipedia Community Portal) listing pages tagged with an "improve dead links" template (which would need to be created, I believe). I also definitely agree with user Sadads above, that any work done should build off of editors' current efforts, rather than starting completely from scratch. -- 2ReinreB2 (talk) 17:11, 1 December 2015 (UTC)
- Support This is a growing issue that needs to be worked on. --Frmorrison (talk) 17:12, 1 December 2015 (UTC)
- Support --Jarekt (talk) 17:14, 1 December 2015 (UTC)
- Support I think that automatically migrating dead links to the Wayback Machine will improve external links in Wikipedia. --Urbanecm (talk) 17:34, 1 December 2015 (UTC)
- Support --SucreRouge (talk) 17:38, 1 December 2015 (UTC)
- Support --Coentor (talk) 18:18, 1 December 2015 (UTC)
- Support --Wesalius (talk) 18:50, 1 December 2015 (UTC)
- Support --Usien6 (talk) 18:56, 1 December 2015 (UTC)
- Support --Hkoala (talk) 20:23, 1 December 2015 (UTC)
- Support --Akela (talk) 20:56, 1 December 2015 (UTC)
- Support Something really, really needs to be done about the linkrot problem. Eman235/talk 21:03, 1 December 2015 (UTC)
- Support It is very important for the verifiability now and in the future. Regards, Kertraon (talk) 21:33, 1 December 2015 (UTC)
- Support StevenJ81 (talk) 21:51, 1 December 2015 (UTC)
- Support Emptywords (talk) 00:01, 2 December 2015 (UTC) I was thinking about that for a long time.
- Support Hondo77 (talk)
- Comment Often, there is a better ("live") replacement for a dead link than the Wayback Machine's archived version. An automated process could discourage people from actively looking for such a replacement. Using an archived version should always be seen as a last-resort option; I'm not convinced that a blindly acting bot is what is needed here. Gestumblindi (talk) 01:11, 2 December 2015 (UTC)
- Support --Rosiestep (talk) 02:36, 2 December 2015 (UTC)
- Support but not without hesitation. Basically I agree with the arguments used by Gestumblindi (discouraging people from actively looking for "live" replacements). On the other hand, however, an old link is better than none. Pawel Niemczuk (talk) 02:48, 2 December 2015 (UTC)
- Support RoodyAlien (talk) 02:51, 2 December 2015 (UTC)
- Support Syced (talk) 03:52, 2 December 2015 (UTC)
- Support - Shubha (talk) 04:44, 2 December 2015 (UTC)
- Support - WillemienH (talk) 05:15, 2 December 2015 (UTC)
- Support --Moroboshi (talk) 06:57, 2 December 2015 (UTC)
- Support --Jasonzhuocn (talk) 07:00, 2 December 2015 (UTC)
- Support Litlok (talk) 08:10, 2 December 2015 (UTC)
- Support - Sitush (talk) 08:38, 2 December 2015 (UTC)