Research talk:Automated classification of article importance

From Meta, a Wikimedia project coordination wiki
This is an archived version of this page, as edited by LZia (WMF) (talk | contribs) at 01:50, 13 May 2017 (→‎Rethinking importance). It may differ significantly from the current version.

Work log



Importance of Wikidata Items

It seems like the proposed notion of importance of articles to a language version of Wikipedia is restricted to articles that already exist in that language. But for smaller Wikipedias, there may be many important articles that have not yet been created. It would be really valuable to have a notion of importance for any entity in Wikidata that has an article in at least one language. In particular, it would be great to have a global, language-independent list of Wikidata items ranked by importance as well as a separate importance ranking for each language. These rankings could be used to not only prioritize work on existing articles, but also prioritize work on creating new articles and filling knowledge gaps. Ewulczyn (WMF) (talk) 00:53, 2 February 2017 (UTC)

Hi Ewulczyn (WMF), thank you for the insightful comments! You are right that the current proposal aims to determine importance of already existing articles. I agree that being able to determine importance for articles that do not exist, or for any Wikidata entity for that matter, would be valuable. Then some of the other notions of importance could potentially be a filter on top of that calculation (e.g. "any Wikidata entity with >= 1 article in >= 1 language"). However, I do not think we know much about how importance works across different scopes. Some of it is found in the discussions around List of articles every Wikipedia should have, where contributors argue about whether the list reflects a global perspective. There are also some research papers that look into cultural differences in importance (e.g. Research:Newsletter/2014/June#"Interactions of cultures and top people of Wikipedia from ranking of 24 language editions" reviews a preprint of a PLOS ONE paper that does this).
In summary, I am wondering if this suggests three potential research topics:
  1. Importance by scope
    • Are there meaningful differences in importance depending on the scope? In other words, if we ask a set of WikiProject members, will they agree with importance ratings that were gathered using an algorithm based on Wikidata? What are their reasons for agreeing/disagreeing?
  2. Importance in the context of Wikidata
    • How can we determine the importance of entities in Wikidata?
    • How does Wikidata importance relate to Wikipedia article importance?
  3. Recommendations for article creation
    • This should specifically target smaller Wikipedias. In other words, we are perhaps interested in determining some sort of base set of articles. This might turn out to be List of articles every Wikipedia should have, or it might turn out to be something different.
    • A key element would be to study what happens when local and global scope collide. In our WikiSym 2012 paper we did a rudimentary investigation of articles in a single language and found that they had a limited scope. Based on the PLOS ONE paper mentioned above, different language editions have slightly different focuses. It would therefore most likely be useful if we had a way of adding a local influence to a global scope, or something along those lines, in order to improve the quality of the recommendations.
I'm not certain to what extent these should be incorporated into the proposal. Pinging User:Halfak (WMF) so he knows about this thread. Thanks again for the comments! Cheers, Nettrom (talk) 19:44, 3 February 2017 (UTC)

Rethinking importance

Thanks for the proposal. In Research:Automated_classification_of_article_importance#Proposed_Project you say: "Training a machine learner based on Wikipedian article importance assessments that can predict article importance both globally (e.g. for a Wikipedia edition as a whole) and locally (e.g. within a specific WikiProject)." What do you mean by "Wikipedian article importance assessments"? Are you referring to predicting the standard class types used in some Wikipedia languages, for example, English Wikipedia (see here for a table of classes)?

On a separate note and to document what we have already discussed offline: You have two main paths for tackling the problem of defining what importance should mean. One is the macro-economic path (and I won't be able to comment on that), the other is the more applied path that fields such as CS use. For the latter, you will need to focus on a goal to be able to define importance. For example, if your goal is to find the top n articles a reader should read to learn about topic x, you may need to define importance differently than if your goal is to focus on the task of sorting existing articles that need to be expanded.

And last but not least, feel free to use the features used in section 2.2 of https://arxiv.org/abs/1604.03235 when/if they become relevant to your work. Good luck! :) --LZia (WMF) (talk) 18:36, 6 February 2017 (UTC)

Hi LZia (WMF), thanks for the comments! The "article importance assessments" that I'm referring to are the Low/Mid/High/Top-importance ratings found on article talk pages, yes. I see from the link you provided that there are also "Bottom", "NA", and "??/Unknown" ratings. "Bottom" appears to be used sparingly compared to the common ones, and I don't see "NA" and "??" ratings being very useful, as they are basically a "non-label".
I agree with your example that importance might be defined differently depending on what scope of articles we're looking at. It reminded me of what I wrote in my response above: we currently do not know much about how importance differs across scopes. To some extent I think it also could blur the line between importance and relevance. The latter is a topic I have so far mostly scratched the surface of, but I've read a couple of interesting papers in JASIST (Hjørland, B. "The foundation of the concept of relevance", 2010; and Cole, C. "A theory of information need for information retrieval that connects information to knowledge", 2011). At the moment I am not sure I am able to articulate my thoughts about it well, so I'll keep working on it.
And thanks for mentioning the paper, I'll be sure to revisit it and definitely grab features from it where it would be useful. Cheers, Nettrom (talk) 01:26, 8 February 2017 (UTC)
@Nettrom: I just ran into this while familiarizing myself with the board candidates. ;) Sending it to you in case you haven't seen it. FYI. --LZia (WMF) (talk) 01:50, 13 May 2017 (UTC)

Looking within WikiProjects

I generated some inlink counts for a few articles in WikiProject biology. See https://quarry.wmflabs.org/query/16204

Article              Articles in WikiProject    Links from within WikiProject
Biology              2520                       418
Abdominal_cavity     2520                       4
Abyssal_plain        2520                       7

--Halfak (WMF) (talk) 22:28, 6 February 2017 (UTC)


Oooh! Here's a query that gets inlink counts for all pages linked from the entire WikiProject. https://quarry.wmflabs.org/query/16210 --Halfak (WMF) (talk) 23:36, 6 February 2017 (UTC)
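To illustrate how counts like those in the queries above might be used, here is a minimal sketch that ranks articles by their within-project inlink counts. The data structure, function names, and numbers are hypothetical, mirroring the sample table; the real counts come from the Quarry SQL queries.

```python
# Within-project inlink counts, mirroring the sample table above.
# Hypothetical data structure; real counts come from the Quarry queries.
inlink_counts = {
    "Biology": 418,
    "Abdominal_cavity": 4,
    "Abyssal_plain": 7,
}
N_PROJECT_ARTICLES = 2520  # articles tagged by the WikiProject


def rank_by_inlinks(counts):
    """Rank articles by within-project inlink count, descending."""
    return sorted(counts, key=counts.get, reverse=True)


def inlink_share(counts, n_articles):
    """Fraction of project articles linking to each page, a crude
    proxy for local (within-project) importance."""
    return {title: c / n_articles for title, c in counts.items()}


print(rank_by_inlinks(inlink_counts))
print(inlink_share(inlink_counts, N_PROJECT_ARTICLES))
```

One design note: normalizing by the number of articles in the project makes the signal comparable across WikiProjects of very different sizes, which raw counts are not.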

definition of importance

Reading about this project, it occurs to me that a very important distinction is being overlooked. Originally, article importance was conceived as an indigenous metric: how important is a given article to the self-defined community of editors? Many of the proposed approaches subvert this by using external inputs (from other cultures, world-views, languages) to define importance. If we assume that wikis are going to continue to be self-governing, self-organising communities, the imposition of a foreign definition of importance may not be well received, particularly if it subsumes an existing indigenous metric (even if that metric is poorly used). Stuartyeates (talk) 20:25, 8 February 2017 (UTC)

Hi Stuartyeates, thank you for reading and commenting on the proposal! I am not sure I completely understand your concern. Historically, Wikipedia contributors have used several external inputs to determine importance: for example, coverage in the 1911 Encyclopædia Britannica has been used as an argument that a topic is important, and external sources (e.g. Google) are used to determine notability. Sometimes inputs have also come from within the Wikimedia sphere; I know, for instance, that some contributors looking to translate content into a given edition would seek out articles covered by a large number of languages, making "number of language editions with this article" a measure of importance. Lastly, there is the use of Rambot and Lsjbot to create a large number of articles based on external databases, implicitly arguing that anything found in those databases is important.
That being said, I am a fan of human-in-the-loop tools, and I know from my work with SuggestBot and the discussion in the Signpost regarding our research paper on the misalignment between popularity and quality that contributors' interest in external input on what is important varies greatly. That is of course something I'll keep in mind as we move along and build tools around this. Cheers, Nettrom (talk) 23:21, 9 February 2017 (UTC)
en.wiki editors reaching a consensus to use the 1911 Britannica is fine, because en.wiki editors made a decision about en.wiki and collectively acted on it. A framework from the WMF, or one based on a large group of foreign wikis, is completely different and is likely to be viewed with scepticism by some wikis, particularly those with cultures struggling with the results of colonialism. Stuartyeates (talk) 09:18, 10 February 2017 (UTC)

Past discussion about Importance modeling

See en:User_talk:Jimbo_Wales/Archive_208#Specific_proposals. Ping EllenCT! We're working on this :D --Halfak (WMF) (talk) 18:41, 16 February 2017 (UTC)

Excellent! Thank you very much. EllenCT (talk) 20:48, 17 February 2017 (UTC)

Amazing medical classifications

What you did at Wikipedia_talk:WikiProject_Medicine/Assessment#WikiProject_Medicine_and_importance_ratings.

Based on your post there, I do not know how to get more involved. For example, you have some items which you ranked, and I think that, if invited and given a way to provide feedback, people would confirm or disagree with your system's choices. What kind of community response do you want from this? Blue Rasberry (talk) 22:24, 23 March 2017 (UTC)

I believe that link should be en:Wikipedia_talk:WikiProject_Medicine/Assessment#WikiProject_Medicine_and_importance_ratings. Stuartyeates (talk) 09:36, 26 March 2017 (UTC)
Hi Blue Rasberry, thanks so much for your very useful feedback in the thread you refer to, and for getting in touch! I'm happy to hear you like our work so far!
We're currently in the initial stages of this work, where I'm mainly focusing on figuring out where the large gains can be made. The fact that WP:MED rates certain kinds of articles as Low-importance is an example of that; I expect our performance to be much better once I figure out how to add that kind of information to the model. At that stage, a round of feedback on the predictions from a group of WP:MED members should be very useful, as it can tell us where the model's predictions might be way off (those are important test cases), or where the existing rating of an article needs to be adjusted (which can lead to an improved training set). Right now it's difficult for me to estimate how quickly I'll have an improved model ready, but I can get in touch with you when I do. Thanks again for your great comments and interest, much appreciated! Cheers, Nettrom (talk) 17:03, 27 March 2017 (UTC)

Evaluation metric

Very interesting research. I do however have one thing I'm wondering about. The performance seems to be measured solely in accuracy. As there is an order in the labels, however, some mistakes are more wrong than others. Say we have an article X which has the label Top. If the algorithm classifies this as High (distance 1), that is much better than if it gets classified as Low (distance 3). Maybe providing the full confusion matrix, or looking at whether w:RMSE can be used for evaluation (the unknown class might be an issue), would be an idea. Basvb (talk) 05:31, 19 April 2017 (UTC)

Hi Basvb, thanks for getting in touch about this! You are correct that focusing solely on accuracy can lead to problems with misclassifications as you describe, and this would be a particular problem if the classes are weighted differently (e.g. if we decide that getting Top-importance articles correct is more important than getting Low-importance articles correct). In the work that I have done so far, I focused on accuracy firstly because we were looking to understand whether we could make these kinds of predictions at all, and secondly because we were trying to make large gains. Once we found a model that was performing reasonably well, adding more predictors to it tended to result in small changes, and that is why I also started focusing on parts of the confusion matrix, to catch the types of errors that you describe. At the same time, it seems to me that as our models got more accurate, they also tended not to make large errors.
Nonetheless, it's something I'll keep in mind as we work with this, and I'll do my best to make sure we report useful measures as we continue this work. Thanks again! Cheers, Nettrom (talk) 20:23, 20 April 2017 (UTC)
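The distance-based evaluation Basvb describes can be sketched as follows, treating Low < Mid < High < Top as an ordinal scale. The label encoding, function names, and example data are hypothetical, and the unknown/NA classes would need separate handling, as noted in the thread.

```python
# Treat the importance ratings as an ordinal scale, so that the
# *distance* of a misclassification can be measured, not just
# whether the prediction was exactly right (plain accuracy).
SCALE = {"Low": 0, "Mid": 1, "High": 2, "Top": 3}


def ordinal_distances(y_true, y_pred):
    """Absolute distance of each prediction on the Low<Mid<High<Top scale."""
    return [abs(SCALE[t] - SCALE[p]) for t, p in zip(y_true, y_pred)]


def mean_absolute_error(y_true, y_pred):
    """Average distance between true and predicted labels."""
    dists = ordinal_distances(y_true, y_pred)
    return sum(dists) / len(dists)


def rmse(y_true, y_pred):
    """Root mean squared error; penalizes large misses more heavily."""
    dists = ordinal_distances(y_true, y_pred)
    return (sum(d * d for d in dists) / len(dists)) ** 0.5


# Misclassifying Top as High (distance 1) is penalized less than
# misclassifying Top as Low (distance 3), whereas plain accuracy
# would score both predictions as equally wrong.
y_true = ["Top", "Top", "Mid"]
assert mean_absolute_error(y_true, ["High", "Top", "Mid"]) < \
    mean_absolute_error(y_true, ["Low", "Top", "Mid"])
```

RMSE squares the distances before averaging, so it is the stricter of the two measures when large misses (e.g. Top predicted as Low) matter most.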