End-user image suppression

From Meta, a Wikimedia project coordination wiki
This page deals with the technical implications of implementing [[Wikipedia:end-user|end-user]] controlled suppression of [[potentially offensive images]] on [[Wikimedia projects]].


This page originated from a discussion on the WikiEN-L [[mailing list]] where some level of consensus was reached between the free-speech and content-suppression parties that a technical solution is possible that would go a long way towards satisfying most mainstream positions in the debate. However, it should be stressed that this page is ''not'' intended to imply that this is a ''fait accompli'', but merely an initial exploration of the specification and feasibility of a possible solution. The features described below are ''not'' currently under development, and will certainly not be made live until their desirability has been firmly established.


There are generally four facets to this discussion:


#End-user image suppression: the technical implications of implementation (this page)
#[[Potentially offensive images]]: the identification of categories that would be included as part of implementation
#[[w:Wikipedia:Descriptive image tagging]]: the useful categorising of images
#[[Desirability of end-user image suppression]]: discussion about whether it's all desirable


== Goal ==


The primary goal would be to give [[Wikipedia:end-user|end-users]] the choice of whether or not they see certain [[potentially offensive images]] by default. This would avoid causing offence, minimise the urge of editors to engage in [[Wikipedia:Self-censorship|self-censorship]], and placate those who would seek to discredit Wikimedia projects over the adult nature of some content (disclaimers aside).

== Basic idea ==

It would primarily involve ensuring that [[potentially offensive images]] are categorised, coupled with some combination of the following:

# a site-based preference option allowing users to remove, hide or show [[potentially offensive images]] by default
# a cookie-based preference option allowing users to remove, hide or show [[potentially offensive images]] by default
# browser-based filtration (when supported in the future)
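The decision shared by all three mechanisms can be sketched as follows. This is a hypothetical illustration, not existing MediaWiki code; the function name, preference values and category names are invented for the sketch.

```python
# Hypothetical sketch: decide how one image should be treated, given the
# user's chosen default (from a site preference, a cookie, or eventually a
# browser setting) and the categories the image has been placed in.

# Invented example set of categories considered "potentially offensive".
SUPPRESSIBLE = {"sexually explicit", "medically explicit"}

def image_treatment(user_pref: str, image_categories: set[str]) -> str:
    """Return 'show', 'hide' (placeholder) or 'remove' for one image.

    user_pref is one of 'show', 'hide' or 'remove'.
    """
    if not (image_categories & SUPPRESSIBLE):
        return "show"   # image is not tagged as potentially offensive
    return user_pref    # otherwise apply the user's chosen default
```

Images outside the flagged categories are always shown; only tagged images are affected by the preference.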

=== Initial default setting ===

An initial default setting for newly registered users would need to be chosen.

For anonymous users we could use a [[Wikipedia:HTTP_cookie|cookie]]. The cookie would be set when an anonymous user first attempts to view a page containing [[potentially offensive images]]; at that point they would be presented with a page asking them to choose a default setting for their session.
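The cookie flow described above could be sketched like this. The cookie name and return values are invented for illustration; MediaWiki has no such cookie.

```python
# Hypothetical sketch of the anonymous-user cookie flow.
def handle_anonymous_request(cookies: dict, page_has_flagged_images: bool):
    """Return ('ask', None) when the choice page should be shown first,
    otherwise ('serve', pref) with the effective preference to apply."""
    pref = cookies.get("imageSuppression")   # invented cookie name
    if pref in ("show", "hide", "remove"):
        return ("serve", pref)               # user already chose a default
    if page_has_flagged_images:
        return ("ask", None)                 # present the choice page first
    return ("serve", "show")                 # nothing to suppress on this page
```

Pages without flagged images never trigger the choice page, so most readers would never see it.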

One drawback of the cookie approach is that users who already have content-filtering tools (AOL child accounts, and possibly schools and homes) may answer "yes, I want to see all types of content" and still not see it, because the external filter may block it anyway. However, the benefit to those without filters installed (perhaps because they are not in their usual location) seems sufficient to overcome this objection.

=== User preference system ===

An ''Image Suppression'' preference section would be created. It would offer a choice to show all images by default. Below that, users could select among categories of [[potentially offensive images]] and choose one of the following behaviours:

#Hide images: a placeholder would appear where each image would normally be, which the user could then choose to display on a case-by-case basis
#Remove images: any link or placeholder would be removed entirely (the image itself would of course remain on the server)
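The two behaviours above differ only in what HTML is emitted for a suppressed image. A minimal sketch (the markup, class name and attribute are invented for illustration, not MediaWiki output):

```python
import html

def render_image(src: str, alt: str, treatment: str) -> str:
    """Emit HTML for one image under the chosen treatment.

    'hide' yields a click-to-reveal placeholder carrying the image URL;
    'remove' yields nothing at all (no link, no placeholder)."""
    if treatment == "remove":
        return ""
    if treatment == "hide":
        return ('<span class="suppressed-image" data-src="%s">'
                '[hidden image: %s]</span>'
                % (html.escape(src, quote=True), html.escape(alt)))
    # default: show the image normally
    return '<img src="%s" alt="%s">' % (html.escape(src, quote=True),
                                        html.escape(alt))
```

With "hide", client-side script could swap the placeholder for the real image on request; with "remove", nothing remains in the page for a filter or reader to see.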

=== Categorising images ===


Tagging images would simply involve ensuring that [[potentially offensive images]] have all been given [[categories]] as per normal procedures.


Using categories directly in this way is unlikely to be a good technical solution, since it adds overhead to all category processing. However, using tags as part of templates, which include categories in addition to markers for the standard content tagging system(s), may be suitable. Note that an image is likely to require several different tags, possibly several from each of the many different tagging schemes popular in different countries.
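The template approach could be sketched as a lookup from templates to the content tags they contribute, so ordinary category processing is left untouched. The template names and tags below are invented examples:

```python
# Hypothetical sketch: content tags carried by tagging templates rather
# than read from the category table directly. Template names are invented.
TEMPLATE_TAGS = {
    "Template:Sexually explicit": {"sexually explicit"},
    "Template:Medical image":     {"medically explicit"},
}

def effective_tags(templates_on_image_page: list[str]) -> set[str]:
    """Union of the content tags contributed by each tagging template.

    An image may carry several tags, possibly from several schemes, so the
    result is a set rather than a single value."""
    tags: set[str] = set()
    for t in templates_on_image_page:
        tags |= TEMPLATE_TAGS.get(t, set())
    return tags
```

Templates not in the table contribute nothing, so unrelated templates on an image page are simply ignored.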


== Technical issues ==
The key technical issue is that tools for removing content which is present but not desired are widely available (many client-side filters exist, and the largest ISP in the U.S. defaults to filtering for child accounts), while no tool can add content which is missing but desired. In other words, the nature of the available technology favours including the content and facilitating its suppression for those viewers who want it suppressed.

A subsidiary technical issue is the desirability of using the tags supported by the existing wide range of client-side content filtering systems, so that those work automatically. These standards support a wide range of descriptive tagging types.


== Relevant links ==


*[[Offensive content]]
*[[Potentially offensive images]]
*[[en:Wikipedia:Descriptive image tagging]]


== See also ==
*[[Wikipedia:Wikipedia:Content disclaimer|Wikipedia:Content disclaimer]]
*[[Wikipedia:Wikipedia:Profanity|Wikipedia:Profanity]]
*[[en:Self-censorship]]
*{{ml|Help:User_style%23Non-display}}

[[category:images]]
[[Category:Deletions]]

Latest revision as of 01:50, 28 August 2017
