End-user image suppression

From Meta, a Wikimedia project coordination wiki
Christiaan (talk | contribs)
Revision as of 19:45, 23 February 2005

This page deals with the technical implications of implementing end-user controlled suppression of potentially offensive images on Wikimedia projects. For discussion of whether it would be desirable, see Desirability of end-user content suppression, the February 2005 archive of the WikiEN-l mailing list (http://mail.wikimedia.org/pipermail/wikien-l/2005-February/), and Wikipedia:Image censorship.

This page originated from a discussion on the WikiEN-I mailing list where some level of consensus was reached between the free-speech and content suppression parties that a technical solution is possible that will go a long way towards satisfying most mainstream positions in the debate.

The primary goal would be to give end-users the choice of whether or not they see certain potentially offensive images by default, thus avoiding offence, reducing the urge of editors to self-censor, and placating those who would seek to discredit Wikimedia projects based on the adult nature of some content (disclaimers aside).

It would primarily involve ensuring that potentially offensive images are categorised, coupled with some combination of the following:

  1. a site-based preference option allowing users to remove, hide or show potentially offensive images by default
  2. a cookie-based preference option allowing users to remove, hide or show potentially offensive images by default
  3. browser-based filtration (when supported in the future)
  4. proxy and IP based filtration for organisations (such as primary schools, etc.)
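The cookie-based option (2) could be sketched client-side roughly as follows. This is only an illustration: the cookie name `imageSuppression` and the three modes (show, hide, remove) are assumptions, not an agreed design.

```typescript
// Possible display modes for potentially offensive images.
type ImageMode = "show" | "hide" | "remove";

// Parse a raw document.cookie string and return the user's image
// preference, falling back to a site-wide default when no valid
// cookie is set. The cookie name "imageSuppression" is hypothetical.
function imageModeFromCookie(
  rawCookies: string,
  siteDefault: ImageMode = "show",
): ImageMode {
  for (const pair of rawCookies.split(";")) {
    const [name, value] = pair.trim().split("=");
    if (
      name === "imageSuppression" &&
      (value === "show" || value === "hide" || value === "remove")
    ) {
      return value;
    }
  }
  return siteDefault;
}
```

For example, `imageModeFromCookie("imageSuppression=hide; theme=dark")` returns `"hide"`, while a visitor with no such cookie gets the site default.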

Categorising images

Categorising images would simply involve ensuring that all potentially offensive images have been given categories as per normal procedures.
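Given such categories, a filter only needs to test whether an image's categories intersect the set the user has chosen to suppress. A minimal sketch (the category names used here are illustrative, not an official taxonomy):

```typescript
// Decide whether an image should be suppressed for a given user,
// based on the categories the image has been tagged with on the wiki
// and the set of category names the user has chosen to suppress.
function shouldSuppress(
  imageCategories: string[],
  suppressedCategories: Set<string>,
): boolean {
  return imageCategories.some((c) => suppressedCategories.has(c));
}
```

So an image tagged with any one suppressed category would be hidden or removed, regardless of its other categories.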

Default setting

A default setting for new and anonymous users would need to be chosen, although this may be largely redundant if we use a cookie.
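The fallback order implied here might be expressed as a simple resolution rule: an account preference (for logged-in users) wins over a cookie value, which wins over the site default. The function and preference names below are hypothetical.

```typescript
// Possible display modes for potentially offensive images.
type ImageMode = "show" | "hide" | "remove";

// Resolve the effective mode for a request: account preference first,
// then cookie, then the site-wide default chosen for new and
// anonymous users.
function resolveImageMode(
  accountPref: ImageMode | null,
  cookiePref: ImageMode | null,
  siteDefault: ImageMode,
): ImageMode {
  return accountPref ?? cookiePref ?? siteDefault;
}
```

Under this rule the site-wide default only matters for visitors with neither an account preference nor a cookie, which is why a cookie-based scheme makes the choice of default largely redundant.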

Relevant links

See also