Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-21 Thread David Levy
I wrote:

  I believe that we should focus on the criteria behind reliable sources'
  illustrative decisions, *not* the decisions themselves.

Andreas Kolbe replied:

 Ah well, that *is* second-guessing the source, because unless the author
 tells you, you have no way of knowing *why* they didn't include a particular
 type of image.

I've repeatedly addressed this point and explained why I regard it as
moot.  You needn't agree with me, but it's frustrating when you
seemingly disregard what I've written.

You actually quoted the relevant text later in your message:

  We needn't know why a particular illustration was omitted.  If we apply
  similar criteria, we'll arrive at similar decisions, excepting instances
  in which considerations applicable to reliable sources (e.g. those based on
  images' upsetting/offensive nature) are inapplicable to Wikipedia ...

I used the phrase "why a particular illustration was omitted", which
is remarkably similar to "why they didn't include a particular type of
image".  I've made such statements (sometimes with further
elaboration) in several replies.

Again, I don't demand that you agree with me, but I humbly request
that you acknowledge my position.

 If we did that for text, we'd be guessing why an author might not have
 mentioned such and such a thing, and applying our correction.

Again, the images in question don't introduce information inconsistent
with that published by reliable sources; they merely illustrate the
things that said sources tell us.

And again, we haven't pulled our image evaluation criteria out of thin
air.  They reflect those employed by the very same publications.

Our application of these criteria entails no such guessing.  You
seem to envision a scenario in which we seek to determine whether a
particular illustration was omitted for a reason inapplicable to
Wikipedia.  In actuality, we simply set aside such considerations (but
we retain the others, so if an illustration was omitted for a reason
applicable to Wikipedia, we're likely to arrive at the same decision).

 I don't subscribe to the notion that Wikipedia should go out of its way (=
 depart from reliable sources' standards) to upset or offend readers where
 reliable sources don't.

Do you honestly believe that this is our motive?

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread Tobias Oelgarte
On 19.10.2011 23:19, Philippe Beaudette wrote:
 On Wed, Oct 19, 2011 at 5:07 AM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!


 First, there's a bit of a framing difference here.  We did not initially
 promise results by project.  Even now, I've never promised that. What I've
 said is that we would attempt to do so.  But it's not solely in the WMF's
 purview - the election had a team of folks in charge of it who came from the
 community and it's not the WMF's role to dictate to them how to do their
 job.

 I (finally) have the full results parsed in such a way as to make it
 *potentially* possible to release them for discussion by project.  However,
 I'm still waiting for the committee to approve that release.  I'll re-ping
 on that, because, frankly, it's been a week or so.  That will be my next
 email. :)

 pb

Don't get me wrong, but this should have been part of the results in the 
first place. The first calls for such results go back to before 
the referendum even started. [1] That leaves a very bad impression, and 
so far the WMF has done nothing to regain any trust. Instead, you have 
started to lose even more. [2]

[1] 
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/Archive1#Quantification_of_representation_of_the_world-wide_populace
[2] 
http://meta.wikimedia.org/wiki/User_talk:WereSpielChequers/filter#Thanks_for_this_proposal.2C_WereSpielCheqrs

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread Andreas K.
On Wed, Oct 19, 2011 at 10:29 PM, David Levy lifeisunf...@gmail.com wrote:


 Indeed, but *not* when it comes to images' basic illustrative
 properties.  Again, I elaborated in the text quoted below.


 This process can be applied to images depicting almost any subject,
 even if others decline to do so.

I mentioned before that a video of rape would have basic illustrative
properties in the article on rape, yet still be deeply inappropriate. Rather
than enhancing the educational value of the article, it would completely
destroy it. Whether to add a media file to an article or not is always a
cost/benefit question. It does not make sense to argue that any benefit,
however small and superficial, outweighs any cost, however large and
substantive.

  We're coming back to the same sticking point: you're assuming that
  reputable sources omit media because they are objectionable, rather
 than
  for any valid reason, and you think they are wrong to do so.

 No, I'm *not* assuming that this is the only reason, nor am I claiming
 that this is wrong for them to do.

 We *always* must independently determine whether a valid reason to
 omit media exists.  We might share some such reasons (e.g. low
 illustrative value, inferiority to other available media, copyright
 issues) with reliable sources.  Other reasons (e.g. non-free
 licensing) might apply to us and not to reliable sources.  Still other
 reasons (e.g. upsetting/offensive nature, noncompliance with local
 print/broadcast regulations, incompatibility with paper, space/time
 constraints) might apply to reliable sources and not to us.

 Again, we needn't ponder why a particular illustration was omitted or
 what was available to a publication by its deadline.  We need only
 determine whether the images currently available to us meet the
 standards that we apply across the board.

I would rather apply the standards of reputable publications in our
articles, and leave the rest to a Commons link. YMMV.

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread David Levy
Andreas Kolbe wrote:

 Whether to add a media file to an article or not is always a
 cost/benefit question. It does not make
 sense to argue that any benefit, however small and superficial,
 outweighs any cost, however large and substantive.

Agreed.  I'm not arguing that.

Your replies seem indicative of a belief that my position is "Let's
include every illustrative image, no matter what."  That isn't so.  My
point is merely that we aren't bound by others' decisions.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread Andreas K.
On Thu, Oct 20, 2011 at 7:19 PM, David Levy lifeisunf...@gmail.com wrote:

 Andreas Kolbe wrote:

  Whether to add a media file to an article or not is always a
  cost/benefit question. It does not make
  sense to argue that any benefit, however small and superficial,
  outweighs any cost, however large and substantive.

 Agreed.  I'm not arguing that.

 Your replies seem indicative of a belief that my position is "Let's
 include every illustrative image, no matter what."  That isn't so.  My
 point is merely that we aren't bound by others' decisions.

 David Levy

David,

I think we've reached about as much agreement in this stimulating exchange
as we're likely to. I don't actually know what your position in any specific
dispute around illustration would be; I don't think we've ever met in one of
those on-wiki. I don't assume that we'd necessarily be far apart.

I wouldn't go so far as to say that we should consider ourselves *bound* by
others' decisions either. But I do think that the presence or absence of
precedents in reliable sources is an important factor that we should weigh
when we're contemplating the addition of a particular type of illustration.

For example, if a reader complains about images in the article on the [[rape
of Nanking]], it is useful if an editor can say, "Look, these are the
standard works on the rape of Nanking, and they include images like that." If
someone complains about an image or media file in some other article and we
cannot point to a single reputable source that has included a similar
illustration, then we may indeed be at fault.

Regards,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread David Levy
Andreas Kolbe wrote:

 I wouldn't go so far as to say that we should consider ourselves *bound* by
 others' decisions either. But I do think that the presence or absence of
 precedents in reliable sources is an important factor that we should weigh
 when we're contemplating the addition of a particular type of illustration.

I believe that we should focus on the criteria behind reliable
sources' illustrative decisions, *not* the decisions themselves.  As
previously noted, some considerations are applicable to Wikipedia,
while others are not.

We needn't know why a particular illustration was omitted.  If we
apply similar criteria, we'll arrive at similar decisions, excepting
instances in which considerations applicable to reliable sources (e.g.
those based on images' upsetting/offensive nature) are
inapplicable to Wikipedia and instances in which considerations
inapplicable to reliable sources (e.g. those based on images' non-free
licensing) are applicable to Wikipedia.

 For example, if a reader complains about images in the article on the [[rape
 of Nanking]], it is useful if an editor can say, "Look, these are the
 standard works on the rape of Nanking, and they include images like that."

An editor *can* do that.  It's the inverse situation that requires
deeper analysis.

 If someone complains about an image or media file in some other article and we
 cannot point to a single reputable source that has included a similar
 illustration, then we may indeed be at fault.

Quite possibly.  We'd need to determine whether the relevant criteria
have been met.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread Andreas K.
On Fri, Oct 21, 2011 at 2:13 AM, David Levy lifeisunf...@gmail.com wrote:

 Andreas Kolbe wrote:

  I wouldn't go so far as to say that we should consider ourselves *bound*
 by
  others' decisions either. But I do think that the presence or absence of
  precedents in reliable sources is an important factor that we should
 weigh
  when we're contemplating the addition of a particular type of
 illustration.

 I believe that we should focus on the criteria behind reliable
 sources' illustrative decisions, *not* the decisions themselves.

Ah well, that *is* second-guessing the source, because unless the author
tells you, you have no way of knowing *why* they didn't include a particular
type of image.

As I said, there may be other good reasons such as educational psychology –
we make up our own rules at our peril.

If we did that for text, we'd be guessing why an author might not have
mentioned such and such a thing, and applying our correction.


 As
 previously noted, some considerations are applicable to Wikipedia,
 while others are not.



 We needn't know why a particular illustration was omitted.  If we
 apply similar criteria, we'll arrive at similar decisions, excepting
 instances in which considerations applicable to reliable sources (e.g.
 those based on images' upsetting/offensive nature) are
 inapplicable to Wikipedia ...


I don't subscribe to the notion that Wikipedia should go out of its way (=
depart from reliable sources' standards) to upset or offend readers where
reliable sources don't.

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Jussi-Ville Heiskanen
On Tue, Oct 18, 2011 at 9:41 PM, Andreas K. jayen...@gmail.com wrote:
 On Tue, Oct 18, 2011 at 7:00 PM, Jussi-Ville Heiskanen cimonav...@gmail.com
 wrote:

 On Tue, Oct 18, 2011 at 2:44 AM, Andreas Kolbe jayen...@yahoo.com wrote:


 
  The English Wikipedia community, like any other, has always contained a
 wide spectrum of opinion on such matters. We have seen this in the past,
 with long discussions about contentious cases like the goatse image, or the
 Katzouras photos. That is unlikely to ever change.
 
  But we do also subscribe to the principle of least astonishment. If the
 average reader finds our image choices odd, or unexpectedly and needlessly
 offensive, then we alienate a large part of our target audience, and may
 indeed only attract an unnecessarily limited demographic as contributors.
 

 You completely and utterly misrepresent what the principle of least
 astonishment is supposed to address. It is a matter of where people
 should be directed when there are conflicting disambiguation issues.
 It doesn't refer to content issues in the slightest. Period. We don't
 say you can read an article about X and not see pictures of X. That is
 ridiculous.



 The principle of least astonishment is mentioned thrice in the board
 resolution on controversial content:

 http://wikimediafoundation.org/wiki/Resolution:Controversial_content

 We support the principle of least astonishment: content on Wikimedia
 projects should be presented to readers in such a way as to respect their
 expectations of what any page or feature might contain

 Signpost coverage:
 http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2011-06-06/News_and_notes#Board_resolutions_on_controversial_content_and_images_of_identifiable_people


Yes, but that is not proof of what we as a community understand the
principle to mean; it means the board is on crack.



-- 
--
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Andrew Garrett
On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen
cimonav...@gmail.com wrote:
 Yes, but that is not proof of what we as a community understand the
 principle to mean; it means the board is on crack.

That's not a helpful contribution to this discussion.

-- 
Andrew Garrett
Wikimedia Foundation
agarr...@wikimedia.org

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Jussi-Ville Heiskanen
On Wed, Oct 19, 2011 at 12:07 PM, Andrew Garrett agarr...@wikimedia.org wrote:
 On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen
 cimonav...@gmail.com wrote:
 Yes, but that is not proof of what we as a community understand the
 principle to mean; it means the board is on crack.

 That's not a helpful contribution to this discussion.


Stating the obvious should never be necessary, but there are people here
living in denial, so it has to be stated, no matter how obviously true it is.



-- 
--
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread David Gerard
On 19 October 2011 10:07, Andrew Garrett agarr...@wikimedia.org wrote:
 On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen
 cimonav...@gmail.com wrote:

 Yes, but that is not proof of what we as a community understand the
 principle to mean; it means the board is on crack.

 That's not a helpful contribution to this discussion.


I can quite understand you don't want to hear it (the tone argument),
but it remains both true and relevant.


- d.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Tobias Oelgarte
On 19.10.2011 11:07, Andrew Garrett wrote:
 On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen
 cimonav...@gmail.com  wrote:
 Yes, but that is not proof of what we as a community understand the
 principle to mean; it means the board is on crack.
 That's not a helpful contribution to this discussion.

But if I look at the current reactions, some might agree with this point 
of view. So far I have not seen any response that provides sufficient 
information that would strengthen the argumentation of the WMF or 
the Board. All we are presented with are assumptions about what the 
problem might be and whether it exists at all. There was not a single 
study directed at the readers, particularly not a single one 
directed at a diverse, multicultural audience. All we got is the 
worthless result of the referendum.

I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!

I asked for this shit multiple months ago. I repeated my request on a 
daily/weekly basis. All I got wasn't even a T-shirt; it was nothing. That 
makes people like me very angry and leads me to believe that the WMF is 
either trying to hide the facts to push its own point of view, or 
that it is entirely incompetent. Alternatively, they are just busy 
counting the money...

I have lost all trust in the Foundation, and I believe that it would 
sell out the basic idea of the project whenever possible. Knowledge plus 
the principle of least astonishment, applied to everything, no matter 
what the facts are? You truly did not understand the foundation of 
knowledge. Knowledge is interesting because it is shocking. It destroys 
your own sand-castle world on a daily basis.

Hard words? Yes, these are hard words, based upon the current situation 
and the reactions. All we get are messages telling us to calm down, while 
nothing changes. Now we read on some back pages (discussions are spread 
out everywhere) that there will be a test run inviting the readers to 
flag images. Another measure to improve acceptance if the filter is 
enabled, another study based on an English-speaking-only 
community/audience, to make it the rule of thumb for every project? It 
seems to be the case. But where does all this will to implement a filter 
come from? No one has said it clearly, no one has published a reliable 
source (the Harris report, a true insider joke), and you expect us to 
believe this shit?

The referendum was a farce, the new approach is again a farce. The only 
way left to assume good faith is to claim that they are on crack. 
Anything else would be worse.

nya~



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Andreas K.
On Wed, Oct 19, 2011 at 1:17 AM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

 * Andreas K. wrote:
 Satisfying most users is a laudable aim for any service provider, whether
 revenue is involved or not. Why should we not aim to satisfy most of our
 users, or appeal to as many potential users as possible?

 Many Wikipedians would disagree that they or Wikipedia as a whole is a
 service provider. The first sentence on the German language version,
 for instance, is "Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie
 aus freien Inhalten in allen Sprachen der Welt" ("Wikipedia is a project
 to build an encyclopedia of free content in all the languages of the
 world"). That's about creating
 something, not about providing some service to others, much less trying
 to satisfy most people who might wish to be serviced.

I see our vision and mission as entirely service-focused. We are not doing
this for our own amusement:

*Vision: Imagine a world in which every single human being can freely share
in the sum of all knowledge. That's our commitment.*

*Mission: The mission of the Wikimedia Foundation is to empower and engage
people around the world to collect and develop educational content under a
free license or in the public domain, and to disseminate it effectively and
globally.*

*Values: An essential part of the Wikimedia Foundation's mission is
encouraging the development of free-content educational resources that may
be created, used, and reused by the entire human community.*

It's about providing a service to the entire human community. Quality is
defined by the recipient of a service, not the producer.


 I invite you to have a look at http://katograph.appspot.com/ which shows
 the category system of the German Wikipedia at the end of 2009, with
 information about how many articles can be found under them and the number
 of views of articles in the category over a three-day period. You will
 find for instance that there are many more articles on buildings than on
 movies, many times more, but articles on movies get more views in total.


That's a fascinating piece of work. :) If I understand it correctly, the
colour of each rectangle reflects the average number of page views per article
in this category (blue = low, orange = high), and the area of each rectangle
reflects the number of articles in that category. What do the dropdown menus
do? I can't figure them out. Do you have an FAQ for this application?


Best,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Andreas K.
On Wed, Oct 19, 2011 at 4:11 AM, David Levy lifeisunf...@gmail.com wrote:

 Andreas Kolbe wrote:

  But if we use a *different* style, it should still be traceable to an
  educational or scholarly standard, rather than one we have made up, or
  inherited from 4chan. Would you agree?

 Yes, and I dispute the premise that the English Wikipedia has failed
 in this respect.

I think we have already agreed that our standards for inclusion differ from
those used by reliable sources.

As I've noted, we always must gauge available images' illustrative
 value on an individual basis.  We do so by applying criteria intended
 to be as objective as possible, thereby reflecting (as closely as we
 can, given the relatively small pool of libre images) the quality
 standards upheld by reputable publications.  We also reject images
 inconsistent with reliable sources' information on the subjects
 depicted therein.

 We don't, however, exclude images on the basis that others declined to
 publish the same or similar illustrations.


Again, on this point you advocate that we should differ from the standards
upheld by reputable publications.




 Images widely regarded as objectionable commonly are omitted for
 this reason (which is no more relevant to Wikipedia than the
 censorship of objectionable words is).  But again, we needn't seek
 to determine when this has occurred.  We can simply apply our normal
 assessment criteria across the board (irrespective of whether an image
 depicts a sexual act or a pine tree).


We're coming back to the same sticking point: you're assuming that reputable
sources omit media because they are objectionable, rather than for any
valid reason, and you think they are wrong to do so. You are putting your
judgment above that of the sources, something that I presume you would never
do in matters of text.

On Wed, Oct 19, 2011 at 4:12 AM, David Levy lifeisunf...@gmail.com wrote:

 Andreas Kolbe wrote:

  Satisfying most users is a laudable aim for any service provider, whether
  revenue is involved or not. Why should we not aim to satisfy most of our
  users, or appeal to as many potential users as possible?

 It depends on the context.  There's nothing inherently bad about
 satisfying as many users as possible.  It's doing so in a
 discriminatory, non-neutral manner that's problematic.

In my view, the best we can do is follow the standards of international
scholarship. I trust the international body of scholarship (as a whole, not
necessarily each individual representative of it) to be as
non-discriminatory and neutral as is humanly possible.


Best,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Bjoern Hoehrmann
* Andreas K. wrote:
I see our vision and mission as entirely service-focused. We are not doing
this for our own amusement:

You are talking about the Wikimedia Foundation while I was talking about
Wikipedians. I certainly do this for my own amusement, not to satisfy others.

That's a fascinating piece of work. :) If I understand it correctly, the
colour of each rectangle reflects the average number of page views per article
in this category (blue = low, orange = high), and the area of each rectangle
reflects the number of articles in that category. What do the dropdown menus
do? I can't figure them out. Do you have an FAQ for this application?

http://lists.wikimedia.org/pipermail/wikide-l/2010-January/022758.html
has some additional information. By default, the rectangles are sized
according to the number of articles in the category and coloured by the
median number of requests per article in the category. So a very big
rectangle with a cold colour indicates there are many articles under it
that nobody reads, while small rectangles with a warm colour indicate
categories with few articles that draw a lot of traffic. If you set the
colour to "Anzahl" and the size to "(inv) Anzahl Artikel", the smallest
category will be in the top left and the colours get warmer towards the
bottom right corner. The third dropdown specifies the layout algorithm.
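
In code, the default mapping is roughly the following (a sketch only; the
names are made up and are not katograph's actual internals):

interface CategoryStats {
  name: string;
  articleCount: number;          // drives rectangle area by default
  requestsPerArticle: number[];  // page views per article over the period
}

// Median request count per article; this drives the rectangle colour
// (cold = little traffic, warm = a lot of traffic).
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 === 1 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function treemapInputs(cat: CategoryStats): { size: number; colour: number } {
  return {
    size: cat.articleCount,                  // "Anzahl Artikel"
    colour: median(cat.requestsPerArticle),  // median requests per article
  };
}
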
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread Philippe Beaudette
On Wed, Oct 19, 2011 at 5:07 AM, Tobias Oelgarte 
tobias.oelga...@googlemail.com wrote:

 I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!


First, there's a bit of a framing difference here.  We did not initially
promise results by project.  Even now, I've never promised that. What I've
said is that we would attempt to do so.  But it's not solely in the WMF's
purview - the election had a team of folks in charge of it who came from the
community and it's not the WMF's role to dictate to them how to do their
job.

I (finally) have the full results parsed in such a way as to make it
*potentially* possible to release them for discussion by project.  However,
I'm still waiting for the committee to approve that release.  I'll re-ping
on that, because, frankly, it's been a week or so.  That will be my next
email. :)

pb

___
Philippe Beaudette
Head of Reader Relations
Wikimedia Foundation, Inc.

415-839-6885, x 6643

phili...@wikimedia.org
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread David Levy
Andreas Kolbe wrote:

   But if we use a *different* style, it should still be traceable to an
   educational or scholarly standard, rather than one we have made up, or
   inherited from 4chan. Would you agree?

  Yes, and I dispute the premise that the English Wikipedia has failed
  in this respect.

 I think we have already agreed that our standards for inclusion differ from
 those used by reliable sources.

Yes, in part.  You wrote "traceable to", not "identical to".

I elaborated in the text quoted below.

  As I've noted, we always must gauge available images' illustrative
  value on an individual basis.  We do so by applying criteria intended to
  be as objective as possible, thereby reflecting (as closely as we can,
  given the relatively small pool of libre images) the quality standards
  upheld by reputable publications.  We also reject images inconsistent
  with reliable sources' information on the subjects depicted therein.
 
  We don't, however, exclude images on the basis that others declined to
  publish the same or similar illustrations.

 Again, on this point you advocate that we should differ from the standards
 upheld by reputable publications.

Indeed, but *not* when it comes to images' basic illustrative
properties.  Again, I elaborated in the text quoted below.

  Images widely regarded as objectionable commonly are omitted for this
  reason (which is no more relevant to Wikipedia than the censorship of
  objectionable words is).  But again, we needn't seek to determine when
  this has occurred.  We can simply apply our normal assessment criteria
  across the board (irrespective of whether an image depicts a sexual act
  or a pine tree).

For the Pine article, we examine the available images of pine trees
(and related entities, such as needles, cones and seeds) and assess
their illustrative properties as objectively as possible.  Our goal is
to include the images that best enhance readers' understanding of the
subject.

This is exactly what reputable publications do.  (The specific images
available to them differ and sometimes exceed the quality of those
available to us, of course.)

This process can be applied to images depicting almost any subject,
even if others decline to do so.  I don't insist that we automatically
include lawful, suitably licensed images or shout "WIKIPEDIA IS NOT
CENSORED!" when we don't.  I merely advocate that we apply the same
assessment criteria across the board.  Inferior images (whether they
depict pine trees, sexual acts or anything else) should be omitted.

 We're coming back to the same sticking point: you're assuming that
 reputable sources omit media because they are objectionable, rather than
 for any valid reason, and you think they are wrong to do so.

No, I'm *not* assuming that this is the only reason, nor am I claiming
that this is wrong for them to do.

We *always* must independently determine whether a valid reason to
omit media exists.  We might share some such reasons (e.g. low
illustrative value, inferiority to other available media, copyright
issues) with reliable sources.  Other reasons (e.g. non-free
licensing) might apply to us and not to reliable sources.  Still other
reasons (e.g. upsetting/offensive nature, noncompliance with local
print/broadcast regulations, incompatibility with paper, space/time
constraints) might apply to reliable sources and not to us.

Again, we needn't ponder why a particular illustration was omitted or
what was available to a publication by its deadline.  We need only
determine whether the images currently available to us meet the
standards that we apply across the board.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tom Morris
On Tuesday, October 18, 2011, Thomas Morton wrote:

 On 17 Oct 2011, at 09:19, Tobias Oelgarte
 tobias.oelga...@googlemail.com javascript:; wrote:
  I have no problem with any kind of controversial content. Showing the
  progress of fisting on the main page? No problem for me. Reading your
  comments? No problem for me. Reading your insults? Also no problem. The
  only thing I did was the following: I told you that I will no longer
  react to your comments if they are worded in the manner they currently
  are.

  As a literary analogy: I feel free to open your book and start to read.
  If it is interesting and constructive, I will continue to read it and
  respond to you to share my thoughts. If it is purely meant to insult,
  without any other meaning, then I will get bored and skim over the
  lines, reading only half or less. I also have no intention of sharing my
  thoughts with the author of this book. Why? I have nothing to talk
  about. Should I complain about its content? Which content, anyway?

  Give it a try. Make constructive arguments and explain your thoughts.
  There is no need for strong wording if the construction of the words
  itself is strong.

  nya~

 And that is a mature and sensible attitude.

 Some people do not share your view and are unable to ignore what to
 them are rude or offensive things.

 Are they wrong?

 Should they be doing what you (and I) do?



I share the same attitude. I'm pretty much immune to almost anything you can
throw at me in terms of potentially offensive content.

But, despite this enlightenment, I am not an island. I use my computer in
public places: at the workplace, in the university library, on the train, at
conferences, and in cafes.

I may have been inured to 'Autofellatio6.jpg', but I'm not sure the random
person sitting next to me on the train needs to see it. Being able to read,
edit and patrol Wikipedia in public without offending the moral
sensibilities of people who catch a glance at my laptop screen would be a
feature. Being able to click 'Random page' without the chance of a public
order offence flowing from it would also be pretty nifty.


-- 
Tom Morris
http://tommorris.org/
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Fae
Sorry to take a tangential point from Tom's email, but is the random
article tool truly random, or does it direct only to stable articles or
some other subset of article space?

Thanks
Fae

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 09:57, Tom Morris wrote:
 On Tuesday, October 18, 2011, Thomas Morton wrote:

 On 17 Oct 2011, at 09:19, Tobias Oelgarte
 tobias.oelga...@googlemail.comjavascript:;  wrote:
 I have no problem with any kind of controversial content. Showing the
 progress of fisting on the main page? No problem for me. Reading your
 comments? No problem for me. Reading your insults? Also no problem. The
 only thing I did was the following: I told you that I will no longer
 react to your comments if they are worded in the manner they currently
 are.

 As a literary analogy: I feel free to open your book and start to read.
 If it is interesting and constructive, I will continue to read it and
 respond to you to share my thoughts. If it is purely meant to insult,
 without any other meaning, then I will get bored and skim over the
 lines, reading only half or less. I also have no intention of sharing my
 thoughts with the author of this book. Why? I have nothing to talk
 about. Should I complain about its content? Which content, anyway?

 Give it a try. Make constructive arguments and explain your thoughts.
 There is no need for strong wording if the construction of the words
 itself is strong.

 nya~
 And that is a mature and sensible attitude.

 Some people do not share your view and are unable to ignore what to
 them are rude or offensive things.

 Are they wrong?

 Should they be doing what you (and I) do?


 I share the same attitude. I'm pretty much immune to almost anything you can
 throw at me in terms of potentially offensive content.

 But, despite this enlightenment, I am not an island. I use my computer in
 public places: at the workplace, in the university library, on the train, at
 conferences, and in cafes.

 I may have been inured to 'Autofellatio6.jpg', but I'm not sure the random
 person sitting next to me on the train needs to see it. Being able to read,
 edit and patrol Wikipedia in public without offending the moral
 sensibilities of people who catch a glance at my laptop screen would be a
 feature. Being able to click 'Random page' without the chance of a public
 order offence flowing from it would also be pretty nifty.
But that is exactly the typical scenario that does not need a
category-based filtering system. There are many other proposed solutions
that would handle exactly this case without the need for any
categorization. The "hide all images" feature would already be a good
option. An improved version is the blurred/pixelated images feature,
where you enter the hide/distort/... mode and no image is visible in
detail as long as you don't hover over it or click on it.

Still, we are discussing filter categories and the need for them. In your
example, no categorization is needed at all to provide a well-working
solution.
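
A minimal sketch of such a blur-until-interaction mode in browser terms
(the function name is invented; the point is that no image category is
consulted at all):

function enableBlurMode(doc: Document): void {
  // Blur every image by default; reveal on hover, sticky-reveal on click.
  doc.querySelectorAll<HTMLImageElement>("img").forEach(img => {
    let revealed = false;
    const blur = () => { if (!revealed) img.style.filter = "blur(12px)"; };
    blur();
    img.addEventListener("mouseenter", () => { img.style.filter = "none"; });
    img.addEventListener("mouseleave", blur);
    img.addEventListener("click", () => { revealed = true; img.style.filter = "none"; });
  });
}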

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Thomas Morton

  And that is a mature and sensible attitude.
 
  Some people do not share your view and are unable to ignore what to
  them are rude or offensive things.
 
  Are they wrong?
 
  Should they be doing what you (and I) do?
 
  Tom
  The question is whether we should support them in not even trying to
  start this learning process. It's like saying: "That is all you have to
  know. Don't bother with the rest; it is not good for you."

 nya~


 Which assumes that they want to, or should, change - and that our approach
is better and we are right. These are arrogant assumptions, not at all in
keeping with our mission.

It is this fallacious logic that underpins our crazy politics of
neutrality which we attempt to enforce on people (when in practice we lack
neutrality almost as much as the next man!).

It's like religion; I am not religious, and if a religious person wants to
discuss their beliefs against my lack of them, then great; I find that
refreshing and will take the opportunity to try and show them my argument.
If they don't want to, I'm not going to force the issue :)

 It's like saying: "That is all you have to know. Don't bother with the
rest; it is not good for you."

Actually, no, it's exactly not that. Because we are talking about
user-choice filtering. In that context, providing individual filtering tools
for each user should not be controversial.

I understand that this becomes a problem when we look at offering
pre-built block lists, so that our readers don't have to manually
construct their own preferences, but can click a few buttons and largely
have the experience they desire. So we have this issue of trading usability
against potential for abuse; I don't have an immediate solution there, but I
think we can come up with one. Although we do quite poorly at handling abuse
of process and undermining of content on-wiki at the moment, this could be a
unique opportunity to brainstorm wider solutions that impact everywhere in a
positive way.

If an individual expresses a preference to hide certain content, it is
reasonable for us to provide that option for use at their discretion.

Anything else is like saying "No, your views on acceptability are wrong and
we insist you must see this."[1]

*That* is censorship.

Tom

1. I appreciate that this is the status quo at the moment, I still think it
is censorship, and this is why we must address it as a problem.
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Thomas Morton
On 18 October 2011 11:56, Tobias Oelgarte tobias.oelga...@googlemail.comwrote:

 I don't assume that. I say that they should have the opportunity to
 change if they like to.


Absolutely - we do not disagree on this.


 That controversial content is hidden or that we
 provide a button to hide controversial content is prejudicial.


I disagree on this, though. There is a balance between encouraging people to
question their views (and, yes, even our own!) and giving them no option but
to accept our view.

This problem can be addressed via wording related to the filter and
avoidance of phrases like "controversial", "problematic", etc.

I disagree very strongly with the notion that providing a button to hide
material is prejudicial.

It deepens the viewpoint that this content is objectionable and that it is
 generally accepted this way, even if not. That means that we would be
 patronizing the readers that have a tendency to enable a filter (not even
 particularly an image filter).


This is a reasonable objection; and again it goes back to this idea of how
far we enforce our world view on readers. I think that there are ways
that a filter could be enabled that improve Wikipedia for our readers
(helping neutrality), and equally there are ways that it could be enabled
that adversely affect this goal.

So if done; it needs to be done right.


 ... and that is exactly what makes me curious about this approach. You
 assume that we aren't neutral, and Sue described us, in the median, as a
 little bit geeky, which goes in the same direction.


We are not; over time it is fairly clear that we reflect certain world
views. To pluck an example out of thin air - in the 9/11 article there is
extremely strong resistance to adding a "see also" link to the article on
9/11 conspiracies. This reflects a certain bias/world view we are imposing.
That is an obvious example - there are many more.

The bias is not uniform; we have various biases depending on the subject -
and over time those biases can swing back and forth depending on
the prevalent group of editors at that time. Many of our articles have
distinctly different tone/content/slant to foreign language ones (which is a
big giveaway IMO).

Another example: English Wikipedia has a pretty strong policy on BLP
material that restricts a lot of what we record - other-language wikis do
not have the same restrictions, and things we would not consider noting
(such as non-notable children's names) are not considered a problem on
other wikis.


But if we aren't neutral at
 all, how can we even believe that a controversial-content filter system
 based upon our views would be neutral in judgment or, as proposed in the
 referendum, culturally neutral? (Question: Is there even such a thing as
 cultural neutrality?)


No; this is the underlying problem I mentioned with implementing a filter
that offers pre-built lists.

It is a problem to address, but not one that kills the idea stone dead IMO.


 We also don't force anyone to read Wikipedia.


Oh come on :) we are a highly visible source of information with millions of
inbound links/pointers. No, we don't force anyone to read, but this is not an
argument against accommodating as many people as possible.


 If he does not like it, he
 has multiple options. He could close it, he could still read it even if
 he doesn't like any part of it, he could participate to change it, or he
 could start his own project.


And most of those options belie our primary purpose.

We can definitely think about possible solutions. But first I have to
 insist on getting an answer to the question: is there a problem big and
 worthy enough to make it a main priority?


Absolutely - and the first question I asked in this debate (weeks ago) was
when we were going to poll readers for their opinion. This devolved slightly
into an argument over whether our readers should have a say in Wikipedia...
but the issue still stands - no clear picture has been built.

We are still stuck in our little house

I doubt it will ever be done; which is why if it comes to a vote, despite
my advocacy here, I will staunchly oppose any filter on grounds of process
and poor planning.

I am willing to be pleasantly surprised.


 After that comes the question of (non-neutral) categorization of
 content. That means: do we need to label offensive content, or could the
 same goal be reached without doing this?


Well from a practical perspective a self-managed filter is the sensible
option.

I think we can do an objective categorisation of things people might not
like to see, though. Say, nudity: we could have an entirely objective
classification for nudity... just thinking off-hand and imperfectly:

- Incidental nudity (background etc.)
- Partial nudity
- Full nudity
- Full frontal nudity / Close ups
- Sexual acts

And then independently classify articles as sexuality topic, physiology
topic, or neither (with neither being the default). By combining the two
classifications you can build a dynamic score of how likely that any 
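
A rough sketch of what I mean (the grade values and weights below are
invented, purely to make the combination concrete):

enum NudityGrade { Incidental = 1, Partial = 2, Full = 3, FullFrontal = 4, SexualAct = 5 }
type ArticleTopic = "sexuality" | "physiology" | "neither";

// In a sexuality or physiology article such images are expected, so the
// grade is discounted; in an unrelated article it counts in full.
const topicWeight: Record<ArticleTopic, number> = {
  sexuality: 0.4,
  physiology: 0.7,
  neither: 1.0,
};

// Higher score = more likely a reader-side filter would collapse the image.
function imageScore(grade: NudityGrade, topic: ArticleTopic): number {
  return grade * topicWeight[topic];
}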

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 14:00, Thomas Morton wrote:
 On 18 October 2011 11:56, Tobias 
 Oelgartetobias.oelga...@googlemail.comwrote:

 That controversial content is hidden or that we
 provide a button to hide controversial content is prejudicial.

 I disagree on this, though. There is a balance between encouraging people to
 question their views (and, yes, even our own!) and giving them no option but
 to accept our view.

 This problem can be addressed via wording related to the filter and
 avoidance of phrases like controversial, problematic etc.

 I disagree very strongly with the notion that providing a button to hide
 material is prejudicial.
That comes down to the two layers of judgment involved in this proposal. 
At first we give them the option to view anything, and we give them the 
option to view "not anything". The problem is that we have to define what 
"not anything" is. This imposes our judgment on the reader. That means 
that even if the reader decides to hide some content, it was our 
(and not his) decision what is hidden.

This comes down to two cases:

1. If he does not use the filter, then - as you say - we impose our 
judgment on the reader.
2. If he does use the filter, then - as I say - we impose our judgment 
on the reader as well.

Both cases seem to be equal. No win or loss with or without the filter. But 
there is a slight difference.

If we treat nothing as objectionable (no filter), then we don't need to 
play the judge. We say: "We accept anything; it's up to you to judge."
If we start to add a category-based filter, then we play the judge 
over our own content. We say: "We accept anything, but this might not be 
good to look at. Now it is up to you to trust our opinion or not."

The latter imposes our judgment on the reader, while the first makes no 
judgment at all and leaves everything to the free mind of the reader. 
("Free mind" means that the reader has to find his own answer to this 
question. He might have objections or could agree.)
 It deepens the viewpoint that this content is objectionable and that it is
 generally accepted this way, even if not. That means that we would be
 patronizing the readers that have a tendency to enable a filter (not even
 particularly an image filter).

 This is a reasonable objection; and again it goes back to this idea of how
 far we enforce our world view on readers. I think that there are ways
 that a filter could be enabled that improve Wikipedia for our readers
 (helping neutrality), and equally there are ways that it could be enabled
 that adversely affect this goal.

 So if done; it needs to be done right.
The big question is: can it be done right?

A filter that only knows a "yes" or "no" to questions that are 
influenced by different cultural views seems to fail right away. It 
draws a sharp line through everything, ignoring the fact that even in one 
culture there are a lot of border cases. I did not want to use examples, 
but I will still give one. Suppose we have a photograph of a young woman 
at the beach. How would we handle the case that her swimsuit shows a lot 
of naked flesh? I'm sure more than 90% of western-country citizens would 
have no objection to this image if it is inside a corresponding 
article. But as soon as we go to other cultures, let's say Turkey, we 
might find very different viewpoints on whether this should be hidden by 
the filter or not. I remember the question in the referendum about whether 
the filter should be culturally neutral. Many agreed on this point. But 
how in God's name should this be done? Especially: how can this be done 
right?

 ... and that is exactly what makes me curious about this approach. You
 assume that we aren't neutral, and Sue described us, in the median, as a
 little bit geeky, which goes in the same direction.

 We are not; over time it is fairly clear that we reflect certain world
 views. To pluck an example out of thin air - in the 9/11 article there is
 extremely strong resistance to adding a "see also" link to the article on
 9/11 conspiracies. This reflects a certain bias/world view we are imposing.
 That is an obvious example - there are many more.

 The bias is not uniform; we have various biases depending on the subject -
 and over time those biases can swing back and forth depending on
 the prevalent group of editors at that time. Many of our articles have
 distinctly different tone/content/slant to foreign language ones (which is a
 big giveaway IMO).

 Another example: English Wikipedia has a pretty strong policy on BLP
 material that restricts a lot of what we record - other-language wikis do
 not have the same restrictions, and things we would not consider noting
 (such as non-notable children's names) are not considered a problem on
 other wikis.


 But if we aren't neutral at
 all, how can we even believe that a controversial-content filter system
 based upon our views would be neutral in judgment or, as proposed in the
 referendum, culturally neutral? (Question: Is there even such a thing as
 cultural neutrality?)

 No; this is the 

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Thomas Morton

 That comes down to the two layers of judgment involved in this proposal.
 At first we give them the option to view anything, and we give them the
 option to view "not anything". The problem is that we have to define what
 "not anything" is. This imposes our judgment on the reader. That means
 that even if the reader decides to hide some content, it was our
 (and not his) decision what is hidden.


No; because the core functionality of a filter should always present the
choice "do you want to see this image or not?" - which is specifically not
imposing our judgement on the reader :) Whether we then place some optional
preset filters for the readers to use is certainly a matter of discussion -
but nothing I have seen argues against this core idea.

If we treat nothing as objectionable (no filter), then we don't need to
 play the judge. We say: "We accept anything; it's up to you to judge."
 If we start to add a category-based filter, then we play the judge
 over our own content. We say: "We accept anything, but this might not be
 good to look at. Now it is up to you to trust our opinion or not."


Implementing a graded filter - one which lets you set grades of visibility
rather than off/on - addresses this concern, because once again it gives the
reader ultimate control over the question of what they want to see. If they
are seeing too much for their preference, they can tweak up, and vice
versa.
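
As a sketch (the scale is invented), the reader's grade would then just be
a threshold compared against a per-image score such as the one I outlined
earlier - collapsing, never removing, whatever lies above it:

// Collapse (not remove) any image scoring above the reader's chosen grade;
// tweaking the threshold up or down adjusts how much is shown by default.
function shouldCollapse(imageScore: number, readerThreshold: number): boolean {
  return imageScore > readerThreshold;
}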



 The latter imposes our judgment on the reader, while the first makes no
 judgment at all and leaves everything to the free mind of the reader.
 ("Free mind" means that the reader has to find his own answer to this
 question. He might have objections or could agree.)


And if he objects, we are then just ignoring him?

I disagree with your argument; both points are imposing our judgement on the
reader.

A filter that only knows a "yes" or "no" to questions that are
 influenced by different cultural views seems to fail right away. It
 draws a sharp line through everything, ignoring the fact that even in one
 culture there are a lot of border cases. I did not want to use examples,
 but I will still give one. Suppose we have a photograph of a young woman
 at the beach.



 How would we handle the case that her swimsuit shows a lot of
 naked flesh? I'm sure more than 90% of western-country citizens would
 have no objection to this image if it is inside a corresponding
 article. But as soon as we go to other cultures, let's say Turkey, we
 might find very different viewpoints on whether this should be hidden by
 the filter or not.


Agreed; which is why we allow people to filter based on a sliding scale,
rather than a discrete yes or no. So someone who has no objection to such an
image, but wants to hide people having sex can do so. And someone who wants
to hide that image can have a stricter grade on the filter.

If nothing else the latter case is the more important one to address,
because sexual images are largely tied to sexual subjects, and any
reasonable person should expect those images to appear. But if culturally
you object to seeing people in swimwear then this could be found in almost
any article.

We shouldn't judge those cultural objections as invalid.  Equally we
shouldn't endorse them as valid. There is a balance somewhere between those
two extremes.


 I remember the question in the referendum about whether the filter
 should be culturally neutral. Many agreed on this point. But how in God's
 name should this be done? Especially: how can this be done right?


I suggested a way in which we could cover a broad spectrum of views on one
key subject without setting discrete categories of visibility.


 I believe that the idea dies the moment we assume that we can
 achieve neutrality through filtering. Theoretically speaking, there are
 only three types of neutral filters. The first lets everything through,
 the second blocks everything, and the third is totally random, resulting
 in an equal 50:50 chance over large numbers. Currently we would ideally
 have the first filter. Your examples show that this isn't always true,
 but at least this is the goal. Filter two would be equal to "don't show
 anything", or shutting down Wikipedia. Not a real option, I know. The
 third option is a theoretical construct that would not work, since it
 contains an infinite amount of information, but also nothing at all.


What about the fourth type, which gives you extensive options to filter out
(or, better described, to collapse) content from initial viewing per your
specific preferences?

This is a technical challenge, but in no way unachievable.

I made an analogy before that some people might prefer to surf Wikipedia
with plot summaries collapsed (I would be one of them!). In a perfect world
we would have the option to collapse *any* section in a Wikipedia article
and have that option stored. Over time the software would notice I was
collapsing plot summaries and, so, intelligently collapse summaries on newly
visited pages for me. Plus there might even be an option in 
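
A minimal sketch of that kind of preference learning (the section-type
labels and thresholds are invented):

// Track how often the reader collapses each type of section.
const collapseStats = new Map<string, { collapsed: number; seen: number }>();

function recordSection(sectionType: string, collapsedByReader: boolean): void {
  const s = collapseStats.get(sectionType) ?? { collapsed: 0, seen: 0 };
  s.seen += 1;
  if (collapsedByReader) s.collapsed += 1;
  collapseStats.set(sectionType, s);
}

// Pre-collapse a type (e.g. "plot-summary") once the reader has seen it at
// least five times and collapsed it more than 80% of the time.
function shouldPreCollapse(sectionType: string): boolean {
  const s = collapseStats.get(sectionType);
  return s !== undefined && s.seen >= 5 && s.collapsed / s.seen > 0.8;
}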

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Andreas Kolbe


From: Tobias Oelgarte tobias.oelga...@googlemail.com

On 18.10.2011 11:43, Thomas Morton wrote:
 It is this fallacious logic that underpins our crazy politics of
 neutrality which we attempt to enforce on people (when in practice we lack
 neutrality almost as much as the next man!).
... and that is exactly what makes me curious about this approach. You 
assume that we aren't neutral, and Sue described us, in the median, as a 
little bit geeky, which goes in the same direction. But if we aren't 
neutral at all, how can we even believe that a controversial-content 
filter system based upon our views would be neutral in judgment or, as 
proposed in the referendum, culturally neutral? (Question: Is there even 
such a thing as cultural neutrality?)

Who said that the personal image filter function should be based on *our* 
judgment? It shouldn't. 

As Wikipedians, we are used to working from sources. In deciding what content 
to include, we look at high-quality, educational sources, and try to reflect 
them fairly. 

Now, given that we are a top-10 website, why should it not make sense to look 
at what other large websites like Google, Bing, and Yahoo allow the user to 
filter, and what media Flickr and YouTube require opt-ins for? Why should we 
not take our cues from them? The situation seems quite analogous.

As the only major website *not* to offer users a filter, we have more in common 
with 4chan than the mainstream. Any abstract discussion of neutrality that 
neglects to address this fundamental point misses the mark. Our present 
approach is not neutral by our own definition of neutrality; it owes more to 
Internet culture than to the sources we cite.

Another important point that Thomas made is that any filter set-up should use 
objective criteria, rather than criteria based on offensiveness. We should not 
make a value judgment, we should simply offer users the browsing choices they 
are used to in mainstream sites.  

Best,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Jussi-Ville Heiskanen
On Tue, Oct 18, 2011 at 2:44 AM, Andreas Kolbe jayen...@yahoo.com wrote:



 The English Wikipedia community, like any other, has always contained a wide 
 spectrum of opinion on such matters. We have seen this in the past, with long 
 discussions about contentious cases like the goatse image, or the Katzouras 
 photos. That is unlikely to ever change.

 But we do also subscribe to the principle of least astonishment. If the 
 average reader finds our image choices odd, or unexpectedly and needlessly 
 offensive, then we alienate a large part of our target audience, and may 
 indeed only attract an unnecessarily limited demographic as contributors.


You completely and utterly misrepresent what the principle of least
astonishment is supposed to address. It is a matter of where people
should be directed when there are conflicting disambiguation issues.
It doesn't refer to content issues in the slightest. Period. We don't
say you can read an article about X and not see pictures of X. That is
ridiculous.


-- 
--
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Andreas K.
On Tue, Oct 18, 2011 at 7:00 PM, Jussi-Ville Heiskanen cimonav...@gmail.com
 wrote:

 On Tue, Oct 18, 2011 at 2:44 AM, Andreas Kolbe jayen...@yahoo.com wrote:


 
  The English Wikipedia community, like any other, has always contained a
 wide spectrum of opinion on such matters. We have seen this in the past,
 with long discussions about contentious cases like the goatse image, or the
 Katzouras photos. That is unlikely to ever change.
 
  But we do also subscribe to the principle of least astonishment. If the
 average reader finds our image choices odd, or unexpectedly and needlessly
 offensive, then we alienate a large part of our target audience, and may
 indeed only attract an unnecessarily limited demographic as contributors.
 

 You completely and utterly misrepresent what the principle of least
 astonishment is supposed to address. It is a matter of where people
 should be directed when there are conflicting disambiguation issues.
 It doesn't refer to content issues in the slightest. Period. We don't
 say you can read an article about X and not see pictures of X. That is
 ridiculous.



The principle of least astonishment is mentioned thrice in the board
resolution on controversial content:

http://wikimediafoundation.org/wiki/Resolution:Controversial_content

We support the principle of least astonishment: content on Wikimedia
projects should be presented to readers in such a way as to respect their
expectations of what any page or feature might contain.

Signpost coverage:
http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2011-06-06/News_and_notes#Board_resolutions_on_controversial_content_and_images_of_identifiable_people

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 17:23, Thomas Morton wrote:
 That comes down to the two layers of judgment involved in this proposal.
 At first we give them the option to view everything, and we give them the
 option to view not everything. The problem is that we have to define what
 not everything is. This imposes our judgment on the reader. That means
 that even if the reader decides to hide some content, it was our
 (and not his) decision what is hidden.

 No; because the core functionality of a filter should always present the
 choice do you want to see this image or not. Which is specifically not
 imposing our judgement on the reader :) Whether we then place some optional
 preset filters for the readers to use is certainly a matter of discussion -
 but nothing I have seen argues against this core idea.
Yes; because even the provision of a filter implies that some content is 
seen as objectionable and treated differently from other content. This is 
only unproblematic as long as we don't provide default settings, aka 
categories, which introduce our judgment to the readership. The mere 
fact that our judgment is visible is already enough to manipulate the 
reader in what to see as objectionable or not. This scenario is very 
much comparable to the unknown man who sits behind you, looking 
randomly at your screen while you want to inform yourself. Just the 
thought that someone else could be upset is already an issue. Having us 
directly show/indicate what we think others find objectionable 
is even stronger.
 If we treat nothing as objectionable (no filter), then we don't need to
 play the judge. We say: We accept anything, it's up to you to judge.
 If we start to add a category based filter, then we play the judge
 over our own content. We say: We accept anything, but this might not be
 good to look at. Now it is up to you to trust our opinion or not.

 Implementing a graded filter, one which lets you set grades of visibility
 rather than off/on, addresses this concern - because once again it gives the
 reader ultimate control over the question of what they want to see. If they
 are seeing too much for their preference they can tweak up, and vice
 versa.
This would imply that we, the ones who are unable to neutrally handle 
content, would be perfect at categorizing images by fine degrees of 
nudity. But even having multiple steps would not be a satisfying 
solution. There are many cultural regions that differentiate strongly 
between men and women. While they would have no problem seeing a man in 
just his boxer shorts, it would be seen as offensive to show a woman with 
uncovered hair. I wonder what effort it would take to accomplish this goal 
(if it is even possible), compared to the benefits.

 The latter imposes our judgment on the reader, while the former makes no
 judgment at all and leaves everything to the free mind of the reader. (Free
 mind means that the reader has to find his own answer to this
 question. He might have objections or could agree.)

 And if he objects, we are then just ignoring him?

 I disagree with your argument; both points are imposing our judgement on the
 reader.
If _we_ do the categorization, then we impose our judgment, since it was 
us who made the decision. It is not a customized filter where the user 
decides what is best for himself. Showing everything might not be ideal 
for all readers. Hiding more than preferred might also not be ideal for 
all readers. Hiding less than preferred is just another non-ideal case. 
We can't meet everyone's taste, just as no book can meet everyone's taste. 
While Harry Potter seems to be fine in many cultures, in some there 
might be parts that are seen as offensive. Would you hide/rewrite parts 
of Harry Potter to make them all happy, or would you go after the 
majority of the market and ignore the rest?

There is one simple way to deal with it. If someone does not like our 
content, then he doesn't need to use it. If someone does not like the 
content of a book, he does not need to buy it. He can complain about it. 
That's what Philip Pullman meant with: No one has the right to live 
without being shocked.
 Agreed; which is why we allow people to filter based on a sliding scale,
 rather than a discrete yes or no. So someone who has no objection to such an
 image, but wants to hide people having sex can do so. And someone who wants
 to hide that image can have a stricter grade on the filter.

 If nothing else, the latter case is the more important one to address,
 because sexual images are largely tied to sexual subjects, and any
 reasonable person should expect those images to appear there. But if you
 culturally object to seeing people in swimwear, such images could be found
 in almost any article.

 We shouldn't judge those cultural objections as invalid.  Equally we
 shouldn't endorse them as valid. There is a balance somewhere between those
 two extremes.
Yes, there is a balance between two extremes. But whoever said that the 
center between two opinions is seen 

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 19:04, Andreas Kolbe wrote:

 From: Tobias Oelgartetobias.oelga...@googlemail.com
 On 18.10.2011 11:43, Thomas Morton wrote:
 It is this fallacious logic that underpins our crazy politics of
 neutrality which we attempt to enforce on people (when in practice we lack
 neutrality almost as much as the next man!).
 ... and that is exactly what makes me curious about this approach. You
 assume that we aren't neutral, and Sue described us as, on median, a little
 bit geeky, which goes in the same direction. But if we aren't neutral at
 all, how can we even believe that a controversial-content-filter system
 based upon our views would be neutral in judgment or, as proposed in the
 referendum, culturally neutral? (Question: Is there even such a thing as
 cultural neutrality?)
 Who said that the personal image filter function should be based on *our* 
 judgment? It shouldn't. 

 As Wikipedians, we are used to working from sources. In deciding what content 
 to include, we look at high-quality, educational sources, and try to reflect 
 them fairly. 

 Now, given that we are a top-10 website, why should it not make sense to look 
 at what other large websites like Google, Bing, and Yahoo allow the user to 
 filter, and what media Flickr and YouTube require opt-ins for? Why should we 
 not take our cues from them? The situation seems quite analogous.

 As the only major website *not* to offer users a filter, we have more in 
 common with 4chan than the mainstream. Any abstract discussion of neutrality 
 that neglects to address this fundamental point misses the mark. Our present 
 approach is not neutral by our own definition of neutrality; it owes more to 
 Internet culture than to the sources we cite.

 Another important point that Thomas made is that any filter set-up should use 
 objective criteria, rather than criteria based on offensiveness. We should 
 not make a value judgment, we should simply offer users the browsing choices 
 they are used to in mainstream sites.  

 Best,
 Andreas
You said that we should learn from Google and other top websites, but at 
the same time you want to introduce objective criteria, which neither of 
these websites did? You also compare Wikipedia with an image board like 
4chan? You want the readers to define what they want to see. That means 
they would play the judge, and the majority would win. But this is in 
contrast to the proposal that the filter should work with objective 
criteria.

Could you please crosscheck your own comment and tell me what kind of 
solution you have in mind? Currently it is a mix of very different 
approaches that don't fit together.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 The English Wikipedia community, like any other, has always contained a
 wide spectrum of opinion on such matters.

Of course.  But consensus != unanimity.

Your interpretation of the English Wikipedia's neutrality policy
contradicts that under which the site operates.

  The New York Times (recipient of more Pulitzer Prizes than any other
  news organization) uses Stuff My Dad Says.  So does the Los Angeles
  Times, which states that the subject's actual name is unsuitable for
  a family publication.
 
  http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html
  http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
 
  You might dismiss those sources as the popular press, but they're
  the most reputable ones available on the subject.  Should we deem
  their censorship sacrosanct and adopt it as our own?

 No. :)

Please elaborate.  Why shouldn't we follow the example set by the most
reliable sources?

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Thomas Morton

 This is only unproblematic as long as we don't provide default settings, aka
 categories, which introduce our judgment to the readership. The mere
 fact that our judgment is visible is already enough to manipulate the
 reader in what to see as objectionable or not. This scenario is very
 much comparable to the unknown man who sits behind you, looking
 randomly at your screen while you want to inform yourself. Just the
 thought that someone else could be upset is already an issue. Having us
 directly show/indicate what we think others find objectionable
 is even stronger.


I guess we just sit on opposite sides of this point; I think that a broad
but clear categorisation with a slider control to figure out how much or
little you wish to see is perfectly fine.

It is uncontroversial that people find nudity inconvenient or objectionable. I
see no issue in considering that a filter area.

This would imply that we, the ones who are unable to neutrally handle
 content, would be perfect at categorizing images by fine degrees of
 nudity.




 But even having multiple steps would not be a satisfying
 solution. There are many cultural regions that differentiate strongly
 between men and women.


By using broad strokes that disregard gender we address this concern - sure,
it may be somewhat imperfect for people who specifically don't want to see
bare-armed women, because it would end up blocking similarly attired men. But
it is better than the situation we have.


 We can't meet everyone's taste, just as no book can meet everyone's taste.


True; but we can try to improve things.


 While Harry Potter seems to be fine in many cultures, in some there
 might be parts that are seen as offensive. Would you hide/rewrite parts
 of Harry Potter to make them all happy, or would you go after the
 majority of the market and ignore the rest?


I'm not sure of the relevance; HP is a commercial product with a distinctly
different aim and market from ours. They go after the core market because
it makes commercial sense; we are not limited in this way.


 There is one simple way to deal with it. If someone does not like our
 content, then he doesn't need to use it. If someone does not like the
 content of a book, he does not need to buy it.


I find this a non-optimal and very bad solution.


  I suggested a way in which we could cover a broad spectrum of views on
 one
  key subject without setting discrete categories of visibility.
 As explained above, this will be a very, very hard job to do. Even in the
 most simple subject, sexuality, you will need more than one scale to
 measure content against. Other topics, like religious or cultural
 topics, will be an even harder job.


Not really; one scale would do nicely and cover most of the use cases.


 That is kind of another drawing-the-line case. To be neutral we should
 represent both (or more) points of view.


No; this is not neutrality (this is my bugbear, because it is the underlying
reason we are not neutral, and have trouble appraising neutrality in
content).


 But showing the reader only
 what he wants to read is not real knowledge.


This really comes back to my argument about our views and biases. If you read a
topic you obviously want to know the view of it held by people you agree with.

Now I agree that throwing differing views into the mix can be useful, or
give you another viewpoint. But you are still predominantly interested in a
viewpoint that you consider accurate and compelling.

*There is nothing wrong with this.*

Presenting two parallel views with the aim of bouncing them off each other
to impart the knowledge is also not neutral.

Tom
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Andreas Kolbe
From: David Levy lifeisunf...@gmail.com


  The New York Times (recipient of more Pulitzer Prizes than any other

  news organization) uses Stuff My Dad Says.  So does the Los Angeles
  Times, which states that the subject's actual name is unsuitable for
  a family publication.
 
  http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html
  http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
 
  You might dismiss those sources as the popular press, but they're
  the most reputable ones available on the subject.  Should we deem
  their censorship sacrosanct and adopt it as our own?

 No. :)

 Please elaborate.  Why shouldn't we follow the example set by the most
 reliable sources?


I don't consider press sources the most reliable sources, or in general a good 
model to follow. Even among press sources, there are many (incl. Reuters) 
who call the Twitter feed by its proper name, Shit my dad says.

Scholars don't write f*ck when they mean fuck. As an educational resource, we 
should follow the best practices adopted by educational and scholarly sources.


Best,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Andreas K.
On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte 
tobias.oelga...@googlemail.com wrote:

 You said that we should learn from Google and other top websites, but at
 the same time you want to introduce objective criteria, which neither of
 these websites did?




What I mean is that we should not classify media as offensive, but in terms
such as photographic depictions of real-life sex and masturbation, images
of Muhammad. If someone feels strongly that they do not want to see these
by default, they should not have to. In terms of what areas to cover, we can
look at what people like Google do (e.g. by comparing moderate safe search
and safe search off results), and at what our readers request.




 You also compare Wikipedia with an image board like
 4chan? You want the readers to define what they want to see. That means
 they would play the judge, and the majority would win. But this is in
 contrast to the proposal that the filter should work with objective
 criteria.




I do not see this as the majority winning, and a minority losing. I see it
as everyone winning -- those who do not want to be confronted with whatever
media don't have to be, and those who want to see them can.




 Could you please crosscheck your own comment and tell me what kind of
 solution you have in mind? Currently it is a mix of very different
 approaches that don't fit together.




My mind is not made up; we are still in a brainstorming phase. Of the
alternatives presented so far, I like the opt-in version of Neitram's
proposal best:

http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fhidden

If something better were proposed, my views might change.


Best,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 I don't consider press sources the most reliable sources, or in general a good
 model to follow. Even among press sources, there are many (incl. Reuters)
 who call the Twitter feed by its proper name, Shit my dad says.

The sources to which I referred are the most reputable ones cited in
the English Wikipedia's article.

Of course, I agree that we needn't emulate the style in which they
present information.  That's my point.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Andreas K.
On Tue, Oct 18, 2011 at 10:30 PM, David Levy lifeisunf...@gmail.com wrote:

 Andreas Kolbe wrote:

  I don't consider press sources the most reliable sources, or in general a
 good
  model to follow. Even among press sources, there are many (incl. Reuters)
  who call the Twitter feed by its proper name, Shit my dad says.

 The sources to which I referred are the most reputable ones cited in
 the English Wikipedia's article.

 Of course, I agree that we needn't emulate the style in which they
 present information.  That's my point.

 David Levy



I understand that. But if we use a *different* style, it should still be
traceable to an educational or scholarly standard, rather than one we have
made up, or inherited from 4chan. Would you agree?

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Tobias Oelgarte
On 18.10.2011 23:20, Andreas K. wrote:
 On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte
 tobias.oelga...@googlemail.com  wrote:

 You said that we should learn from Google and other top websites, but at
 the same time you want to introduce objective criteria, which neither of
 these websites did?



 What I mean is that we should not classify media as offensive, but in terms
 such as photographic depictions of real-life sex and masturbation, images
 of Muhammad. If someone feels strongly that they do not want to see these
 by default, they should not have to. In terms of what areas to cover, we can
 look at what people like Google do (e.g. by comparing moderate safe search
 and safe search off results), and at what our readers request.


The problem is that we never asked our readers before the whole thing 
was already running wild. It would really be time to ask about the 
feelings of the readers. That would mean asking the readers in very 
different regions to get a good overview of this topic. What Google 
and other commercial groups do shouldn't be a reference for us. They 
serve their core audience and ignore the rest, since their aim is 
profit, and only profit, no matter what good reasons they present. 
We are quite an exception to them. Not in popularity, but in concept. 
If we return to the example of futanari, then we surely agree that there 
could be quite a lot of people who would be surprised. Especially if 
safe-search is on. But now we have to ask: why is it that way? Why does 
it work so well for other, more common terms in a western audience?

 You also compare Wikipedia with an image board like
 4chan? You want the readers to define what they want to see. That means
 they would play the judge, and the majority would win. But this is in
 contrast to the proposal that the filter should work with objective
 criteria.



 I do not see this as the majority winning, and a minority losing. I see it
 as everyone winning -- those who do not want to be confronted with whatever
 media don't have to be, and those who want to see them can.

I guess you missed the point that a minority of offended people would 
just be ignored. Looking at the goal and Ting's examples, we would 
just strengthen the current position (western majority and point of 
view) while doing little to nothing in the areas that were the main 
concern, or at least the strong argument for starting the process. If it 
really comes down to the point that a majority does not find Muhammad 
caricatures offensive and it wins, then we have no solution.
 Could you please crosscheck your own comment and tell me what kind of
 solution you have in mind? Currently it is a mix of very different
 approaches that don't fit together.



 My mind is not made up; we are still in a brainstorming phase. Of the
 alternatives presented so far, I like the opt-in version of Neitram's
 proposal best:

 http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fhidden

 If something better were proposed, my views might change.


 Best,
 Andreas

I read this proposal and on second thought can't see a real difference. 
At first glance it is good that the decision stays related to the 
topic and is not separated out as in the first proposals. But it also has a 
bad taste to it. We directly deliver the tags needed for third parties 
(ISPs, local networks, institutions) to remove content, no matter 
whether the reader chooses to view the image or not, and we are still in 
charge of declaring what might be or is offensive to others, forcing our 
judgment onto the users of the feature.

Overall it follows a good intention, but I'm very concerned about the 
side effects, which just make me say no way to this proposal as it is.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Andreas K.
On Tue, Oct 18, 2011 at 9:17 PM, David Levy lifeisunf...@gmail.com wrote:

 Andreas Kolbe wrote:

  Now, given that we are a top-10 website, why should it not make sense to
  look at what other large websites like Google, Bing, and Yahoo allow the
  user to filter, and what media Flickr and YouTube require opt-ins for?
  Why should we not take our cues from them? The situation seems quite
  analogous.

 Again, those websites are commercial endeavors whose decisions are
 based on profitability, not an obligation to maintain neutrality (a
 core element of most WMF projects).  These services can cater to the
 revenue-driving majorities (with geographic segregation, if need be)
 and ignore minorities whose beliefs fall outside the mainstream for
 a given country.

 This probably works fairly well for them; most users are satisfied,
 with the rest too fragmented to be accommodated in a cost-effective
 manner.  Revenues are maximized.  Mission accomplished.



Satisfying most users is a laudable aim for any service provider, whether
revenue is involved or not. Why should we not aim to satisfy most of our users,
or appeal to as many potential users as possible?



 The WMF projects' missions are dramatically different.  For most,
 neutrality is a nonnegotiable principle.  To provide an optional
 filter for one image type and not another is to formally validate the
 former objection and not the latter.  That's unacceptable.



This goes back to our fundamental disagreement about what neutrality means.
You give it your own definition, which, as I understand you, means
refraining from making judgments. But that is not how we work. We constantly
apply judgment, based on the judgment of reliable sources.

We constantly discriminate.

We say, This is unsourced; it may be true, but you can't have it in the
article.

We say, This is interesting, but it is synthesis, or original research, and
you can't have it in the article.

We say, This is a self-published source, it does not have an editorial
staff, therefore it is not reliable.

By doing so, we are constantly empowering the judgment of the professional,
commercial outfits who produce what we term reliable sources.

If this is unacceptable to you, do you also object to our sourcing policies
and guidelines?

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread Bjoern Hoehrmann
* Andreas K. wrote:
Satisfying most users is a laudable aim for any service provider, whether
revenue is involved or not. Why should we not aim to satisfy most of our users,
or appeal to as many potential users as possible?

Many Wikipedians would disagree that they or Wikipedia as a whole is a
service provider. The first sentence on the German-language version,
for instance, is Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie
aus freien Inhalten in allen Sprachen der Welt (Wikipedia is a project to
build an encyclopedia of free content in all the languages of the world).
That's about creating something, not about providing some service to others,
much less trying to satisfy most people who might wish to be serviced.

I invite you to have a look at http://katograph.appspot.com/ which shows
the category system of the German Wikipedia at the end of 2009, with
information about how many articles can be found under them and the number
of views of articles in the category over a three-day period. You will
find, for instance, that there are many more articles on buildings than on
movies, many times more, but articles on movies get more views in total.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 But if we use a *different* style, it should still be traceable to an
 educational or scholarly standard, rather than one we have made up, or
 inherited from 4chan. Would you agree?

Yes, and I dispute the premise that the English Wikipedia has failed
in this respect.

As I've noted, we always must gauge available images' illustrative
value on an individual basis.  We do so by applying criteria intended
to be as objective as possible, thereby reflecting (as closely as we
can, given the relatively small pool of libre images) the quality
standards upheld by reputable publications.  We also reject images
inconsistent with reliable sources' information on the subjects
depicted therein.

We don't, however, exclude images on the basis that others declined to
publish the same or similar illustrations.

Images widely regarded as objectionable commonly are omitted for
this reason (which is no more relevant to Wikipedia than the
censorship of objectionable words is).  But again, we needn't seek
to determine when this has occurred.  We can simply apply our normal
assessment criteria across the board (irrespective of whether an image
depicts a sexual act or a pine tree).

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 Satisfying most users is a laudable aim for any service provider, whether
 revenue is involved or not. Why should we not aim to satisfy most of our users,
 or appeal to as many potential users as possible?

It depends on the context.  There's nothing inherently bad about
satisfying as many users as possible.  It's doing so in a
discriminatory, non-neutral manner that's problematic.

We probably could satisfy most users by detecting their locations and
displaying information intended to reflect the beliefs prevalent there
(e.g. by favoring the majority religions).  But that, like the
creation of special categories for images deemed potentially
objectionable, is incompatible with most WMF projects' missions.

 We constantly discriminate.

 We say, This is unsourced; it may be true, but you can't have it in the
 article.

 We say, This is interesting, but it is synthesis, or original research, and
 you can't have it in the article.

 We say, This is a self-published source, it does not have an editorial staff,
 therefore it is not reliable.

 By doing so, we are constantly empowering the judgment of the professional,
 commercial outfits who produce what we term reliable sources.

 If this is unacceptable to you, do you also object to our sourcing policies
 and guidelines?

You're still conflating disparate concepts.  (I've elaborated on this
point several times.)

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-17 Thread Tobias Oelgarte
On 16.10.2011 21:27, ??? wrote:
 On 16/10/2011 19:36, Tobias Oelgarte wrote:
 On 16.10.2011 16:17, ??? wrote:
 On 16/10/2011 14:50, David Gerard wrote:
 On 16 October 2011 14:40, ???wiki-l...@phizz.demon.co.uk wrote:

 Don't be an arsehole you get the same sort of stuff if you search for
 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.

 Trolling much eh David?


 But thanks for showing once again your incapacity to acknowledge that
 searching for sexual images and seeing such images is somewhat
 different from searching for non-sexual imagery and getting sexual images.

 I have to agree with David. Your behavior is provocative and
 unproductive. I don't feel the need to respond to your arguments at all
 if you write in this tone. You could either apologize for this
 kind of wording, or we are done.


 Now you wouldn't be complaining about seeing content not to your liking,
 would you. What are you going to do, filter out the posts? Bet you're glad
 your email provider added that option for you.

 Yet another censorship hypocrite.
I guess you did not understand my answer. That's why I'm feeling free to 
respond one more time.

I have no problem with any kind of controversial content. Showing the 
progress of fisting on the main page? No problem for me. Reading your 
comments? No problem for me. Reading your insults? Also no problem. The 
only thing I did was the following: I told you that I will not react 
any longer to your comments if they are worded in the manner they 
currently are.

Figuratively speaking: I'm feeling free to open your book and start to read. 
If it is interesting and constructive I will continue to read it, and I will 
respond to you to share my thoughts. If it is purely meant to insult, 
without any other meaning, then I will get bored and skim over the lines, 
reading only half or less. I also have no intention to share my 
thoughts with the author of this book. Why? I have nothing to talk 
about. Should I complain about its content? Which content anyway?

Give it a try. Make constructive arguments and explain your thoughts. 
There is no need for strong wording if the construction of the words 
itself is strong.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-17 Thread Andreas Kolbe
Note: This foundation-l post is cross-posted to commons-l, since this 
discussion may be of interest there as well.



 From: Tobias Oelgarte tobias.oelga...@googlemail.com

 It is an in-house made problem, as I explained at brainstorming [1].
 To put it short: it is a self-made problem, based on the fact that these 
 images got more attention than others. Thanks to failed deletion 
 requests they had many people caring about them. This results in more 
 exact descriptions and file naming than in average images. That's what 
 search engines prefer; and now we have them at a top spot. Thanks for 
 caring so much about these images and not treating them like anything else.



I don't think that is the case, actually. Brandon described how the search 
function works here:

http://www.quora.com/Why-is-the-second-image-returned-on-Wikimedia-Commons-when-one-searches-for-electric-toothbrush-an-image-of-a-female-masturbating


To take an example, the file 

http://commons.wikimedia.org/w/index.php?title=File:Golden_Shower.jpg&action=history 

(a prominent search result in searches for shower) has never had its name or 
description changed since it was uploaded from Flickr. My impression is that 
refinement of file names and descriptions following discussions has little to 
do with sexual or pornography-related media appearing prominently in search 
listings. The material is simply there, and the search function finds it, as it 
is designed to do.




 Andreas, you currently represent exactly the kind of argumentation that 
 leads to anything but a solution. I described already in 
 the post Controversial Content vs Only-Image-Filter [2] that single 
 examples don't represent the overall theme. It also isn't an addition 
 to the discussion as an argument. It would be an argument if we 
 knew the effects that occur. We have to clarify the question:




It is hard to say how else to provide evidence of a problem, other than by 
giving multiple (not single) examples of it.

You could also search for blond, blonde, red hair, strawberry, or peach ...

What is striking is the crass sexism of some of the filenames and image 
descriptions: blonde bombshell, Blonde teenie sucking, so, so sexy, 
These two had a blast showing off etc.

http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=blonde&fulltext=Search


One of the images shows a young woman in the bathroom, urinating: 

http://commons.wikimedia.org/wiki/File:Blonde_woman_urinating.jpg


Her face is fully shown, and the image, displayed in the Czech Wikipedia, 
carries no personality rights warning, nor is there evidence that she has 
consented to or is even aware of the upload.

And I am surprised how often images of porn actresses are found in search 
results, even for searches like Barbie. Commons has 917 files in 
Category:Unidentified porn actresses alone. There is no corresponding 
Category:Unidentified porn actors (although there is of course a wealth of 
categories and media for gay porn actors).



 * Is it a problem that the search function displays sexual content? (A 
 search should find anything related, by definition.)




I think the search function works as designed, looking for matches in file 
names and descriptions. 



 * Is sexual content overrepresented by the search?



I don't think so. The search function simply shows what is there. However, the 
sexual content that comes up for innocuous searches sometimes violates the 
principle of least astonishment, and thus may turn some users off using, 
contributing to, or recommending Commons as an educational resource.



 * If that is the case. Why is it that way?
 * Can we do something about it, without drastic changes, like 
 blocking/excluding categories?




One thing that might help would be for the search function to privilege files 
that are shown in top-level categories containing the search term: e.g. for 
cucumber, first display all files that are in category cucumber, rather 
than those contained in subcategories, like sexual penetrative use of 
cucumbers, regardless of the file name (which may not have the English word 
cucumber in it).

A second step would be to make sure that sexual content is not housed in the 
top categories, but in appropriately named subcategories. This is generally 
already established practice. Doing both would reduce the problem somewhat, at 
least in cases where there is a category that matches the search term.
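
To illustrate the first step, a toy sketch (Python). The data structures are
invented for the example, and the real search runs on an indexed backend, so
this shows only the ranking idea, not an implementation:

    def rank_results(term: str, files: list[dict]) -> list[dict]:
        """Rank direct category matches first, subcategory matches second,
        everything else (e.g. filename-only hits) last."""
        term = term.lower()

        def score(f: dict) -> tuple:
            direct = any(term == c.lower() for c in f["categories"])
            partial = any(term in c.lower() for c in f["categories"])
            return (0 if direct else 1 if partial else 2, f["title"])

        return sorted(files, key=score)

    files = [
        {"title": "Penetrative_use.jpg",
         "categories": ["Sexual penetrative use of cucumbers"]},
        {"title": "Cucumber_on_vine.jpg", "categories": ["Cucumber"]},
        {"title": "Green_salad.jpg", "categories": ["Salads"]},
    ]

    for f in rank_results("cucumber", files):
        print(f["title"])
    # Cucumber_on_vine.jpg comes first; the subcategory hit is demoted.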


Regards,
Andreas



[1] 

http://meta.wikimedia.org/w/index.php?title=Controversial_content%2FBrainstorming&action=historysubmit&diff=2996411&oldid=2995984
[2] 
http://lists.wikimedia.org/pipermail/foundation-l/2011-October/069699.html

On 17.10.2011 02:56, Andreas Kolbe wrote:
 Personality conflicts aside, we're noting that non-sexual search terms in 
 Commons can prominently return sexual images of varying explicitness, from 
 mild nudity to hardcore, and that this is different from entering a sexual 
 search term 

Re: [Foundation-l] Letter to the community on Controversial Content (???)

2011-10-17 Thread WereSpielChequers
Re

 I claim that you are talking total crap. It is not *that* difficult to
 get the categories of an image and reject based on which categories the
 image is in. There are enough people out there busily categorizing all the
 images already that any org that may wish to could block images that
 are in disapproved categories.

It is incredibly easy. One just says any image within Category:Sex is
not acceptable. It's not hard to do. An organisation can run a script
once a week or so to delve down through the category hierarchy to pick
up any changes.

You already categorize the images for anyone with enough processing
power, or the will to censor the content. I doubt that anyone doing so
is going to be too bothered whether they've falsely censored an image
that is in Category:Sex that isn't 'controversial' or not.



Anyone who thinks that a category-based solution can work because we have
enough categorisers, may I suggest that you go to

http://commons.wikimedia.org/wiki/Category:Images_from_the_Geograph_British_Isles_project_needing_categories_by_date

and try categorising 0.01% of that part of the backlog yourself before being
so over-optimistic about our categorisation resources.

When we've cleared all the subcategories in there then maybe I could be
convinced that Commons has enough categorisers to handle what it already
does. Taking on a major new obligation would be another matter though, even
if that obligation could be defined and its categories agreed.

A categorisation approach also has the difficult task of getting people to
agree on what porn is. This is something that varies enormously around the
world, and while there will be some images that we can all agree are
pornographic, I'm pretty sure there will be a far larger number where people
will be genuinely surprised to discover that others have dramatically
different views as to whether they should be classed as porn. For some
people this may seem easy: anything depicting certain parts of the human
anatomy or certain poses is pornographic to them. But different people will
have a different understanding as to which parts of the body should be
counted as pornographic. Getting the community to agree on whether all images
depicting human penises are pornographic will not be easy, and that's before
you get into arguments as to how abstract a depiction of a penis has to
be before it ceases to be an image of a penis.

We also need to consider how we relate to outside organisations,
particularly with important initiatives such as the GLAM program. This
mildly not-safe-for-work image
http://commons.wikimedia.org/wiki/File:JinaVA.jpg is a good example of the
challenge here. To those concerned about human penises it may well count as
pornographic, though the museum that has it on display certainly does not
bar children from that gallery. This image was loaded by one of our GLAM
partners, and our hope for the GLAM project is that hundreds of partners will
load millions, perhaps tens of millions, of images onto Commons. If that
succeeds then our current categorisation backlog will be utterly dwarfed by
future backlogs. If we start telling GLAM partners that yes, we want them to
upload images, but they will need to categorise them through ill-defined
and arbitrary offensiveness criteria, then our GLAM program will have a
problem. In principle I support an image filter; I've even proposed one
design. But if people want to go down the route of a category-based image
filter they don't just have to convince the many who oppose any filter as
censorship; they also need to be aware that to me, and probably others, GLAM
is core to our mission and important, whilst an image filter is non-core and
of relatively low importance. If the two conflict then choosing between them
would be easy.

If people want to advocate a categorisation approach to an image filter I
would suggest they start with the difficult areas of defining where the
boundary would be between porn and non-porn, or between hardcore and
softcore. Drawing clear and sharp lines between different shades of grey is
not easy, especially where you want them to be perceived as right by a
globally diverse population. My advice to anyone considering a category-based
filter system is to focus on the shades of grey, not on the extreme
examples at either end of the uncontentious-contentious scale.

Then, if you manage to square that particular circle, an equally difficult
task would be to recruit sufficient categorisers. As someone who has
categorised many hundreds of the Geograph images I'd be surprised to find
any Geograph images that I would be offended by. The sort of statues of
topless ladies that you find on display in England certainly don't offend
me, but bare breasts are pornographic to some people in some contexts. So
there will be some long-uncategorised images amongst the 1.7 million from
the Geograph load that meet some people's definition of porn. Any
categorisation-based approach needs to explain how it would recruit 

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-17 Thread Andreas Kolbe
 You view them as standalone pieces of information, entirely distinct
 from those conveyed textually.  You believe that their inclusion
 constitutes undue weight unless reliable sources utilize the same or
 similar illustrations (despite their publication of text establishing
 the images' accuracy and relevance).

 The English Wikipedia community disagrees with you.




The English Wikipedia community, like any other, has always contained a wide 
spectrum of opinion on such matters. We have seen this in the past, with long 
discussions about contentious cases like the goatse image, or the Katzouras 
photos. That is unlikely to ever change.

But we do also subscribe to the principle of least astonishment. If the average 
reader finds our image choices odd, or unexpectedly and needlessly offensive, 
then we alienate a large part of our target audience, and may indeed only 
attract an unnecessarily limited demographic as contributors.



 The New York Times (recipient of more Pulitzer Prizes than any other
 news organization) uses Stuff My Dad Says.  So does the Los Angeles
 Times, which states that the subject's actual name is unsuitable for
 a family publication.

 http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html
 http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html

 You might dismiss those sources as the popular press, but they're
 the most reputable ones available on the subject.  Should we deem
 their censorship sacrosanct and adopt it as our own?




No. :)



Best,
Andreas


P.S. It's been pointed out to me that my e-mail client (yahoo) does a poor job 
with formatting and threading. That's true, and I'm not happy with it either. 
I'll have a look at alternatives.
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-17 Thread Thomas Morton
On 17 Oct 2011, at 09:19, Tobias Oelgarte
tobias.oelga...@googlemail.com wrote:

 On 16.10.2011 21:27, ??? wrote:
 On 16/10/2011 19:36, Tobias Oelgarte wrote:
 On 16.10.2011 16:17, ??? wrote:
 On 16/10/2011 14:50, David Gerard wrote:
 On 16 October 2011 14:40, ???wiki-l...@phizz.demon.co.uk wrote:

 Don't be an arsehole you get the same sort of stuff if you search for
 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.

 Trolling much eh David?


 But thanks for showing once again your incapacity to acknowledge that
 searching for sexual images and seeing such images is somewhat
 different from searching for non-sexual imagery and getting sexual images.

 I have to agree with David. Your behavior is provocative and
 unproductive. I don't feel the need to respond to your arguments at all
 if you write in this tone. You could either apologize for this
 kind of wording, or we are done.


 Now you wouldn't be complaining about seeing content not to your liking,
 would you. What are you going to do, filter out the posts? Bet you're glad
 your email provider added that option for you.

 Yet another censorship hypocrite.
 I guess you did not understand my answer. That's why I'm feeling free to
 respond one more time.

 I have no problem with any kind of controversial content. Showing the
 progress of fisting on the main page? No problem for me. Reading your
 comments? No problem for me. Reading your insults? Also no problem. The
 only thing I did was the following: I told you that I will not react
 any longer to your comments if they are worded in the manner they
 currently are.

 Figuratively speaking: I'm feeling free to open your book and start to read.
 If it is interesting and constructive I will continue to read it, and I will
 respond to you to share my thoughts. If it is purely meant to insult,
 without any other meaning, then I will get bored and skim over the lines,
 reading only half or less. I also have no intention to share my
 thoughts with the author of this book. Why? I have nothing to talk
 about. Should I complain about its content? Which content anyway?

 Give it a try. Make constructive arguments and explain your thoughts.
 There is no need for strong wording if the construction of the words
 itself is strong.

 nya~

And that is a mature and sensible attitude.

Some people do not share your view and are unable to ignore what to
them are rude or offensive things.

Are they wrong?

Should they be doing what you (and I) do?

Tom

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread ???
On 11/10/2011 15:33, Kim Bruning wrote:
  flame on Therefore you cannot claim that I am stating nonsense.
  The inverse is true: you do not possess the information to support
  your position, as you now admit. In future, before you set out to
  make claims of bad faith in others, it would be wise to ensure that
  your own information is impeccable first. /flame sincerely, Kim
  Bruning

I claim that you are talking total crap. It is not *that* difficult to 
get the categories of an image and reject based on which categories the 
image is in. There are enough people out there busily categorizing all the 
images already that any org that may wish to could block images that 
are in disapproved categories.


The problem, and it is a genuine problem, is that the fucking stupid images
leak out across Commons in unexpected ways. Let's assume that a 6th-grade
class is asked to write a report on Queen Victoria, and a child searches
Commons for prince albert:

http://commons.wikimedia.org/w/index.php?title=Special:Search&search=prince+albert&limit=50&offset=0

If you are at work you probably do not want to click the above link at all.



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread ???
On 11/10/2011 00:47, MZMcBride wrote:
  Risker wrote:
  Given the number of people who insist that any categorization
  system seems to be vulnerable, I'd like to hear the reasons why the
  current system, which is obviously necessary in order for people to
  find types of images, does not have the same effect. I'm not
  trying to be provocative here, but I am rather concerned that this
  does not seem to have been discussed.

  Personally, from the technical side, I don't think there's any way to
  make per-category filtering work. What happens when a category is
  deleted? Or a category is renamed (which is effectively deleting the
  old category name currently)? And are we really expecting individual
  users to go through millions of categories and find the ones that may
  be offensive to them? Surely users don't want to do that. The whole
  point is that they want to limit their exposure to such images, not
  dig into the millions of categories that may exist looking for ones
  that largely contain content they find objectionable. Surely.


People that care will filter on the broadest categories, as those are least
likely to change. They may start with Category:Sex,
Category:Depictions of Muhammad, etc.



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Tobias Oelgarte
On 16.10.2011 12:53, ??? wrote:
 On 11/10/2011 15:33, Kim Bruning wrote:
   flame on  Therefore you cannot claim that I am stating nonsense.
   The inverse is true: you do not possess the information to support
   your position, as you now admit. In future, before you set out to
   make claims of bad faith in others, it would be wise to ensure that
   your own information is impeccable first./flame  sincerely, Kim
   Bruning
 I claim that you are talking total crap. It is not *that* difficult to
 get the categories of an image and reject based on which categories the
 image is in. There are enough people out there busily categorizing all the
 images already that any org that may wish to could block images that
 are in disapproved categories.
I have to throw that kind of wording back at you. It isn't merely very 
difficult to judge what is offensive and what isn't; it is impossible to 
do this if you want to stay neutral and to respect any, even if only 
any major, opinion out there. Wikipedia and Commons are projects that 
gather knowledge or media. Wikipedia has an editorial system that 
watches over the content to be accurate and representative. Commons is a 
media library with a categorization system that aids the reader in finding 
what he wants to find. The category system in itself is (or should be) built 
upon directional labels. Anything else is contradictory to current 
practice and unacceptable:

* Wikipedia authors do not pass judgment on topics. They also do not claim 
for themselves that something is controversial, ugly, bad, ...
* Commons contributors respect these terms as well. They don't judge 
the content. They gather and categorize it. But they will not 
append prejudicial labels.
 The problem, and it is a genuine problem, is that the fucking stupid images
 leak out across Commons in unexpected ways. Let's assume that a 6th-grade
 class is asked to write a report on Queen Victoria, and a child searches
 Commons for prince albert:

 http://commons.wikimedia.org/w/index.php?title=Special:Search&search=prince+albert&limit=50&offset=0

 If you are at work you probably do not want to click the above link at all.

Worst-case scenarios will always happen. With filter or without filter, 
you will still and always find such examples. They are seldom and might 
happen from time to time, but they aren't the rule. They aren't the 
yardstick you should use to measure a flood.

To give a simple example of the opposite: enable strict filtering on 
Google and search for images with the term futanari. Don't say that I 
did not warn you...


A last word: categorizing content rightly as good and evil is 
impossible for the human beings that we are. But categorizing content as 
good and evil has always led to destructive consequences when human beings 
are involved, as we are.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread ???
On 16/10/2011 12:37, Tobias Oelgarte wrote:
 On 16.10.2011 12:53, ??? wrote:
 On 11/10/2011 15:33, Kim Bruning wrote:
<flame on>   Therefore you cannot claim that I am stating nonsense.
The inverse is true: you do not possess the information to support
your position, as you now admit. In future, before you set out to
make claims of bad faith in others, it would be wise to ensure that
your own information is impeccable first. </flame>   sincerely, Kim
Bruning
 I claim that you are talking total crap. It is not *that* difficult to
 get the categories of an image and reject it based on which categories
 the image is in. There are enough people out there busily categorizing all
 the images already that any org that may wish to could block images that
 are in disapproved categories.
 I have to throw that kind of wording back at you. Judging what is
 offensive and what isn't is not merely difficult; it is impossible if
 you want to stay neutral and to respect every opinion out there, or
 even just every major one. Wikipedia and Commons are projects that
 gather knowledge or media. Wikipedia has an editorial system that
 watches over the content to keep it accurate and representative. Commons is
 a media library with a categorization system that guides the reader to what
 he wants to find. The category system itself is (or should be) built
 upon directional labels. Anything else is contradictory to current
 practice and unacceptable:

It is incredibly easy. One just says any image within Category:Sex is
not acceptable. It's not hard to do. An organisation can run a script once
a week or so to delve down through the category hierarchy to pick up any
changes.
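
A minimal sketch of such a weekly script, assuming the standard MediaWiki
web API on commons.wikimedia.org (the endpoint, the list=categorymembers
parameters and the continuation handling are assumptions on my part, not
anything specified in this thread):

    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    def files_under(category, seen=None):
        # Recursively walk a category tree via list=categorymembers,
        # yielding file titles; 'seen' guards against category cycles.
        if seen is None:
            seen = set()
        if category in seen:
            return
        seen.add(category)
        params = {"action": "query", "list": "categorymembers",
                  "cmtitle": category, "cmtype": "file|subcat",
                  "cmlimit": "500", "format": "json"}
        while True:
            data = requests.get(API, params=params).json()
            for member in data["query"]["categorymembers"]:
                if member["title"].startswith("Category:"):
                    yield from files_under(member["title"], seen)
                else:
                    yield member["title"]
            if "continue" not in data:  # assumed continuation convention
                break
            params.update(data["continue"])

    # e.g. blocklist = set(files_under("Category:Sex")); re-run weekly
    # to pick up changes in the hierarchy, as described above.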

You already categorize the images for anyone with enough processing
power, or the will to censor the content. I doubt that anyone doing so
is going to be too bothered about whether they've falsely censored an
image in Category:Sex that isn't 'controversial'.




 * Wikipedia authors do not pass judgment on topics. They also do not claim
 for themselves that something is controversial, ugly, bad, ...
 * Commons contributors respect these terms as well. They don't pass judgment
 on the content. They gather and categorize it. But they will not
 append prejudicial labels.


Of course they do: they add categories. Someone else applies the value
judgment as to whether images in that category are controversial or not.
The job of WMF editors is just to categorise them.  If Arachnids are
'controversial' then anything under that category goes. Just label the
damn things and shut the fuck up.



 The problem, and it is a genuine problem, is that the fucking stupid images
 leak out across Commons in unexpected ways. Let's assume that a 6th-grade
 class is asked to write a report on Queen Victoria, and a child searches
 Commons
 for prince albert:

 http://commons.wikimedia.org/w/index.php?title=Special:Search&search=prince+albert&limit=50&offset=0

 If you are at work you probably do not want to click the above link at all.

 Worst-case scenarios will always happen. With or without a filter,
 you will always find such examples. They are seldom, and might
 happen from time to time. But they aren't the rule, and they aren't the
 yardstick you should use to measure a flood.


They are not seldom; they leak all over the place. You can get porn on
Commons by searching for 'furniture'. Porn images are everywhere on
Commons.





 To give a simple example of the opposite: enable strict filtering on
 Google and search for images with the term futanari. Don't say that I
 did not warn you...

Don't be an arsehole; you get the same sort of stuff if you search for
cumshot. We aren't talking about terms that have a primarily sexual
context, but phrases like 'furniture' or 'prince albert' which do not.



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread David Gerard
On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk wrote:

 Don't be an arsehole; you get the same sort of stuff if you search for


Presumably this is the sort of quality of discourse Sue was
complaining about from filter advocates: provocateurs lacking in
empathy.


- d.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread David Levy
I wrote:

  In this context, you view images as entities independent from the people and
  things depicted therein (and believe that our use of illustrations not
  included in other publications constitutes undue weight).

Andreas Kolbe replied:

 I view images as *content*, subject to the same fundamental policies and
 principles as any other content.

You view them as standalone pieces of information, entirely distinct
from those conveyed textually.  You believe that their inclusion
constitutes undue weight unless reliable sources utilize the same or
similar illustrations (despite their publication of text establishing
the images' accuracy and relevance).

The English Wikipedia community disagrees with you.

 For the avoidance of doubt, I am not saying that we should use the *very same*
 illustrations that reliable sources use – we can't, for obvious copyright
 reasons, and there is no need to follow sources that slavishly anyway.

I realize that you advocate the use of comparable illustrations, but
in my view, slavish is a good description of the extent to which you
want us to emulate our sources' presentational styles.

 I agree by the way that we should never write F*** or s***. Some newspapers do
 that, but it is not a practice that the best and most reliable sources
 (scholarly, educational sources as opposed to popular press) use. We should be
 guided by the best, most encyclopedic sources. YMMV.

I previously mentioned "Shit My Dad Says."  Have you seen the sources
cited in the English Wikipedia's article?

Time (the world's largest weekly news magazine) refers to it as "Sh*t
My Dad Says."

http://www.time.com/time/arts/article/0,8599,1990838,00.html

The New York Times (recipient of more Pulitzer Prizes than any other
news organization) uses "Stuff My Dad Says."  So does the Los Angeles
Times, which states that the subject's actual name is "unsuitable for
a family publication."

http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html
http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html

You might dismiss those sources as "the popular press," but they're
the most reputable ones available on the subject.  Should we deem
their censorship sacrosanct and adopt it as our own?

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread ???
On 16/10/2011 14:50, David Gerard wrote:
 On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk  wrote:

 Don't be an arsehole; you get the same sort of stuff if you search for


 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.



Trolling much, eh David?


But thanks for showing once again your incapacity to acknowledge that
searching for sexual images and seeing such images is somewhat
different from searching for non-sexual imagery and getting sexual images.



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Tobias Oelgarte
On 16.10.2011 16:17, ??? wrote:
 On 16/10/2011 14:50, David Gerard wrote:
 On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk   wrote:

 Don't be an arsehole; you get the same sort of stuff if you search for

 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.


 Trolling much, eh David?


 But thanks for showing once again your incapacity to acknowledge that
 searching for sexual images and seeing such images is somewhat
 different from searching for non-sexual imagery and getting sexual images.

I have to agree with David. Your behavior is provocative and
unproductive. I don't feel the need to respond to your arguments at all
if you write in this tone. You can either apologize for this
kind of wording, or we are done.

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread ???
On 16/10/2011 19:36, Tobias Oelgarte wrote:
 On 16.10.2011 16:17, ??? wrote:
 On 16/10/2011 14:50, David Gerard wrote:
  On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk wrote:

  Don't be an arsehole; you get the same sort of stuff if you search for

 Presumably this is the sort of quality of discourse Sue was
 complaining about from filter advocates: provocateurs lacking in
 empathy.


 Trolling much, eh David?


 But thanks for showing once again your incapacity to acknowledge that
 searching for sexual images and seeing such images is somewhat
 different from searching for non-sexual imagery and getting sexual images.

 I have to agree with David. Your behavior is provocative and
 unproductive. I don't feel the need to respond to your arguments at all
 if you write in this tone. You can either apologize for this
 kind of wording, or we are done.



Now you wouldn't be complaining about seeing content not to your liking,
would you? What are you going to do, filter out the posts? Bet you're glad
your email provider added that option for you.

Yet another censorship hypocrite.



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Dan Rosenthal
If the entire premise of an email comes down to "I'm taunting you", that's
an indication it probably shouldn't be sent.


Dan Rosenthal


On Sun, Oct 16, 2011 at 10:27 PM, ??? wiki-l...@phizz.demon.co.uk wrote:

 On 16/10/2011 19:36, Tobias Oelgarte wrote:
  On 16.10.2011 16:17, ??? wrote:
  On 16/10/2011 14:50, David Gerard wrote:
   On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk wrote:
 
   Don't be an arsehole; you get the same sort of stuff if you search for
 
  Presumably this is the sort of quality of discourse Sue was
  complaining about from filter advocates: provocateurs lacking in
  empathy.
 
 
  Trolling much, eh David?
 
 
  But thanks for showing once again your incapacity to acknowledge that
  searching for sexual images and seeing such images is somewhat
  different from searching for non-sexual imagery and getting sexual
 images.
 
  I have to agree with David. Your behavior is provocative and
  unproductive. I don't feel the need to respond to your arguments at all
  if you write in this tone. You can either apologize for this
  kind of wording, or we are done.
 


 Now you wouldn't be complaining about seeing content not to your liking,
 would you? What are you going to do, filter out the posts? Bet you're glad
 your email provider added that option for you.

 Yet another censorship hypocrite.



 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Theo10011
On Mon, Oct 17, 2011 at 12:57 AM, ??? wiki-l...@phizz.demon.co.uk wrote:

 On 16/10/2011 19:36, Tobias Oelgarte wrote:
  On 16.10.2011 16:17, ??? wrote:
  On 16/10/2011 14:50, David Gerard wrote:
   On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk wrote:
 
   Don't be an arsehole; you get the same sort of stuff if you search for
 
  Presumably this is the sort of quality of discourse Sue was
  complaining about from filter advocates: provocateurs lacking in
  empathy.
 
 
  Trolling much, eh David?
 
 
  But thanks for showing once again your incapacity to acknowledge that
  searching for sexual images and seeing such images is somewhat
  different from searching for non-sexual imagery and getting sexual
 images.
 
  I have to agree with David. Your behavior is provocative and
  unproductive. I don't feel the need to respond to your arguments at all
  if you write in this tone. You can either apologize for this
  kind of wording, or we are done.
 


 Now you wouldn't be complaining about seeing content not to your liking,
 would you? What are you going to do, filter out the posts? Bet you're glad
 your email provider added that option for you.

 Yet another censorship hypocrite.



I think he meant ignoring you: "I don't feel the need to respond to your
arguments at all." That has also been an argument against the filter; I
don't see anything hypocritical in that.

Regards
Theo
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Andreas Kolbe
Personality conflicts aside, we're noting that non-sexual search terms in 
Commons can prominently return sexual images of varying explicitness, from mild 
nudity to hardcore, and that this is different from entering a sexual search 
term and finding that Google fails to filter some results.

I posted some more Commons search terms where this happens on Meta; they 
include 

Black, Caucasian, Asian; 

Male, Female, Teenage, Woman, Man; 

Vegetables; 

Drawing, Drawing style; 

Barbie, Doll; 

Demonstration, Slideshow; 

Drinking, Custard, Tan; 

Hand, Forefinger, Backhand, Hair; 

Bell tolling, Shower, Furniture, Crate, Scaffold; 

Galipette – French for somersault; this leads to a collection of 1920s 
pornographic films which are undoubtedly of significant historical interest, 
but are also pretty much as explicit as any modern representative of the genre.

Andreas




From: Dan Rosenthal swatjes...@gmail.com
To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org
Sent: Sunday, 16 October 2011, 20:31
Subject: Re: [Foundation-l] Letter to the community on Controversial Content

If the entire premise of an email comes down to "I'm taunting you", that's
an indication it probably shouldn't be sent.


Dan Rosenthal


On Sun, Oct 16, 2011 at 10:27 PM, ??? wiki-l...@phizz.demon.co.uk wrote:

 On 16/10/2011 19:36, Tobias Oelgarte wrote:
  On 16.10.2011 16:17, ??? wrote:
  On 16/10/2011 14:50, David Gerard wrote:
   On 16 October 2011 14:40, ??? wiki-l...@phizz.demon.co.uk    wrote:
 
   Don't be an arsehole; you get the same sort of stuff if you search for
 
  Presumably this is the sort of quality of discourse Sue was
  complaining about from filter advocates: provocateurs lacking in
  empathy.
 
 
  Trolling much, eh David?
 
 
  But thanks for showing once again your incapacity to acknowledge that
  searching for sexual images and seeing such images is somewhat
  different from searching for non-sexual imagery and getting sexual
 images.
 
  I have to agree with David. Your behavior is provocative and
  unproductive. I don't feel the need to respond to your arguments at all
  if you write in this tone. You can either apologize for this
  kind of wording, or we are done.
 


 Now you wouldn't be complaining about seeing content not to your liking,
 would you? What are you going to do, filter out the posts? Bet you're glad
 your email provider added that option for you.

 Yet another censorship hypocrite.



 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Bjoern Hoehrmann
* Andreas Kolbe wrote:
Personality conflicts aside, we're noting that non-sexual search terms
in Commons can prominently return sexual images of varying explicitness,
from mild nudity to hardcore, and that this is different from entering a
sexual search term and finding that Google fails to filter some results.

That is normal and expected with a full-text search that does not filter
out sexual images. Even if you search for sexual content on Commons it
is normal and expected that you get results you would rather not get. It
is possible to largely avoid this by using, say, Google Image Search with
a site:commons.wikimedia.org constraint and the right SafeSearch setting,
at least for the simpler cases, but I would not want to search for,
say, penis, on either site when unprepared for shock. I do not think
Commons is relevant to the Image Filter discussion; the image filter is
for things editors largely agree should be included in context, while on
Commons you lack context and editorial control. If there was a MediaWiki
extension that is good at emulating Google's SafeSearch, installing that
on Commons might be an acceptable idea, but there is not, and making one
would be rather expensive.
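
A minimal sketch of the workaround described above, assuming Google's
image-search ("tbm=isch") and strict SafeSearch ("safe=active") URL
parameters; both parameter names are assumptions here, not anything
specified in this thread:

    from urllib.parse import urlencode

    def safesearch_commons_url(term):
        # Constrain a Google Image Search to Commons and request
        # strict SafeSearch filtering of the results.
        query = {"q": term + " site:commons.wikimedia.org",
                 "tbm": "isch",     # image search
                 "safe": "active"}  # strict SafeSearch (assumed parameter)
        return "https://www.google.com/search?" + urlencode(query)

    print(safesearch_commons_url("underwater"))
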
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Andreas Kolbe
Commons featured prominently in the Harris study, as well as in the board 
resolution on controversial content.

http://meta.wikimedia.org/wiki/2010_Wikimedia_Study_of_Controversial_Content:_Part_Two

http://wikimediafoundation.org/wiki/Resolution:Controversial_content


Andreas




From: Bjoern Hoehrmann derhoe...@gmx.net
To: Andreas Kolbe jayen...@yahoo.com; Wikimedia Foundation Mailing List 
foundation-l@lists.wikimedia.org
Sent: Monday, 17 October 2011, 2:15
Subject: Re: [Foundation-l] Letter to the community on Controversial Content

* Andreas Kolbe wrote:
Personality conflicts aside, we're noting that non-sexual search terms
in Commons can prominently return sexual images of varying explicitness,
from mild nudity to hardcore, and that this is different from entering a
sexual search term and finding that Google fails to filter some results.

That is normal and expected with a full-text search that does not filter
out sexual images. Even if you search for sexual content on Commons it
is normal and expected that you get results you would rather not get. It
is possible to largely avoid this by using, say, Google Image Search with
a site:commons.wikimedia.org constraint and the right SafeSearch setting,
at least for the simpler cases, but I would not want to search for,
say, penis, on either site when unprepared for shock. I do not think
Commons is relevant to the Image Filter discussion; the image filter is
for things editors largely agree should be included in context, while on
Commons you lack context and editorial control. If there was a MediaWiki
extension that is good at emulating Google's SafeSearch, installing that
on Commons might be an acceptable idea, but there is not, and making one
would be rather expensive.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread Andreas Kolbe
For reference, the resolution said: 

* We ask the Executive Director, in consultation with the community, to 
develop and implement a personal image hiding feature that will enable readers 
to easily hide images hosted ***on the projects*** that they do not wish to 
view, either when first viewing the image or ahead of time through preference 
settings. We affirm that no image should be permanently removed because of this 
feature, only hidden; that the language used in the interface and development 
of this feature be as neutral and inclusive as possible; that the principle of 
least astonishment for the reader is applied; and that the feature be visible, 
clear and usable on ***all Wikimedia projects*** for both logged-in and 
logged-out readers.

This doesn't look like Commons is exempt from that, but perhaps the Board might 
like to clarify that point.

Andreas




From: Thyge ltl.pri...@gmail.com
To: Andreas Kolbe jayen...@yahoo.com; Wikimedia Foundation Mailing List 
foundation-l@lists.wikimedia.org
Sent: Monday, 17 October 2011, 2:59
Subject: Re: [Foundation-l] Letter to the community on Controversial Content

2011/10/17 Andreas Kolbe jayen...@yahoo.com:
 Commons featured prominently in the Harris study, as well as the board 
 resolution on controversial content.

Indeed, but it featured curation on Commons, not filtering of Commons. IMHO
the filter discussion should concentrate on the other projects and
treat Commons differently and separately.

Sir48/Thyge



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-15 Thread Andreas Kolbe
 From: David Levy lifeisunf...@gmail.com

 It most certainly is a matter of interpretation.  If the English
 Wikipedia community shared yours, we wouldn't be having this
 discussion.

 In this context, you view images as entities independent from the
 people and things depicted therein 



I view images as *content*, subject to the same fundamental policies and 
principles as any other content.


 (and believe that our use of
 illustrations not included in other publications constitutes undue
 weight).



For the avoidance of doubt, I am not saying that we should use the *very same* 
illustrations that reliable sources use – we can't, for obvious copyright 
reasons, and there is no need to follow sources that slavishly anyway. 

But as we are writing an encyclopedia, it would be good to strive for images 
equivalent to those found in educational standard works. We could also look at 
good educational websites (bearing in mind that some specialist scholarly works 
do without colour images to keep printing costs low). 

So I view it as important, before we use an illustration, to consider whether 
reliable sources in the field use the same kind of illustration. For example, 
the German vulva image that has been discussed several times conforms in style 
to the illustrations used in scholarly (e.g. medical) works, and even 
educational works for minors (at least in Germany). So, good image. The anal 
fisting image included in the English Wikipedia I would not have used, because 
I don't think it's the type of image we would find in a reputably published 
illustrated source, even an uncensored one, on sexology (which would be the 
model to follow in this topic area). It just looks too amateurish and 
home-made, and home-made + sexually explicit is a poor combination. (The image 
in the frotting article is another example.)

I agree by the way that we should never write F*** or s***. Some newspapers do 
that, but it is not a practice that the best and most reliable sources 
(scholarly, educational sources as opposed to popular press) use. We should be 
guided by the best, most encyclopedic sources. YMMV.

Cheers,
Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-14 Thread David Levy
Andreas Kolbe wrote:

   NPOV policy as written would require us to do the same, yes.

  The community obviously doesn't share your interpretation of said policy.

 It's not a question of interpretation; it is the very letter of the policy.

It most certainly is a matter of interpretation.  If the English
Wikipedia community shared yours, we wouldn't be having this
discussion.

In this context, you view images as entities independent from the
people and things depicted therein (and believe that our use of
illustrations not included in other publications constitutes undue
weight).

Conversely, the community doesn't treat images of x as a subject
separate from x (unless the topic images of x is sufficiently
noteworthy in its own right).  If an image illustrates x in a manner
consistent with what reliable sources tell us about x, it clears the
pertinent hurdle.  (There are, of course, other inclusion criteria.)

 Due weight and neutrality are established by reliable sources.

And these are the sources through which the images' accuracy and
relevance are verified.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Yann Forget
Hello,

To me, this shows that the search engine is badly configured, or has a
major problem.
So fix it instead of creating a filter, which would have unwanted side effects.
Having a good search engine would be within the WMF mission;
creating a filter is not.

Regards,

Yann

2011/10/12 Andreas Kolbe jayen...@yahoo.com:
 From: Tobias Oelgarte tobias.oelga...@googlemail.com
  Someone on Meta has pointed out that Commons seems to list sexual image 
  results for search terms like cucumber, electric toothbrushes or pearl 
  necklace way higher than a corresponding Google search. See 
  http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

  Andreas
 This might just be coincidence for special cases. I'm sure if you search
 long enough you will find opposite examples as well.

 Tobias,

 If you can find counterexamples, I'll gladly look at them. These were the 
 only three we checked this afternoon, and the difference was striking.

 Here is another search, "underwater":

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=underwater&fulltext=Search


 The third search result in Commons is a bondage image:

 http://commons.wikimedia.org/wiki/File:Underwater_bondage.jpg


 On Google, with safe search off, the same image is the 58th result:

 http://www.google.co.uk/search?gcx=w&q=underwater+site:commons.wikimedia.org&um=1&ie=UTF-8&hl=en&tbm=isch&source=og&sa=N&tab=wi&biw=1095&bih=638

 But wouldn't it run
 against the intention of a search engine to rank down content for being
 possibly offensive? If you search for a cucumber you should expect to
 find one. If the description is correct, you should find the most
 suitable images first. But that should be based on the rating algorithm
 that works on the description, not on the fact that content is/might
 be/could be controversial.

 Implementing such a restriction for a search engine (by default) would
 go against any principle and would be discrimination against content. We
 should not do this.

 You are not being realistic. If someone searches for cucumber, toothbrush
 or necklace on Commons, they will not generally be looking for sexual
 images, and it is no use saying, "Well, you looked for a cucumber, and here
 you have one. Stuck up a woman's vagina."

 Similarly, users entering "jumping ball" in the search field are unlikely to
 be looking for this image:

 http://commons.wikimedia.org/wiki/File:Jumping_ball_01.jpg

 Yet that is the first one the Commons search for "jumping ball" displays:

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=jumping+ball&fulltext=Search

 We are offering an image service, and the principle of least astonishment
 should apply. By having these images come at the top of our search results,
 we are alienating at least some of our readers, who were simply looking for
 an image of a toothbrush, cucumber, or whatever.

 On the other hand, if these images don't show up among our top results, we 
 are not alienating users who look for images of the penetrative use of 
 cucumbers or toothbrushes, because they can easily narrow their search if 
 that is the image they're after.

 Are you really saying that this is how Commons should work, bringing up 
 sexual images for the most innocuous searches, and that this is how you would 
 design the user experience for Commons users?

 Andreas

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Hubert
Dear Andreas,

This is what I wanted to express. But it is not only a quite different
etymology; it is also a definition of gender-related positions to be
interpreted and applied.

But what do we care about minorities - as I always say - as long as they
represent only women, children, homosexuals and cyclists.

When it comes to the needs of minorities, the majority always knows better
what is good for them.

In our case it's a little bit different: it seems that an ivory-tower
conference knows exactly how the community will solve the problem - what
should be categorized as violence-, sexuality- and religion-related.

Meanwhile, I prefer the following solution:

Everyone who will not understand and perceive the world as it is
should cancel his internet connection - just like his newspaper
subscription, radio and television and - of course - any advertising on
the streets. And these individuals should deny their children any public
schooling.

And I do not mean this in a deprecatory way. Maybe this would be a better
world.

I just hope they will even throw out the bible then.

h.

On 10.10.2011 20:37, Andreas Kolbe wrote:
 Hubert, 
 
 The fact is that the English word violence has a quite different etymology, 
 and a much narrower meaning, than the German word Gewalt, which 
 historically also means control, or even administrative competence.
 
 The scope of the English article is indeed appropriate to the English word 
 violence, because that word lacks several shades of meaning that the German 
 word Gewalt has.
 
 Andreas 
 
 
 
 From: Hubert hubert.la...@gmx.at
 To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org
 Sent: Monday, 10 October 2011, 18:58
 Subject: Re: [Foundation-l] Letter to the community on Controversial Content
 
 David, did you read the German article completely?
 Have you compared the contents - which parts of the concept of violence
 receive more attention, and which portion of the term violence does not
 occur in en:wp at all?
 
 Gewalt is not necessarily Gewalt in the same form.
 
 To say it simply: hitting someone's head, or shooting someone, is in en:WP
 the primary part of the article (this is very much simplified, indeed). The
 German article deals to a far greater degree with philosophical and
 sociological issues of violence.
 
 This difference alone makes it clear that a single definition of what
 violence is or may be, and how it manifests itself in images, is out of
 the question.
 
 Quite frankly, I do not want people who are socialized in a culture of
 violence to a far greater degree than other cultures to be able to impose
 their concept of violence by categorizing tens of thousands of
 images.
 
 Even though we are all Wikipedians, even within the German Wikipedia,
 there are significant cultural differences.
 
 And violence is - compared with religion and sexuality - just the smaller
 problem.
 h
 
 On 10.10.2011 12:22, David Gerard wrote:
 On 10 October 2011 11:17, Hubert hubert.la...@gmx.at wrote:
 On 09.10.2011 16:35, Anneke Wolf wrote:

 http://de.wikipedia.org/wiki/Gewalt

 dear Anneke,
 +1
 and see the basic difference and the discordance in understanding and
 meaning of violence.
 http://en.wikipedia.org/wiki/Violence


 I don't understand what point is being made here.


 - d.

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

 
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
 

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Hubert


On 10.10.2011 21:16, Sue Gardner wrote:
 On 10 October 2011 11:56, Möller, Carsten c.moel...@wmco.de wrote:
 Sue wrote:
 It is asking me to do something.
 But it is not asking me to do the specific thing that has
 been discussed over the past several months, and which the Germans
 voted against.

 I may translate:
 As the German community has voted against filters,
 I was ordered to circumvent this vote by making some adjustments to the 
 wording.

 That will not work. The vote was very clear against all image filters.
 The referendum was a farce, as we clearly see.

 Sorry, somebody is playing games with us.
 
 
 Truly, Carsten, nobody is playing games with you. The Board's
 discussion was sincere and thoughtful.
Maybe, but for me it is absolutely unserious to start a survey
without the basic questions:

1. Filter or not?
2. Who will do the work and manage the war inside Commons, when
persons and groups, up to this point entirely unknown, summoned from the
strangest organizations, start the crusade to improve the world!

 
 This is how the system is supposed to work. The Board identified a
 problem; the staff hacked together a proposed solution, and we asked
 the community what it thought. Now, we're responding to the input and
 we're going to iterate. This is how it's supposed to work: we mutually
 influence each other.
Sounds good, but you didn't act like that! This is just theory!

 
 I'm not saying it isn't messy and awkward and flawed in many respects:
 it absolutely is. But nobody is playing games with you. The Board is
 sincere. It is taking seriously the German community, and the others
 who have thoughtfully opposed the filter.
 
I just read from the board: The decision is fixed, there will be a
filter worldwide, no exceptions allowed.

thanks

h

80,000 edits in the last seven years. Thousands of hours spent supporting
the project. And not only by editing!

 The right thing to do now is to accept the olive branch, and work with
 the Wikimedia Foundation to figure out a good solution. You want to
 train the Wikimedia Foundation that listening to you is the path to a
 successful outcome :-)
 
 Thanks,
 Sue
 
 
 
 --
 
 Sue Gardner
 Executive Director
 Wikimedia Foundation
 
 415 839 6885 office
 415 816 9967 cell
 
 Imagine a world in which every single human being can freely share in
 the sum of all knowledge.  Help us make it a reality!
 
 http://wikimediafoundation.org/wiki/Donate
 
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Hubert
+1

h

On 11.10.2011 03:20, Bjoern Hoehrmann wrote:
 * Sue Gardner wrote:
 This is how the system is supposed to work. The Board identified a
 problem; the staff hacked together a proposed solution, and we asked
 the community what it thought. Now, we're responding to the input and
 we're going to iterate. This is how it's supposed to work: we mutually
 influence each other.
 
 The Board asked you to develop this feature in consultation with the
 community. The manner in which you chose to do that has led to parts
 of the community discussing the best way to split from the community.
 
 I'm not saying it isn't messy and awkward and flawed in many respects:
 it absolutely is. But nobody is playing games with you. The Board is
 sincere. It is taking seriously the German community, and the others
 who have thoughtfully opposed the filter.
 
 http://lists.wikimedia.org/pipermail/foundation-l/2011-June/066624.html
 
   The Wikimedia Foundation, at the direction of the Board of Trustees,
   will be holding a vote to determine whether members of the community
   support the creation and usage of an opt-in personal image filter
 
 Funny correlation: all polls about the image filter that explained the
 pros and cons to voters found voters overwhelmingly opposed to it.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Denis Barthel
On 13.10.2011 09:54, Hubert wrote:
 Meanwhile, I prefer the following solution:

 Everyone who will not understand and perceive the world as it is
 should cancel his internet connection - just like his newspaper
 subscription, radio and television and - of course - any advertising on
 the streets. And these individuals should deny their children any public
 schooling.


Some do.

Related to the filter, I always remember a story Asaf Bartov wrote in the 
German Wikipedia book, about a young man born into an ultra-Orthodox Jewish 
movement named Gur. They denied him access to books and further 
education beyond elementary-school level. He got the chance to access 
the internet, found Wikipedia, and this encounter changed his life 
completely.

Two years later he is not only an active Wikipedian, but a student of 
computer science and a well-educated citizen. He experienced a mind 
shift. He found out what power knowledge has and made use of it, 
because he dared to step into it.

I doubt that a Wikipedia containing a system to exclude content by 
individual taste or belief will still have this sheer breathtaking 
energy. The loss of this energy would be a pretty high price for not 
seeing a cucumber used in an unconventional manner in the wrong place.

Maybe we should consider promoting daring.

Regards,
Denis



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread MZMcBride
David Levy wrote:
 Andreas Kolbe wrote:
 Again, I think you are being too philosophical, and lack pragmatism.
 
 We already have bad image lists like
 
 http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list
 
 If you remain wedded to an abstract philosophical approach, such lists
 are not neutral. But they answer a real need.
 
 Apart from the name (which the MediaWiki developers inexplicably
 refused to change), the bad image list is entirely compliant with the
 principle of neutrality (barring abuse by a particular project, which
 I haven't observed).

Not inexplicably: https://bugzilla.wikimedia.org/show_bug.cgi?id=14281#c10

MZMcBride



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread David Levy
I wrote:

  Apart from the name (which the MediaWiki developers inexplicably
  refused to change), the bad image list is entirely compliant with the
  principle of neutrality (barring abuse by a particular project, which
  I haven't observed).

MZMcBride replied:

 Not inexplicably: https://bugzilla.wikimedia.org/show_bug.cgi?id=14281#c10

Actually, that's precisely what I had in mind.  Maybe it's me, but I
found Tim's response rather bewildering.  It certainly isn't standard
procedure to forgo an accurate description in favor of nebulous social
commentary/parody, particularly when this proves controversial.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Andreas Kolbe
 From: David Levy lifeisunf...@gmail.com

   In an earlier reply, I cited ultra-Orthodox Jewish newspapers and 
   magazines
   that refuse to publish photographs of women.  If this were a mainstream
   policy, would that make it neutral?

 Please answer the above question.



NPOV policy as written would require us to do the same, yes. In the same way, 
if no reliable sources were written about women, we would not be able to have 
articles on them.



  You said in an earlier mail that in writing our texts, our job is to

  neutrally reflect the real-world balance, *including* any presumed biases. I
  agree with that.

 Yes, our content reflects the biases' existence.  It does *not* affirm
 their correctness.



By following sources, and describing points of view with which you personally 
do not agree, you are not affirming the correctness of these views. You are 
simply writing neutrally. Do you see the difference?

Images are content too, just like text. By following sources' illustration 
conventions, you are not affirming that you agree with those conventions, or 
consider them neutral yourself, but you *are* editing neutrally, i.e. in line 
with reliable sources.

Just as an idea, if we want to gather data on what readers think of our use of 
illustrations, we should add a point about image use to the article feedback 
template.


Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread David Levy
I wrote:

  In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines
  that refuse to publish photographs of women.  If this were a mainstream
  policy, would that make it neutral?

Andreas Kolbe replied:

 NPOV policy as written would require us to do the same, yes.

The community obviously doesn't share your interpretation of said policy.

 In the same way, if no reliable sources were written about women, we would not
 be able to have articles on them.

The images in question depict subjects documented by reliable sources
(through which the images' accuracy and relevance are verifiable).

Essentially, you're arguing that we're required to present information
only in the *form* published by reliable sources.

 By following sources, and describing points of view with which you personally 
 do
 not agree, you are not affirming the correctness of these views. You are 
 simply
 writing neutrally.

Agreed.  And that's what we do.  We describe views.  We don't adopt
them as our own.

If reliable sources deem a word objectionable and routinely censor it
(e.g. when referring to the Twitter feed Shit My Dad Says), we don't
follow suit.

The same principle applies to imagery deemed objectionable.  We might
cover the controversy in our articles (depending on the context), but
we won't suppress such content on the basis that others do.

As previously discussed, this is one of many reasons why reliable
sources might decline to include images.  Fortunately, we needn't read
their minds.  As I noted, we *always* must evaluate our available
images (the pool of which differs substantially from those of most
publications) to gauge their illustrative value.  We simply apply the
same criteria (intended to be as objective as possible) across the
board.

 Images are content too, just like text.

Precisely.  And unless an image introduces information that isn't
verifiable via our reliable sources' text, there's no material
distinction.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Andreas Kolbe
bla




From: David Levy lifeisunf...@gmail.com
To: foundation-l@lists.wikimedia.org
Sent: Friday, 14 October 2011, 3:52
Subject: Re: [Foundation-l] Letter to the community on Controversial Content

I wrote:

  In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines
  that refuse to publish photographs of women.  If this were a mainstream
  policy, would that make it neutral?

Andreas Kolbe replied:

 NPOV policy as written would require us to do the same, yes.

The community obviously doesn't share your interpretation of said policy.


It's not a question of interpretation; it is the very letter of the policy. Due 
weight and neutrality are established by reliable sources.

Now, let's look at your example: if you and I lived in a society that did not 
produce reliable sources about women, and refused to publish pictures of them, 
then I guess we would be unlikely to work on a wiki that 

- defines neutrality as fairly representing reliable sources without bias, 
- derives its definition of due weight from the weight any topic (incl. women) 
is given in reliable sources,
- requires verifiability in reliable sources for every statement made in our 
wiki, 
- and disallows original research. 

Instead, we would start a revolutionary wiki with a political agenda that 

- denounces the status quo, 
- criticises the inhuman and pervasive bias against women, 
- refuses to be bound by it,
- sets out to start a new tradition of writing about, and depicting, women, 
- and vows to subvert the established system in order to create a new world.

We would set out to be *different* from the existing sources.

However, in our world, that is not how Wikipedia views reliable sources. 
Wikipedia is not set up to be in antagonism to its sources; it is set up to be 
in agreement with them.


Andreas



 In the same way, if no reliable sources were written about women, we would not

 be able to have articles on them.

The images in question depict subjects documented by reliable sources
(through which the images' accuracy and relevance are verifiable).

Essentially, you're arguing that we're required to present information
only in the *form* published by reliable sources.

 By following sources, and describing points of view with which you 
 personally do
 not agree, you are not affirming the correctness of these views. You are 
 simply
 writing neutrally.

Agreed.  And that's what we do.  We describe views.  We don't adopt
them as our own.

If reliable sources deem a word objectionable and routinely censor it
(e.g. when referring to the Twitter feed Shit My Dad Says), we don't
follow suit.

The same principle applies to imagery deemed objectionable.  We might
cover the controversy in our articles (depending on the context), but
we won't suppress such content on the basis that others do.

As previously discussed, this is one of many reasons why reliable
sources might decline to include images.  Fortunately, we needn't read
their minds.  As I noted, we *always* must evaluate our available
images (the pool of which differs substantially from those of most
publications) to gauge their illustrative value.  We simply apply the
same criteria (intended to be as objective as possible) across the
board.

 Images are content too, just like text.

Precisely.  And unless an image introduces information that isn't
verifiable via our reliable sources' text, there's no material
distinction.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread Andreas Kolbe
David, 

I just noticed that I left a bla at the top of my reply to you. That wasn't a 
comment on your post: my e-mail editor often doesn't allow me to break the 
indent of the post I'm replying to. My work-around is to type some random 
unindented text at the top of my editor window, and then copy that down to the 
place where I want to insert a reply, so I can start an unindented line. That's 
what I did here; I just forgot to delete it before I posted.

Cheers,
Andreas 




From: Andreas Kolbe jayen...@yahoo.com
To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org
Sent: Friday, 14 October 2011, 5:45
Subject: Re: [Foundation-l] Letter to the community on Controversial Content

bla




From: David Levy lifeisunf...@gmail.com
To: foundation-l@lists.wikimedia.org
Sent: Friday, 14 October 2011, 3:52
Subject: Re: [Foundation-l] Letter to the community on Controversial Content

I wrote:

  In an earlier reply, I cited ultra-Orthodox Jewish newspapers and 
  magazines
  that refuse to publish photographs of women.  If this were a mainstream
  policy, would that make it neutral?

Andreas Kolbe replied:

 NPOV policy as written would require us to do the same, yes.

The community obviously doesn't share your interpretation of said policy.


It's not a question of interpretation; it is the very letter of the policy. 
Due weight and neutrality are established by reliable sources.

Now, let's look at your example: if you and I lived in a society that did not 
produce reliable sources about women, and refused to publish pictures of them, 
then I guess we would be unlikely to work on a wiki that 

- defines neutrality as fairly representing reliable sources without bias, 
- derives its definition of due weight from the weight any topic (incl. women) 
is given in reliable sources,
- requires verifiability in reliable sources for every statement made in our 
wiki, 
- and disallows original research. 

Instead, we would start a revolutionary wiki with a political agenda that 

- denounces the status quo, 
- criticises the inhuman and pervasive bias against women, 
- refuses to be bound by it,
- sets out to start a new tradition of writing about, and depicting, women, 
- and vows to subvert the established system in order to create a new world.

We would set out to be *different* from the existing sources.

However, in our world, that is not how Wikipedia views reliable sources. 
Wikipedia is not set up to be in antagonism to its sources; it is set up to be 
in agreement with them.


Andreas



 In the same way, if no reliable sources were written about women, we would 
 not

 be able to have articles on them.

The images in question depict subjects documented by reliable sources
(through which the images' accuracy and relevance are verifiable).

Essentially, you're arguing that we're required to present information
only in the *form* published by reliable sources.

 By following sources, and describing points of view with which you 
 personally do
 not agree, you are not affirming the correctness of these views. You are 
 simply
 writing neutrally.

Agreed.  And that's what we do.  We describe views.  We don't adopt
them as our own.

If reliable sources deem a word objectionable and routinely censor it
(e.g. when referring to the Twitter feed Shit My Dad Says), we don't
follow suit.

The same principle applies to imagery deemed objectionable.  We might
cover the controversy in our articles (depending on the context), but
we won't suppress such content on the basis that others do.

As previously discussed, this is one of many reasons why reliable
sources might decline to include images.  Fortunately, we needn't read
their minds.  As I noted, we *always* must evaluate our available
images (the pool of which differs substantially from those of most
publications) to gauge their illustrative value.  We simply apply the
same criteria (intended to be as objective as possible) across the
board.

 Images are content too, just like text.

Precisely.  And unless an image introduces information that isn't
verifiable via our reliable sources' text, there's no material
distinction.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread Andreas Kolbe
 From:David Levy lifeisunf...@gmail.com

 Setting aside the matter of category tags, I disagree with the premise
 that the neutrality principle is inapplicable to display options.
 When an on-wiki gadget is used to selectively suppress material deemed
 objectionable, that's a content issue (despite not affecting pages
 by default).


Again, I think you are being too philosophical, and lack pragmatism. 

We already have bad image lists like 

http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list

If you remain wedded to an abstract philosophical approach, such lists
are not neutral. But they answer a real need.


  I see this as no different. I really wonder where this idea entered that
  when it comes to text, reliable sources' judgment is sacrosanct, while when
  it comes to illustrations, reliable sources' judgment is suspect, and 
editors'
  judgment is better.

 Again, you're conflating separate issues.

 We consult reliable sources to obtain factual information and gauge
 the manner in which topics receive coverage.  It's quite true that the
 latter often reflects biases, but we seek to neutrally convey the 
 real-world balance (which _includes_ those biases).

 Conversely, we don't take on subjective views — no matter how
 widespread — as our own.  For example, if most mainstream media
 outlets publish the opinion that x is bad, we simply relay the fact
 that said information was published.  We don't adopt x is bad as
 *our* position.


I would invite you to think some more about this, and view it from a 
different angle. You said earlier, 


   A reputable publication might include textual documentation of a
   subject, omitting useful illustrations to avoid upsetting its readers.
   That's non-neutral.


You assume here that there is any kind of neutrality in Wikipedia that
is not defined by reliable sources.

There isn't. The very definition of neutrality in our projects is tied to 
the editorial judgment of reliable sources.

If I go along with your statement that reliable sources avoid upsetting 
their readers, why would we be more neutral by deciding to depart from 
reliable sources' judgment, and consciously upset our readers in a way
reliable sources do not? 

It seems to me we do not become more neutral by doing so, but are
implementing a clear bias – a departure from, as you put it, the
real-world balance. And I think this is a fact. Wikipedia departs from 
reliable sources in its approach to illustration, and has a clear bias in
favour of showing offensive content that sets its editorial policy apart
from real-world publishing standards.

 Likewise, if most publications decide that it would be bad to publish
 illustrations alongside their coverage of a subject (on the basis that
 such images are likely to offend), we might address this determination
 via prose, but won't adopt it as *our* position.



That exact same argument could be made about text as well: 

Likewise, if most publications decide that it would be bad to publish
that X is a scoundrel (on the basis that it would be likely to offend), we 
might address this determination via prose, but won't adopt it as *our* 
position.


So then we would have articles saying, "No newspaper has reported
that X is a scoundrel, but he is, because –."

And then you can throw in NOTCENSORED for good measure as
a riposte to anyone wishing to delete such original research.

Seen from this perspective, your judgment that illustrations which
reliable sources have not found useful for their readers should
nevertheless be useful for readers of Wikipedia is directly analogous to
this kind of original research.

In my view, our articles should show what a reliable educational 
source would show, no more and no less. Anything beyond that
should be left to a prominent Commons link.

That would be neutral.


  Probably true, and I am beginning to wonder if the concern that censors
  could abuse any filter infrastructure isn't somewhat overstated.

 I regard such concerns as valid, but other elements of the proposed
 setup strike me as significantly more problematic.



I regard them as valid too, and if it can be avoided I am all for it,
but the fact is that bad image lists and categories exist already, and those 
who would censor us do so already. We don't forbid cars because some
people drive drunk.

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread David Levy
Andreas Kolbe wrote:

 Again, I think you are being too philosophical, and lack pragmatism.

 We already have bad image lists like

 http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list

 If you remain wedded to an abstract philosophical approach, such lists
 are not neutral. But they answer a real need.

Apart from the name (which the MediaWiki developers inexplicably
refused to change), the bad image list is entirely compliant with the
principle of neutrality (barring abuse by a particular project, which
I haven't observed).  It's used to prevent a type of vandalism, which
typically involves the insertion of images among the most likely to
offend/disgust large numbers of people.  But if the need arises (for
example, in the case of a "let's post harmless pictures of x
everywhere" meme), it can be applied to *any* image (including one
that practically no one regards as inherently objectionable).
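
A minimal Python sketch of the mechanism, for concreteness (the wikitext
format shown is an illustrative approximation of such a list, not a
verbatim copy of the en.wiki page):

    import re

    # Illustrative entries in the style of MediaWiki:Bad image list: each
    # names an image and, optionally, pages where inline display is still
    # permitted.
    BAD_IMAGE_LIST = """
    * [[File:Example_A.jpg]]
    * [[File:Example_B.jpg]] except on [[Some article]], [[Another article]]
    """

    ENTRY = re.compile(r"^\*\s*\[\[(File:[^\]]+)\]\](?:\s+except on\s+(.+))?\s*$")

    def parse_bad_image_list(text):
        """Map each listed image to the set of pages exempt from the block."""
        blocked = {}
        for line in text.strip().splitlines():
            m = ENTRY.match(line.strip())
            if m:
                image, exceptions = m.groups()
                blocked[image] = set(re.findall(r"\[\[([^\]]+)\]\]",
                                                exceptions or ""))
        return blocked

    def may_display_inline(image, page, blocked):
        """A listed image renders inline only on its exempt pages."""
        return image not in blocked or page in blocked[image]

Note that nothing here knows or cares *why* an image was listed; the list
itself is the only policy input.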

 I would invite you to think some more about this, and view it from a
 different angle. You said earlier,

A reputable publication might include textual documentation of a
subject, omitting useful illustrations to avoid upsetting its readers.
That's non-neutral.

 You assume here that there is any kind of neutrality in Wikipedia that is
 not defined by reliable sources.

 There isn't.

Again, you're conflating two separate concepts.

In most cases, we can objectively determine, based on information from
reliable sources, that an image depicts x.  This is comparable to
confirming written facts via the same sources.

If a reliable source declines to include an image because it's
considered offensive, that's analogous to censoring a word (e.g.
replacing "fuck" with "f**k") for the same reason.

"Include suitably licensed images illustrating their subjects" is a
neutral pursuit.  (Debates arise regarding images' utility — just as
they do regarding text — but the goal is neutral.)  "Include suitably
licensed images illustrating their subjects, provided that they aren't
upsetting" is *not* neutral, just as "include properly sourced
information, provided that it isn't upsetting" is not.

 If I go along with your statement that reliable sources avoid upsetting their
 readers, why would we be more neutral by deciding to depart from reliable
 sources' judgment, and consciously upsetting our readers in a way reliable
 sources do not?

"This is an image of x" (corroborated by information from reliable
sources) is a neutral statement.  "This image is upsetting" is not.

In an earlier reply, I cited ultra-Orthodox Jewish newspapers and
magazines that refuse to publish photographs of women.  If this were a
mainstream policy, would that make it neutral?

As I noted previously, to emulate such a standard would be to adopt a
value judgement as our own (analogous to stating as fact that "x is
bad" because most reputable sources agree that it is).

  Likewise, if most publications decide that it would be bad to publish
  illustrations alongside their coverage of a subject (on the basis that such
  images are likely to offend), we might address this determination via prose,
  but won't adopt it as *our* position.

 That exact same argument could be made about text as well:

 Likewise, if most publications decide that it would be bad to publish that X
 is a scoundrel (on the basis that it would be likely to offend), we might
 address this determination via prose, but won't adopt it as *our* position.

 So then we would have articles saying, "No newspaper has reported that X is a
 scoundrel, but he is, because –."

"X is a scoundrel" is a statement of opinion.  "X is a photograph of
y" (corroborated by information from reliable sources) is a statement
of fact.

And as noted earlier, this is tangential to the image filter discussion.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread David Gerard
On 12 October 2011 14:09, David Levy lifeisunf...@gmail.com wrote:
 Andreas Kolbe wrote:

 We already have bad image lists like
 http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list
 If you remain wedded to an abstract philosophical approach, such lists
 are not neutral. But they answer a real need.

 Apart from the name (which the MediaWiki developers inexplicably
 refused to change), the bad image list is entirely compliant with the
 principle of neutrality (barring abuse by a particular project, which
 I haven't observed).  It's used to prevent a type of vandalism, which
 typically involves the insertion of images among the most likely to
 offend/disgust large numbers of people.  But if the need arises (for
 example, in the case of a "let's post harmless pictures of x
 everywhere" meme), it can be applied to *any* image (including one
 that practically no one regards as inherently objectionable).


In fact, it is specifically a measure used only in case of vandalism,
and only after an image is actually being used for vandalism, per:

http://en.wikipedia.org/wiki/MediaWiki_talk:Bad_image_list

"The images listed on MediaWiki:Bad image list are prohibited by
technical means from being displayed inline on pages, besides
specified exceptions. Images on the list have normally been used for
widespread vandalism where user blocks and page protections are
impractical."

Using it as justification for the image filter under discussion shows
a misunderstanding of its purpose and scope.


- d.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread Thomas Morton

 Secondly, it ignores the fact that an encyclopedia, at least in intention,
 does not deal in opinions at all, but rather in facts


Not at all!

You've confused a "fact" with "factual". What we record is factual - but it
might be a fact, or it might be an opinion. When relating opinions we
reflect (or ostensibly try to) the global opinion, and occasionally some of
the more significant alternative views.

Consider:

*Abby killed Betty.*

compared to

*The judge convicted Abby of killing Betty, saying that the overwhelming
evidence indicated manslaughter.*

The latter is factual, and contains facts & opinions.

But this is really irrelevant to the problem at hand - because we are not
talking about presenting a factually different piece of prose to suit an
individual's preference (that is what the forks are for...!). Although it
could be argued that we could handle alternate viewpoints better.

What we are talking about is hiding illustrative images per the
sensibilities of the person viewing the page. This is an editorial rather
than a content matter; related to choice of presentation &
illustration. Akin to deciding on how to word a sentence. Rather than
"Freddy thought frogs were fucking stupid" we might choose "Freddy did not
have a high opinion of frog intelligence", because the former isn't a
particularly polite expression of the material. Most people would probably
wish to learn Freddy's view of frogs without the bad language!

Removal of, say, a nude image on the Vagina article does not bias or detract
from the information. The image is there to provide illustration, and a
visual cue to accompany the text. Hiding the image for optional viewing for
people who would prefer it that way *doesn't seem controversial*.

Tom
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread Andreas Kolbe
From: David Levy lifeisunf...@gmail.com

 You assume here that there is any kind of neutrality in Wikipedia that is
 not defined by reliable sources.

 There isn't.

Again, you're conflating two separate concepts.

In most cases, we can objectively determine, based on information from
reliable sources, that an image depicts x.  This is comparable to
confirming written facts via the same sources.

If a reliable source declines to include an image because it's
considered offensive, that's analogous to censoring a word (e.g.
replacing "fuck" with "f**k") for the same reason.

"Include suitably licensed images illustrating their subjects" is a
neutral pursuit.


Well, you need to be clear that you're using the word "neutral" here with a
different meaning than the one ascribed to it in NPOV policy.

Neutrality is not abstractly defined: like notability or verifiability, it has a
very specific meaning within Wikipedia policy. That meaning is irrevocably
tied to reliable sources.

Neutrality consists in our reflecting fairly, proportionately, and without bias,
how reliable sources treat a subject.

Including suitably licensed images illustrating their subjects can easily
*not* be a neutral pursuit. For example, if we end up featuring more female
nudity than reliable sources do, and in places where reliable sources would
eschew it, we are not being neutral, even if each image illustrates its
subject.



(Debates arise regarding images' utility — just as
they do regarding text — but the goal is neutral.)  "Include suitably
licensed images illustrating their subjects, provided that they aren't
upsetting" is *not* neutral, just as "include properly sourced
information, provided that it isn't upsetting" is not.



Your assumption that reliably published sources do not publish the
images you have in mind here because they do not wish to upset people
is unexamined, and disregards other considerations – of aesthetics, 
didactics, psychology, professionalism, educational value, quality of 
execution, and others.

It also disregards the possibility that Wikipedians may wish to include images
for other reasons than simply to educate the reader – because they like 
the images, find them attractive, wish to shock, and so forth.

Basically, you are positing that whatever you like, or the community likes,
is neutral. :) That is an approach that would not fly for text, and it
disregards our demographic imbalance.




 If I go along with your statement that reliable sources avoid upsetting their
 readers, why would we be more neutral by deciding to depart from reliable
 sources' judgment, and consciously upsetting our readers in a way reliable
 sources do not?

"This is an image of x" (corroborated by information from reliable
sources) is a neutral statement.  "This image is upsetting" is not.



Here I need to remind you that it was you who expressed the belief that 
reliable sources choose not to publish imagery because it might upset people. 
As I said above, this is an unexamined assumption that discards other 
considerations.

Our approach to illustration should follow that embraced by reliable sources,
and just as we should not second-guess why sources say what they do, and
whether they omitted to say important things for fear of upsetting readers, we 
should not second-guess their approach to illustration either, but simply 
follow it.

What we *should* second-guess is the motivation of Wikipedians who wish
to depart from that approach.



In an earlier reply, I cited ultra-Orthodox Jewish newspapers and
magazines that refuse to publish photographs of women.  If this were a
mainstream policy, would that make it neutral?

As I noted previously, to emulate such a standard would be to adopt a
value judgement as our own (analogous to stating as fact that "x is
bad" because most reputable sources agree that it is).



You said in an earlier mail that in writing our texts, our job is to neutrally 
reflect the real-world balance, *including* any presumed biases. I agree
with that. 

My argument is that the same applies to illustration, for exactly the same 
reasons.

We seem to be agreeing on one thing: that Wikipedia's approach to
illustration differs from that in our sources. You seem to be saying that is
a good thing; I say it isn't, or at least that we may have a little too much
of a good thing.

Andreas




  Likewise, if most publications decide that it would be bad to publish
  illustrations alongside their coverage of a subject (on the basis that such
  images are likely to offend), we might address this determination via 
  prose,
  but won't adopt it as *our* position.

 That exact same argument could be made about text as well:

 Likewise, if most publications decide that it would be bad to publish that X
 is a scoundrel (on the basis that it would be likely to offend), we might
 address this determination via prose, but won't adopt it as *our* position.

 So then we would have articles saying, No newspaper has reported that 

Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread Andrew Crawford
On Wed, Oct 12, 2011 at 10:44 AM, Thomas Morton 
morton.tho...@googlemail.com wrote:

 You've confused a fact with factual.


I've confused the adjective form with the noun form of fact? I'm quite
sure that I have.

*The judge convicted Abby of killing Betty, saying that the overwhelming
evidence indicated manslaughter.*

 The latter is factual, and contains facts & opinions.


It contains facts about opinions - it does not itself express an opinion. It
is both factual, and a fact.

But this is really irrelevant to the problem at hand


Definitely!


 - because we are not
 talking about presenting a factually different piece of prose to suit an
 individuals preference


Although that is true, it doesn't make any difference. There is information
content in an image - if there wasn't, we wouldn't need any. Making a
decision to use or not to use an image is an editorial decision, and in some
cases it could enhance or detract from the neutrality of the article.


 Removal of, say, a nude image on the Vagina article does not bias or
 detract
 from the information.


Then we can solve the problem by removing the image completely, since the
article would be completely unaffected by it.

Cheers,

Andrew (Thparkth)
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread Thomas Morton

 It contains facts about opinions - it does not itself express an opinion.
 It
 is both factual, and a fact.


It expresses the *opinion* of the judge that Abby killed Betty :) We
include it because the global *opinion* is that judges are in a position to
make such statements with authority. And the fact is that Abby is convicted
of killing Betty.

My point was that opinion influences both our content and our choice of
material (just not our opinion, theoretically).

Perhaps I was confused by your original:

*an encyclopedia, at least in intention, does not deal in opinions at all,
but rather in facts*

Which suggested we were uninterested in opinion (not true, of course).

There is information content in an image - if there wasn't, we wouldn't need
 any.


We regularly (and rightly) use images in a purely illustrative context -
this is fine. Images look nice. They can also express the same concepts as
the prose in a different way (which might connect with different people).
But in the vast majority of cases images are supplementary to the prose.

Yes; in some cases an image may contain information not in the prose - this
is a legitimate problem to consider (although if we are just hiding images &
leaving them accessible then there doesn't seem to be an issue to me).


 Making a
 decision to use or not to use an image is an editorial decision, and in
 some
 cases it could enhance or detract from the neutrality of the article.


Yes, it could. But this is where we get to the finicky part of the situation
- because if we get the filtering right this won't matter, because it is an
individual choice about what to see/not see.

What you are talking about there is abusing any filter to bias or detract
from the neutrality of an article for readers.

When putting together a product for a user base, you have to look at what
the users want, and what they also need. They want a filter to hide X, and
they need one that does so properly and without abuse.

So, yes, I agree that a filter has potential for abuse - and any technical
solution should take that into consideration and prevent it.


  Removal of, say, a nude image on the Vagina article does not bias or
  detract
  from the information.


 Then we can solve the problem by removing the image completely, since the
 article would be completely unaffected by it.


Not really; the image certainly has value for some. Hiding it on page load
for those who do not wish it to appear is also good. We don't have to have a
binary solution.

So long as the image

a) Appears for people who use and appreciate it
b) Is initially hidden for those who do not wish to see it
c) Appears for those apathetic to its appearance

Then this is surely a nice improvement to the current situation of "Appears
for everyone", one which does not remove *any* information from the reader
and provides them with the experience they wish.

Here's a similar point; if we had a setting that said "do not show plots
initially" that collapsed plots on movies, books, etc. this would effect the
same thing. The reader would have expressed a preference in viewing
material; none of that material is removed from his access, but he is able
to browse Wikipedia in a format he prefers. Win!

If a reader wanted to read Wikipedia with the words "damn" and "crap"
substituted for every (non-quoted) "fuck" and "shit" why is this a problem?
It alters presentation of the content to suit their sensibilities, but
without necessarily detracting from the content.

Another thought; the mobile interface collapses all sections by default on
page load (apart from the lead). Hiding material in this format (where the
reader has expressed an implicit preference to use Wikipedia on a mobile
device) doesn't seem to be controversial.

Hiding an image to suit individual preference is a good thing. It's just a
technical challenge to make sure the preference is well reflected, the
system is not abused and the content remains accessible.
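
The decision logic itself is tiny; the hard parts are the tagging and the
UI. A minimal Python sketch, with all field and tag names invented for
illustration:

    from dataclasses import dataclass, field

    @dataclass
    class ReaderPrefs:
        hidden_tags: set = field(default_factory=set)    # e.g. {"nudity"}
        shown_images: set = field(default_factory=set)   # per-image opt-ins

    def expanded_on_load(image, tags, prefs):
        """True if the image renders expanded when the page loads; a
        hidden image stays one click away, nothing is removed."""
        if image in prefs.shown_images:          # explicit opt-in wins
            return True
        return not (tags & prefs.hidden_tags)

    # The three cases (a)-(c) above:
    assert expanded_on_load("Nude.jpg", {"nudity"},
                            ReaderPrefs(shown_images={"Nude.jpg"}))      # (a)
    assert not expanded_on_load("Nude.jpg", {"nudity"},
                                ReaderPrefs(hidden_tags={"nudity"}))     # (b)
    assert expanded_on_load("Nude.jpg", {"nudity"}, ReaderPrefs())       # (c)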

Tom
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread David Levy
Andreas Kolbe wrote:

 Well, you need to be clear that you're using the word "neutral" here with a
 different meaning than the one ascribed to it in NPOV policy.

 Neutrality is not abstractly defined: like notability or verifiability, it
 has a very specific meaning within Wikipedia policy. That meaning is
 irrevocably tied to reliable sources.

 Neutrality consists in our reflecting fairly, proportionately, and without 
 bias,
 how reliable sources treat a subject.

Again, reflecting views != adopting views as our own.

We're going around in circles, so I don't care to elaborate again.

 Your assumption that reliably published sources do not publish the images you
 have in mind here because they do not wish to upset people is unexamined, and
 disregards other considerations – of aesthetics, didactics, psychology,
 professionalism, educational value, quality of execution, and others.

I referred to a scenario in which an illustration is omitted because
of a belief that its inclusion would upset people, but I do *not*
assume that this is the only possible rationale.

I also don't advocate that every relevant image be shoehorned into an
article.  (Many are of relatively low quality and/or redundant to
others.)  My point is merely that "it upsets people" isn't a valid
reason for us to omit an image.

As our image availability differs from that of most publications (i.e.
we can't simply duplicate the pictures that they run), we *always*
must evaluate — using the most objective criteria possible — how well
an image illustrates its subject.  It's impossible to eliminate all
subjectivity, but we do our best.

 It also disregards the possibility that Wikipedians may wish to include
 images for other reasons than simply to educate the reader – because they
 like the images, find them attractive, wish to shock, and so forth.

No, I don't disregard that possibility.  Such problems arise with text too.

 Basically, you are positing that whatever you like, or the community likes,
 is neutral. :)

If you were familiar with my on-wiki rants, you wouldn't have written that.

  In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines
  that refuse to publish photographs of women.  If this were a mainstream
  policy, would that make it neutral?

Please answer the above question.

 You said in an earlier mail that in writing our texts, our job is to
 neutrally reflect the real-world balance, *including* any presumed biases. I
 agree with that.

Yes, our content reflects the biases' existence.  It does *not* affirm
their correctness.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Ilario Valdelli
On Mon, Oct 10, 2011 at 2:00 PM, Julius Redzinski
julius.redzin...@hotmail.de wrote:
 On such a decision the Board should have, before making any decision,
 researched really what readers expect and want, and this with empathy for
 different regions and the understanding that Germany maybe has different
 needs than the Arabic world, and that making them all the same is not a
 good idea, and not empathic at all. Before a Board decision there would
 have had to be a poll that really asks the right questions, not this fake
 thing with no impact at all. The way the Board acted on this, and now not
 even says "yes, we fucked it up, we take the decision back and start at
 point zero again", is a shame for the complete Wikimedia world and
 community.

 Second last point: Give back to the editors the responsibility to make the
 choice how they can present their educational content to the readers. That
 is no Board decision. If a community says "we don't need the filter", then
 the Board doesn't know any better about the needs and wishes of the users
 of this project and shouldn't act into it this way.

 Last point: The Board should start first thinking and then deciding. It
 would reduce much the danger of splitting the communities and the
 Wikipedias. The Board seems a little bit too American, first shooting by
 feeling threatened and then asking ... That is not the way the Board
 should work. So act responsibly and take back the decision until a really
 good decision process has been worked through ...

 Julius Redzinski (de:Julius1990)

Please catch your breath and then answer me.

Which community? The German community? Do you think that the German
community represents all users? Because you are opposing the community to
the board, elected by the community.

It means that this community is a little bit unstable, because
yesterday it elected a board and now it is fighting with it.

de.wikipedia.org is used by a lot of persons of different cultures, so
does it mean that the German community is taking a decision for all of
these users?

If the members of de.wikipedia.org are *unaffected by explicit sexual
images* because they are already ahead, as they practice bondage or
BDSM, it doesn't mean that all persons of the world are so evolved in
sexual matters.

What the poll of de.wikipedia.org means is that the use of the filter
should not be applied automatically and the community needs to be
consulted. In this case the survey should be addressed to all persons
and not only to the German ones.

And if de.wikipedia.org would impose its decision on the world, it
seems to me logical that de.wikipedia.org will limit the use of its
projects only to the countries where German is spoken. Meanwhile the
rest of us (difficult to define them, because the German community is
"the community") are watching this piece of theater. Please give us at
least popcorn!

Ilario

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Fae
 If the members of de.wikipedia.org are *unaffected by explicit sexual
 images* because they are already ahead, as they practice bondage or
 BDSM, it doesn't mean that all persons of the world are so evolved in
 sexual matters.

I find these sorts of comments personally offensive; they are likely to
disrupt any forming consensus and appear to promote stereotypes. Please
keep in mind the principle of conducting discussions in a respectful and
civil manner, as one might expect if applying
http://en.wikipedia.org/wiki/Wikipedia:5P.

Should this list be hijacked by these sorts of comments then I see no
point in staying subscribed to it.

Thanks,
Fae

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content - Commons searches

2011-10-11 Thread MZMcBride
Andreas Kolbe wrote:
 If I search Commons for electric toothbrushes, the second search result is
 an image of a woman masturbating with an electric toothbrush:
 
 http://commons.wikimedia.org/w/index.php?title=Special:Search&search=electric+toothbrushes&fulltext=Search&redirs=1&ns0=1&ns6=1&ns9=1&ns12=1&ns14=1&ns100=1&ns106=1
 
 
 If I search Commons for pearl necklace, the first search result is an image
 of a woman with sperm on her throat:
 
 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=pearl+necklace&fulltext=Search
 
 
 If I search Commons for cucumber, the first page of search results shows a
 woman with a cucumber up her vagina:
 
 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=cucumber&fulltext=Search
 
 
 Please accept that people who are looking for images of electric toothbrushes,
 cucumbers and pearl necklaces in Commons may be somewhat taken aback by this.
 Surely your vision of neutrality does not include that we have to force people
 interested in personal hygiene, vegetables and fashion to look at graphic sex
 images? There is theory and practice. Philosophically, I agree with you. But
 looking at the results of trying to find an image of a cucumber or pearl
 necklace in Commons is a pragmatic question. Users should be able to tailor
 their user experience to their needs.

Brainstorming for a workable solution, now ongoing:
https://meta.wikimedia.org/wiki/Controversial_content/Brainstorming.

MZMcBride



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Kim Bruning
On Tue, Oct 11, 2011 at 07:19:00AM +0530, Theo10011 wrote:
 
 ...,a viable alternative to not relying blindly on the categorization
 system, would be implementing a new image reviewer flag on en.wp and maybe
 in commons. This method would create a list of reviewed images that can be
 considered objectionable, that could be filtered/black-listed. 

We could also just delete them, unless someone actually uses them in a sensible 
way in an article. :-)

sincerely,
Kim Bruning
-- 

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Fae
 ...,a viable alternative to not relying blindly on the categorization
 system, would be implementing a new image reviewer flag on en.wp and maybe
 in commons. This method would create a list of reviewed images that can be
 considered objectionable, that could be filtered/black-listed.

 We could also just delete them, unless someone actually uses them in a 
 sensible way in an article. :-)

 sincerely,
        Kim Bruning

Not on Commons; being objectionable to some viewers and not being
currently in use does not make a potentially educational image out of
scope. I have seen many poorly worded deletion requests on Commons on
the basis of a potentially useable image being orphaned rather than
it being unrealistic to expect it to ever be used for an educational
purpose.

Fae

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Kim Bruning
On Mon, Oct 10, 2011 at 09:53:55PM -0400, Risker wrote:
 Kim, I am getting the impression you are being deliberately obtuse.  

No, I'm being exhaustive. I wanted to ensure that there is no hair
of a possibility that  I might have missed a good faith avenue.

(I wouldn't have asked this question if you hadn't said I was
stating nonsense)

 I cannot decide what is being blocked, as a bottom level user.
 Those decisions have been made at a sysadmin or software level. I
 can tell you my experiences as a user on those systems, but I do
 not have the information you seek, nor am I in a position to
 obtain it.

flame on
Therefore you cannot claim that I am stating nonsense. The inverse
is true: you do not possess the information to support your
position, as you now admit.

In future, before you set out to make claims of bad faith in others,
it would be wise to ensure that your own information is impeccable
first.
/flame

sincerely,
Kim Bruning


-- 

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Andreas Kolbe
From: Fae f...@wikimedia.org.uk
 We could also just delete them, unless someone actually uses them in a 
 sensible way in an article. :-)

 sincerely,
        Kim Bruning

Not on Commons; being objectionable to some viewers and not being
currently in use does not make a potentially educational image out of
scope. I have seen many poorly worded deletion requests on Commons on
the basis of a potentially useable image being orphaned rather than
it being unrealistic to expect it to ever be used for an educational
purpose.

Fae




Agree with Fae; Commons is a general image repository in its own right, serving 
a bigger audience than just the other Wikimedia projects.

So the fact is that Commons will contain controversial images – and that we 
have to curate them responsibly.

Someone on Meta has pointed out that Commons seems to list sexual image results 
for search terms like cucumber, electric toothbrushes or pearl necklace way 
higher than a corresponding Google search. 
See http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content - Commons searches

2011-10-11 Thread David Levy
Andreas Kolbe wrote:

 If I search Commons for electric toothbrushes, the second search result is
 an image of a woman masturbating with an electric toothbrush:

 http://commons.wikimedia.org/w/index.php?title=Special:Search&search=electric+toothbrushes&fulltext=Search&redirs=1&ns0=1&ns6=1&ns9=1&ns12=1&ns14=1&ns100=1&ns106=1

 If I search Commons for pearl necklace, the first search result is an
 image of a woman with sperm on her throat:

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=pearl+necklace&fulltext=Search

 If I search Commons for cucumber, the first page of search results shows a
 woman with a cucumber up her vagina:

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=cucumber&fulltext=Search

 Please accept that people who are looking for images of electric
 toothbrushes, cucumbers and pearl necklaces in Commons may be somewhat taken
 aback by this.

I agree that this is a problem.  I disagree that the proposed
image-tagging system is a viable solution.

Please answer the questions from my previous message.

 Surely your vision of neutrality does not include that we have to force
 people interested in personal hygiene, vegetables and fashion to look at
 graphic sex images?

I don't wish to force *any* off-topic images on people (including
those who haven't activated an optional feature intended to filter
objectionable ones).  Methods of preventing this should be pursued.

 There is theory and practice. Philosophically, I agree with you. But looking
 at the results of trying to find an image of a cucumber or pearl necklace in
 Commons is a pragmatic question. Users should be able to tailor their user
 experience to their needs.

I agree, provided that we seek to accommodate all users equally.
That's why I support the type of implementation discussed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Andreas Kolbe
David,

You asked for a reply to your earlier questions.

 As has been mentioned numerous times, deeming certain subjects (and
 not others) potentially objectionable is inherently subjective and
 non-neutral.

 Unveiled women, pork consumption, miscegenation and homosexuality are
 considered objectionable by many people.  Will they be assigned
 categories?  If not, why not?  If so, who's gong to analyze millions
 of images (with thousands more uploaded on a daily basis) to tag them?

 And what if the barefaced, bacon-eating, interracial lesbians are
 visible only in the image's background?  Does that count?



If we provide a filter, we have to be pragmatic, and restrict its application 
to media that significant demographics really might want to filter. 

We should take our lead from real-world media.

Real-world media show images of lesbians, pork, mixed-race couples, and 
unveiled women (even Al-Jazeera). 

There is absolutely no need to filter them, as there is no significant target 
group among our readers who would want such a filter.

Images of Muhammad, or masturbation with cucumbers, are different. There is a 
significant demographic of users who might not want to see such images, 
especially if they come across them unprepared.

If there is doubt whether or not an image should belong in a category (because 
the potentially controversial content is mostly covered, far in the background 
etc.), it should be left out, until there are complaints from filter users. 

You mentioned a discussion about category-based filter systems in your other 
post. One other avenue I would like to explore is whether the existing Commons 
category system could, with a bit of work, be used as a basis for the filter. 
I've made a corresponding post here:

http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_the_existing_category_system_so_it_forms_a_suitable_basis_for_filtering 

This would reduce the need to tag thousands and thousands of images in Commons 
to a need to tag a few hundred categories. Some clean-up and recategorisation 
might be required for categories with mixed content. (Images stored on the 
projects themselves, rather than on Commons, would need to be addressed 
separately.)
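
A rough Python sketch of the category-group idea (bucket and category
names are invented examples, and walking Commons' parent-category graph,
which is the genuinely hard part, is omitted):

    # A few filter buckets, each defined by existing Commons categories,
    # so only the short category lists need curation, not millions of
    # individual files.
    FILTER_BUCKETS = {
        "sexually-explicit": {"Category:Sex acts", "Category:Masturbation"},
        "violence": {"Category:Graphic violence"},
    }

    def buckets_for(image_categories):
        """Return the filter buckets an image falls into, given the
        Commons categories it directly belongs to."""
        cats = set(image_categories)
        return {bucket for bucket, members in FILTER_BUCKETS.items()
                if members & cats}

    # buckets_for({"Category:Masturbation", "Category:Cucumbers"})
    # -> {"sexually-explicit"}; an empty result means nothing to filter.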

I understand you are more in favour of users being able to switch all images 
off, depending on the page they are on. This has some attractive aspects, but 
it would not help e.g. the Commons user searching for an image of a pearl 
necklace. To see the images Commons contains, they have to have image display 
on, and then the first image they see is the image of the woman with sperm on 
her throat. 

It also does not necessarily prepare users for the media they might find in WP 
articles like the ones on fisting, ejaculation and many others; there are 
always users who are genuinely shocked to see that we have the kind of media we 
have on those pages, and are unprepared for them.

Andreas
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Milos Rancic
On Tue, Oct 11, 2011 at 18:19, Andreas Kolbe jayen...@yahoo.com wrote:
 If we provide a filter, we have to be pragmatic, and restrict its application 
 to media that significant demographics really might want to filter.

That should be designed well and maintained, too. I am really
frustrated by Google's insisting that my interface should be in
Serbian (or in Spanish while I was in Spain), although I am a logged
in user.

IPv4 address blocks are being sold from one company to another now,
which means that the GeoIP method *requires* an up-to-date database.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Ilario Valdelli
On Tue, Oct 11, 2011 at 12:13 PM, Fae f...@wikimedia.org.uk wrote:
 If the members of de.wikipedia.org are *unaffected by explicit sexual
 images* because they are already ahead, as they practice bondage or
 BDSM, it doesn't mean that all persons of the world are so evolved in
 sexual matters.

 I find these sorts of comments personally offensive; they are likely to
 disrupt any forming consensus and appear to promote stereotypes. Please
 keep in mind the principle of conducting discussions in a respectful and
 civil manner, as one might expect if applying
 http://en.wikipedia.org/wiki/Wikipedia:5P.

 Should this list be hijacked by these sorts of comments then I see no
 point in staying subscribed to it.

 Thanks,
 Fae


Do you see which kind of comments there are in this thread?

Do you think that this thread has been opened in a civil and
respectful manner?

At least I have used a generic reference; someone here has given a
*direct personal offense*, and he is not really a person who can educate
others about civility and respect.

Please be so kind as to apply the same measure to all comments.

Thank you

Ilario Valdelli

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread David Levy
Andreas Kolbe wrote:

 If we provide a filter, we have to be pragmatic, and restrict its application
 to media that significant demographics really might want to filter.

Define "significant demographics".  Do you have a numerical cut-off
point in mind (below which we're to convey "you're a small minority,
so we've deemed you insignificant")?

 We should take our lead from real-world media.

WMF websites display many types of images that most media don't.
That's because our mission materially differs.  We seek to spread
knowledge, not to cater to majorities in a manner that maximizes
revenues.

For most WMF projects, neutrality is a core principle.  Designating
certain subjects (and not others) potentially objectionable is
inherently non-neutral.

 Real-world media show images of lesbians, pork, mixed-race couples, and
 unveiled women (even Al-Jazeera).

 There is absolutely no need to filter them, as there is no significant target
 group among our readers who would want such a filter.

So only insignificant target groups would want that?

Many ultra-Orthodox Jewish newspapers and magazines maintain an
editorial policy forbidding the publication of photographs depicting
women.  Some have even performed digital alterations to remove them
from both the foreground and background.

http://en.wikipedia.org/wiki/The_Situation_Room_(photograph)

These publications (which routinely run photographs of deceased
women's husbands when publishing obituaries) obviously have large
enough readerships to be profitable and remain in business.

As of 2011, there are approximately 1.3 million Haredi Jews.  The
Haredi Jewish population is growing very rapidly, doubling every 17 to
20 years.

http://en.wikipedia.org/wiki/Haredi_Judaism

Are we to tag every image containing a woman, or are we to deem this
religious group insignificant?

 You mentioned a discussion about category-based filter systems in your other
 post.

The ability to blacklist categories is only one element of the
proposal (and a secondary one, in my view).

 One other avenue I would like to explore is whether the existing Commons
 category system could, with a bit of work, be used as a basis for the filter.
 I've made a corresponding post here:

 http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_the_existing_category_system_so_it_forms_a_suitable_basis_for_filtering

This was discussed at length on the talk pages accompanying the
referendum and on this list.

Our current categorization is based primarily on what images are
about, *not* what they contain.  For example, a photograph depicting a
protest rally might include nudity in the crowd, but its
categorization probably won't specify that.  Of course, if we were to
introduce a filter system reliant upon the current categories, it's
likely that some users would seek to change that (resulting in harmful
dilution).

Many potentially objectionable subjects lack categories entirely
(though as discussed above, you evidently have deemed them
insignificant).

On the brainstorming page, you suggest that [defining] a small number
of categories (each containing a group of existing Commons categories)
that users might want to filter would alleviate the concern that we
are creating a special infrastructure that censors could exploit.  I
don't understand how.  What would stop censors from utilizing the
categories of categories in precisely the same manner?

 I understand you are more in favour of users being able to switch all images
 off, depending on the page they are on.

The proposal that I support includes both blacklisting and whitelisting.

 This has some attractive aspects, but it would not help e.g. the Commons user
 searching for an image of a pearl necklace. To see the images Commons
 contains, they have to have image display on, and then the first image they
 see is the image of the woman with sperm on her throat.

This problem extends far beyond the issue of objectionable images,
and I believe that we should pursue solutions separately.

 It also does not necessarily prepare users for the media they might find in
 WP articles like the ones on fisting, ejaculation and many others; there are
 always users who are genuinely shocked to see that we have the kind of media
 we have on those pages, and are unprepared for them.

Such users could opt to block images by default, whitelisting only the
articles or specific images whose captions indicate content that they
wish to view.
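
The resolution order for that option is simple enough to sketch in Python
(the preference names are invented for illustration):

    def visible_on_load(image, page, prefs):
        """For a reader who blocks images by default: whitelisted pages
        and whitelisted images render on load; everything else waits for
        a click. Nothing is removed from the page itself."""
        if not prefs.get("block_by_default", False):
            return True
        return (page in prefs.get("whitelisted_pages", set())
                or image in prefs.get("whitelisted_images", set()))

    prefs = {"block_by_default": True,
             "whitelisted_pages": {"Cucumber"}}
    assert visible_on_load("Cucumber_slice.jpg", "Cucumber", prefs)
    assert not visible_on_load("Some_photo.jpg", "Pearl necklace", prefs)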

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Tobias Oelgarte
Am 11.10.2011 17:42, schrieb Andreas Kolbe:
 From: Fae f...@wikimedia.org.uk
 We could also just delete them, unless someone actually uses them in a 
 sensible way in an article. :-)

 sincerely,
 Kim Bruning
 Not on Commons; being objectionable to some viewers and not being
 currently in use does not make a potentially educational image out of
 scope. I have seen many poorly worded deletion requests on Commons on
 the basis of a potentially useable image being orphaned rather than
 it being unrealistic to expect it to ever be used for an educational
 purpose.

 Fae




 Agree with Fae; Commons is a general image repository in its own right, 
 serving a bigger audience than just the other Wikimedia projects.

 So the fact is that Commons will contain controversial images – and that we 
 have to curate them responsibly.

 Someone on Meta has pointed out that Commons seems to list sexual image 
 results for search terms like cucumber, electric toothbrushes or pearl 
 necklace way higher than a corresponding Google search. See 
 http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

 Andreas
This might just be coincidence for special cases. I'm sure if you search
long enough you will find opposite examples as well. But wouldn't it run
against the intention of a search engine to rate down content as
"possibly offensive"? If you search for a cucumber you should expect to
find one. If the description is correct, you should find the most
suitable images first. But that should be based on the rating algorithm
that works on the description, not on the fact that content is/might
be/could be controversial.

Implementing such a restriction for a search engine (by default) would
go against any principle and would be discrimination against content. We
should not do this.
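
In other words, ranking should be a pure function of the query/description
match. A toy Python sketch of the point (file names and descriptions are
invented):

    def relevance(query, description):
        """Score a file purely on how well its description matches the
        query; no 'possibly offensive' penalty appears anywhere."""
        terms = query.lower().split()
        words = description.lower().split()
        return sum(words.count(t) for t in terms) / (len(words) or 1)

    files = {
        "Cucumber_on_market.jpg": "a fresh cucumber on a market stall",
        "Cucumber_insertion.jpg": "woman using a cucumber as a sex toy",
    }
    ranked = sorted(files, key=lambda f: relevance("cucumber", files[f]),
                    reverse=True)
    # Both descriptions match; the shorter, denser description ranks
    # first, on wording alone rather than on any controversy flag.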

nya~

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Etienne Beaule
MediaWiki serves more than the Wikimedia Foundation too.  ~~Ebe123


On 11-10-11 4:42 PM, Tobias Oelgarte tobias.oelga...@googlemail.com
wrote:

 Am 11.10.2011 17:42, schrieb Andreas Kolbe:
 From: Fae f...@wikimedia.org.uk
 We could also just delete them, unless someone actually uses them in a
 sensible way in an article. :-)

 sincerely,
 Kim Bruning

 Not on Commons; being objectionable to some viewers and not being
 currently in use does not make a potentially educational image out of
 scope. I have seen many poorly worded deletion requests on Commons on
 the basis of a potentially useable image being orphaned rather than it
 being unrealistic to expect it to ever be used for an educational
 purpose.

 Fae

 Agree with Fae; Commons is a general image repository in its own right,
 serving a bigger audience than just the other Wikimedia projects.

 So the fact is that Commons will contain controversial images – and that
 we have to curate them responsibly.

 Someone on Meta has pointed out that Commons seems to list sexual image
 results for search terms like cucumber, electric toothbrushes or pearl
 necklace way higher than a corresponding Google search. See
 http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

 Andreas

 This might just be coincidence for special cases. I'm sure if you search
 long enough you will find opposite examples as well. But wouldn't it run
 against the intention of a search engine to rate down content as
 "possibly offensive"? If you search for a cucumber you should expect to
 find one. If the description is correct, you should find the most
 suitable images first. But that should be based on the rating algorithm
 that works on the description, not on the fact that content is/might
 be/could be controversial.

 Implementing such a restriction for a search engine (by default) would
 go against any principle and would be discrimination against content. We
 should not do this.

 nya~

 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l




___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Jussi-Ville Heiskanen
On Tue, Oct 11, 2011 at 6:42 PM, Andreas Kolbe jayen...@yahoo.com wrote:
 From: Fae f...@wikimedia.org.uk
 We could also just delete them, unless someone actually uses them in a 
 sensible way in an article. :-)

 sincerely,
        Kim Bruning

 Not on Commons; being objectionable to some viewers and not being
 currently in use does not make a potentially educational image out of
 scope. I have seen many poorly worded deletion requests on Commons on
 the basis of a potentially useable image being orphaned rather than
 it being unrealistic to expect it to ever be used for an educational
 purpose.

 Fae




 Agree with Fae; Commons is a general image repository in its own right, 
 serving a bigger audience than just the other Wikimedia projects.

 So the fact is that Commons will contain controversial images – and that we 
 have to curate them responsibly.

 Someone on Meta has pointed out that Commons seems to list sexual image 
 results for search terms like cucumber, electric toothbrushes or pearl 
 necklace way higher than a corresponding Google search. 
 See http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html


Concur strenuously. Jimbo tried deleting things he thought had no useful
purpose but merely titillation from Commons, and crashed and burned. Not the
way to go, folks! The Finnish Wikipedia uses a Victorian or pre-Victorian
era, mildly pedophilic, suggestive copperplate drawing as an illustration of
the Pedophilia article. By modern-day standards the image is more comical
than titillating *by our Finnish standards* --- but it would be highly
suspect in the US, at least if the deletion debate for that image at Commons
is to be given credence...



-- 
--
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Andreas Kolbe
 From: Tobias Oelgarte tobias.oelga...@googlemail.com
  Someone on Meta has pointed out that Commons seems to list sexual image 
  results for search terms like cucumber, electric toothbrushes or pearl 
  necklace way higher than a corresponding Google search. See 
  http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

  Andreas
 This might just be coincidence for special cases. I'm sure if you search 
 long enough you will find opposite examples as well. 


Tobias,

If you can find counterexamples, I'll gladly look at them. These were the only 
three we checked this afternoon, and the difference was striking. 

Here is another search, underwater:

http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=underwater&fulltext=Search


The third search result in Commons is a bondage image:

http://commons.wikimedia.org/wiki/File:Underwater_bondage.jpg


On Google, with safe search off, the same image is the 58th result:

http://www.google.co.uk/search?gcx=w&q=underwater+site:commons.wikimedia.org&um=1&ie=UTF-8&hl=en&tbm=isch&source=og&sa=N&tab=wi&biw=1095&bih=638




 But wouldn't it run 
 against the intention of a search engine to rate down content as
 "possibly offensive"? If you search for a cucumber you should expect to
 find one. If the description is correct, you should find the most 
 suitable images first. But that should be based on the rating algorithm 
 that works on the description, not on the fact that content is/might 
 be/could be controversial.

 Implementing such a restriction for a search engine (by default) would 
 go against any principle and would be discrimination against content. We
 should not do this.



You are not being realistic. If someone searches for cucumber, toothbrush
or necklace on Commons, they will not generally be looking for sexual images,
and it is no use saying, "Well, you looked for a cucumber, and here you have
one. Stuck up a woman's vagina."

Similarly, users entering jumping ball in the search field are unlikely to be 
looking for this image:


http://commons.wikimedia.org/wiki/File:Jumping_ball_01.jpg


Yet that is the first one the Commons search for jumping ball displays:


http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=jumping+ball&fulltext=Search


We are offering an image service, and the principle of least astonishment 
should apply. By having these images come at the top of our search results, we 
are alienating at least some of our readers, who were simply looking for an
image of a toothbrush, cucumber, or whatever. 

On the other hand, if these images don't show up among our top results, we are 
not alienating users who look for images of the penetrative use of cucumbers or 
toothbrushes, because they can easily narrow their search if that is the image 
they're after.

Are you really saying that this is how Commons should work, bringing up sexual 
images for the most innocuous searches, and that this is how you would design 
the user experience for Commons users? 

Andreas 
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Jussi-Ville Heiskanen
On Wed, Oct 12, 2011 at 12:23 AM, Andreas Kolbe jayen...@yahoo.com wrote:
 From: Tobias Oelgarte tobias.oelga...@googlemail.com
  Someone on Meta has pointed out that Commons seems to list sexual image 
  results for search terms like cucumber, electric toothbrushes or pearl 
  necklace way higher than a corresponding Google search. See 
  http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html

  Andreas
 This might just be coincidence for special cases. I'm sure if you search
 long enough you will find opposite examples as well.


 Tobias,

 If you can find counterexamples, I'll gladly look at them. These were the 
 only three we checked this afternoon, and the difference was striking.

 Here is another search, underwater:

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=underwater&fulltext=Search


 The third search result in Commons is a bondage image:

 http://commons.wikimedia.org/wiki/File:Underwater_bondage.jpg


 On Google, with safe search off, the same image is the 58th result:

 http://www.google.co.uk/search?gcx=w&q=underwater+site:commons.wikimedia.org&um=1&ie=UTF-8&hl=en&tbm=isch&source=og&sa=N&tab=wi&biw=1095&bih=638




 But wouldn't it run
 against the intention of a search engine to rate down content as
 "possibly offensive"? If you search for a cucumber you should expect to
 find one. If the description is correct, you should find the most
 suitable images first. But that should be based on the rating algorithm
 that works on the description, not on the fact that content is/might
 be/could be controversial.

 Implementing such a restriction for a search engine (by default) would
 go against any principle and would be discrimination against content. We
 should not do this.



 You are not being realistic. If someone searches for cucumber, toothbrush 
 or necklace on Commons, they will not generally be looking for sexual 
 images, and it is no use saying, "Well, you looked for a cucumber, and here
 you have one. Stuck up a woman's vagina."

 Similarly, users entering jumping ball in the search field are unlikely to 
 be looking for this image:


 http://commons.wikimedia.org/wiki/File:Jumping_ball_01.jpg


 Yet that is the first one the Commons search for jumping ball displays:


 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=jumping+ball&fulltext=Search


 We are offering an image service, and the principle of least astonishment 
 should apply. By having these images come at the top of our search results, 
 we are alienating at least some of our readers, who were simply looking for an 
 image of a toothbrush, cucumber, or whatever.

 On the other hand, if these images don't show up among our top results, we 
 are not alienating users who look for images of the penetrative use of 
 cucumbers or toothbrushes, because they can easily narrow their search if 
 that is the image they're after.

 Are you really saying that this is how Commons should work, bringing up 
 sexual images for the most innocuous searches, and that this is how you would 
 design the user experience for Commons users?


There may be a middle ground on this whole issue, but I don't really see where
it is, because so few people seem to occupy it. Does that encapsulate the
conundrum we are in?



-- 
--
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content - Commons searches

2011-10-11 Thread Jussi-Ville Heiskanen
What you are all missing here is that Commons is a service site, not a
repository for the public to go into without knowing that it caters to
cultures different from their own. Period.



-- 
--
Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread Andreas Kolbe
 From: David Levy lifeisunf...@gmail.com

 Andreas Kolbe wrote:

  If we provide a filter, we have to be pragmatic, and restrict its 
  application
  to media that significant demographics really might want to filter.

 Define "significant demographics."  Do you have a numerical cut-off
 point in mind (below which we're to convey "you're a small minority,
 so we've deemed you insignificant")?


I would use indicators like the number and intensity of complaints received.

That is one of the indicators the Foundation has used as well. 


 WMF websites display many types of images that most media don't.
 That's because our mission materially differs.  We seek to spread
 knowledge, not to cater to majorities in a manner that maximizes
 revenues.


Generally, what we display in Wikipedia should match what reputable educational 
sources in the field display, just as Wikipedia's text reflects the text in 
reliable sources. Anything that goes beyond that should be accessible via a 
Commons link, rather than displayed on the article page.

Commons, however, is different, and has a wider scope. It has an important role 
in its own right, and its media categories should be linked from Wikipedia.


 For most WMF projects, neutrality is a core principle.  Designating
 certain subjects (and not others) "potentially objectionable" is
 inherently non-neutral.


What we present should be neutral (where neutrality is, as always, defined by 
reliable sources, rather than editor preference).

That does not mean that we should not listen to users who tell us that they 
don't want to see certain media because they find them upsetting or 
unappealing. 


 So only insignificant target groups would want that?

 Many ultra-Orthodox Jewish newspapers and magazines maintain an
 editorial policy forbidding the publication of photographs depicting
 women.  Some have even performed digital alterations to remove them
 from both the foreground and background.

 http://en.wikipedia.org/wiki/The_Situation_Room_(photograph)

 These publications (which routinely run photographs of deceased
 women's husbands when publishing obituaries) obviously have large
 enough readerships to be profitable and remain in business.

 As of 2011, there are approximately 1.3 million Haredi Jews.  The
 Haredi Jewish population is growing very rapidly, doubling every 17 to
 20 years.

 http://en.wikipedia.org/wiki/Haredi_Judaism

 Are we to tag every image containing a woman, or are we to deem this
 religious group insignificant?


I would deem them insignificant for the purposes of the image filter. They are 
faced with images of women everywhere in modern life, and we cannot cater for 
every fringe group. At some point, there are diminishing returns, especially 
when it amounts to filtering images of more than half the human race.

We need to look at mainstream issues (including Muhammad images). 


  You mentioned a discussion about category-based filter systems in your other
  post.

 The ability to blacklist categories is only one element of the
 proposal (and a secondary one, in my view).

  One other avenue I would like to explore is whether the existing Commons
  category system could, with a bit of work, be used as a basis for the 
  filter.
  I've made a corresponding post here:
 
  http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_the_existing_category_system_so_it_forms_a_suitable_basis_for_filtering

 This was discussed at length on the talk pages accompanying the
 referendum and on this list.

 Our current categorization is based primarily on what images are
 about, *not* what they contain.  For example, a photograph depicting a
 protest rally might include nudity in the crowd, but its
 categorization probably won't specify that.  Of course, if we were to
 introduce a filter system reliant upon the current categories, it's
 likely that some users would seek to change that (resulting in harmful
 dilution).

 Many potentially objectionable subjects lack categories entirely
 (though as discussed above, you evidently have deemed them
 insignificant).
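
In code terms, the failure mode David describes looks roughly like this
(Python; the category names and data are hypothetical, not real Commons
files):

BLACKLISTED_CATEGORIES = {"Nudity", "Sex acts"}

def is_hidden(image_categories):
    # Hide an image if any of its categories is on the user's blacklist.
    return bool(set(image_categories) & BLACKLISTED_CATEGORIES)

# A protest photo that happens to include nudity in the crowd, but whose
# categories only describe what the photo is *about*:
rally_photo_categories = ["Protest rallies", "San Francisco", "2011"]

print(is_hidden(rally_photo_categories))  # False -- the image slips through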


I believe the most important content is identifiable by categories.


 On the brainstorming page, you suggest that "[defining] a small number
 of categories (each containing a group of existing Commons categories)
 that users might want to filter" would alleviate the concern that we
 are "creating a special infrastructure that censors could exploit."  I
 don't understand how.  What would stop censors from utilizing the
 categories of categories in precisely the same manner?



What I meant is that compiling a collection of a few hundred categories would 
not save censors an awful lot of work. They could -- and can -- achieve 
the same thing in an afternoon now, based on our existing category system.
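
Roughly, the grouping I have in mind looks like this (a sketch only, with
hypothetical bucket and category names):

# A few user-facing filter buckets, each expanding to existing Commons
# categories. A censor could assemble the same mapping in an afternoon
# from the public category system, so this layer adds little for them.
FILTER_BUCKETS = {
    "nudity":   ["Nudity", "Nude women", "Nude men"],
    "violence": ["Graphic violence", "Executions"],
}

def categories_to_hide(selected_buckets):
    # Expand the buckets a user ticked into concrete categories to filter.
    return {cat for bucket in selected_buckets for cat in FILTER_BUCKETS[bucket]}

print(categories_to_hide(["nudity"]))
# {'Nudity', 'Nude women', 'Nude men'} (set ordering may vary)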



  I understand you are more in favour of users being able to switch all images
  off, depending on the page they are on.

 The proposal that I support includes both blacklisting and 
