[Foundation-l] Error at Special:CongressLookup

2012-01-19 Thread David Levy
[crossposted to Foundation-l and WikiEN-l]

http://en.wikipedia.org/wiki/Special:CongressLookup

Can someone please change "zip code" to "ZIP code" again?  (This error
was corrected in the blackout notice yesterday.)  I haven't managed to
get anyone's attention via the IRC channels.  Thanks!

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Blackout notice errors (English Wikipedia)

2012-01-18 Thread David Levy
Erik Moeller wrote:

 Should be fixed.

Yep, I got someone's attention via the #wikimedia IRC channel.
Thanks!  (And my apologies again for the duplicate posts.  There
appears to have been a significant delay.)

David Levy



[Foundation-l] Blackout notice errors

2012-01-17 Thread David Levy
[crossposted to Foundation-l and WikiEN-l]

Can someone please correct the following errors in the English
Wikipedia's blackout notice?

"internet" -> "Internet"
"zip code" -> "ZIP code"

Thanks!

David Levy



[Foundation-l] Blackout notice errors (English Wikipedia)

2012-01-17 Thread David Levy
[crossposted to Foundation-l and WikiEN-l]

Can someone please correct the following errors in the English
Wikipedia's blackout notice?

"internet" -> "Internet"
"zip code" -> "ZIP code"

Thanks!

(I apologize if this message appears twice.  My first attempt appeared
unsuccessful.)

David Levy



Re: [Foundation-l] Community consensus for software changes (Re: Show community consensus for Wikilove)

2011-10-31 Thread David Levy
Erik Moeller wrote:

 As a matter of general practice, the Wikimedia Foundation aims to be
 responsive to the community both before and after the deployment of
 software, but it doesn't obtain community consensus before deploying
 software which it would like to deploy on its sites and services, nor
 does it necessarily write or deploy software changes if a consensus to
 do so exists.

Brandon Harris explicitly stated that the policy for deployment of
the tool is that it is "by request only," and the requesting wiki
"must ... show community consensus."

http://lists.wikimedia.org/pipermail/foundation-l/2011-October/070062.html

That leaves three possibilities:

A) Community consensus was demonstrated at the English Wikipedia.
B) The WikiLove deployment policy was violated.
C) The above statement by Brandon Harris is incorrect.

David Levy



Re: [Foundation-l] Community consensus for software changes (Re: Show community consensus for Wikilove)

2011-10-31 Thread David Levy
Brandon Harris wrote:

 Well.  I wrote the criteria for additional deployment.

In your earlier message, you omitted the word "additional" (and an
explanation of the preceding English Wikipedia test deployment).

No matter; the misunderstanding has been resolved.

David Levy



Re: [Foundation-l] Community consensus for software changes (Re: Show community consensus for Wikilove)

2011-10-31 Thread David Levy
Erik Moeller wrote:

 It's easy for us to accidentally send mixed messages, though, as this
 thread has shown. Because so many things are done in response to
 community consensus, there may be an expectation that this is always
 the case, and that that's just how we work.

In this instance, the confusion arose not because of an underlying
expectation that community consensus will be sought, but because
Brandon Harris explicitly stated that it would be (and neglected to
mention that he was referring to wikis other than the English
Wikipedia).

This, of course, was an honest mistake (and quite a minor one).

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-21 Thread David Levy
I wrote:

  I believe that we should focus on the criteria behind reliable sources'
  illustrative decisions, *not* the decisions themselves.

Andreas Kolbe replied:

 Ah well, that *is* second-guessing the source, because unless the author
 tells you, you have no way of knowing *why* they didn't include a particular
 type of image.

I've repeatedly addressed this point and explained why I regard it as
moot.  You needn't agree with me, but it's frustrating when you
seemingly disregard what I've written.

You actually quoted the relevant text later in your message:

  We needn't know why a particular illustration was omitted.  If we apply
  similar criteria, we'll arrive at similar decisions, excepting instances
  in which considerations applicable to reliable sources (e.g. those based on
  images' upsetting/offensive nature) are inapplicable to Wikipedia ...

I used the phrase "why a particular illustration was omitted," which
is remarkably similar to "why they didn't include a particular type of
image."  I've made such statements (sometimes with further
elaboration) in several replies.

Again, I don't demand that you agree with me, but I humbly request
that you acknowledge my position.

 If we did that for text, we'd be guessing why an author might not have
 mentioned such and such a thing, and applying our correction.

Again, the images in question don't introduce information inconsistent
with that published by reliable sources; they merely illustrate the
things that said sources tell us.

And again, we haven't pulled our image evaluation criteria out of thin
air.  They reflect those employed by the very same publications.

Our application of these criteria entails no such guessing.  You
seem to envision a scenario in which we seek to determine whether a
particular illustration was omitted for a reason inapplicable to
Wikipedia.  In actuality, we simply set aside such considerations (but
we retain the others, so if an illustration was omitted for a reason
applicable to Wikipedia, we're likely to arrive at the same decision).

 I don't subscribe to the notion that Wikipedia should go out of its way (=
 depart from reliable sources' standards) to upset or offend readers where
 reliable sources don't.

Do you honestly believe that this is our motive?

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread David Levy
Andreas Kolbe wrote:

 Whether to add a media file to an article or not is always a
 cost/benefit question. It does not make sense to argue that any benefit,
 however small and superficial, outweighs any cost, however large and
 substantive.

Agreed.  I'm not arguing that.

Your replies seem indicative of a belief that my position is "Let's
include every illustrative image, no matter what."  That isn't so.  My
point is merely that we aren't bound by others' decisions.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-20 Thread David Levy
Andreas Kolbe wrote:

 I wouldn't go so far as to say that we should consider ourselves *bound* by
 others' decisions either. But I do think that the presence or absence of
 precedents in reliable sources is an important factor that we should weigh
 when we're contemplating the addition of a particular type of illustration.

I believe that we should focus on the criteria behind reliable
sources' illustrative decisions, *not* the decisions themselves.  As
previously noted, some considerations are applicable to Wikipedia,
while others are not.

We needn't know why a particular illustration was omitted.  If we
apply similar criteria, we'll arrive at similar decisions, excepting
instances in which considerations applicable to reliable sources (e.g.
those based on images' upsetting/offensive nature) are
inapplicable to Wikipedia and instances in which considerations
inapplicable to reliable sources (e.g. those based on images' non-free
licensing) are applicable to Wikipedia.

 For example, if a reader complains about images in the article on the [[rape
 of Nanking]], it is useful if an editor can say, "Look, these are the
 standard works on the rape of Nanking, and they include images like that."

An editor *can* do that.  It's the inverse situation that requires
deeper analysis.

 If someone complains about an image or media file in some other article and we
 cannot point to a single reputable source that has included a similar
 illustration, then we may indeed be at fault.

Quite possibly.  We'd need to determine whether the relevant criteria
have been met.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-19 Thread David Levy
Andreas Kolbe wrote:

   But if we use a *different* style, it should still be traceable to an
   educational or scholarly standard, rather than one we have made up, or
   inherited from 4chan. Would you agree?

  Yes, and I dispute the premise that the English Wikipedia has failed
  in this respect.

 I think we have already agreed that our standards for inclusion differ from
 those used by reliable sources.

Yes, in part.  You wrote "traceable to," not "identical to."

I elaborated in the text quoted below.

  As I've noted, we always must gauge available images' illustrative
  value on an individual basis.  We do so by applying criteria intended to
  be as objective as possible, thereby reflecting (as closely as we can,
  given the relatively small pool of libre images) the quality standards
  upheld by reputable publications.  We also reject images inconsistent
  with reliable sources' information on the subjects depicted therein.
 
  We don't, however, exclude images on the basis that others declined to
  publish the same or similar illustrations.

 Again, on this point you advocate that we should differ from the standards
 upheld by reputable publications.

Indeed, but *not* when it comes to images' basic illustrative
properties.  Again, I elaborated in the text quoted below.

  Images widely regarded as objectionable commonly are omitted for this
  reason (which is no more relevant to Wikipedia than the censorship of
  objectionable words is).  But again, we needn't seek to determine when
  this has occurred.  We can simply apply our normal assessment criteria
  across the board (irrespective of whether an image depicts a sexual act
  or a pine tree).

For the Pine article, we examine the available images of pine trees
(and related entities, such as needles, cones and seeds) and assess
their illustrative properties as objectively as possible.  Our goal is
to include the images that best enhance readers' understanding of the
subject.

This is exactly what reputable publications do.  (The specific images
available to them differ and sometimes exceed the quality of those
available to us, of course.)

This process can be applied to images depicting almost any subject,
even if others decline to do so.  I don't insist that we automatically
include lawful, suitably licensed images or shout "WIKIPEDIA IS NOT
CENSORED!" when we don't.  I merely advocate that we apply the same
assessment criteria across the board.  Inferior images (whether they
depict pine trees, sexual acts or anything else) should be omitted.

 We're coming back to the same sticking point: you're assuming that
 reputable sources omit media because they are objectionable, rather than
 for any valid reason, and you think they are wrong to do so.

No, I'm *not* assuming that this is the only reason, nor am I claiming
that this is wrong for them to do.

We *always* must independently determine whether a valid reason to
omit media exists.  We might share some such reasons (e.g. low
illustrative value, inferiority to other available media, copyright
issues) with reliable sources.  Other reasons (e.g. non-free
licensing) might apply to us and not to reliable sources.  Still other
reasons (e.g. upsetting/offensive nature, noncompliance with local
print/broadcast regulations, incompatibility with paper, space/time
constraints) might apply to reliable sources and not to us.

Again, we needn't ponder why a particular illustration was omitted or
what was available to a publication by its deadline.  We need only
determine whether the images currently available to us meet the
standards that we apply across the board.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 The English Wikipedia community, like any other, has always contained a
 wide spectrum of opinion on such matters.

Of course.  But consensus != unanimity.

Your interpretation of the English Wikipedia's neutrality policy
contradicts that under which the site operates.

  The New York Times (recipient of more Pulitzer Prizes than any other
  news organization) uses Stuff My Dad Says.  So does the Los Angeles
  Times, which states that the subject's actual name is unsuitable for
  a family publication.
 
  http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html
  http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
 
  You might dismiss those sources as the popular press, but they're
  the most reputable ones available on the subject.  Should we deem
  their censorship sacrosanct and adopt it as our own?

 No. :)

Please elaborate.  Why shouldn't we follow the example set by the most
reliable sources?

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 I don't consider press sources the most reliable sources, or in general a good
 model to follow. Even among press sources, there are many (incl. Reuters)
 who call the Twitter feed by its proper name, Shit my dad says.

The sources to which I referred are the most reputable ones cited in
the English Wikipedia's article.

Of course, I agree that we needn't emulate the style in which they
present information.  That's my point.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 But if we use a *different* style, it should still be traceable to an
 educational or scholarly standard, rather than one we have made up, or
 inherited from 4chan. Would you agree?

Yes, and I dispute the premise that the English Wikipedia has failed
in this respect.

As I've noted, we always must gauge available images' illustrative
value on an individual basis.  We do so by applying criteria intended
to be as objective as possible, thereby reflecting (as closely as we
can, given the relatively small pool of libre images) the quality
standards upheld by reputable publications.  We also reject images
inconsistent with reliable sources' information on the subjects
depicted therein.

We don't, however, exclude images on the basis that others declined to
publish the same or similar illustrations.

Images widely regarded as objectionable commonly are omitted for
this reason (which is no more relevant to Wikipedia than the
censorship of objectionable words is).  But again, we needn't seek
to determine when this has occurred.  We can simply apply our normal
assessment criteria across the board (irrespective of whether an image
depicts a sexual act or a pine tree).

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-18 Thread David Levy
Andreas Kolbe wrote:

 Satisfying most users is a laudable aim for any service provider, whether
 revenue is involved or not. Why should we not aim to satisfy most of our
 users, or appeal to as many potential users as possible?

It depends on the context.  There's nothing inherently bad about
satisfying as many users as possible.  It's doing so in a
discriminatory, non-neutral manner that's problematic.

We probably could satisfy most users by detecting their locations and
displaying information intended to reflect the beliefs prevalent there
(e.g. by favoring the majority religions).  But that, like the
creation of special categories for images deemed potentially
objectionable, is incompatible with most WMF projects' missions.

 We constantly discriminate.

 We say, This is unsourced; it may be true, but you can't have it in the
 article.

 We say, This is interesting, but it is synthesis, or original research, and
 you can't have it in the article.

 We say, This is a self-published source, it does not have an editorial staff,
 therefore it is not reliable.

 By doing so, we are constantly empowering the judgment of the professional,
 commercial outfits who produce what we term reliable sources.

 If this is unacceptable to you, do you also object to our sourcing policies
 and guidelines?

You're still conflating disparate concepts.  (I've elaborated on this
point several times.)

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-16 Thread David Levy
I wrote:

  In this context, you view images as entities independent from the people and
  things depicted therein (and believe that our use of illustrations not
  included in other publications constitutes undue weight).

Andreas Kolbe replied:

 I view images as *content*, subject to the same fundamental policies and
 principles as any other content.

You view them as standalone pieces of information, entirely distinct
from those conveyed textually.  You believe that their inclusion
constitutes undue weight unless reliable sources utilize the same or
similar illustrations (despite their publication of text establishing
the images' accuracy and relevance).

The English Wikipedia community disagrees with you.

 For the avoidance of doubt, I am not saying that we should use the *very same*
 illustrations that reliable sources use – we can't, for obvious copyright
 reasons, and there is no need to follow sources that slavishly anyway.

I realize that you advocate the use of comparable illustrations, but
in my view, "slavish" is a good description of the extent to which you
want us to emulate our sources' presentational styles.

 I agree by the way that we should never write F*** or s***. Some newspapers do
 that, but it is not a practice that the best and most reliable sources
 (scholarly, educational sources as opposed to popular press) use. We should be
 guided by the best, most encyclopedic sources. YMMV.

I previously mentioned Shit My Dad Says.  Have you seen the sources
cited in the English Wikipedia's article?

Time (the world's largest weekly news magazine) refers to it as "Sh*t
My Dad Says."

http://www.time.com/time/arts/article/0,8599,1990838,00.html

The New York Times (recipient of more Pulitzer Prizes than any other
news organization) uses "Stuff My Dad Says."  So does the Los Angeles
Times, which states that the subject's actual name is "unsuitable for
a family publication."

http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html
http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html

You might dismiss those sources as "the popular press," but they're
the most reputable ones available on the subject.  Should we deem
their censorship sacrosanct and adopt it as our own?

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-14 Thread David Levy
Andreas Kolbe wrote:

   NPOV policy as written would require us to do the same, yes.

  The community obviously doesn't share your interpretation of said policy.

 It's not a question of interpretation; it is the very letter of the policy.

It most certainly is a matter of interpretation.  If the English
Wikipedia community shared yours, we wouldn't be having this
discussion.

In this context, you view images as entities independent from the
people and things depicted therein (and believe that our use of
illustrations not included in other publications constitutes undue
weight).

Conversely, the community doesn't treat "images of x" as a subject
separate from x (unless the topic "images of x" is sufficiently
noteworthy in its own right).  If an image illustrates x in a manner
consistent with what reliable sources tell us about x, it clears the
pertinent hurdle.  (There are, of course, other inclusion criteria.)

 Due weight and neutrality are established by reliable sources.

And these are the sources through which the images' accuracy and
relevance are verified.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread David Levy
I wrote:

  Apart from the name (which the MediaWiki developers inexplicably
  refused to change), the bad image list is entirely compliant with the
  principle of neutrality (barring abuse by a particular project, which
  I haven't observed).

MZMcBride replied:

 Not inexplicably: https://bugzilla.wikimedia.org/show_bug.cgi?id=14281#c10

Actually, that's precisely what I had in mind.  Maybe it's me, but I
found Tim's response rather bewildering.  It certainly isn't standard
procedure to forgo an accurate description in favor of nebulous social
commentary/parody, particularly when this proves controversial.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-13 Thread David Levy
I wrote:

  In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines
  that refuse to publish photographs of women.  If this were a mainstream
  policy, would that make it neutral?

Andreas Kolbe replied:

 NPOV policy as written would require us to do the same, yes.

The community obviously doesn't share your interpretation of said policy.

 In the same way, if no reliable sources were written about women, we would not
 be able to have articles on them.

The images in question depict subjects documented by reliable sources
(through which the images' accuracy and relevance are verifiable).

Essentially, you're arguing that we're required to present information
only in the *form* published by reliable sources.

 By following sources, and describing points of view with which you
 personally do not agree, you are not affirming the correctness of these
 views. You are simply writing neutrally.

Agreed.  And that's what we do.  We describe views.  We don't adopt
them as our own.

If reliable sources deem a word objectionable and routinely censor it
(e.g. when referring to the Twitter feed "Shit My Dad Says"), we don't
follow suit.

The same principle applies to imagery deemed objectionable.  We might
cover the controversy in our articles (depending on the context), but
we won't suppress such content on the basis that others do.

As previously discussed, this is one of many reasons why reliable
sources might decline to include images.  Fortunately, we needn't read
their minds.  As I noted, we *always* must evaluate our available
images (the pool of which differs substantially from those of most
publications) to gauge their illustrative value.  We simply apply the
same criteria (intended to be as objective as possible) across the
board.

 Images are content too, just like text.

Precisely.  And unless an image introduces information that isn't
verifiable via our reliable sources' text, there's no material
distinction.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread David Levy
Andreas Kolbe wrote:

 Again, I think you are being too philosophical, and lack pragmatism.

 We already have bad image lists like

 http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list

 If you remain wedded to an abstract philosophical approach, such lists
 are not neutral. But they answer a real need.

Apart from the name (which the MediaWiki developers inexplicably
refused to change), the "bad image list" is entirely compliant with the
principle of neutrality (barring abuse by a particular project, which
I haven't observed).  It's used to prevent a type of vandalism, which
typically involves the insertion of images among the most likely to
offend/disgust large numbers of people.  But if the need arises (for
example, in the case of a "let's post harmless pictures of x
everywhere" meme), it can be applied to *any* image (including one
that practically no one regards as inherently objectionable).
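[For readers unfamiliar with the mechanism described above: entries on MediaWiki:Bad_image_list are ordinary wikitext bullet lines, one file per line, with any pages where the image remains allowed linked on the same line. A minimal sketch, with made-up file and article names:]

```wikitext
* [[File:Example-explicit-image.jpg]] except on [[Some article]]
* [[File:Another-abused-image.png]]
```

[As I understand the convention, the software takes the first link on each line as the restricted file and any subsequent links as exception pages; surrounding prose such as "except on" is ignored by the parser.]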

 I would invite you to think some more about this, and view it from a
 different angle. You said earlier,

  A reputable publication might include textual documentation of a
  subject, omitting useful illustrations to avoid upsetting its readers.
  That's non-neutral.

 You assume here that there is any kind of neutrality in Wikipedia that is
 not defined by reliable sources.

 There isn't.

Again, you're conflating two separate concepts.

In most cases, we can objectively determine, based on information from
reliable sources, that an image depicts x.  This is comparable to
confirming written facts via the same sources.

If a reliable source declines to include an image because it's
considered offensive, that's analogous to censoring a word (e.g.
replacing "fuck" with "f**k") for the same reason.

"Include suitably licensed images illustrating their subjects" is a
neutral pursuit.  (Debates arise regarding images' utility — just as
they do regarding text — but the goal is neutral.)  "Include suitably
licensed images illustrating their subjects, provided that they aren't
upsetting" is *not* neutral, just as "include properly sourced
information, provided that it isn't upsetting" is not.

 If I go along with your statement that reliable sources avoid upsetting their
 readers, why would we be more neutral by deciding to depart from reliable
 sources' judgment, and consciously upsetting our readers in a way reliable
 sources do not?

"This is an image of x" (corroborated by information from reliable
sources) is a neutral statement.  "This image is upsetting" is not.
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and
magazines that refuse to publish photographs of women.  If this were a
mainstream policy, would that make it neutral?

As I noted previously, to emulate such a standard would be to adopt a
value judgement as our own (analogous to stating as fact that "x is
bad" because most reputable sources agree that it is).

  Likewise, if most publications decide that it would be bad to publish
  illustrations alongside their coverage of a subject (on the basis that such
  images are likely to offend), we might address this determination via prose,
  but won't adopt it as *our* position.

 That exact same argument could be made about text as well:

 Likewise, if most publications decide that it would be bad to publish that X
 is a scoundrel (on the basis that it would be likely to offend), we might
 address this determination via prose, but won't adopt it as *our* position.

 So then we would have articles saying, No newspaper has reported that X is a
 scoundrel, but he is, because –.

"X is a scoundrel" is a statement of opinion.  "X is a photograph of
y" (corroborated by information from reliable sources) is a statement
of fact.

And as noted earlier, this is tangential to the image filter discussion.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-12 Thread David Levy
Andreas Kolbe wrote:

 Well, you need to be clear that you're using the word neutral here with a
 different meaning than the one ascribed to it in NPOV policy.

 Neutrality is not abstractly defined: like notability or verifiability, it
 has a very specific meaning within Wikipedia policy. That meaning is
 irrevocably tied to reliable sources.

 Neutrality consists in our reflecting fairly, proportionately, and
 without bias, how reliable sources treat a subject.

Again, reflecting views != adopting views as our own.

We're going around in circles, so I don't care to elaborate again.

 Your assumption that reliably published sources do not publish the images you
 have in mind here because they do not wish to upset people is unexamined, and
 disregards other considerations – of aesthetics, didactics, psychology,
 professionalism, educational value, quality of execution, and others.

I referred to a scenario in which an illustration is omitted because
of a belief that its inclusion would upset people, but I do *not*
assume that this is the only possible rationale.

I also don't advocate that every relevant image be shoehorned into an
article.  (Many are of relatively low quality and/or redundant to
others.)  My point is merely that "it upsets people" isn't a valid
reason for us to omit an image.

As our image availability differs from that of most publications (i.e.
we can't simply duplicate the pictures that they run), we *always*
must evaluate — using the most objective criteria possible — how well
an image illustrates its subject.  It's impossible to eliminate all
subjectivity, but we do our best.

 It also disregards the possibility that Wikipedians may wish to include
 images for other reasons than simply to educate the reader – because they
 like the images, find them attractive, wish to shock, and so forth.

No, I don't disregard that possibility.  Such problems arise with text too.

 Basically, you are positing that whatever you like, or the community likes,
 is neutral. :)

If you were familiar with my on-wiki rants, you wouldn't have written that.

  In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines
  that refuse to publish photographs of women.  If this were a mainstream
  policy, would that make it neutral?

Please answer the above question.

 You said in an earlier mail that in writing our texts, our job is to
 neutrally reflect the real-world balance, *including* any presumed biases. I
 agree with that.

Yes, our content reflects the biases' existence.  It does *not* affirm
their correctness.

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Letter to the community on Controversial Content - Commons searches

2011-10-11 Thread David Levy
Andreas Kolbe wrote:

 If I search Commons for "electric toothbrushes," the second search result is
 an image of a woman masturbating with an electric toothbrush:

 http://commons.wikimedia.org/w/index.php?title=Special:Search&search=electric+toothbrushes&fulltext=Search&redirs=1&ns0=1&ns6=1&ns9=1&ns12=1&ns14=1&ns100=1&ns106=1

 If I search Commons for "pearl necklace," the first search result is an
 image of a woman with sperm on her throat:

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=pearl+necklace&fulltext=Search

 If I search Commons for "cucumber," the first page of search results shows a
 woman with a cucumber up her vagina:

 http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=cucumber&fulltext=Search

 Please accept that people who are looking for images of electric
 toothbrushes, cucumbers and pearl necklaces in Commons may be somewhat taken
 aback by this.

I agree that this is a problem.  I disagree that the proposed
image-tagging system is a viable solution.

Please answer the questions from my previous message.

 Surely your vision of neutrality does not include that we have to force
 people interested in personal hygiene, vegetables and fashion to look at
 graphic sex images?

I don't wish to force *any* off-topic images on people (including
those who haven't activated an optional feature intended to filter
objectionable ones).  Methods of preventing this should be pursued.

 There is theory and practice. Philosophically, I agree with you. But looking
 at the results of trying to find an image of a cucumber or pearl necklace in
 Commons is a pragmatic question. Users should be able to tailor their user
 experience to their needs.

I agree, provided that we seek to accommodate all users equally.
That's why I support the type of implementation discussed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread David Levy
Andreas Kolbe wrote:

 If we provide a filter, we have to be pragmatic, and restrict its application
 to media that significant demographics really might want to filter.

Define "significant demographics."  Do you have a numerical cut-off
point in mind (below which we're to convey "you're a small minority,
so we've deemed you insignificant")?

 We should take our lead from real-world media.

WMF websites display many types of images that most media don't.
That's because our mission materially differs.  We seek to spread
knowledge, not to cater to majorities in a manner that maximizes
revenues.

For most WMF projects, neutrality is a core principle.  Designating
certain subjects (and not others) "potentially objectionable" is
inherently non-neutral.

 Real-world media show images of lesbians, pork, mixed-race couples, and
 unveiled women (even Al-Jazeera).

 There is absolutely no need to filter them, as there is no significant target
 group among our readers who would want such a filter.

So only "insignificant" target groups would want that?

Many ultra-Orthodox Jewish newspapers and magazines maintain an
editorial policy forbidding the publication of photographs depicting
women.  Some have even performed digital alterations to remove them
from both the foreground and background.

http://en.wikipedia.org/wiki/The_Situation_Room_(photograph)

These publications (which routinely run photographs of deceased
women's husbands when publishing obituaries) obviously have large
enough readerships to be profitable and remain in business.

"As of 2011, there are approximately 1.3 million Haredi Jews.  The
Haredi Jewish population is growing very rapidly, doubling every 17 to
20 years."

http://en.wikipedia.org/wiki/Haredi_Judaism

Are we to tag every image containing a woman, or are we to deem this
religious group insignificant?

 You mentioned a discussion about category-based filter systems in your other
 post.

The ability to blacklist categories is only one element of the
proposal (and a secondary one, in my view).

 One other avenue I would like to explore is whether the existing Commons
 category system could, with a bit of work, be used as a basis for the filter.
 I've made a corresponding post here:

 http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_the_existing_category_system_so_it_forms_a_suitable_basis_for_filtering

This was discussed at length on the talk pages accompanying the
referendum and on this list.

Our current categorization is based primarily on what images are
about, *not* what they contain.  For example, a photograph depicting a
protest rally might include nudity in the crowd, but its
categorization probably won't specify that.  Of course, if we were to
introduce a filter system reliant upon the current categories, it's
likely that some users would seek to change that (resulting in harmful
dilution).

Many potentially objectionable subjects lack categories entirely
(though as discussed above, you evidently have deemed them
insignificant).

On the brainstorming page, you suggest that "[defining] a small number
of categories (each containing a group of existing Commons categories)
that users might want to filter" would alleviate the concern that we
are creating a special infrastructure that censors could exploit.  I
don't understand how.  What would stop censors from utilizing the
categories of categories in precisely the same manner?

 I understand you are more in favour of users being able to switch all images
 off, depending on the page they are on.

The proposal that I support includes both blacklisting and whitelisting.

 This has some attractive aspects, but it would not help e.g. the Commons user
 searching for an image of a pearl necklace. To see the images Commons
 contains, they have to have image display on, and then the first image they
 see is the image of the woman with sperm on her throat.

This problem extends far beyond the issue of objectionable images,
and I believe that we should pursue solutions separately.

 It also does not necessarily prepare users for the media they might find in
 WP articles like the ones on fisting, ejaculation and many others; there are
 always users who are genuinely shocked to see that we have the kind of media
 we have on those pages, and are unprepared for them.

Such users could opt to block images by default, whitelisting only the
articles or specific images whose captions indicate content that they
wish to view.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-11 Thread David Levy
Andreas Kolbe wrote:

 I would use indicators like the number and intensity of complaints received.

For profit-making organizations seeking to maximize revenues by
catering to majorities, this is a sensible approach.  For most WMF
projects, conversely, neutrality is a fundamental, non-negotiable
principle.

 Generally, what we display in Wikipedia should match what reputable
 educational sources in the field display. Just like Wikipedia text reflects
 the text in reliable sources.

This is a tangential matter, but you're comparing apples to oranges.

We look to reliable sources to determine factual information and the
extent of coverage thereof.  We do *not* emulate their value
judgements.

A reputable publication might include textual documentation of a
subject, omitting useful illustrations to avoid upsetting its
readers.  That's non-neutral.

 That does not mean that we should not listen to users who tell us that they
 don't want to see certain media because they find them upsetting, or
 unappealing.

Agreed.  That's why I support the introduction of a system enabling
users (including those belonging to insignificant groups) to filter
images to which they object.

 I would deem them insignificant for the purposes of the image filter. They
 are faced with images of women everywhere in modern life, and we cannot cater
 for every fringe group.

The setup that I support would accommodate all groups, despite being
*far* simpler and easier to implement/maintain than one based on
tagging would be.

 At some point, there are diminishing returns, especially when it amounts to
 filtering images of more than half the human race.

That such an endeavor is infeasible is my point.

 We need to look at mainstream issues (including Muhammad images).

We needn't focus on *any* objectionable content in particular.

 That would involve a user switching all images off, and then whitelisting
 those they wish to see; is that correct? Or blacklisting individual
 categories?

Those would be two options.  The inverse options (blacklisting images
and whitelisting entire categories) also should be included.

And it should be possible to black/whitelist every image appearing in
a particular page revision (either permanently or on a one-off basis).
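The option set described above (a global default, per-image lists, per-category lists) amounts to a simple precedence rule. The sketch below is purely illustrative, with hypothetical names; it is not part of any actual MediaWiki implementation or of the linked proposal:

```python
# Illustrative sketch of a per-user, list-based image filter.  All names
# here are hypothetical; this is not MediaWiki code.

from dataclasses import dataclass, field


@dataclass
class FilterPrefs:
    hide_by_default: bool = False                      # "all images off" mode
    image_blacklist: set = field(default_factory=set)  # specific files hidden
    image_whitelist: set = field(default_factory=set)  # specific files shown
    category_blacklist: set = field(default_factory=set)
    category_whitelist: set = field(default_factory=set)


def show_image(prefs: FilterPrefs, image: str, categories: set) -> bool:
    """Decide whether to display `image` for this user.

    Explicit per-image choices win over category rules, which win over
    the global default."""
    if image in prefs.image_whitelist:
        return True
    if image in prefs.image_blacklist:
        return False
    if categories & prefs.category_whitelist:  # any whitelisted category
        return True
    if categories & prefs.category_blacklist:  # any blacklisted category
        return False
    return not prefs.hide_by_default
```

Under this model the community tags nothing new: each reader's lists are private preferences, so no site-wide "objectionable" classification ever has to be agreed upon.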

 This would be better from the point of view of project neutrality, but would
 seem to involve a *lot* more work for the individual user.

Please keep in mind that I don't regard a category-based approach as
feasible, let alone neutral.  The amount of work for editors (and
related conflicts among them) would be downright nightmarish.

 It would also be equally likely to aid censorship, as the software would have
 to recognise the user's blacklists, and a country or ISP could then equally
 generate its own blacklists and apply them across the board to all users.

They'd have to identify specific images/categories to block, which
they can do *now* (and simply intercept and suppress the data
themselves).

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-10 Thread David Levy
Jussi-Ville Heiskanen wrote:

 Given comments like this, it seems the contingent in support of filters
 is utterly and completely delusional. That proposal mitigates none of
 the valid objections to enabling other forces from just taking what we
 would be foolish enough to supply, and abusing the system to all its
 delight. Please come up with something more realistic.

Please elaborate (ideally without hurling insults).

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-10 Thread David Levy
Jussi-Ville Heiskanen wrote:

 Any (and I stress *any*) tagging system is very nicely vulnerable to being
 hijacked by downstream users.

I've steadfastly opposed the introduction of a tag-based image filter system.

The proposal to which I linked involves no tagging (as I understand
the term).  Is it possible that you misread/misinterpreted it?  If
not, please explain how it would enable such an exploit.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-10 Thread David Levy
Risker wrote:

 So does the current categorization system lend itself to being hijacked by
 downstream users?

Yes, but not nearly to the same extent.

 Given the number of people who insist that any categorization system seems
 to be vulnerable, I'd like to hear the reasons why the current system, which
 is obviously necessary in order for people to find types of images, does not
 have the same effect.  I'm not trying to be provocative here, but I am
 rather concerned that this does not seem to have been discussed.

The current system doesn't involve categorizing images based on
criteria under which they're considered "potentially objectionable,"
so someone wishing to censor images of x is far less likely to find
Category:x.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-10 Thread David Levy
Jussi-Ville Heiskanen wrote:

 if  you  like the image browsers

Sorry, I don't know what you mean.

David Levy



Re: [Foundation-l] Letter to the community on Controversial Content

2011-10-09 Thread David Levy
Sue Gardner wrote:

 The Board is hoping there is a solution that will 1) enable readers to
 easily hide images they don't want to see, as laid out in the Board's
 resolution [1], while 2) being generally acceptable to editors. Maybe
 this will not be possible, but it's the goal.

As I've noted in other threads, Board member Samuel Klein has publicly
expressed support for the type of implementation discussed here:

http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

Given the absence of elements widely cited as problematic, I believe
that something along these lines would be both feasible and generally
acceptable to editors.

David Levy



Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-10-01 Thread David Levy
Andreas Kolbe wrote:

 We'd still be in good company, as all other major websites, including
 Google, YouTube and Flickr, use equivalent systems, systems that are
 widely accepted.

I'm going to simply copy and paste one of my earlier replies (from a
different thread):

Websites like Flickr (an example commonly cited) are commercial
endeavors whose decisions are based on profitability, not an
obligation to maintain neutrality (a core element of most WMF
projects).  These services can cater to the revenue-driving majorities
(with geographic segregation, if need be) and ignore minorities whose
beliefs fall outside the mainstream for a given country.  We mustn't
do that.

One of the main issues regarding the proposed system is the need to
determine which image types to label "potentially objectionable" and
place under the limited number of optional filters.  Due to cultural
bias, some people (including a segment of voters in the referendum,
some of whom commented on its various talk pages) believe that this is
as simple as creating a few categories along the lines of "nudity,"
"sex," "violence" and "gore" (defined and populated in accordance with
arbitrary standards).

For a website like Flickr, that probably works fairly well; a majority
of users will be satisfied, with the rest too fragmented to be
accommodated in a cost-effective manner.  Revenues are maximized.
Mission accomplished.

The WMF projects' missions are dramatically different.  For most,
neutrality is a nonnegotiable principle.  To provide an optional
filter for image type x and not image type y is to formally
validate the former objection and not the latter.  That's
unacceptable.

An alternative implementation, endorsed by WMF trustee Samuel Klein,
is discussed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

 If I google for images of "cream pies" in my office in the lunch break,
 because I want to bake one, I'm quite happy not to have dozens of images of
 sperm-oozing rectums and vaginas pop up on my screen. Thanks, Google.

Are you suggesting that a comparable situation is likely to arise at a
WMF website?

David Levy



Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-10-01 Thread David Levy
MZMcBride wrote:

 I'd forgotten all about Toby. That was largely a joke, wasn't it?

Do not try to define Toby.  Toby might be a joke or he might be
serious.  Toby might be watching over us right now or he might be a
bowl of porridge.  Toby might be windmills or he might be giants.
Don't fight about Toby.  Let Toby be there.  Toby loves us.  Toby
hates us.  Toby always wins.

David Levy



Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-09-30 Thread David Levy
André Engels wrote:

 We will be putting certain categories/tags/classifications on images,
 but it will still be the readers themselves who decide whether or not
 they see the tagged images.

But _we_ will need to determine the categories/tags/classifications to
use and the images to which they're applied.

As previously discussed, unless we implement an "unveiled women"
category (which is highly unlikely), readers who object to such images
will be discriminated against.

And for a hypothetical nudity category, we'll have to decide what
constitutes nudity.  This will trigger endless debate, and whatever
definition prevails will fail to jibe that held by a large number of
readers.

 There might well be an option to show a certain image even though
 it's under the filter. Apart from that, if we were of the opinion
 that we should do something perfectly or not at all, we would not
 have any of our projects.

As I pointed out to you in a previous reply, an alternative image
filter implementation has been proposed (and is endorsed by WMF
trustee Samuel Klein).  It would accommodate everyone and require no
determinations on the part of the community (let alone
analysis/tagging of millions of files, with thousands more uploaded
every day).

Please see the relevant discussion:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

David Levy



Re: [Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

2011-09-30 Thread David Levy
I wrote:

 And for a hypothetical nudity category, we'll have to decide what
 constitutes nudity.  This will trigger endless debate, and whatever
 definition prevails will fail to jibe that held by a large number of
 readers.

The above should read jibe _with_ that held by a large number of readers.

David Levy



Re: [Foundation-l] Image filter

2011-09-23 Thread David Levy
Marcus Buck wrote:

 The majority of people probably think that an optional opt-in filter is
 a thing that does no harm to non-users and has advantages for those who
 choose to use it. (Ask your gramma whether "You can hide pictures if
 you don't want to see them" sounds like a threatening thing to her.)

In attempting to explain Wikipedia's neutrality to my father, I
mentioned that we don't condemn the acts of Adolf Hitler.  (We
document the widespread condemnation, but we don't deem it correct.)
No matter what I said, my father refused to accept that there was any
valid reason for Wikipedia to refrain from stating as a fact that
Hitler was evil.

This, I believe, is a fairly mainstream attitude.  If questioned,
most people probably wouldn't object to that type of content.  But
persons familiar with Wikipedia's mission feel differently (and
routinely revert such edits).

The idea behind a category-based image filter feature comes across as
incredibly simple.  (You want to provide the option to block
objectionable images?  Sure, what could be the harm in that?)  To a
typical respondent, this might seem as straightforward as adding
checkboxes labeled "sex," "nudity," "violence," "gore" and
"blasphemy."  That those classifications are subjective and
non-inclusive of images that others regard as objectionable probably
isn't even a passing thought.  That volunteers would need to analyze
and tag millions of images (with thousands more uploaded every day) is
an alien concept.

To borrow David Gerard's metaphor, it's like soliciting opinions on
the introduction of a "magical flying unicorn pony that shits
rainbows" (or, as a less absurd example, "free ice cream for
everyone").  Among those who needn't worry about the logistics, it
sounds highly appealing.

I find it odd that some are inclined to discount the German
Wikipedia's poll on the basis that it reflects the views of editors
(as opposed to readers as a whole).  Setting aside the general
public's ignorance of the WMF projects' core principles, let's keep in
mind that a category-based filter system would depend upon _editors_
to function.

David Levy



Re: [Foundation-l] Image filter

2011-09-23 Thread David Levy
Milos Rancic wrote:

 There are around 300M of readers and less than 30k of the extended
 pool of editors, which brings number of 0,01%. Thus, not just
 irrelevant, but much less than the margin of statistical error.

You appear to have ignored my points regarding non-editors'
unfamiliarity with the WMF projects' core principles and the proposed
feature's logistics.  (Most members of the general public might
support the introduction of a "magical flying unicorn pony that shits
rainbows," but that won't cause it to spring into existence.)

You also declined to address my point that a category-based filter
system, irrespective of its popularity among readers, could not
function without the support of editors.

And you seem to suggest that *any* on-wiki poll is inherently irrelevant.

David Levy



Re: [Foundation-l] Possible solution for image filter

2011-09-23 Thread David Levy
Milos Rancic wrote:

 Note that more than 50% of money comes from US and that it could be
 easily assumed that at least 10% of ~$10M given by US citizens and
 corporations want to have a kind of family friendly Wikipedia. Thus,
 $1M/year is fair price for creating something which would please them.

Assuming that the 10% figure is accurate, it has no bearing on the
feature's relative importance.

The same people/corporations might care more about numerous other
potential uses of the money (including different unimplemented
features), so your mathematical equation is invalid.

And I reject the premise that it's reasonable to base fund allocations
on popular opinion, with donors' views carrying extra (all?) weight.
Our mission is to disseminate information to the world, not to
please donors by catering to their preferences.

David Levy



Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread David Levy
Milos Rancic wrote:

 They want that censoring tool and I think that they won't be content
 until they get it. Thus, let them have it and let them leave the rest
 of us alone.

Some people won't be content until Wikipedia's prose conveys their
cultural/religious/spiritual beliefs as absolute truth.  Should the
WMF provide en.[insert belief system].wikipedia.org so they can edit
it and leave the rest of us alone?

David Levy



Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread David Levy
Milos Rancic wrote:

 Don't worry! Any implementation of censorship project would lead to
 endless troll-fests which would be more dumb than Youtube comments.
 The point is just to kick out them out of productive projects. Imagine
 a place where Christian, Muslim, religon3, religionN
 fundamentalists will have to cooperate! That would be the place for
 epic battles of dumbness. We'll have in-house circus!

You're comfortable with the Wikimedia Foundation hosting/funding an
in-house circus?

David Levy



Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread David Levy
David Gerard wrote:

 I don't see that Milos' proposal will be any more of a circus than any
 other proposed filtering plan (except switch all images on/off) and
 it has the advantage of keeping the circus somewhere general users of
 the site don't need to be bothered.

I agree that Milos Rancic's model is no worse (and probably less bad)
than the one officially envisioned, but in my view, it still falls far
below the threshold of acceptability.

Another proposed implementation is technically feasible, maintains
neutrality, and includes functionality significantly more advanced
than switch all images on/off:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

 And if they actually square the circle and come up with something that
 doesn't fundamentally violate neutrality, then win.

We generally require evidence of viability before permitting the
creation of WMF projects or offshoots thereof.  Thus far, the
available evidence paints a picture in which the stated goal seems as
realistic as the aforementioned "magical flying unicorn pony that
shits rainbows."

David Levy



Re: [Foundation-l] Possible solution for image filter

2011-09-21 Thread David Levy
Milos Rancic wrote:

 Between:
 1) implementation against the majority will on main project;
 2) prolonged discussion about this issue, which would harm community;
 3) irrelevant in-house circus

 -- I choose the circus.

Those aren't the only options.

 And Board needs other options, as pushing censorship within the Board
 is strong enough to allow them to step back.

An alternative proposal (which is *far* simpler and maintains
neutrality) already has the public backing of WMF trustee Samuel
Klein:

"As I noted at some point on the mediawiki discussion page: I'm also
in favor of any implementation being a general image filter. I haven't
seen any other option proposed that sounded like it would work out
socially or philosophically.

"There is only one argument I have heard for a category system: that
it might be technically easier to implement. I don't think this is a
good argument - a bad solution here could be worse than none.

"I would certainly like to see what sort of interest there is in a
general image filter. That is the only version of this feature that I
could imagine myself using. (for a small and annoying phobia.)"

http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

David Levy



Re: [Foundation-l] Possible solution for image filter - magical flying unicorn pony that s***s rainbows

2011-09-21 Thread David Levy
Andrew Gray wrote:

 People are saying we can't have the image filter because it
 would stop us being neutral. If we aren't neutral to begin
 with, this is a bad argument.

An image filter feature isn't inherently non-neutral, but one reliant
upon special categories (as described in the WMF outline) would be.

Unlike our normal categories (which objectively and non-exclusively
describe images' subjects in great detail), a manageable setup would
require broad, objection-based (i.e. extremely subjective)
classifications.

Additional non-neutrality enters the equation when we provide
categories for some potentially objectionable images and not others
(implicitly validating belief x and discriminating against holders of
belief y).  As previously discussed, this isn't realistically
avoidable.

Even if it were feasible to include an "unveiled women" category, who
would analyze the millions of images (with thousands more uploaded
every day) to tag them accordingly?  That's merely a single example.
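A back-of-envelope calculation makes the scale problem concrete. Every number below is an assumption chosen for illustration, not an actual Commons statistic:

```python
# Rough estimate of the human effort a tag-everything approach would
# require.  All inputs are illustrative assumptions, not real figures.

existing_files = 10_000_000   # assumed Commons-scale backlog
uploads_per_day = 5_000       # assumed daily new uploads
seconds_per_review = 30       # assumed time to inspect and tag one file

backlog_hours = existing_files * seconds_per_review / 3600
daily_hours = uploads_per_day * seconds_per_review / 3600

print(f"one-off backlog: {backlog_hours:,.0f} volunteer-hours")   # ~83,333
print(f"ongoing: {daily_hours:,.1f} volunteer-hours per day")     # ~41.7
```

Even with these deliberately modest assumptions, the one-off backlog alone would be tens of thousands of volunteer-hours per filter criterion, before any disputes over borderline cases.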

David Levy



Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-19 Thread David Levy
Marcus Buck wrote:

 From what I understood the image filter will not have subjective
 criteria like "a little offensive," "very offensive," "pornography,"
 but neutrally decidable criteria like "depicts nude female breasts,"
 "depicts the face of Muhammad," "depicts mutilated dead body."

The WMF outline cites 5–10 categories, with "sexual imagery" and
"violent imagery" as examples:
http://meta.wikimedia.org/wiki/Image_filter_referendum/en

As Tobias Oelgarte noted, even with the type of specificity to which
you refer, the omission of countless widespread objections (a
non-neutral, discriminatory practice) would be unavoidable.  Who's
going to analyze millions of images (with thousands more uploaded
every day) to tag the ones containing unveiled women?

Tobias also pointed out that even if we _were_ to include every
widespread objection, the resultant quantity of filter categories
would be unmanageable.

As David Gerard noted, terms like "mutilated" are highly subjective.
At what level of injury does a murder victim qualify?  Fae mentioned
ancient Egyptian mummies.  Do they count?

Does an autopsy constitute "mutilation"?  Is a dead man's circumcised
penis "mutilated"?  (Many people would argue that it is.)

Do non-photographic images qualify?  If so, this article contains
several images that arguably should be filtered:
http://en.wikipedia.org/wiki/Descent_from_the_Cross

Speaking of artwork, Fae mentioned the depictions of nude female
breasts contained therein.  Do those count?  What about photographs
of breasts taken in medical contexts?  Are those equivalent to those
taken in sexual contexts?  If not, how do we define a sexual
context?

Do you foresee clear consensus in these areas?

This is what Fred Bauder means when he notes that "some depictions of
such things are offensive or pornographic and some not at all."  But
this is entirely subjective.  We can expect endless debates on these
subjects and countless others (and probably tag wars as well).

Conversely, the alternative image filter implementation on which I've
harped would enable readers to decide for themselves on an individual,
case-by-case basis:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

David Levy



Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-19 Thread David Levy
Stephen Bain wrote:

 And once again, the labelling doesn't need to be perfect (nothing on a
 wiki is) if an option to hide all images by default is implemented
 (which at present there seems to be broad support for, from most
 quarters).

With such an option in place, why take on the task of labeling images
on readers' behalf?  As previously discussed, it would be a logistical
nightmare and enormous resource drain.

Furthermore, I don't regard "If you don't like our specific filters,
you can just use the blanket one." as a remotely appropriate message
to convey to the millions of readers whose beliefs we fail to
individually accommodate.

 The accuracy of filtering can then be disclaimed, with a
 recommendation that people can hide all images if they want a
 guarantee.

Or we can simply accept the idea's infeasibility and provide the
non-broken image filter implementation alone.

Note that disclaimers won't satisfy the masses.  (If they did, we
wouldn't be having this discussion.)  As soon as the WMF introduces
such a feature, large segments of the public will expect it to
function flawlessly and become outraged when it doesn't.

I foresee sensationalistic media reports about our "child protection
filters" that let through smut and gore.  (I realize that we aren't
advertising "child protection filters."  Nonetheless, that's a likely
perception, regardless of any disclaimers.)

 And of course artworks are being used as examples because they're
 going to present the corner cases. But all of these discussions seem
 to be proceeding on the basis that there are nothing but corner cases,
 when really (I would imagine) pretty much everything that will be
 filtered will be either:
 * actual images of human genitals [1],
 * actual images of dead human bodies, or
 * imagery subject to religious restriction.
 Almost all will be in the first two categories, and most of those in
 the first one, and will primarily be photographs.
 [1] Which, naturally, includes actual images of people undertaking all
 sorts of activities involving human genitals.

Firstly, I don't know why you've singled out genitals.  People
commonly regard depictions of other portions of the human anatomy
(such as buttocks and female breasts) as objectionable.

Secondly, "imagery subject to religious restriction" (which doesn't
constitute a viable category in this context) includes images of
unveiled women.  You state that "almost all will be in the first two
categories," but I'm confident that we host significantly more images
of unveiled women than images of human genitals and images of dead
human bodies combined.

Thirdly, there's been a great deal of discussion regarding other
images to which people commonly object, such as those depicting
"violence" (whatever that means), surgical procedures, spiders and
snakes, to name but a few.

 On the basis that the community, by and large, is not comprised wholly
 of idiots, I'm sure it will be capable of holding a sensible
 discussion as to whether images of mummies (not to forget bog bodies
 and Pompeii castings, as further examples) would be in or out of such
 a category.

...thereby arriving at an inherently subjective, binary determination
that fails to meet many readers' expectations.

 And again, perfection is not necessary. If someone has dead bodies
 filtered and sees the filtered image placeholder with the caption
 "this is an Egyptian mummy," they can elect to show that particular
 image, or decide that they would like to turn off the filter. Or if
 such a "dead bodies" filter is described as not including Egyptian
 mummies, someone could decide to hide all images by default.

Or we could simply provide that functionality alone, thereby enabling
the same scenario.

 This doesn't have to be difficult.

Indeed.

David Levy



Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-17 Thread David Levy
Peter Gervai wrote:

 But then you state that WMF will make it compulsory for all projects
 to activate the feature? (Citation is welcome, sure, but not
 required.)

The feature will be developed for, and implemented on, all projects.

http://meta.wikimedia.org/wiki/Image_filter_referendum/en

David Levy



Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-17 Thread David Levy
André Engels wrote:

 As said before, just get different categories, and let people choose among
 them. The priest could then choose to block "full nudity," "female
 toplessness," "people in underwear" and "people in swimwear," but not
 "images containing naked bellies" or "unveiled women," whereas the atheist
 could for example choose to only block photographs of sexual organs and
 watch the rest.

As Tobias Oelgarte noted, it simply isn't feasible to categorize
images in this manner (keeping in mind that those are merely examples
of the countless categories that we would need to apply to millions of
images, with thousands more uploaded every day), let alone to present
such a large quantity of filter options to readers.

 I find it strange that you consider this an objection to a filter. Surely,
 giving someone an imperfect choice of what they consider objectionable is
 _less_ making a decision for them than judging in advance that nothing is
 objectionable?

You're mischaracterizing the status quo.  We haven't determined that
nothing is objectionable to anyone;  we rightly assume that
_everything_ is potentially objectionable to someone (and refrain from
favoring certain objections over others).

 What is POV about labelling something as being "an image containing a nude
 human" or "an illustration supposed to represent a religious figure"?

Tobias Oelgarte described one key problem.  Another lies in the
labeling of some things and not others.  Unless we were to create and
apply a label for literally everything that someone finds
objectionable, we'd be taking the non-neutral position that only
certain objections (the ones for which filters exist) are reasonable.

You mentioned a hypothetical "unveiled women" category.  Do you
honestly believe that the idea of tagging images in this manner is
remotely realistic?

What about images depicting miscegenation (another concept to which
many people strongly object)?  Are we to have such a category?

David Levy



Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-17 Thread David Levy
I wrote:

  You're mischaracterizing the status quo.  We haven't determined that
  nothing is objectionable to anyone;  we rightly assume that
  _everything_ is potentially objectionable to someone (and refrain from
  favoring certain objections over others).

André Engels replied:

 Thereby giving those who have objections nothing just because there are
 others who we can't give what they want.

I don't advocate maintaining the status quo.  I support an alternative
image filter implementation (endorsed by WMF trustee Samuel Klein),
which would accommodate _everyone_ and require no determinations on
the part of the community (let alone analysis/tagging of millions of
files, with thousands more uploaded every day).

Please see the relevant discussion:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

  Unless we were to create and apply a label for literally everything that
  someone finds objectionable, we'd be taking the non-neutral position that
  only certain objections (the ones for which filters exist) are
  reasonable.

 We don't say they're unreasonable, we say that we don't cater to it, or at
 least not yet.

On what basis shall we determine the objections to which we cater?

Presumably, we would start by weeding out objections held by
relatively small numbers of people.  (This is problematic, but let's
set aside that issue.)

Then what?  To select only certain objections from the resultant pool,
what criterion other than reasonableness (a completely subjective
judgement) remains?

You might argue that we needn't narrow the aforementioned pool, but a
large number of filter categories is unsustainable (both in its
creation/maintenance and in its presentation to readers).

 That may be non-neutral, but no more non-neutral than that one subject has
 an article and the other not, or one picture is used to describe an article
 and the other not, or one category is deemed important enough to be used to
 categorize our articles, books, words and images and another not.

All of those decisions are based — at least in part — on objective
criteria.  More importantly, all of those decisions are intrinsic to
the WMF projects' operation.

Conversely, the image filter implementation that you advocate is
neither essential nor technically feasible.

 Or even clearer: that one language has a Wikipedia and another not. Did we
 make a non-neutral choice that only certain languages (the ones for which
 Wikipedias exist) are valid languages to use for spreading knowledge?

You're describing an imbalance that exists through misfortune, *not*
by design.  Ideally, we would include every written language utilized
as a primary means of communication.  Sadly, some projects simply
aren't viable (because they lack sufficient user bases).

No analogous situation forces us to treat readers differently based on
their personal beliefs regarding what images are/aren't objectionable.

  You mentioned a hypothetical "unveiled women" category.  Do you
  honestly believe that the idea of tagging images in this manner is
  remotely realistic?

 I'd say it is, provided there are people wanting to use the filter, and not
 minding the fact that in the beginning it will be far from perfect.

So we eventually will analyze millions of images (and monitor the
thousands uploaded on a daily basis) to tag each and every one
containing an unveiled woman?

  What about images depicting miscegenation (another concept to which
  many people strongly object)?  Are we to have such a category?

 I'd say if there are people actually wanting to use such a filter, then yes,
 I would think we might well get one.

I admire your consistency, but I regard this approach as stupendously
infeasible.

David Levy



Re: [Foundation-l] 86% of german users disagree with the introduction of the personal image filter

2011-09-17 Thread David Levy
Stephen Bain wrote:

 NPOV involves determining whether viewpoints are widely held, are held
 by substantial or significant minorities, or are held by an extremely
 small or vastly limited minority and therefore not suitable to be
 covered in articles. This is an editorial decision-making process that
 all editors perform all the time. Determining which filters to work on
 is entirely analogous to this process, which is inherently neutral.

Gauging a viewpoint's level of coverage by reliable sources is
achievable via objective criteria.  We don't take anyone's side and
aren't bound by practical limitations on the number of widely covered
views we can document (irrespective of the quantity of articles
required).

Conversely, category-based filtering would require us to accept/reject
our readers' views in a binary fashion.  (A type of objectionable
image would either receive a filter or not.)  This would convey a
formal determination that x beliefs warrant accommodation and y
beliefs don't, which isn't remotely the same as documenting these
views in a neutral manner.

Perhaps you have in mind that we could accommodate objections that are
"widely held" or "held by substantial or significant minorities,"
thereby excluding only the ones held by "an extremely small or vastly
limited minority."  As I noted in another reply, setting aside any
philosophical issues, this isn't technically feasible.  For logistical
reasons, the numerical limit would be far lower (with an example of
5–10 categories cited by the WMF).

The above doesn't even touch on the categories' population, which
would entail non-stop argumentation over whether particular images
belong in particular categories.  Once again, contrary to the creation
of articles documenting a wide range of views, the decision would be
binary: filter or don't filter.  Unlike our normal categorization
scheme's large number of objective classifications, this would rely on
a small number of subjective ones (created not to provide neutral
descriptions, but to label the images' subjects potentially
objectionable).

There's no need for any of this.  We can accommodate _everyone_ via a
vastly simpler, fully neutral setup.  If you haven't already, please
see the relevant discussion:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5
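For illustration, the neutral setup described above amounts to one
per-reader "hide images" preference plus per-image overrides, with no
community-maintained category labels at all.  A minimal sketch in
Python (the class and method names are hypothetical, not an actual
MediaWiki interface):

```python
class ImageFilterPrefs:
    """Per-reader filter model: a single global hide-all switch
    plus per-image overrides chosen by the reader (hypothetical)."""

    def __init__(self, hide_by_default=False):
        self.hide_by_default = hide_by_default
        self.overrides = {}  # image name -> True (show) or False (hide)

    def set_override(self, image, show):
        # Called when the reader clicks a placeholder to reveal an
        # image, or collapses one they would rather not see.
        self.overrides[image] = show

    def is_visible(self, image):
        # The reader's per-image choice wins; otherwise fall back
        # to the global preference.
        return self.overrides.get(image, not self.hide_by_default)


prefs = ImageFilterPrefs(hide_by_default=True)
prefs.set_override("Mummy.jpg", True)  # reveal a single image
print(prefs.is_visible("Mummy.jpg"))   # True
print(prefs.is_visible("Spider.jpg"))  # False
```

No tagging of millions of files, and no binary community judgement
about whose objections count: every decision rests with the individual
reader.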

David Levy



Re: [Foundation-l] Personal Image Filter results announced

2011-09-07 Thread David Levy
Sydney Poore wrote:

 The idea of offering image filters on WMF projects is much more
 controversial than it is on other internet websites. So, I think
 that it is fair to suggest that we examine why we are having
 conflicts over this topic when other websites don't. One possible
 reason is that our base of editors is different from that of other websites.

Websites like Flickr (an example commonly cited) are commercial
endeavors whose decisions are based on profitability, not an
obligation to maintain neutrality (a core element of most WMF
projects).  These services can cater to the revenue-driving majorities
(with geographic segregation, if need be) and ignore minorities whose
beliefs fall outside the mainstream for a given country.  We mustn't
do that.

One of the main issues regarding the proposed system is the need to
determine which image types to label "potentially objectionable" and
place under the limited number of optional filters.  Due to cultural
bias, some people (including a segment of voters in the referendum,
some of whom commented on its various talk pages) believe that this is
as simple as creating a few categories along the lines of "nudity,"
"sex," "violence" and "gore" (defined and populated in accordance with
arbitrary standards).

For a website like Flickr, that probably works fairly well; a majority
of users will be satisfied, with the rest too fragmented to be
accommodated in a cost-effective manner.  Revenues are maximized.
Mission accomplished.

The WMF projects' missions are dramatically different.  For most,
neutrality is a nonnegotiable principle.  To provide an optional
filter for image type x and not image type y is to formally
validate the former objection and not the latter.  That's
unacceptable.

An alternative implementation, endorsed by WMF trustee Samuel Klein,
is discussed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

David Levy



Re: [Foundation-l] Personal Image Filter results announced

2011-09-07 Thread David Levy
Sydney Poore wrote:

 Today, to be successful, organizations (both for-profit and
 not-for-profit) must recognize the needs of their global audience.
 Offering image filters where people can set their own preferences
 and bypass the setting for individual images is a brilliant way
 for people with different values to share the same space. No content
 is removed, and people can see all images if they choose to.

Agreed.  That's why I support the image filter implementation proposed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#general_image_filter_vs._category_system
or
http://goo.gl/t6ly5

Because we seek to accommodate a global audience (comprising people
whose beliefs are extremely diverse), I unreservedly oppose any
implementation necessitating the designation of certain image types
(and not others) as "potentially objectionable" or similar.

David Levy



Re: [Foundation-l] [Fwd: Re: Do WMF want enwp.org?]

2011-05-08 Thread David Levy
Martijn Hoekstra wrote:

  Back to the issue at hand though: Thomas is (quite generously)
  offering the enwp.org domain. Would the foundation like to have it?

Gerard Meijssen replied:

 The wp.org is probably for sale ... It makes more sense to acquire
 that domain.

Why is this an either-or proposition?

Even if the wp.org domain name is worth acquiring (most likely at a
substantial cost, as Waldir Pimenta noted), why does this mean that
the Wikimedia Foundation should decline Thomas Wang's generous
donation of the enwp.org domain name?

—David Levy



Re: [Foundation-l] Wikipedia articles based on Wikileaks material

2010-12-12 Thread David Levy
Fred Bauder wrote:

[...]

 Likewise links to or hosting of classified documents, or offensive
 images, is inappropriate

[...]

Images of unveiled women are regarded as "offensive" by many.  Should
we prohibit linking to or hosting them?

-- 
David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-07 Thread David Levy
Aryeh Gregor wrote:

 Because evidence is a great thing, but judgment is necessary too.  It
 would be nice if you could do everything strictly based on evidence,
 but real life isn't so simple.

Agreed.  So why are you dismissing people's arguments on the basis
that they stem from such judgement (while simultaneously passing
similar judgement of your own)?  You can't have it both ways.

It's entirely reasonable to vigorously disagree with others'
arguments, of course.  But when the rationale is "they are not backed
by data," it's unreasonable to exempt the user experience team and
yourself from this standard.

 Just because there are complaints about *many* problems, doesn't mean
 there are complaints about *all* problems.  Some problems draw few to
 no complaints, by their nature.

As previously noted, perceived "clutter" draws complaints.

 If every problem really did trigger complaints, we wouldn't need
 usability studies -- we could find all problems by just looking at
 complaints.  This isn't the case.

We've agreed that such comments mustn't be interpreted as
representative samples, so the value of usability studies is clear.

However, this particular problem was identified *not* through such a
study (despite the fact that one was ongoing), but through speculation
stemming from a general design principle of questionable
applicability.

 There is ample evidence that when users are presented with more
 buttons to click, they take longer to find what they want and make
 more mistakes.

In my observation, a list of twenty interlanguage links is perceived
*not* as twenty separate links, but as one coherent list.  It's
instantly clear that most or all of the labels are written in a
language foreign to the user, and little or no time is spent examining
them individually.

However, I'm not suggesting that my observation is sacrosanct; I
welcome scientific data.

 We can apply this generality to specific cases without having to
 directly check it every time.

Yes, but not when dealing with a materially different entity.

 We don't have the budget to run a usability study on every individual
 possible problem.

Of course not.  But for reasons explained throughout the discussion,
many of us regard this feature as immensely important and feel that it
should not be demoted in the absence of data indicating that the
change is beneficial.

Howie Fung has acknowledged that additional data is needed, and I
applaud this response.

David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-06 Thread David Levy
 links and less on other interface elements, is
 that good or bad?

There is no evidence that the interlanguage links are a distraction,
let alone that users click on them at the expense of other links.

 If we wanted to maximize clicks on interlanguage links, we could
 always put them above the article text, so you have to scroll through
 them to get to the article text . . . but that's obviously ridiculous.

Indeed, such placement obviously would impede access to the article
text.  There is no evidence that the actual placement acts in kind.

 My suspicion is that a long list is not ideal.

Perhaps not.  And when we have data indicating that an alternative
setup is superior to the one currently preferred by the community, we
should switch to it.

Erik Möller has outlined a sensible course of action.

David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-05 Thread David Levy
Austin Hair wrote:

 And yes, I'll echo others when I question the original rationale and
 suggest that the interpretation of what very little data was collected
 is completely wrong, but I think I'll direct my focus toward a
 practical fix, rather than just calling the usability team stupid.

Your last sentence surprised me, as I haven't seen anyone opine that
the usability team is stupid (and I certainly am not suggesting
anything of the sort).  Everyone makes mistakes, and we believe that
one has been made in this instance.  As for a practical fix, one
actually was implemented (and quickly undone).

David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-05 Thread David Levy
Gregory Maxwell wrote:

 I was alarmed when I heard the click rates: 1%.  That's an enormous
 number of clicks, considerably higher than I expected with the large
 number of things available for folks to click on.  To hear that it
 went down considerably with Vector—well, if nothing else, it is a
 possible objective indication that the change has reduced the
 usability of the site.

To clarify, no current statistics have been released.  The 0.28%
figure refers to use of Vector before the official roll-out (when the
interwiki links became collapsed by default, if I'm not mistaken).
The disparity is attributable to the fact that most Vector users were
participants in an opt-in, English-language beta test.

For the record, I agree with everything else that you wrote.

David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-04 Thread David Levy
Mark Williamson wrote:

 That's not good enough. First of all, people who don't speak a
 language won't recognize the text "see other languages," or even
 "languages." Could you pick the word ენები out of a page full of
 text in a foreign language and understand that clicking it would lead
 you to a link to the English version of an article?

Very well said.

Indeed, the interwiki links are pointedly presented in the relevant
languages/scripts, and the readers for whom they're most useful are
among the least likely to comprehend the label under which they've
been hidden.

 The reason your proposal to use geolocation or browser language is not
 good enough is that would still result in reducing the visibility of
 many, many Wikipedias. I think there are many users who would prefer
 to read articles in Catalan whose default browser language is set to
 ES, and geolocation will probably not solve that problem either.

It appears that the idea's ramifications haven't been fully considered
(in part because it's difficult for speakers of one language to
appreciate the needs of another language's speakers).

 I think it is a mistake to hide ANY interwikis. "Clutter" is not a
 huge sacrifice for people to make to vastly increase INTERNATIONAL
 usability.

Furthermore, I don't recall _ever_ encountering a complaint about this
so-called clutter.  But I certainly have seen numerous complaints
about the interwiki links' sudden removal (as many have perceived the
change).

Perhaps a suitable compromise can be devised, but in the meantime, the
only appropriate solution is to display the interwiki links by
default.  It's unfortunate that this fix was reverted, let alone in
the name of usability.

David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-04 Thread David Levy
Aryeh Gregor wrote:

 Users don't explicitly complain about small things.

At the English Wikipedia, this is not so.  If we had a bike shed,
there would be daily complaints about its color.

 They especially don't complain about things like clutter, because the
 negative effect that has is barely perceptible -- extra effort
 required to find things.

I've encountered many complaints about "clutter" at the English
Wikipedia (pertaining to articles, our main page and other pages), but
not one complaint that the interwiki links caused "clutter."

 But if you take away a feature that's important to a small number of
 users, or that's well established and people are used to it, you'll
 get lots of complaints from a tiny minority of users.

I realize that, and I once had a high-profile edit reverted because a
tiny number of users (out of a very large number affected) complained.

However, assuming that the interwiki links benefit a relatively small
percentage of users (still a non-negligible number in absolute terms),
I've yet to see evidence that displaying them by default is
problematic.  Like David Gerard, I desire access to the data behind
this decision.

David Levy



Re: [Foundation-l] hiding interlanguage links by default is a Bad Idea, part 2

2010-06-04 Thread David Levy
[replying here and at
http://usability.wikimedia.org/wiki/Opinion_Language_Links]

Howie Fung wrote:

 First, some background on the problem we're addressing and the design
 principle that we used.  Every situation is unique, but in the case of
 the interwikilinks, we believe the sheer number of language links,
 especially within the context of an information-heavy page, makes users
 numb to the list.

I regard this as an unreasonable assumption.

In my experience/observation, readers saw the links and recognized
them as a list of articles in various languages.  Most didn't wish to
view such articles, so they paid no further attention to the list
until such time as they did.  (No harm done.)  Meanwhile, users
wishing to view articles in other languages (a small percentage, but a
large number in absolute terms) knew exactly where to look.

To equate this unusual type of list with large blocks of text in
general (without any data to demonstrate the principle's
applicability) is to completely ignore context.

 When people see large collections of things, they tend to group them all
 together as one object and not identify the individual parts that make
 the whole. As the number of items in the list decreases the likelihood
 of a person identifying the individual items increases. This is similar
 to how viewing a traffic jam appears as a long line of generic vehicles,
 while seeing just a few cars driving down the road might be comprehended
 in more granular detail (a motorcycle, a truck and a sports car).

This analogy fails to consider a very important distinction.

Unlike the motor vehicles, one (or a small number) of the links on the
list are meaningfully different from the user's perspective.  To a
reader of Japanese, the 日本語 link stands out in much the same way that
an ice cream van would stand out in the aforementioned traffic jam.
The other links, being foreign to this particular user, do not compete
for attention (and therefore are less of a distraction than the random
cars surrounding the ice cream van are).

 While we did not explicitly test for this during our usability studies
 (e.g., it wasn't included as a major design question), we did exercise
 judgement in identifying this as a problem, based partly on the applying
 the above design principle to the site, partly on the data.

Said data indicated only that the interwiki links were used relatively
infrequently.  Apparently, there is absolutely no data suggesting that
the full list's display posed a problem.  Rather, this is a hunch
based upon the application of a general design principle whose
relevance has not been established.

Aryeh Gregor: You cited the importance of data (and the systematic
analysis thereof).  In light of this explanation, what is your opinion
now?

 A more effective approach would be to balance the two, by showing just
 enough links to clearly illustrate the meaning of the list.  So our
 proposal is to show a list of, say, 5 languages with a "more" link.  We
 think that a list of 5 languages should be sufficient to communicate to
 the user that there are other language versions available.  If the
 language they want is not on the list, they may click "more" to see the
 full list.

If the language that the user seeks is not visible, why do you assume
that he/she will recognize the list's nature?

Imagine seeing the following on a page full of similarly unintelligible text:

Ectbadi
Feskanalic
Ibsterglit
Kreviodeil
Tionevere
 Straknaj 6 tak

Would you recognize the first five items as languages and the last as
"Show 6 more"?

Compare the above to this:

Bacruhy
Ectbadi
English
Feskanalic
Ibsterglit
Kreviodeil
Nuprevnu
Ootredi
Rozlovatom
Tionevere
Zidentranou

And keep in mind that the above allows for the use of Ctrl-F to find
"English" on the page.

 There are numerous ways we can populate the list of 5.  The simplest way
 is to populate based on the current order, but we can also do it based
 on size of the wiki, browser language, geo IP, etc.

Browser language and location detection are the best of the above
options, but they're far from flawless.  It's been explained that
there are reasons for speakers of some languages to set their browsers
to other languages.  Location detection is not entirely reliable and
only enables an educated guess as to the language that someone speaks.
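To make that limitation concrete, here is a rough sketch of the
browser-language heuristic, assuming the server sees a raw
Accept-Language header string.  The function name and ranking policy
are my own illustration, not the usability team's design; note that a
browser configured only for Spanish would never surface the Catalan
link this way.

```python
def rank_languages(accept_language, available, limit=5):
    """Order interlanguage links by the browser's Accept-Language
    preferences (illustrative helper, not a proposed implementation)."""
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        tag, _, q = piece.partition(";q=")
        try:
            weight = float(q) if q else 1.0  # no q-value means q=1.0
        except ValueError:
            weight = 0.0
        # Reduce "en-GB"-style tags to their primary subtag.
        prefs.append((tag.strip().lower().split("-")[0], weight))
    prefs.sort(key=lambda p: -p[1])  # highest q-value first (stable sort)
    preferred = []
    for tag, _ in prefs:
        if tag in available and tag not in preferred:
            preferred.append(tag)
    rest = [lang for lang in available if lang not in preferred]
    return (preferred + rest)[:limit]


# A browser set to "es" only never surfaces Catalan:
print(rank_languages("es", ["de", "en", "es", "fr", "ja", "ca"]))
# → ['es', 'de', 'en', 'fr', 'ja']
```

The Catalan reader whose browser reports only "es" is exactly the case
raised earlier in this thread: the heuristic is a guess, not a reliable
signal of what the reader actually wants.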

To me, all of this comes across as a manufactured problem in search of
a solution (while the initial change was a solution in search of a
problem).

 Our proposal is to go with something simple for now, and then continue
 to explore options for greater customization.

Or you could simply restore the one-line code modification that
provided the default behavior requested by the community (pending
evidence that an alternative setup is beneficial).

David Levy

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Renaming Flagged Protections

2010-05-24 Thread David Levy
William Pietri wrote:

 Sorry if I was unclear. I was speaking about the naming issue. I think
 it's ok if our name for this generally assumes the happy case.

I disagree.  I think that it should be as clear as possible that this
process exists to counter inappropriate edits, not as an Orwellian
measure intended to be used indiscriminately throughout the
encyclopedia (because we want to double check good edits before
allowing them to attain normal status).

I understand what you mean (we assume that most edits will be good
even in a case in which a relatively small number of bad edits renders
this feature necessary), but it's unrealistic to expect that
complicated concept to come across.  We seek a name that requires as
little elaboration as possible.

 The essence of a wiki, both notionally and practically, is the
 assumption that people are generally doing something good.

Leaving the incorrect impression that we intend to routinely double
check edits in this manner conveys something very different.

 Protection, which focuses on the trouble a few bad actors can cause,
 is a big step away from that notion. Flagged Protection moves back
 toward the original wiki spirit.

But it still exists for the purpose of countering inappropriate edits.
 I see no reason to pretend otherwise.  In fact, given the negative
publicity that some such edits have caused, I view this as extremely
important to convey.  Downplaying the feature as a reaction to
something happy strikes me as precisely the wrong approach.

 So I think it's fine if the name has a positive connotation.

And that connotation should be we're countering inappropriate edits,
not we assume that everything's okay, but we'll humor the concerns.

Of course, I'm not proposing that we use a term like Vandal Buster.
I'm saying that the name itself should imply nothing about the edits'
quality.

Revision Review is perfectly neutral (and much clearer than Double
Check, which has inapplicable connotations and doesn't even specify
what's being checked) and thus far has generated more support than
anything else has.

  My understanding is that we seek to avoid colloquialisms, which are
  particularly difficult for non-native English speakers to comprehend.

 In theory, certainly. In practice, I have a hard time believing that
 non-native speakers would struggle with a name Double Check more than
 they'd struggle with any of the other names.

I've already noted that if I didn't possess prior knowledge of the
feature's nature, the name Double Check would confuse *me* (a native
English speaker).  You expect non-native English speakers to grasp a
colloquial usage (and see no advantage in a name composed of words
whose dictionary meanings accurately describe the intended concept)?

 I think that any name we choose is going to leave a lot of people
 confused about what's going on, especially if they sit there and
 ruminate on it. The most we can ask of a name is that it gives them a
 vague sense of what's going on, and doesn't cause too much confusion as
 they read further.

The purpose of this request is to select the best (i.e. most
informative and least confusing) name possible.

 I know that these names have been worked over extensively by Jay and
 Moka, who have a lot of experience dealing with reporters and the
 general public. They were pretty happy with the two names that were part
 of the initial proposal from Rob, so I am willing to trust their
 professional judgment as far as reaction from the press and the person
 on the street. More, in fact, than I trust my own, as I know that I'm
 tainted by long years as a programmer and as a participant here and in
 Ward's wiki.

Rob has explicitly asked us to comment on these names and set up a
forum in which to do so (and propose alternatives).  You've vigorously
defended the name drawing the most opposition and declined to comment
on the name drawing the most support, and that's fine.  But please
don't suggest that we're wasting our time by doing what Rob asked of
us.

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-24 Thread David Levy
Michael Snow wrote:

 You edited out the text William was replying to, but in expressing
 his trust that the public relations professionals have the greatest
 expertise as to how the general public will receive the terminology,
 he was responding directly to speculation about how the general
 public would receive it. There's nothing in that comment to suggest
 that the community should not be involved or is wasting its time.

I hope that it's clear that I don't edit out text that I perceive as
contextually necessary (and don't intend to distort anyone's words).
In this instance, I don't regard William's response as dependent upon
my preceding comment.

 When dealing with multiple intended audiences (in this case, editors,
 readers, and the media), there is inevitably a balancing act in
 targeting your choice of words. It is unlikely that any name will be
 absolutely perfect for all use cases. Some degree of editorial judgment
 and discretion will have to be applied, and that's exactly the purpose
 of this discussion.

Agreed.


Gregory Maxwell wrote:

 Hm. Accctttuualyy

 Why not something that _must_ be explained?

 Call it Garblesmook, for example.
 (or better, import a word from some obscure semi-dead language... Does
 anyone have a suggestion of an especially fitting word? Perhaps
 something Hawaiian?)

 The big danger of using something with an intuitive meaning is that
 you get the intuitive understanding. We _KNOW_ that the intuitive
 understanding of this feature is a misunderstanding.

Our goal, as I understand it, is to select a name that provides as
much information as we can convey without causing substantial,
widespread confusion.  So if it were impossible to convey *any* amount
of information without causing substantial, widespread confusion, the
above approach would be best.

In my assessment (and that of others), the term Double Check is
likely to foster misunderstanding and the term Revision Review is
not.  This is not to say that it will actively counter
misunderstanding (which will arise no matter what name is used), but
it seems unlikely to introduce new misconceptions or reinforce those
that already exist.

 I think that if I were to ask some random person with a basic layman's
 knowledge of what a new feature of Wikipedia called revision review
 did and what benefits and problems it would have,  I'd get results
 which were largely unmatched with the reality of it.

We don't expect the general public to possess intimate knowledge and
won't ask random persons to provide such details.

I don't believe that the name Revision Review generally would
encourage people lacking sufficient information to jump to conclusions
(unless pressed, as in the hypothetical scenario that you describe).
For those learning about the feature, it would be clear, memorable and
repeatable.

 (Not that I think that any word is good)

Understood.  :)

 We could use a name which expresses _nothing_ about what is going on,
 thus making it clear that you can't figure it out simply from the name.

In this case, perhaps to a greater extent than in any other, we want
to generate beneficial media attention (to address the negative
coverage that Wikipedia has received regarding the problems that this
process is intended to mitigate).  Revision Review is a term that
the press can latch onto and run with.  (So is Double Check, but I
believe that it would cause confusion.)  In this respect, a term with
no discernible meaning simply wouldn't work well.


William Pietri wrote:

 I'm not arguing for any name in particular. I have argued against some
 notions about names that I think are incorrect. Broadly, I think it's
 easy for insiders to incorrectly use themselves as proxies for what
 regular users will think. That's a very common mistake in my field, so I
 spoke up.

 But I said before and I say again that am avoiding having an opinion on
 whatever the best name is. It's a lot of work to do it properly,
 especially for me as an insider, and I don't have time for it right now.
 I'm not suggesting that people are wasting their time working on this,
 and in fact think just the opposite. I think it's great, and supported
 bringing this up for community discussion.

Thanks for clarifying.


Nathan wrote:

 Why not involve the community at the beginning? A request for endorsement of 
 your favored options is
 not the same thing, and fails to harness real community enthusiasm.

In fairness, Rob stated that while time is of the essence, the
community is welcome to propose alternatives, and he created a
discussion page section for that purpose.


David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread David Levy
Alex wrote:

 Except unless we consider the initial edit to be the first check, it's
 not correct. Only one person independent from the editor is reviewing
 each edit.

This is one of my main objections to the term.

The write-in candidate Revision Review appears to combine the best
elements of Pending Revisions and Double Check.  Tango and I (who
strongly prefer opposing candidates) agree that it's a good option.
It seems like an excellent solution, and I hope that it can garner
sufficient support.

Irrespective of his/her opinion, everyone should weigh in at the
designated discussion page:
http://en.wikipedia.org/wiki/Wikipedia:Flagged_protection_and_patrolled_revisions/Terminology

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread David Levy
Sorry, the correct page is:
http://en.wikipedia.org/wiki/Wikipedia_talk:Flagged_protection_and_patrolled_revisions/Terminology

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread David Levy
James Alexander wrote:

 That is basically exactly how I see it, most times you double check
 something you are only the 2nd person because the first check is done by the
 original author. We assume good faith, we assume that they are putting
 legitimate and correct information into the article and checked to make sure
 it didn't break any policies, it's just that because of problems on that
 page we wanted to have someone double check.

That's a good attitude, but such an interpretation is far from
intuitive.  Our goal is to select a name that stands on its own as an
unambiguous description, not one that requires background knowledge of
our philosophies.

I'll also point out that one of the English Wikipedia's most important
policies is ignore all rules, a major component of which is the
principle that users needn't familiarize themselves with our policies
(let alone check to make sure they aren't breaking them) before
editing.

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread David Levy
William Pietri wrote:

 Allow me to quote the whole policy: If a rule prevents you from
 improving or maintaining Wikipedia, ignore it. That implies, in my view
 correctly, that the person editing is presumed to set out with the
 intention of making the encyclopedia better.

 I think that fits in nicely with James Alexander's view: we can and
 should assume that most editors have already checked their work. Not
 against the minutiae of our rules, but against their own intent, and
 their understanding of what constitutes an improvement to Wikipedia.

But we aren't checking to ensure that the edits were performed in good
faith (as even a gross BLP violation can be).  We're checking to
ensure that they're appropriate.

And again, the main problem is ambiguity.  Double Check can easily
be interpreted to mean that two separate post-submission checks are
occurring.  It also is a chess term (and could be mistaken for a
reference to that concept).

What is your opinion of the proposed name Revision Review?

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-23 Thread David Levy
William Pietri wrote:

 I think insiders will adjust to any name we choose, as some of our
 existing names attest. So I think as long as the name isn't hideous or
 actively misleading, then my main criterion is how it comes across to
 novices. For them, I'd suspect most will take double check as it's
 used colloquially,

My understanding is that we seek to avoid colloquialisms, which are
particularly difficult for non-native English speakers to comprehend.

And honestly, if I were not already familiar with the process in
question, I would interpret Double Check to mean checked twice
after submission (and I'm a native English speaker and Wikipedian
since 2005).  Someone unfamiliar with our existing processes might
assume that everything is routinely checked once by an outside party
(and this is an additional check).

Such potential for misunderstanding is non-trivial, as this feature's
deployment is likely to generate significant mainstream media
coverage.

 but if some do get the notion that it's checked twice by others rather than
 once, I see little harm done.

If the general public is led to believe that we're instituting a
second check because an existing check isn't working (as evidenced by
the disturbing edits already widely reported), this will be quite
injurious to Wikipedia's reputation.

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-22 Thread David Levy
MZMcBride wrote:

 No, it really isn't a legitimate concern.

Contrary to your claim that nobody cares, some of us obviously do.
Does this mean that we're in on the conspiracy, or have we merely been
brainwashed to go along with it?

Or is it possible that people simply disagree with you in good faith?

 To hear that feature naming has suddenly become an issue sounds like
 bullshit to me.

You cited various MediaWiki elements whose names are/were less than
optimal.  Have you considered that perhaps the Wikimedia Foundation is
attempting to learn from its mistakes and get this one right?

 The worst that happens? A few power-users confuse their terminology.

The feature's deployment likely will be reported in mainstream media,
so its name's impact will extend far beyond the small percentage of
the population that edits the wikis.

 Think I'm wrong? Prove it.

I seek to prove nothing.  I have no crystal ball and cannot predict
whether there will be further delays.  This is irrelevant to your
assertion that the Wikimedia Foundation is erecting pseudo-hurdles
to this end, for which the burden of proof is on you (and for which an
announcement that we really need to have a name fully locked down no
later than Friday, May 28 is not compelling evidence).

David Levy



Re: [Foundation-l] Renaming Flagged Protections

2010-05-21 Thread David Levy
MZMcBride wrote:

 Stop, take a deep breath, and look at the big picture: nobody cares.

 Most users don't edit. Most users who do edit won't care what the feature is
 called. Nobody cares. And I think you're a pretty smart guy who already
 realizes this, so I'm curious why there seems to be deliberate
 smoke-throwing here.

 Please, focus on the important issues and tell whoever is suggesting that
 this is one of them to stop erecting pseudo-hurdles that only further delay
 deployment.

You've made some valid points on this subject, but with all due
respect, you appear to be tilting at windmills in this instance.

The feature's name is a legitimate concern, and I see no attempt to
erect any hurdles.  (On the contrary, Rob unambiguously noted that
time is of the essence.)

David Levy



Re: [Foundation-l] Filtering ourselves is pointless

2010-05-10 Thread David Levy
Mike Godwin wrote:

 The hidden assumption here -- an incorrect assumption, in my view -- is that
 there is some universe of possibilities in which Fox News would not have
 cited Jimbo's *inaction* as validation that it was correct. I infer from
 this comment that you imagine that if Jimbo had not intervened as he did,
 there would be no such story from Fox News.  My response is, if you think
 this, then you don't know Fox News.

No, that isn't my assumption at all.  I'm quite certain that Fox News
would have attempted to spin any response (or lack thereof) in a
manner injurious to the Wikimedia Foundation's reputation.

The key difference is that in the other scenario, it would have
continued to be their word vs. ours (with an opportunity to reach out
to responsible media and explain our position).

Instead, Jimbo has essentially announced to the world that Fox News
was correct.  And until we purge our servers of every graphic image,
we knowingly retain our self-acknowledged state of indecency.

David Levy



Re: [Foundation-l] Filtering ourselves is pointless

2010-05-10 Thread David Levy
 Can you point me to major media entities that have accepted the notion that
 Fox News was correct?

I'm referring to the conclusion that one, in my assessment, would draw
upon encountering Jimbo's remarks first-hand, with or without reading
Fox's subsequent reports on the matter.

David Levy



Re: [Foundation-l] Filtering ourselves is pointless

2010-05-10 Thread David Levy
Mike Godwin wrote:

   Can you point me to major media entities that have accepted the notion
   that Fox News was correct?

  I'm referring to the conclusion that one, in my assessment, would draw
  upon encountering Jimbo's remarks first-hand, with or without reading
  Fox's subsequent reports on the matter.

 Did you draw that conclusion?

No.  I'm not referring to the tiny percentage of Earth's population
possessing a substantial degree of familiarity with the Wikimedia
Foundation and its work.  I'm referring to anyone taking Jimbo's
comments at face value, as I would expect the vast majority of persons
to do.

David Levy



Re: [Foundation-l] Filtering ourselves is pointless

2010-05-10 Thread David Levy
Noein wrote:

 Is anyone here really concerned by Fox News actions? From the beginning
 it seemed to me that what they were barking about were of no impact:
 they would confirm the WMF's opponents in their opinions and obtain an
 indifferent or amused shrug from the rest of the world. Am I wrong?
 Should we really panic as the Board and Mr. Wales did? (ok, not panic,
 but feel the gravity and urgency of the situation?)

What's especially damaging isn't the absurd reporting from Fox News,
but our founder's proclamation that the reports are accurate.

David Levy



Re: [Foundation-l] Filtering ourselves is pointless

2010-05-10 Thread David Levy
Mike Godwin wrote:

 Do you mean the vast majority of persons in Earth's population? I don't
 imagine much of Earth's population is even aware of the story, much less
 Jimmy's actions.

Of course not.  I mean the vast majority of persons encountering
Jimbo's statements.

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
Ting Chen wrote:

 It is certainly possible that Jimmy in doing his work made some false
 decisions. We all know that he does make mistakes. Maybe he didn't
 research the context of a particular image; maybe in some cases his
 criteria were too narrow. One can discuss those on a case-by-case basis.

Errors are understandable, but Jimmy deliberately cast aside the
reasoned views of the community's most trusted users by continually
wheel-warring with a generic deletion summary (an extraordinarily
disrespectful method).  Does this have your full support as well?

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
Ting Chen wrote:

 it depends. Please point to me what you mean so that I can give you my
 opinion on the cases.

http://commons.wikimedia.org/w/index.php?title=Special%3ALog&type=delete&page=File:F%E9licien%20Rops%20-%20Sainte-Th%E9r%E8se.png

http://commons.wikimedia.org/w/index.php?title=Special%3ALog&type=delete&page=File%3AFranz+von+Bayros+016.jpg

http://commons.wikimedia.org/w/index.php?title=Special%3ALog&type=delete&page=File%3AWiki-fisting.png

David Levy



Re: [Foundation-l] Reflections on the recent debates

2010-05-08 Thread David Levy
Sydney Poore wrote:

 The clean up project initiated by Jimmy on Commons has brought much needed
 attention to a long standing problem.

And to Hell with the toes (i.e. valued contributors who retired in
disgust) stepped on along the way?

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
Samuel Klein wrote:

 I don't think this is a technical issue at all.   Considering how
 flexible and reversible wiki-actions are, it seems eminently
 appropriate to me for the project founder to have 'unlimited
 technical power' on the projects -- just as you and all of our
 developers do, at a much higher level.

Deletions are easily reversible.  Multi-wiki image transclusion
removals, distrust in the Wikimedia Commons and resignations from
Wikimedia projects?  Less so.

 I'm not sure anyone has tried to address the role of developers
 through policy ;-)

If a developer were to engage in the type of behavior exhibited by Mr.
Wales, the resultant action would be swift and severe.

I've defended Jimbo in the past and even turned to him for guidance.
Unlike those are are merely angry at him (and in some cases, lashing
out in a nonconstructive manner), I'm truly disheartened.

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
I wrote:

 Unlike those are are merely angry at him (and in some cases, lashing
 out in a nonconstructive manner), I'm truly disheartened.

are are = who are

David Levy



Re: [Foundation-l] Reflections on the recent debates

2010-05-08 Thread David Levy
Mike Godwin wrote:

 All metaphors are at least somewhat misleading, and some metaphors are
 deeply misleading.

At least no one is comparing Jimbo with Nazis or Hitler yet.

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
Tomasz Ganicz wrote:

 Well.. maybe... but bear in mind that it is really hard to discuss the
 pictures you can't see, and commons-delinker bot actions are really
 difficult to revert. On any other project, if you delete something it
 is just a local issue. But deleting a picture on Commons which was
 used on many other projects for years is really hitting all those
 projects, not only Commons.

We're arguing the same thing.  :)


Anthony wrote:

 So fix commons-delinker.  Or shut it off altogether.

1. We don't usually have this problem, as Commons administrators
seldom go on controversial deletion sprees (and when they do, other
administrators aren't powerless to counter their actions).

2. Even without CommonsDelinker, editors at the various projects will
remove broken image transclusions when they discover them.

 OMG.  Red links would indicate to a human that there was a problem which
 needed to be solved.  Then that human could go about solving the problem
 (which very well may involve more than just delinking the image).

What, other than delinking or uploading the missing image locally
(thereby bypassing Commons), do you expect a wiki to do?  And how do
you expect editors who cannot read English (particularly those whose
native languages are among the less widespread) to even understand why
in-use images are being deleted?

 And deletions are easily reversible.

I'll quote myself from earlier in the thread.

Deletions are easily reversible.  Multi-wiki image transclusion
removals, distrust in the Wikimedia Commons and resignations from
Wikimedia projects?  Less so.

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
Anthony wrote:

   OMG.  Red links would indicate to a human that there was a problem which
   needed to be solved.  Then that human could go about solving the problem
   (which very well may involve more than just delinking the image).

  What, other than delinking or uploading the missing image locally
  (thereby bypassing Commons), do you expect a wiki to do?

 Replacing it with a different image,

This assumes that a suitable alternative is readily available (which
likely isn't the case with many of the inappropriately deleted images,
including those that Jimbo wheel-warred over) and still entails
delinking the original image.

 removing the text from the article which refers to the image,

This is a highly undesirable outcome (and example of potential damage).

 contacting someone at Commons to argue for reinstatement of the image...

This is not always feasible (depending on one's native language) and
is futile when Jimbo intends to unilaterally overrule any such
decision.

Additionally, one might erroneously assume that the image was deleted
for a valid reason (e.g. copyright infringement).

 And yeah, uploading the missing image locally (thereby bypassing Commons)
 would be another possibility.

This is another highly undesirable outcome (and example of potential damage).

 All depends on the situation.  But fortunately, the human brain (unlike the
 robot brain) is very flexible in dealing with a multitude of situations.

And the point is that some solutions weaken the Wikimedia Commons
and/or the sister projects that rely upon it.

 Maybe by finding a translator?

Depending on the language, that isn't always an easy task.  And again,
that assumes that a benefit exists and is apparent.

 Alternatively, they could employ one of the possibilities listed above
 which don't involve speaking English at all.

See above.

David Levy



Re: [Foundation-l] Jimbo's Sexual Image Deletions

2010-05-08 Thread David Levy
Anthony wrote:

 Jimbo shouldn't be blamed for the actions of CommonsDelinkerBot.  For those
 particular deletions in which he exercised poor judgment, sure.  For
 wheel-warring over some of those instances, absolutely.  But ultimately, his
 actions (as opposed to the actions which were caused by the maintainer of
 CommonsDelinkerBot), are easily undone, at least from a technical
 standpoint.

As noted, the problem extends beyond the images delinked by the
CommonsDelinker bot.

In fact, I would argue that the bot actually mitigates the damage by
preemptively delinking images via an identifiable account (thereby
making the removals easier to detect and revert).

David Levy



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-12-02 Thread David Levy
Anthony wrote:

 Then my response is quite simple.  Blocking some pedophiles before they can
 cause trouble is better than blocking none of them before they can cause 
 trouble.

And what do you believe is likely to occur when these pedophiles are
blocked before they can cause trouble?

We have no inherent justification to report their IP addresses to
their ISPs or law enforcement agencies, so their accounts are merely
blocked.  And then what do they do?  Leave?  Perhaps.  But the ones
intent on actually using Wikipedia to recruit victims (assuming that
they exist) probably will simply register new accounts (via different
IP addresses, if need be), this time being careful to avoid divulging
any information connecting pedophilia with their new accounts.

 Why does it matter whether or not they identify themselves as pedophiles, if
 you're not allowed to use that information against them?

 If you're suggesting that when we find someone who has self-identified as a
 pedophile (or a different site, as otherwise it would be on-wiki behavior),
 instead of blocking them we secretly monitor their contributions waiting for 
 the
 proper moment to pounce, well, I say good luck with that.  It just isn't 
 going to
 happen.

At Wikipedia (and probably other wikis), users often monitor each
other's edits because of minor (and even downright silly) disputes.
You don't think that anyone would be interested in monitoring a known
pedophile's edits?  You think that if Tyciol hadn't been blocked, no
one would be watching his every on-wiki move?

Then there's the e-mail issue, which we've discussed.

 The secret will quickly get out, and pedophile editors will become less 
 likely to
 disclose their pedophilia, or the monitoring won't be done.

It certainly is possible that an editor exposed as a pedophile will
abandon that account.  How does such an outcome meaningfully differ
from a ban?

 But then, according to you, if you think that any sort of pro-pedophilia 
 editing
 would be tolerated for any length of time, you're mistaken.  So then, 
 according to
 you (I don't agree with this assertion), it *doesn't matter* if pedophiles 
 don't
 disclose their pedophilia.

1. Please cite instances of pro-pedophilia editing that was tolerated
for a substantial period of time.

2. In terms of content edits, it probably doesn't matter very much.  I
don't know about you, but I view the possibility of a pedophile using
a Wikimedia site to contact potential victims as a much scarier
scenario.  I don't believe that it's terribly likely (because many far
easier methods are readily available), but if it were to occur, it
would be a hell of a lot worse than the brief appearance of
pro-pedophilia propaganda in a wiki.

  The earlier blocks pertained to mundane infractions of the sort exhibited by
  countless users.  To lump them together with pedophilia is ludicrous (and
  arguably offensive, as it trivializes pedophilia).

 Countless users are indefinitely blocked?

Accounts are indefinitely blocked extremely often.  But I was
referring to the stark contrast between the edits' nature and
pedophilia.  The only thing remotely extraordinary about Tyciol's
problematic edits was their sheer quantity.

 I wouldn't lump his behavior in with pedophilia, but I would lump it in
 with his behavior of publicly revealing that he is a pedophile (as though
 there's nothing wrong with that).  He did things that he knew would piss
 everybody off, most likely with the explicit intention of pissing them off.
 If I'm wrong about his intention, then he's truly clueless.

He struck me as quite clueless, indeed.

  As I noted, if it were up to me, he probably would have been banned
  before the pedophilia issue came to light.

 For mundane infractions of the sort exhibited by countless users?  How
 are you not contradicting yourself?

As noted above, I mean that the individual infractions were mundane
(and most would not even have been viewed as infractions on their
own).  Their quantity, conversely, was ridiculous (and this resulted
in a great deal of cleanup work for other editors).

But even the combined effect doesn't approach the level of child abuse.

 I presented you with a cyberspace example - a virtual school.  Is it wrong
 to ban pedophiles from interacting with children in a virtual school?  Or
 do you at least agree that *that* makes sense?

I do agree that it makes sense, as that's a situation in which an
identified (and traceable) adult is pointedly in a position of
authority and direct influence over children.

Please pay particular attention to the "identified (and traceable)" part.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-12-01 Thread David Levy
Anthony wrote:

Also, please address my point that banning self-identified
pedophiles from editing merely entourages pedophile editors to
*not* identify themselves as such (thereby increasing the
likelihood that problematic activities will be overlooked).

   If that were true, then we wouldn't have banned any self-identified
   pedophiles, and we won't ever ban another one, and this whole
   conversation is pointless.

  So...you're justifying the status quo by its existence?

 No, you asked me to address your point that [insert untrue statement].
 I addressed it by pointing out that it's not true.

I expressed my opinion that banning all self-identified pedophiles is
ineffectual.  I then misinterpreted your response to mean that if this
were true, we wouldn't be doing it.

Upon further rumination, I now believe that your point was that if
we'd convinced all pedophile editors to not publicly disclose their
pedophilia, there would be no self-identified pedophiles for us to ban
(rendering this discussion moot).  Do I now understand correctly?

I haven't presented such a scenario.  I believe that a natural
consequence of the continual bans is that over time, pedophile editors
will become less likely to disclose their pedophilia.  This means that
_fewer_ pedophiles will do so, not that none will (and not that the
full effect is instantaneous).

[Incidentally, "entourages" was a typo for "encourages".]

 Both his pedophilia (or claims of pedophilia) and his behavior which led
 to his first indefinite block are results of his character.  I see no
 reason to wait for the latter when we already know the former.

The earlier blocks pertained to mundane infractions of the sort
exhibited by countless users. To lump them together with pedophilia is
ludicrous (and arguably offensive, as it trivializes pedophilia).

And again, those issues had _nothing_ to do with the ban (which would
have occurred even if the user had a spotless editing record).  We're
discussing the appropriateness of the ban rationale, *not* whether
this particular editor was an asset to the community.  (As I noted, if
it were up to me, he probably would have been banned before the
pedophilia issue came to light.)

  To what pro-pedophilia editing are you referring?

 The part in your comment (thereby increasing the likelihood that
 problematic activities will be overlooked)

How does that pertain to the editor in question?

  To be clear, you're stating that while you would regard such an act as
  inappropriate, you believe that the Hindi Wikipedia currently is
  empowered by the Wikimedia Foundation to ban Pakistani editors if it
  so chooses.  Correct?

 I'm not completely familiar with the rules of the Hindi Wikipedia, but I
 assume they have a method of banning which doesn't involve petitioning
 the Wikimedia Foundation, so yes.

I don't know whether you're correct or incorrect, but the latter is my
sincere hope.

 We've come to a consensus, as a community, that some sort of ArbCom
 rulings are to be followed.

1. Please don't quote me out of context.  The above implies that I
seek to belittle the ArbCom, and that isn't so.  I referred to "some
sort of ArbCom ruling" because the precise nature of the decision is
unclear.  Contextually, I was stating that the ruling should *not* be
ignored.

2. The ArbCom is a consensus-backed body, but its power is far from
limitless, and there absolutely is no consensus that its actions
should never be questioned.

I seek to determine the nature and basis of the policy that the ArbCom
is purported to have instituted.  Only then would it be appropriate
for the community to evaluate whether the committee acted within its
authority.

There is no assertion that the ArbCom's rulings should not be followed.

 So you're willing to engage in actions which you believe to be
 unconscionable if they are effective?

No.  I don't view this as a black-and-white issue, and I believe that
a grey solution is vastly preferable to the realistic alternative.

 No, I don't see it as a quibble.  I'm willing to modify my statement.
 I agree that we shouldn't judge him/her as one judges someone on trial.
 Rather, we should judge him/her as one judges someone applying for a
 volunteer job, collaborating with children, creating an encyclopedia.
 Do you agree or disagree with that?

I regard the statement as overly broad; it is applicable to both
physical space and cyberspace, and I believe that some procedures
effective in the former are impractical in the latter.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-30 Thread David Levy
Gerard Meijssen wrote:

 Hoi,
 When a group of people are to come up with a communal opinion, particularly
 when this opinion is intended in order to judge a situation, a behaviour,
 you can no longer dismiss this formed group opinion as just personal and
 dismiss it as such. Obviously you can, because you do, but in this way you
 undermine the integrity of the process whereby an arbitration committee
 comes to its conclusions.

To what are you referring?  I'm not sure that we're on the same page.

If you thought that I was challenging the ArbCom's very existence, you
misunderstood.  Obviously, I have some serious concerns regarding the
procedures that have/haven't been followed, but I fully recognize the
ArbCom's importance.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-30 Thread David Levy
 I think it is germane, because it means the choice we have is to ban a
 pedophile from the start, before s/he gets a chance to cause any damage,
 or to wait far too long to ban the pedophile, after much damage has
 already been done.  If the banning process were much simpler, efficient,
 and effective, and the ability of damage to become widely disseminated
 minimalized, we could better afford to give people enough rope to hang
 themselves with.

To be extra-safe, let's just ban everyone before they can do any damage.

We're too patient with edit warriors and the like, but if you think
that any sort of pro-pedophilia editing would be tolerated for any
length of time, you're mistaken.

Also, please address my point that banning self-identified pedophiles
from editing merely entourages pedophile editors to *not* identify
themselves as such (thereby increasing the likelihood that problematic
activities will be overlooked).

  My point is that *all* editors should be accepted/rejected on the same
  terms, with no regard for our personal opinions of them.

 But clearly you draw a distinction between what is merely our personal
 opinions of them and what is a legitimate concern which affects our
 ability to accomplish our goals.  I don't think this is the distinction
 on which we disagree.  Rather, I think it's more your other belief that
 the idea of preemptively banning individuals on the basis that they
 _might_ engage in misconduct is unconscionable.

 The Wikimedia Foundation has granted the community the right to decide
 who can and who cannot access its servers.  That means we have the right
 to ban anyone, for any reason.

1. Suppose that the Hindi Wikipedia voted to ban Pakistani editors.
Would that be acceptable?  (Note that I'm not remotely equating the
exclusion of pedophiles with the exclusion of a nationality; I'm
addressing your claim that we have the right to ban anyone, for any
reason.)

2. Please direct me to the discussion(s) in which consensus was
reached to ban all known pedophiles from editing.

 And ultimately, the basis that someone _might_ engage in misconduct in
 the future is the *only* proper basis on which to ban someone.

And we base this assessment on past behavior, not hunches.  Except,
evidently, with pedophiles.

  However, I will acknowledge that the idea of disabling the e-mail
  function crossed my mind and might be a valid consideration.

 Why is that not unconscionable?

Pragmatically, it strikes me as a realistic compromise.  There
obviously are many users who oppose editing by pedophiles, and I
believe that this would address one of the main issues (the
facilitation of private communication, potentially with children).

I don't believe that pedophiles are likely to seek out victims via a
wiki (as there are numerous online and offline fora that are far
better for making contact with people in a particular geographic
area), but I understand why the concern exists.

  I've addressed the PR issue elsewhere in the thread.

 Not properly.

You're welcome to respond to those posts.  Otherwise, we can simply
agree to disagree.

  I'm even more puzzled by the suggestion that they be barred from
  editing articles related to pedophilia.  Provided that their edits
  don't reflect a pro-pedophilia bias, what's the problem?

 The problem is that they've admitted to an inability to think rationally
 about the topic.

I certainly agree that pedophiles possess highly abnormal ideas on the
subject, but that doesn't preclude them from contributing to relevant
encyclopedia articles in a rational manner.  By all accounts that I've
seen, the editor in question did precisely that.

Pedophilia-related articles are heavily monitored, and inappropriate
edits (by pedophiles or anyone else) are reverted more promptly than
in the majority of articles.

   I do think some people's opposition is tantamount to approval of
   pedophilia, but not everyone's.

  Whose is?  Is mine?

 I don't know.  Is it?

No, I condemn pedophilia.  I'm just curious as to the basis of your above claim.

 What personal opinions should we set aside?

Our condemnation of pedophiles (not pedophilia, mind you, as we
certainly mustn't allow any such activities to be promoted on our
wikis).

 I agree that we shouldn't judge him/her as one judges someone on trial.
 Rather, we should judge him/her as one judges someone applying for a
 volunteer job, working hand in hand with children, creating an
 encyclopedia.

The "hand in hand with children" wording seems to conflate physical
space with cyberspace.  Please see my relevant reply to George William
Herbert.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-30 Thread David Levy
George William Herbert wrote:

 There's a known and anecdotally (but not known to be statistically)
 significant trend of pedophiles attracting victims online.

 Also, apparently, of them coordinating amongst themselves to pass tips
 about possible victims in specific areas.

I'm well aware.  In fact, I produced a children's video about Internet
predators for my county's public library and elementary schools.

My point is that some measures commonly taken in physical space are
ineffective (and possibly even detrimental) in cyberspace.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-30 Thread David Levy
Anthony wrote:

 Right, because the only two possible solutions are to ban everyone and
 to ban no one.

Obviously not.  Likewise, we have more possible outcomes than banning
all known pedophiles and banning no known pedophiles.

  Also, please address my point that banning self-identified pedophiles
  from editing merely entourages pedophile editors to *not* identify
  themselves as such (thereby increasing the likelihood that problematic
  activities will be overlooked).

 If that were true, then we wouldn't have banned any self-identified
 pedophiles, and we won't ever ban another one, and this whole
 conversation is pointless.

So...you're justifying the status quo by its existence?

 Pedophile editors already have plenty of reasons to not identify
 themselves as such.

Agreed, and I doubt that many do.

 I don't think they're going to change just because what they're doing is
 bannable.  In fact, I think in the vast majority of cases these editors
 fully expect to eventually be banned

Including the ones that make no on-wiki mentions of their pedophilia?

 - the only question is how much we put up with before banning them.

How much non-wiki-related behavior we put up with?

 And looking at the edits of whatshisname, I think he fits in that
 category.  I'll bring it up again - this wasn't his first block, or even
 his first indefinite block.

And I'll once again note that his past blocks had nothing to do with
pedophilia and have no connection to this ban.

There is no dispute that the editor caused considerable on-wiki
disruption via the continual creation of numerous inappropriate
redirects and disambiguation pages, and if it had been up to me, he
probably would have been banned back then.  But he wasn't, and none of
that is remotely relevant to the matter at hand (the rationale behind
this ban and others like it).

 And according to Ryan, and I assume the Arb Com checked this, he's been
 banned from quite a few other sites for the way he talks about his
 pedophilia (including LiveJournal for creating the 'childlove
 community').

There is no assertion that he engaged in any comparable conduct at Wikipedia.

 Had he not been blocked indefinitely by Ryan for being a pedophile, I'm
 fairly certain he would have been blocked by someone else for some other
 reason, possibly related to pedophilia, possibly not.

I wouldn't have been a bit surprised.

 And you yourself claimed, in the paragraph directly prior to this one,
 that if you think that any sort of pro-pedophilia editing would be
 tolerated for any length of time, you're mistaken.  So which is it?

To what pro-pedophilia editing are you referring?  The editor in
question apparently has engaged in none at Wikimedia wikis (the
context of my statement), and as soon as an administrator discovered
that he engaged in it elsewhere, he was banned.

 I see a difference between whether or not you have the right to do
 something, and whether or not it's in your best interest to do so.  When
 you asked me whether or not banning Pakistani editors is acceptable, I
 answered based on the latter.  Had you asked whether or not the Hindi
 Wikipedia had a right to ban Pakistani editors, well, I would have
 responded with sure, they have that right, until the WMF decides to take
 it away from them.

To be clear, you're stating that while you would regard such an act as
inappropriate, you believe that the Hindi Wikipedia currently is
empowered by the Wikimedia Foundation to ban Pakistani editors if it
so chooses.  Correct?

  Please direct me to the discussion(s) in which consensus was reached to
  ban all known pedophiles from editing.

 We're having one of them right now, I guess.  And so far, no one has been
 bold enough to try to overturn the de facto ban.

 Try unblocking one of the blocked pedophiles, if you'd like.

We have more than a de facto ban; we have some sort of ArbCom ruling.
You and I both know that it's foolhardy, futile and disruptive to defy
such a decision.

 No, pedophiles are assessed based on their past behavior as well.  If
 whatshisname didn't post all over the Internet bragging about being a
 pedophile, he never would have been banned (for being a pedophile,
 anyway).

 Perhaps by "behavior" you mean "on-wiki behavior."  But handcuffing
 yourself with that rule is pointless.  It treats Wikipedia like a game,
 and only promotes trolling.

I don't mean "on-wiki behavior," but I do mean behavior directly
related to the wikis.  If, for example, someone states on a message
board that he/she intends to vandalise Wikipedia, I'm not suggesting
that this should be ignored on the technicality that it was posted
off-wiki.  Likewise, if a pedophile conveys an intention to use
Wikipedia as a venue for contacting potential victims, ban away.

You (and others) take this a step further by assuming that
self-identified pedophiles will commit on-wiki misconduct (or that the
likelihood is high enough to treat it as a certainty) and that
permanently blocking their 

Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-30 Thread David Levy
George William Herbert wrote:

 Wikipedia's strong culture of pseudonymity and anonymity makes protecting
 anyone, or detecting anyone, a nearly lost cause if they have any clue
 and sense of privacy.  Unlike real life, we can't make guarantees with
 anything approaching a straight face.

 However - there's a difference between being unable to effectively screen
 people by real world standards, and not having a policy of acting when we
 do detect something.  One is acknowledging cultural and technical
 reality - because of who and where we are, we couldn't possibly do better
 than random luck at finding these people.  The other is disregarding any
 responsibility as a site and community to protect our younger members and
 our community from harm, if we find out via whatever means.

 Witch hunts looking for people don't seem helpful or productive to me.
 But if they out themselves somewhere else and are noticed here, then
 we're aware and on notice.  The question is, entirely, what do we do
 then.

 Do we owe the underaged users a duty to protect them from known threats?

In my view, we're doing nothing of the sort (and constructing a false
sense of security by claiming otherwise).

I doubt that many pedophiles will seek to recruit victims via our
wikis, but if this occurs, these account bans are highly unlikely to
counter it to any significant extent.

 Do we owe the project as a whole a duty to protect it from disgrace by
 association?

I see the potential for negative publicity stemming from the
perception that we seek to create the illusion of improved safety and
integrity.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
Fred Bauder wrote:

 An appeal is not futile. For one thing the policy might be changed or it
 might be decided the policy which exists does not apply in this case.

Again, I wish to read this policy.  Where is it published?  And how
was it established?  Did the ArbCom itself author it?

 If a close examination of his editing record shows no activist activity,
 it might be considered unfair to do external research which established
 his identity.

There has been no _assertion_ of activist activity on the wiki.  Shall
we go ahead and block everyone pending close examination of their
editing records?

 But then, if Ryan could do it, anyone, including an investigative
 journalist could have done it.

Yes, and the same applies to murderers, rapists and neo-Nazis in our
midst.  This is not a slippery slope argument (a contention that
we'll be banning those editors next).  I'm asking how it would be
worse for an investigative journalist to discover that a pedophile is
editing than to discover that a murderer, rapist or neo-Nazi is
editing.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
I wrote:

  Again, I wish to read this policy.  Where is it published?  And how was
  it established?  Did the ArbCom itself author it?

Fred Bauder replied:

 It was authored by the Arbitration Committee and posted on the
 Administrators' Noticeboard several years ago.

Please provide a link.

 Basically it says don't discuss issues regarding pedophilia activists
 on-wiki; send everything to the Arbitration Committee.

Does it also say that known pedophiles are to be banned on sight
(irrespective of their on-wiki activities)?

 This is coupled with a policy of hearing ban appeals privately.

Such a procedural policy falls within the ArbCom's authority.  An
outright ban on editing by known pedophiles does not.  To your
knowledge, has the latter been instituted?

   If a close examination of his editing record shows no activist
   activity, it might be considered unfair to do external research which
   established his identity.

  There has been no _assertion_ of activist activity on the wiki.  Shall
  we go ahead and block everyone pending close examination of their
  editing records?

 No, we assume good faith.

Except with pedophiles?  You just suggested that the ArbCom conduct an
investigation to rule out a behavior that has not been alleged.

 Well, if Charlie Manson has internet access and is editing, we don't know
 it.  Murderers and rapists, and I'm sure we have a few editing, don't
 usually advocate for the practice. Neo-Nazis are frequently banned for
 disruptive editing as are many other aggressive POV pushers.

Right, and there is no dispute that aggressive POV pushers should be
banned.  Whether that POV is pro-pedophilia, pro-Nazism,
pro-mainstream political position, or pro-anything (or anti-anything,
for that matter), such conduct is unacceptable.

We're discussing the practice of banning pedophiles who have *not*
engaged in such on-wiki behavior.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
Anthony wrote:

 The subject is "Pedophilia and the Non discrimination policy".

Obviously, the discussion's scope has expanded.

 I've opted to participate to dispel the notion, suggested by you, that a
 perfectly productive editor was blocked simply because the editor
 happened to be a pedophile.

I never claimed that the editor in question was "perfectly productive"
(and I noted that he created numerous inappropriate redirects and
disambiguation pages), but it certainly appears that he was blocked
for being a pedophile (and not because of any disruptive editing,
apart from the belief that editing by a known pedophile is inherently
disruptive).

 This is a hypothetical which I don't believe will ever arise in reality,

What is?

 and certainly not often enough that there is a harm in simply blocking
 pedophiles on sight.

Are you suggesting that we needn't even address a contention of unjust
editing bans, provided that the number of affected individuals is low?

 Jesse mentioned the idea that paedophiles are inherently evil and can do
 no good.  I'm not saying that, but I do find the idea that someone who
 openly admits to being a pedophile (*) could also be a good encyclopedia
 editor, to be a bit far-fetched.

 (*) Which implies that they don't think there's anything wrong with being
 a pedophile.

I reject the premise that someone who "openly admits to being a
pedophile" inherently "[doesn't] think there's anything wrong with
[that]."  This accurately describes some, of course, but I don't
regard any of this as relevant.  We routinely ban editors who
habitually cause disruption (irrespective of our prior knowledge of
them), and I see no need to formulate blanket assumptions that
particular societal classes cannot be productive contributors.

 I don't expect to convince anyone of this.  In fact, I suspect a number
 of Wikipedians on this very mailing list would take the pedophiles' side
 on the issue of whether or not there's anything wrong with that.

Wow, that's entirely uncalled-for.  It's disheartening that you would
equate opposition to an outright ban on editing by known pedophiles
with approval of pedophilia.

  Yes, and the same applies to murderers, rapists and neo-Nazis in our
  midst.  This is not a slippery slope argument (a contention that
  we'll be banning those editors next).  I'm asking how it would be
  worse for an investigative journalist to discover that a pedophile is
  editing than to discover that a murderer, rapist or neo-Nazi is
  editing.

 Of those three, I think neo-Nazi is the most closely analogous, and I
 don't see why they should be allowed in Wikipedia either.

Then perhaps this is a slippery slope, after all.

I'm Jewish, and I would unreservedly oppose any attempt to prohibit
neo-Nazis from editing (in accordance with the same rules to which we
hold other contributors).

 As for murderer and rapist, I'm not quite sure what you're getting
 at.

Fred referred to the negative publicity that could arise if an
investigative journalist were to determine that a pedophile is editing
Wikipedia (or another Wikimedia wiki, I presume).  Most societies
condemn murder and rape with comparable vehemence.

 If someone was convicted of murder 20 years ago and they are now out
 having served their time, I don't think this sets a precedent that we can
 ban them.  On the other hand, if someone posts to message boards bragging
 about how they like to rape people, but that rape is legal in their
 country, I don't see any problem with banning them from Wikipedia.

I do.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
Anthony wrote:

   This is a hypothetical which I don't believe will ever arise in
   reality,

  What is?

 A perfectly productive pedophile editor.

What do you mean by "perfectly productive"?  We don't ban editors for
being less than perfect in their contributions.

Are you suggesting that it's unlikely that a pedophile could edit with
the degree of productivity that we ordinarily demand of editors
in good standing?

   and certainly not often enough that there is a harm in simply
   blocking pedophiles on sight.

  Are you suggesting that we needn't even address a contention of unjust
  editing bans, provided that the number of affected individuals is low?

 I don't see anything unjust about treating someone differently because
 they're a pedophile.

Okay, so your position is not that the degree of collateral damage
(the banning of pedophiles who are productive editors) would be
negligible, but that this is irrelevant (because all pedophiles
deserve to be banned from editing, regardless of how they conduct
themselves).  Correct?

   Jesse mentioned the idea that paedophiles are inherently evil and
   can do no good.  I'm not saying that, but I do find the idea that
   someone who openly admits to being a pedophile (*) could also be a
   good encyclopedia editor, to be a bit far-fetched.
  
   (*) Which implies that they don't think there's anything wrong with
   being a pedophile.

  I reject the premise that someone who openly admits to being a
  pedophile inherently [doesn't] think there's anything wrong with
  [that].

 Perhaps you're taking me out of context, then.  In the case in point (and
 in fact I believe all the cases where pedophiles were blocked), the
 person was caught effectively bragging about being a pedophile, and it's
 hard to see how one would get caught without essentially doing just that.

I'm not referring to any particular case(s).  "Openly [admitting] to
being a pedophile" could apply to the public statement "I struggle
with a condition called pedophilia, for which I receive therapy."  And
yes, it also could apply to boasting.  As I said, I don't view the
distinction as relevant to the matter at hand.

 Pedophiles are not particular societal classes, and it's ridiculous
 that you'd regard them as such.

Class: "a number of persons or things regarded as forming a group by
reason of common attributes, characteristics, qualities, or traits"

http://dictionary.reference.com/browse/class

   I don't expect to convince anyone of this.  In fact, I suspect a
   number of Wikipedians on this very mailing list would take the
   pedophiles' side on the issue of whether or not there's anything
   wrong with that.

  Wow, that's entirely uncalled-for.  It's disheartening that you would
  equate opposition to an outright ban on editing by known pedophiles
  with approval of pedophilia.

 I don't.

You just conveyed your suspicion that a number of Wikipedians on this
very mailing list condone pedophilia.

 What I equate with a lack of willingness to judge pedophiles as wrong
 is when someone refers to such a ban with a comment that "We should not
 judge people by what their opinions are, however appalling we may find
 them."

Then you've completely missed the point.  What part of someone finding
an opinion appalling do you associate with the absence of
disapproval?



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
I wrote:

  Are you suggesting that it's unlikely that a pedophile could edit with
  the degree of productivity that we ordinarily demand of editors
  in good standing?

Anthony replied:

 No.  I am saying that the ordinary demands are far, far too low, though.

Please elaborate.

  Okay, so your position is not that the degree of collateral damage
  (the banning of pedophiles who are productive editors) would be
  negligible, but that this is irrelevant (because all pedophiles
  deserve to be banned from editing, regardless of how they conduct
  themselves).  Correct?

 No.  I *am* saying that a degree of collateral damage (the banning of
 pedophiles who are productive editors) would be negligible, and
 acceptable.  But I'm also saying that this has nothing whatsoever to
 do with justice.

To clarify, assuming that the aforementioned collateral damage can be
prevented, do you believe that pedophiles who are productive editors
should be permitted to edit?

Regardless, please explain why we shouldn't simply ban
unproductive/disruptive editors (irrespective of whether we know them
to be pedophiles).

  "Openly [admitting] to being a pedophile" could apply to the public
  statement "I struggle with a condition called pedophilia, for which I
  receive therapy."

 Yes, if you ignore the context in which I said it, of which my footnote
 was part.

That context simply wasn't stated.  But okay, I accept that you were
referring to a situation in which someone boasts about his/her
pedophilia.  What, in your assessment, is the proper course of action
when it's discovered that an editor has made a public statement along
the lines of the above example?

I'll reiterate that I don't view the distinction as relevant, but I'm
curious as to what you think.

  You just conveyed your suspicion that a number of Wikipedians on this
  very mailing list condone pedophilia.

 Yes.  And it's more than just a suspicion.  Many Wikipedians on this
 mailing list have said things which have brought me to this conclusion,
 both on and off the list.  I could start naming names, but that'd probably
 get me into trouble.

Then it probably is best that you simply remain silent on the issue
(rather than implying that opposition to your stance reflects approval
of pedophilia).

  What part of someone finding an opinion appalling do you associate
  with the absence of disapproval?

 The part about not judging them,

That means that it isn't our place to pass judgement (in the judicial
sense), _in spite of_ how we personally view one's actions.  It does
*not* mean that we lack such a personal view.

 and the referring to pedophilia as an opinion.

An appalling opinion.  Construing this as a lack of disapproval is rubbish.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
George William Herbert wrote:

 It is not clear that anyone has raised any issues which are appropriate
 or necessary for the Foundation to deal with.

If the English Wikipedia's Arbitration Committee has created a policy
prohibiting editing by all known pedophiles, I believe that it has
overstepped its bounds.  (I say "if" because I have not yet received a
response to my question of whether the ArbCom has enacted such a
policy.)  Such an issue is a Foundation-level matter.

 If you have a specific claim that the Foundation has to or should
 intervene please state that, simply and concisely.

I believe that the Foundation should intervene, at least to the extent
of questioning the ArbCom and fully ascertaining the nature of this
very murky situation.  (I realize that there might be sensitive
details to which the general community should not be privy.)



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
Beth wrote:

 If we allow self-identified pedophiles to edit our projects, particularly
 those who insist on proclaiming this proclivity on-wiki, we are
 permitting, even facilitating, pedophile advocacy.

What about those who do *not* issue such proclamations on-wiki?



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-29 Thread David Levy
 Bad editors are often allowed to edit for years before they finally get
 indefinitely banned.  I'm not getting into specific details; that's far
 outside the scope of this thread.  Even this comment is pushing it.

I agree that we often wait far too long to ban disruptive editors, and
I also agree that this is not germane to the discussion.

 Seriously, if we could shut off their ability to use
 [[Special:EmailUser]], and then have someone examine their every
 contribution with a fine-toothed comb, and then ban them at the first
 sight of anything approaching pedophile advocacy, maybe.  But that's a
 lot of work for very little benefit.  Better to just ban them
 categorically.

Why not simply ban those whose edits reflect advocacy (of any kind)?
To me, the idea of preemptively banning individuals on the basis that
they _might_ engage in misconduct is unconscionable.

And I can think of many groups more likely than pedophiles to perform
edits reflecting advocacy.  But these people aren't near-universally
abhorred, so we wouldn't think of barring their participation.  My
point is that *all* editors should be accepted/rejected on the same
terms, with no regard for our personal opinions of them.

However, I will acknowledge that the idea of disabling the e-mail
function crossed my mind and might be a valid consideration.

 There's also the issue of negative publicity.

I've addressed the PR issue elsewhere in the thread.

  What, in your assessment, is the proper course of action when it's
  discovered that an editor has made a public statement along the lines
  of the above example?

 I don't know.  I certainly wouldn't complain if they were banned anyway.
 But maybe they could be given some sort of supervised editing permission,
 to edit topics wholly unrelated to children and pedophilia, of course.

Are you suggesting that editing topics "related to children" (a vague
description) somehow enables pedophiles to access children?

I'm even more puzzled by the suggestion that they be barred from
editing articles related to pedophilia.  Provided that their edits
don't reflect a pro-pedophilia bias, what's the problem?

  Then it probably is best that you simply remain silent on the issue
  (rather than implying that opposition to your stance reflects approval
  of pedophilia).

 I never made that implication, though.

There was no other reason to mention such a thing.  And besides, you
come right out and say it in the next sentence...

 I do think some people's opposition is tantamount to approval of
 pedophilia, but not everyone's.

Whose is?  Is mine?

    What part of someone finding an opinion appalling do you
    associate with the absence of disapproval?

   The part about not judging them,

  That means that it isn't our place to pass judgement (in the judicial
  sense), _in spite of_ how we personally view one's actions.  It does
  *not* mean that we lack such a personal view.

 In the judicial sense?  As in court of law type stuff?  I don't think
 that's what was meant.

Not an actual court of law, but the Wikimedia equivalent (in this
instance, a committee convening to deliberate and render a verdict).
Obviously, the determination that someone has done something
appalling is a personal judgement, so it can't refer to that.  It
means that we set aside our personal opinions and decline to judge
him/her as one judges someone on trial.

 If you take "appalling" to imply judgment as (morally) wrong, then
 "appalling opinion" is a contradiction in terms.  The word "opinion"
 means there is no right or wrong choice.  If I had to try to parse
 "appalling opinion", I'd guess it means something which isn't right
 or wrong, but which I personally find distasteful.

Appalling: causing dismay or horror
http://dictionary.reference.com/browse/appalling

Opinion: a personal view, attitude, or appraisal
http://dictionary.reference.com/browse/opinion

Appalling opinion: a personal view, attitude, or appraisal causing
dismay or horror

Contextual application: "Person X's attitude regarding pedophilia
causes me dismay and horror, but I don't regard this as a valid reason
to consider barring his/her participation in the project."



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-28 Thread David Levy
Andre Engels wrote:

  If [allowing self-identified pedophiles to edit] brings the project in
  disrepute, then so be it.

Fred Bauder replied:

 It is our responsibility to avoid harm to the project.

By that logic, we ought to disallow public editing altogether.  After
all, wikis (and Wikipedia in particular) are widely criticised because
the ability of anyone to edit sometimes leads to inaccuracies and
other undesirable content.

But of course, we mustn't do that (despite the fact that it would
rectify a flaw that leads to disrepute), because it would
fundamentally alter the wikis' nature in an unacceptable manner.

This is the risk that we run when we begin banning editors because we
dislike beliefs and behaviors unrelated to their participation in the
wikis.  We might avoid some negative attention that would accompany
their involvement, but what sort of project are we left with?
Certainly not the sort that I signed up for (and not one that will
engender positive publicity as the open community that it's purported
to be).



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-28 Thread David Levy
Bod Notbod wrote:

 Let's just have Paedo-Wiki and be done with it.

 We have wikis for over 200 languages. It would be wrong not to allow
 paedos to express themselves.

I recognize your sarcasm, but not your point.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-28 Thread David Levy
Bod Notbod wrote:

 Well, I guess I just don't know where this conversation is going.

 A paedophile might know a lot about the Spanish Civil War and could
 usefully add stuff.

 A murderer might know a lot about Pokemon.

 A rapist might know a lot about physics.

 It's not like we're going to know the personality involved, so surely
 we just have to accept that editors come in all shapes and sizes and
 let them get on with it.

I agree.  When users edit the wikis to reflect
pro-pedophilia/pro-murder/pro-rape/pro-anything (or anti-anything)
agendas, that's when it's appropriate to act (regardless of whether
they've provided advance indication that such an issue might arise).

There's a world of difference between the block rationale "you edited
badly" and the block rationale "you didn't edit badly, but you're a
bad person."  We stand to draw more negative attention to ourselves by
deeming certain people "bad" than by allowing said users to edit under
the same rules as everyone else.



Re: [Foundation-l] Pedophilia and the Non discrimination policy

2009-11-28 Thread David Levy
George William Herbert wrote:

 We have one single class of editors who, as a class, for
 non-wiki-behavioral reasons, we ban.  This class's participation is
 problematic both for our other users' safety and for Wikipedia's
 reputation and integrity of content.

"Integrity of content"?  Please elaborate.

 There is no slippery slope.

I haven't argued otherwise.

 Pedophiles have a near unity risk of reoffending.  Even the ones who
 say they have never abused anyone and never intend to, according to
 surveys and psychologists, essentially always do.

And banning self-identified pedophiles increases our users'
safety...how?  Is it remotely realistic to assume that most pedophiles
will publicly identify themselves as such (and never seek to register
another account)?  Of course not, and we're only encouraging them to
keep quiet (thereby increasing the likelihood that any improper
actions will go undetected).

It's clear that this is a PR issue, and there is validity to the
assertion that allowing known pedophiles to edit would generate
negative publicity.  But would it generate more negative publicity
than the alternative (banning good editors and driving pedophiles
underground)?  I'm skeptical.  And either way, I believe that such a
practice contradicts the fundamental principles on which our community
is based.

 There is a reason they are, after conviction (in the US) not allowed
 anywhere near children in organized settings.

 Wikipedia is a large organized setting, with children present as
 editors.  We owe them a duty to not let known pedophiles near them.
 We can't guarantee that unknown ones aren't out there - but if we do
 become aware, we must act.

You're making the mistake of equating physical space to cyberspace.
In physical space, pedophiles are identifiable and traceable.  We're
dealing with an anonymous setting.  A known pedophile is less of a
threat than an unknown one, and banning the former only creates
incentive to remain the latter (which is as simple as not saying "I'm
a pedophile").  Our child editors are no safer.

 We must also, to continue to be taken seriously by society at large, not
 allow ourselves to be a venue for their participation.  Being known as
 pedophile-friendly leads to societal and press condemnation and
 governmental action, all of which would wreck the project.

I've addressed the PR issue, and I'd be very interested to read about
the governmental action.

 I understand that some do not agree.  But the reasons for this policy
 are well founded.

I also wish to read the policy.  Where is it published?



  1   2   >