Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-11 Thread David Goodman
The key distinction is that a method for getting a list of files in a
category is a good thing for many purposes, and is morally
neutral. The ethical questions depend on what other people do with the
list; like all intellectual work, it can be used for ends any
person might think desirable or undesirable. To use it for compiling a
better guide to content than we can produce ourselves, for example, would
be a very good end. We might want to restrict people from using it for
a bad end, as some have suggested, by altering our terms of service,
but that would be opposed to our being a free resource in the
expansive sense of free that Commons is, and would contradict our
licensing.

The opportunity for an individual to select what images to see,
another proposal, is also neutral.

On the other hand, adding descriptors for levels of suitability, set by
or corresponding to those set by outside sources, cannot be used for a
good end, but only for the bad end of censorship. And it contradicts
our basic policy that we do not draw conclusions about suitability or
truth or other values. There is no reason to actively work to gain a
capacity that can only be used by those who oppose the basic values of
free culture. It's as if we added a capacity to the software to charge
for viewing an article--and that's not even intrinsically wrong, but
it's not a suitable purpose for free software.



David Goodman, Ph.D, M.L.S.
http://en.wikipedia.org/wiki/User_talk:DGG



On Mon, May 10, 2010 at 7:41 PM, K. Peachey p858sn...@yahoo.com.au wrote:
 I've read most of the replies in this thread, and I think I should
 point out a few things:

 * The "omg, tagging for any reason is censorship" mentality is
 needless. Yes, we tag things presently (*shock horror*): look at the
 current category system.

 * "Omg, adding this to MediaWiki will destroy Wikipedia": Currently
 MediaWiki is a separate application from the wikis and always will be;
 Wikipedia is just a site that uses MediaWiki for its back end. Just
 because MediaWiki supports something, doesn't mean it will be
 activated (or, in the case of extensions, installed) on Wikipedia, and
 Wikipedia isn't the only site that uses MediaWiki.
  ** Currently there are two discussions about possible implementation:
     A) Bug 982 refers to an EXTENSION that provides the
 functionality of tagging content (with its current discussion being
 pointed at ICRA) so that content has a rating of sorts which can be
 used by external sources (i.e. filtering companies).
     B) The discussion on wikitech-l concerns a way
 (either an extension or core functionality) to accurately grab the
 contents of a category and provide it in a usable interface so that,
 again, it can be used by third parties. So far this discussion has
 hardly touched the rating-system question (how would it be done?
 our own internal scale? some existing standard?).

 * The lesser of two evils: currently there is no easy way to get a
 list of the files (and their file paths) of images contained within a
 category. Such a list could be applied to multiple things (bots, for
 example), but here the discussion is primarily about an exportable
 format that a machine can easily use. Schools and filter providers are
 currently blocking whole W* projects because there are no easy ways to
 do this. Unfortunately, the lesser of the two evils is to allow an easy
 way for a company to get a list of what is contained in a certain
 category (e.g. Category:Images of BDSM) and import that into their
 filtering system to block it, rather than blocking the whole project.


 JUST A REMINDER: The current discussions revolve around
 implementations in MediaWiki, either as core functionality (like
 most such features, able to be enabled or disabled) or as an extension,
 and not around how to deploy this on the WMF family of projects such
 as, for example, Commons.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-10 Thread teun spaans
Dear Derk-jan,

As for 1), I think YouTube can be compared in popularity and size with
Wikipedia, and in videos it surpasses Commons.
YouTube enables its visitors to tag videos as adult.
See for example:
http://www.youtube.com/watch?v=ZA22WSVlCZ4

kind regards,
Teun Spaans



On Sun, May 9, 2010 at 3:24 PM, Derk-Jan Hartman d.j.hart...@gmail.com wrote:

 This message is CC'ed to other people who might wish to comment on this
 potential approach
 ---

 Dear reader at FOSI,

 As a member of the Wikipedia community and the community that develops the
 software on which Wikipedia runs, I come to you with a few questions.
 Over the past years Wikipedia has become more and more popular and
 omnipresent. This has led to enormous problems, because for the first time,
 a largely uncensored system has to work within the boundaries of a world that
 is largely censored. For libraries and schools this means that they want to
 provide Wikipedia and its related projects to their readers, but are
 presented with the problem of what some people might consider information
 that is not child-safe. They have several options in that case: either
 blocking completely, or using context-aware filtering software that may make
 mistakes, which can cost some of these institutions their funding.

 Similar problems are starting to present themselves in countries around the
 world; consider the differing views about sexuality between northern and
 southern Europe, for instance. Add to that the censoring of images of
 Muhammad, Tiananmen Square, the Nazi swastika, and a host of other problems.
 Recently there has been concern that all this all-out censoring of content
 by parties around the world is damaging the education mission of the
 Wikipedia-related projects, because so many people are not able to access
 large portions of our content due to a small (think 0.01%) part of our
 other content.

 This has led some people to infer that perhaps it is time to rate the
 content of Wikipedia ourselves, in order to facilitate external censoring of
 material, hopefully making the rest of our content more accessible.
 According to statements around the web, ICRA ratings are probably the
 ratings most widely supported by filtering systems. Thus we were thinking of
 adding autogenerated ICRA RDF tags to each individual page, describing the
 rating of the page and of the images contained within it. I have a few
 questions however, both general and technical.
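
 For concreteness, here is a rough sketch of the per-page generation step
 this would imply. It is an illustration only: the namespace URI and the
 term names below are invented placeholders, not actual ICRA vocabulary
 codes (the real vocabulary is exactly what questions 5 and 6 below ask
 about).

    # Illustrative only: ICRA_NS and the term names are placeholders,
    # not real ICRA vocabulary codes.
    from xml.sax.saxutils import escape

    ICRA_NS = "http://example.org/icra-vocabulary#"  # placeholder namespace

    def make_label(page_url, terms):
        """Render a minimal RDF/XML content label for one page."""
        props = "\n".join(
            "    <icra:%s>%s</icra:%s>" % (t, escape(str(v)), t)
            for t, v in sorted(terms.items()))
        return ('<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
                '         xmlns:icra="%s">\n'
                '  <rdf:Description rdf:about="%s">\n%s\n'
                '  </rdf:Description>\n</rdf:RDF>'
                % (ICRA_NS, escape(page_url), props))

    # e.g. a page whose images sit in nudity-related categories:
    print(make_label("http://en.wikipedia.org/wiki/Virgin_Killer",
                     {"nudity": 1, "context_educational": 1}))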

 1: If I am correctly informed, Wikipedia would be the first website of this
 size to label its content with ratings. Is this correct?
 2: How many content filters understand the RDF tags?
 3: How many of those understand multiple labels and path-specific labeling?
 That is: if we rate the path of images included on a page differently
 from the page itself, do filters block the entire content, or just the
 images? (Consider the Virgin Killer album cover on the Virgin Killer
 article, if you are aware of that controversial image:
 http://en.wikipedia.org/wiki/Virgin_Killer)
 4: Do filters understand per-page labeling? Or do they cache the first RDF
 file they encounter on a website and use that for all other pages of the
 website?
 5: Is there any chance the vocabulary of ICRA can be expanded with new
 ratings for sensitive issues in the non-Western world?
 6: Is there a possibility of creating a separate namespace that we could
 potentially use for our own labels?

 I hope that you can help me answer these questions, so that we may continue
 our community debate with more informed viewpoints about the possibilities
 of content rating. If you have additional suggestions for systems or
 problems that this web-property should account for, I would more than
 welcome those suggestions as well.

 Derk-Jan Hartman

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-10 Thread Marco Chiesa
On Mon, May 10, 2010 at 6:27 PM, teun spaans teun.spa...@gmail.com wrote:
 Dear Derk-jan,

 As for 1), I think YouTube can be compared in popularity and size with
 Wikipedia, and in videos it surpasses Commons.
 YouTube enables its visitors to tag videos as adult.

I think there is a difference between using tags/categories like
"contains the depiction of a female breast" or "contains a portrait of
Muhammad" and "suitable for adults only" or "offensive to Islam". The
first way is an objective categorisation, and I see nothing wrong in
someone else using such a categorisation to censor content, while the
second way is too culture-dependent. Even a concept like nudity
strongly depends on culture, so I wouldn't use it as a categorisation.
Cruccone

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-10 Thread David Goodman
Presumably you mean "nude female breast", and then you are involved with
exactly the nudity-definition dilemma you allude to. If you mean
nude or clothed, every full- or half-length picture of a woman seen
from the front or side contains a depiction of the female breast.
As another consideration, if we are out to describe the image, we
would need to put in a tag for "nude male breast" also, and presumably
other sometimes-uncovered parts of the body, like the hand.
Otherwise we are concentrating on tagging those portions of images
that are sexually charged, and the only reason for doing that
preferentially is to facilitate censorship (or to facilitate access
by those who want sexually charged material over those who want access
to other kinds of material). Neither is an appropriate function for a
free encyclopedia.

David Goodman, Ph.D, M.L.S.
http://en.wikipedia.org/wiki/User_talk:DGG



On Mon, May 10, 2010 at 12:40 PM, Marco Chiesa chiesa.ma...@gmail.com wrote:
 On Mon, May 10, 2010 at 6:27 PM, teun spaans teun.spa...@gmail.com wrote:
 Dear Derk-jan,

 As for 1), I think YouTube can be compared in popularity and size with
 Wikipedia, and in videos it surpasses Commons.
 YouTube enables its visitors to tag videos as adult.

 I think there is a difference between using tags/categories like
 "contains the depiction of a female breast" or "contains a portrait of
 Muhammad" and "suitable for adults only" or "offensive to Islam". The
 first way is an objective categorisation, and I see nothing wrong in
 someone else using such a categorisation to censor content, while the
 second way is too culture-dependent. Even a concept like nudity
 strongly depends on culture, so I wouldn't use it as a categorisation.
 Cruccone



___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-10 Thread K. Peachey
I've read most of the replies in this thread, and I think I should
point out a few things:

* The "omg, tagging for any reason is censorship" mentality is
needless. Yes, we tag things presently (*shock horror*): look at the
current category system.

* "Omg, adding this to MediaWiki will destroy Wikipedia": Currently
MediaWiki is a separate application from the wikis and always will be;
Wikipedia is just a site that uses MediaWiki for its back end. Just
because MediaWiki supports something, doesn't mean it will be
activated (or, in the case of extensions, installed) on Wikipedia, and
Wikipedia isn't the only site that uses MediaWiki.
  ** Currently there are two discussions about possible implementation:
     A) Bug 982 refers to an EXTENSION that provides the
functionality of tagging content (with its current discussion being
pointed at ICRA) so that content has a rating of sorts which can be
used by external sources (i.e. filtering companies).
     B) The discussion on wikitech-l concerns a way
(either an extension or core functionality) to accurately grab the
contents of a category and provide it in a usable interface so that,
again, it can be used by third parties. So far this discussion has
hardly touched the rating-system question (how would it be done?
our own internal scale? some existing standard?).

* The lesser of two evils: currently there is no easy way to get a
list of the files (and their file paths) of images contained within a
category. Such a list could be applied to multiple things (bots, for
example), but here the discussion is primarily about an exportable
format that a machine can easily use (roughly as sketched below).
Schools and filter providers are currently blocking whole W* projects
because there are no easy ways to do this. Unfortunately, the lesser of
the two evils is to allow an easy way for a company to get a list of
what is contained in a certain category (e.g. Category:Images of BDSM)
and import that into their filtering system to block it, rather than
blocking the whole project.
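
For illustration, here is roughly what such a machine-readable category
listing can look like against the standard MediaWiki web API. This is a
sketch only: result continuation and error handling are omitted, and the
category name is just an example.

    # Sketch: list the files (with their URLs) in one Commons category.
    import json
    import urllib.parse
    import urllib.request

    API = "https://commons.wikimedia.org/w/api.php"

    def files_in_category(category):
        params = urllib.parse.urlencode({
            "action": "query",
            "generator": "categorymembers",
            "gcmtitle": category,
            "gcmtype": "file",        # files only, not subcategories
            "gcmlimit": "50",
            "prop": "imageinfo",
            "iiprop": "url",
            "format": "json",
        })
        with urllib.request.urlopen(API + "?" + params) as resp:
            data = json.load(resp)
        for page in data.get("query", {}).get("pages", {}).values():
            for info in page.get("imageinfo", []):
                yield page["title"], info["url"]

    for title, url in files_in_category("Category:Maps of Asia"):
        print(title, url)

A filtering company could consume exactly this output, which is the
double-edged point made above.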


JUST A REMINDER: The current discussions revolve around
implementations in MediaWiki, either as core functionality (like
most such features, able to be enabled or disabled) or as an extension,
and not around how to deploy this on the WMF family of projects such
as, for example, Commons.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Jussi-Ville Heiskanen
Derk-Jan Hartman wrote:
 This message is CC'ed to other people who might wish to comment on this 
 potential approach
 ---

   


You asked for comments... Here is one we prepared earlier...

http://en.wikipedia.org/wiki/Wikipedia:Image_censorship#ICRA

In other words, we have been here, we have done this, and we
have the T-shirt.

This *HAS* been suggested before, and soundly defeated.
Nothing has changed in this respect. I would earnestly ask
that folks just quit trying to stuff this down the throat of a
community that simply does not do content labeling.


Yours,

Jussi-Ville Heiskanen


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Jiří Hofman
I am afraid we will never be able to label our content properly. There will be
no way to keep NPOV, no matter how the labels are implemented. Our content
is free. If somebody needs labeled content, he can label it himself in his own
copy of the Wikimedia projects.

It is a bad idea. Let's not do it. We have better things to do.

Jiri

 Personally, I tend to see ICRA labeling as just another kind of
 categorization, albeit one with definitions that were defined
 elsewhere.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Jussi-Ville Heiskanen
Robert Rohde wrote:
 Personally, I tend to see ICRA labeling as just another kind of
 categorization, albeit one with definitions that were defined
 elsewhere.
   
This is precisely and completely wrong.
Labeling is enabling censorship. Labeling images
is the worst kind of enablement of censorship, in
that it can affect the way a page's informational
content is presented to the viewer.

 If there are people in the community willing to sort content into the
 ICRA categories and maintain those associations, then I see no problem
 with Wikimedia supporting that.  Having images tagged with
 [[Category:ICRA Nudity-A (exposed breasts)]] is useful information for
 people that care about such things.  As with most other projects on
 Wikimedia, I think it mostly comes down to whether there is a
 community of volunteers who want to work on such issues.
   
Not so. As an argumentum ad absurdum, let me offer the
following proposition:

"If there are people in the community willing to sort content
into categories depending on whether the content is suitable
reading material for Catholics (insert your own ideology,
religion, political affiliation, or other orientation here) and
maintain those associations, then I see no problem with
Wikimedia supporting that."

See the problem with your argument there? I am sure
there would be people who would care about such things.
But we just don't do that. And the same applies to ICRA.

It does not come down to whether there are enough hands
to do the work. It comes down to the fact that our *mission*
is to distribute the *whole* of human knowledge to every
human in their own language. Period, no ifs or buts.

 There are, by my rough count, ~75 tags in the current ICRA vocabulary.

 These cover nudity, sexuality, violence, bad language, drug use,
 weapons, gambling, and other disturbing material.  In addition there
 are a number of meta tags to identify things like user-generated
 content, sites with advertising, and sites intended to be educational
 / news-oriented / religious, etc.
   
We don't do censorship. Period.

 It appears we could choose to use tags in some categories, e.g.
 nudity/sexuality, even if we didn't use tags in other categories, e.g.
 violence.

 On balance I suspect that participating in such schemes is probably
 more helpful than harmful since it allows schools and other
 organizations that would do filtering anyway to block only selected
 content rather than blocking wide swathes of content or the entire
 site just to get at 0.01% of content that they find intolerable.  It
 also provides the public relations benefits of showing we are
 concerned about such issues, without having to remove or block the
 content ourselves.
   
The public relations effects would be devastating. There
is a reason Wikipedia was blocked in China. It was because
we would not help in stuff like this, just to appease the
Chinese government. We haven't buckled on this yet.
And we won't.

The worst possible argument imaginable is that they
would do that anyway. That is their option, but we
won't help them a red cunt hair's distance on their
way (pardon my French).

 To be clear, I don't think we should be removing or blocking any
 content ourselves.  Wikimedia is designed for adults and that
 shouldn't change.  However, if there is a content filtering standard
 that some segment of the community wants to support, then I'm
 perfectly happy to see that happen.

   
You know what. You may be happy to see it happen.
But this question has been put to the community
time and again. There have been scores of attempts
to vote labeling in. ICRA has been put to the vote
at least three times. Each time, no matter how people
have tried to dress their proposal as innocuous, we
have rejected it resoundingly. No, not only resoundingly,
but angrily, furiously. We don't do censorship. Period.


Sorry about the length of the posting, but this
continues to be important, vital, to our community.


Yours,

Jussi-Ville Heiskanen




___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Gregory Maxwell
On Sun, May 9, 2010 at 9:24 AM, Derk-Jan Hartman d.j.hart...@gmail.com wrote:
 This message is CC'ed to other people who might wish to comment on this 
 potential approach
 ---

 Dear reader at FOSI,

 As a member of the Wikipedia community and the community that develops the 
 software on which Wikipedia runs, I come to you with a few questions.
 Over the past years Wikipedia has become more and more popular and 
 omnipresent. This has led to enormous


I am strongly in favour of allowing our users to choose what they see.
"If you don't like it, don't look at it" is only useful advice when
it's easy to avoid looking at things— and on our sites it isn't always
easy. By marking up our content better and providing the right
software tools we could _increase_ choice for our users, and that can
only be a good thing.

At the same time, and I think we'll hear a similar message from the
EFF and the ALA,  I am opposed to these organized content labelling
systems.  These systems are primary censorship systems and are
overwhelmingly used to subject third parties, often adults, to
restrictions against their will. I'm sure these groups will gladly
confirm this for us, regardless of the sales patter used to sell these
systems to content providers and politicians.

(For more information on the current state of compulsory filtering in
the US I recommend the filing in Bradburn v. North Central Regional
Library District, an ongoing legal battle over a library system
refusing to allow adult patrons to bypass the censorware in order to
access constitutionally protected speech, in apparent violation of the
suggestion by the US Supreme Court that the ability to bypass these
filters is what made the filters lawful in the first place:
http://docs.justia.com/cases/federal/district-courts/washington/waedce/2:2006cv00327/41160/40/0.pdf
)

It's arguable whether we should fight against the censorship of factual
information to adults or merely play no role in it—  but it isn't
really acceptable to assist it.

And even when not used as a method of third party control, these
systems require the users to have special software installed— so they
aren't all that useful as a method for our users to self-determine
what they will see on the site.  So it sounds like a lose, lose
proposition to me.

Labelling systems are also centred around broad classifications, e.g.
"Drugs" and "Pornography", with definitions which defy NPOV. This will
obviously lead to endless arguments on applicability within the site.

Many places exempt Wikipedia from their filtering ("after all, it's all
educational"), so it would be a step backwards for these people for us
to start applying labels that they would have gladly gone without.
They filter the drugs category because they want to filter pro-drug
advocacy, but if we follow the criteria we may end up with our factual
articles bunched into the same bin.  A labelling system designed for
the full spectrum of internet content simply will not have enough
words for our content... or are there really separate labels for "Drug
_education_", "Hate speech _education_", "Pornography _education_",
etc.?

Urban legend says the Eskimos have 100 words for snow, it's not
true... but I think that it is true that for the Wiki(p|m)edia
projects we really do need 10 million words for education.

Using a third-party labelling system, we can also expect issues to
arise where we fail to correctly apply the labels, whether due
to vandalism, limitations of the community process, or simply
a genuine and well-founded difference of opinion.

Instead I prefer that we run our own labelling system. By controlling
it ourselves we determine its meaning— avoiding terminology disputes
with outsiders; we can operate the system in a manner which
inhibits its usefulness for the involuntary censorship of adults (e.g.
not actually putting the label data in the pages users view in an
accessible way, or creating a site TOS which makes the involuntary
application of our filters to adults unlawful), and which maximizes its
usefulness for user self-determination by making the controls
available right on the site.
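
A minimal sketch of that design constraint, with hypothetical names
throughout: the labels live only server-side and are consulted at render
time against the viewer's own opt-in settings, so the HTML sent out
carries no machine-readable rating for a third-party censor to key on.

    # All names hypothetical; this illustrates the principle only.
    LABELS = {"File:Example.jpg": {"depicts-nudity"}}  # server-side store

    def render_image_html(filename, user_hidden_labels):
        """user_hidden_labels: labels this logged-in user chose to hide."""
        if LABELS.get(filename, set()) & user_hidden_labels:
            # The placeholder leaks no label data, only a way to opt back in.
            return ('<a href="/show/%s">[image hidden by your settings]</a>'
                    % filename)
        return '<img src="/media/%s">' % filename

    print(render_image_html("File:Example.jpg", {"depicts-nudity"}))  # hidden
    print(render_image_html("File:Example.jpg", set()))               # shown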

The Wikimedia sites have enough traffic that it's worth people's time to
customize their own preferences.

There are many technical ways in which such a system could be
constructed, some requiring more development work than others, and
while I'd love to blather on about possible methods, the important point
at this time is to establish the principles before we worry about the
tools.


Cheers,

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Sydney Poore
On Sun, May 9, 2010 at 2:34 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Sun, May 9, 2010 at 9:24 AM, Derk-Jan Hartman d.j.hart...@gmail.com
 wrote:
  This message is CC'ed to other people who might wish to comment on this
 potential approach
  ---
 
  Dear reader at FOSI,
 
  As a member of the Wikipedia community and the community that develops
 the software on which Wikipedia runs, I come to you with a few questions.
  Over the past years Wikipedia has become more and more popular and
 omnipresent. This has led to enormous


 I am strongly in favour of allowing our users to choose what they see.
 "If you don't like it, don't look at it" is only useful advice when
 it's easy to avoid looking at things— and on our sites it isn't always
 easy. By marking up our content better and providing the right
 software tools we could _increase_ choice for our users, and that can
 only be a good thing.


I agree and I'm in favor of WMF allocating resources in order to develop a
system that allows users to filter content based on the particular needs of
their setting.


 At the same time, and I think we'll hear a similar message from the
 EFF and the ALA,  I am opposed to these organized content labelling
 systems.  These systems are primary censorship systems and are
 overwhelmingly used to subject third parties, often adults, to
 restrictions against their will. I'm sure these groups will gladly
 confirm this for us, regardless of the sales patter used to sell these
 systems to content providers and politicians.

 (For more information on the current state of compulsory filtering in
 the US I recommend the filing in Bradburn v. North Central Regional
 Library District, an ongoing legal battle over a library system
 refusing to allow adult patrons to bypass the censorware in order to
 access constitutionally protected speech, in apparent violation of the
 suggestion by the US Supreme Court that the ability to bypass these
 filters is what made the filters lawful in the first place

 http://docs.justia.com/cases/federal/district-courts/washington/waedce/2:2006cv00327/41160/40/0.pdf
 )

 It's arguable whether we should fight against the censorship of factual
 information to adults or merely play no role in it—  but it isn't
 really acceptable to assist it.

 And even when not used as a method of third party control, these
 systems require the users to have special software installed— so they
 aren't all that useful as a method for our users to self-determine
 what they will see on the site.  So it sounds like a lose, lose
 proposition to me.

 Labelling systems are also centred around broad classifications, e.g.
 "Drugs" and "Pornography", with definitions which defy NPOV. This will
 obviously lead to endless arguments on applicability within the site.

 Many places exempt Wikipedia from their filtering ("after all, it's all
 educational"), so it would be a step backwards for these people for us
 to start applying labels that they would have gladly gone without.
 They filter the drugs category because they want to filter pro-drug
 advocacy, but if we follow the criteria we may end up with our factual
 articles bunched into the same bin.  A labelling system designed for
 the full spectrum of internet content simply will not have enough
 words for our content... or are there really separate labels for "Drug
 _education_", "Hate speech _education_", "Pornography _education_",
 etc.?


 Urban legend says the Eskimos have 100 words for snow, it's not
 true... but I think that it is true that for the Wiki(p|m)edia
 projects we really do need 10 million words for education.

 Using a third-party labelling system, we can also expect issues to
 arise where we fail to correctly apply the labels, whether due
 to vandalism, limitations of the community process, or simply
 a genuine and well-founded difference of opinion.

 Instead I prefer that we run our own labelling system. By controlling
 it ourselves we determine its meaning— avoiding terminology disputes
 with outsiders; we can operate the system in a manner which
 inhibits its usefulness for the involuntary censorship of adults (e.g.
 not actually putting the label data in the pages users view in an
 accessible way, or creating a site TOS which makes the involuntary
 application of our filters to adults unlawful), and which maximizes its
 usefulness for user self-determination by making the controls
 available right on the site.

 The Wikimedia sites have enough traffic that it's worth people's time to
 customize their own preferences.

 There are many technical ways in which such a system could be
 constructed, some requiring more development work than others, and
 while I'd love to blather on about possible methods, the important point
 at this time is to establish the principles before we worry about the
 tools.


I agree and prefer a system designed for the special needs of WMF wikis and
our global community. We may take some design elements and underlying
concepts from existing systems, but 

Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread David Goodman
This is the first step towards censorship, and we should not take it.

We have no experience or expertise in determining what content is
suitable for particular users, or how content should be classified as
such. Further, doing so is contrary to the basic principle that we do
not perform original research or draw conclusions on disputed
matters, but present the facts and outside opinions and leave the
implications for the readers to decide. This principle has served us
well in dealing with many disputes which in other settings are
intractable.

What we do have expertise and experience in is classifying our content
by subject. We have a complex system of categories, actively
maintained, and a system for determining correct titles and other
metadata that reflect the content of the article. No user wants to
see all of Wikipedia--they all choose what they see on the basis of
these descriptors, and on the basis of external links to our site,
links that are not under our control. They can choose on various
grounds: by title, by links from another article, by
inclusion in a category. Anyone who wishes to use this information to
provide a selected version of WP can freely do so.

To a certain extent, we also have visible metadata about the format
of our material: the items most easily presented to visitors
are the language, the size, and the type of computer file. There is
other material that we could display, such as whether an article
contains other files of particular types (in this context, images), or
references, or external links. We could display a separate list of
the images in an article, including their descriptions.

We could include this in our search criteria. It would be useful for
many purposes; someone might for example wish to see all articles on
Southeast Asia that contain maps, or wish to see articles about people
only if they contain photographs of the subjects. This is broadly
useful information that can be used in many ways. It could easily be
used to design an external filter that would, for example, display
articles on people that contain photographs with the descriptors in
place of the photographs, while displaying photographs in all other
articles. The question is whether we should design such filters as
part of the project.
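
As a sketch of how far the existing subject metadata already goes toward
this: the standard MediaWiki API can list a category's articles and report
which of them embed image files. The category name is only an example; a
real "contains maps" test would need to inspect the files' own categories,
and continuation and error handling are omitted.

    import json
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"

    def api(params):
        url = API + "?" + urllib.parse.urlencode(dict(params, format="json"))
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Articles (namespace 0) in an example subject category...
    members = api({"action": "query", "list": "categorymembers",
                   "cmtitle": "Category:Southeast Asia",
                   "cmnamespace": "0", "cmlimit": "25"})["query"]["categorymembers"]

    # ...kept only if they embed at least one image file.
    for m in members:
        result = api({"action": "query", "prop": "images",
                      "titles": m["title"], "imlimit": "1"})
        page = next(iter(result["query"]["pages"].values()))
        if page.get("images"):
            print(m["title"])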

I think we should not take that step. We should leave it to outside
services, which might for example work by viewing WP through a site
that contains the desired filters, or by using a browser that
incorporates them.


David Goodman, Ph.D, M.L.S.
http://en.wikipedia.org/wiki/User_talk:DGG



On Sun, May 9, 2010 at 3:17 PM, Sydney Poore sydney.po...@gmail.com wrote:
 On Sun, May 9, 2010 at 2:34 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Sun, May 9, 2010 at 9:24 AM, Derk-Jan Hartman d.j.hart...@gmail.com
 wrote:
  This message is CC'ed to other people who might wish to comment on this
 potential approach
  ---
 
  Dear reader at FOSI,
 
  As a member of the Wikipedia community and the community that develops
 the software on which Wikipedia runs, I come to you with a few questions.
  Over the past years Wikipedia has become more and more popular and
 omnipresent. This has led to enormous


 I am strongly in favour of allowing our users to choose what they see.
 "If you don't like it, don't look at it" is only useful advice when
 it's easy to avoid looking at things— and on our sites it isn't always
 easy. By marking up our content better and providing the right
 software tools we could _increase_ choice for our users, and that can
 only be a good thing.


 I agree and I'm in favor of WMF allocating resources in order to develop a
 system that allows users to filter content based on the particular needs of
 their setting.


 At the same time, and I think we'll hear a similar message from the
 EFF and the ALA,  I am opposed to these organized content labelling
 systems.  These systems are primary censorship systems and are
 overwhelmingly used to subject third parties, often adults, to
 restrictions against their will. I'm sure these groups will gladly
 confirm this for us, regardless of the sales patter used to sell these
 systems to content providers and politicians.

 (For more information on the current state of compulsory filtering in
 the US I recommend the filing in Bradburn v. North Central Regional
 Library District, an ongoing legal battle over a library system
 refusing to allow adult patrons to bypass the censorware in order to
 access constitutionally protected speech, in apparent violation of the
 suggestion by the US Supreme Court that the ability to bypass these
 filters is what made the filters lawful in the first place

 http://docs.justia.com/cases/federal/district-courts/washington/waedce/2:2006cv00327/41160/40/0.pdf
 )

 It's arguable whether we should fight against the censorship of factual
 information to adults or merely play no role in it—  but it isn't
 really acceptable to assist it.

 And even when not used as a method 

Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Mike Godwin
Greg Maxwell writes:

At the same time, and I think we'll hear a similar message from the
 EFF and the ALA,  I am opposed to these organized content labelling
 systems.  These systems are primary censorship systems and are
 overwhelmingly used to subject third parties, often adults, to
 restrictions against their will. I'm sure these groups will gladly
 confirm this for us, regardless of the sales patter used to sell these
 systems to content providers and politicians.


I just want to chime in in support of Greg's assessment here. I worked for
EFF for nine years, and I have done extensive work with ALA as well, and I
am absolutely certain that these organizations (and others, including
civil-liberties groups) will be extremely critical if any project adopts
ICRA labeling schemes. Moreover, Greg's characterization of the existing
systems as "primary censorship systems ... overwhelmingly used to subject
third parties, often adults, to restrictions against their will" is
entirely accurate.


--Mike
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread David Gerard
On 9 May 2010 21:17, Marcus Buck m...@marcusbuck.org wrote:

 The tags applied should be clear and fact-based. So instead of tagging a
 page as "containing pornography", which is entirely subjective, we
 should rather tag the page as "contains a depiction of an erect penis"
 or "contains a depiction of oral intercourse".


We can do this with the existing category system.

The objection of the objectors will remain that the material is
present at all. No system of categorisation will alleviate this
concern - only actual censorship of Commons will.


- d.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread David Gerard
On 9 May 2010 21:28, Mikemoral mikemoral...@gmail.com wrote:

 But why censor Commons? Shouldn't educational material be freely viewed
 and, of course, be made free to read, use, etc.?


Well, yes. The apparent reason is that Fox News is making trouble.

Categorisation, labeling, etc. won't fix that - only removing the
material would. This does not make it a good idea.


- d.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Excirial
*That's true. But at the moment we have nothing to defend or excuse
ourselves with. If we had decent tagging we could at least say: "You
don't want your pupils to see nude people? Add rule XYZ to your school's
proxy servers and Wikipedia will be clean. You can even choose which
content should be allowed and which not."

Much better than saying: "You don't want your pupils to see nude people?
No way! No Wikipedia without dicks and titties! Except you block all of
Wikipedia..."*

If we create a content rating system, it should be based upon individual
account settings which are decided by the editors themselves instead of
being enforced globally. I am very much against any system that takes
control away from the editor and hands it to some external party; we are
not aiming to become another Golden Shield Project
(http://en.wikipedia.org/wiki/Golden_Shield_Project), where a handful of
people can dictate what content is appropriate for its audience. Even a
system that sets top-level permissions which are overridable through
account settings is too much in my eyes, as such systems can easily be
abused.

Equally, I don't believe it is up to a school or ISP to decide whether or
not to show certain content to its subscribers. If I don't want to see
sexual images, nudity, the face of Muhammad, evolution or religion-related
content, I should not be searching for it in the first place. A setting
that allows people to filter content is little more than a courtesy to
them, as we would allow them to filter based upon their personal
convictions. However, there is no way my ISP can decide what convictions
I should follow. If a school decides to block Wikipedia altogether, then
I would say it is their loss, not ours.

~Excirial

On Sun, May 9, 2010 at 10:42 PM, Marcus Buck m...@marcusbuck.org wrote:

 David Gerard wrote:
  On 9 May 2010 21:17, Marcus Buck m...@marcusbuck.org wrote:
 
 
  The tags applied should be clear and fact-based. So instead of tagging a
  page as "containing pornography", which is entirely subjective, we
  should rather tag the page as "contains a depiction of an erect penis"
  or "contains a depiction of oral intercourse".
 
 
 
  We can do this with the existing category system.
 
 That is possible but it will either be hacky or we'll need to be much
 more strict with our categorization (atomic categorization). I'm not
 opposed though.
  The objection of the objectors will remain that the material is
  present at all. No system of categorisation will alleviate this
  concern - only actual censorship of Commons will.
 

 That's true. But at the moment we have nothing to defend or excuse
 ourselves with. If we had decent tagging we could at least say: "You
 don't want your pupils to see nude people? Add rule XYZ to your school's
 proxy servers and Wikipedia will be clean. You can even choose which
 content should be allowed and which not."

 Much better than saying: "You don't want your pupils to see nude people?
 No way! No Wikipedia without dicks and titties! Except you block all of
 Wikipedia..."

 Our current strategy is censoring, but hiding the censorship under
 maximally vague and undefined terms like "scope".

 Marcus Buck
 User:Slomox

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Derk-Jan Hartman
This message was an attempt to gain information and spur discussion about the
system in general, its limits and effectiveness, not whether or not we should
actually do it. I was trying to gather more information so that we can have an
informed debate if we ever get to discussing the possibility of using
ratings.

That is why it was addressed to FOSI and cc'ed to some parties that might have
a clue about such systems. The copy to foundation-l was a courtesy message. You
are welcome to discuss censorship and your opinion about it, but I would
appreciate it even more if people actually talked about rating systems.

DJ

On 9 May 2010, at 15:24, Derk-Jan Hartman wrote:

 This message is CC'ed to other people who might wish to comment on this 
 potential approach
 ---
 
 Dear reader at FOSI,
 
 As a member of the Wikipedia community and the community that develops the 
 software on which Wikipedia runs, I come to you with a few questions.
 Over the past years Wikipedia has become more and more popular and
 omnipresent. This has led to enormous problems, because for the first time,
 a largely uncensored system has to work within the boundaries of a world
 that is largely censored. For libraries and schools this means that they
 want to provide Wikipedia and its related projects to their readers, but
 are presented with the problem of what some people might consider
 information that is not child-safe. They have several options in that case:
 either blocking completely, or using context-aware filtering software that
 may make mistakes, which can cost some of these institutions their funding.
 
 Similar problems are starting to present themselves in countries around the
 world; consider the differing views about sexuality between northern and
 southern Europe, for instance. Add to that the censoring of images of
 Muhammad, Tiananmen Square, the Nazi swastika, and a host of other
 problems. Recently there has been concern that all this all-out censoring
 of content by parties around the world is damaging the education mission of
 the Wikipedia-related projects, because so many people are not able to
 access large portions of our content due to a small (think 0.01%) part of
 our other content.
 
 This has led some people to infer that perhaps it is time to rate the
 content of Wikipedia ourselves, in order to facilitate external censoring
 of material, hopefully making the rest of our content more accessible.
 According to statements around the web, ICRA ratings are probably the
 ratings most widely supported by filtering systems. Thus we were thinking
 of adding autogenerated ICRA RDF tags to each individual page, describing
 the rating of the page and of the images contained within it. I have a few
 questions however, both general and technical.
 
 1: If I am correctly informed, Wikipedia would be the first website of this
 size to label its content with ratings. Is this correct?
 2: How many content filters understand the RDF tags?
 3: How many of those understand multiple labels and path-specific labeling?
 That is: if we rate the path of images included on a page differently from
 the page itself, do filters block the entire content, or just the images?
 (Consider the Virgin Killer album cover on the Virgin Killer article, if
 you are aware of that controversial image:
 http://en.wikipedia.org/wiki/Virgin_Killer)
 4: Do filters understand per-page labeling? Or do they cache the first RDF
 file they encounter on a website and use that for all other pages of the
 website?
 5: Is there any chance the vocabulary of ICRA can be expanded with new
 ratings for sensitive issues in the non-Western world?
 6: Is there a possibility of creating a separate namespace that we could
 potentially use for our own labels?
 
 I hope that you can help me answer these questions, so that we may continue 
 our community debate with more informed viewpoints about the possibilities of 
 content rating. If you have additional suggestions for systems or problems 
 that this web-property should account for, I would more than welcome those 
 suggestions as well.
 
 Derk-Jan Hartman


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Sue Gardner
Hi Derk-Jan,

Thank you for starting this thread.

There is obviously a range of options -- let's say, on a 10-point
scale, ranging from 0 (do nothing but enforce existing policy) to 10
(completely purge everything that's potentially objectionable to
anyone, anywhere).  Somewhere on that continuum are possibilities like:
i) we tag pages so that external entities can filter; ii) we point
parents towards content filtering systems they can use, but make no
changes ourselves; iii) we implement our own filtering system so that
people can choose to hide objectionable content if they want; or iv)
we implement our own filtering system, with a default to a "safe" or
"moderate" view and the option for people to change their own
settings.  Those are just a few: there are lots of options.  (E.g.,
Google Images and Flickr, I believe, do different versions of option iv.
I'm not saying that means we should do the same; it does not
necessarily mean that.)

I would love to see a table of various options, with pros and cons
including feedback from folks like EFF.  If anyone feels like starting
such a thing, I would be really grateful :-)

Thanks,
Sue



On 9 May 2010 14:26, Derk-Jan Hartman d.j.hart...@gmail.com wrote:
 This message was an attempt to gain information and spur discussion about the
 system in general, its limits and effectiveness, not whether or not we should
 actually do it. I was trying to gather more information so that we can have
 an informed debate if we ever get to discussing the possibility of
 using ratings.

 That is why it was addressed to FOSI and cc'ed to some parties that might
 have a clue about such systems. The copy to foundation-l was a courtesy
 message. You are welcome to discuss censorship and your opinion about it, but
 I would appreciate it even more if people actually talked about rating
 systems.

 DJ

 On 9 May 2010, at 15:24, Derk-Jan Hartman wrote:

 This message is CC'ed to other people who might wish to comment on this 
 potential approach
 ---

 Dear reader at FOSI,

 As a member of the Wikipedia community and the community that develops the 
 software on which Wikipedia runs, I come to you with a few questions.
 Over the past years Wikipedia has become more and more popular and
 omnipresent. This has led to enormous problems, because for the first time,
 a largely uncensored system has to work within the boundaries of a world
 that is largely censored. For libraries and schools this means that they
 want to provide Wikipedia and its related projects to their readers, but
 are presented with the problem of what some people might consider
 information that is not child-safe. They have several options in that case:
 either blocking completely, or using context-aware filtering software that
 may make mistakes, which can cost some of these institutions their funding.

 Similar problems are starting to present themselves in countries around the
 world; consider the differing views about sexuality between northern and
 southern Europe, for instance. Add to that the censoring of images of
 Muhammad, Tiananmen Square, the Nazi swastika, and a host of other
 problems. Recently there has been concern that all this all-out censoring
 of content by parties around the world is damaging the education mission of
 the Wikipedia-related projects, because so many people are not able to
 access large portions of our content due to a small (think 0.01%) part of
 our other content.

 This has led some people to infer that perhaps it is time to rate the
 content of Wikipedia ourselves, in order to facilitate external censoring
 of material, hopefully making the rest of our content more accessible.
 According to statements around the web, ICRA ratings are probably the
 ratings most widely supported by filtering systems. Thus we were thinking
 of adding autogenerated ICRA RDF tags to each individual page, describing
 the rating of the page and of the images contained within it. I have a few
 questions however, both general and technical.

 1: If I am correctly informed, Wikipedia would be the first website of this
 size to label its content with ratings. Is this correct?
 2: How many content filters understand the RDF tags?
 3: How many of those understand multiple labels and path-specific labeling?
 That is: if we rate the path of images included on a page differently from
 the page itself, do filters block the entire content, or just the images?
 (Consider the Virgin Killer album cover on the Virgin Killer article, if
 you are aware of that controversial image:
 http://en.wikipedia.org/wiki/Virgin_Killer)
 4: Do filters understand per-page labeling? Or do they cache the first RDF
 file they encounter on a website and use that for all other pages of the
 website?
 5: Is there any chance the vocabulary of ICRA can be expanded with new
 ratings for sensitive issues in the non-Western world?
 6: Is there a possibility of creating a separate namespace that we could
 potentially use for our 

Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread stevertigo
Jussi-Ville Heiskanen cimonav...@gmail.com wrote:

 This *HAS* been suggested before, and soundly defeated.
 Nothing has changed in this respect. I would earnestly ask
 that folks just quit trying to stuff this down the throat of a
 community that simply does not do content labeling.

I don't see the reason for the negative emphasis here.

Without taking a side about any particular types of imagery, labelling
should be regarded as metadata. Ideally, all media should be
described by text, and that text should be regarded as simply another
dimension by which people can find the media they want. The end user can
deal with images as they like, through semantic searching or through
content filtering.

-Stevertigo

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Excirial
*That is why it was addressed to FOSI and cc'ed to some parties that might
have a clue about such systems. The copy to foundation-l was a courtesy
message. You are welcome to discuss censorship and your opinion about it,
but I would appreciate it even more if people actually talked about rating
systems.*

Very well, let's see if I can write up something more on point, then:

*Definition and purpose*: The purpose of such a system would be to allow
certain content to be filtered from public view. The scope of such a project
is open to discussion, and depends upon the goals we wish to reach.

*Rating system*: In order to decide a piece of content's category and
offensiveness there has to be a method to sort the images. Multiple options
are available:

*Categorization:* Categories could be added to images to establish the
subject of an image. For example, one image might be categorized as nudity,
another as sexual intercourse, and so on. The categorization could be
similar to the way we categorize our stub templates - we could create a
top-level filter for nudity and create more specific categories under that.
That way it is possible to fine-tune the content one might not wish to see.
*Rating:* Another method is rating each image. Instead of using a category
tree, we might use a system that allows users to set a level of explicitness
or severity for each image. An image which shows non-sexual nudity would be
rated lower than an image which shows a high level of nudity. Note that such
a system would require a clear set of rules, as a rating might be subject to
one's personal ideas and feelings towards a certain subject.

*Control mechanism*: There are various levels at which we can filter content:

*Organization-wide:* An organization-wide filter would allow an organization
to block content based upon site-wide settings. Technically this would
likely prove to be the more difficult option to implement, as it would
require both local and external changes. There are multiple methods to
execute this, though. For example, a server may relay a certain value to
Wikipedia at the start of each session detailing the content that should not
be forwarded over this connection. Based on such a value, the server could
be programmed in such a way that images of a certain category won't be
forwarded, or would be replaced by placeholders.
The advantage of this method is that it allows organizations such as schools
to control which content should be shown, therefore possibly averting
complete blocks of Wikipedia. The negative is that it takes away control
from the user.
*Per-user:* A second method is allowing per-user settings. Such a system
would be easier to build and integrate, as it only requires changes on
Wikipedia's side. A separate section could be made under "My preferences"
which would include a set of check boxes where a user could select which
content he or she prefers not to see. Images falling under a certain
category could be replaced with the image's alt text or with an image
stating something akin to "Per your preferences, this image was removed".
*Hybrid*: A hybrid system could integrate both: a user might override or
extend organization-level settings according to his or her personal
preferences (a rough sketch of this policy follows below).
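
A rough sketch of how these levels compose (all names hypothetical): the
organization-wide default and the per-user setting reduce to one effective
blocklist, with the hybrid rule deciding which one wins.

    ORG_DEFAULT = {"school-proxy-1": {"Nudity"}}  # organization-wide blocks
    USER_OVERRIDE = {"alice": set()}              # alice opted back in to all

    def effective_blocklist(org, user):
        """Hybrid rule: a user's own setting, if present, beats the org default."""
        if user in USER_OVERRIDE:
            return USER_OVERRIDE[user]
        return ORG_DEFAULT.get(org, set())

    def show_image(image_categories, org, user):
        return not (image_categories & effective_blocklist(org, user))

    assert show_image({"Nudity"}, "school-proxy-1", "alice")        # override wins
    assert not show_image({"Nudity"}, "school-proxy-1", "charlie")  # default applies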

*Possible concerns*
*Responsibility and vandalism:* One risk with rating systems is that they
might be abused for personal goals, akin to article vandalism. Therefore
there should be some limit on who can rate an image - anonymous rating
could change images in such a way that they become visible or invisible to
people who might or might not want this.
*Volunteer interest:* Implementing such a system would likely require a lot
of volunteer activity. Not only does every image have to be checked and
rated; we would also start with a backlog of over 6 million images to rate.
Therefore we should have sufficient volunteers who are interested in such a
system.
*Public interest*: Plain and simple: will people actually use this system?
Will people be content with their ability to filter, or will they still try
to remove images they deem offensive? Also: how many editors would use this
system?
*Implementation area*: Commons only? Local and Commons?

That is all I can think of for now. I hope it is somewhat more constructive
towards the point you were initially trying to relay :)
~Excirial

On Sun, May 9, 2010 at 11:26 PM, Derk-Jan Hartman d.j.hart...@gmail.com wrote:

 This message was an attempt to gain information and spur discussion about
 the system in general, its limits and effectiveness, not whether or not we
 should actually do it. I was trying to gather more information so that we
 can have an informed debate if we ever get to discussing the
 possibility of using ratings.

 That is why it was addressed to FOSI and cc'ed to some parties that might
 have a clue about such systems. The copy to foundation-l was a courtesy
 message. You are welcome to discuss censorship and your opinion about it,
 but I would appreciate it even more if 

Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread David Goodman
I can think of other concerns.

The main one is that of our competence to form judgements. On some
things we can: though nudity would seem an obvious case, deciding on
its various degrees is not. I do not think we are likely to
agree on whether any particular nude image is primarily sexual or
primarily non-sexual. If we tried to be precise, we would degenerate
towards a situation like some legal codes, which state exactly what
portions of a female breast may be displayed in a particular context,
or in just what way something must be covered to make it non-nude.

Further, though I consider it essential that Commons should include
appropriately educational sexual and even pornographic content, I do
not think it should concentrate on that; important though education
about human sexuality is, there are other things to educate about
also, many just as dependent upon images. A project to minutely
categorize pages on sexuality would concentrate much of the volunteer
effort on this portion of the contents. We need some editors who
want to work primarily in this field, but we do not need everybody.



David Goodman, Ph.D, M.L.S.
http://en.wikipedia.org/wiki/User_talk:DGG




Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread John Vandenberg
On Mon, May 10, 2010 at 1:04 AM, Jussi-Ville Heiskanen
cimonav...@gmail.com wrote:
 Derk-Jan Hartman wrote:
 This message is CC'ed to other people who might wish to comment on this 
 potential approach
 ---




 You asked for comments... Here is one we prepared earlier...

 http://en.wikipedia.org/wiki/Wikipedia:Image_censorship#ICRA

That was on English Wikipedia, a very long time ago.

The current discussion is about Commons, and affects other projects
where there is less editorial input to ensure the content is
educational.

--
John Vandenberg



Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread Jussi-Ville Heiskanen
Sue Gardner wrote:
 Hi Derk-Jan,

 Thank you for starting this thread.

 There is obviously a range of options -- let's say, on a 10-point
 scale, ranging from 0 (do nothing but enforce existing policy) to 10
 (completely purge everything that's potentially objectionable to
 anyone, anywhere).  Somewhere on that continuum are possibilities like
 i) we tag pages so that external entities can filter, ii) we point
 parents towards content filtering systems they can use, but make no
 changes ourselves, iii) we implement our own filtering system so that
 people can choose to hide objectionable content if they want, or iv)
 we implement our own filtering system, with a default to a "safe" or
 "moderate" view and the option for people to change their own
 settings.  Those are just a few: there are lots of options.  (e.g.,
 Google Images and Flickr I believe do different versions of option iv.
  I'm not saying that means we should do the same; it does not
 necessarily mean that.)

 I would love to see a table of various options, with pros and cons
 including feedback from folks like EFF.  If anyone feels like starting
 such a thing, I would be really grateful :-)

   
Hi Sue,

Is it okay if I first explain why none of the examples
you mention are a good fit for us; and then pull a
rabbit out of my hat, and explain how one of them
can be salvaged and made into an excellent system?

Rating by level is fixed, and it will never be culturally
sensitive. And on Wikipedia, no matter how it is rigged,
people who edit will just get frustrated, for both the
right and the wrong reasons.

Using words like "safe" etc. will certainly offend
cultures which are very, very strict, for instance
in terms of how much flesh of a woman may be seen.

Pointing parents to systems of filtering is
half a solution, and the problem would be that we
would have to keep vetting what the filtering
systems are basing their filtering on, so our site
doesn't end up looking ridiculous in some form or another,
either accidentally failing and offending the
viewer (ask me sometime, I have tales to tell),
or going to the other extreme and depriving the
viewer of a perfectly nice result.

The last problem, but certainly not the least one.
All of these are a *hard* *sell*. They are a hard
sell to the wikimedian community. They are also
a hard sell to a huge sector of our readers, and
those who love us, even enough to give us small
donations. Our community must matter to us,
our readers must matter to us, those who love
us should matter to us, and well, those who
give us small donations -- I am not in a place
to tell how much they matter to us.

So now we come to the rabbit time!!!

(DRUM-ROLL PLEASE!)

If it is a hard sell, find a way to soften it, without
forcing the issue. How? First, be very canny about
how the tags are named: Tits vs. Breasts, Butt vs.
Rear-end, Baretits vs. Topless, and so forth. Second,
do *not* limit the tags to content tags which
are useful for _avoiding_ content, but also add
positive tags (I know, for some all those above are
positive tags ;-): puppies, kittens, funny, horsies,
etc.

I am sure someone can think of even better and
smoother tag names. But the advantages of this
approach are that it doesn't *feel* like censorship,
but more like a value-added service. I won't
talk about how the system of selecting which stuff
to see should be constructed, but I am sure
someone has ideas. I do think, though, that the
system should be reversible -- that is essential to
selling it as something other than censorship --
so that those who only want to see naughty bits
can do so, or for that matter somebody can see
only cute animals.
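
The reversibility is easy to pin down. A toy sketch
(all names here are invented for illustration;
nothing MediaWiki-specific):

# Toy sketch of a reversible tag filter: the same tag
# vocabulary serves both directions, so it is a
# selection tool, not a one-way censorship switch.

def visible(image_tags, mode, selection):
    image_tags = set(image_tags)
    if mode == "hide":  # show everything except the selected tags
        return not (image_tags & selection)
    if mode == "only":  # show nothing but the selected tags
        return bool(image_tags & selection)
    return True         # default: no filtering at all

print(visible({"topless"}, "hide", {"topless"}))           # False
print(visible({"kittens", "funny"}, "only", {"kittens"}))  # True
print(visible({"topless"}, "only", {"kittens"}))           # False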

There is one additional benefit in terms of our
community too. It is much more fun to add those
kinds of tags, and much less drudge-work.


Yours,

Jussi-Ville Heiskanen




Re: [Foundation-l] Potential ICRA labels for Wikipedia

2010-05-09 Thread David Goodman
There is no general agreement here that any system of filtering for
any purpose is ever necessary, and I think it is totally contrary to
the entire general idea behind the free culture movement.

But people have liberty to do as they please with our content, and if
someone wants to filter for their own purposes we cannot and should
not prevent them. Neither should we assist them.

For JV to suggest assisting censorship by doing something that will
not feel like censorship is not, in my opinion, forthright. We should
have good descriptors because that's part of the context for the
images, but this should be decided without the least concern about
anything other than finding the images a user might want to find.
Agreed that one part of that is avoiding retrieving what they do not
want to receive, but there are many such criteria, such as size, date,
and the like. It can be argued that we have some responsibility to
those of our users who cannot access unfiltered content, but the least
judgmental way is to provide ourselves an option to display
images as text descriptors only, rather than leave it to
browsers--especially since a text-only view is appropriate for other
purposes also.
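
As a rough illustration of how little machinery such a text-only view
needs (the function and regex here are a toy, not how MediaWiki
actually renders pages):

# Toy example: swap every <img> for its alt-text descriptor. A real
# implementation would live in the parser or skin, not in a regex.
import re

def text_only(html):
    return re.sub(r'<img[^>]*\balt="([^"]*)"[^>]*>', r'[image: \1]', html)

page = 'Anatomy <img src="fig1.jpg" alt="cross-section of the heart"> ...'
print(text_only(page))  # Anatomy [image: cross-section of the heart] ...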

We could show the proper approach by working on better descriptors for
more important things than sexual images first. The necessary
distinctions for any filtering service that aims at restricting
content in a way which is not grossly heavy-handed would require very
detailed separation of the various types of breast images, and I do
not see why distinguishing between such things as the different
degrees of nudity is all that important in an encyclopedic sense.


David Goodman, Ph.D, M.L.S.
http://en.wikipedia.org/wiki/User_talk:DGG


