Re: [Wikimedia-l] On toxic communities

2015-11-19 Thread Aaron Halfaker
I've started a thread on our "Revision scoring as a service" talk page
regarding labeled conversation datasets & modeling work we could do.

See
https://meta.wikimedia.org/wiki/Research_talk:Revision_scoring_as_a_service#Thread_on_Toxic_communities_from_wikimedia-l

On Sun, Nov 15, 2015 at 12:41 PM, Risker  wrote:

> I am going to quote Joseph Reagle, who responded to a similarly titled
> thread on Wiki-en-L:
>
>
> date: 13 November 2015 at 13:48
>
> It's been great that Riot Games has had someone like Lin (an experimental
> psychologist) to think about issues of community and abuse. And I
> appreciate that Lin has previously been so forthcoming about their
> experiences and findings.
>
> But the much-trumpeted League of Legends Tribunal has been down "for
> maintenance" for months, even before this article was published, with much
> discussion by the community of how it was broken. On this, Riot and Lin
> have said nothing.
>
> Copying Joseph in case he wants to respond to some of the discussions here.
>
>
> Risker/Anne

Re: [Wikimedia-l] On toxic communities

2015-11-15 Thread Aaron Halfaker
>
> The League of Legends team collaborated with outside scientists to
> analyse their dataset. I would love to see the Wikimedia Foundation engage
> in a similar research project.


Oh!  We are!  :) When we have time. :\ One of the projects I'd like to
see done, but have struggled to find the time for, is a common talk page
parser[1] that could produce a dataset of talk page interactions.  I'd like
this dataset to be easy to join to editor outcome measures.  E.g., there
might be "aggressive" talk that we don't know is problematic until we see
the kind of effect it has on other conversation participants.
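
To make that concrete, here is a rough sketch of the kind of parser I
have in mind (the heuristic and names are illustrative, not necessarily
what talk-parser[1] does today): split a page's wikitext into posts at
signature timestamps, so each post can then be joined to its author's
subsequent activity.

    import re

    # Match the timestamp MediaWiki appends to signatures,
    # e.g. "12:34, 15 November 2015 (UTC)".
    SIG_TS = re.compile(r'(\d{2}:\d{2}, \d{1,2} [A-Z][a-z]+ \d{4} \(UTC\))')

    def split_posts(wikitext):
        """Split talk page wikitext into (text, timestamp) posts.

        Crude heuristic: everything up to and including a
        signature timestamp counts as one post.
        """
        posts, start = [], 0
        for match in SIG_TS.finditer(wikitext):
            posts.append((wikitext[start:match.end()].strip(),
                          match.group(1)))
            start = match.end()
        return posts

    # Each (text, timestamp) pair could then be joined to the
    # author's later edit counts -- the "outcome measures".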

Anyway, I want some powerful utilities and datasets out there to help
academics look into this problem more easily.  For revscoring, I'd like to
be able to take a set of talk page diffs, have them classified in Wiki
labels[2] as "aggressive", and then build a model for ORES[3] to be used
however people see fit.  You could then use ORES to do offline analysis of
discussions for research.  You could use ORES to interrupt a user before
saving a change.  I'm sure people have other clever ideas for what to do
with such a model, and I'm happy to enable them via the service.  The hard
part is getting a good dataset labeled.
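
As a sketch of how such a model might be consumed once it exists (the
"aggressive" model is hypothetical and still has to be built; the URL
shape just mirrors the existing ORES scores endpoint and may differ):

    import requests

    # Hypothetical "aggressive" model served via ORES[3].
    ORES_URL = 'https://ores.wmflabs.org/scores/enwiki/'

    def score_revisions(rev_ids, model='aggressive'):
        """Fetch scores for a batch of talk page diff revisions."""
        resp = requests.get(ORES_URL, params={
            'models': model,
            'revids': '|'.join(str(r) for r in rev_ids),
        })
        resp.raise_for_status()
        return resp.json()

    # A gadget could, e.g., warn a user before saving when the
    # returned probability of "aggressive" is very high.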

If someone wants to invest some time and energy into this, I'm happy to
work with you.  We'll need more than programming help.  We'll need a lot of
help to figure out what dimensions we'll label talk page postings by and to
do the actual labeling.
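
For instance (a pure strawman; deciding the real dimensions is exactly
the help we need), the labeling schema might look something like:

    # Strawman schema for labeling talk page postings.
    TALK_POST_LABELS = {
        'aggressive':      ('no', 'mild', 'severe'),
        'dismissive':      ('no', 'yes'),
        'personal_attack': ('no', 'yes'),
        'profanity':       ('no', 'yes'),
    }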

1. https://github.com/Ironholds/talk-parser
2. https://meta.wikimedia.org/wiki/Wiki_labels
3. https://meta.wikimedia.org/wiki/ORES

On Sun, Nov 15, 2015 at 6:56 AM, Andreas Kolbe  wrote:

> On Sat, Nov 14, 2015 at 9:13 PM, Benjamin Lees 
> wrote:
>
> > This article highlights the happier side of things, but it appears
> > that Lin's approach also involved completely removing bad actors:
> > "Some players have also asked why we've taken such an aggressive
> > stance when we've been focused on reform; well, the key here is that
> > for most players, reform approaches are quite effective. But, for a
> > number of players, reform attempts have been very unsuccessful which
> > forces us to remove some of these players from League entirely."[0]
> >
>
>
> Thanks for the added context, Benjamin. Of course, banning bad actors that
> they consider unreformable is something Wikipedia admins have always done
> as well.
>
> The League of Legends team began by building a dataset of interactions
> that the community considered unacceptable, and then applied machine
> learning to that dataset.
>
> It occurs to me that the English Wikipedia has ready access to such a
> dataset: it's the totality of revision-deleted and oversighted talk page
> posts. The League of Legends team collaborated with outside scientists to
> analyse their dataset. I would love to see the Wikimedia Foundation engage
> in a similar research project.
>
> I've added this point to the community wishlist survey:
>
>
> https://meta.wikimedia.org/wiki/2015_Community_Wishlist_Survey#Machine-learning_tool_to_reduce_toxic_talk_page_interactions
>
>
>
> > P.S. As Rupert noted, over 90% of LoL players are male (how much over
> > 90%?).[1] It would be interesting to know whether this percentage has
> > changed along with the improvements described in the article.
> >
>
>
> Indeed.


Re: [Wikimedia-l] On toxic communities

2015-11-15 Thread Andreas Kolbe
On Sat, Nov 14, 2015 at 9:13 PM, Benjamin Lees  wrote:

> This article highlights the happier side of things, but it appears
> that Lin's approach also involved completely removing bad actors:
> "Some players have also asked why we've taken such an aggressive
> stance when we've been focused on reform; well, the key here is that
> for most players, reform approaches are quite effective. But, for a
> number of players, reform attempts have been very unsuccessful which
> forces us to remove some of these players from League entirely."[0]
>


Thanks for the added context, Benjamin. Of course, banning bad actors that
they consider unreformable is something Wikipedia admins have always done
as well.

The League of Legends team began by building a dataset of interactions that
the community considered unacceptable, and then applied machine learning to
that dataset.

It occurs to me that the English Wikipedia has ready access to such a
dataset: it's the totality of revision-deleted and oversighted talk page
posts. The League of Legends team collaborated with outside scientists to
analyse their dataset. I would love to see the Wikimedia Foundation engage
in a similar research project.
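
As a very rough sketch of what such a project could look like
(illustrative only; it assumes someone has exported the deleted and
retained posts into two plain text files, one post per line):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Positive examples: revision-deleted/oversighted talk posts;
    # negative examples: a random sample of posts that were kept.
    deleted = open('deleted_posts.txt').read().splitlines()
    kept = open('kept_posts.txt').read().splitlines()

    texts = deleted + kept
    labels = [1] * len(deleted) + [0] * len(kept)

    vectorizer = TfidfVectorizer(min_df=5, ngram_range=(1, 2))
    X = vectorizer.fit_transform(texts)

    model = LogisticRegression().fit(X, labels)
    # model.predict_proba() then yields a "likely to be removed"
    # score for any new talk page post.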

I've added this point to the community wishlist survey:

https://meta.wikimedia.org/wiki/2015_Community_Wishlist_Survey#Machine-learning_tool_to_reduce_toxic_talk_page_interactions



> P.S. As Rupert noted, over 90% of LoL players are male (how much over
> 90%?).[1] It would be interesting to know whether this percentage has
> changed along with the improvements described in the article.
>


Indeed.


Re: [Wikimedia-l] On toxic communities

2015-11-15 Thread Risker
I am going to quote Joseph Reagle, who responded to a similarly titled
thread on Wiki-en-L:


date: 13 November 2015 at 13:48

It's been great that Riot Games has had someone like Lin (an experimental
psychologist) to think about issues of community and abuse. And I
appreciate that Lin has previously been so forthcoming about their
experiences and findings.

But the much-trumpeted League of Legends Tribunal has been down "for
maintenance" for months, even before this article was published, with much
discussion by the community of how it was broken. On this, Riot and Lin
have said nothing.

Copying Joseph in case he wants to respond to some of the discussions here.


Risker/Anne

On 15 November 2015 at 10:36, Pharos  wrote:

> The figure quoted is quite interesting, but do we have a comparable
> metric for the Wikimedia projects?
>
> "... incidences of homophobia, sexism and racism ... have fallen to a
> combined 2 percent of all games"
>
> 2% sounds "low", but do we actually know whether this is better or worse
> than what we see on our own projects?  Would our comparable metric be
> the % of bigoted comments per article, per talk page discussion, or per
> unit of time an editor spends on the project?  I would think that
> encountering bigoted comments in 1 of every 50 discussions would still
> be pretty significant.
>
> Thanks,
> Pharos
>

Re: [Wikimedia-l] On toxic communities

2015-11-15 Thread Katherine Casey
I'd be happy to offer my admin/oversighter experience and knowledge to help
you develop the labeling and such, Aaron! I just commented on Andreas's
proposal on the Community Wishlist, but to summarize here: I see a lot of
potential pitfalls in trying to handle/generalize this with machine
learning, but I also see a lot of potential value, and I think it's
something we should be investigating.

-Fluffernutter

On Sun, Nov 15, 2015 at 11:32 AM, Aaron Halfaker 
wrote:

> >
> > The League of Legends team collaborated with outside scientists to
> > analyse their dataset. I would love to see the Wikimedia Foundation
> > engage in a similar research project.
>
>
> Oh!  We are!  :) When we have time. :\ One of the projects I'd like to
> see done, but have struggled to find the time for, is a common talk page
> parser[1] that could produce a dataset of talk page interactions.  I'd
> like this dataset to be easy to join to editor outcome measures.  E.g.,
> there might be "aggressive" talk that we don't know is problematic until
> we see the kind of effect it has on other conversation participants.
>
> Anyway, I want some powerful utilities and datasets out there to help
> academics look into this problem more easily.  For revscoring, I'd like to
> be able to take a set of talk page diffs, have them classified in Wiki
> labels[2] as "aggressive", and then build a model for ORES[3] to be used
> however people see fit.  You could then use ORES to do offline analysis
> of discussions for research.  You could use ORES to interrupt a user
> before saving a change.  I'm sure people have other clever ideas for
> what to do with such a model, and I'm happy to enable them via the
> service.  The hard part is getting a good dataset labeled.
>
> If someone wants to invest some time and energy into this, I'm happy to
> work with you.  We'll need more than programming help.  We'll need a lot of
> help to figure out what dimensions we'll label talk page postings by and to
> do the actual labeling.
>
> 1. https://github.com/Ironholds/talk-parser
> 2. https://meta.wikimedia.org/wiki/Wiki_labels
> 3. https://meta.wikimedia.org/wiki/ORES
>



-- 
Karen Brown
user:Fluffernutter

*Unless otherwise specified, any email sent from this address is in my
volunteer capacity and does not represent the views or wishes of the
Wikimedia Foundation*

Re: [Wikimedia-l] On toxic communities

2015-11-15 Thread Pharos
The figure quoted is quite interesting, but do we have a comparable metric
for the Wikimedia projects?

"... incidences of homophobia, sexism and racism ... have fallen to a
combined 2 percent of all games"

2% sounds "low", but do we actually know whether this is better or worse
than what we see on our own projects?  Would our comparable metric be the
% of bigoted comments per article, per talk page discussion, or per unit
of time an editor spends on the project?  I would think that encountering
bigoted comments in 1 of every 50 discussions would still be pretty
significant.
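
The denominator matters a lot here.  As a back-of-the-envelope
illustration (the 10-comments-per-discussion figure is invented):

    # If 2% of individual comments are bigoted and a discussion
    # averages 10 comments, the share of discussions containing
    # at least one such comment is much higher than 2%:
    p_comment = 0.02
    comments_per_discussion = 10
    p_discussion = 1 - (1 - p_comment) ** comments_per_discussion
    print(round(p_discussion, 3))  # ~0.183, i.e. ~1 in 5 discussions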

Thanks,
Pharos

On Sun, Nov 15, 2015 at 1:21 PM, Ziko van Dijk  wrote:

> Hello,
>
> Just yesterday I had a long talk with a researcher about how to define
> and detect trolls on Wikipedia. E.g., whether "unintentional trolling"
> should be included or not.
>
> In my opinion, it is not possible to detect trollism, unkindness,
> harassment, mobbing, etc. by machine, except perhaps for swear words.
> A lot of turn-taking, deviation from the topic, and other phenomena can
> be experienced by the participants as positive or as negative. You might
> need to ask them, and even then they might not be aware of a problem
> that operates subtly. Also, third persons not involved in the
> conversation can be affected negatively (look at ... page X... and you
> know why you don't like to contribute there).
>
> Kind regards
> Ziko
>
>

Re: [Wikimedia-l] On toxic communities

2015-11-14 Thread Benjamin Lees
This article highlights the happier side of things, but it appears
that Lin's approach also involved completely removing bad actors:
"Some players have also asked why we've taken such an aggressive
stance when we've been focused on reform; well, the key here is that
for most players, reform approaches are quite effective. But, for a
number of players, reform attempts have been very unsuccessful which
forces us to remove some of these players from League entirely."[0]

A little context about League of Legends (I haven't played in a couple
years, so my apologies if anything I say is out of date):
* In an average game you are thrust onto a team with 4 complete
strangers you will probably never meet again, and must work together
to defeat the other team.
* Individual player mistakes hurt the team, often a lot.  Think making
an error at the World Series in baseball.
* A typical game lasts 20-50 minutes.  If you leave the game before it
finishes, you will be punished. (After 20 minutes your team can
surrender if 4 of your players agree to do so.)
* Some games affect your global ranking relative to all other players.

These game mechanics promote a form of tension which is part of the
excitement of the game but which is also sometimes stressful (if, say,
you're doing really badly and your team doesn't want to quit).  By and
large, Wikipedia's mechanics seem very different from this, but there are
a few areas where users are pushed into a more hostile role with one
another.  In those narrow cases, like the village pump, I could maybe
see benefits from trying to re-engineer interactions, but I'm
skeptical that this will somehow engineer a cultural shift.

P.S. As Rupert noted, over 90% of LoL players are male (how much over
90%?).[1] It would be interesting to know whether this percentage has
changed along with the improvements described in the article.

P.P.S. In League you have to pay if you want to transfer your account
from one region to another.  I'm sure we could resolve all ENGVAR
disputes once and for all by adding some region locking. :-)

[0] 
http://www.polygon.com/2014/7/21/5924203/league-of-legends-ban-code-2500-toxic-behavior-permabans
[1] http://majorleagueoflegends.s3.amazonaws.com/lol_infographic.png

On Fri, Nov 13, 2015 at 5:12 PM, Denny Vrandečić  wrote:
> Very interesting read (via Brandon Harris):
>
> http://recode.net/2015/07/07/doing-something-about-the-impossible-problem-of-abuse-in-online-games/
>
> "the vast majority of negative behavior ... did not originate from the
> persistently negative online citizens; in fact, 87 percent of online
> toxicity came from the neutral and positive citizens just having a bad day
> here or there."
>
> "... incidences of homophobia, sexism and racism ... have fallen to a
> combined 2 percent of all games. Verbal abuse has dropped by more than 40
> percent, and 91.6 percent of negative players change their act and never
> commit another offense after just one reported penalty."
>
> I have plenty of ideas about how to apply this to Wikipedia, and I am
> sure Dario and his team do as well :) - and there is some opportunity for
> the communities to use such results.
>
> Cheers,
> Denny


Re: [Wikimedia-l] On toxic communities

2015-11-13 Thread Chris Keating
Really interesting - thanks for sharing!

On Fri, Nov 13, 2015 at 10:12 PM, Denny Vrandečić 
wrote:

> Very interesting read (via Brandon Harris):
>
>
> http://recode.net/2015/07/07/doing-something-about-the-impossible-problem-of-abuse-in-online-games/
>
> "the vast majority of negative behavior ... did not originate from the
> persistently negative online citizens; in fact, 87 percent of online
> toxicity came from the neutral and positive citizens just having a bad day
> here or there."
>
> "... incidences of homophobia, sexism and racism ... have fallen to a
> combined 2 percent of all games. Verbal abuse has dropped by more than 40
> percent, and 91.6 percent of negative players change their act and never
> commit another offense after just one reported penalty."
>
> I have plenty of ideas about how to apply this to Wikipedia, and I am
> sure Dario and his team do as well :) - and there is some opportunity for
> the communities to use such results.
>
> Cheers,
> Denny