On Sat, Jun 24, 2017 at 2:49 AM, Kerry Raymond <kerry.raym...@gmail.com>
wrote:

> No right to be offended? To say to someone "you don't have the right to be
> offended" seems pretty offensive in itself. It seems to imply that their
> cultural norms are somehow inferior or unacceptable.
>

I'm not sure that I worded my comment as clearly as I would have liked. I would like
to reduce the intensity and frequency of toxic behavior, but there's some
difficulty in defining what is toxic or unacceptable. If person X says
something that person Y finds offensive, that in and of itself doesn't mean
that person X was being intentionally malicious. Cultural norms and
personal sensitivities vary widely, and there is a danger that attempts to
reduce conflict will be done in such a way that freedom of expression is
suppressed. As an example, there are statements in British English that I
am told are highly offensive, but that seem mild to me when I hear them
through an American cultural lens. Having an AI, or humans, attempt to
police the degree to which a statement is offensive seems like a minefield.
Perhaps a better way to approach the situation is to look at intent, which
I think is similar to your next point:


>
> With the global reach of Wikipedia, there are obviously many points of
> view on what is or isn't offensive in what circumstances. Offence may not
> be intended at first, but, if after a person is told their behaviour is
> offensive and they persist with that behaviour, I think it is reasonable to
> assume that they intend to offend. Which is why the data showing there is a
> group of experienced users involved in numerous personal attacks demands
> some human investigation of their behaviour.
>

I think that looking at intent, rather than solely at the content of what
was said, sounds like a good idea. However, I'm not sure I'd always agree
that if person X is told that statement A is offensive to person Y, then
person X should necessarily stop, because what person X is saying may seem
reasonable to person X (for example, "It's OK to eat meat") but highly
offensive to person Y. I think a more nuanced approach would
be to look at what person X's intent is in saying "It's OK to eat meat": is
the person expressing or arguing for their views in good faith, or are they
acting in bad faith and intentionally trying to provoke person Y?
Fortunately, in my experience, the cases where people are being malicious
are usually clear-cut, so admins and others are not often called on to
evaluate whether a statement was acceptable. Name-calling in any language
does not seem to go over well, and I think that most of us who have the
tools to block would be willing to use them if a conversation degenerated
to that point. Unfortunately, like you, my perception in the past was that
there were some experienced users on English Wikipedia (and perhaps other
language editions as well) whose needlessly provocative behavior was
tolerated; I would like to think that the standards for civility are being
raised.

I'm aware of WMF's research into the frequency of personal attacks; I
wonder whether there are charts of how the frequency is changing over time.


> Similarly for a person offended, if there is a genuinely innocent
> interpretation to something they found offensive and that is explained to
> them (perhaps by third parties), I think they need to be accepting that no
> offence was intended on that occasion. Obviously we need a bit of give and
> take. But I think there have to be limits on the repeated behaviour (either
> in giving the offence or taking the offence).
>

In general, I agree.

There are some actions for which I could support "one strike and you're
out"; I once kicked someone out of an IRC channel for uncivil behavior with
little (perhaps no) warning because the situation seemed so clear to me,
and no one complained about my decision. I think that in many cases it's
clear whether someone is making a personal attack, but some cases are not
so clear, and I want to be careful about the degree to which WMF
encourages administrators to rely on an AI to make decisions. Even if an
AI is trained extensively with native speakers of a language, there can be
significant differences in how a statement is interpreted.

Pine


>
> Kerry
>
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
