Solving YouTube's Abusive Content Problems -- via Crowdsourcing

https://lauren.vortex.com/2018/03/11/solving-youtubes-abusive-content-problems-via-crowdsourcing


We all know that various governments have their long knives out for
YouTube over its content. We also know that Google is significantly
increasing the number of workers who will review YT abuse reports.

But we also know that the sheer volume of the video uploading firehose
is going to keep leaving very large numbers of abusive videos online
that can quickly rack up high view counts, even if YT employed
techniques that I've previously urged, such as human review of videos
that are about to go onto the trending lists before they actually
appear there.

The scale of uploaded videos is enormous -- but the scale of viewing
users is also very large.

Is there some way to leverage the latter to help deal with abusive
content in the former, as a proactive effort to help keep government
censorship of YT at bay?

YT already has a "Trusted Flaggers" program that gives abuse review
priority to videos that these users have flagged. But (as far as I
know) this only applies to videos that these users have happened to
find and see of their own volition.

I don't have the hard data to prove this, but I have a strong
suspicion that vast numbers of users would be willing to participate
as organized volunteer proactive "screeners" of a sort for YT,
especially if there were even some minor financial incentive for their
participation (think in terms of a small amount of Play Store credit,
for example).

What if public videos that were suddenly attracting significant
numbers of views ("significant" yet to be defined) were pushed to some
odd number (to avoid ties) of such volunteer viewers who have
undergone appropriate online training regarding YT's Terms of Service?
We'd require that they actually view reasonable amounts of these
videos (yes, there would be ways to attempt gaming this, but remember
we're talking about very large numbers of volunteers, so much of that
risk should wash out if care is used in tracking analysis).
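
To make the mechanics concrete, here's a minimal sketch of how the
trigger and panel assignment might work. Every name and number below
-- the view threshold, the panel size, the function names -- is a
hypothetical placeholder of mine, not anything YT actually implements:

    import random

    VIEW_SPIKE_THRESHOLD = 10000  # hypothetical "significant" views/hour
    PANEL_SIZE = 5                # odd, to avoid tie votes

    def needs_screening(views_last_hour):
        # Only videos suddenly attracting significant views enter
        # the volunteer screening system at all.
        return views_last_hour >= VIEW_SPIKE_THRESHOLD

    def assign_panel(volunteer_pool, panel_size=PANEL_SIZE):
        # Random selection means no volunteer controls (or can
        # predict) which videos they'll be asked to screen.
        assert panel_size % 2 == 1, "odd panel size avoids ties"
        return random.sample(volunteer_pool, panel_size)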

They vote/rate the videos as acceptable or not. If the majority vote a
video unacceptable, it gets pushed to the formal Google abuse
screeners for a decision. If any given volunteer is found over time to
be providing bad decisions, they're dropped from the program.
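
Again as a rough sketch only, under the same hypothetical assumptions,
the vote tally and volunteer quality tracking could look something
like this:

    from collections import Counter

    ACCURACY_FLOOR = 0.8  # hypothetical cutoff for dropping a volunteer

    def panel_escalates(votes):
        # votes: list of "acceptable" / "unacceptable" strings from
        # the odd-sized panel. A majority of "unacceptable" escalates
        # the video to the formal screeners -- it does NOT remove it.
        counts = Counter(votes)
        return counts["unacceptable"] > counts["acceptable"]

    def update_reliability(volunteer, vote, staff_decision):
        # Score each volunteer against the staff screener's final
        # call; anyone whose long-term accuracy falls below the
        # floor is dropped from the program.
        volunteer["total"] += 1
        if vote == staff_decision:
            volunteer["correct"] += 1
        volunteer["active"] = (volunteer["correct"] / volunteer["total"]
                               >= ACCURACY_FLOOR)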

Most videos would have small enough numbers of views to never even
enter this system. But it would provide a middle ground to help deal
with videos that are suddenly getting more visibility *before* they
can cause big problems, and this technique doesn't rely on random
viewers taking the initiative to flag abusive videos (or, for that
matter, figuring out how to flag them, since flagging is not typically
a top-level YT user interface element these days, as I've previously
noted).

Since participants in this program would have no control over which
specific videos would be pushed to them for a vote, and since again
we'd be talking about quite large numbers of participants (whose
performance we'd be monitoring over time), the ability to purposely
claim that nonabusive videos were abusive (or the reverse) would be
minimized.

No action would be taken against a video unless a regular YT screener
later in the pipeline had also declared it abusive after the volunteer
screeners down-voted it -- providing even more protection.
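
Putting these safeguards together -- and building on the hypothetical
helpers sketched above (with screen() standing in for however a single
volunteer's vote would be collected, and staff_review for the formal
screening queue) -- the end-to-end flow might be as simple as:

    def review_pipeline(video, volunteer_pool, staff_review):
        # Spike detection -> random odd panel -> majority vote ->
        # independent staff confirmation before any action is taken.
        if not needs_screening(video["views_last_hour"]):
            return "no-review"  # most videos never enter the system
        panel = assign_panel(volunteer_pool)
        votes = [screen(video, volunteer) for volunteer in panel]
        if not panel_escalates(votes):
            return "cleared-by-panel"
        # Volunteers alone never remove a video; a regular YT
        # screener must independently concur before action is taken.
        if staff_review(video) == "unacceptable":
            return "action"
        return "cleared-by-staff"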

How to define abusive videos is of course a separate discussion
relating directly to the YT Terms of Service, but this could include
the kinds of content violations that we all know about in relation to
YT (hate speech, dangerous pranks and dares, threats, etc.), and even
areas such as obvious obnoxious Content ID evasions (e.g.,
program/movie video inset boxes against random backgrounds, artificial
program run time variations, and so on).

I do realize that this is a fairly radical concept and that there are
all manner of details that aren't considered in this brief summary.
But I am increasingly convinced that it's going to take some sort of
new approach to help deal with these problems proactively, and to help
prevent governments from moving in and wrecking the wonderful YouTube
ecosystem with escalating politically motivated demands and threats.

--Lauren--
Lauren Weinstein (lau...@vortex.com): https://www.vortex.com/lauren 
Lauren's Blog: https://lauren.vortex.com
Google Issues Mailing List: https://vortex.com/google-issues
Founder: Network Neutrality Squad: https://www.nnsquad.org 
         PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility: https://www.pfir.org/pfir-info
Member: ACM Committee on Computers and Public Policy
Google+: https://google.com/+LaurenWeinstein
Twitter: https://twitter.com/laurenweinstein
Tel: +1 (818) 225-2800