I also want to add that you can change ORES sensitivity in your preferences,
and add: "We deliberately set the default threshold low to capture all
vandalism cases, so false positives are expected, unlike anti-vandalism bots,
which set the threshold high to capture only vandalism cases (and don't
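The threshold trade-off described above can be sketched in a few lines of Python. This is not ORES's actual implementation; the scores and labels below are made-up examples, purely to illustrate how a low threshold catches all vandalism at the cost of false positives, while a high threshold flags only clear vandalism but misses some:

```python
def flag_edits(scores, threshold):
    """Return indices of edits whose damage score meets the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Hypothetical model scores (probability that an edit is damaging)
# and made-up ground-truth labels (True = actually vandalism).
scores = [0.95, 0.80, 0.55, 0.40, 0.30, 0.10]
labels = [True, True, False, True, False, False]

for threshold in (0.35, 0.90):
    flagged = flag_edits(scores, threshold)
    caught = sum(labels[i] for i in flagged)   # true positives
    false_pos = len(flagged) - caught          # non-vandalism flagged anyway
    missed = sum(labels) - caught              # vandalism below the threshold
    print(f"threshold={threshold}: caught={caught}, "
          f"false_pos={false_pos}, missed={missed}")
```

With these toy numbers, the low threshold (0.35) catches all three vandalism edits but also flags one good edit, while the high threshold (0.90) flags nothing but vandalism yet misses two cases, which is the recall-versus-precision choice the thread is discussing.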
Thanks Luis! :)
And I just finished setting up a new labeling campaign for English
Wikipedia. This data will help us train/test more accurate models.
See https://en.wikipedia.org/wiki/Wikipedia:Labels/Edit_quality for
instructions on how to get started.
-Aaron
On Tue, Aug 23, 2016 at 4:05
Thanks for the detailed explanation, Aaron. As always your work is a model
in transparency for the rest of us :)
On Tue, Aug 23, 2016 at 12:40 PM Aaron Halfaker wrote:
Hi Luis! Thanks for taking a look.
First, I should say that false positives should be expected. We're working
on better signaling in the UI so that you can differentiate the edits that
ORES is confident about and those that it isn't confident about -- but are
still worth your review.
So, in
Very cool! Is there any way for users of this tool to help train it? For
example, the first four edits it flagged in my watchlist were all false
positives (the next five or six were correctly flagged). It'd be nice to be
able to contribute to training the model somehow when we see these false
positives.