OK. Yesterday, a few other ENWP people and I were looking at what I think
was a series of edits by either a vandal bot or an inadequately designed
and unapproved good-faith bot. I read that it made approximately 500 edits
before someone who knew enough about ENWP saw what was happening and did
something about it. I don't know how many problematic bots we have, in
addition to vandal bots, but I am confident that they drain a nontrivial
amount of time from stewards, admins, and patrollers.

I don't know how high a priority WMF places on detecting and stopping
unwelcome bots, but I think that the question of how to decrease the
number and effectiveness of unwelcome bots would be a good topic for WMF
to research.

Pine
( https://meta.wikimedia.org/wiki/User:Pine )


On Sat, Feb 9, 2019 at 9:24 PM Gergo Tisza <[email protected]> wrote:

> On Fri, Feb 8, 2019 at 6:20 PM Pine W <[email protected]> wrote:
>
> > I don't know how practical it would be to implement an approach like this
> > in the Wikiverse, and whether licensing proprietary technology would be
> > required.
> >
>
> They are talking about Polyform [1], a reverse proxy that filters traffic
> with a combination of browser fingerprinting, behavior analysis and proof
> of work.
> Proof of work is not really useful unless you have huge levels of bot
> traffic from a single bot operator (it also means locking out users with
> no JavaScript); browser and behavior analysis very likely cannot be
> outsourced to a third party for privacy reasons. Maybe we could do it
> ourselves (although it would still raise interesting privacy questions),
> but it would be a huge undertaking.
>
>
> [1] https://www.kasada.io/product/
> _______________________________________________
> Wikitech-l mailing list
> [email protected]
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
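
For anyone unfamiliar with the proof-of-work idea Gergo mentions, here is
a minimal hashcash-style sketch (my own illustration; Polyform's actual
mechanism is proprietary, and all names here are made up). The server
hands out a random challenge, the client must burn CPU finding a nonce
whose hash has enough leading zero bits, and the server verifies with a
single cheap hash:

```python
import hashlib
import os

DIFFICULTY_BITS = 16  # leading zero bits required; tunable per traffic level


def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits


def issue_challenge() -> bytes:
    # Server side: a random challenge tied to the request or session.
    return os.urandom(16)


def solve(challenge: bytes) -> int:
    # Client side: brute-force a nonce; expected cost ~2**DIFFICULTY_BITS hashes.
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1


def verify(challenge: bytes, nonce: int) -> bool:
    # Server side: one hash to check, cheap regardless of difficulty.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

The asymmetry (expensive to solve, trivial to verify) is the whole point,
but it also shows Gergo's objection: the cost is per request, so it only
bites an operator sending bot traffic at scale, and in a browser the
solving loop has to run as JavaScript.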
