One way to do this might be to universally install disambiguation pages on 
every word of a tweet. Of course, some words, like "a" or "on", wouldn't 
necessarily explode (i.e., the disambiguation page would stay small, with only 
a few entries). But some words (e.g. "her" or "him") might explode in an 
interesting way. 
20 years ago, the disambiguation pages for these pronouns would have reflected 
society's systemic prejudice against trans people. But today they would be 
fully blossomed launching points.

A "bottom-up" method could then be devised to track and take statistics on the 
paths followed through these pages, inductively inferring categories from that 
traffic.
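A minimal sketch of what that bottom-up inference might look like. All the 
names here (infer_categories, the sample paths) are hypothetical illustrations, 
not anything Twitter or Wikipedia actually exposes: each "path" is the sequence 
of disambiguation entries a reader clicks through starting from a word in a 
tweet, and a word's inferred "category" is simply its most-travelled outgoing 
entry.

```python
from collections import Counter

def infer_categories(paths):
    """Tally traffic on each (source, destination) click and, for each
    source word, return the destination entry with the most traffic.
    Ties break alphabetically, just to keep the sketch deterministic."""
    edge_counts = Counter()
    for path in paths:
        for src, dst in zip(path, path[1:]):
            edge_counts[(src, dst)] += 1
    best = {}  # src -> (destination, count)
    for (src, dst), n in sorted(edge_counts.items()):
        if src not in best or n > best[src][1]:
            best[src] = (dst, n)
    return {src: dst for src, (dst, _) in best.items()}

# Hypothetical click traffic through disambiguation pages:
paths = [
    ["her", "Her (pronoun)"],
    ["her", "Her (pronoun)"],
    ["her", "Her (film)", "Spike Jonze"],
    ["on", "On (preposition)"],
]
print(infer_categories(paths))
# "her" resolves to "Her (pronoun)" because that edge carries the most traffic.
```

The point of the toy is only that the categories fall out of aggregate reader 
behavior rather than being imposed top-down by an editor.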

On 5/28/20 9:39 AM, Marcus Daniels wrote:
> I would say that companies like Twitter should massively annotate serious 
> offenders and cancel accounts as needed. It doesn't have to come from the 
> top, but it isn't going to come from the bottom. There should be processes 
> to keep conspicuous liars from ever gaining visibility. They don't have to 
> involve black vans, as satisfying as that might be. But maybe advanced 
> natural language processing codes that escalate issues to editors.

-- 
☣ uǝlƃ

-- --- .-. . .-.. --- -.-. -.- ... -..-. .- .-. . -..-. - .... . -..-. . ... 
... . -. - .. .- .-.. -..-. .-- --- .-. -.- . .-. ...
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
