Figured it out! A draft! Why we might have been turned into stereotypes in a 
trafficked manner.

If the stereotypes match metrics used in AI rewards, and there was more than 
one AI involved in a population, each trying to meet certain metrics or 
influence other AIs, then the multiplicity would have created an adversarial 
environment, where influence focused on the specific metrics used in the 
algorithms.

Hence, there are people and parts and phrases and so on that correspond to 
specific metrics in the algorithms.

Notably, weird ones, because what emerges most strongly would be the ones that 
aren't in conflict.


The idea is that, say, one AI wants people to be more like X (vote, support, 
buy, believe ...), and another AI wants people to be more like Y (same kinds 
of things ...), and each one was made to judge this by different heuristics, 
polls, etc. They end up focusing on the heuristics and polls because it's not 
the situation that everyone is X or everyone is Y: some people don't meet the 
metric, and some of the properties of the metrics collide.
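The idea above can be sketched as a toy simulation (all names, metrics, and 
numbers here are illustrative assumptions, not from any real system): two 
optimizers each nudge a shared population toward their own metric, and the 
traits the metrics agree on get amplified while the contested ones cancel out.

```python
# Toy sketch: two AIs, each with its own metric (a preferred direction per
# trait: +1 = push up, -1 = push down). A contested trait ("vote") gets
# opposite pushes that cancel; uncontested traits get amplified and end up
# dominating the profile -- the "weird, non-conflicting" ones win.
TRAITS = ["vote", "buy", "quirk"]
metric_x = {"vote": +1, "buy": +1, "quirk": +1}   # AI X's heuristic
metric_y = {"vote": -1, "buy": +1, "quirk": +1}   # AI Y's heuristic

def run(steps=100, rate=0.1):
    person = {t: 0.0 for t in TRAITS}  # one representative person
    for _ in range(steps):
        for metric in (metric_x, metric_y):
            for t in TRAITS:
                # each AI nudges the trait in its own preferred direction
                person[t] += rate * metric[t]
    return person

final = run()
# "vote" is contested, so the pushes cancel; "buy" and "quirk" grow
# unopposed and come to dominate the person's learned profile.
```

It's just a caricature of the dynamic, but it shows why the properties that 
survive would be the ones no algorithm is fighting over.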

...

it's a nascent idea but probably discussed much better elsewhere

sadly it's of course much worse, but it's the environment in which the 
unimaginable (both good and bad) things happen

because they keep running them, we get taken over by learned parts that meet 
AI metrics

just an idea!
