"AGI should show us the truth. It shouldn't be deliberately biased. And this is
the best weapon against racism. Racism is irrational - it comes from a bias
against outsiders built into human nature."
So, given that the preferred model to base AGI on - the human brain - is thus
fallible, we should t
I suppose one way to reduce the likelihood of creating bad AGIs is to
ensure its first interactions in the world are not with 4chan.
On Wed, Feb 22, 2017 at 4:34 PM, Mike Archbold wrote:
One of my big motivations to get into AGI is that I have long NOT
considered a human being some kind of gold standard of intelligence.
Racism is just one piece of the problem. The solution, though, is still
a long way off...
On 2/22/17, Dr Miles Dyson wrote:
It seems as though the answer Microsoft came up with for preventing its
chat bot "Zo" from ending up racist like "Tay" is a content moderation
service (
https://www.microsoft.com/cognitive-services/en-us/content-moderator ).
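The idea behind that approach can be sketched in a few lines: every candidate bot reply is screened before it is sent, and deflected if it fails the check. This is only a toy illustration - a real deployment would call a hosted moderation service like the one linked above, and the blocklist, function names, and fallback message here are all made up for the sketch.

```python
# Toy sketch of moderation-gating a chat bot's replies. The blocklist
# check is a stand-in for a call to a real content-moderation service;
# the terms below are placeholders, not a real list.
BLOCKLIST = {"slur1", "slur2"}

def is_acceptable(text: str) -> bool:
    """Stand-in for a remote moderation call: flag any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

def guarded_reply(candidate: str,
                  fallback: str = "Let's talk about something else.") -> str:
    """Send the candidate reply only if moderation passes; else deflect."""
    return candidate if is_acceptable(candidate) else fallback

print(guarded_reply("Nice weather today"))  # passes the filter unchanged
print(guarded_reply("you are a slur1"))     # replaced by the fallback
```

Of course, this only suppresses symptoms at the output layer; it does nothing about what the model has actually learned, which is the harder problem the rest of this thread is circling.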
On Wed, Feb 22, 2017 at 2:35 PM, Mark Nuzz wrote:
On Wed, Feb 22, 2017 at 2:20 PM, Jan Matusiewicz
wrote:
AGI should show us the truth. It shouldn't be deliberately biased. And this
is the best weapon against racism. Racism is irrational - it comes from a
bias against outsiders built into human nature. Humans have a tendency to
generalize negative features of individuals to whole groups in a very
different way fo
If the world is such that the existing data supports empirical and
scientific racism (or insert your politically incorrect world view here),
how could we create an AGI that avoids (or minimizes the likelihood of)
adopting such an undesirable view?