To conclude - worrying that AI would be racist is like worrying that it
would believe in horoscopes and parapsychology. If it is smart and not
deliberately biased - it won't. And if it comes to the (improbable)
conclusion that, for example, telepathy or a world government exists, or
that my compatriots have genetically better or worse intelligence than
people from some other nation - I want it to show its reasoning; maybe the
AI will help bring my world view closer to reality.

Moreover, I think we should first try to design AI to cope with different
world views and different hypotheses, and to make it critical, so that it
doesn't believe the first conspiracy theory it encounters.

On Thu, Feb 23, 2017 at 10:18 AM, Jan Matusiewicz <[email protected]
> wrote:

> >> But can you objectively define racism?
> Racism is a matter of a way of thinking rather than of a conclusion. A
> person (or AI) that doesn't think in a racist way would not come to racist
> conclusions if given true premises. Of course, if it were given false
> information it could come to a wrong conclusion, but that is a problem of
> every reasoning system.
>
> I don't think that taking race into account and drawing correlations
> between race and other things is racist, as long as your reasoning is not
> affected by the information about which race you belong to. AI would not
> belong to any human nationality, so it would be an external observer. Of
> course, different AIs could come to different conclusions if fed with
> different data, or because of the randomness of the process. But I don't
> think you need to censor the information sent to an AI - just don't feed
> it only the content of 4chan :)
>
> >> So, given that the preferred model to base AGI on - the human brain -
> is thus fallible, we should turn to machines to fix human morality
> The human brain is flawed in many ways. We don't need to copy all its
> errors, and it is easier not to implement all the biases of human
> reasoning. Morality is a completely different topic.
>
> On Thu, Feb 23, 2017 at 8:10 AM, Nanograte Knowledge Technologies <
> [email protected]> wrote:
>
>> "AGI should show us the truth. It shouldn't be deliberately biased. And
>> this is the best weapon against racism. Racism is irrational - it comes
>> from a bias against outsiders that is built into human nature."
>>
>> So, given that the preferred model to base AGI on - the human brain - is
>> thus fallible, we should turn to machines to fix human morality? I think
>> this is preposterous. The reasoning places the human species at the level
>> of a lowly animal, seemingly incapable of learning and adapting. For
>> interest, please refer to Covey's book, The 8th Habit, on human integrity.
>>
>> I'd rather propose building a separate species of machine, as its own
>> intelligence. Let humans sort out their self-destructive patterns as a
>> species. Only the naturally fittest would survive. For decades now, human
>> beings have proven not to be the naturally fittest, but the architectural
>> potential remains.
>> ------------------------------
>> *From:* Jan Matusiewicz <[email protected]>
>> *Sent:* 23 February 2017 12:20 AM
>> *To:* AGI
>> *Subject:* Re: [agi] Politically Incorrect AGI, the Machined Learned
>> Racism Problem
>>
>>
>> AGI should show us the truth. It shouldn't be deliberately biased. And
>> this is the best weapon against racism. Racism is irrational - it comes
>> from a bias against outsiders that is built into human nature. Humans have
>> a tendency to generalize the negative traits of individuals to whole
>> groups, in a very different way for our own group than for the alien
>> group. We have examples of that throughout history and today. The idea
>> that some nations should be eradicated or enslaved because they differ so
>> much from our own nation is ridiculous. But it was very popular. Assuming
>> that all people from particular countries should be banned from entering
>> your country is also irrational, but appealing to many.
>>
>> An AGI which tries to be as objective as possible would be free from this
>> dangerous tendency. That would make political correctness unnecessary for
>> it. However, it is possible that people would interpret its findings in a
>> racist way. For example, it may find that ethnic group A is better at IQ
>> tests than group B, but that this is due to the lower educational level of
>> B. Some people would repeat only the first part of that finding.
>> On Wed, 22 Feb 2017 21:51 Dr Miles Dyson, <[email protected]>
>> wrote:
>>
>>> If the world is such that the data that exists supports empirical and
>>> scientific racism (or insert your politically incorrect world view here),
>>> how could we create an AGI that could avoid (or minimize the likelihood
>>> of) taking such an undesirable view?
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
