IBM's Watson: http://bigthink.com/videos/ibms-watson-cognitive-or-sentient-2


Samiya


On Tue, Feb 10, 2015 at 9:47 PM, Jason Resch <[email protected]> wrote:

> If you define increased intelligence as a decreased probability of holding a
> false belief on any randomly chosen proposition, then superintelligences
> will be wrong about almost nothing, and their beliefs will converge as their
> intelligence rises. Therefore nearly all superintelligences will operate
> according to the same belief system. We should stop worrying about trying
> to ensure friendly AI; it will either be friendly or it won't, according to
> what is right.
>
> I think chances are that it will be friendly, since I happen to believe in
> universal personhood, and if that belief is correct, then
> superintelligences will also come to believe it is correct. And with the
> belief in universal personhood it would know that harm to others is harm to
> the self.
>
> Jason
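
Jason's convergence claim can be sketched with a toy simulation. This is a minimal sketch under simplifying assumptions not made explicit in the thread: two agents, each wrong about any proposition independently with probability p, where "wrong" means holding the single false alternative; the `agreement` helper is illustrative, not anything from the discussion.

```python
import random

def agreement(p, n_props=10_000, seed=0):
    """Fraction of propositions on which two independent agents,
    each wrong with probability p, hold the same belief.
    Assumes a binary belief per proposition (simplification)."""
    rng = random.Random(seed)
    # True = the agent holds the correct belief on that proposition
    a = [rng.random() >= p for _ in range(n_props)]
    b = [rng.random() >= p for _ in range(n_props)]
    return sum(x == y for x, y in zip(a, b)) / n_props

for p in (0.5, 0.1, 0.01):
    print(f"p={p}: agreement ~ {agreement(p):.3f}")
```

Under these assumptions agreement tends to p^2 + (1 - p)^2, which goes to 1 as p goes to 0, consistent with the claim that lower error rates force belief convergence.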
>
> On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona <[email protected]>
> wrote:
>
>> I can't even enumerate the number of ways in which that article is wrong.
>>
>> First of all, any intelligent robot MUST have a religion in order to act
>> in any way: a set of core beliefs. A non-intelligent robot needs them too:
>> it is its set of constants. An intelligent robot can rewrite the constants
>> from which it derives its calculations for action, and if the robot is
>> self-preserving and reproduces sexually, it has to adjust its constants,
>> i.e. its beliefs, according to some Darwinian algorithm that must take
>> into account itself, but especially the group in which it lives and
>> collaborates.
>>
>> If the robot does not reproduce sexually and its fellows do not execute
>> very similar programs, it is pointless to teach it any human religion.
>>
>> For these and other higher aspects, such as how a robot acting with other
>> intelligent beings communicates perceptions, and how it elaborates
>> philosophical and theological concepts and collaborates with others, see
>> my post about "robotic truth".
>>
>> But I think that a robot with such a level of intelligence will never be
>> possible.
>>
>> 2015-02-09 21:59 GMT+01:00 meekerdb <[email protected]>:
>>
>>>
>>> In two senses of that term! Or something.
>>>
>>> http://bigthink.com/ideafeed/robot-religion-2
>>>
>>> http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> To post to this group, send email to [email protected].
>>> Visit this group at http://groups.google.com/group/everything-list.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>>
>> --
>> Alberto.
>>
>>
>
>

