On 10 Feb 2015, at 19:04, Telmo Menezes wrote:
On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch <[email protected]>
wrote:
If you define increased intelligence as a decreased probability of
holding a false belief about any randomly chosen proposition, then
superintelligences will be wrong about almost nothing, and their
beliefs will converge as their intelligence rises. Therefore nearly
all superintelligences will operate according to the same belief
system. We should stop worrying about trying to ensure friendly AI;
it will either be friendly or it won't, according to what is right.
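Jason's definition can be made concrete with a toy simulation. Everything here (the binary propositions, the per-agent error probabilities, the proposition count) is an illustrative assumption, not anything stated in the thread; the sketch just shows that as each agent's error probability falls, two independent agents come to agree on almost every proposition:

```python
import random

def agent_beliefs(truth, error_prob, rng):
    """An agent's belief vector: each true fact is flipped with error_prob."""
    return [t if rng.random() > error_prob else not t for t in truth]

def agreement(a, b):
    """Fraction of propositions on which two agents hold the same belief."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
# A hypothetical "world": 100,000 binary propositions.
truth = [rng.random() < 0.5 for _ in range(100_000)]

for p in (0.4, 0.1, 0.01):
    a = agent_beliefs(truth, p, rng)
    b = agent_beliefs(truth, p, rng)
    print(f"error={p}: agreement={agreement(a, b):.3f}")
```

Analytically, two agents with error rates p agree with probability (1 - p)^2 + p^2, which tends to 1 as p tends to 0 — the convergence Jason describes.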
I wonder if this isn't prevented by Gödel's incompleteness. Given
that the superintelligence can never be certain of its own
consistency, it must remain fundamentally agnostic. In this case, we
might have different superintelligences working under different
hypotheses, possibly occupying niches, just as happens with
Darwinism.
I think chances are that it will be friendly, since I happen to
believe in universal personhood, and if that belief is correct, then
superintelligences will also come to believe it is correct. And with
the belief in universal personhood, a superintelligence would know
that harm to others is harm to the self.
I agree with you, with the difference that I try to assume universal
personhood without believing in it, to avoid becoming a religious
fundamentalist.
Well, you can derive universal personhood from simpler hypotheses,
also. Of course you have to assume those simpler hypotheses.
I tend to equate belief with assumption. The difference between
belief and knowledge is that beliefs are revisable: they lack []A -> A.
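Bruno's shorthand can be spelled out. On the standard modal-logic reading (an assumption about his notation, though it matches his usage elsewhere), knowledge validates the reflection axiom T while belief does not:

```latex
% Knowledge (epistemic logic): the T axiom -- what is known is true.
\Box A \rightarrow A
% Belief (doxastic logic): T is not a theorem, so
% \Box A \wedge \neg A is consistent -- a belief can be held and yet
% false, which is exactly what makes it revisable.
\nvdash \Box A \rightarrow A
```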
Bruno
Telmo.
Jason
On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona <[email protected]
> wrote:
I can't even enumerate the number of ways in which that article is
wrong.
First of all, any intelligent robot MUST have a religion in order to
act in any way: a set of core beliefs. A non-intelligent robot needs
them too: it is its set of constants. An intelligent robot can
rewrite the constants from which it derives its calculations for
action, and if the robot is self-preserving and reproduces sexually,
it has to adjust its constants, i.e. its beliefs, according to some
Darwinian algorithm that must take into account the robot itself but
especially the group in which it lives and collaborates.
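Alberto's idea of constants adjusted by a Darwinian algorithm can be sketched as a plain evolutionary loop. The fitness function, the "group norm" it rewards, and all parameters below are hypothetical illustrations, not anything he specifies:

```python
import random

def mutate(constants, rng, scale=0.1):
    """A child whose 'core beliefs' (constants) are slightly perturbed."""
    return [c + rng.gauss(0, scale) for c in constants]

def evolve(fitness, population, generations, rng):
    """Minimal Darwinian loop: keep the fitter half, refill with mutants."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(s, rng) for s in survivors]
    return max(population, key=fitness)

# Hypothetical fitness: the group rewards constants close to a shared norm,
# echoing Alberto's point that the group matters more than the individual.
rng = random.Random(0)
group_norm = [1.0, -2.0, 0.5]

def fitness(constants):
    return -sum((c - g) ** 2 for c, g in zip(constants, group_norm))

population = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(40)]
best = evolve(fitness, population, 200, rng)
```

Because survivors are carried over unmutated, the best individual's fitness never decreases, and the population's constants drift toward the group norm over generations.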
If the robot does not reproduce sexually and its fellows do not
execute very similar programs, it is pointless to teach it any
human religion.
On these and other higher aspects, such as how a robot acting with
other intelligent beings communicates perceptions, elaborates
philosophical and theological concepts, and collaborates with
others, see my post about "robotic truth".
But I think that a robot with such level of intelligence will never
be possible.
2015-02-09 21:59 GMT+01:00 meekerdb <[email protected]>:
In two senses of that term! Or something.
http://bigthink.com/ideafeed/robot-religion-2
http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
--
Alberto.
http://iridia.ulb.ac.be/~marchal/