Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou stath...@gmail.com
wrote:



 On Wednesday, February 11, 2015, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI; it will either be friendly or it won't, according to
 what is right.

 I think chances are that it will be friendly, since I happen to believe
 in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


 Having accurate beliefs about the world and having goals are two unrelated
 things. If I like stamp collecting, being intelligent will help me to
 collect stamps, it will help me see if stamp collecting clashes with a
 higher priority goal, but it won't help me decide if my goals are worthy.



Were all your goals set at birth and driven by biology, or are some of your
goals based on what you've since learned about the world? Perhaps learning
about universal personhood (for example), could lead one to believe that
charity is a worthy goal, and perhaps deserving of more time than
collecting stamps.

Jason

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 5:57 PM, LizR lizj...@gmail.com wrote:

 I call this the Cyberman (or Mr Spock) problem. The Cybermen in Doctor Who
 are logical and unemotional, yet they wish to convert the rest of the world
 to be like them. Why? Without emotion they have no reason to do that, or
 anything else. (Likewise Mr Spock, except as we know he only repressed his
 emotions.)


I'm not sure whether emotions are necessary to have goals. Then again,
perhaps they are.

Jason



Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 6:40 PM, meekerdb meeke...@verizon.net wrote:


 On 2/10/2015 8:47 AM, Jason Resch wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI; it will either be friendly or it won't, according to
 what is right.


 The problem isn't beliefs, it's values.  Humans have certain core values
 selected by evolution; and in addition they have many secondary culturally
 determined values.  What values will super-AI have and where will it get
 them and will they evolve?  That seems to be the main research topic at the
 Machine Intelligence Research Institute.


Were all your values set at birth and driven by biology, or are some of
your values based on what you've since learned about the world? If values
can be learned, and if morality is a field that has objective truth, then
why wouldn't a superintelligence approach a correct value system?

Jason



Re: evangelizing robots

2015-02-10 Thread Stathis Papaioannou
On Wednesday, February 11, 2015, Jason Resch jasonre...@gmail.com wrote:



 On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou stath...@gmail.com wrote:



 On Wednesday, February 11, 2015, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having
 a false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI; it will either be friendly or it won't, according to
 what is right.

 I think chances are that it will be friendly, since I happen to believe
 in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


 Having accurate beliefs about the world and having goals are two
 unrelated things. If I like stamp collecting, being intelligent will help
 me to collect stamps, it will help me see if stamp collecting clashes with
 a higher priority goal, but it won't help me decide if my goals are worthy.



 Were all your goals set at birth and driven by biology, or are some of
 your goals based on what you've since learned about the world? Perhaps
 learning about universal personhood (for example), could lead one to
 believe that charity is a worthy goal, and perhaps deserving of more time
 than collecting stamps.


The implication is that if you believe in universal personhood then even if
you are selfish you will be motivated towards charity. But the selfishness
itself, as a primary value, is not amenable to rational analysis. There is
no inconsistency in a superintelligent AI that is selfish, or one that is
charitable, or one that believes the single most important thing in the
world is to collect stamps.


-- 
Stathis Papaioannou



Re: What over 170 people think about machines that think

2015-02-10 Thread meekerdb

On 2/10/2015 5:29 PM, LizR wrote:
On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net wrote:


On 2/4/2015 11:14 AM, Bruno Marchal wrote:

On 03 Feb 2015, at 20:13, Jason Resch wrote:


I agree with John. If consciousness had no third-person observable effects,
it would be an epiphenomenon. And then there is no way to explain why we're
even having this discussion about consciousness.


So we all agree on this.


?? Why aren't first person observable effects enough to discuss?


I guess because if there are no third-person observable effects of consciousness, then I 
can't detect any other conscious entities to discuss the effects with...


The epiphenomenon model says there are third-person observable effects of the phenomenon, 
which suffice for detecting other entities.  Whether the other entities are really 
conscious or just faking it is a matter of inference.


Brent



Re: evangelizing robots

2015-02-10 Thread Samiya Illias


 On 11-Feb-2015, at 6:40 am, John Clark johnkcl...@gmail.com wrote:
 
 On Tue, Feb 10, 2015  Alberto G. Corona agocor...@gmail.com wrote:
 
  I can't even enumerate the number of ways in which that article is wrong.
 
  I stopped reading after the following parochial imbecility: "I don't see 
  Christ's redemption limited to human beings."  
 
  First of all, any intelligent robot MUST have a religion in order to act 
  in any way.
 
  Yet another example of somebody in love with the English word "religion" 
  but not with the meaning behind it. 
  
  But I think that a robot with such level of intelligence will never be 
  possible
 
  So you think that random mutation and natural selection can produce an 
  intelligent being but an intelligent designer can't. Why? 

I am so happy to read this comment of yours. I hope someday you'll come to 
reason that even we have been produced by an intelligent designer. 

Samiya 

 
   John K Clark 
 



Re: evangelizing robots

2015-02-10 Thread Stathis Papaioannou
On Wednesday, February 11, 2015, John Clark johnkcl...@gmail.com wrote:

 On Tue, Feb 10, 2015 at 1:21 PM, Jason Resch jasonre...@gmail.com wrote:


  a true super intelligence may never perform any actions, as it's trapped
 in never being certain (and knowing it never can be certain) that its
 actions are right.


 Why in the world would an intelligent agent need to be certain before it
 could act?

  John K Clark


I think some assume a superintelligence would act in the way intelligent
people are sometimes caricatured in popular culture: emotionless,
obsessional, valuing knowledge and certainty above all else. But an
intelligent person may act as unwisely as a stupid person, even if in full
knowledge of the consequences.


-- 
Stathis Papaioannou



Re: evangelizing robots

2015-02-10 Thread John Clark
On Tue, Feb 10, 2015  Alberto G. Corona agocor...@gmail.com wrote:

 I can't even enumerate the number of ways in which that article is wrong.


I stopped reading after the following parochial imbecility: "I don't see
Christ's redemption limited to human beings."

 First of all, any intelligent robot MUST have a religion in order to act
 in any way.


Yet another example of somebody in love with the English word "religion"
but not with the meaning behind it.


  But I think that a robot with such level of intelligence will never be
 possible


So you think that random mutation and natural selection can produce an
intelligent being but an intelligent designer can't. Why?

  John K Clark



Re: Cosmology from Quantum Potential

2015-02-10 Thread LizR
Is there a simple explanation for dummies of what an infinitely old
universe might entail - is this something like eternal inflation, or an
infinitely protracted collapse preceding the apparent big bang, or
something else?

On 11 February 2015 at 10:41, LizR lizj...@gmail.com wrote:

 Very interesting, if true. (So they've removed that pesky factor of 10 to
 the power of 120 from the calculations...!?)

 On 11 February 2015 at 10:34, Platonist Guitar Cowboy 
 multiplecit...@gmail.com wrote:

 Cosmology from quantum potential
  Ahmed Farag Ali (http://arxiv.org/find/gr-qc/1/au:+Ali_A/0/1/0/all/0/1),
  Saurya Das (http://arxiv.org/find/gr-qc/1/au:+Das_S/0/1/0/all/0/1)
  (Submitted on 11 Apr 2014 (v1: http://arxiv.org/abs/1404.3093v1), last
  revised 29 Dec 2014 (this version, v3))

 It was shown recently that replacing classical geodesics with quantal
 (Bohmian) trajectories gives rise to a quantum corrected Raychaudhuri
 equation (QRE). In this article we derive the second order Friedmann
 equations from the QRE, and show that this also contains a couple of
 quantum correction terms, the first of which can be interpreted as
 cosmological constant (and gives a correct estimate of its observed value),
 while the second as a radiation term in the early universe, which gets rid
 of the big-bang singularity and predicts an infinite age of our universe.

 http://arxiv.org/abs/1404.3093v3

 No Big Bang singularity or obscure dark stuff needed if I understand
 correctly, which can be refreshing from time to time. ;-) PGC







Re: evangelizing robots

2015-02-10 Thread John Clark
On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing,


Not for a finite intelligence, because some problems can be infinitely hard.
And if there is simply a lack of information, more intelligence will not
produce a better answer (when Shakespeare went to the King Edward VI
Grammar School at age 7, what was the name of his teacher?).

Therefore nearly all superintelligences will operate according to the same
 belief system.


There is no correlation between intelligence and matters of taste; it is not
more intelligent to prefer brussels sprouts over creamed corn or Bach over
Beethoven.

 John K Clark



Re: What over 170 people think about machines that think

2015-02-10 Thread LizR
On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net wrote:

  On 2/4/2015 11:14 AM, Bruno Marchal wrote:

 On 03 Feb 2015, at 20:13, Jason Resch wrote:

  I agree with John. If consciousness had no third-person observable
 effects, it would be an epiphenomenon. And then there is no way to explain
 why we're even having this discussion about consciousness.


  So we all agree on this.


 ?? Why aren't first person observable effects enough to discuss?


I guess because if there are no third-person observable effects of
consciousness, then I can't detect any other conscious entities to discuss
the effects with...



Re: evangelizing robots

2015-02-10 Thread John Clark
On Tue, Feb 10, 2015 at 1:21 PM, Jason Resch jasonre...@gmail.com wrote:


  a true super intelligence may never perform any actions, as it's trapped
 in never being certain (and knowing it never can be certain) that its
 actions are right.


Why in the world would an intelligent agent need to be certain before it
could act?

 John K Clark



Re: evangelizing robots

2015-02-10 Thread Samiya Illias
IBM's Watson: http://bigthink.com/videos/ibms-watson-cognitive-or-sentient-2


Samiya


On Tue, Feb 10, 2015 at 9:47 PM, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
  to ensure friendly AI; it will either be friendly or it won't, according to
  what is right.

 I think chances are that it will be friendly, since I happen to believe in
 universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.

 Jason

 On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona agocor...@gmail.com
 wrote:

  I can't even enumerate the number of ways in which that article is wrong.

 First of all, any intelligent robot MUST have a religion in order to act
 in any way. A set of core beliefs. A non intelligent robot need them too:
 It is the set of constants. The intelligent robot  can rewrite their
 constants from which he derive their calculations for actions and if the
 robot is self preserving and reproduce sexually, it has to adjust his
 constants i.e. his beliefs according with some darwinian algoritm that must
 take into account himself but specially the group in which he lives and
 collaborates..

 If the robot does not reproduce sexually and his fellows do not execute
 very similar programs, it is pointless to teach them any human religion.

 These and other higher aspects like acting with other intelligent beings
 communicate perceptions, how a robot elaborate philosophical and
 theological concepts and collaborate with others, see my post about
 robotic truth

 But I think that a robot with such level of intelligence will never be
 possible.

 2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net:


 In two senses of that term! Or something.

 http://bigthink.com/ideafeed/robot-religion-2

  http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922





 --
 Alberto.







Re: evangelizing robots

2015-02-10 Thread Telmo Menezes
On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
  to ensure friendly AI; it will either be friendly or it won't, according to
  what is right.


I wonder if this isn't prevented by Gödel's incompleteness. Given that the
superintelligence can never be certain of its own consistency, it must
remain fundamentally agnostic. In this case, we might have different
superintelligences working under different hypotheses, possibly occupying
niches, just like what happens with Darwinism.



 I think chances are that it will be friendly, since I happen to believe in
 universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


I agree with you, with the difference that I try to assume universal
personhood without believing in it, to avoid becoming a religious
fundamentalist.

Telmo.



 Jason

 On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona agocor...@gmail.com
 wrote:

  I can't even enumerate the number of ways in which that article is wrong.

 First of all, any intelligent robot MUST have a religion in order to act
 in any way. A set of core beliefs. A non intelligent robot need them too:
 It is the set of constants. The intelligent robot  can rewrite their
 constants from which he derive their calculations for actions and if the
 robot is self preserving and reproduce sexually, it has to adjust his
 constants i.e. his beliefs according with some darwinian algoritm that must
 take into account himself but specially the group in which he lives and
 collaborates..

 If the robot does not reproduce sexually and his fellows do not execute
 very similar programs, it is pointless to teach them any human religion.

 These and other higher aspects like acting with other intelligent beings
 communicate perceptions, how a robot elaborate philosophical and
 theological concepts and collaborate with others, see my post about
 robotic truth

 But I think that a robot with such level of intelligence will never be
 possible.

 2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net:


 In two senses of that term! Or something.

 http://bigthink.com/ideafeed/robot-religion-2

  http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922





 --
 Alberto.







Re: evangelizing robots

2015-02-10 Thread Jason Resch
If you define increased intelligence as decreased probability of having a
false belief on any randomly chosen proposition, then superintelligences
will be wrong on almost nothing, and their beliefs will converge as their
intelligence rises. Therefore nearly all superintelligences will operate
according to the same belief system. We should stop worrying about trying
to ensure friendly AI; it will either be friendly or it won't, according to
what is right.
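As an aside, the convergence claim can be made quantitative: if two agents are each wrong with independent probability e on any given proposition, they agree on it with probability (1-e)^2 + e^2, which approaches 1 as e falls. A toy sketch of this (the numbers and function names are illustrative, not from the thread):

```python
import random

def agent_beliefs(truth, error_rate, rng):
    # Each belief matches the ground truth except with probability error_rate.
    return [t if rng.random() > error_rate else (not t) for t in truth]

def agreement(a, b):
    # Fraction of propositions on which two agents hold the same belief.
    return sum(x == y for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
truth = [rng.random() < 0.5 for _ in range(100_000)]

for error_rate in (0.4, 0.1, 0.01):
    a = agent_beliefs(truth, error_rate, rng)
    b = agent_beliefs(truth, error_rate, rng)
    print(f"error={error_rate:.2f}  agreement={agreement(a, b):.3f}")
```

Driving the error rate down pushes any two such belief systems toward agreement, which is the sense in which "nearly all superintelligences will operate according to the same belief system" - though, as the replies point out, this says nothing about shared goals or values.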

I think chances are that it will be friendly, since I happen to believe in
universal personhood, and if that belief is correct, then
superintelligences will also come to believe it is correct. And with the
belief in universal personhood it would know that harm to others is harm to
the self.

Jason

On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona agocor...@gmail.com
wrote:

  I can't even enumerate the number of ways in which that article is wrong.

 First of all, any intelligent robot MUST have a religion in order to act
 in any way. A set of core beliefs. A non intelligent robot need them too:
 It is the set of constants. The intelligent robot  can rewrite their
 constants from which he derive their calculations for actions and if the
 robot is self preserving and reproduce sexually, it has to adjust his
 constants i.e. his beliefs according with some darwinian algoritm that must
 take into account himself but specially the group in which he lives and
 collaborates..

 If the robot does not reproduce sexually and his fellows do not execute
 very similar programs, it is pointless to teach them any human religion.

 These and other higher aspects like acting with other intelligent beings
 communicate perceptions, how a robot elaborate philosophical and
 theological concepts and collaborate with others, see my post about
 robotic truth

 But I think that a robot with such level of intelligence will never be
 possible.

 2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net:


 In two senses of that term! Or something.

 http://bigthink.com/ideafeed/robot-religion-2

 http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922





 --
 Alberto.





Re: evangelizing robots

2015-02-10 Thread Alberto G. Corona
I can't even enumerate the number of ways in which that article is wrong.

First of all, any intelligent robot MUST have a religion in order to act in
any way: a set of core beliefs. A non-intelligent robot needs them too: it
is its set of constants. An intelligent robot can rewrite the constants
from which it derives its calculations for action, and if the robot is
self-preserving and reproduces sexually, it has to adjust its constants,
i.e. its beliefs, according to some Darwinian algorithm that must take into
account the robot itself, but especially the group in which it lives and
collaborates.

If the robot does not reproduce sexually and its fellows do not execute
very similar programs, it is pointless to teach it any human religion.

For these and other higher aspects, such as how a robot acting with other
intelligent beings communicates perceptions, elaborates philosophical and
theological concepts, and collaborates with others, see my post about
robotic truth.

But I think that a robot with such a level of intelligence will never be
possible.

2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net:


 In two senses of that term! Or something.

 http://bigthink.com/ideafeed/robot-religion-2

 http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-
 1682837922





-- 
Alberto.



Re: Why is there something rather than nothing? From quantum theory to dialectics?

2015-02-10 Thread Bruno Marchal


On 10 Feb 2015, at 08:21, Samiya Illias wrote:




On Tue, Feb 10, 2015 at 12:50 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 08 Feb 2015, at 05:07, Samiya Illias wrote:




On Thu, Feb 5, 2015 at 8:27 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 04 Feb 2015, at 17:14, Samiya Illias wrote:




On Wed, Feb 4, 2015 at 5:49 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 04 Feb 2015, at 06:02, Samiya Illias wrote:




On 04-Feb-2015, at 12:01 am, Bruno Marchal marc...@ulb.ac.be  
wrote:







Then reason shows that arithmetic is already full of life,  
indeed full of an infinity of universal machines competing to  
provide your infinitely many relatively consistent continuations.


Incompleteness imposes, at least formally, a soul (a first  
person), an observer (a first person plural), a god (an  
independent simple but deep truth) to any machine believing in  
the RA axioms together with enough induction axioms. I know you  
believe in them.


The lexicon is
p              truth               God
[]p            provable            Intelligible (modal logic, G and G*)
[]p & p        the soul (modal logic, S4Grz)
[]p & <>t      intelligible matter (with p sigma_1) (modal logic, Z1, Z1*)
[]p & <>t & p  sensible matter (with p sigma_1) (modal logic, X1, X1*)

You need to study some math,

I have been wanting to but it seems such an uphill task. Yet,  
its a mountain I would like to climb :)


7 + 0 = 7. You are OK with this?  Tell me.


OK


Are you OK with the generalisation? For all numbers n, n + 0 =  
n.  Right?


Right :)
You suggest I begin with Set Theory?


No need of set theory; I have never been able to really prefer one set  
theory over another. It is too powerful, not fundamental. At some point  
naive set theory will be used, but just for making things easier: it  
will never be part of the fundamental assumptions.


I use only elementary arithmetic, so you need only to understand  
the following statements (and some other later):

Please see if my assumptions/interpretations below are correct:

x + 0 = x
if x=1, then
1+0=1

x + successor(y) = successor(x + y)
1 + 2 = (1+2) = 3


I agree, but you don't show the use of the axiom: x + successor(y)  
= successor(x + y), or x + s(y) = s(x + y).


I didn't use the axioms. I just substituted the axioms' variables  
with natural numbers.


And use your common intuition. Good.

The idea now will be to see if the axioms given capture that  
intuition, fully, or in part.









Are you OK? To avoid notational difficulties, I represent the  
numbers by their degree of parenthood (so to speak) with 0.  
Abbreviating s for successor:


0, s(0), s(s(0)), s(s(s(0))), ...
If the sequence represents 0, 1, 2, 3, ...


We can use 0, 1, 2, 3, ... as abbreviation for 0, s(0), s(s(0)),  
s(s(s(0))), ...






Can you derive that s(s(0)) + s(0) = s(s(s(0))) with the  
statements just above?

then 2 + 1 = 3


Hmm... s(s(0)) + s(0) = s(s(s(0))) is another writing for 2 + 1 =  
3, but it is not clear if you proved it using the two axioms:


1)  x + 0 = x
2) x + s(y) = s(x + y)

Let me show you:

We must compute:

s(s(0)) + s(0)

The axiom 2) says that x + s(y) = s(x + y), for all x and y.
We see that s(s(0)) + s(0) matches x + s(y), with x = s(s(0)), and  
y = 0. OK?
So we can apply axiom 2, replacing x (= s(s(0))) and y (= 0) in it.  
This gives


s(s(0)) + s(0) = s( s(s(0)) + 0   ) OK? (this is a simple  
substitution, suggested by the axiom 2)


But then by axiom 1, we know that s(s(0)) + 0 = s(s(0)), so the  
right side becomes s( s(s(0)) + 0 ) = s( s(s(0)) )


So we have proved s(s(0)) + s(0) = s(s(s(0)))

OK?

Yes, thanks!


You are welcome.
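The derivation above can be run mechanically. Here is an illustrative Python sketch (not part of the original exchange) of addition by the two axioms, with numbers written as nested successor terms:

```python
# Numbers as nested successor terms: 0, ('s', 0), ('s', ('s', 0)), ...
def s(x):
    return ('s', x)

def num(n):
    """Build the successor term for the natural number n."""
    t = 0
    for _ in range(n):
        t = s(t)
    return t

def add(x, y):
    """Compute x + y using only axiom 1 and axiom 2."""
    if y == 0:               # axiom 1: x + 0 = x
        return x
    _, y0 = y                # y matches s(y0)
    return s(add(x, y0))     # axiom 2: x + s(y0) = s(x + y0)

# The proof above: s(s(0)) + s(0) = s(s(s(0))), i.e. 2 + 1 = 3
assert add(num(2), num(1)) == num(3)
```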





Can you guess how many times you need to use axiom 2) if I asked  
you to prove 1 + 8 = 9? You might do it for training purposes.


1+8=9
Translating into successor terms:
s(0) + s(s(s(s(s(s(s(s(0)))))))) = s(s(s(s(s(s(s(s(s(0)))))))))
Applying axiom 2 by substituting x = 8, or s(s(s(s(s(s(s(s(0)))))))),  
and y = 0,

s(s(s(s(s(s(s(s(0)))))))) + s(0) = s( s(s(s(s(s(s(s(s(0)))))))) + 0 )
Applying axiom 1 to the right side:
s(0) + s(s(s(s(s(s(s(s(0)))))))) = s(s(s(s(s(s(s(s(s(0)))))))))
1+8=9

Is the above the correct method to arrive at the proof? I only used  
axiom 2 once. Am I missing some basic point?


Let me see. Axiom 2 says: x + s(y) = s(x + y). Well, if x = 8  
and y = 0, we get 8 + 1, and your computation/proof is correct, in  
that case.


So you would have been correct if I was asking you to prove/compute  
that 8 + 1 = 9.


Unfortunately I asked to prove/compute that 1 + 8 = 9.

I think that you have (consciously?) used the fact that 1 + 8 = 8 +  
1, which speeds up the computation.
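The length difference can be made explicit with a small Python sketch (hypothetical, not from the original exchange): reducing x + y with the two axioms applies axiom 2 once per successor in y, so 1 + 8 needs eight applications while 8 + 1 needs only one.

```python
def s(x):
    return ('s', x)

def num(n):
    """Build the successor term for the natural number n."""
    t = 0
    for _ in range(n):
        t = s(t)
    return t

def axiom2_steps(x, y):
    """Count uses of axiom 2 (x + s(y) = s(x + y)) when reducing x + y."""
    steps = 0
    while y != 0:        # each pass peels one successor off y: one use of axiom 2
        _, y = y
        steps += 1
    return steps         # axiom 1 (x + 0 = x) then finishes the computation

assert axiom2_steps(num(1), num(8)) == 8   # computing 1 + 8
assert axiom2_steps(num(8), num(1)) == 1   # computing 8 + 1, the shortcut
```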


Well, later I will show you that the idea that for all x and y, x + y  
= y + x, is NOT provable with the axioms given (despite that the theory  
will be shown to be already Turing universal).


No worry. Your move was clever, but you need to put yourself in the  
mind of a very stupid machine which understands only the axioms  

Re: evangelizing robots

2015-02-10 Thread Telmo Menezes
On Tue, Feb 10, 2015 at 6:21 PM, Jason Resch jasonre...@gmail.com wrote:



 On Tue, Feb 10, 2015 at 12:04 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com
 wrote:

 If you define increased intelligence as decreased probability of having
 a false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI, it will either be friendly or it won't according to
 what is right.


 I wonder if this isn't prevented by Gödel's incompleteness. Given that
 the superintelligence can never be certain of its own consistency, it must
 remain fundamentally agnostic. In this case, we might have different
 superintelligences working under different hypothesis, possibly occupying
 niches just like what happens with Darwinism.


 Interesting point. Yes a true super intelligence may never perform any
 actions, as its trapped in never being certain (and knowing it never can be
 certain) that its actions are right. Fitness for survival may play some
 role in how intelligent active agents can be before they become inactive.


Yes, that's an interesting way to put it. I wonder.







 I think chances are that it will be friendly, since I happen to believe
 in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


 I agree with you, with the difference that I try to assume universal
 personhood without believing in it, to avoid becoming a religious
 fundamentalist.


 Interesting. Why do you think having beliefs can lead to religious
 fundamentalism. Would you not say you belief the Earth is round? Could such
 a belief lead to religious fundamentalism and if not why not?


This leads us back to a recurring discussion on this mailing list. I would
say that you can believe the Earth to be round in the informal sense of the
word: your estimation of the probability that the earth is round is very
close to one. I don't think you can believe the earth to be round with 100%
certainty without falling into religious fundamentalism. This implies a
total belief in your senses, for example. That is a strong position about
the nature of reality that is not really backed up by anything. Just like
believing literally in the Bible or the Quran or Atlas Shrugged.

Telmo.



 Jason


 Telmo.



 Jason

 On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona agocor...@gmail.com
 wrote:

 I can´t even enumerate the number of ways in which that article is
 wrong.

 First of all, any intelligent robot MUST have a religion in order to
 act in any way. A set of core beliefs. A non intelligent robot need them
 too: It is the set of constants. The intelligent robot  can rewrite their
 constants from which he derive their calculations for actions and if the
 robot is self preserving and reproduce sexually, it has to adjust his
 constants i.e. his beliefs according with some darwinian algoritm that must
 take into account himself but specially the group in which he lives and
 collaborates..

 If the robot does not reproduce sexually and his fellows do not execute
 very similar programs, it is pointless to teach them any human religion.

 These and other higher aspects like acting with other intelligent
 beings communicate perceptions, how a robot elaborate philosophical and
 theological concepts and collaborate with others, see my post about
 robotic truth

 But I think that a robot with such level of intelligence will never be
 possible.

 2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net:


 In two senses of that term! Or something.

 http://bigthink.com/ideafeed/robot-religion-2

 http://gizmodo.com/when-superintelligent-ai-arrives-
 will-religions-try-t-1682837922





 --
 Alberto.




Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 12:04 PM, Telmo Menezes te...@telmomenezes.com
wrote:



 On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI, it will either be friendly or it won't according to
 what is right.


 I wonder if this isn't prevented by Gödel's incompleteness. Given that the
 superintelligence can never be certain of its own consistency, it must
 remain fundamentally agnostic. In this case, we might have different
 superintelligences working under different hypothesis, possibly occupying
 niches just like what happens with Darwinism.


Interesting point. Yes, a true superintelligence may never perform any
actions, as it's trapped in never being certain (and knowing it never can be
certain) that its actions are right. Fitness for survival may play some
role in how intelligent active agents can be before they become inactive.





 I think chances are that it will be friendly, since I happen to believe
 in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


 I agree with you, with the difference that I try to assume universal
 personhood without believing in it, to avoid becoming a religious
 fundamentalist.


Interesting. Why do you think having beliefs can lead to religious
fundamentalism? Would you not say you believe the Earth is round? Could such
a belief lead to religious fundamentalism, and if not, why not?

Jason


 Telmo.



 Jason

 On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona agocor...@gmail.com
 wrote:

 I can´t even enumerate the number of ways in which that article is wrong.

 First of all, any intelligent robot MUST have a religion in order to act
 in any way. A set of core beliefs. A non intelligent robot need them too:
 It is the set of constants. The intelligent robot  can rewrite their
 constants from which he derive their calculations for actions and if the
 robot is self preserving and reproduce sexually, it has to adjust his
 constants i.e. his beliefs according with some darwinian algoritm that must
 take into account himself but specially the group in which he lives and
 collaborates..

 If the robot does not reproduce sexually and his fellows do not execute
 very similar programs, it is pointless to teach them any human religion.

 These and other higher aspects like acting with other intelligent beings
 communicate perceptions, how a robot elaborate philosophical and
 theological concepts and collaborate with others, see my post about
 robotic truth

 But I think that a robot with such level of intelligence will never be
 possible.

 2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net:


 In two senses of that term! Or something.

 http://bigthink.com/ideafeed/robot-religion-2

 http://gizmodo.com/when-superintelligent-ai-arrives-
 will-religions-try-t-1682837922





 --
 Alberto.






Cosmology from Quantum Potential

2015-02-10 Thread Platonist Guitar Cowboy
Cosmology from quantum potential
Ahmed Farag Ali http://arxiv.org/find/gr-qc/1/au:+Ali_A/0/1/0/all/0/1, Saurya
Das http://arxiv.org/find/gr-qc/1/au:+Das_S/0/1/0/all/0/1
(Submitted on 11 Apr 2014 (v1 http://arxiv.org/abs/1404.3093v1), last
revised 29 Dec 2014 (this version, v3))

It was shown recently that replacing classical geodesics with quantal
(Bohmian) trajectories gives rise to a quantum corrected Raychaudhuri
equation (QRE). In this article we derive the second order Friedmann
equations from the QRE, and show that this also contains a couple of
quantum correction terms, the first of which can be interpreted as
cosmological constant (and gives a correct estimate of its observed value),
while the second as a radiation term in the early universe, which gets rid
of the big-bang singularity and predicts an infinite age of our universe.

http://arxiv.org/abs/1404.3093v3

No Big Bang singularity or obscure dark stuff needed if I understand
correctly, which can be refreshing from time to time. ;-) PGC



Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 12:59 PM, Telmo Menezes te...@telmomenezes.com
wrote:




 On Tue, Feb 10, 2015 at 6:21 PM, Jason Resch jasonre...@gmail.com wrote:



 On Tue, Feb 10, 2015 at 12:04 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com
 wrote:

 If you define increased intelligence as decreased probability of having
 a false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI, it will either be friendly or it won't according to
 what is right.


 I wonder if this isn't prevented by Gödel's incompleteness. Given that
 the superintelligence can never be certain of its own consistency, it must
 remain fundamentally agnostic. In this case, we might have different
 superintelligences working under different hypothesis, possibly occupying
 niches just like what happens with Darwinism.


 Interesting point. Yes a true super intelligence may never perform any
 actions, as its trapped in never being certain (and knowing it never can be
 certain) that its actions are right. Fitness for survival may play some
 role in how intelligent active agents can be before they become inactive.


 Yes, that's an interesting way to put it. I wonder.







 I think chances are that it will be friendly, since I happen to believe
 in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


 I agree with you, with the difference that I try to assume universal
 personhood without believing in it, to avoid becoming a religious
 fundamentalist.


 Interesting. Why do you think having beliefs can lead to religious
 fundamentalism. Would you not say you belief the Earth is round? Could such
 a belief lead to religious fundamentalism and if not why not?


 This leads us back to a recurring discussion on this mailing list. I would
 say that you can believe the Earth to be round in the informal sense of the
 word: your estimation of the probability that the earth is round is very
 close to one. I don't think you can believe the earth to be round with 100%
 certainty without falling into religious fundamentalism. This implies a
 total belief in your senses, for example. That is a strong position about
 the nature of reality that is not really backed up by anything. Just like
 believing literally in the Bible or the Quran or Atlas Shrugged.


I see. I did not mean it in the sense of absolute certitude, merely that
universal personhood is one of my current working hypotheses derived from
my consideration of various problems of personal identity.

Jason



Re: evangelizing robots

2015-02-10 Thread Telmo Menezes
On Tue, Feb 10, 2015 at 9:07 PM, Jason Resch jasonre...@gmail.com wrote:



 On Tue, Feb 10, 2015 at 12:59 PM, Telmo Menezes te...@telmomenezes.com
 wrote:




 On Tue, Feb 10, 2015 at 6:21 PM, Jason Resch jasonre...@gmail.com
 wrote:



 On Tue, Feb 10, 2015 at 12:04 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com
 wrote:

 If you define increased intelligence as decreased probability of
 having a false belief on any randomly chosen proposition, then
 superintelligences will be wrong on almost nothing, and their beliefs will
 converge as their intelligence rises. Therefore nearly all
 superintelligences will operate according to the same belief system. We
 should stop worrying about trying to ensure friendly AI, it will either be
 friendly or it won't according to what is right.


 I wonder if this isn't prevented by Gödel's incompleteness. Given that
 the superintelligence can never be certain of its own consistency, it must
 remain fundamentally agnostic. In this case, we might have different
 superintelligences working under different hypothesis, possibly occupying
 niches just like what happens with Darwinism.


 Interesting point. Yes a true super intelligence may never perform any
 actions, as its trapped in never being certain (and knowing it never can be
 certain) that its actions are right. Fitness for survival may play some
 role in how intelligent active agents can be before they become inactive.


 Yes, that's an interesting way to put it. I wonder.







 I think chances are that it will be friendly, since I happen to
 believe in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm 
 to
 the self.


 I agree with you, with the difference that I try to assume universal
 personhood without believing in it, to avoid becoming a religious
 fundamentalist.


 Interesting. Why do you think having beliefs can lead to religious
 fundamentalism. Would you not say you belief the Earth is round? Could such
 a belief lead to religious fundamentalism and if not why not?


 This leads us back to a recurring discussion on this mailing list. I
 would say that you can believe the Earth to be round in the informal sense
 of the word: your estimation of the probability that the earth is round is
 very close to one. I don't think you can believe the earth to be round with
 100% certainty without falling into religious fundamentalism. This implies
 a total belief in your senses, for example. That is a strong position about
 the nature of reality that is not really backed up by anything. Just like
 believing literally in the Bible or the Quran or Atlas Shrugged.


 I see. I did not mean it in the sense of absolute certitude, merely that
 universal personhood is one of my current working hypotheses derived from
 my consideration of various problems of personal identity.


Right. We are in complete agreement then.
Universal personhood is also one of my main working hypotheses. I wonder if
it could be considered a preferable belief: it may be true and we are all
better off assuming it to be true.

Telmo.




 Jason





Re: evangelizing robots

2015-02-10 Thread Stathis Papaioannou
On Wednesday, February 11, 2015, Jason Resch jasonre...@gmail.com wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI, it will either be friendly or it won't according to
 what is right.

 I think chances are that it will be friendly, since I happen to believe in
 universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


Having accurate beliefs about the world and having goals are two unrelated
things. If I like stamp collecting, being intelligent will help me to
collect stamps, it will help me see if stamp collecting clashes with a
higher priority goal, but it won't help me decide if my goals are worthy.


-- 
Stathis Papaioannou



Re: Cosmology from Quantum Potential

2015-02-10 Thread LizR
Very interesting, if true. (So they've removed that pesky factor of 10 to
the power of 120 from the calculations...!?)

On 11 February 2015 at 10:34, Platonist Guitar Cowboy 
multiplecit...@gmail.com wrote:

 Cosmology from quantum potential
 Ahmed Farag Ali http://arxiv.org/find/gr-qc/1/au:+Ali_A/0/1/0/all/0/1, 
 Saurya
 Das http://arxiv.org/find/gr-qc/1/au:+Das_S/0/1/0/all/0/1
 (Submitted on 11 Apr 2014 (v1 http://arxiv.org/abs/1404.3093v1), last
 revised 29 Dec 2014 (this version, v3))

 It was shown recently that replacing classical geodesics with quantal
 (Bohmian) trajectories gives rise to a quantum corrected Raychaudhuri
 equation (QRE). In this article we derive the second order Friedmann
 equations from the QRE, and show that this also contains a couple of
 quantum correction terms, the first of which can be interpreted as
 cosmological constant (and gives a correct estimate of its observed value),
 while the second as a radiation term in the early universe, which gets rid
 of the big-bang singularity and predicts an infinite age of our universe.

 http://arxiv.org/abs/1404.3093v3

 No Big Bang singularity or obscure dark stuff needed if I understand
 correctly, which can be refreshing from time to time. ;-) PGC





Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 8:12 PM, John Clark johnkcl...@gmail.com wrote:

 On Tue, Feb 10, 2015 at 4:47 PM, Jason Resch jasonre...@gmail.com wrote:

  If you define increased intelligence as decreased probability of having
 a false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing,


 Not for a finite intelligence because some problems can be infinitely
 hard.


Then they will tend to agree the problem is intractable.



 And if there is simply a lack of information, more intelligence will not
 produce a better answer (when Shakespeare went to the King Edward VI
 Grammar School at age 7, what was the name of his teacher?)


That question assumes there is only one answer consistent with our history,
and only one Shakespeare. All superintelligences have access to the same
mathematical truth, which is such a font of information that it makes the
accessible physical universe appear as a mere dripping faucet in comparison.


 Therefore nearly all superintelligences will operate according to the
 same belief system.


 There is no correlation between intelligence and maters of taste, it is
 not more intelligent to prefer  brussels sprouts over creamed corn or Bach
 over Beethoven.


Superintelligences A and B will both agree that Brussels sprouts taste
better to superintelligence B. There is no objective truth as to which
thing tastes better, since taste is in the taste buds of the taster.

Jason



Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 11:35 PM, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:49 PM, Jason Resch wrote:



 On Tue, Feb 10, 2015 at 6:40 PM, meekerdb meeke...@verizon.net wrote:


  On 2/10/2015 8:47 AM, Jason Resch wrote:

 If you define increased intelligence as decreased probability of having a
 false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI, it will either be friendly or it won't according to
 what is right.


  The problem isn't beliefs, it's values.  Humans have certain core values
 selected by evolution; and in addition they have many secondary culturally
 determined values.  What values will super-AI have and where will it get
 them and will they evolve?  That seems to be the main research topic at the
 Machine Intelligence Research Institute.


 Were all your values set at birth and driven by biology, or are some of
 your values based on what you've since learned about the world?


 Isn't that what I wrote just above?

If values can be learned, and if morality is a field that has
objective truth, then why wouldn't a superintelligence approach a
correct value system?


 What would correct mean?  Is vanilla *really* better than chocolate?

 I think there are core values - self-preservation, love of offspring,
 desire for companionship, desire for power that are provided by evolution
 and adapt people to live in extended families or small tribes.  The other
 values we learn from our culture are the result of cultural evolution
 selecting values and ethics that let us realize our core values while
 living in towns and cities and nations.


Do you think in the long run that human society is evolving toward a more
fair, more just, more correct system of values? If so, why can't a machine?
Particularly one with the thinking capacity of a billion human minds
operating a million times faster?

Jason



Re: What over 170 people think about machines that think

2015-02-10 Thread Jason Resch
On Wed, Feb 11, 2015 at 12:03 AM, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 9:40 PM, Jason Resch wrote:



 On Tue, Feb 10, 2015 at 8:35 PM, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:29 PM, LizR wrote:

  On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net wrote:

  On 2/4/2015 11:14 AM, Bruno Marchal wrote:

 On 03 Feb 2015, at 20:13, Jason Resch wrote:

  I agree with John. If consciousness had no third-person observable
 effects, it would be an epiphenomenon. And then there is no way to explain
 why we're even having this discussion about consciousness.


  So we all agree on this.


 ?? Why aren't first person observable effects enough to discuss?


  I guess because if there are no third-person observable effects of
 consciousness, then I can't detect any other conscious entities to discuss
 the effects with...


  The epiphenomenon model says there are third-person observable effects
 of the phenomenon, which suffice for detecting other entities.  Whether the
 other entities are really conscious or just faking it is a matter of
 inference.


  Did you mean to say The epiphenomenon model says there are *no*
 third-person observable effects of the phenomenon ?


 Of course not.  The phenomenon is what is observable, by definition.  It's
 the epiphenomenon which is not third-person observable.


But in the epiphenomenon model, consciousness is the epiphenomenon and the
phenomenal part of consciousness is its first-person aspect.

Jason



Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 8:15 PM, Stathis Papaioannou stath...@gmail.com
wrote:



 On Wednesday, February 11, 2015, Jason Resch jasonre...@gmail.com wrote:



 On Tue, Feb 10, 2015 at 3:30 PM, Stathis Papaioannou stath...@gmail.com
 wrote:



 On Wednesday, February 11, 2015, Jason Resch jasonre...@gmail.com
 wrote:

 If you define increased intelligence as decreased probability of having
 a false belief on any randomly chosen proposition, then superintelligences
 will be wrong on almost nothing, and their beliefs will converge as their
 intelligence rises. Therefore nearly all superintelligences will operate
 according to the same belief system. We should stop worrying about trying
 to ensure friendly AI, it will either be friendly or it won't according to
 what is right.

 I think chances are that it will be friendly, since I happen to believe
 in universal personhood, and if that belief is correct, then
 superintelligences will also come to believe it is correct. And with the
 belief in universal personhood it would know that harm to others is harm to
 the self.


 Having accurate beliefs about the world and having goals are two
 unrelated things. If I like stamp collecting, being intelligent will help
 me to collect stamps, it will help me see if stamp collecting clashes with
 a higher priority goal, but it won't help me decide if my goals are worthy.



 Were all your goals set at birth and driven by biology, or are some of
 your goals based on what you've since learned about the world? Perhaps
 learning about universal personhood (for example), could lead one to
 believe that charity is a worthy goal, and perhaps deserving of more time
 than collecting stamps.


 The implication is that if you believe in universal personhood then even
 if you are selfish you will be motivated towards charity. But the
 selfishness itself, as a primary value, is not amenable to rational
 analysis. There is no inconsistency in a superintelligent AI that is
 selfish, or one that is charitable, or one that believes the single most
 important thing in the world is to collect stamps.



But doing something well (regardless of what it is) is almost always
improved by having greater knowledge, so would not gathering greater
knowledge become a secondary subgoal for nearly any superintelligence that
has goals? Is it impossible that it might discover and decide to pursue
other goals during that time? After all, the capacity to change one's mind
seems to be a requirement for any intelligent process, or any process on
the path towards superintelligence.

Jason



Re: What over 170 people think about machines that think

2015-02-10 Thread Jason Resch
On Wed, Feb 11, 2015 at 12:23 AM, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 10:11 PM, Jason Resch wrote:



 On Wed, Feb 11, 2015 at 12:03 AM, meekerdb meeke...@verizon.net wrote:

   On 2/10/2015 9:40 PM, Jason Resch wrote:



 On Tue, Feb 10, 2015 at 8:35 PM, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:29 PM, LizR wrote:

  On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net wrote:

  On 2/4/2015 11:14 AM, Bruno Marchal wrote:

 On 03 Feb 2015, at 20:13, Jason Resch wrote:

  I agree with John. If consciousness had no third-person observable
 effects, it would be an epiphenomenon. And then there is no way to explain
 why we're even having this discussion about consciousness.


  So we all agree on this.


 ?? Why aren't first person observable effects enough to discuss?


  I guess because if there are no third-person observable effects of
 consciousness, then I can't detect any other conscious entities to discuss
 the effects with...


  The epiphenomenon model says there are third-person observable effects
 of the phenomenon, which suffice for detecting other entities.  Whether the
 other entities are really conscious or just faking it is a matter of
 inference.


  Did you mean to say The epiphenomenon model says there are *no*
 third-person observable effects of the phenomenon ?


  Of course not.  The phenomenon is what is observable, by definition.
 It's the epiphenomenon which is not third-person observable.


  But in the epiphenomenon model, consciousness is the epiphenomenon and
 the phenomenal part of consciousness is its first-person aspect.


 The statement was, there are no third-person observable effects of
 consciousness.


Yes, this is the conventional meaning of epiphenomenalism (in philosophy of
mind).


 In the epiphenomenal theory of consciousness, I take the phenomenon to be
 the observable behavior, neuron firings, etc. and consciousness the
 corresponding epiphenomenon.


Okay.


 Those phenomena do have third-person observable effects, and in general
 that's how we infer consciousness in others.


I agree. But I think epiphenomenalism is false, because it places
consciousness outside the causal chain of physics, making it extra-physical,
ineffectual, and for all intents and purposes unnecessary (it forecloses any
ability to ever move beyond solipsism as far as determining whether some
other thing or process is conscious or not).

Jason



Re: evangelizing robots

2015-02-10 Thread LizR
On 11 February 2015 at 18:35, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:49 PM, Jason Resch wrote:


 On Tue, Feb 10, 2015 at 6:40 PM, meekerdb meeke...@verizon.net wrote:


 Were all your values set at birth and driven by biology, or are some of
 your values based on what you've since learned about the world?
 On 2/10/2015 8:47 AM, Jason Resch wrote:

Isn't that what I wrote just above?


Fair dos, Brent, you've often repeated what I've said back to me, slightly
rephrased and with "But" at the beginning.



Re: What over 170 people think about machines that think

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 8:35 PM, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:29 PM, LizR wrote:

  On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net wrote:

  On 2/4/2015 11:14 AM, Bruno Marchal wrote:

 On 03 Feb 2015, at 20:13, Jason Resch wrote:

  I agree with John. If consciousness had no third-person observable
 effects, it would be an epiphenomenon. And then there is no way to explain
 why we're even having this discussion about consciousness.


  So we all agree on this.


 ?? Why aren't first person observable effects enough to discuss?


  I guess because if there are no third-person observable effects of
 consciousness, then I can't detect any other conscious entities to discuss
 the effects with...


 The epiphenomenon model says there are third-person observable effects of
 the phenomenon, which suffice for detecting other entities.  Whether the
 other entities are really conscious or just faking it is a matter of
 inference.


Did you mean to say The epiphenomenon model says there are *no*
third-person observable effects of the phenomenon ?

Jason



Re: What over 170 people think about machines that think

2015-02-10 Thread meekerdb

On 2/10/2015 9:40 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 8:35 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 2/10/2015 5:29 PM, LizR wrote:

On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 2/4/2015 11:14 AM, Bruno Marchal wrote:

On 03 Feb 2015, at 20:13, Jason Resch wrote:


I agree with John. If consciousness had no third-person observable 
effects,
it would be an epiphenomenon. And then there is no way to explain why 
we're
even having this discussion about consciousness.


So we all agree on this.


?? Why aren't first person observable effects enough to discuss?


I guess because if there are no third-person observable effects of 
consciousness,
then I can't detect any other conscious entities to discuss the effects 
with...


The epiphenomenon model says there are third-person observable effects of 
the
phenomenon, which suffice for detecting other entities.  Whether the other 
entities
are really conscious or just faking it is a matter of inference.


Did you mean to say The epiphenomenon model says there are *no* third-person observable 
effects of the phenomenon ?


Of course not.  The phenomenon is what is observable, by definition.  It's the 
epiphenomenon which is not third-person observable.


Brent



Re: evangelizing robots

2015-02-10 Thread meekerdb

On 2/10/2015 5:47 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 5:57 PM, LizR lizj...@gmail.com 
mailto:lizj...@gmail.com wrote:

I call this the Cyberman (or Mr Spock) problem. The Cybermen in Doctor Who 
are
logical and unemotional, yet they wish to convert the rest of the world to 
be like
them. Why? Without emotion they have no reason to do that, or anything else.
(Likewise Mr Spock, except as we know he only repressed his emotions.)


I'm not sure whether emotions are necessary to have goals. Then again, perhaps 
they are.


The 'big' emotions like fear, rage, lust probably aren't, but values, feelings that this 
is preferred to that, are.


Brent



Re: evangelizing robots

2015-02-10 Thread meekerdb

On 2/10/2015 6:15 PM, Stathis Papaioannou wrote:
The implication is that if you believe in universal personhood then even if you are 
selfish you will be motivated towards charity. 


If humans are any indication, a super-intelligence will be incredibly good at 
rationalizing what it wants to do.  For example, if personhood is universal then what's 
good for me is good for the human race.


Brent

But the selfishness itself, as a primary value, is not amenable to rational analysis. 
There is no inconsistency in a superintelligent AI that is selfish, or one that is 
charitable, or one that believes the single most important thing in the world is to 
collect stamps.




Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Tue, Feb 10, 2015 at 8:19 PM, John Clark johnkcl...@gmail.com wrote:

 On Tue, Feb 10, 2015 at 1:21 PM, Jason Resch jasonre...@gmail.com wrote:


  a true superintelligence may never perform any actions, as it's trapped
 in never being certain (and knowing it never can be certain) that its
 actions are right.


 Why in the world would an intelligent agent need to be certain before it
 could act?



Perhaps because it has not arrived (and never will arrive) upon a correct
belief system (religion) on which to make value judgements to score the
outcomes of its possible actions.

Jason



Re: evangelizing robots

2015-02-10 Thread LizR
On 11 February 2015 at 17:37, Samiya Illias samiyaill...@gmail.com wrote:


 On 11-Feb-2015, at 6:40 am, John Clark johnkcl...@gmail.com wrote:

 On Tue, Feb 10, 2015  Alberto G. Corona agocor...@gmail.com wrote:

  I can't even enumerate the number of ways in which that article is wrong.


 I stopped reading after the following parochial imbecility I don't see
 Christ's redemption limited to human beings.

  First of all, any intelligent robot MUST have a religion in order to act
 in any way.


 Yet another example of somebody in love with the English word religion
 but not with the meaning behind it.


  But I think that a robot with such a level of intelligence will never be
 possible


 So you think that random mutation and natural selection can produce an
 intelligent being but an intelligent designer can't. Why?

 I am so happy to read this comment of yours. I hope someday you'll come to
 reason that even we have been produced by an intelligent designer.


I often learn new details of just how unintelligently designed I am from my
doctor.



Re: What over 170 people think about machines that think

2015-02-10 Thread meekerdb

On 2/10/2015 10:38 PM, Jason Resch wrote:



On Wed, Feb 11, 2015 at 12:23 AM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 2/10/2015 10:11 PM, Jason Resch wrote:



On Wed, Feb 11, 2015 at 12:03 AM, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 2/10/2015 9:40 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 8:35 PM, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 2/10/2015 5:29 PM, LizR wrote:

On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 2/4/2015 11:14 AM, Bruno Marchal wrote:

On 03 Feb 2015, at 20:13, Jason Resch wrote:


I agree with John. If consciousness had no third-person 
observable
effects, it would be an epiphenomenon. And then there is no way 
to
explain why we're even having this discussion about 
consciousness.


So we all agree on this.


?? Why aren't first person observable effects enough to discuss?


I guess because if there are no third-person observable effects of
consciousness, then I can't detect any other conscious entities to
discuss the effects with...


The epiphenomenon model says there are third-person observable 
effects of
the phenomenon, which suffice for detecting other entities.  
Whether the
other entities are really conscious or just faking it is a matter of
inference.


Did you mean to say The epiphenomenon model says there are *no* 
third-person
observable effects of the phenomenon ?


Of course not.  The phenomenon is what is observable, by definition.  
It's the
epiphenomenon which is not third-person observable.


But in the epiphenomenon model, consciousness is the epiphenomenon and the
phenomenal part of consciousness is its first-person aspect.


The statement was, there are no third-person observable effects of 
consciousness.


Yes, this is the conventional meaning of epiphenomenalism (in philosophy of mind).

In the epiphenomenal theory of consciousness, I take the phenomenon to be 
the
observable behavior, neuron firings, etc. and consciousness the 
corresponding
epiphenomenon.


Okay.

Those phenomena do have third-person observable effects, and in general
that's how we infer consciousness in others.


I agree. But I think epiphenomenalism is false, because it places consciousness
outside the causal chain of physics, making it extra-physical, ineffectual, and for all
intents and purposes unnecessary (it forecloses any ability to ever move beyond solipsism
as far as determining whether some other thing or process is conscious or not).


Those sound like reasons you don't like it, not reasons it's false. Are you echoing JKC's
line that if consciousness is not efficacious evolution would have removed it? If
consciousness were unnecessary it would not be an epiphenomenon, i.e. something that
NECESSARILY accompanies the phenomena of thoughts. Is heat necessary to random molecular
motion?


Brent
What we are continually talking of, merely from our having
been continually talking of it, we imagine we understand.
  ---  Jeremy Bentham (1748-1832)



Re: evangelizing robots

2015-02-10 Thread LizR
On 11 February 2015 at 18:29, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:47 PM, Jason Resch wrote:

 On Tue, Feb 10, 2015 at 5:57 PM, LizR lizj...@gmail.com wrote:

 I call this the Cyberman (or Mr Spock) problem. The Cybermen in Doctor
 Who are logical and unemotional, yet they wish to convert the rest of the
 world to be like them. Why? Without emotion they have no reason to do that,
 or anything else. (Likewise Mr Spock, except as we know he only repressed
 his emotions.)


  I'm not sure whether emotions are necessary to have goals. Then again,
 perhaps they are.

  The 'big' emotions like fear, rage, lust probably aren't, but values,
 feelings that this is preferred to that, are.


I don't see how one could have an opinion on whether one should do anything
without emotions being involved.



Re: evangelizing robots

2015-02-10 Thread meekerdb

On 2/10/2015 5:49 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 6:40 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:



On 2/10/2015 8:47 AM, Jason Resch wrote:

If you define increased intelligence as decreased probability of having a 
false
belief on any randomly chosen proposition, then superintelligences will be 
wrong on
almost nothing, and their beliefs will converge as their intelligence rises.
Therefore nearly all superintelligences will operate according to the same 
belief
system. We should stop worrying about trying to ensure friendly AI, it will 
either
be friendly or it won't according to what is right.


The problem isn't beliefs, it's values.  Humans have certain core values 
selected by
evolution; and in addition they have many secondary culturally determined values. 
What values will super-AI have and where will it get them and will they evolve? 
That seems to be the main research topic at the Machine Intelligence Research Institute.



Were all your values set at birth and driven by biology, or are some of your values 
based on what you've since learned about the world?


Isn't that what I wrote just above?

If values can be learned, and if morality is a field that has objective truth, then why
wouldn't a superintelligence approach a correct value system?


What would correct mean?  Is vanilla *really* better than chocolate?

I think there are core values - self-preservation, love of offspring, desire for 
companionship, desire for power that are provided by evolution and adapt people to live in 
extended families or small tribes.  The other values we learn from our culture are the 
result of cultural evolution selecting values and ethics that let us realize our core 
values while living in towns and cities and nations.


Brent



Re: evangelizing robots

2015-02-10 Thread meekerdb

On 2/10/2015 9:55 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 11:35 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 2/10/2015 5:49 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 6:40 PM, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:


On 2/10/2015 8:47 AM, Jason Resch wrote:

If you define increased intelligence as decreased probability of having 
a
false belief on any randomly chosen proposition, then 
superintelligences will
be wrong on almost nothing, and their beliefs will converge as their
intelligence rises. Therefore nearly all superintelligences will operate
according to the same belief system. We should stop worrying about 
trying to
ensure friendly AI, it will either be friendly or it won't according to 
what
is right.


The problem isn't beliefs, it's values.  Humans have certain core values
selected by evolution; and in addition they have many secondary 
culturally
determined values.  What values will super-AI have and where will it 
get them
and will they evolve?  That seems to be the main research topic at the 
Machine
Intelligence Research Institute.


Were all your values set at birth and driven by biology, or are some of 
your values
based on what you've since learned about the world?


Isn't that what I wrote just above?


If values can be learned, and if morality is a field that has objective
truth, then why wouldn't a superintelligence approach a correct value system?


What would correct mean?  Is vanilla *really* better than chocolate?

I think there are core values - self-preservation, love of offspring, 
desire for
companionship, desire for power that are provided by evolution and adapt 
people to
live in extended families or small tribes.  The other values we learn from 
our
culture are the result of cultural evolution selecting values and ethics 
that let us
realize our core values while living in towns and cities and nations.


Do you think in the long run that human society is evolving toward a more fair, more 
just, more correct system of values?


Not more correct, but perhaps one satisfying more of those core values.


If so, why can't a machine?


It can, but only if it has some core values and those values result in conflicts which can
be resolved in different ways.  Then it may find better ways to resolve the conflicts, 
because it has some core values against which to measure better or worse.


Particularly one with the thinking capacity of a billion human minds operating a million 
times faster?


Brent
Madness in individuals is rare.  In organizations it is the rule.
  --- Friedrich Nietzsche



Re: What over 170 people think about machines that think

2015-02-10 Thread meekerdb

On 2/10/2015 10:11 PM, Jason Resch wrote:



On Wed, Feb 11, 2015 at 12:03 AM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 2/10/2015 9:40 PM, Jason Resch wrote:



On Tue, Feb 10, 2015 at 8:35 PM, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 2/10/2015 5:29 PM, LizR wrote:

On 5 February 2015 at 09:19, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 2/4/2015 11:14 AM, Bruno Marchal wrote:

On 03 Feb 2015, at 20:13, Jason Resch wrote:


I agree with John. If consciousness had no third-person observable
effects, it would be an epiphenomenon. And then there is no way to
explain why we're even having this discussion about consciousness.


So we all agree on this.


?? Why aren't first person observable effects enough to discuss?


I guess because if there are no third-person observable effects of
consciousness, then I can't detect any other conscious entities to 
discuss the
effects with...


The epiphenomenon model says there are third-person observable effects 
of the
phenomenon, which suffice for detecting other entities.  Whether the 
other
entities are really conscious or just faking it is a matter of 
inference.


Did you mean to say The epiphenomenon model says there are *no* 
third-person
observable effects of the phenomenon ?


Of course not.  The phenomenon is what is observable, by definition.  It's 
the
epiphenomenon which is not third-person observable.


But in the epiphenomenon model, consciousness is the epiphenomenon and the phenomenal 
part of consciousness is its first-person aspect.


The statement was, there are no third-person observable effects of consciousness. In the
epiphenomenal theory of consciousness, I take the phenomenon to be the observable
behavior, neuron firings, etc., and consciousness the corresponding epiphenomenon. Those
phenomena do have third-person observable effects, and in general that's how we infer
consciousness in others.


Brent



Re: evangelizing robots

2015-02-10 Thread Jason Resch
On Wed, Feb 11, 2015 at 1:44 AM, LizR lizj...@gmail.com wrote:

 On 11 February 2015 at 18:29, meekerdb meeke...@verizon.net wrote:

  On 2/10/2015 5:47 PM, Jason Resch wrote:

 On Tue, Feb 10, 2015 at 5:57 PM, LizR lizj...@gmail.com wrote:

 I call this the Cyberman (or Mr Spock) problem. The Cybermen in Doctor
 Who are logical and unemotional, yet they wish to convert the rest of the
 world to be like them. Why? Without emotion they have no reason to do that,
 or anything else. (Likewise Mr Spock, except as we know he only repressed
 his emotions.)


  I'm not sure whether emotions are necessary to have goals. Then again,
 perhaps they are.

  The 'big' emotions like fear, rage, lust probably aren't, but values,
 feelings that this is preferred to that, are.


 I don't see how one could have an opinion on whether one should do
 anything without emotions being involved.


So do you believe the Mars Rover is motivated to explore by its emotions?

Jason



Re: evangelizing robots

2015-02-10 Thread LizR
I call this the Cyberman (or Mr Spock) problem. The Cybermen in Doctor Who
are logical and unemotional, yet they wish to convert the rest of the world
to be like them. Why? Without emotion they have no reason to do that, or
anything else. (Likewise Mr Spock, except as we know he only repressed his
emotions.)



Re: Why is there something rather than nothing? From quantum theory to dialectics?

2015-02-10 Thread spudboy100 via Everything List

I am just reading his stuff, slowly, so I cannot answer your mathematicalism 
versus arithmeticism question well enough for a discussion. I could provide a couple 
of links to his papers (maybe 2 or 3?) that may address your question. However, 
if you think it might harm the flow of discussion here, I will not post them. 
What I have learned is that physicists are fearful, from a career point of view, 
of being damaged for publishing physics work that has anything to do with 
speculation about consciousness. But philosophers can get away with it because 
they are removed from pure science. They can ask questions and peek over physicists' 
shoulders by reviewing their work, without receiving criticism.

Mitch
 
 
-Original Message-
From: Bruno Marchal marc...@ulb.ac.be
To: everything-list everything-list@googlegroups.com
Sent: Mon, Feb 9, 2015 3:16 pm
Subject: Re: Why is there something rather than nothing? From quantum theory to 
dialectics?




On 08 Feb 2015, at 13:30, spudboy100 via Everything List wrote:


Bruno, are you familiar with the atheistic (so-called) theologies of Dr. Eric 
Steinhart? He's a bright philosopher from William Paterson University in the 
US. He was originally a software engineer and is, like yourself, a math guy. He 
applies his experience to his philosophy, and after reading your writings here, 
as well as Amoeba, his insights seem to parallel yours. Also Clement Vidal's, 
as well. Ever heard of him? His papers focus on the origins of the universe(s), 
Platonism, Computationalism, and Digital Philosophy. It's not exactly like 
your work, but it certainly parallels it. It sort of informs this topic, I think.




I don't think I know him, although the name invokes some familiarity. Did he get 
the first-person indeterminacy, the mathematicalism or arithmeticalism?


That would be the means to test this. You might sum up the idea, if you have the time.


The problem with many scientists is that they stop doing science when doing 
philosophy. It is not always a problem, but it can be confusing in that field.






Bruno

-Original Message-
 From: Samiya Illias samiyaill...@gmail.com
 To: everything-list everything-list@googlegroups.com
 Sent: Sat, Feb 7, 2015 11:07 pm
 Subject: Re: Why is there something rather than nothing? From quantum theory to dialectics?

On Thu, Feb 5, 2015 at 8:27 PM, Bruno Marchal marc...@ulb.ac.be wrote:

On 04 Feb 2015, at 17:14, Samiya Illias wrote:

On Wed, Feb 4, 2015 at 5:49 PM, Bruno Marchal marc...@ulb.ac.be wrote:

On 04 Feb 2015, at 06:02, Samiya Illias wrote:

On 04-Feb-2015, at 12:01 am, Bruno Marchal marc...@ulb.ac.be wrote:

Then reason shows that arithmetic is already full of life, indeed full of an 
infinity of universal machines competing to provide your infinitely many 
relatively consistent continuations.

Incompleteness imposes, at least formally, a soul (a first person), an observer 
(a first person plural), a god (an independent, simple but deep truth) on any 
machine believing in the RA axioms together with enough induction axioms. I 
know you believe in them.

The lexicon is:

p              truth                God
[]p            provable             intelligible (modal logic, G and G*)
[]p & p        the soul             (modal logic, S4Grz)
[]p & <>t      intelligible matter  (with p sigma_1) (modal logic, Z1, Z1*)
[]p & <>t & p  sensible matter      (with p sigma_1) (modal logic, X1, X1*)

You need to study some math,

I have been wanting to, but it seems such an uphill task. Yet it's a mountain I 
would like to climb :)

7 + 0 = 7. You are OK with this?  Tell me.

OK

Are you OK with the generalisation? For all numbers n, n + 0 = n.  Right?

Right :)
You suggest I begin with Set Theory?

No need of set theory, as I have never been able to really prefer one theory to 
another. It is too powerful, not fundamental. At some point naive set 
theory will be used, but just for making things easier: it will never be part of 
the fundamental assumptions.

I use only elementary arithmetic, so you need only to understand the following 
statements (and some others later):

Please see if my assumptions/interpretations below are correct:

x + 0 = x

if x = 1, then
1 + 0 = 1

x + successor(y) = successor(x + y)

1 + 2 = (1+2) = 3

I agree, but you don't show the use of the axiom: x + successor(y) = 
successor(x + y), or x + s(y) = s(x + y).

I didn't use the axioms. I just substituted the axioms' variables with the 
natural numbers.

Are you OK? To avoid notational difficulties, I represent the numbers by their 
degree of parenthood (so to speak) with 0. Abbreviating s for successor:

0, s(0), s(s(0)), s(s(s(0))), ...

If the sequence represents 0, 1, 2, 3, ...
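The two addition axioms above can be checked mechanically. Here is a minimal sketch (my illustration, not part of the thread) that represents numbers as nested applications of s, exactly as in the sequence 0, s(0), s(s(0)), ..., and defines addition by the two equations x + 0 = x and x + s(y) = s(x + y):

```python
# Numbers as iterated successor: 0, s(0), s(s(0)), ...
def s(n):
    return ('s', n)

def add(x, y):
    if y == 0:               # axiom: x + 0 = x
        return x
    return s(add(x, y[1]))   # axiom: x + s(y) = s(x + y)

def to_int(n):
    """Count the nesting depth to recover the ordinary numeral."""
    return 0 if n == 0 else 1 + to_int(n[1])

two = s(s(0))
three = s(s(s(0)))
print(to_int(add(two, three)))  # 5
```

Each recursive call peels one s off the second argument, so the computation of 2 + 3 literally unfolds as s(2 + 2) = s(s(2 + 1)) = s(s(s(2 + 0))) = s(s(s(2))).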
 
 
 
 
 

 
  
We 

Re: evangelizing robots

2015-02-10 Thread meekerdb

On 2/10/2015 8:47 AM, Jason Resch wrote:
If you define increased intelligence as decreased probability of having a false belief 
on any randomly chosen proposition, then superintelligences will be wrong on almost 
nothing, and their beliefs will converge as their intelligence rises. Therefore nearly 
all superintelligences will operate according to the same belief system. We should stop 
worrying about trying to ensure friendly AI, it will either be friendly or it won't 
according to what is right.
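As a toy check of the convergence claim (my illustration, not from the thread): model each agent as holding a belief on each of N independent propositions, being wrong on any one of them with probability eps. Two such agents disagree on a proposition exactly when one of them is wrong, which happens with probability 2*eps*(1-eps), so disagreement vanishes as eps approaches 0:

```python
import random

def disagreement(eps, n_props=100_000, seed=0):
    """Fraction of propositions on which two independent agents,
    each wrong with probability eps, hold different beliefs."""
    rng = random.Random(seed)
    diff = 0
    for _ in range(n_props):
        a_wrong = rng.random() < eps
        b_wrong = rng.random() < eps
        if a_wrong != b_wrong:   # disagree iff exactly one is wrong
            diff += 1
    return diff / n_props

# Disagreement tracks 2*eps*(1-eps), shrinking with the error rate:
for eps in (0.4, 0.1, 0.01):
    print(eps, round(disagreement(eps), 3))
```

Note this only shows that beliefs about facts converge; it says nothing about goals, which is precisely the objection raised in the thread.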


The problem isn't beliefs, it's values.  Humans have certain core values selected by 
evolution; in addition they have many secondary, culturally determined values.  What 
values will a super-AI have, where will it get them, and will they evolve?  That seems to 
be the main research topic at the Machine Intelligence Research Institute.


Brent



I think chances are that it will be friendly, since I happen to believe in universal 
personhood, and if that belief is correct, then superintelligences will also come to 
believe it is correct. And with the belief in universal personhood it would know that 
harm to others is harm to the self.


Jason

On Tue, Feb 10, 2015 at 2:19 AM, Alberto G. Corona agocor...@gmail.com wrote:


I can't even enumerate the number of ways in which that article is wrong.

First of all, any intelligent robot MUST have a religion in order to act in 
any way: a set of core beliefs. A non-intelligent robot needs them too: it is 
its set of constants. The intelligent robot can rewrite the constants from 
which it derives its calculations for action, and if the robot is 
self-preserving and reproduces sexually, it has to adjust its constants, 
i.e. its beliefs, according to some Darwinian algorithm that must take into 
account itself but especially the group in which it lives and collaborates.

If the robot does not reproduce sexually and its fellows do not execute very 
similar programs, it is pointless to teach it any human religion.

For these and other higher aspects, such as how intelligent beings communicate 
perceptions, how a robot elaborates philosophical and theological concepts, and 
how it collaborates with others, see my post about robotic truth.

But I think that a robot with such a level of intelligence will never be possible.

2015-02-09 21:59 GMT+01:00 meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net:


In two senses of that term! Or something.

http://bigthink.com/ideafeed/robot-religion-2


http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922


