Re: super intelligence and self-sampling

2015-06-19 Thread Bruno Marchal


On 16 Jun 2015, at 00:50, meekerdb wrote:


On 6/15/2015 9:59 AM, Bruno Marchal wrote:


On 11 Jun 2015, at 01:21, meekerdb wrote:


On 6/10/2015 4:06 PM, LizR wrote:

On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:

A human is an ape which tortures other apes.
Not just torture but also eliminate, e.g. Homo erectus, Homo
neanderthalensis, ...  It's called evolution.


You sound like you're in favour.


When there are winners and losers I'm in favor of being a winner.




To win you need to master the art of losing. The future belongs to  
the good losers :)


Is that an extrapolation from the past?


It is more an interpolation on the futures :)

It is a principle in The Art of War. It is also a principle of
many martial arts, with the many ways to fall down in judo, and
techniques to transform defeat into victory.


Good losers are better than bad winners, if you keep in mind this
quasi-tautology (as the good is always better than the bad, by definition).


Bruno






Brent

--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: super intelligence and self-sampling

2015-06-15 Thread Bruno Marchal


On 10 Jun 2015, at 20:26, meekerdb wrote:


On 6/10/2015 12:51 AM, Bruno Marchal wrote:


On 10 Jun 2015, at 01:40, LizR wrote:


On 10 June 2015 at 11:38, meekerdb meeke...@verizon.net wrote:
On 6/9/2015 2:25 PM, Telmo Menezes wrote:
On Tue, Jun 9, 2015 at 10:15 PM, John Clark  
johnkcl...@gmail.com wrote:

On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:

 Super-intelligence is more resilient than human intelligence,  
so it is likely to last longer


Maybe, but I note that smarter than average humans seem to have  
higher than average rates of suicide too.


I wonder if this is because intelligence leads to depression or  
because it makes one more likely to research and correctly  
execute a viable method of suicide. Do you know if the rates are  
also higher on failed attempts?

According to most people on this list, they are ALL failed attempts.

Heehee.

(Or at least most people are willing to entertain the possibility.)


Which is enough to doubt that kind of self-sampling assumption, which
is based on ASSA (the absolute self-sampling assumption), which I thought
was shown invalid (cf. our old discussion of the doomsday argument).
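The doomsday-style reasoning that ASSA underwrites can be sketched as a toy Bayesian update (an illustration only, with hypothetical numbers; the thread itself treats ASSA as doubtful):

```python
from fractions import Fraction

def posterior(rank, hypotheses):
    """Uniform prior over hypotheses for the total number of observers N;
    under ASSA your birth rank is uniform on 1..N, so P(rank | N) = 1/N."""
    prior = Fraction(1, len(hypotheses))
    like = {n: (Fraction(1, n) if rank <= n else Fraction(0))
            for n in hypotheses}
    norm = sum(prior * like[n] for n in hypotheses)
    return {n: prior * like[n] / norm for n in hypotheses}

# Observing a low birth rank shifts posterior weight toward the
# smaller total population ("doom soon"):
post = posterior(rank=60, hypotheses=[100, 10_000])
```

With these toy numbers the "small total" hypothesis ends up favored 100:1, which is the doomsday argument in miniature.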


Then what is super-intelligence? I doubt this makes sense, or at
least it should be made more precise.


I know it is counter-intuitive, or that I perhaps use a non-standard
notion of intelligence(*), but I think that intelligence is maximal
with the virgin universal machine, or perhaps the Löbian machine
(but I am not sure), and then can only decrease.
The singularity is when the machine will supersede human stupidity.


I might think that animals are more intelligent than humans. Maybe
plants are more intelligent than animals. But I guess people here are
talking about competence. This can grow, but it is often used for
stupid behavior.


By what standard can you judge that an animal putatively more  
intelligent than you has acted stupidly?


Where did I do that?






A human is an ape which tortures other apes.


Not just torture but also eliminate, e.g. Homo erectus, Homo
neanderthalensis, ...  It's called evolution.


I am not sure of this, but if true that makes my point even more  
obvious.


(I might be blasphemous with respect to the machine theology, so add
"IF comp is true", and keep in mind that I use the terms in a larger
sense than usual.)


Bruno






Brent



http://iridia.ulb.ac.be/~marchal/





Re: super intelligence and self-sampling

2015-06-15 Thread Bruno Marchal


On 10 Jun 2015, at 16:44, Telmo Menezes wrote:






On 10 Jun 2015, at 09:51, Bruno Marchal marc...@ulb.ac.be wrote:



On 10 Jun 2015, at 01:40, LizR wrote:


On 10 June 2015 at 11:38, meekerdb meeke...@verizon.net wrote:
On 6/9/2015 2:25 PM, Telmo Menezes wrote:
On Tue, Jun 9, 2015 at 10:15 PM, John Clark  
johnkcl...@gmail.com wrote:

On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:

 Super-intelligence is more resilient than human intelligence,  
so it is likely to last longer


Maybe, but I note that smarter than average humans seem to have  
higher than average rates of suicide too.


I wonder if this is because intelligence leads to depression or  
because it makes one more likely to research and correctly  
execute a viable method of suicide. Do you know if the rates are  
also higher on failed attempts?

According to most people on this list, they are ALL failed attempts.

Heehee.

(Or at least most people are willing to entertain the possibility.)


Which is enough to doubt that kind of self-sampling assumption, which
is based on ASSA (the absolute self-sampling assumption), which I thought
was shown invalid (cf. our old discussion of the doomsday argument).


Then what is super-intelligence? I doubt this makes sense, or at
least it should be made more precise.


For the purpose of this discussion, I would say that you would only
have to grant that there is some utility function that captures
chances of survival. Then, super-intelligence is something that can
optimize this function beyond what human intelligence is capable of.
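Telmo's criterion can be caricatured as: given the same utility function, the super-intelligence is simply the stronger optimizer. A minimal sketch (the utility function, step counts, and learning rate are hypothetical stand-ins):

```python
def survival_utility(x):
    # Hypothetical utility: survival chance peaks at x = 3.
    return -(x - 3.0) ** 2

def optimize(steps, lr=0.1):
    # Deterministic gradient ascent; more steps = more optimizing power.
    x = 0.0
    for _ in range(steps):
        x += lr * (-2.0 * (x - 3.0))  # gradient of the utility
    return survival_utility(x)

human_level = optimize(steps=5)     # weak optimizer
super_level = optimize(steps=100)   # stronger optimizer, same utility
```

Under this caricature, "super-intelligence" just means `super_level > human_level` on the shared survival utility; nothing else about the agent is specified.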




Then amoebas and bacteria are more clever than dinosaurs and humans.
OK, but again, I would say that it is a particular competence. It
might be that intelligence per se is not necessarily useful for
surviving, as it makes you more sensitive to events. Some studies
suggest that very gifted people die younger than others.







I know it is counter-intuitive, or that I perhaps use a non-standard
notion of intelligence(*), but I think that intelligence is maximal
with the virgin universal machine, or perhaps the Löbian machine
(but I am not sure), and then can only decrease.
The singularity is when the machine will supersede human stupidity.


I believe I understand what you mean, but perhaps we are talking  
about different things.


I define intelligence in a very general sense as the negation of
stupidity, and I define stupidity as either the assertion of "I am
intelligent" or of "I am stupid". It makes a pebble intelligent, but
this is not a problem.
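Bruno's definition is mechanical enough to write down as a toy predicate (a sketch only; representing a machine's assertions as a list of strings is my assumption, not part of the definition):

```python
def is_stupid(assertions):
    # Bruno's criterion: a machine is stupid iff it asserts its own
    # intelligence or its own stupidity.
    return "I am intelligent" in assertions or "I am stupid" in assertions

def is_intelligent(assertions):
    # Intelligence is defined purely as the negation of stupidity.
    return not is_stupid(assertions)

# A pebble asserts nothing, so by this definition it is intelligent;
# any machine that brags (or despairs) about itself is not.
pebble = is_intelligent([])
braggart = is_intelligent(["I am intelligent"])
```

Note how the definition is entirely negative: it never rewards any capability, which is why it diverges from competence.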


I distinguish this from competence, and from consciousness.









I might think that animals are more intelligent than humans. Maybe
plants are more intelligent than animals. But I guess people here are
talking about competence. This can grow, but it is often used for
stupid behavior. A human is an ape which tortures other apes.


Perhaps they merge in the end. For example, the super-intelligence  
according to my definition eventually develops a TOE that makes it  
believe that the well-being of others is the same as its own.


I am OK with this.

Bruno


PS Sorry for the delays (exam period)




Best
Telmo



Bruno

(*) A machine is intelligent if it is not stupid, and a machine is
stupid if she asserts that she is intelligent, or that she is
stupid. (It makes a pebble infinitely intelligent, I agree.)







http://iridia.ulb.ac.be/~marchal/









http://iridia.ulb.ac.be/~marchal/




Re: super intelligence and self-sampling

2015-06-15 Thread Bruno Marchal


On 11 Jun 2015, at 01:21, meekerdb wrote:


On 6/10/2015 4:06 PM, LizR wrote:

On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:

A human is an ape which tortures other apes.
Not just torture but also eliminate, e.g. Homo erectus, Homo
neanderthalensis, ...  It's called evolution.


You sound like you're in favour.


When there are winners and losers I'm in favor of being a winner.




To win you need to master the art of losing. The future belongs to the  
good losers :)


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: super intelligence and self-sampling

2015-06-15 Thread LizR
On 11 June 2015 at 16:03, meekerdb meeke...@verizon.net wrote:

  On 6/10/2015 6:36 PM, LizR wrote:

  On 11 June 2015 at 11:21, meekerdb meeke...@verizon.net wrote:

   On 6/10/2015 4:06 PM, LizR wrote:

  On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:

   A human is an ape which tortures other apes.

 Not just torture but also eliminate, e.g. Homo erectus, Homo
 neanderthalensis, ...  It's called evolution.


  You sound like you're in favour.

  When there are winners and losers I'm in favor of being a winner.


  But your original statement didn't talk about winners and losers, it
 talked about elimination, specifically it sounded as though you were in
 favour of one ape eliminating another one (on a species basis, going by
 your mention of neanderthals).

  So, are you actually in favour of genocide, or were you just shooting
 your mouth off?

  Are you a Neanderthal or are you just trolling?

 Neither, you're the one who said the things quoted above, which certainly
look like you're in favour of genocide when directed against the
Neanderthals. Making spiteful comments doesn't change that, and is actually
quite hurtful. How about manning up and explaining yourself properly,
instead of retreating behind being flip, snide and childish?



Re: super intelligence and self-sampling

2015-06-15 Thread meekerdb

On 6/15/2015 1:27 PM, LizR wrote:
On 11 June 2015 at 16:03, meekerdb meeke...@verizon.net wrote:


On 6/10/2015 6:36 PM, LizR wrote:

On 11 June 2015 at 11:21, meekerdb meeke...@verizon.net wrote:

On 6/10/2015 4:06 PM, LizR wrote:

On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:


A human is an ape which tortures other apes.

Not just torture but also eliminate, e.g. Homo erectus, Homo
neanderthalensis, ...  It's called evolution.


You sound like you're in favour.

When there are winners and losers I'm in favor of being a winner.


But your original statement didn't talk about winners and losers, it talked
about elimination; specifically it sounded as though you were in favour of one
ape eliminating another one (on a species basis, going by your mention of
Neanderthals).

So, are you actually in favour of genocide, or were you just shooting your 
mouth off?

Are you a Neanderthal or are you just trolling?

Neither, you're the one who said the things quoted above, which certainly look like 
you're in favour of genocide when directed against the Neanderthals. Making spiteful 
comments doesn't change that, and is actually quite hurtful. How about manning up and 
explaining yourself properly, instead of retreating behind being flip, snide and childish?


How about not imputing opinions not in evidence and not trolling with
have-you-stopped-beating-your-wife questions?


Brent



Re: super intelligence and self-sampling

2015-06-15 Thread meekerdb

On 6/15/2015 9:59 AM, Bruno Marchal wrote:


On 11 Jun 2015, at 01:21, meekerdb wrote:


On 6/10/2015 4:06 PM, LizR wrote:
On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:



A human is an ape which tortures other apes.
Not just torture but also eliminate, e.g. Homo erectus, Homo neanderthalensis, ...
It's called evolution.



You sound like you're in favour.


When there are winners and losers I'm in favor of being a winner.




To win you need to master the art of losing. The future belongs to the good 
losers :)


Is that an extrapolation from the past?

Brent



Re: super intelligence and self-sampling

2015-06-12 Thread John Clark
On Thu, Jun 11, 2015  spudboy100 via Everything List 
everything-list@googlegroups.com wrote:

 Meh! I have read that some theorists now predict that dark whatever will
 cause a new contraction and that this is already occurring. It's the sort of
 thing that gets mentioned on arXiv and Phys.org. Please note, I am not
 waiting up for the next x-billion years to see if this occurs or not.


Predict? The only thing I've heard is that because nobody knows what Dark
Energy is, we can't entirely rule out the possibility that in trillions of
years it will suddenly reverse direction, even though there is absolutely no
sign of that happening now. But no amount of spin can change the fact that
the discovery of an accelerating universe was a devastating blow to
Tipler's Omega Point theory. And how do you explain away Tipler's incorrect
predictions about the value of the Hubble constant and the mass of the
Higgs boson, when Tipler wrote in black and white that those predictions HAD
to be correct or his theory wouldn't work?

  John K Clark



Re: super intelligence and self-sampling

2015-06-11 Thread John Clark
On Wed, Jun 10, 2015  spudboy100 via Everything List 
everything-list@googlegroups.com wrote:

  Yes, but there have been so many counterexamples to the 1997 WMAP
 analysis that Tipler may end up correct.

I don't know what you're talking about. In his 1993 book Tipler made a
number of predictions and said that if even one of those predictions was
wrong his entire theory could not work; and Tipler's predictions turned out
to be wrong, some spectacularly wrong. Tipler predicted the expansion of
the universe would slow down, then it would stop, then it would change
direction and collapse in on itself; from the heat of that imploding
fireball he thought a hyper-advanced civilization could theoretically
extract an infinite amount of energy. But we now know that due to Dark
Energy (which he did NOT predict) the expansion of the cosmos is
accelerating not decelerating so that fireball will never happen.

Tipler also predicted that the Higgs boson must be at 220 GeV ± 20, but we
now know it is 125.3 GeV ± 0.5. And Tipler predicted that the Hubble
constant must be less than or equal to 45, but we now know it's 67.8 ±
0.77. It's clear we don't live in the sort of universe that Tipler thought
we did. More than one of his predictions was wrong, so if we take Tipler at
his word then his theory must be wrong too.
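The mismatch can be checked mechanically. A small sketch, using only the figures quoted in this thread and treating overlap of the error bars as the consistency test (a simplification; real comparisons would be more careful):

```python
def intervals_overlap(a, b):
    """a and b are (central value, half-width) pairs; call them
    consistent if the two error intervals overlap."""
    (ca, wa), (cb, wb) = a, b
    return abs(ca - cb) <= wa + wb

# Figures as quoted in this thread (GeV for the Higgs mass;
# the Hubble constant prediction was an upper bound of 45):
higgs_pred, higgs_meas = (220.0, 20.0), (125.3, 0.5)
hubble_pred_max, hubble_meas = 45.0, (67.8, 0.77)

higgs_ok = intervals_overlap(higgs_pred, higgs_meas)
hubble_ok = hubble_meas[0] - hubble_meas[1] <= hubble_pred_max
theory_survives = higgs_ok and hubble_ok
```

Both checks fail by a wide margin, which is Clark's point: since Tipler required every prediction to hold, one failure suffices.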








 I am talking about the accelerated expansion reversing. I hold computer
 theory as overtaking most cosmo theories, be it a saddle, a doughnut, flat
 as a pancake, whatever. And no, you need not agree, but for me it seems
 apparent. You?


  -Original Message-
 From: John Clark johnkcl...@gmail.com
 To: everything-list everything-list@googlegroups.com
 Sent: Wed, Jun 10, 2015 3:00 pm
 Subject: Re: super intelligence and self-sampling

   On Wed, Jun 10, 2015  spudboy100 via Everything List 
 everything-list@googlegroups.com wrote:

   Now you are talking of Tipler's Omega Point. A usable theory when
 combined with MWI, which Tipler supports.


  Tipler's idea of the Omega Point was interesting in 1993 when he
 introduced the idea, but unfortunately in the last 22 years it has proven
 to be wrong. And no matter how beautiful a theory is if it doesn't fit the
 facts it must be abandoned.

John K Clark










Re: super intelligence and self-sampling

2015-06-11 Thread meekerdb

On 6/11/2015 10:47 AM, John Clark wrote:


On Wed, Jun 10, 2015  spudboy100 via Everything List everything-list@googlegroups.com
wrote:


 Yes, but there have been so many counterexamples to the 1997 WMAP analysis that
Tipler may end up correct.


I don't know what you're talking about. In his 1993 book Tipler made a number of 
predictions and said that if even one of those predictions was wrong his entire theory 
could not work; and Tipler's predictions turned out to be wrong, some spectacularly 
wrong. Tipler predicted the expansion of the universe would slow down, then it would 
stop, then it would change direction and collapse in on itself; from the heat of that 
imploding fireball he thought a hyper-advanced civilization could theoretically extract 
an infinite amount of energy. But we now know that due to Dark Energy (which he did NOT 
predict) the expansion of the cosmos is accelerating not decelerating so that fireball 
will never happen.


We know the expansion of the universe is accelerating, and that is well modeled by a
cosmological constant.  But general relativity is only an effective approximation to some
as yet unknown quantum theory of gravity; and in a quantum theory of gravity the
cosmological constant may be a manifestation of some field that is subject to a phase
change, which would allow for an ultimate contraction of the universe.


Not that I put any credence in Tipler's speculations.

Brent



Tipler also predicted that the Higgs boson must be at 220 GeV ± 20, but we now know it
is 125.3 GeV ± 0.5. And Tipler predicted that the Hubble constant must be less than or
equal to 45, but we now know it's 67.8 ± 0.77. It's clear we don't live in the sort of
universe that Tipler thought we did. More than one of his predictions was wrong, so if we
take Tipler at his word then his theory must be wrong too.







I am talking about the accelerated expansion reversing. I hold computer
theory as overtaking most cosmo theories, be it a saddle, a doughnut, flat
as a pancake, whatever. And no, you need not agree, but for me it seems
apparent. You?


-Original Message-
From: John Clark johnkcl...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Wed, Jun 10, 2015 3:00 pm
Subject: Re: super intelligence and self-sampling

On Wed, Jun 10, 2015  spudboy100 via Everything List
everything-list@googlegroups.com wrote:

 Now you are talking of Tipler's Omega Point. A usable theory when
combined with MWI, which Tipler supports.



Tipler's idea of the Omega Point was interesting in 1993 when he introduced 
the
idea, but unfortunately in the last 22 years it has proven to be wrong. And 
no
matter how beautiful a theory is if it doesn't fit the facts it must be 
abandoned.

  John K Clark



Re: super intelligence and self-sampling

2015-06-11 Thread John Clark
On Thu, Jun 11, 2015  spudboy100 via Everything List 
everything-list@googlegroups.com wrote:

 Dark energy and matter have been predicted by some physicists and astronomers
 to cause the expansion to reverse.


I don't know what you're talking about. Dark Energy is causing the
universe's expansion to accelerate, not slow down.

 No he did not predict dark matter or energy but it seems to be in the
 cards despite this.


Tipler didn't predict Dark Energy, but he did predict that the
Higgs boson would have a mass of 220 GeV ± 20 and that the Hubble
constant must be less than or equal to 45, and Tipler's predictions have
been proven to be DEAD WRONG. Some called Tipler a crackpot in 1993 when he
wrote his book, but I did not, because he made clear predictions and said if
any one of them was wrong then his entire theory was wrong. Well, lots of
his predictions were wrong, and however much I may have personally wished it
was true, my preferences have nothing to do with the way things are. Tipler
was right about one thing: if a theory does not fit the facts it must be
abandoned. That's why Tipler wasn't a crackpot; he was just wrong.


  general relativity is only an effective approximation to some as yet
 unknown quantum theory of gravity; and in a quantum theory of gravity the
 cosmological constant may be a manifestation of some field that is subject
 to a phase change and would allow for an ultimate contraction of the
 universe


You can ALWAYS say that if the fundamental laws of physics are not what we
think they are then my theory could still be right, but that's not science;
in science you say if X isn't Y then my ideas are wrong. To his credit
Tipler gave himself no wiggle room: he insisted that ALL his predictions
HAD to be true. They weren't. End of story.

 John K Clark






Re: super intelligence and self-sampling

2015-06-11 Thread spudboy100 via Everything List
Meh! I have read that some theorists now predict that dark whatever will cause
a new contraction and that this is already occurring. It's the sort of thing that
gets mentioned on arXiv and Phys.org. Please note, I am not waiting up for the
next x-billion years to see if this occurs or not.

Sent from AOL Mobile Mail



Re: super intelligence and self-sampling

2015-06-11 Thread LizR
On 12 June 2015 at 14:19, John Clark johnkcl...@gmail.com wrote:

 On Thu, Jun 11, 2015  spudboy100 via Everything List 
 everything-list@googlegroups.com wrote:

  Dark energy and matter have been predicted by some physicists and
 astronomers to cause the expansion to reverse.


 I don't know what you're talking about. Dark Energy is causing the
 universe's expansion to accelerate, not slow down.

 Since we don't know its nature, it's *possible* it will wear off after a
while, or even go into reverse. But this is 100% speculation at present, of
course - and will be until we devise a testable theory of what it actually
is!

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: super intelligence and self-sampling

2015-06-11 Thread spudboy100 via Everything List
Dark energy and matter have been predicted by some physicists and
astronomers to cause the expansion to reverse. Whether this really occurs
is out of my pudgy hands. No, he did not predict dark matter or energy, but
it seems to be in the cards despite this. Agreed, there is physics that we
have never seen before, awaiting the scientist.


-Original Message-
From: meekerdb meeke...@verizon.net
To: everything-list everything-list@googlegroups.com
Sent: Thu, Jun 11, 2015 3:10 pm
Subject: Re: super intelligence and self-sampling


  
On 6/11/2015 10:47 AM, John Clark wrote:

On Wed, Jun 10, 2015  spudboy100 via Everything List
everything-list@googlegroups.com wrote:

  Yes, but there have been so many counterexamples to the 1997 WMAP
 analysis that Tipler may end up correct.

I don't know what you're talking about. In his 1993 book Tipler made a
number of predictions and said that if even one of those predictions was
wrong his entire theory could not work; and Tipler's predictions turned out
to be wrong, some spectacularly wrong. Tipler predicted the expansion of
the universe would slow down, then stop, then change direction and collapse
in on itself; from the heat of that imploding fireball he thought a
hyper-advanced civilization could theoretically extract an infinite amount
of energy. But we now know that, due to Dark Energy (which he did NOT
predict), the expansion of the cosmos is accelerating, not decelerating, so
that fireball will never happen.

We know the expansion of the universe is accelerating, and that is well
modeled by a cosmological constant. But general relativity is only an
effective approximation to some as yet unknown quantum theory of gravity;
and in a quantum theory of gravity the cosmological constant may be a
manifestation of some field that is subject to a phase change and would
allow for an ultimate contraction of the universe.

Not that I put any credence in Tipler's speculations.

Brent

Tipler also predicted that the Higgs boson must be at 220 GeV +- 20, but we
now know it is 125.3 GeV +- .5. And Tipler predicted that the Hubble
constant must be less than or equal to 45, but we now know it's 67.8 +-
.77. It's clear we don't live in the sort of universe that Tipler thought
we did. More than one of his predictions was wrong, so if we take Tipler at
his word then his theory must be wrong too.
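The failure of those two predictions is simple arithmetic and can be checked mechanically. A minimal sketch (ours, using only the figures quoted in this thread; the helper `in_range` is an illustrative name, not anything from the original posts):

```python
def in_range(measured, center, tolerance):
    """True if `measured` lies within `center` +/- `tolerance`."""
    return abs(measured - center) <= tolerance

# Higgs boson mass: Tipler predicted 220 +/- 20 GeV; measured ~125.3 GeV.
higgs_ok = in_range(125.3, 220.0, 20.0)

# Hubble constant: Tipler required <= 45; measured ~67.8.
hubble_ok = 67.8 <= 45.0

print(higgs_ok, hubble_ok)  # -> False False: both predictions fail
```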
  
  
  
  
 I am talking about the accelerated expansion reversing; I hold computer
theory as overtaking most cosmo theories, be it a saddle, a doughnut, flat
as a pancake, whatever. And no, you need not agree, but for me it seems
apparent. You?

-Original Message-
 From: John Clark johnkcl...@gmail.com
 To: everything-list everything-list@googlegroups.com
 Sent: Wed, Jun 10, 2015 3:00 pm
 Subject: Re: super intelligence and self-sampling
 
  
  
   

 
  
 On Wed, Jun 10, 2015  spudboy100 via Everything List   
everything-list@googlegroups.com wrote:  
  
   
   
 Now you are talking of Tipler's Omega Point. A usable theory 
when combined with MWI, which Tipler supports.   
   

   
   
 Tipler's idea of the Omega Point was interesting in 1993 when he introduced 
the idea, but unfortunately in the last 22 years it has proven to be wrong. And 
no matter how beautiful a theory is if it doesn't fit the facts it must be 
abandoned.
   

   
   
   John K Clark

Re: super intelligence and self-sampling

2015-06-11 Thread LizR
On 12 June 2015 at 07:10, meekerdb meeke...@verizon.net wrote:


 Not that I put in credence in Tipler's speculations.


They seem to be based on a comp1 style idea, namely that consciousness is
generated by computation and that recreating the computation would
effectively resurrect that person. I think he assumes that the recreation
is an emulation at the level of the (as yet unknown) physics, which would
run afoul of no-cloning (and probably lots of other things). As I said in
reply to David's recent summary, I find it hard to believe that an emulated
me will actually be me in the important sense that I experience becoming
it.

Didn't Tipler make some testable predictions? (including the Higgs mass???)
If so did they pan out?



Re: super intelligence and self-sampling

2015-06-10 Thread Bruno Marchal


On 10 Jun 2015, at 01:40, LizR wrote:


On 10 June 2015 at 11:38, meekerdb meeke...@verizon.net wrote:
On 6/9/2015 2:25 PM, Telmo Menezes wrote:
On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com  
wrote:

On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:

 Super-intelligence is more resilient than human intelligence, so  
it is likely to last longer


Maybe, but I note that smarter than average humans seem to have  
higher than average rates of suicide too.


I wonder if this is because intelligence leads to depression or  
because it makes one more likely to research and correctly execute  
a viable method of suicide. Do you know if the rates are also  
higher on failed attempts?

According to most people on this list, they are ALL failed attempts.

Heehee.

(Or at least most people are willing to entertain the possibility.)


Which is enough to doubt this kind of self-sampling assumption, which is
based on ASSA (the absolute self-sampling assumption), which I thought was
shown invalid (cf. our old discussion of the doomsday argument).


Then what is super-intelligence? I doubt this makes sense, or at least it
should be made more precise.


I know it is counter-intuitive, or that I perhaps use a non-standard
notion of intelligence(*), but I think that intelligence is maximal with
the virgin universal machine, or perhaps the Löbian machine (but I am not
sure), and can then only decrease.

The singularity is when the machine will supersede the human's stupidity.

I might think that animals are more intelligent than humans. Maybe plants
are more intelligent than animals. But I guess people here are talking
about competence. This can grow, but is often used for stupid behavior. A
human is an ape which tortures other apes.


Bruno

(*) a machine is intelligent if it is not stupid, and a machine is stupid
if she asserts that she is intelligent, or that she is stupid. (It makes a
pebble infinitely intelligent, I agree.)







http://iridia.ulb.ac.be/~marchal/





Re: super intelligence and self-sampling

2015-06-10 Thread Telmo Menezes




 On 10 Jun 2015, at 09:51, Bruno Marchal marc...@ulb.ac.be wrote:
 
 
 On 10 Jun 2015, at 01:40, LizR wrote:
 
 On 10 June 2015 at 11:38, meekerdb meeke...@verizon.net wrote:
 On 6/9/2015 2:25 PM, Telmo Menezes  wrote:
 On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com wrote:
 On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:
 
  Super-intelligence is more resilient than human intelligence, so it is 
  likely to last longer
 
 Maybe, but I note that smarter than average humans seem to have higher 
 than average rates of suicide too.
 
 I wonder if this is because intelligence leads to depression or because it 
 makes one more likely to research and correctly execute a viable method of 
 suicide. Do you know if the rates are also higher on failed attempts?
 According to most people on this list, they are ALL failed attempts.
 
 Heehee.
 
  (Or at least most people are willing to entertain the possibility.)
 
  Which is enough to doubt this kind of self-sampling assumption, which is 
  based on ASSA (the absolute self-sampling assumption), which I thought was 
  shown invalid (cf. our old discussion of the doomsday argument).
 
  Then what is super-intelligence? I doubt this makes sense, or at least it 
  should be made more precise.

For the purpose of this discussion, I would say that you only have to grant 
that there is some utility function that captures chances of survival. 
Then, super-intelligence is something that can optimize this function beyond 
what human intelligence is capable of.

 
  I know it is counter-intuitive, or that I perhaps use a non-standard notion 
  of intelligence(*), but I think that intelligence is maximal with the virgin 
  universal machine, or perhaps the Löbian machine (but I am not sure), and 
  can then only decrease. 
  The singularity is when the machine will supersede the human's stupidity. 

I believe I understand what you mean, but perhaps we are talking about 
different things. 

 
  I might think that animals are more intelligent than humans. Maybe plants 
  are more intelligent than animals. But I guess people here are talking about 
  competence. This can grow, but is often used for stupid behavior. A human is 
  an ape which tortures other apes.

Perhaps they merge in the end. For example, the super-intelligence according to 
my definition eventually develops a TOE that makes it believe that the 
well-being of others is the same as its own.

Best
Telmo

 
 Bruno
 
  (*) a machine is intelligent if it is not stupid, and a machine is stupid if 
  she asserts that she is intelligent, or that she is stupid. (It makes a 
  pebble infinitely intelligent, I agree.)
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/
 
 
 



Re: super intelligence and self-sampling

2015-06-10 Thread Telmo Menezes




 On 10 Jun 2015, at 01:38, meekerdb meeke...@verizon.net wrote:
 
 On 6/9/2015 2:25 PM, Telmo Menezes wrote:
 
 
 On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com wrote:
 On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:
 
  Super-intelligence is more resilient than human intelligence, so it is 
  likely to last longer
 
 Maybe, but I note that smarter than average humans seem to have higher than 
 average rates of suicide too.
 
 I wonder if this is because intelligence leads to depression or because it 
 makes one more likely to research and correctly execute a viable method of 
 suicide. Do you know if the rates are also higher on failed attempts?
 
 According to most people on this list, they are ALL failed attempts.

Right, but only from the perspective of the person attempting suicide. Maybe 
higher intelligence makes you more successful at reducing your measure.

Telmo

 
 Brent



Re: super intelligence and self-sampling

2015-06-10 Thread Telmo Menezes




 On 10 Jun 2015, at 01:51, Russell Standish li...@hpcoders.com.au wrote:
 
 On Wed, Jun 10, 2015 at 09:39:37AM +1000, Stathis Papaioannou wrote:
 On 10 June 2015 at 08:37, LizR lizj...@gmail.com wrote:
 
  The normal answer to this is as stated - a superintelligence may form, as
  per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
  lots of non-super intelligences. So the chances of finding yourself
  non-super are vastly greater, because it takes billions of us to make one of
  them. However, this could lead to you eventually finding yourself super
  (especially if quantum immortality operates). Or a subset of super.
  
  PS Ants aren't relevant, as Russell explains in Theory of Nothing.
 
 
 OK, but the same argument can easily be made otherwise: why should you find
 yourself living in tiny New Zealand rather than populous China?
 
  I address that as well. Because of a peculiar conspiracy, country
  populations follow a near power law, which means it is just as likely
  that you will be born in a low-population country like New Zealand as in
  a high-population country like China, simply because there are more
  low-population countries, in just the right number.

I remember that argument and I agree.
Do biological species follow a power law distribution?

 
  Which leads one to suspect that self-sampling is another mechanism for
  the ubiquity of power laws in nature.
 
 I had a proof in one version of my paper that
 fragmentation/coalescence processes in general lead to power law
 distributions in just the right way to solve self-sampling problems
 like the above, but referees made me take it out. I suppose I should
 try to publish that result in a more mathematical journal at some
 point, but I'm getting tired of arguing with referees all the time ):.

Why not publish on arxiv?

Telmo
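Russell's counting argument can be sketched numerically. The following is our illustration, not the model from his paper: assume country populations have a Zipf-like density n(x) ~ x**-2, so the count of countries larger than x falls off like 1/x. Then each decade of country size holds the same total number of people, and a randomly sampled person is as likely to live in a small country as a large one. The size bounds 1e5-1e9 are arbitrary illustrative values.

```python
import math

def people_in_decade(a, b):
    # With country-count density n(x) ~ x**-2, the number of people living
    # in countries of size [a, b] is the integral of x * x**-2 = 1/x,
    # which is log(b / a) up to a constant factor.
    return math.log(b / a)

decades = [(10**k, 10**(k + 1)) for k in range(5, 9)]   # 1e5 .. 1e9
masses = [people_in_decade(a, b) for a, b in decades]
total = sum(masses)
shares = [m / total for m in masses]
print([round(s, 2) for s in shares])  # -> [0.25, 0.25, 0.25, 0.25]
```

Every decade carries an equal share of the population, which is the "just the right number" of small countries in the quoted paragraph.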

 
 
 -- 
 
 
 Prof Russell Standish  Phone 0425 253119 (mobile)
 Principal, High Performance Coders
 Visiting Professor of Mathematics  hpco...@hpcoders.com.au
 University of New South Wales  http://www.hpcoders.com.au
 
 



Re: super intelligence and self-sampling

2015-06-10 Thread spudboy100 via Everything List
Now you are talking of Tipler's Omega Point. A usable theory when combined
with MWI, which Tipler supports.

-Original Message-
From: Terren Suydam terren.suy...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Tue, Jun 9, 2015 7:15 pm
Subject: Re: super intelligence and self-sampling


 
From a quantum immortality perspective, I think if a superintelligence were
merging lots of intelligences, including yours, you would find yourself in
increasingly unlikely situations where you were able to escape being merged
with the superintelligence. Eventually, against all odds, you might be the
only non-integrated intelligence left.

Terren

On Tue, Jun 9, 2015 at 6:37 PM, LizR lizj...@gmail.com wrote:
   

The normal answer to this is as stated - a superintelligence may form, as
per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
lots of non-super intelligences. So the chances of finding yourself
non-super are vastly greater, because it takes billions of us to make one of
them. However, this could lead to you eventually finding yourself super
(especially if quantum immortality operates). Or a subset of super.

PS Ants aren't relevant, as Russell explains in Theory of Nothing.

On 10 June 2015 at 09:41, Terren Suydam terren.suy...@gmail.com 
wrote:

 
  
   
   
On Tue, Jun 9, 2015 at 5:31 PM, Telmo Menezes te...@telmomenezes.com wrote:

On Tue, Jun 9, 2015 at 8:19 PM, Terren Suydam terren.suy...@gmail.com wrote:

On Tue, Jun 9, 2015 at 1:48 PM, Telmo Menezes te...@telmomenezes.com wrote:

On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com wrote:

 Perhaps most superintelligences end up merging into one super-ego, so that
 their measure effectively becomes zero.

Perhaps, but I'm not convinced that this would reduce its measure. Consider
the fact that you are not an ant, even though there are apparently 100
trillion of them compared to 7 billion humans.

Telmo.

The way I resolve that one is to assume that self-sampling requires a high
enough level of intelligence to have an ego (the 'self' in self-sampling).
This is required to differentiate the computational histories we identify
with as identity & memory.

Let's say the entirety of humanity was uploaded into a simulated
environment, and that one day the simulated separation between minds was
eradicated, giving rise to a super-intelligence (just one path of many to a
superintelligence). From that moment on it would be impossible to
differentiate computational histories in terms of personal identity/memory,
so the measure goes to zero.

Why zero? There is still one conscious entity. Why wouldn't it remember the
great unification and the multitude of human events before that?

Telmo.

When I say goes to zero I mean it as in, approaches

Re: super intelligence and self-sampling

2015-06-10 Thread spudboy100 via Everything List

 Yes, but there have been so many counterexamples to the 1997 WMAP analysis
that Tipler may end up correct. I am talking about the accelerated
expansion reversing; I hold computer theory as overtaking most cosmo
theories, be it a saddle, a doughnut, flat as a pancake, whatever. And no,
you need not agree, but for me it seems apparent. You?

 

 

-Original Message-
From: John Clark johnkcl...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Wed, Jun 10, 2015 3:00 pm
Subject: Re: super intelligence and self-sampling


 
  
   
On Wed, Jun 10, 2015  spudboy100 via Everything List
everything-list@googlegroups.com wrote:

  Now you are talking of Tipler's Omega Point. A usable theory when
 combined with MWI, which Tipler supports.

Tipler's idea of the Omega Point was interesting in 1993 when he introduced
the idea, but unfortunately in the last 22 years it has proven to be wrong.
And no matter how beautiful a theory is, if it doesn't fit the facts it
must be abandoned.

  John K Clark
 



Re: super intelligence and self-sampling

2015-06-10 Thread meekerdb

On 6/10/2015 12:51 AM, Bruno Marchal wrote:


On 10 Jun 2015, at 01:40, LizR wrote:

On 10 June 2015 at 11:38, meekerdb meeke...@verizon.net mailto:meeke...@verizon.net 
wrote:


On 6/9/2015 2:25 PM, Telmo Menezes wrote:

On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com
mailto:johnkcl...@gmail.com wrote:

On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com
mailto:te...@telmomenezes.com wrote:

 Super-intelligence is more resilient than human intelligence, so 
it is likely to last longer


Maybe, but I note that smarter than average humans seem to have higher 
than
average rates of suicide too.


I wonder if this is because intelligence leads to depression or because it 
makes
one more likely to research and correctly execute a viable method of 
suicide. Do
you know if the rates are also higher on failed attempts?

According to most people on this list, they are ALL failed attempts.


Heehee.

(Or at least most people are willing to entertain the possibility.)


Which is enough to doubt this kind of self-sampling assumption, which is based 
on ASSA (the absolute self-sampling assumption), which I thought was shown 
invalid (cf. our old discussion of the doomsday argument).


Then what is super-intelligence? I doubt this makes sense, or at least it should 
be made more precise.


I know it is counter-intuitive, or that I perhaps use a non-standard notion of 
intelligence(*), but I think that intelligence is maximal with the virgin 
universal machine, or perhaps the Löbian machine (but I am not sure), and can 
then only decrease.

The singularity is when the machine will supersede the human's stupidity.

I might think that animals are more intelligent than humans. Maybe plants are 
more intelligent than animals. But I guess people here are talking about 
competence. This can grow, but is often used for stupid behavior.


By what standard can you judge that an animal putatively more intelligent than you has 
acted stupidly?



A human is an ape which tortures other apes.


Not just torture but also eliminate, e.g. Homo erectus, Homo 
neanderthalensis, ...  It's called evolution.


Brent



Re: super intelligence and self-sampling

2015-06-10 Thread John Clark
On Wed, Jun 10, 2015  spudboy100 via Everything List 
everything-list@googlegroups.com wrote:

 Now you are talking of Tipler's Omega Point. A usable theory when
 combined with MWI, which Tipler supports.


Tipler's idea of the Omega Point was interesting in 1993 when he introduced
the idea, but unfortunately in the last 22 years it has proven to be wrong.
And no matter how beautiful a theory is if it doesn't fit the facts it must
be abandoned.

  John K Clark








Re: super intelligence and self-sampling

2015-06-10 Thread meekerdb

On 6/10/2015 7:44 AM, Telmo Menezes wrote:
For the purpose of this discussion, I would say that you only have to grant that 
there is some utility function that captures chances of survival. Then, 
super-intelligence is something that can optimize this function beyond what 
human intelligence is capable of.


Ahh, so it's bacteria.

Brent



Re: super intelligence and self-sampling

2015-06-10 Thread LizR
On 10 June 2015 at 19:05, Telmo Menezes te...@telmomenezes.com wrote:

 Do biological species follow a power law distribution?


I don't know, but I imagine so - there are generally a lot more of the
smaller ones.



Re: super intelligence and self-sampling

2015-06-10 Thread LizR
On 11 June 2015 at 07:21, spudboy100 via Everything List 
everything-list@googlegroups.com wrote:

  Yes, but there have been so many counterexamples to the 1997 WMAP
 analysis that Tipler may end up correct. I am talking about the accelerated
 expansion reversing; I hold computer theory as overtaking most cosmo
 theories, be it a saddle, a doughnut, flat as a pancake, whatever. And no,
 you need not agree, but for me it seems apparent. You?

 I must admit I have always found it a bit tenuous to base the cosmological
acceleration only on the measurement of light from distant supernovas. It's
at least possible supernovas operated differently in the early universe, or
that something in between has affected the signal. It would be nice to get
independent confirmation from a completely different source.



Re: super intelligence and self-sampling

2015-06-10 Thread LizR
On 11 June 2015 at 10:45, meekerdb meeke...@verizon.net wrote:

  On 6/10/2015 7:44 AM, Telmo Menezes wrote:

  For the purpose of this discussion, I would say that you only have to
  grant that there is some utility function that captures chances of
  survival. Then, super-intelligence is something that can optimize this
  function beyond what human intelligence is capable of.

 Ahh, so it's bacteria.


It is indeed, at least if we leave aside the ones that have foolishly
agglomerated into large colonies that then sit around typing stuff on
forums.



Re: super intelligence and self-sampling

2015-06-10 Thread Russell Standish
On Thu, Jun 11, 2015 at 11:04:47AM +1200, LizR wrote:
 On 10 June 2015 at 19:05, Telmo Menezes te...@telmomenezes.com wrote:
 
  Do biological species follow a power law distribution?
 
 
 I don't know, but I imagine so - there are generally a lot more of the
 smaller ones.
 

I don't have empirical data, but by a combination of Damuth's law and
the Hutchinson-MacArthur model, it is a power law, but with a higher
exponent (i.e. it falls off faster) than the 1/x power law of country populations.


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: super intelligence and self-sampling

2015-06-10 Thread LizR
On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:

  A human is an ape which tortures other apes.

 Not just torture but also eliminate, e.g. homo erectus, homo
 neanderthalensis,...  It's called evolution.


You sound like you're in favour.



Re: super intelligence and self-sampling

2015-06-10 Thread meekerdb

On 6/10/2015 4:06 PM, LizR wrote:
On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:



A human is an ape which tortures other apes.

Not just torture but also eliminate, e.g. homo erectus, homo 
neanderthalensis,...  It's
called evolution.


You sound like you're in favour.


When they're winners and losers I'm in favor of being a winner.

Brent



Re: super intelligence and self-sampling

2015-06-10 Thread meekerdb

On 6/10/2015 6:36 PM, LizR wrote:
On 11 June 2015 at 11:21, meekerdb meeke...@verizon.net mailto:meeke...@verizon.net 
wrote:


On 6/10/2015 4:06 PM, LizR wrote:

On 11 June 2015 at 06:26, meekerdb meeke...@verizon.net wrote:


A human is an ape which tortures other apes.
Not just torture but also eliminate, e.g. homo erectus, homo neanderthalensis,... 
It's called evolution.



You sound like you're in favour.

When they're winners and losers I'm in favor of being a winner.


But your original statement didn't talk about winners and losers, it talked about 
elimination, specifically it sounded as though you were in favour of one ape 
eliminating another one (on a species basis, going by your mention of neanderthals).


So, are you actually in favour of genocide, or were you just shooting your 
mouth off?


Are you a Neanderthal or are you just trolling?

Brent



super intelligence and self-sampling

2015-06-09 Thread Telmo Menezes
Hi everyone,

Something I have been thinking about. I start with two assumptions:

- Super-intelligence is more resilient than human intelligence, so it is
likely to last longer (e.g. it is more likely to be able to anticipate
existential threats and prepare accordingly; it is more likely to spread
throughout the galaxy);

- A super-intelligence is necessarily conscious (I think both
computationalists and emergentists can agree here).

If a super-intelligence is created at some point in time, then we can
expect there to exist much more of it across the entire timeline than human
intelligence. By self-sampling, it is therefore unlikely that I exist as a
human and not as a super-intelligence.
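The self-sampling step can be made concrete with a back-of-the-envelope
calculation. The numbers below are purely illustrative assumptions (not
figures from this post); all that matters is that the super-intelligence's
observer-measure dwarfs the human one:

```python
# Toy self-sampling estimate. All numbers are illustrative assumptions:
# suppose human-level observers persist for ~1e5 years at ~1e10 observers,
# while a galaxy-spanning super-intelligence hosts ~1e15 observer-moments
# per year for ~1e9 years.
human_measure = 1e10 * 1e5   # human observer-years over the whole timeline
si_measure = 1e15 * 1e9      # super-intelligent observer-years

p_human = human_measure / (human_measure + si_measure)
print(f"P(a randomly sampled observer is human) ~ {p_human:.1e}")
```

Under these assumed numbers the chance of finding yourself human is around
one in a billion, which is what makes option 1 so uncomfortable.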

I can think of three options:

1) We are outliers -- it is hard to estimate the likelihood of this, but it
would be tempting to assume that it is very very low if we imagine a
galaxy-spanning AI civilisation;

2) No super-intelligence will ever be created;

3) We are already super-intelligences, having an experience in a simulation
for some reason.

What do you think?

Cheers
Telmo.



Re: super intelligence and self-sampling

2015-06-09 Thread Terren Suydam
Perhaps most superintelligences end up merging into one super-ego, so that
their measure effectively becomes zero.

Terren

On Tue, Jun 9, 2015 at 1:03 PM, Telmo Menezes te...@telmomenezes.com
wrote:

 Hi everyone,

 Something I have been thinking about. I start with two assumptions:

 - Super-intelligence is more resilient than human intelligence, so it is
 likely to last longer (e.g. it is more likely to be able to anticipate
 existential threats and prepare accordingly; it is more likely to spread
 throughout the galaxy);

 - A super-intelligence is necessarily conscious (I think both
 computationalists and emergentists can agree here).

 If a super-intelligence is created at some point in time, then we can
 expect there to exist much more of it across the entire timeline than human
 intelligence. By self-sampling, it is therefore unlikely that I exist as a
 human and not as a super-intelligence.

 I can think of three options:

 1) We are outliers -- it is hard to estimate the likelihood of this, but
 it would be tempting to assume that it is very very low if we imagine a
 galaxy-spanning AI civilisation;

 2) No super-intelligence will ever be created;

 3) We are already super-intelligences, having an experience in a
 simulation for some reason.

 What do you think?

 Cheers
 Telmo.



Re: super intelligence and self-sampling

2015-06-09 Thread Terren Suydam
On Tue, Jun 9, 2015 at 1:48 PM, Telmo Menezes te...@telmomenezes.com
wrote:



 On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 Perhaps most superintelligences end up merging into one super-ego, so
 that their measure effectively becomes zero.


 Perhaps, but I'm not convinced that this would reduce its measure.
 Consider the fact that you are not an ant, even though there are apparently
 100 trillion of them compared to 7 billion humans.

 Telmo.



The way I resolve that one is to assume that self-sampling requires a high
enough level of intelligence to have an ego (the 'self' in self-sampling).
This is required to differentiate the computational histories we identify
with as identity & memory.

Let's say the entirety of humanity uploaded into a simulated environment,
and that one day the simulated separation between minds was eradicated,
giving rise to a super-intelligence (just one path of many to a
superintelligence). From that moment on it would be impossible to
differentiate computational histories in terms of personal identity/memory,
so the measure goes to zero.

T



Re: super intelligence and self-sampling

2015-06-09 Thread Telmo Menezes
On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com
wrote:

 Perhaps most superintelligences end up merging into one super-ego, so that
 their measure effectively becomes zero.


Perhaps, but I'm not convinced that this would reduce its measure. Consider
the fact that you are not an ant, even though there are apparently 100
trillion of them compared to 7 billion humans.

Telmo.



 Terren

 On Tue, Jun 9, 2015 at 1:03 PM, Telmo Menezes te...@telmomenezes.com
 wrote:

 Hi everyone,

 Something I have been thinking about. I start with two assumptions:

 - Super-intelligence is more resilient than human intelligence, so it is
 likely to last longer (e.g. it is more likely to be able to anticipate
 existential threats and prepare accordingly; it is more likely to spread
 throughout the galaxy);

 - A super-intelligence is necessarily conscious (I think both
 computationalists and emergentists can agree here).

 If a super-intelligence is created at some point in time, then we can
 expect there to exist much more of it across the entire timeline than human
 intelligence. By self-sampling, it is therefore unlikely that I exist as a
 human and not as a super-intelligence.

 I can think of three options:

 1) We are outliers -- it is hard to estimate the likelihood of this, but
 it would be tempting to assume that it is very very low if we imagine a
 galaxy-spanning AI civilisation;

 2) No super-intelligence will ever be created;

 3) We are already super-intelligences, having an experience in a
 simulation for some reason.

 What do you think?

 Cheers
 Telmo.



Re: super intelligence and self-sampling

2015-06-09 Thread John Clark
On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:

 Super-intelligence is more resilient than human intelligence, so it is
 likely to last longer


Maybe, but I note that smarter than average humans seem to have higher than
average rates of suicide too. Mathematicians kill themselves at a rate 1.8
times higher than the general population, but they're not as bad as
dentists, they kill themselves 5.6 times as often.

  John K Clark








Re: super intelligence and self-sampling

2015-06-09 Thread Telmo Menezes
On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com wrote:

 On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:

  Super-intelligence is more resilient than human intelligence, so it is
 likely to last longer


 Maybe, but I note that smarter than average humans seem to have higher
 than average rates of suicide too.


I wonder if this is because intelligence leads to depression or because it
makes one more likely to research and correctly execute a viable method of
suicide. Do you know if the rates are also higher on failed attempts?


 Mathematicians kill themselves at a rate 1.8 times higher than the general
 population, but they're not as bad as dentists, they kill themselves 5.6
 times as often.

   John K Clark






Re: super intelligence and self-sampling

2015-06-09 Thread LizR
The normal answer to this is as stated - a superintelligence may form, as
per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
lots of non-super intelligences. So the chances of finding yourself
non-super are vastly greater, because it takes billions of us to make one of
them. However, this could lead to you eventually finding yourself super
(especially if quantum immortality operates). Or a subset of super.

PS Ants aren't relevant, as Russell explains in Theory of Nothing.

On 10 June 2015 at 09:41, Terren Suydam terren.suy...@gmail.com wrote:


 On Tue, Jun 9, 2015 at 5:31 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Jun 9, 2015 at 8:19 PM, Terren Suydam terren.suy...@gmail.com
 wrote:


 On Tue, Jun 9, 2015 at 1:48 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 Perhaps most superintelligences end up merging into one super-ego, so
 that their measure effectively becomes zero.


 Perhaps, but I'm not convinced that this would reduce its measure.
 Consider the fact that you are not an ant, even though there are apparently
 100 trillion of them compared to 7 billion humans.

 Telmo.



 The way I resolve that one is to assume that self-sampling requires a
 high enough level of intelligence to have an ego (the 'self' in
 self-sampling). This is required to differentiate the computational
 histories we identify with as identity & memory.

 Let's say the entirety of humanity uploaded into a simulated
 environment, and that one day the simulated separation between minds was
 eradicated, giving rise to a super-intelligence (just one path of many to a
 superintelligence). From that moment on it would be impossible to
 differentiate computational histories in terms of personal identity/memory,
 so the measure goes to zero.


 Why zero? There is still one conscious entity. Why wouldn't it remember
 the great unification and the multitude of human events before that?

 Telmo.


 When I say goes to zero I mean it as in, approaches the limit of zero in
 the relative measure.

 I think it would remember the great multitude of human events, but it
 would remember all of them as a single entity, as a single undifferentiated
 identity. It effectively collapses the measure from billions to one.

 Terren






 T



Re: super intelligence and self-sampling

2015-06-09 Thread LizR
On 10 June 2015 at 11:39, Stathis Papaioannou stath...@gmail.com wrote:

 On 10 June 2015 at 08:37, LizR lizj...@gmail.com wrote:

 The normal answer to this is as stated - a superintelligence may form, as
 per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
 lots of non-super intelligences. So the chances of finding yourself
 non-super are vastly greater, because it takes billions of us to make one of
 them. However, this could lead to you eventually finding yourself super
 (especially if quantum immortality operates). Or a subset of super.

 PS Ants aren't relevant, as Russell explains in Theory of Nothing.


 OK, but the same argument can easily be made otherwise: why should you
 find yourself living in tiny New Zealand rather than populous China?

 There is a way to show that you are more likely to find yourself in a
smaller country. I can't remember the details (but I think a power law is
involved :-)

But I will have a go.

I am more likely to find myself not in China than in China, because the
majority of people live outside China. Of the rest of the world, the next
most populous country is India, but more people live outside India than in
it, so I am more likely not to live in India. Next is the USA, but of the
remaining 4 or 5 billion people, most live outside the USA, so...

Repeating the process, I end up living alone on an island in the Pacific.
Or in New Zealand, which is almost the same thing.

(And then the test is given on Tuesday, much to my surprise!)



Re: super intelligence and self-sampling

2015-06-09 Thread Russell Standish
On Wed, Jun 10, 2015 at 09:39:37AM +1000, Stathis Papaioannou wrote:
 On 10 June 2015 at 08:37, LizR lizj...@gmail.com wrote:
 
  The normal answer to this is as stated - a superintelligence may form, as
  per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
  lots of non-super intelligences. So the chances of finding yourself
  non-super are vastly greater, because it takes billions of us to make one of
  them. However, this could lead to you eventually finding yourself super
  (especially if quantum immortality operates). Or a subset of super.
 
  PS Ants aren't relevant, as Russell explains in Theory of Nothing.
 
 
 
 OK, but the same argument can easily be made otherwise: why should you find
 yourself living in tiny New Zealand rather than populous China?
 

I address that as well. Because of a peculiar conspiracy, country
populations follow a near power law, which means it is just as likely
that you will be born in a low-population country like New Zealand as in
a high-population country like China, simply because there are more
low-population countries, in just the right numbers.

Which leads one to suspect that self-sampling is another mechanism behind
the ubiquity of power laws in nature.

I had a proof in one version of my paper that
fragmentation/coalescence processes in general lead to power law
distributions in just the right way to solve self-sampling problems
like the above, but referees made me take it out. I suppose I should
try to publish that result in a more mathematical journal at some
point, but I'm getting tired of arguing with referees all the time ):.
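
Russell's mechanism can be sketched numerically. This is a toy construction
of my own, not the proof he mentions: assume country sizes are drawn with
density p(x) ~ x^-2 (the density form of a Zipf-style "1/x" rank-size law),
and check that a randomly chosen *person* is then roughly equally likely to
land in each factor-of-ten band of country size:

```python
import math
import random

# Toy check (my construction): draw country sizes with density p(x) ~ x**-2
# on [1e4, 1e9], weight countries by population to sample a random person,
# and measure the share of people per decade of country size.
random.seed(1)
x_min, x_max = 1e4, 1e9

def sample_country_size():
    # Inverse-CDF sampling of p(x) ~ x**-2 on [x_min, x_max]
    u = random.random()
    return 1.0 / (1.0 / x_min - u * (1.0 / x_min - 1.0 / x_max))

countries = [sample_country_size() for _ in range(200_000)]
total = sum(countries)

band_pop = [0.0] * 5  # decades: 1e4-1e5, 1e5-1e6, ..., 1e8-1e9
for c in countries:
    band = min(int(math.log10(c / x_min)), 4)
    band_pop[band] += c

shares = [p / total for p in band_pop]
print([round(s, 2) for s in shares])  # roughly equal shares in each decade
```

The shares come out roughly equal per decade; with a steeper exponent the
person-mass would concentrate in small countries instead, and with a
shallower one in large countries.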


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: super intelligence and self-sampling

2015-06-09 Thread LizR
I was close :)



Re: super intelligence and self-sampling

2015-06-09 Thread Terren Suydam
On Tue, Jun 9, 2015 at 7:33 PM, LizR lizj...@gmail.com wrote:

 On 10 June 2015 at 11:15, Terren Suydam terren.suy...@gmail.com wrote:

 From a quantum immortality perspective, I think if a superintelligence
 was merging lots of intelligences, including yours, you would find yourself in
 increasingly unlikely situations where you were able to escape being merged
 with the superintelligence. Eventually, against all odds, you might be the
 only non-integrated intelligence left.

 Yes, that does seem possible. It would imply that closest continuers of
 you could never be the versions within the Cloud - an alternative might
 be that the superintelligence starts off new arrivals with full autonomy
 inside a virtual world indistinguishable from their previous existence, and
 only gradually allows them to merge into the Overmind ... maybe giving them
 tests to check whether they are ready to do so yet.


But that would be a cul-de-sac if eventually the superintelligence reaps
all individual consciousnesses.


 (Which may or may not involve being able to recite the Quran :-)


lol, the religious parallels are many. The superintelligence is a sort of
ego dissolution into the Void.

Terren



Re: super intelligence and self-sampling

2015-06-09 Thread Stathis Papaioannou
On 10 June 2015 at 08:37, LizR lizj...@gmail.com wrote:

 The normal answer to this is as stated - a superintelligence may form, as
 per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
 lots of non-super intelligences. So the chances of finding yourself
 non-super are vastly greater, because it takes billions of us to make one of
 them. However, this could lead to you eventually finding yourself super
 (especially if quantum immortality operates). Or a subset of super.

 PS Ants aren't relevant, as Russell explains in Theory of Nothing.



OK, but the same argument can easily be made otherwise: why should you find
yourself living in tiny New Zealand rather than populous China?

 --
Stathis Papaioannou



Re: super intelligence and self-sampling

2015-06-09 Thread LizR
On 10 June 2015 at 11:38, meekerdb meeke...@verizon.net wrote:

  On 6/9/2015 2:25 PM, Telmo Menezes wrote:

 On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com wrote:

  On Tue, Jun 9, 2015 Telmo Menezes te...@telmomenezes.com wrote:

   Super-intelligence is more resilient than human intelligence, so it
 is likely to last longer


  Maybe, but I note that smarter than average humans seem to have higher
 than average rates of suicide too.


  I wonder if this is because intelligence leads to depression or because
 it makes one more likely to research and correctly execute a viable method
 of suicide. Do you know if the rates are also higher on failed attempts?

 According to most people on this list, they are ALL failed attempts.


Heehee.

(Or at least most people are willing to entertain the possibility.)



Re: super intelligence and self-sampling

2015-06-09 Thread Terren Suydam
From a quantum immortality perspective, I think if a superintelligence was
merging lots of intelligences, including yours, you would find yourself in
increasingly unlikely situations where you were able to escape being merged
with the superintelligence. Eventually, against all odds, you might be the
only non-integrated intelligence left.

Terren

On Tue, Jun 9, 2015 at 6:37 PM, LizR lizj...@gmail.com wrote:

 The normal answer to this is as stated - a superintelligence may form, as
 per various Arthur C. Clarke (or Olaf Stapledon, really) stories, by merging
 lots of non-super intelligences. So the chances of finding yourself
 non-super are vastly greater, because it takes billions of us to make one of
 them. However, this could lead to you eventually finding yourself super
 (especially if quantum immortality operates). Or a subset of super.

 PS Ants aren't relevant, as Russell explains in Theory of Nothing.

 On 10 June 2015 at 09:41, Terren Suydam terren.suy...@gmail.com wrote:


 On Tue, Jun 9, 2015 at 5:31 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Jun 9, 2015 at 8:19 PM, Terren Suydam terren.suy...@gmail.com
 wrote:


 On Tue, Jun 9, 2015 at 1:48 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com
  wrote:

 Perhaps most superintelligences end up merging into one super-ego, so
 that their measure effectively becomes zero.


 Perhaps, but I'm not convinced that this would reduce its measure.
  Consider the fact that you are not an ant, even though there are apparently
 100 trillion of them compared to 7 billion humans.

 Telmo.



 The way I resolve that one is to assume that self-sampling requires a
 high enough level of intelligence to have an ego (the 'self' in
 self-sampling). This is required to differentiate the computational
 histories we identify with as identity & memory.

 Let's say the entirety of humanity uploaded into a simulated
 environment, and that one day the simulated separation between minds was
 eradicated, giving rise to a super-intelligence (just one path of many to a
 superintelligence). From that moment on it would be impossible to
 differentiate computational histories in terms of personal identity/memory,
 so the measure goes to zero.


 Why zero? There is still one conscious entity. Why wouldn't it remember
  the great unification and the multitude of human events before that?

 Telmo.


 When I say "goes to zero" I mean that it approaches the limit of zero
 in the relative measure.

 I think it would remember the great multitude of human events, but it
 would remember all of them as a single entity, as a single undifferentiated
 identity. It effectively collapses the measure from billions to one.

 Terren






 T

 --
 You received this message because you are subscribed to the Google
 Groups Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to everything-list+unsubscr...@googlegroups.com.
 To post to this group, send email to everything-list@googlegroups.com.
 Visit this group at http://groups.google.com/group/everything-list.
 For more options, visit https://groups.google.com/d/optout.






Re: super intelligence and self-sampling

2015-06-09 Thread LizR
On 10 June 2015 at 11:15, Terren Suydam terren.suy...@gmail.com wrote:

 From a quantum immortality perspective, I think if a superintelligence were
 merging lots of intelligences, including yours, you would find yourself in
 increasingly unlikely situations where you were able to escape being merged
 with the superintelligence. Eventually, against all odds, you might be the
 only non-integrated intelligence left.

 Yes, that does seem possible. It would imply that the closest continuers of
you could never be the versions within the Cloud - an alternative might
be that the superintelligence starts off new arrivals with full autonomy
inside a virtual world indistinguishable from their previous existence, and
only gradually allows them to merge into the Overmind ... maybe giving them
tests to check whether they are ready to do so yet.

(Which may or may not involve being able to recite the Quran :-)



Re: super intelligence and self-sampling

2015-06-09 Thread meekerdb

On 6/9/2015 2:25 PM, Telmo Menezes wrote:



On Tue, Jun 9, 2015 at 10:15 PM, John Clark johnkcl...@gmail.com wrote:


On Tue, Jun 9, 2015, Telmo Menezes te...@telmomenezes.com wrote:

 Super-intelligence is more resilient than human intelligence, so it 
is likely to last longer


Maybe, but I note that smarter than average humans seem to have higher than 
average
rates of suicide too.


I wonder if this is because intelligence leads to depression, or because it makes one 
more likely to research and correctly execute a viable method of suicide. Do you know if 
the rates are also higher for failed attempts?


According to most people on this list, they are ALL failed attempts.

Brent



Re: super intelligence and self-sampling

2015-06-09 Thread Terren Suydam
On Tue, Jun 9, 2015 at 5:31 PM, Telmo Menezes te...@telmomenezes.com
wrote:



 On Tue, Jun 9, 2015 at 8:19 PM, Terren Suydam terren.suy...@gmail.com
 wrote:


 On Tue, Jun 9, 2015 at 1:48 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 Perhaps most superintelligences end up merging into one super-ego, so
 that their measure effectively becomes zero.


 Perhaps, but I'm not convinced that this would reduce its measure.
 Consider the fact that you are not an ant, even though there are apparently
 100 trillion of them compared to 7 billion humans.

 Telmo.



 The way I resolve that one is to assume that self-sampling requires a
 high enough level of intelligence to have an ego (the 'self' in
 self-sampling). This is required to differentiate the computational
 histories we identify with as identity & memory.

 Let's say the entirety of humanity uploaded itself into a simulated
 environment, and that one day the simulated separation between minds was
 eradicated, giving rise to a super-intelligence (just one path of many to a
 superintelligence). From that moment on it would be impossible to
 differentiate computational histories in terms of personal identity/memory,
 so the measure goes to zero.


 Why zero? There is still one conscious entity. Why wouldn't it remember
 the great unification and the multitude of human events before that?

 Telmo.


When I say "goes to zero" I mean that it approaches the limit of zero in
the relative measure.

I think it would remember the great multitude of human events, but it would
remember all of them as a single entity, as a single undifferentiated
identity. It effectively collapses the measure from billions to one.

Terren






 T





Re: super intelligence and self-sampling

2015-06-09 Thread Telmo Menezes
On Tue, Jun 9, 2015 at 8:19 PM, Terren Suydam terren.suy...@gmail.com
wrote:


 On Tue, Jun 9, 2015 at 1:48 PM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Tue, Jun 9, 2015 at 7:28 PM, Terren Suydam terren.suy...@gmail.com
 wrote:

 Perhaps most superintelligences end up merging into one super-ego, so
 that their measure effectively becomes zero.


 Perhaps, but I'm not convinced that this would reduce its measure.
 Consider the fact that you are not an ant, even though there are apparently
 100 trillion of them compared to 7 billion humans.

 Telmo.



 The way I resolve that one is to assume that self-sampling requires a high
 enough level of intelligence to have an ego (the 'self' in self-sampling).
 This is required to differentiate the computational histories we identify
 with as identity & memory.

 Let's say the entirety of humanity uploaded itself into a simulated
 environment, and that one day the simulated separation between minds was
 eradicated, giving rise to a super-intelligence (just one path of many to a
 superintelligence). From that moment on it would be impossible to
 differentiate computational histories in terms of personal identity/memory,
 so the measure goes to zero.


Why zero? There is still one conscious entity. Why wouldn't it remember the
great unification and the multitude of human events before that?

Telmo.



 T


