Re: How to live forever

2018-03-19 Thread Stathis Papaioannou
On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett wrote:

> From: Stathis Papaioannou 
>
>
> It is possible that consciousness is fully preserved until a threshold is
> reached then suddenly disappears. So if half the subject’s brain is
> replaced, he behaves normally and has normal consciousness, but if one more
> neurone is replaced he continues to behave normally but becomes a zombie.
> Moreover, since neurones are themselves complex systems it could be broken
> down further: half of that final neurone could be replaced with no change
> to consciousness, but when a particular membrane protein is replaced with a
> non-biological nanomachine the subject will suddenly become a zombie. And
> we need not stop here, because this protein molecule could also be replaced
> gradually, for example by non-biological radioisotopes. If half the atoms
> in this protein are replaced, there is no change in behaviour and no change
> in consciousness; but when one more atom is replaced a threshold is reached
> and the subject suddenly loses consciousness. So zombification could turn
> on the addition or subtraction of one neutron. Are you prepared to go this
> far to challenge the idea that if the observable behaviour of the brain is
> replicated, consciousness will also be replicated?
>
>
> If the theory is that if the observable behaviour of the brain is
> replicated, then consciousness will also be replicated, then the clear
> corollary is that consciousness can be inferred from observable behaviour.
> Which implies that I can be as certain of the consciousness of other people
> as I am of my own. This seems to do some violence to the 1p/1pp/3p
> distinctions that computationalism relies on so much: only 1p is "certainly
> certain". But if I can reliably infer consciousness in others, then other
> things can be as certain as 1p experiences.
>

You can’t reliably infer consciousness in others. What you can infer is
that whatever consciousness an entity has, it will be preserved if
functionally identical substitutions in its brain are made. You can’t know
if a mouse is conscious, but you can know that if mouse neurones are
replaced with functionally identical electronic neurones its behaviour will
be the same and any consciousness it may have will also be the same.

--
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: How to live forever

2018-03-19 Thread Brent Meeker



On 3/19/2018 7:21 PM, John Clark wrote:
On Mon, Mar 19, 2018 at 9:25 PM, Brent Meeker wrote:


>> *NO!* That is exactly what it doesn't show! Consciousness has no
>> evolutionary purpose, it couldn't have because Evolution can't
>> even detect consciousness, it's no better at it than we are at
>> detecting consciousness in others. So consciousness must be a
>> byproduct of something that Evolution *can* detect, like
>> intelligence.


> *But a necessary byproduct of hominid evolved intelligence;*


*NO! *Byproduct don't evolve anything, especially not consciousness 
because Evolution can't even see it.


"Byproduct don't evolve anything"?? Where did I say it did?  Have you 
been drinking?...you can usually keep your answers grammatical.




> *otherwise it could be dispensed with.*


You can't dispose of a spandrel, it comes with the territory.


You contradict yourself.  "Comes with the territory" = "necessary 
byproduct".   So if consciousness is a spandrel it's a necessary 
byproduct...which means it's not really a byproduct, it's a 
feature...just not one that selection can act on.





> *Fine.  But if the roof isn't supported by arches the spandrels
> don't appear. If intelligence evolves via some other path
> (as in octopuses) or is designed rather than evolved the argument
> that it will be accompanied by consciousness fails.*


If there are many paths to intelligence it is probable that Evolution 
found the simplest,


That's laughable.  You must not have looked at cell metabolism.

if only one of those paths leads to consciousness, our path, then it 
would be easier to make an intelligent computer that is conscious than 
to make an intelligent computer that has no consciousness.


Maybe.  IF it's easy to copy brain architecture and function.  The 
question is, how will we know if we do.  If we copy brain architecture 
we'll have some reason to think intelligence => consciousness.  But if 
we adopt some other architecture, e.g. a mixed neural net + von Neumann, 
we'll be in the dark as to whether it's conscious.  My inclination would 
be to ask it.  If it's intelligent enough to lie I'm willing to give it 
ethical status.


Brent


As I've said, you've got to make an educated guess and place a bet. And 
the stakes are very high.


​John K Clark​







Re: How to live forever

2018-03-19 Thread John Clark
On Mon, Mar 19, 2018 at 9:25 PM, Brent Meeker  wrote:

>> *NO!* That is exactly what it doesn't show! Consciousness has no
>> evolutionary purpose, it couldn't have because Evolution can't even detect
>> consciousness, it's no better at it than we are at detecting consciousness
>> in others. So consciousness must be a byproduct of something that Evolution
>> *can* detect, like intelligence.

> *But a necessary byproduct of hominid evolved intelligence;*

*NO! *Byproduct don't evolve anything, especially not consciousness because
Evolution can't even see it.


> *otherwise it could be dispensed with.*

You can't dispose of a spandrel, it comes with the territory.

> *Fine.  But if the roof isn't supported by arches the spandrels don't
> appear. If intelligence evolves via some other path
> (as in octopuses) or is designed rather than evolved the argument that it
> will be accompanied by consciousness fails.*


If there are many paths to intelligence it is probable that Evolution found
the simplest; if only one of those paths leads to consciousness, our path,
then it would be easier to make an intelligent computer that is conscious
than to make an intelligent computer that has no consciousness. As I've said,
you've got to make an educated guess and place a bet. And the stakes are
very high.

​John K Clark​






Re: How to live forever

2018-03-19 Thread John Clark
On Mon, Mar 19, 2018 at 8:11 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

>> That’s nice. I repeat my question, NP-completeness is sorta weird and
>> consciousness is sorta weird, but other than that is there any reason to
>> think the two things are related?

> If you can't compute efficiently the map,
>

I don't even know what "compute efficiently the map" means.


> *how do you know this system will really upload brain states in such a
> way that consciousness seamlessly carries from brain to computer?*

I don't know. I can never know for sure until I actually do it; if I notice
that I'm not dead and I remember being me, then and only then I'll know it
worked. Maybe Black Holes and cadavers and piles of shit are conscious and
maybe a computer that acts just like me is not, but I doubt it. In matters
like this all I can do is make an educated guess, and my guess is piles of
shit are not conscious but an intelligent computer is.


> Even if the entity in the computer is conscious it might not actually be
> me.

If it remembers being you then it is you.



> *If I could duplicate myself, which of the two would "be me?"*


That is a silly question. If you are duplicated in a duplicating machine
then both are you, because that's what the word "duplicated" means. I've
gone over this crap for years with Bruno; if I weren't an atheist I'd pray
to God I don't have to repeat it with you.


> As I see it there is a lot of hype here concealing the fact we really
> know very little about this.


But you seem to know all about it, enough to quite literally bet
your life on it not working.

​John K Clark​







Re: How to live forever

2018-03-19 Thread Brent Meeker



On 3/19/2018 6:10 PM, John Clark wrote:
On Mon, Mar 19, 2018 at 7:56 PM, Brent Meeker wrote:


> *You're the one who floated a theory of consciousness based on
> evolution.*


I did indeed.

> *I'm just pointing out that it only shows that consciousness exists
> in humans for some evolutionarily selected purpose.*


*NO!* That is exactly what it doesn't show! Consciousness has no 
evolutionary purpose, it couldn't have because Evolution can't even 
detect consciousness, it's no better at it than we are at detecting 
consciousness in others. So consciousness must be a byproduct of 
something that Evolution *can* detect, like intelligence.


But a necessary byproduct of hominid evolved intelligence; otherwise it 
could be dispensed with.  If it's a necessary byproduct that means it's 
necessary for our intelligence and so it exists for an evolutionarily 
selected reason.


When an architect decides to put an arch inside a rectangular enclosure 
he doesn't separately decide to put a spandrel in there too because it 
would look nice; he gets the spandrel automatically whether he wants 
one or not. Consciousness is a biological spandrel.


Fine.  But if the roof isn't supported by arches the spandrels don't 
appear.  If intelligence evolves via some other path (as in octopuses) 
or is designed rather than evolved the argument that it will be 
accompanied by consciousness fails.


Brent



Re: How to live forever

2018-03-19 Thread John Clark
On Mon, Mar 19, 2018 at 7:56 PM, Brent Meeker  wrote:

> *You're the one who floated a theory of consciousness based on
> evolution.*

I did indeed.


> *I'm just pointing out that it only shows that consciousness exists in
> humans for some evolutionarily selected purpose.*

*NO!* That is exactly what it doesn't show! Consciousness has no
evolutionary purpose, it couldn't have because Evolution can't even detect
consciousness, it's no better at it than we are at detecting consciousness
in others. So consciousness must be a byproduct of something that Evolution
*can* detect, like intelligence. When an architect decides to put an arch
inside a rectangular enclosure he doesn't separately decide to put a
spandrel in there too because it would look nice; he gets the spandrel
automatically whether he wants one or not. Consciousness is a biological
spandrel.

John K Clark



Re: How to live forever

2018-03-19 Thread Lawrence Crowell


On Monday, March 19, 2018 at 3:02:28 PM UTC-6, John Clark wrote:
>
> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
> goldenfield...@gmail.com> wrote:
>
>> >>  NP-completeness is sorta weird and consciousness is sorta weird, but 
>>> other than that is there any reason to think the two things are related?
>>
>>  
>
> *> This seems to be something you are not registering.*
>
> You’ve got that right.
>
>> *> Classic NP-complete problems involve cataloging subgraphs and 
>> determining the rules for all subgraphs in a graph. There are other similar 
>> combinatoric problems that are NP complete. *
>
> That’s nice. I repeat my question, NP-completeness is sorta weird and 
> consciousness is sorta weird, but other than that is there any reason to 
> think the two things are related?
>

If you can't compute efficiently the map, then how do you know this system 
will really upload brain states in such a way that consciousness 
seamlessly carries from brain to computer? Even if the entity in the 
computer is conscious it might not actually be me. If I could duplicate 
myself, which of the two would "be me?" The map problem is one involving 
graph-theoretic problems that are NP-complete. As I see it there is a lot 
of hype here concealing the fact we really know very little about this.
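To put a number on the combinatorics being pointed at: deciding whether even a small pattern graph occurs inside a host graph by brute force means trying every injective mapping of pattern vertices onto host vertices, and subgraph isomorphism is a classic NP-complete problem. A minimal sketch (my own illustration; the function name and the toy graphs are made up, not anything from this thread):

```python
from itertools import permutations

def has_subgraph(host_edges, host_n, pattern_edges, pattern_n):
    """Brute-force subgraph isomorphism: does the pattern embed in the
    host?  Tries every injective vertex mapping, so the work grows
    roughly as host_n! / (host_n - pattern_n)! -- fine for toy graphs,
    hopeless at brain scale."""
    host = {frozenset(e) for e in host_edges}
    for mapping in permutations(range(host_n), pattern_n):
        if all(frozenset((mapping[u], mapping[v])) in host
               for u, v in pattern_edges):
            return True
    return False

# Host: the 4-cycle 0-1-2-3-0.
host = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_subgraph(host, 4, [(0, 1), (1, 2)], 3))          # True: a path fits
print(has_subgraph(host, 4, [(0, 1), (1, 2), (0, 2)], 3))  # False: no triangle
```

Nothing here bears on consciousness either way; it only illustrates why cataloging all subgraphs of a large graph is intractable.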

LC


> *> A map from a brain to a computer is going to require knowing how 
>> to handle these problems. *
>
> That is utterly ridiculous! Duplicating a map is not an NP-complete 
> problem; in fact it's not much of a problem at all, a Xerox machine can do 
> it. In this case we're not trying to find the shortest path or even a 
> shorter path than the one the traveling salesman took. All we need do is 
> take the path the salesman already took. 
>
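For what it's worth, the verify-versus-search gap being leaned on here can be made concrete: scoring a route the salesman already took is linear in the number of cities, while the naive search for the shortest route enumerates factorially many orderings. A toy sketch (my own; the function names and distances are invented for illustration):

```python
from itertools import permutations

def tour_length(dist, tour):
    """Length of a given closed tour: O(n) -- 'taking the path the
    salesman already took' is cheap."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def shortest_tour(dist):
    """Naive optimal TSP: tries all (n-1)! orderings with city 0 fixed
    first -- this is the hard direction, not the copying."""
    n = len(dist)
    return min(tour_length(dist, (0,) + p)
               for p in permutations(range(1, n)))

# Symmetric 4-city distance matrix, made up for illustration.
dist = [[0, 2, 9, 5],
        [2, 0, 4, 7],
        [9, 4, 0, 3],
        [5, 7, 3, 0]]
print(tour_length(dist, (0, 1, 2, 3)))  # 14: just evaluate the given route
print(shortest_tour(dist))              # 14: found only by exhaustive search
```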
>> *> Quantum computers do not help much.*
>
>
> It would be great to have a quantum computer but would not be necessary 
> for uploading or for AI, it would just be icing on the cake.  
>
>  *> It could have some bearing on the ability to emulate consciousness in 
> a computer.*
>
> Yes, and the Goldbach conjecture might have some bearing on the ability to 
> emulate consciousness in a computer too, but there is not one particle of 
> evidence to suggest that either of the two actually does. There are an 
> infinite number of things and concepts in the universe and not one of them 
> has been ruled out as having something to do with consciousness, and 
> that’s why consciousness theories are so easy to come up with and why they 
> are so completely useless. Intelligence theories are a different matter 
> entirely, they are testable. 
>
>> >> How do you figure that? Both my brain and my computer are made of 
>>> matter that obeys the laws of physics, and matter that obeys the laws of 
>>> physics has never been observed to compute NP-complete problems in 
>>> polynomial time, much less find the answer to a 
>>> non-computable question, like “what is the 7918th Busy Beaver number?”.
>>
>>  
>>
>> > *And for this reason it could be impossible to map brain states into a 
>> computer and capture a person completely. *
>
> How do you figure that? A computer can never find the 7918th Busy Beaver 
> number but my consciousness can never find it either. I’ll be damned if I 
> see how one thing has anything to do with the other. It seems to me that 
> you don’t want computers to be conscious so you looked for a problem that a 
> computer can never solve and just decreed that problem must have something 
> to do with consciousness. But computers can’t find the 7918th Busy Beaver 
> number because the laws of physics can’t find it, even the universe itself 
> doesn’t know what that finite number is. But I know for a fact that the 
> universe does know how to arrange atoms so they behave in a johnkclarkian 
> way and become conscious. The universe doesn't know how to solve NP 
> complete problems in polynomial time, much less NP hard problems, much less 
> flat out non-computable problems like the busy Beaver, so I don't see how 
> any of them could have anything to do with consciousness. 
>
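As an aside on why that number is out of reach: Busy Beaver values are known only for tiny arguments, and then only by exhaustively simulating every machine with a hand-picked step cap that happens to suffice. The sketch below (purely illustrative; the names are mine) recovers BB(1) = 1 this way; no computable step cap exists in general, which is exactly why BB(7918) is beyond any physical computer.

```python
from itertools import product

def busy_beaver_1_state(step_cap=10):
    """Enumerate every 1-state, 2-symbol Turing machine and return the
    most 1s any halting machine leaves on the tape: BB(1) = 1.  The
    hand-picked step cap is the catch -- for larger n no computable
    cap exists, which is why Busy Beaver is non-computable."""
    best = 0
    # A rule: (symbol_to_write, head_move, next_state); state 'H' halts.
    rules = list(product((0, 1), (-1, 1), ('A', 'H')))
    for on0, on1 in product(rules, repeat=2):  # rules for reading 0 and 1
        tape, head, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < step_cap:
            write, move, state = (on0, on1)[tape.get(head, 0)]
            tape[head] = write
            head += move
            steps += 1
        if state == 'H':  # ignore machines still running at the cap
            best = max(best, sum(tape.values()))
    return best

print(busy_beaver_1_state())  # 1
```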
>> *> Of course brains and computers are made of matter. So is a pile of 
>> shit also made of matter.* 
>>
> Exactly, and the only difference between my brain and a pile of shit is 
> the way the generic atoms are arranged, and the only difference between a 
> cadaver and a healthy living person is the way the generic atoms are 
> arranged. One carbon atom is identical to another so the only thing that 
> specifies something as being me or a cadaver or pile of shit is the 
> information on how to arrange those atoms.
>
>> *> Based on what we know about bacteria and their network communicating 
>> by electrical potentials the pile of shit may have more in the way of 
>> consciousness than a computer. *
>
> Maybe maybe maybe. The above is an excellent example of what I was talking 
> about, consciousness theories are utterly and completely useless. Is this 
> really the best you can do? Are piles of shit 

Re: How to live forever

2018-03-19 Thread Bruce Kellett
From: *Telmo Menezes*


On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett wrote:
> From: Stathis Papaioannou

>
>
> It is possible that consciousness is fully preserved until a threshold is
> reached then suddenly disappears. So if half the subject’s brain is
> replaced, he behaves normally and has normal consciousness, but if one more
> neurone is replaced he continues to behave normally but becomes a zombie.
> Moreover, since neurones are themselves complex systems it could be broken
> down further: half of that final neurone could be replaced with no change to
> consciousness, but when a particular membrane protein is replaced with a
> non-biological nanomachine the subject will suddenly become a zombie. And we
> need not stop here, because this protein molecule could also be replaced
> gradually, for example by non-biological radioisotopes. If half the atoms in
> this protein are replaced, there is no change in behaviour and no change in
> consciousness; but when one more atom is replaced a threshold is reached and
> the subject suddenly loses consciousness. So zombification could turn on the
> addition or subtraction of one neutron. Are you prepared to go this far to
> challenge the idea that if the observable behaviour of the brain is
> replicated, consciousness will also be replicated?
>
>
> If the theory is that if the observable behaviour of the brain is
> replicated, then consciousness will also be replicated, then the clear
> corollary is that consciousness can be inferred from observable behaviour.


For this to be a theory in the scientific sense, one needs some way to
detect consciousness. In that case your corollary becomes a tautology:

(a) If one can detect consciousness then one can detect consciousness.

The other option is to assume that observable behaviors in the brain
imply consciousness -- because "common sense", because experts say so,
whatever. In this case it becomes circular reasoning:

(b) Assuming that observable behaviors in the brain imply
consciousness, consciousness can be inferred from brain behaviors.


I was responding to the claim by Stathis that consciousness will follow 
replication of observable behaviour. It seemed to me that this was 
proposed as a theory: "If the observable behaviour of the brain is 
replicated then consciousness will also be replicated." I was merely 
pointing out consequences of this theory, so your claims of tautology 
and/or circularity rather miss the point: the consequences of any theory 
are either tautologies or circularities in that sense, because they are 
implications of the theory.


Now it may be that you want to reject Stathis's claim, and insist that 
consciousness cannot be inferred from behaviour. But it seems to me that 
that theory is as lacking in independent verification as the contrary.



> Which implies that I can be as certain of the consciousness of other people
> as I am of my own. This seems to do some violence to the 1p/1pp/3p
> distinctions that computationalism relies on so much: only 1p is "certainly
> certain". But if I can reliably infer consciousness in others, then other
> things can be as certain as 1p experiences.

If one can detect 1p experiences then one can detect 1p experiences...


The claim has more content than that.

Bruce





Telmo.




Re: How to live forever

2018-03-19 Thread Brent Meeker



On 3/19/2018 2:19 PM, John Clark wrote:
On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker wrote:


> *octopuses are fairly intelligent but their neural structure is
> distributed very differently from mammals of similar
> intelligence.  An artificial intelligence that was not modeled on
> mammalian brain structure might be intelligent but not conscious*


Maybe maybe maybe. And you don't have the exact same brain structure as 
I have, so you might be intelligent but not conscious. I said it before 
and I'll say it again: consciousness theories are useless, and not just 
the ones on this list, all of them.


You're the one who floated a theory of consciousness based on 
evolution.  I'm just pointing out that it only shows that consciousness 
exists in humans for some evolutionarily selected purpose.  It doesn't 
apply to intelligence that arises in some other evolutionary branch or 
intelligence like AI that doesn't evolve but is designed.


Brent



Re: How to live forever

2018-03-19 Thread Telmo Menezes
On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett
 wrote:
> From: Stathis Papaioannou 
>
>
> It is possible that consciousness is fully preserved until a threshold is
> reached then suddenly disappears. So if half the subject’s brain is
> replaced, he behaves normally and has normal consciousness, but if one more
> neurone is replaced he continues to behave normally but becomes a zombie.
> Moreover, since neurones are themselves complex systems it could be broken
> down further: half of that final neurone could be replaced with no change to
> consciousness, but when a particular membrane protein is replaced with a
> non-biological nanomachine the subject will suddenly become a zombie. And we
> need not stop here, because this protein molecule could also be replaced
> gradually, for example by non-biological radioisotopes. If half the atoms in
> this protein are replaced, there is no change in behaviour and no change in
> consciousness; but when one more atom is replaced a threshold is reached and
> the subject suddenly loses consciousness. So zombification could turn on the
> addition or subtraction of one neutron. Are you prepared to go this far to
> challenge the idea that if the observable behaviour of the brain is
> replicated, consciousness will also be replicated?
>
>
> If the theory is that if the observable behaviour of the brain is
> replicated, then consciousness will also be replicated, then the clear
> corollary is that consciousness can be inferred from observable behaviour.

For this to be a theory in the scientific sense, one needs some way to
detect consciousness. In that case your corollary becomes a tautology:

(a) If one can detect consciousness then one can detect consciousness.

The other option is to assume that observable behaviors in the brain
imply consciousness -- because "common sense", because experts say so,
whatever. In this case it becomes circular reasoning:

(b) Assuming that observable behaviors in the brain imply
consciousness, consciousness can be inferred from brain behaviors.

> Which implies that I can be as certain of the consciousness of other people
> as I am of my own. This seems to do some violence to the 1p/1pp/3p
> distinctions that computationalism relies on so much: only 1p is "certainly
> certain".
> But if I can reliably infer consciousness in others, then other
> things can be as certain as 1p experiences.

If one can detect 1p experiences then one can detect 1p experiences...

Telmo.

> Bruce



Re: How to live forever

2018-03-19 Thread Bruce Kellett

From: *Stathis Papaioannou*


It is possible that consciousness is fully preserved until a threshold 
is reached then suddenly disappears. So if half the subject’s brain is 
replaced, he behaves normally and has normal consciousness, but if one 
more neurone is replaced he continues to behave normally but becomes a 
zombie. Moreover, since neurones are themselves complex systems it 
could be broken down further: half of that final neurone could be 
replaced with no change to consciousness, but when a particular 
membrane protein is replaced with a non-biological nanomachine the 
subject will suddenly become a zombie. And we need not stop here, 
because this protein molecule could also be replaced gradually, for 
example by non-biological radioisotopes. If half the atoms in this 
protein are replaced, there is no change in behaviour and no change in 
consciousness; but when one more atom is replaced a threshold is 
reached and the subject suddenly loses consciousness. So zombification 
could turn on the addition or subtraction of one neutron. Are you 
prepared to go this far to challenge the idea that if the observable 
behaviour of the brain is replicated, consciousness will also be 
replicated?


If the theory is that if the observable behaviour of the brain is 
replicated, then consciousness will also be replicated, then the clear 
corollary is that consciousness can be inferred from observable 
behaviour. Which implies that I can be as certain of the consciousness 
of other people as I am of my own. This seems to do some violence to the 
1p/1pp/3p distinctions that computationalism relies on so much: only 1p is 
"certainly certain". But if I can reliably infer consciousness in 
others, then other things can be as certain as 1p experiences.


Bruce




Re: How to live forever

2018-03-19 Thread Stathis Papaioannou
On Tue, 20 Mar 2018 at 3:49 am, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Monday, March 19, 2018 at 6:47:01 AM UTC-5, stathisp wrote:
>
>>
>> On Mon, 19 Mar 2018 at 8:58 pm, Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>>> On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>>>


 On 19 March 2018 at 12:14, Lawrence Crowell 
 wrote:

> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>>
>> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>> *> The MH spacetimes have Cauchy horizons that, because they pile up
>> geodesics, can be a sort of singularity.*
>>
>>
>> That’s not the only thing they have, MH spacetimes also have closed
>> timelike curves and logical paradoxes produced by them, one of them being
>> the one found by Turing. They also have naked singularities that nobody 
>> has
>> ever seen the slightest hint of. And if you need to go to as exotic a 
>> place
>> as the speculative interior of a Black Hole to find a reason why Cryonics
>> might not work I am greatly encouraged.
>>
>
> Not all MH spaces have closed timelike curves.
>
>
>>
>> *> The subject of NP-completeness came up because of my conjecture
>>> about there being a sort of code associated with a conscious entity 
>>> that is
>>> not computable or if computable is intractable in NP. *
>>
>>
>> NP-completeness is sorta weird and consciousness is sorta weird, but
>> other than that is there any reason to think the two things are related?
>>
>
> This seems to be something you are not registering. Classic
> NP-complete problems involve cataloging subgraphs and determining the 
> rules
> for all subgraphs in a graph. There are other similar combinatoric 
> problems
> that are NP complete. A map from a brain to a computer is going to require
> knowing how to handle these problems. Quantum computers do not help much.
>
>
>>
>> *> It could have some bearing on the ability to emulate consciousness
>>> in a computer.*
>>
>>
>> How do you figure that? Both my brain and my computer are made of
>> matter that obeys the laws of physics, and matter that obeys the laws of
>> physics has never been observed to compute NP-complete problems in
>> polynomial time, much less find the answer to a non-computable
>> question, like “what is the 7918th Busy Beaver number?”.
>>
>
> And for this reason it could be impossible to map brain states into a
> computer and capture a person completely. Of course brains and computers
> are made of matter. So is a pile of shit also made of matter. Based on 
> what
> we know about bacteria and their network communicating by electrical
> potentials the pile of shit may have more in the way of consciousness than
> a computer.
>
> As for the rest I think a lot of this sort of idea is chasing after
> some crazy dream. There is in some ways a problem with doing that. As
> things stand now I would not do the upload.  Below is a picture of some
> aspect of this.
>
>
> 
>
 Could you say if you think the observable behaviour of the brain (and
 hence of the person whose muscles are controlled by the brain) could be
 replaced by a computer, and, if the answer is yes, if you still think it is
 possible that the consciousness might not be preserved? And if the answer
 is also yes to the second question, what you think it would be like if your
 consciousness was changed by replacing part of your brain, but your brain
 still forced your body to behave in the same way?


 --
 Stathis Papaioannou

>>>
>>> I really do not know. I will say it is possible in principle to
>>> replace the executive parts of the brain with a computer, but where the
>>> result could be a sort of zombie. There are too many unknowns: unknowns
>>> with no Bayesian priors, and unknown unknowns. We are in a domain of
>>> possibles, plausibles and maybe a Jupiter computer-brain. There is so
>>> little to go with this, and to be honest a lot more possible obstructions I
>>> might see than realities, that almost nothing can be said with much
>>> certainty.
>>>
>>
>> Consider not a zombie but a brain in the process of zombification. A
>> piece of the brain is replaced with an electronic implant which replicates
>> its I/O behaviour as it interacts with the surrounding biological tissue
>> but, by assumption, does not participate in consciousness. It is believed,
>> for example, that visual experiences arise in the occipital cortex, and
>> lesions here cause partial or

Re: How to live forever

2018-03-19 Thread John Clark
On Mon, Mar 19, 2018 at 8:13 AM, Telmo Menezes 
wrote:

> I am pointing out that just because you have a certain property, it
> doesn't follow that this property was created by evolution.


And I am pointing out that if a property didn't exist before Evolution
started its work but it did afterward then, unless there are strong reasons
to think otherwise, it's reasonable to conclude that Evolution produced it.


> Evolution is a theory about the complexification of biological systems.


​And the most complex thing Evolution ever produced is the human brain. And
Evolution can see intelligent behavior but not consciousness.
And when my brain changes my consciousness changes. And when my
consciousness ​changes my brain changes.

What more do I need to conclude that consciousness is produced by my brain
and is probably a byproduct of intelligence, and therefore that I am probably
correct in concluding I am not the only conscious being in the universe?
If you have a better argument against solipsism I'd love to hear it.

>
>> I don't really know that human cadavers are not conscious, but they
>> sure don't behave intelligently so my hunch is they are not. What is
>> your hunch?
>
>
> My hunch is that I am mind in a universe made of mind. I believe
> modern physics is both valid and useful (and a lot of fun), but I do
> not know that it studies fundamental reality and I believe nobody else
> knows either.


That's nice, but you didn't answer my question. My suspicion is that
cadavers are not conscious because they display the same degree of
intelligence as a rock or a tree does. But what about you: do you suspect
cadavers are conscious, and if not, why not?

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: How to live forever

2018-03-19 Thread Russell Standish
On Mon, Mar 19, 2018 at 05:19:11PM -0400, John Clark wrote:
> On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker  wrote
> 
> *> octopuses are fairly intelligent but their neural structure is
> > distributed very differently from mammals of similar intelligence.  An
> > artificial intelligence that was not modeled on mammalian brain structure
> > might be intelligent but not conscious*
> 
> 
> Maybe maybe maybe. And you don't have the exact same brain structure as I
> have so you might be intelligent but not conscious. I said it before and
> I'll say it again: consciousness theories are useless, and not just the
> ones on this list, all of them.
> 
> John K Clark

In my backyard, pretty much about 200m from where I'm sitting now, we
have these amazing creatures called giant cuttlefish. These are about
the size of a dog (which breed of dog you say? - well yes there's that
sort of variation in size too). They (like most cephalopods) are
masters of camouflage. To be really effective in camouflage, you need
to know what the things you're hiding from are seeing. Cuttlefish use
different camo patterns and communication methods depending on whether
they're trying to avoid sharks or dolphins. I had the experience of one of
these animals approaching me looking for all the world like a bunch of
kelp. The instant I saw its eyes, it knew its disguise was blown, and
it scarpered. I had the strong feeling that here was an animal reading
my mind as I was reading its. I can't be sure, of course, but I'm pretty
convinced from that encounter that cuttlefish are conscious beings. It seems
hard to imagine them being able to understand other animals' minds without
also being aware of their own, and their place in the world.

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




How to live forever

2018-03-19 Thread John Clark
On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker  wrote

*> octopuses are fairly intelligent but their neural structure is
> distributed very differently from mammals of similar intelligence.  An
> artificial intelligence that was not modeled on mammalian brain structure
> might be intelligent but not conscious*


Maybe maybe maybe. And you don't have the exact same brain structure as I
have so you might be intelligent but not conscious. I said it before and
I'll say it again: consciousness theories are useless, and not just the
ones on this list, all of them.

John K Clark



Re: How to live forever

2018-03-19 Thread John Clark
On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell 
wrote:

> >>  NP-completeness is sorta weird and consciousness is sorta weird, but
>> other than that is there any reason to think the two things are related?
>
>

*> This seems to be something you are not registering.*

You’ve got that right.

> *> Classic NP-complete problems involve cataloging subgraphs and
> determining the rules for all subgraphs in a graph. There are other similar
> combinatoric problems that are NP complete. *

That’s nice. I repeat my question, NP-completeness is sorta weird and
consciousness is sorta weird, but other than that is there any reason to
think the two things are related?

> *> A map from a brain to a computer is going to require knowing how
> to handle these problems. *

That is utterly ridiculous! Duplicating a map is not an NP-complete problem;
in fact it's not much of a problem at all, a Xerox machine can do it. In
this case we're not trying to find the shortest path, or even a shorter path
than the one the traveling salesman took. All we need do is take the path
the salesman already took.
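The copying-versus-searching distinction can be made concrete with a toy sketch (my own illustration, stdlib only; the graphs and function names are invented for the example). Duplicating a graph is linear in its size, while the classic NP-complete question of whether G contains a copy of H has no known polynomial-time algorithm, and the naive approach tries every injective vertex mapping:

```python
from itertools import permutations

def copy_graph(g):
    # "Xeroxing the map": linear in the number of vertices and edges.
    return {v: set(nbrs) for v, nbrs in g.items()}

def contains_subgraph(g, h):
    # Naive subgraph-isomorphism test: try every injective mapping of
    # H's vertices into G's. Exponential in general; the decision
    # problem is NP-complete, so no polynomial algorithm is known.
    gv, hv = list(g), list(h)
    for image in permutations(gv, len(hv)):
        m = dict(zip(hv, image))
        # Every edge (a, b) of H must map onto an edge of G.
        if all(m[b] in g[m[a]] for a in h for b in h[a]):
            return True
    return False

# Toy graphs as adjacency dicts (invented for the example).
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}          # 4-cycle
triangle = {'x': {'y', 'z'}, 'y': {'x', 'z'}, 'z': {'x', 'y'}}
```

Copying `square` touches each edge once; asking whether `square` contains a triangle forces a search through injective placements (here it fails, since a 4-cycle has no triangle). That asymmetry is the point being argued: uploading-as-copying and uploading-as-combinatorial-search are very different computational tasks.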

> *> Quantum computers do not help much.*


It would be great to have a quantum computer but would not be necessary for
uploading or for AI, it would just be icing on the cake.

 *> It could have some bearing on the ability to emulate consciousness in a
computer.*

Yes, and the Goldbach conjecture might have some bearing on the ability to
emulate consciousness in a computer too, but there is not one particle of
evidence to suggest that either of the two actually does. There are an
infinite number of things and concepts in the universe and not one of them
has been ruled out as having something to do with consciousness, and
that’s why consciousness theories are so easy to come up with and why they
are so completely useless. Intelligence theories are a different matter
entirely: they are testable.

> >> How do you figure that? Both my brain and my computer are made of
>> matter that obeys the laws of physics, and matter that obeys the laws of
>> physics has never been observed to compute NP-complete problems in
>> polynomial time, much less find the answer to a
>> non-computable question, like “what is the 7918th Busy Beaver number?”.
>
>
>
> > *And for this reason it could be impossible to map brain states into a
> computer and capture a person completely. *

How do you figure that? A computer can never find the 7918th Busy Beaver
number but my consciousness can never find it either. I’ll be damned if I
see how one thing has anything to do with the other. It seems to me that
you don’t want computers to be conscious so you looked for a problem that a
computer can never solve and just decreed that problem must have something
to do with consciousness. But computers can’t find the 7918th Busy Beaver
number because the laws of physics can’t find it, even the universe itself
doesn’t know what that finite number is. But I know for a fact that the
universe does know how to arrange atoms so they behave in a johnkclarkian
way and become conscious. The universe doesn't know how to solve
NP-complete problems in polynomial time, much less NP-hard problems, much
less flat-out non-computable problems like the Busy Beaver, so I don't see
how any of them could have anything to do with consciousness.
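The uncomputability claim here can be made concrete. The sketch below (my own illustration, not from the thread) simulates the standard 2-state, 2-symbol Busy Beaver champion machine, which halts after 6 steps having written four 1s. The hard-coded step cap is the crux: because the halting problem is undecidable, no general procedure can tell in advance whether an arbitrary machine halts, which is why neither a computer nor the laws of physics can evaluate the 7918th Busy Beaver number.

```python
# 2-state, 2-symbol Busy Beaver champion (Rado): halts after 6 steps,
# leaving four 1s on the tape. Table: (state, symbol) -> (write, move, next).
RULES = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),   # 'H' = halt
}

def run(rules, max_steps):
    """Run on a blank tape, giving up after max_steps. The cap is
    unavoidable: deciding halting in general is impossible."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return state == 'H', steps, sum(tape.values())

halted, steps, ones = run(RULES, 1000)   # (True, 6, 4) for this machine
```

For 2 states this exhaustive certainty is attainable; already at 5 states the champion is unknown, and as I understand it the thread's figure of 7918 comes from a construction showing that BB(7918) cannot be pinned down within standard set theory, which is the sense in which "the universe itself doesn't know" the value.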

> *> Of course brains and computers are made of matter. So is a pile of shit
> also made of matter.*
>
Exactly, and the only difference between my brain and a pile of shit is the
way the generic atoms are arranged, and the only difference between a
cadaver and a healthy living person is the way the generic atoms are
arranged. One carbon atom is identical to another so the only thing that
specifies something as being me or a cadaver or pile of shit is the
information on how to arrange those atoms.

> *> Based on what we know about bacteria and their network communicating by
> electrical potentials the pile of shit may have more in the way of
> consciousness than a computer. *

Maybe maybe maybe. The above is an excellent example of what I was talking
about: consciousness theories are utterly and completely useless. Is this
really the best you can do? Are piles of shit and the interior of Black
Holes the only places you can find arguments against Cryonics?

> *> As things stand now I would not do the upload.  *

But is that because you believe there is no chance of it working or because
you believe there is? I think the chances are greater than zero but less
than 100% and I’m not afraid to give it a try. After all, I have the money
and if it doesn’t work it won’t make me any deader.

There is an excellent article by Kenneth Hayworth on the nuts and bolts of
how this uploading procedure might work, from the present day when you
undergo the procedure to about 70 years in the future when you’re revived
and uploaded into a robot body:

http://www.brainpreservation.org/wp-content/uploads/2018/02/vitrifyingtheconnectomicself_hayworth.pdf
Kenneth Hayworth is t

Re: How to live forever

2018-03-19 Thread Lawrence Crowell
I am not particularly in the Platonist camp. I see Platonism and other 
philosophical ideas as just grist for the mill. 

Dennett's approach has some merit as at least opening a door for some 
possible testable approaches to consciousness. I have no idea whether this 
entire construction is realistic or not. 

LC

On Monday, March 19, 2018 at 7:01:04 AM UTC-5, telmo_menezes wrote:
>
> On Sun, Mar 18, 2018 at 9:29 PM, Lawrence Crowell 
>
> > 
> > 
> > In a part what you say is spot on. The problem with consciousness is
> > there is a lot more ignorance about it than much in the way of certain
> > knowledge. It may be a sort of epiphenomenon that emerges from some
> > class of complex systems, which at this time we do not understand.
> > Roger Penrose thinks it is something like a triality of physics,
> > mathematics and mind, which is a sort of Platonic view. Dennett on the
> > other hand thinks consciousness is a sort of illusion, which is a sort
> > of epiphenomenon. Dennett calls his approach heterophenomenology, as it
> > involves a sort of multiple-drafts model. We really do not know for
> > sure what consciousness is.
>
> I am in the Platonist camp, but fully realize that this is a personal 
> bet / intuition. I agree with Bruno that if computationalism is true, 
> then consciousness cannot be an epiphenomenon. But we don't know if 
> computationalism is true. 
>
> Dennett I just find silly. I think he plays with words, and 
> accepting his arguments would force me to deny something (the only 
> thing) that I absolutely know to be true. 
>
> > I can think of things that strike me as obstructions to the idea of
> > uploading brain states to a computer. The issue of NP-completeness seems
> > plausible, and classic NP-complete problems are combinatorial systems,
> > which the brain is an example of. Other questions seem to make this
> > problematic. It does seem to me the barrier of ignorance is far higher
> > than our ability to vault over it.
>
> Agreed. I'm not sure we will ever be able to understand consciousness 
> -- there is really no reason to assume that this is possible. If it 
> is, I bet that it will require a quantitative jump in our 
> understanding of reality. I most definitely do not believe that it can 
> be solved by incrementalist research in neuroscience. 
>
> Telmo. 
>
> > LC 
> > 
>



Re: How to live forever

2018-03-19 Thread Lawrence Crowell
On Monday, March 19, 2018 at 6:47:01 AM UTC-5, stathisp wrote:
>
>
> On Mon, 19 Mar 2018 at 8:58 pm, Lawrence Crowell  > wrote:
>
>> On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>>
>>>
>>>
>>> On 19 March 2018 at 12:14, Lawrence Crowell  
>>> wrote:
>>>
 On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>
> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
> goldenfield...@gmail.com> wrote:
>
> *> The MH spacetimes have Cauchy horizons that because they pile up 
>> geodesics can be a sort of singularity.*
>
>
> That’s not the only thing they have, MH spacetimes also have closed 
> timelike curves and logical paradoxes produced by them, one of them being 
> the one found by Turing. They also have naked singularities that nobody
> has ever seen the slightest hint of. And if you need to go to as exotic
> a place as the speculative interior of a Black Hole to find a reason why
> Cryonics might not work I am greatly encouraged.
>

 Not all MH spaces have closed timelike curves.
  

>
> *> The subject of NP-completeness came up because of my conjecture 
>> about there being a sort of code associated with a conscious entity
>> that is not computable or, if computable, is intractable in NP. *
>
>
> NP-completeness is sorta weird and consciousness is sorta weird, but 
> other than that is there any reason to think the two things are related?
>

 This seems to be something you are not registering. Classic NP-complete 
 problems involve cataloging subgraphs and determining the rules for all 
 subgraphs in a graph. There are other similar combinatoric problems that 
 are NP complete. A map from a brain to a computer is going to require 
 knowing how to handle these problems. Quantum computers do not help much.
  

>
> *> It could have some bearing on the ability to emulate consciousness 
>> in a computer.*
>
>
> How do you figure that? Both my brain and my computer are made of 
> matter that obeys the laws of physics, and matter that obeys the laws of 
> physics has never been observed to compute NP-complete problems in 
> polynomial time, much less find the answer to a non-computable 
> question, like “what is the 7918th Busy Beaver number?”. 
>

 And for this reason it could be impossible to map brain states into a 
 computer and capture a person completely. Of course brains and computers 
 are made of matter. So is a pile of shit also made of matter. Based on
 what we know about bacteria and their networks communicating by electrical
 potentials, the pile of shit may have more in the way of consciousness
 than a computer.

 As for the rest I think a lot of this sort of idea is chasing after 
 some crazy dream. There is in some ways a problem with doing that. As 
 things stand now I would not do the upload.  Below is a picture of some 
 aspect of this. 


 

>>> Could you say if you think the observable behaviour of the brain (and 
>>> hence of the person whose muscles are controlled by the brain) could be 
>>> replaced by a computer, and, if the answer is yes, if you still think it is 
>>> possible that the consciousness might not be preserved? And if the answer 
>>> is also yes to the second question, what you think it would be like if your 
>>> consciousness was changed by replacing part of your brain, but your brain 
>>> still forced your body to behave in the same way?
>>>
>>>
>>> -- 
>>> Stathis Papaioannou
>>>
>>
>> I really do not know. I will say it is possible in principle to
>> replace the executive parts of the brain with a computer, but where the
>> result could be a sort of zombie. There are too many unknowns: unknowns
>> with no Bayesian priors, and unknown unknowns. We are in a domain of
>> possibles, plausibles and maybe a Jupiter computer-brain. There is so 
>> little to go with this, and to be honest a lot more possible obstructions I 
>> might see than realities, that almost nothing can be said with much 
>> certainty.
>>
>
> Consider not a zombie but a brain in the process of zombification. A piece 
> of the brain is replaced with an electronic implant which replicates its 
> I/O behaviour as it interacts with the surrounding biological tissue but, 
> by assumption, does not participate in consciousness. It is believed, for 
> example, that visual experiences arise in the occipital cortex, and lesions 
> here cause partial or complete blindness. So if the implant in the visual 
> cortex lacked the special quality giving rise to visual experiences, the 
> subject should have this large deficit in his consciousnes

Re: How to live forever

2018-03-19 Thread Telmo Menezes
On Sun, Mar 18, 2018 at 11:38 PM, John Clark  wrote:
> On Sun, Mar 18, 2018 at 12:34 PM, Telmo Menezes 
> wrote:
>
>>>
>>> >> You walk into a bakery and see a cake and you assume the baker made
>>> >> the cake. Do you also assume the baker made the flour in the cake, and
>>> >> the carbon in the flour, and the protons in the carbon, and the quarks
>>> >> in the protons?
>>
>>
>>
>> > No, because the cake is an improbable thing, while protons are  probable
>> > things. I don't know how probable consciousness is.
>
>
> But I do know that the baker's brain is far more improbable than a cake,
> and mass and an electromagnetic field and temperature are even more
> probable than protons, so why do you demand Evolution produce mass and
> temperature and electromagnetic fields but you don't demand the baker
> produce protons?

I demand no such thing. On the contrary, I am pointing out that just
because you have a certain property, it doesn't follow that this
property was created by evolution. Evolution is a theory about the
complexification of biological systems. Nothing more, nothing less.

>> > I don't know how probable consciousness is.
>
>
> But I can make an educated guess because I know that most things don't behave
> intelligently and I also know that when I become less conscious I also become
> less intelligent.
>
>> > Humans are terribly complex
>
>
> Yes, and evolution didn't bother to put in all that complexity because its a
> nice guy and figured we'd like consciousness, it did it because intelligent
> behavior has survival value.

Human-like intelligent behavior is a successful strategy in a very
narrow evolutionary niche. We are one thermonuclear war away from
nature "deciding" that human-like intelligence is not so useful after
all. Evolution is constant adaptation to the environment, with some
self-referentiality, because the things evolved become part of, and
change, the environment.

>> > and it might be that consciousness arises  from terribly complex things,
>> > or from certain types of terribly  complex things. But I don't really know
>> > and neither do you.
>
>
> I don't really know that human cadavers are not conscious, but they
> sure don't behave intelligently so my hunch is they are not.
> What is your hunch?

My hunch is that I am mind in a universe made of mind. I believe
modern physics is both valid and useful (and a lot of fun), but I do
not know that it studies fundamental reality and I believe nobody else
knows either.

Telmo.

> John K Clark
>



Re: How to live forever

2018-03-19 Thread Telmo Menezes
On Sun, Mar 18, 2018 at 9:29 PM, Lawrence Crowell
 wrote:
> On Sunday, March 18, 2018 at 10:34:17 AM UTC-6, telmo_menezes wrote:
>>
>> On Sun, Mar 18, 2018 at 5:18 PM, John Clark  wrote:
>> > On Sun, Mar 18, 2018 at 11:16 AM, Telmo Menezes 
>> > wrote:
>> >>>
>> >>> >> Evolution produced me. I know with 100% certainty that I am
>> >>> >> conscious. I very strongly suspect billions of other things are
>> >>> >> conscious too. I know for a fact Evolution can detect intelligent
>> >>> >> behavior but it can’t detect consciousness, and yet I am conscious.
>> >>> >> Therefore consciousness MUST be a byproduct of intelligence.
>> >>> >> Evolution says as much about consciousness as there is to say; it is
>> >>> >> the best purely logical argument against solipsism, in fact it is
>> >>> >> the only one, all the others are just variations of “my intuition
>> >>> >> says it’s untrue” or “solipsism is too strange to be true”.
>> >>
>> >>
>> >>
>> >> > This assumes that if evolution produced X, then any property of X is
>> >> > also a product of evolution.
>> >
>> >
>> > Not any property of X, just any property the parts of X didn't have
>> > before Evolution started its work. You walk into a bakery and see a
>> > cake and you assume the baker made the cake. Do you also assume the
>> > baker made the flour in the cake, and the carbon in the flour, and the
>> > protons in the carbon, and the quarks in the protons?
>>
>> No, because the cake is an improbable thing, while protons are
>> probable things. I don't know how probable consciousness is. As you
>> said, I cannot detect it.
>>
>> >>
>> >> > It is trivially not the case. For example, you have mass and an
>> >> > electromagnetic field and a temperature, and yet neither mass nor
>> >> > electromagnetism nor temperature are products
>> >> of evolution.
>> >
>> > I know Evolution produced my physical brain and I know with 100%
>> > certainty
>> > my brain is conscious,
>>
>> But you don't know what else is conscious.
>>
>> > however I do not know with 100% certainty that mass
>> > or electromagnetism or temperature are conscious, in fact I rather
>> > suspect
>> > they are not and there is only one reason I have that suspicion, they do
>> > not
>> > behave intelligently.
>>
>> Intelligent behavior is relative to humans. It just means that you are
>> good at the things that are necessary to succeed in a specific
>> evolutionary niche that the Lovecraftian-horror a.k.a. nature
>> produced. It also produced myriad other things.
>>
>> Humans are terribly complex, and it might be that consciousness arises
>> from terribly complex things, or from certain types of terribly
>> complex things. But I don't really know and neither do you.
>>
>> Telmo.
>
>
> In a part what you say is spot on. The problem with consciousness is there
> is a lot more ignorance about it than much in the way of certain knowledge.
> It may be a sort of epiphenomenon that emerges from some class of complex
> systems, which at this time we do not understand. Roger Penrose thinks it
> is something like a triality of physics, mathematics and mind, which is a sort
> of Platonic view. Dennett on the other hand thinks consciousness is a sort
> of illusion, which is a sort of epiphenomenon. Dennett calls his approach
> heterophenomenology, as it involves a sort of multiple-drafts model. We
> really do not know for sure what consciousness is.

I am in the Platonist camp, but fully realize that this is a personal
bet / intuition. I agree with Bruno that if computationalism is true,
then consciousness cannot be an epiphenomenon. But we don't know if
computationalism is true.

Dennett I just find silly. I think he plays with words, and
accepting his arguments would force me to deny something (the only
thing) that I absolutely know to be true.

> I can think of things that strike me as obstructions to the idea of
> uploading brain states to a computer. The issue of NP-completeness seems
> plausible, and classic NP-complete problems are combinatorial systems which
> the brain is an example of. Other questions seem to make this problematic.
> It does seem to me the barrier of ignorance is far higher than our ability
> to vault over it.

Agreed. I'm not sure we will ever be able to understand consciousness
-- there is really no reason to assume that this is possible. If it
is, I bet that it will require a quantitative jump in our
understanding of reality. I most definitely do not believe that it can
be solved by incrementalist research in neuroscience.

Telmo.

> LC
>

Re: How to live forever

2018-03-19 Thread Stathis Papaioannou
On Mon, 19 Mar 2018 at 8:58 pm, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>
>>
>>
>> On 19 March 2018 at 12:14, Lawrence Crowell 
>> wrote:
>>
>>> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:

 On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
 goldenfield...@gmail.com> wrote:

 *> The MH spacetimes have Cauchy horizons that because they pile up
> geodesics can be a sort of singularity.*


 That’s not the only thing they have, MH spacetimes also have closed
 timelike curves and logical paradoxes produced by them, one of them being
 the one found by Turing. They also have naked singularities that nobody has
 ever seen the slightest hint of. And if you need to go to as exotic a place
 as the speculative interior of a Black Hole to find a reason why Cryonics
 might not work I am greatly encouraged.

>>>
>>> Not all MH spaces have closed timelike curves.
>>>
>>>

 *> The subject of NP-completeness came up because of my conjecture
> about there being a sort of code associated with a conscious entity that
> is not computable or, if computable, is intractable in NP. *


 NP-completeness is sorta weird and consciousness is sorta weird, but
 other than that is there any reason to think the two things are related?

>>>
>>> This seems to be something you are not registering. Classic NP-complete
>>> problems involve cataloging subgraphs and determining the rules for all
>>> subgraphs in a graph. There are other similar combinatoric problems that
>>> are NP complete. A map from a brain to a computer is going to require
>>> knowing how to handle these problems. Quantum computers do not help much.
>>>
>>>

 *> It could have some bearing on the ability to emulate consciousness
> in a computer.*


 How do you figure that? Both my brain and my computer are made of
 matter that obeys the laws of physics, and matter that obeys the laws of
 physics has never been observed to compute NP-complete problems in
 polynomial time, much less find the answer to a non-computable
 question, like “what is the 7918th Busy Beaver number?”.

>>>
>>> And for this reason it could be impossible to map brain states into a
>>> computer and capture a person completely. Of course brains and computers
>>> are made of matter. So is a pile of shit also made of matter. Based on what
>>> we know about bacteria and their network communicating by electrical
>>> potentials the pile of shit may have more in the way of consciousness than
>>> a computer.
>>>
>>> As for the rest I think a lot of this sort of idea is chasing after some
>>> crazy dream. There is in some ways a problem with doing that. As things
>>> stand now I would not do the upload.  Below is a picture of some aspect of
>>> this.
>>>
>>>
>>> 
>>>
>> Could you say if you think the observable behaviour of the brain (and
>> hence of the person whose muscles are controlled by the brain) could be
>> replaced by a computer, and, if the answer is yes, if you still think it is
>> possible that the consciousness might not be preserved? And if the answer
>> is also yes to the second question, what you think it would be like if your
>> consciousness was changed by replacing part of your brain, but your brain
>> still forced your body to behave in the same way?
>>
>>
>> --
>> Stathis Papaioannou
>>
>
> I really do not know. I will say it may be possible in principle to replace
> the executive parts of the brain with a computer, but the result
> could be a sort of zombie. There are too many unknowns, unknowns with no
> Bayesian priors, and unknown unknowns. We are in a domain of possibles,
> plausibles and maybe a Jupiter computer-brain. There is so little to go
> with this, and to be honest a lot more possible obstructions I might see
> than realities, that almost nothing can be said with much certainty.
>

Consider not a zombie but a brain in the process of zombification. A piece
of the brain is replaced with an electronic implant which replicates its
I/O behaviour as it interacts with the surrounding biological tissue but,
by assumption, does not participate in consciousness. It is believed, for
example, that visual experiences arise in the occipital cortex, and lesions
here cause partial or complete blindness. So if the implant in the visual
cortex lacked the special quality giving rise to visual experiences, the
subject should have this large deficit in his consciousness. But although
he might want to yell out that he is blind, his vocal cords receive the
same input from motor neurones that they would normally, since the output
from the visual cortex is the same, and he declares that nothing is wrong.

Re: How to live forever

2018-03-19 Thread Lawrence Crowell
On Sunday, March 18, 2018 at 8:46:26 PM UTC-6, stathisp wrote:
>
>
>
> On 19 March 2018 at 12:14, Lawrence Crowell  > wrote:
>
>> On Sunday, March 18, 2018 at 3:51:13 PM UTC-6, John Clark wrote:
>>>
>>> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
>>> *> The MH spacetimes have Cauchy horizons that because they pile up
>>> geodesics can be a sort of singularity.*
>>>
>>>
>>> That’s not the only thing they have, MH spacetimes also have closed 
>>> timelike curves and logical paradoxes produced by them, one of them being 
>>> the one found by Turing. They also have naked singularities that nobody has 
>>> ever seen the slightest hint of. And if you need to go to as exotic a place 
>>> as the speculative interior of a Black Hole to find a reason why Cryonics 
>>> might not work I am greatly encouraged. 
>>>
>>
>> Not all MH spaces have closed timelike curves.
>>  
>>
>>>
>>> *> The subject of NP-completeness came up because of my conjecture about
>>> there being a sort of code associated with a conscious entity that is not
>>> computable or if computable is intractable in NP.*
>>>
>>>
>>> NP-completeness is sorta weird and consciousness is sorta weird, but 
>>> other than that is there any reason to think the two things are related?
>>>
>>
>> This seems to be something you are not registering. Classic NP-complete 
>> problems involve cataloging subgraphs and determining the rules for all 
>> subgraphs in a graph. There are other similar combinatoric problems that 
>> are NP complete. A map from a brain to a computer is going to require 
>> knowing how to handle these problems. Quantum computers do not help much.
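[To make the quoted claim concrete: subgraph isomorphism, i.e. deciding
whether a pattern graph occurs inside a larger graph, is a classic
NP-complete problem, and the only general method known is to try every
injective mapping of pattern vertices onto graph vertices. A minimal
brute-force sketch in Python; the function name and the example graphs are
chosen purely for illustration:]

```python
from itertools import permutations

def has_subgraph(graph_edges, pattern_edges, n_graph, n_pattern):
    """Brute-force test: does the pattern occur as a subgraph of the graph?

    Tries every injective mapping of pattern vertices onto graph
    vertices -- factorially many mappings in the worst case, which is
    why subgraph isomorphism is intractable in general (NP-complete).
    """
    # Treat edges as undirected by storing both orientations.
    graph = set(graph_edges) | {(b, a) for a, b in graph_edges}
    for perm in permutations(range(n_graph), n_pattern):
        # perm[v] is the graph vertex that pattern vertex v maps to.
        if all((perm[a], perm[b]) in graph for a, b in pattern_edges):
            return True
    return False

# A 4-cycle contains a path of length 2, but no triangle.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_subgraph(square, [(0, 1), (1, 2)], 4, 3))          # True
print(has_subgraph(square, [(0, 1), (1, 2), (2, 0)], 4, 3))  # False
```

[Even this toy version visits up to n!/(n-k)! mappings, so it blows up
quickly as the graphs grow, which is the intractability being pointed at.]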
>>  
>>
>>>
>>> *> It could have some bearing on the ability to emulate consciousness in
>>> a computer.*
>>>
>>>
>>> How do you figure that? Both my brain and my computer are made of matter 
>>> that obeys the laws of physics, and matter that obeys the laws of physics 
>>> has never been observed to compute NP-complete problems in polynomial time, 
much less find the answer to a non-computable question, like “what is 
>>> the 7918th Busy Beaver number?”. 
>>>
>>
>> And for this reason it could be impossible to map brain states into a 
>> computer and capture a person completely. Of course brains and computers 
>> are made of matter. So is a pile of shit also made of matter. Based on what 
>> we know about bacteria and their network communicating by electrical 
>> potentials the pile of shit may have more in the way of consciousness than 
>> a computer. 
>>
>> As for the rest I think a lot of this sort of idea is chasing after some 
>> crazy dream. There is in some ways a problem with doing that. As things 
>> stand now I would not do the upload.  Below is a picture of some aspect of 
>> this. 
>>
>>
>> 
>>
> Could you say if you think the observable behaviour of the brain (and 
> hence of the person whose muscles are controlled by the brain) could be 
> replaced by a computer, and, if the answer is yes, if you still think it is 
> possible that the consciousness might not be preserved? And if the answer 
> is also yes to the second question, what you think it would be like if your 
> consciousness was changed by replacing part of your brain, but your brain 
> still forced your body to behave in the same way?
>
>
> -- 
> Stathis Papaioannou
>

I really do not know. I will say it may be possible in principle to replace
the executive parts of the brain with a computer, but the result
could be a sort of zombie. There are too many unknowns, unknowns with no
Bayesian priors, and unknown unknowns. We are in a domain of possibles,
plausibles and maybe a Jupiter computer-brain. There is so little to go 
with this, and to be honest a lot more possible obstructions I might see 
than realities, that almost nothing can be said with much certainty.

LC

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.