Re: How to live forever

2018-03-25 Thread Brent Meeker



On 3/25/2018 7:14 PM, Stathis Papaioannou wrote:



On 26 March 2018 at 04:57, Brent Meeker wrote:




On 3/25/2018 2:15 AM, Bruno Marchal wrote:



On 21 Mar 2018, at 22:56, Brent Meeker wrote:



On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:


On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker wrote:



On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:

On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker wrote:



On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:


On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker wrote:



On 3/20/2018 3:58 AM, Telmo Menezes wrote:

The interesting thing is that you can draw conclusions about 
consciousness
without being able to define it or detect it.

I agree.


The claim is that IF an entity
is conscious THEN its consciousness will be preserved if brain 
function is
preserved despite changing the brain substrate.

Ok, this is computationalism. I also bet on computationalism, 
but I
think we must proceed with caution and not forget that we are 
just
assuming this to be true. Your thought experiment is convincing 
but is
not a proof. You do expose something that I agree with: that
non-computationalism sounds silly.

But does it sound so silly if we propose
substituting a completely different kind of
computer, e.g. von Neumann architecture or one
that just records everything instead of an
episodic associative memory, for the brain. The
Church-Turing conjecture says it can compute the
same functions. But does it instantiate the same
consciousness? My intuition is that it would be
"conscious" but in some different way; for
example by having the kind of memory you would
have if you could review a movie of any
interval in your past.


I think it would be conscious in the same way if you
replaced neural tissue with a black box that
interacted with the surrounding tissue in the same
way. It doesn’t matter what is in the black box; it
could even work by magic.


Then why draw the line at "surrounding tissue"?  Why
not the external environment?


Keep expanding the part that is replaced and you replace
the whole brain and the whole organism.

Are you saying you can't imagine being "conscious" but
in a different way?


I think it is possible but I don’t think it could happen
if my neurones were replaced by a functionally equivalent
component. If it’s functionally equivalent, my behaviour
would be unchanged,


I agree with that.  But you've already supposed that
functional equivalence at the behavior level implies
preservation of consciousness.  So what I'm considering is
replacements in the brain far above the neuron level, say
at the level of whole functional groups of the brain, e.g.
the visual system, the auditory system, the memory,...
Would functional equivalence at the body/brain interface
then still imply consciousness equivalence?


I think it would, because I don’t think there are isolated
consciousness modules in the brain. A large enough change in
visual experience will be noticed by the subject, who will
report that things look different. This could only happen if
there is a change in the input to the language system from the
visual system; but we have assumed that the output from the
visual system is the same, and only the consciousness has
changed, leading to a contradiction.


But what about internal systems which are independent of
perception...the very reason Bruno wants to talk about dream
states.  And I'm not necessarily asking that behavior be
identical...just that the body/brain interface be the same.  The
"brain" may be different in how it processes input from the
eyeballs and hence report verbally different perceptions.  In
other words, I'm wondering how much computationalism
constrains consciousness.  My intuition is that there could be a
lot of difference in consciousness depending on how different
perceptual inputs are processed and/or merged and how internal
simulations are handled.  To take a crude example, would it
matter if the 

Re: How to live forever

2018-03-25 Thread Stathis Papaioannou
On 26 March 2018 at 04:57, Brent Meeker  wrote:

>
>
> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>
>
> On 21 Mar 2018, at 22:56, Brent Meeker  wrote:
>
>
>
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>
>
> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker  wrote:
>
>>
>>
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>
>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker 
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker 
>>> wrote:
>>>


 On 3/20/2018 3:58 AM, Telmo Menezes wrote:

 The interesting thing is that you can draw conclusions about consciousness
 without being able to define it or detect it.

 I agree.


 The claim is that IF an entity
 is conscious THEN its consciousness will be preserved if brain function is
 preserved despite changing the brain substrate.

 Ok, this is computationalism. I also bet on computationalism, but I
 think we must proceed with caution and not forget that we are just
 assuming this to be true. Your thought experiment is convincing but is
 not a proof. You do expose something that I agree with: that
 non-computationalism sounds silly.

 But does it sound so silly if we propose substituting a completely
 different kind of computer, e.g. von Neumann architecture or one that just
 records everything instead of an episodic associative memory, for the
 brain.  The Church-Turing conjecture says it can compute the same
 functions.  But does it instantiate the same consciousness?  My intuition
 is that it would be "conscious" but in some different way; for example by
 having the kind of memory you would have if you could review a movie of
 any interval in your past.

>>>
>>> I think it would be conscious in the same way if you replaced neural
>>> tissue with a black box that interacted with the surrounding tissue in the
>>> same way. It doesn’t matter what is in the black box; it could even work by
>>> magic.
>>>
>>>
>>> Then why draw the line at "surrounding tissue"?  Why not the external
>>> environment?
>>>
>>
>> Keep expanding the part that is replaced and you replace the whole brain
>> and the whole organism.
>>
>> Are you saying you can't imagine being "conscious" but in a different way?
>>>
>>
>> I think it is possible but I don’t think it could happen if my neurones
>> were replaced by a functionally equivalent component. If it’s functionally
>> equivalent, my behaviour would be unchanged,
>>
>>
>> I agree with that.  But you've already supposed that functional
>> equivalence at the behavior level implies preservation of consciousness.
>> So what I'm considering is replacements in the brain far above the neuron
>> level, say at the level of whole functional groups of the brain, e.g. the
>> visual system, the auditory system, the memory,...  Would functional
>> equivalence at the body/brain interface then still imply consciousness
>> equivalence?
>>
>
> I think it would, because I don’t think there are isolated consciousness
> modules in the brain. A large enough change in visual experience will be
> noticed by the subject, who will report that things look different. This
> could only happen if there is a change in the input to the language system
> from the visual system; but we have assumed that the output from the visual
> system is the same, and only the consciousness has changed, leading to a
> contradiction.
>
>
> But what about internal systems which are independent of perception...the
> very reason Bruno wants to talk about dream states.  And I'm not
> necessarily asking that behavior be identical...just that the body/brain
> interface be the same.  The "brain" may be different in how it processes
> input from the eyeballs and hence report verbally different perceptions.
> In other words, I'm wondering how much computationalism constrains
> consciousness. My intuition is that there could be a lot of difference in
> consciousness depending on how different perceptual inputs are processed
> and/or merged and how internal simulations are handled.  To take a crude
> example, would it matter if the computer-brain was programmed in a
> functional language like LISP, an object-oriented language like Ruby, or a
> neural network?  Of course Church-Turing says they all compute the same set
> of functions, but they don't do it the same way
>
>
> They can do it in the same way. They will not do it in the same way with a
> compiler, but will do it in the same way when you implement an interpreter
> in another interpreter. The extensional CT (in terms of which functions are
> calculated) entails the intensional CT (in terms of which computations can
> be processed). A Babbage machine could emulate a quantum brain. It involves a
> relative slow-down, but the subject 

Re: Mind Uploading and NP-completeness

2018-03-25 Thread John Clark
On Sun, Mar 25, 2018 at 6:07 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> *I am more interested in the graph theoretic issues with NP-completeness
> for other reasons than I am in this question of uploading minds into
> computers. The latter I think is largely science fiction.*


Of course it's science fiction: it's fiction because nobody has yet
come back from the world of liquid nitrogen, and it's scientific because you
have to go to the speculative interior of Black Holes to try to find a
fundamental physical reason it might not work.


> *I am more interested in questions of quantum information and the
> compatibility of quantum mechanics and general relativity.*
>

Those are indeed interesting questions, but they are very deep and an
answer to them might not be available for a hundred years. Wouldn't you
like a chance of knowing what those answers are?

*> My reasoning for not doing this is not about being "deader," but in
> terms of money. To be honest this borders on sounding like a scam, and I
> can imagine that con-men have concocted this scheme to part people from
> their money. Sure it might be based on a bit of science and technology, but
> I could easily see this as being some sort of scam.*


If cryonics is a scam it's not a very good one, because it's been around for
half a century and nobody has gotten rich off it. Perhaps Alcor should stop
saying there are no guarantees and telling people that getting cryogenically
preserved is the second worst thing that could happen to you, and instead do
what religion does and insist there is no way it could fail. If they didn't
care about honesty and had done that from the first preservation back in the
1960s, Alcor might be bigger than Scientology by now, maybe even bigger than
the Vatican.

John K Clark



Re: Mind Uploading and NP-completeness

2018-03-25 Thread agrayson2000


On Sunday, March 25, 2018 at 6:07:44 PM UTC-4, Lawrence Crowell wrote:
>
> I am more interested in the graph theoretic issues with NP-completeness 
> for other reasons than I am in this question of uploading minds into 
> computers. The latter I think is largely science fiction. I am more 
> interested in questions of quantum information and the compatibility of 
> quantum mechanics and general relativity.
>
> My reasoning for not doing this is not about being "deader," but in terms 
> of money. To be honest this borders on sounding like a scam, and I can 
> imagine that con-men have concocted this scheme to part people from their 
> money. Sure it might be based on a bit of science and technology, but I 
> could easily see this as being some sort of scam. The cryonics movement has 
> produced nothing, but people continue to pay into it.
>

*It might not be a scam, just extreme speculation. But what is certainly 
in evidence is hubris. AG* 

>
> LC
>
> On Sunday, March 25, 2018 at 11:16:03 AM UTC-6, John Clark wrote:
>>
>> On Sun, Mar 25, 2018 at 11:34 AM, Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>> *> This further might connect with the whole idea of up-loading minds 
>>> into computers. Brains and their states are not just localized states but 
>>> networks, and it could well be that this is not tractable.*
>>
>>
>> If the brain is a network it is a network with a finite number of 
>> vertices and a finite number of lines connecting those vertices. For 
>> uploading you’re not trying to optimize or do anything with the network 
>> except to just list all the lines and vertices in a network that already 
>> exists. You don’t need to find some exotic new algorithm that can solve 
>> NP-complete problems in polynomial time to do that, you just need a few 
>> trillion nano-machines that can feel around inside a brain and report back 
>> on what they’ve discovered. And as I said before, even if a general class 
>> of problems has been proven to be difficult that just means some specific 
>> examples of it are, it doesn’t mean all or even most are and in fact some 
>> could be quite easy. In general factoring large numbers is hard and 2^1000 
>> is huge but it would be remarkably easy to factor.
>>
>> There is another thing that confuses me, you seem to be implying nobody 
>> should engage in Cryonics unless it has been proven with mathematical 
>> certainty to work, and that doesn’t seem wise to me unless you know of a 
>> reason that being frozen will make me deader than being eaten by worms. 
>>
*> As a general rule once these threads get past 100 I tend not to post 
>>> any more. It becomes too annoying to find my way around them.*
>>
>>
>> I can sympathize, I’ve been complaining about that for years, but the 
>> problem really isn’t 100 posts, it’s that most people refuse to trim anything 
>> when they respond, so you end up with a vast iterated sea of quotes of quotes 
>> of quotes of quotes of quotes of quotes of quotes of quotes and it’s very 
>> hard to tell who said what. It’s frustrating to scroll down through page 
>> after page of quotes only to be rewarded at the end with one cryptic new 
>> line like “that’s not true”. 
>>
>>  John K Clark
>>
>



Re: Mind Uploading and NP-completeness

2018-03-25 Thread Lawrence Crowell
I am more interested in the graph theoretic issues with NP-completeness for 
other reasons than I am in this question of uploading minds into computers. 
The latter I think is largely science fiction. I am more interested in 
questions of quantum information and the compatibility of quantum mechanics 
and general relativity.

My reasoning for not doing this is not about being "deader," but in terms 
of money. To be honest this borders on sounding like a scam, and I can 
imagine that con-men have concocted this scheme to part people from their 
money. Sure it might be based on a bit of science and technology, but I 
could easily see this as being some sort of scam. The cryonics movement has 
produced nothing, but people continue to pay into it.

LC

On Sunday, March 25, 2018 at 11:16:03 AM UTC-6, John Clark wrote:
>
> On Sun, Mar 25, 2018 at 11:34 AM, Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
> *> This further might connect with the whole idea of up-loading minds into 
>> computers. Brains and their states are not just localized states but 
>> networks, and it could well be that this is not tractable.*
>
>
> If the brain is a network it is a network with a finite number of vertices 
> and a finite number of lines connecting those vertices. For uploading 
> you’re not trying to optimize or do anything with the network except to 
> just list all the lines and vertices in a network that already exists. You 
> don’t need to find some exotic new algorithm that can solve NP-complete 
> problems in polynomial time to do that, you just need a few trillion 
> nano-machines that can feel around inside a brain and report back on what 
> they’ve discovered. And as I said before, even if a general class of 
> problems has been proven to be difficult that just means some specific 
> examples of it are, it doesn’t mean all or even most are and in fact some 
> could be quite easy. In general factoring large numbers is hard and 2^1000 
> is huge but it would be remarkably easy to factor.
>
> There is another thing that confuses me, you seem to be implying nobody 
> should engage in Cryonics unless it has been proven with mathematical 
> certainty to work, and that doesn’t seem wise to me unless you know of a 
> reason that being frozen will make me deader than being eaten by worms. 
>
*> As a general rule once these threads get past 100 I tend not to post 
>> any more. It becomes too annoying to find my way around them.*
>
>
> I can sympathize, I’ve been complaining about that for years, but the 
> problem really isn’t 100 posts, it’s that most people refuse to trim anything 
> when they respond, so you end up with a vast iterated sea of quotes of quotes 
> of quotes of quotes of quotes of quotes of quotes of quotes and it’s very 
> hard to tell who said what. It’s frustrating to scroll down through page 
> after page of quotes only to be rewarded at the end with one cryptic new 
> line like “that’s not true”. 
>
>  John K Clark
>



Re: How to live forever

2018-03-25 Thread Brent Meeker



On 3/25/2018 2:15 AM, Bruno Marchal wrote:


On 21 Mar 2018, at 22:56, Brent Meeker wrote:




On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:


On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker wrote:




On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:

On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker wrote:



On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:


On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker wrote:



On 3/20/2018 3:58 AM, Telmo Menezes wrote:

The interesting thing is that you can draw conclusions about 
consciousness
without being able to define it or detect it.

I agree.


The claim is that IF an entity
is conscious THEN its consciousness will be preserved if brain 
function is
preserved despite changing the brain substrate.

Ok, this is computationalism. I also bet on computationalism, but I
think we must proceed with caution and not forget that we are just
assuming this to be true. Your thought experiment is convincing but 
is
not a proof. You do expose something that I agree with: that
non-computationalism sounds silly.

But does it sound so silly if we propose substituting
a completely different kind of computer, e.g. von
Neumann architecture or one that just records
everything instead of an episodic associative memory,
for the brain. The Church-Turing conjecture says it
can compute the same functions.  But does it
instantiate the same consciousness?  My intuition is
that it would be "conscious" but in some different
way; for example by having the kind of memory you
would have if you could review a movie of any
interval in your past.


I think it would be conscious in the same way if you
replaced neural tissue with a black box that interacted
with the surrounding tissue in the same way. It doesn’t
matter what is in the black box; it could even work by magic.


Then why draw the line at "surrounding tissue"?  Why not
the external environment?


Keep expanding the part that is replaced and you replace the
whole brain and the whole organism.

Are you saying you can't imagine being "conscious" but in a
different way?


I think it is possible but I don’t think it could happen if my
neurones were replaced by a functionally equivalent component.
If it’s functionally equivalent, my behaviour would be unchanged,


I agree with that.  But you've already supposed that functional
equivalence at the behavior level implies preservation of
consciousness.  So what I'm considering is replacements in the
brain far above the neuron level, say at the level of whole
functional groups of the brain, e.g. the visual system, the
auditory system, the memory,...  Would functional equivalence at
the body/brain interface then still imply consciousness equivalence?


I think it would, because I don’t think there are isolated 
consciousness modules in the brain. A large enough change in visual 
experience will be noticed by the subject, who will report that 
things look different. This could only happen if there is a change 
in the input to the language system from the visual system; but we 
have assumed that the output from the visual system is the same, and 
only the consciousness has changed, leading to a contradiction.


But what about internal systems which are independent of 
perception...the very reason Bruno wants to talk about dream states.  
And I'm not necessarily asking that behavior be identical...just that 
the body/brain interface be the same.  The "brain" may be different 
in how it processes input from the eyeballs and hence report verbally 
different perceptions.  In other words, I'm wondering how much 
computationalism constrains consciousness.  My intuition is that there 
could be a lot of difference in consciousness depending on how 
different perceptual inputs are processed and/or merged and how 
internal simulations are handled.  To take a crude example, would it 
matter if the computer-brain was programmed in a functional language 
like LISP, an object-oriented language like Ruby, or a neural 
network? Of course Church-Turing says they all compute the same set 
of functions, but they don't do it the same way
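
A concrete toy version of that contrast, in Python (an illustrative 
sketch only, not anyone's proposed brain architecture): three programs 
that compute the same function -- extensionally identical in the 
Church-Turing sense -- while doing it in very different ways, one of 
them by "just recording everything", like the movie-memory above.

    # Same input-output function, three different ways of computing it:
    def fact_recursive(n):             # functional style
        return 1 if n == 0 else n * fact_recursive(n - 1)

    def fact_iterative(n):             # imperative style
        acc = 1
        for k in range(2, n + 1):
            acc *= k
        return acc

    TABLE = {n: fact_iterative(n) for n in range(100)}
    def fact_recorded(n):              # "just records everything"
        return TABLE[n]

    assert fact_recursive(10) == fact_iterative(10) == fact_recorded(10) == 3628800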


They can do it in the same way. They will not do it in the same way 
with a compiler, but will do it in the same way when you implement an 
interpreter in another interpreter. The extensional CT (in terms of 
which functions are calculated) entails the 

Mind Uploading and NP-completeness

2018-03-25 Thread John Clark
On Sun, Mar 25, 2018 at 11:34 AM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

*> This further might connect with the whole idea of up-loading minds into
> computers. Brains and their states are not just localized states but
> networks, and it could well be that this is not tractable.*


If the brain is a network it is a network with a finite number of vertices
and a finite number of lines connecting those vertices. For uploading
you’re not trying to optimize or do anything with the network except to
just list all the lines and vertices in a network that already exists. You
don’t need to find some exotic new algorithm that can solve NP-complete
problems in polynomial time to do that, you just need a few trillion
nano-machines that can feel around inside a brain and report back on what
they’ve discovered. And as I said before, even if a general class of
problems has been proven to be difficult that just means some specific
examples of it are, it doesn’t mean all or even most are and in fact some
could be quite easy. In general factoring large numbers is hard and 2^1000
is huge but it would be remarkably easy to factor.
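
For instance, a minimal Python sketch (illustrative only): 2^1000 is a
302-digit number, yet trial division disposes of it instantly because its
factor structure is trivial.

    # 2**1000 is a 302-digit number, but factoring it is trivial:
    n = 2**1000
    factors = []
    while n % 2 == 0:
        factors.append(2)
        n //= 2
    assert n == 1 and len(factors) == 1000   # 2**1000 = 2 * 2 * ... * 2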

There is another thing that confuses me, you seem to be implying nobody
should engage in Cryonics unless it has been proven with mathematical
certainty to work, and that doesn’t seem wise to me unless you know of a
reason that being frozen will make me deader than being eaten by worms.

*> As a general rule once these threads get past 100 I tend not to post
> any more. It becomes too annoying to find my way around them.*


I can sympathize, I’ve been complaining about that for years, but the
problem really isn’t 100 posts, it’s that most people refuse to trim anything
when they respond, so you end up with a vast iterated sea of quotes of quotes
of quotes of quotes of quotes of quotes of quotes of quotes and it’s very
hard to tell who said what. It’s frustrating to scroll down through page
after page of quotes only to be rewarded at the end with one cryptic new
line like “that’s not true”.

 John K Clark



Re: How to live forever

2018-03-25 Thread John Clark
On Sun, Mar 25, 2018 at 5:01 AM, Bruno Marchal  wrote:

>> if it was all based on the observation of behavior then what you'd end up
>> with is a scientific theory about intelligence not consciousness.
>
>
> *That is right. But if you agree that consciousness is a form of
> non-provable but also non-doubtable knowledge, and if you agree with the
> standard definition of knowledge in philosophy of mind, then it is a
> theorem that Peano Arithmetic is conscious.*
>
Perhaps rocks are intelligent and they just choose not to display it; if so
then rocks are conscious too. Perhaps Peano Arithmetic is intelligent and
it just chooses not to display it; if so then Peano Arithmetic is conscious
too. Or perhaps neither rocks nor Peano Arithmetic nor you are conscious and
only I am. Perhaps, but I doubt it.

John K Clark



Re: How to live forever

2018-03-25 Thread Lawrence Crowell


On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
>
>
> Yes, and if someone argues that consciousness is not maintained whatever 
> the substitution level is, it is up to them to explain what in the 
> brain+local-environment is not Turing emulable. I see only the “wave packet 
> reduction”, but I don’t see any evidence for that reduction, and it would 
> make quantum mechanics inconsistent (I think) and not usable in cosmology, 
> nor in quantum information science. To believe that the brain is not a 
> “natural” machine is a bit like believing in some magic. Why not, but where 
> is the evidence?
>
>
> Bruno
>

There are a couple of things running around here. One involves brains and 
minds and the other wave function reduction. 

The issue of uploading brains or mapping them runs into the NP-complete 
problem of partitioning graphs. I like to think of this according to tensor 
spaces of states, such as with MERA (multi-scale entanglement renormalization 
ansatz) tensor networks. The AdS_3 example with an H^2 spatial surface is 
seen in the diagram below.

[Figure: MERA tensor network on the H^2 pentagonal tessellation; image not 
preserved in the archive.]

This network has the highest complexity for the pentagonal tessellation for 
these are honeycombs of the groups H3, H4, H5 corresponding to the 
pentagon, dodecahedron, and the 4-dim icosahedron or 120/600 cells. These 
groups will tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface 
embedded in AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 
groups with the Zamolodchikov eigenvalues or masses. 5-fold structures have 
connections to the golden mean, and the Zamolodchikov quaternions are 
representations of the golden mean quaternions. A quantum error correction 
code (QECC) defines a projector onto each of these partitioned elements, 
but (without going into some deep mathematics) this is not computable in a 
root system because there is no Galois field extension, which gives that 
the QECC is not NP-complete.  

This of course is work I am doing with respect to the problem of unitarity 
in quantum black holes and holography. It may have some connection with 
more ordinary quantum mechanics and measurement. The action of a 
measurement is a process whereby a set of quantum states code some other 
set of quantum states, where usually the number of the measuring states is 
far larger than the measured states. The quantum measurement problem may 
have some connection to the above, and further it has some qualitative 
similarity to self-reference. This may then mean the proposition P = NP or 
P =/= NP is not provable, though perhaps specific examples of NP/NP-complete 
problems being outside P can be proven. 

This further might connect with the whole idea of up-loading minds into 
computers. Brains and their states are not just localized states but 
networks, and it could well be that this is not tractable. I paste in below 
a review paper on graph partitioning. This is just one possible theoretical 
obstruction, and if you plan on actually "bending metal" on this the 
problems will doubtless multiply like bunnies in spring. 

As a general rule once these threads get past 100 I tend not to post any 
more. It becomes too annoying to find my way around them.

LC

https://arxiv.org/abs/1311.3144
Recent Advances in Graph Partitioning
Aydin Buluc, Henning Meyerhenke, Ilya Safro, Peter Sanders, Christian Schulz
(Submitted on 13 Nov 2013 (v1), last revised 3 Feb 2015 (this version, v3))

We survey recent trends in practical algorithms for balanced graph 
partitioning together with applications and future research directions.

Subjects: Data Structures and Algorithms (cs.DS); Distributed, Parallel, 
and Cluster Computing (cs.DC); Combinatorics (math.CO)
Cite as: arXiv:1311.3144 [cs.DS] (or arXiv:1311.3144v3 [cs.DS] for this version)
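
The asymmetry between Clark's "just list the network" and the partitioning 
problem surveyed above can be seen in miniature with a brute-force sketch 
(Python; the function names are illustrative, not from the paper):

    import itertools

    def enumerate_graph(adj):
        # "Uploading" in Clark's sense: list every vertex and edge, O(V + E).
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        return list(adj), edges

    def best_balanced_cut(adj):
        # Balanced 2-partition minimizing cut edges, by exhaustive search:
        # C(n, n/2) candidate halves, i.e. exponential in the number of nodes.
        nodes = list(adj)
        _, edges = enumerate_graph(adj)
        best = None
        for half in itertools.combinations(nodes, len(nodes) // 2):
            a = set(half)
            cut = sum(1 for u, v in edges if (u in a) != (v in a))
            if best is None or cut < best[0]:
                best = (cut, a)
        return best

    # Fine for a toy graph; hopeless at the scale of ~10^11 neurons.
    square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
    print(best_balanced_cut(square))   # (2, {1, 2})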




Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 23 Mar 2018, at 02:46, Stathis Papaioannou  wrote:
> 
> 
> On Fri, 23 Mar 2018 at 11:32 am, Bruce Kellett wrote:
> From: Stathis Papaioannou
>> 
>> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett wrote:
>> From: Stathis Papaioannou <stath...@gmail.com>
>>> 
>>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <bhkell...@optusnet.com.au> wrote:
>>> From: Stathis Papaioannou <stath...@gmail.com>
>>> 
 On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <bhkell...@optusnet.com.au> wrote:
 
 If the theory is that if the observable behaviour of the brain is 
 replicated, then consciousness will also be replicated, then the clear 
 corollary is that consciousness can be inferred from observable behaviour. 
 Which implies that I can be as certain of the consciousness of other 
 people as I am of my own. This seems to do some violence to the 1p/1pp/3p 
 distinctions that computationalism relies on so much: only 1p is "certainly 
 certain". But if I can reliably infer consciousness in others, then other 
 things can be as certain as 1p experiences
 
>>> 
 You can’t reliably infer consciousness in others. What you can infer is 
 that whatever consciousness an entity has, it will be preserved if 
 functionally identical 
 substitutions in its brain are made.
>>> 
>>> 
>>> You have that backwards. You can infer consciousness in others, by 
>>> observing their behaviour. The alternative would be solipsism. Now, while 
>>> you can't prove or disprove solipsism in a mathematical sense, you can 
>>> reject solipsism as a useless theory, since it tells you nothing about 
>>> anything. Whereas science acts on the available evidence -- observations of 
>>> behaviour in this case.
>>> 
>>> But we have no evidence that consciousness would be preserved under 
>>> functionally identical substitutions in the brain. Consciousness may be a 
>>> global affair, so functionally equivalence may not be achievable, or even 
>>> definable, within the context of a conscious brain. Can you map the 
>>> functionality of even a single neuron? You are assuming that you can, but 
>>> if that function is global, then you probably can't. There is a fair amount 
>>> of glibness in your assumption that consciousness will be preserved under 
>>> such substitutions.
>>> 
>>> 
 You can’t know if a mouse is conscious, but you can know that if mouse 
 neurones are replaced with functionally identical electronic neurones its 
 behaviour will be the same and any consciousness it may have will also be 
 the same.
>>> 
>>> You cannot know this without actually doing the substitution and observing 
>>> the results.
>>> 
>>> So do you think that it is possible to replace the neurones with 
>>> functionally identical neurones (same output for same input) and the 
>>> mouse’s behaviour would *not* be the same?
>> 
>> Individual neurons may not be the appropriate functional unit.
>> 
>> It seems that you might be close to circularity -- neural functionality 
>> includes consciousness. So if I maintain neural functionality, I will 
>> maintain consciousness.
>> 
>> The only assumption is that the brain is somehow responsible for 
>> consciousness. The argument I am making is that if any part of the brain is 
>> replaced with a functionally identical non-biological part, engineered to 
>> replicate its interactions with the surrounding tissue,  consciousness will 
>> also necessarily be replicated; for if not, an absurd situation would 
>> result, whereby consciousness can radically change but the subject not 
>> notice, or consciousness decouple completely from behaviour, or 
>> consciousness flip on or off with the change of one subatomic particle.
> 
> There still seems to be some circularity there -- consciousness is part of 
> the functionality of the brain, or parts thereof, so maintaining 
> functionality requires maintenance of consciousness.
> 
> By functionality here I specifically mean the observable behaviour of the 
> brain. Consciousness is special in that it is not directly observable as, for 
> example, the potential difference across a cell membrane or the contraction 
> of muscle is.
> 
> One would really need some independent measure of functionality, independent 
> of consciousness. And the claim would be that reproducing local functionality 
> would maintain consciousness. I do not see that that could readily be tested, 
> since mapping all the 

Re: How to live forever

2018-03-25 Thread Stathis Papaioannou
On 25 March 2018 at 20:18, Bruno Marchal  wrote:

>
> On 21 Mar 2018, at 23:49, Stathis Papaioannou  wrote:
>
>
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett 
> wrote:
>
>> From: Stathis Papaioannou 
>>
>>
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett 
>> wrote:
>>
>>> From: Stathis Papaioannou < stath...@gmail.com>
>>>
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>>> bhkell...@optusnet.com.au> wrote:
>>>

 If the theory is that if the observable behaviour of the brain is
 replicated, then consciousness will also be replicated, then the clear
 corollary is that consciousness can be inferred from observable behaviour.
 Which implies that I can be as certain of the consciousness of other people
 as I am of my own. This seems to do some violence to the 1p/1pp/3p
 distinctions that computationalism relies on so much: only 1p is "certainly
 certain". But if I can reliably infer consciousness in others, then other
 things can be as certain as 1p experiences

>>>
>>> You can’t reliably infer consciousness in others. What you can infer is
>>> that whatever consciousness an entity has, it will be preserved if
>>> functionally identical substitutions in its brain are made.
>>>
>>>
>>> You have that backwards. You can infer consciousness in others, by
>>> observing their behaviour. The alternative would be solipsism. Now, while
>>> you can't prove or disprove solipsism in a mathematical sense, you can
>>> reject solipsism as a useless theory, since it tells you nothing about
>>> anything. Whereas science acts on the available evidence -- observations of
>>> behaviour in this case.
>>>
>>> But we have no evidence that consciousness would be preserved under
>>> functionally identical substitutions in the brain. Consciousness may be a
>>> global affair, so functional equivalence may not be achievable, or even
>>> definable, within the context of a conscious brain. Can you map the
>>> functionality of even a single neuron? You are assuming that you can, but
>>> if that function is global, then you probably can't. There is a fair amount
>>> of glibness in your assumption that consciousness will be preserved under
>>> such substitutions.
>>>
>>>
>>>
>>> You can’t know if a mouse is conscious, but you can know that if mouse
>>> neurones are replaced with functionally identical electronic neurones its
>>> behaviour will be the same and any consciousness it may have will also be
>>> the same.
>>>
>>>
>>> You cannot know this without actually doing the substitution and
>>> observing the results.
>>>
>>
>> So do you think that it is possible to replace the neurones with
>> functionally identical neurones (same output for same input) and the
>> mouse’s behaviour would *not* be the same?
>>
>>
>> Individual neurons may not be the appropriate functional unit.
>>
>> It seems that you might be close to circularity -- neural functionality
>> includes consciousness. So if I maintain neural functionality, I will
>> maintain consciousness.
>>
>
> The only assumption is that the brain is somehow responsible for
> consciousness.
>
>
> Consciousness is an attribute of the abstract immaterial person. The
> locally material brain is only responsible for the relative manifestation
> of consciousness. The computations do not create consciousness, but
> channel its possible differentiation. But that should not change your point.
>

But you start off with the assumption that replacing your brain with a
machine will preserve consciousness - "comp". From this assumption, the
rest follows, including the conclusion that there isn't actually a primary
physical brain.
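
What "functionally identical" means operationally is just: same outputs for
same inputs at the interface, whatever the internals. A toy Python sketch
(the two components are hypothetical stand-ins, not neuron models):

    import random

    def bio_neuron(inputs):
        # toy threshold unit standing in for the biological component
        return sum(inputs) > 2.0

    def black_box(inputs):
        # different internals (a rearranged computation), same I/O mapping
        return not (sum(inputs) <= 2.0)

    # The "surrounding tissue" can only probe inputs and outputs,
    # so it cannot tell the two apart:
    for _ in range(1000):
        stim = [random.uniform(0, 1) for _ in range(5)]
        assert bio_neuron(stim) == black_box(stim)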

> The argument I am making is that if any part of the brain is replaced with
> a functionally identical non-biological part, engineered to replicate its
> interactions with the surrounding tissue,  consciousness will also
> necessarily be replicated; for if not, an absurd situation would result,
> whereby consciousness can radically change but the subject not notice, or
> consciousness decouple completely from behaviour, or consciousness flip on
> or off with the change of one subatomic particle.
>
>
> OK,
>
> Bruno
>
>
>
> --
> Stathis Papaioannou

Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 21 Mar 2018, at 23:49, Stathis Papaioannou  wrote:
> 
> 
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett wrote:
> From: Stathis Papaioannou
>> 
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett wrote:
>> From: Stathis Papaioannou <stath...@gmail.com>
>> 
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <bhkell...@optusnet.com.au> wrote:
>>> 
>>> If the theory is that if the observable behaviour of the brain is 
>>> replicated, then consciousness will also be replicated, then the clear 
>>> corollary is that consciousness can be inferred from observable behaviour. 
>>> Which implies that I can be as certain of the consciousness of other people 
>>> as I am of my own. This seems to do some violence to the 1p/1pp/3p 
>>> distinctions that computationalism relies on so much: 
>>> only 1p is "certainly certain". But if I can reliably infer 
>>> consciousness in others, then other things can be as certain as 1p 
>>> experiences
>>> 
>> 
>>> You can’t reliably infer consciousness in others. What you can infer is 
>>> that whatever consciousness an entity has, it will be preserved if 
>>> functionally identical substitutions in its brain are made.
>> 
>> 
>> You have that backwards. You can infer consciousness in others, by observing 
>> their behaviour. The alternative would be solipsism. Now, while you can't 
>> prove or disprove solipsism in a mathematical sense, you can reject 
>> solipsism as a useless theory, since it tells you nothing about anything. 
>> Whereas science acts on the available evidence -- observations of behaviour 
>> in this case.
>> 
>> But we have no evidence that consciousness would be preserved under 
>> functionally identical substitutions in the brain. Consciousness may be a 
>> global affair, so functional equivalence may not be achievable, or even 
>> definable, within the context of a conscious brain. Can you map the 
>> functionality of even a single neuron? You are assuming that you can, but if 
>> that function is global, then you probably can't. There is a fair amount of 
>> glibness in your assumption that consciousness will be preserved under such 
>> substitutions.
>> 
>> 
>> 
>>> You can’t know if a mouse is conscious, but you can know that if mouse 
>>> neurones are replaced with functionally identical electronic neurones its 
>>> behaviour will be the same and any consciousness it may have will also be 
>>> the same.
>> 
>> You cannot know this without actually doing the substitution and observing 
>> the results.
>> 
>> So do you think that it is possible to replace the neurones with 
>> functionally identical neurones (same output for same input) and the mouse’s 
>> behaviour would *not* be the same?
> 
> Individual neurons may not be the appropriate functional unit.
> 
> It seems that you might be close to circularity -- neural functionality 
> includes consciousness. So if I maintain neural functionality, I will 
> maintain consciousness.
> 
> The only assumption is that the brain is somehow responsible for 
> consciousness.

Consciousness is an attribute of the abstract immaterial person. The locally 
material brain is only responsible for the relative manifestation of 
consciousness. The computations do not create consciousness, but channel its 
possible differentiation. But that should not change your point.



> The argument I am making is that if any part of the brain is replaced with a 
> functionally identical non-biological part, engineered to replicate its 
> interactions with the surrounding tissue,  consciousness will also 
> necessarily be replicated; for if not, an absurd situation would result, 
> whereby consciousness can radically change but the subject not notice, or 
> consciousness decouple completely from behaviour, or consciousness flip on or 
> off with the change of one subatomic particle.

OK,

Bruno



> -- 
> Stathis Papaioannou
> 


Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 21 Mar 2018, at 22:56, Brent Meeker  wrote:
> 
> 
> 
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>> 
>> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker wrote:
>> 
>> 
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker wrote:
>>> 
>>> 
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
 
 On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker wrote:
 
 
 On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>> The interesting thing is that you can draw conclusions about 
>> consciousness
>> without being able to define it or detect it.
> I agree.
> 
>> The claim is that IF an entity
>> is conscious THEN its consciousness will be preserved if brain function 
>> is
>> preserved despite changing the brain substrate.
> Ok, this is computationalism. I also bet on computationalism, but I
> think we must proceed with caution and not forget that we are just
> assuming this to be true. Your thought experiment is convincing but is
> not a proof. You do expose something that I agree with: that
> non-computationalism sounds silly.
 
 But does it sound so silly if we propose substituting a completely 
 different kind of computer, e.g. von Neumann architecture or one that 
 just records everything instead of an episodic associative memory, for 
 the brain.  The Church-Turing conjecture says it can compute the same 
 functions.  But does it instantiate the same consciousness?  My intuition 
 is that it would be "conscious" but in some different way; for example by 
 having the kind of memory you would have if you could review a movie of 
 any interval in your past.
 
 I think it would be conscious in the same way if you replaced neural 
 tissue with a black box that interacted with the surrounding tissue in the 
 same way. It doesn’t matter what is in the black box; it could even work 
 by magic.
>>> 
>>> Then why draw the line at "surrounding tissue"?  Why not the external 
>>> environment? 
>>> 
>>> Keep expanding the part that is replaced and you replace the whole brain 
>>> and the whole organism.
>>> 
>>> Are you saying you can't imagine being "conscious" but in a different way?
>>> 
>>> I think it is possible but I don’t think it could happen if my neurones 
>>> were replaced by a functionally equivalent component. If it’s functionally 
>>> equivalent, my behaviour would be unchanged,
>> 
>> I agree with that.  But you've already supposed that functional equivalence 
>> at the behavior level implies preservation of consciousness.  So what I'm 
>> considering is replacements in the brain far above the neuron level, say at 
>> the level of whole functional groups of the brain, e.g. the visual system, 
>> the auditory system, the memory,...  Would functional equivalence at the 
>> body/brain interface then still imply consciousness equivalence?
>> 
>> I think it would, because I don’t think there are isolated consciousness 
>> modules in the brain. A large enough change in visual experience will be 
>> noticed by the subject, who will report that things look different. This 
>> could only happen if there is a change in the input to the language system 
>> from the visual system; but we have assumed that the output from the visual 
>> system is the same, and only the consciousness has changed, leading to a 
>> contradiction.
> 
> But what about internal systems which are independent of perception...the 
> very reason Bruno wants to talk about dream states.  And I'm not necessarily 
> asking that behavior be identical...just that the body/brain interface be the 
> same.  The "brain" may be different in how it processes input from the 
> eyeballs and hence report verbally different perceptions.  In other words, 
> I'm wondering how much computationalism constrains consciousness.  My 
> intuition is that there could be a lot of difference in consciousness 
> depending on how different perceptual inputs are processed and/or merged and 
> how internal simulations are handled.  To take a crude example, would it 
> matter if the computer-brain was programmed in a functional language like 
> LISP, an object-oriented language like Ruby, or a neural network?  Of course 
> Church-Turing says they all compute the same set of functions, but they don't 
> do it the same way

They can do it in the same way. They will not do it in the same way with a 
compiler, but will do it in the same way when you implement an interpreter in 
another interpreter. The extensional CT (in terms of which functions are 
calculated) entails the intensional CT (in terms of which computations can be 
processed). A Babbage machine could emulate a quantum 
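
A toy Python illustration of the interpreter point (a sketch with my own
naming, not Bruno's formalism): running a program through an extra
interpreting layer preserves exactly which computation is carried out,
step for step, at the cost of a slow-down.

    # A toy "machine": a program is a list of (op, arg) pairs acting on an accumulator.
    def run(prog, acc=0):
        for op, arg in prog:
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
        return acc

    # A second interpreter that executes the same program by feeding it,
    # one instruction at a time, to the first: the same sequence of
    # computational steps, one layer slower.
    def run_interpreted(prog, acc=0):
        for step in prog:
            acc = run([step], acc)
        return acc

    p = [("add", 3), ("mul", 7)]
    assert run(p) == run_interpreted(p) == 21   # (0 + 3) * 7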

Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 21 Mar 2018, at 01:35, John Clark  wrote:
> 
> On Tue, Mar 20, 2018 at 7:27 PM, Bruce Kellett wrote:
>  
> You don't need an instrument that can give a clean yes/no answer to the 
> presence of consciousness to develop scientific theories about consciousness. 
> We can start with the observation that all normal healthy humans are 
> conscious, and that rocks and other inert objects are not conscious and work 
> from there to develop a science of consciousness, based on evidence from the 
> observation of behaviour.
> 
> But if it was all based on the observation of behavior then what you'd end up 
> with is a scientific theory about intelligence not consciousness.

That is right. But if you agree that consciousness is a form of non-provable 
but also non-doubtable knowledge, and if you agree with the standard definition 
of knowledge in philosophy of mind, then it is a theorem that Peano Arithmetic 
is conscious. To believe that Robinson Arithmetic is conscious too (plausibly 
even more) is more tricky.

Bruce is right that consciousness will be a global thing, as we can get from 
the first person indeterminacy too, but that does not mean that consciousness 
is not preserved by functional digital substitution made at some level.

Bruno



> 
> John K Clark
>  
> 
> 
> 
