Re: How to live forever

2018-04-07 Thread Bruno Marchal

> On 5 Apr 2018, at 21:52, Brent Meeker  wrote:
> 
> 
> 
> On 4/5/2018 1:55 AM, Bruno Marchal wrote:
>>> On 3 Apr 2018, at 19:40, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 4/3/2018 12:37 AM, Bruno Marchal wrote:
> On 3 Apr 2018, at 07:22, Brent Meeker  wrote:
> 
> 
> 
> On 3/31/2018 1:30 AM, Russell Standish wrote:
>> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
>>> Now, is a jellyfish conscious?
>>> 
>>> I bet they are, but not far away from the dissociative and constant 
>>> arithmetical consciousness (of the universal machines).
>> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>> quite simple automatons, with a distributed neural network, not any
>> brain as such. However, my main reason for disagreeing is that
>> anthropic reasoning leads us to conclude that most species of animal
>> are not conscious. Our most typical animal is a nematode (for instance
>> your favourite - the planarians), but even most insects cannot be
>> conscious either.
>> 
> In these discussions I always wonder what kind of consciousness is meant?
> 
> 1. Perception.  light/dark  acid/base touch...
> 2. Self location relative to things.  Prey/predators
> 3. Self relative to others.  Sex and territory and rivals
> 4. Abstractions.  Number geometries
> 5.  Self reflection.  Theory of minds.  Language
 It is any form of knowledge. It is the raw consciousness of the worm 
 disliking the impression of discomfort when, say, it is eaten by a 
 little mammal.
>>> Or my thermostat not liking that it is too cool in the house.
>> I doubt this. To have consciousness you need some “[]p”, that is, a 
>> representation of yourself
> 
> But what does that mean?  Does it mean an idea of where you are? Memory and 
> the passage of time?  Does it mean a kind of simulation in which you have an 
> avatar?  And why would you suppose a jellyfish or a nematode would have such 
> a representation?

The behaviour. My observation of protozoans and invertebrates (hydra, 
planaria). Attributing consciousness is of course a bit subjective. I might be 
wrong.



> 
>> plus some connection with truth.
> 
> ?? Facts or axioms or proofs?

I was alluding to Theaetetus: knowledge is true belief. So I meant “facts”, 
although that is a double-edged word, as some would say that the existence of 
the moon is a fact, but “existence” is too ambiguous in metaphysics, as 
things can exist in many phenomenological senses.




> 
>> The thermostat has the connection with truth (usually)
> 
> It is connected to temperature and the furnace...so you mean facts.

OK.


> 
>> but lacks the means of representation. Now, the consciousness of the 
>> non-Löbian universal machine is so disconnected that IT can believe it is a 
>> thermostat (like in some salvia reports), but that is still an illusion.
> 
> Having an illusion implies consciousness, at least according to Descartes.


He is completely right on this. No problem.

Bruno



> 
> Brent
> 
>> 
>> Bruno
>> 
>> 
>> 
>>> Brent
>>> 
 It is []t, with any weak Theaetetical reading of the box. It is needed for 
 any of the 5 more special uses that you mention.
 
 Bruno
 
 
 
> Brent
> 
> 
> 


Re: How to live forever

2018-04-05 Thread Brent Meeker



On 4/5/2018 1:55 AM, Bruno Marchal wrote:

On 3 Apr 2018, at 19:40, Brent Meeker  wrote:



On 4/3/2018 12:37 AM, Bruno Marchal wrote:

On 3 Apr 2018, at 07:22, Brent Meeker  wrote:



On 3/31/2018 1:30 AM, Russell Standish wrote:

On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:

Now, is a jellyfish conscious?

I bet they are, but not far away from the dissociative and constant 
arithmetical consciousness (of the universal machines).

As I'm sure you're aware, I disagree with this. Jellyfish appear to be
quite simple automatons, with a distributed neural network, not any
brain as such. However, my main reason for disagreeing is that
anthropic reasoning leads us to conclude that most species of animal
are not conscious. Our most typical animal is a nematode (for instance
your favourite - the planarians), but even most insects cannot be
conscious either.


In these discussions I always wonder what kind of consciousness is meant?

1. Perception.  light/dark  acid/base touch...
2. Self location relative to things.  Prey/predators
3. Self relative to others.  Sex and territory and rivals
4. Abstractions.  Number geometries
5.  Self reflection.  Theory of minds.  Language

It is any form of knowledge. It is the raw consciousness of the worm disliking 
the impression of discomfort when, say, it is eaten by a little mammal.

Or my thermostat not liking that it is too cool in the house.

I doubt this. To have consciousness you need some “[]p”, that is, a 
representation of yourself


But what does that mean?  Does it mean an idea of where you are? Memory 
and the passage of time?  Does it mean a kind of simulation in which you 
have an avatar?  And why would you suppose a jellyfish or a nematode 
would have such a representation?



plus some connection with truth.


?? Facts or axioms or proofs?


The thermostat has the connection with truth (usually)


It is connected to temperature and the furnace...so you mean facts.


but lacks the means of representation. Now, the consciousness of the non-Löbian 
universal machine is so disconnected that IT can believe it is a thermostat 
(like in some salvia reports), but that is still an illusion.


Having an illusion implies consciousness, at least according to Descartes.

Brent



Bruno




Brent


It is []t, with any weak Theaetetical reading of the box. It is needed for any 
of the 5 more special uses that you mention.

Bruno




Brent




Re: How to live forever

2018-04-05 Thread Mindey I.
On 1 April 2018 at 07:56, Telmo Menezes  wrote:

> Hey Mindey,
>
> On Sat, Mar 31, 2018 at 11:12 PM, Mindey I.  wrote:
> > Why not to just define yourself, and then try to re-run yourself? If you
> > have a mathematical definition of your own self, you are already close to
> > living forever as a running process based on that definition.
>
> Easier said than done might be the understatement of the millennium here :)
>

Hey, Telmo. Definitely not easy. However, abstractly, curiosity may boil
down to randomness generators, as a means to explore (computational and
other) universes (or find solutions), and if we have somehow equivalent
generators, it may be that some of us are equivalent in that sense.
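To make that concrete, here is a toy sketch in Python (the function explore,
the seed values and the space size are all invented for illustration, nothing
more): two processes driven by equivalent generators, here simply the same
seed, explore a toy space in exactly the same order, and in that narrow sense
are equivalent.

    import random

    def explore(seed, steps=5, space=100):
        # Visit `steps` points of a toy search space, driven by a seeded PRNG.
        rng = random.Random(seed)
        return [rng.randrange(space) for _ in range(steps)]

    # Two "agents" with equivalent generators trace identical explorations:
    agent_a = explore(seed=42)
    agent_b = explore(seed=42)
    assert agent_a == agent_b  # same generator => same exploration sequence

    # A different generator gives a different trajectory:
    agent_c = explore(seed=7)
    print(agent_a, agent_c)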

> > Personally, when I try to define myself, I bump into memories of a strong
> > sense of curiosity, making me nearly cry with desire to know Everything.
> >
> > Maybe most of us here on the "Everything-List" are like that. Maybe we're
> > equivalent?
>
> I don't know if my curiosity is as strong as yours, I think it's
> impossible to know. I think you are being reductive about yourself, no
> matter how amazing curiosity is.
>

Yes, but how could someone with limited thinking resources understand
everything without being somewhat reductionist?




>
> Telmo.
>
> > On 31 March 2018 at 20:32, Telmo Menezes  wrote:
> >>
> >> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
> >>  wrote:
> >> > You would have to replicate then not only the dynamics of neurons, but
> >> > every biomolecule in the neurons, and don't forget about the
> >> > oligoastrocytes and other glial cells. Many enzymes for instance are
> >> > multi-state systems, say in a simple case where a single amino acid
> >> > residue is phosphorylated or unphosphorylated, and in effect are binary
> >> > switching units. To then make this work you now need to have the brain
> >> > states mapped out down to the molecular level, and further to have their
> >> > combinatorial relationships mapped. Biomolecules also behave in water,
> >> > so you have to model all the water molecules. Given the brain has around
> >> > 10^{25}, or a few moles, of molecules, the number of possible
> >> > combinations might be on the order of 10^{10^{25}}; this is a daunting
> >> > task. Also your computer has to accurately encode the dynamics of
> >> > molecules -- down to the quantum mechanics of their bonds.
> >> >
> >> > This is another way of saying that biological systems, even that of a
> >> > basic prokaryote, are beyond our current abilities to simulate. You
> >> > can't just hand wave away the enormous problems with just simulating a
> >> > bacillus, let alone something like the brain. Now of course one can do
> >> > some simulations to learn about the brain in a model system, but this is
> >> > far from mapping a brain and its conscious state into a computer.
> >>
> >> Well maybe, but this is just you guessing.
> >> Nobody knows the necessary level of detail.
> >>
> >> Telmo.
> >>
> >> > LC
> >> >
> >> >
> >> > On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:
> >> >>
> >> >> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell
> >> >>  wrote:
> >> >>
> >> >>> > Yes, and if you replace the entire brain with technology the peg
> >> >>> > leg is expanded into an entire Pinocchio. Would they really be
> >> >>> > conscious? It is the case as well that so much of our mental
> >> >>> > processing does involve hormone reception and a range of other
> >> >>> > data inputs from other receptors and ligands.
> >> >>
> >> >> I see nothing sacred in hormones, I don't see the slightest reason why
> >> >> they or any neurotransmitter would be especially difficult to simulate
> >> >> through computation, because chemical messengers are not a sign of
> >> >> sophisticated design on nature's part, rather it's an example of
> >> >> Evolution's bungling. If you need to inhibit a nearby neuron there are
> >> >> better ways of sending that signal than launching a GABA molecule like
> >> >> a message in a bottle thrown into the sea and waiting ages for it to
> >> >> diffuse to its random target.
> >> >>
> >> >> I'm not interested in chemicals, only the information they contain; I
> >> >> want the information to get transmitted from cell to cell by the best
> >> >> method and so I would not send smoke signals if I had a fiber optic
> >> >> cable. The information content in each molecular message must be tiny,
> >> >> just a few bits, because only about 60 neurotransmitters such as
> >> >> acetylcholine, norepinephrine and GABA are known; even if the true
> >> >> number is 100 times greater (or a million times for that matter) the
> >> >> information content of each signal must be tiny. 

Re: How to live forever

2018-04-05 Thread Telmo Menezes
> If that is true, the dissociative state can genuinely be identified with the 
> death state, and, as far as we get there (with salvia for example), the 
> experience cannot be memorised at all. In fact, it entails that we truly die, 
> and the one who comes back is truly “someone else”, and this is a common 
> feeling reported by salvia experiencers. It is what I call the MAX Effect 
> (because it is well illustrated in the comic book “Max, the explorer”, by 
> Bara, a Belgian comics author).

It is interesting that "psychedelic culture" has a strong death theme
to it. For example, a lot of people associate LSD experiences with
death, skeletons, and things like that. There is a dark humor to it,
but it's not morbid.


Best,
Telmo.



Re: How to live forever

2018-04-05 Thread Telmo Menezes
On 4 April 2018 at 10:21, Russell Standish  wrote:
> On Tue, Apr 03, 2018 at 08:25:59AM +0200, Telmo Menezes wrote:
>> Hi Russell,
>>
>> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>>  wrote:
>> > On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
>> >>
>> >> Now, is a jellyfish conscious?
>> >>
>> >> I bet they are, but not far away from the dissociative and constant 
>> >> arithmetical consciousness (of the universal machines).
>> >
>> > As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>> > quite simple automatons, with a distributed neural network, not any
>> > brain as such. However, my main reason for disagreeing is that
>> > anthropic reasoning leads us to conclude that most species of animal
>> > are not conscious. Our most typical animal is a nematode (for instance
>> > your favourite - the planarians), but even most insects cannot be
>> > conscious either.
>>
>> I follow your anthropic reasoning, but am not convinced by the
>> implicit 1:1 correspondence between one minute of human consciousness
>> and one minute of insect consciousness. I have no rigorous way of
>> saying this, but my intuition is the following: there is more content
>> in one minute of one than the other. I think it makes sense for the
>> probabilities to be weighted by this content, somehow.
>>
>> Imagine a simple possibility: your anthropic reasoning being weighed
>> by the number of neurons in the given creature. See what I'm getting
>> at?
>>
>
> My argument is simply that your first observer moment (ie "birth
> moment", although not literally at birth) is selected at random from
> all such possible moments. Thereafter, successor OMs are chosen
> according to Born's rule. Ant birth OMs are vastly more numerous than
> human ones. A city of perhaps a million individuals lives under our
> house, and ants are born, live and die far more rapidly than we
> humans.

Ok, I see. I don't buy that first OMs have some special status. In my
view it makes sense to sample each OM from all possible OMs in the
universe. I think I am a block universe kind of person, and I think
that the feeling of continuity that we have in our lives is illusory,
in a sense. It's just that my current OM is a complexification of
other OMs, and that is what memory is. I am OM-centric, not
me-centric.

> To argue that OMs might be weighted somehow is quite close to the
> ASSA, which I've never found convincing, though some argue for it here
> on this list. Why should first observer moments be weighted by neuron number?

What is the ASSA?




Re: How to live forever

2018-04-05 Thread Bruno Marchal

> On 4 Apr 2018, at 22:57, Brent Meeker  wrote:
> 
> 
> 
> On 4/2/2018 10:53 AM, smitra wrote:
>> On 02-04-2018 17:27, Bruno Marchal wrote:
 On 1 Apr 2018, at 00:29, Lawrence Crowell
  wrote:
 
 On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes
 wrote:
 
> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
>  wrote:
>> You would have to replicate then not only the dynamics of neurons, but
>> every biomolecule in the neurons, and don't forget about the
>> oligoastrocytes and other glial cells. Many enzymes for instance are
>> multi-state systems, say in a simple case where a single amino acid
>> residue is phosphorylated or unphosphorylated, and in effect are binary
>> switching units. To then make this work you now need to have the brain
>> states mapped out down to the molecular level, and further to have their
>> combinatorial relationships mapped. Biomolecules also behave in water,
>> so you have to model all the water molecules. Given the brain has around
>> 10^{25}, or a few moles, of molecules, the number of possible
>> combinations might be on the order of 10^{10^{25}}; this is a daunting
>> task. Also your computer has to accurately encode the dynamics of
>> molecules -- down to the quantum mechanics of their bonds.
>>
>> This is another way of saying that biological systems, even that of a
>> basic prokaryote, are beyond our current abilities to simulate. You
>> can't just hand wave away the enormous problems with just simulating a
>> bacillus, let alone something like the brain. Now of course one can do
>> some simulations to learn about the brain in a model system, but this is
>> far from mapping a brain and its conscious state into a computer.
> 
> Well maybe, but this is just you guessing.
> Nobody knows the necessary level of detail.
> 
> Telmo.
 
 Take LSD or psilocybin mushrooms and what enters the brain are
 chemical compounds that interact with neural ligand gates. The
 effect is a change in the perception of consciousness. Then if we
 load coarse grained brain states into a computer that ignores lots
 of fine grained detail, will that result in something different?
 Hell yeah! The idea one could set up a computer neural network,
 upload some data file from a brain scan and that this would be a
 completely conscious person is frankly absurd.
>>> 
>>> This means that you bet on a lower substitution level. I guess others
>>> have already answered this. Note that the proof that physics is a
>>> branch of arithmetic does not put any bound on the graining of the
>>> substitution level. It could even be that your brain is the entire
>>> universe described at the level of superstring theory, that will
>>> change nothing in the conclusion of the reasoning. Yet it would be a
>>> threat for evolution and biology as conceived today.
>>> 
>>> Bruno
>>> 
 LC
 
>> 
>> In experiments involving stimulation/inhibition of certain brain parts using 
>> strong magnetic fields where people look for a few seconds at a screen with 
>> a large number of dots, it was found that significantly more people can 
>> correctly guess the number of dots when the field was switched on. The 
>> conclusion was that under normal circumstances when we are not aware of 
>> lower level information, such as the exact number of dots on the screen, 
>> that information is actually present in the brain but we're not consciously 
>> aware of it. Certain people who have "savant syndrome" can be constantly 
>> aware of such lower level information.
> 
> And not just people
> 
> https://www.npr.org/sections/krulwich/2014/04/16/302943533/the-ultimate-animal-experience-losing-a-memory-quiz-to-a-chimp
> 
> which suggests to me that the part of one's brain that instantiates 
> consciousness competes with other parts and may interfere with their 
> function.  I think everyone experiences this in sports.  Who hasn't missed a 
> shot in tennis by "thinking about it too much"?


A funny illustration:

https://www.youtube.com/watch?v=2Ia9hoEMFOY 


But I think that the cat just missed the minus sign in the third equation :)

Bruno


> 
> Brent
> 
>> 
>> This then suggests to me that the substitution level can be taken at a much 
>> higher level than the level of neurons. In the MWI we would have to be 
>> imagined being spread out over sectors where information such as the number 
>> of dots on a screen is different. So, what you're not aware of isn't fixed 
>> for you, and therefore it cannot possibly define your identity.
>> 
>> Saibal
>> 
> 

Re: How to live forever

2018-04-05 Thread Bruno Marchal

> On 2 Apr 2018, at 19:53, smitra  wrote:
> 
> On 02-04-2018 17:27, Bruno Marchal wrote:
>>> On 1 Apr 2018, at 00:29, Lawrence Crowell
>>>  wrote:
>>> On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes
>>> wrote:
 On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
  wrote:
> You would have to replicate then not only the dynamics of neurons, but
> every biomolecule in the neurons, and don't forget about the
> oligoastrocytes and other glial cells. Many enzymes for instance are
> multi-state systems, say in a simple case where a single amino acid
> residue is phosphorylated or unphosphorylated, and in effect are binary
> switching units. To then make this work you now need to have the brain
> states mapped out down to the molecular level, and further to have their
> combinatorial relationships mapped. Biomolecules also behave in water,
> so you have to model all the water molecules. Given the brain has around
> 10^{25}, or a few moles, of molecules, the number of possible
> combinations might be on the order of 10^{10^{25}}; this is a daunting
> task. Also your computer has to accurately encode the dynamics of
> molecules -- down to the quantum mechanics of their bonds.
>
> This is another way of saying that biological systems, even that of a
> basic prokaryote, are beyond our current abilities to simulate. You
> can't just hand wave away the enormous problems with just simulating a
> bacillus, let alone something like the brain. Now of course one can do
> some simulations to learn about the brain in a model system, but this is
> far from mapping a brain and its conscious state into a computer.
 Well maybe, but this is just you guessing.
 Nobody knows the necessary level of detail.
 Telmo.
>>> Take LSD or psilocybin mushrooms and what enters the brain are
>>> chemical compounds that interact with neural ligand gates. The
>>> effect is a change in the perception of consciousness. Then if we
>>> load coarse grained brain states into a computer that ignores lots
>>> of fine grained detail, will that result in something different?
>>> Hell yeah! The idea one could set up a computer neural network,
>>> upload some data file from a brain scan and that this would be a
>>> completely conscious person is frankly absurd.
>> This means that you bet on a lower substitution level. I guess others
>> have already answered this. Note that the proof that physics is a
>> branch of arithmetic does not put any bound of the graining of the
>> substitution level. It could even be that your brain is the entire
>> universe described at the level of superstring theory, that will
>> change nothing in the conclusion of the reasoning. Yet it would be a
>> threat for evolution and biology as conceived today.
>> Bruno
>>> LC
> 
> In experiments involving stimulation/inhibition of certain brain parts using 
> strong magnetic fields where people look for a few seconds at a screen with a 
> large number of dots, it was found that significantly more people can 
> correctly guess the number of dots when the field was switched on. The 
> conclusion was that under normal circumstances when we are not aware of lower 
> level information, such as the exact number of dots on the screen, that 
> information is actually present in the brain but we're not consciously aware 
> of it. Certain people who have "savant syndrome" can be constantly aware of 
> such lower level information.
> 
> This then suggests to me that the substitution level can be taken at a much 
> higher level than the level of neurons.

I tend to agree with this. At least if we are concerned with short term 
survival. For the long term, it is not easy to evacuate the lower-level 
details, as they could play some unconscious role later in life, though 
perhaps an unimportant one FAPP.



> In the MWI we would have to be imagined being spread out over sectors where 
> information such as the number of dots on a screen is different. So, what 
> you're not aware of isn't fixed for you, and therefore it cannot possibly 
> define your identity.

I agree. The identity is more in the stable beliefs, and the actual 1p identity 
is restricted, non constructively, by the true beliefs. This eventually leads to 
the abandonment of the notion of personal identity, but not in a rationally 
communicable way. Whatever God says, it is for personal use only :)

Bruno



> 
> Saibal
> 

Re: How to live forever

2018-04-05 Thread Bruno Marchal

> On 4 Apr 2018, at 10:21, Russell Standish  wrote:
> 
> On Tue, Apr 03, 2018 at 08:25:59AM +0200, Telmo Menezes wrote:
>> Hi Russell,
>> 
>> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>>  wrote:
>>> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
 
 Now, is a jellyfish conscious?
 
 I bet they are, but not far away from the dissociative and constant 
 arithmetical consciousness (of the universal machines).
>>> 
>>> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>>> quite simple automatons, with a distributed neural network, not any
>>> brain as such. However, my main reason for disagreeing is that
>>> anthropic reasoning leads us to conclude that most species of animal
>>> are not conscious. Our most typical animal is a nematode (for instance
>>> your favourite - the planarians), but even most insects cannot be
>>> conscious either.
>> 
>> I follow your anthropic reasoning, but am not convinced by the
>> implicit 1:1 correspondence between one minute of human consciousness
>> and one minute of insect consciousness. I have no rigorous way of
>> saying this, but my intuition is the following: there is more content
>> in one minute of one than the other. I think it makes sense for the
>> probabilities to be weighted by this content, somehow.
>> 
>> Imagine a simple possibility: your anthropic reasoning being weighed
>> by the number of neurons in the given creature. See what I'm getting
>> at?
>> 
> 
> My argument is simply that your first observer moment (ie "birth
> moment", although not literally at birth) is selected at random from
> all such possible moments. Thereafter, successor OMs are chosen
> according to Born's rule. Ant birth OMs are vastly more numerous than
> human ones. A city of perhaps a million individuals lives under our
> house, and ants are born, live and die far more rapidly than we
> humans. 
> 
> To argue that OMs might be weighted somehow is quite close to the
> ASSA, which I've never found convincing, though some argue for it here
> on this list. Why should first observer moments be weighted by neuron number?

I agree, although I cannot make sense of “first 1p-observer-moment”, nor of a 
selection which would not be relative to the computations leading to such 1p 
observer moment, nor of invoking the Born rule before getting it from the 
“observable” self-referential viewpoints. But, yes, the weighting by neuronal 
number will not work in the absolute, although it does constrain the 
computations for the relative measure.

Best,

Bruno



> 


Re: How to live forever

2018-04-05 Thread Bruno Marchal

> On 3 Apr 2018, at 19:43, Brent Meeker  wrote:
> 
> 
> 
> On 4/3/2018 12:47 AM, Bruno Marchal wrote:
>>> On 31 Mar 2018, at 10:30, Russell Standish  wrote:
>>> 
>>> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
 Now, is a jellyfish conscious?
 
 I bet they are, but not far away from the dissociative and constant 
 arithmetical consciousness (of the universal machines).
>>> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>>> quite simple automatons, with a distributed neural network, not any
>>> brain as such.
>> Yes, like the hydra, but the shape of the brain or the nervous system is not 
>> relevant, and maybe the nervous system is not relevant.
>> 
>> I have no certainty, but the observation of hydra makes me doubt it is only 
>> a colony of cells; they seem to have an integrated personality. I might be 
>> wrong, but I just don’t know, and the math is easier with that assumption, 
>> to be honest.
> 
> Then one's consciousness might be preserved by replacing neurons with 
> bacilli with little integrated personalities. 

If they do the work of the neurons, why not?



> On the other hand I know of someone who did that and it didn't seem to work 
> out well.

Elliptical joke?



> 
>> 
>> 
>> 
>> 
>>> However, my main reason for disagreeing is that
>>> anthropic reasoning leads us to conclude that most species of animal
>>> are not conscious.
>> We have of course already discussed this at length. I am not sure that 
>> Bayesian anthropism makes sense, as we don’t know what the prior could be. 
>> Bacteria might all have an undifferentiated consciousness, so they would all 
>> count for one person, but again, counted relatively to one? 
>> Note that your argument also precludes aliens from being conscious, or from 
>> even existing. In arithmetic, all creatures
> 
> Define “creatures”?

Turing-Church Universal numbers.

Bruno



> 
> Brent
> 
>> are represented in infinities, and the relative probabilities are handled by 
>> the self-reference modes, so we can avoid Turing-thropic or anthropic 
>> methods of reasoning. My critique here is isomorphic to my critique of 
>> doomsday-like arguments.
>> 
>> 
>> 
>>> Our most typical animal is a nematode (for instance
>>> your favourite - the planarians), but even most insects cannot be
>>> conscious either.
>> I am not sure of that. As I try to explain, maybe the brain is only a 
>> filter of consciousness: information hides the universal person, for obvious 
>> evolutionary reasons. Maybe even 0 neurons could lead to “infinite 
>> consciousness”, a bit like in set theory the unary intersection of the empty 
>> set gives the whole universe of all sets.
>> 
>> Bruno
>> 
>> 
>> 
>>> 


Re: How to live forever

2018-04-05 Thread Bruno Marchal

> On 3 Apr 2018, at 19:40, Brent Meeker  wrote:
> 
> 
> 
> On 4/3/2018 12:37 AM, Bruno Marchal wrote:
>>> On 3 Apr 2018, at 07:22, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 3/31/2018 1:30 AM, Russell Standish wrote:
 On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> Now, is a jellyfish conscious?
> 
> I bet they are, but not far away from the dissociative and constant 
> arithmetical consciousness (of the universal machines).
 As I'm sure you're aware, I disagree with this. Jellyfish appear to be
 quite simple automatons, with a distributed neural network, not any
 brain as such. However, my main reason for disagreeing is that
 anthropic reasoning leads us to conclude that most species of animal
 are not conscious. Our most typical animal is a nematode (for instance
 your favourite - the planarians), but even most insects cannot be
 conscious either.
 
>>> In these discussions I always wonder what kind of consciousness is meant?
>>> 
>>> 1. Perception.  light/dark  acid/base touch...
>>> 2. Self location relative to things.  Prey/predators
>>> 3. Self relative to others.  Sex and territory and rivals
>>> 4. Abstractions.  Number geometries
>>> 5.  Self reflection.  Theory of minds.  Language
>> It is any form of knowledge. It is the raw consciousness of the worm 
>> disliking the impression of discomfort when, say, it is eaten by a little 
>> mammal.
> 
> Or my thermostat not liking that it is too cool in the house.

I doubt this. To have consciousness you need some “[]p”, that is, a 
representation of yourself plus some connection with truth. The thermostat has 
the connection with truth (usually) but lacks the means of representation. Now, 
the consciousness of the non-Löbian universal machine is so disconnected that 
IT can believe it is a thermostat (like in some salvia reports), but that is 
still an illusion.
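
A minimal sketch of that notation, just to fix ideas (the formal grouping
below is only a gloss on the standard provability reading, not an addition to
the argument):

\[
\begin{aligned}
\Box p &: \text{the machine proves (believes) } p\\
\Diamond p &:= \lnot\Box\lnot p \quad \text{(consistency of } p\text{)}\\
K p &:= \Box p \land p \quad \text{(Theaetetus: knowledge as true belief)}
\end{aligned}
\]

On this reading the thermostat has the truth side (its state tracks a fact
about the room), but no \(\Box\): nothing in it represents itself.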

Bruno



> 
> Brent
> 
>> It is []t, with any weak Theaetetical reading of the box. It is needed for 
>> any of the 5 more special uses that you mention.
>> 
>> Bruno
>> 
>> 
>> 
>>> Brent
>>> 
>>> 


Re: How to live forever

2018-04-05 Thread Bruno Marchal

> On 3 Apr 2018, at 10:37, Telmo Menezes  wrote:
> 
> On Tue, Apr 3, 2018 at 9:33 AM, Bruno Marchal  wrote:
>> 
>>> On 3 Apr 2018, at 08:25, Telmo Menezes  wrote:
>>> 
>>> Hi Russell,
>>> 
>>> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>>>  wrote:
 On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> 
> Now, is a jellyfish conscious?
> 
> I bet they are, but not far away from the dissociative and constant 
> arithmetical consciousness (of the universal machines).
 
 As I'm sure you're aware, I disagree with this. Jellyfish appear to be
 quite simple automatons, with a distributed neural network, not any
 brain as such. However, my main reason for disagreeing is that
 anthropic reasoning leads us to conclude that most species of animal
 are not conscious. Our most typical animal is a nematode (for instance
 your favourite - the planarians), but even most insects cannot be
 conscious either.
>>> 
>>> I follow your anthropic reasoning, but am not convinced by the
>>> implicit 1:1 correspondence between one minute of human consciousness
>>> and one minute of insect consciousness. I have no rigorous way of
>>> saying this, but my intuition is the following: there is more content
>>> in one minute of one than the other. I think it makes sense for the
>>> probabilities to be weighted by this content, somehow.
>>> 
>>> Imagine a simple possibility: your anthropic reasoning being weighed
>>> by the number of neurons in the given creature. See what I'm getting
>>> at?
>> 
>> 
>> Then the brain seems to be a filter of the natural raw consciousness which 
>> is at the start of the consciousness differentiation. The fewer neurons there 
>> are, the more intense consciousness is from the first person view, but also 
>> the more it is disconnected.
> 
> I agree, as you know.
> I have no scientific argument here, just personal experience.

OK.




> 
>> I have no doubt that this is very counter-intuitive for people having no 
>> memory of a dissociative state of consciousness,
> 
> Yes. What makes this particularly tricky is that such memories are
> from the neighborhood of the experience. The actual thing cannot be
> remembered -- at least I cannot.

Consciousness is a cousin of consistency (<>t). It obeys <>t -> ~[]<>t. 
This means that a consistent machine cannot prove its consistency, in the 
strong sense of “proof”. Franzen said that it cannot even be asserted, by 
which he means that it cannot even be taken as an axiom (this way of talking is 
slightly misleading, as PA can both assert con(‘PA’) and even take it as a new 
axiom, but then it is a new machine which still cannot prove its own 
consistency). So it is really the fixed point of consistency which is not 
provable or rationally communicable: it is the theory PA+ satisfying PA+ = PA + 
con(‘PA+’) which becomes inconsistent (and such a fixed point exists by Gödel's 
diagonal lemma).
Now, by the Completeness theorem (which can be shown to apply to the type of 
machines considered), consistency is equivalent with “there is a reality 
satisfying me” (with me = my beliefs), and that is intuitively coherent: nobody 
can prove that a reality or a God exists.

But consciousness is not consistency. It lacks the 1p feature, and consistency 
is indeed a purely 3p syntactical notion: it means that there is no proof of f 
(no finite sequence of beliefs starting from PA (say), obtained by repeated 
application of the rules of PA, and ending with f).

So consciousness is more like <>p v p, the dual of []p & p, with a non 
constructive “or”. It makes <>t v t trivial from the first person point of 
view, and indeed it is a theorem (of G), but not expressible by the machine. In 
this case, it cannot be taken as an axiom only because it is not definable by 
the machine, at least concerning itself. 
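
For reference, here is a compact restatement of the modal facts used above, in
the provability logic G (this is standard material; only the compilation is
mine):

\[
\begin{aligned}
&\text{L\"ob's axiom:} && \Box(\Box p \to p) \to \Box p\\
&\text{second incompleteness:} && \Diamond t \to \lnot\Box\Diamond t\\
&\text{Theaetetical knowledge:} && K p := \Box p \land p\\
&\text{its dual:} && C p := \Diamond p \lor p
\end{aligned}
\]

The second line follows from the first with p = f: \(\Box\Diamond t\) is
\(\Box\lnot\Box f\), hence \(\Box(\Box f \to f)\), hence \(\Box f\) by Löb,
i.e. \(\lnot\Diamond t\).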

This makes me wonder if the dissociative state is just a state of 
inconsistency, or a state in which we wake up and realise we were consistent 
but unsound, as if we were becoming a “queer reasoner” (in the sense of 
Smullyan’s “Forever Undecided”). In that case, consciousness is definitely based 
on the notion of truth, and any particular instantiation of consciousness can 
work only by forgetting its “true nature”, making the ultimate experience 
unmemorable. Now, with salvia, any theory we make of the experience is 
contradicted by the next experience, but eventually I cycle on those two 
options, with the rather frustrating feeling that the solution here cannot be 
brought back at all, despite the fact that, as you say, from the neighbourhood 
of the experience, we can remember having known the solution! That one is 
stable, and never contradicted by further experience, but the price is that we 
cannot bring it back here. We can only “know” that it is either total madness 
([]f, cul-de-sac world), or that consciousness is (sigma_1) truth itself, which 
indeed 

Re: How to live forever

2018-04-04 Thread Stathis Papaioannou
On Thu, 5 Apr 2018 at 2:58 am, smitra  wrote:

> On 02-04-2018 17:27, Bruno Marchal wrote:
> >> On 1 Apr 2018, at 00:29, Lawrence Crowell
> >>  wrote:
> >>
> >> On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes
> >> wrote:
> >>
> >>> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
> >>>  wrote:
> >>>> You would have to replicate then not only the dynamics of neurons,
> >>>> but every biomolecule in the neurons, and don't forget about the
> >>>> oligoastrocytes and other glial cells. Many enzymes for instance are
> >>>> multi-state systems, say in a simple case where a single amino acid
> >>>> residue is phosphorylated or unphosphorylated, and in effect are
> >>>> binary switching units. To then make this work you now need to have
> >>>> the brain states mapped out down to the molecular level, and further
> >>>> to have their combinatorial relationships mapped. Biomolecules also
> >>>> behave in water, so you have to model all the water molecules. Given
> >>>> the brain has around 10^{25}, or a few moles, of molecules, the number
> >>>> of possible combinations might be on the order of 10^{10^{25}}; this
> >>>> is a daunting task. Also your computer has to accurately encode the
> >>>> dynamics of molecules -- down to the quantum mechanics of their bonds.
> >>>>
> >>>> This is another way of saying that biological systems, even that of a
> >>>> basic prokaryote, are beyond our current abilities to simulate. You
> >>>> can't just hand wave away the enormous problems with just simulating a
> >>>> bacillus, let alone something like the brain. Now of course one can do
> >>>> some simulations to learn about the brain in a model system, but this
> >>>> is far from mapping a brain and its conscious state into a computer.
> >>>
> >>> Well maybe, but this is just you guessing.
> >>> Nobody knows the necessary level of detail.
> >>>
> >>> Telmo.
> >>
> >> Take LSD or psilocybin mushrooms and what enters the brain are
> >> chemical compounds that interact with neural ligand gates. The
> >> effect is a change in the perception of consciousness. Then if we
> >> load coarse grained brain states into a computer that ignores lots
> >> of fine grained detail, will that result in something different?
> >> Hell yeah! The idea one could set up a computer neural network,
> >> upload some data file from a brain scan and that this would be a
> >> completely conscious person is frankly absurd.
> >
> > This means that you bet on a lower substitution level. I guess others
> > have already answered this. Note that the proof that physics is a
> > branch of arithmetic does not put any bound on the graining of the
> > substitution level. It could even be that your brain is the entire
> > universe described at the level of superstring theory, that will
> > change nothing in the conclusion of the reasoning. Yet it would be a
> > threat for evolution and biology as conceived today.
> >
> > Bruno
> >
> >> LC
> >>
>
> In experiments involving stimulation/inhibition of certain brain parts
> using strong magnetic fields where people look for a few seconds at a
> screen with a large number of dots, it was found that significantly more
> people can correctly guess the number of dots when the field was
> switched on. The conclusion was that under normal circumstances when we
> are not aware of lower level information, such as the exact number of
> dots on the screen, that information is actually present in the brain
> but we're not consciously aware of it. Certain people who have "savant
> syndrome" can be constantly aware of such lower level information.
>
> This then suggests to me that the substitution level can be taken at a
> much higher level than the level of neurons. In the MWI we would have to
> be imagined being spread out over sectors where information such as the
> number of dots on a screen is different. So, what you're not aware of
> isn't fixed for you, and therefore it cannot possibly define your
> identity.


Different physical states may lead to the same mental state until some
differentiating physical event occurs, and then the mental states diverge.
For example, the biological and the silicon version may have identical
experiences until they are exposed to a drug or to physical trauma. If, for
some reason, you were unhappy with this difference you could insist that
your brain replacement have further refinements so that it behaves closer
to the original.

Stathis Papaioannou


Re: How to live forever

2018-04-04 Thread Brent Meeker



On 4/2/2018 10:53 AM, smitra wrote:

On 02-04-2018 17:27, Bruno Marchal wrote:

On 1 Apr 2018, at 00:29, Lawrence Crowell wrote:

On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes wrote:

On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell wrote:

You would have to replicate then not only the dynamics of neurons, but every 
biomolecule in the neurons, and don't forget about the oligoastrocytes and 
other glial cells. Many enzymes for instance are multi-state systems, say in 
a simple case where a single amino acid residue is phosphorylated or 
unphosphorylated, and in effect are binary switching units. To then make this 
work you now need to have the brain states mapped out down to the molecular 
level, and further to have their combinatorial relationships mapped. 
Biomolecules also behave in water, so you have to model all the water 
molecules. Given the brain has around 10^{25}, or a few moles, of molecules, 
the number of possible combinations might be on the order of 10^{10^{25}}; 
this is a daunting task. Also your computer has to accurately encode the 
dynamics of molecules -- down to the quantum mechanics of their bonds.

This is another way of saying that biological systems, even that of a basic 
prokaryote, are beyond our current abilities to simulate. You can't just hand 
wave away the enormous problems with just simulating a bacillus, let alone 
something like the brain. Now of course one can do some simulations to learn 
about the brain in a model system, but this is far from mapping a brain and 
its conscious state into a computer.


Well maybe, but this is just you guessing.
Nobody knows the necessary level of detail.

Telmo.


Take LSD or psilocybin mushrooms and what enters the brain are
chemical compounds that interact with neural ligand gates. The
effect is a change in the perception of consciousness. Then if we
load coarse grained brain states into a computer that ignores lots
of fine grained detail, will that result in something different?
Hell yeah! The idea one could set up a computer neural network,
upload some data file from a brain scan and that this would be a
completely conscious person is frankly absurd.


This means that you bet on a lower substitution level. I guess others
have already answered this. Note that the proof that physics is a
branch of arithmetic does not put any bound on the graining of the
substitution level. It could even be that your brain is the entire
universe described at the level of superstring theory, that will
change nothing in the conclusion of the reasoning. Yet it would be a
threat for evolution and biology as conceived today.

Bruno


LC



In experiments involving stimulation/inhibition of certain brain parts 
using strong magnetic fields where people look for a few seconds at a 
screen with a large number of dots, it was found that significantly 
more people can correctly guess the number of dots when the field was 
switched on. The conclusion was that under normal circumstances when 
we are not aware of lower level information, such as the exact number 
of dots on the screen, that information is actually present in the 
brain but we're not consciously aware of it. Certain people who have 
"savant syndrome" can be constantly aware of such lower level 
information.


And not just people

https://www.npr.org/sections/krulwich/2014/04/16/302943533/the-ultimate-animal-experience-losing-a-memory-quiz-to-a-chimp

which suggests to me that the part of one's brain that instantiates 
consciousness competes with other parts and may interfere with their 
function.  I think everyone experiences this in sports.  Who hasn't 
missed a shot in tennis by "thinking about it too much?"


Brent



This then suggests to me that the substitution level can be taken at a 
much higher level than the level of neurons. In the MWI we would have 
to be imagined being spread out over sectors where information such as 
the number of dots on a screen is different. So, what you're not aware 
of isn't fixed for you, and therefore it cannot possibly define your 
identity.


Saibal





Re: How to live forever

2018-04-04 Thread smitra

On 02-04-2018 17:27, Bruno Marchal wrote:

On 1 Apr 2018, at 00:29, Lawrence Crowell wrote:

On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes wrote:

On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell wrote:

You would have to replicate then not only the dynamics of neurons, but every 
biomolecule in the neurons, and don't forget about the oligoastrocytes and 
other glial cells. Many enzymes for instance are multi-state systems, say in 
a simple case where a single amino acid residue is phosphorylated or 
unphosphorylated, and in effect are binary switching units. To then make this 
work you now need to have the brain states mapped out down to the molecular 
level, and further to have their combinatorial relationships mapped. 
Biomolecules also behave in water, so you have to model all the water 
molecules. Given the brain has around 10^{25}, or a few moles, of molecules, 
the number of possible combinations might be on the order of 10^{10^{25}}; 
this is a daunting task. Also your computer has to accurately encode the 
dynamics of molecules -- down to the quantum mechanics of their bonds.

This is another way of saying that biological systems, even that of a basic 
prokaryote, are beyond our current abilities to simulate. You can't just hand 
wave away the enormous problems with just simulating a bacillus, let alone 
something like the brain. Now of course one can do some simulations to learn 
about the brain in a model system, but this is far from mapping a brain and 
its conscious state into a computer.


Well maybe, but this is just you guessing.
Nobody knows the necessary level of detail.

Telmo.


Take LSD or psilocybin mushrooms and what enters the brain are
chemical compounds that interact with neural ligand gates. The
effect is a change in the perception of consciousness. Then if we
load coarse grained brain states into a computer that ignores lots
of fine grained detail, will that result in something different?
Hell yeah! The idea one could set up a computer neural network,
upload some data file from a brain scan and that this would be a
completely conscious person is frankly absurd.


This means that you bet on a lower substitution level. I guess others
have already answered this. Note that the proof that physics is a
branch of arithmetic does not put any bound on the graining of the
substitution level. It could even be that your brain is the entire
universe described at the level of superstring theory, that will
change nothing in the conclusion of the reasoning. Yet it would be a
threat for evolution and biology as conceived today.

Bruno


LC



In experiments involving stimulation/inhibition of certain brain parts 
using strong magnetic fields where people look for a few seconds at a 
screen with a large number of dots, it was found that significantly more 
people can correctly guess the number of dots when the field was 
switched on. The conclusion was that under normal circumstances when we 
are not aware of lower level information, such as the exact number of 
dots ion the screen, that information is actually present in the brain 
but we're not consciously aware of it. Certain people who have "savant 
syndrome" can be constantly aware of such lower level information.


This then suggests to me that the substitution level can be taken at a 
much higher level than the level of neurons. In the MWI we would have to 
be imagined being spread out over sectors where information such as the 
number of dots on a screen is different. So, what you're not aware of 
isn't fixed for you, and therefore it cannot possibly define your 
identity


Saibal



Re: How to live forever

2018-04-04 Thread Russell Standish
On Tue, Apr 03, 2018 at 08:25:59AM +0200, Telmo Menezes wrote:
> Hi Russell,
> 
> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>  wrote:
> > On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> >>
> >> Now, is a jellyfish conscious?
> >>
> >> I bet they are, but not far away from the dissociative and constant 
> >> arithmetical consciousness (of the universal machines).
> >
> > As I'm sure you're aware, I disagree with this. Jellyfish appear to be
> > quite simple automatons, with a distributed neural network, not any
> > brain as such. However, my main reason for disagreeing is that
> > anthropic reasoning leads us to conclude that most species of animal
> > are not conscious. Our most typical animal is a nematode (for instance
> > your favourite - the planarians), but even most insects cannot be
> > conscious either.
> 
> I follow your anthropic reasoning, but am not convinced by the
> implicit 1:1 correspondence between one minute of human consciousness
> and one minute of insect consciousness. I have no rigorous way of
> saying this, but my intuition is the following: there is more content
> in one minute of one than the other. I think it makes sense for the
> probabilities to be weighted by this content, somehow.
> 
> Imagine a simple possibility: your anthropic reasoning being weighed
> by the number of neurons in the given creature. See what I'm getting
> at?
> 

My argument is simply that your first observer moment (ie "birth
moment", although not literally at birth) is selected at random from
all such possible moments. Thereafter, successor OMs are chosen
according to Born's rule. Ant birth OMs are vastly more numerous than
human ones. A city of perhaps a million individuals lives under our
house, and ants are born, live and die far more rapidly than we
humans. 

To argue that OMs might be weighted somehow is quite close to the
ASSA, which I've never found convincing, though some argue for it here
on this list. Why should first observer moments be weighted by neuron number?
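[Editorial note: to make the two positions concrete, a toy sketch. The 
birth-OM counts are made-up placeholders and the neuron counts rough textbook 
figures, not data from this thread:

# Toy comparison of uniform vs neuron-weighted selection of a first OM.
births = {"ant": 10**16, "human": 10**8}    # hypothetical birth-OM counts
neurons = {"ant": 2.5e5, "human": 8.6e10}   # rough neurons per individual

p_uniform = births["human"] / sum(births.values())       # Russell's rule
weighted = {k: births[k] * neurons[k] for k in births}
p_weighted = weighted["human"] / sum(weighted.values())  # Telmo's proposal

print(f"P(human first OM), uniform:         {p_uniform:.2e}")
print(f"P(human first OM), neuron-weighted: {p_weighted:.2e}")

The two rules disagree by several orders of magnitude here, which is exactly 
what the argument turns on.]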

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: How to live forever

2018-04-04 Thread Russell Standish
On Mon, Apr 02, 2018 at 10:22:57PM -0700, Brent Meeker wrote:
> 
> 
> On 3/31/2018 1:30 AM, Russell Standish wrote:
> > On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> > > Now, is a jellyfish conscious?
> > > 
> > > I bet they are, but not far away from the dissociative and constant 
> > > arithmetical consciousness (of the universal machines).
> > As I'm sure you're aware, I disagree with this. Jellyfish appear to be
> > quite simple automatons, with a distributed neural network, not any
> > brain as such. However, my main reason for disagreeing is that
> > anthropic reasoning leads us to conclude that most species of animal
> > are not conscious. Our most typical animal is a nematode (for instance
> > your favourite - the planarians), but even most insects cannot be
> > conscious either.
> > 
> 
> In these discussions I always wonder what kind of consciousness is meant?
> 
> 1. Perception.  light/dark  acid/base touch...
> 2. Self location relative to things.  Prey/predators
> 3. Self relative to others.  Sex and territory and rivals
> 4. Abstractions.  Number geometries
> 5.  Self reflection.  Theory of minds.  Language
> 
> Brent

For my anthropic ant argument, I would have said 2 and above. Mere
perception would not be enough to apply the anthropic argument. A
thermostat is probably 1) above. Of course, others have argued that 5
is necessary for anthropic reasoning - I just remain unconvinced by that.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: How to live forever

2018-04-03 Thread Brent Meeker



On 4/3/2018 12:47 AM, Bruno Marchal wrote:

On 31 Mar 2018, at 10:30, Russell Standish  wrote:

On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:

Now, is a jellyfish conscious?

I bet they are, but not far away from the dissociative and constant 
arithmetical consciousness (of the universal machines).

As I'm sure you're aware, I disagree with this. Jellyfish appear to be
quite simple automatons, with a distributed neural network, not any
brain as such.

Yes, like the hydra, but the shape of the brain or the nervous system is not 
relevant, and maybe the nervous system is not relevant.

I have no certainty, but the observation of hydra makes me doubt it is only a 
colony of cells; they seem to have an integrated personality. I might be wrong, 
but I just don't know, and the math is easier with that assumption, to be 
honest.


Then one's consciousness might be preserved by replacing neurons with 
bacilli with little integrated personalities.  On the other hand I know 
of someone who did that and it didn't seem to work out well.








However, my main reason for disagreeing is that
anthropic reasoning leads us to conclude that most species of animal
are not conscious.

We have of course already discussed this at length. I am not sure that Bayesian 
anthropism makes sense, as we don't know what the prior could be. Bacteria 
might all have an undifferentiated consciousness, so they would all count for 
one person, but again, counted relative to one?
Note that your argument also precludes aliens from being conscious, or even 
from existing. In arithmetic, all creatures


Define "creatures"?

Brent


are represented in infinities, and the relative probabilities are handled by 
the self-reference modes, so we can avoid Turing-thropic or anthropic methods 
of reasoning. My criticism here is isomorphic to my criticism of doomsday-like 
arguments.




Our most typical animal is a nematode (for instance
your favourite - the planarians), but even most insects cannot be
conscious either.

I am not sure of that. As I try to explain, maybe the brain is only a filter 
of consciousness: information hides the universal person, for obvious 
evolutionary reasons. Maybe even 0 neurons could lead to "infinite 
consciousness", a bit like in set theory the unary intersection of the empty 
set gives the whole universe of all sets.

Bruno





--


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: How to live forever

2018-04-03 Thread Brent Meeker



On 4/3/2018 12:37 AM, Bruno Marchal wrote:

On 3 Apr 2018, at 07:22, Brent Meeker  wrote:



On 3/31/2018 1:30 AM, Russell Standish wrote:

On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:

Now, is a jellyfish conscious?

I bet they are, but not far away from the dissociative and constant 
arithmetical consciousness (of the universal machines).

As I'm sure you're aware, I disagree with this. Jellyfish appear to be
quite simple automatons, with a distributed neural network, not any
brain as such. However, my main reason for disagreeing is that
anthropic reasoning leads us to conclude that most species of animal
are not conscious. Our most typical animal is a nematode (for instance
your favourite - the planarians), but even most insects cannot be
conscious either.


In these discussions I always wonder what kind of consciousness is meant?

1. Perception.  light/dark  acid/base touch...
2. Self location relative to things.  Prey/predators
3. Self relative to others.  Sex and territory and rivals
4. Abstractions.  Number geometries
5.  Self reflection.  Theory of minds.  Language

It is any form of knowledge. It is the raw consciousness of the worm disliking 
the lack-of-comfort impression when, say, it is eaten by a little mammal.


Or my thermostat not liking that it is too cool in the house.

Brent


It is []t, with any weak Theaetetical reading of the box. It is needed for any 
of the 5 more special uses that you mention.

Bruno




Brent




Re: How to live forever

2018-04-03 Thread Telmo Menezes
On Tue, Apr 3, 2018 at 9:33 AM, Bruno Marchal  wrote:
>
>> On 3 Apr 2018, at 08:25, Telmo Menezes  wrote:
>>
>> Hi Russell,
>>
>> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>>  wrote:
>>> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:

 Now, is a jellyfish conscious?

 I bet they are, but not far away from the dissociative and constant 
 arithmetical consciousness (of the universal machines).
>>>
>>> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>>> quite simple automatons, with a distributed neural network, not any
>>> brain as such. However, my main reason for disagreeing is that
>>> anthropic reasoning leads us to conclude that most species of animal
>>> are not conscious. Our most typical animal is a nematode (for instance
>>> your favourite - the planarians), but even most insects cannot be
>>> conscious either.
>>
>> I follow your anthropic reasoning, but am not convinced by the
>> implicit 1:1 correspondence between one minute of human consciousness
>> and one minute of insect consciousness. I have no rigorous way of
>> saying this, but my intuition is the following: there is more content
>> in one minute of one than the other. I think it makes sense for the
>> probabilities to be weighted by this content, somehow.
>>
>> Imagine a simple possibility: your anthropic reasoning being weighed
>> by the number of neurons in the given creature. See what I'm getting
>> at?
>
>
> Then the brain seems to be a filter of the natural raw consciousness which is 
> at the start of the consciousness differentiation. The fewer neurons there 
> are, the more intense consciousness is from the first person view, but also 
> the more it is disconnected.

I agree, as you know.
I have no scientific argument here, just personal experience.

> I have no doubt that this is very counter-intuitive for people having no 
> memory of a dissociative state of consciousness,

Yes. What makes this particularly tricky is that such memories are
from the neighborhood of the experience. The actual thing cannot be
remembered -- at least I cannot.

> but then it makes much simpler the explanation of the origin of the 
> physical appearances. Peano arithmetic is less conscious than Robinson 
> arithmetic, even if Robinson arithmetic is of a type of white light so 
> luminous that we can distinguish nothing, not even 0 and 1. It might be the 
> state of "very near inconsistency”. This is not part of what I have published 
> so far, but again, logic, observations and simplicity concur on this.
>
> Bruno
>
>
>
>
>>
>> Cheers,
>> Telmo.
>>
>>>
>>> --
>>>
>>> 
>>> Dr Russell Standish                    Phone 0425 253119 (mobile)
>>> Principal, High Performance Coders
>>> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
>>> Economics, Kingston University         http://www.hpcoders.com.au
>>> 
>>>


Re: How to live forever

2018-04-03 Thread Bruno Marchal

> On 31 Mar 2018, at 10:57, Russell Standish  wrote:
> 
> On Sun, Mar 25, 2018 at 11:01:22AM +0200, Bruno Marchal wrote:
>> 
>>> On 21 Mar 2018, at 01:35, John Clark  wrote:
>>> 
>>> On Tue, Mar 20, 2018 at 7:27 PM, Bruce Kellett wrote:
>>> 
>>> ​>​You don't need an instrument that can give a clean yes/no answer to the 
>>> presence of consciousness to develop scientific theories about 
>>> consciousness. We can start with the observation that all normal healthy 
>>> humans are conscious, and that rocks and other inert objects are not 
>>> conscious and work from there to develop a science of consciousness, based 
>>> on evidence from the observation of behaviour.
>>> 
>>> But if it was all based on the observation of behavior then what you'd end 
>>> up with is a scientific theory about intelligence not consciousness.
>> 
>> That is right. But if you agree that consciousness is a form of
>> non-provable but also non-doubtable knowledge,
> 
> Only with self-awareness. A non self-aware consciousness (if such a
> thing exists) would have no knowledge whatsoever of its consciousness.

Consciousness is always self-awareness or self-knowledge, in the first person 
sense. And it has to be more in the p than in the []p, when defined by []p & p.



> 
>> and if you agree with
>> the standard definition of knowledge in philosophy of mind,
> 
> But isn't the Theaetetus formula more a property of knowledge, rather
> than being equivalent to knowledge itself?

It is an attempt to define knowledge, and it works in arithmetic (thanks to 
incompleteness).



> Couldn't Bp & p describe
> things other than knowledge?


How do you define knowledge? Bp & p obeys the S4 axioms for knowability, which 
is what most philosophers agree on for a definition. To be sure, we can dismiss 
the "4” axiom ([]p->[][]p), which gives the Löbian type of consciousness.



> 
> then it is a theorem that Peano Arithmetic is conscious. To believe that 
> Robinson Arithmetic is conscious too (plausibly even more) is more tricky.
>> 
> 
> The premisses are already quite a stretch :)

The premise is the mechanist hypothesis.

Bruno 




> 
> 
> -- 
> 
> 
> Dr Russell Standish                    Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
> Economics, Kingston University         http://www.hpcoders.com.au
> 
> 


Re: How to live forever

2018-04-03 Thread Bruno Marchal

> On 31 Mar 2018, at 10:30, Russell Standish  wrote:
> 
> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
>> 
>> Now, is a jellyfish conscious? 
>> 
>> I bet they are, but not far away from the dissociative and constant 
>> arithmetical consciousness (of the universal machines).
> 
> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
> quite simple automatons, with a distributed neural network, not any
> brain as such.

Yes, like the hydra, but the shape of the brain or the nervous system is not 
relevant, and maybe the nervous system is not relevant.

I have no certainty, but the observation of hydra makes me doubt it is only a 
colony of cells; they seem to have an integrated personality. I might be wrong, 
but I just don't know, and the math is easier with that assumption, to be 
honest.




> However, my main reason for disagreeing is that
> anthropic reasoning leads us to conclude that most species of animal
> are not conscious.

We have of course already discussed this at length. I am not sure that Bayesian 
anthropism makes sense, as we don't know what the prior could be. Bacteria 
might all have an undifferentiated consciousness, so they would all count for 
one person, but again, counted relative to one?
Note that your argument also precludes aliens from being conscious, or even 
from existing. In arithmetic, all creatures are represented in infinities, and 
the relative probabilities are handled by the self-reference modes, so we can 
avoid Turing-thropic or anthropic methods of reasoning. My criticism here is 
isomorphic to my criticism of doomsday-like arguments.



> Our most typical animal is a nematode (for instance
> your favourite - the planarians), but even most insects cannot be
> conscious either.

I am not sure of that. As I try to explain, maybe the brain is only a filter 
of consciousness: information hides the universal person, for obvious 
evolutionary reasons. Maybe even 0 neurons could lead to "infinite 
consciousness", a bit like in set theory the unary intersection of the empty 
set gives the whole universe of all sets. 
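[Editorial note: the set-theoretic aside made explicit. An intersection over 
the empty family imposes no membership condition, so vacuously everything 
qualifies:

\[ \bigcap \emptyset \;=\; \{\, x : \forall A\,(A \in \emptyset \rightarrow x \in A) \,\} \;=\; V, \]

where V is the (proper) class of all sets.]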

Bruno



> 
> 
> -- 
> 
> 
> Dr Russell Standish                    Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
> Economics, Kingston University         http://www.hpcoders.com.au
> 
> 


Re: How to live forever

2018-04-03 Thread Bruno Marchal

> On 3 Apr 2018, at 07:22, Brent Meeker  wrote:
> 
> 
> 
> On 3/31/2018 1:30 AM, Russell Standish wrote:
>> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
>>> Now, is a jellyfish conscious?
>>> 
>>> I bet they are, but not far away from the dissociative and constant 
>>> arithmetical consciousness (of the universal machines).
>> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>> quite simple automatons, with a distributed neural network, not any
>> brain as such. However, my main reason for disagreeing is that
>> anthropic reasoning leads us to conclude that most species of animal
>> are not conscious. Our most typical animal is a nematode (for instance
>> your favourite - the planarians), but even most insects cannot be
>> conscious either.
>> 
> 
> In these discussions I always wonder what kind of consciousness is meant?
> 
> 1. Perception.  light/dark  acid/base touch...
> 2. Self location relative to things.  Prey/predators
> 3. Self relative to others.  Sex and territory and rivals
> 4. Abstractions.  Number geometries
> 5.  Self reflection.  Theory of minds.  Language

It is any form of knowledge. It is the raw consciousness of the worm disliking 
the lack-of-comfort impression when, say, it is eaten by a little mammal. 
It is []t, with any weak Theaetetical reading of the box. It is needed for any 
of the 5 more special uses that you mention.  

Bruno



> 
> Brent
> 
> 


Re: How to live forever

2018-04-03 Thread Bruno Marchal

> On 3 Apr 2018, at 08:25, Telmo Menezes  wrote:
> 
> Hi Russell,
> 
> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>  wrote:
>> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
>>> 
>>> Now, is a jellyfish conscious?
>>> 
>>> I bet they are, but not far away from the dissociative and constant 
>>> arithmetical consciousness (of the universal machines).
>> 
>> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
>> quite simple automatons, with a distributed neural network, not any
>> brain as such. However, my main reason for disagreeing is that
>> anthropic reasoning leads us to conclude that most species of animal
>> are not conscious. Our most typical animal is a nematode (for instance
>> your favourite - the planarians), but even most insects cannot be
>> conscious either.
> 
> I follow your anthropic reasoning, but am not convinced by the
> implicit 1:1 correspondence between one minute of human consciousness
> and one minute of insect consciousness. I have no rigorous way of
> saying this, but my intuition is the following: there is more content
> in one minute of one than the other. I think it makes sense for the
> probabilities to be weighted by this content, somehow.
> 
> Imagine a simple possibility: your anthropic reasoning being weighed
> by the number of neurons in the given creature. See what I'm getting
> at?


Then the brain seems to be a filter of the natural raw consciousness which is 
at the start of the consciousness differentiation. The fewer neurons there are, 
the more intense consciousness is from the first person view, but also the more 
it is disconnected. I have no doubt that this is very counter-intuitive for 
people having no memory of a dissociative state of consciousness, but then it 
makes much simpler the explanation of the origin of the physical appearances. 
Peano arithmetic is less conscious than Robinson arithmetic, even if Robinson 
arithmetic is of a type of white light so luminous that we can distinguish 
nothing, not even 0 and 1. It might be the state of "very near 
inconsistency”. This is not part of what I have published so far, but again, 
logic, observations and simplicity concur on this.

Bruno




> 
> Cheers,
> Telmo.
> 
>> 
>> --
>> 
>> 
>> Dr Russell Standish                    Phone 0425 253119 (mobile)
>> Principal, High Performance Coders
>> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
>> Economics, Kingston University         http://www.hpcoders.com.au
>> 
>> 


Re: How to live forever

2018-04-03 Thread Telmo Menezes
Hi Russell,

On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
 wrote:
> On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
>>
>> Now, is a jellyfish conscious?
>>
>> I bet they are, but not far away from the dissociative and constant 
>> arithmetical consciousness (of the universal machines).
>
> As I'm sure you're aware, I disagree with this. Jellyfish appear to be
> quite simple automatons, with a distributed neural network, not any
> brain as such. However, my main reason for disagreeing is that
> anthropic reasoning leads us to conclude that most species of animal
> are not conscious. Our most typical animal is a nematode (for instance
> your favourite - the planarians), but even most insects cannot be
> conscious either.

I follow your anthropic reasoning, but am not convinced by the
implicit 1:1 correspondence between one minute of human consciousness
and one minute of insect consciousness. I have no rigorous way of
saying this, but my intuition is the following: there is more content
in one minute of one than the other. I think it makes sense for the
probabilities to be weighted by this content, somehow.

Imagine a simple possibility: your anthropic reasoning being weighed
by the number of neurons in the given creature. See what I'm getting
at?

Cheers,
Telmo.

>
> --
>
> 
> Dr Russell Standish                    Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
> Economics, Kingston University         http://www.hpcoders.com.au
> 
>


Re: How to live forever

2018-04-02 Thread Brent Meeker



On 3/31/2018 1:30 AM, Russell Standish wrote:

On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:

Now, is a jellyfish conscious?

I bet they are, but not far away from the dissociative and constant 
arithmetical consciousness (of the universal machines).

As I'm sure you're aware, I disagree with this. Jellyfish appear to be
quite simple automatons, with a distributed neural network, not any
brain as such. However, my main reason for disagreeing is that
anthropic reasoning leads us to conclude that most species of animal
are not conscious. Our most typical animal is a nematode (for instance
your favourite - the planarians), but even most insects cannot be
conscious either.



In these discussions I always wonder what kind of consciousness is meant?

1. Perception.  light/dark  acid/base touch...
2. Self location relative to things.  Prey/predators
3. Self relative to others.  Sex and territory and rivals
4. Abstractions.  Number geometries
5.  Self reflection.  Theory of minds.  Language

Brent




Re: How to live forever

2018-04-02 Thread Russell Standish
On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> 
> Now, is a jellyfish conscious? 
> 
> I bet they are, but not far away from the dissociative and constant 
> arithmetical consciousness (of the universal machines).

As I'm sure you're aware, I disagree with this. Jellyfish appear to be
quite simple automatons, with a distributed neural network, not any
brain as such. However, my main reason for disagreeing is that
anthropic reasoning leads us to conclude that most species of animal
are not conscious. Our most typical animal is a nematode (for instance
your favourite - the planarians), but even most insects cannot be
conscious either.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: How to live forever

2018-04-02 Thread Russell Standish
On Sun, Mar 25, 2018 at 11:01:22AM +0200, Bruno Marchal wrote:
> 
> > On 21 Mar 2018, at 01:35, John Clark  wrote:
> > 
> > On Tue, Mar 20, 2018 at 7:27 PM, Bruce Kellett wrote:
> >  
> > ​>​You don't need an instrument that can give a clean yes/no answer to the 
> > presence of consciousness to develop scientific theories about 
> > consciousness. We can start with the observation that all normal healthy 
> > humans are conscious, and that rocks and other inert objects are not 
> > conscious and work from there to develop a science of consciousness, based 
> > on evidence from the observation of behaviour.
> > 
> > But if it was all based on the observation of behavior then what you'd end 
> > up with is a scientific theory about intelligence not consciousness.
> 
> That is right. But if you agree that consciousness is a form of
> non-provable but also non-doubtable knowledge,

Only with self-awareness. A non self-aware consciousness (if such a
thing exists) would have no knowledge whatsoever of its consciousness.

> and if you agree with
> the standard definition of knowledge in philosophy of mind,

But isn't the Theaetetus formula more a property of knowledge, rather
than being equivalent to knowledge itself? Couldn't Bp & p describe
things other than knowledge?

> then it is a theorem that Peano Arithmetic is conscious. To believe that 
> Robinson Arithmetic is conscious too (plausibly even more) is more tricky.

The premisses are already quite a stretch :)


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: How to live forever

2018-04-02 Thread Bruno Marchal

> On 31 Mar 2018, at 23:12, Mindey I.  wrote:
> 
> Why not just define yourself, and then try to re-run yourself? If you have 
> a mathematical definition of your own self, you are already close to living 
> forever as a running process based on that definition.

You cannot. It is a theorem in arithmetic: no universal machine can define 
itself. You can bet on a level of substitution, but you can't prove, not even 
experimentally to yourself, that you have found it. All this is detailed in 
some of my papers, or my older long texts.

Note that all computations are executed (in the original mathematical sense of 
Post, Church, Turing, …) in arithmetic. If you believe that a proposition like 
3^3 + 4^3 + 5^3 is equal to 6^3 independently of you verifying that fact or 
not, you are already “there”.
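[Editorial note: the cited identity checks out:

\[ 3^3 + 4^3 + 5^3 \;=\; 27 + 64 + 125 \;=\; 216 \;=\; 6^3. \]
]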

> 
> Personally, when I try to define myself, I bump into memories of strong sense 
> of curiosity, making me nearly cry of desire to know Everything.
> 
> Maybe most of us here on the "Everything-List" are like that. Maybe we're 
> equivalent?


Yes, that is the natural mystical understanding of the average universal 
numbers: there is only one person, but locally disconnected. I often sum it up 
by "we are god playing hide-and-seek with Itself”, but the details of this are 
more demanding in the machine's theology (probability logic).

Bruno




> 
> On 31 March 2018 at 20:32, Telmo Menezes wrote:
> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell wrote:
> > You would have to replicate then not only the dynamics of neurons, but every
> > biomolecule in the neurons, and don't forget about the oligoastrocytes and
> > other glial cells. Many enzymes, for instance, are multi-state systems: in
> > a simple case a single amino acid residue is phosphorylated or
> > unphosphorylated, and such residues are in effect binary switching units. To then make
> > this work you now need to have the brain states mapped out down to the
> > molecular level, and further to have their combinatorial relationships
> > mapped. Biomolecules also behave in water, so you have to model all the
> > water molecules. Given the brain has around 10^{25} or a few moles of
> > molecules the number of possible combinations might be on the order of
> > 10^{10^{25}} this is a daunting task. Also your computer has to accurately
> > encode the dynamics of molecules -- down to the quantum mechanics of their
> > bonds.
> >
> > This is another way of saying that biological systems, even that of a basic
> > prokaryote, are beyond our current abilities to simulate. You can't just
> > hand wave away the enormous problems with just simulating a bacillus, let
> > alone something like the brain. Now of course one can do some simulations to
> > learn about the brain in a model system, but this is far from mapping a
> > brain and its conscious state into a computer.
> 
> Well maybe, but this is just you guessing.
> Nobody knows the necessary level of detail.
> 
> Telmo.
> 
> > LC
> >
> >
> > On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:
> >>
> >> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell wrote:
> >>
> >>> > Yes, and if you replace the entire brain with technology the peg leg is
> >>> > expanded into an entire Pinocchio. Would they really be conscious? It is 
> >>> > the
> >>> > case as well that so much of our mental processing does involve hormone
> >>> > reception and a range of other data inputs from other receptors and 
> >>> > ligands.
> >>
> >> I see nothing sacred in hormones, I don't see the slightest reason why
> >> they or any neurotransmitter would be especially difficult to simulate
> >> through computation, because chemical messengers are not a sign of
> >> sophisticated design on nature's part, rather it's an example of 
> >> Evolution's
> >> bungling. If you need to inhibit a nearby neuron there are better ways of
> >> sending that signal then launching a GABA molecule like a message in a
> >> bottle thrown into the sea and waiting ages for it to diffuse to its random
> >> target.
> >>
> >> I'm not interested in chemicals only the information they contain, I want
> >> the information to get transmitted from cell to cell by the best method and
> >> so I would not send smoke signals if I had a fiber optic cable. The
> >> information content in each molecular message must be tiny, just a few bits
> >> because only about 60 neurotransmitters such as acetylcholine,
> >> norepinephrine and GABA are known, even if the true number is 100 times
> >> greater (or a million times for that matter) the information content of each
> >> signal must be tiny. Also, for the long range stuff, exactly which neuron
> >> receives the signal can not be specified because it relies on a random
> >> process, 
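[Editorial note: the quoted message is cut off above, but its 
information-content point is easy to make concrete. A minimal sketch using the 
counts quoted in the post (60 known transmitters, then 100x and far beyond): 
identifying one messenger type among N carries only log2(N) bits.

from math import log2

# Bits carried by identifying one neurotransmitter type among N.
for n in (60, 60 * 100, 60 * 10**6):
    print(f"{n:>11,} messenger types -> {log2(n):4.1f} bits per signal")

Even sixty million distinct messengers would carry under 26 bits per signal.]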

Re: How to live forever

2018-04-02 Thread John Clark
On Mon, Apr 2, 2018 at 2:42 AM, Stathis Papaioannou 
wrote:

> The problem I was alluding to was how to interface with existing
> biological systems. You could make a camera that exceeds the performance of
> the human eye, but that doesn’t mean you can use it to replace damaged
> eyes. It would be easier to use the camera in a robot than a cyborg.

OK, I agree with that.

 John K Clark



Re: How to live forever

2018-04-02 Thread Stathis Papaioannou
On Sun, 1 Apr 2018 at 9:15 pm, John Clark  wrote:

> On Sun, Apr 1, 2018 at 7:43 AM, Stathis Papaioannou wrote:
>
>>> It's not the wind, it's diffusion that sends the signal on its way, which
>>> means exactly where the signal is sent is NOT critical and the time it
>>> takes to transmit it can't be critical either. So you think technology will
>>> find that duplicating this meager feat will be insuperably difficult. Why?
>>> Sending a signal with a tiny informational content very very slowly and
>>> successfully hitting a HUGE target seems to me to be the easiest part of
>>> the entire thing.
>>
>>
>
> > I don’t think it’s impossible,
>
> Forget impossible, overall mind uploading might be difficult but the part
> of it that you're talking about would not only be possible it would be
> easy.
>
>> but if you want a neural implant to work like the biological
>> equivalent, it must communicate with neurones via neurotransmitters,
>
> You think only a chemical could send that signal, and specifically only
> the particular chemical that Homo Sapiens happens to use will work? WHY?
>
>> it must modulate its responses according to circulating hormones, it
>> must develop new connections and prune old connections, it must upregulate
>> and downregulate its responsiveness to neurotransmitters according to its
>> history
>
>
>  There are two ways to accomplish this:
>
> 1) A neural net computer could do it directly, and that’s the way it would
> probably be done.
>
> 2) A conventional computer with a Von Neumann architecture could simulate
> a neural net computer, that would slow things way down but if
> Nanotechnology was used the increase in the speed of the hardware would be
> so enormous it would still think faster than you or me.
>

The problem I was alluding to was how to interface with existing biological
systems. You could make a camera that exceeds the performance of the human
eye, but that doesn’t mean you can use it to replace damaged eyes. It would
be easier to use the camera in a robot than a cyborg.
-- 
Stathis Papaioannou



Re: How to live forever

2018-04-01 Thread John Clark
On Sun, Apr 1, 2018 at 7:43 AM, Stathis Papaioannou  wrote:

>> It's not the wind, it's diffusion that sends the signal on its way, which
>> means exactly where the signal is sent is NOT critical and the time it
>> takes to transmit it can't be critical either. So you think technology will
>> find that duplicating this meager feat will be insuperably difficult. Why?
>> Sending a signal with a tiny informational content very very slowly and
>> successfully hitting a HUGE target seems to me to be the easiest part of
>> the entire thing.
>
>

> I don’t think it’s impossible,

Forget impossible, overall mind uploading might be difficult but the part
of it that you're talking about would not only be possible it would be
easy.

> but if you want a neural implant to work like the biological
> equivalent, it must communicate with neurones via neurotransmitters,

You think only a chemical could send that signal, and specifically only the
particular chemical that Homo Sapiens happens to use will work? WHY?

> it must modulate its responses according to circulating hormones, it
> must develop new connections and prune old connections, it must upregulate
> and downregulate its responsiveness to neurotransmitters according to its
> history


 There are two ways to accomplish this:

1) A neural net computer could do it directly, and that’s the way it would
probably be done.

2) A conventional computer with a Von Neumann architecture could simulate
a neural net computer, that would slow things way down but if
Nanotechnology was used the increase in the speed of the hardware would be
so enormous it would still think faster than you or me.

 John K Clark

>
>



Re: How to live forever

2018-04-01 Thread John Clark
On Sun, Apr 1, 2018 at 6:42 AM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> *> It may just be a bit like saying because an airplane can fly that it is
> equivalent to a bird.*


I think the nouns "airplane" and "bird" are not equivalent, however the
same adjective "flying" can be used to describe what both those two nouns
are doing, and I think my third grade English teacher was entirely wrong
when she said "I" was a pronoun; it is not, it is an adjective describing
how matter behaves when it is organized in a johnkclarkian way. That's why
I also think it's a bit like saying the 4 my calculator produces when it
adds 2+2 is the same 4 that I produce when I add those two numbers in my
head. I think all that because I don't believe the Sacred Atoms Theory is
sound.

 John K Clark


>



Re: How to live forever

2018-04-01 Thread Stathis Papaioannou
On Sun, 1 Apr 2018 at 8:26 am, John Clark  wrote:

> On Sat, Mar 31, 2018 at 5:50 PM, Stathis Papaioannou 
> wrote:
>
>> The problem is the biological neurones only understand smoke signals.
>
>
> Not so, we already understand that some neurotransmitters send smoke
> signals that excite neurons while others send an inhibitory signal.
>
>> Not only that, but the smoke signals change depending on how the wind is
>> blowing,
>
>
> It's not the wind, it's diffusion that sends the signal on its way, which
> means exactly where the signal is sent is NOT critical and the time it
> takes to transmit it can't be critical either. So you think technology will
> find that duplicating this meager feat will be insuperably difficult. Why?
> Sending a signal with a tiny informational content very very slowly and
> successfully hitting a HUGE target seems to me to be the easiest part of
> the entire thing.
>

I don’t think it’s impossible, but if you want a neural implant to work
like the biological equivalent, it must communicate with neurones via
neurotransmitters, it must modulate its responses according to circulating
hormones, it must develop new connections and prune old connections, it
must upregulate and downregulate its responsiveness to neurotransmitters
according to its history and multiple local factors, and probably other
things that we don’t even know about. So what is needed is not just a
little computer, but complex nanomachinery. It might be easier to simulate
an entire brain than make an implant.

> --
Stathis Papaioannou



Re: How to live forever

2018-04-01 Thread Lawrence Crowell
It may just be a bit like saying because an airplane can fly that it is 
equivalent to a bird.

LC

On Saturday, March 31, 2018 at 6:52:43 PM UTC-6, John Clark wrote:
>
> On Sat, Mar 31, 2018 at 6:29 PM, Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
>> > Take LSD or psilocybin mushrooms and what enters the brain are 
>> > chemical compounds that interact with neural ligand gates. The effect is a 
>> > change in the perception of consciousness.
>>
>
> LSD is not magical, it is a physical substance that changes the chemistry 
> of the brain which in turn changes the way information in the brain is 
> processed, which changes observed intelligent behavior. So the type of 
> intelligence displayed depends on how something physical, like matter, is 
> arranged. As for consciousness LSD may affect that too but I don't know 
> that for a fact because I've never taken LSD, although I have heard people 
> make noises with their mouth that sound like "it changed my consciousness". 
>  
>  
>
>> > The idea one could set up a computer neural network, upload some data 
>> > file from a brain scan and that this would be a completely conscious person 
>> > is frankly absurd.
>
>
> If a computer being conscious is absurd why isn't 3 pounds of grey goo being 
> conscious also absurd? Is there some new law of physics I haven't heard of 
> that says only squishy things can be conscious?
>
>  John K Clark
>
>  
>
>



Re: How to live forever

2018-04-01 Thread Telmo Menezes
On Sun, Apr 1, 2018 at 12:29 AM, Lawrence Crowell
 wrote:
> On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes wrote:
>>
>> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
>>  wrote:
>> > You would have to replicate then not only the dynamics of neurons, but
>> > every
>> > biomolecule in the neurons, and don't forget about the oligoastrocytes
>> > and
>> > other glial cells. Many enzymes, for instance, are multi-state systems: in
>> > a simple case a single amino acid residue is phosphorylated or
>> > unphosphorylated, and such residues are in effect binary switching units. To then make
>> > this work you now need to have the brain states mapped out down to the
>> > molecular level, and further to have their combinatorial relationships
>> > mapped. Biomolecules also behave in water, so you have to model all the
>> > water molecules. Given the brain has around 10^{25} or a few moles of
>> > molecules the number of possible combinations might be on the order of
>> > 10^{10^{25}} this is a daunting task. Also your computer has to
>> > accurately
>> > encode the dynamics of molecules -- down to the quantum mechanics of
>> > their
>> > bonds.
>> >
>> > This is another way of saying that biological systems, even that of a
>> > basic
>> > prokaryote, are beyond our current abilities to simulate. You can't just
>> > hand wave away the enormous problems with just simulating a bacillus,
>> > let
>> > alone something like the brain. Now of course one can do some
>> > simulations to
>> > learn about the brain in a model system, but this is far from mapping a
>> > brain and its conscious state into a computer.
>>
>> Well maybe, but this is just you guessing.
>> Nobody knows the necessary level of detail.
>>
>> Telmo.
>
>
> Take LSD or psilocybin mushrooms and what enters the brain are chemical
> compounds that interact with neural ligand gates. The effect is a change in
> the perception of consciousness. Then if we load coarse grained brain states
> into a computer that ignores lots of fine grained detail, will that result
> in something different? Hell yeah! The idea one could set up a computer
> neural network, upload some data file from a brain scan and that this would
> be a completely conscious person is frankly absurd.

The molecules of LSD, psilocybin, etc have specific binding affinities
to various neuroreceptors. Ok.

This is a very important point and I completely sympathize with you
bringing it up. Current artificial neural network models are extreme
simplifications. We could say that they model a brain that only uses
Glutamate (excitatory signals) and GABA (inhibitory signals). The
other neurotransmitters are responsible for a lot of interesting
stuff, namely learning. It is telling that contemporary ANNs resort to
a blunt, centralized algorithm for that part (backpropagation).
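[Editorial note: a minimal sketch of the simplification described here: a rate 
neuron whose inputs are fixed as excitatory ("glutamate-like", positive 
weights) or inhibitory ("GABA-like", negative weights). Everything below is 
illustrative, not a model from the thread:

import numpy as np

rng = np.random.default_rng(0)
rates_in = rng.random(10)                   # presynaptic firing rates in [0, 1]
signs = np.where(np.arange(10) < 8, 1, -1)  # 8 excitatory, 2 inhibitory inputs
weights = signs * rng.random(10)            # sign fixed by transmitter type

drive = weights @ rates_in                  # net synaptic drive
rate_out = 1 / (1 + np.exp(-drive))         # squashing nonlinearity
print(f"output firing rate: {rate_out:.3f}")

Learning rules, neuromodulators, and everything else Telmo lists are exactly 
what this caricature leaves out.]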

But how much information is contained in a molecule of LSD? And how
much information is necessary to define a receptor site? Imagine a
model similar to what John Holland suggested, where you define these
things as strings of letters. Let's say with an alphabet of four
letters (a, b, c, d) -- because nature seems to like that. So we could
have a drug that is abaccaba and we could have a receptor site that is
abbccacb. Then use edit distance to determine their affinity. How many
letters would we need to model something with the complexity of the
human brain? Not a lot I bet.
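[Editorial note: a sketch of the Holland-style toy model described above, 
using the two strings from the post and plain Levenshtein distance; mapping 
distance to "affinity" as 1/(1+d) is an arbitrary illustrative choice:

from functools import lru_cache

@lru_cache(maxsize=None)
def edit_distance(a: str, b: str) -> int:
    # Classic recursive Levenshtein distance with memoization.
    if not a:
        return len(b)
    if not b:
        return len(a)
    cost = a[-1] != b[-1]
    return min(edit_distance(a[:-1], b) + 1,          # deletion
               edit_distance(a, b[:-1]) + 1,          # insertion
               edit_distance(a[:-1], b[:-1]) + cost)  # substitution

drug, receptor = "abaccaba", "abbccacb"   # the strings from the post
d = edit_distance(drug, receptor)
print(f"edit distance = {d}, toy affinity = {1 / (1 + d):.2f}")

For these two strings the distance comes out to 3 of a possible 8, so the toy 
affinity is 0.25.]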

My point is: there is no reason to assume that we have to go into
extreme detail, such as molecular interactions with water or quantum
states. Maybe we do, but the stuff you allude to could still be way
simpler than that from an information theory perspective.

Telmo.


> LC
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.



Re: How to live forever

2018-04-01 Thread Telmo Menezes
Hey Mindey,

On Sat, Mar 31, 2018 at 11:12 PM, Mindey I.  wrote:
> Why not just define yourself, and then try to re-run yourself? If you
> have a mathematical definition of your own self, you are already close to
> living forever as a running process based on that definition.

Easier said than done might be the understatement of the millennium here :)

> Personally, when I try to define myself, I bump into memories of a strong
> sense of curiosity, making me nearly cry with the desire to know Everything.
>
> Maybe most of us here on the "Everything-List" are like that. Maybe we're
> equivalent?

I don't know if my curiosity is as strong as yours; I think it's
impossible to know. And I think you are being reductive about yourself, no
matter how amazing curiosity is.

Telmo.

> On 31 March 2018 at 20:32, Telmo Menezes  wrote:
>>
>> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
>>  wrote:
>> > You would have to replicate then not only the dynamics of neurons, but
>> > every
>> > biomolecule in the neurons, and don't forget about the oligoastrocytes
>> > and
>> > other glial cells. Many enzymes for instance to multi-state systems, say
>> > in
>> > a simple case where a single amino acid residue of phosphorylated or
>> > unphosphorylated, and in effect are binary switching units. To then make
>> > this work you now need to have the brain states mapped out down to the
>> > molecular level, and further to have their combinatorial relationships
>> > mapped. Biomolecules also behave in water, so you have to model all the
>> > water molecules. Given the brain has around 10^{25} or a few moles of
>> > molecules the number of possible combinations might be on the order of
>> > 10^{10^{25}} this is a daunting task. Also your computer has to
>> > accurately
>> > encode the dynamics of molecules -- down to the quantum mechanics of
>> > their
>> > bonds.
>> >
>> > This is another way of saying that biological systems, even that of a
>> > basic
>> > prokaryote, are beyond our current abilities to simulate. You can't just
>> > hand wave away the enormous problems with just simulating a bacillus,
>> > let
>> > alone something like the brain. Now of course one can do some
>> > simulations to
>> > learn about the brain in a model system, but this is far from mapping a
>> > brain and its conscious state into a computer.
>>
>> Well maybe, but this is just you guessing.
>> Nobody knows the necessary level of detail.
>>
>> Telmo.
>>
>> > LC
>> >
>> >
>> > On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:
>> >>
>> >> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell
>> >>  wrote:
>> >>
>> >>> > Yes, and if you replace the entire brain with technology the peg leg
>> >>> > is
>> >>> > expanded into an entire Pinocchio. Would the really be conscious? It
>> >>> > is the
>> >>> > case as well that so much of our mental processing does involve
>> >>> > hormone
>> >>> > reception and a range of other data inputs from other receptors and
>> >>> > ligands.
>> >>
>> >> I see nothing sacred in hormones, I don't see the slightest reason why
>> >> they or any neurotransmitter would be especially difficult to simulate
>> >> through computation, because chemical messengers are not a sign of
>> >> sophisticated design on nature's part, rather it's an example of
>> >> Evolution's
>> >> bungling. If you need to inhibit a nearby neuron there are better ways
>> >> of
>> >> sending that signal then launching a GABA molecule like a message in a
>> >> bottle thrown into the sea and waiting ages for it to diffuse to its
>> >> random
>> >> target.
>> >>
>> >> I'm not interested in chemicals only the information they contain, I
>> >> want
>> >> the information to get transmitted from cell to cell by the best method
>> >> and
>> >> so I would not send smoke signals if I had a fiber optic cable. The
>> >> information content in each molecular message must be tiny, just a few
>> >> bits
>> >> because only about 60 neurotransmitters such as acetylcholine,
>> >> norepinephrine and GABA are known, even if the true number is 100 times
>> >> greater (or a million times for that matter) the information content
>> >> ofeach
>> >> signal must be tiny. Also, for the long range stuff, exactly which
>> >> neuron
>> >> receives the signal can not be specified because it relies on a random
>> >> process, diffusion. The fact that it's slow as molasses in February
>> >> does not
>> >> add to its charm.
>> >>
>> >> If your job is delivering packages and all the packages are very small
>> >> and
>> >> your boss doesn't care who you give them to as long as it's on the
>> >> correct
>> >> continent and you have until the next ice age to get the work done,
>> >> then you
>> >> don't have a very difficult profession. I see no reason why simulating
>> >> that
>> >> anachronism  would present the slightest difficulty. Artificial neurons
>> >> could be made to 

Re: How to live forever

2018-03-31 Thread John Clark
On Sat, Mar 31, 2018 at 6:29 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> >  Take LSD or psilocybin mushrooms and what enters the brain are chemical
> compounds that interact with neural ligand gates. The effect is a change in
> the perception of consciousness.
>

LSD is not magical, it is a physical substance that changes the chemistry
of the brain, which in turn changes the way information in the brain is
processed, which changes observed intelligent behavior. So the type of
intelligence displayed depends on how something physical, like matter, is
arranged. As for consciousness, LSD may affect that too, but I don't know
that for a fact because I've never taken LSD, although I have heard people
make noises with their mouth that sound like "it changed my consciousness".



> *The idea one could set up a computer neural network, upload some data
> file from a brain scan and that this would be a completely conscious person
> is frankly absurd.*


If a computer being conscious is absurd, why isn't 3 pounds of grey goo
being conscious also absurd? Is there some new law of physics I haven't
heard of, that only squishy things can be conscious?

 John K Clark



Re: How to live forever

2018-03-31 Thread John Clark
On Sat, Mar 31, 2018 at 5:50 PM, Stathis Papaioannou 
wrote:

*> The problem is the biological neurones only understand smoke signals.*


Not so, we already understand that some neurotransmitters send smoke
signals that excite neurons while others send an inhibitory signal.

> Not only that, but the smoke signals change depending on how the wind is
> blowing,


It's not the wind, it's diffusion that sends the signal on its way, which
means exactly where the signal is sent is *NOT* critical, and the time it
takes to transmit it can't be critical either. So you think technology will
find that duplicating this meager feat will be insuperably difficult. Why?
Sending a signal with a tiny informational content very very slowly and
successfully hitting a HUGE target seems to me to be the easiest part of
the entire thing.

John K Clark



Re: How to live forever

2018-03-31 Thread John Clark
On Sat, Mar 31, 2018 at 4:17 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:


> *You would have to replicate then not only the dynamics of neurons, but
> every biomolecule in the neurons,*


Why? Most of the things neurons do have nothing to do with signal
processing; they're just involved in the same dull metabolism stuff that the
cells in your big toe must do just to stay alive and keep operating.


> This is another way of saying that biological systems, even that of a
> basic prokaryote, are beyond our current abilities to simulate.


Well of course it's beyond our current ability to simulate, that's why we
don't have uploaded minds right now! But vitrifying a human brain and
storing it at liquid nitrogen temperatures indefinitely is not beyond our
current abilities.


> *> You can't just hand wave away the enormous problems with just
> simulating a bacillus, let alone something like the brain.*


I would maintain that YOU can not hand wave mind uploading away unless you
can find some fundamental law of physics that proves it could not work, and
all you can come up with is vague speculation that maybe there is something
inside a Black Hole that would prevent it.


> *> Now of course one can do some simulations to learn about the brain in a
> model system, but this is far from mapping a brain and its conscious state
> into a computer.*
>

If it was good enough its intellectual state would be mapped into a
computer, and that's all I'm concerned with, because I'm confident that
consciousness will follow; I am exactly as confident of that as I am
confident that Charles Darwin was right.

John K Clark



Re: How to live forever

2018-03-31 Thread Brent Meeker



On 3/31/2018 3:29 PM, Lawrence Crowell wrote:

On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes wrote:

On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
 wrote:
> You would have to replicate then not only the dynamics of
neurons, but every
> biomolecule in the neurons, and don't forget about the
oligoastrocytes and
> other glial cells. Many enzymes for instance to multi-state
systems, say in
> a simple case where a single amino acid residue of
phosphorylated or
> unphosphorylated, and in effect are binary switching units. To
then make
> this work you now need to have the brain states mapped out down
to the
> molecular level, and further to have their combinatorial
relationships
> mapped. Biomolecules also behave in water, so you have to model
all the
> water molecules. Given the brain has around 10^{25} or a few
moles of
> molecules the number of possible combinations might be on the
order of
> 10^{10^{25}} this is a daunting task. Also your computer has to
accurately
> encode the dynamics of molecules -- down to the quantum
mechanics of their
> bonds.
>
> This is another way of saying that biological systems, even that
of a basic
> prokaryote, are beyond our current abilities to simulate. You
can't just
> hand wave away the enormous problems with just simulating a
bacillus, let
> alone something like the brain. Now of course one can do some
simulations to
> learn about the brain in a model system, but this is far from
mapping a
> brain and its conscious state into a computer.

Well maybe, but this is just you guessing.
Nobody knows the necessary level of detail.

Telmo.


Take LSD or psilocybin mushrooms and what enters the brain are 
chemical compounds that interact with neural ligand gates. The effect 
is a change in the perception of consciousness. Then if we load coarse 
grained brain states into a computer that ignores lots of fine grained 
detail, will that result in something different? Hell yeah! The idea 
one could set up a computer neural network, upload some data file from 
a brain scan and that this would be a completely conscious person is 
frankly absurd.


But would it be a conscious something...very likely, but how would we know?

Brent



Re: How to live forever

2018-03-31 Thread Lawrence Crowell
On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes wrote:
>
> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell 
>  wrote: 
> > You would have to replicate then not only the dynamics of neurons, but 
> every 
> > biomolecule in the neurons, and don't forget about the oligoastrocytes 
> and 
> > other glial cells. Many enzymes for instance to multi-state systems, say 
> in 
> > a simple case where a single amino acid residue of phosphorylated or 
> > unphosphorylated, and in effect are binary switching units. To then make 
> > this work you now need to have the brain states mapped out down to the 
> > molecular level, and further to have their combinatorial relationships 
> > mapped. Biomolecules also behave in water, so you have to model all the 
> > water molecules. Given the brain has around 10^{25} or a few moles of 
> > molecules the number of possible combinations might be on the order of 
> > 10^{10^{25}} this is a daunting task. Also your computer has to 
> accurately 
> > encode the dynamics of molecules -- down to the quantum mechanics of 
> their 
> > bonds. 
> > 
> > This is another way of saying that biological systems, even that of a 
> basic 
> > prokaryote, are beyond our current abilities to simulate. You can't just 
> > hand wave away the enormous problems with just simulating a bacillus, 
> let 
> > alone something like the brain. Now of course one can do some 
> simulations to 
> > learn about the brain in a model system, but this is far from mapping a 
> > brain and its conscious state into a computer. 
>
> Well maybe, but this is just you guessing. 
> Nobody knows the necessary level of detail. 
>
> Telmo. 
>

Take LSD or psilocybin mushrooms and what enters the brain are chemical 
compounds that interact with neural ligand gates. The effect is a change in 
the perception of consciousness. Then if we load coarse grained brain 
states into a computer that ignores lots of fine grained detail, will that 
result in something different? Hell yeah! The idea one could set up a 
computer neural network, upload some data file from a brain scan and that 
this would be a completely conscious person is frankly absurd. 

LC



Re: How to live forever

2018-03-31 Thread Stathis Papaioannou
On Sun, 1 Apr 2018 at 2:31 am, John Clark  wrote:

> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell <
> goldenfieldquaterni...@gmail.com> wrote:
>
> > *Yes, and if you replace the entire brain with technology the peg leg
>> is expanded into an entire Pinocchio. Would the really be conscious? It is
>> the case as well that so much of our mental processing does involve hormone
>> reception and a range of other data inputs from other receptors and
>> ligands.*
>
> I see nothing sacred in hormones, I don't see the slightest reason why
> they or any neurotransmitter would be especially difficult to simulate
> through computation, because chemical messengers are not a sign of
> sophisticated design on nature's part, rather it's an example of
> Evolution's bungling. If you need to inhibit a nearby neuron there are
> better ways of sending that signal then launching a GABA molecule like a
> message in a bottle thrown into the sea and waiting ages for it to diffuse
> to its random target.
>
> I'm not interested in chemicals only the information they contain, I want
> the information to get transmitted from cell to cell by the best method and
> so I would not send smoke signals if I had a fiber optic cable. The
> information content in each molecular message must be tiny, just a few bits
> because only about 60 neurotransmitters such as acetylcholine,
> norepinephrine and GABA are known, even if the true number is 100 times
> greater (or a million times for that matter) the information content ofeach
> signal must be tiny. Also, for the long range stuff, exactly which neuron
> receives the signal can not be specified because it relies on a random
> process, diffusion. The fact that it's slow as molasses in February does
> not add to its charm.
>

The problem is the biological neurones only understand smoke signals. Not
only that, but the smoke signals change depending on how the wind is
blowing, and so does their meaning. So the computer and fibre optic cable
must ultimately communicate via these smoke signals, unless the entire
network is replaced.

> If your job is delivering packages and all the packages are very small and
> your boss doesn't care who you give them to as long as it's on the correct
> continent and you have until the next ice age to get the work done, then
> you don't have a very difficult profession. I see no reason why simulating
> that anachronism  would present the slightest difficulty. Artificial
> neurons could be made to release neurotransmitters as inefficiently as
> natural ones if anybody really wanted to, but it would be pointless when
> there are much faster ways.
>

> Electronics is inherently fast because its electrical signals are sent by
> fast light electrons. The brain also uses some electrical signals, but it
> doesn't use electrons, it uses ions to send signals, the most important are
> chlorine and potassium. A chlorine ion is 65 thousand times as heavy as an
> electron, a potassium ion is even heavier, if you want to talk about gap
> junctions, the ions they use are millions of times more massive than
> electrons. There is no way to get around it, according to the fundamental
> laws of physics, something that has a large mass will be slow, very, very,
> slow.
>
> The great strength biology has over present day electronics is in the
> ability of one neuron to make thousands of connections of various strengths
> with other neurons. However, I see absolutely nothing in the fundamental
> laws of physics that prevents nano machines from doing the same thing, or
> better and MUCH faster.
>
>   John K Clark
>
>
>
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-31 Thread Mindey I.
Why not just define yourself, and then try to re-run yourself? If you
have a mathematical definition of your own self, you are already close to
living forever as a running process based on that definition.
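
Programs, at least, can carry their own complete definition and re-run it: a
quine prints its own source code exactly. The classic Python example (a
standard textbook curiosity, offered here only as an illustration of
self-definition, not of selves):

    # Running this prints the program's own source, which can be run again.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Whether a person admits any analogous finite definition is, of course, the
whole question.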

Personally, when I try to define myself, I bump into memories of a strong
sense of curiosity, making me nearly cry with the desire to know Everything.

Maybe most of us here on the "Everything-List" are like that. Maybe we're
equivalent?

On 31 March 2018 at 20:32, Telmo Menezes  wrote:

> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
>  wrote:
> > You would have to replicate then not only the dynamics of neurons, but
> every
> > biomolecule in the neurons, and don't forget about the oligoastrocytes
> and
> > other glial cells. Many enzymes for instance to multi-state systems, say
> in
> > a simple case where a single amino acid residue of phosphorylated or
> > unphosphorylated, and in effect are binary switching units. To then make
> > this work you now need to have the brain states mapped out down to the
> > molecular level, and further to have their combinatorial relationships
> > mapped. Biomolecules also behave in water, so you have to model all the
> > water molecules. Given the brain has around 10^{25} or a few moles of
> > molecules the number of possible combinations might be on the order of
> > 10^{10^{25}} this is a daunting task. Also your computer has to
> accurately
> > encode the dynamics of molecules -- down to the quantum mechanics of
> their
> > bonds.
> >
> > This is another way of saying that biological systems, even that of a
> basic
> > prokaryote, are beyond our current abilities to simulate. You can't just
> > hand wave away the enormous problems with just simulating a bacillus, let
> > alone something like the brain. Now of course one can do some
> simulations to
> > learn about the brain in a model system, but this is far from mapping a
> > brain and its conscious state into a computer.
>
> Well maybe, but this is just you guessing.
> Nobody knows the necessary level of detail.
>
> Telmo.
>
> > LC
> >
> >
> > On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:
> >>
> >> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell
> >>  wrote:
> >>
> >>> > Yes, and if you replace the entire brain with technology the peg leg
> is
> >>> > expanded into an entire Pinocchio. Would the really be conscious? It
> is the
> >>> > case as well that so much of our mental processing does involve
> hormone
> >>> > reception and a range of other data inputs from other receptors and
> ligands.
> >>
> >> I see nothing sacred in hormones, I don't see the slightest reason why
> >> they or any neurotransmitter would be especially difficult to simulate
> >> through computation, because chemical messengers are not a sign of
> >> sophisticated design on nature's part, rather it's an example of
> Evolution's
> >> bungling. If you need to inhibit a nearby neuron there are better ways
> of
> >> sending that signal then launching a GABA molecule like a message in a
> >> bottle thrown into the sea and waiting ages for it to diffuse to its
> random
> >> target.
> >>
> >> I'm not interested in chemicals only the information they contain, I
> want
> >> the information to get transmitted from cell to cell by the best method
> and
> >> so I would not send smoke signals if I had a fiber optic cable. The
> >> information content in each molecular message must be tiny, just a few
> bits
> >> because only about 60 neurotransmitters such as acetylcholine,
> >> norepinephrine and GABA are known, even if the true number is 100 times
> >> greater (or a million times for that matter) the information content
> ofeach
> >> signal must be tiny. Also, for the long range stuff, exactly which
> neuron
> >> receives the signal can not be specified because it relies on a random
> >> process, diffusion. The fact that it's slow as molasses in February
> does not
> >> add to its charm.
> >>
> >> If your job is delivering packages and all the packages are very small
> and
> >> your boss doesn't care who you give them to as long as it's on the
> correct
> >> continent and you have until the next ice age to get the work done,
> then you
> >> don't have a very difficult profession. I see no reason why simulating
> that
> >> anachronism  would present the slightest difficulty. Artificial neurons
> >> could be made to release neurotransmitters as inefficiently as natural
> ones
> >> if anybody really wanted to, but it would be pointless when there are
> much
> >> faster ways.
> >>
> >> Electronics is inherently fast because its electrical signals are sent
> by
> >> fast light electrons. The brain also uses some electrical signals, but
> it
> >> doesn't use electrons, it uses ions to send signals, the most important
> are
> >> chlorine and potassium. A chlorine ion is 65 thousand times as heavy as
> an
> >> electron, a potassium ion is even 

Re: How to live forever

2018-03-31 Thread Brent Meeker



On 3/31/2018 1:32 PM, Telmo Menezes wrote:

On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
 wrote:

You would have to replicate then not only the dynamics of neurons, but every
biomolecule in the neurons, and don't forget about the oligoastrocytes and
other glial cells. Many enzymes for instance to multi-state systems, say in
a simple case where a single amino acid residue of phosphorylated or
unphosphorylated, and in effect are binary switching units. To then make
this work you now need to have the brain states mapped out down to the
molecular level, and further to have their combinatorial relationships
mapped. Biomolecules also behave in water, so you have to model all the
water molecules. Given the brain has around 10^{25} or a few moles of
molecules the number of possible combinations might be on the order of
10^{10^{25}} this is a daunting task. Also your computer has to accurately
encode the dynamics of molecules -- down to the quantum mechanics of their
bonds.

This is another way of saying that biological systems, even that of a basic
prokaryote, are beyond our current abilities to simulate. You can't just
hand wave away the enormous problems with just simulating a bacillus, let
alone something like the brain. Now of course one can do some simulations to
learn about the brain in a model system, but this is far from mapping a
brain and its conscious state into a computer.

Well maybe, but this is just you guessing.
Nobody knows the necessary level of detail.


Right.  Ever notice how your car seems to have a personality?  I think
we flatter ourselves.  I'll bet robots waay smarter than us will be
commonplace in 50 yrs.  They will debate how much dumber they would have
to be to download into one of those meat computers.


Brent



Re: How to live forever

2018-03-31 Thread Brent Meeker
On the other hand, somebody could just create a smartass email bot and 
as far as we know it would be a perfect upload of John Clark. :-)


Brent

On 3/31/2018 1:17 PM, Lawrence Crowell wrote:
You would have to replicate then not only the dynamics of neurons, but 
every biomolecule in the neurons, and don't forget about the 
oligoastrocytes and other glial cells. Many enzymes for instance to 
multi-state systems, say in a simple case where a single amino acid 
residue of phosphorylated or unphosphorylated, and in effect are 
binary switching units. To then make this work you now need to have 
the brain states mapped out down to the molecular level, and further 
to have their combinatorial relationships mapped. Biomolecules also 
behave in water, so you have to model all the water molecules. Given 
the brain has around 10^{25} or a few moles of molecules the number of 
possible combinations might be on the order of 10^{10^{25}} this is a 
daunting task. Also your computer has to accurately encode the 
dynamics of molecules -- down to the quantum mechanics of their bonds.


This is another way of saying that biological systems, even that of a 
basic prokaryote, are beyond our current abilities to simulate. You 
can't just hand wave away the enormous problems with just simulating a 
bacillus, let alone something like the brain. Now of course one can do 
some simulations to learn about the brain in a model system, but this 
is far from mapping a brain and its conscious state into a computer.


LC

On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:

On Tue, Mar 27, 2018 at 8:24 PM, Lawrence
Crowell  wrote:

> /Yes, and if you replace the entire brain with technology
the peg leg is expanded into an entire Pinocchio. Would the
really be conscious? It is the case as well that so much of
our mental processing does involve hormone reception and a
range of other data inputs from other receptors and ligands./

I see nothing sacred in hormones, I don't see the slightest reason
why they or any neurotransmitter would be especially difficult to
simulate through computation, because chemical messengers are not
a sign of sophisticated design on nature's part, rather it's an
example of Evolution's bungling. If you need to inhibit a
nearby neuron there are better ways of sending that signal then
launching a GABA molecule like a message in a bottle thrown into
the sea and waiting ages for it to diffuse to its random target.

I'm not interested in chemicals only the information they contain,
I want the information to get transmitted from cell to cell by the
best method and so I would not send smoke signals if I had a fiber
optic cable. The information content in each molecular message
must be tiny, just a few bits because only about 60
neurotransmitters such as acetylcholine, norepinephrine and GABA
are known, even if the true number is 100 times greater (or a
million times for that matter) the information content ofeach
signal must be tiny. Also, for the long range stuff, exactly which
neuron receives the signal can not be specified because it relies
on a random process, diffusion. The fact that it's slow as
molasses in February does not add to its charm.

If your job is delivering packages and all the packages are very
small and your boss doesn't care who you give them to as long as
it's on the correct continent and you have until the next ice age
to get the work done, then you don't have a very difficult
profession. I see no reason why simulating that anachronism  would
present the slightest difficulty. Artificial neurons could be made
to release neurotransmitters as inefficiently as natural ones if
anybody really wanted to, but it would be pointless when there are
much faster ways.

Electronics is inherently fast because its electrical signals are
sent by fast light electrons. The brain also uses some electrical
signals, but it doesn't use electrons, it uses ions to send
signals, the most important are chlorine and potassium. A chlorine
ion is 65 thousand times as heavy as an electron, a potassium ion
is even heavier, if you want to talk about gap junctions, the ions
they use are millions of times more massive than electrons. There
is no way to get around it, according to the fundamental laws of
physics, something that has a large mass will be slow, very, very,
slow.

The great strength biology has over present day electronics is in
the ability of one neuron to make thousands of connections of
various strengths with other neurons. However, I see absolutely
nothing in the fundamental laws of physics that prevents nano
machines from doing the same thing, or better and MUCH faster.

  John K Clark



Re: How to live forever

2018-03-31 Thread Telmo Menezes
On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
 wrote:
> You would have to replicate then not only the dynamics of neurons, but every
> biomolecule in the neurons, and don't forget about the oligoastrocytes and
> other glial cells. Many enzymes for instance to multi-state systems, say in
> a simple case where a single amino acid residue of phosphorylated or
> unphosphorylated, and in effect are binary switching units. To then make
> this work you now need to have the brain states mapped out down to the
> molecular level, and further to have their combinatorial relationships
> mapped. Biomolecules also behave in water, so you have to model all the
> water molecules. Given the brain has around 10^{25} or a few moles of
> molecules the number of possible combinations might be on the order of
> 10^{10^{25}} this is a daunting task. Also your computer has to accurately
> encode the dynamics of molecules -- down to the quantum mechanics of their
> bonds.
>
> This is another way of saying that biological systems, even that of a basic
> prokaryote, are beyond our current abilities to simulate. You can't just
> hand wave away the enormous problems with just simulating a bacillus, let
> alone something like the brain. Now of course one can do some simulations to
> learn about the brain in a model system, but this is far from mapping a
> brain and its conscious state into a computer.

Well maybe, but this is just you guessing.
Nobody knows the necessary level of detail.

Telmo.

> LC
>
>
> On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:
>>
>> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell
>>  wrote:
>>
>>> > Yes, and if you replace the entire brain with technology the peg leg is
>>> > expanded into an entire Pinocchio. Would the really be conscious? It is 
>>> > the
>>> > case as well that so much of our mental processing does involve hormone
>>> > reception and a range of other data inputs from other receptors and 
>>> > ligands.
>>
>> I see nothing sacred in hormones, I don't see the slightest reason why
>> they or any neurotransmitter would be especially difficult to simulate
>> through computation, because chemical messengers are not a sign of
>> sophisticated design on nature's part, rather it's an example of Evolution's
>> bungling. If you need to inhibit a nearby neuron there are better ways of
>> sending that signal then launching a GABA molecule like a message in a
>> bottle thrown into the sea and waiting ages for it to diffuse to its random
>> target.
>>
>> I'm not interested in chemicals only the information they contain, I want
>> the information to get transmitted from cell to cell by the best method and
>> so I would not send smoke signals if I had a fiber optic cable. The
>> information content in each molecular message must be tiny, just a few bits
>> because only about 60 neurotransmitters such as acetylcholine,
>> norepinephrine and GABA are known, even if the true number is 100 times
>> greater (or a million times for that matter) the information content ofeach
>> signal must be tiny. Also, for the long range stuff, exactly which neuron
>> receives the signal can not be specified because it relies on a random
>> process, diffusion. The fact that it's slow as molasses in February does not
>> add to its charm.
>>
>> If your job is delivering packages and all the packages are very small and
>> your boss doesn't care who you give them to as long as it's on the correct
>> continent and you have until the next ice age to get the work done, then you
>> don't have a very difficult profession. I see no reason why simulating that
>> anachronism  would present the slightest difficulty. Artificial neurons
>> could be made to release neurotransmitters as inefficiently as natural ones
>> if anybody really wanted to, but it would be pointless when there are much
>> faster ways.
>>
>> Electronics is inherently fast because its electrical signals are sent by
>> fast light electrons. The brain also uses some electrical signals, but it
>> doesn't use electrons, it uses ions to send signals, the most important are
>> chlorine and potassium. A chlorine ion is 65 thousand times as heavy as an
>> electron, a potassium ion is even heavier, if you want to talk about gap
>> junctions, the ions they use are millions of times more massive than
>> electrons. There is no way to get around it, according to the fundamental
>> laws of physics, something that has a large mass will be slow, very, very,
>> slow.
>>
>> The great strength biology has over present day electronics is in the
>> ability of one neuron to make thousands of connections of various strengths
>> with other neurons. However, I see absolutely nothing in the fundamental
>> laws of physics that prevents nano machines from doing the same thing, or
>> better and MUCH faster.
>>
>>   John K Clark
>>
>>>

Re: How to live forever

2018-03-31 Thread Lawrence Crowell
You would have to replicate, then, not only the dynamics of neurons, but
every biomolecule in the neurons, and don't forget about the
oligoastrocytes and other glial cells. Many enzymes, for instance, form
multi-state systems; in a simple case, a single amino acid residue is
phosphorylated or unphosphorylated, and in effect they are binary switching
units. To then make this work you now need to have the brain states mapped
out down to the molecular level, and further to have their combinatorial
relationships mapped. Biomolecules also behave in water, so you have to
model all the water molecules. Given the brain has around 10^{25} molecules
(a few tens of moles), the number of possible combinations might be on the
order of 10^{10^{25}}; this is a daunting task. Also your computer has to
accurately encode the dynamics of molecules -- down to the quantum
mechanics of their bonds.
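
As a back-of-envelope check of that last number (my arithmetic, under the
charitable simplification that each molecule is a binary switch):

    2^{10^{25}} = 10^{\log_{10}(2) \times 10^{25}} \approx 10^{3 \times 10^{24}}

which is indeed of the order of 10^{10^{25}} in the double-exponential sense.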

This is another way of saying that biological systems, even that of a basic 
prokaryote, are beyond our current abilities to simulate. You can't just 
hand wave away the enormous problems with just simulating a bacillus, let 
alone something like the brain. Now of course one can do some simulations 
to learn about the brain in a model system, but this is far from mapping a 
brain and its conscious state into a computer.

LC 

On Saturday, March 31, 2018 at 10:31:56 AM UTC-6, John Clark wrote:
>
> On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell <
> goldenfieldquaterni...@gmail.com > wrote:
>
> > *Yes, and if you replace the entire brain with technology the peg leg 
>> is expanded into an entire Pinocchio. Would the really be conscious? It is 
>> the case as well that so much of our mental processing does involve hormone 
>> reception and a range of other data inputs from other receptors and 
>> ligands.*
>
> I see nothing sacred in hormones, I don't see the slightest reason why 
> they or any neurotransmitter would be especially difficult to simulate 
> through computation, because chemical messengers are not a sign of 
> sophisticated design on nature's part, rather it's an example of 
> Evolution's bungling. If you need to inhibit a nearby neuron there are 
> better ways of sending that signal then launching a GABA molecule like a 
> message in a bottle thrown into the sea and waiting ages for it to diffuse 
> to its random target.
>
> I'm not interested in chemicals only the information they contain, I want 
> the information to get transmitted from cell to cell by the best method and 
> so I would not send smoke signals if I had a fiber optic cable. The 
> information content in each molecular message must be tiny, just a few bits 
> because only about 60 neurotransmitters such as acetylcholine, 
> norepinephrine and GABA are known, even if the true number is 100 times 
> greater (or a million times for that matter) the information content ofeach 
> signal must be tiny. Also, for the long range stuff, exactly which neuron 
> receives the signal can not be specified because it relies on a random 
> process, diffusion. The fact that it's slow as molasses in February does 
> not add to its charm.  
> If your job is delivering packages and all the packages are very small and 
> your boss doesn't care who you give them to as long as it's on the correct 
> continent and you have until the next ice age to get the work done, then 
> you don't have a very difficult profession. I see no reason why simulating 
> that anachronism  would present the slightest difficulty. Artificial 
> neurons could be made to release neurotransmitters as inefficiently as 
> natural ones if anybody really wanted to, but it would be pointless when 
> there are much faster ways. 
>
> Electronics is inherently fast because its electrical signals are sent by 
> fast light electrons. The brain also uses some electrical signals, but it 
> doesn't use electrons, it uses ions to send signals, the most important are 
> chlorine and potassium. A chlorine ion is 65 thousand times as heavy as an 
> electron, a potassium ion is even heavier, if you want to talk about gap 
> junctions, the ions they use are millions of times more massive than 
> electrons. There is no way to get around it, according to the fundamental 
> laws of physics, something that has a large mass will be slow, very, very, 
> slow. 
>
> The great strength biology has over present day electronics is in the 
> ability of one neuron to make thousands of connections of various strengths 
> with other neurons. However, I see absolutely nothing in the fundamental 
> laws of physics that prevents nano machines from doing the same thing, or 
> better and MUCH faster.
>
>   John K Clark
>
>
>>


Re: How to live forever

2018-03-31 Thread John Clark
On Tue, Mar 27, 2018 at 8:24 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> *Yes, and if you replace the entire brain with technology the peg leg is
> expanded into an entire Pinocchio. Would the really be conscious? It is the
> case as well that so much of our mental processing does involve hormone
> reception and a range of other data inputs from other receptors and
> ligands.*

I see nothing sacred in hormones, I don't see the slightest reason why they
or any neurotransmitter would be especially difficult to simulate through
computation, because chemical messengers are not a sign of sophisticated
design on nature's part, rather it's an example of Evolution's bungling. If
you need to inhibit a nearby neuron there are better ways of sending
that signal than launching a GABA molecule like a message in a bottle
thrown into the sea and waiting ages for it to diffuse to its random target.

I'm not interested in chemicals, only the information they contain; I want
the information to get transmitted from cell to cell by the best method and
so I would not send smoke signals if I had a fiber optic cable. The
information content in each molecular message must be tiny, just a few bits,
because only about 60 neurotransmitters such as acetylcholine,
norepinephrine and GABA are known; even if the true number is 100 times
greater (or a million times for that matter) the information content of each
signal must be tiny. Also, for the long range stuff, exactly which neuron
receives the signal can not be specified because it relies on a random
process, diffusion. The fact that it's slow as molasses in February does
not add to its charm.
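
The arithmetic behind "tiny" (my numbers, not part of the original post):
selecting one of n known transmitter types carries at most \log_2(n) bits, so

    \log_2(60) \approx 5.9, \qquad \log_2(6000) \approx 12.6, \qquad
    \log_2(6 \times 10^{7}) \approx 25.8

bits per release event -- even the most generous count stays under four bytes.
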
If your job is delivering packages and all the packages are very small and
your boss doesn't care who you give them to as long as it's on the correct
continent and you have until the next ice age to get the work done, then
you don't have a very difficult profession. I see no reason why simulating
that anachronism would present the slightest difficulty. Artificial
neurons could be made to release neurotransmitters as inefficiently as
natural ones if anybody really wanted to, but it would be pointless when
there are much faster ways.

Electronics is inherently fast because its electrical signals are sent by
fast, light electrons. The brain also uses some electrical signals, but it
doesn't use electrons, it uses ions to send signals; the most important are
chloride and potassium. A chloride ion is 65 thousand times as heavy as an
electron, a potassium ion is even heavier, and if you want to talk about gap
junctions, the ions they use are millions of times more massive than
electrons. There is no way to get around it: according to the fundamental
laws of physics, something that has a large mass will be slow, very, very,
slow.
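
Those mass ratios are easy to check (my arithmetic, taking one atomic mass
unit as roughly 1823 electron masses):

    m_{\mathrm{Cl^-}} / m_e \approx 35.5 \times 1823 \approx 6.5 \times 10^{4},
    \qquad m_{\mathrm{K^+}} / m_e \approx 39.1 \times 1823 \approx 7.1 \times 10^{4}

consistent with the "65 thousand times" figure above.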

The great strength biology has over present day electronics is in the
ability of one neuron to make thousands of connections of various strengths
with other neurons. However, I see absolutely nothing in the fundamental
laws of physics that prevents nano machines from doing the same thing, or
better and MUCH faster.

  John K Clark





Re: How to live forever

2018-03-31 Thread John Clark
On Tue, Mar 27, 2018 at 7:30 PM, Brent Meeker  wrote:

> > I think it very likely that a silicon neuron could work.  But it would
> work like a peg leg works.  You could still get around, but it wouldn't
> grow or self heal or tell you when it was hot or cold.


My first car was a piece of junk, a used 1960 Corvair, but even it could
tell me when it was too hot or too cold; why would an intelligent computer
be unable to do the same thing?

> Your brain can actually grow new neurons and remove dead and
> non-functioning ones, and respond to hormones.

A human brain can't get larger than the skull, but a computer could get as
physically large as it wanted, and it could replace bad chips with good
ones too.

> > For example, you don't consciously notice hormone levels, so if you
> stopped responding to them you wouldn't notice, and nobody else would
> either unless they were monitoring hormone levels in your blood.

You wouldn't notice if you no longer got very scared or very angry and
other people wouldn't notice if you were no longer interested in sex?

 John K Clark



Re: How to live forever

2018-03-30 Thread Lawrence Crowell


On Friday, March 30, 2018 at 3:36:19 AM UTC-6, Bruno Marchal wrote:
>
>
> On 27 Mar 2018, at 23:55, Lawrence Crowell  > wrote:
>
> On Tuesday, March 27, 2018 at 11:06:51 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> On 27 Mar 2018, at 00:57, Lawrence Crowell  
>> wrote:
>>
>> On Monday, March 26, 2018 at 11:01:27 AM UTC-6, Bruno Marchal wrote:
>>>
>>>
>>> On 25 Mar 2018, at 17:34, Lawrence Crowell  
>>> wrote:
>>>
>>>
>>>
>>> On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:


 Yes, and if someone argue that consciousness is not maintained whatever 
 the substitution level is, it is up to them to explain what in the 
 brain+local-evirnoment is not Turing emulable. I see only the “wave packet 
 reduction”, but I don’t see any evidence for that reduction, and it would 
 make Quantum mechanics inconsistent (I think) and not usable in cosmology, 
 nor in quantum information science. To believe that the brain is not a 
 “natural” machine is a bit like believing in some magic. Why not, but 
 where 
 are the evidences?


 Bruno

>>>
>>> There are a couple of things running around here. One involves brains 
>>> and minds and the other wave function reduction. 
>>>
>>> The issue of up loading brains or mapping them come into the problem 
>>> with the NP-complete problem of partitioning graphs. I like to think of 
>>> this according to tensor spaces of states, such as with MERA (multi-scale 
>>> entanglement renormalization ansatz) tensor networks. The AdS_3 example 
>>> with H^2 spatial surface is seen in the diagram below.
>>>
>>>
>>> 
>>>
>>> This network has the highest complexity for the pentagonal tessellation 
>>> for these are honeycombs of the groups H3, H4, H5 corresponding to the 
>>> pentagon, dodecahedron, and the 4-dim icosadedron or 120/600 cells. These 
>>> groups will tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface 
>>> embedded in AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 
>>> groups with the Zamolodchikov eigenvalues or masses. 5-fold structures have 
>>> connections to the golden mean, and the Zamolodchikov quaternions are 
>>> representations of the golden mean quaternions. A quantum error correction 
>>> code (QECC) defines a projector onto each of these partitioned elements, 
>>> but (without going into some deep mathematics) this is not computable in a 
>>> root system because there is no Galois field extension, which gives that 
>>> the QECC is not NP-complete.  
>>>
>>> This of course is work I am doing with respect to the problem of 
>>> unitarity in quantum black holes and holography. It may have some 
>>> connection with more ordinary quantum mechanics and measurement. The action 
>>> of a measurement is a process whereby a set of quantum states code some 
>>> other set of quantum states, where usually the number of the measuring 
>>> states is far larger than the measured states. The quantum measurement 
>>> problem may have some connection to the above, and further it has some 
>>> qualitative similarity to self-reference. This may then mean the 
>>> proposition P = NP or P =/= NP is not provable, but where maybe specific 
>>> examples of NP/NP-complete algorithms as not-P can be proven. 
>>>
>>> This further might connect with the whole idea of up-loading minds into 
>>> computers. Brains and their states are not just localized states but 
>>> networks, and it could well be that this is not tractable. I paste in below 
>>> a review paper on graph partitioning. This is just one possible theoretical 
>>> obstruction, and if you plan on actually "bending metal" on this the 
>>> problems will doubtless multiply like bunnies in spring. 
>>>
>>> As a general rule once these threads gets past 100 I tend not to post 
>>> any more. It becomes to annoying to find my way around them.
>>>
>>>
>>>
>>>
>>> That is interesting, and might even help later to recover notions like 
>>> space, but to keep the distinction between the communicable and the non 
>>> communicable part of the machines modes, which is needed for the mind-body 
>>> problème, we have to extracted such structure in some special way, using 
>>> the mathematics of self-reference. I am unfortunately not that far! It 
>>> might take some generations of mathematicians.
>>>
>>> Bruno
>>>
>>
>> The non-communicating regions can be in a quantum entanglement.
>>
>>
>>
>> The non-communicating/proving/Justifiable-believing regions is 
>> axiomatised by G*. The quantum entanglement appears (well their shadows at 
>> least) in the material mode of the true self-references (see my papers for 
>> the mathematical precision).
>>
>> I do not assume a physical reality. I assume only very elementary 
>> 

Re: How to live forever

2018-03-30 Thread Bruno Marchal

> On 27 Mar 2018, at 23:55, Lawrence Crowell  
> wrote:
> 
> On Tuesday, March 27, 2018 at 11:06:51 AM UTC-6, Bruno Marchal wrote:
> 
>> On 27 Mar 2018, at 00:57, Lawrence Crowell > > wrote:
>> 
>> On Monday, March 26, 2018 at 11:01:27 AM UTC-6, Bruno Marchal wrote:
>> 
>>> On 25 Mar 2018, at 17:34, Lawrence Crowell > 
>>> wrote:
>>> 
>>> 
>>> 
>>> On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
>>> 
 Yes, and if someone argue that consciousness is not maintained whatever 
 the substitution level is, it is up to them to explain what in the 
 brain+local-evirnoment is not Turing emulable. I see only the “wave packet 
 reduction”, but I don’t see any evidence for that reduction, and it would 
 make Quantum mechanics inconsistent (I think) and not usable in cosmology, 
 nor in quantum information science. To believe that the brain is not a 
 “natural” machine is a bit like believing in some magic. Why not, but 
 where are the evidences?
>>> 
>>> Bruno
>>> 
>>> There are a couple of things running around here. One involves brains and 
>>> minds and the other wave function reduction. 
>>> 
>>> The issue of up loading brains or mapping them come into the problem with 
>>> the NP-complete problem of partitioning graphs. I like to think of this 
>>> according to tensor spaces of states, such as with MERA (multi-scale 
>>> entanglement renormalization ansatz) tensor networks. The AdS_3 example 
>>> with H^2 spatial surface is seen in the diagram below.
>>> 
>>>  
>>> 
>>> 
>>> This network has the highest complexity for the pentagonal tessellation for 
>>> these are honeycombs of the groups H3, H4, H5 corresponding to the 
>>> pentagon, dodecahedron, and the 4-dim icosadedron or 120/600 cells. These 
>>> groups will tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface 
>>> embedded in AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 
>>> groups with the Zamolodchikov eigenvalues or masses. 5-fold structures have 
>>> connections to the golden mean, and the Zamolodchikov quaternions are 
>>> representations of the golden mean quaternions. A quantum error correction 
>>> code (QECC) defines a projector onto each of these partitioned elements, 
>>> but (without going into some deep mathematics) this is not computable in a 
>>> root system because there is no Galois field extension, which gives that 
>>> the QECC is not NP-complete.  
>>> 
>>> This of course is work I am doing with respect to the problem of unitarity 
>>> in quantum black holes and holography. It may have some connection with 
>>> more ordinary quantum mechanics and measurement. The action of a 
>>> measurement is a process whereby a set of quantum states code some other 
>>> set of quantum states, where usually the number of the measuring states is 
>>> far larger than the measured states. The quantum measurement problem may 
>>> have some connection to the above, and further it has some qualitative 
>>> similarity to self-reference. This may then mean the proposition P = NP or 
>>> P =/= NP is not provable, but where maybe specific examples of 
>>> NP/NP-complete algorithms as not-P can be proven. 
>>> 
>>> This further might connect with the whole idea of up-loading minds into 
>>> computers. Brains and their states are not just localized states but 
>>> networks, and it could well be that this is not tractable. I paste in below 
>>> a review paper on graph partitioning. This is just one possible theoretical 
>>> obstruction, and if you plan on actually "bending metal" on this the 
>>> problems will doubtless multiply like bunnies in spring. 
>>> 
>>> As a general rule once these threads gets past 100 I tend not to post any 
>>> more. It becomes to annoying to find my way around them.
>> 
>> 
>> 
>> That is interesting, and might even help later to recover notions like 
>> space, but to keep the distinction between the communicable and the non 
>> communicable part of the machines modes, which is needed for the mind-body 
>> problème, we have to extracted such structure in some special way, using the 
>> mathematics of self-reference. I am unfortunately not that far! It might 
>> take some generations of mathematicians.
>> 
>> Bruno
>> 
>> The non-communicating regions can be in a quantum entanglement.
> 
> 
> The non-communicating/proving/justifiably-believing regions are axiomatised 
> by G*. The quantum entanglement appears (well, its shadows at least) in the 
> material mode of the true self-references (see my papers for the 
> mathematical precision).
> 
> I do not assume a physical reality. I assume only very elementary 
> arithmetic, or very elementary combinator theory, or anything Turing 
> equivalent.
> 
> But there is indeed a 

Re: How to live forever

2018-03-29 Thread Bruno Marchal

> On 27 Mar 2018, at 21:58, Brent Meeker  wrote:
> 
> 
> 
> On 3/27/2018 9:57 AM, Bruno Marchal wrote:
>> There are no zombies with mechanism, not because we can derive consciousness 
>> from body-behaviour, but because we just cannot do that, as there is really 
>> no “body-behaviour”, as there are no bodies: intelligence and consciousness 
>> are the collective work of all numbers, or all combinators, or all Turing 
>> universal machines. Every body and every number is a zombie, in some sense, 
>> but the soul is in the actual true relations.
> You're retreating into metaphysics and avoiding the problem. 

The problem belongs to metaphysics, although I prefer the term “theology”, so 
it is hardly retreating. And, yes, with the correct metaphysics enforced by 
Mechanism, the problem of zombies is dissolved indeed, as matter/bodies have 
no ontological existence, only a phenomenological one.

(We do metaphysics/TOE/theology here. It is here that you often use your 
metaphysical hypothesis of a primary (non-reducible) notion of physical 
universe. The existence of a *primary* physical universe is NOT a hypothesis 
of any physical theory.)

> If I build two robots that are equally intelligent

What could that mean? 



> and human-like in their behavior (but not assuming them to be identical in 
> behavior) can I make one that is conscious and the other either not-conscious 
> or conscious in a very different way that I won't be able to recognize as 
> such? 

If the two robots are "equally intelligent", why would we not assume they have 
an equivalent consciousness? What do you mean by “equally intelligent”? 

The “physical” computations don’t make them intelligent/conscious, but they 
“win” the first person indeterminacy and stabilise the environment. The 
physical is here the first person plural “winner” of a statistics on the 
infinitely many arithmetical true relations. (The measure one requires obeying 
"[]p & <>p”, as can be explained.) The robots’ bodies just give you a chance 
to discuss with them, and give the person supported by the robot bodies some 
ability to do the same with respect to you. Matter creates nothing, but 
changes the relative probabilities of consciousness manifestations.

Keep in mind that in this theory there are only K, S, (K K), …; or 0, 1, 2, 
3, …, with their usual respective laws (which happen to form a Turing 
complete (universal, creative) theory).


> Or to give a more concrete biological example, is the consciousness of an 
> octopus similar to that of a mouse;

It is the same consciousness at its most succinct state. That consciousness, 
perhaps cosmic, divine, might be the consciousness of the universal machine, 
alias Robinson arithmetic. It is dissimilar only in its content and qualia 
intensity, except that here, from the behaviours, we might say they also share 
some comparable intelligence. Some cuttlefishes solve problems comparable to 
those some apes solve. But I don’t really believe in the IQ test, which 
evaluates only a small area of competence. Intelligence and consciousness are 
semantical notions that we can define for machines much simpler than us, and 
we can show that no sound machine can define in its own language what it 
experiences: it is not comparable, except in possible meta-fusion, like when 
awakening and realising we were doing two different dreams at once. The 
consciousness of Robinson arithmetic is a dissociated state of consciousness. 
“She” has no eyes, no arms, no ears, no nose, yet “she” is conscious, in a 
highly dissociative state, but that state is more what the brain hides than 
what the brain would secrete or create. I suspect we get it in non-REM sleep, 
but it is not really a “memorisable” state, nor even an imaginable state (by 
the machine for itself).

Now, the Löbian machine is the one which believes in Peano arithmetic. Such 
machines believe in enough induction axioms to discover that they are 
“universal”, with all the price that entails and the infinitely many 
interrogations it opens.



> or does the very different neural structure imply a difference in 
> consciousness?

Not per se. But having 8 arms and living in water will differentiate the 
types of experience available. An octopus might not be afraid of a cat, and a 
mouse does not care about big fishes. But it is the same consciousness of the 
universal machine, just differentiated by different contingencies.


I recall that a computation can be defined in many ways. Like a succession of 
reductions of combinators, or a sequence of instantaneous descriptions of a 
(Martin Davis quadruplet) Turing machine.
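A minimal sketch of the first reading, computation as a succession of combinator reductions, fits in a few lines of Python; the tuple encoding of terms below is just one illustrative choice:

# Terms: 'S', 'K', or an application (f, x). One leftmost (normal-order)
# reduction step: (K a) b -> a  and  ((S a) b) c -> (a c)(b c).
def step(t):
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if isinstance(f, tuple) and f[0] == 'K':            # (K a) b -> a
        return f[1], True
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == 'S'):                        # ((S a) b) c -> (a c)(b c)
        a, b, c = f[0][1], f[1], x
        return ((a, c), (b, c)), True
    f2, moved = step(f)                                 # otherwise reduce inside
    if moved:
        return (f2, x), True
    x2, moved = step(x)
    return ((f, x2), moved) if moved else (t, False)

def normalize(t, limit=50):
    # Print each "instantaneous description" of the reduction sequence.
    for _ in range(limit):
        print(t)
        t, moved = step(t)
        if not moved:
            return t
    return t

# I = S K K behaves as the identity: (S K K) A reduces to A in two steps.
normalize(((('S', 'K'), 'K'), 'A'))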

But a combinator reduction can emulate a sequence of instantaneous 
descriptions of a Turing machine, and a Turing machine can emulate combinator 
reduction. That is the intensional Church thesis. The extensional (usual) 
formulation says that all such Turing-universal systems compute the same class 
of partial computable functions from N to N, and the intensional (which is 
easy to deduce 

Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 10:30 am, Brent Meeker  wrote:

>
>
> On 3/27/2018 3:59 PM, Lawrence Crowell wrote:
>
> On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:
>>
>>
>> On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker  wrote:
>>
>>>
>>>
>>> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
 On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:

>
>
> On 27 March 2018 at 09:35, Brent Meeker  wrote:
>
>>
>>
>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>
>>
>> If you are not and never can be aware of it then in what sense is it
>> consciousness?
>>
>>
>> Depends on what you mean by "it".  I can be aware of my
>> consciousness, without being aware that it is different than it was 
>> before;
>> just as I can be aware of my consciousness without knowing whether it is
>> the same as yours, or the same as some robot.
>>
>
> If I am given a brain implant to try out for a few days and I notice
> no difference with the implant (everything feels exactly the same if I
> switch it in or out of circuit), everyone I know agrees there is no change
> in me, and every test I do with the implant switched in or out of circuit
> yields the same results, then I think there would be no good reason to
> hesitate in saying yes to the implant. If the change it brings about is
> neither objectively nor subjectively obvious, it isn't a change.
>
>
> --
> Stathis Papaioannou
>

 This argument ignores scaling. With any network you can replace or
 change nodes and connections on a small scale and the system remains
 largely unchanged. At a certain critical number of such changes the
 properties of the entire network system can rapidly change.

>>>
>>> Yes, it is possible that this is the case. What this would mean is that
>>> the observable behaviour of the system would stay unchanged as it is
>>> replaced from 0 to 100% and so would the consciousness for part of the way,
>>> but at a certain point, when a particular neurone is replaced,
>>> consciousness will suddenly flip on or off or change radically.
>>>
>>>
>>> I think you are overstating that and creating a strawman.  Consciousness
>>> under the influence of drugs for example can change radically, but not
>>> "suddenly flip" with one more molecule of alcohol.
>>>
>>
>> If part of your consciousness changes as your brain is gradually replaced
>> then you would notice but be unable to communicate it, which is what
>> is problematic. One way out of this would be if your consciousness stayed
>> the same up to a certain point then suddenly flipped. If you suddenly
>> became a zombie you would not notice and not report that anything had
>> changed, so no inconsistency. However, it’s a long stretch to say that
>> consciousness will flip on changing a single molecule in order to save the
>> idea that it is substrate specific.
>>
>>
>> But LC wasn't arguing it was substrate specific.  He was arguing that it's
>> scale specific.
>>
>
>> Brent
>>
>
> That was one argument. Also I am not arguing about the dosage of a drug,
> but of some rewiring, removal or replacement of sub-networks.
>
> As for substrate ponder the following question. If you had a stroke and
> were given the option of either a silicon chip system to replace neural
> functioning or neurons derived from your own stem cells, which would you
> choose? The obvious choice would be neurons, for they would most adapt to
> fill in needed function and interact with the rest of the brain.
>
>
> I think it very likely that a silicon neuron could work.  But it would
> work like a peg leg works.  You could still get around, but it wouldn't
> grow or self heal or tell you when it was hot or cold.  Your brain can
> actually grow new neurons and remove dead and non-functioning ones, and
> respond to hormones.  So I would expect that gradually replacing your
> neurons with artificial neurons would gradually change how you function.
> But maybe not in a way you could notice.  For example, you don't
> consciously notice hormone levels, so if you stopped responding to them you
> wouldn't notice, and nobody else would either unless they were monitoring
> hormone levels in your blood.
>

I agree with this; some as-yet-unavailable nanotechnology would be needed
to make fully functional artificial neurones. Alternatively, if there is no
need to interface with neural tissue the whole brain, including the changes
in neurones over time and their response to the microenvironment, could be
simulated on a general purpose computer.
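As a rough sketch of what such a simulation means at toy scale, here is the crudest standard neurone model, a leaky integrate-and-fire unit, in Python; the parameters are illustrative only, and real neurones add plasticity, growth and hormonal modulation on top of this:

# Leaky integrate-and-fire: integrate dV/dt = (v_rest - V + R*I) / tau,
# spike and reset whenever the membrane potential crosses threshold.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0, r_m=10.0):
    """Return spike times (ms) for a list of input currents sampled every dt ms."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        v += dt * (v_rest - v + r_m * i_in) / tau
        if v >= v_thresh:            # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 200 ms of constant 2.0 (nA, nominally) drive produces a regular spike train.
print(simulate_lif([2.0] * 2000))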

--
Stathis Papaioannou


Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 9:59 am, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote:
>
>>
>>
>> On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:
>>
> On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker  wrote:
>>
>>
>>>
>>> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
 On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:

>
>
> On 27 March 2018 at 09:35, Brent Meeker  wrote:
>
>>
>>
>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>
>>
>> If you are not and never can be aware of it then in what sense is it
>> consciousness?
>>
>>
>> Depends on what you mean by "it".  I can be aware of my
>> consciousness, without being aware that it is different than it was 
>> before;
>> just as I can be aware of my consciousness without knowing whether it is
>> the same as yours, or the same as some robot.
>>
>
> If I am given a brain implant to try out for a few days and I notice
> no difference with the implant (everything feels exactly the same if I
> switch it in or out of circuit), everyone I know agrees there is no change
> in me, and every test I do with the implant switched in or out of circuit
> yields the same results, then I think there would be no good reason to
> hesitate in saying yes to the implant. If the change it brings about is
> neither objectively nor subjectively obvious, it isn't a change.
>
>
> --
> Stathis Papaioannou
>

 This argument ignores scaling. With any network you can replace or
 change nodes and connections on a small scale and the system remains
 largely unchanged. At a certain critical number of such changes the
 properties of the entire network system can rapidly change.

>>>
>>> Yes, it is possible that this is the case. What this would mean is that
>>> the observable behaviour of the system would stay unchanged as it is
>>> replaced from 0 to 100% and so would the consciousness for part of the way,
>>> but at a certain point, when a particular neurone is replaced,
>>> consciousness will suddenly flip on or off or change radically.
>>>
>>>
>>> I think you are overstating that and creating a strawman.  Consciousness
>>> under the influence of drugs for example can change radically, but not
>>> "suddenly flip" with one more molecule of alcohol.
>>>
>>
>> If part of your consciousness changes as your brain is gradually replaced
>> then you would notice but be unable to communicate it, which is what
>> is problematic. One way out of this would be if your consciousness stayed
>> the same up to a certain point then suddenly flipped. If you suddenly
>> became a zombie you would not notice and not report that anything had
>> changed, so no inconsistency. However, it’s a long stretch to say that
>> consciousness will flip on changing a single molecule in order to save the
>> idea that it is substrate specific.
>>
>>
>> But LC wasn't arguing it was substrate specific.  He was arguing that it's
>> scale specific.
>>
>
>> Brent
>>
>
> That was one argument. Also I am not arguing about the dosage of a drug,
> but of some rewiring, removal or replacement of sub-networks.
>
> As for substrate ponder the following question. If you had a stroke and
> were given the option of either a silicon chip system to replace neural
> functioning or neurons derived from your own stem cells, which would you
> choose? The obvious choice would be neurons, for they would most adapt to
> fill in needed function and interact with the rest of the brain.
>

A “silicon chip system” probably wouldn’t be able to fully replace the
function of neurones; to give just one reason, neurones change and make new
connections over time, while computer chips do not. To completely replace
neurones would involve some sort of nanomachinery, the likes of which does
not yet exist. But if it did exist, I would prefer it to the biological
version if it had advantages such as being more robust.

> This whole idea of brain uploading has been rejected by neurophysiologists.
> First off, they are the most likely to be really in the know about this
> subject. My theoretical objections are meant to be grist for the mill; maybe
> they are real and related to the neurophysiology, or not. The short article
> did not go into details, but it does not surprise me that they threw up their
> hands in disbelief that anyone would seriously invest in this.
>

I doubt that you can make this blanket statement about all neuroscientists.
In any case, I was talking about the theoretical rather than the technical
problem. Is there a theoretical reason why the mind could not be
implemented in a different substrate?

--
Stathis Papaioannou


Re: How to live forever

2018-03-27 Thread Lawrence Crowell
On Tuesday, March 27, 2018 at 5:30:37 PM UTC-6, Brent wrote:
>
>
>
> On 3/27/2018 3:59 PM, Lawrence Crowell wrote:
>
> On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote: 
>>
>>
>>
>> On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:
>>
>>
>> On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker  wrote:
>>
>>>
>>>
>>> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
 On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:

>
>
> On 27 March 2018 at 09:35, Brent Meeker  wrote:
>
>>
>>
>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>
>>
>> If you are not and never can be aware of it then in what sense is it 
>> consciousness?
>>
>>
>> Depends on what you mean by "it".  I can be aware of my 
>> consciousness, without being aware that it is different than it was 
>> before; 
>> just as I can be aware of my consciousness without knowing whether it is 
>> the same as yours, or the same as some robot.
>>
>
> If I am given a brain implant to try out for a few days and I notice 
> no difference with the implant (everything feels exactly the same if I 
> switch it in or out of circuit), everyone I know agrees there is no 
> change 
> in me, and every test I do with the implant switched in or out of circuit 
> yields the same results, then I think there would be no good reason to 
> hesitate in saying yes to the implant. If the change it brings about is 
> neither objectively nor subjectively obvious, it isn't a change.
>
>
> -- 
> Stathis Papaioannou
>

 This argument ignores scaling. With any network you can replace or 
 change nodes and connections on a small scale and the system remains 
 largely unchanged. At a certain critical number of such changes the 
 properties of the entire network system can rapidly change. 

>>>
>>> Yes, it is possible that this is the case. What this would mean is that 
>>> the observable behaviour of the system would stay unchanged as it is 
>>> replaced from 0 to 100% and so would the consciousness for part of the way, 
>>> but at a certain point, when a particular neurone is replaced, 
>>> consciousness will suddenly flip on or off or change radically. 
>>>
>>>
>>> I think you are overstating that and creating a strawman.  Consciousness 
>>> under the influence of drugs for example can change radically, but not 
>>> "suddenly flip" with one more molecule of alcohol.  
>>>
>>
>> If part of your consciousness changes as your brain is gradually replaced 
>> then you would notice but be unable to communicate it, which is what 
>> is problematic. One way out of this would be if your consciousness stayed 
>> the same up to a certain point then suddenly flipped. If you suddenly 
>> became a zombie you would not notice and not report that anything had 
>> changed, so no inconsistency. However, it’s a long stretch to say that 
>> consciousness will flip on changing a single molecule in order to save the 
>> idea that it is substrate specific.
>>
>>
>> But LC wasn't arguing it was substrate specific.  He was arguing that it's 
>> scale specific. 
>>
>
>> Brent
>>
>
> That was one argument. Also I am not arguing about the dosage of a drug, 
> but of some rewiring, removal or replacement of sub-networks. 
>
> As for substrate ponder the following question. If you had a stroke and 
> were given the option of either a silicon chip system to replace neural 
> functioning or neurons derived from your own stem cells, which would you 
> choose? The obvious choice would be neurons, for they would most adapt to 
> fill in needed function and interact with the rest of the brain. 
>
>
> I think it very likely that a silicon neuron could work.  But it would 
> work like a peg leg works.  You could still get around, but it wouldn't 
> grow or self heal or tell you when it was hot or cold.  Your brain can 
> actually grow new neurons and remove dead and non-functioning ones, and 
> respond to hormones.  So I would expect that gradually replacing your 
> neurons with artificial neurons would gradually change how you function.  
> But maybe not in a way you could notice.  For example, you don't 
> consciously notice hormone levels, so if you stopped responding to them you 
> wouldn't notice, and nobody else would either unless they were monitoring 
> hormone levels in your blood.
>
> Brent
>
>
Yes, and if you replace the entire brain with technology the peg leg is 
expanded into an entire Pinocchio. Would it really be conscious? It is also 
the case that so much of our mental processing does involve hormone 
reception and a range of other data inputs from other receptors and ligands.

LC
 


Re: How to live forever

2018-03-27 Thread Brent Meeker



On 3/27/2018 3:59 PM, Lawrence Crowell wrote:

On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote:



On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:


On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker  wrote:



On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:


On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell
 wrote:

On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp
wrote:



On 27 March 2018 at 09:35, Brent Meeker
 wrote:



On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:


If you are not and never can be aware of it
then in what sense is it consciousness?


Depends on what you mean by "it".  I can be
aware of my consciousness, without being aware
that it is different than it was before; just as
I can be aware of my consciousness without
knowing whether it is the same as yours, or the
same as some robot.


If I am given a brain implant to try out for a few
days and I notice no difference with the implant
(everything feels exactly the same if I switch it in
or out of circuit), everyone I know agrees there is
no change in me, and every test I do with the
implant switched in or out of circuit yields the
same results, then I think there would be no good
reason to hesitate in saying yes to the implant. If
the change it brings about is neither objectively
nor subjectively obvious, it isn't a change.


-- 
Stathis Papaioannou



This argument ignores scaling. With any network you can
replace or change nodes and connections on a small scale
and the system remains largely unchanged. At a certain
critical number of such changes the properties of the
entire network system can rapidly change.


Yes, it is possible that this is the case. What this would
mean is that the observable behaviour of the system
would stay unchanged as it is replaced from 0 to 100% and so
would the consciousness for part of the way, but at a
certain point, when a particular neurone is replaced,
consciousness will suddenly flip on or off or change radically.


I think you are overstating that and creating a strawman.
Consciousness under the influence of drugs for example can
change radically, but not "suddenly flip" with one more
molecule of alcohol.


If part of your consciousness changes as your brain is gradually
replaced then you would notice but be unable to communicate
it, which is what is problematic. One way out of this would be
if your consciousness stayed the same up to a certain point then
suddenly flipped. If you suddenly became a zombie you would not
notice and not report that anything had changed, so no
inconsistency. However, it’s a long stretch to say that
consciousness will flip on changing a single molecule in order to
save the idea that it is substrate specific.


But LC wasn't arguing it was substrate specific.  He was arguing
that it's scale specific.


Brent


That was one argument. Also I am not arguing about the dosage of a 
drug, but of some rewiring, removal or replacement of sub-networks.


As for substrate ponder the following question. If you had a stroke 
and were given the option of either a silicon chip system to replace 
neural functioning or neurons derived from your own stem cells, which 
would you choose? The obvious choice would be neurons, for they would 
most adapt to fill in needed function and interact with the rest of 
the brain.


I think it very likely that a silicon neuron could work.  But it would 
work like a peg leg works.  You could still get around, but it wouldn't 
grow or self heal or tell you when it was hot or cold. Your brain can 
actually grow new neurons and remove dead and non-functioning ones, and 
respond to hormones.  So I would expect that gradually replacing your 
neurons with artificial neurons would gradually change how you 
function.  But maybe not in a way you could notice.  For example, you 
don't consciously notice hormone levels, so if you stopped responding to 
them you wouldn't notice, and nobody else would either unless they were 
monitoring hormone levels in your blood.


Brent



This whole idea of brain uploading has been rejected by 
neurophysiologists. First off, they are the most likely to be really 
in the know about this subject. My theoretical objections are meant to 
be grist for the mill; maybe they are real and related to 

Re: How to live forever

2018-03-27 Thread Lawrence Crowell
On Tuesday, March 27, 2018 at 3:56:18 PM UTC-6, Brent wrote:
>
>
>
> On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:
>
>
> On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker  > wrote:
>
>>
>>
>> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>>
>>
>> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
>> goldenfield...@gmail.com > wrote:
>>
>>> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>>>


 On 27 March 2018 at 09:35, Brent Meeker  wrote:

>
>
> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>
>
> If you are not and never can be aware of it then in what sense is it 
> consciousness?
>
>
> Depends on what you mean by "it".  I can be aware of my consciousness, 
> without being aware that it is different than it was before; just as I 
> can 
> be aware of my consciousness without knowing whether it is the same as 
> yours, or the same as some robot.
>

 If I am given a brain implant to try out for a few days and I notice no 
 difference with the implant (everything feels exactly the same if I switch 
 it in or out of circuit), everyone I know agrees there is no change in me, 
 and every test I do with the implant switched in or out of circuit yields 
 the same results, then I think there would be no good reason to hesitate 
 in 
 saying yes to the implant. If the change it brings about is neither 
 objectively nor subjectively obvious, it isn't a change.


 -- 
 Stathis Papaioannou

>>>
>>> This argument ignores scaling. With any network you can replace or 
>>> change nodes and connections on a small scale and the system remains 
>>> largely unchanged. At a certain critical number of such changes the 
>>> properties of the entire network system can rapidly change. 
>>>
>>
>> Yes, it is possible that this is the case. What this would mean is that 
>> the observable behaviour of the system would stay unchanged as it is 
>> replaced from 0 to 100% and so would the consciousness for part of the way, 
>> but at a certain point, when a particular neurone is replaced, 
>> consciousness will suddenly flip on or off or change radically. 
>>
>>
>> I think you are overstating that and creating a strawman.  Consciousness 
>> under the influence of drugs for example can change radically, but not 
>> "suddenly flip" with one more molecule of alcohol.  
>>
>
> If part of your consciousness changes as your brain is gradually replaced 
> then you would notice but be unable to communicate it, which is what 
> is problematic. One way out of this would be if your consciousness stayed 
> the same up to a certain point then suddenly flipped. If you suddenly 
> became a zombie you would not notice and not report that anything had 
> changed, so no inconsistency. However, it’s a long stretch to say that 
> consciousness will flip on changing a single molecule in order to save the 
> idea that it is substrate specific.
>
>
> But LC wasn't arguing it was substrate specific.  He was arguing that it's 
> scale specific. 
>

> Brent
>

That was one argument. Also I am not arguing about the dosage of a drug, 
but of some rewiring, removal or replacement of sub-networks. 

As for substrate ponder the following question. If you had a stroke and 
were given the option of either a silicon chip system to replace neural 
functioning or neurons derived from your own stem cells, which would you 
choose? The obvious choice would be neurons, for they would most adapt to 
fill in needed function and interact with the rest of the brain. 

This whole idea of brain uploading has been rejected by neurophysiologists. 
First off, they are the most likely to be really in the know about this 
subject. My theoretical objections are meant to be grist for the mill; maybe 
they are real and related to the neurophysiology, or not. The short article 
did not go into details, but it does not surprise me that they threw up their 
hands in disbelief that anyone would seriously invest in this.

LC



Re: How to live forever

2018-03-27 Thread Brent Meeker



On 3/27/2018 2:26 PM, Stathis Papaioannou wrote:


On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker > wrote:




On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:


On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell
> wrote:

On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:



On 27 March 2018 at 09:35, Brent Meeker
 wrote:



On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:


If you are not and never can be aware of it then in
what sense is it consciousness?


Depends on what you mean by "it".  I can be aware of
my consciousness, without being aware that it is
different than it was before; just as I can be aware
of my consciousness without knowing whether it is the
same as yours, or the same as some robot.


If I am given a brain implant to try out for a few days
and I notice no difference with the implant (everything
feels exactly the same if I switch it in or out of
circuit), everyone I know agrees there is no change in
me, and every test I do with the implant switched in or
out of circuit yields the same results, then I think
there would be no good reason to hesitate in saying yes
to the implant. If the change it brings about is neither
objectively nor subjectively obvious, it isn't a change.


-- 
Stathis Papaioannou



This argument ignores scaling. With any network you can
replace or change nodes and connections on a small scale and
the system remains largely unchanged. At a certain critical
number of such changes the properties of the entire network
system can rapidly change.


Yes, it is possible that this is the case. What this would mean
is that the observable behaviour of the system would stay
unchanged as it is replaced from 0 to 100% and so would the
consciousness for part of the way, but at a certain point, when a
particular neurone is replaced, consciousness will suddenly flip
on or off or change radically.


I think you are overstating that and creating a strawman. 
Consciousness under the influence of drugs for example can change
radically, but not "suddenly flip" with one more molecule of alcohol.


If part of your consciousness changes as your brain is gradually 
replaced then you would notice but be unable to communicate it, 
which is what is problematic. One way out of this would be if your 
consciousness stayed the same up to a certain point then suddenly 
flipped. If you suddenly became a zombie you would not notice and not 
report that anything had changed, so no inconsistency. However, it’s a 
long stretch to say that consciousness will flip on changing a single 
molecule in order to save the idea that it is substrate specific.


But LC wasn't arguing it was substrate specific.  He was arguing that 
it's scale specific.


Brent



Re: How to live forever

2018-03-27 Thread Lawrence Crowell
On Tuesday, March 27, 2018 at 11:06:51 AM UTC-6, Bruno Marchal wrote:
>
>
> On 27 Mar 2018, at 00:57, Lawrence Crowell  > wrote:
>
> On Monday, March 26, 2018 at 11:01:27 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> On 25 Mar 2018, at 17:34, Lawrence Crowell  
>> wrote:
>>
>>
>>
>> On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
>>>
>>>
>>> Yes, and if someone argues that consciousness is not maintained whatever 
>>> the substitution level is, it is up to them to explain what in the 
>>> brain+local-environment is not Turing emulable. I see only the “wave packet 
>>> reduction”, but I don’t see any evidence for that reduction, and it would 
>>> make Quantum mechanics inconsistent (I think) and not usable in cosmology, 
>>> nor in quantum information science. To believe that the brain is not a 
>>> “natural” machine is a bit like believing in some magic. Why not, but where 
>>> is the evidence?
>>>
>>>
>>> Bruno
>>>
>>
>> There are a couple of things running around here. One involves brains and 
>> minds and the other wave function reduction. 
>>
>> The issue of uploading brains or mapping them runs into the NP-complete 
>> problem of partitioning graphs. I like to think of this according to 
>> tensor spaces of states, such as with MERA (multi-scale entanglement 
>> renormalization ansatz) tensor networks. The AdS_3 example with an H^2 
>> spatial surface is seen in the diagram below.
>>
>>
>> [MERA tensor-network diagram, image omitted in the archive]
>>
>> This network has the highest complexity for the pentagonal tessellation, 
>> since these are honeycombs of the groups H3, H4, H5 corresponding to the 
>> pentagon, dodecahedron, and the 4-dim icosahedron or 120/600 cells. These 
>> groups will tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface 
>> embedded in AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 
>> groups with the Zamolodchikov eigenvalues or masses. 5-fold structures have 
>> connections to the golden mean, and the Zamolodchikov quaternions are 
>> representations of the golden mean quaternions. A quantum error correction 
>> code (QECC) defines a projector onto each of these partitioned elements, 
>> but (without going into some deep mathematics) this is not computable in a 
>> root system because there is no Galois field extension, which gives that 
>> the QECC is not NP-complete.  
>>
>> This of course is work I am doing with respect to the problem of 
>> unitarity in quantum black holes and holography. It may have some 
>> connection with more ordinary quantum mechanics and measurement. The action 
>> of a measurement is a process whereby a set of quantum states codes some 
>> other set of quantum states, where usually the number of measuring 
>> states is far larger than the number of measured states. The quantum 
>> measurement problem may have some connection to the above, and further it 
>> has some qualitative similarity to self-reference. This may then mean the 
>> proposition P = NP or P =/= NP is not provable, though perhaps specific 
>> examples of NP/NP-complete problems can be proven not to lie in P.  
>>
>> This further might connect with the whole idea of uploading minds into 
>> computers. Brains and their states are not just localized states but 
>> networks, and it could well be that this is not tractable. I paste in below 
>> a review paper on graph partitioning. This is just one possible theoretical 
>> obstruction, and if you plan on actually "bending metal" on this the 
>> problems will doubtless multiply like bunnies in spring. 
>>
>> As a general rule once these threads get past 100 posts I tend not to post 
>> any more. It becomes too annoying to find my way around them.
>>
>>
>>
>>
>> That is interesting, and might even help later to recover notions like 
>> space, but to keep the distinction between the communicable and the 
>> non-communicable parts of the machine's modes, which is needed for the 
>> mind-body problem, we have to extract such structure in some special way, 
>> using the mathematics of self-reference. I am unfortunately not that far! 
>> It might take some generations of mathematicians.
>>
>> Bruno
>>
>
> The non-communicating regions can be in a quantum entanglement.
>
>
>
> The non-communicating/proving/justifiably-believing regions are axiomatised 
> by G*. The quantum entanglement appears (well, its shadows at least) in 
> the material mode of the true self-references (see my papers for the 
> mathematical precision).
>
> I do not assume a physical reality. I assume only very elementary 
> arithmetic, or very elementary combinator theory, or anything Turing 
> equivalent.
>
> But there is indeed a relation between arithmetical incompleteness and the 
> quantum, as it is incompleteness that makes the logic of []p distinguishable 
> from []p & p and []p & ~[]f 
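For readers decoding the notation here ([] is the provability box, <> its dual, t is truth and f falsity, so <>t is the same as ~[]f), the variants just named can be set out compactly; a sketch in LaTeX:

\[
\begin{array}{ll}
\text{provable:} & \Box p \\
\text{knowable (Theaetetus):} & \Box p \land p \\
\text{observable:} & \Box p \land \Diamond t \quad (\equiv\ \Box p \land \lnot\Box f)
\end{array}
\]

For a sound machine the three are co-extensive, but by incompleteness the machine cannot prove that, so each box obeys a different modal logic.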

Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 7:27 am, Brent Meeker  wrote:

>
>
> On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:
>
>
> On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
> goldenfieldquaterni...@gmail.com> wrote:
>
>> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>>
>>>
>>>
>>> On 27 March 2018 at 09:35, Brent Meeker  wrote:
>>>


 On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:


 If you are not and never can be aware of it then in what sense is it
 consciousness?


 Depends on what you mean by "it".  I can be aware of my consciousness,
 without being aware that it is different than it was before; just as I can
 be aware of my consciousness without knowing whether it is the same as
 yours, or the same as some robot.

>>>
>>> If I am given a brain implant to try out for a few days and I notice no
>>> difference with the implant (everything feels exactly the same if I switch
>>> it in or out of circuit), everyone I know agrees there is no change in me,
>>> and every test I do with the implant switched in or out of circuit yields
>>> the same results, then I think there would be no good reason to hesitate in
>>> saying yes to the implant. If the change it brings about is neither
>>> objectively nor subjectively obvious, it isn't a change.
>>>
>>>
>>> --
>>> Stathis Papaioannou
>>>
>>
>> This argument ignores scaling. With any network you can replace or change
>> nodes and connections on a small scale and the system remains largely
>> unchanged. At a certain critical number of such changes the properties of
>> the entire network system can rapidly change.
>>
>
> Yes, it is possible that this is the case. What this would mean is that
> the observable behaviour of the system would stay unchanged as it is
> replaced from 0 to 100% and so would the consciousness for part of the way,
> but at a certain point, when a particular neurone is replaced,
> consciousness will suddenly flip on or off or change radically.
>
>
> I think you are overstating that and creating a strawman.  Consciousness
> under the influence of drugs for example can change radically, but not
> "suddenly flip" with one more molecule of alcohol.
>

If part of your consciousness changes as your brain is gradually replaced
then you would notice but be unable to communicate it, which is what
is problematic. One way out of this would be if your consciousness stayed
the same up to a certain point then suddenly flipped. If you suddenly
became a zombie you would not notice and not report that anything had
changed, so no inconsistency. However, it’s a long stretch to say that
consciousness will flip on changing a single molecule in order to save the
idea that it is substrate specific.

> And since neurones are themselves complex systems, within that neurone
> there will be a particular protein, or a particular atom in the protein
> which when replaced will lead to a flipping of consciousness, while all the
> time behaviour remains unchanged. It’s possible that in the last few
> minutes a cosmic ray has added a neutron to a crucial atom somewhere in
> your brain and this has radically changed your consciousness, but you don’t
> know it and neither does anyone else.
>
> I read the other day about this whole idea of brain uploading. The
>> neurophysiologists are largely rejecting this idea.
>>
>
> Why?
>
>> --
> Stathis Papaioannou
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>
-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 6:13 am, Brent Meeker  wrote:

>
>
> On 3/27/2018 5:20 AM, Stathis Papaioannou wrote:
>
>
>
> On 27 March 2018 at 09:35, Brent Meeker  wrote:
>
>>
>>
>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>
>>
>> If you are not and never can be aware of it then in what sense is it
>> consciousness?
>>
>>
>> Depends on what you mean by "it".  I can be aware of my consciousness,
>> without being aware that it is different than it was before; just as I can
>> be aware of my consciousness without knowing whether it is the same as
>> yours, or the same as some robot.
>>
>
> If I am given a brain implant to try out for a few days and I notice no
> difference with the implant (everything feels exactly the same if I switch
> it in or out of circuit),
>
>
> If it is a whole brain, then switching it will also switch memories and it
> will be impossible for you to say whether or not it "feels the same".
>

The implant replaces part of the brain (to begin with). If it’s the whole
brain you could speculate that the subject would become a zombie and, by
definition, not be aware of it. If it’s part of the brain the rest of the
brain will immediately notice if the change is large enough. If the visual
cortex is taken out by a stroke, the subject says he is blind and behaves
as if he is blind. He still has some visual reflexes, such as the pupillary
response to light, but he describes only what he can perceive, not visual
responses per se. So if it is possible to produce a cortical implant that has
the normal I/O behaviour but lacks visual perception or has radically
different visual perception, the subject should notice, like the stroke
patient.
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-27 Thread Brent Meeker



On 3/27/2018 10:19 AM, Stathis Papaioannou wrote:


On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell 
> wrote:


On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:



On 27 March 2018 at 09:35, Brent Meeker 
wrote:



On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:


If you are not and never can be aware of it then in what
sense is it consciousness?


Depends on what you mean by "it".  I can be aware of my
consciousness, without being aware that it is different
than it was before; just as I can be aware of my
consciousness without knowing whether it is the same as
yours, or the same as some robot.


If I am given a brain implant to try out for a few days and I
notice no difference with the implant (everything feels
exactly the same if I switch it in or out of circuit),
everyone I know agrees there is no change in me, and every
test I do with the implant switched in or out of circuit
yields the same results, then I think there would be no good
reason to hesitate in saying yes to the implant. If the change
it brings about is neither objectively nor subjectively
obvious, it isn't a change.


-- 
Stathis Papaioannou



This argument ignores scaling. With any network you can replace or
change nodes and connections on a small scale and the system
remains largely unchanged. At a certain critical number of such
changes the properties of the entire network system can rapidly
change.


Yes, it is possible that this is the case. What this would mean is 
that the observable behaviour of the system would stay unchanged 
as it is replaced from 0 to 100% and so would the consciousness for 
part of the way, but at a certain point, when a particular neurone is 
replaced, consciousness will suddenly flip on or off or change radically.


I think you are overstating that and creating a strawman. Consciousness 
under the influence of drugs for example can change radically, but not 
"suddenly flip" with one more molecule of alcohol.


Brent

And since neurones are themselves complex systems, within that neurone 
there will be a particular protein, or a particular atom in the 
protein which when replaced will lead to a flipping of consciousness, 
while all the time behaviour remains unchanged. It’s possible that in 
the last few minutes a cosmic ray has added a neutron to a crucial 
atom somewhere in your brain and this has radically changed your 
consciousness, but you don’t know it and neither does anyone else.


I read the other day about this whole idea of brain uploading. The
neurophysiologists are largely rejecting this idea.


Why?

--
Stathis Papaioannou




Re: How to live forever

2018-03-27 Thread Brent Meeker



On 3/27/2018 9:57 AM, Bruno Marchal wrote:

There are no zombies with mechanism, not because we can derive consciousness 
from body-behaviour, but because we just cannot do that, as there is really no 
“body-behaviour”, as there are no bodies: intelligence and consciousness 
are the collective work of all numbers, or all combinators, or all Turing 
universal machines. Every body and every number is a zombie, in some sense, 
but the soul is in the actual true relations.
You're retreating into metaphysics and avoiding the problem.  If I build 
two robots that are equally intelligent and human-like in their behavior 
(but not assuming them to be identical in behavior), can I make one that is 
conscious and the other either not-conscious or conscious in a very 
different way that I won't be able to recognize as such?  Or to give a 
more concrete biological example, is the consciousness of an octopus 
similar to that of a mouse; or does the very different neural structure 
imply a difference in consciousness?


Brent



Re: How to live forever

2018-03-27 Thread Brent Meeker



On 3/27/2018 5:20 AM, Stathis Papaioannou wrote:



On 27 March 2018 at 09:35, Brent Meeker > wrote:




On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:


If you are not and never can be aware of it then in what sense is
it consciousness?


Depends on what you mean by "it".  I can be aware of my
consciousness, without being aware that it is different than it
was before; just as I can be aware of my consciousness without
knowing whether it is the same as yours, or the same as some robot.


If I am given a brain implant to try out for a few days and I notice 
no difference with the implant (everything feels exactly the same if I 
switch it in or out of circuit),


If it is a whole brain, then switching it will also switch memories and 
it will be impossible for you to say whether or not it "feels the same".


everyone I know agrees there is no change in me, and every test I do 
with the implant switched in or out of circuit yields the same 
results, then I think there would be no good reason to hesitate in 
saying yes to the implant. If the change it brings about is neither 
objectively nor subjectively obvious, it isn't a change.


Ex hypothesi your external behavior is the same (at least insofar as 
your friends can judge).


But my question wasn't whether you should accept the implant or not.  
That would depend on a lot of other things and wouldn't necessarily be 
ruled out even if you noticed your consciousness to be different.


Brent




--
Stathis Papaioannou




Re: How to live forever

2018-03-27 Thread John Clark
On Tue, Mar 27, 2018 at 10:50 AM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

*>This argument ignores scaling. With any network you can replace or change
> nodes and connections on a small scale and the system remains largely
> unchanged. At a certain critical number of such changes the properties of
> the entire network system can rapidly change. *


So you’re saying no simulation can be absolutely exact, so you dare not
upload because, due to chaos theory, very minor differences would soon grow
until you were no longer Lawrence Crowell. Fine, but by the exact same
argument you dare not take a sip of coffee, because that will also soon
change you into a person different from the one who did not take that sip
of coffee.

*> I read the other day about this whole idea of brain uploading. The
> neurophysiologists are largely rejecting this idea.*

I don’t know what specifically you’re referring to, but I don't need to,
because I already know what the objections would be; there are only two
general categories.

1) They will say mind uploading would be enormously difficult to do, so we
can’t do it now, which is true; but then they will say it will always be
enormously difficult to do, which is not true. In the future everything that
is not forbidden by the laws of physics can not only be done, it can be
easily done.


2) They will present an argument that, when you boil away several layers of
glop, turns out to be some slight variation of the very silly Sacred
Atoms Theory: the idea that there is something special about the particular
carbon atoms in your body. They’d be far too embarrassed to just come right
out and say they think some atoms somehow have your name scratched on them,
but that is the clear implication.

And Lawrence, I just don’t understand how you can be so certain about this
that you’re willing to stake your life on it, because that is exactly what
you’re doing. If I’m wrong I won’t be any deader; if you’re wrong you’re
going to be a lot deader.

 John K Clark



Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On Wed, 28 Mar 2018 at 1:50 am, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>
>>
>>
>> On 27 March 2018 at 09:35, Brent Meeker  wrote:
>>
>>>
>>>
>>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> If you are not and never can be aware of it then in what sense is it
>>> consciousness?
>>>
>>>
>>> Depends on what you mean by "it".  I can be aware of my consciousness,
>>> without being aware that it is different than it was before; just as I can
>>> be aware of my consciousness without knowing whether it is the same as
>>> yours, or the same as some robot.
>>>
>>
>> If I am given a brain implant to try out for a few days and I notice no
>> difference with the implant (everything feels exactly the same if I switch
>> it in or out of circuit), everyone I know agrees there is no change in me,
>> and every test I do with the implant switched in or out of circuit yields
>> the same results, then I think there would be no good reason to hesitate in
>> saying yes to the implant. If the change it brings about is neither
>> objectively nor subjectively obvious, it isn't a change.
>>
>>
>> --
>> Stathis Papaioannou
>>
>
> This argument ignores scaling. With any network you can replace or change
> nodes and connections on a small scale and the system remains largely
> unchanged. At a certain critical number of such changes the properties of
> the entire network system can rapidly change.
>

Yes, it is possible that this is the case. What this would mean is that
the observable behaviour of the system would stay unchanged as it is
replaced from 0 to 100% and so would the consciousness for part of the way,
but at a certain point, when a particular neurone is replaced,
consciousness will suddenly flip on or off or change radically. And since
neurones are themselves complex systems, within that neurone there will be
a particular protein, or a particular atom in the protein which when
replaced will lead to a flipping of consciousness, while all the time
behaviour remains unchanged. It’s possible that in the last few minutes a
cosmic ray has added a neutron to a crucial atom somewhere in your brain
and this has radically changed your consciousness, but you don’t know it
and neither does anyone else.
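Lawrence's scaling point, incidentally, has a well-studied analogue in percolation theory: remove nodes from a sparse random network and the largest connected component shrinks only mildly until a critical fraction, then collapses. A self-contained Python sketch with illustrative parameters, not a model of any brain:

import random

def giant_component_fraction(n, neighbors, removed):
    """Fraction of nodes in the largest component after removals (iterative DFS)."""
    seen, best = set(removed), 0
    for start in range(n):
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in neighbors[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / n

random.seed(1)
n, degree = 2000, 3                      # sparse random graph, mean degree ~3
neighbors = [set() for _ in range(n)]
for _ in range(n * degree // 2):
    u, v = random.randrange(n), random.randrange(n)
    neighbors[u].add(v); neighbors[v].add(u)

order = random.sample(range(n), n)       # a random removal/replacement order
for fraction in (0.0, 0.2, 0.4, 0.6, 0.7, 0.8):
    removed = set(order[: int(fraction * n)])
    print(fraction, round(giant_component_fraction(n, neighbors, removed), 2))
# The largest-component fraction degrades gently, then drops sharply near the
# percolation threshold (roughly 1 - 1/mean_degree for this kind of graph).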

> I read the other day about this whole idea of brain uploading. The
> neurophysiologists are largely rejecting this idea.
>

Why?

--
Stathis Papaioannou



Re: How to live forever

2018-03-27 Thread Bruno Marchal

> On 27 Mar 2018, at 00:57, Lawrence Crowell  
> wrote:
> 
> On Monday, March 26, 2018 at 11:01:27 AM UTC-6, Bruno Marchal wrote:
> 
>> On 25 Mar 2018, at 17:34, Lawrence Crowell > > wrote:
>> 
>> 
>> 
>> On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
>> 
>>> Yes, and if someone argues that consciousness is not maintained whatever the 
>>> substitution level is, it is up to them to explain what in the 
>>> brain+local-environment is not Turing emulable. I see only the “wave packet 
>>> reduction”, but I don’t see any evidence for that reduction, and it would 
>>> make quantum mechanics inconsistent (I think) and not usable in cosmology, 
>>> nor in quantum information science. To believe that the brain is not a 
>>> “natural” machine is a bit like believing in some magic. Why not, but where 
>>> is the evidence?
>> 
>> Bruno
>> 
>> There are a couple of things running around here. One involves brains and 
>> minds and the other wave function reduction. 
>> 
>> The issue of uploading brains or mapping them comes into the problem with 
>> the NP-complete problem of partitioning graphs. I like to think of this 
>> according to tensor spaces of states, such as with MERA (multi-scale 
>> entanglement renormalization ansatz) tensor networks. The AdS_3 example with 
>> H^2 spatial surface is seen in the diagram below.
>> 
>> [diagram omitted in the archive]
>> 
>> This network has the highest complexity for the pentagonal tessellation for 
>> these are honeycombs of the groups H3, H4, H5 corresponding to the pentagon, 
>> dodecahedron, and the 4-dim icosahedron or 120/600 cells. These groups will 
>> tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface embedded in 
>> AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 groups with 
>> the Zamolodchikov eigenvalues or masses. 5-fold structures have connections 
>> to the golden mean, and the Zamolodchikov quaternions are representations of 
>> the golden mean quaternions. A quantum error correction code (QECC) defines 
>> a projector onto each of these partitioned elements, but (without going into 
>> some deep mathematics) this is not computable in a root system because there 
>> is no Galois field extension, which gives that the QECC is not NP-complete.  
>> 
>> This of course is work I am doing with respect to the problem of unitarity 
>> in quantum black holes and holography. It may have some connection with more 
>> ordinary quantum mechanics and measurement. The action of a measurement is a 
>> process whereby a set of quantum states code some other set of quantum 
>> states, where usually the number of the measuring states is far larger than 
>> the measured states. The quantum measurement problem may have some 
>> connection to the above, and further it has some qualitative similarity to 
>> self-reference. This may then mean the proposition P = NP or P =/= NP is not 
>> provable, but where maybe specific examples of NP/NP-complete algorithms as 
>> not-P can be proven. 
>> 
>> This further might connect with the whole idea of up-loading minds into 
>> computers. Brains and their states are not just localized states but 
>> networks, and it could well be that this is not tractable. I paste in below 
>> a review paper on graph partitioning. This is just one possible theoretical 
>> obstruction, and if you plan on actually "bending metal" on this the 
>> problems will doubtless multiply like bunnies in spring. 
>> 
>> As a general rule once these threads get past 100 I tend not to post any 
>> more. It becomes too annoying to find my way around them.
> 
> 
> 
> That is interesting, and might even help later to recover notions like space, 
> but to keep the distinction between the communicable and the non-communicable 
> parts of the machine’s modes, which is needed for the mind-body problem, we 
> have to extract such structure in some special way, using the mathematics 
> of self-reference. I am unfortunately not that far! It might take some 
> generations of mathematicians.
> 
> Bruno
> 
> The non-communicating regions can be in a quantum entanglement.


The non-communicating/non-proving/non-justifiably-believing regions are 
axiomatised by G*. The quantum entanglement appears (well, its shadows at 
least) in the material mode of the true self-references (see my papers for the 
mathematical precision).

I do not assume a physical reality. I assume only very elementary arithmetic, 
or very elementary combinator theory, or anything Turing equivalent.

But there is indeed a relation between arithmetical incompleteness and the 
quantum, as it is incompleteness that makes the logic of []p distinguishable 
from []p & p, []p & ~[]f and the others, and these give the arithmetical 
quantum logic when p is restricted to the semi-computable (sigma_1) sentences.
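
In the thread's notation ([] is the machine's provability predicate, f is
falsity, and <>t abbreviates ~[]f, i.e. consistency), the modes in question
are, roughly sketched:

    p                 truth
    []p               provable (the logic G; G* for the true part)
    []p & p           knowable (the Theaetetical variant)
    []p & <>t         observable (the []p & ~[]f above)
    []p & p & <>t     sensible

G* proves all these modes coextensive for a correct machine, but the machine
itself cannot prove that, so each obeys a distinct logic; the restriction of p
to sigma_1 sentences is what gives the last ones their quantum flavour.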

Re: How to live forever

2018-03-27 Thread Bruno Marchal

> On 26 Mar 2018, at 22:09, Brent Meeker  wrote:
> 
> 
> 
> On 3/26/2018 10:10 AM, Bruno Marchal wrote:
>>> You retreat into what is possible.  My question is much more directly 
>>> pragmatic.  If I actually made a silicon based replacement for your brain 
>>> that had the same input/output would your consciousness be different if the 
>>> replacement processed the information differently...and how could you or we 
>>> know?
>> 
>> Not necessarily, unless you mean all my possible behaviour, including the 
>> infinite one. For a finite time, a zombie might be able to imitate me, or 
>> some of my behaviour, well enough to fool people.
>> 
>> The level of substitution is more precise than “behaviour”, as what is 
>> maintained is the behaviour of the relevant entities at some level; this 
>> might include all the internal inputs and outputs of all particular 
>> neurons. I think I have already said that I tend to think that the 
>> substitution level is the particles/waves up to the Heisenberg uncertainty.
>> 
> The problem with that level of substitution is that the wave-function of the 
> brain (or even of one neuron) is an extreme idealization. 

Yes, it is a complex sum/statistics of infinities of computations/sigma_1 
sentences. To emulate exactly a physical object you need an infinity of 
computations, at least when assuming Digital Mechanism, or indexical 
computationalism (yes doctor).

Here I am not sure which theory you are assuming. If you assume a primary 
physical reality behind the wave, then you cannot use digital mechanism, as it 
requires the physical reality to be a first-person-plural emergence from that 
infinite sum.

You need only to believe that 3^3 + 4^3 + 5^3 = 6^3 is true or false, 
independently of you verifying it or not.
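
The identity is easy to check once, for instance in one line of Python (an
illustration only):

    assert 3**3 + 4**3 + 5**3 == 6**3   # 27 + 64 + 125 == 216

but its truth does not depend on the check ever being run, which is the sense
of "independently" here.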





> The brain (or neuron) as a quantum system is not isolated and will be entangled 
> with a lot of the environment, including that outside the body.  There is no 
> such thing as “the wave-function of a brain”.

I agree. It is all in the head of the universal Turing machine, verifiably so 
(and up to now it fits and explains both the sharable quanta and the private 
qualia).


> 
> But I raised the question precisely because of this extreme disconnect 
> between discussions of the "level of substitution" and the "consciousness as 
> detected by behavior”.

Not really, because “behaviour of neurons” was ambiguous for me: is it the 
behaviour of the relevant part, the whole set of (non-computable) possible 
parts, or the mundane behaviour of the person incarnated by that brain/body 
relative to me?

It matters, because with mechanism, consciousness and intelligence are not in 
the physical reality; it is the physical reality which evolves (arithmetically, 
logically) in the dreams of the universal person, which differentiates along 
the distinguishable computations, with weights provided by the mathematics of 
the “material modes” of self-reference.

It works, making, up to now, the Everett Wave into a sort of phenomenological 
fixed point of the universal self-introspecting machine. Physics becomes a 
number-theoretical phenomenon due to the ability of the numbers to be involved 
in infinitely many (sigma_1) relations, and to reflect on themselves.

That is not a threat for physics, but it is a threat for metaphysical 
physicalism or materialism, as it is (imho) the only theory which explains as 
much as possible the quanta, the qualia and their mathematically precise 
relation, and this from a very simple set of equations, like Kxy = x and 
Sxyz = xz(yz). (And very few identity rules of the type: derive xz = yz from 
x = y.)
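
How small that base is can be made vivid with a minimal sketch (the encoding
of the combinators as nested one-argument Python functions is an illustration;
the two equations themselves are the standard S and K rules):

    # K x y = x          S x y z = x z (y z)
    K = lambda x: lambda y: x
    S = lambda x: lambda y: lambda z: x(z)(y(z))

    # The classic derivation of the identity combinator:
    # S K K x  =  K x (K x)  =  x
    I = S(K)(K)
    assert I(42) == 42

S and K alone already form a Turing-universal rewriting system, which is why
they can serve as the entire ontological base.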

There are no zombies with mechanism, not because we can derive consciousness 
from body-behaviour, but because we just cannot do that, as there is no real 
“body-behaviour”, as there are no bodies: the intelligence and the 
consciousness are the collective work of all numbers, or all combinators, or 
all Turing universal machines. All bodies, or all numbers, are zombies in some 
sense, but the soul is in the actual true relations.

Bruno





> 
> Brent
> 



Re: How to live forever

2018-03-27 Thread Lawrence Crowell
On Tuesday, March 27, 2018 at 7:21:00 AM UTC-5, stathisp wrote:
>
>
>
> On 27 March 2018 at 09:35, Brent Meeker  > wrote:
>
>>
>>
>> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>>
>>
>> If you are not and never can be aware of it then in what sense is it 
>> consciousness?
>>
>>
>> Depends on what you mean by "it".  I can be aware of my consciousness, 
>> without being aware that it is different than it was before; just as I can 
>> be aware of my consciousness without knowing whether it is the same as 
>> yours, or the same as some robot.
>>
>
> If I am given a brain implant to try out for a few days and I notice no 
> difference with the implant (everything feels exactly the same if I switch 
> it in or out of circuit), everyone I know agrees there is no change in me, 
> and every test I do with the implant switched in or out of circuit yields 
> the same results, then I think there would be no good reason to hesitate in 
> saying yes to the implant. If the change it brings about is neither 
> objectively nor subjectively obvious, it isn't a change.
>
>
> -- 
> Stathis Papaioannou
>

This argument ignores scaling. With any network you can replace or change 
nodes and connections on a small scale and the system remains largely 
unchanged. At a certain critical number of such changes the properties of 
the entire network system can rapidly change. 
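
The critical-threshold behaviour is easy to exhibit numerically; a toy sketch
(site percolation on a random graph, an illustration only and in no way a
brain model):

    import random
    from collections import deque

    def giant_component_fraction(adj, survivors):
        # Largest connected component among the surviving nodes, as a
        # fraction of the survivors, found by breadth-first search.
        seen, best = set(), 0
        for start in survivors:
            if start in seen:
                continue
            seen.add(start)
            queue, size = deque([start]), 0
            while queue:
                u = queue.popleft()
                size += 1
                for v in adj[u]:
                    if v in survivors and v not in seen:
                        seen.add(v)
                        queue.append(v)
            best = max(best, size)
        return best / max(1, len(survivors))

    rng = random.Random(1)
    n, mean_degree = 2000, 4
    adj = {u: set() for u in range(n)}
    for _ in range(n * mean_degree // 2):       # random graph with <k> ~ 4
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)

    for f in (0.1, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9):
        survivors = {u for u in range(n) if rng.random() > f}
        print(f"removed {f:.0%}: giant component = "
              f"{giant_component_fraction(adj, survivors):.2f}")

With mean degree 4 the giant component shrugs off small-scale removal and then
collapses rapidly near a ~75% removal fraction: a network robust to small
changes that changes character at a critical point, as described above.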

I read the other day about this whole idea of brain uploading. The 
neurophysiologists are largely rejecting this idea.

LC 



Re: How to live forever

2018-03-27 Thread Stathis Papaioannou
On 27 March 2018 at 09:35, Brent Meeker  wrote:

>
>
> On 3/26/2018 3:19 PM, Stathis Papaioannou wrote:
>
>
> If you are not and never can be aware of it then in what sense is it
> consciousness?
>
>
> Depends on what you mean by "it".  I can be aware of my consciousness,
> without being aware that it is different than it was before; just as I can
> be aware of my consciousness without knowing whether it is the same as
> yours, or the same as some robot.
>

If I am given a brain implant to try out for a few days and I notice no
difference with the implant (everything feels exactly the same if I switch
it in or out of circuit), everyone I know agrees there is no change in me,
and every test I do with the implant switched in or out of circuit yields
the same results, then I think there would be no good reason to hesitate in
saying yes to the implant. If the change it brings about is neither
objectively nor subjectively obvious, it isn't a change.
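
As a toy version of that test (the two functions are invented stand-ins, not a
model of any real implant):

    import random

    def biological(x):
        # stand-in for the original component: a threshold at 0.5
        return 1 if x >= 0.5 else 0

    def implant(x):
        # different internals, same input/output mapping
        return int(not (x < 0.5))

    # "every test I do with the implant switched in or out of circuit
    # yields the same results"
    for _ in range(100_000):
        x = random.random()
        assert biological(x) == implant(x)

No input/output test distinguishes the two; whether that kind of equivalence
settles the question about consciousness is exactly what is at issue in this
thread.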


-- 
Stathis Papaioannou



Re: How to live forever

2018-03-26 Thread Lawrence Crowell
On Monday, March 26, 2018 at 11:01:27 AM UTC-6, Bruno Marchal wrote:
>
>
> On 25 Mar 2018, at 17:34, Lawrence Crowell  > wrote:
>
>
>
> On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> Yes, and if someone argues that consciousness is not maintained whatever 
>> the substitution level is, it is up to them to explain what in the 
>> brain+local-environment is not Turing emulable. I see only the “wave packet 
>> reduction”, but I don’t see any evidence for that reduction, and it would 
>> make quantum mechanics inconsistent (I think) and not usable in cosmology, 
>> nor in quantum information science. To believe that the brain is not a 
>> “natural” machine is a bit like believing in some magic. Why not, but where 
>> is the evidence?
>>
>>
>> Bruno
>>
>
> There are a couple of things running around here. One involves brains and 
> minds and the other wave function reduction. 
>
> The issue of uploading brains or mapping them comes into the problem with 
> the NP-complete problem of partitioning graphs. I like to think of this 
> according to tensor spaces of states, such as with MERA (multi-scale 
> entanglement renormalization ansatz) tensor networks. The AdS_3 example 
> with H^2 spatial surface is seen in the diagram below.
>
> [diagram omitted in the archive]
>
> This network has the highest complexity for the pentagonal tessellation 
> for these are honeycombs of the groups H3, H4, H5 corresponding to the 
> pentagon, dodecahedron, and the 4-dim icosahedron or 120/600 cells. These 
> groups will tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface 
> embedded in AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 
> groups with the Zamolodchikov eigenvalues or masses. 5-fold structures have 
> connections to the golden mean, and the Zamolodchikov quaternions are 
> representations of the golden mean quaternions. A quantum error correction 
> code (QECC) defines a projector onto each of these partitioned elements, 
> but (without going into some deep mathematics) this is not computable in a 
> root system because there is no Galois field extension, which gives that 
> the QECC is not NP-complete.  
>
> This of course is work I am doing with respect to the problem of unitarity 
> in quantum black holes and holography. It may have some connection with 
> more ordinary quantum mechanics and measurement. The action of a 
> measurement is a process whereby a set of quantum states code some other 
> set of quantum states, where usually the number of the measuring states is 
> far larger than the measured states. The quantum measurement problem may 
> have some connection to the above, and further it has some qualitative 
> similarity to self-reference. This may then mean the proposition P = NP or 
> P =/= NP is not provable, but where maybe specific examples of 
> NP/NP-complete algorithms as not-P can be proven. 
>
> This further might connect with the whole idea of up-loading minds into 
> computers. Brains and their states are not just localized states but 
> networks, and it could well be that this is not tractable. I paste in below 
> a review paper on graph partitioning. This is just one possible theoretical 
> obstruction, and if you plan on actually "bending metal" on this the 
> problems will doubtless multiply like bunnies in spring. 
>
> As a general rule once these threads get past 100 I tend not to post any 
> more. It becomes too annoying to find my way around them.
>
>
>
>
> That is interesting, and might even help later to recover notions like 
> space, but to keep the distinction between the communicable and the 
> non-communicable parts of the machine’s modes, which is needed for the 
> mind-body problem, we have to extract such structure in some special way, 
> using the mathematics of self-reference. I am unfortunately not that far! It 
> might take some generations of mathematicians.
>
> Bruno
>

The non-communicating regions can be in a quantum entanglement.

LC 



Re: How to live forever

2018-03-26 Thread Brent Meeker



On 3/26/2018 10:10 AM, Bruno Marchal wrote:
You retreat into what is possible.  My question is much more directly 
pragmatic.  If I actually made a silicon based replacement for your 
brain that had the same input/output would your consciousness be 
different if the replacement processed the information 
differently...and how could you or we know?


Not necessarily, unless you mean all my possible behaviour, including 
the infinite one. For a finite time, a zombie might be able to 
imitate me, or some of my behaviour, well enough to fool people.


The level of substitution is more precise than “behaviour”, as what is 
maintained is the behaviour of the relevant entities at some level; 
this might include all the internal inputs and outputs of all 
particular neurons. I think I have already said that I tend to think 
that the substitution level is the particles/waves up to the Heisenberg 
uncertainty.


The problem with that level of substitution is that the wave-function of 
the brain (or even of one neuron) is an extreme idealization.  The brain 
(or neuron) as a quantum system is not isolated and will be entangled with 
a lot of the environment, including that outside the body.  There is no 
such thing as "the wave-function of a brain".
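
The point has a minimal quantitative illustration (a two-qubit sketch using
numpy; nothing neural is being modelled): a subsystem of an entangled pure
state has no pure state, i.e. no wave-function, of its own.

    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2): a pure state of the joint pair
    psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())

    # Trace out the second qubit to get the state of the first alone
    rho1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    print(np.trace(rho @ rho).real)    # 1.0: the pair as a whole is pure
    print(np.trace(rho1 @ rho1).real)  # 0.5: the part is maximally mixed

A brain entangled with its environment is in the position of that single
qubit, only astronomically scaled up.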


But I raised the question precisely because of this extreme disconnect 
between discussions of the "level of substitution" and the 
"consciousness as detected by behavior".


Brent



Re: How to live forever

2018-03-26 Thread Bruno Marchal

> On 25 Mar 2018, at 19:57, Brent Meeker  wrote:
> 
> 
> 
> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>> 
>>> On 21 Mar 2018, at 22:56, Brent Meeker >> > wrote:
>>> 
>>> 
>>> 
>>> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
 
 On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker > wrote:
 
 
 On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker  > wrote:
> 
> 
> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>> 
>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker > > wrote:
>> 
>> 
>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
 The interesting thing is that you can draw conclusions about 
 consciousness
 without being able to define it or detect it.
>>> I agree.
>>> 
 The claim is that IF an entity
 is conscious THEN its consciousness will be preserved if brain 
 function is
 preserved despite changing the brain substrate.
>>> Ok, this is computationalism. I also bet on computationalism, but I
>>> think we must proceed with caution and not forget that we are just
>>> assuming this to be true. Your thought experiment is convincing but is
>>> not a proof. You do expose something that I agree with: that
>>> non-computationalism sounds silly.
>> 
>> But does it sound so silly if we propose substituting a completely 
>> different kind of computer, e.g. von Neumann architecture or one that 
>> just records everything instead of an episodic associative memory, for 
>> the brain.  The Church-Turing conjecture says it can compute the same 
>> functions.  But does it instantiate the same consciousness.  My 
>> intuition is that it would be "conscious" but in some different way; for 
>> example by having the kind of memory you would have if you could review 
>> a movie of any interval in your past.
>> 
>> I think it would be conscious in the same way if you replaced neural 
>> tissue with a black box that interacted with the surrounding tissue in 
>> the same way. It doesn’t matter what is in the black box; it could even 
>> work by magic.
> 
> Then why draw the line at "surrounding tissue".  Why not the external 
> environment? 
> 
> Keep expanding the part that is replaced and you replace the whole brain 
> and the whole organism.
> 
> Are you saying you can't imagine being "conscious" but in a different way?
> 
> I think it is possible but I don’t think it could happen if my neurones 
> were replaced by a functionally equivalent component. If it’s 
> functionally equivalent, my behaviour would be unchanged,
 
 I agree with that.  But you've already supposed that functional 
 equivalence at the behavior level implies preservation of consciousness.  
 So what I'm considering is replacements in the 
 brain far above the neuron level, say at the level of whole functional 
 groups of the brain, e.g. the visual system, the auditory system, the 
 memory,...  Would functional equivalence at the body/brain interface then 
 still imply consciousness equivalence?
 
 I think it would, because I don’t think there are isolated consciousness 
 modules in the brain. A large enough change in visual experience will be 
 noticed by the subject, who will report that things look different. This 
 could only happen if there is a change in the input to the language system 
 from the visual system; but we have assumed that the output from the 
 visual system is the same, and only the consciousness has changed, leading 
 to a contradiction.
>>> 
>>> But what about internal systems which are independent of perception...the 
>>> very reason Bruno wants to talk about dream states.  And I'm not 
>>> necessarily asking that behavior be identical...just that the body/brain 
>>> interface be the same.  The "brain" may be different in how it processes 
>>> input from the eyeballs and hence report verbally different perceptions.  
>>> In other words, I'm wondering how much computationalism constrains 
>>> consciousness.  My intuition is that there could be a lot of difference in 
>>> consciousness depending on how different perceptual inputs are processed 
>>> and/or merged and how internal simulations are handled.  To take a crude 
>>> example, would it matter if the computer-brain was programmed in a 
>>> functional language like LISP, an object-oriented language like Ruby, or a 
>>> neural network?  Of course Church-Turing says they all compute the same set 
>>> of functions, but they don't do it the same way
>> 
>> They can do it 

Re: How to live forever

2018-03-26 Thread Bruno Marchal

> On 25 Mar 2018, at 18:02, John Clark  wrote:
> 
> 
> 
> On Sun, Mar 25, 2018 at 5:01 AM, Bruno Marchal  > wrote:
> 
> >> if it was all based on the observation of behavior then what you'd end 
> up with is a scientific theory about intelligence not consciousness.
> 
> > That is right. But if you agree that consciousness is a form of 
> non-provable but also non-doubtable knowledge, and if you agree with the 
> standard definition of knowledge in philosophy of mind, then it is a theorem 
> that Peano Arithmetic is conscious.
> Perhaps rocks are intelligent and they just choose not to display it, if so 
> then rocks are conscious too.  Perhaps Peano Arithmetic is intelligent and it 
> just chooses not to display it,
> 

No, Peano arithmetic (PA) displays intelligence, if you listen carefully to 
what PA says about PA’s ability to choose between saying something or staying 
mute.



> if so then Peano Arithmetic is conscious too.
> 

I have few doubts about that.



> Or perhaps neither rocks nor Peano Arithmetic nor you is conscious and only I 
> am. Perhaps, but I doubt it.
> 
> 

I have no evidence that a rock can think in any genuine way relative to me, 
but the question “can a rock think” is ambiguous, because a rock is only a map 
of our most probable future self-localisation in the multi-dream which is 
emulated by very elementary arithmetic. There are no rocks per se, only 
invariant and changing patterns in numbers involved in long/deep computations.

Bruno





> 
> John K Clark
> 
> 
> 
> 
> 



Re: How to live forever

2018-03-26 Thread Bruno Marchal

> On 25 Mar 2018, at 17:34, Lawrence Crowell  
> wrote:
> 
> 
> 
> On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
> 
>> Yes, and if someone argues that consciousness is not maintained whatever the 
>> substitution level is, it is up to them to explain what in the 
>> brain+local-environment is not Turing emulable. I see only the “wave packet 
>> reduction”, but I don’t see any evidence for that reduction, and it would 
>> make quantum mechanics inconsistent (I think) and not usable in cosmology, 
>> nor in quantum information science. To believe that the brain is not a 
>> “natural” machine is a bit like believing in some magic. Why not, but where 
>> is the evidence?
> 
> Bruno
> 
> There are a couple of things running around here. One involves brains and 
> minds and the other wave function reduction. 
> 
> The issue of uploading brains or mapping them comes into the problem with the 
> NP-complete problem of partitioning graphs. I like to think of this according 
> to tensor spaces of states, such as with MERA (multi-scale entanglement 
> renormalization ansatz) tensor networks. The AdS_3 example with H^2 spatial 
> surface is seen in the diagram below.
> 
> [diagram omitted in the archive]
> 
> This network has the highest complexity for the pentagonal tessellation for 
> these are honeycombs of the groups H3, H4, H5 corresponding to the pentagon, 
> dodecahedron, and the 4-dim icosahedron or 120/600 cells. These groups will 
> tessellate a 2, 3 and 4 dimensional spatial hyperbolic surface embedded in 
> AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 groups with 
> the Zamolodchikov eigenvalues or masses. 5-fold structures have connections 
> to the golden mean, and the Zamolodchikov quaternions are representations of 
> the golden mean quaternions. A quantum error correction code (QECC) defines a 
> projector onto each of these partitioned elements, but (without going into 
> some deep mathematics) this is not computable in a root system because there 
> is no Galois field extension, which gives that the QECC is not NP-complete.  
> 
> This of course is work I am doing with respect to the problem of unitarity in 
> quantum black holes and holography. It may have some connection with more 
> ordinary quantum mechanics and measurement. The action of a measurement is a 
> process whereby a set of quantum states code some other set of quantum 
> states, where usually the number of the measuring states is far larger than 
> the measured states. The quantum measurement problem may have some connection 
> to the above, and further it has some qualitative similarity to 
> self-reference. This may then mean the proposition P = NP or P =/= NP is not 
> provable, but where maybe specific examples of NP/NP-complete algorithms as 
> not-P can be proven. 
> 
> This further might connect with the whole idea of up-loading minds into 
> computers. Brains and their states are not just localized states but 
> networks, and it could well be that this is not tractable. I paste in below a 
> review paper on graph partitioning. This is just one possible theoretical 
> obstruction, and if you plan on actually "bending metal" on this the problems 
> will doubtless multiply like bunnies in spring. 
> 
> As a general rule once these threads get past 100 I tend not to post any 
> more. It becomes too annoying to find my way around them.



That is interesting, and might even help later to recover notions like space, 
but to keep the distinction between the communicable and the non-communicable 
parts of the machine’s modes, which is needed for the mind-body problem, we have 
to extract such structure in some special way, using the mathematics of 
self-reference. I am unfortunately not that far! It might take some generations 
of mathematicians.

Bruno





> 
> LC
> 
> https://arxiv.org/abs/1311.3144
> Recent Advances in Graph Partitioning
> Aydin Buluc, Henning Meyerhenke, Ilya Safro, Peter Sanders, Christian Schulz
> (Submitted on 13 Nov 2013 (v1), last revised 3 Feb 2015 (this version, v3))
> We survey recent trends in practical algorithms for balanced graph
> partitioning together with applications and future research directions.
> Subjects: Data Structures and Algorithms (cs.DS); Distributed, Parallel,
> and Cluster Computing (cs.DC); Combinatorics (math.CO)
> Cite as: arXiv:1311.3144 [cs.DS] (or arXiv:1311.3144v3 for this version)
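
To make the combinatorial obstruction concrete, a toy sketch (exhaustive
balanced bisection on a small random graph; this illustrates the size of the
search space, not the practical algorithms surveyed in the paper):

    import random
    from itertools import combinations

    def min_balanced_cut(n, edges):
        # Try every balanced bisection: C(n, n//2) candidate splits.
        best = None
        for left in combinations(range(n), n // 2):
            in_left = set(left)
            cut = sum(1 for u, v in edges if (u in in_left) != (v in in_left))
            best = cut if best is None else min(best, cut)
        return best

    random.seed(0)
    n = 16
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if random.random() < 0.3]
    print(min_balanced_cut(n, edges))
    # C(16, 8) = 12,870 splits is trivial; C(100, 50) is already ~1e29,
    # and balanced graph partitioning is NP-hard in general, hence the
    # heuristic algorithms the survey covers.

At the scale of a brain graph (~10^11 neurons) any exhaustive treatment is out
of the question, which is the obstruction pointed to above.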

Re: How to live forever

2018-03-26 Thread Bruno Marchal

> On 25 Mar 2018, at 11:41, Stathis Papaioannou  wrote:
> 
> 
> 
> On 25 March 2018 at 20:18, Bruno Marchal  > wrote:
> 
>> On 21 Mar 2018, at 23:49, Stathis Papaioannou > > wrote:
>> 
>> 
>> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett > > wrote:
>> From: Stathis Papaioannou >
>>> 
>>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett >> > wrote:
>>> From: Stathis Papaioannou < stath...@gmail.com 
>>> >
>>> 
 On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett < 
 bhkell...@optusnet.com.au 
 > wrote:
 
 If the theory is that if the observable behaviour of the brain is 
 replicated, then consciousness will also be replicated, then the clear 
 corollary is that consciousness can be inferred from observable behaviour. 
 Which implies that I can be as certain of the consciousness of other 
 people as I am of my own. This seems to do some violence to the 1p/1pp/3p 
 distinctions that computationalism relies on so much: only 1p is "certainly 
 certain". But if I can reliably infer consciousness in others, then other 
 things can be as certain as 1p experiences
 
>>> 
 You can’t reliably infer consciousness in others. What you can infer is 
 that whatever consciousness an entity has, it will be preserved if 
 functionally identical substitutions in its brain 
 are made.
>>> 
>>> 
>>> You have that backwards. You can infer consciousness in others, by 
>>> observing their behaviour. The alternative would be solipsism. Now, while 
>>> you can't prove or disprove solipsism in a mathematical sense, you can 
>>> reject solipsism as a useless theory, since it tells you nothing about 
>>> anything. Whereas science acts on the available evidence -- observations of 
>>> behaviour in this case.
>>> 
>>> But we have no evidence that consciousness would be preserved under 
>>> functionally identical substitutions in the brain. Consciousness may be a 
>>> global affair, so functionally equivalence may not be achievable, or even 
>>> definable, within the context of a conscious brain. Can you map the 
>>> functionality of even a single neuron? You are assuming that you can, but 
>>> if that function is global, then you probably can't. There is a fair amount 
>>> of glibness in your assumption that consciousness will be preserved under 
>>> such substitutions.
>>> 
>>> 
>>> 
 You can’t know if a mouse is conscious, but you can know that if mouse 
 neurones are replaced with functionally identical electronic neurones its 
 behaviour will be the same and any consciousness it may have will also be 
 the same.
>>> 
>>> You cannot know this without actually doing the substitution and observing 
>>> the results.
>>> 
>>> So do you think that it is possible to replace the neurones with 
>>> functionally identical neurones (same output for same input) and the 
>>> mouse’s behaviour would *not* be the same?
>> 
>> Individual neurons may not be the appropriate functional unit.
>> 
>> It seems that you might be close to circularity -- neural functionality 
>> includes consciousness. So if I maintain neural functionality, I will 
>> maintain consciousness.
>> 
>> The only assumption is that the brain is somehow responsible for 
>> consciousness.
> 
> Consciousness is an attribute of the abstract immaterial person. The locally 
> material brain is only responsible for the relative manifestation of 
> consciousness. The computations do not create consciousness, but channel 
> its possible differentiations. But that should not change your point.
> 
> But you start off with the assumption that replacing your brain with a 
> machine will preserve consciousness - "comp”.

Yes.



> From this assumption, the rest follows, including the conclusion that there 
> isn't actually a primary physical brain. 


Yes. Then the evidence is more for computationalism than for materialism, 
given the “quantum evidence”. If the logic of the observables of the machine in 
arithmetic departs too much (not recoverable from simple deformation or 
representation selection principles) from the observations and empirical 
physics, we might get evidence that either comp is false, or we are in a 
malevolent simulation (I doubt this). Up to now, mechanism fits the 
observations with few ontological commitments (just one universal machinery, 
like very elementary arithmetic). Physics does not fit the observations except 
by abstracting the mind or identifying it with some physical “volume”, say, 
which remains possible with special infinities (Not 

Re: How to live forever

2018-03-25 Thread Stathis Papaioannou
On 26 March 2018 at 04:57, Brent Meeker  wrote:

>
>
> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>
>
> On 21 Mar 2018, at 22:56, Brent Meeker  wrote:
>
>
>
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>
>
> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker  wrote:
>
>>
>>
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>
>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker 
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker 
>>> wrote:
>>>


 On 3/20/2018 3:58 AM, Telmo Menezes wrote:

 The interesting thing is that you can draw conclusions about consciousness
 without being able to define it or detect it.

 I agree.


 The claim is that IF an entity
 is conscious THEN its consciousness will be preserved if brain function is
 preserved despite changing the brain substrate.

 Ok, this is computationalism. I also bet on computationalism, but I
 think we must proceed with caution and not forget that we are just
 assuming this to be true. Your thought experiment is convincing but is
 not a proof. You do expose something that I agree with: that
 non-computationalism sounds silly.

 But does it sound so silly if we propose substituting a completely
 different kind of computer, e.g. von Neumann architecture or one that just
 records everything instead of an episodic associative memory, for the
 brain.  The Church-Turing conjecture says it can compute the same
 functions.  But does it instantiate the same consciousness.  My intuition
 is that it would be "conscious" but in some different way; for example by
 having the kind of memory you would have if you could review a movie of
 any interval in your past.

>>>
>>> I think it would be conscious in the same way if you replaced neural
>>> tissue with a black box that interacted with the surrounding tissue in the
>>> same way. It doesn’t matter what is in the black box; it could even work by
>>> magic.
>>>
>>>
>>> Then why draw the line at "surrounding tissue"? Why not the external
>>> environment?
>>>
>>
>> Keep expanding the part that is replaced and you replace the whole brain
>> and the whole organism.
>>
>> Are you saying you can't imagine being "conscious" but in a different way?
>>>
>>
>> I think it is possible but I don’t think it could happen if my neurones
>> were replaced by a functionally equivalent component. If it’s functionally
>> equivalent, my behaviour would be unchanged,
>>
>>
>> I agree with that.  But you've already supposed that functional
>> equivalence at the behavior level implies preservation of consciousness.
>> So what I'm considering is replacements in the brain far above the neuron
>> level, say at the level of whole functional groups of the brain, e.g. the
>> visual system, the auditory system, the memory,...  Would functional
>> equivalence at the body/brain interface then still imply consciousness
>> equivalence?
>>
>
> I think it would, because I don’t think there are isolated consciousness
> modules in the brain. A large enough change in visual experience will be
> noticed by the subject, who will report that things look different. This
> could only happen if there is a change in the input to the language system
> from the visual system; but we have assumed that the output from the visual
> system is the same, and only the consciousness has changed, leading to a
> contradiction.
>
>
> But what about internal systems which are independent of perception...the
> very reason Bruno wants to talk about dream states.  And I'm not
> necessarily asking that behavior be identical...just that the body/brain
> interface be the same.  The "brain" may be different in how it processes
> input from the eyeballs and hence verbally report different perceptions.
> In other words, I'm wondering how much computationalism constrains
> consciousness.  My intuition is that there could be a lot of difference in
> consciousness depending on how different perceptual inputs are processed
> and/or merged and how internal simulations are handled.  To take a crude
> example, would it matter if the computer-brain was programmed in a
> functional language like LISP, an object-oriented language like Ruby, or a
> neural network?  Of course Church-Turing says they all compute the same set
> of functions, but they don't do it the same way
>
>
> They can do it in the same way. They will not do it in the same way with a
> compiler, but will do it in the same way when you implement an interpreter
> in another interpreter. The extensional CT (in terms of which functions are
> calculated) entails the intensional CT (in terms of which computations can
> be processed). The Babbage machine could emulate a quantum brain. It involves a
> relative slow-down, but the subject 

Re: How to live forever

2018-03-25 Thread Brent Meeker



On 3/25/2018 2:15 AM, Bruno Marchal wrote:


On 21 Mar 2018, at 22:56, Brent Meeker > wrote:




On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:


On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker > wrote:




On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:

On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker
> wrote:



On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:


On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker
> wrote:



On 3/20/2018 3:58 AM, Telmo Menezes wrote:

The interesting thing is that you can draw conclusions about 
consciousness
without being able to define it or detect it.

I agree.


The claim is that IF an entity
is conscious THEN its consciousness will be preserved if brain 
function is
preserved despite changing the brain substrate.

Ok, this is computationalism. I also bet on computationalism, but I
think we must proceed with caution and not forget that we are just
assuming this to be true. Your thought experiment is convincing but 
is
not a proof. You do expose something that I agree with: that
non-computationalism sounds silly.

But does it sound so silly if we propose substituting
a completely different kind of computer, e.g. von
Neumann architecture or one that just records
everything instead of an episodic associative memory,
for the brain. The Church-Turing conjecture says it
can compute the same functions.  But does it
instantiate the same consciousness.  My intuition is
that it would be "conscious" but in some different
way; for example by having the kind of memory you
would have if you could review a movie of any
interval in your past.


I think it would be conscious in the same way if you
replaced neural tissue with a black box that interacted
with the surrounding tissue in the same way. It doesn’t
matter what is in the black box; it could even work by magic.


Then why draw the line at "surrounding tissue"? Why not
the external environment?


Keep expanding the part that is replaced and you replace the
whole brain and the whole organism.

Are you saying you can't imagine being "conscious" but in a
different way?


I think it is possible but I don’t think it could happen if my
neurones were replaced by a functionally equivalent component.
If it’s functionally equivalent, my behaviour would be unchanged,


I agree with that.  But you've already supposed that functional
equivalence at the behavior level implies preservation of
consciousness.  So what I'm considering is replacements in the
brain far above the neuron level, say at the level of whole
functional groups of the brain, e.g. the visual system, the
auditory system, the memory,...  Would functional equivalence at
the body/brain interface then still imply consciousness equivalence?


I think it would, because I don’t think there are isolated 
consciousness modules in the brain. A large enough change in visual 
experience will be noticed by the subject, who will report that 
things look different. This could only happen if there is a change 
in the input to the language system from the visual system; but we 
have assumed that the output from the visual system is the same, and 
only the consciousness has changed, leading to a contradiction.


But what about internal systems which are independent of 
perception...the very reason Bruno wants to talk about dream states.  
And I'm not necessarily asking that behavior be identical...just that 
the body/brain interface be the same.  The "brain" may be different 
in how it processes input from the eyeballs and hence verbally report 
different perceptions.  In other words, I'm wondering how much 
computationalism constrains consciousness.  My intuition is that there 
could be a lot of difference in consciousness depending on how 
different perceptual inputs are processed and/or merged and how 
internal simulations are handled.  To take a crude example, would it 
matter if the computer-brain was programmed in a functional language 
like LISP, an object-oriented language like Ruby, or a neural 
network? Of course Church-Turing says they all compute the same set 
of functions, but they don't do it the same way


They can do it in the same way. They will not do it in the same way 
with a compiler, but will do it in the same way when you implement an 
interpreter in another interpreter. The extensional CT (in terms of 
which functions are calculated) entails the 

Re: How to live forever

2018-03-25 Thread John Clark
On Sun, Mar 25, 2018 at 5:01 AM, Bruno Marchal  wrote:

​>> ​
>> if it was all based on the observation of behavior then what you'd end up
>> with is a scientific theory about intelligence not consciousness.
>
>
> That is right. But if you agree that consciousness is a form of
> non-provable but also non-doubtable knowledge, and if you agree with the
> standard definition of knowledge in philosophy of mind, then it is a
> theorem that Peano Arithmetic is conscious.
>
Perhaps rocks are intelligent and they just choose not to display it, if so
then rocks are conscious too.  Perhaps Peano Arithmetic is intelligent and
it just chooses not to display it, if so then Peano Arithmetic is conscious
too. Or perhaps neither rocks nor Peano Arithmetic nor you are conscious and
only I am. Perhaps, but I doubt it.

​ John K Clark​



Re: How to live forever

2018-03-25 Thread Lawrence Crowell


On Sunday, March 25, 2018 at 5:01:59 AM UTC-6, Bruno Marchal wrote:
>
>
> Yes, and if someone argues that consciousness is not maintained whatever 
> the substitution level is, it is up to them to explain what in the 
> brain+local-environment is not Turing emulable. I see only the “wave packet 
> reduction”, but I don’t see any evidence for that reduction, and it would 
> make Quantum mechanics inconsistent (I think) and not usable in cosmology, 
> nor in quantum information science. To believe that the brain is not a 
> “natural” machine is a bit like believing in some magic. Why not, but where 
> is the evidence?
>
>
> Bruno
>

There are a couple of things running around here. One involves brains and 
minds and the other wave function reduction. 

The issue of uploading brains or mapping them runs into the NP-complete 
problem of partitioning graphs. I like to think of this according to tensor 
spaces of states, such as with MERA (multi-scale entanglement renormalization 
ansatz) tensor networks. The AdS_3 example with H^2 spatial surface is seen in 
the diagram below.

[diagram: MERA tensor network on the H^2 spatial surface of AdS_3]

This network has the highest complexity for the pentagonal tessellation, since 
these are honeycombs of the groups H3, H4, H5 corresponding to the 
pentagon, dodecahedron, and the 4-dim icosahedron or 120/600 cells. These 
groups will tessellate 2-, 3- and 4-dimensional spatial hyperbolic surfaces 
embedded in AdS_3, AdS_4 and AdS_5. These define half the weights of the E8 
groups with the Zamolodchikov eigenvalues or masses. 5-fold structures have 
connections to the golden mean, and the Zamolodchikov quaternions are 
representations of the golden mean quaternions. A quantum error correction 
code (QECC) defines a projector onto each of these partitioned elements, 
but (without going into some deep mathematics) this is not computable in a 
root system because there is no Galois field extension, which means that 
the QECC is not NP-complete.  

This of course is work I am doing with respect to the problem of unitarity 
in quantum black holes and holography. It may have some connection with 
more ordinary quantum mechanics and measurement. The action of a 
measurement is a process whereby a set of quantum states code some other 
set of quantum states, where usually the number of the measuring states is 
far larger than the measured states. The quantum measurement problem may 
have some connection to the above, and further it has some qualitative 
similarity to self-reference. This may then mean that the proposition P = NP or 
P =/= NP is not provable, though perhaps specific examples of 
NP/NP-complete problems could be proven to lie outside P. 

This further might connect with the whole idea of uploading minds into 
computers. Brains and their states are not just localized states but 
networks, and it could well be that this is not tractable. I paste in below 
a review paper on graph partitioning. This is just one possible theoretical 
obstruction, and if you plan on actually "bending metal" on this the 
problems will doubtless multiply like bunnies in spring. 
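
For anyone who wants to see the combinatorial wall directly, here is a
minimal brute-force sketch (plain Python; the function and toy graph are
invented for illustration and have nothing to do with the MERA construction
above). Exact balanced bipartition must consider on the order of 2^n cuts,
which is why the practical algorithms surveyed in the paper below are
heuristics:

from itertools import combinations

def min_balanced_cut(n, edges):
    # Exhaustive minimum bisection: try every half/half split of the
    # n nodes (n even) and count the edges crossing each split.
    # The candidate set has C(n, n/2) members, so the cost blows up
    # exponentially -- the brute-force face of NP-hardness.
    best_cut, best_left = None, None
    for left in combinations(range(n), n // 2):
        left_set = set(left)
        cut = sum(1 for u, v in edges if (u in left_set) != (v in left_set))
        if best_cut is None or cut < best_cut:
            best_cut, best_left = cut, left
    return best_cut, list(best_left)

# Toy graph: two 4-cliques joined by one bridge edge.
def clique(vs):
    return [(u, v) for i, u in enumerate(vs) for v in vs[i + 1:]]

edges = clique([0, 1, 2, 3]) + clique([4, 5, 6, 7]) + [(3, 4)]
print(min_balanced_cut(8, edges))   # -> (1, [0, 1, 2, 3])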

As a general rule, once these threads get past 100 posts I tend not to post any 
more. It becomes too annoying to find my way around them.

LC

https://arxiv.org/abs/1311.3144
Recent Advances in Graph Partitioning
Aydin Buluc, Henning Meyerhenke, Ilya Safro, Peter Sanders, Christian Schulz
(Submitted on 13 Nov 2013 (v1), last revised 3 Feb 2015 (this version, v3))

We survey recent trends in practical algorithms for balanced graph 
partitioning together with applications and future research directions.

Subjects: Data Structures and Algorithms (cs.DS); Distributed, Parallel, 
and Cluster Computing (cs.DC); Combinatorics (math.CO)
Cite as: arXiv:1311.3144 [cs.DS] (or arXiv:1311.3144v3 [cs.DS] for this version)




Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 23 Mar 2018, at 02:46, Stathis Papaioannou  wrote:
> 
> 
> On Fri, 23 Mar 2018 at 11:32 am, Bruce Kellett  > wrote:
> From: Stathis Papaioannou >
>> 
>> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett > > wrote:
>> From: Stathis Papaioannou < stath...@gmail.com 
>> >
>>> 
>>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett < 
>>> bhkell...@optusnet.com.au 
>>> > wrote:
>>> From: Stathis Papaioannou < stath...@gmail.com 
>>> >
>>> 
 On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett < 
 bhkell...@optusnet.com.au 
 > wrote:
 
 If the theory is that if the observable behaviour of the brain is 
 replicated, then consciousness will also be replicated, then the clear 
 corollary is that consciousness can be inferred from observable behaviour. 
 Which implies that I can be as certain of the consciousness of other 
 people as I am of my own. This seems to do some violence to the 1p/1pp/3p 
 distinctions that computationalism rely on so much: only 1p is "certainly 
 certain". But if I can reliably infer consciousness in others, then other 
 things can be as certain as 1p experiences
 
>>> 
 You can’t reliably infer consciousness in others. What you can infer is 
 that whatever consciousness an entity has, it will be preserved if 
 functionally identical 
 substitutions in its brain are made.
>>> 
>>> 
>>> You have that backwards. You can infer consciousness in others, by 
>>> observing their behaviour. The alternative would be solipsism. Now, while 
>>> you can't prove or disprove solipsism in a mathematical sense, you can 
>>> reject solipsism as a useless theory, since it tells you nothing about 
>>> anything. Whereas science acts on the available evidence -- observations of 
>>> behaviour in this case.
>>> 
>>> But we have no evidence that consciousness would be preserved under 
>>> functionally identical substitutions in the brain. Consciousness may be a 
>>> global affair, so functional equivalence may not be achievable, or even 
>>> definable, within the context of a conscious brain. Can you map the 
>>> functionality of even a single neuron? You are assuming that you can, but 
>>> if that function is global, then you probably can't. There is a fair amount 
>>> of glibness in your assumption that consciousness will be preserved under 
>>> such substitutions.
>>> 
>>> 
 You can’t know if a mouse is conscious, but you can know that if mouse 
 neurones are replaced with functionally identical electronic neurones its 
 behaviour will be the same and any consciousness it may have will also be 
 the same.
>>> 
>>> You cannot know this without actually doing the substitution and observing 
>>> the results.
>>> 
>>> So do you think that it is possible to replace the neurones with 
>>> functionally identical neurones (same output for same input) and the 
>>> mouse’s behaviour would *not* be the same?
>> 
>> Individual neurons may not be the appropriate functional unit.
>> 
>> It seems that you might be close to circularity -- neural functionality 
>> includes consciousness. So if I maintain neural functionality, I will 
>> maintain consciousness.
>> 
>> The only assumption is that the brain is somehow responsible for 
>> consciousness. The argument I am making is that if any part of the brain is 
>> replaced with a functionally identical non-biological part, engineered to 
>> replicate its interactions with the surrounding tissue,  consciousness will 
>> also necessarily be replicated; for if not, an absurd situation would 
>> result, whereby consciousness can radically change but the subject not 
>> notice, or consciousness decouple completely from behaviour, or 
>> consciousness flip on or off with the change of one subatomic particle.
> 
> There still seems to be some circularity there -- consciousness is part of 
> the functionality of the brain, or parts thereof, so maintaining 
> functionality requires maintenance of consciousness.
> 
> By functionality here I specifically mean the observable behaviour of the 
> brain. Consciousness is special in that it is not directly observable as, for 
> example, the potential difference across a cell membrane or the contraction 
> of muscle is.
> 
> One would really need some independent measure of functionality, independent 
> of consciousness. And the claim would be that reproducing local functionality 
> would maintain consciousness. I do not see that that could readily be tested, 
> since mapping all the 

Re: How to live forever

2018-03-25 Thread Stathis Papaioannou
On 25 March 2018 at 20:18, Bruno Marchal  wrote:

>
> On 21 Mar 2018, at 23:49, Stathis Papaioannou  wrote:
>
>
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett 
> wrote:
>
>> From: Stathis Papaioannou 
>>
>>
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett 
>> wrote:
>>
>>> From: Stathis Papaioannou < stath...@gmail.com>
>>>
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>>> bhkell...@optusnet.com.au> wrote:
>>>

 If the theory is that if the observable behaviour of the brain is
 replicated, then consciousness will also be replicated, then the clear
 corollary is that consciousness can be inferred from observable behaviour.
 Which implies that I can be as certain of the consciousness of other people
 as I am of my own. This seems to do some violence to the 1p/1pp/3p
 distinctions that computationalism rely on so much: only 1p is "certainly
 certain". But if I can reliably infer consciousness in others, then other
 things can be as certain as 1p experiences

>>>
>>> You can’t reliably infer consciousness in others. What you can infer is
>>> that whatever consciousness an entity has, it will be preserved if
>>> functionally identical substitutions in its brain are made.
>>>
>>>
>>> You have that backwards. You can infer consciousness in others, by
>>> observing their behaviour. The alternative would be solipsism. Now, while
>>> you can't prove or disprove solipsism in a mathematical sense, you can
>>> reject solipsism as a useless theory, since it tells you nothing about
>>> anything. Whereas science acts on the available evidence -- observations of
>>> behaviour in this case.
>>>
>>> But we have no evidence that consciousness would be preserved under
>>> functionally identical substitutions in the brain. Consciousness may be a
>>> global affair, so functional equivalence may not be achievable, or even
>>> definable, within the context of a conscious brain. Can you map the
>>> functionality of even a single neuron? You are assuming that you can, but
>>> if that function is global, then you probably can't. There is a fair amount
>>> of glibness in your assumption that consciousness will be preserved under
>>> such substitutions.
>>>
>>>
>>>
>>> You can’t know if a mouse is conscious, but you can know that if mouse
>>> neurones are replaced with functionally identical electronic neurones its
>>> behaviour will be the same and any consciousness it may have will also be
>>> the same.
>>>
>>>
>>> You cannot know this without actually doing the substitution and
>>> observing the results.
>>>
>>
>> So do you think that it is possible to replace the neurones with
>> functionally identical neurones (same output for same input) and the
>> mouse’s behaviour would *not* be the same?
>>
>>
>> Individual neurons may not be the appropriate functional unit.
>>
>> It seems that you might be close to circularity -- neural functionality
>> includes consciousness. So if I maintain neural functionality, I will
>> maintain consciousness.
>>
>
> The only assumption is that the brain is somehow responsible for
> consciousness.
>
>
> Consciousness is an attribute of the abstract immaterial person. The
> locally material brain is only responsible for the relative manifestation
> of consciousness. The computations do not create consciousness, but
> channel its possible differentiation. But that should not change your point.
>

But you start off with the assumption that replacing your brain with a
machine will preserve consciousness - "comp". From this assumption, the
rest follows, including the conclusion that there isn't actually a primary
physical brain.

> The argument I am making is that if any part of the brain is replaced with
> a functionally identical non-biological part, engineered to replicate its
> interactions with the surrounding tissue,  consciousness will also
> necessarily be replicated; for if not, an absurd situation would result,
> whereby consciousness can radically change but the subject not notice, or
> consciousness decouple completely from behaviour, or consciousness flip on
> or off with the change of one subatomic particle.
>
>
> OK,
>
> Bruno
>
>
>
> --
> Stathis Papaioannou
>

Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 21 Mar 2018, at 23:49, Stathis Papaioannou  wrote:
> 
> 
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett  > wrote:
> From: Stathis Papaioannou >
>> 
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett > > wrote:
>> From: Stathis Papaioannou < stath...@gmail.com 
>> >
>> 
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett < 
>>> bhkell...@optusnet.com.au 
>>> > wrote:
>>> 
>>> If the theory is that if the observable behaviour of the brain is 
>>> replicated, then consciousness will also be replicated, then the clear 
>>> corollary is that consciousness can be inferred from observable behaviour. 
>>> Which implies that I can be as certain of the consciousness of other people 
>>> as I am of my own. This seems to do some violence to the 1p/1pp/3p 
>>> distinctions that computationalism rely on so much: 
>>> only 1p is "certainly certain". But if I can reliably infer 
>>> consciousness in others, then other things can be as certain as 1p 
>>> experiences
>>> 
>> 
>>> You can’t reliably infer consciousness in others. What you can infer is 
>>> that whatever consciousness an entity has, it will be preserved if 
>>> functionally identical substitutions in its brain are made.
>> 
>> 
>> You have that backwards. You can infer consciousness in others, by observing 
>> their behaviour. The alternative would be solipsism. Now, while you can't 
>> prove or disprove solipsism in a mathematical sense, you can reject 
>> solipsism as a useless theory, since it tells you nothing about anything. 
>> Whereas science acts on the available evidence -- observations of behaviour 
>> in this case.
>> 
>> But we have no evidence that consciousness would be preserved under 
>> functionally identical substitutions in the brain. Consciousness may be a 
>> global affair, so functional equivalence may not be achievable, or even 
>> definable, within the context of a conscious brain. Can you map the 
>> functionality of even a single neuron? You are assuming that you can, but if 
>> that function is global, then you probably can't. There is a fair amount of 
>> glibness in your assumption that consciousness will be preserved under such 
>> substitutions.
>> 
>> 
>> 
>>> You can’t know if a mouse is conscious, but you can know that if mouse 
>>> neurones are replaced with functionally identical electronic neurones its 
>>> behaviour will be the same and any consciousness it may have will also be 
>>> the same.
>> 
>> You cannot know this without actually doing the substitution and observing 
>> the results.
>> 
>> So do you think that it is possible to replace the neurones with 
>> functionally identical neurones (same output for same input) and the mouse’s 
>> behaviour would *not* be the same?
> 
> Individual neurons may not be the appropriate functional unit.
> 
> It seems that you might be close to circularity -- neural functionality 
> includes consciousness. So if I maintain neural functionality, I will 
> maintain consciousness.
> 
> The only assumption is that the brain is somehow responsible for 
> consciousness.

Consciousness is an attribute of the abstract immaterial person. The locally 
material brain is only responsible for the relative manifestation of 
consciousness. The computations do not create consciousness, but channel its 
possible differentiation. But that should not change your point.



> The argument I am making is that if any part of the brain is replaced with a 
> functionally identical non-biological part, engineered to replicate its 
> interactions with the surrounding tissue,  consciousness will also 
> necessarily be replicated; for if not, an absurd situation would result, 
> whereby consciousness can radically change but the subject not notice, or 
> consciousness decouple completely from behaviour, or consciousness flip on or 
> off with the change of one subatomic particle.

OK,

Bruno



> -- 
> Stathis Papaioannou
> 


Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 21 Mar 2018, at 22:56, Brent Meeker  wrote:
> 
> 
> 
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>> 
>> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker > > wrote:
>> 
>> 
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker >> > wrote:
>>> 
>>> 
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
 
 On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker > wrote:
 
 
 On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>> The interesting thing is that you can draw conclusions about 
>> consciousness
>> without being able to define it or detect it.
> I agree.
> 
>> The claim is that IF an entity
>> is conscious THEN its consciousness will be preserved if brain function 
>> is
>> preserved despite changing the brain substrate.
> Ok, this is computationalism. I also bet on computationalism, but I
> think we must proceed with caution and not forget that we are just
> assuming this to be true. Your thought experiment is convincing but is
> not a proof. You do expose something that I agree with: that
> non-computationalism sounds silly.
 
 But does it sound so silly if we propose substituting a completely 
 different kind of computer, e.g. von Neumann architecture or one that 
 just records everything instead of an episodic associative memory, for 
 the brain.  The Church-Turing conjecture says it can compute the same 
 functions.  But does it instantiate the same consciousness.  My intuition 
 is that it would be "conscious" but in some different way; for example by 
 having the kind of memory you would have if you could review a movie of 
 any interval in your past.
 
 I think it would be conscious in the same way if you replaced neural 
 tissue with a black box that interacted with the surrounding tissue in the 
 same way. It doesn’t matter what is in the black box; it could even work 
 by magic.
>>> 
>>> Then why draw the line at "surrounding tissue"? Why not the external 
>>> environment? 
>>> 
>>> Keep expanding the part that is replaced and you replace the whole brain 
>>> and the whole organism.
>>> 
>>> Are you saying you can't imagine being "conscious" but in a different way?
>>> 
>>> I think it is possible but I don’t think it could happen if my neurones 
>>> were replaced by a functionally equivalent component. If it’s functionally 
>>> equivalent, my behaviour would be unchanged,
>> 
>> I agree with that.  But you've already supposed that functional equivalence 
>> at the behavior level implies preservation of consciousness.  So what I'm 
>> considering is replacements in the brain far above the neuron level, say at 
>> the level of whole functional groups of the brain, e.g. the visual system, 
>> the auditory system, the memory,...  Would functional equivalence at the 
>> body/brain interface then still imply consciousness equivalence?
>> 
>> I think it would, because I don’t think there are isolated consciousness 
>> modules in the brain. A large enough change in visual experience will be 
>> noticed by the subject, who will report that things look different. This 
>> could only happen if there is a change in the input to the language system 
>> from the visual system; but we have assumed that the output from the visual 
>> system is the same, and only the consciousness has changed, leading to a 
>> contradiction.
> 
> But what about internal systems which are independent of perception...the 
> very reason Bruno wants to talk about dream states.  And I'm not necessarily 
> asking that behavior be identical...just that the body/brain interface be the 
> same.  The "brain" may be different in how it processes input from the 
> eyeballs and hence verbally report different perceptions.  In other words, 
> I'm wondering how much computationalism constrains consciousness.  My 
> intuition is that there could be a lot of difference in consciousness 
> depending on how different perceptual inputs are processed and/or merged and 
> how internal simulations are handled.  To take a crude example, would it 
> matter if the computer-brain was programmed in a functional language like 
> LISP, an object-oriented language like Ruby, or a neural network?  Of course 
> Church-Turing says they all compute the same set of functions, but they don't 
> do it the same way

They can do it in the same way. They will not do it in the same way with a 
compiler, but will do it in the same way when you implement an interpreter in 
another interpreter. The extensional CT (in terms of which functions are 
calculated) entails the intensional CT (in terms of which computations can be 
processed). The Babbage machine could emulate a quantum 

Re: How to live forever

2018-03-25 Thread Bruno Marchal

> On 21 Mar 2018, at 01:35, John Clark  wrote:
> 
> On Tue, Mar 20, 2018 at 7:27 PM, Bruce Kellett  > wrote:
>  
> > You don't need an instrument that can give a clean yes/no answer to the 
> presence of consciousness to develop scientific theories about consciousness. 
> We can start with the observation that all normal healthy humans are 
> conscious, and that rocks and other inert objects are not conscious and work 
> from there to develop a science of consciousness, based on evidence from the 
> observation of behaviour.
> 
> But if it was all based on the observation of behavior then what you'd end up 
> with is a scientific theory about intelligence not consciousness.

That is right. But if you agree that consciousness is a form of non-provable 
but also non-doubtable knowledge, and if you agree with the standard definition 
of knowledge in philosophy of mind, then it is a theorem that Peano Arithmetic 
is conscious. To believe that Robinson Arithmetic is conscious too (plausibly 
even more) is more tricky.

Bruce is right that consciousness will be a global thing, as we can get from 
the first person indeterminacy too, but that does not mean that consciousness 
is not preserved by functional digital substitution made at some level.
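
For readers who want the Theaetetus move spelled out, here is a compact
sketch (my gloss, in LaTeX, assuming only the standard arithmetical reading
where the box is Gödel's provability predicate for PA):

\[
  K\,p \;:=\; \Box p \wedge p \qquad \text{(knowledge as true belief)}
\]
\[
  \Diamond t \;:=\; \neg\Box f, \qquad
  \mathrm{PA} \nvdash \Diamond t \quad \text{although $\Diamond t$ is true (Gödel II)}.
\]

So the knower defined by K is non-trivial: it automatically tracks some
truths, like consistency, that the machine cannot prove, and (a standard
result of provability logic) its modal logic is S4Grz rather than the G of
provability. That is the technical sense behind "non-provable but
non-doubtable".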

Bruno



> 
> ​ ​John K Clark
>  
> 
> 
> 



Re: How to live forever

2018-03-22 Thread Stathis Papaioannou
On Fri, 23 Mar 2018 at 11:32 am, Bruce Kellett 
wrote:

> From: Stathis Papaioannou 
>
>
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett 
> wrote:
>
>> From: Stathis Papaioannou < stath...@gmail.com>
>>
>>
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <
>> bhkell...@optusnet.com.au> wrote:
>>
>>> From: Stathis Papaioannou < stath...@gmail.com>
>>>
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>>> bhkell...@optusnet.com.au> wrote:
>>>

 If the theory is that if the observable behaviour of the brain is
 replicated, then consciousness will also be replicated, then the clear
 corollary is that consciousness can be inferred from observable behaviour.
 Which implies that I can be as certain of the consciousness of other people
 as I am of my own. This seems to do some violence to the 1p/1pp/3p
 distinctions that computationalism rely on so much: only 1p is "certainly
 certain". But if I can reliably infer consciousness in others, then other
 things can be as certain as 1p experiences

>>>
>>> You can’t reliably infer consciousness in others. What you can infer is
>>> that whatever consciousness an entity has, it will be preserved if
>>> functionally identical substitutions in its brain are made.
>>>
>>>
>>> You have that backwards. You can infer consciousness in others, by
>>> observing their behaviour. The alternative would be solipsism. Now, while
>>> you can't prove or disprove solipsism in a mathematical sense, you can
>>> reject solipsism as a useless theory, since it tells you nothing about
>>> anything. Whereas science acts on the available evidence -- observations of
>>> behaviour in this case.
>>>
>>> But we have no evidence that consciousness would be preserved under
>>> functionally identical substitutions in the brain. Consciousness may be a
>>> global affair, so functional equivalence may not be achievable, or even
>>> definable, within the context of a conscious brain. Can you map the
>>> functionality of even a single neuron? You are assuming that you can, but
>>> if that function is global, then you probably can't. There is a fair amount
>>> of glibness in your assumption that consciousness will be preserved under
>>> such substitutions.
>>>
>>>
>>> You can’t know if a mouse is conscious, but you can know that if mouse
>>> neurones are replaced with functionally identical electronic neurones its
>>> behaviour will be the same and any consciousness it may have will also be
>>> the same.
>>>
>>>
>>> You cannot know this without actually doing the substitution and
>>> observing the results.
>>>
>>
>> So do you think that it is possible to replace the neurones with
>> functionally identical neurones (same output for same input) and the
>> mouse’s behaviour would *not* be the same?
>>
>>
>> Individual neurons may not be the appropriate functional unit.
>>
>> It seems that you might be close to circularity -- neural functionality
>> includes consciousness. So if I maintain neural functionality, I will
>> maintain consciousness.
>>
>
> The only assumption is that the brain is somehow responsible for
> consciousness. The argument I am making is that if any part of the brain is
> replaced with a functionally identical non-biological part, engineered to
> replicate its interactions with the surrounding tissue,  consciousness will
> also necessarily be replicated; for if not, an absurd situation would
> result, whereby consciousness can radically change but the subject not
> notice, or consciousness decouple completely from behaviour, or
> consciousness flip on or off with the change of one subatomic particle.
>
>
> There still seems to be some circularity there -- consciousness is part of
> the functionality of the brain, or parts thereof, so maintaining
> functionality requires maintenance of consciousness.
>

By functionality here I specifically mean the observable behaviour of the
brain. Consciousness is special in that it is not directly observable as,
for example, the potential difference across a cell membrane or the
contraction of muscle is.
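
To pin down what "functionally identical (same output for same input)" is
quantifying over, here is a minimal sketch; the two units and their numbers
are invented stand-ins, not a model of any neuron. The equivalence assumed
in the argument is purely extensional, defined over the observable I/O
interface, and says nothing about what is inside the box:

import itertools

def biological_unit(inputs):
    # Stand-in for the original component: fires iff the weighted
    # sum of its three binary inputs crosses a threshold.
    weights = (0.5, 0.3, 0.2)
    return sum(x * w for x, w in zip(inputs, weights)) >= 0.4

# A differently built replacement: a lookup table recorded from the
# original's observed behaviour over its whole input space.
TABLE = {inp: biological_unit(inp)
         for inp in itertools.product((0, 1), repeat=3)}

def electronic_unit(inputs):
    return TABLE[inputs]

# Extensional equivalence: identical output for every possible input.
assert all(biological_unit(i) == electronic_unit(i)
           for i in itertools.product((0, 1), repeat=3))

Whether fixing this I/O map (at some level of substitution) also fixes
consciousness is exactly the point in dispute; the sketch only shows what
the functionalist premise does and does not constrain.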

> One would really need some independent measure of functionality,
> independent of consciousness. And the claim would be that reproducing local
> functionality would maintain consciousness. I do not see that that could
> readily be tested, since mapping all the inputs and outputs of neurons or
> other brain components may not be technically possible. One could map
> neuron behaviour at some crude level, but would that be sufficient to
> maintain consciousness? Natural cell death, and the death of neurons does,
> generally, lead to noticeable changes in consciousness and function -- have
> you not noticed decline in memory and other mental faculties as you get
> older? When consciousness changes in this way, the subject is usually only
> too 

Re: How to live forever

2018-03-22 Thread Bruce Kellett

From: Stathis Papaioannou >


On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett 
> wrote:


From: Stathis Papaioannou >


On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett
> wrote:

From: Stathis Papaioannou >

On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett
> wrote:


If the theory is that if the observable behaviour of the
brain is replicated, then consciousness will also be
replicated, then the clear corollary is that
consciousness can be inferred from observable behaviour.
Which implies that I can be as certain of the
consciousness of other people as I am of my own. This
seems to do some violence to the 1p/1pp/3p distinctions
that computationalism rely on so much: only 1p is
"certainly certain". But if I can reliably infer
consciousness in others, then other things can be as
certain as 1p experiences


You can’t reliably infer consciousness in others. What you
can infer is that whatever consciousness an entity has, it
will be preserved if functionally identical substitutions in
its brain are made.


You have that backwards. You can infer consciousness in
others, by observing their behaviour. The alternative would
be solipsism. Now, while you can't prove or disprove
solipsism in a mathematical sense, you can reject solipsism
as a useless theory, since it tells you nothing about
anything. Whereas science acts on the available evidence --
observations of behaviour in this case.

But we have no evidence that consciousness would be preserved
under functionally identical substitutions in the brain.
Consciousness may be a global affair, so functional
equivalence may not be achievable, or even definable, within
the context of a conscious brain. Can you map the
functionality of even a single neuron? You are assuming that
you can, but if that function is global, then you probably
can't. There is a fair amount of glibness in your assumption
that consciousness will be preserved under such substitutions.



You can’t know if a mouse is conscious, but you can know
that if mouse neurones are replaced with functionally
identical electronic neurones its behaviour will be the same
and any consciousness it may have will also be the same.


You cannot know this without actually doing the substitution
and observing the results.


So do you think that it is possible to replace the neurones with
functionally identical neurones (same output for same input) and
the mouse’s behaviour would *not* be the same?


Individual neurons may not be the appropriate functional unit.

It seems that you might be close to circularity -- neural
functionality includes consciousness. So if I maintain neural
functionality, I will maintain consciousness.


The only assumption is that the brain is somehow responsible for 
consciousness. The argument I am making is that if any part of the 
brain is replaced with a functionally identical non-biological part, 
engineered to replicate its interactions with the surrounding tissue, 
 consciousness will also necessarily be replicated; for if not, an 
absurd situation would result, whereby consciousness can radically 
change but the subject not notice, or consciousness decouple 
completely from behaviour, or consciousness flip on or off with the 
change of one subatomic particle.


There still seems to be some circularity there -- consciousness is part 
of the functionality of the brain, or parts thereof, so maintaining 
functionality requires maintenance of consciousness. One would really 
need some independent measure of functionality, independent of 
consciousness. And the claim would be that reproducing local 
functionality would maintain consciousness. I do not see that that could 
readily be tested, since mapping all the inputs and outputs of neurons 
or other brain components may not be technically possible. One could map 
neuron behaviour at some crude level, but would that be sufficient to 
maintain consciousness? Natural cell death, and the death of neurons 
do, generally, lead to noticeable changes in consciousness and 
function -- have you not noticed decline in memory and other mental 
faculties as you get older? When consciousness changes in this way, the 
subject is usually only too painfully aware of the decline in mental 
acuity. To avoid this 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 22 Mar 2018, at 02:34, Kim Jones  wrote:
> 
> What if we already live forever? Why, in fact, do people think that just 
> because we die we go away or cease to exist? What if Nature has already 
> solved the problem? Why would you spend a motza to ensure you lived forever 
> in the same boring universe when after 70 or 80 years you can be teleported 
> to a different universe in a pine box?

Yes, that makes sense, and the simplest way to be immortal consists in having 
children. But then … you know … kids can be terrible … ;)

Then, we are also immortal already when we remember the “consciousness state 
which is out of time”, but that one is so counterintuitive that I prefer to not 
insist on it. I think we get it with salvia, but some describe it as the worst 
thing that they ever encountered, others as the most blissful thing they 
encountered. 

Mortality is a God's self-delusion when bored with immortality, somehow … It is 
also a way to say Hello to Itself, or to play hide-and-seek.

Bruno



> 
> Kim Jones
> 
> 
> 
> 
> On 22 Mar 2018, at 6:39 am, Brent Meeker  > wrote:
> 
>> 
>> 
>> On 3/21/2018 8:40 AM, Bruno Marchal wrote:
>>> 
 On 20 Mar 2018, at 00:56, Brent Meeker > wrote:
 
 
 
 On 3/19/2018 2:19 PM, John Clark wrote:
> On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker  > wrote
> 
> > octopuses are fairly intelligent but their neural structure is 
> > distributed very differently from mammals of similar intelligence.  An 
> > artificial intelligence that was not modeled on mammalian brain 
> > structure might be intelligent but not conscious
> 
> Maybe maybe maybe. And you don't have the exact same brain 
> structure as I have so you might be intelligent but not conscious. I said 
> it before I'll say it again, consciousness theories are useless, and not 
> just the ones on this list, all of them.
 
 You're the one who floated a theory of consciousness based on evolution.  
 I'm just pointing out that it only shows that consciousness exists in 
 humans for some evolutionarily selected purpose.  It doesn't apply to 
 intelligence that arises in some other evolutionary branch or intelligence 
 like AI that doesn't evolve but is designed.
 
>>> 
>>> 
>>> Not sure that there is a genuine difference between design and evolution. 
>>> With the multicellular, there is an evolution of design. With the origin of 
>>> life, there has been a design of evolution, even if serendipitously. I am 
>>> not talking about some purposeful or intelligent design here. A cell is a 
>>> quite sophisticated "gigantic nano-machine”.
>>> 
>>> Then some techniques in AI, like the genetic algorithm, or techniques 
>>> inspired by the study of the immune system, or self-reference, lead to 
>>> programs or machines evolving in some ways.
>>> 
>>> Now, I can argue that for consciousness nothing of this is needed. It is 
>>> the canonical knowledge associated with the fixed point of the embedding of 
>>> the universal machines in the arithmetical reality.
>>> 
>>> It differentiates into the many indexical first person scenarii.
>>> 
>>> Matter should be what gives rise to possibilities ([]p & ~[]f, []p & <>t).  
>>> That works as it is confirmed by QM without collapse, both intuitively 
>>> through the many computations, and formally as the three material modes do 
>>> provide a formal quantum logic, its arithmetical interpretation, and its 
>>> metamathematical interpretations. 
>>> 
>>> The universal machines rich enough to prove their own universality (like the 
>>> sound humans and Peano arithmetic, and ZF, …) are confronted with the 
>>> distinction between knowing and proving. They prove their own incompleteness 
>>> but still figure out some truths despite those being non-provable. 
>>> 
>>> The only mystery is where do the numbers (and/or the combinators, the 
>>> lambda expressions, the game of life, c++, etc.) come from?
>>> But here the sound löbian machine can prove that it is impossible to derive 
>>> a universal system from a non-universal theory. 
>>> 
>>> A weaker version of the Church-Turing-Post-Kleene thesis is: there exists a 
>>> Universal Machine. That is, a Machine which computes all computable 
>>> functions. The stronger usual version is that some formal system/definition 
>>> provides such a universal machine, meaning that the class of the functions 
>>> computable by some universal machine gives the class of all computable 
>>> functions, including those not everywhere defined (and not algorithmically 
>>> separable from those defined everywhere: the price of universality). 
>>> 
>>> That universal being has a rich theology, explaining the relation between 
>>> believing, knowing, observing, feeling and the truth.
>> 
>> 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 21 Mar 2018, at 20:39, Brent Meeker  wrote:
> 
> 
> 
> On 3/21/2018 8:40 AM, Bruno Marchal wrote:
>> 
>>> On 20 Mar 2018, at 00:56, Brent Meeker >> > wrote:
>>> 
>>> 
>>> 
>>> On 3/19/2018 2:19 PM, John Clark wrote:
 On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker > wrote
 
 > octopuses are fairly intelligent but their neural structure is 
 > distributed very differently from mammals of similar intelligence.  An 
 > artificial intelligence that was not modeled on mammalian brain 
 > structure might be intelligent but not conscious
 
 Maybe maybe maybe. And you don't have the exact same brain 
 structure as I have so you might be intelligent but not conscious. I said 
 it before I'll say it again, consciousness theories are useless, and not 
 just the ones on this list, all of them.
>>> 
>>> You're the one who floated a theory of consciousness based on evolution.  
>>> I'm just pointing out that it only shows that consciousness exists in 
>>> humans for some evolutionarily selected purpose.  It doesn't apply to 
>>> intelligence that arises in some other evolutionary branch or intelligence 
>>> like AI that doesn't evolve but is designed.
>>> 
>> 
>> 
>> Not sure that there is a genuine difference between design and evolution. 
>> With the multicellular, there is an evolution of design. With the origin of 
>> life, there has been a design of evolution, even if serendipitously. I am 
>> not talking about some purposeful or intelligent design here. A cell is a 
>> quite sophisticated "gigantic nano-machine”.
>> 
>> Then some techniques in AI, like the genetic algorithm, or techniques 
>> inspired by the study of the immune system, or self-reference, lead to 
>> programs or machines evolving in some ways.
>> 
>> Now, I can argue that for consciousness nothing of this is needed. It is the 
>> canonical knowledge associated with the fixed point of the embedding of the 
>> universal machines in the arithmetical reality.
>> 
>> It differentiates into the many indexical first person scenarii.
>> 
>> Matter should be what gives rise to possibilities ([]p & ~[]f, []p & <>t).  
>> That works as it is confirmed by QM without collapse, both intuitively 
>> through the many computations, and formally as the three material modes do 
>> provide a formal quantum logic, its arithmetical interpretation, and its 
>> metamathematical interpretations. 
>> 
>> The universal machines rich enough to prove their own universality (like the 
>> sound humans and Peano arithmetic, and ZF, …) are confronted with the 
>> distinction between knowing and proving. They prove their own incompleteness 
>> but still figure out some truths despite those being non-provable. 
>> 
>> The only mystery is where do the numbers (and/or the combinators, the 
>> lambda expressions, the game of life, c++, etc.) come from?
>> But here the sound löbian machine can prove that it is impossible to derive 
>> a universal system from a non-universal theory. 
>> 
>> A weaker version of the Church-Turing-Post-Kleene thesis is: there exists a 
>> Universal Machine. That is, a Machine which computes all computable 
>> functions. The stronger usual version is that some formal system/definition 
>> provides such a universal machine, meaning that the class of the functions 
>> computable by some universal machine gives the class of all computable 
>> functions, including those not everywhere defined (and not algorithmically 
>> separable from those defined everywhere: the price of universality). 
>> 
>> That universal being has a rich theology, explaining the relation between 
>> believing, knowing, observing, feeling and the truth.
> 
> That didn't address my question: Can you imagine different kinds of 
> consciousness?  For example, you have sometimes speculated that there is only 
> one consciousness which is somehow compartmentalized in individuals.  That 
> implies that the compartmentalization could be eliminated and a different 
> kind of consciousness experienced...like the Borg.

Yes. With dissociative drugs, like Ketamine (dangerous) or salvia (much less 
dangerous but quite impressive) you do feel like remembering who you were before 
birth, and it is a quite altered state of consciousness, which is felt 
retrospectively as being totally out of time and space. But you can have a 
glimpse of this each time you understand a theorem (even more so with a no-go 
theorem) in math.

At first, I thought that salvia led to the experience of the Löbian entity, but 
eventually, it looks like it is the universal Turing machine experience, before 
she gets deluded into believing in the induction axioms. It is a dissociative 
non-Löbian altered state of consciousness. I suspect we all go there each night, 
and the brain does a lot of work for us not to remember this (it would not help 
to motivate for the 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 21 Mar 2018, at 00:27, Bruce Kellett  wrote:
> 
> From: Telmo Menezes >
>> 
>> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett
>> > wrote:
>> 
>> > Now it may be that you want to reject Stathis's claim, and insist that
>> > consciousness cannot be inferred from behaviour. But it seems to me that
>> > that theory is as lacking in independent verification as the contrary.
>> 
>> Again, no theory. I am just stating the simple fact that, since there
>> is no known instrument so far that can detect consciousness in the 3p,
>> then it is not possible to propose scientific theories about
>> consciousness at the moment. Only conjectures.
> 
> Explain the difference between a scientific theory and a scientific 
> conjecture. Science is not about proofs; theories are always only ever 
> conjectural, and subject to revision and/or rejection as further evidence is 
> gathered. You don't need an instrument that can give a clean yes/no answer to 
> the presence of consciousness to develop scientific theories about 
> consciousness. We can start with the observation that all normal healthy 
> humans are conscious, and that rocks and other inert objects are not 
> conscious and work from there to develop a science of consciousness, based on 
> evidence from the observation of behaviour. One might well consider that 
> there are different levels or types of consciousness accorded to humans, 
> animals, octopuses, and so on. But that would be a scientific finding, based 
> on observational evidence.
> 
> So science is not as limited as you seem to want to make it —

I agreed with all what you say here, but then ...



> science is not mathematics, after all.


Well, what you say above applies also to mathematics, even if we might argue 
that very elementary arithmetic is close to being undoubtable, but that is 
true only for an arithmetical realist, not for those who reject the (A v ~A) 
principle. Most mathematicians and scientists do not doubt arithmetic, but many 
philosophers of mathematics do. But OK, this is tangential to the discussion. The 
important point is that science is not about proof (and it happens that proof 
is not about truth, as the premises might be unsound, even if consistent).

Bruno




> 
>> If you want my conjecture: I assume that all living things are
>> conscious. If you show me an AI that behaves like a human being (or
>> even a dog) I will assume it's conscious too. But none of this is
>> science.
>> 
>> I strongly suspect that consciousness is something that cannot, in
>> fact, be studied by science -- because consciousness is what does
>> science. It's like asking you to look inside your eyeballs.
> 
>  It is perfectly possible to look inside one's own eyeballs. Have you never 
> been to an optician? Just use a mirror with their instruments for inspecting 
> and recording the state of the retina.
> 
> Bruce
> 
> 



Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 20:34, Brent Meeker  wrote:
> 
> 
> 
> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>> The interesting thing is that you can draw conclusions about consciousness
>>> without being able to define it or detect it.
>> I agree.
>> 
>>> The claim is that IF an entity
>>> is conscious THEN its consciousness will be preserved if brain function is
>>> preserved despite changing the brain substrate.
>> Ok, this is computationalism. I also bet on computationalism, but I
>> think we must proceed with caution and not forget that we are just
>> assuming this to be true. Your thought experiment is convincing but is
>> not a proof. You do expose something that I agree with: that
>> non-computationalism sounds silly.
> But does it sound so silly if we propose substituting a completely different 
> kind of computer, e.g. von Neumann architecture or one that just records 
> everything instead of an episodic associative memory, for the brain? The 
> Church-Turing conjecture says it can compute the same functions. 

That is the usual extensional Church-Turing thesis, but it implies the 
intensional thesis. Not only can a universal machine compute whatever any other 
machine can compute, it can compute it in the same way as the machine it 
imitates. The reason is simple: the universal machine can emulate the other 
universal machine. A Lisp interpreter can emulate a Fortran interpreter and 
compute, after that, like a Fortran compiler/interpreter.
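
To make the intensional point concrete, here is a minimal sketch in Python (a 
toy illustration; the instruction set and every name in it are invented for 
the example): a host interpreter runs a guest program step by step, so that 
not only the guest's result but its whole sequence of intermediate states is 
reproduced, and inspectable, at the host level.

def run(program, regs):
    """Interpret a toy register machine. Instructions:
    ('inc', r): regs[r] += 1
    ('dec', r): regs[r] -= 1
    ('jz', r, addr): jump to addr if regs[r] == 0, else fall through
    ('halt',): stop
    """
    pc = 0
    trace = []                          # the *way* it computes, not just the result
    while program[pc][0] != 'halt':
        op = program[pc]
        trace.append((pc, dict(regs)))  # record every intensional step
        if op[0] == 'inc':
            regs[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':
            regs[op[1]] -= 1
            pc += 1
        elif op[0] == 'jz':
            pc = op[2] if regs[op[1]] == 0 else pc + 1
    return regs, trace

# Add register 1 to register 0, one decrement/increment at a time.
# Register 2 stays 0, so ('jz', 2, 0) acts as an unconditional jump.
program = [('jz', 1, 4), ('dec', 1), ('inc', 0), ('jz', 2, 0), ('halt',)]
regs, trace = run(program, {0: 2, 1: 3, 2: 0})
print(regs[0])      # 5
print(len(trace))   # the number of emulated steps, visible at this level

Python is itself interpreted by some lower-level machine, so the same trace 
exists, unchanged, one level further down: that is the sense in which the 
imitated computation is reproduced "in the same way", not merely with the 
same input-output behaviour.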

Even Babbage's Universal Engine can emulate a quantum computer, albeit with a 
super-slowdown, but the entities emulated by that quantum virtual machine 
will not see the difference; and when done in arithmetic, no first person can 
detect the delays, so the slowdown plays no role.
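
As a sketch of that emulation (Python with numpy; a single invented one-qubit 
example, not a general simulator): the classical machine tracks the quantum 
state vector explicitly, at a cost growing like 2^n in the number of qubits, 
which is the super-slowdown; but nothing computed inside the simulation 
depends on how long each step took outside it.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
state = np.array([1.0, 0.0])                   # qubit starts in |0>

state = H @ state                              # a plain classical matrix multiply
probs = np.abs(state) ** 2                     # Born-rule probabilities

print(probs)   # [0.5 0.5]: equal chance of measuring 0 or 1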




> But does it instantiate the same consciousness? My intuition is that it 
> would be "conscious" but in some different way; for example by having the 
> kind of memory you would have if you could review a movie of any interval 
> in your past.

Then you have got a new type of brain, doing a new type of computation, which 
makes sense. That is why eventually we will all buy artificial brains: they 
will simply be more powerful than our actual brains, and will allow much more. 
Probably digital transplants will come in that way: people will put the 
smartphone *in* the head … in some not-so-far future.

Bruno




> 
> Brent


Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 17:52, Lawrence Crowell  
> wrote:
> 
> 
> 
> On Tuesday, March 20, 2018 at 3:57:31 AM UTC-5, telmo_menezes wrote:
> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett 
>  wrote: 
> > From: Telmo Menezes  
> > 
> > 
> > On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett 
> >  wrote: 
> >> From: Stathis Papaioannou  
> >> 
> >> 
> >> It is possible that consciousness is fully preserved until a threshold is 
> >> reached then suddenly disappears. So if half the subject’s brain is 
> >> replaced, he behaves normally and has normal consciousness, but if one 
> >> more neurone is replaced he continues to behave normally but becomes a 
> >> zombie. Moreover, since neurones are themselves complex systems it could 
> >> be broken down further: half of that final neurone could be replaced with 
> >> no change to consciousness, but when a particular membrane protein is 
> >> replaced with a non-biological nanomachine the subject will suddenly 
> >> become a zombie. And we need not stop here, because this protein molecule 
> >> could also be replaced gradually, for example by non-biological 
> >> radioisotopes. If half the atoms in this protein are replaced, there is 
> >> no change in behaviour and no change in consciousness; but when one more 
> >> atom is replaced a threshold is reached and the subject suddenly loses 
> >> consciousness. So zombification could turn on the addition or subtraction 
> >> of one neutron. Are you prepared to go this far to challenge the idea 
> >> that if the observable behaviour of the brain is replicated, 
> >> consciousness will also be replicated? 
> >> 
> >> 
> >> If the theory is that if the observable behaviour of the brain is 
> >> replicated, then consciousness will also be replicated, then the clear 
> >> corollary is that consciousness can be inferred from observable behaviour. 
> > 
> > For this to be a theory in the scientific sense, one needs some way to 
> > detect consciousness. In that case your corollary becomes a tautology: 
> > 
> > (a) If one can detect consciousness then one can detect consciousness. 
> > 
> > The other option is to assume that observable behaviors in the brain 
> > imply consciousness -- because "common sense", because experts say so, 
> > whatever. In this case it becomes circular reasoning: 
> > 
> > (b) Assuming that observable behaviors in the brain imply 
> > consciousness, consciousness can be inferred from brain behaviors. 
> > 
> > 
> > I was responding to the claim by Stathis that consciousness will follow 
> > replication of observable behaviour. It seemed to me that this was proposed 
> > as a theory: "If the observable behaviour of the brain is replicated then 
> > consciousness will also be replicated." 
> 
> Lawrence is proposing that something specific about the brain might be 
> necessary for consciousness to arise. He proposed a scenario where 
> parts of the brain are replaced with a computer, and behavior is 
> maintained while consciousness is lost (p-zombie). Stathis is 
> proposing a thought experiment that attempts reductio ad absurdum on 
> this scenario. Although this is all interesting speculation, there is 
> no scientific theory, because there is no way to perform an 
> experiment, because there is no scientific instrument that detects 
> consciousness. In the end I still don't know, as scientific fact, if 
> others are conscious. 
> 
> You were the first to call it a theory, and this is why I reacted. 
> 
> My point is actually empirical. The claim is that this can be done, which 
> means experiments will be done. If so then we might ask, "What can go wrong 
> with that?" 
> 
> My point is that to load my brain states into a computer requires some 
> process for measuring and cataloging the neural nets in my brain. Processes 
> such as computing subsets of combinatorial processes are NP-complete.

Why? I don’t see that at all. A copy can be done in linear time, once we have 
the right technology. The hippocampus of the rat has been copied, and some worm 
brains also, with some partial success.
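
For what it is worth, that complexity point can be sketched in a few lines of 
Python (a toy network with invented names): copying a known network is one 
linear pass over its neurons and synapses; the NP-complete problems Lawrence 
alludes to concern searching over combinatorial subsets, not the duplication 
itself.

def copy_network(synapses):
    """synapses: dict mapping neuron -> list of (target, weight) pairs.
    Returns an independent copy in O(neurons + synapses) time."""
    return {neuron: list(edges) for neuron, edges in synapses.items()}

toy_brain = {
    'n1': [('n2', 0.7), ('n3', -0.2)],
    'n2': [('n3', 1.1)],
    'n3': [('n1', 0.4)],
}
replica = copy_network(toy_brain)
print(replica == toy_brain)       # True: same structure and weights
print(replica is not toy_brain)   # True: an independent duplicate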

Personally I would ask for a copy at the atomic level, just above the 
Heisenberg uncertainty, in case I am forced to say “yes” to some doctor.





> This will form some limit on this claim, and it could be a fundamental 
> barrier. Duplication is not possible either, for a complete duplicate on the 
> fine-grained quantum scale involves quantum cloning, which is not a possible 
> quantum process. A lot of this discussion involves rubbing the philosopher's 
> stone, when in fact this would be a whole lot more difficult to actually do.

Note that we are actually copied or prepared (in the quantum sense) infinitely 
often in arithmetic, and only that counts to understand that if Mechanism is 
true, then physics becomes a branch of machine theology, which is itself a 
branch of 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 17:03, John Clark  wrote:
> 
> On Mon, Mar 19, 2018 at 7:06 PM, Bruce Kellett  wrote:
> 
> > If the theory is that if the observable behaviour of the brain is 
> > replicated, then consciousness will also be replicated, then the clear 
> > corollary is that consciousness can be inferred from observable behaviour.
> Yes.
> 
> > Which implies that I can be as certain of the consciousness of other people 
> > as I am of my own.
> No. The idea that consciousness can be inferred from intelligent behavior is 
> an axiom of existence, it has no proof and will never have a proof but it 
> sure seems like its true, and every sane human being uses it every hour of 
> their waking life. And there is always an element of doubt in real life, or at 
> least there should be, so something need not provide absolute certainty to be 
> enormously useful. As for my own consciousness I don’t have a proof of that 
> either but I don’t need one because I’ve got the one thing that can pull rank 
> even over proof, direct experience.   
> 
> > This seems to do some violence to the 1p/1pp/3p distinctions 
> Bruno’s the one who started pushing that ridiculous phrase, I suppose he 
> thought it sounded more profound, erudite, and scientific than “the difference 
> between you and me”. And I would maintain nobody outside a looney bin has 
> difficulty finding the "1p/1pp/3p distinction”.
> 
> 


Yes, I have taught this for years, and nobody has ever had any difficulty 
understanding it, including the first person indeterminacy.

The problems raised are with steps 7 and 8, which are more demanding in 
mathematical logic, as they require understanding that Very Elementary 
Arithmetic (Peano Arithmetic *without* induction, essentially Robinson 
Arithmetic) is already Turing complete. This has been known (by logicians) 
since the 1930s.
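
For reference, here is one standard reading of that remark, in LaTeX: the 
axioms of Robinson Arithmetic (usually written Q), together with the classical 
representability result that makes such a weak theory already Turing complete.

% Robinson Arithmetic Q: a standard axiomatization (one common reading
% of "Peano Arithmetic without induction").
\begin{align*}
& Q_1:\ s(x) \neq 0 \\
& Q_2:\ s(x) = s(y) \rightarrow x = y \\
& Q_3:\ x \neq 0 \rightarrow \exists y\,(x = s(y)) \\
& Q_4:\ x + 0 = x \\
& Q_5:\ x + s(y) = s(x + y) \\
& Q_6:\ x \cdot 0 = 0 \\
& Q_7:\ x \cdot s(y) = (x \cdot y) + x
\end{align*}
% Every computable function is representable in Q (Tarski, Mostowski,
% Robinson, 1953): this is the precise sense in which very elementary
% arithmetic is already Turing complete.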

Bruno



>  John K Clark


Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 09:57, Telmo Menezes  wrote:
> 
> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett
>  wrote:
>> From: Telmo Menezes 
>> 
>> 
>> On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett
>>  wrote:
>>> From: Stathis Papaioannou 
>>> 
>>> 
>>> It is possible that consciousness is fully preserved until a threshold is
>>> reached then suddenly disappears. So if half the subject’s brain is
>>> replaced, he behaves normally and has normal consciousness, but if one
>>> more neurone is replaced he continues to behave normally but becomes a
>>> zombie. Moreover, since neurones are themselves complex systems it could
>>> be broken down further: half of that final neurone could be replaced with
>>> no change to consciousness, but when a particular membrane protein is
>>> replaced with a non-biological nanomachine the subject will suddenly
>>> become a zombie. And we need not stop here, because this protein molecule
>>> could also be replaced gradually, for example by non-biological
>>> radioisotopes. If half the atoms in this protein are replaced, there is
>>> no change in behaviour and no change in consciousness; but when one more
>>> atom is replaced a threshold is reached and the subject suddenly loses
>>> consciousness. So zombification could turn on the addition or subtraction
>>> of one neutron. Are you prepared to go this far to challenge the idea
>>> that if the observable behaviour of the brain is replicated,
>>> consciousness will also be replicated?
>>> 
>>> 
>>> If the theory is that if the observable behaviour of the brain is
>>> replicated, then consciousness will also be replicated, then the clear
>>> corollary is that consciousness can be inferred from observable behaviour.
>> 
>> For this to be a theory in the scientific sense, one needs some way to
>> detect consciousness.

I did not dream. Telmo, that is Aristotle's criterion of “scientificness” (if 
I may say so).

A Platonist doubts even more what we can detect than what he can conceive or 
understand ….

(Note that I might be slightly out of context here.)




>> In that case your corollary becomes a tautology:
>> 
>> (a) If one can detect consciousness then one can detect consciousness.
>> 
>> The other option is to assume that observable behaviors in the brain
>> imply consciousness -- because "common sense", because experts say so,
>> whatever. In this case it becomes circular reasoning:
>> 
>> (b) Assuming that observable behaviors in the brain imply
>> consciousness, consciousness can be inferred from brain behaviors.
>> 
>> 
>> I was responding to the claim by Stathis that consciousness will follow
>> replication of observable behaviour. It seemed to me that this was proposed
>> as a theory: "If the observable behaviour of the brain is replicated then
>> consciousness will also be replicated."
> 
> Lawrence is proposing that something specific about the brain might be
> necessary for consciousness to arise. He proposed a scenario where
> parts of the brain are replaced with a computer, and behavior is
> maintained while consciousness is lost (p-zombie). Stathis is
> proposing a thought experiment that attempts reductio ad absurdum on
> this scenario. Although this is all interesting speculation, there is
> no scientific theory, because there is no way to perform an
> experiment, because there is no scientific instrument that detects
> consciousness. In the end I still don't know, as scientific fact, if
> others are conscious.

That is right. But we don’t know, as a scientific fact … anything. We can only 
prove within theories, which always rest on assumptions.

The dream argument is radical about that. You can believe strongly that the 
Higgs boson has been detected, and yet you can conceive that you will wake up 
and find that all that boson stuff was a dream.

Science is only a collection of beliefs, and we mainly learn when we refute 
our beliefs, but even that could be a dream (contra Popper). Yet we can find 
big pictures which are appealing in elegance and beauty, and confirmed by the 
(only plausible) facts.





> 
> You were the first to call it a theory, and this is why I reacted.

OK.


> 
>> I was merely pointing out
>> consequences of this theory, so your claims of tautology and/or circularity
>> rather miss the point: the consequences of any theory are either tautologies
>> or circularities in that sense, because they are implications of the theory.
> 
> Tautologies are fine indeed. I did not call (a) a tautology as an
> insult, merely to point out that the hard part is still missing, and
> that assuming that it is solved does not lead to anywhere interesting.
> 
> Circularities are, of course, not fine. You cannot assume that you can
> infer consciousness from behavior, and then use this assumption to
> conclude that you can infer consciousness from behavior.
> 
>> Now it may be that you want to reject 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 06:46, Stathis Papaioannou  wrote:
> 
> 
> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett  wrote:
> From: Stathis Papaioannou
> 
>> 
>> It is possible that consciousness is fully preserved until a threshold is 
>> reached then suddenly disappears. So if half the subject’s brain is 
>> replaced, he behaves normally and has normal consciousness, but if one more 
>> neurone is replaced he continues to behave normally but becomes a zombie. 
>> Moreover, since neurones are themselves complex systems it could be broken 
>> down further: half of that final neurone could be replaced with no change to 
>> consciousness, but when a particular membrane protein is replaced with a 
>> non-biological nanomachine the subject will suddenly become a zombie. And we 
>> need not stop here, because this protein molecule could also be replaced 
>> gradually, for example by non-biological radioisotopes. If half the atoms in 
>> this protein are replaced, there is no change in behaviour and no change in 
>> consciousness; but when one more atom is replaced a threshold is reached and 
>> the subject suddenly loses consciousness. So zombification could turn on the 
>> addition or subtraction of one neutron. Are you prepared to go this far to 
>> challenge the idea that if the observable behaviour of the brain is 
>> replicated, consciousness will also be replicated?
> 
> If the theory is that if the observable behaviour of the brain is replicated, 
> then consciousness will also be replicated, then the clear corollary is that 
> consciousness can be inferred from observable behaviour. Which implies that I 
> can be as certain of the consciousness of other people as I am of my own. 
> This seems to do some violence to the 1p/1pp/3p distinctions that 
> computationalism rely on so much: only 1p is "certainly certain". But if I 
> can reliably infer consciousness in others, then other things can be as 
> certain as 1p experiences
> 
> You can’t reliably infer consciousness in others. What you can infer is that 
> whatever consciousness an entity has, it will be preserved if functionally 
> identical substitutions in its brain are made. You can’t know if a mouse is 
> conscious, but you can know that if mouse neurones are replaced with 
> functionally identical electronic neurones its behaviour will be the same and 
> any consciousness it may have will also be the same.

That assumes the neuronal level is the substitution level; but some, like 
Hameroff, will require the copy to be made at the level of the tubulins, others 
will ask for the quantum states, and some will just accept the neurons + glial 
cells, as the latter seem to play a role in pain.

We cannot know our machine level, but we can bet on one, and we can believe 
(correctly or wrongly) that we have survived.

It is arguable that molecular biology gives some weight to the idea that 
“nature has already bet” on Mechanism, as we replace our stuff all the time.

Bruno




> -- 
> Stathis Papaioannou

