Re: How to live forever

2018-04-04 Thread Bruno Marchal

> On 3 Apr 2018, at 10:37, Telmo Menezes  wrote:
> 
> On Tue, Apr 3, 2018 at 9:33 AM, Bruno Marchal  wrote:
>> 
>>> On 3 Apr 2018, at 08:25, Telmo Menezes  wrote:
>>> 
>>> Hi Russell,
>>> 
>>> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>>>  wrote:
 On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> 
> Now, is a jellyfish conscious?
> 
> I bet they are, but not far away from the dissociative and constant 
> arithmetical consciousness (of the universal machines).
 
 As I'm sure you're aware, I disagree with this. Jellyfish appear to be
 quite simple automatons, with a distributed neural network, not any
 brain as such. However, my main reason for disagreeing is that
 anthropic reasoning leads us to conclude that most species of animal
 are not conscious. Our most typical animal is a nematode (for instance
 your favourite - the planarians), but even most insects cannot be
 conscious either.
>>> 
>>> I follow your anthropic reasoning, but am not convinced by the
>>> implicit 1:1 correspondence between one minute of human consciousness
>>> and one minute of insect consciousness. I have no rigorous way of
>>> saying this, but my intuition is the following: there is more content
>>> in one minute of one than the other. I think it makes sense for the
>>> probabilities to be weighted by this content, somehow.
>>> 
>>> Imagine a simple possibility: your anthropic reasoning being weighted
>>> by the number of neurons in the given creature. See what I'm getting
>>> at?
>> 
>> 
>> Then the brain seems to be a filter of the natural raw consciousness which 
>> is at the start of the consciousness differentiation. The fewer neurons 
>> there are, the more intense consciousness is from the first person view, 
>> but also the more disconnected it is.
> 
> I agree, as you know.
> I have no scientific argument here, just personal experience.

OK.




> 
>> I have no doubt that this is very counter-intuitive for people having no 
>> memory of a dissociative state of consciousness,
> 
> Yes. What makes this particularly tricky is that such memories are
> from the neighborhood of the experience. The actual thing cannot be
> remembered -- at least I cannot.

Consciousness is a cousin of consistency (<>t). It obeys <>t -> ~[]<>t. 
This means that a consistent machine cannot prove its own consistency, in the 
strong sense of “proof”. Franzén said that it cannot even be asserted, by 
which he means that it cannot even be taken as an axiom. (This way of talking 
is slightly misleading, as PA can both assert con(‘PA’) and even take it as a 
new axiom, but then it is a new machine which still cannot prove its own 
consistency.) So it is really the fixed point of consistency which is not 
provable or rationally communicable: the theory PA+ = PA + con(‘PA+’), which 
asserts its own consistency, becomes inconsistent (and such a fixed point 
exists by Gödel’s diagonal lemma).
Now, by the Completeness theorem (which can be shown to apply to the type of 
machines considered), consistency is equivalent to “there is a reality 
satisfying me” (with me = my beliefs), and that is intuitively coherent: 
nobody can prove that a reality, or a God, exists.
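
In symbols (my compact restatement, writing [] for provability by the 
machine and <> for consistency, in the modal logic G):

    % Gödel II, as a theorem of G: consistency implies the
    % unprovability of consistency.
    \Diamond\top \rightarrow \lnot\Box\Diamond\top

    % Diagonal lemma: there is a sentence \theta with
    PA \vdash \theta \leftrightarrow \mathrm{Con}(PA + \theta)

    % so PA+ := PA + \theta proves its own consistency and is
    % therefore inconsistent, by Gödel's second incompleteness theorem.

    % Gödel completeness: consistency = having a model.
    \mathrm{Con}(T) \leftrightarrow \exists \mathcal{M}\,(\mathcal{M} \models T)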

But consciousness is not consistency. It lacks the 1p feature, and consistency 
is indeed a purely 3p syntactical notion: it means that there is no proof of f 
(no finite sequence of beliefs starting from PA (say), obtained by repeated 
application of the rules of PA, and ending with f). 

So consciousness is more like <>p v p, the dual of []p & p, with a 
non-constructive “or”. It makes <>t v t trivial from the first person point of 
view, and indeed it is a theorem (of G), but not expressible by the machine. In 
this case, it cannot be taken as an axiom only because it is not definable by 
the machine, at least concerning itself. 
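
As an aside, the claim that <>t -> ~[]<>t is a theorem of G can be 
machine-checked: by Segerberg's completeness theorem, G proves exactly the 
formulas valid at every world of every finite transitive irreflexive Kripke 
frame. A minimal brute-force sketch of that check, in Python (my own code; 
the frame-size bound is illustrative, not a proof bound):

    # Search for a counter-model to <>t -> ~[]<>t among all transitive,
    # irreflexive Kripke frames with up to MAX_WORLDS worlds.
    from itertools import product

    MAX_WORLDS = 4

    def is_transitive(R):
        return all((a, c) in R for (a, b) in R for (b2, c) in R if b2 == b)

    def diamond_top(w, R):
        # <>t holds at w iff w has at least one successor
        return any(u == w for (u, v) in R)

    def box_diamond_top(w, R):
        # []<>t holds at w iff every successor of w has a successor
        return all(diamond_top(v, R) for (u, v) in R if u == w)

    def counterexample():
        for n in range(1, MAX_WORLDS + 1):
            pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
            for bits in product((False, True), repeat=len(pairs)):
                R = {p for p, keep in zip(pairs, bits) if keep}
                if not is_transitive(R):
                    continue
                for w in range(n):
                    # a counter-model needs <>t and []<>t true together
                    if diamond_top(w, R) and box_diamond_top(w, R):
                        return n, R, w
        return None

    print(counterexample())  # None: no small counter-model exists

None will be found at any size: by transitivity and finiteness, any world 
with a successor has a successor that itself has none; <>t fails there, so 
[]<>t fails at the original world.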

This makes me wonder if the dissociative state is just a state of 
inconsistency, or a state in which we wake up and realise we were consistent 
but unsound, as if we were becoming a “queer reasoner” (in the sense of 
Smullyan’s “Forever Undecided”). In that case, consciousness is definitely based 
on the notion of truth, and any particular instantiation of consciousness can 
work only by forgetting its “true nature”, making the ultimate experience 
unmemorable. Now, with salvia, any theory we make of the experience is 
contradicted by the next experience, but eventually I cycle on those two 
options, with the rather frustrating feeling that the solution here cannot be 
brought back at all, even though, as you say, from the neighbourhood of the 
experience, we can remember having known the solution! That one is stable, and 
never contradicted by further experience, but the price is that we cannot bring 
it back here. We can only “know” that it is either total madness ([]f, cul-de-sac 
world), or that consciousness is (sigma_1) truth itself, which indeed belongs 
to G*. G* proves p <-> <>p for p sigma_1. So, I tend to “believe” that the 
sec

Wave Functions again

2018-04-04 Thread agrayson2000
 Does a macro object, say a billiard ball, have a definite wave function? 
That is, does it have one in principle, even if it can't be written down? 
If one can speak of the wf of the universe, one would think individual 
macro objects would also have wf's. TIA, AG



Re: How to live forever

2018-04-04 Thread Stathis Papaioannou
On Thu, 5 Apr 2018 at 2:58 am, smitra  wrote:

> On 02-04-2018 17:27, Bruno Marchal wrote:
> >> On 1 Apr 2018, at 00:29, Lawrence Crowell
> >>  wrote:
> >>
> >> On Saturday, March 31, 2018 at 2:32:06 PM UTC-6, telmo_menezes
> >> wrote:
> >>
> >>> On Sat, Mar 31, 2018 at 10:17 PM, Lawrence Crowell
> >>>  wrote:
> >>>> You would have to replicate then not only the dynamics of neurons,
> >>>> but every biomolecule in the neurons, and don't forget about the
> >>>> oligoastrocytes and other glial cells. Many enzymes, for instance,
> >>>> are multi-state systems: say, in a simple case, a single amino acid
> >>>> residue is phosphorylated or unphosphorylated, and in effect such
> >>>> enzymes are binary switching units. To then make this work you now
> >>>> need to have the brain states mapped out down to the molecular
> >>>> level, and further to have their combinatorial relationships mapped.
> >>>> Biomolecules also behave in water, so you have to model all the
> >>>> water molecules. Given the brain has around 10^{25}, or a few moles,
> >>>> of molecules, the number of possible combinations might be on the
> >>>> order of 10^{10^{25}}; this is a daunting task. Also your computer
> >>>> has to accurately encode the dynamics of molecules -- down to the
> >>>> quantum mechanics of their bonds.
> >>>>
> >>>> This is another way of saying that biological systems, even that of
> >>>> a basic prokaryote, are beyond our current abilities to simulate.
> >>>> You can't just hand wave away the enormous problems with just
> >>>> simulating a bacillus, let alone something like the brain. Now of
> >>>> course one can do some simulations to learn about the brain in a
> >>>> model system, but this is far from mapping a brain and its conscious
> >>>> state into a computer.
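
(A quick check of that combinatorial estimate, with my own arithmetic: 
treating each of the 10^{25} molecules as a binary switch gives

    2^{10^{25}} = 10^{10^{25} \log_{10} 2} \approx 10^{3 \times 10^{24}}

configurations, the same double-exponential ballpark as the quoted 
10^{10^{25}}.)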
> >>>
> >>> Well maybe, but this is just you guessing.
> >>> Nobody knows the necessary level of detail.
> >>>
> >>> Telmo.
> >>
> >> Take LSD or psilocybin mushrooms and what enters the brain are
> >> chemical compounds that interact with neural ligand gates. The
> >> effect is a change in the perception of consciousness. Then if we
> >> load coarse grained brain states into a computer that ignores lots
> >> of fine grained detail, will that result in something different?
> >> Hell yeah! The idea one could set up a computer neural network,
> >> upload some data file from a brain scan and that this would be a
> >> completely conscious person is frankly absurd.
> >
> > This means that you bet on a lower substitution level. I guess others
> > have already answered this. Note that the proof that physics is a
> > branch of arithmetic does not put any bound on the graining of the
> > substitution level. It could even be that your brain is the entire
> > universe described at the level of superstring theory; that would
> > change nothing in the conclusion of the reasoning. Yet it would be a
> > threat for evolution and biology as conceived today.
> >
> > Bruno
> >
> >> LC
> >>
>
> In experiments involving stimulation/inhibition of certain brain parts
> using strong magnetic fields, where people look for a few seconds at a
> screen with a large number of dots, it was found that significantly more
> people can correctly guess the number of dots when the field was
> switched on. The conclusion was that under normal circumstances, when we
> are not aware of lower level information such as the exact number of
> dots on the screen, that information is actually present in the brain
> but we're not consciously aware of it. Certain people who have "savant
> syndrome" can be constantly aware of such lower level information.
>
> This then suggests to me that the substitution level can be taken at a
> much higher level than the level of neurons. In the MWI we would have to
> imagine being spread out over sectors where information such as the
> number of dots on a screen is different. So, what you're not aware of
> isn't fixed for you, and therefore it cannot possibly define your
> identity.


Different physical states may lead to the same mental state until some
differentiating physical event occurs, and then the mental states diverge.
For example, the biological and the silicon version may have identical
experiences until they are exposed to a drug or to physical trauma. If, for
some reason, you were unhappy with this difference, you could insist that
your brain replacement have further refinements so that it behaves closer
to the original.
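
A toy sketch of that point in Python (mine; the update rule and the 
stimuli are arbitrary placeholders): two deterministic systems started in 
the same state remain in the same state until their inputs differ.

    # Two deterministic "brains" with identical histories stay in
    # identical states; a differentiating event makes them diverge.
    def step(state, stimulus):
        # arbitrary fixed deterministic update rule, for illustration
        return (31 * state + sum(map(ord, stimulus))) % 10**6

    biological = silicon = 0
    for stimulus in ["light", "sound", "light"]:
        biological = step(biological, stimulus)
        silicon = step(silicon, stimulus)
    assert biological == silicon   # same inputs, same state so far

    biological = step(biological, "drug")   # differentiating event
    silicon = step(silicon, "trauma")
    assert biological != silicon   # the states have now diverged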

> --
Stathis Papaioannou



Re: How to live forever

2018-04-04 Thread Brent Meeker



On 4/2/2018 10:53 AM, smitra wrote:

> On 02-04-2018 17:27, Bruno Marchal wrote:
> [snip: the exchange between Lawrence Crowell, Telmo Menezes and Bruno
> Marchal, quoted in full earlier in this thread]
>
> In experiments involving stimulation/inhibition of certain brain parts
> using strong magnetic fields, where people look for a few seconds at a
> screen with a large number of dots, it was found that significantly
> more people can correctly guess the number of dots when the field was
> switched on. The conclusion was that under normal circumstances, when
> we are not aware of lower level information such as the exact number
> of dots on the screen, that information is actually present in the
> brain but we're not consciously aware of it. Certain people who have
> "savant syndrome" can be constantly aware of such lower level
> information.


And not just people:

https://www.npr.org/sections/krulwich/2014/04/16/302943533/the-ultimate-animal-experience-losing-a-memory-quiz-to-a-chimp

which suggests to me that the part of one's brain that instantiates 
consciousness competes with other parts and may interfere with their 
function. I think everyone experiences this in sports. Who hasn't 
missed a shot in tennis by "thinking about it too much"?


Brent



> This then suggests to me that the substitution level can be taken at a
> much higher level than the level of neurons. In the MWI we would have
> to imagine being spread out over sectors where information such as the
> number of dots on a screen is different. So, what you're not aware of
> isn't fixed for you, and therefore it cannot possibly define your
> identity.
>
> Saibal





Re: How to live forever

2018-04-04 Thread smitra

On 02-04-2018 17:27, Bruno Marchal wrote:

> [snip: the exchange with Lawrence Crowell and Telmo Menezes, quoted in
> full earlier in this thread]
>
> This means that you bet on a lower substitution level. I guess others
> have already answered this. Note that the proof that physics is a
> branch of arithmetic does not put any bound on the graining of the
> substitution level. It could even be that your brain is the entire
> universe described at the level of superstring theory; that would
> change nothing in the conclusion of the reasoning. Yet it would be a
> threat for evolution and biology as conceived today.
>
> Bruno

In experiments involving stimulation/inhibition of certain brain parts 
using strong magnetic fields, where people look for a few seconds at a 
screen with a large number of dots, it was found that significantly more 
people can correctly guess the number of dots when the field was 
switched on. The conclusion was that under normal circumstances, when we 
are not aware of lower level information such as the exact number of 
dots on the screen, that information is actually present in the brain 
but we're not consciously aware of it. Certain people who have "savant 
syndrome" can be constantly aware of such lower level information.

This then suggests to me that the substitution level can be taken at a 
much higher level than the level of neurons. In the MWI we would have to 
imagine being spread out over sectors where information such as the 
number of dots on a screen is different. So, what you're not aware of 
isn't fixed for you, and therefore it cannot possibly define your 
identity.


Saibal



Re: How to live forever

2018-04-04 Thread Russell Standish
On Tue, Apr 03, 2018 at 08:25:59AM +0200, Telmo Menezes wrote:
> Hi Russell,
> 
> On Sat, Mar 31, 2018 at 10:30 AM, Russell Standish
>  wrote:
> > On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> >>
> >> Now, is a jellyfish conscious?
> >>
> >> I bet they are, but not far away from the dissociative and constant 
> >> arithmetical consciousness (of the universal machines).
> >
> > As I'm sure you're aware, I disagree with this. Jellyfish appear to be
> > quite simple automatons, with a distributed neural network, not any
> > brain as such. However, my main reason for disagreeing is that
> > anthropic reasoning leads us to conclude that most species of animal
> > are not conscious. Our most typical animal is a nematode (for instance
> > your favourite - the planarians), but even most insects cannot be
> > conscious either.
> 
> I follow your anthropic reasoning, but am not convinced by the
> implicit 1:1 correspondence between one minute of human consciousness
> and one minute of insect consciousness. I have no rigorous way of
> saying this, but my intuition is the following: there is more content
> in one minute of one than the other. I think it makes sense for the
> probabilities to be weighted by this content, somehow.
> 
> Imagine a simple possibility: your anthropic reasoning being weighted
> by the number of neurons in the given creature. See what I'm getting
> at?
> 

My argument is simply that your first observer moment (ie "birth
moment", although not literally at birth) is selected at random from
all such possible moments. Thereafter, successor OMs are chosen
according to Born's rule. Ant birth OMs are vastly more numerous than
human ones: a city of perhaps a million individuals lives under our
house, and ants are born, live and die far more rapidly than we
humans. 

To argue that OMs might be weighted somehow is quite close to the
ASSA, which I've never found convincing, though some argue for it here
on this list. Why should first observer moments be weighted by neuron number?
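
For concreteness, here is a toy version of the two sampling schemes in 
Python (my own sketch; the birth-rate and neuron figures are rough 
placeholders, not measurements):

    # Birth-OM probabilities: uniform per birth (the anthropic-ant
    # argument) vs. weighted by neuron count (Telmo's suggestion).
    births_per_year = {"ant": 1e15, "human": 1.4e8}   # rough figures
    neurons = {"ant": 2.5e5, "human": 8.6e10}         # rough figures

    def probabilities(weights):
        total = sum(weights.values())
        return {k: v / total for k, v in weights.items()}

    # Scheme 1: every birth OM counts equally.
    uniform = probabilities(births_per_year)

    # Scheme 2: each birth OM is weighted by the creature's neuron count.
    weighted = probabilities(
        {k: births_per_year[k] * neurons[k] for k in births_per_year})

    print(uniform)   # ants dominate: P(human) is about 1e-7
    print(weighted)  # humans now carry roughly 5% of the measure

Under uniform counting a human birth OM is a one-in-ten-million event; 
neuron weighting moves humans to a few percent, but insects still 
dominate, so the choice of weighting matters for the conclusion.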

-- 


Dr Russell Standish                        Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow            hpco...@hpcoders.com.au
Economics, Kingston University             http://www.hpcoders.com.au




Re: How to live forever

2018-04-04 Thread Russell Standish
On Mon, Apr 02, 2018 at 10:22:57PM -0700, Brent Meeker wrote:
> 
> 
> On 3/31/2018 1:30 AM, Russell Standish wrote:
> > On Wed, Mar 21, 2018 at 05:14:21PM +0100, Bruno Marchal wrote:
> > > Now, is a jellyfish conscious?
> > > 
> > > I bet they are, but not far away from the dissociative and constant 
> > > arithmetical consciousness (of the universal machines).
> > As I'm sure you're aware, I disagree with this. Jellyfish appear to be
> > quite simple automatons, with a distributed neural network, not any
> > brain as such. However, my main reason for disagreeing is that
> > anthropic reasoning leads us to conclude that most species of animal
> > are not conscious. Our most typical animal is a nematode (for instance
> > your favourite - the planarians), but even most insects cannot be
> > conscious either.
> > 
> 
> In these discussions I always wonder what kind of consciousness is meant?
> 
> 1. Perception: light/dark, acid/base, touch...
> 2. Self-location relative to things: prey/predators
> 3. Self relative to others: sex, territory, rivals
> 4. Abstractions: number, geometries
> 5. Self-reflection: theory of minds, language
> 
> Brent

For my anthropic ant argument, I would have said 2 and above. Mere
perception would not be enough to apply the anthropic argument; a
thermostat probably has only 1 above. Of course, others have argued that 5
is necessary for anthropic reasoning - I just remain unconvinced by that.


-- 


Dr Russell Standish                        Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow            hpco...@hpcoders.com.au
Economics, Kingston University             http://www.hpcoders.com.au

