Re: How to live forever

2018-03-22 Thread Stathis Papaioannou
On Fri, 23 Mar 2018 at 11:32 am, Bruce Kellett wrote:

> From: Stathis Papaioannou 
>
>
> On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett wrote:
>
>> From: Stathis Papaioannou < stath...@gmail.com>
>>
>>
>> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett <
>> bhkell...@optusnet.com.au> wrote:
>>
>>> From: Stathis Papaioannou < stath...@gmail.com>
>>>
>>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>>> bhkell...@optusnet.com.au> wrote:
>>>

 If the theory is that if the observable behaviour of the brain is
 replicated, then consciousness will also be replicated, then the clear
 corollary is that consciousness can be inferred from observable behaviour.
 Which implies that I can be as certain of the consciousness of other people
 as I am of my own. This seems to do some violence to the 1p/1pp/3p
> distinctions that computationalism relies on so much: only 1p is "certainly
 certain". But if I can reliably infer consciousness in others, then other
 things can be as certain as 1p experiences

>>>
>>> You can’t reliably infer consciousness in others. What you can infer is
>>> that whatever consciousness an entity has, it will be preserved if
>>> functionally identical substitutions in its brain are made.
>>>
>>>
>>> You have that backwards. You can infer consciousness in others, by
>>> observing their behaviour. The alternative would be solipsism. Now, while
>>> you can't prove or disprove solipsism in a mathematical sense, you can
>>> reject solipsism as a useless theory, since it tells you nothing about
>>> anything. Whereas science acts on the available evidence -- observations of
>>> behaviour in this case.
>>>
>>> But we have no evidence that consciousness would be preserved under
>>> functionally identical substitutions in the brain. Consciousness may be a
>>> global affair, so functional equivalence may not be achievable, or even
>>> definable, within the context of a conscious brain. Can you map the
>>> functionality of even a single neuron? You are assuming that you can, but
>>> if that function is global, then you probably can't. There is a fair amount
>>> of glibness in your assumption that consciousness will be preserved under
>>> such substitutions.
>>>
>>>
>>> You can’t know if a mouse is conscious, but you can know that if mouse
>>> neurones are replaced with functionally identical electronic neurones its
>>> behaviour will be the same and any consciousness it may have will also be
>>> the same.
>>>
>>>
>>> You cannot know this without actually doing the substitution and
>>> observing the results.
>>>
>>
>> So do you think that it is possible to replace the neurones with
>> functionally identical neurones (same output for same input) and the
>> mouse’s behaviour would *not* be the same?
>>
>>
>> Individual neurons may not be the appropriate functional unit.
>>
>> It seems that you might be close to circularity -- neural functionality
>> includes consciousness. So if I maintain neural functionality, I will
>> maintain consciousness.
>>
>
> The only assumption is that the brain is somehow responsible for
> consciousness. The argument I am making is that if any part of the brain is
> replaced with a functionally identical non-biological part, engineered to
> replicate its interactions with the surrounding tissue,  consciousness will
> also necessarily be replicated; for if not, an absurd situation would
> result, whereby consciousness can radically change but the subject not
> notice, or consciousness decouple completely from behaviour, or
> consciousness flip on or off with the change of one subatomic particle.
>
>
> There still seems to be some circularity there -- consciousness is part of
> the functionality of the brain, or parts thereof, so maintaining
> functionality requires maintenance of consciousness.
>

By functionality here I specifically mean the observable behaviour of the
brain. Consciousness is special in that it is not directly observable as,
for example, the potential difference across a cell membrane or the
contraction of muscle is.
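
The "same output for same input" criterion being debated in this thread can be sketched as a toy test harness (purely illustrative; the neuron model, weights and threshold are invented for the example):

```python
# Toy illustration of "functionally identical (same output for same input)".
# The "biological" neuron is a threshold unit; the "electronic" replacement
# is built by exhaustively measuring its input/output behaviour.

def all_inputs():
    """The full (finite, binary) input space of this toy model."""
    return [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def biological_neuron(inputs):
    """Fires (returns 1) iff the weighted input exceeds a threshold."""
    weights = (0.5, -0.25, 1.0)
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > 0.4 else 0

# The replacement part: a lookup table recorded from the original unit,
# standing in for a part "engineered to replicate its interactions".
_MEASURED = {inp: biological_neuron(inp) for inp in all_inputs()}

def electronic_neuron(inputs):
    return _MEASURED[tuple(inputs)]

# Functional identity at this level of description: same output for every input.
assert all(biological_neuron(i) == electronic_neuron(i) for i in all_inputs())
```

Whether such input/output equivalence at the level of a single unit suffices to preserve consciousness is, of course, exactly what the thread disputes.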

> One would really need some independent measure of functionality,
> independent of consciousness. And the claim would be that reproducing local
> functionality would maintain consciousness. I do not see that that could
> readily be tested, since mapping all the inputs and outputs of neurons or
> other brain components may not be technically possible. One could map
> neuron behaviour at some crude level, but would that be sufficient to
> maintain consciousness? Natural cell death, and the death of neurons,
> generally leads to noticeable changes in consciousness and function -- have
> you not noticed decline in memory and other mental faculties as you get
> older? When consciousness changes in this way, the subject is usually only
> too 

Re: How to live forever

2018-03-22 Thread Bruce Kellett

From: *Stathis Papaioannou*


On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett wrote:


From: *Stathis Papaioannou*


    On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett wrote:

        From: *Stathis Papaioannou*

        On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett wrote:


If the theory is that if the observable behaviour of the
brain is replicated, then consciousness will also be
replicated, then the clear corollary is that
consciousness can be inferred from observable behaviour.
Which implies that I can be as certain of the
consciousness of other people as I am of my own. This
seems to do some violence to the 1p/1pp/3p distinctions
            that computationalism relies on so much: only 1p is
"certainly certain". But if I can reliably infer
consciousness in others, then other things can be as
certain as 1p experiences


        You can’t reliably infer consciousness in others. What you
can infer is that whatever consciousness an entity has, it
will be preserved if functionally identical substitutions in
its brain are made.


You have that backwards. You can infer consciousness in
others, by observing their behaviour. The alternative would
be solipsism. Now, while you can't prove or disprove
solipsism in a mathematical sense, you can reject solipsism
as a useless theory, since it tells you nothing about
anything. Whereas science acts on the available evidence --
observations of behaviour in this case.

But we have no evidence that consciousness would be preserved
under functionally identical substitutions in the brain.
        Consciousness may be a global affair, so functional
equivalence may not be achievable, or even definable, within
the context of a conscious brain. Can you map the
functionality of even a single neuron? You are assuming that
you can, but if that function is global, then you probably
can't. There is a fair amount of glibness in your assumption
that consciousness will be preserved under such substitutions.



You can’t know if a mouse is conscious, but you can know
that if mouse neurones are replaced with functionally
identical electronic neurones its behaviour will be the same
and any consciousness it may have will also be the same.


You cannot know this without actually doing the substitution
and observing the results.


So do you think that it is possible to replace the neurones with
functionally identical neurones (same output for same input) and
the mouse’s behaviour would *not* be the same?


Individual neurons may not be the appropriate functional unit.

It seems that you might be close to circularity -- neural
functionality includes consciousness. So if I maintain neural
functionality, I will maintain consciousness.


The only assumption is that the brain is somehow responsible for 
consciousness. The argument I am making is that if any part of the 
brain is replaced with a functionally identical non-biological part, 
engineered to replicate its interactions with the surrounding tissue, 
 consciousness will also necessarily be replicated; for if not, an 
absurd situation would result, whereby consciousness can radically 
change but the subject not notice, or consciousness decouple 
completely from behaviour, or consciousness flip on or off with the 
change of one subatomic particle.


There still seems to be some circularity there -- consciousness is part 
of the functionality of the brain, or parts thereof, so maintaining 
functionality requires maintenance of consciousness. One would really 
need some independent measure of functionality, independent of 
consciousness. And the claim would be that reproducing local 
functionality would maintain consciousness. I do not see that that could 
readily be tested, since mapping all the inputs and outputs of neurons 
or other brain components may not be technically possible. One could map 
neuron behaviour at some crude level, but would that be sufficient to 
maintain consciousness? Natural cell death, and the death of neurons, 
generally leads to noticeable changes in consciousness and 
function -- have you not noticed decline in memory and other mental 
faculties as you get older? When consciousness changes in this way, the 
subject is usually only too painfully aware of the decline in mental 
acuity. To avoid this 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 22 Mar 2018, at 02:34, Kim Jones  wrote:
> 
> What if we already live forever? Why, in fact, do people think that just 
> because we die we go away or cease to exist? What if Nature has already 
> solved the problem? Why would you spend a motza to ensure you lived forever 
> in the same boring universe when after 70 or 80 years you can be teleported 
> to a different universe in a pine box?

Yes, that makes sense, and the simplest way to be immortal consists in having 
children. But then … you know … kids can be terrible … ;)

Then, we are also immortal already when we remember the “consciousness state 
which is out of time”, but that one is so counterintuitive that I prefer to not 
insist on it. I think we get it with salvia, but some describe it as the worst 
thing they have ever encountered, others as the most blissful. 

Mortality is God’s self-delusion when bored with immortality, somehow … It is 
also a way to say Hello to Itself, or to play hide-and-seek.

Bruno



> 
> Kim Jones
> 
> 
> 
> 
> On 22 Mar 2018, at 6:39 am, Brent Meeker wrote:
> 
>> 
>> 
>> On 3/21/2018 8:40 AM, Bruno Marchal wrote:
>>> 
On 20 Mar 2018, at 00:56, Brent Meeker wrote:
 
 
 
 On 3/19/2018 2:19 PM, John Clark wrote:
> On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker wrote:
> 
> > octopuses are fairly intelligent but their neural structure is 
> > distributed very differently from mammals of similar intelligence.  An 
> > artificial intelligence that was not modeled on mammalian brain 
> > structure might be intelligent but not conscious
> 
> Maybe maybe maybe. And you don't have the exact same brain 
> structure as I have so you might be intelligent but not conscious. I said 
> it before I'll say it again, consciousness theories are useless, and not 
> just the ones on this list, all of them.
 
 You're the one who floated a theory of consciousness based on evolution.  
 I'm just pointing out that it only shows that consciousness exists in 
 humans for some evolutionarily selected purpose.  It doesn't apply to 
 intelligence that arises in some other evolutionary branch or intelligence 
 like AI that doesn't evolve but is designed.
 
>>> 
>>> 
>>> Not sure that there is a genuine difference between design and evolution. 
>>> With multicellular life, there is an evolution of design. With the origin of 
>>> life, there has been a design of evolution, even if serendipitously. I am 
>>> not talking about some purposeful or intelligent design here. A cell is a 
>>> quite sophisticated "gigantic nano-machine”.
>>> 
>>> Then some techniques in AI, like the genetic algorithm, or some techniques 
>>> inspired by the study of the immune system, or self-reference, leads to 
>>> programs or machines evolving in some ways.
>>> 
>>> Now, I can argue that for consciousness nothing of this is needed. It is 
>>> the canonical knowledge associated with the fixed point of the embedding of 
>>> the universal machines in the arithmetical reality.
>>> 
>>> It differentiates into the many indexical first person scenarii.
>>> 
>>> Matter should be what gives rise to possibilities ([]p & ~[]f, []p & <>t).  
>>> That works as it is confirmed by QM without collapse, both intuitively 
>>> through the many computations, and formally as the three material modes do 
>>> provide a formal quantum logic, its arithmetical interpretation, and its 
>>> metamathematical interpretations. 
>>> 
>>> The universal machines rich enough to prove their own universality (like the 
>>> sound humans, and Peano arithmetic, and ZF, …) are confronted with the 
>>> distinction between knowing and proving. They prove their own incompleteness 
>>> but still figure out some truths despite their being unprovable. 
>>> 
>>> The only mystery is where do the numbers (and/or the combinators, the 
>>> lambda expressions, the game of life, c++, etc.) come from?
>>> But here the sound löbian machine can prove that it is impossible to derive 
>>> a universal system from a non-universal theory. 
>>> 
>>> A weaker version of the Church-Turing-Post-Kleene thesis is: there exists a 
>>> Universal Machine. That is, a Machine which computes all computable 
>>> functions. The stronger usual version is that some formal system/definition 
>>> provides such a universal machine, meaning that the class of the functions 
>>> computable by some universal machine gives the class of all computable 
>>> functions, including those not everywhere defined (and non algorithmically 
>>> spared among those defined everywhere: the price of universality). 
>>> 
>>> That universal being has a rich theology, explaining the relation between 
>>> believing, knowing, observing, feeling and the truth.
>> 
>> 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 21 Mar 2018, at 20:39, Brent Meeker  wrote:
> 
> 
> 
> On 3/21/2018 8:40 AM, Bruno Marchal wrote:
>> 
>>> On 20 Mar 2018, at 00:56, Brent Meeker wrote:
>>> 
>>> 
>>> 
>>> On 3/19/2018 2:19 PM, John Clark wrote:
On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker wrote:
 
 > octopuses are fairly intelligent but their neural structure is 
 > distributed very differently from mammals of similar intelligence.  An 
 > artificial intelligence that was not modeled on mammalian brain 
 > structure might be intelligent but not conscious
 
Maybe maybe maybe. And you don't have the exact same brain 
 structure as I have so you might be intelligent but not conscious. I said 
 it before I'll say it again, consciousness theories are useless, and not 
 just the ones on this list, all of them.
>>> 
>>> You're the one who floated a theory of consciousness based on evolution.  
>>> I'm just pointing out that it only shows that consciousness exists in 
>>> humans for some evolutionarily selected purpose.  It doesn't apply to 
>>> intelligence that arises in some other evolutionary branch or intelligence 
>>> like AI that doesn't evolve but is designed.
>>> 
>> 
>> 
>> Not sure that there is a genuine difference between design and evolution. 
>> With multicellular life, there is an evolution of design. With the origin of 
>> life, there has been a design of evolution, even if serendipitously. I am 
>> not talking about some purposeful or intelligent design here. A cell is a 
>> quite sophisticated "gigantic nano-machine”.
>> 
>> Then some techniques in AI, like the genetic algorithm, or some techniques 
>> inspired by the study of the immune system, or self-reference, leads to 
>> programs or machines evolving in some ways.
>> 
>> Now, I can argue that for consciousness nothing of this is needed. It is the 
>> canonical knowledge associated with the fixed point of the embedding of the 
>> universal machines in the arithmetical reality.
>> 
>> It differentiates into the many indexical first person scenarii.
>> 
>> Matter should be what gives rise to possibilities ([]p & ~[]f, []p & <>t).  
>> That works as it is confirmed by QM without collapse, both intuitively 
>> through the many computations, and formally as the three material modes do 
>> provide a formal quantum logic, its arithmetical interpretation, and its 
>> metamathematical interpretations. 
>> 
>> The universal machines rich enough to prove their own universality (like the 
>> sound humans, and Peano arithmetic, and ZF, …) are confronted with the 
>> distinction between knowing and proving. They prove their own incompleteness 
>> but still figure out some truths despite their being unprovable. 
>> 
>> The only mystery is where do the numbers (and/or the combinators, the 
>> lambda expressions, the game of life, c++, etc.) come from?
>> But here the sound löbian machine can prove that it is impossible to derive 
>> a universal system from a non-universal theory. 
>> 
>> A weaker version of the Church-Turing-Post-Kleene thesis is: there exists a 
>> Universal Machine. That is, a Machine which computes all computable 
>> functions. The stronger usual version is that some formal system/definition 
>> provides such a universal machine, meaning that the class of the functions 
>> computable by some universal machine gives the class of all computable 
>> functions, including those not everywhere defined (and non algorithmically 
>> spared among those defined everywhere: the price of universality). 
>> 
>> That universal being has a rich theology, explaining the relation between 
>> believing, knowing, observing, feeling and the truth.
> 
> That didn't address my question: Can you imagine different kinds of 
> consciousness?  For example you have sometimes speculated that there is only 
> one consciousness which is somehow compartmentalized in individuals.  That 
> implies that the compartmentalization could be eliminated and a different 
> kind of consciousness experienced...like the Borg.

Yes. With dissociative drugs, like Ketamine (dangerous) or salvia (much less 
dangerous but quite impressive) you do feel like remembering who you were before 
birth, and it is a quite altered state of consciousness, which is felt 
retrospectively as being totally out of time and space. But you can have a 
glimpse of this each time you understand a theorem (even more so with a no-go 
theorem) in math.

At first, I thought that salvia led to the experience of the Löbian entity, but 
eventually, it looks like it is the universal Turing machine experience, before 
she gets deluded into believing in the induction axioms. It is a dissociative, 
non-Löbian altered state of consciousness. I suspect we all go there each night, 
and the brain does a lot of work so that we do not remember this (it would not help 
to motivate for the 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 21 Mar 2018, at 00:27, Bruce Kellett  wrote:
> 
> From: Telmo Menezes
>> 
>> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett wrote:
>> 
>> > Now it may be that you want to reject Stathis's claim, and insist that
>> > consciousness cannot be inferred from behaviour. But it seems to me that
>> > that theory is as lacking in independent verification as the contrary.
>> 
>> Again, no theory. I am just stating the simple fact that, since there
>> is no known instrument so far that can detect consciousness in the 3p,
>> then it is not possible to propose scientific theories about
>> consciousness at the moment. Only conjectures.
> 
> Explain the difference between a scientific theory and a scientific 
> conjecture. Science is not about proofs; theories are always only ever 
> conjectural, and subject to revision and/or rejection as further evidence is 
> gathered. You don't need an instrument that can give a clean yes/no answer to 
> the presence of consciousness to develop scientific theories about 
> consciousness. We can start with the observation that all normal healthy 
> humans are conscious, and that rocks and other inert objects are not 
> conscious and work from there to develop a science of consciousness, based on 
> evidence from the observation of behaviour. One might well consider that 
> there are different levels or types of consciousness accorded to humans, 
> animals, octopuses, and so on. But that would be a scientific finding, based 
> on observational evidence.
> 
> So science is not as limited as you seem to want to make it —

I agree with all that you say here, but then ...



> science is not mathematics, after all.


Well, what you say above applies also to mathematics, even if we might argue 
that very elementary arithmetic is close to being undoubtable; but that is 
true only for an arithmetical realist, not for those who reject the (A v ~A) 
principle. Most mathematicians and scientists do not doubt arithmetic, but many 
philosophers of mathematics do. But OK, this is tangential to the discussion. The 
important point is that science is not about proof (and then it happens that proof 
is not about truth, as the premises might be unsound, even if consistent).

Bruno




> 
>> If you want my conjecture: I assume that all living things are
>> conscious. If you show me an AI that behaves like a human being (or
>> even a dog) I will assume it's conscious too. But none of this is
>> science.
>> 
>> I strongly suspect that consciousness is something that cannot, in
>> fact, be studied by science -- because consciousness is what does
>> science. It's like asking you to look inside your eyeballs.
> 
>  It is perfectly possible to look inside one's own eyeballs. Have you never 
> been to an optician? Just use a mirror with their instruments for inspecting 
> and recording the state of the retina.
> 
> Bruce
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To post to this group, send email to everything-list@googlegroups.com 
> .
> Visit this group at https://groups.google.com/group/everything-list 
> .
> For more options, visit https://groups.google.com/d/optout 
> .



Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 20:34, Brent Meeker  wrote:
> 
> 
> 
> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>> The interesting thing is that you can draw conclusions about consciousness
>>> without being able to define it or detect it.
>> I agree.
>> 
>>> The claim is that IF an entity
>>> is conscious THEN its consciousness will be preserved if brain function is
>>> preserved despite changing the brain substrate.
>> Ok, this is computationalism. I also bet on computationalism, but I
>> think we must proceed with caution and not forget that we are just
>> assuming this to be true. Your thought experiment is convincing but is
>> not a proof. You do expose something that I agree with: that
>> non-computationalism sounds silly.
> But does it sound so silly if we propose substituting a completely different 
> kind of computer, e.g. von Neumann architecture or one that just records 
> everything instead of an episodic associative memory, for the brain?  The 
> Church-Turing conjecture says it can compute the same functions. 

That is the usual extensional Church-Turing thesis, but it implies the 
intensional thesis. Not only can a universal machine compute whatever any other 
can compute, it can also compute it in the same way as the machine it imitates. 
The reason is simple: the universal machine can emulate the other universal 
machine. A Lisp interpreter can emulate a Fortran interpreter and compute, 
after that, like a Fortran compiler/interpreter.

Even Babbage’s Universal Engine can emulate a quantum computer, albeit with a 
super-slow-down; but the entities emulated by that quantum virtual machine 
will not see the difference, and when done in arithmetic, no first person can 
detect the delays, so the slow-down plays no role.
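
The intensional point — that an emulator reproduces the step-by-step computation, not merely the final result — can be sketched with a toy stack machine and a second, differently written interpreter (all names invented for the example):

```python
# A toy stack machine: a program is a list of instructions.

def run(program, stack):
    """Direct interpreter; records every intermediate stack state."""
    trace = [list(stack)]
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        trace.append(list(stack))
    return stack[-1], trace

def emulate(program, stack):
    """A second interpreter, written differently (dispatch table), that
    imitates the first machine instruction by instruction."""
    dispatch = {
        "push": lambda s, v: s.append(v),
        "add":  lambda s: s.append(s.pop() + s.pop()),
        "mul":  lambda s: s.append(s.pop() * s.pop()),
    }
    trace = [list(stack)]
    for op, *args in program:
        dispatch[op](stack, *args)
        trace.append(list(stack))
    return stack[-1], trace

program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]  # (2+3)*4
assert run(program, []) == emulate(program, [])   # same result *and* same trace
```

Extensional equivalence would only require the final results to agree; here the whole trace of intermediate states agrees, which is the sense in which one interpreter computes "in the same way" as the machine it imitates.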




> But does it instantiate the same consciousness?  My intuition is that it 
> would be "conscious" but in some different way; for example by having the 
> kind of memory you would have if you could review a movie of any interval 
> in your past.

Then you have got a new type of brain, doing new types of computations, which 
makes sense. That is why eventually we will all buy artificial brains, because 
they will just be really more powerful than our actual brains, and allow much 
more. Probably digital transplants will come in that way: people will put the 
smartphone *in* the head … in some not-so-far future.

Bruno




> 
> Brent
> 



Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 17:52, Lawrence Crowell wrote:
> 
> 
> 
> On Tuesday, March 20, 2018 at 3:57:31 AM UTC-5, telmo_menezes wrote:
> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett wrote: 
> > From: Telmo Menezes  
> > 
> > 
> >> On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett wrote: 
> >> From: Stathis Papaioannou  
> >> 
> >> 
> >> It is possible that consciousness is fully preserved until a threshold is 
> >> reached then suddenly disappears. So if half the subject’s brain is 
> >> replaced, he behaves normally and has normal consciousness, but if one more 
> >> neurone is replaced he continues to behave normally but becomes a zombie. 
> >> Moreover, since neurones are themselves complex systems it could be broken 
> >> down further: half of that final neurone could be replaced with no change to 
> >> consciousness, but when a particular membrane protein is replaced with a 
> >> non-biological nanomachine the subject will suddenly become a zombie. And we 
> >> need not stop here, because this protein molecule could also be replaced 
> >> gradually, for example by non-biological radioisotopes. If half the atoms in 
> >> this protein are replaced, there is no change in behaviour and no change in 
> >> consciousness; but when one more atom is replaced a threshold is reached and 
> >> the subject suddenly loses consciousness. So zombification could turn on the 
> >> addition or subtraction of one neutron. Are you prepared to go this far to 
> >> challenge the idea that if the observable behaviour of the brain is 
> >> replicated, consciousness will also be replicated? 
> >> 
> >> 
> >> If the theory is that if the observable behaviour of the brain is 
> >> replicated, then consciousness will also be replicated, then the clear 
> >> corollary is that consciousness can be inferred from observable behaviour. 
> > 
> > For this to be a theory in the scientific sense, one needs some way to 
> > detect consciousness. In that case your corollary becomes a tautology: 
> > 
> > (a) If one can detect consciousness then one can detect consciousness. 
> > 
> > The other option is to assume that observable behaviors in the brain 
> > imply consciousness -- because "common sense", because experts say so, 
> > whatever. In this case it becomes circular reasoning: 
> > 
> > (b) Assuming that observable behaviors in the brain imply 
> > consciousness, consciousness can be inferred from brain behaviors. 
> > 
> > 
> > I was responding to the claim by Stathis that consciousness will follow 
> > replication of observable behaviour. It seemed to me that this was proposed 
> > as a theory: "If the observable behaviour of is replicated then 
> > consciousness will also be replicated." 
> 
> Lawrence is proposing that something specific about the brain might be 
> necessary for consciousness to arise. He proposed a scenario where 
> parts of the brain are replaced with a computer, and behavior is 
> maintained while consciousness is lost (p-zombie). Stathis is 
> proposing a thought experiment that attempts reductio ad absurdum on 
> this scenario. Although this is all interesting speculation, there is 
> no scientific theory, because there is no way to perform an 
> experiment, because there is no scientific instrument that detects 
> consciousness. In the end I still don't know, as scientific fact, if 
> others are conscious. 
> 
> You were the first to call it a theory, and this is why I reacted. 
> 
> My point is actually empirical. The claim is that this can be done, which 
> means experiments will be  done. If so then we might ask, "What can go wrong 
> with that?" 
> 
> My point is that to load my brain states into a computer requires some 
> process for measuring and cataloging the neural nets in my brain. Processes 
> such as computing subsets of combinatorial processes are NP-complete.

Why? I don’t see that at all. A copy can be done in linear time, once we have the 
right technology. The hippocampus of the rat has been copied, and the brains of 
some worms also, with some partial success.

Personally I would ask for a copy at the atomic level, just above Heisenberg 
uncertainty, in case I am forced to say “yes” to some doctor.





> This will form some limit on this claim, and it could be a fundamental 
> barrier. Duplication is not possible either, for a complete duplicate on the 
> fine grain quantum scale involves quantum cloning that is not a quantum 
> process. A lot of this discussion involves rubbing the philosopher's stone, 
> when in fact this would be a whole lot more difficult to actually do.

Note that we are actually copied or prepared (in the quantum sense) infinitely 
often in arithmetic, and only that counts to understand that if Mechanism is 
true, then physics becomes a branch of machine theology, which is itself a 
branch of 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 17:03, John Clark  wrote:
> 
> On Mon, Mar 19, 2018 at 7:06 PM, Bruce Kellett wrote:
> 
> > If the theory is that if the observable behaviour of the brain is 
> > replicated, then consciousness will also be replicated, then the clear 
> > corollary is that consciousness can be inferred from observable behaviour.
> Yes.
> 
> > Which implies that I can be as certain of the consciousness of other people 
> > as I am of my own.
> No. The idea that consciousness can be inferred from intelligent behavior is 
> an axiom of existence, it has no proof and will never have a proof but it 
> sure seems like it’s true, and every sane human being uses it every hour of 
> their waking life. And there is always an element of doubt in real life, or at 
> least there should be, so something need not provide absolute certainty to be 
> enormously useful. As for my own consciousness I don’t have a proof of that 
> either but I don’t need one because I’ve got the one thing that can pull rank 
> even over proof, direct experience.   
> 
> > This seems to do some violence to the 1p/1pp/3p distinctions 
> Bruno’s the one who started pushing that ridiculous phrase, I suppose he 
> thought it sounded more profound erudite and scientific than “the difference 
> between you and me”. And I would maintain nobody outside a looney bin has 
> difficulty finding the "1p/1pp/3p distinction”.
> 
> 


Yes, I have taught this for years, and nobody has ever had any difficulty in 
understanding it, including the first-person indeterminacy.

Problems are raised at steps 7 and 8, which are more demanding in mathematical 
logic, as they require understanding that Very Elementary Arithmetic (Peano 
Arithmetic *without* induction) is already Turing complete. This has been known 
(by logicians) since the 1930s.

Bruno



>  John K Clark
> 
> 
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To post to this group, send email to everything-list@googlegroups.com 
> .
> Visit this group at https://groups.google.com/group/everything-list 
> .
> For more options, visit https://groups.google.com/d/optout 
> .

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 09:57, Telmo Menezes  wrote:
> 
> On Tue, Mar 20, 2018 at 1:03 AM, Bruce Kellett
>  wrote:
>> From: Telmo Menezes 
>> 
>> 
>> On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett
>>  wrote:
>>> From: Stathis Papaioannou 
>>> 
>>> 
>>> It is possible that consciousness is fully preserved until a threshold is
>>> reached then suddenly disappears. So if half the subject’s brain is
>>> replaced, he behaves normally and has normal consciousness, but if one
>>> more
>>> neurone is replaced he continues to behave normally but becomes a zombie.
>>> Moreover, since neurones are themselves complex systems it could be broken
>>> down further: half of that final neurone could be replaced with no change
>>> to
>>> consciousness, but when a particular membrane protein is replaced with a
>>> non-biological nanomachine the subject will suddenly become a zombie. And
>>> we
>>> need not stop here, because this protein molecule could also be replaced
>>> gradually, for example by non-biological radioisotopes. If half the atoms
>>> in
>>> this protein are replaced, there is no change in behaviour and no change
>>> in
>>> consciousness; but when one more atom is replaced a threshold is reached
>>> and
>>> the subject suddenly loses consciousness. So zombification could turn on
>>> the
>>> addition or subtraction of one neutron. Are you prepared to go this far to
>>> challenge the idea that if the observable behaviour of the brain is
>>> replicated, consciousness will also be replicated?
>>> 
>>> 
>>> If the theory is that if the observable behaviour of the brain is
>>> replicated, then consciousness will also be replicated, then the clear
>>> corollary is that consciousness can be inferred from observable behaviour.
>> 
>> For this to be a theory in the scientific sense, one needs some way to
>> detect consciousness.

I did not dream: Telmo, that is Aristotle’s criterion of “scientificness” (if I 
can say so).

A Platonist doubts what we can detect even more than what he can conceive or 
understand ….

(I might be slightly out of the context here, note.)




>> In that case your corollary becomes a tautology:
>> 
>> (a) If one can detect consciousness then one can detect consciousness.
>> 
>> The other option is to assume that observable behaviors in the brain
>> imply consciousness -- because "common sense", because experts say so,
>> whatever. In this case it becomes circular reasoning:
>> 
>> (b) Assuming that observable behaviors in the brain imply
>> consciousness, consciousness can be inferred from brain behaviors.
>> 
>> 
>> I was responding to the claim by Stathis that consciousness will follow
>> replication of observable behaviour. It seemed to me that this was proposed
>> as a theory: "If the observable behaviour of is replicated then
>> consciousness will also be replicated."
> 
> Lawrence is proposing that something specific about the brain might be
> necessary for consciousness to arise. He proposed a scenario where
> parts of the brain are replaced with a computer, and behavior is
> maintained while consciousness is lost (p-zombie). Stathis is
> proposing a thought experiment that attempts reductio ad absurdum on
> this scenario. Although this is all interesting speculation, there is
> no scientific theory, because there is no way to perform an
> experiment, because there is no scientific instrument that detects
> consciousness. In the end I still don't know, as scientific fact, if
> others are conscious.

That is right. But we don’t know anything as a scientific fact. We can only 
prove things within theories, which always rest on assumptions.

The dream argument is radical about that. You can believe strongly that the 
Higgs boson has been detected, and yet you can conceive that you will wake up 
and find that all that boson stuff was a dream.

Science is only a collection of beliefs, and we mainly learn when we refute our 
beliefs, but even that could be a dream (contra Popper). Yet we can find big 
pictures which are appealing in elegance and beauty, and confirmed by the (only 
plausible) facts.





> 
> You were the first to call it a theory, and this is why I reacted.

OK.


> 
>> I was merely pointing out
>> consequences of this theory, so your claims of tautology and/or circularity
>> rather miss the point: the consequences of any theory are either tautologies
>> or circularities in that sense, because they are implications of the theory.
> 
> Tautologies are fine indeed. I did not call (a) a tautology as an
> insult, merely to point out that the hard part is still missing, and
> that assuming that it is solved does not lead to anywhere interesting.
> 
> Circularities are, of course, not fine. You cannot assume that you can
> infer consciousness from behavior, and that use this assumption to
> conclude that you can infer consciousness from behavior.
> 
>> Now it may be that you want to reject 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 06:46, Stathis Papaioannou  wrote:
> 
> 
> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett  > wrote:
> From: Stathis Papaioannou >
> 
>> 
>> It is possible that consciousness is fully preserved until a threshold is 
>> reached then suddenly disappears. So if half the subject’s brain is 
>> replaced, he behaves normally and has normal consciousness, but if one more 
>> neurone is replaced he continues to behave normally but becomes a zombie. 
>> Moreover, since neurones are themselves complex systems it could be broken 
>> down further: half of that final neurone could be replaced with no change to 
>> consciousness, but when a particular membrane protein is replaced with a 
>> non-biological nanomachine the subject will suddenly become a zombie. And we 
>> need not stop here, because this protein molecule could also be replaced 
>> gradually, for example by non-biological radioisotopes. If half the atoms in 
>> this protein are replaced, there is no change in behaviour and no change in 
>> consciousness; but when one more atom is replaced a threshold is reached and 
>> the subject suddenly loses consciousness. So zombification could turn on the 
>> addition or subtraction of one neutron. Are you prepared to go this far to 
>> challenge the idea that if the observable behaviour of the brain is 
>> replicated, consciousness will also be replicated?
> 
> If the theory is that if the observable behaviour of the brain is replicated, 
> then consciousness will also be replicated, then the clear corollary is that 
> consciousness can be inferred from observable behaviour. Which implies that I 
> can be as certain of the consciousness of other people as I am of my own. 
> This seems to do some violence to the 1p/1pp/3p distinctions that 
> computationalism rely on so much: only 1p is "certainly certain". But if I 
> can reliably infer consciousness in others, then other things can be as 
> certain as 1p experiences
> 
> You can’t reliable infer consciousness in others. What you can infer is that 
> whatever consciousness an entity has, it will be preserved if functionally 
> identical substitutions in its brain are made. You can’t know if a mouse is 
> conscious, but you can know that if mouse neurones are replaced with 
> functionally identical electronic neurones its behaviour will be the same and 
> any consciousness it may have will also be the same.

Assuming the neuronal level for the substitution level; but some, like Hameroff, 
will require the copy to be made at the level of the tubulins, others will ask for 
the quantum states, and some will just accept the neurons + glial cells, as the 
latter seem to play a role in pain.

We cannot know our machine level, but we can bet, and we can believe (correctly 
or wrongly) that we have survived.

It is arguable that molecular biology gives some weight to the idea that 
“nature has already bet” on Mechanism, since we replace our stuff all the time.

Bruno




> -- 
> Stathis Papaioannou
> 



Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 02:52, John Clark  wrote:
> 
> On Mon, Mar 19, 2018 at 8:11 PM, Lawrence Crowell 
> > 
> wrote:
> 
> ​>>​That’s nice. I repeat my question, NP-completeness is sorta weird and 
> consciousness is sorta weird, but other than that is there any reason to 
> think the two things are related?
> 
> ​>  If you can't compute efficiently the map,
> 
>  I ​don't even know what ​"compute efficiently the map​" means.​
>  
>  ​> ​how do you know this system will really upload brain states in such as 
> way that consciousness seemlessly carries from brain to computer?
> 
> I don't know. I can never know for sure until I actually do it, if I notice 
> that I'm not dead and I remember being me then and only then I'll know it 
> worked. Maybe Black Holes and cadavers and piles of shit are conscious and 
> maybe a computer that acts just like me is not, but I doubt it. In matters 
> like this all I can do is make an educated guess, and my guess is piles of 
> shit are not conscious but an intelligent computer is.   
>   
> ​>Even if the entity in the computer is conscious it might not actually be me.
> 
> ​If it remember being you then it is you.​ 
>  
> ​> ​If I could duplicate myself which of the two would "be me?"
> 
> That is a silly question. If you are duplicated in a duplicating machine then 
> both are you because that's what the word "duplicated" means. I've gone over 
> this crap for years with Bruno,

Yes, both are you, but they become different and have different subjective 
lives after the duplication, and as this is known in advance, it entails the 
first-person indeterminacy, which plays the key role in deriving physics from 
arithmetic (and that works up to now, as we have already retrieved a part of 
quantum physics, both intuitively, with the many computations playing the role 
of the many worlds, and formally, as []p & p gives a quantum logic when p is 
semi-computable (sigma_1)).
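The first-person indeterminacy invoked above can be illustrated with a toy simulation. This is only an illustrative sketch, not part of the argument: the city names, and the assumption that reading one diary per run samples the copies uniformly, are choices of the illustration.

```python
import random

def duplication_run(rng):
    """One self-duplication: from the 3p view both copies exist and both
    are 'you'; from the 1p view only one diary entry gets read per run,
    and which one cannot be predicted in advance."""
    copies = ["Washington", "Moscow"]  # both cities receive a copy (3p fact)
    return rng.choice(copies)          # the 1p outcome, assumed uniform

rng = random.Random(42)
outcomes = [duplication_run(rng) for _ in range(10_000)]
freq_w = outcomes.count("Washington") / len(outcomes)
print(f"first-person frequency of Washington: {freq_w:.2f}")
```

Over many runs the first-person frequency of either city hovers around 1/2, even though the third-person description of each run is fully deterministic: both copies are always created.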

Bruno




> if I weren't an atheist I'd pray to God I don't have to repeat it with you.
>   
> ​>​ As I see it there is a lot of hype here concealing the fact we really 
> know every little about this.
> 
> ​But ​you seems to know all there about it, enough to quite literally bet 
> your life on it not working.
> 
> ​John K Clark​
> 
> 
> 
>  
> 
> 
> 



Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 01:11, Lawrence Crowell  
> wrote:
> 
> 
> 
> On Monday, March 19, 2018 at 3:02:28 PM UTC-6, John Clark wrote:
> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell  <>> wrote:
> 
> >>  NP-completeness is sorta weird and consciousness is sorta weird, but 
> >> other than that is there any reason to think the two things are related?
>  
> > This seems to be something you are not registering.
> You’ve got that right.
> 
> > Classic NP-complete problems involve cataloging subgraphs and determining 
> > the rules for all subgraphs in a graph. There are other similar 
> > combinatoric problems that are NP complete.
> That’s nice. I repeat my question, NP-completeness is sorta weird and 
> consciousness is sorta weird, but other than that is there any reason to 
> think the two things are related?
> 
> 
> If you can't compute the map efficiently, then how do you know this system 
> will really upload brain states in such a way that consciousness seamlessly 
> carries from brain to computer? Even if the entity in the computer is 
> conscious it might not actually be me. If I could duplicate myself which of 
> the two would "be me?”

Both. We have discussed this at length. But of course, if you think a copy is 
not you, then you had better refuse a brain transplant, and reject the 
indexical mechanist hypothesis.




> The map problem is one involving graph theoretic problems that are 
> NP-complete. As I see it there is a lot of hype here concealing the fact we 
> really know very little about this.


Note that it has already been proved that no machine can know which machine she 
is, and that all Löbian machines (universal machines knowing that they are 
universal) have a soul (first-person view), and they know that such a soul is 
not a machine, and actually not anything describable in a third-person way. 
It is amazing, and requires a good understanding of Gödel’s proof of 
incompleteness.

Bruno




> 
> LC
> 
> 
> 
> > A map from a brain to a computer is going to require knowing how to handle 
> > these problems. 
> That is utterly ridiculous! Duplicating a map is not a NP-complete problem, 
> in fact its not much of a problem at all, a Xerox machine can do it. In this 
> case we're not trying to find the shortest path or even a shorter path than 
> the one the traveling salesman took. All we need do is take the path the 
> salesman already took. 
> 
> > Quantum computers do not help much.
> 
> It would be great to have a quantum computer but would not be necessary for 
> uploading or for AI, it would just be icing on the cake.  
>  > It could have some bearing on the ability to emulate consciousness in a 
> computer.
> 
> Yes, and the Goldbach conjecture might have some bearing on the ability to 
> emulate consciousness in a computer too, but there is not one particle of 
> evidence to suggest that either of the two actually does. There are a 
> infinite number of things and concepts in the universe and not one of them 
> has been ruled out as having somethings to do with consciousness, and that’s 
> why consciousness theories are so easy to come up with and why they are so 
> completely useless. Intelligence theories are a different matter entirely, 
> they are testable. 
> 
> >> How do you figure that? Both my brain and my computer are made of matter 
> >> that obeys the laws of physics, and matter that obeys the laws of physics 
> >> has never been observed to compute NP-complete problems in polynomial 
> >> time, much less less find the answer to a non-computable question, like 
> >> “what is the 7918th Busy Beaver number?”.
>  
> > And for this reason it could be impossible to map brain states into a 
> > computer and capture a person completely. 
> How do you figure that? A computer can never find the 7918th Busy Beaver 
> number but my consciousness can never find it either. I’ll be damned if I see 
> how one thing has anything to do with the other. It seems to me that you 
> don’t want computers to be conscious so you looked for a problem that a 
> computer can never solve and just decreed that problem must have something to 
> do with consciousness. But computers can’t find the 7918th Busy Beaver number 
> because the laws of physics can’t find it, even the universe itself doesn’t 
> know what that finite number is. But I know for a fact that the universe does 
> know how to arrange atoms so they behave in a johnkclarkian way and become 
> conscious. The universe doesn't know how to solve NP complete problems in 
> polynomial time, much less NP hard problems, much less flat out 
> non-computable problems like the busy Beaver, so I don't see how any of them 
> could have anything to do with consciousness. 
> 
> > Of course brains and computers are made of matter. So is a pile of shit 
> > also made of matter. 
> Exactly, and the only difference between my brain and a pile of shit is the 
> way the generic atoms 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 20 Mar 2018, at 01:03, Bruce Kellett  wrote:
> 
> From: Telmo Menezes >
>> 
>> On Tue, Mar 20, 2018 at 12:06 AM, Bruce Kellett
>> > wrote:
>> > From: Stathis Papaioannou >
>> >
>> >
>> > It is possible that consciousness is fully preserved until a threshold is
>> > reached then suddenly disappears. So if half the subject’s brain is
>> > replaced, he behaves normally and has normal consciousness, but if one more
>> > neurone is replaced he continues to behave normally but becomes a zombie.
>> > Moreover, since neurones are themselves complex systems it could be broken
>> > down further: half of that final neurone could be replaced with no change 
>> > to
>> > consciousness, but when a particular membrane protein is replaced with a
>> > non-biological nanomachine the subject will suddenly become a zombie. And 
>> > we
>> > need not stop here, because this protein molecule could also be replaced
>> > gradually, for example by non-biological radioisotopes. If half the atoms 
>> > in
>> > this protein are replaced, there is no change in behaviour and no change in
>> > consciousness; but when one more atom is replaced a threshold is reached 
>> > and
>> > the subject suddenly loses consciousness. So zombification could turn on 
>> > the
>> > addition or subtraction of one neutron. Are you prepared to go this far to
>> > challenge the idea that if the observable behaviour of the brain is
>> > replicated, consciousness will also be replicated?
>> >
>> >
>> > If the theory is that if the observable behaviour of the brain is
>> > replicated, then consciousness will also be replicated, then the clear
>> > corollary is that consciousness can be inferred from observable behaviour.
>> 
>> For this to be a theory in the scientific sense, one needs some way to
>> detect consciousness. In that case your corollary becomes a tautology:
>> 
>> (a) If one can detect consciousness then one can detect consciousness.
>> 
>> The other option is to assume that observable behaviors in the brain
>> imply consciousness -- because "common sense", because experts say so,
>> whatever. In this case it becomes circular reasoning:
>> 
>> (b) Assuming that observable behaviors in the brain imply
>> consciousness, consciousness can be inferred from brain behaviors.
> 
> I was responding to the claim by Stathis that consciousness will follow 
> replication of observable behaviour.


There is a big ambiguity here. Are we talking about the behaviour of the brain at 
the substitution level, or below, or above, or are we talking about the 
behaviour of the person having a brain?

Indexical Mechanism, as I defined it, only *assumes* the existence of a level 
of description which would preserve my consciousness if run by some machine 
(then a reasoning shows that it cannot matter whether the running is physical or 
arithmetical, which leads to making physics a branch of machine 
bio-psycho-theology, making the machine theory of consciousness testable).

Sorry if I missed some precision made in other posts.



> It seemed to me that this was proposed as a theory: "If the observable 
> behaviour of is replicated then consciousness will also be replicated." I was 
> merely pointing out consequences of this theory, so your claims of tautology 
> and/or circularity rather miss the point: the consequences of any theory are 
> either tautologies or circularities in that sense, because they are 
> implications of the theory.
> 
> Now it may be that you want to reject Stathis's calim, and insist that 
> consciousness cannot be inferred from behaviour. But it seems to me that that 
> theory is as lacking in independent verification as the contrary.

Are the cops made of paper/wood, that we can see on the roads, conscious? 
They do behave like the usual living cops ….




> 
> 
>> > Which implies that I can be as certain of the consciousness of other people
>> > as I am of my own. This seems to do some violence to the 1p/1pp/3p
>> > distinctions that computationalism rely on so much: only 1p is "certainly
>> > certain".
>> > But if I can reliably infer consciousness in others, then other
>> > things can be as certain as 1p experiences.
>> 
>> If one can detect 1p experiences then one can detect 1p experiences...
> 
> The claim has more content than that.


I am not sure whether Telmo is using “detection” as a criterion of truth, but 
that is Aristotle’s metaphysics, which cannot work with Digital Indexical 
Mechanism (computationalism in cognitive science).

Bruno



> 
> Bruce
> 
> 
> 
>> 
>> Telmo.
> 
> 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 19 Mar 2018, at 22:02, John Clark  wrote:
> 
> On Sun, Mar 18, 2018 at 11:02 AM, Lawrence Crowell  > wrote:
> 
> >>  NP-completeness is sorta weird and consciousness is sorta weird, but 
> >> other than that is there any reason to think the two things are related?
>  
> > This seems to be something you are not registering.
> You’ve got that right.
> 
> > Classic NP-complete problems involve cataloging subgraphs and determining 
> > the rules for all subgraphs in a graph. There are other similar 
> > combinatoric problems that are NP complete.
> That’s nice. I repeat my question, NP-completeness is sorta weird and 
> consciousness is sorta weird, but other than that is there any reason to 
> think the two things are related?
> 

I agree. NP-completeness has nothing to do with consciousness. But the higher 
degrees of unsolvability do have a relation, because they measure our ignorance, 
or the machine’s ignorance, and the awareness of that ignorance has some role in 
self-consciousness. RA is conscious, but lacks this higher-order 
self-consciousness (which might already be a sort of delusion, actually).
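As a side note on the NP vocabulary recurring in this thread: what makes a problem NP is that a proposed solution (a certificate) can be *checked* in polynomial time, even though *finding* one may seem to require exponential search. A minimal sketch with subset-sum, a classic NP-complete problem; the particular numbers are arbitrary choices for the illustration:

```python
from itertools import combinations

def verify_certificate(nums, target, subset):
    """Polynomial-time check: does the claimed subset sum to the target?"""
    return all(x in nums for x in subset) and sum(subset) == target

def find_subset(nums, target):
    """Brute-force search over all 2^n subsets (the hard direction)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset sums to the target

nums = [3, 34, 4, 12, 5, 2]
solution = find_subset(nums, 9)
print(solution, verify_certificate(nums, 9, solution))
```

The asymmetry between cheap verification and (apparently) expensive search is the whole content of the P vs NP question; nothing in this sketch bears on consciousness, which is the point being made above.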

Bruno



> > A map from a brain to a computer is going to require knowing how to handle 
> > these problems. 
> That is utterly ridiculous! Duplicating a map is not a NP-complete problem, 
> in fact its not much of a problem at all, a Xerox machine can do it. In this 
> case we're not trying to find the shortest path or even a shorter path than 
> the one the traveling salesman took. All we need do is take the path the 
> salesman already took. 
> 
> > Quantum computers do not help much.
> 
> It would be great to have a quantum computer but would not be necessary for 
> uploading or for AI, it would just be icing on the cake.  
>  > It could have some bearing on the ability to emulate consciousness in a 
> computer.
> 
> Yes, and the Goldbach conjecture might have some bearing on the ability to 
> emulate consciousness in a computer too, but there is not one particle of 
> evidence to suggest that either of the two actually does. There are a 
> infinite number of things and concepts in the universe and not one of them 
> has been ruled out as having somethings to do with consciousness, and that’s 
> why consciousness theories are so easy to come up with and why they are so 
> completely useless. Intelligence theories are a different matter entirely, 
> they are testable. 
> 
> >> How do you figure that? Both my brain and my computer are made of matter 
> >> that obeys the laws of physics, and matter that obeys the laws of physics 
> >> has never been observed to compute NP-complete problems in polynomial 
> >> time, much less less find the answer to a non-computable question, like 
> >> “what is the 7918th Busy Beaver number?”.
>  
> > And for this reason it could be impossible to map brain states into a 
> > computer and capture a person completely. 
> How do you figure that? A computer can never find the 7918th Busy Beaver 
> number but my consciousness can never find it either. I’ll be damned if I see 
> how one thing has anything to do with the other. It seems to me that you 
> don’t want computers to be conscious so you looked for a problem that a 
> computer can never solve and just decreed that problem must have something to 
> do with consciousness. But computers can’t find the 7918th Busy Beaver number 
> because the laws of physics can’t find it, even the universe itself doesn’t 
> know what that finite number is. But I know for a fact that the universe does 
> know how to arrange atoms so they behave in a johnkclarkian way and become 
> conscious. The universe doesn't know how to solve NP complete problems in 
> polynomial time, much less NP hard problems, much less flat out 
> non-computable problems like the busy Beaver, so I don't see how any of them 
> could have anything to do with consciousness. 
> 
> > Of course brains and computers are made of matter. So is a pile of shit 
> > also made of matter. 
> Exactly, and the only difference between my brain and a pile of shit is the 
> way the generic atoms are arranged, and the only difference between a cadaver 
> and a healthy living person is the way the generic atoms are arranged. One 
> carbon atom is identical to another so the only thing that specifies 
> something as being me or a cadaver or pile of shit is the information on how 
> to arrange those atoms.
> 
> > Based on what we know about bacteria and their network communicating by 
> > electrical potentials the pile of shit may have more in the way of 
> > consciousness than a computer. 
> Maybe maybe maybe. The above is a excellent example of what I was talking 
> about, consciousness theories are utterly and completely useless. Is this 
> really the best you can do? Are piles of shit and the interior of Black Holes 
> the only places you can find arguments against Cryonics?
> 
> > As things stand now I 

Re: How to live forever

2018-03-22 Thread Bruno Marchal

> On 19 Mar 2018, at 17:54, Lawrence Crowell  
> wrote:
> 
> I am not particularly in the Platonist camp. I see Platonism and other 
> philosophical ideas as just grist for the mill. 

Platonism is basically the understanding that seeing or observing cannot prove 
that anything exists in some ontological sense. It pushes the doubt up to being 
skeptical about what we see.

Then in mathematics, it has acquired the meaning of a belief in the ideas, 
including mathematical ideas.

To avoid confusion, as I use both senses in some contexts, I use “realism” for the 
second sense. With Digital Mechanism, such realism is limited to arithmetic: 
“God created the Natural Numbers; all the rest (analysis, physics) belongs to 
the imagination of the Natural Numbers” (which makes sense given that very 
elementary arithmetic (Peano Arithmetic minus the induction axioms) is already 
Turing-complete (but not Löbian)).

The Mechanist theory of consciousness leads to a testable theory of … matter, 
and indeed up to now we have retrieved part of quantum mechanics. 




> 
> Dennett's approach has some merit as at least opening a door for some 
> possible testable approaches to consciousness. I have no idea whether this 
> entire construction is realistic or not. 

Dennett believes in both Mechanism *and* materialism, and is thus shown 
inconsistent. It simply cannot work (see my papers on this).

Bruno




> 
> LC
> 
> On Monday, March 19, 2018 at 7:01:04 AM UTC-5, telmo_menezes wrote:
> On Sun, Mar 18, 2018 at 9:29 PM, Lawrence Crowell 
> 
> > 
> > 
> > In a part what you say is spot on. The problem with consciousness is there 
> > is a lot more ignorance about it than much in the way of certain knowledge. 
> > It may be a sort of epiphenomenon that emerges from some class of complex 
> > systems, which at this time we do know understand. Roger Penrose thinks it 
> > is something is a triality of physics, mathematics and mind, which is a 
> > sort 
> > of Platonic look. Dennett on the other hand thinks consciousness is a sort 
> > of illusion, which is a sort of epiphenomenon. Dennett calls it a 
> > hetererophenomenon as it involves a sort of game of multiple drafts. We 
> > really do not know for sure what consciousness is. 
> 
> I am on the Platonist camp, but fully realize that this is a personal 
> bet / intuition. I agree with Bruno that if computationalism is true, 
> then consciousness cannot be an epiphenomenon. But we don't know if 
> computationalism is true. 
> 
> Dennett I just find just silly. I think he plays with words, and 
> accepting his arguments would force me to deny something (the only 
> thing) that I absolutely know to be true. 
> 
> > I can think of things that strike me as obstructions to the idea of 
> > uploading brain states to a computer. The issue of NP-completeness seems 
> > plausible, and classic NP-complete problems are combinatorial systems which 
> > the brain is an example of. Other questions seem to make this problematic. 
> > It does seem to me the barrier of ignorance is far higher than our ability 
> > to vault over it. 
> 
> Agreed. I'm not sure we will ever be able to understand consciousness 
> -- there is really no reason to assume that this is possible. If it 
> is, I bet that it will require a quantitative jump in our 
> understanding of reality. I most definitely do not believe that it can 
> be solved by incrementalist research in neuroscience. 
> 
> Telmo. 
> 
> > LC 
> > 
> > -- 
> > You received this message because you are subscribed to the Google Groups 
> > "Everything List" group. 
> > To unsubscribe from this group and stop receiving emails from it, send an 
> > email to everything-li...@googlegroups.com . 
> > To post to this group, send email to everyth...@googlegroups.com 
> > . 
> > Visit this group at https://groups.google.com/group/everything-list 
> > . 
> > For more options, visit https://groups.google.com/d/optout 
> > . 
> 


Re: Disclosure Project

2018-03-22 Thread agrayson2000


On Wednesday, March 21, 2018 at 9:08:01 PM UTC-4, agrays...@gmail.com wrote:
>
>
>
> On Monday, February 26, 2018 at 3:22:01 PM UTC-5, agrays...@gmail.com 
> wrote:
>>
>> http://siriusdisclosure.com/evidence/
>
>
> The foreword to Stanton Friedman's book on Majestic 12 is worth reading. 
> Not too long. AG
>
>  
> https://www.amazon.com/Top-Secret-Majic-Stanton-Friedman/dp/1569248303
>

The most interesting part for me was how a federal judge and the NSA 
interacted on a FOIA request for the NSA to release documents in its 
possession about UFOs. AG 
