Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Thu, 22 Mar 2018 at 8:57 am, Brent Meeker  wrote:

>
>
> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>
>
> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker  wrote:
>
>>
>>
>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>
>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker 
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>
>>>
>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker 
>>> wrote:
>>>


 On 3/20/2018 3:58 AM, Telmo Menezes wrote:

 The interesting thing is that you can draw conclusions about consciousness
 without being able to define it or detect it.

 I agree.


 The claim is that IF an entity
 is conscious THEN its consciousness will be preserved if brain function is
 preserved despite changing the brain substrate.

 Ok, this is computationalism. I also bet on computationalism, but I
 think we must proceed with caution and not forget that we are just
 assuming this to be true. Your thought experiment is convincing but is
 not a proof. You do expose something that I agree with: that
 non-computationalism sounds silly.

 But does it sound so silly if we propose substituting a completely
 different kind of computer, e.g. von Neumann architecture or one that just
 records everything instead of an episodic associative memory, for the
 brain.  The Church-Turing conjecture says it can compute the same
 functions.  But does it instantiate the same consciousness?  My intuition
 is that it would be "conscious" but in some different way; for example by
 having the kind of memory you would have if you could review a movie of
 any interval in your past.

>>>
>>> I think it would be conscious in the same way if you replaced neural
>>> tissue with a black box that interacted with the surrounding tissue in the
>>> same way. It doesn’t matter what is in the black box; it could even work by
>>> magic.
>>>
>>>
>>> Then why draw the line at "surrounding tissue"?  Why not the external
>>> environment?
>>>
>>
>> Keep expanding the part that is replaced and you replace the whole brain
>> and the whole organism.
>>
>> Are you saying you can't imagine being "conscious" but in a different way?
>>>
>>
>> I think it is possible but I don’t think it could happen if my neurones
>> were replaced by a functionally equivalent component. If it’s functionally
>> equivalent, my behaviour would be unchanged,
>>
>>
>> I agree with that.  But you've already supposed that functional
>> equivalence at the behavior level implies preservation of consciousness.
>> So what I'm considering is replacements in the brain far above the neuron
>> level, say at the level of whole functional groups of the brain, e.g. the
>> visual system, the auditory system, the memory,...  Would functional
>> equivalence at the body/brain interface then still imply consciousness
>> equivalence?
>>
>
> I think it would, because I don’t think there are isolated consciousness
> modules in the brain. A large enough change in visual experience will be
> noticed by the subject, who will report that things look different. This
> could only happen if there is a change in the input to the language system
> from the visual system; but we have assumed that the output from the visual
> system is the same, and only the consciousness has changed, leading to a
> contradiction.
>
>
> But what about internal systems which are independent of perception...the
> very reason Bruno wants to talk about dream states.  And I'm not
> necessarily asking that behavior be identical...just that the body/brain
> interface be the same.  The "brain" may be different in how it processes
> input from the eyeballs and hence report verbally different perceptions.
> In other words, I'm wondering how much computationalism constrains
> consciousness.  My intuition is that there could be a lot of difference in
> consciousness depending on how different perceptual inputs are processed
> and/or merged and how internal simulations are handled.  To take a crude
> example, would it matter if the computer-brain was programmed in a
> functional language like LISP, an object-oriented language like Ruby, or a
> neural network?  Of course Church-Turing says they all compute the same set
> of functions, but they don't do it the same way and that might make a
> difference in consciousness (and at least verbal behavior).
>

If the behaviour of the brain is different then it isn't contentious that
consciousness will also be different. The question is whether there would
be a difference in consciousness even though the behaviour is the same: for
example, if the subroutine controlling an artificial dopamine receptor is
written in LISP or in Ruby.
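
The "same output for same input" criterion at issue here can only ever be checked from outside the box. A small sketch of what such a black-box equivalence test looks like (my illustration, not anything from the thread; the two "receptor" implementations are hypothetical):

```python
import random

# Two hypothetical implementations of the same saturating "receptor"
# response curve over non-negative signals: a closed-form version and
# a table-driven version. From outside the black box, only their
# input/output behaviour is testable.

def receptor_closed_form(signal: int) -> int:
    # Response grows with the signal, saturating at 10 units.
    return min(signal, 10)

def receptor_table_driven(signal: int) -> int:
    table = {s: s for s in range(11)}   # precomputed lookup for 0..10
    return table.get(signal, 10)        # everything above 10 saturates

def behaviourally_equivalent(f, g, inputs) -> bool:
    """Black-box test: same output for same input, over sampled inputs."""
    return all(f(x) == g(x) for x in inputs)

random.seed(0)
samples = [random.randrange(0, 100) for _ in range(1000)]
assert behaviourally_equivalent(receptor_closed_form,
                                receptor_table_driven, samples)
```

The test can only sample the input space, and it says nothing about how each implementation computes its output; that is exactly the gap between behavioural and phenomenal equivalence the thread is probing.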

>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

Re: How to live forever

2018-03-21 Thread Kim Jones
What if we already live forever? Why, in fact, do people think that just because 
we die we go away or cease to exist? What if Nature has already solved the 
problem? Why would you spend a motza to ensure you lived forever in the same 
boring universe when after 70 or 80 years you can be teleported to a different 
universe in a pine box?

Kim Jones




> On 22 Mar 2018, at 6:39 am, Brent Meeker  wrote:
> 
> 
> 
>> On 3/21/2018 8:40 AM, Bruno Marchal wrote:
>> 
>>> On 20 Mar 2018, at 00:56, Brent Meeker  wrote:
>>> 
>>> 
>>> 
 On 3/19/2018 2:19 PM, John Clark wrote:
 On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker  wrote
 
> > octopuses are fairly intelligent but their neural structure is 
> > distributed very differently from mammals of similar intelligence.  An 
> > artificial intelligence that was not modeled on mammalian brain 
> > structure might be intelligent but not conscious
 
 Maybe maybe maybe. And you don't have the exact same brain 
 structure as I have, so you might be intelligent but not conscious. I said 
 it before and I'll say it again: consciousness theories are useless, and not 
 just the ones on this list, all of them.
>>> 
>>> You're the one who floated a theory of consciousness based on evolution.  
>>> I'm just pointing out that it only shows that consciousness exists in 
>>> humans for some evolutionarily selected purpose.  It doesn't apply to 
>>> intelligence that arises in some other evolutionary branch or intelligence 
>>> like AI that doesn't evolve but is designed.
>>> 
>> 
>> 
>> Not sure that there is a genuine difference between design and evolution. 
>> With multicellular life, there is an evolution of design. With the origin of 
>> life, there has been a design of evolution, even if serendipitously. I am 
>> not talking about some purposeful or intelligent design here. A cell is a 
>> quite sophisticated “gigantic nano-machine”.
>> 
>> Then some techniques in AI, like the genetic algorithm, or techniques 
>> inspired by the study of the immune system, or self-reference, lead to 
>> programs or machines evolving in some ways.
>> 
>> Now, I can argue that for consciousness nothing of this is needed. It is the 
>> canonical knowledge associated with the fixed point of the embedding of the 
>> universal machines in the arithmetical reality.
>> 
>> It differentiates into the many indexical first person scenarii.
>> 
>> Matter should be what gives rise to possibilities ([]p & ~[]f, []p & <>t).  
>> That works as it is confirmed by QM without collapse, both intuitively 
>> through the many computations, and formally as the three material modes do 
>> provide a formal quantum logic, its arithmetical interpretation, and its 
>> metamathematical interpretations. 
>> 
>> The universal machines rich enough to prove their own universality (like the 
>> sound humans and Peano arithmetic, and ZF, …) are confronted with the 
>> distinction between knowing and proving. They prove their own incompleteness 
>> but still figure out some truths despite their being unprovable. 
>> 
>> The only mystery is where do the numbers (and/or the combinators, the 
>> lambda expressions, the game of life, c++, etc.) come from?
>> But here the sound löbian machine can prove that it is impossible to derive 
>> a universal system from a non-universal theory. 
>> 
>> A weaker version of the Church-Turing-Post-Kleene thesis is: there exists a 
>> Universal Machine. That is, a machine which computes all computable 
>> functions. The stronger usual version is that some formal system/definition 
>> provides such a universal machine, meaning that the class of functions 
>> computable by that universal machine is the class of all computable 
>> functions, including those not everywhere defined (and not algorithmically 
>> separable from those defined everywhere: the price of universality). 
>> 
>> That universal being has a rich theology, explaining the relation between 
>> believing, knowing, observing, feeling and the truth.
> 
> That didn't address my question: can you imagine different kinds of 
> consciousness?  For example you have sometimes speculated that there is only 
> one consciousness which is somehow compartmentalized in individuals.  That 
> implies that the compartmentalization could be eliminated and a different 
> kind of consciousness experienced...like the Borg.
> 
> Brent
> "We are the Dyslexic of Borg. Futility is persistent. Your ass will be 
> laminated."
> 
>> 
>> Bruno
>> 
>> 
>> 
>>> Brent
>>> 
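
Bruno's weaker thesis quoted above, that there exists a Universal Machine, can be illustrated with a toy sketch of my own (not Bruno's formalism): one fixed interpreter for a minimal register language, which computes any function of that language when handed the program as data.

```python
# A tiny universal machine: one fixed interpreter that runs *any*
# program handed to it as data (here, a toy register language with
# increment, decrement-or-jump, and halt). The machine never changes;
# only its input program does. This is the sense in which a single
# machine "computes all computable functions" of its language.

def universal(program, registers):
    regs = dict(registers)
    pc = 0                              # program counter
    while True:
        op = program[pc]
        if op[0] == 'inc':              # ('inc', r): regs[r] += 1
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        elif op[0] == 'djz':            # ('djz', r, t): decrement r,
            if regs.get(op[1], 0) == 0: #   or jump to t if r is zero
                pc = op[2]
            else:
                regs[op[1]] -= 1
                pc += 1
        elif op[0] == 'halt':
            return regs

# Addition, expressed as data: move everything from register 'b' to 'a'.
# ('djz', 'z', 0) on the always-zero register 'z' is an unconditional jump.
add = [('djz', 'b', 3), ('inc', 'a'), ('djz', 'z', 0), ('halt',)]
assert universal(add, {'a': 2, 'b': 3})['a'] == 5
```

A program that loops forever on some input leaves the computed function undefined there, matching the remark about functions that are not everywhere defined.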

Re: Disclosure Project

2018-03-21 Thread agrayson2000


On Monday, February 26, 2018 at 3:22:01 PM UTC-5, agrays...@gmail.com wrote:
>
> http://siriusdisclosure.com/evidence/


The foreword to Stanton Friedman's book on Majestic 12 is worth reading. 
Not too long. AG

 
https://www.amazon.com/Top-Secret-Majic-Stanton-Friedman/dp/1569248303/ref=sr_1_12?s=books&ie=UTF8&qid=1521680059&sr=1-12&keywords=stanton+friedman+books



Re: How to live forever

2018-03-21 Thread John Clark
On Wed, Mar 21, 2018 at 6:02 PM, Bruce Kellett 
wrote:

> neural functionality includes consciousness.


Well that's certainly what my neurons do, but I have no evidence your
neurons do too.

Yes, I admit it: certain assumptions must be made to allow me to conclude an
upload would preserve my consciousness, but they are not outrageous
assumptions; they are exactly the same ones that allow me to conclude that
solipsism is untrue.

John K Clark



Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Thu, 22 Mar 2018 at 9:02 am, Bruce Kellett 
wrote:

> From: Stathis Papaioannou 
>
>
> On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett 
> wrote:
>
>> From: Stathis Papaioannou < stath...@gmail.com>
>>
>> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett <
>> bhkell...@optusnet.com.au> wrote:
>>
>>>
>>> If the theory is that if the observable behaviour of the brain is
>>> replicated, then consciousness will also be replicated, then the clear
>>> corollary is that consciousness can be inferred from observable behaviour.
>>> Which implies that I can be as certain of the consciousness of other people
>>> as I am of my own. This seems to do some violence to the 1p/1pp/3p
>>> distinctions that computationalism rely on so much: only 1p is "certainly
>>> certain". But if I can reliably infer consciousness in others, then other
>>> things can be as certain as 1p experiences
>>>
>>
>> You can’t reliably infer consciousness in others. What you can infer is
>> that whatever consciousness an entity has, it will be preserved if
>> functionally identical substitutions in its brain are made.
>>
>>
>> You have that backwards. You can infer consciousness in others, by
>> observing their behaviour. The alternative would be solipsism. Now, while
>> you can't prove or disprove solipsism in a mathematical sense, you can
>> reject solipsism as a useless theory, since it tells you nothing about
>> anything. Whereas science acts on the available evidence -- observations of
>> behaviour in this case.
>>
>> But we have no evidence that consciousness would be preserved under
>> functionally identical substitutions in the brain. Consciousness may be a
>> global affair, so functionally equivalence may not be achievable, or even
>> definable, within the context of a conscious brain. Can you map the
>> functionality of even a single neuron? You are assuming that you can, but
>> if that function is global, then you probably can't. There is a fair amount
>> of glibness in your assumption that consciousness will be preserved under
>> such substitutions.
>>
>>
>>
>> You can’t know if a mouse is conscious, but you can know that if mouse
>> neurones are replaced with functionally identical electronic neurones its
>> behaviour will be the same and any consciousness it may have will also be
>> the same.
>>
>>
>> You cannot know this without actually doing the substitution and
>> observing the results.
>>
>
> So do you think that it is possible to replace the neurones with
> functionally identical neurones (same output for same input) and the
> mouse’s behaviour would *not* be the same?
>
>
> Individual neurons may not be the appropriate functional unit.
>
> It seems that you might be close to circularity -- neural functionality
> includes consciousness. So if I maintain neural functionality, I will
> maintain consciousness.
>

The only assumption is that the brain is somehow responsible for
consciousness. The argument I am making is that if any part of the brain is
replaced with a functionally identical non-biological part, engineered to
replicate its interactions with the surrounding tissue,  consciousness will
also necessarily be replicated; for if not, an absurd situation would
result, whereby consciousness can radically change but the subject not
notice, or consciousness decouple completely from behaviour, or
consciousness flip on or off with the change of one subatomic particle.
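
Stathis's replacement argument can be mirrored in a toy pipeline model (a sketch under my own assumptions, not a claim about real brains): swap one stage for a part with identical interface behaviour, and the whole system's behaviour is unchanged.

```python
# Toy "brain" as a pipeline of stages: vision -> language. Replacing
# one stage with a part that has identical input/output behaviour at
# its interface leaves whole-system behaviour unchanged, which is the
# premise of the replacement argument.

def brain(vision, report):
    def run(image):
        return report(vision(image))
    return run

def vision_biological(image: str) -> str:
    return image.strip().lower()        # normalised percept

def vision_artificial(image: str) -> str:
    # Different internals, same behaviour at the interface:
    # lowering then stripping equals stripping then lowering.
    return image.lower().strip()

def language(percept: str) -> str:
    return f"I see {percept}"

original = brain(vision_biological, language)
replaced = brain(vision_artificial, language)

stimuli = ["  RED Apple ", "blue SKY", "  dark  "]
for s in stimuli:
    assert original(s) == replaced(s)   # behaviour preserved
```

The unresolved question in the thread is whether anything *other* than this interface behaviour, namely the experience, could differ between the two systems.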

> --
Stathis Papaioannou



Re: How to live forever

2018-03-21 Thread Bruce Kellett

From: *Stathis Papaioannou* <stath...@gmail.com>


On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett 
<bhkell...@optusnet.com.au> wrote:


From: *Stathis Papaioannou* 

On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett
 wrote:


If the theory is that if the observable behaviour of the
brain is replicated, then consciousness will also be
replicated, then the clear corollary is that consciousness
can be inferred from observable behaviour. Which implies that
I can be as certain of the consciousness of other people as I
am of my own. This seems to do some violence to the 1p/1pp/3p
distinctions that computationalism rely on so much: only 1p
is "certainly certain". But if I can reliably infer
consciousness in others, then other things can be as certain
as 1p experiences


You can’t reliably infer consciousness in others. What you can
infer is that whatever consciousness an entity has, it will be
preserved if functionally identical substitutions in its brain
are made.


You have that backwards. You can infer consciousness in others, by
observing their behaviour. The alternative would be solipsism.
Now, while you can't prove or disprove solipsism in a mathematical
sense, you can reject solipsism as a useless theory, since it
tells you nothing about anything. Whereas science acts on the
available evidence -- observations of behaviour in this case.

But we have no evidence that consciousness would be preserved
under functionally identical substitutions in the brain.
Consciousness may be a global affair, so functional equivalence
may not be achievable, or even definable, within the context of a
conscious brain. Can you map the functionality of even a single
neuron? You are assuming that you can, but if that function is
global, then you probably can't. There is a fair amount of
glibness in your assumption that consciousness will be preserved
under such substitutions.




You can’t know if a mouse is conscious, but you can know that if
mouse neurones are replaced with functionally identical
electronic neurones its behaviour will be the same and any
consciousness it may have will also be the same.


You cannot know this without actually doing the substitution and
observing the results.


So do you think that it is possible to replace the neurones with 
functionally identical neurones (same output for same input) and the 
mouse’s behaviour would *not* be the same?


Individual neurons may not be the appropriate functional unit.

It seems that you might be close to circularity -- neural functionality 
includes consciousness. So if I maintain neural functionality, I will 
maintain consciousness.


Bruce



Re: How to live forever

2018-03-21 Thread Brent Meeker



On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:


On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker wrote:




On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:

On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker
<meeke...@verizon.net> wrote:



On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:


On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker
<meeke...@verizon.net> wrote:



On 3/20/2018 3:58 AM, Telmo Menezes wrote:

The interesting thing is that you can draw conclusions about 
consciousness
without being able to define it or detect it.

I agree.


The claim is that IF an entity
is conscious THEN its consciousness will be preserved if brain 
function is
preserved despite changing the brain substrate.

Ok, this is computationalism. I also bet on computationalism, but I
think we must proceed with caution and not forget that we are just
assuming this to be true. Your thought experiment is convincing but 
is
not a proof. You do expose something that I agree with: that
non-computationalism sounds silly.

But does it sound so silly if we propose substituting a
completely different kind of computer, e.g. von Neumann
architecture or one that just records everything instead
of an episodic associative memory, for the brain.  The
Church-Turing conjecture says it can compute the same
functions.  But does it instantiate the same
consciousness?  My intuition is that it would be
"conscious" but in some different way; for example by
having the kind of memory you would have if you could
review a movie of any interval in your past.


I think it would be conscious in the same way if you
replaced neural tissue with a black box that interacted with
the surrounding tissue in the same way. It doesn’t matter
what is in the black box; it could even work by magic.


Then why draw the line at "surrounding tissue"?  Why not the
external environment?


Keep expanding the part that is replaced and you replace the
whole brain and the whole organism.

Are you saying you can't imagine being "conscious" but in a
different way?


I think it is possible but I don’t think it could happen if my
neurones were replaced by a functionally equivalent component. If
it’s functionally equivalent, my behaviour would be unchanged,


I agree with that. But you've already supposed that functional
equivalence at the behavior level implies preservation of
consciousness. So what I'm considering is replacements in the
brain far above the neuron level, say at the level of whole
functional groups of the brain, e.g. the visual system, the
auditory system, the memory,...  Would functional equivalence at
the body/brain interface then still imply consciousness equivalence?


I think it would, because I don’t think there are isolated 
consciousness modules in the brain. A large enough change in visual 
experience will be noticed by the subject, who will report that things 
look different. This could only happen if there is a change in the 
input to the language system from the visual system; but we have 
assumed that the output from the visual system is the same, and only 
the consciousness has changed, leading to a contradiction.


But what about internal systems which are independent of 
perception...the very reason Bruno wants to talk about dream states.  
And I'm not necessarily asking that behavior be identical...just that 
the body/brain interface be the same.  The "brain" may be different in 
how it processes input from the eyeballs and hence report verbally 
different perceptions.  In other words, I'm wondering how much 
computationalism constrains consciousness.  My intuition is that there 
could be a lot of difference in consciousness depending on how different 
perceptual inputs are processed and/or merged and how internal simulations 
are handled.  To take a crude example, would it matter if the 
computer-brain was programmed in a functional language like LISP, an 
object-oriented language like Ruby, or a neural network?  Of course 
Church-Turing says they all compute the same set of functions, but they 
don't do it the same way and that might make a difference in 
consciousness (and at least verbal behavior).


Brent
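
Brent's closing point, that two programs can compute the same set of functions without doing it the same way, can be made concrete with a toy example of my own: two programs that are extensionally equal as functions, yet whose internal histories differ.

```python
# Two procedures computing the *same* function (factorial) by very
# different processes: naive recursion versus an iterative loop. A
# trace of intermediate states shows the computations differ even
# though input/output behaviour is identical: extensional equivalence,
# intensional difference.

def fact_recursive(n, trace=None):
    """Functional style: unwinds into n nested calls."""
    if trace is not None:
        trace.append(('call', n))
    return 1 if n <= 1 else n * fact_recursive(n - 1, trace)

def fact_iterative(n, trace=None):
    """Imperative style: a single loop mutating an accumulator."""
    acc = 1
    for i in range(2, n + 1):
        acc *= i
        if trace is not None:
            trace.append(('acc', acc))
    return acc

rec_trace, it_trace = [], []
assert fact_recursive(6, rec_trace) == fact_iterative(6, it_trace) == 720
assert rec_trace != it_trace        # same function, different process
```

Whether that intensional difference could matter for consciousness is the open question; Church-Turing only guarantees the extensional agreement.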


Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Wed, 21 Mar 2018 at 10:56 am, Bruce Kellett 
wrote:

> From: Stathis Papaioannou 
>
> On Tue, 20 Mar 2018 at 10:09 am, Bruce Kellett 
> wrote:
>
>>
>> If the theory is that if the observable behaviour of the brain is
>> replicated, then consciousness will also be replicated, then the clear
>> corollary is that consciousness can be inferred from observable behaviour.
>> Which implies that I can be as certain of the consciousness of other people
>> as I am of my own. This seems to do some violence to the 1p/1pp/3p
>> distinctions that computationalism rely on so much: only 1p is "certainly
>> certain". But if I can reliably infer consciousness in others, then other
>> things can be as certain as 1p experiences
>>
>
> You can’t reliably infer consciousness in others. What you can infer is
> that whatever consciousness an entity has, it will be preserved if
> functionally identical substitutions in its brain are made.
>
>
> You have that backwards. You can infer consciousness in others, by
> observing their behaviour. The alternative would be solipsism. Now, while
> you can't prove or disprove solipsism in a mathematical sense, you can
> reject solipsism as a useless theory, since it tells you nothing about
> anything. Whereas science acts on the available evidence -- observations of
> behaviour in this case.
>
> But we have no evidence that consciousness would be preserved under
> functionally identical substitutions in the brain. Consciousness may be a
> global affair, so functional equivalence may not be achievable, or even
> definable, within the context of a conscious brain. Can you map the
> functionality of even a single neuron? You are assuming that you can, but
> if that function is global, then you probably can't. There is a fair amount
> of glibness in your assumption that consciousness will be preserved under
> such substitutions.
>
>
>
> You can’t know if a mouse is conscious, but you can know that if mouse
> neurones are replaced with functionally identical electronic neurones its
> behaviour will be the same and any consciousness it may have will also be
> the same.
>
>
> You cannot know this without actually doing the substitution and observing
> the results.
>

So do you think that it is possible to replace the neurones with
functionally identical neurones (same output for same input) and the
mouse’s behaviour would *not* be the same?
-- 
Stathis Papaioannou



Re: How to live forever

2018-03-21 Thread Stathis Papaioannou
On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker  wrote:

>
>
> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>
> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker  wrote:
>
>>
>>
>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>
>>
>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker 
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>>
>>> The interesting thing is that you can draw conclusions about consciousness
>>> without being able to define it or detect it.
>>>
>>> I agree.
>>>
>>>
>>> The claim is that IF an entity
>>> is conscious THEN its consciousness will be preserved if brain function is
>>> preserved despite changing the brain substrate.
>>>
>>> Ok, this is computationalism. I also bet on computationalism, but I
>>> think we must proceed with caution and not forget that we are just
>>> assuming this to be true. Your thought experiment is convincing but is
>>> not a proof. You do expose something that I agree with: that
>>> non-computationalism sounds silly.
>>>
>>> But does it sound so silly if we propose substituting a completely
>>> different kind of computer, e.g. von Neumann architecture or one that just
>>> records everything instead of an episodic associative memory, for the
>>> brain.  The Church-Turing conjecture says it can compute the same
>>> functions.  But does it instantiate the same consciousness?  My intuition
>>> is that it would be "conscious" but in some different way; for example by
>>> having the kind of memory you would have if you could review a movie of
>>> any interval in your past.
>>>
>>
>> I think it would be conscious in the same way if you replaced neural
>> tissue with a black box that interacted with the surrounding tissue in the
>> same way. It doesn’t matter what is in the black box; it could even work by
>> magic.
>>
>>
>> Then why draw the line at "surrounding tissue"?  Why not the external
>> environment?
>>
>
> Keep expanding the part that is replaced and you replace the whole brain
> and the whole organism.
>
> Are you saying you can't imagine being "conscious" but in a different way?
>>
>
> I think it is possible but I don’t think it could happen if my neurones
> were replaced by a functionally equivalent component. If it’s functionally
> equivalent, my behaviour would be unchanged,
>
>
> I agree with that.  But you've already supposed that functional
> equivalence at the behavior level implies preservation of consciousness.
> So what I'm considering is replacements in the brain far above the neuron
> level, say at the level of whole functional groups of the brain, e.g. the
> visual system, the auditory system, the memory,...  Would functional
> equivalence at the body/brain interface then still imply consciousness
> equivalence?
>

I think it would, because I don’t think there are isolated consciousness
modules in the brain. A large enough change in visual experience will be
noticed by the subject, who will report that things look different. This
could only happen if there is a change in the input to the language system
from the visual system; but we have assumed that the output from the visual
system is the same, and only the consciousness has changed, leading to a
contradiction.

> --
Stathis Papaioannou



Re: How to live forever

2018-03-21 Thread Brent Meeker



On 3/21/2018 8:40 AM, Bruno Marchal wrote:


On 20 Mar 2018, at 00:56, Brent Meeker wrote:




On 3/19/2018 2:19 PM, John Clark wrote:
On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker wrote:


/> octopuses are fairly intelligent but their neural structure
is distributed very differently from mammals of similar
intelligence. An artificial intelligence that was not modeled on
mammalian brain structure might be intelligent but not conscious/


Maybe maybe maybe. And you don't have the exact same brain structure as 
I have, so you might be intelligent but not conscious. I said it before 
and I'll say it again: consciousness theories are useless, and not just 
the ones on this list, all of them.


You're the one who floated a theory of consciousness based on 
evolution.  I'm just pointing out that it only shows that 
consciousness exists in humans for some evolutionarily selected 
purpose.  It doesn't apply to intelligence that arises in some other 
evolutionary branch or intelligence like AI that doesn't evolve but 
is designed.





Not sure that there is a genuine difference between design and 
evolution. With the multicellular, there is an evolution of design. 
With the origin of life, there has been a design of evolution, even if 
serendipitously. I am not talking about some purposeful or intelligent 
design here. A cell is a quite sophisticated "gigantic nano-machine”.


Then some techniques in AI, like the genetic algorithm, or techniques 
inspired by the study of the immune system, or self-reference, lead 
to programs or machines evolving in some ways.


Now, I can argue that for consciousness nothing of this is needed. It 
is the canonical knowledge associated with the fixed point of the 
embedding of the universal machines in the arithmetical reality.


It differentiates into the many indexical first person scenarii.

Matter should be what gives rise to possibilities ([]p & ~[]f, []p & 
<>t).  That works as it is confirmed by QM without collapse, both 
intuitively through the many computations, and formally as the three 
material modes do provide a formal quantum logic, its arithmetical 
interpretation, and its metamathematical interpretations.


The universal machines rich enough to prove their own universality 
(like the sound humans and Peano arithmetic, and ZF, …) are confronted 
with the distinction between knowing and proving. They prove their own 
incompleteness but still figure out some truths despite their being 
unprovable.


The only mystery is where the numbers (and/or the combinators, 
the lambda expressions, the game of life, c++, etc.) come from.
But here the sound löbian machine can prove that it is impossible to 
derive a universal system from a non-universal theory.


A weaker version of the Church-Turing-Post-Kleene thesis is: there 
exists a Universal Machine, that is, a machine which computes all 
computable functions. The stronger, usual version is that some formal 
system/definition provides such a universal machine, meaning that the 
class of functions computable by some universal machine is exactly the 
class of all computable functions, including those not everywhere 
defined (and not algorithmically separable from those defined 
everywhere: the price of universality).


That universal being has a rich theology, explaining the relation 
between believing, knowing, observing, feeling and the truth.


That didn't address my question: can you imagine different kinds of 
consciousness?  For example, you have sometimes speculated that there is 
only one consciousness which is somehow compartmentalized in 
individuals.  That implies that the compartmentalization could be 
eliminated and a different kind of consciousness experienced... like the 
Borg.


Brent
"We are the Dyslexic of Borg. Futility is persistent. Your ass will be 
laminated."




Bruno




Brent


Re: How to live forever

2018-03-21 Thread Brent Meeker



On 3/21/2018 6:45 AM, John Clark wrote:


On Tue, Mar 20, 2018 at 10:25 PM, Brent Meeker wrote:


​>> ​
 A mind might not consciously know if its data was being
processed distributively or not and have no way to deduce that
fact, no matter how intelligent it is, unless it had additional
information from its sense organs. If a mind (not to be
confused with a brain) can be said to have a position at all
it would be the place it is thinking about, which is usually
the place the sense organs are observing. Due to human anatomy
the sense organs and the place the data is processed are in
almost the same place but that need not be true for mind in
general. 



> *You are implicitly assuming a unity of mind which it might not have.*

Assume it? I don't even know what "unity of mind" means. I assume 
nothing; I observe that a mind has no way of determining its position 
without input information from sense organs, and therefore it's 
problematic to say a mind occupies a unique position in space at all.


Did I write *spatial* unity?

Brent



Re: How to live forever

2018-03-21 Thread Brent Meeker



On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker wrote:




On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:


On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker wrote:



On 3/20/2018 3:58 AM, Telmo Menezes wrote:

The interesting thing is that you can draw conclusions about 
consciousness
without being able to define it or detect it.

I agree.


The claim is that IF an entity
is conscious THEN its consciousness will be preserved if brain function 
is
preserved despite changing the brain substrate.

Ok, this is computationalism. I also bet on computationalism, but I
think we must proceed with caution and not forget that we are just
assuming this to be true. Your thought experiment is convincing but is
not a proof. You do expose something that I agree with: that
non-computationalism sounds silly.

But does it sound so silly if we propose substituting a
completely different kind of computer, e.g. von Neumann
architecture or one that just records everything instead of
an episodic associative memory, for the brain.  The
Church-Turing conjecture says it can compute the same
functions.  But does it instantiate the same consciousness. 
My intuition is that it would be "conscious" but in some
different way; for example by having the kind of memory you
would have if you could review of a movie of any interval in
your past.


I think it would be conscious in the same way if you replaced
neural tissue with a black box that interacted with the
surrounding tissue in the same way. It doesn’t matter what is in
the black box; it could even work by magic.


Then why draw the line at "surrounding tissue".  Why not the
external enivironment?


Keep expanding the part that is replaced and you replace the whole 
brain and the whole organism.


Are you saying you can't imagine being "conscious" but in a
different way?


I think it is possible but I don’t think it could happen if my 
neurones were replaced by a functionally equivalent component. If it’s 
functionally equivalent, my behaviour would be unchanged,


I agree with that.  But you've already supposed that functional 
equivalence at the behavior level implies preservation of 
consciousness.  So what I'm considering is replacements in the brain far 
above the neuron level, say at the level of whole functional groups of 
the brain, e.g. the visual system, the auditory system, the memory,...  
Would functional equivalence at the body/brain interface then still 
imply consciousness equivalence?


Brent

so I would have to communicate that my consciousness had not changed. 
If, in fact, my consciousness had changed, this means either I would 
not have noticed, in which case the idea of consciousness loses 
meaning, or I would have noticed but been unable to communicate it, 
from which point on my consciousness and my behaviour would become 
decoupled, implying a type of substance dualism.


--
Stathis Papaioannou




Re: How to live forever

2018-03-21 Thread Bruno Marchal

> On 19 Mar 2018, at 22:54, Russell Standish  wrote:
> 
> On Mon, Mar 19, 2018 at 05:19:11PM -0400, John Clark wrote:
>> On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker  wrote
>> 
>> *> octopuses are fairly intelligent but their neural structure is
>>> distributed very differently from mammals of similar intelligence.  An
>>> artificial intelligence that was not modeled on mammalian brain structure
>>> might be intelligent but not conscious*
>> 
>> 
>> Maybe maybe maybe. And you don't have the exact same brain structure as I
>> have so you might be intelligent but not conscious. I said it before and
>> I'll say it again: consciousness theories are useless, and not just the ones
>> on this list, all of them.
>> 
>> John K Clark
> 
> In my backyard, pretty much about 200m from where I'm sitting now, we
> have these amazing creatures called giant cuttlefish.

Wonderful!



> These are about
> the size of a dog (which breed of dog you say? - well yes there's that
> sort of variation in size too). They (like most cephalopods) are
> masters of camouflage. To be really effective in camouflage, you need
> to know what the things you're hiding from are seeing. Cuttlefish use
> different camo patterns and communication methods depending on whether they're
> trying to avoid sharks or dolphins. I had the experience of one of
> these animals approaching me looking for all the world like a bunch of
> kelp. The instant I saw its eyes, it knew its disguise was blown, and
> it scarpered. I had the strong feeling that here was an animal reading
> my mind as I read its. I can't be sure, of course, but I'm pretty
> convinced from that encounter that cuttlefish are conscious beings.

Yes. Maybe less deluded than us.


> It seems hard to imagine them
> being able to understand other animals minds without also being aware
> of their own, and their place in the world.

They are wonderful animals. The octopus too. 

Now, is a jellyfish conscious? 

I bet they are, but not far away from the dissociative and constant 
arithmetical consciousness (of the universal machines).

The Löbian Universal Machine is, in some sense, less conscious than the simpler 
universal machine. Robinson arithmetic might be more conscious than Peano 
arithmetic. The quasi-ultrafinitists might be right on this: the induction 
axioms might be the beginning of the delusion, the first sin (grin). Then I 
guess consciousness' volume evolves like the n-dimensional volume of spheres, 
which grows up to dimension 5 and then decreases, tending to zero as n grows 
arbitrarily. You need some hundreds of neurons to implement a Turing-universal 
neuronal system; it might be close to maximally conscious, but in a quite 
"altered state" (compared to mundane consciousness). Then by multiplying the 
neurons you get more room for the models of its histories, and consciousness' 
"volume" grows, but 99.8% of the consciousness content is delusion, once you 
reject the "ontological" induction, and the histories are a relative filtering 
of consciousness in the consistent computational continuations.
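The n-dimensional sphere fact appealed to here is standard and easy to check: the volume of the unit ball in n dimensions is pi^(n/2) / Gamma(n/2 + 1), which peaks at n = 5 and then shrinks toward zero. A quick sketch (the function name is mine):

```python
import math

def unit_ball_volume(n: int) -> float:
    """Volume of the unit ball in n dimensions: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

# Tabulate the first ten dimensions and find where the volume peaks.
volumes = {n: unit_ball_volume(n) for n in range(1, 11)}
peak = max(volumes, key=volumes.get)
print(peak)  # → 5  (volume ≈ 5.264, after which it decreases toward 0)
```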

The real question is not “How to live forever?”. The real question is “How to 
NOT live forever?”. How to cut the cycle of death and birth? 

Otto Rossler summed up Mechanism by "Consciousness is a prison". It is rather 
frightening, I think. I sometimes wish Mechanism to be false! We must try 
Heaven, and avoid Hell. Arithmetic is big, especially from inside. And we have 
only partial control, but also partial means to awaken ourselves from possible 
nightmares, at diverse degrees.


Bruno





> 
> -- 
> 
> 
> Dr Russell Standish   Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow   hpco...@hpcoders.com.au
> Economics, Kingston University    http://www.hpcoders.com.au
> 
> 



Re: How to live forever

2018-03-21 Thread Bruno Marchal

> On 20 Mar 2018, at 00:56, Brent Meeker  wrote:
> 
> 
> 
> On 3/19/2018 2:19 PM, John Clark wrote:
>> On Sun, Mar 18, 2018 at 10:03 PM, Brent Meeker wrote:
>> 
>> > octopuses are fairly intelligent but their neural structure is distributed 
>> > very differently from mammals of similar intelligence.  An artificial 
>> > intelligence that was not modeled on mammalian brain structure might be 
>> > intelligent but not conscious
>> 
>> Maybe maybe maybe. And you don't have the exact same brain structure 
>> as I have so you might be intelligent but not conscious. I said it before 
>> and I'll say it again: consciousness theories are useless, and not just the 
>> ones on this list, all of them.
> 
> You're the one who floated a theory of consciousness based on evolution.  I'm 
> just pointing out that it only shows that consciousness exists in humans for 
> some evolutionarily selected purpose.  It doesn't apply to intelligence that 
> arises in some other evolutionary branch or intelligence like AI that doesn't 
> evolve but is designed.
> 


Not sure that there is a genuine difference between design and evolution. With 
the multicellular, there is an evolution of design. With the origin of life, 
there has been a design of evolution, even if serendipitously. I am not talking 
about some purposeful or intelligent design here. A cell is a quite 
sophisticated "gigantic nano-machine”.

Then some techniques in AI, like the genetic algorithm, or techniques inspired 
by the study of the immune system, or self-reference, lead to programs or 
machines evolving in some ways.

Now, I can argue that for consciousness nothing of this is needed. It is the 
canonical knowledge associated with the fixed point of the embedding of the 
universal machines in the arithmetical reality.

It differentiates into the many indexical first person scenarii.

Matter should be what gives rise to possibilities ([]p & ~[]f, []p & <>t).  
That works as it is confirmed by QM without collapse, both intuitively through 
the many computations, and formally as the three material modes do provide a 
formal quantum logic, its arithmetical interpretation, and its metamathematical 
interpretations. 

The universal machines rich enough to prove their own universality (like the 
sound humans and Peano arithmetic, and ZF, …) are confronted with the 
distinction between knowing and proving. They prove their own incompleteness 
but still figure out some truths despite their being unprovable. 

The only mystery is where the numbers (and/or the combinators, the lambda 
expressions, the game of life, c++, etc.) come from.
But here the sound löbian machine can prove that it is impossible to derive a 
universal system from a non-universal theory. 

A weaker version of the Church-Turing-Post-Kleene thesis is: there exists a 
Universal Machine, that is, a machine which computes all computable functions. 
The stronger, usual version is that some formal system/definition provides such 
a universal machine, meaning that the class of functions computable by some 
universal machine is exactly the class of all computable functions, including 
those not everywhere defined (and not algorithmically separable from those 
defined everywhere: the price of universality). 
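The weak thesis above, that a single Universal Machine exists, can be illustrated with a toy interpreter: one fixed function that runs any program in a small register language, including programs that never halt (the partial functions named as the price of universality). The instruction encoding here is my own illustrative choice, not anything from the thread:

```python
def universal(program, x, max_steps=10_000):
    """Toy universal machine: one fixed interpreter that runs ANY program
    written in a tiny register language (encoding is illustrative only).

    Instructions:
      ("inc", r)     add 1 to register r
      ("dec", r)     subtract 1 from register r (floor at 0)
      ("jz", r, k)   jump to instruction k if register r is 0
      ("jmp", k)     unconditional jump to instruction k

    Input x is placed in register 0; the result is register 1 when the
    program halts. Returns None if no halt within max_steps: partiality
    is the price of universality.
    """
    regs = {0: x, 1: 0}
    pc = steps = 0
    while pc < len(program):
        if steps == max_steps:
            return None                      # possibly non-terminating
        steps += 1
        op = program[pc]
        if op[0] == "inc":
            regs[op[1]] = regs.get(op[1], 0) + 1
        elif op[0] == "dec":
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
        elif op[0] == "jz" and regs.get(op[1], 0) == 0:
            pc = op[2]
            continue
        elif op[0] == "jmp":
            pc = op[1]
            continue
        pc += 1                              # fall through to next instruction
    return regs.get(1, 0)

# A sample program computing f(x) = 2x: while r0 > 0, move one unit of
# r0 into r1 twice over.
double = [("jz", 0, 5), ("dec", 0), ("inc", 1), ("inc", 1), ("jmp", 0)]
print(universal(double, 3))                  # → 6
```

The point is that `universal` itself never changes: all the variety lives in the program it is handed, which is the content of universality.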

That universal being has a rich theology, explaining the relation between 
believing, knowing, observing, feeling and the truth.

Bruno



> Brent
> 



Re: Disclosure Project

2018-03-21 Thread agrayson2000


On Monday, March 12, 2018 at 11:46:49 AM UTC-4, agrays...@gmail.com wrote:
>
>
>
> On Monday, March 12, 2018 at 11:35:14 AM UTC-4, Lawrence Crowell wrote:
>>
>> On Monday, March 12, 2018 at 7:26:32 AM UTC-6, John Clark wrote:
>>>
>>> On Sun, Mar 11, 2018 at 10:34 PM,  wrote:
>>>
 > *Jesse Marcel Sr discusses the material recovered at Roswell AG.*  
>>>
>>> For God's sake, can't you get a hint? Nobody around here is interested 
>>> in your idiot Roswell crap!
>>>
>>> John K Clark
>>>
>>
>> I just looked a bit at the Wikipedia site below. Wiki-P is semi-reliable 
>> in that errors do tend to be corrected, though they can reappear. The name 
>> Marcel appears often enough that I wonder whether he is just looking to 
>> be in the spotlight. 
>>
>
> *He was officer in charge of Intelligence and Security at Roswell Army Air 
> Base with a pretty wide range of responsibilities in his areas of 
> expertise. Sounds like too much debris for a single balloon, plus the 
> material found was not made at a toy factory as Clark's earlier link 
> claimed. But, as I pointed out, the first witness in the 15 min video 
> reports a different property of the material than that claimed by Marcel. 
> Perhaps we are dealing with different materials from a single alien craft. 
> But whatever the case, it's nothing that was manufactured in a toy factory. 
> AG*
>

*In ordinary circumstances Marcel would be an extremely trusted witness; 
absolutely credible. What are the odds he's making it up? Virtually nil. 
Yet his testimony is glibly dismissed by those who claim, on no special 
knowledge really, that they know more. AG*

>
>> If this were a balloon in the Mogul program, these balloons can reach up 
>> to 30,000 meters. A two-and-a-half-mile debris distribution is a radius of 
>> 1.25 miles or 2000 meters. This is then a cone of fall with an angle of 
>> arcsin(2/30) = .067 rad, or about 4 degrees. So assume the balloon failed 
>> at 30,000 meters and fell; then given atmospheric drag, winds, wind 
>> shearing and so forth it is not unreasonable to think this material 
>> scattered in this narrow cone of fall, giving the sort of debris 
>> distribution found.
>>
>
> *It's the amount, not necessarily the spread. Listen to what he says. AG *
>
>>
>> In the wiki-p article it is evident that multiple cases have been 
>> advanced for this being a UFO crash. From alien autopsy reports to claims 
>> of government conspiracies this is clearly a catalyst for all sort of 
>> misguided thinking and people who are seeking to have some connection with 
>> the profound without the real education required.
>>
>> https://en.wikipedia.org/wiki/Roswell_UFO_incident
>>
>> LC
>>
>
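The quoted cone-of-fall estimate checks out numerically; a short sanity check using the figures from that message (2000 m debris radius, ~30,000 m failure altitude):

```python
import math

# Half-angle subtended by a debris field of radius ~2000 m seen from a
# balloon-failure altitude of ~30,000 m, as in the quoted estimate.
radius_m = 2000.0
altitude_m = 30000.0
half_angle_rad = math.asin(radius_m / altitude_m)
print(round(half_angle_rad, 3), round(math.degrees(half_angle_rad), 1))
# → 0.067 3.8   (i.e. ~.067 rad, roughly 4 degrees, as stated)
```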



Re: How to live forever

2018-03-21 Thread John Clark
On Tue, Mar 20, 2018 at 10:25 PM, Brent Meeker  wrote:

​>> ​
>>  A mind might not consciously know if its data was being processed
>> distributively or not and have no way to deduce that fact, no matter how
>> intelligent it is, unless it had additional information from its sense
>> organs. If a mind (not to be confused with a brain) can be said to have a
>> position at all it would be the place it is thinking about, which is
>> usually the place the sense organs are observing. Due to human anatomy the
>> sense organs and the place the data is processed are in almost the same
>> place but that need not be true for mind in general.
>
>
> *You are implicitly assuming a unity of mind which it might not have.*
>

Assume it? I don't even know what "unity of mind" means. I assume nothing;
I observe that a mind has no way of determining its position without input
information from sense organs, and therefore it's problematic to say a mind
occupies a unique position in space at all.

John K Clark
