Re: Step 3 - one step beyond?

2015-04-28 Thread Stathis Papaioannou
On 28 April 2015 at 10:44, LizR lizj...@gmail.com wrote:
 On 28 April 2015 at 05:25, meekerdb meeke...@verizon.net wrote:

 On 4/27/2015 2:34 AM, David Nyman wrote:

 On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 That all relies too much on the assumption that comp is true


 At the risk of pointing out the stunningly obvious, *everything* in
 Bruno's argument is premised on the truth of the comp thesis, summarised in
 the claim that consciousness is invariant for a purely digital
 transformation (at some level). In practice this postulate is widely
 accepted, even though in many if not most cases neither the assumption nor
 its possible consequences are made completely explicit, as Bruno is striving
 to do.

 But his argument also includes other assumptions, some more controversial
 than others, cf. the discussion of whether a recording can instantiate
 consciousness or how much scope is required for counterfactual correctness.
 So Bruno often confusingly uses his shorthand of assuming comp to mean
 either the digital substitution of some brain function OR the whole argument
 and its conclusion.


 I think the point about recordings is that if you assume comp, then you
 tacitly assume records can't be conscious because a recording isn't a
 computation - although one might be involved in its playback, this is not a
 computation which should instantiate consciousness, being (presumably) far
 too simple to do so. Although given that physical supervenience is possible,
 I guess it could apply to anything really. (A rock, I think, is the ultimate
 example?)

It's not that a recording isn't a computation, it's that a recording
isn't a computer, because it can't handle the counterfactuals. The
computer is what is needed in a physicalist account of
computationalism. If you accept arguments that purport to show that if
computationalism is true then a computer is not necessary (a recording
is conscious, a rock is conscious, the MGA, Maudlin's argument) then
you either have to throw out computationalism or (and few have been
bold enough to do this) throw out physicalism and keep
computationalism.
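[The counterfactual point above can be illustrated with a toy sketch in code. This is a hypothetical illustration, not anyone's formal argument on this list: the function names and the particular computation are made up. A computer's output depends counterfactually on its input; a recording of one run replays a fixed trace whatever you feed it.]

```python
# Toy illustration (hypothetical): a computer handles counterfactuals,
# a mere recording of one run does not.

def computer(x: int) -> int:
    """A trivial computation: the output depends on the input."""
    return x * x if x % 2 == 0 else x + 1

# A "recording" of the run with input 4: just the fixed trace produced.
recording = [("input", 4), ("output", 16)]

def playback(_x: int) -> int:
    """Replaying the recording ignores its argument entirely."""
    return recording[1][1]

# The computer supports counterfactuals: had the input differed,
# the output would have differed accordingly.
assert computer(4) == 16 and computer(3) == 4

# The playback gives the same answer whatever we feed it, which is
# why a recording is not a computer in the physicalist account.
assert playback(4) == 16 and playback(3) == 16
```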


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Step 3 - one step beyond?

2015-04-28 Thread Bruno Marchal


On 27 Apr 2015, at 20:24, meekerdb wrote:


On 4/27/2015 4:07 AM, David Nyman wrote:
On 27 April 2015 at 07:43, Bruce Kellett  
bhkell...@optusnet.com.au wrote:


Most people get on living in the world by means of heuristics, or
useful rules-of-thumb, that are good enough for most purposes. That  
means, of course, that we make mistakes, we are misled by imprecise  
interpretations of perceptions, and of other peoples' intentions  
and motives. But as long as we get it right often enough, we can  
function perfectly adequately. As Brent might say, consciousness is  
an engineering solution to living -- not a logician's solution.


So a Turing emulation of consciousness is perfectly possible, and  
that consciousness would be not essentially different from yours or  
mine.


I think the conclusion you draw here obfuscates the distinction  
between behaviour (normally) attributed to a conscious being and  
the putative additional fact (truth) of consciousness itself. Of  
course it is possible - implicitly or explicitly - to reject any  
such distinction, or what Bruno likes to call 'sweeping  
consciousness under the rug'. A fairly typical example of this  
(complete with the tell-tale terminology of 'illusion') can be  
found in the Graziano theory under discussion in another thread.


Alternatively one can look for an explicit nomological entailment  
for consciousness in, say, physical activity or computation. The  
problems with establishing any explicable nomological bridging  
principles from physical activity alone are well known and tend to  
lead to a more-or-less unintelligible brute identity thesis.


I disagree on that point. Physical activity in the brain can give a
very fine-grained identity between processes and qualia, and much
progress has been made as technology allows finer resolution of
brain activity. I think that's the way progress will be made. A
convergence of brain neurophysiology and computer AI will give us
the ability to create beings that act as conscious as human beings
do, and we'll have engineering-level knowledge of creating
consciousness to order; questions about qualia will be bypassed as
semantic philosophizing.


It's Bruno's modal logic that postulates a brute identity between
axiomatic provability and qualia.


Where? I don't see what you are pointing to. There is no brute
identity at all. There are axiomatic definitions of qualia and quanta,
and a discovery that machines discover them (things obeying those
definitions) when looking inward, including the difference between
qualia and quanta.




He proposes that this is just a technical problem...but one with no  
solution in sight.


?

What is lacking, beyond the open problems (and one has been solved
since I exposed them)?


It is not as if we have a choice in the matter.

bruno




Consequently physical activity is postulated as an adequate  
approximation of computation, at some level, and it is the latter  
that is assumed to provide the nomological bridge to consciousness.  
What is striking, then, about Bruno's UD argument is that it uses  
precisely this starting assumption to draw the opposite conclusion:  
i.e. that computation and not physical activity must be playing the  
primary role in this relation.


This is perhaps less of a shock to the imagination than it may at  
first appear. Idealists such as Berkeley and of course the  
Platonists that preceded him had already pointed out that deriving  
the appearance of matter from the 'mental' might present conceptual  
problems less insuperable than the reverse. What they lacked was  
any explicit conceptual apparatus to put flesh on the bare bones of  
such an intuition. What is interesting about Bruno's work, at least  
to me, is that it suggests (until proved in error) that the default  
assumption about the nomological basis of consciousness in fact  
leads to a kind of a quasi-idealism, albeit one founded on the  
neutral ontological basis of primary arithmetical relations. That  
then presents the empirically-testable task of validating, or  
ruling out, the entailment that physics itself (or more generally  
'what is observable or shareable') relies on nothing more or less  
than such relations.


Did anyone suppose that physics did not rely on shared perception  
and intersubjective agreement?  The laws of physics are just  
models that physicists invent to try to codify and predict those  
perceptions.  Reality is of the ontology of our best current  
model...always subject to revision.


Brent


Re: Step 3 - one step beyond?

2015-04-28 Thread Bruno Marchal


On 28 Apr 2015, at 03:45, Bruce Kellett wrote:


David Nyman wrote:
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:

   Most people get on living in the world by means of heuristics, or
   useful rules-of-thumb, that are good enough for most purposes. That
   means, of course, that we make mistakes, we are misled by imprecise
   interpretations of perceptions, and of other peoples' intentions and
   motives. But as long as we get it right often enough, we can
   function perfectly adequately. As Brent might say, consciousness is
   an engineering solution to living -- not a logician's solution.

   So a Turing emulation of consciousness is perfectly possible, and
   that consciousness would be not essentially different from yours or
   mine.
I think the conclusion you draw here obfuscates the distinction  
between behaviour (normally) attributed to a conscious being and  
the putative additional fact (truth) of consciousness itself.


That is the Platonist's move, and it also leads to problems, as Kant
found. When you use a phrase like 'consciousness itself', one
inevitably thinks of Kant's 'Ding an sich', and the conclusion that
this is essentially unknowable. Postulating a distinction between
consciousness as found in conscious beings and 'consciousness itself'
is to postulate that conscious beings are explained by the
inexplicable -- not a great advance!


Of course it is possible - implicitly or explicitly - to reject any  
such distinction, or what Bruno likes to call 'sweeping  
consciousness under the rug'. A fairly typical example of this  
(complete with the tell-tale terminology of 'illusion') can be  
found in the Graziano theory under discussion in another thread.


There is no sweeping under the rug here. Consciousness is that  
which is to be found in conscious beings. It supervenes on the  
physical, and came about by evolution -- a process of trial and  
error. That is why conscious living is by corrigible heuristics, not  
arithmetic or modal logics.



Alternatively one can look for an explicit nomological entailment  
for consciousness in, say, physical activity or computation. The  
problems with establishing any explicable nomological bridging  
principles from physical activity alone are well known and tend to  
lead to a more-or-less unintelligible brute identity thesis.


Can you indicate to me why relating consciousness to computations in
Platonia is any less an unintelligible brute identity thesis?
Arithmetical relations are static, not dynamic, so they do not  
instantiate the computations of a physical computer (or brain).


You need first to understand what a computation is, in the sense of  
Church and Turing. You just asked a question in a post which shows  
that you are not aware of computation and computability theory.
Those are mathematical notions. They are dynamical in a weaker sense
than physical dynamics.
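[One way to see how a 'static' mathematical object can nonetheless count as a computation -- a hedged sketch under the Church-Turing reading, not Bruno's formal apparatus, with a made-up toy machine: a computation can be identified with a finite sequence of states in which each successor satisfies a fixed step relation. The sequence is a static object; the 'dynamics' is just that relation holding between neighbours, with no appeal to physical time.]

```python
# Hedged sketch: a computation as a static sequence of states linked
# by a step relation. The two-register machine here is invented for
# illustration only.

def step(state):
    """One step of a toy machine: decrement n, accumulate into acc."""
    n, acc = state
    return (n - 1, acc + n) if n > 0 else state  # halts at a fixed point

# The full run from (4, 0) -- a finite, static list of states.
trace = [(4, 0)]
while step(trace[-1]) != trace[-1]:
    trace.append(step(trace[-1]))

# "Dynamics" in the weak sense: each neighbouring pair satisfies the
# static relation  next == step(prev).  Nothing here needs physical time.
assert all(trace[i + 1] == step(trace[i]) for i in range(len(trace) - 1))
assert trace[-1] == (0, 10)   # 4 + 3 + 2 + 1
```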


More on this later.

bruno




Bruce


Consequently physical activity is postulated as an adequate  
approximation of computation, at some level, and it is the latter  
that is assumed to provide the nomological bridge to consciousness.  
What is striking, then, about Bruno's UD argument is that it uses  
precisely this starting assumption to draw the opposite conclusion:  
i.e. that computation and not physical activity must be playing the  
primary role in this relation.
This is perhaps less of a shock to the imagination than it may at  
first appear. Idealists such as Berkeley and of course the  
Platonists that preceded him had already pointed out that deriving  
the appearance of matter from the 'mental' might present conceptual  
problems less insuperable than the reverse. What they lacked was  
any explicit conceptual apparatus to put flesh on the bare bones of  
such an intuition. What is interesting about Bruno's work, at least  
to me, is that it suggests (until proved in error) that the default  
assumption about the nomological basis of consciousness in fact  
leads to a kind of a quasi-idealism, albeit one founded on the  
neutral ontological basis of primary arithmetical relations. That  
then presents the empirically-testable task of validating, or  
ruling out, the entailment that physics itself (or more generally  
'what is observable or shareable') relies on nothing more or less  
than such relations.

David




http://iridia.ulb.ac.be/~marchal/




Re: Step 3 - one step beyond?

2015-04-27 Thread David Nyman
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:

That all relies too much on the assumption that comp is true


At the risk of pointing out the stunningly obvious, *everything* in Bruno's
argument is premised on the truth of the comp thesis, summarised in the
claim that consciousness is invariant for a purely digital transformation
(at some level). In practice this postulate is widely accepted, even though
in many if not most cases neither the assumption nor its possible
consequences are made completely explicit, as Bruno is striving to do.

Of course there is no compulsion to accept the premise, but once it is
adopted, even hypothetically, the onus is on the challenger (Bruno
included) to reveal some flaw in the derivation, e.g. an invalid inference
or contradiction. That some of the consequences may be counter-intuitive
does not of itself invalidate the premise. On the other hand, if you reject
it at the outset, there is little further to be said.

David



Re: Step 3 - one step beyond?

2015-04-27 Thread David Nyman
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:

Most people get on living in the world by means of heuristics, or useful
 rules-of-thumb, that are good enough for most purposes. That means, of
 course, that we make mistakes, we are misled by imprecise interpretations
 of perceptions, and of other peoples' intentions and motives. But as long
 as we get it right often enough, we can function perfectly adequately. As
 Brent might say, consciousness is an engineering solution to living -- not
 a logician's solution.

 So a Turing emulation of consciousness is perfectly possible, and that
 consciousness would be not essentially different from yours or mine.


I think the conclusion you draw here obfuscates the distinction between
behaviour (normally) attributed to a conscious being and the putative
additional fact (truth) of consciousness itself. Of course it is possible -
implicitly or explicitly - to reject any such distinction, or what Bruno
likes to call 'sweeping consciousness under the rug'. A fairly typical
example of this (complete with the tell-tale terminology of 'illusion') can
be found in the Graziano theory under discussion in another thread.

Alternatively one can look for an explicit nomological entailment for
consciousness in, say, physical activity or computation. The problems with
establishing any explicable nomological bridging principles from physical
activity alone are well known and tend to lead to a more-or-less
unintelligible brute identity thesis. Consequently physical activity is
postulated as an adequate approximation of computation, at some level, and
it is the latter that is assumed to provide the nomological bridge to
consciousness. What is striking, then, about Bruno's UD argument is that it
uses precisely this starting assumption to draw the opposite conclusion:
i.e. that computation and not physical activity must be playing the primary
role in this relation.

This is perhaps less of a shock to the imagination than it may at first
appear. Idealists such as Berkeley and of course the Platonists that
preceded him had already pointed out that deriving the appearance of matter
from the 'mental' might present conceptual problems less insuperable than
the reverse. What they lacked was any explicit conceptual apparatus to put
flesh on the bare bones of such an intuition. What is interesting about
Bruno's work, at least to me, is that it suggests (until proved in error)
that the default assumption about the nomological basis of consciousness in
fact leads to a kind of a quasi-idealism, albeit one founded on the neutral
ontological basis of primary arithmetical relations. That then presents the
empirically-testable task of validating, or ruling out, the entailment that
physics itself (or more generally 'what is observable or shareable') relies
on nothing more or less than such relations.

David



Re: Step 3 - one step beyond?

2015-04-27 Thread David Nyman
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:


 I can define my own consciousness, at least to a level that is sufficient
 for me to operate successfully in the world. If my brain and body functions
 can be taken over by a general-purpose computer, then that computer could
 define its own consciousness perfectly adequately, just as I now do.

  The same happens with knowledge. Those notions mix what the machine can
 define and believe, and semantical notions related to truth, which would
 need stronger beliefs, that no machine can get about itself for logical
 reasons. We don't hit the contradiction, we just explore the G* minus G
 logic of machines  which are correct by definition (something necessarily
 not constructive).


 I don't think that people, or other conscious beings, understand their own
 consciousness, or that of others, in these terms.


With respect, the above comment leads me to doubt that you've fully grasped
the point of what Bruno is saying here. He's not claiming that conscious
beings necessarily or explicitly think about their consciousness in these
terms. His intention is rather to establish a method of differentiating, on
principles motivated by his starting premise, some aspects of consciousness
that are communicable (shareable) from some that are inalienably private.
Of course, to be valid, these elementary principles should eventually
*entail* specific boundaries to self-knowledge (and especially the peculiar
limits on what is communicable) but they cannot be expected to fully
characterise it.

David



Re: Step 3 - one step beyond?

2015-04-27 Thread Bruce Kellett

Bruno Marchal wrote:

On 24 Apr 2015, at 02:43, Bruce Kellett wrote:


That seems odd to me. The starting point was that the brain was Turing 
emulable (at some substitution level). Which seems to suggest that 
consciousness (usually associated with brain function) is Turing 
emulable.


Using an identity thesis which no longer does any work, as the UDA
normally makes clear.


If you find at the end of your chain of reasoning that consciousness
isn't computable (not Turing emulable?), it seems that you might have 
hit a contradiction.


Not necessarily. Consciousness, like truth, is a notion that the machine 
cannot define for itself, although she can study this for machines
simpler than herself.


I can define my own consciousness, at least to a level that is 
sufficient for me to operate successfully in the world. If my brain and 
body functions can be taken over by a general-purpose computer, then 
that computer could define its own consciousness perfectly adequately, 
just as I now do.


The same happens with knowledge. Those notions mix 
what the machine can define and believe, and semantical notions related 
to truth, which would need stronger beliefs, that no machine can get 
about itself for logical reasons. We don't hit the contradiction; we just
explore the G* minus G logic of machines  which are correct by 
definition (something necessarily not constructive).
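[For readers following the notation: 'G' and 'G*' are standard names for the provability logics GL (Gödel-Löb) and GLS (Solovay's logic of the true provability sentences). A textbook illustration of the 'G* minus G' gap, sketched here as a hedged aside rather than Bruno's full apparatus:]

```latex
% G  (= GL):  what a correct machine can prove about its own provability.
% G* (= GLS): what is true about that machine. G* adds reflection.
\begin{align*}
  &\text{Reflection (in } G^{*}\text{, not derivable in } G\text{):}
    \quad \Box p \to p \\
  &\text{Taking } p = \bot \text{ gives consistency:}
    \quad G^{*} \vdash \neg\Box\bot,
    \qquad G \nvdash \neg\Box\bot
\end{align*}
% The second incompleteness theorem is why G cannot derive
% \neg\Box\bot: a correct machine cannot prove its own consistency,
% though that consistency is true -- hence "G* minus G".
```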


I don't think that people, or other conscious beings, understand their 
own consciousness, or that of others, in these terms. Consciousness 
evolved in beings (people) operating in the physical world, and it does 
not need to be able to define itself in order to be able to operate 
quite successfully. People do not run their lives according to truths 
that they can prove, or worry themselves needlessly about whether 
their reasoning is consistent or complete.


Most people get on living in the world by means of heuristics, or useful
rules-of-thumb, that are good enough for most purposes. That means, of 
course, that we make mistakes, we are misled by imprecise 
interpretations of perceptions, and of other peoples' intentions and 
motives. But as long as we get it right often enough, we can function 
perfectly adequately. As Brent might say, consciousness is an 
engineering solution to living -- not a logician's solution.


So a Turing emulation of consciousness is perfectly possible, and that 
consciousness would be not essentially different from yours or mine.


Consciousness is not much more than the mental first-person state of a
person believing *correctly* in some reality, be it a dream or a
physical universe. That notion relies on another non-definable notion:
reality, which, per se, is not Turing emulable.
The brain does not produce or compute consciousness; it might even be
more like a filter, which differentiates consciousness among the many
histories, and makes a person have some genuine first-person
perspectives, which are also not definable (although locally
approximable by the (correct) person's discourse, once she has enough
introspective ability).


That all relies too much on the assumption that comp is true. And I am 
far from believing that you have actually demonstrated that, or that the 
assumption that comp is true is a useful step towards understanding the 
world.


Comp explains all this, with a big price: we have to extract the
apparent stability of the physical laws from the machine's
self-reference logics. The laws of physics have to be brain-invariant,
or phi_i invariant. This puts quite a big constraint on what a
physical (observable) reality can be.


But you have not yet really made any progress at all towards achieving 
this. You make some hints, and claim some things, but they are just 
cherry-picked from the infinity of things that your comp world has to 
come to terms with.


Bruce



Re: Step 3 - one step beyond?

2015-04-27 Thread LizR
On 28 April 2015 at 05:25, meekerdb meeke...@verizon.net wrote:

  On 4/27/2015 2:34 AM, David Nyman wrote:

  On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 That all relies too much on the assumption that comp is true


  At the risk of pointing out the stunningly obvious, *everything* in
 Bruno's argument is premised on the truth of the comp thesis, summarised in
 the claim that consciousness is invariant for a purely digital
 transformation (at some level). In practice this postulate is widely
 accepted, even though in many if not most cases neither the assumption nor
 its possible consequences are made completely explicit, as Bruno is
 striving to do.

 But his argument also includes other assumptions, some more controversial
 than others, cf. the discussion of whether a recording can instantiate
 consciousness or how much scope is required for counterfactual
 correctness.  So Bruno often confusingly uses his shorthand of assuming
 comp to mean either the digital substitution of some brain function OR the
 whole argument and its conclusion.


I think the point about recordings is that if you assume comp, then you
tacitly assume records can't be conscious because a recording isn't a
computation - although one might be involvesd in its playback, this is not
a computation which should instantiate consciousness, being (presumably)
far too simple to do so. Although given that physical supervenience is
possible, I guess it could apply to anything really. (A rock, I think, is
the ultimate example?)

But being a bear of little brain I expect to be corrected on that point
shortly.



Re: Step 3 - one step beyond?

2015-04-27 Thread Bruce Kellett

David Nyman wrote:
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:


Most people get on living in the world by means of heuristics, or
useful rules-of-thumb, that are good enough for most purposes. That
means, of course, that we make mistakes, we are misled by imprecise
interpretations of perceptions, and of other peoples' intentions and
motives. But as long as we get it right often enough, we can
function perfectly adequately. As Brent might say, consciousness is
an engineering solution to living -- not a logician's solution.

So a Turing emulation of consciousness is perfectly possible, and
that consciousness would be not essentially different from yours or
mine.

I think the conclusion you draw here obfuscates the distinction between 
behaviour (normally) attributed to a conscious being and the putative 
additional fact (truth) of consciousness itself.


That is the Platonist's move, and it also leads to problems, as Kant
found. When you use a phrase like 'consciousness itself', one
inevitably thinks of Kant's 'Ding an sich', and the conclusion that
this is essentially unknowable. Postulating a distinction between
consciousness as found in conscious beings and 'consciousness itself'
is to postulate that conscious beings are explained by the
inexplicable -- not a great advance!


Of course it is 
possible - implicitly or explicitly - to reject any such distinction, or 
what Bruno likes to call 'sweeping consciousness under the rug'. A 
fairly typical example of this (complete with the tell-tale terminology 
of 'illusion') can be found in the Graziano theory under discussion in 
another thread.


There is no sweeping under the rug here. Consciousness is that which 
is to be found in conscious beings. It supervenes on the physical, and 
came about by evolution -- a process of trial and error. That is why 
conscious living is by corrigible heuristics, not arithmetic or modal 
logics.



Alternatively one can look for an explicit nomological entailment for 
consciousness in, say, physical activity or computation. The problems 
with establishing any explicable nomological bridging principles from 
physical activity alone are well known and tend to lead to a 
more-or-less unintelligible brute identity thesis.


Can you indicate to me why relating consciousness to computations in
Platonia is any less an unintelligible brute identity thesis?
Arithmetical relations are static, not dynamic, so they do not 
instantiate the computations of a physical computer (or brain).


Bruce


Consequently physical 
activity is postulated as an adequate approximation of computation, at 
some level, and it is the latter that is assumed to provide the 
nomological bridge to consciousness. What is striking, then, about 
Bruno's UD argument is that it uses precisely this starting assumption 
to draw the opposite conclusion: i.e. that computation and not physical 
activity must be playing the primary role in this relation.


This is perhaps less of a shock to the imagination than it may at first 
appear. Idealists such as Berkeley and of course the Platonists that 
preceded him had already pointed out that deriving the appearance of 
matter from the 'mental' might present conceptual problems less 
insuperable than the reverse. What they lacked was any explicit 
conceptual apparatus to put flesh on the bare bones of such an 
intuition. What is interesting about Bruno's work, at least to me, is 
that it suggests (until proved in error) that the default assumption 
about the nomological basis of consciousness in fact leads to a kind of 
a quasi-idealism, albeit one founded on the neutral ontological basis of 
primary arithmetical relations. That then presents the 
empirically-testable task of validating, or ruling out, the entailment 
that physics itself (or more generally 'what is observable or 
shareable') relies on nothing more or less than such relations.


David




Re: Step 3 - one step beyond?

2015-04-27 Thread meekerdb

On 4/27/2015 2:34 AM, David Nyman wrote:
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:


That all relies too much on the assumption that comp is true


At the risk of pointing out the stunningly obvious, *everything* in Bruno's argument is 
premised on the truth of the comp thesis, summarised in the claim that consciousness is 
invariant for a purely digital transformation (at some level). In practice this 
postulate is widely accepted, even though in many if not most cases neither the 
assumption nor its possible consequences are made completely explicit, as Bruno is 
striving to do.


But his argument also includes other assumptions, some more controversial than others, 
cf. the discussion of whether a recording can instantiate consciousness or how much scope
is required for counterfactual correctness.  So Bruno often confusingly uses his shorthand 
of assuming comp to mean either the digital substitution of some brain function OR the 
whole argument and its conclusion.


Brent



Of course there is no compulsion to accept the premise, but once it is adopted, even 
hypothetically, the onus is on the challenger (Bruno included) to reveal some flaw in 
the derivation, e.g. an invalid inference or contradiction. That some of the 
consequences may be counter-intuitive does not of itself invalidate the premise. On the 
other hand, if you reject it at the outset, there is little further to be said.


David




Re: Step 3 - one step beyond?

2015-04-27 Thread Bruno Marchal


On 27 Apr 2015, at 13:07, David Nyman wrote:

On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au  
wrote:


Most people get on living in the world by means of heuristics, or
useful rules-of-thumb, that are good enough for most purposes. That
means, of course, that we make mistakes, we are misled by imprecise
interpretations of perceptions, and of other people's intentions and
motives. But as long as we get it right often enough, we can
function perfectly adequately. As Brent might say, consciousness is
an engineering solution to living -- not a logician's solution.


So a Turing emulation of consciousness is perfectly possible, and  
that consciousness would be not essentially different from yours or  
mine.


I think the conclusion you draw here obfuscates the distinction  
between behaviour (normally) attributed to a conscious being and the  
putative additional fact (truth) of consciousness itself. Of course  
it is possible - implicitly or explicitly - to reject any such  
distinction, or what Bruno likes to call 'sweeping consciousness  
under the rug'. A fairly typical example of this (complete with the  
tell-tale terminology of 'illusion') can be found in the Graziano  
theory under discussion in another thread.


Alternatively one can look for an explicit nomological entailment  
for consciousness in, say, physical activity or computation. The  
problems with establishing any explicable nomological bridging  
principles from physical activity alone are well known and tend to  
lead to a more-or-less unintelligible brute identity thesis.  
Consequently physical activity is postulated as an adequate  
approximation of computation, at some level, and it is the latter  
that is assumed to provide the nomological bridge to consciousness.  
What is striking, then, about Bruno's UD argument is that it uses  
precisely this starting assumption to draw the opposite conclusion:  
i.e. that computation and not physical activity must be playing the  
primary role in this relation.
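As an aside for readers new to the thread: the UD referred to here is Bruno's Universal Dovetailer, a program that runs all programs by interleaving their steps. Its core scheduling trick, dovetailing, can be sketched in a few lines of Python. The names `dovetail`, `looper` and `halter` below are invented for this toy illustration; this is only a sketch of the interleaving idea, not Bruno's actual construction.

```python
from itertools import count

def dovetail(programs, rounds):
    """Interleave execution of (possibly non-halting) programs.

    `programs` is a list of generator factories; each generator yields
    one 'step' at a time. In round n we advance programs 0..n by one
    step each, so every program receives unboundedly many steps and no
    single non-halting program can starve the others -- the essence of
    dovetailing.
    """
    gens = [p() for p in programs]
    trace = []  # (program_index, yielded_value) pairs, in execution order
    for n in range(rounds):
        for i, g in enumerate(gens[: n + 1]):
            try:
                trace.append((i, next(g)))
            except StopIteration:
                pass  # program i has halted; nothing more to do for it
    return trace

def looper():          # a program that never halts
    for k in count():
        yield k

def halter():          # a program that halts after two steps
    yield "a"
    yield "b"

trace = dovetail([looper, halter], rounds=5)
print(trace)
# [(0, 0), (0, 1), (1, 'a'), (0, 2), (1, 'b'), (0, 3), (0, 4)]
```

In the real UD the list of programs is itself generated from an enumeration of all partial computable functions, so the interleaving ranges over a countably infinite set; the finite list here only shows why the non-halting `looper` cannot block `halter` from making progress.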


This is perhaps less of a shock to the imagination than it may at  
first appear. Idealists such as Berkeley and of course the  
Platonists that preceded him had already pointed out that deriving  
the appearance of matter from the 'mental' might present conceptual  
problems less insuperable than the reverse. What they lacked was any  
explicit conceptual apparatus to put flesh on the bare bones of such  
an intuition. What is interesting about Bruno's work, at least to  
me, is that it suggests (until proved in error) that the default  
assumption about the nomological basis of consciousness in fact  
leads to a kind of a quasi-idealism, albeit one founded on the  
neutral ontological basis of primary arithmetical relations. That  
then presents the empirically-testable task of validating, or ruling  
out, the entailment that physics itself (or more generally 'what is  
observable or shareable') relies on nothing more or less than such  
relations.



All good and important points that you clearly set out, David.

Bruce seems to ignore the (mind-body) problem, and to miss that the  
UDA just helps to make that problem more precise, in the frame of  
computationalism, and to make it more amenable to rigorous  
treatment ... not to mention that the arithmetical translation of  
the UDA is a non-trivial beginning of a solution (and one which  
might motivate people to study a lot of nice and fun results in  
theoretical computer science, at the least).


Bruno




David



http://iridia.ulb.ac.be/~marchal/





Re: Step 3 - one step beyond?

2015-04-27 Thread Bruno Marchal


On 27 Apr 2015, at 08:43, Bruce Kellett wrote:


Bruno Marchal wrote:

On 24 Apr 2015, at 02:43, Bruce Kellett wrote:


That seems odd to me. The starting point was that the brain was  
Turing emulable (at some substitution level). Which seems to  
suggest that consciousness (usually associated with brain  
function) is Turing emulable.
Using an identity thesis which does no more work, as normally UDA  
makes clear.
If you find at the end of your chain of reasoning that  
consciousness isn't computable (not Turing emulable?), it seems  
that you might have hit a contradiction.
Not necessarily. Consciousness, like truth, is a notion that the  
machine cannot define for itself, although she can study this for  
machines simpler than herself.


I can define my own consciousness, at least to a level that is  
sufficient for me to operate successfully in the world. If my brain  
and body functions can be taken over by a general-purpose computer,  
then that computer could define its own consciousness perfectly  
adequately, just as I now do.


That is what computationalism makes conceivable, but it does not  
define consciousness, and you have to bet on some substitution level.





The same happens with knowledge. Those notions mix what the machine  
can define and believe with semantical notions related to truth,  
which would need stronger beliefs that no machine can get about  
itself, for logical reasons. We don't hit a contradiction; we just  
explore the G* minus G logic of machines which are correct by  
definition (something necessarily not constructive).
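For reference, the two logics mentioned here can be stated compactly. This is the standard textbook presentation via Solovay's completeness theorems, not necessarily the exact axiomatics Bruno prefers:

```latex
% G (also written GL): what a correct machine can prove about its own
% provability. Axioms: classical tautologies,
%   \Box(p \to q) \to (\Box p \to \Box q),
%   \Box(\Box p \to p) \to \Box p   (Lob's axiom).
% G*: what is *true* about that machine's provability (Solovay's second
% theorem): all theorems of G plus every instance of \Box p \to p,
% closed under modus ponens only (not necessitation).
% The "G* minus G" part thus contains, e.g., the machine's own consistency:
\[
  \neg \Box \bot \;\in\; G^{*} \setminus G .
\]
```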


I don't think that people, or other conscious beings, understand  
their own consciousness, or that of others, in these terms.


Nor do I. Nor does the machine. Indeed, the conscious part will be  
related to the soul (axiomatized by S4Grz if defined in the  
Theaetetus way), which cannot recognize itself in the beweisbar  
predicate nor in its logic G, unless betting on comp and reasoning.




Consciousness evolved in beings (people) operating in the physical  
world,


I don't know that.



and it does not need to be able to define itself in order to be able  
to operate quite successfully.


It needs to think about this when asking if it will say yes to a  
doctor. We must be careful to separate the levels of reflection.




People do not run their lives according to truths that they can  
prove, or worry themselves needlessly about whether their  
reasoning is consistent or complete.


We are searching for a TOE solving the measure problem in  
arithmetic. We are not trying to explain everyday thinking.





Most people get on living in the world by means of heuristics, or  
useful rules-of-thumb, that are good enough for most purposes. That  
means, of course, that we make mistakes, we are misled by imprecise  
interpretations of perceptions, and of other people's intentions and  
motives. But as long as we get it right often enough, we can  
function perfectly adequately. As Brent might say, consciousness is  
an engineering solution to living -- not a logician's solution.


?

I give a mathematical problem to those believing in computationalism  
and in primitive materialism.







So a Turing emulation of consciousness is perfectly possible, and  
that consciousness would be not essentially different from yours or  
mine.



Of course we don't know that, but thanks for recalling the  
computationalist assumption. That is step zero.


I don't tell my personal opinion on this. I just show logical  
relations between sets of beliefs.






Consciousness is not much more than the mental first person state  
of a person believing *correctly* in some reality, be it a dream or  
a physical universe. That notion relies on another non-definable  
notion: reality, which per se is not Turing emulable.
The brain does not produce or compute consciousness; it might even  
be more like a filter, which differentiates consciousness in the  
many histories, and makes a person have some genuine first person  
perspective, which is also not definable (although locally  
approximable by the (correct) person's discourse, once she has  
enough introspective ability).


That all relies too much on the assumption that comp is true.


?





And I am far from believing that you have actually demonstrated that,



Why?



or that the assumption that comp is true is a useful step towards  
understanding the world.



That assumption leads to big problems, indeed. But where I expected  
to find contradictions, I find only quantum weirdness, so I think  
comp is not refuted.


If you have a non-comp theory, give it to us, as there are none known  
today, except Penrose and people using the idea that the collapse of  
the wave function is due to consciousness (but this is well refuted  
by Shimony).





Comp explains all this, with a big price: we have to extract the  
apparent stability of the physical laws from machine's self- 
reference logics. The laws of physics have to be brain-invariant,  
or 

Re: Step 3 - one step beyond?

2015-04-27 Thread meekerdb

On 4/27/2015 4:07 AM, David Nyman wrote:
On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au wrote:


Most people get on living in the world by means of heuristics, or useful
rules-of-thumb, that are good enough for most purposes. That means, of
course, that we make mistakes, we are misled by imprecise interpretations
of perceptions, and of other people's intentions and motives. But as long
as we get it right often enough, we can function perfectly adequately. As
Brent might say, consciousness is an engineering solution to living -- not
a logician's solution.

So a Turing emulation of consciousness is perfectly possible, and that 
consciousness
would be not essentially different from yours or mine.


I think the conclusion you draw here obfuscates the distinction between behaviour 
(normally) attributed to a conscious being and the putative additional fact (truth) of 
consciousness itself. Of course it is possible - implicitly or explicitly - to reject 
any such distinction, or what Bruno likes to call 'sweeping consciousness under the 
rug'. A fairly typical example of this (complete with the tell-tale terminology of 
'illusion') can be found in the Graziano theory under discussion in another thread.


Alternatively one can look for an explicit nomological entailment for consciousness in, 
say, physical activity or computation. The problems with establishing any explicable 
nomological bridging principles from physical activity alone are well known and tend to 
lead to a more-or-less unintelligible brute identity thesis.


I disagree on that point.  Physical activity in the brain can give a very 
fine-grained identity between processes and qualia, and much progress has been 
made as technology allows finer resolution of brain activity.  I think that's 
the way progress will be made.  A convergence of brain neurophysiology and 
computer AI will give us the ability to create beings that act as conscious as 
human beings do; we'll have engineering-level knowledge of creating 
consciousness to order, and questions about qualia will be bypassed as 
semantic philosophizing.


It's Bruno's modal logic that postulates a brute identity between axiomatic 
provability and qualia.  He proposes that this is just a technical 
problem...but one with no solution in sight.


Consequently physical activity is postulated as an adequate approximation of 
computation, at some level, and it is the latter that is assumed to provide the 
nomological bridge to consciousness. What is striking, then, about Bruno's UD argument 
is that it uses precisely this starting assumption to draw the opposite conclusion: i.e. 
that computation and not physical activity must be playing the primary role in this 
relation.


This is perhaps less of a shock to the imagination than it may at first appear. 
Idealists such as Berkeley and of course the Platonists that preceded him had already 
pointed out that deriving the appearance of matter from the 'mental' might present 
conceptual problems less insuperable than the reverse. What they lacked was any explicit 
conceptual apparatus to put flesh on the bare bones of such an intuition. What is 
interesting about Bruno's work, at least to me, is that it suggests (until proved in 
error) that the default assumption about the nomological basis of consciousness in fact 
leads to a kind of a quasi-idealism, albeit one founded on the neutral ontological basis 
of primary arithmetical relations. That then presents the empirically-testable task of 
validating, or ruling out, the entailment that physics itself (or more generally 'what 
is observable or shareable') relies on nothing more or less than such relations.


Did anyone suppose that physics did not rely on shared perception and 
intersubjective agreement?  The laws of physics are just models that 
physicists invent to try to codify and predict those perceptions.  Reality is 
the ontology of our best current model...always subject to revision.


Brent



Re: Step 3 - one step beyond?

2015-04-27 Thread David Nyman
On 27 April 2015 at 19:24, meekerdb meeke...@verizon.net wrote:

 On 4/27/2015 4:07 AM, David Nyman wrote:

  On 27 April 2015 at 07:43, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 Most people get on living in the world by means of heuristics, or useful
 rules-of-thumb, that are good enough for most purposes. That means, of
 course, that we make mistakes, we are misled by imprecise interpretations
 of perceptions, and of other people's intentions and motives. But as long
 as we get it right often enough, we can function perfectly adequately. As
 Brent might say, consciousness is an engineering solution to living -- not
 a logician's solution.

 So a Turing emulation of consciousness is perfectly possible, and that
 consciousness would be not essentially different from yours or mine.


  I think the conclusion you draw here obfuscates the distinction between
 behaviour (normally) attributed to a conscious being and the putative
 additional fact (truth) of consciousness itself. Of course it is possible -
 implicitly or explicitly - to reject any such distinction, or what Bruno
 likes to call 'sweeping consciousness under the rug'. A fairly typical
 example of this (complete with the tell-tale terminology of 'illusion') can
 be found in the Graziano theory under discussion in another thread.

  Alternatively one can look for an explicit nomological entailment for
 consciousness in, say, physical activity or computation. The problems with
 establishing any explicable nomological bridging principles from physical
 activity alone are well known and tend to lead to a more-or-less
 unintelligible brute identity thesis.


 I disagree on that point.  Physical activity in the brain can give a very
 fine-grained identity between processes and qualia and much progress has
 been made as technology allows finer resolution of brain activity.  I think
 that's the way progress will be made.


No doubt. But it still won't give an account of the essential difference
(which for the sake of argument I still assume you accept) between an
'engineering' description at any (3p) level whatsoever and the (1p)
actuality of conscious experience.

  A convergence of brain neurophysiology and computer AI will give us the
 ability to create beings that act as conscious as human beings do and we'll
 have engineering level knowledge of creating consciousness to order and
 questions about qualia will be bypassed as semantic philosophizing.


Again, you may be right in this, since most people are not unnaturally
inclined to accept the fruits of technological progress despite, in most
cases, having no more than the dimmest notion of the relevant principles.
But my point above still stands notwithstanding.


 It's Bruno's modal logic that postulates a brute identity between
 axiomatic provability and qualia.


I don't think that's right. There is a proposed 3-p identity, IIUC, both
for qualia (non-sharable) and quanta (sharable), with types of provable
propositions or beliefs, instantiated computationally. But 'conscious
reality', again IIUC, is postulated as standing in transcendent (1p)
relation to belief, such that the 3p belief and its 1p truth are
coincident, but not provably so (hence Bp *and* p). The truth of the
relevant belief or proposition, though in a sense fully entailed by its
function (i.e. it is quasi-analytic), cannot be further described in 3p
terms. Such truths are 'transcendently' accessible only in the 1p view of a
knower possessed of the relevant belief.
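For concreteness, here is the family of identifications being discussed, as I understand it from Bruno's sane04 presentation (the labels are his; any error in this summary is mine):

```latex
\[
\begin{array}{ll}
p                                  & \text{truth} \\
\Box p                             & \text{belief / provability (logics } G, G^{*}\text{)} \\
\Box p \wedge p                    & \text{knowledge, the knower or soul (S4Grz)} \\
\Box p \wedge \Diamond p           & \text{the observable, intelligible matter (Z, } Z^{*}\text{)} \\
\Box p \wedge \Diamond p \wedge p  & \text{the sensible, qualia (X, } X^{*}\text{)}
\end{array}
\]
% "Bp and p": the belief and its truth coincide, but the coincidence is
% not provable by the machine -- which is why the knower, on this account,
% admits no complete 3p description.
```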

Obviously this doesn't hold for you, but to me there is something
powerfully intuitive about all this. The idea that consciousness
corresponds with some incorrigible truth goes back to Descartes, and
probably a lot further than that. You've previously remarked that
consciousness is far from obviously incorrigible but I think you
persistently miss the distinction between the immediate and indispensable
incorrigibility of consciousness and what may subsequently be inferred,
concluded, or believed on the basis of that primary truth.

A partial analogy that comes to mind is watching a movie. Any particular
viewer of a movie may be mistaken to any arbitrary, secondary degree about
the action taking place, the motives of the characters, or anything else
whatsoever. But all those inferences must necessarily be based on *some*
primary representation that is not, *in itself* and in the moment, open to
correction, but is rather the source of everything that follows. Such
'experiential incorrigibility', it should be noted, must be understood as
quite distinct from any other consideration of the 'correctness' or
otherwise of the enterprise as a whole.

  He proposes that this is just a technical problem...but one with no
 solution in sight.


I don't know about 'just', but it is indeed a technical problem in the
relevant theory and whether there is a solution in sight is a separate
matter. Science would be in a parlous state if we only pursued a course
where the end was in 

Re: Step 3 - one step beyond?

2015-04-26 Thread Bruno Marchal


On 26 Apr 2015, at 00:19, meekerdb wrote:


On 4/25/2015 2:10 PM, Bruno Marchal wrote:


On 25 Apr 2015, at 02:29, meekerdb wrote:


On 4/24/2015 3:05 PM, Quentin Anciaux wrote:



2015-04-24 22:33 GMT+02:00 meekerdb meeke...@verizon.net:
On 4/24/2015 5:25 AM, Quentin Anciaux wrote:
That seems odd to me. The starting point was that the brain was  
Turing emulable (at some substitution level). Which seems to  
suggest that consciousness (usually associated with brain  
function) is Turing emulable. If you find at the end of your  
chain of reasoning that consciousness isn't computable (not  
Turing emulable?), it seems that you might have hit a  
contradiction.


ISTM, that's because you conflate the machinery (iow: the brain  
or a computer program running on a physical computer) necessary  
for consciousness to be able to manifest itself relatively to an  
environment and consciousness itself.


How do we know the two are separable?  What is consciousness that  
can't manifest itself?  The environment (the body?) isn't another  
sentient being that can recognize the consciousness...is it?


The thing is, under the computationalism hypothesis, there are an  
infinity of valid implementations of a particular conscious  
moment, so consciousness itself is supervening on all of them,


Does that mean each of them, or does it mean the completed  
infinity of them?  And what is a conscious moment?  Is it just a  
state of a Turing machine implementing all these computations, or  
is it a long sequence of states?


assuming the brain is turing emulable, any implementation of it  
is valid, and there are an infinity of equivalent implementations,  
such that you have to distinguish a particular  
implementation of that conscious moment from the consciousness  
itself.


Why?  Is it because the different implementations will diverge  
after this particular state and will instantiate different  
conscious states?  I don't see how there can be a concept of  
consciousness itself or a consciousness in this model.   
Consciousness is just a sequence of states (each of which happens  
to be realized infinitely many times).


Consciousness is 1p, and a sequence of states is 3p, so they can't  
be equal. Consciousness is more like a sequence of states related  
to a possible reality, and consciousness is more like a semantical  
fixed point in that relation, but it is not describable in any 3p  
terms.


Semantical fixed point sounds close to intersubjective agreement  
which is the basis of empirical epistemology.


I don't see the relationship between semantical fixed point, which  
involves one person, and intersubjective agreement, which involves  
more than one person.




What semantical transformation is consciousness a fixed point of?


Doubting, like with Descartes.

 □~A, or ~□A, with □ being one of the arithmetical hypostases. If  
it is G, the fixed point is consistency. If it is S4, the fixed  
point is not expressible.
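The claim about G can be made precise with the fixed-point theorem of de Jongh and Sambin — standard provability-logic material, offered here only to unpack the remark above: the self-referential "doubting" sentence has an explicit, non-self-referential solution.

```latex
\[
  G \vdash p \leftrightarrow \neg\Box p
  \quad\Longrightarrow\quad
  G \vdash p \leftrightarrow \neg\Box\bot ,
\]
% i.e. the unique fixed point of the doubting transformation
% p \mapsto \neg\Box p is \neg\Box\bot, the consistency statement.
```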






If it's not 3p describable how is it we seem to be talking about it.


By assuming that consciousness is invariant for some digital  
substitution, we can approximate the first person by its memories, or  
by using Theaetetus' idea, which in the comp context also justifies  
why we cannot define it, yet can meta-formalize it, and actually  
formalize it completely for machines that we assume to be correct  
(and usually much simpler than ourselves). It is related to the fact  
that a machine with strong provability ability (like ZF) can  
formalize the theology of a simpler machine, and then can lift it  
onto herself, with some caution, as this can lead to inconsistency  
very easily.





What I'm interested in is whether an AI will be conscious


PA is already conscious, and can already describe its theology.




and what that consciousness will be.


?
I already cannot do that with Brent Meeker.

(Now, smoking salvia can give a pretty good idea of what it is like  
to be PA, with an atemporal consciousness of a very dissociative type).


But it is usually hard to have an idea of the consciousness of  
another, and even more so for entities which are very different from us.





For that I need a description of how the consciousness is realized.


Normally, by giving a machine the universal ability, + enough  
induction axioms.


I tend to think that RA is already conscious, maybe in some trivial  
sense. But RA is mute on all interesting questions. PA is less mute  
and can justify why it remains silent on some theological questions.  
All that is explained in the AUDA part (part 2 of the sane04 paper):  
the interview of the machine. Maybe you can read it, and ask me  
questions when you don't understand something.







It is not a thing, it is phenomenological or epistemological. It  
concerns the soul, not the body, which helps only for the  
differentiation and the person's relative partial control.


??


I define the soul by the knower, and I define the knower by the true  
believer, and I 

Re: Step 3 - one step beyond?

2015-04-25 Thread Bruno Marchal


On 25 Apr 2015, at 02:29, meekerdb wrote:


On 4/24/2015 3:05 PM, Quentin Anciaux wrote:



2015-04-24 22:33 GMT+02:00 meekerdb meeke...@verizon.net:
On 4/24/2015 5:25 AM, Quentin Anciaux wrote:
That seems odd to me. The starting point was that the brain was  
Turing emulable (at some substitution level). Which seems to  
suggest that consciousness (usually associated with brain  
function) is Turing emulable. If you find at the end of your chain  
of reasoning that consciousness isn't computable (not Turing  
emulable?), it seems that you might have hit a contradiction.


ISTM, that's because you conflate the machinery (iow: the brain or  
a computer program running on a physical computer) necessary for  
consciousness to be able to manifest itself relatively to an  
environment and consciousness itself.


How do we know the two are separable?  What is consciousness that  
can't manifest itself?  The environment (the body?) isn't another  
sentient being that can recognize the consciousness...is it?


The thing is, under the computationalism hypothesis, there are an  
infinity of valid implementations of a particular conscious moment,  
so consciousness itself is supervening on all of them,


Does that mean each of them, or does it mean the completed infinity  
of them?  And what is a conscious moment?  Is it just a state of a  
Turing machine implementing all these computations, or is it a long  
sequence of states?


assuming the brain is turing emulable, any implementation of it is  
valid, and there are an infinity of equivalent implementations, such  
that you have to distinguish a particular implementation of  
that conscious moment from the consciousness itself.


Why?  Is it because the different implementations will diverge after  
this particular state and will instantiate different conscious  
states?  I don't see how there can be a concept of consciousness  
itself or a consciousness in this model.  Consciousness is just a  
sequence of states (each of which happens to be realized infinitely  
many times).


Consciousness is 1p, and a sequence of states is 3p, so they can't be  
equal. Consciousness is more like a sequence of states related to a  
possible reality, and consciousness is more like a semantical fixed  
point in that relation, but it is not describable in any 3p terms.  It  
is not a thing, it is phenomenological or epistemological. It concerns  
the soul, not the body, which helps only for the differentiation  
and the person's relative partial control.


Bruno













Re: Step 3 - one step beyond?

2015-04-25 Thread Bruno Marchal


On 24 Apr 2015, at 02:43, Bruce Kellett wrote:


LizR wrote:
On 24 April 2015 at 09:54, meekerdb meeke...@verizon.net wrote:

   On 4/23/2015 1:03 AM, LizR wrote:
   The discussion was originally about step 3 in the comp argument.

   Obviously if we've moved onto something else then comp may not
   be relevant, however, if we are still talking about comp then
   the question of importance is whether a brain is Turing emulable
   at any level (which includes whether physics is Turing
   emulable). If it is, then either the argument goes through, or
   one of Bruno's other premises is wrong, or there is a mistake in
   his argument.

   Well, maybe Bruno can clarify.  He always says that physics and
   consciousness are not computable; they are some kind of sum or
   average over countably infinite many threads going through a
   particular state of the UD.  So it's not that clear what it means
   that the brain is Turing emulable in Bruno's theory, even if it is
   Turing emulable in the materialist theory.  That's part of my
   concern that the environment of the brain, the physics of its
   relation to the environment, is what makes it not emulable because
   its perception/awareness is inherently adapted to the environment
   by evolution.  Bruno tends to dismiss this as a technicality
   because one can just expand the scope of the emulation to include
   the environment.  But I think that's a flaw.  If the scope has to
   be expanded then all that's proven in step 8 is that, within a
   simulated environment a simulated consciousness doesn't require
   any real physics - just simulated physics.  But that's almost
   trivial. I say almost because it may still provide some
   explanation of consciousness within the simulation.
I think you'll find that consciousness isn't computable /if you  
assume all the consequences of comp/. But once you've assumed all  
that, you've already had to throw out materialism, including  
brains, so the question is meaningless.


That seems odd to me. The starting point was that the brain was  
Turing emulable (at some substitution level). Which seems to suggest  
that consciousness (usually associated with brain function) is  
Turing emulable.


Using an identity thesis which does no more work, as normally UDA  
makes clear.





If you find at the end of your chain of reasoning that consciousness  
isn't computable (not Turing emulable?), it seems that you might  
have hit a contradiction.


Not necessarily. Consciousness, like truth, is a notion that the  
machine cannot define for itself, although she can study this for  
machines simpler than herself. The same happens with knowledge. Those  
notions mix what the machine can define and believe with semantical  
notions related to truth, which would need stronger beliefs that no  
machine can get about itself, for logical reasons. We don't hit a  
contradiction; we just explore the G* minus G logic of machines which  
are correct by definition (something necessarily not constructive).
Consciousness is not much more than the mental first person state of a
person believing *correctly* in some reality, be it a dream or a
physical universe. That notion relies on another non-definable notion:
reality, which, per se, is not Turing emulable.
The brain does not produce or compute consciousness; it might even
be more like a filter, which differentiates consciousness in the many
histories, and makes a person have some genuine first person
perspective, which is also not definable (although locally
approximable by the (correct) person's discourse, once she has enough
introspective ability).
Comp explains all this, at a big price: we have to extract the
apparent stability of the physical laws from the machine's
self-reference logics. The laws of physics have to be brain-invariant,
or phi_i invariant. This puts quite a big constraint on what a
physical (observable) reality can be.


Bruno





Bruce

--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: Step 3 - one step beyond?

2015-04-25 Thread Bruno Marchal


On 23 Apr 2015, at 23:54, meekerdb wrote:


On 4/23/2015 1:03 AM, LizR wrote:
The discussion was originally about step 3 in the comp argument.  
Obviously if we've moved onto something else then comp may not be  
relevant, however, if we are still talking about comp then the  
question of importance is whether a brain is Turing emulable at any  
level (which includes whether physics is Turing emulable). If it  
is, then either the argument goes through, or one of Bruno's other  
premises is wrong, or there is a mistake in his argument.


Well, maybe Bruno can clarify.  He always says that physics and  
consciousness are not computable; they are some kind of sum or  
average over countably infinite many threads going through a  
particular state of the UD.  So it's not that clear what it means  
that the brain is Turing emulable in Bruno's theory, even if it is  
Turing emulable in the materialist theory.  That's part of my  
concern that the environment of the brain, the physics of its  
relation to the environment, is what makes it not emulable because  
its perception/awareness is inherently adapted to the environment by  
evolution.  Bruno tends to dismiss this as a technicality because  
one can just expand the scope of the emulation to include the  
environment.  But I think that's a flaw.  If the scope has to be  
expanded then all that's proven in step 8 is that, within  a  
simulated environment a simulated consciousness doesn't require any  
real physics - just simulated physics.  But that's almost trivial. I  
say almost because it may still provide some explanation of  
consciousness within the simulation.


Expanded or not, once the states are digital states, they are
accessible by the UD, and part of the sigma_1 truth.
But this is only needed to explain the comp supervenience, which is
needed to explain the measure problem.


Consciousness is not Turing emulable, because it is not even
definable. It is a true, but undefinable, attribute of a person,
defined by the knower that exists attached to the machine, by
incompleteness (and obeying S4Grz, X1, X1*).
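The knower attached to the machine is, in Marchal's published presentations, obtained by the Theaetetus move on provability; a standard summary is sketched here for context:

```latex
% Knowledge as true belief: define, for each arithmetic sentence p,
\[
  K p \;\equiv\; \Box p \wedge p
\]
% By incompleteness the machine cannot prove \Box p \to p in general,
% so K and \Box provably differ for her, although they coincide on
% the true sentences. The resulting modal logic of K is S4Grz.
```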


Even the truth of 1+1=2 cannot be emulated, but with comp the belief
that 1+1=2 can be emulated, and we can only hope it is true, so as to
have []1+1=2 together with the fact that 1+1=2.
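The point that the belief can be emulated while its truth cannot may be easier to see with a toy sketch (my illustration, not part of the thread): a program that derives 1+1=2 mechanically from the recursion equations for addition. The derivation is a finite computation; that the equations are true of anything is external to the program.

```python
# Toy Peano-style derivation of 1+1=2 (illustrative sketch only).
# Numerals are nested tuples: 0 = (), s(n) = ('s', n).

ZERO = ()

def s(n):
    return ('s', n)

def add(m, n):
    # Defining equations of addition:
    #   m + 0    = m
    #   m + s(n) = s(m + n)
    if n == ZERO:
        return m
    return s(add(m, n[1]))

one = s(ZERO)
two = s(one)

# The machine "believes" 1+1=2 by computing it from the equations:
assert add(one, one) == two
print("derived: s(0) + s(0) = s(s(0))")
```

Running it prints the derived equation; nothing in the run certifies the axioms themselves.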


It is subtle, and that's why the tool by Solovay (G and G*) is a
tremendous help in this context of ideally self-referentially correct
machines.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Step 3 - one step beyond?

2015-04-25 Thread meekerdb

On 4/25/2015 2:10 PM, Bruno Marchal wrote:


On 25 Apr 2015, at 02:29, meekerdb wrote:


On 4/24/2015 3:05 PM, Quentin Anciaux wrote:



2015-04-24 22:33 GMT+02:00 meekerdb meeke...@verizon.net:

On 4/24/2015 5:25 AM, Quentin Anciaux wrote:


That seems odd to me. The starting point was that the brain was Turing
emulable (at some substitution level). Which seems to suggest that
consciousness (usually associated with brain function) is Turing 
emulable. If
you find at the end of your chain of reasoning that consciousness isn't
computable (not Turing emulable?), it seems that you might have hit a
contradiction.


ISTM, that's because you conflate the machinery (iow: the brain or a 
computer
program running on a physical computer) necessary for consciousness to be 
able to
manifest itself relatively to an environment and consciousness itself.


How do we know the two are separable?  What is consciousness that can't 
manifest
itself?  The environment (the body?) isn't another sentient being that can
recognize the consciousness...is it?


The thing is, under the computationalism hypothesis, there are an infinity of valid 
implementations of a particular conscious moment, so consciousness itself is 
supervening on all of them,


Does that mean each of them or does it mean the completed infinity of them?  And what 
is a conscious moment?  Is it just a state of a Turing machine implementing all these 
computations, or is it a long sequence of states?


assuming the brain is Turing emulable, any implementation of it is valid, and there 
are an infinity of equivalent implementations, such that you have to make a distinction 
between a particular implementation of that conscious moment and the consciousness itself.


Why?  Is it because the different implementations will diverge after this particular 
state and will instantiate different conscious states?  I don't see how there can be a 
concept of consciousness itself or a consciousness in this model.  Consciousness is 
just a sequence of states (each which happen to be realized infinitely many times).


Consciousness is 1p, and a sequence of states is 3p, so they can't be equal. 
Consciousness is more like a sequence of states _related to a possible reality,_ and 
consciousness is more like a semantical fixed point in that relation, but it is not 
describable in any 3p terms.


Semantical fixed point sounds close to intersubjective agreement which is the basis of 
empirical epistemology.  What semantical transformation is consciousness a fixed point of?


If it's not 3p describable, how is it we seem to be talking about it?  What I'm interested 
in is whether an AI will be conscious and what that consciousness will be.  For that I 
need a description of how the consciousness is realized.


It is not a thing, it is phenomenological or epistemological. It concerns the soul, 
not the body, which helps only for the differentiation and the person-relative partial 
control.


??

Brent



Re: Step 3 - one step beyond?

2015-04-24 Thread Quentin Anciaux
2015-04-24 2:43 GMT+02:00 Bruce Kellett bhkell...@optusnet.com.au:

 LizR wrote:

 On 24 April 2015 at 09:54, meekerdb meeke...@verizon.net wrote:

 On 4/23/2015 1:03 AM, LizR wrote:

 The discussion was originally about step 3 in the comp argument.
 Obviously if we've moved onto something else then comp may not
 be relevant, however, if we are still talking about comp then
 the question of importance is whether a brain is Turing emulable
 at any level (which includes whether physics is Turing
 emulable). If it is, then either the argument goes through, or
 one of Bruno's other premises is wrong, or there is a mistake in
 his argument.

 Well, maybe Bruno can clarify.  He always says that physics and
 consciousness are not computable; they are some kind of sum or
 average over countably infinite many threads going through a
 particular state of the UD.  So it's not that clear what it means
 that the brain is Turing emulable in Bruno's theory, even if it is
 Turing emulable in the materialist theory.  That's part of my
 concern that the environment of the brain, the physics of its
 relation to the environment, is what makes it not emulable because
 its perception/awareness is inherently adapted to the environment by
 evolution.  Bruno tends to dismiss this as a technicality because
 one can just expand the scope of the emulation to include the
 environment.  But I think that's a flaw.  If the scope has to be
 expanded then all that's proven in step 8 is that, within  a
 simulated environment a simulated consciousness doesn't require any
 real physics - just simulated physics.  But that's almost trivial. I
 say almost because it may still provide some explanation of
 consciousness within the simulation.

 I think you'll find that consciousness isn't computable /if you assume
 all the consequences of comp/. But once you've assumed all that, you've
 already had to throw out materialism, including brains, so the question is
 meaningless.


 That seems odd to me. The starting point was that the brain was Turing
 emulable (at some substitution level). Which seems to suggest that
 consciousness (usually associated with brain function) is Turing emulable.
 If you find at the end of your chain of reasoning that consciousness isn't
 computable (not Turing emulable?), it seems that you might have hit a
 contradiction.


ISTM, that's because you conflate the machinery (iow: the brain or a
computer program running on a physical computer) necessary for
consciousness to be able to manifest itself relatively to an environment
and consciousness itself.

Quentin




 Bruce






-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)



Re: Step 3 - one step beyond?

2015-04-24 Thread meekerdb

On 4/24/2015 3:05 PM, Quentin Anciaux wrote:



2015-04-24 22:33 GMT+02:00 meekerdb meeke...@verizon.net:

On 4/24/2015 5:25 AM, Quentin Anciaux wrote:


That seems odd to me. The starting point was that the brain was Turing 
emulable
(at some substitution level). Which seems to suggest that consciousness
(usually associated with brain function) is Turing emulable. If you 
find at the
end of your chain of reasoning that consciousness isn't computable (not 
Turing
emulable?), it seems that you might have hit a contradiction.


ISTM, that's because you conflate the machinery (iow: the brain or a 
computer
program running on a physical computer) necessary for consciousness to be 
able to
manifest itself relatively to an environment and consciousness itself.


How do we know the two are separable?  What is consciousness that can't 
manifest
itself?  The environment (the body?) isn't another sentient being that can
recognize the consciousness...is it?


The thing is, under the computationalism hypothesis, there are an infinity of valid 
implementations of a particular conscious moment, so consciousness itself is 
supervening on all of them,


Does that mean each of them or does it mean the completed infinity of them?  And what is 
a conscious moment?  Is it just a state of a Turing machine implementing all these 
computations, or is it a long sequence of states?


assuming the brain is Turing emulable, any implementation of it is valid, and there are 
an infinity of equivalent implementations, such that you have to make a distinction between 
a particular implementation of that conscious moment and the consciousness itself.


Why?  Is it because the different implementations will diverge after this particular state 
and will instantiate different conscious states?  I don't see how there can be a concept 
of consciousness itself or a consciousness in this model.  Consciousness is just a 
sequence of states (each which happen to be realized infinitely many times).




Re: Step 3 - one step beyond?

2015-04-24 Thread Quentin Anciaux
2015-04-24 22:33 GMT+02:00 meekerdb meeke...@verizon.net:

  On 4/24/2015 5:25 AM, Quentin Anciaux wrote:

 That seems odd to me. The starting point was that the brain was Turing
 emulable (at some substitution level). Which seems to suggest that
 consciousness (usually associated with brain function) is Turing emulable.
 If you find at the end of your chain of reasoning that consciousness isn't
 computable (not Turing emulable?), it seems that you might have hit a
 contradiction.


  ISTM, that's because you conflate the machinery (iow: the brain or a
 computer program running on a physical computer) necessary for
 consciousness to be able to manifest itself relatively to an environment
 and consciousness itself.


 How do we know the two are separable?  What is consciousness that can't
 manifest itself?  The environment (the body?) isn't another sentient being
 that can recognize the consciousness...is it?


The thing is, under the computationalism hypothesis, there are an infinity of
valid implementations of a particular conscious moment, so consciousness
itself is supervening on all of them; assuming the brain is Turing
emulable, any implementation of it is valid, and there are an infinity of
equivalent implementations, such that you have to make a distinction between
a particular implementation of that conscious moment and the consciousness
itself.

Quentin


 Brent





-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)



Re: Step 3 - one step beyond?

2015-04-24 Thread LizR
On 24 April 2015 at 12:43, Bruce Kellett bhkell...@optusnet.com.au wrote:

 LizR wrote:

 I think you'll find that consciousness isn't computable /if you assume
 all the consequences of comp/. But once you've assumed all that, you've
 already had to throw out materialism, including brains, so the question is
 meaningless.


 That seems odd to me. The starting point was that the brain was Turing
 emulable (at some substitution level). Which seems to suggest that
 consciousness (usually associated with brain function) is Turing emulable.
 If you find at the end of your chain of reasoning that consciousness isn't
 computable (not Turing emulable?), it seems that you might have hit a
 contradiction.


I think that's the point.



Re: Step 3 - one step beyond?

2015-04-24 Thread meekerdb

On 4/24/2015 5:25 AM, Quentin Anciaux wrote:


That seems odd to me. The starting point was that the brain was Turing 
emulable (at
some substitution level). Which seems to suggest that consciousness (usually
associated with brain function) is Turing emulable. If you find at the end 
of your
chain of reasoning that consciousness isn't computable (not Turing 
emulable?), it
seems that you might have hit a contradiction.


ISTM, that's because you conflate the machinery (iow: the brain or a computer program 
running on a physical computer) necessary for consciousness to be able to manifest 
itself relatively to an environment and consciousness itself.


How do we know the two are separable?  What is consciousness that can't manifest itself? 
The environment (the body?) isn't another sentient being that can recognize the 
consciousness...is it?


Brent



Re: Step 3 - one step beyond?

2015-04-23 Thread Telmo Menezes
On Mon, Apr 20, 2015 at 12:52 PM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Telmo Menezes wrote:

 On Mon, Apr 20, 2015 at 8:40 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Dennis Ochei wrote:

 One must revise the everyday concept of personal identity
 because it isn't even coherent. It's like youre getting mad at
 him for explaining combustion without reference to phlogiston.
 He can't use the everyday notion because it is a convenient
 fiction.

 I don't think phlogiston is an everyday concept.
 Not anymore. It was made obsolete by a better theory, which was not
 required to take phlogiston into account, because phlogiston was just a
 made up explanation that happened to fit the observations available at the
 time.


 No, phlogiston was a serious scientific theory. It required careful
 experimentation to demonstrate that the theory did not really fit the facts
 easily (you would require negative mass, for instance).


I did not say that it wasn't serious. What I said is that it was made up.
Many successful theories start as a creative hypothesis. Usually creative
hypotheses are very constrained by what is known at the time, or, put
another way, by common sense. Requiring better theories to fit common sense
would prevent scientific success. When you are dealing with things like
particle accelerators you are already far removed from our common
experience of what matter is.




  The closest continuer concept of personal identity is far from an
 unsophisticated everyday notion, or a convenient fiction.
 I wasn't familiar with the concept so I looked at several sources. I will
 summarize it in my own words, so that you can please correct me if I
 misunderstand something:

 In case of branching (through something like duplication machines, body
 swaps, non-destructive teleportations, etc..), only one or zero branches
 will be the true continuation of the original. In some cases the true
 continuation is the one that more closely resembles the original
 psychologically, which can be determined by following causality chains. In
 the case of a tie, no branch is a true continuation of the original.


 It involves a lot more than psychological resemblance. The point is that
 personal identity is a multidimensional concept. It includes continuity of
 the body, causality, continuity, access to memories, emotional states,
 value systems, and everything else that goes to make up a unique person.
 Although all of these things change with time in the natural course of
 events, we say that there is a unique person in this history. Closest
 continuer theory is a sophisticated attempt to capture this
 multidimensionality, and acknowledges that the metric one might use, and
 the relative weights placed on different dimensions, might be open to
 discussion. But it is clear that in the case of ties (in whatever metric
 you are using), new persons are created -- the person is not duplicated in
 any operational sense.
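Bruce's multidimensional closeness metric can be sketched in code. This is a hypothetical illustration: the dimension names, weights, and tie tolerance below are placeholders of my own, since, as he notes, the metric and the relative weights are open to discussion.

```python
# Hypothetical sketch of a "closest continuer" comparison.
# Dimension names and weights are illustrative placeholders only.
WEIGHTS = {
    "bodily_continuity": 0.3,
    "causal_link": 0.3,
    "memory_access": 0.25,
    "values_emotions": 0.15,
}

def closeness(candidate):
    """Weighted similarity to the original; each dimension scored in [0, 1]."""
    return sum(w * candidate.get(dim, 0.0) for dim, w in WEIGHTS.items())

def closest_continuer(candidates, tol=1e-9):
    """Return the unique closest candidate's name, or None on a tie
    (on this account, a tie means no branch continues the original)."""
    if not candidates:
        return None
    ranked = sorted(candidates.items(), key=lambda kv: closeness(kv[1]),
                    reverse=True)
    if len(ranked) > 1 and \
            abs(closeness(ranked[0][1]) - closeness(ranked[1][1])) < tol:
        return None  # tie: new persons are created, no true continuation
    return ranked[0][0]

# Symmetric duplication: both copies score identically, so it is a tie.
copy = {"bodily_continuity": 0.0, "causal_link": 1.0,
        "memory_access": 1.0, "values_emotions": 1.0}
print(closest_continuer({"copy_A": dict(copy), "copy_B": dict(copy)}))  # None
```

On this sketch a symmetric duplication yields a tie, so no branch counts as the continuer, matching the "in the case of ties, new persons are created" reading.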


What part of reality is all of this stuff trying to explain? The entire
personal identity business strikes me as an ill-defined problem.




  Again, please correct me if I am misrepresenting the theory or missing
 something important.

 If what I said above is correct, this is just akin to a legal definition,
 not a serious scientific or philosophical theory. It makes a statement
 about a bunch of mushy concepts. What is a true continuation? How is the
 causality chain introduced by a train journey any different from the one
 introduced by a teleportation?

 If Everett's MWI is correct, then this theory holds that there is no true
 continuation -- every single branching from one observer moment to the next
 introduces a tie in closeness. Which is fine by me, but then we can just
 ignore this entire true continuation business.


 MWI is in no way equivalent to Bruno's duplication situation. He
 acknowledges this.


Does he? This statement seems too broad to be meaningful.


 The point about MWI is that the continuers are in different worlds.


So are first person perspectives. I cannot experience your first person
perspective while experiencing mine.


 There is no dimension connecting the worlds, so there is no metric
 defining this difference.


I don't see how that follows. Just because the worlds are mutually
inaccessible by humans doesn't mean we can't define a metric for similarity
between possible worlds.


 Each can then be counted as the closest continuer /in that world/ -- with
 no possibility of conflicts.


What conflict? It's not like me shaking hands with a copy of myself will
create a singularity. The conflict only appears as a limitation of human
language.




  If you want to revise it to some alternative definition of personal
 identity that is better suited to your purposes, then you have to do
 the necessary analytical work.

 There isn't a single reference to personal identity 

Re: Step 3 - one step beyond?

2015-04-23 Thread meekerdb

On 4/23/2015 1:03 AM, LizR wrote:
The discussion was originally about step 3 in the comp argument. Obviously if we've 
moved onto something else then comp may not be relevant, however, if we are still 
talking about comp then the question of importance is whether a brain is Turing emulable 
at any level (which includes whether physics is Turing emulable). If it is, then either 
the argument goes through, or one of Bruno's other premises is wrong, or there is a 
mistake in his argument.


Well, maybe Bruno can clarify.  He always says that physics and consciousness are not 
computable; they are some kind of sum or average over countably infinite many threads 
going through a particular state of the UD.  So it's not that clear what it means that the 
brain is Turing emulable in Bruno's theory, even if it is Turing emulable in the 
materialist theory.  That's part of my concern that the environment of the brain, the 
physics of its relation to the environment, is what makes it not emulable because its 
perception/awareness is inherently adapted to the environment by evolution.  Bruno tends 
to dismiss this as a technicality because one can just expand the scope of the emulation 
to include the environment.  But I think that's a flaw.  If the scope has to be expanded 
then all that's proven in step 8 is that, within  a simulated environment a simulated 
consciousness doesn't require any real physics - just simulated physics.  But that's 
almost trivial. I say almost because it may still provide some explanation of 
consciousness within the simulation.


Brent



Re: Step 3 - one step beyond?

2015-04-23 Thread John Mikes
Stathis:
I am an idealist enough (and an agnostic) to confess to lots and lots of so
far undetected functions (maybe even components  -- outside our 'material'
 --concept) that contribute to the functioning of a human 'brain'(?) as
developed into by now. Scanning goes for known items, composing is
contemplated for known structures (that include known functioning and
functionals as well), so to scan and reproduce is but a pious wish *within
our knowledge-base* of today. Maybe ever. The readiness for infinites is a
humanly unknown domain.
I feel it as much more than a linear progressing from 200 (1000?) to 11
billion or so which may be (if only by the huge numbers) above linearity.
AND... it includes the Aristotelian (what I called in a recent post my pun:
Aris-Total) *mistake* of regarding the 'total' as the composition of
known *material* parts.

Physicists may fall into these traps, mathematicians even more, but
people in 'thinking' areas should not.

Apologies to the physicians and number-churners.

JM

On Wed, Apr 22, 2015 at 4:17 AM, Stathis Papaioannou stath...@gmail.com
wrote:



 On Wednesday, April 22, 2015, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 Bruno Marchal wrote:

 On 21 Apr 2015, at 00:43, Bruce Kellett wrote:

  What you are talking about has more to do with psychology and/or
 physics than mathematics,


 I call that theology, and this can be justified using Plato's notion of
 theology, as the lexicon Plotinus/arithmetic illustrates. The name of the
 field is another topic.

  Also, you are unclear. You argue that comp is false, but reason as if it
  makes sense, and that the reasoning is not valid, without saying where
  the error is. It is hard to figure out what you mean.


 I think we are coming from entirely different starting points. From my
 (physicist's) point of view, what you are doing is proposing a model and
 reasoning about what happens in that model. Because it is your model,
 you are free to choose the starting point and the ancillary assumptions
 as you wish. All that matters for the model is that the logic of the
 development of the model is correct.

 What is happening in our exchanges is that I am examining what goes into
 your model and seeing whether it makes sense in the light of other
 knowledge. The actual logic of the development of your model is then of
 secondary importance. If your assumptions are unrealistic or too
 restrictive, then no matter how good your logic, the end result will not
 be of any great value. These wider issues cannot be simply dismissed as
 off topic.

 In summary, my objections start with step 0, the yes doctor argument.
 I do not think that it is physically possible to examine a living brain
 in sufficient detail to reproduce its conscious life in a Turing machine
 without actually destroying the brain before the process is complete. I
 would say No to the doctor. So even though I believe that AI is
 possible, in other words, that a computer-based intelligence that can
 function in all relevant respects like a normal human being is in
 principle possible, I do not believe that I can be replaced by such an
 AI. The necessary starting data are unobtainable in principle.

 Consequently, I think the reasoning in the first steps of your model
 could only apply to mature AIs, not to humans. The internal logic of the
 model is then not an issue -- but the relevance to human experience is.


 I don't see why you think it is impossible to scan a brain sufficiently to
  reproduce it. For example, you could fix the brain, slice it up with a
 microtome and with microscopy establish all the synaptic connections. That
 is the crudest proposal for so-called mind uploading, but it may be
 necessary to go further to the molecular level and determine the types and
  numbers of membrane proteins in each neuron. The next step would be at
 the level of small molecules and atoms, such as neurotransmitters and ions,
 but this may be able to be deduced from information about the type of
 neuron and macromolecules. It seems unlikely that you would need to
 determine things like ionic concentrations at a given moment, since ionic
 gradients collapse all the time and the person survives. In any case, with
 the yes doctor test you would not be the first volunteer. It is assumed
 that it will be well established, through a series of engineering
 refinements, that with the brain replacement the copies seem to behave
 normally and claim that they feel normal. The leap of faith (which, as I've
 said previously, I don't think is such a leap) is that not only will the
 copies say they feel the same, they will in fact feel the same.


 --
 Stathis Papaioannou


Re: Step 3 - one step beyond?

2015-04-23 Thread Stathis Papaioannou
On Friday, April 24, 2015, John Mikes jami...@gmail.com wrote:

 Stathis:
 I am an idealist enough (and an agnostic) to confess to lots and lots of
 so far undetected functions (maybe even components  -- outside our
 'material'  --concept) that contribute to the functioning of a human
 'brain'(?) as developed into by now. Scanning goes for known items,
 composing is contemplated for known structures (that include known
 functioning and functionals as well) so to scan and reproduce is but a
 pious wish *within our knowledge-base* of today. Maybe ever. The readiness
 for infinites is a humanly unknown domain.
 I feel it as much more than a linear progressing from 200 (1000?) to 11
 billion or so which may be (if only by the huge numbers) above linearity.
 AND... it includes the Aristotelian (what I called in a recent post my pun:
 Aris-Total) *mistake* of regarding the 'total' as the composition of
 known *material* parts.

 Physicists may fall into these traps, mathematicians even more, but
 people in 'thinking' areas should not.

 Apologies to the physicians and number-churners.

 JM


John,

You may be right and there may be brain structures and functions that defy
scanning and reproducing. However, this is a straightforward scientific
question, amenable to experimental methods.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-23 Thread LizR
On 24 April 2015 at 10:03, Telmo Menezes te...@telmomenezes.com wrote:

 On Mon, Apr 20, 2015 at 12:52 PM, Bruce Kellett bhkell...@optusnet.com.au
  wrote:

 No, phlogiston was a serious scientific theory. It required careful
 experimentation to demonstrate that the theory did not really fit the facts
 easily (you would require negative mass, for instance).

 I did not say that it wasn't serious. What I said is that it was made up.


Surely all scientific theories are made up?



Re: Step 3 - one step beyond?

2015-04-23 Thread LizR
On 24 April 2015 at 09:54, meekerdb meeke...@verizon.net wrote:

 On 4/23/2015 1:03 AM, LizR wrote:

 The discussion was originally about step 3 in the comp argument.
 Obviously if we've moved onto something else then comp may not be relevant,
 however, if we are still talking about comp then the question of importance
 is whether a brain is Turing emulable at any level (which includes whether
 physics is Turing emulable). If it is, then either the argument goes
 through, or one of Bruno's other premises is wrong, or there is a mistake
 in his argument.


 Well, maybe Bruno can clarify.  He always says that physics and
 consciousness are not computable; they are some kind of sum or average over
 countably infinite many threads going through a particular state of the
 UD.  So it's not that clear what it means that the brain is Turing emulable
 in Bruno's theory, even if it is Turing emulable in the materialist
 theory.  That's part of my concern that the environment of the brain, the
 physics of its relation to the environment, is what makes it not emulable
 because its perception/awareness is inherently adapted to the environment
 by evolution.  Bruno tends to dismiss this as a technicality because one
 can just expand the scope of the emulation to include the environment.  But
 I think that's a flaw.  If the scope has to be expanded then all that's
 proven in step 8 is that, within  a simulated environment a simulated
 consciousness doesn't require any real physics - just simulated physics.
 But that's almost trivial. I say almost because it may still provide some
 explanation of consciousness within the simulation.


I think you'll find that consciousness isn't computable *if you assume all
the consequences of comp*. But once you've assumed all that, you've already
had to throw out materialism, including brains, so the question is
meaningless.



Re: Step 3 - one step beyond?

2015-04-23 Thread Bruce Kellett

LizR wrote:
On 24 April 2015 at 09:54, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 4/23/2015 1:03 AM, LizR wrote:

The discussion was originally about step 3 in the comp argument.
Obviously if we've moved onto something else then comp may not
be relevant, however, if we are still talking about comp then
the question of importance is whether a brain is Turing emulable
at any level (which includes whether physics is Turing
emulable). If it is, then either the argument goes through, or
one of Bruno's other premises is wrong, or there is a mistake in
his argument.

Well, maybe Bruno can clarify.  He always says that physics and
consciousness are not computable; they are some kind of sum or
average over countably infinite many threads going through a
particular state of the UD.  So it's not that clear what it means
that the brain is Turing emulable in Bruno's theory, even if it is
Turing emulable in the materialist theory.  That's part of my
concern that the environment of the brain, the physics of its
relation to the environment, is what makes it not emulable because
its perception/awareness is inherently adapted to the environment by
evolution.  Bruno tends to dismiss this as a technicality because
one can just expand the scope of the emulation to include the
environment.  But I think that's a flaw.  If the scope has to be
expanded then all that's proven in step 8 is that, within  a
simulated environment a simulated consciousness doesn't require any
real physics - just simulated physics.  But that's almost trivial. I
say almost because it may still provide some explanation of
consciousness within the simulation.

I think you'll find that consciousness isn't computable /if you assume 
all the consequences of comp/. But once you've assumed all that, you've 
already had to throw out materialism, including brains, so the question 
is meaningless.


That seems odd to me. The starting point was that the brain was Turing 
emulable (at some substitution level). Which seems to suggest that 
consciousness (usually associated with brain function) is Turing 
emulable. If you find at the end of your chain of reasoning that 
consciousness isn't computable (not Turing emulable?), it seems that you 
might have hit a contradiction.


Bruce



Re: Step 3 - one step beyond?

2015-04-23 Thread Bruce Kellett

Stathis Papaioannou wrote:

On 23 April 2015 at 14:32, Bruce Kellett bhkell...@optusnet.com.au wrote:

meekerdb wrote:

On 4/22/2015 9:22 PM, Bruce Kellett wrote:

Stathis Papaioannou wrote:

On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

But not without destroying the brain and producing a gap in
consciousness
(assuming you could produce a working replica).  I don't see that a
gap is
particularly significant; a concussion also causes a gap.


If comp is correct, gaps make no difference. (That would also be Frank
Tipler's argument for immortality, in the absence of cosmic
acceleration.)


Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


Gaps in consciousness, perhaps. But are there gaps in the ebb and flow of
brain chemicals, hormones, cell deaths and divisions, ...? Or gaps in the
flow of the unconscious?


I'm pretty sure there are gaps in all biological processes that correspond
to any kind of thought/perception/awareness in the case of people who are
cooled down for heart surgery.


I doubt that. Is the point susceptible of proof either way? Not all brain
processes stop under anaesthesia.


When embryos are frozen all metabolic processes stop. On thawing, the
embryo is usually completely normal. If this could be done with a
brain would it make any difference in the philosophical discussion?


That becomes a hypothetical discussion. Let's do it first and discuss 
the implications later. I remain sceptical about the possibility. An 
embryo is not an adult brain. Injecting antifreeze to inhibit cell 
rupturing might have adverse consequences in the brain.


Bruce



Re: Step 3 - one step beyond?

2015-04-23 Thread Stathis Papaioannou
On 23 April 2015 at 16:39, meekerdb meeke...@verizon.net wrote:
 On 4/22/2015 10:57 PM, Stathis Papaioannou wrote:

 On 23 April 2015 at 14:30, LizR lizj...@gmail.com wrote:

 On 23 April 2015 at 16:14, Stathis Papaioannou stath...@gmail.com
 wrote:

 On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

 On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

 But not without destroying the brain and producing a gap in
 consciousness
 (assuming you could produce a working replica).  I don't see that a
 gap
 is
 particularly significant; a concussion also causes a gap.


 If comp is correct, gaps make no difference. (That would also be Frank
 Tipler's argument for immortality, in the absence of cosmic
 acceleration.)

 Even if comp is incorrect gaps make no difference, since they occur in
 the course of normal life.


 But they do have to be explained differently (For example by physical
 continuity). We're discussing whether scanning a brain and making a
 (hypothetically exact enough) duplicate later would affect the
 consciousness of the person involved. Comp says not, obviously in this
 case
 for other reasons than physical continuity.

 As I understand it, comp requires simulation of the brain on a digital
 computer. It could be that there are processes in the brain that are
 not Turing emulable, and therefore it would be impossible to make an
 artificial brain using a computer. However, it might still be possible
 to make a copy through some other means, such as making an exact
 biological copy using different matter.

 But for Bruno's argument to go thru the copy must be digital, so that its
 function appears in the UD list.

Yes, that's right; but it does not necessarily mean that an artificial
brain preserving your consciousness is impossible if comp is false.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-23 Thread Dennis Ochei
Short of bringing the brain down to absolute zero, I'm not sure that
stopping all brain processes is physically meaningful. We could talk about
stopping all action potentials. I think you might see short term memory
loss with this but you can probably reboot.

On Thursday, April 23, 2015, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Stathis Papaioannou wrote:



 On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:
   Stathis Papaioannou wrote:
  
   On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au
   wrote:
  
   I doubt that. Is the point susceptible of proof either way? Not all
   brain
   processes stop under anaesthesia.
  
  
   When embryos are frozen all metabolic processes stop. On thawing,
 the
   embryo is usually completely normal. If this could be done with a
   brain would it make any difference in the philosophical discussion?
  
  
   That becomes a hypothetical discussion. Let's do it first and
 discuss the
   implications later. I remain sceptical about the possibility. An
 embryo
   is
   not an adult brain. Injecting antifreeze to inhibit cell rupturing
 might
   have adverse consequences in the brain.
  
  
   In anaesthesia (and even in sleep) metabolic processes involved in
   consciousness are suspended without damage to the brain. But this
   whole list is hypothetical discussion! Mere technical difficulty does
   not affect the philosophical questions.
  
  
   I think it might -- if the technical issues are such that the process
 is
   impossible in principle (for physical reasons).

 Then it wouldn't be a mere technical difficulty. You have to show that
 suspending biological processes then restarting them breaks some physical
 law, and I don't think that it does.


 The argument would be that physical laws stop you restarting the suspended
 processes -- the suspension process causes irreversible damage, for
 instance. Irreversible processes are quite plentiful under known physical
 laws.

 Bruce




-- 
Sent from Gmail Mobile



Step 3 - one step beyond?

2015-04-23 Thread Stathis Papaioannou
On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:
 Stathis Papaioannou wrote:

 On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 I doubt that. Is the point susceptible of proof either way? Not all
 brain
 processes stop under anaesthesia.


 When embryos are frozen all metabolic processes stop. On thawing, the
 embryo is usually completely normal. If this could be done with a
 brain would it make any difference in the philosophical discussion?


 That becomes a hypothetical discussion. Let's do it first and discuss
the
 implications later. I remain sceptical about the possibility. An embryo
 is
 not an adult brain. Injecting antifreeze to inhibit cell rupturing might
 have adverse consequences in the brain.


 In anaesthesia (and even in sleep) metabolic processes involved in
 consciousness are suspended without damage to the brain. But this
 whole list is hypothetical discussion! Mere technical difficulty does
 not affect the philosophical questions.


 I think it might -- if the technical issues are such that the process is
 impossible in principle (for physical reasons).

Then it wouldn't be a mere technical difficulty. You have to show that
suspending biological processes then restarting them breaks some physical
law, and I don't think that it does.


--
Stathis Papaioannou


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-23 Thread meekerdb

On 4/22/2015 11:51 PM, Stathis Papaioannou wrote:



On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:
 Stathis Papaioannou wrote:

 On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au 
 wrote:

 I doubt that. Is the point susceptible of proof either way? Not all
 brain
 processes stop under anaesthesia.


 When embryos are frozen all metabolic processes stop. On thawing, the
 embryo is usually completely normal. If this could be done with a
 brain would it make any difference in the philosophical discussion?


 That becomes a hypothetical discussion. Let's do it first and discuss the
 implications later. I remain sceptical about the possibility. An embryo
 is
 not an adult brain. Injecting antifreeze to inhibit cell rupturing might
 have adverse consequences in the brain.


 In anaesthesia (and even in sleep) metabolic processes involved in
 consciousness are suspended without damage to the brain. But this
 whole list is hypothetical discussion! Mere technical difficulty does
 not affect the philosophical questions.


 I think it might -- if the technical issues are such that the process is
 impossible in principle (for physical reasons).

Then it wouldn't be a mere technical difficulty. You have to show that suspending 
biological processes then restarting them breaks some physical law, and I don't think 
that it does.


I agree.  But for Bruno's argument I don't think it's even necessary to copy humans.  If 
you just suppose that a conscious, digital AI is possible and that its operation is 
essentially classical, then duplicating its consciousness is not problematic.


Brent



Re: Step 3 - one step beyond?

2015-04-23 Thread Stathis Papaioannou
On 23 April 2015 at 14:32, Bruce Kellett bhkell...@optusnet.com.au wrote:
 meekerdb wrote:

 On 4/22/2015 9:22 PM, Bruce Kellett wrote:

 Stathis Papaioannou wrote:

 On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

 On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

 But not without destroying the brain and producing a gap in
 consciousness
 (assuming you could produce a working replica).  I don't see that a
 gap is
 particularly significant; a concussion also causes a gap.


 If comp is correct, gaps make no difference. (That would also be Frank
 Tipler's argument for immortality, in the absence of cosmic
 acceleration.)


 Even if comp is incorrect gaps make no difference, since they occur in
 the course of normal life.


 Gaps in consciousness, perhaps. But are there gaps in the ebb and flow of
 brain chemicals, hormones, cell deaths and divisions, ...? Or gaps in the
 flow of the unconscious?


 I'm pretty sure there are gaps in all biological processes that correspond
 to any kind of thought/perception/awareness in the case of people who are
 cooled down for heart surgery.


 I doubt that. Is the point susceptible of proof either way? Not all brain
 processes stop under anaesthesia.

When embryos are frozen all metabolic processes stop. On thawing, the
embryo is usually completely normal. If this could be done with a
brain would it make any difference in the philosophical discussion?


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-23 Thread Bruce Kellett

Stathis Papaioannou wrote:

On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au wrote:


I doubt that. Is the point susceptible of proof either way? Not all brain
processes stop under anaesthesia.


When embryos are frozen all metabolic processes stop. On thawing, the
embryo is usually completely normal. If this could be done with a
brain would it make any difference in the philosophical discussion?


That becomes a hypothetical discussion. Let's do it first and discuss the
implications later. I remain sceptical about the possibility. An embryo is
not an adult brain. Injecting antifreeze to inhibit cell rupturing might
have adverse consequences in the brain.


In anaesthesia (and even in sleep) metabolic processes involved in
consciousness are suspended without damage to the brain. But this
whole list is hypothetical discussion! Mere technical difficulty does
not affect the philosophical questions.


I think it might -- if the technical issues are such that the process is 
impossible in principle (for physical reasons).


Bruce



Re: Step 3 - one step beyond?

2015-04-23 Thread Dennis Ochei
I mean you're not asking if the suspension maintained your personality or
your memories or what your favorite food is. At this point we are assuming
all these things are preserved. Yours is not a question of technical
difficulty. What you are instead asking is: will the conscious entity
before and after still be me? The distinction is not physically meaningful.
Not to invoke Newton's flaming laser sword, but it's clear that what you
are asking is an empty question.

If we are going to claim that this suspension annihilates identity, then
who's to say that drinking water doesn't do the same thing? If identity is
some epiphenomenon, then who's to say we have it in the first place?

On Wednesday, April 22, 2015, Stathis Papaioannou stath...@gmail.com
wrote:



 On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au
 wrote:
  Stathis Papaioannou wrote:
 
  On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au
  wrote:
 
  I doubt that. Is the point susceptible of proof either way? Not all
  brain
  processes stop under anaesthesia.
 
 
  When embryos are frozen all metabolic processes stop. On thawing, the
  embryo is usually completely normal. If this could be done with a
  brain would it make any difference in the philosophical discussion?
 
 
  That becomes a hypothetical discussion. Let's do it first and discuss
 the
  implications later. I remain sceptical about the possibility. An embryo
  is
  not an adult brain. Injecting antifreeze to inhibit cell rupturing
 might
  have adverse consequences in the brain.
 
 
  In anaesthesia (and even in sleep) metabolic processes involved in
  consciousness are suspended without damage to the brain. But this
  whole list is hypothetical discussion! Mere technical difficulty does
  not affect the philosophical questions.
 
 
  I think it might -- if the technical issues are such that the process is
  impossible in principle (for physical reasons).

 Then it wouldn't be a mere technical difficulty. You have to show that
 suspending biological processes then restarting them breaks some physical
 law, and I don't think that it does.


 --
 Stathis Papaioannou


 --
 Stathis Papaioannou




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-23 Thread Stathis Papaioannou
On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au wrote:

 I doubt that. Is the point susceptible of proof either way? Not all brain
 processes stop under anaesthesia.


 When embryos are frozen all metabolic processes stop. On thawing, the
 embryo is usually completely normal. If this could be done with a
 brain would it make any difference in the philosophical discussion?


 That becomes a hypothetical discussion. Let's do it first and discuss the
 implications later. I remain sceptical about the possibility. An embryo is
 not an adult brain. Injecting antifreeze to inhibit cell rupturing might
 have adverse consequences in the brain.

In anaesthesia (and even in sleep) metabolic processes involved in
consciousness are suspended without damage to the brain. But this
whole list is hypothetical discussion! Mere technical difficulty does
not affect the philosophical questions.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-23 Thread meekerdb

On 4/22/2015 10:57 PM, Stathis Papaioannou wrote:

On 23 April 2015 at 14:30, LizR lizj...@gmail.com wrote:

On 23 April 2015 at 16:14, Stathis Papaioannou stath...@gmail.com wrote:

On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

But not without destroying the brain and producing a gap in
consciousness
(assuming you could produce a working replica).  I don't see that a gap
is
particularly significant; a concussion also causes a gap.


If comp is correct, gaps make no difference. (That would also be Frank
Tipler's argument for immortality, in the absence of cosmic
acceleration.)

Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


But they do have to be explained differently (For example by physical
continuity). We're discussing whether scanning a brain and making a
(hypothetically exact enough) duplicate later would affect the
consciousness of the person involved. Comp says not, obviously in this case
for other reasons than physical continuity.

As I understand it, comp requires simulation of the brain on a digital
computer. It could be that there are processes in the brain that are
not Turing emulable, and therefore it would be impossible to make an
artificial brain using a computer. However, it might still be possible
to make a copy through some other means, such as making an exact
biological copy using different matter.
But for Bruno's argument to go thru the copy must be digital, so that its function 
appears in the UD list.


Brent



Re: Step 3 - one step beyond?

2015-04-23 Thread Bruce Kellett

Stathis Papaioannou wrote:



On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:

  Stathis Papaioannou wrote:
 
  On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au 

  wrote:
 
  I doubt that. Is the point susceptible of proof either way? Not all
  brain
  processes stop under anaesthesia.
 
 
  When embryos are frozen all metabolic processes stop. On thawing, the
  embryo is usually completely normal. If this could be done with a
  brain would it make any difference in the philosophical discussion?
 
 
  That becomes a hypothetical discussion. Let's do it first and 
discuss the

  implications later. I remain sceptical about the possibility. An embryo
  is
  not an adult brain. Injecting antifreeze to inhibit cell rupturing 
might

  have adverse consequences in the brain.
 
 
  In anaesthesia (and even in sleep) metabolic processes involved in
  consciousness are suspended without damage to the brain. But this
  whole list is hypothetical discussion! Mere technical difficulty does
  not affect the philosophical questions.
 
 
  I think it might -- if the technical issues are such that the process is
  impossible in principle (for physical reasons).

Then it wouldn't be a mere technical difficulty. You have to show that 
suspending biological processes then restarting them breaks some 
physical law, and I don't think that it does.


The argument would be that physical laws stop you restarting the 
suspended processes -- the suspension process causes irreversible 
damage, for instance. Irreversible processes are quite plentiful under 
known physical laws.


Bruce



Re: Step 3 - one step beyond?

2015-04-23 Thread Dennis Ochei
Yeah... we've been off topic for a while...

On Thursday, April 23, 2015, LizR lizj...@gmail.com wrote:

 The discussion was originally about step 3 in the comp argument. Obviously
 if we've moved onto something else then comp may not be relevant, however,
 if we are still talking about comp then the question of importance is
 whether a brain is Turing emulable at any level (which includes whether
 physics is Turing emulable). If it is, then either the argument goes
 through, or one of Bruno's other premises is wrong, or there is a mistake
 in his argument.



 On 23 April 2015 at 19:24, Dennis Ochei do.infinit...@gmail.com wrote:

 Short of bringing the brain down to absolute zero, I'm not sure that
 stopping all brain processes is physically meaningful. We could talk about
 stopping all action potentials. I think you might see short-term memory
 loss with this, but you can probably reboot.


 On Thursday, April 23, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Stathis Papaioannou wrote:



 On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:
   Stathis Papaioannou wrote:
  
   On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au
   wrote:
  
   I doubt that. Is the point susceptible of proof either way? Not
 all
   brain
   processes stop under anaesthesia.
  
  
   When embryos are frozen all metabolic processes stop. On thawing,
 the
   embryo is usually completely normal. If this could be done with a
   brain would it make any difference in the philosophical
 discussion?
  
  
   That becomes a hypothetical discussion. Let's do it first and
 discuss the
   implications later. I remain sceptical about the possibility. An
 embryo
   is
   not an adult brain. Injecting antifreeze to inhibit cell rupturing
 might
   have adverse consequences in the brain.
  
  
   In anaesthesia (and even in sleep) metabolic processes involved in
   consciousness are suspended without damage to the brain. But this
   whole list is hypothetical discussion! Mere technical difficulty
 does
   not affect the philosophical questions.
  
  
   I think it might -- if the technical issues are such that the
 process is
   impossible in principle (for physical reasons).

 Then it wouldn't be a mere technical difficulty. You have to show that
 suspending biological processes then restarting them breaks some physical
 law, and I don't think that it does.


 The argument would be that physical laws stop you restarting the
 suspended processes -- the suspension process causes irreversible damage,
 for instance. Irreversible processes are quite plentiful under known
 physical laws.

 Bruce




 --
 Sent from Gmail Mobile






-- 
Sent from Gmail Mobile


Re: Step 3 - one step beyond?

2015-04-23 Thread LizR
On 23 April 2015 at 21:30, Dennis Ochei do.infinit...@gmail.com wrote:

 Yeah... we've been off topic for a while...

 That doesn't worry me in itself, but it does mean that things that aren't
actually relevant to comp may appear to some to be valid arguments against
it. Personally, I'm interested in relevant arguments against comp, and
discussions of whatever other topics may come up, but not in confusing the
two.

Maybe start a new thread?



Re: Step 3 - one step beyond?

2015-04-23 Thread LizR
The discussion was originally about step 3 in the comp argument. Obviously
if we've moved onto something else then comp may not be relevant, however,
if we are still talking about comp then the question of importance is
whether a brain is Turing emulable at any level (which includes whether
physics is Turing emulable). If it is, then either the argument goes
through, or one of Bruno's other premises is wrong, or there is a mistake
in his argument.



On 23 April 2015 at 19:24, Dennis Ochei do.infinit...@gmail.com wrote:

 Short of bringing the brain down to absolute zero, I'm not sure that
 stopping all brain processes is physically meaningful. We could talk about
 stopping all action potentials. I think you might see short-term memory
 loss with this, but you can probably reboot.


 On Thursday, April 23, 2015, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 Stathis Papaioannou wrote:



 On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:
   Stathis Papaioannou wrote:
  
   On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au
   wrote:
  
   I doubt that. Is the point susceptible of proof either way? Not
 all
   brain
   processes stop under anaesthesia.
  
  
   When embryos are frozen all metabolic processes stop. On thawing,
 the
   embryo is usually completely normal. If this could be done with a
   brain would it make any difference in the philosophical discussion?
  
  
   That becomes a hypothetical discussion. Let's do it first and
 discuss the
   implications later. I remain sceptical about the possibility. An
 embryo
   is
   not an adult brain. Injecting antifreeze to inhibit cell rupturing
 might
   have adverse consequences in the brain.
  
  
   In anaesthesia (and even in sleep) metabolic processes involved in
   consciousness are suspended without damage to the brain. But this
   whole list is hypothetical discussion! Mere technical difficulty does
   not affect the philosophical questions.
  
  
   I think it might -- if the technical issues are such that the process
 is
   impossible in principle (for physical reasons).

 Then it wouldn't be a mere technical difficulty. You have to show that
 suspending biological processes then restarting them breaks some physical
 law, and I don't think that it does.


 The argument would be that physical laws stop you restarting the
 suspended processes -- the suspension process causes irreversible damage,
 for instance. Irreversible processes are quite plentiful under known
 physical laws.

 Bruce




 --
 Sent from Gmail Mobile





Re: Step 3 - one step beyond?

2015-04-23 Thread Stathis Papaioannou
On Thursday, April 23, 2015, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Stathis Papaioannou wrote:



 On 23 April 2015 at 16:36, Bruce Kellett bhkell...@optusnet.com.au wrote:
   Stathis Papaioannou wrote:
  
   On 23 April 2015 at 16:19, Bruce Kellett bhkell...@optusnet.com.au
   wrote:
  
   I doubt that. Is the point susceptible of proof either way? Not all
   brain
   processes stop under anaesthesia.
  
  
   When embryos are frozen all metabolic processes stop. On thawing,
 the
   embryo is usually completely normal. If this could be done with a
   brain would it make any difference in the philosophical discussion?
  
  
   That becomes a hypothetical discussion. Let's do it first and
 discuss the
   implications later. I remain sceptical about the possibility. An
 embryo
   is
   not an adult brain. Injecting antifreeze to inhibit cell rupturing
 might
   have adverse consequences in the brain.
  
  
   In anaesthesia (and even in sleep) metabolic processes involved in
   consciousness are suspended without damage to the brain. But this
   whole list is hypothetical discussion! Mere technical difficulty does
   not affect the philosophical questions.
  
  
   I think it might -- if the technical issues are such that the process
 is
   impossible in principle (for physical reasons).

 Then it wouldn't be a mere technical difficulty. You have to show that
 suspending biological processes then restarting them breaks some physical
 law, and I don't think that it does.


 The argument would be that physical laws stop you restarting the suspended
 processes -- the suspension process causes irreversible damage, for
 instance. Irreversible processes are quite plentiful under known physical
 laws.


Clearly it is *not* physically impossible to suspend a cell and then
restart it unharmed, since it has actually been done. But your whole
argument is beside the point. It's as if I asked what I might expect to
observe if I dropped a ball on the surface of Mars, and you answered that
you couldn't answer the question because humans might be unable to survive
the trip there.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-23 Thread Dennis Ochei
I'll roll one out

On Thursday, April 23, 2015, LizR lizj...@gmail.com wrote:

 On 23 April 2015 at 21:30, Dennis Ochei do.infinit...@gmail.com wrote:

 Yeah... we've been off topic for a while...

 That doesn't worry me in itself, but it does mean that things that aren't
 actually relevant to comp may appear to some to be valid arguments against
 it. Personally, I'm interested in relevant arguments against comp, and
 discussions of whatever other topics may come up, but not in confusing the
 two.

 Maybe start a new thread?




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-22 Thread meekerdb

On 4/22/2015 12:26 AM, Dennis Ochei wrote:
Certainly we could scan a nematode, don't you think? 302 neurons. Nematodes should say 
yes doctor. If I had a brain tumor, resection of which would involve damaging 1000 
neurons, and there was a brain prosthesis that would simulate their function, I should 
say yes doctor. Since modelling 1000 neurons at sufficient detail is possible, I leave 
it as an exercise for the reader to demonstrate that simulating a whole brain is possible.


The complete neural structure of planaria has been mapped.  But that doesn't capture the 
consciousness of the individual planaria.  You can't tell from the wiring diagram 
whether a particular planaria has learned to take the illuminated fork in the test maze.  
So you might determine the generic brain structure of homo sapiens, but you would not 
thereby capture the consciousness of some particular person.  For that, presumably you 
would need to know the relative strength of all the synapses at a particular moment.
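[A toy illustration, not part of the original post and not a biological model: two networks share the exact same wiring diagram but carry different synaptic weights, so they respond differently to the same stimulus. The edge list and weight values are made up purely for the sketch.]

```python
# Hypothetical sketch: a "connectome" is just the edge list; the learned
# state lives in the per-edge weights. Same wiring, different weights,
# different behaviour.

edges = [(0, 1), (0, 2), (1, 2)]  # the shared "wiring diagram"

def response(weights, stimulus):
    """Propagate a stimulus through the fixed wiring with the given weights."""
    activation = [0.0, 0.0, 0.0]
    activation[0] = stimulus
    for (src, dst), w in zip(edges, weights):
        activation[dst] += w * activation[src]
    return activation

naive = response([0.5, 0.5, 0.5], 1.0)     # -> [1.0, 0.5, 0.75]
trained = response([0.75, 0.25, 0.5], 1.0) # -> [1.0, 0.75, 0.625]
assert naive != trained  # topology alone does not fix the response
```

The point of the sketch: knowing `edges` (the generic structure) leaves the response undetermined until the weights (the particular individual's synaptic strengths at a particular moment) are also specified.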


Brent



Re: Step 3 - one step beyond?

2015-04-22 Thread Dennis Ochei
Yes, I know it hasn't been done, but I think most people would agree that C.
elegans could be scanned or that a small neuroprosthesis is possible, which
is enough of a foothold to say uploading thought experiments are relevant
to human experience.

Of course none of this is deeply relevant to comp.

On Wednesday, April 22, 2015, meekerdb meeke...@verizon.net wrote:

  On 4/22/2015 12:26 AM, Dennis Ochei wrote:

 Certainly we could scan a nematode, don't you think? 302 neurons.
 Nematodes should say yes doctor. If I had a brain tumor, resection of
 which would involve damaging 1000 neurons, and there was a brain
 prosthesis that would simulate their function, I should say yes doctor.
 Since modelling 1000 neurons at sufficient detail is possible, I leave it
 as an exercise for the reader to demonstrate that simulating a whole brain
 is possible.


 The complete neural structure of planaria has been mapped.  But that
 doesn't capture the consciousness of the individual planaria.  You can't
 tell from the wiring diagram whether a particular planaria has learned to
 take the illuminated fork in the test maze.  So you might determine the
 generic brain structure of homo sapiens, but you would not thereby capture
 the consciousness of some particular person.  For that, presumably you
 would need to know the relative strength of all the synapses at a
 particular moment.

 Brent




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-22 Thread meekerdb

On 4/22/2015 3:13 PM, Stathis Papaioannou wrote:



On Thursday, April 23, 2015, meekerdb meeke...@verizon.net wrote:


On 4/22/2015 12:26 AM, Dennis Ochei wrote:

Certainly we could scan a nematode, don't you think? 302 neurons. Nematodes
should say yes doctor. If I had a brain tumor, resection of which would
involve damaging 1000 neurons, and there was a brain prosthesis that would
simulate their function, I should say yes doctor. Since modelling 1000
neurons at sufficient detail is possible, I leave it as an exercise for the
reader to demonstrate that simulating a whole brain is possible.


The complete neural structure of planaria has been mapped.  But that doesn't
capture the consciousness of the individual planaria.  You can't tell from
the wiring diagram whether a particular planaria has learned to take the
illuminated fork in the test maze.  So you might determine the generic brain
structure of homo sapiens, but you would not thereby capture the
consciousness of some particular person.  For that, presumably you would
need to know the relative strength of all the synapses at a particular
moment.


Yes, and you could possibly do that using a technique resolving detail down to the size 
of macromolecules.


But not without destroying the brain and producing a gap in consciousness (assuming you 
could produce a working replica).  I don't see that a gap is particularly significant; a 
concussion also causes a gap.


Brent



Re: Step 3 - one step beyond?

2015-04-22 Thread Bruce Kellett

Stathis Papaioannou wrote:

On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

But not without destroying the brain and producing a gap in consciousness
(assuming you could produce a working replica).  I don't see that a gap is
particularly significant; a concussion also causes a gap.


If comp is correct, gaps make no difference. (That would also be Frank
Tipler's argument for immortality, in the absence of cosmic acceleration.)


Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


Gaps in consciousness, perhaps. But are there gaps in the ebb and flow 
of brain chemicals, hormones, cell deaths and divisions, ...? Or gaps in 
the flow of the unconscious?


Bruce



Re: Step 3 - one step beyond?

2015-04-22 Thread LizR
On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

  But not without destroying the brain and producing a gap in
 consciousness (assuming you could produce a working replica).  I don't see
 that a gap is particularly significant; a concussion also causes a gap.


If comp is correct, gaps make no difference. (That would also be Frank
Tipler's argument for immortality, in the absence of cosmic acceleration.)



Re: Step 3 - one step beyond?

2015-04-22 Thread Stathis Papaioannou
On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:
 On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

 But not without destroying the brain and producing a gap in consciousness
 (assuming you could produce a working replica).  I don't see that a gap is
 particularly significant; a concussion also causes a gap.


 If comp is correct, gaps make no difference. (That would also be Frank
 Tipler's argument for immortality, in the absence of cosmic acceleration.)

Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


-- 
Stathis Papaioannou



Step 3 - one step beyond?

2015-04-22 Thread Stathis Papaioannou
On Thursday, April 23, 2015, meekerdb meeke...@verizon.net wrote:

  On 4/22/2015 12:26 AM, Dennis Ochei wrote:

 Certainly we could scan a nematode, don't you think? 302 neurons.
 Nematodes should say yes doctor. If I had a brain tumor, resection of
 which would involve damaging 1000 neurons, and there was a brain
 prosthesis that would simulate their function, I should say yes doctor.
 Since modelling 1000 neurons at sufficient detail is possible, I leave it
 as an exercise for the reader to demonstrate that simulating a whole brain
 is possible.


 The complete neural structure of planaria has been mapped.  But that
 doesn't capture the consciousness of the individual planaria.  You can't
 tell from the wiring diagram whether a particular planaria has learned to
 take the illuminated fork in the test maze.  So you might determine the
 generic brain structure of homo sapiens, but you would not thereby capture
 the consciousness of some particular person.  For that, presumably you
 would need to know the relative strength of all the synapses at a
 particular moment.


Yes, and you could possibly do that using a technique resolving detail down
to the size of macromolecules.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-22 Thread LizR
On 23 April 2015 at 16:14, Stathis Papaioannou stath...@gmail.com wrote:

 On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:
  On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:
 
  But not without destroying the brain and producing a gap in
 consciousness
  (assuming you could produce a working replica).  I don't see that a gap
 is
  particularly significant; a concussion also causes a gap.
 
 
  If comp is correct, gaps make no difference. (That would also be Frank
  Tipler's argument for immortality, in the absence of cosmic
 acceleration.)

 Even if comp is incorrect gaps make no difference, since they occur in
 the course of normal life.


But they do have to be explained differently (for example, by physical
continuity). We're discussing whether scanning a brain and making a
(hypothetically exact enough) duplicate later would affect the
consciousness of the person involved. Comp says not, obviously in this case
for other reasons than physical continuity.



Re: Step 3 - one step beyond?

2015-04-22 Thread Bruce Kellett

meekerdb wrote:

On 4/22/2015 9:22 PM, Bruce Kellett wrote:

Stathis Papaioannou wrote:

On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:
But not without destroying the brain and producing a gap in 
consciousness
(assuming you could produce a working replica).  I don't see that a 
gap is

particularly significant; a concussion also causes a gap.


If comp is correct, gaps make no difference. (That would also be Frank
Tipler's argument for immortality, in the absence of cosmic 
acceleration.)


Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


Gaps in consciousness, perhaps. But are there gaps in the ebb and flow 
of brain chemicals, hormones, cell deaths and divisions, ...? Or gaps 
in the flow of the unconscious?


I'm pretty sure there are gaps in all biological processes that 
correspond to any kind of thought/perception/awareness in the case of 
people who are cooled down for heart surgery.


I doubt that. Is the point susceptible of proof either way? Not all 
brain processes stop under anaesthesia.


Bruce



Re: Step 3 - one step beyond?

2015-04-22 Thread Stathis Papaioannou
On 23 April 2015 at 14:30, LizR lizj...@gmail.com wrote:
 On 23 April 2015 at 16:14, Stathis Papaioannou stath...@gmail.com wrote:

 On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:
  On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:
 
  But not without destroying the brain and producing a gap in
  consciousness
  (assuming you could produce a working replica).  I don't see that a gap
  is
  particularly significant; a concussion also causes a gap.
 
 
  If comp is correct, gaps make no difference. (That would also be Frank
  Tipler's argument for immortality, in the absence of cosmic
  acceleration.)

 Even if comp is incorrect gaps make no difference, since they occur in
 the course of normal life.


 But they do have to be explained differently (for example, by physical
 continuity). We're discussing whether scanning a brain and making a
 (hypothetically exact enough) duplicate later would affect the
 consciousness of the person involved. Comp says not, obviously in this case
 for other reasons than physical continuity.

As I understand it, comp requires simulation of the brain on a digital
computer. It could be that there are processes in the brain that are
not Turing emulable, and therefore it would be impossible to make an
artificial brain using a computer. However, it might still be possible
to make a copy through some other means, such as making an exact
biological copy using different matter.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-22 Thread meekerdb

On 4/22/2015 9:22 PM, Bruce Kellett wrote:

Stathis Papaioannou wrote:

On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:

On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

But not without destroying the brain and producing a gap in consciousness
(assuming you could produce a working replica).  I don't see that a gap is
particularly significant; a concussion also causes a gap.


If comp is correct, gaps make no difference. (That would also be Frank
Tipler's argument for immortality, in the absence of cosmic acceleration.)


Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


Gaps in consciousness, perhaps. But are there gaps in the ebb and flow of brain 
chemicals, hormones, cell deaths and divisions, ...? Or gaps in the flow of the unconscious?


I'm pretty sure there are gaps in all biological processes that correspond to any kind of 
thought/perception/awareness in the case of people who are cooled down for heart surgery.


Brent



Re: Step 3 - one step beyond?

2015-04-22 Thread meekerdb

On 4/22/2015 9:30 PM, LizR wrote:
On 23 April 2015 at 16:14, Stathis Papaioannou stath...@gmail.com wrote:


On 23 April 2015 at 11:37, LizR lizj...@gmail.com wrote:
 On 23 April 2015 at 11:36, meekerdb meeke...@verizon.net wrote:

 But not without destroying the brain and producing a gap in consciousness
 (assuming you could produce a working replica).  I don't see that a gap is
 particularly significant; a concussion also causes a gap.


 If comp is correct, gaps make no difference. (That would also be Frank
 Tipler's argument for immortality, in the absence of cosmic acceleration.)

Even if comp is incorrect gaps make no difference, since they occur in
the course of normal life.


But they do have to be explained differently (for example, by physical continuity). We're 
discussing whether scanning a brain and making a (hypothetically exact enough) 
duplicate later would affect the consciousness of the person involved. Comp says it would 
not, obviously in this case for reasons other than physical continuity.


Of course, as Stathis says, how would you know if your consciousness changed? You could 
ask friends and look at documents and check your memories, but it's hard to say what it 
would mean to notice your consciousness changed. Even if you thought that, maybe it's not 
your consciousness that's different; rather, it's your memory of how your consciousness used 
to be. Motorcycle racers have a saying: The older I get, the faster I was.


Brent



Re: Step 3 - one step beyond?

2015-04-22 Thread Stathis Papaioannou
On Wednesday, April 22, 2015, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Bruno Marchal wrote:

 On 21 Apr 2015, at 00:43, Bruce Kellett wrote:

  What you are talking about has more to do with psychology and/or physics
 than mathematics,


 I call that theology, and this can be justified using Plato's notion of
 theology, as the lexicon Plotinus/arithmetic illustrates. The name of the
 field is another topic.

 Also, you are unclear: you argue that comp is false, but you reason as if it
 makes sense, and you say that the reasoning is invalid, without saying where
 the error is. It is hard to figure out what you mean.


 I think we are coming from entirely different starting points. From my
 (physicist's) point of view, what you are doing is proposing a model and
 reasoning about what happens in that model. Because it is your model,
 you are free to choose the starting point and the ancillary assumptions
 as you wish. All that matters for the model is that the logic of the
 development of the model is correct.

 What is happening in our exchanges is that I am examining what goes into
 your model and seeing whether it makes sense in the light of other
 knowledge. The actual logic of the development of your model is then of
 secondary importance. If your assumptions are unrealistic or too
 restrictive, then no matter how good your logic, the end result will not
 be of any great value. These wider issues cannot be simply dismissed as
 off topic.

 In summary, my objections start with step 0, the yes doctor argument.
 I do not think that it is physically possible to examine a living brain
 in sufficient detail to reproduce its conscious life in a Turing machine
 without actually destroying the brain before the process is complete. I
 would say No to the doctor. So even though I believe that AI is
 possible, in other words, that a computer-based intelligence that can
 function in all relevant respects like a normal human being is in
 principle possible, I do not believe that I can be replaced by such an
 AI. The necessary starting data are unobtainable in principle.

 Consequently, I think the reasoning in the first steps of your model
 could only apply to mature AIs, not to humans. The internal logic of the
 model is then not an issue -- but the relevance to human experience is.


I don't see why you think it is impossible to scan a brain sufficiently to
reproduce it. For example, you could fix the brain, slice it up with a
microtome, and with microscopy establish all the synaptic connections. That
is the crudest proposal for so-called mind uploading, but it may be
necessary to go further, to the molecular level, and determine the types and
numbers of membrane proteins in each neuron. The next step would be at the
level of small molecules and atoms, such as neurotransmitters and ions,
but these could probably be deduced from information about the type of
neuron and its macromolecules. It seems unlikely that you would need to
determine things like ionic concentrations at a given moment, since ionic
gradients collapse all the time and the person survives. In any case, with
the yes doctor test you would not be the first volunteer. It is assumed
that it will be well established, through a series of engineering
refinements, that with the brain replacement the copies behave
normally and claim that they feel normal. The leap of faith (which, as I've
said previously, I don't think is such a leap) is that not only will the
copies say they feel the same, they will in fact feel the same.
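For a sense of scale, here is a back-of-envelope sketch of the raw data volume such a synapse-level scan might produce. The neuron and synapse counts are rough literature figures, and the 16 bytes per synapse (two IDs plus a weight/type field) is a purely hypothetical encoding:

```python
# Back-of-envelope storage estimate for a synapse-level brain scan.
# All numbers are rough assumptions, not measurements.
NEURONS = 8.6e10           # approximate neuron count in a human brain
SYNAPSES_PER_NEURON = 7e3  # approximate mean synapses per neuron
BYTES_PER_SYNAPSE = 16     # hypothetical record: two IDs + weight/type

total_synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = total_synapses * BYTES_PER_SYNAPSE
print(f"synapses: {total_synapses:.2e}")         # → 6.02e+14
print(f"storage:  {total_bytes / 1e15:.1f} PB")  # → 9.6 PB
```

Even under these crude assumptions the connectome alone runs to petabytes, which fits the point above: the obstacle looks like engineering refinement rather than principle.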


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-22 Thread Bruce Kellett

Bruno Marchal wrote:

On 21 Apr 2015, at 00:43, Bruce Kellett wrote:

What you are talking about has more to do with psychology and/or 
physics than mathematics,


I call that theology, and this can be justified using Plato's notion of 
theology, as the lexicon Plotinus/arithmetic illustrates. The name of 
the field is another topic.


Also, you are unclear: you argue that comp is false, but you reason as if it
makes sense, and you say that the reasoning is invalid, without saying where
the error is. It is hard to figure out what you mean.


I think we are coming from entirely different starting points. From my
(physicist's) point of view, what you are doing is proposing a model and
reasoning about what happens in that model. Because it is your model,
you are free to choose the starting point and the ancillary assumptions
as you wish. All that matters for the model is that the logic of the
development of the model is correct.

What is happening in our exchanges is that I am examining what goes into
your model and seeing whether it makes sense in the light of other
knowledge. The actual logic of the development of your model is then of
secondary importance. If your assumptions are unrealistic or too
restrictive, then no matter how good your logic, the end result will not
be of any great value. These wider issues cannot be simply dismissed as
off topic.

In summary, my objections start with step 0, the yes doctor argument.
I do not think that it is physically possible to examine a living brain
in sufficient detail to reproduce its conscious life in a Turing machine
without actually destroying the brain before the process is complete. I
would say No to the doctor. So even though I believe that AI is
possible, in other words, that a computer-based intelligence that can
function in all relevant respects like a normal human being is in
principle possible, I do not believe that I can be replaced by such an
AI. The necessary starting data are unobtainable in principle.

Consequently, I think the reasoning in the first steps of your model
could only apply to mature AIs, not to humans. The internal logic of the
model is then not an issue -- but the relevance to human experience is.

Bruce



Re: Step 3 - one step beyond?

2015-04-22 Thread Dennis Ochei
Certainly we could scan a nematode, don't you think? 302 neurons. Nematodes
should say yes doctor. If I had a brain tumor, resection of which would
involve damaging 1000 neurons, and there was a brain prosthesis that
would simulate their function, I should say yes doctor. Since modelling
1000 neurons at sufficient detail is possible, I leave it as an exercise
for the reader to demonstrate that simulating a whole brain is possible.
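A minimal sketch of what "modelling at sufficient detail" could look like, using a leaky integrate-and-fire network sized to the nematode's 302 neurons. Every parameter here (time constant, threshold, weights, input drive) is illustrative, not a biological measurement:

```python
import random

# Toy leaky integrate-and-fire (LIF) network with N = 302 neurons,
# matching the C. elegans neuron count. Parameters are illustrative.
random.seed(0)

N = 302
DT = 1.0        # timestep (ms)
TAU = 20.0      # membrane time constant (ms)
V_TH = 1.0      # spike threshold
V_RESET = 0.0   # post-spike reset potential

# Dense random excitatory connectivity with small hypothetical weights.
weights = [[random.uniform(0.0, 0.02) for _ in range(N)] for _ in range(N)]

v = [0.0] * N
spike_count = 0
for step in range(100):
    spiked = [i for i, vi in enumerate(v) if vi >= V_TH]
    spike_count += len(spiked)
    for i in spiked:
        v[i] = V_RESET
    for i in range(N):
        drive = random.uniform(0.0, 3.0)              # noisy external input
        syn = sum(weights[j][i] for j in spiked)      # input from last step's spikes
        v[i] += (DT / TAU) * (drive + syn - v[i])     # leaky integration

print(f"spikes over 100 steps across {N} neurons: {spike_count}")
```

A serious C. elegans emulation (cf. efforts like the OpenWorm project) models far more than this, but the point stands: the raw computation for a 302- or 1000-neuron simulation is trivially available on commodity hardware.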

On Wednesday, April 22, 2015, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Bruno Marchal wrote:

 On 21 Apr 2015, at 00:43, Bruce Kellett wrote:

  What you are talking about has more to do with psychology and/or physics
 than mathematics,


 I call that theology, and this can be justified using Plato's notion of
 theology, as the lexicon Plotinus/arithmetic illustrates. The name of the
 field is another topic.

 Also, you are unclear: you argue that comp is false, but you reason as if it
 makes sense, and you say that the reasoning is invalid, without saying where
 the error is. It is hard to figure out what you mean.


 I think we are coming from entirely different starting points. From my
 (physicist's) point of view, what you are doing is proposing a model and
 reasoning about what happens in that model. Because it is your model,
 you are free to choose the starting point and the ancillary assumptions
 as you wish. All that matters for the model is that the logic of the
 development of the model is correct.

 What is happening in our exchanges is that I am examining what goes into
 your model and seeing whether it makes sense in the light of other
 knowledge. The actual logic of the development of your model is then of
 secondary importance. If your assumptions are unrealistic or too
 restrictive, then no matter how good your logic, the end result will not
 be of any great value. These wider issues cannot be simply dismissed as
 off topic.

 In summary, my objections start with step 0, the yes doctor argument.
 I do not think that it is physically possible to examine a living brain
 in sufficient detail to reproduce its conscious life in a Turing machine
 without actually destroying the brain before the process is complete. I
 would say No to the doctor. So even though I believe that AI is
 possible, in other words, that a computer-based intelligence that can
 function in all relevant respects like a normal human being is in
 principle possible, I do not believe that I can be replaced by such an
 AI. The necessary starting data are unobtainable in principle.

 Consequently, I think the reasoning in the first steps of your model
 could only apply to mature AIs, not to humans. The internal logic of the
 model is then not an issue -- but the relevance to human experience is.

 Bruce




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-22 Thread Bruno Marchal


On 22 Apr 2015, at 09:05, Bruce Kellett wrote:


Bruno Marchal wrote:

On 21 Apr 2015, at 00:43, Bruce Kellett wrote:

What you are talking about has more to do with psychology and/or
physics than mathematics,

I call that theology, and this can be justified using Plato's notion
of theology, as the lexicon Plotinus/arithmetic illustrates. The name
of the field is another topic.

Also, you are unclear: you argue that comp is false, but you reason as
if it makes sense, and you say that the reasoning is invalid, without
saying where the error is. It is hard to figure out what you mean.


I think we are coming from entirely different starting points. From my
(physicist's) point of view, what you are doing is proposing a model and
reasoning about what happens in that model. Because it is your model,
you are free to choose the starting point and the ancillary assumptions
as you wish. All that matters for the model is that the logic of the
development of the model is correct.


OK. What you call a model is what logicians call a theory. Logicians use
model for a mathematical object playing basically the role of a
reality satisfying the axioms and theorems of the theory. (Let us keep
this in mind to avoid a dialogue of the deaf.)





What is happening in our exchanges is that I am examining what goes into
your model and seeing whether it makes sense in the light of other
knowledge. The actual logic of the development of your model is then of
secondary importance. If your assumptions are unrealistic or too
restrictive, then no matter how good your logic, the end result will not
be of any great value. These wider issues cannot be simply dismissed as
off topic.

In summary, my objections start with step 0, the yes doctor argument.
I do not think that it is physically possible to examine a living brain
in sufficient detail to reproduce its conscious life in a Turing machine
without actually destroying the brain before the process is complete. I
would say No to the doctor. So even though I believe that AI is
possible, in other words, that a computer-based intelligence that can
function in all relevant respects like a normal human being is in
principle possible, I do not believe that I can be replaced by such an
AI. The necessary starting data are unobtainable in principle.

Consequently, I think the reasoning in the first steps of your model
could only apply to mature AIs, not to humans. The internal logic of the
model is then not an issue -- but the relevance to human experience is.


Hmm, step seven shows that the practicality of the duplication is not
relevant. I come back to this below.


Another point: given that you seem to accept the weaker thesis of
strong AI (a machine can be conscious), the UDA works for such machines:
they can understand it and reach the same conclusion. So such a machine
would prove correctly that either physics is a branch of arithmetic or
it is not a machine. But we know that such an AI is a machine (in the
comp sense), so that would be an even better proof than the UDA, and
indeed it is a good sketch of the mathematical translation of the UDA
into arithmetic.


But we don't need to go into the UDA. You are right that the first steps
of the UDA might not be realistic (although I doubt that too: see
Ochei's post), but normally you should understand that at step seven
that absence of realism is no longer a problem, as the UD generates all
computations, even a simulation of the whole Milky Way at the level of
strings and branes.


The only thing which might perhaps prevent the reasoning from going
through is if matter plays some non-Turing-emulable role in the presence
of consciousness. But then we are no longer postulating computationalism.


A rather long time ago, I thought that the UDA and the like could be
used to show that computationalism leads to a contradiction. But I got
only weirdness, and to test comp we need to compare the comp weirdness
with the empirical weirdness. And that is the point. I am not a defender
of comp, or of any idea. I am a logician saying that IF we have such a
belief, and if we are rational enough, then we have to accept this or
that consequence.


And, to be sure, I do find comp elegant, as it leads to a simple  
theory of arithmetic: elementary arithmetic.


I will try (cf. my promise to the Platonist Guitar Boy (PGC)) to make a
summary of the math part (AUDA, the machine interview), which you might
better appreciate, as it shows how complex the extraction of physics is,
and how incompleteness already leads rather quickly to MWI and to some
quantum logic that we can compare to the empirical quantum logic. In
fact we can already implement some quantum gates in the comp-extracted
physics, but maybe some others could not be, and once realized in nature
that might lead to a refutation of comp or of the classical theory of
knowledge (or we are in a perverse simulation, to be complete). The main
thing is that the approach explains 

Re: Step 3 - one step beyond?

2015-04-22 Thread Bruno Marchal


On 22 Apr 2015, at 09:26, Dennis Ochei wrote:

Certainly we could scan a nematode, don't you think? 302 neurons.
Nematodes should say yes doctor. If I had a brain tumor, resection
of which would involve damaging 1000 neurons, and there was a brain
prosthesis that would simulate their function, I should say yes
doctor. Since modelling 1000 neurons at sufficient detail is
possible, I leave it as an exercise for the reader to demonstrate
that simulating a whole brain is possible.


I don't think that this is relevant to grasping the consequences of
computationalism, but I agree with you: emulating the brain might be
technologically possible. But it is also quite complex, and the
pioneers of the digital, but physical, brain will probably feel quite
stoned. In particular, we get more and more evidence that glial cells
play important regulating roles in the brain, and even that they
transmit information. They have no axons, but they communicate among
themselves through waves of chemical reactions, passing from membrane
to membrane, and seem to be able to activate or inhibit the action of
some neurons. So I would say yes to a doctor who emulates the neurons
and the glial cells at the level of the concentrations of the
metabolites in the cells. That is not for tomorrow, but perhaps for
after tomorrow.


Then with comp, we survive anyway in the arithmetical reality, but here
the problem is that there is still an inflation of possibilities, going
from backtracking in our life to becoming a sort of god. Only progress
in mathematical theology can give more clues. Plato's proof of the
immortality of the soul remains intact in the arithmetical theology, but
in that case the soul can become amnesic, and the survival can have the
look of a strong salvia divinorum experience. (You can see reports of
such experiences on Erowid.) The little ego might not survive, in that
case, but before vanishing, you can realize internally that you are not
the little ego. That form of personal identity might be an illusion,
which can be consciously stopped. Note that some dreams can lead to
similar experiences. It imposes on you a form of selfish altruism, as
you realize that the sufferings of others are yours, in some concrete
sense.


Bruno





On Wednesday, April 22, 2015, Bruce Kellett  
bhkell...@optusnet.com.au wrote:

Bruno Marchal wrote:
On 21 Apr 2015, at 00:43, Bruce Kellett wrote:

What you are talking about has more to do with psychology and/or  
physics than mathematics,


I call that theology, and this can be justified using Plato's notion  
of theology, as the lexicon Plotinus/arithmetic illustrates. The  
name of the field is another topic.


Also, you are unclear: you argue that comp is false, but you reason as
if it makes sense, and you say that the reasoning is invalid, without
saying where the error is. It is hard to figure out what you mean.


I think we are coming from entirely different starting points. From my
(physicist's) point of view, what you are doing is proposing a model and
reasoning about what happens in that model. Because it is your model,
you are free to choose the starting point and the ancillary assumptions
as you wish. All that matters for the model is that the logic of the
development of the model is correct.

What is happening in our exchanges is that I am examining what goes into
your model and seeing whether it makes sense in the light of other
knowledge. The actual logic of the development of your model is then of
secondary importance. If your assumptions are unrealistic or too
restrictive, then no matter how good your logic, the end result will not
be of any great value. These wider issues cannot be simply dismissed as
off topic.

In summary, my objections start with step 0, the yes doctor argument.
I do not think that it is physically possible to examine a living brain
in sufficient detail to reproduce its conscious life in a Turing machine
without actually destroying the brain before the process is complete. I
would say No to the doctor. So even though I believe that AI is
possible, in other words, that a computer-based intelligence that can
function in all relevant respects like a normal human being is in
principle possible, I do not believe that I can be replaced by such an
AI. The necessary starting data are unobtainable in principle.

Consequently, I think the reasoning in the first steps of your model
could only apply to mature AIs, not to humans. The internal logic of the
model is then not an issue -- but the relevance to human experience is.


Bruce


Re: Step 3 - one step beyond?

2015-04-21 Thread Stathis Papaioannou
On 21 April 2015 at 08:43, Bruce Kellett bhkell...@optusnet.com.au wrote:
 Bruno Marchal wrote:

 On 20 Apr 2015, at 09:40, Bruce Kellett wrote:

 Dennis Ochei wrote:

 One must revise the everyday concept of personal identity because it
 isn't even coherent. It's like you're getting mad at him for explaining
 combustion without reference to phlogiston. He can't use the everyday 
 notion
 because it is a convenient fiction.


 I don't think phlogiston is an everyday concept. The closest continuer
 concept of personal identity is far from an unsophisticated everyday notion,
 or a convenient fiction. If you want to revise it to some alternative
 definition of personal identity that is better suited to your purposes, then
 you have to do the necessary analytical work.


 Are you saying that you believe that computationalism is false (in which
 case you can believe in some closer continuer theory), or are you saying
 that step 4 is not valid?


 I am suggesting that computationalism is effectively false, in part because
 of an inadequate account of personal identity. You substitute part or all of
 the brain at some level with a Turing machine, but do not take appropriate
 notice of the body bearing the brain. If we are not to notice the
 substitution, we must still have a body that interacts with the world in
 exactly the same way as the original. Under the teleportation scenarios,
 some new body must be created or provided. I think that in general the
 person might notice this.

 If you woke up in the morning and looked in the mirror and saw Sophia Loren
 looking back at you, or saw your next door neighbour in the mirror, you
 might doubt your own identity. Memories are not everything because memories
 can be lost, or be mistaken.

 In total virtual reality scenarios, of course, this could be managed, but
 then you have the problem of the identity of indiscernibles. Creating copies
 that are identical to this level -- identical memories, bodies,
 environments, and so on -- does not duplicate the person -- the copies,
 being identical in all respects, are one person.

 I am saying that a case could be made that all the destructive teleportation
 scenarios create new persons -- the cut actually terminates the original
 person. In step 3 you have a tie for closest continuer so there is no
 continuing person -- the original is cut. If the original is not cut (as in
 step 5), then that is the continuing person, and the duplicate is a new
 person. Time delays as in steps 2 and 4 do not make a lot of difference,
 they just enhance the need for the recognition of new persons.

Of course destructive teleportation creates new persons, but the point
is that it doesn't matter, because ordinary life creates new persons
also, though gradually rather than all at once. If you discovered that
some otherwise perfectly normal people had a condition which caused
all of the matter in their body to be replaced overnight during sleep,
rather than gradually over the course of days, and that you were one
of these people, would it bother you? Or would you doubt that it was
so on the grounds that you were pretty sure you were the same person
and not a new person?

 In sum, your argument over these early steps is not an argument in logic,
 but an argument of rhetoric. Because the tight definitions you need for
 logical argument either are not provided, or when provided, do not refer to
 anything in the real world, at best you are trying to persuade rhetorically
 -- there is no logical compulsion. What you are talking about has more to do
 with psychology and/or physics than mathematics, so definitions can never be
 completely precise -- concepts in the real world are always corrigible, so
 tightly constrained logical arguments are not available as they are in
 mathematics.

All you have to agree is that it would make no difference to you if
you were perfectly (or close enough) copied. I guess you could
disagree with this, but in that case you are deluded about being the
person you believe yourself to be.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-21 Thread Stathis Papaioannou
On 21 April 2015 at 09:25, Bruce Kellett bhkell...@optusnet.com.au wrote:
 Dennis Ochei wrote:

 Do you have a coherent, non arbitrary theory of personal identity that
 claims 1) Teletransportation creates a new person, killing the original


 It is a possible theory. See D Parfit, 'Reasons and Persons' (Oxford, 1984).

Parfit's argument is that if identity is not preserved in these
thought experiments, then identity is not the thing that matters.

 and 2) Ordinary survival does not create a new person, killing the
 original?

 Let me remind you, although you probably know this, that all your atoms
 except some in your teeth are replaced throughout the course of a year.


 When a cell in my arm dies and is replaced, I do not die. When my leg is cut
 off, I do not die. Ordinary survival does not kill the original and create a
 new person -- body replacement is a gradual, continuous process which
 preserves bodily identity.

What if the duplicating machine replaced first your head and then a
minute later the rest of your body?

 The teleportation process discussed involves actually destroying (cutting or
 killing) the original and creating a new body at some (remote) location. It
 is arguable whether this new body is sufficiently close to the original to
 constitute a closest continuer -- hence Parfit's idea that a new person is
 always created. If replacement of memories in a new body counts as
 sufficient to constitute a suitable closest continuer, that is your choice.
 But it is not a logical consequence.

There is a difference between natural and artificial replacement, but
in the end in both cases there is a new person and the matter in the
old person has disintegrated. It is not enough to show there is a
difference - you have to explain why it makes a difference to the
philosophical argument.


-- 
Stathis Papaioannou



Re: Step 3 - one step beyond?

2015-04-21 Thread Dennis Ochei
Right, this is one coherent non-arbitrary view. It's basically what Parfit
put forward in Reasons and Persons.

Kolak's is the other view. Property changes do not destroy identity ever.

Either view says teleportation is the same as ordinary survival.

On Tuesday, April 21, 2015, Stathis Papaioannou stath...@gmail.com wrote:

 On 21 April 2015 at 08:43, Bruce Kellett bhkell...@optusnet.com.au wrote:
  Bruno Marchal wrote:
 
  On 20 Apr 2015, at 09:40, Bruce Kellett wrote:
 
  Dennis Ochei wrote:
 
  One must revise the everyday concept of personal identity because it
  isn't even coherent. It's like you're getting mad at him for explaining
  combustion without reference to phlogiston. He can't use the everyday
 notion
  because it is a convenient fiction.
 
 
  I don't think phlogiston is an everyday concept. The closest continuer
  concept of personal identity is far from an unsophisticated everyday
 notion,
  or a convenient fiction. If you want to revise it to some alternative
  definition of personal identity that is better suited to your
 purposes, then
  you have to do the necessary analytical work.
 
 
  Are you saying that you believe that computationalism is false (in which
  case you can believe in some closer continuer theory), or are you saying
  that step 4 is not valid?
 
 
  I am suggesting that computationalism is effectively false, in part
 because
  of an inadequate account of personal identity. You substitute part or
 all of
  the brain at some level with a Turing machine, but do not take
 appropriate
  notice of the body bearing the brain. If we are not to notice the
  substitution, we must still have a body that interacts with the world in
  exactly the same way as the original. Under the teleportation scenarios,
  some new body must be created or provided. I think that in general the
  person might notice this.
 
  If you woke up in the morning and looked in the mirror and saw Sophia
 Loren
  looking back at you, or saw your next door neighbour in the mirror, you
  might doubt your own identity. Memories are not everything because
 memories
  can be lost, or be mistaken.
 
  In total virtual reality scenarios, of course, this could be managed, but
  then you have the problem of the identity of indiscernibles. Creating
 copies
  that are identical to this level -- identical memories, bodies,
  environments, and so on -- does not duplicate the person -- the copies,
  being identical in all respects, are one person.
 
  I am saying that a case could be made that all the destructive
 teleportation
  scenarios create new persons -- the cut actually terminates the original
  person. In step 3 you have a tie for closest continuer so there is no
  continuing person -- the original is cut. If the original is not cut (as
 in
  step 5), then that is the continuing person, and the duplicate is a new
  person. Time delays as in steps 2 and 4 do not make a lot of difference,
  they just enhance the need for the recognition of new persons.

 Of course destructive teleportation creates new persons, but the point
 is that it doesn't matter, because ordinary life creates new persons
 also, though gradually rather than all at once. If you discovered that
 some otherwise perfectly normal people had a condition which caused
 all of the matter in their body to be replaced overnight during sleep,
 rather than gradually over the course of days, and that you were one
 of these people, would it bother you? Or would you doubt that it was
 so on the grounds that you were pretty sure you were the same person
 and not a new person?

  In sum, your argument over these early steps is not an argument in logic,
  but an argument of rhetoric. Because the tight definitions you need for
  logical argument either are not provided, or when provided, do not refer
 to
  anything in the real world, at best you are trying to persuade
 rhetorically
  -- there is no logical compulsion. What you are talking about has more
 to do
  with psychology and/or physics than mathematics, so definitions can
 never be
  completely precise -- concepts in the real world are always corrigible,
 so
  tightly constrained logical arguments are not available as they are in
  mathematics.

 All you have to agree is that it would make no difference to you if
 you were perfectly (or close enough) copied. I guess you could
 disagree with this but in that case you are deluded about being the
 person you believe yourself to be.


 --
 Stathis Papaioannou


Re: Step 3 - one step beyond?

2015-04-21 Thread LizR
On 21 April 2015 at 14:15, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Russell Standish wrote:


 There is another way of looking at this. Assume a robust ontology, so
 that the UD actually runs completely. Then the closest continuation
 theory coupled with computationalism predicts the absence of any
  discontinuities of experience, such as what I experience every night
  going to sleep. That is because in UD*, there will always be a
  closer continuation to the one you're currently experiencing (for
  essentially the same reason that there is always another real number
  lying between any two real numbers you care to pick).


 That seems to be saying that there is always a continuer who never sleeps.


Don't dreams count?



Re: Step 3 - one step beyond?

2015-04-21 Thread Bruno Marchal


On 21 Apr 2015, at 00:43, Bruce Kellett wrote:


Bruno Marchal wrote:

On 20 Apr 2015, at 09:40, Bruce Kellett wrote:

Dennis Ochei wrote:
One must revise the everyday concept of personal identity because  
it isn't even coherent. It's like you're getting mad at him for  
explaining combustion without reference to phlogiston. He can't  
use the everyday notion because it is a convenient fiction.


I don't think phlogiston is an everyday concept. The closest  
continuer concept of personal identity is far from an  
unsophisticated everyday notion, or a convenient fiction. If you  
want to revise it to some alternative definition of personal  
identity that is better suited to your purposes, then you have to  
do the necessary analytical work.
Are you saying that you believe that computationalism is false (in  
which case you can believe in some closer continuer theory), or are  
you saying that step 4 is not valid?


I am suggesting that computationalism is effectively false,


OK. But that is off topic.




in part because of an inadequate account of personal identity.


Computationalism by definition makes simple teleportation and  
duplication preserve the subjective feeling of personal identity.  
So we don't need any account of personal identity, except the  
acceptance of an artificial brain, seen as a clinical operation like  
any other. If not, you should not even take an aspirin, as you would  
need some adequate account of personal identity to be guaranteed that  
you will survive when you take that aspirin, or when you just drink  
water, or even when you do nothing.


The situation would be different for someone claiming to have the right  
Turing program for the functioning of the brain, but comp just assumes  
such a program exists. Indeed, in the mathematical part, it is proven  
that no machine can know for sure what its own program is, and that is  
why the 'it exists' in the definition is non-constructive, even  
necessarily non-constructive (as Emil Post already saw), and the act of  
saying yes asks for an act of faith.




You substitute part or all of the brain at some level with a Turing  
machine, but do not take appropriate notice of the body bearing the  
brain.


If the body is needed, it is part of the 'generalized brain'. Even if  
that is the entire universe (observable or not), the reasoning still  
goes through. This should be clear if you have grasped the argument up  
to step 7.




If we are not to notice the substitution, we must still have a body  
that interacts with the world in exactly the same way as the  
original. Under the teleportation scenarios, some new body must be  
created or provided. I think that in general the person might notice  
this.


You need a perceptual body, as in step 6. With computationalism you  
cannot notice the difference introspectively, and that is all that  
counts in the reasoning.






If you woke up in the morning and looked in the mirror and saw  
Sophia Loren looking back at you, or saw your next door neighbour in  
the mirror, you might doubt your own identity. Memories are not  
everything because memories can be lost, or be mistaken.


Not in the protocol used in the reasoning. You distract yourself with  
ideas which are perhaps interesting for some debate, but are not  
relevant to understand that computationalism makes physics into a  
branch of arithmetic.





In total virtual reality scenarios, of course, this could be  
managed, but then you have the problem of the identity of  
indiscernibles. Creating copies that are identical to this level --  
identical memories, bodies, environments, and so on -- does not  
duplicate the person -- the copies, being identical in all respects,  
are one person.


That is correct.

Of course in step 6, the copies diverge because they are simulated in  
simulations of Moscow and Washington. Likewise in step 7 they will diverge  
along all ... diverging histories.





I am saying that a case could be made that all the destructive  
teleportation scenarios create new persons -- the cut actually  
terminates the original person.


Then you can't accept a digital brain proposed by the doctor, and comp  
is false (which is off topic).




In step 3 you have a tie for closest continuer so there is no  
continuing person -- the original is cut. If the original is not cut  
(as in step 5), then that is the continuing person, and the  
duplicate is a new person. Time delays as in steps 2 and 4 do not  
make a lot of difference, they just enhance the need for the  
recognition of new persons.


If comp is false, the reasoning just doesn't apply.





In sum, your argument over these early steps is not an argument in  
logic,


?

An argument is valid, or is not valid.


but an argument of rhetoric. Because the tight definitions you need  
for logical argument either are not provided, or when provided, do  
not refer to anything in the real world, at best you are trying to  
persuade 

Re: Step 3 - one step beyond?

2015-04-20 Thread Russell Standish
On Tue, Apr 21, 2015 at 08:43:09AM +1000, Bruce Kellett wrote:
 Bruno Marchal wrote:
 On 20 Apr 2015, at 09:40, Bruce Kellett wrote:
 
 Dennis Ochei wrote:
 One must revise the everyday concept of personal identity
 because it isn't even coherent. It's like you're getting mad at
 him for explaining combustion without reference to phlogiston.
 He can't use the everyday notion because it is a convenient
 fiction.
 
 I don't think phlogiston is an everyday concept. The closest
 continuer concept of personal identity is far from an
 unsophisticated everyday notion, or a convenient fiction. If you
 want to revise it to some alternative definition of personal
 identity that is better suited to your purposes, then you have
 to do the necessary analytical work.
 
 Are you saying that you believe that computationalism is false (in
 which case you can believe in some closer continuer theory), or
 are you saying that step 4 is not valid?
 
 I am suggesting that computationalism is effectively false, in part
 because of an inadequate account of personal identity. You
 substitute part or all of the brain at some level with a Turing
 machine, but do not take appropriate notice of the body bearing the
 brain. If we are not to notice the substitution, we must still have
 a body that interacts with the world in exactly the same way as the
 original. Under the teleportation scenarios, some new body must be
 created or provided. I think that in general the person might notice
 this.
 
 If you woke up in the morning and looked in the mirror and saw
 Sophia Loren looking back at you, or saw your next door neighbour in
 the mirror, you might doubt your own identity. Memories are not
 everything because memories can be lost, or be mistaken.
 
 In total virtual reality scenarios, of course, this could be
 managed, but then you have the problem of the identity of
 indiscernibles. Creating copies that are identical to this level --
 identical memories, bodies, environments, and so on -- does not
 duplicate the person -- the copies, being identical in all respects,
 are one person.
 
 I am saying that a case could be made that all the destructive
 teleportation scenarios create new persons -- the cut actually
 terminates the original person. In step 3 you have a tie for closest
 continuer so there is no continuing person -- the original is cut.
 If the original is not cut (as in step 5), then that is the
 continuing person, and the duplicate is a new person. Time delays as
 in steps 2 and 4 do not make a lot of difference, they just enhance
 the need for the recognition of new persons.
 
 In sum, your argument over these early steps is not an argument in
 logic, but an argument of rhetoric. Because the tight definitions
 you need for logical argument either are not provided, or when
 provided, do not refer to anything in the real world, at best you
 are trying to persuade rhetorically -- there is no logical
 compulsion. What you are talking about has more to do with
 psychology and/or physics than mathematics, so definitions can never
 be completely precise -- concepts in the real world are always
 corrigible, so tightly constrained logical arguments are not
 available as they are in mathematics.
 
 Bruce
 

There is another way of looking at this. Assume a robust ontology, so
that the UD actually runs completely. Then the closest continuation
theory coupled with computationalism predicts the absence of any
discontinuities of experience, such as what I experience every night
going to sleep. That is because in UD*, there will always be a
closer continuation to the one you're currently experiencing (for
essentially the same reason that there is always another real number
lying between any two real numbers you care to pick).
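The density point above can be illustrated with a toy numeric sketch (this is only an analogy with the reals, not a model of the UD; the starting values are arbitrary):

```python
# Toy illustration of the density argument above: between any two distinct
# reals there is always another, so no candidate neighbour of x is ever the
# closest one. The analogy to continuations in UD* is only suggestive.

def between(a, b):
    """Return a real strictly between a and b (for a != b)."""
    return (a + b) / 2.0

x, y = 0.0, 1.0
for _ in range(5):
    y = between(x, y)   # each step yields a strictly closer neighbour of x
    assert x < y        # ... so no step can claim to be the closest

print(y)  # prints: 0.03125
```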

So either ontology is not robust (the Peter Jones move),
computationalism is false, or the CCT is false.

Not sure if Bruno needs to be more explicit on this robust ontology bit,
as he de-emphasises this until step 7.

Anyway, it does seem to me that CCT is attributing some sort of
identity role to physical continuity that is not there with
computational continuity.

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Step 3 - one step beyond?

2015-04-20 Thread LizR
On 20 April 2015 at 21:44, Telmo Menezes te...@telmomenezes.com wrote:



 On Mon, Apr 20, 2015 at 8:40 AM, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 Dennis Ochei wrote:

 One must revise the everyday concept of personal identity because it
  isn't even coherent. It's like you're getting mad at him for explaining
 combustion without reference to phlogiston. He can't use the everyday
 notion because it is a convenient fiction.


 I don't think phlogiston is an everyday concept.


 Not anymore. It was made obsolete by a better theory, which was not
 required to take phlogiston into account, because phlogiston was just a
 made up explanation that happened to fit the observations available at the
 time.

 Just the same as any other scientific theory, then!



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
http://en.m.wikipedia.org/wiki/Relativity_of_simultaneity

I thought this was basic relativity 101? The video gives a concrete example
with a train moving at relativistic speeds through a tunnel. The train
Lorentz-contracts such that it is shorter than the tunnel. To an observer
outside the tunnel, off the train, there will come a point in time when the
train is completely within the tunnel. At this point two guillotines slam
downwards simultaneously at the exit and the entrance of the tunnel and
rise again, barely missing the train.

From a frame on the train, the tunnel is Lorentz-contracted to be shorter
than the train. The nose of the train is just barely missed by the
guillotine at the exit while the back of the train protrudes from the
tunnel. Some moments later the back of the train enters the tunnel and the
guillotine at the entrance slams down behind it with the front protruding.
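The two guillotine events can be pushed through a Lorentz boost numerically. This is a minimal sketch in units c = 1; the speed v = 0.8 and tunnel length L = 1 are made-up values, not taken from the video:

```python
import math

# Hedged sketch of the relativity-of-simultaneity point above (units c = 1).
# The two guillotine drops are simultaneous in the tunnel frame but not in
# the train frame; v = 0.8 and L = 1 are arbitrary illustrative values.

def boost(t, x, v):
    """Lorentz-transform event (t, x) into a frame moving at speed v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

v, L = 0.8, 1.0
entrance = (0.0, 0.0)  # tunnel frame: both guillotines drop at t = 0
exit_ = (0.0, L)

t_entrance, _ = boost(*entrance, v)
t_exit, _ = boost(*exit_, v)

# Simultaneous in the tunnel frame, but in the train frame the exit
# guillotine drops first (t_exit < t_entrance), as described above.
print(t_entrance, round(t_exit, 4))  # prints: 0.0 -1.3333
```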

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Dennis Ochei wrote:

   Huh? The scan was destructive according to your account!

 That does not preclude me from having a closest continuer. CCT says that
 teletransportation preserves identity. This is just a teleportation to the
 same location. Or perhaps you missed the part where it reconstitutes me at
 t+epsilon and that's the confusion.


 Maybe you forgot to mention that part.

  Time order along a time-like world line is invariant under Lorentz
 transformations. I suggest that you don't know what you are talking about.

 Relativity Paradox - Sixty Symbols
 https://m.youtube.com/watch?v=kGsbBw1I0Rg

 You can start at 4 minutes. I'm resisting the urge to suggest that you
 don't know what you're talking about


 I can't load the video. Tell me briefly what your argument against my
 comment about time order along a time-like world line is.

 Bruce




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-20 Thread meekerdb

On 4/20/2015 3:19 PM, Bruce Kellett wrote:

Dennis Ochei wrote:

No, it's actually completely indeterminate whether I am the closest
continuer or not. There might be a six year old somewhere who is more
psychologically like my 5 year old self than I am and with a higher
fraction of the molecules I was made of when I was 5.

Or suppose I get into a matter scanner at time t and it destructively
scans me and then reconstitutes me. Then at some unknown time t+x it
creates a duplicate. Who is the closest continuer of the me that
walked into the scanner at t? At all t+y where 0 < y < x the person
who walked out of the scanner at t+epsilon is the closest continuer.


Huh? The scan was destructive according to your account!


Then at t+x the newly created duplicate becomes the closest continuer
of me at t and the other person loses their personal identity due to
something that potentially happened on the other side of the
universe.

This is already silly without me opening the can of worms that is
relativity. Which I will now quickly do: As observers in different
reference frames will disagree to the ordering of events, they will
disagree about whether the me who walked out of the scanner just
after t is the closest continuer. CCT requires non-local
instantaneous effects on personal identity, which simply doesn't play
nice with relativity.


Time order along a time-like world line is invariant under Lorentz
transformations. I suggest that you don't know what you are talking about.


The information from the scan could be transmitted to spacelike-separated reconstruction 
events, in which case you couldn't label one copy as having time precedence over the 
other.  But I don't see what this has to do with anything of metaphysical significance.  
It might present a legalistic problem, but that could be solved just by flipping a coin.


Brent



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
No one cares who inherits the farm. Subjective expectation is the crux of
personal identity. You can't tell me that whether I wake up in Moscow
depends on whether or not a reconstruction event happened at Helsinki
faster than signals can travel between the two.

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 meekerdb wrote:

 On 4/20/2015 3:19 PM, Bruce Kellett wrote:


 Time order along a time-like world line is invariant under Lorentz
  transformations. I suggest that you don't know what you are talking about.


  The information from the scan could be transmitted to spacelike-separated
 reconstruction events, in which case you couldn't label one copy as having
 time precedence over the other.  But I don't see what this has to do with
 anything of metaphysical significance.  It might present a legalistic
 problem, but that could be solved just by flipping a coin.


 True, but then there is no unique closest continuer, so two new persons
 are created. Who inherits the farm? Well, that depends on the will of the
 original, now deceased, person.

 Bruce




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Dennis Ochei wrote:
Oh, I see the issue. I didn't realize you'd assume the scanner is 
immobile. Immobilizing it relative to everything in the universe is 
uhhh... rather difficult.


The scanning event is taken as a single point in space-time. Mobility is 
irrelevant. If you create duplicates, they can be sent to space-like 
separated points, as Brent says. But if you simply reconstruct at some 
later time at the same location, then the events are separated by a 
time-like interval. This makes a difference to whether or not the time 
order is unique -- it is for time-like separations.
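The distinction above can be checked numerically. This is a minimal sketch in units c = 1 with made-up event coordinates: a timelike-separated pair keeps its time order under every boost, while a spacelike-separated pair does not.

```python
import math

# Sketch of the point above (units c = 1, coordinates arbitrary): the time
# order of two timelike-separated events is the same in every frame, while
# the order of spacelike-separated events is frame-dependent.

def dt_in_frame(dt, dx, v):
    """Time separation of an event pair in a frame moving at speed v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

speeds = [i / 100.0 for i in range(-99, 100)]

timelike = (2.0, 1.0)   # |dt| > |dx|: dt keeps its sign in every frame
assert all(dt_in_frame(*timelike, v) > 0 for v in speeds)

spacelike = (1.0, 2.0)  # |dt| < |dx|: some frames reverse the order
signs = {dt_in_frame(*spacelike, v) > 0 for v in speeds}
print(signs == {True, False})  # prints: True
```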


Bruce

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au 
mailto:bhkell...@optusnet.com.au wrote:

Dennis Ochei wrote:

http://en.m.wikipedia.org/wiki/Relativity_of_simultaneity

I thought this was basic relativity 101? The video gives a
concrete example with a train moving at relativistic speeds
through a tunnel. The train Lorentz-contracts such that it is
shorter than the tunnel. To an observer outside the tunnel, off
the train, there will come a point in time when the train is
completely within the tunnel. At this point two guillotines slam
downwards simultaneously at the exit and the entrance of the
tunnel and rise again barely missing the train.

 From a frame on the train, the tunnel is Lorentz-contracted to
be shorter than the train. The nose of the train is just barely
missed by the guillotine at the exit while the back of the train
protrudes from the tunnel. Some moments later the back of the
train enters the tunnel and the guillotine at the entrance
slams down behind it with the front protruding.

The two ends of the train are separated by a space-like interval,
not a time-like interval.

Bruce




Re: Step 3 - one step beyond?

2015-04-20 Thread LizR
I have to say that the point under discussion SHOULD be the nature of
subjective experience, surely? That is, why do we feel as though we have
continuity? (And does the answer to that preclude duplicators etc?)



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
sigh... Parfit does away with personal identity, replacing it with
psychological connectedness relation R. Past and future selves are not
identical to you, but are new persons that are like you to a high degree.
Your relationship to your past and future selves is much like your
relationship to your siblings. The illusion that you are the same observer
riding through time is caused by memories and being destructively
teleported is as good as ordinary survival because there is no further
question of identity beyond relation R. Lastly, Parfit's Empty
Individualism is not a CCT as it allows branching.

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Dennis Ochei wrote:

  Do you have a coherent, non-arbitrary theory of personal identity that
 claims 1) Teletransportation creates a new person, killing the original


 It is a possible theory. See D Parfit, 'Reasons and Persons' (Oxford,
 1984).

  and 2) Ordinary survival does not create a new person, killing the
 original?

 Let me remind you, although you probably know this, that all your atoms
 except some in your teeth are replaced throughout the course of a year.


 When a cell in my arm dies and is replaced, I do not die. When my leg is
 cut off, I do not die. Ordinary survival does not kill the original and
 create a new person -- body replacement is a gradual, continuous process
 which preserves bodily identity.

 The teleportation process discussed involves actually destroying (cutting
 or killing) the original and creating a new body at some (remote) location.
 It is arguable whether this new body is sufficiently close to the original
 to constitute a closest continuer -- hence Parfit's idea that a new person
 is always created. If replacement of memories in a new body counts as
 sufficient to constitute a suitable closest continuer, that is your choice.
  But it is not a logical consequence.

 Bruce




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Dennis Ochei wrote:

http://en.m.wikipedia.org/wiki/Relativity_of_simultaneity

I thought this was basic relativity 101? The video gives a concrete 
example with a train moving at relativistic speeds through a tunnel. The 
train Lorentz-contracts such that it is shorter than the tunnel. To an 
observer outside the tunnel, off the train, there will come a point in 
time when the train is completely within the tunnel. At this point two 
guillotines slam downwards simultaneously at the exit and the entrance 
of the tunnel and rise again barely missing the train.


 From a frame on the train, the tunnel is Lorentz-contracted to be 
shorter than the train. The nose of the train is just barely missed by 
the guillotine at the exit while the back of the train protrudes from 
the tunnel. Some moments later the back of the train enters the tunnel 
and the guillotine at the entrance slams down behind it with the front 
protruding.


The two ends of the train are separated by a space-like interval, not a 
time-like interval.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
Oh, I see the issue. I didn't realize you'd assume the scanner is immobile.
Immobilizing it relative to everything in the universe is uhhh... rather
difficult.

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Dennis Ochei wrote:

 http://en.m.wikipedia.org/wiki/Relativity_of_simultaneity

 I thought this was basic relativity 101? The video gives a concrete
 example with a train moving at relativistic speeds through a tunnel. The
  train Lorentz-contracts such that it is shorter than the tunnel. To an
 observer outside the tunnel, off the train, there will come a point in time
 when the train is completely within the tunnel. At this point two
 guillotines slam downwards simultaneously at the exit and the entrance of
 the tunnel and rise again barely missing the train.

   From a frame on the train, the tunnel is Lorentz-contracted to be
  shorter than the train. The nose of the train is just barely missed by the
  guillotine at the exit while the back of the train protrudes from the
  tunnel. Some moments later the back of the train enters the tunnel and the
  guillotine at the entrance slams down behind it with the front protruding.


 The two ends of the train are separated by a space-like interval, not a
 time-like interval.

 Bruce




-- 
Sent from Gmail Mobile



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
Do you have a coherent, non-arbitrary theory of personal identity that
claims 1) Teletransportation creates a new person, killing the original and
2) Ordinary survival does not create a new person, killing the original?

Let me remind you, although you probably know this, that all your atoms
except some in your teeth are replaced throughout the course of a year.

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Bruno Marchal wrote:

 On 20 Apr 2015, at 09:40, Bruce Kellett wrote:

  Dennis Ochei wrote:

 One must revise the everyday concept of personal identity because it
 isn't even coherent. It's like you're getting mad at him for explaining
 combustion without reference to phlogiston. He can't use the everyday
 notion because it is a convenient fiction.


 I don't think phlogiston is an everyday concept. The closest continuer
 concept of personal identity is far from an unsophisticated everyday
 notion, or a convenient fiction. If you want to revise it to some
 alternative definition of personal identity that is better suited to your
 purposes, then you have to do the necessary analytical work.


 Are you saying that you believe that computationalism is false (in which
 case you can believe in some closer continuer theory), or are you saying
 that step 4 is not valid?


 I am suggesting that computationalism is effectively false, in part
 because of an inadequate account of personal identity. You substitute part
 or all of the brain at some level with a Turing machine, but do not take
 appropriate notice of the body bearing the brain. If we are not to notice
 the substitution, we must still have a body that interacts with the world
 in exactly the same way as the original. Under the teleportation scenarios,
 some new body must be created or provided. I think that in general the
 person might notice this.

 If you woke up in the morning and looked in the mirror and saw Sophia
 Loren looking back at you, or saw your next door neighbour in the mirror,
 you might doubt your own identity. Memories are not everything because
 memories can be lost, or be mistaken.

 In total virtual reality scenarios, of course, this could be managed, but
 then you have the problem of the identity of indiscernibles. Creating
 copies that are identical to this level -- identical memories, bodies,
 environments, and so on -- does not duplicate the person -- the copies,
 being identical in all respects, are one person.

 I am saying that a case could be made that all the destructive
 teleportation scenarios create new persons -- the cut actually terminates
 the original person. In step 3 you have a tie for closest continuer so
 there is no continuing person -- the original is cut. If the original is
 not cut (as in step 5), then that is the continuing person, and the
 duplicate is a new person. Time delays as in steps 2 and 4 do not make a
 lot of difference, they just enhance the need for the recognition of new
 persons.

 In sum, your argument over these early steps is not an argument in logic,
 but an argument of rhetoric. Because the tight definitions you need for
 logical argument either are not provided, or when provided, do not refer to
 anything in the real world, at best you are trying to persuade rhetorically
 -- there is no logical compulsion. What you are talking about has more to
 do with psychology and/or physics than mathematics, so definitions can
 never be completely precise -- concepts in the real world are always
 corrigible, so tightly constrained logical arguments are not available as
 they are in mathematics.

 Bruce






Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Dennis Ochei wrote:

No, it's actually completely indeterminate whether I am the closest
continuer or not. There might be a six year old somewhere who is more
psychologically like my 5 year old self than I am and with a higher
fraction of the molecules I was made of when I was 5.

Or suppose I get into a matter scanner at time t and it destructively
scans me and then reconstitutes me. then at some unknown time t+x it
creates a duplicate. Who is the closest continuer of the me that
walked into the scanner at t? At all t+y, where 0 < y < x, the person
who walked out of the scanner at t+epsilon is the closest continuer.


Huh? The scan was destructive according to your account!


Then at t+x the newly created duplicate becomes the closest continuer
of me at t and the other person loses their personal identity due to
something that potentially happened on the other side of the
universe.

This is already silly without me opening the can of worms that is
relativity. Which I will now quickly do: As observers in different
reference frames will disagree about the ordering of events, they will
disagree about whether the me who walked out of the scanner just
after t is the closest continuer. CCT requires non-local
instantaneous effects on personal identity, which simply doesn't play
nice with relativity.


Time order along a time-like world line is invariant under Lorentz
transformations. I suggest that you don't know what you are talking about.
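This invariance is easy to verify directly from the boost formula t' = γ(t − vx) (units with c = 1): for a time-like pair the sign of Δt' is the same for every boost speed |v| < 1, while a space-like pair can be reordered. A small sketch with made-up coordinates:

```python
import math

def boost_t(t, x, v):
    """Time coordinate after a Lorentz boost with speed v (units c = 1):
    t' = gamma * (t - v * x)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Time-like pair: |dt| > |dx| (event B at t=2, x=1 after A at the origin).
# Space-like pair: |dt| < |dx| (event D at t=1, x=2 after C at the origin).
for v in (v100 / 100.0 for v100 in range(-99, 100)):
    # B stays later than A in every boosted frame:
    assert boost_t(2.0, 1.0, v) > boost_t(0.0, 0.0, v)

# But a boost can reorder the space-like pair:
assert boost_t(1.0, 2.0, 0.9) < boost_t(0.0, 0.0, 0.9)    # D before C at v = +0.9
assert boost_t(1.0, 2.0, -0.9) > boost_t(0.0, 0.0, -0.9)  # D after C at v = -0.9
print("time order is invariant only along time-like separations")
```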

Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Dennis Ochei wrote:

  Huh? The scan was destructive according to your account!

That does not preclude me from having a closest continuer. CCT says that 
teletransportation preserves identity. This is just a teleportation to 
the same location. Or perhaps you missed the part where it reconstitutes 
me at t+epsilon and that's the confusion.


Maybe you forgot to mention that part.


Time order along a time-like world line is invariant under Lorentz

transformations. I suggest that you don't know what you are talking about.

https://m.youtube.com/watch?v=kGsbBw1I0Rg

You can start at 4 minutes. I'm resisting the urge to suggest that you 
don't know what you're talking about


I can't load the video. Tell me briefly what your argument against my 
comment about time order along a time-like world line is.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Bruce Kellett wrote:

Dennis Ochei wrote:

  Huh? The scan was destructive according to your account!

That does not preclude me from having a closest continuer. CCT says 
that teletransportation preserves identity. This is just a 
teleportation to the same location. Or perhaps you missed the part 
where it reconstitutes me at t+epsilon and that's the confusion.


Maybe you forgot to mention that part.


OK, I see now that you reconstitute immediately. That, then, is clearly 
the closest continuer. A person reconstructed at some later time is not 
a closest continuer if the original continued or was reconstructed 
immediately -- the original person will have moved on, and what he was a 
time x ago is no longer relevant. The essential point is that time-order 
along a time-like world line is invariant -- t is never before t+x (x > 0) 
for any observer.


Bruce







Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
CCT doesn't have to entail physical continuity. The standard CCT seems to
first use psychological similarity and, in the case of ties, physical
continuity, but you could also imagine a purely psychological or purely
physical CCT. My problem with CCT is that the rules for ties are ad hoc
legal arbitration that violate locality and, to quote Parfit: A double
survival can't equal death. My problem with similarity measures is that
you are no longer talking about subjective expectation. Similarity measures
are fine if you throw out subjective expectation as a mere illusion.
However, if you want to retain subjective expectation, then you have to
have an all-or-none model of personal identity.

On Monday, April 20, 2015, Russell Standish li...@hpcoders.com.au wrote:

 On Tue, Apr 21, 2015 at 08:43:09AM +1000, Bruce Kellett wrote:
  Bruno Marchal wrote:
  On 20 Apr 2015, at 09:40, Bruce Kellett wrote:
  
  Dennis Ochei wrote:
  One must revise the everyday concept of personal identity
  because it isn't even coherent. It's like you're getting mad at
  him for explaining combustion without reference to phlogiston.
  He can't use the everyday notion because it is a convenient
  fiction.
  
  I don't think phlogiston is an everyday concept. The closest
  continuer concept of personal identity is far from an
  unsophisticated everyday notion, or a convenient fiction. If you
  want to revise it to some alternative definition of personal
  identity that is better suited to your purposes, then you have
  to do the necessary analytical work.
  
  Are you saying that you believe that computationalism is false (in
  which case you can believe in some closer continuer theory), or
  are you saying that step 4 is not valid?
 
  I am suggesting that computationalism is effectively false, in part
  because of an inadequate account of personal identity. You
  substitute part or all of the brain at some level with a Turing
  machine, but do not take appropriate notice of the body bearing the
  brain. If we are not to notice the substitution, we must still have
  a body that interacts with the world in exactly the same way as the
  original. Under the teleportation scenarios, some new body must be
  created or provided. I think that in general the person might notice
  this.
 
  If you woke up in the morning and looked in the mirror and saw
  Sophia Loren looking back at you, or saw your next door neighbour in
  the mirror, you might doubt your own identity. Memories are not
  everything because memories can be lost, or be mistaken.
 
  In total virtual reality scenarios, of course, this could be
  managed, but then you have the problem of the identity of
  indiscernibles. Creating copies that are identical to this level --
  identical memories, bodies, environments, and so on -- does not
  duplicate the person -- the copies, being identical in all respects,
  are one person.
 
  I am saying that a case could be made that all the destructive
  teleportation scenarios create new persons -- the cut actually
  terminates the original person. In step 3 you have a tie for closest
  continuer so there is no continuing person -- the original is cut.
  If the original is not cut (as in step 5), then that is the
  continuing person, and the duplicate is a new person. Time delays as
  in steps 2 and 4 do not make a lot of difference, they just enhance
  the need for the recognition of new persons.
 
  In sum, your argument over these early steps is not an argument in
  logic, but an argument of rhetoric. Because the tight definitions
  you need for logical argument either are not provided, or when
  provided, do not refer to anything in the real world, at best you
  are trying to persuade rhetorically -- there is no logical
  compulsion. What you are talking about has more to do with
  psychology and/or physics than mathematics, so definitions can never
  be completely precise -- concepts in the real world are always
  corrigible, so tightly constrained logical arguments are not
  available as they are in mathematics.
 
  Bruce
 

 There is another way of looking at this. Assume a robust ontology, so
 that the UD actually runs completely. Then the closest continuation
 theory coupled with computationalism predicts the absence of any
 discontinuities of experience, such as what I experience every night
 going to sleep. That is because in UD*, there will always be a
 closer continuation to the one you're currently experiencing (for
 essentially the same reason that there is always another real number
 lying between any two real numbers you care to pick).

 So either ontology is not robust (the Peter Jones move),
 computationalism is false, or the CCT is false.

 Not sure if Bruno needs to be more explicit on this robust ontology bit,
 as he deemphasises this until step 7.

 Anyway, it does seem to me that CCT is attributing some sort of
 identity role to physical continuity that is not there with
 computational continuity.

 --


 

Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Bruno Marchal wrote:

On 20 Apr 2015, at 09:40, Bruce Kellett wrote:


Dennis Ochei wrote:
One must revise the everyday concept of personal identity because it 
isn't even coherent. It's like you're getting mad at him for 
explaining combustion without reference to phlogiston. He can't use 
the everyday notion because it is a convenient fiction.


I don't think phlogiston is an everyday concept. The closest continuer 
concept of personal identity is far from an unsophisticated everyday 
notion, or a convenient fiction. If you want to revise it to some 
alternative definition of personal identity that is better suited to 
your purposes, then you have to do the necessary analytical work.


Are you saying that you believe that computationalism is false (in which 
case you can believe in some closer continuer theory), or are you saying 
that step 4 is not valid?


I am suggesting that computationalism is effectively false, in part 
because of an inadequate account of personal identity. You substitute 
part or all of the brain at some level with a Turing machine, but do not 
take appropriate notice of the body bearing the brain. If we are not to 
notice the substitution, we must still have a body that interacts with 
the world in exactly the same way as the original. Under the 
teleportation scenarios, some new body must be created or provided. I 
think that in general the person might notice this.


If you woke up in the morning and looked in the mirror and saw Sophia 
Loren looking back at you, or saw your next door neighbour in the 
mirror, you might doubt your own identity. Memories are not everything 
because memories can be lost, or be mistaken.


In total virtual reality scenarios, of course, this could be managed, 
but then you have the problem of the identity of indiscernibles. 
Creating copies that are identical to this level -- identical memories, 
bodies, environments, and so on -- does not duplicate the person -- the 
copies, being identical in all respects, are one person.


I am saying that a case could be made that all the destructive 
teleportation scenarios create new persons -- the cut actually 
terminates the original person. In step 3 you have a tie for closest 
continuer so there is no continuing person -- the original is cut. If 
the original is not cut (as in step 5), then that is the continuing 
person, and the duplicate is a new person. Time delays as in steps 2 and 
4 do not make a lot of difference, they just enhance the need for the 
recognition of new persons.


In sum, your argument over these early steps is not an argument in 
logic, but an argument of rhetoric. Because the tight definitions you 
need for logical argument either are not provided, or when provided, do 
not refer to anything in the real world, at best you are trying to 
persuade rhetorically -- there is no logical compulsion. What you are 
talking about has more to do with psychology and/or physics than 
mathematics, so definitions can never be completely precise -- concepts 
in the real world are always corrigible, so tightly constrained logical 
arguments are not available as they are in mathematics.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Dennis Ochei wrote:
Do you have a coherent, non-arbitrary theory of personal identity that 
claims 1) Teletransportation creates a new person, killing the original


It is a possible theory. See D Parfit, 'Reasons and Persons' (Oxford, 1984).


and 2) Ordinary survival does not create a new person, killing the original?

Let me remind you, although you probably know this, that all your atoms 
except some in your teeth are replaced throughout the course of a year.


When a cell in my arm dies and is replaced, I do not die. When my leg is 
cut off, I do not die. Ordinary survival does not kill the original and 
create a new person -- body replacement is a gradual, continuous process 
which preserves bodily identity.


The teleportation process discussed involves actually destroying 
(cutting or killing) the original and creating a new body at some 
(remote) location. It is arguable whether this new body is sufficiently 
close to the original to constitute a closest continuer -- hence 
Parfit's idea that a new person is always created. If replacement of 
memories in a new body counts as sufficient to constitute a suitable 
closest continuer, that is your choice. But it is not a logical consequence.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

meekerdb wrote:

On 4/20/2015 3:19 PM, Bruce Kellett wrote:


Time order along a time-like world line is invariant under Lorentz
transformations.I suggest that you don't know what you are talking about.


The information from the scan could be transmitted to spacelike separated 
reconstruction events, in which case you couldn't label one copy as 
having time precedence over the other.  But I don't see what this has to 
do with anything of metaphysical significance.  It might present a 
legalistic problem, but that could be solved just by flipping a coin.


True, but then there is no unique closest continuer, so two new persons 
are created. Who inherits the farm? Well, that depends on the will of 
the original, now deceased, person.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
 Huh? The scan was destructive according to your account!

That does not preclude me from having a closest continuer. CCT says that
teletransportation preserves identity. This is just a teleportation to the
same location. Or perhaps you missed the part where it reconstitutes me at
t+epsilon and that's the confusion.

 Time order along a time-like world line is invariant under Lorentz
transformations. I suggest that you don't know what you are talking about.

https://m.youtube.com/watch?v=kGsbBw1I0Rg

You can start at 4 minutes. I'm resisting the urge to suggest that you
don't know what you're talking about

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Dennis Ochei wrote:

 No, it's actually completely indeterminate whether I am the closest
 continuer or not. There might be a six year old somewhere who is more
 psychologically like my 5 year old self than I am and with a higher
 fraction of the molecules I was made of when I was 5.

 Or suppose I get into a matter scanner at time t and it destructively
 scans me and then reconstitutes me. then at some unknown time t+x it
 creates a duplicate. Who is the closest continuer of the me that
 walked into the scanner at t? At all t+y, where 0 < y < x, the person
 who walked out of the scanner at t+epsilon is the closest continuer.


 Huh? The scan was destructive according to your account!

  Then at t+x the newly created duplicate becomes the closest continuer
 of me at t and the other person loses their personal identity due to
 something that potentially happened on the other side of the
 universe.

 This is already silly without me opening the can of worms that is
 relativity. Which I will now quickly do: As observers in different
 reference frames will disagree about the ordering of events, they will
 disagree about whether the me who walked out of the scanner just
 after t is the closest continuer. CCT requires non-local
 instantaneous effects on personal identity, which simply doesn't play
 nice with relativity.


 Time order along a time-like world line is invariant under Lorentz
 transformations. I suggest that you don't know what you are talking about.

 Bruce






Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
Right, mobility is irrelevant. I misspoke.

On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Dennis Ochei wrote:

 Oh, I see the issue. I didn't realize you'd assumed the scanner is
 immobile. Immobilizing it relative to everything in the universe is, uhhh...
 rather difficult.


 The scanning event is taken as a single point in space-time. Mobility is
 irrelevant. If you create duplicates, they can be sent to space-like
 separated points, as Brent says. But if you simply reconstruct at some
 later time at the same location, then the events are separated by a
 time-like interval. This makes a difference to whether or not the time
 order is unique -- it is for time-like separations.

 Bruce

  On Monday, April 20, 2015, Bruce Kellett bhkell...@optusnet.com.au
 mailto:bhkell...@optusnet.com.au wrote:
 Dennis Ochei wrote:

 http://en.m.wikipedia.org/wiki/Relativity_of_simultaneity

 I thought this was basic relativity 101? The video gives a
 concrete example with a train moving at relativistic speeds
 through a tunnel. The train Lorentz contracts such that it is
 shorter than the tunnel. To an observer outside the tunnel, off
 the train, there will come a point in time when the train is
 completely within the tunnel. At this point two guillotines slam
 downwards simultaneously at the exit and the entrance of the
 tunnel and rise again, barely missing the train.

  From a frame on the train, the tunnel is Lorentz contracted to
 be shorter than the train. The nose of the train is just barely
 missed by the guillotine at the exit while the back of the train
 protrudes from the tunnel. Some moments later the back of the
 train enters the tunnel and the guillotine at the entrance
 slams down behind it with the front protruding.

 The two ends of the train are separated by a space-like interval,
 not a time-like interval.

 Bruce







Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Russell Standish wrote:


There is another way of looking at this. Assume a robust ontology, so
that the UD actually runs completely. Then the closest continuation
theory coupled with computationalism predicts the absence of any
discontinuities of experience, such as what I experience every night
going to sleep. That is because in UD*, there will always be a
closer continuation to the one you're currently experiencing (for
essentially the same reason that there is always another real number
lying between any two real numbers you care to pick).


That seems to be saying that there is always a continuer who never 
sleeps. Gets to sound a bit like the quantum suicide scenario -- in MWI 
there is always one branch in which the gun fails to fire.



So either ontology is not robust (the Peter Jones move),
computationalism is false, or the CCT is false.

Not sure if Bruno needs to be more explicit on this robust ontology bit,
as he deemphasises this until step 7.

Anyway, it does seem to me that CCT is attributing some sort of
identity role to physical continuity that is not there with
computational continuity.


That seems to be the case. CCT does not specify a particular metric on 
the multiple dimensions of personal identity. So I guess you could weight 
physical continuity above everything else, or you could weight personal 
memories infinitely highly. I do not think that either extreme captures 
what we normally mean by personal identity over time.


I worry about memory loss cases -- whether through disease or trauma. My 
particular concern is with Korsakoff's Syndrome, which was first 
described in advanced alcoholics, but can occur after particular types 
of brain injury. It is characterized by the fact that the person cannot 
lay down new memories. They can't remember from one moment to the next 
things that were said and done. To cover these gaps in memory they 
confabulate all sorts of weird and fanciful stories. Nevertheless, such 
a sufferer may have quite clear childhood memories -- there is just a 
gap of twenty, thirty, or more years in their memory banks. Physically, 
they are of essentially unaltered appearance, and frequently emotional 
and other character traits are intact. When you speak to such a person, 
you can be in no doubt that they are the same person as before the brain 
injury, although they have lost most of their adult memories.


A satisfactory theory of personal identity has to account for such 
cases, and variations thereon.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Telmo Menezes wrote:
On Mon, Apr 20, 2015 at 8:40 AM, Bruce Kellett 
bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au wrote:


Dennis Ochei wrote:

One must revise the everyday concept of personal identity
because it isn't even coherent. It's like you're getting mad at
him for explaining combustion without reference to phlogiston.
He can't use the everyday notion because it is a convenient fiction.

I don't think phlogiston is an everyday concept. 

Not anymore. It was made obsolete by a better theory, which was not 
required to take phlogiston into account, because phlogiston was just a 
made up explanation that happened to fit the observations available at 
the time.


No, phlogiston was a serious scientific theory. It required careful 
experimentation to demonstrate that the theory did not really fit the 
facts easily (you would require negative mass, for instance).



The closest continuer concept of personal identity is far from an
unsophisticated everyday notion, or a convenient fiction. 

I wasn't familiar with the concept so I looked at several sources. I 
will summarize it in my own words, so that you can please correct me if 
I misunderstand something:


In case of branching (through something like duplication machines, body 
swaps, non-destructive teleportations, etc.), only one or zero branches 
will be the true continuation of the original. In some cases the true 
continuation is the one that more closely resembles the original 
psychologically, which can be determined by following causality chains. 
In the case of a tie, no branch is a true continuation of the original.


It involves a lot more than psychological resemblance. The point is that 
personal identity is a multidimensional concept. It includes continuity 
of the body, causality, continuity, access to memories, emotional 
states, value systems, and everything else that goes to make up a unique 
person. Although all of these things change with time in the natural 
course of events, we say that there is a unique person in this history. 
Closest continuer theory is a sophisticated attempt to capture this 
multidimensionality, and acknowledges that the metric one might use, and 
the relative weights placed on different dimensions, might be open to 
discussion. But it is clear that in the case of ties (in whatever metric 
you are using), new persons are created -- the person is not duplicated 
in any operational sense.
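Bruce's multidimensional reading can be made concrete with a toy sketch (all dimension names, weights, and the tie tolerance below are illustrative assumptions, not anything from Nozick's actual formulation):

```python
# Toy sketch of "closest continuer" as a weighted similarity over several
# identity dimensions. Dimension names and weights are invented for
# illustration; each candidate is scored 0..1 per dimension for how well
# it continues the original along that dimension.

WEIGHTS = {"body": 0.3, "memories": 0.3, "emotions": 0.2, "values": 0.2}

def closeness(candidate):
    """Weighted overall similarity of a candidate to the original."""
    return sum(WEIGHTS[d] * candidate[d] for d in WEIGHTS)

def closest_continuer(candidates, tie_eps=1e-9):
    """Return the unique closest candidate, or None on a tie.

    A tie means no branch is the true continuation: on this reading,
    new persons are created rather than the original being duplicated.
    """
    ranked = sorted(candidates, key=closeness, reverse=True)
    if len(ranked) > 1 and closeness(ranked[0]) - closeness(ranked[1]) < tie_eps:
        return None
    return ranked[0]
```

Note that the verdict is hostage to the choice of metric: changing WEIGHTS can change which branch wins, which is exactly the point about relative weights being open to discussion.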


Again, please correct me if I am misrepresenting the theory or missing 
something important.


If what I said above is correct, this is just akin to a legal 
definition, not a serious scientific or philosophical theory. It makes a 
statement about a bunch of mushy concepts. What is a true 
continuation? How is the causality chain introduced by a train journey 
any different from the one introduced by a teleportation?


If Everett's MWI is correct, then this theory holds that there is no 
true continuation -- every single branching from one observer moment to 
the next introduces a tie in closeness. Which is fine by me, but then we 
can just ignore this entire true continuation business.


MWI is in no way equivalent to Bruno's duplication situation. He 
acknowledges this. The point about MWI is that the continuers are in 
different worlds. There is no dimension connecting the worlds, so there 
is no metric defining this difference. Each can then be counted as the 
closest continuer /in that world/ -- with no possibility of conflicts.



If you want to revise it to some alternative definition of personal
identity that is better suited to your purposes, then you have to do
the necessary analytical work.

There isn't a single reference to personal identity that I could find 
in the UDA paper. The work does lead to conclusions about personal 
identity (as does Everett's MWI) but it doesn't start from there. Please 
be specific about what you find incorrect in the reasoning.


Read the COMP(2013) paper. There are many references to personal 
identity in that, including the quote given by Liz: The notion of the 
first person, or /the conscious knower/, admits the simplest possible 
definition: it is provided by access to basic memories.


In other words, Bruno is using only one dimension of personal identity 
and basing his argument on that, to the exclusion of all the other 
relevant dimensions. This is a serious limitation on the argument since 
two quite different people can share a large proportion of their 
memories, especially if they have lived closely together for many years. 
And yet they suffer from no confusion of their separate identities. 
Access to personal memories (as given in personal diaries) is not an 
adequate criterion for personal identity.


Bruce




Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
Closest continuer theory is itself a redefinition of the lay conception and is 
frankly absurd. Semiconservative replication doesn't kill me. And the lay 
understanding considers teletransportation as equivalent to death, contra 
closest continuer theory.

Combustion is the everyday concept and phlogiston was part of that concept's 
definition until someone redefined it. At least that's the analogy i was 
going for.



Re: Step 3 - one step beyond?

2015-04-20 Thread Telmo Menezes
On Mon, Apr 20, 2015 at 8:40 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Dennis Ochei wrote:

 One must revise the everyday concept of personal identity because it
 isn't even coherent. It's like you're getting mad at him for explaining
 combustion without reference to phlogiston. He can't use the everyday
 notion because it is a convenient fiction.


 I don't think phlogiston is an everyday concept.


Not anymore. It was made obsolete by a better theory, which was not
required to take phlogiston into account, because phlogiston was just a
made up explanation that happened to fit the observations available at the
time.


 The closest continuer concept of personal identity is far from an
 unsophisticated everyday notion, or a convenient fiction.


I wasn't familiar with the concept so I looked at several sources. I will
summarize it in my own words, so that you can please correct me if I
misunderstand something:

In case of branching (through something like duplication machines, body
swaps, non-destructive teleportations, etc.), only one or zero branches
will be the true continuation of the original. In some cases the true
continuation is the one that more closely resembles the original
psychologically, which can be determined by following causality chains. In
the case of a tie, no branch is a true continuation of the original.

Again, please correct me if I am misrepresenting the theory or missing
something important.

If what I said above is correct, this is just akin to a legal definition,
not a serious scientific or philosophical theory. It makes a statement
about a bunch of mushy concepts. What is a true continuation? How is the
causality chain introduced by a train journey any different from the one
introduced by a teleportation?

If Everett's MWI is correct, then this theory holds that there is no true
continuation -- every single branching from one observer moment to the next
introduces a tie in closeness. Which is fine by me, but then we can just
ignore this entire true continuation business.


 If you want to revise it to some alternative definition of personal
 identity that is better suited to your purposes, then you have to do the
 necessary analytical work.


There isn't a single reference to personal identity that I could find in
the UDA paper. The work does lead to conclusions about personal identity
(as does Everett's MWI) but it doesn't start from there. Please be specific
about what you find incorrect in the reasoning.

Telmo.




 Bruce




Re: Step 3 - one step beyond?

2015-04-20 Thread Bruce Kellett

Dennis Ochei wrote:

One must revise the everyday concept of personal identity because it isn't even 
coherent. It's like you're getting mad at him for explaining combustion without 
reference to phlogiston. He can't use the everyday notion because it is a 
convenient fiction.


I don't think phlogiston is an everyday concept. The closest continuer 
concept of personal identity is far from an unsophisticated everyday 
notion, or a convenient fiction. If you want to revise it to some 
alternative definition of personal identity that is better suited to 
your purposes, then you have to do the necessary analytical work.


Bruce



Re: Step 3 - one step beyond?

2015-04-20 Thread spudboy100 via Everything List

Closest continuer seems technically plausible, even in the John Hick way. But 
it does point out that identity cannot, over a long enough time, remain the 
same. Are we not the closest continuers of the 5-year-olds we used to be? Death 
should not be a big problem if the closest continuer is close to 100% accurate, 
to start off at least. Identity over time is the real issue. 
 
 
-Original Message-
From: Dennis Ochei do.infinit...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Mon, Apr 20, 2015 5:11 am
Subject: Re: Step 3 - one step beyond?


Closest continuer theory is itself a redefinition of the lay conception and is
frankly absurd. Semiconservative replication doesn't kill me. And the lay
understanding considers teletransportation as equivalent to death, contra
closest continuer theory.



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
I think his problem is that you are using an impoverished definition of 
personal identity, the same way an incompatibilist would be annoyed at the 
compatibilist redefinition of free will. I have to admit that as an 
incompatibilist I am annoyed by this move, but in your case I am not bothered 
by it.



Re: Step 3 - one step beyond?

2015-04-20 Thread Dennis Ochei
No, it's actually completely indeterminate whether I am the closest continuer 
or not. There might be a six-year-old somewhere who is more psychologically 
like my 5-year-old self than I am, and with a higher fraction of the molecules 
I was made of when I was 5.

Or suppose I get into a matter scanner at time t and it destructively scans me 
and then reconstitutes me. Then at some unknown time t+x it creates a 
duplicate. Who is the closest continuer of the me that walked into the scanner 
at t? At all t+y where 0 < y < x, the person who walked out of the scanner at 
t+epsilon is the closest continuer. Then at t+x the newly created duplicate 
becomes the closest continuer of me at t, and the other person loses their 
personal identity due to something that potentially happened on the other side 
of the universe.
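The timing flip in this scenario can be pinned down with a toy model (the function, times, and labels here are illustrative assumptions, just to make the oddity explicit):

```python
# Toy model of the scanner scenario: who is the closest continuer, at a
# given moment, of the person who entered the scanner at time t?
# Times and labels are illustrative assumptions.

def closest_continuer_at(now, t, x):
    """Closest continuer of the original under closest continuer theory.

    For t < now < t+x the reconstituted person is the sole candidate and
    so trivially the closest. Once the duplicate appears at t+x it is
    (on this reading) closer -- fresh from the t-scan, it resembles the
    original at t more than the person who has since lived through x --
    so the title flips, however far away the duplication happens.
    """
    if now < t:
        return "original"
    if now < t + x:
        return "reconstituted person"
    return "late duplicate"

t, x = 0.0, 10.0
assert closest_continuer_at(5.0, t, x) == "reconstituted person"
assert closest_continuer_at(12.0, t, x) == "late duplicate"
```

The flip at now = t+x depends on nothing that happens to the first person, which is the non-local effect on personal identity objected to above.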

This is already silly without me opening the can of worms that is relativity. 
Which I will now quickly do: as observers in different reference frames will 
disagree about the ordering of events, they will disagree about whether the me 
who walked out of the scanner just after t is the closest continuer. CCT 
requires non-local instantaneous effects on personal identity, which simply 
doesn't play nice with relativity.



Re: Step 3 - one step beyond?

2015-04-20 Thread Bruno Marchal


On 20 Apr 2015, at 13:52, Bruce Kellett wrote:


Telmo Menezes wrote:
On Mon, Apr 20, 2015 at 8:40 AM, Bruce Kellett bhkell...@optusnet.com.au 
wrote:

   Dennis Ochei wrote:
   One must revise the everyday concept of personal identity
   because it isn't even coherent. It's like you're getting mad at
   him for explaining combustion without reference to phlogiston.
   He can't use the everyday notion because it is a convenient fiction.

   I don't think phlogiston is an everyday concept.

Not anymore. It was made obsolete by a better theory, which was not 
required to take phlogiston into account, because phlogiston was just a 
made up explanation that happened to fit the observations available at the 
time.


No, phlogiston was a serious scientific theory. It required careful  
experimentation to demonstrate that the theory did not really fit  
the facts easily (you would require negative mass, for instance).



   The closest continuer concept of personal identity is far from an
   unsophisticated everyday notion, or a convenient fiction.

I wasn't familiar with the concept so I looked at several sources. I 
will summarize it in my own words, so that you can please correct 
me if I misunderstand something:

In case of branching (through something like duplication machines, 
body swaps, non-destructive teleportations, etc.), only one or 
zero branches will be the true continuation of the original. In 
some cases the true continuation is the one that more closely 
resembles the original psychologically, which can be determined by 
following causality chains. In the case of a tie, no branch is a 
true continuation of the original.


It involves a lot more than psychological resemblance. The point is  
that personal identity is a multidimensional concept. It includes  
continuity of the body, causality, continuity, access to memories,  
emotional states, value systems, and everything else that goes to  
make up a unique person. Although all of these things change with  
time in the natural course of events, we say that there is a unique  
person in this history. Closest continuer theory is a sophisticated  
attempt to capture this multidimensionality, and acknowledges that  
the metric one might use, and the relative weights placed on  
different dimensions, might be open to discussion. But it is clear  
that in the case of ties (in whatever metric you are using), new  
persons are created -- the person is not duplicated in any  
operational sense.


Again, please correct me if I am misrepresenting the theory or  
missing something important.
If what I said above is correct, this is just akin to a legal  
definition, not a serious scientific or philosophical theory. It  
makes a statement about a bunch of mushy concepts. What is a true  
continuation? How is the causality chain introduced by a train  
journey any different from the one introduced by a teleportation?
If Everett's MWI is correct, then this theory holds that there is  
no true continuation -- every single branching from one observer  
moment to the next introduces a tie in closeness. Which is fine by  
me, but then we can just ignore this entire true continuation  
business.


MWI is in no way equivalent to Bruno's duplication situation. He  
acknowledges this. The point about MWI is that the continuers are in  
different worlds. There is no dimension connecting the worlds, so  
there is no metric defining this difference. Each can then be  
counted as the closest continuer /in that world/ -- with no  
possibility of conflicts.


   If you want to revise it to some alternative definition of personal
   identity that is better suited to your purposes, then you have to do
   the necessary analytical work.
There isn't a single reference to personal identity that I could  
find in the UDA paper. The work does lead to conclusions about  
personal identity (as does Everett's MWI) but it doesn't start from  
there. Please be specific about what you find incorrect in the  
reasoning.


Read the COMP(2013) paper. There are many references to personal  
identity in that, including the quote given by Liz: The notion of  
the first person, or /the conscious knower/, admits the simplest  
possible definition: it is provided by access to basic memories.


In other words, Bruno is using only one dimension of personal  
identity and basing his argument on that, to the exclusion of all  
the other relevant dimensions. This is a serious limitation on the  
argument since two quite different people can share a large  
proportion of their memories, especially if they have lived closely  
together for many years. And yet they suffer from no confusion of  
their separate identities. Access to personal memories (as given in  
personal diaries) is not an adequate criterion for personal identity.


Certainly. That is why I insist in saying that the notion of personal  
identity is out-of-topic.
We 

Re: Step 3 - one step beyond?

2015-04-20 Thread Bruno Marchal


On 20 Apr 2015, at 09:40, Bruce Kellett wrote:


Dennis Ochei wrote:
One must revise the everyday concept of personal identity because  
it isn't even coherent. It's like you're getting mad at him for  
explaining combustion without reference to phlogiston. He can't use  
the everyday notion because it is a convenient fiction.


I don't think phlogiston is an everyday concept. The closest  
continuer concept of personal identity is far from an  
unsophisticated everyday notion, or a convenient fiction. If you  
want to revise it to some alternative definition of personal  
identity that is better suited to your purposes, then you have to do  
the necessary analytical work.


Are you saying that you believe that computationalism is false (in  
which case you can believe in some closer continuer theory), or are  
you saying that step 4 is not valid?


Bruno



http://iridia.ulb.ac.be/~marchal/




