Bruno Marchal wrote:
> 
>> The steps rely on the substitution being "perfect", which they will  
>> never
>> be.
> 
> That would contradict the digital and correct level assumption.
> 
No. Correctly functioning means "good enough to work", not perfect.
Digital means based on discrete values, not consisting only of discrete
values (otherwise there could be no digital computers, since they rely on
the non-discrete functioning of their parts).


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>>>
>>>>
>>>> Bruno Marchal wrote:
>>>>>
>>>>>> When I look
>>>>>> at myself, I see (in the center of my attention) a biological  
>>>>>> being,
>>>>>> not a
>>>>>> computer.
>>>>>
>>>>> Biological beings are computers. If you feel you are more than a
>>>>> computer, then tell me what.
>>>> Biological beings are not computers. Obviously a biological being is
>>>> not a computer in the sense of a physical computer.
>>>
>>> I don't understand this. A bacterium is a physical being (in the sense
>>> that it has a physical body) and is a computer in the sense that its
>>> genetic regulatory system can emulate a universal machine.
>> Usually computer means programmable machine, not "something that can  
>> emulate
>> a universal machine".
> 
> That can be proved to be equivalent.
No, because that would rely on an abstract notion of programmability.
"Programmable machine" means "programmable (to any practical extent) by us"
(and this cannot even be formalized).
That's why we call a computer a computer and usually don't call biological
beings computers. Otherwise you are using an abstraction of a computer.
Also "something that can emulate a universal machine" may be more capable
than a computer, like a hypercomputer.
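
To make the difference concrete: "can emulate a universal machine" is a
purely formal property, roughly the ability to run an interpreter like the
toy one below (a minimal Python sketch of my own; the counter-machine
instruction set and the names are invented purely for illustration):

def run(program, registers, max_steps=10000):
    # Emulation in the formal sense: follow a machine's description,
    # step by step, on some other substrate.
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "HALT":
            return registers
        if op[0] == "INC":                      # increment a register
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "DEC":                    # decrement, not below zero
            registers[op[1]] = max(0, registers[op[1]] - 1)
            pc += 1
        elif op[0] == "JZ":                     # jump if register is zero
            pc = op[2] if registers[op[1]] == 0 else pc + 1
    raise RuntimeError("step budget exhausted")

# Example: add r1 into r0 by looping until r1 reaches zero.
prog = [("JZ", 1, 4), ("DEC", 1), ("INC", 0), ("JZ", 2, 0), ("HALT",)]
print(run(prog, {0: 2, 1: 3, 2: 0}))            # -> {0: 5, 1: 0, 2: 0}

Nothing in that formal notion says anything about being practically
programmable by us, which is the sense in which we actually call the box on
a desk a computer.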


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>>> It is quite strange to say over and over again that I haven't
>>>> studied your
>>>> arguments (I have, though obviously I can't understand all the
>>>> details,
>>>> given how complicated they are),
>>>
>>> UDA is rather simple to understand. I have never met anyone among
>>> scientific academics who does not understand UDA1-7.
>>> Some academics pretend it is wrong, but they have never accepted a
>>> public or even private discussion. And then they are "literary"
>>> continental philosophers with a tradition of disliking science. Above
>>> all, they do not present any arguments.
>> It is indeed not hard to understand.
>> Again, there is no specific flaw in the argument, because all steps  
>> rely on
>> an abstraction of how a computer works,
> 
> They rely on the definition of a digital computer. The digitalness
> allows exact simulation (emulation).
No. It allows simulation to the extent that the computer works. A digital
computer is not defined as always working, and a correct substitution is
one where the computer works well enough, not perfectly.


Bruno Marchal wrote:
> 
>> Consciousness supposedly emerges from self-reference of numbers, but  
>> the
>> very concept of self-reference needs the existence of self  
>> (=consciousness).
>> Without self, no self-reference.
> 
> The discovery of Löbian machine and of arithmetical self-reference  
> contradicts this.
Again, you can't even begin to talk of arithmetical *self*-reference if you
don't assume SELF. Otherwise we could be talking about HT)D)F$w99-reference
as well.
Just like you can't talk of apple-juice without apples.
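
To be clear about what I am not denying: the purely formal construction you
presumably mean (Kleene/Gödel-style self-reference) certainly exists, and a
quine - a program whose output is exactly its own source - is its simplest
illustration. A minimal Python sketch (my own toy example, kept comment-free
so that the output really matches the source):

s = 's = %r\nprint(s %% s)'
print(s % s)

Nothing in those two lines is a "self" in the experiential sense; the word
"self" is supplied by us, the conscious describers, when we say the program
refers to "itself".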


Bruno Marchal wrote:
> 
> To equate self and consciousness is not warranted
Why?
We can't equate *local self* or self-identity with consciousness, but why
not self itself?
That's what all the great mystics say: consciousness is self.


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>>> If we have that faith,
>>>> we believe in arbitrary mysterious occurrences.
>>>
>>> We believe just that the brain is a sort of machine, and that we can
>>> substitute it with a functionally equivalent machine. This cannot be
>>> proved and so it asks for some faith. But it might not need to be
>>> blind faith. You can read books on brain, neuro-histology, and make
>>> your own opinion, including about the subst level. The reasoning uses
>>> only the possibility of this in theory. It is theoretical reasoning  
>>> in
>>> a theoretical frame.
>> But functionally equivalent does not mean *totally* equivalent, yet this
>> is required for the steps to work.
> 
> There is no notion of total equivalence used. Only of subjective  
> equivalence (first person invariance), modulo the changing conditions  
> of reawakening, reconstitution in different environment, etc.
What is that supposed to mean? The only thing that ever remains absolutely
subjectively invariant is (arguably) the fact that anything is conscious at
all, and this does not depend on a specific correct substitution (in case
you don't survive, other people are still conscious). You seem to assume an
absolute personal self, but the personal self is itself just a collection of
memories and personality traits, which is itself ever changing. So you can't
remain absolutely subjectively equivalent, since you never do. And if you do
remain relatively invariant, it is only because you choose to define
yourself so that you are still yourself after a certain change in
experience. But that is just a matter of opinion, which means it is also
just a matter of opinion whether you survive a substitution - but then we
can only conclude that we may survive no substitution (if we don't believe
YES doctor), or that we survive every substitution (!), or something in
between - a pretty weak conclusion.

Also: How does your reasoning show that we can't survive every substitution?
And how can you exclude the possibility that we survive a substitution only
*if it is done in the correct non-computational way*, making the conclusion
false (we would also be related to a non-computational element, the
instantiation of the computations)? We can always say YES doctor, yet the
substitution may still fail - not because the doctor didn't pick the right
substitution level, but because he didn't instantiate the computations in
the correct non-computational way. To be sure that we survive, we would also
have to assume YES non-computational-doctor (which makes sure the
computations are instantiated in the right non-computational way).
If we just conclude that we can survive any substitution provided we define
our local selves the right way, the result is pretty trivial: our *local
selves* are related to abstract computations insofar as we identify them as
related to abstract computations. Our experience is related to computations,
but it may also be related to something beyond computations, and beyond
infinite sheaves of computations, etc.


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>>> Unfortunately then we could as well base
>>>> the argument on "1+1=3" or "there are pink unicorns in my room even
>>>> though I
>>>> don't notice them", so it's worthless.
>>>
>>> This does not follow. We do have biological evidence that the brain  
>>> is
>>> a Turing emulable entity. It is deducible from other independent
>>> hypotheses (like the idea that QM is (even just approximately)
>>> correct, for example).
>>> You don't seem to realize, a bit like Craig, that to define a non- 
>>> comp
>>> object, you need to do some hard work.
>> We have no biological evidence whatsoever that the brain is Turing
>> emulable.
> 
> 
> This is simply not true. There is plenty of evidence, including the fact
> that we don't know physical laws which are not Turing emulable, with
> exceptions like the non-intelligible collapse of the wave, or some
> theoretical physical phenomena built by diagonalization, but unknown
> in nature.
But we have evidence that not all phenomena follow laws. There are no laws
from which to derive an emotion, for example. And since we know that the
brain has to do with emotions, it is unreasonable to suppose that it
strictly follows laws (including computational laws).
You assume a form of "formalist reductionism": everything follows laws. Of
course science has to assume that the workings of nature are approximated by
laws, but that doesn't mean that nature exactly follows any laws. Indeed,
many paranormal occurrences and the problems of finding an adequate unified
theory suggest, even from a scientific standpoint, that this is not true.

Unfortunately we live in a world of dogmatic scientism, materialism and
rationalism (but also in a world of irrationality, superstition and dogmatic
religion), and that's the only reason that many people assume those things.


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>>> or your proof doesn't work (because actually the patient will notice
>>>> he has
>>>> been substituted, that is, he didn't survive a substitution, but a
>>>> change of
>>>> himself - if he survives).
>>>
>>> He might notice it for reasons which are not relevant to the reasoning.
>>> He might notice it because he got a disk with software making him
>>> able to upload himself onto the net, or to do classical
>>> teleportation, or to live 1000 years, etc.
>> But if he noticed this, his subjective experience did change, and we
>> can't
>> assume in any of the steps that it doesn't (except for an added  
>> belief).
> 
> This just trivially says that we are changed by any experience, even  
> drinking a cup of coffee. But if the substitution is done at the right  
> level, the changes will only be of that type, and are not relevant for  
> the issue.
But that's ridiculous, since ALL changes are of that type (a change of
experience). So if the changes happening due to a substitution don't
matter, why should any other change?


Bruno Marchal wrote:
> 
>>
>> It is even more obvious in step 3: "The description
>> encoded at Brussels after the reading-cutting process is just the
>> description of a state of some Turing machine, giving that we assume
>> comp. So its description can be duplicated, and the experiencer can be
>> reconstituted simultaneously at two different places, for example
>> Washington and Moscow". This assumes we work precisely like an abstract
>> Turing machine,
> 
> Like a concrete Turing machine.
But a concrete Turing machine does not work like an abstract Turing machine.
Your computer, a concrete Turing machine, generates errors that are not part
of the computation being performed (a short circuit, for example), takes
time to process things, can be touched and seen, etc.

An abstract Turing machine is a model of a computer, but the computer is not
(and does not work) the same as the model. You confuse an actual thing with
the abstraction of that thing.
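
The difference is easy to make concrete. Below is a minimal sketch (mine,
purely illustrative; the error rate is an invented number, not a
measurement) of an abstract gate next to a stand-in for a physical gate
that can misfire:

import random

def ideal_not(bit):
    # the gate of the abstract model: exact, timeless, faultless
    return 1 - bit

def physical_not(bit, error_rate=1e-6):
    # a stand-in for a concrete gate: it usually works, but a short
    # circuit, a cosmic ray, overheating etc. can make it misfire
    out = 1 - bit
    if random.random() < error_rate:
        out = bit
    return out

The abstract machine is defined so that it never deviates and never wears
out; the concrete machine merely approximates it well enough to be useful.
Saying "like a concrete Turing machine" quietly identifies the two.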


Bruno Marchal wrote:
> 
>>
>>
>> Bruno Marchal wrote:
>>>
>>>> This just works as long as
>>>> the neurons can make enough new connections to fill the similarity
>>>> gap.
>>>>
>>>>
>>>> Bruno Marchal wrote:
>>>>>
>>>>>> This would make COMP work in a quite special case scenario, but
>>>>>> wrong in
>>>>>> general.
>>>>>
>>>>> It is hard to follow you.
>>>> I am not saying anything very complicated.
>>>
>>> You seem to oscillate between "comp is nonsense" and "there is
>>> something wrong with the reasoning".
>>> You need to be able to conceive that comp might be true to just  
>>> follow
>>> the reasoning.
>> I can conceive that a substitution might work,
> 
> Nice. that is comp. So you can conceive it to be true.
Indeed. The only reason I don't think it is practically true is that the
transcendent ("dreamy"/"spiritual") aspect of reality is transcending
biology faster than technology can, so that by the time we have the
technology for substitution, it will be totally irrelevant, as we will no
longer have any need of a biological brain.
It is a bit like how you could theoretically build a computer out of stones
and pipes and levers, but we will never do that, since if we knew how to
build it, we could already build a computer that is much, much more
powerful.


Bruno Marchal wrote:
> 
> 1) Exact computational states or slightly changed one, after the  
> recovering, are not relevant for the issue, and this is made clear at  
> step seven, given that the robust universe running the concrete UD  
> just goes through all those computational states, in all histories.  
> The relevant points are only the first person indeterminacy, and its  
> many invariance for some third person changes.
Step 7 does not even address the issue that I am pointing at.
You write "With comp, when we are in the state of going to drop the pen, we
are in a Turing emulable state." That's simply not the assumption. COMP just
says that a substitution with an (actual) computer can work, not that the
substitution works because we are in a Turing emulable state - it doesn't
say why exactly the substitution works (beyond being functionally correct).
We can say YES because we are emulable enough, *even though* we are not in a
precisely emulable state. COMP might mean that we survive because there is
always a substitution that is good enough to work (since a non-computational
aspect of ourselves, beyond our old parts or within the new digital parts,
can adapt), but not because the substitution replaces a state that is fully
emulable.
You can't presuppose that a substitution can only work if we are precisely
determined by a computational state.
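
(For reference, since you appeal to the concrete UD here: as I understand
it, the UD dovetails on all programs, giving each one more step on every
pass, so that every program eventually receives arbitrarily many steps. A
minimal Python sketch of that dovetailing schedule, with a placeholder
step(i, k) hook standing in for "run program number i for its k-th step" -
the hook and the pass limit are my own illustrative inventions:

def dovetail(step, max_passes=5):
    # on pass n, programs 0..n-1 each receive one more step, so program i
    # gets its 1st, 2nd, 3rd, ... step on passes i+1, i+2, i+3, ...
    for n in range(1, max_passes + 1):
        for i in range(n):
            step(i, n - i)

dovetail(lambda i, k: print("program", i, "step", k))

That schedule visits every (program, step) pair, which is what lets you say
the UD "goes through all those computational states"; but my point above is
about what COMP assumes, not about what the UD does.)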


Bruno Marchal wrote:
> 
> 2) The immateriality of computations is not assumed in any step, and  
> is the conclusion of the eighth step.
It seems that here lies our main difference. The eighth step says it is
absurd to associate any experience with arbitrary physical activity. That is
only the case if we assume a reductionistic objective world, where there has
to be a precise correspondence. It might be true that it is
impossible/arbitrary to associate experience with physical activity in an
*objective way*. Any association can only be a theory or a local
approximation. Therefore we can associate any experience with any physical
activity (or with none), because it may be subjectively meaningful, but that
doesn't mean that that is all there is to it. The substitution simply works
because it subjectively works, not because of some inherent, *objective
and/or absolute* association of physical activity with experience.
The whole MGA argument is supposed to show that the physical supervenience
thesis is false, but that's not the only possibility (even though the vast
majority of materialists argue for it, or for eliminativism). It may be
that it is impossible to separate matter and consciousness. Arguably that's
not really materialism as commonly understood (even though we could say it
is materialism if "matter" equals consciousness or "proto-consciousness").
But that's not relevant for the argumentation, because your conclusion is
not that from COMP it follows that naive materialism is false, but that we
can only associate experience with a measure on the computations.
You plainly miss all the alternative conclusions that are not your
conclusion or the commonly given alternatives. You are not even defending
the validity of your argumentation as such; you argue for its validity with
respect to commonly presented alternatives.
That might be enough for validity concerning naive materialism, but it
doesn't make the argument universally valid. I am just not arguing at all
for what your argument(s) seek to refute.

If you want your argument to be a refutation of COMP+naive materialism, then
write that. It might actually work as that, though I suspect that people who
adhere to a naive form of materialism have to be very dogmatic to keep that
belief for long, so they probably will often not be convinced by any
rational argument at all.
I mean, even almost universally accepted modern physics is not compatible
with naive materialism (things being made of spatially defined and non-fuzzy
stuff, like bricks or something).

benjayk