On Mar 1, 7:34 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 29 Feb 2012, at 23:29, Craig Weinberg wrote:

>
> >>>>>>> There is no such thing as evidence when it comes to qualitative
> >>>>>>> phenomenology. You don't need evidence to infer that a clock
> >>>>>>> doesn't
> >>>>>>> know what time it is.
>
> >>>>>> A clock has no self-referential ability.
>
> >>>>> How do you know?
>
> >>>> By looking at the structure of the clock. It does not implement
> >>>> self-
> >>>> reference. It is a finite automaton, much lower in complexity
> >>>> than a
> >>>> universal machine.
>
> >>> Knowing what time it is doesn't require self reference.
>
> >> That's what I said, and it makes my point.
>
> > The difference between a clock knowing what time it is, Google knowing
> > what you mean when you search for it, and an AI bot knowing how to
> > have a conversation with someone is a matter of degree. If comp claims
> > that certain kinds of processes have 1p experiences associated with
> > them it has to explain why that should be the case.
>
> Because they have the ability to refer to themselves and understand
> the difference between 1p, 3p, the mind-body problem, etc.
> That some numbers have the ability to refer to themselves is proved in
> computer science textbooks.
> A clock lacks it. A computer has it.

"This sentence" refers to 'itself' too. I see no reason why any number
or computer would have any more of a 1p experience than that.
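
To be concrete about what that textbook self-reference amounts to: it is
the quine / recursion-theorem trick, in which a program contains and
reproduces its own description. A minimal sketch in Python (my own
illustration, not anything Bruno has posted):

s = 's = %r\nprint(s %% s)'
print(s % s)

Run it and it prints its own source code, exactly. That is all the
self-reference of numbers guarantees by itself; whether anything 1p
follows from it is precisely what is in dispute.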

>
>
>
> >>> By comp it
> >>> should be generated by the 1p experience of the logic of the gears
> >>> of
> >>> the clock.
>
> >> ?
>
> > If the Chinese Room is intelligent, then why not gears?
>
> The chinese room is not intelligent.

I agree.

> The person who supervenes on some computation done by the Chinese room
> might be intelligent.

Like a metaphysical 'person' that arises out of the computation?

>
> >>>>> By comp logic, the clock could just be part of a
> >>>>> universal timekeeping machine - just a baby of course, so we can't
> >>>>> expect it to show any signs of being a universal machine yet,
> >>>>> but by
> >>>>> comp, we cannot assume that clocks can't know what time it is just
> >>>>> because these primitive clocks don't know how to tell us that they
> >>>>> know it yet.
>
> >>>> Then the universal timekeeping would be conscious, not the baby
> >>>> clock.
> >>>> Level confusion.
>
> >>> A Swiss watch has a fairly complicated movement. How many watches
> >>> does
> >>> it take before they collectively have a chance at knowing what
> >>> time it
> >>> is? If all self referential machines arise from finite automation
> >>> though (by UDA inevitability?), the designation of any Level at
> >>> all is
> >>> arbitrary. How does comp conceive of self referential machines
> >>> evolving in the first place?
>
> >> They exist arithmetically, in many relative ways, that is, relative
> >> to universal numbers. Relative "Evolution" exists in higher-level
> >> descriptions of those relations.
> >> Evolution of species presupposes arithmetic, and even comp, plausibly.
> >> Genetics is already digital relative to QM.
>
> > My question though was how many watches does it take to make an
> > intelligent watch?
>
> Difficult question. One hundred might be enough, but a good engineer
> might be able to optimize it. I would not be so astonished if one clock
> were enough to implement a very simple (and inefficacious) universal
> system, but then you would have to rearrange all the parts of that
> clock.

The misapprehensions of comp are even clearer to me when I imagine a
universal system implemented in clockwork mechanisms. Electronic
computers mesmerize us because electricity seems magical to us, but
picture a warehouse full of brass gears manually clattering together:
the assumption that there is a conscious entity experiencing something
there is hard to take seriously. It's like Leibniz's mill. If you were
able to make a living zygote large enough to walk into, it wouldn't be
like that. Structures would emerge spontaneously out of circulating
fluids and molecules acting simultaneously, not just in chain reaction.

>
> > It doesn't really make sense to me if comp were
> > true that there would be anything other than QM.
>
> ?

Why would there be any other 'levels'? No matter how complicated a
computer program is, it doesn't need to form some kind of non-
programmatic precipitate or accretion. What would be the point and how
would such a thing even be accomplished?

>
> > Why go through the
> > formality of genetics or cells? What would possibly be the point? If
> > silicon makes just as good of a person as do living mammal cells, why
> > not just make people out of quantum to begin with?
>
> Nature does that, but it takes time. If you have a brain disease, your
> answer is like a doctor who would tell you: just wait, life will appear
> on some planet, and with some luck it will build your brain.
> But my interest in comp is not in the practice, but in the conceptual
> revolution it brings.

I think that comp has conceptual validity, and actually could help us
understand consciousness in spite of it being exactly wrong about it.
Because of the disorientation problem, being wrong about it may in
fact be the only way to study it...as long as you know that it is only
showing you a shadow of mind, and not mind itself.

>
>
>
> >> A machine which can only add, cannot be universal.
> >> A machine which can only multiply cannot be universal.
> >> But a machine which can add and multiply is universal.
>
> > A calculator can add and multiply. Will it know what time it is if I
> > connect it to a clock?
>
> Too much ambiguity, but a priori: yes. Actually it does not need a
> clock. + and * can simulate the clock. A clock is part of all
> computers, explicitly or implicitly.

This is a good way to show the difference between the a-signifying,
generic 'sense' of time that you're talking about and the
anthropocentric, signifying sense. All of those old VCRs flashing
12:00 forever, even though there is a perfectly good clock on board,
show the extremely limited capacity of even a digital clock to tell
time. A microprocessor has only disconnected recursive enumeration.
There is no temporal context to it. If you set it to 7:00 or 13505:00,
it makes no difference. Those symbols aren't grounded in anything at
all; they are digital units representing nothing. No qualia, no
1p awareness.
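
To spell out the most that "+ and * can simulate the clock" can mean,
here is a minimal sketch (my own illustration, with made-up names): the
clock reduced to a counter advanced by addition and read off with
division and remainder.

# a "clock" as pure arithmetic: a counter modulo the minutes in a day
MINUTES_PER_DAY = 24 * 60

def tick(state):
    # one step of the simulated clock; no notion of "now", just addition
    return (state + 1) % MINUTES_PER_DAY

def display(state):
    # render the same integer as hh:mm; the label means nothing to the machine
    return "%02d:%02d" % (state // 60, state % 60)

state = 7 * 60            # call it "7:00" -- to the machine it is just the integer 420
for _ in range(5):
    state = tick(state)
print(display(state))     # prints 07:05

Whether the counter starts at 420 or at any other number makes no
difference to the arithmetic; the 'time' is entirely in our reading of
the output.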

>
>
>
> >> The machine is a whole, its function belongs to none of its parts.
> >> When the components are unrelated, the machine does not work. The
> >> machine works well when its components are well assembled, be it
> >> artificially, naturally, virtually or arithmetically (that does not
> >> matter, and can't matter).
>
> > The machine isn't a whole though. Any number of parts can be replaced
> > without irreversibly killing the machine.
>
> Like us. There is no one construct in the human body which lasts for
> more than seven years.

Not like us. If any major organ replacement fails for any reason, we
will die. A machine could sit in a machine shop for 100 years and be
perfectly viable if it gets fixed at that time.

> Brains have much shorter material identity. Only bones change more
> slowly, but are still replaced quasi completely in seven years,
> according to biologists.

True, but they are replaced with tissues which are appropriately aged,
not stem cells. The biographical narrative of the organism as a whole
is maintained.

>
>
>
> >> All known theories in biology are known to be reducible to QM, which
> >> is Turing emulable. So your theory/opinion is that all known theories
> >> are false.
>
> > They aren't false, they are only catastrophically incomplete. Neither
> > biology nor QM has any opinion on a purpose for awareness or living
> > organisms to exist.
>
> That does not entail that QM structures or biological structures cannot
> be aware, or bear a local notion of persons.

If we were not ourselves aware, would anything that QM or biology
entails lead us to suspect that such a thing as awareness could be
possible? Turing emulation counts on computation being sufficient to
support life and awareness, but that is an arbitrary wish. We aren't
seeing anything especially promising to back it up.

>
>
>
> >> You have to lower the comp level to the infinitely low, and
> >> introduce special infinities, not recoverable by the machine's 1p,
> >> to make comp false.
>
> > No, you can just reject the entire presumption that computation by
> > itself has causal efficacy.
>
> But it has causal efficacy, even with zombies, which can decide and act
> on the environment like us.

Only because there is a material body which can take input from and give
output to a material environment. A program without a physical substrate
has no causal efficacy (if it did, we wouldn't need computers).

>
> > Computation to me is clearly an
> > epiphenomenon of experienced events, not the other way around.
>
> Computations are well-defined objects in arithmetic. You cannot
> redefine standard notions to suit your point, or you can conclude
> whatever you want at the start.

Arithmetic is an experience too.

>
> > It is
> > like saying applause creates the opera.
>
> >>>>> This is
> >>>>> another variation on the Chinese Room. The pig can walk around at
> >>>>> 30,000 feet and we can ask it questions about the view from up
> >>>>> there,
> >>>>> but the pig has not, in fact learned to fly or become a bird.
> >>>>> Neither
> >>>>> has the plane, for that matter.
>
> >>>> Your analogy is confusing. I would say that the pig in the plane
> >>>> does
> >>>> fly, but this is out of the topic.
>
> >>> It could be said that the pig is flying, but not that he has
> >>> *learned
> >>> to fly* (and especially not learned to fly like a bird - which would
> >>> be the direct analogy for a computer simulating human
> >>> consciousness).
>
> >> That's why the flying analogy does not work. Consciousness concerns
> >> something unprovable for everyone concerned, except oneself.
>
> > No analogy can work any better because nothing else in the universe is
> > unprovable for everyone except oneself except consciousness.
>
> ?

Nothing but consciousness is subjective. Nothing else besides
consciousness is unprovable to others but unnecessary to prove to
oneself.

>
>
>
> >> May I ask you a question? Is a human with an artificial heart still a
> >> human?
>
> > Of course. A person with a wooden leg is still human as well. A person
> > with a wooden head is not a person though.
>
> OK. So the problem is circumscribed to the brain. Someone can have an
> artificial body, but not an artificial brain.
> Could someone survive with an artificial brain stem?

It depends on how good the artificial brain stem is. The more of the
brain you try to replace, the less well it will be tolerated, probably
exponentially so. Just as having four prosthetic limbs would be more
of a burden than just one, the more the ratio of living brain to
prosthetic brain tilts toward the prosthetic, the less person there is
left. It's not strictly linear, since neuroplasticity would allow the
person to scale down to what is left of the natural brain (as in cases
where people have an entire hemisphere removed), and even if the
prosthetics were good, it is not clear that it would feel the same for
the person. If the person survived with an artificial brain stem, they
might never feel that they were 'really' in their body again. If the
cortex were replaced, they might regress to infancy and never be able
to learn to use the new brain.

Craig

