On Mar 2, 2:49 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 02 Mar 2012, at 18:03, Craig Weinberg wrote:

>
> >>>>>>>>>>> There is no such thing as evidence when it comes to
> >>>>>>>>>>> qualitative phenomenology. You don't need evidence to infer
> >>>>>>>>>>> that a clock doesn't know what time it is.
>
> >>>>>>>>>> A clock has no self-referential ability.
>
> >>>>>>>>> How do you know?
>
> >>>>>>>> By looking at the structure of the clock. It does not
> >>>>>>>> implement self-reference. It is a finite automaton, much lower
> >>>>>>>> in complexity than a universal machine.
>
> >>>>>>> Knowing what time it is doesn't require self reference.
>
> >>>>>> That's what I said, and it makes my point.
>
> >>>>> The difference between a clock knowing what time it is, Google
> >>>>> knowing what you mean when you search for it, and an AI bot
> >>>>> knowing how to have a conversation with someone is a matter of
> >>>>> degree. If comp claims that certain kinds of processes have 1p
> >>>>> experiences associated with them, it has to explain why that
> >>>>> should be the case.
>
> >>>> Because they have the ability to refer to themselves and understand
> >>>> the difference between 1p, 3p, the mind-body problem, etc.
> >>>> That some numbers have the ability to refer to themselves is proved
> >>>> in computer science textbooks.
> >>>> A clock lacks it. A computer has it.
>
> >>> "This sentence" refers to 'itself' too. I see no reason why any
> >>> number
> >>> or computer would have any more of a 1p experience than that.
>
> >> A sentence is not a program.
>
> > Okay, "WHILE  program > 0 DO program. Program = Program + 1. END
> > WHILE"
>
> > Does running that program (or one like it) create a 1p experience?
>
> Very plausibly not. It lacks self-reference and universality.

Why isn't a WHILE loop self-referential?
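
For concreteness, here is the distinction as I understand computer
science textbooks draw it - a sketch in Python, and the framing is
mine, so correct me if it misses your meaning. The loop only rewrites
a value it was handed; it never refers to its own description. The
self-reference you invoke is the quine/recursion-theorem kind, where a
program provably has access to its own code:

def while_loop(program: int) -> None:
    # The WHILE loop above: it updates a variable that happens to be
    # named "program", but it never touches its own source code.
    # (It also never halts when program > 0.)
    while program > 0:
        program = program + 1

# A quine: the two lines below print themselves exactly. This is the
# kind of self-reference Kleene's recursion theorem guarantees any
# universal system can implement.
s = 's = %r\nprint(s %% s)'
print(s % s)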

>
>
>
> >>>>>>> By comp it should be generated by the 1p experience of the
> >>>>>>> logic of the gears of the clock.
>
> >>>>>> ?
>
> >>>>> If the Chinese Room is intelligent, then why not gears?
>
> >>>> The Chinese room is not intelligent.
>
> >>> I agree.
>
> >>>> The person which supervenes on
> >>>> some computation done by the Chinese room might be intelligent.
>
> >>> Like a metaphysical 'person' that arises out of the computation ?
>
> >> It is more like "prime numbers" arising from + and *. Or like a chess
> >> player arising from some program, except that prime number and chess
> >> player have (today) no universal self-referential abilities.
>
> > That sounds like what I said.
>
> >>>>>>>>> By comp logic, the clock could just be part of a universal
> >>>>>>>>> timekeeping machine - just a baby of course, so we can't
> >>>>>>>>> expect it to show any signs of being a universal machine yet,
> >>>>>>>>> but by comp, we cannot assume that clocks can't know what time
> >>>>>>>>> it is just because these primitive clocks don't know how to
> >>>>>>>>> tell us that they know it yet.
>
> >>>>>>>> Then the universal timekeeping would be conscious, not the baby
> >>>>>>>> clock.
> >>>>>>>> Level confusion.
>
> >>>>>>> A Swiss watch has a fairly complicated movement. How many
> >>>>>>> watches does it take before they collectively have a chance at
> >>>>>>> knowing what time it is? If all self-referential machines arise
> >>>>>>> from finite automata though (by UDA inevitability?), the
> >>>>>>> designation of any Level at all is arbitrary. How does comp
> >>>>>>> conceive of self-referential machines evolving in the first
> >>>>>>> place?
>
> >>>>>> They exist arithmetically, in many relative ways, that is,
> >>>>>> relative to universal numbers. Relative "evolution" exists in
> >>>>>> higher-level descriptions of those relations.
> >>>>>> The evolution of species presupposes arithmetic, and plausibly
> >>>>>> even comp. Genetics is already digital relative to QM.
>
> >>>>> My question though was how many watches does it take to make an
> >>>>> intelligent watch?
>
> >>>> Difficult question. One hundred might be enough, but a good
> >>>> engineer might be able to optimize it. I would not be so
> >>>> astonished if one clock were enough to implement a very simple
> >>>> (and inefficacious) universal system, but then you would have to
> >>>> rearrange all the parts of that clock.
>
> >>> The misapprehensions of comp are even clearer to me when imagining
> >>> a universal system in clockwork mechanisms. Electronic computers
> >>> sort of mesmerize us because electricity seems magical to us, but
> >>> having a warehouse full of brass gears manually clattering together
> >>> and assuming that there is a conscious entity experiencing something
> >>> there is hard to seriously consider. It's like Leibniz's windmill.
>
> >> Or like Ned Block's Chinese-people computer. This is not convincing.
>
> > Why not? Because our brain can be broken down into components also and
> > we assume that we are the function of our brain?
>
> We are relatively manifested by the function of our brain. "We" are
> not a function.

That seems to make 'functionalism' a misnomer.

>
> > If so, that objection
> > evaporates when we use a symmetrical form <> content model rather than
> > a cause >> effect model of brain-mind.
>
> Form and content are not symmetrical.
> The dependence of content on form requires at least a universal machine.

What if content is not dependent on form and requires nothing except
being real? I think that content and form are anomalous symmetries
inherent in all real things. It is only our perspective, as human
content, that makes us assume otherwise. Objectively, form and content
are different aspects of the same thing - one side a shape of matter
in space, the other a meaning through time.

>
>
>
> >> It is just helpful to understand that consciousness relies on
> >> logical, informational patterns rather than on matter. That problem
> >> is not a problem for comp, but for theories without a notion of the
> >> first person. It breaks down when you can apply a theory of
> >> knowledge, which is the case for machines, thanks to incompleteness.
> >> Consciousness is in the "true" fixed point of self-reference. It is
> >> not easy to explain this shortly, and it relies on Gödel's and
> >> Tarski's work. There will be opportunities to come back to this.
>
> > All of that sounds still like the easy problem of consciousness.
> > Arithmetic can show *that* self reference exists but it does so by
> > drawing a circle around a hole where the self should be. It is a 3p
> > outside view looking in and finding only an abstract vector (pseudo
> > 1p). This is indeed accurate from a 3p logical perspective, which is
> > why it is internally consistent and can be used to make sophisticated
> > puppets featuring trivial intelligence which can be elaborated to a
> > degree far exceeding human trivial intelligence, but still possessing
> > no feeling, understanding, or experience.
>
> Gödel 1931 stays in the 3p, and Gödel 1933 assesses this by observing
> that Bp cannot be used for knowledge, but that's where you can apply
> Theaetetus' theory of knowledge, by defining knowledge as Bp & p,
> factually. That truth restriction makes it possible to study meta-
> logically a non-formalizable theory of knowledge associated with the
> correct machine. It behaves like a knower, and it mirrors well the
> Plotinian conception of everything.
>
> Your non-attribution of consciousness to the machine might come from
> the fact that you believe that the machine is only handled by the 3p
> Bp, but it happens that the machine, and its universal self-
> transformation, has correct self-referential fixed points, and who are
> you to judge if she meant them or not? If you define consciousness by
> the restriction of Bp to such true fixed points, the PA baby machine
> will already not be "satisfied" if you call her a zombie.
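
Just so I'm sure I'm parsing the notation before replying - my own
transcription of the standard provability-logic reading into LaTeX,
not your text, so treat it as a sketch:

  Bp \equiv \mathrm{Bew}(\ulcorner p \urcorner)  % "the machine proves p"
                                                 % (Godel 1931, purely 3p)
  Kp \equiv Bp \land p                           % Theaetetus: knowledge as
                                                 % true belief (Bp & p)
  Dt \equiv \neg B \bot                          % consistency

G is then the logic of what the machine can prove about Bp, and G* the
logic of what is true about Bp; the G* minus G propositions are the
true-but-unprovable ones mentioned later in the thread.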

Take for example how a computer writes compared to a person. If you
blow up a character from a digital font enough, you will see the
jagged bits. If you look at a person's handwriting you will see
dynamic expressiveness and character. No two words or letters that a
person writes will be exactly the same.

A computer, of course, produces only identical characters, and its
text has no emotional connection to the author. There will never be a
computer that signs its John Hancock any differently than any other
computer - unless programmed specifically to do so. All machines have
the same personality by default (which is no personality).

This is a good example of how we can project our own perceptions on an
inanimate, unconscious canvas and see our own reflection in it. These
letters only look like letters to us, but to a computer, they look
like nothing whatsoever.

Reducing consciousness into mathematical terms can yield only a
mathematical sculpture that reminds us of consciousness. It is an
inside out approach, a reverse engineering of meaning by modeling
grammar and punctuation extensively. There is much more to awareness
than Bp & p.

>
>
>
> >>> If you were able to make a living zygote large enough to walk into,
> >>> it wouldn't be like that. Structures would emerge spontaneously out
> >>> of circulating fluid and molecules acting spontaneously and
> >>> simultaneously, not just in chain reaction.
>
> >>>>> It doesn't really make sense to me if comp were
> >>>>> true that there would be anything other than QM.
>
> >>>> ?
>
> >>> Why would there be any other 'levels'?
>
> >> So you assume QM in your theory. I do not.
>
> > It doesn't have to be QM, it can be whatever you like - arithmetic
> > truth, Platonia, etc. Why have any other 'level'?
>
> Nice. Let us choose first-order arithmetical truth: the formulas that
> we can write with "=", the logical symbols (with "A" for "for all",
> "E" for "there exists", and x, y, z, ... as variables), and the
> symbols "0", "+", "*", and "s".
>
> Do you agree with the intended meaning of the axioms I use:
>
> Ax ~(0 = s(x))   (for every number x, the successor of x is different
> from zero)
> AxAy ~(x = y) -> ~(s(x) = s(y))   (different numbers have different
> successors)
>
> Ax x + 0 = x
> AxAy x + s(y) = s(x + y)   (meaning x + (y + 1) = (x + y) + 1) - the
> laws of addition
>
> Ax x*0 = 0
> AxAy x*s(y) = x*y + x   - the laws of multiplication
>
> This defines a UD.
>
> And in that theory, we can (tediously) prove the existence of a
> machine which "believes" the axioms above together with the infinity
> of axioms (for every formula F translatable into the machine's
> language):
>
> (F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x)
>
> This defines the machines I will interview "in" the theory above.
>
> Then the levels will grow, including many cycles, (strange) loops,
> self-reference, and relatively true self-reference, and "absolute"
> fixed points, ... well a whole "theology", I think.

Thanks for writing it out that way. It helps, although I'm still not
able to get enough out of it to say for sure that I get it. The thing
is though, you can still have loops within loops and theology without
having any other level than the native arithmetic level. I still don't
see why or how any other levels with other characteristics would or
could arise.
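
For reference, here is the same theory in standard notation - a
straight transcription into LaTeX of the axioms Bruno wrote above, so
any transcription errors are mine:

  \forall x \, \neg (0 = s(x))
  \forall x \forall y \, (\neg (x = y) \to \neg (s(x) = s(y)))
  \forall x \, (x + 0 = x)
  \forall x \forall y \, (x + s(y) = s(x + y))
  \forall x \, (x \cdot 0 = 0)
  \forall x \forall y \, (x \cdot s(y) = x \cdot y + x)

and, for the machine interviewed "in" that theory, the induction
schema, one axiom per formula F of the machine's language:

  (F(0) \land \forall x \, (F(x) \to F(s(x)))) \to \forall x \, F(x)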

>
>
>
> >>> No matter how complicated a computer program is, it doesn't need to
> >>> form some kind of non-programmatic precipitate or accretion. What
> >>> would be the point, and how would such a thing even be accomplished?
>
> >> ?
>
> > Deep Blue or Watson don't need to define some new 'level' of
> > interpretation which transcends programming or re-presents it in some
> > way.
>
> Because Deep Blue is programmed to play a toy game, not the struggle-
> of-life game, in which you need self-referential control structures,
> short- and long-term memories, universality, Löbianity, etc.

I think even if it were programmed to play the struggle-of-life game,
it would still only ever be a toy struggle. That's why I say it has to
be made of something which knows what it means to want to survive and
remain living as itself...cells.

>
>
>
> >>>>> Why go through the formality of genetics or cells? What would
> >>>>> possibly be the point? If silicon makes just as good of a person
> >>>>> as do living mammal cells, why not just make people out of quantum
> >>>>> to begin with?
>
> >>>> Nature does that, but it takes time. If you have a brain disease,
> >>>> your answer is like a doctor telling you: just wait, life will
> >>>> appear on some planet, and with some luck it will rebuild your
> >>>> brain.
> >>>> But my interest in comp is not in the practice, but in the
> >>>> conceptual revolution it brings.
>
> >>> I think that comp has conceptual validity, and actually could help
> >>> us understand consciousness in spite of it being exactly wrong about
> >>> it. Because of the disorientation problem, being wrong about it may
> >>> in fact be the only way to study it...as long as you know that it is
> >>> only showing you a shadow of mind, and not mind itself.
>
> >> We don't know that.
>
> > I know it as much as I know anything.
>
> Then
> - either you have the magical ability to distinguish humans from
> zombies, or
> - you must explain what in the brain is not Turing emulable, and how
> it interacts with the behavior.

At this point, it's easy to distinguish humans from puppets. They look
weird. They sound weird. It's not a problem. Even if we were not able
to tell the difference with our own senses, by the time anything
remotely convincing in the way of androids comes out, there will
probably be detection technologies available at the same time. Even if
a puppet fools us, it probably won't fool another device designed to
detect an android signature.

The brain would be Turing emulable if there weren't a living person in
it. A dead brain is Turing emulable. What is not emulable is the
accumulated 1p experience of the person who lives their life through
the brain. It may well extend beyond that. There could easily be non-
comp influences during conception or birth which are unique. Identity
could be a phenomenon of an entire 5d+ entity condensed into a 4d
sequence of embodied experiences. That seems to be our intuition. We
feel that children's personalities anticipate their destiny in some
way. Rather than a blank tabula rasa, children are brimming with wild
non-comp idiosyncrasies which are reflected in their faces, their
behaviors, etc.

>
>
>
> >>>>>> A machine which can only add, cannot be universal.
> >>>>>> A machine which can only multiply cannot be universal.
> >>>>>> But a machine which can add and multiply is universal.
>
> >>>>> A calculator can add and multiply. Will it know what time it is
> >>>>> if I connect it to a clock?
>
> >>>> Too much ambiguity, but a priori: yes. Actually it does not need a
> >>>> clock. + and * can simulate the clock. A clock is a part of all
> >>>> computers, explicitly or implicitly.
>
> >>> This is a good way to show the difference between the a-signifying,
> >>> generic 'sense' of time that you're talking about, versus the
> >>> anthropocentric, signifying sense. All of those old VCRs flashing
> >>> 12:00 forever, even though there is a perfectly good clock on
> >>> board, show the extremely limited capacity of even a digital clock
> >>> to tell time. A microprocessor has only disconnected recursive
> >>> enumeration. There is no temporal context to it. If you set it to
> >>> 7:00 or 13505:00 it makes no difference. Those symbols aren't
> >>> grounded in anything at all; they are digital units representing
> >>> nothing at all. No qualia, no 1p awareness.
>
> >> So you assume a continuous time?
>
> > I assume no time other than memory of experience in the perpetual now.
>
> That's duration. If you say it is perpetual, I think that you still
> assume some notion of time.

It's perpetual in the sense that it's always now.

>
> > Knowing the time is a function of understanding. It only has relevance
> > in particular contexts, like 3D vision or olfactory sense.
>
> OK.
>
>
>
> >> If you were an alien, you might say that human people have no
> >> qualia, given that qualia do not seem to be present in any cut
> >> section of a human body.
>
> > They would be right that I have no alien qualia. I don't say that the
> > components of a clock don't have qualia - I think that they must, but
> > I suspect it's much less significant (say one quintillionth) than
> > ours. Because the qualia is so primitive, there is no 1p coherence to
> > the 'clock' assembly as a whole. There is zero increase in 1p
> > significance over and above the value of the parts. That is not to say
> > there is no increase in significance to us by virtue of possessing and
> > using the clock; of course there is great utility, joy, comfort,
> > learning, knowing, etc.
>
> Why would the consciousness of the clock parts not add in the clock-
> reassembled computer,

Because the parts can't make sense of each other. They have to be
chemically combined.

> as you seem to say that human consciousness
> comes from the addition of the consciousness of its neurons?

Not addition of neurons, but multiplication of a single stem cell as
neurons.

>
>
>
> >>>>>> The machine is a whole; its function belongs to none of its
> >>>>>> parts. When the components are unrelated, the machine does not
> >>>>>> work. The machine works well when its components are well
> >>>>>> assembled, be it artificially, naturally, virtually or
> >>>>>> arithmetically (that does not matter, and can't matter).
>
> >>>>> The machine isn't a whole though. Any number of parts can be
> >>>>> replaced without irreversibly killing the machine.
>
> >>>> Like us. There is no one construct in the human body which lasts
> >>>> for more than seven years.
>
> >>> Not like us. If any major organ replacement fails for any reason, we
> >>> will die. A machine could sit in a machine shop for 100 years and be
> >>> perfectly viable if it gets fixed at that time.
>
> >> Some seeds can live thousands of years.
>
> > A thousand years means nothing to a seed. It doesn't begin living
> > until it germinates; that becomes year zero for it.
>
> OK. But then why would the machine not live again when someone buys it
> and turns it on? Why would that not become the year zero for it
> (assuming its memory is virgin of experiences)?

Because the machine as a whole isn't an entity, it's an assembly of
different parts. It only seems like an 'it' to us. You can tell the
difference too because once you turn the seed on, you can't stop it
without killing it. There's no pause. Machines can be stopped and re-
started generally.

>
>
>
> >> You should not compare the crude man-made machine with natural
> >> nanotechnology having a very long history. No one doubts that life
> >> is a very sophisticated technology. Some frogs can freeze
> >> completely and, after four months of seeming death, come back to
> >> their activities.
>
> > As far as I know, all living organisms arise from a single dividing
> > cell and no machines are built that way.
>
> Is a ribosome alive?

Not by itself. It's part of the context of a cell, which is alive by
itself.

>
> > This may be a much bigger
> > deal than it sounds if consciousness 'insists' through memory rather
> > than appears instantaneously as a function of objects in space.
>
> Memory is a key, sure. But it is an information pattern, usually
> interpreted by some information handling.

We don't know that. What you are talking about may only be recall. Our
experience as humans may be defined by an ability to temporarily
forget that we are the entire universe through all time.

>
> I will not blame you for reintroducing the ghost in the machine.
> Crudely said, computer science justifies the existence of the ghost
> (software) in the machine (hardware).

I steer away from ghosts, though, because it makes a pseudosubstance
of something which I understand to be the actual opposite of a
substance. We can talk about ghosts figuratively, sure, or souls or
whatever, but that reifies existence at the expense of essence - which
is a perfectly rational thing to do unless you are trying to talk
about consciousness itself. If you privilege existence when you talk
about consciousness, you immediately get lost in a 3p model of
consciousness. Software is precisely that 3p model of (some aspects
of) consciousness. It has the same ghostly qualities if you force it
into an existential context, but it's a different animal. The
insubstantiality of concepts alone doesn't make them sentient, nor
does their ability to overlap in function with sentience.

>
> > If we
> > started building machines this way, as nanotech seeds, I think we
> > would gain 1p sentience, but lose control of it.
>
> Yes. But that's the point. Now, if you accept an artificial brain,
> and if the level is chosen correctly, then by definition you will
> neither lose nor gain more control over yourself than you already
> have, or have not.

We wouldn't be able to make it grow into a brain necessarily though.
We would have to try to figure out how to motivate it to do what we
want. It will probably be much easier to just grow a brain out of stem
cells.

>
>
>
> >>>> Brains have much shorter material identity. Only bones change more
> >>>> slowly, but are still replaced quasi completely in seven years,
> >>>> according to biologists.
>
> >>> True, but they are replaced with tissues which are appropriately
> >>> aged, not stem cells. The biographical narrative of the organism as
> >>> a whole is maintained.
>
> >> Even if that is correct for current machines, comp is considering all
> >> machines.
>
> > That's the theory.
>
> Yes.
>
>
>
> >>>>>> All known theories in biology are known to be reducible to QM,
> >>>>>> which is Turing emulable. So your theory/opinion is that all
> >>>>>> known theories are false.
>
> >>>>> They aren't false, they are only catastrophically incomplete.
> >>>>> Neither biology nor QM has any opinion on a purpose for awareness
> >>>>> or living organisms to exist.
>
> >>>> That does not entail that QM structures or biological structures
> >>>> cannot be aware, or bear a local notion of persons.
>
> >>> If we were not ourselves aware, would anything that QM or biology
> >>> entails lead us to suspect that such a thing as awareness could be
> >>> possible?
>
> >> Yes. Their ability to support universality and self-reference.
>
> > Why should universality and self-reference indicate awareness of any
> > kind? I have motion sensors on my garage lights. I could make them
> > universal by plugging them into a TV set instead of lights.
>
> ?

I mean any universal machine could use a motion sensor or any other
electronic peripheral.

>
> > They are
> > self referential because whenever I go in the garage at night, they
> > turn on to greet me and to make their presence known.
>
> This means perhaps that they are "Craig Weinberg" referential, not
> self-referential.

They pay the same compliment to my wife and other guests though.

>
>
>
> >>> Turing emulation counts on computation being sufficient to
> >>> support life and awareness, but it's an arbitrary wish.
>
> >> All theories collect evidence. Comp has much positive evidence.
> >> Non-comp has only the absence-of-a-solution-to-a-problem kind of
> >> evidence.
>
> > Non-comp has sense.
>
> I completely agree.
>
> > It doesn't need evidence because the thought of
> > needing evidence is already non-comp symbol grounding.
>
> I might agree with this, because comp implies something very akin to
> this, from the machine 1-povs.

cool.

>
> > A machine will
> > solve problems using whatever parameters or data it is given. It has
> > no capacity to doubt them unless programmed to act as if it were
> > doubting them.
>
> It is here that you should study the math. Ideally self-referentially
> correct machines cannot miss the doubt. They quickly become modest,
> and know that whatever they learn, their ignorance will only get
> bigger. Forever.

Then they have no doubt of their ignorance instead.

>
> > Machines never want or need evidence. They extrapolate
> > recursively, forever.
>
> That might be defended for the ideal virgin machine, dissociated
> from all other universal machines.
> But any reasonable computation will involve the many dependencies
> between machines, and this can play an important role, as Stephen
> suspects, I think rightly, in the first person plural realities, which
> makes number dreams consistent and coherent (multi-consistent with
> respect to universal numbers).
> In that case, the machine cries for evidence, when betting on such
> local universal neighbors.

Does it really care whether or not it gets evidence though? If it gets
evidence, then it bets one way; if not, then it bets another way. I
have a hard time attributing remorse or impatience to a Turing
machine.

>
>
>
> >> But non-comp faces the same difficulties, except that it hides them
> >> more easily, in special vague infinities.
>
> > It doesn't hide anything, any more than our sense of humor hides.
>
> That's only funny :)
>
> > It is the theory which needs to justify its relevance in terms of
> > sense, not the other way around.
>
> You are right, but only from the first person point of view. It is a
> key point for the knower, perhaps the human right brain, and part of
> the limbic system. But when doing science, the theory should no longer
> refer to the sense of the one who does the theory, only to the sense
> that is the object of the theory; if not, you do pseudo-religion-
> science. You can do that. There is a public for that, but it is no
> longer like searching for the truth; it is asserting personal opinions
> (in the best case, because usually the pseudo-thing is just a selling
> strategy).

I would agree except in this special case of studying consciousness
itself. Everything goes out the window when we look at the qualitative
side of the universe. We have to come to it on its terms. The object
of the theory is not an object, it is a subject.

>
>
>
> >>> We aren't
> >>> seeing anything especially hopeful to back it up.
>
> >> Study a book in computer science. Look at molecular biology, or
> >> quantum mechanics.
>
> > Nice theory but no payoff in terms of breakthroughs in consciousness.
>
> Read this list, study my work :)
>

I do what I can.

>
>
> >>>>>> You have to lower the comp level to the infinitely low, and
> >>>>>> introduce special infinities, not recoverable by the 1p machine,
> >>>>>> to make comp false.
>
> >>>>> No, you can just reject the entire presumption that computation by
> >>>>> itself has causal efficacy.
>
> >>>> But it has causal efficacy, even with zombies, which can decide
> >>>> and act on the environment like us.
>
> >>> Only because there is a material body which can input and output
> >>> to a
> >>> material environment.
>
> >> An immaterial body can input and output to an immaterial environment.
>
> > Only in theory. I don't think it is the case in reality.
>
> It is a consequence of the theory that this is the case in reality. We
> start from the innocuous local relative "yes doctor", and then the
> reasoning shows that we are already in an arithmetical matrix, and we
> can explain why, from inside, it looks like an analytical, physical,
> gigantic history.

It's still de-presentational. There is no actual there there. No
'show', only an idea that something shaped like a theater can
accommodate a stage and props. Comp seems to say "Since stage+props =
show, then we have explained show business in terms of the traffic
patterns of lifting trunks and the angles of Klieg lights".

>
>
>
> >>> A program, without a physical substrate, has no
> >>> causal efficacy (if it could, we wouldn't need computers).
>
> >> Yes, but the point is that a physical substrate is a relative notion,
> >> for relative (indexical) use.
>
> > We don't know that.
>
> I was assuming comp. That is true as far as comp is true.
>
> > Its qualia of physicality to us is certainly a
> > relative notion, but with sense, it would be the case that the
> > relative notion is the actual concrete presentation of realism for us.
> > Its specular reflection - seeing 'through' the proximal surface to the
> > distal image. Sense means it is both relative and absolute.
>
> Sense is absolute, in a sense, for machines. I don't think there is a
> problem here. Sense is absolute from the 1-pov, and relative from (and
> to) 3p local descriptions.

I would say for a person that sense is generally relative for proximal
3p (dogs seem like they are part of the family), absolute for distal
3p (chemistry simply is), and both relative and absolute for 1p (It
seems like I'm in a bad mood but it seems like I can't change it).

>
>
>
> >>>>> Computation to me is clearly an
> >>>>> epiphenomenon of experienced events, not the other way around.
>
> >>>> Computations are well-defined objects in arithmetic. You cannot
> >>>> redefine standard notions to suit your point, or you can conclude
> >>>> whatever you want at the start.
>
> >>> Arithmetic is an experience too.
>
> >> You confuse arithmetic and the experience of arithmetic.
>
> > No, it's just that I think all arithmetic is an experience of
> > something - whether it's a brain, a cell, or a semiconductor. Not
> > empty space.
>
> It is easier to explain the sense of "cells", "brains",
> "semiconductors", and the "experience of arithmetic" from arithmetic
> than from brains and cells. You assume what we have to explain.

Sure, it's easier because it's a lowest common denominator sense of
solid state matter. It is easier to explain the shadows of animals
than the animals themselves.

>
> Comp makes this clear. And that is its virtue.

And a powerful virtue it is... unless you want to understand the whole
truth of consciousness, even if it seems less clear.

>
> Your "non-comp theory" seems to assume both matter and mind, which is
> too much for me. (Sorry).

Haha. No problem, it just sounds kind of funny. Is it really so
strange to think of the universe as actually containing matter and
mind? Especially when the alternative is dreaming numbers and
invisible machines.

>
>
>
> >>>>>>>>> This is another variation on the Chinese Room. The pig can
> >>>>>>>>> walk around at 30,000 feet and we can ask it questions about
> >>>>>>>>> the view from up there, but the pig has not, in fact, learned
> >>>>>>>>> to fly or become a bird. Neither has the plane, for that
> >>>>>>>>> matter.
>
> >>>>>>>> Your analogy is confusing. I would say that the pig in the
> >>>>>>>> plane does fly, but this is off topic.
>
> >>>>>>> It could be said that the pig is flying, but not that he has
> >>>>>>> *learned to fly* (and especially not learned to fly like a bird
> >>>>>>> - which would be the direct analogy for a computer simulating
> >>>>>>> human consciousness).
>
> >>>>>> That's why the flying analogy does not work. Consciousness
> >>>>>> concerns something unprovable for everyone concerned, except
> >>>>>> oneself.
>
> >>>>> No analogy can work any better, because nothing else in the
> >>>>> universe is unprovable for everyone except oneself - except
> >>>>> consciousness.
>
> >>>> ?
>
> >>> Nothing but consciousness is subjective. Nothing else besides
> >>> consciousness is unprovable to others but unnecessary to prove to
> >>> oneself.
>
> >> Good, that's a point for the machine's consciousness theory, which
> >> relates consciousness and consistency. Indeed, only consistency and
> >> all G* minus G propositions appear to the machine as unprovable to
> >> others, but easily inferable for oneself.
>
> > Consistency is only a comment on an aspect of consciousness though,
> > just as a shadow of a tree has a basic tree shape. It doesn't define
> > the tree, it's a silhouette.
>
> It is already something, and it is just Dt; look at the many sensical
> combinations the machine will give sense to.
>
>
>
> >>>>>> May I ask you a question? Is a human with an artificial heart
> >>>>>> still a
> >>>>>> human?
>
> >>>>> Of course. A person with a wooden leg is still human as well. A
> >>>>> person
> >>>>> with a wooden head is not a person though.
>
> >>>> OK. So the problem is circumscribed to the brain. Someone can have
> >>>> an artificial body, but not an artificial brain.
> >>>> Could someone survive with an artificial brain stem?
>
> >>> It depends how good the artificial brain stem was. The more of the
> >>> brain you try to replace, the more intolerant it will be, probably
> >>> exponentially so.
>
> >> So, here you seem to agree that it is just a matter of complexity.
>
> > Not at all. If you are watering plants with vinegar, it is not the
> > complexity which makes it a poor substitute for water. Complexity is
> > important, but it's a red herring as far as a living organism having
> > parts replaced. It's more about organic authenticity and similarity on
> > all levels.
>
> Hmm....
>
>
>
> >> But
> >> we abstract from this in the conceptual theory. Such a complexity is
> >> irrelevant. We are not addressing any practical issue here.
>
> >>> Just as having four prosthetic limbs would be more
> >>> of a burden than just one, the more the ratio of living brain to
> >>> prosthetic brain tilts toward the prosthetic, the less person
> >>> there is
> >>> left. It's not strictly linear, as neuroplasticity would allow the
> >>> person to scale down to what is left of the natural brain (as in
> >>> cases
> >>> where people have an entire hemisphere removed), and even if the
> >>> prosthetics were good it is not clear that it would feel the same
> >>> for
> >>> the person.
>
> >> Theoretically, this will be true only if you lower the level
> >> infinitely far down.
>
> > Or if there is no level at all. At what level can water be substituted
> > with something that is not wet?
>
> Easy. At the molecular level. One H2O molecule is not wet, I think.
> If you doubt this, you might go to the level of strings, if you want.
> A string is hardly wet!

It could just as easily be the very essence of wet. Our ideas of
molecules are only the reports of instruments made of things like
steel and glass. They wouldn't know wet if they found it.

>
>
>
> >>> If the person survived with an artificial brain stem, they
> >>> may never again feel that they were 'really' in their body again. If
> >>> the cortex were replaced, they may regress to infancy and never be
> >>> able to learn to use the new brain.
>
> >> Why? You need infinities to assess such truth conceptually.
>
> > Because you are replacing part of a tree that knows it's a tree. It
> > remembers and has expectations. If someone suddenly replaced your home
> > with a structure that looked the same on the outside but was
> > cinderblocks and asphalt on the inside, you wouldn't be able to go on
> > living as usual.
>
> It means someone has made a substitution at a wrong level, not that
> such a substitution level does not exist.
>

That could be true, but it doesn't have to be. It may not be
quantitative in the first place.

Craig
