On Mar 4, 2:04 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> But this is a distracting issue given that your point is that NO
> program ever can give a computation able to manifest consciousness.

If an organism is already able to sustain consciousness, I think that
a program could jump-start consciousness or assist the organism in
healing itself. My view is that there is no actual program, in the
sense that a shadow is not an independent entity. Computation is an
epiphenomenon of awareness.

>
>
>
> >> We are relatively manifested by the function of our brain. "we" are
> >> not function.
>
> > That seems to make 'functionalism' a misnomer.
>
> Yes. For many reasons, including that functions can be seen as
> extensional objects, defined by their inputs and outputs, or as
> intensional objects, by taking into account how they are computed in
> some detail, or by taking into account modalities, resources, etc.
> Putnam's functionalism was fuzzy on the choice of level, leading him
> and his followers to some confusion.
>
> In my early writing I define comp by "there exists a level such that
> functionalism is right at that level". That "existence" is not
> constructive (and thus the need for an act of faith), and that allows
> some clarification of what comp can mean.
>

It all seems very squirrely to me. The relation between computation
and material substrates, the unexplained existence of presentation and
anomalous levels of presentation, the assignment of awareness to self-
referential abstractions... it seems like any real explanatory power
of comp becomes more and more diluted the more we examine it. To me it
looks like a vanishing God of the Gaps.

>
>
> >>> If so, that objection
> >>> evaporates when we use a symmetrical form <> content model rather
> >>> than
> >>> a cause >> effect model of brain-mind.
>
> >> Form and content are not symmetrical.
> >> The dependence of content on form requires at least a universal
> >> machine.
>
> > What if content is not dependent on form and requires nothing except
> > being real?
>
> Define "real".

To define "real" overturns the superiority of reality and replaces it
with theory. Real defines itself.

>
> > I think that content and form are anomalous symmetries
> > inherent in all real things. It is only our perspective, as human
> > content, that makes us assume otherwise. Objectively, form and content
> > are different aspects of the same thing - one side a shape of matter
> > in space, the other a meaning through time.
>
> Except that it is usually not symmetrical. Forms are 3p describable
> and contents are not.

That is their symmetry. They are anomalous. 1p is private, 3p is
public. This isn't coincidental. It isn't that 1p tends to be private
but occasionally our thoughts drip out of our ears; it is that the
symmetry circumscribes 1p and 3p completely. There is no way in which
they are not precisely opposite. Wherever there seems to be one, that's
where our way of thinking about it needs to be adjusted. For example,
we might think that space is not the opposite of time, because we
observe that objects move over time. Using the symmetry as our guide
we can ask whether we really do see time passing, or whether we see a
fluid phenomenology of change in a limited experiential window of
'now'. We see objects change position and transform, but we don't see
anything that is not 'now' outside of ourselves. We see things over
there, but never things over then. Our experience of time is inferred
directly through perceptual inertia and memory. Our experience of
space is inferred indirectly through relativity and electromagnetism -
the 3p view of our body. Internally we feel no space - no stable sense
of being able to set something down in our mind and come back to it in
a month or a year.

> Also, that contradicts what you say above, that content might be
> independent of form.

Independent of form in the sense that content cannot be created
through manipulations of apparent form alone. Form is not a reaction
to content, it is the back door of content. They are an inseparable
whole, but content is the head and form is the tail. You can't reverse-
engineer the head from the tail.

> Then, even if that were true, why would that not apply to machines?

Because a machine is only real to the programmer and the audience. The
material substrate is not generating the machine out of its own sense
and motive, as it does in a living cell or species; it is only
performing its normal behaviors to the extent that it can under an
alien motive which it has no capacity to understand. The substrate is
real, and it has experience on its own level, but not on the level
that the audience might assume - as in this picture:

http://24.media.tumblr.com/tumblr_lmbsmluCns1qep93so1_500.jpg

>
>
>
> >> Your non-attribution of consciousness to the machine might come from
> >> the fact that you believe that the machine is only handled by the 3p
> >> Bp, but it happens that the machine, and its universal self-
> >> transformation, has self-referentially correct fixed points, and who
> >> are you to judge whether she meant them or not? If you define
> >> consciousness by the restriction of Bp to such true fixed points, the
> >> PA baby machine will already not be "satisfied" if you call her a
> >> zombie.
>
> > Take for example how a computer writes compared to a person. If you
> > blow up a character from a digital font enough, you will see the
> > jagged bits. If you look at a person's hand writing you will see
> > dynamic expressiveness and character. No two words or letters that a
> > person writes will be exactly the same.
>
> > A computer of course, produces only identical characters, and its text
> > has no emotional connection to the author. There will never be a
> > computer that signs its John Hancock any differently than any other
> > computer - unless programmed specifically to do so. All machines have
> > the same personality by default (which is no personality).
>
> > This is a good example of how we can project our own perceptions on an
> > inanimate, unconscious canvas and see our own reflection in it. These
> > letters only look like letters to us, but to a computer, they look
> > like nothing whatsoever.
>
> You confuse the proposition "could a computer" think, with the
> question "could today's man-made computers think".

No, I don't. I only bring up today's computers because there is no
other concrete example I can use, and because of the gap between the
reality of what has been engineered so far and where our expectations
would be were comp actually true. I can't very well use the
unknowable future of computing as an argument for or against anything.

>
>
>
> > Reducing consciousness into mathematical terms
>
> Which comp precisely does not. Comp might be said to be
> theologicalist, even if 99% mathematicalist.

I have a feeling that depends on who you ask. I think it is pretty
common for computationalism to be understood precisely as the position
that consciousness can be digitally modeled and generated
artificially.

>
> > can yield only a
> > mathematical sculpture that reminds us of consciousness. It is an
> > inside out approach, a reverse engineering of meaning by modeling
> > grammar and punctuation extensively. There is much more to awareness
> > than Bp & p.
>
> You could refute plasma physics by saying that plasma has nothing to
> do with the ink and paper which typically appear in books on plasma
> physics.
>
> Ironically this is, in the language of the Löbian machine, a confusion
> very similar to the confusion between Bp and Bp & p.
> But I think that you need to invest more time in the technical
> material to appreciate this, to be honest.

I think it's more like defining plasma physics by mathematics alone
without mentioning matter at all.

>
>
>
> >>>>> If
> >>>>> you were able to make a living zygote large enough to walk into,
> >>>>> it
> >>>>> wouldn't be like that. Structures would emerge spontaneously out
> >>>>> of
> >>>>> circulating fluid and molecules acting spontaneously and
> >>>>> simultaneously, not just in chain reaction.
>
> >>>>>>> It doesn't really make sense to me if comp were
> >>>>>>> true that there would be anything other than QM.
>
> >>>>>> ?
>
> >>>>> Why would there be any other 'levels'?
>
> >>>> So you assume QM in your theory. I do not.
>
> >>> It doesn't have to be QM, it can be whatever you like - arithmetic
> >>> truth, Platonia, etc. Why have any other 'level'?
>
> >> Nice. Let us choose first-order arithmetical truth: the formulas
> >> that we can write with "=", the logical symbols (with "A" for "for
> >> all", "E" for "there exists", and x, y, z, ... as variables), and the
> >> symbols "0", "+", "*" and "s".
>
> >> Do you agree with the intended meaning of the axioms I use:
>
> >> Ax ~(0 = s(x))   (for every number x, the successor of x is
> >> different from zero)
> >> AxAy (~(x = y) -> ~(s(x) = s(y)))   (different numbers have different
> >> successors)
>
> >> Ax x + 0 = x
> >> AxAy x + s(y) = s(x + y)   (meaning x + (y + 1) = (x + y) + 1): the
> >> laws of addition
>
> >> Ax x*0 = 0
> >> AxAy x*s(y) = x*y + x   (the laws of multiplication)
>
> >> This defines a UD.
>
> >> And in that theory, we can prove (tediously) the existence of a
> >> machine which "believes" the axioms above together with the infinity
> >> of axioms (for every formula F translatable into the machine's
> >> language):
>
> >> (F(0) & Ax(F(x) -> F(s(x)))) -> AxF(x)
>
> >> This defines the machines I will interview "in" the theory above.
>
> >> Then the levels will grow, including many cycles, (strange) loops,
> >> self-reference, and relatively true self-reference, and "absolute"
> >> fixed points, ... well a whole "theology", I think.
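
For readers (like me) who find the notation heavy: the two pairs of
recursion equations above are just the schoolbook definitions of
addition and multiplication unwound onto the successor symbol. A
minimal sketch in Python, purely illustrative and not part of Bruno's
formalism, reading s(x) as "the successor of x" and building every
number from 0 by repeated application of s:

def s(x):
    # successor: s(x)
    return ('s', x)

ZERO = '0'

def add(x, y):
    # Ax   x + 0 = x
    # AxAy x + s(y) = s(x + y)
    return x if y == ZERO else s(add(x, y[1]))

def mul(x, y):
    # Ax   x * 0 = 0
    # AxAy x * s(y) = x*y + x
    return ZERO if y == ZERO else add(mul(x, y[1]), x)

def num(n):
    # build the numeral s(s(... s(0) ...)) for an ordinary int n
    return ZERO if n == 0 else s(num(n - 1))

def val(x):
    # read a numeral back as an ordinary int
    return 0 if x == ZERO else 1 + val(x[1])

assert val(add(num(2), num(3))) == 5
assert val(mul(num(2), num(3))) == 6

The sketch adds nothing to the axioms; it only shows that they already
fix how every sum and product computes.
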
>
> > Thanks for writing it out that way. It helps, although I'm still not
> > able to get enough out of it to say for sure that I get it. The thing
> > is though, you can still have loops within loops and theology without
> > having any other level than the native arithmetic level. I still don't
> > see why or how any other levels with other characteristics would or
> > could arise.
>
> Because you can define the working of the theory above *in* the
> language of the theory above.
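
That much I can grant: a system rich enough to do arithmetic can carry
a complete description of its own workings, the way a program can
print its own source. A one-line illustration in Python (my own, not
Bruno's construction; the comment line is not part of the
self-reproducing line):

# Running the line below prints exactly that line, nothing else.
quine = 'quine = {!r}; print(quine.format(quine))'; print(quine.format(quine))
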

But why would either theory appear to be in a different language? A
computer doesn't care whether you use a variable or spell out the full
value each and every time. A variable is convenient for a programmer to
work with, but the computer doesn't care what level, form, or language
the code is in as long as it can execute it. We do care, though. A
sound isn't the same as a color, even if it represents the same
'information' to us functionally. If I code one outcome red and the
other blue, or one a chime and the other a buzz, it makes no
difference to the microprocessor. You could probably prove that it
doesn't know the difference.
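
That last point can even be checked mechanically. A minimal sketch in
Python (my own illustration, using only the standard dis module): two
routines that differ only in whether the outcome is labelled red/blue
or chime/buzz compile to exactly the same sequence of byte-code
operations; only the inert constant strings differ.

import dis

def signal_color(ok):
    return 'red' if ok else 'blue'

def signal_sound(ok):
    return 'chime' if ok else 'buzz'

# Same opcodes in the same order for both routines; nothing at this
# level marks one outcome as a color and the other as a sound.
ops_color = [ins.opname for ins in dis.get_instructions(signal_color)]
ops_sound = [ins.opname for ins in dis.get_instructions(signal_sound)]
assert ops_color == ops_sound
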

> This is long and tedious to show, and
> akin to programming in a very low-level language (which is the price
> of its metamathematical simplicity).
>
> But that's standard, and appears for the first time in Gödel 1931.
> From Gödel 1931 to Solovay 1976 there has been tremendous progress,
> and comp makes it exploitable, notably to study what an ideally
> correct machine can and cannot prove about herself, and what she can
> guess when looking inward.
>
> UDA shows physics is there, and the work of Solovay makes it almost
> easy to derive the propositional level of physics from the machine's
> introspection, making comp already testable; and up to now, comp is
> saved by the quantum aspect of physics.

It doesn't surprise me at all that physics is there. Except physical
power. That isn't there. If I'm right, computation is a lowest common
denominator 3p sense, but it has no motive. It can only borrow motive
from a real subject and project it on another object.

>
>
>
> >>>>> No matter how complicated a
> >>>>> computer program is, it doesn't need to form some kind of non-
> >>>>> programmatic precipitate or accretion. What would be the point and
> >>>>> how
> >>>>> would such a thing even be accomplished?
>
> >>>> ?
>
> >>> Deep Blue or Watson don't need to define some new 'level' of
> >>> interpretation which transcends programming or re-presents it in
> >>> some
> >>> way.
>
> >> Because Deep Blue is programmed to play some toy game, not the
> >> struggle-of-life game, in which you need self-referential control
> >> structures, short- and long-term memories, universality, Löbianity,
> >> etc.
>
> > I think even if it were programmed to play the struggle-of-life game
> > it would still only ever be a toy struggle. That's why I say it has to
> > be made of something which knows what it means to want to survive and
> > remain living as itself... cells.
>
> But you don't explain how cells do that. If you tell me "Not like the
> machine", you seem to possess an infinite knowledge about cells and
> about machines which I don't have.

I'm looking at what our own mortality means to us and extrapolating
that down to a cellular level, just as you are taking our intellect
and extrapolating it out to a program. We want to survive not because
there is a script which sets that condition to true; it's just the
opposite: on some level, we are part of a system that wants us to die,
and we have to wrestle with the prospect of continuing to live. I
suspect that as a neurological entity we are quite removed from the
realities of mortality for a long time, but eventually we learn what
the rest of our body might already know. It's a strange position to be
in, being a nervous system; our awareness is so elaborately nested and
protected. We are an experiment in neoteny.

>
>
>
> >>> I know it as much as I know anything.
>
> >> Then
> >> - either you have the magical ability to distinguish humans from
> >> zombies, or
> >> - you must explain what in the brain is not Turing emulable, and how
> >> it interacts with the behavior.
>
> > At this point, it's easy to distinguish humans from puppets.
>
> ?
> Zombies (p-zombies) act like humans.

In theory. I could say that a human-shaped shadow acts just like a
human's shadow too.

>
> > They look
> > weird. They sound weird. It's not a problem. Even if we were not able
> > to tell the difference with our own senses, by the time anything
> > remotely convincing in the way of androids comes out, there will
> > probably be detection technologies available at the same time. Even if
> > a puppet fools us, it probably won't fool another device designed to
> > detect an android signature.
>
> This is surrealistic! You are telling me that a machine, and thus a
> zombie, for you, will be able to tell a conscious human from a
> zombie!!

Of course. It would be no different from any security system. The
hackers will always be one step ahead, knowing exactly what to look
for to give the puppet away.

>
>
>
> > The brain would be Turing emulable if there weren't a living person in
> > it. A dead brain is Turing emulable. What is not emulable is
> > accumulated 1p experience of the person who lives their life through
> > the brain.
>
> This is ambiguous, and makes sense for the machine, both with 1p
> defined by personal memory in teleportation protocol, and by Bp & p.

So would a machine that is running not be Turing emulable?

>
>
>
> > Because the machine as a whole isn't an entity, it's an assembly of
> > different parts. It only seems like an 'it' to us. You can tell the
> > difference too because once you turn the seed on, you can't stop it
> > without killing it. There's no pause. Machines can be stopped and re-
> > started generally.
>
> You confuse natural machines, which have been evolving for a long
> time, with man-made machines following a different type of evolution.
> You keep making an invalid argument again and again.

You keep making the same tautological counter-argument again and
again. I could just say, 'in the future comp will be disproved because
by then our understanding will have improved to the point that we will
no longer have need of such illusions'.

>
>
>
> >>>> You should not compare the crude man-made machine with natural
> >>>> nanotechnology, which has a very long history. No one doubts that
> >>>> life is a very sophisticated technology. Some frogs can freeze
> >>>> completely and, after 4 months of seeming death, come back to
> >>>> their activities.
>
> >>> As far as I know, all living organisms arise from a single dividing
> >>> cell and no machines are built that way.
>
> >> Is a ribosome alive?
>
> > Not by itself. It's part of the context of a cell, which is alive by
> > itself.
>
> OK, but then the point you made about the reassembled clock is no
> longer valid.

No, because the clock is never anything more than parts assembled by
an alien agenda. To the clock, there is no clock world. The atoms of
the 'parts' (not parts to them, but to us of course) have a physical
world of torque, density, velocity, maybe some acoustic sense... who
knows, maybe they are all listening to a cosmic radio, but they have
no way of knowing what they are contributing to or how it might be a
whole to some alien intelligence. Our nervous system is totally
different. From the ground floor it is about survival, adaptation,
specialization, integration, communication, etc. Whatever the body
does, it tries to do; it doesn't just wait for instructions.

>
>
>
> >>> This may be a much bigger
> >>> deal than it sounds if consciousness 'insists' through memory rather
> >>> than appears instantaneously as a function of objects in space.
>
> >> Memory is a key, sure. But it is an information pattern, usually
> >> interpreted by some information handling.
>
> > We don't know that. What you are talking about may only be recall. Our
> > experience as humans may be defined by an ability to temporarily
> > forget that we are the entire universe through all time.
>
> Open question in the comp theory. Perhaps yes, no, and it depends on
> how any of the terms above are reinterpreted in machine's theology.
>
>
>
> >> I will not blame you for reintroducing the ghost in the machine. Crudely
> >> said, computer science justifies the existence of the ghost
> >> (software)
> >> in the machine (hardware).
>
> > I steer away from ghosts though because it makes a pseudosubstance of
> > something which I understand to be the actual opposite of a substance.
> > We can talk about ghosts figuratively, sure, or souls or whatever, but
> > that reifies existence at the expense of essence - which is a
> > perfectly rational thing to do unless you are trying to talk about
> > consciousness itself. If you privilege existence when you talk about
> > consciousness, you immediately get lost into a 3p model of
> > consciousness. Software is precisely that 3p model of (some aspects
> > of) consciousness. It has the same ghostly qualities if you force it
> > into an existential context, but it's a different animal. The
> > insubstantiality of concepts alone doesn't make them sentient, nor does
> > their ability to overlap function with sentience.
>
> I was not talking about the "point of view" of the "ghost", but of the
> "ghost".
> Don't confuse the software with its relative interpretation by some
> universal software or hardware.

I wasn't talking about the point of view? I don't mind if you deny the
primitive reality of matter, but how can you claim some kind of
realism for software?

>
>
>
> >> It is here that you should study the math. Ideally self-referentially
> >> correct machines cannot miss the doubt. They quickly become modest,
> >> and know that whatever they learn, their ignorance will only get
> >> bigger. Forever.
>
> > Then they have no doubt of their ignorance instead.
>
> Amazingly, you are wrong on that too. They can doubt even their
> ignorance. The only thing they cannot doubt is that they doubt.
>
> In "Conscience et Mécanisme" I refine Thomas Slezak's interpretation
> of Descartes' cogito (which is in G), in the logic Z, and this makes
> it possible to prove that Malcolm's argument against the existence of
> dreams is equivalent to his argument against conscious machines.

Again, these are logical skeletons of dreams and consciousness. If you
define the rainbow on a black-and-white TV, then you can't tell the
difference between yellow and white. From there it's easy to claim
that yellow and white are really the same thing, and that a color TV
must of course be mistaken in seeing them as different.

>
>
>
> >> You are right, but only from the first person point of view. It is a
> >> key point for the knower, perhaps the human right brain, and part of
> >> the limbic system. But when doing science, the theory should no longer
> >> refer to the sense of the one who does the theory, only to the sense
> >> that is the "object of theory"; if not, you do pseudo-religion-science.
> >> You can do that. There is a public for that, but it is no longer a
> >> search for truth, just an assertion of personal opinions (in the best
> >> case, because usually the pseudo-stuff is just a selling strategy).
>
> > I would agree except in this special case of studying consciousness
> > itself. Everything goes out the window when we look at the qualitative
> > side of the universe. We have to come to it on its terms. The object
> > of the theory is not an object, it is a subject.
>
> So meditation, dreams, and things like that might be very useful for
> the study of consciousness, but you can only start with sharable
> personal opinions, and those are called hypotheses, whatever the
> subject matter is (1p, or 3p, or anything).
>
> If not, you are saying something like "I know (am conscious that) comp
> is false", which is strictly equivalent with "believe me, God told me!".

I'm saying that faith in comp is the mirror image of faith in God.
These represent two extremes of the symmetrical continuum of
consciousness: anthropocentric and mechanistic models.

>
>
>
> >>> Only in theory. I don't think it is the case in reality.
>
> >> It is a consequence of the theory that this is the case in reality.
> >> We
> >> start from the innocuous local relative "yes doctor", and then the
> >> reasoning shows that we are already in an arithmetical matrix, and we
> >> can explain why, from inside, it looks like an analytical, physical,
> >> gigantic history.
>
> > It's still de-presentational. There is no actual there there. No
> > 'show', only an idea that something shaped like a theater can
> > accommodate a stage and props. Comp seems to say "Since stage+props =
> > show, then we have explained show business in terms of the traffic
> > patterns of lifting trunks and the angles of Klieg lights".
>
> Yes. It is weird. But this is only the 3p view, and you forget the
> meta-explanation of the 1p-view, and why its relation with 3p is
> different from a mere attribution of some 1p to some 3p.

Does it attribute visual qualia to eyes?

Craig
