On Sep 19, 2:44 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 18 Sep 2011, at 19:24, Craig Weinberg wrote:

>
> > I include comparison as a function of counting.
>
> Counting + the full first order logic is not enough for comparison.
> Counting + second order logic might be, but then second order logic is
> really set theory in disguise.

Isn't it necessary to be able to tell the difference between one count
and the next?

In order for x, (x+x), (x+x+x) to exist, there must be an implicit
comparison between 1 and its successor to establish succession,
mustn't there? Otherwise it's just x, x, x.
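
(For reference, the standard textbook definition of comparison from
addition in first-order arithmetic, in LaTeX notation:

x \le y \;\equiv\; \exists z\, (x + z = y)
x <   y \;\equiv\; \exists z\, (z \neq 0 \wedge x + z = y)

though whether bare counting already gives you addition is, I take it,
exactly what is in dispute here.)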

>
> > You can't really have
> > one without the other.
>
> It depends on what you assume at the start. I still have no clue
> what your theory is, except that strange, and alas familiar,
> skepticism about numbers and machines, which is conceptually very
> demanding since Gödel 1931.

I think that's your own prejudice blinding you to my ideas. You are
defending the insights of post-Gödelian understanding, but I have no
bone to pick with those insights at all. I embrace what I understand
of those kinds of ideas: incompleteness, autopoiesis, automation,
simulation, etc. I just think that the progression of these ideas
leads to the mirror image of consciousness rather than genuine
sentience.

Nothing wrong with that, and for developing intelligent servants it is
exactly what we would want to use (otherwise they will most likely
enslave us). We can even gain great insights into our own nature by
understanding our similarities to and differences from what I would
call intelliform arithmetic, but in all of the fruits of this approach
we have seen thus far there is a distinct quality of aimless
repetition, even if not unpleasantly so
(http://www.youtube.com/watch?v=ZZu5LQ56T18).

Some of the musicality can be attributed to the sampled piano as well.
When you use a fundamental unit which is driven more exclusively by
digital mathematics, what we get, I think, sounds more like the native
chirps and pulses of abiotic semiconductors
(http://www.youtube.com/watch?v=Dh9EglZJvZs).

I think that my reservations about machine sentience are not at all
born of skepticism but rather of aesthetic supersensitivity. I can
hear what the machine is, and what it is will not become what we are,
but rather something slightly (though, from our perspective at least,
very significantly) different.

>
> > As for exploring and referring to themselves I
> > think that's just projection of our own 1p experience onto mechanism.
>
> That is possible, and literally necessary. I am currently projecting
> my 1p on you. That is not a reason for saying you don't have your own
> 1p. So you are correct, but it is not an argument against mechanism.

That's only if you believe that 1p is a solipsistic simulation. With
my sense-based model, your perceptions of me are the invariance
between your 1p projections and my 3p reflections. We can look at a
Rorschach inkblot and understand (under typical waking states of
consciousness) that the images we see are not being spontaneously
generated by the inkblot.

To say that a machine is referring to itself or exploring is to
anthropomorphize its behavior, which, objectively, is neither
completely random nor intentional, but merely inevitable given the
conditions of the script. It's a precisely animated inkblot, begging
for misplaced and displaced interpretation.

>
> > To set a function equal to another is not to say that either function
> > or the 'equality' knows what they refer to or that they refer at all.
> > A program only instructs - If X then Y, but there is nothing to
> > suggest that it understands what X or Y is or the relation between
> > them.
>
> Nor is it necessary to believe that an electron has any idea of the
> working of QED, or of what a proton is.

I think that it is. What we think of as an electron or a proton is the
3-p exterior of the senses and motives (ideas) of QED. We have an idea
of the workings of our niche, so it stands to reason that this
sense-making capacity is part of what the universe can do.

We are the senses and motives of a person's life, which from our 3-p
vantage looks like a human body or a set of images, autobiographical
narratives, a collection of trillions of cells, or a single individual
in a global human civilization. From a non-human's 1-p vantage point,
we have no idea what kind of sense our 3-p presence makes to them.

> If you say that an electron needs to have a 1p for interacting with a
> proton, then I don't see why we could not say that for a program
> instruction,

Because an instruction has no 3-p existence. It's just a motive to
reproduce a motive in something that we perceive as a 3-p objective
system. Our 1-p motive induces a 3-p consequence which is reflected
back to us through a 3-p objective system as a pseudo 1-p sequence.

> on which we can already use the intensional stance (like when
> we say that a routine is waiting for some inputs to get active, etc.).

That's more anthropomorphizing. It's not waiting at all. If it were,
it would eventually get irritated and leave.

> But this is delaying the mind-body difficulty to the lower level.
> There is just no evidence that we have to delay it to the infinitely
> low level, except the willingness to make mechanism false.

There can't be any 3-p evidence, by definition, because mechanism's
falseness is the difference between its pseudo or a-signifying 1-p and
our genuine 1-p experience. Genuine because it is the native 1-p of
our 3-p neurology, and not an idiopathic simulacrum. There is no
reason to think that our naive theoretical presumptions about the 3-p
substitution level of 1-p would be any more accurate than any of our
naive theoretical presumptions about anything else. We don't know much
of anything about the substitution level of the psyche. It seems far
from scientific at this point to dismiss objections to an arbitrary
physical substitution level.

>
> > I've named several examples which illustrate this: Record and CD
> > players don't learn music.
>
> Nor do they compute. Or, if you see their activity as computations,
> it is not the kind of computation which can think. You need self-
> reference, and enough information loops, short- and long-term
> memories, a universal hidden goal, etc.

It's not compelling to me. You can have fancy playlists on internet
radio like Pandora or iTunes which, I think, satisfy your criteria to
a minimal extent, enough to establish some hint that the program
understands music or the user. That's the marketing sell, but it's
hollow. It doesn't work that well. Its limitations are perhaps subtle
to describe in 3-p terms, but it just doesn't know music. Its
occasionally fruitful but oddly dissonant selections are, like the
cellular automaton music, very definitely missing something.

You may not be as sensitive to it, or you may account for it by
promising that these are just newborn tadpoles, but I can see that
increased sophistication will only mask the underlying emptiness. It
may progress to the point that my naive perception of it will be
fooled, but that gives me no confidence that such a technology would
fool my brain (or the trillions of micro-sentiences within it).

>
> > I can see and copy Chinese characters
> > without understanding them in any way, and regardless of how many
> > Chinese manuscripts I manually transcribe, I will never learn to read
> > Chinese.
>
> Why would you, if you only do a simple task?
> You find a stupid computation, and you declare from that that all
> computation is stupid.
> A jumping spider can't get to the moon, so living beings can't get to
> the moon.

Living beings can't get to the moon by themselves, and computation
can't become human on its own.

>
> > As you say, we can use computation to account for the
> > difference between 1p and 3p but that accounting is not an explanation
> > or experience of 1p or 3p (as a 1p reflection...there is no 3-p
> > experience).
>
> It explains about 99% of it (I would say), and it explains 100% of
> the reason why there is a remaining unexplainable 1% gap. Technically,
> we can narrow it as much as we want, but we will never be able, for
> logical reasons, to explain 100% of the qualia or consciousness.

You say that, but I have not yet heard anything that explains it to
me.

>
>
>
> >> They can
> >> believe, know, observe, feel, and be aware of the difference between
> >> sharable and non-sharable knowledge, and all this can be shown, from
> >> numbers + reasonable axiomatic definition of all those terms.
>
> > To say that it can be shown doesn't help anyone. To paraphrase Yoda,
> > "Show me, or do not".
>
> Read the papers (and study some mathematical logic/computer science
> before).

Why not just tell me briefly what is in the papers that makes sense of
it? Even the most complex ideas can be illustrated metaphorically.
Hofstadter's record titled "I Cannot Be Played on Record Player X",
for example, shows a bit of what I think you mean. That kind of self-
reference, I agree, is germane to the sense of consciousness as
awareness of awareness, but it's just the silhouette of consciousness,
not the contents.
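
(Incidentally, the programming counterpart of that record is a quine:
a program whose only output is its own source text. A minimal Python
sketch, offered purely as an illustration of mechanical self-reference
and not as a claim about awareness:

# Minimal quine: the two lines below print exactly their own source text.
s = 's = %r\nprint(s %% s)'
print(s % s)

It refers to itself perfectly, and that is all it does.)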

>
> > Give me one example, one common sense metaphor,
> > one graphed function that could suggest to me that there is any
> > belief, feeling, or awareness going on.
>
> The fact that the universal machine remains silent on the deep
> question

What deep question?

> is enough for me to suggest they are quite like me.

> Don't ask me for a proof: there is none.

I'm not asking for a proof, I'm asking for some reason to think that
there's something I'm not seeing. Something that suggests that a
mechanical device or abstraction can feel, or maybe that it produces
some result that it refuses to reproduce on command.

> it is a question of empathy.
> The work of Gödel-Löb-Solovay illustrates that they can introspect
> very deeply, and that they have a rich theology.

The work of Weinberg-King-Searles illustrates that they cannot
introspect very deeply and have an austere theology.

>
> > I have described how we
> > project emotion into images on a movie screen or see a face in a
> > coconut, so it is not enough that we satisfy our idea of what feeling
> > or awareness usually looks like. We need to know why, if numbers feel,
> > it seems like machines don't feel.
>
> Current machines are far too young ... to express their feelings.
> They do not have enough memory to integrate their experiences into
> long stories. But mechanism is the thesis that *we* are machines, so
> it does look like some machines can feel: you and I are good
> examples, in the mechanist theory.

I see that as affirming the consequent. We are machines and we can
feel, therefore machines can feel. Jet engines are machines and they
can fly at 30,000 feet, therefore we can fly at 30,000 feet. I'd like
to help you out here and really give you the benefit of the doubt, but
it just sounds like you're shrugging off a fairly obvious gap between
theory and reality. If functionalism-machinism were true, I would
expect that bacteria, viruses, fungi, parasites, etc. could infect
computers, cell phones, or computations themselves.

By comp, there should be no particular reason why a Turing machine
should not be vulnerable to the countless (presumably Turing-emulable)
pathogens floating around. But of course that is absurd. We cannot
look forward to reviving the world economy by introducing medicines
and vaccines for computer hardware. What accounts for this one-way
bubble, which enjoys total immunity from biological threats yet
provides full access to biological functions? If computation alone
were enough for life, at what point will the dogs start to smell it?

>
>
>
> >> In that paragraph you are showing that you seem to persist in
> >> displaying  the reductionist pre-Gödel-Turing conception of what
> >> machines are and can be.
>
> > Not at all. I think that I may understand more than you assume. I
> > agree that 'machine' can be a spiritual term. A self-redefining
> > process which grows and evolves - but that's only part of what
> > life and consciousness is. The form (or one form) but not the content.
> > It's like electricity without a ground (this kind of ground:
> >http://en.wikipedia.org/wiki/Ground_%28electricity%29). If it's not
> > anchored in the common reference of literal material in the literal
> > universe - with the unique instantiation coordinates drawn from
> > relation to the singularity, then it's a phantom imposter. A 3-p
> > accounting system imposed upon a compliant-but-dumb 1-p of a
> > semiconductor (or collection of inanimate objects, etc).
>
> But then you have to explain to us what is not Turing emulable in
> those processes.

It's the hole that it makes in the singularity. A thing's unique
identity in relation to the rest of the universe. It's a MAC address
that cannot be spoofed. Ultimately, a thing 'is what it is' and not
just what we believe it to be.

>
>
>
> > That's why zombies, prosthetics, blow up dolls, body snatchers, wax
> > museums, taxidermy etc have the same creepy association. We sense the
> > emptiness, and the cognitive dissonance that arises in contrast to the
> > uncanny resemblance to the genuine living creature and the hollow form
> > only highlights the absence of life and awareness. Science Fiction is
> > replete with these metaphorical illustrations: Frankenstein, HAL,
> > Westworld, War of The Worlds,...so many examples of sinister
> > attributions to both the undead and unlive. It would seem unlikely
> > that these kinds of ideas could strike a chord were there not any
> > significant difference between a person and a machine beyond just a
> > prejudice of one relative level of complexity to another.
>
> That is called racism. The foreigners look too strange; it is creepy.

It's not racism at all. Cadavers are not members of a race. They
aren't just unfamiliar; they are the walking dead, unliving persons.
They are the antithesis of human life.

> By machine, I just mean "Turing-emulable" (with or without oracle).
> That includes us, by the mechanism assumption.
> It is a constant that novelists foresee the future(s).

What if 'emulation' is a 1-p hallucination? How could it really not
be? If we can only project our perception of a process onto a machine,
why would the rest of the process, which we can't perceive,
automatically arise?

>
>
>
> > I think that you are jumping to the conclusion that simulation does
> > not require an interpreter which is anchored in matter.
>
> That follows from the UDA-step-8. If my own emulation requires a
> material digital machine, then it does not require a material machine.

Not to produce the 3-p simulacrum of you, no, but to produce your
genuine 1-p emulation it would require the same material machine as
you do. A material digital machine would not suffice, because the
material on which the machine is being executed digitally already has
its own genuine 1-p experience (servile and somnambulant compared to
organic chemistry).

> Matter is what glues the machine dreams,

I think that it is obviously not. If we were machines and that were
true, then we should come out of the womb filled with intuitions about
electronics, chemistry, and mathematics, not ghosts and space
monsters. Dreams are not material, they are living subjective
feelings. Matter is what is too boring and repetitive to be dreamed
of. Too tiny and too vast, too hot and cold, dense and ephemeral for
dreams. Dream bullets don't make much of an impact.  Dream injuries
don't have to heal.

> and consciousness selects the
> gluing histories. This entails that we can see the glue by looking at
> ourselves closely enough, and quantum logic is what defines the
> gluing. Here QM seems to fit very well with DM.

Because QM and DM are different aspects of the same thing. Modeling
the 1-p essence of 3-p.

>
> > I'm not taking
> > a reductionist view of mechanism, even though in this discussion I
> > have to dwell on the most literal aspects of mechanism to make my
> > point that it is fundamentally incomplete to express consciousness.
> > That is the only way to illustrate the difference - with reductio ad
> > absurdum; to get to the essence of what mechanism, counting, and
> > computation is and how it is diametrically opposite of what free will,
> > perception, and experience is. Computation has no 1-p experience of
> > its own.
>
> Only a person has this, but a person relies on computation, not on
> any particular implementation of it, but on all the implementations
> existing in arithmetic.

I would say that the person and their computation both rely upon a
single common sense, but that is neither essential-experiential (A)
nor existential-arithmetic (Ω) but the unexperienced potential (ɐ) and
uncomputed arithmetic (ʊ) that comprises the singularity.

Think of the cellular automaton music compared to music played by a
master musician. Even a note or two played by a great pianist or
violinist could be recognizable to someone familiar with their work. A
single stroke of paint can evoke Matisse or Van Gogh. They are
proprietary and signifying. You could listen to 10^100 computers
playing the same cellular automata for 100,000 years and never get a
Mozart equivalent. You wouldn't even find one which could be
considered qualitatively different from the others. Beautiful or
awful, they would all have the same generic, a-signifying composer. No
1p flourishes or stylistic trends would appear in one computer and be
copied or enjoyed by the others. They could be programmed to act as if
they were doing that, perhaps, but they would never generate that kind
of logic on their own as silicon devices.
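
(For concreteness, what I mean by cellular automaton music is roughly
this sort of mapping: evolve an elementary automaton such as Rule 30
row by row and read each row's live cells off as pitches. A minimal,
purely hypothetical Python sketch; the rule number, width, and MIDI
base note are arbitrary choices of mine, not taken from the video:

# Hypothetical sketch: turn rows of an elementary cellular automaton
# (Rule 30) into lists of MIDI-style pitch numbers.
RULE = 30
WIDTH = 16

def step(cells, rule=RULE):
    # Apply the elementary CA rule to one row, wrapping at the edges.
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) |
                      (cells[i] << 1) |
                      cells[(i + 1) % n])) & 1 for i in range(n)]

def rows_to_pitches(n_rows, base_note=60):
    # Read each live cell in a row as a pitch offset above middle C (MIDI 60).
    row = [0] * WIDTH
    row[WIDTH // 2] = 1  # single seed cell in the middle
    melody = []
    for _ in range(n_rows):
        melody.append([base_note + i for i, c in enumerate(row) if c])
        row = step(row)
    return melody

print(rows_to_pitches(8))

However it is rendered, the 'composer' is the same generic rule every
time, which is exactly the a-signifying quality I'm pointing at.)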

>
> You might have some genuine intuition on 1p and 3p, but you are
> killing your "theory" by insisting it negates comp, where I see only
> an argument for a very low level.

That's how you pigeonhole my idea, but I don't see comp as a viable
primitive. Simulation is just a way to get machines to fool us in the
exact way that we want to be fooled.

>
> > It is the 3-p relation-reflection between private 1-p non-
> > comp monads. It is the essence of existence, not the existence of
> > essence.
>
> That looks like continental philosophy. It is not really in the scope
> of my job. Sorry.

It's funny, I hate philosophy.

Craig
