On 11 Feb 2012, at 15:56, Craig Weinberg wrote:

On Feb 11, 4:03 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 11 Feb 2012, at 03:01, Craig Weinberg wrote:

Dennett's Comp:
Human "1p" = 3p(3p(3p)) -

What do you mean precisely by np(np) n = 1 or 3. ?

I'm using 1p or 3p as names only: first person direct phenomenology or
third person objective mechanism. The parentheses indicate hierarchical/
holarchical nesting, or you could say multiplication.

?


Dennett thinks
that we know that there are only mechanical processes controlling each
other.

Yes. Even in the physical sense. I am not sure if he really means that we know that, but then I am used to giving a strong sense to "knowing".

Subjectivity is an illusion

And I guess we agree that this is total nonsense.

Yes. But only because we have first hand experience ourselves and
cannot doubt it.

OK.



If we could doubt it then there would be no reason to
imagine that there could be such a thing as subjectivity.

OK.

If we doubt it, we have a subjective experience.
If we don't doubt it, too.
So we cannot doubt it.

Machine 1p = 3p(3p(3p)) - Subjectivity is not considered formally

My view:
Human 1p = (1p(1p(1p))) - Subjectivity a fundamental sense modality
which is qualitatively enriched in humans through multiple organic
nestings.

Even infinite "organic nestings", which might not even make sense.

No, only seven or so nestings: Physical <> Chemical <> Biological <>
Zoological <> Neurological <> Anthropological <> Individual
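
To spell the notation out (a purely illustrative rendering; the level names are the ones in my list, wrapped in the 1p(1p(...)) order):

    # Purely illustrative: the seven nestings written out in the 1p(1p(...)) style.
    LEVELS = ["Physical", "Chemical", "Biological", "Zoological",
              "Neurological", "Anthropological", "Individual"]

    def nest(levels):
        # Wrap each outer level around the inner ones.
        expr = levels[0]
        for level in levels[1:]:
            expr = level + "(" + expr + ")"
        return expr

    print(nest(LEVELS))
    # Individual(Anthropological(Neurological(Zoological(Biological(Chemical(Physical))))))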

By UDA, to avoid comp you will have to continue such nesting into the "Physical".
I leave this to you as a not completely trivial exercise.
I know you will invoke finite things that are not Turing emulable, but I cannot ascribe any sense to that. When you gave me "yellow" as an example, you did not convince me. The quale "yellow" is 1p-simple, but it needs a complex 3p relation between two universal numbers to be manifested in a consistent history.

Machine 1p = (3p(3p(1p))) - Machine subjectivity is limited to
hardware level sense modalities, which can be used to imitate human 3p
quantitatively but cannot be enriched qualitatively to human 1p.

Which seems ad hoc for making machines non-conscious.
Again we see here that you accept that your position entails the
existence of philosophical zombies,

I call them puppets. Zombies are assumed to have absent qualia,
puppets are understood not to have any qualia in the first place.

Puppets don't handle complex counterfactuals, as humans and philosophical zombies do. Also, I don't see the difference between absent qualia and having no qualia.

that is: the existence of
unconscious machines perfectly imitating humans in *all* circumstances.

Not perfectly imitating, no.

Sorry, but that is the definition.



That's what that whole business of
substitution being indexical is about. I propose more of a formula of
substitution, like in pharmacological toxicity where LD50 represents a
lethal dose in 50% of animal test population. Let's call it TD (Turing
Discovery). What you are talking about is a hypothetical puppet with a
TD00 value - it fails the Turing Test for 0% of test participants
(even itself - since if it didn't then an identical program could be
used to detect the puppet strings).
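
A minimal sketch of how such a TD score might be computed, on the LD50 analogy (the function name and the verdict data here are hypothetical, just to fix the idea):

    # Hypothetical sketch: a TD (Turing Discovery) score, by analogy with LD50.
    # A verdict of True means the judge detected the puppet (called it a machine).
    def td_score(verdicts):
        # Percentage of judges who unmasked the puppet: TD00 = none, TD50 = half.
        return 100.0 * sum(verdicts) / len(verdicts)

    print(td_score([True, False, True, True]))  # 75.0, i.e. TD75

So a TD00 puppet is one that no judge on the panel unmasks.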

?
By definition, a philosophical zombie wins all Turing tests (except if it imitates a human so awkwardly that people take it for a machine. That happens!). By definition, philosophical zombies behave identically to humans. The only difference is that they lack the 1p experience.

The first consideration with this is that it would need to be extended
to have a duration value, because any puppet might not reveal its
strings in a short conversation. A TD00 program might decay to TD75 in
an hour and TD99 in two hours, so there could be a function plotted
there. It may not be a simple arithmetic rise over time; participants
may change their minds back and forth (and I think that they might).
There could be patterns in that too, where our expectations
wax and wane with some kind of regularity for some people and not for
others. It's not just time duration, though; we would need to factor in
the quantity and quality of data: how hard the questions are, and how
many of them. Answering long questions too fast would be a dead
giveaway. A sparse conversation may have giveaways too - how the program
expresses impatience with long delays, whether it brings up parts of
the conversation during the gaps, all kinds of subtle clues in the
semantic character of what it focuses on.
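
A sketch of that duration idea (all numbers invented, and note that this monotone model ignores the back-and-forth mind-changing just described):

    # Hypothetical sketch: TD as a function of conversation length.
    # detection_times[i] is the minute at which judge i first called the
    # program a machine, or None if they never did.
    def td_at(minute, detection_times):
        detected = sum(1 for t in detection_times if t is not None and t <= minute)
        return 100.0 * detected / len(detection_times)

    times = [70, 30, None, 110, 45, 90, 55, None]        # invented data
    print([(m, td_at(m, times)) for m in (0, 60, 120)])
    # [(0, 0.0), (60, 37.5), (120, 75.0)]: TD rising as the conversation runs on.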

There are so many variables that it may not even be useful to try to
model it. The TD will undoubtedly be affected by how the participants
have been prepared: whether they have experience with AI, have just
seen documentaries about it, or have instead been primed with
sentimental stories or crime dramas that sensitize or desensitize them
to certain attitudes and behaviors. If the program speaks in LOL
INTERWEBZ slang, or in general informal terms versus precise
scientific terms, that would have an effect as well.

Whether or not a true TD00 -> universal human puppet could be possible
in theory or practice is not what I'm speculating on. If I were to
speculate, I would say that no, it is not possible. I don't think that
even a real human could be TD00 to all other humans for all times and
situations. That's because it's a continuum of 'seems like' rather
than a condition which 'simply is' - it's indexical to subject and
circumstance.

All of this in no way means that the TD level implies actual
simulation of subjectivity. TD is only a measure of imitation success.
As I have said, the only way I can think of to come close to knowing
whether a given imitation is a simulation is to walk your brain
function over to the program, one hemisphere at a time, and then walk
it back after a few hours or days. Short of that, we can either be
generous with our projection and imagine that puppets, computers, and
programs are potential human beings given the right degree of
sophistication, or we can be more conservative and limit our scope so
that we only give other humans, animals, or living organisms the
benefit of the doubt. I think the more important feature is the
scoping itself, because it is rooted in what kinds of similarities we
pay attention to and identify with. If we identify with logic, then
anything which impresses us as logical gets the benefit of the doubt.
If we identify with feeling, then anything which seems like it feels
like us gets the benefit of the doubt. I think that logic arises from
feeling rather than the other way around, and both arise from sense.

Bruno:
Machine or human 1p = 1p(f(x)) - Subjectivity arises as a result of
the 1p set of functional consequences of specific arithmetic truths,
which (I think) are neither object, subject, nor sense, but Platonic
universal numbers.

Is that close?

I just say that IF we are machines, then some tiny part of arithmetical truth is ontologically enough to derive matter and consciousness, and
is necessary (up to recursive equivalence).
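
(To make "tiny part" concrete - my gloss, in the spirit of the UDA - what plausibly suffices is the Sigma_1 fragment of arithmetical truth, i.e. the true sentences of the form

    \exists x \; P(x), \qquad P \text{ decidable},

since "machine M halts on input n" is of that form; that fragment is already Turing-complete, and it is what the universal dovetailer enumerates.)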

I agree that would be true if we are only machines; however, that would
make it a 3p version of matter and consciousness (a la Dennett).

Not at all. The contrary happens. Matter becomes a first person plural appearance. There is no matter, in the usual Aristotelian sense. But the reasons why it looks like there is matter are given.

Deriving the appearance of matter from arithmetic does not imply that matter is made of numbers. Matter simply does not exist, and the appearance of matter emerges from the complex statistics and topology of the dreams of the universal numbers, entangled in deep computations.



Even
the 1p would be a 3p de-presentation/description of what we know as
1p, but the machine would not.

No, it is the contrary. Read UDA, it will make you understand why.

Subjectivity comes from
self-reference + Truth.

I think that our experience shows us, though, that in human development,
subjectivity is rooted in fantasy and idiosyncratic interpretation
which is not directly self-referential. I would say truth is the
invariance between subjective and objective sense. Self-reference is
trivial compared to self-embodiment and self-identification. Self-
reference can be imitated by a machine, but I don't think that a
machine can use the word 'I' in the full range of senses that we can.

Trivially, because you assume non-comp.

"Truth about a weaker LUM" is definable by a stronger LUM, but no LUM
can defined its own global notion of truth (which will play the role
of the first greek God, like Plotinus ONE). Weak and String are
defined in term of the set of provable (by the entity in question)
arithmetical (or equivalent) propositions.
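
(In standard logical terms, the first half of this is Tarski's undefinability theorem. Schematically, for any consistent theory T extending arithmetic:

    \text{no formula } \mathrm{Tr}(x) \text{ of } T \text{ satisfies }\;
    T \vdash A \leftrightarrow \mathrm{Tr}(\ulcorner A \urcorner)
    \;\text{ for all sentences } A,

while a strictly stronger theory can define such a truth predicate for T.)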

Yes, from what I can understand, I agree. I have always thought that
you are on the right track with your insights into the incompleteness
of any given machine or person's understanding. Because of the way
that the interior of the monad is diffracted into spacetime
exteriority, it may very well be the case that our own limitations of
self-knowledge are in fact mechanical 3p limitations, which can be
modeled successfully by comp.

OK. But the 3p limitations have an impact on the 1p too. And on the physical, which is really 1p plural.

Bruno

http://iridia.ulb.ac.be/~marchal/


