On Feb 11, 3:51 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 11 Feb 2012, at 15:56, Craig Weinberg wrote:
> >>> Dennett's Comp:
> >>> Human "1p" = 3p(3p(3p)) -
> >> What do you mean precisely by np(np), with n = 1 or 3?
> > I'm using 1p or 3p as names only, first person direct phenomenology or
> > third person objective mechanism. The parenthesis is hierarchical/
> > holarchical nesting or you could say multiplication.
> ?

I'm not using 1p and 3p in any standard way. 3p(3p(3p)) represents a
top-level mechanical process that is controlled by lower-level
mechanical processes, which are in turn controlled by still lower-level
mechanical processes. 1p(1p(1p)) represents a top-level self that
contains or incorporates sub-selves and their sub-selves. This is a
very different scaling from 3p, since there is a continuum of
voluntary and involuntary incorporation. It's not a bunch of discrete
gears or subroutines; it is a fugue of directly and indirectly
experienced selves.

> > Dennett thinks
> > that we know that there are only mechanical processes controlling each
> > other.
> Yes. Even in the physical sense. I am not sure if he really means that
> we know that, but then I am used to giving a strong sense to "knowing".
> >>> Subjectivity is an illusion
> >> And I guess we agree that this is total nonsense.
> > Yes. But only because we have first hand experience ourselves and
> > cannot doubt it.
> OK.
> > If we could doubt it then there would be no reason to
> > imagine that there could be such a thing as subjectivity.
> OK.
> If we doubt it, we have a subjective experience.
> If we don't doubt it, too.
> So we cannot doubt it.


> >>> Machine 1p = 3p(3p(3p)) - Subjectivity is not considered formally
> >>> My view:
> >>> Human 1p = (1p(1p(1p))) - Subjectivity a fundamental sense modality
> >>> which is qualitatively enriched in humans through multiple organic
> >>> nestings.
> >> Even infinite "organic nestings", which might not even make sense.
> > No, only seven or so nestings: Physical <> Chemical <> Biological <>
> > Zoological <> Neurological <> Anthropological <> Individual
> By UDA, to have non-comp you will have to continue such nesting in
> the "Physical".
> I leave this to you as a not completely trivial exercise.

I call the intra-physical nesting (quantum-arithmetic) a virtual
nesting. I think that what we measure at that level is literally the
most 'common sense' of matter, and not an independent phenomenon. It is
the logic of matter, not the embodiment of logic. It's a small detail
really, but when logic is the sense of matter then all events are
anchored in the singularity, so that ultimately the cosmos coheres as
a single story. If matter is the embodiment of logic then authenticity
is not possible, and all events are redundant and arbitrary universes
unto themselves.

> I know you will invoke finite things non Turing emulable, but I cannot
> ascribe any sense to that. When you gave me "yellow" as example, you
> did not convince me. The qualia "yellow" is 1p simple, but needs a
> complex 3p relation between two universal numbers to be able to be
> manifested in a consistent history.

I think that the 1p simplicity is all that is required. It does not
need to be understood or sensed as a complex relation at all, indeed
it isn't even possible to bridge the two descriptions. The 3p quant
correlation is not yellow, nor does it need yellowness to accomplish
any computational purpose whatsoever. Even if it did, where would it
get yellowness from? Why not gribbow or shlue instead? Of all beings
in the universe, we are the only ones we know of who can even conceive
of a 3p quant correlation to 1p qualities. Most things will live and
die with nothing but the 1p descriptions, therefore we cannot assume
the universe to be incomplete for those beings. If they had the power
to create a copy of their universe, they could do it based only on
their naive perception, just as our ability to create a copy of the
universe we understand would not be limited by our incomplete
understanding of the universe. The 1p experiences make sense on their
own.

> >>> Machine 1p = (3p(3p(1p))) - Machine subjectivity is limited to
> >>> hardware level sense modalities, which can be used to imitate
> >>> human 3p
> >>> quantitatively but cannot be enriched qualitatively to human 1p.
> >> Which seems ad hoc for making machine non conscious.
> >> Again we see here that you accept that your position entails the
> >> existence of philosophical zombies,
> > I call them puppets. Zombies are assumed to have absent qualia,
> > puppets are understood not to have any qualia in the first place.
> Puppets don't handle complex counterfactuals, like humans and
> philosophical zombies do. Also, I don't know the difference between
> absent qualia and having no qualia.

A puppet could handle any degree of complexity that was anticipated by
the puppet master. The difference between absent qualia and no qualia
is that absent qualia presumes the possibility of presence. We already
know from blindsight that qualia can indeed be absent as well.

> >> that is: the existence of
> >> unconscious machines perfectly imitating humans in *all*
> >> circumstances.
> > Not perfectly imitating, no.
> Sorry but it is the definition.

That's why it's a theoretical/philosophical definition and not a
practical reality.

> > That's what that whole business of
> > substitution being indexical is about. I propose more of a formula of
> > substitution, like in pharmacological toxicity where LD50 represents a
> > lethal dose in 50% of animal test population. Let's call it TD (Turing
> > Discovery). What you are talking about is a hypothetical puppet with a
> > TD00 value - it fails the Turing Test for 0% of test participants
> > (even itself - since if it didn't then an identical program could be
> > used to detect the puppet strings).
> ?
> By definition, a philosophical zombie wins every Turing test (except
> if it imitates a human so awkwardly that people take it for a machine.
> (That happens!)).
> By definition, philosophical zombies behave identically to humans. The
> only difference is that they lack the 1p experience.

That's why I don't deal in philosophical zombies. They assume
simulation rather than imitation. There is no such thing as a perfect
imitation because a perfect imitation would be the same thing as the
genuine original.

> > The first consideration with this is that it would need to be extended
> > to have a duration value because any puppet might not reveal its
> > strings in a short conversation. A TD00 program might decay to TD75 in
> > an hour, and TD99 in two hours, so there could be a function plotted
> > there. It may not be a simple arithmetic rise over time, participants
> > may change their mind back and forth (and I think that they might)
> > over time. There could be patterns in that too, where our expectations
> > wax and wane with some kind of regularity for some people and not for
> > others. It's not just time duration, though; we would need to factor in
> > the quantity and quality of data. How hard the questions are and how
> > many of them. Answering long questions too fast would be a dead
> > giveaway. A sparse conversation may have giveaways too - how they
> > express impatience with long delays, whether they bring up parts of
> > the conversation during the gaps, all kinds of subtle clues in the
> > semantic character of what the program focuses on.
> > There are so many variables that it may not even be useful to try to
> > model it. The TD will undoubtedly be affected by how the participants
> > have been prepared, whether they have experience with AI or have seen
> > documentaries about it just before or whether they have been instead
> > prepared with sentimental stories or crime dramas which sensitize them
> > or desensitize them to certain attitudes and behavior. If the program
> > speaks in LOL INTERWEBZ slang, or general informal terms vs precise
> > scientific terms that would have an effect as well.
> > Whether or not a true TD00 -> universal human puppet could be possible
> > in theory or practice is not what I'm speculating on. If I were to
> > speculate, I would say that no, it is not possible. I don't think that
> > even a real human could be TD00 to all other humans for all times and
> > situations. That's because it's a continuum of 'seems like' rather
> > than a condition which 'simply is' - it's indexical to subject and
> > circumstance.
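To make the TD idea above a bit more concrete, here's a toy sketch of
how a TD value might be computed from a test population. All of the
numbers and the `td` helper are invented purely for illustration; TD is
a hypothetical measure, not an established metric:

```python
# Toy sketch of the hypothetical TD (Turing Discovery) measure.
# By analogy with LD50, TD(t) is the percentage of test participants
# who have detected the puppet after some duration of conversation.
# All figures below are invented for illustration only.

def td(detections, participants):
    """Percentage of participants who spotted the puppet so far."""
    return 100.0 * detections / participants

# A program that starts the test undetected is TD00...
assert td(0, 100) == 0.0

# ...might decay to TD75 after an hour of conversation...
assert td(75, 100) == 75.0

# ...and to TD99 after two hours, tracing out a curve over time.
assert td(99, 100) == 99.0
```

Plotting td against conversation time for a given population would give
the kind of decay function described above, though as noted, the curve
need not rise monotonically if participants change their minds.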
> > All of this in no way means that the TD level implies actual
> > simulation of subjectivity. TD is only a measure of imitation success.
> > As I have said, the only way I can think of to come close to knowing
> > whether a given imitation is a simulation is to walk your brain
> > function over to the program, one hemisphere at a time, and then walk
> > it back after a few hours or days. Short of that, we can either be
> > generous with our projection and imagine that puppets, computers, and
> > programs are potential human beings given the right degree of
> > sophistication, or we can be more conservative and limit our scope so
> > that we only give other humans, animals, or living organisms the
> > benefit of the doubt. I think the more important feature is the
> > scoping itself, because it is rooted in what kinds of similarities we
> > pay attention to and identify with. If we identify with logic, then
> > anything which impresses us as logical gets the benefit of the doubt.
> > If we identify with feeling, then anything which seems like it feels
> > like us gets the benefit of the doubt. I think that logic arises from
> > feeling rather than the other way around, and both arise from sense.
> >>> Bruno:
> >>> Machine or human 1p = (1p(f(x)) - Subjectivity arises as a result of
> >>> the 1p set of functional consequences of specific arithmetic truths,
> >>> which (I think) are neither object, subject, or sense, but Platonic
> >>> universal numbers.
> >>> Is that close?
> >> I just say that IF we are machine, then some tiny part of
> >> arithmetical
> >> truth is ontologically enough, to derive matter and consciousness,
> >> and
> >> is necessary (up to recursive equivalence).
> > I agree that would be true if we are only machines, however that would
> > make it a 3p version of matter and consciousness (a la Dennett).
> Not at all. The contrary happens. Matter becomes a first person plural
> appearance.

No, I don't think that it does. Matter becomes a 3p computational
feedback invariance - like a graphic avatar in a video game collides
with a wall of colored pixels and bounces back. There is no experience
involved at all. The wall could feel like marshmallows or solid steel,
but would it be the avatar feeling the wall or the wall feeling the
avatar, or the negative space feeling both, or the software feeling
graphic vectors, or hardware feeling the logical collisions on the
microprocessor? ...why not the pixels on the monitor themselves
feeling the event? None of it really makes sense to me. The experience
is orphaned in Platonia or otherwise generalized to a free floating
truth condition.

> There is no matter, in the usual Aristotelian sense. But the reasons
> why it looks like there is matter are given.

I understand, but I insist that the reasons are not sufficient to
explain the experienced character of matter. Again, it could be
sufficient, had we no authentic subjectivity to compare it with, but
since we do, virtual matter remains a theoretical concept rather than
a reality.

> Deriving the appearance of matter from arithmetic does not imply that
> matter is made of number. Matter simply does not exist,

This is the mirror image of Dennett. I explain it in multisense
realism as the Logos position. Why wouldn't matter exist just as much
as anything else? As much as numbers?

> and the
> appearance of matter emerges

Emerges is the key word. Emerges from where? To where? Why is it
necessary? Can you write an equation that emerges as actual matter in
our world? Can you light a brick of charcoal on fire with an
arithmetic function alone? Why would arithmetic want to pretend to be
matter?
> from the complex statistics and topology
> of the dreams of the universal numbers, entangled in deep computations.

I can't see any reason for computations to ever leave this realm of
intangible dreamy universal entanglement.

> > Even
> > the 1p would be a 3p de-presentation/description of what we know as
> > 1p, but the machine would not.
> No, it is the contrary. Read UDA, it will make you understand why.

I have tried, but it doesn't make sense to me.

> >> Subjectivity comes from
> >> self-reference + Truth.
> > I think that our experience shows us though that in human development,
> > subjectivity is rooted in fantasy and idiosyncratic interpretation
> > which is not directly self-referential. I would say truth is the
> > invariance between subjective and objective sense. Self-reference is
> > trivial compared to self-embodiment and self-identification. Self-
> > reference can be imitated by a machine, but I don't think that a
> > machine can use the word 'I' in the full range of senses that we can.
> Trivially, because you assume non-comp.

I don't assume anything; I have only to observe the difference between
any person who has ever lived and any machine that has ever been built.

> >> "Truth about a weaker LUM" is definable by a stronger LUM, but no LUM
> >> can defined its own global notion of truth (which will play the role
> >> of the first greek God, like Plotinus ONE). Weak and String are
> >> defined in term of the set of provable (by the entity in question)
> >> arithmetical (or equivalent) propositions.
> > Yes, from what I can understand, I agree. I have always thought that
> > you are on the right track with your insights into the incompleteness
> > of any given machine or person's understanding. Because of the way
> > that the interior of the monad is diffracted into spacetime
> > exteriority, it may very well be the case that our own limitations of
> > self-knowledge are in fact mechanical 3p limitations, which can be
> > modeled successfully by comp.
> OK. But the 3p limitations have impact on the 1p too. And on the
> physical, which is really 1p plural.

Sure, yes. All of the levels and modalities influence each other in
different ways.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.