On 30 Aug 2011, at 19:23, Craig Weinberg wrote:
On Aug 30, 11:29 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 30 Aug 2011, at 14:43, Craig Weinberg wrote:
On Aug 30, 4:06 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 29 Aug 2011, at 20:07, Craig Weinberg wrote:
Definitely, but the reasons that we have for causing those changes in
the semiconductor material are not semiconductor logics. They use
hardware logic to get the hardware to do software logic, just as
the mind uses the brain's hardware to remember, imagine, plan, or
execute what the mind wants it to. What the mind wants is influenced
by the brain, but the brain is also influenced directly by the mind.
A hard-wired universal machine can emulate a self-transforming
universal machine, or a high-level universal machine acting on its
low-level universal bearer.
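The emulation claim above can be made concrete with a minimal sketch (my illustration, not from the thread): a fixed, "hard-wired" interpreter whose own code never changes, running a program that is free to rewrite itself. The tiny instruction set here is invented for the example.

```python
# A fixed interpreter (the "hard-wired universal machine") emulating a
# self-transforming program.  The interpreter never changes; the program
# it runs rewrites its own instructions while executing.

def run(program, max_steps=100):
    """Interpret a list of instructions:
      ('add', n)        acc += n
      ('rewrite', i, x) program[i] = x   (self-modification)
      ('jump', i)       pc = i
      ('halt',)         stop
    """
    program = list(program)       # the emulated machine's mutable code
    acc, pc, steps = 0, 0, 0
    while steps < max_steps:
        steps += 1
        op = program[pc]
        if op[0] == 'add':
            acc += op[1]; pc += 1
        elif op[0] == 'rewrite':  # the program transforms itself here
            program[op[1]] = op[2]; pc += 1
        elif op[0] == 'jump':
            pc = op[1]
        elif op[0] == 'halt':
            break
    return acc

# This program replaces its own second instruction with 'halt', so the
# add/jump loop executes exactly twice before stopping.
prog = [('add', 1), ('rewrite', 1, ('halt',)), ('jump', 0)]
```

The point of the sketch is only that self-transformation at the emulated level requires no self-transformation at the emulating level.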
Ok, but can it emulate a non-machine?
This is meaningless.
If there is no such thing as a non-machine, then how can the term
machine have any meaning?
There are a ton of non-machines. Recursion theory is the study of the
degrees of non-machineness.
What is meaningless is to ask a machine to emulate a non-machine,
which by definition is not emulable by a machine.
The point is just this one: does your theory rely on
something non-Turing emulable or not? If the answer is yes: what is it?
Yes, biological, zoological, and anthropological awareness.
If you mean by this, 1-awareness,
No, I mean qualitatively different phenomenologies which are all types
of 1-awareness. Since 1-awareness is private, they are not all the same.
Comp explains its existence and its
non-Turing emulability, without introducing ad hoc non-Turing-emulable
beings in our physical neighborhood.
Whose physical neighborhood are comp's non Turing emulable 1-awareness
beings living in? Or are they metaphysical?
They are (sigma_1) arithmetical, in the 3-view.
And unboundedly complex, in the 1-views (personal, plural).
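For readers unfamiliar with the term: the sigma_1 sentences have a standard definition in recursion theory (textbook material, not from the thread), which can be stated compactly:

```latex
% A sentence $S$ is $\Sigma_1$ iff it has the form
\[
  S \equiv \exists x\,\varphi(x), \qquad \varphi \in \Delta_0,
\]
% where $\varphi$ contains only bounded quantifiers.  The $\Sigma_1$
% sets are exactly the computably enumerable sets; for example,
% ``program $e$ halts on input $n$'' is $\exists t\, T(e,n,t)$,
% with Kleene's decidable $T$ predicate.
```

So "sigma_1 arithmetical" ties the 3-view directly to what is computably enumerable.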
This is precise enough to be
tested, and we can argue that some non-computable quantum weirdness,
like quantum indeterminacy, confirms this. The simple self-
duplication quickly illustrates how comp makes it possible to experience
non-computable facts without introducing anything non-computable in
the third-person picture.
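The self-duplication point can be simulated (my illustration of the thought experiment, not Bruno's code): the third-person process below is a trivial deterministic program, yet any single first-person history through n duplications is one of 2**n strings, and a typical such string is algorithmically incompressible, so it looks like a random, non-computable sequence "from the inside".

```python
import itertools
import random

def all_histories(n):
    """3-p view: the full, computable tree of n W/M duplications."""
    return [''.join(bits) for bits in itertools.product('WM', repeat=n)]

def one_history(n, rng):
    """1-p view: the one branch 'I' happen to live through,
    indistinguishable from a fair-coin sequence."""
    return ''.join(rng.choice('WM') for _ in range(n))

tree = all_histories(10)                  # 1024 histories, tiny program
lived = one_history(10, random.Random())  # one branch, carrying ~10 bits
```

The whole tree is generated by a few lines of code, but no short program predicts which branch a given first person records.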
I'm not suggesting anything non-computable in the third person
picture. Third person is by definition computable.
Of course not. I mean, come on, Gödel, 1931. Or just by the Church thesis,
as I explain from time to time (just one double diagonalization). The
third-person truth is bigger than the computable.
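The diagonalization alluded to here can be sketched (standard textbook material, my code): any claimed total halting decider for zero-argument functions is refuted by a function built from the decider itself, which is why third-person truth outruns the computable.

```python
def diagonalize(decider):
    """Return a function on which `decider` must be wrong.
    `decider(f)` is supposed to return True iff f() halts."""
    def contrary():
        if decider(contrary):   # decider predicts "halts" ...
            while True:         # ... so do the opposite: loop forever
                pass
        return None             # decider predicts "loops", so halt
    return contrary

# Refuting the decider that always answers "halts":
c = diagonalize(lambda f: True)
# it claims c halts, but calling c() would in fact loop forever.

# Refuting the decider that always answers "loops":
d = diagonalize(lambda f: False)
# it claims d loops, but d() in fact halts immediately.
```

No candidate decider survives its own diagonal, so the halting function is a true, well-defined 3-p object that no machine computes.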
Some of those
computations are influenced by 1p motives though.
OK. But the motive might be an abstract being or engram (programmed by
nature, by evolution, that is, by deep computational histories).
No need to introduce anything non-Turing emulable in the picture here.
Once those motives
are expressed externally, they are computable.
But with comp, you just cannot express them externally; you can only
illustrate them and hope others grasp. They are not computable,
because the experience is consciousness filtered by infinities of
computations.
Comp shows a problem. What problem does your theory show?
You can't always
reverse engineer the 1-p motives from the 3-p though.
You are right, and that is why, with comp, most 1-p notions are not 3-
definable. Still, comp allows us to study the case of the ideally correct
machine, and to have metatheories shedding light on that non-definability.
Feeling is qualitatively distinct from detection.
Of course. Feeling is distinct from detection. It involves a person,
Yes! A person, or another animal. Not a virus or a silicon chip or a
computer made of chips.
This is racism.
It is a confusion between what a person is and what its body is. No doubt
billions of years of engramming make them hard to separate technologically,
but nothing prevents surviving with a digital brain, or even living in a
virtual environment, in principle, at some level, some day.
And in this picture we can formulate a precise (sub)problem of the hard
mind-body problem.
which involves some (not big) amount of self-reference ability.
You don't have to be able to refer to yourself to feel something.
You don't have to refer to yourself explicitly, but *feeling* still
involves implicit self-references, I think.
It is very simple at the base and very deep, but, hmm.... I don't
know, perhaps 1-primitive (with some of the "1"-views described by the
arithmetical or self-referential hypostases).
Not 3-primitive, with mechanism.
Not to disqualify machines
implemented in a particular material - stone, silicon, milk bottles,
whatever - from having the normal detection experiences of those
substances and objects, but there is nothing to tempt me to want to
assign human neurological qualia to milk bottles stacked up like
dominoes. We know about synesthesia and agnosia, and I am positing
HADD or prognosia to describe how the assumption of qualia can be
misplaced.
If we make a machine out of living cells, then we run into the problem
of living cells not being easily persuaded to do our bidding. To
physically enact the design of a universal machine, you need a
relatively inanimate substance, which is the very thing that you
cannot use to make a living organism with access to qualia in the
first place.
But we can emulate a brain with milk bottles,
I don't think that we can. It's just a sculpture of a brain. It's like
emulating a person with a video image.
You are wrong on this. We can, in principle. We cannot afford to waste
our time doing it. But the point is that the person would be a zombie
only where it is badly connected to our reality.
So you agree that there
are zombies in your theory.
I don't think it would ever get that far. A zombie implies that the
behavior is identical to a typical person's, and I don't think that's
possible to emulate through mathematics alone. It's always going to be
The arithmetical reality does emulate the computations, where they are
solidly defined (with the Church thesis).
Now the experiences of the machines themselves will not be of the type
of a Turing-emulable object.
Above you say that awareness is not Turing
emulable, but my question was: do you know what in the brain is not
Turing emulable?
The awareness of the brain is not emulable in a non-brain.
What evidence do you have for saying that the brain is aware?
We have evidence that the brain supports awareness and self-awareness
of a person, sometimes persons, not that it is aware itself.
It's not a
matter of what can't be emulated, it's that all emulation is itself
subjective. It's a modeling technique. A model of a brain is never
going to be much like a brain unless it is built out of something that
is a lot like a brain.
What makes you so sure that nature is experimenting and modelling all the
time? If the modelling of the brain fails at all substitution levels, it
means you will get a zombie at some level.
You cannot answer by some 1-notion, because comp
explains why they exist, and why they are not Turing emulable, yet are
manifestable by Turing emulation with some probability with respect
to our computational histories.
Comp is generalizing 1-awareness. Human awareness cannot be located
that way. It's not a matter of running human software on a universal
machine, because the essence of 1-p is non-universality.
Yeah ... A typical 1-move, to abandon universality for ... control.
Non-universality is what makes the software possible.
Locally. Globally it is the other way round.
To negate comp, you have to show something, different from
matter and consciousness, which necessitates an actual infinity.
It's the whole premise underlying comp that is circular reasoning. If
you assume that matter and consciousness are both bits,
I don't do that. I just assume there is a level of description of the
brain which makes it digitally emulable. And neither consciousness nor
matter becomes bits in that picture.
then you frame
the argument as a quantitative information theory. Sense is what makes
information meaningful. Sense is the phenomena of being informed and
informing. It's the I, me, and you experiential aspects of the cosmos.
Comp is limited to the 'it' aspects of the cosmos,
No. It gets the it (Bp) and the 1-me (Bp & p), and 7 other variants,
which offer an arithmetical interpretation of Plotinus. It is very
rich. You can't dismiss computer science: when UMs look deep inside,
they see some non-trivial things.
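For readers unfamiliar with the notation: B is Gödel's provability ("believability") predicate and D abbreviates not-B-not. A compact summary of the Theaetetical variants, paraphrased from Bruno's published AUDA work (my tabulation; he sometimes writes Dt in place of Dp):

```latex
\[
\begin{array}{ll}
  p                       & \text{truth (the One)} \\
  Bp                      & \text{believable, the ``it'' (Intellect)} \\
  Bp \wedge p             & \text{knowable, the 1-me (Soul)} \\
  Bp \wedge Dp            & \text{observable (intelligible matter)} \\
  Bp \wedge Dp \wedge p   & \text{sensible (sensible matter)}
\end{array}
\]
% Splitting some of these along the G / G* (provable vs. true)
% distinction yields the eight hypostases of the arithmetical Plotinus.
```

The table is only a map of the variants named in the paragraph above, not a derivation of them.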
and insists that I,
me, and you can be emulated by 'it'.
You meant "can't" I guess.
That's one way of looking at it,
but it's biased against 1-p from the start.
Not at all. It is explicitly taken into account at the start of comp
by a question, and then recovered later by the Theaetetus plus machine
self-reference. Comp, the weak version I study, is biased explicitly in
favor of the 1-p at the start.
It's great for designing
AGI, but it does nothing to explain the origin of red or the meaning
of a conversation like this.
I think it does, but you can only understand it by yourself, and this by
being able to at least assume comp for the sake of the reasoning.
You received this message because you are subscribed to the Google Groups
"Everything List" group.