On Aug 30, 11:29 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 30 Aug 2011, at 14:43, Craig Weinberg wrote:
> > On Aug 30, 4:06 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> >> On 29 Aug 2011, at 20:07, Craig Weinberg wrote:
> >>> Definitely, but the reasons that we have for causing those changes
> >>> in the semiconductor material are not semiconductor logics. They use
> >>> hardware logic to get the hardware to do software logic, just as
> >>> the mind uses the brain's hardware to remember, imagine, plan, or
> >>> execute what the mind wants it to. What the mind wants is influenced
> >>> by the brain, but the brain is also influenced directly by the mind.
> >> A hard-wired universal machine can emulate a self-transforming
> >> universal machine, or a high-level universal machine acting on its
> >> low-level universal bearer.
> > Ok, but can it emulate a non-machine?
> This is meaningless.

If there is no such thing as a non-machine, then how can the term
machine have any meaning?
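Bruno's point that a hard-wired universal machine can emulate a self-transforming one can be sketched concretely. The toy instruction set below is purely hypothetical, invented for illustration: a fixed interpreter (whose own code never changes) runs a program that rewrites one of its own instructions mid-execution.

```python
# A fixed ("hard-wired") interpreter emulating a self-modifying program.
# Hypothetical toy instruction set: ("print", x), ("set", i, instr), ("halt",).

def run(program):
    """Interpreter whose own code is fixed, even though the
    program it runs rewrites itself while executing."""
    out = []
    pc = 0
    prog = list(program)          # the emulated program is just data
    while pc < len(prog):
        op = prog[pc]
        if op[0] == "print":
            out.append(op[1])
        elif op[0] == "set":      # self-modification: overwrite an instruction
            prog[op[1]] = op[2]
        elif op[0] == "halt":
            break
        pc += 1
    return out

# This program overwrites its own third instruction before reaching it.
demo = [
    ("set", 2, ("print", "rewritten")),
    ("print", "original-1"),
    ("print", "original-2"),      # replaced at runtime by instruction 0
    ("halt",),
]
print(run(demo))                  # → ['original-1', 'rewritten']
```

The self-transformation happens entirely in the emulated layer; the bearer (the `run` function) stays rigid, which is the sense in which hardware-level fixity does not limit software-level plasticity.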

> >> The point is just this one: do you or not make your theory relying on
> >> something non-Turing emulable. If the answer is yes: what is it?
> > Yes, biological, zoological, and anthropological awareness.
> If you mean by this, 1-awareness,

No, I mean qualitatively different phenomenologies which are all types
of 1-awareness. Since 1-awareness is private, they are not all the same.

> comp explains its existence and its  
> non Turing emulability, without introducing ad hoc non Turing emulable  
> beings in our physical neighborhood.

Whose physical neighborhood are comp's non Turing emulable 1-awareness
beings living in? Or are they metaphysical?

> This is precise enough to be tested, and we can argue that some
> non-computable quantum weirdness, like quantum indeterminacy, confirms
> this. The simple self-duplication quickly illustrates how comp makes it
> possible to experience non-computable facts without introducing anything
> non-computable in the third-person picture.

I'm not suggesting anything non-computable in the third person
picture. Third person is by definition computable. Some of those
computations are influenced by 1p motives though. Once those motives
are expressed externally, they are computable. You can't always
reverse engineer the 1-p motives from the 3-p though.

> > Feeling as
> > qualitatively distinct from detection.
> Of course. Feeling is distinct from detection. It involves a person,

Yes! A person, or another animal. Not a virus or a silicon chip or a
computer made of chips.
> which involves some (not big) amount of self-reference ability.

You don't have to be able to refer to yourself to feel something. Pain
is primitive.

> > Not to disqualify machines
> > implemented in a particular material - stone, silicon, milk bottles,
> > whatever, from having the normal detection experiences of those
> > substances and objects, but there is nothing to tempt me to want to
> > assign human neurological qualia to milk bottles stacked up like
> > dominoes. We know about synesthesia and agnosia, and I am positing
> > HADD or prognosia to describe how the assumption of qualia equivalence
> > is invalid.
> > If we make a machine out of living cells, then we run into the problem
> > of living cells not being easily persuaded to do our bidding. To
> > physically enact the design of a universal machine, you need a
> > relatively inanimate substance, which is the very thing that you
> > cannot use to make a living organism with access to qualia in the
> > biological range.
> But we can emulate a brain with milk bottles,

I don't think that we can. It's just a sculpture of a brain. It's like
emulating a person with a video image.

> so you agree that there  
> is zombie in your theory.

I don't think it would ever get that far. A zombie implies that the
behavior is identical to a typical person, and I don't think that's
possible to emulate through mathematics alone. It's always going to be
a stiff.

> Above you say that awareness is not Turing  
> emulable, but my question was: do you know what in the brain is not  
> Turing emulable?

The awareness of the brain is not emulable in a non-brain. It's not a
matter of what can't be emulated, it's that all emulation is itself
subjective. It's a modeling technique. A model of a brain is never
going to be much like a brain unless it is built out of something that
is a lot like a brain.

> You cannot answer by some 1-notion, because comp  
> explains why they exist, and why they are not Turing emulable, (albeit  
> manifestable by Turing emulation with some probability with respect to  
> you).

Comp is generalizing 1-awareness. Human awareness cannot be located
that way. It's not a matter of running human software on a universal
machine, because the essence of 1-p is non-universality. The hardware
is what makes the software possible.

> To negate comp, you have to show something, other than
> matter and consciousness, which necessitates an actual infinite amount
> of bits.

It's the whole premise underlying comp that is circular reasoning. If
you assume that matter and consciousness are both bits, then you frame
the argument as a quantitative information theory. Sense is what makes
information meaningful. Sense is the phenomenon of being informed and
informing. It's the I, me, and you experiential aspects of the cosmos.
Comp is limited to the 'it' aspects of the cosmos, and insists that I,
me, and you can be emulated by 'it'. That's one way of looking at it,
but it's biased against 1-p from the start. It's great for designing
AGI, but it does nothing to explain the origin of red or the meaning
of a conversation like this.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.