On Aug 31, 2:53 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 30 Aug 2011, at 19:23, Craig Weinberg wrote:

> >>>> A hard-wired universal machine can emulate a self-transforming
> >>>> universal machine, or a high level universal machine acting on
> >>>> its low level universal bearer.
>
> >>> Ok, but can it emulate a non-machine?
>
> >> This is meaningless.
>
> > If there is no such thing as a non-machine, then how can the term
> > machine have any meaning?
>
> There are a ton of non-machines. Recursion theory is the study of
> degrees of non-machineness.
>
> What is meaningless is to ask a machine to emulate a non-machine,
> which by definition is not emulable by a machine.

Ok, so how do we know that human awareness is not both a machine and a
non-machine, and therefore not completely Turing emulable?
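
To pin down what a "non-machine" even is, here is the textbook
diagonal sketch in Python (my illustration, not anyone's claim in this
thread; `halts` and `trouble` are hypothetical names, and the point is
precisely that `halts` cannot be implemented). It exhibits a perfectly
well-defined function that no machine computes:

    def halts(prog_source, arg):
        # Hypothetical total decider: True iff prog(arg) halts.
        # No such program can exist; that is the point of the sketch.
        raise NotImplementedError("uncomputable")

    TROUBLE = '''
    def trouble(src):
        if halts(src, src):   # if I would halt when run on myself...
            while True:       # ...then loop forever instead
                pass
        return "halted"
    '''

    # Either answer halts(TROUBLE, TROUBLE) could give is wrong:
    #   True  -> trouble(TROUBLE) loops forever, so it does not halt.
    #   False -> trouble(TROUBLE) returns, so it does halt.
    # The halting function is thus a well-defined "non-machine".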

> >>>> The point is just this one: do you or do you not make your
> >>>> theory rely on something non-Turing emulable? If the answer is
> >>>> yes: what is it?
>
> >>> Yes, biological, zoological, and anthropological awareness.
>
> >> If you mean by this, 1-awareness,
>
> > No, I mean qualitatively different phenomenologies which are all types
> > of 1-awareness. Since 1-awareness is private, they are not all the
> > same.
>
> Most plausibly.
>
>
>
> >> comp explains its existence and its non Turing emulability,
> >> without introducing ad hoc non Turing emulable beings in our
> >> physical neighborhood.
>
> > Whose physical neighborhood are comp's non Turing emulable 1-awareness
> > beings living in? Or are they metaphysical?
>
> They are (sigma_1) arithmetical, in the 3-view.
> And unboundedly complex, in the 1-views (personal, plural).

What makes them seem local to a spatiotemporal axis in a way that
seems simple in the 1p? How does an unboundedly complex phenomenon 'go
to the store for some beer'?

But back to this "(sigma_1 )arithmetical, in the 3-view". That's a yes
to the question of whether they are metaphysical, right?
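
For reference, my operational gloss on "sigma_1" (a sketch under
textbook assumptions; `sigma1_search` is just an illustrative name): a
sigma_1 sentence says "there exists n such that P(n)" for some
mechanically checkable P, and such sentences are exactly the ones a
machine can confirm by brute search, halting iff they are true:

    def sigma1_search(P):
        # Semi-decision procedure: halts with a witness iff the
        # sigma_1 sentence "there exists n with P(n)" is true.
        n = 0
        while True:
            if P(n):
                return n
            n += 1

    print(sigma1_search(lambda n: n * n == 25))  # true sentence: 5

    # A false sigma_1 sentence, e.g. P(n): n * n == 26, makes the
    # same search run forever -- its falsity is never confirmed.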

> >> This is precise enough to be tested, and we can argue that some
> >> non computable quantum weirdness, like quantum indeterminacy,
> >> confirms this. The simple self-duplication quickly illustrates how
> >> comp makes it possible to experience non computable facts without
> >> introducing anything non computable in the third person picture.
>
> > I'm not suggesting anything non-computable in the third person
> > picture. Third person is by definition computable.
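
(As an aside, Bruno's self-duplication point can be made concrete with
a toy program -- my sketch, not his, and `duplicate` is just an
illustrative name. The 3-p process below is short and deterministic,
yet each copy's 1-p diary is a single branch, and most branches are
algorithmically incompressible:

    def duplicate(diaries):
        # One 3-p step: every observer is copied to Washington ('W')
        # and Moscow ('M'); each copy's diary records only its own
        # outcome.
        return [d + city for d in diaries for city in "WM"]

    diaries = [""]
    for _ in range(4):
        diaries = duplicate(diaries)

    print(len(diaries))  # 16: the whole 3-p tree, trivially computable
    print(diaries[:4])   # ['WWWW', 'WWWM', 'WWMW', 'WWMM']

No algorithm a copy could have run beforehand predicts which single
string it ends up holding.)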
>
> Of course not. I mean, come on: Gödel, 1931. Or just by Church's
> thesis, as I explain from time to time (just one double
> diagonalization). Third person truth is bigger than the computable.

I don't know enough about it to say whether I agree yet, so I'll take
your word for it, but would you agree that third-person truth is by
definition more computable than first-person truth?
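
For my own notes, here is the double diagonalization as I understand
it (a textbook sketch, not necessarily Bruno's exact argument; `enum`
and `g` are hypothetical names). Suppose a computable `enum` listed
exactly the total computable functions f_0, f_1, ...:

    def enum(n):
        # Hypothetical: a computable listing f_0, f_1, ... of exactly
        # the total computable functions. Assumed for contradiction.
        raise NotImplementedError("no such computable listing exists")

    def g(n):
        # Diagonal: total and computable if enum were, yet
        # g(n) != f_n(n) for every n, so g is missing from the list.
        return enum(n)(n) + 1

The contradiction means "f is total" is a third-person arithmetical
truth no machine can effectively enumerate, which I take to be the
sense in which third person truth outruns the computable.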

> > Some of those
> > computations are influenced by 1p motives though.
>
> OK. But the motive might be an abstract being or engram (programmed
> by nature, by evolution, that is, by deep computational histories).
> No need to introduce anything non Turing emulable in the picture here.

Doesn't that just push first cause back a step? What motives influence
the abstract being, nature, or deep computational histories?

> > Once those motives
> > are expressed externally, they are computable.
>
> But with comp, you just cannot express them externally, just
> illustrate them and hope others grasp them. They are not computable,
> because the experience is consciousness filtered by infinities of
> 'brains'.

Illustrating them isn't an external expression? It sounds like you're
saying that nothing is computable now?

> Comp shows a problem. What problem does your theory show?

You mean what problem does my theory solve? Or what's an example of a
problem which arises from not using my model? It's the mind/body
problem. The role of awareness in the cosmos. The nature of our
relation to the microcosm and macrocosm. What energy, time, space, and
matter really are. The origins of the universe.

> > You can't always
> > reverse engineer the 1-p motives from the 3-p though.
>
> You are right, that is why, with comp, most 1-p notions are not
> 3-definable. Still, comp allows us to study the case of the ideally
> correct machine, and to have metatheories shedding light on that
> non-communicability.

Sounds good to me. I think there is tremendous value in studying ideal
principles, although I would not limit them to arithmetic minimalism.
There's a whole universe of ideally correct non-machine intelligence
out there (in here) that needs metatheories too.

> >>> Feeling as
> >>> qualitatively distinct from detection.
>
> >> Of course. Feeling is distinct from detection. It involves a person,
>
> > Yes! A person, or another animal. Not a virus or a silicon chip or a
> > computer made of chips.
>
> This is racism.

A silicon chip is not a member of a race. It does nothing at all that
could be considered an expression of feeling. It might have feeling,
but whatever it has, we have something more, at least in our own eyes.
Racism is to look at another human being with prejudice, not to look
at an inanimate object and fail to give it the benefit of the doubt as
a human being.

I understand that you see no reason in principle why a chip should be
different from anything else as far as being able to host a universal
machine, and that's true, but I'm not talking about universal
machines, I'm talking about universal non-machines, which I think are
made of sense (awareness, etc.), and which would indeed vary from
substance to substance in a qualitative way that could not be
quantitatively emulated.

> It is a confusion between what a person is and what its body is. No
> doubt billions of years of engramming make them hard to separate
> technologically, but nothing prevents surviving with a digital brain,
> or even living in a virtual environment, in principle, at some level,
> some day.
> And in this picture we can formulate precise (sub)problems of the
> hard mind-body problem.

Survive where?

> >> which involves some (not big) amount of self-reference ability.
>
> > You don't have to be able to refer to yourself to feel something.
>
> You don't have to refer to yourself explicitly, but *feeling* still
> involves implicit self-references, I think.

I don't agree. The self-references are a cognitive-cortical level
afterthought.

>
> > Pain
> > is primitive.
>
> It is very simple at the base and very deep, but, hmm.... I don't
> know, perhaps 1-primitive (with some of the "1"-views described by the
> arithmetical or self-referential hypostases).
>
> Not 3-primitive, with mechanism.

Not human pain, but there is not necessarily any mechanism if pain
originates in cells (and how could it really not?).

>
> >>> Not to disqualify machines
> >>> implemented in a particular material - stone, silicon, milk bottles,
> >>> whatever, from having the normal detection experiences of those
> >>> substances and objects, but there is nothing to tempt me to want to
> >>> assign human neurological qualia to milk bottles stacked up like
> >>> dominoes. We know about synesthesia and agnosia, and I am positing
> >>> HADD or prognosia to describe how the assumption of qualia
> >>> equivalence
> >>> is invalid.
>
> >>> If we make a machine out of living cells, then we run into the
> >>> problem
> >>> of living cells not being easily persuaded to do our bidding. To
> >>> physically enact the design of universal machine, you need a
> >>> relatively inanimate substance, which is the very thing that you
> >>> cannot use to make a living organism with access to qualia in the
> >>> biological range.
>
> >> But we can emulate a brain with milk bottles,
>
> > I don't think that we can. It's just a sculpture of a brain. It's like
> > emulating a person with a video image.
>
> You are wrong on this. We can. In principle. We cannot afford to
> waste our time doing it. But the point is that the person will not be
> a zombie; it is just badly connected to our reality.

The program will (asymptotically) approach a p-zombie because it's
just a sculpture of a brain.

> >> so you agree that there
> >> are zombies in your theory.
>
> > I don't think it would ever get that far. A zombie implies that the
> > behavior is identical to a typical person's, and I don't think
> > that's possible to emulate through mathematics alone. It's always
> > going to be a stiff.
>
> The arithmetical reality does emulate the computations, which are
> solidly defined (with Church's thesis).
> Now the experiences of the machines themselves will not be the type
> of object that is Turing emulable.
>

But the computation is blind to any feeling that is driving the
person's motives through the brain being emulated.

> >> Above you say that awareness is not Turing
> >> emulable, but my question was: do you know what in the brain is not
> >> Turing emulable?
>
> > The awareness of the brain is not emulable in a non-brain.
>
> What evidence have you for saying that the brain is aware?

I'm saying that we are the awareness of our brain (as opposed to us
being the awareness of our foot or of a basketball in our closet). MRI
and TMS technologies have convincingly shown that the activity of the
brain correlates with subjective awareness and can be manipulated in
ways that it cannot be through the foot or the basketball.

In addition, the brain may host many kinds of awareness other than our
own conscious experience. Why wouldn't it?

> We have evidence that the brain supports awareness and self-awareness
> of a person, sometimes persons, not that it is aware itself.

Right, but if we are that awareness then our knowledge of the brain as
an object *is* the brain discovering its own objective topology. Our
awareness is the 1p (heads) side of the coin of the brain (tails), so
I wouldn't expect the tails side of a coin to have its own heads side
as distinct from the heads side of the coin. Our awareness isn't only
the brain (or regions of it), but what we are aware of runs through
the brain.

>
> > It's not a
> > matter of what can't be emulated, it's that all emulation is itself
> > subjective. It's a modeling technique. A model of a brain is never
> > going to be much like a brain unless it is built out of something that
> > is a lot like a brain.
>
> What makes you so sure that nature isn't experimenting with
> modelling all the time?

It might model on the inside, but it experiments without modeling on
the outside. Each instance of something we sense in 3-p is a genuine
phenomenon. There aren't any disembodied theories wandering around
Earth.

> If the modelling of the brain fails at all substitution levels, it
> means you will get a zombie at some level.

The thing itself is not a zombie, it's just what it is. It's our
failure to fool ourselves into thinking it's genuine that projects
zombiehood (let's call it pseudognosia from now on?) onto the model.

> >> You cannot answer by some 1-notion, because comp explains why
> >> they exist, and why they are not Turing emulable (albeit
> >> manifestable by Turing emulation with some probability with
> >> respect to you).
>
> > Comp is generalizing 1-awareness. Human awareness cannot be located
> > that way. It's not a matter of running human software on a universal
> > machine, because the essence of 1-p is non-universality.
>
> Yeah ... A typical 1-move, to abandon universality for ... control.

That is the essence of 1p. The motive of sensorimotive. Actually two
distinguishable aspects: the sensory experience represents the
abandonment of universality for locality (being something means
collapsing the superposition of universality into a specific
phenomenological range of experiences and relations), while the
projection of the sense of that entity's 1p private involution of its
self-created universe is, of course, the intent to assert the self
through control (even if that means intentionally seeking to be
controlled).

> > The hardware
> > is what makes the software possible.
>
> Locally. Globally it is the other way round.

Sort of. Globally, the hardware is just the software's rear end. They
are the same thing but appear opposite to the software.

>
> >> To negate comp, you have to show something, different from matter
> >> and consciousness, which necessitates an actual infinite amount
> >> of bits.
>
> > It's the whole premise underlying comp that is circular reasoning. If
> > you assume that matter and consciousness are both bits,
>
> I don't do that. I just assume there is a level of description of the
> brain which makes it digitally emulable. And neither consciousness
> nor matter becomes bits in that picture.

What is the digital emulation made of if not bits? What would any
consciousness arising from that emulation be made of?

>
> > then you frame
> > the argument as a quantitative information theory. Sense is what makes
> > information meaningful. Sense is the phenomena of being informed and
> > informing. It's the I, me, and you experiential aspects of the cosmos.
> > Comp is limited to the 'it' aspects of the cosmos,
>
> No. It gets the 'it' (Bp) and the 1-me (Bp & p), and 7 other
> variants, which offer an arithmetical interpretation of Plotinus. It
> is very rich. You can't dismiss computer science: when UMs look deep
> inside, they see some non-trivial things.

I accept that UMs and computer science can deliver non-trivial
insights, but I don't think that we are going to find ourselves in
there, except by contrast. We may find a negative image of the self
that could be the only way of truly seeing the positive.
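
(For readers keeping score, here is my reconstruction of the variants
Bruno means, which may not match his exact count or wording:

    p               arithmetical truth
    Bp              provability, the 'it'
    Bp & p          the knower, the '1-me'
    Bp & Dt         the observable
    Bp & Dt & p     the sensible

with several of these splitting along the G/G* divide between what
the machine can prove about itself and what is true about it.)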

> > and insists that I,
> > me, and you can be emulated by 'it'.
>
> You meant "can't" I guess.

No, I'm saying that comp is insisting that digitalness (it) can
emulate I, me, and you.

>
> > That's one way of looking at it,
> > but it's biased against 1-p from the start.
>
> Not at all. It is explicitly taken into account at the start of comp
> by a question. And then recovered later by Theaetetus + machine
> self-reference. Comp, the weak version I study, is biased explicitly
> in favor of the 1-p at the start.
>
> > It's great for designing
> > AGI, but it does nothing to explain the origin of red or the meaning
> > of a conversation like this.
>
> I think it does, but you can only understand it by yourself, and this
> by being able to at least assume comp for the sake of the reasoning.

I think I can assume comp for the sake of reasoning, but it still
doesn't explain specific qualia and signifying meaning for me. It may
plot where they come into play on a map, but it has no opinion on the
redness of red.

Craig
