On Thu, May 6, 2021 at 9:08 AM Bruno Marchal <[email protected]> wrote:

>
> On 30 Apr 2021, at 20:52, Jason Resch <[email protected]> wrote:
>
> It might be a true fact that "Machine X believes Y", without Y being true.
> Is it simply the truth that "Machine X believes Y" that makes X
> conscious of Y?
>
>
> It is more the belief that the machine has a belief which remains true,
> even if the initial belief is false.
>


Is that extra meta-level of belief necessary for simple awareness, or only
for self-awareness?


>
> Can a machine believe "2+2=4" without having a reference to itself?
>
>
> Not really, unless you accept the idea of unconscious belief, which makes
> sense in some psychological theories.
>
> My method consists in defining “the machine M believes P” as “the machine
> M asserts P”, and then I limit myself to machines which are correct by
> definition. This is of no use in psychology, but is enough to derive
> physics.
>

I see. I think this might account for the confusion I had with respect to
the link between consciousness (as we know and perceive it), and the
consciousness of a self-referentially correct and consistent machine, which
was a necessary simplification in your initial research. (Assuming I
understand this correctly).

Self-referentially correct and consistent machines can be conscious, but
those properties are not necessary for consciousness. Only "being a
machine" of some kind would be necessary. My question would then be, is
every machine conscious of something, or are only certain machines
conscious? If only some, how soon in the UD would a conscious machine be
encountered?

If the UD can be viewed as a machine in its own right, is it a machine that
is conscious of everything? A super-mind or over-mind? Or do the minds
fractionate, owing to their lack of relation to each other across the memory
divisions of the UD?



> What, programmatically, would you say is needed to program a machine that
> believes "2+2=4" or to implement self-reference?
>
>
> That it has enough induction axioms, like PA and ZF, but unlike RA (R and
> Q), or CL (combinatory logic without induction).
>  The universal machines without induction axioms are conscious, but are very
> limited in introspective power. They don’t have the rich theology of the
> machines having induction.
> I recall that the induction axioms are all axioms having the shape [P(0) &
> (for all x P(x) -> P(x+1))] -> (for all x P(x)). It is an ability to build
> universals.
>

Thank you. I can begin to see how induction is necessary for self-reference.
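
For reference, the induction schema Bruno describes, with one axiom per
arithmetical formula P, rendered in standard notation:

```latex
\bigl[\,P(0) \;\land\; \forall x\,\bigl(P(x) \rightarrow P(x+1)\bigr)\,\bigr]
\;\rightarrow\; \forall x\, P(x)
```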


> Does a Turing machine evaluating "if (2+2 == 4) then" believe it?
>
>
> If the machine can prove:
> Beweisbar(x)-> Beweisbar(Beweisbar(x)), she can be said to be
> self-conscious. PA can, RA cannot.
>
>
>
> Or does it require theorem proving software that reduces a statement to
> Peano axioms or similar?
>
>
> That is required for the rational belief, but not for the experienceable
> one.
>
>
>
I guess this is what I am most curious about. Not so much rational belief
or self-consciousness, but the requirements of immediate
experience/awareness. If consciousness is the awareness of information, how
does one write a program that is "aware" of information? In some sense, I
can see the argument that any handling or processing of information
requires some kind of awareness of it.
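
(As an aside: in modal notation, the Beweisbar condition Bruno states above
is the "4" axiom of provability logic, one of the Hilbert-Bernays-Löb
derivability conditions. PA proves it of its own provability predicate; RA
does not:

```latex
\Box p \;\rightarrow\; \Box\Box p
```

where Box abbreviates Beweisbar, i.e. provability formalized in the
machine's own arithmetic.)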


>
>
>
>> To get immediate knowledgeability you need to add consistency ([]p &
>> <>t), to get ([]p & <>t & p) which prevents transitivity, and gives to the
>> machine a feeling of immediacy.
>>
>
> By consistency here do you mean the machine must never come to believe
> something false, or that the machine itself must behave in a manner
> consistent with its design/definition?
>
>
> That the machine will not believe something false. I agree this works only
> because I can limit myself to correct machines.
> The psychology and theology of the lying machine remains to be done, but
> it is of no use for deriving physics from arithmetic.
>
>
>
>
> I still have a conceptual difficulty trying to marry these mathematical
> notions of truth, provability, and consistency with a program/Machine that
> manifests them.
>
>
> It *is* subtle; that is why we need to use the mathematics of
> self-reference. It is highly counter-intuitive. All errors in
> philosophy/theology come from confusing one self-referential mode with
> another, I would say.
>
>
>
>
>> If a program can be said to "know" something then can we also say it is
>> conscious of that thing?
>>
>>
>> 1) That’s *not* the case for []p & p, unless you accept a notion of
>> unconscious knowledge, like knowing that Perseverance and Ingenuity are on
>> Mars but not currently thinking about it, so that you are not right
>> now consciously aware of the fact---well, you are, but only because I have
>> just reminded you of it :)
>>
>
> In a way, I might view these long term memories as environmental signals
> that encroach upon one's mind state. A state which is otherwise not
> immediately aware of all the contents of this memory (like opening a sealed
> box to discover its contents).
>
>
> OK.
>
>
>
>
>
>> 2) But that *is* the case for []p & <>t & p. If the machine knows
>> something in that sense, then the machine can be said to be conscious of p.
>> Then to be “simply” conscious, becomes []t & <>t (& t).
>>
>> Note that “p” always refers to a partially computable arithmetical (or
>> combinatorial) proposition. That’s the way of translating “Digital
>> Mechanism” in the language of the machine.
>>
>> To sum up, to get a conscious machine, you need a computer (aka universal
>> number/machine) with some notion of belief, and knowledge/consciousness
>> arises from the actuation of truth, which the machine cannot define (by the
>> theorem of Tarski and some variant by Montague, Thomason, and myself...).
>>
>
So then, it seems to me a program with a memory for listing propositions,
which it can categorize as true or false, or to which it can otherwise
ascribe some probability of being true, would have a notion of belief. But
what counts as actuation of truth? Does it arise out of attempts to
test/categorize those propositions/beliefs?


>
>> That theory can be said to be well tested a posteriori, because it implied
>> the quantum reality, at least the one described by the Schroedinger equation
>> or the Heisenberg matrix (or even better the Feynman integral), WITHOUT any
>> collapse postulate.
>>
>
> Can it be said that Deep Blue is conscious of the state of the chess board
> it evaluates?
>
>
> Deep Blue, I guess not. But for AlphaGo, or some of its descendants, it
> looks like there are circular neural pathways allowing the machine to learn
> its own behaviour, and to attach some identity in this way. So, deep
> learning might converge on a conscious machine.
>

If I recall correctly, AlphaGo's network was entirely feed-forward, with 42
layers of processing. AlphaZero might be different; it is definitely more
sophisticated in that it was given only the rules of the game, and came to
master many different games (Go, Chess, and Shogi). Are loops, or neurons
with short-term memories somehow necessary for consciousness? I guess the
presence of loops (or external memories) is the difference between circuits
and Turing Machines.
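
That circuits-versus-machines difference can be sketched in code (a toy
illustration of my own, using parity as the example): a feed-forward
function is a fixed-depth map over a fixed number of inputs, while a loop
carrying state handles inputs of any length:

```python
# A feed-forward "circuit": fixed depth, fixed number of inputs.
def feed_forward(x1, x2, x3):
    # Output depends only on the current inputs; no internal state.
    return (x1 ^ x2) ^ x3  # parity of exactly three bits

# A loop with memory: one short-term state variable, updated each step,
# computes parity of ANY length input -- no single fixed-size circuit
# can do that for all input lengths.
def with_memory(bits):
    state = 0
    for b in bits:
        state ^= b  # the "short-term memory" of the computation
    return state

print(feed_forward(1, 0, 1))         # 0
print(with_memory([1, 0, 1, 1, 1]))  # 0
```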


> But that is not verified, and they are still just playing “simple games”.
> We don’t ask them to build a theory of themselves. It even looks like they
> try to avoid this, and that is normal. Like nature with insects, we don’t
> want a terrible child, and try to make “mature machines” right at the start.
>

I know you have said you believe jumping spiders are conscious (or even
self-conscious). In your opinion, are ants conscious? Are amoebae? We
already have full neuronal simulations of worms (see openworm.org). Are
these, then, already examples of conscious programs?


>
>
> Is a Tesla car conscious of whether the traffic signal is showing red,
> yellow, or green?
>
>
> I doubt this, but I have not studied them. I doubt it has full
> self-reference ability, like PA and ZF, or any human baby.
>

I am not sure about full self-reference, but they do build models of their
environment that incorporate themselves, at least assuming I am not
reading too much into this graphical display:

https://i1.wp.com/electrek.co/wp-content/uploads/sites/3/2019/05/Tesla-driving-visualization.jpg?w=1500&quality=82&strip=all&ssl=1

(Note that it builds a model of the other surrounding cars, displayed in
gray, and of itself in red.)


> Or is a more particular class of software necessary for
> belief/consciousness? This is what I'm struggling to understand.  I greatly
> appreciate all the answers you have provided.
>
>
> All you need is enough induction power. RA + induction on recursive
> formula is not enough, unless you add the exponentiation axiom. But RA +
> induction on recursive enumerable formula is enough.
>
>
Thanks again. This is helpful. I feel closer to understanding, although not
fully there yet.

Jason
