> On 30 Apr 2021, at 20:52, Jason Resch <[email protected]> wrote:
> 
> 
> 
> On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal <[email protected] 
> <mailto:[email protected]>> wrote:
> Hi Jason,
> 
> 
>> On 25 Apr 2021, at 22:29, Jason Resch <[email protected] 
>> <mailto:[email protected]>> wrote:
>> 
>> It is quite easy, I think, to define a program that "remembers" (stores and 
>> later retrieves) information.
>> 
>> It is slightly harder, but not altogether difficult, to write a program that 
>> "learns" (alters its behavior based on prior inputs).
>> 
>> What though, is required to write a program that "knows" (has awareness or 
>> access to information or knowledge)?
>> 
>> Does, for instance, the following program "know" anything about the data it 
>> is processing?
>> 
>> if (pixel.red > 128) then {
>>     // knows pixel.red is greater than 128
>> } else { 
>>     // knows pixel.red <= 128
>> }
>> 
>> If not, what else is required for knowledge?
> 
> Do you agree that knowledgeability obeys
> 
>  knowledgeability(A) -> A
>  knowledgeability(A) -> knowledgeability(knowledgeability(A))
> 
> Using the definition of knowledge as "true belief" I agree with this.

OK
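[In modal-logic terms these are the axioms T and 4, which are exactly what holds of the box on reflexive, transitive Kripke frames. A minimal brute-force check in Python; this is my own toy illustration, with a made-up three-world frame, not anything from the thread itself:]

```python
# Toy check of the two axioms above on a small Kripke frame:
#   T: K(A) -> A          (needs reflexivity)
#   4: K(A) -> K(K(A))    (needs transitivity)
import itertools

worlds = {0, 1, 2}
# a reflexive and transitive accessibility relation
access = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}

def knows(valuation, w):
    """K(A) holds at w iff A holds at every world accessible from w."""
    return all(valuation[v] for (u, v) in access if u == w)

# exhaustively check T and 4 for every valuation of one proposition A
for bits in itertools.product([False, True], repeat=len(worlds)):
    A = dict(zip(sorted(worlds), bits))
    KA = {w: knows(A, w) for w in worlds}
    for w in worlds:
        assert (not knows(A, w)) or A[w]          # T: K(A) -> A
        assert (not knows(A, w)) or knows(KA, w)  # 4: K(A) -> K(K(A))
```

[If you delete the reflexive pairs from `access`, the T check fails: dropping knowledgeability(A) -> A is precisely the move from knowledge down to mere belief.]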



> 
> 
> 
> And also, to limit ourselves to rational knowledge:
> 
>  knowledgeability(A -> B) -> (knowledgeability(A) -> knowledgeability(B))
> 
> From this, it can be proved that “knowledgeability” of any “rich” machine 
> (proving enough theorems of arithmetic) is not definable in the language of 
> that machine, or in any language available to that machine.
> 
> Is this because the definition of knowledge includes truth, and truth is not 
> definable?
> 
> 

Roughly speaking, yes, but some could argue that we might define knowledge 
without invoking truth, or invoke it less directly, so it is pleasant that people 
like Thomason, Artemov, and myself give direct proofs that anything obeying the 
S4 axioms cannot be defined in arithmetic or by a Turing machine, unless she 
bets on the “truth” of mechanism, to be sure.




> So the best we can do is to define a notion of belief (which abandons the 
> reflexion axiom, i.e. we drop belief(A) -> A). That makes belief definable 
> (in the language of the machine), and then we can apply the idea of 
> Theaetetus, and define knowledge (or knowledgeability, when we add the 
> transitivity axiom []p -> [][]p) by true belief.
> 
> The machine knows A when she believes A and A is true.
> 
> So is it more appropriate to equate consciousness with belief, rather than 
> with knowledge?

Consciousness requires some truth at some level. You can be dreaming and having 
false beliefs, but your consciousness will remain the “indubitable” fixed 
point, and will remain associated with truth.




> 
> It might be a true fact that "Machine X believes Y" without Y being true. Is 
> it simply the truth of "Machine X believes Y" that makes X conscious of Y?

It is more the belief that the machine has a belief which remains true, even if 
the initial belief is false.
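[The Theaetetus move can be caricatured in a few lines of Python. This is entirely my own toy illustration, with made-up names; the only point is that the machine's beliefs are its assertions, while the truth predicate sits outside the machine:]

```python
# Belief = what the machine asserts; knowledge = true belief ([]p & p).
# The 'facts' dict stands in for arithmetical truth, which the machine
# itself cannot define (Tarski).
facts = {"2+2=4": True, "2+2=5": False}   # external truth predicate
asserted = {"2+2=4", "2+2=5"}             # what the machine asserts

def believes(p):
    return p in asserted

def knows(p):
    # Theaetetus: knowledge is belief that happens to be true
    return believes(p) and facts.get(p, False)

assert knows("2+2=4")                          # true belief: knowledge
assert believes("2+2=5") and not knows("2+2=5")  # false belief: not knowledge
```

[Note that `believes` is definable from the machine's own data, while `knows` needs the external `facts` table; that asymmetry is the whole point.]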



> 
>> 
>> Does the program behavior have to change based on the state of some 
>> information? For example:
>> 
>> if (pixel.red > 128) then {
>>     // knows pixel.red is greater than 128
>>     doX();
>> } else { 
>>     // knows pixel.red <= 128
>>     doY();
>> }
>> 
>> Or does the program have to possess some memory and enter a different state 
>> based on the state of the information it processed?
>> 
>> if (pixel.red > 128) then {
>>     // knows pixel.red is greater than 128
>>     enterStateX();
>> } else { 
>>     // knows pixel.red <= 128
>>     enterStateY();
>> }
>> 
>> Or is something else altogether needed to say the program knows?
> 
> You need self-reference ability for the notion of belief, together with a 
> notion of reality or truth, which the machine cannot define.
> 
> Can a machine believe "2+2=4" without having a reference to itself?

Not really, unless you accept the idea of unconscious belief, which makes sense 
in some psychological theories.

My method consists in defining “the machine M believes P” by “the machine M 
asserts P”, and then I limit myself to machines which are correct by definition. 
This is of no use in psychology, but it is enough to derive physics.



> What, programmatically, would you say is needed to program a machine that 
> believes "2+2=4" or to implement self-reference?

That it has enough induction axioms, like PA and ZF, but unlike RA (R and Q) 
or CL (combinatory logic without induction).
Universal machines without induction axioms are conscious, but are very 
limited in introspective power. They don’t have the rich theology of the 
machines having induction.
I recall that the induction axioms are all axioms having the shape [P(0) & (for 
all x, P(x) -> P(x+1))] -> (for all x, P(x)). It is an ability to build 
universals.
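[For readers who want to see that schema checked mechanically, here is a generic instance in Lean 4 (my own illustration, not from the thread):]

```lean
-- The induction schema: from P(0) and ∀ x, P(x) → P(x+1),
-- conclude ∀ x, P(x).
example (P : Nat → Prop) (h0 : P 0) (hs : ∀ n, P n → P (n + 1)) :
    ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ n ih => exact hs n ih
```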



> 
> Does a Turing machine evaluating "if (2+2 == 4) then" believe it?

If the machine can prove
Beweisbar(x) -> Beweisbar(Beweisbar(x)), where Beweisbar is Gödel’s provability 
predicate, she can be said to be self-conscious. PA can, RA cannot.
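[For context, an addition of mine, standard material from provability logic: that formula is the third of the Hilbert–Bernays–Löb derivability conditions, which a theory like PA proves about its own provability predicate (Bew = Beweisbar):]

```latex
% Hilbert–Bernays–Löb derivability conditions for PA:
\begin{align*}
\text{D1: } & PA \vdash P \;\Rightarrow\; PA \vdash \mathrm{Bew}(\ulcorner P \urcorner) \\
\text{D2: } & PA \vdash \mathrm{Bew}(\ulcorner P \to Q \urcorner) \to
              (\mathrm{Bew}(\ulcorner P \urcorner) \to \mathrm{Bew}(\ulcorner Q \urcorner)) \\
\text{D3: } & PA \vdash \mathrm{Bew}(\ulcorner P \urcorner) \to
              \mathrm{Bew}(\ulcorner \mathrm{Bew}(\ulcorner P \urcorner) \urcorner)
\end{align*}
```

[D3 is the []p -> [][]p cited above; RA is too weak to prove it of its own Beweisbar.]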



> Or does it require theorem proving software that reduces a statement to Peano 
> axioms or similar?

That is required for rational belief, but not for the experienceable one.



> 
> 
> To get immediate knowledgeability you need to add consistency ([]p & <>t), to 
> get ([]p & <>t & p) which prevents transitivity, and gives to the machine a 
> feeling of immediacy. 
> 
> By consistency here do you mean the machine must never come to believe 
> something false, or that the machine itself must behave in a manner 
> consistent with its design/definition?

That the machine will not believe something false. I agree this works only 
because I can limit myself to correct machines.
The psychology and theology of the lying machine remain to be done, but they are 
of no use for deriving physics from arithmetic.



> 
> I still have a conceptual difficulty trying to marry these mathematical 
> notions of truth, provability, and consistency with a program/Machine that 
> manifests them.

It *is* subtle; that is why we need to use the mathematics of self-reference. It 
is highly counter-intuitive. All errors in philosophy/theology come from 
confusing one self-referential mode with another, I would say. 




> 
> 
> 
>> 
>> If a program can be said to "know" something then can we also say it is 
>> conscious of that thing?
> 
> 1) That’s *not* the case for []p & p, unless you accept a notion of 
> unconscious knowledge, like knowing that Perseverance and Ingenuity are on 
> Mars, but not currently thinking about it, so that you are not right 
> now consciously aware of the fact---well, you are, but just because I have 
> just reminded you of it :)
> 
> In a way, I might view these long-term memories as environmental signals that 
> encroach upon one's mind state. A state which is otherwise not immediately 
> aware of all the contents of this memory (like opening a sealed box to 
> discover its contents).

OK.



> 
> 
> 2) But that *is* the case for []p & <>t & p. If the machine knows something 
> in that sense, then the machine can be said to be conscious of p. 
> Then being “simply” conscious becomes []t & <>t (& t). 
> 
> Note that “p” always refers to a partially computable arithmetical (or 
> combinatory) proposition. That’s the way of translating “Digital 
> Mechanism” in the language of the machine.
> 
> To sum up, to get a conscious machine, you need a computer (aka a universal 
> number/machine) with some notion of belief, and knowledge/consciousness rises 
> from the actuation of truth, which the machine cannot define (by the theorem 
> of Tarski and some variants by Montague, Thomason, and myself...). 
> 
> That theory can be said to be well tested a posteriori, because it implies the 
> quantum reality, at least the one described by the Schroedinger equation or 
> Heisenberg matrices (or, even better, the Feynman integral), WITHOUT any 
> collapse postulate.
> 
> Can it be said that Deep Blue is conscious of the state of the chess board it 
> evaluates?

Deep Blue? I guess not. But for AlphaGo, or some of its descendants, it looks 
like there are circular neural pathways allowing the machine to learn its own 
behaviour, and to attach some identity to itself in this way. So deep learning 
might converge on a conscious machine. But that is not verified, and they are 
still just playing “simple games”. We don’t ask them to build a theory of 
themselves. It even looks like they try to avoid this, and that is normal. Like 
nature with insects, we don’t want an enfant terrible, and try to make “mature 
machines” right from the start.




> 
> Is a Tesla car conscious of whether the traffic signal is showing red, 
> yellow, or green?

I doubt this, but I have not studied them. I doubt it has full self-referential 
ability, like PA, ZF, or any human baby.



> 
> Or is a more particular class of software necessary for belief/consciousness? 
> This is what I'm struggling to understand.  I greatly appreciate all the 
> answers you have provided.

All you need is enough induction power. RA + induction on recursive formulas is 
not enough, unless you add the exponentiation axiom. But RA + induction on 
recursively enumerable formulas is enough. 

Best,

Bruno



-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/E259E3E2-486B-43E6-86E8-1D023E72F5F9%40ulb.ac.be.
