On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal <[email protected]> wrote:

> Hi Jason,
>
>
> On 25 Apr 2021, at 22:29, Jason Resch <[email protected]> wrote:
>
> It is quite easy, I think, to define a program that "remembers" (stores
> and later retrieves) information.
>
> It is slightly harder, but not altogether difficult, to write a program
> that "learns" (alters its behavior based on prior inputs).
>
> What though, is required to write a program that "knows" (has awareness or
> access to information or knowledge)?
>
> Does, for instance, the following program "know" anything about the data
> it is processing?
>
> if (pixel.red > 128) then {
>     // knows pixel.red is greater than 128
> } else {
>     // knows pixel.red <= 128
> }
>
> If not, what else is required for knowledge?
>
>
> Do you agree that knowledgeability obeys
>
>  knowledgeability(A) -> A
>  knowledgeability(A) ->  knowledgeability(knowledgeability(A))
>

Using the definition of knowledge as "true belief" I agree with this.
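To make that definition concrete for myself, here is a toy sketch (entirely my own construction, not your formalism; the `world` dictionary is an illustrative stand-in for truth, which, as you note, a real machine could not define internally):

```python
# Toy model of the Theaetetus definition: knowledge = true belief.
# "world" plays the role of truth, external to the agent.

world = {"pixel.red > 128": True, "2+2=5": False}

class Agent:
    def __init__(self):
        self.beliefs = set()

    def believe(self, proposition):
        self.beliefs.add(proposition)

    def knows(self, proposition):
        # Knows p iff it believes p AND p is in fact true in the world.
        return proposition in self.beliefs and world.get(proposition, False)

agent = Agent()
agent.believe("pixel.red > 128")
agent.believe("2+2=5")                 # a false belief
print(agent.knows("pixel.red > 128"))  # True: a true belief
print(agent.knows("2+2=5"))            # False: believed, but not true
```

Of course the agent here has no access to `world`; only we, from outside, can say whether its belief counts as knowledge, which seems to be exactly the point.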



> (And also, to limit ourselves to rational knowledge:
>
>  knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))
>
> From this, it can be proved that “knowledgeability” of any “rich” machine
> (one proving enough theorems of arithmetic) is not definable in the
> language of that machine, or in any language available to that machine.
>

Is this because the definition of knowledge includes truth, and truth is
not definable?


> So the best we can do is to define a notion of belief, which abandons the
> reflexion axiom: we give up belief(A) -> A. That makes belief definable
> (in the language of the machine), and then we can apply the idea of
> Theaetetus, and define knowledge (or knowledgeability, when we add the
> transitivity axiom []p -> [][]p) as true belief.
>
> The machine knows A when she believes A and A is true.
>

So is it more appropriate to equate consciousness with belief, rather than
with knowledge?

It might be a true fact that "Machine X believes Y", without Y being true.
Is it simply the truth of "Machine X believes Y" that makes X
conscious of Y?



>
> Does the program behavior have to change based on the state of some
> information? For example:
>
> if (pixel.red > 128) then {
>     // knows pixel.red is greater than 128
>     doX();
> } else {
>     // knows pixel.red <= 128
>     doY();
> }
>
> Or does the program have to possess some memory and enter a different
> state based on the state of the information it processed?
>
> if (pixel.red > 128) then {
>     // knows pixel.red is greater than 128
>     enterStateX();
> } else {
>     // knows pixel.red <= 128
>     enterStateY();
> }
>
> Or is something else altogether needed to say the program knows?
>
>
> You need self-reference ability for the notion of belief, together with a
> notion of reality or truth, which the machine cannot define.
>

Can a machine believe "2+2=4" without having a reference to itself? What,
programmatically, would you say is needed to program a machine that
believes "2+2=4" or to implement self-reference?

Does a Turing machine evaluating "if (2+2 == 4) then" believe it? Or does
it require theorem proving software that reduces a statement to Peano
axioms or similar?
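To sharpen my question, here is one toy reading of "belief as derivation" that I can imagine (my own construction, with illustrative names): a machine whose beliefs are exactly the equations it can derive by the recursion equations of Peano-style addition.

```python
# Toy "believer": it believes a + b = c iff its own rewriting of the
# Peano recursion equations (a + Z = a ; a + S(b) = S(a + b)) derives it.

def num(n):
    """Build a Peano numeral: 0 -> 'Z', 3 -> 'SSSZ'."""
    return "S" * n + "Z"

def plus(a, b):
    if b == "Z":
        return a
    return "S" + plus(a, b[1:])

def believes(equation):
    """equation = ((a, b), c), read as the claim 'a + b = c'."""
    (a, b), c = equation
    return plus(num(a), num(b)) == num(c)

print(believes(((2, 2), 4)))  # derivable, hence believed
print(believes(((2, 2), 5)))  # not derivable
```

Is something like this enough to count as belief in "2+2=4", or does belief require the self-reference you mention, which this sketch plainly lacks?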


> To get immediate knowledgeability you need to add consistency ([]p & <>t),
> to get ([]p & <>t & p), which prevents transitivity and gives the machine
> a feeling of immediacy.
>

By consistency here do you mean the machine must never come to believe
something false, or that the machine itself must behave in a manner
consistent with its design/definition?

I still have a conceptual difficulty trying to marry these mathematical
notions of truth, provability, and consistency with a program/Machine that
manifests them.
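The closest I can come to making the marriage concrete is a toy propositional machine (again my own illustration, not your arithmetical setting): provability as closure of the machine's axioms under modus ponens, consistency as never deriving both p and not-p, and truth left entirely outside the machine.

```python
# Toy separation of the three notions for a propositional "machine".
# []p  : p is derivable from the axioms by modus ponens.
# <>t  : the derived set never contains both p and ('not', p).
# truth: an external valuation the machine itself has no access to.

def derive(axioms):
    """Close a set of formulas under modus ponens.
    Implications are encoded as ('->', p, q); everything else is atomic."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for f in list(theorems):
            if isinstance(f, tuple) and f[0] == '->' and f[1] in theorems:
                if f[2] not in theorems:
                    theorems.add(f[2])
                    changed = True
    return theorems

def consistent(theorems):
    return not any(('not', p) in theorems for p in theorems)

thms = derive({'p', ('->', 'p', 'q')})
print('q' in thms)       # provable from the axioms
print(consistent(thms))  # no p derived together with not-p
```

Even in this crude picture, nothing inside the machine answers whether 'p' is *true*; is that externality what you mean by the machine's inability to define truth?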


>
>
> If a program can be said to "know" something then can we also say it is
> conscious of that thing?
>
>
> 1) That’s *not* the case for []p & p, unless you accept a notion of
> unconscious knowledge, like knowing that Perseverance and Ingenuity are on
> Mars but not currently thinking about it, so that you are not right now
> consciously aware of the fact---well, you are, but only because I have
> just reminded you of it :)
>

In a way, I might view these long-term memories as environmental signals
that encroach upon one's mind state, a state which is otherwise not
immediately aware of all the contents of this memory (like opening a sealed
box to discover its contents).


> 2) But that *is* the case for []p & <>t & p. If the machine knows
> something in that sense, then the machine can be said to be conscious of p.
> Then to be “simply” conscious, becomes []t & <>t (& t).
>
> Note that “p” always refers to a partially computable arithmetical (or
> combinatorial) proposition. That’s the way of translating “Digital
> Mechanism” into the language of the machine.
>
> To sum up, to get a conscious machine, you need a computer (aka a universal
> number/machine) with some notion of belief; knowledge/consciousness then
> arises from the actuation of truth, which the machine cannot define (by the
> theorem of Tarski, and some variants by Montague, Thomason, and myself...).
>
> That theory can be said to be well tested a posteriori, because it implies
> the quantum reality, at least the one described by the Schroedinger
> equation or Heisenberg matrices (or, even better, the Feynman integral),
> WITHOUT any collapse postulate.
>

Can it be said that Deep Blue is conscious of the state of the chess board
it evaluates?

Is a Tesla car conscious of whether the traffic signal is showing red,
yellow, or green?

Or is a more particular class of software necessary for
belief/consciousness? This is what I'm struggling to understand.  I greatly
appreciate all the answers you have provided.

Jason


> Bruno
>
>
>
>
> Jason
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgmPiCz5v4p91LAs0jN_2dCBocvnh4OO8sE7c-0JG%3DuwQ%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgmPiCz5v4p91LAs0jN_2dCBocvnh4OO8sE7c-0JG%3DuwQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
>

