On Wednesday, 1 June 2016, Pierz <[email protected]> wrote:

>
>
> On Wednesday, June 1, 2016 at 4:44:33 AM UTC+10, Brent wrote:
>>
>>
>>
>> On 5/31/2016 10:08 AM, Bruno Marchal wrote:
>> >
>> > On 31 May 2016, at 00:36, Pierz wrote:
>> >
>> >> Clark v Marchal! I love this match-up. I predict it will go 47000
>> >> rounds without a knockout!
>> >
>> >
>> > I am interested in the problem of why some machines get stuck at step
>> > 3 of the UDA :)
>> >
>> > The translation of the reasoning into arithmetic sheds light on this,
>> > if not an answer to that question. An ideally arithmetically correct
>> > machine cannot believe in computationalism: she will not identify her
>> > soul with her body or relative Gödel number, that is, she will not
>> > identify herself with any third person description of what she really
>> > feels herself to be. The soul of the machine is not a machine from the
>> > soul's own point of view.
>> >
>> > That is well summed up by a simple theorem in G and G*. The machine's
>> > body can be identified with its provability predicate []p. When PA
>> > talks about her provability abilities, she derives them from a
>> > specific third person description of her beliefs and of how to
>> > generate them. Now, accepting the classical analysis of knowledge and
>> > defining it in the Theaetetus' manner, by []p & p (with p ranging over
>> > the sigma_1 arithmetical propositions and "[]" representing Gödel's
>> > arithmetical beweisbar), we get that
>> >
>> > 1) G* proves []p <-> ([]p & p)
>> >
>> > 2) G can't prove in general that []p <-> ([]p & p)
>> >
>> > and indeed, the logic of []p & p will be quite different from the
>> > logic of []p, due to incompleteness.
>> >
>> > I define the (proper) theology of the machine by G* minus G. The local
>> > identity of the soul ([]p & p) and the body-brain-program ([]p) is
>> > true, but not provable, and it cannot even be taken as an axiom. It is
>> > necessarily a non-justifiable belief, a hope or a fear.
>> >
>> > The other very nice thing is that "[]p & p" does not admit any third
>> > person description available in its/her/his language. It also defines
>> > an arithmetical interpretation of intuitionistic logic (with the
>> > solipsist identity of truth and the personal mental constructions),
>> > and when p is restricted to the sigma_1 (complete) domain (= UD*), we
>> > get a quantum logic, which was expected for the UDA reasons, but is
>> > still surprising as it marries antisymmetry (related to the logic of
>> > []p & p, i.e. S4Grz) with symmetry (related to []p & p when p is
>> > sigma_1).
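
For readers without the provability-logic background, the facts appealed to above can be stated compactly. This is a standard summary in the usual notation (G is also written GL in the literature), added here as a gloss rather than part of the original post:

```latex
% G (also written GL): modal logic K plus Loeb's axiom; by Solovay (1976)
% it captures exactly what PA can prove about its own provability:
\[ \Box(\Box p \to p) \to \Box p \]

% G*: all theorems of G plus the reflection schema \Box p \to p, closed
% under modus ponens only; it captures what is *true* about
% PA-provability in the standard model.

% (1) Since G* has reflection, it proves the equivalence:
\[ G^{*} \vdash \Box p \leftrightarrow (\Box p \wedge p) \]

% (2) G does not prove reflection (taking p = \bot, reflection asserts
% consistency, unprovable by the second incompleteness theorem), so:
\[ G \nvdash \Box p \leftrightarrow (\Box p \wedge p) \]

% With knowledge defined in the Theaetetus' manner, K p := \Box p \wedge p,
% the modal logic of K is S4Grz (Goldblatt 1978; Boolos).
```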
>> >
>> > Judson Webb said that Gödel's theorem was a lucky chance for the
>> > Mechanist theory of mind, but here we see that (Everett) QM, even
>> > formally, is an even bigger chance for Mechanism.
>> >
>> > Now this remark, that machines cannot believe in Mechanism (and its
>> > consequences), might apply better to someone like Craig Weinberg (if
>> > you remember the conversations here), and less to John Clark, who
>> > accepts (and even practices) Mechanism but still gets stuck, for
>> > unknown reasons, at step 3. We need another theory, which I think
>> > might involve notions of susceptibility and more emotional human
>> > stuff. Now, if you can make (logical) sense of his refutation of step
>> > 3, it would help!
>> >
>> > Note: I have introduced a new term: the surrational. It is, like G*
>> > minus G, the part of the truth *about* a machine that a sound machine
>> > cannot believe/prove/justify.
>>
>> In that formulation you take believe, prove, and justify to have the
>> same extension.  But that's not a good model of anyone I know.  In
>> general, people believe many things they cannot prove from some set of
>> axioms - and even if they could, their "proof" would be contingent on
>> the axioms.
>>
>> Brent
>>
> I tend to agree. Indeed the notion of "belief" is a very complex one,
> psychologically. For example, a person might believe (or more properly,
> believe they believe) in a doomsday prophecy, and yet as the moment arrives
> uneventfully, find themselves quite unsurprised. In other words, they were
> wrong about what they thought they believed. Or a person might believe
> their husband to be a good person, but upon his being charged with murder,
> discover that they "knew all along, deep down". And so on. Sometimes I
> think what we "believe" is merely what we assert to ourselves to be true,
> and this assertion is in real life often not based upon rational evidence,
> let alone anything as rigorous or abstract as an "axiom". I'm not sure
> where that leaves a theory of humans as arithmetical machines, but the
> level at which human beliefs about the self and the world are formed and
> justified seems a long long way from the level at which the "beliefs" of
> logicians are formed.
>

In prediction markets, people are inclined to stake something of value only
to the extent that they actually think a belief is likely to be true;
whereas if they are simply asked what they believe, they are more likely to
say what they think ought to be true.
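
The incentive mechanism Stathis describes can be illustrated with a proper scoring rule. This is a minimal sketch (not from the thread), using the quadratic Brier score as a stand-in for a market payoff; all names are illustrative. It shows that once payoff is at stake, expected payoff is maximized only by reporting one's actual belief:

```python
# A proper scoring rule rewards honest probability reports: your expected
# payoff is maximized by reporting the probability you actually assign.

def brier_payoff(report, outcome):
    """Payoff for reporting probability `report` when the event outcome is 0 or 1."""
    return 1.0 - (outcome - report) ** 2

def expected_payoff(report, true_belief):
    """Expected payoff if the event really occurs with probability `true_belief`."""
    return (true_belief * brier_payoff(report, 1)
            + (1 - true_belief) * brier_payoff(report, 0))

true_belief = 0.7  # what the bettor actually thinks
reports = [i / 100 for i in range(101)]
best_report = max(reports, key=lambda r: expected_payoff(r, true_belief))
print(best_report)  # → 0.7: honest reporting maximizes expected payoff
```

The derivative of the expected payoff with respect to the report is 2 * (true_belief - report), which vanishes exactly at the honest report; that is what makes the Brier score "proper".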


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
