On 12/19/2013 1:30 PM, Jesse Mazer wrote:
To me it seems like "thinking something is true" is much more of a fuzzy category than "asserting something is true"


Maybe. But note that Bruno's MGA (Movie Graph Argument) is couched in terms of a dream, just to avoid any input/output. That seems like a suspicious move to me; one that may lead intuition astray.

Brent


(even assertions can be ambiguous when stated in natural language, but they can be made non-fuzzy by requiring that each assertion be framed in terms of some formal language and entered into a computer, as in my thought-experiment). Is there any exact point where you cross between categories like "being completely unsure whether it's true" and "having a strong hunch it's true" and "having an argument in mind that it's true but not feeling completely sure there isn't a flaw in the reasoning" and "being as confident as you can possibly be that it's true"? I never really feel *absolute* certainty that anything I think is true, even basic arithmetical statements like 1+1=2, because I'm aware of how I've sometimes made sloppy mistakes in thinking in the past, and because I know intelligent people can seem to come to incorrect conclusions about basic ideas when hypnotized, or when dreaming (like the logic of various characters in Alice in Wonderland). I think of certain truth as being like an asymptote that an individual or community of thinkers can continually get closer to but never quite reach.

If I consider the statement "Jesse Mazer will never think this statement is true", I may imagine the perspective of someone else and see that from their perspective it must be true if Jesse's thinking is trustworthy, but then I'll catch myself and see that this imaginary perspective is really just a thought in Jesse's head--at that point, have I had the thought that it's true? And at some point in considering it I can't really help thinking some words along the lines of "oh, so then it *is* true" (it's hard to avoid thinking something you know you are "forbidden" to think, like when someone tells you "don't think of an elephant"), but is merely thinking the magic words enough to count as having thought it's true, and therefore having made it false once and for all?
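The bind Jesse describes can be made concrete with a tiny Python sketch. Everything here (the agent model, the `asserted` set, the names) is my illustrative stand-in, not anything from the thread: an agent faced with a sentence S = "this agent will never output that S is true" can hold the judgment internally, but the act of outputting it flips S's truth value.

```python
# Toy model of the self-referential bind (all names hypothetical).
asserted = set()  # everything the agent has output so far

def S_is_true():
    # S is true exactly as long as the agent never asserts S.
    return "S" not in asserted

# The agent can reason *internally* that S holds so far...
internal_belief = S_is_true()   # True: nothing has been asserted yet

# ...but the moment it *outputs* that judgment, S becomes false:
asserted.add("S")
print(internal_belief, S_is_true())  # True False
```

The asymmetry between the internal variable and the output set is the whole point: "thinking" and "asserting" come apart cleanly in the model, even though for a human the boundary is fuzzier.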

Jesse


On Thu, Dec 19, 2013 at 3:46 PM, meekerdb <meeke...@verizon.net> wrote:

    A nice exposition, Jesse.  But it bothers me that it seems to rely on the idea
    of "output" and a kind of isolation like invoking a meta-level.  What if
    instead of "Craig Weinberg will never in his lifetime assert that this
    statement is true" we considered "Craig Weinberg will never in his lifetime
    think that this statement is true"?  Then it seems that one invokes a kind of
    paraconsistent logic in which one just refuses to draw any inferences from
    this sentence that one cannot think either true or false.
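Strictly speaking, the "refuse to draw inferences" policy Brent describes is a truth-value *gap*, closer to Kleene's strong three-valued logic than to paraconsistency proper (which is about tolerating contradictions). A toy sketch, with all names and the encoding mine: the problem sentence gets the value NEITHER, and implication simply refuses to yield a definite conclusion from it.

```python
# Kleene-style strong three-valued implication (my illustration only).
T, F, N = "true", "false", "neither"

def implies(p, q):
    # A false antecedent or true consequent makes the conditional true;
    # a definite counterexample makes it false; otherwise: no verdict.
    if p == F or q == T:
        return T
    if p == T and q == F:
        return F
    return N

sentence = N                    # "CW will never think this is true"
print(implies(sentence, T))     # true    (vacuously safe)
print(implies(sentence, F))     # neither (no inference drawn)
```

The second result is the formal analogue of "refusing to draw any inferences" from a sentence one cannot think either true or false.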

    Brent



    On 12/19/2013 8:08 AM, Jesse Mazer wrote:

        The argument only works if you assume from the beginning that an A.I. is
        unconscious or doesn't have the same sort of "mind" as a human (and given
        your views you probably do presuppose these things--but if the conclusion
        *requires* such presuppositions, then it's an exercise in circular
        reasoning). If you are instead willing to consider that an A.I. mind
        works basically like a human mind (including things like being able to
        make mistakes, and being able to understand things it doesn't "say out
        loud"), and are willing to "put yourself in the place" of an A.I. being
        faced with its own Gödel statement, then you can see it's like a more
        formal equivalent of me asking you to evaluate the statement "Craig
        Weinberg will never in his lifetime assert that this statement is true".
        You can understand that if you *did* assert that it's true, that would of
        course make it false, but you can likewise understand that as long as you
        try to refrain from uttering any false statements, including that one, it
        *will* end up being true.
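For reference, the "more formal equivalent" is the sentence the diagonal lemma supplies: for the A.I.'s formal system F, with provability predicate Prov_F and Gödel numbering ⌜·⌝, there is a sentence G such that

```latex
% Diagonal lemma: F proves that G is equivalent to its own unprovability in F
F \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
```

Here "asserting" the statement corresponds to F proving G: if F is consistent it never proves G, and G is then true, which is exactly the bind with the Craig Weinberg sentence.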

        Similarly, an A.I. who is capable of making erroneous statements, and of
        understanding things distinct from its "output" to the world outside the
        program, might well understand that its own Gödel statement is
        true--provided it never outputs a formal judgment that the statement is
        true, which would mean it's false! So if the A.I. in fact avoided ever
        giving as output a judgment that the statement is true, it need not be
        because it lacks an understanding of what's going on, but rather just
        because it's caught in a bind similar to the one you're caught in with
        "Craig Weinberg will never in his lifetime assert that this statement is
        true".

        To flesh this out a bit, imagine a community of human-like A.I.
        mathematicians (mind uploads, say), living in a self-contained simulated
        world with no input from the outside, who have the ability to reflect on
        various arithmetical propositions. Once there is a consensus in this
        community that a proposition has been proven true or false, they can go
        to a special terminal (call it the "output terminal") and enter it on the
        list of proven statements, which will constitute the simulation's
        "output" to those of us watching it run in the real world. Suppose also
        that the simulated world is constantly growing, and that they have an
        internal simulated supercomputer within their world to help with their
        mathematical investigations, and this supercomputer is constantly growing
        in memory too. So if we imagine a string encoding the *initial* state of
        the simulation along with the rules determining its evolution, although
        this string may be very large, after some time has passed the memory of
        the simulated supercomputer will be much larger than that, so it's
        feasible to have this string appear within the supercomputer's memory
        (and it's part of the rules of the simulation that the string
        automatically appears in the supercomputer's memory after some finite
        time T within the simulation, and all the A.I. mathematicians knew that
        this was scheduled to happen).
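The memory-growth condition above reduces to a simple observation: a fixed-size description eventually fits inside a growing memory. A few lines of Python make this concrete; the description string, the step count, and the time T are all hypothetical placeholders of mine.

```python
# Miniature of the setup: memory grows each step, and at the scheduled
# step T the world's own initial description is written into it.
INITIAL_DESCRIPTION = "initial-state-and-rules"  # stand-in for the (very large) real string
T = 5                                            # scheduled appearance time

memory = []                            # the simulated supercomputer's memory
for step in range(10):
    memory.append(f"data-{step}")      # memory keeps growing...
    if step == T:
        memory.append(INITIAL_DESCRIPTION)  # ...and eventually holds the description

print(INITIAL_DESCRIPTION in memory)   # True
```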

        Once the A.I. mathematicians have the program's initial conditions and
        the rules governing its subsequent evolution, they can construct their
        own Gödel statement. Of course they can never really be sure that the
        string they are given correctly describes the true initial conditions of
        their own simulated universe, but let's say they have a high degree of
        trust that it does--for example, they might be mind uploads of the humans
        who designed the original simulation, and they remember having designed
        it to ensure that the string that would appear in the supercomputer's
        memory is the correct one. They could even use the growing supercomputer
        to run a simulation-within-the-simulation of their own history, starting
        from those initial conditions--the sub-simulation would always lag behind
        what they were experiencing, but they could continually verify that the
        events in the sub-simulation matched their historical records and
        memories up to some point in the past.
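The lagging verification works because both runs are deterministic from the same initial conditions. A sketch, where the step function, the state encoding, and the step counts are all my stand-ins rather than anything specified in the thread:

```python
# Deterministic replay verification (all specifics hypothetical).
def step(state):
    # shared deterministic update rule for world and sub-simulation
    return state * 31 % 1_000_003

initial = 42
history = []                  # the inhabitants' records of their own past
state = initial
for _ in range(100):          # the "real" simulated world runs ahead...
    history.append(state)
    state = step(state)

# ...while the sub-simulation replays from the same initial conditions
# and is checked against the records, always lagging behind the present.
replay = initial
for t in range(60):           # lagging: only 60 of 100 recorded steps replayed
    assert replay == history[t]
    replay = step(replay)
print("verified", 60, "steps")
```

Determinism is doing all the work here: the replay can only diverge from the records if the initial-conditions string or the rules were wrong, which is why the check raises the inhabitants' confidence in the string.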

        So, they have a high degree of confidence that the Gödel statement
        they've constructed actually is the correct one for their own simulated
        universe. They can therefore interpret the conceptual meaning of the
        statement as something like "you guys living in the simulation will never
        enter into your output terminal a judgment that this statement is true".
        So they could understand perfectly well that if they ever *did* enter
        such a judgment into their output terminal, that would mean the statement
        was a false statement about arithmetic. But provided that they *don't*
        ever enter any such judgment into their output terminal, they can see
        it's a true statement about arithmetic (and can discuss this fact among
        themselves and reach a consensus about it, as long as they don't enter it
        as output to the terminal). If they are mathematical platonists, they
        realize that this feeling of it being their choice whether to output the
        statement or not, with the statement's truth or falsity depending on that
        choice, is a sort of illusion--really the truth-value of the statement is
        a timeless fact about arithmetic. But presumably, in such a situation
        they would adopt a "compatibilist" view of free will, as many real-world
        philosophers have done
        (http://plato.stanford.edu/entries/compatibilism/), a view which sees no
        conflict between the feeling of free will and the idea that our actions
        are ultimately completely determined by natural laws and initial
        conditions.

        Jesse


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.

