A quibble with the beginning of Richard's paper. On the first page it says:

'It is beyond the scope of this paper and admittedly beyond my
understanding to delve into Gödelian logic, which seems to be
self-referential proof by contradiction, except to mention that Penrose in
Shadows of the Mind (1994), as confirmed by David Chalmers (1995), arrived
at a seemingly valid 7 step proof that human “reasoning powers cannot be
captured by any formal system”.'
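
For readers who want a feel for what "self-referential proof by
contradiction" means here, Turing's halting-problem diagonalization has the
same shape as Gödel's construction and fits in a few lines. A minimal
sketch in Python (the function names are mine, purely illustrative):

def halts(prog, arg):
    # Pretend this is a total, always-correct halting decider.
    raise NotImplementedError("no such decider can exist")

def contrarian(prog):
    # Do the opposite of whatever the decider predicts about
    # running prog on its own source.
    if halts(prog, prog):
        while True:      # decider said "halts", so loop forever
            pass
    return               # decider said "loops", so halt at once

# contrarian(contrarian) halts if and only if the decider says it
# doesn't: a contradiction, so no such halts() can exist. Godel's
# proof applies the same self-reference to provability instead.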

If you actually read Chalmers' paper at
http://web.archive.org/web/20090204164739/http://psyche.cs.monash.edu.au/v2/psyche-2-09-chalmers.html
he definitely does *not* "confirm" Penrose's argument! He says in the
paper that Penrose has two basic arguments for his conclusions about
consciousness, and at the end of the section titled "the first argument"
he concludes that the first one fails:

"2.16 It is section 3.3 that carries the burden of this strand of Penrose's
argument, but unfortunately it seems to be one of the least convincing
sections in the book. By his assumption that the relevant class of
computational systems are all straightforward axiom-and-rules systems,
Penrose is not taking AI seriously, and certainly is not doing enough to
establish his conclusion that physics is uncomputable. I conclude that none
of Penrose's arguments up to this point put a dent in the natural AI
position: that our reasoning powers may be captured by a sound formal
system F, where we cannot determine that F is sound."
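
To make "sound" precise (my notation, not Chalmers'): a formal system F is
sound when everything it proves is actually true in the standard model of
arithmetic, i.e. in LaTeX,

  $\mathrm{Sound}(F) \;\equiv\; \forall \varphi \,\big( F \vdash \varphi \;\rightarrow\; \mathbb{N} \models \varphi \big)$

So the natural AI position is just that some sound F proves exactly what we
can prove, while we ourselves are in no position to establish Sound(F).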

Then, when dealing with Penrose's "second argument", he says that Penrose
draws the wrong conclusion: where Penrose concludes that our reasoning
cannot be the product of any formal system, Chalmers concludes that the
actual issue is that we cannot be 100% sure our reasoning is "sound" (which
I understand to mean we can never be 100% sure that the propositions we
have judged true or false actually have those truth-values in "true
arithmetic"):

"3.12 We can see, then, that the assumption that we know we are sound leads
to a contradiction. One might try to pin the blame on one of the other
assumptions, but all these seem quite straightforward. Indeed, these
include the sort of implicit assumptions that Penrose appeals to in his
arguments all the time. Indeed, one could make the case that all of
premises (1)-(4) are implicitly appealed to in Penrose's main argument. For
the purposes of the argument against Penrose, it does not really matter
which we blame for the contradiction, but I think it is fairly clear that
it is the assumption that the system knows that it is sound that causes
most of the damage. It is this assumption, then, that should be withdrawn.

"3.13 Penrose has therefore pointed to a false culprit. When the
contradiction is reached, he pins the blame on the assumption that our
reasoning powers are captured by a formal system F. But the argument above
shows that this assumption is inessential in reaching the contradiction: A
similar contradiction, via a not dissimilar sort of argument, can be
reached even in the absence of that assumption. It follows that the
responsibility for the contradiction lies elsewhere than in the assumption
of computability. It is the assumption about knowledge of soundness that
should be withdrawn.

"3.14 Still, Penrose's argument has succeeded in clarifying some issues. In
a sense, it shows where the deepest flaw in Gödelian arguments lies. One
might have thought that the deepest flaw lay in the unjustified claim that
one can see the soundness of certain formal systems that underlie our own
reasoning. But in fact, if the above analysis is correct, the deepest flaw
lies in the assumption that we know that we are sound. All Gödelian
arguments appeal to this premise somewhere, but in fact the premise
generates a contradiction. Perhaps we are sound, but we cannot know
unassailably that we are sound."
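
One standard way to see why "knowing we are sound" is the toxic premise (my
gloss, in the version where the computability assumption is still in
place): if our reasoning is captured by a consistent formal system F strong
enough for arithmetic, and F proves its own soundness, then since soundness
entails consistency we would get

  $F \vdash \mathrm{Sound}(F) \quad\Rightarrow\quad F \vdash \mathrm{Con}(F)$

which contradicts Gödel's second incompleteness theorem. Chalmers' point in
3.13 is that a similar contradiction arises even without assuming F, so it
is the soundness-knowledge premise, not computability, that has to go.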

So it seems Chalmers would have no problem with the "natural AI" position
he discussed earlier: our reasoning could be adequately captured by a
computer simulation that did not reach its top-level conclusions about
mathematics via a strict axiom/proof method applied to the mathematical
questions themselves, but rather via some underlying fallible structure
like a neural network. The bottom-level behavior of the simulated neurons
themselves would be deducible from the initial state of the system using
the axiom/proof method, but that doesn't mean the system as a whole could
not make errors in mathematical calculations; see Douglas Hofstadter's
discussion of this issue starting on p. 571 of "Gödel, Escher, Bach", in
the section titled "Irrational and Rational Can Coexist on Different
Levels", where he writes:

"Another way to gain perspective on this is to remember that a brain, too,
is a collection of faultlessly functioning element-neurons. Whenever a
neuron's threshold is surpassed by the sum of the incoming signals,
BANG!-it fires. It never happens that a neuron forgets its arithmetical
knowledge-carelessly adding its inputs and getting a wrong answer. Even
when a neuron dies, it continues to function correctly, in the sense that
its components continue to obey the laws of mathematics and physics. Yet as
we all know, neurons are perfectly capable of supporting high-level
behavior that is wrong, on its own level, in the most amazing ways. Figure
109 is meant to illustrate such a class of levels: an incorrect belief held
in the software of a mind, supported by the hardware of a faultlessly
functioning brain."

Figure 109 depicts the outline of a person's head with "2+2=5" appearing
inside it, but the symbols in "2+2=5" are actually made up of large
collections of smaller mathematical equations, like "7+7=14", which are all
correct. A nice way of illustrating the idea, I think.
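
Hofstadter's levels picture is easy to mimic in a toy program. Here is a
hedged sketch (the "network" and its weights are invented for
illustration): each neuron performs its own arithmetic faultlessly, an
exact weighted sum and threshold, yet the wiring has been chosen so that
the system's top-level "belief" about 2+2 is wrong.

def neuron(inputs, weights, bias):
    # Faultless element: exact multiply-accumulate, exact threshold.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

def answer_two_plus_two():
    # A deliberately mis-"trained" readout. Every neuron below
    # computes perfectly; the wiring encodes a false belief.
    h1 = neuron([2, 2], [1, 1], -3)   # 2+2-3 = 1 > 0: fires
    h2 = neuron([2, 2], [1, 1], -5)   # 2+2-5 = -1 <= 0: silent
    return '5' if (h1, h2) == (1, 0) else '4'

print(answer_two_plus_two())   # -> '5': wrong at the belief level

The low-level trace is fully deducible by the axiom/proof method, exactly
as above; nothing in that guarantees the high-level answer is true.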

I came up with my own thought-experiment to show where Penrose's argument
goes wrong, based on the same conclusion that Chalmers reached: a community
of "realistic" AIs whose simulated brains worked similarly to real human
brains could never be 100% certain that they had not reached a false
conclusion about arithmetic, and the very act of stating confidently, in
mathematical language, that they would never reach a wrong conclusion would
ensure that they were endorsing a false proposition about arithmetic. See
my discussion with LauLuna on the "Penrose and algorithms" thread here:
http://groups.google.com/group/everything-list/browse_thread/thread/c92723e0ef1a480c/429e70be57d2940b?#429e70be57d2940b
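
The "never 100% certain" point can even be put quantitatively. Suppose a
simulated mathematician has some small unknown chance of mis-deriving a
result; auditing any finite number of its past conclusions can push its
rational confidence toward 1 but never to 1. A toy sketch in Python using
Laplace's rule of succession (the error rate and sample size are invented):

import random

random.seed(0)
TRUE_ERROR_RATE = 1e-4          # invented: actual chance of a slip

n = 100_000                     # past derivations audited
correct = sum(random.random() > TRUE_ERROR_RATE for _ in range(n))

# Laplace's rule of succession: P(next derivation is correct).
confidence = (correct + 1) / (n + 2)
print(f"audited {n}, correct {correct}, confidence {confidence:.6f}")

# Confidence < 1 for any finite audit, so the flat assertion "we will
# never reach a wrong conclusion" always outruns the evidence, and if
# asserted as arithmetic it may itself be the false proposition.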

Jesse

On Thu, Aug 23, 2012 at 6:38 PM, Stephen P. King <stephe...@charter.net> wrote:

>  Dear Richard,
>
>     Your paper <http://vixra.org/pdf/1101.0044v1.pdf> is very
> interesting. It reminds me a lot of Stephen Wolfram's cellular automaton
> theory. I only have one big problem with it. The 10d manifold would be a
> single fixed structure that, while conceivably capable of running the
> computations and/or implementing the Peano arithmetic, has a problem with
> the role of time in it. You might have a solution to this problem that I
> did not deduce as I read your paper. How do you define time for
> your model?
>
> --
> Onward!
>
> Stephen
>
> "Nature, to be commanded, must be obeyed."
> ~ Francis Bacon
>
