Dear Jesse,

Thank you for this very nice remark. I will have to think about it and read your reference.


On 8/23/2012 8:19 PM, Jesse Mazer wrote:

A quibble with the beginning of Richard's paper. On the first page it says:

'It is beyond the scope of this paper and admittedly beyond my understanding to delve into Gödelian logic, which seems to be self-referential proof by contradiction, except to mention that Penrose in Shadows of the Mind (1994), as confirmed by David Chalmers (1995), arrived at a seemingly valid 7-step proof that human "reasoning powers cannot be captured by any formal system".'

If you actually read Chalmers' paper at http://web.archive.org/web/20090204164739/http://psyche.cs.monash.edu.au/v2/psyche-2-09-chalmers.html he definitely does *not* "confirm" Penrose's argument! He says in the paper that Penrose has two basic arguments for his conclusions about consciousness, and at the end of the section titled "the first argument" he concludes that the first one fails:

"2.16 It is section 3.3 that carries the burden of this strand of Penrose's argument, but unfortunately it seems to be one of the least convincing sections in the book. By his assumption that the relevant class of computational systems are all straightforward axiom-and-rules systems, Penrose is not taking AI seriously, and certainly is not doing enough to establish his conclusion that physics is uncomputable.
I conclude that none of Penrose's arguments up to this point put a dent in the natural AI position: that our reasoning powers may be captured by a sound formal system F, where we cannot determine that F is sound."

Then when dealing with Penrose's "second argument", he says that Penrose draws the wrong conclusions; where Penrose concludes that our reasoning cannot be the product of any formal system, Chalmers concludes that the actual issue is that we cannot be 100% sure our reasoning is "sound" (which I understand to mean we can never be 100% sure that we have not made a false conclusion about whether all the propositions we have proved true or false actually have that truth-value in "true arithmetic"):

"3.12 We can see, then, that the assumption that we know we are sound leads to a contradiction. One might try to pin the blame on one of the other assumptions, but all these seem quite straightforward. Indeed, these include the sort of implicit assumptions that Penrose appeals to in his arguments all the time. Indeed, one could make the case that all of premises (1)-(4) are implicitly appealed to in Penrose's main argument. For the purposes of the argument against Penrose, it does not really matter which we blame for the contradiction, but I think it is fairly clear that it is the assumption that the system knows that it is sound that causes most of the damage. It is this assumption, then, that should be withdrawn.

"3.13 Penrose has therefore pointed to a false culprit. When the contradiction is reached, he pins the blame on the assumption that our reasoning powers are captured by a formal system F. But the argument above shows that this assumption is inessential in reaching the contradiction: A similar contradiction, via a not dissimilar sort of argument, can be reached even in the absence of that assumption. It follows that the responsibility for the contradiction lies elsewhere than in the assumption of computability.
It is the assumption about knowledge of soundness that should be withdrawn."

"3.14 Still, Penrose's argument has succeeded in clarifying some issues. In a sense, it shows where the deepest flaw in Gödelian arguments lies. One might have thought that the deepest flaw lay in the unjustified claim that one can see the soundness of certain formal systems that underlie our own reasoning. But in fact, if the above analysis is correct, the deepest flaw lies in the assumption that we know that we are sound. All Gödelian arguments appeal to this premise somewhere, but in fact the premise generates a contradiction. Perhaps we are sound, but we cannot know unassailably that we are sound."

So it seems Chalmers would have no problem with the "natural AI" position he discussed earlier, that our reasoning could be adequately captured by a computer simulation that did not come to its top-level conclusions about mathematics via a strict axiom/proof method involving the mathematical questions themselves, but rather by some underlying fallible structure like a neural network. The bottom-level behavior of the simulated neurons themselves would be deducible given the initial state of the system using the axiom/proof method, but that doesn't mean the system as a whole might not make errors in mathematical calculations; see Douglas Hofstadter's discussion of this issue starting on p. 571 of "Gödel, Escher, Bach", the section titled "Irrational and Rational Can Coexist on Different Levels", where he writes:

"Another way to gain perspective on this is to remember that a brain, too, is a collection of faultlessly functioning elements -- neurons. Whenever a neuron's threshold is surpassed by the sum of the incoming signals, BANG! -- it fires. It never happens that a neuron forgets its arithmetical knowledge, carelessly adding its inputs and getting a wrong answer. Even when a neuron dies, it continues to function correctly, in the sense that its components continue to obey the laws of mathematics and physics.
Yet as we all know, neurons are perfectly capable of supporting high-level behavior that is wrong, on its own level, in the most amazing ways. Figure 109 is meant to illustrate such a clash of levels: an incorrect belief held in the software of a mind, supported by the hardware of a faultlessly functioning brain."

Figure 109 depicts the outline of a person's head with "2+2=5" appearing inside it, but the symbols in "2+2=5" are actually made up of large collections of smaller mathematical equations, like "7+7=14", which are all correct. A nice way of illustrating the idea, I think.

I came up with my own thought experiment to show where Penrose's argument goes wrong, based on the same conclusion that Chalmers reached: a community of "realistic" AIs whose simulated brains work similarly to real human brains would never be able to be 100% certain that they had not reached a false conclusion about arithmetic, and the very act of stating confidently in mathematical terms that they would never reach a wrong conclusion would ensure that they were endorsing a false proposition about arithmetic. See my discussion with LauLuna on the "Penrose and algorithms" thread here:
http://groups.google.com/group/everything-list/browse_thread/thread/c92723e0ef1a480c/429e70be57d2940b?#429e70be57d2940b

Jesse

On Thu, Aug 23, 2012 at 6:38 PM, Stephen P. King <stephe...@charter.net> wrote:

Dear Richard,

Your paper <http://vixra.org/pdf/1101.0044v1.pdf> is very interesting. It reminds me a lot of Stephen Wolfram's cellular automaton theory. I only have one big problem with it. The 10d manifold would be a single fixed structure that, while conceivably capable of running the computations and/or implementing the Peano arithmetic, has a problem with the role of time in it. You might have a solution to this problem that I see that I did not deduce as I read your paper. How do you define time for your model?
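[Editorial aside, not part of the original exchange.] Hofstadter's levels point quoted above can be sketched as a toy program: every threshold unit below applies its own firing rule faultlessly, yet one badly chosen threshold makes the network as a whole assert that 2 + 2 = 5. The wiring scheme is purely illustrative, not anything from Hofstadter or the thread.

```python
def neuron(inputs, weights, threshold):
    """A 'faultless' unit: fires (1) iff its weighted input sum reaches
    its threshold. It never misapplies its own arithmetic."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def network_adds(a, b):
    """A miswired network meant to compute a + b from unary-coded inputs.
    It counts how many 'counting' units fire, but one unit was wired
    with threshold 0, so it fires unconditionally and the read-out
    reports one more than the true sum."""
    inputs = [1] * (a + b)  # unary coding of the two addends
    fired = [neuron(inputs, [1] * len(inputs), k)
             for k in range(0, len(inputs) + 1)]  # k = 0 is the miswired unit
    return sum(fired)

print(network_adds(2, 2))  # -> 5: each unit obeyed its rule, yet the answer is wrong
```

The point is that the error lives entirely at the level of the wiring (the "belief"), not in any component: trace any individual `neuron` call and you find it behaving exactly as its rule dictates, just as every small equation inside Figure 109's "2+2=5" is correct.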

--
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.