You could look up "Murmurs in the Cathedral", Daniel Dennett's review
of Penrose's "The Emperor's New Mind", in the Times Literary
Supplement (and maybe online somewhere?)

Here's an excerpt from a review of the review:


However, Penrose's main thesis, for which all this scientific  
exposition is mere supporting argument, is that algorithmic computers  
cannot ever be intelligent, because our mathematical insights are  
fundamentally non-algorithmic. Dennett is having none of it, and
succinctly points out the underlying fallacy: even if there could be
no algorithm for a particular behaviour, there could still be an
algorithm that is very good (if not perfect) at that behaviour:

"The following argument, then, is simply fallacious:
X is superbly capable of achieving checkmate.
There is no (practical) algorithm guaranteed to achieve checkmate.
Therefore, X does not owe its power to achieve checkmate to an algorithm.
So even if mathematicians are superb recognizers of mathematical  
truth, and even if there is no algorithm, practical or otherwise, for  
recognizing mathematical truth, it does not follow that the power of  
mathematicians to recognize mathematical truth is not entirely  
explicable in terms of their brains executing an algorithm. Not an  
algorithm for intuiting mathematical truth - we can suppose that  
Penrose has proved that there could be no such thing. What would the  
algorithm be for, then? Most plausibly it would be an algorithm - one  
of very many - for trying to stay alive, an algorithm that, by an  
extraordinarily convoluted and indirect generation of byproducts,  
"happened" to be a superb (but not foolproof) recognizer of friends,  
enemies, food, shelter, harbingers of spring, good arguments - and  
mathematical truths. "

It is disconcerting that Penrose does not even address the issue, and
often writes as if an algorithm could have only the powers it could
be proven mathematically to have in the worst case.

On Jun 9, 2007, at 4:03 AM, chris peck wrote:

> Hello
> The time has come again when I need to seek advice from the  
> everything-list
> and its contributors.
> Penrose, I believe, has argued that the inability to algorithmically
> solve the halting problem, combined with the ability of humans (or at
> least Kurt Gödel) to understand that formal systems are incomplete,
> demonstrates that human reason is not algorithmic in nature, and
> therefore that the AI project is fundamentally flawed.
> What is the general consensus here on that score? I know that there
> are many perspectives here, including those who agree with Penrose.
> Are there any decent threads I could look at that deal with this issue?
> All the best
> Chris.

You received this message because you are subscribed to the Google Groups 
"Everything List" group.