Re: Penrose and algorithms

2007-07-08 Thread Bruno Marchal


Le 07-juil.-07, à 16:39, LauLuna a écrit :




 On Jul 7, 12:59 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 Le 06-juil.-07, à 14:53, LauLuna a écrit :

 But again, for any set of such 'physiological' axioms there is a
 corresponding equivalent set of 'conceptual' axioms. There is all the
 same a logical impossibility for us to know the second set is sound.
 No consistent (and strong enough) system S can prove the soundness of
 any system S' equivalent to S: otherwise S' would prove its own
 soundness and would be inconsistent.  And this is just what is odd.

 It is odd indeed. But it is.

 No, it is not necessarily so; the alternative is that such an algorithm
 does not exist. I will endorse the existence of that algorithm only
 when I find reason enough to do it. I haven't yet, and the oddities
 its existence implies count, obviously, against its existence.


If the algorithm exists, then the knowable algorithm does not exist. We 
can only bet on comp, not prove it. But it is refutable.





 I'd say this is rather Lucas's argument. Penrose's is like this:

 1. Mathematicians are not using a knowably sound algorithm to do 
 math.
 2. If they were using any algorithm whatsoever, they would be using a
 knowably sound one.
 3. Ergo, they are not using any algorithm at all.

 Do you agree that from what you say above, 2. is already invalidated?

 Not at all. I still find it far likelier that if there is a sound
 algorithm ALG and an equivalent formal system S whose soundness we can
 know, then there is no logical impossibility for our knowing the
 soundness of ALG.


We do agree. You are just postulating not-comp. I have no trouble with 
that.




 What I find inconclusive in Penrose's argument is that he refers not
 just to actual human intellectual behavior but to some idealized
 (forever sound and consistent) human intelligence. I think the
 existence of such an ability has to be argued.


A rather good approximation for machines could be given by the 
transfinite set of effective and finite sound extensions of a Lobian 
machine, like those proposed by Turing. They all locally obey G and 
G* (as shown by Beklemishev). The infinite and the transfinite do not 
help the machine with regard to the incompleteness phenomenon, except 
if the infinite is made very highly non-effective. But in that case you 
tend toward the One, or truth (a very non-effective notion).
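
For reference, the two logics named here can be stated compactly. This is the standard presentation from provability logic (Solovay's completeness results), added as background rather than taken from the post:

```latex
% G (also called GL): what a sound machine can prove about its own
% provability predicate \Box.
%   K:      \Box(p \to q) \to (\Box p \to \Box q)
%   L\"ob:  \Box(\Box p \to p) \to \Box p
% Rules: modus ponens, and necessitation (from p infer \Box p).
%
% G*: the true statements about provability; G plus the reflection
% schema \Box p \to p, closed under modus ponens only.
```

The gap between G and G* (reflection is true of the machine but unprovable by it) is exactly the gap between provable and true self-reference discussed throughout this thread.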



 If someone asked me: 'do you agree that Penrose's argument does not
 prove there are certain human behaviors which computers can't
 reproduce?',  I'd answer:  'yes, I agree it doesn't'. But if someone
 asked me: 'do you agree that Penrose's argument does not prove human
 intelligence cannot be simulated by computers?'  I'd reply:  'as far
 as that abstract intelligence you speak of exists at all as a real
 faculty, I'd say it is far more probable that computers cannot
 reproduce it'.


Why? All you need to do is provide more and more time, space, and 
memory to the machine. Humans are universal through extending their 
minds with pictures on walls, ... magnetic tape ...




 I.e. some versions of computationalism assume, exactly like Penrose,
 the existence of that abstract human intelligence; I would say those
 formulations of computationalism are nearly refuted by Penrose.

There is a lobian abstract intelligence, but it can differentiate into 
many kinds, and cannot be defined *effectively* (with a program) by any 
machine. It corresponds loosely to the first non-effective, or 
non-nameable, ordinal (the Church-Kleene ordinal OMEGA_1^CK).



 I hope I've made my point clear.


OK. Personally I am just postulating the comp hyp and studying the 
consequences. If we are a machine, or a sequence of machines, then we 
cannot know which machine we are, still less which sequence of machines 
we belong to ... (introducing eventually verifiable 1-person 
indeterminacies).
I argue that the laws of observability (physics) emerge from that 
comp-indeterminacy. I think we agree on Penrose.

Bruno


http://iridia.ulb.ac.be/~marchal/


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: Penrose and algorithms

2007-07-07 Thread Bruno Marchal


Le 06-juil.-07, à 14:53, LauLuna a écrit :


 But again, for any set of such 'physiological' axioms there is a
 corresponding equivalent set of 'conceptual' axioms. There is all the
 same a logical impossibility for us to know the second set is sound.
 No consistent (and strong enough) system S can prove the soundness of
 any system S' equivalent to S: otherwise S' would prove its own
 soundness and would be inconsistent.  And this is just what is odd.


It is odd indeed. But it is.


 I'd say this is rather Lucas's argument. Penrose's is like this:

 1. Mathematicians are not using a knowably sound algorithm to do math.
 2. If they were using any algorithm whatsoever, they would be using a
 knowably sound one.
 3. Ergo, they are not using any algorithm at all.


Do you agree that from what you say above, 2. is already invalidated?

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Penrose and algorithms

2007-07-07 Thread Bruno Marchal


Le 06-juil.-07, à 19:43, Brent Meeker a écrit :


 Bruno Marchal wrote:
 ...
 Now all (sufficiently rich) theories/machine can prove their own
 Godel's theorem. PA can prove that if PA is consistent then PA cannot
 prove its consistency. A somewhat weak (compared to ZF) theory like PA
 can even prove the corresponding theorem for the richer ZF: PA can
 prove that if ZF is consistent then ZF can prove its own consistency.

 Of course you meant ..then ZF cannot prove its own consistency.


Yes. (Sorry).




 So, in general a machine can find its own godelian sentences, and can
 even infer their truth in some abductive way from very minimal
 inductive inference abilities, or from assumptions.

 No sound (or just consistent) machine can ever prove its own godelian
 sentences; in particular no machine can prove its own consistency, but
 then machines can bet on them (or know them serendipitously). This is
 comparable with consciousness. Indeed it is easy to manufacture thought
 experiments illustrating that no conscious being can prove it is
 conscious, except that consciousness is more truth-related, so that
 machines cannot even define their own consciousness (by Tarski's
 undefinability of truth theorem).

 But this is within an axiomatic system - whose reliability already 
 depends on knowing the truth of the axioms.  ISTM that concepts of 
 consciousness, knowledge, and truth that are relative to formal 
  axiomatic systems are already too weak to provide fundamental 
 explanations.


With UDA (the Universal Dovetailer Argument) I ask you to involve 
yourself in a thought experiment. Obviously I bet, hope, pray, that 
you will reason reasonably and soundly.
With AUDA (the Arithmetical version of UDA, or Plotinus now) I ask 
the Universal Machine to involve herself in a formal reasoning. As a 
mathematician, I limit myself to sound (and thus self-referentially 
correct) machines, for the same reason I pray you are sound.
Such a restriction is provably non-constructive: there is no algorithm 
to decide whether a machine is sound or not ... But note that the comp 
assumption, and even just the coherence of the Church thesis, relies 
on non-constructive assumptions at the start.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: Penrose and algorithms

2007-07-07 Thread LauLuna



On Jul 7, 12:59 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 Le 06-juil.-07, à 14:53, LauLuna a écrit :

  But again, for any set of such 'physiological' axioms there is a
  corresponding equivalent set of 'conceptual' axioms. There is all the
  same a logical impossibility for us to know the second set is sound.
  No consistent (and strong enough) system S can prove the soundness of
  any system S' equivalent to S: otherwise S' would prove its own
  soundness and would be inconsistent.  And this is just what is odd.

 It is odd indeed. But it is.

No, it is not necessarily so; the alternative is that such an algorithm
does not exist. I will endorse the existence of that algorithm only
when I find reason enough to do it. I haven't yet, and the oddities
its existence implies count, obviously, against its existence.


  I'd say this is rather Lucas's argument. Penrose's is like this:

  1. Mathematicians are not using a knowably sound algorithm to do math.
  2. If they were using any algorithm whatsoever, they would be using a
  knowably sound one.
  3. Ergo, they are not using any algorithm at all.

 Do you agree that from what you say above, 2. is already invalidated?

Not at all. I still find it far likelier that if there is a sound
algorithm ALG and an equivalent formal system S whose soundness we can
know, then there is no logical impossibility for our knowing the
soundness of ALG.

What I find inconclusive in Penrose's argument is that he refers not
just to actual human intellectual behavior but to some idealized
(forever sound and consistent) human intelligence. I think the
existence of such an ability has to be argued.

If someone asked me: 'do you agree that Penrose's argument does not
prove there are certain human behaviors which computers can't
reproduce?',  I'd answer:  'yes, I agree it doesn't'. But if someone
asked me: 'do you agree that Penrose's argument does not prove human
intelligence cannot be simulated by computers?'  I'd reply:  'as far
as that abstract intelligence you speak of exists at all as a real
faculty, I'd say it is far more probable that computers cannot
reproduce it'.

I.e. some versions of computationalism assume, exactly like Penrose,
the existence of that abstract human intelligence; I would say those
formulations of computationalism are nearly refuted by Penrose.

I hope I've made my point clear.

Best






Re: Penrose and algorithms

2007-07-06 Thread Bruno Marchal


Le 05-juil.-07, à 22:14, Jesse Mazer a écrit :

 His [Penrose] whole Godelian argument is based on the idea that for 
 any computational
 theorem-proving machine, by examining its construction we can use this
 understanding to find a mathematical statement which *we* know must 
 be
 true, but which the machine can never output--that we understand 
 something
 it doesn't. But I think my argument shows that if you were really to 
 build a
 simulated mathematician or community of mathematicians in a computer, 
 the
 Godel statement for this system would only be true *if* they never 
 made a
 mistake in reasoning or chose to output a false statement to be 
 perverse,
 and that therefore there is no way for us on the outside to have any 
 more
 confidence about whether they will ever output this statement than 
 they do
 (and thus neither of us can know whether the statement is actually a 
 true or
 false theorem of arithmetic).

I think I agree with your line of argumentation, but your way of 
talking could be misleading. Especially if people interpret 
arithmetic by ...
If we are in front of a machine that we know to be sound, then we can 
indeed know that the Godelian proposition associated to the machine is 
true. For example, nobody (serious) doubts that PA (Peano Arithmetic, 
the first-order formal arithmetic theory/machine) is sound. So we know 
that all its godelian sentences are true, and PA cannot know that. But 
this just proves that I am not PA, and that I actually have stronger 
abilities than PA.
I could have taken ZF instead (ZF is the Zermelo-Fraenkel formal 
theory/machine of sets), although I must say that if I have entire 
confidence in PA, I have only 99.9998% confidence in ZF (and thus I can 
be only 99.9998% sure of the ZF godelian sentences).
About NF (Quine's New Foundations formal theory/machine) I have only 
50% confidence!!!

Now all (sufficiently rich) theories/machine can prove their own 
Godel's theorem. PA can prove that if PA is consistent then PA cannot 
prove its consistency. A somewhat weak (compared to ZF) theory like PA 
can even prove the corresponding theorem for the richer ZF: PA can 
prove that if ZF is consistent then ZF can prove its own consistency. 
So, in general a machine can find its own godelian sentences, and can 
even infer their truth in some abductive way from very minimal 
inductive inference abilities, or from assumptions.
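
In symbols, the two formalized second-incompleteness facts intended here (incorporating Brent's correction elsewhere in the thread: ZF *cannot* prove its own consistency) are, as standardly stated:

```latex
\mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA}) \rightarrow
  \neg\,\mathrm{Prov}_{\mathrm{PA}}\big(\ulcorner \mathrm{Con}(\mathrm{PA}) \urcorner\big)
\qquad\text{and}\qquad
\mathrm{PA} \vdash \mathrm{Con}(\mathrm{ZF}) \rightarrow
  \neg\,\mathrm{Prov}_{\mathrm{ZF}}\big(\ulcorner \mathrm{Con}(\mathrm{ZF}) \urcorner\big)
```

The second fact is the point being made: weak PA can prove, about the much richer ZF, that ZF's consistency (if it holds) is invisible to ZF itself.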

No sound (or just consistent) machine can ever prove its own godelian 
sentences; in particular no machine can prove its own consistency, but 
then machines can bet on them (or know them serendipitously). This is 
comparable with consciousness. Indeed it is easy to manufacture thought 
experiments illustrating that no conscious being can prove it is 
conscious, except that consciousness is more truth-related, so that 
machines cannot even define their own consciousness (by Tarski's 
undefinability of truth theorem).

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Penrose and algorithms

2007-07-06 Thread Jason



On Jul 5, 2:14 pm, LauLuna [EMAIL PROTECTED] wrote:

 I don't see how to reconcile free will with computationalism either.


It seems like you are an incompatibilist concerning free will.
Free will can be reconciled with computationalism (or any deterministic
system) if one accepts compatibilism ( 
http://en.wikipedia.org/wiki/Free_will#Compatibilism
).  More worrisome than determinism's effect on free will, however, is
many-worlds (or other everything/ultimate ensemble theories).  Whereas
determinism says the future is written in stone, many-worlds would say
all futures are written in stone.

Jason





Re: Penrose and algorithms

2007-07-06 Thread Bruno Marchal


Le 06-juil.-07, à 14:00, Jason a écrit :




 On Jul 5, 2:14 pm, LauLuna [EMAIL PROTECTED] wrote:

 I don't see how to reconcile free will with computationalism either.


 It seems like you are an incompatibilist concerning free will.
 Free will can be reconciled with computationalism (or any deterministic
 system) if one accepts compatibilism ( 
 http://en.wikipedia.org/wiki/Free_will#Compatibilism
 ).  More worrisome than determinism's effect on free will, however, is
 many-worlds (or other everything/ultimate ensemble theories).  Whereas
 determinism says the future is written in stone, many-worlds would say
 all futures are written in stone.


As comp already says. At least with QM we know that the futures are 
weighted, and free will corresponds to choosing among normal worlds.
With comp, there are only promising results in that direction (which 
could lead to a refutation of comp).
John Bell (the physicist, not the quantum logician) has also criticized 
the MWI with respect to free will, but this does not follow from the 
SWE. The SWE does not say all futures are equal. It says that all 
futures are realized, but some have negligible probability, and this 
leaves room for genuine free will. For example I can choose the stairs, 
the lift or the window to go outside, but only with the stairs and lift 
can I stay in relatively normal worlds. By going outside through the 
window, I take the risk of surviving in a white-rabbit world, and then 
of remaining in the world that is relatively normal with respect to 
that non-normal world. This is why I think quantum immortality is a 
form of terrifying thinking ... if you think twice and take it 
seriously. Of course reality (with or without QM or comp) is more 
complex in any case, so it is quite plausibly premature to panic over 
such theoretical elaborations. Actually computer science predicts 
possible unexpected jumps ...
Is it worth exploring the possible comp-hell, to search for the limit 
of the unbearable? Well, the news indicates humans have some 
inclination in that direction. That could be the price of free will. 
Have you read the delicious text by Smullyan (in The Mind's I, I 
think) about the guy who asks God to take away his free will (and its 
associated guilt feeling)?

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Penrose and algorithms

2007-07-06 Thread Brent Meeker

Bruno Marchal wrote:
...
 Now all (sufficiently rich) theories/machine can prove their own 
 Godel's theorem. PA can prove that if PA is consistent then PA cannot 
 prove its consistency. A somewhat weak (compared to ZF) theory like PA 
 can even prove the corresponding theorem for the richer ZF: PA can 
 prove that if ZF is consistent then ZF can prove its own consistency. 

Of course you meant ..then ZF cannot prove its own consistency.

Brent Meeker

 So, in general a machine can find its own godelian sentences, and can 
 even infer their truth in some abductive way from very minimal 
 inductive inference abilities, or from assumptions.
 
 No sound (or just consistent) machine can ever prove its own godelian 
 sentences; in particular no machine can prove its own consistency, but 
 then machines can bet on them (or know them serendipitously). This is 
 comparable with consciousness. Indeed it is easy to manufacture thought 
 experiments illustrating that no conscious being can prove it is 
 conscious, except that consciousness is more truth-related, so that 
 machines cannot even define their own consciousness (by Tarski's 
 undefinability of truth theorem).

But this is within an axiomatic system - whose reliability already depends on 
knowing the truth of the axioms.  ISTM that concepts of consciousness, 
knowledge, and truth that are relative to formal axiomatic systems are already 
too weak to provide fundamental explanations.

Brent Meeker





Re: Penrose and algorithms

2007-07-05 Thread LauLuna



On 29 jun, 19:10, Jesse Mazer [EMAIL PROTECTED] wrote:
 LauLuna  wrote:

 On 29 jun, 02:13, Jesse Mazer [EMAIL PROTECTED] wrote:
   LauLuna wrote:

   For any Turing machine there is an equivalent axiomatic system;
   whether we could construct it or not, is of no significance here.

   But for a simulation of a mathematician's brain, the axioms wouldn't be
   statements about arithmetic which we could inspect and judge whether
 they
   were true or false individually, they'd just be statements about the
 initial
   state and behavior of the simulated brain. So again, there'd be no way
 to
   inspect the system and feel perfectly confident the system would never
   output a false statement about arithmetic, unlike in the case of the
   axiomatic systems used by mathematicians to prove theorems.

 Yes, but this is not the point. For any Turing machine performing
 mathematical skills there is also an equivalent mathematical axiomatic
 system; if we are sound Turing machines, then we could never know that
 mathematical system to be sound, despite its axioms being the same ones we
 use.

 I agree, a simulation of a mathematician's brain (or of a giant simulated
 community of mathematicians) cannot be a *knowably* sound system, because we
 can't do the trick of examining each axiom and seeing they are individually
 correct statements about arithmetic as with the normal axiomatic systems
 used by mathematicians. But that doesn't mean it's unsound either--it may in
 fact never produce a false statement about arithmetic, it's just that we
 can't be sure in advance, the only way to find out is to run it forever and
 check.

Yes, but how can there be a logical impossibility for us to
acknowledge as sound the same principles and rules we are using?
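
The logical impossibility in question is precisely Löb's theorem (a standard result, stated here for reference rather than taken from the post):

```latex
\text{L\"ob's theorem: if } S \vdash
  \mathrm{Prov}_S(\ulcorner \varphi \urcorner) \to \varphi,
\text{ then } S \vdash \varphi.
```

Taking the special case where the formula is falsum: if a consistent S proved even the single soundness instance Prov_S(false) -> false (i.e. its own consistency), Löb's theorem would make S prove falsum, so S would be inconsistent. Hence no consistent theory can endorse its own soundness, however natural its axioms look to it.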



 But Penrose was not just arguing that human mathematical ability can't be
 based on a knowably sound algorithm, he was arguing that it must be
 *non-algorithmic*.

No, he argues in Shadows of the Mind exactly what I say. He goes on 
to argue why a sound algorithm representing human intelligence is 
unlikely to be unknowably sound.



 And the impossibility has to be a logical impossibility, not merely a
 technical or physical one since it depends on Gödel's theorem. That's
 a bit odd, isn't it?

 No, I don't see anything very odd about the idea that human mathematical
 abilities can't be a knowably sound algorithm--it is no more odd than the
 idea that there are some cellular automata where there is no shortcut to
 knowing whether they'll reach a certain state or not other than actually
 simulating them, as Wolfram suggests in A New Kind of Science.

The point is that the axioms are exactly our axioms!

 In fact I'd
 say it fits nicely with our feeling of free will, that there should be no
 way to be sure in advance that we won't break some rules we have been told
 to obey, apart from actually running us and seeing what we actually end up
 doing.

I don't see how to reconcile free will with computationalism either.

Regards





Re: Penrose and algorithms

2007-07-05 Thread Jesse Mazer

LauLuna wrote:


On 29 jun, 19:10, Jesse Mazer [EMAIL PROTECTED] wrote:
  LauLuna  wrote:
 
  On 29 jun, 02:13, Jesse Mazer [EMAIL PROTECTED] wrote:
LauLuna wrote:
 
For any Turing machine there is an equivalent axiomatic system;
whether we could construct it or not, is of no significance here.
 
But for a simulation of a mathematician's brain, the axioms wouldn't 
be
statements about arithmetic which we could inspect and judge whether
  they
were true or false individually, they'd just be statements about the
  initial
state and behavior of the simulated brain. So again, there'd be no 
way
  to
inspect the system and feel perfectly confident the system would 
never
output a false statement about arithmetic, unlike in the case of the
axiomatic systems used by mathematicians to prove theorems.
 
  Yes, but this is not the point. For any Turing machine performing
  mathematical skills there is also an equivalent mathematical axiomatic
  system; if we are sound Turing machines, then we could never know that
  mathematical system to be sound, despite its axioms being the same ones we
  use.
 
  I agree, a simulation of a mathematician's brain (or of a giant 
simulated
  community of mathematicians) cannot be a *knowably* sound system, 
because we
  can't do the trick of examining each axiom and seeing they are 
individually
  correct statements about arithmetic as with the normal axiomatic systems
  used by mathematicians. But that doesn't mean it's unsound either--it 
may in
  fact never produce a false statement about arithmetic, it's just that we
  can't be sure in advance, the only way to find out is to run it forever 
and
  check.

Yes, but how can there be a logical impossibility for us to
acknowledge as sound the same principles and rules we are using?

The axioms in a simulation of a brain would have nothing to do with the 
high-level conceptual principles and rules we use when thinking about 
mathematics, they would be axioms concerning the most basic physical laws 
and microscopic initial conditions of the simulated brain and its simulated 
environment, like the details of which brain cells are connected by which 
synapses or how one cell will respond to a particular electrochemical signal 
from another cell. Just because I think my high-level reasoning is quite 
reliable in general, that's no reason for me to believe a detailed 
simulation of my brain would be sound in the sense that I'm 100% certain 
that this precise arrangement of nerve cells in this particular simulated 
environment, when allowed to evolve indefinitely according to some 
well-defined deterministic rules, would *never* make a mistake in reasoning 
and output an incorrect statement about arithmetic (or even that it would 
never choose to intentionally output a statement it believed to be false 
just to be contrary).


 
  But Penrose was not just arguing that human mathematical ability can't 
be
  based on a knowably sound algorithm, he was arguing that it must be
  *non-algorithmic*.

No, he argues in Shadows of the Mind exactly what I say. He goes on
to argue why a sound algorithm representing human intelligence is
unlikely to be unknowably sound.

He does argue that as a first step, but then he goes on to conclude what I 
said he did, that human intelligence cannot be algorithmic. For example, on 
p. 40 he makes quite clear that his arguments throughout the rest of the 
book are intended to show that there must be something non-computational in 
human mental processes:

I shall primarily be concerned, in Part I of this book, with the issue of 
what it is possible to achieve by use of the mental quality of 
'understanding.' Though I do not attempt to define what this word means, I 
hope that its meaning will indeed be clear enough that the reader will be 
persuaded that this quality--whatever it is--must indeed be an essential 
part of that mental activity needed for an acceptance of the arguments of 
2.5. I propose to show that the appreciation of these arguments must involve 
something non-computational.

Later, on p. 54:

Why do I claim that this 'awareness', whatever it is, must be something 
non-computational, so that no robot, controlled by a computer, based merely 
on the standard logical ideas of a Turing machine (or equivalent)--whether 
top-down or bottom-up--can achieve or even simulate it? It is here that the 
Godelian argument plays its crucial role.

His whole Godelian argument is based on the idea that for any computational 
theorem-proving machine, by examining its construction we can use this 
understanding to find a mathematical statement which *we* know must be 
true, but which the machine can never output--that we understand something 
it doesn't. But I think my argument shows that if you were really to build a 
simulated mathematician or community of mathematicians in a computer, the 
Godel statement for this system would only be true *if* they never made a 
mistake in reasoning or chose to output a false statement to be perverse, 
and that therefore there is no way for us on the outside to have any more 
confidence about whether they will ever output this statement than they do 
(and thus neither of us can know whether the statement is actually a true or 
false theorem of arithmetic).

Re: Penrose and algorithms

2007-06-29 Thread Bruno Marchal


Le 28-juin-07, à 16:32, LauLuna a écrit :



 This is not fair to Penrose. He has convincingly argued in 'Shadows of
 the Mind' that human mathematical intelligence cannot be a knowably
 sound algorithm.

 Assume X is an algorithm representing the human mathematical
 intelligence. The point is not that man cannot recognize X as
 representing his own intelligence, it is rather that human
 intelligence cannot know X to be sound (independently of whether X is
 recognized as what it is). And this is strange because humans could
 exhaustively inspect X and they should find it correct since it
 contains the same principles of reasoning human intelligence employs!


Insofar as human intelligence is sound and finitely describable as X, 
human intelligence cannot recognize X as being human intelligence.




 One way out is claiming that human intelligence is inconsistent.
 Another, that such a thing as human intelligence could not exist,
 since it is not well defined. The latter seems more of a serious
 objection to me. So, I consider Penrose's argument inconclusive.


Of course this will not work assuming comp, i.e. the (non-constructive) 
assumption that there is a level of description such that I can be 
described correctly at that level. The conclusion is only that I cannot 
prove to myself that such a level is the correct one, so the yes 
doctor has to be a non-constructive bet. Practically it needs some 
platonic act of faith. Assuming comp, we don't have to define what 
intelligence or consciousness is ... in order to reason.




 Anyway, the use Lucas and Penrose make of Gödel's theorem make it seem
 less likely that human reason can be reproduced by machines. This must
 be granted.


The Lucas-Penrose argument (of The Emperor's New Mind) is just 
incorrect, and its maximal correct reconstruction just shows that human 
reason/body cannot build a machine provably or knowably endowed with 
human reason/body.
It can do that in some non-provable way, and one could make a case 
that animals have done something similar since the invention of asexual 
and sexual reproduction.

Penrose is correct in Shadows of the Mind (by adding the 
knowably you refer to above), but he does not take that correction 
sufficiently into account.
But the whole of the arithmetical interpretation of Plotinus' 
hypostases, including its matter theory, is built on that nuance. 
Assuming comp, we really cannot know (soundly) which machine we are, 
and thus which computations support us. This gives the arithmetical 
interpretation of the first-person comp indeterminacies. It also 
predicts that any sound lobian machine looking at itself below its 
substitution level will discover a sharable form of indeterminacy, as 
QM confirms (and illustrates).

I do appreciate Penrose (I talked with him in Siena). Unlike many 
physicists, he is quite aware of the existence and hardness of the 
mind-body problem, and agrees that you cannot have both materialism and 
computationalism (but for different reasons than mine, and, as I said, 
slightly incorrect ones which force him to speculate on the wrongness 
of both QM and comp). I get the same conclusion by keeping comp and 
(most probably) QM, but then by abandoning physicalism/materialism.

Regards,

Bruno






 On 9 jun, 18:40, Bruno Marchal [EMAIL PROTECTED] wrote:
 Hi Chris,

 Le 09-juin-07, à 13:03, chris peck a écrit :







 Hello

 The time has come again when I need to seek advice from the
 everything-list
 and its contributors.

 Penrose I believe has argued that the inability to algorithmically
 solve the
 halting problem but the ability of humans, or at least Kurt Godel, to
 understand that formal systems are incomplete together demonstrate 
 that
 human reason is not algorithmic in nature - and therefore that the AI
 project is fundamentally flawed.

 What is the general consensus here on that score. I know that there
 are many
 perspectives here including those who agree with Penrose. Are there 
 any
 decent threads I could look at that deal with this issue?

 All the best

 Chris.

 This is a fundamental issue, even though things are clear for the
 logicians since 1921 ...
 But apparently it is still very cloudy for the physicists (except
 Hofstadter!).

 I have no time to explain, but let me quote the first paragraph of my
 Siena papers (your question is at the heart of the interview of the
 lobian machine and the arithmetical interpretation of Plotinus).

 But you can find many more explanations on my web pages (in French and
 in English). In a nutshell, Penrose, though quite courageous and more
 lucid on the mind-body problem than the average physicist, is deadly
 mistaken on Gödel. Gödel's theorems are a very lucky event for mechanism:
 eventually they lead to their theologies ...

 The book by Franzen on the misuse of Gödel is quite good. A deep book
 is also the one by Judson Webb (ref. in my thesis). We will have the
 opportunity to come back on this deep issue, which illustrates a gap
 

Re: Penrose and algorithms

2007-06-29 Thread LauLuna



On 29 jun, 02:13, Jesse Mazer [EMAIL PROTECTED] wrote:
 LauLuna wrote:

 For any Turing machine there is an equivalent axiomatic system;
 whether we could construct it or not, is of no significance here.

 But for a simulation of a mathematician's brain, the axioms wouldn't be
 statements about arithmetic which we could inspect and judge whether they
 were true or false individually, they'd just be statements about the initial
 state and behavior of the simulated brain. So again, there'd be no way to
 inspect the system and feel perfectly confident the system would never
 output a false statement about arithmetic, unlike in the case of the
 axiomatic systems used by mathematicians to prove theorems.


Yes, but this is not the point. For any Turing machine performing
mathematical skills there is also an equivalent mathematical axiomatic
system; if we are sound Turing machines, then we could never know that
mathematical system to be sound, despite the fact that its axioms are
the same ones we use.

And the impossibility has to be a logical impossibility, not merely a
technical or physical one, since it depends on Gödel's theorem. That's
a bit odd, isn't it?

Regards


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: Penrose and algorithms

2007-06-29 Thread Jesse Mazer

LauLuna  wrote:



On 29 jun, 02:13, Jesse Mazer [EMAIL PROTECTED] wrote:
  LauLuna wrote:
 
  For any Turing machine there is an equivalent axiomatic system;
  whether we could construct it or not, is of no significance here.
 
  But for a simulation of a mathematician's brain, the axioms wouldn't be
  statements about arithmetic which we could inspect and judge whether 
they
  were true or false individually, they'd just be statements about the 
initial
  state and behavior of the simulated brain. So again, there'd be no way 
to
  inspect the system and feel perfectly confident the system would never
  output a false statement about arithmetic, unlike in the case of the
  axiomatic systems used by mathematicians to prove theorems.
 

Yes, but this is not the point. For any Turing machine performing
mathematical skills there is also an equivalent mathematical axiomatic
system; if we are sound Turing machines, then we could never know that
mathematical system to be sound, despite the fact that its axioms are
the same ones we use.

I agree, a simulation of a mathematician's brain (or of a giant simulated 
community of mathematicians) cannot be a *knowably* sound system, because we 
can't do the trick of examining each axiom and seeing that each is 
individually a correct statement about arithmetic, as with the normal 
axiomatic systems used by mathematicians. But that doesn't mean it's unsound 
either--it may in fact never produce a false statement about arithmetic; 
it's just that we can't be sure in advance, and the only way to find out is 
to run it forever and check.

But Penrose was not just arguing that human mathematical ability can't be 
based on a knowably sound algorithm; he was arguing that it must be 
*non-algorithmic*. I think my thought-experiment shows why this doesn't make 
sense--we can see that Godel's theorem doesn't prove that an uploaded brain 
living in a closed computer simulation S would think any differently from 
us, just that it wouldn't be able to correctly output a theorem about 
arithmetic equivalent to "the simulation S will never output this 
statement". But this doesn't show that the uploaded mind somehow is not 
self-aware or that we know something it doesn't, since *we* can't correctly 
judge that statement to be true either! It might very well be that the 
simulated brain will slip up and make a mistake, giving that statement as 
output even though the act of doing so proves it's a false statement about 
arithmetic--we have no way to prove this will never happen; the only way to 
know is to run the program forever and see.
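The self-reference this thought-experiment leans on is not exotic: by the same trick a quine uses (and, more formally, Kleene's second recursion theorem, cited elsewhere in this thread), a program can contain a complete description of itself, so a sentence like "the simulation S will never output this statement" is perfectly well defined. A minimal sketch in Python; the `run` helper and the particular quine string are my illustration, not something from the thread:

```python
import io
import contextlib

# A two-line quine: a program whose output is its own source text.
# This is the mechanism that lets a Godel-style sentence talk about itself.
QUINE = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)"

def run(src: str) -> str:
    """Execute Python source and return what it printed (minus the final newline)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src)  # illustrative only; never exec untrusted input
    return buf.getvalue().rstrip("\n")

# The program is a fixed point of `run`: its output is exactly itself.
assert run(QUINE) == QUINE
```

Nothing mystical is needed for a program to "mention itself"; the open question in the thread is only whether it can soundly *assert* the Gödelian sentence built this way.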


And the impossibility has to be a logical impossibility, not merely a
technical or physical one, since it depends on Gödel's theorem. That's
a bit odd, isn't it?

No, I don't see anything very odd about the idea that human mathematical 
abilities can't be a knowably sound algorithm--it is no more odd than the 
idea that there are some cellular automata where there is no shortcut to 
knowing whether they'll reach a certain state or not, other than actually 
simulating them, as Wolfram suggests in "A New Kind of Science". In fact I'd 
say it fits nicely with our feeling of free will: there should be no 
way to be sure in advance that we won't break some rules we have been told 
to obey, apart from actually running us and seeing what we actually end up 
doing.
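Wolfram's claim is easy to make concrete. The sketch below is my illustration (function names arbitrary, not from the thread): it steps an elementary cellular automaton, rule 30, whose long-run behavior is believed to admit no shortcut other than actually running it forward:

```python
RULE = 30  # Wolfram's rule 30, a standard example of complex CA behavior

def step(cells, rule=RULE):
    """One synchronous update of an elementary CA; cells beyond the edges are 0."""
    padded = [0] + cells + [0]
    # Each cell's next state is the bit of `rule` indexed by its 3-cell neighborhood.
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]  # a single live cell
row = step(row)              # -> [0, 0, 1, 1, 1, 0, 0]
row = step(row)              # -> [0, 1, 1, 0, 0, 1, 0]
```

To "predict" row n you just iterate `step` n times; the point borrowed from Wolfram is that, for rules like this one, nothing essentially faster than simulation is known.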

Jesse






Re: Penrose and algorithms

2007-06-28 Thread LauLuna


This is not fair to Penrose. He has convincingly argued in 'Shadows of
the Mind' that human mathematical intelligence cannot be a knowably
sound algorithm.

Assume X is an algorithm representing the human mathematical
intelligence. The point is not that man cannot recognize X as
representing his own intelligence; it is rather that human
intelligence cannot know X to be sound (independently of whether X is
recognized as what it is). And this is strange, because humans could
exhaustively inspect X and they should find it correct, since it
contains the same principles of reasoning human intelligence employs!

One way out is claiming that human intelligence is inconsistent.
Another, that such a thing as human intelligence could not exist,
since it is not well defined. The latter seems more of a serious
objection to me. So I consider Penrose's argument inconclusive.

Anyway, the use Lucas and Penrose make of Gödel's theorem makes it seem
less likely that human reason can be reproduced by machines. This must
be granted.

Regards


On 9 jun, 18:40, Bruno Marchal [EMAIL PROTECTED] wrote:
 Hi Chris,

 Le 09-juin-07, à 13:03, chris peck a écrit :







  Hello

  The time has come again when I need to seek advice from the
  everything-list
  and its contributors.

  Penrose I believe has argued that the inability to algorithmically
  solve the
  halting problem but the ability of humans, or at least Kurt Godel, to
  understand that formal systems are incomplete together demonstrate that
  human reason is not algorithmic in nature - and therefore that the AI
  project is fundamentally flawed.

  What is the general consensus here on that score. I know that there
  are many
  perspectives here including those who agree with Penrose. Are there any
  decent threads I could look at that deal with this issue?

  All the best

  Chris.

 This is a fundamental issue, even though things are clear for the
 logicians since 1921 ...
 But apparently it is still very cloudy for the physicists (except
 Hofstadter!).

 I have no time to explain, but let me quote the first paragraph of my
 Siena papers (your question is at the heart of the interview of the
 lobian machine and the arithmetical interpretation of Plotinus).

 But you can find many more explanations on my web pages (in French and
 in English). In a nutshell, Penrose, though quite courageous and more
 lucid on the mind-body problem than the average physicist, is deadly
 mistaken on Gödel. Gödel's theorems are a very lucky event for mechanism:
 eventually they lead to their theologies ...

 The book by Franzen on the misuse of Gödel is quite good. A deep book
 is also the one by Judson Webb (ref. in my thesis). We will have the
 opportunity to come back on this deep issue, which illustrates a gap
 between logicians and physicists.

 Best,

 Bruno

 -- (excerpt of "A Purely Arithmetical, yet Empirically Falsifiable,
 Interpretation of Plotinus' Theory of Matter", CiE 2007)
 1) Incompleteness and Mechanism
 There is a vast literature where Gödel's first and second
 incompleteness theorems are used to argue that human beings are
 different from, if not superior to, any machine. The most famous attempts
 have been given by J. Lucas in the early sixties and by R. Penrose in
 two famous books [53, 54]. Such arguments are not well
 supported. See for example the recent book by T. Franzen [21]. There is
 also a less well known tradition where Gödel's theorems are used in
 favor of the mechanist thesis. Emil Post, in a remarkable anticipation
 written about ten years before Gödel published his incompleteness
 theorems, already discovered both the main "Gödelian motivation"
 against mechanism, and the main pitfall of such argumentations [17,
 55]. Post is the first discoverer of Church's Thesis, or the Church-Turing
 Thesis, and Post is the first one to prove the first incompleteness
 theorem from a statement equivalent to Church's thesis, i.e. the
 existence of a universal (Post said "complete") normal (production)
 system. In his anticipation, Post concluded at first that the
 mathematician's mind, or that the logical process, is essentially
 creative. He adds: "It makes of the mathematician much more than a
 clever being who can do quickly what a machine could do ultimately. We
 see that a machine would never give a complete logic; for once the
 machine is made we could prove a theorem it does not prove" (Post's
 emphasis). But Post quickly realized that a machine could do the same
 deduction for its own mental acts, and admits that: "The conclusion
 that man is not a machine is invalid. All we can say is that man cannot
 construct a machine which can do all the thinking he can. To illustrate
 this point we may note that a kind of machine-man could be constructed
 who would prove a similar theorem for his mental acts."
 This has probably constituted his motivation for lifting the term
 "creative" to his set-theoretical formulation of mechanical universality
 [56]. To be sure, an

Re: Penrose and algorithms

2007-06-28 Thread Jesse Mazer

LauLuna wrote:




This is not fair to Penrose. He has convincingly argued in 'Shadows of
the Mind' that human mathematical intelligence cannot be a knowably
sound algorithm.

Assume X is an algorithm representing the human mathematical
intelligence. The point is not that man cannot recognize X as
representing his own intelligence; it is rather that human
intelligence cannot know X to be sound (independently of whether X is
recognized as what it is). And this is strange, because humans could
exhaustively inspect X and they should find it correct, since it
contains the same principles of reasoning human intelligence employs!

But why do you think human mathematical intelligence should be based on 
nothing more than logical deductions from certain principles of reasoning, 
like an axiomatic system? It seems to me this is the basic flaw in the 
argument--for an axiomatic system we can look at each axiom individually, 
and if we think they're all true statements about mathematics, we can feel 
confident that any theorems derived logically from these axioms should be 
true as well. But if someone gives you a detailed simulation of the brain of 
a human mathematician, there's nothing analogous you can do to feel 100% 
certain that the simulated brain will never give you a false statement. It 
helps if you actually imagine such a simulation being performed, and then 
think about what Godel's theorem would tell you about this simulation, as I 
did in this post:

http://groups.google.com/group/everything-list/browse_thread/thread/f97ba8b290f7/5627eb66017304f2?lnk=gstrnum=1#5627eb66017304f2

Jesse






Re: Penrose and algorithms

2007-06-28 Thread LauLuna

For any Turing machine there is an equivalent axiomatic system;
whether we could construct it or not is of no significance here.

Reading your link I was impressed by Russell Standish's sentence:

'I cannot prove this statement'

and how he said he could not prove it true and then proved it true.

Isn't it more likely that the sentence is paradoxical and therefore
non-propositional? This is what could make a difference between humans
and computers: the corresponding sentence for a computer (when 'I' is
replaced with the description of a computer) could not be
non-propositional: it would be a Gödelian sentence.

Regards

On Jun 28, 10:05 pm, Jesse Mazer [EMAIL PROTECTED] wrote:
 LauLuna wrote:

 This is not fair to Penrose. He has convincingly argued in 'Shadows of
 the Mind' that human mathematical intelligence cannot be a knowably
 sound algorithm.

 Assume X is an algorithm representing the human mathematical
 intelligence. The point is not that man cannot recognize X as
 representing his own intelligence; it is rather that human
 intelligence cannot know X to be sound (independently of whether X is
 recognized as what it is). And this is strange, because humans could
 exhaustively inspect X and they should find it correct, since it
 contains the same principles of reasoning human intelligence employs!

 But why do you think human mathematical intelligence should be based on
 nothing more than logical deductions from certain principles of reasoning,
 like an axiomatic system? It seems to me this is the basic flaw in the
 argument--for an axiomatic system we can look at each axiom individually,
 and if we think they're all true statements about mathematics, we can feel
 confident that any theorems derived logically from these axioms should be
 true as well. But if someone gives you a detailed simulation of the brain of
 a human mathematician, there's nothing analogous you can do to feel 100%
 certain that the simulated brain will never give you a false statement. It
 helps if you actually imagine such a simulation being performed, and then
 think about what Godel's theorem would tell you about this simulation, as I
 did in this post:

 http://groups.google.com/group/everything-list/browse_thread/thread/f...

 Jesse






Re: Penrose and algorithms

2007-06-28 Thread Jesse Mazer

LauLuna wrote:



For any Turing machine there is an equivalent axiomatic system;
whether we could construct it or not, is of no significance here.

But for a simulation of a mathematician's brain, the axioms wouldn't be 
statements about arithmetic which we could inspect and judge whether they 
were true or false individually, they'd just be statements about the initial 
state and behavior of the simulated brain. So again, there'd be no way to 
inspect the system and feel perfectly confident the system would never 
output a false statement about arithmetic, unlike in the case of the 
axiomatic systems used by mathematicians to prove theorems.


Reading your link I was impressed by Russell Standish's sentence:

'I cannot prove this statement'

and how he said he could not prove it true and then proved it true.

But "prove" does not have any precisely-defined meaning here. If you wanted 
to make it closer to Godel's theorem, then again, you'd have to take a 
detailed simulation of a human mind which can output various statements, and 
then look at the statement "The simulation will never output this 
statement"--certainly the simulated mind can see that if he doesn't make a 
mistake he *will* never output that statement, but he can't be 100% sure 
he'll never make a mistake, and the statement itself is only about the 
well-defined notion of what output the simulation gives, not about the more 
ill-defined notions of what the simulation knows or can prove in its own 
mind.
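The reason "run it forever and see" cannot be shortcut in general is the halting problem, and the diagonal argument behind it can be written out directly. A hedged sketch (the names `diagonalize`, `trick`, and `pessimist` are mine, purely for illustration): given any claimed halting decider, we build a function on which that decider must be wrong.

```python
def diagonalize(h):
    """Given a claimed decider h(f) -> bool meaning "f() halts",
    return a function that does the opposite of whatever h predicts."""
    def trick():
        if h(trick):       # h predicts that trick halts...
            while True:    # ...so trick loops forever
                pass
        # h predicts that trick loops: halt immediately (return None)
    return trick

# Any concrete decider is refuted. E.g. one that always answers "loops":
pessimist = lambda f: False
t = diagonalize(pessimist)
t()  # returns at once, although pessimist predicted it never would
```

(For a decider that answers "halts" on `trick`, calling `t()` would loop forever, so that branch can only be argued, not demonstrated in finite time; which is exactly the point.)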

Jesse






Re: Penrose and algorithms

2007-06-21 Thread Pete Carlton
You could look up "Murmurs in the Cathedral", Daniel Dennett's review  
of Penrose's "The Emperor's New Mind", in the Times Literary  
Supplement (and maybe online somewhere?)

Here's an excerpt from a review of the review:


--

However, Penrose's main thesis, for which all this scientific  
exposition is mere supporting argument, is that algorithmic computers  
cannot ever be intelligent, because our mathematical insights are  
fundamentally non-algorithmic. Dennett is having none of it, and  
succinctly points out the underlying fallacy, that, even if there  
could not be an algorithm for a particular behaviour, there could  
still be an algorithm that was very very good (if not perfect) at  
that behaviour:

Dennett
The following argument, then, is simply fallacious:
X is superbly capable of achieving checkmate.
There is no (practical) algorithm guaranteed to achieve checkmate,
therefore
X does not owe its power to achieve checkmate to an algorithm.
So even if mathematicians are superb recognizers of mathematical  
truth, and even if there is no algorithm, practical or otherwise, for  
recognizing mathematical truth, it does not follow that the power of  
mathematicians to recognize mathematical truth is not entirely  
explicable in terms of their brains executing an algorithm. Not an  
algorithm for intuiting mathematical truth - we can suppose that  
Penrose has proved that there could be no such thing. What would the  
algorithm be for, then? Most plausibly it would be an algorithm - one  
of very many - for trying to stay alive, an algorithm that, by an  
extraordinarily convoluted and indirect generation of byproducts,  
happened to be a superb (but not foolproof) recognizer of friends,  
enemies, food, shelter, harbingers of spring, good arguments - and  
mathematical truths. 
  /Dennett

It is disconcerting that he does not even address the issue, and  
often writes as if an algorithm could have only the powers it could  
be proven mathematically to have in the worst case.
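Dennett's "superb but not foolproof" point has a familiar concrete analogue (my comparison, not the review's): the Fermat primality test is an excellent recognizer of primes that comes with no guarantee, and composite "Carmichael numbers" such as 561 slip through it.

```python
def fermat_probably_prime(n: int, base: int = 2) -> bool:
    """Fermat test: primes always pass, most composites fail -- but not all."""
    if n < 2:
        return False
    if n in (2, base):
        return True
    # Fermat's little theorem: for prime n and gcd(base, n) = 1,
    # base**(n-1) is congruent to 1 mod n.
    return pow(base, n - 1, n) == 1

assert fermat_probably_prime(97)       # genuinely prime, accepted
assert not fermat_probably_prime(91)   # 7 * 13, correctly rejected
assert fermat_probably_prime(561)      # 3 * 11 * 17: a Carmichael number fools it
```

A superb but imperfect recognizer of primality, with no proof of soundness attached; in Dennett's terms, nothing about its occasional failure shows that primality recognition "is not algorithmic".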




On Jun 9, 2007, at 4:03 AM, chris peck wrote:


 Hello

 The time has come again when I need to seek advice from the  
 everything-list
 and its contributors.

 Penrose I believe has argued that the inability to algorithmically  
 solve the
 halting problem but the ability of humans, or at least Kurt Godel, to
 understand that formal systems are incomplete together demonstrate  
 that
 human reason is not algorithmic in nature - and therefore that the AI
 project is fundamentally flawed.

 What is the general consensus here on that score. I know that there  
 are many
 perspectives here including those who agree with Penrose. Are there  
 any
 decent threads I could look at that deal with this issue?

 All the best

 Chris.



 





Re: Penrose and algorithms

2007-06-11 Thread chris peck

cheers Bruno. :)


From: Bruno Marchal [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Penrose and algorithms
Date: Sat, 9 Jun 2007 18:40:50 +0200


Hi Chris,

Le 09-juin-07, à 13:03, chris peck a écrit :

 
  Hello
 
  The time has come again when I need to seek advice from the
  everything-list
  and its contributors.
 
  Penrose I believe has argued that the inability to algorithmically
  solve the
  halting problem but the ability of humans, or at least Kurt Godel, to
  understand that formal systems are incomplete together demonstrate that
  human reason is not algorithmic in nature - and therefore that the AI
  project is fundamentally flawed.
 
  What is the general consensus here on that score. I know that there
  are many
  perspectives here including those who agree with Penrose. Are there any
  decent threads I could look at that deal with this issue?
 
  All the best
 
  Chris.


This is a fundamental issue, even though things are clear for the
logicians since 1921 ...
But apparently it is still very cloudy for the physicists (except
Hofstadter!).

I have no time to explain, but let me quote the first paragraph of my
Siena papers (your question is at the heart of the interview of the
lobian machine and the arithmetical interpretation of Plotinus).

But you can find many more explanations on my web pages (in French and
in English). In a nutshell, Penrose, though quite courageous and more
lucid on the mind-body problem than the average physicist, is deadly
mistaken on Gödel. Gödel's theorems are a very lucky event for mechanism:
eventually they lead to their theologies ...

The book by Franzen on the misuse of Gödel is quite good. A deep book
is also the one by Judson Webb (ref. in my thesis). We will have the
opportunity to come back on this deep issue, which illustrates a gap
between logicians and physicists.

Best,

Bruno


-- (excerpt of "A Purely Arithmetical, yet Empirically Falsifiable,
Interpretation of Plotinus' Theory of Matter", CiE 2007)
1) Incompleteness and Mechanism
There is a vast literature where Gödel's first and second
incompleteness theorems are used to argue that human beings are
different from, if not superior to, any machine. The most famous attempts
have been given by J. Lucas in the early sixties and by R. Penrose in
two famous books [53, 54]. Such arguments are not well
supported. See for example the recent book by T. Franzen [21]. There is
also a less well known tradition where Gödel's theorems are used in
favor of the mechanist thesis. Emil Post, in a remarkable anticipation
written about ten years before Gödel published his incompleteness
theorems, already discovered both the main "Gödelian motivation"
against mechanism, and the main pitfall of such argumentations [17,
55]. Post is the first discoverer of Church's Thesis, or the Church-Turing
Thesis, and Post is the first one to prove the first incompleteness
theorem from a statement equivalent to Church's thesis, i.e. the
existence of a universal (Post said "complete") normal (production)
system. In his anticipation, Post concluded at first that the
mathematician's mind, or that the logical process, is essentially
creative. He adds: "It makes of the mathematician much more than a
clever being who can do quickly what a machine could do ultimately. We
see that a machine would never give a complete logic; for once the
machine is made we could prove a theorem it does not prove" (Post's
emphasis). But Post quickly realized that a machine could do the same
deduction for its own mental acts, and admits that: "The conclusion
that man is not a machine is invalid. All we can say is that man cannot
construct a machine which can do all the thinking he can. To illustrate
this point we may note that a kind of machine-man could be constructed
who would prove a similar theorem for his mental acts."
This has probably constituted his motivation for lifting the term
"creative" to his set-theoretical formulation of mechanical universality
[56]. To be sure, an application of Kleene's second recursion theorem,
see [30], can make any machine self-replicating, and Post should have
said only that man cannot both construct a machine doing his thinking
and prove that such a machine does so. This is what remains of a
reconstruction of the Lucas-Penrose argument: if we are machines we
cannot constructively specify which machine we are, nor, a fortiori,
which computation supports us. Such analysis begins perhaps with
Benacerraf [4] (see [41] for more details). In his book on the subject,
Judson Webb argues that Church's Thesis is a main ingredient of the
Mechanist Thesis. Then he argues that, given that incompleteness is an
easy (one double-diagonalization step, see above) consequence of
Church's Thesis, Gödel's 1931 theorem, which proves incompleteness
without appeal to Church's Thesis, can be taken as a confirmation of it.
Judson Webb concludes that Gödel's incompleteness theorem is a very
lucky event for the mechanist

Re: Penrose and algorithms

2007-06-09 Thread Bruno Marchal

Hi Chris,

Le 09-juin-07, à 13:03, chris peck a écrit :


 Hello

 The time has come again when I need to seek advice from the 
 everything-list
 and its contributors.

 Penrose I believe has argued that the inability to algorithmically 
 solve the
 halting problem but the ability of humans, or at least Kurt Godel, to
 understand that formal systems are incomplete together demonstrate that
 human reason is not algorithmic in nature - and therefore that the AI
 project is fundamentally flawed.

 What is the general consensus here on that score. I know that there 
 are many
 perspectives here including those who agree with Penrose. Are there any
 decent threads I could look at that deal with this issue?

 All the best

 Chris.


This is a fundamental issue, even though things are clear for the 
logicians since 1921 ...
But apparently it is still very cloudy for the physicists (except 
Hofstadter!).

I have no time to explain, but let me quote the first paragraph of my 
Siena papers (your question is at the heart of the interview of the 
lobian machine and the arithmetical interpretation of Plotinus).

But you can find many more explanations on my web pages (in French and 
in English). In a nutshell, Penrose, though quite courageous and more 
lucid on the mind-body problem than the average physicist, is deadly 
mistaken on Gödel. Gödel's theorems are a very lucky event for mechanism: 
eventually they lead to their theologies ...

The book by Franzen on the misuse of Gödel is quite good. A deep book 
is also the one by Judson Webb (ref. in my thesis). We will have the 
opportunity to come back on this deep issue, which illustrates a gap 
between logicians and physicists.

Best,

Bruno


-- (excerpt of "A Purely Arithmetical, yet Empirically Falsifiable, 
Interpretation of Plotinus' Theory of Matter", CiE 2007)
1) Incompleteness and Mechanism
There is a vast literature where Gödel's first and second 
incompleteness theorems are used to argue that human beings are 
different from, if not superior to, any machine. The most famous attempts 
have been given by J. Lucas in the early sixties and by R. Penrose in 
two famous books [53, 54]. Such arguments are not well 
supported. See for example the recent book by T. Franzen [21]. There is 
also a less well known tradition where Gödel's theorems are used in 
favor of the mechanist thesis. Emil Post, in a remarkable anticipation 
written about ten years before Gödel published his incompleteness 
theorems, already discovered both the main "Gödelian motivation" 
against mechanism, and the main pitfall of such argumentations [17, 
55]. Post is the first discoverer of Church's Thesis, or the Church-Turing 
Thesis, and Post is the first one to prove the first incompleteness 
theorem from a statement equivalent to Church's thesis, i.e. the 
existence of a universal (Post said "complete") normal (production) 
system. In his anticipation, Post concluded at first that the 
mathematician's mind, or that the logical process, is essentially 
creative. He adds: "It makes of the mathematician much more than a 
clever being who can do quickly what a machine could do ultimately. We 
see that a machine would never give a complete logic; for once the 
machine is made we could prove a theorem it does not prove" (Post's 
emphasis). But Post quickly realized that a machine could do the same 
deduction for its own mental acts, and admits that: "The conclusion 
that man is not a machine is invalid. All we can say is that man cannot 
construct a machine which can do all the thinking he can. To illustrate 
this point we may note that a kind of machine-man could be constructed 
who would prove a similar theorem for his mental acts."
This has probably constituted his motivation for lifting the term 
"creative" to his set-theoretical formulation of mechanical universality 
[56]. To be sure, an application of Kleene's second recursion theorem, 
see [30], can make any machine self-replicating, and Post should have 
said only that man cannot both construct a machine doing his thinking 
and prove that such a machine does so. This is what remains of a 
reconstruction of the Lucas-Penrose argument: if we are machines we 
cannot constructively specify which machine we are, nor, a fortiori, 
which computation supports us. Such analysis begins perhaps with 
Benacerraf [4] (see [41] for more details). In his book on the subject, 
Judson Webb argues that Church's Thesis is a main ingredient of the 
Mechanist Thesis. Then he argues that, given that incompleteness is an 
easy (one double-diagonalization step, see above) consequence of 
Church's Thesis, Gödel's 1931 theorem, which proves incompleteness 
without appeal to Church's Thesis, can be taken as a confirmation of it. 
Judson Webb concludes that Gödel's incompleteness theorem is a very 
lucky event for the mechanist philosopher [70, 71]. Torkel Franzen, who 
concentrates mainly on the negative (antimechanist in general) abuses 
of Gödel's theorems, notes, after