Le 07-juil.-07, à 16:39, LauLuna a écrit :


> > > > On Jul 7, 12:59 pm, Bruno Marchal <[EMAIL PROTECTED]> wrote:
>> Le 06-juil.-07, à 14:53, LauLuna a écrit :
>>
>>> But again, for any set of such 'physiological' axioms there is a
>>> corresponding equivalent set of 'conceptual' axioms. There is all the
>>> same a logical impossibility for us to know the second set is sound.
>>> No consistent (and strong enough) system S can prove the soundness of
>>> any system S' equivalent to S: otherwise S' would prove its own
>>> soundness and would be inconsistent. And this is just what is odd.
>>
>> It is odd indeed. But it is.
>
> No, it is not necessarily so; the alternative is that such an
> algorithm does not exist. I will endorse the existence of that
> algorithm only when I find reason enough to do so. I haven't yet, and
> the oddities its existence implies count, obviously, against its
> existence.

If the algorithm exists, then the knowable algorithm does not exist. We can only bet on comp, not prove it. But comp is refutable.

>>> I'd say this is rather Lucas's argument. Penrose's is like this:
>>>
>>> 1. Mathematicians are not using a knowably sound algorithm to do
>>> math.
>>> 2. If they were using any algorithm whatsoever, they would be using
>>> a knowably sound one.
>>> 3. Ergo, they are not using any algorithm at all.
>>
>> Do you agree that, from what you say above, "2." is already
>> invalidated?
>
> Not at all. I still find it far likelier that if there is a sound
> algorithm ALG and an equivalent formal system S whose soundness we can
> know, then there is no logical impossibility in our knowing the
> soundness of ALG.

We do agree. You are just postulating not-comp. I have no trouble with that.

> What I find inconclusive in Penrose's argument is that he refers not
> just to actual human intellectual behavior but to some idealized
> (forever sound and consistent) human intelligence. I think the
> existence of such an ability has to be argued.
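The soundness point quoted at the top can be sketched formally (a minimal sketch; Sound and Con stand for the usual arithmetized soundness and consistency predicates, and S ≡ S' means the two systems prove the same theorems):

```latex
% If a consistent S proved the soundness of an equivalent S',
% then S' would prove its own consistency, contradicting
% Goedel's second incompleteness theorem.
\begin{align*}
  & S \vdash \mathrm{Sound}(S')  && \text{(assumption)}\\
  & S' \vdash \mathrm{Sound}(S') && \text{(since } S \equiv S'\text{)}\\
  & S' \vdash \mathrm{Con}(S')   && \text{(soundness implies consistency)}\\
  & S' \text{ is inconsistent}   && \text{(G\"odel II)}
\end{align*}
```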
A rather good approximation for a machine could be given by the transfinite set of effective and finite sound extensions of a Lobian machine, like those proposed by Turing. They all obey locally G and G* (as shown by Beklemishev). The infinite and the transfinite do not help the machine with regard to the incompleteness phenomenon, except if the infinite is made very highly non-effective. But in that case you tend toward the "One", or truth (a very non-effective notion).

> If someone asked me: 'do you agree that Penrose's argument does not
> prove there are certain human behaviors which computers can't
> reproduce?', I'd answer: 'yes, I agree it doesn't'. But if someone
> asked me: 'do you agree that Penrose's argument does not prove human
> intelligence cannot be simulated by computers?', I'd reply: 'as far
> as that abstract intelligence you speak of exists at all as a real
> faculty, I'd say it is far more probable that computers cannot
> reproduce it'.

Why? All you need to do is provide more and more "time-space-memory" to the machine. Humans are "universal" by extending their minds with pictures on walls, ... magnetic tape ...

> I.e. some versions of computationalism assume, exactly like Penrose,
> the existence of that abstract human intelligence; I would say those
> formulations of computationalism are nearly refuted by Penrose.

There is a Lobian abstract intelligence, but it can differentiate into many kinds, and it cannot be defined *effectively* (by a program) by any machine. It corresponds loosely to the first non-effective or non-nameable ordinal (the OMEGA_1^Church-Kleene ordinal).

> I hope I've made my point clear.

OK. Personally I am just postulating the comp hyp and studying the consequences. If we are a machine, or a sequence of machines, then we cannot know which machine we are, still less which sequence of machines we belong to ... (introducing eventually verifiable 1-person indeterminacies).
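For reference, the logics G and G* mentioned above can be stated as follows (a standard presentation; G is also written GL, with the box read as formal provability):

```latex
% G (= GL), the logic of provability:
%   K:  \Box(p \to q) \to (\Box p \to \Box q)
%   L:  \Box(\Box p \to p) \to \Box p        (Loeb's axiom)
%   rules: modus ponens and necessitation (from p infer \Box p)
%
% G*: the theorems of G, plus all instances of the reflection
% schema \Box p \to p, closed under modus ponens only
% (necessitation is NOT a rule of G*).
```

G proves what the machine can prove about its own provability; G* proves what is true about it, including the reflection instances the machine itself cannot prove.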
I argue that the laws of observability (physics) emerge from that comp-indeterminacy.

I think we agree on Penrose.

Bruno

http://iridia.ulb.ac.be/~marchal/

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en