Re: Overcoming Incompleteness

2007-05-31 Thread Bruno Marchal


Le 30-mai-07, à 16:00, Bruno Marchal a écrit :

>
>
> Le 29-mai-07, à 07:31, Russell Standish a écrit :
>
>>
>> On Tue, May 29, 2007 at 03:05:52PM +0200, Bruno Marchal wrote:
>>>
>>>
>>> Of course many things depends on definitions, but I thought it was
>>> clear that I consider that any theorem prover machine,  for a theory
>>> like ZF or PA, is already self-aware. And of course such theorem
>>> prover
>>> already exist (not just in platonia).
>>
>> No it wasn't clear. Also theorem proving software exists already in
>> the real world, and has been used to validate elementary circuit
>> design. Unfortunately, more complicated circuits such as Pentium
>> processors are beyond these programs. But I've never heard of anyone
>> considering these programs self-aware before.
>
>
> This is due probably because they have not been conceived in that
> spirit.


Of course when I say that the machine, or the theorem prover for PA, is 
self-aware, you have to keep in mind the whole background of the comp 
assumption, including the main movie-graph consequence, which forbids 
attaching consciousness to any "spatio-temporal-physical activity". PA's 
consciousness is de-localized and distributed in *all* the LRA or UD 
computations, and it is arguable that your own present consciousness is 
already part of many of PA's lives (even without comp, you *do* extend 
PA). ("You" = all of you, not just Russell Standish!)

Locally, PA's awareness is just the fact that when PA proves p, sooner or 
later PA proves Bew('p') (which I write "modally" as Bp). Even the UD, or 
LRA (Little Robinson Arithmetic), is aware in that sense.
PA's self-awareness is the fact that for any arithmetical p, PA proves 
Bp -> BBp. LRA definitely lacks that self-awareness. Both LRA and PA are 
subject to the consequences of incompleteness, but only PA and its sound 
logical descendants can overcome it.
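
To make the distinction concrete, here is a minimal sketch (mine, not part 
of the original post) of Smullyan-style awareness versus self-awareness, 
with a toy "prover" whose theorems are just strings and where "B(x)" is 
read as "the machine proves x". The closure rule and the sample formulas 
are illustrative assumptions, not PA or LRA themselves.

# Toy sketch in Python: awareness = whenever p is proved, B(p) eventually
# is; self-awareness = the machine itself proves the "4" formula Bp -> BBp.
def close_under_awareness(theorems, rounds=3):
    # Awareness rule: whenever p is proved, sooner or later B(p) is proved
    # (approximated by a few closure rounds here).
    thms = set(theorems)
    for _ in range(rounds):
        thms |= {"B(" + p + ")" for p in thms}
    return thms

def is_aware(theorems):
    # every proved non-B formula p has B(p) proved as well (within the rounds)
    return all("B(" + p + ")" in theorems
               for p in theorems if not p.startswith("B("))

def proves_4_schema(theorems, sample):
    # the machine itself proves  Bp -> BBp  for the sample propositions p
    return all("B(" + p + ") -> B(B(" + p + "))" in theorems for p in sample)

lra_like = close_under_awareness({"0+1=1"})
pa_like  = lra_like | {"B(0+1=1) -> B(B(0+1=1))"}    # PA proves the 4-schema
print(is_aware(lra_like), proves_4_schema(lra_like, ["0+1=1"]))   # True False
print(is_aware(pa_like),  proves_4_schema(pa_like,  ["0+1=1"]))   # True True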

Actually, the mathematical capture of the Universal Dovetailer Argument 
(UDA), or of Plotinus, can be appreciated without any ontological 
assumption, and should please positivists and formalists. By contrast, the 
original UDA itself requires some non-mathematical assumptions, like the 
"yes doctor", and can appeal only through some (minimal) 
arithmetical-platonist and cognitive assumptions.

Of course I do believe all Lobian machines are self-aware in that sense. 
And thanks to their lack of (human? Aristotelian?) prejudices, they 
appear to be very close to Plotinus when they describe what they 
discover by looking inward.

In a sense PA, ZF, ... are more self-aware than we are ... perhaps all the 
more in this time, when self-reflection seems to be a bit out of fashion ... 
I think this is in part due to the mixing of theology, religion and 
politics, where the doubting attitude is discouraged ... (another, but 
related, debate).


Bruno

http://iridia.ulb.ac.be/~marchal/





Re: Overcoming Incompleteness

2007-05-30 Thread Bruno Marchal


Le 29-mai-07, à 07:31, Russell Standish a écrit :

>
> On Tue, May 29, 2007 at 03:05:52PM +0200, Bruno Marchal wrote:
>>
>>
>> Of course many things depends on definitions, but I thought it was
>> clear that I consider that any theorem prover machine,  for a theory
>> like ZF or PA, is already self-aware. And of course such theorem 
>> prover
>> already exist (not just in platonia).
>
> No it wasn't clear. Also theorem proving software exists already in
> the real world, and has been used to validate elementary circuit
> design. Unfortunately, more complicated circuits such as Pentium
> processors are beyond these programs. But I've never heard of anyone
> considering these programs self-aware before.


This is probably because they have not been conceived in that 
spirit.



>
>> But
>> frankly I am glad with Smullyan (rather classical) definition of
>> self-awareness in his little book "FOREVER UNDECIDED":
>> A machine M is self-aware if, having proved p, she will prove soon or
>> later if not already, Bp.
>> A machine M is self-aware of its self-awareness if for any p she 
>> proves
>> Bp -> BBp (that is: she knows the preceding line for any proposition
>> p).
>
> I will try to borrow a copy of this book to read his
> justification. These definitions you give are not at all
> obvious. Sydney Uni has a copy in its library - unfortunately there is
> a bit of administrative hassle for me to organise borrowing priveleges
> there.
>
>> This explains why a philosopher like Malcolm (ref in my thesis), who
>> reasons very rigorously, and who believes the theaetetical definition
>> of knowledge is wrong,  is forced both to disbelieve in comp, and to
>> disbelieve in consciousness in dreams (like with lucidity, but not
>> necessarily).
>
> Well this is different from me. I don't believe the theatetical
> definitions are wrong as such, I just don't really understand their
> justification. If it were a question of terminology, I could happily
> call it "Theatetical knowledge", and prove all sorts of wonderful
> results about it, but still never know what it corresponds to in the
> real world.
>
> Maybe Smullyan has some justification for this?


Actually Smullyan uses the Theaetetical definition without going into any 
detail. He just says that a machine believes p when the machine asserts p, 
and that the machine knows p when both p is true and the machine asserts p.
More involved explanations and criticisms occur in the original Theaetetus 
by Plato, and all along the philosophical literature.
Wittgenstein makes a case for it in his late writings, when he realizes 
that knowledge and belief could correspond to the same mental state in 
different contexts.
What is really not obvious at all is that the Theaetetical definition 
works for the provability of a correct machine. Why? Because if the 
machine is correct, then Bp and Bp & p are obviously equivalent. The key 
is that this equivalence cannot be proved or asserted by the machine, 
because no correct (Lobian) machine can ever know that she is correct. 
She would then know that Bp -> p, and in particular she would know that 
Bf -> f, that is ~Bf, that is: she would know that she is consistent, 
contradicting Godel II.
The same reasoning holds for all the hypostases: they are all equivalent 
in terms of provability or assertability, and yet, from the machine's 
point of view, they obey drastically different logics.
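
Here is a small Kripke-model sketch (mine, not in the original mail; the 
two-world model and the formula encoding are assumptions made purely for 
illustration) of why Bp -> p, though true of a correct machine, is not 
among her theorems: the provability logic GL is sound for finite 
transitive irreflexive frames, and Bp -> p can be refuted on such a frame.

# Evaluate modal formulas on a tiny Kripke model; Bp -> p fails at w0 on a
# transitive, irreflexive frame -- the kind of frame for which the
# provability logic GL is sound -- so the correct machine does not prove it.
W = ["w0", "w1"]
R = {("w0", "w1")}                 # irreflexive and (trivially) transitive
V = {"p": {"w1"}}                  # p holds only at w1

def holds(formula, world):
    kind = formula[0]
    if kind == "atom":
        return world in V[formula[1]]
    if kind == "not":
        return not holds(formula[1], world)
    if kind == "imp":
        return (not holds(formula[1], world)) or holds(formula[2], world)
    if kind == "B":                # B q: q holds at every accessible world
        return all(holds(formula[1], v) for (u, v) in R if u == world)
    raise ValueError(kind)

p = ("atom", "p")
reflection = ("imp", ("B", p), p)          # Bp -> p
print(holds(("B", p), "w0"))               # True:  p holds at all successors
print(holds(p, "w0"))                      # False: p fails at w0 itself
print(holds(reflection, "w0"))             # False: Bp -> p is refuted at w0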



>
>
>>> and also Kripke frames being
>>> the phenomena of time (I can see that they could be a model of
>>> time,
>>> but that is another thing entirely).
>>>
>>
>>
>>
>> I don't remember having said that "Kripke frames"  *are*  the 
>> phenomena
>> of times? What would that means?  All I say is that assuming comp 
>> (well
>> the weak acomp) then the soul or the first person generates its
>> subjective time, and that this is confirmed by the fact that the
>
> This is exactly what I meant. Time for me is subjective (unless one is
> talking about coordinate time, which one does in physics, but is not
> what we're talking about here).
>
>> first-person hypostasis (the one given by the theaetetical definition
>> of knowability) has a  temporal-intuitionistic logical semantics.
>
> This is the point I never understood. Something about it "quacks like
> a duck", so must therefore be a duck?



It is known that intuitionistic logic has a temporal flavor, and this is 
nice from the point of view of informal intuitionistic philosophy, which 
is already a theory of "subjective time" à la Bergson or Brouwer.

Now, Godel already suggested in 1933 (and, by the way, Kolmogorov 
discovered this in 1932) that the modal logic S4 provides an epistemic 
interpretation of intuitionistic logic. Godel suggests translating 
intuitionistic logic into the (classical) modal logic S4 by using the 
following recursive translation t_int:

t_int(p) = Bp                               (for atomic p)
t_int(~p) = ~B(t_int(p))
t_int(p -> q) = B(t_int(p)) -> B(t_int(q))
t_int(p & q) = t_int(p) & t_int(q)
t_int(p V q) = B(t_int(p)) V B(t_int(q))
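
A small sketch, mine rather than Bruno's, of the recursive translation 
above; formulas are encoded as nested tuples, which is purely an 
illustrative assumption.

# Godel's translation t_int from intuitionistic formulas into S4, following
# the clauses above.  Formulas are tuples: ("atom","p"), ("not",f),
# ("imp",f,g), ("and",f,g), ("or",f,g); ("B",f) is the S4 box.
def t_int(f):
    kind = f[0]
    if kind == "atom":
        return ("B", f)                                   # t_int(p) = Bp
    if kind == "not":
        return ("not", ("B", t_int(f[1])))                # ~B(t_int(p))
    if kind == "imp":
        return ("imp", ("B", t_int(f[1])), ("B", t_int(f[2])))
    if kind == "and":
        return ("and", t_int(f[1]), t_int(f[2]))
    if kind == "or":
        return ("or", ("B", t_int(f[1])), ("B", t_int(f[2])))
    raise ValueError(kind)

# Example: translate excluded middle, p V ~p, into its S4 counterpart.
print(t_int(("or", ("atom", "p"), ("not", ("atom", "p")))))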

Re: Overcoming Incompleteness

2007-05-29 Thread Russell Standish

On Tue, May 29, 2007 at 03:05:52PM +0200, Bruno Marchal wrote:
> 
> 
> Of course many things depends on definitions, but I thought it was 
> clear that I consider that any theorem prover machine,  for a theory 
> like ZF or PA, is already self-aware. And of course such theorem prover 
> already exist (not just in platonia).  

No it wasn't clear. Also theorem proving software exists already in
the real world, and has been used to validate elementary circuit
design. Unfortunately, more complicated circuits such as Pentium
processors are beyond these programs. But I've never heard of anyone
considering these programs self-aware before.

> But 
> frankly I am glad with Smullyan (rather classical) definition of 
> self-awareness in his little book "FOREVER UNDECIDED": 
> A machine M is self-aware if, having proved p, she will prove soon or 
> later if not already, Bp. 
> A machine M is self-aware of its self-awareness if for any p she proves 
> Bp -> BBp (that is: she knows the preceding line for any proposition 
> p). 

I will try to borrow a copy of this book to read his
justification. These definitions you give are not at all
obvious. Sydney Uni has a copy in its library - unfortunately there is
a bit of administrative hassle for me to organise borrowing privileges
there.

> This explains why a philosopher like Malcolm (ref in my thesis), who 
> reasons very rigorously, and who believes the theaetetical definition 
> of knowledge is wrong,  is forced both to disbelieve in comp, and to 
> disbelieve in consciousness in dreams (like with lucidity, but not 
> necessarily). 

Well this is different from me. I don't believe the theatetical
definitions are wrong as such, I just don't really understand their
justification. If it were a question of terminology, I could happily
call it "Theatetical knowledge", and prove all sorts of wonderful
results about it, but still never know what it corresponds to in the
real world.

Maybe Smullyan has some justification for this?


> > and also Kripke frames being 
> > the phenomena of time (I can see that they could be a model of 
> > time, 
> > but that is another thing entirely). 
> > 
>  
> 
> 
> I don't remember having said that "Kripke frames"  *are*  the phenomena 
> of times? What would that means?  All I say is that assuming comp (well 
> the weak acomp) then the soul or the first person generates its 
> subjective time, and that this is confirmed by the fact that the 

This is exactly what I meant. Time for me is subjective (unless one is
talking about coordinate time, which one does in physics, but is not
what we're talking about here).

> first-person hypostasis (the one given by the theaetetical definition 
> of knowability) has a  temporal-intuitionistic logical semantics. 

This is the point I never understood. Something about it "quacks like
a duck", so must therefore be a duck?

> That this corresponds on the "real subjective time" for the 
> self-observing machine already follows from the UDA, where 1) 
> comp-practitioners does not modelize their brain with an artificial 
> one, but really accept it as a new brain + 2) the non distinction 
> between "real", virtual and arithmetical. 
> 
> 
> 
> 
> Thanks for the information. I didn't know him. But I know the work by 
> Fontana and some of its followers in Artificial Chemistry, and which is 
> based on  lambda calculus (the little cousin of combinatory logic).  
> Lambda expressions and combinators are very useful to analyze 
> computation with some fine graining (as my Elsevier paper illustrates a 
> bit). But theology needs more the notion of computability (which is 
> independent of the choice of formal systems) than the notion of 
> computation, which is far more involved. Of course there are some 
> links. 
> 

Fontana had a paper about this at the Brussels ECAL, which you attended
IIRC. I missed the Brussels ECAL due to having to return to Australia
before it (I was living in Germany in 1992), otherwise I would have
been there. I remember feeling very disappointed about it. 

Speroni di Fenizio builds on Fontana's model in several important
respects, but is certainly in that tradition.

> 
> Bruno 
> 
> 
> 
> 
> http://iridia.ulb.ac.be/~marchal/ 
> 
> > 
> 

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                http://www.hpcoders.com.au



Re: Overcoming Incompleteness

2007-05-29 Thread Bruno Marchal

Le 26-mai-07, à 22:32, Russell Standish a écrit :

>
> On Fri, May 25, 2007 at 04:00:40PM +0200, Bruno Marchal wrote:
>>
>>
>> Le 25-mai-07, à 04:12, Russell Standish a écrit :
>>
>>> I don't think anyone yet has managed a self aware formal system,
>>
>> I would say all my work is about that. You can interpret Godel's
>> theorem, or more exactly the fact that machine can prove their own
>> provability logic, and even guess correctly the true but non provable
>> part, as a general statement of self-awareness. Sometimes,
>> self-awareness is defined more precisely by the "4" modal formula: Bp
>> -> BBp. This is a theorem for PA, but not for LRA.
>> When LRA proves p, then soon or later, if not already, LRA will prove
>> Bp (that is LRA will prove that she prove p). So Bp -> BBp is true for
>> LRA, but only PA can prove such a proposition on itself.
>>
>
> Absolutely your work is all about that. And constructing a formal
> self-aware system will just as likely come from an approach like
> yours as anything else. My only statement was that it hasn't been done
> already, and certainly I wasn't aware of you claiming self-awareness
> already.




Of course many things depend on definitions, but I thought it was 
clear that I consider any theorem-proving machine, for a theory 
like ZF or PA, to be already self-aware. And of course such theorem provers 
already exist (not just in Platonia).
The "interview" of the lobian machine has already begun, through the 
papers of Godel, Lob, ... Then Solovay's results have captured, through 
the modal logic G and G*,  the whole (infinite) conversation we can 
have with and about such machines (G gives (self-) science and G* gives 
the (self) correct hope/bet/interrogation/theology).
The technical part of my thesis is really the *result* of the interview 
of such self-aware (and even aware of being self-aware) machine. But 
frankly I am glad with Smullyan (rather classical) definition of 
self-awareness in his little book "FOREVER UNDECIDED":
A machine M is self-aware if, having proved p, she will prove soon or 
later if not already, Bp.
A machine M is self-aware of its self-awareness if for any p she proves 
Bp -> BBp (that is: she knows the preceding line for any proposition 
p).
No need to put too much philosophy here; it could be premature.
As a platonist (or just a mathematician), there is no need to conduct the 
interview with an "incarnate" machine; we can stay in Platonia, where such 
machines and their interviews exist out of space and time. That is good, 
given that the goal (made necessary by the UDA) is to derive the 
structure of space and time from the interview!
But yes, indeed, I already consider PA or ZF as persons, with or 
without terrestrial local implementations.







> Admittedly, I miss some of the conclusions you do make, such as the
> Theatetic axioms encapsulating knowledge, ...



I remember. You said some time ago that the definition of knowledge of 
p (Kp) by true justified opinion (Bp & p) is debatable, and you are 
right; it has been debated since, well, a very long time ago, and in many 
places: China, India, Greece up to Plato.
I will postpone a debate on that for a bit. Note that this idea is used 
implicitly at step 6 of the Universal Dovetailer Argument. It is 
related to the use of dreams in metaphysics.
And such a definition has to be nuanced, at least if we trust Socrates' 
criticism of Theaetetus, or Plotinus, or ... the Lobian machine itself!
Indeed, by incompleteness we have the new Bp & Dp nuances, and we have 
the Bp & Dp & p nuances, etc.
Actually, those who criticize the definition of knowledge by "Bp & p" 
have to maintain that they are able to distinguish their dreaming and 
waking states. But with comp, the thought experiments (or their 
arithmetical Godelian translation) show that we cannot, locally or in 
a bounded finite time, make that distinction.
This explains why a philosopher like Malcolm (ref in my thesis), who 
reasons very rigorously and who believes the Theaetetical definition 
of knowledge is wrong, is forced both to disbelieve in comp and to 
disbelieve in consciousness in dreams (as with lucidity, but not 
necessarily).
Very interestingly, Barnes (ref in my thesis) makes a similar move to 
escape Maudlin's "~comp or ~materialism" argument. I am writing 
(albeit slowly) a paper on that for the Journal of Philosophy as a 
rejoinder to Maudlin and Barnes. We can come back to this later.







> and also Kripke frames being
> the phenomena of time (I can see that they could be a model of time,
> but that is another thing entirely).



I don't remember having said that "Kripke frames" *are* the phenomena 
of time. What would that mean? All I say is that, assuming comp (well, 
the weak acomp), the soul or the first person generates its 
subjective time, and that this is confirmed by the fact that the 
first-person hypostasis (the one given by the Theaetetical definition 
of knowability) has a temporal-intuitionistic logical semantics.

Re: Overcoming Incompleteness

2007-05-27 Thread Russell Standish

On Fri, May 25, 2007 at 04:00:40PM +0200, Bruno Marchal wrote:
> 
> 
> Le 25-mai-07, à 04:12, Russell Standish a écrit :
> 
> > I don't think anyone yet has managed a self aware formal system,
> 
> I would say all my work is about that. You can interpret Godel's 
> theorem, or more exactly the fact that machine can prove their own 
> provability logic, and even guess correctly the true but non provable 
> part, as a general statement of self-awareness. Sometimes, 
> self-awareness is defined more precisely by the "4" modal formula: Bp 
> -> BBp. This is a theorem for PA, but not for LRA.
> When LRA proves p, then soon or later, if not already, LRA will prove 
> Bp (that is LRA will prove that she prove p). So Bp -> BBp is true for 
> LRA, but only PA can prove such a proposition on itself.
> 

Absolutely your work is all about that. And constructing a formal
self-aware system will just as likely come from an approach like
yours as anything else. My only statement was that it hasn't been done
already, and certainly I wasn't aware of you claiming self-awareness
already. Admittedly, I miss some of the conclusions you do make, such as the
Theatetic axioms encapsulating knowledge, and also Kripke frames being
the phenomena of time (I can see that they could be a model of time,
but that is another thing entirely).

> 
> 
> > although self-reproducing systems have been known since the 1950s, and
> > are popularly encountered in the form of computer viruses.
> 
> The principle is really the same. That is what I show in my paper 
> "amoeba, planaria, and dreaming machine". self-reproduction and 
> self-awareness are a consequence of the closure of the set of partial 
> computable function for the daigonalization procedure.
> 

Incidentally, I read your Elsevier paper the other day. It has inspired
me to take a look at combinators for artificial life applications. I
discovered that Pietro Speroni di Fenizio has spent a PhD looking at
this. He was a student of Walter Banzhaf and Peter Dittrich - I met
Walter shortly before I caught up with you in Brussels in 2003. Peter
I met at the Lausanne ECAL in 1999. Are you aware of any of this work?
AFAICT, he never achieved self-reproduction, which is a precursor to
evolution, but alas he didn't publish any of his code.

> 
> bruno
> 
> 
> 
> http://iridia.ulb.ac.be/~marchal/
> 
> 
> 
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                http://www.hpcoders.com.au





Re: Overcoming Incompleteness

2007-05-26 Thread Mohsen Ravanbakhsh
On 5/26/07, Jesse Mazer <[EMAIL PROTECTED]> wrote:
>
>
> Mohsen Ravanbakhsh wrote:
>
> >
> >Hi everybody,
> >I need to clarify. When we build this new combined system, we would be
> >immune to Godelian statements for one of them not for the whole system,
> >whatever it might be. So Jesse's argument does not hold, and of course
> the
> >new system does not contradict the Godel's theorem, it's (was!) just a
> way
> >to avoid it.
>
> But didn't you claim the combination of the two of them would be
> "complete"?
> "Complete" is supposed to mean a system will print out *every* true
> statement about arithmetic, and a Godel statement for a theorem-proving
> system is itself a true statement about arithmetic. So if the combined
> system has a Godel statement that it will never print out, and the
> combined
> system prints out every statement that the two of them can print out, then
> the combination of the two of them does not allow you to escape
> incompleteness.
>
> Jesse
>
Aha, there seems to be a subtle point here:
My claim was that the new system would be complete regarding the statements
each of the two systems (either one, because they're the same) can generate,
BUT building such a system would require some additional stuff(!) to glue
those similar systems together so they'd rely on each other and follow the
mentioned mechanism. Those extra THINGS should(!) still cause the
incompleteness of the new system as a whole, regarding the statements that
concern this coupling.
Anyway... it didn't work.

-- 

Mohsen Ravanbakhsh.




Re: Overcoming Incompleteness

2007-05-26 Thread Jesse Mazer

Mohsen Ravanbakhsh wrote:

>
>Hi everybody,
>I need to clarify. When we build this new combined system, we would be
>immune to Godelian statements for one of them not for the whole system,
>whatever it might be. So Jesse's argument does not hold, and of course the
>new system does not contradict the Godel's theorem, it's (was!) just a way
>to avoid it.

But didn't you claim the combination of the two of them would be "complete"? 
"Complete" is supposed to mean a system will print out *every* true 
statement about arithmetic, and a Godel statement for a theorem-proving 
system is itself a true statement about arithmetic. So if the combined 
system has a Godel statement that it will never print out, and the combined 
system prints out every statement that the two of them can print out, then 
the combination of the two of them does not allow you to escape 
incompleteness.

Jesse






Re: Overcoming Incompleteness

2007-05-26 Thread Mohsen Ravanbakhsh
Hi everybody,
I need to clarify. When we build this new combined system, we would be
immune to Godelian statements for one of the two systems, not for the whole
system, whatever that might be. So Jesse's argument does not hold, and of
course the new system does not contradict Godel's theorem; it is (was!) just
a way to avoid it.

Bruno: *"...Only if S2 is much more simple than S1, can S1 be complete on
S2."*
And finally this ruins everything...

-

Mohsen Ravanbakhsh.




Re: Overcoming Incompleteness

2007-05-25 Thread Stephen Paul King

Hi Russell,

- Original Message - 
From: "Russell Standish" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 25, 2007 12:14 AM
Subject: Re: Overcoming Incompleteness


>
> On Thu, May 24, 2007 at 11:53:59PM -0400, Stephen Paul King wrote:
>>
>> For me the question has always been how does one "overcome
>> Incompleteness" when it is impossible for a simulated system to be 
>> identical
>> to its simulator unless the two are one and the same.
>
> Is it though? If the simulated system is different from the original,
> then indeed I would agree with you.
>

[SPK]

It was the difference that I was trying to focus on... "Bisimulation" 
is, after all, a form of identity, if exact.

> In the case of human self-awareness, I thought it was implemented not
> by simulation as such, but by decoupling the actual inputs and outputs
> from the real world, and then feeding the scenario input into the
> actual brain circuit, and examine the output _without_ actually moving
> a muscle. It has something to do with the "mirror" neurons, and it
> really is quite a neat trick (at least Dennett thinks so, and I tend
> to agree).
>

[SPK]

OK, but what is it that is "generating" and "examining" the inputs and 
outputs? I am trying to frame this in software terms...

> Not being into supernatural explanations, I think a perfectly
> mechanical, or formal model should be able to capture this
> ability. But how to do it without running into infinite regress is the
> challenge. And if and when we have this formal model, we can then see 
> whether
> this idea of solving incompleteness has any merit. I'm as sceptical as
> anyone, but I do believe the case is more subtle than to be destroyed
> by a couple of lines of handwaving argument :).
>

[SPK]

We avoid infinite regress by having only finite computational resources. 
X can only generate a simulation of itself with a resolution whose upper 
bound is determined by the resources that X has available within the span 
of the simulation of X. Remember, X is not a static entity...

Kindest regards,

Stephen 





Re: Overcoming Incompleteness

2007-05-25 Thread James N Rose

Bruno, et al.,

There is a CRITICAL FUNDAMENTAL ERROR in 
Godel's papers and concept.

If a simpler 'less complete' system - which 
-includes- its statements - attempts to make 
-presumptive statements- about a 'more complete'
corresponding system ... and about its relationship to
the simpler 'base of statements' system, then a
conclusion is: a system CANNOT accurately 'self 
assess' but can accurately 'other assess' information
which may not in fact be present for assessment.

Generalized conclusion:

It is not possible to assess known information
whereas it is possible to assess unknown information.


James Rose




Re: Overcoming Incompleteness

2007-05-25 Thread Bruno Marchal


Le 25-mai-07, à 04:12, Russell Standish a écrit :

> I don't think anyone yet has managed a self aware formal system,

I would say all my work is about that. You can interpret Godel's 
theorem, or more exactly the fact that machines can prove their own 
provability logic, and even guess correctly the true but non-provable 
part, as a general statement of self-awareness. Sometimes 
self-awareness is defined more precisely by the "4" modal formula: Bp 
-> BBp. This is a theorem for PA, but not for LRA.
When LRA proves p, then sooner or later, if not already, LRA will prove 
Bp (that is, LRA will prove that she proves p). So Bp -> BBp is true for 
LRA, but only PA can prove such a proposition about itself.



> although self-reproducing systems have been known since the 1950s, and
> are popularly encountered in the form of computer viruses.

The principle is really the same. That is what I show in my paper 
"Amoeba, Planaria, and Dreaming Machines". Self-reproduction and 
self-awareness are a consequence of the closure of the set of partial 
computable functions under the diagonalization procedure.
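
A tiny illustration (mine, not from the original mail) of the fixed-point 
idea behind that closure: a self-reproducing program, i.e. a quine, 
obtained by applying a description of the program to itself. The variable 
name is arbitrary.

# A minimal self-reproducing Python program (quine): the string `src` is a
# description of the whole program, and applying it to itself
# (diagonalizing) makes the program print its own source.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))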


bruno



http://iridia.ulb.ac.be/~marchal/





Re: Overcoming Incompleteness

2007-05-25 Thread Jesse Mazer

Mohsen Ravanbakhsh wrote:
>
>*Jesse,
>I definitely don't think the two systems could be complete, since 
>(handwavey
>argument follows) if you have two theorem-proving algorithms A and B, it's
>trivial to just create a new algorithm that prints out the theorems that
>either A or B could print out, and incompleteness should apply to this too*
>**
>They're not independent systems.putting that aside, I can't find the
>correspondence to my argument. It would be nice if you could clarify your
>point.

I didn't say they were independent--but each has a well-defined set of 
theorems that it will judge to be true, no? My point was just that they 
could not together be complete as you say, since the combination of the two 
can always be treated as a *single* axiomatic system or theorem-proving 
algorithm, one which proves every theorem in the union of the two sets that 
A and B prove individually, and this must necessarily be incomplete--there 
must be true statements of arithmetic which this single system cannot prove 
(meaning that they belong neither to the set A can prove nor to the set B 
can prove).
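
Here is a small sketch (mine, not Jesse's) of that "union machine": given 
two theorem-enumerating procedures, dovetail them into one enumerator. The 
two toy generators stand in for real provers and are assumptions made 
purely for illustration; the point is that the result is again a single 
algorithm, so incompleteness applies to it as well.

# Combine two theorem enumerators A and B into a single enumerator that
# eventually outputs everything either of them outputs (dovetailing).
from itertools import count

def prover_A():                       # toy stand-in for prover A
    for n in count():
        yield f"A-theorem {n}"

def prover_B():                       # toy stand-in for prover B
    for n in count():
        yield f"B-theorem {n}"

def union_prover(gen_a, gen_b):
    # Alternate between the two streams, so every theorem of either
    # eventually appears; the result is again one theorem-proving algorithm.
    while True:
        yield next(gen_a)
        yield next(gen_b)

combined = union_prover(prover_A(), prover_B())
for _ in range(4):
    print(next(combined))   # A-theorem 0, B-theorem 0, A-theorem 1, B-theorem 1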

Jesse






Re: Overcoming Incompleteness

2007-05-25 Thread Bruno Marchal

Le 24-mai-07, à 19:32, Mohsen Ravanbakhsh a écrit :


> Thanks for your patience! , I know that my arguments are somehow  
> raw and immature in your view, but I'm just at the beginning.
>
> S1 can simulate S2, but S1 has no reason to believe whatever S2 says.
> There is no problem.
> Hofstadter "strange loop" are more related to arithmetical
> self-reference or general fixed point of recursive operator
>
> OK then it, becomes my own idea!
> Suppose S1 and S2 are the same systems, and both KNOW that the other  
> one is a similar system.



They cannot *know* that. The first person associated with each system is  
different from the other. Unless you mean something *very* general by  
"similar".






> Then both have the reason to believe in each others statements, with  
> the improvement that the new system is COMPLETE.


Why? Only if S2 is much simpler than S1 can S1 be complete about S2.  
No system can be complete about itself or about anything similar to itself.





> We've not exploited any more powerful system to overcome the  
> incompleteness in our system.
> I think this is a great achievement!
> It's actually like this: YOU believe in ME. THEY give  
> you a godelian statement (You theoretically can not prove this  
> statement) you give it to ME and then see that I can neither prove it 
> nor disprove it, so you tell 
>  THEM that their statement is true.


If that is a proof, then you are inconsistent. If it is an intuition,  
or a prayer, a hope, a bet, or anything like that, then it is OK, but  
you don't need two systems for that: S1 can use the very similar system  
S1 itself.
The apparent infinite regression is solved by the traditional  
diagonalization technique. Look in the archive for both
"diagonalisation" and "diagonalization". Or ask (and be patient :)




> But the wonder is in what we do just by ourselves. We have a THEORY OF  
> MIND. You actually do not need to ask me about the truth of that  
> statement, you just simulate me and that's why I can see the a  
> godelian statement is at last true. 


But a simulation is not a proof, especially if the simulation doesn't  
halt.




> But in the logical sense ONE system wont be able to overcome the 
> incompleteness, 
> so I might conclude:
>  I'M NOT ONE LOGICAL SYSTEM!
>  This is how we might rich a theory of self. A loopy(!) and multi(!)  
> self.


Here I do agree (but for a different reason). See later, or,  
meanwhile, search for "Plotinus" or "guardian angel" or "hypostases".

Very shortly, the Lobian-Godelian incompleteness forces the distinction  
between:

p
Bp
Bp & p
Bp & Dt
(Bp & p) & Dt

which makes a total of eight notions of "self" (8, not 5, because 3 of  
them split into two different logics due to incompleteness). B is for  
"beweisbar" and corresponds to Godel's arithmetical provability predicate.

You can read them as

truth
provability
knowability
observability
sensibility

or, with Plotinus (300 AD):

ONE
INTELLECT
SOUL
INTELLIGIBLE MATTER
SENSIBLE MATTER

With the provable-versus-true distinction (the [ ] / [ ]* distinction),  
the hypostases are:

one
discursive Intellect      divine Intellect
soul
intelligible matter       intelligible matter
sensible matter           sensible matter

AND THEN, adding the comp assumption leads to the comp physics, for  
the soul and both matter hypostases, and it is enough to compare the  
comp physics with the empirical physics to evaluate the degree of  
plausibility of comp. And that's my point.

Bruno


http://iridia.ulb.ac.be/~marchal/




Re: Overcoming Incompleteness

2007-05-25 Thread Mohsen Ravanbakhsh
*Russell,*
*Sounds plausible that self-aware systems can manage this. I'd like to
see this done as a formal system though, as I have a natural mistrust
of handwaving arguments! *

I like it too :).
I think the computational view would help in construction.

*Jesse,
I definitely don't think the two systems could be complete, since (handwavey
argument follows) if you have two theorem-proving algorithms A and B, it's
trivial to just create a new algorithm that prints out the theorems that
either A or B could print out, and incompleteness should apply to this too*

They're not independent systems. Putting that aside, I can't find the
correspondence to my argument. It would be nice if you could clarify your
point.

*Brent,*
*But doesn't that depend on having adopted rules of inference and values,
i.e. "the sentence is either true or false".  Why shouldn't we suppose that
self-referential  sentences can have other values or that your informal
reasoning above is a proof and hence there is contradiction*

Actually this is the main advantage of making such a loop. You're right only
when we're talking about ONE system, and that system has concluded the truth
of such a statement. But it can be avoided by two or more systems. They
are reliable in each other's view, and have statements asserting that. Maybe
that is the only (symmetric!) difference between those systems. Except for
that, both (all) are the same.

Mohsen Ravanbakhsh




Re: Overcoming Incompleteness

2007-05-25 Thread Jesse Mazer

Stephen Paul King wrote:

>
>Dear Jesse,
>
> Hasn't Stephen Wolfram proven that it is impossible to "shortcut"
>predictions for arbitrary behaviours of sufficienty complex systems?
>
>http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/
>
>
>Stephen

The paper itself doesn't seem to prove it--he uses a lot of tentative 
language about how certain problems "may" be computationally irreducible or 
are "expected" to be, as in this paragraph:

"Many complex or chaotic dynamical systems are expected to be 
computationally irreducible, and their behavior effectively found only by 
explicit simulation. Just as it is undecidable whether a particular initial 
state in a CA leads to unbounded growth, to self-replication, or has some 
other outcome, so it may be undecidable whether a particular solution to a 
differential equation (studied say with symbolic dynamics) even enters a 
certain region of phase space, and whether, say, a certain n-body system is 
ultimately stable. Similarly, the existence of an attractor, say, with a 
dimension above some value, may be undecidable."

Still, I think it's plausible that he's correct, and that there are indeed 
computations for which there is no "shortcut" to finding the program's state 
after N steps except actually running it for N steps.
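
A tiny cellular-automaton sketch (mine, not from the original mail) of what 
"no shortcut" means in practice: lacking a closed-form prediction, the only 
general way to learn the state after n steps is to run all n steps. Rule 30 
is used here only as a stock example of apparently irreducible behavior.

# Elementary cellular automaton, Rule 30, on a ring of 31 cells.
def step(cells, rule=30):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

state = [0] * 31
state[15] = 1                      # single black cell in the middle
for _ in range(10):                # "predicting" step 10 = running 10 steps
    state = step(state)
print("".join("#" if c else "." for c in state))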

Jesse






Re: Overcoming Incompleteness

2007-05-24 Thread Brent Meeker

Russell Standish wrote:
> You are right when it comes to the combination of two independent
> systems A and B. What the original poster's idea was a
> self-simulating, or self-aware system. In this case, consider the liar
> type paradox:
> 
>   I cannot prove this statement
> 
> Whilst I cannot prove this statement, I do know it is true, simply
> because if I could prove the statement it would be false.

But doesn't that depend on having adopted rules of inference and values, i.e. 
"the sentence is either true or false"?  Why shouldn't we suppose that 
self-referential sentences can have other values, or that your informal 
reasoning above is a proof and hence there is a contradiction?  

ISTM that all this weirdness comes about because we have tried to formalize 
ordinary reasoning into LOGIC, which works wonderfully for many axiom sets and 
rules of inference, but not for all.  That's why mathematicians generally get 
along just fine without worrying about completeness or the provability of 
consistency.

Brent Meeker

> 
> To know that it is true, I am using self-reference about my own proof
> capabilities. 
> 
> I don't think anyone yet has managed a self aware formal system,
> although self-reproducing systems have been known since the 1950s, and
> are popularly encountered in the form of computer viruses. There has
> to be some relationship between a self-reproducing system and a
> self-aware system...

I think it would be almost trivial to create an AI system that would recognize 
that the equivalent of "I cannot prove this statement" is true, or mu.  It would 
just be a trick of formal logic and would have very little to do with 
self-awareness.

Brent Meeker

> 
> Cheers
> 
> On Thu, May 24, 2007 at 09:45:45PM -0400, Jesse Mazer wrote:
>> I definitely don't think the two systems could be complete, since (handwavey 
>> argument follows) if you have two theorem-proving algorithms A and B, it's 
>> trivial to just create a new algorithm that prints out the theorems that 
>> either A or B could print out, and incompleteness should apply to this too.
>>
>> Jesse
>>
>>
>>> From: Russell Standish <[EMAIL PROTECTED]>
>>> Reply-To: [EMAIL PROTECTED]
>>> To: [EMAIL PROTECTED]
>>> Subject: Re: Overcoming Incompleteness
>>> Date: Thu, 24 May 2007 23:59:23 +1000
>>>
>>>
>>> Sounds plausible that self-aware systems can manage this. I'd like to
>>> see this done as a formal system though, as I have a natural mistrust
>>> of handwaving arguments!
>>>
>>> On Thu, May 24, 2007 at 10:32:29AM -0700, Mohsen Ravanbakhsh wrote:
>>>> Thanks for your patience! , I know that my arguments are somehow
>>>> raw and immature in your view, but I'm just at the beginning.
>>>>
>>>> *S1 can simulate S2, but S1 has no reason to believe whatever S2 says.
>>>> There is no problem.
>>>> **Hofstadter "strange loop" are more related to arithmetical
>>>> self-reference or general fixed point of recursive operator*
>>>>
>>>> OK then it, becomes my own idea!
>>>> Suppose S1 and S2 are the same systems, and both KNOW that the other one 
>>> is
>>>> a similar system. Then both have the reason to believe in each others
>>>> statements, with the improvement that the new system is COMPLETE. We've 
>>> not
>>>> exploited any more powerful system to overcome the incompleteness in our
>>>> system.
>>>> I think this is a great achievement!
>>>> It's actually like this: YOU believe in ME. THEY give
>>>> you a godelian statement (You theoretically can not prove this
>>>> statement) you give it to ME and then see that I can neither prove it
>>>> nor disprove it, so you tell
>>>> THEM that their statement is true.
>>>> But the wonder is in what we do just by ourselves. We have a THEORY OF 
>>> MIND.
>>>> You actually do not need to ask me about the truth of that statement, 
>>> you
>>>> just simulate me and that's why I can see the a godelian statement is at
>>>> last
>>>> true. But in the logical sense ONE system wont be able to overcome the
>>>> incompleteness,
>>>> so I might conclude:
>>>> I'M NOT ONE LOGICAL SYSTEM!
>>>> This is how we might rich a theory of self. A loopy(!) and multi(!) 
>>> self.
>>>>
>>>>
>>>> *
>>>>
>>>> *Mohsen Ravanbakhsh
>>>>
>>> --
>>>

Re: Overcoming Incompleteness

2007-05-24 Thread Russell Standish

On Thu, May 24, 2007 at 11:53:59PM -0400, Stephen Paul King wrote:
> 
> For me the question has always been how does one "overcome 
> Incompleteness" when it is impossible for a simulated system to be identical 
> to its simulator unless the two are one and the same. 

Is it though? If the simulated system is different from the original,
then indeed I would agree with you.

In the case of human self-awareness, I thought it was implemented not
by simulation as such, but by decoupling the actual inputs and outputs
from the real world, then feeding the scenario input into the
actual brain circuit and examining the output _without_ actually moving
a muscle. It has something to do with the "mirror" neurons, and it
really is quite a neat trick (at least Dennett thinks so, and I tend
to agree).

Not being into supernatural explanations, I think a perfectly
mechanical, or formal model should be able to capture this
ability. But how to do it without running into infinite regress is the
challenge. And if and when we have this formal model, we can then see whether
this idea of solving incompleteness has any merit. I'm as sceptical as
anyone, but I do believe the case is more subtle than to be destroyed
by a couple of lines of handwaving argument :).

Cheers




A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                http://www.hpcoders.com.au





Re: Overcoming Incompleteness

2007-05-24 Thread Stephen Paul King

Dear Jesse,

Hasn't Stephen Wolfram proven that it is impossible to "shortcut" 
predictions for arbitrary behaviours of sufficiently complex systems?

http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/


Stephen

- Original Message - 
From: "Jesse Mazer" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 24, 2007 10:31 PM
Subject: Re: Overcoming Incompleteness


snip
> The same thing would be true even if you replaced an individual in a
> computer simulation with a giant simulated community of mathematicians who
> could only output a given theorem if they had a unanimous vote, and where
> the size of the community was constantly growing so the probability of
> errors should be ever-diminishing...although they might hope that they 
> might
> never make an error even if the simulation ran forever, they couldn't
> rigorously prove this unless they found some shortcut for predicting their
> own community's behavior better than just letting the program run and 
> seeing
> what would happen (if they did find such a shortcut, this would have 
> strange
> implications for their own feeling of free will!)
>
> Jesse
> 





Re: Overcoming Incompleteness

2007-05-24 Thread Stephen Paul King

Dear Russell,

Isn't the key feature of a "self-aware system" the ability to generate 
some form of representation of itself within itself? Would it not be a 
simple matter of a system being able to generate some form of simulation of 
itself such that there is both a similarity and a difference between the 
system and its simulation of itself?
The similarity, it seems to me, would be merely a matter of faithful and 
predictable simulation of the system; the difference might be a difference 
in state, resolution or some other measure.

For me the question has always been how one "overcomes 
Incompleteness" when it is impossible for a simulated system to be identical 
to its simulator unless the two are one and the same. Given this identity, 
how then does any notion of provability arise when all that exists is a 
tautology, A=A?

BTW, have you ever taken a look at Jon Barwise's treatment of the Liar 
Paradox?

http://en.wikipedia.org/wiki/Jon_Barwise

Kindest regards,

Stephen

- Original Message - 
From: "Russell Standish" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 24, 2007 10:12 PM
Subject: Re: Overcoming Incompleteness


>
> You are right when it comes to the combination of two independent
> systems A and B. What the original poster's idea was a
> self-simulating, or self-aware system. In this case, consider the liar
> type paradox:
>
>  I cannot prove this statement
>
> Whilst I cannot prove this statement, I do know it is true, simply
> because if I could prove the statement it would be false.
>
> To know that it is true, I am using self-reference about my own proof
> capabilities.
>
> I don't think anyone yet has managed a self aware formal system,
> although self-reproducing systems have been known since the 1950s, and
> are popularly encountered in the form of computer viruses. There has
> to be some relationship between a self-reproducing system and a
> self-aware system...
>
> Cheers





Re: Overcoming Incompleteness

2007-05-24 Thread Jesse Mazer

Russell Standish:

>
>You are right when it comes to the combination of two independent
>systems A and B. What the original poster's idea was a
>self-simulating, or self-aware system. In this case, consider the liar
>type paradox:
>
>   I cannot prove this statement
>
>Whilst I cannot prove this statement, I do know it is true, simply
>because if I could prove the statement it would be false.

Yes, but Godel statements are more complex than verbal statements like the 
one above, they actually encode the complete rules of the theorem-proving 
system into the statement. A better analogy might be if you were an upload 
(see http://en.wikipedia.org/wiki/Mind_transfer) living in a self-contained 
deterministic computer simulation, and the only messages you could send to 
the outside world were judgments about whether particular mathematical 
theorems were true (once you make a judgement, you can't take it back...for 
any judgements your program makes, there must be a halting program that can 
show that you'll definitely make that judgement after some finite number of 
steps). Suppose you know the complete initial conditions X and dynamical 
rules Y of the simulation. Then suppose you're given a mathematical theorem 
Z which you can see qualifies as an "encoding" of the statement "the 
deterministic computer simulation with initial conditions X and dynamical 
rules Y will never output theorem Z as a true statement." You can see 
intuitively that it should be true if your reasoning remains correct, but 
you can't be sure that at some point after the simulation is running for a 
million years or something you won't decide to output that statement in a 
fit of perversity, nor can you actually come up with a rigorous *proof* that 
you'll never do that, since you can't find any shortcut to predicting the 
system's behavior aside from actually letting the simulation run and seeing 
what happens.

The same thing would be true even if you replaced an individual in a 
computer simulation with a giant simulated community of mathematicians who 
could only output a given theorem if they had a unanimous vote, and where 
the size of the community was constantly growing so the probability of 
errors should be ever-diminishing...although they might hope that they might 
never make an error even if the simulation ran forever, they couldn't 
rigorously prove this unless they found some shortcut for predicting their 
own community's behavior better than just letting the program run and seeing 
what would happen (if they did find such a shortcut, this would have strange 
implications for their own feeling of free will!)

Jesse






Re: Overcoming Incompleteness

2007-05-24 Thread Russell Standish

You are right when it comes to the combination of two independent
systems A and B. But the original poster's idea was a
self-simulating, or self-aware, system. In this case, consider the liar
type paradox:

  I cannot prove this statement

Whilst I cannot prove this statement, I do know it is true, simply
because if I could prove the statement it would be false.

To know that it is true, I am using self-reference about my own proof
capabilities. 

I don't think anyone yet has managed a self aware formal system,
although self-reproducing systems have been known since the 1950s, and
are popularly encountered in the form of computer viruses. There has
to be some relationship between a self-reproducing system and a
self-aware system...

Cheers

On Thu, May 24, 2007 at 09:45:45PM -0400, Jesse Mazer wrote:
> 
> I definitely don't think the two systems could be complete, since (handwavey 
> argument follows) if you have two theorem-proving algorithms A and B, it's 
> trivial to just create a new algorithm that prints out the theorems that 
> either A or B could print out, and incompleteness should apply to this too.
> 
> Jesse
> 
> 
> >From: Russell Standish <[EMAIL PROTECTED]>
> >Reply-To: [EMAIL PROTECTED]
> >To: [EMAIL PROTECTED]
> >Subject: Re: Overcoming Incompleteness
> >Date: Thu, 24 May 2007 23:59:23 +1000
> >
> >
> >Sounds plausible that self-aware systems can manage this. I'd like to
> >see this done as a formal system though, as I have a natural mistrust
> >of handwaving arguments!
> >
> >On Thu, May 24, 2007 at 10:32:29AM -0700, Mohsen Ravanbakhsh wrote:
> > > Thanks for your patience! , I know that my arguments are somehow
> > > raw and immature in your view, but I'm just at the beginning.
> > >
> > > *S1 can simulate S2, but S1 has no reason to believe whatever S2 says.
> > > There is no problem.
> > > **Hofstadter "strange loop" are more related to arithmetical
> > > self-reference or general fixed point of recursive operator*
> > >
> > > OK then it, becomes my own idea!
> > > Suppose S1 and S2 are the same systems, and both KNOW that the other one 
> >is
> > > a similar system. Then both have the reason to believe in each others
> > > statements, with the improvement that the new system is COMPLETE. We've 
> >not
> > > exploited any more powerful system to overcome the incompleteness in our
> > > system.
> > > I think this is a great achievement!
> > > It's actually like this: YOU believe in ME. THEY give
> > > you a godelian statement (You theoretically can not prove this
> > > statement) you give it to ME and then see that I can neither prove it
> > > nor disprove it, so you tell
> > > THEM that their statement is true.
> > > But the wonder is in what we do just by ourselves. We have a THEORY OF 
> >MIND.
> > > You actually do not need to ask me about the truth of that statement, 
> >you
> > > just simulate me and that's why I can see the a godelian statement is at
> > > last
> > > true. But in the logical sense ONE system wont be able to overcome the
> > > incompleteness,
> > > so I might conclude:
> > > I'M NOT ONE LOGICAL SYSTEM!
> > > This is how we might rich a theory of self. A loopy(!) and multi(!) 
> >self.
> > >
> > >
> > >
> > > *
> > >
> > > *Mohsen Ravanbakhsh
> > >
> > > >
> >
> >--
> >
> >
> >A/Prof Russell Standish  Phone 0425 253119 (mobile)
> >Mathematics
> >UNSW SYDNEY 2052  [EMAIL PROTECTED]
> >Australia                            http://www.hpcoders.com.au
> >
> >
> >>
> 
> 
> 
> 
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: Overcoming Incompleteness

2007-05-24 Thread Jesse Mazer

I definitely don't think the two systems could be complete, since (handwavey 
argument follows) if you have two theorem-proving algorithms A and B, it's 
trivial to just create a new algorithm that prints out the theorems that 
either A or B could print out, and incompleteness should apply to this too.
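
To make the handwaving a little more concrete, here is a toy Python
sketch of that construction; the two generators are just stand-ins for
real theorem provers, which would likewise be unending enumerations:

    from itertools import count, islice

    # Stand-ins for two theorem provers A and B: each simply enumerates
    # (names of) its theorems and never halts.
    def theorems_A():
        for n in count():
            yield f"A-theorem-{n}"

    def theorems_B():
        for n in count():
            yield f"B-theorem-{n}"

    def merged():
        # Dovetail the two streams so that every theorem of A or of B
        # eventually appears.  The merged enumeration is itself just one
        # more mechanical theorem-lister, so Godel's theorem applies to
        # it exactly as it did to A and B separately.
        a, b = theorems_A(), theorems_B()
        while True:
            yield next(a)
            yield next(b)

    print(list(islice(merged(), 6)))
    # ['A-theorem-0', 'B-theorem-0', 'A-theorem-1',
    #  'B-theorem-1', 'A-theorem-2', 'B-theorem-2']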

Jesse


>From: Russell Standish <[EMAIL PROTECTED]>
>Reply-To: [EMAIL PROTECTED]
>To: [EMAIL PROTECTED]
>Subject: Re: Overcoming Incompleteness
>Date: Thu, 24 May 2007 23:59:23 +1000
>
>
>Sounds plausible that self-aware systems can manage this. I'd like to
>see this done as a formal system though, as I have a natural mistrust
>of handwaving arguments!
>
>On Thu, May 24, 2007 at 10:32:29AM -0700, Mohsen Ravanbakhsh wrote:
> > Thanks for your patience! I know that my arguments are somewhat
> > raw and immature in your view, but I'm just at the beginning.
> >
> > *S1 can simulate S2, but S1 has no reason to believe whatever S2 says.
> > There is no problem.
> > **Hofstadter's "strange loops" are more related to arithmetical
> > self-reference or general fixed points of recursive operators*
> >
> > OK, then it becomes my own idea!
> > Suppose S1 and S2 are identical systems, and both KNOW that the other one
> > is a similar system. Then both have reason to believe each other's
> > statements, with the improvement that the new system is COMPLETE. We've
> > not exploited any more powerful system to overcome the incompleteness in
> > our system.
> > I think this is a great achievement!
> > It's actually like this: YOU believe in ME. THEY give
> > you a Gödelian statement ("You theoretically cannot prove this
> > statement"); you give it to ME and then see that I can neither prove it
> > nor disprove it, so you tell
> > THEM that their statement is true.
> > But the wonder is in what we do just by ourselves. We have a THEORY OF
> > MIND.
> > You actually do not need to ask me about the truth of that statement;
> > you just simulate me, and that's how you can see that the Gödelian
> > statement is in fact true. But in the logical sense ONE system won't be
> > able to overcome the incompleteness, so I might conclude:
> > I'M NOT ONE LOGICAL SYSTEM!
> > This is how we might reach a theory of self. A loopy(!) and multi(!)
> > self.
> >
> > Mohsen Ravanbakhsh
> >
> > >
>
>--
>
>
>A/Prof Russell Standish  Phone 0425 253119 (mobile)
>Mathematics
>UNSW SYDNEY 2052                     [EMAIL PROTECTED]
>Australia                            http://www.hpcoders.com.au
>
>
>>






Re: Overcoming Incompleteness

2007-05-24 Thread Russell Standish

Sounds plausible that self-aware systems can manage this. I'd like to
see this done as a formal system though, as I have a natural mistrust
of handwaving arguments!

On Thu, May 24, 2007 at 10:32:29AM -0700, Mohsen Ravanbakhsh wrote:
> Thanks for your patience! I know that my arguments are somewhat
> raw and immature in your view, but I'm just at the beginning.
> 
> *S1 can simulate S2, but S1 has no reason to believe whatever S2 says.
> There is no problem.
> **Hofstadter's "strange loops" are more related to arithmetical
> self-reference or general fixed points of recursive operators*
> 
> OK, then it becomes my own idea!
> Suppose S1 and S2 are identical systems, and both KNOW that the other one is
> a similar system. Then both have reason to believe each other's
> statements, with the improvement that the new system is COMPLETE. We've not
> exploited any more powerful system to overcome the incompleteness in our
> system.
> I think this is a great achievement!
> It's actually like this: YOU believe in ME. THEY give
> you a Gödelian statement ("You theoretically cannot prove this
> statement"); you give it to ME and then see that I can neither prove it
> nor disprove it, so you tell
> THEM that their statement is true.
> But the wonder is in what we do just by ourselves. We have a THEORY OF MIND.
> You actually do not need to ask me about the truth of that statement; you
> just simulate me, and that's how you can see that the Gödelian statement is
> in fact true. But in the logical sense ONE system won't be able to overcome
> the incompleteness, so I might conclude:
> I'M NOT ONE LOGICAL SYSTEM!
> This is how we might reach a theory of self. A loopy(!) and multi(!) self.
> 
> Mohsen Ravanbakhsh
> 
> > 

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: Overcoming Incompleteness

2007-05-24 Thread Mohsen Ravanbakhsh
Thanks for your patience! I know that my arguments are somewhat
raw and immature in your view, but I'm just at the beginning.

*S1 can simulate S2, but S1 has no reason to believe whatever S2 says.
There is no problem.
**Hofstadter's "strange loops" are more related to arithmetical
self-reference or general fixed points of recursive operators*

OK, then it becomes my own idea!
Suppose S1 and S2 are identical systems, and both KNOW that the other one is
a similar system. Then both have reason to believe each other's
statements, with the improvement that the new system is COMPLETE. We've not
exploited any more powerful system to overcome the incompleteness in our
system.
I think this is a great achievement!
It's actually like this: YOU believe in ME. THEY give
you a Gödelian statement ("You theoretically cannot prove this
statement"); you give it to ME and then see that I can neither prove it
nor disprove it, so you tell
THEM that their statement is true.
But the wonder is in what we do just by ourselves. We have a THEORY OF MIND.
You actually do not need to ask me about the truth of that statement; you
just simulate me, and that's how you can see that the Gödelian statement is
in fact true. But in the logical sense ONE system won't be able to overcome
the incompleteness, so I might conclude:
I'M NOT ONE LOGICAL SYSTEM!
This is how we might reach a theory of self. A loopy(!) and multi(!) self.


Mohsen Ravanbakhsh




Re: Overcoming Incompleteness

2007-05-23 Thread Bruno Marchal


Le 22-mai-07, à 12:57, Mohsen Ravanbakhsh a écrit :

> Hi everybody,
> It seems Bruno's argument is a bit rich for some of us to digest, so I 
> decided to keep talking by posing another issue.
> By Gödel's argument we know that every sufficiently powerful system of 
> logic is incomplete, and recently there has been much argument 
> to make humans an exception;


Mainly Lucas 1960 and Penrose's recent books. But Emil Post found in 1921 
(!!!) both the "Gödelian" argument against mechanism AND the pitfall (it 
is just an error) in that argument.
Those who read French can look at my "Conscience & Mecanisme" for a 
thorough analysis of that "error", including the natural "machine's 
answer" to that error.



> that's because we see the truth of Gödelian statements (i.e. "This 
> sentence is unprovable in this system")


Of course we "see" the truth only for simple machines, not for machines 
just a little more complex. This is used explicitly in my approach. A 
rich lobian machine (like ZF) can derive the whole theology of PA 
("whole" means both the provable and the unprovable parts (at some level)). 
But ZF has to make a leap of faith to lift that theology onto herself.
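
A concrete instance (standard facts, in the usual notation): ZF proves 
Con(PA), and PA itself already proves the equivalence  G_PA <-> Con(PA)  
for its own Gödel sentence G_PA, so ZF proves G_PA outright; ZF "sees" a 
truth that PA can only bet on. But by the second incompleteness theorem 
ZF cannot prove Con(ZF) (assuming ZF is consistent), so with respect to 
her own Gödel sentence ZF is in exactly PA's position. That is the leap 
of faith.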


> Let's call such a system S1, and call another (powerful enough in 
> Gödel's sense) system S2, and suppose S2 structurally is able to make 
> statements about the statements in S1. What does it mean? Consider S2 
> as being able to examine some statements in S1 via some operators and 
> get the result (like function calls).
>  
> My claim is:
> 
> 1.
> S2 is able to see the truth of the Gödelian statements in S1, and in 
> some sense
> "S2 is complete with respect to the statements of S1", because it can 
> see that S1 will in the end not be able to decide our Gödelian 
> statement, and so that statement is true.

You are correct (up to some details I won't bore you with right now).

>  
> 2.
> We humans are vulnerable to Gödelian statements like all other 
> logical systems.


Yes. Insofar as we are correct!


> We have our paradoxes too.
> Consider the same Gödelian statement for yourself as a system 
> (i.e. "You can't prove me", or some similar sentence like "This 
> sentence is false")


Better: "This sentence is not provable by you."
Is that sentence true? Can you prove it?

>  
> 3. In the first claim, consider the first system to have the same 
> attitude toward the second one; I mean, let there be a loop (somehow 
> similar to Hofstadter's strange loops as the foundation of self).
> Is it complete or not? 


S1 can simulate S2, but S1 has no reason to believe whatever S2 says. 
There is no problem.
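
One way to make that precise, using Löb's theorem (for a system 
satisfying the usual derivability conditions, and writing Bp for "S 
proves p"): suppose S1 and S2 are literally the same consistent system 
S, and S "believes whatever the other one says", i.e. S proves Bp -> p 
for every sentence p. Löb's theorem says that whenever S proves Bp -> p, 
S already proves p; so such blanket self-trust would make S prove every 
sentence, i.e. make S inconsistent. A consistent machine cannot fully 
trust even an exact copy of itself.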
Hofstadter's "strange loops" are more related to arithmetical 
self-reference, or to general fixed points of recursive operators (imo).

I have to go. Don't hesitate to make any comments. If I am slow to 
answer, it just means I'm a bit busy, not that I consider that some 
questions don't have to be answered. Thanks for the interest,

Bruno









http://iridia.ulb.ac.be/~marchal/

