This is a continuation of a discussion I started off-list, slightly edited to make 
more sense.

Will Pearson wrote:
> Do you have an area where you discuss Goedel machines? If so,
> please point it out to me. If not, feel free to answer me in a reply
> to this email if you feel I am saying something worthwhile.
>  
> All the points are with regard to formal provability and the real
> world, such as a robot might walk around in. I would not bring this
> up, but I believe you are trying to solve the "grand problem of
> AI" and have defined it in terms of interacting with the real
> world. I don't think it is possible to formally prove many things
> about the world, and a few philosophers, from Popper to Quine,
> would agree with me. This is important for three reasons. The first
> is that if you provide an axiom that is incorrect about the world,
> then the program will not be optimal, because it will have false
> assumptions. So, assuming not much is axiomatically defined about
> the world, the Goedel machine has some disadvantages compared to
> less fussy architectures.
>  
> The second problem: take, for example, the possibility of placing
> some of its code on another, faster machine to give the Goedel
> machine the advantage of parallelism and a speed increase. The
> Goedel machine cannot formally prove that that machine will still
> be there tomorrow, or that it will not be tampered with by outside
> influence (even if you did manage to give it the correct axioms
> about the universe, computing such things from first principles
> seems ludicrous). And so the Goedel machine would not transfer part
> of itself, because the time spent on the transfer does not have a
> guaranteed pay-off. By the same token I would argue that humans are
> not Goedel machines, as we rely on other fallible humans to give us
> axioms to reason about the world with, which is not a provably
> better way of doing things.
>  
> The third and last problem with the architecture is that it has
> unrealistic axioms about itself. It assumes it runs on a perfect
> mathematical machine where none of its internal workings can go
> wrong: bits do not wear out and need replacement, and memory does
> not get corrupted by cosmic rays (if it were on a space mission).
> I would hope that an AI would have some notion of its own
> fallibility.
>  
> All said and done, despite my criticisms I admire your work and
> your field, and I wish more AI researchers were exposed to it. My
> own research also has programs that alter themselves (although for
> me it is a copy), and I rely on natural selection, so I view the
> Goedel machine as something of a challenge to compete against,
> theoretically at the moment, but hopefully also in the best
> scientific tradition of experiments.
>  

Juergen then replied:

Thanks for the message!
There were a few comments on the AGI list:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg01519.html


One thing we should keep in mind: a GM can formally talk
about uncertain parts of the world! If it has an
axiomatized probability distribution on events that are
not exactly predictable, or if it just knows that the
probability distribution has certain properties,
then it will already be able to make formal
statements about, say, expected future reward. e.g.:
http://www.idsia.ch/~juergen/unilearn.html

Such a GM will do what provably promises
higher _expected_ reward, even when this
might kill it with nonzero probability.


My reply:

I think I see what you are saying. However, in this case shouldn't the
wording of the final part of Theorem 2.1 in your GM paper read more like
the following?

the utility of starting the execution of the present switchprog is
*probably* higher than the utility of waiting for the proof searcher
to produce an alternative switchprog later.

Also, I wonder what probability knowledge you could safely axiomatise
without leading the GM to false conclusions, especially when Goedel
machines might be dealing with humans and also with other GMs.
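
To make the worry concrete, here is a toy calculation of the kind of
expected-utility comparison I mean (this is my own sketch in Python;
the switchprog options, probabilities and rewards are all invented,
and nothing here is taken from the GM paper):

    # Toy expected-utility comparison between executing the current
    # switchprog now and waiting for a later alternative.
    # All numbers below are invented for illustration.

    def expected_utility(outcomes):
        """outcomes: list of (probability, reward) pairs summing to 1."""
        return sum(p * r for p, r in outcomes)

    # Executing now: the transfer to the faster machine pays off only
    # if that machine is still there tomorrow. Suppose the GM's axioms
    # assign this a probability of 0.9.
    execute_now = [(0.9, 100.0), (0.1, -50.0)]  # pay-off vs. wasted transfer
    wait        = [(1.0, 70.0)]                 # keep computing locally

    print(expected_utility(execute_now))  # 0.9*100 - 0.1*50 = 85.0
    print(expected_utility(wait))         # 70.0
    # 85.0 > 70.0, so executing now is *probably* the better choice.

    # But if the axiomatized 0.9 is wrong, and the true probability of
    # the remote machine surviving (humans tamper with it, say) is 0.5:
    true_world = [(0.5, 100.0), (0.5, -50.0)]
    print(expected_utility(true_world))   # 25.0 < 70.0: waiting was better

The point being that the "provably better" verdict is only as good as
the axiomatized distribution: feed the GM a wrong prior about humans or
other GMs, and its provably optimal switchprog can be the worse choice
in the actual world.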

  Will Pearson