[agi] Re: JAGI submission

2008-11-25 Thread Eliezer Yudkowsky
On Mon, Nov 24, 2008 at 4:20 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 I submitted my paper "A Model for Recursively Self Improving Programs" to 
 JAGI and it is ready for open review. For those who have already read it, it 
 is essentially the same paper except that I have expanded the abstract. The 
 paper describes a mathematical model of RSI in closed environments (e.g. 
 boxed AI) and shows that such programs exist in a certain sense. It can be 
 found here:

 http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx

*Thud.*

This was an interesting attempt to define RSI and I really thought you
were going to prove something interesting from it.

And then, at the last minute, on the last page - *thud*.

Shane Legg, I don't mean to be harsh, but your attempt to link
Kolmogorov complexity to intelligence is causing brain damage among
impressionable youths.

( Link debunked here:

  http://www.overcomingbias.com/2008/11/complexity-and.html )

-- 
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence




Re: [agi] Cog Sci Experiment

2008-11-22 Thread Eliezer Yudkowsky
I thought "crocodile."  Go figure.

On Sat, Nov 22, 2008 at 12:05 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 I don't get it. I thought of "tomato" (but technically it is a fruit).

 -- Matt Mahoney, [EMAIL PROTECTED]

 --- On Fri, 11/21/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: [agi] Cog Sci Experiment
 To: agi@v2.listbox.com
 Date: Friday, November 21, 2008, 7:35 PM

 Forgive me if you've seen this, but here's a curious (v. brief) mental
 association experiment ("Freaky Math Trick").

 Please do it first before reading on:

 http://www.youtube.com/watch?v=pCq3NFEB2bc&feature=related

 My question is: how do they know your vegetable association? The best I can
 come up with is that the penultimate association, (1+7=) EIGHT, suggests GRATE,
 which is what you do to a carrot. Any better explanations? And, whatever the
 explanation, how does the brain (or your brain) make the relevant
 associations from EIGHT to CARROT?
 



-- 
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence




Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Eliezer Yudkowsky
Brad Wyble wrote:
On Sun, 24 Oct 2004, J.Andrew Rogers wrote:
This does not follow.  You can build arbitrarily complex machines with 
a very tiny finite control function and plenty of tape.  The 
complexity of AI as an algorithm and design space is not in the same 
class as the complexity of an instance of human-level AI, even though 
the latter is just the former given some state space to play with.
But that machine cannot fully understand/represent *itself*.  The longer 
the tape of its own understanding gets, the more tape it needs to 
represent itself, etc etc.
The Godel statement represents itself, completely, via diagonalization.
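
For concreteness, the standard construction that "diagonalization" refers to 
here:

   Diagonal lemma: for any formula F(x) with one free variable, there is a
   sentence D such that
       PA |- D <-> F('D'),
   where 'D' is the Godel number of D.  Taking F(x) = ~Provable(x) gives the
   Godel sentence G, with PA |- G <-> ~Provable('G'): a sentence of fixed,
   finite length that refers to itself exactly, with no regress of ever-longer
   self-descriptions.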
--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


Re: [agi] A difficulty with AI reflectivity

2004-10-21 Thread Eliezer Yudkowsky
Ben Goertzel wrote:

Leaving aside other qualms I have about the Gödel Machine, it appears 
to me that for the Gödel Machine to execute a change that switches to 
a new theorem-prover or proof-verifier, the Gödel Machine must prove 
that if the proof-verifier accepts P, then P.
Hmmm... I haven't read that paper for a while, but my intuitive response
is that you're asking the Godel machine to do more than it needs to do.
Doesn't it just have to prove that:
"If the proof verifier accepts P, then either P is true or the old 
proof-verifier would have accepted P"

or something like that?
Wei Dai wrote:
I think you probably misunderstood the paper. The purpose of searching
for new theorem provers is to speed up the theorem proving, not to be
able to prove new theorems that were unprovable using the old theorem
prover.  So the Gödel Machine only needs to prove that the new prover
will prove a theorem if and only if the current prover will. I don't
think this should be a problem.
It looks to me like this would require a special amendment to Schmidhuber's 
architectural specification - Schmidhuber spoke of proving only that the 
code modification had greater expected utility.  Even if you proved the new 
system would do exactly what the old system did, only faster, it would not 
follow that the new system was an *improvement* - it might be generating 
dreadfully wrong answers even faster - unless you included some kind of 
assumption about the system working.  For example, suppose that we start 
with a Godel Machine that, meta-level aside, starts with an object-level 
program that repeatedly presses a button delivering the equivalent of a 
nasty electrical shock.  If the meta-level speeds up the object level so 
that we get nasty electrical shocks twice as fast, this is not an expected 
utility improvement.  If you build the assumption into the meta-system that 
the object-system works, this is good knowledge if the object-system *does* 
work, but not otherwise.  Correspondingly, if the meta-system does work, 
but the meta-system cannot prove that the meta-system works, you will need 
to include an assumption in the meta-system that the meta-system works. 
Even presuming the basic strategy is correct, the next question is how to 
do this without trashing the meta-system due to Godel and Lob.
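
Schematically - writing Prov_old and Prov_new for the two verifiers'
provability predicates, names I am using purely for illustration:

   What the speed-up criterion gives:
       |- Prov_new('P') <-> Prov_old('P')     for each sentence P
       (same theorems, reached faster)
   What the expected-utility argument additionally needs, in some form:
       |- Prov_old('P') -> P                  (the old system actually works)
   and by Lob's Theorem the old system can prove Prov_old('P') -> P only for
   those P it could already prove outright.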

One way that might succeed in salvaging the Godel-Machine formalism would 
be to isolate the rewriting of the theorem verifier from the rewriting of 
the rest of the Godel Machine.  When rewriting the remainder of the Godel 
Machine, we try to prove that the remainder rewrite improves expected 
utility, assuming that the verifier admits correct object-level theorems 
(I'm not sure you can get away with this).  When rewriting the verifier, 
the verifier has to pass a different test: proving that the new verifier 
accepts only theorems that are admissible under the old verifier.  I.e., 
any acceptable proof according to the new verifier is an acceptable proof 
according to the old verifier.  We would not concern ourselves with whether 
the proofs are correct - no object-level questions of expected utility - 
just with proving an embedding of the new formal system in the old formal 
system.  But this requires a compartmentalization of the Godel Machine; it 
can no longer rewrite every part of itself using the same criterion of 
success, and there must be several hardcoded, non-modifiable aspects of the 
architecture (such as the interface between the proof verifier and the rest 
of the system).  This would weaken Schmidhuber's original claim that the 
Godel Machine was fully self-modifying.  It would also require some kind of 
diagonalization in the old formal system so that it can possess a full copy 
of itself, and prove embedding of the new formal system in the old formal 
system plus a copy of itself as an embedding criterion.  You'd have to 
prove an embedding of a new diagonalized formal system in the old 
diagonalized formal system.  This is all new math that doesn't appear in 
Schmidhuber's original paper, nor anywhere else I have yet encountered in 
reading about reflective formal systems.  And since I have not formalized 
this entire argument, the whole thing could contain some catastrophic flaw 
that isn't apparent in verbal discussion.
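
The verifier-rewrite test described above, in schematic form (Accept_old and
Accept_new are illustrative names for the old and new verifiers, not notation
from Schmidhuber's paper):

   The old verifier must check a proof of:
       for all proofs p and sentences S:
           Accept_new(p, 'S')  ->  there exists p' such that Accept_old(p', 'S')
   i.e. every sentence admissible under the new verifier was already
   admissible under the old one.  No claim that admissible sentences are
   *true* is made at this step, which is what lets it dodge the Lobian
   obstacle; the price is that this criterion itself must stay hardcoded.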

Hence my request to see if there was anything on this in the existing 
literature.

I would not be satisfied with the solution as given above for a true seed 
AI.  It seems to me that the difficulty with the Godel Machine reflects a 
very deep AI challenge, and I gave the example of the Godel Machine only to 
clarify the nature of the challenge, not to ask how to patch the Godel 
Machine specifically.  Hardcoding that all new proof-verifiers must embed 
their admissible sentences in the set of admissible sentences of the old 
proof-verifier is an interesting solution, but not as interesting as, say, 
an AI that could ponder the relative merit of intuitionistic versus 
classical 

[agi] A difficulty with AI reflectivity

2004-10-19 Thread Eliezer Yudkowsky
I've recently been pondering a difficulty with self-modifying AI.  I shall 
illustrate with Juergen Schmidhuber's Gödel Machine, but the problem 
generalizes.

http://www.idsia.ch/~juergen/goedelmachine.html
Juergen Schmidhuber's self-rewriting Gödel Machine is based on the notion 
of a machine that executes a self-rewrite as soon as it can produce a proof 
that the self-rewrite is an improvement.  This self-rewriting can extend to 
rewriting the theorem prover or theorem checker.  Leaving aside other 
qualms I have about the Gödel Machine, it appears to me that for the Gödel 
Machine to execute a change that switches to a new theorem-prover or 
proof-verifier, the Gödel Machine must prove that if the proof-verifier 
accepts P, then P.  Otherwise there is no way for the Gödel Machine to 
prove the new proof-verifier is useful, i.e., no way to expect results 
accepted by the verifier to be useful.

Example:  For the Gödel Machine's current theorem-prover to accept a 
proposed new theorem-prover might require establishing that if the new 
theorem-prover accepts a proof that a proposed newer theorem-prover is 
useful, then the newer theorem-prover is useful.  Part of my 
dissatisfaction with the Gödel Machine is that Schmidhuber does not specify 
exactly what is meant by "useful", saying only that it has to do with 
expected utility.  But suppose we have an expected-utility problem that 
involves maximizing the number of received apples.  To replace the current 
proof system with a new proof system would presumably involve proving that 
"if the new proof system accepts a rewrite as useful for increasing 
apples, that rewrite is useful for increasing apples."  The verifier has to 
translate a meta-language result into an object-language result.
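
In schematic form (with Useful as a stand-in predicate, since Schmidhuber
does not pin the notion down, and Accept_new naming the candidate verifier,
both purely for illustration):

   The current system would have to prove something like:
       Accept_new('Useful(R)')  ->  Useful(R)        for rewrites R
   which is a soundness (reflection) schema for the new verifier: a
   meta-level fact about what is provable yielding an object-level fact
   about apples.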

Löb's Theorem establishes that for any proof system capable of encoding its 
own proof process,

|- Provable('P') -> P
  if and only if
|- P.
If you can prove "if P is provable then P", you can prove P.
http://www.sm.luth.se/~torkel/eget/godel/theorems.html
http://www.sm.luth.se/~torkel/eget/godel/loeb.html
In the same way that Gödel's Theorem formalizes the ancient Epimenides 
Paradox, "This sentence is false", Löb's Theorem formalizes Haskell Curry's 
Santa Claus Paradox, "If this sentence is true then Santa Claus 
exists."  (Consider: *if* the sentence *were* true, then the antecedent 
would be true, and hence Santa Claus would exist, right?  So if the 
sentence were true, Santa Claus would exist?  This is precisely what the 
sentence asserts, and therefore the sentence is true and Santa Claus does 
exist.)  Löb's Theorem eliminates the self-reference in the Santa Claus 
Paradox via a diagonalization argument, as Gödel did with the Epimenides 
Paradox.  The upshot is that if you can prove that P's provability implies P, 
the Santa Claus argument carries through and you can prove P.  If you could 
prove, in Peano Arithmetic, that a proof of P!=NP in Peano Arithmetic would 
imply P!=NP, you could use Löb's construction to prove P!=NP.
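
For reference, the standard derivation behind this, using the
Hilbert-Bernays derivability conditions and writing []X for Provable('X'):

   Assume |- []P -> P.  By the diagonal lemma, pick L with |- L <-> ([]L -> P).
     1.  |- L -> ([]L -> P)              (fixed point, left to right)
     2.  |- [](L -> ([]L -> P))          (necessitation on 1)
     3.  |- []L -> []([]L -> P)          (distribution on 2)
     4.  |- []L -> ([][]L -> []P)        (distribution inside 3)
     5.  |- []L -> [][]L                 (derivability condition)
     6.  |- []L -> []P                   (4, 5)
     7.  |- []L -> P                     (6 plus the assumption)
     8.  |- L                            (7 plus fixed point, right to left)
     9.  |- []L                          (necessitation on 8)
    10.  |- P                            (7, 9)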

Tarski showed, by a similar diagonalization, that no consistent language can 
contain its own truth predicate.  If you want to talk about the truth of 
statements in an object language, you need a meta-language that cannot talk 
about the truth of the meta-language.  You can say "The sky is blue" and 
"'The sky is blue' is true", but not "''The sky is blue' is true' is true", 
unless you invent a new concept of truth, true-1, to describe meta-language 
truths in a meta-meta-language.

http://www.ditext.com/tarski/tarski.html
In more familiar terms, it would seem that Schmidhuber's Gödel Machine must 
prove a new proof system is consistent in order to accept it, and that runs 
smack dab into Gödel's second incompleteness theorem.  Any such system that 
can prove its own consistency is inconsistent.
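
The connection to Löb's Theorem is direct: take P to be a contradiction.

   []FALSE -> FALSE  is just Con(T), the statement that no contradiction is
   provable in T.  So if T proved its own consistency, i.e. T |- []FALSE -> FALSE,
   Löb's Theorem would give T |- FALSE, and T would be inconsistent.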

Maybe the Gödel Machine's first proof system would start with ZFC set 
theory.  ZFC suffices to prove the consistency of Peano Arithmetic, and 
might accept a rewrite implementing a proof verifier that accepted PA 
proofs - but what about the next iteration of self-rewriting?  The Peano 
prover would not accept another Peano prover.  The same problem holds for 
starting off the Gödel Machine with an axiom about the consistency of Peano 
Arithmetic, a further axiom about the consistency of the new system PA+1, 
et cetera.  Even if we assert infinite levels of consistency in PA+ω, it 
doesn't help.  Just as every descending sequence of ordinals must 
eventually terminate, every sequence of self-rewrites would need to 
eventually terminate.
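
Concretely, on this framing, the tower the self-rewrites would have to walk
down looks like this (PA+1 and PA+ω as in the text; PA+2 and the reading of
PA+ω as PA plus all the finite consistency statements are my illustration):

   PA+1 = PA + Con(PA),   PA+2 = PA+1 + Con(PA+1),   ...,
   PA+ω = PA + { Con(PA+n) : all finite n }
   A verifier trusting PA+(n+1) can sign off on a successor running PA+n but
   not on another PA+(n+1); a verifier trusting PA+ω can sign off on any
   PA+n but not on another PA+ω.  Each rewrite steps down the tower, and,
   like any descending sequence of ordinals, the descent must end.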

The difficulty with Schmidhuber's Gödel Machine concerns me because the 
problem with Löb's Theorem seems to generalize to other foundations for 
self-rewriting AI.  Even the simplest consistency under reflection of a 
self-modifying AI would seem to require the AI to believe that if its 
processes assert P (that is, if AI |- P), then that is evidence supporting P.  If the AI 
looks at its reflected processes, it will find that the meaning of belief 
P, the causal