Matt,

On 5/9/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> > > > After many postings on this subject, I still assert that
> > > > ANY rational AGI would be religious.
> > >
> > > Not necessarily.  You execute a program P that inputs the conditions
> > > of the game and outputs "1 box" or "2 boxes".  Omega executes a
> > > program W as follows:
> > >
> > > if P outputs "1 box"
> > >    then put $1 million in box B
> > > else
> > >    leave box B empty.
> > >
> > > No matter what P is, it cannot call W because it would be infinite
> > > recursion.
> >
> >
> > QED this is NOT the program that Omega executes.
>
> No, it is given that Omega never makes a mistake.  Please try again.


My point was that the program advanced here has an obvious bug that has
already been clearly identified: infinite recursion. Obviously our
superintelligent alien won't be using a program that suffers from such an
obvious bug. Hence, the presence of this bug is proof that this is NOT the
program being used.
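
To make the recursion concrete, here is a minimal Python sketch of the
scenario (the names P and W follow Matt's pseudocode; everything else is my
own illustration):

def W():
    # Omega's program: fill box B according to what P outputs.
    return "put $1 million in box B" if P() == "1 box" else "leave box B empty"

def P():
    # Suppose the agent's program tries to simulate Omega by calling W.
    # W() calls P(), which calls W() again, and so on: neither call
    # ever returns.
    W()
    return "1 box"

try:
    P()  # blows the stack
except RecursionError:
    print("infinite recursion, exactly as identified")

Any P that calls W inherits the recursion, which is precisely the bug in
question.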

> > > A rational agent only has to know that there are some things it
> > > cannot compute.  In particular, it cannot understand its own
> > > algorithm.
> >
> >
> > There is a LOT wrapped up in your "only". It is one thing to know
> > that you can't presently compute certain things that you have
> > identified, and quite another to believe that an unseen power changes
> > things that you have NOT identified as being beyond your present
> > (flawed) computational abilities. No matter how extensive your
> > observations, you can NEVER be absolutely sure that you understand
> > anything, and you will in fact fail to understand key details of some
> > things without realizing it. With a good workable explanation of the
> > variances between predicted and actual events (God), of course you
> > will continue to look for less divine explanations, but at exactly
> > what point do you broadly dismiss ALL divine explanations, in the
> > absence of alternative explanations?
>
> Intelligent agents cannot recognize higher levels of intelligence in other
> agents.  We invoke divine explanation (godlike AI) because people have
> trouble accepting mathematical proofs of this statement.
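
For what it's worth, the statement that a program cannot compute its own
algorithm is essentially the classic diagonal argument. A minimal sketch,
where naive_predictor is a hypothetical stand-in for any candidate
self-predictor:

def contrarian(predict):
    # Ask the candidate predictor what this program will output,
    # then output the opposite.
    predicted = predict(contrarian)
    return "2 boxes" if predicted == "1 box" else "1 box"

def naive_predictor(program):
    # A hypothetical predictor that always guesses "1 box".
    return "1 box"

print(contrarian(naive_predictor))  # prints "2 boxes": the prediction fails

Whatever the predictor guesses, contrarian returns the other answer, so no
predictor can be correct about the very program that consults it.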


Which brings me to another favorite topic: heidenbugs. These are "good"
bugs, where the program is working perfectly but the answers are so
unbelievable that they just COULDN'T possibly be right, so debugging
commences. I have wasted several frustrating days chasing heidenbugs, only
to discover that my program is working just fine - and that it is ME who
needs debugging.

I wonder how a super-duper AGI will even get debugged, once the operation
of the program goes beyond the comprehension of its all-too-human
programmers.

Steve Richfield
