Yes, of course this is true ... systems need to have a certain minimum
level of intelligence in order to self-improve in a goal-directed way!!

I said I didn't want to take time to formulate my point (which to me is
extremely intuitively obvious) as a theorem with all conditions explicitly
stated, and I still don't ;-)

If some folks want to believe that self-modifying AGI is not possible,
that's OK with me.  Lots of folks believed human flight was not possible
either, and there were even attempts at mathematical/theoretical proofs of
this.  Fortunately the Wright Brothers spent their time building planes
rather than laboriously poking holes in the intuitively-obviously-wrong
supposed-impossibility-proofs of what they were doing...

ben g

On Thu, Oct 16, 2008 at 11:38 AM, Tim Freeman <[EMAIL PROTECTED]> wrote:

> From: "Ben Goertzel" <[EMAIL PROTECTED]>
>
> >On the other hand, if you insist on mathematical definitions of
> >intelligence, we could talk about, say, the intelligence of a system
> >as the "total prediction difficulty of the set S of sequences, with
> >the property that the system can predict S during a period of time of
> >length T".  We can define prediction difficulty as Shane Legg does in
> >his PhD thesis.  We can then average this over various time-lengths T,
> >using some appropriate weighting function.
> ...
> >Using this sort of definition, a system A2 that is twice as smart as
> >system A1, if allowed to interact with an appropriate environment
> >vastly more complex than either of the systems, would surely be
> >capable of modifying itself into a system A3 that is twice as smart as
> >A2.
>
> Probably not true, as stated.  As you said, the dog (A2) is smarter
> than the roach (A1).  If that's not true for the mathematical
> definition of intelligence you give above, that's a bug in the
> definition.  The dog is not capable of interesting self-modification;
> it will never construct an A3.
>
> >This seems extremely obvious and I don't want to spend time right now
> >proving it formally.  No doubt writing out the proof would reveal
> >various mathematical conditions on the theorem statement...
>
> You at least need a certain minimum level of intelligence for A1 for
> it to work.  I don't know what that level is.  In the interesting
> case, it's a little less than the best humans, and in the
> uninteresting case, it's orders of magnitude more and we'll never get
> there.  I find it hard to believe that you'll derive a specific
> intelligence level from a mathematical proof, so I think you're at
> best talking about a hopefully-someday empirical result rather than
> something that could be proved.
>
> (I'm not following the larger argument that this is a part of, so I
> have no opinion about it.)
> --
> Tim Freeman               http://www.fungible.com
> [EMAIL PROTECTED]
>
>
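
P.S. For anyone who wants the definition quoted above in a slightly more
concrete form, here is a rough sketch, purely as an illustration:
prediction_difficulty stands in for Legg's measure from his thesis, while
predictable_sequences and the weighting w are placeholders for choices that
would have to be pinned down in any real formalization.

    # Illustrative sketch of the quoted definition, not a worked-out formalization.
    # prediction_difficulty(s): stand-in for Legg-style prediction difficulty.
    # predictable_sequences(system, T): assumed to return the set S of sequences
    #   the system can predict during a period of time of length T.
    # w(T): some appropriate weighting function over time-lengths.
    def intelligence(system, time_lengths, w,
                     predictable_sequences, prediction_difficulty):
        """Weighted average over T of the total prediction difficulty of S."""
        total = 0.0
        for T in time_lengths:
            S = predictable_sequences(system, T)
            total += w(T) * sum(prediction_difficulty(s) for s in S)
        return total / sum(w(T) for T in time_lengths)

The "twice as smart" comparisons among A1, A2 and A3 would then presumably
be ratios of this quantity, under whatever conditions a formal theorem
statement turned out to require.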



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


