On 11/3/02 4:26 AM, "Ben Goertzel" <[EMAIL PROTECTED]> wrote:
> Hutter's work draws on a long tradition of research into statistical
> learning theory and algorithmic information theory, most notably
> Solomonoff's early work on induction and Levin's work on computational
> measure theory.   At the present time, though, this work is more exciting
> theoretically than pragmatically.  The "constant factor" in his theorem may
> be very large, so that in practice, AIXItl is not really going to be a good
> way to create an AGI software program.  In essence, what AIXItl is doing is
> searching the space of all programs of length L, evaluating each one, and
> finally choosing the best one and running it.  The "constant factors"
> involved deal with the overhead of trying every other possible program
> before hitting on the best one!

I am very aware of these issues.  The tractability problem is implicit in
the math, and Hutter's framing suggests a really ugly one, in no small part
due to an exponential resource take-off.  In practice, though, it isn't as
bad as it reads: the exponent can be sufficiently small (and much smaller
than I think most people believe) that the approach becomes tractable for
at least human-level AGI on silicon (my estimate), though it does hit a
ramp sooner rather than later.
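To make the "constant factor" concrete: the cost Ben describes comes from enumerating every program up to length L before settling on the best one.  A toy count of that search space (binary programs only, an illustrative simplification) shows the exponential take-off directly:

```python
# Illustrative only: count the binary programs of length 1..L that a
# brute-force AIXItl-style search would have to enumerate and evaluate.
# The 2^(L+1) growth is the "exponential resource take-off" at issue.

def programs_up_to(L: int) -> int:
    """Number of distinct bitstrings of length 1..L (= 2^(L+1) - 2)."""
    return sum(2 ** k for k in range(1, L + 1))

for L in (8, 16, 32):
    print(L, programs_up_to(L))
```

Even at L = 32 the count is already past four billion candidates, which is why the theorem's constant factor dominates any practical deployment.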

> A simple AI system behaving somewhat similarly to AIXItl could be built by
> creating a program with three parts:
> -    The data store
> -    The main program
> -    The metaprogram
> The operation of the metaprogram would be, loosely, as follows:
> -    At time t, place within the data store a record containing: the complete
> internal state of the system, and the complete sensory input of the system.
> -    Search the space of all programs P of size |P| < L to find the one that,
> based on the data in the data store, has the highest expected value for the
> given maximization criterion.
> -    Install P as the main program.

There is a log(n) algorithm/structure that essentially does this, and it
works nicely on massively parallel (maspar) hardware too.  It does have a
substantially more complex concept of "meta-program", though.
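For concreteness, the brute-force metaprogram loop quoted above can be sketched as follows.  This is a toy with hypothetical names (`evaluate`, `data_store`), not anyone's actual implementation; a real AIXItl-style search would also bound each candidate's running time, which is elided here:

```python
# Sketch of the quoted three-part design: data store, main program,
# metaprogram.  Candidate "programs" are just bitstrings for illustration.
from itertools import product

def enumerate_programs(max_len, alphabet=(0, 1)):
    """Yield every candidate program (bitstring) with length <= max_len."""
    for length in range(1, max_len + 1):
        yield from product(alphabet, repeat=length)

def metaprogram_step(data_store, evaluate, max_len):
    """Search all programs of size <= max_len; return the best one
    ("install P as the main program")."""
    best_program, best_score = None, float("-inf")
    for program in enumerate_programs(max_len):
        # Expected value of the maximization criterion, judged against
        # the records accumulated in the data store.
        score = evaluate(program, data_store)
        if score > best_score:
            best_program, best_score = program, score
    return best_program

# Toy usage: score a program by its number of 1-bits.
best = metaprogram_step([], lambda p, _store: sum(p), 3)
print(best)
```

The point of the sketch is only that the outer loop visits every program once per step, which is exactly where the intractable constant factor lives.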

> Conceptually, the main value of this approach for AGI is that it solidly
> establishes the following contention:
> **If you accept any definition of intelligence of the general form
> "maximization of a certain function of system behavior,"
> then the problem of creating AGI is basically a problem of dealing with the
> issues of space and time efficiency.**
> As with any mathematics-based conclusion, the conclusion only follows if one
> accepts the definitions.  If someone's conception of intelligence
> fundamentally can't be cast into the form of a behavior-based maximization
> criterion, then these ideas aren't relevant for AGI as that person conceives
> it.  However, we believe that the behavior-based maximization criterion
> approach to defining intelligence is a good one, and hence we believe that
> Hutter's work is highly significant.

I agree with this.  In complex environments, any usefully adaptive system
will be balancing the time and space requirements of a bevy of maximization
criteria, which themselves will be constantly adapting at the meta-level.
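The behavior-based definition being agreed to here can be phrased as a tiny selection rule: among candidate policies, prefer the one with the highest expected value of a criterion applied to the system's behavior.  Everything below (the policies, the environments, the criterion U) is an invented toy, only meant to show the shape of the definition:

```python
# Hedged illustration of "intelligence as maximization of a function of
# system behavior": pick the candidate policy with the highest expected
# value of a behavior criterion over sampled environments.
import random

def expected_value(policy, criterion, environments, trials=100):
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    total = 0.0
    for _ in range(trials):
        env = rng.choice(environments)
        behavior = policy(env)          # the system's observable behavior
        total += criterion(behavior, env)
    return total / trials

def best_policy(policies, criterion, environments):
    """The maximization step: argmax over policies of expected criterion."""
    return max(policies, key=lambda p: expected_value(p, criterion, environments))

# Toy usage: reward a policy for echoing its environment.
match_env = lambda b, e: 1.0 if b == e else 0.0
winner = best_policy([lambda e: e, lambda e: 0], match_env, [1, 2, 3])
```

Balancing "a bevy" of such criteria, as the paragraph above says, amounts to running many instances of this argmax under shared time and space budgets.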

> Well, I think their work is of limited practical value for the reasons I
> mention above, but, you're obviously hinting at something else.  But since
> you won't tell us, it's not a very interesting topic of conversation huh ;)

More to the point:  I am involved in a commercial venture related to AGI,
and the technology is substantially more developed and advanced than I can
talk about without lawyers getting involved.  It is sufficiently sexy that
it has attracted quite a bit of smart Silicon Valley capital, which is no
small feat for any company over the last year or two, never mind any outfit
working with "AI".


-James Rogers
