James Rogers wrote:
> I am very aware of these issues.  The tractability issue isn't as bad as it
> seems, though it is implicit in the math.  Hutter strongly implies a really
> ugly tractability problem, in no small part due to an exponential resource
> take-off, but it isn't as bad as it reads.  In practice, the exponent can be
> sufficiently small (and much smaller than I think most people believe) that
> it becomes tractable for at least human-level AGI on silicon (my estimate),
> though it does hit a ramp sooner than later.

This is an interesting claim you're making, but without knowing the basis
for your estimate, I can't really comment intelligently.

I tend to doubt your estimate is correct here, but I'm open-minded enough to
realize that it might be.  If you ever feel it's possible to share details,
let me know!
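Just for scale, here's a back-of-envelope on why the naive reading of Hutter looks so ugly: even before you evaluate anything, the raw space of bitstring programs up to length L is exponential in L (illustrative arithmetic only, not a claim about your exponent):

```python
def search_space_size(L):
    """Number of distinct bitstring programs of length 1..L:
    2^1 + 2^2 + ... + 2^L, a geometric series summing to 2^(L+1) - 2."""
    return 2 ** (L + 1) - 2

# Even tiny size bounds blow up fast:
for L in (8, 16, 32):
    print(L, search_space_size(L))
```

The point being: any tractable variant has to prune or structure this space rather than enumerate it, which is presumably where your small-exponent claim lives.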

> > A simple AI system behaving somewhat similar to AIXItl could be built by
> > creating a program with three parts:
> >
> > .    The data store
> > .    The main program
> > .    The metaprogram
> >
> > The operation of the metaprogram would be, loosely, as follows:
> >
> > .    At time t, place within the data store a record containing: the
> > complete internal state of the system, and the complete sensory input of
> > the system.
> >
> > .    Search the space of all programs P of size |P| < L to find the one
> > that, based on the data in the data store, has the highest expected value
> > for the given maximization criterion
> > .    Install P as the main program
>
> There is a log(n) algorithm/structure that essentially does this, and it
> works nicely using maspar too.  It does have a substantially more complex
> concept of "meta-program" though.

What exactly does the program you're referring to do?  And what is your n?
Is it the same as my L?

If your log(n) is the time complexity, what's the corresponding space
complexity, and how many processors are required?  Exponential in n?  (One
can do a lot with maspar with an exponential number of processors!!)

Details perhaps??  I'm interested but don't fully get it yet...
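For concreteness, here's a minimal Python sketch of the naive metaprogram loop as I read the description quoted above -- "programs" are just toy observation-to-action tables, the scoring function is a stand-in for the real expectimax, and every name here is hypothetical; obviously nothing like whatever your log(n) structure is doing:

```python
import itertools

# Toy instantiation of the three-part scheme: data store, main program,
# metaprogram.  Programs are lookup tables observation -> action; the
# "expected value" is estimated by replaying recorded history.
ACTIONS = [0, 1]
OBSERVATIONS = [0, 1]

def enumerate_programs(L=1):
    """Yield every observation -> action table.  L is kept only to mirror
    the |P| < L bound in the spec; this toy fixes the space instead.  In
    the real scheme the space grows exponentially with L."""
    for actions in itertools.product(ACTIONS, repeat=len(OBSERVATIONS)):
        yield dict(zip(OBSERVATIONS, actions))

def expected_value(program, data_store):
    """Score a candidate against recorded (observation, best_action,
    reward) triples -- a crude stand-in for the maximization criterion."""
    return sum(reward for obs, best_action, reward in data_store
               if program[obs] == best_action)

def metaprogram_step(data_store, L=1):
    """One metaprogram cycle: search the bounded program space and
    install the argmax as the new main program.  (Recording state and
    sensory input into the data store is the caller's job.)"""
    return max(enumerate_programs(L),
               key=lambda p: expected_value(p, data_store))

# Toy run: an environment that rewards action == observation.
data_store = [(o, o, 1) for o in (0, 1, 0, 1)]
main_program = metaprogram_step(data_store)
print(main_program)  # picks the identity policy: {0: 0, 1: 1}
```

The brute-force max over enumerate_programs is exactly the part that explodes, so I'm curious which piece of this your log(n) structure replaces -- the search, the scoring, or both.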

> More to the point:  I am involved in a commercial venture related to AGI,
> and the technology is substantially more developed and advanced than I can
> talk about without lawyers getting involved.  It is sufficiently sexy that
> it has attracted quite a bit of smart Silicon Valley capital, which is no
> small feat for any company over the last year or two, never mind any outfit
> working with "AI".

Yeah, I know your situation (though it's good you mentioned it, so that
other list members can know too)...

I assume that your Silicon Valley funding is oriented primarily toward one
or two vertical-market applications of your technology, rather than oriented
primarily toward AGI... but that your software is usable in a narrow-AI way
in the short term, while being built toward AGI in the medium term...

This is really the same kind of path we are taking with Novamente, doing
relatively-narrow-AI apps with our codebase in the short term, while we
build the codebase toward AGI all the while....

Since the academic & gov't research establishment does not want to fund AGI
work any more than the corporate world does, this kind of "multiple
simultaneous agendas" approach seems just about the only way to get the work
done....
- Ben G
