On Sun, 2002-11-03 at 19:19, Ben Goertzel wrote:
> James Rogers wrote:
> > In practice, the
> > exponent can be
> > sufficiently small (and much smaller than I think most people
> > believe) that
> > it becomes tractable for at least human-level AGI on silicon (my
> > estimate),
> > though it does hit a ramp sooner than later.
> 
> This is an interesting claim you're making, but without knowing the basis
> for your estimate, I can't really comment intelligently.
> 
> I tend to doubt your estimate is correct here, but I'm open-minded enough to
> realize that it might be.  If you ever feel it's possible to share details,
> let me know!


As I recall, the rough estimate using our current architecture put the
memory ramp at somewhere around a trillion neuron equivalents ("neuron
equivalent" being a WAG mapping of structure -- feel free to ignore it).
Somewhere around there (+/- an order of magnitude) it starts to become
fairly expensive in terms of the additional memory required for modest
gains in effective intelligence.

Of course, the machine required to hit this ramp would still be
exceptionally large by today's standards.
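
Just to put a rough number on "exceptionally large", here is a quick
back-of-envelope, where the bytes-per-neuron-equivalent figure is a pure
guess on my part (substitute your own):

    # Back-of-envelope only; bytes_per_equivalent is an assumed figure,
    # not something derived from our actual architecture.
    neuron_equivalents = 1e12       # the rough ramp estimate above
    bytes_per_equivalent = 1e3      # assumed state per neuron equivalent
    total_bytes = neuron_equivalents * bytes_per_equivalent
    print(total_bytes / 1e15)       # 1.0 petabyte under these assumptions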

 
> > There is a log(n) algorithm/structure that essentially does this, and it
> > works nicely using maspar too.  It does have a substantially more complex
> > concept of "meta-program" though.
> 
> What exactly does the program you're referring to do?  And what is your n?
> Is it the same as my L?


I'm referring to a time complexity of log(n), where "n" is essentially
your "L".  A critical difference algorithmically is that the algorithm
selects the optimal program that is currently "knowable" in L (i.e. a
limits-of-prediction problem), rather than the globally optimal program
in L.

You would quite obviously be correct about the tractability if someone
actually tried to brute-force the entire algorithm space in L.  The
knowability factor means that we don't always (hardly ever?) get the
best algorithm, but the system learns and adapts very fast, and this
automatically sieves the L-space into something very tractable.
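
If it helps, here is a toy sketch in Python of the distinction --
emphatically not our algorithm or data structure, just made-up functions
illustrating "globally optimal in L" versus "best currently knowable":

    from itertools import product

    def brute_force_best(L, score):
        # Enumerate every binary program up to length L: O(2^L) programs,
        # which is exactly what makes the global search intractable.
        best = None
        for n in range(1, L + 1):
            for bits in product("01", repeat=n):
                program = "".join(bits)
                if best is None or score(program) > score(best):
                    best = program
        return best

    def knowable_best(candidates, score):
        # Only rank the programs the system has already induced so far.
        # Usually not the global optimum, but cheap, and the candidate
        # pool keeps adapting as the system learns.
        return max(candidates, key=score)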

 
> If your log(n) is the time complexity, what's the corresponding space
> complexity, and how many processors are required?  Exponential in n?  (One
> can do a lot with maspar with an exponential number of processors!!)


Space complexity as a function of L is exponential (or at least the data
structure is), though the exponent is reasonable.  The maspar bit just
means that the algorithm and data structure we do this in are obviously
and naturally suited to massively parallel systems.
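
To give a feel for what a "reasonable" exponent buys you (reading it as a
small growth constant; the bases below are made up, purely for
illustration):

    # Illustration only: exponential growth with a small base stays
    # manageable at sizes where base 2 is hopeless.
    L = 100
    for base in (2.0, 1.1, 1.01):
        print(base, base ** L)      # ~1.3e30 vs ~1.4e4 vs ~2.7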

 
> > More to the point:  I am involved in a commercial venture related to AGI,
> > and the technology is substantially more developed and advanced than I can
> > talk about without lawyers getting involved.  It is sufficiently sexy that
> > it has attracted quite a bit of smart Silicon Valley capital, which is no
> > small feat for any company over the last year or two, never mind
> > any outfit
> > working with "AI".
> 
> yeah, I know your situation (though it's good you mentioned it, so that
> other list members can know too)..


Actually, this situation is a little different.  I've dabbled in the
commercial aspects for some time but always pulled back because I
decided that I wasn't ready.  This one is the real deal business-wise,
and it is relatively recent, not yet at its first birthday.

 
> I assume that your Silicon Valley funding is oriented primarily toward one
> or two vertical-market applications of your technology, rather than oriented
> primarily toward AGI... but that your software is usable in a narrow-AI way
> in the short term, while being built toward AGI in the medium term...


Believe it or not, the people behind the outside funding have a clear
concept of this as an AGI company rather than an application company in
some narrow vertical market, though obviously the initial public
manifestations and demonstrations will actually be vertical applications
that use the AGI technologies.  AGI by itself doesn't do a whole lot --
it mostly just sits around the house drinking my booze. :-)  

A little background: I had some advantage in this in that I've been
around Silicon Valley for over a decade and know quite a few people here
in the capital markets. I have a good reputation here for solving hard
software problems independent of any work on AGI, and I've done quite a
bit of work as a "go-to" engineering problem solver for venture
investors.  Knowing all these venture-market people, I very carefully
filtered and selected who I would involve in this, with a major
criterion being individuals who were smart enough to understand what
they were looking at.  I didn't even want to talk to people who wouldn't
immediately see past vertical applications and recognize the
capabilities of the core technology in itself.  This is the context in
which I sought (and found) backers: people who knew my background and
reputation well enough that they wouldn't wonder whether I had a clue,
and who were smart enough to evaluate the technology themselves without
me having to hard-sell it.

Contrary to some rumors, there are a lot of very smart and
forward-thinking venture funding types in Silicon Valley in addition to
the usual business school idiots.  Some of them can even talk about
algorithmic information theory (the theoretical basis of our technology)
at a shallow level without getting a "deer in the headlights" look.

 
> This is really the same kind of path we are taking with Novamente, doing
> relatively-narrow-AI apps with our codebase in the short term, while we
> build the codebase toward AGI all the while....
> 
> Since the academic & gov't research establishment does not want to fund AGI
> work any more than the corporate world does, this kind of "multiple
> simultaneous agendas" approach seems just about the only way to get the work
> done...


As you mention, it is pretty hard to get proper funding for anything
relating to AGI, especially when it is still early in the R&D stage.
I've actually been working on this AGI technology since the mid-90s,
though I originally got involved at all only because I was trying to
solve a particularly difficult adaptive optimization problem for a
client.  It has been essentially self-funded to this point, and it took
me a long time to develop it to the point where I felt the technology
could be sold in a marketplace that has a very jaded and skeptical view
of "AI".

I'm also an investor/instigator in another venture that has done very
well and has generally made full funding of the AGI venture fairly
certain regardless.  Patience, hard work, and all of that; I planned to
make this happen one way or another. :-)

Cheers,

-James Rogers
 [EMAIL PROTECTED]

