Richard,

Though we have theoretical disagreements, I largely agree with your
analysis of the value of prototypes for AGI.

Experience has shown repeatedly that prototypes displaying "apparently
intelligent behavior" in various domains are very frequently dead-ends,
because they embody various sorts of "cheating."

And, if AGI really is a complex emergent phenomenon that requires a
certain sort of large, complex system in order to come about, then one
would not expect any kind of cheap, small-scale prototype to be able to
demonstrate it.

As large-scale funding requires impressive prototypes, one is then faced
with the irritating task of creating prototypes that fulfill two largely
unrelated goals:

-- looking impressive to investors who want to see prototypes
-- actually being meaningful steps on the path to AGI

It is actually surprisingly difficult to find ways to fulfill these two
goals at the same time.  I'm hoping we'll be there, with Novamente,
sometime in late 2008 or early 2009.  I don't think our initial virtual
animals will be impressive enough as AGI to qualify -- though they'll be
really cool teachable animals!! -- but I think virtual agents with a
language-learning facility will pass that threshold...

But even once we get there (assuming we do), my own faith in the system
being shown off as a path to AGI will be largely uncorrelated with its
impressiveness as a prototype...

ben




On Nov 17, 2007 1:41 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Dennis Gorelik wrote:
> > Jiri,
> >
> > Give $1 for the research to who?
> > A research team can easily eat millions of dollars without producing
> > any useful results.
> > If you just randomly pick researchers for investment, your chances of
> > getting any useful outcome from the project are close to zero.
> >
> > The best investing practice is to invest only in teams that have
> > already produced a working prototype.
> > Serious funding is usually helpful only for scaling a prototype up.
> > (See how it worked out for Google, for example.)
> >
> > So far there is no working prototype of AGI, so there is no point in
> > investing.
> >
> > On the other hand, some narrow AI teams have already produced useful
> > results. Such teams deserve investment.
> > When the narrow AI field is mature enough, making the next step to AGI
> > will be possible for a self-funded AGI research team.
>
> Although this seems like a reasonable stance, I don't think it is a
> strategy that will lead the world to the fast development (or perhaps
> any development) of a real AGI.  Allow me to explain why I think this.
>
> I agree you would not just pick researchers at random, but on the other
> hand if you insist on a team with a "working prototype" this might well
> be a disaster.
>
> I am in a position to use massive investment straight away (and I have a
> project plan that says how), but the specific technical analysis of the
> AGI problem that I have made indicates that nothing like a 'prototype'
> is even possible until after a massive amount of up-front effort.  There
> are things we can do ahead of time (and some of those are underway), but
> if anyone asks for a prototype that does some fraction of the task, I
> can only point to the technical analysis and ask the investor to
> understand why this is not possible.
>
> Catch 22.  No prototype, no investment;  no investment, no prototype.
>
> Investors are leery of "sorry, no prototype!" claims (with good reason,
> generally) but they are also not tech-savvy enough to comprehend the
> technical analysis that tells them that they should make an exception in
> this case.  And even worse, the technical community (for reasons I have
> explained, to general annoyance ;-) ) has reasons for disliking the
> particular technical analysis I have offered.
>
> If I turn out to be right in my analysis, none of the people who have
> what they claim to be prototypes will actually reach the goal of a
> viable AGI.  (They disagree, of course!).
>
>
>
> Richard Loosemore
>
> >
> > Wednesday, October 31, 2007, 11:50:12 PM, you wrote:
> >
> >> I believe AGI does need promoting. And it's IMO similar with the
> >> immortality research some of the Novamente folks are involved in. It's
> >> just unbelievable how much money (and other resources) are being spent
> >> on all kinds of nonsense/insignificant projects worldwide. I wish
> >> every American gave just $1 for AGI and $1 for immortality research.
> >> Imagine what this money could do for all of us (if used wisely).
> >> Unfortunately, people would rather spend the money on popcorn at the
> >> cinema.
> >
> >
> >> Godlike intelligence? :) Ok, here is how I see it: If we survive, I
> >> believe we will eventually get plugged into some sort of pleasure
> >> machine and will not care about intelligence at all. Intelligence
> >> is a useless tool when there are no problems and no goals to think
> >> about. We don't really want any goals/problems in our minds.
> >> Basically, the goal is to not have goal(s) and to safely experience
> >> pleasure as intense as the available design allows for as long as
> >> possible. AGI could eventually be tasked with taking care of all that
> >> that requires, plus searching for system improvements and for things
> >> that an altered human mind might consider even better than feelings
> >> as we know them now. Many might think that they love someone so much
> >> that they would not tell him/her "bye" and get plugged into a pleasure
> >> machine, but I'm pretty sure they would change their minds after the
> >> first trial of a well-designed device of that kind. That's how I
> >> currently see the best possible future. Some people, when talking
> >> about advanced aliens, ask "Where are they?" Possibly, they are in
> >> such a pleasure machine and don't really care about anything, feeling
> >> like true gods in a world where concepts like intelligence are totally
> >> meaningless.
> >
> >
> > -----
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?&;
> >
> >
>
