The future design space for AGI is many specialized AGIs running on many 
computers, with the feedback from those systems functionally interwoven 
into a new and better AGI. 

Dan G

----------------------------------------------------
From : J. Andrew Rogers <[EMAIL PROTECTED]>
To : agi@v2.listbox.com
Subject : Re: [agi] an AGI by Minsky and Singh
Date : Sun, 12 Jun 2005 10:07:39 -0700
> 
> On Jun 12, 2005, at 2:02 AM, Ben Goertzel wrote:
> > But, your assertion that any competently articulated, competently led 
> > AGI project should be able to fairly easily raise $5M in venture 
> > funding is *also* based on a basket of assumptions, which you didn't 
> > make explicit in your message!
> 
> 
> Yes, very true.  There is no such thing as a context-free opinion. :-)
> 
> However, I also built my assumption on non-AI development and 
> implementation metrics that would apply equally well to AI development 
> and implementation.  The "new" part of AGI development is a new design 
> space from a computer science perspective, but the fundamental 
> mechanics of implementing a new design space will not be that 
> different, and a lot of the ancillary work is well-described.  If it 
> takes a long time to implement, it will be because parts of the AGI 
> design are so poorly described that no one knows if or how they will 
> work.
> 
> The only potential money sink that I consider plausible is very large 
> and exotic hardware, but the necessity of this does not seem apparent 
> to many people actually working on it.  High-end vanilla hardware seems 
> to be what most people require.
> 
> 
> > And my suggestion is that the path from here to AGI is almost 
> > inevitably going to involve a few years of research-oriented 
> > engineering/experimentation prior to any period of more deterministic 
> > product-development-like engineering/tuning.
> 
> 
> The difference in opinion seems to revolve around whether useful 
> products can be spun off from the main technology track as the 
> technology is developed.  While I would agree that this can be a 
> diversion of sorts, a carefully selected mezzanine product target 
> should be reasonably doable.  How feasible it actually is depends to a 
> great extent on the architecture and design.
> 
> 
> > I'm not really sure how you're defining these terms, in this context. 
> > In terms of creating AGI, as far as I'm concerned, even if you're in 
> > "late stage development" of your *software system*, until you've 
> > demonstrated robust human-level AGI behaviors, you're still doing 
> > speculative research....  This is only fair given the demonstrated 
> > difficulty of the AGI problem.  I apply this  to my own work as well 
> > as yours and anyone else's....
> 
> 
> I was referring to the ability to demonstrate robust AGI-ish behaviors 
> in implementation.  It does not have to be a completely implemented or 
> solved system if one can demonstrate genuinely new capabilities -- this 
> will have intrinsic business value, AGI or not.
> 
> In other words, the pitch should be no less than "we can deliver this 
> wicked coolness *right now*, and with some additional funding we can 
> greatly extend the envelope to more wicked coolness".  The problem is 
> that the initial demonstration of "wicked coolness" has to be a clear 
> differentiator from other half-baked AI ideas, most of which claim to 
> show some type of vague novelty very early on.  It is not easy.
> 
> 
> > And, VC's criteria for "indistinguishability" in this context are 
> > generally quite crude...
> 
> 
> Heh, yes.  The problem of education is very real and there is 
> relatively little one can do about this.  Hence the value of having a 
> bright shiny object for them to fixate on immediately.
> 
> Very few people grok the current theory space (which is somewhat 
> independent of personal theoretical biases), and unlike nanotech, the 
> field is neither straightforward nor obvious from basic principles that 
> everyone understands.  For almost everyone, it really *is* a crap 
> shoot.
> 
> cheers,
> 
> j. andrew rogers
> 

