On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >I hope not to sound like a broken record here ... but ... not every
> >narrow AI advance is actually a step toward AGI ...
>
> It is if AGI is billions of narrow experts and a distributed index to get
> your messages to the right ones.
>
> I understand your objection that it is way too expensive ($1 quadrillion),
> even if it does pay for itself. I would like to be proved wrong...


IMO, that would be a very interesting AGI, yet not the **most** interesting
kind, due to its primarily heterarchical nature ... the human mind has this
sort of self-organized, widely-distributed aspect, but also a more
centralized, coordinated control aspect.  I think an AGI which similarly
combines these two aspects will be much more interesting and powerful.  For
instance, your proposed AGI would have no explicit self-model, and no
capacity to coordinate a large percentage of its resources into a single
deliberative process ...  It's much like what Francis Heylighen envisions
as the "Global Brain."  Very interesting, yet IMO not the way to get the
maximum intelligence out of a given amount of computational substrate...


ben g



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/