--- Russell Wallace <[EMAIL PROTECTED]> wrote:
> So my distinction between S-first and D-first isn't particularly
> relevant to you, because you're not proposing a monolithic AGI;
> you're
> instead proposing a community or marketplace of narrow AI modules
> (some S-oriented, some D-oriented), that will hopefully constitute a
> sort of loosely bound collective intelligence. Would that be an
> accurate paraphrase of your view?

That is correct.  If I wanted to build a monolithic AGI (an artificial
human brain with goals), I would model the S-then-D approach that the
brain uses.  But I don't think that is where the market is: we already
know how to produce humans cheaply.  The expensive part is training
them.

I know the argument that once you build one AGI, making copies is
cheap.  It isn't.  In an organization, every member has a unique job,
so every member must be trained individually.  The costs may be
indirect, e.g. correcting the inevitable novice mistakes, but they are
real.  This is why AGI is expensive: software and training aren't
subject to Moore's Law.


-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/