Alex Richter wrote:
> IMHO, it's better to do (paid) small steps from normal software to
> narrow-AI to AGI, than a big jump.

Well, I respect your opinion, but I don't think we entirely agree...

I'm not at all sure that it's *better* to proceed incrementally toward AGI
via small narrow-AI steps.

For one thing, I don't think it's possible to proceed *conceptually* to AGI
via narrow AI.  There is a major conceptual leap (or maybe several) between
the two.  Narrow AI provides ingredients of AGI... at best.

I do think that, if one has an AGI design worked out, one can then solve the
*additional* problem of creating narrow AI applications out of components of
the AGI system as they're completed.  But this adds distractions and
burdens.  [Of course, the narrow AI work can be fun too, if the
applications are interesting (for instance, my own current work applying
components of Novamente to genetics and proteomics and antibiowarfare).]

Requiring that one's AGI components be useful in a commercial narrow-AI
context is an additional burden that is sometimes very cumbersome.  (And of
course, not all components *will* be useful in this way, some will only be
useful in the context of the whole AGI system.)

All in all, in an ample-research-funding scenario, I would dispense with
intermediary narrow-AI work in a second and devote all resources toward the
AGI end goal.

One plus of doing narrow-AI work along the way has to do with testing.
Careful and systematic testing of all components of one's AGI system is
important, but it's often hard to focus on such testing while pursuing the
end goal.  Narrow AI apps built from AGI components provide a context for
very thorough and systematic testing of those components.  But still, is
this benefit worth the cost in terms of time and mental distraction from the
end goal?  I'm really not sure....

I don't want to sound like I'm complaining though.  I feel privileged to
have the opportunity to spend my time working with a great team on a mix of
AGI and exciting science-oriented narrow-AI.  It sure beats pumping gas, and
we ARE moving toward the AGI end-goal.  But I'd rather be moving toward the
end goal faster, and I think that would happen if we had a significant chunk
of pure-AGI-focused funding.

But that is not what the world's economic and cultural systems prioritize
right now, obviously!  It's next to impossible to get gov't grant funding
for AGI research (unless one clothes it inside a narrow AI project, which
brings as much distraction as doing narrow-AI with one's software
commercially...), and of course the business world is not presently funding
a heck of a lot of research perceived as long-term.  The gov't is pouring
close
to $4 billion into antibiowarfare R&D in 2003, but how much into AGI?  Heh.

I think there will come a time -- possibly in a couple years -- when gov't
and industry value AGI and loads of money comes AGI's way.  But that will be
*after* someone (potentially my own Novamente team) demonstrates dramatic
progress toward human-level AGI in a public and very vivid way.  When a
brilliant AGI demonstration is all over the newspapers, THEN,
people in positions of substantial financial control will start to see the
possibilities....  And when someone writes an article on the military
possibilities of (say) Chinese AGI research, then the US gov't will really
wake up....  But until this vivid public demonstration exists, we're all
going to continue working under roughly the current situation -- at least
that's my prediction....

-- Ben G
