Hmmm...

It's pretty hard to project the timing of different early-stage AGI
applications, as this depends on the particular route taken to AGI,
and there are many possible routes...

We may well see a variety of proto-AGI applications in different
domains, sorta midway between narrow AI and human-level AGI, including
stuff like

-- maidbots

-- AI financial traders that don't just execute machine learning
algorithms, but grok context, adapt to regime changes, etc.

-- natural-language question-answering systems that grok context and
piece together info from different sources

-- artificial scientists capable of formulating nonobvious hypotheses
and validating them via data analysis, including automated data
preprocessing, etc.

Then, after this phase, we may finally see the emergence of unified
AGI systems with true human-level intelligence.

**Or**, it could happen that one of the above apps (or something not
on my list) advances way faster than the others, for fundamental AI
reasons or simply for practical economic reasons ... or due to luck...

**Or**, it could well happen that someone gets all the way to
human-level AGI before any of the above proto-AGI applications really
becomes feasible and economically viable.  In that case the answer
will indeed be: Duh, the AGI can do anything...

Which of these alternatives will happen is not obvious to me.  It's
not even obvious to me under the hypothetical assumption that the
Novamente/OpenCog approach is gonna be the one that gets us to
human-level AGI ... let alone if I drop that assumption and think
about the problem from the perspective of the broad scope of possible
AGI architectures.

So I am a bit perplexed that some folks on this list are so
surpassingly **confident** as to which route is going to unfold...  I
don't want to get all Eliezer on you, but really, some reflection on
the human brain's tendency toward overconfidence might be in order ;-O

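BTW, Josh's teraop/petaop economics in the message below invite a
quick back-of-the-envelope check. Here's a minimal Python sketch; the
figures are his illustrative assumptions (plus my assumed ~18-month
Moore's Law doubling), not measurements of anything:

```python
# Back-of-the-envelope check of Josh's maidbot economics (quoted below).
# The teraop/petaop figures are illustrative assumptions, not data.

run_ops = 1e12     # ~1 teraop/s to RUN a trained maidbot
learn_ops = 1e15   # ~1 petaop/s for the one-off learning run

# A deployed maidbot needs only a sliver of the big learning machine:
print(f"deployed/learning compute ratio: {run_ops / learn_ops:.1%}")  # 0.1%

# "A decade of Moore's Law": assume a doubling every ~18 months.
years, months_per_doubling = 10, 18
gain = 2 ** (years * 12 / months_per_doubling)
print(f"compute per dollar after {years} years: ~{gain:.0f}x")  # roughly 100x

# A researcher-bot is "all learning", so each copy needs petaop-scale
# hardware: a 1000x gap that ~100x of Moore's Law alone doesn't close,
# hence Josh's "and at least that of AGI research".
```

The absolute numbers are fanciful, of course; it's the ratios that
carry the argument.
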
-- Ben G



On Thu, Apr 17, 2008 at 10:30 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
>  Well, I haven't seen any intelligent responses to this, so I'll answer it
>  myself:
>
>
>  On Thursday 17 April 2008 06:29:20 am, J Storrs Hall, PhD wrote:
>  > On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
>  > > If you could build a (completely safe, I am assuming) system that could
>  > > think in *every* way as powerfully as a human being, what would you
>  > > teach it to become:
>  > >
>  > > 1) A travel agent.
>  > >
>  > > 2) A medical researcher who could learn to be the world's leading
>  > > specialist in a particular field,...
>  >
>  > Travel agent. Better yet, housemaid. I can teach it to become these things
>  > because I know how to do them. Early AGIs are more likely to succeed at
>  > these things because they're easier to learn.
>  >
>  > This is sort of like Orville Wright asking, "If I build a flying machine,
>  > what's the first use I'll put it to:
>  > 1) Carrying mail.
>  > 2) A manned moon landing."
>
>  Q: You've got to be kidding. There's a huge difference between a
>  mail-carrying, fabric-covered, open-cockpit biplane and the Apollo
>  spacecraft. They're not comparable at all.
>
>  A: It's only about 50 years' development. More time than that elapsed
>  between railroads and biplanes.
>
>  Q: Do you think it'll take 50 years to get from travel agents to medical
>  researchers?
>
>  A: No, the pace of development has sped up, and will speed up even more with
>  AGI. But as in the mail/moon example, the big jump will be getting off the
>  ground in the first place.
>
>  Q: So why not just go for the researcher?
>
>  A: Same reason Orville didn't go for the moon rocket. We build Rosie the
>  maidbot first because:
>  1) We know very well what it's actually supposed to do, so we know if it's
>  learning it right.
>  2) We even know a bit about how its internal processing -- vision, motion
>  control, recognition, navigation, etc. -- works or could work, so we'll have
>  some chance of writing programs that can learn that kind of thing.
>  3) It's easier to learn to be a housemaid. There are lots of good examples,
>  and the essential elements of the task are observable or low-level
>  abstractions. While the robot is learning to wash windows, we, the AGI
>  researchers, will learn how to write better learning algorithms by watching
>  how it learns.
>  4) When, not if, it screws up -- a natural part of the learning process --
>  there'll be broken dishes, not a thalidomide disaster.
>
>  The other issue is that the hard part of this is the learning. Say it takes a
>  teraop to run a maidbot well, but a petaop to learn to be one. We run the
>  learning on our one big machine and sell the maidbots cheap with 0.1% of the
>  CPU. But being a researcher is all learning -- so each copy would need the
>  whole shebang. A decade of Moore's Law ... and at least that of AGI research.
>
>  Josh



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller
