I think there is a great deal of confusion between these two objectives.
When I wrote about what would happen if you had a car accident due to a
fault in AI/AGI, and Matt wrote back talking about downloads, that was a
case in point. I was assuming that you had a system which was intelligent
but was *not* a download in any shape or form.

Watson <http://learning.blogs.nytimes.com/2010/06/23/waxing-philosophical-on-watson-and-artificial-intelligence/>
is intelligent. I would be interested to know other people's answers to the
5 questions.

1) Turing test - Quite possibly, with modifications. Watson needs to be
turned into a chatterbot. This could be done fairly easily by allowing
Watson to store the conversation in his database.
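The "store the conversation in the database" idea can be sketched in a few lines. This is only an illustration of the mechanism - appending each exchange to a history that is searched alongside the static knowledge base - and every name here (ConversationalAgent, respond, the toy keyword matching) is hypothetical, not Watson's actual architecture or API:

```python
# Toy sketch: a QA agent that remembers its own conversation by
# appending each turn to a history store, which is then searched
# together with the fixed knowledge base on later turns.

class ConversationalAgent:
    def __init__(self, knowledge_base):
        self.knowledge_base = knowledge_base   # static facts
        self.history = []                      # stored conversation turns

    def respond(self, utterance):
        # Search prior conversation as well as the knowledge base.
        context = self.history + self.knowledge_base
        words = utterance.lower().split()
        matches = [fact for fact in context
                   if any(word in fact.lower() for word in words)]
        reply = matches[-1] if matches else "I have no answer for that."
        # Store the exchange so later turns can refer back to it.
        self.history.append(utterance)
        self.history.append(reply)
        return reply

agent = ConversationalAgent(["Toronto is a city in Canada."])
print(agent.respond("Tell me about Toronto"))
```

Real question answering obviously needs far more than keyword overlap; the point is only that conversational memory is a storage-and-retrieval extension, not a new kind of reasoning.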

2) Meaningless question. Watson could produce the results of thought and feed
these back in. Watson could design a program by referencing other programs
and their comment data. Similarly for engineering.

3,4,5 Absolutely not.

How do you solve world hunger? Does an AGI have to? I think if it is truly
"G" it has to. One way would be to find out what other people had written on
the subject and analyse the feasibility of their solutions.


  - Ian Parker

On 24 June 2010 18:20, John G. Rose <[email protected]> wrote:

> I think some confusion occurs where AGI researchers want to build an
> artificial person versus artificial general intelligence. An AGI might be
> just a computational model running in software that can solve problems
> across domains. An artificial person would be much more than AGI alone.
>
>
>
> With intelligence engineering and other engineering, that artificial person
> could be built, or some interface where it appears to be a person. And a
> huge benefit is in having artificial people to do things that real people
> do. But pursuing AGI need not be the pursuit of building artificial
> people.
>
>
>
> Also, an AGI need not be able to solve ALL problems initially. Coming out
> and asking why some AGI theory wouldn't be able to figure out how to solve
> some problem like, say, world hunger - I mean, WTF is that?
>
>
>
> John
>
>
>
> *From:* Mike Tintner [mailto:[email protected]]
> *Sent:* Thursday, June 24, 2010 5:33 AM
> *To:* agi
> *Subject:* [agi] The problem with AGI per Sloman
>
>
>
> "One of the problems of AI researchers is that too often they start off
> with an inadequate
> understanding of the *problems* and believe that solutions are only a few
> years away. We need an educational system that not only teaches techniques
> and solutions, but also an understanding of problems and their difficulty —
> which can come from a broader multi-disciplinary education. That could speed
> up progress."
>
> A. Sloman
>
>
>
> (& who else keeps saying that?)
>
> *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com>
>
>


