On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
>
>  What I wanted to say is that any intelligence has
>  to be narrow in a sense if it wants be powerful and useful. There must
>  always be strong assumptions of the world deep in any algorithm of useful
>  intelligence.

From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf Page 5:
---
3.3. "General-purpose systems are not as good as special-purpose ones"

Compared to the previous one, a weaker objection to AGI is to insist
that even though general-purpose systems can be built, they will not
work as well as special-purpose systems, in terms of performance,
efficiency, etc.

We actually agree with this judgment to a certain degree, though we do
not take it as a valid argument against the need to develop AGI.

For any given problem, a solution especially developed for it almost
always works better than a general solution that covers multiple types
of problem. However, we are not promoting AGI as a technique that will
replace all existing domain-specific AI
techniques. Instead, AGI is needed in situations where ready-made
solutions are not available, due to the dynamic nature of the
environment or the insufficiency of knowledge about the problem. In
these situations, what we expect from an AGI system are not optimal
solutions (which cannot be guaranteed), but flexibility, creativity,
and robustness, which are directly related to the generality of the
design.

In this sense, AGI is not proposed as a competing tool to any AI tool
developed before, by providing better results, but as a tool that can
be used when no other tool can, because the problem is unknown in
advance.
---

Pei
