On 6/23/07, Bo Morgan <[EMAIL PROTECTED]> wrote:

On Sat, 23 Jun 2007, Pei Wang wrote:

) On 6/23/07, Bo Morgan <[EMAIL PROTECTED]> wrote:
) >
) > Thanks for putting this together!  If I were to put myself into your
) > theory of AI research, I would probably be roughly included in the
) > Structure-AI and Capability-AI (better descriptions of the brain and
) > computer programs that have more capabilities).
)
) It is a reasonable position, though in the long run you may have to
) choose between the two, since they often conflict.

For example, if one can mentally simulate a computation, it has an analog
in the brain.  I just want to describe the brain in computer language,
which will require much more advanced programming languages just to get
computers to simulate things similar to what people can do mentally.

Sure you can, but this is mostly what I call "Structure-AI".
"Capability-AI" is more about practical problem solving, where it
doesn't matter whether the process follows the "human way", as in Deep Blue.

Hmm...  It seems that even if Capability-AI isn't the primary goal of the
theory, it must be *one* of its goals.

Of course. Everyone has practical applications in mind; the difference
is how much priority this goal has compared with the other goals.

Pei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email