Jiri Jelinek wrote:
Richard,

Question: do you believe it will really be possible to build something
that is completely intelligent -- smart enough to understand humans in
such a way as to have conversations on the subtlest of subjects, and
able to understand the functions of things in our world, even
though those functions are sometimes defined by the most subtle of human
behaviors/preferences/whims -- and yet, at the same time, be only a
sophisticated search engine?

Yes

I think that if it were dumb enough that it could be treated as a tool,
then it would have to be unable to understand that it was being used as
a tool.

From my perspective, an AGI's acceptance of its "tool" status does not
make it dumb.

And if it could not understand that, it would have no hope of being
generally intelligent.

Its general intelligence can be well demonstrated by its
problem-solving abilities.

There is much more to this line of attack, but do you see where I am
coming from?

From a too-human way of thinking that does not necessarily apply to AGIs.

Do you think that the apparent contradiction can be resolved?

Yes - in its scope - which might be just your mind. ;-)

What I meant was that if it had awareness of the consequences of its
actions, it would think before acting, and if it thought about
consequences before acting, it would, ipso facto, not be a "tool".

An AGI thinking only about what it's tasked to think about by a user
makes it a tool, but that does not lower its AGI capabilities. It can
still demonstrate problem-solving skills superior to the user's.

This is patronizing.





Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=61305532-d4c6f3
