https://www.meetup.com/Northwest-Artificial-General-Intelligence-Meetup-Group/events/280242958/

It's common to hear claims in AI -- particularly from marketing types
and other evangelists -- that their AI/robotics-based technology
"understands." They want to persuade. But what does a claim of
"understanding" actually mean beyond the smoke screen of sales
rhetoric?

The problem is that a tacit claim is often made that the machine's
understanding is equivalent to human understanding -- at least in some
narrow task. That's what they want you to assume. But what form does
the understanding take? How deep is it? Does it mean that novel cases
can be understood and thus explained? At what point does the machine's
understanding fail? Definitions of "understanding" seem crucial: the
boundaries of the machine's functionality appear to be the boundaries
of its understanding.

The user is invariably left to draw their own conclusions about the
extent to which the machine's understanding resembles human
understanding -- which is itself difficult to define.

Since this issue is at the very core of artificial general
intelligence, a two-part series is planned.

For this event, Part I, we will make an informal survey of the topic
with this agenda:

1) Philippe Delmeire has prepared some brief materials (short videos,
images, text) to get us attuned to the topic.

2) Mike Archbold (organizer) has prepared a community survey of
definitions of understanding, particularly with respect to how a
machine should understand. We will critique the definitions. Jim
Bromer is working on highlights of the definitions. The current
draft of the survey can be found here ->
https://drive.google.com/file/d/1Kij_Ak5uSWTkxdM2vDKBbpEuXdFtJrqj/view?usp=sharing

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2ee04a3eb9a964b5-Mbf1393dfb2c06fa712a9ac46
Delivery options: https://agi.topicbox.com/groups/agi/subscription