A digital machine cannot understand. I don't know why we're trying to invoke a 
reality that simply isn't there. Calling a dog's tail a leg does not mean the 
dog has five legs. We're just playing word games with existentialism, trying 
to give gestalt to an elective illusion. It's also known as forcing the issue.

Then give the machine a comprehension test, using second- or third-language 
proficiency expressions in pseudo-random foreign dialects and non-English 
grammar. What result would you get: understanding, advanced recognition, or 
functional failure?

My guess is, at best, an error message stating: "Would you repeat that, 
please? I don't understand." AI-driven Dragon Dictate, although a most useful 
program, is a prime example of this. The routine would also end up in a fatal 
loop. Why? Because it's probably 3x+1 oriented.
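(The 3x+1 reference is to the Collatz map, whose termination for every 
starting value is only a conjecture, not a theorem. A minimal Python sketch 
of the point about fatal loops - `collatz_steps` and its `max_steps` guard 
are my own illustrative names, not anything from Dragon Dictate:)

```python
def collatz_steps(n, max_steps=1000):
    """Iterate the 3x+1 (Collatz) map starting from n.

    Returns the number of steps needed to reach 1, or None if
    max_steps is exceeded. Since termination is only conjectured,
    a guard like max_steps is the practical way to keep a routine
    built on this map from looping indefinitely.
    """
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None  # give up rather than risk looping forever
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 27 has a famously long trajectory: 111 steps
```

Without that guard, a proof that the loop always exits would amount to a 
proof of the Collatz conjecture.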

Suppose understanding were the beginning of wisdom; what, then, would 
understanding be?

I think the more realistic research question should be: "Could an AGI entity 
- even a biomechanical one - be encoded in such a manner as to achieve a 
lower level of recognizable, clinical consciousness (compared to humans in 
general)?"

Or, stated differently: judging by modern service bots, consciousness can be 
faked. How do we tell fake from real?

________________________________
From: Mike Archbold <[email protected]>
Sent: Saturday, 11 September 2021 04:50
To: AGI <[email protected]>
Subject: Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: 
Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

It's an easy question to answer... if we know what the machine
understands, we know what it can do. If we don't know what it
understands, we might not. So that's why we don't want sloppy
definitions of understanding in an opaque age of gigantic neural
networks.

On 9/10/21, Matt Mahoney <[email protected]> wrote:
> I don't understand why we are so hung up on the definition of
> understanding. I think this is like the old debate over whether machines
> could think. Can submarines swim?
>
> Philosophy is arguing about the meanings of words. It is the opposite of
> engineering, which is about solving problems. Define what you want the
> machine to do and figure out how to do it.
>
> I know what it means for a human to understand or think or be conscious.
> For machines it's undefined. What problem does defining them solve?
> Machines learn, predict, and act to satisfy goals. What else do you want
> them to do?
>
> On Fri, Sep 10, 2021, 12:06 AM Mike Archbold <[email protected]> wrote:
>
>> On 9/9/21, WriterOfMinds <[email protected]> wrote:
>> > Hey Mike ... I took a look at the Survey doc, and it appears that a lot
>> of
>> > the opinions are under the wrong names. You've entered my definition as
>> > James Bowery's, Daniel Jue's definition as mine, and so forth (looks
>> like an
>> > "off by one" sort of error that continues down the document).
>>
>>
>> I think the problem is only that I put the name following the
>> description, right? I'll switch it around tomorrow so that you see
>> the name first. I quickly checked yours and it looked right.
>>
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2ee04a3eb9a964b5-M5c13acb750c5d62aa989d885
Delivery options: https://agi.topicbox.com/groups/agi/subscription
