Natural language understanding is itself a problem, and a system with the ability
to understand natural language is obviously able to solve *this* particular problem.

But the ability to talk about a certain domain does not imply the ability to
solve the problems in that domain.

I have argued this point with my example of the two programs for the domain
of graphs.


As Ben has said, it essentially depends on definitions. Probably you have a
different understanding of the meaning of "understanding" ;-)

But for me there is a difference between understanding a domain and the
ability to solve problems in a domain.


I can understand a car, but this does not imply that I can drive a car.

I can understand a proof, but this does not imply that I can create it.

My computer understands my programs, because it executes every step correctly;
but it cannot create a single statement in the language it understands.

Have you never experienced a situation where you could not solve a problem,
but when another person showed you the solution you understood it at once?

You could not create the solution, but you did not need to learn anything in
order to understand it.

Of course, often when you see the solution to a problem you learn to solve it
at the same time. But this is exactly the reason why you have the illusion
that understanding and problem solving are the same.

Think about a very difficult proof. You can understand every step. But when
you are given just an empty piece of paper to write it down again, you cannot
remember the whole proof and thus you cannot create it. Yet you can
understand it if you read it. Obviously there is a difference between
understanding and problem solving.

I am sure you want to define "understanding" differently. But I do not agree,
because then the term "understanding" would be overloaded and too mystified.

And we already have too many terms in AI which are unnecessarily mystified.

- Matthias

Terren Suydam [mailto:[EMAIL PROTECTED]] wrote:





Matthias, 

I say understanding natural language requires the ability to solve problems.
Do you disagree?  If so, then you must have an explanation for how an AI
that could understand language would be able to understand novel metaphors
or analogies without doing any active problem-solving. What is your
explanation for that?

If on the other hand you agree that NLU entails problem-solving, then that
is a start. From there we can argue whether the problem-solving abilities
necessary for NLU are sufficient to allow problem-solving to occur in any
domain (as I have argued). 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

From: Dr. Matthias Heger <[EMAIL PROTECTED]>
Subject: Re: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 10:12 AM

I do not agree. Understanding a domain does not imply the ability to solve
problems in that domain.

And the ability to solve problems in a domain does not even imply a generally
deeper understanding of that domain.

Once again, my example of the problem of finding a path within a graph from node
A to node B:

Program p1 (= problem solver) can find a path.

Program p2 (= expert in understanding) can verify and analyze paths.

For instance, p2 could be able to compare the length of the path over the first
half of the nodes with the length of the path over the second half of the
nodes. It is not necessary that p1 can do this as well.

P2 cannot necessarily find a path. But p1 cannot necessarily analyze its own
solution.
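
To make the distinction concrete, here is a minimal Python sketch of the two
programs. It assumes an unweighted graph given as an adjacency list, and the
names find_path, verify_path and compare_halves are only illustrations, not
taken from any existing library: p1 *creates* a path by breadth-first search,
while p2 only *checks* a given path and *analyzes* it, for example by comparing
the number of edges in its first half with the number in its second half.

from collections import deque

def find_path(graph, start, goal):
    # p1, the problem solver: *creates* a path from start to goal via BFS.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path exists

def verify_path(graph, path, start, goal):
    # p2, the "expert in understanding": only *checks* a given path.
    if not path or path[0] != start or path[-1] != goal:
        return False
    return all(b in graph.get(a, []) for a, b in zip(path, path[1:]))

def compare_halves(path):
    # p2's analysis: edge count of the first half of the node sequence
    # versus the edge count of the second half.
    mid = len(path) // 2
    first, second = path[:mid + 1], path[mid:]
    return len(first) - 1, len(second) - 1

# usage sketch on a hypothetical graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
p = find_path(graph, "A", "E")          # p1 creates ['A', 'B', 'D', 'E']
print(verify_path(graph, p, "A", "E"))  # p2 checks:   True
print(compare_halves(p))                # p2 analyzes: (2, 1) edges per half

Note that verify_path and compare_halves never search the graph, and find_path
never inspects the structure of the path it returns; each capability exists
without the other.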


Understanding and problem solving are different things which might have a
common subset, but it is wrong that one implies the other or vice versa.

And that's the main reason why natural language understanding is not
necessarily AGI-complete.


-Matthias

Terren Suydam [mailto:[EMAIL PROTECTED]] wrote:




Once again, there is a depth to understanding - it's not simply a binary
proposition.

Don't you agree that a grandmaster understands chess better than you do,
even if his moves are understandable to you in hindsight?

If I'm not good at math, I might not be able to solve y=3x+4 for x, but I
might understand that y equals 3 times x plus four. My understanding is
superficial compared to someone who can solve for x. 

Finally, don't you agree that understanding natural language requires
solving problems? If not, how would you account for an AI's ability to
understand novel metaphor? 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

From: Dr. Matthias Heger <[EMAIL PROTECTED]>
Subject: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 1:47 AM

Terren Suydam wrote:

>>>  

Understanding goes far beyond mere knowledge - understanding *is* the
ability to solve problems. One's understanding of a situation or problem is
only as deep as one's (theoretical) ability to act in such a way as to
achieve a desired outcome. 

<<<  


I disagree. A grandmaster of chess can explain his decisions and I will
understand them. Einstein could explain his theory to other physicists (at
least to a subset of them) and they could understand it.

I can read a proof in mathematics and I will understand it - because I only
have to understand (= check) every step of the proof.


Problem solving is much, much more than understanding alone.

Problem solving is the ability to *create* a sequence of actions which changes
a system's state from A to a desired state B.

For example: the problem of finding a path from A to B within a graph.

An algorithm which can check a solution and can answer questions about that
solution is not necessarily able to find a solution.

-Matthias
