OK.
James Ratcliff wrote:

Have to amend that to "acts or replies"
I consider a reply an action. I'm presuming that one can monitor the internal state of the program.
and it could react "unpredictably" depending on the human's level of understanding... if it sees a nice, neat answer (like jumping through the window because the door was blocked) that the human wasn't aware of, or was surprised by, it would be equally good.
I'm a long way from an AGI, so I'm not seriously considering superhuman understanding. That said, I'm proposing that you run the system through trials. Once it has "learned" a trial, we say it understands the trial if it responds "correctly". "Correctly" is defined in terms of the goals of the system rather than in terms of my goals.
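Something like the following rough Python sketch is what I have in mind; every name in it (system.respond, trial.advances_goals, system.goals) is just an invented placeholder for illustration, not any real API:

def understands_trial(system, trial, runs=20, threshold=0.95):
    """Count the system as understanding a learned trial if it reliably
    responds 'correctly', where 'correct' means the response advances the
    system's own goals in that situation (not the evaluator's goals)."""
    successes = 0
    for _ in range(runs):  # repeat to check the response is predictable, not a fluke
        action = system.respond(trial.situation)
        if trial.advances_goals(action, goals=system.goals):
            successes += 1
    return successes / runs >= threshold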

And this doesn't cover the opposite: what other actions can be done, and what their consequences are. That is also important.
True.  This doesn't cover intelligence or planning, merely understanding.

And lastly, this is for a "situation" only; we also have the more general case of understanding a "thing", where, when it sees, has, or is told about a thing, it understands it if it knows about general properties and the actions that can be done with, or using, the thing.
You are correct. I'm presuming that understanding is defined in a situation, and that it doesn't automatically transfer from one situation to another. (E.g., I understand English, unless the accent is too strong, but I don't understand Hindi, though many English speakers do.)

The main thing being that we can't, and aren't really, defining "understanding", but rather the effect of the understanding, either in an action or in a language reply.
Does understanding HAVE any context-free meaning? It might, but I don't feel that I could reasonably assert this. Possibly it depends on the precise definition chosen. (Consider, e.g., that one might choose to use the word "meaning" to refer to the context-free component of understanding. Would that be a reasonable use of the language or not? To me this seems justifiable, but definitely not self-evident.)

And it should be a level of understanding, not just a y/n.
Probably, but this might depend on the complexity of the system that one was modeling. I definitely have a partial understanding of "How to program an AGI". It's clearly less than 100%, and is probably greater than 1%. It may also depend on the precision with which one is speaking. To be truly precise one would doubtless need to decompose the measure along several dimensions... and it's not at all clear that the same dimensions would be appropriate in every context. But this is clearly not the appropriate place to start.
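As a crude illustration of a graded measure rather than a y/n (the dimensions, weights, and numbers below are invented placeholders; as I said, the right decomposition probably varies by context):

def understanding_level(scores_by_dimension, weights=None):
    """Combine per-dimension scores in [0, 1] into a single level in [0, 1]."""
    dims = list(scores_by_dimension)
    if weights is None:
        weights = {d: 1.0 for d in dims}  # equal weighting by default
    total = sum(weights[d] for d in dims)
    return sum(weights[d] * scores_by_dimension[d] for d in dims) / total

# e.g., my partial understanding of "How to program an AGI":
level = understanding_level({"goal handling": 0.4, "learning": 0.3, "planning": 0.1})
# comes out well above 1% and well below 100%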

So if one AI saw an apple and said "I can throw / cut / eat it" and weighted those ideas, and a second had the same list but weighted "eat" as more likely, and/or knew people sometimes cut it before eating it, then the second AI would "understand" to a higher level. Likewise, if instead one knew you could bake an apple pie, or that apples come from apple trees, it would understand more.
No. That's what I'm challenging. You are relating the apple to the human world rather than to the goals of the AI.
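To make the distinction concrete, here is a rough sketch that ranks the candidate actions by their expected contribution to the AI's own goals rather than by human-typical weightings; all the names and numbers are invented for illustration:

def rank_by_own_goals(actions, expected_goal_progress):
    """Order candidate actions by expected progress toward the agent's goals, best first."""
    return sorted(actions, key=lambda a: expected_goal_progress.get(a, 0.0), reverse=True)

human_typical    = {"eat": 0.7, "cut": 0.2, "throw": 0.1}    # the apple related to the human world
agent_goal_based = {"throw": 0.8, "eat": 0.15, "cut": 0.05}  # e.g., the agent's goal is to knock something down

print(rank_by_own_goals(["throw", "cut", "eat"], human_typical))    # -> ['eat', 'cut', 'throw']
print(rank_by_own_goals(["throw", "cut", "eat"], agent_goal_based)) # -> ['throw', 'eat', 'cut']
# On my definition, the second ranking is the one that shows understanding,
# because it is judged against the agent's goals, not human typicality.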

So it starts looking like a knowledge test then.
What you are proposing looks like a knowledge test.  That's not what I mean.

Maybe we could extract simple facts from wiki, and start creating a test there, then add in more complicated things.

James

Charles D Hixson <[EMAIL PROTECTED]> wrote:

    Ben Goertzel wrote:
    > ...
    > On the other hand, the notions of "intelligence" and "understanding"
    > and so forth being bandied about on this list obviously ARE intended
    > to capture essential aspects of the commonsense notions that
    > share the same word with them.
    > ...
    > Ben
    Given that purpose, I propose the following definition:
    A system understands a situation that it encounters if it predictably
    acts in such a way as to maximize the probability of achieving its
    goals in that situation.

    I'll grant that it's a bit fuzzy, but I believe that it captures the
    essence of the visible evidence of understanding. This doesn't say what
    understanding is, merely how you can recognize it.





_______________________________________
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! http://www.falazar.com/projects/Torrents/tvtorrents_show.php


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
