Mike,

>The plane flew over the hill
>The play is over

Using a formal language can help to avoid many of these issues.

>But then the program must be able to tell what is "in" what or outside, what 
>is behind/over etc.

The communication module in my experimental AGI design includes
several specialized editors, one of which is a "Space Editor" that
lets the user place simple objects in a small nD "sample space" to
define the meaning of terms like "in", "outside", "above", "under",
etc. The goal is to define each meaning as simply as possible; the
knowledge can then be reused in more complex scenes generated for
problem-solving purposes.
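To make the idea concrete, here is a minimal sketch of how such a
Space Editor might ground spatial terms. Everything here is a
hypothetical illustration, not my actual implementation: I assume the
"simple objects" are axis-aligned boxes in the sample space, and each
term becomes a predicate over pairs of boxes.

```python
# Hypothetical sketch: "simple objects" are axis-aligned boxes in a
# small n-dimensional sample space, and each spatial term ("in",
# "outside", "above", ...) is defined as a predicate over two boxes.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Box:
    lo: Tuple[float, ...]  # minimum corner, one value per dimension
    hi: Tuple[float, ...]  # maximum corner

def inside(a: Box, b: Box) -> bool:
    """a is "in" b: every coordinate interval of a lies within b's."""
    return all(bl <= al and ah <= bh
               for al, ah, bl, bh in zip(a.lo, a.hi, b.lo, b.hi))

def outside(a: Box, b: Box) -> bool:
    """a is "outside" b: the boxes share no point of the sample space."""
    return any(ah < bl or bh < al
               for al, ah, bl, bh in zip(a.lo, a.hi, b.lo, b.hi))

def above(a: Box, b: Box, axis: int = -1) -> bool:
    """a is "above" b along the vertical axis (last axis by default)."""
    return a.lo[axis] >= b.hi[axis]

# A tiny 3D sample scene: a marble "in" a cup, a lid "above" the cup.
cup = Box(lo=(0, 0, 0), hi=(4, 4, 4))
marble = Box(lo=(1, 1, 1), hi=(2, 2, 2))
lid = Box(lo=(0, 0, 5), hi=(4, 4, 6))

print(inside(marble, cup))   # True
print(above(lid, cup))       # True
print(outside(marble, cup))  # False
```

Once terms are grounded this way on toy scenes, the same predicates
can be evaluated over the larger generated scenes mentioned above.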
Other editors:
Script Editor - for writing stories the system learns from.
Action Concept Editor - for learning about actions/verbs & related
roles/phases/changes.
Category Editor - for general categorization/grouping concepts.
Formula Editor - math stuff.
Interface Mapper - for teaching how to use tools (e.g. external software)
...
Some of those editors (probably including the Space Editor) will be
available only to privileged users; access is RBAC-based. The system
uses only lightweight 3D imagination, for performance reasons (our
brains "cheat" too), and no embodiment. BTW, I still have a lot to
code before making the system publicly accessible.

>To "understand" is .. in principle, ..to be able to go into the real world and 
>point to the real objects/actions being referred to..

Not from my perspective.

>I believe that is actually how we *do* understand, how the brain does work, 
>how a GI *must* work

It's OK (and often a must) to use different solutions when developing
for different platforms.
Planes don't flap their wings.

>You understand what a key is if you can go and pick one up

Again, an AGI can know very little about particular objects and still
solve many problems and demonstrate a useful level of concept
understanding. Say the AGI works as an online adviser. For many
key-related problems it's enough to know that a particular key can
unlock/open another particular object, plus the location info and
sometimes the key's color or so; the exact shape of the key, or the
exact moves for opening a particular lock with it, are details this
online AGI can in most cases leave to the user. The AGI should be able
to learn details, but there are so many details in the real world
that, for practical reasons, it would need to filter most of them out.
An AGI doesn't need to interact with the real world directly in order
to learn enough to be a helpful problem solver. And as long as it does
a good job as a problem solver, who cares about the "understanding"
vs. "reacting as if it understands" classification..
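As a toy illustration of how sparse that advisory-level knowledge can
be, here is a hypothetical sketch (the object names, fact fields, and
the advise() helper are all my made-up examples, not part of any real
system): a key is represented only by what it unlocks, where it is,
and perhaps its color.

```python
# Hypothetical sketch of sparse advisory knowledge: a key is known
# only by what it unlocks, its location, and optionally its color.
# Shape and manipulation details are deliberately left to the user.

knowledge = [
    {"object": "key_1", "unlocks": "front_door",
     "location": "kitchen drawer", "color": "brass"},
    {"object": "key_2", "unlocks": "garage",
     "location": "coat pocket"},
]

def advise(goal_object: str) -> str:
    """Answer 'how do I open X?' from the sparse facts alone."""
    for fact in knowledge:
        if fact["unlocks"] == goal_object:
            extra = f" (the {fact['color']} one)" if "color" in fact else ""
            return (f"Use {fact['object']}{extra}; "
                    f"it's in the {fact['location']}.")
    return "I don't know how to open that."

print(advise("front_door"))
# -> Use key_1 (the brass one); it's in the kitchen drawer.
```

A handful of such facts is enough to give useful advice, even though
the system could not pick the key up or turn it in the lock.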

Regards,
Jiri Jelinek


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/