Jiri and Matt et al,

I'm getting very confident about the approach I've only just begun to outline. Let's call it "realistics": the name for a new, foundational branch of metacognition that will oversee all forms of information, including especially language, logic, and maths, as well as all image forms and the whole sphere of semiotics.

The basic premise:

"to understand a piece of information and its "information objects", (eg words) , is to "realise" (or know) how they refer to "real objects" in the real world, (and, ideally, and often necessarily, to be able to point to and engage with those real objects)."

- "this includes understanding/realising when they are "unreal" - when they do NOT refer directly to real objects, but for example to sur-real or metaphorical or abstract or non-existent objects"

Realistics recognizes that understanding involves, you could say, "object-ivity".

Complementarily,

"to 'disunderstand" is to fail to see how information objects refer to real objects."

"to be confused" is not only to fail to see, but to be unsure *which* of the information objects in a piece of information do not refer to real objects" (it's all a bit of a blur)

Bear in mind that human information-processing involves an ENORMOUS amount of disunderstanding and confusion.

And a *major point* of this approach (to be explained on another occasion) is precisely that, a great deal of the time, people do not understand/realise *why* they do not understand or are confused: *why* they have such difficulty understanding genetics, atomic physics, philosophy, logic, maths, ethics, neuroscience, etc., just about every subject in the curriculum, academic or social. Like virtual AGI-ers, they fall into the trap of FAILING to refer the information to real objects. They do not try to realise what on earth is being talked about. And they even end up concluding (completely wrongly) that there is something wrong with their brain and its information-processing capacity, leaving them with a totally unnecessary inferiority complex. (There will probably be very few here, even at this exalted level of intelligence, who are not so affected.)

(Realistics should enormously improve human understanding; it holds out the promise that no one will ever again fail to understand any information or subject for want of anything other than time and effort.)

Now there is a LOT more to expand here [later]. But for now it immediately raises the obvious and inevitable "object-ion" to any contradictory, "unreal"/"artificial" approach to information and especially language processing/NLP such as you and many other AGI-ers are outlining.

How will you understand, and recognize, when information objects (e.g. language/words) are "unreal"?

e.g.
Turn yourself inside out.
Turn that block of wood inside out.
Turn around in a straight line.
What's inside is not more beautiful than what's on the outside.
Drill down into Steve's logic.
Cars can hover just above the ground.
The car flew into the wall.
The wall flew away.
Bush wants to liberalise sexual mores.
Truth and beauty are incompatible.

[all such statements obviously real/unreal/untrue/metaphorical in different and sometimes multiple simultaneous ways]
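The grounding premise can be made concrete with a toy. Below is a minimal, purely illustrative Python sketch (the lexicon entries and category labels are my own assumptions, not part of realistics or of any real NLP system) that labels each information object as referring to a real object, to an abstraction, or to nothing known:

```python
# Toy sketch of a "realistics"-style grounding check.
# The lexicon entries and category labels below are invented for
# illustration; a real system would need a vastly richer model.

LEXICON = {
    "car":    "real",       # refers to a concrete, pointable object
    "wall":   "real",
    "key":    "real",
    "truth":  "abstract",   # no direct real-world referent
    "beauty": "abstract",
}

def classify(words):
    """Label each information object: real referent, abstract, or unknown."""
    return {
        w: LEXICON.get(w.lower(), "unknown (a possible source of confusion)")
        for w in words
    }

print(classify(["car", "truth", "glork"]))
```

The point of the sketch is only that a grounding check makes "which objects are unreal?" an explicit question the system can answer, rather than a blur.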

You might also ask yourself how you will, if your approach extends beyond language, know that any image or photo is unreal.

IOW how is any "unreal" approach to information processing (contradictory to mine) different from a putative logic that does *not* recognize truth or a maths that does *not* recognize equality/equations?


Mike,

> The plane flew over the hill
> The play is over

Using a formal language can help to avoid many of these issues.

But then the program must be able to tell what is "in" what or outside, what is behind/over etc.

The communication module in my experimental AGI design includes
several specialized editors, one of which is a "Space Editor" which
allows one to use simple objects in a small nD "sample space" to define
the meaning of terms like "in", "outside", "above", "under", etc. The
goal is to define the meanings as simply as possible; the knowledge
can then be used in more complex scenes generated for problem-solving
purposes.
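For what it's worth, here is a minimal sketch of what such a sample-space might look like: axis-aligned boxes in n dimensions, with spatial terms defined as predicates over them. The class and predicate names are my own illustrative choices, not the actual Space Editor's API.

```python
# Minimal nD sample-space sketch: spatial terms as predicates over boxes.
# Names and representation are illustrative assumptions only.

class Box:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # per-axis min/max coordinates

def inside(a, b):
    """a is "in" b: every axis interval of a lies within b's."""
    return all(bl <= al and ah <= bh
               for al, ah, bl, bh in zip(a.lo, a.hi, b.lo, b.hi))

def above(a, b, axis=-1):
    """a is "above" b along the chosen (vertical) axis."""
    return a.lo[axis] >= b.hi[axis]

room  = Box((0, 0, 0), (10, 10, 3))
table = Box((2, 2, 0), (4, 4, 1))
lamp  = Box((2.5, 2.5, 2.5), (3, 3, 3))

print(inside(table, room))  # True
print(above(lamp, table))   # True
```

Once "in"/"above" are defined this simply, the same predicates can be reused in larger generated scenes, which is the stated goal.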
Other editors:
Script Editor - for writing stories the system learns from.
Action Concept Editor - for learning about actions/verbs & related roles/phases/changes.
Category Editor - for general categorization/grouping concepts.
Formula Editor - math stuff.
Interface Mapper - for teaching how to use tools (e.g. external software)
...
Some of those editors (probably including the Space Editor) will be
available only to privileged users; it's all RBAC-based. Only
lightweight 3D imagination, for performance reasons (our brains
"cheat" too), and no embodiment. BTW, I still have a lot to code
before making the system publicly accessible.

To "understand" is .. in principle, ..to be able to go into the real world and point to the real objects/actions being referred to..

Not from my perspective.

> I believe that is actually how we *do* understand, how the brain does work, how a GI *must* work

It's ok (and often a must) to use different solutions when developing
for different platforms.
Planes don't flap wings.

> You understand what a key is if you can go and pick one up

Again, an AGI can know very little about particular objects, and that
can be enough to successfully solve many problems and demonstrate a
useful level of concept understanding. Let's say the AGI works as an
online adviser. For many key-involving problems it's good enough to
know that a particular key object can be used to unlock/open another
particular object, plus the location info and sometimes the key color
or so. But the exact shape of the key, or the exact moves for opening
a particular lock with it, is something this online AGI can in most
cases leave to the user. The AGI should be able to learn details, but
there are so many details in the real world that, for practical
reasons, it would just need to filter most of them out. An AGI doesn't
need to interact with the real world directly in order to learn enough
to be a helpful problem solver. And as long as it does a good job as a
problem solver, who cares about the "understanding" vs. "reacting as
if it understands" classification?

Regards,
Jiri Jelinek


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




