Vladimir: ...and also why can't a 3D world model be described abstractly,
by presenting the intelligent agent with a bunch of objects with attached
properties and relations between them that preserve certain invariants?
The spatial part of the world model doesn't seem to be more complex than
the general problem of knowledge arrangement, where you have to keep track
of all kinds of properties that should (and shouldn't) be derived for a
given scene.
Vladimir and Edward,
I haven't yet properly addressed this idea, which is essentially common to
you both. The idea is that a network or framework of symbols/symbolic
concepts can somehow be used to reason usefully and derive new knowledge
about the world - a network of classes and subclasses and relations between
them, all expressed symbolically. Cyc and NARS are examples.
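For concreteness, the kind of structure I mean looks roughly like this -
a toy sketch in Python, not Cyc's or NARS's actual notation, and all the
names are my own invention:

# A toy symbolic network: an isa hierarchy plus labelled relations.
# Hypothetical illustration only - not Cyc or NARS syntax.
isa = {
    "cat": "mammal",
    "human": "primate",
    "primate": "mammal",
    "mammal": "animal",
}
relations = {
    ("animal", "can_do", "move"),
    ("cat", "can_do", "jump"),
    ("cat", "can_do", "sit"),
    ("jump", "applies_to", "objects"),
}

def ancestors(concept):
    # Walk upward through the isa hierarchy.
    while concept in isa:
        concept = isa[concept]
        yield concept

def can_do(animal, action):
    # An action holds if asserted for the animal or any ancestor.
    return any((c, "can_do", action) in relations
               for c in [animal, *ancestors(animal)])

print(can_do("cat", "move"))   # True - inherited from "animal"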
OK, let's try to set up a rough test of how fruitful such networks/models
can be.
Take your Cyc or similar symbolic model, which presumably will have
something like "animal - mammal - primate - human - cat, etc." and various
relations to "move - jump - sit - stand", and then "jump - on - objects",
etc. A vast hierarchy and network of symbolic concepts, which among other
things tell us something about various animals and the kinds of movements
they can make.
Now ask that model, in effect: "OK, you know that a cat can sit and jump on
a mat. Now tell me what other items in a domestic room a cat can sit and
jump on. And create a scenario of a cat moving around a room."
I suspect you will find that any purely symbolic system like Cyc is
extremely limited in its capacity to deduce further knowledge about cats or
other animals and their movements in relation to a domestic room - and may
well have no power at all to create scenarios.
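My suspicion, in code form: query a store like the sketch above for
everything a cat can jump on, and you get back only what somebody already
typed in (again a hypothetical toy, not Cyc's actual query interface):

facts = {
    ("cat", "can_sit_on", "mat"),
    ("cat", "can_jump_on", "mat"),
}

def jumpable_by(animal):
    # Only what was explicitly asserted ever comes back.
    return {obj for (subj, rel, obj) in facts
            if subj == animal and rel == "can_jump_on"}

print(jumpable_by("cat"))   # {'mat'} - the sofa, chair and cupboard
                            # never appear unless hand-entered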
But you or I, with a visual/sensory model of that cat and that room, will
be able to infer with reasonable success whether it can or can't jump, sit
and stand on every single object in that room - sofa, chair, bottle, radio,
cupboard, etc. And we will also be able to make very complex assessments
about which parts of the objects it can or can't jump or stand on - which
parts of the sofa, for example - and assessments about which states of
objects (well, it couldn't jump or stand on a large Coke bottle standing
erect, but maybe if the bottle were on its side, and almost certainly if it
were a jeroboam on its side). And I think you'll find that our capacity to
draw inferences - from our visual and sensory model - about cats and their
movements is virtually infinite.
And we will also be able to create a virtually infinite set of scenarios of
a cat moving in various ways from point to point around the room.
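By contrast, even a caricature of a spatial model supports that open-ended
inference. Given rough measurements for any object in the room, a few lines
of geometry settle the stand-on question - a toy sketch, with invented
thresholds, standing in for what is really a far richer sensory model:

# Caricature of spatial inference: can a cat stand on this surface?
# All thresholds are invented for illustration.
CAT_FOOTPRINT = 0.15   # metres - rough paw-spread of a standing cat
CAT_MAX_JUMP = 1.5     # metres - rough vertical jump height

def cat_can_stand_on(width, depth, height, stable=True):
    # Top surface big enough for the paws, stable, and reachable.
    return (stable
            and width >= CAT_FOOTPRINT
            and depth >= CAT_FOOTPRINT
            and height <= CAT_MAX_JUMP)

# Works for any object you can measure, asserted in advance or not:
print(cat_can_stand_on(2.0, 0.9, 0.45))    # sofa seat: True
print(cat_can_stand_on(0.03, 0.03, 0.3))   # upright Coke bottle cap: False
print(cat_can_stand_on(0.5, 0.16, 0.16))   # jeroboam on its side: True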
Reality check: what you guys are essentially advocating is logical systems
and logical reasoning for AGIs - now how many kinds of problems in the real
human world is logic actually used to solve? Not that many. Oh, it's an
important part of much problem-solving, but only a part. How much
scientific problem-solving depends seriously on logic? Is logic going to
help you understand and have ideas about genetics, or how cells work, or
how the brain works, or how and why wars start? Is it going to be much use
for design problems? Does it help in telling stories? ...keep on going
through the vast range of human and animal problem-solving (all of which,
remember, are the ONLY forms of [A]GI that actually work).
That's why I asked you: give me some examples of useful new knowledge or
analogies [especially analogies] that have been derived from logical systems
or logic, period (except about logic itself).
New knowledge - especially new science - comes primarily from new
observation of the world, not from logically working through old knowledge.
Artificial general intelligence - the ability to develop new, unprogrammed
solutions to problems - depends on sensory models and observations.
Let me be brutally challenging here: the reason you guys are attached to
purely symbolic models of the world is not because you have any real
evidence of their being productive (for AGI), but because they're what you
know how to do. Hence Vlad's "why can't a 3D world model be described
abstractly..." He doesn't know - he just hopes - that it can. Logically.
What you need here is not logic but - ahem - evidence [sensory stuff].