> In the Novamente design this is dealt with via a currently unimplemented
> aspect of the design called the "internal simulation world." This is a
> very non-human-brain-like approach
Why do you believe that this is a "very non-human-brain-like approach"?
Mirror neurons and many other recent discoveries tend to make me believe
that the human brain does itself have (or indeed, is) an internal simulation
world.
----- Original Message -----
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Saturday, March 10, 2007 9:19 AM
Subject: Re: [agi] The Missing Piece
John,
It is certainly clear that mental imagery plays a role in human thinking,
but this role does appear to vary from person to person, both in extent
and in nature. Take a look at Hadamard's old book "The Psychology of
Mathematical Invention" for a fascinating discussion of the different
sorts of mental imagery pursued by different people (visual, acoustic,
verbal, etc.). I myself use a lot of visual and auditory imagery in my
own thinking, but I know others who do not (at least not at the conscious
level).
In the Novamente design this is dealt with via a currently unimplemented
aspect of the design called the "internal simulation world." This is a
very non-human-brain-like approach, but I think it's an interesting and
ultimately very powerful one. What it means is that NM will actually
have, internally, a private 3D world-simulation, complete with a simple
physics engine. It can use this internal sim to experiment with
hypothetical actions in hypothetical situations, but also to draw various
abstract sketches and movies that don't correspond to any real-world
phenomena.
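The idea of experimenting with hypothetical actions in a private physics model can be made concrete with a toy sketch. This is not the Novamente/CrystalSpace design, just an illustration under invented assumptions: an agent "imagines" throwing a projectile at several angles inside a minimal 2D simulation and keeps only the actions whose simulated outcome succeeds.

```python
import math

# A minimal sketch of the "internal simulation world" idea: before acting,
# the agent runs each hypothetical action through a private physics model
# and inspects the outcome. The projectile scenario and all names here are
# illustrative, not part of any actual AGI design.

def simulate_throw(speed, angle_deg, wall_distance, wall_height,
                   dt=0.001, g=9.81):
    """Step a 2D point projectile; report whether it clears a wall."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while x < wall_distance and y >= 0.0:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
    return y >= wall_height  # height on reaching the wall (or grounded)

# Try hypothetical actions internally before committing to one.
candidates = [(10, a) for a in (20, 35, 50, 65)]
workable = [(s, a) for s, a in candidates
            if simulate_throw(s, a, wall_distance=8.0, wall_height=1.5)]
```

The point of the sketch is the control flow, not the physics: planning becomes filtering imagined actions through the internal sim rather than trial-and-error in the external world.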
We haven't implemented this part yet due to the familiar lack of adequate
human resources, but I think it will be a valuable addition to NM's
cognitive arsenal. For the sim world, we would use the CrystalSpace
engine that we are now using (in the AGISim project) to give NM a
sim-world to use for embodiment and interaction with humans...
I don't really see mental imagery as a critical missing link between the
symbolic and the subsymbolic. In NM, there is interaction & translation
between symbolic and subsymbolic knowledge without need for mental
imagery. However, in some cases mental imagery can provide insights that
would be hard to come by otherwise.
-- Ben G
John Scanlon wrote:
My philosophy of AI has never been logic-based or neural-based. I did
explore neural nets during the neural-net mania of the nineties. I did a
lot of reading, and experimented some with feedforward nets I wrote,
trained using simulated annealing and backpropagation (which never did work very
well). Neural nets seem to have potential as one tool among several
types of incremental learning algorithms, including genetic algorithms
and statistical methods, but in themselves, they are no more than that --
useful tools, but not the solution.
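The kind of nineties-era experiment described above can be reproduced in a few dozen lines. The following is a generic illustration, not John's actual code: a one-hidden-layer feedforward net trained by backpropagation on XOR, the classic test case, in pure Python.

```python
import math, random

random.seed(0)  # deterministic toy run

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Network: 2 inputs -> 4 hidden -> 1 output; last entry of each weight
# row is a bias term.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(5)]

def forward(x):
    h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[2]) for w in W1]
    o = sigmoid(sum(W2[i]*h[i] for i in range(4)) + W2[4])
    return h, o

def mse(data):
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

def train(data, epochs=4000, lr=0.5):
    """Per-sample gradient descent (backpropagation) on squared error."""
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            do = (o - t) * o * (1 - o)                 # output delta
            for i in range(4):
                dh = do * W2[i] * h[i] * (1 - h[i])    # hidden delta
                W2[i]    -= lr * do * h[i]
                W1[i][0] -= lr * dh * x[0]
                W1[i][1] -= lr * dh * x[1]
                W1[i][2] -= lr * dh
            W2[4] -= lr * do

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
loss_before = mse(xor)
train(xor)
loss_after = mse(xor)
predictions = [round(forward(x)[1]) for x, _ in xor]
```

As the email notes, such nets are incremental function approximators: useful as one learning tool among several, but nothing in the code above represents structured knowledge about a world.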
Language, which includes logic, is a way of representing ideas simply
and crudely. Good for communication and internal reasoning -- "if I do
this then this will happen, unless state X is the case, which means that
this other thing will happen," etc. My project uses an artificial
language (Jinnteera) for both these things, and the language is integral
to the whole thing. But it does not function as the core
knowledge-representation scheme.
So this brings us to what I've been calling the missing piece.
Artificial neural nets (as they currently exist) can function as
general-learning algorithms, but they don't represent knowledge of the
real spatiotemporal world well. They are too low-level for handling what
in human intelligence is thought of as mental imagery. Yes, in the
brain, it is all neurally based, but on a non-massively-parallel von Neumann
computer system (even a PDP system), building a 100-billion-node neural
net is computationally intractable. It has to
be done differently.
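The intractability claim is easy to check with back-of-envelope arithmetic. The figures below are rough illustrative estimates (brain-scale counts, 2007-era serial hardware assumed), not measurements:

```python
# Rough cost of a brain-scale neural net on a serial machine.
neurons = 100e9                # ~10^11 neurons in the human brain
synapses = neurons * 1e4       # ~10^4 synapses/neuron -> 10^15 weights
bytes_per_weight = 4           # one 32-bit float per synapse

memory_tb = synapses * bytes_per_weight / 1e12   # terabytes of RAM
ops_per_pass = 2 * synapses                      # one multiply-add per weight
seconds_per_pass = ops_per_pass / 1e10           # at an assumed ~10 GFLOP/s

print(memory_tb)         # 4000.0 TB just to store the weights
print(seconds_per_pass)  # 200000.0 s (> 2 days) per full update sweep
```

Even granting orders-of-magnitude improvements in the assumptions, the conclusion stands: direct brain-scale simulation on serial hardware is out of reach, so the representation has to be done differently.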
The missing piece lies between low-level learning algorithms and
highest-level logical-linguistic knowledge representation. When a human
translator, at the U.N., for example, translates between Chinese and
English, he or she does it far more effectively than any
translation software could do it, because there is an intermediate
knowledge representation that is neither Chinese nor English, but that
can be readily translated to or from either language by a fluent speaker.
The intermediate knowledge representation is non-linguistic -- it
consists of mental models constructed of sensorimotor patterns
representing a 3-D temporal world.
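One way to make the interlingua idea concrete is a minimal sketch: a language-neutral event structure rendered into more than one surface language. Note this deliberately flattens John's proposal (his intermediate representation is sensorimotor 3-D models, not symbolic triples); the structures, concept IDs, and toy lexicons below are all invented for illustration.

```python
from dataclasses import dataclass

# A language-neutral "mental model" node: concept IDs, not words of any
# particular language. Hypothetical structure, for illustration only.
@dataclass
class Event:
    agent: str
    action: str
    patient: str

# Toy surface lexicons; "zh" entries are pinyin romanizations.
LEXICON = {
    "en": {"CAT": "the cat", "CHASE": "chases", "MOUSE": "the mouse"},
    "zh": {"CAT": "mao", "CHASE": "zhui", "MOUSE": "laoshu"},
}

def render(event, lang):
    """Translate the language-neutral model into one surface language."""
    lex = LEXICON[lang]
    return " ".join([lex[event.agent], lex[event.action], lex[event.patient]])

scene = Event("CAT", "CHASE", "MOUSE")
english = render(scene, "en")   # "the cat chases the mouse"
chinese = render(scene, "zh")   # "mao zhui laoshu"
```

Translation in this picture is parse-to-model then render-from-model; the two surface strings never touch each other, which is the point of the intermediate representation.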
This sounds very vague and abstract, but I'm working on making it
concrete, in my system (Gnoljinn) -- developing the data structures in
code for implementing this knowledge-representation scheme. There's been
some talk here recently about 3-D vision systems, and this points roughly
in the direction I'm going in. Gnoljinn uses a single sensory modality
right now -- vision -- and will be restricted to it for a good while,
because, while it might be useful to have other sensory modalities, none
of them are absolutely necessary for higher intelligence, and it's best
to keep things as simple as possible starting out.
I seriously wonder if I can do this project myself, or whether I need to
try to find some collaborators.
Yan King Yin wrote:
John Scanlon wrote:
> [...]
> Logical deduction or inference is not thought. It is mechanical
symbol manipulation that can be programmed into any scientific
pocket calculator.
> [...]
Hi John,
I admire your attitude for attacking the core AI issues =)
One is either neural-based or logic-based, using a crude
dichotomy. So your approach is closer to neural-based? Mine is
closer to the logic-based end of the spectrum.
You did not give a real argument against logical AI. What you
said was just some sentiments about the ill-defined concept of
"thought". You may want to take some time to express an argument
why logic-based AI is doomed. In fact, both Ben's and my system
have certain "neural" characteristics, e.g. being graph-based, having
numerical truth values, etc.
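"Numerical truth values" can be illustrated in a few lines: each assertion carries a strength in [0, 1], and inference rules combine strengths instead of booleans. The combination rules below are generic fuzzy-logic choices, not the actual formulas of Ben's or Yan's systems:

```python
# Generic fuzzy connectives over strengths in [0, 1]; illustrative only.

def fuzzy_and(a, b):
    return a * b               # product rule (independence assumption)

def fuzzy_or(a, b):
    return a + b - a * b       # inclusion-exclusion

def fuzzy_not(a):
    return 1.0 - a

raining = 0.8
cold = 0.6
unpleasant = fuzzy_and(raining, cold)          # ~0.48
tolerable = fuzzy_not(unpleasant)              # ~0.52
```

Inference of this kind is still mechanical symbol manipulation, but the graded values give it a "neural" flavor, which is exactly the middle ground the paragraph above describes.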
In the end we may all end up somewhere between logic and neural...
------------------------------------------------------------------------
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303