My philosophy of AI has never been logic-based or neural-based.  I did explore 
neural nets during the neural-net mania of the nineties: I did a lot of 
reading, and experimented with some feedforward nets I wrote, trained with 
simulated annealing and backpropagation (which never worked very well).  
Neural nets seem to have potential as one tool among several types of 
incremental learning algorithms, including genetic algorithms and statistical 
methods, but in themselves they are no more than that -- useful tools, not 
the solution.

Language, which includes logic, is a way of representing ideas simply and 
crudely.  It is good for communication and for internal reasoning -- "if I do 
this, then this will happen, unless state X is the case, in which case this 
other thing will happen," etc.  My project uses an artificial language 
(Jinnteera) for both of these, and the language is integral to the whole 
system.  But it does not function as the core knowledge-representation scheme.

So this brings us to what I've been calling the missing piece.  Artificial 
neural nets (as they currently exist) can function as general learning 
algorithms, but they don't represent knowledge of the real spatiotemporal world 
well.  They are too low-level to handle what in human intelligence is 
thought of as mental imagery.  Yes, in the brain it is all neural-based, but 
on a non-massively-parallel von Neumann computer system (even a PDP system), 
building a 100-billion-node neural net is computationally intractable.  It has 
to be done differently.

The missing piece lies between low-level learning algorithms and highest-level 
logical-linguistic knowledge representation.  When a human translator, at the 
U.N., for example, translates between Chinese and English, they do it far 
more effectively than any translation software could, because there is an 
intermediate knowledge representation that is neither Chinese nor English, 
but that can be readily translated to or from either language by a fluent 
speaker.  This intermediate knowledge representation is non-linguistic -- 
it consists of mental models constructed of sensorimotor patterns representing 
a 3-D temporal world.

This sounds very vague and abstract, but I'm working on making it concrete in 
my system (Gnoljinn) -- developing the data structures, in code, for 
implementing this knowledge-representation scheme.  There's been some talk 
here recently about 3-D vision systems, and that points roughly in the 
direction I'm going.  Gnoljinn uses a single sensory modality right now -- 
vision -- and will be restricted to it for a good while, because, while other 
sensory modalities might be useful, none of them is absolutely necessary for 
higher intelligence, and it's best to keep things as simple as possible 
starting out.
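To give a rough idea of the kind of data structure I have in mind -- and this 
is only an illustrative sketch, not the actual Gnoljinn code; all the names 
here are invented for the example -- a mental model might be a time-ordered 
sequence of sensorimotor frames, each pairing sensed visual patches with the 
motor state at that moment:

```python
from dataclasses import dataclass, field

@dataclass
class SensorPatch:
    """A small patch of visual input: position in the field plus features."""
    x: float
    y: float
    features: tuple  # e.g. edge orientation, intensity -- illustrative only

@dataclass
class SensorimotorFrame:
    """One time-slice of a mental model: sensed patches plus motor state."""
    t: float
    patches: list
    motor_state: dict = field(default_factory=dict)

@dataclass
class MentalModel:
    """A sequence of frames representing an object or event over time."""
    frames: list = field(default_factory=list)

    def add_frame(self, frame):
        self.frames.append(frame)

    def duration(self):
        # Time span covered by the model, from first to last frame.
        if not self.frames:
            return 0.0
        return self.frames[-1].t - self.frames[0].t

# Illustrative use: a two-frame model of an edge moving rightward
m = MentalModel()
m.add_frame(SensorimotorFrame(t=0.0, patches=[SensorPatch(0.0, 0.0, ("edge",))]))
m.add_frame(SensorimotorFrame(t=0.5, patches=[SensorPatch(1.0, 0.0, ("edge",))]))
print(m.duration())  # 0.5
```

The point of the sketch is just that the representation is spatiotemporal and 
non-linguistic: patterns indexed by space and time, not symbols in a logic.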

I seriously wonder if I can do this project myself, or whether I need to try to 
find some collaborators.



Yan King Yin wrote:
  John Scanlon wrote: 
  > [...]
  > Logical deduction or inference is not thought.  It is mechanical symbol 
manipulation that can be programmed into any scientific pocket calculator.
  > [...]


  Hi John,

  I admire your attitude for attacking the core AI issues =)

  Using a crude dichotomy, one is either neural-based or logic-based.  So your 
approach is closer to neural-based?  Mine is closer to the logic-based end of 
the spectrum. 

  You did not give a real argument against logical AI.  What you said was just 
a sentiment about the ill-defined concept of "thought".  You may want to 
take some time to spell out an argument for why logic-based AI is doomed.  In 
fact, both Ben's system and mine have certain "neural" characteristics, e.g. 
being graphical and having numerical truth values. 

  In the end we may all end up somewhere between logic and neural...

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
