The case of Helen Keller can certainly shed some light on the human processes of knowledge acquisition.  Without the senses of vision and hearing, how could she build up an internal model of the world and her place in it?
 
I don't remember the exact figure, but a large fraction of the cerebral cortex -- by some estimates as much as half -- is involved in visual processing.  We use internal visual-spatial maps to orient ourselves within the space around us and to navigate through it.  With our eyes closed, we can pinpoint the locations of objects in this internal model of space by the directions their sounds come from and by properties of the sounds.  We can tell without looking whether a car is driving toward or away from us by the pitch of its sound, shifted up while it approaches and down as it recedes.  The auditory information gets fed into the same central map that visual information gets fed into.  The sense of touch feeds into the same mapping system.  That's how you can build up a mental picture of a room in complete darkness by feeling your way around.
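For reference, the physics behind that pitch cue is the Doppler shift: with sound speed v (about 343 m/s in air) and source speed v_s, a tone of frequency f is heard as f * v/(v - v_s) while the source approaches and f * v/(v + v_s) while it recedes.  A 400 Hz engine note from a car doing 30 m/s is heard at roughly 438 Hz coming and 368 Hz going -- illustrative numbers, but that asymmetry is why the cue is so easy to pick up.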
 
Helen Keller, though without vision and hearing, was not without a human brain and its visual-spatial modelling capabilities.  Using the sense of touch alone, she could build up maps of her environment the same way sighted people do in the dark.  And this is how she also acquired her knowledge of words and language -- through the sensory channel of touch (the famous story of her teacher, Anne Sullivan, finally getting her to connect the letters spelled into one hand with the water running over the other at the pump).  Her language processing probably took place in the areas of the brain normally used for it, such as Broca's and Wernicke's areas, but the symbols were grounded in and represented by patterns of incoming touches and patterns of muscle movements for generating them.  Her knowledge was built up out of sensory-motor patterns.
 
So your bot is an AGI project?  I guess a number of people on this list are actually working on real AGI systems.  I'm also developing one.
 
Like your system, mine uses ASCII inputs and outputs, but they are part of its formal interface language, a computer language for communication between human and machine and between machine and machine.  The language is like SQL in the sense that it is used for managing knowledge bases, inputting information, and querying, but it has the full expressive power of languages like Prolog, Lojban, or English.  The other I/O channels are vision and motor outputs.  If my system is to learn to use a natural language, it will do so through these two channels, learning to recognize characters visually and to generate them through motor outputs, rather than using ASCII codes.  I think this approach to handling natural language is essential for true AI, which should be capable of learning any human language from Greek to Chinese to hieroglyphics, regardless of the types of symbols used.  To learn Chinese, for example, a system should be able to learn to recognize and generate the logographic characters in their two-dimensional form instead of being tied to numeric codes for them.  Not easy, but necessary if you are to say that a system has true general intelligence.
 
Higher intelligence requires the ability to use language for communication and for encoding ideas in linguistic form.  But language is only a secondary form of knowledge representation.  The primary form, at least in the human brain, is spatial-temporal (and probabilistic) models built up out of sensory-motor patterns.
 
----- Original Message -----
Sent: Sunday, May 07, 2006 9:07 AM
Subject: RE: [agi] Logic and Knowledge Representation

>> John said: The human brain, the only high-level intelligent system currently known, uses language and logic for abstract reasoning, but these are based on, and owe their existence to, a more fundamental level of intelligence -- that of pattern-recognition, pattern-matching, and pattern manipulation.
 
I agree with this wholeheartedly.  But on the next point we diverge in our thinking. 
 
>> John said:  In evolution on earth, sensory-motor-based intelligence came first, and the use of language and logic only later.  It seems to me that the right path to true AI will also use sensory-motor patterns as the basic building blocks of knowledge representation.  A typical human being's knowledge of the letter "A" involves recognition of graphical representations of the symbol, memories of its sound when spoken, procedural or muscle memory of how to speak and write it, and memories of where it is commonly found in its linguistic context.  A system should be capable of recognizing symbols visually or auditorially (and possibly of generating them through motor outputs) before it should be expected to comprehend them.
 
Any thoughts or arguments?  Or am I just repeating something everyone already knows?  (I honestly don't know.) >>
 
Speech recognition and visual recognition are separate problems from knowledge representation/pattern recognition.
 
Helen Keller was blind and deaf, but with some help she was able to achieve knowledge representation and pattern recognition without the use of either hearing or sight.
 
Think of the senses as input/output devices.  Yes, an infant's brain must first learn to control those I/O devices before it is able to learn about and communicate with the world outside itself.
 
But an artificially intelligent entity already has access to an ASCII data stream that it can use for input/output to communicate with the world outside itself.
 
Of course, because a picture is worth a thousand words, a program that can also do visual recognition has access to a larger data store than one that does not.
 
My opinion on the most probable route to a true AI Entity is:
 
1. Build a better fuzzy pattern representation language with an inference mechanism for extracting inducible information from user inputs.  Fuzziness allows the language to understand utterances with misspellings, words run together, etc. (a minimal sketch of such matching appears after this list).
2. Build a bot based on said language
3. Build a large knowledge base which captures a large enough percentage of real-world knowledge to allow the bot to learn from natural-language data sources, e.g. the web.
4. Build a pattern generator which allows the bot to learn the information it has read and to build new patterns itself to represent that knowledge.
5. Build a reasoning module based on Bayesian logic to allow simple reasoning to be conducted.
6. Build a conflict resolution module to allow the bot to resolve/correct conflicting information, or to ask for clarification, so it can build a correct mental model.
7. Build a goal and planning module which allows the bot to operate more autonomously in aid of the goals we give it, e.g. achieving the Singularity.
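For step 1, here is a minimal Python sketch of the kind of fuzzy slot-matching I mean.  The pattern syntax (alternatives separated by "|" inside each slot), the 0.7 similarity cutoff, and all the names are hypothetical illustrations, not my actual language:

    import difflib

    def token_matches(token, alternatives, cutoff=0.7):
        # True if the token fuzzily matches any allowed word in the slot,
        # which tolerates misspellings like "waht" for "what".
        return bool(difflib.get_close_matches(token, alternatives, n=1, cutoff=cutoff))

    def pattern_matches(pattern, utterance):
        # Each whitespace-separated slot lists acceptable words ("a|b|c");
        # the utterance must supply one fuzzy match per slot, in order.
        slots = [slot.split("|") for slot in pattern.split()]
        tokens = utterance.lower().split()
        if len(tokens) != len(slots):
            return False
        return all(token_matches(t, alts) for t, alts in zip(tokens, slots))

    # One pattern covering many surface forms, including a misspelling:
    pattern = "what|whats is|was your|ur name|handle"
    for u in ["what is your name", "whats was ur handle", "waht is your nmae"]:
        print(u, "->", pattern_matches(pattern, u))   # all three print True

A real pattern language would also need optional slots, word-order variation, and insertions, but the fuzzy-token idea is the same.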
 
Steps 1 and 2 took me a couple of years.
Step 3 is an ongoing effort -- I'm into my fourth year now, with 28,000 patterns.
Hint: if the pattern recognition language is good, a single pattern should be able to express all the ways of phrasing a single thought.
This makes the patterns longer and more complex, but it reduces overall work by not forcing the bot master to write thousands of patterns to account for all possible ways of expressing that thought.  My 28,000 patterns correctly match several orders of magnitude more inputs -- including misspellings, ungrammatical inputs, etc. -- than competing solutions with similar pattern counts.
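To put rough (purely illustrative) numbers on that hint: a single pattern whose four slots allow, say, 6, 3, 4, and 5 interchangeable words already covers 6 x 3 x 4 x 5 = 360 distinct literal phrasings before any fuzzy spelling tolerance is applied.  At that rate, 28,000 patterns would cover on the order of ten million exact inputs, plus the misspelled and run-together variants of each.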
 
This transforms step 3 from a totally intractable problem into a doable but still difficult, work-intensive one.
 
Step 4 is the one keeping me awake at night.
 
Steps 5, 6 and 7 don't sound that difficult to me right now, but that's only because I haven't thought about them in enough detail.  (A toy sketch of the kind of Bayesian update step 5 calls for follows.)
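For step 5, a minimal Bayes-rule update might look like this; the topic, prior, and likelihood numbers are made-up illustrations:

    def bayes_update(prior, likelihood, likelihood_not):
        # P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
        evidence = likelihood * prior + likelihood_not * (1.0 - prior)
        return likelihood * prior / evidence

    # How much should seeing the word "rain" raise our belief that the
    # user is talking about the weather?
    prior = 0.10               # P(weather topic) before reading the input
    p_rain_given_topic = 0.40  # P("rain" appears | weather topic)
    p_rain_otherwise = 0.02    # P("rain" appears | any other topic)
    print(round(bayes_update(prior, p_rain_given_topic, p_rain_otherwise), 2))  # 0.69

Chaining updates like this over many cue words is the simplest version of such a module; anything real would also have to handle dependent evidence.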
 
People have challenged the top-down approach, saying that such a bot would lack grounding, or the ability to tie its knowledge to real-world inputs.
 
But it should not be difficult to use a commercial voice recognition engine to transform voice inputs into ASCII inputs.  And the fuzzy recognizer should often be able to compensate for the mistakes the voice recognition software makes in recognizing a word or two in the input stream.
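As a rough illustration of that compensation step (the vocabulary and the misheard sentence are hypothetical):

    import difflib

    VOCABULARY = ["please", "open", "close", "the", "garage", "door", "window"]

    def correct(asr_output, cutoff=0.7):
        # Snap each word the speech engine produced to its closest
        # in-vocabulary word, leaving unmatched words untouched.
        fixed = []
        for word in asr_output.lower().split():
            close = difflib.get_close_matches(word, VOCABULARY, n=1, cutoff=cutoff)
            fixed.append(close[0] if close else word)
        return " ".join(fixed)

    print(correct("please open the garbage boor"))  # -> please open the garage door

In practice the correction would be better driven by the fuzzy pattern matcher itself, since context tells you which vocabulary is plausible at each position.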
 
 

From: John Scanlon [mailto:[EMAIL PROTECTED]
Sent: Sunday, May 07, 2006 2:40 AM
To: agi@v2.listbox.com
Subject: [agi] Logic and Knowledge Representation

Is anyone interested in discussing the use of formal logic as the foundation for knowledge representation schemes for AI?  It's a common approach, but I think it's the wrong path.  Even if you add probability or fuzzy logic, it's still insufficient for true intelligence.
 
The human brain, the only high-level intelligent system currently known, uses language and logic for abstract reasoning, but these are based on, and owe their existence to, a more fundamental level of intelligence -- that of pattern-recognition, pattern-matching, and pattern manipulation.
 
Philosophers have grappled with the question of the source of knowledge for as long as there have been philosophers, and one of the best accepted answers in modern philosophy is sensory experience.  Sensory experience, including proprioception and awareness of motor outputs, in addition to the ordinary five senses, is the material that knowledge is built out of.  The brain constructs its logical formulations out of the basic building blocks of the sights, sounds, and feels of linguistic symbols.  The symbols themselves (letters of an alphabet, words in a language, etc.) are built up out of lower-level sensory patterns.
 
In evolution on earth, sensory-motor-based intelligence came first, and the use of language and logic only later.  It seems to me that the right path to true AI will also use sensory-motor patterns as the basic building blocks of knowledge representation.  A typical human being's knowledge of the letter "A" involves recognition of graphical representations of the symbol, memories of its sound when spoken, procedural or muscle memory of how to speak and write it, and memories of where it is commonly found in its linguistic context.  A system should be capable of recognizing symbols visually or auditorially (and possibly of generating them through motor outputs) before it should be expected to comprehend them.
 
Any thoughts or arguments?  Or am I just repeating something everyone already knows?  (I honestly don't know.)
 
J.P.
 
