Ben Goertzel wrote: I don't think that a pragmatically-achievable amount of formally-encoded knowledge is going to be enough to allow a computer system to think deeply and creatively about any domain -- even a technical domain about science. What's missing, among other things, is the intricate interlinking between declarative and procedural knowledge. When humans learn a domain, we learn not only facts, we learn techniques for thinking and problem-solving and experimenting and information-presentation .. and we learn these in such a way that they're all mixed up with the facts....
What you're describing is the "Expert System" approach to AI, closely related to the "common sense" approach to AI. ... I agree that as humans we bring a lot of general knowledge with us when we learn a new domain. That is why I started off with the general conversational domain and am now branching into science, philosophy, mathematics and history. And of course the AI cannot make all the connections without being extensively interviewed on a subject and having a human help clarify its areas of confusion, just as a parent answers questions for a child or a teacher for a student.

I am not in fact trying to take the exhaustive, one-domain-at-a-time approach, but rather to teach it the most commonly known and requested information first. My last email just used that description to identify my thoughts on grounding. I am hoping that by doing this, and by repeating the interviewing process in an iterative development cycle, the bot will eventually be able to discuss many different subjects at a somewhat superficial level, much the same as most humans are capable of. This is a lot different from the exhaustive definition that Cyc provides for each concept.

I view what I am doing as distinct from expert systems because I do not yet use either a backward or forward inference engine to satisfy a limited number of goal states. The knowledge base is not in the form of rules, but rather many matched patterns and encoded factoids of knowledge, many of which are transitory in nature and track the context of the conversation. Each pattern may trigger a request for additional information, like an expert system, but the bot does not have a particular goal state in mind other than learning new information, unless a specific request is made by the user. I also differ from Cyc in that, having realized the importance of English as a user interface from the beginning, all internal thoughts and goal states occur as an internal dialog in English.
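To make the pattern/factoid idea concrete, here is a minimal sketch of how a matched pattern might capture a transitory factoid and, like an expert system, trigger a request for more information. The pattern syntax, the FACTOIDS list, and the follow-up field are my own illustrative assumptions, not the bot's actual implementation.

```python
import re

# Each entry: (input pattern, response template, optional follow-up question).
# The follow-up plays the role of an expert system's request for more detail.
PATTERNS = [
    (re.compile(r"my name is (\w+)", re.I),
     "Nice to meet you, {0}.",
     "Where are you from, {0}?"),
    (re.compile(r"(\w+) is a kind of (\w+)", re.I),
     "So every {0} is a {1}. I'll remember that.",
     None),
]

FACTOIDS = []  # transitory facts captured from the conversation context


def respond(user_input):
    for pattern, template, follow_up in PATTERNS:
        m = pattern.search(user_input)
        if m:
            # Store the matched factoid rather than a formal rule.
            FACTOIDS.append((pattern.pattern, m.groups()))
            reply = template.format(*m.groups())
            if follow_up:  # ask for additional information
                reply += " " + follow_up.format(*m.groups())
            return reply
    return "Tell me more."


print(respond("My name is Alice"))
# Nice to meet you, Alice. Where are you from, Alice?
```

Note that there is no goal state being satisfied here: the only "goal" is accumulating more factoids, which matches the description above.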
This eliminates the requirement to translate an internal knowledge representation into an external natural language, beyond providing one or more response patterns for specific input patterns. It also makes it easy to monitor what the bot is learning and whether it is making proper inferences, because its internal thought process is displayed in English while in debug mode.

The templates which generate the responses do, in some cases, have conditional logic to determine which output template is the appropriate response, based on the AI's personality variables and the context of the current conversation. Variables are also set conditionally to maintain metadata for context: if the input references a male, [He] and [Him] get set, versus [Her] and [She] if a female is referenced. [CurrentTopic], [It], [There] and [They] are all set to maintain backward contextual references.

I was able to find a few references to the Common Sense approach to AI on Google, and some of the difficulties in achieving it. And I must admit I have not yet implemented non-monotonic or probabilistic reasoning. I am not under the illusion that I am necessarily inventing or implementing anything that has not been conceived of before. As Newton said, if I achieve great heights it will be because I have stood on the shoulders of giants. I just see the current state of the art and think that it can be made much better. I do not actually know how far I can take it while staying self-funded, but hopefully by the time my money runs out it will demonstrate enough utility and potential to be of value to someone.

I think I like the sound of the Common Sense approach to AI, though. I can't remember the last time anyone accused me of having common sense, but I like the sound of it! I don't think AI is absent sufficient theory, just sufficient execution.
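The context-variable mechanism ([He], [Him], [She], [Her], [CurrentTopic], [It]) could be sketched roughly as below. The word lists, dictionary layout, and [Var] substitution scheme are assumptions for illustration only; the real template engine presumably also consults personality variables, which this sketch omits.

```python
# Illustrative word lists -- the real system would recognize names
# and gendered references far more broadly.
MALE_REFS = {"he", "him", "bob", "john"}
FEMALE_REFS = {"she", "her", "alice", "mary"}

# Backward-reference slots, set conditionally as the conversation proceeds.
context = {"He": None, "Him": None, "She": None, "Her": None,
           "CurrentTopic": None, "It": None}


def update_context(sentence, topic=None):
    """Set context variables based on who/what the input references."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    males = words & MALE_REFS
    if males:
        context["He"] = context["Him"] = males.pop().capitalize()
    females = words & FEMALE_REFS
    if females:
        context["She"] = context["Her"] = females.pop().capitalize()
    if topic:
        context["CurrentTopic"] = context["It"] = topic


def expand(template):
    """Substitute any set [Var] slots in an output template."""
    out = template
    for var, val in context.items():
        if val is not None:
            out = out.replace(f"[{var}]", val)
    return out


update_context("Bob went to the store", topic="shopping")
print(expand("What did [He] buy while [CurrentTopic]?"))
# What did Bob buy while shopping?
```

Because the slots are plain English words, this also keeps the internal "thought process" readable in debug mode, as described above.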
I feel like the Cyc Project's heart was in the right place and the level of effort was certainly great, but perhaps the purity of their vision took priority over the usability of the end result. Is any company actually using Cyc as anything other than a search engine yet? That being said, other than Cyc, I am at a loss to name any serious AI efforts which are over a few years in duration and have more than 5 man-years' worth of effort behind them (not counting promotion and fundraising). The Open Source efforts are interesting and have some utility, but are starting to get that designed-by-committee feel, and enhancements are starting to feel like kludges tacked on due to limitations in the initial design.