YKY, Pei's attitude is pretty similar to mine on these matters, although we differ on other more detailed issues regarding AGI.
And, please note that compared to most AI researchers, Pei and I would be among the folks most likely to be sympathetic to your ideas, given that:

-- we are both explicitly in favor of pushing hard toward AGI rather than fiddling with narrow AI
-- we are both in favor of uncertain logic systems as one highly viable path toward AGI

You have not explained how you will overcome the issues that plagued GOFAI, such as:

-- the need for massive amounts of highly uncertain background knowledge to make real-world commonsense inferences
-- the combinatorial explosion that ensues when you try to control logical inference on a large body of data

My own solution to these problems is to:

-- learn most knowledge via experience rather than via explicit encoding
-- utilize a subtle combination of inference, statistical pattern mining and artificial economics for inference control

Pei agrees with me on the "learning via experience" part, but has a different approach to the combinatorial explosion problem of inference control.

But you have not yet presented any original solutions to these or other major, well-documented problems with the GOFAI approach.

Yes, intuitively, the approach you're suggesting sounds like it should work -- at first. That is why masses of research funding were spent on it decades ago, and why hundreds of brilliant people spent their lives on GOFAI. But you are not giving us any rational reason to suspect you might succeed in this sort of approach where so many others have failed.

What is your new and different idea?

-- Ben

On 1/19/07, Pei Wang <[EMAIL PROTECTED]> wrote:
YKY,

Frankly, I still see many conceptual confusions in your description. Of course, some of them come from other people's mistakes, but they will hurt your work anyway.

For example, what you called "rule" in your postings has two different meanings: (1) a declarative implication statement, "X ==> Y"; (2) a procedure that produces conclusions from premises, "{X} |- Y". These two are related, but not the same thing. Both can be learned, but through very different paths. To confuse the two will cause a mess.

The failure of GOFAI has reasons deeper than you suggested. Like Ben, I think you will repeat the same mistakes if you follow the current plan. Just adding numbers to your rules won't solve all the problems. "More knowledge, higher intelligence" is an intuitively attractive slogan, but it has many problems in it. For example, more knowledge will easily lead to combinatorial explosion, and the reasoning system will derive many "true" but useless conclusions. How do you deal with that?

I don't think it is a good idea to attract many volunteers to a project unless the plan is mature enough that people's time and interest won't be wasted.

Sources of human knowledge will be needed by any AGI project, so projects like CYC or MindPixel will be useful, though I'm afraid neither is cost-effective enough to play a central role in satisfying this need. Mining the Web may be more efficient, though it will surely leave gaps in the knowledge base to be filled in by other methods, such as personal experience, NLP, interactive tutoring, etc.

Sorry for the negative tone, but since you mentioned my work, I have to clarify my position.

Pei

On 1/19/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
>
> On 1/19/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > Well YKY, I don't feel like rehashing these ancient arguments on this
> > list!!
> >
> > Others are welcome to do so, if they wish...
> > ;-)
> >
> > You are welcome to repeat the mistakes of the past if you like, but I
> > frankly consider it a waste of effort.
> >
> > What you have not explained is how what you are doing is fundamentally
> > different from what has been tried N times in the past -- by larger,
> > better-funded teams with more expertise in mathematical logic...
>
> Well, I think people gave up on logic-based AI (GOFAI, if you will) in the
> 80s because of newer techniques such as neural networks and statistical
> learning methods. They were not necessarily aware of exactly what caused
> the failure. If they had been, they would have tackled it.
>
> For the type of common sense reasoner I described, we need a *massive*
> number of rules. You can either acquire these rules via machine learning
> or direct encoding. Machine learning of such rules is possible, but that
> area of research is still rather immature. OTOH, there has not been a
> massive project to collect such rules by hand. So that explains why my
> type of system has not been tried before.
>
> My system is conceptually very close to Cyc, but the difference is that
> Cyc only contains ground facts and relies on special predicates (e.g.
> $isa, $genl) to do the reasoning. My project may be the first to openly
> collect facts as well as rules.
>
> I guess Novamente or NARS can benefit by importing these rules, if the
> format is right?
>
> YKY
>
> ________________________________
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
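[Editor's note: YKY's description of Cyc above -- ground facts plus special predicates such as $isa (instance-of) and $genl (subclass-of) -- can be illustrated with a small sketch. This is a deliberately simplified, hypothetical model; the fact set and the query helper are invented for illustration and are not real CycL.]

```python
# Hypothetical miniature of a Cyc-style knowledge base: only ground facts,
# with reasoning driven by the special predicates $isa and $genl.
facts = {
    ("$isa",  "Fido",   "Dog"),      # Fido is an instance of Dog
    ("$genl", "Dog",    "Mammal"),   # Dog is a subclass of Mammal
    ("$genl", "Mammal", "Animal"),   # Mammal is a subclass of Animal
}

def isa_query(thing, cls, facts):
    """True iff `thing` $isa some class C, and C reaches `cls` via $genl links."""
    # Classes the thing directly belongs to.
    known = {c for (p, x, c) in facts if p == "$isa" and x == thing}
    frontier = set(known)
    # Walk $genl links upward until no new superclasses appear.
    while frontier:
        frontier = {c2 for (p, c1, c2) in facts
                    if p == "$genl" and c1 in frontier} - known
        known |= frontier
    return cls in known

print(isa_query("Fido", "Animal", facts))   # True: Dog -> Mammal -> Animal
print(isa_query("Dog", "Animal", facts))    # False: Dog is a class, not an instance
```

This makes YKY's contrast concrete: all the knowledge here is ground facts, and the "rule" (inheritance along $genl) lives entirely in the query procedure -- exactly the declarative-versus-procedural distinction Pei draws above.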
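[Editor's note: both Ben and Pei point to combinatorial explosion as the central unsolved problem for a rule-heavy reasoner. A minimal sketch of why, using an invented example (not any poster's actual system): a single transitive rule turns a handful of base facts into many times more conclusions, and each round of unconstrained forward chaining costs time quadratic in the number of facts derived so far.]

```python
def transitive_closure(pairs, rounds=5):
    """Naive forward chaining with one rule: (a,b) and (b,d) derive (a,d).

    Every round scans all pairs of derived facts -- O(n^2) work that grows
    as the fact base grows, which is the explosion Ben and Pei describe.
    """
    derived = set(pairs)
    for _ in range(rounds):
        derived |= {(a, d)
                    for (a, b) in derived
                    for (c, d) in derived
                    if b == c}
    return derived

# A chain of just 10 base facts (0->1, 1->2, ..., 9->10) yields 55 conclusions,
# most of them "true" but useless -- Pei's point exactly.
chain = {(i, i + 1) for i in range(10)}
print(len(transitive_closure(chain)))   # 55
```

With only one rule and ten facts the blowup is quintupling; with the *massive* rule set YKY proposes, an uncontrolled reasoner drowns in derived facts, which is why Ben stresses inference control rather than knowledge volume.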