Abram,

A useful midpoint between views is to decide what the knowledge must distill down to in order to relate it all together and do whatever you want to do. I did this with Dr. Eliza and realized that I had to have a column in my DB containing what people typically say to indicate the presence of various "symptoms" (of various cause-and-effect chain links). I now realize that ignorance of the operation of various processes is itself a condition with its own "symptoms", each with its own common expressions of ignorance.

OK, so just where was my column going to come from? This information is NOT on the Internet, Wikipedia, etc., yet any expert can rattle it off in a heartbeat. The only obvious answer was to have experts hand-code it. I am STILL listening to anyone who claims to have another/better way, but I have yet to hear ANY other functional proposal. Of course, this simple realization dooms all of the several efforts now underway to "mine" the Internet and Wikipedia for knowledge from which to solve problems, yet no one seems to be interested in this simple gotcha, while these doomed efforts continue.
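To make the idea concrete, here is a minimal sketch of the kind of hand-coded table that DB column describes: common expressions mapped to the symptom (or expression-of-ignorance) they typically indicate. All phrases, labels, and names below are illustrative assumptions, not taken from the actual Dr. Eliza system:

```python
# Hypothetical symptom-phrase table: what people typically say -> the
# condition it indicates. Experts would hand-code entries like these;
# everything here is made up for illustration.
SYMPTOM_PHRASES = {
    "i can't sleep": "insomnia",
    "always tired": "fatigue",
    "i don't know why this happens": "ignorance-of-mechanism",
}

def detect_symptoms(utterance: str) -> list[str]:
    """Return symptom labels whose trigger phrases appear in the utterance."""
    text = utterance.lower()
    return [symptom for phrase, symptom in SYMPTOM_PHRASES.items()
            if phrase in text]

print(detect_symptoms("Lately I'm always tired and I can't sleep."))
```

A real system would need fuzzier matching than substring tests, but the point stands: the table itself cannot be mined from the Internet and must come from experts.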
I believe that ALL of the ongoing disputes here on this forum are born of a lack of analysis. While the contents of a knowledge base may be very complex and interrelated, the structure of that DB should be relatively simple. This discussion should start with a proposal for structure, and continue as the flaws in that proposal are each identified and addressed.

Note in passing that the value of any problem-solving system lies in its ability to solve problems with an absolute minimum of information. Hence, systems that require the most information are worth the least, and systems that require all information are completely worthless. Dr. Eliza was designed to operate right at the (currently believed to be) absolute minimum.

I completely agree with others here that Dr. Eliza is NOT an AGI as currently envisioned. However, for many of the projected problem-solving functions of a future AGI, it appears to be absolutely unbeatable. People need either to target other functionality for a *useful* future AGI, or else to develop designs that won't be predictably inferior to Dr. Eliza. For this, they would do well to fully understand the operation of Dr. Eliza, which should be no problem since it is conceptually pretty simple. Most of the code goes to support speech I/O, the USENET interface, etc., and NOT its core problem-solving ability.

Steve Richfield
===============

On 6/21/08, Abram Demski <[EMAIL PROTECTED]> wrote:
>
> To be honest, I am not completely satisfied with my conclusion on the
> post you refer to. I'm not so sure now that the fundamental split
> between logical/messy methods should occur at the line between perfect
> and approximate methods. That is one type of messiness, but only one. I
> think you are referring to a related but different messiness: not
> knowing what kind of environment your AI is dealing with.
> Since we don't know which kinds of models will fit best with the world,
> we should (1) trust our intuitions to some extent, and (2) try things
> and see how well they work. This is as Loosemore suggests.
>
> On the other hand, I do not want to agree with Loosemore too strongly.
> Mathematics and mathematical proof is a very important tool, and I feel
> like he wants to reject it. His image of an AGI seems to be a system
> built up out of totally dumb pieces, with intelligence emerging
> unexpectedly. Mine is a system built out of somewhat smart pieces,
> cooperating to build somewhat smarter pieces, and so on. Each piece has
> provable smarts.
>
> On Sat, Jun 21, 2008 at 6:54 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> > I just read Abram Demski's comments about Loosemore's "Complex Systems,
> > Artificial Intelligence and Theoretical Psychology," at
> > http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html
> >
> > I thought Abram's comments were interesting. I just wanted to make a
> > few criticisms. One is that a logical or rational approach to AI does
> > not necessarily mean that it would be a fully constrained
> > logical-mathematical method. My point of view is that if you use a
> > logical or rational method with an unconstrained inductive system
> > (open and not monotonic), then the logical system will, for any likely
> > use, act like a rational-non-rational system no matter what you do. So
> > when I, for example, start thinking about whether or not I will be
> > able to use my SAT system (logical satisfiability) for an AGI program,
> > I am not thinking of an implementation of a pure Aristotelian-Boolean
> > system of knowledge.
> > The system I am currently considering would use logic to study
> > theories and theory-like relations that refer to concepts about the
> > natural universe and the universe of thought, but without the
> > expectation that those theories could ever constitute a sound,
> > strictly logical or rational model of everything. Such ideas are so
> > beyond the pale that I do not even consider the possibility to be
> > worthy of effort. No one in his right mind would seriously think that
> > he could write a computer program that could explain everything
> > perfectly without error. If anyone seriously talked like that, I would
> > take it as an indication of some significant psychological problem.
> >
> > I also take it as a given that AI would suffer from the problem of
> > computational irreducibility if its design goals were to completely
> > comprehend all complexity using only logical methods in the strictest
> > sense. However, many complex ideas may be simplified, and these
> > simplifications can be used wisely in specific circumstances. My
> > belief is that many interrelated layers of simplification, if they are
> > used insightfully, can effectively represent complex ideas that may
> > not be completely understood, just as we use insightful
> > simplifications while trying to discuss something that is not
> > completely understood, like intelligence. My problem with developing
> > an AI program is not that I cannot figure out how to create complex
> > systems of insightful simplifications, but that I do not know how to
> > develop a computer program capable of sufficient complexity to handle
> > the load that the system would produce.
> > So while I agree with Demski's conclusion that "there is a way to
> > salvage Loosemore's position, ...[through] shortcutting an irreducible
> > computation by compromising, allowing the system to produce
> > less-than-perfect results," and that "...as we tackle harder problems,
> > the methods must become increasingly approximate," I do not agree that
> > the contemporary problem is with logic or with the complexity of human
> > knowledge. I feel that the major problem I have is that writing a
> > really, really complicated computer program is really, really
> > difficult.
> >
> > The problem I have with people who talk about ANNs or probability nets
> > as if their paradigm of choice were the inevitable solution to
> > complexity is that they never discuss how their approach might
> > actually handle complexity. Most advocates of ANNs or probability deal
> > with the problem of complexity as if it were a problem that either
> > does not exist or has already been solved by whatever tired paradigm
> > they are advocating. I don't get that.
> >
> > The major problem I have is that writing a really, really complicated
> > computer program is really, really difficult. But perhaps Abram's idea
> > could be useful here. As the program has to deal with more complicated
> > collections of simple insights that concern some hard subject matter,
> > it could tend to rely more on approximations to manage those complexes
> > of insight.
> >
> > Jim Bromer
> >
> > ________________________________
> > agi | Archives | Modify Your Subscription
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
> -------------------------------------------
