>> A problem about mathematical series, which you offer, is not an
>> adaptive/AGI problem.

Of course not. But it clearly disproved your statement, which was its sole purpose.

>> Similarly an AI that deals with series will be able to deal with all kinds
>> of hitherto unseen series.

Duh.

>> A truly adaptive/AGI problem is where your existing ways don't work, your
>> rules simply don't apply, and the problem is not only unseen but unexpected.

You're not getting it. Yes, your standard simple rules don't apply. But higher-level rules and meta-rules kick in to learn and develop new rules. And those higher-level rules and meta-rules have to come from somewhere . . . . Where do you think they come from if they aren't part of your world model?

>> A true AGI - like the human or animal brain - has the capacity to cope with
>> such divergent, unexpected problems, which require it to produce altogether
>> new, as yet unestablished ways of reaching goals from within its existing
>> database/world model.

Sorry, but the problems that you quote as supposed examples of real "unexpectedness" don't strike me as impossible for the system that I'm proposing at all (hard work, obviously yes -- but impossible? hell no).

>> You seem to be interested only in basic AI.

Hardly (nice insult, by the way). Further, you seem unwilling even to attempt to understand what I'm trying to say. The problems that you quote are not at all what I would call unexpected, and they *are* soluble by a sophisticated enough model that can learn. We clearly aren't communicating anywhere near well enough for me to continue bothering with this.

----- Original Message -----
From: Mike Tintner
To: [email protected]
Sent: Wednesday, May 02, 2007 7:47 PM
Subject: Re: [agi] rule-based NL system

Nothing ridiculous about it. It's totally to the point. A problem about mathematical series, which you offer, is not an adaptive/AGI problem. I assume that your machine knows about series. All AI machines will continually be presented with "unseen" problems.
A simple calculator will encounter sums that it has never seen before. But they are "expected," and in a very general way "predictable." The calculator or AI machine has ways of dealing with them. It has a set of rules that allow it to deal with a vast number of previously unseen situations/problem variations. Similarly, an AI that deals with series will be able to deal with all kinds of hitherto unseen series.

A truly adaptive/AGI problem is where your existing ways don't work, your rules simply don't apply, and the problem is not only unseen but unexpected.

*Problem: fit these pieces into that case - but the pieces just don't fit, they occupy a larger area than the case - what do you do now?

*Problem: shore up a leaking dam - so, if you're a beaver, you keep doing what you should - shoring with stones and branches - but the dam still keeps leaking - what do you do now?

*Problem: "remove six matches to form ten" - you have three sets of five matches each, and you must take away six matches - but that will leave nine matches, not ten... what do you do now?

A true AGI - like the human or animal brain - has the capacity to cope with such divergent, unexpected problems, which require it to produce altogether new, as yet unestablished ways of reaching goals from within its existing database/world model. That is the essence of AGI, and how the human brain achieves it is what has to be explained. You seem to be interested only in basic AI.

----- Original Message -----
From: Mark Waser
To: [email protected]
Sent: Wednesday, May 02, 2007 11:18 PM
Subject: Re: [agi] rule-based NL system

>> No it's not "prediction" - nothing can predict the unexpected.

Unexpected is *totally* different from previously unseen. If you've seen that 1, 3, 5, 7, 9, & 11 are large & black and that 2, 4, 6, 8, 10 & 12 are small & red but you've never seen 13, you still have expectations about it and can make (what is likely to be very good) predictions about it.
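The odd/even example above can be sketched in a few lines of Python: the learner has never seen 13, but it induces a rule from the labeled examples and extrapolates. This is a toy illustration of the point, not anyone's actual system; the candidate features and function names are invented for the sketch.

```python
# Labeled examples: odd numbers were large & black, evens small & red.
seen = {
    1: ("large", "black"), 3: ("large", "black"), 5: ("large", "black"),
    7: ("large", "black"), 9: ("large", "black"), 11: ("large", "black"),
    2: ("small", "red"), 4: ("small", "red"), 6: ("small", "red"),
    8: ("small", "red"), 10: ("small", "red"), 12: ("small", "red"),
}

# Candidate features the learner can test against the data (illustrative).
features = {
    "parity": lambda n: n % 2,
    "magnitude": lambda n: n > 6,
}

def induce_rule(examples, features):
    """Return (feature_name, mapping) for the first feature whose value
    perfectly determines the label on every seen example, else None."""
    for name, f in features.items():
        buckets = {}
        for n, label in examples.items():
            buckets.setdefault(f(n), set()).add(label)
        if all(len(labels) == 1 for labels in buckets.values()):
            return name, {k: next(iter(v)) for k, v in buckets.items()}
    return None

name, mapping = induce_rule(seen, features)
print(name)                          # parity separates the data perfectly
print(mapping[features[name](13)])   # prediction for the never-seen 13
```

The unseen 13 is still "expected" in Tintner's sense: the induced parity rule covers it, so the prediction ("large", "black") falls out immediately. Nothing here copes with a case where every candidate feature fails, which is the scenario the rest of the thread argues about.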
>> What distinguishes divergent vs convergent intelligence (AGI vs AI) is the
>> ability to find new ways of overcoming new [previously unseen/unexpected]
>> obstacles to your goals - when your current database/world model has
>> nothing immediately to offer.

Then what *are* you using if your current database/world model has "nothing" to offer? It *has* to be offering something -- like suggested plans to try, or ways to get more information. Please take the time to try to understand what I'm saying rather than putting up ridiculous strawmen.

----- Original Message -----
From: Mike Tintner
To: [email protected]
Sent: Wednesday, May 02, 2007 6:05 PM
Subject: Re: [agi] rule-based NL system

MW: the fact that prediction of previously unseen things is critical to intelligence.

No it's not "prediction" - nothing can predict the unexpected. What distinguishes divergent vs convergent intelligence (AGI vs AI) is the ability to find new ways of overcoming new [previously unseen/unexpected] obstacles to your goals - when your current database/world model has nothing immediately to offer.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&
