Mike, Vladimir, Ben, et al.,

The mere presence of philosophy is proof positive that there are some domains in which GI doesn't work well at all. Are those domains truly difficult, or just ill-adapted to GI? The mere existence of Dr. Eliza would seem to show that those domains are NOT difficult - but rather that we are just missing neuron type 201 or something.
No, an AGI can NOT figure these things out on its own! First, much of the "data" underlying philosophy has long been lost. Sun Tzu is still taught in military colleges, even though the battles on which that philosophy was based were fought thousands of years ago and have long been forgotten. Further, those battles were fought with primitive hand weapons and bamboo armour, yet these non-obvious principles still apply to modern heavy weapons. Note that these principles predict a quick demise for the U.S.

Hence, I am sort of on Ben's side in this particular discussion (Ben, please correct me if I am wrong in this): an AGI need NOT engage in philosophy to be interesting and even useful, though such an AGI will never rise to become a singularity, but will remain more of a pet. Maybe from such an AGI we can learn enough to build a truly powerful AGI.

Closing with yet another entry for Ben's list:

*Limits to AGI: GI has fundamental (and somewhat simplistic) limits, which philosophy, decision theory, and some AI efforts seek to surpass. There is absolutely no evidence that an AGI that is better/stronger than our own GI will be any better at competing in the real world, just as many/most of the smartest people in our population (e.g. AGI researchers) are some of society's least successful people, often unable even to hold a job. Hence, if the effort is to produce cheap droids, then we already have more than enough biological droids. However, if the effort is to produce super-smart machines able to lead our society, then there are some really fundamental philosophical things that have yet to be understood well enough to start engineering such machines.*

Steve Richfield

====================

On 10/20/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
> Vlad: Good philosophy is necessary for AI... We need to work more on the
> foundations, to understand whether we are going in the right direction
>
> More or less perfectly said.
> While I can see that a majority of people here don't want it, actually
> philosophy (which should be scientifically based) is essential for AGI,
> precisely as Vlad says - to decide what are the proper directions and
> targets for AGI. What is creativity? Intelligence? What are the kinds of
> problems an AGI should be dealing with? What kind(s) of knowledge
> representation are necessary? Is language necessary? What forms should
> concepts take? What kinds of information structures, e.g. networks, should
> underlie them? What kind(s) of search are necessary? How do analogy and
> metaphor work? Is embodiment necessary? Etc., etc. These are all matters
> for philosophical as well as scientific as well as
> technological/engineering discussion. They tend in practice to be more
> philosophical because these areas are so vast that they can't be neatly
> covered - at least not at present - by any scientific,
> experimentally-backed theory.
>
> If your philosophy is all wrong, then the chances are v. high that your
> engineering work will be a complete waste of time. So it's worth
> considering whether your personal AGI philosophy and direction are viable.
>
> And that is essentially what the philosophical discussions here have all
> been about - the proper *direction* for AGI efforts to take. Ben has
> mischaracterised these discussions. No one - certainly not me - is
> objecting to the *feasibility* of AGI. Everyone agrees that AGI in one
> form or another is indeed feasible, though some (and increasingly, though
> by no means fully, Ben himself) incline to robotic AGI. The arguments are
> mainly about direction, not feasibility.
>
> (There is a separate, philosophical discussion about feasibility in a
> different sense - the lack of a culture of feasibility, which is perhaps,
> subconsciously, what Ben was also referring to - no one, but no one, in
> AGI, including Ben, seems willing to expose their AGI ideas and proposals
> to any kind of feasibility discussion at all - i.e. how can this or that
> method solve any of the problems of general intelligence? This is what
> Steve R has pointed to recently, albeit IMO in a rather confusing way.)
>
> So while I recognize that a lot of people have an antipathy to my personal
> philosophising, one way or another you can't really avoid philosophising,
> unless you are, say, totally committed to just one approach, like OpenCog.
> And even then...
>
> P.S. Philosophy is always a matter of (conflicting) opinion. (Especially,
> given last night's exchange, philosophy of science itself.)
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
