David Hart wrote:
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:

    [snip]  Unfortunately, as long as the mainstream AGI community
    continue to hang on to what should, by now, be a
    thoroughly-discredited strategy, we will never (or too late) achieve
    human-beneficial AGI.


What a strange rant! How can something that's never before been attempted be considered "a thoroughly-discredited strategy"? I.e., creating an AI system designed for *general learning and reasoning* (one with AGI goals thought through more clearly than anyone has attempted previously: http://opencog.org/wiki/OpenCogPrime:Roadmap ) and then carefully and deliberately progressing that AI through Piagetian-inspired stages of learning and development, all the while continuing to methodically improve the AI with ever more sophisticated software development, cognitive algorithm advances (e.g. planned improvements to PLN and MOSES/Reduct), reality modeling and testing iterations, homeostatic system tuning, intelligence testing and metrics, etc.

Please: "strange rant"? I've been known to employ inflammatory rhetoric in the past when my blood was boiling, and I have always been sorry I did. I have an opinion. It doesn't agree with your opinion. That's called a disagreement among peers, not a "strange rant."

First, you have taken my statement out of context. I was NOT referring to Ben G.'s overall approach to AGI. His *concept* of AGI, if you will. I was referring to the (not "his") strategy of making human-level NLU a prerequisite for AGI (this is not a strategy pioneered by Ben G.).

Human-level NLU is an AI-complete problem. So, this strategy makes the goal of getting to AGI 1.0 dependent on solving an AI-complete problem. The strategy of using embodiment to help crack the NLU problem (also not pioneered by Ben G.) may very well be another AI-complete problem (indeed, it may contain a whole collection of AI-complete problems). I don't think that's a very good plan. You, apparently, do. I can point to past failures; you can only point to future possibilities. Still, neither of us is going to convince the other we are right. End of story. Time will tell (and this e-mail list is conveniently archived for later reference).

Second, "...never before been attempted..."? Simply not true. I was in high school when this stuff was first attempted. I personally remember reading about it. I haven't succumbed to Alzheimer's yet. By the time I got to college, most of the early predictions had already been shown to have been way too optimistic. But, since eyewitness testimony is not usually "good enough," I give you this quote from the Wikipedia article on Strong AI (which is what searching Wikipedia for AGI will get you):

"The first generation of AI researchers were convinced that [AGI] was possible and that it would exist in just a few decades. As AI pioneer Herbert Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."[10] Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who accurately embodied what AI researchers believed they could create by the year 2001. Of note is the fact that AI pioneer Marvin Minsky was a consultant[11] on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time, having himself said on the subject in 1967, "Within a generation...the problem of creating 'artificial intelligence' will substantially be solved."[12]"
(http://en.wikipedia.org/wiki/Artificial_general_intelligence)

So, it has, in fact, been tried before. It has, in fact, always failed. Your comments about the quality of Ben's approach are noted. Maybe you're right. But, it's not germane to my argument, which is that those parts of Ben G.'s approach that call for human-level NLU, and that propose embodiment (or virtual embodiment) as a way to achieve human-level NLU, have been tried before, many times, and have always failed. If Ben G. knows something he's not telling us, then, when he does tell us, I'll consider modifying my views. But, remember, my comments were never directed at the OpenCog project or Ben G. personally. They were directed at an AGI *strategy* not invented by Ben G. or OpenCog.


One might well have said in early 1903 that the concept of powered flight was "a thoroughly-discredited strategy." It's just as silly to say that now [about Goertzel's approach to AGI] as it would have been to say it then [about the Wright brothers' approach to flight].


What? No, it's not "just as silly."

Let me see if I have this straight. You would have me believe that, because one might well have said in early 1903 that the concept of powered flight was "a thoroughly-discredited strategy," my objection to a 2008 AGI strategy is "just as silly." Nice try, but...

First, you co-mingle two different arguments when you say the *concept* of powered flight was a "thoroughly-discredited" *strategy*. These are two different things. Saying a concept is thoroughly discredited is a much stronger claim than saying a strategy is thoroughly discredited. I don't know what "one" would have done in 1903 (nor, by the way, do you). But, knowing me pretty well, I'd say I would NOT have discounted the *concept* of powered flight. It was not a proven concept at the time, but there had been enough experimentation publicized by early 1903 for a person of scientific bent to believe it could eventually be done. With the right *strategy*. I have *never* said that I believe Ben G.'s *concept* of AGI has been thoroughly discredited. In fact, I think elements of Ben G.'s AGI concept are brilliant.

But, this says nothing of any particular *strategy* for realizing that concept. I "might," in fact, have said back in 1903 that I thought the *strategy* being employed by Samuel Pierpont Langley was fatally flawed. But, it would have been based on the widely publicized FAILED efforts of that very prominent, government-financed gentleman's powered-flight projects. It would NOT have applied to anything the Wright brothers were or were not doing at the time. The Wright brothers kept their ideas about how to solve that problem (their *strategy*) very close to their vests. Indeed, it is well-documented that they were fanatics about keeping their ideas and processes secret. "One" doesn't criticize that which "one" knows nothing about.

So, let's recap, shall we? I do NOT think Ben G.'s concept of AGI is thoroughly discredited. I do think two *strategies* that have been adopted by Ben G. are problematic and that the "NLU first" strategy has been thoroughly discredited over a period of more than fifty years. It is well-accepted as an AI-complete problem. Every previous attempt to create an AGI has gotten bogged down in its mud and died a painful, prolonged death there. Embodiment might help. It might also be another (or multiple other) AI-complete problem(s). I don't think it's worth the expenditure of scarce engineering resources to find out. Just my opinion. And, this is one case in which I would happily be wrong. It's just a little bit too early in the game for anyone to claim that I am.

I wish the OpenCog project all the luck in the world. Really, I do. It is not now, nor has it ever been, my intent to directly criticize any part of that project. And, as far as I know, I haven't. You seem to think I have and that pains me. But, there's nothing I can do about that beyond this reply. I've always been a believer in multiple approaches to tough problems. Not putting all of one's eggs in a single basket. And, a little competition can be a powerful motivator. Vive la difference!

Brad

-dave



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com