Dave,
Sorry for the tardy reply. I had to devote some time to other pressing
matters.
First, a general comment. There seems to be a very interesting approach to
arguing one's case being taken by some posters on this list in recent days.
I believe this approach was evinced most recently, and most baldly, by
list newbie, Colin Hales. Apparently, Mr. Hales doesn't believe he is
responsible for making his arguments clear or backing up his assertions of
fact. Rather, it is we who must educate ourselves about the background for
his particular arguments and assertions so we will be able to prove those
arguments and assertions to ourselves for him. Now, THAT's an ego! This
approach, of course, raises the question, "Why should I care what Mr. Hales
argues or asserts if he isn't going to take the time and effort needed to
convince me he's not just some poseur?"
Does anyone else find this a tad insulting? It's tantamount to saying,
"Well, I've made my argument in English. It may not be perfectly clear to
you because, to really understand it, you need to be fluent in Esperanto.
If you're not, that's your problem. Go learn Esperanto so you can
understand my fabulous reasoning and be convinced of my argument's
veracity." Get real.
I can (and will) ignore Mr. Hales. But, then, you used this same approach
in your last post to me when you wrote, "The OCP approach/strategy, both
in crucial specifics of its parts and particularly in its total synthesis,
*IS* novel; I recommend a closer re-examination!"
I think not. If you really care whether I think OCP's approach is novel,
you have to convince me, not give me homework. I'm not arguing OCP's
position. You are. If you think I don't understand OCP well enough, and
if you think that is important to get me to take your argument seriously,
then it's up to you to do the heavy lifting.
In this case, though, I'll let you off the hook by gladly conceding the
point. I will accept as true the proposition that OCP's approach to NLU is
completely novel. Of course, I do this gladly because it makes not a bit
of difference.
In the first place, I didn't argue that the OCP approach was not novel in
either its design or implementation. In fact, I'm sure it is. I argued
that trying to solve the artificial intelligence "problem" by, first,
solving the NLU problem is not a novel strategy. We have Mr. Turing to
thank for it. It has been tried before. It has, to date, always failed.
But, as I said, this makes no difference simply because the fact that the
OCP strategy is novel doesn't prove it will work. Indeed, it's not even
good evidence. Prior approaches that failed were also once novel.
If the problem of NLU is AI-complete (and this is widely believed to be the
case), it will not fall to a finite algorithm with space/time complexity
small enough to make it viable in a real-time AGI. If NLU turns out not to
be AI-complete, then we still have fifty years of past failed effort by
many intelligent, sincere and dedicated people to support the argument that
it is at least a very difficult problem.
My point has been, and still is, that NLU becomes a necessary condition of
AGI IFF we define AGI as AGHI. Many people simply can't conceive of a
general intelligence that isn't human-like. This is understandable since
the only general intelligence we (think we) know something about is human
intelligence. In that context, cracking the NLU problem can (though it
needn't necessarily) be viewed as a prerequisite to cracking the AGI
problem.
But, human intelligence is not the only general intelligence we can imagine
or create. IMHO, we can get to human-beneficial, non-human-like (but,
still, human-inspired) general intelligence much quicker if, at least for
AGI 1.0, we avoid the twin productivity sinks of NLU and embodiment.
In the end, of course, both of us really have only our opinions. You can't
prove the OCP approach, novel though it may be, will finally crack the
elusive NLU problem. I can't prove it won't. I agree, therefore, that we
should agree to disagree and let history sort things out.
Cheers,
Brad
David Hart wrote:
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
So, it has, in fact, been tried before. It has, in fact, always
failed. Your comments about the quality of Ben's approach are noted.
Maybe you're right. But, it's not germane to my argument which is
that those parts of Ben G.'s approach that call for human-level NLU,
and that propose embodiment (or virtual embodiment) as a way to
achieve human-level NLU, have been tried before, many times, and
have always failed. If Ben G. knows something he's not telling us
then, when he does, I'll consider modifying my views. But,
remember, my comments were never directed at the OpenCog project or
Ben G. personally. They were directed at an AGI *strategy* not
invented by Ben G. or OpenCog.
The OCP approach/strategy, both in crucial specifics of its parts and
particularly in its total synthesis, *IS* novel; I recommend a closer
re-examination!
The mere resemblance of some of its parts to past [failed] AI
undertakings is not enough reason to dismiss those parts, IMHO, dislike
of embodiment or NLU or any other aspect that has a GOFAI past lurking
in the wings notwithstanding.
OTOH, I will happily agree to disagree on these points to save the AGI
list from going down in flames! ;-)
-dave
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com