Jim Bromer wrote:
> #####ED PORTERS CURRENT RESPONSE ########>
> Forward and backward chaining are not hacks. They have been two of the
> most commonly used, and often successful, techniques in AI search for at
> least 30 years. They are not some sort of wave of the hand. They are
> much more concretely grounded in successful AI experience than many of
> your much more ethereal, and very arguably hand-waving, statements that
> many of the difficult problems in AI are to be cured by some as yet
> unclearly defined emergence from complexity.
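(For readers following the thread: the two techniques Ed refers to can be sketched in a few lines. This is a generic textbook-style illustration over propositional if-then rules, not anyone's particular system; the rule set and fact names are invented for the example.)

```python
# Minimal sketch of forward and backward chaining over propositional
# if-then rules. Each rule pairs a set of premises with a conclusion.
# The example rules and facts are invented purely for illustration.

RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def forward_chain(facts, rules):
    """Data-driven: apply rules to known facts until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: prove `goal` by recursively proving rule premises.

    Assumes an acyclic rule set, so the recursion terminates.
    """
    if goal in facts:
        return True
    return any(
        conclusion == goal
        and all(backward_chain(p, facts, rules) for p in premises)
        for premises, conclusion in rules
    )
```

For example, `forward_chain({"has_feathers", "can_fly"}, RULES)` derives "nests_in_trees" bottom-up, while `backward_chain("nests_in_trees", {"has_feathers", "can_fly"}, RULES)` proves the same conclusion top-down.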
Richard Loosemore's response:
Oh dear: yet again I have to turn a blind eye to the ad hominem insults.
----------------------------------------------
There were no ad hominem insults in Ed's response. His comment about
Richard's ethereal hand waving was clearly and unmistakably within the
boundaries that Richard has set in his own criticisms again and again.
And Ed specified the target of the criticism when he spoke of the
"difficult problems in AI ...[which]... are to be cured by some as yet
unclearly defined emergence from complexity." All Richard had to do was
to answer the question, and instead he ran for cover behind this bogus
charge of being the victim of an ad hominem insult.
Jim,
Take a more careful look, if you please.
Ed and I were talking about a particular *topic*, but then in the middle
of the discussion about that topic, he suddenly declared that the
techniques in question were "much more concretely grounded in successful
AI experience than many of your much more ethereal, and very arguably
hand-waving, statements that many of the difficult problems in AI are to
be cured by some as yet unclearly defined emergence from complexity."
Instead of trying to make statements about the topic, he tried to
denigrate some proposals that I have made. Whether or not my proposals
are worthy of such criticism has nothing to do with the topic that was
under discussion. He just took a moment out to make a quick insult.
To make matters worse, what he actually says about my proposals is also
a pretty bad misrepresentation of what I have said. My central claim is
that there is a problem at the heart of the current AI methodology. I
have said that there is a sickness there. I have also given an outline
of a possible cure - but I have been quite clear to everyone that this
is just an outline of the cure, nothing more. Now, do you really think
that a physician should be criticised for IDENTIFYING a malady, because
he did not, in the same breath, also propose a CURE for the malady?
Finally, you yourself say that I "ran for cover behind this bogus
charge of being the victim of an ad hominem insult" .... but I did
nothing of the sort. I went on to ignore the insult, giving as full a
reply to his point as I would have done if the insult had not been there.
As I said, I turned a blind eye to it, albeit after pointing it out.
Tut tut.
If upon reflection, Richard sincerely believes that Ed's comment was an
ad hominem insult, then we can take this comment as a basis for
detecting the true motivation behind those comments of Richard which are
so similar in form.
For example, Richard said, " Understanding that they only have the
status of hacks is a very important sign of maturity as an AI
researcher. There is a very deep truth buried in that fact."
While I have some partial agreement with Richard's side on this one
particular statement, I can only conclude, by Richard's own measure of
"ad hominem insults," that Richard must have intended this remark to
have that kind of effect. Similarly, I feel comfortable concluding that
every time Richard uses his "hand waving" argument, there is a good
chance that he is just using it as an all-purpose ad hominem insult.
Excuse me? "Ad hominem" means that the remarks were designed to win an
argument by insulting the other person. Ed is not an AI researcher; he
himself admits that he has only an outsider's perspective on this field,
and that he is still learning. I was mostly directing that comment at
people who claim to be far more experienced than he is.
It is too bad that Richard cannot discuss his complexity theory without
running from the fact that his solution to the problem is based on his
non-explanation that,
"...in this "emergent" (or, to be precise, "complex system") answer to
the question, there is no guarantee that binding will happen. The
binding problem in effect disappears - it does not need to be explicitly
solved because it simply never arises. There is no specific mechanism
designed to construct bindings (although there are lots of small
mechanisms that enforce constraints), there is only a general style of
computation, which is the relaxation-of-constraints style."
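(The "relaxation-of-constraints" style Richard describes can be loosely illustrated by a standard numerical relaxation loop: every cell repeatedly enforces a small local constraint, and a globally consistent state settles out with no global mechanism constructing it. This is a generic Jacobi-style toy, not Richard's actual proposal; the function and parameter names are invented for the example.)

```python
# Toy illustration of relaxation-style computation: each free cell is
# repeatedly nudged toward the average of its two neighbours (a purely
# local constraint), and a globally consistent state emerges without any
# mechanism that explicitly computes the global solution.

def relax(values, fixed, iterations=500):
    """Jacobi-style relaxation on a 1-D chain of cells.

    `values` is a list of floats; indices in `fixed` are boundary cells
    whose values never change. All other cells move toward the average
    of their neighbours until the chain settles.
    """
    values = list(values)
    for _ in range(iterations):
        new = list(values)
        for i in range(1, len(values) - 1):
            if i not in fixed:
                new[i] = 0.5 * (values[i - 1] + values[i + 1])
        values = new
    return values
```

For instance, `relax([0.0, 0.0, 0.0, 0.0, 10.0], fixed={0, 4})` settles toward the straight line [0, 2.5, 5, 7.5, 10]: the interpolation is never computed anywhere; it simply falls out of many small constraint enforcements.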
From reading Richard's postings, I think that Richard does not believe
there is a real problem, because the nature of complexity itself will
solve it - once someone is lucky enough to find the right combination of
initial rules.
For those who believe that problems are solved through study and
experimentation, Richard has no response to the most difficult problems
in contemporary AI research except to cry foul. He does not even
consider such questions to be valid.
There is not much I can do in the face of such a deep misunderstanding
of the actual words I have written on the topic.
I think you are just venting, to be honest.
If you had asked me to clarify my remarks I would have been happy to
try, but your tone indicates that this would be a waste of time.
Richard Loosemore.
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/