On 9/27/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Vladimir Nesov wrote:
> >
> >
> > On 9/27/07, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
> >     I explain what is the best possible type of evidence for the complex
> >     systems problem that we could ever expect to get (and I also give a
> >     water-tight explanation of why the best possible evidence *cannot*
> >     entail definitive proof).  Then, having said what the best possible
> >     evidence would be, I argue that we *do* in fact have such evidence.
> >
> >     To say that what I did there is just give "analogous examples" is a
> >     gross distortion and trivialization of the real argument.
> >
> >
> > Okay, but it doesn't change the point.
>
> No: it completely changes your objection.
>
> If the argument I gave was nothing more than citing "analogous
> examples", that argument would be so weak that you would not be forced
> to answer it.
>
> But as it is, the argument contains a hard core that demands a response.
>   It says:
>
>    1) In contrast with all other systems that we have ever tried to
> engineer, there is strong evidence that any intelligent system that is
> complete (an AGI) will have to contain a significant amount of complexity.


The other two points are corollaries; they are more or less tautological once
the first is accepted.
But saying that experience is consistent with a theory doesn't by itself
support the theory. "A witch did it" is also consistent with experience. You
may have done all that is possible under the circumstances, but it still isn't
a definitive proof, which you'll never have, even if you develop a working
AGI based on your approach. So the argument is merely terminological, about how
to name the facts and their implications.

On the other hand, this possibility should be considered in the overall design
search. I argue that, from this point of view, it's not really a special case.

>    2) Any system that contains a significant amount of complexity cannot
> be 'engineered' (i.e. you cannot start from the global behavior and use
> normal engineering methods to find the local mechanisms that give rise
> to that global behavior).
>
>    3) The approach used in AI is to try to 'engineer' a complex system,
> which according to (1) and (2) above is an impossible task.
>
> This three-step argument provokes a question:  how can anyone justify
> doing AI with a failure-guaranteed approach?


You don't know that it is 'guaranteed'; that is impossible to prove.

> In the face of that precise question, it is nothing but hand-waving to
> say such things as "I just don't think it is a problem" (which some
> people do) or "All you have done is give analogous examples" (which is
> not even true).
>
> It is sometimes frustrating that people respond to this argument, not by
> clearly understanding it and attacking some aspect of it, but by talking
> around it, evading it, waving their hands, or using some other technique
> designed to avoid having to actually confront the issue.
>
> That kind of evasiveness is a sure sign that the argument itself is
> robust.


Not necessarily. If an argument isn't communicated clearly or densely enough,
people get bored and ignore it. They don't perceive themselves as waving their
hands at an argument, but as waving their hands back at similar hand-waving.
They aren't avoiding confronting the issue; they simply don't perceive an issue
as being present.

> Vladimir Nesov wrote:
> > Of course it can. With a big enough magic box, that is. For example, the
> > whole system can happen to be a magic box. If the design falls apart at
> > that stage, so be it. But if it's consistent for the most part, I see no
> > reason to evade it.
>
> AI designs fall apart in two ways:  when people try to scale them
> up, and when people try to develop learning mechanisms to ground them in
> real-world knowledge.
>
> Both of these falling-apart modes are predicted by the complex systems
> problem:  that is exactly what we would expect if the complex systems
> problem is real.


Actually, both of these modes can have a simpler explanation: the projects
don't consider the system as a whole up front, so they fail due to imbalance,
or equivalently due to a lack of flexibility in the earlier components.

> And both of these falling-apart modes are *exactly* what we observe if
> we look at the history of all AI projects.
>
> What is that:  just a coincidence?


It's culture. AI systems are usually ugly monsters hacked together from
different parts, or, in another branch, low-level-bound systems with no
high-level view of what's going on (or with no high-level goal, so the absence
of such a view is justified). That much is apparent, and it's enough to account
for these problems.

> > So, your strategy is to experiment with low-level design and then *test*
> > whether interesting high-level competences arise? If so, such a test still
> > requires at least some theory of high-level competences, at the
> > level of definitions. If you build a theory that unifies some of these
> > competences, you can simplify the task by searching for systems that
> > satisfy the theory.
>
> No:  I really think you did not read what I said in my paper carefully
> enough, because this is such an oversimplification of what I said that I
> hardly recognize it.
>
> I stated quite clearly that the only viable strategy is to use the best
> known example of a natural intelligent system (the human mind) as a
> design template, and that only by staying as close to that design as we
> can will we reduce the amount of searching that has to be done.
>
> So I have already answered your objection:  I am building a high-level
> theory based on the design of the human mind.
>
> But that theory has to be structured in such a way that it allows itself
> to be modified by experimental results.  You have to proceed in a
> careful way from a flexible framework down to specific models.  You
> cannot just arbitrarily declare the design to be a specific set of
> choices (in the way that John Anderson or Allen Newell have done) and
> then leave those decisions untouched from then on.  Anderson and Newell
> make *extreme* design commitments at a very early stage, without any
> room for modification in the face of experimental work on the global
> behavior.


I agree: the theory must be flexible, and that can critically increase the
chances of success. The human brain/mind is an example of a working system, so
such a theory should include specific cases close to the brain/mind. It's still
just common sense.

-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]
