I note that Ben has banned discussion of this topic on the OpenCog list, so I am leaving this list.

If anybody is interested in pursuing any of this stuff any further, I will be happy (well, mostly happy) to take it up on the AGI list.



Borislav Iordanov wrote:
On Fri, Aug 1, 2008 at 12:16 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Borislav Iordanov wrote:
Richard,

No, this does not address the complex systems problem, because there is
a very specific challenge, or brick wall, that you do not mention, and
there is also a very specific recommendation, included in the original
CSP paper, that you also gloss over.
Have you read about the OCP design in more depth, or just the
introductory parts? It seems to me that what you call the "complexity
problem" is addressed quite carefully, and in the most comprehensive
way that I have seen, by the approach that OCP takes.
It sounds like you might not be aware of the specific issue that I am
raising here, which was described in detail in the paper I published on
the "Complex Systems Problem".

I'm quite aware of the issue, and in fact I think it's much bigger and
harder than the way you describe it in your paper (where you mostly
assume that people are stuck trying to find analytical descriptions
of systems, and that is why they fail), and I still think OCP
addresses it in a very convincing manner.

Then please enlighten me. I see absolutely nothing in the design that addresses the problem. I would welcome it if you could explain exactly how the design overcomes it, and perhaps include a description of what you understand the CSP to be, just so we are on the same page.




It would not be necessary for me to read the whole document, because
the only way (as far as I know) that the CSP can be addressed is by a
combination of two things: (1) a design that stays very close to the
human cognitive system, and (2) a methodology designed specifically
to combat the CSP.

If you start by assuming that (1) has to be part of the solution, then
you are not going to be happy with OCP. Insofar as the human cognitive
system evolved "by accident", as a byproduct of something that had a
very different purpose and within a completely different medium, I
would actually say that this is a riskier bet than what OCP is trying
to do.

But ... this is very frustrating, because the statement of the complex systems problem makes it quite clear that all of the available evidence goes exactly against what you just said. The "accidental" origins of the human system; the "very different purpose" of the human system; the "different medium" of the HCS ... none of these things has any bearing on the argument that I presented. If you believe they do, please slow down and spell out the details for me, because I see no way to relate them to the argument.

I do not simply "assume" that a human cognitive system design must be part of the solution, I come to that conclusion as a result of a very careful line of reasoning.


As for (2), I believe OCP has precisely this. The way Ben
describes the roadmap, it sounds like the focus is on a single system,
a baby AI that learns and is being trained, etc. But given enough
resources (computers, and people to train them), nothing prevents the
creation of many such babies in an evolutionary process. Also, looking
at a single system, note that the human cognitive system is actually
built by an evolutionary process that occurs after birth within the
brain itself (see the book "The Symbolic Species"), and OCP is designed
to learn through an analogous evolutionary process. So you have
evolution (i.e. random exploration of a "solution space" occurring at
several levels and time-scales), and in a sense even your (wrongly
targeted, I believe) requirement (1) is satisfied as well.

Not at all. The development that occurs within the human system, and the development that is supposed to occur within the OCP design, are both ontogenetic, not phylogenetic. They both occur in the context of the system's design, so if that design is wrong, no amount of developmental adaptation will fix it.

If you took a human brain design, then randomly altered all of the choices about wiring, architectural layout, number of components, distribution of neurotransmitters and so on, would the same developmental algorithms just work around all of your random redesign work, and make the system just as intelligent anyway? I find that extremely hard to believe.

What that means is that I do not believe the developmental algorithms (what you call "evolutionary processes") would have a snowball's chance in hell of exploring the design space far enough to find their way back to the original design, or to some other design that works just as well. Perhaps it would happen, but we have absolutely no reason to believe that it would.

Similarly, we have absolutely no reason to believe, yet, that the developmental algorithms (or any of the other lability that is built into the OCP design process, as opposed to the OCP design content) would be capable of exploring far enough across the design space to find a point at which there was a viable, human-level intelligence design.

What it amounts to is this: just how densely populated is the design space of all possible intelligence-like systems with viable human or human+ intelligent systems? We know of one design point - the one occupied by the human cognitive system - but we have no idea whatsoever how densely populated the rest of the space is: for all we know, there might only BE one viable design that works!

The assumption behind OCP, and other AI projects with a similar methodology, is that the design space contains sufficiently many solutions that we can just start with a design that "looks good on paper", and then do some parameter tweaking to move through the design space to a viable solution. In other words, the assumption is that there is *always* a viable design within reach, if you make your methodology flexible enough.

But this assumption is pure speculation.  Pure wishful thinking.

And the complex systems problem indicates that moving around in this space is extremely difficult: a given set of changes to the design does not necessarily move you in a direction that you want to go (you cannot predict the relationship between global and local very well). This is not like hill climbing by gradient ascent; it is like hill climbing in which all possible algorithms are guaranteed to take you in an almost completely random direction!
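To make that contrast concrete, here is a toy sketch (Python; the fitness functions, target point and step sizes are entirely my own invention for illustration, not anything taken from the CSP paper or from OCP). A greedy hill climber on a smooth landscape, where local changes relate predictably to global quality, homes in on the one known-good design; the identical climber on a landscape where the local-to-global mapping is effectively random ends up no closer to that design than where it started:

import random

random.seed(0)
DIMS = 5
# The one design point we know is viable (standing in for the human
# cognitive system); purely illustrative.
TARGET = [random.uniform(-1, 1) for _ in range(DIMS)]

def distance_to_target(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, TARGET)) ** 0.5

def smooth_fitness(x):
    # Local changes relate predictably to global quality.
    return -distance_to_target(x)

def chaotic_fitness(x):
    # Local changes map to global quality in an effectively random way:
    # the score is just a hash of the design, so "uphill" means nothing.
    h = hash(tuple(round(xi, 3) for xi in x))
    return (h % 10000) / 10000.0

def hill_climb(fitness, steps=5000, step_size=0.05):
    x = [random.uniform(-1, 1) for _ in range(DIMS)]
    for _ in range(steps):
        candidate = [xi + random.gauss(0, step_size) for xi in x]
        if fitness(candidate) > fitness(x):  # greedy: accept any "improvement"
            x = candidate
    return x

for name, f in [("smooth", smooth_fitness), ("chaotic", chaotic_fitness)]:
    final = hill_climb(f)
    print("%-8s landscape -> distance to the viable design: %.3f"
          % (name, distance_to_target(final)))

Note that the second climber happily accepts "improvements" all the way along: every move looks locally justified, yet the trajectory through the design space is a random walk. That is exactly the trap.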

Under those circumstances, what do you want to do? Go to some place in the design space where you *know* that there is a solution, then put up with the randomness of the movements in the design space? Or do you want to go to some far-away corner of the design space, and then just trust your instincts that there is a solution within reach?

If you genuinely do understand the CSP, then you know that what it says is that AI researchers have been making the second of these two choices for over fifty years now, and rather unsurprisingly, they have gotten nowhere.



Richard Loosemore




