Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:

    This is puzzling, in a way, because this is my ammunition that you are
    using here!  That is exactly what I am trying to do:  invent an AIXML.
    I am a little baffled because you agree, but think I am not trying to do
    that....


I'm equally puzzled, since you came across to me as advocating the opposite! Okay, for the moment rather than reply point by point to this message I'll try to summarize in the hope of pruning the search space.

1. We both say we want AIXML. One of the primary goals of anything in that space is human readability, yet the only example you presented was a list of opaque identifiers with names like foo_A1 and foo_A27. How do you propose to meet the readability goal?


1. Human readability is, in my view, a bad thing to desire at the beginning. Here is (one aspect of) the reasoning behind that statement.

The main motivation that we, as AI researchers, have for the human readability requirement is that we want to do some kind of hand-assembly and hand-debugging (in a very general sense) of our AI systems. But what happens in practice is that by committing to that requirement, we usually postpone the question of how the system could have autonomously learned that human-readable knowledge in the first place. We know that we do this postponement (everyone admits that the unsupervised learning and/or grounding of logical terms is a late developer in AI research), but we excuse it in a variety of ways (which it might be better not to delve into, because that is a big subject).

But there is a substantial body of thought that says that those postponed things (mostly learning) are being postponed precisely because conventional AI has boxed itself into a corner by insisting that the representations be readable... they are in a You Can't Get There From Here situation. You might have heard the entrepreneurs' story of the Marketing Guy who came up to the Technology Guy at a company and said "I've invented this great new type of paint: you just brush it on and it produces a pattern like wallpaper!" -- to which the other replies "This is amazing! How does it work?". Marketing Guy looks offended: "I don't know how it works, I just invented it: it's up to you to figure out how the technology bit works."

The point is that it is all very well to come up with a great idea for the way that representations are structured -- that they have clear semantics, etc. -- but if you look into the learning issue in great depth, you eventually come to realize that there might not be any viable (unsupervised) learning mechanism that will actually pick up from the world that particular, preordained type of representation.

One of the arguments against this position, of course, is that We Don't Care, because if we went to enough trouble we could 'hand-build' a complete system, or get it up above some threshold of completeness beyond which it would have enough intelligence to be able to pick up the learning ball and go on to build new knowledge in a viable way (Doug Lenat said this explicitly in his Google lecture, IIRC). We would not have to do things the way human cognitive systems do them, according to this argument, because we are not constrained by the same problems.

Maybe. But that is a huge maybe. It is contradicted by the Complex Systems Problem (about which more in my AGIRI Workshop 2006 paper), for one thing. Some would also say that all the arguments that dismiss this problem sound like special pleading: they all amount to "if we keep doing what we are doing, that problem of not having a good way to acquire new knowledge autonomously will just slowly evaporate." There are a lot of people who simply don't buy that.

More importantly, there is positive evidence that if you abandon the requirement that KR have a clear semantics, you immediately start running into new kinds of powerful behavior: my example of the sonar neural network was supposed to illustrate that. Perhaps this example should be taken as a harbinger of a more general truth: abandon the "I must be able to inspect these representations and understand them" requirement, and you can start finding powerful learning systems that build their own representations, and can do a lot of thinking, but without the individual KR atoms having a predefined semantics. This does not mean that the atoms have to be completely opaque, as in the sonar example, but it does mean that we could try inventing learning mechanisms first, then later analyzing the good ones to see if we can do a post-hoc interpretation of their semantics.
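To make the "learn first, interpret later" idea concrete, here is a minimal sketch (my own illustrative code, not anything from the sonar study): a tiny hand-rolled neural network learns XOR by backpropagation, and only after training do we inspect the hidden-unit activations to see what representation it invented. Nothing in the setup pre-assigns a meaning to the hidden units.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-2-1 network: two inputs, two hidden units, one output.
# Each weight row is [w_input0, w_input1, bias].
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

err_before = total_error()

lr = 0.8
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)  # output-layer delta
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
        w_o[2] -= lr * d_o
        for i in range(2):
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]

err_after = total_error()

# Post-hoc inspection: only NOW do we look at what the hidden units
# came to encode.  Their "semantics" was never specified in advance.
for x, _ in data:
    h, o = forward(x)
    print(x, [round(v, 2) for v in h], round(o, 2))
```

Whatever features the two hidden units end up encoding, they were chosen by the learning mechanism, not by the designer; interpretation comes afterwards, if at all.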

At the very least, you can see that this philosophy and the one that I take you to be adopting are miles apart.




2. You appeared to be suggesting that each module use a different representation, which is contrary to the AIXML goal of a unified representation.

No, I don't think I meant to imply that. At the base level, all AIXML. But at higher levels of description there are acquired structures, just because of the way that the AIXML works (it is designed to *generate* structure).

On a quite separate issue, I think there may, later, be specializations and modules that are built in .... but they all use a lingua franca.


3. You appear to be advocating the 'copy the brain' approach to AI, which I don't subscribe to.

Well, not exactly.

There are several different attitudes to the 'copy the brain' idea, but I don't like most of them: I occupy an interesting middle ground between some of the well-known ones.

Here are four that I DO NOT subscribe to:

A) Neuroscience: The "Simulate a Complete Brain" approach. Yuk: I disparage this every chance I get. I just don't think they can get the resolution, and probably would not understand what they had built even if they did have the resolution.

B) Cognitive Psychology: The "Just Keep Studying Cognition" approach. This would involve doing experiments, the way we have been doing for the last fifty years, collecting human experimental data until we understand the mind, THEN building one. I don't think this is converging, by itself, because the theoretical models are all too local and too simplistic.

C) Cognitive Science: The "Cognitive Modelling" approach. (ACT-R, SOAR, etc.). These folks do have unified models of cognition (big plus: makes them better than B, above), but they presuppose answers to all the basic questions about representation first (making exactly the mistake I just criticised).

D) One variety of AI: The "GOFAI" approach, in which cognitive science is used as a loose inspiration for the way to build AI systems. Don't like this much, because the way that the 'loose inspiration' happened was just as a kludging together of bits of cognitive psychology and bits of AI, with the worst problems of both left untouched. No systematic examination of how to evaluate models. No unified theory to show how the kludges could be extended to cover all aspects of cognition.

Instead of these, I have a very specific way of proceeding, which is:

1) Do a unified, principled reconstruction of the cognitive psychology literature ... and do this in such a way as to come up with a "framework" rather than a "theory of cognition" (a la SOAR and ACT-R).

2) Do not feel obliged to be governed only by the human experimental data: we cannot easily get information about a lot of the processes by doing human experiments. The goal is not to model human cognition exactly, but to use it as an example of the only game in town, and learn from it.

3) Build complete models of cognition (using the framework) and test ideas in that context, not in the context of only local models. In other words, if I have no proof that my model could be extended in a plausible way to cover, say, learning of new concepts, then it would be a waste of time to keep working on, say, the KR atoms. Breadth-first search of the landscape, not depth-first.

4) Experiments (simulations) organized in such a way as to address the Complex Systems Problem (see my paper for a sketch of what this means). Basically, choose some mechanisms that build KR nodes: find out what they can do, then proceed accordingly.

There is one really *really* crucial thing in all of this. If the complex systems argument is correct, then the problem of finding learning mechanisms that build good KR atoms could look (to a pessimist) like a blind search through the infinite space of possible systems, hoping to hit upon the ones that are actually intelligent. You know how John Horton Conway came up with the Game of Life algorithm? Pure trial and error. That pessimist, as I say, might argue that I am advocating that we have no choice but to do the same kind of blind search.
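For readers who don't know the story: Conway settled on the birth-on-3, survive-on-2-or-3 rules only after hand-testing many candidate rule sets. The rules themselves are tiny; the interesting behavior (gliders and the rest) was discovered in them, not designed into them. A minimal sketch (my own illustrative code) makes the point about how small the searched-for rule set actually is:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.

    `live` is a set of (x, y) coordinates of live cells.  A cell is
    alive next generation if it has exactly 3 live neighbours, or if
    it is currently alive and has exactly 2.
    """
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic glider: under these rules it reappears, shifted
# diagonally by (1, 1), every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

g = glider
for _ in range(4):
    g = step(g)
```

That a two-clause rule produces a self-propagating pattern is exactly the kind of result that nothing about the rule's surface form predicts -- which is the pessimist's worry about searching the space of learning mechanisms.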

I think there is one difference between that impossible blind search process and what I am doing: we ourselves *are* cognitive systems, and as such we have insight into some of our own processes. Not great insight, it is true, but if handled in the right way, it could be extremely valuable.

Moreover, I think we use that 'insight' all the time in a surreptitious way. What my approach is designed to do is to make that use of insight systematic, in a way that none of the B, C and D groups have ever done. It is the specific way that I am trying to collect together the cognitive psychology data that is the key to what I am doing: making a true, low-level melding of cognitive science and AI, rather than one of those horrible kludges that tends to happen when cog sci and AI people get together and talk in their incompatible languages.

So why, in the end, do I think we should take any notice of the human cognitive system at all? Three-part answer:

- I think that we will eventually be forced to make learning mechanisms take precedence over pre-selected representation formats like logic.

- In that situation, with no pre-hoc understanding of the semantics of the KR nodes (because the learning mechanisms are allowed to build those however they want), we would be forced to do something like trial and error exploration of the space of possible learning systems....

- .... unless we took my advice and used everything we know about human cognition, in a unified way, with systematic experimental studies (I mean simulation experiments here, as well as human experiments) to follow up on our insights about how our own systems are working. Only by using the human data, without being constrained by the current practices of cognitive psychology, will we be able to properly explore the space of learning mechanisms.



Okay, that was a semi-brief answer.



Richard Loosemore.






-----
This list is sponsored by AGIRI: http://www.agiri.org/email
