Derek Zahn wrote:


Thanks again Richard for continuing to make your view on this topic
clear to those who are curious.

As somebody who has tried in good faith and with limited but nonzero
success to understand your argument, I have some comments.  They are
just observations offered with no sarcasm or insult intended.

Thanks for the thoughtful comments.

I wonder if it would help if I reiterated that this was supposed to be an illustration of the *manner* in which the CSP is likely to manifest itself, rather than the reasons why we should believe it will manifest itself?

In other words, what I was trying to achieve was an illustration of the kind of situation that would arise *if* the argument itself was sound. I was trying to do this because there are many people who misinterpret the argument's supposed impact. In particular, many people assume that what I am saying is that intelligence is a completely emergent property of the human mind ... so drastically emergent that it just springs out of apparent chaos. By attempting to give a more detailed example I was hoping to show that the type of situation that could arise might be very subtle - little obvious evidence of 'complexity' - and yet at the same time quite devastating in its impact. It was that contrast between the small complexity footprint and big kick that I was trying to bring out.

Alas, most of your observations and questions bring us back to the background arguments and reasoning (which is what I was trying to leave out).

Let me try to address some of them.

1) The presentations would be a LOT clearer if you did not always
start with "Suppose that..." and then make up a hypothetical
situation.  As a reader I don't care about the hypothetical
situation, and it is frustrating to be forced into trying to figure
out if it is somehow a metaphor for what I *am* interested in, or
what exactly the reason behind it is.  In this case, if you are
actually talking about a theory of how evolution produced a
significant chunk of human cognition (a society of CBs), then just
say so and lead us to the conclusions about the actual world.  If you
are not theorizing that the evolution/CBs thing is how human minds
work, then I do not see the benefit of walking down the path.  Note
that the basic CB idea you use here strikes me as a good one; it
resonates with things like Minsky's Society of Mind, as well as the
intent behind things like Hall's Sigmas and Goertzel's subgraphs.

My strategy was as follows.

(1) Suppose that the human mind is built in such-and-such a way.

(2) One consequence of it being that way would be that it would be critically dependent on some mechanisms that give it stability without their contribution being at all obvious.

(3) Although this hypothetical mind design is just a guess, it illustrates an entire class of designs that can be very, very different from one another, but which all share the common feature that their stability would be dependent on mechanisms that supply stability without doing so in a way that is understandable.

(4) Systems in this general class are, of course, the ones that are called "complex", and the reason that I chose a simple example to illustrate the class is that there are many other examples in which it is much harder to see the linkage between global behavior and local mechanisms ... so I was just trying to pick an example where it becomes as easy to comprehend as possible.

(5) One thing we know for sure is that the human mind has all the ingredients that normally give rise to complexity of the 'mild' sort shown in this example, and so it would be a truly astonishing fact if the human mind did not, in some way, have some global-local disconnects tucked away somewhere.

(6) I do not necessarily believe that the particular type of global-local disconnect that I used in my example is exactly the one that manifests itself in the human mind, but if I avoid specific examples and instead talk in the abstract, people find it very hard to imagine what it might mean to say that a "small amount of complexity" might make it impossible to build an intelligence as good as the human mind.

Unfortunately, my example is a little ambiguous: do I really think this is true in the human case, or is it just a made-up example? Well, it is a little bit of both. I actually think that it could be true, but I am not in a position to claim it to be true. So it is partly a metaphor and partly real. I can see how that might be frustrating from the reader's point of view. My bad.

It is important to understand, though, that in creating this hypothetical example I was merely trying to illustrate an abstract concept that would otherwise leave many people perplexed.

2) Similarly, when you say
> if we were able to look inside a CB system and see what the CBs are
> doing [Note: we can do this, to a limited extent: it is called
> "introspection"], we would notice many aspects of CB behavior ...

It would be a lot better if you left out the "if" and the "would".
Say "when we look inside this CB system..." and "we do notice many
aspects..." if that is what you mean.  If, again, this is some sort
of strange hypothetical universe, then as a reader I am not very
interested in speculations about it.

Comments read and understood:  same issue as above.

3) When you say

> But now, here is a little problem that we have to deal with. It
> turns out that the CB system built by evolution was functioning
> *because* of all that chaotic, organized mayhem, *not* in spite
> of it.

Assuming that you are actually talking about human minds instead of a
hypothetical universe, this is a very strong statement.  It is a
theory about human intelligence that needs some support.  It is not
necessarily a theory about "intelligence-in-general"; linking it to
intelligence in general would be another theory requiring support.
You may or may not think that "intelligence in general" is a coherent
concept; given your recent statements that there can be no formal
definition of intelligence, it's hard to say whether "intelligence"
that is not isomorphic to human intelligence can exist in your view.

Now, here you have slipped me back from 'illustration' mode to 'presenting the original argument' mode.

I claim that intelligent systems appear to contain an abundance of those ingredients that usually generate complexity (even Ben agrees on this, so I usually cite his opinion at this point). If that is true, then the situation illustrated in my hypothetical example will occur, in different forms, in any intelligent system.





4) Regarding:

> Evolution explored the space of possible intelligent mechanisms. In
> the course of doing so, it discovered a class of systems that work,
> but it may well be that the ONLY systems in the whole universe that
> can function as well as a human intelligence involve a small
> percentage of weirdness that just balances out to make the system
> work. There may be no cleaned-up versions that work.

The natural response is:  sure, this "may well be", but it just as
easily "may well not be".  This is addressed in your concluding
points, which say that it is not definite, but is very likely.  As a
reader, I do not see a reason to suppose that this is true.  You
offer only the circumstantial evidence that AI has failed for 50
years, but there are many other possible reasons for this:

Again, you are asking about the background argument. The evidence for it is not the historical failure of AI. It is the fact that no matter what design of AI anyone has ever chosen to work with, the functioning of the system quickly becomes dependent on interacting heuristics that put the system squarely in the complex systems class. Even the most viciously "logical" formalism that you can imagine still relies on an inference engine that does things that are (a) not provably correct, and (b) utterly critical for the functioning of the system.

So, no matter what people do, their system designs all lapse into complex systems. That is the core reason to believe the above claim: it does not seem possible to do the job of getting an intelligence built without going there.

You see, people like Ben would say "Sure, this happens: there is definitely some complexity ... but we just do not think it will turn out to have a significant impact". And my claim is that even a small amount of complexity can easily be devastating. It is that "small footprint, big kick" message that is at the core of my argument.


- Maybe it's just hard.  Many aspects of the universe took more than
50 years to understand, and many are still not understood.  I
personally think that if this is true we are unlikely to be just a
few years from the solution, but it does seem like a reasonable
viewpoint.

The idea that it is just hard is the default response. It does not address the specific, positive evidence that we can cite by looking at many examples of complex systems, and then comparing those examples to the case of building an intelligence.


- maybe "logic" just stinks as a tool for modeling the world.  it
seemed natural but looking at the things and processes in the human
universe logically seems like a pretty poor idea to me.  maybe
"probabilistic logic" of one sort or another will help.  but the
point here is that it might not be a complex systems issue, it might
just be a knowledge representation and reasoning issue.  perhaps
generated or evolved "program fragments" will fare better; perhaps
things that look like "neural clusters" will work, perhaps we haven't
discovered a good way to model the universe yet.

This is an interesting paragraph. All of the 'perhaps' examples you cite are, in fact, complex. We could go into the details, and perhaps some time we should, but it is not that easy to show.


- Maybe we haven't ripped the kitchen sink out of the wall yet...
Maybe "intelligence" will turn out to be a conglomeration of 1543
different representation schemes and reasoning tricks, but we've only
put a fraction together so far and therefore only covered a small
section of what intelligence needs to do.

This is the 'critical mass' response. Again, it could be true, but when you suggest that many components interacting together might be the answer, you have to bear in mind that nobody has ever suggested a way to get even the existing smaller number of components to interact in a way that does not bring in complexity. All I am doing is extrapolating from that observed complexity. You have to start looking at specific examples to see this, but (as I said earlier) it is not hard to see.

Just meditate on the logical-inference control mechanism for a bit. This is something that determines how many inferences you draw before you cut off the search and get on with using the inferences you have found so far. The behavior of the system as a whole is determined by this mechanism. So what the system actually does, and what facts it adds to its knowledge today, is determined by something that is, in essence, a hack: the system will give one response if it cuts off the search at inference #275935437, but a different response if the search goes one more step to #275935438. The behavior, then, is a complex consequence of minute details of the local mechanism.

The same story can be retold of all the other ideas - different kinds of probabilistic logic, especially.
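The inference-cutoff point above can be made concrete with a toy sketch (all names and numbers here are illustrative, not taken from any real AI system): the conclusion the system reaches depends entirely on where the search happens to stop.

```python
# Toy sketch: a search whose conclusion flips depending on the cutoff.
# The scores are made-up stand-ins for "how good inference #n looked".
scores = [37, 74, 10, 47, 84, 91, 12]

def best_conclusion(cutoff):
    """Return the index of the best inference found in the first `cutoff` steps."""
    window = scores[:cutoff]
    return window.index(max(window))

# Cutting off the search one step earlier or later changes the answer:
print(best_conclusion(5))  # -> 4 (inference #4 wins within 5 steps)
print(best_conclusion(6))  # -> 5 (one more step, and a different inference wins)
```

The point is not the arithmetic but the structural fact: the system's global behavior is hostage to a local, essentially arbitrary cutoff.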



5) Of course, the argument would be strengthened by a somewhat
detailed suggestion of how AI research *should* proceed; you give
some arguments for why certain (unspecified) approaches *might* not
work, but nothing beyond the barest hint of what to do about it,
which doesn't motivate anybody to give much more than a shrug to your
comments.  I wonder what it is that you expect people to do in
response to this argument which offers only criticism, and that
criticism not even aimed at any specific approach.

I know that responding to long messages like this can be time
consuming, and don't feel like you need to; it does seem that for
whatever reason most of the readers at least on these mailing lists
continue to "not get it"; if you care why that is, this message is
only intended as a data point -- why *I* don't get it.

Hmmm.  Interesting.

My goal is to spark debate on the topic.

I have always claimed that understanding the reason why there is a problem is an important first step in understanding the solution.



Enough for now though.


Richard Loosemore


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/