On Sun, 24 Oct 2004, J.Andrew Rogers wrote:
> This does not follow. You can build arbitrarily complex machines with a very tiny finite control function and plenty of tape. The complexity of AI as an algorithm and design space is not in the same class as the complexity of an instance of human-level AI, even though the latter is just the former given some state space to play with.
But that machine cannot fully understand/represent *itself*. The longer the tape of its own understanding gets, the more tape it needs to represent itself, etc etc.
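As a concrete sketch of the "tiny finite control plus plenty of tape" picture (my illustration, not anything from the original posts): Rule 110 is an elementary cellular automaton whose entire control function is an eight-entry lookup table, yet it is known to be Turing-complete. The names and the fixed-size ring tape below are just so the sketch runs; the actual claim assumes unbounded tape.

    # Rule 110: the entire "finite control function" fits in one byte.
    RULE = 110  # binary 01101110: maps each 3-cell neighborhood to a new cell

    def step(cells):
        """Update every cell from its (left, center, right) neighborhood."""
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # A long "tape" with one live cell: a trivially simple rule, complicated behavior.
    tape = [0] * 79 + [1]
    for _ in range(30):
        print("".join(".#"[c] for c in tape))
        tape = step(tape)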
> It is highly improbable that the core control function of intelligence cannot be understood by one person, or at least I see no evidence in theory to support this conjecture. Intelligence appears to be a pretty simple thing,
There is no "core control function". Our intelligence is the product of multiple interacting systems. Your belief that you have a single "central executive" that's running the show is an illusion. There's mounting evidence that systems below the level of consciousness are often controlling actions and thoughts, and your central executive is structured such so as to lead you to believe that it's always in charge.
I'll grant you that maybe a single person can design a useful, if minimal, AGI, but as far as the problem of understanding people is concerned, all the evidence from neuroanatomy, neurophysiology, and behavior says that we are complete spaghetti code from top to bottom. Evolution wasn't trying to design a system that lends itself to algorithmic decomposition.
And finally, if intelligence is so easy, why are we so bad at it? In general, people:
* are atrocious at reasoning and rationality
* are subject to a long list of logical fallacies
* are terribly subjective
* have only rudimentary abilities to partition thoughts from one another, even when it's absolutely vital
* are riddled with emotional shortcuts to aid decision-making in complex situations
* have very inaccurate memory, even about things seen just seconds ago
There's no survival advantage to having any of these deficits; we have them because evolution hasn't solved these problems yet. Now you're probably going to argue that *we* have solved these problems: that it's possible to build an inference engine that has none of these faults.
I would reply that we have only solved them for toy problem spaces, and that when you try to build a rational inference engine on the scale of human intelligence, it will break down under its own complexity, because being objective and rational over a domain as complex as real-world experience has exorbitant computational demands.
I predict you will then see why evolution has taken all of these messy shortcuts.
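To put a rough number on "exorbitant" (a toy sketch of my own, not from the original post; the function and variable names are made up for illustration): exact probabilistic inference over n binary propositions means carrying a joint table of 2^n entries and summing over all of them.

    # Toy sketch: brute-force "perfectly rational" inference over n binary
    # propositions requires a joint distribution with 2**n entries.
    from itertools import product
    import random

    def exact_marginal(joint, target):
        """P(proposition `target` is true), summed over every possible world."""
        return sum(p for world, p in joint.items() if world[target] == 1)

    n = 4                                        # four propositions: easy
    worlds = list(product([0, 1], repeat=n))
    weights = [random.random() for _ in worlds]
    total = sum(weights)
    joint = {w: p / total for w, p in zip(worlds, weights)}
    print(exact_marginal(joint, target=0))

    # ...but the table doubles with every proposition added:
    for n in (10, 20, 40, 80):
        print(f"{n} propositions -> {2**n:,} joint-table entries")

Factored models and approximate inference help, but only by trading exactness for tractability, which is arguably the same kind of messy shortcut being described here.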
> even in theory; most of the nominal complexity can be attributed to people who don't really understand it (IMNSHO) or who require the addition of some complexity to solve a practical design problem. What you are saying is kind of like saying that no one can comprehend pi because no one can recite all the digits.
I would argue that this complexity is attributed to people who really understand what we're up against :)
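For what it's worth, the pi analogy being debated can be made literal (again, my own illustration, not part of the original exchange): a spigot program of a dozen lines fully specifies every digit of pi, even though nobody can recite them.

    # Unbounded spigot algorithm for pi (after Gibbons): a tiny rule that
    # streams decimal digits for as long as you care to run it.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4*q + r - t < n*t:
                yield n
                q, r, t, k, n, l = 10*q, 10*(r - n*t), t, k, (10*(3*q + r)) // t - 10*n, l
            else:
                q, r, t, k, n, l = q*k, (2*q + r)*l, t*l, k + 1, (q*(7*k + 2) + r*l) // (t*l), l + 2

    # First twenty digits: 3 1 4 1 5 9 2 6 5 3 ...
    gen = pi_digits()
    print(*[next(gen) for _ in range(20)])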
-Brad