On Sun, 24 Oct 2004, Ben Goertzel wrote:
Well, I don't think that building an AI is in principle too hard for a single mind to handle.... Understanding the brain may well be, because the brain has so damn many parts with their individual complex dynamics -- an AI doesn't need to be as complicated as the brain, though. (As a rough analogy, look how much harder it is to understand a bird wing than an airplane wing...).
Airplane wings were easy for one person to understand back when they were simple things. But now that airplane wings have been adapted to generate lift under many different conditions, and have also taken on other vital functions (carrying fuel, housing air intakes and hydraulics), their complexity has increased dramatically.
So shall it be with AGIs: as they scale up in complexity to handle real-world challenges, they will grow beyond what a single person can comprehend.
I predict that the point at which AGI designs exceed a single human's understanding will come long before those systems are capable of generative self-modification. Hence, we will need teams of people before we reach that point.
-Brad
We're trying to build an AI, not via one person's efforts only, but via the combined efforts of a small team. I'm betting this is enough. I don't understand all of the Novamente codebase in detail -- no one person does -- but our small team, collectively, does.
-- Ben G
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of J.Andrew Rogers
Sent: Sunday, October 24, 2004 11:19 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Model simplification and the kitchen sink
On Oct 24, 2004, at 2:14 PM, Brad Wyble wrote:

Another point to this discussion is that the problems of AI and cognitive science are unsolvable by a single person. 1 brain can't understand itself, but perhaps 10,000 brains can understand or design 1 brain.
This does not follow. You can build arbitrarily complex machines with a very tiny finite control function and plenty of tape. The complexity of AI as an algorithm and design space is not in the same class as the complexity of an instance of human-level AI, even though the latter is just the former given some state space to play with.
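The "tiny finite control function and plenty of tape" point can be made concrete with a classic example. Rule 110, a one-dimensional cellular automaton, has a complete control function of just eight lookup entries, yet its evolution is known to be Turing-complete. A minimal Python sketch (illustrative only, not anyone's AI design):

```python
# Rule 110: the entire "control function" is this 8-entry table
# mapping each 3-cell neighborhood to the cell's next state.
# Despite its tiny size, Rule 110 is Turing-complete.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the 8-rule table to every cell (fixed-width, zero boundary)."""
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

# Start from a single live cell on a 61-cell tape and watch
# nontrivial structure emerge from the trivial control function.
tape = [0] * 30 + [1] + [0] * 30
for _ in range(20):
    print("".join("#" if c else "." for c in tape))
    tape = step(tape)
```

The simplicity of the rule table versus the richness of the printed pattern is exactly the gap between the complexity of a control function and the complexity of the states it can generate.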
It is highly improbable that the core control function of intelligence cannot be understood by one person, or at least I see no evidence in theory to support this conjecture. Intelligence appears to be a pretty simple thing, even in theory; most of the nominal complexity can be attributed to people who don't really understand it (IMNSHO) or who require the addition of some complexity to solve a practical design problem. What you are saying is kind of like saying that no one can comprehend pi because no one can recite all the digits.
j. andrew rogers
------- To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
