From recent comments here I can see there are still a lot of people out there who think that building an AGI is a relatively modest-size project, and the key to success is simply uncovering some new insight or technique that has been overlooked thus far. IMHO this is partly a matter of necessary optimism (i.e. "we can only afford a 4-man-year project, so let's hope that will be enough"), and partly a sort of bleedover from the view of human minds that dominated the social sciences for most of the 20th century (i.e. "infants are a blank slate, and blank slates sound pretty simple, so a newly-written AGI must be a relatively simple program"). Unfortunately for AI optimists, all the evidence points in the opposite direction.
If we have learned nothing else about the nature of Mind in the last 50 years, we should at least have learned this: complex adaptive behavior requires a complex, specialized implementation. Always. No exceptions, no free lunches, no magic connectoplasmic shortcuts.

We know from the biology folks that the human mind contains at least dozens, and probably hundreds, of specialized subsystems. The ones that computer scientists have tried to replicate, like vision and hearing, have turned out to contain massive amounts of complexity - computer vision alone is apparently the kind of problem that takes a good, well-funded team several decades to solve.

Now, it may be that some particular subsystems can be omitted from an AGI that isn't intended to be very humanlike. An AGI with no body may not need a kinesthetic sense or motor skills, an AGI without cameras may not need vision, and so on. But anyone who thinks there is some tiny kernel of "pure thought" in there waiting to be duplicated, and that all the rest can be safely ignored, is just kidding themselves. Every part of the mind that we have any understanding of at all has turned out to be a tangle of complex algorithms interacting in very complex ways. There is no reason to believe the parts we don't understand are any different.

What this means for AI research is that any serious attempt to create an AGI by duplicating the way human minds work would be a massive effort, at least one and probably two orders of magnitude larger than any software development effort ever attempted. That makes it much too big for current software engineering methods, so the effort would almost certainly fail. For projects that intend to implement a completely novel design, the implication is that you can't realistically expect anything like human-equivalent performance on unrestricted tasks.
Evolution wouldn't have given us the equivalent of hundreds of millions of lines of specialized software if there were some easy shortcut waiting to be found. So, if you're just trying to build a specialized AI, or to solve a few of the problems between here and AGI, that's great. But if you think your 50 KLOC system is going to somehow bootstrap itself into human-equivalence, you need to take a break and go catch up on what's been happening in cognitive science over the last 20 years.

In other words, building a human-equivalent AGI is like sending a manned mission to Alpha Centauri, and current AI technology is at about the level of a V2 rocket. It's a long road from here to there, and we're never going to get anywhere until we admit that fact. The next step is the nasty, challenging problem of getting into space at all, not the nigh-impossible feat of reaching another solar system.

Billy Brown
