> Ben Goertzel wrote:
> > I like to distinguish two kinds of specialized mechanisms:
> >
> > 1) those that are autonomous
> >
> > 2) those that build specialized functionality on a foundation of
> > general-intelligence-oriented structures and dynamics
> >
> > The AI field, so far, has focused mainly on Type 1. But I think Type 2 is
> > more important.
>
> Hmm. Well, using your terminology, I would say that:
>
> 1) Type 2 mechanisms are only possible once you have the proper set of
> Type 1 mechanisms (i.e. the ones that implement thought in the first place).
Well, my Type 1 and Type 2 are both specialized-intelligence mechanisms. I
also posit general-intelligence mechanisms, which are separate from the Type
1 and Type 2 specialized-intelligence mechanisms.

In the Novamente design, we have three general-intelligence mechanisms:

* higher-order probabilistic inference
* evolutionary learning
* reinforcement learning

each with its own strengths and weaknesses. We also have some complementary
specialized cognitive mechanisms, like first-order inference, neural-net-like
association-finding, cluster formation, etc.

Specialized-intelligence components may be built on top of these. For
instance, language processing uses aspects of all of these (e.g. parsing is
largely unification, an aspect of higher-order inference). Or, for something
like edge detection, we would use a Type 1 specialized mechanism, and
general intelligence wouldn't enter into it at all.

> But we need to get past the idea that every AI project should start from
> scratch and end up delivering a human-equivalent AGI, because that isn't
> going to happen. We just aren't that close yet.

I don't think all of us are trying to start from scratch. I'm certainly not;
I'm using a lot of ideas developed by others over the past few decades.

> The way the software industry has solved big challenges in the past is to
> break them up into sub-problems, figure out which sub-problems can be
> solved right now, solve them as thoroughly as possible, and offer the
> resulting solutions as black boxes that can then become inputs into the
> next round of problem solving. That's what happened with operating
> systems, and development environments, and database systems. If we want to
> see real progress in AI, the same thing needs to happen to problems like
> NLP, computer vision, memory, attention, etc.

I completely disagree. Building a complex self-organizing system is not like
building an ordinary engineered software system. You can't design the parts
in isolation.
You have to design each part with explicit consciousness of the whole. That
means it has to be a unified project, not a collection of disparate
subprojects aimed at producing black boxes to later be hooked together. This
is a profound difference between minds on the one hand, and OSs, DBs, and
IDEs on the other.

And I still say, this is pretty much exactly the approach that conventional
academic AI is taking. There is a conventional breakdown of the AI problem
into subproblems (of which you've listed several), and people tend to work
on each one separately. I don't understand how what you suggest is different
from what nearly everyone in the field is doing.

-- Ben G

-------
To unsubscribe, change your address, or temporarily deactivate your
subscription, please go to
http://v2.listbox.com/member/?[EMAIL PROTECTED]
