Re: [agi] A point of philosophy, rather than engineering
Ben Goertzel wrote:
>> Hi,
>>
>> Personally, I believe that the most effective AI will have a core
>> general intelligence, that may be rather primitive, and a huge number
>> of specialized intelligence modules. The tricky part of this
>> architecture is designing the various modules so that they can
>> communicate. It isn't clear that this is always reasonable (consider
>> the interfaces between chess and cooking), but if the problem can be
>> handled in a general manner (there's that word again!), then one of
>> the intelligences could be specialized for message passing. In this
>> model the core general intelligence will be used when none of the
>> heuristics fit the problem, and its attempts will be watched by
>> another module whose specialty is generating new heuristics.
>> Plausible? I don't really know. Possibly too complicated to actually
>> build. It might need to be evolved from some simpler precursor.
>
> It's clear that the human brain does something like what you're
> suggesting. Much of the brain is specialized for things like vision,
> motion control, linguistic analysis, time perception, etc. The portion
> of the human brain devoted to general abstract thinking is very small.
>
> Novamente is based on an integrative approach sorta like you suggest,
> but it's not quite as rigidly modular. Rather, we think one needs to:
>
> -- create a flexible knowledge representation (KR) useful for
> representing all forms of knowledge (declarative, procedural,
> perceptual, abstract, linguistic, explicit, implicit, etc.)

This probably won't work. Thinking of the brain as a model, we have something called the synesthetic gearbox, which is used to relate information in one modality of sensation with another. This is part of the reason I suggested that one of the heuristic modules be specialized for message passing (and translation).
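The fallback architecture described above -- try specialized heuristic modules first, drop to slow general intelligence only when nothing fits, and let a watcher record the failure so it can try to induce a new heuristic -- can be sketched as follows. This is a minimal illustration, not anyone's actual system; all class and function names are invented.

```python
# Illustrative sketch of heuristic dispatch with a general-intelligence
# fallback and a failure-watching hook. All names are hypothetical.

class HeuristicModule:
    """A specialized intelligence module: cheap, narrow, fast."""
    def applies_to(self, problem):
        raise NotImplementedError
    def solve(self, problem):
        raise NotImplementedError

class ChessOpenings(HeuristicModule):
    def applies_to(self, problem):
        return problem.get("domain") == "chess"
    def solve(self, problem):
        return "play a book opening"

def general_solver(problem):
    # Slow and weak, but domain-independent: the fallback of last resort.
    return "search from first principles"

def dispatch(problem, modules, on_fallback=None):
    for m in modules:
        if m.applies_to(problem):
            return m.solve(problem)
    # No heuristic fits: use general intelligence, and notify a watcher
    # module whose specialty is generating new heuristics.
    if on_fallback is not None:
        on_fallback(problem)
    return general_solver(problem)

failures = []
answer = dispatch({"domain": "cooking"}, [ChessOpenings()], failures.append)
```

The cooking problem falls through to the general solver and is logged for the heuristic-generating watcher; a chess problem would be handled by the specialized module without the watcher ever seeing it.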
> -- create a number of specialized MindAgents acting on the KR, carrying
> out specialized forms of intelligent processes
>
> -- create an appropriate set of integrative MindAgents acting on the
> KR, oriented toward creating general intelligence based largely on the
> activity of the specialized MindAgents

Again the term "general intelligence." I would like to suggest that the intelligence needed to repair an auto engine is different from that needed to solve a calculus equation. I see the general intelligence as being there primarily to handle problems for which no heuristic can be found, and would suggest that nearly any even slightly tuned heuristic is better than the general intelligence for almost all problems. E.g., if one is repairing an auto engine, one heuristic would be to remember the shapes of all the pieces you have seen, and to remember where they were when you first saw them. Just think how that one heuristic would assist in reassembling the engine.

> Set up a knowledge base involving all these MindAgents... hook it up to
> sensors and actuators... give it a basic goal relating to its
> environment...
>
> Of course, this general framework and 89 cents will get you a
> McDonald's Junior Burger. All the work is in designing and implementing
> the KR and the MindAgents!! That's what we've spent (and are spending)
> all our time on...

May I suggest that if you are even close to what you are attempting, you have the start of a dandy personal secretary. With so much correspondence coming via e-mail these days, this would create a very simplified environment in which the entity would need to operate. In this limited environment you wouldn't need full meanings for most words, only categories and valuations. I have a project which I am aiming at that area, but it is barely getting started.

> -- Ben
>
> ---
> To unsubscribe, change your address, or temporarily deactivate your
> subscription, please go to http://v2.listbox.com/member/

--
Charles Hixson
Gnu software that is free,
The best is yet to be.
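The "remember every piece and where it was" heuristic in the engine-repair example above can be sketched very simply: log each part's shape and location as it comes off, then replay the log in reverse to guide reassembly. A toy illustration, with invented names:

```python
# Sketch of the engine-reassembly heuristic: record each part during
# disassembly, then reverse the log to get a reassembly plan.

disassembly_log = []

def remove_part(name, shape, location):
    """Record a part's shape and original location as it is removed."""
    disassembly_log.append({"part": name, "shape": shape, "location": location})

def reassembly_plan():
    # The last part taken off is the first part to go back on.
    return [step["part"] for step in reversed(disassembly_log)]

remove_part("valve cover", "rectangular", "top of head")
remove_part("rocker arm", "lever", "under valve cover")
plan = reassembly_plan()
```

One tuned heuristic like this beats general-purpose reasoning for almost every step of the job, which is the point being made: the general intelligence is only needed to discover or invent such heuristics in the first place.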
RE: [agi] A point of philosophy, rather than engineering
Charles Hixson wrote (in response to me):
>> -- create a flexible knowledge representation (KR) useful for
>> representing all forms of knowledge (declarative, procedural,
>> perceptual, abstract, linguistic, explicit, implicit, etc.)
>
> This probably won't work. Thinking of the brain as a model, we have
> something called the synesthetic gearbox, which is used to relate
> information in one modality of sensation with another. This is part of
> the reason I suggested that one of the heuristic modules be specialized
> for message passing (and translation).

There are both significant differences and significant similarities between the representations used by different parts of the human brain. They all use neurons and synapses, frequencies of neural firing, neurotransmitter chemistry, etc., in fairly similar ways. Of course, there are also some major differences in neural architecture between brain regions: different types of neurons, different neurotransmitter concentrations, different connective arrangements, etc.

Similarly, there are significant similarities and differences between the representations used by different parts of Novamente. They all use Novamente Nodes and Links, all use similar quantitative parameters of Nodes and Links, and there's a lot of overlap in the MindAgents (dynamical processes) they use. But there are also significant differences: in the frequency of different node and link types, the parameters of the different MindAgents, etc.

> Again the term "general intelligence." I would like to suggest that the
> intelligence needed to repair an auto engine is different from that
> needed to solve a calculus equation.

Of course it is different in many ways. It's also similar in many ways. I believe that those two forms of intelligence consist of basically the same set of processes, acting on the same basic sort of knowledge, but with very different underlying parameter settings.
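The idea of a uniform node-and-link representation shared across "regions" that differ only in type frequencies and parameter settings can be sketched as follows. The type names and numeric fields here are invented for illustration; they are not Novamente's actual data structures.

```python
# Toy sketch of a uniform knowledge representation: every region of the
# system uses the same Node/Link machinery, differing only in which
# types dominate and how parameters are set. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    node_type: str          # e.g. "ConceptNode", "ProcedureNode"
    strength: float = 0.5   # a shared quantitative parameter

@dataclass
class Link:
    link_type: str          # e.g. "InheritanceLink", "SimilarityLink"
    source: Node
    target: Node
    weight: float = 0.5

cat = Node("cat", "ConceptNode")
animal = Node("animal", "ConceptNode")
kb = [Link("InheritanceLink", cat, animal, weight=0.9)]

def type_histogram(links):
    """Count link types: two regions with identical machinery can still
    differ sharply in this distribution."""
    hist = {}
    for link in links:
        hist[link.link_type] = hist.get(link.link_type, 0) + 1
    return hist
```

The point of the sketch is that "same representation, different parameters" is a meaningful middle ground between rigid modularity and uniformity: every region speaks Nodes and Links, but each region's histogram of types and its parameter settings give it a distinct character.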
In the brain case: different types of neural connectivity patterns, perhaps different concentrations of neurotransmitters in different brain regions, perhaps even different amounts of different types of neurons -- all of which leads to different emergent structures and dynamics.

> I see the general intelligence as being there primarily to handle
> problems for which no heuristic can be found, and would suggest that
> nearly any even slightly tuned heuristic is better than the general
> intelligence for almost all problems. E.g., if one is repairing an auto
> engine, one heuristic would be to remember the shapes of all the pieces
> you have seen, and where they were when you first saw them. Just think
> how that one heuristic would assist in reassembling the engine.

Yes, but what allows a human mind to learn that heuristic? Our general (reasonably general, but far from absolutely general) intelligence.

>> Set up a knowledge base involving all these MindAgents... hook it up
>> to sensors and actuators... give it a basic goal relating to its
>> environment...
>>
>> Of course, this general framework and 89 cents will get you a
>> McDonald's Junior Burger. All the work is in designing and
>> implementing the KR and the MindAgents!! That's what we've spent (and
>> are spending) all our time on...
>
> May I suggest that if you are even close to what you are attempting,
> you have the start of a dandy personal secretary. With so much
> correspondence coming via e-mail these days, this would create a very
> simplified environment in which the entity would need to operate. In
> this limited environment you wouldn't need full meanings for most
> words, only categories and valuations.

As I said in a recent post, I prefer to stay away from natural language processing at this stage, until the system has acquired a rudimentary understanding of natural language through its own experience.
We're not quite there yet ;)

ben
RE: [agi] A point of philosophy, rather than engineering
On Tue, 12 Nov 2002, Ben Goertzel wrote:
> Charles Hixson wrote (in response to me):
> [...]
>> May I suggest that if you are even close to what you are attempting,
>> you have the start of a dandy personal secretary. With so much
>> correspondence coming via e-mail these days, this would create a very
>> simplified environment in which the entity would need to operate. In
>> this limited environment you wouldn't need full meanings for most
>> words, only categories and valuations.
>
> As I said in a recent post, I prefer to stay away from natural language
> processing at this stage, until the system has acquired a rudimentary
> understanding of natural language through its own experience. We're not
> quite there yet ;)

That's where the Mentifex AI and Novamente differ (and probably also where A.T. Murray the linguist and Ben Goertzel the mathematician differ). If you're not aiming for language, you're aiming for a smart animal.

A.T. Murray
--
http://www.scn.org/~mentifex/aisource.html is the cluster of Mind programs described in the AI textbook AI4U, based on AI Mind-1.1 by Arthur T. Murray, which may be pre-ordered from bookstores with hardcover ISBN 0-595-65437-1 and ODP softcover ISBN 0-595-25922-7.