Richard,
> This could be called a communication problem, but it is internal, and
> in the AGI case it is not so simple as just miscalculated numbers.

Communication between subsystems is still communication. So I suggest we call it the Communication Problem.

> So here is a revised version of the problem: suppose that a system
> keeps some numbers stored internally, but those numbers are *used* by
> the system in such a way that their "meaning" is implicit in the entire
> design of the system. When the system uses those numbers to do things,
> the numbers are fed into the "using" mechanisms in such a way that you
> can only really tell what the numbers "mean" by looking at the overall
> way in which they are used.

That's the right approach: concepts gain meaning by connecting to other concepts. The only exception is concepts that are directly connected to hardcoded subsystems (dictionary, chat client, web browser, etc.). Such directly connected concepts would have some predefined meaning, injected by the AGI programmers.

> Now, with that idea in mind, imagine that programmers came along and
> set up the *values* for a whole bunch of those numbers, inside the
> machine, ON THE ASSUMPTION that those numbers "meant" something that the
> programmers had decided they meant. So the programmers were really
> definite and explicit about the meaning of the numbers.
> Question: what if those two sets of meanings are in conflict?

How could they be in conflict, if one set is predefined and the other set gained its meaning from the predefined set? If you are talking about inconsistencies within the predefined set -- that's a problem for the design & development team. Do you want to address that problem? So far I can suggest one tip: keep the set of predefined concepts as small as possible. Most of a mature AGI's intelligence should come from concepts (and their relations) acquired during the system's lifetime.
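To make the "small predefined set" idea concrete, here's a minimal sketch in Python. All names and the toy data are made up for illustration: a few concepts get their meaning injected by the programmers as direct hooks into hardcoded subsystems, while every other concept's "meaning" is only implicit in its relations to other concepts.

```python
# Predefined concepts: meaning injected by AGI programmers as a direct
# hook into a hardcoded subsystem (dictionary, web browser, etc.).
predefined = {
    "lookup_word": lambda word: "dictionary entry for " + word,  # dictionary subsystem
    "fetch_url": lambda url: "page at " + url,                   # web browser subsystem
}

# Acquired concepts: no hardcoded hook, only relations to other concepts,
# built up during the system's lifetime.
relations = {
    "apple": {"fruit", "red", "lookup_word"},
    "fruit": {"apple", "lookup_word"},
}

def meaning_of(concept):
    # A concept's "meaning" is either its hardcoded subsystem hook or,
    # implicitly, the set of concepts it is connected to.
    if concept in predefined:
        return "hardcoded subsystem call"
    return "implicit: related to " + ", ".join(sorted(relations.get(concept, ())))

print(meaning_of("fetch_url"))  # hardcoded subsystem call
print(meaning_of("apple"))      # implicit: related to fruit, lookup_word, red
```

The point of keeping `predefined` small is that only those entries need to be maintained (and kept consistent) by the development team; everything in `relations` is the system's own doing.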
> If the AI system starts out with a design in which symbols are
> designed and stocked by programmers, this part of the machine has ONE
> implicit meaning for its symbols ..... but then if a bunch of peripheral
> machinery is stapled on the back end of the system, enabling it to see
> the world and use robot arms, the processing and "symbol" building that
> goes on in that part of the system will have ANOTHER implicit meaning
> for the symbols. There is no reason why these two sets of symbols
> should have the same meaning!

Here's my understanding of your problem: we have an AGI, and now we want to extend it by adding a new module. We are afraid that the new module will have problems communicating with other modules, because the meaning of some symbols is different.

If I understood you correctly, here are two solutions:

Solution #1: Connect modules through a Neural Net. By Neural Net I mean a set of concepts (nodes) connected to other concepts by relations. Concepts can be created and deleted dynamically; relations can be created and deleted dynamically. When we connect a new module to the system, it will introduce its own concepts into the Neural Net. Initially these concepts are not connected with existing concepts, but then some process will connect them. One example of such a process could be: "if concepts are active at the same time -- connect them". There could be other possible connecting processes. In any case, the system would eventually connect all the new concepts, and those connections would define how input from the new module is interpreted by the rest of the system.

Solution #2: Connect the new module to other hardcoded modules directly. In this case it's the responsibility of the AGI development team to make sure that both hardcoded modules talk the same language. That's a typical module-integration task for developers.
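Solution #1 can be sketched in a few lines of Python. This is only an illustration of the co-activation rule described above ("if concepts are active at the same time -- connect them"); the class and names (`ConceptNet`, `connect_module`, `co_activate`, the `camera:` prefix) are made up, not from any existing system.

```python
class ConceptNet:
    """Toy concept network: concepts are nodes, relations are undirected links."""

    def __init__(self):
        self.concepts = set()
        self.relations = set()  # each relation is a frozenset of two concept names

    def connect_module(self, module_concepts):
        # A new module's concepts enter the net with no relations yet.
        self.concepts.update(module_concepts)

    def co_activate(self, active):
        # Connecting process: concepts active at the same time get linked.
        active = [c for c in active if c in self.concepts]
        for i, a in enumerate(active):
            for b in active[i + 1:]:
                self.relations.add(frozenset((a, b)))

net = ConceptNet()
net.connect_module(["apple", "red"])          # concepts the system already has
net.connect_module(["camera:blob_17"])        # a new vision module's concept
net.co_activate(["apple", "camera:blob_17"])  # observed together -> linked
print(frozenset(("apple", "camera:blob_17")) in net.relations)  # True
```

After enough co-activations, the new module's concepts are embedded in the existing net, and those links are what define how the module's output is interpreted by the rest of the system.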
> In fact, it turns out (when you think about it a little
> longer) that all of the problem has to do with the programmers going in
> and building any symbols using THEIR idea of what the symbols should
> mean: the system has to be allowed to build its own symbols from the
> ground up, without us necessarily being able to interpret those symbols
> completely at all. We might never be able to go in and look at a
> system-built symbol and say "That means [x]", because the real meaning
> of that symbol will be implicit in the way the system uses it.
> In summary: the symbol grounding problem is that systems need to have
> only one interpretation of their symbols,

Not sure what you mean by "one interpretation". A symbol can have multiple interpretations in different contexts. Our goal is to make sure that different systems and different modules have roughly the same understanding of the symbols at the time of communication. (By "symbols" here I mean "data that is passed through interfaces".)

> and it needs to be the one built by the system itself as a result of
> a connection to the external world.

So it seems you already have a solution (I propose the same solution) to the "Real Grounding Problem". Can we consider the "Real Grounding Problem" theoretically solved?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=73884295-cb1438
