Ed, I think that we must rely on large collections of relatively simple patterns that can somehow be mixed and used in interaction with one another. These interacting patterns (to use your term) would have extensive variations to make them flexible and useful with other patterns.
When we learned that national housing prices did not provide us with the kind of detail we needed, we went and figured out other ways to find data that showed some of the variations that would have helped us prepare better for a situation like the one we are currently in. I was thinking of that exact example when I wrote about mushy decision making, because the national average price would be mushier than the regional prices, or a multiple price-level index. The mushiness of an index does not mean that the index is garbage, but since something like this is derived from finer-grained statistics, it really exemplifies the problem.

My idea is that an AGI program would have to go further than data mining. It would have to be able to shape its own use of statistics in order to establish validity for itself. I really feel that there is something important about the classifiers of statistical methods that I just haven't grasped yet.

My example for this comes from statistics that are similar but just different enough that they don't mesh quite right. Take two different marketing surveys that provide similar information: close enough that a marketer can draw conclusions from their combination, but not actually close enough to justify the process. Like asking one representative group whether they are planning to buy a television, and asking another how much they think they will spend on appliances during the next two years. The two surveys are so close that you know the results can be combined, yet so different that it is almost impossible to justify the combination in any reasonable way. If I could only figure this one out, I think the other problems I am interested in would start to solve themselves.
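The national-versus-regional point can be put concretely. Here is a minimal sketch, with invented numbers and region names, of how an aggregate index mushes away the variation that its finer-grained inputs still carry:

```python
# Hypothetical regional housing-price index values over three quarters.
# All numbers and region names are invented for illustration.
regional_prices = {
    "northeast": [310, 305, 298],
    "midwest":   [180, 182, 185],
    "south":     [210, 214, 219],
    "west":      [420, 395, 360],   # falling sharply
}

def national_average(quarter):
    """The 'mushier' aggregate: one number per quarter."""
    total = sum(series[quarter] for series in regional_prices.values())
    return total / len(regional_prices)

trend = [national_average(q) for q in range(3)]
print(trend)  # [280.0, 274.0, 265.5]
# The aggregate drifts down about 5% while the west falls about 14%;
# the detail that mattered is only visible in the regional series.
```

The aggregate is not garbage, but someone planning against it alone would have missed exactly the variation that mattered.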
Jim Bromer

On Sat, Nov 29, 2008 at 11:40 AM, Ed Porter <[EMAIL PROTECTED]> wrote:

> Jim
>
> My understanding is that a Novamente-like system would have a process of
> natural selection that tends to favor the retention and use of patterns
> (perceptive, cognitive, behavioral) that prove themselves useful in
> achieving goals in the world in which it is embodied.
>
> It seems to me that such a process of natural selection would tend to
> naturally put some sort of limit on how out-of-touch many of an AGI's
> patterns would be, at least with regard to patterns about things for
> which the AGI has had considerable experience from the world in which it
> is embodied.
>
> However, we humans often get pretty out of touch with real-world
> probabilities, as the recent bubble in housing prices shows, along with
> the commonly repeated, although historically inaccurate, statement of
> several years ago that housing prices never go down on a national level.
>
> It would be helpful to make AGIs a little more accurate than we humans
> are in their evaluation of the evidence for many of their assumptions
> --- and of what that evidence really says.
>
> Ed Porter
>
> -----Original Message-----
> From: Jim Bromer [mailto:[EMAIL PROTECTED]
> Sent: Saturday, November 29, 2008 10:49 AM
> To: [email protected]
> Subject: [agi] Mushed Up Decision Processes
>
> One of the problems that comes with the casual use of analytical methods
> is that the user becomes inured to their habitual misuse. When casual
> familiarity is combined with habitual ignorance of the consequences of a
> misuse, the user can become over-confident, or unwisely dismissive of
> criticism regardless of how on the mark it might be.
>
> The most proper use of statistical and probabilistic methods is to base
> results on a strong association with the data from which they were
> derived.
> The problem is that the AI community cannot afford this strong a
> connection to original sources, because they are trying to emulate the
> mind in some way, and it is not reasonable to assume that the mind is
> capable of storing all the data it has used to derive insight.
>
> This is a problem any AI method has to deal with; it is not just a
> probability thing. What is wrong with the AI-probability group's
> mind-set is that very few of its proponents ever consider the problem of
> statistical ambiguity and its obvious consequences.
>
> All AI programmers have to consider the problem. Most theories about the
> mind posit the use of similar experiences to build up theories about the
> world (or to derive methods to deal effectively with the world). So even
> though the methods used to deal with the data environment are detached
> from the original sources of those methods, they can still be
> reconnected by the examination of similar experiences that may
> subsequently occur.
>
> But it is still important to be able to recognize the significance and
> necessity of doing this from time to time. It is important to be able to
> reevaluate parts of your theories about things. We are not just making
> little modifications to our internal theories when we react to ongoing
> events; we must also be making some sort of reevaluation of our insights
> about the kind of thing we are dealing with.
>
> I realize now that most people in these groups probably do not
> understand where I am coming from, because their idea of AI programming
> is based on a model of programming that is flat. You have the program at
> one level, and the possible reactions to the data that is input as the
> values of the program variables are carefully constrained by that level.
> You can imagine a more complex model of programming by appreciating the
> possibility that the program can react to IO data by rearranging its
> subprograms to make new kinds of programs.
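The flat-versus-self-rearranging distinction in the quoted passage can be sketched in a few lines. This is only a hypothetical illustration (the subprograms and the feedback rule are invented), but it shows the difference between a fixed program branching on input and one whose arrangement of subprograms is itself data:

```python
# Hypothetical sketch of the "non-flat" model of programming described
# above: the program reacts to data by rearranging its subprograms,
# not merely by branching. All function names here are invented.
def double(x):
    return x * 2

def increment(x):
    return x + 1

def negate(x):
    return -x

# The "program" is a mutable arrangement of subprograms.
pipeline = [double, increment]

def run(x):
    for step in pipeline:
        x = step(x)
    return x

def adapt(feedback):
    # Reacting to IO data by changing the arrangement itself:
    # afterward, a genuinely different program is running.
    if feedback < 0:
        pipeline.append(negate)

before = run(3)   # (3*2)+1 = 7
adapt(-1)
after = run(3)    # the rearranged pipeline now negates: -7
```

A learning method that was only loosely validated to begin with gets amplified here, since it is choosing the rearrangements, which is the uncontrollability worry raised below.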
> Although a subtle argument can be made that any program that
> conditionally reacts to input data is rearranging the execution of its
> subprograms, the explicit recognition by the programmer that this is a
> useful tool in advanced programming is probably highly correlated with
> its more effective use. (I mean, of course it is highly correlated with
> its effective use!) I believe that casually constructed learning methods
> (and decision processes) can lead to even more uncontrollable results
> when used with this self-programming aspect of advanced AI programs.
>
> The consequence, then, of failing to recognize mushed-up decision
> processes that are never compared against the data (or kinds of
> situations) they were derived from will be the inevitable emergence of
> inherently illogical decision processes that will mush up an AI system
> long before it gets any traction.
>
> Jim Bromer

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com
