Here is a simplification of the problem as I see it. Your program will have a lot of data, and that data is distributed. The program will often need to do massive searches, and then, using features that were distributed with the data objects it found, it will have to do subsequent searches. Since it will also likely be chasing false leads, the time required can easily be exponential or worse. To compound the problem, it will need to compare the data it collects against whatever bases it started out with and derived along the way. Not only does it need to find data based on the features of other distributed data, it also has to discover what kinds of compromises might be made between preexisting possible relations and relations that seem special to the particular circumstances. Some of this can be done using weighted (non-discrete) reasoning, but there are also going to be a lot of specialized circumstances it needs to consider.

Jim Bromer
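The feature-driven search loop described above might be sketched roughly as follows. Everything here is a hypothetical illustration: the data model, the feature index, and the score threshold are invented for the example, and the threshold stands in crudely for the weighted reasoning mentioned in the post. Each object found carries features, each feature seeds a follow-up search, and without pruning or deduplication the chains of false leads are what make the cost blow up.

```python
from collections import deque

# Hypothetical data model: each object carries features and a relevance score.
DATA = {
    "a": {"features": ["f1", "f2"], "score": 0.9},
    "b": {"features": ["f3"], "score": 0.4},
    "c": {"features": ["f1"], "score": 0.7},
}
# Hypothetical feature index: feature -> objects matching that feature.
INDEX = {
    "f1": ["b"],
    "f2": ["c"],
    "f3": ["a"],
}

def feature_driven_search(start, min_score=0.5):
    """Breadth-first search where each result's features seed new queries.

    Results below min_score are treated as likely false leads: they are
    collected, but their features are not expanded into further searches
    (a crude stand-in for weighted, non-discrete reasoning).
    """
    seen = {start}
    queue = deque([start])
    results = []
    while queue:
        obj = queue.popleft()
        record = DATA[obj]
        results.append(obj)
        if record["score"] < min_score:
            continue  # weak lead: keep it, but do not chase its features
        for feat in record["features"]:
            for nxt in INDEX.get(feat, []):
                if nxt not in seen:  # deduplication limits the blow-up
                    seen.add(nxt)
                    queue.append(nxt)
    return results
```

Without the `seen` set and the score cutoff, each follow-up search can spawn further searches over the same objects, which is where the exponential (or worse) behavior comes from.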
------------------------------------------- AGI Archives: https://www.listbox.com/member/archive/303/=now
