> For example, what is the equivalent of the activation control (or search)
> algorithm in Google sets? They operate over huge data. I bet the
> algorithm for calculating their search or activation is relatively simple
> (much, much, much less than a PhD thesis), and look what they can do. So I
> think one path is to come up with applications that can use and reason with
> large data having roughly world-knowledge-like sparseness (such as NL
> data), and start with relatively simple activation algorithms and develop
> them from the ground up.
Google, I believe, does reasoning about word and phrase co-occurrence using a combination of Bayes net learning with EM clustering (this is based on personal conversations with folks who have worked on related software there). The use of EM helps the Bayes net approach scale.

Bayes nets are good for domains like word co-occurrence probabilities, in which the relevant data is relatively static; they are not much good for real-time learning. Unlike Bayes nets, the approach taken in PLN and NARS allows efficient uncertain reasoning in dynamic environments based on large knowledge bases (at least in principle, based on the math, algorithms and structures; we haven't proved it yet).

-- Ben G

----- This list is sponsored by AGIRI: http://www.agiri.org/email
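To make the EM-clustering idea concrete, here is a toy sketch of EM fitting a two-cluster mixture of multinomials over word/context co-occurrence counts. The data, cluster count, and all names are hypothetical illustrations of the general technique, not Google's actual pipeline or algorithm:

```python
import math
import random

# Toy co-occurrence counts: rows = words, columns = context features.
# Rows 0-1 imitate words from one topic, rows 2-3 from another.
data = [
    [8, 1, 0, 1],
    [7, 2, 1, 0],
    [0, 1, 9, 2],
    [1, 0, 8, 3],
]

def em_cluster(counts, k=2, iters=50, seed=0):
    """EM for a mixture of multinomials over co-occurrence count vectors."""
    rng = random.Random(seed)
    v = len(counts[0])
    # Initialise mixing weights and per-cluster multinomial parameters.
    pi = [1.0 / k] * k
    theta = [[rng.random() + 0.5 for _ in range(v)] for _ in range(k)]
    for t in theta:
        s = sum(t)
        for j in range(v):
            t[j] /= s
    resp = []
    for _ in range(iters):
        # E-step: posterior responsibility of each cluster for each row.
        resp = []
        for row in counts:
            logp = [math.log(pi[z]) +
                    sum(c * math.log(theta[z][j]) for j, c in enumerate(row))
                    for z in range(k)]
            m = max(logp)          # log-sum-exp for numerical stability
            w = [math.exp(l - m) for l in logp]
            s = sum(w)
            resp.append([x / s for x in w])
        # M-step: re-estimate pi and theta (with light smoothing).
        n = len(counts)
        for z in range(k):
            pi[z] = sum(r[z] for r in resp) / n
            raw = [sum(resp[i][z] * counts[i][j] for i in range(n)) + 1e-3
                   for j in range(v)]
            s = sum(raw)
            theta[z] = [x / s for x in raw]
    return resp

resp = em_cluster(data)
labels = [max(range(2), key=lambda z: r[z]) for r in resp]
```

On this toy data the first two rows end up in one cluster and the last two in the other; the same E-step/M-step loop is what lets the approach scale, since each pass is linear in the size of the count data.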
