Until fairly recently, I had relegated the concept of "clustering" to
the narrow-AI domain. Around the same time, though, I was trying to
wrap my head around the problem of hidden variables.
Hidden variables allow an AI to reason about entities beyond its
sensory data, but they introduce a huge search space. Furthermore,
patterns due to hidden variables can always be explained instead as
(possibly more complicated) patterns just in terms of visible data. My
question was: when should a rational entity hypothesize additional
hidden variables?
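
To make that second point concrete, here is a tiny toy sketch (my own
illustration, not taken from any particular system): two visible
variables are conditionally independent given a hidden variable H, but
once H is marginalized out, the same data can only be described by a
flatter, more entangled joint over the visibles alone.

# Toy example: a hidden binary variable H generates two visible binary
# variables V1 and V2. Summing H out leaves a perfectly valid model over
# the visibles -- just a less factored, more "complicated" one.
import numpy as np

p_h = np.array([0.5, 0.5])                  # P(H)
p_v1_given_h = np.array([[0.9, 0.1],        # P(V1 | H=0)
                         [0.1, 0.9]])       # P(V1 | H=1)
p_v2_given_h = np.array([[0.9, 0.1],        # P(V2 | H=0)
                         [0.1, 0.9]])       # P(V2 | H=1)

# P(V1, V2) = sum_h P(h) P(V1|h) P(V2|h), i.e. H marginalized out
p_v1_v2 = np.einsum('h,hi,hj->ij', p_h, p_v1_given_h, p_v2_given_h)

print(p_v1_v2)                                    # [[0.41 0.09] [0.09 0.41]]
print(np.outer(p_v1_v2.sum(1), p_v1_v2.sum(0)))   # product of marginals: all 0.25

The joint over the visibles differs from the product of its marginals,
so the dependence that H would have "explained" has to be encoded
directly in the visible-only model instead.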

Around that time, someone on this list mentioned the Alchemy
Markov logic system. One of the papers from the Alchemy website
(http://alchemy.cs.washington.edu/papers/kok07/) talks about a method
for learning hidden variables using clustering. At first I was
surprised, but after a little thought this made sense: clusters can be
seen as different states of a hidden variable that is
probabilistically determining the data.
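
To spell that reading out a bit (this is my own gloss, not the specific
construction in the paper I linked): each cluster is one state of a
latent variable H, and fitting the clustering amounts to inferring a
posterior over H's states for every visible data point. A minimal
sketch with a two-component Gaussian mixture, assuming scikit-learn is
available:

# Clusters as states of a hidden variable H, via a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-2.0, 0.5, size=(100, 2)),   # "state 0" of H
                  rng.normal(+2.0, 0.5, size=(100, 2))])  # "state 1" of H

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
print(gmm.predict(data[:5]))        # hard cluster labels = MAP state of H
print(gmm.predict_proba(data[:5]))  # soft assignments = P(H = k | data point)

The hard cluster labels are just the most probable state of H, while
the soft assignments are its full posterior given each observed point.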

In fact, adding hidden predicates and entities in the case of Markov
logic makes the space of models Turing-complete (and even bigger than
that if higher-order logic is used). But if I am not mistaken, the
clustering used in the paper I refer to is not that powerful. So the
question is: is clustering in general powerful enough for AGI? Is it
fundamental to how minds can and should work?

PS - As another example, I know the LIDA framework makes extensive use
of clustering in the form of associative memory.

