Abram Demski wrote:
At one point in the recent past, I had relegated the concept of
"clustering" to the narrow AI domain. But at around the same time, I
was attempting to wrap my head around the problem of hidden variables.
Hidden variables allow an AI to reason about entities beyond its
sensory data, but they introduce a huge search space. Furthermore,
patterns due to hidden variables can always be explained instead as
(possibly more complicated) patterns just in terms of visible data. My
question was: when should a rational entity hypothesize additional
hidden variables?

Around that time someone on this list mentioned the Alchemy
markov-logic system. One of the papers from the Alchemy website
(http://alchemy.cs.washington.edu/papers/kok07/) talks about a method
for learning hidden variables using clustering. At first I was
surprised, but after a little thought this made sense: clusters can be
seen as different states of a hidden variable that is
probabilistically determining the data.
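To make that reading concrete, here is a toy sketch (in no way the paper's actual method, which operates over Markov logic; this is just plain EM on a two-component 1-D Gaussian mixture) of how "clustering" and "hidden variable" are the same idea: the hidden state z picks a cluster, and each cluster probabilistically generates the visible data.

```python
# Toy illustration: clusters as states of a hidden variable.
# EM for a two-component 1-D Gaussian mixture with fixed mixing
# weights (0.5) and fixed unit variances, to keep the sketch short.
import math
import random

random.seed(0)

# Synthetic visible data generated from two hidden "states".
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) \
        / (sigma * math.sqrt(2 * math.pi))

# Initial guesses for the two cluster means.
mu = [-1.0, 6.0]

for _ in range(30):
    # E-step: posterior probability that each point came from cluster 1,
    # i.e. inference over the hidden variable given the visible data.
    resp = []
    for x in data:
        p0 = 0.5 * normal_pdf(x, mu[0], 1.0)
        p1 = 0.5 * normal_pdf(x, mu[1], 1.0)
        resp.append(p1 / (p0 + p1))
    # M-step: re-estimate each cluster mean from its weighted points.
    mu[1] = sum(r * x for r, x in zip(resp, data)) / sum(resp)
    mu[0] = sum((1 - r) * x for r, x in zip(resp, data)) \
        / sum(1 - r for r in resp)

print(sorted(mu))  # the recovered means sit near the true 0 and 5
```

The point of the sketch: nothing in the data names the two clusters; the algorithm hypothesizes a hidden variable with two states because that makes the visible data more probable, which is exactly the trade-off in the question above.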

In fact, adding hidden predicates and entities in the case of Markov
logic makes the space of models Turing-complete (and even bigger than
that if higher-order logic is used). But if I am not mistaken, the
clustering used in the paper I refer to is not that powerful. So the
question is: is clustering in general powerful enough for AGI? Is it
fundamental to how minds can and should work?

PS-
For another example, I know the LIDA framework makes extensive use of
clustering, in the form of associative memory.

[Steps cautiously into the discussion ....]

If a system learns new concepts by itself, rather than having them predetermined by the programmer, does it not have to use "hidden variables", almost by definition?

Before the concept is learned, it would just be some kind of regular conjunction, or co-occurrence, or "cluster" of known concepts, would it not?

If this is not the sense of clustering that you mean, would it be fair to say that the concept really only applies to a particular view of knowledge representation?

Otherwise it sounds like behaviorism, where links between sensory patterns (whatever those might be) and actions (whatever those might be) were supposed to be mediated by only a single level of associations.

Puzzled.



Richard Loosemore



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
