On 11/27/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> An issue with Hopfield content-addressable memories is that their
> memory capability gets worse and worse as the networks get sparser and
> sparser. I did some experiments on this in 1997, though I never
> bothered to publish the results ... some of them are at:
> http://www.goertzel.org/papers/ANNPaper.html
I found just the opposite: Hopfield network memory capability gets
much better as the networks get sparser, down to very low connectivity.
However, I was measuring performance as a function of storage space and
computation. A fully-connected Hopfield network of 100 neurons has
about 10,000 connections. A Hopfield network of 100 neurons with only
10 connections per neuron has one-tenth as many connections, yet can
recall more than one-tenth as many patterns.
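To make the comparison concrete, here is a small sketch of what I mean
(my own illustrative code, not the original experiments; the pattern
count, random seed, and random-wiring rule are all assumptions):
Hebbian storage and sign-threshold recall in a fully-connected network
versus one with about 10 connections per neuron.

```python
# Sketch: full vs. sparse Hopfield recall (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns, mask):
    # Hebbian outer-product rule, restricted to the allowed connections.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W * mask

def recall(W, probe, steps=20):
    # Repeated synchronous sign-threshold updates.
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def random_mask(n, k, rng):
    # Symmetric connectivity mask with roughly k connections per neuron.
    mask = np.zeros((n, n))
    for i in range(n):
        js = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        mask[i, js] = 1.0
    mask = np.maximum(mask, mask.T)   # keep the weight matrix symmetric
    np.fill_diagonal(mask, 0.0)
    return mask

n, k, m = 100, 10, 5
patterns = rng.choice([-1.0, 1.0], size=(m, n))
W_full = train_hopfield(patterns, np.ones((n, n)) - np.eye(n))
W_sparse = train_hopfield(patterns, random_mask(n, k, rng))

# Flip a few bits of a stored pattern and see how close recall gets.
probe = patterns[0].copy()
probe[:5] *= -1
errs_full = int(np.sum(recall(W_full, probe) != patterns[0]))
errs_sparse = int(np.sum(recall(W_sparse, probe) != patterns[0]))
print("full:", errs_full, "bit errors; sparse:", errs_sparse, "bit errors")
```

A per-connection capacity comparison would repeat this over increasing
pattern counts until recall breaks down, for each connectivity level.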
Furthermore, if you selectively eliminate the weak connections and keep
the strong ones, you can make Hopfield networks that are very sparse
yet perform almost as well as fully-connected ones.
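That pruning rule can be sketched like this (the 10% retention level is
an arbitrary choice for illustration, not a recommended setting): train
a fully-connected network, then zero all but the largest-magnitude
weights.

```python
# Sketch: prune a trained Hopfield network, keeping only strong weights.
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 5
patterns = rng.choice([-1.0, 1.0], size=(m, n))

# Fully-connected Hebbian training.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0.0)

# Keep roughly the top 10% of connections by |weight|; ties at the
# threshold may keep somewhat more than 10%.
keep = int(0.10 * n * (n - 1))
thresh = np.sort(np.abs(W[~np.eye(n, dtype=bool)]))[-keep]
W_pruned = np.where(np.abs(W) >= thresh, W, 0.0)

def recall(W, probe, steps=20):
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Recall a stored pattern from a corrupted probe using the pruned net.
probe = patterns[0].copy()
probe[:5] *= -1
out = recall(W_pruned, probe)
print("bit errors after pruning:", int(np.sum(out != patterns[0])))
```

Because pruning by |weight| treats W and its transpose identically, the
pruned matrix stays symmetric, which is what keeps the energy-function
picture of Hopfield dynamics intact.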
BTW, the "canonical" results about Hopfield network capacity in the
McEliece 1987 paper are wrong. I can't find the flaw in their
derivation, so I don't know why they're wrong, but I know that the
paper
a) makes the mistake of comparing recall errors of a fixed number of
bits between networks of different sizes, which means it counts a 1-bit
error in recalling a 1000-bit pattern as equivalent to a 1-bit error in
recalling a 10-bit pattern, and
b) claims that recall of n-bit patterns, starting from a presented
pattern that differs from the target in n/2 bits, is quite good. This
is impossible: differing in n/2 bits means the input pattern is RANDOM
with respect to the target, so about half of all the stored targets
should be closer to the input pattern than the intended one.
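The argument is easy to check numerically (the sizes and trial count
below are arbitrary choices of mine): flip exactly n/2 bits of a target
and compare the probe's distance to the target against its distance to
an unrelated random pattern.

```python
# Numerical check: a probe at Hamming distance n/2 from the target is
# uncorrelated with it, so an unrelated pattern is closer about half
# the time.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1000, 200
closer = 0
for _ in range(trials):
    target = rng.choice([-1, 1], size=n)
    probe = target.copy()
    flip = rng.choice(n, size=n // 2, replace=False)
    probe[flip] *= -1                      # Hamming distance exactly n/2
    other = rng.choice([-1, 1], size=n)    # an unrelated stored pattern
    d_target = int(np.sum(probe != target))  # = n/2 by construction
    d_other = int(np.sum(probe != other))    # ~ Binomial(n, 1/2)
    if d_other < d_target:
        closer += 1
print("fraction of unrelated patterns closer than the target:",
      closer / trials)
```

The printed fraction comes out near one half, which is exactly why good
recall from an n/2-bit-corrupted probe cannot happen.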
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303