On Monday 27 November 2006 10:35, Ben Goertzel wrote:
>...
> An issue with Hopfield content-addressable memories is that their
> memory capability gets worse and worse as the networks get sparser and
> sparser.   I did some experiments on this in 1997, though I never
> bothered to publish the results ... 

[General observations, not aimed at Ben in particular:]

One of the reasons I'm not looking at actual Hopfield (or any other kind of 
NN) is that I think that a huge amount of what goes on in AI today is 
premature optimization. I.e. the vast majority of the technical work has more 
to do with taking operations that don't have intelligence and making them run 
fast, than with finding operations that do exhibit intelligence. My approach, 
admittedly unusual, is to assume I have all the processing power and memory I 
need, up to a generous estimate of what the brain provides (a petaword and 
100 petaMACs), and then see if I can come up with operations that do what it 
does. If not, it would be silly to try to do the same task with a machine 
one to 100 thousand times smaller.

There are plenty of cases where it's just a royal pain to get a Hopfield net 
or any other NN to do something that's blindingly simple for an ordinary 
program or vector equation. Ignore the implementation; think in terms of the 
data representation as long as you can. When you've got that nailed, you can try 
for that factor of a thousand optimization...
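(To make the "ordinary vector equation" point concrete: here's a minimal 
Hopfield-style content-addressable memory sketch in NumPy. The sizes, seed, and 
corruption level are arbitrary choices for illustration; the point is that 
storage and recall are each essentially one vector equation.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5  # neurons, stored patterns (well under the ~0.14*n capacity limit)
patterns = rng.choice([-1, 1], size=(p, n))

# Hebbian "learning" is just a sum of outer products with a zeroed diagonal
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

# Corrupt a stored pattern, then recall by asynchronous sign updates
state = patterns[0].copy()
state[rng.choice(n, size=30, replace=False)] *= -1  # flip 15% of the bits

for _ in range(10):  # sweeps; asynchronous updates always reach a fixed point
    changed = False
    for i in range(n):
        s = 1 if W[i] @ state >= 0 else -1
        if s != state[i]:
            state[i], changed = s, True
    if not changed:
        break
```

At this light load the net settles back onto the stored pattern, and you can 
reason about capacity, sparsity, etc. at the level of those two equations 
before ever worrying about a fast or biologically plausible implementation.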

--Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
