--- On Sat, 6/14/08, Ed Porter <[EMAIL PROTECTED]> wrote:

> [Ed Porter] I still think you are going to need multi-bit weights at
> each row-column element in the matrix -- since most representations
> of synapses I have seen assume a weight with at least 6 bits of
> information. There is also reason to think you need to store both a
> short-term and a long-term value, since that seems necessary for the
> temporal correlation component of Hebbian learning, and to represent
> the state information that is short-term memory.
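The two-timescale weight described above could be sketched as a fast "eligibility trace" feeding a slowly integrating long-term weight. This is a toy illustration, not anyone's proposed implementation; the decay rate and learning rate are invented for the example.

```python
# Sketch of a synapse holding both a short-term and a long-term value.
# The short-term trace records recent pre/post coincidence (temporal
# correlation); the long-term weight integrates it slowly. The decay
# and learning-rate constants here are illustrative assumptions.
class Synapse:
    def __init__(self):
        self.trace = 0.0   # short-term: recent temporal correlation
        self.weight = 0.0  # long-term: stored synaptic strength

    def step(self, pre, post, trace_decay=0.9, lr=0.1):
        # Hebbian coincidence accumulates quickly in the trace...
        self.trace = trace_decay * self.trace + pre * post
        # ...and leaks slowly into the persistent weight.
        self.weight += lr * self.trace

s = Synapse()
for _ in range(5):            # repeated correlated pre/post activity
    s.step(pre=1.0, post=1.0)
print(round(s.weight, 3))     # → 1.314
```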
There is a tradeoff between using a larger number of neurons (redundant representation of concepts) and a more precise representation of synaptic weights. There is also a tradeoff between representing the exact properties of synapses and approximating them from the properties of the neurons they connect.

> [Ed Porter] Having matrices for connecting the matrices makes sense.
> But my understanding is that SVD is often a form of lossy
> compression.

That is true. When used to compress semantic relationships, it implements the transitive property. For example, if a word-word matrix learns the relationships rain-wet and wet-water, SVD will infer rain-water even if that pair was never seen in the training corpus. SVD (or equivalently, a 3-layer neural network) could also be used to compress a mapping of pixels to characters for OCR. The hidden neurons (or largest eigenvalues) would represent intermediate features like line segments.

> I ran some benchmarks on my PC (2.2 GHz Athlon-64 3500+). It copies
> large arrays at 1 GB per second using MMX or SSE2, which is not
> quite fast enough for a 10^5 by 10^5 neural network simulation.
>
> [Ed Porter] I assume this is on an uncompressed matrix. I don't know
> what the overhead would be to compress the matrix, such as by
> representing all the elements that are empty with run-length
> encoding, and then trying to process it. Presumably to use MMX or
> SSE2 you would have to load some of the compressed matrix into
> cache, decompress it, then run it as a block through the MMX or SSE2
> unit.

It is probably not worth compressing a matrix with 10% density because of the time needed to decompress it. Decompressing runs of zeros is inefficient on a vector processor. SSE2 processors have prefetch instructions so that matrix elements can already be in cache by the time they are needed. However, modern processors usually detect sequential memory access and prefetch automatically.
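The transitive inference that SVD performs can be demonstrated on a toy word-word co-occurrence matrix. The words and counts below are invented for illustration; the point is only that a low-rank reconstruction fills in a rain-water association that the "training data" never contained.

```python
import numpy as np

# Toy word-word co-occurrence counts for (rain, wet, water).
# rain-wet and wet-water co-occur; rain-water was never observed.
# Small diagonal self-counts are added so the top singular value is
# unique (an assumption of this toy setup, not of the method).
words = ["rain", "wet", "water"]
A = np.array([
    [2.0, 3.0, 0.0],   # rain:  seen with wet, never with water
    [3.0, 2.0, 2.0],   # wet:   seen with rain and with water
    [0.0, 2.0, 2.0],   # water: seen with wet, never with rain
])

# Lossy compression: keep only the largest singular value (rank 1).
U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])

rain, water = words.index("rain"), words.index("water")
print(round(A1[rain, water], 2))   # → about 1.29, although A had 0.0
```

The compressed matrix assigns rain-water a clearly nonzero value because rain and water share the latent feature dominated by wet, which is exactly the lossy, transitive behavior described above.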
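The bandwidth claim can be made concrete with a back-of-the-envelope calculation. The 1 GB/s figure and the 10^5 by 10^5 size come from the exchange above; the one-byte-per-weight assumption is mine, chosen to make the arithmetic simple.

```python
# Why 1 GB/s is "not quite fast enough" for a 10^5 x 10^5 network:
# even at 1 byte per weight (an illustrative assumption), one full
# pass over the dense weight matrix is bandwidth-bound.
neurons = 10**5
synapses = neurons * neurons          # 10^10 weights, dense
bytes_per_weight = 1                  # assumed 1-byte quantized weight
matrix_bytes = synapses * bytes_per_weight

bandwidth = 1e9                       # measured copy rate: 1 GB/s
seconds_per_pass = matrix_bytes / bandwidth
print(f"{matrix_bytes / 1e9:.0f} GB per pass, "
      f"{seconds_per_pass:.0f} s per full update at 1 GB/s")
# → 10 GB per pass, 10 s per full update at 1 GB/s
```

At roughly ten seconds per full pass, real-time simulation at even a few updates per second is out of reach on memory bandwidth alone, independent of arithmetic throughput.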
-- Matt Mahoney, [EMAIL PROTECTED]
