Thanks.  Ahh...  Solved in 1980.  Very clever.  Memoize everything.  I 
guess if the game were based on floating-point matrix math then these GPUs 
would be more useful.  I guess this applies more to neural-net-like 
algorithms, e.g. Markov models and Bayes nets...  although these would 
probably especially benefit from sparse matrix math.  Could arbitrary 
graph algorithms be represented as sparse matrix math that these machines 
would handle well?
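(For the curious: the mapping does exist in principle.  One standard trick, not specific to any GPU, is to treat a graph's adjacency structure as a sparse Boolean matrix, so breadth-first search becomes repeated sparse matrix-vector products over the or-and semiring.  The sketch below is illustrative only; the dict-of-lists representation and function name are my own.)

```python
# Minimal sketch: BFS as repeated sparse matrix-vector products.
# 'adj' plays the role of a sparse adjacency matrix stored by rows;
# the frontier set plays the role of a sparse Boolean vector.

def bfs_levels(adj, source):
    """adj: dict mapping node -> list of neighbor nodes.
    Returns a dict of BFS level for each reachable node."""
    frontier = {source}        # sparse vector: set of nonzero indices
    visited = {source: 0}
    level = 0
    while frontier:
        level += 1
        # One "matrix-vector product" over the Boolean semiring,
        # masked by the already-visited nodes.
        nxt = set()
        for u in frontier:
            for v in adj.get(u, []):
                if v not in visited:
                    visited[v] = level
                    nxt.add(v)
        frontier = nxt
    return visited

# Example: edges 0->1, 0->3, 1->2
adj = {0: [1, 3], 1: [2]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 3: 1, 2: 2}
```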

It seems like communication between disparate areas of the matrix is the 
bottleneck here.  I wonder if very large sparse matrices (a few million 
dimensions square) could be automatically distributed to take advantage of 
local memory access?  Probably an open problem?  Sounds like a lot of our 
full-scale human knowledge problems (like commonsense logic calculations 
that can often be decomposed into sparse matrices) could use this kind of 
thing...
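(The simplest version of that distribution idea is just row-block partitioning: assign contiguous row blocks to workers so each block's nonzeros stay in one worker's local memory.  This sketch is mine, not from the thread; real systems would also repartition to balance nonzero counts.)

```python
# Hedged sketch: partition a sparse matrix, stored as COO triples,
# into contiguous row blocks, one per worker, for memory locality.

def partition_by_row(entries, n_rows, n_workers):
    """entries: list of (row, col, val) triples.
    Returns a list of triple-lists, one per worker."""
    block = (n_rows + n_workers - 1) // n_workers  # rows per worker
    parts = [[] for _ in range(n_workers)]
    for r, c, v in entries:
        parts[r // block].append((r, c, v))
    return parts

entries = [(0, 1, 1.0), (2, 0, 2.0), (5, 5, 3.0)]
parts = partition_by_row(entries, n_rows=6, n_workers=3)
# block size 2: rows 0-1 -> worker 0, rows 2-3 -> worker 1, rows 4-5 -> worker 2
print(parts)
```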

Bo

On Thu, 21 Jun 2007, Russell Wallace wrote:

) On 6/21/07, Bo Morgan <[EMAIL PROTECTED]> wrote:
) > 
) > 
) > You could probably do a crazy "game of life"!  Whole virtual organisms!
) > 
) 
) It turns out for this workload there are much bigger wins on the algorithm
) level - check out the Hash Life family of algorithms - for which CPUs are
) much better suited.
) 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e