> these articles are still somewhat short on detail, so it's hard to tell.

Yes, the Linux Magazine article was a bit empty, wasn't it? But did you
take a look at the 20-page whitepaper:
http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf

> Having said that, the "parallel data cache" they allude to may be
> significant. If this is going to enable the construction of data
> structures such as linked lists, or bring down global memory access time
> significantly, then I believe the performance of playout algorithms on
> the architecture will shoot up.

I only skimmed it very lightly, but page 15 discusses the memory
hierarchy, and page 16 shows how it gives big speedups for radix sort
and fluid simulations.
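For what it's worth, here is a minimal CUDA sketch of the idea as I read
it: the "parallel data cache" is the on-chip shared memory, and the win
comes from staging data there once and reusing it, instead of hitting
global memory repeatedly. The kernel name and tile size below are my own
invention, not from the whitepaper:

```cuda
// Hypothetical sketch: stage a tile of input in the on-chip
// "parallel data cache" (__shared__ memory), then do a tree
// reduction entirely on-chip.

__global__ void blockSum(const float *in, float *out, int n)
{
    // One tile of the input lives in shared memory.
    __shared__ float tile[256];

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;   // one global read per element
    __syncthreads();

    // Tree reduction in shared memory: no further global-memory
    // traffic until the single write at the end.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];   // one global write per block
}
```

If playout code can keep its working set (board state, pattern tables,
and so on) in that cache the same way, that would be where the speedup
your correspondent hopes for comes from.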

Darren


-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/gobet/  (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/ (Multilingual open source semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/