On Wednesday 23 September 2009, Jeshua Lacock wrote:
> On Sep 15, 2009, at 7:52 AM, Jyothish Soman wrote:
> > Sorry to you all for not being active; I would love to help in this
> > effort. From September 28 to the end of December, there is only GRASS
> > coding on the menu for me.
> >
> > Please do pass me any work in that time frame, OpenCL or CUDA. I
> > will be happy to oblige.
> >
> > Also, I think there is scope for using the GPU as a coprocessor and
> > splitting work between the different processors on the same machine
> > so that they work together.
> >
> > FYI, I am much more at ease with CUDA (NVIDIA GPU programming) than
> > with any other form of parallelization; it is my field of research.
>
> Greetings,
>
> Here is a video of a rather impressive Manifold GIS CUDA demo
> performing a raster operation:
>
> http://www.manifold.net/video/nvidia_cuda_demo.wmv
>
> The operation is reduced from 60 seconds to 2 using 1 GPU - imagine if
> they had 4 GPUs! It would go from 60 seconds to nearly realtime....
>
> Best,
Interesting... But I wonder about a couple of things.

1. Why would it take 60 seconds to compute a slope map from a 1400x1400
   cell DEM? On a 3.2 GHz Xeon (a 5-year-old machine), using a 2710x3306
   cell raster, r.slope.aspect takes:

   real    0m7.637s
   user    0m6.984s
   sys     0m0.508s

2. It looks like the map in the demo is Int16... Does CUDA-based math
   support double precision floating point calculations? Last time I
   checked it didn't.

Other than those 2 points, I would love to see GPU-based acceleration
in GRASS. A thread from last year on this topic:

http://www.mail-archive.com/[email protected]/msg01925.html

Hopefully things have improved since then!

Cheers,
Dylan

--
Dylan Beaudette
Soil Resource Laboratory
http://casoilresource.lawr.ucdavis.edu/
University of California at Davis
530.74.7341
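P.S. To make point 1 a bit more concrete, the per-cell operation the demo is
accelerating looks roughly like the sketch below. Everything in it is an
assumption on my part (the 1400x1400 grid from the video, 30 m cells, a plain
central-difference slope, a dummy flat DEM); it is not what Manifold or
r.slope.aspect actually do, just the general shape of the problem.

/*
 * Rough sketch of a per-cell slope kernel.  Grid size, resolution,
 * the central-difference formula and the dummy DEM are illustrative
 * assumptions only.  Build with: nvcc slope_sketch.cu -o slope_sketch
 */
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

__global__ void slope_kernel(const float *dem, float *slope,
                             int ncols, int nrows, float res)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;

    /* skip the outer ring so every thread has all four neighbours */
    if (x < 1 || y < 1 || x >= ncols - 1 || y >= nrows - 1)
        return;

    int i = y * ncols + x;
    /* central differences east-west and north-south */
    float dzdx = (dem[i + 1] - dem[i - 1]) / (2.0f * res);
    float dzdy = (dem[i + ncols] - dem[i - ncols]) / (2.0f * res);

    /* slope in degrees */
    slope[i] = atanf(sqrtf(dzdx * dzdx + dzdy * dzdy)) * 57.29578f;
}

int main(void)
{
    const int ncols = 1400, nrows = 1400;   /* size quoted for the demo */
    const float res = 30.0f;                /* assumed 30 m cells */
    size_t bytes = (size_t)ncols * nrows * sizeof(float);

    float *h_dem = (float *)malloc(bytes);
    float *h_slope = (float *)malloc(bytes);
    for (int i = 0; i < ncols * nrows; i++)
        h_dem[i] = 100.0f;                  /* dummy flat DEM */

    float *d_dem, *d_slope;
    cudaMalloc((void **)&d_dem, bytes);
    cudaMalloc((void **)&d_slope, bytes);
    cudaMemcpy(d_dem, h_dem, bytes, cudaMemcpyHostToDevice);

    /* one thread per cell, 16x16 blocks */
    dim3 block(16, 16);
    dim3 grid((ncols + block.x - 1) / block.x,
              (nrows + block.y - 1) / block.y);
    slope_kernel<<<grid, block>>>(d_dem, d_slope, ncols, nrows, res);
    cudaMemcpy(h_slope, d_slope, bytes, cudaMemcpyDeviceToHost);

    printf("slope at centre: %f degrees\n",
           h_slope[(nrows / 2) * ncols + ncols / 2]);

    cudaFree(d_dem);
    cudaFree(d_slope);
    free(h_dem);
    free(h_slope);
    return 0;
}

The point is just that every output cell depends only on a small fixed
neighbourhood, which is exactly the kind of work a GPU spreads across
thousands of threads - and which makes the 60 second CPU figure in the
video look even stranger to me.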

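P.P.S. On the double precision question: as far as I know, hardware doubles
arrived with compute capability 1.3 cards (the GT200 generation), and you have
to compile with nvcc -arch=sm_13 or doubles get demoted to float at compile
time. A quick way to see what a given card reports is below; the check against
1.3 is the documented cutoff, and the rest of the program is just scaffolding
I made up for illustration.

/*
 * Report the compute capability of each CUDA device and whether it
 * should support hardware double precision (>= 1.3).
 * Build with: nvcc dp_check.cu -o dp_check
 */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        printf("no CUDA devices found\n");
        return 1;
    }

    for (int dev = 0; dev < n; dev++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        /* compute capability 1.3 and later is supposed to have
         * hardware double precision */
        int has_double = (prop.major > 1) ||
                         (prop.major == 1 && prop.minor >= 3);

        printf("device %d: %s (compute %d.%d), doubles: %s\n",
               dev, prop.name, prop.major, prop.minor,
               has_double ? "yes" : "no");
    }
    return 0;
}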