Thanks Seth,

It makes sense that the slowest part of the whole equation would be the disk operations, and there must be quite a number of disk reads/writes when processing imagery. Currently we use RAID arrays that push data through at around 300 MB/s; granted, if these were SSDs in RAID 0 we could push beyond 1 GB/s. At present, processing (mosaicing) an 80 GB image takes several days to complete. This is also on 32-bit hardware, and I suspect the process is single-threaded, so we're limited to 3 GB of RAM. From what I understood, the optimal cache size in GDAL is about 500 MB, using a 512 MB window (unless that has changed).

If you can easily lay your hands on your GSoC application, that would be great. We are discussing what might be possible with a very talented coder who eats these types of "challenges" for breakfast! Perhaps a better approach would be a grid-computing one, using something like Condor to break up the processing?
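[Editor's note: as a sanity check on where the time goes, the figures above can be worked through directly. A single sequential pass over an 80 GB image at 300 MB/s takes only minutes, so a multi-day mosaic implies many passes, scattered small I/O, or CPU-bound resampling rather than raw throughput. A minimal back-of-envelope sketch (pure Python; the 3-day figure is an assumed round number for "several days"):]

```python
# Rough back-of-envelope: how long is one sequential pass over the
# image at the quoted RAID throughput?
image_gb = 80
throughput_mb_s = 300  # RAID array figure from the message above

image_mb = image_gb * 1024
seconds_per_pass = image_mb / throughput_mb_s
print(f"one full read: {seconds_per_pass / 60:.1f} minutes")

# Several days of wall-clock time at that rate corresponds to
# hundreds of equivalent full passes over the data, so disk
# bandwidth alone does not explain the runtime.
days = 3  # assumed; "several days" in the original message
passes = days * 24 * 3600 / seconds_per_pass
print(f"{days} days ~ {passes:.0f} equivalent full passes")
```

On these numbers one pass is under five minutes, which suggests the bottleneck is more likely CPU-side resampling or access patterns than sustained disk bandwidth.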
Cheers,
Shaun

-----Original Message-----
From: Seth Price [mailto:[email protected]]
Sent: Thursday, 19 November 2009 8:07 AM
To: Shaun Kolomeitz
Cc: [email protected]
Subject: Re: [gdal-dev] CUDA PyCUDA and GDAL

I've been intending for a while to work on either CUDA or OpenCL with GDAL & GRASS. I applied to do this for the Google Summer of Code, but wasn't accepted this past summer. I'll probably work on it someday, just to make sure my thesis work gets finished within budget.

However, I'm mostly interested in speeding up the resampling routines. They should be able to get close to the theoretical maximum on CUDA. I don't know about the routines you mention without looking closer at the code. For example, image reading is probably limited by disk speed, so it wouldn't be faster in CUDA. Translates are another operation that doesn't involve much CPU time compared to disk time, so it would also be difficult to speed up with CUDA. For these operations your best option might be to replace your hard drive with an SSD.

I'm not familiar with image mosaics in GDAL, but I would guess that they are heavy on resampling when generating a quality final image. That is an operation where each output pixel depends on the nearest ~16 input pixels. It takes a lot of CPU time to process all those pixels, and it would benefit from CUDA.

If you want, I could hunt down my GSoC application, which goes into a bit more detail.

~Seth

On Wed, November 18, 2009 2:46 pm, Shaun Kolomeitz wrote:
> I've heard a lot about the power of NVIDIA CUDA and am curious about
> ways in which we could leverage it to increase the performance of
> 1) image mosaics, 2) translates, and 3) image reading/rendering
> (especially highly compressed images).
> I also see that there is PyCUDA as well. I am unsure how (or if) either
> could be used to run (even portions of) GDAL?
>
> If anyone has any pointers it would be nice to know.
>
> Many thanks,
> Shaun Kolomeitz
> Principal Project Officer
> Business and Asset Services
> Queensland Parks and Wildlife Service
>
> As of 26 March 2009 the Department of Natural Resources and
> Water/Environmental Protection Agency integrated to form the Department
> of Environment and Resource Management
>
> _______________________________________________
> gdal-dev mailing list
> [email protected]
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
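[Editor's note: the "~16 input pixels" per output pixel mentioned in the thread corresponds to a 4x4 bicubic neighbourhood. Below is a hedged, pure-Python sketch of that weighting using the Keys cubic convolution kernel; it is illustrative only, not GDAL's actual resampler, and the function names are invented for this example.]

```python
import math

def cubic_weight(x, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 is the common choice)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_sample(img, x, y):
    """Sample img (a list of rows) at fractional (x, y) using the
    4x4 = 16 nearest input pixels, as in bicubic resampling."""
    x0, y0 = math.floor(x), math.floor(y)
    total = weight_sum = 0.0
    for j in range(-1, 3):          # 4 rows ...
        for i in range(-1, 3):      # ... x 4 columns = 16 pixels
            px, py = x0 + i, y0 + j
            if 0 <= py < len(img) and 0 <= px < len(img[0]):
                w = cubic_weight(x - px) * cubic_weight(y - py)
                total += w * img[py][px]
                weight_sum += w
    # Normalise so partial neighbourhoods at image edges stay sensible.
    return total / weight_sum if weight_sum else 0.0
```

Because each output pixel repeats this 16-pixel weighted sum independently of every other output pixel, the workload is embarrassingly parallel, which is exactly why it maps well onto CUDA.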
