On 4/16/07, J.P. Delport <[EMAIL PROTECTED]> wrote:
Hi,

Robert Osfield wrote:
> Hi J.P.
>
> You could try a 64 bit build, but it might just get you a bit further.

This was what I also feared. I might try it anyway though, just to see how far it gets.

> The problem is that the current rev of osgdem/VirtualPlanetBuilder
> keeps two rows of layers in memory at once to enable it to do matching
> of elevations and texels, and if you have a really high res wide area
> database you can hit memory limits. The memory used should still be way
> less than the whole target database size; I'm curious how big did it get?

The output data reached a size of 18GB on disk when osgdem failed (initially we built on Windows and it failed; Linux got further into the build). As you said, we had to limit ourselves to a smaller region of high res (+- level 15) data, with the wider area only going to level 12, for osgdem to finish. The input images are 360GB in total, so I suppose there was quite a way to go still.

> For future revs of VirtualPlanetBuilder I plan to rewrite it so that
> it does incremental and distributed builds, which will store
> intermediate results on disk rather than keeping complete rows in
> memory. The latter feature will enable database build sizes that are
> limited by disk space only, rather than memory footprint. The
> incremental build will allow you to stop and restart builds - so if
> something crashes, such as the software, or you hit a hardware failure,
> you won't have to go back to the beginning. The distributed build will
> allow you to make use of multiple machines.

Yes, this seems to be the best approach. I also thought that one could maybe (with input parameters to the app) run multiple instances of the build app in parallel, each working on a separate piece of data, after the initial layout of the DB has been determined. One could then use a batch scheduler (e.g. Sun Grid Engine) to manage all the processes.

> This work will commence end of Spring/early Summer.
>
> Robert.
A colleague and I might be able to contribute, even if it is just by testing new code with a large DB. We also have access to a 16-node Linux cluster that we could try out.
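The parallel-build idea above can be sketched roughly as follows: once the top-level layout of the database is known, split the remaining work into independent sub-regions and submit each one to the batch scheduler as a separate job. This is only an illustration of the approach under discussion, not anything osgdem supports today: the `--region` flag is a hypothetical placeholder (not a real osgdem option), and the `qsub` line is just the generic SGE submission form.

```python
def split_region(min_x, min_y, max_x, max_y, nx, ny):
    """Divide a bounding box into an nx-by-ny grid of sub-boxes."""
    dx = (max_x - min_x) / nx
    dy = (max_y - min_y) / ny
    boxes = []
    for i in range(nx):
        for j in range(ny):
            boxes.append((min_x + i * dx, min_y + j * dy,
                          min_x + (i + 1) * dx, min_y + (j + 1) * dy))
    return boxes

def build_commands(boxes, source_dir, out_prefix):
    """Emit one build command per sub-region, wrapped in a generic SGE
    qsub submission so the scheduler can farm the jobs out over a
    cluster. The --region option is a hypothetical placeholder."""
    cmds = []
    for n, (x0, y0, x1, y1) in enumerate(boxes):
        build = (f"osgdem --region {x0} {y0} {x1} {y1} "
                 f"-d {source_dir} -o {out_prefix}_{n:03d}.ive")
        cmds.append(f"qsub -N tile{n:03d} -b y {build}")
    return cmds

if __name__ == "__main__":
    # Split a (purely illustrative) 4x4 degree area into 4 jobs.
    boxes = split_region(0.0, 0.0, 4.0, 4.0, 2, 2)
    for cmd in build_commands(boxes, "srtm_tiles", "terrain"):
        print(cmd)
```

The appeal of this scheme is that the scheduler handles machine allocation, retries, and queueing, so a failed sub-region job can be resubmitted without redoing the whole build.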
Hi Robert,

It will be a pleasure for me to help you on these tasks by contributing or testing too; here we have a lot of very large databases, and we ran into many of these problems when we made our custom processing tool. So let us know when the work will really begin on VPB. :)

--
Serge Lages
http://www.magrathea-engine.org
_______________________________________________
osg-users mailing list
[email protected]
http://openscenegraph.net/mailman/listinfo/osg-users
http://www.openscenegraph.org/
