Hello all,
I'm running into significant problems using VPB to process extremely large
datasets. As an example, take the ASTER dataset, which comprises 528.2GB of
data split across 21,844 GeoTIFFs. The first problem I hit was an error about
too many open file handles. I raised the allowable number of open files from
4096 to 65536 through PAM and limits.conf. I no longer get errors about the
open-file limit, but now vpbmaster consumes over 16GB of system RAM when
generating databases (16GB is all the RAM the system has, so it starts eating
heavily into swap and slows the machine to a crawl).
Is there a way to keep vpbmaster from consuming that much RAM during normal
use?
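For reference, the file-limit change was along these lines (the username is
illustrative; your limits.conf may differ):

```
# /etc/security/limits.conf
arthur  soft  nofile  65536
arthur  hard  nofile  65536
```

This only takes effect for sessions where pam_limits is enabled, i.e. the
relevant /etc/pam.d/ service file contains "session required pam_limits.so";
you can confirm the new limit with `ulimit -n` in a fresh login shell.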
Question two:
I've tried processing one hemisphere at a time (North and South separately)
to avoid the out-of-memory issues, and it seemed to work great (it only
consumed about 4GB of RAM at any time). I pointed vpbmaster at a directory
containing only Northern-hemisphere files, and processing commenced
normally. But it began failing and blacklisting all of the cores as soon as
it started processing Southern-hemisphere items. I was using the command
> vpbmaster --geocentric --terrain -d /path/to/northernhemisphere -O Compressor=zlib -o output.osgb
Anyway, I decided to look at the code: I commented out the blacklist call in
MachinePool.cpp and uncommented IGNORE_FAILED_TASK, and everything has worked
fine from that point on. Is there a reason blacklisting is the first thing
VPB reaches for? Am I doing something wrong by letting it simply ignore the
failed tile and move on?
Thanks so much, I'd love to hear your input!
Arthur
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=53677#53677