The project I'm working on stores an ordered set of many millions of polygons, each with a varying number of vertices. My original solution (which you may find a bit overly complicated) was to have one dataset act as a lookup table that referenced another dataset and the offset within it where each polygon is stored. Those other datasets were sized to exactly fit their data. This has worked so far, but fetching ranges of data is not ideal: each polygon requires two separate reads, so a range has to be assembled from many individual fetches.
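Roughly, the original layout looks like the sketch below (h5py here just for brevity; the names are placeholders, and a single flat coordinate dataset stands in for the several exactly-sized datasets I actually use):

import numpy as np
import h5py

# Stand-in data: 1,000 polygons with 3-11 (x, y) vertices each.
rng = np.random.default_rng(0)
polygons = [rng.random((rng.integers(3, 12), 2)) for _ in range(1000)]

counts = np.array([len(p) for p in polygons], dtype=np.int64)
offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))

with h5py.File("polygons_indexed.h5", "w") as f:
    # One flat (total_vertices, 2) dataset holding every vertex back to back.
    f.create_dataset("coords", data=np.vstack(polygons), dtype="f8")
    # Lookup table: one (offset, count) row per polygon, in polygon order.
    f.create_dataset("index", data=np.column_stack((offsets, counts)), dtype="i8")

# Reading polygon i takes two separate reads: the index row, then the slice.
with h5py.File("polygons_indexed.h5", "r") as f:
    off, cnt = f["index"][42]
    poly = f["coords"][off:off + cnt]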
Fast-forward to today, when I decided to try using a variable-length dataset instead. When I finished the conversion, though, I noticed that the file size had nearly doubled (96 MB to 187 MB) even though no additional data was being stored, and that the time to write the file was significantly longer (much more than the doubling in file size would explain). Is there something I can do to keep the size down, or to tune the performance of this?
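For reference, the variable-length version looks roughly like this (again h5py with placeholder names; the chunk size is just a guess on my part):

import numpy as np
import h5py

rng = np.random.default_rng(0)
polygons = [rng.random((rng.integers(3, 12), 2)) for _ in range(1000)]

# One variable-length float64 element per polygon, holding its vertices
# flattened to (x0, y0, x1, y1, ...).
vlen_f8 = h5py.vlen_dtype(np.dtype("float64"))

with h5py.File("polygons_vlen.h5", "w") as f:
    dset = f.create_dataset("polygons", shape=(len(polygons),),
                            dtype=vlen_f8, chunks=(256,))
    for i, poly in enumerate(polygons):
        dset[i] = poly.ravel()

# A contiguous range of polygons now comes back in a single call.
with h5py.File("polygons_vlen.h5", "r") as f:
    batch = f["polygons"][100:200]
    first = batch[0].reshape(-1, 2)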
Thanks,
Paul