I think you are right: doing a dump & reinsert would be a nice way to optimize for size.
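Untested, but the whole trick is only a dozen lines with the stock Berkeley DB C API. A sketch (assuming the seven-argument DB->open() of BDB 4.1+; old.db/new.db are placeholder names and most error checking is omitted): a cursor walk returns the pairs in key order, so re-putting them into a fresh btree is an in-order load.

    #include <string.h>
    #include <stdio.h>
    #include <db.h>

    int main(void) {
        DB *src, *dst;
        DBC *cur;
        DBT key, data;
        int ret;

        /* open the existing (fragmented) btree read-only;
         * 7-arg open() is the BDB 4.1+ signature, older
         * versions drop the txn argument */
        db_create(&src, NULL, 0);
        src->open(src, NULL, "old.db", NULL, DB_BTREE, DB_RDONLY, 0);

        /* create the fresh output btree */
        db_create(&dst, NULL, 0);
        dst->open(dst, NULL, "new.db", NULL, DB_BTREE, DB_CREATE, 0664);

        /* the cursor returns pairs in key order, so the re-insert
         * is an in-order bulk load, which should pack the pages */
        src->cursor(src, NULL, &cur, 0);
        memset(&key, 0, sizeof key);
        memset(&data, 0, sizeof data);
        while ((ret = cur->c_get(cur, &key, &data, DB_NEXT)) == 0)
            dst->put(dst, NULL, &key, &data, 0);
        if (ret != DB_NOTFOUND)
            fprintf(stderr, "cursor: %s\n", db_strerror(ret));

        cur->c_close(cur);
        src->close(src, 0);
        dst->close(dst, 0);
        return 0;
    }

From the shell the same thing is just "db_dump old.db | db_load new.db", if the command-line tools are around.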

However, in my tests both of them had about the same fill factor, so the compressed db files will still be smaller.

Well, that means it didn't optimize for size then. ;)
Perhaps the theory that inserting in key order yields a 100% fill factor is incorrect.
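(For concreteness: fill factor is just the fraction of each btree page actually occupied, so 100 MB of key/data pairs at a ~50% fill factor need roughly 200 MB of pages, versus ~100 MB at 100%. Rough numbers, ignoring per-page overhead.)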


The odder result is that Jim sees about a 5x speed improvement when he disables compression. I also see a speedup.

Obviously you'd expect that if the task is CPU-bound.

Joe gets a 14% slowdown.

If Joe's disks are very slow compared to yours, that might explain the difference (because the compressed output file is smaller and can therefore be written to disk more quickly). Another possibility is a difference in filesystem behaviour: if in one case the data is not actually flushed to disk during the measured elapsed test time, and in the other case it is, that could make a big difference, and again the difference would depend on the size of the file.
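One way to take the flushing question out of the picture is to force the data to disk inside the timed region. A throwaway harness, just as a sketch ("output.db" stands for whatever file the test actually writes):

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        /* ... run the indexing workload here ... */

        /* fsync the output so write-back time is charged to the test;
         * on Linux fsync() is allowed on a read-only descriptor */
        int fd = open("output.db", O_RDONLY);   /* placeholder filename */
        if (fd >= 0) {
            fsync(fd);
            close(fd);
        }
        sync();   /* flush everything else for good measure */

        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("elapsed: %.3f s\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
    }

If the compressed and uncompressed runs converge once the fsync/sync is included, the difference was cache behaviour rather than CPU.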

Just some random ideas...

I'd try running tools like top, vmstat and iostat during the
test execution and see if they reveal anything interesting.
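For instance, a few columns of "vmstat 1" during the run usually tell the two stories apart: high user CPU (us) with little I/O wait points at compression being CPU-bound, while high wa, or a saturated disk in "iostat -x", points at the disks.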
