hi,
i've written field-based compression using bzip2.
my experience: the fields must be at least 50 bytes, or the compressed
data comes out bigger!
cu, gg
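Not from the thread, but a minimal Python sketch (stdlib bz2 module, sample values made up) illustrating that observation: bzip2's fixed header and block overhead swamps small fields, so tiny inputs come out larger than they went in, while larger fields compress fine.

```python
import bz2

# Tiny field: bzip2's fixed overhead (stream header, block marker, CRCs)
# makes the "compressed" output larger than the input.
small = b"3.14159265358979"            # 16 bytes
print(len(small), "->", len(bz2.compress(small)))

# Larger field: the overhead is amortized and compression pays off.
large = b"3.14159265358979" * 100      # 1600 bytes
print(len(large), "->", len(bz2.compress(large)))
```

The exact break-even point depends on the data, but the ~50-byte threshold reported above is consistent with bzip2's per-stream overhead.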
Hickey, Larry wrote:
I have a blob structure which is primarily doubles. Does anyone have
experience with data compression to make the blobs smaller? The tests
I have run so far indicate that compression on blobs of a few megabytes
is too slow to be practical.
I currently get at least 20 to 40 inserts per second, but if a single
compression takes over a second, it's clearly not worth the trouble.
Does anybody have experience with a compression scheme for blobs that
consist mostly of arrays of doubles?
Some schemes (ibsen) offer lightning-fast decompression, so if the
database were used primarily for reads this would be a good choice,
but the compression required to produce the data is very expensive.
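To get a rough feel for that speed/ratio trade-off, here is a hedged Python sketch (not from the thread; zlib stands in for whichever codec is under test, and the blob of doubles is synthetic) that times the fastest and strongest compression levels on a ~2 MB array of doubles:

```python
import array
import time
import zlib

# Synthetic blob: ~2 MB of doubles (values chosen only for illustration).
blob = array.array("d", (i * 0.001 for i in range(250_000))).tobytes()

for level in (1, 9):                       # fastest vs. strongest setting
    t0 = time.perf_counter()
    packed = zlib.compress(blob, level)
    elapsed = time.perf_counter() - t0
    print(f"level {level}: {len(blob)} -> {len(packed)} bytes "
          f"in {elapsed:.3f}s")

# Round-trip check: decompression must reproduce the blob exactly.
assert zlib.decompress(packed) == blob
```

If a single compression at the chosen level stays well under the per-insert budget implied by 20-40 inserts per second, the scheme may be viable; otherwise an asymmetric codec with cheap decompression only helps a read-mostly workload.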
-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------