Hello Marc-Alexandre,

There were identical requests before, but the implementation was
postponed. The reason was that 6.0 makes indexes more compact at the
price of slightly increased CPU load, so it was not practical to make
other changes at the same time. Now we can (and will) try.
Note that the compression will not be efficient for texts shorter than
1 Kb and gives bad results for texts of 4-8 Kb in length. If shorter
than 1 Kb, the space won does not pay for the CPU spent. If 4 to 8 Kb,
the compression of one blob page will usually produce one half-complete
page of inlined data in the index, plus probably a remap version of
that page, so the disk image size will finally _grow_ with compression.
This contradicts common sense but matches the experiment.
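The size rule described above can be sketched roughly as follows (a minimal illustration in Python using zlib, not Virtuoso's actual storage code; the function names and the exact threshold values derived from the 1 Kb / 4-8 Kb figures are assumptions for the sake of the example):

```python
import zlib

# Thresholds taken from the discussion above: below ~1 Kb the CPU cost
# outweighs the space saved; in the 4-8 Kb range compressing one blob
# page tends to yield a half-complete inlined page plus a remap page,
# so the on-disk size can actually grow.
SKIP_BELOW = 1024          # < 1 Kb: not worth the CPU
BAD_RANGE = (4096, 8192)   # 4-8 Kb: compression tends to grow disk usage

def should_compress(text: bytes) -> bool:
    """Heuristic: compress only when it is likely to pay off."""
    n = len(text)
    if n < SKIP_BELOW:
        return False
    if BAD_RANGE[0] <= n <= BAD_RANGE[1]:
        return False
    return True

def store_literal(text: bytes) -> bytes:
    """Return the bytes to store, compressed only when worthwhile."""
    if should_compress(text):
        packed = zlib.compress(text)
        if len(packed) < len(text):  # keep only a real saving
            return packed
    return text
```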

I can make a storage version that will selectively compress some
literals, but then I will need to experiment with some real data. Not
right now in any case; in a week or two. Meanwhile, I can download your
data if you have a big (and live) specimen.
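A predicate-based selection rule of the kind asked about could, in principle, look like this (a purely hypothetical sketch; the set name and the example IRIs are placeholders, not part of any Virtuoso API):

```python
# Hypothetical rule: compress the object literal only for predicates
# known to carry very large values, e.g. genomic sequences.
# The IRIs below are illustrative placeholders.
COMPRESS_PREDICATES = {
    "http://example.org/ns#sequence",
}

def compress_object_for(predicate: str) -> bool:
    """Decide, per predicate, whether object literals get compressed."""
    return predicate in COMPRESS_PREDICATES
```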

Best Regards,

Ivan Mikhailov
OpenLink Software
http://openlinksw.com

On Wed, 2010-03-10 at 10:09 -0500, Marc-Alexandre Nolin wrote:
> I have an N3 dump I'm currently loading into a Virtuoso Server (a
> complete NCBI GenBank). One literal always has a huge size: the one
> related to the predicate "sequence". Is it possible to compress
> literals with a rule based on the predicate?


