Dennis McCunney: > IIRC, Zlib uses the same compression algorithm as gzip.
Yes, but I think it compresses only one record at a time, instead of the whole channel. This means you'll repeat the dictionary each time, and you won't get the full compression for text that is part of a site template (common across pages). In theory, we could tell zlib to use a pre-defined dictionary, which might be constant for all web pages or at least per-channel, but that would not be trivial. Getting it to treat the whole database as one file might slow things down too much for long documents like ebooks. Combining several logical records into a single physical record might work, but would definitely complicate things.

-jJ
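As a rough illustration of the pre-defined-dictionary idea (not Plucker code): Python's zlib module accepts a preset dictionary via the zdict argument, so each record can back-reference shared template text instead of re-learning it. PRESET_DICT here is a hypothetical per-channel dictionary; how such a dictionary would actually be built and shipped to the viewer is exactly the non-trivial part.

    import zlib

    # Hypothetical per-channel preset dictionary: boilerplate that the
    # site template repeats on every page. In practice this would have
    # to be derived from the channel and known to both sides.
    PRESET_DICT = b"<html><head><title></title></head><body>"

    def compress_record(record: bytes, zdict: bytes = PRESET_DICT) -> bytes:
        """Compress one logical record, seeding deflate with the shared dictionary."""
        c = zlib.compressobj(level=9, zdict=zdict)
        return c.compress(record) + c.flush()

    def decompress_record(blob: bytes, zdict: bytes = PRESET_DICT) -> bytes:
        """Decompress one record; the same dictionary must be supplied."""
        d = zlib.decompressobj(zdict=zdict)
        return d.decompress(blob) + d.flush()

The records stay independently decompressible (no need to inflate the whole database to fetch one page), but text matching the preset dictionary compresses almost as well as it would in a single whole-channel stream.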
