If you have a lot of static content, use the TCL thing I posted
earlier.  It runs a file through gzip; just don't unlink the output
after the first compression.  You could even do snazzy stuff like
checking whether the file has changed since it was last compressed and
re-compressing it (but you might need a lock array to prevent
simultaneous compressions, depending on your traffic level).
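The idea is roughly this (a minimal shell sketch, not the actual Tcl
proc; the file names are made up for illustration):

```shell
# Keep a gzipped copy next to each static file and re-run gzip only
# when the source has changed since the last compression.
src=/tmp/page.html
gz=$src.gz
printf '%s\n' '<html>hello</html>' > "$src"   # stand-in for real content

# Compress if no cached copy exists, or the source is newer than it.
# Under heavy traffic you'd also want a lock (the "lock array" above)
# so two requests don't compress the same file at once.
if [ ! -e "$gz" ] || [ "$src" -nt "$gz" ]; then
    gzip -c "$src" > "$gz"    # keep the output; don't unlink it
fi
```

Subsequent hits find a fresh .gz copy and skip the gzip run entirely.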

We have content that we cache on disk that is relatively expensive to
generate.  I could write something to maintain a compressed copy, but
that also puts more pressure on the virtual memory subsystem, so even
in that case, on-the-fly compression may be a better solution.  It all
depends on where your potential choke points are.
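On-the-fly compression just means the compressed bytes never live on
disk as a cached copy; sketched in shell with a made-up generator
function standing in for the expensive page generation:

```shell
# Stand-in for a page that is expensive to generate.
generate_page() { printf '%s\n' '<html>dynamic</html>'; }

# Pipe the generated output straight through gzip per request;
# no compressed copy is maintained between requests.
generate_page | gzip -c > /tmp/onthefly.gz
```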

For content that changes very slowly relative to how often it is
accessed, compressing ahead of time makes more sense.  But if you
regenerate the content every 6 hours on average (we do), you may only
be redundantly compressing it a few times.  Plus, it took 0.4 seconds
to compress a 900K HTML file with the gzip executable, including
writing the results to the file system.  So for a reasonably large
HTML doc, say 100K, the in-memory compression time is really quick.
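You can reproduce a rough version of that measurement yourself; the
file below is synthetic repetitive markup, so the exact timing and
ratio won't match a real 900K page:

```shell
# Build a ~900K HTML-ish file (30000 lines of ~31 bytes each).
awk 'BEGIN { for (i = 0; i < 30000; i++)
               print "<p>some repetitive markup</p>" }' > /tmp/big.html

# Time the external gzip, including writing the result to disk.
time gzip -cf /tmp/big.html > /tmp/big.html.gz
```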

Jim

> I'm not against compressing content, in fact it is a feature I've been
> waiting for, but when some of my static html files can be 500k or more,
> compressing them over and over seems like a waste.   For completely dynamic
> content, you could then compress on the fly.   Someone else described a
> month or two ago that one other server keeps two versions of content, one
> plain and one compressed.
>
> Daniel P. Stasinski
> http://www.disabilities-r-us.com
> [EMAIL PROTECTED]
>
