Yeah, what Rob said. We have a page that generates a 330K HTML file.
This reduces it to 25K. Since it is a huge table, it renders way
faster with compression because the browser receives all the data
faster (browsers have to receive all rows of a table before rendering
can begin unless the table is completely constrained). Since it is
dynamic content, we can't store it pre-gzipped.
Thanks to Rob's nudging, I finished the C module. It's at:
http://www.rubylane.com/public/rlreturnz
There is weirdness I don't understand, like not copying the first
2 and last 4 bytes of the output from "compress2", but whatever.
There are byte-order issues with adding the CRC and length words
too - I'm on an Intel box, so it works. Got tired of fighting
with C's shift/signed/unsigned/... garbage.
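The "weirdness" is most likely the difference between the zlib and gzip
wrappers: compress2() emits zlib format (RFC 1950), which is a 2-byte
header, the raw DEFLATE stream, and a 4-byte Adler-32 trailer, while
gzip (RFC 1952) wants its own 10-byte header around the same raw DEFLATE
data, followed by a CRC-32 and the uncompressed length, both of which
are little-endian regardless of host byte order (which is why it happens
to work on an Intel box). Here's a sketch of that repackaging in Python
rather than C, just to show the framing; the function name is mine, not
anything from the module:

```python
import struct
import zlib

def gzipify(data: bytes, level: int = 6) -> bytes:
    """Repackage zlib-format compressed output as a gzip stream."""
    zdata = zlib.compress(data, level)   # zlib format: 2-byte header + DEFLATE + 4-byte Adler-32
    deflate = zdata[2:-4]                # drop the zlib header/trailer, keep raw DEFLATE

    # gzip header: magic 1f 8b, CM=8 (deflate), no flags, mtime=0, XFL=0, OS=255 (unknown)
    header = b"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff"

    # gzip trailer: CRC-32 of the *uncompressed* data, then its length,
    # both forced little-endian by the "<" format, independent of the host
    trailer = struct.pack("<II",
                          zlib.crc32(data) & 0xFFFFFFFF,
                          len(data) & 0xFFFFFFFF)
    return header + deflate + trailer
```

The struct.pack("<II", ...) call is the part that sidesteps the
shift/signed/unsigned juggling in C: "<" means little-endian output no
matter what the machine is.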
If you want to override ns_return for the whole server, add this
to a startup file in modules (haven't tested this - should work):
rename ns_return {}
rename rl_returnz ns_return
Jim
>
> > > What if the content is dynamically generated?
> >
> > Then send normal? It just seems like a lot of CPU to be compressing the
> > same content over and over.
>
> Profile, don't speculate.
>
> The bottleneck is often not the server CPU or disk, but the end user's
> connection - usually 56Kbps or less. In such cases, it may be much
> better to compress a dynamically-generated entity while sending it.
>
> Compressing the data has two other benefits that apply even if the end
> user has a fast connection: you send fewer packets, and you send less
> total data. Fewer packets means fewer possible dropped packets, and
> dropped packets can be a big performance hit. Less total data usually
> means lower cost. For example, I pay $35/GB of data I upload on my T1.
>