On Sun, Apr 29, 2001 at 01:00:00AM +0100, toad wrote:
> On Sat, Apr 28, 2001 at 06:48:58PM -0500, Scott G. Miller wrote:
> > > 
> > > One solution would be to have a minimum compression threshold.  Files
> > > under this threshold would be left uncompressed, and files larger than
> > > this threshold would be compressed.  This would give space and
> > > bandwidth savings for big files, but no slowdowns from compressing
> > > small files.
> > 
> > But that's likely to mean that no files would be compressed, as nearly all
> > decently sized files are already compressed with some scheme (jpeg, mpeg,
> > etc).  Methinks you guys just like coding too much.
> > 
> >     Scott 
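[Purely for illustration, not from the original thread: a rough Java sketch
of the threshold idea being debated above, which also sidesteps Scott's
objection by keeping the original bytes whenever compression doesn't actually
shrink them. The 32 KB cutoff and the class/method names are made up here.]

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public class ThresholdCompressor {
        // Hypothetical cutoff: below this, compression overhead isn't worth it.
        private static final int MIN_COMPRESS_SIZE = 32 * 1024;

        /**
         * Returns gzipped data if the input is large enough and actually
         * shrinks; otherwise returns the original bytes untouched.
         */
        public static byte[] maybeCompress(byte[] data) throws IOException {
            if (data.length < MIN_COMPRESS_SIZE)
                return data;
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(buf);
            gz.write(data);
            gz.close();
            byte[] compressed = buf.toByteArray();
            // Already-compressed formats (jpeg, mpeg, ...) won't shrink: keep the original.
            return (compressed.length < data.length) ? compressed : data;
        }
    }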
> This is a special case - text or HTML of a reasonable size that should be
> compressed to save space and hence increase its likelihood of survival on
> the network. It would not be sensible for it to occur automatically, unless
> perhaps only with particular MIME types, but decoding support in
> freenet_request and fproxy might be worthwhile. Medium-sized HTML pages are
> quite compressible, as evidenced by the W3C performance paper. And it's not
> just about bandwidth. And sticking stuff in ZIPs is not a good answer for
> pages which people would like to refer to and browse individually. The size
> bias means that it will significantly
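
[Again illustrative only: gating on MIME type rather than size, roughly the
behaviour toad describes for freenet_request/fproxy. The whitelist of types
and the names below are assumptions, not anything from the thread.]

    import java.util.Arrays;
    import java.util.List;

    public class MimeCompressionPolicy {
        // Hypothetical whitelist: highly compressible text types only.
        private static final List<String> COMPRESSIBLE =
            Arrays.asList("text/html", "text/plain", "text/css");

        /**
         * Decide whether content of the given MIME type should be gzipped
         * before insertion; already-compressed media types are skipped.
         */
        public static boolean shouldCompress(String mimeType) {
            return mimeType != null && COMPRESSIBLE.contains(mimeType.toLowerCase());
        }
    }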
