> This seems like a problem in search of a solution. The stuff that
> needs compressing is already compressed. The stuff that isn't already
> compressed, like HTML files and text files, is small enough that
> compressing it is going to take longer than sending it over the wire raw.
>
> It's just not convincing.

I disagree. I think the way things are set up now, a significant portion
of a Freenet user's time is spent reading HTML and other compressible
formats while searching for media files. zlib compression is
computationally very cheap, especially on human-readable file formats,
and it can compress text files by 80-90% in many cases. This data does
not make up a large part of Freenet measured purely by size, but it
does account for a significant percentage of the files users actually
request. Compressing text and HTML files will certainly not take longer
than sending them over the wire raw: even over a local loopback
connection using HTTP 1.1 with compression, where there is no network
lag at all, retrieval time increases by only about 6-7%, and the
reduced I/O time nearly makes up for the time spent compressing. I
definitely think this would help improve the overall user experience.
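For anyone who wants to check the numbers themselves, here is a minimal
Python sketch that measures zlib's compression ratio and cost on
human-readable markup. The HTML payload is invented for illustration
(deliberately repetitive, as index pages tend to be), so treat the
printed figures as a rough demonstration rather than a benchmark:

    import time
    import zlib

    # Hypothetical stand-in for a typical human-readable index page.
    html = ("<html><body>"
            + "<p>Some repetitive index text.</p>" * 500
            + "</body></html>").encode("utf-8")

    start = time.perf_counter()
    compressed = zlib.compress(html, 6)  # zlib's default speed/size trade-off
    elapsed = time.perf_counter() - start

    ratio = 1 - len(compressed) / len(html)
    print(f"original: {len(html)} bytes, compressed: {len(compressed)} bytes")
    print(f"savings: {ratio:.1%} in {elapsed * 1000:.2f} ms")

On this kind of input the savings land well above 80%, and the
compression time is a fraction of a millisecond, which is exactly why
the extra CPU work is a good trade against bytes on the wire.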
