On Sat, Apr 28, 2001 at 07:21:22PM -0400, Chris Anderson wrote:
> On Sun, 29 Apr 2001, Stefan Reich wrote:
> 
> > ----- Original Message -----
> > From: "Chris Anderson" <[EMAIL PROTECTED]>
> > > > So, the equation becomes, for sufficiently small data values, compress
> > > > time + const xfer time vs. const xfer time.
> > > >
> > > > Of course, once again, feel free to prove me wrong.
> > > >
> > >
> > > No, I agree.  In addition, there is usually a several K window size in
> > > software based compressors, so the const xfer time is especially true in
> > > that case.
> > 
> > Yeah, for the case of very small data packets I agree too.
> > 
> > But I still figure there are many cases where compression would make sense
> > (large data packets, relatively low bandwidth). And CPU time _is_ cheap,
> > even if a Freenet is supposed to be a non-intrusive background process.
> > 
> > It's your decision whether this justifies compressing _any_ data that is
> > sent through Freenet... I vote for it anyway.
> > 
> 
> If it adds significant storage capacity to the network, I would vote yes
> too.  I'd guess it might add a little, but it's difficult to
> tell.  The amount of time that uncompressing adds to surfing Freenet is
> insignificant compared to everything else that happens.  I guess I'll have
> to measure it to satisfy Mr. Bad.

One solution would be to have a minimum compression threshold.  Files
under this threshold would be left uncompressed, and files larger than
this threshold would be compressed.  That would give space and bandwidth
savings on big files without slowing things down by compressing small
ones.
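
A minimal sketch of what I mean, in Java (the threshold value, the class
name, and the use of java.util.zip's GZIPOutputStream are just
illustrative, not a proposal for the actual node code):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public class ThresholdCompressor {

        // Hypothetical cutoff: anything smaller than this is sent as-is.
        private static final int COMPRESSION_THRESHOLD = 32 * 1024; // 32 KB

        // Return the data unchanged if it is below the threshold,
        // otherwise return a gzip-compressed copy.
        public static byte[] maybeCompress(byte[] data) throws IOException {
            if (data.length < COMPRESSION_THRESHOLD) {
                return data;
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(out);
            gzip.write(data);
            gzip.close();
            return out.toByteArray();
        }
    }

The receiving node would of course need a flag in the message header
saying whether the payload was compressed, so it knows whether to run it
back through a decompressor.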

-- 
Yes, I know my enemies.
They're the teachers who taught me to fight me.
Compromise, conformity, assimilation, submission, ignorance,
hypocrisy, brutality, the elite.
All of which are American dreams.

              - Rage Against The Machine
