Hi Mark,

>On Mon, 10 Apr 2000, Noel L Yap wrote:
>
>>This sounds like a really good idea -- it sounds like a fairly minor 
>>change to
>>the source (at least on the surface).  I'd opt for gzip/gunzip since it's
>>already distributed as part of the source.
>
>How would you intend to define which files were compressed?  Would you
>envisage it as being repository wide (any files that are not compressed
>get compressed upon next commit) or per file basis, such as adding a 'z'
>option to -k, such as -kz or -kbz.  You would also need to define a level
>of compression, would you not?

I will give you a hint - the switch is already there: -kb! I think that the 
biggest problem with CVS, as far as disk storage goes, is the binary files. 
Folks who use CVS for websites have HUGE files like pictures and multimedia, 
sometimes a few MB each!

>Would the compression/decompression add much load to the system at all, I
>could imagine some large repositories it could become rather sluggish.

Maybe, maybe not. If it can be set up so that, let's say, binary files are 
always compressed at a certain level (say z5, which is better than 
nothing), then you could send the compressed version (yes, I know it is not 
that straightforward, but it can be done with a few tricks) to the client 
directly. Then there is no unnecessary overhead - instead of:

get_compressed->uncompress->compress->send->[Client]uncompress

You get something like:

get_compressed->send_compressed->[Client]uncompress

That way you can actually speed up the server operations, since the server 
only deals with compressed (and therefore smaller) data.
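The difference between the two pipelines above can be sketched roughly like 
this (a minimal Python sketch using zlib; the function names and the idea of 
storing blobs pre-compressed at level 5 are illustrative assumptions, not 
actual CVS internals):

```python
import zlib

# Hypothetical repository blob, stored already compressed at level z5.
original = b"binary file contents " * 1000
stored = zlib.compress(original, 5)

def serve_naive(stored_blob):
    """Current-style path:
    get_compressed -> uncompress -> compress -> send."""
    data = zlib.decompress(stored_blob)   # server decompresses from storage
    wire = zlib.compress(data, 5)         # then recompresses for the wire
    return wire

def serve_direct(stored_blob):
    """Proposed path: get_compressed -> send_compressed.
    Ship the stored compressed bytes as-is."""
    return stored_blob

# The client decompresses either way; the direct path simply skips
# one decompress and one compress on the server side.
assert zlib.decompress(serve_naive(stored)) == original
assert zlib.decompress(serve_direct(stored)) == original
```

Either way the client ends up with identical data, but the direct path does 
zero compression work on the server per request.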

BR,
Jerzy

The first thing they don't teach you at school: "Never say never".
Please send any issues not related to the list to me in private, thanks.
