You can also modify RCS (or the librified version of it) to read and write
compressed files.  I modified RCS v5.6 and CVS v1.3 to do just that.  My
work is still available at http://www.wakawaka.com/source.html

But be warned:  This is alpha-quality code, and error checking and recovery
are somewhat lacking (to put it euphemistically).  I've been wanting to pick
this project up again when I have time, but....

Seems to me that zlib could also be put to good use in a similar manner.
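As a rough illustration of the idea, here is a minimal sketch in Python (using its gzip module rather than raw zlib) of a reader that falls back to a compressed variant of an RCS file when the plain `,v` file is absent. The function name and the `.gz` suffix are my own choices for the example; nothing like this exists in RCS or CVS.

```python
import gzip
import os

def open_rcs(path):
    """Open an RCS ',v' file for reading, transparently falling back
    to a gzip-compressed variant (path + '.gz') if the plain file is
    missing.  Hypothetical helper, not part of RCS or CVS."""
    if os.path.exists(path):
        return open(path, "rb")
    gz = path + ".gz"
    if os.path.exists(gz):
        # gzip.open returns a file-like object, so callers can treat
        # the compressed archive exactly like the plain one.
        return gzip.open(gz, "rb")
    raise FileNotFoundError(path)
```

The same fallback chain could be extended to `.z` or `.bz2` by trying further suffixes with the matching decompressor.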

--- Forwarded mail from [EMAIL PROTECTED]

Tobias Weingartner wrote:
> And to you
> people that say "my sandbox is 100+MB, it's not practical, etc", I say
> "have you looked at the price of disk lately!?!"

I have an idea. How hard would it be to patch CVS (server side) to say
"I didn't see file,v; let me see if I see file,v.z (or file,v.gz, .bz,
.bzip, or whathaveyou), run uncompress/gunzip/bunzip, check the file
in, and then re-compress/gzip/bzip the file"?

Seems to me that defining one convention (an extension, a compression
program, and an uncompress program or uncompress flag) would completely
solve this concern of "it's getting too big".
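The uncompress/operate/recompress cycle described above can be sketched in a few lines of Python. The function name, the gzip-only handling, and the lack of locking are all simplifications of mine; a real server-side CVS patch would need locking and the error recovery mentioned earlier.

```python
import gzip
import os
import shutil

def with_uncompressed(path, operate):
    """If 'path' is absent but 'path.gz' exists, decompress it, run
    operate(path), then re-compress and remove the plain file.
    Sketch only; other suffixes (.z, .bz) would follow the same
    pattern with their own (de)compressors."""
    gz = path + ".gz"
    if not os.path.exists(path) and os.path.exists(gz):
        # Inflate file,v.gz to file,v so the checkin sees a plain file.
        with gzip.open(gz, "rb") as src, open(path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        try:
            result = operate(path)
        finally:
            # Re-compress whatever the operation left behind.
            with open(path, "rb") as src, gzip.open(gz, "wb") as dst:
                shutil.copyfileobj(src, dst)
            os.remove(path)
        return result
    # Plain file already present: no compression handling needed.
    return operate(path)
```

Wrapping each repository access this way keeps the archive compressed on disk while the checkin code continues to see an ordinary `,v` file.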

Heck, BitKeeper is doing that, and they report that several years of
Linux kernel trees get compressed down to a very reasonable size (I
think it was around 100 MB, total, for 3 years of the Linux kernel
tree); if it's good enough for Linux, how bad can it be for the rest
of us?

--- End of forwarded message from [EMAIL PROTECTED]
