Currently, Zippy is written to work on a seq/string entirely in memory, not on 
streams.

I did this to keep things simple. I hadn't written this compression stuff 
before, so I didn't want to make it any harder than it had to be. I also 
think in-memory is great for most scenarios (as an analogy, I'd expect Nim's 
readFile is used much more often than FileStream).
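
Just to make that concrete, here is a minimal sketch of the in-memory style, 
assuming Zippy's top-level compress/uncompress procs (check the current docs 
for exact signatures and defaults):

    import zippy

    # Everything is a plain string in memory: read the whole file,
    # compress the whole buffer, write the whole result back out.
    let original = readFile("data.json")
    let packed = compress(original)
    writeFile("data.json.compressed", packed)

    # Round-trip the whole buffer at once; uncompress is assumed to
    # detect the compressed format by default.
    let restored = uncompress(packed)
    assert restored == original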

I do think streaming support would be a nice improvement for some scenarios 
(like very large files), but supporting it would be a fairly big undertaking. 
I don't anticipate working on that in the short term.

As for Zippy vs Snappy, I think my choice would be based on something like this:

Zippy is great for compatibility. HTTP gzip, Zip files, gzipped tarballs, PNG: 
so many things must use zlib's formats to interoperate, so you don't really 
get a choice. However, zlib is slow to compress and uncompress, so I would 
choose a more modern technique when I can get away with it.
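
To be concrete about the compatibility case, here is a hedged sketch of 
gzip-encoding an HTTP response body with Zippy; the dfGzip format option and 
parameter name are what I recall from the docs, so double-check them:

    import zippy

    let body = """{"status": "ok"}"""
    # Produce standard gzip output so any HTTP client can decode it.
    let gzipped = compress(body, dataFormat = dfGzip)
    # Send `gzipped` with the header "Content-Encoding: gzip"; the
    # receiving side needs nothing Zippy-specific, just plain gzip.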

Snappy would be that more modern technique, and I'd prefer it whenever it is 
an option, for example when compressing my own data for transport over UDP, 
where nobody else's code needs to read it. Snappy is drastically faster at 
both compressing and uncompressing, and is super tiny in terms of code. To me 
Snappy is an awesome local maximum of good-enough compression, fast 
compression, fast decompression, and low code complexity.
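
And the Snappy side of that choice, sketched under the assumption of a Nim 
Snappy implementation like the supersnappy package (the compress/uncompress 
proc names are assumed):

    import std/net
    import supersnappy

    # My own code is on both ends of the UDP socket, so I can pick any
    # format I like; Snappy trades a bit of ratio for a lot of speed.
    let payload = "sensor=42;temp=21.3;ts=1700000000"
    let packed = compress(payload)

    let sock = newSocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)
    sock.sendTo("127.0.0.1", Port(9000), packed)

    # The receiver round-trips it with the same library:
    # let original = uncompress(packed)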
