On 03/03/2011 03:21 PM, John Drescher wrote:
> On Thu, Mar 3, 2011 at 4:11 PM, Fabio Napoleoni - ZENIT <fa...@zenit.org> wrote:
[...]
>> So the poor throughput is caused by software compression. I don't know
>> what to choose in the speed vs. space tradeoff. Ideally the best option
>> would be compression on the storage director (it has more powerful
>> hardware than this client), but I think Bacula currently doesn't support
>> this feature.
>
> You could do that with filesystem compression. There are a few options
> for that, although I admit none of them are in the mainline kernel.
>
> John

Well, that depends on what you consider "mainline". I'm currently experimenting with btrfs filesystem compression. As of 2.6.38, this is available as both gzip/zlib and LZO. I am, daringly enough, experimenting with the 2.6.38 release candidates and LZO compression for a storage daemon on USB external disk volumes. It seems to work so far, but I've only been using it for a week or so and haven't had a chance to do any extensive checks.
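For anyone wanting to try the same thing: btrfs compression is just a mount option, so a setup like mine can be sketched roughly as below (the device path and mount point are placeholders for your own disk; compression only applies to data written after the option is in effect):

```shell
# Mount a btrfs filesystem with LZO compression (requires kernel >= 2.6.38).
# /dev/sdb1 and /mnt/bacula-sd are placeholder names, not from my actual setup.
mount -t btrfs -o compress=lzo /dev/sdb1 /mnt/bacula-sd

# Or the equivalent /etc/fstab entry for a permanent mount:
# /dev/sdb1  /mnt/bacula-sd  btrfs  compress=lzo,noatime  0  0
```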
The one problem I see with filesystem-supplied compression is that avoiding running out of disk space for backup volumes becomes more difficult, since it's hard to predict exactly how much "real" space the compressed data will take up (whereas if Bacula is doing the compression, you can just define, say, 8 20GB volumes and stick them on a 160GB drive). This may or may not turn out to be a 'real' problem. It does at least take the compression burden off of the client computer, which removes that bottleneck.

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
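To make the tradeoff concrete, the fixed-size-volume approach with Bacula-side compression looks roughly like this in bacula-dir.conf (the names are illustrative, and the sizes match the 8 x 20GB on 160GB example above):

```
# Pool capping each volume at 20GB, 8 volumes total (~160GB worst case,
# so it cannot overrun the drive even if the data compresses poorly).
Pool {
  Name = FilePool              # illustrative name
  Pool Type = Backup
  Maximum Volume Bytes = 20G   # each volume capped at 20GB
  Maximum Volumes = 8          # 8 x 20GB fits a 160GB drive
  Label Format = "Vol-"
}

# Client-side software compression is enabled per FileSet, which is
# exactly the compression burden filesystem compression would remove:
FileSet {
  Name = FullSet
  Include {
    Options {
      compression = GZIP       # compression happens on the client (FD)
    }
    File = /home
  }
}
```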