Hello,

Thu, 3 Mar 2022 at 13:33, <ego...@ramattack.net> wrote:

>
> I designed Bacula deduplication to handle blocks (files) larger than 1k
> because the indexing overhead for such small blocks was too high. The
> larger the block you use, the lower the chance of a good deduplication
> ratio. So it is a trade-off - small blocks == good deduplication ratio but
> higher indexing overhead; larger blocks == weak deduplication ratio but
> lower indexing overhead. So it handled block sizes from 1K up to 64K (the
> default Bacula block size, but it could be extended to any size).
>
>
> *I understand what you say, but the problem we are facing is the
> following one. Imagine a machine with a SQL Server and 150GB of databases.
> Our problem is having to copy that incrementally each day. We don't really
> mind copying 5GB of "wasted" space per day, even when not necessary (just
> for understanding)... but obviously 100GB or 200GB per day is a
> different matter....*
>
>
In most cases (if you use the right database engine) you should be able to
back up "incremental" data at a very low level, basically as transactions,
and use it for perfect recovery. Use transaction-log-level backups and you
get point-in-time recovery (PITR) as a bonus.
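
A rough sketch of the idea (assuming SQL Server here, a database in the FULL
recovery model, and purely hypothetical server/database names and paths;
driven from Python via pyodbc just for illustration):

    import pyodbc

    # autocommit is required - BACKUP cannot run inside a user transaction
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;"
        "DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,
    )
    cur = conn.cursor()

    # One full backup as the baseline of the chain...
    cur.execute("BACKUP DATABASE [AppDB] "
                "TO DISK = N'/var/backups/AppDB_full.bak'")
    while cur.nextset():   # drain the informational result sets so the
        pass               # backup is allowed to finish

    # ...then frequent log backups. Each one contains only the transactions
    # written since the previous backup, so the daily volume follows the
    # real change rate instead of whole data files, and the chain can be
    # restored to any point in time (PITR).
    cur.execute("BACKUP LOG [AppDB] "
                "TO DISK = N'/var/backups/AppDB_log_001.trn'")
    while cur.nextset():
        pass

    conn.close()

The log backups stay proportional to the actual change rate, which is
exactly what matters in the 150GB scenario above.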


>
> *I was really thinking of applying this deduplication only to important
> files.... hope you can understand me now.. :)*
>
>
>
It is always hard for me to call it "deduplication". :)


>
> BTW, I think the Delta plugin available in BEE is fairly cheap compared
> to full deduplication options.
>
> *I have asked Rob Morrison for the price :) :)*
>
>
>
You can say hi to Rob from me. :)

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
