On Wed, 3 Apr 2019 14:20:34 +0100 (BST)
"G.W. Haywood via BackupPC-users" <backuppc-users@lists.sourceforge.net>
wrote:

Hi G.W.,

> You will need to test the performance yourself.  Performance can be
> improved by avoiding disc writes, which will take orders of magnitude
> longer than reading RAM.  ZFS checksums are in RAM, so you might need
> a lot of it.  ZFS deduplication takes place at disc block level, not
> at file level, so if you have for example files which grow from backup
> to backup where the first parts of files are identical, then you might
> see performance improvements from _both_ kinds of deduplication.  It
> will obviously depend on your data profile,

Only if the unchanged part spans at least one full record: deduplication
matches recordsize-aligned blocks (128K by default), so shared data
shorter than the recordsize setting will never dedupe.
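
To make that concrete, here is a rough sketch in Python (hypothetical
file names, recordsize hard-coded to the ZFS default) of how block-level
deduplication sees two backup images: only records that are identical at
the same aligned offset can be shared.

  import hashlib

  RECORDSIZE = 128 * 1024  # ZFS default recordsize (tunable per dataset)

  def record_hashes(path):
      # Checksum each recordsize-aligned chunk, the way dedup would.
      with open(path, "rb") as f:
          return [hashlib.sha256(chunk).hexdigest()
                  for chunk in iter(lambda: f.read(RECORDSIZE), b"")]

  old = record_hashes("backup-monday.img")    # hypothetical paths
  new = record_hashes("backup-tuesday.img")
  shared = sum(1 for a, b in zip(old, new) if a == b)
  print(f"{shared} of {len(new)} records dedupe against Monday's image")

If the newer file has merely grown at the end, all the leading records
keep their hashes and dedupe; insert a single byte near the start and
every following record shifts, so nothing matches any more.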

> and it may also depend on
> encryption; I have no idea what impact that might have for example on
> deduplication of files which have identical blocks before encryption.
> I'd expect any sensible encryption system to use something like salts,
> so that blocks stored on disc would be different after encryption even
> if they were identical before it.  Otherwise, interesting attacks on
> the encrypted data can become possible.  There's a lot of literature.

Yup, and this is why (for both of those reasons) full-disk encryption is
not a good solution compared to file-level encryption (it can also open
the door to cryptographic side-channel attacks.)
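
The salt/IV point is easy to demonstrate. A minimal sketch with the
third-party 'cryptography' package (an illustration of the principle,
not of ZFS's actual encryption scheme): encrypting the very same block
twice with fresh random IVs produces unrelated ciphertexts, which is
also why dedup below the encryption layer finds nothing to merge.

  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key = os.urandom(32)
  block = b"A" * 4096          # two bit-for-bit identical plaintext blocks

  def encrypt(data):
      iv = os.urandom(16)      # fresh random IV per block
      enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
      return iv + enc.update(data) + enc.finalize()

  print(encrypt(block) == encrypt(block))  # False: same input, different output

Skip the random IV and identical blocks would encrypt identically,
handing an attacker exactly the pattern information you warn about.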

> In any event, in my view, the stability of the filesystem is a much
> more important consideration.  I should be reluctant to move any of my
> backups from ext4 to ZFS simply because I have very little information
> about ZFS to work with and (call it my disclaimer) I have no personal
> experience of it at all.  Certainly using the ZFS encryption feature
> would for me be a risk too far.

There are precautions to take (such as the one infamous _release_ that
made some files disappear), but not many beyond the usual.
Note that this kind of adventure is actually a blessing in disguise when
it happens to a project, 'cos it makes the devs react by tightening (a
lot) the regression test suite so that it never happens twice.

The main precaution, the one w$ users usually prefer to ignore, is the
main motto of professional IT: "if it is working as expected, don't fix
it" ;-p)

Other than that, it is pretty stable, and I know plenty of labs in every
field that use it on workstations or backup servers, especially because
they hold very long-term data (some studies run for more than half a
century) that they don't want to see corrupted.
This is ZFS's great advantage over conventional RAID, which only ensures
data redundancy, not data integrity.
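
The distinction fits in a few lines of Python (a toy model, nothing
ZFS-specific): a plain mirror returns whichever copy happens to be read,
while a checksum stored alongside the data lets the filesystem pick the
good copy and repair the bad one.

  import hashlib

  def write(data):
      # Two mirror copies plus a checksum of what was written.
      return {"copies": [bytearray(data), bytearray(data)],
              "sum": hashlib.sha256(data).digest()}

  def read(blk):
      # Serve the first copy whose checksum matches; self-heal the other.
      for i, copy in enumerate(blk["copies"]):
          if hashlib.sha256(copy).digest() == blk["sum"]:
              blk["copies"][1 - i][:] = copy
              return bytes(copy)
      raise IOError("both copies corrupted")

  blk = write(b"half a century of lab data")
  blk["copies"][0][3] ^= 0xFF               # silent bit rot on one mirror
  print(read(blk))                          # still intact, mirror repaired

A conventional RAID-1 in the same situation has even odds of serving the
rotten copy, because it has no checksum to tell it which mirror is right.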

Speaking of consistency, please also note that as of mid-2018, spinning
rust and SSDs were already dead and buried, which is good news for data
persistence (but only IF the industry adopts the right replacement, which
is far from a foregone conclusion),
see:
https://www.servethehome.com/carbon-nanotube-nram-exudes-excellence-in-persistent-memory/
and:
https://www.servethehome.com/fujitsu-nram-production-to-start-in-2019/

Provided marketing stays out of the way and the licence is cheap, the
mature process node used (55 nm) could let it sweep over the storage
market like a tsunami in less than a year, seemingly to everyone's
benefit (not to mention the huge energy savings.)

JY

