I realize that I've posted some dumb things in this thread so here's a
re-cast summary:

1) In the past, I experimented with filesystem backups, using my own
file-level checksumming that would detect when a file was already in
the backup repository, and add a hard link rather than allocate new
blocks. You can do that today on any POSIX filesystem that supports
hard links, by using rsync.

But you are far, far better off using snapshots.

2) I said that I got 7-to-1 "deduplication" using my hard-link system.
That's a meaningless statement, but anyway I was able to save twelve
or so backups of a 100GB dataset on a 160GB hard disk.

You would almost certainly see much better results by using snapshots
on ZFS or btrfs, where a snapshot takes almost no storage to create,
and only uses extra space for any changed blocks. Snapshots are block-
level.
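
For comparison, the snapshot equivalents look something like this
(illustrative commands only; they assume root, a btrfs subvolume
mounted at /data, and a ZFS dataset named tank/data, none of which
come from my actual setup):

```shell
# btrfs: a read-only snapshot is nearly free to create; space is
# consumed only as blocks in the source diverge (copy-on-write).
btrfs subvolume snapshot -r /data /data/.snapshots/backup-1

# ZFS: same idea. The USED column of `zfs list -t snapshot` shows
# how much space each snapshot pins as the live data changes.
zfs snapshot tank/data@backup-1
zfs list -t snapshot
```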

3) Another meaningless statement was my subjective impression that ZFS
dedup led to performance degradation. Forget I said that; I really have
no idea, since my system was running on failing drives at the time.

Some people report better performance with ZFS dedup, as it decreases
the number of disk writes.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
