Hi
 I have a deduped dataset (accessed via NFS) where every night I store a clone 
of some servers. The dedup feature works great, since only a few changes occur 
between clones (one per day).
After the clone, I send the dataset to a backup machine, which in turn has 
dedup enabled on the receiving dataset. With a script I take a snapshot and 
do an incremental "zfs send -i" between the previous snapshot and the new one.
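Roughly, the script does something like the sketch below (the dataset name, backup host, and target dataset are just placeholders, not my actual setup):

#!/usr/bin/env python3
# Minimal sketch of the nightly snapshot + incremental send.
# "tank/clones", "backup" and "backuppool/clones" are placeholder names.
import datetime
import subprocess

DATASET = "tank/clones"
BACKUP_HOST = "backup"
TARGET = "backuppool/clones"

today = datetime.date.today().isoformat()
yesterday = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()

# Take tonight's snapshot of the source dataset.
subprocess.run(["zfs", "snapshot", f"{DATASET}@{today}"], check=True)

# Send only the differences between yesterday's and tonight's snapshot
# and receive them on the backup machine over ssh.
send = subprocess.Popen(
    ["zfs", "send", "-i", f"{DATASET}@{yesterday}", f"{DATASET}@{today}"],
    stdout=subprocess.PIPE)
subprocess.run(["ssh", BACKUP_HOST, "zfs", "receive", TARGET],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()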
I was considering using "zfs send -i -D". The -D switch deduplicates the data 
within the stream it is about to send, right? That means it would only dedup 
the data between the last two snapshots, which is essentially a full machine 
clone (so little dedup is possible). Once the stream arrives on the target 
deduped dataset it is *really* deduped and disk space is saved, but the 
network traffic still amounts to a full machine clone.
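To check how much in-stream dedup actually helps in my case, I was thinking of comparing the size of the incremental stream with and without -D, along these lines (snapshot names again are placeholders):

#!/usr/bin/env python3
# Compare the size of an incremental stream with and without the -D switch.
# Dataset and snapshot names are placeholders for my nightly snapshots.
import subprocess

DATASET = "tank/clones"
PREV, LAST = "2010-01-01", "2010-01-02"

def stream_size(extra_flags):
    """Run 'zfs send -i' with the given flags and count the bytes produced."""
    cmd = ["zfs", "send"] + extra_flags + [
        "-i", f"{DATASET}@{PREV}", f"{DATASET}@{LAST}"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    total = 0
    while True:
        chunk = proc.stdout.read(1 << 20)
        if not chunk:
            break
        total += len(chunk)
    proc.wait()
    return total

print("plain stream: ", stream_size([]), "bytes")
print("-D stream:    ", stream_size(["-D"]), "bytes")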
Is there a way to send only the deduped data from the last snapshots, since 
the originating dataset is already deduped? This would not change the space 
accounting on the destination disks, but it would greatly reduce the network 
traffic.
Thanks