Hi All,

My question is related to: 6839260 want zfs send with properties.

I'm running "zfs send | mbuffer | network | mbuffer | zfs recv" nightly between 
two arrays (2008.11), but my backup script does not do recursive send/recv - it 
walks through all datasets, sends the increments one by one and re-creates any 
newly added datasets. 
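For context, the nightly per-dataset step looks roughly like this (a sketch only; the host name "slave", the snapshot names, and the mbuffer sizes are placeholders, not taken from my actual script):

```shell
# Hypothetical per-dataset incremental replication step.
# mbuffer on both ends smooths out the bursty send/recv stream
# so the network link stays busy.
replicate_one() {
    fs=$1 prev=$2 today=$3
    zfs send -i "$fs@$prev" "$fs@$today" \
      | mbuffer -q -s 128k -m 512M \
      | ssh slave "mbuffer -q -s 128k -m 512M | zfs recv -F '$fs'"
}
```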

I'm wondering if any of you are using a similar solution and have run into the 
same issue: a non-recursive send/recv does not send (all) properties of a 
dataset - for example quota and compression. What I'm thinking of is defining 
the properties I want to synchronize and syncing them one by one, like this:

for dataset in $datasets; do
    for prop in $props; do

        A=$(zfs get -Ho value "$prop" "$dataset")
        B=$(ssh slave zfs get -Ho value "$prop" "$dataset")
        if [ "$B" != "$A" ]; then
            ssh slave zfs set "$prop=$A" "$dataset"
        fi

    done
done


The example above is of course simplified, as we also have inherited 
properties. But is there any smarter / proper way of synchronizing properties 
between two pools (on 2008.11)? Basically, the properties I care about are: 
quota, compression, share*
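One idea I've considered for the inheritance problem (a sketch, assuming `zfs get -s local` is available, which filters output to properties set directly on the dataset; `sync_local_props` and the host "slave" are hypothetical names): replicate only locally-set properties, so inherited values on the slave keep following their parent instead of being pinned.

```shell
# Hypothetical helper: push only locally-set properties of interest
# to the slave, leaving inherited properties alone.
sync_local_props() {
    dataset=$1
    # -s local: only properties whose source is this dataset itself;
    # -H -o property,value: tab-separated, script-friendly output.
    zfs get -H -s local -o property,value all "$dataset" |
    while IFS=$(printf '\t') read -r prop value; do
        case $prop in
            quota|compression|share*)
                ssh slave zfs set "$prop=$value" "$dataset"
                ;;
        esac
    done
}
```

A freshly inherited property would still need a `zfs inherit` on the slave to clear a stale local value, which this sketch does not handle.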

Many thanks,
owczi
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss