Hello everyone,

I wrote a script that clones a tree of mountpoints onto another root.
This script was written to let us back up our zones with TSM from a filesystem obtained from a snapshot. The source code is here: https://github.com/briner/dolly and a usage reference, taken from https://github.com/briner/dolly/blob/master/README_DOLLY, follows my signature.

cED

---- %< ---- %< ---- %< ---- %< ---- %< ---- %< ---- %< ---- %< ---- %<

dolly is a tool to create a clone of a VFS made of ZFS datasets. It is based
on the zfs snapshot and clone commands. This tool was mainly created to reduce
the downtime when backing up, and to benefit from the properties of a zfs
snapshot: it is quick, and it gives a coherent FS where all the files are
snapshotted at the same time.
If you find this script useful, consider a recommendation on LinkedIn with my
email bri...@infomaniak.ch. :)

----------------------------------------------
DEPENDENCIES:
  libcbr from https://github.com/briner/libcbr

----------------------------------------------
HELP:
dolly -h
Usage:
  dolly clone [<mountpoint_path>]
              [--snapshot-expiration-datetime=<iso8601>]
              [--zfs-dir-incl=<path>,...] [--zfs-dir-excl=<path>,...]
              [--sub-zfs-dir-incl=<path>,...] [--sub-zfs-dir-excl=<path>,...]
              [--zpool-name-incl=<path>,...] [--zpool-name-excl=<path>,...]
              [--zfs-dir-dont-keep-snapshot=<path>,...]
              [--sub-zfs-dir-dont-keep-snapshot=<path>,...]
  dolly kill [<mountpoint_path>]
  dolly destroy-old-snapshot-kept [all|<mountpoint_path>] [--as-we-were-in=<iso8601>]

more info on: https://github.com/briner/dolly/blob/master/dolly/README

Options:
  -h, --help            show this help message and exit
  --sub-zfs-dir-incl=SUBZFSDIRINCL
                        include in the clone a subtree of the VFS. The value
                        MUST be a zfs mountpoint,... (default:"/")
  --sub-zfs-dir-excl=SUBZFSDIREXCL
                        exclude from the clone a subtree of the VFS. The value
                        MUST be a zfs mountpoint,... (default:"")
  --zfs-dir-incl=ZFSDIRINCL
                        include in the clone a single ZFS. The value MUST be a
                        zfs mountpoint,... (default:"")
  --zfs-dir-excl=ZFSDIREXCL
                        exclude from the clone a single ZFS. The value MUST be
                        a zfs mountpoint,... (default:"")
  --zpool-name-incl=ZPOOLNAMEINCL
                        include in the clone every ZFS of a ZPOOL. The value
                        MUST be a zpool name,... (default:"")
  --zpool-name-excl=ZPOOLNAMEEXCL
                        exclude from the clone every ZFS of a ZPOOL. The value
                        MUST be a zpool name,... (default:"")
  --zfs-dir-dont-keep-snapshot=ZFSDIR_DONTKEEPSNAPSHOT
                        zfs dir for which the snapshot will not be kept. The
                        value MUST be a mountpoint,... (default:"")
  --sub-zfs-dir-dont-keep-snapshot=SUBZFSDIR_DONTKEEPSNAPSHOT
                        subtree of zfs dirs for which the snapshots will not
                        be kept. The value MUST be a mountpoint,...
                        (default:"")
  --tag=TAG             this tag ([a-zA-Z_]) will be included in the snapname
                        (default:"")
  --snapshot-expiration-datetime=EXPIRATION_STR
                        how long the snapshot is kept after the cloning
                        (default:"P7D")
  --as-we-were-in=ANOTHER_NOW
                        expiration or duration in <iso8601> format (eg: a
                        duration of 5 days: "P5D", of 5 minutes: "PT5M", the
                        datetime 19 August 1975: "1975-08-19")
  -v, --verbose
  --no-email            do not send any email
  -d, --debug           increase the level of debugging

----------------------------------------------
USAGE:
1. let's look at the VFS
   zfs list
   # NAME                            USED  AVAIL  REFER  MOUNTPOINT
   # rpool                          2.31G  43.9G    31K  /rpool
   # rpool/ROOT                     2.31G  43.9G    31K  legacy
   # rpool/ROOT/sol-11-SRU-9.5      2.31G  43.9G  2.14G  /
   # rpool/ROOT/sol-11-SRU-9.5/var   149M  43.9G   108M  /var
   # rpool/export                    110K  43.9G    32K  /export
   # rpool/export/home                58K  43.9G    39K  /export/home
   # rpool/local                    2.76M  43.9G   780K  /usr/local
   # rpool/oracle                     63K  43.9G    32K  /oracle
   # rpool/oracle/app                 31K  43.9G    31K  /oracle/app

2. list the current dollies
   dolly
   # <empty>

3. clone the root of the VFS, mounting it at the default mountpoint "/dolly/backup"
   dolly clone
   # <empty>
   then list the current dollies again
   dolly
   # mounted :
   #     mounted /dolly/backup
   we can see a dolly mounted at /dolly/backup
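Under the hood, a clone like the one in step 3 conceptually boils down to one `zfs snapshot` plus one `zfs clone` per selected dataset, with the clone created as a child of the original dataset (that naming is visible in the `zfs list` output of step 4). The sketch below only generates the command strings for illustration; the helper and its defaults are mine, not dolly's actual code:

```python
# Sketch: generate the zfs commands a dolly clone conceptually runs.
# The "dolly_backup_zone-test2" child-dataset naming mimics the listing
# shown in step 4; this is an illustration, not dolly's real implementation.

def clone_commands(datasets, tag="dolly_backup_zone-test2",
                   snapname="dolly_mnt::2012.09.26_16:49:27.225859::zone-test2"):
    cmds = []
    for ds in datasets:
        # 1. snapshot the live dataset (quick, and coherent at one instant)
        cmds.append("zfs snapshot %s@%s" % (ds, snapname))
        # 2. clone that snapshot into a child dataset of the original one
        cmds.append("zfs clone %s@%s %s/%s" % (ds, snapname, ds, tag))
    return cmds

for cmd in clone_commands(["rpool/export", "rpool/export/home"]):
    print(cmd)
```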
4. let's see the clone in the VFS
   zfs list | grep dolly
   # rpool/ROOT/sol-11-SRU-9.5/dolly_backup_zone-test2        1K  43.9G  2.14G  /dolly/backup
   # rpool/ROOT/sol-11-SRU-9.5/var/dolly_backup_zone-test2    1K  43.9G   108M  /dolly/backup/var
   # rpool/dolly_backup_zone-test2                            1K  43.9G    31K  /dolly/backup/rpool
   # rpool/export/dolly_backup_zone-test2                     1K  43.9G    32K  /dolly/backup/export
   # rpool/export/home/dolly_backup_zone-test2                1K  43.9G    39K  /dolly/backup/export/home
   # rpool/local/dolly_backup_zone-test2                      1K  43.9G   780K  /dolly/backup/usr/local
   # rpool/oracle/app/dolly_backup_zone-test2                 1K  43.9G    31K  /dolly/backup/oracle/app
   # rpool/oracle/dolly_backup_zone-test2                     1K  43.9G    32K  /dolly/backup/oracle
   yeah… we've got a clone!

5. let's create a new clone of a sub-part of the VFS
   dolly clone /dolly/sub-clone --sub-zfs-dir-incl=/oracle

6. let's see the clone in the VFS
   zfs list | grep dolly/sub-clone
   # rpool/oracle/app/dolly_sub-clone_zone-test2              1K  43.9G    31K  /dolly/sub-clone/oracle/app
   # rpool/oracle/dolly_sub-clone_zone-test2                  1K  43.9G    32K  /dolly/sub-clone/oracle

7. let's see the dolly list
   dolly
   # mounted :
   #     mounted /dolly/sub-clone
   #     mounted /dolly/backup
   # snapshot kept :

8. and let's kill the dolly /dolly/sub-clone
   dolly kill /dolly/sub-clone
   # <empty>

9. the dolly list
   dolly
   # mounted :
   #     mounted /dolly/backup
   # snapshot kept :
   #     /dolly/sub-clone @dolly_mnt::2012.09.26_16:49:27.225859::zone-test2
   eheh… we can see that the clone got killed, but we still have the snapshot
   from which the clone was built. Now the --snapshot-expiration-datetime
   parameter, which defaults to P7D (a duration of 7 days), makes more sense:
   once those 7 days have passed, whenever you invoke clone, kill or
   destroy-old-snapshot-kept, dolly will look for old snapshots to destroy.

10. let's destroy the old snapshot
    dolly destroy-old-snapshot-kept --mountpoint=all --as-we-were-in=P5Y
    # <empty>
    the --as-we-were-in parameter tells the destroy-old-snapshot-kept command
    to act as if we were 5 years in the future.
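The expiration check described above can be sketched as: parse the timestamp embedded in the snapshot name, add the expiration duration, and compare against "now" (possibly shifted forward by --as-we-were-in). Everything below, including the toy duration parser that only handles year/day counts like P7D or P5Y, is an illustrative assumption, not dolly's actual implementation:

```python
import re
from datetime import datetime, timedelta

SNAPNAME = "dolly_mnt::2012.09.26_16:49:27.225859::zone-test2"

def snapshot_datetime(snapname):
    # The middle "::" field of the snapname holds the creation time,
    # e.g. 2012.09.26_16:49:27.225859
    stamp = snapname.split("::")[1]
    return datetime.strptime(stamp, "%Y.%m.%d_%H:%M:%S.%f")

def parse_duration(iso_duration):
    # Toy ISO 8601 duration parser: only PnY / PnD / PnYnD are supported here.
    m = re.fullmatch(r"P(?:(\d+)Y)?(?:(\d+)D)?", iso_duration)
    years = int(m.group(1) or 0)
    days = int(m.group(2) or 0)
    return timedelta(days=365 * years + days)

def is_expired(snapname, expiration="P7D", as_we_were_in=None, now=None):
    now = now or datetime.now()
    if as_we_were_in:  # pretend we are that far in the future
        now = now + parse_duration(as_we_were_in)
    return snapshot_datetime(snapname) + parse_duration(expiration) < now

# Right after the kill in 2012 the snapshot is not yet expired...
print(is_expired(SNAPNAME, now=datetime(2012, 9, 26, 17, 0)))   # False
# ...but seen "as if we were in 5 years" (--as-we-were-in=P5Y) it is.
print(is_expired(SNAPNAME, now=datetime(2012, 9, 26, 17, 0),
                 as_we_were_in="P5Y"))                          # True
```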
    Since the default --snapshot-expiration-datetime is 7 days, the snapshot
    will therefore be destroyed.

11. the dolly list
    dolly
    # mounted :
    #     /dolly/backup

----------------------------------------------
INFORMATION ABOUT INCLUDING AND EXCLUDING:
what is going to be cloned is: Σ included - Σ excluded
the difference between --zfs-dir-excl and --sub-zfs-dir-excl is that the
--sub* variant also takes all the mountpoints located under the given one.

let's consider this VFS, where every name marked with * is a zfs mountpoint:

               /
              / \
            usr  *export
             |       |
         *local   *home
                  / | \
            *alice *joe *bob

then:
  --sub-zfs-dir-incl=/export/home
  equals: /export/home,/export/home/alice,/export/home/joe,/export/home/bob

  --sub-zfs-dir-incl=/export/home --zfs-dir-incl=/usr/local
  equals: /export/home,/export/home/alice,/export/home/joe,/export/home/bob,/usr/local

  --sub-zfs-dir-incl=/export/home --zfs-dir-incl=/usr/local --sub-zfs-dir-excl=/export/home
  equals: /export/home,/usr/local

--
_______________________________________________
Liste (Open)Solaris francophone
ug-fosug@opensolaris.org
http://www.mail-archive.com/ug-fosug@opensolaris.org/
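Coming back to the include/exclude rules above, here is a small sketch of the set arithmetic (Σ included - Σ excluded). To reproduce the three examples, --sub-zfs-dir-incl is treated as "the mountpoint and everything under it", while --sub-zfs-dir-excl only removes mountpoints strictly under the given path (that asymmetry is inferred from the last example, in which /export/home survives its own subtree exclusion). The function names are mine, not dolly's:

```python
# Mountpoints of the example VFS (the names marked * in the figure above).
MOUNTPOINTS = ["/export", "/export/home",
               "/export/home/alice", "/export/home/joe", "/export/home/bob",
               "/usr/local"]

def under(root, strict=False):
    # All known mountpoints located under `root`; with strict=True the
    # root itself is left out.
    prefix = root.rstrip("/") + "/"
    out = {m for m in MOUNTPOINTS if m.startswith(prefix)}
    if not strict and root in MOUNTPOINTS:
        out.add(root)
    return out

def selection(sub_incl=(), incl=(), sub_excl=(), excl=()):
    # what is going to be cloned = sum(included) - sum(excluded)
    included = set(incl)
    for path in sub_incl:
        included |= under(path)
    excluded = set(excl)
    for path in sub_excl:
        # strict exclusion, to match the README's third example
        excluded |= under(path, strict=True)
    return included - excluded

print(sorted(selection(sub_incl=["/export/home"],
                       incl=["/usr/local"],
                       sub_excl=["/export/home"])))
# ['/export/home', '/usr/local']
```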