> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jesus Cea
> 
> Sorry if this list is inappropriate. Pointers welcomed.

Not at all.  This is the perfect forum for your question.


> So I am thinking about splitting my full two-disk zpool in two zpools,
> one for system and other for data. Both using both disks for
> mirroring. So I would have two slices per disk.

Please see the procedure below, which I wrote as notes for myself, to
perform disaster recovery backup/restore of rpool.  This is not DIRECTLY
applicable for you, but it includes all the necessary ingredients to make
your transition successful.  So please read, and modify as necessary for
your purposes.

Many good notes available:
    ZFS Troubleshooting Guide
 
    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

Before you begin
    Because you will restore from a boot CD, there are only a few
    compression options available to you:  7z, bzip2, and gzip.
    The clear winner in general is 7z with compression level 1: it is about
    as fast as gzip, with compression roughly twice as strong as bzip2.
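    A rough way to check those numbers against your own data (the @testsample
    snapshot below is just a throw-away name):

        # Grab roughly 1 GB of a real zfs send stream as a test sample.
        zfs snapshot rpool/ROOT/machinename_slash@testsample
        zfs send rpool/ROOT/machinename_slash@testsample | dd of=/tmp/sample bs=1024k count=1000
        zfs destroy rpool/ROOT/machinename_slash@testsample

        # Time each compressor on the same input and compare the output sizes.
        time gzip -c /tmp/sample > /tmp/sample.gz
        time bzip2 -c /tmp/sample > /tmp/sample.bz2
        time 7z a -mx=1 /tmp/sample.7z /tmp/sample
        ls -l /tmp/sample*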

    Because you will restore from a boot CD, the backup media needs to be
    located somewhere that can be accessed from a CD boot environment, which
    does not include ssh/scp.  The obvious choice is NFS.  Be aware that the
    Solaris NFS client is often not very compatible with Linux NFS servers.
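    If a plain mount of a Linux server fails, forcing NFS version 3 sometimes
    helps (just a suggestion to try; the server name is a placeholder):

        # Explicitly request NFSv3; some Solaris/Linux combinations need it.
        mount -F nfs -o vers=3 someserver:/backupdir /mnt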

    I am assuming there is a Solaris NFS server available, because it makes
    my job easy while I'm writing this.  ;-)  Note:  You could just as
    easily store the backup on a removable disk in something like a zfs pool.
    Just make sure that however you store it, it's accessible from the
    CD boot environment, which might not support a later version of zpool, etc.
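    For example, if you go the removable-disk route, you can create that pool
    at an explicitly older on-disk version so the CD environment can still
    import it.  A sketch only (the version number and device name are
    placeholders; "zpool upgrade -v" on the boot CD shows what it really
    supports):

        # Create the backup pool at an older, CD-compatible pool version.
        zpool create -o version=14 backuppool c2t0d0
        zfs create backuppool/backups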

Create NFS exports on some other Solaris machine.
    share -F nfs -o rw=machine1:machine2,root=machine1:machine2 /backupdir
    Also edit the hosts file to match, because forward and reverse DNS must
    match for the client.
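    A sketch of what that looks like on the server (hostnames and addresses
    are placeholders):

        # /etc/hosts on the NFS server -- forward and reverse lookups must
        # agree with the names used in the share command.
        192.168.1.100   machine1
        192.168.1.101   machine2

    To make the share survive a reboot, put the same share line in
    /etc/dfs/dfstab and run shareall.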

Create a backup suitable for system recovery.
    mount someserver:/backupdir /mnt

    Create snapshots:
    zfs snapshot rpool@uniquebackupstring
    zfs snapshot rpool/ROOT@uniquebackupstring
    zfs snapshot rpool/ROOT/machinename_slash@uniquebackupstring

    Send snapshots:
    Notice: due to bugs, don't do this recursively.  Do it separately, as
    outlined here.
    Notice: in some later revision of zpool/zfs these bugs were fixed, so you
    can safely do it recursively (a sketch of the recursive form follows the
    individual commands below).  I don't know exactly what rev is needed.
    zfs send rpool@uniquebackupstring | 7z a -mx=1 -si /mnt/rpool.zfssend.7z
    zfs send rpool/ROOT@uniquebackupstring | 7z a -mx=1 -si /mnt/rpool_ROOT.zfssend.7z
    zfs send rpool/ROOT/machinename_slash@uniquebackupstring | 7z a -mx=1 -si /mnt/rpool_ROOT_machinename_slash.zfssend.7z
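    If your zfs revision does support recursive send/receive safely, the
    equivalent is a single recursive snapshot and a single stream (sketch
    only, untested here):

        # Recursive variant: one snapshot and one stream for the whole rpool tree.
        zfs snapshot -r rpool@uniquebackupstring
        zfs send -R rpool@uniquebackupstring | 7z a -mx=1 -si /mnt/rpool.recursive.zfssend.7z

    On the restore side, a -R stream would be received with something like a
    single "zfs receive -Fd rpool" instead of the three separate receives below.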

    It is also wise to capture a list of the "pristine" zpool and zfs properties:
    echo "" > /mnt/zpool-properties.txt
    for pool in `zpool list | grep -v '^NAME ' | sed 's/ .*//'` ; do
        echo "-------------------------------------" | tee -a
/mnt/zpool-properties.txt
        echo "zpool get all $pool" | tee -a /mnt/zpool-properties.txt
        zpool get all $pool | tee -a /mnt/zpool-properties.txt
    done
    echo "" > /mnt/zfs-properties.txt
    for fs in `zfs list | grep -v @ | grep -v '^NAME ' | sed 's/ .*//'` ; do
        echo "-------------------------------------" | tee -a
/mnt/zfs-properties.txt
        echo "zfs get all $fs" | tee -a /mnt/zfs-properties.txt
        zfs get all $fs | tee -a /mnt/zfs-properties.txt
    done

    Notice:  The above will also capture info about dump & swap, which might
    be important, so you know what sizes & blocksizes they are.
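    When you need those numbers back during a restore, something like this
    pulls them out of the saved file:

        # Show the recorded size and block size of the dump and swap volumes.
        egrep 'volsize|volblocksize' /mnt/zfs-properties.txt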

    
Now suppose a disaster has happened.  You need to restore.
    Boot from the CD.
    Choose "Solaris"
    Choose "6.  Single User Shell"

    To bring up the network:
        ifconfig -a plumb
        ifconfig -a
        (Notice the name of the network adapter.  In my case, it's e1000g0)
        ifconfig e1000g0 192.168.1.100/24 up

    mount 192.168.1.105:/backupdir /mnt
    
    Verify that you have access to the backup images.  Now prepare your boot
    disk as follows:

    format -e
    (Select the appropriate disk)
        fdisk
        (no fdisk table exists.  Yes, create default)
        partition
        (choose to "modify" a table based on "hog")
        (in my example, I'm using c1t0d0s0 for rpool)
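    (Optionally, verify the result: prtvtoc prints the new label so you can
    confirm slice 0 covers the space you expect.)
    prtvtoc /dev/rdsk/c1t0d0s2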
    
    zpool create -f -o failmode=continue -R /a -m legacy rpool c1t0d0s0

    7z x /mnt/rpool.zfssend.7z -so | zfs receive -F rpool
    (Notice: the first one requires -F because the pool already exists.  The
    others don't need this.)

    7z x /mnt/rpool_ROOT.zfssend.7z -so | zfs receive rpool/ROOT
    7z x /mnt/rpool_ROOT_machinename_slash.zfssend.7z -so | zfs receive rpool/ROOT/machinename_slash

    zfs set mountpoint=/rpool rpool
    zfs set mountpoint=legacy rpool/ROOT
    zfs set mountpoint=/      rpool/ROOT/machinename_slash

    zpool set bootfs=rpool/ROOT/machinename_slash rpool

    You did save the zpool-properties.txt and zfs-properties.txt, didn't you?
    ;-)  If not, you'll just have to guess about sizes and blocksizes.  The
    following 2G dump, 1G swap, and 4k-blocksize swap are pretty standard for
    an x86 system with 2G of RAM.
    zfs create -V 2G rpool/dump
    zfs create -V 1G -b 4k rpool/swap

    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

    And finally, init 6

    After the system comes up naturally once, it's probably a good idea to
    capture a new zpool-properties.txt and zfs-properties.txt and compare them
    against the "pristine" ones, to see what (if anything) is different.
    Likely suspects are auto-snapshot properties and stuff like that, which
    you probably forgot you ever created years ago, when you (or someone else)
    built your server and didn't save any documentation about the process.
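    One easy way to do that comparison (the "-after" file names here are just
    a suggestion):

        # Capture fresh copies with the same loops as above, saved under new
        # names, then diff against the pristine versions.
        diff /mnt/zpool-properties.txt /mnt/zpool-properties-after.txt
        diff /mnt/zfs-properties.txt /mnt/zfs-properties-after.txt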
