Giving ZFS the complete disk is the most efficient approach; you can then carve out datasets for your other requirements.  This is the recommended way per all the guides I have read, and it works great.  You can add property controls to each dataset if needed.  Here is an example layout (df -h output) from such a setup:

Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u8wos_08a
                       134G   4.2G   124G     4%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.3G   400K   7.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
                       129G   4.2G   124G     4%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   7.3G    96K   7.3G     1%    /tmp
swap                   7.3G    32K   7.3G     1%    /var/run
rpool/export           134G    25K   124G     1%    /export
swie/cache             134G    21G   107G    17%    /export/cache
rpool/export/home      134G   702M   124G     1%    /export/home
rpool/export/install   134G    23K   124G     1%    /export/install
rpool/export/install/media
                       134G   1.0G   124G     1%    /export/install/media
swie/SWIEboot          134G   211K   107G     1%    /opt/SWIE/boot
swie/SWIElnc           134G    96K   107G     1%    /opt/SWIE/lnc
swie/SWIEtlc           134G   1.5M   107G     1%    /opt/SWIE/tlc
rpool                  134G    40K   124G     1%    /rpool
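As an example of per-dataset property controls, you can set a quota or enable compression on individual datasets. The dataset names below come from the listing above; the values are purely illustrative:

```shell
# Limit /export/home to 50 GB and compress its data (illustrative values)
zfs set quota=50G rpool/export/home
zfs set compression=on rpool/export/home
# Verify the properties took effect
zfs get quota,compression rpool/export/home
```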

Adding a mirror is pretty easy

Format the inactive disk with slice 0 covering the whole disk, or with slices matching those on the active disk
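If you want the new disk's label to match the active one exactly, a common shortcut (assuming c0t0d0 is the active disk and c0t1d0 the new one) is to copy the VTOC with prtvtoc and fmthard:

```shell
# Copy the slice table from the active disk to the new disk
# (s2 is the conventional whole-disk slice on Solaris)
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
```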

Create the ZFS mirror

    zpool attach rpool c0t0d0s0 c0t1d0s0

It will take some time for the new mirror to re-silver (sync)
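You can watch the resilver progress while it runs; the scrub/resilver line in the status output shows how far along it is:

```shell
# Check mirror and resilver status for the root pool
zpool status rpool
```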

Now you have to manually install a boot block (SPARC) or an additional GRUB
menu entry (x86) on the new mirror disk; zpool attach not doing this for you is a known bug.

    SPARC: installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

    x86:   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0


This is so you can boot from either disk

Also look into adding a cron entry to run zpool scrub regularly; this one runs at 00:01 every Sunday

   1 0 * * 0 /usr/sbin/zpool scrub rpool
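After a scrub completes, zpool status will report any checksum errors it found; for a quick one-line health summary of all pools you can use:

```shell
# Prints "all pools are healthy" when no pool has problems
zpool status -x
```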


Alex wrote, On 01/27/10 07:52:
Hi everybody,

I'm new to OpenSolaris and have a few questions regarding the installation.
During the installation I selected a 20 GB "Solaris" partition (I think this is called a slice in OSOL terminology) where my root lives.

After the installation "zpool status" gives me the rpool, state online and consisting of c7d0s0 (my ide-hdd where osol should be installed).

I have a second IDE HDD on my other IDE controller and want to attach this device to the existing pool. But I don't want to mirror only the 20 GB installation partition; I'd like to use the whole 120 GB drive. Is there a way to do this?

In the end I'd like to have a setup similar to this:
OSOL on a 20 GB partition on 2x 120 GB IDE HDDs in a ZFS mirror pool, and the rest of each drive as a folder mounted under /mnt/something.

Thank you very much in advance.
Alex

_______________________________________________
opensolaris-help mailing list
opensolaris-help@opensolaris.org
