Giovanni wrote on 05/02/2010 02:58:07 PM:
>
> Hi guys
>
> I am new to Opensolaris and ZFS world, I have 6x2TB SATA hdds on my
> system, I picked a single 2TB disk and installed opensolaris
> (therefore zpool was created by the installer)
>
> I went ahead and created a new pool "gpool" with raidz (the kind of
> redundancy I want). Here's the output:
>
> @server:/# zfs list
> NAME                         USED  AVAIL  REFER  MOUNTPOINT
> gpool                        119K  7.13T  30.4K  /gpool
> rpool                       7.78G  1.78T    78K  /rpool
> rpool/ROOT                  3.30G  1.78T    19K  legacy
> rpool/ROOT/opensolaris      3.30G  1.78T  3.15G  /
> rpool/dump                  2.00G  1.78T  2.00G  -
> rpool/export                 491M  1.78T    21K  /export
> rpool/export/home            491M  1.78T    21K  /export/home
> rpool/export/home/G   491M  1.78T   491M  /export/home/G
> rpool/swap                  2.00G  1.78T   101M  -
> @server:/#
>
> @server:/# zpool status
>   pool: gpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>    NAME        STATE     READ WRITE CKSUM
>    gpool       ONLINE       0     0     0
>      raidz1    ONLINE       0     0     0
>        c8t1d0  ONLINE       0     0     0
>        c8t2d0  ONLINE       0     0     0
>        c8t3d0  ONLINE       0     0     0
>        c8t4d0  ONLINE       0     0     0
>        c8t5d0  ONLINE       0     0     0
>
> errors: No known data errors
>
>   pool: rpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>    NAME        STATE     READ WRITE CKSUM
>    rpool       ONLINE       0     0     0
>      c8t0d0s0  ONLINE       0     0     0
>
> errors: No known data errors
> @server:/#
>
>
>  Now, I want to get rid of "rpool" in its entirety: I want to
> migrate all settings, boot records, and files from rpool to "gpool",
> and then add rpool's member disk c8t0d0s0 to my existing "gpool" so
> that I have a RAIDZ of 6 drives.
>
> Any guidance on how to do it? I tried to do zfs snapshot

Unless things have changed recently, ZFS only supports booting off single
disks or mirrors. If you want to make rpool redundant, you could make it a
mirror, but booting off a raidz is not possible (the inability to boot off
raid5/6, which is somewhat similar to raidz, is a pretty common limitation
of sw-raid implementations). Normally what is done is to have a small
mirrored pool for booting, and then put the primary data store on whatever
redundancy type you want.
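
For example, a rough sketch of turning rpool into a mirror (the second
device name c9t0d0s0 is hypothetical; you would need a free disk or slice
at least as big as c8t0d0s0, and installgrub is the x86 procedure):

   # zpool attach rpool c8t0d0s0 c9t0d0s0
   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c9t0d0s0

The attach turns the single root device into a two-way mirror and
resilvers it; installgrub puts the boot blocks on the new half so the box
can boot from either disk.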

>
> # zfs snapshot rp...@move
>
>
> But I don't see the snapshot anywhere in rpool/.zfs (there is no .zfs
> folder)

I think there is an option that turns this on... try running "zfs get
snapdir rpool", and then, if you want the snapshot directory to show up,
"zfs set snapdir=visible rpool". IIRC it's hidden by default so as not to
confuse people. You can also use zfs send/recv if you are moving
everything in a snapshot.
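
For instance, to check and expose the snapshot directory on rpool:

   # zfs get snapdir rpool
   # zfs set snapdir=visible rpool
   # ls /rpool/.zfs/snapshot

And a rough send/recv sketch for copying a dataset tree over (the dataset
and snapshot names are just illustrations based on your listing; this
copies data only, it will not make gpool bootable):

   # zfs snapshot -r rpool/export@move
   # zfs send -R rpool/export@move | zfs receive -d gpool

The -R/-d pair recreates rpool/export and everything under it as
gpool/export, with child filesystems and properties preserved.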

>
> Thanks

Andrew Hettinger
http://Prominic.NET  ||  ahettin...@prominic.net
Tel:  866.339.3169 (toll free) -or- +1.217.356.2888 x.110 (int'l)
Fax: 866.372.3356 (toll free) -or- +1.217.356.3356            (int'l)