Yeah, this threw me.  A 3 disk RAID-Z2 doesn't make sense, because in terms
of redundancy, RAID-Z2 looks like RAID 6.  That is, there are 2 levels of
parity for the data.  Out of 3 disks, the equivalent of 2 disks is used to
store redundancy (parity) data, and only 1 disk equivalent stores actual
data.  This is what others might term a "degenerate case of 3-way mirroring",
except with a lot more computational overhead, since we're performing 2
parity calculations.
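
For concreteness, the usable-space math works out roughly as below.  This is
just a back-of-the-envelope sketch; the 136 GB per-disk figure is my
assumption, inferred from the 408 GB raw pool size divided by 3 disks, not
something confirmed in the output further down.

  # Rough RAID-Z2 capacity sketch; disk_gb=136 is an assumed value
  # (408 GB raw / 3 disks), not confirmed by the zpool output below.
  disks=3; parity=2; disk_gb=136
  echo "usable ~ $(( (disks - parity) * disk_gb )) GB"   # prints: usable ~ 136 GB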

I'm curious: what is (or was) the purpose of creating a 3 disk RAID-Z2 pool?
(For my own personal edification.  Maybe there is something for me to learn
from this example.)

Aside: Does ZFS actually create the pool as a 3-way mirror, given that this
configuration is effectively the same?  This is a question for any of the ZFS
team who may be reading, but I'm curious now.
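
If anyone wants to check, a dry run along these lines would show the layout
ZFS actually builds in each case.  Just a sketch: the pool name and devices
are placeholders (you'd need spare disks), and zpool create -n only prints
the configuration it would use without creating anything.

  # Placeholder pool/device names; -n prints the would-be config, creates nothing.
  zpool create -n testpool raidz2 c8t1d0 c8t2d0 c8t3d0
  zpool create -n testpool mirror c8t1d0 c8t2d0 c8t3d0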

On Mon, Mar 15, 2010 at 10:38, Michael Hassey <mhas...@gmail.com> wrote:

> Sorry if this is too basic -
>
> So I have a single zpool in addition to the rpool, called xpool.
>
> NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> rpool   136G   109G  27.5G    79%  ONLINE  -
> xpool   408G   171G   237G    42%  ONLINE  -
>
> I have 408 GB in the pool and am using 171 GB, leaving me 237 GB.
>
> The pool is built up as;
>
>  pool: xpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        xpool       ONLINE       0     0     0
>          raidz2    ONLINE       0     0     0
>            c8t1d0  ONLINE       0     0     0
>            c8t2d0  ONLINE       0     0     0
>            c8t3d0  ONLINE       0     0     0
>
> errors: No known data errors
>
>
> But - and here is the question -
>
> I'm creating file systems on it, and the file systems in play report only
> 76 GB of space free....
>
> <<<<SNIP FROM ZFS LIST>>>>>>>
>
> xpool/zones/logserver/ROOT/zbe     975M  76.4G   975M  legacy
> xpool/zones/openxsrvr             2.22G  76.4G  21.9K  /export/zones/openxsrvr
> xpool/zones/openxsrvr/ROOT        2.22G  76.4G  18.9K  legacy
> xpool/zones/openxsrvr/ROOT/zbe    2.22G  76.4G  2.22G  legacy
> xpool/zones/puggles                241M  76.4G  21.9K  /export/zones/puggles
> xpool/zones/puggles/ROOT           241M  76.4G  18.9K  legacy
> xpool/zones/puggles/ROOT/zbe       241M  76.4G   241M  legacy
> xpool/zones/reposerver             299M  76.4G  21.9K  /export/zones/reposerver
>
>
> So my question is, where is the space from xpool being used? or is it?
>
>
> Thanks for reading.
>
> Mike.
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
"You can choose your friends, you can choose the deals." - Equity Private

"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss