This is a frequently asked question, but the FAQ is not well maintained :-(
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq

On Feb 8, 2010, at 1:35 PM, Lasse Osterild wrote:
> Hi,
> 
> This may well have been covered before but I've not been able to find an 
> answer to this particular question.
> 
> I've setup a raidz2 test env using files like this:
> 
> # mkfile 1g t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 s1 s2
> # zpool create dataPool raidz2 /xvm/t1 /xvm/t2 /xvm/t3 /xvm/t4 /xvm/t5
> # zpool add dataPool raidz2 /xvm/t6 /xvm/t7 /xvm/t8 /xvm/t9 /xvm/t10
> # zpool add dataPool spare /xvm/s1 /xvm/s2
> 
> # zpool status dataPool
>  pool: dataPool
> state: ONLINE
> scrub: none requested
> config:
> 
>       NAME          STATE     READ WRITE CKSUM
>       dataPool      ONLINE       0     0     0
>         raidz2-0    ONLINE       0     0     0
>           /xvm/t1   ONLINE       0     0     0
>           /xvm/t2   ONLINE       0     0     0
>           /xvm/t3   ONLINE       0     0     0
>           /xvm/t4   ONLINE       0     0     0
>           /xvm/t5   ONLINE       0     0     0
>         raidz2-1    ONLINE       0     0     0
>           /xvm/t6   ONLINE       0     0     0
>           /xvm/t7   ONLINE       0     0     0
>           /xvm/t8   ONLINE       0     0     0
>           /xvm/t9   ONLINE       0     0     0
>           /xvm/t10  ONLINE       0     0     0
>       spares
>         /xvm/s1     AVAIL   
>         /xvm/s2     AVAIL   
> 
> All is good and it works. I then copied a few gigs of data onto the pool and 
> checked with zpool list:
> r...@vmstor01:/# zpool list
> NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> dataPool  9.94G  4.89G  5.04G    49%  1.00x  ONLINE  -
> 
> Now here's what I don't get: why does it say the pool size is 9.94G when it's 
> made up of 2 x raidz2 vdevs consisting of 1G volumes? It should only be 6G, 
> which df -h also reports correctly.

No, zpool list displays the total physical pool space, parity included; df -h 
displays something else entirely: the space usable by datasets after parity. 
If you have 10 1GB devices, the total pool space is 10GB; with two 5-disk 
raidz2 vdevs, 2 x 2 of those gigabytes hold parity, leaving roughly 6GB 
usable, which is what df reports. From the zpool(1m) man page:
...
     size
         Total size of the storage pool.

     These space usage properties report  actual  physical  space
     available  to  the  storage  pool. The physical space can be
     different from the total amount of space that any  contained
     datasets  can  actually  use.  The amount of space used in a
     raidz configuration depends on the  characteristics  of  the
     data being written. In addition, ZFS reserves some space for
     internal accounting that  the  zfs(1M)  command  takes  into
     account,  but the zpool command does not. For non-full pools
     of a reasonable size, these effects should be invisible. For
     small  pools,  or  pools  that are close to being completely
     full, these discrepancies may become more noticeable.
...
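
To see both numbers side by side, compare zpool list with zfs list (a sketch 
against the test pool above; exact columns vary by build). zpool list reports 
the raw space, 2 vdevs x 5 x 1GB = ~10GB, while zfs list reports the space 
usable after parity, 2 vdevs x (5 - 2) x 1GB = ~6GB, matching df:

# zpool list dataPool
# zfs list -o name,used,available dataPool

The same logic explains ALLOC: on a 5-disk raidz2, each stripe carries 2 
parity blocks for every 3 data blocks, so ~3.0G of data shows up as roughly 
3.0 x 5/3 = ~5G allocated.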

 -- richard

> For a RAIDZ2 pool I find the information, the fact that it's 9.94G and not 
> 5.9G, completely useless and misleading. Why is parity part of the 
> calculation? ALLOC also seems wrong: there's nothing in the pool except a 
> full copy of /usr (just to fill it with test data). It does, however, 
> correctly show that I've used about 50% of the pool. This is a build 131 
> machine, btw.
> 
> r...@vmstor01:/# df -h /dataPool
> Filesystem            Size  Used Avail Use% Mounted on
> dataPool              5.9G  3.0G  3.0G  51% /dataPool
> 
> Cheers,
> 
> - Lasse
> 
