[Solaris 10 6/06 i86pc]

I recently used a set of six disks in a MultiPack to create a RAIDZ pool. Then I 
ran `zfs set sharenfs=root=a.b.c.d:a.b.c.e space` ("space" is the name I gave 
the ZFS pool).
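
Roughly, the setup looked like this (the device names are from memory and only 
illustrative; the -f on the create is discussed further down):

  # zpool create -f space raidz c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t8d0 c2t9d0
  # zfs set sharenfs=root=a.b.c.d:a.b.c.e space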

Then I NFS-mounted the ZFS pool on another system and used a `find` + `cpio 
-pvdmu` combination to copy data onto the NFS-mounted file system.
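
On the client side it was along these lines (server name, source path and mount 
point are illustrative):

  # mount -F nfs server:/space /mnt/space
  # cd /export/data
  # find . -depth -print | cpio -pvdmu /mnt/space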

Shortly thereafter I ran out of space on my "space" pool, but `zfs list` kept 
reporting about a gigabyte of free space, while `zpool status` seemed to 
correctly report that I had run out of space.
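
For concreteness, these are the sorts of commands whose output I was comparing 
(exact invocations approximate):

  # zfs list -o name,used,available space
  # zpool list space
  # zpool status -v space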

Why do these utilities report inconsistent information?

Then I tried to add two more disks to the pool. A dry run with `zpool add -fn 
space c2t10d0 c2t11d0` showed that they would be added as a plain stripe 
(RAID0), which is not what I wanted. `zpool add -f space raidz c2t10d0 c2t11d0` 
then added ANOTHER RAIDZ vdev to the pool, rather than adding the disks to the 
existing RAIDZ vdev!
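
In hindsight, I believe a dry run of the raidz variant would have shown the new 
top-level raidz before committing (same caveat on device names):

  # zpool add -fn space raidz c2t10d0 c2t11d0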

Is there a way to add more disks to an existing RAIDZ vdev? If there is, I sure 
haven't figured it out yet. What is it?

And from what I understood from the blogs written by Mr. Bonwick, one of the 
properties of RAIDZ is that it can be used on disks in a pool that are not all 
the same size. Why, then, did I have to explicitly use the -f switch (as in the 
create shown above) to force the creation of a RAIDZ vdev on a salad of 2 GB 
and 4 GB disks?

Finally, after the find + cpio had finished, I noticed that the ZFS file system 
takes exactly 1.1 GB more space than the original UFS file system the data was 
copied from, at least according to `du -sh`. Why???
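
For the record, the two figures came from plain `du -sh` runs like these (paths 
illustrative):

  # du -sh /export/data        (the UFS original, on the source host)
  # du -sh /space              (the ZFS copy)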
 
 