On Thu, Aug 24, 2006 at 10:12:12AM -0600, Arlina Goce-Capiral wrote:
> It does appear that the disk fills up at 140G.

So this confirms what I was saying: they are only able to write
(ndisks - 1) disks' worth of data (in this case, ~68GB * (3 - 1) ==
~136GB).  So there is no unexpected behavior with respect to the size
of their raid-z pool, just the known (and now fixed) bug.
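To put numbers on it, here is a rough sketch (the pool and device
names are hypothetical, the ~68GB disks are the ones from this
thread):

    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    # one disk's worth of blocks goes to parity, so usable space is
    # roughly (3 - 1) * 68GB == ~136GB, consistent with the ~140G
    # reported above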

> I think I now know what happened.  I created a raidz pool and I did
> not write any data to it before I just pulled out a disk.  So I
> believe the zfs filesystem had not initialized yet, and that is why
> my zfs filesystem was unusable.  Can you confirm this?

No, that should not be the case.  As soon as the 'zfs' or 'zpool'
command completes, everything will be on disk for the requested action.
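If you want to convince yourself of that, something along these lines
(pool and device names are just examples) should show the pool and its
top-level filesystem immediately, with no data written:

    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    zpool status tank   # the configuration is already committed to disk
    zfs list tank       # the top-level filesystem exists right away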

> But when I created a zfs filesystem and wrote data to it, it could now
> lose a disk and just be degraded.  I tested this part by removing the
> disk partition in format. 

Well, it sounds like you are testing two different things:  first you
tried physically pulling out a disk, then you tried re-partitioning a
disk.

It sounds like there was a problem when you pulled out the disk.  If you
can describe the problem further (Did the machine panic?  What was the
panic message?) then perhaps we can diagnose it.
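If it did panic, the usual places to look on Solaris would be along
these lines (paths are the defaults, adjust for your setup):

    zpool status -x                # pools with faulted/removed devices
    tail -100 /var/adm/messages    # kernel messages around the pull
    ls /var/crash/`hostname`       # saved crash dumps, if savecore is enabled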

> I will try this same test to re-duplicate my issue, but can you
> confirm for me whether my raidz-based zfs filesystem requires me to
> write data to it first before it's really ready?

No, that should not be the case.
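One way to repeat the test without physically pulling cables (again,
names are just examples) is to fault a device administratively and
watch the pool go DEGRADED and come back:

    zpool offline tank c1t1d0   # pool should report DEGRADED, data still readable
    zpool online tank c1t1d0    # device resilvers, pool returns to ONLINE
    zpool scrub tank            # optional: verify everything afterwards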

> Any ideas on when Solaris 10 update 3 (11/06) will be released?

I'm not sure, but November or December sounds about right.  And of
course, if they want the fix sooner they can always use Solaris Express
or OpenSolaris!

--matt