Haudy,

Thanks for reporting this bug and helping to improve ZFS.
I'm not sure either how you could have added a note to an
existing report. Anyway, I've gone ahead and done that for you
in the "Related Bugs" field, though opensolaris doesn't reflect
it yet.

Neil.


Haudy Kazemi wrote:
> I have reported this bug here: 
> http://bugs.opensolaris.org/view_bug.do?bug_id=6685676
> 
> I think this bug may be related, but I do not see where to add a note to 
> an existing bug report: 
> http://bugs.opensolaris.org/view_bug.do?bug_id=6633592
> (Both bugs refer to ZFS-8000-2Q; however, my report shows a FAULTED pool 
> instead of a DEGRADED pool.)
> 
> Thanks,
> 
> -hk
> 
> Haudy Kazemi wrote:
>> Hello,
>>
>> I'm writing to report what I think is an incorrect or conflicting 
>> suggestion in the error message displayed on a faulted pool that does 
>> not have redundancy (equiv to RAID0?).  I ran across this while testing 
>> and learning about ZFS on a clean installation of NexentaCore 1.0.
>>
>> Here is how to recreate the scenario:
>>
>> [EMAIL PROTECTED]:~$ mkfile 200m testdisk1 testdisk2
>> [EMAIL PROTECTED]:~$ sudo zpool create mybigpool $PWD/testdisk1 
>> $PWD/testdisk2
>> Password:
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>>         NAME                          STATE     READ WRITE CKSUM
>>         mybigpool                     ONLINE       0     0     0
>>           /export/home/kaz/testdisk1  ONLINE       0     0     0
>>           /export/home/kaz/testdisk2  ONLINE       0     0     0
>>
>> errors: No known data errors
>> [EMAIL PROTECTED]:~$ sudo zpool scrub mybigpool
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: ONLINE
>>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:09:29 2008
>> config:
>>
>>         NAME                          STATE     READ WRITE CKSUM
>>         mybigpool                     ONLINE       0     0     0
>>           /export/home/kaz/testdisk1  ONLINE       0     0     0
>>           /export/home/kaz/testdisk2  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> Up to here everything looks fine.  Now let's destroy one of the virtual 
>> drives:
>>
>> [EMAIL PROTECTED]:~$ rm testdisk2
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: ONLINE
>>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:09:29 2008
>> config:
>>
>>         NAME                          STATE     READ WRITE CKSUM
>>         mybigpool                     ONLINE       0     0     0
>>           /export/home/kaz/testdisk1  ONLINE       0     0     0
>>           /export/home/kaz/testdisk2  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> Okay, still looks fine, but I haven't tried to read/write to it yet.  
>> Try a scrub.
>>
>> [EMAIL PROTECTED]:~$ sudo zpool scrub mybigpool
>> [EMAIL PROTECTED]:~$ zpool status mybigpool
>>   pool: mybigpool
>>  state: FAULTED
>> status: One or more devices could not be opened.  Sufficient replicas
>>         exist for the pool to continue functioning in a degraded state.
>> action: Attach the missing device and online it using 'zpool online'.
>>    see: http://www.sun.com/msg/ZFS-8000-2Q
>>  scrub: scrub completed after 0h0m with 0 errors on Mon Apr  7 22:10:36 2008
>> config:
>>
>>         NAME                          STATE     READ WRITE CKSUM
>>         mybigpool                     FAULTED      0     0     0  insufficient replicas
>>           /export/home/kaz/testdisk1  ONLINE       0     0     0
>>           /export/home/kaz/testdisk2  UNAVAIL      0     0     0  cannot open
>>
>> errors: No known data errors
>> [EMAIL PROTECTED]:~$
>>
>> There we go.  The pool has faulted, as I expected, because I created it 
>> as a non-redundant pool.  I think it is the equivalent of a RAID0 pool 
>> with checksumming; at least it behaves like one.  The reason I am 
>> reporting this is that the "status" message says "One or more devices 
>> could not be opened.  Sufficient replicas exist for the pool to continue 
>> functioning in a degraded state." while the message further down, to the 
>> right of the pool name, says "insufficient replicas".
>>
>> The verbose status message is wrong in this case.  From other forum/list 
>> posts, it looks like that status message is also used for degraded pools, 
>> which isn't a problem, but here we have a faulted pool.  Here's an 
>> example of the same status message used appropriately: 
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/031298.html
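>>
>> For comparison, here is a minimal sketch (untested; the pool/file names 
>> and the expected states are my assumptions) of the redundant case where 
>> that status text would actually be accurate -- a two-way mirror that 
>> loses one side should end up DEGRADED rather than FAULTED:
>>
>> $ mkfile 200m mirrordisk1 mirrordisk2       # backing files (placeholders)
>> $ sudo zpool create mymirrorpool mirror $PWD/mirrordisk1 $PWD/mirrordisk2
>> $ rm mirrordisk2                            # simulate losing one side
>> $ sudo zpool scrub mymirrorpool             # force ZFS to reopen the devices
>> $ zpool status mymirrorpool                 # expected: pool DEGRADED, the
>>                                             # missing device UNAVAIL, and the
>>                                             # "Sufficient replicas exist" text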
>>
>> Is anyone else able to reproduce this?  And if so, is there a ZFS bug 
>> tracker to report this to?  (I didn't see a public bug tracker when I 
>> looked.)
>>
>> Thanks,
>>
>> Haudy Kazemi
> 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
