Perhaps I misunderstand, but the issues below are all based on Nevada, 
not Solaris 10.  

Nevada isn't production code.  For real ZFS testing, you must use a 
production release, currently Solaris 10 (update 5, soon to be update 6).

In the last 2 years, I've stored everything in my environment (home 
directory, builds, etc.) on ZFS on multiple types of storage subsystems 
without issues.  All of this has been on Solaris 10, however.

Btw, I completely agree on the panic issue.  If I have a large DB 
server with many pools, and one inconsequential pool fails, I lose the 
entire DB server.  I'd really like a zpool-level option directing how 
a failure of a particular pool is handled, instead of a system-wide 
panic.  Perhaps this is in the latest bits; if so, sorry, I'm running 
old stuff.  :-)
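
(Checking the docs: I believe the latest bits do have exactly this, via 
a per-pool "failmode" property that can be set to wait, continue, or 
panic.  Assuming I'm reading it right, and with "tank" standing in for 
one of the inconsequential pools, something like:

    zpool set failmode=continue tank   # return EIO to apps instead of panicking
    zpool get failmode tank            # verify the setting

would keep a failure in that one pool from taking down the whole DB 
server.)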

I also run ZFS on my Mac.  While I know that port isn't production 
quality, the panics involving external drives (FireWire, USB, eSATA) 
are very irritating.  A hiccup from a jostled cable, and the entire box 
panics.  That's frustrating.

Timh Bergström wrote:
> Unfortunately I can only agree with the doubts about running ZFS in
> production environments. I've lost ditto blocks, I've gotten
> corrupted pools and a bunch of other failures, even in
> mirror/raidz/raidz2 setups, with or without hardware mirrors/RAID5/6.
> Plus there's the insecurity that a sudden crash/reboot will corrupt or
> even destroy the pools, with "restore from backup" as the only advice.
> I've been lucky so far about getting my pools back, thanks to people
> like Victor.
>
> What would be needed is a proper fsck for ZFS which can resolve "minor"
> data corruption. Tools for rebuilding, resizing and moving the data
> around on pools are also needed, as is recovery of data from faulted
> pools, like there is for ext2/3/UFS/NTFS.
>
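[Inline note: not a fsck, but part of this exists today in scrub, if I 
understand the tooling right.  A scrub walks every allocated block, 
verifies checksums, and repairs from whatever redundancy the pool has 
("tank" here is a placeholder pool name):

    zpool scrub tank
    zpool status -v tank   # progress, errors, and any unrecoverable files

It only helps on a pool that still imports, though, so the faulted-pool 
recovery you describe is exactly the missing piece.]
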
> All in all, a great FS, but not production ready until the tools are
> in place or it gets really, really resilient to minor failures and/or
> crashes in both software and hardware. For now I'll stick to XFS/UFS
> and sw/hw RAID, and live with the restrictions of those filesystems.
>
> //T
>
> 2008/10/9 Mike Gerdts <[EMAIL PROTECTED]>:
>> On Thu, Oct 9, 2008 at 7:44 AM, Ahmed Kamal
>> <[EMAIL PROTECTED]> wrote:
>>>> In the past year I've lost more ZFS file systems than I have any other
>>>> type of file system in the past 5 years.  With other file systems I
>>>> can almost always get some data back.  With ZFS I can't get any back.
>>>
>>>> That's scary to hear!
>>>
>>> I am really scared now! I was the one trying to quantify ZFS reliability,
>>> and that is surely bad to hear!
>> The circumstances where I have lost data have been when ZFS has not
>> handled a layer of redundancy.  However, I am not terribly optimistic
>> about the prospects of ZFS on any device that hasn't committed writes
>> that ZFS thinks are committed.  Mirrors and raidz would also be
>> vulnerable to such failures.
>>
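[Inline note: one partial mitigation, as I understand it, is the 
"copies" property, which stores ditto copies of data blocks even on a 
single-device pool ("tank/home" is a placeholder dataset):

    zfs set copies=2 tank/home   # keep two copies of every data block

That guards against localized corruption, but, as Mike points out, 
nothing at the ZFS layer can save you if the device claims writes are 
committed when they aren't.]
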
>> I also have run into other failures that have gone unanswered on the
>> lists.  It makes me wary of using ZFS without a support contract
>> that allows me to escalate to engineering.  Patching-only support
>> won't help.
>>
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044984.html
>>   Hang only after I mirrored the zpool, no response on the list
>>
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048255.html
>>   I think this is fixed around snv_98, but the zfs-discuss list was
>>   surprisingly silent on acknowledging it as a problem - I had no
>>   idea that it was being worked on until I saw the commit.  The panic
>>   seemed to be caused by dtrace - core developers of dtrace
>>   were quite interested in the kernel crash dump.
>>
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September/051109.html
>>   Panic during ON build.  Pool was lost, no response from list.
>>
>> --
>> Mike Gerdts
>> http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
