Unfortunately, I can only agree with the doubts about running ZFS in
production environments. I've lost ditto blocks, I've gotten
corrupted pools and a bunch of other failures, even in
mirror/raidz/raidz2 setups, with or without hardware mirrors/raid5/6.
Add to that the uncertainty of whether a sudden crash/reboot will
corrupt or even destroy the pools, with "restore from backup" as the
only advice. I've been lucky so far about getting my pools back,
thanks to people like Victor.

What would be needed is a proper fsck for ZFS that can resolve "minor"
data corruption. Tools for rebuilding, resizing and moving data
around on pools are also needed, as is recovery of data from faulted
pools, like there is for ext2/3/ufs/ntfs.
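
(For context, the closest things ZFS offers today, as far as I know, are
scrub and the zdb debugger, which can detect damage but can only repair
from redundancy the pool already has; "tank" below is just an example
pool name:

  zpool scrub tank       # walk all blocks, repair only from existing mirror/raidz copies
  zpool status -v tank   # list any permanent errors the scrub found
  zdb -c tank            # read-only metadata checksum traversal; a debugger, not a repair tool

Nothing in that list can rewrite damaged metadata the way fsck can for
ext2/3, which is the gap I mean.)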

All in all, a great FS, but not production ready until the tools are in
place or it gets really, really resilient to minor failures and/or
crashes in both software and hardware. For now I'll stick to XFS/UFS
and sw/hw-raid and live with the restrictions of those filesystems.

//T

2008/10/9 Mike Gerdts <[EMAIL PROTECTED]>:
> On Thu, Oct 9, 2008 at 7:44 AM, Ahmed Kamal
> <[EMAIL PROTECTED]> wrote:
>>
>> > In the past year I've lost more ZFS file systems than I have any other
>> > type of file system in the past 5 years.  With other file systems I
>> > can almost always get some data back.  With ZFS I can't get any back.
>>
>> > That's scary to hear!
>>
>> I am really scared now! I was the one trying to quantify ZFS reliability,
>> and that is surely bad to hear!
>
> The circumstances where I have lost data have been when ZFS has not
> handled a layer of redundancy.  However, I am not terribly optimistic
> about the prospects of ZFS on any device that hasn't actually committed
> writes that ZFS thinks are committed.  Mirrors and raidz would also be
> vulnerable to such failures.
>
> I also have run into other failures that have gone unanswered on the
> lists.  It makes me wary about using ZFS without a support contract
> that allows me to escalate to engineering.  Patching-only support
> won't help.
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044984.html
>   Hang only after I mirrored the zpool, no response on the list
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048255.html
>   I think this is fixed around snv_98, but the zfs-discuss list was
>   surprisingly silent on acknowledging it as a problem - I had no
>   idea that it was being worked on until I saw the commit.  The panic
>   seemed to be caused by DTrace - core developers of DTrace
>   were quite interested in the kernel crash dump.
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September/051109.html
>   Panic during ON build.  Pool was lost, no response from list.
>
> --
> Mike Gerdts
> http://mgerdts.blogspot.com/
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Timh Bergström
System Administrator
Diino AB - www.diino.com
:wq
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
