Oh my, one day after I posted my horror story another one strikes. This is 
validation of the design objectives of ZFS; it looks like this type of stuff 
happens more often than we'd like to admit. In the past we'd have just 
attributed this kind of problem to some application-induced corruption; now 
ZFS is pinning it squarely on the storage sub-system.

If you didn't configure any ZFS redundancy then your data is DONE, as the 
support person indicated. Make sure you follow the instructions in the ZFS 
FAQ, otherwise your server will end up in an endless 'panic-reboot cycle'.
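
For the archives, and without the FAQ in front of me, my recollection of the 
workaround it describes is roughly the sketch below. The milestone name, paths 
and prompts are from memory, so treat them as assumptions and defer to the FAQ 
itself:

    ok boot -m milestone=none
          (at the OBP prompt on SPARC; on x86 add the same option to the
           kernel line in GRUB -- gets you to a shell without bringing up
           the services that import and mount the damaged pool)
    # mount -o remount,rw /
          (root comes up read-only at this milestone)
    # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
          (with the cache file out of the way the damaged pool is no
           longer auto-imported at boot, which breaks the panic loop)
    # svcadm milestone all
          (or simply reboot, then try 'zpool import' by hand and see
           what ZFS says about the pool)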

Don't shoot the messenger (ZFS); consider running diags on your storage 
sub-system. A couple of commands that can help point the finger are sketched 
below. Good luck.
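
Before (or while) the vendor diags run, ZFS and FMA can usually name the 
suspect device. Something along these lines (the pool name 'tank' is just a 
placeholder, substitute your own):

    # zpool status -v tank
          (per-vdev read/write/checksum error counters, plus a list of
           files with unrecoverable errors, if any)
    # fmdump -eV
          (the FMA error telemetry that ZFS and the drivers have logged)
    # iostat -En
          (per-device soft/hard/transport error counts from the drivers)

If the checksum errors pile up on one disk, it's probably that disk or its 
cabling; if they're spread across the whole pool, start looking at the 
controller, the driver, or memory.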
 
 