On 28-Nov-06, at 7:02 PM, Elizabeth Schwartz wrote:
> On 11/28/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
>> I suspect this will be the #1 complaint about zfs as it becomes more
>> popular. "It worked before with ufs and hw raid, now with zfs it says
>> my data is corrupt! zfs sux0rs!"
> That's not the problem, so much as "zfs says my file system is
> corrupt; how do I get past this?"
Yes, that's your problem right now. But Frank describes a likely
general syndrome. :-)
> With ufs, f'rinstance, I'd run an fsck, kiss the bad file(s)
> goodbye, and be on my way.
No, you still have the hardware problem.
> With zfs, there's this ominous message saying "destroy the
> filesystem and restore from tape". That's not so good, for one
> corrupt file.
As others have pointed out, you wouldn't have reached this point with
redundancy: the file would have remained intact despite the hardware
failure. It is strictly correct that, in this case, restoring the data
requires going back to a backup.
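To make the redundancy point concrete, here is a minimal sketch (pool
and device names are made up; substitute your own). With a mirror, ZFS
can read the good copy, repair the bad one, and report rather than
propagate the corruption:

```shell
# Create a mirrored pool -- two copies of every block (hypothetical devices):
zpool create tank mirror c0t0d0 c0t1d0

# A scrub walks all data and verifies checksums; on a mirror, blocks
# that fail verification are rewritten from the good side:
zpool scrub tank

# Check results; with redundancy you'd expect repaired checksum
# errors here instead of "restore from backup":
zpool status -v tank
```

This is roughly the ZFS analog of an fsck, except that it verifies
user data as well as metadata and can actually heal what it finds.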
> And even better, it turns out erasing the file might just be enough.
> Although in my case, I now have a new bad object. Sun pointed me to
> docs.sun.com (thanks, that helps!) but I haven't found anything in
> the docs on this so far. I am assuming that my bad object 45654c is
> an inode number of a special file of some sort, but what? And what
> does the range mean? I'd love to read the docs on this.
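The object number is ZFS's analog of an inode number (a dnode within
the dataset). If I remember right, zdb can translate it into something
human-readable, though zdb is a diagnostic tool and its output format
varies between releases. A sketch, with a hypothetical dataset name:

```shell
# Dump the dnode for object 0x45654c in dataset tank/home
# (dataset name is made up -- use the one from "zpool status -v").
# Output includes the object type, size, and, for a plain file,
# its path within the dataset:
zdb -dddd tank/home 0x45654c
```

If the object turns out to be pool metadata rather than a file, that
is exactly the case where the "destroy and restore" advice applies.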
Problems will continue until your hardware is fixed. (Or until you
mask them with a redundant ZFS configuration, but papering over
failing hardware would be a bad idea.)
--Toby
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss