On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon <gbul...@sonicle.com>
wrote:
Hello,
I'd like to ask for guidance on using ZFS on iSCSI storage
appliances.
Recently I had an unlucky situation in which a storage machine
froze.
Once the storage was up again (rebooted), all the other iSCSI clients
were happy, but one of them (a Sun Solaris SPARC box running Oracle)
refused to mount the volume, marking it as corrupted.
I had no way to get my ZFS data back: I had to destroy the pool and
recreate it from backups.
So I have some questions regarding this nice story:
- I remember sysadmins almost always being able to recover data on
corrupted UFS filesystems through the magic of alternate superblocks.
Is there something similar on ZFS? Is there really no way to access
the data of a corrupted ZFS filesystem?
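(There is a rough analogue on recent builds: ZFS keeps a ring of
previous uberblocks, and zpool import can be asked to roll the pool
back to an earlier transaction group. A sketch only -- the pool name
"tank" is an example, and this needs a build with import recovery
support:)

```shell
# Dry run: ask whether discarding the last few transactions would
# make the pool importable again, without changing anything.
zpool import -nF tank

# If the dry run looks sane, import with rollback. The most recent
# writes may be lost, but the rest of the data should come back.
zpool import -F tank

# Check pool health and list any files with unrecoverable errors.
zpool status -v tank
```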
- In this case, the storage appliance is a legacy system based on
Linux, so RAID/mirroring is managed on the storage side in its own
way. Being an iSCSI target, this volume was attached to the Solaris
host as a single iSCSI disk and made into a ZFS pool consisting of
that single target. The ZFS best practices say that, to be safe
against corruption, pools should always be mirrors or raidz across
two or more disks. I considered everything safe here because the
mirroring and RAID were handled by the storage machine, but from the
Solaris host's point of view the pool had only one device! Maybe that
was the point of failure. What is the correct way to go in this
case?
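(One approach: export two separate LUNs from the storage side --
ideally from two appliances -- and let ZFS mirror them on the host, so
ZFS itself has redundancy to repair from. A sketch with the Solaris
initiator; the portal addresses and device names below are examples
only:)

```shell
# Point the initiator at both targets and enable SendTargets discovery.
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm add discovery-address 192.168.1.11:3260
iscsiadm modify discovery --sendtargets enable

# Create device nodes for the discovered LUNs.
devfsadm -i iscsi

# Mirror the two iSCSI disks at the ZFS level, so ZFS can self-heal.
zpool create tank mirror c2t600144F04ABCDEF0d0 c2t600144F04ABCDEF1d0

# Even on a single LUN, copies=2 lets ZFS repair damaged data blocks
# (though it does not protect against losing the whole device).
zfs set copies=2 tank
```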
- Finally, looking forward to running new storage appliances using
OpenSolaris with ZFS plus iscsitadm and/or COMSTAR, I feel a bit
confused by the possibility of a double-ZFS situation: the storage
appliance's ZFS filesystem would be divided into ZFS volumes,
accessed via iSCSI by a Solaris host that creates its own ZFS pool on
top of them (...is that too redundant??), and again I would fall into
the same situation as before (a host ZFS pool backed by only one
iSCSI resource).
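(For reference, the "double ZFS" setup with COMSTAR looks roughly like
this: carve a zvol out of the appliance's pool and export it as an
iSCSI LUN. A sketch only -- pool name, volume size, and the GUID
placeholder are all examples:)

```shell
# On the appliance: create a ZFS volume to back the LUN.
zfs create -V 100G storagepool/oradata

# Register the zvol as a SCSI logical unit with the STMF framework.
sbdadm create-lu /dev/zvol/rdsk/storagepool/oradata

# Make the LU visible to initiators (use the GUID printed by sbdadm).
stmfadm add-view 600144F0...

# Bring up an iSCSI target for initiators to log in to.
itadm create-target
```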
Any guidance would be really appreciated :)
Thanks a lot
Gabriele.
What iSCSI target was this?
If it was IET, I hope you were NOT using the write-back option on it,
as it caches write data in volatile RAM.
IET does support cache flushes, but if you cache in RAM (a bad idea)
a system lockup or panic will ALWAYS lose data.
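(In IET's ietd.conf terms, a safer export looks something like the
fragment below -- target name and device path are examples:)

```
Target iqn.2010-03.com.example:storage.lun0
    # Type=blockio bypasses the page cache entirely. With Type=fileio,
    # IOMode=wt (write-through) is the safe setting; IOMode=wb is the
    # write-back mode that buffers writes in volatile RAM.
    Lun 0 Path=/dev/vg0/lun0,Type=blockio
```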
-Ross
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss