> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
> > If they are close enough for "crossover cable" where the cable is UTP,
> > then they are
> > close enough for SAS.
> Pardon my ignorance, but can a system easily serve its local storage
> devices over SAS to a neighbor system (i.e. using a SAS HBA in
> place of an Ethernet NIC or an IB card in Ed's crossover scenario)?
> Would this be doable with today's COMSTAR, using a different
> storage path from the iSCSI stack most often used now?
I was wondering the same thing - but it turns out to be irrelevant. Remember
when I said this?
> Can anybody think of a reason why Option 2 would be stupid, or can you
> think of a better solution?
Well, now I know why it's stupid: it doesn't work right. It turns out that
iSCSI devices (and, I presume, SAS devices) are not treated as removable
storage. That means if the device goes offline and comes back online, it
doesn't just gracefully resilver and move on without any problems; it ends up
in a perpetual state of I/O errors, device unreadable. If they were simply
cksum errors, or something like that, I could handle it. But it's bus errors
and device errors, the system can't operate, and I have to remove the device
permanently.
The really odd thing is that it doesn't always show as faulted in zpool
status. Even when it does show as faulted, I can zpool online or zpool clear
to make the pool look healthy again. But when an app tries to use something in
that zpool, the system grinds, I can see scsi errors spewing into
/var/adm/messages, and sometimes the system will halt.
This is all caused by my disconnecting / rebooting either the iSCSI initiator
or the target.
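For what it's worth, the sequence I went through looks roughly like this. This
is just a sketch; the pool name "tank" and device name "c2t0d0" are made-up
placeholders, and it assumes a live Solaris-ish system with the zpool utility:

```shell
#!/bin/sh
# Sketch of the recovery attempts described above. Pool "tank" and
# device "c2t0d0" are hypothetical names -- substitute your own.

# 1. Check pool health; the flaky iSCSI-backed vdev may or may not
#    actually show as FAULTED here.
zpool status tank

# 2. Try to bring the device back and clear the error counters once the
#    target is reachable again; this can make the pool *look* healthy...
zpool online tank c2t0d0
zpool clear tank

# 3. ...but actually touching data on the pool is what reveals the
#    problem: generate some I/O and watch for scsi errors in the log.
tail -f /var/adm/messages &
dd if=/tank/some/file of=/dev/null bs=1M count=16
```

The point being that a clean-looking zpool status after a clear is not proof
of anything until real I/O has gone through the pool.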
Lesson learned: if you create an iSCSI target, make *damn* sure it's an
always-on system. And don't use just one. And don't do maintenance on both of
them anywhere near the same week.