On 11/13/12 22:04, Or Gerlitz wrote:
> Bart Van Assche <[email protected]> wrote:
>> On 11/12/12 23:36, Or Gerlitz wrote:
>>>> This patch series reduces path failover time significantly. Instead of
>>>> having to wait until the SCSI error handler has finished recovery,
>>> When a SCSI device is selected by mpath and used as a path, aren't failed
>>> commands returned to the mpath driver for possible re-submission over
>>> a different path?
>> The advantage of having a configurable fast_io_fail_tmo parameter is
>> that this parameter can be configured to a smaller value than the SCSI
>> timeout and hence that failover mechanisms in higher layers (dm and
>> multipathd) are triggered more quickly if an I/O error is encountered.
>> multipathd switches paths as soon as fast_io_fail_tmo has elapsed. Also,
>> SCSI hosts that correspond to failed paths are removed. With the upstream
>> SRP initiator, triggering path failover repeatedly leaves hundreds of
>> obsolete SCSI hosts behind after some time.
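For reference, once the SRP transport class exposes these timeouts they can be inspected and tuned per remote port through sysfs. This is only an illustrative tuning fragment; the rport name "port-1:1" and the values are hypothetical, so list /sys/class/srp_remote_ports/ on the system in question to find the actual entries:

```shell
# Show the current fast I/O failure timeout for a (hypothetical) rport:
cat /sys/class/srp_remote_ports/port-1:1/fast_io_fail_tmo

# Fail pending I/O after 5 seconds, well before the SCSI command timeout,
# so that dm-multipath can switch paths quickly:
echo 5 > /sys/class/srp_remote_ports/port-1:1/fast_io_fail_tmo

# Remove the SCSI host for the failed path if it has not come back
# within 60 seconds, avoiding a buildup of obsolete SCSI hosts:
echo 60 > /sys/class/srp_remote_ports/port-1:1/dev_loss_tmo
```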
>>>> - Dropped the patches for integration with multipathd.
>>> can you explain this please? are these non-SRP patches which we
>>> submitted and which were accepted through another maintainer? can you
>>> point to the upstream commits?
>> With that comment I was referring to the dev_loss_tmo and fast_io_fail_tmo
>> sysfs variables that had been dropped in v2 of this patch set but that have
>> been reintroduced in v3 of this patch set. If these parameters have been set
>> in /etc/multipath.conf then multipathd passes them on to the block driver
>> (ib_srp in this case), at least if the block driver provides the dev_loss_tmo
>> and fast_io_fail_tmo sysfs attributes.
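For completeness, a minimal /etc/multipath.conf fragment along those lines might look as follows; the values are only illustrative and should be chosen per deployment:

```
defaults {
        # Fail outstanding I/O on a lost path after 5 seconds instead of
        # waiting for SCSI error handling to complete.
        fast_io_fail_tmo 5

        # Remove the transport rport (and with it the SCSI host) if the
        # path has not come back within 600 seconds.
        dev_loss_tmo 600
}
```

multipathd applies these to transports that expose the corresponding sysfs attributes, which is what the discussion above is about for ib_srp.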
> So these are attributes you added to the block layer, or to SRP? I'm
> not clear on that.
These attributes have been added to the SRP transport layer. Since the
ib_srp driver registers itself with the SRP transport layer, the SRP
transport layer creates these two attributes for the ib_srp driver. This
is similar to how the FC transport layer creates these attributes for FC
initiator drivers.
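A rough sketch of that registration pattern, modeled on how an SRP initiator hooks into scsi_transport_srp (field names follow ib_srp; the exact set of fields may differ between kernel versions, and the timeout defaults here are only illustrative):

```c
/* Sketch only, not a complete driver: an SRP initiator registers a
 * srp_function_template with the SRP transport class, and it is the
 * transport class that then creates the dev_loss_tmo and
 * fast_io_fail_tmo sysfs attributes on its behalf. */
#include <scsi/scsi_transport_srp.h>

static int srp_fast_io_fail_tmo = 15;	/* illustrative default, seconds */
static int srp_dev_loss_tmo = 60;	/* illustrative default, seconds */

static struct srp_function_template ib_srp_transport_functions = {
	.has_rport_state	= true,
	.fast_io_fail_tmo	= &srp_fast_io_fail_tmo,
	.dev_loss_tmo		= &srp_dev_loss_tmo,
	/* callbacks such as .reconnect, .terminate_rport_io and
	 * .rport_delete are omitted in this sketch */
};

static struct scsi_transport_template *ib_srp_transport_template;

static int __init srp_init_module(void)
{
	/* Attaching to the transport class is what makes the timeout
	 * attributes appear under /sys/class/srp_remote_ports/. */
	ib_srp_transport_template =
		srp_attach_transport(&ib_srp_transport_functions);
	if (!ib_srp_transport_template)
		return -ENOMEM;
	return 0;
}
```

The FC transport class (scsi_transport_fc) follows the same pattern for FC initiator drivers, which is the analogy drawn above.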
Bart.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html