We seem to be having an issue with failover time over iSCSI.
The end goal is to force a failover to an alternate path, as
defined by dm-multipath, within 10 seconds.
Distro: CentOS
Kernel version:
dm-multipath version: device-mapper-multipath-0.4.7-17.el5
iscsid version: iscsi-initiator-utils-

We have dm-multipath installed and configured with the following
settings:

                udev_dir                /dev
                polling_interval        3
                selector                "round-robin 0"
                path_grouping_policy    failover
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            /bin/true
                path_checker            readsector0
                rr_min_io               10
                rr_weight               uniform
                failback                manual
                no_path_retry           fail
                user_friendly_names     yes
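As a rough sanity check on these settings, here is a small sketch (plain Python, not part of the configuration) of the path-checker timing they imply. The key assumption, as we understand multipathd, is that the readsector0 checker runs every polling_interval seconds, so a dead path can only be marked failed once its test I/O actually errors out at the lower SCSI/iSCSI layer:

```python
# Rough path-checker budget from the dm-multipath settings above.
# Assumption: multipathd runs the readsector0 checker every
# polling_interval seconds, so a dead path is noticed within one polling
# cycle PLUS the time for the test read itself to fail -- and that
# failure time is governed by the SCSI/iSCSI timeouts, not by multipathd.
polling_interval = 3  # seconds between checker runs (from the config)

# multipathd can only mark the path failed once the test I/O errors out,
# so the real bound is polling_interval + time-for-I/O-to-fail.
print("checker notices a dead path within", polling_interval,
      "s + lower-layer I/O failure time")
```

In other words, if the lower layers take 2 minutes to fail the test I/O, no multipathd setting will get us under 10 seconds.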

We have also modified the SCSI timeout with the following rule in
/etc/udev/rules.d/50-udev.rules:

                ACTION=="add", SUBSYSTEM=="scsi", SYSFS{type}=="0|7|14", \
                                RUN+="/bin/sh -c 'echo 5 > /sys$
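For what it's worth, this is how we have been checking whether the rule actually took effect (a sketch; the sd* device names are obviously system-specific, and the loop simply prints nothing on a box with no SCSI disks):

```python
# Print the current SCSI command timeout for each SCSI disk, as set by
# the udev rule above (or the kernel default if the rule did not fire).
import glob

for path in glob.glob("/sys/block/sd*/device/timeout"):
    with open(path) as f:
        print(path, f.read().strip())
```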

We have also modified some parameters in /etc/iscsi/iscsid.conf:
                node.session.timeo.replacement_timeout = 5
                node.conn[0].timeo.login_timeout = 5
                node.conn[0].timeo.logout_timeout = 5
                node.conn[0].timeo.noop_out_interval = 3
                node.conn[0].timeo.noop_out_timeout = 1
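Our back-of-the-envelope expectation from these values, assuming the usual open-iscsi semantics (a NOP-Out ping every noop_out_interval seconds, the connection declared failed after noop_out_timeout, then replacement_timeout bounding how long iscsid waits for a reconnect before erroring I/O up to dm-multipath):

```python
# Back-of-the-envelope failover budget from the iscsid settings above.
# Assumes open-iscsi semantics: a NOP-Out ping is sent every
# noop_out_interval seconds; if no NOP-In reply arrives within
# noop_out_timeout the connection is declared failed, and
# replacement_timeout then bounds how long iscsid waits for a reconnect
# before failing queued I/O up to dm-multipath.
noop_out_interval = 3     # seconds between NOP-Out pings
noop_out_timeout = 1      # seconds to wait for a NOP-In reply
replacement_timeout = 5   # seconds before erroring out queued I/O

worst_case_detect = noop_out_interval + noop_out_timeout
expected_failover = worst_case_detect + replacement_timeout
print("expected failover: ~%d s (observed: ~120 s)" % expected_failover)
```

So by this arithmetic we would expect failover in roughly 9 seconds, which is why the observed 2 minutes is puzzling.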

Given the above configuration, failover takes about 2 minutes.
Changing these values, whether lower or higher, doesn't seem to
affect the failover time.

Any clues on how we can reduce this failover time would be appreciated.

Akshay Lal

You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.