Hi, Dejan.

2008/1/16, Dejan Muhamedagic <[EMAIL PROTECTED]>:
> Hi,
>
> On Wed, Jan 16, 2008 at 02:20:46PM +0900, DAIKI MATSUDA wrote:
> > Hi, All.
> >
> > I tested the iscsi RA in an environment with a RHEL 5.1 iSCSI
> > initiator running heartbeat and a CentOS 5.1 iSCSI target. But they
> > sometimes behave oddly and fail to mount the file system. There seems
> > to be a short window after the iscsi RA finishes in which the
> > Filesystem RA cannot yet see the SCSI device behind iSCSI. So I added
> > a wait option to the iscsi RA, applied after login to the iSCSI
> > target.
>
> This RA was tested with both ietd on SLES10 and a dedicated file
> server (more than 500 CTS runs). It did happen once that udev was
> a bit slow generating links, but I've never seen that a disk
> didn't show up immediately. Which version of ietd is on CentOS
> 5.1? Is there anything in the system logs? Did you apply all
> fixes/upgrades?
The RHEL 5.1 (and CentOS 5.1) iSCSI target is not ietd but tgtd. The
package name is scsi-target-utils-0.0-0.20070620snap.el5 and the URL is
http://stgt.berlios.de/ .

> BTW, the sequence of events on start is:
>
> 	iscsiadm -m discovery ...
> 	iscsiadm -m node ...
> 	iscsiadm -m session ... | grep ...
>
> i.e. before we return the session is checked.

Yes, sorry, my description of the odd behaviour was confusing. The file
system fails to mount because the mount is attempted before the kernel
has recognized the device. That is why I added the wait option.

> As for the wait option, I'd rather have this resolved in another
> way, if at all possible.
>
> Thanks,
>
> Dejan

> > test log
> > [EMAIL PROTECTED] ~]# iscsiadm -m discovery -t sendtargets -p 172.17.246.129
> > && iscsiadm -m node -T iqn.2008-01.xxx:G01_V00 -p 172.17.246.129 -l
> > && fdisk -l
> > 172.17.246.129:3260,1 iqn.2008-01.jxxx:G01_V00
> > Login session [iface: default, target: iqn.2008-01.xxx:G01_V00,
> > portal: 172.17.246.129,3260]
> >
> > Disk /dev/xvda: 4194 MB, 4194304000 bytes
> > 255 heads, 63 sectors/track, 509 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/xvda1   *           1          13      104391   83  Linux
> > /dev/xvda2              14         509     3984120   8e  Linux LVM
> > [EMAIL PROTECTED] ~]# fdisk -l
> >
> > Disk /dev/xvda: 4194 MB, 4194304000 bytes
> > 255 heads, 63 sectors/track, 509 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/xvda1   *           1          13      104391   83  Linux
> > /dev/xvda2              14         509     3984120   8e  Linux LVM
> >
> > Disk /dev/sda: 25.8 GB, 25803358208 bytes
> > 64 heads, 32 sectors/track, 24608 cylinders
> > Units = cylinders of 2048 * 512 = 1048576 bytes
> >
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/sda1               1       24608    25198576   83  Linux
> >
> > Regards
> > MATSUDA, Daiki
> >
> > <cib generated="false" admin_epoch="0" epoch="1" have_quorum="true"
> > ignore_dtd="false" num_peers="0" cib_feature_revision="2.0"
> > num_updates="40"
> > cib-last-written="Wed Jan 16 13:44:18 2008" ccm_transition="1">
> >   <configuration>
> >     <crm_config>
> >       <cluster_property_set id="cib-bootstrap-options">
> >         <attributes>
> >           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version"
> >                   value="2.1.3-node: b143f7c497816922783be3294320414fc5d99f76"/>
> >           <nvpair id="cib-bootstrap-options-symmetric-cluster"
> >                   name="symmetric-cluster" value="true"/>
> >           <nvpair id="cib-bootstrap-options-no-quorum-policy"
> >                   name="no-quorum-policy" value="ignore"/>
> >           <nvpair id="stonith-enabled" name="stonith-enabled" value="true"/>
> >         </attributes>
> >       </cluster_property_set>
> >     </crm_config>
> >     <nodes>
> >       <node id="730963bf-4f18-4664-a4c4-19579517e73d" uname="domu4" type="normal"/>
> >       <node id="456fe75d-2190-4612-8f00-d4885b87eec8" uname="domu1" type="normal"/>
> >     </nodes>
> >     <resources>
> >       <group id="gr1">
> >         <primitive id="iscsi" class="ocf" type="iscsi" provider="heartbeat">
> >           <operations>
> >             <op id="iscsi:start" name="start" timeout="60s"/>
> >             <op id="iscsi:monitor" name="monitor" start_delay="30s"
> >                 interval="15s" on_fail="fence"/>
> >           </operations>
> >           <instance_attributes id="iscsi:attrs">
> >             <attributes>
> >               <nvpair id="iscsi:portal" name="portal" value="172.17.246.129"/>
> >               <nvpair id="iscsi:target" name="target" value="iqn.2008-01.xxx:G01_V00"/>
> >               <nvpair id="iscsi:start_wait" name="start_wait" value="5"/>
> >             </attributes>
> >           </instance_attributes>
> >         </primitive>
> >         <primitive id="Filesystem" class="ocf" type="Filesystem" provider="heartbeat">
> >           <operations>
> >             <op id="Filesystem:monitor" name="monitor" start_delay="30s"
> >                 interval="20s" on_fail="fence"/>
> >           </operations>
> >           <instance_attributes id="Filesystem:attrs">
> >             <attributes>
> >               <nvpair id="Filesystem:device" name="device" value="/dev/sda1"/>
> >               <nvpair id="Filesystem:directory" name="directory" value="/share"/>
> >               <nvpair id="Filesystem:fstype" name="fstype" value="ext3"/>
> >             </attributes>
> >           </instance_attributes>
> >         </primitive>
> >         <primitive id="ipaddr" class="ocf" type="IPaddr" provider="heartbeat">
> >           <operations>
> >             <op id="ipaddr:monitor" name="monitor" start_delay="30s"
> >                 interval="25s" on_fail="fence"/>
> >           </operations>
> >           <instance_attributes id="ia_ipaddr">
> >             <attributes>
> >               <nvpair id="ia_ipaddr_ip" name="ip" value="172.17.246.10"/>
> >               <nvpair id="ia_ipaddr_nic" name="nic" value="eth0"/>
> >               <nvpair id="ia_ipaddr_netmask" name="netmask" value="16"/>
> >             </attributes>
> >           </instance_attributes>
> >         </primitive>
> >         <primitive id="apache" class="ocf" type="apache" provider="heartbeat">
> >           <operations>
> >             <op id="apache:monitor" name="monitor" start_delay="30s"
> >                 interval="10s" on_fail="fence"/>
> >           </operations>
> >           <instance_attributes id="ia_apache">
> >             <attributes>
> >               <nvpair id="ia_apache_configfile" name="configfile"
> >                       value="/share/httpd.conf"/>
> >             </attributes>
> >           </instance_attributes>
> >         </primitive>
> >       </group>
> >       <primitive id="kill_domu1" class="stonith" type="external/ssh" provider="heartbeat">
> >         <instance_attributes id="kill_domu1:attrs">
> >           <attributes>
> >             <nvpair id="kill_domu1:hostlist" name="hostlist" value="domu1"/>
> >           </attributes>
> >         </instance_attributes>
> >       </primitive>
> >     </resources>
> >     <constraints>
> >       <rsc_location id="rloc_gr1" rsc="gr1">
> >         <rule id="rloc_domu1:rule1" score="INFINITY">
> >           <expression id="rlocl_domu1:rule1:expr1" attribute="#uname"
> >                       operation="eq" value="domu1"/>
> >         </rule>
> >         <rule id="rloc_domu1:rule2" score="0">
> >           <expression id="rlocl_domu1:rule2:expr1" attribute="#uname"
> >                       operation="ne" value="domu1"/>
> >         </rule>
> >       </rsc_location>
> >       <rsc_location id="rloc_domu4" rsc="kill_domu1">
> >         <rule id="rloc_domu4:rule1" score="INFINITY">
> >           <expression id="rlocl_domu4:rule1:expr1" attribute="#uname"
> >                       operation="eq" value="domu4"/>
> >         </rule>
> >         <rule id="rloc_domu4:rule2" score="-INFINITY">
> >           <expression
> > id="rlocl_domu4:rule2:expr1" attribute="#uname"
> >                       operation="ne" value="domu4"/>
> >         </rule>
> >       </rsc_location>
> >     </constraints>
> >   </configuration>
> > </cib>

> > --- iscsi.orig	2008-01-11 09:20:33.000000000 +0900
> > +++ iscsi	2008-01-11 09:20:03.000000000 +0900
> > @@ -32,6 +32,7 @@
> >  # OCF_RESKEY_target: the iSCSI target (required)
> >  # OCF_RESKEY_iscsiadm: iscsiadm program path (optional)
> >  # OCF_RESKEY_discovery_type: discovery type (optional; default: sendtargets)
> > +# OCF_RESKEY_start_wait: time in seconds to wait after the iSCSI service
> > +#	has started (optional; must be a decimal integer)
> >  #
> >  # Initialization:
> >
> > @@ -256,6 +257,10 @@
> >  	*) ;;
> >  	esac
> >  	if iscsi_status; then
> > +		if [ -n "$OCF_RESKEY_start_wait" ]; then
> > +			ocf_log info "iscsi RA waits for $OCF_RESKEY_start_wait sec."
> > +			sleep "$OCF_RESKEY_start_wait"
> > +		fi
> >  		return $OCF_SUCCESS
> >  	else
> >  		return $OCF_ERR_GENERIC

_______________________________________________________
Linux-HA-Dev: [email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/
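Since a fixed sleep only papers over the race between the iscsi RA and the
Filesystem RA, the "another way" Dejan asks for could be to poll until the
kernel device node actually appears, bounded by a timeout. A minimal POSIX
sh sketch; `wait_for_path` is a hypothetical helper for illustration, not
part of the shipped iscsi RA:

```shell
#!/bin/sh
# Hypothetical alternative to a fixed start_wait sleep: poll once per
# second until the given path exists, up to a timeout. Returns 0 as soon
# as the path appears, non-zero if the timeout expires first.
wait_for_path() {
    wfp_path=$1
    wfp_timeout=${2:-10}    # seconds; default 10
    wfp_elapsed=0
    while [ "$wfp_elapsed" -lt "$wfp_timeout" ]; do
        [ -e "$wfp_path" ] && return 0
        sleep 1
        wfp_elapsed=$((wfp_elapsed + 1))
    done
    [ -e "$wfp_path" ]     # final check, also covers a timeout of 0
}

# Usage in the RA's start path might then look like (illustrative only):
#   wait_for_path /dev/sda1 30 || return $OCF_ERR_GENERIC
```

This makes the common case fast (the device usually appears within a
second or two) while still bounding the worst case, instead of always
paying the full `start_wait` delay on every start.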
