blacklisting some paths in multipath environment
Hello,

I'm trying to use open-iscsi with a DS3300 array. The DS has two controllers, each with two Ethernet ports. Unfortunately I use some SATA disks that can't be connected to both controllers (the SATA connector allows only one path), so those disks are accessible through only one controller. My Linux system still sees all 4 paths (two controllers x 2 Ethernet ports each; the target therefore has 4 IPs). How can I blacklist some paths and still use automatic node.startup?

Thanks,
--
Arkadiusz Miśkiewicz
PLD/Linux Team
arekm / maven.pl
http://ftp.pld-linux.org/

-- You received this message because you are subscribed to the Google Groups open-iscsi group. To post to this group, send email to open-is...@googlegroups.com. To unsubscribe from this group, send email to open-iscsi+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/open-iscsi?hl=en.
Re: After restarting iscsi, remote drives are not accessible on IPv6-disabled CentOS 5.4 64-bit Linux server
One more thing: I have tested this on an IPv6-disabled CentOS 5.2 server, where I am not getting

cnic: Unknown symbol __ipv6_addr_type
cnic: Unknown symbol ip6_route_output

but I am still getting

/dev/mysql/mysql: read failed after 0 of 4096 at 0: Input/output error

[r...@ussd1 /]# pvdisplay
  /dev/mysql/mysql: read failed after 0 of 4096 at 0: Input/output error
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               mysql
  PV Size               300.03 GB / not usable 3.25 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              76808
  Free PE               51208
  Allocated PE          25600
  PV UUID               JUPKC0-0p4S-odWF-kF2s-E1Kc-9D5x-KjTWZw

On Nov 19, 1:03 pm, Bangalore <aneeshsu...@gmail.com> wrote:

Hi, I have installed one HP blade server with Linux version 2.6.18-164.el5 (mockbu...@builder10.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Thu Sep 3 03:28:30 EDT 2009, CentOS release 5.4 (Final). I have disabled IPv6 by adding these lines to /etc/modprobe.conf:

alias net-pf-10 off
alias ipv6 off
options ipv6 disable=1

After that I installed iscsi and pointed it at one NetApp, then created an LVM volume in the system on the NetApp drive. Whenever I restart the server or restart iSCSI, I am no longer able to access the LVM volume I created. Please help me.

Additional information (dmesg log):

EXT3 FS on dm-2, internal journal
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
iscsi: registered transport (cxgb3i)
cnic: Unknown symbol __ipv6_addr_type
cnic: Unknown symbol ip6_route_output
iscsi: registered transport (iser)
scsi1 : iSCSI Initiator over TCP/IP
  Vendor: NETAPP    Model: LUN    Rev: 7310
  Type: Direct-Access    ANSI SCSI revision: 04
SCSI device sdb: 629225472 512-byte hdwr sectors (322163 MB)
sdb: Write Protect is off
sdb: Mode Sense: bd 00 00 08
SCSI device sdb: drive cache: write through
SCSI device sdb: 629225472 512-byte hdwr sectors (322163 MB)
sdb: Write Protect is off
sdb: Mode Sense: bd 00 00 08
SCSI device sdb: drive cache: write through
 sdb: sdb1
sd 1:0:0:0: Attached scsi disk sdb
sd 1:0:0:0: Attached scsi generic sg0 type 0

[r...@wapdb-node2 aneesh]# service iscsi restart
Logging out of session [sid: 2, target: iqn.1986-03.com.ibm:sn.135057127, portal: 10.0.104.202,3260]
Logout of [sid: 2, target: iqn.1986-03.com.ibm:sn.135057127, portal: 10.0.104.202,3260]: successful
Stopping iSCSI daemon: iscsid dead but pid file exists          [  OK  ]
Starting iSCSI daemon:
WARNING: Error inserting libiscsi2 (/lib/modules/2.6.18-164.el5/kernel/drivers/scsi/libiscsi2.ko): Unknown symbol in module, or unknown parameter (see dmesg)
FATAL: Error inserting bnx2i (/lib/modules/2.6.18-164.el5/kernel/drivers/scsi/bnx2i/bnx2i.ko): Unknown symbol in module, or unknown parameter (see dmesg)
                                                                [  OK  ]
                                                                [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.1986-03.com.ibm:sn.135057127, portal: 10.0.104.202,3260]
Login to [iface: default, target: iqn.1986-03.com.ibm:sn.135057127, portal: 10.0.104.202,3260]: successful
                                                                [  OK  ]
[r...@wapdb-node2 aneesh]#

[r...@wapdb-node2 /]# lvdisplay
  /dev/MySql/MySql: read failed after 0 of 4096 at 0: Input/output error
  --- Logical volume ---
  LV Name                /dev/MySql/MySql
  VG Name                MySql
  LV UUID                dZ4DCD-qcxM-KnCt-xsrK-KyeM-pndo-0a5T9y
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                300.00 GB
  Current LE             76800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
Re: After restarting iscsi, remote drives are not accessible on IPv6-disabled CentOS 5.4 64-bit Linux server
Dear Mike,

These are all the LVM packages I have installed on the system:

[r...@wapdb-node1 /]# rpm -qa | grep lvm
lvm2-cluster-2.02.56-7.el5_5.4
system-config-lvm-1.1.5-4.el5
lvm2-2.02.56-8.el5_5.6
[r...@wapdb-node1 /]#

I tried unmounting the device before restarting the iSCSI service, but the result is the same as before: after restarting iscsi the system is not able to read the mapped NetApp drives. Please see the entire process I followed:

[r...@wapdb-node1 ~]# fdisk -l

Disk /dev/cciss/c0d0: 299.9 GB, 299966445568 bytes
255 heads, 63 sectors/track, 36468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

            Device Boot   Start      End      Blocks   Id  System
/dev/cciss/c0d0p1   *          1       13      104391   83  Linux
/dev/cciss/c0d0p2             14    36468  292824787+   8e  Linux LVM

Disk /dev/sda: 322.1 GB, 322163441664 bytes
255 heads, 63 sectors/track, 39167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start      End      Blocks   Id  System
/dev/sda1              1    39167   314608896   83  Linux

[r...@wapdb-node1 ~]# fdisk /dev/sda1

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 39166.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): Value out of range.
Partition number (1-4): Value out of range.
Partition number (1-4): 1
First cylinder (1-39166, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-39166, default 39166):
Using default value 39166

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[r...@wapdb-node1 ~]# partprobe
[r...@wapdb-node1 ~]#

Disk /dev/cciss/c0d0: 299.9 GB, 299966445568 bytes
255 heads, 63 sectors/track, 36468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

            Device Boot   Start      End      Blocks   Id  System
/dev/cciss/c0d0p1   *          1       13      104391   83  Linux
/dev/cciss/c0d0p2             14    36468  292824787+   8e  Linux LVM

Disk /dev/sda: 322.1 GB, 322163441664 bytes
255 heads, 63 sectors/track, 39167 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start      End      Blocks   Id  System
/dev/sda1              1    39167   314608896   83  Linux

[r...@wapdb-node1 ~]# pvcreate /dev/sda1
  Physical volume "/dev/sda1" successfully created
[r...@wapdb-node1 ~]# vgcreate mysql /dev/sda1
  Volume group "mysql" successfully created
[r...@wapdb-node1 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               279.25 GB
  PE Size               32.00 MB
  Total PE              8936
  Alloc PE / Size       8936 / 279.25 GB
  Free  PE / Size       0 / 0
  VG UUID               9cMTqi-3Brv-0QO2-zCTW-n3MQ-6vhM-pDvCMQ

  --- Volume group ---
  VG Name               mysql
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               300.03 GB
  PE Size               4.00 MB
  Total PE              76808
  Alloc PE / Size       0 / 0
  Free  PE / Size       76808 / 300.03 GB
  VG UUID               BE4qpy-u1to-yKKj-ECR2-93i0-Hn2r-londS8

[r...@wapdb-node1 ~]#
[r...@wapdb-node1 ~]# lvcreate -n MySqlDB -L 250GB mysql
  Logical volume "MySqlDB" created
[r...@wapdb-node1 ~]#
[r...@wapdb-node1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup01/LogVol00
  VG Name                VolGroup01
  LV UUID                7s6lRU-5lNO-GwmH-UFc8-5kQk-ZIM3-NCtTa1
  LV Write Access        read/write
  LV Status              available
  # open
Re: cannot logout
Hi Mike,

I am facing a similar issue.

[r...@maverick iscsi]# service iscsid stop
Not stopping iscsid: iscsi sessions still active  [WARNING]

Ok, so we have an active iscsi session.

[r...@maverick iscsi]# iscsiadm -m session
tcp: [10] 10.63.8.193:3260,1027 iqn.1992-08.com.netapp:sn.cdfd3c5ae76111df9fd8123478563412:vs.2
[r...@maverick iscsi]# iscsiadm -m session --sid=10 --op=delete
iscsiadm: This command will remove the record [iface: default, target: iqn.1992-08.com.netapp:sn.cdfd3c5ae76111df9fd8123478563412:vs.2, portal: 10.63.8.193,3260], but a session is using it. Logout session then rerun command to remove record.
iscsiadm: Could not execute operation on all records. Err 22.
[r...@maverick iscsi]# iscsiadm -m node --logoutall=all
Logging out of session [sid: 10, target: iqn.1992-08.com.netapp:sn.cdfd3c5ae76111df9fd8123478563412:vs.2, portal: 10.63.8.193,3260]
iscsiadm: Could not logout of [sid: 10, target: iqn.1992-08.com.netapp:sn.cdfd3c5ae76111df9fd8123478563412:vs.2, portal: 10.63.8.193,3260]: iscsiadm: initiator reported error (9 - internal error)
[r...@maverick iscsi]#
[r...@maverick iscsi]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
iscsiadm version 2.0-870
Target: iqn.1992-08.com.netapp:sn.cdfd3c5ae76111df9fd8123478563412:vs.2
    Current Portal: 10.63.8.193:3260,1027
    Persistent Portal: 10.63.8.193:3260,1027
        ** Interface: **
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.fedora:rhooda
        Iface IPaddress: 10.62.8.175
        Iface HWaddress: default
        Iface Netdev: default
        SID: 10
        iSCSI Connection State: TRANSPORT WAIT
        iSCSI Session State: FREE
        Internal iscsid Session State: REPOEN
        Negotiated iSCSI params:
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 131072
        MaxXmitDataSegmentLength: 65536
        FirstBurstLength: 65536
        MaxBurstLength: 65536
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        Attached SCSI devices:
        Host Number: 13  State: running
        scsi13 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdb  State: running

But ...

[r...@maverick iscsi]# mount | grep sdb
sdb is not mounted anywhere.

[r...@maverick iscsi]# fdisk -l | grep sdb
fdisk can't see sdb.

[r...@maverick iscsi]# cd /dev/sd
sda   sda1  sda2  sda3  sdb

Interesting ... I don't know what's going on. Mike, can you help out here?

Thanks,
Rohit

On Nov 15, 1:06 pm, p...@fhri.org wrote:
Thanks for your reply Mike. Since submitting that question, I've been able to bounce the server, which cleared up the troublesome session, and all is well now.
-- Paul

-----Original Message-----
From: Mike Christie [mailto:micha...@cs.wisc.edu]
Sent: Saturday, November 13, 2010 2:30 PM
To: open-iscsi@googlegroups.com
Cc: p...@fhri.org
Subject: Re: cannot logout

On 11/10/2010 05:02 PM, p...@fhri.org wrote:
I had a session logged in while structural changes were made to the SAN. The session is no longer valid, as target names and LUNs have changed. However, I cannot log the session out or even stop the iscsi service on the server. I can't bounce the server for several more days. I can stop the iscsid service, which stops the barrage of login errors I get from the SAN, but I cannot end the session or remove the node.
[db1: ~]# iscsiadm -m node -p 172.16.10.11 -o delete
iscsiadm: This command will remove the record [iface: default, target: target0, portal: 172.16.10.11,3260], but a session is using it. Logout session then rerun command to remove record.
iscsiadm: Could not execute operation on all records. Err 22.

Yeah, that should fail. It only deletes the record for the target portal. It would not log it out.

[db1: ~]# iscsiadm -m node --logoutall=all
Logging out of session [sid: 1, target: target0, portal: 172.16.10.11,3260]
iscsiadm: Could not logout of [sid: 1, target: target0, portal: 172.16.10.11,3260]: iscsiadm: initiator reported error (9 - internal error)

This should work. Can you run iscsiadm -m session -P 3 and send the output? "internal error" normally means the session is just getting started, so we cannot log it out yet.

[db1: ~]# service iscsi stop
Logging out of session [sid: 1, target: target0, portal: 172.16.10.11,3260]
iscsiadm: Could not logout of [sid: 1, target: target0, portal: 172.16.10.11,3260]: iscsiadm: initiator reported error (9 - internal error)
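When only one session is stuck, a targeted logout by session id with `iscsiadm -m session -r <sid> --logout` is an alternative to `--logoutall=all`. A small sketch of pulling the sid out of the session listing; the sample line is the one shown earlier in this thread, and the sed pattern is only an illustration:

```shell
# One line of "iscsiadm -m session" output (sample taken from this thread).
line='tcp: [10] 10.63.8.193:3260,1027 iqn.1992-08.com.netapp:sn.cdfd3c5ae76111df9fd8123478563412:vs.2'

# The session id is the number in square brackets.
sid=$(printf '%s\n' "$line" | sed 's/.*\[\([0-9]*\)\].*/\1/')
echo "$sid"    # prints: 10

# With the sid in hand, the targeted logout (run as root) would be:
# iscsiadm -m session -r "$sid" --logout
```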
Re: authenticated discovery howto + documentation bug
On 11/25/2010 10:10 AM, SZÉKELYI Szabolcs wrote:
> Hi,
>
> I'm trying to figure out how to do discovery with authentication, without
> luck. I didn't manage to set up the credentials; I tried various ways.
> Iscsiadm says:
>
>   iscsiadm -m discovery [ -hV ] [ -d debug_level ] [ -P printlevel ]
>       [ -t type -p ip:port -I ifaceN ... [ -l ] ] |
>       [ -p ip:port ] [ -o operation ] [ -n name ] [ -v value ]
>
> Although:
>
>   $ sudo iscsiadm -m discovery -p myiscsistorage:3260 -o new -n \
>       discovery.sendtargets.auth.username -v myusername
>   iscsiadm: discovery mode: option '-n' is not allowed/supported
>
> It looks to me that the above syntax allows these options...

Yeah, the docs were wrong. You can pass the new operator to the discovery command, like

  iscsiadm -m discovery -t st -p ip -o new

to tell it to only create records for new portals, but you cannot update the discovery db.

It is sort of fixed in this release:
http://kernel.org/pub/linux/kernel/people/mnc/open-iscsi/releases/open-iscsi-2.0-872.tar.gz

You have to use discoverydb mode:

  # create a new record
  iscsiadm -m discoverydb -p myiscsistorage:3260 -t st -o new
  # update username and password
  iscsiadm -m discoverydb -p myiscsistorage:3260 -t st -o update \
      -n discovery.sendtargets.auth.username -v myusername

In the old tools you can also just edit iscsid.conf with the discovery.sendtargets.auth.username and password settings, then run the iscsiadm discovery command, and it will use the iscsid.conf values.

> So could you tell me how to do this?
>
> Another thing: when I want to delete a discovery record from the database,
> iscsiadm tells me
>
>   $ sudo iscsiadm -m discovery -o delete
>   iscsiadm: --record required for delete operation
>
> However, I couldn't find --record anywhere in the man page. What is it
> supposed to be?

When you run iscsiadm -m discovery it shows the discovery records. The ip:port is the record id that you pass into iscsiadm as the -p argument:
  iscsiadm -m discovery
  20.15.0.87:3260 via sendtargets
  20.15.0.7:3260 via sendtargets
  20.15.0.4:3260 via sendtargets

  iscsiadm -m discovery -p 20.15.0.7:3260 -o delete
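Putting Mike's pieces together, the full authenticated-discovery sequence with discoverydb mode (open-iscsi 2.0-872) would look roughly like the sketch below; the portal name and CHAP credentials are placeholders:

```shell
# Create the discovery record for the portal.
iscsiadm -m discoverydb -p myiscsistorage:3260 -t st -o new

# Store the CHAP method and credentials in the record.
iscsiadm -m discoverydb -p myiscsistorage:3260 -t st -o update \
    -n discovery.sendtargets.auth.authmethod -v CHAP
iscsiadm -m discoverydb -p myiscsistorage:3260 -t st -o update \
    -n discovery.sendtargets.auth.username -v myusername
iscsiadm -m discoverydb -p myiscsistorage:3260 -t st -o update \
    -n discovery.sendtargets.auth.password -v mypassword

# Run the discovery using the stored settings.
iscsiadm -m discoverydb -p myiscsistorage:3260 -t st --discover
```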
Re: blacklisting some paths in multipath environment
On 11/27/2010 06:23 PM, Arkadiusz Miskiewicz wrote:
> Hello,
>
> I'm trying to use open-iscsi with a DS3300 array. The DS has two
> controllers, each with two Ethernet ports. Unfortunately I use some SATA
> disks that can't be connected to both controllers (only one path on the
> SATA connector), so those disks are accessible through only one
> controller. My Linux system still sees all 4 paths (two controllers x 2
> Ethernet ports each; the target therefore has 4 IPs). How can I blacklist
> some paths and still use automatic node.startup?

You can set specific paths to not get logged into automatically by doing:

  iscsiadm -m node -T target -p ip -o update -n node.startup -v manual
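A concrete sketch of that, with a hypothetical target IQN and portal addresses (substitute the values shown by `iscsiadm -m node`); only the portals on the unreachable controller are switched to manual:

```shell
# List all discovered node records: one "portal target-iqn" per line.
iscsiadm -m node

# Hypothetical example values; replace with your own.
TARGET=iqn.2000-01.com.example:ds3300
for portal in 10.0.1.3:3260 10.0.1.4:3260; do
    # Portals behind the controller the SATA disks cannot reach are set
    # to manual, so automatic startup skips them.
    iscsiadm -m node -T "$TARGET" -p "$portal" \
        -o update -n node.startup -v manual
done
```

The records for the remaining portals keep node.startup = automatic, so those paths still log in at boot.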
Re: After restarting iscsi, remote drives are not accessible on IPv6-disabled CentOS 5.4 64-bit Linux server
On 11/28/2010 01:44 PM, Bangalore wrote:
> i tried by unmounting the device before restarting Iscsi service , But

I said you also need to have LVM release the devices. What you are doing is unsupported, and the results you are getting are expected.

Before running iscsi stop you need to have LVM stop using the iscsi devices, because the stop will remove the iscsi devices and LVM will be left referencing bad devices. After you run iscsi start you need to run something like lvm vgscan so the LVM devices are rebuilt on top of the new scsi devices. I think you want to email the lvm group to see what commands they recommend running.

> [r...@wapdb-node1 /]# service iscsi stop
> Logging out of session [sid: 1, target: iqn.1986-03.com.ibm:sn.135057127, portal: 10.0.104.202,3260]
> Logout of [sid: 1, target: iqn.1986-03.com.ibm:sn.135057127, portal: 10.0.104.202,3260] successful.
> Stopping iSCSI daemon:
> [r...@wapdb-node1 /]#
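Put together, the restart sequence described above might look like this sketch. The mount point is hypothetical, the VG name "mysql" is the one from this thread, and the exact LVM commands are best confirmed with the LVM list as suggested:

```shell
# Release the devices before stopping iscsi ...
umount /mnt/mysql          # hypothetical mount point
vgchange -an mysql         # deactivate the VG so LVM stops using /dev/sd*
service iscsi stop

# ... and rebuild them after starting it again.
service iscsi start
vgscan                     # rescan so LVM finds the re-created devices
vgchange -ay mysql         # reactivate the VG
mount /mnt/mysql
```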
Re: After restarting iscsi, remote drives are not accessible on IPv6-disabled CentOS 5.4 64-bit Linux server
On 11/29/2010 06:28 PM, Mike Christie wrote:
> On 11/28/2010 01:44 PM, Bangalore wrote:
>> i tried by unmounting the device before restarting Iscsi service , But
>
> I said you also need to have lvm release the devices. What you are doing
> is unsupported and the results you are getting are expected.
>
> Before running iscsi stop you need to have lvm stop using the iscsi
> devices, because when the stop is done it will remove the iscsi devices
> and lvm will be referencing bad devices. After you run iscsi start you
> need to run something like lvm vgscan so the lvm devices are rebuilt
> using the new scsi devices. I think you want to email the lvm group to
> see what commands they recommend running.

For RHEL 5 based systems, here is some more info:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/removing_devices.html#clean-device-removal

I do not think you need to do the lvm commands in there, because I think those are for permanent removal, and your situation has the iscsi devices coming back right away. Maybe you just need to run a vgremove (I am not sure, so like I said before it is best to ask the lvm people).

Also, why are you even running iscsi stop?
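For reference, the clean-removal steps that guide describes reduce to roughly the sketch below for a single iSCSI disk. The device node, mount point, and VG name here are hypothetical examples, not values from the guide:

```shell
# 1. Stop all users of the device (filesystem, then LVM).
umount /mnt/netapp                     # hypothetical mount point
vgchange -an mysql                     # hypothetical VG on the iSCSI LUN

# 2. Flush any outstanding I/O to the device.
blockdev --flushbufs /dev/sdb          # hypothetical device node

# 3. Remove the SCSI device from the system.
echo 1 > /sys/block/sdb/device/delete
```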