Re: iSCSI lvm and reboot
On 03/29/2010 03:02 PM, Kun Huang wrote:
> Hi,
>
> On the same question, is it possible to write a looping script to
> periodically rescan/reconnect invoking iscsiadm?

In the script you can do

    iscsiadm -m session --rescan

which will rescan all sessions.

    iscsiadm -m session -r SID --rescan

or

    iscsiadm -m node -T target -p ip -I iface --rescan

will rescan specific ones. And to log in, run

    iscsiadm -m node -T target -p ip -I iface -l

> Thanks!
> - Kun
>
> On Mon, Mar 29, 2010 at 11:03 AM, Mike Christie <micha...@cs.wisc.edu> wrote:
>> On 03/28/2010 03:28 AM, Raimund Sacherer wrote:
>>> I am new to iSCSI and filer systems, but I am evaluating if it makes
>>> sense for our clients. So I set up my test lab and created a KVM
>>> server with 3 instances:
>>>
>>> 2 x Ubuntu (one for Zimbra LDAP, one for Zimbra mailserver)
>>> 1 x Ubuntu (with some OpenVZ virtual machines in it)
>>>
>>> These 3 KVM instances have raw LVM disks which are on a volume in the
>>> iSCSI filer. I tried yesterday to reboot the filer, without doing
>>> anything to the KVM machines, to simulate outage/human error. The
>>> reboot is fine and the iSCSI targets get exposed, but the KVM servers
>>> have their filesystems mounted read-only.
>>
>> In this type of setup you will want high noop values (or maybe just
>> turn them off) and a high replacement_timeout value. So in the
>> iscsid.conf for the initiators do something like:
>>
>>     # When the iscsi layer detects it cannot reach the target, it will
>>     # stop IO, and if it cannot reconnect to the target within the
>>     # timeout below it will fail IO. This will cause FSs to be
>>     # remounted read-only or for you to get IO errors. So set this to
>>     # some value that is long enough to handle your failure.
>>     node.session.timeo.replacement_timeout = 600
>>
>>     # You can just turn these off:
>>     node.conn[0].timeo.logout_timeout = 0
>>     node.conn[0].timeo.noop_out_interval = 0
>>
>> Someone else said to use dm-multipath, and for that you could set
>> queue_if_no_path to 1 or set no_path_retry to a high value. This will
>> basically just catch errors from the iscsi/scsi layer and add extra
>> requeueing capabilities. queue_if_no_path will internally queue IO
>> until the path comes back, or until the system runs out of memory or
>> dies in some other way.

--
You received this message because you are subscribed to the Google Groups open-iscsi group. To post to this group, send email to open-is...@googlegroups.com. To unsubscribe from this group, send email to open-iscsi+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/open-iscsi?hl=en.
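The looping script Kun asks about could be sketched along these lines (a minimal sketch only: TARGET, PORTAL, IFACE and the interval are placeholders for your environment, and the loop is guarded behind RUN_LOOP so the functions can be sourced without starting it):

```shell
#!/bin/sh
# Sketch of a periodic rescan/reconnect loop around iscsiadm.
# TARGET, PORTAL and IFACE are placeholders you would set yourself.
INTERVAL=${INTERVAL:-60}

rescan_all() {
    # Rescan every established session (picks up new or resized LUNs).
    iscsiadm -m session --rescan
}

relogin_node() {
    # Log back in to one specific node record if its session dropped.
    iscsiadm -m node -T "$1" -p "$2" -I "$3" -l
}

# Guarded so that sourcing this file does not start an endless loop.
if [ "${RUN_LOOP:-0}" = 1 ]; then
    while :; do
        rescan_all || relogin_node "$TARGET" "$PORTAL" "$IFACE"
        sleep "$INTERVAL"
    done
fi
```

Note that with the noop-outs disabled as Mike suggests, a loop like this is mostly useful for rescanning; detecting a dead session still falls to the iscsi layer's own timeouts.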
RE: iSCSI lvm and reboot
Take a look at configuring multipathing between the KVM server and the fileserver. You can take advantage of failover, or just simply block IO until the fileserver returns to service, and then everything should resume normally. It works for me.

-geoff

-
Geoff Galitz
Blankenheim NRW, Germany
http://www.galitz.org/
http://german-way.com/blog/

-----Original Message-----
From: open-iscsi@googlegroups.com [mailto:open-is...@googlegroups.com] On Behalf Of Raimund Sacherer
Sent: Sunday, March 28, 2010 10:29
To: open-iscsi
Subject: iSCSI lvm and reboot

I am new to iSCSI and filer systems, but I am evaluating if it makes sense for our clients. So I set up my test lab and created a KVM server with 3 instances:

2 x Ubuntu (one for Zimbra LDAP, one for Zimbra mailserver)
1 x Ubuntu (with some OpenVZ virtual machines in it)

These 3 KVM instances have raw LVM disks which are on a volume in the iSCSI filer. I tried yesterday to reboot the filer, without doing anything to the KVM machines, to simulate outage/human error. The reboot is fine and the iSCSI targets get exposed, but the KVM servers have their filesystems mounted read-only.

Is there any way to get the LVM volumes on the KVM server machine, or inside the KVM virtualized servers, back to R/W mode? I could not figure it out; vgchange -aly storage on the KVM server did not change anything. Is the only way to handle this situation a cold reset of the KVM guests? As a reboot command was taking very long (I guess because of the RO mounts), I had to reset them.

A push in the right direction, e.g. to docs, or any help is very appreciated.

Thank you, best

-
RunSolutions
Open Source IT Consulting
Email: r...@runsolutions.com
Parc Bit - Centro Empresarial Son Espanyol
Edificio Estel - Local 3D
07121 - Palma de Mallorca
Baleares
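For the "back to R/W" question, the usual sequence is: re-login the iSCSI session, reactivate the volume group, then try a remount. A sketch of that (hedged: whether the remount succeeds depends on the filesystem state; ext3/4 will often refuse after a journal abort and need an unmount/fsck or a guest reboot instead; the target, portal and mountpoint arguments are placeholders, and "storage" is the VG name from the post above):

```shell
#!/bin/sh
# Hypothetical recovery helper -- a sketch, not guaranteed to work on
# every filesystem state. "storage" is the VG name from the post above;
# the target, portal and mountpoint arguments are placeholders.
recover_rw() {
    target=$1 portal=$2 mountpoint=$3
    iscsiadm -m node -T "$target" -p "$portal" -l  # re-login the session
    vgchange -ay storage                           # reactivate the VG's LVs
    mount -o remount,rw "$mountpoint"              # try to flip back to rw
}
```

Example use: `recover_rw iqn.example:store 10.0.0.5:3260 /srv/data` (hypothetical names). If the kernel logged a journal abort, the remount will typically fail and the cold-reset path Raimund describes is what's left.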
Re: iSCSI lvm and reboot
On 03/29/2010 12:51 PM, Raimund Sacherer wrote:
> Hi Geoff,
>
> I was under the impression that for multipath you need either 2
> distinct connections, so you have failover if one connection fails, or
> 2 block-synchronized filers, which helps you out if one filer dies.

You can do dm-multipath with only one path. It would basically give you an extra layer to retry and queue IO at, in case something happened. The scsi/iscsi layer only gives you 5 retries.
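A single-path dm-multipath setup like Mike describes might look like this in /etc/multipath.conf (an illustrative fragment only, not from the thread; with multipath-tools, `no_path_retry queue` is the usual way to get the queue_if_no_path behaviour, and option spellings can vary between versions):

```text
# /etc/multipath.conf -- illustrative fragment, not a complete config.
defaults {
    # Queue IO indefinitely while no path is available, instead of
    # failing it after a few retries (the queue_if_no_path behaviour).
    # A number here instead of "queue" would bound the retries.
    no_path_retry    queue
}
```

The trade-off Mike mentions applies: with indefinite queuing, IO stalls (and memory fills) for as long as the filer stays down.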
Re: iSCSI lvm and reboot
On 03/28/2010 03:28 AM, Raimund Sacherer wrote:
> I am new to iSCSI and filer systems, but I am evaluating if it makes
> sense for our clients. So I set up my test lab and created a KVM
> server with 3 instances:
>
> 2 x Ubuntu (one for Zimbra LDAP, one for Zimbra mailserver)
> 1 x Ubuntu (with some OpenVZ virtual machines in it)
>
> These 3 KVM instances have raw LVM disks which are on a volume in the
> iSCSI filer. I tried yesterday to reboot the filer, without doing
> anything to the KVM machines, to simulate outage/human error. The
> reboot is fine and the iSCSI targets get exposed, but the KVM servers
> have their filesystems mounted read-only.

In this type of setup you will want high noop values (or maybe just turn them off) and a high replacement_timeout value. So in the iscsid.conf for the initiators do something like:

    # When the iscsi layer detects it cannot reach the target, it will
    # stop IO, and if it cannot reconnect to the target within the
    # timeout below it will fail IO. This will cause FSs to be remounted
    # read-only or for you to get IO errors. So set this to some value
    # that is long enough to handle your failure.
    node.session.timeo.replacement_timeout = 600

    # You can just turn these off:
    node.conn[0].timeo.logout_timeout = 0
    node.conn[0].timeo.noop_out_interval = 0

Someone else said to use dm-multipath, and for that you could set queue_if_no_path to 1 or set no_path_retry to a high value. This will basically just catch errors from the iscsi/scsi layer and add extra requeueing capabilities. queue_if_no_path will internally queue IO until the path comes back, or until the system runs out of memory or dies in some other way.
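Editing iscsid.conf only affects node records created afterwards; for records that already exist, the value can be pushed with `iscsiadm --op update`. A sketch (the target and portal in the usage example are placeholders, and the new value only takes effect on the next login):

```shell
#!/bin/sh
# Sketch: update replacement_timeout on an existing node record.
# The target and portal arguments are placeholders for your setup.
set_replacement_timeout() {
    iscsiadm -m node -T "$1" -p "$2" \
        --op update -n node.session.timeo.replacement_timeout -v "$3"
}
```

For example, `set_replacement_timeout iqn.example:store 10.0.0.5:3260 600` (hypothetical names), followed by a logout/login of that session.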