Aw: [EXT] Re: udev events for iscsi

2020-04-21 Thread Ulrich Windl
>>> 21.04.2020, 17:20 >>>
> Wondering myself.
>
> > On Apr 21, 2020, at 2:31 AM, Gionatan Danti wrote:
> >
> > [reposting, as the previous one seems to be lost]
> >
> > Hi all,
> > I have a question regarding udev events when using iscsi disks.
> >
> > By using "udevadm monitor" I can see that events are generated when I login
> > and logout from an iscsi portal/resource, creating/destroying the relative
> > links under /dev/
>
> So running “udevadm monitor” on the initiator, you can see when a block device
> becomes available locally.
>
> > However, I can not see anything when the remote machine simple
> > dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I
> > don't see anything about a removed disk (and the links under /dev/ remain
> > unaltered, indeed). At the same time, when the remote machine and disk become
> > available again, no reconnection events happen.
>
> As someone who has had an inordinate amount of experience with the iSCSI
> connection breaking (power outage, network switch dies, wrong ethernet cable
> pulled, the target server machine hardware crashes, ...) in the middle of
> production, the more info the better. Udev event triggers would help. I
> wonder exactly how XenServer handles this, as it itself seemed more resilient.
>
> XenServer host initiators do something correct to recover, and I wonder how
> that compares to the normal iSCSI initiator. But unfortunately, XenServer
> LVM-over-iSCSI does not pass the message along to its Linux virtual drives and
> VMs in the same way as it does for Windows VMs.
>
> When the target drives became available again, MS Windows virtual machines
> would gracefully recover on their own. All Linux VM filesystems went read-only
> and those VMs required forceful rebooting; a mount remount would not work.
>
> > I can read here that, years ago, a patch was in progress to give better
> > integration with udev when a device disconnects/reconnects. Did the patch
> > get merged? Or does the one I described above remain the expected behavior?
> > Can it be changed?
> >
> > Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/5E9F80B202A10003875F%40gwsmtp.uni-regensburg.de.


Re: udev events for iscsi

2020-04-21 Thread Gionatan Danti

On Tuesday, April 21, 2020 at 20:44:22 UTC+2, The Lee-Man wrote:
>
>
> Because of the design of iSCSI, there is no way for the initiator to know 
> the server has gone away. The only time an initiator might figure this out 
> is when it tries to communicate with the target.
>
> This assumes we are not using some sort of directory service, like iSNS, 
> which can send asynchronous notifications. But even then, the iSNS server 
> would have to somehow know that the target went down. If the target 
> crashed, that might be difficult to ascertain.
>
> So in the absence of some asynchronous notification, the initiator only 
> knows the target is not responding if it tries to talk to that target.
>
> Normally iscsid defaults to sending periodic NO-OPs to the target every 5 
> seconds. So if the target goes away, the initiator usually notices, even if 
> no regular I/O is occurring.
>

True.
 

>
> But this is where the error recovery gets tricky, because iscsi tries to 
> handle "lossy" connections. What if the server will be right back? Maybe 
> it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps 
> trying to reconnect. As a matter of fact, if you stop iscsid and restart 
> it, it sees the failed connection and retries it -- forever, by default. I 
> actually added a configuration parameter called reopen_max, that can limit 
> the number of retries. But there was pushback on changing the default value 
> from 0, which is "retry forever".
>
> So what exactly do you think the system should do when a connection "goes 
> away"? How long does it have to be gone to be considered gone for good? If 
> the target comes back "later" should it get the same disc name? Should we 
> retry, and if so how much before we give up? I'm interested in your views, 
> since it seems like a non-trivial problem to me.
>

Well, for short disconnections the retry approach is surely the better 
one. But I naively assumed that a longer disconnection, as bounded by the 
node.session.timeo.replacement_timeout parameter, would tear down the 
device with a corresponding udev event. udev should have no problem 
assigning the device a sensible persistent name, right?
 

>
> So you're saying as soon as a bad connection is detected (perhaps by a 
> NOOP), the device should go away? 
>

I would say that the device should go away not at the first failing NOP, 
but when the replacement_timeout (or another sensible timeout) expires.
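For reference, that timeout is set in iscsid.conf (or per node via iscsiadm); 120 seconds is the usual shipped default, though distributions may differ:

```ini
# Seconds to wait for a broken session to re-establish before failing
# outstanding and new commands back up to the SCSI layer
node.session.timeo.replacement_timeout = 120
```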

This opens the door to another question: from the iscsid.conf and README 
files I (wrongly?) understand that replacement_timeout comes into play only 
when the SCSI EH is running, while in the other cases different timeouts, 
such as node.session.err_timeo.lu_reset_timeout and 
node.session.err_timeo.tgt_reset_timeout, should govern the (dis)connection. 
However, in all my tests I only saw replacement_timeout being honored, 
yet I did not catch a single running instance of the SCSI EH via the 
proposed command iscsiadm -m session -P 3.

What am I missing?
Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/67349dca-9647-4dbd-affc-ded6e8f01ee9%40googlegroups.com.


Re: udev events for iscsi

2020-04-21 Thread Donald Williams
Hello,

 If the loss exceeds the timeout value, yes. If the 'drive' doesn't come
back in 30 to 60 seconds, it's not likely a transitory event like a cable
pull.

NOOP-In and NOOP-Out are also known as keepalives. They cover the case where
the connection is up but the target or initiator isn't responding. If those
time out, the connection will be dropped and a new connection attempt made.

 Don


On Tue, Apr 21, 2020 at 2:44 PM The Lee-Man  wrote:

> On Tuesday, April 21, 2020 at 12:31:24 AM UTC-7, Gionatan Danti wrote:
>>
>> [reposting, as the previous one seems to be lost]
>>
>> Hi all,
>> I have a question regarding udev events when using iscsi disks.
>>
>> By using "udevadm monitor" I can see that events are generated when I
>> login and logout from an iscsi portal/resource, creating/destroying the
>> relative links under /dev/
>>
>> However, I can not see anything when the remote machine simple
>> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I
>> don't see anything about a removed disk (and the links under /dev/ remains
>> unaltered, indeed). At the same time, when the remote machine and disk
>> become available again, no reconnection events happen.
>>
>
> Because of the design of iSCSI, there is no way for the initiator to know
> the server has gone away. The only time an initiator might figure this out
> is when it tries to communicate with the target.
>
> This assumes we are not using some sort of directory service, like iSNS,
> which can send asynchronous notifications. But even then, the iSNS server
> would have to somehow know that the target went down. If the target
> crashed, that might be difficult to ascertain.
>
> So in the absence of some asynchronous notification, the initiator only
> knows the target is not responding if it tries to talk to that target.
>
> Normally iscsid defaults to sending periodic NO-OPs to the target every 5
> seconds. So if the target goes away, the initiator usually notices, even if
> no regular I/O is occurring.
>
> But this is where the error recovery gets tricky, because iscsi tries to
> handle "lossy" connections. What if the server will be right back? Maybe
> it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps
> trying to reconnect. As a matter of fact, if you stop iscsid and restart
> it, it sees the failed connection and retries it -- forever, by default. I
> actually added a configuration parameter called reopen_max, that can limit
> the number of retries. But there was pushback on changing the default value
> from 0, which is "retry forever".
>
> So what exactly do you think the system should do when a connection "goes
> away"? How long does it have to be gone to be considered gone for good? If
> the target comes back "later" should it get the same disc name? Should we
> retry, and if so how much before we give up? I'm interested in your views,
> since it seems like a non-trivial problem to me.
>
>>
>> I can read here that, years ago, a patch was in progress to give better
>> integration with udev when a device disconnects/reconnects. Did the patch
>> got merged? Or does the one I described above remain the expected behavior?
>> Can be changed?
>>
>
> So you're saying as soon as a bad connection is detected (perhaps by a
> NOOP), the device should go away?
>
>>
>> Thanks.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "open-iscsi" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to open-iscsi+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/open-iscsi/7f583720-8a84-4872-8d1a-5cd284295c22%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/CAK3e-EawwxYGb3Gw74%2BP-yBmrnE0ktOL%3DFj1OT_LEQ%2BCZyZUkg%40mail.gmail.com.


Re: udev events for iscsi

2020-04-21 Thread The Lee-Man
On Tuesday, April 21, 2020 at 8:20:23 AM UTC-7, Robert ECEO Townley wrote:
>
> Wondering myself.
>
> On Apr 21, 2020, at 2:31 AM, Gionatan Danti  
> wrote:
>
> 
> [reposting, as the previous one seems to be lost]
>
> Hi all,
> I have a question regarding udev events when using iscsi disks.
>
> By using "udevadm monitor" I can see that events are generated when I 
> login and logout from an iscsi portal/resource, creating/destroying the 
> relative links under /dev/
>
>
> So running “udevadm monitor” on the initiator, you can see when a block 
> device becomes available locally.   
>
>
>
> However, I can not see anything when the remote machine simple 
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
> don't see anything about a removed disk (and the links under /dev/ remains 
> unaltered, indeed). At the same time, when the remote machine and disk 
> become available again, no reconnection events happen.
>
>
> As someone who has had an inordinate amount of experience with the iSCSi 
> connection breaking ( power outage, Network switch dies,  wrong ethernet 
> cable pulled, the target server machine hardware crashes, ...) in the 
> middle of production, the more info the better.   Udev event triggers would 
> help.   I wonder exactly how XenServer handles this as it itself seemed 
> more resilient.  
>
> XenServer host initiators  do something correct to recover and wonder how 
> that compares to the normal iSCSi initiator.  
>

I was under the impression that XenServer used open-iscsi.

>  
> But unfortunately, XenServer LVM-over-iSCSi  does not pass the message 
> along to its Linux virtual drives and VMs in the same way as Windows VMs.   
>  
>
> When the target drives became available again,   MS Windows virtual 
> machines would gracefully recover on their own.All Linux VM 
>  filesystems went read only and those VM machines required forceful 
>  rebooting.   mount remount would not work. 
>

A filesystem going read-only means it was likely ext3, which does that if 
it gets IO errors, I believe. (Disclaimer: I'm not a filesystem person.) 
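For what it's worth, ext3/ext4's reaction to metadata I/O errors is governed by the `errors=` mount option, and `errors=remount-ro` is a common default. A hedged fstab sketch for an iSCSI-backed filesystem (the device path and mount point are hypothetical):

```ini
# errors=continue | errors=remount-ro | errors=panic controls the reaction
# to metadata I/O errors; _netdev delays mounting until the network is up
/dev/disk/by-id/scsi-360000000000000001  /mnt/iscsi  ext4  _netdev,errors=remount-ro  0 2
```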

>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/a3ff8e76-fa9b-4290-ba20-f3bf43989b66%40googlegroups.com.


Re: udev events for iscsi

2020-04-21 Thread The Lee-Man
On Tuesday, April 21, 2020 at 12:31:24 AM UTC-7, Gionatan Danti wrote:
>
> [reposting, as the previous one seems to be lost]
>
> Hi all,
> I have a question regarding udev events when using iscsi disks.
>
> By using "udevadm monitor" I can see that events are generated when I 
> login and logout from an iscsi portal/resource, creating/destroying the 
> relative links under /dev/
>
> However, I can not see anything when the remote machine simple 
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
> don't see anything about a removed disk (and the links under /dev/ remains 
> unaltered, indeed). At the same time, when the remote machine and disk 
> become available again, no reconnection events happen.
>

Because of the design of iSCSI, there is no way for the initiator to know 
the server has gone away. The only time an initiator might figure this out 
is when it tries to communicate with the target.

This assumes we are not using some sort of directory service, like iSNS, 
which can send asynchronous notifications. But even then, the iSNS server 
would have to somehow know that the target went down. If the target 
crashed, that might be difficult to ascertain.

So in the absence of some asynchronous notification, the initiator only 
knows the target is not responding if it tries to talk to that target.

Normally iscsid defaults to sending periodic NO-OPs to the target every 5 
seconds. So if the target goes away, the initiator usually notices, even if 
no regular I/O is occurring.
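Those periodic pings correspond to two settings in iscsid.conf (the values below are the usual shipped defaults; check your distribution's file):

```ini
# How often to send a NOP-Out ping to the target, in seconds
node.conn[0].timeo.noop_out_interval = 5
# How long to wait for the NOP-In reply before declaring the connection bad
node.conn[0].timeo.noop_out_timeout = 5
```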

But this is where the error recovery gets tricky, because iscsi tries to 
handle "lossy" connections. What if the server will be right back? Maybe 
it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps 
trying to reconnect. As a matter of fact, if you stop iscsid and restart 
it, it sees the failed connection and retries it -- forever, by default. I 
actually added a configuration parameter called reopen_max, that can limit 
the number of retries. But there was pushback on changing the default value 
from 0, which is "retry forever".
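Assuming a build of open-iscsi recent enough to carry the reopen_max change described above, the knob looks roughly like this in iscsid.conf:

```ini
# 0 keeps the historical behavior: retry the connection forever.
# A positive value gives up after that many reconnect attempts.
node.session.reopen_max = 0
```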

So what exactly do you think the system should do when a connection "goes 
away"? How long does it have to be gone to be considered gone for good? If 
the target comes back "later" should it get the same disc name? Should we 
retry, and if so how much before we give up? I'm interested in your views, 
since it seems like a non-trivial problem to me.

>
> I can read here that, years ago, a patch was in progress to give better 
> integration with udev when a device disconnects/reconnects. Did the patch 
> got merged? Or does the one I described above remain the expected behavior? 
> Can be changed?
>

So you're saying as soon as a bad connection is detected (perhaps by a 
NOOP), the device should go away? 

>
> Thanks.
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/7f583720-8a84-4872-8d1a-5cd284295c22%40googlegroups.com.


Re: [EXT] [PATCH] open-iscsi:Modify iSCSI shared memory permissions for logs

2020-04-21 Thread The Lee-Man
On Monday, April 20, 2020 at 5:08:36 AM UTC-7, Uli wrote:
>
> Hi! 
>
> Maybe this could be made a symbolic constant, or even be made 
> configurable. 
> The other interesting thing is that there are three seemingly very similar 
> code fragements to create the shared memory, but each with a different size 
> parameter (sizeof(struct logarea) vs. size vs. MAX_MSG_SIZE + sizeof(struct 
>  logmsg)) ;-) 
>

If you'd like to submit a pull request, I'll consider it. I don't think the 
symbolic constant and the machinery around making the permission configurable 
are worth the trouble, since it shouldn't need to be changed. But I could see 
making this permission a define in an include file, perhaps with an 
"ifndef" around it. :)

As for automating the shared memory creation, for just 3 cases it is not 
worth it, particularly since we're filling in info about the 2nd and 3rd 
segments in our control structure as we go.

I merged this pull request.

>
> Regards, 
> Ulrich 
>
> >>> Wu Bo wrote on 17.04.2020 at 11:08 in message 
> <6355_1587114536_5E997228_6355_294_1_d6a22a2f-3730-45ee-5256-8a8fe4b017bf@huawei.com>: 
> > Hi, 
> > 
> > Iscsid log damon is responsible for reading data from shared memory 
> > and writing syslog. Iscsid is the root user group. 
> > Currently, it is not seen that non-root users need to read logs. 
> > The principle of minimizing the use of permissions, all the permissions 
> > are changed from 644 to 600. 
> > 
> > Signed-off-by: Wu Bo  
> > --- 
> >   usr/log.c | 6 +++--- 
> >   1 file changed, 3 insertions(+), 3 deletions(-) 
> > 
> > diff --git a/usr/log.c b/usr/log.c 
> > index 6e16e7c..2fc1850 100644 
> > --- a/usr/log.c 
> > +++ b/usr/log.c 
> > @@ -73,7 +73,7 @@ static int logarea_init (int size) 
> >  logdbg(stderr,"enter logarea_init\n"); 
> > 
> >  if ((shmid = shmget(IPC_PRIVATE, sizeof(struct logarea), 
> > -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> > +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
> >  syslog(LOG_ERR, "shmget logarea failed %d", errno); 
> >  return 1; 
> >  } 
> > @@ -93,7 +93,7 @@ static int logarea_init (int size) 
> >  size = DEFAULT_AREA_SIZE; 
> > 
> >  if ((shmid = shmget(IPC_PRIVATE, size, 
> > -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> > +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
> >  syslog(LOG_ERR, "shmget msg failed %d", errno); 
> >  free_logarea(); 
> >  return 1; 
> > @@ -114,7 +114,7 @@ static int logarea_init (int size) 
> >  la->tail = la->start; 
> > 
> >  if ((shmid = shmget(IPC_PRIVATE, MAX_MSG_SIZE + sizeof(struct 
> > logmsg), 
> > -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> > +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
> >  syslog(LOG_ERR, "shmget logmsg failed %d", errno); 
> >  free_logarea(); 
> >  return 1; 
> > -- 
> > 1.8.3.1 
> > 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "open-iscsi" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to open-iscsi+unsubscr...@googlegroups.com. 
> > To view this discussion on the web visit 
> > 
> https://groups.google.com/d/msgid/open-iscsi/d6a22a2f-3730-45ee-5256-8a8fe4b0 
> > 17bf%40huawei.com. 
>
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/ef8b7483-b1fc-46b5-abee-10d0bd6f9d0c%40googlegroups.com.


Re: udev events for iscsi

2020-04-21 Thread Donald Williams
Hello,

 re: XenServer. The initiator is the same, but I suspect your issue is with
the disk timeout value on Linux. When the connection drops, Linux gets the
error and the filesystem is remounted read-only. In VMware, for example, the
VMware Tools set the Windows disk timeout to 60 seconds so it doesn't give
up so quickly.

 I suspect that if you do the same in your Linux VM and increase the disk
timeout, you will likely ride out transitory network issues and SAN
controller failovers, which is where I see this occur all the time.

  This is from a Dell PS Series document that shows one way to set the
value:
http://downloads.dell.com/solutions/storage-solution-resources/(3199-CD-L)RHEL-PSseries-Configuration.pdf

Starting on page 14:

  Disk timeout values

The PS Series arrays can deliver more network I/O than an initiator can
handle, resulting in dropped packets and retransmissions. Other momentary
interruptions in network connectivity can also cause problems, such as a
mount point becoming read-only as a result of interruptions. To mitigate
against unnecessary iSCSI resets during very brief network interruptions,
change the value the kernel uses.

The default setting for Linux is 30 seconds. This can be verified using the
command:

 # for i in $(find /sys/devices/platform -name timeout); do cat $i; done
30
30

To increase the time it takes before an iSCSI connection is reset to 60
seconds, use the command:

 # for i in $(find /sys/devices/platform -name timeout); do echo "60" > $i; done

To verify the changes, re-run the first command.

 # for i in $(find /sys/devices/platform -name timeout); do cat $i; done
60
60

When the system is rebooted, the timeout value will revert to 30 seconds,
unless the appropriate udev rules file is created.

Create a file named /lib/udev/rules.d/99-eqlsd.rules and add the following
content:

 ACTION!="remove", SUBSYSTEM=="block", ENV{ID_VENDOR}=="EQLOGIC",
 RUN+="/bin/sh -c 'echo 60 > /sys/%p/device/timeout'"

To test the efficacy of the new udev rule, reboot the system.

Test that the reboot occurred, and then run the “cat $i” command above.

 # uptime
 12:31:22 up 1 min, 1 user, load average: 0.78, 0.29, 0.10

 # for i in $(find /sys/devices/platform -name timeout); do cat $i; done
60
60

 Regards,

Don



On Tue, Apr 21, 2020 at 11:20 AM  wrote:

> Wondering myself.
>
> On Apr 21, 2020, at 2:31 AM, Gionatan Danti 
> wrote:
>
> 
> [reposting, as the previous one seems to be lost]
>
> Hi all,
> I have a question regarding udev events when using iscsi disks.
>
> By using "udevadm monitor" I can see that events are generated when I
> login and logout from an iscsi portal/resource, creating/destroying the
> relative links under /dev/
>
>
> So running “udevadm monitor” on the initiator, you can see when a block
> device becomes available locally.
>
>
>
> However, I can not see anything when the remote machine simple
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I
> don't see anything about a removed disk (and the links under /dev/ remains
> unaltered, indeed). At the same time, when the remote machine and disk
> become available again, no reconnection events happen.
>
>
> As someone who has had an inordinate amount of experience with the iSCSi
> connection breaking ( power outage, Network switch dies,  wrong ethernet
> cable pulled, the target server machine hardware crashes, ...) in the
> middle of production, the more info the better.   Udev event triggers would
> help.   I wonder exactly how XenServer handles this as it itself seemed
> more resilient.
>
> XenServer host initiators  do something correct to recover and wonder how
> that compares to the normal iSCSi initiator.
>
> But unfortunately, XenServer LVM-over-iSCSi  does not pass the message
> along to its Linux virtual drives and VMs in the same way as Windows VMs.
>
>
> When the target drives became available again,   MS Windows virtual
> machines would gracefully recover on their own.All Linux VM
>  filesystems went read only and those VM machines required forceful
>  rebooting.   mount remount would not work.
>
>
>
> I can read here that, years ago, a patch was in progress to give better
> integration with udev when a device disconnects/reconnects. Did the patch
> got merged? Or does the one I described above remain the expected behavior?
> Can be changed?
>
> Thanks.
>
> --
> You received this message because you are subscribed to the Google Groups
> "open-iscsi" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to open-iscsi+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/open-iscsi/13d4c963-b633-4672-97d9-dd41eec5fb5b%40googlegroups.com
> 
> .
>

udev events for iscsi

2020-04-21 Thread gionatan . danti
Hi all,
I have a question regarding udev events when using iscsi disks.

By using "udevadm monitor" I can see that events are generated when I login 
and logout from an iscsi portal/resource, creating/destroying the relative 
links under /dev/

However, I cannot see anything when the remote machine simply 
dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
don't see anything about a removed disk (and the links under /dev/ remain 
unaltered, indeed). At the same time, when the remote machine and disk 
become available again, no reconnection events happen.

I read a quite old thread here where it was stated that a patch to better 
integrate iscsi with udev events was in progress. Did anything 
change/happen during these years? Is the behavior I observed (and 
described above) to be expected?

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/efc571ca-92db-4f58-a8a5-cff9a33dee98%40googlegroups.com.


Re: udev events for iscsi

2020-04-21 Thread robert
Wondering myself.

> On Apr 21, 2020, at 2:31 AM, Gionatan Danti  wrote:
> 
> 
> [reposting, as the previous one seems to be lost]
> 
> Hi all,
> I have a question regarding udev events when using iscsi disks.
> 
> By using "udevadm monitor" I can see that events are generated when I login 
> and logout from an iscsi portal/resource, creating/destroying the relative 
> links under /dev/

So running “udevadm monitor” on the initiator, you can see when a block device 
becomes available locally.   


> 
> However, I can not see anything when the remote machine simple 
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
> don't see anything about a removed disk (and the links under /dev/ remains 
> unaltered, indeed). At the same time, when the remote machine and disk become 
> available again, no reconnection events happen.

As someone who has had an inordinate amount of experience with the iSCSI 
connection breaking (power outage, network switch dies, wrong ethernet cable 
pulled, the target server machine hardware crashes, ...) in the middle of 
production, the more info the better. Udev event triggers would help. I 
wonder exactly how XenServer handles this, as it itself seemed more resilient.

XenServer host initiators do something correct to recover, and I wonder how 
that compares to the normal iSCSI initiator.

But unfortunately, XenServer LVM-over-iSCSI does not pass the message along to 
its Linux virtual drives and VMs in the same way as it does for Windows VMs.

When the target drives became available again, MS Windows virtual machines 
would gracefully recover on their own. All Linux VM filesystems went 
read-only and those VMs required forceful rebooting; a mount remount would 
not work.


> 
> I can read here that, years ago, a patch was in progress to give better 
> integration with udev when a device disconnects/reconnects. Did the patch got 
> merged? Or does the one I described above remain the expected behavior? Can 
> be changed?
> 
> Thanks.
> -- 
> You received this message because you are subscribed to the Google Groups 
> "open-iscsi" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to open-iscsi+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/open-iscsi/13d4c963-b633-4672-97d9-dd41eec5fb5b%40googlegroups.com.

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/9D54680A-F97E-4465-BA6C-566562C5DC91%40eyeconsultantspc.com.


udev events for iscsi

2020-04-21 Thread Gionatan Danti
[reposting, as the previous one seems to be lost]

Hi all,
I have a question regarding udev events when using iscsi disks.

By using "udevadm monitor" I can see that events are generated when I login 
and logout from an iscsi portal/resource, creating/destroying the relative 
links under /dev/

However, I cannot see anything when the remote machine simply 
dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
don't see anything about a removed disk (and the links under /dev/ remain 
unaltered, indeed). At the same time, when the remote machine and disk 
become available again, no reconnection events happen.

I can read here that, years ago, a patch was in progress to give better 
integration with udev when a device disconnects/reconnects. Did the patch 
get merged? Or does the behavior I described above remain the expected one? 
Can it be changed?

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/13d4c963-b633-4672-97d9-dd41eec5fb5b%40googlegroups.com.