Re: /etc .startup = automatic settings don't appear to make iscsid automatically log in, what do they do?

2016-07-28 Thread james harvey

On Thursday, July 28, 2016 at 1:03:18 PM UTC-4, Chris Leech wrote:
>
> On Thu, Jul 28, 2016 at 02:50:54AM -0700, james harvey wrote: 
> > Am I doing something wrong to get iscsid to automatically login to 
> > certain nodes, or am I not understanding what the .startup = automatic 
> > settings do? 
> > 
> > I was about to post about having trouble getting automatic login to 
> > work, but on the mailing list archives, I see some systemd services 
> > that run logins. 
> > 
> > Now I'm confused. 
> > 
> > I spammed automatic everywhere, thinking that would make iscsid 
> > automatically log into them. 
> > 
> > iscsid.conf:node.startup = automatic 
> > nodes/iqn.XXX/IP,port,1/default:node.startup = automatic 
> > nodes/iqn.XXX/IP,port,1/default:node.conn[0].startup = automatic 
> > send_targets/IP,port/st_config:discovery.startup = automatic 
> > 
> > Should these be left as manual, and run a systemd service that uses 
> > iscsiadm to login instead? 
>
> The node.startup setting is handled by iscsiadm and not iscsid; the 
> automatic setting just gives iscsiadm an easy filter to use (log into 
> all nodes vs. only those marked automatic). 
>
> There should be a service that makes use of that, but I haven't looked 
> at what Arch ships.  We should make an effort to standardize systemd 
> units across distros as much as possible, but Arch is also shipping old 
> open-iscsi tools (due to the lack of releases). 
>
> - Chris 
>

Thanks for the clarification on which tool uses node.startup.

Arch's service just has "ExecStart=/sbin/iscsid", not even an ExecStop, and 
nothing for iscsiadm.
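
For comparison, a login unit built on top of that filter might look
something like the sketch below.  This is only my guess at a reasonable
shape - the unit name, iscsiadm path, and install target are assumptions,
not anything a distro actually ships:

  [Unit]
  Description=Log in to iSCSI nodes with node.startup=automatic
  Requires=iscsid.service
  After=iscsid.service network-online.target
  Wants=network-online.target

  [Service]
  Type=oneshot
  RemainAfterExit=true
  # --loginall / --logoutall take all, manual, or automatic as a filter
  ExecStart=/usr/bin/iscsiadm -m node --loginall=automatic
  ExecStop=/usr/bin/iscsiadm -m node --logoutall=automatic

  [Install]
  WantedBy=remote-fs.target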

If a release gets tagged, I'd be happy to handle getting it, along with 
standardized systemd units, into Arch's packaging, if you'd like.

It would help if the standardized systemd services were all in the git 
repo.  I see an iscsid.service in there, but not the other service files 
discussed previously on the mailing list.



Re: /etc .startup = automatic settings don't appear to make iscsid automatically log in, what do they do?

2016-07-28 Thread Chris Leech
On Thu, Jul 28, 2016 at 03:00:28AM -0700, darli...@gmail.com wrote:
> Arch.  Kernel 4.6.4.  open-iscsi 2.0_873.
> 
> Oh boy, that might be my problem right there for one or both of my posts.  
> Is the latest open-iscsi release really from 2012?
> 
> Are there plans to tag another release, or is it just planned to continue 
> git commits without tagging releases?

We/I should get back to tagging releases.

Thanks.



Re: /etc .startup = automatic settings don't appear to make iscsid automatically log in, what do they do?

2016-07-28 Thread Chris Leech
On Thu, Jul 28, 2016 at 02:50:54AM -0700, james harvey wrote:
> Am I doing something wrong to get iscsid to automatically login to certain 
> nodes, or am I not understanding what the .startup = automatic settings do?
> 
> I was about to post about having trouble getting automatic login to work, 
> but on the mailing list archives, I see some systemd services that run 
> logins.
> 
> Now I'm confused.
> 
> I spammed automatic everywhere, thinking that would make iscsid 
> automatically log into them.
> 
> iscsid.conf:node.startup = automatic
> nodes/iqn.XXX/IP,port,1/default:node.startup = automatic
> nodes/iqn.XXX/IP,port,1/default:node.conn[0].startup = automatic
> send_targets/IP,port/st_config:discovery.startup = automatic
> 
> Should these be left as manual, and run a systemd service that uses 
> iscsiadm to login instead?

The node.startup setting is handled by iscsiadm and not iscsid; the
automatic setting just gives iscsiadm an easy filter to use (log into
all nodes vs. only those marked automatic).
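
If it helps, that filter looks something like this from the command line
(untested here, but per the iscsiadm man page):

  # log in only to nodes recorded with node.startup = automatic
  iscsiadm -m node --loginall=automatic

  # or log in to every recorded node, regardless of the setting
  iscsiadm -m node --loginall=all

The setting itself can also be flipped per node without editing files:

  iscsiadm -m node -T <target-iqn> -p <ip:port> \
      -o update -n node.startup -v automatic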

There should be a service that makes use of that, but I haven't looked
at what Arch ships.  We should make an effort to standardize systemd
units across distros as much as possible, but Arch is also shipping old
open-iscsi tools (due to the lack of releases).

- Chris



Re: Can't discover more than 1 iSER device

2016-07-28 Thread darlingm
Just noticed on my other post that the open-iscsi I'm running is from 
2012.  Is the latest open-iscsi release really from 2012?

Are there plans to tag another release, or is it just planned to continue 
git commits without tagging releases?

Maybe the iSER bug I ran into was fixed a long time ago.

On Thursday, July 28, 2016 at 5:45:21 AM UTC-4, james harvey wrote:
>
> Sorry for cross-posting to GitHub; I just saw several messages saying to 
> use the mailing list instead. 
>
> I made a similar bug report to the linux-rdma mailing list about a year 
> ago, and never followed up here.  I got a response that this is an 
> open-iscsi issue not a kernel issue.  (See 
> http://www.spinics.net/lists/linux-rdma/msg27533.html)
>
> Below is the same bug report, updated now that it's a year later.
>
>
> Two up-to-date Arch systems.  Kernel 4.6.4 (Arch -1).
>
> 2 Mellanox MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE]
> (rev a0) running the mlx4_core driver v2.2-1 (Feb 2014).  Both on most
> recent firmware for PSID MT_04A0110002, FW Version 2.9.1000.  Systems
> directly connected, no switches.  InfiniBand otherwise works great,
> through VERY extensive testing.
>
> Running OpenFabrics' most recent releases of everything (release
> versions, not git versions).
>
> Open-iscsi 2.0_873-7.
>
> targetcli-fb 2.1.fb43-1, python-rtslib-fb 2.1.fb60-1, and
> python-configshell-fb 1.1.fb20-1.
>
>
> I can't discover more than 1 iSER device working at a time.  Using
> IPoIB lets me discover as many as I want.
>
> At the very end is a workaround - not a fix.
>
>
> I start with 3 disks working through iSCSI over IPoIB, with
> targetcli's (-fb version) ls looking like:
>
> o- / ................................................................. [...]
>   o- backstores ........................................................ [...]
>   | o- block ............................................ [Storage Objects: 3]
>   | | o- sda4 ...................... [/dev/sda4 (4.4TiB) write-thru activated]
>   | | o- sdb4 ...................... [/dev/sdb4 (4.4TiB) write-thru activated]
>   | | o- sdc4 ...................... [/dev/sdc4 (4.4TiB) write-thru activated]
>   | o- fileio ........................................... [Storage Objects: 0]
>   | o- pscsi ............................................ [Storage Objects: 0]
>   | o- ramdisk .......................................... [Storage Objects: 0]
>   | o- user ............................................. [Storage Objects: 0]
>   o- iscsi ...................................................... [Targets: 3]
>   | o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.2549ae938766 ...... [TPGs: 1]
>   | | o- tpg1 ......................................... [no-gen-acls, no-auth]
>   | |   o- acls .................................................... [ACLs: 1]
>   | |   | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 ........ [Mapped LUNs: 1]
>   | |   |   o- mapped_lun0 ............................ [lun0 block/sda4 (rw)]
>   | |   o- luns .................................................... [LUNs: 1]
>   | |   | o- lun0 ................................... [block/sda4 (/dev/sda4)]
>   | |   o- portals .............................................. [Portals: 1]
>   | |     o- 0.0.0.0:3260 ............................................... [OK]
>   | o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.8518b92b052d ...... [TPGs: 1]
>   | | o- tpg1 ......................................... [no-gen-acls, no-auth]
>   | |   o- acls .................................................... [ACLs: 1]
>   | |   | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 ........ [Mapped LUNs: 1]
>   | |   |   o- mapped_lun0 ............................ [lun0 block/sdb4 (rw)]
>   | |   o- luns .................................................... [LUNs: 1]
>   | |   | o- lun0 ................................... [block/sdb4 (/dev/sdb4)]
>   | |   o- portals .............................................. [Portals: 1]
>   | |     o- 0.0.0.0:3260 ............................................... [OK]
>   | o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.d4603198ba50 ...... [TPGs: 1]
>   |   o- tpg1 ......................................... [no-gen-acls, no-auth]
>   |     o- acls .................................................... [ACLs: 1]
>   |     | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 ........ [Mapped LUNs: 1]
>   |     |   o- mapped_lun0 ............................ [lun0 block/sdc4 (rw)]
>   |     o- luns .................................................... [LUNs: 1]
>   |     | o- lun0 ................................... [block/sdc4 (/dev/sdc4)]
>   |     o- portals .............................................. [Portals: 1]
>   |       o- 0.0.0.0:3260 ............................................... [OK]
>   o- loopback 

Re: /etc .startup = automatic settings don't appear to make iscsid automatically log in, what do they do?

2016-07-28 Thread darlingm
Arch.  Kernel 4.6.4.  open-iscsi 2.0_873.

Oh boy, that might be my problem right there for one or both of my posts.  
Is the latest open-iscsi release really from 2012?

Are there plans to tag another release, or is it just planned to continue 
git commits without tagging releases?

On Thursday, July 28, 2016 at 5:50:54 AM UTC-4, james harvey wrote:
>
> Am I doing something wrong to get iscsid to automatically login to certain 
> nodes, or am I not understanding what the .startup = automatic settings do?
>
> I was about to post about having trouble getting automatic login to work, 
> but on the mailing list archives, I see some systemd services that run 
> logins.
>
> Now I'm confused.
>
> I spammed automatic everywhere, thinking that would make iscsid 
> automatically log into them.
>
> iscsid.conf:node.startup = automatic
> nodes/iqn.XXX/IP,port,1/default:node.startup = automatic
> nodes/iqn.XXX/IP,port,1/default:node.conn[0].startup = automatic
> send_targets/IP,port/st_config:discovery.startup = automatic
>
> Should these be left as manual, and run a systemd service that uses 
> iscsiadm to login instead?
>



Can't discover more than 1 iSER device

2016-07-28 Thread james harvey
Sorry for cross-posting to GitHub; I just saw several messages saying to 
use the mailing list instead.

I made a similar bug report to the linux-rdma mailing list about a year 
ago, and never followed up here.  I got a response that this is an 
open-iscsi issue not a kernel issue.  (See 
http://www.spinics.net/lists/linux-rdma/msg27533.html)

Below is the same bug report, updated now that it's a year later.


Two up-to-date Arch systems.  Kernel 4.6.4 (Arch -1).

2 Mellanox MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE]
(rev a0) running the mlx4_core driver v2.2-1 (Feb 2014).  Both on most
recent firmware for PSID MT_04A0110002, FW Version 2.9.1000.  Systems
directly connected, no switches.  InfiniBand otherwise works great,
through VERY extensive testing.

Running OpenFabrics' most recent releases of everything (release
versions, not git versions).

Open-iscsi 2.0_873-7.

targetcli-fb 2.1.fb43-1, python-rtslib-fb 2.1.fb60-1, and
python-configshell-fb 1.1.fb20-1.


I can't discover more than 1 iSER device working at a time.  Using
IPoIB lets me discover as many as I want.

At the very end is a workaround - not a fix.


I start with 3 disks working through iSCSI over IPoIB, with
targetcli's (-fb version) ls looking like:

o- / ................................................................. [...]
  o- backstores ........................................................ [...]
  | o- block ............................................ [Storage Objects: 3]
  | | o- sda4 ...................... [/dev/sda4 (4.4TiB) write-thru activated]
  | | o- sdb4 ...................... [/dev/sdb4 (4.4TiB) write-thru activated]
  | | o- sdc4 ...................... [/dev/sdc4 (4.4TiB) write-thru activated]
  | o- fileio ........................................... [Storage Objects: 0]
  | o- pscsi ............................................ [Storage Objects: 0]
  | o- ramdisk .......................................... [Storage Objects: 0]
  | o- user ............................................. [Storage Objects: 0]
  o- iscsi ...................................................... [Targets: 3]
  | o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.2549ae938766 ...... [TPGs: 1]
  | | o- tpg1 ......................................... [no-gen-acls, no-auth]
  | |   o- acls .................................................... [ACLs: 1]
  | |   | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 ........ [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ............................ [lun0 block/sda4 (rw)]
  | |   o- luns .................................................... [LUNs: 1]
  | |   | o- lun0 ................................... [block/sda4 (/dev/sda4)]
  | |   o- portals .............................................. [Portals: 1]
  | |     o- 0.0.0.0:3260 ............................................... [OK]
  | o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.8518b92b052d ...... [TPGs: 1]
  | | o- tpg1 ......................................... [no-gen-acls, no-auth]
  | |   o- acls .................................................... [ACLs: 1]
  | |   | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 ........ [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ............................ [lun0 block/sdb4 (rw)]
  | |   o- luns .................................................... [LUNs: 1]
  | |   | o- lun0 ................................... [block/sdb4 (/dev/sdb4)]
  | |   o- portals .............................................. [Portals: 1]
  | |     o- 0.0.0.0:3260 ............................................... [OK]
  | o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.d4603198ba50 ...... [TPGs: 1]
  |   o- tpg1 ......................................... [no-gen-acls, no-auth]
  |     o- acls .................................................... [ACLs: 1]
  |     | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 ........ [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ............................ [lun0 block/sdc4 (rw)]
  |     o- luns .................................................... [LUNs: 1]
  |     | o- lun0 ................................... [block/sdc4 (/dev/sdc4)]
  |     o- portals .............................................. [Portals: 1]
  |       o- 0.0.0.0:3260 ............................................... [OK]
  o- loopback ................................................... [Targets: 0]
  o- sbp ........................................................ [Targets: 0]
  o- srpt ....................................................... [Targets: 0]
  o- vhost ...................................................... [Targets: 0]


On the initiator system, I clear everything.  Log out via iscsiadm -m
node -U all.  Disconnect via iscsiadm -m discovery -t sendtargets -p
IP -o delete.
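
Spelled out, the cleanup plus an iSER-bound rediscovery is roughly the
following (IP stands in for the target's IPoIB address, and binding the
discovery to the iser interface with -I is my understanding of how the
transport gets selected - treat that part as an assumption):

  # log out of every session and drop the stale discovery records
  iscsiadm -m node -U all
  iscsiadm -m discovery -t sendtargets -p IP -o delete

  # rediscover, with the recorded nodes bound to the iser transport
  iscsiadm -m discovery -t sendtargets -p IP -I iser
  iscsiadm -m node -L all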

On the target system, I go into each of the
iscsi/iqn/tpg1/portals/0.0.0.0:3260 directories and run "enable_iser
true".  Each time it says "iSER enable 

Segfaults from iscsiuio (iscsiuio/src/unix/nic_nl.c)

2016-07-28 Thread Frank Fegert
Hello all,

Disclaimer: I'm not a programmer, so the following might be utterly
and completely wrong ;-)

TL;DR: I'm getting segfaults from iscsiuio upon any target login.
Specifically, this happens in iscsiuio/src/unix/nic_nl.c.  Debugging
this led me to believe this is a case of trying to unlock a mutex that
is not locked.  A quick and dirty hack which works around this is
available here:
  
https://github.com/frank-fegert/open-iscsi/commit/9f770f9eb0f302d146d455f1d68648e2d0172eb6

There is probably room for a proper fix (e.g. a counter on the number of
locks?) that considers the semantics of the whole code.


The longer explanation:

My setup involves 6 Dell M630 hosts (host1, host{5,6,7,8,9}), all with
BCM57810 iSOEs.  BCM57810 firmware, software (Debian 8) and targets are
exactly the same on all hosts.  I'm using the Debian open-iscsi package,
rebuilt to include iscsiuio and with the changes up to Git commit
0fa43f29 - but excluding c6d1117b and 76832662 (externalization of the
open-isns code) - backported.  The only difference is that host1 has
Intel E5 v3 CPUs, while host{5,6,7,8,9} have Intel E5 v4 CPUs.

On host1 everything works fine, iscsiuio runs as expected, access to
targets is working flawlessly.
On host{5,6,7,8,9} I'm getting segfaults like the one in the example
shown below while trying to log in to any target.

Searching for "__lll_unlock_elision" in conjunction with "pthread_*"
led me to the following resources:
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=800574
  https://lwn.net/Articles/534758/

which point towards a CPU-specific (Broadwell and Skylake) problem when
not carefully using mutexes.  The general opinion there and in other
related bug reports seems to be that the application behaviour should be
changed in order to fix such an issue.

Tracking this issue further down, I ended up in the function
"nl_process_handle_thread" in iscsiuio/src/unix/nic_nl.c, and
specifically here:

486                 rc = pthread_cond_wait(&nl_process_cond,
487                                        &nl_process_mutex);

From the pthread_cond_wait manpage:
  The pthread_cond_timedwait() and pthread_cond_wait() functions shall
  block on a condition variable. They shall be called with mutex locked
  by the calling thread or undefined behavior results.

On the first pass of the loop, this constraint seems to be met.  At the
end of the loop, at:

499         pthread_mutex_unlock(&nl_process_mutex);

the mutex is then unlocked.  Thus - if I understand the code right -
the above constraint is no longer met on the subsequent passes of the
loop.  On Intel E5 v3 this seemed to be tolerated, without any impact.
But on Intel E5 v4 (and other CPUs implementing HLE and RTM) this IMHO
causes the observed segfault.
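
For illustration, the textbook shape of that pattern - variable and
function names borrowed from nic_nl.c, but otherwise a made-up sketch,
not the actual code or my patch - keeps the mutex held across every
wait:

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t nl_process_mutex = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  nl_process_cond  = PTHREAD_COND_INITIALIZER;
  static bool have_work;   /* set by the producer under the mutex */

  static void *nl_process_handle_thread(void *arg)
  {
          (void)arg;
          for (;;) {
                  pthread_mutex_lock(&nl_process_mutex);
                  /* pthread_cond_wait() atomically drops the mutex while
                   * sleeping and re-acquires it before returning, so it
                   * is always entered with the mutex locked */
                  while (!have_work)
                          pthread_cond_wait(&nl_process_cond,
                                            &nl_process_mutex);
                  have_work = false;
                  pthread_mutex_unlock(&nl_process_mutex);
                  /* ... process the message outside the lock ... */
          }
          return NULL;
  }

Here every unlock matches a lock taken in the same iteration, which is
exactly what the lock-elision code paths appear to be strict about.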

Could someone more familiar with mutex handling in pthreads and/or the
semantics of the iscsiuio code please take a look at this?  I'd be
interested whether my analysis is correct and whether my quick'n'dirty
fix has any major side-effects.  And - of course - what a proper fix for
the observed segfault would look like ;-)

Thanks & best regards,

Frank


host5:~# gdb /sbin/iscsiuio
GNU gdb (Debian 7.7.1+dfsg-5) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /sbin/iscsiuio...(no debugging symbols found)...done.

(gdb) run -d 4 -f
Starting program: /sbin/iscsiuio -d 4 -f
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
INFO  [Wed Jul 27 10:01:45 2016]Initialize logger using log file: 
/var/log/iscsiuio.log
INFO  [Wed Jul 27 10:01:45 2016]Started iSCSI uio stack: Ver 0.7.8.2
INFO  [Wed Jul 27 10:01:45 2016]Build date: Fri Jul 22 15:40:04 CEST 2016
INFO  [Wed Jul 27 10:01:45 2016]Debug mode enabled
INFO  [Wed Jul 27 10:01:45 2016]Running on sysname: 'Linux', release: 
'3.16.0-4-amd64', version '#1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02)' 
machine: 'x86_64'
DBG   [Wed Jul 27 10:01:45 2016]Loaded nic library 'bnx2' Version: '0.7.8.2' 
build on Fri Jul 22 15:40:04 CEST 2016'
DBG   [Wed Jul 27 10:01:45 2016]Added 'bnx2' nic library
DBG   [Wed Jul 27 10:01:45 2016]Loaded nic library 'bnx2x' Version: '0.7.8.2' 
build on Fri Jul 22 15:40:04 CEST 2016'
DBG   [Wed Jul 27 10:01:45 2016]Added 'bnx2x' nic library
[New Thread 0x7760f700 (LWP 4942)]
INFO  [Wed Jul 27 10:01:45 

Re: Does iscsiadm support discovery through iSCSI interface using iSNS

2016-07-28 Thread Vimol Kshetrimayum
Thank you everyone for your reply. 

I found the issue was already discussed here:
https://groups.google.com/forum/#!topic/open-iscsi/y2ImQ7ZXBy8

Then I realized the iSCSI utils package shipped with RHEL is outdated.

The issue got resolved after compiling the latest code.
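
With a current build, the interface-bound discovery from my original
post works as expected, i.e. something like:

  # iscsiadm -m discoverydb -t isns -p 10.132.7.209 --discover \
        -I bnx2i.00:0e:1e:53:43:c1

now returns portals instead of the "iface ... is not valid" error.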

Regards,
-Vimol


On Tuesday, July 26, 2016 at 1:59:38 PM UTC-7, Vimol Kshetrimayum wrote:
>
> Hi,
>
>
> I am trying to discover targets using iSNS, but through an iSCSI interface.
>
>
> I could discover it through TCP using below command. 
>
> # iscsiadm -m discovery -t isns -p 
>
>
> However, if I try to discover through an iSCSI interface, it throws an 
> error.  Below is the command that I am using and its output error. 
>
>
> -
>
> # iscsiadm -m discoverydb -t isns -p 10.132.7.209 --discover -I 
> bnx2i.00:0e:1e:53:43:c1
>
>
> iscsiadm: iface bnx2i.00:0e:1e:53:43:c1 is not valid. Will not bind node 
> to it. Iface settings [hw=,ip=,net_if=,iscsi_if=bnx2i.00:0e:1e:53:43:c1]
>
> iscsiadm: No portals found
>
> - 
>
>
> Is there any way to discover targets through an iSCSI interface using iSNS?
>
>
> Regards,
>
> -Vimol
>
>
>
>
