I'm back from holidays, and I still want to make this work.
Thank you, Mike, for your answer. I will test your suggestion in a minute,
but first let me clarify what I was *actually* doing, so you can check
whether I did things right.
See below:
On Friday, February 6, 2015 at 23:07:48 UTC+1, Mike Christie wrote:
>
> So I tried, and here is what I got:
> - Both NICs (em3 and em4) can ping the two SAN IPs.
> - With iscsiadm, I add iface0 and iface1, specifying the MAC hardware
> addresses of the corresponding NICs em3 and em4.
> - During discovery, everything seems to work well.
>
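For reference, the iface creation and MAC binding mentioned in the second
point were done roughly like this (a reconstruction of my shell history;
the MAC addresses below are placeholders, not my real ones):

```shell
# create the two iface records
iscsiadm -m iface -I iface0 -o new
iscsiadm -m iface -I iface1 -o new
# bind each iface to the MAC address of its NIC (placeholder values)
iscsiadm -m iface -I iface0 -o update -n iface.hwaddress -v 00:11:22:33:44:55
iscsiadm -m iface -I iface1 -o update -n iface.hwaddress -v 00:11:22:33:44:66
```

The resulting bindings can be checked with `iscsiadm -m iface`.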
Here, I'm doubtful, so here is exactly what I did:
# iscsiadm -m discoverydb -t sendtargets -p 192.168.29.1 --discover -I iface0
# iscsiadm -m discoverydb -t sendtargets -p 192.168.29.1 --discover -I iface1
When doing that, I got:
# iscsiadm -m node -P1
Target: iqn.1992-04.com.emc:cx.ckm00094900174.b2
        Portal: 192.168.29.2:3260,2
                Iface Name: iface1
Target: iqn.1992-04.com.emc:cx.ckm00094900174.a2
        Portal: 192.168.29.1:3260,1
                Iface Name: iface1
If I understand correctly, when I run these two discovery commands, the
second one removes everything the first one stored in the DB and replaces
it with its own results.
In the case above, only iface1 is recorded and will be used.
At this point, trying to log in using iface0 obviously fails with "No
records found".
And when trying to log in with iface1, I get the same symptom of the login
hanging, like so:
# iscsiadm -m node -l -I iface1
Logging in to [iface: iface1, target: iqn.1992-04.com.emc:cx.ckm00094900174.b2, portal: 192.168.29.2,3260] (multiple)
Logging in to [iface: iface1, target: iqn.1992-04.com.emc:cx.ckm00094900174.b2, portal: 192.168.29.2,3260] (multiple)
Logging in to [iface: iface1, target: iqn.1992-04.com.emc:cx.ckm00094900174.a2, portal: 192.168.29.1,3260] (multiple)
Logging in to [iface: iface1, target: iqn.1992-04.com.emc:cx.ckm00094900174.a2, portal: 192.168.29.1,3260] (multiple)
Login to [iface: iface1, target: iqn.1992-04.com.emc:cx.ckm00094900174.b2, portal: 192.168.29.2,3260] successful.
^Ciscsiadm: caught SIGINT, exiting...
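In case it helps narrow this down, the same login can be rerun with
iscsiadm's debug level raised, which should show at which step it stalls
(I haven't captured that output yet):

```shell
# -d sets the debug/print level (0-8); 8 is the most verbose
iscsiadm -m node -l -I iface1 -d 8
```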
> - When logging in, it fails in a way that suggests I'm not allowed to
> connect using both ifaces at the same time.
>
> When I try to log in specifying iface0, the login to the two SAN IPs
> works well.
> But then doing the same with iface1 fails, with a kernel error
> message:
> iscsid: conn 0 login rejected: target error (03/02)
> And after logging out, trying it the other way around, with iface1
> first, results in the same behavior.
>
>
> That is weird. I have never seen it before for this type of setup. The
> iscsi tools are able to make TCP connections through the different iscsi
> ifaces and NICs, but in this case the target is returning an error:
>
> -----------------------------------------------------------------
> Out of | 0302 | The target has insufficient session,
> resources | | connection, or other resources.
> -----------------------------------------------------------------
>
>
> You probably need to contact EMC to get info on its limitations or on how
> to set it up for this type of configuration.
>
>
> When you only have one session connected, if you do
>
> iscsiadm -m node -T yourtarget -p ip -I iface_you_are_already_logged_in -o new
>
> does this login ok? If you do
>
> iscsiadm -m session -P 3
>
> do you see 2 sessions?
>
>
I also tried to use '-o new' when discovering, to reach the following
state:
# iscsiadm -m node -P1
Target: iqn.1992-04.com.emc:cx.ckm00094900174.b2
        Portal: 192.168.29.2:3260,2
                Iface Name: iface1
                Iface Name: iface0
Target: iqn.1992-04.com.emc:cx.ckm00094900174.a2
        Portal: 192.168.29.1:3260,1
                Iface Name: iface1
                Iface Name: iface0
But when logging in, the same hang appears.
Mike, I tried the exact sequence you describe, and the login still hangs.
I must add that when I reach this hanging state, I get some sort of ghost
sessions that I cannot log out of; yet logging OUT of the other sessions,
EVEN when they no longer appear, works fine...
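For completeness, ghost sessions can in principle be targeted by session
id rather than by node record; this is the kind of thing I tried (the sid
below is a placeholder taken from the session listing):

```shell
# list active sessions together with their sids
iscsiadm -m session -P 1
# attempt to log out one session by its sid (placeholder: 2)
iscsiadm -m session -r 2 -u
```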
At this point, I don't know what to blame: the EMC SAN, the open-iscsi
driver, or just me not knowing the limits of the system. I have already
looked for every setting I could check on the SAN in case there were such
a limitation, but found none.
Could it be that the SAN limits the number of concurrent logins, and that
the open-iscsi driver reacts by refusing the login?
--
Nicolas ECARNOT