On 10/20/2010 04:06 PM, Andrea Sperelli wrote:
Hi Mike, thank you for your reply.

2010/10/20 Mike Christie<micha...@cs.wisc.edu>

On 10/19/2010 05:26 AM, Andrea Sperelli wrote:

Hi.
What is the best way to configure my Linux box to work with HP LeftHand?


Do you work with Thomas Wouters? He just posted about LeftHand targets on
the 18th, and I think that makes a total of 2 LeftHand posts in a couple of
years.

On the LeftHand side I have one VIP address and 4 real IP addresses.
My portal is the VIP.
VIP: 172.30.30.10
IP1: 172.30.30.101
IP2: 172.30.30.102
IP3: 172.30.30.103
IP4: 172.30.30.104

I can log in to all the LUNs on the HP LeftHand SAN. I cannot control
which storage IP address I will be connected to after login.

Problem 1)
I configured iSCSI on Linux with two independent NICs, but the LeftHand
redirects the session from each NIC to the same storage NIC: I cannot
configure dm-multipath, because I really do not have two paths but
only one path on the SAN side.
Performance suffers because of this behavior.



You could still use dm-multipath for failover from one initiator NIC to the
other in case one fails.
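
Something like this in /etc/multipath.conf is enough for plain failover (a minimal sketch; the blacklist entry and names are just examples, adjust them for your setup):

defaults {
        user_friendly_names yes
        path_grouping_policy failover   # one active path, the other stays on standby
}
blacklist {
        devnode "^sda$"                 # example: keep the local boot disk out of multipath
}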


You are right, but this way performance is worse...
The HP LeftHand's network interfaces are configured in ALB mode.
I have a two-node cluster to do some tests.
Right now one node is configured as you suggested, with dm-multipath.
The other node is configured with mode 6 bonding, the same as ALB
(Adaptive Load Balancing). Bonding mode 6 supports failover. This node
is the faster one, so I'm thinking of avoiding dm-multipath.
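
For reference, the bonding on that node is set up roughly like this (a sketch of one common RHEL-5-era layout, assuming eth0 and eth1 as slaves; file names vary by distribution):

/etc/modprobe.d/bonding.conf:
    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100

eth0 and eth1 are then enslaved to bond0, which carries the initiator's IP on the SAN network.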




Problem 2)
If I modify the IP routing table of my Linux box so that one IP of the SAN becomes unreachable:
route add -host 172.30.30.102 gw 1.1.1.5
then iscsid logs the following in /var/log/messages:

iscsid: connect to 172.30.30.102:3260 failed (No route to host)


So does the lefthand target redirect us to 172.30.30.102:3260, and you get
that error?

Or are you manually trying to log into that portal? Did you do
iscsiadm -m node -T yourtargetsname -p 172.30.30.102:3260 -o new
iscsiadm -m node -T yourtargetsname -p 172.30.30.102:3260 -l
and then you got the error above?


Are you using iscsi ifaces? What kernel is this?

Excuse me, I did not explain the situation correctly. On the first node I'm
using dm-multipath.
Example:
My official portal is 172.30.30.10, the VIP address of the LeftHand cluster.
In the output of
iscsiadm -m session -P 3
the 172.30.30.10 address is always the persistent portal.
The other addresses are current portals.

Does each session have the same current portal or are they getting spread out over all portals? Send the output of the iscsiadm command.


I configured two ifaces, A and B.
When I do
iscsiadm -m node -T yourtargetsname -p 172.30.30.102:3260 -l
I obtain the same result as
iscsiadm -m node -T yourtargetsname -p 172.30.30.10:3260 -l

Both the A and B interfaces end up on the same physical interface of the SAN.
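
The ifaces were created roughly like this (a sketch; eth0 and eth1 are the two initiator NICs in my case):

iscsiadm -m iface -I A -o new
iscsiadm -m iface -I A -o update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I B -o new
iscsiadm -m iface -I B -o update -n iface.net_ifacename -v eth1
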
If I issue the following commands:
iscsiadm -m discovery -t st -p 172.30.30.10:3260
iscsiadm -m discovery -t st -p 172.30.30.101:3260
iscsiadm -m discovery -t st -p 172.30.30.102:3260
iscsiadm -m discovery -t st -p 172.30.30.201:3260
iscsiadm -m discovery -t st -p 172.30.30.202:3260

I always obtain the same result: I get the targets from portal 172.30.30.10,
so
iscsiadm -m discovery -t st -p 172.30.30.101:3260 -i A
works correctly, but
iscsiadm -m discovery -t st -p 172.30.30.102:3260 -i B
overrides the other discovery task.

You meant -I, right? Just so you know, we do not do discovery through that iface. It only sets things up so that when we log in for normal sessions we use that iface.

If that got set up right, and the problem is that logging in to the portals found using that iface, i.e.

iscsiadm -m node -T target -p ip -I iface -l

fails with "no route to host" or some other network error, then you might need to adjust your rp_filter. Set net.ipv4.conf.default.rp_filter to 0 or 2 in /etc/sysctl.conf, then reboot the box and retry.
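
For example (just a sketch; whether 0 or 2 is appropriate depends on your routing setup):

in /etc/sysctl.conf:
    net.ipv4.conf.default.rp_filter = 2

then reboot, or run "sysctl -p" and bring the interfaces up again, since the "default" value only applies to interfaces created after it is set.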



I think this may be a LeftHand cluster behavior.
It is not redundant.
For VMware and Windows iSCSI clients, HP distributes a DSM driver. With the DSM
driver, a client does something like
iscsiadm -m discovery -t st -p 172.30.30.10:3260
After the discovery task, when the VMware and Windows clients log in to the
targets, they are connected on all the SAN interfaces (in this case
172.30.30.101, 172.30.30.102, 172.30.30.201 and 172.30.30.202). The driver
itself will manage a failing path.


Ok. Will dig into that.


With Linux I am not finding a way to implement a failover mechanism with the
HP LeftHand.
I know that to achieve a well-performing failover mechanism I have to
do the following:
I need two portals on the same or on different broadcast domains (IP networks);
I have to configure two iSCSI ifaces, one for each portal;
I have to log in to all targets through each of the two portals;
I have to start the multipathd daemon (and configure blacklist, naming,
...).

Is this right? (See the command sketch below.)
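
In terms of commands, I mean something like this (a rough sketch on a RHEL-like system; the target name is a placeholder and the iface names A and B are the ones from above):

iscsiadm -m discovery -t st -p 172.30.30.10:3260
iscsiadm -m node -T <targetname> -p 172.30.30.10:3260 -I A -l
iscsiadm -m node -T <targetname> -p 172.30.30.10:3260 -I B -l
service multipathd start
multipath -ll     # should show two paths per LUN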


Does anyone know how I have to configure the LeftHand to reach my goal?
Tomorrow I'll give you my exact kernel version.

Regards
Andrea






Can I configure the iSCSI initiator to try a new login?


You can try to manually login with the commands above, but the target may
not let you log in.

OK, it does not work. I have to log out and then log in to a different IP,
but I need real multipath.


Do you know what happens if, instead of changing the routing table, I shut down
the 172.30.30.102 SAN node?

Thanks, I have to obtain some info about the expected behaviour.

Regards
Andrea




