Re: Question about /sys/class/iscsi_connection

2010-01-08 Thread Yangkook Kim
Hi, Mike. Thank you for the reply.

OK, now I see the history.

 P.S.
 I also thought that connectionXX:0 is misleading. XX is supposed to be
 where
 iscsi session number is put after binding connection and session.


 I did not get what you mean here. The XX is the session number.


Probably I should not have said "misleading". Instead, I should have said
that it confused me.

It seemed a bit strange to me to write connectionXX:0 where XX is the
session number.
I was wondering why you have to use the session number to describe a connection...

But I have now come up with a reason myself: it is probably written that way
because it is useful to identify which connection you are talking about
in one word, and that is handy in logging.
Or is there any other reason?

However, I still think it is misleading to print connectionXX:0 before the
connection and session are bound. In iscsid.c:session_conn_poll(), connectionXX:0
appears before the binding:

log_debug(3, "created new iSCSI session sid %d host "
	  "no %u", session->id, session->hostno);

snip

log_debug(3, "created new iSCSI connection "
	  "%d:%d", session->id, conn->id);

snip

log_debug(3, "bound iSCSI connection %d:%d to session %d",
	  session->id, conn->id, session->id);

Does that look natural to you? Or is it only me who worries about it?

2010/1/7, Mike Christie micha...@cs.wisc.edu:
 On 01/02/2010 01:56 PM, Yangkook Kim wrote:
 Hi, Group,

 I am wondering why we have /sys/class/iscsi_connection for iscsi
 connection
 in sysfs filesystem.

 I understand that one or more iscsi connections form a single iscsi
 session and, in this sense, an iscsi connection is subordinate to an
 iscsi session. Then it makes more sense to have iscsi_connection/
 under iscsi_session/.

 Ex. /sys/class/iscsi_session/iscsi_connection/connectionXX:0

 Hey,

 Sorry for the late reply.


 We sort of have that. The reason we have
 /sys/class/iscsi_connection and then
 /sys/class/iscsi_session is due to how the kernel APIs work. For them we
 are using transport classes and classes (see the kernel file
 scsi_transport_iscsi.c), and this is just how they get laid out. We
 could have just made the iscsi_connection a struct with a kobject and
 then added that directly under the session, but at the time people did
 not like the idea of messing with kobjects directly.

 So if you look in the iscsi_session dir

 [r...@max ~]# cd /sys/class/iscsi_session/session1
 [r...@max session1]# ls

 abort_tmo          ifacename            password      tpgt
 data_pdu_in_order  immediate_data       password_in   uevent
 data_seq_in_order  initial_r2t          power         username
 device             initiatorname        recovery_tmo  username_in
 erl                lu_reset_tmo         state
 fast_abort         max_burst_len        subsystem
 first_burst_len    max_outstanding_r2t  targetname


 there is a device dir/link. When you follow that you see

 [r...@max session1]# cd device/
 [r...@max device]# ls
 connection1:0  iscsi_session  power  target6:0:0  uevent


 Note how the connection is below session1. So it is sort of what you would
 expect, but there is that extra device dir/link because we use
 classes/transport_classes, which use device structs instead of kobjects
 directly.





 Can anybody give me an explanation?

 Thanks
 Kim,

 P.S.
 I also thought that connectionXX:0 is misleading. XX is supposed to be
 where
 iscsi session number is put after binding connection and session.


 I did not get what you mean here. The XX is the session number.

-- 
You received this message because you are subscribed to the Google Groups 
open-iscsi group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: Need help with multipath and iscsi in CentOS 5.4

2010-01-08 Thread Kyle Schmitt
Using a single path (without MPIO) as a baseline:

With bonding I saw, on average 99-100% of the speed (worst case 78%)
of a single path.
With MPIO (2 nics) I saw, on average 82% of the speed (worst case 66%)
of the single path.
With MPIO with one nic (ifconfig downed the second), I saw, on average
86% of the speed (worst case 66%) of the single path.

There were situations where bonding and MPIO both scored slightly
higher than the single path, but that is most likely due to
differences on the array, since the tests weren't run back to back.
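For anyone reproducing this kind of test, a minimal /etc/multipath.conf along these lines spreads I/O round-robin across both iSCSI paths (the values here are illustrative assumptions, not taken from Kyle's actual setup; rr_min_io in particular is worth tuning when comparing against bonding):

```
defaults {
	user_friendly_names	yes
	path_grouping_policy	multibus
	rr_min_io		100
}
```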

--Kyle

On 1/6/10, Mike Christie micha...@cs.wisc.edu wrote:
 On 12/30/2009 11:48 AM, Kyle Schmitt wrote:
 On Wed, Dec 9, 2009 at 8:52 PM, Mike Christiemicha...@cs.wisc.edu
 wrote:
 So far single connections work: If I setup the box to use one NIC, I
 get one connection and can use it just fine.
 Could you send the /var/log/messages for when you run the login command
 so I can see the disk info?

 Sorry for the delay.  In the meanwhile I tore down the server and
 re-configured it using ethernet bonding.  It worked and, according to
 iozone, provided moderately better throughput than the single
 connection I got before.  Moderately.  Measurably.  Not significantly.

 I tore it down after that and reconfigured again using MPIO, and funny
 enough, this time it worked.  I can access the lun now using two
 devices (sdb and sdd), and both ethernet devices that connect to iscsi
 show traffic.

 The weird thing is that, aside from writes, bonding was measurably
 faster than MPIO.  Does that seem right?


 With MPIO are you seeing the same throughput you would see if you only
 used one path at a time?
