Re: [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration

2008-12-16 Thread Pasi Kärkkäinen

On Mon, Dec 15, 2008 at 05:52:54PM -0800, Karen Xie wrote:
 
 Hi, Pasi,
 
 Here are some throughput numbers we see with disktest, one tcp connection 
 (iscsi session).
 
 The setup is a pair of Chelsio 10G adapters. The target is Chelsio's 
 ramdisk target with data discarded (similar to IET's NULLIO mode). The 
 Chelsio target is used because of its digest offload and payload DDP. 
 
 The ethernet frame is standard 1500 bytes.
 
 The numbers are about 3 months old, but you get the idea :) For the cxgb3i 
 driver, since the digest is offloaded, performance with digests off is very 
 similar to digests on, so only the digest-on numbers are shown.


Thanks! cxgb3i acceleration gives a nice performance boost compared
to plain iscsi-tcp, especially with digests on!

 We will re-run the tests and get the cpu stats too, will keep you posted.
 

Yep, cpu stats would be really nice to have too.

-- Pasi

 
 Test        cxgb3i      iscsi-tcp   iscsi-tcp
             digest on   digest on   digest off
             (MB/sec)    (MB/sec)    (MB/sec)
 ==============================================
 
 512-read      36.85       34.13       36.69
 1k-read       71.91       58.52       66.81
 2k-read      137.24       97.75      128.46
 4k-read      280.61      137.98      214.04
 8k-read      531.34      201.87      325.09
 16k-read     953.67      226.49      429.32
 64k-read    1099.57      248.57      626.30
 128k-read   1102.65      256.04      613.94
 256k-read   1105.28      262.28      642.73
 
 512-write     39.54       34.18       38.36
 1k-write      79.52       56.51       75.06
 2k-write     158.03       84.12      140.85
 4k-write     314.56      126.33      282.72
 8k-write     559.83      155.49      528.24
 16k-write    968.84      168.50      676.38
 64k-write   1099.31      182.82      978.82
 128k-write  1074.62      182.55      974.18
 256k-write  1063.85      185.67      972.88
 
 
 -Original Message-
 From: Pasi Kärkkäinen [mailto:pa...@iki.fi] 
 Sent: Monday, December 15, 2008 6:06 AM
 To: open-iscsi@googlegroups.com
 Cc: linux-s...@vger.kernel.org; micha...@cs.wisc.edu; 
 james.bottom...@hansenpartnership.com; Karen Xie
 Subject: Re: [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration
 
 On Tue, Dec 09, 2008 at 02:15:22PM -0800, Karen Xie wrote:
  
  [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration 
  
  From: Karen Xie k...@chelsio.com
  
  Here is the updated patchset for adding cxgb3i iscsi initiator.
  
  The updated version incorporates the comments from Mike and Boaz:
  - remove the cxgb3 sysfs entry for the private iscsi ip address, it can be
accessed from iscsi.
  - in cxgb3i.txt, documented the error message logged when 
  MaxRecvDataSegmentLength is not set properly.
  - renamed cxgb3i Makefile to Kbuild
  - removed select ISCSI_TCP in Kconfig
  - consistent handling of AHS: on tx, reserve room for AHS; on rx, assume 
  we could receive AHS.
  - added support for bi-directional commands for ddp setup.
  
  The cxgb3i driver, especially the part that handles the offloaded iSCSI TCP 
  connection management, has gone through netdev review 
  (http://marc.info/?l=linux-netdev&m=121944339211552, 
  http://marc.info/?l=linux-netdev&m=121989660016124).
  
  The cxgb3i driver provides iscsi acceleration (PDU offload and payload data 
  direct placement) to the open-iscsi initiator. It accesses the hardware 
  through the cxgb3 module.
  
 
 Hello!
 
 Do you guys have performance comparison/numbers for normal open-iscsi over 
 tcp vs. cxgb3i accelerated?
 
 Would be nice to see throughput/iops/cpu-usage statistics..
 
 -- Pasi
 
 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
open-iscsi group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Linux iscsi performance, multipath issue

2008-12-16 Thread fuubar2003

SuSE 10 and SuSE 10 w/SP1 install a kernel below 2.6.17. If your kernel
is 2.6.16 or less, then the following kernel tuning variables may help
with TCP performance. Add these to /etc/sysctl.conf and reboot:
net.core.rmem_max = 873200
net.core.wmem_max = 873200
net.ipv4.tcp_rmem = 32768 436600 873200
net.ipv4.tcp_wmem = 8192 436600 873200
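To check what the running kernel actually uses (e.g. after the reboot, or after applying the file with `sysctl -p`), each sysctl key maps to a file under /proc/sys with the dots replaced by slashes — a standard Linux convention, readable without root:

```shell
# Read the effective values; tcp_rmem holds three numbers: min, default, max.
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/ipv4/tcp_rmem
```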

Additionally, this may help as well:
   a. Check the maximum size of the receive (RX) buffer:
  ethtool -g eth0

  Example output:
  
  Pre-set maximums:
  RX: 4096
  TX: 4096

  Current hardware settings:
  RX: 256
  TX: 256

   b. Set the RX buffer to the maximum size:
  ethtool -G eth0 rx 4096

   c. Make this change permanent by adding the ethtool command to the /
etc/init.d/network startup script for the relevant network interfaces
between the lines that read esac and rc_exit at the end of the
file, for example:

esac
#added following line to raise rx at boot time
ethtool -G eth0 rx 4096
rc_exit

On Dec 7, 2:01 pm, Kmec jozef.novik...@seznam.cz wrote:
 Hi,
 I would like to ask for help with some strange behavior of linux
 iscsi. Situation is as follows: iSCSI SAN Dell Equallogic, SAS 10k RPM
 drives, 4x Broadcom NIC or 4x Intel NIC in Dell R900 server (24 cores,
 64 GB RAM). It's testing environment where we are trying to measure
 SAN Dell EQL performance.

 In total we are chasing 2 different issues:
 1) Running an IOmeter test on a server with Windows 2008 Server, we are
 able to get 112 MBps read and 110 MBps write over 1 NIC, and 220 MBps
 read and 210 MBps write over 2 NICs. On SuSE 10 or CentOS we get only
 66 MBps read and 38 MBps write over one NIC. So I think we can rule out
 issues on the SAN or switch. The strange part is that with dd or
 hdparm we can get wire speed. The question is what to do to get the same
 numbers from IOmeter on Windows and Linux. We also tried the dt tool and
 got the same results as from IOmeter.
 How to continue?

 2) Our next problem is multipath. When we configure multipath, over
 one NIC with dd we get 90 MBps read, but over 2 NICs just 80 MBps, which
 is strange. On the switch and SAN we see that data flows over both NICs,
 but dd still shows 80 MBps.

 I will appreciate any suggestion.

 Jozef




Re: was [Re: Login failure]: iSer login failure

2008-12-16 Thread Or Gerlitz

Mike Christie wrote:
 Login session [iface: default, target: 
  iqn.1986-03.com.sun:02:917071dc-db02-ebf0-9508-d11df2889cda, 
  portal: 10.8.0.112,3260]
  iser: iser_connect:connecting to: 10.8.0.112, port 0xbc0c
  iser: iser_cma_handler:event 0 conn 81022847ce80 id 81023eb82a00
  iser: iser_cma_handler:event 2 conn 81022847ce80 id 81023eb82a00
 iser: iser_create_ib_conn_res:setting conn 81022847ce80 cma_id  
 81023eb82a00: fmr_pool 81043edf8d40 qp 810247154200
 iser: iser_cma_handler:event 9 conn 81022847ce80 id 81023eb82a00
 iser: iscsi_iser_ep_poll:ib conn 81022847ce80 rc = 1
 iser: iscsi_iser_conn_bind:binding iscsi conn 810228385290 to  
 iser_conn 81022847ce80
 iser: iser_cma_handler:event 10 conn 81022847ce80 id  
 81023eb82a00
 iser: iscsi_iser_ep_disconnect:ib conn 81022847ce80 state 4
 iser: iser_free_ib_conn_res:freeing conn 81022847ce80 cma_id  
 81023eb82a00 fmr pool 81043edf8d40 qp 810247154200
 iser: iser_device_try_release:device 81043fee4d40 refcount 1
 [ snip ]
 iscsiadm: initiator reported error (5 - encountered iSCSI login failure)
 iscsiadm: Could not execute operation on all records. Err 107.

 Any way to get some more robust information regarding iSCSI login  
 failure?

 What are you talking about? You do not have the iser_cma_handler event 
 numbers memorized yet :) Me neither :( It looks like the target 
 disconnected and there was some sort of iser connection error, because 
 the ib_conn state is ISER_CONN_DOWN. The iser guys can diagnose it 
 better for whatever kernel you are using.
What you have here is a connection established (event 9) and then a 
disconnection (event 10). I suggest that you use logging with the iscsid 
user-space daemon to better understand what went wrong. The way I do this 
is: start the service, then log out, then kill the daemon, run it again 
with logging/debug prints enabled, and log in manually.

run$ iscsid -d 8 -f > /tmp/iscsid.log 2>&1 &
login$ iscsiadm -m node -T $target -l
logout$ iscsiadm -m node -T $target -u


As for the iser event numbers... they are not iser-specific but rather the 
RDMA CM events; they are defined in include/rdma/rdma_cm.h in the kernel 
source tree.

Or.





iscsi-initiator

2008-12-16 Thread kingble

hello,
I have a problem that has bothered me for a long time. The problem is as
follows:
I can set only one CHAP username and password for the initiator in
iscsid.conf, which means each iscsi-Initiator can only establish
CHAP-authenticated connections to a target with that single set of
credentials. Is that true?
Your reply will be appreciated
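For reference, the iscsid.conf settings in question look like the fragment below (all values are placeholders). Note that iscsid.conf only provides defaults: individual node records can override them with iscsiadm's `-o update` mode, so the initiator is not limited to a single credential for every target.

```
# /etc/iscsi/iscsid.conf -- default CHAP credentials (placeholder values)
node.session.auth.authmethod = CHAP
node.session.auth.username = someuser
node.session.auth.password = somesecret
# Mutual (bi-directional) CHAP, if the target also authenticates itself:
#node.session.auth.username_in = targetuser
#node.session.auth.password_in = targetsecret
```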




Re: mount as ro for users, rw for root?

2008-12-16 Thread Michael Tokarev

Scott R. Ehrlich wrote:
 Under CentOS 5.2, is it possible to mount an iscsi filesystem/partition as 
 rw for root, but ro for users?
 
 If so, what would be the proper syntax?

Yes, any unix/linux can do that.
The thing is called permissions.
Take a look at `man chmod' for example.
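A minimal sketch of that approach, demonstrated on a throwaway directory (on the real system this would be the iSCSI filesystem's mount point, owned by root):

```shell
# Owner (root on a real mount) gets rwx; group and others get r-x,
# i.e. read-only for ordinary users.
mkdir -p /tmp/iscsi-demo
chmod 755 /tmp/iscsi-demo
stat -c '%a' /tmp/iscsi-demo    # prints 755
```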

/mjt




Re: [PATCH 3/3 2.6.29] cxgb3i -- documentation

2008-12-16 Thread Randy Dunlap

Karen Xie wrote:
 [PATCH 3/3 2.6.29] cxgb3i -- documentation
 
 From: Karen Xie k...@chelsio.com
 
 - document cxgb3i driver's capability and setup steps to be used with
   open-iscsi initiator.
 
 Signed-off-by: Karen Xie k...@chelsio.com
 ---
 
  Documentation/scsi/cxgb3i.txt |   81 
 +
  1 files changed, 81 insertions(+), 0 deletions(-)
  create mode 100644 Documentation/scsi/cxgb3i.txt
 
 
 diff --git a/Documentation/scsi/cxgb3i.txt b/Documentation/scsi/cxgb3i.txt
 new file mode 100644
 index 000..9456013
 --- /dev/null
 +++ b/Documentation/scsi/cxgb3i.txt
 @@ -0,0 +1,81 @@
 +Chelsio S3 iSCSI Driver for Linux
 +
 +Introduction
 +
 +
 +The Chelsio T3 ASIC based Adapters (S310, S320, S302, S304, Mezz cards, etc.
 +series of products) supports iSCSI acceleration and iSCSI Direct Data 
 Placement

   support

 +(DDP) where the hardware handles the expensive byte touching operations, such
 +as CRC computation and verification, and direct DMA to the final host memory
 +destination:
 +
 + - iSCSI PDU digest generation and verification
 +
 +   On transmitting, Chelsio S3 h/w computes and inserts the Header and
 +   Data digest into the PDUs.
 +   On receiving, Chelsio S3 h/w computes and verifies the Header and
 +   Data digest of the PDUs.
 +
 + - Direct Data Placement (DDP)
 +
 +   S3 h/w can directly place the iSCSI Data-In or Data-Out PDU's
 +   payload into pre-posted final destination host-memory buffers based
 +   on the Initiator Task Tag (ITT) in Data-In or Target Task Tag (TTT)
 +   in Data-Out PDUs.
 +
 + - PDU Transmit and Recovery
 +
 +   On transmitting, S3 h/w accepts the complete PDU (header + data)
 +   from the host driver, computes and inserts the digests, decomposes
 +   the PDU into multiple TCP segments if necessary, and transmit all
 +   the TCP segments onto the wire. It handles TCP retransmission if
 +   needed.
 +
 +   On receving, S3 h/w recovers the iSCSI PDU by reassembling TCP

 receiving,

 +   segments, separating the header and data, calculating and verifying
 +   the digests, then forwards the header to the host. The payload data,

forwarding

 +   if possible, will be directly placed into the pre-posted host DDP
 +   buffer. Otherwise, the payload data will be sent to the host too.
 +
 +The cxgb3i driver interfaces with open-iscsi initiator and provides the iSCSI
 +acceleration through Chelsio hardware wherever applicable.
 +
 +Using the cxgb3i Driver
 +===
 +
 +The following steps need to be taken to accelerate the open-iscsi initiator:
 +
 +1. Load the cxgb3i driver: modprobe cxgb3i
 +
 +   The cxgb3i module registers a new transport class cxgb3i with 
 open-iscsi.
 +
 +   * in the case of recompiling the kernel, the cxgb3i selection is located 
 at
 + Device Drivers
 + SCSI device support ---
 + [*] SCSI low-level drivers  ---
 + M   Chelsio S3xx iSCSI support
 +
 +2. Create an interface file located under /etc/iscsi/ifaces/ for the new
 +   transport class cxgb3i.
 +
 +   The content of the file should be in the following format:
 + iface.transport_name = cxgb3i
 + iface.net_ifacename = ethX
 + iface.ipaddress = iscsi ip address
 +
 +   * if iface.ipaddress is specified, iscsi ip address needs to be either 
 the
 + same as the ethX's ip address or an address on the same subnet. Make
 + sure the ip address is unique in the network.
 +
 +3. edit /etc/iscsi/iscsid.conf
 +   The default setting for MaxRecvDataSegmentLength (131072) is too big, 
 search

   big;

 +   and replace all occurances of xxx.iscsi.MaxRecvDataSegmentLength to be a

  occurrences

 +   value no bigger than 15872 (for example 8192):
 +
 + discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 8192
 + node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192
 +
 +4. To direct open-iscsi traffic to go through cxgb3i's accelerated path,
 +   -I iface file name option needs to be specified with most of the
 +   iscsiadm command. iface file name is the transport interface file 
 created
 +   in step 2.


HTH.

~Randy





Re: Linux iscsi performance, multipath issue

2008-12-16 Thread Tracy Reed

On Wed, Dec 10, 2008 at 05:38:15PM -0800, fuubar2...@yahoo.com spake thusly:
 SuSE 10 and SuSE10w/SP1 install a kernel below 2.6.17. If your kernel
 is 2.6.16 or less, then the following kernel tuning variables may help

Just out of curiosity, what changed after 2.6.16 which makes these
variables irrelevant? I still set these even on modern kernels but
perhaps I am wasting my time.

-- 
Tracy Reed
http://tracyreed.org
