I am currently running IET on a CentOS 5.4 server with the following kernel:

Linux titan1 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:21:56 EDT 2009
x86_64 x86_64 x86_64 GNU/Linux

The server is a dual quad-core 2.8 GHz system with 16 GB of RAM.  I am
also using Coraid disk shelves via AoE for the block storage that I am
offering up as an iSCSI target.

I am running v 0.4.17 of IET.

I am getting very good write performance but lousy read performance.
Performing a simple sequential write to the iSCSI target I get 94
megabytes per sec.  With reads I am only getting 12.4 megabytes per
sec.
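For concreteness, the sort of sequential test I mean can be sketched with
dd (TARGET here is just a placeholder; against a real device you would add
oflag=direct / iflag=direct so the page cache does not inflate the numbers):

```shell
# Sequential-throughput sketch.  TARGET is a placeholder: point it at the
# disk under test (destructive!), or leave the default to dry-run against
# a scratch file.
TARGET=${TARGET:-/tmp/seqtest.img}

# Write test: stream 64 MiB of zeros in 1 MiB blocks, fsync at the end.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

# Read test: stream it back, discarding the data; dd reports the
# throughput on stderr when it finishes.
dd if="$TARGET" of=/dev/null bs=1M
```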

My ietd.conf looks like this:

Target iqn.2009-11.net.storage:titan.diskshelf1.e1.2
  Lun 1 Path=/dev/etherd/e1.2,Type=blockio
  Alias e1.2
  MaxConnections 1
  InitialR2T No
  ImmediateData Yes
  MaxRecvDataSegmentLength 262144
  MaxXmitDataSegmentLength 262144

I have also made the following tweaks to tcp/ip:

sysctl net.ipv4.tcp_rmem="1000000 1000000 1000000"
sysctl net.ipv4.tcp_wmem="1000000 1000000 1000000"
sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1
sysctl net.core.rmem_max=524287
sysctl net.core.wmem_max=524287
sysctl net.core.wmem_default=524287
sysctl net.core.optmem_max=524287
sysctl net.core.netdev_max_backlog=300000
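One note on my own settings above: with min, default, and max of tcp_rmem
and tcp_wmem all set to the same value, the kernel's TCP buffer autotuning
has no room to work.  A more conventional shape (the values below are only
an illustration, not a recommendation) keeps a small min, a moderate
default, and a large max, with the core limits in the same ballpark:

```
# Illustrative only -- "min default max" in bytes.  Equal min/default/max
# (as above) pins the buffer size and effectively disables autotuning.
sysctl net.ipv4.tcp_rmem="4096 87380 4194304"
sysctl net.ipv4.tcp_wmem="4096 65536 4194304"
# Keep the core socket limits at least as large as the TCP maxima.
sysctl net.core.rmem_max=4194304
sysctl net.core.wmem_max=4194304
```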

I am using Broadcom NICs in the iSCSI target server.  I have enabled
jumbo frames on them (MTU 9000).  They are connected directly into a
Windows server, with no switch in between, and I am accessing the
iSCSI target with the Microsoft iSCSI initiator.  The NICs on the
Windows server are also set to an MTU of 9000.
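With jumbo frames it is worth verifying that 9000-byte frames actually
survive the whole path, not just that the MTU is set on each end.  A quick
sketch of the check I use (interface name and initiator address are
placeholders):

```shell
# Sketch: confirm jumbo frames really work end to end.
check_jumbo() {
  local nic=$1 peer=$2
  # Confirm the local NIC's configured MTU.
  ip -o link show "$nic" | grep -o 'mtu [0-9]*'
  # 8972 = 9000 - 20 (IPv4 header) - 8 (ICMP header).  -M do sets the
  # don't-fragment bit, so this fails unless every hop and both NICs
  # genuinely pass 9000-byte frames.
  ping -c 3 -M do -s 8972 "$peer"
}
# e.g.  check_jumbo eth1 192.168.0.2   (placeholder NIC name and address)
```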

I also notice that load averages on the Linux box will get into the
7s and 8s when I try pushing the system by performing multiple
simultaneous transfers.
Any feedback on what I might be missing here would be great!




You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.