Re: Poor read performance with IET

2009-11-17 Thread Gopu Krishnan
Pryker,

Test your environment with NULL I/O first. Run the same read and write tests
against a nullio LUN and check the throughput, which should reach around
100MB/s. If it does not, the problem lies with IET or your environment;
otherwise it is more likely your disk I/O or the back-end driver.
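
For example, a throwaway nullio target in ietd.conf might look something like
this (the IQN and size here are only placeholders; a nullio LUN has no backing
store, so the test exercises IET and the network path without touching the
disks):

Target iqn.2009-11.net.storage:titan.nulltest
  Lun 0 Sectors=2097152,Type=nullio

Log in to that target from the Windows initiator and repeat the same
sequential read and write tests against it.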

Thanks
Gopala krishnan Varatharajan.

On Tue, Nov 17, 2009 at 8:51 PM, pryker  wrote:

> I am currently running IET on a CentOS 5.4 server with the following
> kernel:
>
> Linux titan1 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:21:56 EDT 2009
> x86_64 x86_64 x86_64 GNU/Linux
>
> The server is a dual quad-core 2.8 GHz system with 16 GB of RAM.  I am
> also using Coraid disk shelves via AoE for the block storage that I am
> offering up as an iSCSI target.
>
> I am running v 0.4.17 of IET.
>
> I am getting very good write performance but lousy read performance.
> Performing a simple sequential write to the iSCSI target I get 94
> megabytes per second, but with reads I am only getting 12.4 megabytes
> per second.
>
> My ietd.conf looks like this:
>
> Target iqn.2009-11.net.storage:titan.diskshelf1.e1.2
>  Lun 1 Path=/dev/etherd/e1.2,Type=blockio
>  Alias e1.2
>  MaxConnections 1
>  InitialR2T No
>  ImmediateData Yes
>  MaxRecvDataSegmentLength 262144
>  MaxXmitDataSegmentLength 262144
>
> I have also made the following tweaks to TCP/IP:
>
> sysctl net.ipv4.tcp_rmem="100 100 100"
> sysctl net.ipv4.tcp_wmem="100 100 100"
> sysctl net.ipv4.tcp_tw_recycle=1
> sysctl net.ipv4.tcp_tw_reuse=1
> sysctl net.core.rmem_max=524287
> sysctl net.core.wmem_max=524287
> sysctl net.core.wmem_default=524287
> sysctl net.core.optmem_max=524287
> sysctl net.core.netdev_max_backlog=30
>
> I am using Broadcom cards in the iSCSI target server and have enabled
> jumbo frames on them (MTU 9000).  They are connected directly to a
> Windows server, and I am accessing the iSCSI target with the MS iSCSI
> initiator.  The NICs on the Windows server are also set to an MTU of
> 9000.  There is no switch in between; the two machines are directly
> connected.
>
> I also notice that the load average on the Linux box climbs into the
> 7s and 8s when I push the system with multiple simultaneous transfers.
>
> Any feedback on what I might be missing here would be great!
>
> Thanks
>
> Phil
>
>


-- 
Regards

Gopu.


Re: Poor read performance with IET

2009-11-17 Thread Mike Christie
pryker wrote:
> I am currently running IET on a CentOS 5.4 server with the following
> kernel:
> 

...
> I am using Broadcom cards in the iSCSI target server and have enabled
> jumbo frames on them (MTU 9000).  They are connected directly to a
> Windows server, and I am accessing the iSCSI target with the MS iSCSI
> initiator.  The NICs on the Windows server are also set to an MTU of
> 9000.  There is no switch in between; the two machines are directly
> connected.

If you are using IET with the MS initiator then you should send mail to
iscsitarget-de...@lists.sourceforge.net

They maintain IET so there are a lot more experts there.





RE: Poor read performance with IET

2009-11-17 Thread pcooper
What type of drives and what type of RAID controller are you using? Also, if
these are Seagate drives, look at the back of the drives for a jumper setting;
removing the jumper can give faster performance.
Regards,
Paul 

-----Original Message-----
From: pryker [mailto:pry...@gmail.com] 
Sent: Tuesday, November 17, 2009 10:21 AM
To: open-iscsi
Subject: Poor read performance with IET

I am currently running IET on a CentOS 5.4 server with the following
kernel:

Linux titan1 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:21:56 EDT 2009
x86_64 x86_64 x86_64 GNU/Linux

The server is a dual quad-core 2.8 GHz system with 16 GB of RAM.  I am
also using Coraid disk shelves via AoE for the block storage that I am
offering up as an iSCSI target.

I am running v 0.4.17 of IET.

I am getting very good write performance but lousy read performance.
Performing a simple sequential write to the iSCSI target I get 94
megabytes per second, but with reads I am only getting 12.4 megabytes
per second.

My ietd.conf looks like this:

Target iqn.2009-11.net.storage:titan.diskshelf1.e1.2
  Lun 1 Path=/dev/etherd/e1.2,Type=blockio
  Alias e1.2
  MaxConnections 1
  InitialR2T No
  ImmediateData Yes
  MaxRecvDataSegmentLength 262144
  MaxXmitDataSegmentLength 262144

I have also made the following tweaks to TCP/IP:

sysctl net.ipv4.tcp_rmem="100 100 100"
sysctl net.ipv4.tcp_wmem="100 100 100"
sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1
sysctl net.core.rmem_max=524287
sysctl net.core.wmem_max=524287
sysctl net.core.wmem_default=524287
sysctl net.core.optmem_max=524287
sysctl net.core.netdev_max_backlog=30

I am using Broadcom cards in the iSCSI target server and have enabled
jumbo frames on them (MTU 9000).  They are connected directly to a
Windows server, and I am accessing the iSCSI target with the MS iSCSI
initiator.  The NICs on the Windows server are also set to an MTU of
9000.  There is no switch in between; the two machines are directly
connected.

I also notice that the load average on the Linux box climbs into the
7s and 8s when I push the system with multiple simultaneous transfers.

Any feedback on what I might be missing here would be great!

Thanks

Phil





Poor read performance with IET

2009-11-17 Thread pryker
I am currently running IET on a CentOS 5.4 server with the following
kernel:

Linux titan1 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:21:56 EDT 2009
x86_64 x86_64 x86_64 GNU/Linux

The server is a dual quad-core 2.8 GHz system with 16 GB of RAM.  I am
also using Coraid disk shelves via AoE for the block storage that I am
offering up as an iSCSI target.

I am running v 0.4.17 of IET.

I am getting very good write performance but lousy read performance.
Performing a simple sequential write to the iSCSI target I get 94
megabytes per second, but with reads I am only getting 12.4 megabytes
per second.
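
If it helps, I can also test the raw AoE backing device directly on the
target box with a streaming read along the lines of the command below (the
block size and count are just illustrative; iflag=direct bypasses the page
cache), to see what the back end delivers before iSCSI is involved:

dd if=/dev/etherd/e1.2 of=/dev/null bs=1M count=1024 iflag=direct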

My ietd.conf looks like this:

Target iqn.2009-11.net.storage:titan.diskshelf1.e1.2
  Lun 1 Path=/dev/etherd/e1.2,Type=blockio
  Alias e1.2
  MaxConnections 1
  InitialR2T No
  ImmediateData Yes
  MaxRecvDataSegmentLength 262144
  MaxXmitDataSegmentLength 262144

I have also made the following tweaks to TCP/IP:

sysctl net.ipv4.tcp_rmem="100 100 100"
sysctl net.ipv4.tcp_wmem="100 100 100"
sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1
sysctl net.core.rmem_max=524287
sysctl net.core.wmem_max=524287
sysctl net.core.wmem_default=524287
sysctl net.core.optmem_max=524287
sysctl net.core.netdev_max_backlog=30

I am using Broadcom cards in the iSCSI target server and have enabled
jumbo frames on them (MTU 9000).  They are connected directly to a
Windows server, and I am accessing the iSCSI target with the MS iSCSI
initiator.  The NICs on the Windows server are also set to an MTU of
9000.  There is no switch in between; the two machines are directly
connected.

I also notice that the load average on the Linux box climbs into the
7s and 8s when I push the system with multiple simultaneous transfers.

Any feedback on what I might be missing here would be great!

Thanks

Phil
