On Mon, Apr 07, 2008 at 05:14:19PM -0400, Konrad Rzeszutek wrote:
> 
> On Mon, Apr 07, 2008 at 10:24:56PM +0300, Pasi Kärkkäinen wrote:
> > 
> > Hello list!
> > 
> > I have lvm volume (from a single sata disk) exported with IET like this:
> > 
> > Target iqn.2001-04.com.test:vol1
> >     Lun 0 Path=/dev/vg1/test1,Type=fileio
> >     InitialR2T              No
> >     ImmediateData           Yes
> > 
> > I have not modified read-ahead or other disk settings on the target server.
> > target is using deadline elevator/scheduler.
> > 
> > open-iscsi initiator (CentOS 5.1 2.6.18-53.1.14.el5PAE and the default
> > open-iscsi that comes with the distro) sees that volume as /dev/sda.
> > 
> > I'm testing performance with different read-ahead settings on the initiator.
> > 
> > Can somebody explain why I see these throughput changes? 
> 
> What is the MTU set on your NICs? If it is 1500 (the default), a 4KB transfer
> will have to be divided into four TCP packets and re-assembled on the target
> side. This increases the amount of work the TCP stack has to perform.
> 

Hmm.. I'm using read-ahead values from 128 kB (256 sectors) to 2 MB (4096
sectors), so every read-ahead setting will cause many TCP packets anyway.. ?
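To put rough numbers on the packet-count argument, here's a back-of-the-envelope sketch. It assumes MSS = MTU - 40 bytes (20-byte IP + 20-byte TCP headers, no options); real overhead varies, which is why the exact count can differ from the figure quoted above:

```shell
# Hypothetical sketch: TCP segments needed for one I/O at a given MTU.
# Assumes MSS = MTU - 40 (20-byte IP header + 20-byte TCP header, no options).
segments() {
    mtu=$1; io=$2
    mss=$((mtu - 40))
    echo $(( (io + mss - 1) / mss ))    # ceiling division
}

segments 1500 4096     # 4 kB block at standard MTU -> 3
segments 9000 4096     # 4 kB block at jumbo MTU    -> 1
```

A 2 MB read-ahead at MTU 1500 is on the order of 1400+ segments either way, so jumbo frames mainly reduce per-packet overhead (interrupts, header processing) rather than change the overall picture.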

> Try setting the frame size to jumbo frames - but be careful, as your switch
> needs to support jumbo frames. Or you can use a cross-over cable for this
> test and eliminate the switch from the picture.
> 

I could try jumbo frames and see if they have any effect..

I guess I should also try dd with different read-ahead settings locally on
the target to see if I can see the same behaviour there.. 
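A rough sketch of that local test (an assumption about your setup, not a tested script): it takes /dev/vg1/test1 from the target config above (on the initiator side /dev/sda would be the device instead), needs root, and drops the page cache between runs so each dd actually hits the disk:

```shell
#!/bin/sh
# Sweep read-ahead settings (in 512-byte sectors) and time a sequential read.
# DEV is the LV from the IET config above -- adjust for your setup; run as root.
DEV=/dev/vg1/test1

for ra in 256 512 1024 2048 4096; do          # 128 kB .. 2 MB
    blockdev --setra "$ra" "$DEV"             # set read-ahead for the device
    echo 3 > /proc/sys/vm/drop_caches         # flush page cache between runs
    dd if="$DEV" of=/dev/null bs=1M count=512 # dd reports throughput on stderr
done
```

If the same throughput pattern shows up locally, the read-ahead behaviour is on the target's block layer rather than in iSCSI/TCP.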

Thanks for the reply!

-- Pasi
