Hi Mike,

I use the default configuration of the open-iscsi initiator and IET,
except that I changed the NOP interval to 500 s and IET's
MaxRecvDataSegmentLength to 262144 (the default is 8192).
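For reference, these are the lines I mean (a sketch from my setup; exact
parameter names and file paths may differ with your open-iscsi / IET
versions):

# /etc/iscsi/iscsid.conf (open-iscsi initiator side)
node.conn[0].timeo.noop_out_interval = 500
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144

# /etc/ietd.conf (IET target side)
MaxRecvDataSegmentLength 262144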

And the network I'm using is a straight-through cable between two
laptops. I'm not sure whether the NICs support or choose to use jumbo
frames.
But as you can see in my previous post, the TCP segments carrying the
iperf traffic were mostly 65000+ bytes, while over 90% of the iSCSI
ones were only 1448 bytes. So TCP apparently does support large
segments, but for some reason iSCSI (or TCP underneath it) sticks to
the small ones...
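A side note on the 1448-byte figure, in case it matters: on a standard
1500-byte Ethernet MTU, 1448 is exactly the TCP MSS once the IPv4
header, the TCP header, and the TCP timestamp option (on by default in
Linux) are subtracted. The arithmetic, just as a sanity check:

```shell
# MSS on a 1500-byte MTU link with Linux's default TCP timestamp option:
# 1500 (MTU) - 20 (IPv4 header) - 20 (TCP header) - 12 (timestamp option)
mss=$((1500 - 20 - 20 - 12))
echo "$mss"   # prints 1448
```

If I understand it right, the 65000+ byte "segments" in the iperf
capture are most likely TSO/GSO super-segments that Wireshark sees
before the NIC slices them down to wire size, so on the wire both flows
would be limited by the 1500-byte MTU either way.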

Do you think there are some settings I could play with to get iSCSI and
TCP to use the large segments?
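These are the knobs I was planning to poke at myself (a sketch only;
I'm assuming the interface is eth0 here, so substitute yours, and I
haven't verified any of this on these particular NICs):

```shell
# Check whether the NIC advertises/enables segmentation offload
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'

# Turn TSO/GSO on if they are off
ethtool -K eth0 tso on
ethtool -K eth0 gso on

# Check the current MTU, and raise it if both NICs on the
# crossover link actually support jumbo frames
ip link show eth0
ip link set dev eth0 mtu 9000
```

Does that look like the right direction, or is there something on the
iSCSI side itself I should be tuning instead?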

Thanks a lot!


On Jan 6, 12:03 pm, Mike Christie <micha...@cs.wisc.edu> wrote:
> On 01/04/2010 08:54 AM, Jack Z wrote:
> > Then I used Wireshark to grab the traces of iSCSI and iperf and I
> > found lots of iSCSI PDUs were divided into TCP segments of 1448 bytes
> > but with iperf TCP segments could be as large as 65000+ bytes.
> > I first thought this was because of the small default value (8192) for
> > MaxRecvDataSegmentLength. So I increased that value to 262144. But in
> > a later test with 16ms rtt, I found the iSCSI throughput was only
> > improved by 0.7 MB/s and a lot of iSCSI PDUs were still divided into
> > 1448 byte long TCP segments... So I think MaxRecvDataSegmentLength may
> > not be the reason.
> Are you using jumbo frames?