Hi!

I wonder whether anybody has tried playing with network tuning parameters related 
to iSCSI. Candidates might be "net.core.?mem*". I've seen these settings on a 
database server, but I don't know what their intention actually is:

net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
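For context, these sysctls set the default and maximum per-socket receive (rmem) and send (wmem) buffer sizes in bytes. A minimal sketch of how one might inspect the current values and apply the ones above at runtime (requires root; treat the values as the example from the database server, not a recommendation):

```shell
# Inspect the current socket buffer limits:
sysctl net.core.rmem_default net.core.wmem_default \
       net.core.rmem_max net.core.wmem_max

# Apply the values quoted above at runtime (root required);
# to persist them, the same key=value lines would go in /etc/sysctl.conf:
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.wmem_default=262144
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=1048576
```

Note that rmem_max/wmem_max only cap what applications may request via setsockopt(); raising them does not by itself enlarge buffers for connections that use autotuning.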

Regards,
Ulrich

>>> Heinrich Langos <[email protected]> wrote on 09.09.2011 at 09:02 in
message <[email protected]>:
> On Thu, Sep 08, 2011 at 09:25:29PM -0500, Mike Christie wrote:
> > On 09/08/2011 09:23 PM, Mike Christie wrote:
> > > On 09/08/2011 04:36 PM, Mike Christie wrote:
> > >> On 09/08/2011 02:06 AM, Heinrich Langos wrote:
> > >>> Hi there,
> > >>>
> > >>> I am using the open-iscsi initiator to access a storage back end for
> > >>> my Xen based virtualization infrastructure.
> > >>> Since The current 3.0.x Linux kernel finally has everything that I
> > >>> need to run the host system (Dom0) without any additional patches I
> > >>> thought I'd give it a try and see if I can replace the 2.6.32 hosts
> > >>> that cause a lot of trouble when mixing Xen + iSCSI + multipath.
> > >>>
> > >>> This is raw "dd" throughput for reading ~30GB from an iSCSI storage
> > >>> via a dedicated 1 Gbit Ethernet link.
> > >>>
> > >>> 2.6.32 : 102 MB/s
> > >>> 2.6.38 : 100 MB/s
> > >>> 2.6.39 : 44 MB/s
> > >>> 3.0.1 : 43 MB/s
> > >>>
> > >>
> > >> I can replicate this now. For some reason I only see it with 1 gig. I
> > >> think my 10 gig setups that I have been testing with are limited by
> > >> something else.
> > >>
> > >> Doing git bisect now to track down the change that caused the problem.
> > >>
> 
> First of all thank you very much for taking a look at it. I was wondering if
> I was doing something strange, since I haven't seen anybody report anything
> similar and 2.6.39 has been out for a while now. But I guess iSCSI users tend
> to be corporate users, and they don't jump on every new kernel version.
> 
> > > I did not find anything really major in the iscsi code. But I did notice
> > > that if I just disable iptables throughput goes from about 5 MB/s back
> > > up to 85 MB/s.
> > > 
> > > If you disable iptables, do you see something similar?
> > > 
> > 
> > I also noticed that in 2.6.38 throughput would almost immediately start
> > at 80-90 MB/s, but with 2.6.39 it takes a while (maybe 10 seconds
> > sometimes) to ramp up.
> 
> I've noticed the opposite. Throughput starts high and goes down to the
> numbers reported after about 10 gigabytes. I've noticed this with all the
> kernels I've tested, but that is probably a result of the crude way of
> measuring that I use.
> 
> I run 'dd if=/dev/disk/by-path/... of=/dev/null bs=1024k &'
> and 'while kill -USR1 <dd-pid> ; do sleep 2 ; done' .
> Therefore I usually don't get to see the throughput during the first couple
> of seconds. I'm open to suggestions on how to improve this.
> 
> I took a look at ip_tables with kernel 3.0.1 (I'll check out the other
> versions later today.)
> If I rmmod iptable_filter, ip_tables and x_tables, performance starts
> out higher (over 80 MB/s instead of over 60) and drops down to around
> 63 MB/s instead of 43. Still not back up in the region it was before.
> 
> cheers
> -henrik