On Feb 29, 6:11 pm, "Ty! Boyack" <[EMAIL PROTECTED]> wrote:
> David Lee Lambert wrote:
> > I have an Ubuntu Gutsy system (using the open-iscsi 2.0-865 package)
> > connected to a LUN on a NetApp filer.  When a single process is
> > reading data from the filer,  I get about 4 MB/s read speed;  when two
> > processes are reading data,  each process gets at least 20 MB/s.
> I'm not quite in the same hardware situation, but I've seen similar
> slow-downs with the default read ahead buffer on the iscsi devices.
> At least on fedora, the default read ahead buffer is 256 512-byte
> blocks.  You can see this number with: [...]

I tried setting higher values of the readahead parameter.  I do get
better read performance for a single large file with a sufficiently
high value,  but performance reading a directory-tree is still
terrible.  I also measured smaller values;  there are some readahead
values for which performance is twenty times worse than what I was
complaining about (roughly 200 KB/s!), but at 7 sectors or less I
actually get better performance (15 MB/s) than the other system (10 MB/s).
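
For anyone repeating the sweep: I read and set the readahead with
blockdev.  The device name below is a placeholder;  substitute the
iSCSI-backed disk on your own system (needs root).

```shell
# /dev/sdb is a placeholder for the iSCSI-backed disk.
blockdev --getra /dev/sdb        # current readahead, in 512-byte sectors
blockdev --setra 1024 /dev/sdb   # e.g. 1024 sectors = 512 KB of readahead
```

The same value (in KB rather than sectors) is also visible in
/sys/block/sdb/queue/read_ahead_kb.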

I've posted my data and graphs at


Over the range from 0 to 250 sectors,  another system exhibits
increasing performance, topping out at 70 MB/s.  Thereafter,
increasing readahead only gives a very slight improvement.

Over that range,  the system I'm worried about gives terrible
performance for many values.  It's hard to see on the graph,  but
there is actually a plateau of 40 MB/s right around 180 sectors;
another of 6 MB/s around 256 sectors; and 14 MB/s at 0 sectors.  The
speed is repeatable (with about 10% variance) at any particular value
of readahead.

Going in the other direction,  readahead in the range of 1000 to 2000
gives over 40 MB/s reading a single large file,  and does not show
regions of really low performance.  This would meet our goal if we
wanted to back up single large files to tape,  since that seems to be
our tape drive's maximum write speed.

However,  raising the readahead does not improve performance when
tarring up a directory containing subdirectories and small files as
much as we would like.  It gives a two- to three-fold improvement, but
I still saw a maximum of 13 MB/s,  while the other system sustains
70 MB/s for the same tar.

Finally,  I've done testing of raw TCP and IP performance between the
two hosts using back-to-back netcat and "ping -f".  There's no packet
loss between the hosts,  and I can push 70 MB/s in one direction and
55 MB/s in the other.  "ping -f" reports about 60% packet loss going
to the NetApp from either host;  I don't know whether that's
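
For reference,  the raw-throughput numbers came from a netcat pipe
along these lines (the host name and port are placeholders, and the
listen flags vary between netcat variants):

```shell
# On the receiving host ("otherhost" is a placeholder):
#   nc -l -p 5001 > /dev/null

# On the sending host: push 1 GB of zeros through the wire and time it.
time dd if=/dev/zero bs=1M count=1024 | nc otherhost 5001
```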

The system that works acceptably has a 3 GHz CPU, 1 GB of RAM, and an
Intel 82572EI network adapter.

The system that has problems has a 2.4 GHz CPU, 512 MB of RAM, and a
Linksys adapter (uses the r8169 driver).

For the single-file test,  I used 'dd' to read a 2 GB file.
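
A scaled-down sketch of that test (the path and size here are
placeholders;  the real run read an existing 2 GB file on the LUN):

```shell
# Create a small stand-in file (the real test read a 2 GB file).
dd if=/dev/zero of=/tmp/ra_test.bin bs=1M count=64 2>/dev/null

# On a real run, drop the page cache first so readahead actually
# matters (needs root):  echo 3 > /proc/sys/vm/drop_caches

# Sequential read; GNU dd reports throughput on stderr when it finishes.
dd if=/tmp/ra_test.bin of=/dev/null bs=1M
```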

For the tar test,  I used exactly the same set of files on both
computers:  about 1 GB of images, PDF files, and Microsoft Office
documents in a four-deep directory hierarchy.  On the problem
computer, the LUN was split into two partitions,  one formatted with
ext3fs and the other with reiserfs,  and the same files were present
on each partition.  On the non-problem computer,  the filesystem is

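The tar timing itself is a sketch like the following (paths and sizes
are scaled-down placeholders;  the real tree was about 1 GB on the LUN):

```shell
# Build a tiny stand-in tree (substitute the real tree on the LUN).
mkdir -p /tmp/tartest/a/b
head -c 1048576 /dev/zero > /tmp/tartest/a/file1
head -c 1048576 /dev/zero > /tmp/tartest/a/b/file2

# Stream the archive through a pipe and count bytes.  Don't write the
# archive straight to /dev/null: GNU tar detects that archive name and
# skips reading the file data, which would invalidate the timing.
time tar cf - /tmp/tartest | wc -c | tee /tmp/tartest_bytes
```
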
Naturally,  I don't expect exactly the same performance from a 20%
slower machine;  but I wouldn't expect the slowdowns I'm seeing.  Any
suggestions for how to get performance for a single tar process up
closer to the tape-drive speed (40 MB/s)?


You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
