Folks,

I've been struggling with a similar problem for a while now.  My write
speeds are around 110MB/s whereas, even following A. Eijkhoudt's advice,
I've only been able to get 34MB/s reads.

The actual setup is an experimental play rig where I'm trying to see
how usable/practical it is to provide an HA Linux client on top of two
Xen Linux servers.  To provide HA storage I was playing with having
the two servers export LVs via iSCSI and having the client RAID1 these
for redundancy.  So the client will see two exported targets - one
being on the same physical machine but on the virtual network - the
other on the alternate server across the physical network.  The idea
then is to get the client to RAID1 the exported targets for HA and
therefore have the ability to migrate the client between Xen servers
for client HA and hopefully get away from any single points of
failure.  The setup works as expected except for read speeds.

All physical hardware is the same.  All machines are standard Sun
v20zs with 4 cores and 8GB RAM, Broadcom GE NICs, the standard
internal U320 SCSI HDDs, and external eSATA II drives connected via a
Silicon Image eSATA controller.

All machines are running amd64 Gentoo Linux using a 2.6.18-xen kernel
and Xen-3.3.  I'm unable to provide IET and Open-iSCSI versions at the
moment as I'm away from the play rig.  However, I can provide them if
anyone is interested.

* The target is IET running on a Xen dom0 (Gentoo Linux) with the
internal drives in an S/W RAID0 presented via LVM2.  The iSCSI target
drive is an LV exported via blockio.  Read speeds directly off the
internal SCSI drives' S/W RAID0 md0 are in the range of ~140-150MB/s,
with write speeds of ~90-100MB/s.  Read & write speeds off the LV are
identical to the raw md0 speeds.  Read speeds directly off the
external eSATA II drives' S/W RAID0 md1 are in the range of ~450MB/s,
with write speeds of ~220MB/s.  Read & write speeds off the LV for the
external eSATA II drives are identical to the raw md1 speeds.
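For reference, the export on the target side is done roughly like this
(a minimal /etc/ietd.conf sketch - the IQN and LV path are made-up
placeholders, not the actual names used):

  # /etc/ietd.conf on the dom0 target (illustrative names only)
  Target iqn.2008-09.lab.example:client.lv0
      # export the LV as LUN 0 via blockio, bypassing the target's page cache
      Lun 0 Path=/dev/vg0/client_lv0,Type=blockio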

* The network is GE with a 9K MTU and at the moment is just a
cross-over cable between the servers.  Tests using iperf show
consistent TCP throughput of 118MB/s over the wire.  Tests using dd &
netcat also show 117MB/s for (RAID0 md0 svr 1)->network->(/dev/null
svr 2) and 95MB/s for (RAID0 md0 svr 1)->network->(RAID0 md0 svr 2).
The speeds are identical for (Xen client 1)->(Xen dom0 svr
1)->network->(Xen dom0 svr 2)->(Xen client 2), so Xen doesn't seem to
penalize these scenarios.
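The dd & netcat runs were along these lines (hostnames, port and sizes
are placeholders, and netcat's listen syntax varies between versions):

  # on svr 2 (receiver): discard everything arriving on TCP port 5001
  nc -l -p 5001 > /dev/null

  # on svr 1 (sender): stream 4G of sequential reads off md0 across the wire
  dd if=/dev/md0 bs=1M count=4096 | nc svr2 5001

  # variant writing into the receiver's md0 instead of /dev/null
  nc -l -p 5001 | dd of=/dev/md0 bs=1M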

* The initiator is Open-iSCSI running on the Xen client.  The exported
targets are assembled into an S/W RAID1, and the resulting md1 is then
divided using client-side LVM2, presenting LVs to the client.
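For completeness, the initiator-side stack is assembled roughly as
follows (a sketch only - the portal addresses, device names and LV
sizes are placeholders, not the actual values from the rig):

  # discover and log in to the two exported targets (one per server)
  iscsiadm -m discovery -t sendtargets -p 10.0.0.1
  iscsiadm -m discovery -t sendtargets -p 10.0.0.2
  iscsiadm -m node -L all

  # mirror the two iSCSI disks, then carve the mirror up with LVM2
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  pvcreate /dev/md1
  vgcreate vg_client /dev/md1
  lvcreate -L 20G -n data vg_client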

So the software stack is quite layered.  However, this doesn't seem to
be a problem for writes (110MB/s), just reads (30-34MB/s).

Initially, writes were at ~110MB/s but reads were ~1MB/s.

Setting a read-ahead of 8K, an MTU of 9K, and the noop scheduler for
all block devices at all levels on both the target and initiator
pulled the read speed up to ~30MB/s.  This is with the recommended TCP
tweaks (e.g. RX/TX buffer sizes of 16M etc.).
Increasing the read-ahead to 16K for all block devices at all levels on
both sides resulted in read speeds of ~34MB/s.  Increasing read-ahead
and/or the TCP tweaks further doesn't seem to make any further
improvement.  Write speed is unaffected.
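In concrete terms the tuning above amounts to something like the
following, applied to every block device in the stack on both sides
(device names and exact values are illustrative; read-ahead here is in
512-byte sectors, as blockdev counts them):

  # 16K-sector (8MB) read-ahead and the noop elevator on each layer
  blockdev --setra 16384 /dev/md1
  echo noop > /sys/block/sdb/queue/scheduler

  # the usual TCP buffer tweaks (16MB max) and jumbo frames
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
  ifconfig eth1 mtu 9000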

Playing with the IET and Open-iSCSI config files and changing the
read-ahead and max recv/xmit block sizes to anything other than the
defaults resulted in 18KB/s writes and reads.  However, turning on
immediate data produces an initial burst of 100MB/s for about 2s and
then drops back to ~30MB/s.  Changing the IET compatibility setting
essentially kills everything, i.e. any read or write hangs with no
messages in syslog.
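For anyone wanting to compare notes, the Open-iSCSI side of those knobs
lives in iscsid.conf (or the per-node records) and looks like this -
the values shown are purely illustrative, not recommendations:

  # /etc/iscsi/iscsid.conf (or per-node via iscsiadm --op update)
  node.session.iscsi.ImmediateData = Yes
  node.session.iscsi.InitialR2T = No
  node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
  node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144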

Bonnie++ tests on the client md1 created from the exported targets are
a mixed bag - the first couple of tests vary greatly but then settle
down to also show ~100MB/s writes and ~34MB/s reads.
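The Bonnie++ runs were plain sequential tests of this form (the mount
point is a placeholder; 16g is twice the 8GB of RAM so the page cache
can't mask the results):

  bonnie++ -d /mnt/test -s 16g -u root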

A Linux kernel rebuild of the md1 created from the exported targets
results in an average rebuild speed of ~90MB/s, while a check of the
md1 results in an average check speed of ~26MB/s.
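The check figure comes from kicking the array manually and watching
/proc/mdstat, roughly like this (the md sync speed caps are also worth
raising so they aren't the limiting factor - the values are examples):

  # trigger a consistency check of the mirror and watch its progress
  echo check > /sys/block/md1/md/sync_action
  cat /proc/mdstat

  # make sure the md speed limits aren't throttling the rebuild/check
  sysctl -w dev.raid.speed_limit_min=50000
  sysctl -w dev.raid.speed_limit_max=500000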

Breaking the client S/W RAID1 apart and testing both targets
individually results in write speeds of ~70-90MB/s to both and read
speeds of ~20-26MB/s from both, no matter where they are located -
either via the virtual network on the same physical box or across the
physical network to the alternate box.
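The individual-target numbers come from degrading the mirror and
hitting each iSCSI disk directly, roughly like this (device names are
placeholders, and the raw write obviously destroys whatever is on that
leg):

  # drop one leg out of the mirror
  mdadm /dev/md1 --fail /dev/sdc --remove /dev/sdc

  # raw sequential read and write against a single exported target
  dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct
  dd if=/dev/zero of=/dev/sdb bs=1M count=4096 oflag=direct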

About the only thing I have not tried so far is modifying the
Cyl/Hd/Sec values on the client for the exported iSCSI targets to see
if alignment is a problem.  However, I'm unsure at the moment what
values I'd have to set these to, given the multiple layers.


Does anyone have any further ideas, comments or suggestions?  Has
anyone tried this before?

Thanks for your time.

Adrian Head.

----------
From: "A. Eijkhoudt" <[EMAIL PROTECTED]>
Date: Sep 13, 1:42 am
Subject: Extremely slow read performance, but write speeds are near-
perfect!
To: open-iscsi


Thank you all very much!

The suggested combinations of network configuration & iSCSI settings
seem to have solved the problem, even though initially the speed
fluctuated very heavily (no idea why ;)). I've run 3 consecutive full
Bonnie++ tests now and tried different combinations, and these are the
best results I could get so far:

Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
abcdefghijklmn  16G   423  99 105115  22 35580  18   756  99 113453  31 367.5   7
Latency             20001us     500ms     520ms   20001us     110ms   90001us
Version 1.93c       ------Sequential Create------ --------Random Create--------
abcdefghijklmn      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4106  29 +++++ +++  3244  10   888  95 +++++ +++  2829  12

The system load now sticks at 1.00 (1 CPU/core used at 100%) as well.

Hopefully this discussion/thread will help others in the future!
