On Wed, Apr 13, 2011 at 1:58 AM, Graves, Aaron <[email protected]> wrote:
> Hello list,
>
> I have a layer-2 VSwitch attached to a 1GB OSA-3.
>
> Understanding that "it depends", what ballpark network throughput have you 
> seen for an ftp
>
> a)      Between instances on the same VSwitch
> b)      From an instance on the VSwitch to a distributed system in the same 
> datacenter

Aaron,

As you will realize, throughput is determined by the slowest factor in
a long chain. In many cases the weakest link is determined by CPU, so
throughput depends on the availability of CPU cycles for the guest.
That makes it useful to determine the CPU cost per MB and extrapolate
using the virtual machine share.

My ballpark figures on a z9 for a transfer rate of 1 MB/s (at MTU
1500) between two guests on the same VSWITCH are:
 1% for Linux virtual time sending the data
 1% for CP overhead sending the data
 3% for Linux virtual time receiving the data
 1% for CP overhead receiving the data
Since the transfer is from one guest to another, 2% of CPU is used by
the sending guest and 4% by the receiving guest. The difference between
layer 2 and layer 3 was about 0.5%, due to the shorter code path in
Linux. This means that when you have one IFL and do no real processing
with the data, you should expect around 20 MB/s. Using a 9K MTU you can
reduce the CPU cost by a factor of 5 (so most of the cost was per
packet rather than per byte). YMMV on z10 or z196.
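To make the extrapolation concrete, here is a rough back-of-envelope
sketch in Python using the figures above. The numbers are the ballpark
values from my z9 measurements, not something to rely on for your
configuration:

```python
# Back-of-envelope extrapolation of throughput from CPU cost per MB/s.
# Figures are the ballpark z9 numbers from the text (MTU 1500,
# guest-to-guest over VSWITCH); treat them as assumptions.

send_cost = 0.01 + 0.01   # Linux virtual time + CP overhead per MB/s sent
recv_cost = 0.03 + 0.01   # Linux virtual time + CP overhead per MB/s received

# With a single IFL shared by both guests, sender and receiver
# costs add up to about 6% of the engine per MB/s transferred.
total_cost = send_cost + recv_cost
throughput_mtu1500 = 1.0 / total_cost        # ~16.7 MB/s, i.e. "around 20"

# A 9K MTU removes most of the per-packet cost (roughly a factor of 5).
throughput_mtu9000 = 1.0 / (total_cost / 5)  # ~83 MB/s

print(f"one IFL, MTU 1500: ~{throughput_mtu1500:.0f} MB/s")
print(f"one IFL, MTU 9000: ~{throughput_mtu9000:.0f} MB/s")
```

Scale the result by the virtual machine's share of the engine if the
guest cannot get a whole IFL to itself.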

When you transfer between a guest and a remote system, these numbers
apply as well (you carry only the sending or the receiving side). On
top of that comes the CPU usage in CP for VSWITCH processing (which is
noticeable when receiving data).

An alternative is to use attached OSA devices for Linux. Thanks to
QEBSM not only does the CP portion completely disappear, but also the
path inside Linux is much cheaper. Two guests exchange data through a
shared OSA adapter (not hitting the wire) at MTU 1500 for less than 1%
CPU to send and 1% to receive. With enough CPU capacity you could now
get to 100 MB/s and run into the bandwidth limit of a 1 Gb/s network.
For those data rates you also need to tweak the QDIO buffers.
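For the QDIO buffer tweak, something along these lines should work with
the qeth driver (a sketch only: the device address 0.0.f500 is a
placeholder, and you should check the attribute against your distro's
s390 device driver documentation):

```shell
# Hedged sketch: raise the number of QDIO inbound buffers for a
# directly attached OSA via the qeth buffer_count sysfs attribute.
# The device must be offline while the value is changed.
echo 0   > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online
echo 128 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/buffer_count
echo 1   > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online
```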

As long as you don't transfer huge amounts of data, using VSWITCH is
very attractive because it handles various other things for you. When
each Linux guest has to do its own fail-over, etc., you burn the
cycles again in other places. But for receiving large amounts of data,
I tend to recommend using attached OSA devices. This spoils the fun,
IMHO, for LACP support in VSWITCH, since that was meant for high data
volumes. As a customer stated, "OSA cards are cheaper than IFLs", when
he used Linux bonding rather than LACP.

Hope you did not expect an easy answer. If you did, there's the "it
depends" as well.

Rob
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/
