> I know you have done a lot of testing with LVS and I would like to
> get some of your benchmarks so that I can learn more about how LVS
> behaves on Linux for s/390. Could I get some of the benchmark
> numbers you have gotten from testing tunneling on Linux for s/390?
> It would really benefit me and my work with customers, so if you can
> send some of those numbers it would really help a lot of people. I
> am willing to send them to the LVS forum (which I belong to) and let
> them know how bad their performance is.
One slight misunderstanding here: it's not LVS that is causing the
performance problem, but the more general requirement of layer 2
bridging for guest LANs crossing multiple physical systems that LVS
happens to trigger. LVS within a single system is relatively painless,
if somewhat limited in comparison to other clustering solutions. Without
knowing what you're asking LVS to do, it's somewhat difficult to compare
techniques -- if all you need is workload distribution a la Local
Director or Lotus Edge Server, then LVS is probably an OK solution. It
depends a LOT on what you are trying to accomplish.
The benchmarks are relatively simple to construct for layer 2 forwarding
testing. You need two physical systems (or two LPARs) with an IP
connection between the systems. Set up a GLAN on each system and attach
two Linux guests to each GLAN as shown here:
A --- GLAN1 --- B1          B2 --- GLAN2 --- C
                |           |
                +-IP tunnel-+
Set up host A using a non-IP based protocol (Netware emulation or
Columbia Appleshare file service in native mode are good choices), and
set up host C as a client. Configure B1 and B2 as encapsulating bridges
(see the Linux Router HOWTO). Create 10MB, 100MB, 1GB, and 10GB files
and copy each between A and C as the tiers of the test. Use VM
accounting records or your favorite performance monitoring tool to
measure the resource utilization of B1 and B2.
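If you want to script the tiers, a minimal sketch along these lines
would do. The DEST mount point is hypothetical (wherever the share
exported toward C is mounted on the sending side), and the real
resource numbers for B1 and B2 still come from the accounting records,
not from this wall-clock timing:

    #!/usr/bin/env python
    # Rough sketch: build the tiered test files and time each copy.
    # Run on host A; DEST is a hypothetical mount of the share C uses.
    import os, shutil, time

    TIERS = {"10MB": 10 * 2**20, "100MB": 100 * 2**20,
             "1GB": 2**30, "10GB": 10 * 2**30}
    DEST = "/mnt/testshare"   # hypothetical mount point

    for name, size in sorted(TIERS.items(), key=lambda t: t[1]):
        src = "tier-%s.bin" % name
        with open(src, "wb") as f:
            # Pseudo-random payload so compression on the link
            # can't shrink what B1/B2 have to encapsulate.
            left = size
            while left > 0:
                chunk = min(left, 2**20)
                f.write(os.urandom(chunk))
                left -= chunk
        start = time.time()
        shutil.copy(src, os.path.join(DEST, src))
        secs = time.time() - start
        print("%s: %.1fs, %.2f MB/s" % (name, secs,
                                        size / 2**20 / secs))

Random data keeps the payload incompressible, so the bridges see the
full datagram volume for each tier.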
> As a matter of fact, we have some people working with LVS in
> production. And by the way, just as your customer with 8000 Linux
> images running under VM is not a reference (and unknown to
> everyone), neither is this customer, and it is very happy with the
> performance.
Good. There is a place for LVS -- it does what it does well -- but it's
not the answer for everything, and if you're replacing some of the more
sophisticated clustering techniques that are in the field already, it's
not a complete replacement. I'll send you an NDA in a separate note
and we can work within the boundaries of that -- gotta keep the lawyers
fed, or they get cranky and awkward, and no one wants to ask them to tea
any more...8-)
Without naming names, as you might guess, the majority of the overhead
is burned in B1 and B2. For the 100MB test and up, B1 and B2 show
averages of 30-35% of a G6 IFL for the duration of the test, which
breaks down primarily to encapsulation and segmentation of the non-IP
datagrams on one end and the corresponding reassembly and
de-encapsulation on the other. In the 100MB test, systems A and C
consumed less than 1% of a CPU. If you add SSL or other
authentication/encryption stuff, the tunneling gets REALLY ugly.
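If you want to reduce figures like that to something comparable across
techniques, the arithmetic is just utilization times duration over
bytes moved. Here is a toy version; the 600-second duration and the
32% per-bridge figure are placeholders, not measurements:

    # Toy cost model: IFL CPU-seconds burned per GB forwarded.
    # All inputs are placeholders; substitute your own VM
    # accounting numbers for utilization, duration, and size.
    def cpu_seconds_per_gb(avg_util, duration_s, gb_moved):
        return avg_util * duration_s / gb_moved

    # Two bridges at an assumed 32% of an IFL for a 600s, 1GB copy:
    total = 2 * cpu_seconds_per_gb(0.32, 600.0, 1.0)
    print("~%.0f IFL CPU-seconds per GB through the tunnel" % total)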
-- db