Tim Bunce writes:
> On Tue, Feb 13, 2001 at 11:28:20AM +0000, Malcolm Beattie wrote:
> > 
> > you'll see that IBM reckons you can get down to $500 per server
> > (disk not included) by putting 2500 instances on a brand new fancy
> > $1.2 million z900.
> 
> Assuming all the virtual linux servers were fully loaded with tasks
> (say apache+mod_perl as an example)...  What kind of traditional Intel
> platform performance would each virtual linux instance be equivalent to?
> 
> e.g., CPU: ~600MHz PIII?

Heck, if IBM would just get a test system on our floor like they've
been promising for months, I'd be able to find out. It depends on how
the load spreads between the servers. It's much the same problem as
determining how many users you can put on a large multi-user system
and how much real disk space you need. Say you have 10000 users on a
machine: it may be that only 500 or 1000 are active at any one time.
It depends on the environment. Similarly, in many environments (POP
servers, some web hosting, some home filestore) you can give people
large quotas because, on average, each person uses only a small
fraction of theirs.

The problems are similar (but not the same) for running multiple
virtual servers on one system. They're similar because you have
overallocation and competition for shared resources, with potentially
bursty and asymmetric behaviour that the system has to smooth out.
They're different because the same rule-of-thumb numbers don't apply
(or at least they apply only "one level up"). If you have, say,
100 systems with 1000 users/clients/whatever each, then you'd get
the same "hit rate" (in some abstract sense) from 1 user of each
virtual server doing 1 hit as from 100 users hitting only one server.
In the former case, you've got the system overhead of using memory
and scheduling for 100 different kernels; in the latter case, 99 of
the kernels are sitting idle, paged out, unscheduled and barely
affecting the machine at all.
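The distinction can be put in a few lines of arithmetic. This is just
a sketch of the example numbers above (100 servers, 1 hit each, versus
1 server taking 100 hits); the function name is mine, not anything
from a real tool:

```python
# Same aggregate load, very different distribution across kernels.
# Numbers are the illustrative ones from the text, nothing measured.

def workload(active_servers, hits_per_server):
    """Return (aggregate hits, number of guest kernels that must be
    resident and scheduled to serve them)."""
    total_hits = active_servers * hits_per_server
    return total_hits, active_servers

spread = workload(active_servers=100, hits_per_server=1)
focused = workload(active_servers=1, hits_per_server=100)

# Identical aggregate hit rate...
assert spread[0] == focused[0] == 100
# ...but 100 kernels consuming memory and scheduler time in one case,
# and only 1 in the other (the rest stay idle and paged out).
assert spread[1] == 100 and focused[1] == 1
```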

All I can do is basic sums on the hardware figures (available in my
slides), such as one G6/z900 CPU having roughly 16 times the cache and
memory bandwidth of an Intel CPU and needing zero CPU for most of the
I/O path, which is all offloaded onto SAP/channels/OSA. Until IBM get
me that test system, my best guesstimate/hope is this: put 150 virtual
servers on a 3-way G5/G6 system with 16 channels, with 1 in 10 active
at any instant. Then, even if all the active systems happen to need
maximum CPU at the same time, each gets about 120MHz-worth of CPU and
the equivalent of fast-wide SCSI bandwidth to disk, except that
there's almost zero CPU cost for I/O to either disk or network. In
general, CPU, I/O and memory use will be scheduled across the entire
system, so bursty behaviour in any of them will be smoothed out.
That's the theory and, at the moment, I've convinced myself that it
could hold in practice too, but I've no first-hand evidence (other
than other big sites like Telia, the big German ISP and so on going
Linux/390).
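The 120MHz figure falls out of a worst-case division. A minimal
sketch, assuming a G5/G6 CPU is worth roughly 600MHz for this purpose
(my reading of the estimate above, not an IBM figure):

```python
# Worst-case per-instance CPU share: all active guests demand maximum
# CPU simultaneously. Inputs are the assumptions from the text:
# 150 virtual servers, 3 CPUs, 1 in 10 active, ~600MHz per CPU.

def per_instance_mhz(cpus, cpu_mhz, servers, active_fraction):
    """MHz each active instance gets if every active instance
    needs maximum CPU at the same moment."""
    active_instances = servers * active_fraction
    return cpus * cpu_mhz / active_instances

share = per_instance_mhz(cpus=3, cpu_mhz=600,
                         servers=150, active_fraction=0.1)
print(f"{share:.0f} MHz per active instance")  # prints "120 MHz per active instance"
```

In practice the smoothing described above means most instances see far
more than this worst case, since bursts rarely coincide.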

> And what about network i/o? Would the z900 network i/o be a bottleneck
> if all the virtual servers were blasting away?

Almost certainly not. You can put 24 OSA-Express Gigabit ports
(12 cards) into a z900, each taking one of your maximum of 256 channels.
See my slides.
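As a rough sanity check on why the network shouldn't bottleneck, here
is the aggregate-capacity arithmetic. The per-server traffic figure is
a made-up illustrative number, not a measurement:

```python
# Aggregate OSA-Express capacity vs. a hypothetical sustained demand
# from 2500 virtual servers. Only the port counts come from the text.

ports = 24                  # OSA-Express Gigabit ports (12 cards)
port_gbps = 1.0
aggregate_gbps = ports * port_gbps          # 24 Gbit/s total

servers = 2500
per_server_mbps = 5         # assumed sustained rate per instance
demand_gbps = servers * per_server_mbps / 1000.0   # 12.5 Gbit/s

print(f"capacity {aggregate_gbps} Gbit/s, demand {demand_gbps} Gbit/s")
# Even at 5 Mbit/s sustained per server, demand stays under capacity.
```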

--Malcolm

-- 
Malcolm Beattie <[EMAIL PROTECTED]>
Unix Systems Programmer
Oxford University Computing Services
