Re: [zones-discuss] V440/Zones Capacity

2007-03-13 Thread Jim Mauro
The only correct way to do capacity planning is to plot application throughput
with system utilization. Maybe you can run 40 Zones each with a webserver,
and a database Zone, or maybe 4. Or 1. It's completely workload dependent.

The system utilization data, without corresponding throughput and response
time values, are useless.

I don't mean to sound smug, but a given system's capacity is determined
by application-level delivered performance, not by whether mpstat(1)
shows the CPUs at 10% idle or reports a particular run queue depth.

An Oracle database workload alone could choke a SF440.
Running webservers handling dynamic content with media,
taking 10k hits per hour, is very different from serving simple
static content at 500 hits a week.

/jim


Dan Price wrote:

On Tue 13 Mar 2007 at 04:46PM, Morris Hooten - SLS Business Infrastructure 
wrote:
  

Based on current use cases and experienced users of containers,
how many sparse root zones could be run on a SunFire 440
with 4x 1.3 GHz CPUs and 16 GB RAM? I currently have 10 sparse
zones, all running a webserver, and a third running Oracle.

My average at the moment is: load average: 0.50, 0.41, 0.48.
Would I be risking it to add three additional zones running webservers and
Oracle?

Thoughts? What should my load averages look like consistently?



Load average is not a great metric, although that looks OK.  Can
you post some output of 'mpstat 5'? (Let it run for a bit and make
sure you get a representative sample.)

Also, the output of the bottom portion of 'prstat -Z' may be helpful.

-dp

  

___
zones-discuss mailing list
zones-discuss@opensolaris.org


Re: [zones-discuss] V440/Zones Capacity

2007-03-13 Thread Jim Mauro

That's cool, and I certainly respect your expertise in this area
(among many others when it comes to this stuff).

The problem we run into (more often than not) is that these
things are rarely linear. There's a knee in the curve lurking
behind a dark corner that can trip things up before we're at
the load point the utilization plot told us we'd get to.

With workload tracking, we can see this coming, and plan
accordingly.

All that said, in reality it's a lost art (if it ever really existed),
and sar(1) remains the de facto standard for capacity planning.

DTrace comes to mind as a tool that can be leveraged for
tracking things like response time and throughput for
workloads that are not instrumented to do so. It would make an
interesting exercise to write a DTrace script, along the lines of
Brendan's DTraceToolkit, that tracks httpd request/response
times and hit counts.
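A minimal sketch of that idea (assuming Solaris with the syscall provider, and assuming the web server processes are named "httpd" — adjust for your setup): time each connection from accept(2) return to close(2) entry in the httpd processes, and aggregate.

```
#!/usr/sbin/dtrace -s
/*
 * Sketch only: approximates per-connection service time for httpd
 * by timing accept() return to close() entry on the same file
 * descriptor. The execname "httpd" is an assumption.
 */

syscall::accept:return
/execname == "httpd" && (int)arg0 >= 0/
{
        start[pid, arg0] = timestamp;
}

syscall::close:entry
/execname == "httpd" && start[pid, arg0]/
{
        @svc["connection service time (ns)"] =
            quantize(timestamp - start[pid, arg0]);
        @hits["connections handled"] = count();
        start[pid, arg0] = 0;
}
```

A quantize() histogram keyed like this gives a crude response-time distribution plus a hit count without touching the application, which is roughly the workload-tracking data the utilization numbers alone don't provide.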

Thanks,
/jim


Dan Price wrote:

On Tue 13 Mar 2007 at 08:44PM, Jim Mauro wrote:
  

The only correct way to do capacity planning is to plot application
throughput with system utilization. Maybe you can run 40 Zones each
with a webserver, and a database Zone, or maybe 4. Or 1. It's
completely workload dependent.



Yes, I agree that that would be optimal.  Offline, Morris gave me some
stats to look at, and it appears that the machine is about 2-5% busy,
with about 30-40% of total system memory in use.  If the machine is
processing its normal workload at that utilization, that's a good
starting point (in my mind) for saying things seem OK.

-dp

  
