That's cool, and I certainly respect your expertise in this area
(among many others when it comes to this stuff).

The problem we run into (more often than not) is that these
things are rarely linear. There's often a knee in the curve
lurking around a dark corner that can trip things up before we
reach the load point the utilization plot told us we'd get to.

With workload tracking, we can see this coming, and plan
accordingly.

All that said, in reality it's a lost art (if it ever really existed),
and sar(1) remains the de facto standard for capacity planning.

DTrace comes to mind as a tool that can be leveraged for
tracking things like response time and throughput for
workloads that are not instrumented to do so. It would make
an interesting exercise to write a DTrace script along the lines
of Brendan's DTraceToolkit that tracks httpd request/response
times and hit counts.
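A rough, untested sketch of what such a script might look like. It assumes httpd exposes no probes of its own, so it uses the syscall provider and treats connection lifetime (accept to close) as a stand-in for request/response time; the process name, interval, and general shape are illustrative, not a finished tool:

```d
#!/usr/sbin/dtrace -s
/*
 * Approximate httpd "hit" counts and per-connection latency.
 * Caveat: with keep-alive, one connection may carry many
 * requests, so this measures connection lifetime, not true
 * per-request response time.
 */

syscall::accept:return
/execname == "httpd" && arg0 != -1/
{
	/* Remember when this connection's fd was accepted. */
	start[pid, arg0] = timestamp;
	@hits = count();
}

syscall::close:entry
/execname == "httpd" && start[pid, arg0]/
{
	/* Connection lifetime in milliseconds. */
	@latency = quantize((timestamp - start[pid, arg0]) / 1000000);
	start[pid, arg0] = 0;
}

tick-10sec
{
	printa("connections accepted: %@d\n", @hits);
	printa("connection lifetime (ms):%@d\n", @latency);
	trunc(@hits);
	trunc(@latency);
}
```

Plotting those counts and latency distributions against utilization over time is exactly the throughput-vs-utilization view described above.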

Thanks,
/jim


Dan Price wrote:
On Tue 13 Mar 2007 at 08:44PM, Jim Mauro wrote:
The only correct way to do capacity planning is to plot application
throughput with system utilization. Maybe you can run 40 Zones each
with a webserver, and a database Zone, or maybe 4. Or 1. It's
completely workload dependent.

Yes, I agree that that would be optimal.  Offline, Morris gave me some
stats to look at, and it appears that the machine is about 2-5% busy,
with about 30-40% of total system memory in use.  If the machine is
processing its normal workload at that utilization, that's a good
starting point (in my mind) for "things seem OK."

        -dp

_______________________________________________
zones-discuss mailing list
zones-discuss@opensolaris.org
