[zones-discuss] CPU caps are in Nevada

2007-03-13 Thread Alexander Kolbasov
CPU caps are in Nevada; they should be available with the daily build and
will show up in build 61.

The code is available from cvs.opensolaris.org
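
For anyone who wants to try it once the build is out, here is a minimal
sketch of capping a zone with the new zone.cpu-cap resource control; the
zone name and value below are illustrative, and the value is a percentage
of a single CPU (100 = one full CPU):

    # from the global zone: cap "myzone" at 1.5 CPUs worth of usage
    prctl -n zone.cpu-cap -t privileged -v 150 -r -e none -i zone myzone

    # verify the cap that is currently in effect
    prctl -n zone.cpu-cap -i zone myzone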

- Alex Kolbasov



[zones-discuss] Re: [rm-discuss] CPU caps are in Nevada

2007-03-13 Thread Matty

On 3/13/07, Alexander Kolbasov [EMAIL PROTECTED] wrote:

CPU caps are in Nevada; they should be available with the daily build and
will show up in build 61.

The code is available from cvs.opensolaris.org


Sweet! Do you think CPU caps will be backported to a Solaris 10 update?

Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net


[zones-discuss] V440/Zones Capacity

2007-03-13 Thread Morris Hooten - SLS Business Infrastructure

Based on current use cases and the experience of container users, how many
sparse root zones could be run on a Sun Fire V440 with 4x 1.3 GHz CPUs and
16 GB of RAM? I currently have 10 sparse zones, all running a webserver and
a third running Oracle.

My load average at the moment is: 0.50, 0.41, 0.48.
Would I be risking it to add three additional zones running webservers and
Oracle?

Thoughts? What should my load averages look like consistently?

Thanks for any feedback/suggestions




Re: [zones-discuss] V440/Zones Capacity

2007-03-13 Thread Jim Mauro
The only correct way to do capacity planning is to plot application
throughput with system utilization. Maybe you can run 40 Zones each with a
webserver, and a database Zone, or maybe 4. Or 1. It's completely workload
dependent.

The system utilization data, without corresponding throughput and response
time values, are useless.

I don't mean to sound smug, but a given system's capacity is determined
by application-level delivered performance, not by whether mpstat(1)
shows the CPUs at 10% idle or what the run queue depth is.

An Oracle database workload alone could choke an SF440.
Running webservers that handle dynamic content with media
at 10k hits per hour is very different from serving simple
static content at 500 hits a week.

/jim


Dan Price wrote:

On Tue 13 Mar 2007 at 04:46PM, Morris Hooten - SLS Business Infrastructure
wrote:

Based on current use cases and the experience of container users, how many
sparse root zones could be run on a Sun Fire V440 with 4x 1.3 GHz CPUs and
16 GB of RAM? I currently have 10 sparse zones, all running a webserver and
a third running Oracle.

My load average at the moment is: 0.50, 0.41, 0.48.
Would I be risking it to add three additional zones running webservers and
Oracle?

Thoughts? What should my load averages look like consistently?



Load average is not a great metric, although that looks ok.  Can
you post some output of 'mpstat 5' (let it run for a bit and make
sure you get a representative sample).

Also, the output of the bottom portion of 'prstat -Z' may be helpful.

-dp
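
One way to collect the kind of sample Dan asks for - the intervals, counts,
and output paths here are only illustrative:

    # 5-second samples for about 5 minutes, saved for later review
    mpstat 5 60 > /var/tmp/mpstat.out

    # the per-zone CPU/memory summary appears at the bottom of each report
    prstat -Z 5 60 > /var/tmp/prstat-Z.out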

  



Re: [zones-discuss] V440/Zones Capacity

2007-03-13 Thread Jim Mauro

That's cool, and I certainly respect your expertise in this area
(among many others when it comes to this stuff).

The problem we run into (more often than not) is that these
things are rarely linear. There's a knee in the curve lurking
behind a dark corner that can trip things up before we're at
the load point the utilization plot told us we'd get to.

With workload tracking, we can see this coming, and plan
accordingly.

All that said, in reality it's a lost art (if it ever really existed),
and sar(1) remains the de facto standard for capacity planning.

DTrace comes to mind as a tool that can be leveraged for
tracking things like response time and throughput for
workloads that are not instrumented to do so. It would make an
interesting exercise to write a DTrace script, along the lines of
Brendan's DTraceToolkit, that tracks httpd request/response
times and hit counts...
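
A rough sketch of that idea, timing read(2)/write(2) in httpd processes as
a crude proxy for request/response latency and hit counts (the process name
"httpd" and the choice of syscalls are assumptions; real request timing
would need the webserver's own instrumentation):

    #!/usr/sbin/dtrace -s

    /* start the clock when an httpd process enters read(2) or write(2) */
    syscall::read:entry,
    syscall::write:entry
    /execname == "httpd"/
    {
            self->ts = timestamp;
    }

    /* on return, record a latency distribution and a hit count per syscall */
    syscall::read:return,
    syscall::write:return
    /self->ts/
    {
            @lat[probefunc] = quantize(timestamp - self->ts);
            @hits[probefunc] = count();
            self->ts = 0;
    }

Run it as root while the webserver is under its normal load and stop it
with Ctrl-C; dtrace prints the aggregations on exit.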

Thanks,
/jim


Dan Price wrote:

On Tue 13 Mar 2007 at 08:44PM, Jim Mauro wrote:
  

The only correct way to do capacity planning is to plot application
throughput with system utilization. Maybe you can run 40 Zones each
with a webserver, and a database Zone, or maybe 4. Or 1. It's
completely workload dependent.



Yes, I agree that that would be optimal.  Offline, Morris gave me some
stats to look at, and it appears that the machine is about 2-5% busy,
with about 30-40% of total system memory in use.  If the machine is
processing its normal workload at that utilization, that's a good
starting point (in my mind) for saying things seem OK.

-dp

  



Re: [zones-discuss] Re: Re: Re: Patching problem with whole root zones

2007-03-13 Thread Gael

Phil,

I had the same issue last week, and figured that with whole root zones you
need to boot all of them in single-user mode (zoneadm -z <zonename> boot -s)
to get the patch deployed everywhere. In my case, I'm creating a dedicated
/var in each whole root zone.
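
A sketch of that procedure, run from the global zone; the loop over the
zoneadm output and the patch ID are illustrative:

    # boot every installed non-global zone to single-user mode
    # (halt any that are already running first: zoneadm -z <zone> halt)
    for z in `zoneadm list -i | grep -v '^global$'`; do
            zoneadm -z "$z" boot -s
    done

    # then apply the patch from the global zone as usual
    patchadd 123456-01    # the patch ID here is just a placeholder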

Regards

Gael

On 3/13/07, Phil Freund [EMAIL PROTECTED] wrote:


Enda,

I'll get an Explorer for you and send it on Thursday - I've just been
eaten alive with DST preparation/implementation and subsequent NetBackup
issues.

I'll send you the patchadd logs if you tell me where they are stored. I did
all of the patchadds from single-user mode on the console, so if they don't
automatically log somewhere, they aren't available.

TIA,
Phil







--
Gael