I just stood up this environment and used nspins to max out the allocations.
Host (8 CPU V890)

  pool_1         Max=66K  Min=1  Util_goal=50%  Actual: 5 CPUs
    zone_a       50 shares
      proj_alpha 50 shares
      proj_beta  50 shares
    zone_b       50 shares
  pool_2         Max=66K  Min=1  Util_goal=50%  Actual: 2 CPUs
    zone_c       50 shares
  default_pool   Actual: 1 CPU
Zones (measured utilization):
                        zone_a   zone_b   zone_c
  Global_Level            30%      30%      25%
  Zone_Level              50%      50%     100%

Projects (measured utilization):
                        zone_a   zone_b   zone_c
  Global_Level_alpha      15%
  Global_Level_beta       15%
  Global_Level_root                30%      25%
  Zone_Level_alpha        25%
  Zone_Level_beta         25%
  Zone_Level_root                  50%     100%
- The zone split of 50%, 50%, and 100% makes sense: 50 + 50 = 100%
within pool_1, and pool_2 reached 100% from zone_c alone.
- zone_c only had rights to half of the pool's utilization goal, so 1/2
of 50% is 25%. So the 25% makes sense.
If I had kicked off a project under pool_2 outside of zone_c, pool_2
would have stolen CPUs from pool_1, the zones' 30% would have dropped
to 25% each, and the project would have dropped to 12.5%.
So the answer seems to be 12.5%. In the end I was just dividing a pie
up into 8 parts, as in the sketch below.
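Here's a quick back-of-the-envelope version of that pie division in
Python. The multiply-the-fractions model (pool's fraction of host CPUs,
times the zone's share fraction in the pool, times the project's share
fraction in the zone) is just my reading of how FSS composes across the
levels, not something queried from poold:

# Rough model: a project's slice of the host is the product of
#   (pool CPUs / host CPUs) * (zone shares / active shares in pool)
#   * (project shares / active shares in zone)
TOTAL_CPUS = 8  # the V890 above

def host_pct(pool_cpus, zone_share, zone_total, proj_share, proj_total):
    """Estimated %CPU of the whole host for one project."""
    return (pool_cpus / TOTAL_CPUS) * (zone_share / zone_total) \
           * (proj_share / proj_total) * 100

# pool_1 ended up with 5 of the 8 CPUs; zone_a holds 50 of the 100
# active shares in the pool, and proj_alpha 50 of 100 in the zone:
print(host_pct(5, 50, 100, 50, 100))  # 15.625 -- close to the 15% measured

# The even "pie" case, with each pool at exactly half the box:
print(host_pct(4, 50, 100, 50, 100))  # 12.5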
Mike
Michael Barrett wrote:
Amol A Chiplunkar wrote:
You may want to add more information, such as whether you are creating
psets for each pool and what the pset.max and pset.min of each are,
because when I create a pset without specifying them and associate it
with a pool, the size is zero.
I thought that when you are using a utilization goal instead of
counting CPUs, you should declare the max to be a really high,
unrealistic number and the min to be 1. Is that not what you should do?
Also, a correction: you cannot specify 50/100 shares. Shares are
relative integer values, and the absolute value of a share at any point
in time can be calculated from the total shares of all active entities
competing for CPUs.
Yes, I know. But it's easier to follow the conversation this way than
if I put down 1 and 1, or 6789 and 6789.
I would predict that the physical utilization of proj_alpha in terms
of %CPU (as shown by prstat) would be

  (0.25 * poolsize of pool_1) * 100 / 16

i.e. if the pool_1 poolsize is 4, the physical utilization would be
6.25%.
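[As a quick check of that formula in Python -- poolsize here is just a
stand-in for however many CPUs poold happens to have bound to pool_1 at
that moment:]

# Amol's prediction: proj_alpha gets 0.25 of pool_1's CPUs, expressed
# as a percentage of all 16 threads on the T2000.
def predicted_pct(poolsize, total_threads=16, proj_frac=0.25):
    return proj_frac * poolsize * 100 / total_threads

print(predicted_pct(4))  # 6.25
print(predicted_pct(8))  # 12.5 -- the pie answer, if pool_1 holds half the box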
That would mean I would have to know how many CPUs are in the dynamic
pool at any given time, which I don't.
I expect that in order to have 50% utilization, the objective will be
set to a value of ~50, and pset.max will be set to a value greater than
16. In such a case, I would expect poold to try to give out as many
CPUs as possible to both pools, since the utilization needs to be kept
at around 50% while the zones and projects are fully utilizing all the
CPU cycles they get.
Yes, that is what I want at the pool level.
So at some point, poold should reach a static configuration, with
pool_default holding only 1 CPU and the rest given to the two pools
created.
So you're saying the dynamic pool daemon will drive the box to
16 - 1 = 15 CPUs, then divide that by 2 and get 7.5 CPUs bound to each
pool (see the toy split below). I just don't see why I need to care
about counting threads (CPUs) when I already know the pool is using
50% of the box.
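[A toy model of the whole-CPU split I'd expect poold to settle on,
assuming it keeps 1 CPU in the default pset and balances the rest
between two pools with equal objectives -- psets hold whole CPUs, so
7.5/7.5 has to round to something like 8/7:]

# Whole-CPU partition of a 16-thread box: 1 CPU reserved for
# pool_default, remainder balanced between two equal-objective pools.
total_threads, reserved = 16, 1
avail = total_threads - reserved          # 15 CPUs to hand out
pool_1_cpus = avail // 2 + avail % 2      # 8 (one pool takes the odd CPU)
pool_2_cpus = avail // 2                  # 7
print(pool_1_cpus, pool_2_cpus)           # 8 7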
thanks
- Amol
----- Original Message -----
From: Michael Barrett <[EMAIL PROTECTED]>
Date: Friday, October 27, 2006 6:02 pm
Subject: [zones-discuss] actual utilization of a pool/zone/proj
To: [email protected]
Let's say I have a T2000 with 16 threads.
I have the following:

  pool_1 = dynamic pool with a 50% utilization goal set
    zone_a = 50/100 shares
      proj_alpha = 50/100 shares
      proj_beta  = 50/100 shares
    zone_b = 50/100 shares
  pool_2 = dynamic pool with a 50% utilization goal set
    zone_c = 50/100 shares
    zone_d = 50/100 shares
To make the case simpler, we will state that all pools, zones, and
projects are running at max, so the constraints are being enforced
rather than one Solaris resource entity receiving more than it is
allocated under the FSS share model.
Is there any way to estimate, before seeing actual utilization, what
the physical utilization of proj_alpha would be? Would you just take
50% of the physical box for pool_1, then zone_a's 50 of 100 shares
gives 25% of host utilization available for zone_a, and then 50% of
that 25% gives 12.5% of raw host utilization available for proj_alpha?
Thanks,
Mike
_______________________________________________
zones-discuss mailing list
[email protected]