Dan Price wrote:
> On Thu 10 May 2007 at 04:21PM, Jerry Jelinek wrote:
>> of the other controls is trickier although I think Dan's idea of scaling
>> these based on the system makes it easier. We might also want to think
>> about scaling based on the number of running zones.
> Another way to look at it (and I think what you are saying) would be to
> broaden the notion of "shares" a bit to include more of the system
> resources -- for example, memory. What's tough there, though, is that
> our notion of shares today represents an entitlement, while in the case of
> memory we're talking about a cap on utilization.
> I think fundamentally we hear from two camps: those who want to
> proportionally partition whatever resources are available, and those who
> want to see the system as "virtual 512MB Ultra-2's" or "virtual 1GB,
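[Editor's note: to make the entitlement-versus-cap distinction above concrete, here is a minimal Python sketch. The function names and numbers are purely illustrative assumptions, not any actual Solaris interface.]

```python
def cpu_entitlement(zone_shares, total_shares, total_cpu=1.0):
    """Shares express an *entitlement*: a guaranteed proportional
    slice of the resource when there is contention for it."""
    return total_cpu * zone_shares / total_shares

def memory_allowed(requested, cap):
    """A cap expresses a *limit*: usage may never exceed it, even
    when the rest of the system is idle."""
    return min(requested, cap)

# A zone holding 2 of 8 shares is entitled to 25% of the CPU...
assert cpu_entitlement(2, 8) == 0.25
# ...but a 512MB cap stops the zone at 512MB no matter how much is free.
assert memory_allowed(2048, 512) == 512
```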
Yes, something like shares for memory would be nice, because you don't
have to know ahead of time what your maximum will be, and as long as
the system is not overcommitted you can use what you need.
I agree that there are multiple ways people want to slice things up and
we are actually pretty good with the capped and dedicated stuff now.
It is the full sharing with a guaranteed minimum that we might want
to think about improving (for memory). I'm not sure how hard that
will be though.
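[Editor's note: "full sharing with a guaranteed minimum" could be pictured roughly as follows. This is an illustrative Python sketch under simplifying assumptions (one redistribution pass, share-proportional minimums), not an existing Solaris mechanism.]

```python
def allocate(demands, shares, total_mem):
    """When memory is plentiful, every zone gets what it asks for.
    Under pressure, no zone is squeezed below its share-proportional
    minimum, and memory unused by small zones goes to hungry ones."""
    if sum(demands) <= total_mem:
        return list(demands)  # not overcommitted: use what you need
    total_shares = sum(shares)
    mins = [total_mem * s / total_shares for s in shares]
    # Zones demanding less than their minimum just keep their demand.
    alloc = [min(d, m) for d, m in zip(demands, mins)]
    spare = total_mem - sum(alloc)
    # Split the leftover among still-hungry zones by their shares
    # (a single pass; a real allocator would iterate to convergence).
    hungry = [i for i, d in enumerate(demands) if d > alloc[i]]
    hungry_shares = sum(shares[i] for i in hungry)
    for i in hungry:
        alloc[i] += min(demands[i] - alloc[i],
                        spare * shares[i] / hungry_shares)
    return alloc

# Overcommitted: the small zone keeps 300MB, the big one gets the rest.
assert allocate([300, 900], [1, 1], 1000) == [300, 700]
# Not overcommitted: everyone simply gets their demand.
assert allocate([100, 200], [1, 1], 1000) == [100, 200]
```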
Just thinking out loud here, I wonder if there is any way we could come
close to this behavior by dynamically adjusting the physical and swap
controls we already have, based upon how many zones are running? It
wouldn't be as good as fair shares for memory, but it would be a lot easier
to implement. Or maybe we could use rcapd to watch the dynamic behavior of
each zone and adjust the physical cap as needed. That would be really
easy to implement. It is harder for the swap cap, since I don't think we can
force a zone down to a lower level once it is over, and we need it to be at a
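[Editor's note: the "adjust caps as zones come and go" idea might look roughly like this. A hypothetical Python sketch: the reserve size and the even split are assumptions, and a real implementation would push the resulting caps to rcapd rather than just compute them.]

```python
def recompute_caps(running_zones, total_phys_mb, reserve_mb=512):
    """Divide physical memory evenly among the currently running
    zones, holding back a fixed reserve for the global zone. Called
    whenever a zone boots or halts, so caps scale with zone count."""
    usable = total_phys_mb - reserve_mb
    n = len(running_zones)
    return {z: usable // n for z in running_zones} if n else {}

# Two running zones on a 4GB box, 512MB reserved: 1792MB each.
assert recompute_caps(["a", "b"], 4096) == {"a": 1792, "b": 1792}
# Four zones: each cap shrinks to 896MB.
assert recompute_caps(["a", "b", "c", "d"], 4096) == \
    {"a": 896, "b": 896, "c": 896, "d": 896}
```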
zones-discuss mailing list