[zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Jeff Victor
By default, Solaris Containers do not have resource controls. Up through S10 
11/06 you could add many resource controls to Containers, directly or 
indirectly, but some of them were... 'challenging' to use. ;-)


S10 7/07 improves the situation greatly, moving many of the 'indirect' 
controls (e.g. physical memory capping) into the 'direct' category.  In doing 
that, it also makes them much easier to use.  But default settings are still 
absent.


This was clearly demonstrated in a recent research paper at Clarkson 
University. They compared resource isolation of 4 different v12n solutions: 
VMware Workstation, Xen, OpenVZ, and Containers. I did a quick summary of the 
Containers conclusions: http://blogs.sun.com/JeffV/date/20070510 .  That blog 
has a link to the paper, too.


I would like to gather thoughts and opinions on this omission: should 
Containers have default RM settings?  Is there a better method to solve this 
problem?  If not, which settings should have defaults?


It might make sense to use FSS for all zones, but some work may be necessary 
to avoid creating new problems.  If that can be done, assigning a default of 1 
share per zone would make sense.
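
For reference, here is roughly what giving a zone a single FSS share looks 
like today (a sketch; 'myzone' is a placeholder):

# zonecfg -z myzone
zonecfg:myzone> add rctl
zonecfg:myzone:rctl> set name=zone.cpu-shares
zonecfg:myzone:rctl> add value (priv=privileged,limit=1,action=none)
zonecfg:myzone:rctl> end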


A reasonably large default value for physical capped-memory might be valuable, 
but might cause its own problems, e.g. more support calls: "I have 1 GB of 
freemem, but the system is paging! Why?!?"
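
For illustration, the new direct syntax for a physical memory cap looks 
something like this (a sketch; the zone name and the 2g value are just 
placeholders):

# zonecfg -z myzone
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=2g
zonecfg:myzone:capped-memory> end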


Etc.  Thoughts?


--
Jeff VICTOR              Sun Microsystems           jeff.victor @ sun.com
OS Ambassador            Sr. Technical Specialist
Solaris 10 Zones FAQ: http://www.opensolaris.org/os/community/zones/faq
--


Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Jerry Jelinek

Jeff Victor wrote:
By default, Solaris Containers do not have resource controls. Up through 
S10 11/06 you could add many resource controls to Containers, directly 
or indirectly, but some of them were... 'challenging' to use. ;-)


S10 7/07 improves the situation greatly, moving many of the 'indirect' 
controls (e.g. physical memory capping) into the 'direct' category.  In 
doing that, it also makes them much easier to use.  But default settings 
are still absent.


This was clearly demonstrated in a recent research paper at Clarkson 
University. They compared resource isolation of 4 different v12n 
solutions: VMware Workstation, Xen, OpenVZ, and Containers. I did a quick 
summary of the Containers conclusions: 
http://blogs.sun.com/JeffV/date/20070510 .  That blog has a link to the 
paper, too.


I would like to gather thoughts and opinions on this omission: should 
Containers have default RM settings?  Is there a better method to solve 
this problem?  If not, which settings should have defaults?


It might make sense to use FSS for all zones, but some work may be 
necessary to avoid creating new problems.  If that can be done, 
assigning a default of 1 share per zone would make sense.


A reasonably large default value for physical capped-memory might be 
valuable, but might cause its own problems, e.g. more support calls: "I 
have 1 GB of freemem, but the system is paging! Why?!?"


Etc.  Thoughts?


When we were talking about this problem a year or so ago we first thought
that the idea of zone templates would be a good way to solve this
problem.  This is:

6409152 RFE: template support for better RM integration

The idea we had was that when you initially create your zone you would
do so from one of a set of pre-configured templates.  We would deliver
various templates that had good settings for the various RM controls.
At the time this idea didn't fly because we didn't have enough capabilities
in zonecfg to do much real work with resource management (we had two
rctls and the pool property).  Things are much better now that we
have the duckhorn project integrated.

Templates would be very easy to implement but maybe there are some
other, better ideas where you don't even need to make a choice.  After
we did duckhorn Dan had the idea of being able to set some of the
controls as percentages.  This is:

6543082 zonecfg should allow specification of some resources as percentage
of system total

I think if we implemented that then maybe we could just have some defaults
for those controls that would automatically be set when you first create
your zone.
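
Purely as illustration -- this syntax does not exist today, it is just a
sketch of what 6543082 might enable:

zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=25%
zonecfg:myzone:capped-memory> end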

For FSS, zones already default to 1 share if you don't explicitly set
the rctl.  I would really like to make FSS the system default scheduling
class if you are running zones.  However, we do need to add some of the
IA support into FSS if we want to do that.  I don't think we have an RFE
open for that.
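
To be clear, an administrator can already do this by hand; what I'm talking
about is zones doing it automatically.  The manual steps are something like:

# dispadmin -d FSS
# priocntl -s -c FSS -i all

where dispadmin makes FSS the default class for processes started after the
next boot and priocntl moves the existing processes over right away.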

I'm curious to hear what other folks think,
Jerry


[zones-discuss] Re: Why is mount disabled for branded zones

2007-05-10 Thread Enda O'Connor

Ellard Roush wrote:

Hi Enda,

The cluster BrandZ zone:
 1. will use the same kernel.
 2. will use the same libs/binaries
 3. will use the same patch+packaging commands
 4. will use the same upgrade commands

The cluster BrandZ zone uses the BrandZ callbacks to
add value. We actually use all of the existing BrandZ callbacks,
we just add a hook for this BrandZ zone type so that our
code gets called in addition to the basic zone functionality.

So it is appropriate to think of the cluster BrandZ zone
as another flavor of the native zone type.


cheers, now I understand :-)

thanks
Enda

Regards,
Ellard

Enda O'Connor ( Sun Micro Systems Ireland) wrote:

Hi Ellard
Thanks for the info, very interesting, some questions

So BrandZ zone types that use the patch+packaging commands will be 
running some flavour of Solaris, but not necessarily the same kernel 
as the global zone, or perhaps different libs/binaries as opposed to the kernel.


If so, how will they be patched?
I.e. will they have different package versions than those in the global zone?


regards
Enda


Ellard Roush wrote:

Hi Enda,

This provides a good opportunity to clear up some misinformation.

The BrandZ lx zone type does not use standard patch/package
commands.

There will be BrandZ zone types that do use standard patch/package
commands. The Cluster group is now developing a cluster BrandZ
zone type that uses the BrandZ callbacks to enhance a zone.
The cluster BrandZ uses standard patch/package commands.
The Zones and BrandZ team in Solaris told us that a BrandZ approach
was the correct way to enhance a zone.

We are now in the middle of correcting these problems.

If you have information about places where this problem
appears, please let us know so that we can fix the problem.

Thanks,
Ellard


Enda O'Connor ( Sun Micro Systems Ireland) wrote:

Tirthankar wrote:

Hi,

On my machine (running s10u4_06) I have 3 local zones.

pship2 @ / $ zoneadm list -cv
  ID NAME     STATUS     PATH         BRAND     IP
   0 global   running    /            native    shared
   2 cz2      running    /zones/cz2   my_brand  shared
   5 cz4      running    /zones/cz4   native    shared
   - cz3      installed  /zones/cz3   lx        shared

pship2 @ / $

cz2 is a my_brand branded zone

pship2 @ / $ zoneadm -z cz2 mount
zoneadm: zone 'cz2': mount operation is invalid for branded zones.

Why is the mount command disallowed for a branded zone?
I can boot the zone using the normal zoneadm -z cz2 boot command.

Note: The config.xml and platform.xml for my_brand are identical 
to those of the native brand except for the brand name.



Hi,
mount is an internal state used by the patch/package commands only.
It basically does some mount magic, such that the zone's root is 
lofs-mounted from the global zone, plus /dev etc. It is not really 
applicable to a zone that is not native, as such a zone cannot be patched.



Enda






Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Mads Toftum
On Thu, May 10, 2007 at 11:23:18AM -0400, Jeff Victor wrote:
 I would like to gather thoughts and opinions on this omission: should 
 Containers have default RM settings?  Is there a better method to solve 
 this problem?  If not, which settings should have defaults?
 
I really wouldn't like having RM enabled by default on zones as I think
it would create more confusion and annoyance than is fair. That being
said, I think that once you enable RM on your zone, it should choose
sensible settings for you until the point where you choose to override
them. Perhaps it could even be as simple as set rctl=FSS to enable RM.
At that point it would be fine to pull in reasonable defaults. To make
matters simple, I think populating defaults is only worth doing for FSS.

vh

Mads Toftum
-- 
http://soulfood.dk


Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Jeff Victor

Mads Toftum wrote:

On Thu, May 10, 2007 at 11:23:18AM -0400, Jeff Victor wrote:
I would like to gather thoughts and opinions on this omission: should 
Containers have default RM settings?  Is there a better method to solve 
this problem?  If not, which settings should have defaults?



I really wouldn't like having RM enabled by default on zones as I think
it would create more confusion and annoyance than is fair. That being
said, I think that once you enable RM on your zone, it should choose
sensible settings for you until the point where you choose to override
them. 


Currently there isn't a setting which enables (or disables) RM.  Are you 
suggesting that there should be one 'knob' which enables RM, and chooses 
sufficiently large default values until you override them?


I think that such a situation would be much better than what we currently 
have.  The default values could be ones that no reasonable workload should 
exceed, and could be based on the hardware resources available.  For example, 
the physical memory cap could be set to 50% (or 70%) of available RAM, perhaps 
after subtracting a reasonable amount for the kernel.
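
As a sketch of how such a default could be derived (70% is just the figure
mentioned above):

# prtconf | awk '/^Memory size/ { print int($3 * 0.7) }'
2867

i.e. on a 4 GB machine the default physical memory cap would come out at
around 2867 MB.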


A similar method could be used to determine a default for a cap on VM.

However, this model does not solve the problem that is documented in 
Clarkson's paper: the out-of-the-box experience does not protect 
well-behaved zones from poorly-behaved zones, or a DoS attack.



Perhaps it could even be as simple as set rctl=FSS to enable RM.
At that point it would be fine to pull in reasonable defaults. To make
matters simple, I think populating defaults is only worth doing for FSS.


I am not certain I understand: are you saying that it doesn't make sense to 
cap the amount of swap space that a zone can use unless you are also using FSS?


That is not true in some situations.  For example, each container might be in 
a separate resource pool, using its own CPUs.  FSS would not accomplish anything.


--
Jeff VICTOR              Sun Microsystems           jeff.victor @ sun.com
OS Ambassador            Sr. Technical Specialist
Solaris 10 Zones FAQ: http://www.opensolaris.org/os/community/zones/faq
--


Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Mads Toftum
On Thu, May 10, 2007 at 02:11:12PM -0400, Jeff Victor wrote:
 Currently there isn't a setting which enables (or disables) RM.  Are you 
 suggesting that there should be one 'knob' which enables RM, and chooses 
 sufficiently large default values until you override them?
 
Yes.

 Perhaps it could even be as simple as set rctl=FSS to enable RM.
 At that point it would be fine to pull in reasonable defaults. To make
 matters simple, I think populating defaults is only worth doing for FSS.
 
 I am not certain I understand: are you saying that it doesn't make sense to 
 cap the amount of swap space that a zone can use unless you are also using 
 FSS?
 
No, I was just thinking that handing out cpu shares with FSS is
very simple because you could easily just give 10 shares to each new
zone and never worry about it, but if you have to give a specific
percentage of system resources, then it becomes much harder because
you'd have to either know the total number of zones up front or adjust
over time.

 That is not true in some situations.  For example, each container might be 
 in a separate resource pool, using its own CPUs.  FSS would not accomplish 
 anything.
 
This is a situation where someone has had to take the time to decide a
need for a more complex RM setup - in that case I think it would be fair
to let them define the full RM setting rather than trying to second-guess
their reasoning. As in - if you want to play with the advanced
features, then you're on your own.

vh

Mads Toftum
-- 
http://soulfood.dk


[zones-discuss] Re: Changing a zone's inherit-pkg-dir

2007-05-10 Thread F.V.(Phil)Porcella
Hi,
I was wondering if that trick of adding an additional directory (mount point?)
that you outlined below would work more than once?

zonecfg -z <zonename>
zonecfg:<zonename>> add fs
zonecfg:<zonename>:fs> set dir=<mount-point-in-zone>
zonecfg:<zonename>:fs> set special=<global-zone-directory>
zonecfg:<zonename>:fs> set type=lofs
zonecfg:<zonename>:fs> end

I tried to use the dir and special during the initial configuration of a zone
and it only accepted one of them.  Also, how many directories can you have
inherited 'initially' before you install the zone?

Thank you.
Phil
 
 


Re: [zones-discuss] Re: Changing a zone's inherit-pkg-dir

2007-05-10 Thread Bob Netherton
On Thu, 2007-05-10 at 13:18 -0700, F.V.(Phil)Porcella wrote:

 I tried to use the dir and special during the initial configuration of a zone
 and it only accepted one of them.  Also, how many directories can you have
 inherited 'initially' before you install the zone?

I'm sure there's a limit out there somewhere and it's very likely to be
way beyond what is practical.   On most of my systems I also add /opt to
the inherit-pkg-dir list.   On laptop configurations where space is at
a premium this can turn into a dozen or so mounts (compilers, blastwave,
companion, local docs, etc). 
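
For reference, adding /opt to that list is a one-time zonecfg step that has
to happen before the zone is installed (a sketch; 'myzone' is a placeholder):

zonecfg:myzone> add inherit-pkg-dir
zonecfg:myzone:inherit-pkg-dir> set dir=/opt
zonecfg:myzone:inherit-pkg-dir> end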

inherit-pkg-dir is a special case of a read-only loopback mount that is
very important during the installation of a zone (ie, the initial 
lu copy list is significantly smaller and it's OK to fail file creates
in the subsequent pkgadds).

So this all works as advertised.  If you are having problems then post
your zone XML file or the output from a zonecfg -z zonename info and
we'll see if we can spot where things are going wrong.


Bob



Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Bob Netherton
On Thu, 2007-05-10 at 14:11 -0400, Jeff Victor wrote:

 However, this model does not solve the problem that is documented in 
 Clarkson's paper: the out-of-the-box experience does not protect 
 well-behaved zones from poorly-behaved zones, or a DoS attack.

I see where you are going with this Jeff, and there are some good ideas
behind all of this.   I have a great desire to rephrase your question
without the reference to zones - how well is Solaris itself
protected against the various forms of DoS attack?   Do the controls
here suggest rational defaults for zones (ie, should we just inherit
the limits/protections from the Solaris parent)?

One area where I struggle on this issue - you have to decide between
two different corner cases (both from situations where the person
isn't committed to the documentation): would I rather deal with an
application dying for no apparent reason, or with DoS situations
being possible?

They are both corner cases right out of the Clarkson paper.   In the
first case, setting default limits could cause apps to throttle or
perhaps fail when reaching their resource cap limits.   In the next
Clarkson paper :-) this will lead to the assumption that Solaris is
either slow or unstable - neither of which is true.   So we have to
explain where the resource controls are, how to tune them, etc.
Reminds me of when we used to play with lotsfree and handspread.

In the second case, unmanaged workloads (which are simple to
administer) can become unmanageable in the presence of hostile
attacks.   And I'm assuming here that about a billion buzzers and
sirens are going to be going off from the log scrapers
(you do at least scrape logs, don't you?), which indicates there
is trouble in the neighborhood.   So it's not like this is happening
in a vacuum and once diagnosed should be relatively easy to restore
proper equilibrium.

Perhaps this is a case where the unintended consequences of
simplicity may have profound implications?   Said another way -
I have customers running web servers, simple network daemons, and
Oracle in zones and I have no earthly idea how to suggest a
rational set of defaults, other than inheriting those of the
Solaris parent (which takes me back to my original thought fragment -
is this really a zones issue???).


Bob





Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Jerry Jelinek

Bob Netherton wrote:

I see where you are going with this Jeff, and there are some good ideas
behind all of this.   I have a great desire to rephrase your question
without the reference to zones - how well is Solaris itself
protected against the various forms of DoS attack?   Do the controls
here suggest rational defaults for zones (ie, should we just inherit
the limits/protections from the Solaris parent)?

...

Perhaps this is a case where the unintended consequences of
simplicity may have profound implications?   Said another way -
I have customers running web servers, simple network daemons, and
Oracle in zones and I have no earthly idea how to suggest a
rational set of defaults, other than inheriting those of the
Solaris parent (which takes me back to my original thought fragment -
is this really a zones issue???).


There are certainly uses for resource management outside of zones but
RM is a requirement for zones.  The problem for people doing consolidation
is they want to create a predictable environment.  Before consolidation
the various stacks ran on separate systems, so a problem in one stack
was contained to that machine.  When you are consolidating you want the
same predictable behavior for the overall system.  Without some form
of resource management a single zone can unpredictably consume all available
resources.  This is true for any virtualization scheme.  This is why, with
a few exceptions, you should always use FSS with zones.  This allows any
zone to use all of the available CPU resources if it has enough work to
do and no other zone is busy, but at the same time it ensures that each zone
will get its share of the CPU if it needs it.  Setting good values for some
of the other controls is trickier although I think Dan's idea of scaling
these based on the system makes it easier.  We might also want to think
about scaling based on the number of running zones.
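
Note that the share values don't have to be static, either; they can be
adjusted on a running zone, e.g. (a sketch; 'myzone' is a placeholder):

# prctl -n zone.cpu-shares -r -v 20 -i zone myzone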

Jerry



Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Dan Price
On Thu 10 May 2007 at 04:21PM, Jerry Jelinek wrote:
 of the other controls is trickier although I think Dan's idea of scaling
 these based on the system makes it easier.  We might also want to think
 about scaling based on the number of running zones.

Another way to look at it (and I think what you are saying) would be to
broaden the notion of shares a bit to include more of the system
resources-- for example, memory.  What's tough there, though, is that
our notion of shares today represents an entitlement, while in the case of
memory, we're talking about a cap on utilization.

I think fundamentally we hear from two camps: those who want to
proportionally partition whatever resources are available, and those who
want to see the system as virtual 512MB Ultra-2s or virtual 1GB,
1GHz PCs.

-dp

-- 
Daniel Price - Solaris Kernel Engineering - [EMAIL PROTECTED] - blogs.sun.com/dp


[zones-discuss] adding a filesystem using zonecfg

2007-05-10 Thread DJR
I did a quick search of this website, but could not find a definite answer.

When creating a filesystem in the global zone and using lofs to have the zone
see it, do I have to reboot the zone in order for it to actually see it?

I am talking about creating the filesystem via zonecfg,
not using the mount command to temporarily/manually mount a filesystem within the
local zone.


thank you.
 
 


Re: [zones-discuss] adding a filesystem using zonecfg

2007-05-10 Thread Bob Netherton
On Thu, 2007-05-10 at 16:02 -0700, DJR wrote:
 I did a quick search of this website, but could not find a definite answer.
 
 When creating a filesystem in the global zone and using lofs to have the zone
 see it, do I have to reboot the zone in order for it to actually see it?

No.  Just perform the mount manually, exactly as you specify in your
zonecfg. 

So let's say I want /a to be mounted in my zone as /b and my
zonepath is /zones/fred

My zonecfg entry would look something like

add fs
set dir=/b
set special=/a
set type=lofs
set options=[rw,nosuid,nodevices]
end

Then from the global zone I could perform the exact same mount

# mkdir -p /zones/fred/root/b
# mount -F lofs -o rw,nosuid,nodevices /a /zones/fred/root/b

And it's there for your non-global zone to use.   It's not *exactly*
the same thing as letting zoneadmd do its magic, but close
enough.

 I am talking about creating the filesystem via zonecfg,
 not using the mount command to temporarily/manually mount a filesystem within the 
 local zone.

Right, you drive it from the global zone just like zoneadmd does.


While on the topic, you can also add, delete, or change network
settings on the fly as well.   Not sure how smart this really is, since
open connections might get rather upset, but for well-thought-out
cases it will prevent you from having to reboot a zone.


# ifconfig bge0 addif 192.168.100.201/24 zone fred up





Re: [zones-discuss] Default RM controls for Containers?

2007-05-10 Thread Mike Gerdts

On 5/10/07, Dan Price [EMAIL PROTECTED] wrote:

I think fundamentally we hear from two camps: those who want to
proportionally partition whatever resources are available, and those who
want to see the system as virtual 512MB Ultra-2s or virtual 1GB,
1GHz PCs.


The typical scenario I see is that an ISV gives a recommendation like
"V120 with 1 GB of RAM or better".  It is then up to the end user to
figure out how big a slice of a T2000 or x4600 that is.

Using NDA information from Sun I can do this translation accurately
enough for my needs.  Each machine in my environment is capable of
handling somewhere between 4 and several hundred Zone Power Units -
ZPUs.  It makes the communication of relative server compute power
very easy among those familiar with the scheme used.

Providing open access to this information across Sun's product line
and opening up the computation methods to allow others to benchmark
other systems would be very helpful.  Perhaps in the future ISVs
would say more meaningful things like "1-8 threads with at least 17
ZPUs and 6 GB RAM".

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/