[zones-discuss] rcapd

2008-09-01 Thread syed
Hi,

I am facing an issue with rcapd. I currently have set up 8 sparse-root
containers on a server with 32 GB of physical memory, and I have capped
each of these containers at a different value; the capping itself works
fine.

The issue arises when one of the containers rapidly eats up more memory
than it has been allocated. The other non-global zones become noticeably
less responsive while rcapd is trying to curb this unruly container. I am
wondering if this is due to heavy paging?

Has anyone else seen such behaviour, or is this acceptable behaviour? Any
comments or experiences would be really helpful.

Thanks


Re: [zones-discuss] rcapd

2008-09-01 Thread Jeff Victor
Hi Syed,

I would not be surprised to find that rcapd is behaving correctly on
your system. All of the containers in one Solaris instance share one
Solaris paging system and one set of swap devices. When rcapd is
paging the memory pages of one container out to the swap device, other
workloads sharing that disk will take longer to write to that disk.
Other virtualization solutions (e.g. hypervisors) behave similarly when
they have similar constraints and workloads and are sharing one internal
disk for swap space.

If your other containers are not paging at all, you can reduce this
effect by configuring your swap space on its own disk drive. The
disk-write transactions from those other containers will then *not*
wait for paging activity of the container with a RAM cap that is too
low.
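
As a rough sketch (the device names here are only placeholders), you can
see which devices currently back swap and add a dedicated slice on
another disk with something like:

   # swap -l                       list the current swap devices
   # swap -a /dev/dsk/c2t1d0s1     add a slice on a separate disk
   # swap -d /dev/dsk/c0t0d0s1     optionally remove the shared one

so that rcapd's page-outs land on a spindle the other containers are not
writing to.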

Do you know why that one container is using up more memory than the
cap? Is the cap too low, or is the application behaving badly?



On Mon, Sep 1, 2008 at 7:55 AM, syed [EMAIL PROTECTED] wrote:
 Hi,

 I am facing an issue with rcapd. I currently have set up 8 sparse-root
 containers on a server with 32 GB of physical memory, and I have capped
 each of these containers at a different value; the capping itself works
 fine.

 The issue arises when one of the containers rapidly eats up more memory
 than it has been allocated. The other non-global zones become noticeably
 less responsive while rcapd is trying to curb this unruly container. I
 am wondering if this is due to heavy paging?

 Has anyone else seen such behaviour, or is this acceptable behaviour?
 Any comments or experiences would be really helpful.



-- 
--JeffV


Re: [zones-discuss] rcapd

2008-09-01 Thread Mike Gerdts
On Mon, Sep 1, 2008 at 6:55 AM, syed [EMAIL PROTECTED] wrote:

 Hi,

 I am facing an issue with rcapd. I currently have set up 8 sparse-root
 containers on a server with 32 GB of physical memory, and I have capped
 each of these containers at a different value; the capping itself works
 fine.

 The issue arises when one of the containers rapidly eats up more memory
 than it has been allocated. The other non-global zones become noticeably
 less responsive while rcapd is trying to curb this unruly container. I
 am wondering if this is due to heavy paging?

What does vmstat -p say?  I bet it says yes!
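
(vmstat -p breaks paging activity out by page type; sustained non-zero
numbers in the anonymous columns - api/apo/apf - while rcapd is working
are the telltale sign.  A quick, rough check:

   # vmstat -p 5

and watch the api/apo/apf - anonymous page-in/out/free - columns.)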

 Has anyone else seen such behaviour, or is this acceptable behaviour?
 Any comments or experiences would be really helpful.

I haven't, but then again that is because I expected to see such
behavior if I used rcapd.  There are very few circumstances in my
world where it makes sense to encourage heavy paging - which is what
rcapd will do.  Solid-state disk may change this a bit, because paging
would likely be a lot faster.

For now, my approach is to cap the use of swap.  Note that in this
definition, swap is different from what most people expect - it refers to
how much memory a zone can reserve.  If the sum of all of your zones'
swap caps is no more than the 32 GB of physical memory, you should see
pretty much no paging to swap devices.  You will still see file system
(e.g. executable) paging.  The most noticeable side effect is that if
the things running in a zone (including use of /tmp) try to use more
than their respective caps, memory allocations will fail.  I see this as
a good thing because it means that the misbehaving application fails
rather than taking down all the rest.
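
As a rough sketch (the zone name and size are only placeholders), the
swap cap is set with the capped-memory resource in zonecfg:

   global# zonecfg -z zone1
   zonecfg:zone1> add capped-memory
   zonecfg:zone1:capped-memory> set swap=4g
   zonecfg:zone1:capped-memory> end
   zonecfg:zone1> commit
   zonecfg:zone1> exit

which sets the zone.max-swap resource control; processes in that zone can
then reserve at most 4 GB of virtual memory (anonymous memory plus /tmp).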

rcapd works a bit like boot camp (military - not the Mac thing).  If
one soldier (zone) misbehaves, they all get punished.  There may be
circumstances where this outcome is desirable, but server
virtualization using zones is likely not one of them.

--
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zones-discuss] rcapd

2008-09-01 Thread Syed Hussain
Hi Mike,

I think you are right; using rcapd for virtualization using zones would
be a problem. Thanks for the workaround - I really appreciate your help.

Regards

uzair

--- On Mon, 9/1/08, Mike Gerdts [EMAIL PROTECTED] wrote:
From: Mike Gerdts [EMAIL PROTECTED]
Subject: Re: [zones-discuss] rcapd
To: syed [EMAIL PROTECTED]
Cc: zones-discuss@opensolaris.org
Date: Monday, September 1, 2008, 9:24 AM

On Mon, Sep 1, 2008 at 6:55 AM, syed [EMAIL PROTECTED] wrote:

 Hi,

 I am facing an issue with rcapd. I currently have set up 8 sparse-root
 containers on a server with 32 GB of physical memory, and I have capped
 each of these containers at a different value; the capping itself works
 fine.

 The issue arises when one of the containers rapidly eats up more memory
 than it has been allocated. The other non-global zones become noticeably
 less responsive while rcapd is trying to curb this unruly container. I
 am wondering if this is due to heavy paging?

What does vmstat -p say?  I bet it says yes!

 Has anyone else seen such behaviour, or is this acceptable behaviour?
 Any comments or experiences would be really helpful.

I haven't, but then again that is because I expected to see such
behavior if I used rcapd.  There are very few circumstances in my
world where it makes sense to encourage heavy paging - which is what
rcapd will do.  Solid-state disk may change this a bit, because paging
would likely be a lot faster.

For now, my approach is to cap the use of swap.  Note that in this
definition, swap is different from what most people expect - it refers to
how much memory a zone can reserve.  If the sum of all of your zones'
swap caps is no more than the 32 GB of physical memory, you should see
pretty much no paging to swap devices.  You will still see file system
(e.g. executable) paging.  The most noticeable side effect is that if
the things running in a zone (including use of /tmp) try to use more
than their respective caps, memory allocations will fail.  I see this as
a good thing because it means that the misbehaving application fails
rather than taking down all the rest.

rcapd works a bit like boot camp (military - not the Mac thing).  If
one soldier (zone) misbehaves, they all get punished.  There may be
circumstances where this outcome is desirable, but server
virtualization using zones is likely not one of them.

--
Mike Gerdts
http://mgerdts.blogspot.com/

[zones-discuss] rcapd interactions between global and local zones?

2008-06-18 Thread James Litchfield
I'm struggling with the resource capping documentation. This environment
uses zones, and the documentation is frustratingly vague about resource
capping and zones.

A) If I run rcapd in the global zone and have resource settings via
   zonecfg for one or more zones, it would seem to me that I would not
   want to run rcapd in any local zones with capped-memory settings,
   since both might end up working on the same process and possibly
   deadlock?

B) If I do not have capped-memory settings for the zones in zonecfg,
   I should be able to run rcapd in the global zone and rcapd in local
   zones?

Any other limitations?
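
For reference, the setup I mean in (A) is roughly this (the zone name is
just an example):

   global# zonecfg -z webzone
   zonecfg:webzone> add capped-memory
   zonecfg:webzone:capped-memory> set physical=2g
   zonecfg:webzone:capped-memory> end
   zonecfg:webzone> commit

   global# rcapadm -E     enable capping so the global-zone rcapd
                          enforces the per-zone physical caps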

Jim Litchfield


Re: [zones-discuss] Rcapd threshold vs ZFS cache

2008-06-18 Thread Steffen Weiberle
Brian Smith wrote:
 When rcapd is calculating how much memory is free, to compare to the memory
 cap enforcement threshold, does it consider the memory used by the ZFS cache
 to be free or used? If I set rcapadm -c 90 then will rcapd behave any
 differently than with rcapadm -c 0 if there is a ZFS cache? I have read
 that the ZFS cache will use all free physical RAM.

IIRC, the ARC is limited to 3/4 of total memory. That is the maximum; it
may use [much] less, depending on overall memory use.

In your case, where you are allowing up to 90%, the ARC may get small if
that memory is actually getting used. zfs-discuss is a good place to get
info regarding the interaction between the ARC and VM memory.
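
If you want to see what the ARC is actually using, or pin it down, the
following is a rough sketch (the 2 GB figure is only an example):

   # kstat -p zfs:0:arcstats:size      current ARC size in bytes
   # kstat -p zfs:0:arcstats:c_max     current ARC ceiling

and to cap it explicitly, add to /etc/system and reboot:

   * limit the ARC to 2 GB
   set zfs:zfs_arc_max = 0x80000000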

 
 Regards,
 Brian
 


Re: [zones-discuss] rcapd to limit RAM usage of a zone

2006-08-29 Thread Jerry Jelinek

Phil Cordier wrote:

Posted this question on the general zones group and got a deafening
silence in response - anyone here have any possible answers?

http://forum.sun.com/jive/thread.jspa?forumID=299&threadID=100707


It is possible that prstat and rcapd are counting shared memory multiple times,
so that it seems like you are using more memory within the zone.  This is
a known bug:

4754856 *prstat* prstat -atJTZ should count shared segments only once

We are working on an enhancement to zones and resource management which
will improve a bunch of things in this area, including the way rcapd
accounts for shared memory.  It will also allow you to solve your underlying
problem by running a single rcapd in the global zone, which will cap each
zone's memory consumption.  This is described in this thread:

http://www.opensolaris.org/jive/thread.jspa?threadID=10451&tstart=0

Let us know if you have any questions about this project.
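
The intent is that, once this integrates, you will be able to set a
per-zone cap from the global zone with something roughly like

   global# rcapadm -z zone11 -m 512m

and have the single global-zone rcapd enforce it; the zone name here is
just an example and the exact syntax may differ once it ships.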

Thanks,
Jerry


Re: [zones-discuss] rcapd to limit RAM usage of a zone

2006-08-29 Thread Jeff Victor
Funny but ironic:  rcapd never counts its own memory usage.  See the source code 
at http://cvs.opensolaris.org/source/search?q=rcapd_pid .  Once you subtract 
rcapd's 128MB, the *rest* of the zone is using exactly what you specified as the cap.


But I don't see why rcapd should be using 128MB...

Can you supply the output of

zone11# pmap -ax {pid-of-rcapd}

Phil Cordier wrote:

Posted this question on the general zones group and got a deafening
silence in response - anyone here have any possible answers?

http://forum.sun.com/jive/thread.jspa?forumID=299&threadID=100707


--
Jeff VICTOR            Sun Microsystems            jeff.victor @ sun.com
OS Ambassador          Sr. Technical Specialist
Solaris 10 Zones FAQ: http://www.opensolaris.org/os/community/zones/faq
--