We do not have per-zone disk swap reservations or limits. Limiting how much of a zone's memory can be paged out would effectively lock the remaining memory in use by that zone. That memory could never be paged, potentially causing physical memory starvation for other zones.

If one zone is paged out completely, then the other zone won't need much swap space, as there is plenty of free physical memory for its pages to live in. Both zones being paged out at the same time is not a realistic scenario, as there is no reason for physical memory to be freed with nothing left to use it.

I would recommend leaving some headroom for the kernel and the global zone. The zonestat utility will show how much memory these are using. Otherwise, you are committing more memory resources to your zones than are actually available. There is no harm in this, except that the zones cannot reach their caps, because there is not enough physical memory to supply both zones with their limits at the same time.
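As a sketch of the kind of configuration being discussed (the zone name appzone1 is a placeholder; the 8G/24G figures are taken from the example in the quoted mail below):

```shell
# Illustrative only: cap a zone at 8G of physical memory and 24G of
# virtual memory (physical + disk swap) via the capped-memory resource.
zonecfg -z appzone1 <<'EOF'
add capped-memory
set physical=8G
set swap=24G
end
commit
EOF

# Observe actual memory use of the kernel, the global zone, and each
# non-global zone at 5-second intervals, to size the headroom:
zonestat 5
```

Note that "physical" is enforced by rcapd paging the zone down to its cap, while "swap" (the zone.max-swap rctl) limits total virtual memory reservations; neither reserves or limits disk swap specifically, which is the point above.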


On 10/ 8/12 07:12 AM, Habony, Zsolt wrote:


I am looking for capping parameters for the swap area of zones.

We are consolidating independent applications to one physical box. I want to share CPU power only, and use strict memory caps.

What I found is "max-swap" and "capped-memory, physical".

"capped-memory, physical" looks fine for controlling the physical memory available to a zone.

"max-swap" seems to limit virtual memory (swap space on disk plus physical memory).

Thus in a theoretical example:

if I have two zones, each migrated from a physical box with 8G RAM and 16G swap, I need to give each of them

8G as the physical limit

16+8=24G as the max-swap limit

to reproduce a similar environment.

These two zones together would need 32G of swap space.

My concern is that there is no parameter for limiting the swap area on disk!

Thus if one zone is idle, it might be swapped out almost completely, occupying 16+8=24G of swap space and leaving only 8G of swap space for the other zone.

Do you have a better capping setup to avoid such a situation?

Thank You,


zones-discuss mailing list
