> On Dec 30, 2015, at 8:29 AM, Nahum Shalman <[email protected]> wrote:
> 
> Yes. The swap zvol gets created at boot time to match the size of DRAM 
> because that's how SmartOS expects to run.
> When you increase the DRAM you should increase the swap zvol accordingly.

Yes, and the "shortage" gets compounded by 2x if you don't have enough backing
swap space. A quick check with "swap -sh" shows how much has been allocated
versus how much is available. As a first pass, make sure your swap device is at
least as big as the "used" value.

For example, a tiny SmartOS system with 16G of RAM and a matching swap device,
running a single KVM zone using 8G, shows:
        # swap -sh
        total: 8.1G allocated + 46M reserved = 8.1G used, 13G available

after stopping the KVM zone:
        # swap -sh
        total: 175M allocated + 39M reserved = 214M used, 29G available

Note: the math seems strange at first; notice how 13 + 8 != 29. This is
because both RAM and swap are reserved. For an 8G KVM, the system allocates
8G of RAM and 8G of swap (29 - 2*8 = 13), so that the anonymous pages in RAM
can be paged out to swap. The accounting is correct, but the accounting rules
are complex. For more details, and for how zones see the swap cap and get
faked out, see:
http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/vm/vm_swap.c#473
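The reservation arithmetic in the note above can be sketched in a few lines of
shell (the GiB figures are the ones from the "swap -sh" output above):

```shell
# Reservation math for the 8G KVM example above (all values in GiB).
ram_reserved=8        # anonymous pages reserved in RAM for the zone
swap_reserved=8       # matching swap backing reserved for those pages
avail_stopped=29      # "available" from swap -sh with the zone stopped

# Running the zone takes both reservations out of the virtual memory pool:
avail_running=$((avail_stopped - ram_reserved - swap_reserved))
echo "$avail_running"    # 13, matching swap -sh while the zone runs
```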

In general, OS and LX zones do not need these large reservations.

For a more detailed look, ::memstat shows the Anon pages, and these are the
pages that can get paged out to the swap device. Using the above example:

        # echo ::memstat | mdb -k
        Page Summary                Pages                MB  %Tot
        ------------     ----------------  ----------------  ----
        Kernel                     344406              1345    8%
        Boot pages                  68306               266    2%
        ZFS File Data              311720              1217    7%
        Anon                      2121324              8286   51%
        Exec and libs                4976                19    0%
        Page cache                  16496                64    0%
        Free (cachelist)            17964                70    0%
        Free (freelist)           1299863              5077   31%

        Total                     4185055             16347
        Physical                  4185054             16347

and after the KVM VM is stopped:

        # echo ::memstat | mdb -k
        Page Summary                Pages                MB  %Tot
        ------------     ----------------  ----------------  ----
        Kernel                     339984              1328    8%
        Boot pages                  68306               266    2%
        ZFS File Data              311941              1218    7%
        Anon                        48057               187    1%
        Exec and libs                4983                19    0%
        Page cache                  16107                62    0%
        Free (cachelist)            18603                72    0%
        Free (freelist)           3377074             13191   81%

        Total                     4185055             16347
        Physical                  4185054             16347

Thus we see empirically that KVM uses anonymous memory. If there is not
enough backing swap space to support the anonymous memory use, then
RAM is used to back it instead, hence the unexpected 2x RAM use. The solution
is simple: have a swap device as large as RAM, even though it might never
actually be used.
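That sizing rule can be turned into a quick check. This is only a sketch: the
swap_deficit helper is hypothetical, and the commented-out commands are the
stock SmartOS ways to fetch the live inputs on a global zone.

```shell
# Hypothetical helper: given RAM and swap zvol sizes in MB, print the
# shortfall to grow zones/swap by (0 means swap >= RAM, i.e. the rule holds).
swap_deficit() {
    ram_mb=$1
    swap_mb=$2
    if [ "$swap_mb" -lt "$ram_mb" ]; then
        echo $((ram_mb - swap_mb))
    else
        echo 0
    fi
}

# On a live SmartOS global zone the inputs could come from, e.g.:
#   prtconf | awk '/^Memory size/ {print $3}'      # RAM in MB
#   zfs get -Hpo value volsize zones/swap          # swap zvol size in bytes
swap_deficit 131072 16384    # 128G RAM vs a 16G swap zvol -> 114688
```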
 -- richard

> 
> -Nahum
> 
> On 12/30/2015 11:21 AM, Humberto Ramirez wrote:
>> Nahum, is this something you would recommend every time the host memory 
>> changes/increases?
>> 
>> 
>> 
>> 
>> On Wed, Dec 30, 2015 at 9:16 AM, Nahum Shalman <[email protected] 
>> <mailto:[email protected]>> wrote:
>> On 12/30/2015 05:19 AM, Matthias Götzke wrote:
>>> We upgraded the RAM to 128GB now but this problem will obviously happen 
>>> again when we cannot limit the ARC. I am considering starting a few dummy 
>>> kvm’s with reserved RAM as buffers.
>>> 
>> Bingo!
>> 
>> You probably neglected to increase the size of the swap volume. That 
>> explains your symptoms.
>> QEMU will only be able to lock down memory pages if there's swap space to 
>> provide a backing. It's complicated and on the surface seems a bit silly, 
>> but the good news is that all you need to do is get the system to see 128GB 
>> of available swap space.
>> You can do that with or without a reboot depending on your requirements.
>> 
>> Since you were considering a reboot to adjust ARC max, let's do it the 
>> simple way with two reboots:
>> 
>> 1. Reboot into noinstall mode.
>> 2. "zpool import zones"
>> 3. "zfs set volsize=128G zones/swap"
>> 4. "zpool export zones"
>> 5. "reboot"
>> 6. Bask in your ability to boot more KVM VMs.
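For the no-reboot path Nahum mentions, one possible sketch (assuming the stock
zones/swap zvol and that the device can be momentarily dropped from the swap
pool, i.e. nothing currently paged out depends on it) is:

```shell
# Sketch: grow swap without a reboot (assumes the stock zones/swap zvol).
swap -l                               # note the current swap device
swap -d /dev/zvol/dsk/zones/swap      # remove it from the swap pool
zfs set volsize=128G zones/swap       # grow the zvol
swap -a /dev/zvol/dsk/zones/swap      # add it back at the new size
swap -sh                              # confirm the new "available" figure
```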
>> 
>> -Nahum
>> 
> 



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
