Hi,

This is indeed Linux, CentOS 7 to be more precise, using qemu-kvm as the hypervisor. The used RAM really is in the "used" column, not in buff/cache. While we have made adjustments by moving and resizing the specific guest that was using 96 GB (verified in top), the RAM usage is still fairly high for the amount of allocated RAM.

Currently, the RAM usage looks like this:

              total        used        free      shared  buff/cache   available
Mem:           251G        190G         60G         42M        670M         60G
Swap:          952M        707M        245M
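
Since buff/cache is only 670M here, the 190G really does seem to sit in the
qemu-kvm processes themselves rather than in the page cache. A quick way to
double-check (a rough sketch, assuming the emulator processes are named
qemu-kvm as they are on CentOS 7):

# sum the resident set size of every qemu-kvm process, reported in GiB
ps -C qemu-kvm -o rss= | awk '{ sum += $1 } END { printf "%.1f GiB\n", sum / 1024 / 1024 }'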


I have 188.5 GB of RAM allocated to 22 instances on this node. I believe it's unrealistic to think that all 22 of these instances have cached or are actively using their entire allocation at this time.
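
To see which guests are holding on to memory they are not actually using,
one way would be to compare the balloon size against the host-side resident
memory of each domain through libvirt. A rough sketch (domain names will
obviously differ):

# for each running guest, print the balloon size ("actual") and the
# resident memory of its process on the host ("rss"), both in KiB
for dom in $(virsh list --name); do
    echo "== $dom =="
    virsh dommemstat "$dom" | egrep '^(actual|rss)'
done

A guest whose rss sits near its actual value while the OS inside only
reports 12 GB in use would match the behaviour described in my original
message below.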
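
As for forcing the RAM back (my last question in the quoted message below):
as far as I can tell nova never deflates the balloon on its own, but it can
apparently be done by hand through libvirt, provided the virtio balloon
driver is loaded in the guest. A minimal sketch, with a hypothetical domain
name:

# shrink the balloon live so the host can reclaim the pages
# (the size is in KiB by default; 12582912 KiB = 12 GiB)
virsh setmem instance-0000abcd 12582912 --live

From what I understand, nova's resource tracking is based on the flavor, so
it would not take a manual change like this into account when scheduling
new guests.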

On 2017-03-23 13:07, Kris G. Lindgren wrote:
Sorry for the super stupid question.

But if this is Linux, are you sure that the memory is not actually being
consumed by buffers/cache?

free -m
              total        used        free      shared  buff/cache   available
Mem:         128751       27708        2796        4099       98246       96156
Swap:          8191           0        8191

This shows that of 128 GB, 27 GB is used, but buffers/cache consumes 98 GB of RAM.

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 3/23/17, 11:01 AM, "Jean-Philippe Methot" <[email protected]> wrote:

     Hi,

     Lately, on my production OpenStack Newton setup, I've run into a
     situation that defies my assumptions regarding memory management on
     OpenStack compute nodes, and I've been looking for explanations.
     Basically, we had a VM with a flavor that limited it to 96 GB of RAM,
     which, to be quite honest, we never thought we could ever reach. This
     is a very important VM where we wanted to avoid running out of memory
     at all costs. The VM itself generally uses about 12 GB of RAM.

     We were surprised when we noticed yesterday that this VM, which has
     been running for several months, was using all of its 96 GB on the
     compute host. Despite that, in the guest, the OS was indicating a
     memory usage of about 12 GB. The only explanation I see for this is
     that at some point in time, the host had to allocate all 96 GB of RAM
     to the VM process and it never took back the allocated RAM. This
     prevented the creation of more guests on the node, as it was showing
     it didn't have enough memory left.

     Now, I was under the assumption that memory ballooning was integrated
     into nova and that the amount of memory allocated to a specific guest
     would deflate once that guest no longer needed it. After verification,
     I've found blueprints for it, but I see no trace of any implementation
     anywhere.

     I also notice that on most of our compute nodes, the amount of RAM
     used is much lower than the amount of RAM allocated to VMs, which I do
     believe is normal.

     So basically, my question is, how does OpenStack actually manage RAM
     allocation? Will it ever take back the unused RAM of a guest process?
     Can I force it to take back that RAM?
--
     Jean-Philippe Méthot
     Openstack system administrator
     PlanetHoster inc.
     www.planethoster.net

--
Jean-Philippe Méthot
Openstack system administrator
PlanetHoster inc.
www.planethoster.net


_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
