Not totally sure I am following - the output of free would help a lot.

However, the number you should care about is free + buffers/cache.  The 
reason for your discrepancy is that you are including the in-memory 
filesystem content that Linux caches in order to improve performance. On boxes with 
enough RAM this can easily be 60+ GB.  When the system comes under memory 
pressure (from applications or the kernel wanting more memory), the kernel will 
evict cached filesystem items to free up memory for processes.  This link 
[1] has a pretty good description of what I am talking about.
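As a rough illustration, the "-/+ buffers/cache" line in free is the one to 
look at (the numbers below are made up, not from your box):

free -m
             total       used       free     shared    buffers     cached
Mem:        257916     254800       3116          0       1210      61432
-/+ buffers/cache:     192158      65758
Swap:            0          0          0

The "Mem:" line makes the box look nearly full, but the -/+ buffers/cache 
line shows ~65G is actually available to applications once the cache is 
given back.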

Either way, if you want to test whether this is a case of filesystem 
caching, you can run:

echo 3 > /proc/sys/vm/drop_caches

That will tell Linux to drop all filesystem cache from memory, and I bet a ton 
of your memory will show up.  Note: in doing so you will affect the 
performance of the box, since what used to be an in-memory lookup will now have 
to go to the filesystem.  Over time, however, the cache will re-establish 
itself.  You can find more examples of how caching interacts with other parts 
of the Linux memory system here: [2]
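If you want to watch it happen, a quick before/after check looks like this 
(a sketch only; run it as root, and it's a good idea to sync first so dirty 
pages get written out to disk):

sync                                  # flush dirty pages to disk first
free -m                               # note the cached column
echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
free -m                               # cached should now be close to zero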

To your question about the qemu processes: if you use ps aux, the columns VSZ and 
RSS will tell you what you are wanting.  VSZ is the virtual size (how much memory the 
process has asked the kernel for).  RSS is the resident set size, or the actual 
amount of non-swapped physical memory the process is using.
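For example, something like the following will show the qemu processes 
sorted by RSS and then total them up (the grep/awk pattern is an assumption; 
adjust it to however your qemu processes are named):

ps -eo pid,vsz,rss,args --sort=-rss | grep '[q]emu'

ps -eo rss,comm | awk '/qemu/ {sum+=$1} END {printf "%.1f GiB\n", sum/1024/1024}'

One caveat: summing RSS double-counts pages shared between processes, so the 
total will overstate actual usage somewhat.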

[1] - http://www.linuxatemyram.com/
[2] - http://www.linuxatemyram.com/play.html
____________________________________________

Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.


From: Mike Leong <[email protected]>
Date: Tuesday, June 23, 2015 at 9:44 AM
To: "[email protected]" <[email protected]>
Subject: [Openstack-operators] Instance memory overhead

My instances are using much more memory than expected.  The amount of free memory 
(free + cached) is under 3G on my servers even though the compute nodes are 
configured to reserve 32G.

Here's my setup:
Release: Icehouse
Server mem: 256G
Qemu version: 2.0.0+dfsg-2ubuntu1.1
Networking: Contrail 1.20
Block storage: Ceph 0.80.7
Hypervisor OS: Ubuntu 12.04
memory over-provisioning is disabled
kernel version: 3.11.0-26-generic

On nova.conf
reserved_host_memory_mb = 32768

Info on instances:
- root volume is file backed (qcow2) on the hypervisor local storage
- each instance has a rbd volume mounted from Ceph
- no swap file/partition

I've confirmed, via nova-compute.log, that nova is respecting the 
reserved_host_memory_mb directive and is not over-provisioning.  On some 
hypervisors, nova-compute says there's 4GB available for use even though the OS 
has less than 4G left (free + cached)!

I've also summed up the memory from the /etc/libvirt/qemu/*.xml files and the total 
looks good.

Each hypervisor hosts about 45-50 instances.

Is there a good way to calculate the actual memory usage of each QEMU process?

PS: I've tried free, summing up RSS, and smem, but none of them can tell me 
where the missing mem is.

thx
mike
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
