On 28.05.2016 11:07, haoyun wrote:
hello, everyone~
I have a cluster with 2 physical machines, both running PVE 4.2.
my physical machine:
root@cna5:~# free
             total       used       free     shared    buffers     cached
Mem:      65674780   21937328   43737452      93316     166488    1534868
-/+ buffers/cache:   20235972   45438808
Swap:      8388604          0    8388604


my VM:
root@debian:~# free
             total       used       free     shared    buffers     cached
Mem:       4063488     124792    3938696          0      10664      39144
-/+ buffers/cache:      74984    3988504
Swap:       901116          0     901116
root@debian:~# cd /dev/shm
root@debian:/dev/shm# dd if=/dev/zero of=dd.img bs=1M count=3000
dd: writing `dd.img': No space left on device
970+0 records in
969+0 records out
1016737792 bytes (1.0 GB) copied, 0.250504 s, 4.1 GB/s
root@debian:/dev/shm# free
             total       used       free     shared    buffers     cached
Mem:       4063488    1119324    2944164          0      10680    1032052
-/+ buffers/cache:      76592    3986896
Swap:       901116          0     901116


Why does /dev/shm report "No space left on device" while the VM still
has plenty of free memory?

Can you post the result of:
df -h /dev/shm

As explained in:
https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt

The tmpfs is half the size of the physical memory by default,
so it can fill up before the whole memory is used
(this is a safety measure, read the link for more info).
You can also see the tmpfs accounting in your output above: after the
dd, "cached" grew by roughly 1 GB, because tmpfs pages are counted as
page cache, which is why "used" (after buffers/cache) barely moved.
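
To see how big the tmpfs actually is inside the VM, findmnt (part of
util-linux on Debian) prints the effective mount options, including any
size= setting, next to the df output:

findmnt /dev/shm
df -h /dev/shm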


So if it is simply full, you may try to remount it bigger with:
mount -o remount,size=8G /dev/shm
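
If the bigger size helps and should survive a reboot, an /etc/fstab
entry along these lines does the same at boot time (the 8G is again
just an example, adapt it like the remount above):

tmpfs /dev/shm tmpfs defaults,size=8G 0 0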

Adapt the size, but make sure to leave enough free memory: the OOM (Out Of Memory)
killer cannot free memory used by tmpfs, and thus a too big tmpfs can have a
negative impact on system stability.
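
Before choosing a size, a quick look at the actual headroom helps, for
example:

free -h
# files kept in tmpfs consume memory until they are deleted; they can
# be swapped out, but not simply dropped like regular page cache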

cheers,
Thomas
