For any process with a large number of threads, the VM (VIRT) size has been essentially meaningless ever since the glibc change to allocate a heap (malloc arena) per thread. I look at /proc/$pid/status to find the memory actually used by a process: RSS + Swap + kernel page tables.
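Something like this quick Java sketch (purely illustrative, just summing the VmRSS, VmSwap and VmPTE fields that status reports in kB) is what I mean:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ProcStatus {
        public static void main(String[] args) throws IOException {
            String pid = args.length > 0 ? args[0] : "self";
            long totalKb = 0;
            for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/status"))) {
                // VmRSS = resident pages, VmSwap = swapped-out pages, VmPTE = kernel page tables
                if (line.startsWith("VmRSS:") || line.startsWith("VmSwap:") || line.startsWith("VmPTE:")) {
                    totalKb += Long.parseLong(line.replaceAll("[^0-9]", ""));
                }
            }
            System.out.println("Approx. real memory use: " + totalKb + " kB");
        }
    }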
Jim
On Wednesday, March 6, 2019, 4:25:48 AM EST, Dorigo Alvise (PSI)
<[email protected]> wrote:
Hello everyone,

Here at PSI we're observing something that in principle seems strange (at least to me). We run a Java application that writes to disk by means of a standard AsynchronousFileChannel (I don't know the details of how it is used). There are two instances of this application: one runs on a node writing to a local drive, the other writes to a GPFS-mounted filesystem (that node is part of the cluster, not a remote mount).
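For context, and purely as a hypothetical sketch since I don't know how our application actually drives it, a typical AsynchronousFileChannel write looks roughly like this (file path and payload are made up):

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.Future;

    public class AsyncWriteSketch {
        public static void main(String[] args) throws Exception {
            // Open the target file for asynchronous writing, creating it if needed
            try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("/tmp/test.dat"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                ByteBuffer buf = ByteBuffer.wrap("some payload".getBytes());
                // The call returns immediately; a JVM thread pool completes the write
                Future<Integer> result = ch.write(buf, 0);
                System.out.println("bytes written: " + result.get());
            }
        }
    }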
What we see is that in the former case the application has a lower VIRT+RES sum and the OS shows very large cache usage; in the latter case, the OS cache is negligible while VIRT+RES is very (perhaps even too) high, with VIRT in particular being very high.
So I wonder what the difference is... Writing to a GPFS-mounted filesystem, as far as I understand, means "talking" to the local mmfsd daemon, which fills up its own pagepool; the system then asynchronously flushes those pages to the actual pdisks. But why does the Linux kernel account so much memory to the process itself? And why is this large amount of memory so much more VIRT than RES?
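If it helps to narrow this down, we could dump the process's largest mappings with a quick sketch like the following (the 256 MiB threshold and formatting are arbitrary) to see which regions make up the huge VIRT figure:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class BigMappings {
        public static void main(String[] args) throws IOException {
            String pid = args.length > 0 ? args[0] : "self";
            for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/maps"))) {
                String[] parts = line.split("\\s+");
                String[] range = parts[0].split("-");
                long size = Long.parseUnsignedLong(range[1], 16) - Long.parseUnsignedLong(range[0], 16);
                // Print mappings larger than 256 MiB: these are what dominate VIRT
                if (size > (256L << 20)) {
                    String name = parts.length > 5 ? parts[5] : "[anon]";
                    System.out.printf("%6d MiB  %s%n", size >> 20, name);
                }
            }
        }
    }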
thanks in advance,
Alvise