Hi Rayson,

Sorry to sound needy, but have you had time to consider controlling cgroup's memory.memsw.limit_in_bytes via a new attribute, defined as the OS-dependent measure of memory usage (typically RAM + swap), rather than overloading h_vmem/s_vmem?
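
To be concrete, the sort of thing I imagine the execd doing per job is only a few lines. A rough sketch only, and certainly not a patch: it assumes the cgroup v1 memory controller is mounted under /sys/fs/cgroup/memory, that a per-job group such as .../sge/job_1234 has already been created, and the helper and its name are just made up for illustration:

    /* Hypothetical helper: cap RAM+swap for an existing per-job memory
     * cgroup.  memory.limit_in_bytes has to be written first, because the
     * kernel insists memory.memsw.limit_in_bytes >= memory.limit_in_bytes. */
    #include <stdio.h>
    #include <stdlib.h>

    static int set_memsw_limit(const char *cgroup_dir, unsigned long long bytes)
    {
        const char *files[] = { "memory.limit_in_bytes",
                                "memory.memsw.limit_in_bytes" };
        char path[4096];
        int i;

        for (i = 0; i < 2; i++) {
            FILE *f;

            snprintf(path, sizeof(path), "%s/%s", cgroup_dir, files[i]);
            f = fopen(path, "w");
            if (f == NULL)
                return -1;
            fprintf(f, "%llu", bytes);
            if (fclose(f) != 0)   /* rejected values show up as a write error here */
                return -1;
        }
        return 0;
    }

    int main(int argc, char **argv)
    {
        /* e.g. ./memsw_limit /sys/fs/cgroup/memory/sge/job_1234 4294967296 */
        if (argc != 3)
            return 2;
        return set_memsw_limit(argv[1], strtoull(argv[2], NULL, 10)) ? 1 : 0;
    }

The value written there would come from the new, separately named attribute, leaving h_vmem/s_vmem free to carry on meaning address space.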

Keeping the two separate should go a long way towards improving cluster utilisation, which keeps everyone happy.

I'm guessing it's getting rather close to your u2 release now, but from what you've told me it sounds like this work is an extension of what you've already done, so it shouldn't need to hold the release up?

In case you missed it, here's the nub of what I've previously said:


On Fri, 25 May 2012, Mark Dixon wrote:
...
memsw usage and virtual address space usage can diverge wildly under fairly common use cases:

* Processes running in 64-bit mode. Comparing the VIRT column in "top" with the "RES", "SHR" and "SWAP" columns clearly shows that an ordinary process can easily be out by ~100 MB, and it gets worse the more shared libraries are loaded. This matters in a world where cores per CPU are increasing, so memory per core is under pressure to drop.

* Processes that mmap files.

* System V IPC shared memory.

I'm sure that clever people can point out more.


When there's a better way to limit usage of shared resources, we've got no business enforcing a limited virtual address space on all processes in all cases: virtual address space isn't the shared resource that jobs have to fight over.

I absolutely agree that limiting address space is sometimes appropriate, and there should be a mechanism for doing so, but it should be a separate knob for users and admins to tweak. Otherwise you're conflating two distinct controls that do different things.
...
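
To put the shared-library and mmap cases in concrete terms, here's a throwaway test program (nothing to do with any real job, and it uses an anonymous mapping for brevity, although an mmap'd file behaves the same way). It reserves 8 GiB of address space and never touches it, so anything policing virtual size sees ~8 GiB, while RSS + swap, which is what memory.memsw.limit_in_bytes polices, stays at a few MB:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 8ULL << 30;   /* 8 GiB of address space, never touched */
        char line[256];
        FILE *f;

        /* MAP_NORESERVE: take address space without committing swap,
         * much as allocators and mappings of large files do. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* VmSize is what a virtual-address-space limit sees;
         * VmRSS (+ swap) is what the memsw limit sees. */
        f = fopen("/proc/self/status", "r");
        while (f != NULL && fgets(line, sizeof(line), f) != NULL)
            if (strncmp(line, "VmSize:", 7) == 0 ||
                strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        if (f != NULL)
            fclose(f);

        munmap(p, len);
        return 0;
    }

As I understand it, under an address-space limit such as h_vmem=2G that mmap would simply fail, even though the process never needs more than a few MB of real memory, which is exactly the sort of false failure I'd like us to avoid.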

Best wishes,

Mark
--
-----------------------------------------------------------------
Mark Dixon                       Email    : [email protected]
HPC/Grid Systems Support         Tel (int): 35429
Information Systems Services     Tel (ext): +44(0)113 343 5429
University of Leeds, LS2 9JT, UK
-----------------------------------------------------------------
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
