On Wed, Feb 11, 2015 at 3:08 PM, Mark Hahn <[email protected]> wrote:
> if there is ever a significant period of time when you're both
> swapping out and in, you're probably thrashing. I'm not advocating
> thrashing. however, it's hard to think of rational arguments against
> two particular ways to use swap:

My usage of "both" in that context was meant as an "either/or". Anyway,
I agree with what you say for a general-purpose server. However, my view
of an HPC machine (either a compute node alone or a cluster as a whole)
is one where the CPU is kept as close as possible to 100% usage, as I
interpret the C in HPC as "Computing", i.e. "performing calculations".
Paying for the latest generation of CPUs makes no economic sense
otherwise...

> again, I'm not talking about thrashing, wherein the CPU is idled waiting
> for pages during a storm of both SI/SO traffic. non-thrash swap usage
> does not imply a waste of CPU.

Anything not keeping the FP unit busy is a waste of CPU - again, in my
view. Some years ago we discussed the jitter generated by daemons
running on the nodes. Swap usage (regardless of how heavy it is) is a
big generator of such jitter, as simply managing the swap, preparing
the I/O and then performing it - even with very efficient controllers -
means not doing computation.
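As an aside, a quick way to tell whether a node is merely holding cold
pages in swap or is actually paying for page traffic while computing is
to sample the kernel's cumulative swap counters. A minimal sketch,
assuming a Linux node where /proc/vmstat exposes the pswpin/pswpout
counters (the 10-second interval is arbitrary):

    #!/usr/bin/env python3
    # Sample the cumulative swap counters twice and report the
    # swap-in/swap-out rate over the interval.
    import time

    def swap_counters():
        """Return (pswpin, pswpout): pages swapped in/out since boot."""
        counters = {"pswpin": 0, "pswpout": 0}
        with open("/proc/vmstat") as f:
            for line in f:
                key, value = line.split()
                if key in counters:
                    counters[key] = int(value)
        return counters["pswpin"], counters["pswpout"]

    interval = 10.0  # seconds between samples
    in0, out0 = swap_counters()
    time.sleep(interval)
    in1, out1 = swap_counters()
    print(f"swap-in : {(in1 - in0) / interval:.1f} pages/s")
    print(f"swap-out: {(out1 - out0) / interval:.1f} pages/s")

Sustained non-zero rates on both lines during a run are the si/so storm
discussed above; an occasional burst of swap-out with no swap-in is the
harmless "cold pages pushed out once" case.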
My usage of "both" in that context was an equivalent for "either/or". Anyway, I agree with what you say for a general purpose server. However, my view of an HPC machine (either a compute node alone or a cluster as a whole) is one where the CPU is kept as close as possible to 100% usage, as I do interpret the C in HPC as "Computing" i.e "performing calculations". Paying for the latest generation of CPUs makes no economic sense otherwise... > again, I'm not talking about thrashing, wherein the CPU is idled waiting > for pages during a storm of both SI/SO traffic. non-thrash swap usage > does not imply a waste of CPU. Anything not keeping the FP unit busy is a waste of CPU - again, in my view. We have discussed some years ago the jitter generated by daemons running on the nodes. Swap usage (independent of how heavy) is a big generator of such jitter as simply managing the swap, preparing I/O then performing I/O - even with very efficient controllers - means not doing computation. >> bad enough that the RAM is mostly NUMA these days and the various > > what does NUMA have to do with it? I meant it again in the context of keeping the CPU busy. NUMA, as well as swap, means that the CPU doesn't always have the data "at hand", but has to wait for it - from a memory bank connected to a different CPU or from a storage device. >> is now free for others) and the user satisfaction (no swap=faster >> finishing) simultaneously :) > > that is a false equivalency. Care to explain this further? >> For many years now I do not configure swap on my workstations or >> laptops either (using mostly Fedora), as they have enough RAM for a >> full-blown graphical desktop environment and all assorted >> applications. And when I do start to see problems, > > how strange that you attribute the problem to the availability of swap. I don't quite understand your statement. Cheers, Bogdan _______________________________________________ Beowulf mailing list, [email protected] sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
