On 30 April 2014 17:35, B. Franz Lang <franz.l...@umontreal.ca> wrote:
> Hi there
>
> I have been trying to find a way that allows the use of 'parallel'
> without completely freezing machines --- which in my case
> is due to the parallel execution
> of very memory-hungry applications (like a server that has 64 GB
> memory, and one instance of an application - unforeseeable -
> between 10-60 GB). If a couple of them are started in parallel, --noswap
> is unable to master the situation, sometimes running into
> swap usage beyond the allocated space, and jobs dropped by the system
> (in addition to freezing the server almost solid and taking more time
> than not using parallel code).
>
> I am currently using a rather awkward workaround
> by estimating memory usage with commands like /usr/bin/time -f "%M %P"
> beforehand, to direct the number of parallel processes. Not ideal.
> Is there an easy way around this, or an intention to add features
> that would help under such conditions? I could think about having
> a first instance of a process be run to sense memory usage, before
> sending of the following ones.
>
> Cheers Franz
I've been having the same problem. Sometimes there is no easy way to predict that some of my runs will need much more memory than others, whether because of randomized parameters (when I am doing a parameter sweep) or other factors that I can't handle with any of the arguments I know of.
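
For reference, here is a minimal sketch of the pilot-run workaround Franz describes: run one representative instance under GNU time to measure peak RSS, then size -j from the memory currently available. 'myjob', 'sample_input', and 'inputs/*' are placeholders for the real command and data; this assumes a Linux system where /proc/meminfo exposes MemAvailable (kernel 3.14+; MemFree is a rougher stand-in on older kernels).

    #!/bin/bash
    # Pilot run: %M is the peak resident set size in kilobytes.
    # GNU time writes to stderr, so route stderr to the pipe and
    # discard the job's own stdout; tail keeps time's final line.
    peak_kb=$(/usr/bin/time -f "%M" myjob sample_input 2>&1 >/dev/null | tail -n 1)

    # Memory currently available, in kilobytes.
    avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)

    # Number of job slots that fit, keeping at least one.
    slots=$(( avail_kb / peak_kb ))
    [ "$slots" -lt 1 ] && slots=1

    parallel -j "$slots" myjob ::: inputs/*

Of course this only helps when the jobs have similar footprints; when a single instance can swing anywhere between 10 and 60 GB, as in Franz's case, you would have to size the slots for the worst case or re-measure per parameter set.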