On 26/08/2014 at 11:04, Rob Sargent <robjsarg...@gmail.com> wrote:

> We use tools (genetic analysis) which have explicit parameters for processor
> consumption and which, when left to their own devices, use variable numbers of
> cores during a given run. We gauge their typical CPU load and parallelize
> accordingly. I'm not too worried about exceeding cores/slots (within
> reason). To me, more runnables than cores just means maximal CPU usage.
> (Think back to the single-CPU days: there was always more than one runnable,
> and that was a good thing.)
Hi,

The ability to tell GNU Parallel that a certain job requires 'n' slots could also be used, indirectly, when the constrained resource is memory: assign 1 slot to the least memory-hungry job and proportionally more slots to all the others.

> Are you saying you know beforehand which job needs how many cores? If so
> (before Ole builds your feature) could you group them into, say, small, medium
> and large, and parallel those groups with a reasonable --jobs each?

Yes, I know beforehand how many cores a job needs; it is an input parameter. Grouping them is exactly what I did, but the drawback is that this imposes an unnecessary synchronization point (a barrier) between groups, which is especially bad when a group contains unbalanced long-running jobs.

--
Douglas A. Augusto
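For the record, a minimal sketch of the grouping workaround discussed above. The file name jobs.txt, its "cores command" format, and the group boundaries (1, 4, 8 cores) are all assumptions for illustration; only --jobs is a real GNU Parallel option. Note that each invocation of parallel finishes completely before the next starts, and that is exactly the barrier complained about:

```shell
#!/bin/sh
# Grouping workaround: jobs.txt lists one job per line as "CORES COMMAND".
# Each group runs with a --jobs value sized so total core usage stays
# within the machine's limit.

CORES=$(nproc)

# Small jobs: 1 core each -> up to $CORES concurrent jobs.
grep '^1 ' jobs.txt | cut -d' ' -f2- | parallel --jobs "$CORES"

# Barrier: no medium job starts until ALL small jobs are done,
# even if most cores are already idle.

# Medium jobs: 4 cores each -> CORES/4 concurrent jobs.
grep '^4 ' jobs.txt | cut -d' ' -f2- | parallel --jobs $(( CORES / 4 ))

# Large jobs: 8 cores each -> CORES/8 concurrent jobs.
grep '^8 ' jobs.txt | cut -d' ' -f2- | parallel --jobs $(( CORES / 8 ))
```

A single long-running straggler in the small group therefore delays every medium and large job, which is what a per-job slot count inside one parallel invocation would avoid.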