The main reason is to avoid too many temporary files being created in the 
sharedDirectory. Each job requires around 100-1024 KB there. If the input 
files were not links, they would add to this as well. Given hundreds of 
thousands of jobs, this easily adds up. Further, I heard rumours in our lab 
that the SLURM head node/server would crash if more than 10000 jobs are 
submitted at once… Jonathan might have a better idea of what’s true about 
this, though. My main concern is the number of temporary files that are 
created (which also count towards the inode quota…).
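If OpenMOLE has no built-in cap, one scheduler-side workaround might be to
throttle submission from a small wrapper script outside OpenMOLE. Below is a
minimal Python sketch of that idea (not OpenMOLE code; the MAX_QUEUED value,
polling interval, and job script names are just assumptions for illustration):
it only calls sbatch while squeue reports fewer than a chosen number of our
own jobs in the queue.

    #!/usr/bin/env python3
    """Throttled SLURM submission: submit the next batch script only while
    fewer than MAX_QUEUED of our jobs are pending or running."""

    import getpass
    import subprocess
    import time

    MAX_QUEUED = 500      # assumed cap on jobs in the queue at any one time
    POLL_SECONDS = 30     # assumed wait before re-checking the queue
    SCRIPTS = [f"job_{i}.sh" for i in range(100000)]  # hypothetical scripts

    USER = getpass.getuser()

    def queued_jobs() -> int:
        """Count this user's jobs currently in the queue."""
        out = subprocess.run(
            ["squeue", "-h", "-u", USER],
            capture_output=True, text=True, check=True,
        ).stdout
        return len(out.splitlines())

    for script in SCRIPTS:
        # Wait until there is room in the queue before submitting more.
        while queued_jobs() >= MAX_QUEUED:
            time.sleep(POLL_SECONDS)
        subprocess.run(["sbatch", script], check=True)

That keeps the number of jobs (and hence temporary files) in flight bounded,
but of course a setting inside OpenMOLE itself would be much nicer.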

> On 12 May 2015, at 09:34, Romain Reuillon <[email protected]> wrote:
> 
> Hi Andrea,
> 
> no, but why would you do that?
> 
> Romain
> 
> On 12/05/2015 at 10:27, Andreas Schuh wrote:
>> Hi,
>> 
>> is it possible to limit the number of jobs submitted to the SLURM/Condor/PBS 
>> queue at a time?
>> 
>> Andreas
>> _______________________________________________
>> OpenMOLE-users mailing list
>> [email protected]
>> http://fedex.iscpif.fr/mailman/listinfo/openmole-users
> 
> 

