Thanks for that pointer.  Not to beat a dead horse, but my problem is
that --exclusive works in two modes.  In immediate mode (for lack of a
better term), I rarely want --exclusive, but when initiating multiple
jobs from within an existing allocation, I want to force it.
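
For reference, here is a rough sketch of the second case (job steps launched inside an existing allocation). This is a hypothetical job-script fragment, not runnable outside a SLURM cluster; "foo" and "bar" stand in for the user's real commands, and the partition name is taken from the thread:

```shell
#!/bin/bash
#SBATCH -p whatever -n 145

# Each srun below launches a job step inside this allocation.
# Step-level --exclusive makes each step claim dedicated CPUs,
# so concurrent steps queue for resources instead of stacking
# on the same cores.
srun --exclusive foo &
srun --exclusive bar &
wait   # block until both steps finish
```

Alternatively, per Moe's suggestion below, exporting SLURM_EXCLUSIVE in the script (or the user's environment) makes this the default for every srun without editing each line.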

Jeff

On Sat, Apr 30, 2011 at 8:37 AM,  <[email protected]> wrote:
> Setting SLURM_EXCLUSIVE environment variable will make this the
> default behavior for srun commands. You could just set this for
> this particular user if desired.
>
> Moe
>
>
> Quoting [email protected]:
>
>> Ideally I'd like a way of setting this as the default for (s)batch as
>> it provides an avenue for users to overload the nodes thinking that
>> they're running faster (don't get me started...).  I can insist the
>> user runs their job steps serially but it's hard to enforce without a
>> lot of effort.
>>
>> Jeff
>>
>> On Fri, Apr 29, 2011 at 2:05 PM, Auble, Danny <[email protected]> wrote:
>>>
>>> Hey Jeff, Thanks for the accolades ;).
>>>
>>> Have you tried the srun '--exclusive' option?  That should keep things
>>> separate.
>>>
>>> Let us know if that doesn't work,
>>> Danny
>>>
>>>
>>>> -----Original Message-----
>>>> From: [email protected]
>>>>  [mailto:[email protected]] On Behalf Of
>>>> [email protected]
>>>> Sent: Friday, April 29, 2011 2:01 PM
>>>> To: [email protected]
>>>> Subject: [slurm-dev] Stupid Sbatch Question
>>>>
>>>> I have a user (in the same sense that I have an ingrown toenail) who
>>>> runs a job like this...
>>>>
>>>> She starts with 'sbatch -p whatever -n 145 scriptname'
>>>> The script contains something like:
>>>> srun foo &
>>>> srun bar
>>>>
>>>> What's happening is that each core is doubly allocated, e.g. 16 procs
>>>> running on an 8-core node.  Why isn't the second srun constrained by
>>>> the resources consumed by the first?
>>>>
>>>> This is running under (the very excellent) Slurm 2.2.3.
>>>>
>>>> Jeff Katcher
>>>> FHCRC Cluster Monkey
>>>
>>>
>>
>>
>
