[slurm-dev] Re: How are paired dependencies handled?

2017-08-11 Thread Douglas Jacobsen
I think you want the *kill_invalid_depend* SchedulerParameters option,
which has slurmctld automatically clean up jobs that can never run owing
to unsatisfiable dependencies.
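For reference, a minimal sketch of what that looks like in slurm.conf
(any existing SchedulerParameters values would need to stay in the same
comma-separated list, and slurmctld has to pick up the change):

    # slurm.conf -- append to any existing SchedulerParameters entries
    SchedulerParameters=kill_invalid_depend

    # push the change to the controller
    scontrol reconfigure

Without that parameter, a job whose dependency can never be satisfied
just sits pending with the reason DependencyNeverSatisfied.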

On Aug 11, 2017 3:58 PM, "Alex Reynolds"  wrote:

> Say I submit a job via `sbatch`. Slurm gives it a job ID of `12345`.
>
> I then submit two more jobs. The first job runs with the option
> `--dependency=afterok:12345`. The second job runs with the option
> `--dependency=afternotok:12345`.
>
> Those two jobs wait for the first to finish.
>
> The parent job `12345` finishes successfully.
>
> Does the monitor job with the option `--dependency=afternotok:12345` hang
> around in the cluster queue? Or does it get cleared out?
>
> Conversely, say job `12345` finishes with a non-zero exit code.
>
> Does the monitor job with the option `--dependency=afterok:12345` stay in
> the queue, or get removed?
>
> Thanks!
>
> -Alex
>


[slurm-dev] How are paired dependencies handled?

2017-08-11 Thread Alex Reynolds
Say I submit a job via `sbatch`. Slurm gives it a job ID of `12345`.

I then submit two more jobs. The first job runs with the option
`--dependency=afterok:12345`. The second job runs with the option
`--dependency=afternotok:12345`.
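Concretely, the pattern is roughly the following sketch (parent.sh,
on_success.sh and on_failure.sh are placeholder script names; `--parsable`
makes `sbatch` print just the job ID):

    # submit the parent and capture its job ID
    parent=$(sbatch --parsable parent.sh)

    # runs only if the parent exits with status 0
    sbatch --dependency=afterok:$parent on_success.sh

    # runs only if the parent terminates in a failed state
    sbatch --dependency=afternotok:$parent on_failure.sh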

Those two jobs wait for the first to finish.

The parent job `12345` finishes successfully.

Does the monitor job with the option `--dependency=afternotok:12345` hang
around in the cluster queue? Or does it get cleared out?

Conversely, say job `12345` finishes with a non-zero exit code.

Does the monitor job with the option `--dependency=afterok:12345` stay in
the queue, or get removed?

Thanks!

-Alex


[slurm-dev] Re: Job priority calculation when submitted to multiple partitions with different priorities

2017-08-11 Thread Corey Keasling


Hello again,

Looks like I'll make more definite plans to upgrade.  Per the Changelog 
for 17.02.3:


 -- Fix updating job priority on multiple partitions to be correct.

Corey

--
Corey Keasling
Software Manager
JILA Computing Group
University of Colorado-Boulder
440 UCB Room S244
Boulder, CO 80309-0440
303-492-9643

On 08/11/2017 01:50 PM, Corey Keasling wrote:


Hi Slurm-Dev,

I'm trying to determine how a job's multifactor priority is calculated
when the job is submitted to multiple partitions where each partition
has a different priority factor.  I'm running 16.05.6 with ill-defined
plans to move to 17.02.

My cluster is partitioned such that one partition is a subset of another
with the subset having a 10x higher PriorityJobFactor.  The intent is to
give greater priority on the subset to the group that purchased it while
allowing all users to run on all nodes.  Thus I hope to permit the
privileged group to submit jobs to both partitions simultaneously, but
to have their greater priority apply only to the subset.  However, based
on squeue and sprio, this may not be happening.

squeue -P reports identical priorities for both entries (i.e., the same
job but considered for p1 and p2).  sprio seems to report the priority
as calculated for the first partition in the list (i.e., if submitted
via sbatch -p p1,p2 the job gets the p1 priority factor, while sbatch
-p p2,p1 gives the p2 priority factor).

So what's actually going on under the hood?  Does the scheduler
calculate priorities for each (job,partition) pair separately, or only
once?

Thank you for your help!


[slurm-dev] Job priority calculation when submitted to multiple partitions with different priorities

2017-08-11 Thread Corey Keasling


Hi Slurm-Dev,

I'm trying to determine how a job's multifactor priority is calculated 
when the job is submitted to multiple partitions where each partition 
has a different priority factor.  I'm running 16.05.6 with ill-defined 
plans to move to 17.02.

My cluster is partitioned such that one partition is a subset of another 
with the subset having a 10x higher PriorityJobFactor.  The intent is to 
give greater priority on the subset to the group that purchased it while 
allowing all users to run on all nodes.  Thus I hope to permit the 
privileged group to submit jobs to both partitions simultaneously, but 
to have their greater priority apply only to the subset.  However, based 
on squeue and sprio, this may not be happening.

squeue -P reports identical priorities for both entries (i.e., the same 
job but considered for p1 and p2).  sprio seems to report the priority 
as calculated for the first partition in the list (i.e., if submitted
via sbatch -p p1,p2 the job gets the p1 priority factor, while sbatch
-p p2,p1 gives the p2 priority factor).
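For reference, the sort of commands behind those observations (p1, p2
and job.sh are placeholders; %Q is the job's integer priority in
squeue's format string):

    # submit one job to both partitions
    sbatch -p p1,p2 job.sh

    # pending jobs submitted to multiple partitions are listed once per
    # (job, partition) pair when sorted by priority
    squeue --priority -o "%.10i %.12P %.10Q"

    # per-factor priority breakdown, long format
    sprio -l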

So what's actually going on under the hood?  Does the scheduler 
calculate priorities for each (job,partition) pair separately, or only 
once?

Thank you for your help!

--
Corey Keasling
Software Manager
JILA Computing Group
University of Colorado-Boulder
440 UCB Room S244
Boulder, CO 80309-0440
303-492-9643