Hi Loris, 
we had the same issue with 2.2.7: jobs with large memory requirements were
blocking the queue once they reached the top of it, until they could be served.
After those jobs had been served, the rest of the jobs were served properly.
As a solution we had to create a separate partition for the large-memory nodes,
and since then we have not had this issue again.
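Roughly, the relevant part of our slurm.conf looks something like the
following (node names, sizes and limits here are only made up for
illustration, not our real configuration):

    # one homogeneous partition per hardware type
    NodeName=small[01-16]  CPUs=16 RealMemory=64000
    NodeName=bigmem[01-02] CPUs=32 RealMemory=512000
    PartitionName=standard Nodes=small[01-16]  Default=YES MaxTime=3-00:00:00 State=UP
    PartitionName=bigmem   Nodes=bigmem[01-02] Default=NO  MaxTime=3-00:00:00 State=UP

The large-memory jobs then have to request the bigmem partition explicitly
(sbatch -p bigmem ...), so they can only delay each other and not the small
jobs.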

I assume (but honestly I have not checked it) that Slurm 2.2.7 behaves the
same way when it tries to allocate CPUs as when it tries to allocate memory:
if there are not enough CPUs for a job request, the scheduler stops serving
jobs until there are enough CPUs to fulfill the request of the first job in
the queue. (I also assume that backfilling would help, but since most users
do not usually specify the real time they need, backfilling cannot do much.)
Probably something similar is done with the memory, but to be honest I don't
know.
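Just to illustrate what I mean about backfill and time limits (again only a
sketch, the values are invented): backfill can only slot a small job into the
gap in front of a waiting large job if that small job has a realistic time
limit, e.g.

    # slurm.conf: backfill scheduler, plus a default time limit so that jobs
    # which do not specify --time do not silently get the partition maximum
    SchedulerType=sched/backfill
    PartitionName=standard Nodes=small[01-16] DefaultTime=04:00:00 MaxTime=3-00:00:00 State=UP

    # in the job script: a realistic limit lets this job be backfilled
    #SBATCH --time=02:00:00
    #SBATCH --mem=2000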
For us the problem was solved once we placed the nodes in separate
homogeneous partitions according to their hardware.
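
Regarding the question further down about seeing exactly what resources a
job is waiting for: I don't know of a way to get the complete picture either,
but the "reason" field that squeue and scontrol report at least gives a hint
(again just a sketch):

    squeue -t PENDING -o "%.10i %.9P %.8u %.10l %R"   # %R = reason the job is still pending
    scontrol show job <jobid> | grep -i reason        # e.g. Reason=Resources or Reason=Priority
    squeue --start                                    # expected start times computed by backfill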

Regards

Juan Pancorbo Armada
[email protected]
http://www.lrz.de


Leibniz-Rechenzentrum
Department: Hochleistungssysteme (High Performance Systems)
Boltzmannstrasse 1, 85748 Garching
Phone: +49 (0) 89 35831-8735
Fax:   +49 (0) 89 35831-8535

-----Original Message-----
From: Loris Bennett [mailto:[email protected]]
Sent: Wednesday, 5 February 2014 11:43
To: slurm-dev
Subject: [slurm-dev] Re: Large memory jobs block small memory job in same
partition


"Loris Bennett" <[email protected]>
writes:

> We do already use weighting, but my understanding was that this would 
> only affect the order in which resources are assigned and not prevent 
> a job from starting even when resources are available.
>
> I assume that there is some valid reason for a job waiting, but it is 
> not apparent to me.  I guess it would be helpful if it were possible 
> to see exactly what resources a job is waiting for, but I haven't come 
> across a way to do that.

The situation of jobs not starting despite resources being available has 
occurred again.

- User A with the highest priority jobs has reached her running job
  limit, so no more of her jobs can start.  Her jobs have a time limit
  of 2 days.

- User B with the next highest priority jobs needs more memory than is
  available on the free node, so his job can't start there.  His jobs
  have a time limit of 3 days.

- User C is next in line and needs all the CPUs of the node, but very
  little memory.  It seems that his job should start, but it doesn't.
  His jobs have a time limit of 3 days.

Should User B's job prevent User C's job from starting?  Or is it because User 
C's time limit is greater than that of User A?  I can sort of see why a lower 
priority job with a long run-time maybe shouldn't start before a higher 
priority, short run-time job which is being held back due to the running job 
limit, but is this really what is going on?

Regards

Loris
 
--
This signature is currently under construction.
