On 5/2/20 1:44 pm, Antony Cleave wrote:
Hi, from what you are describing, it sounds like smaller jobs are
backfilling in front of the large jobs and stopping them from starting.
We use a feature that SchedMD implemented for us called
"bf_min_prio_reserve", which lets you set a priority threshold below
which Slurm
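For anyone searching the archives later: that option is set via SchedulerParameters in slurm.conf. A minimal sketch, with a purely illustrative threshold value (pick one that matches your site's priority scale):

```ini
SchedulerType=sched/backfill
# Jobs whose priority falls below this threshold get no resource
# reservation during backfill scheduling; only jobs at or above it
# have resources held for them, so high-priority large jobs are not
# starved by a stream of small backfilled jobs.
SchedulerParameters=bf_min_prio_reserve=1000000
```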
Loris Bennett writes:
> Hello David,
>
> David Baker writes:
>
>> Hello,
>>
>> I've taken a very good look at our cluster; however, I have not yet
>> made any significant changes. The one change that I did make was to
>> increase the "jobsizeweight". That's now our dominant parameter, and it
>> does e
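(For readers finding this thread later: the "jobsizeweight" referred to above is the job-size weight of the multifactor priority plugin. A sketch of the relevant slurm.conf lines, with purely illustrative weights:)

```ini
PriorityType=priority/multifactor
# Make job size the dominant factor, as described above; the absolute
# numbers are illustrative, only the ratios between weights matter.
PriorityWeightJobSize=100000
PriorityWeightAge=10000
PriorityWeightFairshare=10000
# With the default PriorityFavorSmall=NO, a larger node/CPU request
# yields a larger job-size factor, so big jobs rise up the queue.
```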
------------------
please?
Best regards,
David
From: slurm-users on behalf of Killian Murphy
Sent: 04 February 2020 10:48
To: Slurm User Community List
Subject: Re: [slurm-users] Longer queuing times for larger jobs

Hi David.
I'd love to hear back about the changes that you make and ho
---
David
From: slurm-users on behalf of Renfro, Michael
Sent: 31 January 2020 22:08
To: Slurm User Community List
Subject: Re: [slurm-users] Longer queuing times for larger jobs

Slurm 19.05 now, though all these settings were in effect on 17.02 until
quite recently. If
> …early release of v18.
>
> Best regards,
> David
>
From: slurm-users on behalf of Renfro, Michael
Sent: 31 January 2020 17:23:05
To: Slurm User Community List
Subject: Re: [slurm-users] Longer queuing times for larger jobs

I missed reading what size your cluster was at first, but found it on a
second read. Our
> …" the system. The larger jobs at the
> expense of the small fry, for example; however, that is a difficult
> decision that means that someone has got to wait longer for results.
>
> Best regards,
> David
> From: slurm-users on behalf of Renfro, Michael
> Sent
Greetings, fellow general university resource administrator.
A couple of things come to mind from my experience:
1) Does your serial partition share nodes with the other non-serial partitions?
2) What's your maximum job time allowed, for serial (if the previous answer was
“yes”) and non-serial partitions?
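To make those two questions concrete, here is a hypothetical slurm.conf fragment in which a serial partition overlaps the main partition's nodes but with a shorter time limit (all names and values invented for illustration):

```ini
NodeName=node[001-350] CPUs=40 State=UNKNOWN
PartitionName=batch  Nodes=node[001-350] Default=YES MaxTime=2-00:00:00
PartitionName=serial Nodes=node[001-350] MaxTime=12:00:00
# If serial jobs share nodes with long-running batch work, backfill has
# to wait for mixed workloads to drain before a large job can start;
# a shorter serial MaxTime makes those drain times more predictable.
```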
Hi David,
David Baker writes:
> Hello,
>
> Our SLURM cluster is relatively small. We have 350 standard compute
> nodes each with 40 cores. The largest job that users can run on the
> partition is one requesting 32 nodes. Our cluster is a general
> university research resource and so there are many
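(Illustrative only: the cluster described above, 350 nodes of 40 cores with a 32-node cap on the largest jobs, might be expressed in slurm.conf roughly as follows; node names and defaults are invented.)

```ini
NodeName=node[001-350] CPUs=40 State=UNKNOWN
PartitionName=compute Nodes=node[001-350] Default=YES MaxNodes=32
```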