Thanks for your feedback. Taking nodes out of maintenance still leaves them
in the reserved state "resv", and they remain unable to run jobs even though
I believe I've granted the correct exception, as shown in the original post.


@Ryan: Yeah, I did specify the reservation, with Reservation=root_13. Note
that scontrol update takes key=value pairs, so a leading -- before
Reservation is syntactically incorrect there. In fact, scontrol update won't
work at all if you don't name the reservation being updated.
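For anyone following along, the two commands involved use different syntax. A minimal sketch, using the reservation and user names from this thread (whether these are the exact commands run here is an assumption):

```shell
# scontrol update takes key=value pairs with no leading dashes;
# this is how a user would be granted access to the reservation:
scontrol update Reservation=root_13 Users=bindatype

# salloc/sbatch/srun, by contrast, take the long-option flag form
# when asking to run a job *inside* an existing reservation:
salloc -p defq -t 10 --reservation=root_13
```

Without the --reservation flag on the job submission, Slurm will not place the job inside the MAINT reservation even if the user is listed in Users=, which would match the "Required node not available" symptom below.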



Best,
Glen

==========================================
Glen MacLachlan, PhD
*HPC Specialist for Physical Sciences &*
*Professorial Lecturer, Data Sciences*

Office of Technology Services
The George Washington University
725 21st Street
Washington, DC 20052
Suite 211, Corcoran Hall

==========================================




On Fri, Apr 15, 2016 at 1:07 PM, Ryan Cox <[email protected]> wrote:

> Did you try this:  --reservation=root_13
>
>
> On 04/15/2016 08:10 AM, Glen MacLachlan wrote:
>
> Dear all,
>
> Wrapping up a maintenance period and I want to run some test jobs before I
> release the reservation and allow regular user jobs to start running. I've
> modified the reservation to allow jobs from my account:
>
> $ scontrol show res
> ReservationName=root_13 StartTime=2016-04-12T09:00:00
> EndTime=2016-04-15T20:00:00 Duration=3-11:00:00
>    Nodes=ALL NodeCnt=220 CoreCnt=3328 Features=(null) PartitionName=(null)
> Flags=MAINT,SPEC_NODES
>    TRES=cpu=3328
>    Users=bindatype Accounts=(null) Licenses=(null) State=ACTIVE
> BurstBuffer=(null) Watts=n/a
>
>
> but when I try to allocate a set of nodes I keep seeing the following:
>
> $ salloc -p defq -t 10
> salloc: Required node not available (down, drained or reserved)
> salloc: Pending job allocation 1692921
> salloc: job 1692921 queued and waiting for resources
>
>
> Note that all the nodes are currently in the maint state. Am I missing
> something here or is this a problem with scontrol update?
>
>
>
>
