Merlin,
This reminded me I wanted to do something like this too. To my knowledge
there is no command that does this. In fact, when I first started figuring
out the billing weights I used sshare to determine the cost of jobs
submitted one by one (not fun). Anyway, I put together a Python script
> use 1 GPU -- but the job can spread tasks/ranks among the 4 GPUs.
> Currently it appears we are limited to device 0 only.
>
> In an MPI context, I'm not certain about the wrapper-based method provided
> at the link.
> I'll need to consult with the developer.
>
> Thanks again
Colas,
I would do something like:
sacctmgr modify account where account=<account> user=<user> set
GrpTresRunMins=cpu=0
to unset:
sacctmgr modify account where account=<account> user=<user> set
GrpTresRunMins=cpu=-1
Hope it helps,
Barry
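For context on what that limit controls: GrpTRESRunMins caps the total TRES-minutes still committed by running jobs under an association, i.e. the sum over running jobs of allocated CPUs times remaining walltime. A minimal sketch of that accounting (the job figures below are invented for illustration):

```python
# Sketch: how a GrpTRESRunMins=cpu=N limit is consumed.
# Usage is the sum over running jobs of cpus * remaining walltime (minutes).
running_jobs = [
    {"cpus": 16, "remaining_min": 120},  # 16 cores, 2 h left
    {"cpus": 4,  "remaining_min": 30},   # 4 cores, 30 min left
]

def grp_cpu_run_mins(jobs):
    """Total cpu-minutes still committed by running jobs."""
    return sum(j["cpus"] * j["remaining_min"] for j in jobs)

print(grp_cpu_run_mins(running_jobs))  # 16*120 + 4*30 = 2040
```

With cpu=0 any job would push usage above the limit, so nothing under the association can run; per the commands above, cpu=-1 clears the limit.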
On Thu, Aug 31, 2017 at 10:31 AM, Colas Rivière wrote:
Hello All,
Is it possible to get backfilling when setting grptres=cpu=N for each
account?
i.e. I want to make sure that each group cannot use more than 25% of the
total cores on our cluster. However, when utilization is low, I would like
jobs to start backfilling. Is that possible?
- Barry
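For reference, the per-account cap described above would be set with something along these lines (account name and core count are hypothetical; a 1000-core cluster capped at 25% per account):

```
# Hypothetical: cap the "physics" account at 250 of 1000 cores
sacctmgr modify account physics set GrpTRES=cpu=250
```

Whether pending jobs under such a cap still participate in backfill is the open question here.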
--
Loris,
What if you did the following with a large top count?
$ sreport user top start=2017-07-24 end=2017-07-24 topcount=10 -T mem
I presume R has a "groupby" function similar to Pandas.
- Barry
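The "groupby" step mentioned above can be sketched in plain Python, assuming pipe-delimited per-user lines in the shape of `sreport ... -P` output (the column layout here -- Login|Account|Used -- is a hypothetical simplification; real columns depend on your sreport version and options):

```python
from collections import defaultdict

# Hypothetical pipe-delimited sreport-style lines: Login|Account|Used
sample = """\
alice|physics|120
bob|physics|80
carol|chem|50
"""

def usage_by_account(text):
    """Group per-user usage lines by account, summing the Used column."""
    totals = defaultdict(int)
    for line in text.strip().splitlines():
        login, account, used = line.split("|")
        totals[account] += int(used)
    return dict(totals)

print(usage_by_account(sample))  # {'physics': 200, 'chem': 50}
```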
On Tue, Jul 25, 2017 at 4:02 AM, Loris Bennett wrote:
Hello All,
Disclaimer: I have no idea if anyone would be interested in this. I needed
this tool, wrote some code, and figured I should share it with the
community.
Recently, our group at Pitt moved from PBS to Slurm, but we didn't have an
equivalent for Gold. We wanted to have a Gold-like bank,
Jagga,
Don't forget to restart the controller.
- Barry
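On a systemd-based controller node that usually means something like the following (the unit name may differ by distro and packaging):

```
# systemd example; the unit may be slurmctld or slurm depending on packaging
systemctl restart slurmctld
```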
On Mon, Jun 19, 2017 at 9:47 AM, Barry Moore <moore0...@gmail.com> wrote:
> Jagga,
>
> It's possible you need the following in your Slurm configuration. To be
> honest, I am guessing, but it is the only thing in my Slu
rk for me:
> >
> > --
> > # sacctmgr modify account where Partition=gred set grptres=gres/gpu=8
> > Unknown option: grptres=gres/gpu=8
> > Use keyword 'where' to modify condition
> > --
> >
> > Keeps saying that the grptres=gres/gpu=8 is a u
account SA set qos=gpu
> sacctmgr modify account CIG set qos=gpu
>
> Now when I submit a second job by user1 asking for 8 GPUs I was hoping
> it would go in the queue but it still runs:
>
> srun --gres=gpu:8 -p gred --pty bash
>
> Thanks again for your help.
>
>
--
> >> --- -
> >> normal      0   00:00:00   cluster   1.00
> >>
> >> # sacctmgr list user | grep -i user1
> >> user1   research   None
> >
Jagga,
You got it. Something along the lines of:
# Add users to accounts, partition
sacctmgr add user <user> account=<account> partition=<partition>
# Set the association limits, I think you can do something similar to
gres/gpu=-1 for the super users (assuming they are in the same account?).
sacctmgr modify account
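Filled in with hypothetical account, partition, and user names, the steps above might look like:

```
# Hypothetical names throughout
sacctmgr add user jagga account=research partition=gred
sacctmgr modify account research set GrpTRES=gres/gpu=8
# For the unrestricted users, -1 clears the limit on their association
sacctmgr modify user superuser set GrpTRES=gres/gpu=-1
```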
Thanks a lot! These will be very helpful!
On Thu, Jun 15, 2017 at 3:52 PM, Kilian Cavalotti <kilian.cavalotti.w...@gmail.com> wrote:
Hey All,
Does anyone have a script or knowledge of how to query wait times for Slurm
jobs in the last year or so?
Thank you,
Barry
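One way to get at this is `sacct` with the Submit and Start fields (e.g. `sacct -a -S 2016-08-01 -o JobID,Submit,Start -n -P`) and a small script to take the differences. A sketch over hypothetical sample lines (real data would come from running sacct, e.g. via subprocess):

```python
from datetime import datetime

# Hypothetical lines in the shape of `sacct -o JobID,Submit,Start -n -P` output
sample = """\
1001|2017-06-15T09:00:00|2017-06-15T09:30:00
1002|2017-06-15T10:00:00|2017-06-15T10:05:00
"""

FMT = "%Y-%m-%dT%H:%M:%S"

def wait_seconds(text):
    """Per-job queue wait (start minus submit) in seconds."""
    waits = {}
    for line in text.strip().splitlines():
        jobid, submit, start = line.split("|")
        delta = datetime.strptime(start, FMT) - datetime.strptime(submit, FMT)
        waits[jobid] = delta.total_seconds()
    return waits

print(wait_seconds(sample))  # {'1001': 1800.0, '1002': 300.0}
```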
--
Barry E Moore II, PhD
E-mail: bmoor...@pitt.edu
Assistant Research Professor
Center for Simulation and Modeling
University of Pittsburgh
Pittsburgh, PA 15260