Re: [slurm-users] sshare vs sreport

2020-03-02 Thread Paul Edmon
sshare is cumulative statistics, so no window is needed.  It's just the sum of the total usage for whatever window you set for fairshare.  If you set no window, then it is everything. -Paul Edmon- On 3/2/20 10:34 AM, Enric Fortin wrote: Hi everyone, I’ve noticed that when using `sshare`
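
For context, the "window" Paul refers to is the one defined by the fairshare decay settings in slurm.conf. A minimal sketch, with illustrative values rather than anything from the original thread:

    # slurm.conf (illustrative values, not a recommendation)
    PriorityType=priority/multifactor
    # Half-life of historical usage; this is the fairshare "window"
    # reflected in sshare's RawUsage/EffectvUsage columns.
    PriorityDecayHalfLife=14-0
    # With PriorityDecayHalfLife=0 and no reset period, usage never
    # decays, i.e. sshare sums everything ever recorded.
    PriorityUsageResetPeriod=NONE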

[slurm-users] slurm status says jobs are running but they aren't

2020-03-02 Thread c b
Hi, I have a bunch of jobs that, according to Slurm's status, have been running for 30+ minutes, but in reality aren't running. When I go to the node where the job is supposed to be, the processes aren't there (not showing up in top or ps) and the job's stdout/stderr logs are empty. I know it's
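
A few commands one might use to cross-check such a job (the job ID, node name, and log path below are placeholders, not from the original message):

    # What slurmctld believes the job is doing
    scontrol show job 123456
    # What is actually running on the assigned node
    ssh node042 'scontrol listpids 123456'    # or: ps -u $USER
    # The slurmd log on the node often shows why a step is stuck
    ssh node042 'tail -n 50 /var/log/slurmd.log'   # log path varies by site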

[slurm-users] Slurm crash with --mem-per-cpu=

2020-03-02 Thread Park, Gisoo (gp4r)
Hello, After we set MaxMemPerCPU=9000 on a partition, we are seeing Slurm crash when we submit a job with --mem-per-cpu=. When both -n and --mem-per-cpu= were in the sbatch script, #SBATCH -n 1 #SBATCH --mem-per-cpu=2 it worked fine and Slurm automatically increased the number of CPUs.
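
To illustrate the working case being described (the numbers below are made up, not the poster's actual values): with MaxMemPerCPU=9000 on the partition, a request whose per-CPU memory exceeds the cap should be satisfied by Slurm allocating extra CPUs rather than rejecting the job:

    #!/bin/bash
    #SBATCH -n 1
    #SBATCH --mem-per-cpu=18000   # exceeds the partition's MaxMemPerCPU=9000
    # Slurm raises the allocated CPU count (here to 2) so that
    # memory per allocated CPU stays within MaxMemPerCPU.
    srun hostname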

[slurm-users] sshare vs sreport

2020-03-02 Thread Enric Fortin
Hi everyone, I’ve noticed that when using `sshare` instead of `sreport`, on a cluster with multifactor priority and no decay or reset period, `sshare` doesn’t actually ask for a time window, but it still reports raw and effective usage. Does anyone know what `sshare` is doing and how? Thanks.
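
For reference, the two commands being compared might be invoked like this (the account name and dates are placeholders):

    # sreport requires an explicit time window
    sreport cluster AccountUtilizationByUser start=2020-01-01 end=2020-03-01 -t Hours
    # sshare takes no time range; its RawUsage/EffectvUsage are the
    # usage accumulated under the cluster's priority decay settings
    sshare -l -A myaccount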

Re: [slurm-users] Problem with configuration CPU/GPU partitions

2020-03-02 Thread Pavel Vashchenkov
On 28.02.2020 20:53, Renfro, Michael wrote:
> When I made similar queues, and only wanted my GPU jobs to use up to 8 cores per GPU, I set Cores=0-7 and 8-15 for each of the two GPU devices in gres.conf. Have you tried reducing those values to Cores=0 and Cores=20?
Yes, I've tried to do it.
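
For readers following along, the kind of gres.conf layout under discussion looks roughly like this (node name, device files, and core numbers are illustrative; neither poster's actual file is shown):

    # gres.conf: bind up to 8 cores to each of two GPUs
    NodeName=gpunode01 Name=gpu File=/dev/nvidia0 Cores=0-7
    NodeName=gpunode01 Name=gpu File=/dev/nvidia1 Cores=8-15
    # The suggested narrower binding, one core per GPU:
    # NodeName=gpunode01 Name=gpu File=/dev/nvidia0 Cores=0
    # NodeName=gpunode01 Name=gpu File=/dev/nvidia1 Cores=20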