[slurm-users] How to queue jobs based on non-existent features

2020-07-09 Thread Raj Sahae
Hi all, my apologies if this is sent twice; the first time, I sent it before my subscription to the list was complete. I am attempting to use Slurm as a test automation system for its fairly advanced queueing and job-control abilities, and also because it scales very well. However, since …
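For context, feature-based placement in Slurm looks roughly like the sketch below (node and feature names are hypothetical). Note that by default the controller rejects a job whose --constraint matches no node's advertised features, rather than queueing it:

    # slurm.conf: advertise features on the node definitions
    NodeName=test[01-04] CPUs=8 RealMemory=16000 Feature=fw_v1

    # request a feature at submit time
    sbatch --constraint=fw_v2 job.sh    # rejected while no node has fw_v2

    # later, add the feature at runtime without editing slurm.conf
    scontrol update NodeName=test01 AvailableFeatures=fw_v1,fw_v2 ActiveFeatures=fw_v1,fw_v2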

Re: [slurm-users] priority/multifactor, sshare, and AccountingStorageEnforce

2020-07-09 Thread Paul Edmon
Try setting RawShares to something greater than 1. I've seen it be the case that when you set it to 1 it creates weirdness like this. -Paul Edmon- On 7/9/2020 1:12 PM, Dumont, Joey wrote: Hi, We recently set up fair tree scheduling (we have 19.05 running), and are trying to use sshare to see …
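For reference, RawShares in sshare comes from the fairshare value on the association; it can be raised with sacctmgr, roughly as below (account name hypothetical):

    # give the account a share value well above 1
    sacctmgr modify account where name=myaccount set fairshare=100

    # confirm the new RawShares value
    sshare -l -A myaccount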

[slurm-users] priority/multifactor, sshare, and AccountingStorageEnforce

2020-07-09 Thread Dumont, Joey
Hi, We recently set up fair tree scheduling (we have 19.05 running), and are trying to use sshare to see usage information. Unfortunately, sshare reports all zeros, even though there seems to be data in the backend DB. Here's an example output: $ sshare -l Account User …
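For reference, non-zero sshare output generally requires multifactor priority plus accounting enforcement along these lines (values illustrative, not a confirmed fix for this report):

    # slurm.conf
    PriorityType=priority/multifactor
    PriorityDecayHalfLife=7-0
    AccountingStorageType=accounting_storage/slurmdbd
    AccountingStorageEnforce=associations,limits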

Re: [slurm-users] changes in slurm.

2020-07-09 Thread Brian Andrus
Navin, 1. You will need to restart slurmctld when you make changes to the physical definition of a node; this can be done without affecting running jobs. 2. You can have a node in more than one partition; that will not hurt anything. Jobs are allocated to nodes, not partitions, …
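A minimal sketch of both points (node and partition names hypothetical): the same node listed in two partitions, and a controller restart after a node-definition change:

    # slurm.conf: one node appearing in two partitions
    NodeName=node01 CPUs=16 RealMemory=64000
    PartitionName=short Nodes=node01 MaxTime=01:00:00
    PartitionName=long  Nodes=node01 MaxTime=7-00:00:00

    # after editing the node definition, restart the controller;
    # running jobs are not affected
    systemctl restart slurmctld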

Re: [slurm-users] Automatically stop low priority jobs when submitting high priority jobs

2020-07-09 Thread Durai Arasan
Hi, Please see job preemption: https://slurm.schedmd.com/preempt.html Best, Durai Arasan Zentrum für Datenverarbeitung Tübingen On Tue, Jul 7, 2020 at 6:45 PM zaxs84 wrote: > Hi all. > > Is there a scheduler option that allows low-priority jobs to be > immediately paused (or even stopped) …
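From the linked page, a typical suspend-based preemption setup looks roughly like this (partition names hypothetical; consult preempt.html for the authoritative options):

    # slurm.conf
    PreemptType=preempt/partition_prio
    PreemptMode=SUSPEND,GANG
    PartitionName=low  Nodes=ALL PriorityTier=1 Default=YES
    PartitionName=high Nodes=ALL PriorityTier=2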

[slurm-users] changes in slurm.

2020-07-09 Thread navin srivastava
Hi Team, I have two small queries; because of the lack of a testing environment I am unable to test these scenarios, and I am working on setting one up. 1. In my environment I am unable to pass the #SBATCH --mem=2GB option. I found the reason is that there is no RealMemory entry in the node definition …
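For reference, --mem requests are validated against the node's declared RealMemory (in MB), so the node definition needs an entry along these lines (names and sizes hypothetical); running slurmd -C on the node prints detected values to copy from:

    # slurm.conf: declare memory so --mem can be scheduled against it
    NodeName=node[01-10] CPUs=16 RealMemory=64000 State=UNKNOWN

    # job scripts can then request memory
    #SBATCH --mem=2G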