Re: [slurm-users] non-historical scheduling

2022-04-12 Thread Tina Friedrich
Hi Chagai, there isn't, no. As far as I can tell, SLURM only knows share-tree; it doesn't have functional share like SGE/UGE. (I'm pretty sure what Chagai wants to happen is this: there's a share policy on the cluster that only ever operates in the moment (i.e. per scheduling run). No

Re: [slurm-users] non-historical scheduling

2022-04-12 Thread Loris Bennett
Hi Chagai, Chagai Nota writes: > Hi Loris > > Thanks for your answer. > I tried to configure it and I didn't get the desired results. > This is my configuration: > PriorityType=priority/multifactor > PriorityDecayHalfLife=0 > PriorityUsageResetPeriod=DAILY > PriorityFavorSmall=yes >

Re: [slurm-users] non-historical scheduling

2022-04-12 Thread Chagai Nota
Wow, thanks for your detailed answer. I'm coming from SGE, and I thought there would be a simple way to make it behave like SGE. As you said, hard limits would be a waste of resources, so they're not a good option.

Re: [slurm-users] non-historical scheduling

2022-04-12 Thread Paul Edmon
So you want a purely fractional usage of the cluster. That's hard to do via fairshare or without fairshare, as the scheduler will usually fill up all the nodes with the top-priority job. If you don't have fairshare running or any historical data, it will revert to FIFO. So whichever user

Re: [slurm-users] non-historical scheduling

2022-04-12 Thread Chagai Nota
Hi Loris Thanks for your answer. I tried to configure it and I didn't get the desired results. This is my configuration: PriorityType=priority/multifactor PriorityDecayHalfLife=0 PriorityUsageResetPeriod=DAILY PriorityFavorSmall=yes PriorityWeightFairshare=10 PriorityWeightAge=0
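For readability, here is the configuration quoted above laid out as a slurm.conf fragment (only the parameters visible in the truncated message; anything after PriorityWeightAge=0 is cut off in the archive):

    PriorityType=priority/multifactor
    PriorityDecayHalfLife=0
    PriorityUsageResetPeriod=DAILY
    PriorityFavorSmall=yes
    PriorityWeightFairshare=10
    PriorityWeightAge=0

With PriorityDecayHalfLife=0, historical usage never decays on its own, so the DAILY reset is the only thing clearing it; between midnights the fair-share factor still reflects everything run since the last reset, which is likely why this does not behave as a pure "current usage only" policy.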

Re: [slurm-users] [EXT] Distribute the node resources in multiple partitions and regarding job submission script

2022-04-12 Thread Ozeryan, Vladimir
1. I don’t see where you are specifying a “Default” partition (DEFAULT=yes). 2. In “NodeName=* ” you have Gres=gpu:2 (all nodes on that line have 2 GPUs). Create another “NodeName” line below and list your non-GPU nodes there without the GRES flag.
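A minimal sketch of what that advice might look like in slurm.conf, with hypothetical host names (the real node names and counts are not visible in the truncated messages):

    # GPU nodes keep the GRES definition; non-GPU nodes get their own line without it.
    NodeName=gpu[1-8] CPUs=128 RealMemory=512000 Gres=gpu:2 State=UNKNOWN
    NodeName=cpu[1-4] CPUs=128 RealMemory=512000 State=UNKNOWN
    # Mark one partition as the default so jobs submitted without -p land somewhere.
    PartitionName=main Nodes=gpu[1-8],cpu[1-4] Default=YES MaxTime=INFINITE State=UP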

[slurm-users] Distribute the node resources in multiple partitions and regarding job submission script

2022-04-12 Thread Purvesh Parmar
Hello, I am using Slurm 21.08. I am stuck with the following. Q1: I have 8 nodes, each with 2 GPUs, 128 cores and 512 GB RAM. I want to distribute each node's resources across 2 partitions, so that the "par1" partition will have 2 GPUs with 64 cores and 256 GB RAM of the node, and the other partition
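The rest of the question is cut off in the archive, but one way a split like this is commonly attempted (a sketch only, not from the thread; node names and the second partition's name are placeholders) is to point two partitions at the same nodes and cap the cores each partition may use per node with MaxCPUsPerNode. GRES itself cannot be divided between partitions this way, so jobs in the second partition would simply not request GPUs, and the 256 GB memory split would need to be handled separately (for example with memory limits):

    NodeName=node[1-8] CPUs=128 RealMemory=512000 Gres=gpu:2 State=UNKNOWN
    # "par1": at most 64 of each node's 128 cores, intended for GPU jobs.
    PartitionName=par1 Nodes=node[1-8] MaxCPUsPerNode=64 State=UP
    # Placeholder second partition for the remaining cores; jobs here don't request --gres.
    PartitionName=par2 Nodes=node[1-8] MaxCPUsPerNode=64 Default=YES State=UP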

Re: [slurm-users] non-historical scheduling

2022-04-12 Thread Loris Bennett
Hi Chagai, Chagai Nota writes: > Hi > > > > I would like to ask if there is any option so that the Slurm scheduler will consider > only running jobs and not historical data. > > We don’t care about how many jobs users were running in the past, but only the > current usage. Look at

[slurm-users] non-historical scheduling

2022-04-12 Thread Chagai Nota
Hi I would like to ask if there is any option so that the Slurm scheduler will consider only running jobs and not historical data. We don't care about how many jobs users were running in the past, but only the current usage. Thanks Chagai Nota