Hi Chansup,
os.execute just returns the exit code of the command.
You will need to do a little more to catch the output of your program, e.g.
function shellExecute(cmd, Output)
    if (Output == nil) then Output = true end
    local file = assert(io.popen(cmd, 'r'))
    local output = file:read('*all')  -- capture everything the command writes to stdout
    file:close()
    if (Output == true) then print(output) end
    return output
end
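If anyone needs the same trick outside of Lua, the equivalent capture can be sketched in Python with subprocess (the function name and echo parameter are just illustrative, not part of any Slurm API):

```python
import subprocess

def shell_execute(cmd, echo=True):
    """Run cmd in a shell and return its captured stdout.
    A sketch analogous to the Lua io.popen version above."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if echo:
        print(result.stdout, end="")
    return result.stdout

out = shell_execute("echo hello", echo=False)
```

As with io.popen, this captures stdout only; redirect with 2>&1 inside cmd if you also need stderr.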
Dear Slurm Experts,
I am trying to implement a low-priority partition that overlaps with other
partitions, whose jobs would be suspended for higher-priority-tier jobs. I noticed
that when I do this and switch SelectTypeParameters=CR_Core to CR_CPU (my
intention is to use each HT thread separately for better
Have you defined the TmpDisk value for each node?
As far as I know, local disk space is not a valid type for GRES.
https://slurm.schedmd.com/gres.html
"Generic resource (GRES) scheduling is supported through a flexible plugin
mechanism. Support is currently provided for Graphics Processing
Any suggestions on the above query? I need help understanding it.
If TmpFS=/scratch and the request is #SBATCH --tmp=500GB, will it then
reserve the 500GB from /scratch?
Let me know if my assumption is correct.
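For concreteness, the request I mean would look roughly like this (job name and the df check are just illustrative; as I understand it, --tmp only matches against the node's configured TmpDisk for scheduling, it does not enforce a quota on the TmpFS path):

```shell
#!/bin/bash
#SBATCH --job-name=tmp-test
#SBATCH --tmp=500G
# Show how much space the TmpFS path actually has on the allocated node
df -h /scratch
```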
Regards
Navin.
On Mon, Apr 13, 2020 at 11:10 AM navin srivastava
wrote:
> Hi Team,
Hi;
Did you restart slurmctld after changing
"PriorityType=priority/multifactor"?
Also, your nice values are too small. It is not Unix nice: its range is
+/-2147483645, and it competes with the other priority factors in the
priority formula. Look at the priority formula at
Hello Lyn, thanks for your reply.
I checked my configuration; the PriorityType was set to
"PriorityType=priority/basic" initially, so my tests refer to that
configuration.
After your post, I set it to "PriorityType=priority/multifactor" and ran the
tests again: the results are the same.
I was
Hi Matteo,
Hard to say without seeing your priority config values, but I'm guessing
you want to take a look at
https://slurm.schedmd.com/priority_multifactor.html.
Regards,
Lyn
On Tue, Apr 14, 2020 at 12:02 AM Matteo F wrote:
> Hello there,
> I am having problems understanding the slurm
Hello there,
I am having problems understanding the slurm scheduler, with regard to the
"nice" parameter.
I have two types of job: one is low priority and uses 4 CPUs (--nice=20),
the other one is high priority and uses 24 CPUs (--nice=10).
When I submit, let's say, 50 low-priority jobs, only 6