Hello Eva,
you must remove the management nodes from the Nodes field of the
PartitionName parameter.
With your slurm.conf file it would be easier to write an example, but this
should work!
Regards,
Sergio.
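To make Sergio's suggestion concrete, here is a minimal slurm.conf sketch (the node names node[01-04] and headnode, and the CPU count, are hypothetical):

```
# Define only the compute nodes; the management/head node is deliberately absent
NodeName=node[01-04] CPUs=16 State=UNKNOWN

# The partition's Nodes= list references only compute nodes, never headnode
PartitionName=batch Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP
```

With this layout, jobs can only be scheduled onto node[01-04]; the head node never appears in any partition.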
2014-09-12 9:06 GMT+02:00 Uwe Sauter uwe.sauter...@gmail.com:
Hi Eva,
if you don't
Hi Eva!
As Sergio said, you have to specify the compute nodes with NodeName=... and then define
partitions that include those compute nodes with PartitionName=... Nodes=... without including the head
nodes or the login nodes. You could also set the AllocNodes=... parameter in the slurm.conf file,
where
We would like to use the UserCPU, SystemCPU and TotalCPU values from
sacct to assess the efficiency of jobs. When a job exits normally,
these values are reported for the batch script step and include the
time spent by sub-processes.
However, if the job times out, these values only include the
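A small helper for turning those sacct fields into an efficiency number. Note this is our own working definition of efficiency (TotalCPU divided by Elapsed times allocated CPUs), not something sacct reports directly; the parser follows SLURM's [DD-]HH:MM:SS duration convention:

```python
def parse_slurm_time(value):
    """Parse a SLURM duration string such as '1-02:03:04', '02:03:04',
    '03:04' or '03:04.567' into seconds."""
    days = 0
    if "-" in value:
        day_part, value = value.split("-", 1)
        days = int(day_part)
    parts = [float(p) for p in value.split(":")]
    while len(parts) < 3:          # pad missing hours/minutes with zero
        parts.insert(0, 0.0)
    hours, minutes, seconds = parts
    return days * 86400 + hours * 3600 + minutes * 60 + seconds

def cpu_efficiency(total_cpu, elapsed, alloc_cpus):
    """Fraction of the allocated CPU time the job actually used."""
    return parse_slurm_time(total_cpu) / (parse_slurm_time(elapsed) * alloc_cpus)
```

For example, a job with TotalCPU=02:00:00 that ran for Elapsed=01:00:00 on 4 CPUs used half of its allocation: cpu_efficiency("02:00:00", "01:00:00", 4) is 0.5.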
Dear SLURM-dev!
Can you please comment on when one should find BatchFlag set to 1 vs. 0? I
am not able to find any documentation on that. I would like to understand the
difference.
Thank you,
Amit
Jobs submitted using the sbatch command have BatchFlag set to 1.
Jobs submitted using other commands have BatchFlag set to 0.
Quoting Kumar, Amit ahku...@mail.smu.edu:
Dear SLURM-dev!
Can you please comment on when one should find BatchFlag set to
1 vs. 0? I am not able to find any
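A quick way to observe the flag on a running cluster (this requires SLURM; substitute the real job ID for the jobid placeholder). The expected values follow from Moe's answer above:

```
$ sbatch --wrap="sleep 60"            # submitted with sbatch
$ scontrol show job <jobid> | grep -o 'BatchFlag=[01]'
BatchFlag=1

$ salloc -N1                          # interactive allocation, not sbatch
$ scontrol show job <jobid> | grep -o 'BatchFlag=[01]'
BatchFlag=0
```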
Greetings,
I am trying to set up my SLURM installation to use MySQL. I installed via pre-compiled
RPMs but I am having trouble actually loading the plugin, as it isn’t being
installed from the RPMs I currently have. I see documentation on how to include
MySQL support when doing a source compile but not
Hi Brian,
On Fri, Sep 12, 2014 at 9:58 AM, Brian B for...@gmail.com wrote:
I am trying to set up my SLURM installation to use MySQL. I installed via
pre-compiled RPMs but I am having trouble actually loading the plugin, as it
isn’t being installed from the RPMs I currently have. I see documentation
Thanks much for all your replies!
As for the configuration, in the PartitionName=... Nodes=... definitions, the head node is
never included in any of the partitions. There is a default NodeName in
the slurm configuration set by the install itself. I basically used the
same configuration as I used with slurm
I think the problem you have here is that the salloc you ran doesn't
automatically send you to a node in your allocation. I am guessing you
ran your salloc from hpcdev-005; that is why hostname by itself returns
that. If you ran srun hostname inside of your salloc, you would get on
the node in
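In other words (a sketch that requires a SLURM cluster; hpcdev-005 is the submit host from this thread):

```
$ salloc -N1                 # allocation granted, but the shell stays put
$ hostname                   # still runs on the submit host
hpcdev-005
$ srun hostname              # runs inside the allocation, on the compute node
```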
Hello Kilian,
Thank you for the info. That seems to have worked. I was confused because the
“slurm-sql” package was built.
Regards,
Brian
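Kilian's answer is truncated in this digest, but for reference: MySQL-backed accounting in SLURM of this era is normally routed through slurmdbd rather than loaded directly by slurmctld. A configuration sketch, where the hostname dbhost, the user, and the password are placeholders:

```
# slurm.conf: send accounting records to slurmdbd
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=dbhost

# slurmdbd.conf: slurmdbd itself talks to MySQL
StorageType=accounting_storage/mysql
StorageHost=localhost
StorageUser=slurm
StoragePass=changeme
```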
On Sep 12, 2014, at 1:16 PM, Kilian Cavalotti
kilian.cavalotti.w...@gmail.com wrote:
Hi Brian,
On Fri, Sep 12, 2014 at 9:58 AM, Brian B
Brian,
If it's at all helpful, below are some of the options I use to build SLURM on
CentOS 6.5 using mock.
$ mock/epel-6-x86_64.cfg
config_opts['root'] = 'slurm'
config_opts['target_arch'] = 'x86_64'
config_opts['legal_host_arches'] = ('x86_64',)
config_opts['chroot_setup_cmd'] = 'install
Thank you Moe!
- Amit
-Original Message-
From: je...@schedmd.com [mailto:je...@schedmd.com]
Sent: Friday, September 12, 2014 10:23 AM
To: slurm-dev
Subject: [slurm-dev] Re: BatchFlag=1 vs BatchFlag=0
Jobs submitted using the sbatch command have BatchFlag set to 1.
Jobs submitted