Hi Everyone,
I am supporting the cons_res plugin, but my code segfaults, so
please give me some hints on using gdb with a core file. Thanks for your
help in advance.
Regards
Dineshkumar RAJAGOPAL
*Grenoble Institute Of Technology*
*Grenoble,France*
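For what it's worth, a minimal sketch of post-mortem debugging with gdb (the binary and core paths below are assumptions; adjust them for your installation):

```shell
# allow core dumps in the shell that launches the daemon
ulimit -c unlimited
test "$(ulimit -c)" = "unlimited"   # confirm the limit took effect

# after the crash, load the binary together with its core file
# (the core's location depends on kernel.core_pattern):
#   gdb /usr/local/sbin/slurmctld /var/spool/slurm/core.1234
# useful gdb commands once inside:
#   (gdb) bt full        # full backtrace with local variables
#   (gdb) frame N        # select a frame inside cons_res
#   (gdb) print expr     # inspect a variable or expression
```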
Moe,
The srun -vv has allowed me to get slurm_debug messages to
stderr. That's progress! But none of my SPANK plugin slurm_debug
calls go to the log.
By changing these configuration options:
SlurmctldDebug=debug
SlurmdDebug=debug
a fair amount of debug: output _is_ turned on inside
slurmctld.log. slurm_spank_local_user_init() is invoked by srun,
so srun is the only place its output gets logged. Spank functions
invoked by the daemons would be logged by those daemons.
Quoting Bob Moench r...@cray.com:
Moe,
The srun -vv has allowed me to get slurm_debug messages to
stderr.
Hello,
Is there a way to prevent Slurm from parsing the whole jobscript for
#SBATCH statements?
Assume I have the following jobscript job1.sh:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --job-name=job1
srun -l echo slurm jobid $SLURM_JOB_ID named: $SLURM_JOB_NAME
cat job2.sh
Hello Uwe,
The pmi2 library for Slurm is in the contribs/pmi2 directory of the Slurm
source. You must build it separately, but the makefile is indeed generated
during the regular Slurm configure, so the prefix is already defined as you
need it to be.
ex.
cd contribs/pmi2
make
make install
--
Jared Baker
Hi Trey, all,
I have Slurm 14.11.5 running. It was built with:
./configure --prefix=/opt/slurm/${SLURMVERSION}
--with-munge=/opt/munge/${MUNGEVERSION} --enable-pam
make
make install
I just tried to compile MVAPICH2 2.1 with
./configure --prefix=/opt/mpi/mvapich2/2.1 --with-pmi=pmi2
I modified the code to match the documentation, which says to keep
parsing until it reaches a command. I checked all the way back to
Slurm v2.0 and this has been the behaviour of sbatch for many years.
This will be in v14.11.6, which we hope to release this week. There's
a patch available
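To illustrate the documented behaviour, a small sketch (the filenames and directives are made up):

```shell
# sbatch scans for #SBATCH directives only until the first command
# line; directives appearing after a command are ignored.
cat > demo.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=seen
hostname
#SBATCH --job-name=ignored
EOF
# with the parse-until-command behaviour, only --job-name=seen
# would take effect when demo.sh is submitted with sbatch
```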
Never mind; when I changed #sbatch to the correct #SBATCH, I
got 4 tasks. According to the man page, this is a bug. For now, I
like Magnus's suggestion :-)
On 04/21/2015 08:21 AM, Andy Riebs
wrote:
Hendryk, what sbatch command line options are you using? How are
you determining
Hi!
A simple solution would be to do:
SBATCH=#SBATCH
cat <<EOF
...
$SBATCH --nodes=1
EOF
/Magnus
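Spelled out as a runnable sketch (the job options and command are placeholders): the generator script never contains a literal #SBATCH, so sbatch finds no directives when parsing it, while the generated file does:

```shell
#!/bin/bash
# write job1.sh via a heredoc; $SBATCH is expanded when the heredoc
# is written, so the generated file contains real #SBATCH lines
# while this generator script itself does not.
SBATCH='#SBATCH'
cat > job1.sh <<EOF
#!/bin/bash
$SBATCH --nodes=1
$SBATCH --ntasks=1
srun -l hostname
EOF
```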
On 2015-04-21 13:50, Hendryk Bockelmann wrote:
Hello,
Is there a way to prevent Slurm from parsing the whole jobscript for
#SBATCH statements?
Assume I have the following jobscript job1.sh:
A better approach would be to add to SLURM a #SBATCH END-OF-OPTIONS
directive, or something similar, to mark the end of sbatch options so
that sbatch can stop parsing from that point.
/Magnus
On 2015-04-21 14:40, Andy Riebs wrote:
Never mind; when I changed #sbatch to the correct #SBATCH, I got 4
Your node definition doesn't match what you assigned the partition
'debug'. You probably want NodeName=JGNODE1 instead of NodeName=JGHCSLURM.
- Trey
=
Trey Dockendorf
Systems Analyst I
Texas A&M University
Academy for Advanced Telecommunications and Learning
Thank you Trey, it works.
I put this config on controller and works:
# COMPUTE NODES
NodeName=JGNODE[1-1] CPUs=1 State=UNKNOWN
#PartitionName=debug Nodes=JGNODE1 Default=yes MaxTime=INFINITE
PartitionName=CLUSTER Default=yes State=UP nodes=JGNODE[1-1]
But now I have a problem with the
Thank you Moe and David. I was misunderstanding the CPUs parameter. I
applied your suggestions and all is well now.
Peter.
On 4/20/15 9:06 PM, David Carlet wrote:
Peter,
I'm fairly new to SLURM, so someone else can feel free to correct me if I'm
wrong, but I think:
num_cpus_per_node =
On 21/04/15 22:53, Maciej L. Olchowik wrote:
but how do we keep track and enforce the expiry date? Does anyone
have a neat solution to this problem?
All our project allocations are done by quarters, so just before
midnight at the end of each quarter we set all projects to their new
quota
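One way to automate that, as a hedged sketch (the script path, account name, and limit are all placeholders; GrpCPUMins is the per-association limit sacctmgr manages):

```shell
# crontab fragment: fire at 23:55 on the last day of each
# quarter-ending month
# 55 23 31 3,12 * /usr/local/sbin/reset_quarter_quota.sh
# 55 23 30 6,9  * /usr/local/sbin/reset_quarter_quota.sh

# reset_quarter_quota.sh would then run something like
# (-i answers "yes" so it works non-interactively):
# sacctmgr -i modify account projA set GrpCPUMins=1000000
```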
If you have many MPI implementations installed, you can set SLURM_MPI_TYPE
in your environment modules instead of passing --mpi to srun.
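For example, the shell effect of such a modulefile boils down to one export (pmi2 here is just an assumed value; use whatever MPI plugin your site builds):

```shell
# equivalent of passing --mpi=pmi2 to every subsequent srun call
export SLURM_MPI_TYPE=pmi2
```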
On Tue, Apr 21, 2015 at 7:53 AM Sourav Chakraborty
chakraborty...@buckeyemail.osu.edu wrote:
Hi Trey,
To use SLURM+PMI2 with MVAPICH2 you should configure
Have you looked at sreport? I think the Cluster
UserUtilizationByAccount report would give you what you are looking for.
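A hedged example invocation (the date range is a placeholder, and it needs a running slurmdbd, so it is shown commented out for illustration only):

```shell
# per-user CPU usage grouped by account for the first quarter of 2015:
# sreport cluster UserUtilizationByAccount start=2015-01-01 end=2015-04-01
```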
On 04/21/15 05:53, Maciej L. Olchowik wrote:
Dear all,
For our accounting needs, we are currently running slurmdbd with the sbank
scripts:
Sorry, noob problem: I forgot to start munge.
Bests,
Gois
2015-04-21 21:57 GMT+01:00 Jorge Góis mail.jg...@gmail.com:
Thank you Trey, it works.
I put this config on controller and works:
# COMPUTE NODES
NodeName=JGNODE[1-1] CPUs=1 State=UNKNOWN
#PartitionName=debug Nodes=JGNODE1