Here's another output:

# squeue -o "%10F %10c %10C %10T %10M %10R"
ARRAY_JOB_ MIN_CPUS   CPUS       STATE      TIME       NODELIST(R
59         1          1          PENDING    0:00       (Resources
60         1          1          PENDING    0:00       (Priority)
61         1          1          PENDING    0:00       (Priority)
62         1          1          PENDING    0:00       (Priority)
63         1          1          PENDING    0:00       (Priority)
64         1          1          PENDING    0:00       (Priority)
65         1          1          PENDING    0:00       (Priority)
66         1          1          PENDING    0:00       (Resources
57         1          8          RUNNING    1:21       node213   
58         1          8          RUNNING    1:18       node236
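
In that format string %F is the array job ID, %c the minimum CPUs
requested, %C the CPUs requested or allocated, %T the job state, %M the
time used and %R the pending reason or allocated node list. The last
column gets cut at 10 characters here; a wider field (the 20 below is an
arbitrary choice) prints it in full:

# squeue -o "%10F %10c %10C %10T %10M %20R"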

On Tue, Jul 12, 2016, at 15:57, Yuri wrote:
> Forgot to say that only the last two lines are the jobs that are
> running.
> 
> On Tue, Jul 12, 2016, at 15:55, Yuri wrote:
> > 
> > -
> > # squeue -o "%A %c %C"
> > JOBID MIN_CPUS CPUS
> > 48 1 1
> > 49 1 1
> > 50 1 1
> > 51 1 1
> > 52 1 1
> > 53 1 1
> > 46 1 8
> > 47 1 8
> > -
> > 
> > Each job is the same bash script:
> > -
> > #!/bin/bash 
> > #SBATCH -n 1
> > #SBATCH -t 0-00:05
> > #SBATCH -p short
> > 
> > sleep 200
> > -
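> > 
> > The same directives spelled out with the long option names, just for
> > readability (equivalent to the script above, nothing changed):
> > -
> > #!/bin/bash
> > #SBATCH --ntasks=1          # same as -n 1: one task
> > #SBATCH --time=0-00:05      # same as -t: five-minute limit
> > #SBATCH --partition=short   # same as -p: the "short" partition
> > 
> > sleep 200
> > -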
> > 
> > In slurm.conf I have CPUs=4 for each node (but each node actually has an
> > Intel Core i7). My question is: why is Slurm assigning only one job per
> > node, with each job consuming 8 CPUs? This should not be happening,
> > because the script is single-CPU and each node has only 4 CPUs in the
> > specification, so the correct behavior would be 4 jobs per node, right?
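> > 
> > For concreteness, the node lines I mean look roughly like this (a
> > sketch, not the literal lines from my slurm.conf; the host names are
> > the ones from the squeue output above):
> > -
> > NodeName=node213 CPUs=4
> > NodeName=node236 CPUs=4
> > -
> > What the hardware actually reports to Slurm can be checked on the
> > node itself with:
> > -
> > # slurmd -C
> > # scontrol show node node213
> > -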
