Hi all,
all solved, it was my fault.

The reason for the failure: the hostname of the head node had been changed, but
the configuration was not updated.
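For anyone who hits the same symptom, a quick sanity check is to compare the
running hostname with the ControlMachine entry in slurm.conf (the parameter
name used in Slurm 2.x). This is a minimal sketch, not the original poster's
commands; it writes a sample config to a temp file, whereas on a real cluster
the file usually lives at /etc/slurm/slurm.conf:

```shell
# Sketch: detect drift between the head node's hostname and slurm.conf.
# Sample config written to a temp file for illustration only.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ControlMachine=oldhead
EOF

current=$(hostname -s)
recorded=$(sed -n 's/^ControlMachine=//p' "$conf")

if [ "$current" != "$recorded" ]; then
    echo "mismatch: slurm.conf names '$recorded' but this host is '$current'"
    # After fixing slurm.conf, propagate the change, e.g. with
    #   scontrol reconfigure
    # or by restarting slurmctld and the slurmd daemons.
fi
rm -f "$conf"
```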

Sorry about the noise,
Albert

On 06/12/13 13:56, Albert Solernou wrote:
> Hi all,
> I've configured Slurm v. 2.6.3 on a GPU cluster with accounting support
> with SlurmDBD. Find attached my configuration (slurm.conf).
> 
> It all works fine for me, but every other user hits a wall time of
> 10 minutes per job step. See:
>        JobID  Timelimit    Elapsed        NodeList
> ------------ ---------- ---------- ---------------
> 2751         1-00:30:00   00:10:56   k20n[001-002]
> 2751.batch                00:10:56         k20n001
> 2751.0                    00:00:53   k20n[001-002]
> 2751.1                    00:10:03   k20n[001-002]
> 
> Any idea on how to remove this limit?
> 
> Thank you,
> Albert
> 

-- 
---------------------------------
  Dr. Albert Solernou
  Research Associate
  Oxford Supercomputing Centre,
  University of Oxford
  Tel: +44 (0)1865 610631
---------------------------------