Hi Jordan,

The answer is that it isn't spooled anywhere first: SLURM open()s the file in 
its final resting spot and writes to it as the job runs. What you are seeing is 
most likely an artifact of NFS attribute caching on the node doing the looking. 
If that's it, you can try these commands (as root) on the node where the file 
shows as zero length:

sync                               # flush dirty pages to the server first
echo 2 > /proc/sys/vm/drop_caches  # drop cached dentries and inodes
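
If you want a quick non-root check first: under NFS's usual close-to-open 
semantics the client revalidates a file's attributes when the file is opened, 
so actually reading the file can show the real size even while ls -l still 
reports zero. The path and job ID below are just placeholders:

cat /path/to/slurm-12345.out | wc -c   # reads the bytes instead of trusting cached attributes
lsof /path/to/slurm-12345.out          # if lsof is installed: shows the process writing it in place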

Another option is to log in to the NAS head and see exactly what it thinks the 
file looks like.
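
For example, compare what the client and the server report for the same file 
(both paths are placeholders for your actual mount and export points):

ls -l /path/to/slurm-12345.out          # on the compute node, through the NFS mount
ls -l /export/path/to/slurm-12345.out   # on the NAS head, no client cache in the way

If the sizes differ, it's the client-side cache; if both show zero, the data 
really isn't reaching the server yet.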

Hope that helps!


> On Jul 14, 2015, at 12:39 AM, Jordan Willis <[email protected]> wrote:
> 
> Hi,
> 
> When a job is run, the slurm-%j.out file is created where I would expect, but 
> remains empty until the job has completed.
> 
> This is strange behavior to me, since we are using a NAS file system on all 
> nodes, including the slurm controller node. So even if the file were being 
> written on only the node the job runs on, it should still show up on the 
> controller node.
> 
> On Torque, output was generally written to a file under /var/spool/ and then 
> copied into place at the end. When I go to the spool directory defined in 
> slurm.conf, I see the slurm_script file generated but not the output.
> 
> Where is the output before it's copied? Is this behavior expected?
> 
> Thanks so much,
> Jordan
