On 02.11.2012 at 16:32, Dave Love wrote:

>> ...snip...
>> Before changing this: I wonder what was the intention >12 years ago to
>> include the name of the queue, as the job/task-id is already unique?
> 
> Yes, that's what I mean.  I'm inclined to change it anyway if there's no
> obvious reason.  (The id is only unique in a given cell, and you could
> currently have trouble from multiple cells with job ids of similar
> sizes, though I doubt that's at all common.)
> 
>> I'm not sure, whether it was already in DQS. In SGE 5.3 there were no
>> cluster queues (i.e. one queue definition per exechost...) and often
>> the number of the exechost was included in the name of the queue
>> because of this, like 1234.1.serial01.q for a serial queue on node01.
> 
> I'm not sure it helps, but dqs_make_tmpdir:
> 
>  /* Note could have multiple instantiations of same job, */
>  /* on same machine, under same queue */
>  sprintf(str,"%s/%d.%s.%d",qconf->tmpdir,job->job_number,qconf->qname,me.pid);
> 
> c.f. sge_make_tmpdir:
> 
>   /* Note could have multiple instantiations of same job, */
>   /* on same machine, under same queue */
>   snprintf(tmpdir, ltmpdir, "%s/"sge_u32"."sge_u32".%s", t, jobid,
>            jataskid, lGetString(qep, QU_qname));

I have an idea: maybe they were thinking of a shared scratch space, where every 
node can have its own unique directory in a common location. Whether they used 
the name of the exechost or the name of the queue on each exechost (as said: at 
that time each execnode needed its own queue) was just a matter of taste.

Later the $TMPDIR moved to the nodes. But as there may still be installations 
with a shared scratch space, some libraries like Open MPI create 
exechost-specific sub-directories in $TMPDIR. Whether it is shared or not, 
everything is safe this way and you don't get a conflict.

Note: I put an RFE here: https://arc.liv.ac.uk/trac/SGE/ticket/1290 for 
additional functionality (ref.: https://arc.liv.ac.uk/trac/SGE/ticket/570)

-- Reuti