On 6 Mar 2013, at 19:33, Dave Love <[email protected]> wrote:

> Reuti <[email protected]> writes:
> 
>>> I can't reproduce that (with openmpi tight integration).  Doing this
>>> (which gets three four-core nodes):
>>> 
>>> qsub -pe openmpi 12 -l h_vmem=256M
>>> echo "Script $(hostname): $TMPDIR $NSLOTS"
>>> ulimit -v
>>> for HOST in $(tail -n +2 $PE_HOSTFILE|cut -f1 -d' '); do
>>>     qrsh -inherit $HOST 'echo "Call $(hostname): $TMPDIR $NSLOTS"; ulimit -v;
>>>     sleep 60' &
>>> done
>>> wait
>> 
>> Great, then you fixed it already for the actual version.
> 
> I'm puzzled because I don't recall a change in that area, and I'd have
> expected to have noticed it with 6.2u5 in the past, but I'm happy,
> anyhow.

There is definitely a difference from (OGS) GE2011.11. I get the following
output from a similar script:

qsub -pe openmpi_span 30 -l h_rt=300,h_vmem=200M
echo "Script $(hostname): $TMPDIR $NSLOTS"
ulimit -v
for HOST in $(tail -n +2 $PE_HOSTFILE|cut -f1 -d' '); do
    qrsh -inherit $HOST 'echo "Call $(hostname): $TMPDIR $NSLOTS"; ulimit -v;
    sleep 60' &
done
wait

Output:

Script my-mgrid5: /tmp/157435.1.small 30
1638400
Call my-mgrid4: /tmp/157435.1.small 7
unlimited
Call my-mgrid3: /tmp/157435.1.small 8
unlimited
Call my-mgrid2: /tmp/157435.1.small 7
unlimited

We are using ssh for the inter-node login. I will try with the newest SGE,
as our current install is quite outdated, as you can tell.
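The "unlimited" values on the slave nodes would be consistent with the h_vmem
limit being set only in the shell the master starts, and lost when ssh spawns a
fresh login shell on the remote side. A small sketch (plain bash, no SGE
needed) showing that ulimit -v only affects the shell that sets it and its
children, not independently started shells:

```shell
# A limit set with ulimit -v applies to the current shell and anything
# it forks; a separately launched shell (as sshd would start) is unaffected.
bash -c 'ulimit -v 204800; ulimit -v'   # prints 204800 (limit inherited by children)
bash -c 'ulimit -v'                     # prints the outer shell's value, typically unlimited
```

So if qrsh -inherit goes through plain ssh rather than a tightly integrated
startup method, sge_execd on the slave never gets a chance to impose the limit.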

cheers,
Mikael

> 
> -- 
> Community Grid Engine:  http://arc.liv.ac.uk/SGE/

