On 13.08.2012, at 13:25, Richard Ems wrote:
> On 08/13/2012 01:18 PM, Bartosz Biegun wrote:
>> Hi,
>>
>> My cluster doesn't have a global /scratch dir; each compute node has its
>> own /scratch (on local disk).
>> I want to run one of my programs on many CPUs from this scratch. My job
>> requests many CPUs (e.g. -pe ompi 4), but I have no idea how to request
>> that it runs on one host. Can this be done with a complex value?
To limit your job to one host, it's necessary to have a PE with
"allocation_rule $pe_slots".
>
> No no. GE does not do that at all! Your script does it, YOU have to
> program it!
Correct, there is no file-staging builtin.
> script.sh:
> =================================================
> # copy input data from server to local node
> {scp / rsync} server:/data/... /scratch/dir/...
I usually don't use `scp` for this. As /home is mounted on all exechosts, a
plain `cp $HOME/myfile $TMPDIR` can do it already, and after a `cd $TMPDIR` you
can execute it locally on the exechost (assuming "tmpdir" is set to /scratch in
the queue definition).
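Put together, a job script using this staging pattern could look like the following sketch (file and program names are placeholders, as is the PE name "smp"; $TMPDIR is created and removed by Grid Engine on the node-local scratch disk):

```shell
#!/bin/sh
#$ -cwd
#$ -pe smp 4                       # assumed PE with allocation_rule $pe_slots

# stage input from the shared /home to the node-local $TMPDIR
cp "$HOME/myinput.dat" "$TMPDIR"
cd "$TMPDIR" || exit 1

# run locally on the exechost; "my_program" is a placeholder
my_program myinput.dat > result.out

# copy results back before the job ends, as $TMPDIR is removed afterwards
cp result.out "$HOME/"
```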
If the parallel application reads data on all exechosts, then it's a bit
more complicated, as $TMPDIR is only generated when the slave process is
spawned on them. Then you need to create a persistent directory. Besides using
`scp` in this case, a `qrsh -inherit mkdir /scratch/$JOB_ID.extra; qrsh
-inherit cp /home/myfile /scratch/$JOB_ID.extra` could also do.
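As a sketch, the `qrsh -inherit` approach could be looped over the granted hosts listed in $PE_HOSTFILE; the input file name is a placeholder, and note that such a directory must be cleaned up by the job itself:

```shell
#!/bin/sh
# Create a persistent scratch directory on every granted node and stage
# the input there. Host names come from $PE_HOSTFILE (one line per node).
for host in $(awk '{print $1}' "$PE_HOSTFILE" | sort -u); do
    qrsh -inherit "$host" mkdir -p "/scratch/$JOB_ID.extra"
    qrsh -inherit "$host" cp "$HOME/myinput.dat" "/scratch/$JOB_ID.extra/"
done

# ... start the parallel application here ...

# clean up: unlike $TMPDIR, this directory is not removed automatically
for host in $(awk '{print $1}' "$PE_HOSTFILE" | sort -u); do
    qrsh -inherit "$host" rm -rf "/scratch/$JOB_ID.extra"
done
```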
Unless you have to read a huge amount of data, it's better that only the master
process reads the data and distributes it to the exechosts by MPI calls.
-- Reuti
> # run your program here
> my_nice_program
>
> # copy results back to server
> {scp / rsync} /scratch/dir/... server:/data/...
>
> exit 0
> =================================================
>
> --
> Richard Ems mail: [email protected]
>
> Cape Horn Engineering S.L.
> C/ Dr. J.J. Dómine 1, 5º piso
> 46011 Valencia
> Tel : +34 96 3242923 / Fax 924
> http://www.cape-horn-eng.com
> _______________________________________________
> users mailing list
> [email protected]
> https://gridengine.org/mailman/listinfo/users
>