On 15.03.2013 at 04:51, Vamsi Krishna wrote:

> I used the following commands to submit two jobs:
> 
> cwd: /test1, qsub -l exclusive -l hostname=`uname -n` -pe smp 12 batch_test.bash
> 
> cwd: /test2, qsub -l exclusive -l hostname=`uname -n` -pe smp 12 batch_test.bash

If you specify "-l exclusive", the exclusivity is per job: no other job is 
allowed on this machine/queue (or on "these machines" in the case of a PE 
request spanning several nodes). It's not per user.
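
For reference, a minimal sketch of how such an "exclusive" consumable is 
usually set up - the names below are the conventional ones, your site's 
definition may differ. The complex is defined with "relop EXCL" and then 
attached to each execution host:

$ qconf -sc
#name       shortcut  type  relop  requestable  consumable  default  urgency
...
exclusive   excl      BOOL  EXCL   YES          YES         0        1000
...

$ qconf -se node1      (where "node1" is just a placeholder hostname)
hostname              node1
...
complex_values        exclusive=true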

If you want to reserve a node for your own use only, you can make an advance 
reservation first, and then submit into this advance reservation:

$ qrsub -pe smp 24 -l exclusive -d 3600
Your advance reservation 196 has been granted
$ qrstat
ar-id   name       owner        state start at             end at               duration
------------------------------------------------------------------------------------------
    196            reuti        r     03/15/2013 16:06:37  03/15/2013 17:06:37  01:00:00
$ qsub -ar 196 -pe smp 12 foobar.sh
Your job 5814 ("foobar.sh") has been submitted
$ qsub -ar 196 -pe smp 12 foobar.sh
Your job 5815 ("foobar.sh") has been submitted

or alternatively:

$ qmake -ar 196 -pe smp 12 -- 
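
(If you finish early, the reservation can also be removed again before it 
expires with "qrdel 196".)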

(It's not even necessary to specify "-l exclusive" for the advance reservation: 
if you request 24 cores with "allocation_rule $pe_slots" on a node with 24 
cores, the advance reservation will already block all of them.)
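
For reference, a minimal sketch of such a PE configuration (the field values 
shown are typical ones and may differ from your setup):

$ qconf -sp smp
pe_name            smp
slots              999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $pe_slots
control_slaves     FALSE
job_is_first_task  TRUE
urgency_slots      min
accounting_summary FALSE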

-- Reuti


> Only the first job is running and the other one is in queue-wait (qw) state. 
> But if I submit with -pe smp 24, it successfully uses all 24 slots on the 
> host. What could be the reason the second job waits when I use 12 slots?
> 
> job-ID  prior    name        user   state submit/start at      queue          slots ja-task-ID
> -----------------------------------------------------------------------------------------------
> 1347219 10.35000 batch_test  user1  r     03/15/2013 01:21:33  batch.q@node1     12
> 1347220  1.60000 batch_test  user1  qw    03/15/2013 01:20:36                    12
> 
> 
> On Fri, Mar 8, 2013 at 9:29 PM, Vamsi Krishna <[email protected]> wrote:
> Thanks Reuti, progressing well.
> 
>  - V 
> 
> 
> 
> On Wed, Mar 6, 2013 at 2:10 AM, Christopher Heiny <[email protected]> wrote:
> On 03/05/2013 10:35 AM, Reuti wrote:
> Hi,
> 
> On 05.03.2013 at 18:41, Vamsi Krishna wrote:
> 
> > Is there any way to run make -j <slots in node> when we submit to a batch 
> > environment, so that it uses all the slots requested with -pe <slots in node>?
> Please have a look at the `qmake` tool, which is installed with SGE, in case 
> you want to start the compilation from the command line (`man qmake`).
> 
> If you are submitting the complete `make` job to a batch queue already, you 
> could use:
> 
> make -j $NSLOTS
> 
> in your jobscript, and it will be replaced at runtime with the requested 
> number of slots, as specified in the argument to `qsub -pe smp 4` or the 
> like. If you always want to use all cores in a machine:
> 
>   qsub -l exclusive -pe smp -999 make.sh
> 
> will always use the maximum available, up to 999 cores in a machine 
> ("exclusive" is a complex set up with "relop EXCL", "smp" a PE set up with 
> "allocation_rule $pe_slots"), and will set $NSLOTS accordingly at runtime.
> 
> Hmmm.  That's an interesting trick.
> 
> Our make invocations are already embedded in lengthy Bash scripts submitted 
> to GE.  We use one slot per machine and one slot per job to ensure exclusive 
> access (within that queue) to a given machine.  Then we invoke make from the 
> script like this:
> 
>     cpus=`grep processor /proc/cpuinfo | wc -l`
>     make -j ${cpus}  .......
> 
>                                                 Chris


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
