Yes, it works for me, but I'm running GE 6.2u5.
$ qstat -f
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
normal@compute-0-0             BIP   0/0/4          0.00     lx26-amd64
---------------------------------------------------------------------------------
normal@compute-0-1             BIP   0/0/4          -NA-     lx26-amd64    au
---------------------------------------------------------------------------------
normal@compute-0-2             BIP   0/0/4          0.01     lx26-amd64
---------------------------------------------------------------------------------
normal@compute-0-3             BIP   0/0/4          0.00     lx26-amd64
$ qsub -b y -t 1-4 /bin/sleep 3600
Your job-array 17881.1-4:1 ("sleep") has been submitted
$ qstat -f
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
normal@compute-0-0             BIP   0/0/4          0.00     lx26-amd64
---------------------------------------------------------------------------------
normal@compute-0-1             BIP   0/0/4          -NA-     lx26-amd64    au
---------------------------------------------------------------------------------
normal@compute-0-2             BIP   0/4/4          0.01     lx26-amd64
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 1
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 2
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 3
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 4
---------------------------------------------------------------------------------
normal@compute-0-3             BIP   0/0/4          0.00     lx26-amd64
$ qsub -b y /bin/sleep 3600
Your job 17882 ("sleep") has been submitted
$ qsub -b y /bin/sleep 3600
Your job 17883 ("sleep") has been submitted
$ qsub -b y /bin/sleep 3600
Your job 17884 ("sleep") has been submitted
$ qstat -f
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
normal@compute-0-0             BIP   0/3/4          0.00     lx26-amd64
   17882 0.55500 sleep      ch21778      r     06/07/2011 09:20:28     1
   17883 0.55500 sleep      ch21778      r     06/07/2011 09:20:30     1
   17884 0.55500 sleep      ch21778      r     06/07/2011 09:20:31     1
---------------------------------------------------------------------------------
normal@compute-0-1             BIP   0/0/4          -NA-     lx26-amd64    au
---------------------------------------------------------------------------------
normal@compute-0-2             BIP   0/4/4          0.01     lx26-amd64
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 1
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 2
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 3
   17881 0.55500 sleep      ch21778      r     06/07/2011 09:20:13     1 4
---------------------------------------------------------------------------------
normal@compute-0-3             BIP   0/0/4          0.00     lx26-amd64
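For reference, packing only behaves this way here because every execution host
has the host-level slots complex set. A sketch of the setup (the hostname and
slot count below are examples from my cluster; substitute your own, and note
that qconf -rattr replaces any existing complex_values on that host):

```shell
# Set a host-level slot limit on the exechost so the scheduler tracks
# consumable slots per host (hostname/count are examples, adjust as needed).
qconf -rattr exechost complex_values slots=4 compute-0-0

# Verify: each host should now report a host-level (hl:) slots value.
qhost -F slots
```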
Regards,
- Chansup
On Fri, Jun 3, 2011 at 12:19 PM, James Gladden
<[email protected]> wrote:
> Chansup,
>
> Yup - been there, done that. On all my execution hosts I have set "slots"
> to 8 since they are 8 core machines. The command "qhost -F slots" does
> indeed tell me the number of currently available slots on each host - 8 for
> an empty node, zero for a fully occupied node, or something in between as
> the case may be. This actually relates to another thread regarding
> preventing over-subscription when an execution host serves multiple cluster
> queues.
>
> So have you actually observed job packing to work on your cluster(s)?
>
> Jim
>
> On 6/3/2011 7:35 AM, CB wrote:
>>
>> Hi James,
>>
>> I jumped into this thread late, so I'm not sure whether you have already
>> made the following change. This configuration change is necessary to make
>> Grid Engine pack your jobs onto a node.
>>
>> You need to add the following in your exechost configuration:
>>
>> complex_values slots=<Nslots>
>>
>> Running the following command will show whether it has been added:
>>
>> qhost -F slots
>>
>> Regards,
>> - Chansup
>> _______________________________________________
>> users mailing list
>> [email protected]
>> https://gridengine.org/mailman/listinfo/users
>>