On 11.12.2012 at 22:19, Reuti wrote:
> Hi,
>
> On 07.12.2012 at 11:16, Arnau Bria wrote:
>
>> I've configured our cluster so that slots
>
> slots are consumable by default in `qconf -sc`.
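>
> For reference, the relevant line of `qconf -sc` should look similar to this (columns abbreviated):
>
> #name   shortcut   type   relop   requestable   consumable   default   urgency
> slots   s          INT    <=      YES           YES          1         1000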
>
>
>> /memory are consumable
>> resources. Our nodes have their limits, and there are some default
>> resource requirements at job submission. All this conf should avoid
>> memory/processor oversubscription (at least, from what I've read),
>> something like http://jeetworks.org/node/93 ... Is this the recommended
>> way to avoid over-subscription?
>
> Yes. OTOH you can use an RQS.
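>
> E.g. a sketch of an RQS limiting slots per host (the name and the value 8 are just examples; see `man sge_resource_quota`):
>
> $ qconf -srqs slots_per_host
> {
>    name         slots_per_host
>    description  avoid slot oversubscription per host
>    enabled      TRUE
>    limit        hosts {*} to slots=8
> }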
>
> But instead of the intermediate files the author provided, I would prefer a
> command similar to:
>
> $ for node in node{01..24}; do qconf -mattr exechost complex_values slots=8 $node; done
I must admit that before Bash 4 I used `seq`, because of the missing leading
zeros which `seq` could provide. But with Bash 4 you can also get rid of the
for loop entirely:
$ qconf -mattr exechost complex_values slots=8 node{01..24}
-- Reuti
> (You need Bash 4 for leading zeros here.)
>
> NB: default requests for consumables I would put in the complex definition.
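>
> I.e. set the "default" column in `qconf -mc`, e.g. for a memory consumable (the 2G default is just an example value):
>
> #name    shortcut   type     relop   requestable   consumable   default   urgency
> h_vmem   h_vmem     MEMORY   <=      YES           YES          2G        0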
>
>
>> I've also configured core-binding, and the default for each job is 1
>> slot.
>>
>> But with this conf I have some questions:
>>
>> 1.-) When submitting a job specifying more than 1 job slot (-l slots=2
>> -binding linear:2), OGS fails and suggests using a parallel environment.
>
> Yep.
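>
> I.e. something like this would work with the PE you show below (job.sh is just a placeholder):
>
> $ qsub -pe smp 2 -binding linear:2 job.sh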
>
>
>> I've read
>> somewhere that this is by OGS design, so I need a pe. I haven't found
>> clear docs about pe (yes, how to create and manage one, but not a
>> complete definition of each parameter and its implications), so could
>> anyone share some docs about it?
>
> The definitions of the entries are in the man page (`man sge_pe`). What is unclear therein?
>
>
>> What is the minimum conf I need to allow
>> slot requests at job submission? Something like:
>>
>> $ qconf -sp smp
>> pe_name smp
>> slots 1024
>> user_lists NONE
>> xuser_lists NONE
>> start_proc_args NONE
>> stop_proc_args NONE
>> allocation_rule $pe_slots
>
> With this setting you are limited to one host, but this is correct for smp.
>
>
>> control_slaves FALSE
>
> For tightly integrated jobs the above needs to be set to TRUE, in case you
> want to issue a local startup of a second process (or go to a second
> node). For forks and threads it doesn't matter.
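>
> (With TRUE, the slave processes are then started under SGE's control via `qrsh -inherit <host> <command>`.)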
>
>
>> job_is_first_task TRUE
>> urgency_slots min
>> accounting_summary FALSE
>>
>>
>> Is this enough? (This is what I'm using and it works fine.)
>
> Yes.
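>
> Just remember to attach the PE to a queue so it can be requested, e.g. (all.q is just an example queue name):
>
> $ qconf -aattr queue pe_list smp all.q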
>
>
>> 2.-) I've still not configured share-based priorities, but once done:
>> if a user requests more slots/memory than the default but does not
>> use them, is this request taken into account for the share calculation?
>> I mean, user A requests 8GB and 1 CPU and uses 8GB and 3600 sec of CPU,
>
> h_cpu or h_rt?
>
>
>> and user B requests 16GB and 2 CPUs, but uses 8GB and 3600 sec of CPU. Are
>> both users' priorities recalculated by resource usage or by resource
>> requests?
>
> Depends on the settings: ACCT_RESERVED_USAGE and SHARETREE_RESERVED_USAGE in
> SGE's configuration (`man sge_conf`).
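>
> E.g. in the global configuration (`qconf -mconf`):
>
> execd_params   ACCT_RESERVED_USAGE=TRUE,SHARETREE_RESERVED_USAGE=TRUE
>
> With these set, the usage recorded for accounting and for the share tree is
> based on the requested (reserved) resources instead of the measured ones.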
>
> -- Reuti
>
>
>> TIA,
>> Arnau
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users