On 24.01.2013 at 18:54, Dave Love wrote:
> [Excuse any duplicates -- I'm not sure if gridengine.org is tits-up
> again as well as our mail hub sulking at my laptop.]
>
> Reuti writes:
>
>>> I think that's an old version. Suggestions are welcome for any
>>> improvements to the current one, which I tried to tidy up […]
[Excuse any duplicates -- I'm not sure if gridengine.org is tits-up
again as well as our mail hub sulking at my laptop.]
Reuti writes:
>> I think that's an old version. Suggestions are welcome for any
>> improvements to the current one, which I tried to tidy up (from which
>> http://arc.liv.ac.
On 18.01.2013 at 17:24, Dave Love wrote:
> Reuti writes:
>
>> It's not limited to a PE list entry; it applies to all of them. It is
>> explained at the beginning of `man queue_conf` under "hostlist",
>> although it's hard to read there because the bracket is both a meta
>> symbol and a character to be typed.
Reuti writes:
> It's not limited to a PE list entry; it applies to all of them. It is
> explained at the beginning of `man queue_conf` under "hostlist",
> although it's hard to read there because the bracket is both a meta
> symbol and a character to be typed.
I think that's an old version. Suggestions are welcome for any
improvements to the current one, which I tried to tidy up […]
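To make the hostlist bracket notation concrete, here is a minimal sketch
of a cluster queue attribute in the form `man queue_conf` describes (the
PE and hostgroup names reuse the ones appearing in this thread; the
slots line is a purely hypothetical illustration):

    # pe_list with a per-hostgroup override: "make" is the default
    # everywhere, while queue instances on hosts in the @mpi-AMD
    # hostgroup offer the openmpi-AMD PE instead.
    pe_list   make,[@mpi-AMD=openmpi-AMD]

    # The same [hostlist=value] form works for other queue_conf
    # attributes too, e.g. (hypothetical values):
    # slots     4,[@mpi-AMD=16]

The leading value applies cluster-wide; each bracketed entry replaces it
for the named host or hostgroup.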
On 12.01.2013 at 01:04, berg...@merctech.com wrote:
> Where is the syntax for the pe_list parameter documented? I looked for an
> explanation, but didn't find details or examples in the man pages. There were
> some previous discussions on the mailing list (mostly from you), but they
> don't provide […]
In the message dated Fri, 11 Jan 2013 23:45:05 +0100, the pithy
ruminations from Reuti on […] were:
=> On 11.01.2013 at 23:16, berg...@merctech.com wrote:
=>
=> >
[SNIP!]
=> >
=> > qconf -sq all.q | grep pe_list
=> > pe_list    threaded make,[@mpi-AMD=openmpi-AMD],[@mpi- […]
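For checking how such a configuration resolves in practice, the
standard tools should help (a sketch; `threaded` is the PE name from
this thread, and the options are as documented in the qconf and qselect
man pages):

    # Show the definition of the "threaded" PE:
    qconf -sp threaded
    # List the queue instances that currently offer it; the effect of
    # any [@hostgroup=...] overrides in pe_list shows up here:
    qselect -pe threaded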
On 11.01.2013 at 23:16, berg...@merctech.com wrote:
>
> I recently reconfigured our SGE (6.2u5) environment to better handle MPI jobs
> on a heterogeneous cluster. This seems to have caused a problem with the
> "threaded" (SMP) PE.
>
> Our PEs are:
>
> qconf -spl
> make […]
I recently reconfigured our SGE (6.2u5) environment to better handle MPI jobs
on a heterogeneous cluster. This seems to have caused a problem with the
"threaded" (SMP) PE.
Our PEs are:
qconf -spl
make (unused)
openmpi-AMD
[…]