Hey guys,

Another cluster-related question.  Currently on our cluster I'm
submitting jobs like this (from Perl):

system("qsub -N msgRun0-1.$$ -pe batch $update_nthreads -cwd -b y -V -sync n ./msgRun0-1.sh");

Does anyone know what I should use for -pe?  I couldn't tell from the
man page which options are available.  From my reading, "PE" refers to
a parallel environment.
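For anyone following along: on a stock SGE install, the configured parallel environments can be listed with `qconf`.  A sketch (PE names such as "batch" vary per site, and the commands are guarded so the snippet is safe on a host without SGE):

```shell
# List the parallel environments (PEs) this cluster knows about,
# then show one PE's definition (slot allocation rule, etc.).
if command -v qconf >/dev/null 2>&1; then
    PES=$(qconf -spl)        # one PE name per line, e.g. "batch", "smp", "mpi"
    echo "$PES"
    qconf -sp batch          # "batch" is an assumed name; pick one printed above
else
    PES=""
    echo "qconf not found; run this on an SGE submit host"
fi
```

Whatever name `qconf -spl` prints is what goes after `-pe` in the qsub call, followed by the number of slots to request.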

Thanks,

Greg


On Tue, Jan 24, 2012 at 9:32 AM, mailing list <margeem...@gmail.com> wrote:
> Thanks.  I'll be sure to keep you guys updated.  So far everything is
> working great!
>
> -Greg
>
>
> On Tue, Jan 24, 2012 at 9:20 AM, Enis Afgan <eaf...@emory.edu> wrote:
>> Hi again Greg,
>>
>>
>> On Tue, Jan 24, 2012 at 2:51 PM, mailing list <margeem...@gmail.com> wrote:
>>>
>>> Hi guys,
>>>
>>> So I'm trying to take a bioinformatics pipeline we currently run on
>>> our own SGE cluster (which, I'm finding out, seems to have a lot of
>>> customizations) and run it on Amazon using CloudMan.
>>
>> Glad to hear this! Please let me know how it goes in the end.
>>>
>>>
>>> So I'm trying to figure out what the differences are.  Here are my
>>> questions so far:
>>>
>>> We seem to have 8 slots per node.  Do you guys know if the same is
>>> true on CloudMan?  I'm assuming more than one submitted qsub job can
>>> run on one node at one time?
>>
>> The number of slots per node depends on the type of instance used. CloudMan
>> will configure an SGE instance to have as many slots as there are processing
>> cores. So, for example, a large instance will have 2 slots, an extra large
>> instance will have 4 slots. Each qsub job then gets submitted to one slot.
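The slot count described above can be checked directly on a running cluster.  A sketch, assuming the queue is the SGE default `all.q` (guarded for hosts without the SGE client tools):

```shell
# NCPU in `qhost` output is the core count that gets mapped to slots;
# the queue's own "slots" setting is what SGE will actually schedule.
if command -v qhost >/dev/null 2>&1; then
    qhost                                           # per-node hardware summary
    SLOTS_LINE=$(qconf -sq all.q | grep -w slots)   # "all.q" is an assumed queue name
    echo "$SLOTS_LINE"
else
    SLOTS_LINE=""
    echo "SGE client tools not found"
fi
```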
>>>
>>>
>>> Also is there a way to reserve a certain amount of memory for a job
>>> submitted via qsub, or alternatively to request that the job have
>>> exclusive access to a node so it keeps all the memory available?
>>
>> There are no preconfigured provisions for this in the SGE configuration
>> provided by CloudMan. However, this is a vanilla SGE setup, so it is entirely
>> possible to manually set up the queues as you'd like and have them do the
>> various resource reservations. SGE is installed under /opt/sge.
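As a sketch of what such a manual setup makes possible, here are two common approaches.  Both are hypothetical for this cluster: `h_vmem` must first be configured as a consumable complex by the admin, and grabbing a whole node via slots only works if the PE's allocation_rule packs all slots onto one host ($pe_slots):

```shell
# Approach 1: request memory per slot; SGE then avoids overcommitting a node.
MEM_JOB="qsub -l h_vmem=4G -cwd -b y ./msgRun0-1.sh"

# Approach 2: request every slot on a node so no other job lands there,
# leaving all of the node's memory to this job.
NSLOTS=8    # assumed slots per node for the chosen instance type
WHOLE_NODE="qsub -pe batch $NSLOTS -cwd -b y ./msgRun0-1.sh"

echo "$MEM_JOB"
echo "$WHOLE_NODE"
```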
>>
>>
>>>
>>> Thanks,
>>>
>>> Greg
>>> ___________________________________________________________
>>> The Galaxy User list should be used for the discussion of
>>> Galaxy analysis and other features on the public server
>>> at usegalaxy.org.  Please keep all replies on the list by
>>> using "reply all" in your mail client.  For discussion of
>>> local Galaxy instances and the Galaxy source code, please
>>> use the Galaxy Development list:
>>>
>>>  http://lists.bx.psu.edu/listinfo/galaxy-dev
>>>
>>> To manage your subscriptions to this and other Galaxy lists,
>>> please use the interface at:
>>>
>>>  http://lists.bx.psu.edu/
>>
>>
