Hi Ben,

if the job is waiting in the queue, it is unlikely (though not impossible) that this is Galaxy's fault. Can you recheck your Torque setup, and how many cores and how much memory your job has requested?
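One way to narrow it down is a deliberately small test destination, so the scheduler has no resource-based reason to hold the job. This is only a sketch against the pbs runner used in this thread; the destination id and the limits are made up:

```xml
<!-- Hypothetical minimal PBS destination for debugging:
     if a job with these modest requests still sits in the queue,
     the problem is almost certainly on the Torque side. -->
<destination id="pbs_debug" runner="pbs">
    <param id="Resource_List">nodes=1:ppn=1,mem=1gb,walltime=00:30:00</param>
</destination>
```

On the Torque side, qstat -f <jobid> shows what the job actually requested, and checkjob <jobid> (if Maui/Moab is your scheduler) explains why it has not started.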


Ciao,
Bjoern

On 22.07.2014 10:09, 王渭巍 wrote:
Hi, Bjoern,
         I've tried the latest Galaxy version with Torque 4.1.7, and it seems all 
right. But Torque versions > 4.2 won't work.
         I also tried to submit "fastqc readqc" jobs via Torque (runner pbs), 
but the job is always waiting in the queue. When I submitted "fastqc readqc" 
locally (runner local), the job finished successfully. So the question is: it 
seems not all tools can be submitted via Torque (or another resource manager), 
right?
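In principle any tool can be routed to a Torque destination; the tools section of job_conf.xml decides where each tool runs. A hedged sketch of the relevant fragment, where the tool id "fastqc" is an assumption and should be checked against the installed tool's actual id:

```xml
<!-- "fastqc" is an assumed tool id; verify it in the tool's own XML -->
<tools>
    <tool id="fastqc" destination="pbs_default"/>
</tools>
```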



王渭巍

From: Björn Grüning
Date: 2014-07-21 01:23
To: 王渭巍; Björn Grüning; galaxy-dev
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hi Ben,

sorry but we do not run a Torque setup.

Do you have any concrete questions or error messages?

Cheers,
Bjoern

On 17.07.2014 04:10, 王渭巍 wrote:
Hi, Bjoern,
          Would you share your procedure for getting some tools to run on a 
cluster?
          I have tried 
https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster using Torque, 
but got errors.
          I think the problem may be in job_conf.xml. Would you share yours? Thanks a lot.

Ben


From: Björn Grüning
Date: 2014-07-16 16:34
To: 王渭巍; Thomas Bellembois; galaxy-dev
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hi Ben,

that is not possible at the moment. The idea is to keep the
user interface as simple as possible for the user. You, as admin, can
decide which resources a specific tool with a specific input will use.
You will never see options like that in a tool, but you can write a
tool yourself if you like, or "enhance" the megablast tool.

Cheers,
Bjoern


On 16.07.2014 09:43, 王渭巍 wrote:
Thanks a lot, Thomas! It really helps; I added a tools section following your 
suggestion...

here is my job_conf.xml (I am using Torque; I have 3 servers: one for the Galaxy 
server, two for cluster computing.)

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/>
    </plugins>
    <destinations default="pbs_default">
        <destination id="pbs_default" runner="pbs"/>
        <destination id="long_jobs" runner="pbs">
            <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param>
            <param id="-p">128</param>
        </destination>
    </destinations>
    <tools>
        <tool id="megablast_wrapper" destination="long_jobs"/>
    </tools>
</job_conf>

and still no cluster options in the "megablast" item. How can I see cluster 
options on the page, for example a page that lets me choose between the local 
server and the cluster?

Ben



From: Thomas Bellembois
Date: 2014-07-15 17:41
To: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hello Ben,

you can configure your Galaxy instance to use your cluster in the
job_conf.xml file:

https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster

You can set up your instance to use your cluster by default for all jobs
or only for specific jobs.

Here is a part of my job_conf.xml for example:

    <plugins>
        <!-- LOCAL JOBS -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>

        <!-- SUN GRID ENGINE -->
        <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>

    <handlers default="handlers">
        <handler id="handler0" tags="handlers"/>
        <handler id="handler1" tags="handlers"/>
    </handlers>

    <destinations default="sge_default">
        <destination id="local" runner="local"/>
        <destination id="sge_default" runner="sge">
            <param id="nativeSpecification">-r yes -b n -cwd -S /bin/bash -V -pe galaxy 1</param>
        </destination>
        <destination id="sge_big" runner="sge">
            <param id="nativeSpecification">-r yes -b n -cwd -S /bin/bash -V -pe galaxy 12</param>
        </destination>
    </destinations>

    <tools>
        <tool id="upload1" destination="local"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/bhaas/trinityrnaseq/trinityrnaseq/0.0.1" destination="sge_big"/>
        <tool id="mira_assembler" destination="sge_big"/>
        <tool id="megablast_wrapper" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastp_wrapper/0.1.00" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_tblastn_wrapper/0.1.00" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastx_wrapper/0.1.00" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastn_wrapper/0.1.00" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_tblastx_wrapper/0.1.00" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_rpstblastn_wrapper/0.1.00" destination="sge_big"/>
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_rpsblast_wrapper/0.1.00" destination="sge_big"/>
    </tools>

Moreover, your Galaxy user and Galaxy server must be allowed to submit
jobs to your scheduler.
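As a quick sanity check of that permission, you can try submitting a trivial job as the user Galaxy runs as. This is only a sketch: the user name "galaxy" is assumed, and it requires a live scheduler with qsub/qstat on the path:

```shell
# Hypothetical check: submit a one-line job as the assumed "galaxy" user
# and watch it pass through the scheduler.
sudo -u galaxy bash -c 'echo "hostname" | qsub'
qstat    # the test job should appear, run, and then leave the queue
```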

Hope it helps,

Thomas




___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
     http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
     http://galaxyproject.org/search/mailinglists/
