On Wed, Apr 30, 2014 at 4:20 AM, 沈维燕 <shenw...@gmail.com> wrote:
>
> Hi Nate,
> From your previous email, job deletion in the PBS runner will be fixed
> in the next stable release of Galaxy. Has this bug been fixed in this
> version of Galaxy
> (https://bitbucket.org/galaxy/galaxy-dist/get/3b3365a39194.zip)?
> Thank you very much for your help.
>
> Regards, Weiyan


Hi Weiyan,

Yes, this fix is included in the April 2014 stable release. However, I
would strongly encourage you to use `hg clone` rather than downloading a
static tarball. There have been a number of patches to the stable branch
since the April release. In addition, the linked tarball would pull from
the "default" branch of Galaxy, which includes unstable changesets.

--nate

>
>
>
> 2013-08-08 22:58 GMT+08:00 Nate Coraor <n...@bx.psu.edu>:
>>
>> On Aug 7, 2013, at 9:23 PM, shenwiyn wrote:
>>
>> > Yes, and I also have the same confusion about that. Actually, when I set
>> > server:<id> in universe_wsgi.ini as follows for a try, my Galaxy doesn't
>> > work with the cluster; if I remove server:<id>, it works.
>>
>> Hi Shenwiyn,
>>
>> Are you starting all of the servers that you have defined in
>> universe_wsgi.ini?  If using run.sh, setting GALAXY_RUN_ALL in the
>> environment will do this for you:
>>
>>     http://wiki.galaxyproject.org/Admin/Config/Performance/Scaling
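>>
>> For example, roughly (run from the Galaxy root directory):
>>
>>     GALAXY_RUN_ALL=1 sh run.sh --daemon
>>     GALAXY_RUN_ALL=1 sh run.sh --stop-daemon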
>>
>> > [server:node01]
>> > use = egg:Paste#http
>> > port = 8080
>> > host = 0.0.0.0
>> > use_threadpool = true
>> > threadpool_workers = 5
>> > This is my job_conf.xml :
>> > <?xml version="1.0"?>
>> > <job_conf>
>> >     <plugins workers="4">
>> >         <plugin id="local" type="runner"
load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
>> >         <plugin id="pbs" type="runner"
load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="8"/>
>> >     </plugins>
>> >     <handlers default="batch">
>> >         <handler id="node01" tags="batch"/>
>> >         <handler id="node02" tags="batch"/>
>> >     </handlers>
>> >     <destinations default="regularjobs">
>> >         <destination id="local" runner="local"/>
>> >         <destination id="regularjobs" runner="pbs" tags="cluster">
>> >             <param id="Resource_List">walltime=24:00:00,nodes=1:ppn=4,mem=10G</param>
>> >             <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
>> >             <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
>> >             <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
>> >         </destination>
>> >     </destinations>
>> > </job_conf>
>>
>> The galaxy_external_* options are only supported with the drmaa plugin,
>> and for the moment they actually belong in universe_wsgi.ini; they have
>> not been migrated to the new-style job configuration.  They should also
>> only be used if you are attempting to set up "run jobs as the real user"
>> job running capabilities.
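>>
>> If you do want the run-as-real-user setup with the drmaa plugin, the
>> equivalent settings go in universe_wsgi.ini, roughly like this (check
>> universe_wsgi.ini.sample for the exact option names):
>>
>>     drmaa_external_runjob_script = scripts/drmaa_external_runner.py
>>     drmaa_external_killjob_script = scripts/drmaa_external_killer.py
>>     external_chown_script = scripts/external_chown_script.py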
>>
>> > Furthermore, when I want to kill my jobs by clicking
>> > <Catch(08-08-09-12-39).jpg> in the Galaxy web interface, the job keeps
>> > running in the background. I do not know how to fix this.
>> > Any help on this would be appreciated. Thank you very much.
>>
>> Job deletion in the PBS runner was recently broken, but a fix for this
>> bug will be part of the next stable release (on Monday).
>>
>> --nate
>>
>> >
>> > shenwiyn
>> >
>> > From: Jurgens de Bruin
>> > Date: 2013-08-07 19:55
>> > To: galaxy-dev
>> > Subject: [galaxy-dev] Help with cluster setup
>> > Hi,
>> >
>> > This is my first Galaxy installation setup, so apologies for stupid
>> > questions. I am setting up Galaxy on a cluster running Torque as the
>> > resource manager. I am working through the documentation, but I am
>> > unclear on some things:
>> >
>> > Firstly, I am unable to find start_job_runners within
>> > universe_wsgi.ini, and I don't want to just add this anywhere - any
>> > help on this would be great.
>> >
>> > Furthermore, this is my job_conf.xml:
>> >
>> > <?xml version="1.0"?>
>> > <!-- A sample job config that explicitly configures job running the
>> >      way it is configured by default (if there is no explicit config). -->
>> > <job_conf>
>> >     <plugins>
>> >         <plugin id="hpc" type="runner"
load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" workers="4"/>
>> >     </plugins>
>> >     <handlers>
>> >         <!-- Additional job handlers - the id should match the name of a
>> >              [server:<id>] in universe_wsgi.ini. -->
>> >         <handler id="cn01"/>
>> >         <handler id="cn02"/>
>> >     </handlers>
>> >     <destinations>
>> >         <destination id="hpc" runner="drmaa"/>
>> >     </destinations>
>> > </job_conf>
>> >
>> >
>> > Does this look meaningful? Furthermore, where do I set the additional
>> > server:<id> entries in universe_wsgi.ini?
>> >
>> > As background, the cluster has 13 compute nodes and a shared storage
>> > array that can be accessed by all nodes in the cluster.
>> >
>> >
>> > Thanks again
>> >
>> >
>> >
>> > --
>> > Regards/Groete/Mit freundlichen Grüßen/recuerdos/meilleures salutations/
>> > distinti saluti/siong/duì yú/привет
>> >
>> > Jurgens de Bruin
>>
>
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
