[galaxy-dev] job_conf.xml questions

2013-07-31 Thread shenwiyn
Hi Ido Tamir,
I have the same question too. For a test I assigned "megablast" to "long_jobs", 
but when I cancel the megablast job in my Galaxy history before it finishes, it 
keeps running in the background. drmaa_external_killer.py does not seem to work. 
I don't know where the problem is and hope someone can help me.
a) In my advanced example I have:

    walltime=72:00:00,nodes=2:ppn=8,mem=16G
    scripts/drmaa_external_runner.py
    scripts/drmaa_external_killer.py
    scripts/external_chown_script.py
b) On the server, the "qstat" result shows the job still running:

Job id     Name                 User    Time Use  S  Queue
---------  -------------------  ------  --------  -  -----
46.server  ...n...@genomics.cn  galaxy  00:00:00  R  batch



shenwiyn

From: Ido Tamir
Date: 2013-07-23 21:53
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] job_conf.xml questions
Hi,
I work myself through the job_conf.xml and have a question:

a)
In your advanced example you have:

    scripts/drmaa_external_runner.py
    scripts/drmaa_external_killer.py
    scripts/external_chown_script.py
Does this mean that remote_cluster jobs cannot be killed unless I add the 3 
scripts to this destination?
Or does this relate to the "run jobs as real user" feature, which I don't 
currently need, as I am using the galaxy user for all jobs for the moment?

To phrase this differently: Do I need these three scripts?

b)
Can I set a walltime limit for only my local runners?

Like:

    walltime=72:00:00

When I set it in the limits section, will it work for local runners, and do I 
then have to override it in every destination?
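For reference, a sketch of how both options might be expressed in job_conf.xml (destination ids, runner names, and the Torque-style walltime value are illustrative, not taken from any actual configuration):

```xml
<job_conf>
    <destinations default="local">
        <destination id="local" runner="local"/>
        <destination id="remote_cluster" runner="drmaa">
            <!-- Illustrative: pass a walltime to the DRM for this destination only -->
            <param id="nativeSpecification">-l walltime=72:00:00</param>
        </destination>
    </destinations>
    <!-- A global limit, enforced by Galaxy itself rather than the DRM -->
    <limits>
        <limit type="walltime">72:00:00</limit>
    </limits>
</job_conf>
```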


thank you very much,
ido





___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] unable to edit pages

2013-07-31 Thread Philipe Moncuquet
Hi guys,

Since the last update of Galaxy, we are unable to edit pages on our local
instance. Pages can be viewed without problems, but not edited. The server
error we get is as follows; any advice would be greatly appreciated.

URL:
http://galaxy-dev.bioinformatics.csiro.au/page/edit_content?id=f2db41e1fa331b3e
Module galaxy.web.framework.middleware.error:149 in __call__
  app_iter = self.application(environ, sr_checker)
Module paste.recursive:84 in __call__
  return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__
  return self.application(environ, start_response)
Module galaxy.web.framework.base:132 in __call__
  return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:190 in handle_request
  body = method( trans, kwargs )
Module galaxy.web.framework:98 in decorator
  return func( self, trans, *args, **kwargs )
Module galaxy.webapps.galaxy.controllers.page:446 in edit_content
  return trans.fill_template( "page/editor.mako", page=page )
Module galaxy.web.framework:957 in fill_template
  return self.fill_template_mako( filename, kwargs )
Module galaxy.web.framework:969 in fill_template_mako
  return template.render( data )
Module mako.template:296 in render
  return runtime._render(self, self.callable_, args, data)
Module mako.runtime:660 in _render
  _kwargs_for_callable(callable_, data))
Module mako.runtime:692 in _render_context
  _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
Module mako.runtime:718 in _exec_template
  callable_(context, *args, **kwargs)
Module _base_base_panels_mako:125 in render_body
  __M_writer(unicode(self.center_panel()))
Module page_editor_mako:196 in render_center_panel
  __M_writer(unicode(page.latest_revision.content.decode('utf-8')))
Module encodings.utf_8:16 in decode
  return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in
position 1709: ordinal not in range(128)
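The root cause is a Python 2 quirk: the page content is already a unicode string, and calling .decode('utf-8') on it makes Python first encode it with the default ASCII codec, which fails on the U+2019 right single quotation mark. A minimal Python 3 sketch of the same codec failure (illustrative only; the attribute names in the traceback above are Galaxy's):

```python
# A page body containing U+2019 (right single quotation mark),
# as in the traceback above.
content = "It\u2019s a page"

# Encoding with the ASCII codec fails exactly like the reported error.
try:
    content.encode("ascii")
except UnicodeEncodeError as err:
    print(err)  # 'ascii' codec can't encode character '\u2019' ...
```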
Regards,
Philippe

[galaxy-dev] importing data from other public websites into galaxy

2013-07-31 Thread Prasun Dutta
Hi,

I wish to import data into my local instance of Galaxy from another 
website. Kindly let me know how to achieve this and what is 
required. I want to import data from a chemoinformatics database (mainly 
compound structures) into Galaxy and run my own analysis tools on it.
 Regards,
Prasun Dutta
MSc (Bioinformatics)
School of Biological Sciences
University of Edinburgh, UK

Re: [galaxy-dev] Velvet optimizer problems : Urgent help.

2013-07-31 Thread Ricardo Perez

On 07/31/2013 11:24 AM, Perez, Ricardo wrote:

Dear all,

When trying to run Velvet Optimiser in Galaxy, all the files we get return the 
following:

"No peek"

When looking at the output of Galaxy, we get the following:

galaxy.jobs.runners.local DEBUG 2013-07-31 09:31:46,829 execution finished: perl 
/usr/local/galaxy/bioinfosoft/shed_tools/toolshed.g2.bx.psu.edu/repos/simon-gladman/velvet_optimiser/43c89d82a7d3/velvet_optimiser/velvet_optimiser_wrapper_vlsci.pl
 '19' '23' '2'  '0' 'short' 'False' 
'' 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2067.dat''not_shortMP''other:''no_amos''not_verbose'
 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2665.dat''/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2666.dat'
 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2667.dat''/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2663.dat'
 '/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2668.dat' > 
/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2664.dat

galaxy.jobs.runners DEBUG 2013-07-31 09:31:47,027 executing external set_meta 
script for job 1983: /usr/local/galaxy/galaxy-dist/set_metadata.sh 
./database/files 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983 . 
/usr/local/galaxy/galaxy-dist/universe_wsgi.ini 
/usr/local/galaxy/galaxy-dist/database/tmp/tmp3460Jz 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/galaxy.json
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3117_enOT5T,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3117_vy3tDq,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3117_IFmVK8,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3117_0j3kza,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3117_zwMKHf
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3116_YppZ1L,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3116_Sniis0,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3116_l9RPFx,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3116_It8rUk,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3116_AwvXlS
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3115_TOhpBs,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3115_wUomyo,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3115_nsUrJl,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3115_DAmGEt,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3115_hFGdc0
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3114_59uEii,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3114_UcezIt,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3114_YmOlHD,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3114_owVBfP,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3114_O3UH82
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3113_vMrQrs,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3113_YhwIGZ,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3113_pAG6wP,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3113_9ImAZ6,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3113_uI5cbM
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3112_NwB9j6,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3112_LK2cWo,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3112__FDnXW,/usr/local/galaxy/galaxy-dist/database/job_working_dir

[galaxy-dev] Problems with sharing datasets

2013-07-31 Thread Ricardo Perez

Dear all,

When trying to share a dataset among users in Galaxy, the sharing usually
fails. When we go to "Dataset Securities" and try to change the permissions
of a dataset, the sharing fails: the original configuration is kept, and
another Galaxy page opens inside the Galaxy browser page that we already
have. Is this due to a misconfiguration on my part?

Thank you for your time,
--Ricardo Perez





[galaxy-dev] Choosing which nodes to run on for a whole history (Galaxy cluster)

2013-07-31 Thread Ben Gift
I'm setting up Galaxy and Torque with my cluster and I was wondering if I
could set it up so that nodes could be assigned for a whole history. This
way I can re-run histories on certain nodes for benchmarking runtimes.

It seems that the closest I can get with the built-in functionality is
specifying nodes on a per-tool basis, which is good, but it would be painful
to specify this for each tool I use in a pipeline each time.

So how difficult would it be for me to add this feature, and which files
should I start with in the code base? Or have I misunderstood and this is
actually built in?
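For reference, the per-tool route mentioned above might look roughly like this in job_conf.xml (the node name, destination id, and Resource_List value are illustrative, not from any actual setup):

```xml
<destinations>
    <destination id="bench_node" runner="pbs">
        <!-- Illustrative: pin this destination's jobs to one Torque node -->
        <param id="Resource_List">nodes=node007:ppn=8</param>
    </destination>
</destinations>
<tools>
    <!-- Each tool in the pipeline has to be mapped individually -->
    <tool id="bowtie2" destination="bench_node"/>
</tools>
```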

Thanks

[galaxy-dev] August 2013 Galaxy Update Newsletter is out

2013-07-31 Thread Dave Clements
Hello all,

The August 2013 Galaxy Update is now available.

*Highlights:*

   - *GCC2013 Report:* Meeting summaries, and links to videos, talks, posters,
     and Training Day materials.
   - Two new public servers
   - 47 new papers
   - SlipStream: Galaxy is now available as an appliance
   - There's a new Galaxy-Proteomics mailing list
   - Open Positions at eight different organizations
   - Galaxy @ ISMB: links to slides and posters
   - Other Upcoming Events, including training in California, Sydney, Italy,
     Toulouse, and Boston.
   - New CloudMan Release
   - Tool Shed Contributions
   - Other News

If you have anything you would like to see in the next *Galaxy Update*,
please let us know.

Dave Clements and the Galaxy Team 
-- 
http://galaxyproject.org/GCC2013
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://wiki.galaxyproject.org/

[galaxy-dev] reference genomes missing

2013-07-31 Thread Gerald Bothe
I installed a local instance of Galaxy following the instructions at "Get 
Galaxy ..." in the wiki. It installed OK, but when I tried the Bowtie2 mapper 
with a test data set, I noticed that the reference genomes are missing. How can 
I get the reference genomes (e.g. the latest versions of mouse and human) and 
integrate them into Galaxy? Or is there a Galaxy version available that has the 
genomes already installed?

Thanks

Gerald

[galaxy-dev] Velvet optimizer problems : Urgent help.

2013-07-31 Thread Perez, Ricardo
Dear all,

When trying to run Velvet Optimiser in Galaxy, all the files we get return the 
following:

"No peek"

When looking at the output of Galaxy, we get the following:

galaxy.jobs.runners.local DEBUG 2013-07-31 09:31:46,829 execution finished: 
perl 
/usr/local/galaxy/bioinfosoft/shed_tools/toolshed.g2.bx.psu.edu/repos/simon-gladman/velvet_optimiser/43c89d82a7d3/velvet_optimiser/velvet_optimiser_wrapper_vlsci.pl
 '19' '23' '2'  '0' 'short' 'False' 
'' 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2067.dat''not_shortMP''other:''no_amos''not_verbose'
 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2665.dat''/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2666.dat'
 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2667.dat''/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2663.dat'
 
'/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2668.dat' > 
/usr/local/galaxy/galaxy-dist/database/files/002/dataset_2664.dat

galaxy.jobs.runners DEBUG 2013-07-31 09:31:47,027 executing external set_meta 
script for job 1983: /usr/local/galaxy/galaxy-dist/set_metadata.sh 
./database/files 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983 . 
/usr/local/galaxy/galaxy-dist/universe_wsgi.ini 
/usr/local/galaxy/galaxy-dist/database/tmp/tmp3460Jz 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/galaxy.json
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3117_enOT5T,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3117_vy3tDq,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3117_IFmVK8,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3117_0j3kza,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3117_zwMKHf
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3116_YppZ1L,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3116_Sniis0,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3116_l9RPFx,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3116_It8rUk,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3116_AwvXlS
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3115_TOhpBs,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3115_wUomyo,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3115_nsUrJl,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3115_DAmGEt,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3115_hFGdc0
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3114_59uEii,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3114_UcezIt,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3114_YmOlHD,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3114_owVBfP,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3114_O3UH82
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3113_vMrQrs,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3113_YhwIGZ,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3113_pAG6wP,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDatasetAssociation_3113_9ImAZ6,,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_override_HistoryDatasetAssociation_3113_uI5cbM
 
/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_in_HistoryDatasetAssociation_3112_NwB9j6,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_kwds_HistoryDatasetAssociation_3112_LK2cWo,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_out_HistoryDatasetAssociation_3112__FDnXW,/usr/local/galaxy/galaxy-dist/database/job_working_directory/001/1983/metadata_results_HistoryDa

Re: [galaxy-dev] Mail is not configured for this Galaxy instance.

2013-07-31 Thread Hans-Rudolf Hotz

Hi Shenwiyn

You need to specify an SMTP server. Have a look at the 'smtp_server' 
setting in your "universe_wsgi.ini" file.
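For example, in universe_wsgi.ini (the server name and credentials below are placeholders):

```ini
# Mail settings used for password resets and similar notifications
smtp_server = smtp.example.org:587
smtp_username = galaxy-mailer
smtp_password = secret
```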



Regards, Hans-Rudolf




On 07/31/2013 04:19 PM, shenwiyn wrote:

Hi everyone,
In my local Galaxy, when I want to reset my login password, an error
occurs: "Mail is not configured for this Galaxy instance. Please contact an
administrator." Could anyone give me some information about how to configure
mail for my Galaxy instance?
Thank you very much.

shenwiyn




[galaxy-dev] Mail is not configured for this Galaxy instance.

2013-07-31 Thread shenwiyn
Hi everyone,
In my local Galaxy, when I want to reset my login password, an error occurs: 
"Mail is not configured for this Galaxy instance. Please contact an 
administrator." Could anyone give me some information about how to configure 
mail for my Galaxy instance?
Thank you very much.




shenwiyn

Re: [galaxy-dev] Appending _task_%d suffix to multi files

2013-07-31 Thread Jorrit Boekel

Hi Alex,

In our lab, files are often fractions of an experiment, but they are 
named by their creators in whatever way they like. I put that code in to 
standardize fraction naming, in case a tool needs input from two files 
that originate from the same fraction (but have been treated in 
different ways). In those cases, in my fork, Galaxy always picks the 
files with the same _task_%d numbers.
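The matching described above can be sketched in a few lines of Python (this is a hypothetical illustration, not Galaxy's actual code; the file names are invented):

```python
import re

def task_index(name):
    # Extract the numeric _task_%d suffix from a file name, or None.
    m = re.search(r"_task_(\d+)", name)
    return int(m.group(1)) if m else None

def pair_by_task(files_a, files_b):
    # Pair files from two lists that carry the same _task_%d number,
    # so matching fractions stay together regardless of original names.
    by_index = {task_index(f): f for f in files_b}
    return [(f, by_index[task_index(f)])
            for f in files_a if task_index(f) in by_index]

pairs = pair_by_task(
    ["sample_task_0.mzML", "sample_task_1.mzML"],
    ["ids_task_1.tsv", "ids_task_0.tsv"],
)
# pairs: [("sample_task_0.mzML", "ids_task_0.tsv"),
#         ("sample_task_1.mzML", "ids_task_1.tsv")]
```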


I can't help you very much right now, as I'm currently away from work 
until October, but I hope this explains why it's in there.


cheers,
jorrit

On 07/31/2013 04:15 AM, alex.khassa...@csiro.au wrote:


Hi guys,

We've been using Galaxy for a year now; we created our own Galaxy fork 
where we make changes to adapt Galaxy to our requirements. As we need 
the "multiple file dataset" feature, we were initially using John's fork 
for that.


Now we are trying to use "The most updated version of the multiple 
file dataset stuff" https://bitbucket.org/msiappdev/galaxy-extras/ 
directly as we don't want to maintain our own version.


One of the problems we have: when we upload multiple files, their 
file names are changed (a _task_%d suffix is added to their names).


On our branch we simply removed the code which does it, but now we 
wonder if it is possible to avoid this renaming somehow? I.e. make it 
configurable?


Is it really necessary to change the file names?

-Alex

-Original Message-
From: galaxy-dev-boun...@lists.bx.psu.edu 
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of Jorrit Boekel

Sent: Thursday, 25 October 2012 8:35 PM
To: Peter Cock
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] the multi job splitter

I keep the files matched by keeping a _task_%d suffix to their names. 
So each task is matched with its correct counterpart with the same number.


cheers,

jorrit




Re: [galaxy-dev] shall we specify which tools run on the cluster?

2013-07-31 Thread Ido Tamir

On Jul 31, 2013, at 8:52 AM, shenwiyn  wrote:

> Hi Thon Deboer,
> I am new to Galaxy. I installed my Galaxy with Torque 2.5.0, and Galaxy 
> uses the pbs module to interface with TORQUE. But I have some questions about 
> the job_conf.xml:
> 1.) In your job_conf.xml, you use regularjobs, longjobs, shortjobs... to run 
> different jobs. How does Galaxy know which tool belongs to regularjobs or 
> longjobs? And what is the meaning of "nativeSpecification"?

By specifying, as Thon did, in the tools section the id of the tool and its 
"destination", which selects the settings.
The nativeSpecification allows you to set additional parameters that are passed 
with the submission call,
e.g. "-pe smp 4" tells the grid engine to use the parallel environment smp with 
4 cores.

> 2.) Shall we use a tool mapping like <tool id="bwa" 
> destination="multicorejobs4"/> to specify bwa? Does it mean that bwa belongs 
> to multicorejobs4 and runs on the cluster?

exactly

> 3.) Does every tool need us to specify which destination it belongs to?
> I saw http://wiki.galaxyproject.org/Admin/Config/Jobs about this, but I am 
> not sure. Could you help me please?
>  

Fortunately, there is a default destination.
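A minimal job_conf.xml sketch of this mapping (the plugin, destination ids, and queue settings are illustrative; only the bwa/multicorejobs4 pairing comes from the question above):

```xml
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <destinations default="regularjobs">
        <destination id="regularjobs" runner="drmaa"/>
        <destination id="multicorejobs4" runner="drmaa">
            <param id="nativeSpecification">-pe smp 4</param>
        </destination>
    </destinations>
    <tools>
        <!-- bwa jobs go to the 4-core destination; unlisted tools use the default -->
        <tool id="bwa" destination="multicorejobs4"/>
    </tools>
</job_conf>
```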


> shenwiyn
>  
> From: Thon Deboer
> Date: 2013-07-18 14:31
> To: galaxy-dev
> Subject: [galaxy-dev] Jobs remain in queue until restart
> Hi,
>  
> I have noticed that from time to time, the job queue seems to be "stuck" and 
> can only be unstuck by restarting Galaxy.
> The jobs seem to be in the queued state, the python job handler processes 
> are hardly ticking over, and the cluster is empty.
> 
> When I restart, the startup procedure realizes all jobs are in a "new 
> state" and it then assigns a job handler, after which the jobs start fine.
>  
> Any ideas?
>  Torque
>  
> Thon
>  
> P.S. I am using the June version of Galaxy, and I DO set limits on my users in 
> job_conf.xml as shown below. (Maybe it is related? Before it went into dormant 
> mode, this user had started lots of jobs and may have hit the limit, but I 
> assumed this limit was the number of running jobs at one time, right?)
>  
> 
> 
> 
> 
>  load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
>  load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" workers="8"/>
>  load="galaxy.jobs.runners.cli:ShellJobRunner" workers="2"/>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -V -q long.q -pe smp 1
> 
> 
> 
> -V -q long.q -pe smp 1
> 
> 
> 
> -V -q short.q -pe smp 1
> 
>  tags="cluster,multicore_jobs">
> 
> -V -q long.q -pe smp 4
> 
>  
> 
>  
> 
> 
> python
> interactiveOrCluster
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 

