Re: [galaxy-dev] issue with LSB_DEFAULTQUEUE

2011-05-19 Thread Marina Gourtovaia

Hi Leandro

I do not think the DRMAA binding is aware of that environment variable. I
used the following string in Galaxy's DRMAA configuration in universe_wsgi.ini:


default_cluster_job_runner = drmaa://-q srpipeline -P pipeline/

-q for the queue and -P for the project
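For reference, a minimal sketch of how this can look in a 2011-era
universe_wsgi.ini, with a hypothetical per-tool override (the bowtie_wrapper
tool id and the longjobs queue below are made up for illustration):

    [app:main]
    start_job_runners = drmaa
    default_cluster_job_runner = drmaa://-q srpipeline -P pipeline/

    [galaxy:tool_runners]
    # hypothetical: send one tool to a different queue
    bowtie_wrapper = drmaa://-q longjobs -P pipeline/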

Marina


On 19/05/2011 12:54, Leandro Hermida wrote:

Hi Marina,

Thank you so much for your help; LSF DRMAA is working fine with Galaxy
now.


One final problem, though: if I set the env var LSB_DEFAULTQUEUE to a
specific queue, the Galaxy server appears in the log to submit the job, but
it never seems to get executed, and looking in LSF with bjobs -u all we
don't see any Galaxy-submitted jobs, which is strange.  Is there something
I am not setting up right with that particular queue?


best,
Leandro





--
The Wellcome Trust Sanger Institute is operated by Genome Research 
Limited, a charity registered in England with number 1021457 and a 
company registered in England with number 2742969, whose registered 
office is 215 Euston Road, London, NW1 2BE. 

Re: [galaxy-dev] Specifying number of requested cores to Galaxy DRMAA

2011-05-19 Thread Leandro Hermida
Hi Louise-Amelie,

Thank you for the post reference, this is exactly what I was looking for.
For us, for example, when I want to execute a tool that is a Java command,
the JVM will typically use multiple cores as it runs.  You said that with
TORQUE, jobs will crash when there aren't enough resources available at
submission time.  I wonder if you can do the same thing we have done here
with LSF?  With LSF you can configure a maximum server load for each node,
and if the submitted jobs push the node load above this threshold (e.g. more
cores requested than available), LSF will temporarily suspend jobs (using
some kind of heuristics) so that the load stays below the threshold, and
unsuspend them as resources become available.  So for us things will just
run slower when we cannot pass the requested number of cores to LSF.
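For the curious, the LSF side of this usually hangs off load thresholds. A
sketch of the kind of lsb.hosts stanza involved (column layout as in LSF's
stock template; the 0.7 one-minute load threshold is just an illustration),
with the actual mid-run suspension governed by per-queue suspending
conditions in lsb.queues:

    Begin Host
    HOST_NAME   MXJ   r1m   pg   ls   tmp   DISPATCH_WINDOW   # Keywords
    default      !    0.7    -    -    -    ()
    End Host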

I would think maybe there is a way with TORQUE to achieve the same thing,
so jobs don't crash when more resources are requested than are available?

regards,
Leandro

2011/5/19 Louise-Amélie Schmitt louise-amelie.schm...@embl.de

  Hi,

 In a previous message, I explained what I did to multithread certain jobs;
 perhaps you can modify the corresponding files for drmaa in a similar way:

 On 04/26/2011 11:26 AM, Louise-Amélie Schmitt wrote:

 Just one little fix on line 261:
 261 if ( len(l) > 1 and l[0] == job_wrapper.tool.id ):

 Otherwise it pathetically crashes when non-multithreaded jobs are
 submitted. Sorry about that.

 Regards,
 L-A

 On Tuesday, 19 April 2011 at 14:33 +0200, Louise-Amélie Schmitt wrote:

  Hello everyone,

 I'm using TORQUE with Galaxy, and we noticed that if a tool is
 multithreaded, the number of needed cores is not communicated to pbs,
 leading to job crashes if the required resources are not available when
 the job is submitted.

 Therefore I modified the code a little, as follows, in
 lib/galaxy/jobs/runners/pbs.py:

 256 # define PBS job options
 257 attrs.append( dict( name = pbs.ATTR_N, value = str( "%s_%s_%s" % ( job_wrapper.job_id, job_wrapper.tool.id, job_wrapper.user ) ) ) )
 258 mt_file = open('tool-data/multithreading.csv', 'r')
 259 for l in mt_file:
 260     l = string.split(l)
 261     if ( l[0] == job_wrapper.tool.id ):
 262         attrs.append( dict( name = pbs.ATTR_l, resource = 'nodes', value = '1:ppn='+str(l[1]) ) )
 263         attrs.append( dict( name = pbs.ATTR_l, resource = 'mem', value = str(l[2]) ) )
 264         break
 265 mt_file.close()
 266 job_attrs = pbs.new_attropl( len( attrs ) + len( pbs_options ) )

 (sorry it didn't come out very well due to line breaking)
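For convenience, here is the same patch with the "line 261" fix above
folded in (a sketch, not a tested drop-in: attrs, job_wrapper and
pbs_options come from the surrounding pbs.py code, and pbs is the
pbs_python binding it already imports):

    # name the PBS job after the Galaxy job id, tool id and user
    attrs.append( dict( name = pbs.ATTR_N,
                        value = str( "%s_%s_%s" % ( job_wrapper.job_id, job_wrapper.tool.id, job_wrapper.user ) ) ) )
    # look the tool up in the tab-separated multithreading table
    mt_file = open( 'tool-data/multithreading.csv', 'r' )
    for l in mt_file:
        l = l.split()  # whitespace-split: [ tool_id, threads, mem ]
        if len( l ) > 1 and l[0] == job_wrapper.tool.id:
            # request one node with the needed processors, plus the memory limit
            attrs.append( dict( name = pbs.ATTR_l, resource = 'nodes', value = '1:ppn=' + str( l[1] ) ) )
            attrs.append( dict( name = pbs.ATTR_l, resource = 'mem', value = str( l[2] ) ) )
            break
    mt_file.close()
    job_attrs = pbs.new_attropl( len( attrs ) + len( pbs_options ) )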

 The csv file contains a list of the multithreaded tools, each line
 containing:
 tool id\tnumber of threads\tmemory needed\n
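For example (columns are tab-separated; these tool ids and values are made
up for illustration, and the memory column should use a unit that PBS's mem
resource understands):

    bowtie_wrapper	4	8gb
    bwa_wrapper	8	16gb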

 And it works fine: the jobs wait for their turn properly, but the
 information is duplicated. Perhaps there would be a way to include
 something similar in Galaxy's original code (if it is not already the
 case; I may not be up-to-date) without duplicating data.

 I hope that helps :)

 Best regards,
 L-A




 On 05/19/2011 12:03 PM, Leandro Hermida wrote:

 Hi,

 When Galaxy is configured to use the DRMAA job runner, is there a way for a
 tool to tell DRMAA the number of cores it would like to request? The
 equivalent of bsub -n X in LSF, where X is the minimum number of cores to
 have available on the node.
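 If the lsf-drmaa binding passes its native specification through the same
 way as the -q/-P flags shown earlier in this digest, something like the
 following universe_wsgi.ini sketch might work as an instance-wide default;
 whether -n is honoured there is an assumption worth testing:

     default_cluster_job_runner = drmaa://-n 4 -q srpipeline/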

 best,
 leandro






Re: [galaxy-dev] wait thread: 1002:Not enough memory. error after enabling DRMAA

2011-05-19 Thread Mariusz Mamoński
Hi,

 --

 Message: 13
 Date: Wed, 18 May 2011 17:15:06 +0200
 From: Leandro Hermida soft...@leandrohermida.com
 To: Galaxy Dev galaxy-...@bx.psu.edu
 Subject: [galaxy-dev] wait thread: 1002:Not enough memory. error after enabling DRMAA
 Message-ID: BANLkTi=J4-ct04U=8dm6ra=sjcr_ede...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 Hi all,

 I enabled DRMAA on my test Galaxy server installation and in the server
 startup output I get the following  strange E #14ca [     1.10]  * wait


are you using LSF?


 thread: 1002:Not enough memory. lines after loading the job manager:

 ...
 galaxy.jobs.runners.local INFO 2011-05-18 17:10:23,025 starting workers
 galaxy.jobs.runners.local DEBUG 2011-05-18 17:10:23,026 5 workers ready
 galaxy.jobs DEBUG 2011-05-18 17:10:23,026 Loaded job runner:
 galaxy.jobs.runners.local:LocalJobRunner
 galaxy.jobs.runners.drmaa DEBUG 2011-05-18 17:10:23,130 3 workers ready
 galaxy.jobs DEBUG 2011-05-18 17:10:23,130 Loaded job runner:
 galaxy.jobs.runners.drmaa:DRMAAJobRunner
 galaxy.jobs INFO 2011-05-18 17:10:23,131 job manager started
 E #14ca [     0.00]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.02]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.05]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.07]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.10]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.12]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.15]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.17]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.20]  * wait thread: 1002:Not enough memory.
 galaxy.jobs INFO 2011-05-18 17:10:23,373 job stopper started
 E #14ca [     0.22]  * wait thread: 1002:Not enough memory.
 galaxy.sample_tracking.external_service_types DEBUG 2011-05-18 17:10:23,383
 Loaded external_service_type: Simple unknown sequencer 1.0.0
 galaxy.sample_tracking.external_service_types DEBUG 2011-05-18 17:10:23,386
 Loaded external_service_type: Applied Biosystems SOLiD 1.0.0
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,400 Enabling 'admin'
 controller, class: AdminGalaxy
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,402 Enabling 'async'
 controller, class: ASync
 E #14ca [     0.25]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.27]  * wait thread: 1002:Not enough memory.
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,428 Enabling 'dataset'
 controller, class: DatasetInterface
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,432 Enabling 'error'
 controller, class: Error
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,436 Enabling
 'external_service' controller, class: ExternalService
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,437 Enabling
 'external_services' controller, class: ExternalServiceController
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,439 Enabling 'forms'
 controller, class: Forms
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,441 Enabling 'history'
 controller, class: HistoryController
 E #14ca [     0.30]  * wait thread: 1002:Not enough memory.
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,460 Enabling 'library'
 controller, class: Library
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,461 Enabling
 'library_admin' controller, class: LibraryAdmin
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,462 Enabling
 'library_common' controller, class: LibraryCommon
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,475 Enabling 'mobile'
 controller, class: Mobile
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,478 Enabling 'page'
 controller, class: PageController
 E #14ca [     0.32]  * wait thread: 1002:Not enough memory.
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,482 Enabling
 'request_type' controller, class: RequestType
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,483 Enabling 'requests'
 controller, class: Requests
 E #14ca [     0.35]  * wait thread: 1002:Not enough memory.
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,508 Enabling
 'requests_admin' controller, class: RequestsAdmin
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,509 Enabling
 'requests_common' controller, class: RequestsCommon
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,511 Enabling 'root'
 controller, class: RootController
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,514 Enabling 'tag'
 controller, class: TagsController
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,519 Enabling
 'tool_runner' controller, class: ToolRunner
 E #14ca [     0.37]  * wait thread: 1002:Not enough memory.
 E #14ca [     0.40]  * wait thread: 1002:Not enough memory.
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,558 Enabling 'tracks'
 controller, class: TracksController
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,562 Enabling
 'ucsc_proxy' controller, class: UCSCProxy
 galaxy.web.framework.base DEBUG 2011-05-18 17:10:23,564 

Re: [galaxy-dev] Specifying number of requested cores to Galaxy DRMAA

2011-05-19 Thread Louise-Amélie Schmitt

Hi again Leandro

Well I might not have been really clear, perhaps I should have re-read 
the mail before posting it :)


The thing is, it was not an issue of Torque starting jobs when there
were not enough resources available, but rather of it believing the needed
resources for each job were fewer than they actually were (e.g. always
assuming the jobs were single-threaded even if the actual tools needed
more than one core). If Torque is properly notified of the needed
resources, it will dispatch jobs or make them wait accordingly (since it
knows the nodes' limits and load), like your LSF does.


This hack is not very sexy, but it just notifies Torque of the cores
needed by every multithreaded tool, so it doesn't run a multithreaded
job when there's only one core available on the chosen node.


Hope that helps :)

Regards,
L-A



Re: [galaxy-dev] Specifying number of requested cores to Galaxy DRMAA

2011-05-19 Thread Leandro Hermida
Hi Louise,

I see, thank you for the response; maybe there was some confusion. The
feature I was trying to explain with LSF is that you *don't* need to tell it
the required resources for a job, and it will still be able to run all the
submitted jobs on a node without crashing, even if the jobs submitted need
e.g. 10 more cores than are available (that is, 10 more cores than LSF
thought they needed).  LSF will just temporarily suspend jobs mid-run on a
node to keep the load down, but nothing will ever crash even if you are
running jobs that require 20 CPUs and you only have 2.  I thought maybe
there was a way to do this with TORQUE.  If LSF or TORQUE are explicitly
passed the resources needed, then they will never need to temporarily
suspend anything, because they will pick a node with those resources free.
That being said, your method is more efficient for this reason: it will
pick the right node with the cores available instead of picking a node with
maybe just one core available and then running the multithreaded job more
slowly because it has to periodically suspend it as it runs.

Also, I wonder, do you run any Java command-line tools via Galaxy? I can't
seem to find out from the JVM exactly how many cores it needs during
execution, or how to limit it to a certain maximum; it just jumps around in
CPU usage from 50% to over 400%.
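Two knobs that may help, sketched as assumptions rather than a tested
recipe: pinning the JVM to specific cores with taskset, and capping the
parallel GC worker threads, which are a common source of those multi-core
spikes (mytool.jar is a placeholder):

    # pin the JVM to two cores and limit the parallel GC threads
    taskset -c 0,1 java -XX:ParallelGCThreads=2 -jar mytool.jar

Application-level thread pools still need whatever tool-specific option
controls them.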

regards,
Leandro


Re: [galaxy-dev] Specifying number of requested cores to Galaxy DRMAA

2011-05-19 Thread Louise-Amélie Schmitt

Hi again

Oh I see... Not the best way to deal with the resources indeed, but
better than nothing.


I do run Java tools via Galaxy but I haven't paid attention to this 
issue, so I can't really help. It's a homemade tool that only has one 
class so I guess it's not worth the effort in my case. But if you find 
the answer, I'd be interested too.


Good luck,
L-A


[galaxy-dev] select multiple tool input parameter bugs?

2011-05-19 Thread Leandro Hermida
I have the following tool input parameter:

<param name="columns" type="select" multiple="true" label="Select which columns to output">
  <option>SIZE</option>
  <option>ES</option>
  <option selected="true">NES</option>
  <option>NOM p-val</option>
  <option selected="true">FDR q-val</option>
  <option>FWER p-val</option>
  <option selected="true">RANK AT MAX</option>
  <option>LEADING EDGE</option>
</param>

It displays a single-select drop-down menu; is this a bug, or did I
configure something wrong?

To check, I added display="checkboxes" to the param above, and now it
displays a multi-select, but when the form loads everything comes up
checked; it doesn't obey any of the selected="true" attributes. Is this
also a bug, or must it be done a different way?
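One thing worth trying, as an assumption rather than a confirmed fix: give
each option an explicit value attribute, since Galaxy matches selections by
option value, e.g.:

    <param name="columns" type="select" multiple="true" display="checkboxes"
           label="Select which columns to output">
      <option value="SIZE">SIZE</option>
      <option value="ES">ES</option>
      <option value="NES" selected="true">NES</option>
      <option value="NOM p-val">NOM p-val</option>
      <option value="FDR q-val" selected="true">FDR q-val</option>
      <option value="FWER p-val">FWER p-val</option>
      <option value="RANK AT MAX" selected="true">RANK AT MAX</option>
      <option value="LEADING EDGE">LEADING EDGE</option>
    </param>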

regards,
Leandro

Re: [galaxy-dev] AttributeError: 'list' object has no attribute 'missing_meta' when doing multiple=true input datasets

2011-05-19 Thread Leandro Hermida
Hi again,

I tried changing the format to txt and tabular which I have other datasets
in my history and still the same error and stack trace in Galaxy.

Is it possible at all to have a multi-select of datasets as an input
parameter?

best,
Leandro

On Thu, May 19, 2011 at 6:59 PM, Leandro Hermida soft...@leandrohermida.com
 wrote:

 Hi Galaxy developers,

 Something seems maybe to be wrong with the format="html" type... I forgot
 to mention before that my tool input param has the format="html" attribute:

 <param type="data" multiple="true" format="html" name="input1" />

 In another tool I have, the output is format="html", and this works and
 displays in Galaxy just fine. I would like to use multiple of these output
 datasets in my history as the input for this other tool, but something
 seems to go wrong when you try to do this?

 a bit lost,
 Leandro


 On Thu, May 19, 2011 at 3:30 PM, Leandro Hermida 
 soft...@leandrohermida.com wrote:

 Hi,

 I have a tool where the input is multi-select of datasets, e.g.:

 <param type="data" multiple="true" name="input1" />

 I tested it to see what it would pass to my command and I get the
 following debug page and error in Galaxy:

 AttributeError: 'list' object has no attribute 'missing_meta'

 The last part of the stack trace looks like:

   validator.validate( value, history )
 Module galaxy.tools.parameters.validation:185 in validate

 history  <galaxy.model.History object at 0xb8ea190>
 self     <galaxy.tools.parameters.validation.MetadataValidator object at 0xb8e6250>
 value    [<galaxy.model.HistoryDatasetAssociation object at 0xa20>,
           <galaxy.model.HistoryDataset ... Association object at 0xb8ea210>]

   if value and value.missing_meta( check = self.check, skip = self.skip ):
 AttributeError: 'list' object has no attribute 'missing_meta'

 What am I doing wrong?
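 For anyone digging in: the trace shows MetadataValidator.validate()
 receiving the whole list that a multiple="true" data param produces, and
 calling missing_meta() on it as if it were a single dataset. A hypothetical
 defensive rewrite of that method (a sketch against the file named in the
 trace, not a committed fix, and assuming the validator signals failure by
 raising ValueError with its configured message):

     # lib/galaxy/tools/parameters/validation.py (sketch)
     def validate( self, value, history=None ):
         # multiple="true" data params hand us a list of HDAs, not one
         values = value if isinstance( value, list ) else [ value ]
         for v in values:
             if v and v.missing_meta( check = self.check, skip = self.skip ):
                 raise ValueError( self.message )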

 regards,
 Leandro




Re: [galaxy-dev] Select first/last N rows from grouped tabular files (e.g. top BLAST hits)

2011-05-19 Thread Peter Cock
On Thu, May 19, 2011 at 7:33 PM, madduri gal...@ci.uchicago.edu wrote:
 I wonder if somebody can give me more context around this issue...


On 3rd May I emailed IBX about their Galaxy install and one of
the (in-house) tools mentioned on the workflow image here:
https://ibi.uchicago.edu/resources/galaxy/index.html

I recognised the NCBI BLAST+ tools, but the "Filter Top Blast
Results" tool was new to me, and I asked what it did and whether it or
any of the other IBX tools would be available at the Galaxy Tool Shed:
http://community.g2.bx.psu.edu/

I had a reply from Alex Rodriguez (iBi/CI, University of Chicago)
that they haven't put any of the wrappers on the Galaxy tool
shed yet, as they are still being worked on. The IBI system
assigned the number [Galaxy #13918].

This thread, "Select first/last N rows from grouped tabular
files (e.g. top BLAST hits)", could have similarities to the
IBI "Filter Top Blast Results" tool, so I forwarded the email
to the IBI galaxy email address to encourage you (e.g. Alex)
to comment on the thread. The IBI system assigned the
number [Galaxy #14246].

Peter


[galaxy-dev] restarting galaxy while tools/jobs are running

2011-05-19 Thread Shantanu Pavgi

If we restart the Galaxy server, will it disturb Galaxy tool jobs that are
running? E.g. I want to restart the Galaxy server while a user is running a
bowtie job/process. Will Galaxy lose any context of this job/process after
the restart?

--
Thanks,
Shantanu. 


Re: [galaxy-dev] restarting galaxy while tools/jobs are running

2011-05-19 Thread Nate Coraor
Shantanu Pavgi wrote:
 
 If we restart the Galaxy server, will it disturb Galaxy tool jobs that are
 running? E.g. I want to restart the Galaxy server while a user is running a
 bowtie job/process. Will Galaxy lose any context of this job/process after
 the restart?

Hi Shantanu,

If you are using the local job runner, yes.  If you are using a cluster
job runner, no.
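
A sketch of the relevant universe_wsgi.ini settings for the cluster case
(option names from 2011-era Galaxy; that track_jobs_in_database is what
lets the job manager reattach to still-running cluster jobs after a restart
is an assumption worth verifying for your version):

    start_job_runners = drmaa
    default_cluster_job_runner = drmaa:///
    # keep job state in the database so a restart can recover running jobs
    track_jobs_in_database = True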

--nate

 