Re: [galaxy-dev] MySQL server has gone away error

2011-01-21 Thread Marina Gourtovaia
Hi Nate, It's set to 7200. Changing it to 200 or even to -1 (!) does not make any difference. Marina. On 21/01/2011 15:31, Nate Coraor wrote: Marina Gourtovaia wrote: Hello, My Galaxy instance suffers from a 'MySQL server has gone away' error. This error appears if the last web request was more
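
The thread is about MySQL's wait_timeout closing idle connections that Galaxy then tries to reuse. A minimal sketch of the usual mitigation, assuming the universe_wsgi.ini options of that era; the connection string and recycle value below are illustrative, not taken from the thread:

    # universe_wsgi.ini -- recycle pooled connections before MySQL's
    # wait_timeout can close them server-side (values are illustrative)
    database_connection = mysql://galaxy_user:password@localhost/galaxy_db
    database_engine_option_pool_recycle = 3600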

[galaxy-dev] Galaxy integration with LSF: seg fault

2011-02-02 Thread Marina Gourtovaia
Hello, I've set up Galaxy to use LSF. My first job failed because Galaxy submitted it to the default queue, which was wrong in my case. However, Galaxy gracefully survived the failure; I was able to get the job number from the console output and figure out what went wrong. Next time I

Re: [galaxy-dev] SLURM queue

2011-02-09 Thread Marina Gourtovaia
Try adding export SGE_ROOT=/directory where drmaa sits, or one or two levels higher. I had a similar problem compiling a Perl wrapper for drmaa (LSF). I found this SGE_ROOT used in the makefile for generating a SWIG Perl wrapper for the drmaa.h header. And also there were some assumptions about the
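
A minimal sketch of the suggested workaround, assuming a build driven by the wrapper's makefile; the install path below is only an example:

    # point SGE_ROOT at the directory where the DRMAA library sits
    # (or one or two levels higher) before building the Perl wrapper
    export SGE_ROOT=/usr/local/lsf-drmaa
    make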

Re: [galaxy-dev] [galaxy-user] Galaxy does not find my executables

2011-02-21 Thread Marina Gourtovaia
Hi, In a bash shell I define the PATH (needed both by Galaxy, to find the right version of Python, and by tools that run on a cluster, to find the executables) and some other global variables on the command line. The cluster jobs (LSF) inherit all these values. This is my line
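
The actual command line is cut off by the archive; a hedged sketch of the kind of line described, with hypothetical paths, might look like:

    # set PATH (and any other globals) before starting Galaxy;
    # LSF jobs submitted by Galaxy inherit these values
    export PATH=/opt/python2.6/bin:/opt/galaxy_tools/bin:$PATH
    sh run.sh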

Re: [galaxy-dev] [galaxy-user] Galaxy does not find my executables

2011-02-21 Thread Marina Gourtovaia
On 21/02/2011 16:20, Nate Coraor wrote: Marina Gourtovaia wrote: Hi, In a bash shell I define the PATH (needed both by Galaxy, to find the right version of Python, and by tools that run on a cluster, to find the executables) and some other global variables on the command line. The cluster jobs

Re: [galaxy-dev] Launching multiple jobs using one tool form with multiple selected datasets

2011-04-15 Thread Marina Gourtovaia
Hi, Our production pipeline does this on LSF through job arrays. It would be good if Galaxy supported job arrays. Marina. On 15/04/2011 13:14, Leandro Hermida wrote: Hi everyone, I was wondering what would be the way in Galaxy to program the following: - User clicks on a tool and form is
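
Galaxy did not support job arrays at the time; for comparison, a plain LSF job-array submission (job name, size, and script are illustrative) looks roughly like:

    # submit a 10-element LSF job array; each element sees its own LSB_JOBINDEX
    bsub -J "map_reads[1-10]" ./map_reads.sh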

[galaxy-dev] typo in galaxy-dist/tool_conf.xml?

2011-05-06 Thread Marina Gourtovaia
Line 298 of galaxy-dist/tool_conf.xml in the zipped distribution downloaded yesterday: <tool file="indels/indel_table.xml" /> should be <tool file="indels/indel_table.xml" /> ? Marina -- The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England

Re: [galaxy-dev] wait thread: 1002:Not enough memory. error after enabling DRMAA

2011-05-18 Thread Marina Gourtovaia
Hi Leandro, From our previous correspondence I remember that you are using LSF. This is some sort of bug in the latest drmaa-lsf binding. I moved to the previous version and everything is OK for me. Marina. On 18/05/2011 16:15, Leandro Hermida wrote: Hi all, I enabled DRMAA on my test Galaxy

Re: [galaxy-dev] wait thread: 1002:Not enough memory. error after enabling DRMAA

2011-05-18 Thread Marina Gourtovaia
Yes. M. On 18/05/2011 18:23, Leandro Hermida wrote: Hi Marina, Thanks... so are you using 1.0.3 instead of 1.0.4? best, leandro On Wed, May 18, 2011 at 6:56 PM, Marina Gourtovaia m...@sanger.ac.uk wrote: Hi Leandro From our previous correspondence I

Re: [galaxy-dev] issue with LSB_DEFAULTQUEUE

2011-05-19 Thread Marina Gourtovaia
Hi Leandro, I do not think the binding is env-var aware. I used the following string in the Galaxy drmaa configuration in universe_wsgi.ini: default_cluster_job_runner = drmaa://-q srpipeline -P pipeline/ (-q for the queue and -P for the project). Marina. On 19/05/2011 12:54, Leandro Hermida wrote:
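
For reference, a sketch of the same pattern in universe_wsgi.ini, assuming the [galaxy:tool_runners] override section of that era; the tool id and queue names are hypothetical, and the native options live in the DRMAA URL rather than in LSB_DEFAULTQUEUE:

    # universe_wsgi.ini -- native LSF options embedded in the DRMAA runner URL
    default_cluster_job_runner = drmaa://-q srpipeline -P pipeline/

    [galaxy:tool_runners]
    # hypothetical per-tool override onto a different queue
    bwa_wrapper = drmaa://-q long -P pipeline/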

Re: [galaxy-dev] drmaa://native/ : native options are ignored

2011-06-20 Thread Marina Gourtovaia
Hi, default_cluster_job_runner = drmaa://-q srpipeline -P pipeline/ works for me on LSF, so your syntax seems to be correct. Assuming that -l mem=4gb:nodes=1:ppn=6 works the way you expect when you start the jobs on your cluster from the shell, read on... Bearing in mind that the value of the
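
A sketch of what the questioner appears to be aiming for, assuming a PBS/Torque-style cluster; the resource string is copied from the thread, and whether it is honoured depends on the DRMAA library behind the runner:

    # universe_wsgi.ini -- native resource options passed through the DRMAA URL
    default_cluster_job_runner = drmaa://-l mem=4gb:nodes=1:ppn=6/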