On Fri, Jul 29, 2011 at 1:01 AM, Ka Ming Nip km...@bcgsc.ca wrote:
My jobs have this problem when the command for the tool is wrapped by the
stderr wrapper script.
Ka Ming
Which stderr wrapper script? I think there is more than one...
I've also had this error message (I'm currently working
Hi
In my Galaxy instance, every job I submit goes into the queued
state.
If I restart the server, the previously submitted jobs change to the running
state, but newly submitted jobs again go into the queued state.
I am at a loss to understand this behaviour of Galaxy and unable to
Hi all,
I've run into some file permissions problems as part of using the same
mapped directory
on both the Galaxy server and our cluster. In the process I wrote the
following patch, which
fixes a bug where Galaxy seems to leave the job in the
pending state:
galaxy.jobs INFO
Hi all,
In my recent email I mentioned problems with our setup and mapped drives. I
am running a test Galaxy on a server under a CIFS mapped drive. If I map the
drive with noperms then things seem to work with submitting jobs to the cluster
etc, but that doesn't seem secure at all. Mounting with
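For context, a less permissive alternative to noperms that is sometimes used with mount.cifs is to map ownership and modes explicitly on the mount. An /etc/fstab sketch, where the server, share, mount point, credentials file, and account names are all placeholders:

```
# Hypothetical fstab entry; uid/gid map the mounted files to the galaxy account,
# and file_mode/dir_mode keep them group-accessible but not world-writable.
//fileserver/galaxy  /mnt/galaxy  cifs  credentials=/etc/galaxy-cifs.cred,uid=galaxy,gid=galaxy,file_mode=0660,dir_mode=0770  0  0
```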
Dear Sir,
We have a program written in Perl. It runs normally in a Linux
environment, saving its output to a result file.
After adding it to the Galaxy system, we found that it produces the right
result file in the directory /var/we/galaxy-dist/database/files/000 , but the
result file
Hello everyone,
I'm working on a script that uploads files and launches workflows on
them, but I keep getting errors that appear more or less randomly when
the display() and submit() functions are called. In a nutshell, there is
a 1/3 chance the calls fail this way.
Nevertheless, the actions
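Since the failures are transient and strike roughly one call in three, a small retry wrapper around the flaky calls is a common stopgap. This is only a sketch; the display() and submit() functions belong to the poster's script, so a generic callable is wrapped here:

```python
import time

def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(); if it raises, retry up to `attempts` times in total,
    sleeping `delay` seconds between tries, then re-raise the last error."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if i < attempts - 1:
                time.sleep(delay)
    raise last_exc
```

Usage would be something like `with_retries(lambda: display(dataset_id))`, with the names adapted to the script in question.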
Hello Glen,
Thanks very much for finding this issue. It's been corrected in change set
5842:bb51baa20151, which should be available in the distribution within the
next few weeks. It is currently available in our development repo at
https://bitbucket.org/galaxy/galaxy-central/ if you need it
Hello,
Question for galaxy maintainers: have you encountered situations where BWA jobs
run 'forever' (for days)?
A little digging shows that it's the bwa sampe step, and a SEQanswers thread
mentions it's somewhat common:
http://seqanswers.com/forums/showthread.php?t=11652
The FTP upload module fails when a file has a comma in its name: for
example, test.bam works, but a copy named test,test.bam fails.
Cheers,
Ilya
Ilya Chorny Ph.D.
Bioinformatics - Intern
icho...@illumina.com
858-202-4582
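Until the upload bug is fixed, one client-side workaround is to rename files before uploading them. A tiny sketch; the choice of replacement character is arbitrary:

```python
import re

def sanitize_filename(name):
    # Replace commas (the character the FTP upload reportedly chokes on)
    # with underscores before uploading.
    return re.sub(r",", "_", name)

print(sanitize_filename("test,test.bam"))  # → test_test.bam
```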
It was the one on the wiki page.
Ka Ming
From: Peter Cock [p.j.a.c...@googlemail.com]
Sent: July 29, 2011 2:42 AM
To: Ka Ming Nip
Cc: Edward Kirton; galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Job output not returned from cluster
On Fri, Jul 29,
Chris Fields wrote, On 07/29/2011 12:35 PM:
On Jul 29, 2011, at 11:00 AM, Assaf Gordon wrote:
Question for galaxy maintainers: have you encountered situations where BWA
jobs run 'forever' (for days) ?
...
I'm wondering if this is common enough to justify adding -A to the
wrapper, or is it
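The flag under discussion appears to be bwa sampe's -A, which (in the bwa releases of that era, if memory serves) disables the insert-size estimation step where these jobs hang; with estimation disabled, you would supply a maximum insert size yourself via -a. A hypothetical invocation, with all file names as placeholders:

```
bwa sampe -A -a 500 ref.fa reads_1.sai reads_2.sai reads_1.fq reads_2.fq > aln.sam
```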
On Jul 29, 2011, at 12:44 PM, Assaf Gordon wrote:
Chris Fields wrote, On 07/29/2011 12:35 PM:
On Jul 29, 2011, at 11:00 AM, Assaf Gordon wrote:
Question for galaxy maintainers: have you encountered situations where BWA
jobs run 'forever' (for days) ?
...
I'm wondering if this is common
Thanks for your comments, fellas.
Permissions would certainly cause this problem, but that's not the cause for
me.
Most wrappers just serve to redirect stderr, so I don't think it's the
wrapper script itself, but the stdout/stderr files are part of the problem.
The error message is thrown in
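As a concrete (hypothetical) illustration of the kind of wrapper being discussed, a minimal stderr-redirecting wrapper might look like the sketch below. The key detail is that it must propagate the wrapped tool's exit code; swallowing it is one way jobs can appear stuck or misreported:

```shell
# Minimal stderr-redirecting wrapper (illustrative; names are placeholders).
cat > stderr_wrapper.sh <<'EOF'
#!/bin/sh
"$@" 2> stderr.log
rc=$?
cat stderr.log >&2   # pass the captured stderr along
exit $rc             # preserve the wrapped tool's exit status
EOF
chmod +x stderr_wrapper.sh

# Demo: the wrapped command's exit status (3) survives the wrapper.
./stderr_wrapper.sh sh -c 'echo out; echo err >&2; exit 3' || echo "wrapper exit code: $?"
```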
We are using an SGE cluster with our Galaxy install. We have specified resource
and run-time limits for certain tools using tool-specific DRMAA URL
configuration, e.g.:
- run-time (h_rt, s_rt)
- memory (vf, h_vmem).
This helps the scheduler submit jobs to an appropriate node and also
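For reference, a sketch of what such tool-specific entries can look like in universe_wsgi.ini; the tool id, queue name, and limit values here are hypothetical, and the native SGE options sit between drmaa:// and the trailing slash:

```
[galaxy:tool_runners]
# hypothetical tool id and limits
bwa_wrapper = drmaa://-l h_rt=08:00:00,s_rt=07:50:00 -l h_vmem=8G,vf=8G -q galaxy/
```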
Hi Shantanu,
I am also using an SGE cluster and the DRMAA runner for my Galaxy install. I am
also having the same issue for jobs that were killed.
How did you define the run-time or memory/runtime configurations in your DRMAA
URLs?
I had to add -w n in the DRMAA URLs in order for my jobs to be
When I run galaxy as the actual user using the code I committed to my fork, I
run into a problem with dataset_*.dat files that have associated data wherein
the associated data files are copied from the job_working_directory into the
files directory. That directory is owned by the actual user
On Jul 29, 2011, at 4:13 PM, Ka Ming Nip wrote:
Hi Shantanu,
I am also using an SGE cluster and the DRMAA runner for my Galaxy install. I am
also having the same issue for jobs that were killed.
How did you define the run-time or memory/runtime configurations in your DRMAA
URLs?
I had to add
On Jul 29, 2011, at 8:03 PM, ambarish biswas wrote:
With Regards,
Ambarish Biswas,
University of Otago
Department of Biochemistry,
Dunedin, New Zealand,
Tel: +64(22)0855647
Fax: +64(0)3 479 7866
Hi, have you tested the drmaa://-q galaxy -V/ option yet?
Here as it
Hi,
I am setting up SGE in our Galaxy mirror. One problem I have is that I cannot
export the environment variables of the specific user running the Galaxy service.
On the command line, I did this with qsub -V script.sh, or by adding the line
#$ -V
in script.sh
I tried to change
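A minimal sketch of the approach described above (the script name is a placeholder): embedding the directive in the job script, which is equivalent to passing -V on the qsub command line.

```shell
# Create a job script with the export-environment directive embedded.
# "#$ -V" inside the script has the same effect as "qsub -V script.sh".
cat > script.sh <<'EOF'
#!/bin/sh
#$ -V
echo "PATH inside the job: $PATH"
EOF
chmod +x script.sh
```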
Dear all,
I want to build a Galaxy tool to run an R script.
Do you know if such a tool or similar functionality already exists?
If you can share it with me, I would very much appreciate your help.
Thank you,
Bo Liu
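For what it's worth, wiring an R script into Galaxy is mostly a matter of a small tool XML wrapper. A minimal sketch, where the tool id, script name, and formats are all placeholders, and my_script.R is assumed to take its input and output paths as command-line arguments:

```xml
<tool id="run_r_script" name="Run R script">
  <!-- my_script.R is hypothetical; it should read args[1] and write args[2] -->
  <command>Rscript my_script.R $input $output</command>
  <inputs>
    <param name="input" type="data" format="tabular" label="Input dataset"/>
  </inputs>
  <outputs>
    <data name="output" format="tabular"/>
  </outputs>
</tool>
```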