Thank you Jennifer.
I am not sure if my previous email got garbled up...
Just to be sure, I specified the following in universe_wsgi.ini
default_cluster_job_runner = drmaa://-q default.q -V -v TMPDIR=/scratch/
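For reference, the part of the URL after drmaa:// is passed to the scheduler as a native specification, so at submission time it corresponds roughly to the following qsub invocation (a sketch; the job script name is hypothetical, and SGE's -q/-V/-v flags are the assumed mechanism):

```
# What the DRMAA native specification amounts to when SGE submits the job:
#   -q default.q         submit to the default.q queue
#   -V                   export the full submitting environment
#   -v TMPDIR=/scratch/  set TMPDIR in the job's environment
qsub -q default.q -V -v TMPDIR=/scratch/ job_script.sh
```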
When I start Galaxy, looking at the console log, this is what I see:
galaxy.jobs
Hi Jun,
I have managed to use something like
<outputs>
    <data format="fasta" name="output" label="#echo os.path.splitext(str($input.name))[0]#-ORF.fasta"/>
</outputs>
to display the wanted label for the dataset in the history.
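For anyone unfamiliar with the Cheetah syntax, the #echo ...# expression just strips the input dataset's extension and appends "-ORF.fasta". A plain-Python sketch of the same computation (the input name here is hypothetical, not one from the thread):

```python
import os.path

# Hypothetical dataset name standing in for $input.name.
input_name = "reads.fa"

# Mirror of the label expression: drop the extension, append "-ORF.fasta".
base = os.path.splitext(str(input_name))[0]
label = base + "-ORF.fasta"
print(label)  # prints "reads-ORF.fasta"
```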
However, when I applied the same code to another tool
Through toolshed.
On 14 August 2013 12:50, Bjoern Gruening bjoern.gruen...@gmail.com wrote:
Hi Moritz,
did you install bwa through the toolshed or manually?
Cheers,
Bjoern
Hey Folks,
Is there really nobody who can help Geert and me? That's quite
important to me right now.
Hi,
On my local Galaxy installation (updated to the latest version yesterday), it takes ages (hours!) to upload a fastq file. I attach a report from the computer on which Galaxy is installed. The fastq file is uploaded from
132.72.88.186. Initially there is an error message (the attached
On Wed, Aug 14, 2013 at 11:56 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
Hi Greg,
I'm hoping you (or Dave) can throw a little light on why the
NCBI BLAST+ nightly tests are not working again yet:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/949fa0294c0d
e.g. Fatal
Peter,
The problem with testing the installation of package tool dependencies,
especially but not limited to BLAST+, is that the time it takes to
compile them exceeds the timeout for the automated testing framework,
and so it terminates that build step. I am currently working on
enhancing
On Wed, Aug 14, 2013 at 2:32 PM, Dave Bouvier d...@bx.psu.edu wrote:
Peter,
The problem with testing the installation of package tool dependencies,
especially but not limited to BLAST+, is that the time it takes to compile
them exceeds the timeout for the automated testing framework, and so
Hi Greg,
I'm seeing something strange on a system running the August stable
release of galaxy-dist,
$ sudo -u galaxy hg head
changeset:   10393:d05bf67aefa6
branch:      stable
tag:         tip
user:        Nate Coraor n...@bx.psu.edu
date:        Mon Aug 12 11:55:41 2013 -0400
summary:
Peter,
Remnants from previously failed installation attempts seem the most
likely explanation, but I'll try to duplicate that situation locally and
see if there's any underlying issue.
--Dave B.
On 8/14/13 09:51:24.000, Peter Cock wrote:
Hi Greg,
I'm seeing something strange on a
Peter,
I've created a Trello card for tracking the status of this issue.
https://trello.com/c/32l2NZRn/1048-toolshed-investigate-possibility-of-recording-tool-dependencies-that-time-out-in-the-automated-testing-process
--Dave B.
On 8/14/13 09:40:03.000, Peter Cock wrote:
On Wed, Aug 14,
Also through toolshed, version 0.5.9-r16. An example error on our part is
(SamToBam):
Error extracting alignments from
(/galaxy/galaxy-dist/database/files/067/dataset_67090.dat), [samopen] SAM
header is present: 25 sequences.
Parse error at line 261924: sequence and quality are inconsistent
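That samtools message means a record's SEQ and QUAL fields have different lengths. A small sketch (not part of samtools; the function name is made up) that locates such records in a SAM file:

```python
def inconsistent_records(lines):
    """Return 1-based line numbers of SAM records where len(SEQ) != len(QUAL).

    This mismatch is what triggers samtools' "sequence and quality are
    inconsistent" parse error.
    """
    bad = []
    for lineno, line in enumerate(lines, start=1):
        if line.startswith("@"):      # header lines carry no SEQ/QUAL
            continue
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 11:          # not a complete alignment record
            continue
        seq, qual = fields[9], fields[10]
        # "*" means the field is absent, so no length check applies.
        if seq != "*" and qual != "*" and len(seq) != len(qual):
            bad.append(lineno)
    return bad
```

Running it over the open dataset file should flag the offending record (line 261924 in the report above).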
Hi
just to keep things up to date: I have the cluster up and running and jobs
are being submitted. The last problem I am facing is:
21: UCSC Main on Pig: refGene (chr18:1-61220071)
error
An error occurred with this dataset: The remote data source application
may be offline, please try again later.
Original Message
Subject: Re: how to add settings of TMPDIR=/scratch for sge/drmaa in
universe_wsgi.ini
Date: Tue, 13 Aug 2013 23:21:16 -0700
From: tin h tin6...@gmail.com
To: Jennifer Jackson j...@bx.psu.edu
CC: Galaxy Dev galaxy-...@bx.psu.edu
Thank you
On Aug 2, 2013, at 1:06 PM, Thon de Boer wrote:
I did some more investigation of this issue
I do notice that my 4-core, 8-slot VM has a load of 32 with only my
4 handler processes running (plus my web server), yet none of them is
getting more than 10% of the CPU.
There seems to
On Aug 9, 2013, at 11:53 AM, Seth Sims wrote:
Dear Nate,
Adding su - galaxy as the first line of the pre-start script seems to
work reasonably well. Also it looks like the line that sets the egg cache is
not working properly. My egg cache ends up being /tmp/${SERVER_NAME}_egg/
but
I don't think it's a memory issue (but what made you say that?), since each process is hardly using any memory. Although VIRT memory in top shows 2.7GB per python process, RES only ever reaches 250MB and I have a 16GB machine. (SWAP is only 4GB, but none of the swap is being used either.)
Hi, I was very excited to see the re-run option and the associate-with-paused-jobs option working very well. I was hoping to be able to re-run a job like this from the API, but cannot seem to find any API call that corresponds to this.
Dear Nate,
Actually... no, galaxy's home was set to a non-existent directory so
the working directory was being changed to the root of the file system.
However the script still seemed to work. I changed the script to use su -
galaxy -c like you show anyway. There seem to be no significant
On Tue, Apr 30, 2013 at 9:46 AM, Scott Hazelhurst
scott.hazelhu...@wits.ac.za wrote:
Below that is the output I get in the log. From the torque log it complains
that there is no default queue specified.
Hi Scott,
Today I was hit by what seems to be the same issue. In my case it was an
issue with
Hi, Richard,
I see you are looking in the main toolshed. I can't speak for fcarmia's one,
but it doesn't do any dependency installation. The 'digital_dge' one was
deprecated; I've now deleted it.
The differential count models package in the statistics section of the test
toolshed -
Hello gurus,
I made some slight progress...
If I specify a handlers section into the job_conf.xml file with:
<handlers>
    <handler id="main"/>
</handlers>
Then galaxy would at least start. However, it is not very functional; I
can't upload files or run jobs, it says:
An error occurred
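For comparison, a handlers section rarely stands alone; a minimal job_conf.xml sketch that also declares a runner plugin and a default destination (the local runner here is an assumption for illustration, not taken from the report):

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- Run jobs on the Galaxy server itself; swap in a DRMAA plugin for a cluster. -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="local">
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>
```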