Hi Nate,
Thanks for looking into this. I was wondering whether you have a workaround for
this problem. Thanks so much.
Best regards,
Chee Seng
-----Original Message-----
From: Nate Coraor [mailto:n...@bx.psu.edu]
Sent: Wednesday, September 07, 2011 9:53 PM
To: CHAN Chee Seng
Cc: Galaxy Dev
Good Morning,
I have had the following issue reported to me by one of our users and I have
confirmed the behaviour.
Hi Mike,
I'm looking at the new Galaxy at http://jic55666:8080/root and I have found a
problem.
If I import a dataset from the data libraries, then stuff runs OK.
If I
For what it's worth, I ran into an issue with the use of /tmp as well.
When merging a lot of BAM files, /tmp filled up and the merge failed.
To make matters worse, since STDERR is redirected and the exit status of
java is not checked, the item in the Galaxy history appeared OK. Though
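The failure mode described here can be sketched in a few lines: a child process fails, its stderr is swallowed, and unless the caller inspects the return code the step looks successful. In this sketch, `false` stands in for the failing java merge, and the `/scratch/tmp` path in the comment is a hypothetical larger scratch directory, not anything from the original report.

```python
import subprocess

# 'false' stands in for the failing java/Picard merge (hypothetical);
# its stderr is discarded, just as in the reported pipeline.
proc = subprocess.run(["false"], stderr=subprocess.DEVNULL)

# Without this check, the nonzero exit goes unnoticed and the
# history item appears green despite the failure.
if proc.returncode != 0:
    print("merge failed with exit status", proc.returncode)

# One possible mitigation (an assumption, not the project's fix): point
# the JVM at a larger scratch area so /tmp is not used for spill files:
#   java -Djava.io.tmpdir=/scratch/tmp -jar picard.jar MergeSamFiles ...
```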
Paniagua, Eric wrote:
Hi Nate,
Thanks for your answers. I will look into setting
set_metadata_externally=True.
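For reference, the option under discussion lived in Galaxy's main configuration file (universe_wsgi.ini at the time of this thread); a minimal sketch of the setting:

```ini
; universe_wsgi.ini -- run metadata detection in a separate external
; process instead of inside the Galaxy server process
[app:main]
set_metadata_externally = True
```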
I've observed no impact (suggesting someone did error handling properly), but
upload is the only tool I've tested it with so far. I'll be doing more
shortly, but going with
Mike Wallis wrote:
Hello,
I'm trying to get an instance of Galaxy working where the application server
- the web front end, as I understand it - is on a completely separate host from
the SGE cluster the back end runs on. Is there any way of setting up Galaxy
so that it uses ssh instead of
Ann Black wrote:
Thanks Nate!
What types of plans do you have for multiple clusters and do you have a
committed timeline?
Hi Ann,
Unfortunately, no committed timeline yet. The plan is to make it
possible to define many job targets, which could be different clusters
or the same cluster
Lance Parsons wrote:
For what it's worth, I ran into an issue with the use of /tmp as
well. When merging a lot of BAM files, /tmp filled up and the merge
failed. To make matters worse, since STDERR is redirected and the
exit status of java is not checked, the item in the Galaxy history
Bruno Zeitouni wrote:
Dear all,
I would like to use our own built-in color-space indexes for BWA with
Galaxy, but I have failed to do it.
I added the following line to the bwa_index_color.loc file in the
tool-data directory:
hg19 hg19 hg19 my_path_to/hg19.fa
but I don't see
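One detail worth checking (an assumption, since the original message is truncated before the symptom is described): Galaxy's .loc files are tab-delimited, so the four fields must be separated by literal tab characters, not spaces. With <TAB> marking a real tab, the entry would look like:

```
hg19<TAB>hg19<TAB>hg19<TAB>my_path_to/hg19.fa
```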
Hi Dan,
Sure, here's the example where I discovered the bug (data files are not
attached because of a 5MB limit on my email client; see
http://main.g2.bx.psu.edu/u/paniag/h/metadata-bug-example for a history with
the example dataset). The datatype is AffyBatch (or probably anything derived
Hi Dan,
Regarding the tool output actions, could you point me to any documentation or
additional good examples for handling tool output post-processing?
Thanks,
Eric
From: Daniel Blankenberg [d...@bx.psu.edu]
Sent: Monday, October 03, 2011 2:00 PM
To: Paniagua,
Hi Eric,
I was able to reproduce the error. We'll work on a fix for this, but, for now,
you can fix the metadata after uploading by clicking on the pencil icon and
clicking Auto-detect. Thanks for reporting this error.
Thanks for using Galaxy,
Dan
On Oct 3, 2011, at 2:34 PM, Paniagua,
Hi Eric,
Documentation for tool output actions is limited but available from
http://wiki.g2.bx.psu.edu/Admin/Tools/Tool%20Config%20Syntax#A.3Cactions.3E_tag_set
and the code is located at lib/galaxy/tools/parameters/output.py. The
LiftOver, Cut, BWA, fimo, tophat, ccat, srma, mosaik, bfast,
James Vincent wrote:
Hello,
I've read about a number of side projects that integrate iRODS or
other file handling things into Galaxy.
What are the chances Galaxy could be made to use iRODS as its primary
file store? Instead of storing all files in some local space on a
machine, Galaxy
Someone shared a history with me for the purpose of debugging their Galaxy
problem in our local Galaxy instance. I set the error_email_to to our bug
reporting email address, but I can't submit a bug report using the bug icon
in the history for datasets in an error state. The traceback is below.
The patches have been put into a pull request.
Lance
Nate Coraor wrote:
Lance Parsons wrote:
For what it's worth, I ran into an issue with the use of /tmp as
well. When merging a lot of BAM files, /tmp filled up and the merge
failed. To make matters worse, since STDERR is redirected and the
I am trying to set up a local instance.
When running Clip adapter sequences I get the following error:
fastx_clipper: Invalid quality score value (char 'J' ord 74 quality
value 41) on line 36
gzip: stdout: Broken pipe
I'm stuck on where to start troubleshooting this.
This is on groomed fastq data.
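The numbers in the error are consistent with a quality-encoding question: 'J' (ASCII 74) decodes to 41 under the phred+33 (Sanger) offset that groomed FASTQ uses, and 41 is above the ceiling some older FASTX-Toolkit builds accept. A quick check of the arithmetic (the diagnosis itself is an assumption to verify locally):

```python
# Decode the offending quality character under the two common offsets.
char = "J"                 # the character named in the error message
print(ord(char) - 33)      # 41 -- the value fastx_clipper reported (phred+33)
print(ord(char) - 64)      # 10 -- what a phred+64 decoding would give
```

Since 41 matches the reported value, the tool is decoding with the phred+33 offset; older FASTX-Toolkit releases rejected qualities above 40, so checking the installed toolkit version is one avenue to investigate (a suggestion, not a confirmed fix).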