I'm developing a barcode splitter, which will split barcodes of variable
length and then put them into the history for downstream analysis
(currently the FASTX barcode splitter only handles barcodes of the same
length, and the user has to download the data through an HTML link). The number of
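The matching step being described can be sketched roughly as follows (my own illustrative Python, not the tool's actual code): with variable-length barcodes, try longer barcodes first, so that a short barcode that happens to be a prefix of a longer one does not shadow it.

```python
def split_by_barcode(reads, barcodes):
    """Assign reads to samples by barcode prefix.

    reads: list of (name, sequence) tuples.
    barcodes: dict mapping sample name -> barcode string (lengths may differ).
    Returns a dict: sample -> list of (name, trimmed_sequence),
    plus an 'unmatched' bucket for reads matching no barcode.
    """
    # Longest barcodes first, so "AAA" cannot shadow "AAAC".
    ordered = sorted(barcodes.items(), key=lambda kv: -len(kv[1]))
    out = {sample: [] for sample in barcodes}
    out["unmatched"] = []
    for name, seq in reads:
        for sample, bc in ordered:
            if seq.startswith(bc):
                # Trim the barcode off before binning the read.
                out[sample].append((name, seq[len(bc):]))
                break
        else:
            out["unmatched"].append((name, seq))
    return out
```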
How are the output files being handled in split_var_length_barcodes_wrapper.py?
See http://wiki.g2.bx.psu.edu/Admin/Tools/Multiple%20Output%20Files for a
reference-- you're going to want to follow the bottom example there involving
$__new_file_path__ and the naming convention
I've created something similar myself. I've not put it on the toolshed
yet, as I have to test it further, but it seems to work as expected. See
code in attachment.
University of Antwerp
On 12/05/2011 12:47 PM, graham etherington (TSL) wrote:
I am running a tool which does not define the output file name as a parameter.
The tool runs, and I can see the output file in the history directory, but
not in the history frame.
How could I solve this problem? I should be able to see and download the
output file from the history, but I see
Just to make sure I've understood your question: the problem is that a tool
that you're trying to wrap doesn't provide a way to specify a particular output
filename? Take a look at the from_work_dir attribute of a data element.
Or are you asking how to define outputs in general? For
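For the first case, a minimal illustrative outputs section using from_work_dir might look like this (the tool output and file names here are made up, not taken from the thread):

```xml
<outputs>
  <!-- from_work_dir copies the named file out of the job's working
       directory into the history dataset Galaxy created for "output" -->
  <data name="output" format="txt" from_work_dir="tool_output.txt" />
</outputs>
```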
On Dec 4, 2011, at 5:39 PM, Ross wrote:
Just addressing AD/LDAP authentication: authentication is trivially, and (IMHO)
best, left to an external (e.g. Apache) proxy -- save yourself a lot of effort;
it's known to work well.
Lock down the paste process so it only talks to your
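A minimal sketch of that lockdown in universe_wsgi.ini (option names as I recall them from the Galaxy proxy documentation; verify against your release):

```ini
[server:main]
# Bind the paste server to localhost only, so browsers can
# reach Galaxy solely through the authenticating proxy
host = 127.0.0.1
port = 8080

[app:main]
# Trust the REMOTE_USER header set by the proxy
use_remote_user = True
remote_user_maildomain = example.org
```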
Are you using GATK version 1.3?
Another possibility is that you are missing an R library (e.g. 'ggplot2') that
is used by the VariantRecalibrator for building the plots. Start up an
interactive R session and type: library('ggplot2')
If you are missing the library,
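A quick way to script that check (my own sketch, not from the thread): shell out to Rscript and treat any failure, including a missing R installation, as "package not available".

```python
import subprocess

def r_has_package(pkg):
    """Return True if the given R package loads cleanly, False otherwise
    (including when Rscript itself is not installed)."""
    try:
        res = subprocess.run(
            ["Rscript", "-e", "library('%s')" % pkg],
            capture_output=True,
        )
        return res.returncode == 0
    except FileNotFoundError:
        # Rscript not on PATH
        return False
```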
On Dec 5, 2011, at 3:02 PM, Leon Mei wrote:
Today I upgraded our NBIC server from an older release (fetched on July 5th,
2011) to the latest version in galaxy-dist. After executing hg update and sh
manage_db.sh upgrade and merging some local configurations, I successfully
Here's the output of the migration:
galaxy@galaxy2:~/prog/galaxy-2011-12-5$ sh manage_db.sh upgrade
79 - 80...
Migration script to create tables for disk quotas.
80 - 81...
Migration script to add a 'tool_version' column to the hda/ldda tables.
81 - 82...
Thank you so much Dannon.
I appreciate your help but I need some more clarification.
In the tool I am integrating (which is Java classes), if the input parameters are
./main.bash eps0.3_40reads.fa population10_ref.fa 15 6 120
then the output is in eps0.3_40reads_I_6_15_CNTGS_DIST0_EM20.txt
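One common workaround when a tool computes its own output name is to rename the file inside the &lt;command&gt; tag, so Galaxy sees it at the path it assigned to $output. A sketch, not tested against this tool (the glob assumes only one such file is produced per run):

```xml
<!-- hypothetical wrapper: run the tool, then move the file it
     names itself onto the output path Galaxy assigned ($output) -->
<command>
  ./main.bash $input $reference 15 6 120 &amp;&amp;
  mv *_CNTGS_DIST0_EM20.txt $output
</command>
```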
I am unable to get VCF files to display in IGV in my local galaxy
installation. BAM files display fine, however, whenever I click on the
link to display a VCF file I get the following error message:
You must import this dataset into your current history before you
can view it at the
There should be additional information in the Galaxy database about why the job
failed; take a look at the stderr column of the failed job using some SQL like this:
select * from job where state='error' and tool_id='tophat' and stderr like
'%indexing reference%' order by id desc;
Thanx guys, that should give me heaps to go on with :-)
From: Nate Coraor [mailto:n...@bx.psu.edu]
Sent: Tuesday, 6 December 2011 8:21 a.m.
Cc: Smithies, Russell; email@example.com
Subject: Re: [galaxy-dev] possibly weird config
I was wondering what would be the best way to extend Galaxy's API
functionality to allow for runtime modification of tool parameters?
I have successfully been able to run workflows programmatically using
the API, following the basic steps in:
You're correct in that currently the workflow API affords no method for runtime
modification of tool parameters, other than inputs. Depending on your needs,
it might be feasible to have a few static workflows that you reuse often via
the workflow API. If that isn't the case, and you
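For reference, a minimal sketch of such a programmatic run against the 2011-era REST endpoint (the URL, API key, and ds_map values are placeholders; as noted above, only step inputs can be set this way, not other tool parameters):

```python
import json
from urllib import request as urlrequest

def build_payload(workflow_id, history_name, ds_map):
    """ds_map maps workflow step input indices to history datasets,
    e.g. {"0": {"src": "hda", "id": "dataset-id"}}."""
    return {"workflow_id": workflow_id,
            "history": history_name,
            "ds_map": ds_map}

def run_workflow(galaxy_url, api_key, payload):
    """POST the payload to /api/workflows (sketch; check the endpoint
    and parameter names against your Galaxy release)."""
    req = urlrequest.Request(
        "%s/api/workflows?key=%s" % (galaxy_url, api_key),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urlrequest.urlopen(req)
```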