You can use $input.name to get the name you see in the history. Unfortunately, that is usually something silly like "Samtools on data 3". I have changed the labels for all the major files that are produced to provide more sensible names, usually based on the input... An example below is how I
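The author's own example is truncated above. As a generic illustration only (not the original poster's wrapper; the `input` and `output1` names are hypothetical), a Galaxy tool XML output can set a `label` attribute with Cheetah substitution so the history entry is named after the input dataset:

```xml
<outputs>
  <!-- Hypothetical sketch: "input" must match a data parameter
       defined in this tool's <inputs> section. -->
  <data format="bam" name="output1"
        label="${tool.name} on ${input.name}"/>
</outputs>
```

With this label, running the tool on a dataset called "sample1.bam" yields a history item named like "Samtools on sample1.bam" instead of "Samtools on data 3".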
Thanks for addressing this... The workflows load MUCH faster now... Thon

On Mar 20, 2012, at 07:54 AM, Dannon Baker dannonba...@me.com wrote: Hi Thon, Thanks for reporting this. I see what the problem is here, at least for the clone duplication, and I've committed a fix in 6833:e8e361707865 that will
Hi, How far along is the thinking about being able to re-use a complete workflow as a workflow step in another workflow? This would really allow us to modularize certain aspects of our analyses and to re-use one workflow inside another. Barring that, a simple copy/paste from one
Hi, While technically not a Galaxy (dev) question, I am running into a non-standard VCF header in the GATK UnifiedGenotyper output. I see

##FORMAT=<ID=PL,Number=G,Type=Integer,Description="Normalized, Phred-scaled likelihoods for genotypes as defined in the VCF specification">
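`Number=G` (one value per genotype) was introduced in VCF 4.1, so stricter 4.0-era parsers can choke on this header. A minimal workaround sketch, assuming the only offending field is the `Number=G` entry (this is not an official Galaxy or GATK fix):

```python
def relax_format_headers(vcf_text):
    """Rewrite VCF 4.1-style Number=G in ##FORMAT header lines to the
    catch-all Number=. so that stricter VCF 4.0 parsers accept the file.
    Data lines are passed through untouched."""
    out = []
    for line in vcf_text.splitlines():
        if line.startswith("##FORMAT=") and "Number=G" in line:
            # Only the first occurrence in the header line is replaced.
            line = line.replace("Number=G", "Number=.", 1)
        out.append(line)
    return "\n".join(out)
```

Whether downstream tools still interpret the PL field correctly after this rewrite depends on the consumer; it only relaxes the header declaration.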
Yeah... I was thinking something like that... I think it is possible to produce a varying number of data files, if I recall correctly... http://wiki.g2.bx.psu.edu/Admin/Tools/Multiple%20Output%20Files Thon

On Feb 20, 2012, at 07:11 PM, Ross ross.laza...@gmail.com wrote: Thon, I just had an idea - write a
Hi, I tried to run a workflow with the API, but get an Error 500 when I try to run the WF... The paster.log shows the following error...

$ workflow_execute.py 92cc01ed93dc0f0fc91e3ded35497c0a http://srp106:8080/api/workflows ebfb8f50c6abde6d 'TEST the API'
Hi Dannon, thanks for the info... Indeed I had tried hda (I sorta figured H was for History, LD was for Library Dataset), but I got an error complaining about step 91 (I only had one step). I figured out that the API probably uses a different numbering for the steps, so this worked. ./display.py
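To make the hda/step-numbering point concrete: a sketch of the JSON body that `workflow_execute.py` POSTs to `/api/workflows`. The step ids keyed in `ds_map` are the ones reported by the API for that workflow (hence "step 91" even in a one-step workflow), not the editor's step numbers; the dataset id "abc123" below is illustrative, not from this thread:

```python
def build_workflow_payload(workflow_id, history_name, step_to_dataset):
    """Build the payload for POST /api/workflows.

    step_to_dataset maps an API-reported step id to an encoded dataset
    id. src is 'hda' for a history dataset or 'ld' for a library
    dataset, as discussed above."""
    ds_map = {}
    for step_id, dataset_id in step_to_dataset.items():
        ds_map[str(step_id)] = {"src": "hda", "id": dataset_id}
    return {
        "workflow_id": workflow_id,
        "history": history_name,
        "ds_map": ds_map,
    }
```

Posting this payload (with a valid API key) creates a new history named `history_name` and schedules the workflow against the mapped datasets.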