Hi Ryan,
the latest wrappers are here:
https://github.com/galaxyproject/tools-devteam/tree/master/tools/bowtie2
And a PR would be great!
But as far as I can see this is already implemented: you can choose
the option `Paired-end Dataset Collection`, can't you?
Ciao,
Bjoern
On 24.06.2015
Hi Team,
my tool dynamically creates 96 datasets bundled into a list.
In the history I can see the number 96 at the top as hidden datasets
(6 shown, 96 hidden).
When I open the list, I can only see 64 items.
Now I run the job again and I have 96 more hidden items.
I open the new list and can see
No - not a typo. The tool can process both.
It's just my naming, because I use it for fastq.
I'll change the help tag.
2015-06-25 3:44 GMT-05:00 Peter Cock p.j.a.c...@googlemail.com:
Hi Alexander,
If this wasn't a collection, I would expect format_source to work
(possibly also using
Hi John,
Yes - I created this hacky solution and it works.
I have now tried what you said, but without success.
Code:
<collection name="split_output" type="list" label="@OUTPUT_NAME_PREFIX@ on ${on_string} (Fastq Collection)" format_source="fastq_input1">
    <discover_datasets pattern="__name_and_ext__" directory="splits" />
</collection>
You are giving Galaxy mixed signals :). format_source says to use
the datatype specified by the corresponding input - but
<discover_datasets pattern="__name_and_ext__" directory="splits" />
is saying (with the pattern __name_and_ext__): read files of the form
out1.fastq and assign the collection
The latter: starting with a dataset, pull its full history. Therefore, if
it was created by running a simple single-step tool, it's one step; if it
was created as part of a workflow, grab that whole series of
steps/inputs/outputs.
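The "pull its full history" approach above can be sketched against Galaxy's
dataset provenance endpoint. This is a hedged sketch, not code from the
thread: it assumes BioBlend is installed, the server URL, API key, and IDs
are placeholders, and the response fields ("tool_id", nested provenance
under "parameters" when follow=True) follow the provenance API.

```python
# Sketch (assumptions noted above): walk a dataset's provenance tree to
# recover the chain of tools that produced it.  With follow=True, Galaxy
# nests the provenance of each input dataset inside "parameters".
def collect_tool_ids(prov):
    """Recursively gather tool ids from a provenance dict."""
    tools = [prov["tool_id"]] if prov.get("tool_id") else []
    for value in prov.get("parameters", {}).values():
        if isinstance(value, dict) and "tool_id" in value:
            tools.extend(collect_tool_ids(value))
    return tools

def example_usage():
    # Not executed here; requires BioBlend and a live Galaxy server.
    # URL, key, and IDs below are placeholders.
    from bioblend.galaxy import GalaxyInstance
    gi = GalaxyInstance("https://galaxy.example.org", key="YOUR_API_KEY")
    prov = gi.histories.show_dataset_provenance(
        history_id="HISTORY_ID", dataset_id="DATASET_ID", follow=True)
    print(collect_tool_ids(prov))
```

For a dataset made by a single-step tool this yields one tool id; for a
workflow product it yields the whole chain back to the uploads.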
I agree that the Python/Java bindings are out of date, but even
Can you clarify one thing for me - are you attempting to break a
workflow invocation into steps, and then jobs, and then inputs and
outputs (so working from the workflow invocation) or are you trying to
scan existing histories and find a workflow for each dataset (so
working from the history id
For the list's sake - I think we figured this out on IRC, and it had to
do with having two versions of Galaxy installed on the same machine.
Alexander - let me know if this issue is not resolved.
-John
On Mon, Jun 15, 2015 at 4:02 PM, Alexander Vowinkel
vowinkel.alexan...@gmail.com wrote:
Thank
Hello,
I'm still relatively new to Galaxy. I'm trying to use the API to identify
the string of jobs/datasets that were created as part of executing a
workflow. So far as I can tell, the API gives me the ID of the job, which
corresponds to one step in the workflow. Each of these has
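Working from the invocation side, the step-to-job mapping described above
can be sketched as follows. This is an assumption-laden sketch, not the
asker's code: it assumes BioBlend, a placeholder URL/key/IDs, and the
invocation JSON shape where each step carries a "job_id" (None for input
steps that ran no tool).

```python
# Sketch: flatten a workflow invocation into (step order, job id) pairs,
# then look up each job's inputs and outputs.
def jobs_from_invocation(invocation):
    """Return [(order_index, job_id), ...] for every step of an invocation."""
    return [(step.get("order_index"), step.get("job_id"))
            for step in invocation.get("steps", [])]

def example_usage():
    # Not executed here; requires BioBlend and a live Galaxy server.
    from bioblend.galaxy import GalaxyInstance
    gi = GalaxyInstance("https://galaxy.example.org", key="YOUR_API_KEY")
    inv = gi.workflows.show_invocation("WORKFLOW_ID", "INVOCATION_ID")
    for order, job_id in jobs_from_invocation(inv):
        if job_id:  # input steps have no job
            details = gi.jobs.show_job(job_id, full_details=True)
            print(order, details.get("inputs"), details.get("outputs"))
```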
What Bjoern said - unless you meant the older bowtie1 wrappers. Those
have not been updated - I think the decision was made at Penn State to
focus on bowtie2 - but if people are still interested in enhancing the
bowtie1 wrappers I think a PR would be welcome. There have been some
other relatively
Hi Wolfgang,
I can only tell you that we also have problems with handling BAM files
properly in Galaxy.
Our issue is more due to unsorted BAM files, but as far as I understand,
this is because the metadata creation changed from using samtools to
using pysam. Maybe this helps you in finding a
If the user selects multiple pairs of paired-end data, I want that
submitted the same way as a list of paired-end data. I don't want a
separate job for each paired-end data. Rather, I want a single job to
consume the entire list.
It seems to be easier to disable the ability to select multiple
Conversation moved to IRC. tl;dr - it looks like it might be a GUI-related
problem, since the API does contain all of the datasets. Carl - any
chance you have an idea of what is going on here?
21:20 jmchilton avowinkel: is it possible there were duplicated
identifiers (has your discover_datasets
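The duplicated-identifier hypothesis from IRC can be checked locally.
A sketch, with assumptions: "splits" is the directory name from the tool
XML earlier in the thread, and the collision rule assumed here is that
the __name_and_ext__ pattern takes everything before the final extension
as the collection element identifier.

```python
# Sketch: report filename stems that would collide as collection element
# identifiers under __name_and_ext__ discovery.
import os
from collections import Counter

def duplicate_identifiers(filenames):
    """Return stems appearing more than once; these would clash as
    element identifiers (assumption: identifier = name minus final ext)."""
    stems = Counter(os.path.splitext(name)[0] for name in filenames)
    return sorted(stem for stem, count in stems.items() if count > 1)

def example_usage():
    # Run from the tool's working directory after a job completes.
    print(duplicate_identifiers(os.listdir("splits")))
```

An empty result would point back at a pure GUI problem rather than a
discovery-time identifier clash.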