In our proteomics lab, a protein sample is fractionated (e.g. by pH) into a
number of sample fractions before analysis. The fractions are then run
through the mass spectrometer one at a time. Each fraction yields a data file.
The mass spec data is then matched to peptides by searching a FASTA file
with protein sequences, termed the target. Afterwards, the matches are
statistically scored by machine learning. To do this, the data is also
matched against a scrambled FASTA file, termed the decoy. Each fraction is
matched to both a target and a decoy file, which yields two match-files per
fraction.
The machine learning tool thus picks a target and a decoy match-file and
assigns statistical significance to the matches. For this to be correct,
it needs to pick match-files that correspond, i.e. that are derived from
the same fraction.
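To illustrate the pairing requirement described above, here is a minimal sketch in Python. The file naming scheme (`fraction_%d.target/.decoy`) is invented for the example and is not the lab's actual convention:

```python
import re

# Hypothetical match-file names: one target and one decoy file per fraction.
files = [
    "fraction_1.target.mzid", "fraction_1.decoy.mzid",
    "fraction_2.target.mzid", "fraction_2.decoy.mzid",
]

def pair_by_fraction(filenames):
    """Group target/decoy match-files that derive from the same fraction."""
    pairs = {}
    for name in filenames:
        m = re.match(r"fraction_(\d+)\.(target|decoy)\.", name)
        if m:
            fraction, kind = m.groups()
            pairs.setdefault(fraction, {})[kind] = name
    return pairs

# The scoring tool would then consume pairs["1"]["target"] together with
# pairs["1"]["decoy"], never mixing files from different fractions.
print(pair_by_fraction(files))
```

The point is simply that the pairing key (here, the fraction number) must survive whatever renaming happens upstream, which is what the `_task_%d` suffix discussed below in the thread provides.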
In our lab, we have not yet looked at John Chilton's (I think) work with
the m: data sets, and our parallel processing is done inside Galaxy,
using its split and merge functions to divide a job into tasks. Each
task is sent as a separate job to SGE, I think, but others may know more
about this than I do.
I really have to get back to my holiday now, cheers,
On 08/01/2013 04:17 AM, piotr.s...@csiro.au wrote:
Thank you for your explanation. Would you be able to give us an
example of what you mean by fractions, and of when the task_%d suffixes
are used to pick files? We just want to make sure we have a good
understanding of the problem that you solved.
Also, I vaguely remember seeing 'data parallelism' mentioned somewhere
in relation to the m: data sets. Do you currently support, in any
way, automatic distribution of the processing of such datasets to parallel
environments (e.g. array jobs in SGE or similar)?
*From:*Jorrit Boekel [mailto:jorrit.boe...@scilifelab.se]
*Sent:* Wednesday, July 31, 2013 8:18 PM
*To:* Khassapov, Alex (CSIRO IM&T, Clayton)
*Cc:* p.j.a.c...@googlemail.com; jmchil...@gmail.com;
email@example.com; Szul, Piotr (ICT Centre, Marsfield);
Burdett, Neil (ICT Centre, Herston - RBWH)
*Subject:* Re: Appending _task_%d suffix to multi files
In our lab, files are often fractions of an experiment, but they are
named by their creators in whatever way they like. I put that code in
to standardize fraction naming, in case a tool needs input from two
files that originate from the same fraction (but have been treated in
different ways). In those cases, in my fork, Galaxy always picks the
files with the same task_%d numbers.
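As a rough illustration of how a `_task_%d` suffix lets two differently treated files be re-associated, here is a small sketch. The file names and the helper are invented for the example; this is not the actual code from the fork:

```python
import re
from collections import defaultdict

def group_by_task(filenames):
    """Group files that share the same _task_%d suffix in their names."""
    groups = defaultdict(list)
    for name in filenames:
        m = re.search(r"_task_(\d+)", name)
        if m:
            groups[int(m.group(1))].append(name)
    return dict(groups)

# Two differently treated sets of files, renamed with matching task numbers:
names = ["target_task_0.xml", "decoy_task_0.xml",
         "target_task_1.xml", "decoy_task_1.xml"]
print(group_by_task(names))
```

Because the suffix is assigned consistently at split time, any downstream tool can recover the correct counterpart for each file regardless of what the files were originally called.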
I can't help you very much right now, as I'm currently away from work
until October, but I hope this explains why it's in there.
On 07/31/2013 04:15 AM, alex.khassa...@csiro.au wrote:
We've been using Galaxy for a year now. We created our own Galaxy
fork where we were making changes to adapt Galaxy to our
requirements. As we need "multiple file dataset" support, we were
using John's fork for that initially.
Now we are trying to use "The most updated version of the multiple
file dataset stuff" https://bitbucket.org/msiappdev/galaxy-extras/
directly, as we don't want to maintain our own version.
One of the problems we have: when we upload multiple files,
their file names are changed (a _task_%d suffix is added to their
names). On our branch we simply removed the code which does this, but
now we wonder if it is possible to avoid this renaming somehow, i.e.
make Galaxy keep the original names?
Is it really necessary to change the file names?
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of Jorrit Boekel
Sent: Thursday, 25 October 2012 8:35 PM
To: Peter Cock
Cc: firstname.lastname@example.org <mailto:email@example.com>
Subject: Re: [galaxy-dev] the multi job splitter
I keep the files matched by appending a _task_%d suffix to their
names. So each task is matched with its correct counterpart with
the same number.
Please keep all replies on the list by using "reply all"
in your mail client. To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
To search Galaxy mailing lists use the unified search at: