I haven't done much with iProphet yet, but I really need to get that
going ASAP. I've just been swamped with too much to do, and then a
vacation, so now I should be able to get caught up. We are working on a
paper or two where we've run multiple search engines and multiple data
analysis packages on data from the UPS1 standard (and possibly UPS2; I'm
not sure whether that will be included), acquired on an LTQ XL, an
LTQ-XL Orbitrap, and an LTQ-FT. We originally planned to include the
4000 Q-Trap as well, but once we saw the results we decided it might be
better left unsaid. I haven't been the one doing the in-depth analysis,
but I recall that the X!Tandem k-score data were pretty similar to the
SEQUEST results. All three search engines differ somewhat because of
their algorithms, so combining them is always a good idea. I haven't
done much with OMSSA; I tried it roughly 4-5 years ago and wasn't that
impressed, but it has probably changed quite a bit for the better since
then.
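
For anyone following along, the iProphet combine step in the TPP looks
roughly like this. The tool names are real TPP programs, but the file
names below are made up, and you should check the docs for your TPP
version; this is shown as a dry run that just prints the commands rather
than executing them:

```shell
# One PeptideProphet-validated pepXML per search engine (hypothetical
# names; these would normally come out of xinteract runs):
INPUTS="interact-mascot.pep.xml interact-tandemk.pep.xml interact-omssa.pep.xml"

# InterProphetParser (iProphet) merges the per-engine PeptideProphet
# results into a single combined pepXML:
COMBINE="InterProphetParser $INPUTS iprophet.pep.xml"

# ProteinProphet can then be run on the combined file; the IPROPHET
# option tells it the input probabilities came from iProphet:
PROTEINS="ProteinProphet iprophet.pep.xml iprophet.prot.xml IPROPHET"

# Dry run: print the commands instead of running them.
echo "$COMBINE"
echo "$PROTEINS"
```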

It starts to become a question of just how much data is really needed
for each sample, since you can always add another search engine or do
something else with the analysis. It isn't an easy question to answer,
but maybe there will be some agreement on this soon.

Greg

On Tue, Aug 4, 2009 at 11:28 AM, Dave Trudgian <[email protected]> wrote:

>
> Greg,
>
> Nice to know I'm not the only one. I've tried to structure our pipeline
> in a generic fashion, and separate things out to conf files, but it's by
> no means an example of pristine coding. Fingers crossed it would still
> be of use to someone.
>
> As an aside, I think I noticed on here that you've been using iProphet?
> We're combining Mascot, Tandem K-Score, and OMSSA searches by default,
> and have been intending to add more. I just wondered if you have an idea
> of what additional search engines would be most complementary? I've not
> gotten round to trying out others myself, except pushing crux (free
> sequest re-implementation) results through the sequest pipeline... and I
> haven't done that enough to really assess how much extra we get vs
> Mascot/Tandem/OMSSA.
>
> DT
>
> Greg Bowersock wrote:
> > I've pretty much done the same thing also, but I wasn't planning on
> > releasing the code, as I've been lazy and hardcoded most of the
> > information. That and I have built my scripts around an Oracle
> > database, which isn't freely available, so it would take considerable
> > coding to make it more generic. I still have some work to do, but
> > most of the hard work has already been done. I think the biggest
> > obstacle to more generic interfaces is that just about every group
> > wants something different for the interface. Sometimes you just have
> > to do things your own way, but the more code from other people that
> > you can leverage, the better off you are.
> >
> > Greg
> >
> > On Tue, Aug 4, 2009 at 10:28 AM, dctrud <[email protected]> wrote:
> >
> >
> >> Dear All,
> >>
> >> Firstly, apologies that this is going to be a bit of a rambling post.
> >>
> >> Over the past 12 months I've been working on what is effectively an
> >> alternative interface to the TPP vs Petunia, tailored towards giving a
> >> very simple front-end for users of our central proteomics facilities,
> >> fully automating submission to multiple search engines, and
> >> distributing jobs over a GridEngine cluster. There's a simple MySQL DB
> >> backend that tracks data submissions, searches, and results. The web
> >> app is written in Perl using the catalyst framework, and then there
> >> are a bunch of perl pipeline scripts which invoke the TPP tools etc.
> >>
> >> Lots of similar functionality is present in other packages such as
> >> CPAS, and in Petunia itself as it is continuously improved, but for
> >> various reasons we decided to re-invent the wheel, and have come up
> >> with something that works very nicely for us. I'm now starting the
> >> process of requesting clearance from the University to release the
> >> software. There's a new open source release procedure here and I
> >> really want to make sure anything useful our software contains can be
> >> contributed to the TPP project.
> >>
> >> Given that the open source release procedures here are very new, and
> >> that in the past software has been released under relatively strict
> >> non-commercial licenses that wouldn't allow us to give back to the TPP
> >> project, I wondered whether anyone had any tips for arguing the case
> >> for a LGPL/GPL license? Are other people developing custom interfaces/
> >> pipelines for the TPP tools, particularly if targeted at core
> >> facility usage? I'd like to be able to convincingly argue that others
> >> would benefit from our work, but moreover that we could benefit from
> >> people being interested in it and potentially collaborating.
> >>
> >> Thanks, and thanks to the TPP developers for all their freely
> >> available work! I hope to be able to give something back soon!
> >>
> >> Dave Trudgian
> >>
>
> --
> Dr. David Trudgian
> Bioinformatician in Proteomics
> University of Oxford
>
> Mon-Thu: CCMP, Roosevelt Drive
> Tel: (+44) (01865 2)87807
>
> Friday : Dunn School of Pathology, S. Parks Rd.
> Tel: (+44) (01865 2)75557
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"spctools-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/spctools-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
