All,

We have recently acquired an Agilent 6520 QTOF and are finding that
the performance of different search engines (with post-analysis by
PeptideProphet / ProteinProphet), and the effect of different
fragment mass tolerances, are more complicated than for our other
instruments (Orbitrap / Bruker HCT / Waters QTOF / ABI 4800).

Example: on a yeast lysate sample converted to mzXML with Trapper,
the number of spectrum matches at 1% FDR with the fragment mass
tolerance set to extremes of 0.01 Da and 0.5 Da respectively is:

                0.01 Da | 0.5 Da
Mascot:             759 | 2182
Tandem K-Score:     954 | 954
Tandem Native:     3879 | N/A (PeptideProphet cannot fit models)
OMSSA:             3762 | 3119
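
As an aside, the 1% FDR counts above come from the PeptideProphet
probabilities; a rough target-decoy equivalent (a sketch only, with
made-up score lists, not PeptideProphet's mixture-model approach) is:

```python
# Sketch of target-decoy FDR thresholding: scan PSMs from best score
# down and report the largest target count at which the estimated FDR
# (decoys / targets accepted so far) stays at or below the cut-off.

def matches_at_fdr(psms, fdr=0.01):
    """psms: list of (score, is_decoy) tuples.
    Returns the number of target PSMs accepted at the given FDR."""
    accepted_targets = 0
    decoys = 0
    best = 0
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            accepted_targets += 1
        if accepted_targets and decoys / accepted_targets <= fdr:
            best = accepted_targets
    return best

# usage: matches_at_fdr(list_of_scored_psms, fdr=0.01)
```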

Mascot performs very poorly with a very tight MS/MS tolerance; we
believe the low scores are caused by unmatched, relatively intense
ions in the low m/z range. If the tolerance is widened, these ions
are often matched, but with very large mass errors (so the matches
are unlikely to be correct given the instrument's accuracy). K-Score
usually outperforms native Tandem scoring on our other data, but the
opposite seems true here; in fact, the PeptideProphet models for the
K-Score data fit very poorly. That said, the Tandem native f-val
distributions at 0.5 Da fragment tolerance are strange, and
PeptideProphet cannot fit to them either.
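
To illustrate what we think is happening (with hypothetical m/z
values, not real data from our runs): a wide tolerance lets peaks
that are ~0.3 Da off still count as fragment matches, even though
errors that large are implausible on this instrument.

```python
def count_matched(theoretical, observed, tol):
    """Count theoretical fragment m/z values that have an observed
    peak within +/- tol Da."""
    return sum(
        1 for t in theoretical
        if any(abs(t - o) <= tol for o in observed)
    )

# Hypothetical fragment m/z values; two observed peaks are off by
# 0.3 Da, mimicking intense low-m/z ions that only "match" wide open.
theoretical = [175.119, 304.161, 433.204, 562.246]
observed    = [175.419, 304.461, 433.206, 562.247]

print(count_matched(theoretical, observed, 0.01))  # 2 at tight tol
print(count_matched(theoretical, observed, 0.5))   # 4 at wide tol
```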

I just wondered whether anyone else has had similar experiences with
Agilent QTOF data, and has a workflow that is working well? OMSSA and
Tandem native results with tight tolerances look good, but we don't
want to abandon the other search engines.

Many Thanks,

DT
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"spctools-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/spctools-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
