Thank you very much, David!

When I use PeptideProphet or iProphet from the TPP's GUI, a "*MODEL*" HTML 
file is also created. In its *Sens/Error Tables* tab, there is an Error 
Table like this one:

Error Table
Error_Rate  min_prob  num_correct  num_incorrect
0.0000 1.0000 2 0
0.0001 0.9979 6815 1
0.0002 0.9957 7051 1
0.0003 0.9937 7194 2
0.0004 0.9914 7296 3
0.0005 0.9894 7376 4
0.0006 0.9870 7442 4
0.0007 0.9852 7497 5
0.0008 0.9828 7546 6
0.0009 0.9805 7588 7
0.0010 0.9784 7626 8
0.0015 0.9672 7775 12
0.0020 0.9576 7881 16
0.0025 0.9465 7965 20
0.0030 0.9369 8033 24
0.0040 0.9150 8139 33
0.0050 0.8982 8223 41
0.0060 0.8797 8293 50
0.0070 0.8610 8352 59
0.0080 0.8420 8403 68
0.0090 0.8230 8448 77
0.0100 0.8032 8487 86
0.0150 0.7223 8638 132
0.0200 0.6367 8739 179
0.0250 0.5674 8811 227
0.0300 0.4996 8866 275
0.0400 0.3870 8944 374
0.0500 0.3072 8997 475
0.0750 0.1509 9070 737
0.1000 0.0769 9106 1013
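
A minimal sketch of how one might read a probability cutoff off such a table, 
assuming it has been saved as a whitespace-separated text file (the file name 
"error_table.txt" and the function name are just illustrative, not part of the TPP):

# Look up the min_prob threshold corresponding to a target error rate
# in a Sens/Error table saved as plain whitespace-separated text.
def min_prob_for_error_rate(path, target_error):
    rows = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) != 4:
                continue  # skip blank lines
            try:
                error_rate, min_prob = float(fields[0]), float(fields[1])
            except ValueError:
                continue  # skip the header line
            rows.append((error_rate, min_prob))
    # The probability cutoff at the target error rate is the lowest min_prob
    # among rows whose error rate does not exceed the target.
    eligible = [prob for err, prob in rows if err <= target_error]
    return min(eligible) if eligible else None

# e.g. min_prob_for_error_rate("error_table.txt", 0.02) would return 0.6367
# for the table above.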

*1.* Can I compare this error rate with the PEP-validated search results? I 
had decoys in my search database for these results, and I used the 
non-parametric model and accurate-mass binning. Is this error rate 
comparable to the FDR?

*2.* Does this table suggest that, for example, I can accept PSMs with 
probabilities down to ~0.64, *if I am willing to accept an error rate of 2%*? 
Can I trust the PSMs above that probability threshold as correct?
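
To make question 1 concrete, here is a minimal sketch of the two FDR estimates 
being compared, assuming the PSMs are available as (probability, is_decoy) pairs 
(for MaxQuant, probability would be 1 - PEP); the names are illustrative only, 
not TPP or MaxQuant API:

# Two ways to estimate the FDR at a given probability cutoff.
# psms: assumed list of (probability, is_decoy) tuples; illustrative only.

def model_based_fdr(psms, prob_threshold):
    # Expected fraction of incorrect PSMs among accepted ones,
    # i.e. mean(1 - prob), equivalently mean(PEP).
    accepted = [prob for prob, _ in psms if prob >= prob_threshold]
    if not accepted:
        return 0.0
    return sum(1.0 - prob for prob in accepted) / len(accepted)

def decoy_based_fdr(psms, prob_threshold):
    # Decoy-based estimate along the lines David suggests below:
    # count decoys passing the threshold relative to targets passing it
    # (assumes a 1:1 target/decoy database).
    accepted = [(prob, is_decoy) for prob, is_decoy in psms
                if prob >= prob_threshold]
    decoys = sum(1 for _, is_decoy in accepted if is_decoy)
    targets = len(accepted) - decoys
    return decoys / targets if targets else 0.0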

Thanks again.
Ali



On Wednesday, December 21, 2016 at 12:26:56 AM UTC-5, David Shteynberg 
wrote:
>
> The best way to compare is to include an independent set of wrong hits 
> (decoys) that are unknown to the algorithm.  Then you can calculate the 
> error rate based on the number of wrong hits that pass any threshold of 
> each algorithm.  The mixture models of each dataset are learned 
> independently from that dataset, by PeptideProphet and by iProphet.  
> Therefore, the minimum threshold to achieve a fixed error rate (and FDR) 
> changes depending on the dataset.  This method will also allow you to test 
> the accuracy of the FDRs reported by the validation tools.
>
> -David
>
> On Tue, Dec 20, 2016 at 4:27 PM, Ali <[email protected]> 
> wrote:
>
>> Dear all
>>
>> I would like to compare the identification rate of a set of 
>> PeptideProphet/iProphet validated results from Comet/X!Tandem, with 
>> MaxQuant's results. For MaxQuant I have PEPs. What is the fair way to 
>> compare the results with these two types of validations? At what threshold 
>> should I cut the PeptideProphet/iProphet results to be comparable with a PEP of say 
>> 0.1?
>>
>> Is it fair if I compare a PEP=x with PepPro/iPro=1-x? One thing that I 
>> have noticed is that the FDR calculated as mean(all PEPs), or as 1 - mean(all 
>> PepPro probabilities), is always much lower for PepPro.
>>
>> And one more question, what is an acceptable PepPro/iPro probability? Is 
>> there a way to allow a higher FDR without changing the probability 
>> threshold?
>>
>> Thank you very much in advance.
>>
>> Ali
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"spctools-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/spctools-discuss.
For more options, visit https://groups.google.com/d/optout.
