Dear TPP developers,
I might have uncovered a bug in ProteinProphet, or at least stumbled across a
result which does not make sense to me. We are using all annotated isoforms
from Swissprot when we search data from human. This roughly doubles the number
of human entries from 20k to 40k many
dear list,
i ran some tests with ptmprophetparser (tpp v4.5.1) on the output of
xtandem, omssa and myrimatch after xinteract has been run on the
original search engine output.
unfortunately, each test produced a different error:
PTMProphetParser interact_myrimatch.pep.xml
Unknown file
on
this issue you reported. Is there any additional information you
can provide? Maybe the tandem file? Are there a large number of
expected identifications?
Thanks
On Sun, Oct 23, 2011 at 3:59 AM, Andreas Quandt
quandt.andr...@gmail.com wrote:
dear experts,
i ran into a problem when converting tandem results to pep.xml. i use an
internal standard data set which was processed without problems using
tpp 4.4.1.
however, if i use the new version 4.5.0, only 2 out of 4 files are converted.
maybe one of you can interpret the error message i
dear list,
i am currently evaluating the latest tpp release and came across a
problem with mzxml2search, which fails to generate mgf files from our
mzml files.
mzxml2search does not report any problems during the conversion but
produces files containing only the 'COM=' line.
did any of you come
dear list,
i want to run iprophet on a single file i searched with xtandem and
omssa. i tried to use iprophet on the 2 results separately but get a
segmentation fault for both of them, although xinteract finished
successfully:
xinteract (TPP v4.4 VUVUZELA rev 1, Build 201102011840 (linux))
dear list,
i am experimenting with the parameters of mzxml2search and was
wondering about the -c option
the wiki indicates the following:
-cn1[-n2]   suggest charge(s): for scans which do not have a
precursor charge (or charge range) already determined in the input
file, use the user-specified charge
dear list,
i tried to run the combination of
interactparser/refreshparser/peptideprophetparser instead of xinteract.
however when i try to 'emulate' the xinteract parameter
-dDECOY_ -OAPdlIw
with
interactparser: -L7 -Etrypsin -C
refreshparser: PREV_AA_LEN=1 NEXT_AA_LEN=1
dear list,
i tried to compile r5303 from the trunk repository but unfortunately i got
compilation errors.
here is what i did:
svn co -r 5303
https://sashimi.svn.sourceforge.net/svnroot/sashimi/trunk/trans_proteomic_pipeline
trans_proteomic_pipeline
cd trans_proteomic_pipeline/src
touch
-supervised parametric mode, or
semi-parametric mode on this dataset.
-David
On Wed, Feb 2, 2011 at 4:32 PM, Andreas Quandt quandt.andr...@gmail.com
wrote:
hey david,
i uploaded 2 files:
1) the original output file of crux: B08-02057_original_crux.pepXML
2) the pepxml with the latest
like a Sequest
pep.xml file for the time being.
On Tue, Feb 1, 2011 at 3:18 PM, Andreas Quandt quandt.andr...@gmail.com
wrote:
dear list,
i tried processing some pep.xml produced by crux with the tpp.
however i always get the same error message (i tried xinteract
:
How did it process it as Sequest if the search_engine attribute still says
Crux?
-Matt
On 2/2/2011 5:30 AM, Andreas Quandt wrote:
hey jimmy,
thanks for the reply!
i tried to follow your advice with modifying the pepxml produced by crux.
however it seems i am not pulling the right strings
? It seems like you got
it to process properly as SEQUEST, so there's something wrong further down.
-Matt
On 2/2/2011 9:56 AM, Andreas Quandt wrote:
hey matt,
thanks for picking this up.
i posted the original first lines of the crux file, not all the attempts to
modify them ;-)
after fixing
the search engine name and the score names (correct me
if I'm wrong, TPP folks). If you need a program to do the replace all you
can try Notepad++ or Gvim.
-Matt
On 2/2/2011 10:22 AM, Andreas Quandt wrote:
sure :-)
originally the spectrum_query match looks like
spectrum_query spectrum=B08-02057.mzXML.00234.00234.3
a suggestion how to continue?
cheers,
andreas
On Wed, Feb 2, 2011 at 6:49 PM, Andreas Quandt quandt.andr...@gmail.com wrote:
Hey Jimmy,
I tried the renaming but of course not yet the addition of the 'missing'
scoring values.
I'll report back to you guys as soon as I have tested it :-)
Cheers,
Andreas
,
-David
On Wed, Feb 2, 2011 at 1:08 PM, Andreas Quandt quandt.andr...@gmail.com
wrote:
hey david,
thanks for answering!
i used xinteract -dDECOY_ -OAdPlIw B08-02057_mod.pepXML to process the
file.
as you suggest using a trunk version: can you point me to a specific
revision?
cheers
dear list,
i am trying to process some inspect results with tpp 4.4.1 but i am running
into some problems:
first i converted the .out file produced by inspect (current version) to a
pep.xml (using the latest converter from terry :-) ).
/usr/local/apps/inspect/InspectToPepXML.py -i run2.out -o
your kernel.
4) Use a linux kernel version >= 2.6.23, which I understand has variable
argument length. http://kernelnewbies.org/Linux_2_6_23#line-84
Besides that -- I'm not sure.
-Joe
On Tue, Dec 7, 2010 at 5:14 PM, Andreas Quandt
quandt.andr...@gmail.com wrote:
dear list,
i wanted
Dear list,
Can any of you tell me which search engines in the
new TPP version are supported with a parametric model and which have
to run with the non-parametric model?
Many thanks in advance for answering!
Cheers,
Andreas
--
You received this message because you are
dear list,
when trying to run xinteract on some pep.xml files generated with spectrast,
i quite often run into a NaN probability density error:
xinteract
-N/cluster/scratch/malars/pgrade/PS_TPP_develop_280/PSet_39/collector/LIB/spectrast_xinteract/Spectrast_Xinteract.pep.xml
-OAlIw
fixed in the
latest 4-4 version. Can you try that one?
On Tue, Aug 24, 2010 at 2:01 PM, Andreas Quandt
quandt.andr...@gmail.com wrote:
dear list,
for one of my data analysis with iprophet i get a segmentation fault
error
but i am not sure what this means, or rather what could have
hey david,
many thanks for looking into it!
i was wondering if you could point me to a source where conditions such as
not using a '.' character in the file's basename are described because i was
not aware of that.
many thanks in advance,
andreas
On Wed, Jun 2, 2010 at 12:17 AM, David
described
occurs only when using sequest, or is this a more general problem?
cheers,
andreas
On Thu, Jun 3, 2010 at 12:24 AM, Andreas Quandt quandt.andr...@gmail.com wrote:
dear list,
i modified some of my mzXML files by removing ms2 spectra which did not
fulfill certain criteria.
afterwards i tried to analyze them via xtandem but got error messages like
might be a corrupted file.
to overcome this problem i used indexmzXML which corrected the index by
generating a
of the file. Then apply the re-indexing.
Hope this helps,
--Luis
On Tue, Mar 23, 2010 at 9:46 AM, Andreas Quandt
quandt.andr...@gmail.com wrote:
at the actual file, can
you upload to the group's files area?
Brian
On Tue, Mar 23, 2010 at 1:20 PM, Andreas Quandt
quandt.andr...@gmail.com wrote:
hey luis,
nice to hear from you and many thanks for your fast answer!
unfortunately this does not do the trick :-(
i renumbered the scans starting
dear list,
i would like to use pep_dbcount and digestdb for some analysis but i am not
sure about the values in the output files.
hence, it would be great if one of you could briefly explain which values
are displayed there.
many thanks in advance,
andreas
--
You received this message
from digestdb. Something like
digestdb somefile.fasta | sort -k 5,5 > digest.output
pep_dbcount digest_output
On Mon, Mar 15, 2010 at 2:58 PM, Andreas Quandt
quandt.andr...@gmail.com wrote:
dear list,
i was wondering if someone could explain to me why there are 2 interprophet
result entries for each of the 'hit_rank=1' tags in the pep.xml generated by
iprophet?
cheers,
andreas
and only see a single
interprophet result entry per search_hit tag.
These files are Mascot, Tandem, OMSSA searches, run individually through
PeptideProphet, combined with iProphet. What workflow gives two tags for
you?
DT
Andreas Quandt wrote:
dear list,
i was wondering if someone
dear list,
i would like to run interactparser and mascot2xml with semitryptic
as the enzyme.
unfortunately, i do not know how, as the help of both programs does not
contain this information.
it would be great if someone could tell me the exact enzyme name i have to
specify.
many thanks in
/O08-10105_c 1 -all
will work.
cheers,
andreas
On Mon, Oct 5, 2009 at 12:51 PM, Andreas Quandt quandt.andr...@gmail.com wrote:
dear list,
i executed the following command:
/usr/local/apps/tpp/bin/Out2XML
/IMSB/results/workflow/350/sorcerer/output/13200/original/O08-10105_c -all
-P/IMSB
\RegisterMassHunterDataAccess
it would be great if this information could be added to the wiki page of
trapper, which does not list the .NET framework as a requirement
cheers,
andreas
On Mon, Oct 5, 2009 at 2:37 PM, Andreas Quandt quandt.andr...@gmail.com wrote:
dear list,
i installed
dear list,
i executed the following command:
/usr/local/apps/tpp/bin/Out2XML
/IMSB/results/workflow/350/sorcerer/output/13200/original/O08-10105_c -all
-P/IMSB/results/workflow/350/sorcerer/output/13200/original/sequest.params
where the sequest.params file looks like this:
dear list,
i installed the latest version of trapper on a vanilla windows vista 64 bit
with sp1 in order to convert some agilent files.
unfortunately i get the following error message:
C:\Program Files (x86)\trapper\trapper.exe --mzXML -c
c:\andreasquandt\data\E_QTOF\E09_100209_RS_fasted3.d
dear list,
i was trying to use Out2XML to convert some sequest .out files.
unfortunately i am always getting a segmentation fault. i tried
several parameter sets and also peak lists but with no success.
here is the command i executed:
tpp/bin/Out2XML -pI -P
dear list,
sorry, the previous email was not sent with its complete content:
i was trying to use Out2XML to convert some sequest .out files.
unfortunately i am always getting a segmentation fault. i tried
several parameter sets and also peak lists but with no success.
here is the command i
/sorcerer/output/10300/original/O08-10093_c
You shouldn't need the -E option, since Out2XML will use the enzyme from
the sequest.params file.
Greg
On Fri, Oct 2, 2009 at 5:32 AM, Andreas Quandt
quandt.andr...@gmail.com wrote:
dear list,
sorry, the previous email was not sent with its complete
, Oct 2, 2009 at 11:31 AM, Andreas Quandt
quandt.andr...@gmail.com wrote:
hey greg,
many thanks for your answer!
apparently this would explain a lot ;-)
but what do i have to do when i process multiple files and have therefore
multiple directories?
do i have to specify all
dear list,
i am wondering if it is possible to specify the name of the pepXML
and/or its output destination when using the Out2XML parser?
many thanks in advance for any answers!
cheers,
andreas
--~--~-~--~~~---~--~~
You received this message because you
dear list,
i tried to convert a mascot dat file of 7gb and it took several days.
hence, i would like to ask if any of you know how to speed this up?
cheers,
andreas
memory and started thrashing
the disk (constantly paging things in and out to the swap file) which
slows down any operation tremendously. Quite possibly the parser wasn't
designed for files that big (maybe it's storing something internally
that's taking up too much space?).
-Matt
Andreas Quandt wrote:
dear list,
i tried to convert a mascot dat file of 7gb and it took
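A parser that holds an entire 7 GB file in memory will push the machine into swap, exactly as Matt describes; the usual cure is streaming, where each record is processed and discarded as it is read. A minimal sketch of the idea using Python's xml.etree.ElementTree.iterparse (purely illustrative: Mascot .dat files are not XML, and this is not how Mascot2XML works internally):

```python
# Streaming parse: memory stays bounded because each element is
# discarded as soon as it has been handled. Illustrative only.
import io
import xml.etree.ElementTree as ET

def count_hits(stream):
    """Count <hit> elements without building the whole tree in memory."""
    count = 0
    # iterparse yields each element as its end tag is seen
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "hit":
            count += 1
        elem.clear()  # free the element's children immediately
    return count

# Tiny stand-in for a multi-gigabyte result file
data = b"<results><hit/><hit/><hit/></results>"
print(count_hits(io.BytesIO(data)))  # -> 3
```

The same pattern applies to any record-oriented format: read a record, handle it, release it before reading the next.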
, Brian Pratt brian.pr...@insilicos.com wrote:
I'd be happy to look at a much smaller but otherwise similar data set for
any clues - you can send to ftp://insilics.serveftp.net/pub
Brian
On Mon, Sep 28, 2009 at 8:08 AM, Andreas Quandt
quandt.andr...@gmail.com wrote:
hi matt,
this was also
lmend...@systemsbiology.org wrote:
The error seems to be coming from ProtProphModels.pl:
Illegal division by zero at /usr/local/apps/tpp/bin/ProtProphModels.pl line
231.
But it seems that ProteinProphet finished; do those results look ok?
--Luis
On Tue, Sep 22, 2009 at 9:57 AM, Andreas
.
i played around a little with the parameters but always get the same
error message.
i also checked the output of the xtandem run (it contains over 1000 valid
models) and also the pep.xml (which is valid).
--
Andreas Quandt, PhD
Institute of Molecular Systems Biology
Swiss Federal Institute
I
Natalie or I will be happy to have a look.
Brian
On Fri, Sep 18, 2009 at 6:47 AM, Andreas Quandt
quandt.andr...@gmail.com wrote:
dear list,
when running following command (tpp 4.3.1)
/usr/local/apps/tpp/bin/xinteract
-N/IMSB/results/workflow/45/Xinteract_/Xinteract.pep.xml -dDECOY_
hey brian,
it's 4.2.1 in that case ;-)
Brian Pratt wrote:
What version of the software are you using?
Brian
-Original Message-
From: spctools-discuss@googlegroups.com
[mailto:spctools-disc...@googlegroups.com] On Behalf Of andreas quandt
Sent: Thursday, July 09, 2009 5:50 AM
dear list,
i analyzed a few samples with mascot and converted the dat file
successfully to pepXML (only error: scan number cannot be found).
afterwards i was running xinteract on all created pep.xml files
(arguments: l -dDECOY_ -OAlIwp).
this was also successful but when i want to display the
hey brian,
now i know what you mean.
i did not see that the tpp binary folder is automatically created in the
parent directory
so this seems to work fine.
many thanks!
andreas
Andreas Quandt wrote:
Hey Natalie, hello Brian
Sorry for getting back to you so late.
I tried
dear list,
after checking out trunk rev 4414, i wanted to test the petunia
interface but there is a problem with the login when using guest/guest.
here, i get following error message: User guest not found. Please check
your user name, or log in as guest.
does anyone have an idea how to fix
$cmd = "gawk '\$5 >= 0.01 {print \$1}' interact.prot_FDR.tsv | sort | head -n 1";
my $prot_decoy_1pc_thresh = `$cmd` || "";
Not elegant, but it works.
DT
andreas quandt wrote:
dear list,
i was wondering if there is a way to calculate the probability
cutoffs for a specific FDR
dear list,
i would like to test the trunk version of the tpp but have some trouble
building it.
i was following the instructions for the installation of 4.2.1 on ubuntu
9.04 (specifying the latest boost library in the Makefile.config.incl)
but get the following error during compilation
@googlegroups.com
[mailto:spctools-disc...@googlegroups.com] On Behalf Of andreas quandt
Sent: Wednesday, June 24, 2009 8:35 AM
To: spctools-discuss@googlegroups.com
Subject: [spctools-discuss] building tpp from svn (rev 4395) on ubuntu 9.04
dear list,
i would like to test the trunk version
, I'm not sure what ubuntu 9.04 ships with.
Brian
-Original Message-
From: spctools-discuss@googlegroups.com
[mailto:spctools-disc...@googlegroups.com] On Behalf Of andreas quandt
Sent: Wednesday, June 24, 2009 9:28 AM
To: spctools-discuss@googlegroups.com
Subject: [spctools-discuss
hey david,
perfect!
many thanks for the answer :-)
cheers,
andreas
David Shteynberg wrote:
Hello Andreas,
Indistinguishable proteins must have all peptides in common (same
peptide sequences and same number of enzymatically tolerable ends on
the peptides). In a protein_group only some
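To put David's distinction in protXML terms, a schematic fragment may help: a protein_group collects proteins that share only some peptide evidence, while an indistinguishable_protein shares all of it with its parent entry. The element and attribute names follow the protXML schema, but the accessions and probabilities below are invented for illustration:

```xml
<!-- One ambiguity group: the two <protein> entries share only some
     peptides; the <indistinguishable_protein> shares all of them
     with its parent <protein>. Accessions/values are made up. -->
<protein_group group_number="1" probability="0.99">
  <protein protein_name="sp|P12345|EXA1_HUMAN" probability="0.99">
    <indistinguishable_protein protein_name="sp|P12345-2|EXA1_HUMAN"/>
  </protein>
  <protein protein_name="sp|Q67890|EXA2_HUMAN" probability="0.43"/>
</protein_group>
```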
dear list,
i would like to ask a very basic question regarding the protXML format:
What exactly does the tag 'protein_group' stand for, and what is the
difference from the indistinguishable proteins information?
thank you very much in advance for your answers!
cheers,
andreas
dear list,
does anyone know how to convert a pepXML to a text file with
comma-delimited (tab-delimited) values?
i know there is an option from within the pepxml viewer but i would
like to generate the export file without the need of using the
graphical interface.
it would be great if
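One way to get such an export without the GUI is a small script over the pepXML. Here is a minimal sketch using Python's standard library; the element and attribute names (spectrum_query, search_hit, hit_rank, peptide, protein) follow the pepXML schema, but treat this as an illustrative sketch rather than a supported TPP tool:

```python
# Flatten top-ranked pepXML search hits into tab-delimited lines.
# Illustrative sketch; real pepXML files are namespaced, so tags are
# matched by local name.
import io
import xml.etree.ElementTree as ET

def localname(tag):
    return tag.rsplit("}", 1)[-1]  # strip any XML namespace prefix

def pepxml_to_tsv(stream):
    lines = ["spectrum\tpeptide\tprotein"]
    spectrum = None
    for event, elem in ET.iterparse(stream, events=("start",)):
        name = localname(elem.tag)
        if name == "spectrum_query":
            spectrum = elem.get("spectrum")
        elif name == "search_hit" and elem.get("hit_rank") == "1":
            lines.append("\t".join([spectrum,
                                    elem.get("peptide"),
                                    elem.get("protein")]))
    return "\n".join(lines)

doc = b"""<msms_pipeline_analysis>
  <spectrum_query spectrum="run1.00234.00234.2">
    <search_hit hit_rank="1" peptide="ELVISK" protein="sp|P12345"/>
  </spectrum_query>
</msms_pipeline_analysis>"""
print(pepxml_to_tsv(io.BytesIO(doc)))
```

Swap the tab for a comma to get CSV, and add further elem.get(...) columns (charge, scores) as needed.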
, at 12:08 PM, Greg Bowersock wrote:
Are you looking for the output from Peptide Prophet or Protein
Prophet? They can both be produced via the command line, but they are done
in different ways.
Greg
On Wed, May 13, 2009 at 1:43 PM, Andreas Quandt quandt.andr...@gmail.com
wrote:
dear list,
does
call to
see where
that previous definition is coming from.
-Original Message-
From: spctools-discuss@googlegroups.com
[mailto:spctools-disc...@googlegroups.com] On Behalf Of Andreas Quandt
Sent: Tuesday, April 21, 2009 4:23 PM
To: spctools-discuss@googlegroups.com
Subject
dear list,
i tried to build the latest version on ubuntu 8.10 (g++ 4.3.1, boost
1.35) and only partly succeeded.
during the make process the following error occurred (the same also
happens if i try to build the tpp from its trunk). in addition i also
tried to run the build process with