[ccp4bb] Postdoctoral position in Toulouse, France

2010-10-27 Thread Jean-Denis PEDELACQ
An eighteen-month postdoctoral position, funded by the French National 
Research Agency (ANR) is available immediately in the group of Lionel 
Mourey at the Institut de Pharmacologie et Biologie Structurale in 
Toulouse (http://www.ipbs.fr/english/). We are seeking a motivated 
scientist to join a research project on the structural and functional 
studies of an ensemble of polyketide synthases from M. tuberculosis and 
M. marinum, identified as promising targets for the development of new 
antimicrobial drugs against mycobacteria. You will have the opportunity 
to interact with two renowned teams from the Department of « Molecular 
Mechanisms of Mycobacterial Infections ».


Experiments will involve a broad range of techniques including molecular 
biology, protein expression and purification, various methods of 
structure determination (X-ray crystallography, SAXS). Successful 
candidates should have a background in biochemistry and biophysics. 
Experience in molecular biology is a plus.


Further information may be obtained from Dr. Jean-Denis Pédelacq 
(telephone: +33-5-61-17-54-11; e-mail: jean-denis.pedel...@ipbs.fr). 
The closing date for applications is late January 2011. Written applications 
including a curriculum vitae and contact details for three referees 
should be forwarded to Jean-Denis Pédelacq, Structural Biophysics Group, 
IPBS-CNRS, 31077 Toulouse Cedex, France.


Re: [ccp4bb] diverging Rcryst and Rfree

2010-10-27 Thread Clemens Vonrhein
Dear Ian,

On Tue, Oct 26, 2010 at 05:15:50PM +0100, Ian Tickle wrote:
 Yes! - the critical piece of information that we're missing is the
 proportion of *all* structures that come from SG centres.  Only
 knowing that can we do any serious statistics ...

The point I was trying to make was not to blame SG centres (or
comparing them with other groups) - I was concerned about technology
mainly.

Clearly, there was some problem at the point of deposition. I would
have thought that especially in structures coming from SG centres
there would be technology in place on both sides (at the SG centre and
at the PDB site) to catch unusual values like an overall Rmerge of
0.99?

Cheers

Clemens

PS: according to a simple search, 1473 of those 3026 entries are from
SG centres and 1553 are not.

 


 -- Ian
 
 On Tue, Oct 26, 2010 at 5:07 PM, Frank von Delft
 frank.vonde...@sgc.ox.ac.uk wrote:
   b) very large Rmerge values:
 
       Rmerge  Rwork  Rfree  Rfree-Rwork Resolution
       ------  ------  ------  -----------  ----------
       0.9990 0.1815 0.2086    0.0271     1.80  SG center, unpublished
       0.8700 0.1708 0.2270    0.0562     1.96  unpublished
       0.7700 0.1870 0.2297    0.0428     1.56
       0.7600 0.2380 0.2680    0.0300     2.50  SG center, unpublished
       0.7000 0.1700 0.2253    0.0553     1.71
       0.6400 0.2179 0.2715    0.0536     2.75  SG center, unpublished
 
  The most disturbing to me is that of those with very large overall
  Rmerge values, 3 come from structural genomics centers.
 
  Is that less or more disturbing than that the other 50% come from not-SG
  centers?
 
  Of course, the authors themselves may be willing to help correct the obvious
  typos -- which will presumably disappear forever once we can finally upload
  log files upon deposition (coming soon, I'm told).
 
  On an unrelated note, it's reassuring to see sound statistical principles --
  averages, large N, avoidance of small number-anecdotes, and such rot --
  continue not to be abandoned in the politics of science funding, he said
  airily.
 
  phx
 

-- 

***
* Clemens Vonrhein, Ph.D. vonrhein AT GlobalPhasing DOT com
*
*  Global Phasing Ltd.
*  Sheraton House, Castle Park 
*  Cambridge CB3 0AX, UK
*--
* BUSTER Development Group  (http://www.globalphasing.com)
***


Re: [ccp4bb] Hardware question

2010-10-27 Thread Frank von Delft
 I've been told by my (frighteningly geek-competent) colleague that the 
platters are identical, but the cheaper ones are those at the bottom of 
the Quality Control pile, which are therefore spun more slowly, and 
don't get any claims of reliability.


(Have you checked the disk rotation speed?  I imagine the Dell ones go 
much faster.)


phx


On 27/10/2010 02:52, Edward A. Berry wrote:
Another question about computer hardware - if I configure a computer at the
Dell site, it costs about $700 to add a 2TB SATA drive.
On amazon.com or Staples or such, a 2TB drive costs ~$110 to $200
depending on brand.

Are the Dell-installed drives much faster, or more reliable, or have
a better warranty?  After all, RAID is supposed to stand for redundant
array of inexpensive disks, and we could afford a lot more redundancy
at the Amazon.com price.

And, are there any brands or models that should be avoided due to known
reliability issues?

Thanks,
eab


Re: [ccp4bb] Against Method (R)

2010-10-27 Thread Frank von Delft

Yes, but what I think Frank is trying to point out is that the difference
between Fobs and Fcalc in any given PDB entry is generally about 4-5 times
larger than sigma(Fobs).  In such situations, pretty much any standard
statistical test will tell you that the model is highly unlikely to be
correct.

But that's not the question we are normally asking.
It is highly unlikely that any model in biology is correct, if by "correct"
you mean "cannot be improved". Normally we ask the more modest question:
"have I improved my model today over what it was yesterday?"


I am not saying that everything in the PDB is wrong, just that the
dominant source of error is a shortcoming of the models we use.  Whatever
this source of error is, it vastly overpowers the measurement error.  That
is, errors do not add linearly, but rather as squares, and 20%^2+5%^2 ~
20%^2 .

So, since the experimental error is only a minor contribution to the total
error, it is arguably inappropriate to use it as a weight for each hkl.

I think your logic has run off the track.  The experimental error is an
appropriate weight for the Fobs(hkl) because that is indeed the error
for that observation.  This is true independent of errors in the model.
If you improve the model, that does not magically change the accuracy
of the data.

Sorry, still missing something:

In the weighted Rfactor, we're weighting by the 1/sig**2 (right?)  And 
the reason for that is, presumably, that when we add a term (Fo-Fc) but 
the Fo is crap (huge sigma), we need to ensure we don't add very much of 
it -- so we divide the term by the huge sigma.


But what if Fc also is crap?  Which it patently is:  it's not even 
within 20% of Fo, never mind vaguely within sig(Fo).  Why should we not 
be down-weighting those terms as well?


Or can we ignore that because, since all terms are crap, we'd simply be 
down-weighting the entire Rw by a lot, and we'd be doing it for the Rw 
of both models we're comparing, so they'd cancel out when we take the 
ratio Rw1/Rw2?


But if we're so happy to fudge away the huge gorilla in the room, why 
would we need to be religious about the little gnats on the floor (the 
sig(Fo))?  Is there then really a difference between R1/R2 and Rw1/Rw2, 
for all practical purposes?


(Of course, this is all for the ongoing case we don't know how to model 
the R-factor gap.  And no, I haven't played with actual numbers...)


phx.
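(A quick toy illustration of the quadrature point above - simulated amplitudes and invented error levels, nothing from a real data set: when a ~20% model error dominates a ~5% measurement error, the sigma-weighted and unweighted R-factor ratios between two models come out almost identical, which is Frank's "they'd cancel out" intuition.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "true" structure-factor amplitudes and ~5% measurement sigmas.
n = 10000
F_true = rng.gamma(shape=2.0, scale=50.0, size=n)
sig = 0.05 * F_true
F_obs = F_true + rng.normal(0.0, sig)

def r_factor(fo, fc):
    """Conventional unweighted R = sum|Fo-Fc| / sum Fo."""
    return np.sum(np.abs(fo - fc)) / np.sum(fo)

def wr2(fo, fc, sig):
    """Weighted R-factor with w = 1/sigma**2."""
    w = 1.0 / sig**2
    return np.sqrt(np.sum(w * (fo - fc)**2) / np.sum(w * fo**2))

# Two imperfect models whose dominant error (~20%) is model error,
# far larger than sigma(Fobs); model 2 is slightly worse than model 1.
F_calc1 = F_true * (1 + rng.normal(0.0, 0.20, n))
F_calc2 = F_true * (1 + rng.normal(0.0, 0.22, n))

# Errors add in quadrature: sqrt(0.20**2 + 0.05**2) ~ 0.206, so the
# sigma weights barely move the ratio between the two models.
r_ratio = r_factor(F_obs, F_calc1) / r_factor(F_obs, F_calc2)
w_ratio = wr2(F_obs, F_calc1, sig) / wr2(F_obs, F_calc2, sig)
print(r_ratio, w_ratio)
```

Both ratios land near 0.91 here, i.e. for *comparing* two models in the regime where model error swamps measurement error, R1/R2 and Rw1/Rw2 tell much the same story.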


Re: [ccp4bb] diverging Rcryst and Rfree [SEC=UNCLASSIFIED]

2010-10-27 Thread Ian Tickle
Anthony,

I have used the minimum of -LLfree (i.e. same as maximum free
likelihood) as a stopping rule for both weight optimisation and adding
waters, the former because it seems to be well justified by theory
(Gerard Bricogne's that is); also it's obviously very similar to Axel
Brunger's min(Rfree) rule for weight optimisation which seems to work
well.  I use it for adding waters because it seems to give a
reasonable number of waters.  Changes in Rfree seem to roughly mirror
changes in -LLfree, though they don't necessarily have minima at the
same points in parameter space; I guess that's not surprising since
unlike LLfree, Rfree is unweighted.  Using the min(-LLfree) rule
routinely for weight optimisation would be quite time consuming, so
now I just use a target RMS-Z(bonds) value based on a linear fit of
RMS-Z(bonds) vs resolution obtained from PDB-REDO refinements, where
the min(-LLfree) rule was used.

I haven't done a systematic study to see whether it can be used to
decide whether or not adding TLS parameters improves the model, but in
most of the cases I looked at (though admittedly not all) using TLS
reduces Rfree and -LLfree, or at least doesn't cause them to increase
significantly, so now I just use TLS routinely (like most other people
I guess!).  If I were being totally consistent with the use of my
rule, I should really test -LLfree after using TLS and if it does
increase then throw away the TLS model!  This area could benefit from
more careful investigation!

I also tried min(Rfree-Rwork) as a stopping rule for weight
optimisation and adding waters but it didn't give good results (i.e.
the number of waters added seemed unrealistic).  I haven't tried your
rule min(Rfree-Rwork/2) in either case, and it may indeed turn out
that it works better than mine.  I was just interested to know whether
you had arrived at your rule by experimentation, and if so how it
compared with other possible rules.

I do have one reservation about your rule; the same also applies to
the min(Rfree-Rwork) rule: you can get situations where a decrease in
both Rwork and Rfree corresponds to a worse model according to the
rule, and conversely an increase in Rwork and Rfree corresponds to an
improved model.  This looks counter-intuitive to me: intuition tells
me that a model which is more consistent with all of the experimental
data (i.e. both the working and test sets) is a better model and one
which is less consistent is a worse one.  Admittedly intuition has
been known to lead one astray and it may be the case that the model
with lower Rwork & Rfree is worse if judged by the deviations from the
target geometry; however it doesn't seem likely that one would in
practice get a lower Rfree with worse geometry unless really unlucky!

For example, starting with a model with Rwork = 20, Rfree = 30 as
before (test value = 20), consider a model with Rwork = 16, Rfree =
29: the test value = 21, so a worse model by your rule.  Conversely
consider a model with Rwork = 24, Rfree = 31: test value = 19, so a
better model by your rule.  As I said this behaviour is not peculiar
to your rule; any rule which involves combining Rwork & Rfree is
likely to exhibit the same behaviour.

Cheers

-- Ian

On Tue, Oct 26, 2010 at 2:52 PM, Ian Tickle ianj...@gmail.com wrote:
 Anthony,

 Your rule actually works on the difference (Rfree - Rwork/2), not
 (Rfree - Rwork) as you said, so is rather different from what most
 people seem to be using.

 For example let's say the current values are Rwork = 20, Rfree = 30,
 so your current test value is (30 - 20/2) = 20.   Then according to
 your rule Rwork = 18, Rfree = 29 is equally acceptable (29 - 18/2 =
 20, i.e. same test value), whereas Rwork = 16, Rfree = 29 would not be
 acceptable by your rule (29 - 16/2 = 21, so the test value is higher).
  Rwork = 18, Rfree = 28 would represent an improvement by your rule
 (28 - 18/2 = 19, i.e. a lower test value).

 You say this criterion provides a defined end-point, i.e. a minimum
 in the test value above.  However wouldn't other linear combinations
 of Rwork & Rfree also have a defined minimum value?  In particular
 Rfree itself always has a defined minimum with respect to adding
 parameters or changing the weights, so would also satisfy your
 criterion.  There has to be some additional criterion that you are
 relying on to select the particular linear combination (Rfree -
 Rwork/2) over any of the other possible ones?

 Cheers

 -- Ian

 On Tue, Oct 26, 2010 at 6:33 AM, DUFF, Anthony a...@ansto.gov.au wrote:


 One “rule of thumb” based on R and R-free divergence that I impress onto
 crystallography students is this:



 If a change in refinement strategy or parameters (e.g. loosening restraints,
 introducing TLS) or a round of addition of unimportant water molecules
 results in a reduction of R that is more than double the reduction in
 R-free, then don’t do it.



 This rule of thumb has proven successful in providing a defined end point
 for building and 

Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Jürgen Bosch
Regarding the riding hydrogens:
They are obviously not visible, but your protein in solution is also not 
visible and still has those weird riding hydrogens :-)
Use them, they are there. And there was a recent compendium of neutron 
scattering in one of our favorite journals if you really want to see them.

Another rule to add:
When you are 98% done with refinement of your structure, does it really matter - I 
mean from a functional/biological perspective?
Is it wrong to stop at some point?
This of course implies you've already passed rules 3-5 from Robbie.

Just some Espresso-thoughts in the morning

Jürgen 

..
Jürgen Bosch
Johns Hopkins Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Phone: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-3655
http://web.mac.com/bosch_lab/

On Oct 27, 2010, at 1:29, Robbie Joosten robbie_joos...@hotmail.com wrote:

 Dear Anthony,
 
 That is an excellent question! I believe there are quite a lot of 'rules of 
 thumb' going around. Some of them seem to lead to very dogmatic thinking and 
 have caused (refereeing) trouble for good structures and lack of trouble for 
 bad structures. A lot of them were discussed at the CCP4BB so it may be nice 
 to try to list them all.
 
 
 Rule 1: If Rwork < 20%, you are done.
 Rule 2: If R-free - Rwork > 5%, your structure is wrong.
 Rule 3: At resolution X, the bond length rmsd should be less than Y (what is the 
 rmsd thing people keep talking about?)
 Rule 4: If your resolution is lower than X, you should not 
 use_anisotropic_Bs/riding_hydrogens
 Rule 5: You should not build waters/alternates at resolutions lower than X
 Rule 6: You should do the final refinement with ALL reflections
 Rule 7: No one cares about getting the carbohydrates right
 
 
 Obviously, this list is not complete. I may also have overstated some of the 
 rules to get the discussion going. Any additions are welcome.
 
 Cheers,
 Robbie Joosten
 Netherlands Cancer Institute
 
 Apologies if I have missed a recent relevant thread, but are lists of
 rules of thumb for model building and refinement?
 
 
 
 
 
 Anthony
 
 
 
 Anthony Duff Telephone: 02 9717 3493 Mob: 043 189 1076
 
 
 


Re: [ccp4bb] Help with Optimizing Crystals

2010-10-27 Thread Annie Hassell
Matt-

You might want to try heating your protein to get rid of unfolded/improperly 
folded protein.  We have used 37C for 10 min with good success, but a time 
course at different temperatures is the best way to determine which parameters 
are optimal for your protein.  Heat, chill it on ice, centrifuge, then set up 
your crystallization trays.  It's a pretty quick test to see if this will work 
for your protein.

Do you have any ligands for your protein?  These have often been the key to 
getting good crystals in our lab.  If you do have good ligands, you may want to 
express and/or purify your protein in the presence of these compounds.

Good Luck!
annie

From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Jürgen 
Bosch
Sent: Tuesday, October 26, 2010 5:46 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Help with Optimizing Crystals



Hi.

Here is some additional information.

1.  The purification method that I used included Ni, tag cleavage, and SEC as a 
final step.  I have tried samples from three different purification batches 
that range in purity, and even the batch with the worst purity seems to produce 
crystals.
Resource Q? Two or more species perhaps? Does it run as a monomer, dimer or 
multimer on your SEC?



2. The protein is a proteolyzed fragment since the full length version did not 
crystallize.  Mutagenesis and methylation, however, may be techniques to 
consider since the protein contains quite a few lysines.

3. There are not any detergents in the buffer, so these are not detergent 
crystals.  The protein buffer just contains Tris at pH 8, NaCl, and DTT.

4. Some experiments that I have done thus far seem to suggest that the crystals 
are protein.  Izit dye soaks well into the crystals, and the few crystals that 
I shot previously did not produce any diffraction pattern whatsoever.  However, 
I have had difficulty seeing them on a gel and they are a bit tough to break.
Do they float or do they sink quickly when you try to mount them?


5.  I tried seeding previously as follows: I broke some crystals, made a seed 
stock, dipped in a hair, and did serial streak seeding.  After seeding, I 
usually saw small disks or clusters along the path of the hair but nothing 
larger or better looking.

I also had one more question.  Has anyone had an instance where changing the 
precipitation condition or including an additive improved diffraction but did 
not drastically change the shape of the protein?  If so, I may just try further 
optimization with the current conditions and shoot some more crystals.

The additive screen from Hampton is not bad and can make a big difference.


A different topic: is it a direct cryo you are using as a condition? If 
not, what do you use as a cryo? Have you tried the old-fashioned way of 
shooting at crystals at room temperature using capillaries (WTHIT ?)

You might be killing your crystal by trying to cryo it, is what I'm trying to 
say here.

Jürgen



Thanks for all the helpful advice thus far,
Matt




Re: [ccp4bb] Against Method (R)

2010-10-27 Thread Ed Pozharski
On Tue, 2010-10-26 at 21:16 +0100, Frank von Delft wrote:
 the errors in our measurements apparently have no 
 bearing whatsoever on the errors in our models 

This would mean there is no point trying to get better crystals, right?
Or am I also wrong to assume that the dataset with higher I/sigma in the
highest resolution shell will give me a better model?

On a related point - why is Rmerge considered to be the limiting value
for the R?  Isn't Rmerge a poorly defined measure itself that
deteriorates at least in some circumstances (e.g. increased redundancy)?
Specifically, shouldn't ideal R approximate 0.5*sigmaI/I?
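As a toy illustration of the redundancy point (simulated intensities with a flat 10% error, not real data): Rmerge grows with multiplicity, while the multiplicity-corrected Rmeas of Diederichs & Karplus stays flat:

```python
import numpy as np

rng = np.random.default_rng(1)

I_true = 1000.0      # toy "true" intensity, the same for every reflection
sigma = 100.0        # 10% measurement error
n_refl = 20000

results = {}
for mult in (2, 4, 8, 16):
    # mult repeated measurements of each of n_refl reflections
    obs = I_true + rng.normal(0.0, sigma, size=(n_refl, mult))
    dev = np.abs(obs - obs.mean(axis=1, keepdims=True))
    rmerge = dev.sum() / obs.sum()
    # Rmeas scales each reflection's deviations by sqrt(n/(n-1));
    # with constant multiplicity the factor pulls outside the sums.
    rmeas = rmerge * np.sqrt(mult / (mult - 1))
    results[mult] = (rmerge, rmeas)
    print(f"multiplicity {mult:2d}: Rmerge = {rmerge:.4f}  Rmeas = {rmeas:.4f}")
```

So the deterioration of Rmerge with increasing redundancy is purely a bookkeeping artefact of the statistic, not of the data.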

Cheers,

Ed.



-- 
I'd jump in myself, if I weren't so good at whistling.
   Julian, King of Lemurs


Re: [ccp4bb] Against Method (R)

2010-10-27 Thread Ethan A Merritt

On Wed, 27 Oct 2010, Frank von Delft wrote:


So, since the experimental error is only a minor contribution to the total
error, it is arguably inappropriate to use it as a weight for each hkl.

I think your logic has run off the track.  The experimental error is an
appropriate weight for the Fobs(hkl) because that is indeed the error
for that observation.  This is true independent of errors in the model.
If you improve the model, that does not magically change the accuracy
of the data.

Sorry, still missing something:

In the weighted Rfactor, we're weighting by the 1/sig**2 (right?)  And the 
reason for that is, presumably, that when we add a term (Fo-Fc) but the Fo is 
crap (huge sigma), we need to ensure we don't add very much of it -- so we 
divide the term by the huge sigma.


Correct.

But what if Fc also is crap?  Which it patently is:  it's not even within 20% 
of Fo, never mind vaguely within sig(Fo).  Why should we not be 
down-weighting those terms as well?


Because here we want the exact opposite.  If Fc is hugely different from a
well-measured Fobs then it is a sensitive indicator of a problem with the model.
Why would we want to down-weight it? 
Consider the extreme case:  if you down-weight all reflections for which Fc does
not already agree with Fo, then you will always conclude that the current model,
no matter what random drawer you pulled it from, is in fine shape.

Or can we ignore that because, since all terms are crap, we'd simply be 
down-weighting the entire Rw by a lot, and we'd be doing it for the Rw of 
both models we're comparing, so they'd cancel out when we take the ratio 
Rw1/Rw2?


Not sure I follow this.

But if we're so happy to fudge away the huge gorilla in the room, why would 
we need to be religious about the little gnats on the floor (the sig(Fo))? 
Is there then really a difference between R1/R2 and Rw1/Rw2, for all 
practical purposes?


Yes.  That was the message of the 1970 Ford & Rollet paper that Ian provided a
link for.

 Ethan


(Of course, this is all for the ongoing case we don't know how to model the 
R-factor gap.  And no, I haven't played with actual numbers...)


phx.



Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Bernhard Rupp (Hofkristallrat a.D.)
Dear Young and Impressionable readers:

I second-guess here that Robbie's intent - after re-refining many many PDB
structures, seeing dreadful things, and becoming a hardened cynic - is to
provoke more discussion in order to put in perspective - if not debunk -
almost all of these rules. 

So it may be better to pretend you have never heard of these rules. Your
crystallographic life might be a happier and less biased one.

If you follow this simple procedure (not a rule)

The model that fits the primary evidence (minimally biased electron density)
best and is at the same time physically meaningful, is the best model, i.e.,
all plausibly accountable electron density (and not more) is modeled.

This process of course does require a little work (like looking through all
of the model, not just the interesting parts, and thinking what makes sense)
but may lead to additional and unexpected insights. And in almost all cases,
you will get a model with plausible statistics, without any reliance on
rules. 

For some decisions regarding global parameterizations you have to apply more
sophisticated tests such as those Ethan pointed out (HR tests) or Ian uses
(LL-tests). And once you know how to do that, you do not need any rules of
thumb anyhow.

So I opt for a formal burial of these rules of thumb and a toast to evidence
and plausibility.

And, as Gerard B said in other words so nicely:

Si tacuisses, philosophus mansisses. (Had you kept silent, you would have
remained a philosopher.)

BR

-Original Message-
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Robbie
Joosten
Sent: Tuesday, October 26, 2010 10:29 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

[Robbie Joosten's rules-of-thumb message quoted in full; snipped - see his post above]


Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Simon Kolstoe
Surely the best model is the one that the referees for your paper  
are happy with?


I have found referees to impose seemingly random and arbitrary  
standards that sometimes require a lot of effort to comply with but  
result in little to no impact on the biology being described. Mind you,  
discussions on this email list can be a useful resource for telling  
referees why you don't think you should comply with their rule of  
thumb.


Simon



On 27 Oct 2010, at 20:11, Bernhard Rupp (Hofkristallrat a.D.) wrote:


[Bernhard Rupp's message quoted in full; snipped - see his post above]


Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread VAN RAAIJ , MARK JOHAN
Perhaps we should campaign for it to be obligatory to provide the pdb and 
structure factor file to the journal, and thus referees, upon submission? Then 
they can look for themselves to see that building and refinement have been 
performed satisfactorily. 
Mark

 Surely the best model is the one that the referees for your paper 
 are happy with?

 [remainder of Simon's message and the earlier quoted messages snipped - see the posts above]


Mark J van Raaij
Laboratorio M-4
Dpto de Estructura de Macromoléculas
Centro Nacional de Biotecnología - CSIC
c/Darwin 3, Campus Cantoblanco
28049 Madrid
tel. 91 585 4616
email: mjvanra...@cnb.csic.es 

Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Ed Pozharski
One can also release structure in the PDB prior to submission - I
believe the HPUB option is rarely (if ever) justified.

Ed.

On Wed, 2010-10-27 at 22:56 +0200, VAN RAAIJ , MARK JOHAN wrote:
 perhaps we should campaign for it to be obligatory to provide the pdb
 and structure factor file to the journal, and thus referees, upon
 submission? Then he can look for himself to see that building and
 refinement have been performed satisfactorily. 
 Mark
 
  Surely the best model is the one that the referees for your paper 
  are happy with?
 
  I have found referees to impose seemingly random and arbitrary 
  standards that sometime require a lot of effort to comply with but 
  result in little to no impact on the biology being described. Mind 
  you discussions on this email list can be a useful resource for 
  telling referee's why you don't think you should comply with their 
  rule of thumb.
 
  Simon
 
 
 
  On 27 Oct 2010, at 20:11, Bernhard Rupp (Hofkristallrat a.D.) wrote:
 
  Dear Young and Impressionable readers:
 
  I second-guess here that Robbie's intent - after re-refining many
 many PDB
  structures, seeing dreadful things, and becoming a hardened cynic -
 is to
  provoke more discussion in order to put in perspective - if not
 debunk-
  almost all of these rules.
 
  So it may be better to pretend you have never heard of these rules.
 Your
  crystallographic life might be a happier and less biased one.
 
  If you follow this simple procedure (not a rule)
 
  The model that fits the primary evidence (minimally biased electron
 density)
  best and is at the same time physically meaningful, is the best
 model, i.
  e., all plausibly accountable electron density (and not more) is
 modeled.
 
  This process of course does require a little work (like looking
 through all
  of the model, not just the interesting parts, and thinking what
 makes sense)
  but may lead to additional and unexpected insights. And in almost
 all cases,
  you will get a model with plausible statistics, without any
 reliance on
  rules.
 
  For some decisions regarding global parameterizations you have to
 apply more
  sophisticated tests such as Ethan pointed out (HR tests) or Ian uses
  (LL-tests). And once you know how to do that, you do not need any
 rules of
  thumb anyhow.
 
  So I opt for a formal burial of these rules of thumb and a toast to
 evidence
  and plausibility.
 
  And, as Gerard B said in other words so nicely:
 
  Si tacuisses, philosophus mansisses.
 
  BR
 
 
 

-- 
I'd jump in myself, if I weren't so good at whistling.
   Julian, King of Lemurs


Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Bernhard Rupp (Hofkristallrat a.D.)
They do send both, if you explicitly ask as a referee and threaten otherwise 
not to review, but who 

a)  has and takes the time to make a map and look at the parts relevant to the
discussion

b)  knows how to do that properly and with confidence (otherwise it’s
worthless)

A suggestion to Nature to always pair a crystallographic technical reviewer 
with no stake in the subject of the study with the reviewers evaluating the 
biological (or other thematic) merits was not deemed worthy of a response. It 
would admittedly require intimate knowledge of the field and quite some work 
for the editor, that is for sure.

 

BR

 




Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Nat Echols
On Wed, Oct 27, 2010 at 2:20 PM, Ed Pozharski epozh...@umaryland.eduwrote:

 One can also release structure in the PDB prior to submission - I
 believe the HPUB option is rarely (if ever) justified.


What's to prevent your closest competitor from downloading the structure and
using it to solve and refine his or her own data?  Then all they need to do
is call their buddies from grad school who are now senior journal editors,
and weasel their way into a high-profile article with minimal review.
 Surely everyone who has spent time in academia knows at least one tenured
professor who does this.  In principle, I mostly agree with your argument,
but you'd need to convince all journals to agree to an embargo period for
released-but-unpublished PDB entries - and it would still be very difficult
to enforce.  The PDB's current rules aren't always optimal, but it's not
even close to as big a mess as science publishing.

-Nat


Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Bernhard Rupp (Hofkristallrat a.D.)
> What's to prevent your closest competitor from downloading the structure
> and using it to solve and refine his or her own data?

 

Integrity perhaps? Ahh stupid me – that is a verboten word. My original
title of the recent JApplCryst commentary was a nice alliteration -
‘Scientific inquiry, inference, and integrity in the biomolecular
crystallography curriculum’. As you see, integrity had to go to prevent
liability issues. It does not seem to be a liability to publish nonsense,
though.

 

Then all they need to do is call their buddies from grad school who are now
senior journal editors, and weasel their way into a high-profile article
with minimal review.  Surely everyone who has spent time in academia knows
at least one tenured professor who does this.  In principle, I mostly agree
with your argument, but you'd need to convince all journals to agree to an
embargo period for released-but-unpublished PDB entries - and it would still
be very difficult to enforce.  The PDB's current rules aren't always
optimal, but it's not even close to as big a mess as science publishing.

 

Again, if every technically competent reviewer asks - if deemed necessary -
for coordinates and declines review if they are refused, that might change. 

 

-Nat



Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Bernhard Rupp (Hofkristallrat a.D.)
Sorry I mean coordinates AND data of course.

 

Again, if every technically competent reviewer asks - if deemed necessary -
for coordinates and data, and declines review if they are refused, that might
change. 

 

-Nat



Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Bernhard Rupp (Hofkristallrat a.D.)
 Surely the best model is the one that the referees for your paper are
happy with?

That may be the sad and pragmatic wisdom, but certainly not a truth we
should accept...

 I have found referees to impose seemingly random and arbitrary standards 

a) Reviewers are people belonging to a certain population, characterized by,
say, a property 'review quality' that follows a certain distribution.
Irrespective of the actual shape of that parent distribution, the central
limit theorem informs us that if you sample this distribution reasonably
often, the sampling distribution of the mean will be normal. That means that
half of the reviews will be below average review quality, and half above.

Unfortunately, the mean of that distribution is
b) a function of journal editor quality (they pick the reviewers after all)
and
c) affected by systematic errors such as your reputation and the chance that
you yourself might sit on a reviewer's grant review panel.
By combining a, b, and c you can get a fairly good assessment of the joint
probability of what report you will receive. You will notice that model
quality is not a parameter in this model, because we can neglect marginal
second-order contributions.
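[The central-limit-theorem argument above can be demonstrated with a toy simulation. Illustrative only: the skewed exponential "review quality" distribution, the sample sizes, and all numbers are assumptions, not data.]

```python
# Toy simulation of the argument above (illustrative only: the skewed
# exponential "review quality" distribution and all numbers are assumptions).
import random
import statistics

random.seed(1)
# A deliberately non-normal parent distribution of review quality
parent = [random.expovariate(1.0) for _ in range(100_000)]

# Means of repeated samples of 30 reviews: approximately normal around the
# parent mean, whatever shape the parent had (central limit theorem)
sample_means = [statistics.mean(random.sample(parent, 30)) for _ in range(2_000)]

print(round(statistics.mean(parent), 2))        # close to 1.0 for this parent
print(round(statistics.mean(sample_means), 2))  # close to the parent mean
```

The sampling distribution of the mean narrows as the number of reviews per sample grows, but - as the text notes - none of this involves model quality.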

  Mind you, discussions on this email list can be a useful resource for
telling referees why you don't think you should comply with their rule of
thumb.

I agree and sympathize with your optimism, but I am afraid that those who
might need this education are not the ones who seek it. I.e., reading the bb
complicates matters (simplicity being one benefit of ROTs) and you can't
build an empire wasting time on such things.

Good luck with your reviews!

BR



Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Phoebe Rice
Journal editors need to know when the reviewer they trusted is completely out 
to lunch. So please don't just silently knuckle under!
It may make no difference for Nature, but my impression has been that rigorous 
journals like JMB do care about review quality.
  Phoebe

=
Phoebe A. Rice
Dept. of Biochemistry & Molecular Biology
The University of Chicago
phone 773 834 1723
http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
http://www.rsc.org/shop/books/2008/9780854042722.asp



Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Jacob Keller
What about the possibility of double-blind review? I have actually
wondered why the reviewers should be given the author info--does that
determine the quality of the work? Am I missing some obvious reason
why reviewers should know who the authors are?

JPK


[ccp4bb] Bug in c_truncate?

2010-10-27 Thread Peter Chan

Hello,

I've been struggling with F2MTZ and importing my hkl file into mtz by 'keeping 
existing freeR data'. I keep getting the error "Problem with FREE column in 
input file. All flags apparently identical. Check input file."

At the end of the day, it appears that this only happens with ctruncate and 
not with the old truncate. Has anyone experienced a similar problem?

Peter
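[The reported message suggests ctruncate rejects a free-R column containing only a single value. A minimal sketch of such a check follows - an assumption about the behavior, not ctruncate's actual code. Real MTZ files would be read with CCP4 tools or a library such as gemmi; the flag lists here are illustrative stand-ins.]

```python
# Sketch of the consistency check behind the reported error (an assumption
# about ctruncate's behavior, not its actual code). Real MTZ files would be
# read with CCP4 tools or a library such as gemmi; these lists are stand-ins.

def check_free_column(flags, free_value=0):
    """Reject a FreeR column whose flags are all identical."""
    if len(set(flags)) < 2:
        return "All flags identical - no test set; regenerate FreeR_flag"
    frac_free = sum(f == free_value for f in flags) / len(flags)
    return "OK: {:.1%} of reflections in the free set".format(frac_free)

print(check_free_column([0, 1, 1, 1, 1, 1, 1, 1, 1, 1]))  # a valid 10% split
print(check_free_column([1] * 10))                         # triggers the error
```

A column that is constant can arise when the FREE column is mislabeled or lost on import, which would fit the F2MTZ symptoms described above.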

Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Bernhard Rupp (Hofkristallrat a.D.)
Why not double open review? If I have something reasonable to say, I should
be able to sign it. Particularly if the publicly purported point of review
is to make the manuscript better.  And imagine what wonderful open hostility
we would enjoy instead of all these hidden grudges! You would never have to
preemptively condemn a paper on grounds of suspicion that it is from someone
who might have reviewed you equally loathfully earlier. You actually know that
you are creaming the right bastard!

A more serious question for the editors amongst us: can I publish review
comments, or are they covered under some confidentiality rule? Some of these
gems are quite worthy of public entertainment.

Best, BR 


Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Dima Klenchin

What about the possibility of double-blind review? I have actually
wondered why the reviewers should be given the author info--does that
determine the quality of the work? Am I missing some obvious reason
why reviewers should know who the authors are?


I've always felt (and advocated a long time ago on Usenet) that the current 
review system gets everything exactly backwards. 1) To prevent hatchet-job 
reviews, reviewers should not be anonymous. 2) To prevent systematic 
bias by things completely irrelevant to the review job, reviewers (and 
handling editors!) should not be given authors' names and institutions.


I think that the moment #1 happens, each review will start taking much, 
much longer than it does now. This means that either a lot less would 
ever be reviewed and published or - oh horror - postdocs and graduate 
students would need to be reviewers, too. IMHO, both outcomes are perfectly 
acceptable.


#2 is difficult in practice because self-references and all kinds of hints 
can always be planted to make sure everyone knows the names. But maybe if 
such advertisements are frowned upon by the community, their incidence will 
be low enough not to be a problem?


-- Dima


Re: [ccp4bb] Rules of thumb (was diverging Rcryst and Rfree)

2010-10-27 Thread Artem Evdokimov
It's fun to watch my innocent little comment unfold into a pandemonium of
email :) That's why I love this mailing list.

Seriously though, there seem to be two salient things said by many people
in many different ways:

1. It's a good idea to look at the model in detail, and to pay attention to
structure-based warnings rather than purely number-based ones. Pretty
straightforward.

2. There is a huge gap between the reality of the academic peer-review process
and the (not so silent) desires of the crystallographic community. Not a
surprise either.

Thank goodness I am in industry. We get 'laid off' a lot (a well-recognized
occupational hazard) but at least we don't live & die by our publication
records.

Artem