Dear Kay,

Arguably, the resolution of a structure is the most important number to
look at; it is definitely the first to be examined, and often the only one
examined by non-structural biologists.

Since this number conveys so much concerning the quality/reliability of
the structure, it is not surprising that we need to get this one
parameter right.

Let us examine a hypothetical situation, in which the 2.2-2.0 A
resolution shell of a data set has 20% completeness. Is this a 2.0 A
resolution structure? While you make a sound argument that including
those data may result in a better-refined model (more observations, more
restraints), I would not consider that model to be of the same quality as
one refined against a data set with >90% completeness in that resolution
shell.
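To make the arithmetic concrete, here is a toy sketch in Python; the
shell boundaries and reflection counts are invented for illustration and
do not come from any real data set:

```python
# Toy per-shell completeness calculation.
# "possible" = reflections theoretically measurable in the shell,
# "observed" = reflections actually measured. All counts are invented.
shells = [
    # (shell label in A, possible, observed)
    ("2.6-2.4", 1500, 1470),
    ("2.4-2.2", 1800, 1690),
    ("2.2-2.0", 2100, 420),   # the hypothetical 20% shell
]

for label, possible, observed in shells:
    completeness = 100.0 * observed / possible
    print(f"{label} A: {completeness:.0f}% complete")
```

The point of the sketch is only that completeness is a per-shell ratio,
so a single overall number can hide a nearly empty outer shell.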

As I see it, there are two issues here. The first is whether to include
such data in refinement at all. I am not sure whether low completeness
(especially if it is non-random) can be detrimental to an otherwise
correct model, but I will let others weigh in on that.

The second is where to declare the resolution limit of a particular data
set. To my mind, high completeness (the term "high" needs a precise
definition) better describes the true resolution limit of the
diffraction, and with it what I can conclude about the quality of the
refined model.
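If one wanted to encode such a convention, a minimal sketch might look
like the following; the 90% threshold and the shell numbers are arbitrary
illustrations (exactly the thing that needs a precise definition), not
any accepted standard:

```python
def resolution_limit(shells, threshold=90.0):
    """Return the high-resolution edge (d_min, in A) of the last shell,
    ordered from low to high resolution, whose completeness meets the
    threshold. The 90% default is an arbitrary illustration."""
    limit = None
    for d_min, completeness in shells:
        if completeness >= threshold:
            limit = d_min
        else:
            break
    return limit

# (d_min of shell in A, completeness %) - invented numbers
shells = [(2.4, 98.0), (2.2, 95.0), (2.0, 20.0)]
print(resolution_limit(shells))  # reports 2.2, not 2.0
```

Under this convention the hypothetical data set above would be declared a
2.2 A structure even if the sparse 2.2-2.0 A data were still used in
refinement, which keeps the two issues separate.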

My two cents.

Arnon Lavie

On Fri, April 18, 2014 6:51 pm, Kay Diederichs wrote:
> Hi everybody,
>
> since we seem to have a little Easter discussion about crystallographic
> statistics anyway, I would like to bring up one more topic.
>
> A recent email sent to me said: "Another referee complained that the
> completeness in that bin was too low at 85%" - my answer was that I
> consider the referee's assertion an (unfortunately not untypical) case
> of severe statistical confusion. Actually, there is no reason at all to
> discard a resolution shell just because it is incomplete - and if there
> were a cutoff, what would it be? What constitutes "too low"?
>
> The benefit of also including incomplete resolution shells is that
> every reflection constitutes a restraint in refinement (and thus
> reduces overfitting), and contributes its little bit of detail to the
> electron density map. Some people may be misled by a wrong
> understanding of the "cats and ducks" examples by Kevin Cowtan:
> omitting further data from maps makes Fourier ripples/artifacts worse,
> not better.
>
> The unfortunate consequence of the referee's opinion (and its
> enforcement and implementation in papers) is that the structures that
> result from the enforced re-refinement against truncated data are
> _worse_ than the original structures refined against data that included
> the "incomplete" resolution shells.
>
> So could we as a community please abandon this inappropriate and
> unjustified practice - of course after proper discussion here?
>
> Kay
>
>
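
P.S. Kay's point about Fourier truncation can be illustrated with a
minimal, self-contained 1D sketch; a square wave stands in for the
"density" (this is a toy analogy, not real crystallographic data).
Truncating the Fourier series earlier makes the ripple overshoot larger,
not smaller:

```python
import math

def box_reconstruction(x, n_terms):
    """Partial Fourier series of a square wave: sum over odd k of
    4/(pi*k) * sin(k*x). Truncating the series is analogous to
    omitting high-resolution reflections from a map."""
    total = 0.0
    for k in range(1, 2 * n_terms, 2):  # odd harmonics 1, 3, ..., 2n-1
        total += 4.0 / (math.pi * k) * math.sin(k * x)
    return total

def max_overshoot(n_terms, samples=2000):
    """Largest excursion above the true plateau value (1.0) on (0, pi)."""
    return max(box_reconstruction(math.pi * i / samples, n_terms) - 1.0
               for i in range(1, samples))

# Fewer Fourier terms -> bigger ripples, not smaller ones.
print(max_overshoot(2))   # ~0.20 with only two terms
print(max_overshoot(50))  # ~0.09 with fifty terms
```

The overshoot shrinks toward the Gibbs limit as more terms are kept, so
discarding measured shells only makes the artifacts worse, as you say.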
