All good points in the contributions from Alexis and Randy.

To confirm Alexis’ interpretation of my previous post, my main concern about 
the ½ bit threshold for “nominal” resolution was that it is in danger of being 
adopted independently of the requirement (for example by those doing x-ray 
imaging of biological cells). I might be approaching the issue rather 
differently from others as I was interested in the data needed to identify a 
feature with a particular size and contrast in the presence of noise and 
background.

Randy convincingly argues that there is information in the data down to a 0.1 
or 0.01 bit threshold and one should use this data if one has it. One issue 
when collecting data is how to position a detector of limited size. The signal 
to background ratio can generally be improved by putting the detector far away 
but the “actual” resolution of the data might then be compromised if the edge 
of the detector corresponds to a 0.5 bit threshold. However, this setup might 
be the best way of identifying a bound substrate in an electron density map 
even though it compromises any subsequent refinement. Is there a difference 
between an “optical resolution” and a “refinement resolution” from the point of 
view of interpretability? Is this related to Frank’s “need to turn to 
interpretability?” Frank – do you mean interpreting an electron density map of 
some flavour, using machine learning or otherwise?

My conclusion with all this is that we need to define our use of some of the 
terms such as resolution, noise, background, interpretability etc. Doing this 
would mean that new terms would be required as any meaningful definition for 
any term would be likely to limit its use. Most of what is below comes from the 
discussions on this email thread, for example in distinguishing between noise 
and parts of electron density maps which can’t be interpreted by means of a 
model. Here is a first attempt. The list is far too long but probably still 
insufficient.


1.       Resolution PDB – Like it or not I think James has a point. “I 
personally think that the resolution assigned to the PDB deposition should 
remain the classical I/sigI > 3 at 80% rule”. We can of course discuss the 
details but his point is a valid one.


2.       Resolution PDB refinement - Can we get the PDB to accept an additional 
term defined by  both the resolution and corresponding information content of 
the data actually used for refinement?



3.       Resolution Instrument –  For a telescope it is defined by the aperture 
of the lens. For an MX instrument it is defined by the Bragg resolution at the 
edge of the detector. John Spence recently reminded me that “according to the 
old definitions, resolution should be a property of an instrument (eg a 
microscope), not the sample.” This is of course a harsh definition but 
nevertheless a useful one.  However, see point 13 below.


4.       Noise in the data – This is a property of the rawest measurement one 
has and corresponds to the best one can do to estimate the Poisson statistics 
for photon counting. I hope Dale, on the “question of density” topic, would 
accept this as noise. The manufacturers of detectors often put an imperfect 
model of the detector between the raw data and the user of the instrument so 
the estimate of the intensity and noise is already model dependent. Ideally 
the noise can still be estimated from the intensity (no. of photons in each detector 
pixel). In some respects this “raw” data is the only thing one is certain of.
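
As an aside on this point, here is a minimal sketch of what I mean (Python, purely illustrative, the function and variable names are my own, and it assumes each raw pixel value really is an independent Poisson count):

import numpy as np

def spot_intensity_and_sigma(spot_pixels, background_pixels):
    # Poisson counting: the variance of a raw count equals the count itself.
    spot = np.asarray(spot_pixels, dtype=float)
    bkg = np.asarray(background_pixels, dtype=float)
    bkg_per_pixel = bkg.mean()
    # Variance of the estimated mean background per pixel
    bkg_mean_var = bkg.sum() / bkg.size**2
    n_spot = spot.size
    intensity = spot.sum() - n_spot * bkg_per_pixel
    variance = spot.sum() + n_spot**2 * bkg_mean_var
    return intensity, np.sqrt(variance)

Anything more sophisticated than this (gain maps, profile fitting, a non-flat background model) is already a model sitting between the raw counts and the user.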


5.       Background in the data – This is a part of the data one is not 
primarily interested in (e.g. air scatter, solvent scatter). The data reduction 
programs have a model for the background in order to subtract it, taking into 
account the noise in the data as defined above. This model will not be perfect 
- does the background vary linearly across a spot if this is the model? 
(Similarly the profiles for any profile fitting will not be perfect).



6.       Integrated intensities – derived with individual error estimates by 
the integration programs.



7.       Other systematic errors in the data. This could include radiation 
damage during data collection. The result would be that metrics such as CC1/2, 
FSC etc. derived from the integrated intensity measurements will be worse than 
expected from the error estimates.


8.       Noise in the electron density maps – This is a function of the noise 
in the data (Parseval’s theorem). This can be improved by exposing for longer. 
See also background in the electron density maps.
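
For what it is worth, the standard way of making this quantitative (assuming independent random errors in the structure factors, which is itself an approximation) follows directly from Parseval’s theorem: the mean-square noise level in the map is

  \sigma^2(\rho) \approx (1/V^2) \sum_h \sigma^2(F_h)   (sum over all reflections h)

so, for example, halving every \sigma(F) halves the rms noise in the map, which is what exposing for longer buys you until systematic errors take over.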


9.       Background in the electron density maps – This is essentially parts of 
the maps which one cannot model. It is not noise. However, the systematic 
errors in the data (point 7) could contribute to it. Recent examples in this 
thread include overlapping disordered waters merging into bulk solvent. If one 
can interpret it in terms of a model, it is no longer background. One can have 
a standard deviation for the background which takes account of the variation in 
the background which cannot be modelled.


10.   Contrast against background – Our disordered sidechain or partially 
occupied substrate has to be distinguished from this background. A modest 
telescope can easily resolve the moons of Jupiter (2-10 arc minutes separation) 
on a dark night but in the middle of Hong Kong (apparently the world’s brightest 
city) it would struggle. For MX, increasing the exposure might help if there is 
a significant noise component. After this, the interpretability will not 
improve unless higher order Fourier terms become more significant, thereby 
allowing modelling to improve. Increased exposure could be counterproductive if 
radiation damage results.


11.   Dynamic range requirements for the image. Does one want to see the 
hydrogens on an electron density map in the presence of both noise and 
background? The contrast of the hydrogens will be low compared to the carbon, 
nitrogen and oxygen atoms. This is a similar situation to the Rose criterion 
for x-ray imaging where one wants to see a protein or organelle against the 
varying density of the cytosol. Another example would be to see both Jupiter 
and the fainter moons (e.g. moon number 5) in the presence of background from 
the sky.
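
The Rose criterion mentioned here can be put in one line (in its simplest form, and quoting it from memory rather than from a specific source): a feature of fractional contrast C against a background delivering roughly N counts per resolution element is only reliably detected when

  C \sqrt{N} \gtrsim 5

so the required dose scales as 1/C^2, which is why low-contrast features (hydrogens, an organelle against the cytosol, a faint moon against sky glow) are so much more expensive to see than high-contrast ones.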



12.   Interpretability – Despite the fact that our partially occupied substrate 
has a similar average density to the background, we often have some idea of the 
geometry of the substrate we wish to position. Individual atoms might be poorly 
defined if they are within the standard deviation of the background but a chain 
of them with plausible geometry might be identifiable. Machine learning 
might be able to accomplish this. Again, for x-ray imaging, filaments and 
membranes are easier to observe than single particles.



13.  Information content at a particular resolution – a far more useful metric 
than resolution. For the case of atoms of approximately equal density (e.g. 
C,N,O atoms) the contrast is high (e.g.  100% depending on how one defines 
contrast). For this case, a half bit FSC threshold might be of some use for 
predicting whether one would observe individual atoms on an electron density 
map.  It would also apply at lower resolution to distinguishing sidechains 
packing together in the ordered interior of proteins. Although the weighting 
schemes for electron density map calculations should be able to handle low 
information content data, it is not clear to me whether including data at an 
information content of 0.01 would result in a significant improvement in the 
maps. Perhaps it would for the case of difference maps where Fc is high and Fo 
is low. For refinement, including the low information content data is obviously 
very helpful.
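
To make point 13 concrete, here is a small sketch (Python; the conventions are my own reading of the usual FSC/SNR relations and of the van Heel & Schatz 2005 threshold formula, and the function names are invented, so treat the details as assumptions rather than agreed standards):

import numpy as np

def snr_from_fsc(fsc):
    # Spectral SNR implied by an FSC value (large-n relation FSC = SNR/(1+SNR))
    return fsc / (1.0 - fsc)

def bits_per_fourier_term(snr):
    # Information per complex Fourier component, in the convention that makes
    # "half a bit" correspond to SNR = (sqrt(2) - 1)/2 ~ 0.207
    return np.log2(1.0 + 2.0 * snr)

def half_bit_threshold(n_eff, snr=0.2071):
    # n-dependent FSC threshold for a target SNR (after van Heel & Schatz, 2005);
    # n_eff = number of independent Fourier samples in the resolution shell
    rn = 1.0 / np.sqrt(np.asarray(n_eff, dtype=float))
    return (snr + 2.0 * np.sqrt(snr) * rn + rn) / (snr + 1.0 + 2.0 * np.sqrt(snr) * rn)

print(bits_per_fourier_term(snr_from_fsc(0.143)))  # ~0.42 bits per term at FSC = 0.143
print(half_bit_threshold([25, 400, 1e5]))          # ~0.42, ~0.24, ~0.18

On these conventions an FSC of 0.143 carries a little under half a bit per Fourier term, and the half-bit threshold only settles down to the familiar ~0.17 level when a shell contains many independent samples.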



14.   “Local resolution estimation - using small sub-volumes in low-res parts 
of maps.” I am sure Alexis is correct regarding fixed thresholds even though 
they may work for some cases. The “low resolution parts of the maps” does 
conflict somewhat with my harsh use of the term resolution if it is restricted 
to instrument resolution. The same Fourier components contribute to these parts 
of the map as to the other parts. Even if the B factors in this part were high, 
one would still need to measure these Fourier components to identify this. 
Should one say “parts of the map with a low information content at this 
resolution”? Catchy, isn’t it?
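
A rough counting argument (my numbers, order-of-magnitude only) for why the small sub-volumes are the problem case: a shell of radius k voxels and unit thickness in Fourier space contains about 4*pi*k^2 samples, roughly half of which are independent for a real map because of Friedel symmetry. For a 20^3-voxel sub-volume, a shell a quarter of the way to Nyquist (k = 2.5) therefore holds only ~40 independent samples, which is exactly the regime where the bias and variance of the FSC as an estimator of SNR can no longer be waved away.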



15.   “Turn a target SNR into an FSC threshold” – Yes, this is exactly what I 
would like to see, though I guess the N in SNR might conflict with the strict 
definition of noise.



16.   Information content in the above list. Hopefully very little as none of 
it is surprising. Not sure how one calculates misinformation, disinformation 
content etc.

I am not happy about some of the terms in this list. There must be a better 
phrase than “background in an electron density map” which avoids the term noise.
I also have to read the papers by Randy Read, Robert Oeffner & Airlie McCoy 
(http://journals.iucr.org/d/issues/2020/03/00/ba5308/index.html) and by Alexis 
Rohou (https://www.biorxiv.org/content/10.1101/2020.03.01.972067v1). Some of 
the issues in the list above might be clarified in these articles.

Thanks all for the interesting discussions

Colin

From: CCP4 bulletin board <[email protected]> On Behalf Of Randy Read
Sent: 09 March 2020 12:37
To: [email protected]
Subject: Re: [ccp4bb] [3dem] Which resolution?

Hi Alexis,

A brief summary of the relevant points in the paper that Pavel mentioned 
(https://journals.iucr.org/d/issues/2020/03/00/ba5308):

The paper is about how to estimate the amount of information gained by making a 
diffraction measurement, not really about defining resolution limits.  Based on 
the results presented, I would argue that it's not a good idea to define a 
traditional resolution cutoff based on average numbers in a resolution shell, 
because there will almost certainly be useful data beyond that point (as long 
as you account properly for the effect of the measurement error, which is 
something that needs work for a lot of programs!).  In our program Phaser, we 
use all of the data provided to scale the data set and refine parameters that 
define the systematic variation in diffraction intensity (and hence signal).  
In this step, knowing which reflections are weak helps to define the parameters 
characterising the systematic variation due to effects like anisotropy and 
translational non-crystallographic symmetry (tNCS).  However, after this point 
the information gained by measuring any one of these reflections tells you how 
much power that observation will have in subsequent likelihood-based 
calculations.  As the information gain drops, the usefulness of the observation 
in determining refineable parameters with the likelihood also drops.  In the 
context of Phaser, we've found that there's a small amount of benefit from 
including reflections down to an information gain of 0.01 bit, but below that 
the observations can safely be ignored (after the scaling, anisotropy and tNCS 
steps).

However, it's possible that average information content is a useful way to 
think about *nominal* resolution.  We should probably do this systematically, 
but our impression from looking at a variety of deposited diffraction data is 
that the average information gain in the highest resolution shell is typically 
around 0.5 to 1 bit per reflection.  So it looks like the half-bit level agrees 
reasonably well with how people currently choose their resolution limits.

For the future, what I would like to see is, first, that everyone adopts 
something like our LLGI target that accounts very well for the effect of 
intensity measurement error:  the current Rice likelihood target using French & 
Wilson posterior amplitudes breaks down for very weak data with very low 
information gain.  Second, I would like to see people depositing at least their 
unpruned intensity data:  not just derived amplitudes, because the conversion 
from intensities to amplitudes cannot be reversed effectively, and not 
intensity data prescaled to remove anisotropy.  Third, I would like to see 
people distinguishing between nominal resolution (which is a useful number to 
make a first guess about which structures are likely to be most accurate) and 
the actual resolution of the data deposited.  There are diminishing returns to 
including weaker and weaker data, but the resolution cutoff shouldn't exclude a 
substantial number of observations conveying more than, say, 0.1 bit of 
information.

Best wishes,

Randy

On 9 Mar 2020, at 04:06, Alexis Rohou 
<[email protected]> wrote:

Hi Colin,

It sounds to me like you are mostly asking about whether 1/2-bit is the 
"correct" target to aim for, the "correct" criterion for a resolution claim. I 
have no view on that. I have yet to read Randy's work on the topic - it sounds 
very informative.

What I do have a view on is, once one has decided one likes 1/2 bit information 
content (equiv to SNR 0.207) or C_ref = 0.5, aka FSC=0.143 (equiv to SNR 0.167) 
as a criterion, how one should turn that into an FSC threshold.

You say you were not convinced by Marin's derivation in 2005. Are you convinced 
now and, if not, why?

No. I was unable to follow Marin's derivation then, and last I tried (a couple 
of years back), I was still unable to follow it. This is despite being 
convinced that Marin is correct that fixed FSC thresholds are not desirable. To 
be clear, my objections have nothing to do with whether 1/2-bit is an 
appropriate criterion, they're entirely about how you turn a target SNR into an 
FSC threshold.

A few years ago, an equivalent thread on 3DEM/CCPEM (I think CCP4BB was spared) 
led me to re-examine the foundations of the use of the FSC in general. You can 
read more details in the manuscript I posted to bioRxiv a few days ago 
(https://www.biorxiv.org/content/10.1101/2020.03.01.972067v1), but essentially 
I conclude that:

(1) fixed-threshold criteria are not always appropriate, because they rely on a 
biased estimator of the SNR, and in cases where n (the number of independent 
samples in a Fourier shell) is low, this bias is significant.
(2) thresholds in use today do not involve a significance test; they just 
ignore the variance of the FSC as an estimator of SNR; to caricature, this is 
as if the whole field were satisfied with p values of ~0.5.
(3) as far as I can tell, ignoring the bias and variance of the FSC as an 
estimator of SNR is _mostly OK_ when doing global resolution estimates, when 
the estimated resolution is pretty high (large n) and when the FSC curve has a 
steep falloff. That's a lot of hand-waving, which I think we should aim to 
dispense with.
(4) when doing local resolution estimation using small sub-volumes in low-res 
parts of maps, I'm convinced the fixed thresholds are completely off.
(5) I see no good reason to keep using fixed FSC thresholds, even for global 
resolution estimates, but I still don't know whether Marin's 1/2-bit-based FSC 
criterion is correct (if I had to bet, I'd say not). Aiming for 1/2-bit 
information content per Fourier component may be the correct target to aim for, 
and fixed thresholds are definitely not the way to go, but I am not convinced 
that the 2005 proposal is the correct way forward.
(6) I propose a framework for deriving non-fixed FSC thresholds based on 
desired SNR and confidence levels. Under some conditions, my proposed 
thresholds behave similarly to Marin's 1/2-bit-based curve, which convinces me 
further that Marin really is onto something.

To re-iterate: the choice of target SNR (or information content) is independent 
of the choice of SNR estimator and of statistical testing framework.

Hope this helps,
Alexis



On Sat, Feb 22, 2020 at 2:06 AM Nave, Colin (DLSLtd,RAL,LSCI) 
<[email protected]> wrote:
Alexis
This is a very useful summary.

You say you were not convinced by Marin's derivation in 2005. Are you convinced 
now and, if not, why?

My interest in this is that FSC half-bit thresholds are in danger of 
being adopted elsewhere because they are becoming standard for protein 
structure determination (by EM or MX). If it is used for these mature 
techniques it must be right!

It is the adoption of the ½ bit threshold I worry about. I gave a rather weak 
example for MX which consisted of partial occupancy of side chains, substrates 
etc. For x-ray imaging a wide range of contrasts can occur and, if you want to 
see features with only a small contrast above the surroundings then I think the 
half bit threshold would be inappropriate.

It would be good to see a clear message from the MX and EM communities as to 
why an information content threshold of ½ a bit is generally appropriate for 
these techniques and an acknowledgement that this threshold is 
technique/problem dependent.

We might then progress from the bronze age to the iron age.

Regards
Colin



From: CCP4 bulletin board <[email protected]> 
On Behalf Of Alexis Rohou
Sent: 21 February 2020 16:35
To: [email protected]
Subject: Re: [ccp4bb] [3dem] Which resolution?

Hi all,

For those bewildered by Marin's insistence that everyone's been messing up 
their stats since the bronze age, I'd like to offer my understanding of the 
the situation. More details in this thread from a few years ago on the exact 
same topic:
https://mail.ncmir.ucsd.edu/pipermail/3dem/2015-August/003939.html
https://mail.ncmir.ucsd.edu/pipermail/3dem/2015-August/003944.html

Notwithstanding notational problems (e.g. strict equations as opposed to 
approximation symbols, or omission of symbols to denote estimation), I believe 
Frank & Al-Ali and "descendent" papers (e.g. appendix of Rosenthal & Henderson 
2003) are fine. The cross terms that Marin is agitated about indeed do in fact 
have an expectation value of 0.0 (in the ensemble; if the experiment were 
performed an infinite number of times with different realizations of noise). I 
don't believe Pawel or Jose Maria or any of the other authors really believe 
that the cross-terms are orthogonal.

When N (the number of independent Fourier voxels in a shell) is large enough, 
mean(Signal x Noise) ~ 0.0 is only an approximation, but a pretty good one, 
even for a single FSC experiment. This is why, in my book, derivations that 
depend on Frank & Al-Ali are OK, under the strict assumption that N is large. 
Numerically, this becomes apparent when Marin's half-bit criterion is plotted - 
asymptotically it has the same behavior as a constant threshold.

So, is Marin wrong to worry about this? No, I don't think so. There are indeed 
cases where the assumption of large N is broken. And under those circumstances, 
any fixed threshold (0.143, 0.5, whatever) is dangerous. This is illustrated in 
figures of van Heel & Schatz (2005). Small boxes, high-symmetry, small objects 
in large boxes, and a number of other conditions can make fixed thresholds 
dangerous.

It would indeed be better to use a non-fixed threshold. So why am I not using 
the 1/2-bit criterion in my own work? While numerically it behaves well at most 
resolution ranges, I was not convinced by Marin's derivation in 2005. 
Philosophically though, I think he's right - we should aim for FSC thresholds 
that are more robust to the kinds of edge cases mentioned above. It would be 
the right thing to do.

Hope this helps,
Alexis



On Sun, Feb 16, 2020 at 9:00 AM Penczek, Pawel A 
<[email protected]> wrote:
Marin,

The statistics in the 2010 review is fine. You may disagree with the assumptions, but I 
can assure you the “statistics” (as you call it) is fine. Careful reading of 
the paper would reveal to you this much.
Regards,
Pawel

On Feb 16, 2020, at 10:38 AM, Marin van Heel 
<[email protected]> wrote:

Dear Pawel and All others ....
This 2010 review is - unfortunately - largely based on the flawed statistics I 
mentioned before, namely on the a priori assumption that the inner product of a 
signal vector and a noise vector is ZERO (an orthogonality assumption).  The 
(Frank & Al-Ali 1975) paper we have refuted on a number of occasions (for 
example in 2005, and most recently in our BioRxiv paper) but you still take 
that as the correct relation between SNR and FRC (and you never cite the 
criticism...).
Sorry
Marin

On Thu, Feb 13, 2020 at 10:42 AM Penczek, Pawel A 
<[email protected]> wrote:
Dear Teige,

I am wondering whether you are familiar with

Resolution measures in molecular electron microscopy. Penczek PA. 
Methods Enzymol. 2010;482:73-100. doi: 10.1016/S0076-6879(10)82003-8.

You will find there answers to all questions you asked and much more.

Regards,
Pawel Penczek


------
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research     Tel: +44 1223 336500
The Keith Peters Building                    Fax: +44 1223 336827
Hills Road                                   E-mail: [email protected]
Cambridge CB2 0XY, U.K.
www-structmed.cimr.cam.ac.uk

