Just to add my two ha'porth.

I discussed this some years ago with Garib, just after I'd added the
anisotropic cutoff to the resolution limits in Mosflm (mentioned by
Andrew below). As I remember (and this is an invitation to Garib to
contribute and correct me here!), the answer went along these lines:
the systematically missing volumes of data would upset the maximum
likelihood model used in refinement (which assumed an isotropic
diffraction model?), so refinement of a molecular model against
anisotropically truncated data would be badly affected.

I think I would tend to follow Pierre's route, though, since the model I implemented only defines the resolution along a*, b* and c*, and the sigma cutoff is probably more flexible.

Having said that, there are one or two users out there who have used the anisotropic cutoffs and have been very happy with the results (or at least that's what they said to me).

On 16 Sep 2009, at 11:13, Pierre Rizkallah wrote:

Hi Everyone,

I echo Andrew's thanks for the summary offered by Justin.

I would like to mention another way to trim the weak patches from anisotropic diffraction patterns 'at source', as it were, in MOSFLM: specifying a sigma cutoff applied to each image.

from the manual:
RESOLUTION [ <lowres> ] <highres> [CUTOFF <sigcut>]
The CUTOFF subkeyword allows the resolution limit of reflections written to the MTZ file to be different for each image. The resolution limit is set as that resolution at which I/sigma(I) drops below "sigcut". The I/sigma(I) for fully recorded reflections (if any) is used, otherwise partials. The default sigcut is 0.0.
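
For example (an invented keyword line to illustrate the syntax, not
taken from a real run):

RESOLUTION 30.0 1.8 CUTOFF 2.0

would integrate to 1.8 A but, for each image, only write reflections
to the MTZ file out to the resolution at which I/sigma(I) falls
below 2.0.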

I have used this option over the past two years on one particular data set, which certainly lost all its weak high-resolution spots, leaving an ellipsoid of reflections. The merging stats improved significantly, but the completeness in the outer shells went down significantly too. I did not compare maps with and without the cutoff in this particular case.

If the drop-off is severe, one can imagine the effect on the maps would be streakiness. There is no win-win situation, unfortunately. Some people set the resolution limit to be half-way between that of the best and worst directions, and take everything, weak and strong. It is a sacrifice of some good data for the sake of the overall quality, and produces distortions of its own, maybe without the streaks.

Pierre Rizkallah


**********************************************************************
Dr. Pierre Rizkallah, Senior Lecturer in Structural Biology, WHRI, School of Medicine, Academic Avenue, Heath Park, Cardiff CF14 4XN
email: rizkall...@cf.ac.uk     phone +44 29 2074 2248

On 16 Sep 2009, at 9:24, A Leslie <and...@mrc-lmb.cam.ac.uk> wrote:
I would like to thank Justin for his summary of this topic, which I'm
sure many people found of interest, and is very much in the spirit of
the bulletin board.

I would just like to correct one factual error: it has been
possible to specify anisotropic resolution limits to MOSFLM for many
years. The appropriate keywords (described in the MOSFLM "Help"
documentation) are:

RESOLUTION ANISO 3.5 2.5 2.5

where the three values are the resolution limits along (or close to)
a*, b*, c*.

Unfortunately this option is not yet available in imosflm.

I have not personally used this option and so cannot compare its
efficacy relative to integrating isotropically and then applying an
anisotropic limit such as Justin describes.


Andrew Leslie

On 15 Sep 2009, at 21:48, Justin Hall wrote:

Dear All;

In response to my "Anisotropic Diffraction In Refinement" posting,
which asked for suggestions on how best to proceed with refinement
of an anisotropic data set, I received a large number of responses
which overwhelmingly suggested using the UCLA Anisotropy Server
(http://www.doe-mbi.ucla.edu/~sawaya/anisoscale/).

The Anisotropy Server treats scaled/truncated data sets (I used
Scala and the old Truncate program). Fo and SigFo are analyzed with
respect to resolution in three dimensions, and the data are treated
in three steps:
1) An elliptical resolution boundary is determined and applied.
2) A purely anisotropic B-factor is applied to the Fo and SigFo data
to cause the data in all directions to fall off equally.
3) A negative isotropic B-factor is then applied to the structure
factors to force the fall-off in the strongest direction to match
that of the original data, effectively meaning that the data are not
scaled to the mean but the weaker data are scaled up to match the
strongest data.
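
To make the arithmetic concrete, here is a minimal sketch in Python
of how the three steps might look. This is my own illustration, not
the server's code: the cell, the resolution limits and the B values
are all invented, and an orthorhombic cell is assumed so that the
geometry stays simple.

import numpy as np

# Invented orthorhombic cell, so the scattering vector is simply
# s = (h/a, k/b, l/c) with |s| = 1/d.
a, b, c = 60.0, 80.0, 100.0      # cell edges in Angstrom (made up)
d_a, d_b, d_c = 2.5, 2.5, 3.5    # limits along a*, b*, c* (made up)

def inside_ellipsoid(h, k, l):
    """Step 1: keep a reflection only if it lies inside the ellipsoid
    with reciprocal-space semi-axes 1/d_a, 1/d_b, 1/d_c -- the
    anisotropic analogue of the usual spherical cutoff |s| <= 1/d_min."""
    sa, sb, sc = h / a, k / b, l / c
    return (sa * d_a) ** 2 + (sb * d_b) ** 2 + (sc * d_c) ** 2 <= 1.0

def rescale(h, k, l, F, sigF):
    """Steps 2 and 3: a Debye-Waller factor exp(-B * s^2 / 4) describes
    fall-off (B in A^2, s = 1/d); multiplying by its inverse undoes it."""
    sa, sb, sc = h / a, k / b, l / c
    s2 = sa ** 2 + sb ** 2 + sc ** 2
    # Step 2: purely anisotropic (trace-zero) correction. dB_i is each
    # direction's excess fall-off relative to the mean (values invented);
    # after this factor, all directions decay like the mean.
    dBa, dBb, dBc = -10.0, -10.0, 20.0
    k_aniso = np.exp((dBa * sa ** 2 + dBb * sb ** 2 + dBc * sc ** 2) / 4.0)
    # Step 3: a negative isotropic B (best direction minus mean, -10 here)
    # sharpens the now-isotropic decay back to that of the strongest
    # direction.
    B_neg = -10.0
    k_iso = np.exp(-B_neg * s2 / 4.0)
    k_total = k_aniso * k_iso
    return F * k_total, sigF * k_total

# A single made-up reflection:
if inside_ellipsoid(8, 10, 12):
    print(rescale(8, 10, 12, F=350.0, sigF=15.0))

Note that for a reflection along a* or b* the net factor is exactly 1:
the data are not scaled to the mean, the weaker directions are scaled
up to match the strongest one, just as described above.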

Application of an elliptical resolution boundary is justified because
the resolution boundary from common integration programs (Denzo and
Mosflm, for example) is spherical, whereas the diffraction from
anisotropic data is ellipsoidal. A spherical boundary would therefore
include numerous poorly measured reflections in the higher resolution
shells, effectively making these data noisier.
Imposing an ellipsoidal resolution boundary is equivalent to
removing noise from the higher resolution bins and is simply the
anisotropic equivalent of the normal resolution limit truncation.

However, I was confused by the second and third steps. The second
step, application of anisotropic scale factors, would be appropriate
if the refinement program did not include anisotropic scaling in its
calculation of Fc; however, modern refinement programs do. Pavel
Afonine touched on this in his CCP4BB posting in response to my
original question, where he noted that "anisotropic scale factor[s]
that [are] part of the total structure factor take care of this"
(https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0909&L=CCP4BB&T=0&F=&S=&P=8362).
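
For concreteness (this is the standard crystallographic form, though
conventions differ in detail between programs), the overall
anisotropic scale refined as part of the total structure factor is

exp( -( B11 h^2 a*^2 + B22 k^2 b*^2 + B33 l^2 c*^2
        + 2 B12 h k a* b* + 2 B13 h l a* c* + 2 B23 k l b* c* ) / 4 )

with the six Bij determined during refinement; this is precisely the
kind of correction the server's step 2 bakes into Fo up front.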

For the third step, applying a negative isotropic B-factor to the Fo
is equivalent to sharpening the peaks in your maps, and this can be
useful. However, applying the correction to Fo will also result in
an inappropriate decrease in the average temperature factor of the
resulting model. Since B-factors are used as a measure of the
coordinate error of an atom, modifying your Fo means these
artificially low B-factors will tend to mislead users of that model
into thinking its quality is better than it really is. If a sharper
map makes identification of model errors easier, the map can be
sharpened when it is calculated, without affecting the parameters in
the PDB file. The latest versions of Coot, for example, allow you
to sharpen any map that they calculate.
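
For concreteness (standard Debye-Waller algebra, nothing specific to
the server): applying a B-factor B to amplitudes multiplies each F
by exp(-B / (4 d^2)), so a sharpening B of -20 A^2 scales a 2 A
reflection by exp(20/16), about 3.5, while leaving a 10 A reflection
almost untouched (exp(20/400), about 1.05). The same factor can be
applied to the map coefficients at map-calculation time, with the
deposited Fo left alone.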

I brought these points to the attention of the Anisotropy Server
director (Michael Sawaya), who is now working to provide an option
to omit steps 2 and 3 for users who do not want their structure
factors modified.

My thanks to everyone who responded to my original question, and to
Dale Tronrud and Michael Sawaya in particular for valuable discussion.

Harry
--
Dr Harry Powell, MRC Laboratory of Molecular Biology, MRC Centre, Hills Road, Cambridge, CB2 0QH
