Dear All,

In response to my posting "Anisotropic Diffraction In Refinement", which asked for suggestions on how best to proceed with refinement of an anisotropic data set, I received a large number of responses, which overwhelmingly suggested using the UCLA Anisotropy Server (<http://www.doe-mbi.ucla.edu/~sawaya/anisoscale/>).

The Anisotropy Server treats scaled/truncated data sets (I used Scala and the old Truncate program). Fo and SigFo are analyzed with respect to resolution in three dimensions and the data are treated in three steps:
1) An ellipsoidal resolution boundary is determined and applied.
2) A purely anisotropic B-factor is applied to the Fo and SigFo data so that the data fall off equally in all directions.
3) A negative isotropic B-factor is then applied to the structure factors to force the fall-off in the strongest direction to match that of the original data; in effect, the data are not scaled to the mean but the weaker data are scaled up to match the strongest direction (steps 2 and 3 are sketched below).
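
To make steps 2 and 3 concrete, here is a rough Python sketch of my own (not the server's code), with simplifying assumptions made purely for illustration: an orthorhombic cell, so that the anisotropic B tensor is diagonal along a*, b* and c*, and made-up per-direction B values.

    import math

    # Illustration only: assumed per-direction B values (A^2) describing the
    # anisotropic fall-off, with the first direction (a*) the strongest.
    B_aniso = (10.0, 25.0, 40.0)
    B_mean = sum(B_aniso) / 3.0

    def correct_amplitude(f_obs, sig_f, h, k, l, a, b, c):
        """Apply steps 2 and 3 to one reflection (orthorhombic cell assumed).

        Step 2: remove the directional part of the fall-off (zero on average).
        Step 3: add a negative isotropic B so that the strongest direction is
                left unchanged and the weaker directions are scaled up to it.
        """
        # squared reciprocal-space components (orthogonal axes assumed)
        sa2, sb2, sc2 = (h / a) ** 2, (k / b) ** 2, (l / c) ** 2
        s2 = sa2 + sb2 + sc2                     # s^2 = 1/d^2

        # step 2: purely anisotropic correction (trace-free relative to the mean)
        aniso = ((B_aniso[0] - B_mean) * sa2 +
                 (B_aniso[1] - B_mean) * sb2 +
                 (B_aniso[2] - B_mean) * sc2)

        # step 3: negative isotropic B chosen so the strongest direction
        # (smallest B) ends up back on its original scale
        b_sharp = min(B_aniso) - B_mean          # a negative number

        # Debye-Waller convention for amplitudes: F ~ exp(-B * s^2 / 4)
        scale = math.exp((aniso - b_sharp * s2) / 4.0)
        return f_obs * scale, sig_f * scale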

Application of an ellipsoidal resolution boundary is justified because the resolution boundary used by common integration programs (Denzo and Mosflm, for example) is spherical, whereas the diffraction from an anisotropic crystal extends to an ellipsoidal limit. A spherical boundary therefore includes numerous poorly measured reflections in the higher resolution shells, which effectively makes these data noisier. Imposing an ellipsoidal resolution boundary removes this noise from the higher resolution bins and is simply the anisotropic equivalent of the usual resolution cut-off.
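
To illustrate, an ellipsoidal cut-off amounts to a per-reflection test of the following kind. This is my own rough sketch, assuming an orthorhombic cell and made-up directional resolution limits, not the server's actual criterion.

    # Illustration only: assumed directional resolution limits (Angstrom)
    # along a*, b* and c*; with an orthorhombic cell the reciprocal-space
    # components of reflection (h,k,l) are simply h/a, k/b, l/c.
    d_limits = (2.0, 2.3, 2.8)

    def inside_ellipsoid(h, k, l, a, b, c):
        """True if the reflection lies inside the ellipsoidal boundary
        defined by the three directional resolution limits."""
        sa, sb, sc = h / a, k / b, l / c
        return ((sa * d_limits[0]) ** 2 +
                (sb * d_limits[1]) ** 2 +
                (sc * d_limits[2]) ** 2) <= 1.0

    def inside_sphere(h, k, l, a, b, c, d_min=2.0):
        """The usual spherical cut-off, for comparison: one limit for all
        directions, so weak high-resolution reflections in the poorly
        diffracting directions are kept."""
        s2 = (h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2
        return s2 <= 1.0 / d_min ** 2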

However, I was confused by the second and third steps. The second step, application of anisotropic scale factors, would be appropriate if the refinement program did not include anisotropic scaling in its calculation of Fc; however, modern refinement programs do. Pavel Afonine touched on this in his CCP4BB reply to my original posting, where he noted that "anisotropic scale factor[s] that [are] part of the total structure factor take care of this" (<https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0909&L=CCP4BB&T=0&F=&S=&P=8362>).
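
To illustrate that point, the overall scale such programs apply to Fc typically includes an anisotropic exponential term of roughly the following form. This is a generic sketch with a made-up B tensor, not the implementation of any particular program.

    import math

    # Illustration only: a symmetric overall anisotropic B tensor (A^2),
    # of the kind refined by the program as part of its bulk scaling model.
    B = {"11": 5.0, "22": -2.0, "33": -3.0, "12": 0.0, "13": 1.0, "23": 0.0}

    def aniso_scale(h, k, l, a, b, c):
        """Anisotropic part of the overall scale applied to Fcalc
        (orthorhombic cell assumed, so the reciprocal axes are 1/a, 1/b, 1/c)."""
        ha, kb, lc = h / a, k / b, l / c
        q = (B["11"] * ha * ha + B["22"] * kb * kb + B["33"] * lc * lc +
             2 * (B["12"] * ha * kb + B["13"] * ha * lc + B["23"] * kb * lc))
        return math.exp(-q / 4.0)

    # schematically: F_model = k_overall * aniso_scale(...) * (Fcalc + bulk-solvent term)

Because this term is refined against the data, any anisotropy already applied to Fo in step 2 is simply absorbed by it.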

For the third step, applying a negative isotropic B-factor to the Fo is equivalent to sharpening the peaks in your maps, which can be useful. However, applying the correction to Fo also produces an inappropriate decrease in the average temperature factor of the resulting model. Since B-factors are read as a measure of the positional uncertainty of an atom, these artificially low B-factors will tend to mislead users of the model into thinking its quality is better than it really is. If a sharper map makes identification of model errors easier, the map can be sharpened when it is calculated, without affecting the parameters in the PDB file. The latest versions of Coot, for example, allow you to sharpen any map they calculate.
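
For completeness, sharpening at map-calculation time means rescaling the map coefficients only; the deposited Fo and the model B-factors are untouched. A minimal sketch of my own, with an arbitrary sharpening B chosen for illustration:

    import math

    def sharpen_coefficient(f, h, k, l, a, b, c, b_sharp=-40.0):
        """Apply a sharpening B-factor (a negative value) to one map
        coefficient before the FFT; only the map is affected, not the
        deposited data (orthorhombic cell assumed)."""
        s2 = (h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2   # 1/d^2
        return f * math.exp(-b_sharp * s2 / 4.0)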

I brought these points to the attention of the Anisotropy Server director (Michael Sawaya), who is now working to provide an option to omit steps 2 and 3 for users who do not want their structure factors modified.

My thanks to everyone who responded to my original question, and to Dale Tronrud and Michael Sawaya in particular for valuable discussion.
