Whoops, sorry. There was a typo in my response; here it is again without the typo.

B factors are 78.96x (that is, 8*pi^2 times) the mean square variation in an atom's position. The square is the important part of how they scale. Let's say you have static disorder in the crystal lattice that gives every atom an rms variation of 0.5 A relative to its ideal lattice position; that static disorder imparts a B factor of 78.96*(0.5)^2 = 19.7 to all atoms. If in addition to lattice disorder you have a side chain flapping in the breeze by another rms 1.0 A, that by itself is B = 79, but the combination of the two is an rms fluctuation of sqrt(0.5^2 + 1.0^2) = 1.118 A, and the total B factor resulting from that is 98.7. It is not a coincidence that 98.7 is the sum of 19.7 and 79. That is, independent sources of disorder _add_ when it comes to the B factors they produce.
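In case the arithmetic is easier to see in code, here is a tiny Python sketch of the same example (the rms numbers are just the ones above, nothing from a real structure):

import math

B_PER_MSD = 8 * math.pi ** 2      # = 78.96...; B = 8*pi^2 * <u^2>

lattice_rms = 0.5                 # rms lattice disorder (A), from the example above
side_chain_rms = 1.0              # rms side-chain motion (A), from the example above

B_lattice = B_PER_MSD * lattice_rms ** 2        # ~19.7
B_side_chain = B_PER_MSD * side_chain_rms ** 2  # ~79.0
combined_rms = math.sqrt(lattice_rms ** 2 + side_chain_rms ** 2)   # ~1.118 A
B_combined = B_PER_MSD * combined_rms ** 2                         # ~98.7

print(B_lattice, B_side_chain, B_combined)
print(math.isclose(B_combined, B_lattice + B_side_chain))          # True: independent B's add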

So, if you want to "normalize" B factors from one structure to another, the best thing to do is subtract a constant. This is mathematically equivalent to "deconvoluting" one source of overall variation from the site-to-site differences. What should the constant be? Well, the structure-wide average atomic B factor isn't a bad choice. The caveat is that a B factor change of 5 in the context of an overall B of 15 is probably significant, but in a low-resolution structure with an overall B factor of 100 it might be nothing more than a random fluctuation. It's like looking at the width of bands on a gel: a small shift in a sharp band is significant, but the same shift in a fat band is more dubious.
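As a rough sketch of what I mean by subtracting a constant (in Python, with made-up per-atom B values rather than anything read from a real PDB file):

def normalize_b(b_factors):
    # subtract the structure-wide mean so only site-to-site differences remain
    mean_b = sum(b_factors) / len(b_factors)
    return [b - mean_b for b in b_factors]

structure_1 = [15.2, 18.9, 35.4, 14.8]     # made-up B's from a well-ordered structure
structure_2 = [98.0, 101.5, 140.2, 95.3]   # made-up B's from a low-resolution one

print(normalize_b(structure_1))
print(normalize_b(structure_2))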

Now, crystallographically, all a B factor is is the rate of falloff of an atom's contribution to the diffraction pattern with increasing resolution. So the overall B factor can be known quite well, but the B factor of a single atom in the context of tens of thousands of others can be much harder to determine. Refinement programs do their best to find the best fit, but in the end you are trying to reconcile a lot of different possible contributors to the falloff of the data with resolution. Because of phases, a small change in one B factor can cancel a small change in another. This is why B factor refinement at low resolution is dangerous.
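To make "rate of falloff" concrete: an isotropic B damps an atom's scattering by exp(-B*sin^2(theta)/lambda^2), which is exp(-B/(4*d^2)) at resolution d. A quick Python illustration (the B values and resolutions are arbitrary):

import math

def dw_falloff(B, d):
    # isotropic Debye-Waller damping: exp(-B * sin^2(theta)/lambda^2) = exp(-B/(4*d^2))
    return math.exp(-B / (4.0 * d ** 2))

for B in (20.0, 100.0):
    for d in (4.0, 3.0, 2.0):
        print("B=%5.1f  d=%.1f A  falloff=%.3f" % (B, d, dw_falloff(B, d)))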

If you want to compare B factors, I'd recommend putting "error bars" on them. That is, re-refine the structures of interest after jiggling the coordinates and setting all the B factors to a constant. See how reproducible the final B factors are. This will give you an idea of how big a change can happen by pure random chance, even with the same data.
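The "jiggling" itself is nothing exotic: add a small random shift to every coordinate and reset the B column to a constant before re-refining. A hypothetical sketch of just that step (numpy arrays standing in for a real model; in practice you would use whatever coordinate-perturbation tool your refinement package provides):

import numpy as np

rng = np.random.default_rng(0)

def jiggle(coords, rms=0.3, b_start=30.0):
    # coords: (n_atoms, 3) array in Angstrom.  Each atom gets a Gaussian shift
    # with the requested total rms displacement; all B factors are reset to b_start.
    shifts = rng.normal(scale=rms / np.sqrt(3.0), size=coords.shape)
    return coords + shifts, np.full(len(coords), b_start)

coords = rng.uniform(0.0, 50.0, size=(100, 3))   # made-up coordinates for illustration
jiggled_coords, reset_b = jiggle(coords)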

Hope that helps!

-James  Holton
MAD Scientist

On 8/2/2017 12:09 PM, Asmita wrote:
Hi,

This might look like a very fundamental question. I have a dataset of crystal structures at better than 3.5 Ang resolution. For a qualitative analysis, I want to compare the residue-wise B-factors in these structures, but due to the different procedures adopted in refinement and scaling, I understand that these values cannot be compared in a raw manner.

Can someone suggest appropriate normalization methods that could be used for scaling these B-factors for a relevant and meaningful comparison? All the files have isotropic B-factor values and there are no ANISOU entries in any of the files.

Thanks

Asmita
