Last time I checked, phenix.refine used neither sig(F) nor sig(I) in its 
likelihood calculation.  Refmac does, but for a long time it was not the 
default.  You can turn it off with the WEIGHT NOEXP command, or you can 
even run with no "SIGx" at all in your mtz file.  You do this by leaving 
SIGFP out on the LABIN line.  This can sometimes help, but generally not 
by much.
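
For instance, either of these in the Refmac keyword input should do it 
(the column labels are just placeholders for whatever is actually in 
your mtz); the first keeps the sigmas in the file but drops them from 
the target, the second never hands Refmac a sigma column at all:

    WEIGHT NOEXP
    LABIN FP=FP FREE=FreeR_flag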

I'll admit I was surprised when I first learned this is the way sigmas 
are treated in modern maximum-likelihood refinement.  But as it turns 
out, sig(I) is almost never the dominant source of error in 
macromolecular refinement (the model errors are far bigger), so leaving 
it out generally goes unnoticed.  There 
are also a few cases in the PDB where the sigmas are completely bonkers 
and including them can make things worse.  So, ignoring sigmas is 
perhaps a safe default.

This is not to say that sigmas are completely useless: they play a very 
important role in phasing, where the errors in the intensity differences 
must be correctly propagated in order for phase improvement to have the 
best chance of working. But for refining a native structure against 
intensity or F data, there just isn't much impact. Don't believe me?  
Try it.  Use sftools to change all your sigI values to, say, the 
average.  Then re-run refinement and see how much it changes your final 
stats, if at all.
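
If you don't have sftools handy, a few lines of Python with gemmi 
should do the same thing.  The file names and the 'SIGI' label below 
are just placeholders for whatever is in your file:

    import numpy as np
    import gemmi

    mtz = gemmi.read_mtz_file('data.mtz')
    data = np.array(mtz, copy=True)        # all columns as one (nrefl x ncol) array
    labels = [col.label for col in mtz.columns]
    sig = data[:, labels.index('SIGI')]    # whatever your sigma column is called
    sig[~np.isnan(sig)] = np.nanmean(sig)  # every measured sigma becomes the average
    mtz.set_data(data)
    mtz.write_to_file('data_flatsig.mtz')

Refine against data_flatsig.mtz and compare the final stats to what you 
had before.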

Leaving out high-angle or otherwise weak data can improve your 
statistics, but that is not a reason to leave those data out.  What the 
improvement is telling you is that the fine details of the model are 
still not in agreement with the data.  In the case of the OP, I suspect 
the Fcalc vs Ftrue difference is 
larger than normal.  Something else is wrong.  In such cases I always 
like to look at the real-space representation of Rwork, which is the 
Fo-Fc difference map.  How big is the biggest peak in this map? Is it 
positive or negative? And where is it?
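
If you want a number rather than just eyeballing the map in coot, a 
short gemmi/numpy sketch like this will report the tallest peak and 
where it sits.  It assumes Refmac-style DELFWT/PHDELWT difference map 
coefficients in the output mtz; substitute whatever labels your 
refinement program writes:

    import numpy as np
    import gemmi

    mtz = gemmi.read_mtz_file('refined.mtz')
    grid = mtz.transform_f_phi_to_map('DELFWT', 'PHDELWT', sample_rate=3.0)
    arr = np.array(grid, copy=False)       # map values on the (nu, nv, nw) grid
    u, v, w = np.unravel_index(np.argmax(np.abs(arr)), arr.shape)
    frac = gemmi.Fractional(u / grid.nu, v / grid.nv, w / grid.nw)
    pos = grid.unit_cell.orthogonalize(frac)
    print('tallest |Fo-Fc| peak: %+.3f at x,y,z = %.1f %.1f %.1f A'
          % (arr[u, v, w], pos.x, pos.y, pos.z))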

-James Holton
MAD Scientist

On 7/4/2019 11:05 PM, [email protected] wrote:
> Pavel,
>
> Please correct me if wrong, but I thought most refinement programs used the 
> weights (e.g. sig(I/F) with I/F) so would not really have a hard cut-off 
> anyway? You’re just making the stats worse, but the model should stay ~ the 
> same (unless you have outliers in there).
>
> Clearly there will be a point where the model stops improving, which is the 
> “true” limit…
>
> Cheers Graeme
>
>
>
> On 5 Jul 2019, at 06:49, Pavel Afonine <[email protected]> wrote:
>
> Hi Sam Tang,
>
> Sorry for a naive question. Are there any circumstances where one may wish to 
> refine to a lower resolution? For example, if one has a dataset processed to 2 
> A, are there any good reasons to refine to only, say, 2.5 A?
>
> yes, certainly. For example, when the information content of the data can 
> justify it. Randy Read can comment on this more! Also, instead of a hard 
> cutoff, using a smooth weight-based attenuation may be even better. AFAIK, no 
> refinement program can do this smartly at the moment.
> Pavel
>


