Ed,

I may be wrong here (and please, by all means, correct me), but I think
it's not entirely true that experimental errors are not used in modern
map calculation algorithms.  At the very least, the 2mFo-DFc maps are
calibrated to the model error (which can, philosophically, be seen as
the "error of the experiment" if you include model inaccuracies in
that).

I suppose my statement may have been more precise than helpful. Obviously model and experimental errors do factor into the calculation of a 2mFo-DFc map - but are weighting and structure-factor calculation part of map calculation, or a distinct stage of data processing? I tend to think of them as separate from map calculation, but this may be up for debate (judging by the increasing number of statements along the lines of "I looked at my mtz file in coot and saw X").
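
For concreteness, the weighting I have in mind is the usual sigmaA-style
scheme, written schematically here (the exact formulation differs between
programs, so treat this as a sketch rather than any particular
implementation):

    F_{\mathrm{map}} = \left(2m\,|F_o| - D\,|F_c|\right) e^{i\varphi_{\mathrm{calc}}}

where the figure of merit m and the scale D are derived from sigmaA,
which is in turn estimated from the agreement between |F_o| and |F_c| in
resolution shells.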

[snip]
Nevertheless, the perceived situation is that "our models are not as
good as our data", and therefore experimental errors don't matter.  Now
I am playing another devil's advocate and I know how crazy this sounds
to an unbiased experimental scientist (e.g. if they don't matter, why
bother improving data reduction algorithms?).

The errors in our models are almost certainly more extensive than the errors in our measurements, but one attempt at answering this devil's-advocate question would be to point out that the usual likelihood equations all require sigF (either as a component of sigma, or for bootstrapping sigma). I've only done limited testing of this (it was actually for something else), but the likelihood equations produce strange results if you try to make them ignore sigF.
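
Schematically (and glossing over scale factors that differ between
formulations), the per-reflection variance in the Rice-type amplitude
likelihood looks something like

    \Sigma_{\mathrm{total}} = \varepsilon\,\sigma_{\Delta}^{2} + \sigma_{F}^{2}

i.e. the experimental sigma_F is added to the model-error term rather
than ignored, so feeding in zeros (or garbage) for sigF perturbs every
term in the target.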



Pete


I guess maps produced in phenix do not use experimental errors in any
way, given that the maximum likelihood formalism implemented there does
not use them.  Then again, phenix is not immutable and my understanding
may be outdated.  But this is not the right forum for pondering this
specific question.

Cheers,

Ed.

PS.  I fully realize that Francisco's question was more practical (and
the answer to that is to run REFMAC without a SIGFP record in LABIN),
but isn't thread-hijacking fun? :)

On Wed, 2012-05-23 at 10:05 +0300, Nicholas M Glykos wrote:
Hi Francisco,

I'll play devil's advocate, but a measurement without an estimate of its error is closer to theology than to science. The fact that the standard deviations are not used for calculating an electron density map via FFT is only due to the hidden assumption that you have a 100% complete, error-free data set extending to sufficiently high (infinite) resolution. When these assumptions do not apply (as is usually the case with physical reality), the simple-minded FFT is no longer the correct inversion procedure (and the data no longer uniquely define a single map). Under these conditions other inversion methods are needed (such as maximum entropy), for which the standard deviations are actively used in calculating the map.
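
To be explicit about which equation is meant - the plain Fourier
synthesis,

    \rho(\mathbf{x}) = \frac{1}{V} \sum_{\mathbf{h}} |F(\mathbf{h})|\, e^{i\varphi(\mathbf{h})}\, e^{-2\pi i\,\mathbf{h}\cdot\mathbf{x}}

- does indeed contain no sigma(F) anywhere. The error estimates only
enter once the idealised assumptions above are dropped and the
coefficients have to be weighted (or replaced) accordingly.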

My two cents,
Nicholas


On Tue, 22 May 2012, Francisco Hernandez-Guzman wrote:

Hello everyone,

My apologies if this comes across as basic, but I wanted to get the experts' take on whether or not sigmaF values are required for the calculation of an electron density map. If I look at the standard ED equation, sigmas don't appear to be a requirement, but all the scripts that I've looked at do require sigma values.

I wanted to calculate the electron density for PDB id 1HFS, but the structure factor file only lists the Fo's, Fc's and phases - no sigmas. Would such a structure factor file be considered incomplete?

Thank you for your kind explanation.

Francisco

