Dear Gerard,

I guess you simply did not understand my email at all. It's in the archive; you may read it again -:)

All the best!
Pavel.

P.S. Are you saying that the people who produced (nearly manually) the first macromolecular structures BEFORE the era of cool refinement packages were all doing "2hr0"s? I would stay away from such strong statements.


On 8/27/10 3:35 PM, Gerard Bricogne wrote:
Dear Pavel,

      I must say that I find some of the statements in your message rather
glib and shallow, especially coming from a developer. Where is all the
Bayesian wisdom that Phenix is advertised to have absorbed? Your last
paragraph is shocking in this respect. The whole idea of Bayesian inference
is precisely that it isn't good enough to pull out of a hat, by means of a
trick/blackbox, "a" model that corresponds to the data: one needs to see
how many models would fare more or less as well and to give some rough
probability distribution over them; and if you are finally going to deliver
a single model, it had better be as representative as possible of that
weighted ensemble of possible ones, rather than just "a" model that
happens to have been persuaded to fit the data by hook or by crook.
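
      In symbols (a rough sketch in standard Bayesian notation, not a
formula from this thread): Bayes' rule gives a whole posterior distribution
over models M given data D,

    p(M | D) \propto p(D | M) \, p(M),

and a single deposited model should be representative of that posterior,
for instance one minimizing the expected distance to it,

    \hat{M} = \arg\min_{M'} \; E_{M \sim p(M | D)} [ d(M', M) ],

where d(.,.) is some distance between models (e.g. coordinate r.m.s.d.),
rather than an arbitrary point that merely achieves a high likelihood
p(D | M).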

      Closer to practicalities: a model that ends up being deposited should
be reproducible by third parties as the endpoint of a refinement
calculation from the deposited coordinates and X-ray data, conducted
according to the authors' description of their own refinement procedure.
That procedure, however, should always end with a justifiable, purely
computational step. It seems very dangerous to state that a model in which
some manual moving around of atoms was given the last word is as good as
anything else. If you start encouraging such casual attitudes, you may
end up with another 2hr0.


      With best wishes,

           Gerard.

On Fri, Aug 27, 2010 at 02:02:48PM -0700, Pavel Afonine wrote:
Hello,

>>> The requirement sounds extremely suspect:  every atom in the structure
>>> contributes to every reflection, so refining "only some atoms" makes as
>>> little mathematical sense as refining against "only a subset of
>>> reflections".

>> I agree with you that the requirement sounds dubious.
>> But the specific argument you make is not quite right.
>>
>> Two common counter-examples are real-space refinement and rigid-body
>> placement of a known fragment relative to an existing partial model.
> Not so:  they're tricks to get out of local minima and maybe improve
> phases, but they're /not/ useful for generating the model that "best" fits
> the data.
I completely agree with Ethan. Although the overall goal of refining
B-factors for only a subset of atoms is not always clear (there are at
least three examples where I do it in phenix.refine - I won't go into
technicalities here, it's hidden under the hood and no one knows -:) ),
doing so makes perfect sense in general.
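
(A toy sketch of why "refining only some atoms" is mathematically
well-defined - plain NumPy/SciPy, not phenix.refine internals: the usual
least-squares target is minimized over a restricted parameter subspace,
here the B-factors of two selected atoms of a 1-D toy crystal, with
everything else held fixed.)

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_atoms, n_refl = 5, 50
    x = rng.random(n_atoms)                    # fractional coords (fixed)
    b_true = rng.uniform(10.0, 30.0, n_atoms)  # "true" isotropic B-factors
    h = np.arange(1, n_refl + 1)               # Miller indices, 1-D crystal
    s2 = (h / 100.0) ** 2                      # toy (sin(theta)/lambda)^2

    def f_calc(b):
        # F(h) = sum_j exp(-B_j * s^2) * exp(2*pi*i*h*x_j):
        # every atom contributes to every reflection.
        damping = np.exp(-np.outer(s2, b))
        phases = np.exp(2j * np.pi * np.outer(h, x))
        return (damping * phases).sum(axis=1)

    f_obs = np.abs(f_calc(b_true))             # simulated "observed" data

    sel = np.array([True, True, False, False, False])  # refine atoms 0, 1
    b_work = np.full(n_atoms, 20.0)            # starting model

    def target(b_sel):
        b = b_work.copy()
        b[sel] = b_sel                         # only selected B's vary
        return ((f_obs - np.abs(f_calc(b))) ** 2).sum()

    result = minimize(target, b_work[sel])     # a perfectly well-posed fit
    print("refined B:", result.x.round(2), " true B:", b_true[sel].round(2))

Whether the values so obtained mean much while the rest of the model is
held fixed is a separate and fair question; the minimization itself,
though, is perfectly well-defined.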

> Or would one deposit a model for which real-space refinement has been the
> final step?
Of course you would. Refinement - in whatever space - is just a
trick/blackbox to get your model to correspond to the data, and how you do
it - in real space, reciprocal space, or both, manually moving atoms or
letting a minimizer or grid search do it - does not matter.

Pavel.
