Okay fair enough: my straw man is the *final* model, the one we deposit
and then do things with, the one that needs to agree with all the data.
Silly me, for wanting to beat that. :-)
The original question sounded suspiciously like,
"I can't get this one last crappy loop in bad density to have good B
factors, please, how can I make it behave?"
Struck me as requiring a rigorous reply.
phx
On 28/08/2010 06:48, Ethan Merritt wrote:
On Friday 27 August 2010, Frank von Delft wrote:
I'm sorry, I can't simply drop this thread, not when it keeps ignoring
the physics of diffraction:
In order to attempt any (rigorous) scientific conclusions from a
structure, one needs the "best" model, the one that's converged against
the data.
I think you are arguing with a straw man that you set in place
yourself.
Either that, or there's a mismatch in the words being used.
The rest of us [I think] are using the word "refinement"
to mean "something I do to improve the model, if only incrementally".
I.e., it is one step on a long journey, not the journey in its entirety.
When you run real-space refinement, you're refining against maps that
come from a set of phases - but phases are *derived* data: derived from
the starting model -- from ALL of the starting model. Which real-space
refinement has now changed. So to achieve *convergence*, you have to
recalculate the phases. From ALL of the starting model.
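(A toy numpy sketch of that loop - all names and numbers invented, a 1-D
caricature rather than any real refinement engine - makes the point concrete:
the refinement move is "local", yet each cycle's phases are recomputed from
the entire updated model:)

    import numpy as np

    def f_calc(xyz, hkl):
        # Structure factors from ALL atoms: F(h) = sum_j exp(2*pi*i*h*x_j)
        return np.exp(2j * np.pi * np.outer(hkl, xyz)).sum(axis=1)

    rng = np.random.default_rng(0)
    hkl = np.arange(1, 51)                       # toy "reflections"
    true_xyz = rng.uniform(0.0, 1.0, 10)         # toy "atoms"
    f_obs = np.abs(f_calc(true_xyz, hkl))        # amplitudes only; phases lost

    xyz = true_xyz + 0.02 * rng.normal(size=10)  # perturbed starting model
    for cycle in range(50):
        f_model = f_calc(xyz, hkl)               # phases from ALL atoms
        # "Map": observed amplitudes carrying the current model phases.
        f_map = f_obs * np.exp(1j * np.angle(f_model))
        # Crude numerical gradient of sum |F_map - F_calc| wrt each x_j.
        base = np.abs(f_map - f_model).sum()
        grad = np.zeros_like(xyz)
        for j in range(xyz.size):
            x2 = xyz.copy(); x2[j] += 1e-6
            grad[j] = (np.abs(f_map - f_calc(x2, hkl)).sum() - base) / 1e-6
        xyz -= 0.01 * grad / (np.abs(grad).max() + 1e-12)  # bounded local move
        # ...and the next cycle recomputes phases from the whole updated model.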
So?
That's what we have to do in non-linear least-squares refinement
in reciprocal space also. Without an analytical solution to the
phase problem, it's all we _can_ do, whether it be in real space
or reciprocal space. Iterative improvement is our stock in trade.
I'm mystified how this procedure can be considered local to a few
atoms. (Even if it is intensely pleasing to watch RSR make a model snap
into some bothersome density.)
The procedure is local for precisely the reasons you already stated.
That doesn't mean there are no global effects. And it certainly doesn't
mean you are finished with your model. It just means you have
refined (and hopefully improved but maybe not) the position of some
set of atoms.
Do you see this as different from, say, adjusting rotamers under
the guidance of molprobity? That's a local change made to improve
agreement with an external prior, rather than to improve agreement
with either the map or with current |mFo-Fc|. Whether it actually
improves your R factors or not won't be known until the next round
of refinement.
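(A toy version of such a prior-guided local change, with invented frequencies:
pick the chi1 rotamer that the external prior likes best, irrespective of the
current map:)

    # Fake rotamer frequencies standing in for a molprobity-style prior.
    rotamer_prior = {-60.0: 0.52, 60.0: 0.13, 180.0: 0.35}
    best_chi1 = max(rotamer_prior, key=rotamer_prior.get)   # -> -60.0
    # Whether this helps R/Rfree is unknown until the next refinement round.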
phx.
P.S. The availability of spectacular experimental phases *should* allow
convergence purely through real-space refinement, of course. But I've
seen a lot of phasing, and I've never encountered this situation.
If you want to pursue this as a new topic, I'm game.
But can we first agree on definitions for "refinement" and "convergence"?
In the usage that I am familiar with, any well-behaved refinement
algorithm will converge, if only asymptotically.
You may not like the place it converged to, but that's a different issue.
"Converged" is not the same as "found the true global minimum".
So yes, I agree that real-space refinement often converges to a
non-optimal state. That's why we need an "accept/reject" button in
the Coot interface :-) But converge it does, nonetheless.
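(A toy illustration, with made-up numbers, of "converged" versus "found the
true global minimum": plain gradient descent on a double-well function settles
happily into whichever basin it starts in:)

    # Gradient descent on f(x) = (x^2 - 1)^2 + 0.3x; the global minimum is
    # near x = -1.04, but a start in the right-hand basin never finds it.
    def f(x):  return (x*x - 1.0)**2 + 0.3*x
    def df(x): return 4.0*x*(x*x - 1.0) + 0.3

    x = 0.8                          # start in the shallower basin
    for _ in range(10000):
        step = 0.01 * df(x)
        x -= step
        if abs(step) < 1e-12:        # converged: gradient essentially zero
            break
    print(x)                         # ~ +0.96: converged, but not global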
Ethan
On 28/08/2010 00:19, Gerard Bricogne wrote:
Dear Pavel,
Yes, I may indeed have focussed too much attention on your
"subversive"-looking last paragraph, without fully seeing it in the context
of the whole thread. I am also sorry that I was so strident in my criticism:
I should not be writing e-mails on this topic late on a Friday night :-)) .
Have a nice weekend.
Gerard.
--
On Fri, Aug 27, 2010 at 03:48:03PM -0700, Pavel Afonine wrote:
Dear Gerard,
I guess you simply did not understand my email at all. It's in the
archive; you may read it again -:)
All the best!
Pavel.
P.S. Are you saying that the people who produced (nearly manually) the first
macromolecular structures BEFORE the era of cool refinement packages were all
doing "2hr0"s? I would stay away from such strong statements.
On 8/27/10 3:35 PM, Gerard Bricogne wrote:
Dear Pavel,
I must say that I find some of the statements in your message rather
glib and shallow, especially on the part of a developer. Where is all the
Bayesian wisdom that Phenix is advertised to have absorbed? Your last
paragraph is shocking in this respect. The whole idea of Bayesian inference
is precisely that it isn't good enough to pull out of a hat, by means of a
trick/blackbox, "a" model that corresponds to the data, but that one needs
to see how many models would fare more or less as well, and to give some
rough probability distribution over them; and if you are going to deliver a
single model in the end, it had better be as representative as possible of
that weighted ensemble of possible models, rather than just "a" model that
happens to have been persuaded to fit the data by hook or by crook.
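(A toy numpy sketch of that idea, every number invented: weight a grid of
candidate one-parameter models by how well they fit some noisy data, then
compare the single best-fit model with the weighted ensemble it is supposed
to represent:)

    import numpy as np

    rng = np.random.default_rng(1)
    data = 1.0 + 0.5 * rng.normal(size=20)     # noisy "observations"
    models = np.linspace(0.0, 2.0, 401)        # candidate 1-parameter models

    chi2 = np.array([np.sum((data - m)**2) / 0.25 for m in models])
    w = np.exp(-0.5 * (chi2 - chi2.min()))     # relative posterior weights
    w /= w.sum()

    best = models[chi2.argmin()]               # "a" model that fits the data
    mean = np.sum(w * models)                  # ensemble-representative model
    spread = np.sqrt(np.sum(w * (models - mean)**2))
    print(best, mean, spread)                  # many models fit nearly as well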
Closer to practicalities: a model that ends up being deposited should be
reproducible by third parties as the endpoint of a refinement calculation
from the deposited coordinates and X-ray data, conducted according to the
authors' description of their own refinement procedure. That procedure,
however, should always end with a justifiable, purely computational step.
It seems very dangerous to state that a model in which some manual moving
around of atoms was given the last word is as good as anything else. If you
start encouraging such casual attitudes, you may end up with 2hr0.
With best wishes,
Gerard.
--
On Fri, Aug 27, 2010 at 02:02:48PM -0700, Pavel Afonine wrote:
Hello,
The requirement sounds extremely suspect: every atom in the structure
contributes to every reflection, so refining "only some atoms" makes as
little mathematical sense as refining against "only a subset of reflections".
I agree with you that the requirement sounds dubious.
But the specific argument you make is not quite right.
Two common counter-examples are real-space refinement and rigid-body
placement of a known fragment relative to an existing partial model.
Not so: they're tricks to get out of local minima and maybe improve
phases, but they're /not/ useful for generating the model that "best"
fits the data.
I completely agree with Ethan. Although the overall goal of refining
B-factors only for a subset of atoms is not clear (there are at least three
examples where I do it in phenix.refine - I won't go into the technicalities
here; it's hidden under the hood and no-one knows -:) ), doing so makes
perfect sense in general.
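(Why refining a subset is still mathematically well-posed - a toy sketch,
everything invented: the target is always computed from the FULL model; you
simply freeze, i.e. zero the gradient of, the parameters you are not
refining:)

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(8, 6))        # every "atom" feeds every "reflection"
    data = A @ rng.normal(size=6) + 0.05 * rng.normal(size=8)

    params = rng.normal(size=6)                       # starting model
    free = np.array([1, 1, 0, 0, 0, 0], dtype=bool)   # refine only this subset
    for _ in range(200):
        g = 2.0 * A.T @ (A @ params - data)           # gradient uses ALL params
        params -= 0.02 * g * free                     # frozen ones never move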
Or would one deposit a model for which real-space refinement has been the
final step?
Of course you would. Refinement - in whatever space - is just a
trick/blackbox to get your model to correspond to the data, and how you do
it - in real space, reciprocal space, or both; by manually moving atoms or
by letting a minimizer or grid search do it - does not matter.
Pavel.