Dear Ed,
Tightly restrained refinement will be equivalent to
torsion angle parametrization, since bonds and angles are essentially
fixed (but dihedrals are not).
Simply not true. Think about why -:) Hint: in restrained refinement the weight
applies to all terms - bonds, angles, torsions, etc. So if you choose a
tight weight in such a refinement, the torsions will be restrained as
tightly as the other terms (at least that is how it works in CNS and
phenix.refine). In torsion-angle refinement (which is, in fact, a
constrained rigid-body refinement) you still have weights, and you can
make your torsion-angle refinement as tight as you like.
Similarly, properly tight restraints on
individual B-factors are not equivalent to grouped B-factors (in
whatever sense) because they can capture the distribution throughout the
structure.
I don't see why two B-factors per residue wouldn't capture this distribution
*throughout the structure* (it definitely wouldn't throughout the residue).
I think the example that Jose Antonio originally provided (at 3.1A, not
4A) clearly demonstrates that it makes more sense to do properly
restrained individual B-factor refinement than
two-adp-groups-per-residue refinement. Do you disagree specifically on
this issue?
Of course not. This is why, when I reply on the bb to questions like
"which B-factors, group or individual, do I need to refine at, say, 3.1A
resolution?", I always suggest running these refinement jobs and seeing
which one gives the best result:
1) TLS + individual isotropic ADP refinement (tls+individual_adp);
2) TLS + group ADP refinement (tls+group_adp);
3) individual isotropic ADP refinement;
4) group ADP refinement (with one refinable B per residue);
5) group ADP refinement (with two refinable B's per residue).
+ on top of it you can add automatic weight optimization to remove the
arbitrariness in what "loose" and "tight" restraints mean.
This will give a conclusive, rock-solid answer about which ADP
parameterization and refinement protocol is good for a given model and
data set. The alternative is endless speculation.
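As a sketch, the five jobs above (plus weight optimization on top) could be run roughly like this; model.pdb and data.mtz are placeholder file names, the strategy keywords follow the phenix.refine documentation, and exact parameter names may differ between versions:

```shell
# Five trial refinements; compare R-free afterwards to pick the winner.
phenix.refine model.pdb data.mtz strategy=tls+individual_adp      # job 1
phenix.refine model.pdb data.mtz strategy=tls+group_adp           # job 2
phenix.refine model.pdb data.mtz strategy=individual_adp          # job 3
phenix.refine model.pdb data.mtz strategy=group_adp \
  group_adp_refinement_mode=one_adp_group_per_residue             # job 4
phenix.refine model.pdb data.mtz strategy=group_adp \
  group_adp_refinement_mode=two_adp_groups_per_residue            # job 5

# Optional automatic ADP weight optimization on top of any of the jobs:
phenix.refine model.pdb data.mtz strategy=individual_adp \
  optimize_adp_weight=true
```

Whichever job gives the lowest R-free (and a sensible R-work/R-free gap) indicates the appropriate parameterization for that model and data.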
In the future the proper parameterization will be chosen automatically
based on a large array of data, model and map characteristics.
The existing
two-adp-groups-per-residue implementation (CNS and phenix) is, imho, an
example of *improper* parametrization.
As you see, in phenix.refine you can combine any B-factor refinement
strategies (group, individual isotropic, anisotropic, TLS), and apply
them to any selected part of your structure. So, at this point of
software automation, I assume it is up to a smart researcher to decide
which refinement strategy to use. You cannot blame the software for
giving you the freedom to do what you may want to do. For example, if
you choose to refine two B-factors per residue when you could safely
refine individual B-factors, it will be an example of improper
parameterization that you have chosen (and not one the software chose
for you). In phenix.refine you can technically refine individual
isotropic or anisotropic B-factors at any resolution - the program will
not crash, and it will be up to the user whether he or she enjoys the
results. Like I said, in the future the model parameterization will be
done automatically.
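For instance, combining strategies with atom selections might look like the following sketch; the selection strings and parameter paths (adp.tls, adp.individual.isotropic, adp.group) are assumed from the phenix.refine documentation and should be checked against your version:

```shell
# Sketch: TLS for chain A, individual isotropic B-factors for chain B,
# grouped B-factors for everything else (hypothetical selections).
phenix.refine model.pdb data.mtz \
  strategy=tls+individual_adp+group_adp \
  adp.tls="chain A" \
  adp.individual.isotropic="chain B" \
  adp.group="not (chain A or chain B)"
```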
I guess I'm dropping out of this discussion - otherwise phenix.refine
will get fewer new options in the future if I keep writing -:)
All the best!
Pavel.