Dear All,
        One thing I remembered from what Gerard pointed out is that the difference in the XPLOR/CNS formalism between strict and restrained NCS is not a continuum. Restrained NCS is the case where you have multiple copies and they are restrained with a weight (which acts like a force constant) to be similar when superimposed. So even if you increase the force constant the copies can still move during refinement, but they all try to move together when they do.
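
To make that concrete, here is a rough Python/NumPy sketch of what such a restraint term amounts to (the superposition routine, the names and the weight below are only my illustration, not the actual XPLOR/CNS implementation):

    import numpy as np

    def superimpose(ref, mov):
        # Least-squares (Kabsch) fit of the moving copy onto the reference copy.
        ref_c = ref - ref.mean(axis=0)
        mov_c = mov - mov.mean(axis=0)
        u, s, vt = np.linalg.svd(mov_c.T @ ref_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))
        rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        return mov_c @ rot.T + ref.mean(axis=0)

    def ncs_restraint(copy_a, copy_b, weight):
        # 'weight' plays the role of the force constant: the larger it is, the
        # more strongly the superimposed copies are pulled towards each other.
        fitted_b = superimpose(copy_a, copy_b)
        return weight * np.sum((fitted_b - copy_a) ** 2)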

The other extreme is strict NCS, where no force is applied at all: there is only a single copy of the chain, and the ASU is built by applying the NCS symmetry operators to it. The atoms are free to move but, unlike the restrained case where there is superposition on the fly, in the strict case there is no automatic update of the superposition matrices. So every move gets religiously copied to all the chains when the ASU is made. At that point I guess the copies can bump into each other and so apply a force on one another, but that is a local, and likely to be perturbing, force.
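
Again only as a rough sketch (the operators below are invented for illustration and the function is mine, not XPLOR/CNS code), strict NCS amounts to expanding one protomer with fixed matrices:

    import numpy as np

    # Fixed NCS operators: the identity plus an invented two-fold about z with
    # an arbitrary translation (purely illustrative numbers).
    ncs_operators = [
        (np.eye(3), np.zeros(3)),
        (np.diag([-1.0, -1.0, 1.0]), np.array([50.0, 0.0, 0.0])),
    ]

    def build_asu(protomer_xyz, operators):
        # Every shift made to the single refined protomer is copied, without any
        # refitting, into each NCS-related chain when the ASU is generated.
        return [protomer_xyz @ rot.T + trans for rot, trans in operators]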

best wishes
 Martyn 

Martyn Symmons
Cambridge



--- On Thu, 23/9/10, Ian Tickle <ianj...@gmail.com> wrote:

> From: Ian Tickle <ianj...@gmail.com>
> Subject: Re: [ccp4bb] Effect of NCS on estimate of data:parameter ratio
> To: CCP4BB@JISCMAIL.AC.UK
> Date: Thursday, 23 September, 2010, 11:21
> Hi Gerard & Pavel
> 
> Isn't this the proviso I was referring to, that one cannot in practice use an infinite weight because of rounding errors in the target function.
> The weight just has to be 'big enough' such that the restraint residual becomes sufficiently small that it's no longer significant.
> 
> In numerical constrained optimisation the method of increasing the constraint weights (a.k.a. 'penalty coefficients') until the constraint violations are sufficiently small is called the 'penalty method', see http://en.wikipedia.org/wiki/Penalty_method .
> The method where you substitute some of the parameters using the constraint equations is called (you guessed it!) the 'substitution method', see http://people.ucsc.edu/~rgil/Optimization.pdf .
> There are several other methods, e.g. the 'augmented Lagrangian method' is very popular, see http://www.ualberta.ca/CNS/RESEARCH/NAG/FastfloDoc/Tutorial/html/node112.html .
> As in the penalty method, the AL method adds additional parameters to be determined (the Lagrange multipliers, one per constraint) instead of eliminating some parameters using the constraint equations; however the advantage is that it removes the requirement that the penalty coefficient be very big.
> 
> The point about all these methods of constrained optimisation is that they are in principle only different ways of achieving the same result, at least that's what the textbooks say!
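
[A toy Python sketch of the penalty versus augmented-Lagrangian point above; the objective, constraint and coefficients are invented purely for illustration and have nothing to do with any refinement program:]

    import numpy as np
    from scipy.optimize import minimize

    f = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2   # objective
    c = lambda p: p[0] + p[1] - 1.0                        # constraint, want c = 0

    # Penalty method: keep increasing the penalty coefficient mu.
    p = np.zeros(2)
    for mu in (1.0, 10.0, 100.0, 1000.0):
        p = minimize(lambda q, m=mu: f(q) + m * c(q) ** 2, p).x
    print("penalty method:      ", p, "violation:", c(p))

    # Augmented Lagrangian: modest fixed mu, update the multiplier lam instead.
    p, lam, mu = np.zeros(2), 0.0, 10.0
    for _ in range(10):
        p = minimize(lambda q: f(q) + lam * c(q) + 0.5 * mu * c(q) ** 2, p).x
        lam += mu * c(p)
    print("augmented Lagrangian:", p, "violation:", c(p))

[Both runs converge to essentially the same answer, which is the point above; only the penalty run needs the coefficient to grow large.]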
> 
> And now after the penalties and substitutions it's time to blow the whistle ...
> 
> Cheers
> 
> -- Ian
> 
> On Wed, Sep 22, 2010 at 10:00 PM, Pavel Afonine <pafon...@lbl.gov> wrote:
> > I agree with Gerard. Example: it's unlikely to achieve a result of rigid-body refinement (when you refine six rotation/translation parameters) by replacing it with refining individual coordinates using infinitely large weights for restraints.
> > Pavel.
> >
> >
> > On 9/22/10 1:46 PM, Gerard DVD Kleywegt wrote:
> >>
> >> Hi Ian,
> >>
> >>> First, constraints are just a special case of restraints in the limit of infinite weights, in fact one way of getting constraints is simply to use restraints with very large weights (though not too large that you get rounding problems).
> >>> These 'pseudo-constraints' will be indistinguishable in effect from the 'real thing'.
> >>> So why treat restraints and constraints differently as far as the statistics are concerned: the difference is purely one of implementation.
> >>
> >> In practice this is not true, of course. If you impose "infinitely strong" NCS restraints, any change to a thusly restrained parameter by the refinement program will make the target function infinite, so effectively your model will never change.
> >> This is very different from the behaviour under NCS constraints and the resulting models in these two cases will in fact be very easily distinguishable.
> >>
> >> --Gerard
> >>
> >>
> >> ******************************************************************
> >>                        Gerard J. Kleywegt
> >>     Dept. of Cell & Molecular Biology   University of Uppsala
> >>                  Biomedical Centre  Box 596
> >>                  SE-751 24 Uppsala  SWEDEN
> >>
> >>    http://xray.bmc.uu.se/gerard/  mailto:ger...@xray.bmc.uu.se
> >> ******************************************************************
> >>   The opinions in this message are fictional.  Any similarity
> >>   to actual opinions, living or dead, is purely coincidental.
> >> ******************************************************************
> >
>
