Hey all,

let me give this discussion a little kick and see if it spins into outer
space.

How many reflections do people use for cross-validation?  Five per cent is a
value that I read often in papers.  Georg Zocher started with 5% but lowered
that to 1.5% in the course of refinement.  We once had reviewers complain
that the 0.3% of reflections we used were not enough.
However, Axel Brünger's initial publication deems 1000 reflections
sufficient, and that's exactly what 0.3% of reflections corresponded to in
our data set.
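
To put those fractions side by side, here is a quick back-of-the-envelope
sketch.  The total reflection count is inferred from the "0.3% corresponded
to ~1000 reflections" figure above, so it is my own estimate, not a number
from the papers:

```python
# Rough free-set sizes for the fractions mentioned in this thread.
# The total is back-calculated from "0.3% ~ 1000 reflections";
# substitute your own dataset's unique reflection count.
total = round(1000 / 0.003)  # ~333,000 unique reflections (inferred)

for frac in (0.05, 0.015, 0.003):
    print(f"{frac:.1%} -> {round(total * frac):>6} free reflections")
# prints:
# 5.0% ->  16667 free reflections
# 1.5% ->   5000 free reflections
# 0.3% ->   1000 free reflections
```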

I would think the fewer observations are discarded, the better.  Can one
lower this number further by picking reflections smartly, e.g. avoiding
symmetry-related reflections as was discussed on the ccp4bb a little while
back?  Should one agonize at all, given that one should do a last run of
refinement without any reflections excluded?
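
One "smart picking" scheme that has come up on the ccp4bb is to assign free
flags in thin resolution shells, so that reflections related by
(non-)crystallographic symmetry, which lie at (nearly) the same resolution,
end up in the same set.  A minimal sketch of the idea in Python — the
function name, the shell count, and the 1/d^3 binning are my own
illustrative choices, not the interface of any refinement program:

```python
import random

def thin_shell_free_flags(resolutions, n_shells=100, free_frac=0.05, seed=0):
    """Flag whole thin resolution shells as 'free', so reflections at the
    same resolution always share a flag.  Illustrative sketch only --
    not the API of any refinement program."""
    rng = random.Random(seed)
    d_min, d_max = min(resolutions), max(resolutions)
    span = 1 / d_min**3 - 1 / d_max**3 + 1e-12  # bin on 1/d^3 (~equal volume)

    def shell(d):
        s = (1 / d**3 - 1 / d_max**3) / span
        return min(int(s * n_shells), n_shells - 1)

    n_free = max(1, int(n_shells * free_frac))
    free_shells = set(rng.sample(range(n_shells), n_free))
    return [shell(d) in free_shells for d in resolutions]
```

Two reflections at identical resolution always receive the same flag by
construction, so no working-set reflection sits in the same thin shell as a
free one.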



Andreas


On 1/31/07, Georg Zocher <[EMAIL PROTECTED]> wrote:

 First of all, I would like to thank you for your comments.

After consideration of all your comments, I conclude that there are three
possibilities.

1.) search for particularly poorly-behaved regions using the
parvati server
   a.) refining the occupancy of those atoms and/or
   b.) tightening the restraints

Problems which have already been mentioned:
If I tighten the restraints, the anisotropic model may not be
statistically justified, which seems to be the case.

Using all reflections may not help that much, because I already chose a
small set of 1.5% for Rfree (~1300 reflections) to keep as much data as
possible in the refinement.  For my first attempts at anisotropic
refinement I used 5% of the reflections for Rfree, but the same problem
arose, so I cut the Rfree set down to 1.5%.

2.) Using shelxl

3.) TLS with multi-groups
   Should be the safe way!?
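
For option 3, multi-group TLS in Refmac is driven by a TLSIN file with one
TLS/RANGE block per group.  A hedged sketch of what such a file looks like —
the chain, domain names, and residue ranges below are made-up placeholders,
and the RANGE records use a fixed-column format, so check the Refmac
documentation for the exact syntax:

```
TLS    Chain A, N-terminal domain
RANGE  'A   1.' 'A  90.' ALL

TLS    Chain A, C-terminal domain
RANGE  'A  91.' 'A 178.' ALL
```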

I will try all the possibilities, but the TLS refinement in particular
seems a good option worth trying.

Thanks for your helpful advice,

georg

