Hello,

A partially philosophical, partially pragmatic question.
I've noticed a trend, both on ccp4bb and locally, to jump to twinning as an explanation for data sets that do not refine well - that is, data sets with R and Rfree stuck above whatever the person's preconceived idea of an acceptable R and Rfree is. This usually leads to a mad chase through all possible space groups, twinning refinements, etc., and, in my experience, often results in a lot of time spent for no significant improvement.
Just out of curiosity, does anyone have a feel for what fraction of stuck data sets are actually twinned? (I presume this will vary somewhat with the type of problem being worked on.)
And a sort-of-hypothetical question: given nice-looking crystals; images with no visible split spots, extra reflections, or streaks; good predictions; nice integration profiles; good scaling with reasonable systematic absences; a normal solvent content; and a plausible structure solution, but R/Rfree somewhat highish (let's say 0.25/0.30 for 1.8 A data), how often would you expect the stuck R/Rfree to be caused by twinning (or would you not consider this a failed refinement)? (My bias is that such data sets are almost never twinned and one should look elsewhere for the problem, but perhaps others know better.)
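For what it's worth, there is a cheap first check before any space-group chase: the second moment of the acentric intensities, <I^2>/<I>^2, which Wilson statistics put at about 2.0 for untwinned data, dropping toward 1.5 for a perfect hemihedral twin. Here's a minimal sketch of that test (my own illustration, not from any particular package; the input file name is hypothetical):

import numpy as np

def second_moment(intensities):
    # Wilson second moment <I^2>/<I>^2 of acentric reflections:
    # ~2.0 expected for untwinned data, ~1.5 for a perfect twin.
    I = np.asarray(intensities, dtype=float)
    I = I[I > 0]  # crude: drop non-positive merged intensities
    return np.mean(I**2) / np.mean(I)**2

# Hypothetical input: one merged acentric intensity per line.
intensities = np.loadtxt("acentric_I.txt")
m2 = second_moment(intensities)
print("<I^2>/<I>^2 = %.2f (untwinned ~2.0, perfect twin ~1.5)" % m2)

In practice you'd compute this in resolution shells, or on normalized intensities, so the Wilson fall-off doesn't bias the moment - TRUNCATE already prints these moments for you.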
Sue

Sue Roberts
Biochemistry & Biophysics
University of Arizona
[EMAIL PROTECTED]
