Hi,

I think a problem I've had with my thesis project applies to this discussion:

My R-factors, after I had built a reasonable model into experimental maps in
P3121, were stuck in the upper 40s and would absolutely not go lower.  Trying
the other space-group options (P31, C2) didn't help, and the model appeared
fine: it matched the experimental density reasonably well, adopted the fold I
expected, etc.  Although my resolution was borderline low for it (2.25 A) and
the intensity distribution didn't indicate twinning, I tried refining in SHELXL
with a twin law (P31 with cell 58 x 58 x 117 A, twin law 1 0 0 -1 -1 0 0 0 -1,
BASF refined to 0.5009).  The twin law I used corresponded to the two-fold of
P3121.  I was ecstatic when my R-factors came down to R = 28, Rfree = 34!
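For readers unfamiliar with twin laws, here is a minimal sketch of what that
matrix does: it maps each reflection onto the twin-related reflection whose
intensity overlaps it, and it shows why a twin fraction (BASF) near 0.5 is a
red flag. Index-transformation conventions differ between programs, so this
illustrates the idea rather than SHELXL's exact internals.

```python
import numpy as np

# The twin law quoted above ("TWIN 1 0 0 -1 -1 0 0 0 -1"), written as a
# 3x3 matrix acting on column vectors (h, k, l)^T.
M = np.array([[ 1,  0,  0],
              [-1, -1,  0],
              [ 0,  0, -1]])

def twin_mate(hkl):
    """Miller indices of the reflection superimposed on hkl by the twin law."""
    return tuple(int(x) for x in M @ np.asarray(hkl))

def detwin(I_obs, I_obs_mate, alpha):
    """Classic detwinning of a twin-related intensity pair for twin
    fraction alpha.  Note the denominator: as alpha -> 0.5 the two
    individuals become inseparable, which is why a BASF refining to
    ~0.5 deserves suspicion rather than celebration."""
    return ((1 - alpha) * I_obs - alpha * I_obs_mate) / (1 - 2 * alpha)
```

As expected for a (pseudo-)merohedral twin operator, applying `M` twice gives
the identity, so `twin_mate(twin_mate(hkl))` returns the original indices.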

I rationalized this method of refinement to myself by saying that the very poor
data quality (high chi^2 values and mosaicity, poor agreement between
symmetry-related reflections, messy spots) was interfering with detection of
twinning by the various twin tests.  Not true.  After I was lucky enough to
find another crystal form, the new, better data allowed me to identify a
serious error in my model (a ~25-residue loop that wasn't well ordered in the
first crystal form).  Because my protein is small (~250 aa), that loop had a
large effect on the R-factors.  When I corrected it, my model began to refine
properly.
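For reference, the intensity-distribution test mentioned above can be sketched
in a few lines.  The standard second-moment statistic <I^2>/<I>^2 is about 2.0
for untwinned acentric data and drops toward 1.5 for a perfect hemihedral
twin; the simulation below uses idealized Wilson (exponential) intensities
rather than real measurements, which is an assumption for illustration only.

```python
import numpy as np

def second_moment(I):
    """Second-moment twin statistic <I^2>/<I>^2 for acentric intensities:
    ~2.0 untwinned, ~1.5 for a perfect twin (alpha = 0.5)."""
    I = np.asarray(I, dtype=float)
    return float(np.mean(I**2) / np.mean(I)**2)

rng = np.random.default_rng(0)
J1 = rng.exponential(size=200_000)   # simulated untwinned intensities
J2 = rng.exponential(size=200_000)   # independent twin-mate intensities
I_twin = 0.5 * (J1 + J2)             # observed intensities of a perfect twin
```

Averaging two independent intensity distributions narrows the spread, which is
exactly what pushes the statistic from ~2.0 down to ~1.5; noisy, poorly scaled
data can blur this signature, which is part of what makes borderline cases
hard to call.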

It's sad to see months or years of your thesis work summed up in two
paragraphs!  Chalk it up to inexperience.  At least now I would know how to
deal with a real twin.

Hope this is useful,
Jess

Quoting Sue Roberts <[EMAIL PROTECTED]>:

Hello

A partially philosophical, partially pragmatic question.

I've noticed a trend, both on ccp4bb and locally, to jump to twinning as an explanation for data sets which do not refine well - that is, data sets with R and Rfree stuck above whatever the person's preconceived idea of an acceptable R and Rfree is. This usually leads to a mad chase through all possible space groups, twinning refinements, etc., and, in my experience, often results in a lot of time being spent for no significant improvement.

Just out of curiosity, does anyone have a feel for what fraction of stuck data sets are actually twinned? (I presume this will vary somewhat with the type of problem being worked on).

And a sorta-hypothetical question: given nice-looking crystals; images with no visible split spots, extra reflections, or streaks; good predictions; nice integration profiles; good scaling with reasonable systematic absences; a normal solvent content; and a plausible structure solution, with R/Rfree somewhat highish (let's say 0.25/0.30 for 1.8 A data), how often would you expect the stuck R/Rfree to be caused by twinning (or would you not consider this a failed refinement)? (My bias is that such data sets are almost never twinned and one should look elsewhere for the problem, but perhaps others know better.)

Sue
Sue Roberts
Biochemistry & Biophysics
University of Arizona

[EMAIL PROTECTED]
