If R-sleep is to be the "real" validation R-factor, why not sequester both R-sleep and the current R-free, each as a randomly chosen (but mutually exclusive) set of reflections, and then proceed as normal with the remaining (e.g.) 80% of the data until the very end of refinement, using the R-free set to optimize weightings for geometry, NCS averaging, and so forth, before simply adding those reflections back in at the penultimate step of refinement? In the end, you have R-sleep and the R-factor corresponding to the rest of the data, just as before, plus an additional statistic reporting the difference between R-sleep and R-free, which we could call something like R-I-didn't-peek.
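For concreteness, the partitioning step above could be sketched as follows. This is a minimal illustration in Python, not any program's actual implementation; the 10%/10% fractions and the function name are hypothetical choices for the example:

```python
import random

def partition_reflections(n_reflections, free_frac=0.10, sleep_frac=0.10, seed=0):
    """Randomly assign each reflection index to one of three mutually
    exclusive sets: 'work' (used in refinement), 'free' (used to tune
    weights), or 'sleep' (never looked at until the very end).
    The fractions here are illustrative, not recommendations."""
    rng = random.Random(seed)
    indices = list(range(n_reflections))
    rng.shuffle(indices)
    n_free = int(n_reflections * free_frac)
    n_sleep = int(n_reflections * sleep_frac)
    free_set = set(indices[:n_free])
    sleep_set = set(indices[n_free:n_free + n_sleep])
    # Everything not drawn into free or sleep stays in the working set.
    return {i: ('free' if i in free_set
                else 'sleep' if i in sleep_set
                else 'work')
            for i in range(n_reflections)}
```

With the default fractions, 80% of reflections remain in the working set, and the free and sleep sets never overlap, so R-sleep stays untouched by any weight-tuning done against R-free.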
Peter Adrian Meyer wrote:
> This raises a slightly tangential question though - how do we know
> what obs/param ratio is good enough?
