Hi Frank,

thanks a lot for your comments; they raise some interesting points.
R_pim should give the precision of the averaged measurement, hence the
name. It will decrease with increasing data redundancy, obviously. The
decrease will be proportional to the square root of the redundancy if
only statistical (counting) errors are present. If other things happen,
such as for instance radiation damage, then you are introducing
systematic errors, which will lead to R_pim either decreasing less than
it should or even increasing. This raises an important issue: as more
and more images are added to a data set, could one decide at some point
whether adding any further images is still worthwhile? I have done a
little work in that direction, but nothing exhaustive.

Cheers, Manfred.

********************************************************************
*                                                                  *
* Dr. Manfred S. Weiss                                             *
*                                                                  *
* Team Leader                                                      *
*                                                                  *
* EMBL Hamburg Outstation     Fon:   +49-40-89902-170              *
* c/o DESY, Notkestr. 85      Fax:   +49-40-89902-149              *
* D-22603 Hamburg             Email: [EMAIL PROTECTED]             *
* GERMANY                     Web:   www.embl-hamburg.de/~msweiss/ *
*                                                                  *
********************************************************************

On Sun, 7 Dec 2008, Frank von Delft wrote:

> Hi Manfred
>
> I've been using and thinking about Rmeas ever since I first saw it; but
> (embarrassingly) I've only just woken up to Rpim -- so thanks for the
> prompt. So I trawled the original reference (Weiss and Hilgenfeld,
> 1997) to find out why it has the form it does, but I must have skimmed
> too quickly, because I couldn't find the explanation.
>
> Rpim, as I understand it, is trying to do two things (see Eq. 3 in the
> link below):
> 1) penalise me for bad data
> 2) reward me for high redundancy
>
> But why that *particular* balance of redundancy vs. badness? And how do
> we know that it was the best one?
>
> And is this really waterproof? Since the redundancy factor (1/(N-1))
> tends to zero for large N, does it not dominate for large redundancy?
> For instance, for terrible data (e.g.
wrong symmetry) but very high
> redundancy, Rpim will still tend to zero, won't it?
>
> So to counteract that, N might be downweighted in turn by the data
> badness. Which could in its turn again be.... I don't think I like
> where this is going :)
>
> Cheers
> phx.
>
> Manfred S. Weiss wrote:
> > Dear Deb,
> >
> > R_meas or R_rim is a merging R-factor which is independent of the
> > redundancy or multiplicity of the data (hence its name); R_pim
> > stands for precision-indicating merging R-factor. R_pim
> > gives you the precision of the averaged measurement, which is
> > the one you are actually using for structure solution and refinement.
> >
> > SCALA will calculate both R_rim (R_meas) and R_pim, XDS/XSCALE
> > will calculate R_rim (R_meas) only, and SCALEPACK neither of the
> > two. However, you may produce a file from SCALEPACK with scaled
> > but unmerged intensities (option NO MERGE ORIGINAL INDEX)
> > and then download a program from my site called RMERGE or
> > RMERGE_4LINUX, which will do the job for you.
> >
> > If you have further questions, please see the page
> > http://www.embl-hamburg.de/~msweiss/projects/msw_qual.html
> > or ask me.
> >
> > Cheers, Manfred
> >
> > On Sat, 6 Dec 2008, Debajyoti Dutta wrote:
> >
> >> Dear members,
> >>
> >> I have a little query here about Rpim and Rmeas. How are these used
> >> to assess data quality, and how can one calculate them?
> >>
> >> Thank you in advance for your reply.
> >>
> >> Sincerely,
> >> Deb
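The statistics discussed in this thread have simple definitions: summing over all reflections hkl with redundancy N, R_merge uses the raw absolute deviations from the mean intensity, R_meas (= R_rim) weights each reflection's deviations by sqrt(N/(N-1)) to remove the redundancy dependence, and R_pim weights them by sqrt(1/(N-1)) to reflect the precision of the merged mean. A minimal sketch of the arithmetic (the function name and the input layout are illustrative, not taken from SCALA, XDS, or RMERGE):

```python
from collections import defaultdict
from math import sqrt

def merging_r_factors(observations):
    """Compute (R_merge, R_meas, R_pim) from scaled, unmerged intensities.

    `observations` is a list of (hkl, intensity) pairs, where hkl is any
    hashable, symmetry-reduced reflection index.  Reflections measured
    only once carry no redundancy information and are skipped.
    """
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)

    num_merge = num_meas = num_pim = denom = 0.0
    for intensities in groups.values():
        n = len(intensities)
        if n < 2:
            continue  # singleton: no deviation from its own mean
        mean_i = sum(intensities) / n
        abs_dev = sum(abs(i - mean_i) for i in intensities)
        num_merge += abs_dev                        # redundancy-dependent
        num_meas += sqrt(n / (n - 1)) * abs_dev     # redundancy-independent
        num_pim += sqrt(1 / (n - 1)) * abs_dev      # precision of the mean
        denom += sum(intensities)

    return num_merge / denom, num_meas / denom, num_pim / denom
```

This also makes Frank's worry concrete: if the per-observation scatter stays the same while N grows, the sqrt(1/(N-1)) factor drives R_pim toward zero regardless of how bad the individual measurements are, whereas R_meas stays flat.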
