Just to expand a little on the beam stop problem: the outlier
rejection algorithm in Scala (& I imagine in other programs) relies on
a consensus; that is, it essentially assumes that the majority of
observations are correct (actually they are weighted by 1/
EstimatedVariance). This means that if you have 3 observations of a
reflection behind the beam stop, & one not, then the program is likely
to throw out the one good one & keep the 3 bad ones. It's hard to
think of a good algorithm which would do the Right Thing (maybe we
should assume that for reflections beyond, say, 20Å resolution the strong
ones are right, but I'm not sure this wouldn't cause worse problems).
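To make the failure mode concrete, here is a toy Python sketch of inverse-variance weighted consensus rejection. This is not Scala's actual algorithm (the cutoff, weights, and intensities below are all made up for illustration); it just shows how three near-zero "behind the beam stop" observations outvote the one good measurement.

```python
# Toy sketch (NOT Scala's real algorithm) of consensus outlier rejection
# with observations weighted by 1/EstimatedVariance.

def weighted_mean(obs):
    """Inverse-variance weighted mean of (intensity, sigma) pairs."""
    w = [1.0 / s**2 for _, s in obs]
    return sum(wi * i for wi, (i, _) in zip(w, obs)) / sum(w)

def reject_outliers(obs, cutoff=3.0):
    """Flag any observation deviating more than `cutoff` sigmas from the
    weighted mean of the remaining observations."""
    kept, rejected = [], []
    for k, (i, s) in enumerate(obs):
        mean_of_others = weighted_mean(obs[:k] + obs[k + 1:])
        if abs(i - mean_of_others) <= cutoff * s:
            kept.append((i, s))
        else:
            rejected.append((i, s))
    return kept, rejected

# One genuine measurement (I = 1000) and three predicted behind the
# beam stop, integrating to ~zero:
obs = [(1000.0, 50.0), (5.0, 10.0), (-2.0, 10.0), (8.0, 10.0)]
kept, rejected = reject_outliers(obs)
print(rejected)   # the good measurement is the one thrown out
```

Because the three bogus values agree with each other, their consensus wins: the single correct intensity sits hundreds of sigmas from their mean and is rejected.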
So tell the integration program where the beam stop is!
Phil
On 16 Feb 2009, at 09:07, Clemens Vonrhein wrote:
Dear Ho,
On Fri, Feb 13, 2009 at 04:45:29PM -0800, Ho-Leung Ng wrote:
Can you elaborate on the effects of improper inclusion of low
resolution (bogus?) reflections? Other than rejecting spots from
obvious artifacts, it bothers me to discard data. But I can also see
how a few inaccurate, very high intensity spots can throw off
scaling.
2) beamstop
The integration software will predict all reflections based on
your parameters (apart from the 000 reflection): it doesn't care
if such a reflection would be behind the beamstop shadow or
not. However, a reflection behind the beamstop will obviously not
actually be there - and the integrated intensity (probably a very
low value) will be wrong.
One example of such effects in the context of experimental
phasing is bogus anomalous differences. Imagine that your
beamstop is not exactly centred around the direct beam. You will
have it extending a little bit more to one side (giving you
maybe 20A low resolution) than to the other side (maybe 30A
resolution). In one orientation of the crystal you might be able
to collect a 25A (h,k,l) reflection very well (because it is on
the side where the beamstop only starts at 30A) - but the
(-h,-k,-l) reflection is collected in an orientation where it is
on the 20A-side of the beamstop, i.e. it is predicted within the
beamstop shadow.
Effect: you have a valid I+ measurement but a more-or-less zero
I- measurement, giving you a huge anomalous difference that
shouldn't really be there.
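With some invented numbers (a hypothetical Bijvoet pair; the intensities and sigmas below are illustrative only), the effect looks like this:

```python
# Made-up intensities for a ~25A Bijvoet pair: I+ was really measured,
# but I- was predicted inside the beamstop shadow and integrated as ~zero.
i_plus, sig_plus = 1200.0, 40.0    # (h,k,l): cleanly recorded
i_minus, sig_minus = 3.0, 15.0     # (-h,-k,-l): behind the beamstop

d_anom = i_plus - i_minus
sig_d = (sig_plus**2 + sig_minus**2) ** 0.5

# A genuine anomalous signal is typically a few percent of I; here the
# "difference" is essentially the whole intensity - a huge, spurious
# anomalous difference with huge apparent significance.
print(d_anom, d_anom / sig_d)
```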
Now if you measured your data in different orientations (kappa
goniostat) with high enough multiplicity, this one bogus
measurement will probably be thrown out during
scaling/merging. You can e.g. check the so-called ROGUES file
produced by SCALA. But if you have the usual multiplicity of only
3-4, the scaling/merging process might not correctly detect this as
an outlier, and it ends up in your data. Sure, programs
like autoSHARP will check for these outliers and try to reject
them - but this is only a hack/fix for the fundamental problem:
telling the integration program what the good area of the
detector is.
Solution: mask your beamstop. All integration programs have tools
for doing that (some are better than others). I haven't seen any
program able to do it automatically in a reliable way (where
reliable would mean: correct in at least 50% of cases) - but I'm
no expert in all of them by a long shot. It usually takes me only
a minute or two to mask the beamstop by hand. A
small investment for a big return (good data) ;-)