Dear Caroline,

In general, it is very difficult to calibrate a neutron diffractometer well
enough to trust the absolute lattice parameters (the relative ones are
usually excellent).  This is especially true for TOF diffractometers.  There
are many reasons for this, including sample position and transparency
effects.  The latter are particularly annoying, because they are
wavelength-dependent, and introduce a non-linear relation between TOF and
d-spacing.  Also, the asymmetric pulse shape does not help.  En passant (are
you listening Bob?), I notice that there is currently no Rietveld code I'm
aware of capable of correcting neutron data for transparency.  Ideally, the
refineable parameter should be mu*R, exactly as for the absorption
correction, and it should be possible to set a constraint between the two
values.  Also, a correction for sample position errors would be nice.  Note
that, for modern scintillator-based diffractometers, a 0.5 mm positioning
error (forwards or backwards) is equivalent to a peak shift of 10% of the
FWHM at 90 degrees.
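To put a rough number on that geometry, here is a back-of-envelope sketch; the flight paths, bank resolution and function name are assumed, illustrative values (not GEM's actual figures):

```python
import math

# Sketch: apparent d-spacing error caused by a sample displaced along the
# incident beam on a TOF diffractometer.  L1, L2 and the resolution below
# are made-up, illustrative numbers.

def apparent_d_shift(L1, L2, two_theta_deg, delta):
    """Fractional error (d_apparent - d_true)/d_true when the sample sits
    `delta` metres downstream of its nominal position."""
    tt = math.radians(two_theta_deg)
    dx, dy = L2 * math.cos(tt), L2 * math.sin(tt)   # detector; sample at origin
    L1_true = L1 + delta                            # actual primary path
    L2_true = math.hypot(dx - delta, dy)            # actual secondary path
    tt_true = math.atan2(dy, dx - delta)            # actual scattering angle
    # the analysis assumes t is proportional to (L1+L2) * 2 * d * sin(theta_nominal)
    return ((L1_true + L2_true) * math.sin(tt_true / 2.0)
            / ((L1 + L2) * math.sin(tt / 2.0))) - 1.0

shift = apparent_d_shift(17.0, 1.5, 90.0, 0.0005)   # 0.5 mm error, 90-degree bank
fwhm = 2e-3                                          # assumed delta-d/d FWHM
# with these assumed numbers, shift/fwhm comes out around the 10% level
```

With a 17 m primary and 1.5 m secondary path the shift is dominated by the change in scattering angle, not by the path length itself, which is why it does not average away across a detector bank.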

My usual advice to my users is that if they want to know the absolute
lattice parameters, they should measure them on their X-ray diffractometer.
Having done that, they should fix the lattice constants and refine the
neutron instrument parameter constants for all banks on the room temperature
data.  These constants should then be fixed for the other temperatures.
This advice should apply equally well to your case, especially if you have
synchrotron data, which are usually very precise.  
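For TOF banks, the instrument parameter constants in question are typically the GSAS-style DIFC, DIFA and ZERO, related to d-spacing by t = ZERO + DIFC*d + DIFA*d^2 (the quadratic DIFA term is often what soaks up part of the transparency shift).  A minimal sketch of the inversion, with made-up bank constants:

```python
import math

# GSAS-style TOF <-> d-spacing relation: t = ZERO + DIFC*d + DIFA*d**2.
# The numerical constants below are invented for illustration only.

def d_from_tof(t, difc, difa, zero):
    """Invert t = zero + difc*d + difa*d**2 for d (positive root)."""
    if difa == 0.0:
        return (t - zero) / difc
    disc = difc**2 + 4.0 * difa * (t - zero)
    return (-difc + math.sqrt(disc)) / (2.0 * difa)

# round-trip check with hypothetical per-bank constants
difc, difa, zero = 2800.0, 1.5, -4.0
d = 2.0                                  # Angstrom
t = zero + difc * d + difa * d**2        # forward conversion
d_back = d_from_tof(t, difc, difa, zero)
```

Once these three numbers are refined per bank against the fixed room-temperature lattice, they stay frozen for the other temperatures.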

The situation is not that bad for constant-wavelength diffractometers,
especially of the multi-Soller type (like D2B), provided that they are
recalibrated every time the monochromator is moved.  
You should also consider the possibility that the difference you observe is
due to a compositional gradient from the inside to the outside of the grain.

Paolo Radaelli
GEM instrument scientist