Re: [RE: Combined neutron/x-ray refinements]

1999-05-25 Thread Alan Hewat, ILL Grenoble

If we have an atom that is seen by one
radiation and not by the other there will be a degradation in the quality of
the parameters by combining the refinement in the current fashion. 

Do you mean for example that we might degrade the parameters of a V atom 
by introducing neutron data ?

I don't think this is true, but it is an interesting question.  If we were to
extrapolate this argument "ad absurdum" we could say that because some
reflections (for a given radiation) do not give any information about some 
parameters (easy to demonstrate) then we would obtain better estimates 
for those parameters by removing those reflections from the least squares 
process.  (Surely untrue :-)

What is true is that if we introduce systematic errors by combining 
radiations, we may indeed degrade the result.  For example if we have
serious preferred orientation with a very small X-ray sample, it is
probably unwise to introduce this biased information into the refinement
of the neutron data, where there may be less bias because of the average
over a much larger volume.  

But if the data is not biased, you must always (?) do better by including
more data, with for example combined X-ray and neutron refinements.

Surely, it
would be better to use a new weighting function for the atomic parameters,
that is dependent on the scattering lengths for each radiation.

Playing around with weighting schemes is to enter dangerous territory.

Alan H.

Alan Hewat, ILL Grenoble, FRANCE [EMAIL PROTECTED] tel (33) 4.76.20.72.13 
ftp://ftp.ill.fr/pub/dif  fax (33) 4.76.48.39.06  http://www.ill.fr/dif/




Re: [Re: [RE: Combined neutron/x-ray refinements]]

1999-05-25 Thread Andrew Wills

Alan,

I am not suggesting removing reflections. But I think that we should make
sure that we are combining the data in the best possible way. If we now have
strong information on a vanadium position from X-rays and (to extrapolate again)
have only noise from neutrons, then statistically, introducing the neutron data,
whilst not changing the best fit, will degrade the least-squares estimate of it.
The final structure should fit all the data, but are we approaching it
optimally? I know that this is a can of worms, but it is good to think about
what we are doing, as combined refinements will continue to become less exotic.

-Andrew
--
Andrew Wills
Centre D'Études Nucléaires de Grenoble


"Alan Hewat, ILL Grenoble" [EMAIL PROTECTED] wrote:
If we have an atom that is seen by one
radiation and not by the other there will be a degradation in the quality of
the parameters by combining the refinement in the current fashion. 

Do you mean for example that we might degrade the parameters of a V atom 
by introducing neutron data ?

I don't think this is true, but it is an interesting question.  If we were to
extrapolate this argument "ad absurdum" we could say that because some
reflections (for a given radiation) do not give any information about some 
parameters (easy to demonstrate) then we would obtain better estimates 
for those parameters by removing those reflections from the least squares 
process.  (Surely untrue :-)

What is true is that if we introduce systematic errors by combining 
radiations, we may indeed degrade the result.  For example if we have
serious preferred orientation with a very small X-ray sample, it is
probably unwise to introduce this biased information into the refinement
of the neutron data, where there may be less bias because of the average
over a much larger volume.  

But if the data is not biased, you must always (?) do better by including
more data, with for example combined X-ray and neutron refinements.

Surely, it
would be better to use a new weighting function for the atomic parameters,
that is dependent on the scattering lengths for each radiation.

Playing around with weighting schemes is to enter dangerous territory.

Alan H.

Alan Hewat, ILL Grenoble, FRANCE [EMAIL PROTECTED] tel (33) 4.76.20.72.13 
ftp://ftp.ill.fr/pub/dif  fax (33) 4.76.48.39.06  http://www.ill.fr/dif/







Re: Stress

1999-05-25 Thread Matteo Leoni

Hi all,

..following up a bit on the last posting on the topic...

  I am starting to be a little concerned about all those people claiming and
  pointing out that diffraction doesn't measure a residual stress but a
  residual strain.
  ..

  So, my own idea is that people who don't know how to transform strain into
  stress will measure only strain, and the others are measuring stresses.

  Let me stress the audience about that; maybe it is because I am an engineer
  and not a physicist or chemist.
 
 
 Well, there is nothing wrong with being an engineer, but I cannot share your
 concerns. Not to bore the audience, I will just point out that you are short of
 a few quantities needed to measure stress directly from a diffraction
 experiment, namely the elastic constants (which one hopes to find conveniently
 tabulated somewhere, and prays apply to the material under investigation).

In any case it is in principle possible to measure some "elastic
constants" using diffraction: it's just a matter of performing an in-situ
3-point bending, tensile or whatever other kind of mechanical test on your
sample and evaluating the response of the material itself.
Since this is rarely possible, tabulated values, as you say, are the next
choice... but they're useful only if you know what kind of grain
interaction model is applicable to your particular case. I mean: you find
tabulated the single-crystal elastic constants or the macroscopic
mechanical elastic constants, but you need the X-ray elastic constants for
the polycrystalline aggregate under study... you're not measuring all the
crystallites... you're just sampling some of them.
Sometimes (e.g. in the case of thin films, especially if they're textured),
having the tabulated values for the single-crystal elastic constants and
getting a "physically reasonable" stress value are not the same thing!

 For measuring strain, on the other hand, the diffraction experiment provides
 all the quantities necessary; you don't even have to know the material.

Are you really sure?? If you only care about relative values I agree;
otherwise you need the unstressed lattice spacing.
I agree that what you measure is something "easily" related to the
interplanar spacing; from that you can "easily" get the integrated
relative displacement in the laboratory system (with no "personal
assumptions"). What follows is then just a matter of ASSUMING a
behavior for the material and modelling it. From my point of view it's a
huge integral problem, so the solution is not unique... we just need
better models.
As for the fact that some people are able to measure strain and others are
able to measure stresses, well, even if it is impossible to demonstrate
that someone knows THE residual stress present in a sample, it's rather
easy to show that different values can be obtained from THE SAME set of
measured points. Moreover, if variations of the sin2(psi) method are used,
it can be shown, even without recourse to Shannon's theorem, that in most
cases the result also depends on the sampling of the (psi, 2theta/theta)
space.
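
Since the sin2(psi) method has come up, a minimal Python sketch of how a
stress number is usually extracted from measured d-spacings may help. It
assumes a biaxial stress state, quasi-isotropic elastic constants and a known
d0, and the data are made up purely for illustration:

    # Classical sin^2(psi) analysis: biaxial stress state, quasi-isotropic
    # elastic constants, and a known strain-free spacing d0 are all ASSUMED;
    # the numbers below are invented for illustration only.
    import numpy as np

    d0      = 1.17000e-10                                   # strain-free d-spacing, m
    psi_deg = np.array([0, 18, 27, 33, 39, 45])
    d_meas  = np.array([1.17010, 1.17022, 1.17031,
                        1.17039, 1.17047, 1.17058]) * 1e-10 # measured d at each psi

    eps = (d_meas - d0) / d0                 # lattice strain at each tilt
    x   = np.sin(np.radians(psi_deg))**2     # sin^2(psi)

    slope, intercept = np.polyfit(x, eps, 1) # eps vs sin^2(psi) assumed linear
    half_S2   = (1.0 + 0.30) / 210e9         # quasi-isotropic 1/2 S2 (see sketch above)
    sigma_phi = slope / half_S2              # in-plane stress component, Pa

    # A different d0, a different psi sampling or a different set of elastic
    # constants gives a different sigma_phi from the same measured points.
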
Ok ok... this is supposed to be the Rietveld mailing list ;o) oops.. Alan
is gonna kill me... well.. but there are some analogies with the Rietveld
method... and stress models have already been implemented in it... and
well.. at least here the neutron and X-ray fans cannot fight ;o) The two
techniques are even more "complementary to each other" here than for
structure determination... to get more info (surface and bulk) you need
both! Now I'll get a lot of ... ehm on me.. but in this case X-rays really
do become a "surface" technique and neutrons a "bulk" one... otherwise I'd
give anyone a thin film and a 30cm-thick "heavy" sample and ask them to get
reasonable strain values for both using only one technique!

Have fun...

Mat


PS. In any case it's better to know the material.. and the instrument:
residual macrostrain is not the only source of peak shifts!


Matteo Leoni, PhD
Department of Materials Engineering, University of Trento
38050 Mesiano (TN), Italy
MPI fuer Metallforschung, Seestrasse 92, 74170 Stuttgart (D)
Tel +39 461 882417   Fax +39 461 881977   E-mail: [EMAIL PROTECTED]

RE: Combined neutron/x-ray refinements

1999-05-25 Thread P . G . Radaelli

Jon Wright wrote:

I guess the degradation which is found would come from parameters which
are determined by both datasets and come out with different values in each
separate refinement. 

Not necessarily.  In order to get the ESD, the variance-covariance matrix is
multiplied by chi^2, and the roots of the diagonal elements are taken.
Therefore, if the chi^2 of the combined refinement is worse than that of the
individual ones, the ESD will automatically be worsened.  I think this is by
far the commonest case.  Also, by adding reflections that are insensitive to
a given parameter my feeling is that you increase the esd on that parameter
even if chi^2=1, but the proof of this is too tedious.
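
A minimal Python sketch of the recipe just described may make it concrete;
these are the standard weighted least-squares formulas with toy numbers, not
the output of any particular program:

    # esd_i = sqrt( reduced chi^2 * [ (J^T W J)^-1 ]_ii )
    import numpy as np

    def esds(J, w, residuals, n_params):
        cov      = np.linalg.inv(J.T @ np.diag(w) @ J)   # variance-covariance matrix
        chi2_red = (w * residuals**2).sum() / (len(residuals) - n_params)
        return np.sqrt(chi2_red * np.diag(cov)), chi2_red

    J   = np.ones((5, 1))                           # derivatives dy_calc/dp (toy)
    w   = np.ones(5)                                # weights
    res = np.array([0.5, -1.0, 0.8, -0.3, 1.2])     # (y_obs - y_calc)
    sigma, chi2 = esds(J, w, res, 1)
    # If the combined refinement fits worse (larger reduced chi^2), every esd
    # is inflated by the same factor sqrt(chi2_red).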

Paolo  



RE: Combined neutron/x-ray refinements

1999-05-25 Thread Ed Cussen

As chi^2 is a function of the number of data points included in the
refinement, combined refinements have considerably improved values for a
total chi^2 when compared with refinements carried out against individual
data sets.  
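
For reference, here is a small Python sketch, with made-up numbers, of where
the number of points enters the reduced chi^2; if the individual fits
themselves are unchanged, the combined value is essentially a point-weighted
average of the individual ones:

    # Reduced chi^2 for two patterns separately and combined (made-up numbers;
    # P = number of refined parameters, ss = weighted sum of squared residuals).
    ss_n, n_n = 3600.0, 3000      # neutron pattern
    ss_x, n_x = 1600.0,  400      # x-ray pattern
    P = 30

    chi2_n        = ss_n / (n_n - P)                 # ~1.2
    chi2_x        = ss_x / (n_x - P)                 # ~4.3
    chi2_combined = (ss_n + ss_x) / (n_n + n_x - P)  # ~1.5, between the two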

Correspondingly the ESDs in the combined refinement output should be
significantly lower than those obtained from a single data set refinement
unless there is something drastically wrong with the application of
combined refinement to the particular problem (e.g. preferred orientation,
surface vs bulk etc).  

It is my experience that the combined refinement chi^2 is always lower
than that obtained from using just (say) the neutron data.  We have
frequently collected data sets at both room temperature and 5 K using D2b. 
The room temperature data are refined simultaneously with lab X-ray data
to give a chi^2 of 2.02 whilst the D2b data collected at 5 K refined as a
single data set gives chi^2 of 4.53 (published in JACS, 1999, 121,
3958-3967).  In my experience this improvement in chi^2 is typical. 

Eddie Cussen

Inorganic Chemistry Laboratory,
Department of Chemistry,
University of Oxford,
South Parks Road,
Oxford, OX1 3QR
United Kingdom
E-mail: [EMAIL PROTECTED]
tel: (..44)(0)1865-272602
Fax: (..44)(0)1865-272690

On Tue, 25 May 1999 [EMAIL PROTECTED] wrote:

 Jon Wright wrote:
 
 I guess the degradation which is found would come from parameters which
 are determined by both datasets and come out with different values in each
 separate refinement. 
 
 Not necessarily.  In order to get the ESD, the variance-covariance matrix is
 multiplied by chi^2, and the roots of the diagonal elements are taken.
 Therefore, if the chi^2 of the combined refinement is worse than that of the
 individual ones, the ESD will automatically be worsened.  I think this is by
 far the commonest case.  Also, by adding reflections that are insensitive to
 a given parameter my feeling is that you increase the esd on that parameter
 even if chi^2=1, but the proof of this is too tedious.
 
 Paolo  
 






RE: Combined neutron/x-ray refinements

1999-05-25 Thread Alan Hewat, ILL Grenoble

I guess the degradation which is found would come from parameters which
are determined by both datasets and come out with different values in each
separate refinement. 

If they come out differently it is because they are differently biased by 
different systematic errors in the data not described by the model.  

Not necessarily.  In order to get the ESD, the variance-covariance matrix is
multiplied by chi^2, and the roots of the diagonal elements are taken.
Therefore, if the chi^2 of the combined refinement is worse than that of the
individual ones, the ESD will automatically be worsened.

You may also get a higher chi^2 with higher resolution data, so does that 
mean that the structure will be less well determined with hi res data ?

I think not, because the correlation between structural parameters should 
then be smaller - even if you have more points to fit with the same number
of parameters, and the peak shapes are less well described by the model.
You should similarly do better if you have both X-ray and neutron data 
(in the absence of bias).

Also, by adding reflections that are insensitive to
a given parameter my feeling is that you increase the esd on that parameter
even if chi^2=1, but the proof of this is too tedious. 

Tedious and also impossible ? (This sounds like a contradiction in terms)
There is no case you can make based on pure statistics or the mathematics 
of refinement.  The only way combined refinement can be worse is if you 
introduce bias through systematic error (which unfortunately may happen).

... in most cases that the ESD's are underestimated.

Mainly because the ESD's are only correctly calculated if the model
is CAPABLE of fitting the data.  This is not usually true when systematic 
errors are important compared to statistical errors, since the model is
usually not capable of describing these systematic errors fully - 
background, texture etc... 

The conclusion is that you should use combined refinements provided that
one set of data does not contain important uncorrected systematic errors.

Alan Hewat, ILL Grenoble, FRANCE [EMAIL PROTECTED] tel (33) 4.76.20.72.13 
ftp://ftp.ill.fr/pub/dif  fax (33) 4.76.48.39.06  http://www.ill.fr/dif/




RE: Combined neutron/x-ray refinements

1999-05-25 Thread Jon Wright

On Tue, 25 May 1999 [EMAIL PROTECTED] wrote:

 Not necessarily.  In order to get the ESD, the variance-covariance matrix is
 multiplied by chi^2, and the roots of the diagonal elements are taken.

The justification for multiplying by chi^2 is the assumption that the
systematic errors are really just due to overestimated counting statistics,
so that the weight of each data point can be rescaled accordingly. A question
then arises: should you rescale each pattern's esds according to the
individual pattern's chi^2, or do you have to use the overall chi^2 for
both together?

Thinking of an (over-determined) D20 data set and an (under-determined) lab
X-ray data set, it makes sense to rescale the errors for the D20 data but
not for the X-ray data (common sense?!). It seems as if the method for
calculating the esd's is nonsense - surely one can only justify rescaling
the weights on a per-dataset basis, since the systematic errors being
accounted for in each dataset are different. Fullprof (multipattern) does
give a chi^2 per pattern, although I don't know how it gets the esd's; GSAS
doesn't, so I assume it degrades the esd's. (I read that the multiplication
by chi^2 has no basis in statistics anyway :)
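
A toy Python sketch of the two rescaling choices just described (made-up
numbers; not a claim about what GSAS or Fullprof actually does internally):

    import numpy as np

    var_unscaled = 1.0e-6   # unscaled variance of some parameter, from (J^T W J)^-1

    chi2_neutron = 1.1      # reduced chi^2 of the big, well-fitted neutron pattern
    chi2_xray    = 4.0      # reduced chi^2 of the small, poorly fitted x-ray pattern
    chi2_overall = 1.4      # overall reduced chi^2 of the joint fit (made up)

    esd_overall     = np.sqrt(chi2_overall * var_unscaled)  # one factor for everything
    esd_per_pattern = np.sqrt(chi2_neutron * var_unscaled)  # rescale only by the pattern
                                                            # that determines the parameter
    # The two prescriptions clearly differ; which (if either) is justified is
    # exactly the question raised above.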

So is it compulsory to multiply by the overall chi^2? If not then I see no
reason for a degradation unless the individual fits get worse due to a
disagreement over a parameter. 

 Therefore, if the chi^2 of the combined refinement is worse than that of the
 individual ones, the ESD will automatically be worsened.  I think this is by
 far the commonest case. 

Agreed although I'm interpreting it as an odd method for estimating an
error. Is it set in stone?

  Also, by adding reflections that are insensitive to
 a given parameter my feeling is that you increase the esd on that parameter
 even if chi^2=1, but the proof of this is too tedious.

Can you direct me to a text with this tedious proof? My feeling is that if
the derivative of a data point w.r.t. a parameter is small or zero then it
does not affect the LSQ calculation unless it alters the chi^2. If the
chi^2 is 1, then how does an extra bunch of zero derivatives affect an
esd??? For example, adding or excluding background regions shouldn't alter
the esd's on positions provided the chi^2 is unchanged.
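
In the Gauss-Newton picture this is easy to check numerically; a Python
sketch, assuming unit weights and ignoring any chi^2 rescaling:

    # Appending observations whose derivatives with respect to every parameter
    # are zero leaves J^T W J -- and hence the unscaled covariance matrix --
    # unchanged.
    import numpy as np

    rng = np.random.default_rng(1)
    J   = rng.normal(size=(200, 3))          # derivatives for 200 points, 3 parameters
    cov = np.linalg.inv(J.T @ J)             # unit weights

    J2   = np.vstack([J, np.zeros((50, 3))]) # add 50 background-only points
    cov2 = np.linalg.inv(J2.T @ J2)

    print(np.allclose(cov, cov2))            # True: identical covariance
    # So if the reduced chi^2 stays at 1, the quoted esds are unchanged.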

Is there anything other than GSAS for doing combined fits anyway?
Apologies to the list if I am displaying my ignorance, sometimes it's the
quickest way to learn.

Jon Wright

PS: Sorry to pick at your comments Paolo, it's a shame I'm not at RAL at
the moment. We could have discussed it over a coffee...




RE: Combined neutron/x-ray refinements

1999-05-25 Thread Jon Wright

On Tue, 25 May 1999, Alan Hewat, ILL Grenoble wrote:

 I guess the degradation which is found would come from parameters which
 are determined by both datasets and come out with different values in each
 separate refinement. 
 
 If they come out differently it is because they are differently biased by 
 different systematic errors in the data not described by the model.  

I was thinking of C-H (D) bond lengths from x-ray and neutron data. Don't
they come out differently if you use spherical form factors for the x-ray
data? I guess a neutron expert might look on this as a systematic error
in the x-ray model :) Maybe not, if one looks on the x-ray refinement as
fitting the electron density function rather than the nuclear
positions. For bonding studies it is the differences which are of
interest!

Jon Wright.

PS: Any offers other than GSAS and multipattern fullprof for actually
doing these fits? 



RE: Combined neutron/x-ray refinements

1999-05-25 Thread Lubomir Smrcok

On Tue, 25 May 1999, Alan Hewat, ILL Grenoble wrote:

 
 Mainly because the ESD's are only correctly calculated if the model
 is CAPABLE of fitting the data.  This is not usually true when systematic 
 errors are important compared to statistical errors, since the model is
 usually not capable of describing these systematic errors fully - 
 background, texture etc... 
 
...and weights.

Lubo