Hi All,

We did an analysis comparing on the order of 100 XANES spectra and found that
the normalization producing the most stereotypically "correct" spectrum did
not always produce the best linear combination fits between them.  That is,
we tend to think a XANES spectrum should be flat (derivative = 0) before the
edge, and then there is a step, and wiggles, and the wiggles are centered
around another flat line.  Certainly this form of spectrum communicates well
in publications, and it is probably best there, given that different
researchers use different normalization algorithms.  However, when comparing
XANES spectra against each other quantitatively, it is more important that
the same background-subtraction method be applied to every spectrum.  What
worked well for us was to write a single normalization routine in MATLAB
(derived from Matthew Marcus' description, since we acquired the XANES on his
beamline).  With it we were much better able to compare spectra and fit them
against each other by linear combination.  When we used spectra that had been
normalized individually, the linear combinations were never as good.
Incidentally, this is an argument for always including raw spectra in
supplementary materials, even though you would want to show the normalized
spectrum in a publication.
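A minimal sketch of what such a shared routine can look like (Python rather
than MATLAB; the function name, the window choices, and the synthetic
spectrum are all invented for illustration -- this is not the routine we
actually used):

```python
import numpy as np

def normalize_xanes(energy, mu, e0, pre_range=(-150, -30), post_range=(50, 300)):
    """Subtract a line fitted to the pre-edge, then divide by the value
    at e0 of a line fitted through the post-edge oscillations.
    Window bounds are relative to e0, in eV (illustrative defaults)."""
    pre = (energy >= e0 + pre_range[0]) & (energy <= e0 + pre_range[1])
    post = (energy >= e0 + post_range[0]) & (energy <= e0 + post_range[1])
    pre_line = np.polyfit(energy[pre], mu[pre], 1)
    mu_sub = mu - np.polyval(pre_line, energy)       # pre-edge-subtracted
    post_line = np.polyfit(energy[post], mu_sub[post], 1)
    edge_step = np.polyval(post_line, e0)            # step height at e0
    return mu_sub / edge_step

# synthetic spectrum: sloped instrumental background + unit edge step
energy = np.linspace(7000, 7400, 801)
e0 = 7112.0
step = 1.0 / (1.0 + np.exp(-(energy - e0)))
mu = 0.2 + 1e-4 * energy + step
norm = normalize_xanes(energy, mu, e0)
```

The point is less the particular windows than that every spectrum in a
comparison goes through the identical code path.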

Cheers,

Zack



On May 15, 2013, at 9:58 AM, ifeffit-requ...@millenia.cars.aps.anl.gov wrote:

Send Ifeffit mailing list submissions to
        ifeffit@millenia.cars.aps.anl.gov

To subscribe or unsubscribe via the World Wide Web, visit
        http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
or, via email, send a message with subject or body 'help' to
        ifeffit-requ...@millenia.cars.aps.anl.gov

You can reach the person managing the list at
        ifeffit-ow...@millenia.cars.aps.anl.gov

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Ifeffit digest..."


Today's Topics:

  1. normalization methods (Matt Newville)
  2. Re: normalization methods (Matthew Marcus)
  3. Re: normalization methods (Matt Newville)
  4. Re: normalization methods (Matthew Marcus)
  5. Re: normalization methods (George Sterbinsky)


----------------------------------------------------------------------

Message: 1
Date: Wed, 15 May 2013 09:35:41 -0500
From: Matt Newville <newvi...@cars.uchicago.edu>
To: XAFS Analysis using Ifeffit <ifeffit@millenia.cars.aps.anl.gov>
Subject: [Ifeffit] normalization methods
Message-ID:
        <ca+7esbobg5uos6or-turjy9zpghoa73c+bernkwsuo4gc4c...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi Folks,

Over on the github pages for larch, Mauro and Bruce raised an issue
about the "flattening" in Athena. See
https://github.com/xraypy/xraylarch/issues/44

I've added a "flattened output" from Larch's pre_edge() function, but
the question has been raised of whether this is "better" than the
simpler normalized spectra, especially for doing PCA and/or LCF for
XANES.

Currently, the "normalized" spectra is just "(mu -
pre_edge_line)/edge_step". Clearly, a line fitted to the pre-edge of
the spectra is not sufficient to remove all instrumental backgrounds.
In some sense, flattening attempts to do a better job, fitting the
post-edge spectra to a quadratic function.  As Mauro, Bruce, and
Carmelo have pointed out, it is less clear that it is actually better
for XANES analysis.  I think the main concerns are that a) it is so
spectra-specific, and b) it turns on at E0 with a step function.

Bruce suggested doing something more like MBACK or Ifeffit's bkg_cl().
It would certainly be possible to do some sort of "flattening" so
that mu follows the expected energy dependence from tabularized mu(E).

Does anyone else have suggestions, opinions, etc?  Feel free to give
them here or at the github page....

--Matt
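For concreteness, one reading of that flattening step as a sketch (Python;
the post-edge window and the synthetic drifting spectrum are invented, and
this may not match Athena's exact recipe):

```python
import numpy as np

def flatten(energy, norm, e0, post_range=(50, 300)):
    """Fit a quadratic to the normalized post-edge, then subtract its
    deviation from its value at e0 -- applied only for E >= e0, which
    is the step-function switch-on at E0 discussed above."""
    post = (energy >= e0 + post_range[0]) & (energy <= e0 + post_range[1])
    coefs = np.polyfit(energy[post] - e0, norm[post], 2)
    correction = np.polyval(coefs, energy - e0) - np.polyval(coefs, 0.0)
    flat = norm.copy()
    flat[energy >= e0] -= correction[energy >= e0]
    return flat

energy = np.linspace(7000, 7400, 801)
e0 = 7112.0
# normalized spectrum whose post-edge drifts upward instead of sitting at 1
norm = np.where(energy < e0, 0.0, 1.0 + 5e-4 * (energy - e0))
flat = flatten(energy, norm, e0)
```

In this sketch the correction is zero at e0 by construction, so the value is
continuous there; it is the abrupt switch-on of the correction above E0 that
the step-function concern refers to.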


------------------------------

Message: 2
Date: Wed, 15 May 2013 07:57:14 -0700
From: Matthew Marcus <mamar...@lbl.gov>
To: XAFS Analysis using Ifeffit <ifeffit@millenia.cars.aps.anl.gov>
Subject: Re: [Ifeffit] normalization methods
Message-ID: <5193a24a.5090...@lbl.gov>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

What I typically do for XANES is divide mu-mu_pre_edge_line by a linear 
function which goes through the post-edge oscillations.
This division goes over the whole data range, including pre-edge.  If the data 
has obvious curvature in the post-edge, I'll use a higher-order
polynomial.  For transmission data, what sometimes linearizes the background is 
to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
shape) and change it back afterward.  All this is, of course, highly subjective 
and one of the reasons for taking extended XANES data (300eV,
for instance).  For short-range XANES, there isn't enough info to do more than 
divide by a constant.  Once this is done, my LCF programs allow
a slope adjustment as a free parameter, thus muNorm(E) =
(1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this degree of
freedom may be being abused is if the sum of the x[ref] is far from 1 or if
a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
        mam
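The slope-adjusted LCF model above, muNorm(E) =
(1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}, can be sketched as follows
(Python; the reference spectra are made up, and the coarse grid search over
the slope a -- which keeps the weights a linear least-squares problem at each
step -- is a simplification for illustration, surely not the actual program):

```python
import numpy as np

def lcf_with_slope(energy, data, refs, e0, slopes=np.linspace(-1e-3, 1e-3, 201)):
    """Fit model(E) = (1 + a*(E - e0)) * sum_r x[r]*refs[r](E).
    For each trial slope a, the weights x solve a linear least-squares
    problem; keep the a with the smallest residual."""
    best = None
    for a in slopes:
        basis = (1.0 + a * (energy - e0))[:, None] * refs.T  # (npts, nrefs)
        x, _, _, _ = np.linalg.lstsq(basis, data, rcond=None)
        resid = np.sum((basis @ x - data) ** 2)
        if best is None or resid < best[0]:
            best = (resid, a, x)
    return best[1], best[2]  # slope a, weights x

# two made-up "reference" spectra and a mixture with a small slope error
energy = np.linspace(7100, 7400, 601)
e0 = 7112.0
ref1 = 1.0 / (1.0 + np.exp(-(energy - 7110.0)))
ref2 = 1.0 / (1.0 + np.exp(-(energy - 7120.0) / 3.0))
refs = np.vstack([ref1, ref2])
true_x = np.array([0.3, 0.7])
data = (1.0 + 2e-4 * (energy - e0)) * (refs.T @ true_x)
a_fit, x_fit = lcf_with_slope(energy, data, refs, e0)
# sanity checks described above: sum(x_fit) near 1, a_fit*(Emax-E0) small
```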

On 5/15/2013 7:35 AM, Matt Newville wrote:
> Hi Folks,
> 
> Over on the github pages for larch, Mauro and Bruce raised an issue
> about the "flattening" in Athena. See
> https://github.com/xraypy/xraylarch/issues/44
> 
> I've added a "flattened output" from Larch's pre_edge() function, but
> the question has been raised of whether this is "better" than the
> simpler normalized spectra, especially for doing PCA and/or LCF for
> XANES.
> 
> Currently, the "normalized" spectra is just "(mu -
> pre_edge_line)/edge_step". Clearly, a line fitted to the pre-edge of
> the spectra is not sufficient to remove all instrumental backgrounds.
> In some sense, flattening attempts to do a better job, fitting the
> post-edge spectra to a quadratic function.  As Mauro, Bruce, and
> Carmelo have pointed out, it is less clear that it is actually better
> for XANES analysis.  I think the main concerns are that a) it is so
> spectra-specific, and b) it turns on at E0 with a step function.
> 
> Bruce suggested doing something more like MBACK or Ifeffit's bkg_cl().
>  It would certainly be possible to do some sort of "flattening" so
> that mu follows the expected energy dependence from tabularized mu(E).
> 
> Does anyone else have suggestions, opinions, etc?  Feel free to give
> them here or at the github page....
> 
> --Matt
> _______________________________________________
> Ifeffit mailing list
> Ifeffit@millenia.cars.aps.anl.gov
> http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit
> 


------------------------------

Message: 3
Date: Wed, 15 May 2013 10:25:23 -0500
From: Matt Newville <newvi...@cars.uchicago.edu>
To: XAFS Analysis using Ifeffit <ifeffit@millenia.cars.aps.anl.gov>
Subject: Re: [Ifeffit] normalization methods
Message-ID:
        <ca+7esbrhv1whj8r2v1-u7wurqcvznmfjrmaf9hnbtfcrjfn...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi Matthew,

On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus <mamar...@lbl.gov> wrote:
> What I typically do for XANES is divide mu-mu_pre_edge_line by a linear
> function which goes through the post-edge oscillations.
> This division goes over the whole data range, including pre-edge.  If the
> data has obvious curvature in the post-edge, I'll use a higher-order
> polynomial.  For transmission data, what sometimes linearizes the background
> is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
> shape) and change it back afterward.  All this is, of course, highly
> subjective and one of the reasons for taking extended XANES data (300eV,
> for instance).  For short-range XANES, there isn't enough info to do more
> than divide by a constant.  Once this is done, my LCF programs allow
> a slope adjustment as a free parameter, thus muNorm(E) =
> (1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this degree of
> freedom
> may be being abused is if the sum of the x[ref] is far from 1 or if
> a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
>        mam

Thanks -- I should have said that pre_edge() can now do a
victoreen-ish fit, regressing a line to mu*E^nvict (nvict can be any
real value).

Still, it seems that the current flattening is somewhere between
"better" and "worse", which is unsettling...  Applying the
"flattening" polynomial to the pre-edge range definitely seems to give
poor results, but maybe some energy-dependent compromise is possible.

And, of course, over-absorption is next on the list!

--Matt
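As a sketch, that mu*E^nvict regression might look like this (Python; the
function name and the synthetic c/E^2.7 background are invented for
illustration, not taken from pre_edge() itself):

```python
import numpy as np

def victoreen_pre_edge(energy, mu, nvict=2.7):
    """'Victoreen-ish' pre-edge: regress a straight line to mu*E^nvict,
    then map the line back with E^-nvict.  nvict = 0 reduces to the
    ordinary linear pre-edge."""
    w = energy ** nvict
    line = np.polyfit(energy, mu * w, 1)
    return np.polyval(line, energy) / w

# a background that really is c/E^2.7 is recovered (nearly) exactly
energy = np.linspace(6900.0, 7100.0, 201)
mu_bkg = 5e8 / energy ** 2.7
fit = victoreen_pre_edge(energy, mu_bkg, nvict=2.7)
```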


------------------------------

Message: 4
Date: Wed, 15 May 2013 08:41:55 -0700
From: Matthew Marcus <mamar...@lbl.gov>
To: XAFS Analysis using Ifeffit <ifeffit@millenia.cars.aps.anl.gov>
Subject: Re: [Ifeffit] normalization methods
Message-ID: <5193acc3.3090...@lbl.gov>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

The way I commonly do pre-edge is to fit with some form plus a power-law 
singularity representing the initial rise of the edge, then subtract out 
that "some form".  Now, that form can be either linear, linear + E^(-2.7) 
(for transmission), or linear + another power-law singularity centered at 
the passband center energy of the fluorescence detector.  That latter is for 
fluorescence data affected by the tail of the elastic/Compton peak from the 
incident energy.  Whichever form is taken gets subtracted from the whole 
data range, resulting in data which is pre-edge-subtracted but not yet 
post-edge normalized.  The path then splits; for EXAFS, the usual conversion 
to k-space, spline fitting in the post-edge, subtraction, and division are 
done, all interactively.  A tensioned spline is also available, at the 
request of a prominent user.  For XANES, the post-edge is fit as previously 
described.  Thus, there's no distinction made between data above and below 
E0 in XANES, whereas there is such a distinction in EXAFS.
        mam
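One way to sketch that fit-with-singularity-then-subtract-only-the-form idea
(Python; the fixed exponent p, the fitting window, and the synthetic data are
all invented -- a real implementation would presumably fit the exponent too,
and could add E^(-2.7) or fluorescence-tail columns to the basis):

```python
import numpy as np

def pre_edge_subtract(energy, mu, e0, p=0.5):
    """Fit the pre-edge with (linear form) + A*(e0-E)^-p, where the
    power-law term stands in for the initial rise of the edge, then
    subtract only the linear "some form" from the whole range."""
    pre = energy < e0 - 5.0                      # stay off the singularity
    sing = np.where(energy < e0, np.maximum(e0 - energy, 1e-9) ** -p, 0.0)
    basis = np.vstack([np.ones_like(energy), energy - e0, sing]).T
    coef, *_ = np.linalg.lstsq(basis[pre], mu[pre], rcond=None)
    form = basis[:, :2] @ coef[:2]               # the linear part only
    return mu - form

# synthetic pre-edge built from exactly those ingredients
energy = np.linspace(6900.0, 7105.0, 411)
e0 = 7112.0
sing = 0.05 * (e0 - energy) ** -0.5
mu = 2.0 - 1e-3 * (energy - e0) + sing
resid = pre_edge_subtract(energy, mu, e0)
```

Because only the "some form" is subtracted, what remains near the edge is
the rising edge itself, not an artifact of the fit.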

On 5/15/2013 8:25 AM, Matt Newville wrote:
> Hi Matthew,
> 
> On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus <mamar...@lbl.gov> wrote:
>> What I typically do for XANES is divide mu-mu_pre_edge_line by a linear
>> function which goes through the post-edge oscillations.
>> This division goes over the whole data range, including pre-edge.  If the
>> data has obvious curvature in the post-edge, I'll use a higher-order
>> polynomial.  For transmission data, what sometimes linearizes the background
>> is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
>> shape) and change it back afterward.  All this is, of course, highly
>> subjective and one of the reasons for taking extended XANES data (300eV,
>> for instance).  For short-range XANES, there isn't enough info to do more
>> than divide by a constant.  Once this is done, my LCF programs allow
>> a slope adjustment as a free parameter, thus muNorm(E) =
>> (1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this degree of
>> freedom
>> may be being abused is if the sum of the x[ref] is far from 1 or if
>> a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
>>         mam
> 
> Thanks -- I should have said that pre_edge() can now do a
> victoreen-ish fit, regressing a line to mu*E^nvict (nvict can be any
> real value).
> 
> Still, it seems that the current flattening is somewhere between
> "better" and "worse", which is unsettling...  Applying the
> "flattening" polynomial to the pre-edge range definitely seems to give
> poor results, but maybe some energy-dependent compromise is possible.
> 
> And, of course, over-absorption is next on the list!
> 
> --Matt
> 


------------------------------

Message: 5
Date: Wed, 15 May 2013 12:58:53 -0400
From: George Sterbinsky <georgesterbin...@u.northwestern.edu>
To: XAFS Analysis using Ifeffit <ifeffit@millenia.cars.aps.anl.gov>
Subject: Re: [Ifeffit] normalization methods
Message-ID:
        <CALoY8YzRCAa3w=8hC=pn50cysiavzyr6grrczdnw9d2gvpg...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

The question of whether it is appropriate to use flattened data for
quantitative analysis is something I've been thinking about a lot recently.
In my specific case, I am analyzing XMCD data at the Co L-edge. To obtain
the XMCD, I measure XAS with total electron yield detection using a ~70%
left or right circularly polarized beam and flip the magnetic field on the
sample at every data point. The goal, then, is to subtract the XAS measured
in a positive field (p-XAS) from the XAS measured in a negative field
(n-XAS) and get something (the XMCD) that is zero in the pre-edge and
post-edge regions. I often find that after removal of a linear pre-edge, the
spectra still have a linearly increasing post-edge (with EXAFS oscillations
superimposed on it), and the slopes of the n-XAS and p-XAS post-edge lines
differ. In this case, simply multiplying the n-XAS and p-XAS by constants
will never give an XMCD spectrum that is zero in the post-edge region. There
is then some component of the XAS background that is not accounted for by
linear subtraction and multiplication by a constant. It seems to me that
flattening could be a good way to account for such a background. So is
flattening a reasonable thing to do in a case such as this, or is there a
better way to account for such a background?

Thanks,
George
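A tiny numeric illustration of why constants alone can't do it when the
post-edge slopes differ (all numbers invented; "flattening" below just means
removing each spectrum's own fitted post-edge line, not Athena's exact
algorithm):

```python
import numpy as np

energy = np.linspace(760.0, 820.0, 601)   # hypothetical soft-x-ray range, eV
e0 = 778.0
step = (energy >= e0).astype(float)
post = energy > e0 + 10

# p-XAS and n-XAS with the same unit edge step but different post-edge slopes
p_xas = step * (1.0 + 0.004 * (energy - e0))
n_xas = step * (1.0 + 0.007 * (energy - e0))

# best single scale factor c minimizing |p - c*n| over the post-edge:
# the difference still carries a residual slope, so it is never zero there
c = np.sum(p_xas[post] * n_xas[post]) / np.sum(n_xas[post] ** 2)
resid_const = p_xas[post] - c * n_xas[post]

# removing each spectrum's own post-edge line instead zeros the difference
def flatten_post(mu):
    line = np.polyfit(energy[post], mu[post], 1)
    out = mu.copy()
    out[post] -= np.polyval(line, energy[post]) - 1.0
    return out

resid_flat = flatten_post(p_xas)[post] - flatten_post(n_xas)[post]
```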


On Wed, May 15, 2013 at 11:41 AM, Matthew Marcus <mamar...@lbl.gov> wrote:

> The way I commonly do pre-edge is to fit with some form plus a power-law
> singularity representing the initial rise of the edge, then
> subtract out that "some form".  Now, that form can be either linear,
> linear+E^(-2.7) (for transmission), or linear+ another power-law
> singularity centered at the center passband energy of the fluorescence
> detector.  That latter is for fluorescence data which is affected by
> the tail of the elastic/Compton peak from the incident energy.  Whichever
> form is taken gets subtracted from the whole data range, resulting
> in data which is pre-edge-subtracted but not yet post-edge normalized.
> The path then splits; for EXAFS, the usual conversion to k-space, spline
> fitting in the post-edge, subtraction and division is done, all
> interactively.  Tensioned spline is also available due to request of a
> prominent user.
> For XANES, the post-edge is fit as previously described.  Thus, there's no
> distinction made between data above and below E0 in XANES, whereas
> there is such a distinction in EXAFS.
>        mam
> 
> 
> On 5/15/2013 8:25 AM, Matt Newville wrote:
> 
>> Hi Matthew,
>> 
>> On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus <mamar...@lbl.gov> wrote:
>> 
>>> What I typically do for XANES is divide mu-mu_pre_edge_line by a linear
>>> function which goes through the post-edge oscillations.
>>> This division goes over the whole data range, including pre-edge.  If the
>>> data has obvious curvature in the post-edge, I'll use a higher-order
>>> polynomial.  For transmission data, what sometimes linearizes the
>>> background
>>> is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
>>> shape) and change it back afterward.  All this is, of course, highly
>>> subjective and one of the reasons for taking extended XANES data (300eV,
>>> for instance).  For short-range XANES, there isn't enough info to do more
>>> than divide by a constant.  Once this is done, my LCF programs allow
>>> a slope adjustment as a free parameter, thus muNorm(E) =
>>> (1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this
>>> degree of
>>> freedom
>>> may be being abused is if the sum of the x[ref] is far from 1 or if
>>> a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
>>>         mam
>>> 
>> 
>> Thanks -- I should have said that pre_edge() can now do a
>> victoreen-ish fit, regressing a line to mu*E^nvict (nvict can be any
>> real value).
>> 
>> Still, it seems that the current flattening is somewhere between
>> "better" and "worse", which is unsettling...  Applying the
>> "flattening" polynomial to the pre-edge range definitely seems to give
>> poor results, but maybe some energy-dependent compromise is possible.
>> 
>> And, of course, over-absorption is next on the list!
>> 
>> --Matt
> 

------------------------------



End of Ifeffit Digest, Vol 123, Issue 14
****************************************

