Re: [Ifeffit] normalization methods

2013-05-16 Thread Matt Newville
Hi Matthew, George, Zach,

Thanks for the discussion!

On Wed, May 15, 2013 at 5:41 PM, Matthew Marcus mamar...@lbl.gov wrote:
 I'm not sure what 'flattening' means.  Does that mean dividing by a linear
 or other polynomial function, fitted to the post-edge?
 mam

Sorry, I should have been clearer.  Standard Athena/Ifeffit is to

  a) regress a pre-edge line to mu(E) (no power laws)
  b) regress a post-edge quadratic
  c) set edge_step = post_edge_quadratic(E0) - pre_edge_line(E0)
  d) set norm(E) = (mu(E) - pre_edge_line(E)) / edge_step.

Flattening (Athena only, now backported to Larch) fits a quadratic to
the post-edge range (typically E0+100 eV to the end of data) of norm(E), and
then sets

  flattened(E) = norm(E)                                  for E <= E0
               = norm(E) - quadratic(E) + quadratic(E0)   for E > E0

I think this was originally meant for display purposes only.
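
For reference, a minimal numpy sketch of the recipe above (this is not
the Athena/Larch code itself; the fit windows are placeholder choices):

    import numpy as np

    def normalize_and_flatten(energy, mu, e0, pre=(-150, -50), post=(100, 500)):
        """Sketch of the normalization and flattening steps described above.
        Not the Athena/Larch implementation; fit windows are placeholders."""
        # a) line fitted to mu(E) over the pre-edge region
        pmask = (energy >= e0 + pre[0]) & (energy <= e0 + pre[1])
        pre_line = np.polyfit(energy[pmask], mu[pmask], 1)
        # b) quadratic fitted over the post-edge region
        qmask = (energy >= e0 + post[0]) & (energy <= e0 + post[1])
        post_quad = np.polyfit(energy[qmask], mu[qmask], 2)
        # c) edge step: difference of the two fits evaluated at E0
        edge_step = np.polyval(post_quad, e0) - np.polyval(pre_line, e0)
        # d) normalized spectrum
        norm = (mu - np.polyval(pre_line, energy)) / edge_step
        # flattening: quadratic fitted to norm(E) above E0, removed above E0 only
        flat_quad = np.polyfit(energy[qmask], norm[qmask], 2)
        correction = np.polyval(flat_quad, energy) - np.polyval(flat_quad, e0)
        flattened = np.where(energy > e0, norm - correction, norm)
        return norm, flattened, edge_step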

Hopefully Bruce can correct me if I'm wrong on any of the details here.

I think it's fair to say that the Standard Athena/Ifeffit approach
to normalization is simple-minded.  It was designed for EXAFS in an
era when accessing databases seemed like a challenge, so even for
EXAFS it is simple-minded.

Flattening might be better at removing instrumental backgrounds, and
so better for linear analysis of XANES.  The main concerns I would have
are the potential for a slight discontinuity at E0 and a potentially
strong dependence on the choice of E0.

Using something like bkg_cl() (which matches mu(E) to the
Cromer-Liberman tables) or MBACK (which I believe is similar, but
also accounts for elastic/Compton leakage into the pre-edge part of
fluorescence spectra) might do better still.

From my point of view, the question is: what's the best way to do
this?  The pre_edge() function in Larch does include an energy
exponent term, and now writes out the flattened array, as above.
It does not include the scaling MAM described, but that would not be
hard.  Reimplementing bkg_cl() would not be too hard, but perhaps
trying to port MBACK would be better. Perhaps all of the above is
best?


--Matt
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] normalization methods

2013-05-16 Thread Matthew Marcus

That 'flattening' function seems over-complicated to me and makes an artificial
discontinuity, at least in slope, at E0, which is a somewhat arbitrary quantity.
Why not simply divide by the post-edge quadratic
(norm(E) = (mu(E)-pre_edge_line(E))/quadratic(E))?
In some cases, where there's a big curvature, it may make sense to divide mu(E)
by the quadratic, then subtract a pre-edge.  What I've never
solved satisfactorily is the case in which the extrapolation of the pre-edge line
crosses the post-edge, so mu(E)-pre_edge_line(E) < 0 for
some part of the range.  I've never understood why this happens.
mam
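
For comparison, a sketch of the alternative suggested here, reusing the
pre_line and post_quad polyfit coefficients from the sketch earlier in the
thread (names are illustrative only, not anyone's actual code):

    import numpy as np

    def normalize_by_quadratic(energy, mu, pre_line, post_quad):
        # divide the pre-edge-subtracted data by the post-edge quadratic
        # itself, rather than by a single edge-step constant
        return (mu - np.polyval(pre_line, energy)) / np.polyval(post_quad, energy)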

On 5/16/2013 4:47 AM, Matt Newville wrote:

Hi Matthew, George, Zach,

Thanks for the discussion!

On Wed, May 15, 2013 at 5:41 PM, Matthew Marcus mamar...@lbl.gov wrote:

I'm not sure what 'flattening' means.  Does that mean dividing by a linear
or other polynomial function, fitted to the post-edge?
 mam


Sorry, I should have been clearer.  Standard Athena/Ifeffit is to

   a) regress a pre-edge line to mu(E) (no power laws)
   b) regress a post-edge quadratic
   c) set edge_step = post_edge_quadratic(E0) - pre_edge_line(E0)
   d) set norm(E) = (mu(E) - pre_edge_line(E)) / edge_step.

Flattening (Athena only, now backported to Larch) fits a quadratic to
the post-edge range (typically E0+100 eV to the end of data) of norm(E), and
then sets

   flattened(E) = norm(E)                                  for E <= E0
                = norm(E) - quadratic(E) + quadratic(E0)   for E > E0

I think this was originally meant for display purposes only.

Hopefully Bruce can correct me if I'm wrong on any of the details here.

I think it's fair to say that the Standard Athena/Ifeffit approach
to normalization is simple-minded.  It was designed for EXAFS in an
era when accessing databases seemed like a challenge, so even for
EXAFS it is simple-minded.

Flattening might be better at removing instrumental backgrounds, and
so better for linear analysis of XANES.  The main concerns I would have
are the potential for a slight discontinuity at E0 and a potentially
strong dependence on the choice of E0.

Using something like bkg_cl() (which matches mu(E) to the
Cromer-Liberman tables) or MBACK (which I believe is similar, but
also accounts for elastic/Compton leakage into the pre-edge part of
fluorescence spectra) might do better still.


From my point of view, the question is: what's the best way to do

this?  The pre_edge() function in Larch does include an energy
exponent term, and now writes out the flattened array, as above.
It does not include the scaling MAM described, but that would not be
hard.  Reimplementing bkg_cl() would not be too hard, but perhaps
trying to port MBACK would be better. Perhaps all of the above is
best?


--Matt


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] normalization methods

2013-05-15 Thread Matthew Marcus

What I typically do for XANES is divide mu-mu_pre_edge_line by a linear 
function which goes through the post-edge oscillations.
This division goes over the whole data range, including pre-edge.  If the data 
has obvious curvature in the post-edge, I'll use a higher-order
polynomial.  For transmission data, what sometimes linearizes the background is 
to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
shape) and change it back afterward.  All this is, of course, highly subjective 
and one of the reasons for taking extended XANES data (300eV,
for instance).  For short-range XANES, there isn't enough info to do more than 
divide by a constant.  Once this is done, my LCF programs allow
a slope adjustment as a free parameter, thus muNorm(E) = 
(1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this degree of 
freedom
may be being abused is if the sum of the x[ref] is far from 1 or if a*(Emax-E0) 
is large.  Don't get me started on overabsorption :-)
mam
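
A minimal sketch of an LCF model with that slope degree of freedom, assuming
normalized reference spectra already interpolated onto the data grid (this is
not Marcus's code; all names are illustrative):

    import numpy as np
    from scipy.optimize import least_squares

    def lcf_with_slope(energy, mu_norm, refs, e0):
        # model: muNorm(E) = (1 + a*(E-E0)) * Sum_ref x[ref]*muNorm[ref](E)
        refs = np.asarray(refs)              # shape (n_refs, n_energies)
        def residual(params):
            a, x = params[0], params[1:]
            return (1.0 + a * (energy - e0)) * (x @ refs) - mu_norm
        p0 = np.concatenate([[0.0], np.full(refs.shape[0], 1.0 / refs.shape[0])])
        fit = least_squares(residual, p0)
        a, x = fit.x[0], fit.x[1:]
        # warning signs mentioned above: sum(x) far from 1, or a*(Emax-E0) large
        return a, x, x.sum(), a * (energy.max() - e0)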

On 5/15/2013 7:35 AM, Matt Newville wrote:

Hi Folks,

Over on the github pages for larch, Mauro and Bruce raised an issue
about the flattening in Athena. See
https://github.com/xraypy/xraylarch/issues/44

I've added a flattened output from Larch's pre_edge() function, but
the question has been raised of whether this is better than the
simpler normalized spectra, especially for doing PCA and/or LCF for
XANES.

Currently, the normalized spectrum is just (mu -
pre_edge_line)/edge_step. Clearly, a line fitted to the pre-edge of
the spectrum is not sufficient to remove all instrumental backgrounds.
In some sense, flattening attempts to do a better job, fitting the
post-edge spectra to a quadratic function.  As Mauro, Bruce, and
Carmelo have pointed out, it is less clear that it is actually better
for XANES analysis.  I think the main concerns are that a) it is so
spectra-specific, and b) it turns on at E0 with a step function.

Bruce suggested doing something more like MBACK or Ifeffit's bkg_cl().
  It would certainly be possible to do some sort of flattening so
that mu follows the expected energy dependence from tabulated mu(E).
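
As a rough illustration of that idea only (not bkg_cl() or MBACK themselves):
scale and offset the measured spectrum so that, away from the edge, it follows
a tabulated bare-atom mu(E) supplied by the user on the same energy grid.
Window choices and names below are placeholders.

    import numpy as np

    def match_to_tabulated(energy, mu, mu_tab, e0,
                           windows=((-150, -30), (150, 500))):
        # find scale, offset so that scale*mu + offset ~ mu_tab away from
        # the edge; mu_tab must already be interpolated onto `energy`.
        # Real codes (bkg_cl, MBACK) do considerably more than this.
        mask = np.zeros(energy.shape, dtype=bool)
        for lo, hi in windows:
            mask |= (energy >= e0 + lo) & (energy <= e0 + hi)
        design = np.column_stack([mu[mask], np.ones(mask.sum())])
        scale, offset = np.linalg.lstsq(design, mu_tab[mask], rcond=None)[0]
        return scale * mu + offset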

Does anyone else have suggestions, opinions, etc?  Feel free to give
them here or at the github page

--Matt


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] normalization methods

2013-05-15 Thread Matt Newville
Hi Matthew,

On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus mamar...@lbl.gov wrote:
 What I typically do for XANES is divide mu-mu_pre_edge_line by a linear
 function which goes through the post-edge oscillations.
 This division goes over the whole data range, including pre-edge.  If the
 data has obvious curvature in the post-edge, I'll use a higher-order
 polynomial.  For transmission data, what sometimes linearizes the background
 is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
 shape) and change it back afterward.  All this is, of course, highly
 subjective and one of the reasons for taking extended XANES data (300eV,
 for instance).  For short-range XANES, there isn't enough info to do more
 than divide by a constant.  Once this is done, my LCF programs allow
 a slope adjustment as a free parameter, thus muNorm(E) =
 (1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this degree of
 freedom
 may be being abused is if the sum of the x[ref] is far from 1 or if
 a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
 mam

Thanks -- I should have said that pre_edge() can now do a
victoreen-ish fit, regressing a line to mu*E^nvict (nvict can be any
real value).
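
A sketch of one reading of that nvict option (the pre_edge() documentation is
authoritative; the pre-edge window below is a placeholder): regress a line to
mu*E^nvict over the pre-edge, then divide E^nvict back out before subtracting.

    import numpy as np

    def pre_edge_nvict(energy, mu, e0, nvict=0.0, pre=(-150, -50)):
        mask = (energy >= e0 + pre[0]) & (energy <= e0 + pre[1])
        coefs = np.polyfit(energy[mask], mu[mask] * energy[mask]**nvict, 1)
        # return the pre-edge background over the full range, ready to subtract
        return np.polyval(coefs, energy) / energy**nvict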

Still, it seems that the current flattening is somewhere between
better and worse, which is unsettling...  Applying the
flattening polynomial to the pre-edge range definitely seems to give
poor results, but maybe some energy-dependent compromise is possible.
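
One possible reading of an "energy-dependent compromise", offered purely as an
illustration and not as anything implemented in Ifeffit or Larch: blend
norm(E) and flattened(E) with a smooth weight that turns on over a chosen
width above E0, instead of switching at E0 with a step.

    import numpy as np

    def blended_flatten(energy, norm, flattened, e0, width=20.0):
        # weight goes smoothly from 0 below E0 to 1 well above E0; width in eV
        weight = 0.5 * (1.0 + np.tanh((energy - e0) / width))
        return (1.0 - weight) * norm + weight * flattened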

And, of course, over-absorption is next on the list!

--Matt
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] normalization methods

2013-05-15 Thread Matthew Marcus

The way I commonly do pre-edge is to fit with some form plus a power-law
singularity representing the initial rise of the edge, then
subtract out that form.  Now, that form can be either linear,
linear+E^(-2.7) (for transmission), or linear + another power-law
singularity centered at the center passband energy of the fluorescence
detector.  The latter is for fluorescence data which is affected by
the tail of the elastic/Compton peak from the incident energy.  Whichever form
is taken gets subtracted from the whole data range, resulting
in data which is pre-edge-subtracted but not yet post-edge normalized.  The
path then splits; for EXAFS, the usual conversion to k-space, spline
fitting in the post-edge, subtraction, and division is done, all interactively.
A tensioned spline is also available at the request of a prominent user.
For XANES, the post-edge is fit as previously described.  Thus, there's no 
distinction made between data above and below E0 in XANES, whereas
there is such a distinction in EXAFS.
mam
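
A sketch of that style of pre-edge fit: a line plus a power-law singularity at
the edge, with only the line kept for subtraction.  The functional form and
starting values are guesses, not the actual code; for fluorescence one would
add another singularity centered at the elastic-peak energy instead.

    import numpy as np
    from scipy.optimize import curve_fit

    def pre_edge_with_singularity(energy, mu, e0, pre=(-150, -10)):
        mask = (energy >= e0 + pre[0]) & (energy <= e0 + pre[1])
        def model(E, a, b, amp, p):
            # line plus a power-law singularity representing the rising edge
            return a + b * E + amp / np.clip(e0 - E, 1e-3, None)**p
        popt, _ = curve_fit(model, energy[mask], mu[mask],
                            p0=(mu[mask].min(), 0.0, 1.0, 1.0), maxfev=20000)
        a, b = popt[0], popt[1]
        return a + b * energy   # the smooth form to subtract; singularity discarded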

On 5/15/2013 8:25 AM, Matt Newville wrote:

Hi Matthew,

On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus mamar...@lbl.gov wrote:

What I typically do for XANES is divide mu-mu_pre_edge_line by a linear
function which goes through the post-edge oscillations.
This division goes over the whole data range, including pre-edge.  If the
data has obvious curvature in the post-edge, I'll use a higher-order
polynomial.  For transmission data, what sometimes linearizes the background
is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
shape) and change it back afterward.  All this is, of course, highly
subjective and one of the reasons for taking extended XANES data (300eV,
for instance).  For short-range XANES, there isn't enough info to do more
than divide by a constant.  Once this is done, my LCF programs allow
a slope adjustment as a free parameter, thus muNorm(E) =
(1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this degree of
freedom
may be being abused is if the sum of the x[ref] is far from 1 or if
a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
 mam


Thanks -- I should have said that pre_edge() can now do a
victoreen-ish fit, regressing a line to mu*E^nvict (nvict can be any
real value).

Still, it seems that the current flattening is somewhere between
better and worse, which is unsettling...  Applying the
flattening polynomial to the pre-edge range definitely seems to give
poor results, but maybe some energy-dependent compromise is possible.

And, of course, over-absorption is next on the list!

--Matt


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] normalization methods

2013-05-15 Thread George Sterbinsky
The question of whether it is appropriate to use flattened data for
quantitative analysis is something I've been thinking about a lot recently.
In my specific case, I am analyzing XMCD data at the Co L-edge. To obtain
the XMCD, I measure XAS with total electron yield detection using a ~70%
left or right circularly polarized beam and flip the magnetic field on the
sample at every data point. The goal, then, is to subtract the XAS measured
in a positive field (p-XAS) from XAS measured in a negative field (n-XAS)
and get something (the XMCD) that is zero in the pre-edge and post-edge
regions. I often find that after removal of a linear pre-edge, the spectra
still have a linearly increasing post edge (with EXAFS oscillations
superimposed on it), and the slope of the n-XAS and p-XAS post-edge lines
are different. In this case simply multiplying the n-XAS and p-XAS by
constants will never give an XMCD spectrum that is zero in the post edge
region. There is then some component of the XAS background that is not
accounted for by linear subtraction and multiplication by a constant. It
seems to me that flattening could be a good way to account for such a
background. So is flattening a reasonable thing to do in a case such as
this, or is there a better way to account for such a background?
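
A small sketch of the bookkeeping described here; the window widths are
placeholders and `normalize` stands in for whichever normalization scheme
(standard, flattened, or otherwise) is being tested.

    import numpy as np

    def xmcd_with_baseline_check(energy, mu_p, mu_n, e0, normalize):
        # XMCD = normalized p-XAS minus normalized n-XAS; if the background
        # treatment is adequate, both windows should average to ~0
        xmcd = normalize(energy, mu_p, e0) - normalize(energy, mu_n, e0)
        pre_mean = xmcd[energy < e0 - 20.0].mean()
        post_mean = xmcd[energy > e0 + 40.0].mean()
        return xmcd, pre_mean, post_mean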

Thanks,
George


On Wed, May 15, 2013 at 11:41 AM, Matthew Marcus mamar...@lbl.gov wrote:

 The way I commonly do pre-edge is to fit with some form plus a power-law
 singularity representing the initial rise of the edge, then
 subtract out that some form.  Now, that form can be either linear,
 linear+E^(-2.7) (for transmission), or linear+ another power-law
 singularity centered at the center passband energy of the fluorescence
 detector.  That latter is for fluorescence data which is affected by
 the tail of the elastic/Compton peak from the incident energy.  Whichever
 form is taken gets subtracted from the whole data range, resulting
 in data which is pre-edge-subtracted but not yet post-edge normalized.
  The path then splits; for EXAFS, the usual conversion to k-space, spline
 fitting in the post-edge, subtraction and division is done, all
 interactively.  Tensioned spline is also available due to request of a
 prominent user.
 For XANES, the post-edge is fit as previously described.  Thus, there's no
 distinction made between data above and below E0 in XANES, whereas
 there is such a distinction in EXAFS.
 mam


 On 5/15/2013 8:25 AM, Matt Newville wrote:

 Hi Matthew,

 On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus mamar...@lbl.gov wrote:

 What I typically do for XANES is divide mu-mu_pre_edge_line by a linear
 function which goes through the post-edge oscillations.
 This division goes over the whole data range, including pre-edge.  If the
 data has obvious curvature in the post-edge, I'll use a higher-order
 polynomial.  For transmission data, what sometimes linearizes the
 background
 is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
 shape) and change it back afterward.  All this is, of course, highly
 subjective and one of the reasons for taking extended XANES data (300eV,
 for instance).  For short-range XANES, there isn't enough info to do more
 than divide by a constant.  Once this is done, my LCF programs allow
 a slope adjustment as a free parameter, thus muNorm(E) =
 (1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this
 degree of
 freedom
 may be being abused is if the sum of the x[ref] is far from 1 or if
 a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
  mam


 Thanks -- I should have said that pre_edge() can now do a
 victoreen-ish fit, regressing a line to mu*E^nvict (nvict can be any
 real value).

 Still, it seems that the current flattening is somewhere between
 better and worse, which is unsettling...  Applying the
 flattening polynomial to the pre-edge range definitely seems to give
 poor results, but maybe some energy-dependent compromise is possible.

 And, of course, over-absorption is next on the list!

 --Matt

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] normalization methods

2013-05-15 Thread Matthew Marcus

You say that the flipping difference (p - n) is 0 in pre-edge and far post-edge 
regions, which is as it should be, but then say that the
slopes of p- and n- post-edges, considered separately, are different.  I must 
be misunderstanding because those two statements would seem to be
inconsistent.  I wonder if the sensitivity of the TEY changes with magnetic 
field because of the effect of the field on the trajectories of
the outgoing electrons, which would explain the differing curves.  A 
possibility - if you divide the p-XAS by n-XAS, do you get something
which is a smooth curve everywhere but where MCD is expected?  Does that curve 
match in pre- and far post-edge regions?  If that miracle occurs,
then perhaps you could fit that to a polynomial, except in the MCD region, then 
divide the p-XAS by that polynomial, to remove the effect of
the differing sensitivities.
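
A sketch of the proposed correction (window and polynomial degree are
arbitrary placeholders): fit the p/n ratio to a polynomial everywhere except
the MCD region, then divide the p-XAS by that polynomial.

    import numpy as np

    def ratio_correction(energy, mu_p, mu_n, e0, mcd_window=(-10.0, 60.0), deg=2):
        ratio = mu_p / mu_n
        # fit only outside the region where MCD is expected
        mask = (energy < e0 + mcd_window[0]) | (energy > e0 + mcd_window[1])
        coefs = np.polyfit(energy[mask], ratio[mask], deg)
        return mu_p / np.polyval(coefs, energy)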

There are people here at ALS, such as Elke Arenholz earenh...@lbl.gov, who do 
this sort of spectroscopy.  I suggest asking her.
mam

On 5/15/2013 9:58 AM, George Sterbinsky wrote:

The question of whether it is appropriate to use flattened data for 
quantitative analysis is something I've been thinking about a lot recently. In 
my specific case, I am analyzing XMCD data at the Co L-edge. To obtain the 
XMCD, I measure XAS with total electron yield detection using a ~70% left or 
right circularly polarized beam and flip the magnetic field on the sample at 
every data point. The goal then, is to subtract the XAS measured in a positive 
field (p-XAS) from XAS measured in a negative field (n-XAS) and get something 
(the XMCD) that is zero in the pre-edge and post-edge regions. I often find 
that after removal of a linear pre-edge, the spectra still have a linearly 
increasing post edge (with EXAFS oscillations superimposed on it), and the 
slope of the n-XAS and p-XAS post-edge lines are different. In this case simply 
multiplying the n-XAS and p-XAS by constants will never give an XMCD spectrum 
that is zero in the post edge region. There is then some component of the
XAS background that is not accounted for by linear subtraction and
multiplication by a constant. It seems to me that flattening could be a good 
way to account for such a background. So is flattening a reasonable thing to do 
in a case such as this, or is there a better way to account for such a 
background?

Thanks,
George


On Wed, May 15, 2013 at 11:41 AM, Matthew Marcus mamar...@lbl.gov 
mailto:mamar...@lbl.gov wrote:

The way I commonly do pre-edge is to fit with some form plus a power-law 
singularity representing the initial rise of the edge, then
subtract out that some form.  Now, that form can be either linear, 
linear+E^(-2.7) (for transmission), or linear+ another power-law
singularity centered at the center passband energy of the fluorescence 
detector.  That latter is for fluorescence data which is affected by
the tail of the elastic/Compton peak from the incident energy.  Whichever 
form is taken gets subtracted from the whole data range, resulting
in data which is pre-edge-subtracted but not yet post-edge normalized.  The 
path then splits; for EXAFS, the usual conversion to k-space, spline
fitting in the post-edge, subtraction and division is done, all 
interactively.  Tensioned spline is also available due to request of a 
prominent user.
For XANES, the post-edge is fit as previously described.  Thus, there's no 
distinction made between data above and below E0 in XANES, whereas
there is such a distinction in EXAFS.
 mam


On 5/15/2013 8:25 AM, Matt Newville wrote:

Hi Matthew,

On Wed, May 15, 2013 at 9:57 AM, Matthew Marcus mamar...@lbl.gov 
mailto:mamar...@lbl.gov wrote:

What I typically do for XANES is divide mu-mu_pre_edge_line by a 
linear
function which goes through the post-edge oscillations.
This division goes over the whole data range, including pre-edge.  
If the
data has obvious curvature in the post-edge, I'll use a higher-order
polynomial.  For transmission data, what sometimes linearizes the 
background
is to change the abscissa to 1/E^2.7 (the rule-of-thumb absorption
shape) and change it back afterward.  All this is, of course, highly
subjective and one of the reasons for taking extended XANES data 
(300eV,
for instance).  For short-range XANES, there isn't enough info to 
do more
than divide by a constant.  Once this is done, my LCF programs allow
a slope adjustment as a free parameter, thus muNorm(E) =
(1+a*(E-E0))*Sum_on_ref{x[ref]*muNorm[ref](E)}.  A sign that this 
degree of
freedom
may be being abused is if the sum of the x[ref] is far from 1 or if
a*(Emax-E0) is large.  Don't get me started on overabsorption :-)
  mam


Thanks -- I should have said that 

Re: [Ifeffit] normalization methods

2013-05-15 Thread Matthew Marcus

OK, I guess I don't know what 'standard normalization' is.  It looks from the 
quotient that you'll need some sort of curved post-edge.
I guess the division didn't work because the electron energy distribution is 
different pre- and post-edge, so the magnetic effects are
different and vary across the edge.  Thus, the shapes of the MCD peaks will be 
at least a little corrupted even if the pre- and post-edge
spectra are taken into account.  I don't know what to do about this.  Did you 
try asking Elke?
mam

On 5/15/2013 11:52 AM, George Sterbinsky wrote:

Hi Matthew,


On Wed, May 15, 2013 at 1:20 PM, Matthew Marcus mamar...@lbl.gov 
mailto:mamar...@lbl.gov wrote:

You say that the flipping difference (p - n) is 0 in pre-edge and far 
post-edge regions, which is as it should be, but then say that the
slopes of p- and n- post-edges, considered separately, are different.  I 
must be misunderstanding because those two statements would seem to be
inconsistent.



Sorry, I think my wording wasn't particularly clear here. What I should have 
said is:

The goal then is to subtract the /normalized/ XAS measured in a positive field 
(p-XAS) from /normalized/ XAS measured in a negative field (n-XAS) and get something (the 
XMCD) that is zero in the pre-edge and post-edge regions. /However, standard 
normalization does not give this result/

Italics indicate new text.

I wonder if the sensitivity of the TEY changes with magnetic field because 
of the effect of the field on the trajectories of
the outgoing electrons, which would explain the differing curves.


I would agree, I think the effect of the magnetic field on the electrons is the 
likely source of the differences in background.

A possibility - if you divide the p-XAS by n-XAS, do you get something
which is a smooth curve everywhere but where MCD is expected?  Does that 
curve match in pre- and far post-edge regions?


No, after division of the p-XAS by the n-XAS (before any normalization), both 
the pre and post-edge regions are smooth, but one would need a step-like 
function to connect them. I've attached a plot showing the result of division.


If that miracle occurs,
then perhaps you could fit that to a polynomial, except in the MCD region, 
then divide the p-XAS by that polynomial, to remove the effect of
the differing sensitivities.

There are people here at ALS, such as Elke Arenholz earenh...@lbl.gov 
mailto:earenh...@lbl.gov, who do this sort of spectroscopy.  I suggest asking 
her.
 mam


Thanks for the suggestion and your reply.

George








On 5/15/2013 9:58 AM, George Sterbinsky wrote:

The question of whether it is appropriate to use flattened data for 
quantitative analysis is something I've been thinking about a lot recently. In 
my specific case, I am analyzing XMCD data at the Co L-edge. To obtain the 
XMCD, I measure XAS with total electron yield detection using a ~70% left or 
right circularly polarized beam and flip the magnetic field on the sample at 
every data point. The goal then, is to subtract the XAS measured in a positive 
field (p-XAS) from XAS measured in a negative field (n-XAS) and get something 
(the XMCD) that is zero in the pre-edge and post-edge regions. I often find 
that after removal of a linear pre-edge, the spectra still have a linearly 
increasing post edge (with EXAFS oscillations superimposed on it), and the 
slope of the n-XAS and p-XAS post-edge lines are different. In this case simply 
multiplying the n-XAS and p-XAS by constants will never give an XMCD spectrum 
that is zero in the post edge region. There is then some
component of the

XAS background that is not accounted for by linear subtraction and 
multiplication by a constant. It seems to me that flattening could be a good 
way to account for such a background. So is flattening a reasonable thing to do 
in a case such as this, or is there a better way to account for such a 
background?

Thanks,
George


On Wed, May 15, 2013 at 11:41 AM, Matthew Marcus mamar...@lbl.gov 
mailto:mamar...@lbl.gov mailto:mamar...@lbl.gov mailto:mamar...@lbl.gov 
wrote:

 The way I commonly do pre-edge is to fit with some form plus a 
power-law singularity representing the initial rise of the edge, then
 subtract out that some form.  Now, that form can be either 
linear, linear+E^(-2.7) (for transmission), or linear+ another power-law
 singularity centered at the center passband energy of the 
fluorescence detector.  That latter is for fluorescence data which is affected 
by
 the tail of the elastic/Compton peak from the incident energy. 
 Whichever form is taken gets subtracted from the whole data range, resulting
 in data which is pre-edge-subtracted but not yet post-edge 
normalized.  The path then splits; for EXAFS, the usual conversion to k-space, 

Re: [Ifeffit] normalization methods

2013-05-15 Thread George Sterbinsky
By standard normalization, I meant subtraction of a linear pre-edge and
multiplication by a constant. If this treatment is applied to the XAS
spectra before subtraction, one does not obtain an XMCD spectrum that goes
to zero in the post edge region for the data I described. As you noted,
that is what would be expected given the p-XAS and n-XAS have different
slopes in the post-edge region.

On the other hand, standard normalization + flattening does result in pre-
and post-edge regions that go to zero, again as one might expect. So
perhaps the background modeled by standard normalization + flattening is
an accurate representation of the real background in some cases and can be
used in quantitative analysis. Is there reason to believe that cannot be
the case?

Thanks,
George




On Wed, May 15, 2013 at 3:04 PM, Matthew Marcus mamar...@lbl.gov wrote:

 OK, I guess I don't know what 'standard normalization' is.  It looks from
 the quotient that you'll need some sort of curved post-edge.
 I guess the division didn't work because the electron energy distribution
 is different pre- and post-edge, so the magnetic effects are
 different and vary across the edge.  Thus, the shapes of the MCD peaks
 will be at least a little corrupted even if the pre- and post-edge
 spectra are taken into account.  I don't know what to do about this.  Did
 you try asking Elke?
 mam


 On 5/15/2013 11:52 AM, George Sterbinsky wrote:

 Hi Matthew,



 On Wed, May 15, 2013 at 1:20 PM, Matthew Marcus mamar...@lbl.govmailto:
 mamar...@lbl.gov wrote:

 You say that the flipping difference (p - n) is 0 in pre-edge and far
 post-edge regions, which is as it should be, but then say that the
 slopes of p- and n- post-edges, considered separately, are different.
  I must be misunderstanding because those two statements would seem to be
 inconsistent.



 Sorry, I think my wording wasn't particularly clear here. What I should
 have said is:

 The goal then is to subtract the /normalized/ XAS measured in a positive
 field (p-XAS) from /normalized/ XAS measured in a negative field (n-XAS)
 and get something (the XMCD) that is zero in the pre-edge and post-edge
 regions. /However, standard normalization does not give this result/


 Italics indicate new text.

 I wonder if the sensitivity of the TEY changes with magnetic field
 because of the effect of the field on the trajectories of
 the outgoing electrons, which would explain the differing curves.


 I would agree, I think the effect of the magnetic field on the electrons
 is the likely source of the differences in background.

 A possibility - if you divide the p-XAS by n-XAS, do you get something
 which is a smooth curve everywhere but where MCD is expected?  Does
 that curve match in pre- and far post-edge regions?


 No, after division of the p-XAS by the n-XAS (before any normalization),
 both the pre and post-edge regions are smooth, but one would need a
 step-like function to connect them. I've attached a plot showing the result
 of division.


 If that miracle occurs,
 then perhaps you could fit that to a polynomial, except in the MCD
 region, then divide the p-XAS by that polynomial, to remove the effect of
 the differing sensitivities.

 There are people here at ALS, such as Elke Arenholz 
 earenh...@lbl.gov mailto:earenh...@lbl.gov, who do this sort of
 spectroscopy.  I suggest asking her.

  mam


 Thanks for the suggestion and your reply.

 George








 On 5/15/2013 9:58 AM, George Sterbinsky wrote:

 The question of whether it is appropriate to use flattened
 data for quantitative analysis is something I've been thinking about a lot
 recently. In my specific case, I am analyzing XMCD data at the Co L-edge.
 To obtain the XMCD, I measure XAS with total electron yield detection using
 a ~70% left or right circularly polarized beam and flip the magnetic field
 on the sample at every data point. The goal then, is to subtract the XAS
 measured in a positive field (p-XAS) from XAS measured in a negative field
 (n-XAS) and get something (the XMCD) that is zero in the pre-edge and
 post-edge regions. I often find that after removal of a linear pre-edge,
 the spectra still have a linearly increasing post edge (with EXAFS
 oscillations superimposed on it), and the slope of the n-XAS and p-XAS
 post-edge lines are different. In this case simply multiplying the n-XAS
 and p-XAS by constants will never give an XMCD spectrum that is zero in the
 post edge region. There is then some
 component of the

 XAS background that is not accounted for by linear
 subtraction and multiplication by a constant. It seems to me that
 flattening could be a good way to account for such a background. So is
 flattening a reasonable thing to do in a case such as this, or is there a
 better way to account for such a background?

 Thanks,
 George


 On Wed, May 15, 2013 

Re: [Ifeffit] normalization methods

2013-05-15 Thread Matthew Marcus

I'm not sure what 'flattening' means.  Does that mean dividing by a linear or 
other polynomial function, fitted to the post-edge?
mam

On 5/15/2013 1:43 PM, George Sterbinsky wrote:

By standard normalization, I meant subtraction of a linear pre-edge and 
multiplication by a constant. If this treatment is applied to the XAS spectra 
before subtraction, one does not obtain an XMCD spectrum that goes to zero in 
the post edge region for the data I described. As you noted, that is what would 
be expected given the p-XAS and n-XAS have different slopes in the post-edge 
region.

On the other hand, standard normalization + flattening does result in pre and 
post-edge regions that go to zero, again as one might expect. So perhaps, the 
background modeled by standard normalization + flattening is an accurate 
representation of the real background in some cases and can be used in 
quantitative analysis. Is there reason to believe that cannot be the case?

Thanks,
George




On Wed, May 15, 2013 at 3:04 PM, Matthew Marcus mamar...@lbl.gov 
mailto:mamar...@lbl.gov wrote:

OK, I guess I don't know what 'standard normalization' is.  It looks from 
the quotient that you'll need some sort of curved post-edge.
I guess the division didn't work because the electron energy distribution 
is different pre- and post-edge, so the magnetic effects are
different and vary across the edge.  Thus, the shapes of the MCD peaks will 
be at least a little corrupted even if the pre- and post-edge
spectra are taken into account.  I don't know what to do about this.  Did 
you try asking Elke?
 mam


On 5/15/2013 11:52 AM, George Sterbinsky wrote:

Hi Matthew,



On Wed, May 15, 2013 at 1:20 PM, Matthew Marcus mamar...@lbl.gov 
mailto:mamar...@lbl.gov mailto:mamar...@lbl.gov mailto:mamar...@lbl.gov 
wrote:

 You say that the flipping difference (p - n) is 0 in pre-edge and 
far post-edge regions, which is as it should be, but then say that the
 slopes of p- and n- post-edges, considered separately, are 
different.  I must be misunderstanding because those two statements would seem 
to be
 inconsistent.



Sorry, I think my wording wasn't particularly clear here. What I should 
have said is:

The goal then is to subtract the /normalized/ XAS measured in a positive 
field (p-XAS) from /normalized/ XAS measured in a negative field (n-XAS) and get 
something (the XMCD) that is zero in the pre-edge and post-edge regions. /However, 
standard normalization does not give this result/


Italics indicate new text.

 I wonder if the sensitivity of the TEY changes with magnetic field 
because of the effect of the field on the trajectories of
 the outgoing electrons, which would explain the differing curves.


I would agree, I think the effect of the magnetic field on the 
electrons is the likely source of the differences in background.

 A possibility - if you divide the p-XAS by n-XAS, do you get 
something
 which is a smooth curve everywhere but where MCD is expected?  
Does that curve match in pre- and far post-edge regions?


No, after division of the p-XAS by the n-XAS (before any 
normalization), both the pre and post-edge regions are smooth, but one would 
need a step-like function to connect them. I've attached a plot showing the 
result of division.


 If that miracle occurs,
 then perhaps you could fit that to a polynomial, except in the MCD 
region, then divide the p-XAS by that polynomial, to remove the effect of
 the differing sensitivities.

 There are people here at ALS, such as Elke Arenholz earenh...@lbl.gov 
mailto:earenh...@lbl.gov mailto:earenh...@lbl.gov mailto:earenh...@lbl.gov, 
who do this sort of spectroscopy.  I suggest asking her.

  mam


Thanks for the suggestion and your reply.

George








 On 5/15/2013 9:58 AM, George Sterbinsky wrote:

 The question of whether it is appropriate to use flattened 
data for quantitative analysis is something I've been thinking about a lot 
recently. In my specific case, I am analyzing XMCD data at the Co L-edge. To 
obtain the XMCD, I measure XAS with total electron yield detection using a ~70% 
left or right circularly polarized beam and flip the magnetic field on the 
sample at every data point. The goal then, is to subtract the XAS measured in a 
positive field (p-XAS) from XAS measured in a negative field (n-XAS) and get 
something (the XMCD) that is zero in the pre-edge and post-edge regions. I 
often find that after removal of a linear pre-edge, the spectra still have a 
linearly increasing post edge (with EXAFS oscillations superimposed on it), and 
the slope of the n-XAS and p-XAS post-edge lines are different. In this case 
simply multiplying the n-XAS and p-XAS by constants will