Hi Matthew,

On Wed, 12 Dec 2012, Matthew Marcus wrote:
>
> On 12/12/2012 8:03 PM, Matt Newville wrote:
>> Hi Matthew, Bruce, All,
>>
>> Sorry for not being able to join this discussion earlier.  I agree
>> that having glitch-free data is preferred. But I also think it's OK to
>> simply remove diffraction peaks from XAFS data -- you're just
>> asserting that you know those values are not good measurements of
>> mu(E).  I don't see it as all that different from removing obvious
>> outliers from fluorescence channels from a multi-element detector --
>> though that has the appeal of keeping good measurements at a
>> particular energy.
>
> That's kind of my point.  The standard methods, which require uniform
> tabulation, cause you to fill gaps in data by interpolation.
> Effectively, you're claiming knowledge of data for which you don't have
> knowledge.  Better to use a method which preserves
> agnosticism about the bad data.  The method I described for deglitching with
> reference scalers tries to preserve what information
> you do have by allowing you to ignore some channels.  That only works if
> there are 'good' channels to serve as references.

I would say that identifying a point as an invalid measure of mu(E)
and simply not including it in the analysis is OK: it is claiming
knowledge that we didn't measure that point well.  I just don't see it
as much different from claiming that outliers in the set of
fluorescence intensities are not good measures of fluorescence
intensity.

You might argue (I would agree) that with multiple measures of
fluorescence you can determine outliers more rigorously.  But I would
also say that then the best thing to do then is to not simply remove
outliers but to take the average AND standard deviation of the samples
and include both in the analysis.  A glitch would then not be removed,
but would be reflected by having a less certain value for that energy
point.
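
A rough numpy sketch of what I mean (array shapes and names here are
just for illustration, not from any particular package):

import numpy as np

def combine_channels(fluor, i0):
    """Average mu(E) over the channels of a multi-element detector and
    keep the channel-to-channel scatter as an uncertainty.  fluor has
    shape (n_channels, n_energy); i0 is the incident-flux monitor with
    shape (n_energy,).  A glitch in one channel then shows up as a large
    sigma at that energy rather than as a silently deleted point."""
    mu_per_channel = fluor / i0
    mu = mu_per_channel.mean(axis=0)
    sigma = mu_per_channel.std(axis=0, ddof=1)
    return mu, sigma

Any fitting code that accepts per-point uncertainties can then use both
arrays directly.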

>> I'm not sure that "slow FT" versus "discrete FT" is that important
>> here, though perhaps I'm missing your meaning.  My view is that the
>> EXAFS signal is band-limited (finite k range due to F(k) and 1/k, and
>> finite R range due to F(k), lambda(k),  and 1/R^2), so  that sampling
>> on reasonably fine grids is going to preserve all the information that
>> is really there.
>
> Again, FFT requires a uniform tabulation of data in k-space, which is
> generally secured by interpolation.  Traditionally, this has
> been histogram interpolation, which tries to take into account the
> possibility of having multiple data points within one of the new
> intervals in k.  That method, however, has to 'make up' data when there isn't
> any in one or more intervals.
>
> Actually, there are three possible tiers of speed.  There's FFT, there's
> discrete FT done the slow way by computing the sums as
> they appear in the textbook (possibly good if you want a funny grid in R
> space), and then there's what I propose.

I agree with your larger point that we can definitely afford the CPU
cycles to do better analysis than use something simply because it is
Fast.  I'm not sure that a slow discrete FT would be significantly
different than an FFT, though.  I think (but am listening...) that the
continuous v discrete FT is like classic audiophile snobbery about the
superiority of analog over digital music.  With sufficient sampling
and resolution and dynamic range in the signal, the difference has to
be trivial.  I'm willing to believe that a digital recording has less
dynamic range, but I'm also willing to believe that this is swamped by
my stereo and ears (and I mostly listen to mp3s through cheap
headphones these days, so I clearly am not buying a tube amplifier
anytime soon...).

But for EXAFS, we know the limits of the signals in k and R (say, 50
Ang^-1 and 25 Ang for extrema), and so can choose samplings that
definitely over-sample enough to not cause significant artifacts.
There can be dynamic range issues (especially with standard
pre-amplifiers and VtoF converters), but I think this is a slightly
separate issue from "the glitch problem".

Interpolation methods are definitely worth discussing and studying.
I'm not sure that the energy values where the monochromator happened to
stop for a measurement of mu(E) are more sacred, or a better sampling
of the first shell of an EXAFS spectrum, than a set of energies
interpolated from those measurements onto a uniform k grid, provided
the original data isn't woefully undersampled.  With a typical step
size of 0.05 Ang^-1 you can easily miss several points and still
accurately measure distances up to 5 or probably even 10 Angstroms.
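
As a quick sanity check on those numbers (just the usual Nyquist
argument: chi(k) oscillates like sin(2kR), so R_max ~ pi/(2*dk)):

import numpy as np

# A k-spacing dk resolves distances out to roughly R_max = pi / (2*dk).
for dk in (0.05, 0.10, 0.25):   # nominal step, plus coarser effective steps
    print("dk = %.2f Ang^-1  ->  R_max ~ %.1f Ang" % (dk, np.pi / (2 * dk)))
# 0.05 Ang^-1 gives ~31 Ang; even an effective 0.25 Ang^-1 still covers ~6 Ang.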

>> I do think that having richer window functions and spectral weightings
>> would be very useful.  You could view chi(k) data with glitches as
>> wanting a window function that had a very small value (possibly zero:
>> remove this point) exactly at the glitch, but had a large value (~1)
>> at nearby k-values.
>>
>> Another (and possibly best) approach is to assign an uncertainty to
>> each k value of chi.  At the glitch, the uncertainty needs to go way
>> up, so that not matching that point does not harm the overall fit.
>
> A window function which went to 0 in the glitch region is tantamount to an
> assertion that chi(k)=0 there, which is generally not so
> and is probably a lot worse than the error you make by even the lamest
> interpolation through the glitch. The uncertainty makes more
> sense, except that if you filter data, the noise spreads out in a correlated
> manner over adjacent regions.  Also, the tacit assumption
> of an uncertainty-based method is that the noise is uncorrelated within the
> high-uncertainty region.  That said, I can imagine that
> this is one of those methods that works better than it has any right to.

I think of the window as weighting of measurements to consider, and
that a weight of zero is applied to data with infinite uncertainty.  I
don't think of chi(k) as zero outside the window range (say, below
kmin or above kmax), just that we don't want to consider that in the
analysis.  Again, I think the better thing would be to have an
uncertainty for each value of chi(k), and propagate that through the
analysis.
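
In practice that could be as simple as inflating sigma at the flagged
points and letting the least-squares weighting do the rest.  A minimal
sketch (names like glitch_mask are made up):

import numpy as np

def weighted_residual(model_chi, chi, sigma, glitch_mask, inflate=1.0e6):
    """Residual for any least-squares engine: glitched points are kept
    but given an enormous sigma, so they carry essentially no weight.
    A hard window notch (weight exactly zero) is just the limit of
    sigma -> infinity."""
    sig = np.where(glitch_mask, sigma * inflate, sigma)
    return (chi - model_chi) / sig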

> Now, in my proposed system, how would you fit filtered (q-space) data?  The
> first step is to do the very slow FT (VSFT) on the input
> data with no interpolation.  This leads to a set of sines and cosines which
> replicate the data.  I would then apply the window and
> do the back-summation of all those sines and cosines at the given points.
> This will undoubtedly introduce some artifacts near
> the edges of the deleted regions simply due to our ignorance of what the data
> really do in there.  Next, in each fit iteration, I would
> evaluate the model function at each point, then do the VSFT and filtering
> exactly as had been done on the data, which causes the fit to
> have the same artifacts.  Since the VSFT and filter operations are linear, it
> should be possible to pre-compute a kernel which relates the
> values of unfiltered data to those of filtered data.  This would be a matrix
> of size Np*Np with Np=# of points.  Using the kernel instead
> of evaluating gazillions of trig functions would be reasonably fast; I
> suspect that ifeffit does a similar trick.  Similarly, fitting in
> R-space would be accomplished by computing the VSFT on data and model
> function.
>
> Yes, I know I talk a good game.  Unfortunately, I'm too swamped to attempt
> implementation.  Also, I'd have to reinvent a number of
> very well-engineered wheels to do it.

I hear you!  I'm interested in pursuing "more advanced weightings and
transforms", but don't have much time to spend on such things these
days, so development is slow....
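
For what it's worth, the linear-kernel trick you describe seems
straightforward to sketch: build a sine/cosine design matrix at the
measured (non-uniform, gappy) k points, pseudo-invert it for the "very
slow FT", apply the R-space window, and sum back at the same points.
Something like this (plain numpy, my guess at the bookkeeping, not
anything implemented in Ifeffit or Larch):

import numpy as np

def vsft_filter_kernel(k, r_grid, window_r):
    """Np x Np kernel mapping unfiltered chi(k) at arbitrary k values to
    the R-windowed (filtered) chi at the same points.  Applying the same
    kernel to data and model gives both identical filtering artifacts."""
    # Basis of sin(2kR) and cos(2kR) at the measured k points (no interpolation)
    A = np.hstack([np.sin(2.0 * np.outer(k, r_grid)),
                   np.cos(2.0 * np.outer(k, r_grid))])       # (Np, 2*Nr)
    A_pinv = np.linalg.pinv(A)                                # the "very slow FT"
    W = np.diag(np.tile(window_r, 2))                         # R-space window
    return A @ W @ A_pinv                                     # (Np, Np)

# usage: K = vsft_filter_kernel(k, r, win); chi_q = K @ chi; model_q = K @ model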

>> Larch has a separate Transform class used for each dataset in a fit -
>> this includes the standard FT parameters and fit ranges.  It is
>> intended to be able to do this sort of advanced windowing (and in
>> principle, advanced weighting too).  Currently (I'd be ready for a
>> release but am commissioning our new beamline and am hoping to post a
>> windows installer by the end of the month)  this has "the standard
>> XAFS window functions", but one can create their own window functions
>> (for k- and/or R-space) with suppressed points/regions, etc, and use
>> them as well.  I have to admit I haven't tried this, but it was
>> definitely part of the intention, and should work.   My hope is to
>> extend this to include some wavelet-like weightings as well.
>>
> What is Larch?  Is it a replacement for ifeffit?  By 'class' do you mean
> object as in OOP?
> "And now for something completely different.  The larch." - Monty Python
>       mam

Yes, mostly yes, and yes, you get the reference exactly ("How to
recognize different types of trees from quite a long way away, Number
1: The Larch").
  http://xraypy.github.com/xraylarch/
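
For instance, a k-window with Hanning sills plus a notch pulled to zero
over a glitch region only takes a few lines of numpy (illustration
only; the argument names here are made up, not the released Larch
interface):

import numpy as np

def notched_hanning(k, kmin, kmax, dk=1.0, glitches=(), halfwidth=0.1):
    """Hanning-sill window over [kmin, kmax], with the weight pulled to
    zero in a small region around each glitch (in k)."""
    win = np.ones_like(k)
    win[k < kmin - dk / 2] = 0.0
    win[k > kmax + dk / 2] = 0.0
    lo = (k >= kmin - dk / 2) & (k <= kmin + dk / 2)
    hi = (k >= kmax - dk / 2) & (k <= kmax + dk / 2)
    win[lo] = np.sin(np.pi * (k[lo] - (kmin - dk / 2)) / (2 * dk)) ** 2
    win[hi] = np.cos(np.pi * (k[hi] - (kmax - dk / 2)) / (2 * dk)) ** 2
    for kg in glitches:
        win[np.abs(k - kg) < halfwidth] = 0.0   # suppress the glitch region
    return win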

Another wide-open appeal:  I'd love help in implementing improvements
like these (better windows, comparing continuous v discrete
transforms, using wavelets, etc), and think work like this would be
publishable.   There are several other improvements I'd like to see,
such as generating Feff Paths on-the-fly, and so forth.    Essentially
all of Larch is in python so that it's very easy to modify and add
features like this.  The available numerical libraries for python are
plentiful and well-maintained.  It's also easy to add wrappers for C
and Fortran code if that is needed.....

--Matt
_______________________________________________
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit