Re: [Ifeffit] problem with E0 (enot) parameters

2009-06-19 Thread Scott Calvin
Hi Zajac,

What happens if you constrain all E0's to be the same? In the fit  
where they come out large, what are the uncertainties in the E0's?  
What are their correlations with other fitted parameters? There has  
been some debate in this list on the past as to how useful it is to  
allow for different E0's for different paths. It may be that Artemis  
is shifting the E0's for those paths in lieu of some other correlated  
parameter.
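
For concreteness, a single shared E0 in Artemis's guess/def/set
language might look something like the following sketch (the variable
name is a placeholder, not taken from your project):

    guess enot_all = 0      # one E0 shift (eV), shared by every path
    # then set the E0 parameter of each path (W-C, W-N, W-C-N, ...) to:
    #   enot_all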

--Scott Calvin
Sarah Lawrence College

On Jun 19, 2009, at 5:33 AM, Zajac, Dariusz A. wrote:

> Hi all,
> can anybody help me and send some link about problems with enot in
> Artemis?
> In Google and the tutorials I couldn't find any help. Of course there
> are a few posts about delr or ss parameters, but enot is somehow
> omitted (or I cannot find it...)
>
> the problem is with fitting K4W(CN)8*2H2O at the W L3 edge.
> the fit and other parameters look OK, except enot for C and N, which
> both come out around 12 and don't want to move to other values...
> thanks
> darek



Re: [Ifeffit] problem with E0 (enot) parameters

2009-06-19 Thread Scott Calvin
Hi Darek,

You've got a false fit. The E0's aren't the biggest problem; look at
the delr for potassium! Your fit is scrambling all the paths in
non-physical ways.

Your initial description of the system suggests that you have a decent  
guess as to the structure to start with. What happens when you run a  
fit with very few free parameters? (Perhaps none, or perhaps just  
floating an overall S02, E0, and maybe a couple of sigma2's.) Does it  
look qualitatively right, with peaks where there should be peaks? If  
so, then you probably need a tighter set of constraints. If not, then  
the material is not what you think it is.

--Scott Calvin
Sarah Lawrence College

On Jun 19, 2009, at 7:30 AM, Zajac, Dariusz A. wrote:

> Hi Calvin,
> thanks for the email. I tried to find something on the mailing list
> about E0, but it seems that my search terms were not correct. I
> found only a few posts, and none of them explained (or followed up on)
> the problem with a large enot.
> The enot parameters are close to each other, e.g. enot_C = 12.597(+-0.654)
> and enot_N = 12.407(+-1.776); correlations: e.g. enot_N and delr_N = 0.744,
> enot_C and delr_C = 0.605, enot_C and enot_N = -0.313.
> You can find more detail in the attachment or below. I have found that
> the value of the first background variable is large (-3437(+-23034)).
> When I constrain both enot parameters I do not see huge changes in the
> parameters; chi^2, enot, amp, delr and ss are the same within the
> uncertainties.
> I agree with you that the program can shift the wrong parameter, but
> why, every time I change the parameters (I shift them away from the
> local minimum), do they come back?
>
> cheers
> darek
>
>
> Independent points  =  45.279296875
> Number of variables =  30.0
> Chi-square  =219223.767
> Reduced Chi-square  =   14347.765383235
> R-factor=   0.022707172
> Measurement uncertainty (k) =   0.000102780
> Measurement uncertainty (R) =   0.000306293
> Number of data sets =   1.0
>
>
> Guess parameters +/- uncertainties  (initial guess):
>  amp     =  0.8496050  +/-  0.0767160   (guessed as  0.849812 (0.095158))
>  enot_C  = 12.5967170  +/-  0.6539780   (guessed as 12.601228 (0.744286))
>  delr_C  = -0.0055260  +/-  0.0068920   (guessed as -0.005265 (0.007572))
>  ss_C    =  0.0016740  +/-  0.0008460   (guessed as  0.001676 (0.000991))
>  enot_N  = 12.4065290  +/-  1.7757010   (guessed as 12.895967 (2.018563))
>  delr_N  =  0.0358140  +/-  0.0193390   (guessed as  0.042607 (0.029913))
>  ss_N    =  0.0068930  +/-  0.0016280   (guessed as  0.008196 (0.002599))
>  enot_K  =  0.0064050  +/-  9.7412920   (guessed as -4.066616 (13.931541))
>  delr_K  =  0.9265430  +/-  0.1815320   (guessed as -0.018996 (0.154984))
>  ss_K    =  0.0074030  +/-  0.0124260   (guessed as  0.011457 (0.010167))
>  enot_O  = -8.4988040  +/- 13.7627610   (guessed as  0.906319 (10.059312))
>  delr_O  = -0.3610320  +/-  0.0975640   (guessed as -0.139554 (0.069016))
>  ss_O    =  0.0010060  +/-  0.0050660   (guessed as  0.001140 (0.004395))
>
> Def parameters (using "FEFF0: Path 1: [C5_1]"):
>  enot_CN =12.5016230
>  delr_CN = 0.0151440
>  ss_CN   = 0.0042840
>  enot_CNC=12.5333210
>  delr_CNC= 0.0082540
>  ss_CNC  = 0.0034140
>  enot_NCN=12.4699250
>  delr_NCN= 0.0220340
>  ss_NCN  = 0.0051540
>  enot_KN = 6.2064670
>  delr_KN = 0.4811790
>  ss_KN   = 0.0071480
>  enot_KC = 6.3015610
>  delr_KC = 0.4605080
>  ss_KC   = 0.0045380
>  enot_KNC= 8.3365500
>  delr_KNC= 0.3189440
>  ss_KNC  = 0.0053230
>
> Set parameters:
>  enot_H  =  -0.920939 (0.00)
>  delr_H  =  0.103335 (0.00)
>  ss_H=  0.00749823 (0.00)
>
> Background parameters +/- uncertainties:
>  bkg01_01=  -3437.6705554   +/-   23034.7299514
>  bkg01_02=-0.0609379   +/-  1.8005458
>  bkg01_03= 0.1948870   +/-  0.1595573
>  bkg01_04=-0.0398830   +/-  0.0358870
>  bkg01_05=-0.0025572   +/-  0.0147361
>  bkg01_06= 0.0067564   +/-  0.0081250
>  bkg01_07=-0.0031386   +/-  0.0052991
>  bkg01_08=

Re: [Ifeffit] problem with E0 (enot) parameters

2009-06-19 Thread Scott Calvin
Hi Darek,

OK, so if the K, H, and O don't affect the fit much for the C and N,  
and the K, H, and O are returning nonsensical values, then a logical  
possibility is that the E0's for C and N are correct. If you add 12 eV  
to the E0 you chose in Athena, where in the spectrum does it fall? Is  
it still before the white line? If so, it seems to me you don't have a  
problem. If not, then we have to ponder further.

--Scott Calvin
Sarah Lawrence College

On Jun 19, 2009, at 8:28 AM, Zajac, Dariusz A. wrote:

> Hi Scott,
> look also at H and O;
> but for me and for this fit only the W-C and W-C-N bonds are important.
> This sample is a reference sample for other cyano-bridged networks. So
> you suggest focusing on the K ions? How can that help with the first 2
> peaks? K is at ~5 A.
> I have analysed in a larger R space only to see how the spectrum
> behaves. The contribution from K, O, etc. at R higher than 5 A is, for
> me, too low to analyse reasonably for such a compound.
> I attached the last version of the results in the previous post.
> Anyway, the enots for C and N do not change if I enlarge the R region
> (when I include further paths, also for K).
> About the material I am quite sure ;) and the crystal structure is
> from the literature.
>
> In the attachment you will find a bmp file of the fit: data, fit, bkg
> and the K path.
> Fitting ranges: k(3-15), R(1.7-6), dk 2, dr 0.5; phase correction -
> first C
>
> cheers
> darek
>



Re: [Ifeffit] ODP: problem with E0 (enot) parameters

2009-06-20 Thread Scott Calvin
My email system didn't even let the message through, so I lost the  
thread for a bit.

On the other hand, I for one DO want to encourage people to attach
stuff that's less than 2 MB when appropriate. It's very convenient to
have, say, an Artemis project file to look at in some cases, and I
often look at/write responses to these emails when I'm not connected
to the web and thus can't follow a link.

Is that an OK rule of thumb with the rest of you? Under 2 MB => OK to
attach; over 2 MB, use a link? Or would people prefer a lower threshold?

--Scott Calvin
Sarah Lawrence College

On Jun 20, 2009, at 10:37 AM, Zajac, Dariusz A. wrote:

> I apologize,
> first and last time...
> cheers
> darek
>
>
> -Original Message-
> From: ifeffit-boun...@millenia.cars.aps.anl.gov on behalf of Stefan Mangold
> Sent: Sat 2009-06-20 12:44
> To: XAFS Analysis using Ifeffit
> Subject: Re: [Ifeffit] problem with E0 (enot) parameters
>
> Please do not attach 10 MB of data to your mails. Just send a link to
> a web space with your e-mail. People can then download the stuff if
> needed.
>
> Best regards
>
> Stefan



Re: [Ifeffit] edge height proportional to the molar density?

2009-07-24 Thread Scott Calvin

Hi Haiyan,

Yes, relative concentrations for different elements within the same  
sample can be estimated from edge jumps. Be sure to normalize by the  
difference in absorption coefficient for each edge--you can get those  
figures from Hephaestus, among other places.
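
As a sketch of the arithmetic in Python (every number below is a
made-up placeholder; the cross-section jumps are the sort of thing
you'd look up in Hephaestus):

    # Relative molar concentration from edge jumps:
    #   jump[i]   = measured step in normalized mu(E) at element i's edge
    #   dsigma[i] = tabulated jump in the absorption cross section across
    #               that edge (barns/atom), e.g. from Hephaestus
    jump   = {"Fe": 0.42,    "Ni": 0.17}      # assumed values
    dsigma = {"Fe": 28000.0, "Ni": 24000.0}   # assumed values

    ratio = (jump["Fe"] / dsigma["Fe"]) / (jump["Ni"] / dsigma["Ni"])
    print(ratio)   # ~2.1, i.e. about 2.1 Fe atoms per Ni atom in the beam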


This method won't be as accurate as techniques like ICP or XRF,  
largely because it's hard to normalize XAS spectra consistently  
between different edges. But you can easily do better than 20% accuracy.


--Scott Calvin
Sarah Lawrence College

On Jul 24, 2009, at 5:16 PM, Haiyan Zhao wrote:



I am wondering whether the edge height could be used to estimate
molar density or not. If so, is there any reference about such a
calculation? Thanks!


Haiyan




[Ifeffit] Kapton in glove box

2009-08-05 Thread Scott Calvin

Hi Todd,

I've taken the liberty of posting your question to the Ifeffit mailing  
list. You're likely to get more accurate and quicker answers to these  
kinds of questions there.


(For the rest of you: Todd is asking about the technique of preparing  
air-sensitive samples in a glove box, putting them on Kapton tape,  
sealing them in plastic bags, and transporting them to the beamline,  
shooting right through the bags.)


I'll take a shot at them, though:

It's hard for me to imagine adsorbed oxygen on the Kapton being more  
significant than the other sources of stray oxygen that can be present  
in a glove box. After all, the Kapton's in there too. And I don't  
think it's going to be more significant than the oxygen that diffuses  
through the plastic bags during transport.


The thinner the Kapton tape, the better, as that will minimize the  
absorption due to the tape. It used to be hard to find 1 mil Kapton  
tape with adhesive, but now it's easy. Hephaestus will give you the  
absorption of Kapton, so you can judge how big an effect it will be at  
the energies at which you'll work.
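
For instance, the loss from one layer of tape is just exp(-t/lambda);
in Python (the attenuation length below is a placeholder -- substitute
the Hephaestus value for your actual energy):

    import math

    t_um = 25.4               # 1 mil of Kapton, in microns
    atten_length_um = 2500.0  # placeholder; look this up in Hephaestus
    print(math.exp(-t_um / atten_length_um))   # ~0.99: roughly 1% loss per layer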


--Scott Calvin
Sarah Lawrence College

On Aug 4, 2009, at 2:39 PM, Monson, Todd wrote:


Scott,

Thanks.  Is it pretty reasonable to assume that the kapton tape that  
you put your samples on doesn’t have any adsorbed oxygen that could  
affect your samples?  Do you do anything to clean the kapton?  Where  
are some good places to buy the kapton (and do you need to purchase  
rather thin kapton tape for doing XAFS)?


Thanks again,

Todd

From: Scott Calvin
Sent: August 04, 2009 11:34 AM
To: Monson, Todd
Cc: Scott Calvin
Subject: Re: mossbauer

Hi Todd,

Regular zip-loc bags work just fine. For heat sealers I've used  
everything from a heat sealer manufactured for the purpose to a  
little propane torch--even a cigarette lighter should work. Putting  
one sealed bag inside another, if the energy you're working at  
allows it, seems to work quite well.


--Scott


On Aug 4, 2009, at 1:15 PM, Monson, Todd wrote:


Scott,

I had another question – what kind of plastic bags and heat sealers  
do you use for sealing up your air-sensitive XAFS samples?  And  
where could I buy them?


Thanks,

Todd





Re: [Ifeffit] Kapton in glove box

2009-08-05 Thread Scott Calvin
Good point on the cardboard, Richard. I always pre-cut strips of  
Kapton tape, so that I wasn't bringing in cardboard at all.


--Scott Calvin
Sarah Lawrence College

On Aug 5, 2009, at 8:13 PM, Richard Mayes wrote:


Todd,

Are you working with oxygen sensitive or moisture sensitive samples  
(or both)?  If it's just moisture sensitive, then you can use  
regular 2-sided tape from your local office supply and polypropylene  
film to seal samples in polycarbonate or aluminum holders (or even  
pellets if you're lucky enough to be able to press pellets that hold  
their shape).  Chemplex Industries is where I have gotten the  
polypropylene films I have used (and Kapton as well -  
www.findtape.com also has a good selection of Kapton tape).


I used this method with many samples that involved heavily chlorided
titanium on silica and had few problems if they were used within 5-7
days after packing in a glove box (the samples with problems
resulted from improperly sealed samples).  You can get jars (baby  
food jars work very well to ship individual samples) to store the  
samples for shipping and if you pack the jars in the glove box, you  
will have the box atmosphere in the jars, for a little while anyway.


A note on oxygen sensitivity (and to an extent moisture
sensitivity): you probably already know this, but I'll say it
anyway... if cardboard is present in the roll of Kapton tape, you may
have oxygen/water diffusion from the cardboard for a few days after
you take it into the box.  Our rule of thumb was to pull vacuum on  
anything involving cardboard for at least 48 hrs before taking it  
into the box.  All that to say, take your supplies into the box a  
few days ahead of time to allow your box catalyst to take care of  
any residual oxygen/water that make their way in.


HTH,
-Richard




Re: [Ifeffit] Kapton in glove box (Todd Monson)

2009-08-06 Thread Scott Calvin
Hi Todd,

Kapton is resistant to most solvents, but that's not necessarily true  
for the adhesive on it! I've had the adhesive completely washed away  
by samples with a little solvent on them before. So you should test  
that before preparing your samples.

--Scott Calvin
Sarah Lawrence College

On Aug 6, 2009, at 11:47 AM, Monson, Todd wrote:

> Thanks for all your comments regarding kapton tape and measuring air  
> and moisture sensitive compounds.  Many of my samples are indeed air  
> sensitive and not just moisture sensitive (iron nanoparticles).  If  
> my particles are dispersed in solvent will the kapton be resistant  
> to that solvent (at least during the time that the solvent is  
> evaporating from the tape in the glove box)?  Darek mentioned kapton  
> dots - could you tell me where I can purchase these?
>
> Thanks again for everyone's help, I thought this mailing list was  
> primarily for software related questions but I am finding it is  
> useful to get help on any XAFS related questions.
>
> Todd



Re: [Ifeffit] Chi in arthemis

2009-08-18 Thread Scott Calvin
Hi euG,

I don't report chi-square at all. I know that the official IXS reports  
suggest it, but until the question of what to use for the measurement  
uncertainty is adequately addressed within the community, it seems to  
me that the number is not very meaningful in an absolute sense. It is  
meaningful, however, for comparing fits on the same data. (I do report  
R-factors to give some sense of closeness of fit.)

On the other hand, uncertainties in fitted parameters are calculated  
in a highly defensible way and should be reported. If the  
uncertainties are large for parameters that are of interest to you,  
then that is indicative of a problem in the fit and should not be  
swept under the rug.
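
For reference, the quantities in question, roughly as ifeffit defines
them (epsilon is the estimated measurement uncertainty -- the
contested ingredient):

    \chi^2 = \frac{N_{idp}}{N_{pts}} \sum_i
             \left( \frac{data_i - fit_i}{\epsilon} \right)^2 ,
    \qquad
    \chi^2_\nu = \frac{\chi^2}{N_{idp} - N_{var}} ,
    \qquad
    R = \frac{\sum_i (data_i - fit_i)^2}{\sum_i data_i^2}

Note that epsilon enters chi-square but cancels out of the R-factor,
which is why the R-factor survives the objection above.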

--Scott Calvin
Sarah Lawrence College

On Aug 18, 2009, at 4:02 PM, Eugenio Otal wrote:

> Hi all,
> I see that the reports of the fits I perform in Artemis have a really
> big Chi. The fit is really good and the Chi should be closer to 1
> than to 100.
> The problem is what I should report as the Chi. Uncertainties in delr
> or ss have the same problem; they are big like the results.
> I found the topic in the list but not a good answer about what I
> should report.
> Thanks, euG
>


Re: [Ifeffit] Chi in arthemis

2009-08-18 Thread Scott Calvin
Hi euG,

That amp is not physically reasonable, unless you're using it as a  
proxy for the coordination number.

The uncertainty on the other variables does not seem high to me for a  
single-shell fit. Well, 2 eV is a bit high for an uncertainty on E0,  
but not crazy high.

There are some approaches that can be used to try to reduce the  
uncertainties, but you shouldn't even think about that until you get  
the amp (S02) straightened out.
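
(For the record, the arithmetic behind the "proxy" reading, assuming a
typical S_0^2 of about 0.9 -- an assumption, not a fitted value:

    N \approx \frac{amp}{S_0^2} = \frac{6.78}{0.9} \approx 7.5

which is why an amp near 7 can make sense if, and only if, it is
standing in for S_0^2 times N.)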

--Scott Calvin
Sarah Lawrence College

On Aug 18, 2009, at 6:18 PM, Eugenio Otal wrote:

> Hi Scott,
> here I copy a part of the report:
>
> Independent points  =   6.222656250
> Number of variables =   4.0
> Chi-square  = 247.145092496
> Reduced Chi-square  = 111.193574128
> R-factor=   0.017422216
>
> Guess parameters +/- uncertainties  (initial guess):
>   amp = 6.7815290   +/-  1.4687660(1.)
>   enot= 2.2173620   +/-  2.1499920(0.)
>   delr= 0.0514640   +/-  0.0163900(0.)
>   ss  = 0.0074020   +/-  0.0025220(0.0030)
>
> I see that the R-factor is pretty good, 1.74%. amp is high because it
> is correlated with the coordination number and always has big errors.
> delr has the error of the total distance, so it is OK, but ss and
> enot have a really big error; is this normal?
> Thanks, euG



Re: [Ifeffit] Chi in arthemis

2009-08-18 Thread Scott Calvin
Report the error bars as given; otherwise you're reintroducing the  
unknown measurement uncertainty factor. Then somewhere in your paper  
cite ifeffit and make clear that you used that method to determine  
uncertainties.
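
(The convention euG refers to, as I understand it: ifeffit inflates
the raw statistical uncertainties by the square root of reduced
chi-square,

    \delta x_{reported} = \delta x_{statistical} \times \sqrt{\chi^2_\nu}

so the reported values already account for a misfit larger than the
estimated noise. Dividing that factor back out would reintroduce the
questionable epsilon.)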

--Scott Calvin
Sarah Lawrence College

On Aug 18, 2009, at 7:31 PM, Eugenio Otal wrote:

> Hi Scott,
> I forgot the amp; I left N = 1, that is the mistake, sorry. I have
> the S02 from another oxide to transfer.
> I found that error bars in Ifeffit are scaled by the reduced Chi2, so
> should I transform the errors or report them directly as given?
> Regards, euG
>



Re: [Ifeffit] Chi in arthemis

2009-08-18 Thread Scott Calvin
Matt,

Is this the most recent IXAS report on error reporting standards?

http://www.i-x-s.org/OLD/subcommittee_reports/sc/err-rep.pdf

It uses a rather expansive definition of epsilon, which explicitly  
includes "imperfect" ab initio standards such as FEFF calculations. It  
indicates that statistical methods such as that used by ifeffit for  
estimating measurement error yields a lower limit for epsilon, and  
thus an overestimate of chi square.

So I think my statement and yours are entirely compatible.

As far as what should be reported, I do deviate from the IXAS  
recommendations by not reporting chi-square. Of course, I tend to work  
in circumstances where the signal-to-noise ratio is very high, and  
thus the statistical uncertainties make a very small contribution to  
the overall measurement error. In such cases I have become convinced  
that the R-factor alone provides as much meaningful information as the  
chi-square values, and that in fact the chi-square values can be  
confusing when listed for fits on different data. For those working  
with dilute samples, on the other hand, I can see that chi-square  
might be a meaningful quantity.

At any rate, I strongly agree that the decision of which measurements  
of quality of fit to produce should not be dependent on what "looks  
good"! That would be bad science. The decision of what figures of  
merit to present should be made a priori.

--Scott Calvin
Sarah Lawrence College

On Aug 18, 2009, at 10:40 PM, Matt Newville wrote:

> Having a "reasonable R-factor" of a few percent misfit and a reduced
> chi-square of  ~100 means the misfit is much larger than the estimated
> uncertainty in the data.  This is not at all unusual.   It does not
> necessarily  mean (as Scott implies) that this is because the
> uncertainty in data is unreasonably low, but can also mean that there
> are systematic problems with the FEFF calculations that do not account
> for the data as accurately as it can be measured.   For most "real"
> data, it is likely that both errors FEFF and a slightly low estimate
> for the uncertainty in the data contribute to making reduced
> chi-square much larger than 1.
>
> And, yes, the community-endorsed recommendation is to report either
> chi-square or reduced chi-square as well as an R-factor.  I think some
> referees might find it a little deceptive to report  R-factor because
> it is "acceptably small" but not reduced chi-square because it is "too
> big".



Re: [Ifeffit] limits for second shell

2009-08-21 Thread Scott Calvin
Hi Eugenio,

To my eye, I strongly suspect there's real signal there. To confirm,  
try several different k-ranges and k-weights. If the feature persists,  
it is likely physical. (It may change size and shape to some extent.)

--Scott Calvin
Sarah Lawrence College

On Aug 20, 2009, at 11:54 PM, Eugenio Otal wrote:

> Hi,
> I have a sample of pure Er2O3 (blue line in the attached graph)
> and a sample of ZnO doped with erbium that has segregated the same
> oxide (red line) by thermal treatment.
> The signal for the segregated oxide gets noisy around k=9 because the
> sample is so dilute, but the radial distribution shows a second-shell
> signal, smaller than the pure oxide, but still a signal. My doubt is
> about how to know if the second shell is real and if that second
> shell can be useful to obtain information. Is there a criterion to
> know that? Some limit in k-space?
> Thanks, euG


Re: [Ifeffit] limits for second shell

2009-08-21 Thread Scott Calvin
Hi Eugenio,

Perform your analysis on the data, using different values for k-max.
If you are successfully extracting structural information that is
different from the standard, you will find fitted parameters which
(see the sketch after this list):

--are stable with changes in k-max

--take on values that are different from those for the standard,
taking into account the error bars (i.e. the values for the standard
do not fall within the error bars for the sample for at least one
parameter)

--are stable with changes in k-weight.
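
A minimal Python sketch of the bookkeeping for the first two checks
(all numbers are invented for illustration):

    def consistent(v1, e1, v2, e2):
        """True if two fitted values agree within their combined error bars."""
        return abs(v1 - v2) <= e1 + e2

    # stability of a fitted distance across k-max choices:
    fits = {12: (2.355, 0.006), 13: (2.356, 0.005), 14: (2.354, 0.005)}
    ks = sorted(fits)
    print(all(consistent(*fits[a], *fits[b]) for a, b in zip(ks, ks[1:])))

    # distinguishable from the standard?
    print(not consistent(2.355, 0.005, 2.371, 0.008))   # True -> genuinely different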

--Scott Calvin
Sarah Lawrence College

On Aug 21, 2009, at 11:35 AM, Eugenio Otal wrote:

> Hi,
> thanks for the help.
> Let me be more clear. I am trying to learn the limits on using
> information from the second shell: from XRD I found that the phase
> is present, but I want to know if I can use the signal to obtain
> more information about second neighbors.
> I have this sample and some more where I find a second-shell signal;
> the doubt is about how far I can trust this information. Is there a
> limit in k-space below which to believe the information of the second
> shell and try to fit it?
> I attach the R-space graph with the same limits for the FT; there
> are still differences. What should I check to be sure the
> information I can get from the second shell is trustworthy and not an
> artifact of the noisy signal?
> Thanks, euG
>


Re: [Ifeffit] mu(E) of theoretically calculated spectra

2009-09-24 Thread Scott Calvin
Hi Bhoopesh,

There are files that are identified as cementite in Matt Newville's  
archive:

http://cars9.uchicago.edu/~newville/ModelLib/search.html

If you manage to get any other data, let me know; I've used the  
standard at the link above, but I'd like to confirm it.

--Scott Calvin
Sarah Lawrence College

On Sep 24, 2009, at 11:45 AM, bhoopeshm wrote:

> Dear All,
>   I am writing to see if anyone on this mailing list has  
> ever measured "Cementite" (Fe3C) EXAFS. It is basically an iron
> carbide, nothing exotic at all.
>
>  I would be very grateful to anyone who might want to share an EXAFS  
> spectrum of Cementite (Fe3C).
>
>
> Regards,
> Bhoopesh


Re: [Ifeffit] Cementite EXAFS - thanks

2009-09-24 Thread Scott Calvin
One thing I can say is that the EXAFS data in Matt's archive is  
reasonably well fit by a cementite structure. Since there's a lot of  
variation in cementite-like materials, that doesn't tell us exactly  
what it is, but at least it's in the right family.

--Scott Calvin
Sarah Lawrence College

On Sep 24, 2009, at 5:49 PM, Bhoopesh Mishra wrote:

> Hi Matt,
> I imagined you might not remember about the data or how it was
> prepared, so I decided not to bother you by asking about it. But it
> sounds like you do have an amazing memory.
>
>  Yes, I would be happy to characterize the sample by running  
> both EXAFS and XRD on it if you still have it. I would of course  
> share the information with the mailing list, once I characterize it.  
> I will collect the sample from you sometime soon. Thanks again for  
> offering.
>
>
> Regards,
> Bhoopesh
>



[Ifeffit] Tenure track position for chemist

2009-10-06 Thread Scott Calvin

Hi all,

Since presumably a lot of people on this list know early-career  
chemists with an interest in a position where teaching is valued and  
they can also collaborate closely with a XAFS expert (that's me, I  
guess), I thought I'd pass along information on our opening:


Sarah Lawrence College, a coeducational liberal arts college  
dedicated to individualized education, invites applications from  
broadly trained chemists for a tenure-track position beginning  
August 1, 2010. The successful candidate will teach general  
chemistry, other undergraduate chemistry courses of interest to  
liberal arts students, and upper-level courses including physical  
chemistry. An interest in environmental science is desirable. A  
commitment to working closely with students on an individual basis  
is essential. Candidates should have a Ph.D. in Chemistry or expect  
to receive one by August 1, 2010.


Application materials must include:  cover letter, CV, statement of  
teaching philosophy and research interests, graduate transcript,  
descriptions of two courses suitable for liberal arts students, and  
three letters of recommendation (at least one of which must address  
the candidate's ability to teach general chemistry).  Deadline for  
applications is November 16, 2009.  To apply for the position please  
go to:


https://slc.simplehire.com/applicants/jsp/shared/frameset/Frameset.jsp?time=1254862871561


For information on Sarah Lawrence College, our curriculum, teaching  
methods, and philosophy of education, please see our Web site at: http://www.slc.edu 
 .
SLC is an Equal Opportunity Employer committed to achieving a  
racially and culturally diverse community.


--Scott Calvin
Sarah Lawrence College


Re: [Ifeffit] Cu oxide fitting result

2009-10-10 Thread Scott Calvin

Hi Bruce,

From April 10 to April 19 of this year, there was a thread discussing
the basic strategy Abhijeet is using (first under the subject line
"High S02" and then "fitting procedure"). As I recall, you were busy
with something else at the time, and saw there were enough experts
answering that the question would likely be fully addressed.


What Abhijeet is doing, I believe, is fitting the first shell first,  
setting the relevant values, including S02, to those found from the  
first shell, and then shifting the R-range to fit a new set of paths.  
He then continues that iterative procedure, shell by shell.


A number of people spoke up for that procedure, although concerns were  
raised as to whether it was the best approach for the particular  
system being discussed.


What Abhijeet was doing before that discussion was: fit the first  
shell first, set the relevant values to those found from the first  
shell, and then extend the R-range to fit a new set of paths,  
continuing this as an iterative procedure. This meant that the fitting  
range continued to include paths that he was now constraining to their  
values from a previous fit. My recollection is that this was generally
agreed to be bad practice, and the idea of shifting the R-range
through narrow bands was offered as an alternative.


Having participated in the discussion the first time around, I  
personally would not choose this procedure for the kind of system  
Abhijeet is looking at here: a highly crystalline system with strong  
contributions from paths at a wide range of distances that is expected  
to be similar to a known structure. But the point may be to practice  
this procedure for systems where it is more appropriate, in which case  
it makes sense to try it first on a known system.


--Scott Calvin
Sarah Lawrence College


On Oct 9, 2009, at 11:47 AM, Bruce Ravel wrote:



Abhijeet,

I took a quick peek at your project and I find it very confusing.  In
the most recent fit, you are fitting only from 3.5 to 4.4 -- the area
under the *third* peak in the data.  I don't really know how to
comment on this project because you have set a large number of
parameters to seemingly arbitrary values.  From a numerical
perspective, the reason for questions 1 and 2 is that you are
attempting to fit only a narrow band of the data and you have set the
majority of your parameters.  That seems an unlikely strategy to me.

I think the best thing you could do would be to fit *all* of your data
rather than an arbitrary and small band of it.




Re: [Ifeffit] Rbkg value

2009-10-23 Thread Scott Calvin

Hi Chris,

One thing to keep in mind is that it is not wrong to eliminate part of  
your signal; it just means you're losing a little bit of data. This is  
similar to a common filtering mistake that beginners make when trying  
to choose the maximum end of their fitting range: they look at the  
paths they are including, and try to set Rmax high enough to include  
most of the contribution from the paths they are including. What they  
should be doing, however, is to look at the paths they are not  
including, and set Rmax low enough so that the contributions from  
those paths are tolerably small.


Looking at correlations between background parameters and fitted  
parameters when the "fit background" option is selected also helps  
provide you with information. I'll admit that the "fit background"  
button confuses me a bit, though (I always have to spend ten minutes  
convincing myself again as to what exactly it is doing), so someone  
else should explain how that can be used to help address your question.


--Scott Calvin
Sarah Lawrence College

On Oct 23, 2009, at 3:51 PM, Chris Patridge wrote:


Hello everyone,

In removing the background, most literature suggests an Rbkg value of
1.0, because the region below this represents mostly noise and
low-frequency components that are not part of the scattering signal.
Using Feff and viewing the individual calculated paths, a number of
them lie near 1.5 A, and therefore have some contributions very close
to 1.0 A and below in R space due to the phase shift. Has anyone
modeled materials which contain these rather short Reff, and how did
you decide what was scattering and what was noise?


Thank you all,




Re: [Ifeffit] SrSO4 (Celestite) and SrCO3

2009-10-26 Thread Scott Calvin

Hi Peter,

I think I have one of SrCO3, but it will take me a couple days to dig  
it up.


--Scott Calvin
Sarah Lawrence College

On Oct 26, 2009, at 7:10 PM, Peter Nico wrote:


Hello All,

Would anyone out there have a SrSO4 or SrCO3 standard XANES spectrum
they would be willing to share?

There aren't any in the databases of which I am aware, namely
Lytle, GSECARS, and NSLS X18b.

There is a SrCO3, but it's really an EXAFS standard (the edge consists
of ~3 points).


I am sorry that I can't necessarily promise a publication  
acknowledgment because I am not sure if the work will ever be  
published.  However, if it is, I'll definitely acknowledge the help.


Thanks. --Peter




Re: [Ifeffit] Bug in Athena?

2009-11-19 Thread Scott Calvin

Hi Matt,

I'm the one who requested the merged reference channel.

If the data is ideal, of course only one reference scan is needed. But  
there are two common ways it can be nonideal that are relevant:


1) The monochromator does not hold calibration; i.e. there is an  
energy shift between scans


2) The reference channel is very noisy, perhaps because of an  
inherently thick sample


If  1) is a significant problem and 2) is not, then it makes sense to  
align the scans using the reference, at which point any reference scan  
will do for determination of the chemical shift of the merged data  
from the sample.


If 2) is a significant problem and 1) is not, then it makes sense to  
merge the references along with the sample data, because that will  
make it easier to determine the chemical shift.


If both problems are significant, then you've got a headache.

--Scott Calvin
Sarah Lawrence College


On Nov 19, 2009, at 10:26 AM, Carlo Segre wrote:


Hi Matt:

I agree.  It is useful to have the reference channel from the first
of the merged data pulled over as reference for the merged data, but
this actually only makes sense if the user first aligned using the
reference.


carlo

On Thu, 19 Nov 2009, Matt Newville wrote:


Is there ever a case where a merged reference channel is useful?

I thought the only possible use for a reference channel was for
comparing individual scans.  That is, prior to merging.

--Matt

On Thu, Nov 19, 2009 at 6:45 AM, Zajac, Dariusz A.
 wrote:

Dear Bruce, Dear All,
maybe it is a naïve question, but I want to ask and to point out this
problem...

Windows XP: Athena 0.8.059
Sc. Linux: Athena 0.8.060

I have a set of data with references (one sample, many scans). If I
mark the sample's groups and do "merge marked data in mu(E)", then I
get the merged data together with the reference (2 groups: merge -
sample, and Ref merge - reference).

But...
...if I mark the reference groups and do "merge", then I get 2
groups: merge - which is the merged data of the reference, and Ref
merge - which is the merged data of the sample. Opposite to what I
did in the first example!

Is there some hidden idea I cannot see for why it should be that way?
If you don't know about it, it can confuse and surprise...

cheers
darek






--
Carlo U. Segre -- Professor of Physics
Associate Dean for Graduate Admissions, Graduate College
Illinois Institute of Technology
Voice: 312.567.3498    Fax: 312.567.3494
se...@iit.edu   http://www.iit.edu/~segre   
se...@debian.org




Re: [Ifeffit] Bug in Athena?

2009-11-19 Thread Scott Calvin

Hi Bruce,

On Nov 19, 2009, at 11:48 AM, Bruce Ravel wrote:

Why does Athena make a merge of references?  As Matt points out,
that is an odd thing to do.



I may be confused as to what we're talking about. Why is this an odd  
thing to do? It seems perfectly normal to me to want the reference  
scans merged as well as the sample scans, in order to get a clean  
measure of chemical shift.


--Scott Calvin
Sarah Lawrence College



Re: [Ifeffit] Bug in Athena?

2009-11-19 Thread Scott Calvin



On Nov 19, 2009, at 2:59 PM, Matt Newville wrote:


For this case, wouldn't it be better to measure the reference
separately to determine the chemical shift, and not rely on the
reference channel for this purpose?

How often is the reference channel both noisy AND improved by merging?
That would imply a transmission measurement
that was poor due to low flux.  But if this is because the sample is
thick as you suggest, the x-rays hitting the reference could be
dominated by harmonics, and the reference data may just be bad, not
noisy due to counting statistics.



It's a good point. But pick your poison. When I am trying to be  
careful about chemical shift, I don't trust that the mono won't just  
happen to skip a step between measuring the standard separately and  
measuring the sample. So I do both. I measure a standard in the sample  
channel, with a reference in the reference channel. I then leave the  
reference in the reference channel, and put my sample in. If the  
sample is a "reasonable" thickness for transmission, but a bit on the  
high side (say 2.3 absorption lengths), the photon count is down  
pretty far by the time it gets to the reference. The reference is also
often measured with the worst detector and amplifier that a beamline
has, as the good stuff is used for I0, It, and If. So the reference
channel may well have a considerable amount of random noise which can
be improved by
merging.


If that's the case, and if my sample appears to be suffering no beam
damage (scans, when aligned, lie on top of each other), then I align
using the sample data. I then merge the sample data and the reference
data. By comparing the sample to the reference and the previous scans  
where I measured the standard to the reference, I can see if there's  
been any energy shift between scans. As far as harmonics, this  
procedure should detect them. If the merged reference looks different  
from sample to sample (including the case where a standard was also in  
the sample channel), that suggests that there are issues with  
harmonics. If those issues move the first peak of the first  
derivative, I know they're going to affect my determination of  
chemical shift.  Also, if I get a nonzero chemical shift from this  
procedure for the standard, I know there's an issue. If not, they're  
not a problem.


The net result is that I have good confidence that I'm getting  
accurate chemical shifts, as loss of energy calibration, harmonics,  
and noise should all become evident by this procedure.


I'm not recommending this procedure over others; it's just what I do  
in some cases. But it doesn't seem like an unreasonable procedure to me.
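
For what it's worth, the alignment step itself is simple enough to
sketch in Python/numpy: brute-force the energy shift that best
overlays the derivatives of two scans on a common grid (a generic
sketch, not the actual Athena algorithm):

    import numpy as np

    def energy_shift(e, mu_ref, mu_scan, shifts):
        """Shift (eV) to add to mu_scan's energy axis to align it with mu_ref.

        Resamples the shifted scan onto the reference grid and keeps the
        shift that minimizes the misfit of the derivative spectra.
        """
        d_ref = np.gradient(mu_ref, e)
        best, best_cost = 0.0, np.inf
        for s in shifts:
            d_scan = np.gradient(np.interp(e, e + s, mu_scan), e)
            cost = np.sum((d_ref - d_scan) ** 2)
            if cost < best_cost:
                best, best_cost = s, cost
        return best

    # e.g.: energy_shift(e, ref_merge, ref_scan, np.linspace(-2, 2, 401))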


--Scott Calvin
Sarah Lawrence College



[Ifeffit] Lattice parameters: EXAFS vs. XRD

2009-12-25 Thread Scott Calvin

Merry Christmas, everyone!

Yes, I'm pondering EXAFS on Christmas...

Here's an issue that I bet has been worked out, and I bet someone on  
this list knows the result and where it's been published.


It's well known that the MSRD ("sigma squared") for EXAFS differs  
substantially from the "Debye-Waller factor" in XRD, because the first  
is the variance in the interatomic distance, and the second is the  
variance in the atomic position relative to a lattice point.


But what about the lattice parameter implied by the nearest-neighbor  
distance in EXAFS as compared to the lattice parameter found by XRD?


It is certainly true that in most materials, particularly highly  
symmetric materials, the nearest-neighbor pair distribution function  
is not Gaussian, and generally has a long tail on the high-r side.  
(This is largely because the hard-core repulsion keeps the atoms from  
getting much closer than their equilibrium positions.) So imagine a  
set of atoms undergoing thermal vibrations around a set of lattice  
points. For concreteness, let's consider an fcc material like copper  
metal. The lattice points themselves are further apart than they would  
be without vibration, sure, but that's not the question. The question  
is whether the square root of two multiplied by the average nearest- 
neighbor distance is still equal to the spacing between lattice points.


My hunch is that the answer is no, and that the EXAFS-implied value
will be slightly larger. While the average structure is still
close-packed, the local structure will not be. And in a local
structure that is not close-packed, the atoms will occasionally find
positions quite
far from each other, but will never be very close. In a limiting case  
where melting is approached, it's possible to imagine an atom  
migrating away from its lattice point altogether, leaving a distorted  
region around the defect. While XRD would suppress the defect, EXAFS  
would dutifully average in the slightly longer nearest-neighbor  
distances associated with it.


Just to be clear, I am not talking about limitations in some  
particular EXAFS model used in curve-fitting. For example,  
constraining the third cumulant to be zero is known to yield fits with  
nearest-neighbor parameters that are systematically reduced. In fact,  
limitations like that mean the question can't be answered just by  
looking at a set of experimental results: I can make my fitted lattice  
parameter for copper metal go up or down a little bit by changing  
details of a fitting model or tinkering with parameters that  
themselves have some uncertainty associated with them, like the  
photoelectron's mean free path. (Fortunately, this kind of tinkering  
will affect standards and samples in similar ways, and thus don't  
affect my confidence in EXAFS analysis as a tool for investigating  
quantitatively differences between samples, or between samples and a  
standard.) My question is about the ACTUAL pair distribution function  
in a real fcc metal. To the degree it's a question about analysis,  
it's about XRD:


"In an fcc metal should the expectation value of the nearest-neighbor  
separation, multiplied by the square root of two, equal the lattice  
spacing as determined by XRD?"


--Scott Calvin
Sarah Lawrence College


Re: [Ifeffit] Lattice parameters: EXAFS vs. XRD

2009-12-30 Thread Scott Calvin
Thanks, Matt--you gave a complete and satisfying discussion of this on
January 23 on this list. I forgot about it because it came at the
tail end of a long discussion as to whether C3 could ever be 0, but I
suspect what you said then was rattling around in the back of my head
and only settled in last week.
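
(For anyone finding this in the archives: the standard quantitative
statement of the effect Matt describes below is that, to lowest order,
the mean interatomic distance exceeds the distance between mean
positions by the relative vibration perpendicular to the bond,

    \langle r \rangle \approx r_{lattice}
        + \frac{\langle \Delta u_\perp^2 \rangle}{2 \, r_{lattice}}

where \Delta u_\perp is the pair's relative displacement perpendicular
to the bond direction.)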


--Scott Calvin
Sarah Lawrence College

On Dec 30, 2009, at 8:47 AM, Matt Newville wrote:


Hi Scott,

I believe we had a conversation about this last January.

XAFS is not sensitive to the crystallographic lattice constants.  It
measures the spacing between atoms.  Because of thermal vibrations and
other disorder terms, the average distance between atoms is larger
than the distance between the lattice points.

--Matt






Re: [Ifeffit] Fitting using Experimental standard

2010-01-04 Thread Scott Calvin

Hi Fiona,

An experimental standard is a spectrum of a known material (the term  
is also often used for the known material itself, in addition to its  
spectrum). If you use Athena to do a linear combination fit, you are  
most commonly using experimental standards to do it.


A theoretical standard is a theoretically simulated spectrum of some  
structure. Thus, the theoretical standard is not the crystallographic  
data itself, although that data can serve as a basis for generating a  
theoretical standard using software such as FEFF.


--Scott Calvin
Sarah Lawrence College

On Jan 4, 2010, at 7:16 PM, Fiona R. Kizewski wrote:


Dear all,
Can somebody please explain to me what a theoretical standard is and
what an experimental standard is? My understanding of a theoretical
standard is that it is the crystallographic data. However, this is the
first time I have heard of an experimental standard.
Thanks




Re: [Ifeffit] Bruce ayuda, problemas con athena

2010-02-13 Thread Scott Calvin

You'll probably get Russian before I get Latin.

--Scott Calvin
Sarah Lawrence College

On Feb 13, 2010, at 11:38 AM, Frenkel, Anatoly wrote:


Can't wait for a question in Russian.
Anatoly


From: ifeffit-boun...@millenia.cars.aps.anl.gov on behalf of Ravel,  
Bruce

Sent: Sat 2/13/2010 10:52 AM
To: XAFS Analysis using Ifeffit
Subject: Re: [Ifeffit] Bruce ayuda, problemas con athena

On Friday 12 February 2010, 07:10:17 pm, Jaziel Soto wrote:

> Me encuentro usando el athena 0.8.061 y el ifeffit 1.2.11c instalado en
> windows vista de 32 bits. Tengo puesta la compatibilidad de windows xp
> service pack 2, ejecutándolo con privilegios de administrador y después de
> haber trabajado algo, cuando quiero grabar los .prj no pasa nada. Entonces
> lo que trabajo no tiene sentido por que no lo puedo guardar.
>

Jaziel says he's using Athena 0.8.061 and Ifeffit 1.2.11c on 32-bit
Vista with service pack 2. While running with Admin privileges and
after working for a while, he cannot save .prj files.


Jaziel,

Primer cosa -- entiendo bien que prefieres escribir en tu idioma,  
pero te

ayuda mucho escribir en ingles.  Asi que puedes pedir ayuda de toda la
communidad.  Muchos de ellos saben mucho mas de Windows que yo.

Aqui no hay sufficiente informacion.  En esta situacion, es util  
abrir el
command window y empezar athena con cle.  Asi, si hay mensajes con  
informacion

sobre el problema, pouedes copiarlos en el email.

Hay la possibilidad que el problem es el mismo que esto:
  http://millenia.cars.aps.anl.gov/pipermail/ifeffit/2010-January/009243.html
Si intentes grabar el fichero an un sitio con letras con accente,  
intente un

sitio solo con letras sin accente.



And then I said: First thing, write in English.  Clearly it is easier
to write in your own language, but if you write in English anyone on
the list will be able to help you.  Many of the people here know more
about Windows than I do.

Your email doesn't have quite enough information.  Try opening the
command window and starting Athena using the keyboard.  If there are
useful error messages sent to the screen, you can copy them into your
email.

There is the chance that you are having the same problem as here:
   http://millenia.cars.aps.anl.gov/pipermail/ifeffit/2010-January/009243.html
The problem might be that you are trying to save the file to a folder
with accented characters.  Try saving to a folder without any accented
characters.


B





--
 Bruce Ravel   bra...@bnl.gov

 National Institute of Standards and Technology
 Synchrotron Methods Group at NSLS --- Beamlines U7A, X24A, X23A2
 Building 535A
 Upton NY, 11973

 My homepage:http://xafs.org/BruceRavel
 EXAFS software: http://cars9.uchicago.edu/~ravel/software/exafs/




Re: [Ifeffit] Bruce ayuda, problemas con athena

2010-02-13 Thread Scott Calvin

Primo Latinam linguam aptiorem dicere disce. Tum te laete respondebo.
[Roughly: "First learn to speak better Latin. Then I will happily
reply."]

--Scott

On Feb 13, 2010, at 7:05 PM, Matt Newville wrote:

On Sat, Feb 13, 2010 at 10:59 AM, Scott Calvin   
wrote:

You'll probably get Russian before I get Latin.


Quam utor postulo notitia quinymo quam Feff ut a vexillum in Artemis?
[Machine-Latin for, roughly: "How do I use experimental data rather
than Feff as a standard in Artemis?"]

But you're right:  Russian did come before Latin ;)

--Matt





Re: [Ifeffit] Anharmonic correction

2010-03-26 Thread Scott Calvin

Hi Aaron,

I find a nearest-neighbor third cumulant is frequently a useful  
parameter for nanoscale materials. It's not just the anharmonicity of  
individual bonds, it's also the anharmonicity of the distribution of  
environments. In other words, in nanoscale materials there are core  
atoms and surface atoms, atoms one monolayer below the surface, and so  
on. (Or, in your case, read "interface" for "surface.") The atoms on  
the surface may very well have interatomic distances a bit different  
from those further in, and that distribution is often not symmetric,  
for essentially the same reason that thermal vibrations are not  
symmetric: it's energetically more favorable to stretch a bond from  
equilibrium than to compress it by the same amount.


To use a third cumulant in Artemis, go to the Paths menu and check  
"extended path parameters." The path dialogs will then include a blank  
for "3rd," which is the third cumulant in the literature. It is then  
used like any other path parameter. (Or, of course, you can access it  
through IFEFFIT scripts, again using "3rd.")
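
For reference, in the cumulant expansion the third cumulant enters the
EXAFS phase (to leading order in k) as

    \Phi(k) = 2 k C_1 - \frac{4}{3} C_3 k^3 + \ldots

so a neglected C_3 shows up as a k-dependent phase error, which is why
forcing it to zero can bias the fitted distance.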


--Scott Calvin
Sarah Lawrence College

On Mar 26, 2010, at 1:13 PM, Aaron Slowey wrote:


Dear XAFS community:

I am fitting Hg L3-edge EXAFS of what I think are mercury sulfide  
nanoparticles.  I fit Fourier filtered 1st shell Hg-to-Sulfur pair  
correlations for 5 spectra and obtain interatomic distances (r) that  
are 0.2 angstroms shorter than a cubic HgS(s) (i.e., metacinnabar)  
and Hg-to-S coordination numbers (N) that range from 2.6 to 3.0  
(compared to N = 4 for metacinnabar).  Delta_E0 values are less than  
a few eV, so I think the r's are not 'incorrect' as far as these  
preliminary fits are based on harmonic atomic vibrations/Gaussian  
pair-distribution functions.


What intrigues me most about these data is that the fitted N's are  
consistent with the average 1st-shell Hg-S coordinations of 1 to 2  
nm HgS clusters obtained by isotropically truncating the  
metacinnabar crystal lattice.  In one case, I can also fit first- 
nearest Hg neighbors in the first shell, and this N is also  
consistent with a 1 nm HgS cluster.


My objective is to scrutinize the tentative conclusion that the  
mercury sulfides in the samples consist of 1 to 2 nm subunits  
(within a larger aggregate, as determined by DLS).  For instance,  
while the fitted N's are consistent with nanoclusters, the  
assumption of a metacinnabar lattice to estimate N of nanoclusters  
is undermined by the shorter interatomic distances fitted to the  
data.  This got me reading the work of Manceau and Combes from the  
late 1980s and Frenkel et al. (2001) J. Phys. Chem. B.  In Frenkel's  
paper (p. 12691), they describe that they used a "third cumulant" to  
account for anharmonic corrections, but I'm not sure how exactly  
this is implemented.


Is it something that you request in a feff.inp file, or is it a path
parameter for IFEFFIT to include in its calculations?


I am using Artemis on Mac OS X 10.6 (thanks to iXAFS 2.1.1 beta!!)  
to execute FEFF 7 calculations and fit my data.  I noticed  
parameters called "3rd" and "4th" in the path dialog; is "3rd" the  
same parameter as, for example, the sigma_sub_i_superscript_(3) term  
in eqn (2) of Frenkel et al. (2001)?


Thanks for reading this lengthy note...

Aaron




Re: [Ifeffit] Anharmonic correction (Aaron Slowey)

2010-03-26 Thread Scott Calvin

I concur with Grant.

But I would like to also make a complementary observation. For  
moderately disordered systems such as nanoscale samples, there's also  
no a priori reason to assume that the distribution is not skewed. In  
other words, the question of convergence isn't avoided by just  
ignoring higher cumulants!


In general, a reasonably good process is:

1) Try the fit with the third cumulant forced to 0 (i.e. not fit).

2) Try the fit with the third cumulant guessed.

3a) If the third cumulant refines to 0 to within the reported  
uncertainty and the other parameters don't move outside their original  
error bars, then the third cumulant is not needed; go back to fit 1.


3b) If the third cumulant refines to a value that is nonzero within
the reported uncertainty, then look at the value. If it violates the
limit Grant gives, then it's not appropriate, and you need to find
another way of dealing with the fit. (Sometimes you may have a
splitting between two different path lengths that you haven't modeled,
for instance.) If it is within Grant's limit (see the sketch after
this list), then proceed with the usual caution you accord to EXAFS
fits. For example, evaluate the physical sensibility of parameters,
the stability of the fit to small changes in the data ranges, etc.
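
A trivial Python version of the limit check mentioned in step 3b
(ss is sigma^2 in A^2, c3 in A^3; the example numbers are invented):

    def c3_plausible(c3, ss):
        """Grant's rule of thumb: |C3| should not much exceed 2*sigma^3."""
        return abs(c3) <= 2.0 * ss ** 1.5   # ss = sigma^2, so sigma^3 = ss**1.5

    print(c3_plausible(3.0e-4, 0.0060))   # 2*sigma^3 ~ 9.3e-4 here, so True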


--Scott Calvin
Sarah Lawrence College


On Mar 26, 2010, at 4:09 PM, grant bunker wrote:


Aaron -

There are a couple of things you should watch out for when fitting  
cumulants.


First, you should make sure in the fitting process that the third  
cumulant C3 doesn't get much more than twice C2^(3/2) (i.e. 2  
sigma^3) - values much larger than that are probably unphysical,  
even if they happen to give you a better fit.


Second,  the cumulant expansion loses its utility if it doesn't  
converge quickly enough.  It's essentially an expansion in terms of  
order k*sigma, and if that approaches 1 the higher order cumulants  
may be large enough that convergence is questionable.  If you are  
lucky and the effective distribution is Gaussian, or most of the  
variance is due to Gaussian broadening of a skewed distribution, it  
may converge OK, but that shouldn't be assumed a priori.


Grant Bunker

http://gbxafs.iit.edu


Re: [Ifeffit] Measuring particle size of metal oxide

2010-03-28 Thread Scott Calvin

Hi Bill,

Sorry for my slow response; I've been swamped. And ironically, the
fact that I'm very interested in your topic and wanted to give a good
reply kept me from getting to it!

I'm also going to repost your question and my response to the IFEFFIT  
mailing list, as I think it's of general interest.


I think you're missing a key idea concerning this procedure. Both my  
method and Anatoly Frenkel's method for finding particle size rely on  
comparing the effective coordination for different paths. I am  
skeptical of EXAFS determinations of particle size using only one  
path, as coordination number is hard to tease out from other effects  
that suppress amplitude (sample quality, disorder, vacancies...). But  
the relative coordination number of different shells is more reliable.


So you should be using more than one path, each with its own r (the  
absorber-scatterer distance), in order to refine a single value for R,  
the crystallite radius.


If you're using Artemis, this is simple enough to do, as each path has  
a reff value (which is r in the formula you give), and you can define  
a guessed parameter R. (To do this right, you need to make sure that  
the r for multiple-scattering paths is the distance from the absorber  
to the furthest scatterer, not the half-path length. That means  
putting the value in "by hand" for those paths, rather than using reff.)
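
As a sketch of that refinement in Python (the distances and ratios
below are invented placeholders for values you'd take from your own
paths and fits):

    import numpy as np
    from scipy.optimize import curve_fit

    def n_ratio(r, R):
        """Spherical-particle suppression factor N_nano/N_bulk, for r <= 2R."""
        x = r / R
        return 1 - 0.75 * x + 0.0625 * x ** 3

    r = np.array([2.49, 3.52, 4.31])       # absorber-scatterer distances (A), invented
    ratio = np.array([0.91, 0.87, 0.84])   # N_nano/N_bulk from the fits, invented
    R_fit, _ = curve_fit(n_ratio, r, ratio, p0=[20.0])
    print(R_fit[0])   # refined crystallite radius, in A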


(If you want a really quick and dirty method, collect a reference for  
a bulk standard and for your sample, and multiply the FT of the bulk  
spectrum by the formula, adjusting R until you get a good match. The  
quick and dirty method doesn't handle multiple-scattering correctly,  
and has the usual problem of the fact that even for direct-scattering  
paths the peaks of the FT are shifted from actual absorber-scatterer  
distances. But it is model-free, which can be nice in some cases.)


By the way, the formula you've cited is derived for spherical  
particles. If they're kinda sorta spherical, it will still give a  
decent approximation and fit. But if the particles are needle-shaped  
or flat plates, then it doesn't work well. You either have to derive  
another formula, or look at Anatoly's papers, which have addressed a  
number of common morphologies.


So far, everything I've said applies to metals, and you asked about  
oxides. While most of this also applies to oxides, it's important to  
realize that the nearest-neighbor oxide paths are not suppressed at  
all; i.e. the formula doesn't apply until the first metal-metal path  
(but does apply to metal-oxide paths further out). This is because the  
surface of such particles is generally comprised of oxygen atoms, and  
the first scattering shell is thus fully populated no matter how small  
the particles are.


--Scott Calvin
Sarah Lawrence College

On Mar 8, 2010, at 1:45 PM, bill.schwa...@yale.edu wrote:


Hi Scott,

We met a couple of times at the last two EXAFS workshops at
BNL, for relative beginners like me.


I am attempting to determine the particle size of PdO particles  
supported on Alumina (3% PdO/Al2O3).


I am wondering if I can collect EXAFS data and use the formula in  
your 2003 Journal of Applied Physics paper(*) to estimate PdO  
particle size:


N_nano = [1-3/4(r/R)+1/16(r/R)^3]N_bulk

Here is a little more background:

PdO particles on metal oxide supports are generally smaller and more  
dispersed in comparison to Pd metal on the same support.  Also, if  
PdO is reduced and then re-oxidized at varying temperatures, the re- 
oxidized PdO particle size varies with temperature, with higher  
oxidation temperatures resulting in smaller PdO particle size.


I have prepared a series of samples of 3% PdO/Al2O3, where the PdO  
has been reduced and then re-oxidized at various temperatures,
and I would like to use EXAFS to determine the coordination number  
of my various samples.


A major difference in my planned experiment compared to what you  
described in your paper is that you examined nickel metal, while I  
am looking at a metal oxide.


So my current questions are:

1)  Is the above formula reasonable for determining bulk particle  
radius (R) of metal oxides?


2)  If yes, then what is r, since the scattering distances between Pd -
Pd and Pd - O are different?  Is it reasonable to average the
distances?


3)  Can N(nano) be determined by averaging the coordination numbers  
for the Pd-Pd and Pd-O paths?


Any guidance you can provide will be greatly appreciated.

Sincerely,

Bill

(*)  Determination of crystallite size in a magnetic nanocomposite  
using extended x-ray absorption fine structure








___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] can sigma square ever be less than zero?

2010-06-28 Thread Scott Calvin

Hi Chris,

Not to be picky, but I think we have to consider the semantics of what  
you're asking very carefully.


"Can ss be negative," as in, can the physical quantity which is the  
variance in the absorber-scatterer distance be negative? No, since  
variances are the square of a real number.


"Can ss be negative," as in, can ifeffit output a negative best-fit  
value for ss? Yes, as you've seen.


"Can ss be negative," as in, can a fit with a negative best-fit value  
for ss be considered a valid fit? That's really what you're asking, I  
think, and the answer is that it could be, depending on what you are  
trying to claim. Since the uncertainty in your case is quite large,  
it's certainly possible that your fit is consistent with believable  
values. But it also means that your fit gives you very little idea of  
what ss should actually be. The twin facts that the uncertainty is  
large and that the best-fit value is very close to 0 make the fit less  
convincing.


The simplest explanation in your case is that Ifeffit is finding a fit  
with an S02 and a ss that are both a bit low. Since they tend to  
correlate highly, that's not uncommon. Have you tried fitting using  
different k-weights, or, better yet, several k-weights simultaneously?


At any rate, I'd say your fit is a promising preliminary fit. As far  
as a publication-quality fit, it would be nice to get the nearest- 
neighbor ss pinned down a bit better.


--Scott Calvin
Sarah Lawrence College

On Jun 28, 2010, at 4:10 PM, Chris Patridge wrote:


Hello all,

I am working on W L3 edge data.  W is acting as a substitution  
dopant in vanadium dioxide at rather low concentration.  In a past  
mailing conversation discussing Feff6 overestimation of E0 for  
heavier elements it was mentioned that the E0 could be past the  
rising edge due to the white line in W data.  Well, using this
comment I aligned the data using the theory method well explained in
Shelly Kelly's SnO2 example.  Literature suggests W locally
approximates the WO2 cubic structure instead of the VO2 unit
structure.  Then, fitting the first oxygen coordination shell paths,
which are well isolated from the other paths, gives reasonable values
for the amplitude and enot of 0.77 (0.17) and -1.14 (3.06),
respectively.
delr is -0.067 (0.028) and then ss comes out to -0.00036 (0.00414).   
Can ss be negative if the uncertainty brings it above 0?


Thank you all,

Chris Patridge
PhD Candidate
Department of Chemistry NSC 405
SUNY Buffalo
315-529-0501
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit





Re: [Ifeffit] Vanadium center fitting issues

2010-07-10 Thread Scott Calvin

Hi Chris,

Yes, you do need to account for the inequivalent vanadiums. Each feff  
calculation needs to be weighted by the fraction of vanadiums of that  
type (if you're looking at a tabulation of the asymmetric unit in a  
crystal structure, the number in the position codes like "8f" next to  
the atoms tell you the relative numbers of each). Use the S02 field to  
apply this weighting.
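A minimal sketch of that bookkeeping, assuming (purely for
illustration) a 2:1 ratio between the two sites: the number entered in
each path's S02 field is the shared amplitude times the site fraction.

  # Hypothetical numbers; 'amp' would be a guessed parameter shared by
  # all paths, and the fractions come from the crystallography.
  amp = 0.85
  frac = {"site1": 2.0 / 3.0, "site2": 1.0 / 3.0}
  s02_field = {site: amp * f for site, f in frac.items()}
  print(s02_field)   # the value to use for each calculation's paths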


In one sense, that doubles the number of parameters. But the V-V paths  
are present in both calculations, so that cuts it down a little. From  
there, you have the usual options of trying to apply constraints that  
aren't physically unreasonable, and seeing how the fit responds to them.


--Scott Calvin
Sarah Lawrence College

On Jul 10, 2010, at 11:03 AM, Christopher Allen wrote:



Hi,

I wanted to get some advice on a vanadium centered material I’ve  
been trying to fit for quite a while w/ repeated failure. Based  on  
XRD and electrochemistry I’m pretty confident in the material/model,  
and I have the input file for the material.   The first shell out to  
2.2 angstroms in R space is a distorted VO6 w/ bond lengths ranging  
from 1.6 to 2.2 angstroms, similar to the V2O5 which has been  
discussed on the mailing list recently (July 7th).  The last two  
peaks from 2.2 to 3.3 angstroms presumably include 4 V-P and 1 more  
V-O single scattering paths. (the best fit I could get on the 4V-P/ 
1V-O used only the first of these two peaks.)


With regards to the first shell fitting, is it appropriate to be  
grouping the V-O bonds in terms of 1 short, 4 medium, and 1 long  
bond distance to cut down on variables?  I noticed in the V2O5  
previously discussed, that he used one ss and delr term for all 6  V- 
O paths, but I guess that just means making some assumptions that  
any variations in ss and delr are isotropic throughout.  Is that  
correct?


I’ve had real issues w/ those peaks representing V-P/V-O from 2-3.3
angstroms, so I wonder if it’s unreasonable to try and extract this
information based on the k space, or if possibly that’s not what I
have there.  Could it be that I need to account for the two
inequivalent V centers?  (In that case, wouldn’t the number of  
variables be doubled?)


Thanks for any comments,

Chris


--
Chris Allen
Northeastern University Center
for Renewable Energy Technology
317 Egan Research Center
360 Huntington Ave.
Boston, MA 02115
617-373-5630

<livopo4.prj>

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit




Re: [Ifeffit] Case of "nonequivalent multiple atomic sites of absorbing atoms"

2010-07-28 Thread Scott Calvin

Hi Rana,

Currently, S02 is usually described as being due to the relaxation of  
the other electrons in an atom when a core electron is removed,  
resulting in incomplete overlap of initial and final states. This  
appears to be a fairly good description, as careful experiments show  
good agreement with theoretical calculations based on this idea.


Note, however, that there could be some other contributors to S02. A  
photon could, in addition to exciting the core electron at the edge,  
also excite a valence electron.


There's a small thread on the transferability of S02 here:

http://www.mail-archive.com/ifeffit@millenia.cars.aps.anl.gov/msg01626.html

E0 is a tricky concept, in my opinion: it is the energy origin in the  
EXAFS equation. Perhaps a theorist can give me a pithy physical  
interpretation of what happens at that energy, but I don't know that
there needs to be anything; at k near 0, the path expansion is not
convergent, so I'm not sure we should expect anything special to  
happen exactly at 0. In other words, it's not exactly the Fermi level  
or any other special energy.


E0 is dependent on oxidation state; it can shift by an electron volt  
or two when oxidation states vary.


Note that oxidation state is a simplistic measure of what's happening  
with the electron distribution in a material. Suppose fluorine is  
substituted for iodine in some material. Formally, the oxidation state  
of the atom they are bonded to is not changed by the substitution. But  
in reality, the electron distribution is different, and a small E0  
shift would not be surprising.


I think the bottom line, then, is this:

S02 is completely transferable for the same element at multiple  
absorbing sites.


Delta E0 is transferable with some caution for the same element at  
multiple absorbing sites if the oxidation state is the same.


One other note: there's no rule that when trying constraints, you have  
to start unconstrained and add constraints to see the effect on the  
fit. With complicated systems like yours, it often pays to start with  
unrealistically simple constraints (not only E0's and S02's the same,
but also sigma2's), and see if you're on the right track. Then look at  
the effect of relaxing constraints.


--Scott Calvin
Sarah Lawrence College



On Jul 28, 2010, at 4:00 AM, Jatinkumar Rana wrote:


Dear Users,

For a long time, I have been trying to understand the physical meaning
of the terms "Delta E0" and "S02" in the EXAFS equation. I have a
little bit of an idea about both of them. For example, S02 is element
specific and it is transferable between samples (if we consider the
same absorbing atom).


However, I am not able to grasp their importance in terms of their
"physical meaning" as far as the interaction of the photoelectron is
concerned. Therefore, it is difficult for me to understand their
influence on EXAFS.


I am dealing with a case of "nonequivalent multiple atomic sites of
absorbing atoms". It is quite obvious that in such a case the number
of variables exceeds the number of independent points, and there is a
need to constrain the parameters to solve such problems.


I have following questions :

How do I understand "Delta E0" and "S02" theoretically (in terms of
photoelectron interaction)?


Can I constrain "Delta E0" to be the same for all absorbing atomic
sites? (My assumption: all absorbing atoms are at the same oxidation
level.)


your comments and suggestions would be highly appreciated...

Best regards,
Rana
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit





Re: [Ifeffit] Large Amplitude Values

2010-07-31 Thread Scott Calvin

Hi Gavin,

What are the uncertainties on the high S02 values?

Fluorescence is unlikely to be the culprit. While it can affect your  
ability to normalize properly, you're unlikely to account for a factor  
of 2 by normalization if the data is relatively decent. And self- 
absorption tends to suppress S02, not exaggerate it.


Why did you switch to fluorescence on just that handful of data sets?
That might provide us a clue.


--Scott Calvin
Sarah Lawrence College

On Jul 30, 2010, at 10:47 PM, Gavin Garside wrote:


Fellow X-Ray Absorption Enthusiasts,

I have recently compiled a model that gives excellent visual fits in  
R, q, and k space for bond spacing in a BCC structure.  This model  
gives bond spacings that make sense, and are very close to what  
would be expected from this set.  The R factors are very low, and  
the enot values correspond quite well to the edge.  However, our  
amplitude values are much larger than typically expected.  They come  
in at the range of 1.8 up to 5.0, but only on a few data sets.  On  
all the rest the amplitude values are 0.4 to 1.0.  Could this  
increase in amplitude be attributed to the fact that we ran  
florescence measurements instead of transmission, and have a weaker  
signal coming to the detector?  What else could be causing this in  
only one data set? All samples used in this model have the same  
structure.  Thanks in advance to any replies, your help and time is  
appreciated.




___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Large Amplitude Values

2010-07-31 Thread Scott Calvin

Hi Gavin,

The problem isn't the value of S02; it's the uncertainty. I've noticed  
that Ifeffit has a tendency to push up the best fit value of S02 when  
it's very uncertain, although I'd have to think more about how it  
determines error bars to confirm that. But it does make sense, in a  
"Price is Right" kind of way--negative S02's presumably fit horribly,  
because they turn chi(k) upside down, and so that biases uncertain  
S02's to move the whole range up.


In any case, your focus should be on reducing that uncertainty.

1) One way to do that is to fit with multiple k-weights, assuming
you're not doing that already. Check the kw 1, kw 2, and kw 3 boxes
all together and run a fit. The reason this works is that the EXAFS
equation shows no k-dependence for S02, but a k^2 dependence for
sigma^2, which often shows a high correlation with it in fits. Fitting
multiple k-weights sometimes helps break that correlation (see the
sketch after this list).


2) Along the same lines, if you can squeeze out any additional k-range,
that may help.


3) If you're fitting coordination numbers, then adding additional  
scattering shells with some physically defensible scheme for  
constraining coordination numbers to a small number of parameters can  
help a lot.


4) Another good technique is to fit multiple samples simultaneously,  
constraining S02 to be the same for all of them. Or fit the sample and  
a standard measured in a similar way simultaneously, again  
constraining S02 to be the same.


5) Along the same lines, you could fit a standard measured in a  
similar way to determine S02, and then constrain the fit of your  
sample to take on that value.


4 and 5 are similar, so you may wonder if I have a preference. I'd say  
that if the samples, beam, detectors, data, and data reduction are all  
well behaved, then #5 is probably best, and has the benefit of being a  
technique with a long pedigree. If you're a little suspicious of  
something in the chain, though (for example, it's difficult to tell if  
you've been consistent in normalizing your standard and sample,  
because one has a big white line and the other doesn't), then #4 has  
the benefit that it distributes the error in the parameters you are  
fitting between sample and standard. This is good both because your  
sample has less error than otherwise, and because the values for the  
standard act as a "canary in a coal mine," warning you by their  
deviation from known values as to the magnitude of the errors you're  
looking at.
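To illustrate point 1 above numerically, here is a small Python sketch
(all parameter values invented): two (S02, sigma^2) pairs tuned to give
nearly the same amplitude envelope S02*exp(-2 k^2 sigma^2) near k = 8
differ visibly at low and high k, which is exactly what fitting several
k-weights at once exploits.

  import numpy as np

  k = np.array([3.0, 8.0, 14.0])          # sample k values (1/A)
  env = lambda s02, ss: s02 * np.exp(-2.0 * k**2 * ss)
  a = env(0.90, 0.0030)
  b = env(1.10, 0.0046)                   # tuned to match 'a' near k = 8
  print(a / b)                            # ~0.84, ~1.00, ~1.53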


--Scott Calvin
Sarah Lawrence College

On Jul 31, 2010, at 11:56 AM, Gavin Garside wrote:


Scott,

Thank you for a quick response.  The value I am getting for S02 in
the fit of most interest is 2.95 plus/minus 3.72.  So with the error
bar I am in range, but I was just suspicious of it before I make any
claims about it.  All my experiments were done in fluorescence
because we have ordered bulk material.  By creating a sample that
would work in fluorescence I may have introduced dislocations or
imperfections that would have affected the physical properties of
interest in this sample.


Gavin Garside
University of Utah

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit

Re: [Ifeffit] Stoichiometry from EXAFS data

2010-08-02 Thread Scott Calvin

Hi Peter,

I've done this as well, and compared to reliable methods (e.g. ICP).  
I'd be skeptical of 1%. It's generally quite difficult to determine  
edge steps to that accuracy. Assuming you're using Athena to determine  
the edge step, find the most extreme pre- and post-edge lines that  
seem acceptable and note the range of edge steps. That will yield an  
uncertainty range.


If you have strong features at the white line and just past it, I'd be  
surprised if you can do much better than 10%. If features in that  
region are small, such as you might have in an intermetallic alloy,  
then you might get down to the sub-5% range.


While I think that determining the edge step is likely the major  
source of error, you also have to be aware of the usual suspects in  
XANES analysis, such as the presence of harmonics in transmission or  
self-absorption in fluorescence. Testing for linearity with tricks  
like putting sheets of aluminum foil before I0 can help detect some  
(but not all) of those kinds of issues.
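For what it's worth, the arithmetic being discussed is just a ratio of
normalized edge steps; a Python sketch with placeholder numbers (not
real tabulated values):

  # Each measured edge step is proportional to the molar amount of the
  # element times the tabulated jump in its absorption at that edge.
  step_sb, step_te = 0.62, 0.30   # measured K-edge steps, same sample
  jump_sb, jump_te = 7.5, 7.9     # per-mole edge jumps (placeholders)
  print((step_sb / jump_sb) / (step_te / jump_te))   # Sb:Te ~ 2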


--Scott Calvin
Sarah Lawrence College

On Aug 2, 2010, at 4:03 AM, Peter Zalden wrote:


Dear Feff users,

lately, we measured a sample containing Sb and Te at EXAFS beamline
CEMO, Hasylab, and are wondering whether one can determine the
stoichiometry from the height of the different K edges' steps, if
one normalizes the values to the edge steps of the elements (cf.
http://physics.nist.gov/PhysRefData/XrayMassCoef/tab3.html).
The absorption gases and the specimen were not changed for the
different K edges.
Of course, I have already tried doing so, and from the statistical
reproducibility and from the resulting values compared to the
expected ones I would estimate an error of this method of about 1%.
A source of error that I could imagine originates from the different
beam position at different energies combined with a slight
inhomogeneity in the pressed sample powder. Now my question is: Are
there any other sources of error that I should take into account? Is
there any reference on this method from a more experienced user that
I could cite?


Kind regards,
Peter


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Stoichiometry from EXAFS data

2010-08-02 Thread Scott Calvin

Hi again,

Your tellurium edge in particular has the kind of nice regular  
structure around the edge that makes edge step determination  
relatively accurate. Also, you're working at such high energies that  
harmonics are unlikely to be much of an issue.


I'd probably consider your data as good to +/- 3% for stoichiometry.  
Calling that "sigma" or "two sigma" kind of implies that the error in  
treating many similar systems in this way would be normally  
distributed, and that's very unlikely to be true. Unlike with  
population statistics or counting statistics, you wouldn't  
occasionally end up way off "by chance." It's almost more like a  
report of precision: when carefully measuring a length with a ruler  
marked in mm, it's reasonable to interpolate between the mm marks and  
report a measurement as good to, perhaps +/- 0.3 mm. If, by eye, I  
claim 11.3 mm, it might conceivably be 11.0 or 11.6 mm, in part  
because of the ability to eyeball it, and in part because of problems  
with lining up marks using rulers. But unlike with Gaussian  
statistics, where two- or three-sigma events happen now and then, I'd  
often be off by 0.2 mm yet never off by 0.6 mm.


--Scott Calvin
Sarah Lawrence College

On Aug 2, 2010, at 8:51 AM, Peter Zalden wrote:


Hi Scott,

thanks a lot for your quick response! I found your suggestion very  
helpful and tried to change the edge step to both extremes by tuning  
the fitting range for the pre- and post-edge lines in Athena. Due to  
the very flat structure in the XANES range (cf. attachment), I
could modify the value for the edge step by 3% total, which
corresponds to an error of +/-2%. One could possibly discuss whether this
value represents the one-sigma or maybe the two-sigma interval, but  
the error is nicely small anyway.
In the last campaign, we measured a sample of Sb_2Te_1 in two  
different annealing conditions and from those different data sets  
(as concerns the EXAFS range), I determined the stoichiometries:  
Sb_2.06Te_0.94 and exactly the same for the second sample.  
Therefore, a sub-5% error seems reasonable to assume for these semi- 
metallic systems.
Concerning the influence of higher harmonics: The beam was usually  
detuned to 70% intensity of the main reflection so that this should  
not have a strong influence, since the amount of detuning was not  
changed between the two edges.


Best regards,
Peter

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] two measurements of the same compound in different beam-lines

2010-08-13 Thread Scott Calvin

Hi Maria,

You have several choices. First, note that the merge function in  
Athena allows you to select options of weight by chi-noise, or weight  
by importance. If you weight by chi-noise, noisier data will be  
counted less. If you have some other way of estimating the quality of  
the data sets, you can enter different numbers in the "importance"  
field and then weight by importance.
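What a weighted merge amounts to, as a bare-bones Python sketch with
placeholder arrays; with chi-noise weighting the weights would be
derived from each scan's estimated noise (e.g. w ~ 1/epsilon^2, an
assumption here rather than Athena's exact recipe):

  import numpy as np

  chi1 = np.sin(np.linspace(0.0, 30.0, 300))   # stand-in chi(k) arrays
  chi2 = chi1 + 0.1 * np.random.default_rng(0).normal(size=300)
  w1, w2 = 1.0, 0.5                            # importance weights
  merged = (w1 * chi1 + w2 * chi2) / (w1 + w2) # weighted average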


If working on two different beamlines, you might consider merging  
chi(k) data, rather than norm(E), as the backgrounds may be different.  
And mu(E) probably makes no sense at all.


Another option is to obtain separate chi(k)'s, but then fit both  
spectra simultaneously in Artemis. The Artemis/Ifeffit default  
behavior in that case will be to use high-R noise for weighting, but  
you can override that by assigning an epsilon to each data set if you  
choose. This method has several advantages: it lets you see if one  
data set is fitting differently from the other; it lets you choose  
different k-ranges if noise begins dominating one data set at a lower  
value of k; it lets you use different values of S02 if there are  
pinhole, harmonic, or self-absorption effects; and it lets you use  
different values of delE0 if the data sets are hard to align properly.


--Scott Calvin
Sarah Lawrence College


On Aug 12, 2010, at 8:44 PM, María Elena Montero Cabrera wrote:


Hi all!
I have performed two independent XAFS measurements of Cr K-edge of  
the same Fe-Cr sample at two different beam-lines at SSRL. I have  
obtained the Fe-K edge data only once. The quality of the data is
different in each measurement. However, I cannot average spectra
from the different Cr-K measurements, and I don't know if I could
somehow take advantage of having almost twice the information for the
Cr-K edge, or whether I have to use only the better quality data and
discard the other. What do you advise? If I can use both
measurements, how can I do the fitting in Artemis?

Thank you very much and all take care

--
María Elena

Dra. María Elena Montero Cabrera
Departamento de Medio Ambiente y Energía
Centro de Investigación en Materiales Avanzados (CIMAV)
Miguel de Cervantes 120, Compl. Ind. Chihuahua
Chihuahua CP 31109, Chih. México
Tel (614) 4391123
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit




[Ifeffit] Haha

2010-08-15 Thread Scott Calvin

Hi all,

For a little comic relief, I just came across this graph:

http://www.usablemarkets.com/wp-content/uploads/2010/06/fed-rate-3.jpg

I have never seen a graph unrelated to XAFS that looks more like
(noisy) XAFS data...


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Ifeffit Digest, Vol 90, Issue 21

2010-08-30 Thread Scott Calvin

Hi Dominik,

I haven't had time to look at your question in detail, but try the  
link below, and see if you find it helpful:


http://cars9.uchicago.edu/ifeffit/Doped

--Scott Calvin
Sarah Lawrence College

On Aug 30, 2010, at 7:43 AM, Jatinkumar Rana wrote:


Dear Dominik,

Thank you so much for your reply. However, I am not able to
understand the logic behind the removal of the following atoms from
the feff.inp file, as you describe:


**In principle, after running ATOMS, you just need to remove atoms from
the generated feff.inp file so that the occupancy is correct. I.e., in
this case, you need to remove roughly 1/4 of the Fe7, 1/2 of the Cr1,
1/3 of the P61, 2/3 of the P62, and 1/3 of the O614 atoms. Try finding
those which are too close to other atoms. The FEFF output might help, so
try deleting preferentially those causing FEFF to fail.**

Can you please explain the idea behind doing that, and is it
physically defensible to make such changes?


Looking forward to your reply..

Best regards,
Jatin




On 27.08.2010 14:00, ifeffit-requ...@millenia.cars.aps.anl.gov wrote:


Today's Topics:

   1. Re: Help with cif file (Dominik Samuelis)
   2. catalysis workshop at the karlsruher synchrotron ANKA
  (Matthias Bauer)


--

Message: 1
Date: Fri, 27 Aug 2010 10:09:05 +0200
From: Dominik Samuelis
To: XAFS Analysis using Ifeffit
Subject: Re: [Ifeffit] Help with cif file

Dear Jatin,

in your original cif file, you have site occupancies as low as 0.5
(Cr1 site) and 0.358 (P62). Just setting them to 1 will not help, because
ATOMS assumes them to be unity anyway.

For such a complicated unit cell, the typical recipe of using
prototypical structures of course does not help, because there is
no simple prototype structure for arrojadite.

In principle, after running ATOMS, you just need to remove atoms from
the generated feff.inp file so that the occupancy is correct. I.e., in
this case, you need to remove roughly 1/4 of the Fe7, 1/2 of the Cr1,
1/3 of the P61, 2/3 of the P62, and 1/3 of the O614 atoms. Try finding
those which are too close to other atoms. The FEFF output might help,
so try deleting preferentially those causing FEFF to fail.

Another solution might be loading the structure's cif file into a
structure editor such as DIAMOND. There, you can then check the bond
distance histograms and delete atoms accordingly. At the end, just
export the data in xyz format and use this as the atom positions
list in the feff.inp file.

Regards,
Dominik



On 20.08.2010 09:30, Jatinkumar Rana wrote:


Dear Dominic, Dear Bruce,

I am also facing the same problem as experienced by Kleper. I am
working on the EXAFS analysis of the arrojadite mineral. We have
refined the structure using neutron diffraction to get the
crystallographic information which can be fed to ATOMS. The original
.cif file contains fractional occupancies, so ATOMS gives a similar
error report to the one mentioned by Kleper. After reading your post,
I changed all site occupancies to 1 and then ran ATOMS, but it still
gives me the error report.

Can anybody tell me why ATOMS reports an error?

I have attached both the original .cif file (with fractional
occupancies) and the modified .cif file (all site occupancies = 1).
I have EXAFS spectra at the Fe and Mn K-edges.

Looking forward to your answer.

Thanks a lot in advance...

Best regards,
Jatin






[Ifeffit] More than 256 paths on Mac OS 10.5?

2010-10-06 Thread Scott Calvin

Hi all,

Do any of you have a version of Ifeffit compiled for Mac OS 10.5 that  
allows more than 256 paths?


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] sigma^2 values for multiple scattering paths

2010-10-06 Thread Scott Calvin
Although I agree with the main points that Bruce makes, I do want to  
comment on one piece:


On Oct 6, 2010, at 7:03 AM, Bruce Ravel wrote:


In no case can I understand a physical explanation for the MS
sigma^2 being smaller than for the SS.


Actually, there is a physical situation where something like that can  
occur, although it sounds like it's not the one that Han Sen has.


Consider an absorbing atom rattling around in a relatively fixed cage  
or lattice. And then consider a linear (or near-linear) arrangement:


S1 -- A -- S2

One multiple scattering path that can sometimes have a sizable  
contribution is A --> S1 --> S2 --> A. This path will have a sigma^2  
that is a bit larger than the single-scattering path S1 --> S2 --> S1,  
because of the perpendicular component of the motion of A.


But it's quite frequently the case that S1 --> S2 --> S1 is not  
modeled in a fit, because the S edge is not measured.


On the other hand, the single scattering paths A --> S1 --> A and A -- 
> S2 --> A ARE included in the fit. Those two have high sigma^2's,  
because A is rattling around a lot.


Under that circumstance, a multiple-scattering path included in the  
fit may indeed have a lower sigma^2 than the single-scattering paths  
included in the fit.


The moral, of course, is that it's not hard to think physically about  
what sigma^2 means for a multiple scattering path. If one appears to  
have an "unphysically" small sigma2, then the explanation is probably  
one of the ones given by Bruce or Shelly.


One more thought on this. How much does it change your fit, Han Sen,
if you set the sigma^2 for the multiple-scattering path to some
"reasonable" value? If the scientific information you want from your
fit is not sensitive to exactly what sigma^2 the MS path gets, and is  
not significantly different when given a "reasonable" value than when  
allowed to find its "best-fit" value, then there's probably no need to  
resolve the issue. In my experience, this is often the case with low- 
amplitude MS paths: the fit is improved by their inclusion, but may  
not be particularly sensitive to the details of their path parameters.


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] More than 256 paths on Mac OS 10.5?

2010-10-06 Thread Scott Calvin

Thanks, Matt!

--Scott Calvin
Sarah Lawrence College

On Oct 6, 2010, at 9:08 AM, Matt Newville wrote:


Hi Scott,


The attached zip file has dynamic libraries (and static program
ifeffit) built with 1024 paths and feff files.  It contains the files

 lib/libifeffit.dylib
 lib/libifeffit.so
 bin/ifeffit

The zip file should be unzipped under
/Applications/iXAFS.app/Contents/Resources/local/
to overwrite the above files.  You should be able to open the iXAFS
Shell and type

  cd  /Applications/iXAFS.app/Contents/Resources/local/
  unzip ~/Downloads/iXAFS_1024paths.zip
  athena


Athena and Artemis will automatically use the new dynamic library.



___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] sigma^2 values for multiple scattering paths (Scott Calvin); Re: Ifeffit Digest, Vol 92, Issue 4

2010-10-06 Thread Scott Calvin


On Oct 6, 2010, at 10:41 AM, Han Sen Soo wrote:


Hello Scott,
Just to make sure I understand what you mean, are you saying that in  
your 3 atom system, the S1 and S2 atoms have relatively fixed  
locations but A may have large vibrational amplitudes in the A-S1  
and A-S2 directions? So the round-trip 3 atom MS path has a small  
sigma^2 value since the variation in the A-S1-S2-A path is dictated  
by the more or less fixed S1 and S2 end-points (with minimal  
perpendicular contribution), whereas the 2 individual SS paths have  
large sigma^2 value due to the large A-S vibrations?




Yes--you explained it far better than I did. :)

--Scott Calvin
Sarah Lawrence College

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] schemes for delr and sigma2 for multiple scattering paths

2010-10-07 Thread Scott Calvin

Jatin,

IF the uniform expansion model is valid for single scattering paths,  
then it is for multiple scattering paths as well. For some materials,  
particularly those with cubic space groups, that's got a good chance  
of being a useful model. Other materials tend to distort with changes
in temperature, doping, etc., and it may not work as well. But even in
those cases, if you've decided how to constrain the delr for single- 
scattering paths, you'll do reasonably well by using some kind of  
appropriate average of the delr's for related single-scattering paths.
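The bookkeeping is simple either way; a Python sketch with invented
path lengths, where one fitted fraction alpha sets delr = alpha*reff
for every path, single or multiple scattering:

  # In Artemis this amounts to setting each path's delr expression to
  # alpha*reff, with alpha a single guessed parameter.
  alpha = 0.004
  reff = {"SS 1st shell": 2.05, "SS 2nd shell": 3.10, "MS triangle": 3.60}
  delr = {name: alpha * r for name, r in reff.items()}
  print(delr)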


--Scott Calvin
Sarah Lawrence College

On Oct 7, 2010, at 3:05 AM, Jatinkumar Rana wrote:


Dear all,

It is reasonable to assign a constant fraction by which the unit cell
expands at a given temperature of XAFS measurement, and so the
variation in the path length for every single scattering path could be
assigned as delr = alpha * Reff. Similarly, one can assign a sigma2
value for each single scattering path depending on both the type of
scatterer and its distance from the absorbing atom.

Now coming to multiple scattering paths: sigma2 for multiple
scattering paths can be constrained based on the sigma2 of related
single scattering paths, and a definite path-geometry-dependent scheme
(triangle, collinear, reversed, etc.) could be applied.

Is there any such scheme for the delr of multiple scattering paths? Or we
can simply assume that all paths (single scattering and multiple
scattering) undergo uniform expansion by a factor alpha.

Thank you so much in advance for your valuable time...

With best regards,
Jatin Rana
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit




Re: [Ifeffit] sigma^2 values for multiple scattering paths (Scott Calvin/Shelly Kelly/Abhijeet Gaur); Re: Ifeffit Digest, Vol 92, Issue 5

2010-10-07 Thread Scott Calvin
Not at all unusual, Han Sen. If you think about the EXAFS equation,  
you'll see that sigma^2 and amplitude primarily affect the amplitude  
of the signal, while distances affect the position of the peak in the  
Fourier transform (or equivalently, the spacing of peaks in chi(k)).  
So sigma^2 and amplitude can trade off without affecting distance- 
based aspects of the fit much.


That's why I suggested you try forcing the sigma^2 to a "reasonable"  
value to see what happened to your fit. Sometimes none of the aspects  
of the fit you're interested in depend strongly on the sigma^2 of low- 
amplitude paths--particularly if what you're interested in is distances
or information that is in part derived from distances, like phase  
identification. In those cases, the anomalous sigma^2 can be a "yellow  
flag" (think about what might be causing it and decide if it's a  
problem to your scientific case) rather than a "red flag" (drop  
everything and resolve the problem before proceeding).


Also, note from the EXAFS equation that sigma^2 is weighted by k^2,  
and amplitude is not. If fits using different k-weights result in  
significantly different values of sigma^2, that can be a clue that the  
issue is actually one of amplitude, as in your case.


At any rate, I'm glad you solved your issue in such a satisfying way!

--Scott Calvin
Sarah Lawrence College

On Oct 7, 2010, at 11:22 AM, Han Sen Soo wrote:


Hello Shelly and Scott,
Thank you both again for your suggestions. It seems that after  
making the MS path more linear in my cif file, the FEFF calculation  
increased the amplitude value  of the path and dramatically  
increased the sigma^2 value in the fit. Strangely, the fit values  
for the distances remain pretty much the same and the statistical  
figures of merit have improved, but the sigma^2 values are now much  
more reasonable (about twice as large, but I have a more triangular  
than linear model, so you're right Scott, your explanation does not  
work for my case). I guess the increased amplitude made a difference?


Hello Abhijeet, I used a rudimentary geometrical way to get my bond  
angles. For a 3 atom triangle M-O-A, the full MS path length
(R_MOA) is the sum of the three legs, i.e. twice the reff reported
for the path. So if you
have the R_MOA, R_MO, and R_MA distances from your fits, you can use  
R_MOA - R_MO - R_MA to get the O-A bond length. And with the 3 sides  
of the triangle, you can use the geometrical Cosine Rule to get any  
of the 3 bond angles. This is just geometry so I don't know what the  
error propagation for this would be.
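A short Python version of that geometry, with invented distances; here
r_moa is the full M->O->A->M path length, i.e. twice the reff reported
for the MS path:

  import numpy as np

  r_mo, r_ma, r_moa = 2.00, 3.20, 6.70    # angstroms, made up
  r_oa = r_moa - r_mo - r_ma              # third side of the triangle
  # law of cosines for the M-O-A angle (vertex at O)
  cos_moa = (r_mo**2 + r_oa**2 - r_ma**2) / (2.0 * r_mo * r_oa)
  print(np.degrees(np.arccos(cos_moa)))   # ~132 degrees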


Thanks again everyone!
han sen


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] calibration/alignment

2010-10-14 Thread Scott Calvin

Hi Ornel,

Alignment is used to compensate for monochromators that do not  
maintain stable energy calibration between scans. In conventional  
measurements, you'll generally have several scans that are supposed to  
be of the same sample under identical conditions, and those scans may  
need to be aligned with each other. That is not the case for a time  
series, which is what I think you are saying you have.


So in a time series, how do you compensate for any energy drift of the  
monochromator? If you are recording a simultaneous reference spectrum,  
you can align the reference spectra to each other. (Athena  
automatically will shift the sample spectra by the same amount that  
the reference spectra are shifted.)


If you have a time series but don't have a simultaneous reference  
spectrum, it becomes tougher. If you collected a reference spectrum
before and after the time series, you could try to interpolate any  
shift that's seen, although that's dicey; shifts sometimes occur in  
jumps. But if there's no shift, you're probably OK!


If you have a time series and no reference at all, or a reference only  
before the series, you're out of luck. You're relying then on the  
assumption of energy stability, which on some beamlines might be  
OK...but it is best to confirm that by at least measuring a reference  
before and after.


--Scott Calvin
Sarah Lawrence College

On Oct 14, 2010, at 2:46 AM, ornella smila castro wrote:


Hi everyone,

I am trying to do some data processing with Athena, but I am already
stuck at the first step. The thing is: I read the worked example
section of the "Athena's user guide", and in the example on the iron
foil it is mentioned to calibrate the data at the right energy
(up to here everything is fine), but then it is said to align the
data. Can anyone explain to me what "alignment" exactly means,
and what the aim of "aligning the data" is?  The data that I have
collected were measured through a channel through which a solution
flows (~200 microliters/hr), so I am not convinced that alignment
makes sense.


Many thanks,
Ornel



___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] calibration/alignment

2010-10-14 Thread Scott Calvin

On Oct 14, 2010, at 7:39 AM, Matt Newville wrote:


If you're just getting started, I would say to not worry about energy
alignment until it becomes an obvious problem.


A cautionary tale (with details made up, since I don't remember them!)  
from when I was just starting out as to what constitutes an "obvious  
problem":


I collected five transmission EXAFS scans on the same sample. The  
scans were on top of each other when I looked at the graph, so I  
merged them...and proceeded to get somewhat screwy fits.


The problem? I only looked at the graph across the whole spectra--say,  
1500 eV. It turns out there was about a 0.7 eV shift between each scan  
and the next one, for a total of roughly 3 eV. That was small enough
so as to be invisible when looked at on that scale. When I looked at  
just the XANES, though, the shift did become "obvious." I aligned the  
spectra and merged them, and suddenly the problems in the fit went away!
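A toy numpy illustration of why such a drift hides at full scale: two
sigmoid "edges" offset by 0.7 eV overplot convincingly over 100 eV,
but a brute-force scan over trial shifts in a narrow window recovers
the offset. Everything here is invented.

  import numpy as np

  e = np.linspace(7100.0, 7200.0, 2001)              # 0.05 eV grid
  edge = lambda e0: 1.0 / (1.0 + np.exp(-2.0 * (e - e0)))
  scan1, scan2 = edge(7112.0), edge(7112.7)          # 0.7 eV drift
  shifts = np.arange(-2.0, 2.0, 0.05)
  sse = [np.sum((np.interp(e, e + s, scan2) - scan1)**2) for s in shifts]
  print(shifts[int(np.argmin(sse))])                 # about -0.7 eV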


Since then, I've seen the same thing happen with students to whom I am  
teaching the technique.


On the other hand, there's no magic "blessing" given by the process of  
alignment. Suppose I have ten scans of very noisy data, and no  
reference. If I use the auto-align procedure in Athena, it sometimes
shifts a scan 0.3 eV one way, sometimes 0.2 eV the other way, with no  
apparent rhyme or reason. Looking at the graphs, even zoomed in, just  
shows a bunch of noisy data roughly on top of each other. In that  
case, there's no reason to believe there are actual shifts between  
scans, and I would NOT align them prior to merging.


Finally, beamline scientists usually have a very good idea whether  
their line is prone to drifts. Ask them!


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] calibration/aligment...2

2010-10-14 Thread Scott Calvin

Thanks, Ornella, that clarifies what you're doing.

My recommendation is to look closely at the 6 reference spectra and  
see if there appears to be a systematic energy shift between them. For  
example, each spectrum might be shifted by about 0.3 eV from the  
spectrum before it. Or the first two spectra might appear aligned, but  
then the third through sixth are shifted by 1.5 eV. In either of those  
cases, you should align them. In my second example, I might throw out  
the second spectrum as an additional precaution (if the shift occurred  
"all at once," it might have occurred during the scan before which it  
appears). In either case, it doesn't really matter which scan you  
choose to align to (and calibrate, if you have a way of doing that).


If, on the other hand, the 6 reference spectra appear to basically  
overplot except for random noise, I would not try to align them further.


I would treat the 6 spectra for the electrolysed solution similarly-- 
align them to each other if there is a systematic energy shift.


What you should not do, in my opinion, is to align the electrolysed  
scans to the reference scans. You actually expect there to be a  
chemical shift between the two sets of data, and aligning one to the  
other would remove that!


--Scott Calvin
Sarah Lawrence College

On Oct 14, 2010, at 11:00 AM, ornella smila castro wrote:


Hi Matt and Scott,

First, thank you so much for replying to my questions.
I realised that I should have been a bit more specific about the type
of experiments I am doing.
To start with, I am doing electrochemistry combined with EXAFS. I am
using an integrated electrolysis/EXAFS cell. Our experiments are as
follows: in the case of this experiment in particular, we have a
solution of a ruthenium-based compound that we flow through a channel
through which the beam passes, and that we call the "reference". We
record, let's say, 6 spectra in a row (we use a flow in order to avoid
beam damage on our sample). We didn't do any reference spectra (if
you mean running a scan of a Ru foil before starting the actual
experiment). Then we make up a new solution, but this time we
electrolyse the solution (by applying a potential) in order to get
the cationic species produced (the obtained spectra will therefore
be a mixture of the neutral and the cationic species), and we record
a set of 6 spectra as well.
Here is what I did, and I am wondering whether it was the right
thing to do: I have considered the first spectrum of the first set of
scans as the reference for calibration/alignment. Is that OK?
I am also wondering, when I process the second set of data
(the electrolysed solution), which spectrum I should take as a starting
point for calibrating/aligning my second series of scans: the one I
used for the first series, or the first of the second set of scans?


I hope this is clear...

Thanks again for your help,
Ornella


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


[Ifeffit] Asymmetric error bars in IFeffit

2010-10-22 Thread Scott Calvin

Hi all,

I'm puzzling over an issue with my latest analysis, and it seemed like  
the sort of thing where this mailing list might have some good ideas.


First, a little background on the analysis. It is a simultaneous fit  
to four samples, made of various combinations of three phases.  
Mossbauer has established which samples include which phases. One of  
the phases itself has two crystallographically inequivalent  absorbing  
sites. The result is that the fit includes 12 Feff calculations, four  
data sets, and 1000 paths. Remarkably, everything works quite well,  
yielding a satisfying and informative fit. Depending on the details,  
the fit takes about 90 minutes to run. Kudos to Ifeffit and Horae for  
making such a thing possible!


Several of the parameters that the fit finds are "characteristic  
crystallite radii" for the individual phases. In my published fits, I  
often include a factor that accounts for the fact that a phase is  
nanoscale in a crude way: it assumes the phase is present as spheres  
of uniform radius and applies a suppression factor to the coordination  
numbers of the paths as a function of that radius and of the absorber- 
scatterer distance. Even though this model is rarely strictly correct  
in terms of morphology and size dispersion, it gives a first-order  
approximation to the effect of the reduced coordination numbers found  
in nanoscale materials. Some people, notably Anatoly Frenkel, have  
published models which deal with this effect much more realistically.  
But those techniques also require more fitted variables and work best  
with fairly well-behaved samples. I tend to work with "messy" chemical  
samples of free nanoparticles where the assumption of sphericity isn't  
terrible, and the size dispersion is difficult to model accurately.


At any rate, the project I'm currently working on includes a fitted  
characteristic radius of the type I've described for each of the  
phases in each of the samples. And again, it seems to work pretty  
well, yielding values that are plausible and largely stable.


That's the background information. Now for my question:

The effect of the characteristic radius on the spectrum is a strongly  
nonlinear function of that radius. For example, the difference between  
the EXAFS spectra of 100 nm and 1000 nm single crystals due to the  
coordination number effect is completely negligible. The difference  
between 1 nm and 10 nm crystals, however, is huge.


So for very small crystallites, IFeffit reports perfectly reasonable  
error bars: the radius is 0.7 +/- 0.3 nm, for instance. For somewhat  
larger crystallites, however, it tends to report values like 10 +/-  
500 nm. I understand why it does that: it's evaluating how much the  
parameter would have to change by to have a given impact on the chi  
square of the fit. And it turns out that once you get to about 10 nm,  
the size could go arbitrarily higher than that and not change the  
spectrum much at all. But it couldn't go that much lower without  
affecting the spectrum. So what IFeffit means is something like "the  
best fit value is 10 nm, and it is probable that the value is at least  
4 nm." But it's operating under the assumption that the dependence of  
chi-square on the parameter is parabolic, so it comes up with a  
compromise between a 6 nm error bar on the low side and an infinitely  
large error bar on the high side. Compromising with infinity, however,  
rarely yields sensible results.


Thus my question is if anyone can think of a way to extract some sense  
of these asymmetric error bars from IFeffit. Here are possibilities  
I've considered:


--Fit something like the log of the characteristic radius, rather than  
the radius itself. That creates an asymmetric error bar for the  
radius, but the asymmetry the new error bar possesses has no  
relationship to the uncertainty it "should" possess. This seems to me  
like it's just a way of sweeping the problem under the rug and is  
potentially misleading.


--Rerun the fits setting the variable in question to different values  
to probe how far up or down it can go and have the same effect on the  
fit. But since I've got nine of these factors, and each fit takes more  
than an hour, the computer time required seems prohibitive!


--Somehow parameterize the guessed variable so that it does tend to  
have symmetric error bars, and then calculate the characteristic  
radius and its error bars from that. But it's not at all clear what  
that parameterization would be.


--Ask the IFeffit mailing list for ideas!

Thanks!

--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Asymmetric error bars in IFeffit

2010-10-24 Thread Scott Calvin
I don't think it's at all strange, Anatoly, and I think Matthew's  
solution is the right one--it seems obvious in retrospect that the  
parameter that Ifeffit should evaluate is 1/R, but apparently it  
wasn't obvious to me on Friday. :)


As for obtaining N instead of R, the beauty of both of our algorithms  
is that they don't depend on finding N; they depend on finding the  
ratio of N's for different shells. Finding N accurately is notoriously  
challenging: you need some way of getting S02, you need to have the  
normalization right, and you're sunk if there are data quality issues  
like an inhomogeneous sample, uncorrected self-absorption, or  
significant beam harmonics. But finding the ratio of N for two or more  
different shells doesn't depend so strongly on any of those things.


Since my method implicitly involves multiple ratios of coordination  
numbers, it is not so clear how to invert it.


In any case, I expect Matthew's solution to work, and will pursue it  
further on Monday.
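For concreteness, a Python sketch of the reparameterization with
invented numbers: guess x = 1/R instead of R, and then translate the
symmetric error bar on x into an asymmetric one on R.

  x, dx = 0.10, 0.04              # fitted 1/R (1/nm) and its uncertainty
  r_best = 1.0 / x                # 10 nm
  r_low = 1.0 / (x + dx)          # ~7.1 nm
  r_high = 1.0 / (x - dx) if x > dx else float("inf")  # ~16.7 nm here
  print(r_best, r_low, r_high)    # asymmetric: 10 (+6.7 / -2.9) nm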


--Scott Calvin
Sarah Lawrence College

On Oct 24, 2010, at 5:59 PM, Frenkel, Anatoly wrote:


Scott,
It is a strange result. Suppose you fit a bulk metal foil and vary  
the 1nn coordination number. You will not get 12 +/- 1000. You will  
get about 12 +/- 0.3 depending on the data quality and the k range,  
and on the amplitude factor you fix constant. Then, suppose you take  
your formula for a particle radius from your JAP article and  
propagate this uncertainty to get the radius uncertainty. That would  
give you a huge error because you are in the flat region of the N(R)  
function and R does bit affect N.
The meaning of your large error bar is, I think, that you are in  
such a large limit of sizes that they cannot be inverted to get N  
and thus the errors cannot be propagated to find Delta R.
Why don't you try to obtain N instead of R? You will get much  
smaller error bars and you can find the lower R limit from your N(R)  
equation (by plugging in N - deltaN you will find R - delta R).


The right limit is infinity as you pointed out.

Anatoly


From: ifeffit-boun...@millenia.cars.aps.anl.gov >

To: XAFS Analysis using Ifeffit 
Sent: Fri Oct 22 16:23:08 2010
Subject: [Ifeffit] Asymmetric error bars in IFeffit

Hi all,

I'm puzzling over an issue with my latest analysis, and it seemed  
like the sort of thing where this mailing list might have some good  
ideas.


First, a little background on the analysis. It is a simultaneous fit  
to four samples, made of various combinations of three phases.  
Mossbauer has established which samples include which phases. One of  
the phases itself has two crystallographically inequivalent   
absorbing sites. The result is that the fit includes 12 Feff  
calculations, four data sets, and 1000 paths. Remarkably, everything  
works quite well, yielding a satisfying and informative fit.  
Depending on the details, the fit takes about 90 minutes to run.  
Kudos to Ifeffit and Horae for making such a thing possible!


Several of the parameters that the fit finds are "characteristic  
crystallite radii" for the individual phases. In my published fits,  
I often include a factor that accounts for the fact that a phase is  
nanoscale in a crude way: it assumes the phase is present as spheres  
of uniform radius and applies a suppression factor to the  
coordination numbers of the paths as a function of that radius and  
of the absorber-scatterer distance. Even though this model is rarely  
strictly correct in terms of morphology and size dispersion, it  
gives a first-order approximation to the effect of the reduced  
coordination numbers found in nanoscale materials. Some people,  
notably Anatoly Frenkel, have published models which deal with this  
effect much more realistically. But those techniques also require  
more fitted variables and work best with fairly well-behaved  
samples. I tend to work with "messy" chemical samples of free  
nanoparticles where the assumption of sphericity isn't terrible, and  
the size dispersion is difficult to model accurately.


At any rate, the project I'm currently working on includes a fitted  
characteristic radius of the type I've described for each of the  
phases in each of the samples. And again, it seems to work pretty  
well, yielding values that are plausible and largely stable.


That's the background information. Now for my question:

The effect of the characteristic radius on the spectrum is a  
strongly nonlinear function of that radius. For example, the  
difference between the EXAFS spectra of 100 nm and 1000 nm single  
crystals due to the coordination number effect is completely  
negligible. The difference between 1 nm and 10 nm crystals, however,  
is huge.
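A minimal sketch of that nonlinearity, assuming the standard geometric suppression factor for uniform spheres (the overlap volume of two spheres of radius R whose centers are a path half-length r apart, divided by the sphere volume; the parameterization used in any particular published fit may differ in detail):

def sphere_suppression(r, R):
    # N_eff/N for a path of half-length r in a sphere of radius R:
    # 1 - (3/4)(r/R) + (1/16)(r/R)**3, and zero once r >= 2R
    x = min(r / R, 2.0)
    return 1 - 0.75 * x + x**3 / 16

# a 2.5 Angstrom (0.25 nm) near-neighbor path, various particle radii:
for R_nm in (0.5, 1.0, 5.0, 50.0, 500.0):
    print(f"R = {R_nm:6.1f} nm: N_eff/N = {sphere_suppression(0.25, R_nm):.4f}")

The factor climbs from about 0.63 at R = 0.5 nm to 0.9996 at R = 500 nm: everything interesting happens below roughly 10 nm, which is why the large-R side of the error bar runs away.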


So for very small crystallites, IFeffit reports perfectly reasonable  
error bars: the radius is 0.7 +/- 0.3 nm, for instance.

Re: [Ifeffit] LCF analysis

2010-10-25 Thread Scott Calvin

On Oct 25, 2010, at 8:25 AM, Wayne W Lukens Jr wrote:

A more useful way to look at this is that the probabilities that A,  
B and C are present are 99%, 93%, and 77%, respectively.


An excellent post, Wayne, but I don't think that last statement is  
quite right. If the F-test gives a probability of 0.23 for material C,  
I believe it's saying that there is a 23% chance that, given the noise  
level in the data, the fit would indicate that C was present when it  
was not. That is not the same thing as saying there is a 77% chance of  
C being present.


To see this, imagine very, very noisy data. Including C in the fit  
might very well improve the fit in the sense of an R-factor--maybe, in  
fact, there's a 45% chance of a modest improvement with a given set of  
very noisy data, even if there's no C present. That does not mean that  
a result like that should lead to the conclusion that C is more likely  
than not present (55%).
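A toy Monte Carlo makes the distinction concrete (a sketch, not from the thread: stand-in spectra, linear least squares, and the standard F-test for one added component):

import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n, trials = 50, 2000
x = np.linspace(0, 1, n)
A = np.sin(5 * x)            # stand-in spectrum for component A
C = np.cos(3 * x)            # stand-in for component C (truly absent)

def rss(design, y):
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.sum((y - design @ coef) ** 2)

hits = 0
for _ in range(trials):
    y = A + rng.normal(0, 0.5, n)            # very noisy data, no C
    rss1 = rss(A[:, None], y)                # fit with A only
    rss2 = rss(np.column_stack([A, C]), y)   # fit with A and C
    F = (rss1 - rss2) / (rss2 / (n - 2))
    if f_dist.sf(F, 1, n - 2) < 0.23:
        hits += 1

print(f"{100 * hits / trials:.0f}% of C-free datasets pass at the 0.23 level")

The fraction of C-free datasets that pass at the 0.23 level is itself about 23%: the F-test number is a false-positive rate, not the probability that C is present.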


--Scott Calvin
Sarah Lawrence College


Re: [Ifeffit] Asymmetric error bars in IFeffit

2010-10-25 Thread Scott Calvin
Yes; it's a case of trying to distinguish between a few boulders and  
lots of pebbles; the total volume isn't the issue.


What I'm looking at is something like surface/volume ratio, but with  
"surface" being path-dependent and gradual. For a nearest-neighbor  
path, only the top monolayer of atoms are on the surface. For a 5  
angstrom path, the transition region from "surface" to "core" extends  
5 angstroms in.


But that more sophisticated definition of "surface" doesn't change the  
fact that the dominant dependence is 1/R, so that should address the  
issue.


--Scott Calvin
Sarah Lawrence College

On Oct 25, 2010, at 4:43 AM, Matt Newville wrote:


Hi Scott,

That's a pretty amazing use case.

But I'm not sure I understand the issue exactly right.   I would have
thought the volume (r**3) was the important physical parameter, and
that a 1000nm particle would dominate the spectra over 3nm particles.
  Or is it that you are trying to distinguish between 1 very large
crystal and 100s of smaller crystals?   Perhaps the effect you're
really trying to account for is the surface/volume ratio?  If so, I
think using Matthew Marcus's suggestion of using 1/r (with a safety
margin) makes the most sense.

--Matt





Re: [Ifeffit] something is wrong with Ruthenium-Oxygen bond amplitudes

2010-11-05 Thread Scott Calvin

Hi Maria,

What was the physical form of the samples (powder, thin film, etc.)  
and how were they measured (transmission, fluorescence, ...)?  
Sometimes this kind of thing can stem from sample/beamline/data effects.


--Scott Calvin
Sarah Lawrence College

On Nov 5, 2010, at 5:50 PM, María Elena Montero Cabrera wrote:


Hello friends,

Hoping someone could help us. We are having some problems fitting the  
Ru K-edge of a ruthenium-cuprate sample in Artemis, with path  
functions obtained using FEFF 8.4, where we got amplitude values of  
less than 0.50 for the Ru-O first shells. We think this value is  
probably wrong, although there are some publications where some  
oxygen deficiency is studied and recorded as true. The ATOMS input  
data was found to be ok because the Rietveld analysis results told us  
it is good. The reason we think we may be doing something wrong  
during the fitting is that we tried to fit our reference sample of  
RuO2 (measured under the same conditions as the Ru-Cu experiment at  
SSRL) and came to the same result, even worse: an amplitude lower  
than 0.40. We are attaching the Artemis files so you can take a look  
at them and give us some light to continue with our analysis.

Thanking in advance, take care





[Ifeffit] Origin of terminology "self-absorption"

2010-11-16 Thread Scott Calvin

Hi all,

As some of you know, I'm currently working on a textbook on XAFS  
analysis. Because of that, I'm going to occasionally pose some  
questions for the list that may seem a bit random. I hope none of you  
mind me using the list in this way; the questions may seem to come out  
of left field, but I think they will still be of interest to many.


With that said, here's my question for today:

What is the origin of the use of "self-absorption" to describe the  
suppression of fine-structure observed in thick, concentrated samples  
measured in fluorescence? I understand the physics of the effect  
itself; my question is about the curious wording. Compared to a thin  
concentrated sample, the effect might better be described as  
"saturation," while compared to a thick dilute sample, it's actually  
related to a lack of absorption by other elements.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


Re: [Ifeffit] Origin of terminology "self-absorption"

2010-11-16 Thread Scott Calvin
I tried a few searches, but rapidly get lost in other uses of the  
term. My guess is we borrowed it from some other spectroscopy, much  
the way we borrowed "Debye-Waller factor" from XRD, and then proceeded  
to change its meaning. But it would be nice to be able to track that  
down.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

On Nov 16, 2010, at 10:54 AM, Matthew Marcus wrote:

It's definitely a misnomer.  I use "overabsorption" and encourage  
others to do so.  I suppose to track it down would require going back  
over the seminal papers on the subject.
mam



Re: [Ifeffit] Transmission EXAFS sample

2010-11-19 Thread Scott Calvin

Hi Jatin,

Matt covered most of what I would say, but I'll add a few comments of  
my own.


I'm not sure how you arrived at the conclusion that you have only a  
few percent of what you need--you must be assuming a sample area  
somehow. I have frequently made transmission measurements on samples  
where I only had a few milligrams available. Generally, I did it by  
spreading it on a layer of tape as well as I could and then folding  
the tape over and over again--sometimes to make as many as 16 layers.  
(Of course, that many layers is not advisable if you're below 6 keV or  
so, as the absorption of the tape itself would kill the signal). Even  
if there are lots of pinholes because you can't cover the tape  
effectively, 16 layers from folding will make them cancel out fairly  
well. I can then narrow the beam a bit to match the size of my sample.  
Flux isn't really the issue here, so I don't even need a focussed  
beamline--I can just narrow the slits.


Two other tips:

1) Realize that even with a tiny amount of sample that much of it  
won't end up on the tape. The process of brushing on tape is designed  
to separate the small grains from the big ones, with only the small  
ones ending up on tape. Allow that to happen!


2) You can sometimes get a second piece of tape to have some sample on  
it by putting it sticky side down on your mortar and peeling it back.  
A thin layer of dust from the sample will stick to the tape, and give  
you a little more absorption and a bit more of a uniform distribution.  
If you stack that with the primary piece of tape and then fold a few  
times, you may end up in pretty good shape, as long as you're not  
operating at a low enough energy so that all the layers of tape are a  
problem.


This procedure doesn't give me the best data I've ever seen, but it's  
often not bad.


--Scott Calvin
Sarah Lawrence College

On Nov 19, 2010, at 8:13 AM, Matt Newville wrote:


Dear Jatin,

The idea that the optimum absorption length (mu*t) for transmission
experiments is 2.3 assumes that the errors in the measurement are due
to counting statistics of the x-rays.  For any synchrotron experiment,
the number of x-rays in the transmission chamber is high enough that
the noise from counting statistics is rarely significant.  This means
that using a value of 2.3 is really not that important.

The more important issues are
 a) having a uniform sample.
 b) not having (mu*t) so high that higher-order harmonics dominate
the transmission measurement.

For transmission measurements, it's difficult to overstate the
importance of a uniform sample.  For an ideal thickness, I would say
that the better rules of thumb than mu*t = 2.3 are to aim for an edge
step of 0.1 to 1.0, and a total absorption less than 3.0.
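As a worked example of the edge-step rule of thumb (a sketch; the mass attenuation coefficients below are placeholders, so look up real values for the compound and edge in question):

# mass attenuation coefficients just above/below the edge, cm^2/g
# (placeholder numbers; take real ones from standard tabulations)
mu_rho_above = 300.0
mu_rho_below = 50.0

area_cm2 = 1.0          # area the sample will cover
target_step = 0.5       # somewhere in the 0.1-1.0 range

# edge step = delta(mu/rho) * (mass per unit area)
mass_g = target_step / (mu_rho_above - mu_rho_below) * area_cm2
print(f"{1000 * mass_g:.1f} mg spread over {area_cm2} cm^2")   # 2.0 mg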

If you only have enough material for an edge step as low as 0.02 (as
you imply), then measuring in fluorescence or electron emission is
probably a better choice.  Such a sample won't be severely affected by
"self-absorption" (or "over absorption" to use the term this mailing
list prefers) in the fluorescence measurement.  I would recommend
simultaneously measuring transmission and florescence for such a
sample.

My concern about a very thin sample is uniformity.  Specifically, is
the grain size really well below 0.02/mu so that a collection of
particles can give a uniform thickness?  Since you didn't give any
details of the system, it's hard to guess.

Is it feasible to pack that material into a smaller area so that the
thickness is increased and use a smaller x-ray beam?


-- Can my sample be only a few percent of the "actual amount" (i.e.
calculated based on the above) required, and can I still perform
transmission EXAFS? How would this affect my data? (I guess it will
be heavily dominated by noise.)


I would guess that a sample with mu*t of 0.02 would be dominated by  
pinholes.


-- What if I have the required amount of sample, but the material's
density is so high that it yields only a small volume of powder (for
a given weight), so that it cannot be spread over multiple layers of
Kapton tape to ensure a pinhole-free sample?


If you cannot get the grain size small enough to have many overlapping
grains in the sample, the sample won't be uniform enough for good
transmission data.  The techniques of using multiple layers or mixing
with a low-Z binder don't solve this problem.  These do help to make a
uniform collection of overlapping grains, but don't make the grains
smaller.

I would recommend trying to increase the thickness at the expense of
cross-sectional area, and/or measuring in both transmission and
fluorescence.

Hope that helps,

--Matt


Re: [Ifeffit] Transmission EXAFS sample

2010-11-21 Thread Scott Calvin


On Nov 21, 2010, at 2:45 AM, Jatinkumar Rana wrote:




Hi Scott,

Yes I have assumed the sample cross section area to be 1 sq. cm. and
then calculated the amount of sample required for that.

What I planned is the following:

I would calculate the amount of sample required for a 1 sq. cm area,
take that amount of sample, make it into a very fine paste using
mortar and pestle, and then apply it uniformly on a piece of Kapton
tape. Then fold the tape over and over again in such a way that the
final stack of tape yields a 1 sq. cm area containing the required
amount of sample.

Would that be the right approach? Or can I take a random few
milligrams of powder (i.e., not strictly as per the calculation) and
make several uniform layers of tape?

With best regards,
Jatin

--


Hi Jatin,

I'm not sure I understand. If you have enough sample for the 1 square  
centimeter target, then there shouldn't be a problem, right? I was  
assuming from your initial question that you weren't going to have  
enough sample to do that.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


Re: [Ifeffit] Transmission EXAFS sample

2010-11-22 Thread Scott Calvin
To my mind, when considering sample preparation the important thing is  
not so much the "right" thickness, as knowing the effects to guard  
against as the thickness deviates toward the thin or thick side.


As transmission samples become thicker, the problem of "unwanted"  
photons becomes more severe. Those photons may be harmonics, photons  
scattered into the It detector, or photons from the tails of the  
resolution curve of the monochromator.


As transmission samples become thinner, uniformity becomes more of an  
issue. If you play with the equations, you'll see that if your sample  
is a mixture of regions that have a thickness of 1.0 absorption  
lengths and regions that have a thickness of 2.0 absorption lengths,  
the spectrum is less distorted than if it is a mixture of 0.5 and 1.0  
absorption lengths.


So if a sample is on the thick side, it is particularly important to  
guard against harmonics in the beam and scattered photons. If it is on  
the thin side, it is particularly important to guard against  
nonuniformity.


To put it another way, problems are synergistic. With a well- 
conditioned beam, a uniform sample, and linear detectors, the  
thickness almost doesn't matter (within reason)--at a modern beamline,  
a total absorption of even 0.05 or 4.0 will work.


But as each of those conditions deviates from the ideal, distortions  
become much more severe.


There's an old joke about someone on a diet going in to a fast food  
joint and asking for a double bacon cheeseburger, a large fries...and  
a diet Coke. In XAFS measurements, that attitude actually kind of  
works, because of the synergies I just discussed.


Personally, I trust my ability to condition the beam and minimize  
scattering more than I trust my ability to make a uniform sample, so I  
lean a little toward the thicker side.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory



On Nov 22, 2010, at 5:13 AM, Welter, Edmund wrote:


Dear Jatin,

the optimum mue*d of 2.x is not just derived from simple photon counting
statistics. As Matt pointed out, for transmission measurements at a
synchrotron beamline in conventional scanning mode this is seldom an
issue. Nevertheless, one should avoid measuring subtle changes of
absorption at the extreme ends, that is, at transmission near 0 % or
100 %.

In optical photometry this is described by the more or less famous
"Ringbom plots" which describe the dependency of the accuracy of
quantitative analysis by absorption measurements (usually but not
necessarily in the UV/Vis) from the total absorption of the sample.

This time the number is only near to 42: the optimum transmission is
36.8 % (mue = 1). So, to achieve the highest accuracy in the
determination of small Delta c (c = concentration) you should try to
measure samples with transmissions near this value (actually the
minimum is broad, and transmissions between 0.2 and 0.7 are ok). In
our case, we are not interested in the concentration of the absorber,
but in (very) small changes of the transmission resp. absorption in
our samples. Or, using the Bouguer-Lambert-Beer law, in our case
mue (= -ln(I1/I0)) is a function of the absorption coefficient (mue0).
The concentration of the absorber and the thickness (d) of the sample
are constant.

-ln(I1/I0) = mue0 * c * d

But then: if the optimum is a mue between 0.35 and 1.6, why are we all
measuring successfully (ok, more or less ;-) using samples having a
mue between 2 and 3? ...and 0.35 seems desperately small to me! Maybe
sample homogeneity is an issue?

Cheers,
Edmund Welter
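The optimum Edmund quotes follows from minimizing the relative concentration error when the absolute noise on the transmitted intensity I1 is roughly constant: c ~ mue = -ln(T), so d(mue) = dI1/I1 grows as exp(mue), and the relative error goes as exp(mue)/mue. A quick sketch:

import numpy as np

mu = np.linspace(0.05, 4, 4000)
rel_err = np.exp(mu) / mu      # relative error in c, up to a constant

best = mu[np.argmin(rel_err)]
print(f"optimum mue = {best:.2f}, transmission = {100 * np.exp(-best):.1f} %")
# -> optimum mue = 1.00, transmission = 36.8 %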






Re: [Ifeffit] Transmission EXAFS sample

2010-11-22 Thread Scott Calvin


On Nov 22, 2010, at 7:09 AM, Jatinkumar Rana wrote:



Hi Scott,

Sorry for mixing up the things.

For the case when I have a very limited amount of sample and cannot
cover a 1 sq. cm area, you, Matt, and others have given a very clear
explanation about possible solutions and the probable effects on data
quality. I am really very thankful to all of you for sharing your
experience and expertise.

My last post was with reference to the case when I have enough powder
(i.e., reference oxide compounds). It is just to ensure that I am
doing things exactly the way they have to be done.

With best regards,
Jatin

--
Jatinkumar Rana



Yes, Jatin, the procedure you described is fine. There is no "right"  
way to make samples, although there are many wrong ways.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


[Ifeffit] Distortion of transmission spectra due to particle size

2010-11-22 Thread Scott Calvin

Hi all,

I'm tracking down a piece of EXAFS lore which I think is incorrect.

I've seen it said that you cannot compensate for the distortion  
introduced by large particle sizes by making the sample thicker.  
Certainly thick samples have their own set of issues (e.g. "thickness  
effects" from harmonics), but I've seen the claim that the mathematics  
of the distortions introduced by nonuniformity means that there is a  
particle-size distortion that is independent of thickness. This claim  
is sometimes accompanied by an equation giving chi_eff/chi_real as a  
function of particle size diameter D and various absorption  
coefficients.


I've eventually traced this equation back to a paper by Lu and Stern  
from 1983, have walked through the derivation, and believe there is a  
flaw in the logic that has led to the erroneous--and widely quoted-- 
conclusion that thickness cannot compensate for particle size.


The paper, for those who want to follow along, is K. Lu and E. A.  
Stern, "Size effect of powdered sample on EXAFS amplitude," Nucl.  
Instrum. and Meth. 212, 475-478 (1983).


They calculate the intensity transmitted by a spherical particle, and  
from there calculate the attenuation in the normalized EXAFS signal  
for a beam passing through that particle.


They then, however, extend this to multiple layers of particles by the  
following argument:


"Finally, the attenuation in N layers is given by (I/I0)^N, where I is  
the transmitted intensity through one layer. Xeff for N layers is then  
the same as for a single layer since N will cancel in the final result."


This is not the case, is it? It seems to me that their analysis  
assumes that the spheres in subsequent layers line up with the spheres  
in previous ones, so that thick spots are always over thick and thin  
spots over thin. It's little wonder, then, that making the sample  
thicker does not improve the uniformity according to that analysis.


I've done a calculation for the effects of uniformity in a somewhat  
different way, and found that it is indeed true that multiple layers  
of particles show less distortion due to nonuniformity than a single  
layer of particles of the same size, just as one would intuitively  
imagine, and in contrast to Lu and Stern.


Do you agree that the extrapolation to multiple layers in the original  
Lu and Stern paper is not correct, or have I misled myself somehow?


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

P.S. None of this should be taken as an endorsement of overly thick  
samples! Harmonics and the like are a concern regardless of the  
uniformity issue.




Re: [Ifeffit] Distortion of transmission spectra due to particle size

2010-11-22 Thread Scott Calvin

Some follow-up.

This, for example, is from an excellent workshop presentation by Rob  
Scarrow:



Errors from large particles are independent of thickness


The relative (%) variation in thickness depends on the ratio  
(particle diameter / avg. thickness), so it is tempting to increase  
the avg. thickness (i.e. increase μx) as an alternative to reducing  
the particle diameter.


However, simulations of MnO2 spectra for average Δμ0x = 1, 2 or 3  
show that the errors in derived pre-edge peak heights and EXAFS  
amplitude factors are significant when diameter > 0.2 / Δμ0, but  
that they are not affected by the average sample thickness. (Δμ0  
refers to the edge jump)


The equation at right is given by Heald (quoting earlier work by  
Stern and Lu). D is particle diameter, μ1 is for just below the  
edge, and Δμ =μ(above edge) - μ1.


I've seen similar claims elsewhere, although Scarrow's is particularly  
clear and unambiguous.


The equation Scarrow gives is indeed the one from Lu and Stern, and  
the simulations are based on that equation.


That Lu-Stern equation is derived for a monolayer of spheres, and then  
experimentally tested with multiple layers of tape. I'm still trying  
to work through the math to see how it works for multiple layers. I'm  
not convinced that the N divides out as is claimed in the article. As  
Matt says, it wasn't their main point.


There is no question that if the particle size is large compared to an  
absorption length there will be nonuniformity and thus distortions.


But compare a monolayer of particles with a diameter equal to 0.4  
absorption lengths with four strips of tape of that kind stacked. Do  
we really think the distortion due to nonuniformity will be as bad in  
the latter case as in the first? In practice, I think many  
transmission samples fall in roughly that regime, so the question  
isn't just academic.


I'll keep trying to work through the math and let you know what I find.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


Re: [Ifeffit] Distortion of transmission spectra due to particle size

2010-11-23 Thread Scott Calvin


On Nov 22, 2010, at 2:55 PM, Scott Calvin wrote:

But compare a monolayer of particles with a diameter equal to 0.4  
absorption lengths with four strips of tape of that kind stacked. Do  
we really think the distortion due to nonuniformity will be as bad  
in the latter case as in the first? In practice, I think many  
transmission samples fall in roughly that regime, so the question  
isn't just academic.


OK, I've got it straight now. The answer is yes, the distortion from  
nonuniformity is as bad for four strips stacked as for the single  
strip. This is surprising to me, but the mathematics is fairly clear.  
Stacking multiple layers of tape rather than using one thin layer  
improves the signal to noise ratio, but does nothing for uniformity.  
So there's nothing wrong with the arguments in Lu and Stern, Scarrow,  
etc.--it's the notion I had that we use multiple layers of tape to  
improve uniformity that's mistaken.


A bit on how the math works out: for Gaussian distributions of  
thickness, the absorption is attenuated (to first order) by a term  
directly proportional to the variance in the distribution. The  
standard deviation in thickness from point to point in a stack of N  
tapes generally increases as the square root of N (typical statistical  
behavior). This means that the fractional standard deviation goes down  
as the square root of N. In casual conversation, we would usually  
identify a sample with thickness variations of +/-5 % as being
"more uniform" than one with thickness variations of +/- 8%, so it's  
natural to think that a stack of tapes is more uniform than a single  
one. But since the attenuation is proportional to the variance (i.e.  
the square of the standard deviation), it actually increases in  
proportion to N. Since the absorption is also increasing in proportion  
to N, the attenuation remains the same size relative to the  
absorption, and the spectrum is as distorted as ever.
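A quick numeric check of that argument (a sketch, assuming a 20% rms thickness variation per layer and mu = 1 per mean layer thickness; to first order the measured oscillation amplitude is suppressed by the ratio of the thickness the transmitted beam "sees" to the average thickness):

import numpy as np

rng = np.random.default_rng(1)
mu = 1.0
for n_layers in (1, 4, 16):
    T = rng.normal(1.0, 0.2, size=(200_000, n_layers)).sum(axis=1)
    w = np.exp(-mu * T)                    # transmitted intensity
    t_eff = np.average(T, weights=w)       # thickness the beam "sees"
    print(f"{n_layers:2d} layers: t_eff/t_avg = {t_eff / T.mean():.4f}")
# all three print ~0.96: the fractional suppression never shrinks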


This result doesn't actually depend on having a Gaussian distribution  
of thickness. If each layer has 10% pinholes, for instance, at first  
blush it seems as if two layers should solve most of the problem: the  
fraction of pinholes drops to 1%. But those pinholes are now compared  
to a sample which is twice as thick, on average, and thus create  
nearly as much distortion as before. Add to this that there is now 9%  
of the sample that is half the thickness of the rest, and the  
situation hasn't improved any. I've worked through the math, and the  
cancellation of effects is precise--a two layer sample has the  
identical nonuniformity distortion to a one layer one.


(There is probably a simple and compelling argument as to why this  
distortion is independent of the number of randomly aligned layers for  
ANY thickness distribution, but I haven't yet found it.)
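One candidate argument: for independent, identically distributed layers, <T exp(-mu T)> / <exp(-mu T)> = N <t exp(-mu t)> / <exp(-mu t)>, while <T> = N <t>, so the ratio t_eff/t_avg that sets the distortion is independent of N for any single-layer thickness distribution. A quick check against the 10%-pinhole example (a sketch):

import numpy as np

rng = np.random.default_rng(2)
mu = 1.0
layer = lambda size: rng.choice([0.0, 1.0], p=[0.1, 0.9], size=size)

t1 = layer(500_000)                          # one layer, 10% pinholes
predicted = np.average(t1, weights=np.exp(-mu * t1)) / t1.mean()

T = layer((500_000, 8)).sum(axis=1)          # an 8-layer stack
simulated = np.average(T, weights=np.exp(-mu * T)) / T.mean()

print(f"1 layer: {predicted:.4f}   8 layers: {simulated:.4f}")  # both ~0.853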


* * *

For me personally, knowing this will cause some changes in the way I  
prepare samples.


First of all, I'm going to move my bias more toward the thin end. My  
samples are generally pretty concentrated, so signal to noise is not a  
big issue. If I'm also not improving uniformity by using more layers  
of tape, there's no reason for me not to keep the total absorption  
down around 1, rather than around 2.


Secondly, I'll approach the notion of eyeballing the assembled stack  
of tapes for uniformity, whether with the naked eye or a microscope,  
with more caution--particularly when teaching new students. The idea  
that a sample which has no evident pinholes is a better sample than  
one that does is not necessarily true, as the example above with the  
single layer exhibiting 10% pinholes as compared to the double layer  
exhibiting 1% demonstrates. Stressing the elimination of visible  
pinholes will tend to bias students toward thicker samples, but not  
necessarily better ones.



--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


Re: [Ifeffit] Distortion of transmission spectra due to particle size

2010-11-24 Thread Scott Calvin

Matt,

Your second simulation confirms what I said:

The standard deviation in thickness from point to point in a stack  
of N tapes generally increases as the square root of N (typical  
statistical behavior).


Now follow that through, using, for example, Grant Bunker's formula  
for the distortion caused by a Gaussian distribution:


(mu x)eff = mu x_o - (mu sigma)^2/2

where sigma is the standard deviation of the thickness.

So if sigma goes as square root of N, and x_o goes as N, the  
fractional attenuation of the measured absorption stays constant, and  
the shape of the measured spectrum stays constant. There is thus no  
reduction in the distortion of the spectrum by measuring additional  
layers.


Your pinholes simulation, on the other hand, is not the scenario I was  
describing. I agree it is better to have more thin layers rather than  
fewer thick layers. My question was whether it is better to have many  
thin layers compared to fewer thin layers. For the "brush sample on  
tape" method of sample preparation, this is more like the question we  
face when we prepare a sample. Our choice is not to spread a given  
amount of sample over more tapes, because we're already spreading as  
thin as we can. Our choice is whether to use more tapes of the same  
thickness.


We don't have to rerun your simulation to see the effect of using  
tapes of the same thickness. All that happens is that the average  
thickness and the standard deviation get multiplied by the number of  
layers.


So now the results are:

For 10% pinholes, the results are:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
# 1|  10.0  |0.900  |0.300  |
# 5|  10.0  |4.500  |0.675  |
#25|  10.0  |22.500  |1.500  |

For 5% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
# 1|   5.0  |0.950  |0.218  |
# 5|   5.0  |4.750  |0.485  |
#25|   5.0  |23.750  |1.100  |

For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
# 1|   1.0  |0.990  |0.099  |
# 5|   1.0  |4.950  |0.225  |
#25|   1.0  |24.750  |0.500  |

As before, the standard deviation increases as square root of N. Using  
a cumulant expansion (admittedly slightly funky for such a broad  
distribution) necessarily yields the same result as the Gaussian  
distribution: the shape of the measured spectrum is independent of the  
number of layers used! And as it turns out, an exact calculation (i.e.  
not using a cumulant expansion) also yields the same result of  
independence.


So Lu and Stern got it right. But the idea that we can mitigate  
pinholes by adding more layers is wrong.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory



On Nov 24, 2010, at 6:05 AM, Matt Newville wrote:


Scott,


OK, I've got it straight now. The answer is yes, the distortion from
nonuniformity is as bad for four strips stacked as for the single  
strip.


I don't think that's correct.

This is surprising to me, but the mathematics is fairly clear.  
Stacking
multiple layers of tape rather than using one thin layer improves  
the signal
to noise ratio, but does nothing for uniformity. So there's nothing  
wrong
with the arguments in Lu and Stern, Scarrow, etc.--it's the notion  
I had
that we use multiple layers of tape to improve uniformity that's  
mistaken.


Stacking multiple layers does improve sample uniformity.

Below is a simple simulation of a sample of unity thickness with
randomly placed pinholes.  First this makes a sample that is 1 layer
of N cells, with each cell either having thickness of 1 or 0.  Then it
makes a sample of the same size and total thickness, but made of 5
independent layers, with each layer having the same fraction of
randomly placed pinholes, so that total thickness for each cell could
be 1, 0.8, 0.6, 0.4, 0.2, or 0.  Then it makes a sample with 25
layers.

The simulation below is in python. I do hope the code is
straightforward enough so that anyone interested can follow. The way
in which pinholes are randomly selected by the code may not be
obvious, so I'll say hear that the "numpy.random.shuffle" function is
like shuffling a deck of cards, and works on its array argument
in-place.
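A minimal reconstruction of the simulation described (the original listing was lost in archiving) might look like:

import numpy as np

def make_sample(n_layers, n_cells=100_000, pinhole_frac=0.10):
    # n_layers independent layers of unit-thickness cells, each with
    # a fixed fraction of randomly placed pinholes, rescaled so every
    # sample has the same total (average) thickness
    total = np.zeros(n_cells)
    for _ in range(n_layers):
        cells = np.ones(n_cells)
        cells[:int(pinhole_frac * n_cells)] = 0.0  # punch the pinholes
        np.random.shuffle(cells)                   # ...at random spots
        total += cells
    return total / n_layers

for n_layers in (1, 5, 25):
    t = make_sample(n_layers)
    print(n_layers, round(t.mean(), 3), round(t.std(), 3))
# reproduces the table below: std dev 0.300, ~0.135, ~0.060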

For 10% pinholes, the results are:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
# 1|  10.0  |0.900  |0.300  |
# 5|  10.0  |0.900  |0.135  |
#25|  10.0  |0.900  |0.060  |

For 5% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
# 1|   5.0  |0.950  | 

Re: [Ifeffit] Distortion of transmission spectra due to particle size

2010-11-24 Thread Scott Calvin
On Nov 24, 2010, at 12:04 PM, Matt Newville wrote:


Scott,

You said:

the distortion from nonuniformity is as bad for four strips
stacked as for the single strip.


As I showed earlier, a four layer sample is more uniform than a one
layer sample, whether the total thickness is preserved or the
thickness per layer is preserved.


For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
# 1|   1.0  |0.990  |0.099  |
# 5|   1.0  |4.950  |0.225  |
#25|   1.0  |24.750  |0.500|


Yes, the sample with 25 layers has a more uniform thickness.

As before, the standard deviation increases as square root of N.
Using a cumulant expansion (admittedly slightly funky for such a
broad distribution) necessarily yields the same result as the
Gaussian distribution: the shape of the measured spectrum is
independent of the number of layers used! And as it turns out, an
exact calculation (i.e. not using a cumulant expansion) also yields
the same result of independence.


OK...  The shape is the same, but the relative widths change.

24.75 +/- 0.50 is a more uniform distribution than 0.99 +/- 0.099.
Perhaps this is what is confusing you?


The thicker sample is more uniform (in the sense of fractional  
uniformity), but the distortion in the normalized mu(E) due to the  
nonuniformity is identical. That is exactly what is surprising, and  
what I initially did not believe. Now I am firmly convinced that it is  
true.




So Lu and Stern got it right. But the idea that we can mitigate  
pinholes by

adding more layers is wrong.


Adding more layers does make a sample of more uniform thickness.
Perhaps "mitigate pinholes" means something different to you?


By "mitigate pinholes" I mean that we can obtain a normalized energy  
spectrum that is closer to the mu(E) we are trying to measure. But we  
can't do that by adding more identically distributed layers if the  
thick and thin spots are randomly stacked--we end up with exactly the  
same normalized mu(E). (As usual, this analysis neglects signal to  
noise, harmonics, and the like.)


I used a lot of words there to try to be precise. But basically,  
stacking two layers of tape with the hope that pinholes will tend to  
be cancelled out will not work.




In your original message (in which you set out to "track down" a piece
of "incorrect lore") you said that Lu and Stern assumed that layers
were stacked "so that thick spots are always over thick and thin spots
over thin".  They did not assume that.  Given that initial
misunderstanding, and the fact that you haven't shown any calculations
or simulations, it's a bit hard for me to fathom what you think Lu and
Stern "got right" or wrong.


They got everything right. I was trying to save a different piece of  
lore that I think is even more widely disseminated--that stacking thin  
layers of tape reduces the amount of distortion due to nonuniformity  
as compared to one thin layer of tape.


I thought I had shown calculations (the Gaussian case), and Jeremy has  
shown simulations which confirm the result for the pinhole case.


 The main point of their work is that it is better to use more  
layers to get to a given thickness.  You seem to have some objection  
to this, but I cannot figure out what you're trying to say.


I agree that your statement is a true one which is also consistent  
with their paper, but would respectfully differ as to what the main  
point of the paper is. The abstract reads:


Powdered samples are commonly used to measure the extended X- 
ray absorption fine structure (EXAFS) and near-edge structure. It is  
shown that sizes of particles typically employed produce significant  
reduction in the EXAFS amplitude for concentrated samples. The  
distortion is calculated and found to be in good agreement with  
measurements on FeSi2 samples. To obtain accurate EXAFS amplitudes  
on powdered samples it is necessary to use particles significantly  
smaller than 400 mesh when the atom of interest is concentrated.



The use of increasing number of layers as the particles are sieved  
more finely is done to provide a controlled experiment in which  
differences in signal to noise and thickness effects are not a factor.  
Suggesting that it is better to use more layers to get a given  
thickness is an indirect way of getting at the real issue, which is  
particle size. For a given particle size, multiple layers provide no  
help whatsoever with nonuniformity-related distortions, but merely  
allow for the desired signal to noise. Lu and Stern provide the  
correct emphasis in their paper. It is only some of the subsequent  
XAFS practitioners, including myself until about 24 hours ago, who  
placed the stress on the multiple layers per se as addressing the  
uniformity issue.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

Re: [Ifeffit] Cadmium K-edge

2011-01-06 Thread Scott Calvin

Hi Alan,

(To the list: I discussed Alan's system a bit with him when he came to  
SSRL, but don't recall the details. That's why in this reply I have a  
little more knowledge about it than what he posted yesterday.)


I suspect the peak at 2.3 A does indeed represent structure of some  
kind. Your k-space data for sample 1 has very little noise, so I don't  
think it's spurious in that sense.


What strikes me about your data is that the FT's have so little  
structure beyond the 1.8 A peak. I don't recall exactly the nature of  
your system: it was cadmium loaded on to the surface of a substrate,  
but I don't recall what the substrate was. If it's low-Z (for example,  
silica), then the scattering off of whatever the oxygen is bonded to  
(in my example, silicon) would show up pretty weakly, and that could  
be the peak at 2.3 A. Its absence in sample 2 could represent the  
cadmium ion being less selective about what surface site it bonds to,  
so that the Cd-Si distance (in my example) varies and the peak washes  
out.


That's all just an example of the kind of thing that could be  
responsible for what you're seeing. I agree that a good plan is to use  
a Cd-O structure to at least let you model the first peak. For the  
second peak, though, you may have to take more inspiration from the  
structure of the substrate than the structure of Cd-O.


It's also intriguing that the white line in your sample is larger than  
in the CdO standard from the database. Someone with more XANES  
experience than I might be able to suggest why that is--something  
about these being bound to the surface?


Is my recollection of your system correct? If so, what was the  
substrate?


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


On Jan 5, 2011, at 11:23 PM, Alan Du wrote:



Hi all,

I processed my XAS data and compared them with standards from the  
Farrel Lytle Database (http://img717.imageshack.us/i/xasw.jpg/).


In the RSF, both samples 1 and 2 have a major peak at R = ca. 1.8 A.  
Sample 1 has an additional peak at R = ca. 2.3 A. I wonder if this  
tiny peak is significant.


It seems both samples have peak positions similar to those of the CdO  
standard, which sounds logical because the cadmium will bind to the  
surface oxygen of the material.


I am looking at fitting samples 1 and 2 using whatever  
crystallographic info on CdO I can get my hands on.


I wonder if the plan makes sense.

Happy new year,
Alan




[Ifeffit] Athena crash during LCA

2011-01-07 Thread Scott Calvin

Hi all,

Athena has been crashing for me during one particular linear  
combination analysis, and I'm wondering if any of you have an  
explanation. A project file which demonstrates the problem is  
attached. I am using Athena 0.8.054 with ifeffit 1.2.10 on a MacBook  
Pro using OS 10.5.8.


The behavior can be seen by trying to fit the Data group by using a  
linear combination of Standard 1 and Standard 2 in chi(k). It gets as  
far as "plotting in k-space from group 'Data'...done!' and then hangs,  
with the watch icon remaining indefinitely. This doesn't happen in  
norm(E) or deriv(E) fits, and doesn't happen when Standard 1 is not  
used. But Standard 1 appears to have uncorrupted chi(k) data when  
plotted directly. Standard 1 is from one of the XDAC lines at NSLS, as  
are the standards that work.


I've tried saving Standard 1 and reading it in as a new group, but no  
luck. I've also looked at the data in Standard 1, and I don't see  
anything out of place, such as a place where the energy backs up. I've  
tried starting the chi(k) file with zero values, truncating the end  
values, and changing the background spline to end at the same point  
the data does. And it always hangs at the same point.


Any ideas? I'd like to use this data for a workshop next week!

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory



LCACrash.prj
Description: Binary data




Re: [Ifeffit] Athena crash during LCA

2011-01-07 Thread Scott Calvin
Jeff--sorry, I guess my grammar got a little convoluted. It DOES work  
with norm(E) and deriv(E) fitting for me as well. It does NOT work  
with chi(k) fitting for me.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

On Jan 7, 2011, at 7:58 PM, Jeff Terry wrote:


Did not try really hard to work with it.

Fitting Data as norm(E) from -20.000 to 30.000

Fit included 98 data points and 3 variables
R-factor = 0.000263
chi-square = 0.04347
reduced chi-square = 0.0004482

  groupweight
===
 2: Standard 1 1.000(0.000)
 3: Standard 2 0.455(0.000)


  groupe0 shift
===
 2: Standard 1-0.905( 0.000)
 3: Standard 2-0.905( 0.000)







Re: [Ifeffit] Athena crash during LCA

2011-01-08 Thread Scott Calvin

Hi Matt and Jeff,

I just upgraded to 0.8.061, and it works in that version, so  
apparently whatever the bug was has been fixed. Thanks for checking it  
out for me!


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory

On Jan 8, 2011, at 7:08 AM, Matt Newville wrote:


Hi Scott,

Like for Jeff, the chi(k) LCA fit works for me with Athena 0.8.061 on
OSX 10.6.   If I had to make a guess or recommendation for something
to try it would be this:

Standard2 extends pretty far out in k (>20 Ang^-1) and Standard2 is
pretty glitchy beyond 12Ang^-1 or so.  You might try truncating the
data and/or removing the obvious glitches before doing the background
subtraction and LCA fit.
Again, that's a guess.

Does the latest iXAFS3 not work for you on 10.5.8?  I haven't tried
it, but the intention was that it would work on 10.5.

--Matt





[Ifeffit] R dependence of S02

2011-02-04 Thread Scott Calvin

Hi all,

I've been pondering how much R dependence we should expect to see in  
S02; that is, how much R dependence is shown by intrinsic losses.


I've looked at quite a bit of the literature, not to mention old  
threads from this mailing list. One recent (2002) reference is this  
one by Campbell et al.:


http://prb.aps.org/pdf/PRB/v65/i6/e064107

My understanding is that S02 is intended to account for intrinsic  
losses; that is, those that are determined when the core hole forms.  
Extrinsic losses, such as inelastic scattering of the photoelectron  
and the effect of the core-hole lifetime, are accounted for by a mean  
free path term. The mathematics of how ifeffit implements this are  
here: http://cars9.uchicago.edu/~newville/feffit/feffit.ps


Intrinsic losses are dependent on k for at least two reasons. One is  
that the cross-section of the intrinsic effects themselves depends on  
the energy of the incident x-ray. This is evident if one thinks in  
terms of shake-up and shake-off events. But another reason is that  
shake-up and shake-off events rob some energy from the primary  
photoelectron. At low k, ten or twenty electron volts can alter the  
phase of the primary photoelectron significantly, and thus shake-up  
and shake-off events will tend to cancel each other out. But at high  
k, the energy robbed from the photoelectron is less significant,  
because the EXAFS oscillations are more spread out in energy. Thus,  
shake-up and shake-off events, while still occurring, will not  
suppress the EXAFS amplitude as much at high k. S02 therefore  
gradually rises through most of the EXAFS region to reach a limiting  
value of 1 well above the top of the EXAFS region.


The latter effect--the fact that removing a specified amount of energy  
from the primary photoelectron has less of an effect at high k than at  
low--also implies an R dependence. Low R oscillations are further  
apart in energy than high R oscillations, and thus over a specified k  
range low R oscillations should be less affected. In other words, S02  
should show a modest decrease with increasing R over typical EXAFS  
ranges. Some papers, in particular those with John Rehr as an author,  
confirm that S02 should have an R dependence, but don't discuss the  
implications much.
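The scale of both effects can be put in numbers with the free-electron dispersion (hbar^2/2m = 3.81 eV A^2): a shake-up loss dE shifts the photoelectron wavenumber by dk = dE/(2 * 3.81 * k), so the phase 2kR of a path changes by R*dE/(3.81*k), large at low k, small at high k, and growing with R. A sketch:

HBARSQ_2M = 3.81   # eV * Angstrom^2

def phase_change(dE, k, R):
    # change in the EXAFS phase 2*k*R when the photoelectron
    # loses dE (eV): delta(2kR) = 2*R*dk, dk = dE/(2*HBARSQ_2M*k)
    return R * dE / (HBARSQ_2M * k)

for k in (3, 6, 12):           # 1/Angstrom
    for R in (2.0, 6.0):       # Angstrom
        print(f"k={k:2d}, R={R}: dphi = {phase_change(15, k, R):.2f} rad")

A 15 eV loss scrambles the phase by ~2.6 rad at k = 3/A for a 2 A path (near-complete cancellation) but only ~0.7 rad at k = 12/A; at R = 6 A the same loss is roughly three times more disruptive, which is the direction of the R dependence described above.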


While ifeffit allows for floating ei, a parameter related to the mean  
free path, as I understand it that will still give a damping of the  
amplitude that is exponential in R. It seems to me that the S02  
dependence on R, in contrast, is likely to be more gentle.


Why is this important?

Several authors, including myself, have analyzed crystallite size and/ 
or morphology by comparing the coordination number of successive  
scattering shells. This is potentially much more accurate than just  
finding the first-shell coordination number, because it is independent  
of any amplitude effects that are independent of R, such as  
normalization errors and many experimental effects. Errors in the mean  
free path are a bit more significant, but the exponential dependence  
of the mean free path gives it a very different shape than effects  
from size and morphology. But an R-dependence of S02 would be  
troubling, as the functional form might look a bit more like a size  
effect.


So one question is this: does anyone have an order-of-magnitude  
estimate of how much R dependence to expect in S02 over the EXAFS  
range? If over a range of 2 to 6 angstroms S02 changed by even a few  
percent, that could have a significant effect on the kind of size  
analyses I mentioned in the preceding paragraph.


Of course, another question is if I've completely blown it anywhere in  
my discussion above; I've just been puzzling this out over the last  
few days!


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


Re: [Ifeffit] Fe in glassy ceramics

2011-02-16 Thread Scott Calvin

Hi Andrei,

It's not unreasonable to model such a thing. Use a highly constrained  
model for the spinel, not allowing coordination numbers to float and  
using very few sigma2 parameters. I've published a model like that for  
fitting spinels here:


http://prb.aps.org/abstract/PRB/v66/i22/e224405

My model is more complicated than yours needs to be, because you don't  
really have site disorder; all the sites are Fe.


Then multiply the amplitudes by a guessed parameter representing the  
fraction of the iron that is in the spinel phase.


The remaining amplitude gets used for the glassy phase, which can  
possibly be fit by a single iron-oxygen path, which can be cloned from  
one of the spinel near neighbor paths, but now with most parameters  
floated.


You may very well need to fix S02 based on the value from a standard.

Fit over a wide range in the Fourier transform, so that you take  
advantage of high R peaks to pin down the spinel part.


Worth a shot, anyway. The key is being aggressive with assumptions and  
constraints, and then being honest about how that affects your  
uncertainties in the end.


--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory


On Feb 16, 2011, at 4:53 AM, Andrei Shiryaev wrote:


Dear colleagues,

Probably similar questions were already asked, but nevertheless I  
would appreciate some advice on how to solve the problem.


We are looking at Fe environment in complex glasses, which underwent  
partial crystallization. A fraction of Fe has precipitated as spinel- 
like nanocrystals, another fraction remains in glass. Spinel itself  
is already challenging for XAFS, but our task is to understand the  
Fe fraction in glass (e.g., Fe-O bond length).


We do not know the exact composition of the crystalline phases, thus  
we can not record a suitable standard. Does anybody have an idea how  
could we try to separate the Fe in glass from Fe(cryst) contributions?


I fully realize that the problem is ill-posed, but perhaps some  
people have somehow solved it.


Thanks a lot,
Andrei Shiryaev

Institute of Physical Chemistry
Moscow
Russia




[Ifeffit] Looking for reviewers for XAFS textbook chapters

2011-03-07 Thread Scott Calvin

Hi all,

I am looking for people interested in reviewing portions of the XAFS  
textbook I am working on. I am particularly hoping that some people  
less experienced in XAFS will volunteer, so that I can get feedback on  
how well it teaches and how well it covers the topics you'd like to  
know more about. Experts are also welcome!


The first chapter I've got ready is the chapter on sample preparation.  
If you are interested in reviewing this chapter and can get feedback  
back to me within two weeks, please email me at scal...@slc.edu .


--Scott Calvin
Sarah Lawrence College


Re: [Ifeffit] Looking for reviewers for XAFS textbook chapters

2011-03-08 Thread Scott Calvin

Darn it!

I did not mean to send the packet to the whole list. If anyone is  
interested in reviewing, please send a request to scal...@slc.edu . I  
apologize for spamming you all with a multi-MB file.


--Scott

On Mar 8, 2011, at 12:24 PM, Scott Calvin wrote:


Great!

I'm attaching the review packet. The page numbering is a bit  
chaotic, because it's actually several documents stitched together:


--The copyright notice. My publisher is allowing me to distribute  
review copies, but only on the condition that reviewers not  
redistribute them.
--A detailed table of contents. This is more detailed than the final  
version will be, but gives context to reviewers as to what will be  
covered elsewhere.
--An excerpt from the preface explaining the origin of the cartoon  
characters. (Yes, there are cartoon characters...)
--An introduction to the cartoon characters.
--Chapter 3, on sample preparation.
--References

In the chapter, I've placed numbers along the margin to aid in  
making comments. For example, you might want to comment on page 18,  
lines 6-7. The numbers won't appear in the final version.


If you have any questions about the context of the chapter, the  
intended audience, or whatever, feel free to ask!


--Scott Calvin
Sarah Lawrence College


On Mar 8, 2011, at 11:27 AM, Prof. Dr. Peter Leinweber wrote:


Hello, I am interested in reviewing this. I am not experienced, but I
want to learn more.
I look forward to hearing from you, Peter Leinweber





--
Prof. Dr. Peter Leinweber
Soil Science
Institute for Land Use
University of Rostock
D-18051 Rostock/Germany
Tel.: +49 381 498 3120
Fax: +49 381 498 3122





___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


[Ifeffit] Looking for reviewers for another chapter

2011-04-19 Thread Scott Calvin

Hi all,

I have completed the draft of another chapter of my textbook, and am  
again interested in reviewers (repeat reviewers are fine!). The  
chapter I've currently got ready is about the parameters of the EXAFS  
equation, covering physical meaning, effect on graphs, a bit on common  
constraint schemes, etc. This will be chapter 10 in the book, and so
it's not suitable for absolute beginners. I am interested, though, in  
hearing from people who are still learning to fit, as well as experts.


If you are interested in reviewing the chapter and can get feedback  
back to me by 4 May, please contact me directly at scal...@slc.edu .


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


[Ifeffit] Bug in Athena merge preferences?

2011-04-28 Thread Scott Calvin

Hi Bruce (and Ifeffiteers),

I'm not positive this is a bug--maybe I'm just misunderstanding how  
the features are supposed to work. (It wouldn't be the first time!)


In Athena 0.8.061 on a Mac running OS 10.5.8, I tried the following (I  
tried it in more than one project, to make sure it's not an  
intermittent bug):


--Under Edit Preferences/Merge/Merge_Weight, choose "n" and apply to  
future sessions

--Close Athena and reopen
--Open a project
--Mark two data sets.
--Change the "importance" of one of them.
--Under the merge menu, choose "weight by importance."
--Under the merge menu, "merge marked data in chi(k)"

The result for me is that the merged file ignores the importance I've  
assigned. I've also tried it in mu(E), with the same result.
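
(For reference, the behavior I expected is just a weighted average; a
minimal numpy sketch of that arithmetic, with names of my own
invention--this is not Athena's code:)

    import numpy as np

    # importance-weighted merge of chi(k) arrays sharing a common k-grid;
    # "importance" values are the user-assigned weights
    def weighted_merge(spectra, importance):
        w = np.asarray(importance, dtype=float)
        return np.average(np.vstack(spectra), axis=0, weights=w)

With importances [1.0, 2.0], for instance, the second scan should
count twice as much as the first in the merge.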


It seems that perhaps the "n" value in the preferences overrides the  
"weight by importance" option in the menu? If so, that's not the way I  
expect preferences and menu options to interact.


Trying it with applying the "n" only to the current session also has  
an interesting behavior, although not necessarily the "wrong" one: it  
seems to just change the radio button in the menu. Also, if I change  
the radio button in the menu directly, the option under Preferences  
changes to match. For the current session, I can understand the  
argument that this is a reasonable behavior. But it is likely to cause  
confusion if a user changes it in the menu and then tries to use the  
preferences to check what the future sessions value is set to--if they  
haven't previously changed the preferences directly during the  
session, they might naturally assume that the value there is the value  
for future sessions, but in reality it just represents the most recent  
menu choice they made, and could be different in a future session.


My suggested fix to both issues: the preferences should control what  
value the radio button takes on when Athena is first opened, and  
changes to the preferences should cause an immediate change to the  
status of the radio button. Subsequent changes to the radio button  
should not change the value under the preferences. And the behavior of  
the merge function should always reflect the current value of the  
radio button.


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] EXAFS on multi phase systems

2011-05-05 Thread Scott Calvin

Hi Jatinkumar,

It's fairly common to do so. With linear combination analysis or  
principal component analysis, it's necessarily the case. But it's also  
done with modelling using FEFF. I personally have published many  
papers of this type. One early paper of mine that does this is:


“X-ray absorption spectroscopy of highly-cycled Li/CPE/FeS2 cells,” E.  
Strauss, S. Calvin, H. Mehta,* D. Golodnitksy, S. G. Greenbaum, M. L.  
denBoer, V. Dusheiko, and E. Peled, Solid State Ionics 164, 51 (2003).


--Scott Calvin
Sarah Lawrence College

On May 5, 2011, at 12:26 AM, Jatinkumar Rana wrote:


Dear ifeffit users,

I was wondering if one could apply EXAFS to multi-phase systems (e.g.
two phase systems) where both phases could be crystallographically
different but contain same absorbing atom.

Can anyone suggest any literature which dealt with such a problem ??

With best regards,

Jatinkumar Rana


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] What does FEFF stand for?

2011-05-10 Thread Scott Calvin

Right, Chris.

There is a factor in the EXAFS equation, f(k). In different parts of
the literature, f(k) sometimes has different meanings, but within the
context of FEFF it refers to the effect of the potential of the
scattering atom on both the scattering amplitude (the magnitude of
the complex scattering factor) and the phase (its argument).


Thus, it stands for "f effective."
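
For reference, here is one common way this factor enters the EXAFS
equation (a sketch of the standard single-scattering form; exactly
what gets folded into f_eff and phi varies between references):

    \chi(k) = \sum_j \frac{N_j S_0^2 \lvert f_{\mathrm{eff},j}(k)\rvert}{k R_j^2}
              \sin\bigl(2 k R_j + \phi_j(k)\bigr)\,
              e^{-2\sigma_j^2 k^2}\, e^{-2 R_j/\lambda(k)}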

My understanding, although I could be wrong is that the "effective"  
part came from an improvement of the theory to account for curved-wave  
effects. In other words, early theories approximated the photoelectron  
as a plane wave, but of course it spreads out radially from the  
absorbing atom. That change necessitated tweaking the definitions of  
the factors, so it became the "effective" f.


--Scott Calvin
Sarah Lawrence College

On May 10, 2011, at 11:53 AM, Christopher Patridge wrote:


On 5/10/2011 2:49 PM, Francisco Garcia wrote:

Dear all,

I wish to ask a somewhat novice question: What does the acronym FEFF  
stand for?


Thank you.
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov

http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


I am pretty sure it stands for the calculated effective scattering
factor: F(eff)ective.


buena salud,

Chris Patridge





___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] What does FEFF stand for?

2011-05-10 Thread Scott Calvin
I looked into this a bit further, Bruce, and I'd tentatively say the  
curved-wave corrections do turn out to be the source of the "eff":


The earliest use of f_eff I can find is from a 1986 Phys. Rev. B  
article entitled "Spherical-wave effects in photoelectron  
diffraction," by Sagurton et al. (John Rehr is also in the author  
list). It says "an approximation for including SW [spherical wave]  
corrections suggested recently by Rehr, Albers, and Natoli has been  
incorporated in some of our calculations...the net limiting result is  
a calculation procedure in which an effective scattering factor  
f_eff,j(r,theta_j) which depends on r takes the place of the usual PW  
[plane wave] f_j(theta_j)."


In addition, was FEFF3 a multiple-scattering code? The comments in its  
header and the 1991 JACS article on it mention only single-scattering.


It would make an extraordinary amount of sense that the "eff" would  
refer to FEFF's ability to handle multiple-scattering paths, but I  
don't think that's the actual historical origin of the terminology.


And as for Anatoly's suggestion, I'll, uh, leave that one be for the  
moment.


--Scott Calvin
Sarah Lawrence College

On May 10, 2011, at 12:17 PM, Bruce Ravel wrote:


On Tuesday, May 10, 2011 03:03:23 pm Scott Calvin wrote:

My understanding, although I could be wrong is that the "effective"
part came from an improvement of the theory to account for curved- 
wave
effects. In other words, early theories approximated the  
photoelectron

as a plane wave, but of course it spreads out radially from the
absorbing atom. That change necessitated tweaking the definitions of
the factors, so it became the "effective" f.


I think you are mistaken.  My memory of the etymology has to do with
the formalism dating back to Feff5 for computing MS paths.

For a purely single scattering theory, you have an F and a phi
(without the subscript eff).  That is, you can simply compute the
scattering function for the one scatterer and be done with it.

Feff's path expansion introduced two clever things to the EXAFS
business.  One is that it provided a formalism for computing a single
function that takes into account the angle-dependent scattering
functions of all atoms in an arbitrary-geometry multiple scattering
path.  This allows one to treat a MS path with the familiar SS EXAFS
equation only by replacing F and phi with F_eff and phi_eff.  That
innovation is central to how Ifeffit works.

The second clever thing is that it's really fast.  That's not such a
big deal today, but back in the mid-90s, when a Feff run could take
several minutes, a faster algorithm was very welcome indeed.

B


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Question about transform windows and statistical parameters

2011-05-11 Thread Scott Calvin

Hi Brandon,

I don't find this terribly surprising.

First, a little background which you may or may not know:

Reduced chi-square is a statistical parameter which requires a  
knowledge of the uncertainty of the measurement to compute. In theory,  
therefore, it "knows" that a "good" fit to noisy data will not be as  
close in an absolute sense as a "good" fit to high-quality data.


The problem, however, is that it's difficult to know what is the  
proper quantity to use for the uncertainty of the measurement in EXAFS  
analysis. One could use the standard deviation of subsequent scans,  
but that is only sensitive to random scan-to-scan error. Something  
like, say, a monochromator glitch is quite reproducible, and yet most  
of us would consider it to be part of the measurement error.


So the default behavior of Ifeffit is to look at the Fourier transform  
between 15 and 25 angstroms, and figure that any amplitude there is  
due to error of some kind, and not signal. It then makes the  
assumption that the same amount of error is present in the range being  
fit (i.e. the error is "white"), and from there computes the reduced  
chi-square.
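
In case a concrete picture helps, here is a minimal numpy sketch of
that idea (my own illustration of the concept, not Ifeffit's actual
code; the names and conventions are mine):

    import numpy as np

    def estimate_eps_r(r, chir_mag, rmin=15.0, rmax=25.0):
        # RMS of |chi(R)| over a region assumed to contain only noise
        mask = (r >= rmin) & (r <= rmax)
        return np.sqrt(np.mean(chir_mag[mask] ** 2))

Each residual in the fit is then divided by this epsilon (suitably
propagated) before summing to get chi-square and reduced chi-square.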


This is in some sense a dubious procedure, but the real problem is  
that we don't have a good method for estimating the measurement  
uncertainty, so we have to do something.


As long as we are comparing fits to exactly the same data, on the same  
k-range, with the same k-weight, with the same windows, then changes  
in reduced chi-square are worth looking at. If all you've done is  
change a constraint or change the R-range being fit, for instance, a  
lower reduced chi-square is a good sign (use the Hamilton test if you  
want to be rigorous about it.)


But change the k-range, or the k-weight, or the window, or the data,  
and Ifeffit's estimate of the uncertainty can change wildly, causing a  
correspondingly wild change in reduced chi-square. After all, one  
glitch toward the end of the k-range you are using can introduce a
lot of high-R amplitude into the Fourier transform, and different
windows would treat it very differently. But single-point glitches
often don't have much effect on the results of your fit, precisely
because they do affect the high-R part of the Fourier transform much
more than the low-R part.


Ifeffit's default behavior can be overridden, if you so choose. The  
parameter "epsilon" (available on the Data panel of Artemis) overrides  
Ifeffit's usual estimate for uncertainty. So in your case, I suggest  
putting a number--any number--in for epsilon, and then comparing fits  
using the two windows. Probably you will find that the reduced chi- 
squares become much more similar to each other.


Incidentally, while in this case the default behavior of Ifeffit is  
merely distracting, there is a circumstance where it can be a more  
substantial problem: multiple data-set fits (e.g. on multiple edges of  
the same sample). If Ifeffit finds uncertainties for the different  
data sets that are quite different from each other because, for  
instance, of the presence of a glitch in one, it will in effect weight  
the data very differently when doing a fit.  In multiple-data set  
fits, therefore, it is often advisable to come up with your own scheme  
for setting epsilons (perhaps inversely proportional to the edge jump  
of the set, or something like that), to avoid wonky weightings.


--Scott Calvin
Sarah Lawrence College


On May 11, 2011, at 12:47 PM, Brandon Reese wrote:


Hello everybody,

I am working on fitting some EXAFS of amorphous materials and have  
noticed an odd (in my mind) behavior when changing transform  
windows.  I settled on a fit using all three k-weights and the  
Hanning transform window obtaining statistical parameters of  
R=0.0018 and chi_R=361.  I decided to change the transform window to  
a Kaiser-Bessel to see what would happen.  The refined parameters  
came out more or less the same, well within the error bars, with the  
Hanning windows having slightly smaller error bars.  But my  
statistical parameters changed significantly to R=0.0022 and  
chi_R=89.354.  It seems that this large change may be related to why  
we can't use the chi_R parameter to compare fits over different k- 
ranges, but I am not sure about that.  Have other people seen this?   
I would guess it means that when looking for trends in different  
data sets, it is more important to be consistent, rather than which  
specific window type is used.


Thanks,
Brandon



___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] EXAFS

2011-05-11 Thread Scott Calvin

Hi Francisco,

I'll take a shot at some of these questions--I'm sure others will  
chime in as well.

On May 11, 2011, at 2:34 PM, Francisco Garcia wrote:


Dear all,

I tried computing K-edge chi(k) for a solvated metal ion. For each
snapshot, I simply carved out a cluster of radius 6 Ang around the
metal.

1. if I set RMAX to say 5.5 in feff.inp, the chi(k) plot looks totally
different from the case where the RMAX line is commented out
(particularly k>10). I thought RMAX is the largest metal-H or metal-O
separation but I think I am wrong

RMAX is the longest half-path length to include in chi(k). For direct  
scattering paths, it is therefore the most distant scattering pair  
that is allowed to contribute. So your understanding of the meaning is  
essentially correct.


In FEFF7, the documentation says it defaults to 2.2 times the
nearest-neighbor distance if it is not included. So commenting out
RMAX does not mean
to include all paths! That could be responsible for a considerable  
difference in chi(k).
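
For concreteness, the card in question looks like this in feff.inp
(the value is illustrative):

    * fragment of a feff.inp
    RMAX  5.5     * longest half-path length to include, in Angstroms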



2. I don't quite understand why the number of paths increases
(sometimes by a factor of 3) when I set RMAX as opposed to the case
where I comment it out.


See above. If your nearest neighbor is at, say, 1.8 angstroms, then  
commenting out RMAX is equivalent to setting it to 3.96 angstroms.  
That's much less than 5.5, and explains the smaller number of paths.




3. From time to time I get warnings indicating that a water O-H
distance is too short:

  WARNING:  TWO ATOMS VERY CLOSE TOGETHER.  CHECK INPUT.
   atoms 5 7
     5   0.14400   2.09700   1.16300
     7   0.77400   2.67400   1.00800

I tried setting FOLP to 0.8 for H but the warnings persist.


Don't worry about it. This FEFF warning doesn't really expect  
hydrogens, and so sometimes gives a warning for atoms that would be  
close if they were anything bigger, but are OK if one is a hydrogen.  
Looking at the coordinates it specifies in your example, the two atoms  
are about 1 angstrom apart--OK if one is a proton.


4. On what criteria are sigma2 and s02 chosen? They are not usually
reported in published theoretical papers and I was wondering if they
are arbitrary.


FEFF is not usually used for EXAFS on its own. Most people use a  
fitting program, such as Ifeffit, to optimize the values of S02 and  
sigma2 to provide the best fit. (Although S02 in particular is often  
fit for a known material, and then that value used.) When used in that  
way, FEFF should be run with S02 = 1 and sigma2 = 0 to avoid confusion.
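
In feff.inp terms, that means cards along these lines (a sketch;
check your FEFF version's documentation for the cards it supports):

    S02   1.0     * no amplitude reduction in the calculation itself
    SIG2  0.0     * no global Debye-Waller factor; let the fit supply sigma2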




5. Finally, is the chi(k) dependent on which version of FEFF is being
used. I am asking this because the peak height of my FEFF6 chi(k) is
slightly lower than that of published results using FEFF8.



Yes, they are slightly different. For EXAFS, they probably aren't  
different enough to affect the answers to your scientific questions  
much, though.


--Scott Calvin
Sarah Lawrence College

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Question about transform windows and statistical parameters

2011-05-12 Thread Scott Calvin

Hi Brandon,

Matt and Bruce both gave good, thorough answers to your questions this  
morning. Nevertheless, I'm going to chime in too, because there are  
some aspects of this issue I'd like to put emphasis on.


On May 11, 2011, at 8:46 PM, Brandon Reese wrote:

 I tried your suggestion with epsilon and the chi-square values came  
out to be very similar values with the different windows.  Does this  
mean that reporting reduced chi-square values in a paper that  
compared several data sets would not be necessary and/or appropriate?


Bruce said "no" emphatically, and I say "yes," but I think we've  
understood the question differently. As Bruce says:


Of course, reduced chi-square can only be compared for fitting  
models which compute epsilon the same way or use the same value for  
epsilon.


That's the key point. I've gotten away from reporting values for  
reduced chi-square (RCS). That's a personal choice, and is not in  
accord with the International X-Ray Absorption Society's Error  
Reporting Recommendation, available here:


http://ixs.iit.edu/subcommittee_reports/sc/

I think the difficulty in choosing epsilon is more likely to make a  
reduced chi-square number confusing than enlightening. But I am moving  
increasingly toward reporting changes in reduced chi-square between  
fits on the same data, and applying Hamilton's test to determine if  
improvements are statistically significant.
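
For anyone curious what that looks like in practice, here is a
generic F-test sketch in the spirit of the Hamilton test (my own
illustration; what to use for the number of independent points is
left to the reader):

    from scipy.stats import f as f_dist

    def f_test(chi2_a, p_a, chi2_b, p_b, n_idp, alpha=0.05):
        # fit b is the fit with more free parameters (p_b > p_a);
        # returns True if the improvement is significant at level alpha
        fval = ((chi2_a - chi2_b) / (p_b - p_a)) / (chi2_b / (n_idp - p_b))
        fcrit = f_dist.ppf(1.0 - alpha, p_b - p_a, n_idp - p_b)
        return fval > fcrit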



 Would setting a value for epsilon allow comparisons across  
different k-ranges, different (but similar) data sets, or a  
combination of the two using the chi-square parameter?




Maybe not. After all, the epsilon should be different for different k- 
ranges, as your signal to noise ratio probably changes as a function  
of k. Using the same epsilon doesn't reflect that.





In playing around with different windows and dk values my fit  
variables generally stayed within the error bars, but the size of  
the error bars could change more than a factor 2.  Does this mean  
that it would make sense to find a window/dk that seems to "work"  
for a given group of data and stay consistent when analyzing that  
data group?


The fact that your variables stay within the error bars is good news.  
The change in the size of the error bars may be related to a less-than- 
ideal value for dk you may have used for the Kaiser-Bessel window.


But yes, find a window and dk combination that seems to work well and  
then stay consistent for that analysis. Unless the data is  
particularly problematic, I'd prefer making a reasoned choice before  
beginning to fit and then sticking with it; a posteriori choices for  
that kind of thing make me a little nervous.


* * *

At the risk of being redundant, four quick examples.

Example 1: You change the range of R values in the Fourier transform  
over which you are fitting a data set.
For this example, RCS is a valuable statistic for letting you know  
whether the fit supports the change in R-range.


Example 2: You change the range of k values over which you are fitting  
your data.
For this example, comparing RCS is unlikely to be useful. You are  
likely trying different k-ranges because you are suspicious about some  
of the data at the extremes of your range. Including or excluding that  
data likely implies epsilon should be changed, but by how much? Thus  
the unreliability of comparing RCS in this case.


Example 3: You change constraints on a fit on the same data range.
For this example, comparing RCS is very useful.

Example 4: You compare fits on the same data range, with the same  
model, on two different data sets which were collected during the same  
synchrotron run under similar conditions.
For this example, proceed with caution. You may decide to trust  
Ifeffit's method for estimating epsilon, or you may be able to come up  
with your own (perhaps basing it on the size of the edge jumps).  
Hopefully issues like glitches and high-frequency jitter are nearly  
the same for both samples, which gives you a fighting chance of making  
reasonable estimates of epsilon. Done with a little care, there may be  
value in comparing RCS for this kind of case.


--Scott Calvin
Sarah Lawrence College

___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Athena Artemis

2011-05-12 Thread Scott Calvin

Hi George,

I've confirmed the behavior you describe on a Mac running OS 10.5.8,  
Artemis 0.8.014, Ifeffit 1.2.12, and Athena 0.8.061.


It does seem to be a bug or a corrupted file, but I can't figure out  
what it's actually doing. It's not grabbing the wrong data from the  
Athena project--none of the other data sets look that close. It sort  
of seems like it imports a version with a different background  
subtraction (maybe a different background k-weight, for instance) when  
you import from the larger project.


--Scott Calvin
Sarah Lawrence College

On May 12, 2011, at 11:18 AM, George Sterbinsky wrote:

Also, for anyone who just wants to see the difference in the two  
data sets I've attached a figure.


George

On Thu, May 12, 2011 at 1:23 PM, George Sterbinsky wrote:

Hello,

I've noticed an odd behavior in Athena and Artemis and I was hoping  
someone could explain it to me.


I've attached an Artemis file Data3.apj and an Athena file Data4.prj.

First open Data3.apj. Then from the Artemis file menu choose open,  
then open Data4.prj and import the Data4.xmu file. Now plot the data  
in k-space and you will see a slight difference between the two spectra,  
most noticeably above 15 k, so plotting in k^3 is best to see the  
difference.


Now close Artemis and don't save. Open Data4.prj with athena and  
choose to import only the file Data4.xmu. Save the project as  
something else. I saved as Data4C.prj, which I have also attached.


Close Athena and don't save. Open Data3.apj again. Then from the  
Artemis file menu choose open, then open Data4C.prj or whatever you  
may have named the file and import the Data4.xmu file, which should  
now be the only data file in the project. Now plot the data in k- 
space and you will find that the two data sets are now the same. The  
differences at high k are no longer present. Can anyone explain what  
is going on here?


Thank you,
George




___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Question about transform windows and statistical parameters

2011-05-13 Thread Scott Calvin
Matt,

On May 13, 2011, at 8:39 AM, Matt Newville wrote:

After all, the epsilon should be different for different k-ranges, as
your signal to noise ratio probably changes as a function of k. Using
the same epsilon doesn't reflect that.

Without seeing the data in question, this seems like speculation to
me. I'm not at all sure why epsilon (the variation in chi(k)) should
depend strongly on the k-range. In my experience, it usually does
not. The S/N ratio will surely change with k, but that would surely
be dominated by the rapid decay in |chi(k)|, rather than a change in
epsilon.

I'm confused. We Fourier transform k-weighted data. Since Ifeffit
uses the high-R amplitude to estimate uncertainty, it seems to me
that what matters is signal-to-noise, not just noise in the original
unweighted chi(k). Am I wrong in that? I may be misunderstanding how
epsilon_r is calculated. And epsilon_r is the relevant epsilon for a
fit in R space, right?

I think your assumption that epsilon will depend strongly on k may
not be correct. Do you have evidence for this? I would say that it is
not strongly dependent on k, and that reduced chi-square is useful in
comparing fits with different k-ranges.

I just tried it on the FeC2O4 chi(k) attached to this post. It's a
good example of data where it's not immediately clear to me what the
"best" value for kmax is, so it would be tempting to use RCS to
compare fits over different k-ranges. I used k-weight 3, and Hanning
windows with dk = 1. I chose kmin as 2 and stepped kmax by 0.5,
recording epsilon_r for each:

kmax      epsilon_r
7         0.034840105
7.5       0.041843848
8         0.082627337
8.5       0.087550367
9         0.086032007
9.5       0.085996216
10        0.088679339
10.5      0.090364699
11        0.092509939
11.5      0.108103081

There's a general trend of increasing epsilon_r with increasing kmax.
There's also a jump of a factor of 2 between 7.5 and 8. Why? Because
there's a glitch there, and the glitch adds high-R structure.

To make sure there wasn't something odd about this particular chi(k),
I took one of the data sets included with the horae distribution: the
file y300.chi in the ybco folder. I followed the same procedure as
before, except I stepped by 1 inverse angstrom each time, because of
the greater data range.

kmax      epsilon_r
7         0.012866125
8         0.073383695
9         0.078255772
10        0.080016040
11        0.091634572
12        0.105419473
13        0.164341701
14        0.195266957
15        0.224727593
16        0.411139882
17        0.480293296

If anything, the trend is clearer here.

I find it confusing that you expect the noise in the data to depend
(strongly, even) on k, but not on R. The general wisdom is the
estimate of epsilon from the high-R components is too low, suggesting
that the R dependence is significant. Every time I've looked, I come
to the conclusion that noise in data is either consistent with
"white" or so small as to be difficult to measure. I believe Corwin
Booth's recent work supports the conventional wisdom that epsilon
decreases with R, but I don't recall it suggesting a significant k
dependence.

I'm not making any claims as to whether, in general, the noise in the
data depends on R. I can speculate about circumstances where low-R
noise is greater (due, for instance, to temperature fluctuations in
cooling water, which are likely to be fairly slow), or where high-R
noise is greater (an example here would be if whatever system is
keeping the beam on the sample vertically as the mono scans is
tending to overshoot).

But Ifeffit's estimation of epsilon_r demonstrably does not depend on
the R-range used for fitting, regardless of the distribution of noise
in R. That's a very different thing. Thus, changing the R-range of a
fit is completely safe as far as comparing RCS goes.

--Scott Calvin
Sarah Lawrence College
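
P.S. For anyone who wants to reproduce this kind of sweep, here is a
self-contained numpy sketch of the procedure I followed (my own
illustration; the window convention, transform normalization, and
grids are assumptions, so the absolute numbers will not match
Ifeffit's exactly, though the trend should):

    import numpy as np

    def eps_r(k, chi, kmax, kmin=2.0, dk=1.0, kweight=3,
              kstep=0.05, nfft=2048, rnoise=(15.0, 25.0)):
        # put chi(k) on a uniform grid, zero outside the measured range
        kg = np.arange(nfft) * kstep
        chig = np.interp(kg, k, chi, left=0.0, right=0.0)
        # Hanning window with sills of width dk centered on kmin and kmax
        win = np.zeros_like(kg)
        rise = (kg >= kmin - dk / 2) & (kg < kmin + dk / 2)
        flat = (kg >= kmin + dk / 2) & (kg <= kmax - dk / 2)
        fall = (kg > kmax - dk / 2) & (kg <= kmax + dk / 2)
        win[rise] = np.sin(np.pi / 2 * (kg[rise] - kmin + dk / 2) / dk) ** 2
        win[flat] = 1.0
        win[fall] = np.cos(np.pi / 2 * (kg[fall] - kmax + dk / 2) / dk) ** 2
        # k-weighted, windowed transform; R is conjugate to 2k
        chir = np.fft.rfft(chig * kg ** kweight * win) * kstep / np.sqrt(np.pi)
        r = np.pi * np.fft.rfftfreq(nfft, d=kstep)
        mask = (r >= rnoise[0]) & (r <= rnoise[1])
        return np.sqrt(np.mean(np.abs(chir[mask]) ** 2))

    # k, chi = the two columns of the chi file (strip header lines first)
    # for kmax in np.arange(7.0, 11.6, 0.5):
    #     print(kmax, eps_r(k, chi, kmax))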

FeC2O4.chi
Description: Binary data


y300.chi
Description: Binary data
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Question about shift in E0

2011-06-06 Thread Scott Calvin

Hi Brandon,

I don't find this terribly surprising if your samples are mixtures of  
a metal and a metal oxide.


If you look at the derivative of the XANES spectra of a pure  
transition metal and its oxides, you'll generally find that the  
biggest difference is not in the position of the derivative peaks, but  
in their relative size; for the metal the lower energy derivative peak  
is much larger than it is for the oxide. A mixture of oxide and metal,  
therefore, will be primarily evident in XANES through the relative  
heights of these peaks; sometimes the apparent shift can be very  
subtle indeed. For example, consider what happens if the first peak in  
the first derivative is 20 times lower in the oxide than in the metal,  
and is also 1.5 eV higher in energy. If you have one sample which is  
70% metal, and you compare it to a sample which is 40% metal, the tiny  
contribution of the oxide to the first peak will hardly be visible in  
either case, and you might see no measurable shift in energy in the  
XANES for that peak. (You'll see a big change in its amplitude,  
though.) Likewise, the white line may be dominated by the oxide, and  
show little shift in the XANES (but once again, a big change in  
amplitude).


An EXAFS fit, however, doesn't suffer from weighting issues in the  
same way, and may indeed show an E0 shift when the mixture of  
oxidation states changes.


For my description to apply to your system, though, I'd expect you to  
be seeing substantial changes in the amplitudes of XANES features,  
even if their position doesn't shift much. Is that the case for your  
data?


--Scott Calvin
Sarah Lawrence College

On Jun 5, 2011, at 11:56 PM, Brandon Reese wrote:


Hi all,

I am looking at EXAFS of thin film metal oxides.  I am varying both  
metal content and the oxygen content of the films. I aligned the  
scans with a metal reference foil collected simultaneously.  In  
Artemis, I have noticed that when changing between films with no  
extra oxygen versus those with extra oxygen there is a shift in the  
fitted E0 of ~1.5 eV (after aligning to the foil). I tried setting  
the E0 in Athena to the peak of the 1st derivative and the peak of  
the white line with the same result (~7 eV difference). I was a  
little surprised by the offset because in Athena the E0 values  
varied by <0.5 eV. I am not sure if the argument could be made that  
this shift is a result in a changing oxidation state because it  
doesn't show up in the XANES (at least qualitatively).  Are there  
other experimental effects that could cause a shift like this, or is  
this likely something real in my material? If anyone want to see a  
representative group of data, let me know.






___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


[Ifeffit] McMaster correction

2011-06-16 Thread Scott Calvin

Hi all,

I've been pondering the McMaster correction recently.

My understanding is that it is a correction because while chi(k) is  
defined relative to the embedded-atom background mu_o(E), we almost  
always extract it from our data by normalizing by the edge step. Since  
mu_o(E) drops gradually above the edge, the normalization procedure  
results in oscillations that are too small well above edge, which the  
McMaster correction then compensates for. It's also my understanding  
that this correction is the same whether the data is measured in  
absorption or fluorescence, because in this context mu_o(E) refers  
only to absorption due to the edge of interest, which is a  
characteristic of the atom in its local environment and is thus  
independent of measurement mode.
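
In symbols, my understanding amounts to this (a sketch; mu_0 here is
the embedded-atom absorption from the edge of interest only, so
mu_0(E_0) plays the role of the edge step):

    \chi_{\mathrm{norm}}(k) = \frac{\mu(E) - \mu_0(E)}{\mu_0(E_0)}
    \quad\Longrightarrow\quad
    \chi(k) = \chi_{\mathrm{norm}}(k)\,\frac{\mu_0(E_0)}{\mu_0(E)}

Since mu_0(E) falls off above the edge, the ratio grows with k; that
growing factor is what the McMaster correction supplies.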


So here's my question: why is existing software structured so that we  
have to put this factor in by hand? Feff, for instance, could simply  
define chi(k) consistently with the usual procedure, so that it was  
normalized by the edge step rather than mu_o(E). A card could be set  
to turn that off if a user desired. Alternatively, a correction could  
be done to the experimental data by Athena, or automatically within  
the fitting procedure by Ifeffit.


Of course, having more than one of those options could cause trouble,  
just as the ability to put sigma2 into a feff calculation and in to  
Ifeffit sometimes does now. But wouldn't it make sense to have it  
available (perhaps even the default) at one of those stages?


--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] McMaster correction

2011-06-17 Thread Scott Calvin

Thanks, all!

Here's what I got out of the discussion:

FEFF is calculating the "correct" chi(k), and applying an approximate  
correction introduces additional sources of error. But the only way to  
measure chi(k) is to extract it from unnormalized data, and the  
original definition of chi was an arbitrary, if sensible, one: chi(E)  
= mu(E)/mu_o(E) - 1. And mu_o(E), while not known with great accuracy,  
depends only on the element and the edge (perhaps excepting minor  
contributions from AXAFS).


Not applying a correction, whether McMaster or something more accurate  
(such as the ones Anatoly and John suggested), is equivalent to using  
the approximation mu_o(E) = mu_o(E_o), which is less accurate than the  
alternatives. On the other hand, the effect is almost entirely a shift  
in the absolute (as opposed to relative) value of sigma^2.


Considering that, it seems to me that this would be a good option for  
Athena when calculating chi(k). (I think it would be more problematic  
to apply when calculating normalized energy-space data, as in that  
case the correction would depend on instrumental effects and the  
absorption of other edges in the sample.) So, Bruce, I guess this was  
first a discussion and then a feature request. :)


--Scott Calvin
Sarah Lawrence College

On Jun 17, 2011, at 4:14 AM, John J. Rehr wrote:


Hi Scott et al.,

  Thanks for bringing up this issue. Whether or not McMaster  
corrections

are useful does seem to depend on details of the measurement. But
my question is: for the cases where they are useful, can one do
better? As the data & theory get better and better, perhaps we  
should try
to extract more accurate cross sections mu(E). For example, is it at  
all

of interest to have embedded atom cross-sections to replace the atomic
based Cromer-Liberman cross  sections or empirical tables?

 John


On Thu, 16 Jun 2011, Scott Calvin wrote:


Hi all,
I've been pondering the McMaster correction recently.

My understanding is that it is a correction because while chi(k) is  
defined
relative to the embedded-atom background mu_o(E), we almost always  
extract

it from our data by normalizing by the edge step. Since mu_o(E) drops
gradually above the edge, the normalization procedure results in
oscillations that are too small well above edge, which the McMaster
correction then compensates for. It's also my understanding that this
correction is the same whether the data is measured in absorption or
fluorescence, because in this context mu_o(E) refers only to  
absorption due
to the edge of interest, which is a characteristic of the atom in  
its local

environment and is thus independent of measurement mode.

So here's my question: why is existing software structured so that  
we have
to put this factor in by hand? Feff, for instance, could simply  
define
chi(k) consistently with the usual procedure, so that it was  
normalized by
the edge step rather than mu_o(E). A card could be set to turn that  
off if a
user desired. Alternatively, a correction could be done to the  
experimental
data by Athena, or automatically within the fitting procedure by  
Ifeffit.


Of course, having more than one of those options could cause  
trouble, just
as the ability to put sigma2 into a feff calculation and in to  
Ifeffit
sometimes does now. But wouldn't it make sense to have it available  
(perhaps

even the default) at one of those stages?

--Scott Calvin
Sarah Lawrence College


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] McMaster correction

2011-06-17 Thread Scott Calvin
I apologize if I have abused the list while working on my text. I will  
find a different channel for raising these questions and requests once  
the current discussion is complete.


--Scott

On Jun 16, 2011, at 8:11 PM, Matt Newville wrote:


Hope that helps.I have to admit I'm a little uneasy with the
frequency of "I've been pondering..." discussion topics alternating
with requests to review book chapters, and find myself being more
cautious in my response than I would if someone was actually asking a
question.   On the other hand, I don't think anyone would object if
you added a button to Athena that normalized the data in a way that
included a correction for the expected decay of mu(E).


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] SS and MS contributions to EXAFS

2011-06-18 Thread Scott Calvin

Hi Francisco,

Following up on Matt's comment, a FEFF calculation with default
options assigns sigma2 to be 0, and that probably won't result in
relative contributions between SS and MS that look much like what  
happens in the real material.


A better strategy would be to read the calculation into Artemis, as  
Matt suggests, and use a Debye model for the sigma2 for the paths,  
trying a few different Debye temperatures to get a sense of how it  
affects the relative contributions. While the Debye model is not an  
appropriate model for quantitatively fitting most materials, at least  
it takes a stab at how sigma2 might depend on R, which will help with  
understanding how important the MS paths are to chi(k).
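
Incidentally, the subtraction Francisco describes below amounts to
just this (a sketch; the file names come from his message, while the
column layout and header handling are my assumptions):

    import numpy as np

    # chi(k) from the NLEG=8 run and from the NLEG=2 (single-scattering)
    # run; skiprows is a guess at the number of header lines in the files
    k, chi_all = np.loadtxt("chi.dat", skiprows=4, unpack=True)[:2]
    _, chi_ss = np.loadtxt("chi_ss.dat", skiprows=4, unpack=True)[:2]

    chi_ms = chi_all - chi_ss   # the multiple-scattering contribution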


--Scott Calvin
Sarah Lawrence College

On Jun 17, 2011, at 7:21 PM, Matt Newville wrote:


Hi Francisco,

On Fri, Jun 17, 2011 at 7:34 PM, Francisco Garcia
 wrote:

Dear users,

I would like to quantify the single scattering (SS) and multiple
scattering (MS) contributions to the EXAFS spectra over a range of k
values. I adopted the following approach and I would like ask
experienced users if my approach is sound:

(1) Run a regular EXAFS (default NLEG=8); I assume this includes all
possible scattering paths. Call the chi(k) data chi.dat
(2) Run another EXAFS but this time I set NLEG=2. I assume this is  
for

all SS contributions. Call the chi(k) data chi_ss.dat
(3) To obtain the MS contribution, I subtracted chi_ss.dat from  
chi.dat


If my approach is faulty, can you tell me how to remedy it?

Thank you.


That approach would work under the assumption that the atomic
coordinates in the feff.inp file fully described the distribution of
atoms in the system.  If, on the other hand, the feff.inp had an
idealized structure, then the approach would model the MS
contributions for that structure.  For example, a feff.inp file often
assumes no static or thermal disorder in the system.  In that case,
the MS contributions would have no disorder terms (eg, sigma^2)
applied to them.

An equivalent approach that may be somewhat simpler would be to use
Artemis to run the Feff calculation with NLEG=8 (4 is probably good
enough below 6Ang in most structures),  read in all the paths, and sum
the paths with NLEG>2.

--Matt
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] E0 issues

2011-06-27 Thread Scott Calvin

Hi Hana,

On Jun 27, 2011, at 2:04 PM, Hana wrote:

I guess my question is obvious to most of you, but after some  
practice and  reading, the following is still unclear to me  
(hopefully some other beginners  will benefit): Normally I have  
calibrated my spectra using a calibration foil –  so energy shift  
was done according to that companion standard. Doing a simple   
linear combination fit, I have set all spectra to the same E0 value,  
somewhere on the edge, above the first derivative maxima (was it a  
good practice?).


That is a good practice, in my opinion. If you're going to do a linear  
combination fit, it doesn't really matter how you choose E0, as long as:


1) The spectra are all aligned on the same energy scale

2) The choice of E0 is in some way consistent

By using a calibration foil, you have assured #1, and then by simply  
making E0 the same for all spectra, you have assured #2.


Recently, I found myself actually confused about E0; since my  
samples inherently have a phase difference (which is the base to my  
ability to differentiate them), how a certain reference point on the  
spectrum can be determined?


"Reference point" is indeed the correct term. As such, it is somewhat  
arbitrary, and just needs to be consistent for all spectra being  
compared.


What can be done when I do not have the calibration foil (especially  
for these heavy elements that do not have specific sharp feature)?


Consistency is the only requirement. There are many ways to align  
consistently. If you've got really noisy reference data for some  
reason, you could even fit some kind of function to the edge and use  
that, but for reference data that's not usually necessary.


And further, now that I am starting to work on the structural model;  
how actually IFEFFIT determined the energy shift when there is no  
specific reference point?


If you float E0 when fitting to feff files, the reported value is a  
shift relative to wherever you initially picked it. Thus if you change  
the initial choice of E0 by 2 eV upward, the shift ifeffit reports  
should be 2 eV less.


Hope that helps...

--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] E0 issues

2011-06-27 Thread Scott Calvin

Hi again,


Thanks Scott,
I think it is a mistake done by many beginners. To be confident that  
I understood you well: so basically the reproducibility of E0  
between samples while fitting to FEFF files mainly depends on the  
quality of my calibration.


For reproducibility, yes.

Actually, if there is no clear feature in my edges and they are  
different in ‘slopes’, I must have either a calibration foil, or a  
known standard, measured at the same time; otherwise there is no  
good way to get comparable data?


Some beamlines are more stable in energy than others. Sometimes it  
suffices to measure a standard occasionally between measuring data. If  
the calibration does not drift, or drifts in a predictable way, then  
you're OK. But if the calibration jumps around by an eV or two, as is  
not uncommon, then your data inherits that uncertainty unless you  
measure a reference material at the same time.


Is that right, or will the fitting, considering all spectra's
components, finally lead to a good fit even with relatively poor
calibration? (I will definitely be more careful in the future, but
I'm asking about a set of samples that were mistakenly measured for
me without a standard.)


This is a different question. If you are fitting to FEFF, then it  
doesn't matter if the E0's were defined consistently for each sample,  
or if the calibration drifted. You will most likely float E0 as a free  
parameter anyway, so the fitting process will adjust for the  
differences. You do lose the ability to compare the E0's of your  
different samples, and you lose the ability to constrain them to be  
the same, but the rest of the fitting process is fine.


--Scott Calvin
Sarah Lawrence College

P.S. Also note that if ifeffit returns an E0 shift of more than about  
10 eV, that's a warning sign. Check if that would correspond to an E0  
still on or near the rising portion of the edge (a bit past the white  
line is still OK). If it's not, then the fit is not a good one. If it  
is, then it's best to choose a new E0 in Athena (or SixPack) that is  
closer to to where ifeffit wants it; FEFF loses accuracy when the E0  
has to be shifted by that much.



___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Opening SSRL data in Athena

2011-07-07 Thread Scott Calvin
Actually, Athena can open binary SSRL files--I just successfully  
opened one of Lia's files in Athena.


Here's what I did:

Under Settings, choose Plugin registry.
Check the SSRLB box.

From there, it opened fine.

Does trying that work for you, Lia?

I am using Athena 0.8.061 on a Mac running OS 10.5.8.

--Scott Calvin
Sarah Lawrence College

On Jul 7, 2011, at 6:20 PM, Wayne W Lukens Jr wrote:


Hi Lia,

Those are binary files from SSRL, which athena cannot open. There  
are a few ways that you can deal with these files.


Easiest is to use Six-Pack to work up the data. I believe it will  
open SSRL binary files. Six-Pack is a GUI-driven suite of programs  
analogous to Athena and Artemis.


Slightly less easy is to convert your binary data to ascii data,  
which can be done on the SSRL computer that you used to collect the  
data. At this point, you will have to ask the beamline scientist to  
do this for you. You can definitely open the ascii files in Six-Pack.


Somewhat more difficult is to use EXAFSPAK to work up the data.  
These are a suite of programs for EXAFS data analysis that are not  
particularly easy to learn. They will open the binary files.

EXAFSPAK is easy to use once you have learned the commands.

Six-Pack:

http://home.comcast.net/~sam_webb/sixpack.html

EXAFSPAK:

http://ssrl.slac.stanford.edu/exafspak.html

Sincerely,

Wayne


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Question on GDS values: amp

2011-08-01 Thread Scott Calvin

Hi Nic,

The delr and ss values may adopt reasonable values with a negative  
amp, but they're almost certainly not the right values. A negative amp  
turns the chi(k) upside-down. To make it fit, the other parameters then  
have to shift the graph over by half an oscillation, yielding values  
of E0 and delR that are wrong.


It is unusual that a first-shell fit to a standard doesn't seem to be  
giving a qualitatively reasonable result with an amp of 1. What  
happens if you do a sum of the most important paths without a fit?  
Does the fit look qualitatively correct over a large range of R? (The  
amplitudes will of course be substantially off without a  
fit.)It sounds like something substantial is wrong: either  
the material isn't actually dadolinium oxide, or the structure you're  
using for the feff calculation isn't right (is there more than one  
crystallographic setting?), or there was serious distortion in the  
measurement (perhaps a thick, concentrated standard was measured in  
fluorescence without correction).


--Scott Calvin
Sarah Lawrence College

On Aug 1, 2011, at 12:36 AM, nicholas@csiro.au wrote:


Dear All,

Just wondering, what could I do to make the amp guess value a  
positive number? If I run it and let it float (i.e. guess), the amp  
becomes negative in the resulting fits, but the fit has nice delr  
and ss values (i.e. make sense). If I restrict the amp value to 1,  
everything else doesn’t fit. I am only fitting the first nearest  
neighbour in a measured standard; Gadolinium oxide.


Regards,

Nic




___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Question on GDS values: amp

2011-08-02 Thread Scott Calvin

Hi Nic,

On Aug 2, 2011, at 6:41 AM, nicholas@csiro.au wrote:

 Just a sidenote, is the general workflow for fitting XAFS data the  
following:


Fit first shell and get reasonable Enot and Amp and then set
them, then incrementally add more of the scattering paths and adjust  
the delR for each path correspondingly. Adjusting degeneracy for  
atom assuming there are slight differences in the atoms spacing.


I know the question is very trivial but I don't seem to be able to  
find a general guideline for knowing when the fitting process is  
over in these type of analysis.


As for your side note, there are several workflows that have success.  
The most appropriate one depends both on your level of knowledge about  
the material and your personal preference.


I don't particularly favor the one you describe, though, because it's  
an attempt to fit a few parameters at a time to avoid wrestling with  
correlations. That doesn't end up actually avoiding correlations; it  
just locks them in.


For instance, if you first guess amp, find a "best fit" value, set it,  
and then run a fit with something else varied (say, a degeneracy),  
then you've artificially broken the correlation between amp and  
degeneracy. But you've done it in a completely arbitrary way...you  
haven't really explored the space of the two varied jointly. (Of  
course, if you're only doing a single shell, you can't vary both S02  
and N, because they correlate completely. But you certainly can't  
pretend you can by first varying S02 with N fixed, and then varying N  
with S02 fixed!)


The two most prevalent valid fitting strategies I've seen are:

"Bottom up." (That's my name for it. I've also heard it called "shell  
by shell.") In this strategy, you start with a single shell with few  
constraints; perhaps you guess N (taking S02 from a standard), E0,  
delR, and ss. You try different nearest-neighbors and see what works  
best. As you begin to gain knowledge about the system, you add more  
distant shells and begin to add reasonable constraints. In a  
biological system where you've determined nearest neighbors are  
sulfur, for instance, then your knowledge of the particular system may  
suggest what ligands are present, which might provide information  
about second nearest-neighbors. For instance, the number of second  
nearest-neighbor carbons might be equal to the number of near-neighbor  
sulfurs, constraining some of the degeneracies.


"Top down." Start with a highly-constrained, multi-shell fit. For  
instance, you might include all important paths out to 5 angstroms,  
with the only free parameters S02, E0, and a sigma2 parameter or two.  
If the fit appears qualitatively close, constraints can then be  
relaxed to more realistically model the particular features of the  
material (as one example, you can vary degeneracies to allow for  
vacancies or nanoscale effects). If the fit does not appear  
qualitatively close initially, it is probably the wrong starting  
material. (Amplitudes are often far off initially with this approach,  
and there may be small phase shifts, but the first few big peaks  
should be roughly in the right place. A minor variation on this  
approach is to start with a sum of paths rather than fit at all.)


Note that both strategies end up in the same place: a modestly  
constrained multi-shell fit. (And no, that's not always possible-- 
sometimes first shell information is all you can get.)


When do you use each? If your material is a modestly modified version  
of a known crystal, top down is a good way to go. For instance, you  
might have a doped zinc oxide. You know the structure of zinc oxide;  
the question is the effect of the dopant. Why waste effort trying to  
fit just a nearest-neighbor first, which can be quite difficult, when  
you think you know the rough structure out to several angstroms? On  
the other hand, for something like a protein you probably don't  
initially know much about the structure at all. Then bottom up works  
better. Many problems, such as some environmental problems, fall in  
between the two, and either approach might be effective.


A strategy that I've sometimes seen used by beginners in workshops,  
which is not a good idea, is to fit one shell, get the results, set  
the values of those parameters to the result of the fit, add another  
shell, and so on. This is a misunderstanding of the bottom up  
approach! Rather than using information from outer shells to achieve  
better fits on the inner shells (or vice-versa), you lock in  
distortions and make it difficult to evaluate statistically-related  
measures such as error bars.


Hopefully that helps!

--Scott Calvin
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Fe K-edge second shell problems

2011-08-03 Thread Scott Calvin

Hi Paul,

Looking at your data, I agree that there does seem to be second-shell  
scattering present in the signal.


Unfortunately, that most likely means that your material is not close  
to one of the "obvious" model compounds. One possibility to keep in  
mind is that you may have some kind of overlapping paths in that  
second "shell"--for example, partially Fe-Na and partially Fe-Ca, or  
an assortment of Fe-Na's at different distances.


A clue can perhaps be obtained by noting the relative height of the  
peak near 2.3 angstroms compared with the large peak you've fit. As k- 
weight is raised from  0 to 1 to 2 to 3, the peak at 2.3 angstroms  
does not grow relative to the first peak. That suggests the scattering  
may be from another low-Z element like oxygen.


So I'd tentatively try an Fe-O path around 2.7 angstroms with its own  
delR, ss, and N guessed. (Meanwhile, set N_1 to 4 to reduce  
correlations--you've said you expect the first shell to be tetrahedral.)


Good luck--sounds like a stubborn one!

--Scott Calvin
Sarah Lawrence College

On Aug 3, 2011, at 9:49 AM, Paul A Bingham wrote:


Dear Ifeffit users,

I have been struggling with this problem on and off for many months
and I cannot resolve it - hopefully someone out there can help

I have collected fluorescence Fe K-edge EXAFS of oxide glasses doped
with low levels (0.2%) of Fe. The glasses are typified by their major
components SiO2, Na2O, CaO and also low levels of Fe2O3 and CeO2
dopants. I'm currently trying to fit the Fe EXAFS. The first shell is
relatively easy to fit and I'm reasonably happy with the fit I
obtained using a tetrahedral Fe3+ standard, in this case FePO4. The
fits are consistent, as I expected to find, with Fe3+
tetrahedrally-coordinated with four oxygens.

The problem comes - and here's where I could really use some
suggestions - when I try to fit second Fe-x distance. It seems clear
to me that a second Fe-x distance (and possibly a third) are present
in the data. However, despite expending a great deal of time I am
unable to get a fit that appears anywhere near sensible and robust and
for which the output parameters are sensible. I suspect the second
Fe-x distance (I reckon about 2.8 Angstroms) to be Fe-Na but Fe-Ca,
Fe-Si or Fe-O may also be possible. It's also possible that it is
Fe-Fe or Fe-Ce.

I have tried all of the "obvious" Fe model compounds (aegirine,
clinopyroxine, etc) and also many others and I simply cannot get
anything approaching a decent fit. The vast majority of distances in
model compounds are Fe-O distances around 1.9-2.1A, then there is
usually a "gap" until about 3.1A.

I have checked my background subtraction and tried out many different
options, changes and tweaks that I know or can find suggested but I
cannot obtain a fit that is any good. And so I ask my colleagues out
there who are more experienced than I with EXAFS - can anyone help
with this conundrum?

I have attached the Artemis file with the data and simple one-shell
fit using FePO4 cif file; and the Athena file FYI.

Thanks in advance for your time and I look forward very much to
reading any suggestions you may have.

Warm Regards

Paul Bingham

--
Dr. Paul A. Bingham
Immobilisation Science Laboratory
Dept. of Engineering Materials
University of Sheffield
Mappin Street
Sheffield
S1 3JD
UK

Email: p.a.bing...@sheffield.ac.uk
Direct Line: (0114) 2225473
<Bingham_Fe_EXAFS_Glass_Ifeffit_Artemis>


___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] asking questions effectively (yes, *you* need to read this email)

2011-08-05 Thread Scott Calvin
One addition to Bruce's appeal: for some subscribers to the list,  
large attachments are a problem. For instance, some people are still  
working at dial-up speeds (due to the US' rural digital divide, I was  
one of those recently), or even have limits on the amount of data they  
can download in a month.


The question, then, is what is a "large" attachment? We've had some  
discussion of that on the list previously, and never arrived at a hard- 
and-fast rule.


Nonetheless, let me suggest that anything below 1 megabyte is fine--in  
fact, it should be encouraged so that we can help with the kind of  
questions Bruce just enumerated. Paul's files, for instance, were 214  
KB, or about 0.2 MB.


I suggest, therefore, that if you have a project file that is large  
because, for instance, it has many, many fits in its history, please  
re-save it in a smaller version, and attach that. You should also take
care that screenshots are not needlessly large--e.g. saved at a
resolution far beyond what is necessary.


In the occasional case that the problem or question requires a large  
file to manifest, such as that described by Nirawat yesterday, some  
other arrangement needs to be worked out. It's possible, for instance,  
to use a service such as Dropbox to make the file available without  
actually attaching it to an email.


--Scott Calvin
Sarah Lawrence College

On Aug 5, 2011, at 3:25 PM, Bruce Ravel wrote:



Hi everyone,

This has been a particularly troubling week for me here on the Ifeffit
mailing list.  This week we have seen an unusually large number of
poorly asked questions.  Not bad questions, mind you, just questions
that have been asked in a manner that makes it hard to provide a
useful response.

On Tuesday, someone had a question about a fit in Artemis, but only
posted the project file which demonstrated the problem after being
prompted to do so.

On Wednesday, someone had an issue about LCF fitting in Athena that is
contrary to most people's experience with the program.  That person
did not bother to provide an example project file or any other
supporting information to clarify what happened.

On Thursday, another person had an Artemis problem which was described
in a short and cryptic email.  Only after being prompted 3 times to
post an example was someone able to be of help.

Also on Thursday, we saw the third example in one week of a problem
with Artemis, but no example project file to demonstrate the problem.

Today, we see someone with a crystallography problem, but we do not
see the actual data that would allow someone to reproduce the problem
on their own computer.



Happily, on Wednesday Paul Bingham posted a clear question and
attached Athena and Artemis project files.  He very quickly got two
useful answers.



You do see the lesson here, don't you?  If your problem cannot be
reproduced on someone else's computer, it is unlikely that you will
get a satisfying answer.

Don't wait to be prodded.  Supply the project file or crystal data
that demonstrates the problem *in your first email*.

The so-called experts on this list, including me, really do want to
help you with your problems.  But we are not mind readers.  You have
to meet us half way.

B



___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Athena: problems with LCF

2011-08-15 Thread Scott Calvin

Hi Nina,

Thank you for bringing your question to the list.

As others have said, the problem lies with not having enough data to  
normalize effectively, and is compounded by not taking control of the  
normalization process to make sure it's relatively consistent. Standard
A, for instance, has a post-edge line that cuts very low through the  
data, as does D. The 1:1 A to D calculated mixture, though, comes  
across more evenly.


This is a problem even with calculated mixtures, as you chose to  
import the calculated mixture as a mu(E) file, and not as a norm(E)  
file. In theory, this should be fixable by setting the edge step of the
calculated spectrum to 1, so that Athena doesn't try to normalize it
again. Something's not quite right when I do that, because the
calculated average of A and D sometimes slips below both of the other  
graphs, and an average shouldn't do that. Nevertheless, it gets us  
close: an LCF fit now gives us 54.5% A and the rest D.
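
To make concrete what "normalize" means here, a bare-bones sketch of
edge-step normalization in plain Python/numpy (the fit ranges are
arbitrary placeholders, and Athena's real algorithm is more careful--
polynomial post-edge fits, flattening, and so on--so treat this as an
illustration, not Athena's code):

    import numpy as np

    def normalize(energy, mu, e0, pre=(-150.0, -50.0), post=(100.0, 400.0)):
        # Fit straight lines to regions below and above the edge; the
        # ranges are offsets in eV relative to the edge energy e0.
        def fit_line(lo, hi):
            sel = (energy >= e0 + lo) & (energy <= e0 + hi)
            return np.polyfit(energy[sel], mu[sel], 1)
        pre_coef = fit_line(*pre)
        post_coef = fit_line(*post)
        # Edge step = separation of the two trendlines at e0.
        step = np.polyval(post_coef, e0) - np.polyval(pre_coef, e0)
        # Subtract the pre-edge line, divide by the edge step.
        return (mu - np.polyval(pre_coef, energy)) / step

A spectrum imported as norm(E) with its edge step fixed to 1 effectively
skips the division above, which is why letting Athena normalize it a
second time distorts the comparison.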


Another approach is to uncheck the "force weights to 1" box. This
is often necessary if normalizations are in doubt. That works quite
well here, delivering an A to D ratio of 49 to 53.


Summary:

--It's best to collect data far enough above the edge so that you  
establish an unambiguous post-edge trendline, if possible.


--Post-edge lines should be examined, and changed if they are  
inconsistent between spectra being used for LCF. It's better to just  
eyeball normalization than to use radically different trendlines, for  
instance. I sometimes play around by eye with trendlines to see what  
range of normalizations they give, and incorporate that in to my final  
reported uncertainties.


--If normalizations are difficult for a particular set of spectra, it  
is often better to remove the requirement that weights sum to 1. To  
the degree that normalizations are off, there will be some error in  
the values that are found, but at least the fitting routine is able to  
try to compensate for normalization differences by adjusting the  
weights. In other words, forcing the sum to be 1 when the  
normalizations are different forces a bad fit. Allowing them to total  
to anything allows the algorithm to transfer errors in normalization  
to errors in weighting.
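
To make that last point concrete, here is a generic linear-combination
fit in plain Python (numpy/scipy), with and without the sum-to-1
constraint. This is an illustration of the idea, not the actual
algorithm inside Athena/Ifeffit:

    import numpy as np
    from scipy.optimize import minimize, nnls

    def lcf(mix, standards, force_sum_to_one=False):
        # standards: one column per normalized standard spectrum, on the
        # same energy grid as the normalized mixture spectrum 'mix'.
        if not force_sum_to_one:
            weights, _ = nnls(standards, mix)   # non-negative, free total
            return weights
        n = standards.shape[1]
        result = minimize(
            lambda w: np.sum((standards @ w - mix) ** 2),
            x0=np.full(n, 1.0 / n),
            method="SLSQP",
            bounds=[(0.0, 1.0)] * n,
            constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        )
        return result.x

When the normalizations are slightly off, the unconstrained version can
absorb the error into the total (weights summing to, say, 1.02), while
the constrained version has no choice but to distort the weights
themselves.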


Hope that helps!

--Scott Calvin
Sarah Lawrence College

On Aug 15, 2011, at 5:40 AM, Nina Siebers wrote:


Dear All,

I acquired Cd L3-edge spectra of some binary and ternary mixtures in
varying proportions and for the individual components. The mixtures
were created on Cd-mass basis. Then, I tried to fit the reference
spectra to the spectra of the mixtures using linear combination
fitting of Athena to get their abundance. However, the results were
disappointing even though all spectra were carefully energy calibrated
and normalized, so I decided to create simple mathematical binary and
ternary mixtures by summing up the spectra of the individual reference
spectra. After that I did an edge-step normalization in Excel and
imported the normalized calculated mixtures into Athena. Then, I tried
the fitting again to exclude mixing errors and check the sensitivity of
LCF with the idealized spectra. Even though the results of the LCF of
the mathematical mixtures were better than those for the real mixtures,
LCF was still not able to reliably deconvolute these spectra into the
individual reference spectra.

Does anybody have an explanation for that? It would be nice if
somebody could give me information about the mathematical fitting
algorithm implemented in Athena.

Attached is a data file of three mixtures (two ternary and one binary
mixture) including the mathematical mixture created in Excel (named
calculated at the end). Mixing ratios are named 1to1to1 (meaning 1:1:1
of the components in the same order). For the 1:1:1 ternary
mathematical mixture the deconvolution was very good, but the others
need improvement.

I hope I made my problem clear this time.

Thanks a lot!
Wishes,
Nina





___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


[Ifeffit] Looking for reviewers for data reduction chapter

2011-09-03 Thread Scott Calvin
Hi all,

Once again, I have a draft of a chapter of my textbook ready for review. This 
chapter is on data reduction, including normalization, background subtraction, 
and Fourier transforms. This will be Chapter 4 in the book, so it should be 
suitable for near-beginners as well as those with more experience. This is also 
a chapter where I had to make a lot of decisions on nomenclature and 
presentation, so experts might be interested as well, particularly if you have 
strong opinions on that kind of thing! It is a bit more than 40 pages long, 
which makes it the longest so far.

If you are interested in reviewing the chapter and can get feedback to me by 29 
September, please contact me directly at scal...@slc.edu . As always, repeat 
reviewers are welcome!

--Scott Calvin 
Sarah Lawrence College
___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Negative Sigma^2

2011-09-13 Thread Scott Calvin
It's probably not initial guesses, Robert.

Look at the uncertainties, and look at correlations. If you are getting, say, 
sigma2 = -0.001 +/- 0.004 A^2, then the values aren't even wrong, exactly, as 
the error bars are consistent with a reasonable value. But sigma2 is not 
well-determined.

If, on the other hand, the error bars are such that the value is unambiguously 
negative, it's handy to see what correlates strongly with the sigma2 in 
question, as that can give you some clues.

Sometimes, a negative sigma2 is indicative of an incomplete model, rather than 
one that is flat out wrong. Perhaps, for instance, there are paths within the 
fitting range that contribute to the signal, but you have not included them in 
your model.
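
To see why, recall that sigma2 enters the EXAFS amplitude through the
factor exp(-2 k^2 sigma2). A negative sigma2 makes that factor grow
with k, so the fit can use it to manufacture amplitude at high k that
the model is otherwise missing. A quick numerical illustration in plain
Python (nothing Ifeffit-specific):

    import numpy as np

    k = np.array([2.0, 6.0, 10.0])     # wavenumber, inverse Angstroms
    for sigma2 in (0.004, -0.001):     # Angstrom^2; negative is unphysical
        print(sigma2, np.exp(-2.0 * k**2 * sigma2))
    # sigma2 = +0.004 damps the signal to about 0.45 by k = 10;
    # sigma2 = -0.001 amplifies it to about 1.22 instead.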

Of course, it could also be that the model is simply wrong. Without more 
details from you, it is impossible to say what the problem is in your 
particular case.

--Scott Calvin
Sarah Lawrence College

On Sep 13, 2011, at 1:32 PM, Palomino, Robert wrote:

I am trying to fit data I recently collected and occasionally I am getting 
negative sigma squared values. Could anyone tell me what this is indicative of: 
am I using the wrong model or are my initial guesses of some parameters way off?

Robert



___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit


Re: [Ifeffit] Spline range?

2011-09-18 Thread Scott Calvin
Wei,

You may not have looked at it yet, but the chapter draft I sent to you has 
several pages devoted to the topic.

--Scott Calvin
Sarah Lawrence College

On Sep 18, 2011, at 10:10 AM, Wei Li wrote:

Dear all,

I am wondering what spline range should be chosen in Athena. Some literature
says from 1 to the end of the data; some says from 0.5 to the end.

Wei


--
Wei Li

Postdoc researcher
Environmental Soil Chemistry Group
Delaware Environmental Institute
University of Delaware, Newark,19713
Tel:631-949-0663
http://ag.udel.edu/soilchem/li.html




___
Ifeffit mailing list
Ifeffit@millenia.cars.aps.anl.gov
http://millenia.cars.aps.anl.gov/mailman/listinfo/ifeffit

