TOPAS-Academic V7

2020-05-11 Thread alancoelho
Hi All

 

Just letting everyone know that TOPAS-Academic Version 7 is now available.

 

See http://www.topas-academic.net/ where New7.PDF and the
Technical_Reference.PDF can be downloaded.

 

The Bruker-AXS version won't be far behind.

 

Cheers

Alan

 

++
Please do NOT attach files to the whole list 
Send commands to  eg: HELP as the subject with no body text
The Rietveld_L list archive is on http://www.mail-archive.com/rietveld_l@ill.fr/
++



RE: Software re-binned PD data

2019-09-26 Thread alancoelho
Hi Tony

 

>May I ask: is this re-binned data from the measurement software considered
"raw data" or "treated data"?

 

I'm not sure what is meant by treated data. Almost all neutron data and
synchrotron data with area detectors are "treated data".

 

If the detector has a slit width in the equatorial plane of 0.03 degrees 2Th,
then it makes little sense to use a step size smaller than 0.03/2 degrees 2Th.
If rebinning is done correctly (see rebin_with_dx_of in the Technical
Reference), then rebinning is essentially equivalent to redoing the experiment
with a wider slit.

 

In the case of your PSD, the resolution of the PSD plays the role of the
smallest slit width. If the data has broad features relative to the slit width,
then rebinning (or using a bigger slit width) should not change the results.
You could simulate all this using TOPAS to see the difference. Correct
rebinning should not affect parameter errors.

 

This is a question that is not simple to answer. If there's concern, then:

 

1.  Simulate data with the small step size and perform a fit.
2.  Then rebin with various slit widths and fit again.
3.  Then compare parameter errors and parameter values for all
the refinements; this should shed light on the matter.
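To make those steps concrete, here is a minimal Python sketch (my own illustration, not TOPAS syntax) of what a correct rebin does: adjacent fine-step bins are summed, which conserves total counts and hence keeps the statistics Poisson.

```python
import numpy as np

def rebin(two_theta, counts, factor):
    """Sum each group of `factor` adjacent bins. Summing raw counts
    conserves the total and keeps the statistics Poisson, which is what
    a correct rebin (or a wider receiving slit) amounts to."""
    n = (len(counts) // factor) * factor
    y = counts[:n].reshape(-1, factor).sum(axis=1)
    x = two_theta[:n].reshape(-1, factor).mean(axis=1)
    return x, y

# Fine-step pattern: one Gaussian peak on a flat background
x = np.arange(20.0, 30.0, 0.005)
y = 50.0 + 1000.0 * np.exp(-0.5 * ((x - 25.0) / 0.05) ** 2)

xb, yb = rebin(x, y, 4)          # 0.005 -> 0.02 degree steps
print(len(yb), y.sum() - yb.sum())
```

Fitting both the fine and rebinned patterns and comparing the refined values and errors is then straightforward.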

 

I don't know where, but my feeling is that there should be papers on this.

 

Cheers

Alan

 

 

From: rietveld_l-requ...@ill.fr  On Behalf Of
iangie
Sent: Thursday, September 26, 2019 1:40 PM
To: rietveld_l@ill.fr
Subject: Software re-binned PD data

 

Dear Rietvelder,

 

I hope you are doing well.

It is generally acknowledged that Rietveld refinement should be performed on
raw data, without any data processing.
One of our diffractometer/PSD systems scans data at its minimal step size
(users can see that the step size during the scan is much smaller than what was
set), and upon finishing, the measurement software re-bins the counts to the
step size that the users set (so the data also looks smoother after
re-binning).
May I ask: is this re-binned data from the measurement software considered
"raw data" or "treated data"? And can we apply Rietveld refinement to this
data?

 

Any comments are welcome. :)

--

Dr. Xiaodong (Tony) Wang

Research Infrastructure Specialist (XRD)

Central Analytical Research Facility (CARF)   |   Institute for Future
Environments

Queensland University of Technology




TOPAS Working on MACs

2019-03-04 Thread alancoelho
Hi all

 

Some Mac users are experiencing problems using TOPAS/TOPAS-Academic on a
Mac emulating Windows. If you are successfully using TOPAS on a Mac,
then can you please send me a private e-mail with the details of your
emulator; i.e., is it Parallels Desktop, VMware Fusion, VirtualBox or any
other.

 

Thanks in advance.

Cheers

Alan

 




RE: Rietveld

2018-08-21 Thread AlanCoelho
It seems that even the origin of least squares is debatable:

 

https://blog.bookstellyouwhy.com/carl-friedrich-gauss-and-the-method-of-least-squares
 

 

"That Gauss was the first to define the method of least squares was contested 
in his day. Adrien-Marie Legendre first published a version of the method in 
1805"

 

I thought this discussion would divide the community, but from Armel's poll
(good idea) it hasn't.

 

The Rietveld method is an implementation of the method of least squares with
the function being minimized changed to suit powder diffraction. Looking at
the least squares formulae as defined by Gauss, Eq. (1) at:

 

https://books.google.com.au/books?id=gtoTkL7heS0C 

 

 

or at:

 

https://en.wikipedia.org/wiki/Least_squares

 

we see that it is familiar, except that f describes a diffraction pattern.
Fitting to the powder data itself (rather than first performing data reduction
in the form of extracting intensities) is what is implied by Gauss, and it
seems odd that this was not considered by many.

 

Also, computer code for performing least squares on powder data is 70%
identical (in my estimate) to that for performing least squares on single
crystal data. In fact, if only Gaussian peak shapes are considered then it's
90% identical. I should know, as the computer code that performs derivatives
with respect to structural parameters for single crystal data is exactly the
same code used to perform derivatives for powder data.

 

My feelings are similar to Scott Speakman's in that credit should be given to
Rietveld for developing and distributing his software; if more was done by
Rietveld in regards to developing f, then all the better.

 

All the best

Alan Coelho

 

 

From: rietveld_l-requ...@ill.fr  On Behalf Of Scott 
Speakman
Sent: Wednesday, 22 August 2018 4:35 AM
To: Rietveld_l@ill.fr
Subject: RE: Rietveld

 

It is interesting to read this conversation and to hear the various points of 
view. 

 

I have one point for consideration to add, and would love to hear the opinion 
of those who were more closely involved in those early days:  I was always 
under the impression that the nomenclature "Rietveld technique" evolved mostly 
because Hugo Rietveld freely distributed the programming code for others to 
use, and allowed the code to be used and incorporated into other programs 
without ever requesting licensing fees or the like.  In that case, the name 
"Rietveld technique" isn't used to credit the inventor(s) of the methodology, 
but rather to acknowledge the author of the original programming code.  

 

 


Kind Regards,

Scott A Speakman, Ph.D.
Principal Scientist- XRD


Tel: +1 800 279 7297
Mob: +1 508 361 8121

Malvern Panalytical Inc.
117 Flanders Road
Westborough MA 01581
United States

  scott.speak...@panalytical.com
  www.malvernpanalytical.com


 



 

From: rietveld_l-requ...@ill.fr On Behalf Of Le Bail Armel
Sent: Tuesday, August 21, 2018 11:09 AM
To: Rietveld_l@ill.fr
Subject: Re: Rietveld

 

The >1500 subscribers can vote... :

 

https://doodle.com/poll/gh3v3nfhue599w23

 

Best,

 

Armel

 

 

 

> Message of 21/08/18 19:10
> From: "Alan Hewat"
> To: "rietveld_l@ill.fr"
> Cc:
> Subject: Re: Rietveld
> 
> 

> As a matter of course we didn't took part in the discussion... (Schenk) 

> ...people pretending now to speak in place of Loopstra should stop to do so 
> (Le Bail)

What a contrast of style and substance. Late believers are true believers, and 
Passion evicts Doubt.
Seeking sanity, I refer back to Miguel and the meaning of truth and knowledge:

> Scientists cannot ask anyone for the

RE: [Fwd: Question on file formats]

2008-12-11 Thread AlanCoelho

Lee Wrote
I have some ToF data from GEM ISIS as both a .gss and a .asc file. I have
managed to import and refine this using GSAS but am having some
difficulties importing into TOPAS. Does anyone have some advice please on
maybe file exchange or getting this file into the correct format to be
read by TOPAS. Or do I need to contact GEM to issue me with a different
file type?


I think that the ASCII data from GEM comprises x-axis, y-axis and error
values. Typically TOPAS uses the file extension to determine the file format;
i.e., .XYE type files. You can either change the extension or use the
'xye_format' switch to indicate XYE format, for example:

xdd FILE xye_format

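If renaming or switching the format isn't convenient, the three-column ASCII can also be rewritten as .xye with a few lines of Python (a sketch of my own; it assumes plain whitespace-separated x, y, esd columns with only comment lines to skip):

```python
def ascii_to_xye(text):
    """Keep the first three whitespace-separated columns (x, y, esd),
    skipping blank and comment lines, in .xye column order."""
    out = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and not parts[0].startswith(("#", "'")):
            out.append(" ".join(parts[:3]))
    return "\n".join(out) + "\n"

# Hypothetical GEM-style ASCII fragment
sample = "# GEM bank 1\n5.0  120  11.0\n5.1  131  11.4\n"
print(ascii_to_xye(sample), end="")
```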
cheers
alan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 11 December 2008 1:53 AM
To: Rietveld_l@ill.fr
Subject: [Fwd: Question on file formats]

Dear Lee,

If you have data as (.dat) for GSAS, TOPAS can read this file via the
GSAS CONST data files option. Alternatively, you can convert .asc to .xrd,
read it with POWDERX, and save it as GSAS (.dat).

Best Regards,

Mario Macías
Universidad Industrial de Santander
Colombia
--

I have a question for the list...

I have some ToF data from GEM ISIS as both a .gss and a .asc file. I have
managed to import and refine this using GSAS but am having some
difficulties importing into TOPAS. Does anyone have some advice please on
maybe file exchange or getting this file into the correct format to be
read by TOPAS. Or do I need to contact GEM to issue me with a different
file type?

Thanks,

Lee

~~~
Dr. Lee A. Gerrard
Materials Scientist
 
Materials Science Research Division
SB40.1
AWE Plc
Aldermaston, Reading,
Berkshire
RG7 4PR
 
Tel:  +44 (0)118 982 6516
Fax: +44 (0)118 982 4739
Email: [EMAIL PROTECTED]
~~~ 










RE: macro's in TOPAS

2008-12-04 Thread AlanCoelho
Hi Ross

There's no way of outputting a selection of hkls; it's all or none.

You can however omit hkls at the start of the refinement; ie.

omit_hkls = And(H, K, Mod(L, 2));
omit_hkls = H  10;
etc...

You can use this in an indirect manner to first refine using all hkls and
then in a subsequent refinement omit the unwanted hkls; ie:

tc INP_FILE macro M_ {}
copy INP_FILE.OUT INP_FILE.INP
tc INP_FILE macro M_ { omit_hkls = H; iters 0 }
etc...

where the macro M_ is placed at the str level; ie.

xdd...
str...
M_

Alternatively, an actual program can be written to manipulate the output hkl
file; this should be trivial to do. For future reference, however, I will
consider outputting a selection of hkls.
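For what it's worth, such a program can be a few lines of Python. This sketch (the names are mine) assumes each record in the output file is "phase_name h k l intensity", as produced by a phase_out macro like Ross's quoted below, and pivots it into one row per phase:

```python
from collections import defaultdict

def pivot_hkl(lines, wanted):
    """One row per phase, one column per requested hkl.
    Each record: phase_name h k l intensity, one per line
    (phase names containing spaces would need extra handling)."""
    table = defaultdict(dict)
    for line in lines:
        name, h, k, l, inten = line.split()
        hkl = (int(h), int(k), int(l))
        if hkl in wanted:
            table[name][hkl] = float(inten)
    return {name: [vals.get(hkl) for hkl in wanted]
            for name, vals in table.items()}

# Hypothetical records in the macro's output format
records = ["Phase1 1 0 0 12.5", "Phase1 1 1 0 3.2", "Phase2 1 0 0 8.1"]
table = pivot_hkl(records, [(1, 0, 0), (1, 1, 0)])
print(table)
```

Missing reflections come out as None, which keeps the columns aligned across phases.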

Cheers
Alan




-Original Message-
From: Ross Williams [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 4 December 2008 7:41 PM
To: rietveld_l@ill.fr
Subject: macro's in TOPAS

Dear All,

I am trying to write a macro in TOPAS (version 4) to output the intensities
of specific reflections for each phase, for the purpose of comparing multiple
structures of the same phase from the literature.

To date I have been able to achieve this by using the following macro, which
outputs all the hkls and the intensity of each; then I use a spreadsheet
package to sort my data appropriately, but this is cumbersome and I have
hundreds of structures I would like to examine.


macro out_specific_hkl_details(file)
{
 phase_out file append load out_record out_fmt out_eqn
   {
 %s = Get(phase_name);
 %3.0f  = H;
 %3.0f  = K;
 %3.0f  = L;
 %11.5f \n  = I_after_scale_pks;
   }
}


Ideally I would like to output:

Phase1Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)
Phase2Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)
Phase3Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)
Phase4Name I(hkl_1) I(hkl_2) I(hkl_3) I(hkl_4) I(hkl_5)

The problem is I can't work out how to output parameters for specific hkls;
can anyone help me?


Thanks,

Ross


+
Ross Williams
PhD Student
Centre for Materials Research
Department of Imaging and Applied Physics
Curtin University of Technology
GPO Box U1987 Perth WA 6845
Western Australia
Phone: +61 (0)8 9266 4219
Fax: +61 (0)8 9266 2377
Email:   [EMAIL PROTECTED]







RE: Anisotropic peak broadening with TOPAS

2008-10-31 Thread AlanCoelho
Hi Frank

I'm not 100% sure what you have been doing, but I think the copying and
pasting to get LVol values for particular sets of hkls can perhaps be done in
a simpler manner, as shown below. If it's not clear then contact me off
the list.

   prm lor_h00 300 min .3
   prm lor_0k0 300 min .3
   prm lor_00l 300 min .3
   prm lor_hkl 300 min .3
   prm gauss_h00 300 min .3
   prm gauss_0k0 300 min .3
   prm gauss_00l 300 min .3
   prm gauss_hkl 300 min .3

   prm = 1 / IB_from_CS(gauss_h00, lor_h00); : 0 ' This is LVol
   prm = 0.89 / Voigt_FWHM_from_CS(gauss_h00, lor_h00); : 0 ' This is LVol_FWHM

   lor_fwhm =
      (0.1 Rad Lam / Cos(Th)) /
      IF And(K == 0, L == 0) THEN
         lor_h00
      ELSE IF And(H == 0, L == 0) THEN
         lor_0k0
      ELSE IF And(H == 0, K == 0) THEN
         lor_00l
      ELSE
         lor_hkl
      ENDIF
      ENDIF
      ENDIF
      ;

   gauss_fwhm =
      (0.1 Rad Lam / Cos(Th)) /
      IF And(K == 0, L == 0) THEN
         gauss_h00
      ELSE IF And(H == 0, L == 0) THEN
         gauss_0k0
      ELSE IF And(H == 0, K == 0) THEN
         gauss_00l
      ELSE
         gauss_hkl
      ENDIF
      ENDIF
      ENDIF
      ;

Cheers
Alan


-Original Message-
From: Frank Girgsdies [mailto:[EMAIL PROTECTED] 
Sent: Friday, 31 October 2008 11:29 PM
To: Frank Girgsdies; Rietveld_l@ill.fr
Subject: Re: Anisotropic peak broadening with TOPAS

Dear Topas users,

thanks to your helpful input, I've now come up
with a (probably clumsy) solution to achieve my
goal (see my original post far down below).

For those who are interested, I'll explain it
in relatively high detail, but before I do so,
I want to make some statements to prevent
unnecessary discussions.

The reasoning behind my doing is the following:
Investigating a large series of similar but
somehow variable samples, my goal is to derive
numerical parameters for each sample from its
powder XRD. Using these parameters, I can compare
and group samples, e.g. by making statements
like sample A and B are similar (or dissimilar)
with respect to this parameter. Thus, the primary
task is to parametrize the XRD results. Ideally,
such parameters would have some physical meaning,
like lattice parameters, crystallite size etc.
However, this does not necessarily mean that I
would interpret or trust parameters like e.g.
LVol-IB on an absolute scale!!! After all, it
is mainly relative trends I'm interested in.

LVol-IB is is one of the parameters I get and
tabulate if the peak broadening can be successfully
described as isotropic size broadening.
   [For details on LVol-IB, see Topas (v3) Users
   Manual, sections 3.4.1 and 3.4.2)]
If, however, the peak broadening is clearly
anisotropic, applying the isotropic model gives
inferior fit results. LVol-IB is still calculated,
but more or less meaningless.
Thus, I wanted an anisotropic fit model that BOTH
(a) yields a satisfactory fit AND (b) still
delivers parameters with a similar meaning as
the isotropic LVol-IB.

Applying a spherical harmonics function satisfied
condition (a), but not (b) (maybe just due to my
lack of mathematical insight).

Applying Peter Stephens' code (but modified for
size broadening) met condition (a) and brought
me halfway to reach condition (b). As I did not
find a way of teaching (coding) Topas to do
all calculations I wanted in the launch mode,
I developed a workaround to reach (b).

Now in detail:
The modified Stephens code I use looks like
this:
 prm s400  29.52196`_2.88202 min 0
 prm s040  40.52357`_4.10160 min 0
 prm s004  6631.09739`_227.63909 min 0
 prm s220  54.23582`_13.82762
 prm s202  1454.83518`_489.04664
 prm s022  5423.10499`_765.48349
 prm mhkl = H^4 s400 + K^4 s040 + L^4 s004 + H^2 K^2 s220 +
H^2 
L^2 s202 + K^2 L^2 s022;
 lor_fwhm = (D_spacing^2 * Sqrt(Max(0,mhkl)) / 1) /
Cos(Th);
Compared to Peters original code, I have changed
the strain dependence * Tan(Th) into the size
dependence / Cos(Th) and re-arranged the remaining
terms in that line of code.
   [Peter has mentioned that from the fundamental
   theoretical point of view, spherical harmonics
   might be better justified then his formula
   for the case of SIZE broadening. Anyway,
   it works for me from the practical point of
   view, thus I'll use it.]
This rearrangement emphasizes the analogy between
the isotropic case c / Cos(Th) (where c is valid
for ALL peaks) and the anisotropic one, where
c is replaced by the hkl dependent term
(D_spacing^2 * Sqrt(Max(0,mhkl)) / 1).
Thus, I freely interpret this term as some
sort of c(hkl), which I will use for some
specific values of hkl to derive hkl dependent
analogues of LVol-IB.
The first step of this calculation I managed
to code for Topas based on Peters equations,
but for specifc hkl values:
 prm ch00 = (Lpa^2 * Sqrt(s400) / 1); :  0.24382`_0.01190
 

RE: PDF refinement pros and cons

2008-06-13 Thread AlanCoelho
Thanks all for the PDF explanations

I think I'm beginning to understand. To summarise what we know:

- The PDF for powder data is a Patterson function (as Alan Hewat stated) in one
dimension that plots the histogram of atom separations

- It will show quite nicely the short range order and then possible disorder
at longer range (e.g. rotated bucky balls)

There seems to be a few issues:

1) PDF refinement on long range ordered crystals is the same as Rietveld
refinement and little benefit if any is obtained by fitting to G(r)
directly. It's useful however to view a PDF to ascertain whether there's
disorder even in the event of a good Rietveld fit. 

2) A nano-sized crystal, as Vincent mentioned, can be modelled by arranging
the atoms such that it's G(r) matches the observed PDF. This can be done
with disregard to lattice parameters and periodicity in general.

3) a mix of (1) and (2) as in regularly arranged bucky balls but with the
balls randomly rotated.


The ability to do number (2) is what can't be done with normal Rietveld
refinement, as a disordered object has no periodicity and no Bragg peaks.
Arranging atoms in space to form an object with disregard to lattice
parameters and space groups can yield a G(r) to match an observed PDF.
However, even very small crystals would have tens of thousands of atoms. It
seems difficult to try and determine atomic positions of such an object from
a PDF.

It may be worthwhile to look at what the glass community has done (the Debye
Formula - thanks Jon - still reading).
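For reference, the Debye formula sums sin(Qr)/(Qr) over all atom pairs of the object, with no lattice or space group required; a brute-force Python sketch with unit form factors (my own illustration):

```python
import numpy as np

def debye_intensity(xyz, q):
    """Debye equation with unit form factors:
    I(Q) = sum_ij sin(Q r_ij) / (Q r_ij); the i == j terms give N."""
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    qr = q[:, None, None] * d[None, :, :]
    return np.sinc(qr / np.pi).sum(axis=(1, 2))  # np.sinc(x) = sin(pi x)/(pi x)

# Four atoms of a free-standing "object", no unit cell
xyz = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                [0.0, 1.5, 0.0], [0.0, 0.0, 1.5]])
q = np.linspace(0.1, 10.0, 50)
i_q = debye_intensity(xyz, q)
print(i_q.shape)
```

As Q approaches zero every pair term approaches 1, so I(Q) tends to N squared, which is a handy sanity check.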

The main point that is still confusing me is whether current PDF analysis
considers an object (or should) or is it the case that most materials are
periodic (with an enlarged unit cell) with parts of the cell being
disordered.

Cheers
Alan




PDF refinement pros and cons

2008-06-12 Thread AlanCoelho
Hi all

 

Looking at the Pair Distribution Function and refinement I come away with
the following:

 

Fitting in real space (directly to G(r)) should be equivalent to fitting in
reciprocal space except for a difference in the cost function. Is this
difference beneficial in any way? In other words, does the radius of
convergence increase or decrease?

 

The computational effort required to generate G(r) is proportional to N^2,
where N is the number of atoms within the unit cell. The computational
effort for generating F^2 scales as N*Nhkl, where Nhkl is the number of
observed reflections. Is there a speed benefit in generating G(r)? My guess
is that it's about the same. Note, generating G(r) by first calculating F
and then performing a Fourier transform is not considered.
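The N^2 term comes from the double loop over atom pairs; the core of a G(r)-type calculation is just a pair-distance histogram, as in this Python sketch (my own, with no normalization, damping or instrument effects):

```python
import numpy as np

def pair_histogram(xyz, r_max, dr):
    """The O(N^2) core of G(r): histogram of all unique interatomic
    distances up to r_max, in bins of width dr."""
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    d = d[np.triu_indices(len(xyz), k=1)]   # unique pairs, i < j
    bins = np.arange(0.0, r_max + dr, dr)
    hist, _ = np.histogram(d, bins=bins)
    return bins[:-1], hist

# 100 atoms placed at random in a 10 Angstrom box
xyz = np.random.default_rng(0).uniform(0.0, 10.0, size=(100, 3))
r, h = pair_histogram(xyz, r_max=10.0, dr=0.1)
print(len(r), h.sum())
```

The N(N-1)/2 distance evaluations are what make the cost quadratic in the number of atoms.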

 

In generating the observed PDF there's an attempt to remove instrumental and
background effects. In reciprocal space these unwanted effects are
implicitly considered. This seems a plus for the F^2 refinement.

 

From my simple understanding of the process, there seems to be good
qualitative information in a G(r) pattern but can someone help in explaining
the benefit of actually refining directly to G(r).

 

Cheers

Alan

 



RE: Help: General spherical harmonics

2008-04-21 Thread AlanCoelho

Hi Xiujun 

TOPAS implements a normalized symmetrized spherical harmonics function; see
Järvinen,

J. Appl. Cryst. (1993). 26, 525-531
http://scripts.iucr.org/cgi-bin/paper?S0021889893001219


The expansion is simply a series that is a function of the hkl values.

The series is normalized such that the maximum value of each component is 1.
The normalized components are:

Y00  = 1
Y20  = (3.0 Cos(t)^2 - 1.0)* 0.5
Y21p = (Cos(p)*Cos(t)*Sin(t))* 2
Y21m = (Sin(p)*Cos(t)*Sin(t))* 2
Y22p = (Cos(2*p)*Sin(t)^2)
Y22m = (Sin(2*p)*Sin(t)^2)
Y40  = (3 - 30*Cos(t)^2 + 35*Cos(t)^4) *.125000
Y41p = (Cos(p)*Cos(t)*(7*Cos(t)^2-3)*Sin(t)) *.9469461818
Y41m = (Sin(p)*Cos(t)*(7*Cos(t)^2-3)*Sin(t)) *.9469461818
Y42p = (Cos(2*p)*(-1 + 7*Cos(t)^2)*Sin(t)^2) *.7777777778
Y42m = (Sin(2*p)*(-1 + 7*Cos(t)^2)*Sin(t)^2) *.7777777778
Y43p = (Cos(3*p)*Cos(t)*Sin(t)^3) *3.0792014358
Y43m = (Sin(3*p)*Cos(t)*Sin(t)^3) *3.0792014358
Y44p = (Cos(4*p)*Sin(t)^4)
Y44m = (Sin(4*p)*Sin(t)^4)
Y60  = (-5 + 105*Cos(t)^2 - 315*Cos(t)^4 + 231*Cos(t)^6) *.0625000
Y61p = (Cos(p)*(-5 + 30*Cos(t)^2 - 33*Cos(t)^4)*Sin(t)*Cos(t)) *.6913999628
Y61m = (Sin(p)*(-5 + 30*Cos(t)^2 - 33*Cos(t)^4)*Sin(t)*Cos(t)) *.6913999628
Y62p = (Cos(2*p)*(1 - 18*Cos(t)^2 + 33*Cos(t)^4)*Sin(t)^2) *.6454926483
Y62m = (Sin(2*p)*(1 - 18*Cos(t)^2 + 33*Cos(t)^4)*Sin(t)^2) *.6454926483
Y63p = (Cos(3*p)*(3- 11*Cos(t)^2)*Cos(t)*Sin(t)^3) *1.4168477165
Y63m = (Sin(3*p)*(3- 11*Cos(t)^2)*Cos(t)*Sin(t)^3) *1.4168477165
Y64p = (Cos(4*p)*(-1 + 11*Cos(t)^2)*Sin(t)^4) *.816750
Y64m = (Sin(4*p)*(-1 + 11*Cos(t)^2)*Sin(t)^4) *.816750
Y65p = (Cos(5*p)*Cos(t)*Sin(t)^5) *3.8639254683
Y65m = (Sin(5*p)*Cos(t)*Sin(t)^5) *3.8639254683
Y66p = (Cos(6*p)*Sin(t)^6)
Y66m = (Sin(6*p)*Sin(t)^6)
Y80  = (35 - 1260*Cos(t)^2 + 6930*Cos(t)^4 - 12012*Cos(t)^6 +
6435*Cos(t)^8)* .0078125000
Y81p = (Cos(p)*(35*Cos(t) - 385*Cos(t)^3 + 1001*Cos(t)^5 -
715*Cos(t)^7)*Sin(t))* .1134799545
Y81m = (Sin(p)*(35*Cos(t) - 385*Cos(t)^3 + 1001*Cos(t)^5 -
715*Cos(t)^7)*Sin(t))* .1134799545
Y82p = (Cos(2*p)*(-1 + 33*Cos(t)^2 - 143*Cos(t)^4 + 143*Cos(t)^6)*Sin(t)^2)*
.5637178511
Y82m = (Sin(2*p)*(-1 + 33*Cos(t)^2 - 143*Cos(t)^4 + 143*Cos(t)^6)*Sin(t)^2)*
.5637178512
Y83p = (Cos(3*p)*(-3*Cos(t) + 26*Cos(t)^3 - 39*Cos(t)^5)*Sin(t)^3)*
1.6913068375
Y83m = (Sin(3*p)*(-3*Cos(t) + 26*Cos(t)^3 - 39*Cos(t)^5)*Sin(t)^3)*
1.6913068375
Y84p = (Cos(4*p)*(1 - 26*Cos(t)^2 + 65*Cos(t)^4)*Sin(t)^4)* .7011002983
Y84m = (Sin(4*p)*(1 - 26*Cos(t)^2 + 65*Cos(t)^4)*Sin(t)^4)* .7011002983
Y85p = (Cos(5*p)*(Cos(t) - 5*Cos(t)^3)*Sin(t)^5)* 5.2833000817
Y85m = (Sin(5*p)*(Cos(t) - 5*Cos(t)^3)*Sin(t)^5)* 5.2833000775
Y86p = (Cos(6*p)*(-1 + 15*Cos(t)^2)*Sin(t)^6)* .8329862557
Y86m = (Sin(6*p)*(-1 + 15*Cos(t)^2)*Sin(t)^6)* .8329862557
Y87p = (Cos(7*p)*Cos(t)*Sin(t)^7)* 4.5135349314
Y87m = (Sin(7*p)*Cos(t)*Sin(t)^7)* 4.5135349313
Y88p = (Cos(8*p)*Sin(t)^8)
Y88m = (Sin(8*p)*Sin(t)^8)

where 
t = theta 
p = phi


theta and phi are the spherical coordinates of the normal to the hkl plane.

These components were obtained from Mathematica and normalized using TOPAS.
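The normalization is easy to check numerically; for example, on a grid over the sphere the Y20 and Y22p components listed above each reach a maximum magnitude of 1 (a quick Python check of my own, using the t = theta, p = phi convention above):

```python
import numpy as np

# Grid over the sphere in the t = theta, p = phi convention
t = np.linspace(0.0, np.pi, 721)        # theta
p = np.linspace(0.0, 2.0 * np.pi, 721)  # phi
tt, pp = np.meshgrid(t, p)

# Two of the normalized components listed above
y20 = (3.0 * np.cos(tt) ** 2 - 1.0) * 0.5
y22p = np.cos(2.0 * pp) * np.sin(tt) ** 2

print(np.abs(y20).max(), np.abs(y22p).max())
```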

The user determines how the series is used. In the case of correcting for
texture as per Järvinen, the intensities of the reflections are multiplied by
the series value. This is accomplished by first defining a series:

str...
spherical_harmonics_hkl sh sh_order 8

and then scaling the peak intensities, or, 

scale_pks = sh;

after refinement the INP file is updated with the coefficients.

The macro PO_Spherical_Harmonics, as you have defined, can also be used.

Typically the C00 coefficient is not refined, as its series component Y00 is
simply 1 and is 100% correlated with the scale parameter.

You could output the series values as a function of hkl as follows:

scale_pks = sh;
phase_out sh.txt load out_record out_fmt out_eqn {
 %4.0f = H;
 %4.0f = K;
 %4.0f = L;
  %9g\n = sh;
}

Cheers
Alan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 19 April 2008 9:40 AM
To: rietveld_l@ill.fr
Subject: Help: General spherical harmonics

Dear all,

Now I am using the TOPAS-Academic software to do the refinement of my sample,
which has strong preferred orientation in some directions.
In the program, I use the general spherical harmonics function to correct for
the effect, as shown below:


'Preferred Orientation using Spherical Harmonics
PO_Spherical_Harmonics(sh, 6 load sh_Cij_prm {
k00   !sh_c00  1.
k41sh_c41   0.36706`
k61sh_c61  -0.30246`
} )

And I see in the literature that the texture index J is used to evaluate the
extent of PO by the equation shown in the attachment (I don't know how to put
the equation here).

But I am not sure what the l means and it's not easy to find the detailed
calculation in the literature. So I am

RE: Use fix or programmable slits for Rietveld analysis

2007-03-26 Thread AlanCoelho
Hi Maria

Automatic Divergence Slits (ADS) illuminate different parts of the post
monochromator as a function of 2Th. I am guessing, but I think that the
crystals are good enough not to change the intensity too much; this is my
experience with Y2O3 in any case.
At around 70 to 80 degrees 2Th, however, the beam (typically around 4 degrees
in the equatorial plane) spills out of the post monochromator. This
situation should be avoided; instead, above that angle the slits should be
fixed. Thus you have the situation where part of your pattern is analysed
using ADS corrections and part using FDS corrections.

ADS corrections comprise:

- A Sin(Th) scaling of intensities
- A change in peak shape

Note that anti-scatter slits could also influence intensities at large
divergences.
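The Sin(Th) part of the correction is simple to apply by hand if needed. A Python sketch of my own converting ADS counts to their fixed-slit equivalent (check your instrument software's convention for which direction it has already applied):

```python
import numpy as np

def ads_to_fds(two_theta_deg, counts):
    """Variable-slit (ADS) counts -> fixed-slit (FDS) equivalent.
    ADS keeps the irradiated length constant, so relative to a fixed
    slit the intensities carry an extra sin(theta) factor; divide it out."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    return np.asarray(counts) / np.sin(theta)

two_theta = np.array([20.0, 40.0, 60.0])
fds = ads_to_fds(two_theta, np.array([100.0, 100.0, 100.0]))
print(fds)
```

Note how equal ADS counts map to larger fixed-slit-equivalent counts at low angle, where a fixed slit would illuminate more sample.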

Cheers
Alan




-Original Message-
From: Fabra-Puchol, Maria [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 27 March 2007 12:03 AM
To: rietveld_l@ill.fr
Subject: Use fix or programmable slits for Rietveld analysis

 
Hi all,

I have a question using fix (FDS) and programmable slits (ADS) for
Rietveld analysis.
Actually, I work with a X'PERT equipment with fast detection and with
programmable divergence slits in the incident beam and a programmable
antiscattering slits in the diffracted beam.
I have also the possibility to use them in a fix mode. 
I would like to know the opinion of the community about the best
configuration of this slits (fix or programmable) if a Rietveld analysis
is required.
In fact, using programmable slits, the software corrects the data and
converts from ADS to FDS. What is the advantage of using ADS in this case?

Thank you all


Maria Fabra Puchol
Microanalysis Engineer
Saint-Gobain CREE
---
550, Avenue Alphonse Jauffret
84306 Cavaillon Cedex-France
[EMAIL PROTECTED] 
telf: +33 (0)4 32 50 09 36
fax: +33 (0)4 32 50 08 51





RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
Vincent wrote:

It is a step forward for F/OSS as it acknowledges that open-source code
allows a new method to spread better than closed source. As opposed to
filing a patent - since patents were originally developed to ensure that new
methods be available to all.

You are right in that open source is good at spreading algorithms, but no one
should be locked out by decree. Thus the licensing of software is critical;
the GNU GPL license, including Copyleft, is not to be confused with something
like Python; from the Python web site:

The Python implementation is under an open source license that makes it
freely usable and distributable, even for commercial use

In regards to the GNU GPL: never in the history of literature have the words
freedom and choice been so misrepresented; they stand behind their
lawyers. How much of the software under the GNU GPL license has been developed
using computers provided by institutes - was it really a hobby?

Fox is under the GNU GPL - not very helpful to society in a general sense,
wouldn't you say.

From what I have read of Nature Methods' decision, if the journals J.
Applied Cryst. and Acta Cryst. were to go down the same path, then the 2000
plus users of TOPAS and TOPAS-Academic would be without a means of reading
peer reviewed articles on the algorithms used by those programs. This would be
a tragedy, not to mention for users of other commercial programs in the field.
Let's hope that it doesn't come to that.


Often this development is not funded as an isolated project - but part of a
larger project 
 (hence the developments at large instruments).

Good to see, but to say that these projects are part of a wider orchestrated
effort is to be optimistic.


All the best
Alan




RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-24 Thread AlanCoelho
One last reassuring word to the users of TOPAS - if I may

No open source algorithm under any license comes close to equalling those
implemented in TOPAS and its academic counterpart. This status quo shall
remain.

In the event that commercial entities are locked out of journals then in the
case of TOPAS the manuals shall take up the slack.

Sincerely
Alan Coelho




RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread AlanCoelho
Not sure what to make of all this Jon

I am to believe that scientists prefer to mull over source code rather than
pseudo code and mathematical descriptions. Anyone who knows just a little
about software development would know that source code is the last thing
that one wants to see. How many have deciphered the source code of an FFT
routine; how many would want to? The language of maths has evolved over the
centuries, and computer code, whether it be Fortran or C++, is simply not
adequate for describing complex algorithms.

There's a hidden agenda here, and those pushing it should have the fortitude
to come clean about their motives; that is, that scientists would like
software for free. This word 'free' is an ugly word, and it is in fact the
reason why software in this field is so bad. Governments/institutes do not
pay scientists to develop software except for short term projects that
typically amount to a gross waste of taxpayers' money.

Very complex programs such as Axiom, Reduce, Maple and Mathematica require
teams of dedicated mathematicians and code writers looking after them. Maple
and Mathematica are commercial; Reduce is open source and it charges 700
pounds, stating that this is necessary for its continued development. Axiom
is a commercial failure by IBM and has been placed on the open source scrap
heap; last I checked it's still just sitting there. Thus if you want to
destroy commercial entities such as Maple and Mathematica, then you would be
subjecting science to second rate software. How many people check that the
source code of these programs spews out the correct results?

Thus this idea that source code must be made available is a charade. It would
mean that developments in programs like GSAS/TOPAS could not be reported, as
the source code is not available. I'm not sure about the present status of the
GSAS source code, but it was not available in the past.

Even if governments were to pay for the development of programs such as GSAS,
it would still most certainly not be free. There are three options: 1)
governments/institutes fund software development, 2) scientists band
together to write open source code, or 3) software is paid for on a
commercial basis. Option (1) probably won't happen for political reasons,
option (2) has worked in other fields but is doubtful in this field for lack
of expertise, and option (3) already exists.

What is interesting is the reason why software is not funded. In my opinion
it is not funded because everyone thinks that software should be free, not
bothering about where it comes from, and most certainly not knowing that
computer science is as difficult a field as science; asking Mr or Mrs Bloggs
to write a program without being educated in the area won't work.

The bottom line of all this is that it must not be a prerequisite that
source code be made available when reporting algorithmic developments. Anyone
who states otherwise is being disingenuous - journals like Nature or
otherwise will most certainly be given a miss by me in regards to publishing
algorithmic developments.

Best regards
Alan Coelho

-Original Message-
From: Jon Wright [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 24 March 2007 6:09 AM
To: rietveld_l@ill.fr
Subject: [Fwd: [ccp4bb] Nature policy update regarding source code]

Hi Everyone; Just in case you don't follow ccp4bb or nature methods:

http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html

 I thought that some of you might be interested that the journal Nature
 has clarified the publication requirements regarding source code
 accessibility.  It is likely that some of you deserve congrats
 for this.  Cheers!
 
 http://www.nature.com/nmeth/journal/v4/n3/full/nmeth0307-189.html
 
 Although there are still some small problems, I think that this is a
 big step forward, and certainly an interesting read, if you are
 interested in FOSS and science.
 
 Regards,
 Michael L. Love Ph.D
 Department of Biophysics and Biophysical Chemistry
 School of Medicine
 Johns Hopkins University
 725 N. Wolfe Street
 Room 608B WBSB
 Baltimore MD 21205-2185
 
 Interoffice Mail: 608B WBSB, SoM
 
 office: 410-614-2267
 lab:410-614-3179
 fax:410-502-6910
 cell:   443-824-3451
 http://www.gnu-darwin.org/
 
 
 





RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread AlanCoelho
Vincent

In the original message of Michael Love (forwarded by Jon Wright) it clearly
states:

 Although there are still some small problems, I think that this is a 
 big step forward, and certainly an interesting read, if you are 
 interested in FOSS and science.

What does "still some small problems" mean? Don't kid yourself; if you have
read the SourceForge preamble then you would certainly know the agenda.
Don't you understand that "free" is what has caused the problems now faced
in this field; everyone demands it be free but no one wants to pay the real
cost of software development.

Too often those that wallow in politics end up making decisions that the
rest have to follow; this has happened to Nature to some extent. I certainly
don't want this garbage filling up my mail box. Too often real scientists
are too busy doing what they should be doing; it may pay them to keep their
eyes open a little.

=

If they had mandated all code (full software) associated to articles to be 
open-source, they would clearly have been unreasonable (as much as I like 
open-source), but this is _not_ the case here. As I see things, a _small_ 
piece of code should be associated to each algorithm, not a full software 
with > 10 000 lines of code.

They have not yet mandated full software but give them a millimetre and
see what happens. And you cannot simply deposit a small piece of code most
of the time. Most code relies on libraries and rarely is it trivial to
rewrite code to be stand-alone. Again this is something that only developers
would know and not, as it seems, the decision makers.



If someone claims that his algorithm allows computing FFT with O(n) 
complexity, it is fair to ask him for a practical demonstration available
to 
all readers.

If there are those who can't follow pseudo code or mathematical descriptions
then what on earth are they doing in science? If that's not enough then an
executable should suffice. Otherwise you are forcing developers not paid by
taxpayers to make their work free. Why is it that people at taxpayer-funded
institutes must get paid while non-institute people must not? If the
believers of this were to forgo their salary and work for free then I would
believe the argument.

=

Most scientists want software which (i) is efficient and (ii) they can play

around with (=modify). Free is a side issue for most scientists I know.

Yes, most scientists should also know that there's no such thing as a free
lunch. If their Governments don't see it fit to pay for software development
then vote them out of office. You cannot expect Governments not to pay for
software and at the same time demand good software often written by those
not paid by Governments. 

"Free is a side issue": to believe that is to assert that pigs can fly.
Again I never want to see source code; I want to see the mathematical
description.

=

Best regards
Alan Coelho





RE: [Fwd: [ccp4bb] Nature policy update regarding source code]

2007-03-23 Thread AlanCoelho
Jon

I did not state, I think, that you are responsible for Nature's decisions.
Far from it, as a messenger you have enlightened me on what is happening in
this area - thank you. 

As for patents; I despise patents on software as it really does inhibit
science. I made a policy a long time ago not to patent. What a developer
seeks is to be paid for the work of coding. New work can easily be described
in the well proven language of mathematics. Anyone that is computationally
minded can and should be able to implement any ideas that I may have come up
with. I have also used many algorithms written by third parties, and the
math descriptions accompanied by pseudo code are what I look for - never
source code.

Cheers
Alan


-Original Message-
From: Jon Wright [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 24 March 2007 12:21 PM
To: rietveld_l@ill.fr
Subject: Re: [Fwd: [ccp4bb] Nature policy update regarding source code]

AlanCoelho wrote:
 Not sure what to make of all this Jon

Don't shoot the messenger, I was surprised enough by it to forward it to 
the list. I guess they imply if you want to keep all implementation 
details secret you should be patenting instead of publishing? (Patents
seem to be free online, NM is $30 per 2-page article?). As Vincent
says, the new part of an algorithmic development is often much less than 
the whole package, and useless to the average user without a gui.

Wonder what this means for any REAL programmers out there who are still 
programming directly in hex. Their source is most people's compiled, but 
considerably more interesting to read ;-)

Best,

Jon




RE: Problems using TOPAS R (Rietveld refinement)

2007-03-21 Thread AlanCoelho
Clay people

I think the single crystal analysis of clays is interesting. I have not read
the literature, but in determining the intensities is overlap of the spots
considered? I would have expected the spots to be very much smeared (5 to
10 degrees 2Th in my experience). If yes, then fitting in two dimensions
would be better.

Thus the question to ask is how accurate QPA can be for clays if the
intensities can be accurately obtained; is this an open question or is the
book closed on this? If, as Reinhard Kleeberg mentioned, some directions
are unaffected, then it would seem plausible that something can be gained,
especially if one of those models works.

Also, TOPAS simply offers a means of describing the peak shapes using
hkl-dependent spherical harmonics. From my experience it seems to work. As
Lubomir Smrcok remarked, getting the intensities is critical.

Another important point, again as Lubomir Smrcok mentioned, is preferred
orientation. If there's very strong preferred orientation then the peak
shapes will be affected due to axial divergence as well; it is best to
remove preferred orientation.

Cheers
Alan



-Original Message-
From: Reinhard Kleeberg [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 21 March 2007 7:48 PM
To: rietveld_l@ill.fr
Subject: Re: Problems using TOPAS R (Rietveld refinement)

Dear colleagues,
sorry, my mail should have gone directly to Leandro, but I used this damned
reply button...
My answer was related to Leandro's questions regarding these line broadening
models. I realised that Leandro is going on to apply a Rietveld program for
phase quantification, including kaolinite and later other clay minerals. I
only tried to express my personal experience, that any inadequate profile
description of a clay mineral will surely cause wrong QPA results, nothing
else. This is a practical issue, and it is only partially related to
structure refinement. Lubomir Smrcok is definitely right that other things
like PO are frequently biasing a QPA result, but for the most of these
problems working solutions do exist. 
But I disagree that anisotropic line broadening is a "noble problem". In
clay mineral mixtures, it is essential to fit the profiles of the single
phases as best as one can, to get any reasonable QPA result within a +-5 wt%
interval. On the other hand, for the QPA purpose it is not so important
to find a sophisticated description of the microstructure of a phase. But
the model should be flexible enough to cover the variability of the
profiles in a given system, and, on the other hand, stable enough (not
over-parametrised) to work in mixtures.
The balancing out of these two issues could be the matter of an endless
debate. And here I agree again, a better, more stable minimisation algorithm
can help to keep a maximum of flexibility of the models.
Best regards
Reinhard Kleeberg

Lubomir Smrcok schrieb:

Gentlemen,
I've been listening for a week or so and I am really wondering what do 
you want to get ... Actually you are setting up a refinement, whose 
results will be, at least, inaccurate. I am always surprised by 
attempts to refine crystal structure of a disordered sheet silicate 
from powders, especially when it is known it hardly works with single 
crystal data. Yes, there are several models of disorder, but who has ever
proved they are really good ?
I do not mean here a graphical comparison of powder patterns with a 
calculated trace, but a comparison of structure factors or integrated 
intensities. (Which ones are to be selected is well described in the 
works of my colleague, S.Durovic and his co-workers.) As far as powders 
are concerned, all sheet silicates suffer from prefered orientation 
along 001. Until you have a pattern taken in a capillary or in 
transmission mode, this effect will be dominating and you can forget 
such noble problems like anisotropic broadening.

Last but not least : quantitative phase analysis by Rietveld is (when 
only scale factors are on) nothing else but multiple linear 
regression. There is a huge volume of literature on the topic, 
especially which variables must, which should and which could be a part of
your model.
I really wonder why the authors of program do not add one option called 
QUAN, which could, upon convergence of highly sophisticated 
non-linear L-S, fix all parameters but scale factors and run standard 
tests or factor analysis. One more diagonalization is not very time 
consuming, is it ? To avoid numerical problems, I'd use SVD.
This idea is free and if it helps people reporting 0.1% MgO (SiO2) in a 
mixture  of 10 phases to think a little of the numbers they are 
getting, I would only be happy :-) Lubo

P.S. Hereby I declare I have never used Topas and I am thus not 
familiar with all its advantages or disadvantages compared to other codes.


On Wed, 21 Mar 2007, Reinhard Kleeberg wrote:

  

Dear Leandro Bravo,
some comments below:

Leandro Bravo schrieb:



In the refinement of chlorite minerals with well defined 

RE: Problems using TOPAS R (Rietveld refinement)

2007-03-21 Thread AlanCoelho
Lubo

SVD, as you mentioned, does avoid numerical problems, as do other methods
such as the conjugate gradient method. SVD minimizes on the residuals
|A x - b| when solving the matrix equation A x = b.

I would like to point out however that errors obtained from the covariance
matrix are an approximation. The idea of fixing parameters as in SVD when a
singular value is encountered is also a little arbitrary as it requires the
user setting a lower limit.
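A minimal sketch of the SVD behaviour described above, assuming numpy; the
design matrix, rcond values and data are illustrative only. Zeroing singular
values below a user-set cutoff is exactly the "a little arbitrary" lower
limit mentioned: the reported rank changes with the cutoff.

```python
import numpy as np

def svd_solve(A, b, rcond):
    """Solve min |A x - b| by SVD, zeroing singular values below rcond * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s.max()          # the user-set lower limit
    x = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
    return x, int(keep.sum())

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
# Two nearly collinear columns -> one tiny singular value (strong correlation)
A = np.column_stack([t, t + 1e-9 * rng.standard_normal(50), np.ones_like(t)])
b = 2.0 * t + 1.0 + 0.01 * rng.standard_normal(50)

x_strict, rank_strict = svd_solve(A, b, rcond=1e-15)  # keeps all three columns
x_loose, rank_loose = svd_solve(A, b, rcond=1e-6)     # drops the tiny one
```

With the loose cutoff the near-singular direction is discarded (effective
rank 2) yet the fit to b is unchanged within the noise, which is the point:
the cutoff fixes the numerics but where to place it is the user's call.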

The A matrix is formed at a point in parameter space; when there are strong
correlations (as SVD would report) then that point in space changes from one
refinement to another after modifying a parameter slightly.

If derivatives are numerically calculated, as is the case for convolution
parameters, then the A matrix becomes a function of how the derivatives are
calculated; the forward difference approximation, for example, gives
different derivatives than central differences (both forward and backward)
if the step size in the derivative is appreciable. For most convolutions,
and numerical derivatives in general, the step size needs to be appreciable
for good convergence.
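The forward/central difference point can be seen in a few lines; the test
function and step size here are toy stand-ins, not anything from TOPAS:

```python
import numpy as np

def forward_diff(f, x, h):
    # One-sided difference: error is first order in h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Two-sided difference: error is second order in h
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = np.sin          # toy "profile" function; its exact derivative is cos
x0, h = 1.0, 0.1    # an appreciable step, as argued above

err_fwd = abs(forward_diff(f, x0, h) - np.cos(x0))
err_cen = abs(central_diff(f, x0, h) - np.cos(x0))
```

With an appreciable step the two schemes give visibly different derivatives
(here the forward error is roughly fifty times the central one), so the A
matrix, and hence the reported esds, depend on the scheme chosen.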

Rietveld people may want to look at the re-sampling technique known as the
bootstrap method of error determination. It gives similar errors to the
covariance matrix when the correlations are weak; the maths journals are
full of details. It requires some more computing time but it actually gives
the distribution. And yes TOPAS has the bootstrap method; other code writers
may wish to investigate it.
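A sketch of one common bootstrap flavour (resampling residuals); a straight
line stands in for a Rietveld model and the data are synthetic, so this is
only the shape of the method, not TOPAS's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)
y = 3.0 * x + 5.0 + rng.normal(0.0, 1.0, x.size)

# Initial least-squares fit (a straight line stands in for the model)
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Bootstrap: resample residuals with replacement, refit, and take the
# spread of the refitted parameters as the error estimate
boot = []
for _ in range(500):
    y_star = A @ coef + rng.choice(resid, size=resid.size, replace=True)
    c_star, *_ = np.linalg.lstsq(A, y_star, rcond=None)
    boot.append(c_star)
boot = np.array(boot)

slope_err = boot[:, 0].std()   # bootstrap estimate of the slope esd
```

Unlike the covariance matrix, the 500 refitted slopes give the whole
distribution, not just a second-moment approximation to it.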

Cheers
Alan



 

-Original Message-
From: Lubomir Smrcok [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 21 March 2007 5:50 PM
To: rietveld_l@ill.fr
Subject: Re: Problems using TOPAS R (Rietveld refinement)

Gentlemen,
I've been listening for a week or so and I am really wondering what do you
want to get ... Actually you are setting up a refinement, whose results
will be, at least, inaccurate. I am always surprised by attempts to refine
crystal structure of a disordered sheet silicate from powders, especially
when it is known it hardly works with single crystal data. Yes, there are
several models of disorder, but who has ever proved they are really good ?
I do not mean here a graphical comparison of powder patterns with a
calculated trace, but a comparison of structure factors or integrated
intensities. (Which ones are to be selected is well described in the works
of my colleague, S.Durovic and his co-workers.)
As far as powders are concerned, all sheet silicates suffer from
preferred orientation along 001. Until you have a pattern taken in a
capillary or in transmission mode, this effect will be dominating and you
can forget such noble problems like anisotropic broadening.

Last but not least : quantitative phase analysis by Rietveld is (when only
scale factors are on) nothing else but multiple linear regression. There
is a huge volume of literature on the topic, especially which variables
must, which should and which could be a part of your model.
I really wonder why the authors of program do not add one option called
QUAN, which could, upon convergence of highly sophisticated non-linear
L-S, fix all parameters but scale factors and run standard tests or factor
analysis. One more diagonalization is not very time consuming, is it ? To
avoid numerical problems, I'd use SVD.
This idea is free and if it helps people reporting 0.1% MgO (SiO2) in a
mixture  of 10 phases to think a little of the numbers they are getting, I
would only be happy :-)
Lubo

P.S. Hereby I declare I have never used Topas and I am thus not familiar
with all its advantages or disadvantages compared to other codes.


On Wed, 21 Mar 2007, Reinhard Kleeberg wrote:

 Dear Leandro Bravo,
 some comments below:

 Leandro Bravo schrieb:

 
  In the refinement of chlorite minerals with well defined disordering
  (layers shifting by exactly b/3 along the three pseudohexagonal Y
  axes), you separate the peaks into k = 3n (relatively sharp, less
  intensive peaks) and k ≠ 3n (broadened or disappeared
  reflections). How did you determine this value k = 3n with n =
  0,1,2,3..., right?
 
 The occurence of stacking faults along the pseudohexagonal Y axes causes
 broadening of all reflections hkl with k unequal 3n (for example 110,
 020, 111..) whereas the reflections with k equal 3n remain unaffected
 (001, 131, 060, 331...). This is clear from geometric conditions, and
 can be seen in single crystal XRD (oscillation photographs, Weissenberg
 photographs) as well in selected area electron diffraction patterns. The
 fact is known for a long time, and published and discussed in standard
 textbooks, for example *Brindley, G.W., Brown, G.:  Crystal Structures
 of Clay Minerals and their X-ray Identification. Mineralogical Society,
 London, 1980.*

  First, the chlorite refinement.
 
  In the first refinement of chlorite you used no disordering models and
  used ´´cell parameters`` and ´´occupation of octahedra``. So you
  refined the lattice parameters and the 

RE: Problems using TOPAS R (Rietveld refinement)

2007-03-15 Thread AlanCoelho
Leandro

Not sure what the purpose of your refinement is but if it's quantification
then your results would probably be in error to a large extent.

The references given by Alan Hewat and Lubo Smrcok are probably a good
starting point.

Data quality and model errors typically mean that atomic positions should
not be refined for clays, especially for kaolinite. Also, use a common beq
value for all sites or take them from the literature. A global beq could
then be superimposed using something like prm b 0 scale_pks = Exp(-b /
D_spacing^2);.

For quantification try spiking the sample with a standard to determine the
amorphous content.
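A sketch of the bookkeeping behind spiking with a standard; the formula
follows from the fact that Rietveld weight fractions are normalised over
crystalline phases only, so a known spike appears over-represented when
amorphous material is present. The function name and numbers are
illustrative, not from TOPAS:

```python
def amorphous_fraction(w_spike, r_spike):
    """Amorphous fraction of the original (unspiked) sample.

    w_spike: weighed-in fraction of the crystalline standard in the spiked sample
    r_spike: Rietveld-reported fraction of the standard (crystalline sum = 1)
    """
    # Rietveld only sees crystalline material; solving
    # r_spike = w_spike / (w_spike + (1 - w_spike) * (1 - a)) for a gives:
    return (r_spike - w_spike) / (r_spike * (1.0 - w_spike))

# Example: a 20 wt% spike reported as 25 wt% implies 25% amorphous content
print(round(amorphous_fraction(0.20, 0.25), 4))
```

A useful sanity check: when the reported fraction equals the weighed-in
fraction, the formula returns zero amorphous content.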

It is possible to get the peak shapes without changing peak intensities; if
you need assistance then contact me off the list.

Cheers
Alan


-Original Message-
From: Leandro Bravo [mailto:[EMAIL PROTECTED] 
Sent: Friday, 16 March 2007 9:15 AM
To: rietveld_l@ill.fr
Subject: Re: Problems using TOPAS R (Rietveld refinement)

Ok, I'm starting to have success in the kaolinite refinement; the
quantification is giving me reasonable values. I'm refining the thermal
factors, all the atom positions in the kaolinite, the lattice parameters
and the crystallite size. Lattice parameters and crystallite size are giving
me very good numbers, with very low errors (about 0.09). In the thermal
factors, I realized that all of them tend to 20, so after all refinements I
put them to 20 and refine all over again. I don't care that much for atom
positions; I'm only using them because refining only lattice, thermal and
crystallite size wasn't enough to make a good calculated pattern to compare
with the measured one.
In the calcite and dolomite I refine: lattice parameters, crystallite size
and thermal factors. And I use on both a preferred orientation correction
(spherical harmonics, 4th order). The Rwp is about 16.

I'd like to hear some opinions about this strategy of refinement, and
whether you think that I can spare some refining cycles or even fix some
values to reduce errors in the refinement.

_
Descubra como mandar Torpedos SMS do seu Messenger para o celular dos seus 
amigos. http://mobile.msn.com/







RE: Powder Diffraction In Q-Space

2007-02-21 Thread AlanCoelho


Simon Billinge wrote:
We can resolve the problem by setting 2pi=1!

What school did you go to; thanks for the info.

This issue of plotting data is one of those non-issues drummed up by
groups familiar with one format but not with another; I would suggest that
everyone should be familiar with both types of displays. If we had the case
where only 1/d was plotted exclusively then a whole lot of useful
information would be lost. For example, if I look at a fit of a 2Th plot
then I can immediately ascertain the following by looking at the misfits:

- Are the low angle peaks fitting properly; if not then something is wrong
with either the axial or equatorial divergence corrections.
- If the peaks are not fitting at 90 degrees then I know that sample
penetration is not being accounted for properly.
- If variable divergence slits are being used then I know the angle where
spill-over on a post monochromator occurs.

You cannot make these deductions from 1/d plots.

Bob's Law of Unintended Consequences cannot be ignored. Are we to teach
crystallographers to ignore data collection? Big mistake.

Simon also mentioned the Fourier transform; FFTs work on equal data steps so
be aware of data conversion. My guess is that most conversion programs do
the wrong thing.
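A sketch of why the conversion matters, assuming numpy; the wavelength, the
dummy peak and the plain linear interpolation are all illustrative. Equal
steps in 2Th are not equal steps in 1/d, so some resampling is unavoidable
before an FFT, and naive interpolation only approximately preserves
intensities (proper rebinning should conserve counts by integrating over
bins instead):

```python
import numpy as np

lam = 1.5406                                   # Cu K-alpha1 wavelength, Angstroms (assumed)
two_theta = np.arange(10.0, 120.0, 0.02)       # equal steps in 2-theta
y = np.exp(-((two_theta - 45.0) / 0.5) ** 2)   # a dummy Gaussian peak

# Bragg's law: s = 1/d = 2 sin(theta) / lambda
s = 2.0 * np.sin(np.radians(two_theta / 2.0)) / lam

# The s grid is monotonic but NOT equally spaced; resample onto equal steps
s_uniform = np.linspace(s[0], s[-1], s.size)
y_uniform = np.interp(s_uniform, s, y)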

alan
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Simon Billinge
Sent: Thursday, 22 February 2007 2:52 AM
To: rietveld_l@ill.fr
Subject: Re: Powder Diffraction In Q-Space

roughly speaking, historically, Q=2pi/d is used by physicists and
S=1/d used by crystallographers and these communities define their
reciprocal lattices and Fourier transforms accordingly.  With the 2pi
in there, Q is the momentum transfer.  Without it in there the Laue
Equations are much cleaner.

Physicists are good at working in reduced units, like setting c=1.  We
can resolve the problem by setting 2pi=1!

S

On 2/21/07, Ray Osborn [EMAIL PROTECTED] wrote:
 On 2007/02/21 9:06, Jonathan Wright [EMAIL PROTECTED] wrote:

  2pi/d just needs a better name than Q?

 I guess, to some extent, this debate depends on whether you are only
 interested in talking to other powder diffraction specialists.  As a
 non-specialist, I would suggest that Q is a more widely used variable -
 certainly in the inelastic scattering community, but also I believe in the
 liquids and amorphous community, who might be interested in studying
 crystallization processes, for example.

 If I want to check the elastic scattering contained within my inelastic
 spectrum, it is certainly an inconvenience having to convert from
two-theta,
 even assuming I know the wavelength.  I certainly hope you don't settle on
 10^4/d^2.

 Regards,
 Ray
 --
 Dr Ray OsbornTel: +1 (630) 252-9011
 Materials Science Division   Fax: +1 (630) 252-
 Argonne National Laboratory  E-mail: [EMAIL PROTECTED]
 Argonne, IL 60439-4845






-- 
Prof. Simon Billinge
Department of Physics and Astronomy
4268 Biomed. Phys. Sciences Building
Michigan State University
East Lansing, MI 48824
tel: +1-517-355-9200 x2202
fax: +1-517-353-4500
email: [EMAIL PROTECTED]
home: http://nirt.pa.msu.edu/




RE: Powder Diffraction In Q-Space

2007-02-21 Thread AlanCoelho

Whether a program has a button to display data as a function of 1/d, or a
button to take the square root of intensities, is trivial to the point of
not being worth talking about, much less setting up web sites to get
opinions on.

The only point worth talking about is how a conversion from 2Th to 1/d is
done in regard to taking the Fourier transform of a powder pattern.

alan
 

-Original Message-
From: Joerg Bergmann [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 22 February 2007 4:30 AM
To: rietveld_l@ill.fr
Subject: Re: Powder Diffraction In Q-Space

It's a principle of software design not to presume any kind of
equidistant data. Unfortunately, file formats for non-equidistant
data are seldom. So I could not implement any in BGMN, until now.
But, in principle, there is no restriction.

Regards

Joerg Bergmann





RE: CIF for Pyrrhotite 4C

2007-02-02 Thread AlanCoelho
Hi Doug 

You can add new settings in the file SGCOM5.CPP where entries are in the
form of computer-adapted symbols by Uri Shmueli, Acta Cryst. (1984). A40,
559-567.

I am probably wrong but my guess for F12/d1 is 

FMCI1A000P2D003

where I used the a+b face-diagonal rotation matrix of 2D; consistency of the
rotation operators is checked.

The ICSD provides a means of obtaining the equivalent positions, thus you
can check whether the string-generated equivalent positions match the ICSD
equivalent positions; this check is useful especially as a lot of symmetry
is redundant.

Thus if your CIF file has equivalent positions then the job is easy.

If not then not so easy. For example, in the ICSD you will find two
distinctly different sets of equivalent positions for A12/a1 with one of
them being the standard setting.


Contact me off the list if the CIF has equivalent positions.
cheers
alan



-Original Message-
From: Allen, Douglas R. [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 3 February 2007 8:02 AM
To: rietveld_l@ill.fr
Subject: CIF for Pyrrhotite 4C

Can anyone help me with a CIF file for Pyrrhotite 4C.  The CIF output
from ICSD #42491 gives SG 15 setting F12/d1 which is not recognized by
TOPAS.  I found an entry in the ICDD database from the Linus Pauling
file which shows  the setting A12/A1 but when the CIF file is exported
the same unrecognizable F12/d1 appears.  How can I convert this CIF file
to something that does not crash TOPAS.

Doug Allen
Phelps Dodge Process Technology Center



This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please notify the system manager.
This message contains confidential information and is intended only for the
individual named. If you are not the named addressee you should not
disseminate, distribute or copy this e-mail.







RE: About zero counts etc.

2006-10-13 Thread AlanCoelho
Hi Bill

Excellent, thank you; I am surprised that I understand what you are saying
for a change.

In other words, instead of squaring some function F^2 to minimize on, you
instead simply minimize on a more general function G. It's the squaring that
leads to the normal equations and to what we call least squares.

It's actually quite trivial to minimize on a general function G. To achieve
convergence behaviour similar to least squares the modified BFGS method
should cope. The only trick would be to calculate derivatives in a manner
fast enough to be useful.

Will put it on my todo list.
All the best
Alan
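The general minimisation being discussed can be sketched in a few lines;
the Poisson negative log-likelihood below is the pdf Bill refers to, the
counts are made up, and a hand-rolled Newton iteration stands in for a
BFGS-style minimiser:

```python
import numpy as np

def poisson_nll(mu, counts):
    # -log likelihood for Poisson data, dropping the mu-independent log(n!)
    # term; well defined even when counts are zero (unlike 1/Yobs weights)
    return np.sum(mu - counts * np.log(mu))

counts = np.array([0, 1, 0, 2, 1, 0, 3, 1, 0, 2])  # low-count data with zeros

# Minimise G(mu) = poisson_nll by Newton's method
mu = 1.0
for _ in range(50):
    grad = counts.size - counts.sum() / mu
    hess = counts.sum() / mu**2
    mu -= grad / hess

# For a constant rate, the Poisson ML estimate is just the mean of the counts
print(mu, counts.mean())
```

The zeros cause no trouble at all here, which is the practical payoff of
minimising on the correct G rather than on a least-squares F^2 with
improvised weights.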


From: David, WIF (Bill) [mailto:[EMAIL PROTECTED]]
Sent: Saturday, 14 October 2006 8:23 AM
To: rietveld_l@ill.fr
Cc: Sivia, DS (Devinder)
Subject: RE: About zero counts etc.

Hi Alan,

The short answer is that the question of zero counts can indeed be answered
in a simple and practical manner. There's a lot of apparent mystique about
probability theory, especially when one invokes the term Bayesian
probability theory. However, Bayesian probability theory is really just
common sense about what's likely and what's unlikely converted into
mathematics. It may seem tautologous but the most probable answer is the
answer with the highest probability value, i.e. the maximum of the
probability distribution function (pdf). This is quite obvious; the subtle
bit is what the probability distribution function is. Normally, with enough
counts, we move over to a Gaussian pdf along the lines of

Gaussian pdf = const * exp(-0.5*((yobs-ycalc)/esd)^2)

This is a maximum when the negative log pdf is a minimum. The negative log
pdf is simply

negative log pdf = const2 * ((yobs-ycalc)/esd)^2

In other words, least squares. The corollary is that least squares is
associated with (and only associated with) a Gaussian pdf, which happily is
the case almost all of the time in Rietveld analysis. However, when there
are zeroes and ones around in the observed counts, or when the model is
incomplete or uncertain, then we have to move away from a Gaussian pdf and,
by implication, least squares. With zeroes and ones around, we have to move
from a Gaussian pdf to a Poisson pdf and work with the negative log pdf of
that. With incomplete models, we have to move over to the maximum
likelihood methods that the macromolecular people have been using for years.

So the bottom line is that fundamental statistics is as basic as fundamental
parameters. You can fudge your way with a Pearson VII or a pseudo-Voigt
function to fit an X-ray emission profile but the physics says that there
are better functions. Similarly you can fudge your way with tweaking the
Gaussian pdf when there are zero and one counts around, but you'd be better
to move over to the correct probability functions. Antoniadis et al. have
been through all of this back in the early 90s. I've got code (and it's
only a few lines) that is precisely Poisson and moves seamlessly over to
the chi-squared metric. I'll send it to you offline; it would be great to
have the ability to program our own algebraic minimisation function in a
TOPAS input file. I'd love to be able to do that for robust statistics and
also maximum likelihood!

All the best,
Bill

-Original Message-
From: AlanCoelho [mailto:[EMAIL PROTECTED]]
Sent: 13 October 2006 22:30
To: rietveld_l@ill.fr
Subject: RE: About zero counts etc.

Hi Bill and others

Why can't this question of zero counts be answered in a simple and
practical manner? I hope to read Devinder Sivia's excellent book one day but
for the time being it would be useful if the statistical heavyweights were
to advise on what weighting is appropriate without everyone having to
understand the details.

The original question was how to get XFIT to load data files with zero
counts; obviously setting the values to 1 is incorrect. Joerg Bergmann
seems to indicate that the weighting should be:

 weighting = 1 / (Yobs+1);

as the esd is sqrt(n+1). Again, without reading the book, should the
weighting be:

 weighting = 1 / If(Yobs, Yobs, Ycalc);

Hopefully these spreadsheet type formulas are understandable. This last
equation is not liked by computers due to a possible zero divide when Ycalc
is zero.

Any ideas Bill and others
Alan

From: David, WIF (Bill) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 12 October 2006 4:48 PM
To: rietveld_l@ill.fr
Subject: RE: About zero counts etc.

Dear all,

Jon's right - when the counts are very low - i.e. zeroes and ones around -
then the correct Bayesian approach is to use Poisson statistics. This, as
Jon said, has been tackled by Antoniadis et al. (Acta Cryst. (1990). A46,
692-711, Maximum-likelihood methods in powder diffraction refinements, A.
Antoniadis, J. Berruyer and A. Filhol) in the context of the Rietveld method
some years ago. This paper is very informative for those who are intrigued
about the fact

TOPAS-Academic version 4 now available

2006-10-11 Thread AlanCoelho



Dear all

Version 4 of the Academic version of Bruker-AXS TOPAS, TOPAS-Academic, is
now available to degree-granting institutions comprising universities,
university run institutes, laboratories and schools. Bruker-AXS TOPAS shall
be released before the end of the year. This version is a big step forward;
it is faster, more stable and equipped to take on the largest of
crystallographic problems comprising tens of thousands of parameters. New
techniques and ways of doing things should significantly assist in the
analysis of crystallographic data; details are here:

http://members.optusnet.com.au/~alancoelho

All the best
Alan Coelho


RE: XFit issue

2006-10-10 Thread AlanCoelho
Hi Sajeev

In XFIT the values of the observed data points in the xdd file need to be
greater than or equal to 1. Simply copy the xdd file into a spreadsheet and
set values less than 1 to 1.
alan
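The spreadsheet step can also be scripted; a minimal sketch, assuming the
xdd file is plain two-column x/y text (which is not guaranteed for every
xdd variant) and keeping any non-numeric lines untouched:

```python
def clamp_counts(lines, floor=1.0):
    """Return the lines with every y value raised to at least `floor`."""
    out = []
    for line in lines:
        try:
            x, y = (float(v) for v in line.split())
            out.append(f"{x} {max(y, floor)}")
        except ValueError:
            out.append(line.rstrip("\n"))   # headers/comments pass through
    return out

# Example usage on a few in-memory lines; a real script would read the file
data = ["10.00 0", "10.02 3", "10.04 0.5"]
print(clamp_counts(data))
```

Note the values are formatted back as floats, so "0" becomes "1.0"; adjust
the formatting if the downstream reader is picky about column widths.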

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 11 October 2006 7:22 AM
To: rietveld_l@ill.fr
Subject: XFit issue


Hi,

I am trying to do a profile fit using XFit. When I try to fit using
Fit Marquardt, I see an error statement which says:
"values for all the XRD data points must be greater than or equal to 1"

Can someone let me know the cause of this error?

Thanks a lot.

regards

Sajeev Moorthiyedath






Re: CIF format for powder work

2006-07-12 Thread AlanCoelho
Pam, as the author of the INP format it was not my intention to make your
life difficult. 

Maybe I am wrong, but is it the case that we are now supposed to write into
CIF more than just primary/raw data? This to me is not what CIF was
originally meant for; non-primary data such as logical statements and
looping constructs need a script.

Before stoking the fire I will say that I am not an expert in the field of
archiving but maybe that’s why I may be able to contribute to the
discussion. A long time ago I briefly read the CIF documents and thought
that it was more about helping journals incorporate data into papers than it
was about storing non-raw information. It does of course help data exchange
as it is a standard (supposedly).

CIF at the time of inception was no doubt a step forward, but in today’s
computing world the idea of data being separate from a document is fast coming
to an end. Not joining a fad can be wise, but how can one level of looping in
CIF be sufficient (someone correct me if things have changed)?

Even though the INP format is similar to XML without the tags it is not my
preferred option either. XML however is far superior to CIF especially when
combined with something like JScript. It can describe any document and at
any complexity but it can be cumbersome when things get complex. In extreme
cases nothing beats a script or, dare I say, a programming language. I don’t
know how many readers know of Mathematica notebooks, but they are the
ultimate in combining non-raw data with a document.

This leads me to my preferred option! Why not use something like c or c++?
And don’t say that it’s not possible, as it’s relatively easy to convert the
INP format to c++ with the use of operator overloading, and it would look just
as simple. 
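As a hedged illustration of that claim, here is a tiny sketch of how operator overloading lets plain C++ read almost like a keyword/value data file, with the compiler acting as the 'checker'. All names here (Block, the chained operator()) are invented for this sketch; they are not part of INP, CIF, TOPAS or any real dictionary:

```cpp
#include <map>
#include <string>

// A Block collects keyword/value pairs, mimicking one data block of an
// INP- or CIF-like file.  operator() records a pair and returns *this so
// that entries can be chained, giving a declarative, data-file look.
struct Block {
    std::map<std::string, double> values;
    Block& operator()(const std::string& key, double v) {
        values[key] = v;   // record one keyword/value pair
        return *this;      // chaining: Block()("a", ...)("b", ...)
    }
};

// A 'data file' written as plain, compilable C++:
const Block cell = Block()("a", 5.431)("b", 5.431)("c", 5.431)("alpha", 90.0);
```

Any C++ compiler then validates the 'file' for free, and a debugger or class browser can inspect the resulting data structures.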

Why can’t a CIF file be a c++ file with associated standard headers
predefined (analogous to a dictionary)? Is it because c++ is too complex?
Maybe, but gosh aren’t we scientists. There are tools coming online that
allow the viewing of c++ class hierarchies together with their data whilst a
program is running. One such OLD tool I stumbled upon during a stint at
ISIS is the open-source ROOT, which allows the browsing of c++ classes and
their data in a GUI-friendly manner whilst a program is running; this is
analogous to an XML browser. There are other commercial ones, not to mention
any good debugger with the nicest of GUIs. Thus your 'checker' would in fact
be a c++ compiler (interpreter or otherwise), and I can’t think of a better
standard. 

Pie in the sky? Well, please explain why! I am willing to bet that such an
approach would be superior to XML, and that learning the c++ syntax is simpler
than learning XML.

All the best
Alan Coelho 



-Original Message-
From: Whitfield, Pamela [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 13 July 2006 12:06 AM
To: rietveld_l@ill.fr

Hi Bob

This was a solution from scratch so I'm afraid I was using Topas 4.  It will
output the structural CIF file and stuff like reflections, calculated
patterns and whatnot into text files, but none of the other various bits and
pieces.  I had to do those by hand.

The CIF checking seems to have particular trouble handling all the different
residuals, i.e. overall, individual patterns, etc and it really doesn't like
splitting the RB into the different phases.  As far as I can tell they are
all there, in the correct places, but CheckCIF doesn't find any of them.
Trying to deal with convolution peak fitting is a little difficult, as
there's nothing like PV parameters to input.  I had to be very vague on that
indeed.  Topas also doesn't use the standard absorption corrections.
CheckCIF didn't manage to extract the structure data.  The Platon check did
(a little weird as I thought they were supposed to be equivalent), but
believe it or not it got the Z wrong for the silicon standard, so messed up
the checking!

I think I've taken it as far as I can go so it will have to do.  I'll just
have to explain that CheckCIF made some mistakes.

The next file I have to make is from a Topas analysis as well.  I'm not sure
any other program could handle the constraints I constructed for that one
(how's that for being controversial?!).  
I'd be curious to see if CIF could handle the geometrical constraints that
have been published recently in J.Appl.Cryst. on apatites (I have to admit
to some involvement :-).  For that one no CIF was created but the Topas
input file was deposited.  Maybe I should have done the same here!

The template idea is a good one.  Unfortunately I seem to be doing all sorts
of analyses which aren't directly related to each other from different
instruments, so a template for this one won't help with the next one, which
also involves user-defined instrument convolutions - what joy!

Pam

-Original Message-
From: Von Dreele, Robert B. [mailto:[EMAIL PROTECTED] 
Sent: July 12, 2006 8:49 AM
To: rietveld_l@ill.fr


Pam,
Actually what was the issue with the cif files with multiple phases/data
sets? 

Re: how to find out POLARISATION Factor

2006-05-31 Thread AlanCoelho
Hi Larry

Whilst we are here, I have used the FCJ correction to fit the ray-tracing
0.5 degree equatorial divergence pattern. To do this I simply informed the
Full Axial Model that divergence in the primary beam is zero. Without
refining on any axial divergence parameters the fit is very poor, as you
would expect.

To convert the Full Axial Model to the FCJ correction you need to set
divergence in the axial plane to zero, constrain the source length and
sample length in the axial plane to the same value; this I think corresponds
to something like the FCJ S parameter. Refining on the receiving slit length
in the axial plane corresponds to the other parameter which I think is
called L. In doing so and refining on the two parameters the fit was good
with a few misfits around the peak maxima but nothing too serious (let me
know if you want the data). The refined S and L values refined to:

  S = 11.8680272`
  L = 10.428035`

What you find with the FCJ correction, however, is that at high angles the
refined S and L values are different, and at 90 degrees 2Th the FCJ correction
breaks down as it predicts zero broadening. 

But then again, at high angles the emission profile dominates and axial
divergence is only a problem for accurate work. Having said that, it is
surprising how much emphasis is placed on small changes in size/strain
Gaussian and Lorentzian broadening without first considering axial
divergence in the range 60 to 120 degrees 2Th. Divergences of 2.5
degrees and less, however, reduce the problem considerably for the FCJ
approach.

cheers
alan

-Original Message-
From: Larry Finger [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 31 May 2006 7:25 AM
To: rietveld_l@ill.fr

AlanCoelho wrote:
 I would like to conclude that to be critical of a method such as the
 convolution approach to describing instrument line profiles it would serve
 authors best to first investigate the approach rather than to be critical
 of it without substance.
   
Amen!

I'm really impressed with how well the convolution method has done. 
Without any way to test it, I've always had to accept the ray-tracers' 
argument that they did better. All I knew is that convolution did well 
enough. You have shown that it makes no real difference.

Would it be possible for you to generate the data points for the same 
set of parameters with peaks at 5 and 10 degrees? That way the asymmetry 
will be really pronounced and the convolution method will be even more 
stressed. I don't expect it to make any difference, but I'd like to see 
it on the screen.

Thanks,

Larry



RE: how to find out POLARISATION Factor

2006-05-31 Thread AlanCoelho
The saga continues.

It has been brought to my attention that the fit.xy data I provided in a
previous e-mail has misfits in the tails, as seen in the following plot
(blue is ray tracing, red is convolution):

[image not preserved in the archive]

These misfits are due to how the ray-tracing program calculates the final
pattern; they are not due to the aberration.

I certainly do not wish to get into any discussion concerning the
implementation of methods, and I think viewing the plot at full scale
speaks for itself:

[image not preserved in the archive]
Pamela Whitfield wrote:

 This seems to have moved away from polarisation onto something far more
 touchy. :-)

As always.

Cheers
Alan Coelho