zhijian fan wrote:

    *> For example, Rp, Rwp, Chi2 and RB go respectively from 3.96, 5.33, 2.68, 5.80
    > to 3.35, 4.42, 1.85, 3.07.

    This should pass a test of statistical significance. *
    I set up a hypothesis test. H0: the PO correction is not needed.
    The model with PO correction:
    G1 = sum[wi(Yi - Yc,i)^2] = (n - p1)*chi2 = 715*1.85 = 1322.75.
    The model without PO correction:
    G2 = sum[wi(Yi - Yc,i)^2] = (n - p2)*chi2 = 717*2.68 = 1921.56.
    The F statistic is then:
    F = [(G2 - G1)/(p1 - p2)] / [G1/(n - p1)] = [(1921.56 - 1322.75)/2] / 1.85
    = 161.84.
    The corresponding p-value is essentially 0, so H0 is rejected.
    *(2.) Is the procedure above correct?*

It's conventional at least, although I haven't checked your numbers.
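
For anyone who does want to check, the quoted arithmetic can be reproduced in a few lines of plain Python (numbers taken directly from the post above):

```python
# Reproducing the F statistic quoted above (values from the post).
n_minus_p1, chi2_1 = 715, 1.85   # model with PO correction
n_minus_p2, chi2_2 = 717, 2.68   # model without PO correction

G1 = n_minus_p1 * chi2_1         # weighted sum of squared residuals, full model
G2 = n_minus_p2 * chi2_2         # same, restricted model
F = ((G2 - G1) / 2) / (G1 / n_minus_p1)   # 2 = p1 - p2, the extra parameters
print(round(F, 2))               # 161.84, matching the quoted value
```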

    In the appendix of Hamilton's paper (1965), it says:
    "Interpolation procedures for values of b and (n-m) not
    found in the tables are based on the fact that interpolation in F
    may be carried out on the reciprocals of the degrees of freedom."
    *(3.) Does anyone know how to use interpolation to get F values not
    listed in the F table? Or is there a free program which can
    calculate them?* Because in practice, the degrees of freedom are
    often very large.

There is a subroutine in the IMSL subroutine library (not free, but I guess anyone with a license could compile a program which supplies the number). I think you have already passed Hamilton's test anyway; a 1% change in Rwp is massive. The problem of the degrees of freedom being "too large" is indeed one of the challenges of applying this to powder data. ... "Lies, damned lies and statistics".
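
On the "free program" question: you don't actually need tables or interpolation, since the upper-tail F probability can be computed directly from the regularized incomplete beta function. A self-contained sketch in plain Python (continued-fraction evaluation in the style of Numerical Recipes; `f_sf` is a hypothetical helper name, not from any particular package):

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12):
    # Continued fraction for the regularized incomplete beta function.
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    if abs(d) < 1e-300: d = 1e-300
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-300: d = 1e-300
        c = 1.0 + aa / c
        if abs(c) < 1e-300: c = 1e-300
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-300: d = 1e-300
        c = 1.0 + aa / c
        if abs(c) < 1e-300: c = 1e-300
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betainc(a, b, x):
    # Regularized incomplete beta function I_x(a, b).
    if x <= 0.0: return 0.0
    if x >= 1.0: return 1.0
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_sf(F, dfn, dfd):
    # Upper-tail probability P(F_{dfn,dfd} > F), i.e. the p-value of the F-test.
    return betainc(dfd / 2.0, dfn / 2.0, dfd / (dfd + dfn * F))

# The PO example above: F = 161.84 with (2, 715) degrees of freedom.
print(f_sf(161.84, 2, 715))   # effectively zero, so H0 is rejected
```

(Anyone with SciPy installed can get the same number from `scipy.stats.f.sf`.)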

    *(5.) Are the model with anisotropic temperature factors and the
    one with only isotropic temperature factors nested? If making a
    test with the F distribution, how large is the alpha value usually
    taken to be? 0.05 as well?*

In practice there are not too many people doing this routinely, but if you mean that you need to add the anisotropic thermal factors with a 95% probability that they are required, then that sounds about right. In my experience it always came out higher than that but didn't really change my views about a model, which probably means I should be burned at the stake for heresy. You could also ask what Rwp you needed to get the model to be at this 95% confidence level, I think the change will be very small.
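
To back up the "the change will be very small" claim: Hamilton's R-ratio test rejects the restricted model at level alpha when Rwp(restricted)/Rwp(full) exceeds sqrt(1 + b*F/(n-m)), with F the critical value for (b, n-m) degrees of freedom. A rough sketch, reusing the degrees of freedom from the PO example above and the large-sample approximation F_{b,inf,0.05} ~ chi2_{0.05,b}/b:

```python
import math

b, dfd = 2, 715            # extra parameters and residual dof (PO example above)
chi2_crit = 5.991          # chi-square critical value, alpha = 0.05, 2 dof
F_crit = chi2_crit / b     # approximate F critical value for large dfd
ratio = math.sqrt(1.0 + b * F_crit / dfd)
print(round(ratio, 4))     # ~1.0042: an Rwp improvement of ~0.4% already suffices
```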

Here's another idea that I wanted to run past the list, and might prove useful for you as well. How about trying to get something like the RFree statistic that some single crystal people use? Take your model with no preferred orientation and add a few excluded regions which throw out some of your peaks (wide enough to hide whole peaks, not just half of a peak). Then refine your preferred orientation and see if it comes to about the same result as before, and if it correctly predicts the peaks which you threw out. Clearly you need to have more data than parameters for this to work, but if your refinement can predict things about data which it doesn't know about, then you can feel more confident that the model is good.
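
If your refinement program can export the observed pattern, the calculated pattern, and the weights, the check in the excluded regions reduces to computing Rwp over just those points. A minimal sketch (plain Python; the array names and toy numbers are purely illustrative):

```python
import math

def rwp(y_obs, y_calc, w, mask=None):
    # Weighted profile R-factor over the points selected by mask
    # (all points if mask is None): sqrt(sum w*(Yo-Yc)^2 / sum w*Yo^2).
    idx = range(len(y_obs)) if mask is None else [i for i, m in enumerate(mask) if m]
    num = sum(w[i] * (y_obs[i] - y_calc[i]) ** 2 for i in idx)
    den = sum(w[i] * y_obs[i] ** 2 for i in idx)
    return math.sqrt(num / den)

# Toy data: how well does the refined model predict the excluded peaks?
y_obs  = [10.0, 12.0, 50.0, 48.0, 11.0]
y_calc = [10.5, 11.5, 49.0, 49.5, 10.5]
w      = [1.0 / y for y in y_obs]          # Poisson-like weights, w_i = 1/Y_i
excluded = [False, False, True, True, False]
print(rwp(y_obs, y_calc, w, excluded))     # Rwp over the excluded points only
```

Comparing this number to the Rwp of the points that were actually fitted gives a crude cross-validation signal: if they are comparable, the model generalizes.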

Hope this helps,

Jon
