>
> Well, the Jackknife technique
> (http://en.wikipedia.org/wiki/Resampling_(statistics)#Jackknife) does
> something like this.  It uses the errors present inside the collected
> data to estimate the parameter errors.  It's not great, but is useful
> when errors cannot be measured.  You can also use the covariance
> matrix from the optimisation space to estimate errors.  Both are rough
> and approximate, and in convoluted spaces (the diffusion tensor space
> and double motion model-free models of Clore et al., 1990) are known
> to have problems.  Monte Carlo simulations perform much better in
> complex spaces.
>

I have used (and extensively tested) bootstrap resampling for this
problem. In my hands it works very well provided the data quality is
high (which of course it must be if the resulting values are to be of
any use in model-free analysis). In other words, it gives errors
indistinguishable from those derived by Monte Carlo simulations based
on duplicate spectra. Bootstrapping, like the jackknife, does not
depend on an estimate of the peak height uncertainty. Its success
presumably reflects the smooth and simple optimisation space involved
in an exponential fit to good data - I fully expect it to fail if
applied to the complex spaces of model-free optimisation.
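For what it's worth, a minimal sketch of what I mean by bootstrap
resampling for the exponential fit - resample the (time, height) pairs
with replacement, refit each synthetic data set, and take the spread of
the fitted parameters as the error. This is purely illustrative
(synthetic data, numpy/scipy, not the relax implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def expdecay(t, i0, r):
    """Two-parameter exponential decay, I(t) = I0 * exp(-R*t)."""
    return i0 * np.exp(-r * t)

def bootstrap_errors(t, heights, n_boot=500, seed=0):
    """Estimate sigma(I0) and sigma(R) by resampling the data pairs
    with replacement and refitting each synthetic data set."""
    rng = np.random.default_rng(seed)
    n = len(t)
    params = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample with replacement
        p, _ = curve_fit(expdecay, t[idx], heights[idx],
                         p0=(heights.max(), 1.0))
        params.append(p)
    return np.std(params, axis=0)  # (sigma_I0, sigma_R)

# Synthetic "good quality" data: R = 1.5 s^-1, ~2% noise.
t = np.linspace(0.01, 2.0, 12)
rng = np.random.default_rng(1)
heights = expdecay(t, 100.0, 1.5) * (1 + 0.02 * rng.standard_normal(t.size))
sigma_i0, sigma_r = bootstrap_errors(t, heights)
```

With clean data like this the bootstrap spread on R is small and, in my
experience, matches the Monte Carlo result; no peak height uncertainty
was needed anywhere.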

While on the topic, I can also confirm that the baseline RMSD is a good
estimator of the peak height uncertainty. In my hands no sqrt(2)
correction is required. Interestingly, there seems to be no simple
relationship between the baseline RMSD and the peak volume uncertainty.
I never managed to understand why that is, but perhaps it is related to
the behaviour of noise under apodisation?
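To be concrete about what I measure: take a signal-free stretch of the
spectrum, compute the RMSD about its mean, and use that number directly
as the one-sigma height error. A toy illustration (synthetic 1D trace,
not real spectral data):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_true = 0.05  # the noise level we hope to recover

# Synthetic 1D trace: flat baseline with Gaussian noise plus one peak.
x = np.arange(1024)
trace = sigma_true * rng.standard_normal(x.size)
trace += 10.0 * np.exp(-0.5 * ((x - 512) / 5.0) ** 2)  # the peak

# Baseline RMSD from a peak-free region, used as-is (no sqrt(2) factor).
baseline = trace[:400]
rmsd = np.sqrt(np.mean((baseline - baseline.mean()) ** 2))
```

Here rmsd recovers sigma_true to within sampling error, which is all
the claim amounts to; the volume case is where this simple picture
breaks down.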

Chris

_______________________________________________
relax (http://nmr-relax.com)

This is the relax-users mailing list
[email protected]

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-users
