Re: Relaxation curve fitting

2014-03-04 Thread mengjun.xue

Hi Edward,

Thank you for your suggestions. I tried opening qtgrace first and then
opening the intensities file, but I cannot see any curve in qtgrace. I have
submitted the bug report.


Regards,

Mengjun





Quoting Edward d'Auvergne edw...@nmr-relax.com:


Hi Mengjun,

This looks like a bug in relax on Windows with spaces in the directory
name!  I thought I fixed this many, many years ago - maybe it has
resurfaced in a new place.  Could you please submit a bug report with
this issue?  Actually, before you do that, can you open qtgrace and
then open this file?  If the error is in qtgrace, then the bug report
is not needed as there is nothing relax can do to fix it.  You can
submit a bug using the link
https://gna.org/bugs/?func=additem&group=relax.  You can also attach
the file there.

Note that you should not have your data files in the same directory as
relax.  You should always keep your data files separate from the
software files.  Mixing the files together is quite dangerous and
might result in program files or directories being overwritten by data
and results files.  If you place your files into a directory on the
C:\ drive without any spaces, this problem will not appear.

Regards,

Edward




On 4 March 2014 18:38,  mengjun@mailbox.tu-berlin.de wrote:

Hi Edward,

Thank you very much for your suggestion. As you suggested, I ran the
grace.view user function in the relax GUI and selected the qtgrace.exe file
and the intensities.agr file, but an error occurs:

[Error] Can't stat file C:\\Program
[Error] Can't stat file Files\\relax-3.1.5\\grace\\intensities.agr

Please find the intensities.agr file (which includes 3 residues for testing)
in the attachment. Thank you.


With best regards,

Mengjun Xue







Quoting Edward d'Auvergne edw...@nmr-relax.com:



Hi Mengjun,

If you are using the GUI, you don't need to change the qtgrace.exe
file.  The grace.view user function window allows you to choose the
Grace executable file.  Just click on the Select the file button and
select the qtgrace.exe file.

Regards,

Edward



On 3 March 2014 17:51,  mengjun@mailbox.tu-berlin.de wrote:


Hi Troels and Martin,

Thank you so much for your responses. Following your suggestions, the raw
intensities data can now be extracted from the results.bz2 file or the
rx.save.bz2 file.

For the xmgrace installation, I have downloaded qtgrace from
http://sourceforge.net/projects/qtgrace/ and unpacked it to
C:\Python27\qtgrace_windows_binary. In the
C:\Python27\qtgrace_windows_binary\bin folder I found qtgrace.exe, but I did
not find xmgrace.exe. How do I put both qtgrace.exe and xmgrace.exe in the
same bin folder? Should xmgrace.exe be downloaded from the internet and then
put into the same bin folder? Thank you so much.

Best regards,

Mengjun





Quoting Troels Emtekær Linnet tlin...@nmr-relax.com:


Dear Mengjun.

For xmgrace installation, follow this:


http://wiki.nmr-relax.com/Installation_windows_Python_x86-32_Visual_Studio_Express_for_Windows_Desktop#xmgrace_-_for_the_plotting_results_of_NMR-relax

In short:
1) Download and install.
2) Copy qtgrace.exe to xmgrace.exe in the same folder.
3) Add the folder where xmgrace.exe resides to your Windows PATH.
4) Test it by opening cmd and typing xmgrace. (You may need to
restart the computer for the PATH update to take effect.)

Or if you have matplotlib, try this tutorial:
http://wiki.nmr-relax.com/Matplotlib_example



2014-03-03 16:11 GMT+01:00  mengjun@mailbox.tu-berlin.de:



Hi Edward,

I have tried using relax_fit.py to extract R1 data and obtained 3 files:
the rx.out file (R1 values), the rx.save.bz2 file, and the results.bz2
file. As Xmgrace is not available on my computer, I want to display the
intensity decay curves in other software, so how can I extract the raw
data from the output (results.bz2) of relax_fit.py? It seems the
rx.save.bz2 file is the same as the results.bz2 file. Thank you very much.

With best regards,

Mengjun Xue


___
relax (http://www.nmr-relax.com)

This is the relax-users mailing list
relax-users@gna.org

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-users



Re: Relaxation curve fitting

2014-03-03 Thread Troels Emtekær Linnet
Dear Mengjun.

For xmgrace installation, follow this:
http://wiki.nmr-relax.com/Installation_windows_Python_x86-32_Visual_Studio_Express_for_Windows_Desktop#xmgrace_-_for_the_plotting_results_of_NMR-relax

In short:
1) Download and install.
2) Copy qtgrace.exe to xmgrace.exe in the same folder.
3) Add the folder where xmgrace.exe resides to your Windows PATH.
4) Test it by opening cmd and typing xmgrace. (You may need to
restart the computer for the PATH update to take effect.)

Or if you have matplotlib, try this tutorial:
http://wiki.nmr-relax.com/Matplotlib_example



2014-03-03 16:11 GMT+01:00  mengjun@mailbox.tu-berlin.de:
 Hi Edward,

 I have tried using relax_fit.py to extract R1 data and obtained 3 files:
 the rx.out file (R1 values), the rx.save.bz2 file, and the results.bz2 file.
 As Xmgrace is not available on my computer, I want to display the intensity
 decay curves in other software, so how can I extract the raw data from the
 output (results.bz2) of relax_fit.py? It seems the rx.save.bz2 file is the
 same as the results.bz2 file. Thank you very much.

 With best regards,

 Mengjun Xue




Re: Relaxation curve fitting

2014-03-03 Thread Troels Emtekær Linnet
Dear Mengjun.

Let me extend the previous explanation.

Use the GUI to load the results.bz2 file.

Then use the user function value.write to write a text file with the
desired results. These are just flat text files, like rx.out, and can
include intensities instead of normalized intensities. Use these files
to plot in any program.

Or use the user function grace.write to make grace files.

Best
Troels
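Those flat files can then be read in any environment. A minimal sketch in Python (the column layout below, residue number, value, error, is an assumption here; check the header line of your own rx.out-style file and adjust the indices):

```python
# Minimal sketch: parse a flat text file like the ones value.write produces.
# The assumed layout -- '#' comment header, then whitespace-separated
# residue number, value, error -- may differ from your file; adjust indices.

def read_flat_results(text, value_col=1, error_col=2):
    """Return (residues, values, errors) from a whitespace-delimited file."""
    x, y, err = [], [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):   # skip blanks and comments
            continue
        fields = line.split()
        x.append(int(fields[0]))
        y.append(float(fields[value_col]))
        err.append(float(fields[error_col]))
    return x, y, err

# Example with made-up numbers:
sample = """# res_num  rx  rx_err
1  1.42  0.03
2  1.38  0.05
"""
res, rates, errors = read_flat_results(sample)
```

The lists can then be fed straight into matplotlib, gnuplot, Excel, or any other plotting program.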


2014-03-03 16:33 GMT+01:00 Troels Emtekær Linnet tlin...@nmr-relax.com:
 Dear Mengjun.

 For xmgrace installation, follow this:
 http://wiki.nmr-relax.com/Installation_windows_Python_x86-32_Visual_Studio_Express_for_Windows_Desktop#xmgrace_-_for_the_plotting_results_of_NMR-relax

 In short:
 1) Download and install.
 2) Copy qtgrace.exe to xmgrace.exe in the same folder.
 3) Add the folder where xmgrace.exe resides to your Windows PATH.
 4) Test it by opening cmd and typing xmgrace. (You may need to
 restart the computer for the PATH update to take effect.)

 Or if you have matplotlib, try this tutorial:
 http://wiki.nmr-relax.com/Matplotlib_example



 2014-03-03 16:11 GMT+01:00  mengjun@mailbox.tu-berlin.de:
 Hi Edward,

 I have tried using relax_fit.py to extract R1 data and obtained 3 files:
 the rx.out file (R1 values), the rx.save.bz2 file, and the results.bz2 file.
 As Xmgrace is not available on my computer, I want to display the intensity
 decay curves in other software, so how can I extract the raw data from the
 output (results.bz2) of relax_fit.py? It seems the rx.save.bz2 file is the
 same as the results.bz2 file. Thank you very much.

 With best regards,

 Mengjun Xue




Re: relaxation curve fitting

2012-07-02 Thread Edward d'Auvergne

Re: relaxation curve fitting

2012-06-30 Thread Edward d'Auvergne
Hi Romel,

The problem is that unfortunately the 'inv' model is simply not
implemented yet.  This is the model for the very old-school inversion
recovery type R1 experiments whereby the magnetisation returns to the
Boltzmann equilibrium.  I'm guessing you should be using the 'exp'
model instead.  This is the standard 2 parameter exponential fit
whereby the magnetisation goes to zero.  This is the standard nowadays
as it is considered far more accurate for the extraction of the rates
(simply by having one less parameter to fit).
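As an illustration of the 'exp' model described above, I(t) = I0 * exp(-R * t), here is a toy fit in Python. Note that relax itself performs a grid search followed by non-linear chi-squared optimisation; the log-linear least-squares shortcut below is only a sketch that works for clean, strictly positive intensities:

```python
import math

# Toy illustration of the 2-parameter exponential model I(t) = I0*exp(-R*t).
# Taking logs gives the straight line ln I = ln I0 - R*t, so an ordinary
# least-squares line fit recovers both parameters (for noise-free data).

def fit_exp(times, intensities):
    """Fit I(t) = I0 * exp(-R * t); return (I0, R)."""
    ys = [math.log(i) for i in intensities]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys)) \
            / sum((x - xbar) ** 2 for x in times)
    return math.exp(ybar - slope * xbar), -slope

# Synthetic decay with I0 = 100.0 and R = 1.3 s^-1:
times = [0.054, 0.304, 0.604, 0.904, 1.204, 1.504]
data = [100.0 * math.exp(-1.3 * t) for t in times]
I0, R = fit_exp(times, data)
```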

If you have collected the old-school data, there is a relax branch
created by Sébastien Morin for handling this experiment type.  This is
the 'inversion-recovery' branch located at
http://svn.gna.org/viewcvs/relax/branches/inversion-recovery/.
However this branch is not complete and will require someone willing
to dive into C code to complete it (see
http://www.mail-archive.com/relax-devel@gna.org/msg03353.html).  Note
that if someone does know C, completing this will require about 50
lines of code changed in the maths_fns/relax_fit.c and
maths_fns/exponential.c files (my rough guess anyway).  It should be
incredibly trivial for someone with C knowledge.  Anyway, I hope some
of this info helps.

Regards,

Edward



On 30 June 2012 18:01, Romel Bobby rbob...@aucklanduni.ac.nz wrote:
 Dear all,

 I've been trying to use the curve fitting routine for R1 and R2 in relax
 using the sample script relax_fit.py. I managed to read in the spectra and
 obtain a value for the uncertainty. However, once it gets to the point of
 performing a grid_search, that's where it fails (see below). Has anyone had a
 similar problem?



                                             relax 2.0.0

                               Molecular dynamics by NMR data analysis

                              Copyright (C) 2001-2006 Edward d'Auvergne
                          Copyright (C) 2006-2012 the relax development team

 This is free software which you are welcome to modify and redistribute under
 the conditions of the
 GNU General Public License (GPL).  This program, including all modules, is
 licensed under the GPL
 and comes with absolutely no warranty.  For details type 'GPL' within the
 relax prompt.

 Assistance in using the relax prompt and scripting interface can be accessed
 by typing 'help' within
 the prompt.

 Processor fabric:  Uni-processor.

 script = 'relax_fit.py'
 
 ###############################################################################
 # Copyright (C) 2004-2012 Edward d'Auvergne
 #
 # This file is part of the program relax.
 #
 # relax is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
 # the Free Software Foundation; either version 2 of the License, or
 # (at your option) any later version.
 #
 # relax is distributed in the hope that it will be useful,
 # but WITHOUT ANY WARRANTY; without even the implied warranty of
 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 # GNU General Public License for more details.
 #
 # You should have received a copy of the GNU General Public License
 # along with relax; if not, write to the Free Software
 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 ###############################################################################

 Script for relaxation curve fitting.


 # Create the 'rx' data pipe.
 pipe.create('rx', 'relax_fit')

 # Load the backbone amide 15N spins from a PDB file.
 structure.read_pdb('IL6_mf.pdb')
 structure.load_spins(spin_id='@N')

 # Spectrum names.
 names = [
     'T1_1204.04',
     'T1_1504.04',
     'T1_1804.04',
     'T1_2104.04',
     'T1_2404.04',
     'T1_2754.04',
     'T1_304.04',
     'T1_304.040',
     'T1_54.04',
     'T1_604.04',
     'T1_604.040',
     'T1_904.04',
 ]

 # Relaxation times (in seconds).
 times = [
     1.204,
     1.504,
     1.804,
     2.104,
     2.404,
     2.754,
     0.304,
     0.304,
     0.054,
     0.604,
     0.604,
     0.904
 ]

 # Loop over the spectra.
 for i in xrange(len(names)):
     # Load the peak intensities.
     spectrum.read_intensities(file=names[i]+'.list', dir='',
 spectrum_id=names[i], int_method='height')

     # Set the relaxation times.
     relax_fit.relax_time(time=times[i], spectrum_id=names[i])

 # Specify the duplicated spectra.
 spectrum.replicated(spectrum_ids=['T1_304.04', 'T1_304.040'])
 spectrum.replicated(spectrum_ids=['T1_604.04', 'T1_604.040'])

 # Peak intensity error analysis.
 spectrum.error_analysis()

 # Deselect unresolved spins.
 # deselect.read(file='unresolved', mol_name_col=1, res_num_col=2,
 res_name_col=3, spin_num_col=4, spin_name_col=5)

 # Set the relaxation curve type.
 relax_fit.select_model('inv')

 # Grid search

Re: Curve fitting

2008-10-19 Thread Chris MacRaild

Re: Curve fitting

2008-10-17 Thread Edward d'Auvergne



On Fri, Oct 17, 2008 at 1:59 AM, Chris MacRaild [EMAIL PROTECTED] wrote:
 On Thu, Oct 16, 2008 at 8:07 PM, Edward d'Auvergne
 [EMAIL PROTECTED] wrote:
 On Thu, Oct 16, 2008 at 7:02 AM, Chris MacRaild [EMAIL PROTECTED] wrote:
 On Thu, Oct 16, 2008 at 3:11 PM, Sébastien Morin
 [EMAIL PROTECTED] wrote:
 Hi,

 I have a general question about curve fitting within relax.

 Let's say I proceed to curve fitting for some relaxation rates
 (exponential decay) and that I have a duplicate delay for error estimation.

 
 delays

 0.01
 0.01
 0.02
 0.04
 ...
 

 Will the mean value (for delay 0.01) be used for curve fitting and rate
 extraction ?
 Or will both values at delay 0.01 be used during curve fitting, hence
 giving more weight on delay 0.01 ?

 In other words, will the fit only use both values at delay 0.01 for
 error estimation or also for rate extraction, giving more weight for
 this duplicate point ?

 How is this handled in relax ?

 Instinctively, I would guess that the mean value must be used for
 fitting, as we don't want the points that are not in duplicate to count
 less in the fitting procedure... Am I right ?


 I would argue not. If we have gone to the trouble of measuring
 something twice (or, equivalently, measuring it with greater
 precision) then we should weight it more strongly to reflect that.

 So we should include both duplicate points in our fit, or we should
 just use the mean value, but weight it to reflect the greater
 certainty we have in its value.

 As I type this I realise this is likely the source of the sqrt(2)
 factor Tyler and Edward have been debating on a parallel thread - the
 uncertainty in height of any one peak is equal to the RMS noise, but
 the std error of the mean of duplicates is less by a factor of
 sqrt(2).

 At the moment, relax simply uses the mean value in the fit.  Despite
 the higher quality of the duplicated data, all points are given the
 same weight.  This is only because of the low data quantity.  As for
 dividing the sd of differences between duplicate spectra by sqrt(2),
 this is not done in relax anymore.  Because some people have collected
 triplicate spectra, although rare, relax calculates the error from
 replicated spectra differently.  I'm prepared to be told that this
 technique is incorrect though.  The procedure relax uses is to apply
 the formula:

 sd^2 = sum({Ii - Iav}^2) / (n - 1),

 where n is the number of spectra, Ii is the intensity in spectrum i,
 Iav is the average intensity, sd is the standard deviation, and sd^2
 is the variance.  This is for a single spin.  The sample number is so
 low that this value is completely meaningless.  Therefore the variance
 is averaged across all spins (well due to a current bug the standard
 deviation is averaged instead).  Then another averaging takes place if
 not all spectra are duplicated.  The variances across all duplicated
 spectra are averaged to give a single error value for all spins across
 all spectra (again the sd averaging bug affects this).  The reason for
 using this approach is that you are not limited to duplicate spectra.
 It also means that the factor of sqrt(2) is not applicable.  If only
 single spectra are collected, then relax's current behaviour of not
 using sqrt(2) seems reasonable.
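That procedure can be sketched in a few lines of Python (the data layout, one list of replicate intensities per spin at a given time point, is made up purely for illustration, and variances rather than standard deviations are averaged, i.e. without the bug mentioned above):

```python
# Sketch of the replicate-error procedure described above.

def spin_variance(intensities):
    """sd^2 = sum((Ii - Iav)^2) / (n - 1) for the replicates of one spin."""
    n = len(intensities)
    iav = sum(intensities) / n
    return sum((i - iav) ** 2 for i in intensities) / (n - 1)

def averaged_variance(spins):
    """Average the per-spin variances across all spins."""
    return sum(spin_variance(s) for s in spins) / len(spins)

# Duplicate intensities for three spins at one replicated time point:
spins = [[0.90, 0.86], [0.72, 0.68], [0.55, 0.51]]
var = averaged_variance(spins)   # single variance for all spins
sd = var ** 0.5                  # corresponding standard deviation
```

Because the error comes from this pooled variance rather than from pairwise differences, the formula works unchanged for triplicate or higher replication, which is why the sqrt(2) factor never enters.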


 Here is how I understand the sqrt(2) issue:

 The sd of duplicate (or triplicate, or quadruplicate, or ... ) peak
 heights is assumed to give a good estimate of the precision with which
 we can measure the height of a single peak. So for peak heights that
 have not been measured in duplicate (ie relaxation times that have not
 been duplicated in our current set of spectra), sd is a good estimate
 of the uncertainty associated with that height.

 For peaks we have measured more than once, we can calculate a mean
 peak height. The precision with which we know that mean value is given
 by the std error of the mean ie. sd/sqrt(n) where n is the number of
 times we have measured that specific relaxation time. I think this is
 the origin of the sqrt(2) for duplicate data.

 A made up example:
 T    I
 0    1.00
 10   0.90
 10   0.86
 20   0.80
 40   0.75
 70   0.72
 70   0.68
 100  0.55
 150  0.40
 200  0.30

 The std deviation of our duplicates is 0.04 so the uncertainty on each
 value above is 0.04

 BUT, the uncertainty on the mean values for our duplicate time points
 (10 and 70) is 0.04/sqrt(2) = 0.028

 So if we use the mean values as points in our fit, we should use 0.028
 as the uncertainty on those values (while all other peaks have
 uncertainty 0.04
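Chris's sqrt(2) point is just the standard error of the mean, sigma_mean = sd / sqrt(n). With the numbers from the made-up example above:

```python
import math

# The scatter of a single measurement is sd, but the standard error of the
# mean of n replicates is sd / sqrt(n) -- hence the sqrt(2) for duplicates.

sd = 0.04                 # spread estimated from the duplicate points
n = 2                     # duplicate measurements
sem = sd / math.sqrt(n)   # uncertainty on the mean of the two points
```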

Re: Curve fitting

2008-10-17 Thread Sébastien Morin
 was saying sigma was the variance, but just
 ignore that.  Next we need to average the variance across all spins,
 simply because the sample size is so low for each peak and hence the
 error estimate is horrible.  Whether this estimator of the true
 variance is good or bad is debatable (well, actually, it's bad), but
 it is unavoidable.  It also has the obvious disadvantage in that the
 peak height error is, in reality, different for each peak.

 Now, if not all spectra are replicated, then the approach needs to be
 modified to give us errors for the peaks of the single spectra.  Each
 averaged replicated time point (spectrum) has a single error
 associated with it, the average variance.  These are usually different
 for different time points, in some cases weakly decreasing
 exponentially.  So I think we should average the average variances and
 have a single measure of spread for all peaks in all spectra.  This
 estimator of the variance is again bad.  Interpolation might be
 better, but is still not great.
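That final averaging step might look like this (the numbers and the dictionary layout are purely illustrative):

```python
# Sketch of the last averaging stage described above: each replicated time
# point carries one averaged variance, and these are averaged once more to
# give a single error for every peak in every spectrum.  Numbers are made up.

avg_variance_per_timepoint = {0.01: 0.0008, 0.07: 0.0012}  # delay -> variance
single_variance = sum(avg_variance_per_timepoint.values()) \
                  / len(avg_variance_per_timepoint)
single_sd = single_variance ** 0.5   # one error value for all peaks
```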

 Cheers,

 Edward


 P.S.  None of this affects an analysis using peak heights of
 non-replicated spectra and the RMSD of the baseplane noise.  But if
 someone wants to use the peak volume as a measure of peak intensity,
 then we have a statistical problem.  Until someone finds a reference
 or derives the formula for how the RMSD of the base plane noise
 relates to volume error, then peak heights will be essential for error
 analysis.  I'm also willing to be corrected on any of the statistics
 above as I'm not an expert in this and may have missed some
 fundamental connections between theories.  And there may be a less
 'dirty' way of doing the dirty part of the statistics.



 On Fri, Oct 17, 2008 at 1:59 AM, Chris MacRaild [EMAIL PROTECTED] wrote:
   
 On Thu, Oct 16, 2008 at 8:07 PM, Edward d'Auvergne
 [EMAIL PROTECTED] wrote:
 
 On Thu, Oct 16, 2008 at 7:02 AM, Chris MacRaild [EMAIL PROTECTED] wrote:
   
 On Thu, Oct 16, 2008 at 3:11 PM, Sébastien Morin
 [EMAIL PROTECTED] wrote:
 
 Hi,

 I have a general question about curve fitting within relax.

 Let's say I proceed to curve fitting for some relaxation rates
 (exponential decay) and that I have a duplicate delay for error 
 estimation.

 
 delays

 0.01
 0.01
 0.02
 0.04
 ...
 

 Will the mean value (for delay 0.01) be used for curve fitting and rate
 extraction ?
 Or will both values at delay 0.01 be used during curve fitting, hence
 giving more weight on delay 0.01 ?

 In other words, will the fit only use both values at delay 0.01 for
 error estimation or also for rate extraction, giving more weight for
 this duplicate point ?

 How is this handled in relax ?

 Instinctively, I would guess that the man value must be used for
 fitting, as we don't want the points that are not in duplicate to count
 less in the fitting procedure... Am I right ?

   
 I would argue not. If we have gone to the trouble of measuring
 something twice (or, equivalently, measuring it with greater
 precision) then we should weight it more strongly to reflect that.

 So we should include both duplicate points in our fit, or we should
 just use the mean value, but weight it to reflect the greater
 certainty we have in its value.

 As I type this I realise this is likely the source of the sqrt(2)
 factor Tyler and Edward have been debating on a parallel thread - the
 uncertainty in height of any one peak is equal to the RMS noise, but
 the std error of the mean of duplicates is less by a factor of
 sqrt(2).
 
 At the moment, relax simply uses the mean value in the fit.  Despite
 the higher quality of the duplicated data, all points are given the
 same weight.  This is only because of the low data quantity.  As for
 dividing the sd of differences between duplicate spectra by sqrt(2),
 this is not done in relax anymore.  Because some people have collected
 triplicate spectra, although rare, relax calculates the error from
 replicated spectra differently.  I'm prepared to be told that this
 technique is incorrect though.  The procedure relax uses is to apply
 the formula:

 sd^2 = sum({Ii - Iav}^2) / (n - 1),

 where n is the number of spectra, Ii is the intensity in spectrum i,
 Iav is the average intensity, sd is the standard deviation, and sd^2
 is the variance.  This is for a single spin.  The sample number is so
 low that this value is completely meaningless.  Therefore the variance
 is averaged across all spins (well, due to a current bug, the standard
 deviation is averaged instead).  Then another averaging takes place if
 not all spectra are duplicated.  The variances across all duplicated
 spectra are averaged to give a single error value for all spins across
 all spectra (again the sd averaging bug affects this).  The reason for
 using this approach is that you are not limited to duplicate spectra.
 It also means that the factor of sqrt(2) is not applicable.  If only
 single spectra are collected, then relax's current

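The replicate-error procedure described above can be sketched in a few lines
of numpy. This is a hedged illustration, not relax's actual code: the function
names and the intensity values are invented, and it implements the
variance-averaging step as described (not the sd-averaging bug).

```python
import numpy as np

def replicate_variance(intensities):
    """Sample variance of replicated peak intensities for one spin at
    one delay: sd^2 = sum((Ii - Iav)^2) / (n - 1)."""
    return np.asarray(intensities, dtype=float).var(ddof=1)

def pooled_sd(replicate_sets):
    """Average the variances across all replicated spectra and all
    spins, then take the square root, giving a single error value
    applied to every spin in every spectrum."""
    variances = [replicate_variance(s) for s in replicate_sets]
    return np.sqrt(np.mean(variances))

# Two spins, each with duplicate intensities at the repeated delay:
sets = [[1000.0, 1040.0], [500.0, 524.0]]
print(pooled_sd(sets))  # one sigma for all spins
```

Because the variance formula uses n - 1, the same code handles duplicates,
triplicates, or any higher replication count, which is why the sqrt(2)
factor never appears explicitly.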
Curve fitting

2008-10-15 Thread Sébastien Morin
Hi,

I have a general question about curve fitting within relax.

Let's say I proceed to curve fitting for some relaxation rates
(exponential decay) and that I have a duplicate delay for error estimation.


delays

0.01
0.01 
0.02
0.04
...


Will the mean value (for delay 0.01) be used for curve fitting and rate
extraction ?
Or will both values at delay 0.01 be used during curve fitting, hence
giving more weight on delay 0.01 ?

In other words, will the fit only use both values at delay 0.01 for
error estimation or also for rate extraction, giving more weight for
this duplicate point ?

How is this handled in relax ?

Instinctively, I would guess that the mean value must be used for
fitting, as we don't want the points that are not duplicated to count
less in the fitting procedure... Am I right ?

Thanks for clarifying this...


Séb


___
relax (http://nmr-relax.com)

This is the relax-users mailing list
relax-users@gna.org

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-users


Re: Curve fitting

2008-10-15 Thread Chris MacRaild
On Thu, Oct 16, 2008 at 3:11 PM, Sébastien Morin
[EMAIL PROTECTED] wrote:
 Hi,

 I have a general question about curve fitting within relax.

 Let's say I proceed to curve fitting for some relaxation rates
 (exponential decay) and that I have a duplicate delay for error estimation.

 
 delays

 0.01
 0.01
 0.02
 0.04
 ...
 

 Will the mean value (for delay 0.01) be used for curve fitting and rate
 extraction ?
 Or will both values at delay 0.01 be used during curve fitting, hence
 giving more weight on delay 0.01 ?

 In other words, will the fit only use both values at delay 0.01 for
 error estimation or also for rate extraction, giving more weight for
 this duplicate point ?

 How is this handled in relax ?

 Instinctively, I would guess that the mean value must be used for
 fitting, as we don't want the points that are not duplicated to count
 less in the fitting procedure... Am I right ?


I would argue not. If we have gone to the trouble of measuring
something twice (or, equivalently, measuring it with greater
precision) then we should weight it more strongly to reflect that.

So we should include both duplicate points in our fit, or we should
just use the mean value, but weight it to reflect the greater
certainty we have in its value.
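The weighting idea above can be sketched with a linearised exponential
(ln I = ln I0 - R*t), where the weighted least-squares solution is
closed-form. This is only an illustration: the delays, intensities, and
noise level are invented, and real relaxation fits are usually done on the
nonlinear model directly.

```python
import numpy as np

# Mean intensities; the first delay (0.01 s) was measured in duplicate,
# so its standard error is smaller by sqrt(2) and it is weighted more.
delays = np.array([0.01, 0.02, 0.04, 0.08])
means  = np.array([990.0, 902.0, 818.0, 670.0])
sigma  = np.array([20.0 / np.sqrt(2), 20.0, 20.0, 20.0])

# Linearise: ln(I) = ln(I0) - R * t, with sigma_ln ~ sigma / I.
w = means / sigma                          # weights = 1 / sigma_ln

# Weighted linear least squares on [1, -t] @ [ln_I0, R].
A = np.column_stack([np.ones_like(delays), -delays]) * w[:, None]
y = np.log(means) * w
(ln_i0, rate), *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.exp(ln_i0), rate)                 # fitted I0 and decay rate
```

Dropping the first row of `sigma` to a smaller value is exactly the
"weight the mean to reflect the greater certainty" suggestion: the
duplicated point pulls on the fit more strongly than the singles.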

As I type this I realise this is likely the source of the sqrt(2)
factor Tyler and Edward have been debating on a parallel thread - the
uncertainty in height of any one peak is equal to the RMS noise, but
the std error of the mean of duplicates is less by a factor of
sqrt(2).
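The sqrt(2) reduction is easy to verify numerically; a minimal simulation
(invented peak height and noise values) shows the scatter of pair means
shrinking by that factor:

```python
import numpy as np

rng = np.random.default_rng(42)
rms = 20.0                     # RMS baseplane noise = sd of one height
pairs = 1000.0 + rng.normal(0.0, rms, size=(100_000, 2))

# Averaging each duplicate pair shrinks the scatter by sqrt(2).
sd_of_means = pairs.mean(axis=1).std(ddof=1)
print(sd_of_means, rms / np.sqrt(2))   # both close to 14.14
```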


Chris


One set data for curve-fitting!

2008-09-10 Thread Xia,Wei
Hello,

I have tried to use the relax standard script to do curve-fitting for T1.
However, I just got one set of relaxation data (i.e. 5 ms x 1, 120 ms x 1,
240 ms x 1, ...). Is it possible to use the script to fit the data?

Could anybody give me some suggestion how to do it?
Thanks!


-- 

  Xia,Wei
Department of Chemistry
The University of Hong Kong
Pokfulam Road, Hong Kong
P.R.China
