Hi,

I'm unsure whether this is a bug in full_analysis.py, a bug in the
internal relax code, or user error.  The optimization of the 'sphere'
model will not converge, even after 160+ rounds.  The chi-squared test
converged long, long ago:

"" from output
         Chi-squared test:
             chi2 (k-1): 100.77647517006251
             chi2 (k):   100.77647517006251
             The chi-squared value has converged.
""

However, the identical model-free models test has not converged:

"" from output
         Identical model-free models test:
             The model-free models have not converged.

         Identical parameter test:
             The model-free models haven't converged hence the parameters haven't converged.
""

Something that confuses me is that the output files in the
round_??/aic directories suggest that, for example, the round_160 and
round_161 AIC model selections are identical.  Here are the models
selected for the first few residues (round_160 vs round_161):

""
         1 None None
         2 None None
         3 None None
         4 m2 m2
         5 m2 m2
         6 m2 m2
         7 m2 m2
         8 m2 m2
         9 m4 m4
         10 m1 m1
         11 None None
         12 m2 m2
         13 m2 m2
         14 m1 m1
         15 m2 m2
         16 m3 m3
         17 m3 m3
         18 None None
""

However, I modified the full_analysis.py protocol to print the
per-residue differences in model selection within the 'Identical
model-free model test' section of the convergence() method (a rough
sketch of the added code is included after the output).  Here is the
beginning of the output, which lists only the residues where the
previous and current rounds differ:

""
         residue 1: prev=None curr=m2
         residue 2: prev=None curr=m2
         residue 3: prev=None curr=m2
         residue 6: prev=m2 curr=m4
         residue 7: prev=m2 curr=m1
         residue 9: prev=m4 curr=m2
         residue 11: prev=None curr=m2
         residue 12: prev=m2 curr=m3
         residue 13: prev=m2 curr=m3
         residue 15: prev=m2 curr=m1
         residue 16: prev=m3 curr=m2
         residue 17: prev=m3 curr=m1
         residue 18: prev=None curr=m3
""

There should be no data for residues 1-3, 11 and 18 (hence 'None'),
yet the 'Identical model-free model test' appears to ignore residues
for which 'None' was selected, because of the hasattr() checks used
when building curr_models in the following code (a small illustration
of the effect is given after the excerpt):

""
         # Create a string representation of the model-free models of the previous run.
         prev_models = ''
         for i in xrange(len(self.relax.data.res['previous'])):
             if hasattr(self.relax.data.res['previous'][i], 'model'):
                 #prev_models = prev_models + self.relax.data.res['previous'][i].model
                 prev_models = prev_models + ' ' + self.relax.data.res['previous'][i].model

         # Create a string representation of the model-free models of the current run.
         curr_models = ''
         for i in xrange(len(self.relax.data.res[run])):
             if hasattr(self.relax.data.res[run][i], 'model'):
                 #curr_models = curr_models + self.relax.data.res[run][i].model
                 curr_models = curr_models + ' ' + self.relax.data.res[run][i].model
""

For what it's worth, I have residues 1, 2, 3, 11 and 18 in the
'unresolved' file, which is read by the full_analysis.py protocol.  I
also created a separate sequence file (variable = SEQUENCE) containing
all residues, both those with data and those without, instead of
reading the sequence from a data file (the NOE data file, as in the
default full_analysis.py).  However, these residues do not appear in
the data (r1, r2 and noe) files, as I have no data for them.  Should I
add them but place 'None' in the data and error columns?  Could that
be causing the problems?  Or should I create a bug report for this?
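
By adding 'None' I mean something like the following lines in the r1,
r2 and noe files for those residues (the residue names here are
placeholders, and I've assumed a residue number, residue name, value,
error column layout):

""
         # Placeholder lines only; residue names and column order are illustrative.
         1    GLY    None    None
         2    ALA    None    None
         3    SER    None    None
""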


Doug

