Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-11 Thread venkatareddy dadireddy
Hi,

Thank you, Pavel, for your input.

Venkat

Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-09 Thread Pavel Afonine
Hi,
every time you run phenix.refine, it may exclude some reflections from
refinement following Read (1999) (Acta Cryst. D55, 1759-1764). Usually the
number of omitted reflections ranges from none to just a few; however, this
may be just enough to make the resolution statistics look slightly
different.
Also, if you use Iobs as input data, they are internally converted into
Fobs using the French & Wilson method, and some reflections may not
"survive" this conversion. This may be another reason for the discrepancy.
Pavel
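[Editorial aside: a toy sketch of why an intensity-to-amplitude conversion can shed reflections. This is the naive square-root version, not what phenix.refine actually does (its French & Wilson treatment is far more careful), and `naive_i_to_f` plus the intensities are made up for illustration.]

```python
import math

def naive_i_to_f(intensities):
    """Naive I -> F conversion: sqrt(I) is undefined for I <= 0, so those
    reflections are simply dropped. A proper French & Wilson treatment
    rescues most weak/negative intensities, but a few reflections can
    still be rejected, slightly changing the resolution statistics."""
    return [math.sqrt(i) for i in intensities if i > 0]

iobs = [400.0, 9.0, -1.2, 0.0, 2.25]   # hypothetical measured intensities
print(naive_i_to_f(iobs))              # [20.0, 3.0, 1.5] -- two reflections lost
```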


Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-05 Thread venkatareddy dadireddy
Thank you very much Garib.

Venkat


Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-05 Thread Garib Murshudov
Unless you are confident that a twin exists, you should not use twin
refinement (Occam’s razor).




Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-05 Thread venkatareddy dadireddy
Hi Kay and Garib,

Thank you for your input.
It was indeed the twin refinement that gave rise to the resolution
discrepancy.
I don't remember why I turned twin refinement ON, and the same job was then
cloned again and again.
With twin refinement OFF, the reported resolution matches that in the MTZ
(2.0 A).
From Xtriage: *the correlation between the intensities related by the twin
law 1/2*h-3/2*k, -1/2*h-1/2*k, -l with an estimated twin fraction of 0.10 is
most likely due to an NCS axis parallel to the twin axis*.
The statistics independent of twin laws show no twinning (closer to
untwinned than to a perfect twin).
Please suggest how I should proceed with refinement (twin OFF or ON).

Thank you,
Venkat









Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-04 Thread Garib Murshudov
Did you use twin refinement (and is it really a twin if you did)?
If twin refinement was used, then twin-related intensities may have different
resolutions, in the case where your crystal is pseudomerohedrally twinned.

Regards
Garib
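[Editorial aside: this point can be checked numerically with the cell and the twin law quoted elsewhere in this thread. The helper `d_monoclinic` and the (2,0,0) test reflection are illustrative, not from the original posts; because the pseudo-hexagonal metric (b*sqrt(3) ≈ a) is only approximate here, twin-related reflections land at measurably different d-spacings.]

```python
import math

def d_monoclinic(h, k, l, a, b, c, beta_deg):
    """d-spacing (A) of reflection (h,k,l) in a monoclinic cell, unique axis b."""
    beta = math.radians(beta_deg)
    inv_d2 = ((h / a) ** 2 + (l / c) ** 2
              - 2 * h * l * math.cos(beta) / (a * c)) / math.sin(beta) ** 2 \
             + (k / b) ** 2
    return 1.0 / math.sqrt(inv_d2)

# Cell from this thread; twin law reported by Xtriage:
# (h, k, l) -> (h/2 - 3k/2, -h/2 - k/2, -l)
a, b, c, beta = 117.856, 66.170, 70.904, 91.424
h, k, l = 2, 0, 0                                   # hypothetical reflection
ht, kt, lt = h / 2 - 3 * k / 2, -h / 2 - k / 2, -l  # its twin mate (1, -1, 0)
print(d_monoclinic(h, k, l, a, b, c, beta))         # ~58.9 A
print(d_monoclinic(ht, kt, lt, a, b, c, beta))      # ~57.7 A -- a different resolution
```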



Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-04 Thread Kay Diederichs
Hi Venkat,

so what I suggested is not the real problem.

To get to the bottom of this, I suggest you show the log files from REFMAC5
and phenix.refine, together with the output of the following command, run on
the command line:
mtzdmp experimental_data.mtz >& mtzdmp.log
Please upload the files to some cloud service and post the links, or attach
them here.

Best wishes,
Kay





Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-04 Thread venkatareddy dadireddy
Hi Kay,

Thank you very much for your insights.
Here are the cell parameters from the MTZ and PDB headers:
*MTZ: 117.8560   66.1700   70.9040   90.   91.4240   90.000*

*CRYST1  117.856   66.170   70.904  90.00  91.42  90.00 C 1 2 1*

The only difference is in the third decimal place.


Thank you,
Venkat
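[Editorial aside: a quick sanity check that rounding the cell to CRYST1 precision cannot explain the discrepancy. `d_monoclinic` is an illustrative helper and (59,0,0) is just a hypothetical reflection near the 2 A limit; the shift it produces is orders of magnitude smaller than the observed 0.06 A.]

```python
import math

def d_monoclinic(h, k, l, a, b, c, beta_deg):
    """d-spacing (A) of reflection (h,k,l) in a monoclinic cell, unique axis b."""
    beta = math.radians(beta_deg)
    inv_d2 = ((h / a) ** 2 + (l / c) ** 2
              - 2 * h * l * math.cos(beta) / (a * c)) / math.sin(beta) ** 2 \
             + (k / b) ** 2
    return 1.0 / math.sqrt(inv_d2)

mtz    = (117.8560, 66.1700, 70.9040, 91.4240)  # MTZ header cell
cryst1 = (117.856, 66.170, 70.904, 91.42)       # CRYST1 record cell
d_mtz    = d_monoclinic(59, 0, 0, *mtz)
d_cryst1 = d_monoclinic(59, 0, 0, *cryst1)
print(abs(d_mtz - d_cryst1))  # on the order of 1e-6 A -- nowhere near 0.06 A
```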





Re: [ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-02 Thread Kay Diederichs
Hi Venkatareddy Dadireddy,

do the unit cell parameters of your MTZ file and PDB file agree exactly?

Take for example a cell of (100,110,120,90,90,90) in the header of the MTZ 
file, 
and (97,110,120,90,90,90) in the CRYST1 line of the PDB file.

In this example, the (50,0,0) reflection would be at 2.0A resolution if using 
the cell from the MTZ file, 
but it would be at 1.94A resolution if calculating the resolution based on the 
cell from the PDB file.
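
Kay's arithmetic is easy to verify. For an orthorhombic cell, 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2; a minimal Python sketch using the example numbers above:

```python
import math

def d_spacing(hkl, cell):
    """Resolution (A) of reflection (h, k, l) in an orthorhombic cell (a, b, c)."""
    h, k, l = hkl
    a, b, c = cell
    return 1.0 / math.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

d_mtz = d_spacing((50, 0, 0), (100.0, 110.0, 120.0))  # cell from the MTZ header
d_pdb = d_spacing((50, 0, 0), (97.0, 110.0, 120.0))   # cell from the CRYST1 line
print(round(d_mtz, 2), round(d_pdb, 2))  # 2.0 1.94
```

So a 3% change in one cell edge moves the nominal resolution of the same reflection by 0.06 A.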

So perhaps REFMAC5 takes the cell from the PDB file, and phenix.refine takes 
the cell from the MTZ?
I didn't check but it may be worth finding out. 

HTH,
Kay


On Tue, 2 Apr 2024 21:16:57 +0530, venkatareddy dadireddy 
 wrote:

>Hi,
>
>The resolution range in my MTZ file is 70.88 - 2.0 A. When I refined my
>structure using REFMAC5, the resolution it reports is 70.88 - 1.94 A, a
>difference of 0.06 A at the high-resolution end. I also used phenix.refine,
>which reports the resolution exactly as in the MTZ file. The EDS validation
>report also gives the right resolution. What could be the possible reason for
>this discrepancy? The structure is deposited in the Protein Data Bank and is
>on hold. Thank you in advance for your help.
>
>Thank you,
>
>
>
>
>
>
>Venkatareddy Dadireddy, B1-10, Prof. S. Ramakumar's Lab,
>Dept. of Physics, IISc, Bangalore. Cell: 07259492227
>
>
>
>





[ccp4bb] Resolution discrepancy between MTZ file and Refinement

2024-04-02 Thread venkatareddy dadireddy
Hi,

The resolution range in my MTZ file is 70.88 - 2.0 A. When I refined my
structure using REFMAC5, the resolution it reports is 70.88 - 1.94 A, a
difference of 0.06 A at the high-resolution end. I also used phenix.refine,
which reports the resolution exactly as in the MTZ file. The EDS validation
report also gives the right resolution. What could be the possible reason for
this discrepancy? The structure is deposited in the Protein Data Bank and is
on hold. Thank you in advance for your help.

Thank you,






Venkatareddy Dadireddy
B1-10, Prof. S. Ramakumar's Lab
Dept. of Physics, IISc, Bangalore
Cell: 07259492227





Re: [ccp4bb] Resolution Discrepancy in Data Set

2024-01-23 Thread Randy John Read
>>> This can be run from CCP4cloud, but is also available for Phenix, I believe.
>>>
>>> See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8248825/
>>>
>>> Hope this helps,
>>>
>>> Dave
>>>
>>>
>>> Dr David C. Briggs CSci MRSB
>>> Principal Laboratory Research Scientist
>>> Signalling and Structural Biology Lab
>>> The Francis Crick Institute
>>> London, UK
>>> ==
>>> about.me/david_briggs
>>> From: CCP4 bulletin board on behalf of Liliana Margent
>>> Sent: Wednesday, January 17, 2024 9:19:36 PM
>>> To: CCP4BB@JISCMAIL.AC.UK 
>>> Subject: [ccp4bb] Resolution Discrepancy in Data Set
>>>
>>> External Sender: Use caution.
>>>
>>>
>>> Hello all,
>>>
>>> I hope this message finds you well.
>>>
>>> In my current data set, I’ve encountered a discrepancy between the 
>>> completeness in the high-resolution shells in merged statistics vs the 
>>> refinement statistics. For example, when I look at my merged statistics 
>>> file, output by Xia2 dials, the completeness in the high-resolution shells 
>>> are 97.6%. When I take this data and subsequently refine it in PHENIX I get 
>>> extremely different completeness ranges in the high-resolution shells, but 
>>> I cannot figure out why this large difference is occurring. I’m reaching 
>>> out to you, our esteemed community, for any insights or advice you might 
>>> have. Has anyone else faced a similar challenge? If so, how did you 
>>> navigate through it?  Your experiences and suggestions could be invaluable 
>>> in helping me understand and resolve this issue.
>>>
>>> Thank you in advance for your time and expertise.
>>>
>>> Best regards,
>>> Liliana
>>>
>>> see a side-by-side image of the files I mention in the message in 
>>> https://drive.google.com/file/d/1Y783MzlnqVwXRCtiLeV0Y2EpmkeDL2nd/view?usp=sharing
>>>
>>> 
>>>
>>> The Francis Crick Institute Limited is a registered charity in England and 
>>> Wales no. 1140062 and a company registered in England and Wales no. 
>>> 06885462, with its registered office at 1 Midland Road London NW1 1AT
>>>
>>
>>
>
>
>

-----
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research    Tel: +44 1223 336500
The Keith Peters Building
Hills Road                                  E-mail: rj...@cam.ac.uk
Cambridge CB2 0XY, U.K.                     www-structmed.cimr.cam.ac.uk






Re: [ccp4bb] Resolution Discrepancy in Data Set

2024-01-23 Thread Martin Malý
Dear all,

I am sorry for a late reply. R-values should not exceed 0.42, which
happened in your case for the shells 1.91-1.83 and 1.83-1.77 A. This is
because, theoretically (under some assumptions), a perfect model gives an
R-value of 0.42 against random data (Evans & Murshudov
2013 https://doi.org/10.1107/S090744491361 ). So, as David wrote,
your data are not better than 1.9 A. I would refine the structure
against data at a resolution of 2.2 A (including modelling of water
molecules), then run paired refinement to add the resolution shells 2.2-
2.1, 2.1-2.0 and 2.0-1.9 A, and choose an optimal high-resolution cutoff
according to the results. Feel free to ask.
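
The 0.42 figure can be reproduced with a quick simulation: a sketch assuming acentric reflections (Rayleigh-distributed amplitudes, per Wilson statistics) and a "model" that carries no information beyond the mean amplitude, so its best prediction for every reflection is that mean:

```python
import math
import random

random.seed(42)
n = 200_000

# Acentric amplitudes follow a Rayleigh distribution (Wilson statistics);
# sample by inverse transform: |F| = sqrt(-2 ln U) with U in (0, 1].
fobs = [math.sqrt(-2.0 * math.log(1.0 - random.random())) for _ in range(n)]

# A model with no structural information can at best predict the same
# value, the mean amplitude, for every reflection.
fcalc = sum(fobs) / n

r = sum(abs(f - fcalc) for f in fobs) / sum(fobs)
print(round(r, 2))  # ~0.42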

"the actual reflections used in calculations and reported statistics
may be different (in phenix.refine)"
Thanks for this comment, this was new to me and good to know! I can
imagine it can cause misleading situations/interpretations...

Nowadays, I quite often hear the opinion "You do not have to cut your
data at high resolution; the refinement program will give low weights to
noisy high-resolution reflections, so they will not harm your model." I
would just like to ask Pavel and other refinement experts - would you
dis/agree? Such an approach should be possible in principle, but the
R-values etc. look quite ugly then.

Best wishes,
Martin

On Wed, 2024-01-17 at 18:15 -0800, Pavel Afonine wrote:
> Hi Liliana,
> 
> a few things to consider:
> 
> 0) There is a Phenix mailing list for Phenix specific questions
> (phenixbb);
> 
> 1) Bin completeness depends on (obviously) how binning is done
> (number of reflections per bin or number of bins or binning in d^2 or
> d^3 spacing or log-binning etc etc etc) -- all of this will affect
> the number of reflections in the "highest resolution bin" and
> corresponding statistics. And..."how binning is done" wildly differs
> by the program used or even within the same program! 
> 
> 2) Refinement (in Phenix) as well as many other Phenix tools apply
> temporary reflection omission according to the Read (1999) paper
> (Acta Cryst. (1999). D55, 1759-1764). This means while you have so
> many reflections in your input reflection file, the actual
> reflections used in calculations and reported statistics may be
> different. Most log files keep a good record of this, meaning you
> can track this down by inspecting the log files carefully.
> 
> Let me know if you need more assistance with this issue.
> 
> Cheers,
> Pavel
> 
> On Wed, Jan 17, 2024 at 1:43 PM David Briggs
>  wrote:
> > Hi Liliana,
> > Two things leap out at me when I look at your data summary.
> > 
> > (1) Your data probably do not go to 1.77Å. The CC1/2 in your outer
> > shell is below any of the usual thresholds. There are discussions
> > to be had about what the threshold is, but normally CC1/2 values of
> > 0.5 or sometimes 0.3 are used. You should also consider I/sigI.
> > 
> > (2) I believe that by default Phenix.refine excludes weaker
> > reflections from refinement, which leads to the discrepancy in
> > completeness statistics. As your data do not extend as far as what
> > is contained in your mtz file, Phenix excludes those essentially
> > "empty" reflections. Judging by the Phenix refine output, I would
> > estimate your data goes to somewhere around 1.9Å
> > 
> > The program Pairef can help inform your choice of high-resolution
> > cutoff.
> > 
> > This can be run from CCP4cloud, but is also available for Phenix, I
> > believe.
> > 
> > See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8248825/
> > 
> > Hope this helps,
> > 
> > Dave
> > 
> > 
> > Dr David C. Briggs CSci MRSB
> > Principal Laboratory Research Scientist
> > Signalling and Structural Biology Lab
> > The Francis Crick Institute
> > London, UK
> > ==
> > about.me/david_briggs
> > From: CCP4 bulletin board  on behalf of
> > Liliana Margent 
> > Sent: Wednesday, January 17, 2024 9:19:36 PM
> > To: CCP4BB@JISCMAIL.AC.UK 
> > Subject: [ccp4bb] Resolution Discrepancy in Data Set 
> > 
> > External Sender: Use caution.
> > 
> > 
> > Hello all,
> > 
> > I hope this message finds you well.
> > 
> > In my current data set, I’ve encountered a discrepancy between the
> > completeness in the high-resolution shells in merged statistics vs
> > the refinement statistics. For example, when I look at my merged
> > statistics file, output by Xia2 dials, the completeness in the
> > high-resolution shells are 97.6%. When I take this data and
> > subsequently refine it in PHENIX I get extremely different
> > completeness ranges in the high-resolution shells, but I cannot
>

Re: [ccp4bb] Resolution Discrepancy in Data Set

2024-01-17 Thread Pavel Afonine
Hi Liliana,

a few things to consider:

0) There is a Phenix mailing list for Phenix specific questions (phenixbb);

1) Bin completeness depends on (obviously) how binning is done (number of
reflections per bin or number of bins or binning in d^2 or d^3 spacing or
log-binning etc etc etc) -- all of this will affect the number of
reflections in the "highest resolution bin" and corresponding statistics.
And..."how binning is done" wildly differs by the program used or even
within the same program!
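
To see how strongly the binning scheme alone can change "highest-bin" statistics, here is a toy Python sketch (the numbers and the two schemes are illustrative, not any particular program's actual binning):

```python
import random

random.seed(0)
d_min, d_max, n, n_bins = 1.9, 70.9, 10_000, 10

# Toy reflection list: draw d-spacings uniformly in 1/d^3, mimicking the
# roughly uniform filling of reciprocal space by lattice points.
s3 = [random.uniform(1 / d_max**3, 1 / d_min**3) for _ in range(n)]
ds = [x ** (-1 / 3) for x in s3]

# Scheme A: bins of equal width in d; the highest-resolution bin covers
# d_min .. d_min + (d_max - d_min)/n_bins.
edge_a = d_min + (d_max - d_min) / n_bins
count_a = sum(d <= edge_a for d in ds)

# Scheme B: bins with equal numbers of reflections.
count_b = n // n_bins

# Scheme A crams nearly all reflections into its "highest resolution bin",
# so any per-bin statistic (completeness, CC1/2, ...) differs wildly.
print(count_a, count_b)
```

The two "highest resolution bins" here contain roughly 9,900 versus exactly 1,000 reflections, so the same data set yields very different outer-shell numbers depending purely on the binning convention.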

2) Refinement (in Phenix) as well as many other Phenix tools apply
temporary reflection omission according to the Read (1999) paper (Acta
Cryst. (1999). D55, 1759-1764). This means while you have so many
reflections in your input reflection file, the actual reflections used in
calculations and reported statistics may be different. Most log files keep
a good record of this, meaning you can track this down by inspecting the
log files carefully.

Let me know if you need more assistance with this issue.

Cheers,
Pavel

On Wed, Jan 17, 2024 at 1:43 PM David Briggs 
wrote:

> Hi Liliana,
>
> Two things leap out at me when I look at your data summary.
>
> (1) Your data probably do not go to 1.77Å. The CC1/2 in your outer shell
> is below any of the usual thresholds. There are discussions to be had about
> what the threshold is, but normally CC1/2 values of 0.5 or sometimes 0.3
> are used. You should also consider I/sigI.
>
> (2) I believe that by default Phenix.refine excludes weaker reflections
> from refinement, which leads to the discrepancy in completeness statistics.
> As your data do not extend as far as what is contained in your mtz file,
> Phenix excludes those essentially "empty" reflections. Judging by the
> Phenix refine output, I would estimate your data goes to somewhere around
> 1.9Å
>
> The program Pairef can help inform your choice of high-resolution cutoff.
>
> This can be run from CCP4cloud, but is also available for Phenix, I
> believe.
>
> See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8248825/
>
> Hope this helps,
>
> Dave
>
>
> *Dr David C. Briggs CSci MRSB*
>
> Principal Laboratory Research Scientist
>
> Signalling and Structural Biology Lab
>
> The Francis Crick Institute
>
> London, UK
>
> ==
>
> about.me/david_briggs
> --
> *From:* CCP4 bulletin board  on behalf of Liliana
> Margent 
> *Sent:* Wednesday, January 17, 2024 9:19:36 PM
> *To:* CCP4BB@JISCMAIL.AC.UK 
> *Subject:* [ccp4bb] Resolution Discrepancy in Data Set
>
>
> External Sender: Use caution.
>
>
> Hello all,
>
> I hope this message finds you well.
>
> In my current data set, I’ve encountered a discrepancy between the
> completeness in the high-resolution shells in merged statistics vs the
> refinement statistics. For example, when I look at my merged statistics
> file, output by Xia2 dials, the completeness in the high-resolution shells
> are 97.6%. When I take this data and subsequently refine it in PHENIX I get
> extremely different completeness ranges in the high-resolution shells, but
> I cannot figure out why this large difference is occurring. I’m reaching
> out to you, our esteemed community, for any insights or advice you might
> have. Has anyone else faced a similar challenge? If so, how did you
> navigate through it?  Your experiences and suggestions could be invaluable
> in helping me understand and resolve this issue.
>
> Thank you in advance for your time and expertise.
>
> Best regards,
> Liliana
>
> see a side-by-side image of the files I mention in the message in
> https://drive.google.com/file/d/1Y783MzlnqVwXRCtiLeV0Y2EpmkeDL2nd/view?usp=sharing
>

Re: [ccp4bb] Resolution Discrepancy in Data Set

2024-01-17 Thread David Briggs
Hi Liliana,

Two things leap out at me when I look at your data summary.

(1) Your data probably do not go to 1.77Å. The CC1/2 in your outer shell is 
below any of the usual thresholds. There are discussions to be had about what 
the threshold is, but normally CC1/2 values of 0.5 or sometimes 0.3 are used. 
You should also consider I/sigI.

(2) I believe that by default Phenix.refine excludes weaker reflections from 
refinement, which leads to the discrepancy in completeness statistics. As your 
data do not extend as far as what is contained in your mtz file, Phenix 
excludes those essentially "empty" reflections. Judging by the Phenix refine 
output, I would estimate your data goes to somewhere around 1.9Å

The program Pairef can help inform your choice of high-resolution cutoff.

This can be run from CCP4cloud, but is also available for Phenix, I believe.

See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8248825/

Hope this helps,

Dave



Dr David C. Briggs CSci MRSB

Principal Laboratory Research Scientist

Signalling and Structural Biology Lab

The Francis Crick Institute

London, UK

==

about.me/david_briggs<http://about.me/david_briggs>


From: CCP4 bulletin board  on behalf of Liliana Margent 

Sent: Wednesday, January 17, 2024 9:19:36 PM
To: CCP4BB@JISCMAIL.AC.UK 
Subject: [ccp4bb] Resolution Discrepancy in Data Set


External Sender: Use caution.


Hello all,

I hope this message finds you well.

In my current data set, I’ve encountered a discrepancy between the completeness 
in the high-resolution shells in merged statistics vs the refinement 
statistics. For example, when I look at my merged statistics file, output by 
Xia2 dials, the completeness in the high-resolution shells are 97.6%. When I 
take this data and subsequently refine it in PHENIX I get extremely different 
completeness ranges in the high-resolution shells, but I cannot figure out why 
this large difference is occurring. I’m reaching out to you, our esteemed 
community, for any insights or advice you might have. Has anyone else faced a 
similar challenge? If so, how did you navigate through it?  Your experiences 
and suggestions could be invaluable in helping me understand and resolve this 
issue.

Thank you in advance for your time and expertise.

Best regards,
Liliana

see a side-by-side image of the files I mention in the message in 
https://drive.google.com/file/d/1Y783MzlnqVwXRCtiLeV0Y2EpmkeDL2nd/view?usp=sharing




[ccp4bb] Resolution Discrepancy in Data Set

2024-01-17 Thread Liliana Margent
Hello all,

I hope this message finds you well.

In my current data set, I’ve encountered a discrepancy between the completeness 
in the high-resolution shells in merged statistics vs the refinement 
statistics. For example, when I look at my merged statistics file, output by 
Xia2 dials, the completeness in the high-resolution shells are 97.6%. When I 
take this data and subsequently refine it in PHENIX I get extremely different 
completeness ranges in the high-resolution shells, but I cannot figure out why 
this large difference is occurring. I’m reaching out to you, our esteemed 
community, for any insights or advice you might have. Has anyone else faced a 
similar challenge? If so, how did you navigate through it?  Your experiences 
and suggestions could be invaluable in helping me understand and resolve this 
issue.

Thank you in advance for your time and expertise.

Best regards,
Liliana

see a side-by-side image of the files I mention in the message in 
https://drive.google.com/file/d/1Y783MzlnqVwXRCtiLeV0Y2EpmkeDL2nd/view?usp=sharing





Re: [ccp4bb] Resolution cutoff in CCP4

2020-09-24 Thread Philip D. Jeffrey
Sanity check:
Please check that the number of the reflections in the .mtz file is the same as 
in the .sca file.
Please check that the cell dimensions in the MTZ file are the same as in the 
header of the .sca file

If that's true, and it probably is, then this is a general issue with SCALEPACK 
- it appears to apply the resolution cut on cell dimensions from the .x files 
(possibly the first one) that then can differ significantly from the 
post-refined cell dimensions.  Somewhat rarely it shows up as a large effect, 
as here,  but the major issue is that the resolution values in SCALEPACK are 
inaccurate.

Of course you still want scaling stats - perhaps try to integrate with the cell 
dimensions fixed to the post-refined values.  Or switch to XDS.

Phil Jeffrey
Princeton


From: CCP4 bulletin board  on behalf of Vatsal Purohit 

Sent: Thursday, September 24, 2020 5:40 PM
To: CCP4BB@JISCMAIL.AC.UK 
Subject: [ccp4bb] Resolution cutoff in CCP4


Hi everyone,



I’ve been having an issue with the CCP4 program scatomtz to convert .sca 
generated from HKL2000 into .mtz. While my resolution cutoff in HKL2000 is 
higher (~ 2.0 A), this program cuts it off at 2.2 A even if I set a higher 
resolution limit in this program. Has anyone else experienced this? Any ideas 
on how to deal with this issue would be appreciated!



Regards,

Vatsal



Sent from Mail for Windows 10







[ccp4bb] Resolution cutoff in CCP4

2020-09-24 Thread Vatsal Purohit
Hi everyone,

I’ve been having an issue with the CCP4 program scatomtz to convert .sca 
generated from HKL2000 into .mtz. While my resolution cutoff in HKL2000 is 
higher (~ 2.0 A), this program cuts it off at 2.2 A even if I set a higher 
resolution limit in this program. Has anyone else experienced this? Any ideas 
on how to deal with this issue would be appreciated!

Regards,
Vatsal

Sent from Mail for Windows 10






Re: [ccp4bb] resolution

2019-07-08 Thread Holton, James M

Last time I checked phenix.refine did not use sig(F) nor sig(I) in its 
likelihood calculation.  Refmac does, but for a long time it was not the 
default.  You can turn it off with the WEIGHT NOEXP command, or you can 
even run with no "SIGx" at all in your mtz file.  You do this by leaving 
SIGFP out on the LABIN line.  This can sometimes help, but generally not 
by much.

I'll admit I was surprised when I first learned this is the way sigmas 
are treated in modern maximum-likelihood refinement.  But as it turns 
out sig(I) is almost never the dominant source of error in 
macromolecular models, so leaving it out generally goes unnoticed. There 
are also a few cases in the PDB where the sigmas are completely bonkers 
and including them can make things worse.  So, ignoring sigmas is 
perhaps a safe default.

This is not to say that sigmas are completely useless, they play a very 
important role in phasing, where the errors in the intensity differences 
must be correctly propagated in order for phase improvement to have the 
best chance of working. But for refining a native structure against 
intensity or F data, there just isn't much impact. Don't believe me?  
Try it.  Use sftools to change all your sigI values to, say, the 
average.  Then re-run refinement and see how much it changes your final 
stats, if at all.

Leaving out high-angle or otherwise weak data can improve statistics, 
but that is not a reason to leave them out.  What this is telling you is 
that the fine details of the model are still not in agreement with the 
data.  In the case of the OP, I suspect the Fcalc vs Ftrue difference is 
larger than normal.  Something else is wrong.  In such cases I always 
like to look at the real-space representation of Rwork, which is the 
Fo-Fc difference map.  How big is the biggest peak in this map? Is it 
positive or negative? And where is it?

-James Holton
MAD Scientist

On 7/4/2019 11:05 PM, graeme.win...@diamond.ac.uk wrote:
> Pavel,
>
> Please correct if wrong, but I thought most refinement programs used the 
> weights e.g. sig(I/F) with I/F so would not really have a hard cut off 
> anyway? You’re just making the stats worse but the model should stay ~ the 
> same (unless you have outliers in there)
>
> Clearly there will be a point where the model stops improving, which is the 
> “true” limit…
>
> Cheers Graeme
>
>
>
> On 5 Jul 2019, at 06:49, Pavel Afonine 
> mailto:pafon...@gmail.com>> wrote:
>
> Hi Sam Tang,
>
> Sorry for a naive question. Is there any circumstances where one may wish to 
> refine to a lower resolution? For example if one has a dataset processed to 2 
> A, is there any good reasons for he/she to refine to only, say 2.5 A?
>
> yes, certainly. For example, when information content in the data can justify 
> it.. Randy Read can comment on this more! Also instead of a hard cutoff using 
> a smooth weight based attenuation may be even better. AFAIK, no refinement 
> program can do this smartly currently.
> Pavel
>
> 
>
>
>






Re: [ccp4bb] resolution

2019-07-05 Thread Gerard Bricogne
Dear Sam,

 If you have a P1 space group and your dataset was collected in a single
orientation, you will have a great big gaping cusp in it.

 I would suggest that you submit your full dataset to the STARANISO
server at  

  http://staraniso.globalphasing.org/cgi-bin/staraniso.cgi

and take into consideration the distribution of the blue dots. It can be
horrifying: for instance, take a look at 

http://staraniso.globalphasing.org/cgi-bin/RLViewer_PDB_scroll.cgi?ID=6oa7 . 

 The rationale for reducing the resolution at which refinement is done
would be to reduce the proportion (and hence impact) of the systematically
missing data in the cusp, as this gets rapidly worse at high resolution.

 The real remedy is to collect data again, this time in at least two
orientations that are different enough to fill each other's cusps at the
highest resolution. Then you will be able to enjoy the full diffraction
limit (2.0A or better) that your crystal seems willing to give ;-) .


 With best wishes,

  Gerard.

--
On Fri, Jul 05, 2019 at 09:48:35PM +0800, Sam Tang wrote:
> Dear all
> 
> Hello again
> 
> Thanks a lot for the numerous input.
> 
> I received a dataset which was processed to 2.4A but refined to 3A -- this
> was the background I raised this question in the first place. Then I looked
> at the aimless statistics. At 2.4A the high resolution bin CC1/2 0.626,
> I/sigI 2.0, Completeness 84.6, Multiplicity 1.7 (P1 spacegroup).  I suspect
> the reason for the refinement resolution limit to be set at 3 A was simply
> due to better Rw/Rf (0.236/0.294 at 3A; 0.284/0.341 at 2.4A).
> 
> Based on these information am I justified to say that data quality at 2.4 A
> was suboptimal? In this case do you think refining at a (much) lower
> resolution is acceptable?
> 
> Best regards
> 
> Sam
> 
> On Fri, 5 Jul 2019 at 13:43, Sam Tang  wrote:
> 
> > Hello everyone
> >
> > Sorry for a naive question. Is there any circumstances where one may wish
> > to refine to a lower resolution? For example if one has a dataset processed
> > to 2 A, is there any good reasons for he/she to refine to only, say 2.5 A?
> >
> > Thanks!
> >
> > Sam Tang
> >
> 
> 
> 





Re: [ccp4bb] resolution

2019-07-05 Thread Robbie Joosten
You cannot directly compare R-factors because they are calculated over 
different sets of data. It's apples and oranges. 
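
A toy calculation illustrates why R-factors over different reflection sets are not directly comparable (hypothetical numbers; only the direction of the effect matters):

```python
import random

random.seed(1)

# Toy data: true amplitudes plus noise that grows toward high resolution
# (small d), as real measurement errors do.  All scales are arbitrary.
refl = []
for _ in range(5000):
    d = random.uniform(2.4, 20.0)             # resolution of this reflection
    f_true = random.uniform(50.0, 150.0)      # "true" amplitude
    f_obs = f_true + random.gauss(0.0, 60.0 / d**2)
    refl.append((d, f_obs, f_true))

def r_factor(data):
    return sum(abs(fo - fc) for _, fo, fc in data) / sum(fo for _, fo, _ in data)

# Same model, same per-reflection data quality; only the reflection set differs.
r_all = r_factor(refl)                                 # cut at 2.4 A
r_3A = r_factor([x for x in refl if x[0] >= 3.0])      # cut at 3.0 A
print(r_3A < r_all)
```

The identical model scores a lower R against the 3.0 A subset simply because the noisiest reflections were excluded, so a drop in R after truncating resolution says nothing about model quality.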

Your R-factor gap is a bit large too, perhaps your model can be improved a bit 
for instance by using tighter geometric restraints.

Cheers,
Robbie

> -Original Message-
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
> Sam Tang
> Sent: Friday, July 05, 2019 15:49
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] resolution
> 
> Dear all
> 
> Hello again
> 
> Thanks a lot for the numerous input.
> 
> I received a dataset which was processed to 2.4A but refined to 3A -- this was
> the background I raised this question in the first place. Then I looked at the
> aimless statistics. At 2.4A the high resolution bin CC1/2 0.626, I/sigI 2.0,
> Completeness 84.6, Multiplicity 1.7 (P1 spacegroup).  I suspect the reason for
> the refinement resolution limit to be set at 3 A was simply due to better
> Rw/Rf (0.236/0.294 at 3A; 0.284/0.341 at 2.4A).
> 
> Based on these information am I justified to say that data quality at 2.4 A 
> was
> suboptimal? In this case do you think refining at a (much) lower resolution is
> acceptable?
> 
> 
> Best regards
> 
> Sam
> 
> On Fri, 5 Jul 2019 at 13:43, Sam Tang  wrote:
> 
> 
>   Hello everyone
> 
>   Sorry for a naive question. Is there any circumstances where one
> may wish to refine to a lower resolution? For example if one has a dataset
> processed to 2 A, is there any good reasons for he/she to refine to only, say
> 2.5 A?
> 
>   Thanks!
> 
>   Sam Tang
> 
> 
> 
> 






Re: [ccp4bb] resolution

2019-07-05 Thread Mark J van Raaij
Looks like a case of the "if Rfree is lower than 0.300 the structure is 
perfect, if Rfree is higher than 0.300 your structure is completely rubbish" 
police striking again (some reviewers are like this). A bit like if p is 
smaller than 0.05 the effect is definitely real, and if p is just above 0.05 
there is definitely no effect...
I would include data to 2.4Å. Or perhaps try steps and look at the maps at 2.4, 
2.5, 2.6Å etc. and then decide (PDB-REDO can do this automagically). And then 
take 2.4Å or some limit much closer to 2.4Å than 3.0Å.
At 2.4Å (or 2.5Å or 2.6Å) there would be much more info and maps should look 
better than at 3.0Å, even if the Rs are a bit higher.

Mark J van Raaij
Dpto de Estructura de Macromoleculas
Centro Nacional de Biotecnologia - CSIC
calle Darwin 3
E-28049 Madrid, Spain
tel. (+34) 91 585 4616


> On 5 Jul 2019, at 15:48, Sam Tang  wrote:
> 
> Dear all
> 
> Hello again
> 
> Thanks a lot for the numerous input. 
> 
> I received a dataset which was processed to 2.4A but refined to 3A -- this 
> was the background I raised this question in the first place. Then I looked 
> at the aimless statistics. At 2.4A the high resolution bin CC1/2 0.626, 
> I/sigI 2.0, Completeness 84.6, Multiplicity 1.7 (P1 spacegroup).  I suspect 
> the reason for the refinement resolution limit to be set at 3 A was simply 
> due to better Rw/Rf (0.236/0.294 at 3A; 0.284/0.341 at 2.4A).
> 
> Based on these information am I justified to say that data quality at 2.4 A 
> was suboptimal? In this case do you think refining at a (much) lower 
> resolution is acceptable?
> 
> Best regards
> 
> Sam
> 
> On Fri, 5 Jul 2019 at 13:43, Sam Tang wrote:
> Hello everyone
> 
> Sorry for a naive question. Is there any circumstances where one may wish to 
> refine to a lower resolution? For example if one has a dataset processed to 2 
> A, is there any good reasons for he/she to refine to only, say 2.5 A?
> 
> Thanks!
> 
> Sam Tang
> 
> 





Re: [ccp4bb] resolution

2019-07-05 Thread Eleanor Dodson
Well - I would always do the final refinement to the highest resolution
with CC1/2 > 0.5.

There may be other problems with the data - completeness is low by current
standards.
Does multiplicity fall off with resolution, etc.?
Is there considerable anisotropy?

Both sets of R factors look surprisingly high, but see above for possible
reasons.

Eleanor
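[Editorial note: the CC1/2 > 0.5 rule of thumb Eleanor mentions can be illustrated with a toy half-dataset calculation. The sketch below uses invented intensities and noise levels — illustrative only, not a real data-reduction procedure:]

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy CC1/2: true intensities plus noise that grows toward high
# resolution, measured in two random half-datasets.
n = 20000
d = rng.uniform(2.0, 20.0, n)                  # resolution (A)
i_true = rng.exponential(100.0, n)             # Wilson-like intensities
sigma = 5.0 + 400.0 / d**2                     # noisier at high resolution
half1 = i_true + rng.normal(0.0, sigma)
half2 = i_true + rng.normal(0.0, sigma)

# CC1/2 per shell = Pearson correlation between the two half-datasets.
edges = [20.0, 6.0, 4.0, 3.0, 2.5, 2.2, 2.0]
ccs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (d <= lo) & (d > hi)
    cc = float(np.corrcoef(half1[sel], half2[sel])[0, 1])
    ccs.append(cc)
    print(f"{lo:5.1f}-{hi:4.1f} A  CC1/2 = {cc:5.2f}")
# CC1/2 decays toward high resolution; keeping shells with CC1/2
# above ~0.5 is the rule of thumb described above.
```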


On Fri, 5 Jul 2019 at 14:49, Sam Tang  wrote:

> Dear all
>
> Hello again
>
> Thanks a lot for the numerous input.
>
> I received a dataset which was processed to 2.4A but refined to 3A -- this
> was the background I raised this question in the first place. Then I looked
> at the aimless statistics. At 2.4A the high resolution bin CC1/2 0.626,
> I/sigI 2.0, Completeness 84.6, Multiplicity 1.7 (P1 spacegroup).  I suspect
> the reason for the refinement resolution limit to be set at 3 A was simply
> due to better Rw/Rf (0.236/0.294 at 3A; 0.284/0.341 at 2.4A).
>
> Based on these information am I justified to say that data quality at 2.4
> A was suboptimal? In this case do you think refining at a (much) lower
> resolution is acceptable?
>
> Best regards
>
> Sam
>
> On Fri, 5 Jul 2019 at 13:43, Sam Tang  wrote:
>
>> Hello everyone
>>
>> Sorry for a naive question. Is there any circumstances where one may wish
>> to refine to a lower resolution? For example if one has a dataset processed
>> to 2 A, is there any good reasons for he/she to refine to only, say 2.5 A?
>>
>> Thanks!
>>
>> Sam Tang
>>
>
> --
>
>





Re: [ccp4bb] resolution

2019-07-05 Thread Sam Tang
Dear all

Hello again

Thanks a lot for the numerous input.

I received a dataset which was processed to 2.4A but refined to 3A -- this
was the background I raised this question in the first place. Then I looked
at the aimless statistics. At 2.4A the high resolution bin CC1/2 0.626,
I/sigI 2.0, Completeness 84.6, Multiplicity 1.7 (P1 spacegroup).  I suspect
the reason for the refinement resolution limit to be set at 3 A was simply
due to better Rw/Rf (0.236/0.294 at 3A; 0.284/0.341 at 2.4A).

Based on this information, am I justified in saying that the data quality at
2.4 A was suboptimal? In this case, do you think refining at a (much) lower
resolution is acceptable?

Best regards

Sam

On Fri, 5 Jul 2019 at 13:43, Sam Tang  wrote:

> Hello everyone
>
> Sorry for a naive question. Is there any circumstances where one may wish
> to refine to a lower resolution? For example if one has a dataset processed
> to 2 A, is there any good reasons for he/she to refine to only, say 2.5 A?
>
> Thanks!
>
> Sam Tang
>





Re: [ccp4bb] resolution

2019-07-05 Thread Robbie Joosten
Another reason to cut the data (temporarily!) is to avoid discussion later when 
you try to publish your model. You need to be able to defend your high 
resolution cut-off against the R-merge zealots. The paired refinement test to 
find a good resolution cut-off works well for that. It is based on model 
refinement and works best when your model is already quite good. Also, the test 
can only be unbiased if you haven't yet used your high-resolution data. 

Whatever the outcome of your resolution cut-off choice, please deposit all 
recorded reflections!

Cheers,
Robbie
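[Editorial note: the paired-refinement bookkeeping Robbie refers to can be sketched as below. This is an illustrative outline only: `refine` and `r_free` are hypothetical stand-ins for real refinement runs (e.g. in Refmac or phenix.refine), and the R-free numbers are invented to show the decision logic, not results from any actual dataset:]

```python
def refine(data, d_min):
    """Pretend to refine a model against `data` cut at d_min (A)."""
    return f"model_refined_to_{d_min}"

def r_free(model, data, d_min):
    """Pretend R-free of `model` on `data` cut at d_min (canned values)."""
    canned = {
        ("model_refined_to_2.4", 3.0): 0.289,
        ("model_refined_to_3.0", 3.0): 0.294,
    }
    return canned[(model, d_min)]

data = "reflections.mtz"                # placeholder file name
m_hi = refine(data, d_min=2.4)          # refined with the extra shell
m_lo = refine(data, d_min=3.0)          # refined without it

# Key step: evaluate BOTH models against the SAME common (3.0 A) set;
# never compare the 2.4 A R-factors against the 3.0 A ones directly.
rf_hi = r_free(m_hi, data, 3.0)
rf_lo = r_free(m_lo, data, 3.0)
print(f"R-free on common set: {rf_hi:.3f} (2.4 A) vs {rf_lo:.3f} (3.0 A)")
keep_high_res = rf_hi <= rf_lo
print("keep the 2.4 A cutoff" if keep_high_res else "cut at 3.0 A")
```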

> -Original Message-
> From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
> Alexandre Ourjoumtsev
> Sent: Friday, July 05, 2019 09:40
> To: CCP4BB@JISCMAIL.AC.UK
> Subject: Re: [ccp4bb] resolution
> 
> Dear Graeme,
> 
> 
> Right, but you are talking about weights that reflect the data quality and say
> nothing about that of the starting model ; however refinement is a
> comparison of a model with data.
> 
> 
> The higher resolution of the data, the more sensitive they to model
> imperfections.
> 
> Refinement targets are sums over reflections, and each refinement term is a
> function with multiple minima; the higher the resoluion, the more frequent
> these minima.
> 
> 
> If the starting model is too far from the answer, a presence of high-
> resolution data prevents the refinement from moving the model as far as
> necessary; it is trapped by multiple local minima of the crystallographic
> functions that include such high-resolution terms. Removing such terms
> removes or at least attenuate the intermediate local minima and improves
> the convergence. One does not care about the statistices but about
> convergence ("the model stops improving" further than with these data).
> Increaing the resolution step-by-step was the standard refinement strategy
> till the end of 90ths.
> 
> Right, using ML-based targets introduced weights based on comparison of
> Fmodel with Fobs and allowed to do such attenuation in a "soft way". This
> was great and indeed replaced the "before-ML refinement strategy".
> However, such an artificial cut-off of highest-resolution data (temporary, at
> early refinement stages) can be useful in some situations even now and can
> improve convergence even with the modern tools. First cycles of a rigid-body
> refinement can be an example.
> 
> 
> Another reason for a (temporary) removing of higher-resolution data is a
> heavy (systematic) incompleteness of data in the higher-resolution shells.
> 
> 
> 
> With best regards,
> 
> 
> Sacha
> 
> 
> - Le 5 Juil 19, à 8:05, graeme.win...@diamond.ac.uk
>  a écrit :
> 
> 
>   Pavel,
> 
>   Please correct if wrong, but I thought most refinement programs
> used the weights e.g. sig(I/F) with I/F so would not really have a hard cut 
> off
> anyway? You’re just making the stats worse but the model should stay ~ the
> same (unless you have outliers in there)
> 
>   Clearly there will be a point where the model stops improving, which
> is the “true” limit…
> 
>   Cheers Graeme
> 
> 
> 
>   On 5 Jul 2019, at 06:49, Pavel Afonine wrote:
> 
>   Hi Sam Tang,
> 
>   Sorry for a naive question. Is there any circumstances where one
> may wish to refine to a lower resolution? For example if one has a dataset
> processed to 2 A, is there any good reasons for he/she to refine to only, say
> 2.5 A?
> 
>   yes, certainly. For example, when information content in the data
> can justify it.. Randy Read can comment on this more! Also instead of a hard
> cutoff using a smooth weight based attenuation may be even better. AFAIK,
> no refinement program can do this smartly currently.
>   Pavel
> 
>   
> 
> 
> 

Re: [ccp4bb] resolution

2019-07-05 Thread Alexandre Ourjoumtsev
Dear Graeme, 

Right, but you are talking about weights that reflect the data quality and say 
nothing about that of the starting model; however, refinement is a comparison 
of a model with data. 

The higher the resolution of the data, the more sensitive they are to model 
imperfections. 
Refinement targets are sums over reflections, and each refinement term is a 
function with multiple minima; the higher the resolution, the more frequent 
these minima. 

If the starting model is too far from the answer, the presence of high-resolution 
data prevents the refinement from moving the model as far as necessary; it is 
trapped by multiple local minima of the crystallographic functions that include 
such high-resolution terms. Removing such terms removes, or at least attenuates, 
the intermediate local minima and improves the convergence. One does not care 
about the statistics but about convergence (the model "stops improving" at a 
better point than it would with these data included). Increasing the resolution 
step by step was the standard refinement strategy until the end of the 1990s. 

Right, using ML-based targets introduced weights based on the comparison of 
Fmodel with Fobs and allowed such attenuation to be done in a "soft" way. This 
was great and indeed replaced the "before-ML" refinement strategy. However, such 
an artificial cut-off of the highest-resolution data (temporary, at early 
refinement stages) can be useful in some situations even now and can improve 
convergence even with the modern tools. The first cycles of a rigid-body 
refinement can be an example. 

Another reason for (temporarily) removing higher-resolution data is heavy 
(systematic) incompleteness of the data in the higher-resolution shells. 

With best regards, 

Sacha 
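[Editorial note: Sacha's multiple-minima argument can be demonstrated in one dimension. The sketch below is an invented single-"atom" Fourier toy, not a real crystallographic target; it counts local minima of a least-squares mismatch as higher-resolution (higher-index) terms are added:]

```python
import numpy as np

# One "atom" at fractional position x0; each Fourier order h stands in
# for a resolution shell. The target is the least-squares mismatch
# between trial and true structure-factor terms, summed to h_max.
x0 = 0.30
xs = np.linspace(0.0, 1.0, 20001)

def target(x, h_max):
    return sum((np.cos(2 * np.pi * h * x) - np.cos(2 * np.pi * h * x0)) ** 2
               for h in range(1, h_max + 1))

def count_minima(y):
    """Count strict interior local minima on the sampling grid."""
    return int(np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])))

counts = {}
for h_max in (2, 5, 15):
    counts[h_max] = count_minima(target(xs, h_max))
    print(f"terms to h={h_max:2d}: {counts[h_max]} local minima")
# Adding higher-resolution terms multiplies the local minima, which is
# why a far-from-correct model can get trapped when all data are used.
```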

- Le 5 Juil 19, à 8:05, graeme.win...@diamond.ac.uk 
 a écrit : 

> Pavel,

> Please correct if wrong, but I thought most refinement programs used the 
> weights
> e.g. sig(I/F) with I/F so would not really have a hard cut off anyway? You’re
> just making the stats worse but the model should stay ~ the same (unless you
> have outliers in there)

> Clearly there will be a point where the model stops improving, which is the
> “true” limit…

> Cheers Graeme

> On 5 Jul 2019, at 06:49, Pavel Afonine wrote:

> Hi Sam Tang,

> Sorry for a naive question. Is there any circumstances where one may wish to
> refine to a lower resolution? For example if one has a dataset processed to 2
> A, is there any good reasons for he/she to refine to only, say 2.5 A?

> yes, certainly. For example, when information content in the data can justify
> it.. Randy Read can comment on this more! Also instead of a hard cutoff using 
> a
> smooth weight based attenuation may be even better. AFAIK, no refinement
> program can do this smartly currently.
> Pavel

> 


> --
> This e-mail and any attachments may contain confidential, copyright and or
> privileged material, and are for the use of the intended addressee only. If 
> you
> are not the intended addressee or an authorised recipient of the addressee
> please notify us of receipt by returning the e-mail and do not use, copy,
> retain, distribute or disclose the information in or attached to the e-mail.
> Any opinions expressed within this e-mail are those of the individual and not
> necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments
> are free from viruses and we cannot accept liability for any damage which you
> may sustain as a result of software viruses which may be transmitted in or 
> with
> the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England and
> Wales with its registered office at Diamond House, Harwell Science and
> Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom

> 






Re: [ccp4bb] resolution

2019-07-05 Thread graeme.win...@diamond.ac.uk
Pavel,

Please correct me if I'm wrong, but I thought most refinement programs used the 
weights, e.g. sig(I/F) with I/F, so they would not really have a hard cut-off 
anyway? You're just making the stats worse, but the model should stay ~ the same 
(unless you have outliers in there).

Clearly there will be a point where the model stops improving, which is the 
“true” limit…

Cheers Graeme
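[Editorial note: Graeme's point — that sigma weighting already does most of what a hard cutoff would — can be illustrated with a toy weighted fit. The sketch below fits a single scale factor to synthetic, invented data; it is illustrative only, not how any refinement program is implemented:]

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: f_obs = 1.3 * f_calc plus noise whose sigma grows toward
# high resolution, so high-resolution observations are weak/noisy.
n = 1000
d = rng.uniform(2.0, 20.0, n)
f_calc = rng.gamma(2.0, 50.0, n)
sigma = 2.0 + 100.0 / d**2                 # large sigma at small d
f_obs = 1.3 * f_calc + rng.normal(0.0, sigma)

def fit_scale(fo, fc, w):
    """Weighted least-squares scale k minimising sum w*(fo - k*fc)^2."""
    return float(np.sum(w * fo * fc) / np.sum(w * fc * fc))

k_weighted = fit_scale(f_obs, f_calc, 1.0 / sigma**2)   # all data, weighted
keep = d >= 3.0
k_cut = fit_scale(f_obs[keep], f_calc[keep], 1.0 / sigma[keep]**2)

print(f"scale, all data sigma-weighted: {k_weighted:.4f}")
print(f"scale, hard 3.0 A cutoff:       {k_cut:.4f}")
# The two answers are nearly identical: the 1/sigma^2 weights already
# suppress the weak high-resolution terms that the cutoff would remove.
```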



On 5 Jul 2019, at 06:49, Pavel Afonine wrote:

Hi Sam Tang,

Sorry for a naive question. Is there any circumstances where one may wish to 
refine to a lower resolution? For example if one has a dataset processed to 2 
A, is there any good reasons for he/she to refine to only, say 2.5 A?

yes, certainly. For example, when information content in the data can justify 
it.. Randy Read can comment on this more! Also instead of a hard cutoff using a 
smooth weight based attenuation may be even better. AFAIK, no refinement 
program can do this smartly currently.
Pavel











Re: [ccp4bb] resolution

2019-07-04 Thread Pavel Afonine
Hi Sam Tang,

> Sorry for a naive question. Is there any circumstances where one may wish
> to refine to a lower resolution? For example if one has a dataset processed
> to 2 A, is there any good reasons for he/she to refine to only, say 2.5 A?
>

Yes, certainly. For example, when the information content in the data can
justify it. Randy Read can comment on this more! Also, instead of a hard
cutoff, using a smooth weight-based attenuation may be even better. AFAIK,
no refinement program can do this smartly currently.
Pavel





Re: [ccp4bb] resolution

2019-07-04 Thread graeme.win...@diamond.ac.uk
Hi Sam,

If you have good data to 2A, then I cannot imagine that throwing away a 
significant fraction of it (there are a lot of spots from 2.5-2A) will make 
your model better.

Suggest reading

http://scripts.iucr.org/cgi-bin/paper?S0907444913001121

All best Graeme

On 5 Jul 2019, at 06:43, Sam Tang wrote:

Hello everyone

Sorry for a naive question. Is there any circumstances where one may wish to 
refine to a lower resolution? For example if one has a dataset processed to 2 
A, is there any good reasons for he/she to refine to only, say 2.5 A?

Thanks!

Sam Tang











[ccp4bb] resolution

2019-07-04 Thread Sam Tang
Hello everyone

Sorry for a naive question. Are there any circumstances where one may wish
to refine to a lower resolution? For example, if one has a dataset processed
to 2 A, are there any good reasons to refine to only, say, 2.5 A?

Thanks!

Sam Tang





[ccp4bb] FW: [ccp4bb] Resolution mismatch aimless/refmac

2018-03-19 Thread Orru, Dr. Roberto
Dear All,

Thanks everyone for the suggestions and clarifications. As Eleanor and Garib 
wrote, I cannot find real differences in map quality or statistics after 
refinement depending on whether or not I set the resolution limits.
This is more a matter of how the REMARK records in the PDB file are written for 
validation/deposition, where a big red exclamation mark points to the mismatched 
values. This happens only when the ccp4i2 interface is used for refmac, but not 
with ccp4i. Maybe a script problem in extracting the data for the PDB REMARK 
records?
Also, if I manually set the resolution, I found that the PDB REMARK is still not 
exactly correct (in ccp4i2). But when I ran a refmac refinement with the same 
files using the ccp4i interface, everything ran smoothly and perfectly with no 
"error" at all.

Just as an open question: as this seems more a PDB file writing issue than a 
refinement problem, could manually correcting the low-resolution value in the 
REMARK records with a text editor also be a fix, or would it be considered 
faking the file? (I didn't test it yet, just a morning's idea...)

All the best,
Roberto

---
Dr. Roberto Orru, Ph.D.

Institute of Molecular Biology (IMB) and
JGU - Institut für Molekulare Physiologie (AG Prof. Wolf)
Mainz, Germany

From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Garib 
Murshudov
Sent: Sunday, March 18, 2018 5:14 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution mismatch aimless/refmac


On 18 Mar 2018, at 16:02, Eleanor Dodson wrote:

Does it matter? The refinement will only use observed reflections.

It should not matter for refinement and map calculations (needs to be checked 
carefully). However it does matter for deposition when multiple entries are 
analysed and the PDB deposition software (rightly so) says that you have 
information mismatch. And it confuses users and most of all it confuses 
deposition. It would confuse me.

Best wishes,
Garib


As Andrew says your output from dataprocessing includes a complete list of all 
possible indices to the upper resolution limit with a Free R flag assigned to 
the indices. This makes it easier to keep the same FreeR assignments for all 
your sets of measurements, which is a good thing..

There is a library call which REFMAC could use which would return the actual 
limits of the observed data - that small modification to the program would give 
a more sensible log file for the refinement.

At present the REFMAC outputs a reflection list will include all indices,  with 
the Fcalc PHIcalc information listed for missing data . I find that quite 
useful in some cases, but if you do not wish to use the terms there are ways of 
excluding them from subsequent calculations..

Eleanor



On 18 March 2018 at 12:04, Garib Murshudov wrote:
Dear All,

As far as I know it could happen when you are using i2. We are working to fix 
this problem. At the moment the best solution is to use advanced options in the 
refmac interface and define resolution limits explicitly. For example you can 
add in the advanced options:


resolution 37 2.5

where these resolution limits are from the aimless output. Low resolution limit 
should not affect refinement behaviour, however it may cause problem during 
deposition.

We will have better solution soon. I2 has many good features. Please report if 
you see some misbehaviours.

Best wishes,
Garib

P.S. you can also use advanced options to do things that are not available on 
the interface yet.
1) refinement against electron diffraction. Add

source EC MB   #  electron form factor will be used using Mott-Bethe formula

2) you can add occupancy refinement

etc


On 18 Mar 2018, at 10:46, "Orru, Dr. Roberto" wrote:

Dear All,

I am noticing that the low resolution in the reflections files after scaling 
with aimless and after refinement with refmac does not coincide.
In a case I have 37A with aimless but for some reason refmac is 67A.

Any idea?
All the best,
R.

Dr Garib N Murshudov
MRC-LMB
Francis Crick Avenue
Cambridge
CB2 0QH UK
Web http://www.mrc-lmb.cam.ac.uk,
http://www2.mrc-lmb.cam.ac.uk/groups/murshudov/



Dr Garib N Murshudov
MRC-LMB
Francis Crick Avenue
Cambridge
CB2 0QH UK
Web http://www.mrc-lmb.cam.ac.uk,
http://www2.mrc-lmb.cam.ac.uk/groups/murshudov/



Re: [ccp4bb] Resolution mismatch aimless/refmac

2018-03-18 Thread Garib Murshudov

On 18 Mar 2018, at 16:02, Eleanor Dodson  wrote:

> Does it matter? The refinement will only use observed reflections.

It should not matter for refinement and map calculations (needs to be checked 
carefully). However it does matter for deposition when multiple entries are 
analysed and the PDB deposition software (rightly so) says that you have 
information mismatch. And it confuses users and most of all it confuses 
deposition. It would confuse me.

Best wishes,
Garib


> As Andrew says your output from dataprocessing includes a complete list of 
> all possible indices to the upper resolution limit with a Free R flag 
> assigned to the indices. This makes it easier to keep the same FreeR 
> assignments for all your sets of measurements, which is a good thing.. 
> 
> There is a library call which REFMAC could use which would return the actual 
> limits of the observed data - that small modification to the program would 
> give a more sensible log file for the refinement. 
> 
> At present the REFMAC outputs a reflection list will include all indices,  
> with the Fcalc PHIcalc information listed for missing data . I find that 
> quite useful in some cases, but if you do not wish to use the terms there are 
> ways of excluding them from subsequent calculations.. 
> 
> Eleanor
> 
> 
> 
> On 18 March 2018 at 12:04, Garib Murshudov  wrote:
> Dear All,
> 
> As far as I know it could happen when you are using i2. We are working to fix 
> this problem. At the moment the best solution is to use advanced options in 
> the refmac interface and define resolution limits explicitly. For example you 
> can add in the advanced options:
> 
> 
> resolution 37 2.5
> 
> where these resolution limits are from the aimless output. Low resolution 
> limit should not affect refinement behaviour, however it may cause problem 
> during deposition.
> 
> We will have better solution soon. I2 has many good features. Please report 
> if you see some misbehaviours.
> 
> Best wishes,
> Garib
> 
> P.S. you can also use advanced options to do things that are not available on 
> the interface yet.
> 1) refinement against electron diffraction. Add
> 
> source EC MB   #  electron form factor will be used using Mott-Bethe formula
> 
> 2) you can add occupancy refinement
> 
> etc
> 
> 
> On 18 Mar 2018, at 10:46, "Orru, Dr. Roberto"  wrote:
> 
>> Dear All,
>> 
>> I am noticing that the low resolution in the reflections files after scaling 
>> with aimless and after refinement with refmac does not coincide.
>> In a case I have 37A with aimless but for some reason refmac is 67A.
>> 
>> Any idea?
>> All the best,
>> R.
> 
> Dr Garib N Murshudov
> MRC-LMB
> Francis Crick Avenue
> Cambridge 
> CB2 0QH UK
> Web http://www.mrc-lmb.cam.ac.uk, 
> http://www2.mrc-lmb.cam.ac.uk/groups/murshudov/
> 
> 
> 
> 

Dr Garib N Murshudov
MRC-LMB
Francis Crick Avenue
Cambridge 
CB2 0QH UK
Web http://www.mrc-lmb.cam.ac.uk, 
http://www2.mrc-lmb.cam.ac.uk/groups/murshudov/





Re: [ccp4bb] Resolution mismatch aimless/refmac

2018-03-18 Thread Eleanor Dodson
Does it matter? The refinement will only use observed reflections.
As Andrew says, your output from data processing includes a complete list of
all possible *indices* to the upper resolution limit with a Free R flag
assigned to the indices. This makes it easier to keep the same FreeR
assignments for all your sets of measurements, which is a good thing.

There is a library call which REFMAC could use which would return the
actual limits of the observed data - that small modification to the program
would give a more sensible log file for the refinement.

At present the REFMAC output reflection list will include all indices,
with the Fcalc/PHIcalc information listed for missing data. I find that
quite useful in some cases, but if you do not wish to use the terms there
are ways of excluding them from subsequent calculations.

Eleanor



On 18 March 2018 at 12:04, Garib Murshudov  wrote:

> Dear All,
>
> As far as I know it could happen when you are using i2. We are working to
> fix this problem. At the moment the best solution is to use advanced
> options in the refmac interface and define resolution limits explicitly.
> For example you can add in the advanced options:
>
>
> resolution 37 2.5
>
> where these resolution limits are from the aimless output. Low resolution
> limit should not affect refinement behaviour, however it may cause problem
> during deposition.
>
> We will have better solution soon. I2 has many good features. Please
> report if you see some misbehaviours.
>
> Best wishes,
> Garib
>
> P.S. you can also use advanced options to do things that are not available
> on the interface yet.
> 1) refinement against electron diffraction. Add
>
> source EC MB   #  electron form factor will be used using Mott-Bethe
> formula
>
> 2) you can add occupancy refinement
>
> etc
>
>
> On 18 Mar 2018, at 10:46, "Orru, Dr. Roberto"  wrote:
>
> Dear All,
>
> I am noticing that the low resolution in the reflections files after
> scaling with aimless and after refinement with refmac does not coincide.
> In a case I have 37A with aimless but for some reason refmac is 67A.
>
> Any idea?
> All the best,
> R.
>
>
> Dr Garib N Murshudov
> MRC-LMB
> Francis Crick Avenue
> Cambridge
> CB2 0QH UK
> Web http://www.mrc-lmb.cam.ac.uk,
> http://www2.mrc-lmb.cam.ac.uk/groups/murshudov/
>
>
>
>


Re: [ccp4bb] Resolution mismatch aimless/refmac

2018-03-18 Thread Garib Murshudov
Dear All,

As far as I know this can happen when you are using i2. We are working to fix 
this problem. At the moment the best solution is to use the advanced options in 
the refmac interface and define the resolution limits explicitly. For example 
you can add in the advanced options:


resolution 37 2.5

where these resolution limits are from the aimless output. The low resolution 
limit should not affect refinement behaviour, however it may cause problems 
during deposition.

We will have a better solution soon. I2 has many good features. Please report 
if you see any misbehaviours.

Best wishes,
Garib

P.S. you can also use advanced options to do things that are not available on 
the interface yet.
1) refinement against electron diffraction. Add

source EC MB   #  electron form factor will be used using Mott-Bethe formula

2) you can add occupancy refinement

etc
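[Editorial note: for readers unfamiliar with passing such keywords, here is a minimal sketch of how they would reach Refmac. All file names are placeholders, and refmac5 itself is not executed here; only the keyword file is written:]

```python
# Write Garib's advanced-option keywords to a file; Refmac reads such
# keywords from standard input.
keywords = "\n".join([
    "resolution 37 2.5",   # limits taken from the aimless output
    "end",                  # terminates the keyword input
])
with open("refmac_keywords.txt", "w") as fh:
    fh.write(keywords + "\n")

# A real run would feed this file to refmac5 on stdin, e.g.
# (placeholder file names, command shown but not run):
command = ("refmac5 hklin input.mtz xyzin input.pdb "
           "hklout output.mtz xyzout output.pdb < refmac_keywords.txt")
print(command)
```

The other keywords mentioned above (e.g. `source EC MB` for electron diffraction, or occupancy refinement keywords) would go into the same keyword file, one per line, before `end`.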


On 18 Mar 2018, at 10:46, "Orru, Dr. Roberto"  wrote:

> Dear All,
> 
> I am noticing that the low resolution in the reflections files after scaling 
> with aimless and after refinement with refmac does not coincide.
> In a case I have 37A with aimless but for some reason refmac is 67A.
> 
> Any idea?
> All the best,
> R.

Dr Garib N Murshudov
MRC-LMB
Francis Crick Avenue
Cambridge 
CB2 0QH UK
Web http://www.mrc-lmb.cam.ac.uk, 
http://www2.mrc-lmb.cam.ac.uk/groups/murshudov/





Re: [ccp4bb] Resolution mismatch aimless/refmac

2018-03-18 Thread Andrew Leslie
Hi Orru,

   If you have used the normal CCP4 processing pipeline, the data from 
AIMLESS will go into the "uniqueify" script that adds all possible reflections 
to the MTZ file (down to the lowest possible resolution for your unit cell 
dimensions), so that might explain why the low resolution limit has changed to 
67Å. In AIMLESS, it will be the low resolution limit set by the integration 
program.

Cheers,

Andrew
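[Editorial note: Andrew's explanation is easy to check numerically. The sketch below uses a hypothetical orthorhombic cell whose longest axis is 67 Å — the real cell is not given in the thread — to show that the "lowest possible resolution" for a cell is set by its longest axis, which can exceed the integration program's low-resolution limit:]

```python
from itertools import product

# Hypothetical orthorhombic cell, chosen so the longest axis matches
# the 67 A figure discussed above (illustrative only).
a, b, c = 37.0, 52.0, 67.0

def d_spacing(h, k, l):
    """d-spacing (A), orthorhombic: 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    return (h * h / a**2 + k * k / b**2 + l * l / c**2) ** -0.5

# The largest d over low-index reflections is the lowest possible
# resolution: here (0,0,1) on the 67 A axis.
refs = [hkl for hkl in product(range(3), repeat=3) if hkl != (0, 0, 0)]
d_max = max(d_spacing(*hkl) for hkl in refs)
print(f"lowest possible resolution for this cell: {d_max:.1f} A")
# uniqueify fills the reflection list out to this limit, while the
# integration program may have used, e.g., only 37 A as its low limit.
```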

> On 18 Mar 2018, at 10:46, Orru, Dr. Roberto  wrote:
> 
> Dear All,
> 
> I am noticing that the low resolution in the reflections files after scaling 
> with aimless and after refinement with refmac does not coincide.
> In a case I have 37A with aimless but for some reason refmac is 67A.
> 
> Any idea?
> All the best,
> R.



[ccp4bb] Resolution mismatch aimless/refmac

2018-03-18 Thread Orru, Dr. Roberto
Dear All,


I am noticing that the low resolution in the reflections files after scaling 
with aimless and after refinement with refmac does not coincide.

In a case I have 37A with aimless but for some reason refmac is 67A.


Any idea?

All the best,

R.


Re: [ccp4bb] Resolution cut off

2018-02-12 Thread Vands
Thanks, Robbie, I will try this.
-Vandna

On Mon, Feb 12, 2018 at 3:00 PM, Robbie Joosten <robbie_joos...@hotmail.com>
wrote:

> Dear Vandna,
>
>
>
> Paired refinement is indeed the most reliable way to see whether the
> higher resolution data help your refinement. It is done automatically on
> the pdb-redo.eu server if the resolution of the data used in refinement
> so far is lower than the resolution of your dataset (by at least 0.1A). We
> get the resolution from your REMARK 3 stuff in the input pdb file header.
>
> Although it is not an ideal experiment, you can also cheat pdb-redo into
> doing paired refinement by forging the header of your pdb file.
>
>
>
> HTH,
>
> Robbie
>
>
>
> Sent from my Windows 10 phone
>
>
>
> *From: *Graeme Winter <graeme.win...@diamond.ac.uk>
> *Sent: *maandag 12 februari 2018 20:48
> *To: *CCP4BB@JISCMAIL.AC.UK
> *Subject: *Re: [ccp4bb] Resolution cut off
>
>
>
> The most useful information for this can come from paired refinement,
> which will tell you if the data in outer shell is improving the model.
>
> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/
>
> For example
>
> On balance it’s unlikely throwing away measurements will make your model
> better...
>
> Best wishes Graeme
> 
> From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Vands [
> vanx...@gmail.com]
> Sent: 12 February 2018 19:27
> To: ccp4bb
> Subject: [ccp4bb] Resolution cut off
>
> Hi,
>    I solved a crystal structure at 1.69 A resolution, with R/Rfree of
> 18/20, using data to 1.69 A.
>
> Data completeness is 100 % overall and 50 % in the outer shell, with
> I/sig(I) > 1.
>
> Do I need to cut the resolution in refinement?
>
> Vandna Kukshal
> Postdoctoral Research Associate
> Dept. Biochemistry and Molecular Biophysics
> Washington University School of Medicine
> 660 S. Euclid, Campus Box 8231
> St. Louis, MO 63110
> --
> This e-mail and any attachments may contain confidential, copyright and or
> privileged material, and are for the use of the intended addressee only. If
> you are not the intended addressee or an authorised recipient of the
> addressee please notify us of receipt by returning the e-mail and do not
> use, copy, retain, distribute or disclose the information in or attached to
> the e-mail.
> Any opinions expressed within this e-mail are those of the individual and
> not necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any
> attachments are free from viruses and we cannot accept liability for any
> damage which you may sustain as a result of software viruses which may be
> transmitted in or with the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England
> and Wales with its registered office at Diamond House, Harwell Science and
> Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
>
>
>



-- 
Vandna Kukshal
Postdoctoral Research Associate
Dept. Biochemistry and Molecular Biophysics
Washington University School of Medicine
660 S. Euclid, Campus Box 8231
St. Louis, MO 63110


Re: [ccp4bb] Resolution cut off

2018-02-12 Thread Robbie Joosten
Dear Vandna,

Paired refinement is indeed the most reliable way to see whether the higher 
resolution data help your refinement. It is done automatically on the 
pdb-redo.eu server if the resolution of the data used in refinement so far is 
lower than the resolution of your dataset (by at least 0.1A). We get the 
resolution from your REMARK 3 stuff in the input pdb file header.
Although it is not an ideal experiment, you can also cheat pdb-redo into doing 
paired refinement by forging the header of your pdb file.

HTH,
Robbie

Sent from my Windows 10 phone

From: Graeme Winter<mailto:graeme.win...@diamond.ac.uk>
Sent: maandag 12 februari 2018 20:48
To: CCP4BB@JISCMAIL.AC.UK<mailto:CCP4BB@JISCMAIL.AC.UK>
Subject: Re: [ccp4bb] Resolution cut off

The most useful information for this can come from paired refinement, which 
will tell you if the data in outer shell is improving the model.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/

For example

On balance it’s unlikely throwing away measurements will make your model 
better...

Best wishes Graeme

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Vands 
[vanx...@gmail.com]
Sent: 12 February 2018 19:27
To: ccp4bb
Subject: [ccp4bb] Resolution cut off

Hi,
   I solved a crystal structure at 1.69 A resolution, with R/Rfree of 18/20, 
using data to 1.69 A.

Data completeness is 100 % overall and 50 % in the outer shell, with I/sig(I) > 1.

Do I need to cut the resolution in refinement?

Vandna Kukshal
Postdoctoral Research Associate
Dept. Biochemistry and Molecular Biophysics
Washington University School of Medicine
660 S. Euclid, Campus Box 8231
St. Louis, MO 63110



Re: [ccp4bb] Resolution cut off

2018-02-12 Thread Graeme Winter
The most useful information for this can come from paired refinement, which 
will tell you if the data in outer shell is improving the model.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/

For example 

On balance it’s unlikely throwing away measurements will make your model 
better...

Best wishes Graeme 

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Vands 
[vanx...@gmail.com]
Sent: 12 February 2018 19:27
To: ccp4bb
Subject: [ccp4bb] Resolution cut off

Hi,
   I solved a crystal structure at 1.69 A resolution, with R/Rfree of 18/20, 
using data to 1.69 A.

Data completeness is 100 % overall and 50 % in the outer shell, with I/sig(I) > 1.

Do I need to cut the resolution in refinement?

Vandna Kukshal
Postdoctoral Research Associate
Dept. Biochemistry and Molecular Biophysics
Washington University School of Medicine
660 S. Euclid, Campus Box 8231
St. Louis, MO 63110


[ccp4bb] Resolution cut off

2018-02-12 Thread Vands
Hi,
   I solved a crystal structure at 1.69 A resolution, with R/Rfree of
18/20, using data to 1.69 A.

Data completeness is 100 % overall and 50 % in the outer shell, with
I/sig(I) > 1.

Do I need to cut the resolution in refinement?

Vandna Kukshal
Postdoctoral Research Associate
Dept. Biochemistry and Molecular Biophysics
Washington University School of Medicine
660 S. Euclid, Campus Box 8231
St. Louis, MO 63110


Re: [ccp4bb] resolution limits

2017-07-26 Thread Pavel Afonine
Andrew,

phenix.refine may exclude outlier reflections (Read, R. J. (1999). Acta
Cryst. D55, 1759–1764). Typically this is just a few reflections. If you
have a good reason to disable this behaviour, use
xray_data.outliers_rejection=false.

P.S.: There is Phenix mailing list for Phenix-related questions.

Pavel
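For illustration only: the sketch below is a crude median/z-score filter, NOT the likelihood-based Read (1999) test that phenix.refine actually implements. It only shows how an outlier criterion can silently drop a handful of reflections and so shift the reported resolution range and statistics. The data are invented.

```python
import statistics

def reject_outliers(obs, zmax=4.0):
    """Drop reflections lying more than zmax*sigma from the median intensity.
    Toy filter for illustration only -- not the likelihood-based Read (1999)
    test used by phenix.refine."""
    med = statistics.median(i for i, _ in obs)
    return [(i, s) for i, s in obs if abs(i - med) <= zmax * s]

# One wild measurement among otherwise consistent (intensity, sigma) pairs:
data = [(10.0, 1.0), (12.0, 1.2), (11.0, 0.9), (500.0, 1.5)]
kept = reject_outliers(data)
print(len(data) - len(kept))  # -> 1 reflection silently dropped
```

If a dropped reflection happens to sit at the edge of the resolution range, the reported limits change, which is exactly the discrepancy discussed in the thread.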

On Wed, Jul 26, 2017 at 12:36 AM, Andrew Marshall <
andrew.c.marsh...@adelaide.edu.au> wrote:

> Dear crystallographers,
>
> I have two datasets that were merged/scaled using ccp4's aimless, with
> resolution ranges of 52-1.7 and 57-1.9. However, upon refinement, the
> resolution range used by phenix.refine is 36-1.7 for one and 104-1.9 for
> the other. 1) Why does phenix.refine change the low resolution limits of my
> processed data? 2) How can I prevent this?
>
> Thanks,
>
> Andrew
>


Re: [ccp4bb] resolution limits

2017-07-26 Thread Ian Tickle
Hi Eleanor, what you say is of course true, particularly in the case of the
1st dataset where probably the indices go down to 52 Ang. but the
measurements only start at 36 Ang.  But still it's hard to see how for the
2nd dataset, if the low res cut-off of the indices and/or measurements is
57 Ang., it manages to 'use' a reflection at 104 Ang.  All depends what you
mean by 'use' of course!

Cheers

-- Ian


On 26 July 2017 at 11:33, Eleanor Dodson  wrote:

> The resolution limits of the measured data are not changed, but your output
> file must contain all possible h k l, even if there are no observations for
> the lowest-resolution ones.
>
> Don't worry about it!
>
> Eleanor
>
> On 26 July 2017 at 08:36, Andrew Marshall  edu.au> wrote:
>
>> Dear crystallographers,
>>
>> I have two datasets that were merged/scaled using ccp4's aimless, with
>> resolution ranges of 52-1.7 and 57-1.9. However, upon refinement, the
>> resolution range used by phenix.refine is 36-1.7 for one and 104-1.9 for
>> the other. 1) Why does phenix.refine change the low resolution limits of my
>> processed data? 2) How can I prevent this?
>>
>> Thanks,
>>
>> Andrew
>>
>
>


Re: [ccp4bb] resolution limits

2017-07-26 Thread Eleanor Dodson
The resolution limits of the measured data are not changed, but your output
file must contain all possible h k l, even if there are no observations for
the lowest-resolution ones.

Don't worry about it!

Eleanor

On 26 July 2017 at 08:36, Andrew Marshall  wrote:

> Dear crystallographers,
>
> I have two datasets that were merged/scaled using ccp4's aimless, with
> resolution ranges of 52-1.7 and 57-1.9. However, upon refinement, the
> resolution range used by phenix.refine is 36-1.7 for one and 104-1.9 for
> the other. 1) Why does phenix.refine change the low resolution limits of my
> processed data? 2) How can I prevent this?
>
> Thanks,
>
> Andrew
>


[ccp4bb] resolution limits

2017-07-26 Thread Andrew Marshall
Dear crystallographers,

I have two datasets that were merged/scaled using ccp4's aimless, with
resolution ranges of 52-1.7 and 57-1.9. However, upon refinement, the
resolution range used by phenix.refine is 36-1.7 for one and 104-1.9 for
the other. 1) Why does phenix.refine change the low resolution limits of my
processed data? 2) How can I prevent this?

Thanks,

Andrew


Re: [ccp4bb] Resolution, R factors and data quality

2013-09-02 Thread Ian Tickle
On 1 September 2013 11:31, Frank von Delft frank.vonde...@sgc.ox.ac.uk wrote:


 2.
 I'm struck by how small the improvements in R/Rfree are in Diederichs &
 Karplus (ActaD 2013, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/);
 the authors don't discuss it, but what's current thinking on how to
 estimate the expected variation in R/Rfree - does the Tickle formalism
 (1998) still apply for ML with very weak data?


Frank, another point just occurred to me: the main reason for using Rfree
as a model selection criterion is to detect overfitting in cases where
you're comparing models with different numbers of parameters.  That doesn't
apply here since you're comparing the same model.  In that case you would
be much better off comparing Rwork since it has a much lower variance than
Rfree (in fact lower by a factor of 19 if you use the usual 5% of
reflections for the test set).
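Ian's factor of 19 is simply n_work/n_test = 95/5, since the variance of an R value scales as 1/n. A toy Monte-Carlo check under the assumption of independent random residuals per simulated "experiment" (the Fo values and error model below are fabricated; this is not the analytical treatment of the 1998 paper):

```python
import random
import statistics

random.seed(42)

def r_factor(deltas, fobs):
    """R = sum(|Fo - Fc|) / sum(Fo) over a set of reflections."""
    return sum(deltas) / sum(fobs)

n, n_test = 2000, 100            # 5% test set, 95% working set
fobs = [random.uniform(10.0, 100.0) for _ in range(n)]

rwork_vals, rfree_vals = [], []
for _ in range(400):
    # A fresh realisation of random error for each simulated experiment.
    deltas = [0.1 * f * random.random() for f in fobs]
    rfree_vals.append(r_factor(deltas[:n_test], fobs[:n_test]))
    rwork_vals.append(r_factor(deltas[n_test:], fobs[n_test:]))

ratio = statistics.variance(rfree_vals) / statistics.variance(rwork_vals)
print(ratio)  # scatters around n_work / n_test = 1900 / 100 = 19
```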

Cheers

-- Ian


Re: [ccp4bb] Resolution, R factors and data quality

2013-09-02 Thread Robbie Joosten
Hi Frank and Ian,

We struggled with the small changes in free R-factors when we implemented
the paired refinement for resolution cut-offs in PDB_REDO. It's not just the
lack of a proper test of significance for (weighted) R-factor changes, it's
also a more philosophical problem. When should you reject a higher
resolution cut-off? 
a) When it gives significantly higher R-factors (lenient)
b) When it gives numerically higher R-factors (less lenient, but takes away
the need for a significance test)
c) When it does not give significantly lower R-factors (very strict; if I
take X*sigma(R-free) as a cut-off, with X > 1.0, in most cases I should
reject the higher cut-off).

PDB_REDO uses b), similar to Karplus and Diederichs in their Science paper.

Then the next question is which metric are you going to use? R-free,
weighted R-free, free log likelihood and CCfree are all written out by
Refmac. At least the latter two have proper significance tests (likelihood
ratios and transformation Z-scores respectively). Note that we use different
models, constructed with different (but very much overlapping) data, but the
metrics are calculated with the same data. The different metrics do not
necessarily move in the same direction when moving to a higher resolution.

We ended up using all 4 in PDB_REDO. By default a higher resolution cut-off
is rejected if more than 1 metric gets (numerically) worse, but this can be
changed by the user.

Next question is the size of the resolution steps. How big should those be
and how should they be set up? Karplus and Diederichs used equal steps in
Angstrom, PDB_REDO uses equal steps in number of reflections. That way you
add the same amount of data (though not of usable information) with each step.
Anyway, a different choice of steps will give a different final resolution
cut-off. And the exact cut-off doesn't matter that much (see Evans and
Murshudov). Different (versions of) refinement programs will probably also
give somewhat different results. 

We tested our implementation on a number of structures in the PDB with data
extending to higher resolution than marked in the PDB file and we observed
that quite a lot had very conservative resolution cut-offs. In some cases we
could use so much extra data that we could move to a more complex B-factor
model and seriously improve R-factors.

The best resolution cut-off is unclear and may change over time with
improving methods. So whatever you choose, please deposit all the data that
you can get even if you don't use it yourself. I think that the Karplus and
Diederichs papers show us that you should at least realize that your
resolution cut-off is a methodological choice that you should describe and
should be able to defend if somebody asks you why you made that particular
choice.

Cheers,
Robbie
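A minimal sketch of the scoring step of paired refinement as described above: two models, refined against data cut at different resolutions, are compared on the same lower-resolution reflections, because R-factors computed over different resolution ranges are not comparable. The data layout and all numbers below are invented; real implementations (PDB_REDO, as Robbie describes) also consult weighted R-free, free log-likelihood, and CCfree.

```python
def r_factor(fobs, fcalc):
    """R = sum|Fo - Fc| / sum Fo."""
    return sum(abs(o - c) for o, c in zip(fobs, fcalc)) / sum(fobs)

def paired_comparison(reflections, model_lo, model_hi, d_cut):
    """Score two refined models on the SAME reflections (d >= d_cut).
    Hypothetical layout: reflections = [(d, Fobs), ...];
    model_lo / model_hi = {reflection index: Fcalc}."""
    common = [i for i, (d, _) in enumerate(reflections) if d >= d_cut]
    fobs = [reflections[i][1] for i in common]
    r_lo = r_factor(fobs, [model_lo[i] for i in common])
    r_hi = r_factor(fobs, [model_hi[i] for i in common])
    return r_lo, r_hi  # accept the higher cutoff if r_hi <= r_lo

# Tiny fabricated example: the model refined with the extra shell fits
# the common lower-resolution data slightly better.
refl = [(2.5, 100.0), (2.2, 80.0), (1.9, 60.0), (1.7, 40.0)]
m_lo = {0: 90.0, 1: 85.0, 2: 66.0, 3: 50.0}
m_hi = {0: 95.0, 1: 82.0, 2: 62.0, 3: 45.0}
print(paired_comparison(refl, m_lo, m_hi, d_cut=1.9))
```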



Re: [ccp4bb] Resolution, R factors and data quality

2013-09-01 Thread Frank von Delft

A bit late to this thread.

1.
Juergen:   Jim was not actually adopting CC*; he was asking how to make 
practical use of it when faced with actual datasets fading into noise.  
If I understand correctly from later responses, paired refinement is 
what KD suggest should be best practice?


2.
I'm struck by how small the improvements in R/Rfree are in Diederichs & 
Karplus (Acta D 2013, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/); 
the authors
don't discuss it, but what's current thinking on how to estimate the 
expected variation in R/Rfree - does the Tickle formalism (1998) still 
apply for ML with very weak data?


I'm puzzled by Table 4 (and discussion):  do I read correctly that 
discarding negative unique reflections led to higher CCwork/CCfree?  
Wasn't the point of the paper that massaging data always shows up in 
worse refinement stats?  Is this a corner case, and how would one know?


Cheers
phx











On 28/08/2013 01:48, Bosch, Juergen wrote:

Hi Jim,

all data is good data - the more data you have the better (that's what 
they say anyhow)


Not everybody is adopting the Karplus & Diederichs paper as quickly as 
you do. And not to be confused with the Diederichs & Karplus paper :-)

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/
http://www.ncbi.nlm.nih.gov/pubmed/22628654

My models get better by including the data I had been omitting before, 
that's all that counts for me.


Jürgen

P.S. reminds me somehow of those guys collecting more and more data - 
PRISM greetings


On Aug 27, 2013, at 8:29 PM, Jim Pflugrath wrote:


I have to ask flamingly: So what about CC1/2 and CC*?

Did we not replace an arbitrary resolution cut-off based on a value 
of Rmerge with an arbitrary resolution cut-off based on a value of 
Rmeas already?  And now we are going to replace that with an 
arbitrary resolution cut-off based on a value of CC* or is it CC1/2?


I am asked often:  What value of CC1/2 should I cut my resolution at? 
 What should I tell my students?  I've got a course coming up and I 
am sure they will ask me again.


Jim


*From:* CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Arka 
Chakraborty [arko.chakrabort...@gmail.com]

*Sent:* Tuesday, August 27, 2013 7:45 AM
*To:* CCP4BB@JISCMAIL.AC.UK
*Subject:* Re: [ccp4bb] Resolution, R factors and data quality

Hi all,
does this not again bring up the still prevailing adherence to R 
factors, rather than a shift to correlation coefficients (CC1/2 and 
CC*), as Dr. Phil Evans has indicated?
The way we look at data quality (by we I mean the end users) 
needs to be altered, I guess.


best,

Arka Chakraborty

On Tue, Aug 27, 2013 at 9:50 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:


The question you should ask yourself is why would omitting data
improve my model?

Phil



..
Jürgen Bosch
Johns Hopkins University
Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Office: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-2926
http://lupo.jhsph.edu








Re: [ccp4bb] Resolution, R factors and data quality

2013-09-01 Thread Ian Tickle
On 1 September 2013 11:31, Frank von Delft frank.vonde...@sgc.ox.ac.uk wrote:


 2.
 I'm struck by how small the improvements in R/Rfree are in Diederichs &
 Karplus (ActaD 2013, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/);
 the authors don't discuss it, but what's current thinking on how to
 estimate the expected variation in R/Rfree - does the Tickle formalism
 (1998) still apply for ML with very weak data?


Frank, our paper is still relevant, unfortunately just not to the question
you're trying to answer!  We were trying to answer 2 questions: 1) what
value of Rfree would you expect to get if the structure were free of
systematic error and only random errors were present, so that could be used
as a baseline (assuming a fixed cross-validation test set) to identify
models with gross (e.g. chain-tracing) errors; and 2) how much would you
expect Rfree to vary assuming a fixed starting model but with a different
random sampling of the test set (i.e. the sampling standard deviation).
The latter is relevant if say you want to compare the same structure (at
the same resolution obviously) done independently in 2 labs, since it tells
you how big the difference in Rfree for an arbitrary choice of test set
needs to be before you can claim that it's statistically significant.

In this case the questions are different because you're certainly not
comparing different models using the same test set, neither I suspect are
you comparing the same model with different randomly selected test sets.  I
assume in this case that the test sets for different resolution cut-offs
are highly correlated, which I suspect makes it quite difficult to say what
is a significant difference in Rfree (I have not attempted to do the
algebra!).

Rfree is one of a number of model selection criteria (see
http://en.wikipedia.org/wiki/Model_selection#Criteria_for_model_selection)
whose purpose is to provide a metric for comparison of different models
given specific data, i.e. as for the likelihood function they all take the
form f(model | data), so in all cases you're varying the model with fixed
data.  It's use in the form f(data | model), i.e. where you're varying the
data with a fixed model I would say is somewhat questionable and certainly
requires careful analysis to determine whether the results are
statistically significant.  Even assuming we can argue our way around the
inappropriate application of model selection methodology to a different
problem, unfortunately Rfree is far from an ideal criterion in this
respect; a better one would surely be the free log-likelihood as originally
proposed by Gerard Bricogne.

Cheers

-- Ian


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-29 Thread Robbie Joosten
Hi Bernhard,

snip
 But the real objective is – where do data stop making an improvement to the
 model. The categorical statement that all data is good
 
 is simply not true in practice. It is probably specific to each data set &
 refinement, and as long as we do not always run paired refinement ala KD
 
 or similar in order to find out where that point is, the yearning for a simple
 number will not stop (although I believe automation will make the KD
 approach or similar eventually routine).

For what it is worth: This is already implemented in PDB_REDO.

Cheers,
Robbie


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Bernhard Rupp
Based on the simulations I've done the data should be cut at CC1/2 = 0. 
Seriously. Problem is figuring out where it hits zero. 

 

But the real objective is – where do data stop making an improvement to the 
model. The categorical statement that all data is good

is simply not true in practice. It is probably specific to each data set &
refinement, and as long as we do not always run paired refinement ala KD

or similar in order to find out where that point is, the yearning for a simple 
number will not stop (although I believe automation will make the KD approach 
or similar eventually routine). 

 

As for the resolution of the structure I'd say call that where |Fo-Fc| 
(error in the map) becomes comparable to Sigma(Fo). This is I/Sigma = 2.5 if 
Rcryst is 20%.  That is: |Fo-Fc| / Fo = 0.2, which implies |Io-Ic|/Io = 0.4 or 
Io/|Io-Ic| = Io/sigma(Io) = 2.5.
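The middle step of James's argument follows from I = F²: to first order, dI/I = 2·dF/F, so a 20% amplitude discrepancy corresponds to roughly a 40% intensity discrepancy. A quick numeric check (toy numbers; at a full 20% the exact ratio is 0.36 rather than the linearised 0.4):

```python
# Toy check of the squaring step: I = F**2, so to first order
# dI/I = 2 * dF/F.  A 20% amplitude discrepancy is therefore ~40% in
# intensity (exactly 0.36 here, since 20% is not infinitesimal),
# giving Io/|Io - Ic| in the neighbourhood of 2.5.
fo, fc = 100.0, 80.0          # 20% amplitude discrepancy
io, ic = fo ** 2, fc ** 2
rel_f = abs(fo - fc) / fo     # 0.2
rel_i = abs(io - ic) / io     # 0.36
print(rel_f, rel_i, io / abs(io - ic))
```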

 

Makes sense to me...

 

As long as it is understood that this ‘model resolution value’ derived via your 
argument from I/sigI is not the same as a I/sigI data cutoff (and that Rcryst 
and Rmerge have nothing in common)….

 

-James Holton

MAD Scientist

 

Best, BR

 

 


On Aug 27, 2013, at 5:29 PM, Jim Pflugrath jim.pflugr...@rigaku.com wrote:

I have to ask flamingly: So what about CC1/2 and CC*?  

 

Did we not replace an arbitrary resolution cut-off based on a value of Rmerge 
with an arbitrary resolution cut-off based on a value of Rmeas already?  And 
now we are going to replace that with an arbitrary resolution cut-off based on 
a value of CC* or is it CC1/2?

 

I am asked often:  What value of CC1/2 should I cut my resolution at?  What 
should I tell my students?  I've got a course coming up and I am sure they will 
ask me again.

 

Jim

 




From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Arka Chakraborty 
[arko.chakrabort...@gmail.com]
Sent: Tuesday, August 27, 2013 7:45 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution, R factors and data quality

Hi all,

does this not again bring up the still prevailing adherence to R factors, 
rather than a shift to correlation coefficients (CC1/2 and CC*), as Dr. Phil 
Evans has indicated?

The way we look at data quality ( by we I mean the end users ) needs to be 
altered, I guess.

best,

 

Arka Chakraborty

 

On Tue, Aug 27, 2013 at 9:50 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:

The question you should ask yourself is why would omitting data improve my 
model?

Phil



Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Phil Evans
We don't currently have a really good measure of the point where adding an 
extra shell of data adds significant information (whatever that means). 
However, my rough trials (see http://www.ncbi.nlm.nih.gov/pubmed/23793146) 
suggested that the exact cutoff point was not very critical, presumably as the 
information content fades out slowly, so it probably isn't something to 
agonise over too much. K & D's paired refinement may be useful though.

I would again caution against looking too hard at CC* rather than CC1/2: they 
are exactly equivalent, but CC* changes very rapidly at small values, which may 
be misleading. The purpose of CC* is for comparison with CCcryst (i.e. Fo to 
Fc).

I would remind any users of Scala who want to look back at old log files to see 
the statistics for the outer shell at the cutoff they used, that CC1/2 has been 
calculated in Scala for many years under the name CC_IMEAN. It's now called 
CC1/2 in Aimless (and Scala) following Kai's excellent suggestion.

Phil
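Phil's point about CC* changing rapidly at small values can be seen directly from the definition CC* = sqrt(2·CC1/2 / (1 + CC1/2)) (Karplus & Diederichs, Science 2012):

```python
import math

def cc_star(cc_half):
    """CC* = sqrt(2*CC1/2 / (1 + CC1/2))  (Karplus & Diederichs, 2012):
    an estimate of the CC of the merged data against the (unmeasurable)
    true signal, for comparison with CCwork/CCfree."""
    return math.sqrt(2.0 * cc_half / (1.0 + cc_half))

# CC* rises steeply at small CC1/2: a weak outer shell with CC1/2 = 0.1
# already maps to CC* of about 0.43.
for cc_half in (0.1, 0.3, 0.5, 0.9):
    print(cc_half, round(cc_star(cc_half), 3))
```

The steep slope near zero is why small differences in a weak shell's CC1/2 can look dramatic when quoted as CC*.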


On 28 Aug 2013, at 08:21, Bernhard Rupp hofkristall...@gmail.com wrote:

 Based on the simulations I've done the data should be cut at CC1/2 = 0. 
 Seriously. Problem is figuring out where it hits zero. 
  
 But the real objective is – where do data stop making an improvement to the 
 model. The categorical statement that all data is good
 is simply not true in practice. It is probably specific to each data set &
 refinement, and as long as we do not always run paired refinement ala KD
 or similar in order to find out where that point is, the yearning for a 
 simple number will not stop (although I believe automation will make the KD 
 approach or similar eventually routine).
  
 As for the resolution of the structure I'd say call that where |Fo-Fc| 
 (error in the map) becomes comparable to Sigma(Fo). This is I/Sigma = 2.5 if 
 Rcryst is 20%.  That is: |Fo-Fc| / Fo = 0.2, which implies |Io-Ic|/Io = 0.4 
 or Io/|Io-Ic| = Io/sigma(Io) = 2.5.
  
 Makes sense to me...
  
 As long as it is understood that this ‘model resolution value’ derived via 
 your argument from I/sigI is not the same as a I/sigI data cutoff (and that 
 Rcryst and Rmerge have nothing in common)….
  
 -James Holton
 MAD Scientist
  
 
 Best, BR
 
  
 
  
 
 
 On Aug 27, 2013, at 5:29 PM, Jim Pflugrath jim.pflugr...@rigaku.com wrote:
 
 I have to ask flamingly: So what about CC1/2 and CC*?  
  
 Did we not replace an arbitrary resolution cut-off based on a value of Rmerge 
 with an arbitrary resolution cut-off based on a value of Rmeas already?  And 
 now we are going to replace that with an arbitrary resolution cut-off based 
 on a value of CC* or is it CC1/2?
  
 I am asked often:  What value of CC1/2 should I cut my resolution at?  What 
 should I tell my students?  I've got a course coming up and I am sure they 
 will ask me again.
  
 Jim
  
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Arka 
 Chakraborty [arko.chakrabort...@gmail.com]
 Sent: Tuesday, August 27, 2013 7:45 AM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Resolution, R factors and data quality
 
 Hi all,
 does this not again bring up the still prevailing adherence to R factors, 
 rather than a shift to correlation coefficients (CC1/2 and CC*), as Dr. Phil 
 Evans has indicated?
 The way we look at data quality (by we I mean the end users) needs to be 
 altered, I guess.
 
 best,
  
 Arka Chakraborty
  
 On Tue, Aug 27, 2013 at 9:50 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
 The question you should ask yourself is why would omitting data improve my 
 model?
 
 Phil


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Arka Chakraborty
Hi all,
 If I am not wrong, the Karplus & Diederichs paper suggests that data are
generally meaningful up to a CC1/2 value of 0.20, but they suggest a paired
refinement technique (pretty easy to perform) to actually decide on the
resolution at which to cut the data. This would be the most prudent thing to
do, I guess, rather than following any arbitrary value, as each data set is
different. But the fact remains that even where I/sigma(I) falls to 0.5,
useful information remains that will improve the quality of the maps, and
discarding it just leads us a bit further away from the truth. However, as
always, Drs Diederichs and Karplus are the best persons to comment on
that (as they have already done in the paper :) )

best,

Arka Chakraborty

P.S. Aimless seems to suggest a resolution limit based on a CC1/2 = 0.5
criterion (which I guess is chosen to be on the safe side; Dr. Phil Evans
can explain if there is another, or an entirely different, reason for it!).
But if we want to squeeze the most from our data set, I guess we sometimes
need to push a bit further :)


On Wed, Aug 28, 2013 at 9:21 AM, Bernhard Rupp hofkristall...@gmail.com wrote:

 Based on the simulations I've done the data should be cut at CC1/2 =
 0. Seriously. Problem is figuring out where it hits zero.

 But the real objective is – where do data stop making an improvement to
 the model. The categorical statement that all data is good
 is simply not true in practice. It is probably specific to each data set &
 refinement, and as long as we do not always run paired refinement a la K&D
 or similar in order to find out where that point is, the yearning for a
 simple number will not stop (although I believe automation will make the K&D
 approach or similar eventually routine).

 As for the resolution of the structure I'd say call that where |Fo-Fc|
 (error in the map) becomes comparable to Sigma(Fo). This is I/Sigma = 2.5
 if Rcryst is 20%.  That is: |Fo-Fc| / Fo = 0.2, which implies |Io-Ic|/Io =
 0.4 or Io/|Io-Ic| = Io/sigma(Io) = 2.5.

 Makes sense to me...

 As long as it is understood that this ‘model resolution value’ derived via
 your argument from I/sigI is not the same as an I/sigI data cutoff (and
 that Rcryst and Rmerge have nothing in common)…

 -James Holton

 MAD Scientist

 Best, BR

 On Aug 27, 2013, at 5:29 PM, Jim Pflugrath jim.pflugr...@rigaku.com
 wrote:

 I have to ask flamingly: So what about CC1/2 and CC*?

 Did we not replace an arbitrary resolution cut-off based on a value of
 Rmerge with an arbitrary resolution cut-off based on a value of Rmeas
 already?  And now we are going to replace that with an arbitrary resolution
 cut-off based on a value of CC* or is it CC1/2?

 I am asked often:  What value of CC1/2 should I cut my resolution at?
 What should I tell my students?  I've got a course coming up and I am sure
 they will ask me again.

 Jim





-- 
*Arka Chakraborty*
*ibmb (Institut de Biologia Molecular de Barcelona)**
**BARCELONA, SPAIN**
*


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Phil Evans
Aimless does indeed calculate the point at which CC1/2 falls below 0.5, but I
would not necessarily suggest that as the best cutoff point. Personally I
would also look at I/sigI, anisotropy and completeness, but as I said, at that
point I don't think it makes a huge difference.

Phil

On 28 Aug 2013, at 10:00, Arka Chakraborty arko.chakrabort...@gmail.com wrote:

 Hi all,
  If I am not wrong, the Karplus  Diederich paper suggests that data is 
 generally meaningful upto CC1/2  value of 0.20 but they suggest a paired 
 refinement technique ( pretty easy to perform) to actually decide on the 
 resolution at which to cut the data. This will be the most prudent thing to 
 do I guess and not follow any arbitrary value, as each data-set is different. 
 But the fact remains that even where I/sigma(I) falls to 0.5 useful 
 information remains which will improve the quality of the maps, and when 
 discarded just leads us a bit further away from  truth. However, as always, 
 Dr Diederich and Karplus will be the best persons to comment on that ( as 
 they have already done in the paper :) )
 
 best,
 
 Arka Chakraborty
 
 p.s. Aimless seems to suggest a resolution limit bases on CC1/2=0.5 criterion 
 ( which I guess is done to be on the safe side- Dr. Phil Evans can explain if 
 there are other or an entirely different reason to it! ). But if we want to 
 squeeze the most from our data-set,  I guess we need to push a bit further 
 sometimes :)
 
 


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Bernhard Rupp
 We don't currently have a really good measure of that point where adding
the extra shell of data adds significant information 
  so it probably isn't something to agonise over too much. K  D's paired
refinement may be useful though.

That seems to be a correct assessment of the situation, and a forceful
argument to eliminate the review nonsense of nitpicking on I/sigI values,
associated R-merges, and other pseudo-statistics once and for all. Thanks to
data deposition, we can now at any time generate or download the maps and
the models and judge for ourselves even minute details of local model
quality.
As far as use and interpretation go, where the model meets the map is where
the rubber meets the road.
I therefore make the heretical statement that the entire Table 1 of
data-collection statistics, justifiable in pre-deposition times as some means
to guess structure quality, can go the way of X-ray film and be almost always
eliminated from papers.
There is nothing really useful in Table 1, and all its data items and more
are in the PDB header anyhow.
Availability of maps for review and for users is the key point.

Cheers, BR


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Bosch, Juergen
What a statement!
Give reviewers maps, I agree. However, what if the reviewer has no clue about
these things we call structures? I think for those people Table 1 might still
provide some justification. I would argue it should at least go into the
supplement.

Jürgen 

Sent from my iPad



Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Pavel Afonine
Hi,

a random thought: the data resolution, d_min_actual, can be thought of as the
value that maximizes the correlation (*) between the synthesis calculated
using your data and an equivalent Fmodel synthesis calculated using the
complete set of Miller indices in the d_min_actual-inf resolution range,
where d_min <= d_min_actual and d_min is the highest resolution of the data
set in question. Makes sense to me..

(*) or any other more appropriate similarity measure: usual map CC may not
be the best one in this context.

Pavel
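Pavel's criterion can be sketched numerically. Assuming, hypothetically, that
the data map and a set of trial Fmodel syntheses are already available as
arrays on a common grid, the trial resolution maximizing the map correlation
is the estimated data resolution; map_cc and pick_d_min below are
illustrative names, not part of any phenix API:

```python
import numpy as np

def map_cc(map_a, map_b):
    """Pearson correlation between two real-space maps on the same grid."""
    a = map_a.ravel() - map_a.mean()
    b = map_b.ravel() - map_b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def pick_d_min(data_map, model_maps):
    """model_maps: dict mapping a trial d_min_actual to an Fmodel synthesis
    computed with a complete index set to that resolution, on the same grid
    as data_map.  Return the trial resolution whose synthesis best matches
    the map from the data."""
    return max(model_maps, key=lambda d: map_cc(data_map, model_maps[d]))
```

As the footnote says, plain map CC may not be the best similarity measure in
this context; any other measure could be slotted into pick_d_min in its place.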


On Tue, Aug 27, 2013 at 5:45 AM, Arka Chakraborty 
arko.chakrabort...@gmail.com wrote:

 Hi all,
 does this not again bring up the still prevailing adherence to R factors
 and not  a shift to correlation coefficients ( CC1/2 and CC*) ? (as Dr.
 Phil Evans has indicated).?
 The way we look at data quality ( by we I mean the end users ) needs to
 be altered, I guess.

 best,

 Arka Chakraborty




Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Bernhard Rupp
 what if the reviewer has no clue of these things we call structures ? I think 
 for those people table 1 might still provide some justification.

Someone who knows little about structures probably won’t appreciate the
technical details in Table 1 either.



Re: [ccp4bb] Resolution, R factors and data quality

2013-08-28 Thread Stefan Gajewski
Jim,

This is coming from someone who just got enlightened a few weeks ago on 
resolution cut-offs.

I am asked often:  What value of CC1/2 should I cut my resolution at? 

The K&D paper mentions that the CC(1/2) criterion loses statistical
significance at around 9% (CC(1/2) ≈ 0.09), according to a Student's t-test.

I doubt that this can be a generally valid guideline for a resolution cutoff.
The structures I am doing right now were cut off at CC(1/2) between ~20% and ~80%.

You probably do not want to make the same mistake again that we all made
before when cutting resolution based on Rmerge/Rmeas, do you?


 What should I tell my students?  I've got a course coming up and I am sure 
 they will ask me again.

This is actually the more valuable insight I got from the K&D paper: you don't
use CC(1/2) as an absolute indicator but rather as a suggestion. The
resolution limit is determined by the refinement, not by the data processing.

I think I will handle my data in future as follows:

Bins with CC(1/2) below ~9% should be initially excluded.

The structure is then refined against all reflections in the file, and only
those bins that add information to the map/structure are kept in the final
rounds. In most cases this will probably be more than CC(1/2) = 25%. If the
last shell (CC(1/2) ≈ 9%) still adds information to the model, process the
images again, e.g. until CC(1/2) drops to 0, and see if some more useful
information is in there. You could also go ahead and use CC(1/2) = 0 as the
initial cutoff, but I think that would increase computation time rather than
help your structure in most cases.
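The initial bin-selection step described above can be sketched as follows,
assuming CC(1/2) values are given as fractions per resolution shell
(initial_cutoff is a hypothetical helper; the subsequent keep-or-drop
decision would come from the actual refinement, not from this function):

```python
def initial_cutoff(bins, cc_min=0.09):
    """bins: list of (d_min, cc_half) shells ordered from low to high
    resolution.  Return the d_min of the last shell whose CC1/2 is still at
    or above cc_min (~9%, roughly where CC1/2 loses significance), i.e. the
    suggested starting point before paired refinement.  Returns None if even
    the first shell fails the test."""
    keep = None
    for d_min, cc_half in bins:
        if cc_half < cc_min:
            break          # stop at the first shell below the threshold
        keep = d_min
    return keep
```

For example, shells with CC(1/2) of 0.95, 0.60, 0.25, 0.08, 0.02 would give an
initial cutoff at the third shell.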


So yes, I would feel comfortable giving true resolution limits based on the
refinement of the model, and not based on any number derived from data
processing. In the end, you can always say "I tried it, and this was the
highest resolution I could model" rather than "I cut at numerical value X of
this parameter because everybody else does so."


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread Pavel Afonine
Excellent point about the R-factors. Indeed, at this resolution they should be
quite a bit lower than what you have. Did you:
- model solvent?
- use anisotropic ADPs?
- add H atoms (this alone can drop R by 1-2%)?
- model alternative conformations?
- check how the R-factors (Rwork) behave as a function of resolution?
Pavel







Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread Bernhard Rupp
Maybe a few remarks might help:

Ad a) "R merge of 80% may be OK if I/sig for high res shell is 2."

What rationale is that statement based upon, and what is the exact meaning of
this statement? Is an Rmerge of 80% not OK when I/sigI is, say, 1.5? Or would
80% be OK if the I/sigI is 3.0? Why should an Rmerge of 80% be (too) high in
the first place?

b) There is no statistical justification whatsoever for the I/sigI cutoff of
2 for refinement. This has been discussed @CCP4bb multiple times, for good
reason. In this particular case, the (in)completeness appears to be the
dominating factor.

c) As Pavel notes, the R-value improvement means nil when truncating data -
try to refine from 8 to 2 A and the Rs might be even lower (an abuse we
engaged in ages ago, when we did not know better and had no ML).

d) Absolute values of refinement Rs vs. (historic) expectation values cannot
be judged without complete and detailed knowledge of the refinement protocol.

The ultimate question is whether your model improves with inclusion of more
data or not. Kay Diederichs has a few papers to this effect that make good
reading. And CC1/2 seems to provide statistically justifiable limits for
cut-off of (reasonably complete) high-resolution shells.

LG, BR
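The statistical justification usually cited here is a significance test on
the half-set correlation: a shell's CC1/2, computed from n half-set pairs,
differs significantly from zero when the Student's t statistic
t = CC * sqrt((n - 2) / (1 - CC^2)) exceeds a critical value. A minimal
sketch (the fixed critical value is an assumption, standing in for a proper
t-distribution lookup at n - 2 degrees of freedom):

```python
import math

def cc_is_significant(cc_half, n_pairs, t_crit=3.1):
    """Student's t statistic for a Pearson correlation r over n samples:
    t = r * sqrt((n - 2) / (1 - r^2)).  t_crit ~ 3.1 roughly corresponds to
    p ~ 0.001 for large n (an assumed shortcut, not an exact lookup)."""
    t = cc_half * math.sqrt((n_pairs - 2) / (1.0 - cc_half ** 2))
    return t > t_crit

# Note: a weak CC1/2 can still be significant when the shell contains many
# reflections, which is why no single CC1/2 value works as a universal cutoff.
```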

 


 



Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread Phil Evans
The question you should ask yourself is why would omitting data improve my 
model? 

Phil



Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread Arka Chakraborty
Hi all,
does this not again bring up the still-prevailing adherence to R-factors
rather than a shift to correlation coefficients (CC1/2 and CC*), as Dr. Phil
Evans has indicated? The way we look at data quality (by "we" I mean the end
users) needs to be altered, I guess.

best,

Arka Chakraborty

On Tue, Aug 27, 2013 at 9:50 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:

 The question you should ask yourself is why would omitting data improve
 my model?

 Phil





-- 
*Arka Chakraborty*
*ibmb (Institut de Biologia Molecular de Barcelona)**
**BARCELONA, SPAIN**
*


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread Jim Pflugrath
I have to ask flamingly: So what about CC1/2 and CC*?

Did we not replace an arbitrary resolution cut-off based on a value of Rmerge 
with an arbitrary resolution cut-off based on a value of Rmeas already?  And 
now we are going to replace that with an arbitrary resolution cut-off based on 
a value of CC* or is it CC1/2?

I am asked often:  What value of CC1/2 should I cut my resolution at?  What 
should I tell my students?  I've got a course coming up and I am sure they will 
ask me again.

Jim


From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Arka Chakraborty 
[arko.chakrabort...@gmail.com]
Sent: Tuesday, August 27, 2013 7:45 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution, R factors and data quality

Hi all,
does this not again bring up the still prevailing adherence to R factors and 
not  a shift to correlation coefficients ( CC1/2 and CC*) ? (as Dr. Phil Evans 
has indicated).?
The way we look at data quality ( by we I mean the end users ) needs to be 
altered, I guess.

best,

Arka Chakraborty

On Tue, Aug 27, 2013 at 9:50 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
The question you should ask yourself is why would omitting data improve my 
model?

Phil


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread Bosch, Juergen
Hi Jim,

all data is good data - the more data you have the better (that's what they say 
anyhow)

Not everybody is adopting the Karplus & Diederichs paper as quickly as you
are. And it is not to be confused with the Diederichs and Karplus paper :-)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3689524/
http://www.ncbi.nlm.nih.gov/pubmed/22628654

My models get better by including the data I had been omitting before, that's 
all that counts for me.

Jürgen

P.S. reminds me somehow of those guys collecting more and more data - PRISM 
greetings


..
Jürgen Bosch
Johns Hopkins University
Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Office: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-2926
http://lupo.jhsph.edu






Re: [ccp4bb] Resolution, R factors and data quality

2013-08-27 Thread James M Holton
Based on the simulations I've done the data should be cut at CC1/2 = 0. 
Seriously. Problem is figuring out where it hits zero. 

Alternatively, if French & Wilson can be modified so the Wilson plot is always
straight, then the data don't need to be cut at all.

As for the resolution of the structure I'd say call that where |Fo-Fc| (error 
in the map) becomes comparable to Sigma(Fo). This is I/Sigma = 2.5 if Rcryst is 
20%.  That is: |Fo-Fc| / Fo = 0.2, which implies |Io-Ic|/Io = 0.4 or Io/|Io-Ic| 
= Io/sigma(Io) = 2.5.

Makes sense to me...

-James Holton
MAD Scientist
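Holton's arithmetic follows from the fact that I is proportional to F squared,
so a small relative error in F roughly doubles in I. A quick check of the
numbers:

```python
# Holton's back-of-envelope: Rcryst ~ 20% means |Fo - Fc| / Fo ~ 0.2.
# Since I ~ F^2, a small relative error in F roughly doubles in I:
# dI/I = 2 * dF/F.  Taking the map error |Fo - Fc| as comparable to
# sigma(Fo), the matching intensity signal-to-noise is 1 / (dI/I).
rel_f_err = 0.2                # |Fo - Fc| / Fo at Rcryst = 20%
rel_i_err = 2 * rel_f_err      # |Io - Ic| / Io = 0.4
i_over_sig = 1 / rel_i_err     # Io / sigma(Io)
print(i_over_sig)              # 2.5, as in the argument above
```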



[ccp4bb] Resolution, R factors and data quality

2013-08-26 Thread Emily Golden
Hi All,

I have collected diffraction images to 1 Angstrom resolution to the edge of
the detector and 0.9A to the corner.I collected two sets, one for low
resolution reflections and one for high resolution reflections.
I get 100% completeness above 1A and 41% completeness in the 0.9A-0.95A
shell.

However, my Rmerge in the highest shell is not good, ~80%.

The Rfree is 0.17 and Rwork is 0.16 but the maps look very good.   If I cut
the data to 1 Angstrom the R factors improve but I feel the maps are not as
good and I'm not sure if I can justify cutting data.

So my question is, should I cut the data to 1 Angstrom, or should I keep the
data I have?

Also, taking geometric restraints off during refinement the Rfactors
improve marginally, am I justified in doing this at this resolution?

Thank you,

Emily


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-26 Thread Pavel Afonine
Hi Emily,


I get 100% completeness above 1A and 41% completeness in the 0.9A-0.95A
 shell.

 However, my Rmerge in the highest shelll is not good, ~80%.

 The Rfree is 0.17 and Rwork is 0.16 but the maps look very good.   If I
 cut the data to 1 Angstrom the R factors improve but I feel the maps are
 not as good and I'm not sure if I can justify cutting data.



You can't compare R-factors calculated using different sets of reflections.

Maps get worse? Could it be that when you use the full resolution range you
get the 59% of missing reflections in the highest resolution shell filled in
with DFc for the purpose of map calculation?


 Also, taking geometric restraints off during refinement the Rfactors
 improve marginally, am I justified in doing this at this resolution?



It's unlikely you can refine without restraints at this resolution. Perhaps
without restraints the model is still OK overall, but I would bet there are
places that get badly distorted, so have a closer look at your model quality
locally (alternative conformations, mobile loops, etc.).

Pavel


Re: [ccp4bb] Resolution, R factors and data quality

2013-08-26 Thread Emily Golden
Thanks Yuriy and Pavel,

> at this resolution one would expect R/Rfree to be ~ 10-11%/12-13% assuming
> you applied anisotropic B-factor refinement (and probably having a low
> symmetry SG).
> R merge of 80% may be OK if I/sig for high res shell is 2.

Yes, I used anisotropic Bfactors and the space group is P1 21 1.  However,
the I/sig is only 1.5 in the highest shell.   Cutting the data such that
the I/sig is 2 has improved the R factors.  Thank you.

> Maps get worse? Could it be that when you use the full resolution range you
> get the 59% of missing reflections in the highest resolution shell filled
> in with DFc for the purpose of map calculation?

Yes! the map that I was looking at was filled.

Emily


On 27 August 2013 09:49, Emily Golden 10417...@student.uwa.edu.au wrote:

 Hi All,

 I have collected diffraction images to 1 Angstrom resolution to the edge
 of the detector and 0.9A to the corner.I collected two sets, one for
 low resolution reflections and one for high resolution reflections.
 I get 100% completeness above 1A and 41% completeness in the 0.9A-0.95A
 shell.

 However, my Rmerge in the highest shell is not good, ~80%.

 The Rfree is 0.17 and Rwork is 0.16 but the maps look very good.   If I
 cut the data to 1 Angstrom the R factors improve but I feel the maps are
 not as good and I'm not sure if I can justify cutting data.

 So my question is,  should I cut the data to 1Angstrom or should I keep
 the data I have?

 Also, when I take the geometric restraints off during refinement, the R factors
 improve marginally; am I justified in doing this at this resolution?

 Thank you,

 Emily



Re: [ccp4bb] Resolution limit of index in XDS

2013-03-21 Thread Herman . Schreuder
Dear Tim,

It could be that COLSPOT does not rely on experimental setup parameters. 
However, XDS must have reasonably close starting values for distance, direct 
beam position etc., otherwise the autoindexing would fail, so the information 
to calculate an approximate TRUSTED_REGION is available.

For good data, a limited spot range usually works as well. However, for the 
weakly diffracting bad crystals with ice rings, salt spots, multiple 
diffraction patterns etc., one often needs the full range and often needs 
several tries with different parameters before indexing is successful. Since it 
is only cpu-time, it is the least of my worries and, as you mention, it is not 
bad to be forced to think once in a while instead of just clicking buttons in 
GUIs.

Best regards,
Herman



-Original Message-
From: Tim Gruene [mailto:t...@shelx.uni-ac.gwdg.de] 
Sent: Wednesday, March 20, 2013 11:17 PM
To: Schreuder, Herman RD/DE
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution limit of index in XDS

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Herman,

the short answer might be that at the stage of COLSPOT the term 'resolution' 
has a limited meaning because COLSPOT does not rely on the experimental setup 
like distance and beam direction, so the term 'resolution limit' is 
conceptually not applicable at this stage.

Indexing often does not require the full data set; you can reduce the 
SPOT_RANGE if you are worried about processing time, or use a multi-CPU 
machine.

One of the great advantages of XDS is that it asks you to think at a level 
higher than the average MS-Windows user while processing your data, so the 
effort to figure out the three numbers to set the TRUSTED_REGION is in line 
with the philosophy of XDS as I understand it.

But you are right, I do not have access to the source of XDS and I am not the 
person to address a request to.

Kind regards,
Tim

On 03/20/2013 10:29 AM, herman.schreu...@sanofi.com wrote:
 Dear Tim, but probably I should address this to Kay Diederichs,
 
 not including the resolution cutoff in COLSPOT and IDXREF is a feature 
 of XDS I do not understand at all. For most cases, it may not matter 
 since only the strong spots are used, but what are the advantages?
 
 In fact there are disadvantages, especially when dealing with poorly 
 diffracting difficult data sets: -when a crystallographer imposes a 
 resolution limit, there are usually good reasons for it.
 -outside the resolution limit, there may be ice rings or contaminating 
 salt spots, which make the autoindexing fail. -when processing 900 
 frame Pilatus data sets, running COLSPOT on the complete detector 
 surface takes significantly longer than running it only on the center 
 region.
 
 Of course, one could fudge a resolution cutoff by translating 
 resolution into pixels and then calculating a TRUSTED_REGION, or 
 manually editing the SPOT.XDS file, but this is a lot of extra and in 
 my view unnecessary work.
 
 I would really consider using the resolution cutoff for COLSPOT as 
 well.
 
 Best, Herman
 
 
 -Original Message- From: CCP4 bulletin board 
 [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tim Gruene Sent:
 Tuesday, March 19, 2013 11:06 PM To: CCP4BB@JISCMAIL.AC.UK Subject:
 Re: [ccp4bb] Resolution limit of index in XDS
 
 Dear Niu,
 
 indexing relies on strong reflections only; that is (very briefly)
 why INCLUDE_RESOLUTION_RANGE indeed does not affect the 
 reflections collected in COLSPOT, which in turn are used by IDXREF.
 You can work around this, however, by making use of TRUSTED_REGION and 
 set it to e.g. 0.7 or 0.6 (you can use adxv to translate resolution 
 into pixel and then calculate the fraction you need to set the second 
 number in TRUSTED_REGION to (or the first if you want to exclude the 
 inner resolution reflections - I remember one data set where this was 
 essential for indexing - DNA was involved
 there)
 
 Best, Tim
 
 On 03/19/2013 08:53 PM, Niu Tou wrote:
 Dear All,
 
 Is there any command that can set the resolution limit for the indexing step in 
 XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks to 
 be a definition of the resolution range applied after the indexing step, as it says:
 
 INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
 DEFPIX,INTEGRATE,CORRECT
 
 Thanks! Niu
 
 
 

- --
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFRSjVtUxlJ7aRr7hoRApZhAJ9RFBs8D9NGjgLY3KOoNHhNtdOWggCgj7U0
zY7jEFDYZfl0Umb9E1Bzs1U=
=+HjR
-END PGP SIGNATURE-


Re: [ccp4bb] Resolution limit of index in XDS

2013-03-21 Thread Kay Diederichs
Dear Herman,

some pros and cons are documented at 
http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Wishlist#Would_be_nice_to_have
 , and the workaround is at 
http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Ice_rings . These 
XDSwiki articles are old, and nobody has contributed to the discussion since 
2007 (after all, it is a Wiki!), so there has not been much reason for a change.
Tim is right in that usage of INCLUDE_RESOLUTION_RANGE does not fit well at the 
COLSPOT stage, since COLSPOT knows nothing about wavelength, distance, pixel 
size and so on.
If there is agreement among XDS users that IDXREF should take 
INCLUDE_RESOLUTION_RANGE into account, there is a good chance that the next 
version of XDS will do that.

best,

Kay

On Wed, 20 Mar 2013 09:29:47 +, herman.schreu...@sanofi.com wrote:

Dear Tim, but probably I should address this to Kay Diederichs,

not including the resolution cutoff in COLSPOT and IDXREF is a feature of XDS 
I do not understand at all. For most cases, it may not matter since only the 
strong spots are used, but what are the advantages?

In fact there are disadvantages, especially when dealing with poorly 
diffracting difficult data sets:
-when a crystallographer imposes a resolution limit, there are usually good 
reasons for it.
-outside the resolution limit, there may be ice rings or contaminating salt 
spots, which make the autoindexing fail.
-when processing 900 frame Pilatus data sets, running COLSPOT on the complete 
detector surface takes significantly longer than running it only on the center 
region.

Of course, one could fudge a resolution cutoff by translating resolution into 
pixels and then calculating a TRUSTED_REGION, or manually editing the SPOT.XDS 
file, but this is a lot of extra and in my view unnecessary work.

I would really consider using the resolution cutoff for COLSPOT as well.

Best,
Herman
 

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tim 
Gruene
Sent: Tuesday, March 19, 2013 11:06 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution limit of index in XDS

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Niu,

indexing relies on strong reflections only; that is (very briefly) why 
INCLUDE_RESOLUTION_RANGE indeed does not affect the reflections collected in 
COLSPOT, which in turn are used by IDXREF. You can work around this, however, 
by making use of TRUSTED_REGION and set it to e.g. 0.7 or 0.6 (you can use 
adxv to translate resolution into pixel and then calculate the fraction you 
need to set the second number in TRUSTED_REGION to (or the first if you want 
to exclude the inner resolution reflections - I remember one data set where 
this was essential for indexing - DNA was involved there)

Best,
Tim

On 03/19/2013 08:53 PM, Niu Tou wrote:
 Dear All,
 
 Is there any command that can set the resolution limit for the indexing step in 
 XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks to 
 be a definition of the resolution range applied after the indexing step, as it
 says:
 
 INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
 DEFPIX,INTEGRATE,CORRECT
 
 Thanks! Niu
 

- --
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFRSOFJUxlJ7aRr7hoRAo6TAKC+BePgeODbDyngO7N8vCE4CnjxmQCfS5cP
srShHNz1sDK0EMHSbE3fDwA=
=kAwf
-END PGP SIGNATURE-


Re: [ccp4bb] Resolution limit of index in XDS

2013-03-21 Thread Kay Diederichs
On Thu, 21 Mar 2013 08:28:27 +, herman.schreu...@sanofi.com wrote:

Dear Tim,

It could be that COLSPOT does not rely on experimental setup parameters. 
However, XDS must have reasonably close starting values for distance, direct 
beam position etc., otherwise the autoindexing would fail, so the information 
to calculate an approximate TRUSTED_REGION is available.

TRUSTED_REGION and INCLUDE_RESOLUTION_RANGE are orthogonal concepts; both are 
input by the user and not calculated by the program.


For good data, a limited spot range usually works as well. However, for the 
weakly diffracting bad crystals with ice rings, salt spots, multiple 
diffraction patterns etc., one often needs the full range and often needs 
several tries with different parameters before indexing is successful. Since 
it is only cpu-time, it is the least of my worries and, as you mention, it is 
not bad to be forced to think once in a while instead of just clicking buttons 
in GUIs.

Nevertheless I plan to release a GUI for xds soon; among other things, it will 
make it possible to visualize and change TRUSTED_REGION and INCLUDE_RESOLUTION_RANGE.

best,

Kay


Best regards,
Herman



-Original Message-
From: Tim Gruene [mailto:t...@shelx.uni-ac.gwdg.de] 
Sent: Wednesday, March 20, 2013 11:17 PM
To: Schreuder, Herman RD/DE
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution limit of index in XDS

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Herman,

the short answer might be that at the stage of COLSPOT the term 'resolution' 
has a limited meaning because COLSPOT does not rely on the experimental setup 
like distance and beam direction, so the term 'resolution limit' is 
conceptually not applicable at this stage.

Indexing often does not require the full data set; you can reduce the 
SPOT_RANGE if you are worried about processing time, or use a multi-CPU 
machine.

One of the great advantages of XDS is that it asks you to think at a level 
higher than the average MS-Windows user while processing your data, so the 
effort to figure out the three numbers to set the TRUSTED_REGION is in line 
with the philosophy of XDS as I understand it.

But you are right, I do not have access to the source of XDS and I am not the 
person to address a request to.

Kind regards,
Tim

On 03/20/2013 10:29 AM, herman.schreu...@sanofi.com wrote:
 Dear Tim, but probably I should address this to Kay Diederichs,
 
 not including the resolution cutoff in COLSPOT and IDXREF is a feature 
 of XDS I do not understand at all. For most cases, it may not matter 
 since only the strong spots are used, but what are the advantages?
 
 In fact there are disadvantages, especially when dealing with poorly 
 diffracting difficult data sets: -when a crystallographer imposes a 
 resolution limit, there are usually good reasons for it.
 -outside the resolution limit, there may be ice rings or contaminating 
 salt spots, which make the autoindexing fail. -when processing 900 
 frame Pilatus data sets, running COLSPOT on the complete detector 
 surface takes significantly longer than running it only on the center 
 region.
 
 Of course, one could fudge a resolution cutoff by translating 
 resolution into pixels and then calculating a TRUSTED_REGION, or 
 manually editing the SPOT.XDS file, but this is a lot of extra and in 
 my view unnecessary work.
 
 I would really consider using the resolution cutoff for COLSPOT as 
 well.
 
 Best, Herman
 
 
 -Original Message- From: CCP4 bulletin board 
 [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tim Gruene Sent:
 Tuesday, March 19, 2013 11:06 PM To: CCP4BB@JISCMAIL.AC.UK Subject:
 Re: [ccp4bb] Resolution limit of index in XDS
 
 Dear Niu,
 
 indexing relies on strong reflections only; that is (very briefly)
 why INCLUDE_RESOLUTION_RANGE indeed does not affect the 
 reflections collected in COLSPOT, which in turn are used by IDXREF.
 You can work around this, however, by making use of TRUSTED_REGION and 
 set it to e.g. 0.7 or 0.6 (you can use adxv to translate resolution 
 into pixel and then calculate the fraction you need to set the second 
 number in TRUSTED_REGION to (or the first if you want to exclude the 
 inner resolution reflections - I remember one data set where this was 
 essential for indexing - DNA was involved
 there)
 
 Best, Tim
 
 On 03/19/2013 08:53 PM, Niu Tou wrote:
 Dear All,
 
 Is there any command that can set the resolution limit for the indexing step in 
 XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks to 
 be a definition of the resolution range applied after the indexing step, as it says:
 
 INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
 DEFPIX,INTEGRATE,CORRECT
 
 Thanks! Niu
 
 
 

- --
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFRSjVtUxlJ7aRr7hoRApZhAJ9RFBs8D9NGjgLY3KOoNHhNtdOWggCgj7U0

Re: [ccp4bb] Resolution limit of index in XDS

2013-03-21 Thread Herman . Schreuder
I was a little provocative. A GUI with a viewer would actually be an excellent 
idea, since it allows one to see what one is doing, which would be of great help 
for difficult data sets. Nevertheless, since XDS is part of many automated 
pipelines, the possibility to run XDS offline with a command file should not be 
touched. If IDXREF took the INCLUDE_RESOLUTION_RANGE into account, I am 
sure this would improve the performance of XDS.

Best regards and thank you for the work you put into XDS!
Herman

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Kay 
Diederichs
Sent: Thursday, March 21, 2013 10:02 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution limit of index in XDS

On Thu, 21 Mar 2013 08:28:27 +, herman.schreu...@sanofi.com wrote:

Dear Tim,

It could be that COLSPOT does not rely on experimental setup parameters. 
However, XDS must have reasonably close starting values for distance, direct 
beam position etc., otherwise the autoindexing would fail, so the information 
to calculate an approximate TRUSTED_REGION is available.

TRUSTED_REGION and INCLUDE_RESOLUTION_RANGE are orthogonal concepts; both are 
input by the user and not calculated by the program.


For good data, a limited spot range usually works as well. However, for the 
weakly diffracting bad crystals with ice rings, salt spots, multiple 
diffraction patterns etc., one often needs the full range and often needs 
several tries with different parameters before indexing is successful. Since 
it is only cpu-time, it is the least of my worries and, as you mention, it is 
not bad to be forced to think once in a while instead of just clicking buttons 
in GUIs.

Nevertheless I plan to release a GUI for xds soon; among other things, it will 
make it possible to visualize and change TRUSTED_REGION and INCLUDE_RESOLUTION_RANGE.

best,

Kay


Best regards,
Herman



-Original Message-
From: Tim Gruene [mailto:t...@shelx.uni-ac.gwdg.de]
Sent: Wednesday, March 20, 2013 11:17 PM
To: Schreuder, Herman RD/DE
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution limit of index in XDS

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Herman,

the short answer might be that at the stage of COLSPOT the term 'resolution' 
has a limited meaning because COLSPOT does not rely on the experimental setup 
like distance and beam direction, so the term 'resolution limit' is 
conceptually not applicable at this stage.

Indexing often does not require the full data set; you can reduce the 
SPOT_RANGE if you are worried about processing time, or use a multi-CPU 
machine.

One of the great advantages of XDS is that it asks you to think at a level 
higher than the average MS-Windows user while processing your data, so the 
effort to figure out the three numbers to set the TRUSTED_REGION is in line 
with the philosophy of XDS as I understand it.

But you are right, I do not have access to the source of XDS and I am not the 
person to address a request to.

Kind regards,
Tim

On 03/20/2013 10:29 AM, herman.schreu...@sanofi.com wrote:
 Dear Tim, but probably I should address this to Kay Diederichs,
 
 not including the resolution cutoff in COLSPOT and IDXREF is a 
 feature of XDS I do not understand at all. For most cases, it may not 
 matter since only the strong spots are used, but what are the advantages?
 
 In fact there are disadvantages, especially when dealing with poorly 
 diffracting difficult data sets: -when a crystallographer imposes a 
 resolution limit, there are usually good reasons for it.
 -outside the resolution limit, there may be ice rings or 
 contaminating salt spots, which make the autoindexing fail. -when 
 processing 900 frame Pilatus data sets, running COLSPOT on the 
 complete detector surface takes significantly longer than running it 
 only on the center region.
 
 Of course, one could fudge a resolution cutoff by translating 
 resolution into pixels and then calculating a TRUSTED_REGION, or 
 manually editing the SPOT.XDS file, but this is a lot of extra and in 
 my view unnecessary work.
 
 I would really consider using the resolution cutoff for COLSPOT as 
 well.
 
 Best, Herman
 
 
 -Original Message- From: CCP4 bulletin board 
 [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tim Gruene Sent:
 Tuesday, March 19, 2013 11:06 PM To: CCP4BB@JISCMAIL.AC.UK Subject:
 Re: [ccp4bb] Resolution limit of index in XDS
 
 Dear Niu,
 
 indexing relies on strong reflections only; that is (very briefly)
 why INCLUDE_RESOLUTION_RANGE indeed does not affect the 
 reflections collected in COLSPOT, which in turn are used by IDXREF.
 You can work around this, however, by making use of TRUSTED_REGION 
 and set it to e.g. 0.7 or 0.6 (you can use adxv to translate 
 resolution into pixel and then calculate the fraction you need to set 
 the second number in TRUSTED_REGION to (or the first if you want to 
 exclude the inner resolution reflections - I remember one data set 
 where

Re: [ccp4bb] Resolution limit of index in XDS

2013-03-20 Thread vellieux

Hello,

The way I do it is by manually editing the SPOT.XDS file (generated by 
the COLSPOT step). Spots are arranged in order of decreasing intensity 
in that file. So if you go down the file, select an appropriate 
intensity cutoff and then remove all spots below that value, this has 
the effect of applying a resolution cutoff (think of the plot of 
I vs. resolution), but you won't know which resolution it corresponds to 
unless you do a careful analysis of the resulting SPOT.XDS file.
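Fred's manual edit can be sketched in a few lines of Python (an editor's illustration, not part of XDS; it assumes the usual SPOT.XDS layout in which the first four columns are X, Y, frame and intensity, and the file names are only examples):

```python
def filter_spots(infile="SPOT.XDS", outfile="SPOT_filtered.XDS", i_min=100.0):
    """Keep only spots whose intensity (assumed 4th column) is >= i_min."""
    kept = []
    with open(infile) as fh:
        for line in fh:
            fields = line.split()
            # Skip malformed/short lines; otherwise compare the intensity column.
            if len(fields) >= 4 and float(fields[3]) >= i_min:
                kept.append(line)
    with open(outfile, "w") as fh:
        fh.writelines(kept)
    return len(kept)
```

Renaming the filtered file back to SPOT.XDS before running IDXREF would mimic the manual procedure; as noted above, the chosen intensity cutoff corresponds to some effective resolution cutoff that you only learn by analysing the result.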


HTH,

Fred.

On 19/03/13 20:53, Niu Tou wrote:

Dear All,

Is there any command that can set the resolution limit for the indexing step in 
XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks to 
be a definition of the resolution range applied after the indexing step

as it says:

INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
DEFPIX,INTEGRATE,CORRECT


Thanks!
Niu



--
Fred. Vellieux (B.Sc., Ph.D., hdr)
ouvrier de la recherche
IBS / ELMA
41 rue Jules Horowitz
F-38027 Grenoble Cedex 01
Tel: +33 438789605
Fax: +33 438785494


Re: [ccp4bb] Resolution limit of index in XDS

2013-03-20 Thread Herman . Schreuder
Dear Tim, but probably I should address this to Kay Diederichs,

not including the resolution cutoff in COLSPOT and IDXREF is a feature of XDS I 
do not understand at all. For most cases, it may not matter since only the 
strong spots are used, but what are the advantages?

In fact there are disadvantages, especially when dealing with poorly 
diffracting difficult data sets:
-when a crystallographer imposes a resolution limit, there are usually good 
reasons for it.
-outside the resolution limit, there may be ice rings or contaminating salt 
spots, which make the autoindexing fail.
-when processing 900 frame Pilatus data sets, running COLSPOT on the complete 
detector surface takes significantly longer than running it only on the center 
region.

Of course, one could fudge a resolution cutoff by translating resolution into 
pixels and then calculating a TRUSTED_REGION, or manually editing the SPOT.XDS 
file, but this is a lot of extra and in my view unnecessary work.

I would really consider using the resolution cutoff for COLSPOT as well.

Best,
Herman
 

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tim Gruene
Sent: Tuesday, March 19, 2013 11:06 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Resolution limit of index in XDS

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Niu,

indexing relies on strong reflections only; that is (very briefly) why 
INCLUDE_RESOLUTION_RANGE indeed does not affect the reflections collected in 
COLSPOT, which in turn are used by IDXREF. You can work around this, however, by 
making use of TRUSTED_REGION and set it to e.g. 0.7 or 0.6 (you can use adxv to 
translate resolution into pixel and then calculate the fraction you need to set 
the second number in TRUSTED_REGION to (or the first if you want to exclude the 
inner resolution reflections - I remember one data set where this was essential 
for indexing - DNA was involved there)

Best,
Tim

On 03/19/2013 08:53 PM, Niu Tou wrote:
 Dear All,
 
 Is there any command that can set the resolution limit for the indexing step in 
 XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks to 
 be a definition of the resolution range applied after the indexing step, as it
 says:
 
 INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
 DEFPIX,INTEGRATE,CORRECT
 
 Thanks! Niu
 

- --
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFRSOFJUxlJ7aRr7hoRAo6TAKC+BePgeODbDyngO7N8vCE4CnjxmQCfS5cP
srShHNz1sDK0EMHSbE3fDwA=
=kAwf
-END PGP SIGNATURE-


Re: [ccp4bb] Resolution limit of index in XDS

2013-03-20 Thread Tim Gruene
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Herman,

the short answer might be that at the stage of COLSPOT the term
'resolution' has a limited meaning because COLSPOT does not rely on
the experimental setup like distance and beam direction, so the term
'resolution limit' is conceptually not applicable at this stage.

Indexing often does not require the full data set; you can reduce the
SPOT_RANGE if you are worried about processing time, or use a
multi-CPU machine.

One of the great advantages of XDS is that it asks you to think at a
level higher than the average MS-Windows user while processing your
data, so the effort to figure out the three numbers to set the
TRUSTED_REGION is in line with the philosophy of XDS as I understand it.

But you are right, I do not have access to the source of XDS and I am
not the person to address a request to.

Kind regards,
Tim

On 03/20/2013 10:29 AM, herman.schreu...@sanofi.com wrote:
 Dear Tim, but probably I should address this to Kay Diederichs,
 
 not including the resolution cutoff in COLSPOT and IDXREF is a
 feature of XDS I do not understand at all. For most cases, it may
 not matter since only the strong spots are used, but what are the
 advantages?
 
 In fact there are disadvantages, especially when dealing with
 poorly diffracting difficult data sets: -when a crystallographer
 imposes a resolution limit, there are usually good reasons for it. 
 -outside the resolution limit, there may be ice rings or
 contaminating salt spots, which make the autoindexing fail. -when
 processing 900 frame Pilatus data sets, running COLSPOT on the
 complete detector surface takes significantly longer than running
 it only on the center region.
 
 Of course, one could fudge a resolution cutoff by translating
 resolution into pixels and then calculating a TRUSTED_REGION, or
 manually editing the SPOT.XDS file, but this is a lot of extra and
 in my view unnecessary work.
 
 I would really consider using the resolution cutoff for COLSPOT as
 well.
 
 Best, Herman
 
 
 -Original Message- From: CCP4 bulletin board
 [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Tim Gruene Sent:
 Tuesday, March 19, 2013 11:06 PM To: CCP4BB@JISCMAIL.AC.UK Subject:
 Re: [ccp4bb] Resolution limit of index in XDS
 
 Dear Niu,
 
 indexing relies on strong reflections only; that is (very briefly)
 why INCLUDE_RESOLUTION_RANGE indeed does not affect the
 reflections collected in COLSPOT, which in turn are used by IDXREF.
 You can work around this, however, by making use of TRUSTED_REGION
 and set it to e.g. 0.7 or 0.6 (you can use adxv to translate
 resolution into pixel and then calculate the fraction you need to
 set the second number in TRUSTED_REGION to (or the first if you
 want to exclude the inner resolution reflections - I remember one
 data set where this was essential for indexing - DNA was involved
 there)
 
 Best, Tim
 
 On 03/19/2013 08:53 PM, Niu Tou wrote:
 Dear All,
 
 Is there any command that can set the resolution limit for the indexing
 step in XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it
 looks to be a definition of the resolution range applied after the
 indexing step, as it says:
 
 INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
 DEFPIX,INTEGRATE,CORRECT
 
 Thanks! Niu
 
 
 

- -- 
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFRSjVtUxlJ7aRr7hoRApZhAJ9RFBs8D9NGjgLY3KOoNHhNtdOWggCgj7U0
zY7jEFDYZfl0Umb9E1Bzs1U=
=+HjR
-END PGP SIGNATURE-


[ccp4bb] Resolution limit of index in XDS

2013-03-19 Thread Niu Tou
Dear All,

Is there any command that can set the resolution limit for the indexing step in
XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks to be a
definition of the resolution range applied after the indexing step,
as it says:

INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by
DEFPIX,INTEGRATE,CORRECT

Thanks!
Niu


Re: [ccp4bb] Resolution limit of index in XDS

2013-03-19 Thread Tim Gruene
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dear Niu,

indexing relies on strong reflections only; that is (very briefly)
why INCLUDE_RESOLUTION_RANGE indeed does not affect the reflections
collected in COLSPOT, which in turn are used by IDXREF. You can work
around this, however, by making use of TRUSTED_REGION and set it to
e.g. 0.7 or 0.6 (you can use adxv to translate resolution into pixel
and then calculate the fraction you need to set the second number in
TRUSTED_REGION to (or the first if you want to exclude the inner
resolution reflections - I remember one data set where this was
essential for indexing - DNA was involved there)

Best,
Tim
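Tim's resolution-to-pixel-fraction conversion can be sketched as follows (an editor's illustration using standard Bragg geometry for a flat detector normal to the beam; the beamline numbers in the example are invented):

```python
import math

def trusted_region_fraction(d_min, wavelength, distance, detector_radius):
    """Fraction of the detector radius at which resolution d_min falls.

    Bragg's law: wavelength = 2 * d * sin(theta); a reflection at resolution
    d_min lands at radius r = distance * tan(2 * theta) on a flat detector
    normal to the beam.  Lengths in mm, wavelength and d_min in Angstrom.
    """
    theta = math.asin(wavelength / (2.0 * d_min))
    r = distance * math.tan(2.0 * theta)
    return min(r / detector_radius, 1.0)

# Invented example: 1.0 A beam, 150 mm crystal-to-detector distance,
# 160 mm from beam centre to the detector edge; cut at 2.0 A:
frac = trusted_region_fraction(d_min=2.0, wavelength=1.0,
                               distance=150.0, detector_radius=160.0)
# frac ~ 0.52 here, so TRUSTED_REGION= 0.0 0.52 would exclude spots beyond ~2 A
```

This is the arithmetic one would otherwise do by hand with adxv, as described in the message above.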

On 03/19/2013 08:53 PM, Niu Tou wrote:
 Dear All,
 
 Is there any command that can set the resolution limit for the indexing step in
 XDS? I only found the keyword INCLUDE_RESOLUTION_RANGE, but it looks
 to be a definition of the resolution range applied after the indexing step,
 as it says:
 
 INCLUDE_RESOLUTION_RANGE=20.0 0.0 !Angstroem; used by 
 DEFPIX,INTEGRATE,CORRECT
 
 Thanks! Niu
 

- -- 
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFRSOFJUxlJ7aRr7hoRAo6TAKC+BePgeODbDyngO7N8vCE4CnjxmQCfS5cP
srShHNz1sDK0EMHSbE3fDwA=
=kAwf
-END PGP SIGNATURE-


Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-17 Thread Colin Nave
One issue is whether the extra data for the 80% solvent volume consist of 
independent measurements. The references below suggest that the required 
oversampling of intensities is already achieved with a 50% solvent volume.

J. Miao, D. Sayre, and H. N. Chapman, Phase retrieval from the magnitude of 
the Fourier transforms of nonperiodic objects, J. Opt. Soc. Am. A 15, 
1662-1669 (1998)
Or
Q. Shen, I. Bazarov and P. Thibault, Diffractive imaging of nonperiodic 
materials with future coherent X-ray sources, J. Synchrotron Rad. (2004). 
11, 432-438

Of course the above assumes everything is ideal.

Colin

From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Guangyu 
Zhu
Sent: 15 March 2013 00:28
To: ccp4bb
Subject: [ccp4bb] Resolution and data/parameter ratio, which one is more 
important?

I have this question. For example, a protein could be crystallized in two 
crystal forms. Both crystal forms have the same space group and 1 molecule/asymm. 
unit. One crystal form diffracts to 3 A with 50% solvent, and the other diffracts 
to 3.6 A with 80% solvent. The cell volume of the 3.6 A crystal must be 5/2 = 2.5 
times larger because of the higher solvent content. If both data sets are 
collected to the same completeness (say 100%), the 3.6 A data actually have a 
higher data/parameter ratio: (5/2)/(3.6/3)**3 = 1.45 times that of the 3 A data. 
For refinement, a better data/parameter ratio should give a more accurate 
structure, i.e. the 3.6 A data are better. But higher resolution should give a 
better-resolved electron density map. So which crystal form really gives a 
better (more reliable and accurate) protein structure?
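The arithmetic in this question can be checked directly (an editor's sketch that only restates the numbers above: solvent fractions of 0.50 and 0.80, one molecule per asymmetric unit in both forms):

```python
# The same protein occupies both cells, so cell volume scales as
# V = V_protein / (1 - solvent_fraction).
v_ratio = (1 - 0.50) / (1 - 0.80)        # 2.5: the 80%-solvent cell is 2.5x larger

# The number of unique reflections to resolution d scales as V / d**3,
# while the number of model parameters is the same in both forms.
data_ratio = v_ratio / (3.6 / 3.0) ** 3  # ~1.45x more data per parameter at 3.6 A

print(f"cell volume ratio: {v_ratio:.2f}, data/parameter ratio: {data_ratio:.2f}")
```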



Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-17 Thread Jrh
Dear Dr Zhu,
I hope the following might make things easier to grasp. The 3.0 Angstrom 
diffraction resolution is basically required to resolve a protein polypeptide 
chain whether your protein is in an 80% solvent content unit cell or a 50% 
solvent content unit cell. You will have more observations in the former than 
in the latter to reach that goal. In the days of a single counter four circle 
diffractometer that was a major overhead. As has been alluded to in 
considerable detail in other replies solvent flattening does however give phase 
determination benefits and 80% is better than 50%.
Yours sincerely,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 



Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-16 Thread dusan turk
Dear Guangyu Zhu,

if this is not a hypothetical case, you can refine both structures in each 
crystal form separately using whatever software and compare them afterwards.
The structure can also be refined in both crystal forms simultaneously using 
the multi-crystal NCS refinement implemented in MAIN (http://www-bmb.ijs.si/), 
thereby doubling the data-to-parameter ratio compared with refinement against 
a single crystal form's data set.

best regards,
dusan





Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-16 Thread James Holton


Well, when it comes to observations/parameters, it is important to 
remember that not all observations are created equal.  10,000 
observations with I/sigma = 1 are definitely not as desirable as 
10,000 observations with I/sigma = 10.  Not all parameters are created 
equal either.  Yes, you may have 3,000 atoms, but there are bonds 
between them and that means you don't REALLY have 3,000*3 degrees of 
freedom.  How many parameters are removed by each bond, of course, 
depends on how tight your geometry weight is.  So, although the 
observations/parameters rule of thumb is useful for knowing roughly just 
how much you are asking of your fitting program, it is a qualitative 
assessment only.  Not quantitative.


It is instructive to consider the 1-dimensional case.  Say you have 100 
data points, evenly spaced, and you are fitting a curve to them.  If you 
fit a 100th-order polynomial to these points, then you can always get 
your curve to pass straight through every single point.  But do you want 
to do that?  What if the error bars are huge and you can see that the 
points follow a much smoother curve?  In that case, you definitely want 
to reduce the number of parameters so that you are not 
over-fitting.  But how can you tell if your simplified model is 
plausible?  Well, one way to do it is to leave out some of the observations 
from the fit and see if a curve fit to the remaining ones predicts those 
points reasonably well.  This is called a cross check (aka Rfree).
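The curve-fitting analogy can be played with directly. The sketch below assumes numpy is available; the noise level, polynomial degrees, and 10% hold-out fraction are arbitrary illustrative choices, not anything from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)  # smooth truth + noise

free = np.zeros(x.size, dtype=bool)
free[::10] = True  # hold out every 10th point as the cross-check ("Rfree") set

def work_and_free_rms(degree):
    # Fit only the working set; report RMS misfit on both sets.
    coeffs = np.polyfit(x[~free], y[~free], degree)
    rms = lambda m: float(np.sqrt(np.mean((np.polyval(coeffs, x[m]) - y[m]) ** 2)))
    return rms(~free), rms(free)

r_work_lo, r_free_lo = work_and_free_rms(5)    # modest model
r_work_hi, r_free_hi = work_and_free_rms(15)   # more parameters
print(r_work_lo, r_free_lo, r_work_hi, r_free_hi)
```

Adding parameters always lowers the working-set misfit; it is the held-out misfit that tells you whether the extra flexibility captured real signal or just the noise.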


The equivalent of resolution in this case is the scale on the x-axis.  
Yes, a scale factor.  100 points along the x-axis with a unit cell 
length of 200 is equivalent to 2A data, but if you change the unit 
cell to 300 A and you still have 100 samples, then that is equivalent 
to 3A data with the same number of observations.  But wait!  Shouldn't 
2A data always be better than 3A data if everything else is equal?  
Well, the difference comes from knowing the scale of what you are 
trying to measure. If you're trying to assign an atom-atom distance to 
either a hydrogen bond or a van der Waals bump, then you need to tell 
~2.0 A from ~3.5 A, so now suddenly the scale of the x-axis (200A vs 300 
A) matters.  The problem becomes one of propagating the error bars of 
the data (on the y axis) into error bars on the parameters of the fit 
(on the x-axis).  For my 1-D case of 100 points, this is equivalent to 
knowing how smooth your fitted function should be.  Yes, having more 
data points can be better, but it is the size of your error bars 
relative to what you are trying to measure that is actually important.


So, one way of thinking about the difference between 3A data with 50% 
solvent vs 3.6A data with 80% solvent is to scale the 3.0A unit cell 
so that it has the same volume as the 3.6A crystal's unit cell.  Then 
you effectively have two structures with different 
observations/parameters and the SAME resolution (because you have 
changed the scale of space for one of them).  The re-scaling is 
mathematically equivalent to stretching the 3.0 A electron density map, 
so all you have done is inflate the protein so that the atoms are now 
farther apart.  Does this make them easier to distinguish?  No, because 
although they are farther apart, they are also fatter.  Stretching the 
map changes both the peak widths and the distance between them 
proportionally.  The width of atomic peaks is actually very closely 
related to the resolution (especially at ~3A), so after stretching the 
map, the peak widths in both the 3A/50% and 3.6A/80% cases will be about 
the same, but the distances between the peaks will be larger in the map 
that came from the 3A/50% data.  So, relatively speaking 
(delta-bond/bond_length), you still have more accuracy with the 3A 
case, no matter what the observations/parameters ratio is.


Of course, all this is assuming that all you are interested in is bond 
lengths.  If you happen to know your bond lengths already (such as 
covalent bonds) then that changes the relationship between the errors in 
your data and the parameter you are trying to measure.  To put things 
another way, a 60 nm resolution 3D reconstruction of a 5-micron wide 
cell represents about as many observations as a 1.2A crystal structure 
of a 100 A unit cell. Which is more accurate?  Depends on the question 
you are trying to ask.
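The tomography comparison above is just a count of resolution elements across the object, which is easy to check (a sketch using the illustrative numbers from the paragraph):

```python
# Resolution elements across the object = object size / resolution.
cell_nm, em_res_nm = 5000.0, 60.0    # 5-micron cell imaged at 60 nm
cell_A, xray_res_A = 100.0, 1.2      # 100 A unit cell at 1.2 A resolution

elements_em = cell_nm / em_res_nm    # ~83 elements per direction
elements_xray = cell_A / xray_res_A  # ~83 elements per direction as well

print(elements_em, elements_xray)
```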


So, to answer the OP's question, I still think 3.0A is better than 
3.6A.  Yes, higher solvent content gives you better phases, and phase 
accuracy IS important for placing atoms (30 degrees of phase error at 3 
A means that the spatial waves at that resolution are off by an 
average of ~0.25 A).  But, that advantage is really only in the initial 
stages of phasing, and it fades as soon as the experimental phases start 
holding your refinement back more than they help (which actually happens 
rather quickly).  Remember, the 'bulk solvent' model, as far as the 
phases are concerned, is really just another kind of solvent 
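The "~0.25 A" figure quoted above follows from converting a phase error into a shift of the corresponding density wave; a sketch of that back-of-envelope step:

```python
# A Fourier component at resolution d has period d, so a phase error of
# delta_phi degrees slides that wave by d * delta_phi / 360.
d = 3.0            # resolution of the component, in Angstrom
delta_phi = 30.0   # phase error, in degrees
shift = d * delta_phi / 360.0
print(shift)       # 0.25 A
```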

Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-15 Thread Herman . Schreuder
Dear Guangyu,

80% solvent is an awful lot. The first thing I would do is to check that there 
is not another protein molecule hiding somewhere in the asymmetric unit. What I 
usually do in these cases is to set a very large map radius (say 40-60 Å) and 
look at the complete solvent region to see if there are regions which are 
significantly more noisy. I would also check that the crystal packing makes 
sense, e.g. continuous contacts in all three dimensions and no layers without 
any protein contacts.

Of course, the best results are obtained by building and refining both crystal 
forms and cross-checking the results from one crystal form in the other.

Having said that, I have had some amazingly clear and unbiased electron density 
maps from low-resolution, high-solvent crystals. They were molecular replacement 
structures, but due to the very strong solvent flattening effect, the phases 
looked like experimental ones. If your low-resolution structure genuinely has 
80% solvent, I would be tempted to start building in that map.

Best,
Herman





Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-15 Thread Ian Tickle
Hi Guangyu,

I think it's not as straightforward as comparing d/p ratios; that is only
one of several factors that influence precision.  Another important factor
would be the overall level of thermal motion and disorder, which will most
likely be significantly higher in the 3.6A 80% case; after all, that's
probably the reason it only diffracts to 3.6A!

All things considered I would go for the 3A form.

Cheers

-- Ian





Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-15 Thread Guangyu Zhu
Ian,

Because it is the same protein, the high thermal motion is likely caused by crystal 
packing and should be corrected by TLS refinement. The B factors left over should be 
similar.

Anyway, this is just a hypothetical question. I tried to keep other things the same 
and compare just resolution and d/p. But you can still find a difference. So if the 
80% crystal diffracts to 3.0A too, then its d/p ratio is much higher than that of the 
3.0A 50% crystal, and it must give a more accurate refinement. What if the 80% crystal 
diffracts to 3.1, 3.2, or 3.3A? Or, to rephrase the question: could the d/p ratio 
compensate for some resolution?

Thanks!
Guangyu




Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-15 Thread Jacob Keller
Well, wouldn't NCS be a parallel situation? I have heard, for example, that
the maps of viruses are considerably better at a given resolution than those
of monomeric proteins. So I would guess that someone has looked at this topic
in the case of NCS. Maybe high solvent content would be equivalent to
filling the solvent holes with equivalent proteins (assuming, unrealistically,
that the crystals diffract to the same resolution), since the parameter ratio
would be the same?

JPK






-- 
***

Jacob Pearson Keller, PhD

Looger Lab/HHMI Janelia Farms Research Campus

19700 Helix Dr, Ashburn, VA 20147

email: kell...@janelia.hhmi.org

***


Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-15 Thread Pete Meyer

Guangyu,
If I'm understanding your question correctly, you're asking: if all other 
things are equal (resolution, degree of disorder, etc.), does improving 
the data/parameter ratio result in an improved model?


The short answer is: (at least sometimes) yes.

Pete





[ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-14 Thread Guangyu Zhu
I have this question. For example, a protein could be crystallized in two 
crystal forms. The two crystal forms have the same space group and 1 molecule/asymm. 
One crystal form diffracts to 3A with 50% solvent; the other diffracts to 
3.6A with 80% solvent. The cell volume of the 3.6A crystal must be 5/2 = 2.5 times 
larger because of the higher solvent content. If both data sets are collected to the same 
completeness (say 100%), the 3.6A data actually have a higher data/parameter ratio, 
(5/2)/(3.6/3)**3 = 1.45 times that of the 3A data. For refinement, a better data/parameter 
ratio should give a more accurate structure, i.e. the 3.6A data are better. But higher 
resolution should give a better-resolved electron density map. So which crystal 
form really gives a better (more reliable and accurate) protein structure?


Re: [ccp4bb] resolution limit

2012-07-19 Thread Kay Diederichs
Hi Narayan,

there's nothing wrong with using data with I/sigmaI 2.5 and Rsym 224.3 % at a 
multiplicity of 7.8 and a completeness of 98.2 %. 

However, when you discarded frames you might have made the data worse: one 
should only reject data if they deviate systematically (e.g. from radiation 
damage). Weak data should not be rejected, and Rmerge is the wrong measure for 
judging data quality.

best,

Kay




[ccp4bb] resolution limit

2012-07-18 Thread narayan viswam
Hello CCP4ers,

 In my data, the highest resolution shell (2.8-3.0 A) has I/sigmaI 2.5 and Rsym
224.3 % at a multiplicity of 7.8 and a completeness of 98.2 %. I solved the
structure by MAD and refined it to an Rfree of 27.3 %. The crystal belongs to the
P622 space group and it is not twinned. The water content is 68%. I lowered the
multiplicity to 4.1 by excluding a few images, but the Rsym is still over 200 %
with I/sigmaI above 2.0. My rudimentary crystallography knowledge makes me
believe it's quite OK to use data up to 2.8 A and report the statistics.
Could I request people's views? Thanks very much.
Narayan


Re: [ccp4bb] resolution limit

2012-07-18 Thread Ian Tickle
Hi Narayan

My only comment would be that P622 is a fairly uncommon space group
(currently 43 PDB entries excl homologs), but obviously that doesn't
mean it's wrong - just worth double-checking!  Just out of interest
what's the CC(1/2) statistic for your highest shell?

Personally I specify more bins than the default (e.g. 20 instead of
10) so that the highest resolution bin is somewhat narrower than
yours.  I would also prefer the binning to be done in equal steps of
d*^3 rather than d*^2 as many programs do, since this gives a more even
spread of reflections across the bins and an even narrower binning
at the high-resolution end (though it does tend to lump all the
low-resolution data into one bin!).
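The binning scheme described above is easy to prototype. The sketch below uses arbitrary 2.8-50 A limits and 20 bins; it builds shell edges with equal steps in d*^n and shows that n = 3 gives a narrower outermost shell (and a fatter lowest-resolution bin) than the common n = 2 scheme.

```python
def shell_edges(d_min, d_max, n_bins, power):
    """Resolution shell edges (in A) with equal steps in d* ** power."""
    lo, hi = (1.0 / d_max) ** power, (1.0 / d_min) ** power
    step = (hi - lo) / n_bins
    # Edge i runs from the low-resolution limit (i=0) to the high (i=n_bins).
    return [(lo + i * step) ** (-1.0 / power) for i in range(n_bins + 1)]

edges_d2 = shell_edges(2.8, 50.0, 20, 2)   # equal d*^2 steps (common default)
edges_d3 = shell_edges(2.8, 50.0, 20, 3)   # equal d*^3 steps (equal volume)

# Width of the outermost (highest-resolution) shell under each scheme:
print(edges_d2[-2] - edges_d2[-1], edges_d3[-2] - edges_d3[-1])
```

Equal steps in d*^3 correspond to equal reciprocal-space volume per shell, which is why the reflection counts come out roughly even.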

Cheers

-- Ian



Re: [ccp4bb] resolution limit

2012-07-18 Thread Edward A. Berry



After refinement, what is Rfree in the last shell? If it is significantly 
better than random, say around 0.4 or less, that could be taken as evidence
that there is data in the last shell.
Also check the error model: an Rsym above 2 sort of implies the error is
greater than the signal, so I/sigI above 2 seems surprising.
eab


Re: [ccp4bb] resolution limit

2012-07-18 Thread Edwin Pozharski


As has been shown recently (and discussed on this board), Rsym is not the
best measure of data quality (if it is a measure at all):

http://www.sciencemag.org/content/336/6084/1030.abstract




--

Edwin Pozharski, PhD
University of Maryland, Baltimore


Re: [ccp4bb] resolution limit

2012-07-18 Thread Edwin Pozharski


http://www.ysbl.york.ac.uk/ccp4bb/2001/msg00383.html



 Rsym...what's that?

 JPK


Re: [ccp4bb] resolution limit

2012-07-18 Thread Jacob Keller
I was [too] obliquely alluding to this thread...

http://www.mail-archive.com/ccp4bb@jiscmail.ac.uk/msg27056.html

JPK







-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread Jan Dohnalek
We indeed used the US portal for deposition, which may be the difference.
Nevertheless, the recently reported resolution values etc. also propagate
to the PDBe portal.

Jan


On Wed, Apr 25, 2012 at 10:10 AM, Mark J van Raaij
mjvanra...@cnb.csic.eswrote:

 Phoebe, Jan, PDB,
 is this something particular to the US portal of the PDB, or general?
 We always use the European portal pdbe and have not had such problems.
 Mark
 Mark J van Raaij
 Laboratorio M-4
 Dpto de Estructura de Macromoleculas
 Centro Nacional de Biotecnologia - CSIC
 c/Darwin 3
 E-28049 Madrid, Spain
 tel. (+34) 91 585 4616
 http://www.cnb.csic.es/~mjvanraaij



 On 25 Apr 2012, at 09:41, Jan Dohnalek wrote:

  There have been other manipulations with user-input values. We could not
 input solvent content 83% for 3cg8 (the real value!!!) as being out of the
 allowed range.
  The resulting value in the PDB is NULL not showing the actually
 interesting feature of the structure.
 
   I also noticed that the reported resolution values are nonsensically
  advertised with three decimal places after the point, which is not the
  way we would put it, is it?
 
  Either fight it or live with it ...
 
  Jan Dohnalek
 
 
 
 
  On Wed, Apr 25, 2012 at 12:23 AM, Phoebe Rice pr...@uchicago.edu
 wrote:
  I just noticed that the PDB has changed the stated resolution for one of
 my old structures!  It was refined against a very anisotropic data set that
 extended to 2.2 in the best direction only.  When depositing I called the
 resolution 2.5 as a rough average of resolution in all 3 directions, but
 now PDB is advertising it as 2.2, which is misleading.
 
  I'm afraid I may not have paid enough attention to the fine print on
 this issue - is the PDB now automatically advertising the resolution of a
 structure as that of the outermost flyspeck used in refinement, regardless
 of more cautious assertions by the authors?  If so, I object!
 
  =
  Phoebe A. Rice
  Dept. of Biochemistry  Molecular Biology
  The University of Chicago
  phone 773 834 1723
 
 http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
  http://www.rsc.org/shop/books/2008/9780854042722.asp
 
 
 
  --
  Jan Dohnalek, Ph.D
  Institute of Macromolecular Chemistry
  Academy of Sciences of the Czech Republic
  Heyrovskeho nam. 2
  16206 Praha 6
  Czech Republic
 
  Tel: +420 296 809 340
  Fax: +420 296 809 410






Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread Edward A. Berry

We also use the US portal. Can't speak to the solvent content as we never
had a value much over 70%.

As for the resolution range, I never saw any place to enter this
user-defined resolution of the structure.
As far as I know it comes from the record:
REMARK 200  RESOLUTION RANGE HIGH  (A) : 1.200
which should be the high resolution used in refinement.

I suppose in an additional remark you could give the
optical resolution, or the resolution at which the data are 90% complete at I/sig=2.
Or the title could be "The 2.2 A resolution structure of protein X",
never mind that there were a few reflections used at 1.7 A.
eab

Mark J van Raaij wrote:

 Phoebe, Jan, PDB,
 is this something particular to the US portal of the PDB, or general?
 We always use the European portal pdbe and have not had such problems.
 Mark

 [signature and earlier quoted messages trimmed]




Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread Mike Sleutel
Curious that 83% solvent content would be out of range; a quick search in
the PDB indicates that there are 43 entries with solvent content above 85% ...

On 25 April 2012 09:41, Jan Dohnalek dohnalek...@gmail.com wrote:

 [quoted message trimmed]

 --


 Vrije Universiteit Brussel (VUB)
 Vlaams Interuniversitair Instituut voor Biotechnologie (VIB)
 Instituut Moleculaire Biologie & Biotechnologie (IMOL)
 Ultrastructuur (ULTR)
 Oefenplein, Gebouw E (4.16)
 Pleinlaan 2, 1050 Brussel
 e-mail: msleu...@vub.ac.be
 Tel:  ++32-(0)2-629-1923
 Fax: ++32-(0)2-629-1963






Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread Miller, Mitchell D.
I too believe that the value is set from the
high resolution limit from data collection or refinement.
All three numbers (the high resolution limit in REMARK 2, REMARK 3
and REMARK 200) are supposed to be consistent and are
defined as the highest resolution reflection used.
http://mmcif.rcsb.org/dictionaries/mmcif_pdbx_v40.dic/Items/_reflns.d_resolution_high.html
http://mmcif.rcsb.org/dictionaries/mmcif_pdbx_v40.dic/Items/_refine.ls_d_res_high.html

 Looking at the PDB specification, there is an option
to add a free-text comment to the REMARK 2 resolution:
"Additional explanatory text may be included starting with the third line of
the REMARK 2 record. For example, depositors may wish to qualify the resolution
value provided due to unusual experimental conditions."
http://www.wwpdb.org/documentation/format33/remarks1.html

  We have not done this, but we have in a number of cases 
qualified the resolution by using a lower resolution in the 
title of the entry and further detailing this nominal
resolution in the remark 3 other refinement remarks. E.g. see
http://www.rcsb.org/pdb/explore/explore.do?structureId=1vr0 
http://www.rcsb.org/pdb/explore/explore.do?structureId=1vkk 

Regards,
Mitch
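
Since the thread turns on REMARK 2 and REMARK 200 disagreeing, a minimal
standard-library sketch (hypothetical helper; record layouts per the wwPDB
format guide) can pull both values from a coordinate file so they can be
compared:

```python
import re

# Toy excerpt illustrating the discrepancy discussed in this thread:
# REMARK 2 advertises one resolution, REMARK 200 another.
SAMPLE = """\
REMARK   2 RESOLUTION.    2.20 ANGSTROMS.
REMARK 200  RESOLUTION RANGE HIGH      (A) : 1.200
REMARK 200  RESOLUTION RANGE LOW       (A) : 30.000
"""

def remark_resolutions(pdb_text):
    """Return the high-resolution limits reported in REMARK 2 and REMARK 200."""
    res = {}
    for line in pdb_text.splitlines():
        m = re.match(r"REMARK\s+2\s+RESOLUTION\.\s+([0-9.]+)\s+ANGSTROMS", line)
        if m:
            res["REMARK 2"] = float(m.group(1))
        m = re.match(r"REMARK\s+200\s+RESOLUTION RANGE HIGH\s+\(A\)\s*:\s*([0-9.]+)", line)
        if m:
            res["REMARK 200"] = float(m.group(1))
    return res

values = remark_resolutions(SAMPLE)
if len(set(values.values())) > 1:
    print("resolution records disagree:", values)
```

On the sample above this reports that REMARK 2 (2.2 A) and REMARK 200
(1.2 A) disagree, which is exactly the situation described for 1ihf.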

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Edward A. 
Berry
Sent: Wednesday, April 25, 2012 6:55 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] resolution on PDB web page

[quoted message trimmed]



Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread Phoebe Rice
What freaked me out is that REMARK 2 seems to have changed over time: I have a
version of 1ihf.pdb (deposited around 1995) that was apparently downloaded in
1998, where REMARK 2 says 2.5, and a version downloaded yesterday where REMARK 2
says 2.2.
The whole thing actually started because the newer file has some odd LINK 
records as well, which I've written to PDB about (my fault, sort of: I 
apparently modeled 2 close but alternate positions of the same Cd++ ion as two 
different ions with low occupancy. PDB has now put a LINK between them, which 
makes no chemical sense).

=
Phoebe A. Rice
Dept. of Biochemistry & Molecular Biology
The University of Chicago
phone 773 834 1723
http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
http://www.rsc.org/shop/books/2008/9780854042722.asp


 Original message 
Date: Wed, 25 Apr 2012 11:08:52 -0400
From: CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK (on behalf of Edward A. 
Berry ber...@upstate.edu)
Subject: Re: [ccp4bb] resolution on PDB web page  
To: CCP4BB@JISCMAIL.AC.UK

My apologies, I guess there is a separate entry for resolution,
and in my depositions it gets filled from the remark 200 records
from CNS xtal_pdb_submission and I never thought to change it.
I guess now the PDB is enforcing the requirement that it
should be the highest resolution used and hence the same
as remark 200 and hence redundant.

I guess if you want to qualify the resolution on line 3 of
remark 2 you need to ask the annotator to do it for you.
(We have some structures that really should be qualified.)

Miller, Mitchell D. wrote:

 [quoted message trimmed]
Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread H. Raaijmakers
That's nothing. Once someone wrote me because the tungsten atom of my
tungsten-containing formate dehydrogenase had disappeared.
Lost in translation during some autoscripted conversion.
It was corrected soon enough. :)

Cheers,
Hans




Jan Dohnalek wrote:

 [quoted messages trimmed]



Re: [ccp4bb] resolution on PDB web page

2012-04-25 Thread Jacob Keller
I had heard that there was a world-wide Tungsten shortage, but this is
ridiculous!

JPK

On Wed, Apr 25, 2012 at 1:29 PM, H. Raaijmakers hraaijmak...@xs4all.nl wrote:

 [quoted messages trimmed]




-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***

