Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Jacob Keller
> Let's say you collect data (or rather indices) to 1.4 Ang but the real
> resolution is 2.8 Ang and you use all the data in refinement with no
> resolution cut-off, so there are 8 times as many data.  Then your 15
> mins becomes 2 hours - is that still acceptable?  It's unlikely that
> you'll see any difference in the results so was all that extra
> computing worth the effort?
>
> Now work out the total number of pixels in one of your datasets (i.e.
> no of pixels per image times no of images).  Divide that by the no of
> reflections in the a.u. and multiply by 15 mins (it's probably in the
> region of 400 days!): still acceptable?  Again it's unlikely you'll
> see any significant difference in the results (assuming you only use
> the Bragg spots), so again was it worth it?
>
> What matters in terms of information content is not the absolute
> intensity but the ratio intensity / (expected intensity).  As the data
> get weaker at higher d* I falls off, but so does <I> and the ratio I /
> <I> becomes progressively more unreliable at determining the
> information content.  So a zero I when the other intensities in the
> same d* shell are strong is indeed a powerful constraint (this I
> suspect is what Wang meant); however, if the other intensities in the
> shell are also all zero it tells you next to nothing.
>
> -- Ian
>

I envisioned a process of iteration through the various stages of
processing, so still using integration, scaling, etc. to reduce data
before refinement, but maybe feeding back model-based information to
inform the processing of the images. Something like, Refmac says to
Mosflm: "kill frames 1100-1200: they're too radiation-damaged." But I
like your idea of using all the pixels--that would be the ultimate,
wouldn't it! Actually, the best would be to have the refinement
already going when collecting data, and informing which frames to
take, and for how long! In a couple years that too will take no time
at all, but then again, we'll probably have atomic-precision real-time
in vivo microscopes by then anyway, and crystallography will have
become an (interesting!) historical curiosity...

JPK

***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Leonid Sazanov
Hi, as we reported in our paper in Table 1 (actually Supplementary Table 1), at 
the end of Scaling 2 the completeness in the outer shell after aniso truncation 
was 54%, whilst the 96% completeness and I/sigma of 0.8 are of course from 
before aniso truncation. I/sigma after truncation would be higher, but it is not 
clear to me how to calculate that number exactly, since aniso truncation is done 
after data scaling. One could of course re-process the images in Mosflm with the 
aniso limits applied and then scale the data, but that would not be exactly the same.

From many trials with strongly anisotropic data we found that for map 
calculation and refinement it is best to cut data anisotropically where F/sigma 
is approaching 2.5-2.7 in each direction, as long as completeness in the outer 
shell remains above 50% or so. Usually the highest useful resolution is also 
where the correlation coefficient between random half-data-set estimates of 
intensities in SCALA falls below about 0.5 (as advocated by Phil Evans, I 
think). CC seems to be less affected by anisotropy (in this case it reached 0.5 
at 3.0 angstrom, which was another criterion to cut data at 3.0).
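
For anyone wanting to reproduce that half-data-set CC check outside SCALA, here is a minimal sketch of the idea (toy data only; all numbers are illustrative, and SCALA/AIMLESS of course do this per resolution shell on real unmerged intensities):

```python
import numpy as np

def half_dataset_cc(obs_by_hkl, rng):
    # Split each reflection's measurements into two random halves and
    # correlate the two sets of half-data-set mean intensities.
    half1, half2 = [], []
    for m in obs_by_hkl:
        m = rng.permutation(m)
        if len(m) < 2:
            continue  # need at least one measurement per half
        half1.append(m[: len(m) // 2].mean())
        half2.append(m[len(m) // 2 :].mean())
    return np.corrcoef(half1, half2)[0, 1]

rng = np.random.default_rng(0)
true_I = rng.exponential(1.0, 500)                      # Wilson-like toy shell
weak = [t + rng.normal(0.0, 2.0, 4) for t in true_I]    # noise-dominated
strong = [t + rng.normal(0.0, 0.2, 4) for t in true_I]  # signal-dominated
print(half_dataset_cc(weak, rng), half_dataset_cc(strong, rng))
```

On the toy data the strong shell gives a CC near 1 and the weak one well below it, which is exactly the behaviour the 0.5 criterion exploits.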

HTH.
Leo




I am a little curious about the anisotropically truncated data for 3RKO:

Percent Possible(All)   96.0
Mean I Over Sigma(Observed) 0.8

In the supplementary table of the nature paper it was made clear that this 
3.16-3.0A, I/sigmaI=0.8 and Rmerge=1.216 shell was the outer shell of the 
anisotropically truncated data. The authors also reported the 
isotropically truncated resolution to be 3.2A with I/sigmaI=1.3 and 
Rmerge=73%.

The authors also stated in the main text that

"the best native data set was anisotropically scaled and truncated to 3.4 Å, 
3.0 Å and 3.0 Å resolution, where the F/σ ratio drops to ~2.6–2.8 along 
the a*, b* and c* axes, respectively (scaling 2, Supplementary Table 1)"

My question is, is the I/sigmaI=0.8 a consequence of many reflections with 
nearly 0 I/sigmaI being included in the calculation? Then what does the 96% 
completeness mean? Does it mean that 96% completeness in the spherical shell 
of 3.16-3.0A was achieved by including a great number of I=0 reflections?


Zhijie


Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Ian Tickle
Let's say you collect data (or rather indices) to 1.4 Ang but the real
resolution is 2.8 Ang and you use all the data in refinement with no
resolution cut-off, so there are 8 times as many data.  Then your 15
mins becomes 2 hours - is that still acceptable?  It's unlikely that
you'll see any difference in the results so was all that extra
computing worth the effort?

Now work out the total number of pixels in one of your datasets (i.e.
no of pixels per image times no of images).  Divide that by the no of
reflections in the a.u. and multiply by 15 mins (it's probably in the
region of 400 days!): still acceptable?  Again it's unlikely you'll
see any significant difference in the results (assuming you only use
the Bragg spots), so again was it worth it?
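
The back-of-envelope version of that arithmetic, as a sketch (every number below is an illustrative placeholder, not from any real dataset):

```python
# Halving d (2.8 A -> 1.4 A) multiplies the number of unique
# reflections by (2.8/1.4)**3 = 8, since counts scale as 1/d^3.
t_refine_min = 15.0
factor = (2.8 / 1.4) ** 3
print(f"{factor:.0f}x data -> {t_refine_min * factor / 60:.0f} h refinement")

# Pixels vs reflections: e.g. a ~6M-pixel detector, 360 images,
# ~40,000 unique reflections in the a.u. (all assumed values).
pixels = 2463 * 2527 * 360
reflections = 40_000
days = t_refine_min * (pixels / reflections) / (60 * 24)
print(f"{days:.0f} days")   # order of magnitude: hundreds of days
```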

What matters in terms of information content is not the absolute
intensity but the ratio intensity / (expected intensity).  As the data
get weaker at higher d* I falls off, but so does <I> and the ratio I /
<I> becomes progressively more unreliable at determining the
information content.  So a zero I when the other intensities in the
same d* shell are strong is indeed a powerful constraint (this I
suspect is what Wang meant); however, if the other intensities in the
shell are also all zero it tells you next to nothing.

-- Ian

On 1 June 2012 20:03, Jacob Keller  wrote:
> I don't think any data should be discarded, and I think that although
> we are not there yet, refinement should work directly with the images,
> iterating back and forth through all the various levels of data
> processing. As I think was pointed out by Wang, even an intensity of 0
> provides information placing limits on the possible true values of
> that reflection. It seems that the main reason data were discarded
> historically was because of the limitations of (under)grad students
> going through multiple layers of films, evaluating intensities for
> each spot, or other similar processing limits, most of which are not
> really applicable today. A whole iterated refinement protocol now
> takes, what, 15 minutes?
>
> Jacob
>
>
>
> On Fri, Jun 1, 2012 at 1:29 PM, Ed Pozharski  wrote:
>> http://www.nature.com/nsmb/journal/v4/n4/abs/nsb0497-269.html
>> http://scripts.iucr.org/cgi-bin/paper?S0021889800018227
>>
>> Just collect a 360° sweep instead of 180° on a non-decaying crystal and see
>> Rmerge go up due to the increase in multiplicity (and enough with the redundancy
>> term - the extra data is not really *redundant*).  Is your resolution
>> worse or better?
>>
>> This has been argued over before.  Rmerge has some value in comparing
>> two datasets collected in perfectly identical conditions to see which
>> crystal is better and it may predict to some extent what R-values you
>> might expect.  Otherwise, it's unreliable.
>>
>> Given that it's been 15 years since this was pointed out in no less than a
>> Nature-group journal, and we still hear that Rmerge should decide
>> resolution cutoff, chances are increasingly slim that I will personally
>> see the dethroning of that other major oppressor, R-value.
>>
>> On Fri, 2012-06-01 at 10:59 -0700, aaleshin wrote:
>>> Please excuse my ignorance, but I cannot understand why Rmerge is
>>> unreliable for estimation of the resolution.
>>> I mean, from a theoretical point of view, <I/sigma> is indeed a better
>>> criterion, but it is not obvious from a practical point of view.
>>>
>>> <I/sigma> depends on the method of sigma estimation, so the same data
>>> processed by different programs may have different <I/sigma>. Moreover,
>>> HKL2000 allows users to adjust sigmas manually. Rmerge estimates sigmas
>>> from differences between measurements of the same structure factor, and
>>> hence is independent of our preferences.  But it also has a very important
>>> ability to validate the consistency of the merged data. If my crystal
>>> changed during the data collection, or something went wrong with the
>>> diffractometer, Rmerge will show it immediately, but <I/sigma> will not.
>>>
>>> So, please explain why we should stop using Rmerge as a criterion of data
>>> resolution?
>>>
>>> Alex
>>> Sanford-Burnham Medical Research Institute
>>> 10901 North Torrey Pines Road
>>> La Jolla, California 92037
>>>
>>>
>>>
>>> On Jun 1, 2012, at 5:07 AM, Ian Tickle wrote:
>>>
>>> > On 1 June 2012 03:22, Edward A. Berry  wrote:
>>> >> Leo will probably answer better than I can, but I would say I/SigI counts
>>> >> only
>>> >> the present reflection, so eliminating noise by anisotropic truncation
>>> >> should
>>> >> improve it, raising the average I/SigI in the last shell.
>>> >
>>> > We always include unmeasured reflections with I/sigma(I) = 0 in the
>>> > calculation of the mean I/sigma(I) (i.e. we divide the sum of
>>> > I/sigma(I) for measureds by the predicted total no of reflections incl
>>> > unmeasureds), since for unmeasureds I is (almost) completely unknown
>>> > and therefore sigma(I) is effectively infinite (or at least finite but
>>> > large since you do have some idea of what range I must fall in).

Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Jacob Keller
I don't think any data should be discarded, and I think that although
we are not there yet, refinement should work directly with the images,
iterating back and forth through all the various levels of data
processing. As I think was pointed out by Wang, even an intensity of 0
provides information placing limits on the possible true values of
that reflection. It seems that the main reason data were discarded
historically was because of the limitations of (under)grad students
going through multiple layers of films, evaluating intensities for
each spot, or other similar processing limits, most of which are not
really applicable today. A whole iterated refinement protocol now
takes, what, 15 minutes?

Jacob



On Fri, Jun 1, 2012 at 1:29 PM, Ed Pozharski  wrote:
> http://www.nature.com/nsmb/journal/v4/n4/abs/nsb0497-269.html
> http://scripts.iucr.org/cgi-bin/paper?S0021889800018227
>
> Just collect a 360° sweep instead of 180° on a non-decaying crystal and see
> Rmerge go up due to the increase in multiplicity (and enough with the redundancy
> term - the extra data is not really *redundant*).  Is your resolution
> worse or better?
>
> This has been argued over before.  Rmerge has some value in comparing
> two datasets collected in perfectly identical conditions to see which
> crystal is better and it may predict to some extent what R-values you
> might expect.  Otherwise, it's unreliable.
>
> Given that it's been 15 years since this was pointed out in no less than a
> Nature-group journal, and we still hear that Rmerge should decide
> resolution cutoff, chances are increasingly slim that I will personally
> see the dethroning of that other major oppressor, R-value.
>
> On Fri, 2012-06-01 at 10:59 -0700, aaleshin wrote:
>> Please excuse my ignorance, but I cannot understand why Rmerge is
>> unreliable for estimation of the resolution.
>> I mean, from a theoretical point of view, <I/sigma> is indeed a better
>> criterion, but it is not obvious from a practical point of view.
>>
>> <I/sigma> depends on the method of sigma estimation, so the same data
>> processed by different programs may have different <I/sigma>. Moreover,
>> HKL2000 allows users to adjust sigmas manually. Rmerge estimates sigmas
>> from differences between measurements of the same structure factor, and
>> hence is independent of our preferences.  But it also has a very important
>> ability to validate the consistency of the merged data. If my crystal
>> changed during the data collection, or something went wrong with the
>> diffractometer, Rmerge will show it immediately, but <I/sigma> will not.
>>
>> So, please explain why we should stop using Rmerge as a criterion of data
>> resolution?
>>
>> Alex
>> Sanford-Burnham Medical Research Institute
>> 10901 North Torrey Pines Road
>> La Jolla, California 92037
>>
>>
>>
>> On Jun 1, 2012, at 5:07 AM, Ian Tickle wrote:
>>
>> > On 1 June 2012 03:22, Edward A. Berry  wrote:
>> >> Leo will probably answer better than I can, but I would say I/SigI counts
>> >> only
>> >> the present reflection, so eliminating noise by anisotropic truncation
>> >> should
>> >> improve it, raising the average I/SigI in the last shell.
>> >
>> > We always include unmeasured reflections with I/sigma(I) = 0 in the
>> > calculation of the mean I/sigma(I) (i.e. we divide the sum of
>> > I/sigma(I) for measureds by the predicted total no of reflections incl
>> > unmeasureds), since for unmeasureds I is (almost) completely unknown
>> > and therefore sigma(I) is effectively infinite (or at least finite but
>> > large since you do have some idea of what range I must fall in).  A
>> > shell with <I/sigma(I)> = 2 and 50% completeness clearly doesn't carry
>> > the same information content as one with the same <I/sigma(I)> and
>> > 100% complete; therefore IMO it's very misleading to quote
>> > <I/sigma(I)> including only the measured reflections.  This also means
>> > we can use a single cut-off criterion (we use mean I/sigma(I) > 1),
>> > and we don't need another arbitrary cut-off criterion for
>> > completeness.  As many others seem to be doing now, we don't use
>> > Rmerge, Rpim etc as criteria to estimate resolution, they're just too
>> > unreliable - Rmerge is indeed dead and buried!
>> >
>> > Actually a mean value of I/sigma(I) of 2 is highly statistically
>> > significant, i.e. very unlikely to have arisen by chance variations,
>> > and the significance threshold for the mean must be much closer to 1
>> > than to 2.  Taking an average always increases the statistical
>> > significance, therefore it's not valid to compare an _average_ value
>> > of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
>> > sigma as the threshold of statistical significance of an individual
>> > measurement): that's a case of "comparing apples with pears".  In
>> > other words in the outer shell you would need a lot of highly
>> > significant individual values >> 3 to attain an overall average of 2
>> > since the majority of individual values will be < 1.
>> >
>> >> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
>> >> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit...)

Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Ed Pozharski
http://www.nature.com/nsmb/journal/v4/n4/abs/nsb0497-269.html
http://scripts.iucr.org/cgi-bin/paper?S0021889800018227

Just collect a 360° sweep instead of 180° on a non-decaying crystal and see
Rmerge go up due to the increase in multiplicity (and enough with the redundancy
term - the extra data is not really *redundant*).  Is your resolution
worse or better?

This has been argued over before.  Rmerge has some value in comparing
two datasets collected in perfectly identical conditions to see which
crystal is better and it may predict to some extent what R-values you
might expect.  Otherwise, it's unreliable.
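
The multiplicity effect is easy to demonstrate with the standard formulas, where Rpim differs from Rmerge only by a 1/sqrt(n-1) factor per unique reflection. A toy sketch (synthetic intensities, purely illustrative numbers):

```python
import numpy as np

def r_merge(groups):
    # sum_hkl sum_i |I_i - <I>|  /  sum_hkl sum_i I_i
    num = sum(np.abs(g - g.mean()).sum() for g in groups)
    return num / sum(g.sum() for g in groups)

def r_pim(groups):
    # same, but each reflection's deviations carry a 1/sqrt(n-1) factor
    num = sum(np.abs(g - g.mean()).sum() / np.sqrt(len(g) - 1)
              for g in groups)
    return num / sum(g.sum() for g in groups)

rng = np.random.default_rng(0)
true_I = rng.exponential(100.0, 2000)
for mult in (2, 4, 8):     # think 180 vs 360 degree sweeps, and beyond
    groups = [rng.normal(t, 10.0, mult) for t in true_I]
    print(mult, round(r_merge(groups), 4), round(r_pim(groups), 4))
```

Rmerge creeps upward as multiplicity grows even though the merged data only get better; Rpim moves the right way.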

Given that it's been 15 years since this was pointed out in no less than a
Nature-group journal, and we still hear that Rmerge should decide
resolution cutoff, chances are increasingly slim that I will personally
see the dethroning of that other major oppressor, R-value.

On Fri, 2012-06-01 at 10:59 -0700, aaleshin wrote:
> Please excuse my ignorance, but I cannot understand why Rmerge is
> unreliable for estimation of the resolution.
> I mean, from a theoretical point of view, <I/sigma> is indeed a better
> criterion, but it is not obvious from a practical point of view.
> 
> <I/sigma> depends on the method of sigma estimation, so the same data
> processed by different programs may have different <I/sigma>. Moreover,
> HKL2000 allows users to adjust sigmas manually. Rmerge estimates sigmas
> from differences between measurements of the same structure factor, and
> hence is independent of our preferences.  But it also has a very important
> ability to validate the consistency of the merged data. If my crystal
> changed during the data collection, or something went wrong with the
> diffractometer, Rmerge will show it immediately, but <I/sigma> will not.
> 
> So, please explain why we should stop using Rmerge as a criterion of data
> resolution?
> 
> Alex
> Sanford-Burnham Medical Research Institute
> 10901 North Torrey Pines Road
> La Jolla, California 92037
> 
> 
> 
> On Jun 1, 2012, at 5:07 AM, Ian Tickle wrote:
> 
> > On 1 June 2012 03:22, Edward A. Berry  wrote:
> >> Leo will probably answer better than I can, but I would say I/SigI counts
> >> only
> >> the present reflection, so eliminating noise by anisotropic truncation
> >> should
> >> improve it, raising the average I/SigI in the last shell.
> > 
> > We always include unmeasured reflections with I/sigma(I) = 0 in the
> > calculation of the mean I/sigma(I) (i.e. we divide the sum of
> > I/sigma(I) for measureds by the predicted total no of reflections incl
> > unmeasureds), since for unmeasureds I is (almost) completely unknown
> > and therefore sigma(I) is effectively infinite (or at least finite but
> > large since you do have some idea of what range I must fall in).  A
> > shell with <I/sigma(I)> = 2 and 50% completeness clearly doesn't carry
> > the same information content as one with the same <I/sigma(I)> and
> > 100% complete; therefore IMO it's very misleading to quote
> > <I/sigma(I)> including only the measured reflections.  This also means
> > we can use a single cut-off criterion (we use mean I/sigma(I) > 1),
> > and we don't need another arbitrary cut-off criterion for
> > completeness.  As many others seem to be doing now, we don't use
> > Rmerge, Rpim etc as criteria to estimate resolution, they're just too
> > unreliable - Rmerge is indeed dead and buried!
> > 
> > Actually a mean value of I/sigma(I) of 2 is highly statistically
> > significant, i.e. very unlikely to have arisen by chance variations,
> > and the significance threshold for the mean must be much closer to 1
> > than to 2.  Taking an average always increases the statistical
> > significance, therefore it's not valid to compare an _average_ value
> > of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
> > sigma as the threshold of statistical significance of an individual
> > measurement): that's a case of "comparing apples with pears".  In
> > other words in the outer shell you would need a lot of highly
> > significant individual values >> 3 to attain an overall average of 2
> > since the majority of individual values will be < 1.
> > 
> >> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
> >> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit...)
> > 
> > That depends on what you mean by 'better': every metric must be
> > compared with a criterion appropriate to that metric. So if we are
> > comparing I/sigma(I) with a criterion value = 3, then we must compare
> > F/sigma(F) with criterion value = 6 ('in the limit' of zero I), in
> > which case the comparison is no 'better' (in terms of information
> > content) with I than with F: they are entirely equivalent.  It's
> > meaningless to compare F/sigma(F) with the criterion value appropriate
> > to I/sigma(I): again that's "comparing apples and pears"!
> > 
> > Cheers
> > 
> > -- Ian

-- 
Edwin Pozharski, PhD, Assistant Professor
University of Maryland, Baltimore
---

Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Phil Evans
As the K & D paper points out, Rmerge goes up to infinity as the signal/noise 
declines at higher resolution, so there is no sensible way to set a limiting 
value on it to determine "resolution".

That is not to say that Rmerge has no use: as you say, it's a reasonably good 
metric to plot against image number to detect a problem. It's just not a 
suitable metric for deciding resolution.

I/sigI is pretty good for this, even though the sigma estimates are not very 
reliable. CC1/2 is probably better since it is independent of sigmas and has 
defined values from 1.0 down to 0.0 as signal/noise decreases. But we should be 
careful of any dogma which says what data we should discard, and what the 
cutoff limits should be: I/sigI > 3, 2, or 1? CC1/2 > 0.2, 0.3, 0.5...? Usually 
it does not make a huge difference, but why discard useful data? Provided the 
data are properly weighted in refinement by weights incorporating the observed 
sigmas (true in Refmac, not true in phenix.refine at present, I believe), 
adding extra weak data should do no harm, at least out to some point. Program 
algorithms are improving in their treatment of weak data, but are by no means 
perfect.
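
The "properly weighted" point is just inverse-variance weighting: with weights w = 1/sigma^2, a very weak observation pulls the answer only very weakly. A toy scalar illustration (nothing crystallographic about it; numbers are made up):

```python
import numpy as np

def weighted_mean(y, sigma):
    w = 1.0 / np.asarray(sigma) ** 2          # inverse-variance weights
    return float((w * np.asarray(y)).sum() / w.sum())

y, s = [10.1, 9.9, 10.0], [0.1, 0.1, 0.1]     # three strong observations
print(weighted_mean(y, s))                    # ~10.0
# Append two very weak, wildly scattered observations:
print(weighted_mean(y + [14.0, 2.0], s + [5.0, 5.0]))   # still ~10.0
```

With honest sigmas the extra weak data cannot drag the estimate around; with unit weights the same two observations would shift it noticeably.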

One problem as discussed earlier in this thread is that we have got used to the 
idea that nominal resolution is a single number indicating the quality of a 
structure, but this has never been true, irrespective of the cutoff method. 
Apart from the considerable problem of anisotropy, we all need to note the 
wisdom of Ethan Merritt

> "We should also encourage people not to confuse the quality of 
> the data with the quality of the model."

Phil



On 1 Jun 2012, at 18:59, aaleshin wrote:

> Please excuse my ignorance, but I cannot understand why Rmerge is
> unreliable for estimation of the resolution.
> I mean, from a theoretical point of view, <I/sigma> is indeed a better
> criterion, but it is not obvious from a practical point of view.
> 
> <I/sigma> depends on the method of sigma estimation, so the same data
> processed by different programs may have different <I/sigma>. Moreover,
> HKL2000 allows users to adjust sigmas manually. Rmerge estimates sigmas
> from differences between measurements of the same structure factor, and
> hence is independent of our preferences.  But it also has a very important
> ability to validate the consistency of the merged data. If my crystal
> changed during the data collection, or something went wrong with the
> diffractometer, Rmerge will show it immediately, but <I/sigma> will not.
> 
> So, please explain why we should stop using Rmerge as a criterion of data
> resolution?
> 
> Alex
> Sanford-Burnham Medical Research Institute
> 10901 North Torrey Pines Road
> La Jolla, California 92037
> 
> 
> 
> On Jun 1, 2012, at 5:07 AM, Ian Tickle wrote:
> 
>> On 1 June 2012 03:22, Edward A. Berry  wrote:
>>> Leo will probably answer better than I can, but I would say I/SigI counts
>>> only
>>> the present reflection, so eliminating noise by anisotropic truncation
>>> should
>>> improve it, raising the average I/SigI in the last shell.
>> 
>> We always include unmeasured reflections with I/sigma(I) = 0 in the
>> calculation of the mean I/sigma(I) (i.e. we divide the sum of
>> I/sigma(I) for measureds by the predicted total no of reflections incl
>> unmeasureds), since for unmeasureds I is (almost) completely unknown
>> and therefore sigma(I) is effectively infinite (or at least finite but
>> large since you do have some idea of what range I must fall in).  A
>> shell with <I/sigma(I)> = 2 and 50% completeness clearly doesn't carry
>> the same information content as one with the same <I/sigma(I)> and
>> 100% complete; therefore IMO it's very misleading to quote
>> <I/sigma(I)> including only the measured reflections.  This also means
>> we can use a single cut-off criterion (we use mean I/sigma(I) > 1),
>> and we don't need another arbitrary cut-off criterion for
>> completeness.  As many others seem to be doing now, we don't use
>> Rmerge, Rpim etc as criteria to estimate resolution, they're just too
>> unreliable - Rmerge is indeed dead and buried!
>> 
>> Actually a mean value of I/sigma(I) of 2 is highly statistically
>> significant, i.e. very unlikely to have arisen by chance variations,
>> and the significance threshold for the mean must be much closer to 1
>> than to 2.  Taking an average always increases the statistical
>> significance, therefore it's not valid to compare an _average_ value
>> of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
>> sigma as the threshold of statistical significance of an individual
>> measurement): that's a case of "comparing apples with pears".  In
>> other words in the outer shell you would need a lot of highly
>> significant individual values >> 3 to attain an overall average of 2
>> since the majority of individual values will be < 1.
>> 
>>> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
>>> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit...)
>> 
>> That depends on what you mean by 'better': every metric must be
>> compared with a criterion appropriate to that metric.

Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread aaleshin
Please excuse my ignorance, but I cannot understand why Rmerge is unreliable 
for estimation of the resolution.
I mean, from a theoretical point of view, <I/sigma> is indeed a better 
criterion, but it is not obvious from a practical point of view.

<I/sigma> depends on the method of sigma estimation, so the same data processed 
by different programs may have different <I/sigma>. Moreover, HKL2000 allows 
users to adjust sigmas manually. Rmerge estimates sigmas from differences 
between measurements of the same structure factor, and hence is independent of 
our preferences.  But it also has a very important ability to validate the 
consistency of the merged data. If my crystal changed during the data 
collection, or something went wrong with the diffractometer, Rmerge will show 
it immediately, but <I/sigma> will not.

So, please explain why we should stop using Rmerge as a criterion of data 
resolution?

Alex
Sanford-Burnham Medical Research Institute
10901 North Torrey Pines Road
La Jolla, California 92037



On Jun 1, 2012, at 5:07 AM, Ian Tickle wrote:

> On 1 June 2012 03:22, Edward A. Berry  wrote:
>> Leo will probably answer better than I can, but I would say I/SigI counts
>> only
>> the present reflection, so eliminating noise by anisotropic truncation
>> should
>> improve it, raising the average I/SigI in the last shell.
> 
> We always include unmeasured reflections with I/sigma(I) = 0 in the
> calculation of the mean I/sigma(I) (i.e. we divide the sum of
> I/sigma(I) for measureds by the predicted total no of reflections incl
> unmeasureds), since for unmeasureds I is (almost) completely unknown
> and therefore sigma(I) is effectively infinite (or at least finite but
> large since you do have some idea of what range I must fall in).  A
> shell with <I/sigma(I)> = 2 and 50% completeness clearly doesn't carry
> the same information content as one with the same <I/sigma(I)> and
> 100% complete; therefore IMO it's very misleading to quote
> <I/sigma(I)> including only the measured reflections.  This also means
> we can use a single cut-off criterion (we use mean I/sigma(I) > 1),
> and we don't need another arbitrary cut-off criterion for
> completeness.  As many others seem to be doing now, we don't use
> Rmerge, Rpim etc as criteria to estimate resolution, they're just too
> unreliable - Rmerge is indeed dead and buried!
> 
> Actually a mean value of I/sigma(I) of 2 is highly statistically
> significant, i.e. very unlikely to have arisen by chance variations,
> and the significance threshold for the mean must be much closer to 1
> than to 2.  Taking an average always increases the statistical
> significance, therefore it's not valid to compare an _average_ value
> of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
> sigma as the threshold of statistical significance of an individual
> measurement): that's a case of "comparing apples with pears".  In
> other words in the outer shell you would need a lot of highly
> significant individual values >> 3 to attain an overall average of 2
> since the majority of individual values will be < 1.
> 
>> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
>> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit...)
> 
> That depends on what you mean by 'better': every metric must be
> compared with a criterion appropriate to that metric. So if we are
> comparing I/sigma(I) with a criterion value = 3, then we must compare
> F/sigma(F) with criterion value = 6 ('in the limit' of zero I), in
> which case the comparison is no 'better' (in terms of information
> content) with I than with F: they are entirely equivalent.  It's
> meaningless to compare F/sigma(F) with the criterion value appropriate
> to I/sigma(I): again that's "comparing apples and pears"!
> 
> Cheers
> 
> -- Ian


Re: [ccp4bb] Death of Rmerge

2012-06-01 Thread Ian Tickle
On 1 June 2012 03:22, Edward A. Berry  wrote:
> Leo will probably answer better than I can, but I would say I/SigI counts
> only
> the present reflection, so eliminating noise by anisotropic truncation
> should
> improve it, raising the average I/SigI in the last shell.

We always include unmeasured reflections with I/sigma(I) = 0 in the
calculation of the mean I/sigma(I) (i.e. we divide the sum of
I/sigma(I) for measureds by the predicted total no of reflections incl
unmeasureds), since for unmeasureds I is (almost) completely unknown
and therefore sigma(I) is effectively infinite (or at least finite but
large since you do have some idea of what range I must fall in).  A
shell with <I/sigma(I)> = 2 and 50% completeness clearly doesn't carry
the same information content as one with the same <I/sigma(I)> and
100% complete; therefore IMO it's very misleading to quote
<I/sigma(I)> including only the measured reflections.  This also means
we can use a single cut-off criterion (we use mean I/sigma(I) > 1),
and we don't need another arbitrary cut-off criterion for
completeness.  As many others seem to be doing now, we don't use
Rmerge, Rpim etc as criteria to estimate resolution, they're just too
unreliable - Rmerge is indeed dead and buried!
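
In code form the bookkeeping is trivial; a minimal sketch (not any program's actual implementation, counts are invented):

```python
def mean_i_over_sigma(i_over_sigma_measured, n_predicted):
    # Unmeasureds contribute I/sigma(I) = 0 (sigma effectively infinite),
    # so the sum runs over measureds but the divisor is the full
    # predicted reflection count for the shell.
    return sum(i_over_sigma_measured) / n_predicted

shell = [2.0] * 500           # <I/sigma(I)> = 2 over the measured half
print(mean_i_over_sigma(shell, 500))    # 100% complete -> 2.0
print(mean_i_over_sigma(shell, 1000))   #  50% complete -> 1.0
```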

Actually a mean value of I/sigma(I) of 2 is highly statistically
significant, i.e. very unlikely to have arisen by chance variations,
and the significance threshold for the mean must be much closer to 1
than to 2.  Taking an average always increases the statistical
significance, therefore it's not valid to compare an _average_ value
of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
sigma as the threshold of statistical significance of an individual
measurement): that's a case of "comparing apples with pears".  In
other words in the outer shell you would need a lot of highly
significant individual values >> 3 to attain an overall average of 2
since the majority of individual values will be < 1.
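
To see how strong that effect is, assume (crudely) that each individual I/sigma(I) value has unit variance; the shell mean then has standard error 1/sqrt(N), so its significance is roughly the mean times sqrt(N). A sketch under that simplifying assumption:

```python
import math

def mean_significance(mean_i_over_sigma, n_reflections):
    # Under the unit-variance assumption the standard error of the
    # mean is 1/sqrt(N), so significance = mean * sqrt(N).
    return mean_i_over_sigma * math.sqrt(n_reflections)

print(mean_significance(2.0, 2000))   # ~89 "sigma" for a 2000-reflection shell
print(mean_significance(1.0, 2000))   # ~45: still overwhelmingly significant
```

The numbers are illustrative, but they show why a threshold on the shell mean sits much closer to 1 than to the 3-sigma cut used for individual reflections.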

> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit...)

That depends on what you mean by 'better': every metric must be
compared with a criterion appropriate to that metric. So if we are
comparing I/sigma(I) with a criterion value = 3, then we must compare
F/sigma(F) with criterion value = 6 ('in the limit' of zero I), in
which case the comparison is no 'better' (in terms of information
content) with I than with F: they are entirely equivalent.  It's
meaningless to compare F/sigma(F) with the criterion value appropriate
to I/sigma(I): again that's "comparing apples and pears"!
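
The factor of two drops straight out of the error propagation quoted above (I = F^2, so sigma(I) is approximately 2*F*sigma(F)), which is why the two criteria convert mechanically; a sketch:

```python
def f_over_sigf(i, sig_i):
    # Propagate I = F^2  =>  sigma(F) = sigma(I) / (2F).  The
    # linearization breaks down for I <~ sigma(I), where a proper
    # French-Wilson-style treatment is needed; this is only the
    # limiting behaviour being discussed here.
    f = i ** 0.5
    return f / (sig_i / (2.0 * f))

print(f_over_sigf(9.0, 3.0))   # I/sigma(I) = 3  ->  F/sigma(F) = 6
print(f_over_sigf(8.0, 4.0))   # I/sigma(I) = 2  ->  F/sigma(F) = 4
```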

Cheers

-- Ian


Re: [ccp4bb] Death of Rmerge

2012-05-31 Thread Edward A. Berry

Leo will probably answer better than I can, but I would say I/SigI counts only
the present reflection, so eliminating noise by anisotropic truncation should
improve it, raising the average I/SigI in the last shell.

F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit...)

On the other hand the integration software will measure spots whether they
exist or not, so completeness is good even in a shell where there is no data.
Anisotropic truncation removes reflections, so now calculating completeness
in the outer (spherical) shell gives low completeness. In the direction
where the data was truncated to 3.4 A, there are obviously no reflections in
the 3.0-3.1 (e.g.) shell.

Read about anisotropic truncation and scaling at the server they used:
http://services.mbi.ucla.edu/anisoscale/

I think Eleanor Dodson once suggested anisotropic truncation could be
performed by a script which gets the anisotropy from the fall-off
analysis in truncate, changes the cell parameters to phony ones which
distort reciprocal space until the fall-off is the same in all directions,
performs a (spherical) resolution cut-off, and then changes the cell
parameters back to the correct ones.
I think all the refinement programs can perform anisotropic scaling,
but they normally don't save the scaled data.
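
For an orthorhombic cell that phony-cell trick is equivalent to an ellipsoidal cutoff in reciprocal space. A minimal sketch (orthorhombic only; the cell and limits are made-up illustrations, loosely echoing the 3.4/3.0/3.0 A case above):

```python
def keep_hkl(h, k, l, cell=(100.0, 100.0, 100.0), limits=(3.4, 3.0, 3.0)):
    # For an orthorhombic cell the d* components along a*, b*, c* are
    # h/a, k/b, l/c; scaling each by its directional d-spacing limit
    # turns the anisotropic cutoff into a unit-ellipsoid test.
    a, b, c = cell
    da, db, dc = limits
    r2 = (h / a * da) ** 2 + (k / b * db) ** 2 + (l / c * dc) ** 2
    return r2 <= 1.0

print(keep_hkl(31, 0, 0))   # d ~ 3.2 A along a*: rejected (3.4 A limit)
print(keep_hkl(0, 0, 31))   # d ~ 3.2 A along c*: kept (3.0 A limit)
```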

Zhijie Li wrote:

I am a little curious about the anisotropically truncated data for 3RKO:

Percent Possible(All) 96.0
Mean I Over Sigma(Observed) 0.8

In the supplementary table of the nature paper it was made clear that this
3.16-3.0A, I/sigmaI=0.8 and Rmerge=1.216 shell was the outer shell of the
anisotropically truncated data. The authors also reported the isotropically
truncated resolution to be 3.2A with I/sigmaI=1.3 and Rmerge=73%.

The authors also stated in the main text that

"the best native data set was anisotropically scaled and truncated to 3.4 Å, 
3.0 Å and 3.0
 Å resolution, where the F/σ ratio drops to ~2.6–2.8 along the a*, b* and c* 
axes,
respectively (scaling 2, Supplementary Table 1)"

My question is, is the I/sigmaI=0.8 a consequence of many reflections with
nearly 0 I/sigmaI being included in the calculation? Then what does the 96%
completeness mean? Does it mean that 96% completeness in the spherical shell of
3.16-3.0A was achieved by including a great number of I=0 reflections?


Zhijie



--
From: "Edward A. Berry" 
Sent: Thursday, May 31, 2012 2:59 PM
To: 
Subject: Re: [ccp4bb] Death of Rmerge


Yes! I want a copy of this program RESCUT.

REMARK 200 R SYM FOR SHELL (I) : 1.21700
I noticed structure 3RKO reported Rmerge in the last shell greater
than 1, suggesting the police who were defending R-merge were fighting
a losing battle. And this provides a lot of ammunition to those
they are fighting.

Jacob Keller wrote:

Dear Crystallographers,

in case you have not heard, it would appear that the Rmerge statistic
has died as of the publication of PMID: 22628654. Ding Dong...?

JPK

--
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***





Re: [ccp4bb] Death of Rmerge

2012-05-31 Thread Zhijie Li

I am a little curious about the anisotropically truncated data for 3RKO:

   Percent Possible(All)    96.0
Mean I Over Sigma(Observed) 0.8

In the supplementary table of the nature paper it was made clear that this 
3.16-3.0A, I/sigmaI=0.8 and Rmerge=1.216 shell was the outer shell of the 
anisotropically truncated data. The authors also reported the 
isotropically truncated resolution to be 3.2A with I/sigmaI=1.3 and 
Rmerge=73%.


The authors also stated in the main text that

"the best native data set was anisotropically scaled and truncated to 3.4 Å, 
3.0 Å and 3.0 Å resolution, where the F/σ ratio drops to ~2.6–2.8 along 
the a*, b* and c* axes, respectively (scaling 2, Supplementary Table 1)"


My question is, is the I/sigmaI=0.8 a consequence of many reflections with 
nearly 0 I/sigmaI being included in the calculation? Then what does the 96% 
completeness mean? Does it mean that 96% completeness in the spherical shell 
of 3.16-3.0A was achieved by including a great number of I=0 reflections?



Zhijie



--
From: "Edward A. Berry" 
Sent: Thursday, May 31, 2012 2:59 PM
To: 
Subject: Re: [ccp4bb] Death of Rmerge


Yes! I want a copy of this program RESCUT.

REMARK 200  R SYM FOR SHELL(I) : 1.21700
I noticed structure 3RKO reported Rmerge in the last shell greater
than 1, suggesting the police who were defending R-merge were fighting
a losing battle. And this provides a lot of ammunition to those
they are fighting.

Jacob Keller wrote:

Dear Crystallographers,

in case you have not heard, it would appear that the Rmerge statistic
has died as of the publication of  PMID: 22628654. Ding Dong...?

JPK

--
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***



Re: [ccp4bb] Fwd: [ccp4bb] Death of Rmerge

2012-05-31 Thread Edward A. Berry

In the meantime we could follow Phoebe Rice's example and put
the resolution at I/sigma=2 in REMARK 2 "resolution of structure"
but put the actual bleeding-edge resolution we used in the
reduction and refinement statistics (at least if the PDB
will allow us to have different values in these three places),
and cite the REMARK 2 value in the article.
eab
PS - I believe optical resolution actually gives significantly
more optimistic numbers for resolution than extending from
I/sigma=2 to 0.5. The "resolution" we are used to is the d-spacing
of the Fourier components, which is theoretically larger than the
microscopist's definition of resolution (how close objects can
be and still give separate maxima with a minimum between).

Miller, Mitchell D. wrote:

All three numbers (high resolution limit in remark 2, remark 3
and Remark 200) are supposed to be consistent and are
defined as the highest resolution reflection used.
http://mmcif.rcsb.org/dictionaries/mmcif_pdbx_v40.dic/Items/_reflns.d_resolution_high.html
http://mmcif.rcsb.org/dictionaries/mmcif_pdbx_v40.dic/Items/_refine.ls_d_res_high.html

  Looking at the PDB specification, there is an option
to add a free-text comment to the REMARK 2 resolution --
"Additional explanatory text may be included starting with the third line of the 
REMARK 2 record. For example, depositors may wish to qualify the resolution value 
provided due to unusual experimental conditions."
http://www.wwpdb.org/documentation/format33/remarks1.html




On Wed, Apr 25, 2012 at 12:23 AM, Phoebe Rice   wrote:
I just noticed that the PDB has changed the stated resolution for one of my old 
structures!  It was refined against a very anisotropic data set that extended 
to 2.2 in the best direction only.  When depositing I called the resolution 2.5 
as a rough average of resolution in all 3 directions, but now PDB is 
advertising it as 2.2, which is misleading.

I'm afraid I may not have paid enough attention to the fine print on this issue - is the 
PDB now automatically advertising the "resolution" of a structure as that of 
the outermost flyspeck used in refinement, regardless of more cautious assertions by the 
authors?  If so, I object!



Dale Tronrud wrote:

On 05/31/12 12:07, Jacob Keller wrote:

Alas, how many lines like the following from a recent Science paper
(PMID: 22605777), probably reviewer-incited, could have been avoided!

"Here, we present three high-resolution crystal structures of the
Thermus thermophilus (Tth) 70S ribosome in complex withRMF, HPF, or
YfiA that were refined by using data extending to 3.0 Å (I/sI = 1),
3.1 Å (I/sI = 1), and 2.75 Å (I/sI = 1) resolution, respectively. The
resolutions at which I/sI = 2 are 3.2 Å, 3.4 Å, and 2.9 Å,
respectively."



I don't see how you can avoid something like this.  With the new,
higher, resolution limits for data (which are good things) people will
tend to assume that a "2.6 A resolution model" will have roughly the
same quality as a "2.6 A resolution model" from five years ago when
the old criteria were used.  K&D show that the weak high resolution
data contain useful information but certainly not as much information
as the data with stronger intensity.

The resolution limit of the data set has been such an important
indicator of the quality of the resulting model (rightly or wrongly)
that it often is included in the title of the paper itself.  Despite
the fact that we now want to include more, weak, data than before
we need to continue to have a quality indicator that readers can
use to assess the models they are reading about.  While cumbersome,
one solution is to state what the resolution limit would have been
had the old criteria been used, as was done in the paper you quote.
This simply gives the reader a measure they can compare to their
previous experiences.

Now would be a good time to break with tradition and institute
a new measure of quality of diffraction data sets.  I believe several
have been proposed over the years, but have simply not caught on.
SFCHECK produces an "optical resolution".  Could this be used in
the title of papers?  I don't believe it is sensitive to the cutoff
resolution and it produces values that are consistent with what the
readers are used to.  With this solution people could include whatever
noisy data they want and not be guilty of overstating the quality of
their model.

Dale Tronrud


JPK



On Thu, May 31, 2012 at 1:59 PM, Edward A. Berry  wrote:

Yes! I want a copy of this program RESCUT.

REMARK 200  R SYM FOR SHELL(I) : 1.21700
I noticed structure 3RKO reported Rmerge in the last shell greater
than 1, suggesting the police who were defending R-merge were fighting
a losing battle. And this provides a lot of ammunition to those
they are fighting.

Jacob Keller wrote:


Dear Crystallographers,

in case you have not heard, it would appear that the Rmerge statistic
has died as of the publication of  PMID: 22628654. Ding Dong...?

JPK

--

Re: [ccp4bb] Fwd: [ccp4bb] Death of Rmerge

2012-05-31 Thread aaleshin
> There are things you can expect to learn from a
> 2Å structure that you are unlikely to learn from a 5Å structure, even
> if equal care has been given to both experiments, so it makes sense
> for the title to give the potential reader an idea which of the two
> cases is presented.  But for this purpose it isn't going to matter
> whether "2Å" is really 1.8Å or 2.2Å. 

What should the title say when a crystal diffracts to, let's say, 3 A in one 
direction and 4-5 A in the others?

Alex Aleshin,

Sanford-Burnham Medical Research Institute
10901 North Torrey Pines Road
La Jolla, California 92037



On May 31, 2012, at 2:50 PM, Ethan Merritt wrote:

> On Thursday, May 31, 2012 02:21:45 pm Dale Tronrud wrote:
>>   The resolution limit of the data set has been such an important
>> indicator of the quality of the resulting model (rightly or wrongly)
>> that it often is included in the title of the paper itself.  Despite
>> the fact that we now want to include more, weak, data than before
>> we need to continue to have a quality indicator that readers can
>> use to assess the models they are reading about.  While cumbersome,
>> one solution is to state what the resolution limit would have been
>> had the old criteria been used, as was done in the paper you quote.
>> This simply gives the reader a measure they can compare to their
>> previous experiences.
> 
> [\me dons flame suit]
> 
> To the extent that reporting the resolution is simply a stand-in
> for reporting the quality of the model, we would do better to cut
> to the chase.  For instance, if you map the Molprobity green/yellow/red
> model quality scoring onto good/mediocre/poor then you can title
> your paper
> 
>   Crystal Structure of Fabulous Protein Foo at Mediocre Quality
> 
> [\me removes flame suit from back, and tongue from cheek]
> 
> 
> More seriously, I don't think it's entirely true that the resolution
> is reported as an indicator of quality in the sense that the model
> is well-refined.  There are things you can expect to learn from a
> 2Å structure that you are unlikely to learn from a 5Å structure, even
> if equal care has been given to both experiments, so it makes sense
> for the title to give the potential reader an idea which of the two
> cases is presented.  But for this purpose it isn't going to matter
> whether "2Å" is really 1.8Å or 2.2Å.  
> 
>>   Now would be a good time to break with tradition and institute
>> a new measure of quality of diffraction data sets.  I believe several
>> have been proposed over the years, but have simply not caught on.
>> SFCHECK produces an "optical resolution".  Could this be used in
>> the title of papers?  I don't believe it is sensitive to the cutoff
>> resolution and it produces values that are consistent with what the
>> readers are used to.  With this solution people could include whatever
>> noisy data they want and not be guilty of overstating the quality of
>> their model.
> 
> We should also encourage people not to confuse the quality of 
> the data with the quality of the model.
> 
>   Ethan
> 
> -- 
> Ethan A Merritt
> Biomolecular Structure Center,  K-428 Health Sciences Bldg
> University of Washington, Seattle 98195-7742


Re: [ccp4bb] Fwd: [ccp4bb] Death of Rmerge

2012-05-31 Thread Ethan Merritt
On Thursday, May 31, 2012 02:21:45 pm Dale Tronrud wrote:
>The resolution limit of the data set has been such an important
> indicator of the quality of the resulting model (rightly or wrongly)
> that it often is included in the title of the paper itself.  Despite
> the fact that we now want to include more, weak, data than before
> we need to continue to have a quality indicator that readers can
> use to assess the models they are reading about.  While cumbersome,
> one solution is to state what the resolution limit would have been
> had the old criteria been used, as was done in the paper you quote.
> This simply gives the reader a measure they can compare to their
> previous experiences.

[\me dons flame suit]

To the extent that reporting the resolution is simply a stand-in
for reporting the quality of the model, we would do better to cut
to the chase.  For instance, if you map the Molprobity green/yellow/red
model quality scoring onto good/mediocre/poor then you can title
your paper

   Crystal Structure of Fabulous Protein Foo at Mediocre Quality

[\me removes flame suit from back, and tongue from cheek]


More seriously, I don't think it's entirely true that the resolution
is reported as an indicator of quality in the sense that the model
is well-refined.  There are things you can expect to learn from a
2Å structure that you are unlikely to learn from a 5Å structure, even
if equal care has been given to both experiments, so it makes sense
for the title to give the potential reader an idea which of the two
cases is presented.  But for this purpose it isn't going to matter
whether "2Å" is really 1.8Å or 2.2Å.  

>Now would be a good time to break with tradition and institute
> a new measure of quality of diffraction data sets.  I believe several
> have been proposed over the years, but have simply not caught on.
> SFCHECK produces an "optical resolution".  Could this be used in
> the title of papers?  I don't believe it is sensitive to the cutoff
> resolution and it produces values that are consistent with what the
> readers are used to.  With this solution people could include whatever
> noisy data they want and not be guilty of overstating the quality of
> their model.

We should also encourage people not to confuse the quality of 
the data with the quality of the model.

Ethan

-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742


Re: [ccp4bb] Fwd: [ccp4bb] Death of Rmerge

2012-05-31 Thread Jacob Keller
Good idea, but how to get it to catch on without publishing in Science?

JPK

On Thu, May 31, 2012 at 4:21 PM, Dale Tronrud  wrote:
>
> On 05/31/12 12:07, Jacob Keller wrote:
>> Alas, how many lines like the following from a recent Science paper
>> (PMID: 22605777), probably reviewer-incited, could have been avoided!
>>
>> "Here, we present three high-resolution crystal structures of the
>> Thermus thermophilus (Tth) 70S ribosome in complex with RMF, HPF, or
>> YfiA that were refined by using data extending to 3.0 Å (I/sI = 1),
>> 3.1 Å (I/sI = 1), and 2.75 Å (I/sI = 1) resolution, respectively. The
>> resolutions at which I/sI = 2 are 3.2 Å, 3.4 Å, and 2.9 Å,
>> respectively."
>>
>
>   I don't see how you can avoid something like this.  With the new,
> higher, resolution limits for data (which are good things) people will
> tend to assume that a "2.6 A resolution model" will have roughly the
> same quality as a "2.6 A resolution model" from five years ago when
> the old criteria were used.  K&D show that the weak high resolution
> data contain useful information but certainly not as much information
> as the data with stronger intensity.
>
>   The resolution limit of the data set has been such an important
> indicator of the quality of the resulting model (rightly or wrongly)
> that it often is included in the title of the paper itself.  Despite
> the fact that we now want to include more, weak, data than before
> we need to continue to have a quality indicator that readers can
> use to assess the models they are reading about.  While cumbersome,
> one solution is to state what the resolution limit would have been
> had the old criteria been used, as was done in the paper you quote.
> This simply gives the reader a measure they can compare to their
> previous experiences.
>
>   Now would be a good time to break with tradition and institute
> a new measure of quality of diffraction data sets.  I believe several
> have been proposed over the years, but have simply not caught on.
> SFCHECK produces an "optical resolution".  Could this be used in
> the title of papers?  I don't believe it is sensitive to the cutoff
> resolution and it produces values that are consistent with what the
> readers are used to.  With this solution people could include whatever
> noisy data they want and not be guilty of overstating the quality of
> their model.
>
> Dale Tronrud
>
>> JPK
>>
>>
>>
>> On Thu, May 31, 2012 at 1:59 PM, Edward A. Berry  wrote:
>>> Yes! I want a copy of this program RESCUT.
>>>
>>> REMARK 200  R SYM FOR SHELL            (I) : 1.21700
>>> I noticed structure 3RKO reported Rmerge in the last shell greater
>>> than 1, suggesting the police who were defending R-merge were fighting
>>> a losing battle. And this provides a lot of ammunition to those
>>> they are fighting.
>>>
>>> Jacob Keller wrote:

>>>> Dear Crystallographers,
>>>>
>>>> in case you have not heard, it would appear that the Rmerge statistic
>>>> has died as of the publication of PMID: 22628654. Ding Dong...?
>>>>
>>>> JPK
>>>>
>>>> --
>>>> ***
>>>> Jacob Pearson Keller
>>>> Northwestern University
>>>> Medical Scientist Training Program
>>>> email: j-kell...@northwestern.edu
>>>> ***

>>>
>>
>>
>>
>> --
>> ***
>> Jacob Pearson Keller
>> Northwestern University
>> Medical Scientist Training Program
>> email: j-kell...@northwestern.edu
>> ***
>>
>>



-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Fwd: [ccp4bb] Death of Rmerge

2012-05-31 Thread Dale Tronrud
On 05/31/12 12:07, Jacob Keller wrote:
> Alas, how many lines like the following from a recent Science paper
> (PMID: 22605777), probably reviewer-incited, could have been avoided!
>
> "Here, we present three high-resolution crystal structures of the
> Thermus thermophilus (Tth) 70S ribosome in complex with RMF, HPF, or
> YfiA that were refined by using data extending to 3.0 Å (I/sI = 1),
> 3.1 Å (I/sI = 1), and 2.75 Å (I/sI = 1) resolution, respectively. The
> resolutions at which I/sI = 2 are 3.2 Å, 3.4 Å, and 2.9 Å,
> respectively."
>

   I don't see how you can avoid something like this.  With the new,
higher, resolution limits for data (which are good things) people will
tend to assume that a "2.6 A resolution model" will have roughly the
same quality as a "2.6 A resolution model" from five years ago when
the old criteria were used.  K&D show that the weak high resolution
data contain useful information but certainly not as much information
as the data with stronger intensity.

   The resolution limit of the data set has been such an important
indicator of the quality of the resulting model (rightly or wrongly)
that it often is included in the title of the paper itself.  Despite
the fact that we now want to include more, weak, data than before
we need to continue to have a quality indicator that readers can
use to assess the models they are reading about.  While cumbersome,
one solution is to state what the resolution limit would have been
had the old criteria been used, as was done in the paper you quote.
This simply gives the reader a measure they can compare to their
previous experiences.

   Now would be a good time to break with tradition and institute
a new measure of quality of diffraction data sets.  I believe several
have been proposed over the years, but have simply not caught on.
SFCHECK produces an "optical resolution".  Could this be used in
the title of papers?  I don't believe it is sensitive to the cutoff
resolution and it produces values that are consistent with what the
readers are used to.  With this solution people could include whatever
noisy data they want and not be guilty of overstating the quality of
their model.

Dale Tronrud

> JPK
> 
> 
> 
> On Thu, May 31, 2012 at 1:59 PM, Edward A. Berry  wrote:
>> Yes! I want a copy of this program RESCUT.
>>
>> REMARK 200  R SYM FOR SHELL(I) : 1.21700
>> I noticed structure 3RKO reported Rmerge in the last shell greater
>> than 1, suggesting the police who were defending R-merge were fighting
>> a losing battle. And this provides a lot of ammunition to those
>> they are fighting.
>>
>> Jacob Keller wrote:
>>>
>>> Dear Crystallographers,
>>>
>>> in case you have not heard, it would appear that the Rmerge statistic
>>> has died as of the publication of  PMID: 22628654. Ding Dong...?
>>>
>>> JPK
>>>
>>> --
>>> ***
>>> Jacob Pearson Keller
>>> Northwestern University
>>> Medical Scientist Training Program
>>> email: j-kell...@northwestern.edu
>>> ***
>>>
>>
> 
> 
> 
> --
> ***
> Jacob Pearson Keller
> Northwestern University
> Medical Scientist Training Program
> email: j-kell...@northwestern.edu
> ***
> 
> 


[ccp4bb] Fwd: [ccp4bb] Death of Rmerge

2012-05-31 Thread Jacob Keller
Alas, how many lines like the following from a recent Science paper
(PMID: 22605777), probably reviewer-incited, could have been avoided!

"Here, we present three high-resolution crystal structures of the
Thermus thermophilus (Tth) 70S ribosome in complex with RMF, HPF, or
YfiA that were refined by using data extending to 3.0 Å (I/sI = 1),
3.1 Å (I/sI = 1), and 2.75 Å (I/sI = 1) resolution, respectively. The
resolutions at which I/sI = 2 are 3.2 Å, 3.4 Å, and 2.9 Å,
respectively."

JPK



On Thu, May 31, 2012 at 1:59 PM, Edward A. Berry  wrote:
> Yes! I want a copy of this program RESCUT.
>
> REMARK 200  R SYM FOR SHELL            (I) : 1.21700
> I noticed structure 3RKO reported Rmerge in the last shell greater
> than 1, suggesting the police who were defending R-merge were fighting
> a losing battle. And this provides a lot of ammunition to those
> they are fighting.
>
> Jacob Keller wrote:
>>
>> Dear Crystallographers,
>>
>> in case you have not heard, it would appear that the Rmerge statistic
>> has died as of the publication of  PMID: 22628654. Ding Dong...?
>>
>> JPK
>>
>> --
>> ***
>> Jacob Pearson Keller
>> Northwestern University
>> Medical Scientist Training Program
>> email: j-kell...@northwestern.edu
>> ***
>>
>



--
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Death of Rmerge

2012-05-31 Thread Edward A. Berry

Yes! I want a copy of this program RESCUT.

REMARK 200  R SYM FOR SHELL(I) : 1.21700
I noticed structure 3RKO reported Rmerge in the last shell greater
than 1, suggesting the police who were defending R-merge were fighting
a losing battle. And this provides a lot of ammunition to those
they are fighting.

Jacob Keller wrote:

Dear Crystallographers,

in case you have not heard, it would appear that the Rmerge statistic
has died as of the publication of  PMID: 22628654. Ding Dong...?

JPK

--
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***



Re: [ccp4bb] Death of Rmerge

2012-05-31 Thread Soisson, Stephen M
Rpim (from the East) just told me "You don't need to be helped any longer. 
You've always had the power to go back to Kansas."

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
Oganesyan, Vaheh
Sent: Thursday, May 31, 2012 2:27 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Death of Rmerge

It wasn't doing well lately. So, it was expected.


From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Jacob Keller 
[j-kell...@fsm.northwestern.edu]
Sent: Thursday, May 31, 2012 2:20 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Death of Rmerge

Dear Crystallographers,

in case you have not heard, it would appear that the Rmerge statistic
has died as of the publication of  PMID: 22628654. Ding Dong...?

JPK

--
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Death of Rmerge

2012-05-31 Thread Oganesyan, Vaheh
It wasn't doing well lately. So, it was expected.


From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Jacob Keller 
[j-kell...@fsm.northwestern.edu]
Sent: Thursday, May 31, 2012 2:20 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Death of Rmerge

Dear Crystallographers,

in case you have not heard, it would appear that the Rmerge statistic
has died as of the publication of  PMID: 22628654. Ding Dong...?

JPK

--
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Death of Rmerge

2012-05-31 Thread Jacob Keller
Meant to include the following link in the previous message:

http://www.youtube.com/watch?v=9Jn8K8EA7-Q

JPK



On Thu, May 31, 2012 at 1:20 PM, Jacob Keller
 wrote:
> Dear Crystallographers,
>
> in case you have not heard, it would appear that the Rmerge statistic
> has died as of the publication of  PMID: 22628654. Ding Dong...?
>
> JPK
>
> --
> ***
> Jacob Pearson Keller
> Northwestern University
> Medical Scientist Training Program
> email: j-kell...@northwestern.edu
> ***



-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***