Hi
I seem to be getting a lot of outliers rejected by Phaser with data processed
with the latest ctruncate that are not present when the data is processed with
the older version (or old truncate) - has something changed in the code
that would cause this?
With CCP4 6.4: ctruncate
Hmm - Phaser doesn't usually use such high-resolution data? Surprised you
are getting anything from resolutions higher than 2 Å.
Whether the intensity at that resolution is meaningful would need careful
inspection of the truncate logs - is the Wilson plot reasonable? Are the
4th moments linear?
Hi Randy,
So I've been playing around with equations myself, and I have some alternative
results.
As I understand your Mathematica stuff, you are using the data model:

ip = ij + ib

where ip is the measured peak (before any background correction), and ij is a
random sample from the
On 8 July 2013 18:29, Douglas Theobald dtheob...@brandeis.edu wrote:
That's all very interesting --- do you have a good reference for TDS
(thermal diffuse scattering) where I can read up on the theory/practice? My
protein crystallography books say even less than SJ about TDS. Anyway, this
appears to be a problem
beyond the
On 6/28/2013 5:13 PM, Douglas Theobald wrote:
I admittedly don't understand TDS well. But I thought it was generally
assumed that TDS contributes rather little to the conventional
background measurement outside of the spot (so Stout and Jensen tell
me :). So I was not even really considering
On 29 June 2013 01:13, Douglas Theobald dtheob...@brandeis.edu wrote:
Just because the detectors spit out positive numbers (unsigned ints) does
not mean that those values are Poisson distributed. As I understand it,
the readout can introduce non-Poisson noise, which is usually modeled as
The dominant source of error in an intensity measurement actually
depends on the magnitude of the intensity. For intensities near zero
and with zero background, the read-out noise of image plate or
CCD-based detectors becomes important. On most modern CCD detectors,
however, the read-out
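The point about competing error sources can be made concrete with a toy combined error model (my own illustrative sketch, not any particular detector's calibration; the read noise of 10 counts is an assumed value):

```python
import math

def sigma_intensity(counts, read_noise=10.0):
    """Toy combined error model: Poisson counting variance (= counts)
    plus a Gaussian read-out variance, both in detector counts.
    The read_noise value is an illustrative assumption."""
    return math.sqrt(counts + read_noise ** 2)

# Near zero signal the read-out term dominates:
print(sigma_intensity(0))      # 10.0, pure read noise
# For a strong spot the Poisson term dominates:
print(sigma_intensity(1e6))    # ~1000, essentially sqrt(counts)
```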
Hi James,
On Sat, Jul 6, 2013 at 6:31 PM, James Holton jmhol...@lbl.gov wrote:
I think it is also important to point out here that the resolution
cutoff of the data you provide to refmac or phenix.refine is not
necessarily the resolution of the structure. This latter quantity,
although
On 21 June 2013 13:36, Ed Pozharski epozh...@umaryland.edu wrote:
Replacing Iobs with E(J) is not only unnecessary, it's ill-advised as it
will distort intensity statistics.
On 21 June 2013 18:40, Ed Pozharski epozh...@umaryland.edu wrote:
I think this is exactly what I was trying to
Ed, sorry, not sure what happened to the 1st attachment, it seems to have
vanished!
Cheers
-- Ian
attachment: Ltest-1.png
On 22 June 2013 19:39, Douglas Theobald dtheob...@brandeis.edu wrote:
So I'm no detector expert by any means, but I have been assured by those
who are that there are non-Poissonian sources of noise --- I believe mostly
in the readout, when photon counts get amplified. Of course this will
From: Jrh [jrhelliw...@gmail.com]
Sent: Monday, June 24, 2013 12:13 AM
To: Terwilliger, Thomas C
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] ctruncate bug?
Dear Tom,
I find this suggestion of using the full images an excellent and visionary one.
So, how
of Douglas Theobald
[dtheob...@brandeis.edu]
Sent: Sunday, June 23, 2013 1:52 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] ctruncate bug?
On Jun 22, 2013, at 6:18 PM, Frank von Delft frank.vonde...@sgc.ox.ac.uk
wrote:
A fascinating discussion (I've learnt a lot!); a quick sanity check
On 21 June 2013 19:45, Douglas Theobald dtheob...@brandeis.edu wrote:
The current way of doing things is summarized by Ed's equation:
Ispot-Iback=Iobs. Here Ispot is the # of counts in the spot (the area
encompassing the predicted reflection), and Iback is # of counts in the
background
Ian, I really do think we are almost saying the same thing. Let me try to
clarify.
You say that the Gaussian model is not the correct data model, and that
the Poisson is correct. I more-or-less agree. If I were being pedantic
(me?) I would say that the Poisson is *more* physically realistic
On Sat, Jun 22, 2013 at 1:04 PM, Douglas Theobald dtheob...@brandeis.edu wrote:
Feel free to prove me wrong --- can you derive Ispot-Iback, as an estimate
of Itrue, from anything besides a Gaussian?
OK, I'll prove myself wrong. Ispot-Iback can be derived as an estimate of
Itrue, even when
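For what it's worth, that derivation takes only a few lines. Assuming (my simplification, ignoring unequal spot/background areas and detector gain) that the spot and background counts are independent Poisson variables with rates J + B and B, the joint maximum-likelihood estimate of J is exactly Ispot - Iback:

```latex
I_{\mathrm{spot}} \sim \mathrm{Pois}(J + B), \qquad
I_{\mathrm{back}} \sim \mathrm{Pois}(B)

\ell(J,B) = I_{\mathrm{spot}} \ln(J+B) - (J+B)
          + I_{\mathrm{back}} \ln B - B + \mathrm{const}

\frac{\partial \ell}{\partial J}
  = \frac{I_{\mathrm{spot}}}{J+B} - 1 = 0
  \;\Rightarrow\; \hat{J} + \hat{B} = I_{\mathrm{spot}}

\frac{\partial \ell}{\partial B}
  = \frac{I_{\mathrm{spot}}}{J+B} - 1 + \frac{I_{\mathrm{back}}}{B} - 1 = 0
  \;\Rightarrow\; \hat{B} = I_{\mathrm{back}}

\Rightarrow\; \hat{J} = I_{\mathrm{spot}} - I_{\mathrm{back}}
```

Note the joint MLE is unconstrained, so J-hat can be negative; no Gaussian assumption is involved.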
On 22 June 2013 18:04, Douglas Theobald dtheob...@brandeis.edu wrote:
Ian, I really do think we are almost saying the same thing. Let me try to
clarify.
I agree, but still only almost!
--- but in truth the Poisson model does not account for other physical
sources of error that arise
On Sat, Jun 22, 2013 at 1:56 PM, Ian Tickle ianj...@gmail.com wrote:
On 22 June 2013 18:04, Douglas Theobald dtheob...@brandeis.edu wrote:
--- but in truth the Poisson model does not account for other physical
sources of error that arise from real crystals and real detectors, such as
dark
A fascinating discussion (I've learnt a lot!); a quick sanity check,
though:
In what scenarios would these improved estimates make a significant
difference?
Or rather: are there any existing programs (as opposed to vapourware)
that would benefit significantly?
Cheers
phx
On Jun 22, 2013, at 6:18 PM, Frank von Delft frank.vonde...@sgc.ox.ac.uk
wrote:
A fascinating discussion (I've learnt a lot!); a quick sanity check, though:
In what scenarios would these improved estimates make a significant
difference?
Who knows? I always think that improved
On Sat, Jun 22, 2013 at 3:18 PM, Frank von Delft
frank.vonde...@sgc.ox.ac.uk wrote:
In what scenarios would these improved estimates make a significant
difference?
Perhaps datasets where an unusually large number of reflections are very
weak, for instance where TNCS is present, or where the
I agree with Frank. This thread has been fascinating and educational. Thanks
to all. Ron
On 21 June 2013 13:36, Ed Pozharski epozh...@umaryland.edu wrote:
Replacing Iobs with E(J) is not only unnecessary, it's ill-advised as it
will distort intensity statistics. For example, let's say you have
translational NCS aligned with crystallographic axes, and hence some set of
On Jun 21, 2013, at 8:36 AM, Ed Pozharski epozh...@umaryland.edu wrote:
On 06/20/2013 01:07 PM, Douglas Theobald wrote:
How can there be nothing wrong with something that is unphysical?
Intensities cannot be negative.
I think you are confusing two things - the true intensities and
On 21 June 2013 17:10, Douglas Theobald dtheob...@brandeis.edu wrote:
Yes there is. The only way you can get a negative estimate is to make
unphysical assumptions. Namely, the estimate Ispot-Iback=Iobs assumes that
both the true value of I and the background noise come from a Gaussian
On 06/21/2013 10:19 AM, Ian Tickle wrote:
If you observe the symptoms of translational NCS in the diffraction
pattern (i.e. systematically weak zones of reflections) you must take
it into account when calculating the averages, i.e. if you do it
properly, parity groups should be normalised
I kinda think we're saying the same thing, sort of.
You don't like the Gaussian assumption, and neither do I. If you make the
reasonable Poisson assumptions, then you don't get the Ispot-Iback=Iobs for the
best estimate of Itrue. Except as an approximation for large values, but we
are
On Jun 21, 2013, at 2:48 PM, Ed Pozharski epozh...@umaryland.edu wrote:
Douglas,
Observed intensities are the best estimates that we can come up with in an
experiment.
I also agree with this, and this is the clincher. You are arguing that
Ispot-Iback=Iobs is the best estimate we can come
On Jun 21, 2013, at 2:52 PM, James Holton jmhol...@lbl.gov wrote:
Yes, but the DIFFERENCE between two Poisson-distributed values can be
negative. This is, unfortunately, what you get when you subtract the
background out from under a spot. Perhaps this is the source of confusion
here?
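A quick numerical illustration of James's point, with toy rates of my choosing and a stdlib-only Poisson sampler (Knuth's method, fine for small means):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small rates used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

rng = random.Random(0)
J_true, B_true = 2.0, 5.0          # weak spot over a modest background (toy values)
diffs = [poisson(J_true + B_true, rng) - poisson(B_true, rng)
         for _ in range(100_000)]

mean_diff = sum(diffs) / len(diffs)            # ~2: unbiased for J_true
frac_neg = sum(d < 0 for d in diffs) / len(diffs)
print(mean_diff, frac_neg)   # a sizable fraction of the differences is negative
```

Both Poisson draws are non-negative, yet their difference goes negative for a large fraction of such weak reflections, while still averaging to the true spot intensity.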
However you decide to argue the point, you must consider _all_ the observations
of a reflection (replicates and symmetry related) together when you infer Itrue
or F etc, otherwise you will bias the result even more. Thus you cannot
(easily) do it during integration
Phil
Sent from my iPad
As a maybe better alternative, we should (once again) consider refining
against intensities (and I guess George Sheldrick would agree here).
I have a simple question - what exactly, short of some sort of historic inertia
(or memory lapse), is the reason NOT to refine against intensities?
Just trying to understand the basic issues here. How could refining directly
against intensities solve the fundamental problem of negative intensity values?
On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:
As a maybe better alternative, we should (once again)
If you are refining against F's you have to find some way to avoid
calculating the square root of a negative number. That is why people
have historically rejected negative I's and why Truncate and cTruncate
were invented.
When refining against I, the calculation of (Iobs - Icalc)^2
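The arithmetic point can be sketched in a few lines of Python (generic unweighted residuals, not the actual target function of refmac or phenix.refine):

```python
import math

def f_residual(i_obs, i_calc):
    """Amplitude-based residual: requires sqrt(Iobs), so a negative
    observed intensity has no valid amplitude."""
    return (math.sqrt(i_obs) - math.sqrt(i_calc)) ** 2

def i_residual(i_obs, i_calc):
    """Intensity-based residual: well defined for Iobs of either sign."""
    return (i_obs - i_calc) ** 2

print(i_residual(-3.0, 1.5))       # 20.25: a negative Iobs is no problem
try:
    f_residual(-3.0, 1.5)
except ValueError:
    print("sqrt of a negative Iobs fails")  # why negative Is were rejected
```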
Yes higher R factors is the usual reason people don't like I-based
refinement!
Anyway, refining against Is doesn't solve the problem, it only postpones
it: you still need the Fs for maps! (though errors in Fs may be less
critical then).
-- Ian
On 20 June 2013 17:20, Dale Tronrud
...@brandeis.edu]
Sent: 20 June 2013 17:49
To: Bellini, Domenico (DLSLtd,RAL,DIA); ccp4bb
Subject: Re: [ccp4bb] ctruncate bug?
Seems to me that the negative Is should be dealt with early on, in the
integration step. Why exactly do integration programs report negative Is to
begin with?
the background and push all
the Is to positive values?
More of a question rather than a suggestion ...
D
On 20 June 2013 20:46, Douglas Theobald dtheob...@brandeis.edu wrote:
Well, I tend to think Ian is probably right, that doing things the
proper way (vs French-Wilson) will not make much of a difference in the
end.
Nevertheless, I don't think refining against the (possibly negative)
Hi James,
Concerning XDSCONV, I cannot reproduce your plot. A Linux (64-bit) program
test_xdsconv, which allows you to input I, sigI, <I>, and mode, where
I: measured intensity
sigI: sigma(I)
<I>: average I in the resolution shell
mode: -1/0/1 for truncated normal/acentric/centric prior
is at
On Wed, 19 Jun 2013 14:19:19 +0100, Kay Diederichs
kay.diederi...@uni-konstanz.de wrote:
I wonder if problem b) is why Evans and Murshudov observe little contribution
of reflections in shells with CC1/2 below 0.27 in one of their test cases,
which had very anisotropic data.
sorry, forgot the
To add to the discussion, a plot of the acentric KW from -10 to 10 (normalised
wrt sqrt(sigma)): ftp://ftp.ccp4.ac.uk/ccb/aZF2.pdf;
black dots are F/sqrt(sigma) while blue is the corresponding plot for sigma
The value drops from 0.42 to 0.28 going from h = -4 to h = -10.
Note: for this we are
Dear Kay and Jeff,
frankly, I do not see much justification for any rejection based on an
h-cutoff.
French & Wilson only talk about an I/sigI cutoff, which also warrants further
scrutiny. It probably could be argued that reflections with I/sigI < -4
are still more likely to be weak than strong, so F~0
Hi Ed,
While I don't think French and Wilson argue explicitly for the h < -4.0
requirement in their main manuscript, if you look at the source code
included in the supplementary material for this paper, they include this in
their implementation, which is what I worked from.
Charles, do you happen
On Wed, 19 Jun 2013 11:01:22 -0400, Ed Pozharski epozh...@umaryland.edu wrote:
Dear Kay and Jeff,
frankly, I do not see much justification for any rejection based on
h-cutoff.
I agree
Dear Ed,
AFAIK James Holton found the same issue, and a similar problem also existed in
XDSCONV. In my view, it is an example of the problem that most programs so far
have dealt with weak data in a suboptimal way, and have undergone little
testing with such data.
The latest version of XDSCONV
Hi Kay - could you elaborate on how the latest version of XDSCONV fixes
this? (A look around The Google did not help me.)
Cheers
Frank
On 18/06/2013 11:38, Kay Diederichs wrote:
Dear Ed,
AFAIK James Holton found the same issue, and a similar problem also existed in
XDSCONV. In my view,
Hi Frank,
older versions of XDSCONV, for datasets with weak high-resolution data, printed
a long list starting with:
SUSPICIOUS REFLECTIONS NOT INCLUDED IN OUTPUT DATA SET
(at most 100 are listed below)
Hi Ed,
Thanks for including the code block.
I've looked back over the FW paper, and the reason for the h < -4.0 cutoff
is that the entire premise assumes that the true intensities are normally
distributed, and the formulation breaks down for an outlier that far out in
the tail. For most datasets I haven't
Actually, Jeff, the problem goes even deeper than that. Have a look at these
Wilson plots:
http://bl831.als.lbl.gov/~jamesh/wilson/wilsons.png
For these plots I took Fs from a unit cell full of a random collection of
atoms, squared them, added Gaussian noise with RMS = 1, and then ran them back
Hi Jeff,
what I did in XDSCONV is to mitigate the numerical difficulties associated with
low h (called Score in XDSCONV output) values, and I removed the h < -4
cutoff. The more negative h becomes, the closer to zero the resulting
amplitude is, so not applying an h cutoff makes sense (to me,
I noticed something strange when processing a dataset with iMosflm. The
final output ctruncate_etc.mtz contains IMEAN and F columns, where F
should be the conversion according to French & Wilson. The problem is that
IMEAN has no missing values (100% complete) while F has about 1500
missing (~97%
Hi Ed,
I'm not directly familiar with the ctruncate implementation of French and
Wilson, but from the implementation that I put into Phenix (based on the
original FW paper) I can tell you that any reflection where (I/sigI) -
(sigI/mean_intensity) is less than a defined cutoff (in our case -4.0),
Jeff,
thanks - I can see the same equation and cutoff applied in the ctruncate
source. Here is the relevant part of the code:
// Bayesian statistics tells us to modify I/sigma by subtracting off sigma/S,
// where S is the mean intensity in the resolution shell
h =
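Putting the prose and the code fragment together, the test described above can be sketched in Python (function names are mine; the -4.0 cutoff is the Phenix value quoted earlier in the thread):

```python
def fw_h(i_obs, sig_i, shell_mean):
    """The h score discussed in this thread: I/sigI reduced by sigI/S,
    where S is the mean intensity in the reflection's resolution shell."""
    return i_obs / sig_i - sig_i / shell_mean

def is_rejected(i_obs, sig_i, shell_mean, cutoff=-4.0):
    """True if the reflection falls below the h cutoff."""
    return fw_h(i_obs, sig_i, shell_mean) < cutoff

print(fw_h(-10.0, 2.0, 5.0))          # -5.4 -> rejected
print(is_rejected(-2.0, 2.0, 5.0))    # False: mildly negative I survives
```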