On Jun 27, 2013, at 12:30 PM, Ian Tickle <ianj...@gmail.com> wrote:

> On 22 June 2013 19:39, Douglas Theobald <dtheob...@brandeis.edu> wrote:
> 
>> So I'm no detector expert by any means, but I have been assured by those who 
>> are that there are non-Poissonian sources of noise --- I believe mostly in 
>> the readout, when photon counts get amplified.  Of course this will depend 
>> on the exact type of detector, maybe the newest have only Poisson noise.
> 
> Sorry for the delay in responding; I've been thinking about it.  It's indeed 
> possible that the older detectors had non-Poissonian noise as you say, but 
> AFAIK all detectors return _unsigned_ integers (unless possibly the number is 
> to be interpreted as a flag to indicate some error condition, but then 
> obviously you wouldn't interpret it as a count).  So whatever the detector 
> AFAIK it's physically impossible for it to return a negative number that is 
> to be interpreted as a photon count (of course the integration program may 
> interpret the count as a _signed_ integer but that's purely a technical 
> software issue).  

Just because the detectors spit out positive numbers (unsigned ints) does not 
mean that those values are Poisson distributed.  As I understand it, the 
readout can introduce non-Poisson noise, which is usually modeled as Gaussian.  
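
To make the point concrete, here is a minimal sketch (Python/numpy; the 
mean of 5 photons and readout sigma of 2 are made-up numbers, not any real 
detector's spec) of the usual additive read-noise model: Poisson photon 
arrivals plus Gaussian readout error, rounded and clipped so the detector 
still reports a non-negative integer.

import numpy as np

rng = np.random.default_rng(0)
photons = rng.poisson(lam=5.0, size=100_000)               # Poisson arrivals
read_noise = rng.normal(loc=0.0, scale=2.0, size=photons.size)
counts = np.clip(np.rint(photons + read_noise), 0, None)   # non-negative ints

print(counts.mean(), counts.var())   # var ~ mean + 2**2, so no longer Poisson

The output is still an unsigned count, but its variance exceeds its mean by 
roughly the readout variance, which is exactly the non-Poisson behavior I 
mean.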

> I think we're all at least agreed that, whatever the true distribution of 
> Ispot (and Iback) is, it's not in general Gaussian, except as an 
> approximation in the limit of large Ispot and Iback (with the proviso that 
> under this approximation Ispot & Iback can never be negative).  Certainly the 
> assumption (again AFAIK) has always been that var(count) = count and I think 
> I'm right in saying that only a Poisson distribution has that property?

I think you mean that the Poisson has the property that mean(x) = var(x) (and 
since the ML estimate of the mean = count, you get your equation).  Many other 
distributions can approximate that (most of the binomial variants with small 
p).  Also, the standard gamma distribution with scale parameter=1 has that 
exact property.  
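
Both claims are easy to check numerically; here's a quick sketch using 
numpy's standard samplers (the mean of 7 is arbitrary):

import numpy as np

rng = np.random.default_rng(1)
pois = rng.poisson(lam=7.0, size=1_000_000)
gam = rng.gamma(shape=7.0, scale=1.0, size=1_000_000)

print(pois.mean(), pois.var())   # both ~ 7
print(gam.mean(), gam.var())     # both ~ 7 too, yet the gamma is continuous

So mean = var by itself can't single out the Poisson.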

>> No, it's just terminology.  For you, Iobs is defined as Ispot-Iback, and 
>> that's fine.  (As an aside, assuming the Poisson model, this Iobs will have 
>> a Skellam distribution, which can take negative values and asymptotically 
>> approaches a Gaussian.)  The photons contributed to Ispot from Itrue will 
>> still be Poisson.  Let's call them something besides Iobs, how about Ireal?  
>> Then, the Poisson model is
>> 
>> Ispot = Ireal + Iback'
>> 
>> where Ireal comes from a Poisson with mean Itrue, and Iback' comes from a 
>> Poisson with mean Iback_true.  The same likelihood function follows, as well 
>> as the same points.  You're correct that we can't directly estimate Iback', 
>> but I assume that Iback (the counts around the spot) come from the same 
>> Poisson with mean Iback_true (as usual).  
>> 
>> So I would say, sure, you have defined Iobs, and it has a Skellam 
>> distribution, but what, if anything, does that Iobs have to do with Itrue?  
>> My point still holds, that your Iobs is not a valid estimate of Itrue when 
>> Ispot<Iback.  Iobs as an estimate of Itrue requires unphysical assumptions, 
>> namely that photon counts can be negative.  It is impossible to derive 
>> Ispot-Iback as an estimate for Itrue (when Ispot<Iback) *unless* you make 
>> that unphysical assumption (like the Gaussian model).
> 
> Please note that I have never claimed that Iobs = Ispot - Iback is to be 
> interpreted as an estimate of Itrue, indeed quite the opposite: I agree 
> completely that Iobs has little to do with Itrue when Iobs is negative.  In 
> fact I don't believe anyone else is claiming that Iobs is to be interpreted 
> as an estimate of Itrue either, so maybe this is the source of the 
> misunderstanding?  

Maybe it is, but that has its own problems.  I imagine that most people who 
collect an X-ray dataset think that the intensities in their mtz are indeed 
estimates of the true intensities from their crystal.  That seems like a 
reasonable thing to expect, especially since the Fourier transform of our 
model is supposed to predict Itrue.  If Iobs is not an estimate of Itrue, 
what exactly is its relevance to the structure inference problem?  Maybe it 
only serves as a way-station on the road to the French-Wilson correction?  
As I understand it, not everyone uses ctruncate.  

> Certainly for me Ispot - Iback is merely the difference between the two 
> measurements, nothing more.  Maybe if we called it something other than Iobs 
> (say Idiff), or even avoided giving it a name altogether that would avoid any 
> further confusion?  Perhaps this whole discussion has been merely about 
> terminology?
>  
>> I'm also puzzled as to your claim that Iback' is not Poisson.  I don't think 
>> your QM argument is relevant, since we can imagine what we would have 
>> detected at the spot if we'd blocked the reflection, and that # of photon 
>> counts would be Poisson.  That is precisely the conventional logic behind 
>> estimating Iback' with Iback (from around the spot), it's supposedly a 
>> reasonable control.  It doesn't matter that in reality the photons are 
>> indistinguishable --- that's exactly what the probability model is for.  
>  
> I'm not clear how you would "block the reflection"?  How could you do that 
> without also blocking the background under it?  A large part of the 
> background comes from the TDS which is coming from the same place that the 
> Bragg diffraction is coming from, i.e. the crystal.  I know of no way of 
> stopping the Bragg diffraction without also stopping the TDS (or vice versa). 
>  Indeed the theory shows that there is in reality no distinction between 
> Bragg diffraction and TDS; they are just components of the total scattering 
> that we find convenient to imagine as separate in the dynamical model of 
> scattering (see http://people.cryst.bbk.ac.uk/~tickle/iucr99/s61.html for the 
> relevant equations).

I admittedly don't understand TDS well.  But I thought it was generally assumed 
that TDS contributes rather little to the conventional background measurement 
outside of the spot (or so Stout and Jensen tell me :).  So I was not even really 
considering TDS, which I see as a different problem from measuring background 
(am I mistaken here?).  I thought the background we measure (in the area 
surrounding the spot) mostly came from diffuse solvent scatter, air scatter, 
loop scatter, etc.  If so, then we can just consider Itrue = Ibragg + Itds, and 
worry about modeling the different components of Itrue at a different stage.  
And then it would make sense to think about blocking a reflection (say, with a 
minuscule, precisely positioned beam stop very near the crystal) and measuring 
the background in the spot where the reflection would hit.  That background 
should be approximated pretty well by Iback, the background around the spot 
(especially if we move far enough away from the spot so that TDS is negligible 
there).  

> Any given photon "experiences" the whole crystal on its way from the source 
> to the detector (in fact it experiences more than that: it traverses all 
> possible trajectories simultaneously, it's just that the vast majority cancel 
> by destructive interference).  The resulting wave function of the photon only 
> collapses to a single point on hitting the detector, with a frequency 
> proportional to the square of the wave function at that point, so it's 
> meaningless to talk about the trajectory of an individual photon or whether 
> it "belongs" to Ireal or Iback'.  You can't talk about the error distribution 
> of the experimental measurements of some quantity if it's a physical 
> impossibility to design an experiment to measure it!  It can of course have a 
> probability distribution derived from prior knowledge of the properties of 
> crystals, but that's not a Poisson, it's a Wilson (exponential) distribution. 
>  Is that what you're thinking of?  According to QM the only real quantities 
> are the observables (or functions of observables); in this case only Ispot, 
> Iback and Ispot-Iback (and any other functions of Ispot & Iback that might be 
> relevant) are physically meaningful quantities, all else is mere speculation, 
> i.e. part of the model.

Ahh, this all seems a bit too philosophical: what's really real and what's not 
really real.  There are of course many different observationally equivalent QM 
interpretations, not just the one you espouse above (e.g., "the only real 
quantities are the observables" and "wave function collapse" talk).  I won't go 
down the QM woo road -- next thing we'll confirm Godwin's law and start talking 
about Nazis ... (Blargh! there I did it :)  Anyway, I don't think any of this 
matters for the practice of probabilistic inference.  I can model the 
background from solvent/air scatter as Poisson (as we're dealing with photons 
that are well known to have Poisson distributions), and this background adds to 
the intensity from the coherent scatter of the crystal (which is also from 
Poissonian photons) --- giving the sum of two Poisson variates, which is itself 
a Poisson variate.  If, OTOH, we can't validly use that model, then I don't see 
any justification for F&W's method (see below).
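
The additivity claim is also easy to verify numerically.  A sketch (the 
means 3 and 4 are arbitrary) comparing the empirical distribution of the sum 
of two Poisson variates against a single Poisson with the summed mean:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
back = rng.poisson(lam=3.0, size=200_000)    # background photons
bragg = rng.poisson(lam=4.0, size=200_000)   # coherent-scatter photons
total = back + bragg

values, counts = np.unique(total, return_counts=True)
emp = counts / counts.sum()
print(np.abs(emp - stats.poisson.pmf(values, mu=7.0)).max())   # ~ 0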

> As I understand it the reason you are suggesting an alternate way of 
> estimating Itrue is that you have a fundamental objection to the F & W 
> algorithm?  However I'm not clear precisely what you find objectionable?  

I actually don't want to rag on F&W too much --- the method is clever and I'm 
inherently fond of Bayesian applications.  I agree that their Gaussian 
approximation will work at high intensities, and I also suspect that it's 
probably "good enough" (perhaps even very good, except at very low photon 
counts).  

But in the spirit of trying to do things as well as possible, I guess my 
problem with F&W is threefold: 

(1) It works with Ispot-Iback, which seems arbitrary (perhaps just an 
anachronism at this point?), and intuitively I suspect that using Ispot-Iback 
discards some important information (e.g., Ispot-Iback is one value instead of 
two).  

(2) It seems the method is really a sort of kludge --- we have some 
measurement, one that is evidently not an estimate of what we're trying to 
measure, and then we use F&W to fix that measurement so that we do get an 
estimate of what we're actually trying to measure.  Can't we just get our 
estimate of Itrue from the get-go?

(3) It makes a strong Gaussian assumption for strictly non-Gaussian data, an 
assumption that must break down at some point (see the sketch after this 
list).  We know photon counts are non-negative and, in the absence of other 
added noise, Poisson.  Why not use that?  My unease here is heightened by the 
fact that I don't exactly understand the F&W derivation, or where exactly the 
Gaussian approximation comes in --- it seems there's been a bit of sleight of 
hand in producing their equations (there is no formal derivation in their ACA 
1978 paper), and I don't have a feel for where the Gaussian approximation 
works and where it breaks down.  
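
On point (3), here is the sketch I promised.  One way to see where the 
Gaussian approximation must fail, assuming (my reading, not F&W's exact 
formulation) that the Gaussian in question is N(mu, mu): at low means it 
assigns appreciable probability to negative counts, which a Poisson never 
does.

import numpy as np
from scipy import stats

for mu in (1.0, 5.0, 25.0, 100.0):
    p_neg = stats.norm.cdf(0.0, loc=mu, scale=np.sqrt(mu))
    print(mu, p_neg)   # mass below zero: sizable at mu = 1, negligible by 100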

> Perhaps it would be useful to go through F & W in detail and identify where 
> the problem (if any) lies?
> 
> We can say that the total likelihood of J (= Itrue) given Is (= Ispot) and Ib 
> (= Iback) is equal to the prior probability density of J given only knowledge 
> of the crystal (i.e. the estimated no of atoms from which we can calculate 
> E(J)), multiplied by the joint probability density of Is and Ib given J and 
> its SD (assumed equal to the SD of Is-Ib):
> 
>      P(J | Is,Ib) = P(J | E(J)) P(Is,Ib | J,sdJ)

So, the equation you have above is not quite correct, assuming that P(.) is a 
probability distribution.  To get an equation for P(J | Is,Ib), we have to use 
Bayes theorem, so your RHS should be divided by a normalization constant:

P(J | Is,Ib) = P(J | E(J)) P(Is,Ib | J,sdJ) / P(Is,Ib)

(or just replace your '=' with a proportionality)
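
As a toy illustration of why that normalization matters, here's a 
discretized version of the corrected equation.  Purely for illustration I'm 
assuming a Wilson-style exponential prior on J and a Gaussian likelihood for 
Is-Ib; the particular numbers are invented:

import numpy as np
from scipy import stats

J = np.linspace(0.0, 100.0, 10_001)           # grid over the true intensity
dJ = J[1] - J[0]
prior = stats.expon.pdf(J, scale=20.0)        # P(J | E(J) = 20)
lik = stats.norm.pdf(5.0, loc=J, scale=6.0)   # P(Is-Ib = 5 | J, sdJ = 6)
post = prior * lik
post /= post.sum() * dJ                       # divide by P(Is,Ib)

print((J * post).sum() * dJ)   # posterior mean of J, an F&W-style estimate

Without the division the right-hand side doesn't integrate to 1 over J, so 
it isn't a probability density.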

> The only function of Is and Ib that's relevant to the joint distribution of 
> Is and Ib given J and sdJ, P(Is,Ib | J), is the difference Is-Ib (at least 
> for large Is and Ib: I don't know what happens if they are small).  

So why are we using Is-Ib and not {Is,Ib}?  First, note that:

P(Is,Ib | J,sdJ,B,sdB) = P(Is | Ib,J,sdJ) P(Ib | B,sdB)

since Is and Ib are dependent. I've augmented the notation some, to show the 
explicit dependence on parameters that will be important later.  

Note that if you want to work as if Is and Ib are independent, then 

P(Is,Ib | J,sdJ,B,sdB) = P(Is | J,sdJ) P(Ib | B,sdB)

But then you've got the likelihood function P(Is | J,sdJ), which is all you 
need to find an ML (or Bayesian) estimate of J given data Is.  So if Is and Ib 
are independent, there's no need for F&W at all.  
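
To show what I mean, here is a minimal sketch of estimating J directly from 
the pair (Is, Ib) under the Poisson model Is ~ Poisson(J+B), Ib ~ Poisson(B) 
with J, B >= 0.  The crude grid search is only for clarity; nobody should do 
it this way in production:

import numpy as np
from scipy import stats

def ml_J(Is, Ib, top=50.0, n=2001):
    # profile the joint Poisson likelihood over J, maximizing over B at each J
    Jgrid = np.linspace(0.0, top, n)
    Bgrid = np.linspace(1e-6, top, n)
    best_J, best_ll = 0.0, -np.inf
    for J in Jgrid:
        ll = (stats.poisson.logpmf(Is, J + Bgrid) +
              stats.poisson.logpmf(Ib, Bgrid)).max()
        if ll > best_ll:
            best_J, best_ll = J, ll
    return best_J

print(ml_J(10, 4))   # ~ 6, agrees with Is - Ib when Is > Ib
print(ml_J(3, 8))    # 0: the Poisson ML estimate is never negative

Note the Is < Ib case: the estimate pins at zero instead of going negative, 
which is exactly the behavior Is - Ib can't give you.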

> Note that it's perfectly proper to talk about P(Is-Ib | J) in this context: 
> it's the distribution of the difference you expect to observe given any J.

I agree with that, you're right, assuming you can make practical sense of 
P(Is-Ib | J) ...

> So the above can be rewritten as:
> 
>      P(J | Is-Ib) = P(J | E(J)) P(Is-Ib | J,sdJ)
> 
> P(Is-Ib | J,sdJ) is just the Gaussian error distribution of Is-Ib making use 
> of the Gaussian approximation of the Poisson.  

But where does P(Is-Ib | J,sdJ) actually come from?  Can you derive it?  It's 
not immediately obvious to me how I could predict Is-Ib if all you gave me 
were the values of J and sdJ.  In fact, I don't think it's possible.  What you 
need is P(Is-Ib | J,sdJ,B,sdB), as I showed above.  Now you and F&W say that 
P(Is-Ib | J,sdJ,B,sdB) is Gaussian, but where exactly does the Gaussian 
approximation come in?

For example, I've been pushing the model:

Is = Ib + Ij

where Ib comes from P(Ib|B,sdB) and Ij comes from P(Ij|J,sdJ).  Given this 
model, I can derive F&W (at least one way, there may be others).  From theory, 
both of those distributions are Poisson, and so for large values of B and J, 
both distributions will approximate a Gaussian.  So Is will also be Gaussian, 
from N(Is | B+J,sds) (sds^2=sdB^2+sdJ^2).  It follows then that Id=Is-Ib will 
also be Gaussian, from N(Id | J,sdd), where sdd^2=sds^2+sdB^2 because the 
measured Ib carries its own independent Poisson error; so sdd is necessarily 
larger than sds.  
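
Here's the variance claim in simulation form, assuming Is ~ Poisson(J+B) 
and an independent background measurement Ibm ~ Poisson(B); J = 20 and 
B = 15 are arbitrary:

import numpy as np

rng = np.random.default_rng(3)
J, B = 20.0, 15.0
Is = rng.poisson(lam=J + B, size=500_000)
Ibm = rng.poisson(lam=B, size=500_000)    # independent measured background
Id = Is - Ibm

print(Is.var())         # ~ J + B = 35
print(Id.var())         # ~ J + 2B = 50: subtracting adds the background's variance
print((Id < 0).mean())  # and Id goes negative some fraction of the time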

So it already seems to me that by using Is-Ib, the std dev of J will be larger 
than it needs to be --- we should be able to do better if we don't subtract the 
variates.  And the way I derived this, the Gaussian approximation is applied to 
both Ib and Ij, which is exactly where we don't need it --- supposedly F&W 
applies to weak observations, not strong ones.  

But I'm probably missing something. 


> Finally, integrating out J to get the expectation of J (or of F=sqrt(J)) 
> completes the F-W procedure.  As indicated earlier there are good reasons to 
> postpone this until after merging equivalents (which is exactly what we do 
> now).
> 
> So what's wrong with that?
> 
> Cheers
> 
> -- Ian
> 
