>>62500 is < 40^3, so ±20 indices on each axis.
50Å / 20 = 2.5Å,  so not quite 2.5Å resolution

Nice--thanks for calculating that. I couldn't remember how to do it off-hand, 
and I guess my over-estimate comes from most protein crystals having some 
symmetry. I don't think it really affects the question, though--do you?
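
For my own benefit, here is the arithmetic spelled out, plus a toy estimate of 
what symmetry might buy. The 222 point group and its multiplicity of 4 below 
are purely illustrative assumptions of mine:

# Back-of-the-envelope: 62500 reflections, 50Å cubic cell, no symmetry.
n_obs = 62500                          # measured hkl reflections
a = 50.0                               # cell edge, Å
h_max = round(n_obs ** (1 / 3)) // 2   # ~40 index values per axis -> -20..+20
print(a / h_max)                       # -> 2.5Å nominal resolution

# Toy symmetry correction: in point group 222 (multiplicity 4 -- my own
# illustrative assumption), each unique reflection stands in for 4 symmetry
# mates, so the same number of unique reflections reaches further out in
# reciprocal space.
n_equiv = n_obs * 4
h_max_sym = n_equiv ** (1 / 3) / 2     # ~31.5 -> indices out to about ±31
print(a / h_max_sym)                   # -> ~1.6Å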

>>All that proves is that assigning each 1x1x1 voxel a separate density value 
>>is a very inefficient use of information.  Adjacent voxels are not 
>>independent, and no possible assignment of values will get around the 
>>inherent blockiness of the representation.

I'm not sure what this means--what is the precise definition or measure of 
"efficient use of information"? Like a compression algorithm? Are diffraction 
data sets like compressed data?

Also, the "blockiness" of representation is totally ancillary--you can do all 
of the smoothing you want, I think, and the voxel map will still be basically 
lousy. No?
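
To make my compression question concrete: a map band-limited to h_max = 20 
terms but stored on a fine voxel grid carries far fewer independent numbers 
than it has voxels--which I take to be the claimed inefficiency. A minimal 1D 
numpy sketch (the grid size and cutoff are arbitrary choices of mine):

import numpy as np

# A 1D "density" band-limited to h_max = 20 Fourier terms, on 200 voxels.
a, h_max, n_grid = 50.0, 20, 200
rng = np.random.default_rng(0)
F = np.zeros(n_grid, dtype=complex)
F[1:h_max + 1] = rng.normal(size=h_max) + 1j * rng.normal(size=h_max)
F[-h_max:] = np.conj(F[h_max:0:-1])    # Friedel mates, so the map is real
rho = np.fft.ifft(F).real              # the smooth voxel map

# Going back: only 2*h_max of the 200 coefficients are nonzero, yet we
# stored 200 voxel values.  Adjacent voxels are correlated, not independent.
coeffs = np.fft.fft(rho)
print((np.abs(coeffs) > 1e-9).sum(), "independent terms vs", n_grid, "voxels")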

>>I know!  Instead of assigning a magnitude per voxel, let's assign a 
>>magnitude per something-resolution-sensitive, like a sine wave.   Then for 
>>each hkl measurement we get one sine-wave term.   Add up all the sine waves 
>>and what do you get?  Ta da.  A nice map.
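
Taking that recipe literally in 1D--one amplitude/phase term per index h, 
summed as cosine waves (the amplitudes and phases are invented purely for 
illustration):

import numpy as np

# Direct 1D Fourier synthesis: one wave per "hkl" measurement, then sum.
a, h_max = 50.0, 20
x = np.linspace(0.0, a, 500)                # positions along the cell, Å
rng = np.random.default_rng(1)
amp = rng.uniform(0.5, 2.0, size=h_max)     # |F_h| for h = 1..20, invented
phase = rng.uniform(0.0, 2 * np.pi, size=h_max)

rho = np.zeros_like(x)
for h in range(1, h_max + 1):               # add up all the sine waves...
    rho += amp[h - 1] * np.cos(2 * np.pi * h * x / a - phase[h - 1])

print(rho.min(), rho.max())                 # ...ta da: a smooth density profile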

It was good of proto-crystallographers to invent diffraction as a way to apply 
Fourier series. I don't know--it seems funny to me that diffraction somehow 
harnesses "efficient information use" whereas the voxel map does not. I am 
looking for more insight into this.

>>Aren't Fourier series marvelous?

Well, I have always liked FTs, but your explanations are not particularly 
enlightening to me yet.

I will reiterate that the reason I brought this up is that the imaging world 
might learn a lot from crystallography's incredible extraction of all possible 
information through the use of priors and modelling.

Also, I hope you noticed that all of the parameters I assumed for the 
crystallographic data set were extremely optimistic; in reality the 
information content would be far less.

One could compare the information content of the derived structure to that of 
the measurements to get a metric for "information extraction," and this could 
be applied across many types of experiment in different fields. I nominate 
crystallography for the best ratio.
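
For concreteness, here is a deliberately crude version of that metric. Every 
number below is a made-up placeholder, not a statistic from any real 
experiment:

# Crude "information extraction" ratio: bits needed to state the refined
# model vs. bits in the measurements.  All counts are hypothetical.
n_atoms = 3000                  # a protein of roughly 400 residues
params_per_atom = 4             # x, y, z, B
bits_per_param = 16             # generous precision per refined parameter
model_bits = n_atoms * params_per_atom * bits_per_param

n_refl = 62500                  # measured reflections, as above
bits_per_obs = 32               # intensity + sigma at modest precision
data_bits = n_refl * bits_per_obs

print(model_bits / data_bits)   # ~0.1: the model "compresses" the data ~10x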

JPK

> Assuming that it is apt, however: is this a possible way to see the power of 
> all of our Bayesian modelling? Could one use our modelling tools on such a 
> grainy picture and arrive at similar results?
>
> Are our data sets really this poor in information, and do we just model the 
> heck out of them, as perhaps evidenced by our scarily low data-to-parameter 
> ratios?
> 
> My underlying motivation in this thought experiment is to illustrate the 
> richness in information (and poorness of modelling) that one achieves in 
> fluorescence microscopic imaging. If crystallography is any measure of the 
> power of modelling, one could really go to town on some of these terabyte 5D 
> functional data sets we see around here at Janelia (and on YouTube).
> 
> What do you think?
> 
> Jacob Keller
> 
> +++++++++++++++++++++++++++++++++++++++++++++++++
> Jacob Pearson Keller
> Research Scientist / Looger Lab
> HHMI Janelia Research Campus
> 19700 Helix Dr, Ashburn, VA 20147
> (571)209-4000 x3159
> +++++++++++++++++++++++++++++++++++++++++++++++++
> 

--
Ethan A Merritt, Dept of Biochemistry
Biomolecular Structure Center,  K-428 Health Sciences Bldg
MS 357742,   University of Washington, Seattle 98195-7742
