Re: [ccp4bb] ice rings

2018-07-20 Thread Clemens Vonrhein
Dear Chen,

you should only exclude resolution ranges when you actually have
ice-ring contamination in your data: did you decide on that by hand,
or is it based in your case on some automatic analysis? If you have
this situation then the completeness will indeed go down - after all,
there are possible reflections that you didn't observe (since you
excluded the resolution ranges in which they occur).

If you are worried about this, you could try the automatic detection
and treatment of ice-rings during processing, e.g. in autoPROC (see
[1] and [2]): it will only exclude ice-ring resolution ranges that are
actually detected, and it will keep each excluded range as narrow as
possible/necessary to avoid rejecting otherwise good data when
processing data in XDS.

Cheers

Clemens

[1] www.globalphasing.com/autoproc/
[2] www.globalphasing.com/autoproc/manual/autoPROC7.html#step1_spotnohkl
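For a quick sanity check outside any particular package: hexagonal ice rings sit at well-known d-spacings, so flagging reflections that fall in narrow windows around them is easy to script. A minimal sketch - the ring list below is the commonly quoted approximate hexagonal-ice set, and the +/-0.02 A window width is an arbitrary illustrative choice, not autoPROC's actual algorithm:

```python
# Approximate d-spacings (Angstrom) of hexagonal ice rings; the window
# half-width of 0.02 A is an assumption for illustration only.
ICE_RINGS = [3.90, 3.67, 3.44, 2.67, 2.25, 2.07, 1.95, 1.92, 1.88]

def excluded_ranges(half_width=0.02):
    """Return (d_min, d_max) windows to exclude around each ice ring."""
    return [(d - half_width, d + half_width) for d in ICE_RINGS]

def in_ice_ring(d, half_width=0.02):
    """True if a reflection at d-spacing d falls inside an ice-ring window."""
    return any(lo <= d <= hi for lo, hi in excluded_ranges(half_width))

print(in_ice_ring(3.90))   # inside the 3.90 A ring window -> True
print(in_ice_ring(3.55))   # between rings, kept -> False
```

The narrower the windows, the fewer good reflections are lost - which is exactly the trade-off autoPROC automates.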



On Fri, Jul 20, 2018 at 10:49:48AM -0400, CPMAS Chen wrote:
> Hi, All CCP4 users,
> 
> This might be a little off-topic, but I cannot find a mailing list for XDS.
> During XDS processing, we can exclude resolution ranges to "remove" ice
> rings.
> In my case, when I excluded these ranges, the completeness in these ranges
> will be much lower, ~45%.
> 
> If I include these resolution ranges, the completeness is more than 90%.
> However, looking at Mn(I/sd), it does not follow Wilson's law at high
> resolution.
> Is there a compromise, excluding only part of these regions?
> 
> Thanks!

[ccp4bb] ice rings

2018-07-20 Thread CPMAS Chen
Hi, All CCP4 users,

This might be a little off-topic, but I cannot find a mailing list for XDS.
During XDS processing, we can exclude resolution ranges to "remove" ice
rings.
In my case, when I excluded these ranges, the completeness in these ranges
is much lower, ~45%.
  N   1/d^2   Dmid   Nmeas   Nref  Ncent  %poss  C%poss  Mlplct  AnoCmp  AnoFrc  AnoMlt
 16  0.0582   4.15   25565   4013     54   95.2    98.9     6.4    94.6    99.2     3.2
 17  0.0619   4.02   26217   4167     54   95.6    98.6     6.3    94.7    99.0     3.2
 18  0.0657   3.90   11393   1973     23   44.4    94.2     5.8    41.2    92.5     3.1
 19  0.0694   3.79   31643   4556     60   99.0    94.6     6.9    98.7    99.7     3.5
 20  0.0732   3.70   14014   2149     30   45.4    90.9     6.5    43.7    96.1     3.4
 21  0.0769   3.61   29456   4242     57   87.5    90.7     6.9    86.1    98.4     3.5
 22  0.0807   3.52   35722   4940     61   99.3    91.3     7.2    99.2    99.8     3.6
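Note where the completeness dips in the table above: a quick check (values transcribed from the table; the 0.05 A tolerance is an arbitrary illustrative choice) shows that exactly the two weak shells sit on the two strongest hexagonal ice-ring spacings near 3.90 and 3.67 A:

```python
# Shell number -> (Dmid in A, %poss), transcribed from the table above.
bins = {16: (4.15, 95.2), 17: (4.02, 95.6), 18: (3.90, 44.4),
        19: (3.79, 99.0), 20: (3.70, 45.4), 21: (3.61, 87.5),
        22: (3.52, 99.3)}
ICE = (3.90, 3.67)   # strongest hexagonal ice-ring d-spacings (approx.)

# Shells with completeness below 50% ...
suspect = [n for n, (dmid, poss) in sorted(bins.items()) if poss < 50]
# ... all lie within 0.05 A of an ice ring:
on_ring = [n for n in suspect
           if min(abs(bins[n][0] - d) for d in ICE) < 0.05]
print(suspect, on_ring)   # [18, 20] [18, 20]
```

In other words, the ~45% shells are exactly the excluded ice-ring shells, which is the expected behaviour, not an error.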

If I include these resolution ranges, the completeness is more than 90%.
However, looking at Mn(I/sd), it does not follow Wilson's law at high
resolution.
Is there a compromise, excluding only part of these regions?

Thanks!
  N   1/d^2   Dmid   Rmrg  Rfull   Rcum  Rmeas   Rpim  Nmeas   AvI  RMSdev  sd  I/RMS  Mn(I/sd)  FrcBias  Chi^2  Chi^2c
  1  0.0017  24.37  0.037  0.037  0.037  0.043  0.022   4236  1235     112  65   11.0      42.7        -   1.30    0.97
  2  0.0050  14.07  0.041  0.041  0.038  0.047  0.024   8627   389      32  24   12.1      37.3        -   0.99    0.98
  3  0.0084  10.90  0.040  0.040  0.039  0.046  0.023  11346   438      37  26   11.7      37.3        -   0.97    0.96
  4  0.0118   9.21  0.044  0.044  0.040  0.051  0.026  13482   275      24  18   11.3      33.1        -   0.95    0.94
  5  0.0151   8.12  0.051  0.051  0.041  0.061  0.034  11303   148      15  12   10.1      23.1        -   0.93    0.88
  6  0.0185   7.35  0.070  0.070  0.043  0.084  0.045  14124    84      10   9    8.3      18.7        -   0.93    0.93
  7  0.0219   6.76  0.086  0.086  0.045  0.102  0.054  15700    65       9   8    7.1      15.9        -   0.98    0.97
  8  0.0252   6.29  0.100  0.100  0.047  0.118  0.062  17497    54       8   8    6.4      14.3        -   0.96    0.96
  9  0.0286   5.91  0.106  0.106  0.050  0.124  0.065  18961    54       9   8    6.1      14.0        -   0.95    0.94
 10  0.0320   5.59  0.113  0.113  0.053  0.133  0.068  20406    52       9   9    5.8      13.5        -   0.95    0.95
 11  0.0353   5.32  0.119  0.119  0.056  0.138  0.071  21836    53      10   9    5.5      13.2        -   0.98    0.98
 12  0.0387   5.08  0.111  0.111  0.059  0.129  0.067  22914    64      11  10    5.9      14.2        -   1.01    1.00
 13  0.0421   4.87  0.117  0.117  0.062  0.137  0.070  23980    63      11  10    5.6      13.6        -   1.02    1.01
 14  0.0454   4.69  0.127  0.127  0.065  0.149  0.076  24916    61      12  11    5.2      12.8        -   1.02    1.01
 15  0.0488   4.53  0.162  0.162  0.069  0.189  0.097  26030    47      11  11    4.1      10.5        -   1.01    1.01
 16  0.0522   4.38  0.202  0.202  0.073  0.235  0.120  26790    38      12  11    3.3       8.9        -   0.99    0.99
 17  0.0555   4.24  0.261  0.261  0.078  0.306  0.158  26547    30      12  11    2.6       7.1        -   1.01    1.01
 18  0.0589   4.12  0.316  0.316  0.081  0.387  0.220  21957    24      12  11    2.0       5.2        -   1.00    0.96
 19  0.0623   4.01  0.498  0.498  0.086  0.602  0.333  24279    16      13  12    1.3       3.7        -   1.01    0.98
 20  0.0656   3.90  0.588  0.588  0.094  0.718  0.404  25888    21      33  13    0.6       3.9        -   3.12    1.24
 21  0.0690   3.81  0.822  0.822  0.101  0.974  0.515  28128    12      15  14    0.8       2.7        -   1.03    1.00
 22  0.0724   3.72  1.140  1.140  0.109  1.347  0.709  29583     9      18  15    0.5       2.0        -   1.25    1.01
 23  0.0757   3.63  0.819  0.819  0.118  0.989  0.543  30705    18      58  16    0.3       2.2        -   1.81    1.09
 24  0.0791   3.56  1.643  1.643  0.128  1.926  0.995  31626     7      18  16    0.4       1.5        -   1.03    1.00
 25  0.0825   3.48  2.505  2.505  0.139  2.946  1.534  32327     5      23  17    0.2       1.0        -   1.33    1.06
 26  0.0858   3.41  1.601  1.601  0.151  1.925  1.051  32349    10      52  18    0.2       1.4        -   2.72    1.17
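One way to read the second table: under Wilson statistics, ln<I> falls roughly linearly with 1/d^2, so the shell-averaged intensity should decrease monotonically at high resolution. A crude sketch of that check (the (1/d^2, AvI) pairs are transcribed from the high-resolution shells above; a monotonic-decay test is a rough stand-in for a real Wilson-plot fit):

```python
import math

# (1/d^2, AvI) for the high-resolution shells, transcribed from above.
shells = [(0.0522, 38), (0.0555, 30), (0.0589, 24), (0.0623, 16),
          (0.0656, 21), (0.0690, 12), (0.0724, 9), (0.0757, 18),
          (0.0791, 7), (0.0825, 5), (0.0858, 10)]

# Shells where the mean intensity rises relative to the previous shell,
# contrary to the expected Wilson fall-off.
suspects = [cur for prev, cur in zip(shells, shells[1:]) if cur[1] > prev[1]]
d_bumps = [round(1.0 / math.sqrt(s), 2) for s, _ in suspects]
print(d_bumps)   # [3.9, 3.63, 3.41] - near the ~3.90/3.67/3.44 A ice rings
```

The non-Wilson behaviour Chen describes thus comes from intensity bumps at the ice-ring spacings themselves, not from the protein lattice.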

-- 

***

Charles Chen

Research Instructor

University of Pittsburgh School of Medicine

Department of Anesthesiology

**



To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1




Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-14 Thread James Holton

On 10/11/2011 12:33 PM, Garib N Murshudov wrote:

We need a better way of estimating unobserved reflections.


Indeed we do!  Because this appears to be the sum total of how the 
correctness of the structure is judged.  It is easy to forget, I think, 
that from the point of view of the refinement program, all reflections 
flagged as belonging to the free set are, in effect, missing.   So 
Rfree is really just a score for how well DFc agrees with Fobs?


-James Holton
MAD Scientist


Re: [ccp4bb] Ice rings...

2011-10-14 Thread James Holton


Automated outlier rejection in scaling will handle a lot of things, 
including ice.  Works better with high multiplicity.  Unless, of course, 
your ice rings are even, then any integration error due to ice will be 
the same for all the symmetry mates and the scaling program will be none 
the wiser.  That said, the integration programs these days tend to have 
pretty sensible defaults for rejecting spots that have weird 
backgrounds.  Plenty of structures get solved from data that has 
horrible-looking ice rings using just the defaults.  In fact, I am 
personally unconvinced that ice rings are a significant problem in and 
of themselves.  More often, they are simply an indication that something 
else is wrong, like the crystal warmed up at some point.


  Nevertheless, if you suspect your ice rings are causing a problem, 
you can try to do something about them.  The deice program already 
mentioned sounds cool, but if you just want to try something quick, 
excluding the resolution ranges of your ice rings can be done in sftools 
like this:

select resol > 3.89
select resol < 3.93
absent col F SIGF DANO SIGDANO if col F > 0

and repeat this for each resolution range you want to exclude.  Best to 
get these ranges from your integration program's graphics display.
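The same bookkeeping can be scripted directly if you prefer. A minimal sketch - the orthorhombic cell constants and the Miller indices are made up for illustration, and the 3.89-3.93 A window echoes the sftools example above:

```python
import math

# Hypothetical orthorhombic cell edges (Angstrom) and ice-ring window.
A, B, C = 50.0, 60.0, 70.0
EXCLUDE = [(3.89, 3.93)]   # same window as the sftools example above

def d_spacing(h, k, l):
    """Resolution of reflection (h,k,l) in an orthorhombic cell."""
    inv_d2 = (h / A) ** 2 + (k / B) ** 2 + (l / C) ** 2
    return 1.0 / math.sqrt(inv_d2)

hkls = [(10, 0, 0), (12, 4, 4), (14, 2, 2)]
kept = [hkl for hkl in hkls
        if not any(lo <= d_spacing(*hkl) <= hi for lo, hi in EXCLUDE)]
print(kept)   # (12, 4, 4) sits at ~3.91 A and is dropped
```

As with the sftools commands, the work is in choosing the windows; the filter itself is trivial.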


In mosflm, you can put EXCLUDE ICE on either the AUTOINDEX or 
RESOLUTION keywords and have any spots on the canonical hexagonal ice 
spacings removed automatically.  The problem with excluding resolution 
ranges, of course, is that your particular ice rings may not be where 
they are supposed to be.  Either due to something physical, like the 
cooling rate, or something artificial, like an error in the camera 
parameters.  It is also possible that what you think are ice rings are 
actually salt rings.  Some salts will precipitate out upon 
cryo-cooling.  Large ice/salt crystals can also produce a lot of 
non-Bragg scatter, which means that you can get sharp features far away 
from the resolution range you expect.  On the other hand, if you have 
cubic ice instead of hexagonal ice (very common in MX samples), then 
there are no rings at 3.91A, 3.45A, 2.68A and throwing out these 
resolution ranges would be a waste.


Another way to exclude ice is to crank up background-based rejection 
criteria.  In denzo/HKL2K, you do this with the "reject fraction" 
keyword, and in mosflm, REJECT MINBG does pretty much the same thing.  
There are lots of rejection options in integration programs, and which 
one works in your particular case depends on what your ice rings look 
like.  No one has written a machine-vision type program that can 
recognize and handle all the cases.  You will need to play with these 
options until the spots you don't like turn red in the display.


Of course, the best way to deal with ice rings would be to inspect each 
and every one of the spots you have near ice rings and decide on its 
intensity manually.  Then edit the hkl file.



Which brings me to perhaps a more important point: What, exactly, is the 
problem you are having that makes you think the ice rings are to 
blame?  Can't get an MR solution?  Can't get MAD/SAD phases?


Ice has a bad rep in MX, and an undeserved one IMHO.  In fact, by 
controlling either cryoprotectant concentration or cooling rate 
carefully, you can achieve a mixture of amorphous and cubic ice, and 
this mixture has a specific volume (density) intermediate between the 
two.  Many crystals diffract much better when you are able to match the 
specific volume of the stuff in the solvent channels to the specific 
volume the protein lattice is trying to achieve on its own.  A great deal 
of effort has gone into characterizing this phenomenon (authors: Juers, 
Weik, Warkentin, Thorne and many others), but I often meet frustrated 
cryo-screeners who seem to have never heard of any of it!


 In general, the automated outlier rejection protocols employed by 
modern software have taken care of most of the problems ice rings 
introduce.  For example, difference Pattersons are VERY sensitive to 
outliers, and all it takes is one bad spot to give you huge ripples that 
swamp all your peaks, but every heavy-atom finding program I am aware of 
calculates Pattersons only after first doing an outlier rejection 
step.  You might also think that ice rings would mess up your preciously 
subtle anomalous differences, but again, outlier rejection to the rescue.


Now, that said, depending on automated outlier rejection to save you is 
of course a questionable policy, but it is an equally bad idea to 
pretend that it doesn't exist either.  It is funny how in MX we are all 
ready to grab our torch and pitchfork if we hear of someone manually 
editing their hkl files to get rid of reflections they don't like, but 
as long as the software does it, it is okay.  Plausible deniability 
runs deep.



-James Holton
MAD Scientist


On 10/11/2011 8:16 AM, Francis E Reyes wrote:

All,


So I have two intense ice rings where there appear to be 

Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-13 Thread Tim Gruene

I am glad the structures that have been solved using the
free-lunch-algorithm as implemented in shelxe did not know they were not
allowed to be solved. Of course there is DM involved, as has been
pointed out ;-)

On 10/12/2011 10:12 PM, Edward A. Berry wrote:
 Tim Gruene wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1


 On 10/11/2011 09:58 PM, Ethan Merritt wrote:
 On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
 In the limit yes. however limit is when we do not have solution,
 i.e. when model errors are very large.  In the limit map
 coefficients will be 0 even for 2mFo-DFc maps. In refinement we have
 some model. At the moment we have choice between 0 and DFc. 0 is not
 the best estimate as Ed rightly points out. We replace (I am sorry
 for self promotion, nevertheless: Murshudov et al, 1997) absent
 reflection with DFc, but it introduces bias. Bias becomes stronger
 as the number of absent reflections become larger. We need better
 way of estimating unobserved reflections. In statistics there are
 few appraoches. None of them is full proof, all of them are
 computationally expensive. One of the techniques is called multiple
 imputation.

 I don't quite follow how one would generate multiple imputations in
 this case.

 Would this be equivalent to generating a map from (Nobs - N) refls, then
 filling in F_estimate for those N refls by back-transforming the map?
 Sort of like phase extension, except generating new Fs rather than
 new phases?

 Some people call this the free-lunch-algorithm ;-)
 Tim

 Doesn't work- the Fourier transform is invertable. As someone already
 said in this
 thread, if the map was made with coefficients of zero for certain
 reflections
 (which is equivalent to omitting those reflections) The back-transform will
 give zero for those reflections. Unless you do some density modification
 first.
 So free-lunch is a good name- there aint no such thing!
 

--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Eleanor Dodson
Here we are I presume only worried about strong reflections lost behind 
an ice ring. At least that is where the discussion began.


Isn't the best approach to this problem to use integration software which 
attempts to give a measurement, albeit with a high error estimate?


The discussion has strayed into what to do with incomplete data sets.
In these cases there might be something to learn from the Free Lunch 
ideas used in ACORN and SHELX and other programs - set the missing 
reflections to E=1, and normalise them properly to an appropriate amplitude.


Eleanor


On 10/11/2011 08:33 PM, Garib N Murshudov wrote:

In the limit yes, however the limit is when we do not have a solution, i.e. when model errors are very large.  In 
the limit map coefficients will be 0 even for 2mFo-DFc maps. In refinement we have some model. At the moment 
we have a choice between 0 and DFc. 0 is not the best estimate, as Ed rightly points out. We replace (I am sorry 
for self promotion, nevertheless: Murshudov et al, 1997) absent reflections with DFc, but it 
introduces bias. Bias becomes stronger as the number of absent reflections becomes larger. We need a 
better way of estimating unobserved reflections. In statistics there are a few approaches. None of 
them is foolproof; all of them are computationally expensive. One of the techniques is called multiple 
imputation. It may give better refinement behaviour and a less biased map. Another one is integration over all 
errors (too many parameters for numerical integration, and there is no closed-form formula) of the model as well 
as the experimental data. This would give a less biased map with more pronounced signal.


Regards
Garib


On 11 Oct 2011, at 20:15, Randy Read wrote:


If the model is really bad and sigmaA is estimated properly, then sigmaA will 
be close to zero so that D (sigmaA times a scale factor) will be close to zero. 
 So in the limit of a completely useless model, the two methods of map 
calculation converge.

Regards,

Randy Read

On 11 Oct 2011, at 19:47, Ed Pozharski wrote:


On Tue, 2011-10-11 at 10:47 -0700, Pavel Afonine wrote:

better, but not always. What about, say, an 80% complete dataset?
Filling in 20% of Fcalc (or DFcalc, or bin-averaged Fobs, or else - it
doesn't matter, since the phase will dominate anyway) will highly bias
the map towards the model.


DFc, if properly calculated, is the maximum likelihood estimate of the
observed amplitude.  I'd say that 0 is by far the worst possible
estimate, as Fobs are really never exactly zero.  Not sure what the
situation would be when it's better to use Fo=0, perhaps if the model is
grossly incorrect?  But in that case the completeness may be the least
of my worries.

Indeed, phases drive most of the model bias, not amplitudes.  If model
is good and phases are good then the DFc will be a much better estimate
than zero.  If model is bad and phases are bad then filling in missing
reflections will not increase bias too much.  But replacing them with
zeros will introduce extra noise.  In particular, the ice rings may mess
things up and cause ripples.

On a practical side, one can always compare the maps with and without
missing reflections.

--
After much deep and profound brain things inside my head,
I have decided to thank you for bringing peace to our home.
   Julian, King of Lemurs


--
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research  Tel: + 44 1223 336500
Wellcome Trust/MRC Building   Fax: + 44 1223 336827
Hills RoadE-mail: rj...@cam.ac.uk
Cambridge CB2 0XY, U.K.   www-structmed.cimr.cam.ac.uk


Garib N Murshudov
Structural Studies Division
MRC Laboratory of Molecular Biology
Hills Road
Cambridge
CB2 0QH UK
Email: ga...@mrc-lmb.cam.ac.uk
Web http://www.mrc-lmb.cam.ac.uk






Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Tim Gruene


On 10/11/2011 09:58 PM, Ethan Merritt wrote:
 On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
 In the limit yes. however limit is when we do not have solution, i.e. when 
 model errors are very large.  In the limit map coefficients will be 0 even 
 for 2mFo-DFc maps. In refinement we have some model. At the moment we have 
 choice between 0 and DFc. 0 is not the best estimate as Ed rightly points 
 out. We replace (I am sorry for self promotion, nevertheless: Murshudov et 
 al, 1997) absent reflection with DFc, but it introduces bias. Bias becomes 
 stronger as the number of absent reflections become larger. We need better 
 way of estimating unobserved reflections. In statistics there are few 
 appraoches. None of them is full proof, all of them are computationally 
 expensive. One of the techniques is called multiple imputation.
 
 I don't quite follow how one would generate multiple imputations in this case.
 
 Would this be equivalent to generating a map from (Nobs - N) refls, then
 filling in F_estimate for those N refls by back-transforming the map?
 Sort of like phase extension, except generating new Fs rather than new phases?

Some people call this the free-lunch-algorithm ;-)
Tim

   Ethan
 [...]
--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Edward A. Berry

Tim Gruene wrote:



On 10/11/2011 09:58 PM, Ethan Merritt wrote:

On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:

In the limit yes. however limit is when we do not have solution, i.e. when model errors are very large.  In 
the limit map coefficients will be 0 even for 2mFo-DFc maps. In refinement we have some model. At the moment 
we have choice between 0 and DFc. 0 is not the best estimate as Ed rightly points out. We replace (I am sorry 
for self promotion, nevertheless: Murshudov et al, 1997) absent reflection with DFc, but it 
introduces bias. Bias becomes stronger as the number of absent reflections become larger. We need 
better way of estimating unobserved reflections. In statistics there are few appraoches. None of 
them is full proof, all of them are computationally expensive. One of the techniques is called multiple 
imputation.


I don't quite follow how one would generate multiple imputations in this case.

Would this be equivalent to generating a map from (Nobs - N) refls, then
filling in F_estimate for those N refls by back-transforming the map?
Sort of like phase extension, except generating new Fs rather than new phases?


Some people call this the free-lunch-algorithm ;-)
Tim


Doesn't work - the Fourier transform is invertible. As someone already said in 
this
thread, if the map was made with coefficients of zero for certain reflections
(which is equivalent to omitting those reflections) the back-transform will
give zero for those reflections. Unless you do some density modification first.
So free-lunch is a good name - there ain't no such thing!
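Berry's point is easy to verify numerically: a discrete Fourier transform is exactly invertible, so a coefficient set to zero comes back as zero unless the map is modified in between. A toy 1-D sketch (pure-Python DFT; the density values are invented and the positivity clamp is only a crude stand-in for real density modification):

```python
import cmath

N = 8

def dft(x):
    """Forward discrete Fourier transform of a length-N sequence."""
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the synthesized 'map'."""
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

rho = [3.0, 1.0, -1.0, 0.5, 2.0, -0.5, 1.5, 0.0]  # toy 1-D "density"
F = dft(rho)
F[3] = F[5] = 0.0   # drop a "reflection" and its Friedel mate (keeps rho real)

rho_zero = idft(F)              # map computed with the reflection set to zero
F_back = dft(rho_zero)
print(abs(F_back[3]) < 1e-9)    # True: back-transform returns exactly zero

rho_mod = [max(0.0, x) for x in rho_zero]   # crude positivity "DM" step
F_dm = dft(rho_mod)
print(abs(F_dm[3]) > 1e-3)      # True: a non-zero estimate appears
```

The zeroed coefficient only becomes non-zero after the clamp, which is precisely why the free-lunch procedure needs density modification to do anything at all.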


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Ethan Merritt
On Wednesday, October 12, 2011 01:12:11 pm Edward A. Berry wrote:
 Tim Gruene wrote:
 
 
  On 10/11/2011 09:58 PM, Ethan Merritt wrote:
  On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
  In the limit yes. however limit is when we do not have solution, i.e. 
  when model errors are very large.  In the limit map coefficients will be 
  0 even for 2mFo-DFc maps. In refinement we have some model. At the moment 
  we have choice between 0 and DFc. 0 is not the best estimate as Ed 
  rightly points out. We replace (I am sorry for self promotion, 
  nevertheless: Murshudov et al, 1997) absent reflection with DFc, but it 
  introduces bias. Bias becomes stronger as the number of absent 
  reflections become larger. We need better way of estimating unobserved 
  reflections. In statistics there are few appraoches. None of them is full 
  proof, all of them are computationally expensive. One of the techniques 
  is called multiple imputation.
 
  I don't quite follow how one would generate multiple imputations in this 
  case.
 
  Would this be equivalent to generating a map from (Nobs - N) refls, then
  filling in F_estimate for those N refls by back-transforming the map?
  Sort of like phase extension, except generating new Fs rather than new 
  phases?
 
  Some people call this the free-lunch-algorithm ;-)
  Tim
 
 Doesn't work- the Fourier transform is invertable. As someone already said in 
 this
 thread, if the map was made with coefficients of zero for certain reflections
 (which is equivalent to omitting those reflections) The back-transform will
 give zero for those reflections. Unless you do some density modification 
 first.
 So free-lunch is a good name- there aint no such thing!

Tim refers to the procedure described in
  Sheldrick, G. M. (2002). Z. Kristallogr. 217, 644–65

which was later incorporated into shelxe as the Free Lunch Algorithm.
It does indeed involve a form of density modification.
Tim is also correct that this procedure is the precedent I had in mind,
although I had forgotten its clever name.

cheers,

Ethan

-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread George M. Sheldrick
Dear Ethan,

Thank you for the reference, but actually it's the wrong paper, and anyway
my only contribution to the 'free lunch algorithm' was to name it (in the
title of the paper by Uson et al., Acta Cryst. (2007) D63, 1069-1074). By 
that time the method was already being used in ACORN and by the Bari group, 
who were the first to describe it in print (Caliandro et al., Acta Cryst.
(2005) D61, 556-565). As you correctly say, it only makes sense 
in the context of density modification, but under favorable conditions,
i.e. native data to 2A or better, inventing data to a resolution that you
would have liked to collect but didn't can make a dramatic improvement to
a map, as SHELXE has often demonstrated. Hence the name. And of course
there is no such thing as a free lunch!

Best regards, George

On Wed, Oct 12, 2011 at 01:25:12PM -0700, Ethan Merritt wrote:
 On Wednesday, October 12, 2011 01:12:11 pm Edward A. Berry wrote:
  Tim Gruene wrote:
  
  
   On 10/11/2011 09:58 PM, Ethan Merritt wrote:
   On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
   In the limit yes. however limit is when we do not have solution, i.e. 
   when model errors are very large.  In the limit map coefficients will 
   be 0 even for 2mFo-DFc maps. In refinement we have some model. At the 
   moment we have choice between 0 and DFc. 0 is not the best estimate as 
   Ed rightly points out. We replace (I am sorry for self promotion, 
   nevertheless: Murshudov et al, 1997) absent reflection with DFc, but 
   it introduces bias. Bias becomes stronger as the number of absent 
   reflections become larger. We need better way of estimating 
   unobserved reflections. In statistics there are few appraoches. None 
   of them is full proof, all of them are computationally expensive. One 
   of the techniques is called multiple imputation.
  
   I don't quite follow how one would generate multiple imputations in this 
   case.
  
   Would this be equivalent to generating a map from (Nobs - N) refls, then
   filling in F_estimate for those N refls by back-transforming the map?
   Sort of like phase extension, except generating new Fs rather than new 
   phases?
  
   Some people call this the free-lunch-algorithm ;-)
   Tim
  
  Doesn't work- the Fourier transform is invertable. As someone already said 
  in this
  thread, if the map was made with coefficients of zero for certain 
  reflections
  (which is equivalent to omitting those reflections) The back-transform will
  give zero for those reflections. Unless you do some density modification 
  first.
  So free-lunch is a good name- there aint no such thing!
 
 Tim refers to the procedure described in
   Sheldrick, G. M. (2002). Z. Kristallogr. 217, 644–65
 
 which was later incorporated into shelxe as the Free Lunch Algorithm.
 It does indeed involve a form of density modification.
 Tim is also correct that this procedure is the precedent I had in mind,
 although I had forgotten its clever name.
 
   cheers,
 
   Ethan
 
 -- 
 Ethan A Merritt
 Biomolecular Structure Center,  K-428 Health Sciences Bldg
 University of Washington, Seattle 98195-7742
 

-- 
Prof. George M. Sheldrick FRS
Dept. Structural Chemistry, 
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


[ccp4bb] Ice rings...

2011-10-11 Thread Francis E Reyes
All,


So I have two intense ice rings where there appear to be lattice spots in 
between them. 

I understand that any reflections that lie directly on the ice ring are 
useless, however, how do software programs (HKL2000, d*Trek, mosflm, XDS) deal 
with these intermediate spots? 

It would seem to me that employing a 'resolution cut off' just before the ice 
ring (on the low resolution side) would be improper, as there are spots on the 
high resolution side of the ice. (see enclosed .tiff)


In fact, how do these programs deal with spots lying on ice rings? Are they 
rejected by some algorithm by those programs during integration, or is it up to 
the scaling/merging (by SCALA for example) step to deal with them? 

Thanks!

F
inline: PastedGraphic-1.tiff

-
Francis E. Reyes M.Sc.
215 UCB
University of Colorado at Boulder







Re: [ccp4bb] Ice rings...

2011-10-11 Thread Bruno KLAHOLZ
Dear Francis,

the spots will be excluded individually based on the inhomogeneous background, 
so you don't need to apply a resolution cutoff.
However, once you have determined and refined your structure it may be worth 
predicting the intensity of these spots and putting them back for map calculation;
this might avoid gaps in your map corresponding to inter-atom distances for 
which data are missing in the resolution range of the ice rings. As long as 
this is done only for a relatively small set of reflections there is not much 
risk of introducing bias here.

HTH,

Bruno




-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On behalf of Francis E 
Reyes
Sent: Tuesday, October 11, 2011 5:17 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Ice rings...

All,


So I have two intense ice rings where there appear to be lattice spots in 
between them. 

I understand that any reflections that lie directly on the ice ring are 
useless, however, how do software programs (HKL2000, d*Trek, mosflm, XDS) deal 
with these intermediate spots? 

It would seem to me that employing a 'resolution cut off' just before the ice 
ring (on the low resolution side) would be improper, as there are spots on the 
high resolution side of the ice. (see enclosed .tiff)


In fact, how do these programs deal with spots lying on ice rings? Are they 
rejected by some algorithm by those programs during integration, or is it up to 
the scaling/merging (by SCALA for example) step to deal with them? 

Thanks!

F

###
Dr. Bruno P. Klaholz 
Department of Integrated Structural Biology
Institute of Genetics and of Molecular and Cellular Biology
IGBMC - UMR 7104 - U 964
1, rue Laurent Fries
BP 10142 
67404 ILLKIRCH CEDEX
FRANCE
Tel. from abroad: 0033.388.65.57.55
Tel. inside France: 03.88.65.57.55
Fax from abroad: 0033.388.65.32.76
Fax inside France: 03.88.65.32.76
e-mail: klah...@igbmc.fr
websites:
http://www.igbmc.fr/
http://igbmc.fr/Klaholz


Re: [ccp4bb] Ice rings...

2011-10-11 Thread Edward A. Berry

If the ice rings are really sharp, they trigger the bad-background
rejection in denzo/HKL2000. To reject more spots,
increase the 'reject fraction 0.7' parameter to something
greater than 0.7. This rejection is on a spot-by-spot basis,
so spots with good background between the rings should not
be affected. During integration, if you are monitoring
the process with Xdisp, you will see the rejected spots
turn red and/or disappear. To verify that they are being
rejected by background fraction, try again with
'reject fraction 0.3' and see if they stay green/yellow.

If the ice ring is broad compared to the integration box,
it shows up as a high, slanting baseline; the normal
baseline-correction procedure is still valid, but sigma will
be higher than for a spot on a clean background.
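The spot-by-spot rejection Edward describes can be caricatured in a few lines (a toy sketch using a robust median/MAD test; the function name, parameters, and thresholds are invented, not denzo's actual algorithm):

```python
import numpy as np

def spot_ok(background_pixels, accept_fraction=0.7, k=3.0):
    """Spot-by-spot test in the spirit of denzo's 'reject fraction':
    flag background pixels that deviate strongly from the local median
    and keep the spot only if a large enough fraction of its background
    survives.  Logic and thresholds are illustrative, not denzo's code."""
    bg = np.asarray(background_pixels, dtype=float)
    med = np.median(bg)
    mad = np.median(np.abs(bg - med)) + 1e-9     # robust spread estimate
    good = np.abs(bg - med) < k * 1.4826 * mad
    return bool(good.mean() >= accept_fraction)

flat = [10, 11, 9, 10, 12, 10, 11, 9]            # spot between the rings
on_ring = [10, 11, 300, 310, 290, 12, 305, 295]  # background hit by an ice ring
print(spot_ok(flat), spot_ok(on_ring))
```

A spot between the rings keeps its flat background and survives; a spot whose background box overlaps the ring loses too many background pixels and is rejected.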








Re: [ccp4bb] Ice rings...

2011-10-11 Thread Dr. Thayumanasamy Somasundaram

Francis,

I would like to bring to your attention our paper in Acta Cryst. D 66 (6), 
741-744 (2010), where we deal with spots under the ice rings. We have been 
very successful in eliminating the ice rings and recovering the data 
underneath. If you are interested, you can request the Python script from 
Michael Chapman at OHSU.



   De-icing: recovery of diffraction intensities in the presence of ice
   rings, Michael S. Chapman and Thayumanasamy Somasundaram


If you need help please e-mail me outside the CCP4BB.
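The de-icing idea - modelling the azimuthally symmetric ice-ring background as a radial profile and subtracting it - can be sketched as follows (a toy 2-D stand-in for the Chapman & Somasundaram procedure, not their actual script; the function name and bin count are invented):

```python
import numpy as np

def subtract_radial_background(image, center, nbins=200):
    """Estimate the radially averaged (azimuthally symmetric) background,
    including any ice rings, as a per-annulus median, and subtract it so
    that Bragg spots stand above a roughly flat residual."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    bins = np.linspace(0.0, r.max(), nbins + 1)
    which = np.digitize(r.ravel(), bins) - 1
    flat = image.ravel()
    profile = np.array([np.median(flat[which == b]) if (which == b).any()
                        else 0.0 for b in range(nbins)])
    which = which.clip(0, nbins - 1)
    return image - profile[which].reshape(image.shape)

# Toy image: flat background plus one bright "ice ring" at radius 40.
img = np.full((128, 128), 10.0)
y, x = np.indices(img.shape)
r = np.hypot(x - 64, y - 64)
img[np.abs(r - 40) < 2] += 500.0     # the ice ring
img[60, 90] += 80.0                  # a Bragg spot away from the ring

corrected = subtract_radial_background(img, (64, 64))
```

The median per annulus means that a single spot does not perturb the background estimate, so the spot's intensity survives the subtraction while the ring is flattened away. Spots sitting directly on the ring would, of course, need the more careful treatment described in the paper.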

--

Dr. Thayumanasamy Somasundaram [Soma]
Director, X-Ray Crystallography Facility (XRF)  
Off. Ph: (850)644-6448| Lab Ph: (850)645-1333   
Fax:(850)644-7244 | E-mail: tsomasunda...@fsu.edu   

URI: www.sb.fsu.edu/~soma | URI: www.sb.fsu.edu/~xray   
Postal Address--
91, Chieftan Way | KLB 414  
Institute of Molecular Biophysics   
Florida State University
Tallahassee, FL 32306-4380, USA.




Re: [ccp4bb] Ice rings...

2011-10-11 Thread James Stroud
I've used a technique called annealing, which amounts to holding an index 
card between the cryostream and the crystal for a few seconds and then 
removing the card quickly.

In my experience, about 70% of the time the diffraction is worse and about 30% 
of the time the ice rings will be gone with slightly improved diffraction, 
allowing recovery of a significant range of data. Most of the time, though, I 
find another crystal that had a better initial freeze, so annealing has never 
been a life saver--but it could be under dire circumstances.

James







Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Ed Pozharski
On Tue, 2011-10-11 at 15:24 +, Bruno KLAHOLZ wrote:
 However, once you have determined and refined your structure it may be
 worth predicting the intensity of these spots and put them back for
 map calculation,

REFMAC does this by default, because the expected values of the unknown
structure factors for missing reflections are better approximated by DFc
than by 0.

CNS defaults to excluding them.  As for phenix, I am not entirely sure -
the phenix.refine default seems to exclude them too
(fill_missing_f_obs=False), but if you use the GUI then the fill-in option
is turned on.



-- 
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
Julian, King of Lemurs


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Nat Echols
On Tue, Oct 11, 2011 at 10:34 AM, Ed Pozharski epozh...@umaryland.eduwrote:

 CNS defaults to excluding them.  As for phenix, I am not entirely sure -
 it seems that phenix.refine does too (fill_missing_f_obs= False), but if
 you use the GUI then the fill in option is turned on.


In practice, it will be turned on for command-line phenix.refine too if you
don't supply your own custom map definitions - actually it produces both
filled and unfilled maps, but the former is what most users will see in
Coot.

-Nat


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Pavel Afonine
On Tue, Oct 11, 2011 at 10:34 AM, Ed Pozharski epozh...@umaryland.eduwrote:

 expected value of unknown structure factors for missing reflections are
 better approximated using DFc than with 0 values.



Better, but not always. What about, say, an ~80% complete dataset? Filling
in 20% of the data with Fcalc (or DFcalc or bin-averaged Fobs or anything
else - it doesn't matter much, since the phases will dominate anyway) will
strongly bias the map towards the model. Equally clearly, there are cases
where filling in a few missing reflections significantly improves map
interpretability without introducing much bias.



 As for phenix, I am not entirely sure -
 it seems that phenix.refine does too (fill_missing_f_obs= False), but if
 you use the GUI then the fill in option is turned on.


phenix.refine always outputs two 2mFo-DFc maps: one is computed using the
original set of Fobs, and the other one is computed using set of Fobs where
missing reflections filled in with DFc calculated using well determined
atoms only. By default, Coot will open the filled one.

Pavel


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Ed Pozharski
On Tue, 2011-10-11 at 10:47 -0700, Pavel Afonine wrote:
 better, but not always. What about say 80% or so complete dataset?
 Filling in 20% of Fcalc (or DFcalc or bin-averaged Fobs or else - it
 doesn't matter, since the phase will dominate anyway) will highly bias
 the map towards the model.

DFc, if properly calculated, is the maximum likelihood estimate of the
observed amplitude.  I'd say that 0 is by far the worst possible
estimate, as Fobs is really never exactly zero.  I am not sure in what
situation it would be better to use Fo=0 - perhaps if the model is
grossly incorrect?  But in that case completeness may be the least
of my worries.

Indeed, phases drive most of the model bias, not amplitudes.  If the model
is good and the phases are good, then DFc will be a much better estimate
than zero.  If the model is bad and the phases are bad, then filling in
missing reflections will not increase the bias much.  But replacing them
with zeros will introduce extra noise; in particular, zeroed ice-ring
shells may mess things up and cause ripples.

On a practical side, one can always compare the maps with and without
missing reflections.
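Ed's practical suggestion can be tried on a toy 1-D "crystal" (everything here is illustrative: a made-up density, a DFc-style fill approximated by scaling the true amplitudes by an assumed D of 0.9):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# Toy 1-D crystal: a density with three "atoms" and its structure factors.
rho_true = np.zeros(n)
rho_true[[4, 11, 20]] = [3.0, 2.0, 2.5]
f_true = np.fft.fft(rho_true)

# Knock out ~20% of the reflections, as excluded ice-ring shells might.
missing = rng.random(n) < 0.2
missing |= missing[(n - np.arange(n)) % n]   # treat Friedel mates together
missing[0] = False                           # keep F000
missing[[5, n - 5]] = True                   # ensure at least one gap

f_zero = np.where(missing, 0.0, f_true)            # Fo = 0 convention
f_fill = np.where(missing, 0.9 * f_true, f_true)   # DFc-style fill, D ~ 0.9

map_zero = np.fft.ifft(f_zero).real
map_fill = np.fft.ifft(f_fill).real

def cc(a, b):
    """Simple correlation coefficient between two maps."""
    return float(np.corrcoef(a, b)[0, 1])

print(cc(map_zero, rho_true), cc(map_fill, rho_true))
```

Comparing the two correlations (and, in real life, looking at the two maps side by side) shows directly what the fill-in does, which is exactly the comparison Ed recommends.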

-- 
After much deep and profound brain things inside my head, 
I have decided to thank you for bringing peace to our home.
Julian, King of Lemurs


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Pavel Afonine
Hi Ed,

On Tue, Oct 11, 2011 at 11:47 AM, Ed Pozharski epozh...@umaryland.eduwrote:

 On Tue, 2011-10-11 at 10:47 -0700, Pavel Afonine wrote:
  better, but not always. What about say 80% or so complete dataset?
  Filling in 20% of Fcalc (or DFcalc or bin-averaged Fobs or else - it
  doesn't matter, since the phase will dominate anyway) will highly bias
  the map towards the model.

 DFc, if properly calculated, is the maximum likelihood estimate of the
 observed amplitude.  I'd say that 0 is by far the worst possible
 estimate, as Fobs are really never exactly zero.  Not sure what the
 situation would be when it's better to use Fo=0, perhaps if the model is
 grossly incorrect?  But in that case the completeness may be the least
 of my worries.



Yes, that's all true about what DFc is. For missing-Fobs filling it is not
too important (as far as map appearance is concerned) which values you take:
DFc, Fobs, etc. I spent a few days playing with this some years ago.



 Indeed, phases drive most of the model bias, not amplitudes.  If model
 is good and phases are good then the DFc will be a much better estimate
 than zero.  If model is bad and phases are bad then filling in missing
 reflections will not increase bias too much.  But replacing them with
 zeros will introduce extra noise.  In particular, the ice rings may mess
 things up and cause ripples.


Yep, that was the point - sometimes it is good to do, and sometimes it is
not, and ...


 On a practical side, one can always compare the maps with and without
 missing reflections.


... this is why phenix.refine outputs both maps -:)

All the best,
Pavel


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Ed Pozharski
On Tue, 2011-10-11 at 11:54 -0700, Pavel Afonine wrote:
 Yep, that was the point - sometimes it is good to do, and sometimes it
 is not, and

Do you have a real life example of Fobs=0 being better?  You make it
sound as if it's 50/50 situation.

-- 
Hurry up before we all come back to our senses!
   Julian, King of Lemurs


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Pavel Afonine
Do you have a real life example of Fobs=0 being better?



Hopefully there will be a paper some time soon discussing all this - we
are working on it right now.



 You make it
 sound as if it's 50/50 situation.



No (sorry if what I wrote sounded that misleading).

Pavel


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Randy Read
If the model is really bad and sigmaA is estimated properly, then sigmaA will 
be close to zero, so that D (sigmaA times a scale factor) will also be close 
to zero.  So in the limit of a completely useless model, the two methods of 
map calculation converge.
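Randy's limiting argument can be checked numerically for a single acentric reflection (a toy calculation under stated assumptions: D is approximated by sigmaA, normalized amplitudes Eo and Ec are invented, and the figure of merit is taken as m = I1(X)/I0(X) with X = 2*sigmaA*Eo*Ec/(1 - sigmaA^2)):

```python
import numpy as np

def fom_acentric(x, n=20001):
    """Figure of merit m = I1(x)/I0(x) for an acentric reflection,
    evaluated by simple quadrature over the phase angle."""
    a = np.linspace(0.0, np.pi, n)
    w = np.exp(x * np.cos(a) - x)     # damped integrand for stability
    return float((np.cos(a) * w).mean() / w.mean())

eo, ec = 1.2, 1.0                      # toy normalized amplitudes
for sigma_a in (0.9, 0.5, 0.1, 0.01):
    x = 2 * sigma_a * eo * ec / (1 - sigma_a ** 2)
    m = fom_acentric(x)
    print(f"sigmaA={sigma_a:5.2f}  "
          f"DFc={sigma_a * ec:.3f}  2mFo-DFc={2 * m * eo - sigma_a * ec:.3f}")
```

As sigmaA shrinks, both the filled coefficient D*Ec and the ordinary 2mEo - D*Ec coefficient go to zero together, which is the convergence Randy describes.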

Regards,

Randy Read


--
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research  Tel: + 44 1223 336500
Wellcome Trust/MRC Building   Fax: + 44 1223 336827
Hills RoadE-mail: rj...@cam.ac.uk
Cambridge CB2 0XY, U.K.   www-structmed.cimr.cam.ac.uk


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Garib N Murshudov
In the limit, yes. However, the limit is when we do not have a solution, i.e. 
when model errors are very large. In that limit the map coefficients will be 0 
even for 2mFo-DFc maps. In refinement we have some model. At the moment we have 
a choice between 0 and DFc. 0 is not the best estimate, as Ed rightly points 
out. We replace (I am sorry for the self-promotion, nevertheless: Murshudov et 
al., 1997) absent reflections with DFc, but it introduces bias. The bias 
becomes stronger as the number of absent reflections becomes larger. We need a 
better way of estimating unobserved reflections. In statistics there are a few 
approaches. None of them is foolproof, and all of them are computationally 
expensive. One of the techniques is called multiple imputation. It may give 
better refinement behaviour and a less biased map. Another is integration over 
all errors (too many parameters for numerical integration, and there is no 
closed-form formula) of the model as well as the experimental data. This would 
give a less biased map with a more pronounced signal.

Regards
Garib



Garib N Murshudov 
Structural Studies Division
MRC Laboratory of Molecular Biology
Hills Road 
Cambridge 
CB2 0QH UK
Email: ga...@mrc-lmb.cam.ac.uk 
Web http://www.mrc-lmb.cam.ac.uk





Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Ethan Merritt
On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
 [...] One of the techniques is called multiple imputation.

I don't quite follow how one would generate multiple imputations in this case.

Would this be equivalent to generating a map from (Nobs - N) refls, then
filling in F_estimate for those N refls by back-transforming the map?
Sort of like phase extension, except generating new Fs rather than new phases?

Ethan



-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Dale Tronrud
On 10/11/11 12:58, Ethan Merritt wrote:
 I don't quite follow how one would generate multiple imputations in this case.
 
 Would this be equivalent to generating a map from (Nobs - N) refls, then
 filling in F_estimate for those N refls by back-transforming the map?
 Sort of like phase extension, except generating new Fs rather than new phases?
 
   Ethan

   Unless you do some density modification you'll just get back zeros for
the reflections you didn't enter.

Dale
 
 
 


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Garib N Murshudov
The best way would be to generate from the probability distributions derived 
after refinement, but that has the problem that you need to integrate over all 
errors. Another, simpler way would be to generate from the Wilson distribution 
multiple times, do the refinement multiple times, and average the results. I 
have not done any tests, but on paper it looks like a sensible procedure.
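Garib's simpler variant can be sketched as a toy (a 1-D stand-in under several loudly stated assumptions: the Rayleigh form of the Wilson distribution for acentrics, reuse of "model" phases - here the true ones - and no re-refinement between imputations, which a real procedure would do; Friedel symmetry is ignored for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

# Toy 1-D structure and its structure factors, standing in for real data.
rho = np.zeros(n)
rho[[7, 30, 45]] = [3.0, 2.0, 2.5]
f_true = np.fft.fft(rho)

missing = np.zeros(n, dtype=bool)
missing[[5, 12, 40]] = True            # reflections lost to ice rings

# Wilson statistics for acentric reflections: |F| is Rayleigh-distributed
# with E[|F|^2] = Sigma; estimate Sigma from the observed reflections.
sigma = float(np.mean(np.abs(f_true[~missing]) ** 2))

# Impute the missing amplitudes many times, build a map each time,
# and average the resulting maps.
n_imputations = 200
maps = []
for _ in range(n_imputations):
    amp = rng.rayleigh(np.sqrt(sigma / 2.0), size=int(missing.sum()))
    f = f_true.copy()
    f[missing] = amp * np.exp(1j * np.angle(f_true[missing]))
    maps.append(np.fft.ifft(f).real)
avg_map = np.mean(maps, axis=0)
```

The averaging over imputations is what distinguishes this from a single deterministic fill: each map carries a different plausible guess for the unobserved amplitudes, and the average reflects their spread rather than committing to one value.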

regards
Garib









Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-11 Thread Ethan Merritt

Dale Tronrud wrote
 
Unless you do some density modification you'll just get back zeros for
 the reflections you didn't enter.

Sure.  And different DM procedures would give you different imputations,
or at least that was my vague idea.

Garib N Murshudov wrote
 Best way would be to generate from probability distributions derived after 
 refinement, but it has a problem that you need to integrate over all errors. 
 Another, simpler way would be generate using Wilson distribution multiple 
 times and do refinement multiple times and average results. I have not done 
 any tests but on paper it looks like a sensible procedure.

OK.  That makes sense.

Ethan
