Re: [ccp4bb] Series termination effect calculation.

2012-09-17 Thread James Holton
Yes, the constant term in the 5-Gaussian structure factor tables does 
become annoying when you try to plot electron density in real space, but 
only if you try to make the B factor zero.  If the B factors are ~12 
(like they are in 1m1n), then the electron density 2.0 A from an Fe atom 
is not -0.2 e-/A^3, it is 0.025 e-/A^3. This is only 1% of the electron 
density at the center of a nitrogen atom with the same B factor.


But if you do set the B factor to zero, then the electron density at the 
center of any atom (using the 5-Gaussian model) is infinity.  To put it 
in gnuplot-ish, the structure factor of Fe (in reciprocal space) can be 
plotted with this function:

Fe_sf(s)=Fe_a1*exp(-Fe_b1*s*s)+Fe_a2*exp(-Fe_b2*s*s)+Fe_a3*exp(-Fe_b3*s*s)+Fe_a4*exp(-Fe_b4*s*s)+Fe_c

where:
Fe_c = 1.036900;
Fe_a1 = 11.769500; Fe_a2 = 7.357300; Fe_a3 = 3.522200; Fe_a4 = 2.304500;
Fe_b1 = 4.761100; Fe_b2 = 0.307200; Fe_b3 = 15.353500; Fe_b4 = 76.880501;
and s is sin(theta)/lambda

applying a B factor is then just multiplication by exp(-B*s*s)
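(A minimal gnuplot sketch of this, assuming the Fe_sf definition and the Fe
coefficients above have already been entered:)

set xlabel "sin(theta)/lambda (1/A)"
set ylabel "f (electrons)"
plot [0:1.5] Fe_sf(x) title "Fe, B=0", Fe_sf(x)*exp(-12*x*x) title "Fe, B=12"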


Since the terms are all Gaussians, the inverse Fourier transform can 
actually be done analytically, giving the real-space version, or the 
expression for electron density vs distance from the nucleus (r):


Fe_ff(r,B) = \
  +Fe_a1*(4*pi/(Fe_b1+B))**1.5*safexp(-4*pi**2/(Fe_b1+B)*r*r) \
  +Fe_a2*(4*pi/(Fe_b2+B))**1.5*safexp(-4*pi**2/(Fe_b2+B)*r*r) \
  +Fe_a3*(4*pi/(Fe_b3+B))**1.5*safexp(-4*pi**2/(Fe_b3+B)*r*r) \
  +Fe_a4*(4*pi/(Fe_b4+B))**1.5*safexp(-4*pi**2/(Fe_b4+B)*r*r) \
  +Fe_c *(4*pi/(B))**1.5*safexp(-4*pi**2/(B)*r*r);

Where here applying a B factor requires folding it into each Gaussian 
term.  Notice how the Fe_c term blows up as B -> 0?  This is where most of 
the series-termination effects come from. If you want the above 
equations for other atoms, you can get them from here:

http://bl831.als.lbl.gov/~jamesh/pickup/all_atomsf.gnuplot
http://bl831.als.lbl.gov/~jamesh/pickup/all_atomff.gnuplot
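One caveat if you try to run Fe_ff as written: safexp is not a gnuplot
built-in.  It is presumably defined in the linked all_atomff.gnuplot file as a
guarded exponential; a stand-in that should behave the same way for these
arguments, plus a quick check of the 0.025 e-/A^3 figure quoted at the top of
this message, would be:

safexp(x) = (x < -700.0) ? 0.0 : exp(x)   # assumed stand-in, avoids underflow for very negative x
print Fe_ff(2.0, 12.0)                    # should print about 0.025 (e-/A^3, Fe with B=12 at r = 2.0 A)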

This infinitely sharp spike problem seems to have led some people to 
conclude that a zero B factor is non-physical, but nothing could be 
further from the truth!  The scattering from monatomic gases is an 
excellent example of how one can observe the B=0 structure factor.   In 
fact, gas scattering is how the quantum mechanical self-consistent field 
calculations of electron clouds around atoms were experimentally 
verified.  Does this mean that there really is an infinitely sharp 
spike in the middle of every atom?  Of course not.  But there is a 
very sharp spike.


So, the problem of infinite density at the nucleus is really just an 
artifact of the 5-Gaussian formalism.  Strictly speaking, the 
5-Gaussian structure factor representation you find in 
${CLIBD}/atomsf.lib (or Table 6.1.1.4 in the International Tables volume 
C) is nothing more than a curve fit to the true values listed in ITC 
volume C tables 6.1.1.1 (neutral atoms) and 6.1.1.3 (ions).  These 
latter tables are the Fourier transform of the true electron density 
distribution around a particular atom/ion obtained from quantum 
mechanical self-consistent field calculations (like those of Cromer, 
Mann and many others).


The important thing to realize is that the fit was done in _reciprocal_ 
space, and if you look carefully at tables 6.1.1.1 and 6.1.1.3, you can 
see that even at REALLY high angle (sin(theta)/lambda = 6, or 0.083 A 
resolution) there is still significant elastic scattering from the 
heavier atoms.  The purpose of the constant term in the 5-Gaussian 
representation is to try and capture this high-angle tail, and for the 
really heavy atoms this can be more than 5 electron equivalents.  In 
real space, this is equivalent to saying that about 5 electrons are 
located within roughly 0.03 A of the nucleus.  That's a very short 
distance, but it is also not zero.  This is because the first few shells 
of electrons around things like a Uranium nucleus actually are very 
small and dense.  How, then, can we have any hope of modelling heavy 
atoms properly without using a map grid sampling of 0.01A ?  Easy!  The 
B factors are never zero.
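If you want to see the constant term doing that job in the 5-Gaussian fit
itself, evaluate the Fe curve above at increasing angle (bearing in mind that
the coefficients are only a fit over the tabulated sin(theta)/lambda range, so
s = 6 is an extrapolation that simply shows the constant surviving):

print Fe_sf(0.0), "  ", Fe_sf(1.0), "  ", Fe_sf(2.0), "  ", Fe_sf(6.0)
# about 26.0, 6.5, 3.2 and 1.04 electrons; the last is essentially just Fe_c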


Even for a truly infinitely sharp peak (aka a single electron), it 
doesn't take much of a B factor to spread it out to a reasonable size.  
For example, applying a B factor of 9 to a point charge will give it a 
full-width-half max (FWHM) of 0.8 A, the same as the diameter of a 
carbon atom.  A carbon atom with B=12 has FWHM = 1.1 A, the same as a 
point charge with B=16.  Carbon at B=80 and a point with B=93 both 
have FWHM = 2.6 A.  As the B factor becomes larger and larger, it tends 
to dominate the atomic shape (looks like a single Gaussian).  This is 
why it is so hard to assign atom types from density alone.  In fact, 
with B=80, a Uranium atom at 1/100th occupancy is essentially 
indistinguishable from a hydrogen atom. That is, even a modest B factor 
pretty much washes out any sharp features the atoms might have.  
Sometimes I wonder why we bother with form 
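As an aside, the FWHM figures above can be checked directly: they follow from
the real-space Gaussian form of Fe_ff.  A point scatterer with B factor B has
rho(r) proportional to exp(-4*pi**2*r*r/B), so FWHM = sqrt(B*ln2)/pi.  In
gnuplot:

FWHM_point(B) = sqrt(B*log(2))/pi
print FWHM_point(9), "  ", FWHM_point(16), "  ", FWHM_point(93)
# roughly 0.8, 1.1 and 2.6 A, matching the numbers quoted above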

Re: [ccp4bb] Series termination effect calculation.

2012-09-17 Thread DUMAS Philippe (UDS)

On Monday 17 September 2012 08:32 CEST, James Holton jmhol...@lbl.gov wrote:

Hello
May I add a few words after the thorough comments by James.
It may be easier to consider series termination in real space, as follows.

The effect of series termination in 3D is to convolute the exact rho(r) with 
the approximation of a delta function that results from the limited 
resolution.  This approximation in 3D is given exactly by the function G[X] = 
3*[Sin(X) - X*Cos(X)]/X^3, where X = 2*Pi*r/d (r in Angstrom and d the 
resolution, also in Angstrom).  This is the same function that appears in the 
rotation function (for exactly the same reason of truncated resolution).
If you consider the iron atom to be point-like (i.e. its Fourier transform 
would be merely a constant), then the approximation resulting from series 
termination is just given by G[X] (apart from a scaling factor).  And if you 
convolute the exact, ideal rho(r) with G[X], you will obtain the exact form 
of rho(r) affected by series termination.  Note that, using the Gaussian 
approximation of the structure factors, this would amount to convoluting 
Gaussians with G[X] (see James's comments).
I attach a figure corresponding to the simplification of a point-like iron 
atom.  I only put on this figure the curves corresponding to resolution limits 
of 1.3, 2 and 2.5 Angstrom, because at a resolution of 1 Angstrom the iron 
atom is definitely not point-like.
I used the same color codes as in Fig. 1 of the paper.  One can see that the 
ripples on my approximate figure are essentially the same as in Fig. 1 of the 
paper.  Of course, it cannot reproduce the features of rho(r) for r -> 0, 
since the iron atom is definitely not point-like.
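For anyone who wants to reproduce this kind of figure, here is a minimal
sketch in the same gnuplot style as James's post (it only shows the truncation
function itself, i.e. the point-atom case):

G(X) = 3*(sin(X) - X*cos(X))/X**3
rho_trunc(r,d) = G(2*pi*r/d)    # point atom seen at resolution d (r and d in Angstrom)
set xlabel "r (A)"
plot [0.01:6] rho_trunc(x,1.3) title "d = 1.3 A", \
              rho_trunc(x,2.0) title "d = 2.0 A", \
              rho_trunc(x,2.5) title "d = 2.5 A"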

Practical comment: it is quite useful to keep in mind the following rule of 
thumb: the first minimum of G[X] appears at a distance equal to 0.92*d (d = 
resolution) and the first maximum at 1.45*d.  Therefore, if one suspects that 
series termination effects might cause a spurious trough, or peak, it may be 
enough to recalculate the e.d. map at different resolutions to check whether 
these features move or not.
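The rule of thumb is easy to verify: the extrema of G[X] occur where
(X^2 - 3)*sin(X) + 3*X*cos(X) = 0, whose first two non-zero roots are at
X of about 5.76 and 9.10.  Converting back to distance with r = X*d/(2*Pi):

print 5.76/(2*pi), "  ", 9.10/(2*pi)
# about 0.92 and 1.45, i.e. first minimum near 0.92*d and first maximum near 1.45*d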

Philippe Dumas

PS: it is instructive to make a comparison with the Airy function in astronomy. 
Airy calculated this function to take into account the distortion brought by 
the limited optical resolution of a telescope to the point image of a star. 
This is nothing other than our problem, with an iron atom replacing a star...
Plus ça change, plus c'est la même chose - the more things change, the more 
they stay the same.




[ccp4bb] mosflm and dectris pilatus

2012-09-17 Thread Bhupesh Taneja

Dear all
 
We have recently collected data at XRD1 beamline at Sincrotrone Trieste on a 
Dectris Pilatus detector.
 
We are unable to integrate the images in mosflm, which keeps crashing with 
errors such as "profile_fitted_partials" or "b_slope" or some such (indexing 
seems to work OK).
 
Any suggestions from other recent users on how to process this data in mosflm?  
(we have changed nullpix threshold to -1 and maximum background slope to 
0.06).
 
Thanks in advance for your replies!
 
-Bhupesh
  

Re: [ccp4bb] Series termination effect calculation.

2012-09-17 Thread Tim Gruene

Dear James et al.,

so to summarise, the answer to Niu's question is that he must add a
factor of e^(-Bs^2) to the formula of Cromer/Mann and then adjust the
value of B until it matches the inset.  Given that you claim
rho = 0.025 e-/A^3 (I assume for 1/dmax approx. 0) for B=12 and the inset
shows a value of about 0.6, a somewhat higher B value should work.

Cheers,
Tim


[ccp4bb] imosflm background definition

2012-09-17 Thread Andreas Förster

Hi all,

I'm integrating data from a crystal with a fairly long axis.  In iMosflm 
(1.0.7), the background definition is so generous that plenty of pixels 
from adjacent peaks are included (see attached).  Can someone tell me 
how I redefine the background to be tighter around the spots?


Thanks.


Andreas


--
Andreas Förster, Research Associate
Paul Freemont & Xiaodong Zhang Labs
Department of Biochemistry, Imperial College London
http://www.msf.bio.ic.ac.uk
attachment: backgroundSpots.png

Re: [ccp4bb] imosflm background definition

2012-09-17 Thread A Leslie

Hi Andreas,

  The simple answer to this is that you do NOT  
attempt to redefine the background. Providing the additional spots  
shown in your image belong to the same lattice, mosflm will  
automatically exclude pixels from adjacent spots when doing the  
background plane fitting, so you need do nothing. It helps to have a  
larger box size because the background plane is better defined.



To answer your question directly: go to Settings -> Processing Options -> 
Advanced integration in imosflm.  The top 5 lines there define the 
measurement box parameters; changing the box width and height should do what 
you want, although you may need to uncheck the "Optimise overall box size" 
box to stop the box being made bigger again.


Best wishes,

Andrew







[ccp4bb] CCP4 Study Weekend 2012

2012-09-17 Thread Charles Ballard

The CCP4 Study Weekend (3 - 5 January 2013)
East Midlands Conference Centre, University of Nottingham
 
Thursday 3 - MX User Meeting
Friday 4 / Saturday 5 - CCP4 Study Weekend
 
Molecular Replacements
 
We cordially invite you to participate in this year's Study Weekend at the 
East Midlands Conference Centre, University of Nottingham. The annual CCP4 
Study Weekend is a chance to shake off the post-New Year torpor, and work hard 
and play hard with your fellow crystallographers. Once again, we have put 
together an exciting scientific programme for Friday and Saturday, either side 
of the traditional conference dinner. Please also check out the satellite 
meetings which may be of interest. The Study Weekend is a chance to catch up 
with old friends, but is also a chance to meet the CCP4 staff who will be there 
in force to demonstrate the latest software and to answer questions - please 
say hello!
 
This year, the topic for the Study Weekend is Molecular Replacements. In 
keeping with previous CCP4 meetings, the lectures will focus on the 
presentation and discussion of advanced methods and techniques developed and 
used by the leaders in the field.
 
Scientific Organisers
Helen Walden - Cancer Research (UK)
Pietro Roversi - University of Oxford (UK)

Further details of the program and the registration are at 
http://www.cse.scitech.ac.uk/events/CCP4_2013/

Terms and Conditions apply.  Please read the cancellation policy before 
applying.



[ccp4bb] problem with phenix refine and non bonded interactions

2012-09-17 Thread Michael Murphy
I am using Phenix 1.8-1069. I am having a problem with phenix refine
terminating with an error message. I added several solvent molecules, MPD
and bicarbonate in Coot, and ran Phenix ReadySet to generate restraints for
them and applied those restraints to all future jobs in that project. When
I try to refine it, I get an error message stating that I have too many
non-bonded interactions. Since I generated my restraints file before running
refine, this should not happen - should it?


[ccp4bb] Beamline scientist position at SSRL

2012-09-17 Thread Ana Gonzalez
The Structural Molecular Biology (SMB) Group of Stanford Synchrotron 
Radiation Lightsource (SSRL, a Directorate of SLAC) has an opening for a 
Beam Line Scientist.  This position will participate in a large user 
support team that provides expert technical and methodological support for 
scientific experiments at seven highly-automated macromolecular 
crystallography beam lines. Research facilities include x-ray optics, 
automated crystallography instrumentation, and state-of-the-art x-ray 
detectors.


Specific responsibilities include (but are not limited to):

1) Beamline optimization including the alignment of beam line optics and 
instrumentation; 2) user training and support during experiments; 3) 
troubleshooting, testing, documenting optics, and instrumentation; 4) 
development of new instrumentation and methodologies; 5) backup Systems 
Manager role for Linux systems; 6) Dissemination/tours of facilities.  Up 
to 15% time may be dedicated to scientific collaborations.  All activities 
will be carried out in close collaboration with other scientists, 
engineers, software developers and technicians.  Applicants must be 
willing to provide user support outside of regular hours.


To obtain more information about this position and apply please see: 
https://ch.tbe.taleo.net/CH12/ats/careers/requisition.jsp?org=SLAC&cws=1&rid=905


Ana
--
---
   Ana Gonzalez a...@slac.stanford.edu
  Staff Scientist
 Stanford Synchrotron Radiation Lightsource
2575 Sand Hill Road, MS 99
   Menlo Park  CA 94025
 Phone: (650) 926 8682 Fax: (650) 926 3292
---


[ccp4bb] B-iso vs. B-aniso

2012-09-17 Thread Yuri Pompeu
Dear community,

The protein model I am refining has 400 amino acids (3320 atoms).
Some real quick calculations tell me that to properly refine it 
anisotropically, I would need 119,520 observations. Given my unit-cell 
dimensions and space group, this is equivalent to about a 1.24 A complete data set.
However, I have had a couple of cases where anisotropic B-factor refinement 
significantly improved R-work and R-free, while maintaining a reasonable gap 
for lower resolution models (1.4-1.5 A, around 70,000 reflections). What is the 
proper way of modelling the B-factors?
Any thoughts and/or opinions from the community are welcome.
Cheers, 


Re: [ccp4bb] B-iso vs. B-aniso

2012-09-17 Thread Tim Gruene

Dear Yuri,

1.24 A resolution is reasonable for anisotropic refinement - once you include
restraints, you have more observations than reflections alone, so simply
counting reflections does not give a conclusive answer!

You should try it and see whether the refinement is stable and makes sense
(e.g. check the number of non-positive-definite (NPD) atoms, etc.).  You may
also try the RIGU command in the beta version of shelxl (Acta Cryst. (2012).
A68, 448-451)!

Regards,
Tim

On 09/17/2012 08:31 PM, Yuri Pompeu wrote:
 Dear community,
 
 The protein model I am refining has 400 amino acids (3320 atoms). 
 Some real quick calculations tell me that to properly refine it 
 anisotropically, I would need 119,520 observations. Given my 
 unit-cell dimension and space-group it is equivalent to about a 
 1.24 A complete data set. However, I have had a couple of cases 
 where anisotropic B-factor refinement significantly improved
 R-work and R-free, while maintaining a reasonable gap for lower
 resolution models (1.4-1.5 A, around 70,000 reflections). What is
 the proper way of modelling the B-factors? Any thoughts and/or
 opinions from the community are welcome. Cheers,
 

-- 
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


Re: [ccp4bb] B-iso vs. B-aniso

2012-09-17 Thread Ethan Merritt

I laid out my thoughts on this topic at last year's CCP4 Study Weekend.
The print version of it may be found here:

   To B or not to B: a question of resolution? 
   Acta Cryst. D68, 468-477. 
   http://dx.doi.org/10.1107/S0907444911028320

One lesson is that lower R-work and R-free do not necessarily indicate that
anisotropic refinement is justified.  In other words, it is not so easy to
determine how much improvement is significant improvement.



-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742


Re: [ccp4bb] B-iso vs. B-aniso

2012-09-17 Thread Robbie Joosten
Dear Yuri,

Why do you think you need 36 reflections per atom when atoms with anisotropic 
B-factors only have 9 parameters? You can get away with far fewer in many 
cases, especially if you have good restraints. As Ethan points out, a drop in 
R-free after adding many parameters may be misleading. Proper testing will give 
you a clearer answer.
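In the same gnuplot-ish spirit as James's post earlier in this digest, a rough
bookkeeping sketch of this particular case (assuming 4 parameters per atom for
isotropic refinement, x, y, z and B, and 9 for anisotropic, x, y, z plus six
Uij, and ignoring restraints entirely, which is exactly the caveat above):

natoms = 3320.0
nrefl  = 70000.0
print "iso:   ", natoms*4, " parameters, ", nrefl/(natoms*4), " reflections per parameter"
print "aniso: ", natoms*9, " parameters, ", nrefl/(natoms*9), " reflections per parameter"
# roughly 5.3 reflections per parameter isotropic vs 2.3 anisotropic, before restraints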

The Hamilton test in Ethan's paper is implemented in PDB_REDO 
(http://scripts.iucr.org/cgi-bin/paper?ba5174) and I had a quick look at some 
refinement statistics for structures with ~21 reflections/atom (like your 
case):  according to PDB_REDO's strict criteria anisotropic B-factors are 
acceptable in two thirds of the cases. This was tested with Refmac on 285 PDB 
entries; ShelX's new restraints may well increase the success rate.

HTH,
Robbie Joosten

Netherlands Cancer Institute
www.cmbi.ru.nl/pdb_redo
