Re: [ccp4bb] change of origin for reflections or map

2011-10-12 Thread Ian Tickle
On Wed, Oct 12, 2011 at 1:35 AM, James Holton jmhol...@lbl.gov wrote:

 My list has 8 different allowed
 shifts for I222, but I assume this is because the 0,0,1/2 shift is
 part of a symmetry operator.  I guess it is a matter of semantics as
 to whether or not that is an allowed shift?

 James, the (0,0,1/2) shift is not part of any symmetry operator in I222
- but I'm sure you knew that! - whereas the (1/2,1/2,1/2) shift _is_ a
symmetry operator: the I-centring operator, in fact.  This means that the
lattice-repeating units of the crystal structures (i.e. the sets of atomic
positions in the unit cell) will be identical in pairs, so in I222 (or
indeed any I-centred space group) the crystal structure obtained by
shifting by (0,0,1/2) is identical in all respects to the one obtained from
the (1/2,1/2,0) shift.  In contrast, the structures obtained from all the
other pairs, e.g. (0,0,0) and (0,0,1/2), are different in all respects,
except that their sets of calculated amplitudes will be identical.

In your terminology a "non-allowed" origin shift is one which would cause
even the |Fc|s to differ, which in practice would mean that you would have
to use a different space-group-specific formula for the structure factor (so
it's "non-allowed" only in the sense that you are not allowed to use the
same structure-factor formula, but you are allowed to use a different
one).

I prefer to call it a "non-equivalent" origin shift (as opposed to an
"equivalent origin" or centring shift) - it's not really a question of whether
it's allowed or not, it's whether it has any effect on the crystal structure.
In practical terms of course it makes absolutely no difference to the result
if you choose to do things that have absolutely no effect!

Cheers

-- Ian
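Ian's criterion can be sketched numerically (a toy check, not taken from any of the programs mentioned): a shift t leaves all the |Fc| unchanged iff (R - I)t is integral for every rotation part R of the point group; for 222 that forces each component onto {0, 1/2}, so searching that grid suffices.

```python
from itertools import product
import numpy as np

# Rotation parts of point group 222 (the I centring adds the extra
# (1/2,1/2,1/2) shift that pairs the origins up, as discussed above):
rots = [np.diag(d) for d in ([1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1])]

def allowed(t):
    # t is an amplitude-preserving origin shift iff (R - I) t is integral
    # for every rotation R of the point group
    return all(np.allclose(((R - np.eye(3)) @ t) % 1.0, 0.0) for R in rots)

shifts = [t for t in product([0.0, 0.5], repeat=3) if allowed(np.array(t))]
print(len(shifts))  # 8 allowed shifts, pairing into 4 non-equivalent origins
```

This reproduces James's count of 8 shifts for I222, of which the I-centring operator identifies the pairs that give identical structures.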


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Eleanor Dodson
Here we are I presume only worried about strong reflections lost behind 
an ice ring. At least that is where the discussion began.


Isn't the best approach to this problem to use integration software which 
attempts to give a measurement, albeit with a high error estimate?


The discussion has strayed into what to do with incomplete data sets.
In these cases there might be something to learn from the "free lunch" 
ideas used in ACORN, SHELX and other programs - set the missing 
reflections to E=1, and normalise them properly to an appropriate amplitude.


Eleanor


On 10/11/2011 08:33 PM, Garib N Murshudov wrote:

In the limit, yes; however, the limit is when we do not have a solution, i.e.
when model errors are very large.  In the limit the map coefficients will be 0
even for 2mFo-DFc maps.  In refinement we have some model.  At the moment
we have a choice between 0 and DFc.  0 is not the best estimate, as Ed rightly
points out.  We replace (I am sorry for the self-promotion, nevertheless:
Murshudov et al, 1997) absent reflections with DFc, but it introduces bias.
Bias becomes stronger as the number of absent reflections becomes larger.
We need a better way of estimating unobserved reflections.  In statistics
there are a few approaches.  None of them is foolproof; all of them are
computationally expensive.  One of the techniques is called multiple
imputation.  It may give better refinement behaviour and a less biased map.
Another is integration over all errors of the model as well as the
experimental data (too many parameters for numerical integration, and there
is no closed-form formula).  This would give a less biased map with a more
pronounced signal.


Regards
Garib


On 11 Oct 2011, at 20:15, Randy Read wrote:


If the model is really bad and sigmaA is estimated properly, then sigmaA will 
be close to zero so that D (sigmaA times a scale factor) will be close to zero. 
 So in the limit of a completely useless model, the two methods of map 
calculation converge.

Regards,

Randy Read

On 11 Oct 2011, at 19:47, Ed Pozharski wrote:


On Tue, 2011-10-11 at 10:47 -0700, Pavel Afonine wrote:

better, but not always. What about say 80% or so complete dataset?
Filling in 20% of Fcalc (or DFcalc or bin-averagedFobs  or else - it
doesn't matter, since the phase will dominate anyway) will highly bias
the map towards the model.


DFc, if properly calculated, is the maximum likelihood estimate of the
observed amplitude.  I'd say that 0 is by far the worst possible
estimate, as Fobs are really never exactly zero.  Not sure what the
situation would be when it's better to use Fo=0, perhaps if the model is
grossly incorrect?  But in that case the completeness may be the least
of my worries.

Indeed, phases drive most of the model bias, not amplitudes.  If the model
is good and the phases are good, then DFc will be a much better estimate
than zero.  If the model is bad and the phases are bad, then filling in
missing reflections will not increase bias too much, but replacing them with
zeros will introduce extra noise.  In particular, the ice rings may mess
things up and cause ripples.

On a practical side, one can always compare the maps with and without
missing reflections.

--
After much deep and profound brain things inside my head,
I have decided to thank you for bringing peace to our home.
   Julian, King of Lemurs


--
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research  Tel: + 44 1223 336500
Wellcome Trust/MRC Building   Fax: + 44 1223 336827
Hills RoadE-mail: rj...@cam.ac.uk
Cambridge CB2 0XY, U.K.   www-structmed.cimr.cam.ac.uk


Garib N Murshudov
Structural Studies Division
MRC Laboratory of Molecular Biology
Hills Road
Cambridge
CB2 0QH UK
Email: ga...@mrc-lmb.cam.ac.uk
Web http://www.mrc-lmb.cam.ac.uk






Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Tim Gruene


On 10/11/2011 09:58 PM, Ethan Merritt wrote:
 On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
 [...] One of the techniques is called multiple imputation.
 
 I don't quite follow how one would generate multiple imputations in this case.
 
 Would this be equivalent to generating a map from (Nobs - N) refls, then
 filling in F_estimate for those N refls by back-transforming the map?
 Sort of like phase extension, except generating new Fs rather than new phases?

Some people call this the free-lunch-algorithm ;-)
Tim

   Ethan
 [...]
--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] change of origin for reflections or map

2011-10-12 Thread Eleanor Dodson
If you have two pdb files - one for each heavy-atom solution - you can use 
csymmatch -pdbin-ref soln1.pdb -pdbin soln2.pdb -origin-hand 
-connectivity-radius 1

Eleanor



On 10/11/2011 10:58 PM, George M. Sheldrick wrote:

There are 4 possible origins in I222. There is a simple but inelegant way to
check. Run the SHELXE job for the second dataset four times, first with no MOVE
instruction, then with one of the following MOVE instructions inserted between
UNIT and the first atom in the *_fa.res file from SHELXD:

MOVE 0.5 0 0
MOVE 0 0.5 0
MOVE 0.5 0.5 0

one of these should give you phases with the same origin as your first dataset,
so if you display both maps from the .phs files in COOT they will superimpose.

George

On Tue, Oct 11, 2011 at 11:29:14PM +0200, Klaas Decanniere wrote:



Hi,

I have two solutions from the ShelX C/D/E pipeline I would like to compare
(different datasets, same protein). They seem to have different origins.
Space group is I222, with a choice of 8 origins.
How can I find and apply the correct shift to have the phase sets on a common
origin?
The information on http://www.ccp4.ac.uk/html/alternate_origins.html and
http://www.ccp4.ac.uk/html/non-centro_origins.html explains it very well,
but they don't point to the tools to use.
Is it a matter of reindex and trying all 8 possibilities?

thanks for your help,

Klaas Decanniere
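What "putting the phase sets on a common origin" does can be sketched with toy data (an illustration, not the ShelX or csymmatch machinery): moving the structure by t multiplies each F(h) by exp(2*pi*i*(h.t)), so the amplitudes are untouched and only the phases change by 2*pi*(h.t).

```python
import numpy as np

# Toy point atoms with unit scattering factors, fractional coordinates:
atoms = np.array([[0.10, 0.20, 0.30], [0.40, 0.15, 0.70]])
t = np.array([0.0, 0.0, 0.5])                    # candidate origin shift
hkl = np.array([[1, 2, 3], [2, 0, 4], [3, 1, 1]])

def sf(xyz):
    # F(h) = sum_j exp(2*pi*i * h.x_j)
    return np.exp(2j * np.pi * hkl @ xyz.T).sum(axis=1)

F0, F1 = sf(atoms), sf(atoms + t)
assert np.allclose(np.abs(F0), np.abs(F1))                 # |F| unchanged
assert np.allclose(F1, F0 * np.exp(2j * np.pi * (hkl @ t)))  # phase rule
print("phase shifts (deg):", np.degrees(2 * np.pi * (hkl @ t)) % 360)
```

Trying each of the 8 candidate shifts and keeping the one that brings the two phase sets into agreement is exactly the brute-force comparison George describes with the MOVE instructions.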




[ccp4bb] help me to determine ligand's b factor?

2011-10-12 Thread 王瑞
Dear Everybody,

I am sorry for being a little off-topic. Could anyone tell me how to determine
the B-factors of the protein, peptide and ligand in a new PDB file? I know
there are Rampage and SFCHECK for validation in CCP4, but I only found an
overall B-factor in their results. By the way, is there software to determine
all of a new PDB entry's parameters, such as Rsym, I/σ, redundancy, and the
solvent molecules' B-factors?

Thanks


Re: [ccp4bb] help me to determine ligand's b factor?

2011-10-12 Thread Tim Gruene

Dear 王瑞,

A PDB file is a coordinate file. The quantities you mention, Rsym,
I/sigI, and Redundancy are quantities calculated from the measured data
and can therefore not be determined from a PDB file.

The PDB file also contains the B-factor, so in order to find out the
B-factors of some coordinates in the PDB file, you can open the file
with any text editor of your choice and look at its entries.

Best wishes,
Tim

On 10/12/2011 01:53 PM, 王瑞 wrote:
 Dear Everybody,
 
 I am sorry for a little off-topic. Could anyone tell me how to determine
 a protein, peptide and ligand of a new pdb's b factor? I know there are
 rampage and sfcheck to validate in ccp4, but I only found a overall b factor
 in their result. By the way, are there a software to determine a new pdb's
 all parameters such as Rsym, I/σ, Redundancy, Solvent molecules' b factors?
 
 Thanks
 

--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] help me to determine ligand's b factor?

2011-10-12 Thread Paul Emsley
On 12/10/11 12:53, 王瑞 wrote:
 Dear Everybody,

 I am sorry for a little off-topic. Could anyone tell me how to
 determine a protein, peptide and ligand of a new pdb's b factor? I
 know there are rampage and sfcheck to validate in ccp4, but I only
 found a overall b factor in their result.



You can use Coot to generate a new molecule for your residue selection
(or ligand) and then use the functions:

(average-temperature-factor imol)
(median-temperature-factor imol)
(standard-deviation-temperature-factor imol)

You can also use bfactan, which makes an xml file with temperature
factor statistics, IIRC.

Paul.
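The same statistics can be computed in a few lines of plain Python (a minimal sketch, not any of the tools named above), reading the fixed-width temperature-factor field of a PDB file, columns 61-66:

```python
import statistics

def b_stats(lines, record="ATOM  "):
    # mean and median B over records of the given type (cols 61-66)
    bs = [float(l[60:66]) for l in lines if l.startswith(record)]
    return statistics.mean(bs), statistics.median(bs)

def fake_atom(b):
    # stub record: only the record name and the B field matter for the demo
    return "ATOM  " + " " * 54 + f"{b:6.2f}"

print(b_stats([fake_atom(20.0), fake_atom(25.0), fake_atom(45.0)]))
# (30.0, 25.0)
```

Selecting `record="HETATM"` (and filtering out HOH) gives the ligand and water averages in the same way.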


[ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread Phil Evans
I've been struggling a bit to understand the definition of B-factors, 
particularly anisotropic Bs, and I think I've finally more-or-less got my head 
around the various definitions of B, U, beta etc, but one thing puzzles me.

It seems to me that the natural measure of length in reciprocal space is d* = 
1/d = 2 sin theta/lambda

but the conventional term for B-factor in the structure factor expression is 
exp(-B s^2) where s = sin theta/lambda = d*/2 ie exp(-B (d*/2)^2)

Why not exp (-B' d*^2) which would seem more sensible? (B' = B/4) Why the 
factor of 4?

Or should we just get used to U instead?

My guess is that it is a historical accident (or relic), ie that is the 
definition because that's the way it is

Does anyone understand where this comes from?

Phil

Re: [ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread Eleanor Dodson
Not sure if this is helpful Phil, but SCALEIT output includes various 
definitions taken from the Willis and Pryor book.


But then there is the problem of converting the amplitude B factors to 
real space.

I attach my anisotropy notes.

It doesn't address the question of sensible conventions!!
E
On 10/12/2011 02:55 PM, Phil Evans wrote:

I've been struggling a bit to understand the definition of B-factors, [...]



+
+
Scaleit:
+
+
Anisotropic temperature factor (REFINE ANISOTROPIC) (default)

   C * exp(-(h**2 B11 + k**2 B22 + l**2 B33
             + 2hk B12 + 2hl B13 + 2kl B23))

 The anisotropic scale is applied to the derivative F as
 (derivative scale) * exp(-(B11*h**2 + B22*k**2 + B33*l**2
                            + 2*(B12*h*k + B13*h*l + B23*k*l)))



   An equivalent form of the anisotropic temperature factor, where
 beta11 = B11/(a*)**2 etc., is:
   exp(-0.25( h**2 * (a*)**2 * beta11 + k**2 * (b*)**2 * beta22
            + l**2 * (c*)**2 * beta33
            + 2*k*l*(b*)*(c*)*beta23 + 2*l*h*(c*)*(a*)*beta31
            + 2*h*k*(a*)*(b*)*beta12 ))

 (This means the Uij terms of an anisotropic temperature factor are equal
 to betaij/(8*pi**2).)

 For derivative 1, the beta matrix (array elements):

   beta11 beta12 beta13     -2.6094    0.        0.
   beta21 beta22 beta23  =   0.       -2.6094    0.
   beta31 beta32 beta33      0.        0.       -0.9378



+
  Note: REFMAC5 outputs betaij/4 


 The isotropic equivalent is:
   exp(-B (sin**2(theta)/lambda**2)) =
   exp(-0.25( h**2 * (a*)**2 * B + k**2 * (b*)**2 * B + l**2 * (c*)**2 * B
            + 2*k*l*(b*)*(c*)*cosAS*B + 2*l*h*(c*)*(a*)*cosBS*B
            + 2*h*k*(a*)*(b*)*cosGS*B ))


+
+
REFMAC code:
+
+
      S1  = FLOAT(IHH(1))*RCELL(1)
      S2  = FLOAT(IHH(2))*RCELL(2)
      S3  = FLOAT(IHH(3))*RCELL(3)
      S11 = S1*S1
      S22 = S2*S2
      S33 = S3*S3
      S12 = 2.0*S1*S2
      S13 = 2.0*S1*S3
      S23 = 2.0*S2*S3
      SBS = B_LS_ANISO_OVER(1)*S11 + B_LS_ANISO_OVER(2)*S22 +
     &      B_LS_ANISO_OVER(3)*S33 + B_LS_ANISO_OVER(4)*S12 +
     &      B_LS_ANISO_OVER(5)*S13 + B_LS_ANISO_OVER(6)*S23

      EXPAN = EXP(-SBS)
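A Python transcription of the fragment above (a sketch of the intent, not REFMAC's actual code): the overall anisotropic scale factor exp(-SBS) from the six packed coefficients.

```python
import numpy as np

def aniso_scale(hkl, rcell, b6):
    # hkl: Miller indices; rcell: reciprocal cell edges (a*, b*, c*);
    # b6: the six packed anisotropic coefficients (11, 22, 33, 12, 13, 23)
    s = np.asarray(hkl, float) * np.asarray(rcell)      # S1, S2, S3
    terms = [s[0] * s[0], s[1] * s[1], s[2] * s[2],     # S11, S22, S33
             2 * s[0] * s[1], 2 * s[0] * s[2], 2 * s[1] * s[2]]
    return np.exp(-np.dot(b6, terms))                   # EXPAN = EXP(-SBS)

print(aniso_scale((0, 0, 0), (0.02, 0.02, 0.01), [1, 1, 1, 0, 0, 0]))  # 1.0
```

At h = 0 the scale is exactly 1; the coefficients attenuate (or inflate) reflections anisotropically with increasing resolution.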


+
+
From RWBROOK and SHELXL
+
+

C
C  PDB files contain anisotropic temperature factors as orthogonal Uo_ij
C  multiplied by 10**4.  The order is: Uo11 Uo22 Uo33 Uo12 Uo13 Uo23
C
C  SHELX defines Ufn_ij to calculate the temperature factor as:
C  T(aniso_Ufn) = exp(-2*PI**2 ( (h*ast)**2 Ufn_11 + (k*bst)**2 Ufn_22
C                  + (l*cst)**2 Ufn_33 + 2hk*ast*bst*Ufn_12
C                  + 2hl*ast*cst*Ufn_13 + 2kl*bst*cst*Ufn_23 ))
C
C  Note: Uo_ji == Uo_ij and Uf_ji == Uf_ij.
C
C  The 10**4*[Uo_ij] listed on the ANISOU card satisfy the relationship:
C  [Uo_ij]  = [RFu]**-1 [Ufn_ij] {[RFu]**-1}T
C  [Ufn_ij] = [RFu] [Uo_ij] {[RFu]}T
C  where [RFu] is the normalised [Rf] matrix derived from the SCALEi
C  cards.

   ie   Rf11 Rf12 Rf13     SCALE1(1) SCALE1(2) SCALE1(3)
        Rf21 Rf22 Rf23  =  SCALE2(1) SCALE2(2) SCALE2(3)
        Rf31 Rf32 Rf33     SCALE3(1) SCALE3(2) SCALE3(3)

   and  Rfu11 Rfu12 Rfu13 = Rf11/FAC1 Rf12/FAC1 Rf13/FAC1
        where FAC1 = SQRT(Rf11**2 + Rf12**2 + Rf13**2), etc.
   For conventional SCALEi, FAC1 = a*, etc., but I am not sure if it is
   always true.

If it is and you 

Re: [ccp4bb] help me to determine ligand's b factor?

2011-10-12 Thread Xiaopeng Hu
I made a simple script for this, perhaps you can edit it for your case.
Just save it as baverage.sh, then run it as  ./baverage.sh your.pdb.



#!/bin/bash
echo --------------------------------------------------
grep '^ATOM' "$1" | cut -b61-66 | awk '{sum+=$1} END {print "Protein Average B =", sum/NR; print "Protein Atom Number =", NR}'
grep '^HETATM' "$1" | grep -v 'HOH' | cut -b61-66 | awk '{sum+=$1} END {print "Ligand/Ion Average B =", sum/NR; print "Ligand/Ion Atom Number =", NR}'
grep '^HETATM' "$1" | grep 'HOH' | cut -b61-66 | awk '{sum+=$1} END {print "Water Average B =", sum/NR; print "Water Number =", NR}'
echo --------------------------------------------------




best,

xiaopeng


Re: [ccp4bb] help me to determine ligand's b factor?

2011-10-12 Thread Christian Roth
Hi,
as Tim pointed out, Rsym, I/sigma and redundancy cannot be obtained from a pdb,
but you will find them in the output of SCALA, XSCALE, or similar scaling
programs. I use Moleman2 from the Uppsala Software Factory to get the B values
out of my pdb; e.g. 'Bf St Type' provides B-factors for every component in the
file.

Christian

Am Mittwoch 12 Oktober 2011 13:53:56 schrieb 王瑞:
 Dear Everybody,
 
 I am sorry for a little off-topic. Could anyone tell me how to
  determine a protein, peptide and ligand of a new pdb's b factor? I know
  there are rampage and sfcheck to validate in ccp4, but I only found a
  overall b factor in their result. By the way, are there a software to
  determine a new pdb's all parameters such as Rsym, I/σ, Redundancy,
  Solvent molecules' b factors?
 
 Thanks
 


Re: [ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread Ian Tickle
Hi Phil

My understanding is that when the B factor was devised it was believed
that it wouldn't represent any physical reality, and initially at least
it was widely regarded as a "garbage dump" for errors.  So it made no
difference whether or not it was related to the natural length in
reciprocal space; it was just a number, a "fudge factor" used to fit
the data.  B*s^2 is simplest to calculate from theta (which can be
measured directly from the film or diffractometer setting), lambda
(which is fixed of course) and B - particularly if you don't have a
computer!  Also a significant point may be that the scattering factors
were tabulated as a function of s = sin(theta)/lambda (but you could
equally well ask why 2sin(theta)/lambda wasn't used there).  So it's
more convenient to have B multiplying s^2, since you can simply add B
to the constant part of the Gaussian scattering-factor function.  Of
course they could have absorbed the extra factor of 2 into lambda
(i.e. used lambda/2 instead of lambda), but maybe no-one thought of
that!

U, the mean square displacement, is the quantity which is directly
related to the physics so if it's realism you're after, use U, not B
(or beta).

Cheers

-- Ian

On Wed, Oct 12, 2011 at 2:55 PM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
 [...]
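The conventions under discussion, written out as code (these are the standard relations, not any particular program's): the Debye-Waller factor exp(-B*s^2) with s = sin(theta)/lambda = d*/2 = 1/(2d), and B = 8*pi^2*U.

```python
import math

def B_from_U(U):
    # B = 8*pi^2*U, the factor-of-4 question being why B multiplies
    # (d*/2)^2 rather than d*^2
    return 8.0 * math.pi ** 2 * U

def debye_waller(B, d):
    # attenuation at resolution d (Angstrom): exp(-B*s^2), s = 1/(2d)
    s = 1.0 / (2.0 * d)
    return math.exp(-B * s * s)

B = B_from_U(0.25)  # U = 0.25 A^2
print(round(B, 2), round(debye_waller(B, 2.0), 3))  # 19.74 0.291
```

So a mean-square displacement of 0.25 A^2 corresponds to B of about 20 A^2, attenuating a 2 A reflection to roughly 29% of its zero-B value.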


[ccp4bb] How to make a geometric and energetic statistics for 20 NMR structures calculated by CNS program?

2011-10-12 Thread Huayue Li
Hi, all
 
I don't know how to make a geometric and energetic statistics for 20 NMR 
solution structures calculated by CNS program.
 
How to calculate r.m.s. deviations from idealized geometry; and how to 
calculate bond, angle, improper, vdw, NOE, cdih, and total energy?
 
The pdb output files by CNS seems not contain these informations. Should I use 
another software? 
 
Thanks.
 
 
 
Huayue Li, Ph. D
College of Pharmacy
Pusan National University
Geumjeong-gu, Jangjeon-dong
Busan 609-735, Korea
Tel: +82-51-510-2185


[ccp4bb] Akta Prime

2011-10-12 Thread Michael Colaneri
Dear all,

We have an AktaPrime, and GE Lifesciences has stopped servicing these
instruments because they are getting old.  Does anyone know of a third-party
company that offers contracts to maintain these instruments?  Thank you.

Mike Colaneri


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Edward A. Berry

Tim Gruene wrote:

On 10/11/2011 09:58 PM, Ethan Merritt wrote:

I don't quite follow how one would generate multiple imputations in this case.

Would this be equivalent to generating a map from (Nobs - N) refls, then
filling in F_estimate for those N refls by back-transforming the map?
Sort of like phase extension, except generating new Fs rather than new phases?


Some people call this the free-lunch-algorithm ;-)
Tim


Doesn't work - the Fourier transform is invertible. As someone already said in
this thread, if the map was made with coefficients of zero for certain
reflections (which is equivalent to omitting those reflections), the
back-transform will give zero for those reflections - unless you do some
density modification first.
So free-lunch is a good name: there ain't no such thing!
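Ed's point in a one-dimensional sketch: zero some Fourier coefficients (and their Friedel mates, to keep the map real), back-transform, and transform again; the zeroed terms come back as exactly zero.

```python
import numpy as np

rho = np.random.default_rng(0).random(64)  # toy 1-D "density"
F = np.fft.fft(rho)
F[10:20] = 0.0   # "missing" reflections set to 0
F[45:55] = 0.0   # their conjugate mates (indices N-k), keeping rho real
rho_trunc = np.fft.ifft(F).real            # map made with missing terms
F_back = np.fft.fft(rho_trunc)             # back-transform of that map
print(np.abs(F_back[10:20]).max() < 1e-10)  # True: nothing is recovered
```

Only after modifying the map (solvent flattening, histogram matching, etc.) does the back-transform carry non-zero estimates for the missing terms, which is why the free-lunch procedure only makes sense inside a density-modification cycle.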


Re: [ccp4bb] help me to determine ligand's b factor?

2011-10-12 Thread Pavel Afonine
Hello,

2011/10/12 Xiaopeng Hu huxp...@mail.sysu.edu.cn

 I made a simple script for this, perhaps you can edit it for your case.
 Just save it as baverage.sh, then run it as  ./baverage.sh your.pdb.


Here is another option:

phenix.pdbtools model_statistics=true model.pdb
or
phenix.model_vs_data model.pdb data.hkl

which, among other statistics, will give you min/max/mean B-factor values
for macromolecule, ligands, solvent.

Pavel


Re: [ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread Pavel Afonine
This may answer some of your questions or at least give pointers:

Grosse-Kunstleve RW, Adams PD:
On the handling of atomic anisotropic displacement parameters.
Journal of Applied Crystallography 2002, 35, 477-480.

http://cci.lbl.gov/~rwgk/my_papers/iucr/ks0128_reprint.pdf

Pavel

On Wed, Oct 12, 2011 at 6:55 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:

 I've been struggling a bit to understand the definition of B-factors,
 particularly anisotropic Bs, and I think I've finally more-or-less got my
 head around the various definitions of B, U, beta etc, but one thing puzzles
 me.

 It seems to me that the natural measure of length in reciprocal space is d*
 = 1/d = 2 sin theta/lambda

 but the conventional term for B-factor in the structure factor expression
 is exp(-B s^2) where s = sin theta/lambda = d*/2 ie exp(-B (d*/2)^2)

 Why not exp (-B' d*^2) which would seem more sensible? (B' = B/4) Why the
 factor of 4?

 Or should we just get used to U instead?

 My guess is that it is a historical accident (or relic), ie that is the
 definition because that's the way it is

 Does anyone understand where this comes from?

 Phil


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread Ethan Merritt
On Wednesday, October 12, 2011 01:12:11 pm Edward A. Berry wrote:
 Tim Gruene wrote:
  [...]
  Some people call this the free-lunch-algorithm ;-)
 
 Doesn't work- the Fourier transform is invertable. [...] Unless you do some 
 density modification first.
 So free-lunch is a good name- there aint no such thing!

Tim refers to the procedure described in
  Sheldrick, G. M. (2002). Z. Kristallogr. 217, 644–65

which was later incorporated into shelxe as the Free Lunch Algorithm.
It does indeed involve a form of density modification.
Tim is also correct that this procedure is the precedent I had in mind,
although I had forgotten its clever name.

cheers,

Ethan

-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742


Re: [ccp4bb] Ice rings... [maps and missing reflections]

2011-10-12 Thread George M. Sheldrick
Dear Ethan,

Thank you for the reference, but actually it's the wrong paper, and anyway
my only contribution to the 'free lunch algorithm' was to name it (in the
title of the paper by Uson et al., Acta Cryst. (2007) D63, 1069-1074). By 
that time the method was already being used in ACORN and by the Bari group, 
who were the first to describe it in print (Caliandro et al., Acta Cryst.
(2005) D61, 556-565). As you correctly say, it only makes sense 
in the context of density modification, but under favorable conditions,
i.e. native data to 2A or better, inventing data to a resolution that you
would have liked to collect but didn't can make a dramatic improvement to
a map, as SHELXE has often demonstrated. Hence the name. And of course
there is no such thing as a free lunch!

Best regards, George

On Wed, Oct 12, 2011 at 01:25:12PM -0700, Ethan Merritt wrote:
  [...]
  Tim refers to the procedure described in
    Sheldrick, G. M. (2002). Z. Kristallogr. 217, 644–65
  
  which was later incorporated into shelxe as the Free Lunch Algorithm.
  It does indeed involve a form of density modification. [...]
 

-- 
Prof. George M. Sheldrick FRS
Dept. Structural Chemistry, 
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


Re: [ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread Phil Evans
Indeed that paper does lay out clearly the various definitions, thank you, but 
I note that you do explicitly discourage use of B (= 8 pi^2 U), and don't 
explain why the factor is 8 rather than 2 (ie why it multiplies (d*/2)^2 rather 
than d*^2). I think James Holton's reminder that the definition dates from 1914 
answers my question.

So why do we store B in the PDB files rather than U?  :-)

Phil

On 12 Oct 2011, at 21:19, Pavel Afonine wrote:

 This may answer some of your questions or at least give pointers:
 
 Grosse-Kunstleve RW, Adams PD:
 On the handling of atomic anisotropic displacement parameters.
 Journal of Applied Crystallography 2002, 35, 477-480.
 
 http://cci.lbl.gov/~rwgk/my_papers/iucr/ks0128_reprint.pdf
 
 Pavel
 
 On Wed, Oct 12, 2011 at 6:55 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
 [...]
 


Re: [ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread James Holton
I think the PDB decided to store B instead of U because unless the
B factor was > 80, there would always be a leading "0." in that
column, and that would just be a pitiful waste of two bytes.  At the
time the PDB was created, I understand bytes cost about $100 each!
(But that could be a slight exaggeration.)

-James Holton
MAD Scientist

On Wed, Oct 12, 2011 at 2:56 PM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
 Indeed that paper does lay out clearly the various definitions, thank you, 
 but I note that you do explicitly discourage use of B (= 8 pi^2 U), and don't 
 explain why the factor is 8 rather than 2 (ie why it multiplies (d*/2)^2 
 rather than d*^2). I think James Holton's reminder that the definition dates 
 from 1914 answers my question.

 So why do we store B in the PDB files rather than U?  :-)

 Phil

 On 12 Oct 2011, at 21:19, Pavel Afonine wrote:

 This may answer some of your questions or at least give pointers:

 Grosse-Kunstleve RW, Adams PD:
 On the handling of atomic anisotropic displacement parameters.
 Journal of Applied Crystallography 2002, 35, 477-480.

 http://cci.lbl.gov/~rwgk/my_papers/iucr/ks0128_reprint.pdf

 Pavel

 On Wed, Oct 12, 2011 at 6:55 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
 I've been struggling a bit to understand the definition of B-factors, 
 particularly anisotropic Bs, and I think I've finally more-or-less got my 
 head around the various definitions of B, U, beta etc, but one thing puzzles 
 me.

 It seems to me that the natural measure of length in reciprocal space is d* 
 = 1/d = 2 sin theta/lambda

 but the conventional term for B-factor in the structure factor expression 
 is exp(-B s^2) where s = sin theta/lambda = d*/2 ie exp(-B (d*/2)^2)

 Why not exp (-B' d*^2) which would seem more sensible? (B' = B/4) Why the 
 factor of 4?

 Or should we just get used to U instead?

 My guess is that it is a historical accident (or relic), ie that is the 
 definition because that's the way it is

 Does anyone understand where this comes from?

 Phil
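
James's leading-"0." point is easy to see against the fixed-width tempFactor field of PDB ATOM records (columns 61-66, Fortran format F6.2); the helper name below is just for illustration:

```python
import math

def temp_factor_field(value):
    # PDB ATOM/HETATM records hold the temperature factor in columns
    # 61-66: a fixed-width field, six characters, two decimals (F6.2).
    return f"{value:6.2f}"

b = 19.74                       # a typical B, in A^2
u = b / (8.0 * math.pi ** 2)    # the same displacement expressed as U

print(repr(temp_factor_field(b)))   # -> ' 19.74'
print(repr(temp_factor_field(u)))   # -> '  0.25'  (two characters spent on "0.")
```

Any U below 1 A^2 spends two of the six characters on the leading "0.", while typical B values fill the field.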




Re: [ccp4bb] help me to determine ligand's b fator?

2011-10-12 Thread 王瑞
Thank you for all your advice. They are all good suggestions!

2011/10/12 王瑞 wangrui...@gmail.com

 Dear Everybody,

 I am sorry for going a little off-topic. Could anyone tell me how to
 determine the B factors of the protein, peptide, and ligand in a new PDB
 file? I know there are Rampage and SFCHECK for validation in CCP4, but I
 only found an overall B factor in their results. By the way, is there
 software to determine all of a new PDB's statistics, such as Rsym, I/σ,
 redundancy, and solvent molecules' B factors?

 Thanks
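
On the per-ligand question: CCP4's BAVERAGE tabulates average B per residue, and the same number can be pulled straight from the fixed-width B column of a coordinate file. A minimal sketch (the sample records below are made up for illustration):

```python
from collections import defaultdict

def mean_b_per_residue(pdb_lines):
    # Average the B-factor field (columns 61-66) over each residue
    # in ATOM/HETATM records, keyed by (chain, residue name, number).
    sums = defaultdict(lambda: [0.0, 0])
    for line in pdb_lines:
        if line.startswith(("ATOM", "HETATM")):
            key = (line[21], line[17:20].strip(), line[22:26].strip())
            sums[key][0] += float(line[60:66])
            sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

sample = [
    "ATOM      1  CA  ALA A   1      11.104  13.207   2.100  1.00 20.00",
    "ATOM      2  CB  ALA A   1      12.560  13.207   2.100  1.00 24.00",
    "HETATM    3  C1  LIG A 201       8.000   9.000  10.000  1.00 35.50",
]
print(mean_b_per_residue(sample))
# -> {('A', 'ALA', '1'): 22.0, ('A', 'LIG', '201'): 35.5}
```

Rsym, I/σ and redundancy, by contrast, come from data processing (e.g. the scaling logs), not from the coordinate file.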



Re: [ccp4bb] Akta Prime / FPLC Options / Off Topic

2011-10-12 Thread Paul Smith
Michael,

Unfortunately, I actually don't know who services these machines apart from GE.

Because you brought up the subject of GE equipment and service, I thought I 
would ask the community about the best options for routine crystallographic 
scale FPLC.

In my opinion, following the takeover of Pharmacia by GE, the price of GE 
machines, replacement parts, and service has skyrocketed and GE service reps 
seem determined to squeeze and extort every dollar they can.  Personally, I'd 
love to never do business with GE again.  


However, in some ways they are the only game in town: GE is the de facto 
standard for our line of work, and the Akta machines are very good.  
Even so, GE's consistent price gouging and outright crooked service practices 
encourage me to look elsewhere.

I've used systems from AP-biotech (junk) and have heard some good things about 
Bio-rad.  What does the community at large think?  Are there other good 
options?  Does anyone have some spare millions and manufacturing connections in 
India/China to consider starting a competing company?

Sorry to hijack your thread Michael.  Let me know what you find out.  The less 
money I send to GE the better.


--Paul





From: Michael Colaneri colane...@gmail.com
To: CCP4BB@JISCMAIL.AC.UK
Sent: Wednesday, October 12, 2011 2:28 PM
Subject: [ccp4bb] Akta Prime


Dear all,

We have an AktaPrime, and GE Lifesciences has stopped servicing these instruments 
because they are getting old.  Does anyone know of a third-party company that 
offers maintenance contracts for these instruments?  Thank you.

Mike Colaneri

Re: [ccp4bb] change of origin for reflections or map

2011-10-12 Thread Francois Berenger

Hello,

The more I read this mailing list, the more I feel
the crystallographer is a very special human being:

- he lives in Fourier space
- when he goes to Cartesian space, he restricts
  himself to a small box that is replicated to infinity
  using symmetry operators and origin shifts

That being said, some of us live in a world made of zeros and ones...

Regards,
F.


Re: [ccp4bb] Definition of B-factor (pedantry)

2011-10-12 Thread Frances C. Bernstein

At this point I usually chime in with an explanation of why
the Protein Data Bank made some choice or other in the early
days, but on the matter of U vs. B I have no information to
contribute.

I can point out that at that time characters were stored in
display code on a CDC 6600, and display code used 6 bits, so
'bytes' at that time were less obese.  Six bits per character
explains, of course, why lower case characters were not
routinely used.

Frances

=
Bernstein + Sons
*   *   Information Systems Consultants
5 Brewster Lane, Bellport, NY 11713-2803
*   * ***
 *Frances C. Bernstein
  *   ***  f...@bernstein-plus-sons.com
 *** *
  *   *** 1-631-286-1339FAX: 1-631-286-1999
=

On Wed, 12 Oct 2011, James Holton wrote:


I think the PDB decided to store B instead of U because unless the
B factor was > 80, there would always be a leading 0. in that
column, and that would just be a pitiful waste of two bytes.  At the
time the PDB was created, I understand bytes cost about $100 each!
(But that could be a slight exaggeration)

-James Holton
MAD Scientist

On Wed, Oct 12, 2011 at 2:56 PM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:

Indeed that paper does lay out clearly the various definitions, thank you, but 
I note that you do explicitly discourage use of B (= 8 pi^2 U), and don't 
explain why the factor is 8 rather than 2 (ie why it multiplies (d*/2)^2 rather 
than d*^2). I think James Holton's reminder that the definition dates from 1914 
answers my question.

So why do we store B in the PDB files rather than U?  :-)

Phil

On 12 Oct 2011, at 21:19, Pavel Afonine wrote:


This may answer some of your questions or at least give pointers:

Grosse-Kunstleve RW, Adams PD:
On the handling of atomic anisotropic displacement parameters.
Journal of Applied Crystallography 2002, 35, 477-480.

http://cci.lbl.gov/~rwgk/my_papers/iucr/ks0128_reprint.pdf

Pavel

On Wed, Oct 12, 2011 at 6:55 AM, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:
I've been struggling a bit to understand the definition of B-factors, 
particularly anisotropic Bs, and I think I've finally more-or-less got my head 
around the various definitions of B, U, beta etc, but one thing puzzles me.

It seems to me that the natural measure of length in reciprocal space is d* = 
1/d = 2 sin theta/lambda

but the conventional term for B-factor in the structure factor expression is 
exp(-B s^2) where s = sin theta/lambda = d*/2 ie exp(-B (d*/2)^2)

Why not exp (-B' d*^2) which would seem more sensible? (B' = B/4) Why the 
factor of 4?

Or should we just get used to U instead?

My guess is that it is a historical accident (or relic), ie that is the 
definition because that's the way it is

Does anyone understand where this comes from?

Phil





[ccp4bb] Monomers in COOT

2011-10-12 Thread Dr. STEPHEN SIN-YIN, CHUI
Dear All,

For all monomers (3-letter codes) used in COOT, where can I find the full names of the
whole library? Many thanks

stephen  

-- 
Dr. Stephen Sin-Yin Chui (徐先賢)
Assistant Professor,
Department of Chemistry,
The University of Hong Kong, Pokfulam Road,
Hong Kong SAR, China.
Tel: 22415814 (Office), 22415818 (X-ray Diffraction Laboratory)


Re: [ccp4bb] Monomers in COOT

2011-10-12 Thread Jacqueline Vitali
Try File > Search Monomer Library.  Hit Search without typing anything.  It
will give you what it has.
Jackie Vitali

2011/10/12 Dr. STEPHEN SIN-YIN, CHUI chui...@hkucc.hku.hk

 Dear All,

 For all monomers (3-letter codes) used in COOT, where can I find the full names
 of the
 whole library? Many thanks

 stephen

 --
 Dr. Stephen Sin-Yin Chui (徐先賢)
 Assistant Professor,
 Department of Chemistry,
 The University of Hong Kong, Pokfulam Road,
 Hong Kong SAR, China.
 Tel: 22415814 (Office), 22415818 (X-ray Diffraction Laboratory)
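
As a postscript: in a CCP4 installation the master list usually lives at $CLIBD_MON/list/mon_lib_list.cif (path assumed from convention), where each three-letter code is paired with its full name in a _chem_comp loop. A small parser, run here on an inline sample rather than the real file, sketches how to pull those names out:

```python
import shlex

SAMPLE = """\
loop_
_chem_comp.id
_chem_comp.three_letter_code
_chem_comp.name
_chem_comp.group
ALA ALA 'ALANINE' L-peptide
ATP ATP "ADENOSINE-5'-TRIPHOSPHATE" non-polymer
"""

def chem_comp_names(cif_text):
    # Collect _chem_comp.* tags after each loop_, then map each data
    # row's id to its full name (quoted fields handled by shlex).
    tags, names, in_loop = [], {}, False
    for line in cif_text.splitlines():
        stripped = line.strip()
        if stripped == "loop_":
            in_loop, tags = True, []
        elif in_loop and stripped.startswith("_chem_comp."):
            tags.append(stripped)
        elif in_loop and stripped:
            fields = shlex.split(stripped)
            if len(fields) == len(tags):
                row = dict(zip(tags, fields))
                names[row["_chem_comp.id"]] = row["_chem_comp.name"].strip()
    return names

print(chem_comp_names(SAMPLE))
# -> {'ALA': 'ALANINE', 'ATP': "ADENOSINE-5'-TRIPHOSPHATE"}
```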