Re: [ccp4bb] R pim and Rmeans

2008-12-10 Thread Manfred S. Weiss
Hi Frank,

of course R_pim is just one number, and it may not be discriminatory
enough to decide when to stop including images. But I can say from
my own experience that R_pim will not drop forever. I have seen
data sets with R_pim values of 0.5% at 2.0 A resolution or better,
but never R_pim values significantly smaller than that (even with
the redundancy approaching 100). You may try this out yourself.
I bet that R_pim will eventually go up again after a few
revolutions, once radiation damage kicks in. The real question, in
my opinion, is when the deviation from the 1/(N-1) drop becomes
large enough that you would want to stop including more images.
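(For the curious: the expected drop can be sketched numerically. Below is a minimal Python toy, assuming purely statistical counting errors with sigma ~ sqrt(I), and using the standard R_pim definition with the sqrt(1/(N-1)) weight; the simulated intensities and noise model are made up for illustration.)

```python
import math, random

def r_pim(groups):
    """R_pim = sum_hkl sqrt(1/(N-1)) * sum_i |I_i - <I>|  /  sum_hkl sum_i I_i."""
    num = den = 0.0
    for obs in groups:
        n = len(obs)
        mean = sum(obs) / n
        num += math.sqrt(1.0 / (n - 1)) * sum(abs(i - mean) for i in obs)
        den += sum(obs)
    return num / den

random.seed(0)
true_I = [random.uniform(100.0, 1000.0) for _ in range(2000)]

def simulate(n_obs):
    # purely statistical (counting) errors: sigma ~ sqrt(I), no systematic error
    return [[i + random.gauss(0.0, math.sqrt(i)) for _ in range(n_obs)]
            for i in true_I]

for n in (2, 4, 8, 16, 32):
    print(n, round(r_pim(simulate(n)), 4))
```

With only counting errors, R_pim keeps falling roughly as 1/sqrt(N-1); with radiation damage (a systematic error) added to the toy model, it would flatten out and rise, which is Manfred's point.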

Cheers, Manfred.


*  *
*Dr. Manfred S. Weiss  *
*  *
* Team Leader  *
*  *
* EMBL Hamburg OutstationFon: +49-40-89902-170 *
* c/o DESY, Notkestr. 85 Fax: +49-40-89902-149 *
* D-22603 Hamburg   Email: [EMAIL PROTECTED] *
* GERMANY   Web: www.embl-hamburg.de/~msweiss/ *
*  *



On Tue, 9 Dec 2008, Frank von Delft wrote:

 Hi Manfred


  thanks a lot for your comments, since they raise some interesting
  points.
 
  R_pim should give the precision of the averaged measurement,
  hence the name. It will decrease with increasing data redundancy,
  obviously. The decrease will be proportional to the square root
  of the redundancy if only statistical errors or counting errors
  are present. If other things happen, such as for instance
  radiation damage, then you are introducing systematic errors,
  which will lead to either R_pim decreasing less than it should,
  or R_pim even increasing.
 
  This raises an important issue. As more and more images keep
  being added to a data set, could one decide at some point,
  when to add any further images?

 This really is the point: in these days of fast data collection, I
 assume that most people collect more frames than necessary for
 completeness. At least, I always do. So the question is no longer "is
 this data good enough?" -- that you can test quickly enough with
 downstream programs.

 Rather, it is: "how many of the frames that I have should I include?",
 so that you don't have to run the same combination of downstream programs
 for 20 combinations of frames.

 Radiation damage is the key, innit. Sure, I can pat myself on the
 back by downweighting everything by 1/(N-1) -- so after 15 revolutions
 of a tetragonal crystal that'll give a brilliant Rpim, but the crystal
 will be a cinder and the data presumably crap.

 But it's the intermediate zone (1-2x completeness) where I need help,
 and I don't see how Rpim is discriminatory enough.

 phx.



Re: [ccp4bb] R pim and Rmeans

2008-12-10 Thread Eleanor Dodson
There are useful plots from Scala showing various measures vs. frame
number. I usually look at those and do some hand-waving to decide where
the increasing R values indicate you are measuring nothing, or measuring
something different from the first frames because of radiation damage.


Eleanor


Frank von Delft wrote:

[quoted text snipped]




Re: [ccp4bb] R pim and Rmeans

2008-12-10 Thread Kay Diederichs

Hi Frank,

maybe this is an opportunity to state that there is indeed a way to 
assess radiation damage by looking at an R-factor plot, but that 
R-factor is R_d [1], not R_pim.


The formula and some explanation are in the CCP4 wiki at 
http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/R-factors#measuring_radiation_damage


best,

Kay

[1] Diederichs, K. (2006) Some aspects of quantitative analysis and 
correction of radiation damage. Acta Cryst D62, 96-101
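As a rough illustration of the idea, here is a toy Python sketch of the pairwise R_d statistic, computed on invented data where an exponential intensity decay stands in for radiation damage (the decay constant, noise level, and sampling are arbitrary, not from the paper):

```python
import math, random

def r_d(obs, frame_gap):
    """Pairwise decay R-factor for a given frame-number difference:
    sum over pairs |I_i - I_j|  /  sum over pairs (I_i + I_j)/2,
    taken over observations of the same unique reflection whose
    frame numbers differ by frame_gap."""
    num = den = 0.0
    for measurements in obs:              # one list of (frame, I) per unique hkl
        for fi, ii in measurements:
            for fj, ij in measurements:
                if fj - fi == frame_gap:
                    num += abs(ii - ij)
                    den += 0.5 * (ii + ij)
    return num / den

random.seed(1)
obs = []
for _ in range(500):
    true_I = random.uniform(100.0, 1000.0)
    # one observation every 10 frames; intensity decays with dose (frame number)
    obs.append([(f, true_I * math.exp(-0.002 * f) + random.gauss(0.0, 2.0))
                for f in range(0, 100, 10)])

for gap in (10, 30, 50, 90):
    print(gap, round(r_d(obs, gap), 4))
```

With decay present, R_d grows steadily with the frame-number difference, which is exactly the signature one looks for in the plot.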




Frank von Delft wrote:

[quoted text snipped]



--
Kay Diederichs                http://strucbio.biologie.uni-konstanz.de
email: [EMAIL PROTECTED]      Tel +49 7531 88 4049   Fax 3183
Fachbereich Biologie, Universität Konstanz, Box M647, D-78457 Konstanz



Re: [ccp4bb] refmac 5.5.0068 error

2008-12-10 Thread Victor Lamzin

Dear Michael,

ARP/wARP should recognise this refmac version with no problem. Before 
typing './install.sh', just run 'refmac5 -i' to check that refmac 
executes fine and the CCP4 environment is set up.


If the problem remains please get back to us with details on the 
ARP/wARP version number and computer operating system.


Best regards,
Victor


 Michael Jackson wrote:

 hello,
 Thank you for the reply about the refmac 5.5.0066 error. I downloaded 
refmac 5.5.0068, but there appears to be a problem with ARP/wARP 
recognising the version. I reinstalled ARP/wARP, and the install shell 
script freezes when it looks for the refmac file.


[ccp4bb] plate survey

2008-12-10 Thread Flip Hoedemaeker
Hi Community,



I would like to do a little survey on popular 2-3 drop per 96 well plates,
and the pros and cons of these plates. I know that the new Cornings, the MRC
and the Intelliplates are used often in the labs I visit (we use 2 drop MRC
plates mostly). Perhaps you can comment on optical quality, robot handling
(picking up, dispensing), performance (cracking!), ease of harvesting
crystals etc.



Please reply directly to me at [EMAIL PROTECTED], I will post a summary.



Flip


Re: [ccp4bb] Summary - torsion angle restraints in REFMAC

2008-12-10 Thread Borhani, David
On the Ligand tab (upper left of the web page), at left you'll see "PDB
(model coordinates)" and "PDB (ideal coordinates)". The Ideal coords
really are idealized, including a very different torsion between the two
ring systems in this ligand. The Model coords seem correct (one of the
two in the asymmetric unit was chosen, on some basis), but, again a
"Hmm?": the B-factors of the Model coords were all reset to 10.00. Dave

 -Original Message-
 From: Artem Evdokimov [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, December 09, 2008 9:36 PM
 To: Borhani, David; CCP4BB@JISCMAIL.AC.UK
 Subject: RE: [ccp4bb] Summary - torsion angle restraints in REFMAC
 
 Interestingly, in the interactive 3D applet view of the 
 ligand from the PDB
 the two are perfectly in plane, whereas in the protein viewer 
 the two groups
 are clearly out of plane. I assume that this means that the 
 coordinates for
 the 3D ligand view are re-computed internally and are not 
 representative of
 what's actually in the PDB.
 
 Hmm?
 
 Artem
 
 -Original Message-
 From: CCP4 bulletin board [mailto:[EMAIL PROTECTED] On Behalf Of
 Borhani, David
 Sent: Tuesday, December 09, 2008 9:25 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Summary - torsion angle restraints in REFMAC
 
 RE: Ian's and Eckhard's wise suggestion to deposit non-standard
 parameters used to refine ligands:
 
 I have had some ligands where the 1 & 8 peri-substituents (amino 
 methyl groups) on a 6/6 fused aromatic ring (deazapteridine 
 derivatives) were very clearly out of the plane. It took extensive 
 fiddling with the refmac dictionary to get the torsion & planarity 
 restraints relaxed enough so that these two atoms were allowed to 
 move into the very clear difference density waiting for them, even 
 with 1.1 A data!
 
 See http://www.rcsb.org/pdb/explore.do?structureId=1KMS and
 http://www.rcsb.org/pdb/explore.do?structureId=1KMV for 
 examples; paper
 is here: http://www.ncbi.nlm.nih.gov/pubmed/12096917
 
 I thought I had deposited the refmac library file, but now I 
 cannot find
 it on the PDB web site. Maybe I'm not looking in the right 
 place, but if
 you click, in the ligand (LII) area, on ligand structure view, then
 (at left) Component definition (CIF), you can find this:
 http://www.rcsb.org/pdb/files/ligand/LII.cif. BUT, THIS IS 
 NOT MY FILE!
 Rather, some auto-generated file (probably wrong; definitely doesn't
 have my relaxed parameters).
 
 Similarly, for http://www.rcsb.org/pdb/explore.do?structureId=2C2S
 (http://www.ncbi.nlm.nih.gov/pubmed/17569517), with a rather strange
 carborane ligand, I *distinctly* remember depositing the library file,
 because the EBI deposition software got completely tied in 
 knots trying
 to interpret the deposited coords. Again, no files to be found on the
 PDB. http://www.rcsb.org/pdb/files/ligand/34B.cif is again 
 not my file,
 but an auto-generated one (probably wrong, given the trouble 
 the EBI s/w
 had).
 
 Hoping some folks at EBI/RCSB monitor the CCP4BB,
 
 Dave
 David Borhani, Ph.D.
 D. E. Shaw Research, LLC
 120 West Forty-Fifth Street, 39th Floor
 New York, NY 10036
 [EMAIL PROTECTED]
 212-478-0698
 http://www.deshawresearch.com
 
 


Re: [ccp4bb] Restrictions in ccp4-6.1 ?

2008-12-10 Thread Winn, MD (Martyn)
The current value of maxbat in scala_/parameters.fh is 5000
Mind you, according to CVS this was increased from 1000 to 5000 in 1999!

sortmtz and reindex also have MBATCH=5000. There's no restriction in
the library itself.

Cheers
Martyn

-Original Message-
From: CCP4 bulletin board on behalf of Mueller, Juergen-Joachim
Sent: Tue 12/9/2008 7:28 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Restrictions in ccp4-6.1 ?
 

Dear developers,
I wonder if the restrictions for MBATCH=1000 in CCP4-v6.0.2 and for scala
(maxbat=1000,maxpmr=2000,maxmat=1000, maxrun=3) hold also in V6.1?
Otherwise I cannot use the precompiled versions and have to recompile!?
Thank you,
Jürgen


[ccp4bb] generating omit maps

2008-12-10 Thread Kathleen Frey
Hi Everyone,

Can anyone tell me a relatively easy way to generate an omit density map for
a ligand? I know that CNS can do this, but I was wondering if there's a CCP4
related program to generate omit maps.

Thanks,
Kathleen


Re: [ccp4bb] generating omit maps

2008-12-10 Thread Luca Jovine

Subject: generating omit maps
From: Kathleen Frey [EMAIL PROTECTED]
Reply-To: Kathleen Frey [EMAIL PROTECTED]
Date: Wed, 10 Dec 2008 10:30:47 -0500

[quoted text snipped]

Hi Kathleen,

Not CCP4, I know, but this can be done very easily in PHENIX:

   http://www.phenix-online.org/documentation/autobuild.htm#anch159

HTH,

Luca


Luca Jovine, Ph.D.
Group Leader, Protein Crystallography Unit
Karolinska Institutet
Department of Biosciences and Nutrition
Hälsovägen 7, SE-141 57 Huddinge, Sweden
Voice: +46.(0)8.6083-301  FAX: +46.(0)8.6089-290
E-mail: [EMAIL PROTECTED]
W3: http://jovinelab.org


Re: [ccp4bb] generating omit maps

2008-12-10 Thread Mark J. van Raaij
as a small variation on this, I would first finish the protein, and  
then include ligands, working from larger to smaller (ATP > citrate > 
glycerol > sulphates > waters). Sometimes several waters (from  
automated solvent building) in place of a bona fide ligand (or a  
glycerol for example) refine eerily well and give reasonable maps...


Mark J. van Raaij
Dpto de Bioquímica, Facultad de Farmacia
Universidad de Santiago
15782 Santiago de Compostela
Spain
http://web.usc.es/~vanraaij/







On 10 Dec 2008, at 16:41, Mischa Machius wrote:

Kathleen - The easiest way is to simply remove the ligand from the  
coordinates and refine for a few cycles. Whether that is  
particularly meaningful is another question. Better would be to  
remove the ligand coordinates, shake the remaining coordinates  
(i.e., randomly displace them by a small amount), and then refine.  
Even better, perhaps, would be to calculate a simulated-annealing  
omit map, but AFAIK, you can't use CCP4 for that. IMHO, the best  
option is to not include the ligand in the model-building and  
refinement processes until all of the protein(s), solvent molecules,  
etc. have been properly modeled. I personally tend to include  
ligands only at the very end of the modeling/refinement process,  
unless there is really no ambiguity. This strategy will minimize any  
model bias from the ligand, and it will give you an omit map by  
default (until you actually include the ligand). Best - MM



[quoted text snipped]


Re: [ccp4bb] generating omit maps

2008-12-10 Thread Mischa Machius

On Dec 10, 2008, at 9:52 AM, Mark J. van Raaij wrote:

as a small variation on this, I would first finish the protein,  
and then include ligands, working from larger to smaller (ATP > 
citrate > glycerol > sulphates > waters). Sometimes several  
waters (from automated solvent building) in place of a bona fide  
ligand (or a glycerol for example) refine eerily well and give  
reasonable maps...


Automated solvent building that includes automatic refinement should  
probably be banned ;)


I usually add solvent molecules before adding ligands. For one, the  
electron density usually improves when adding solvent, so that the  
interpretation of the ligands becomes easier. I would recommend  
checking every single solvent molecule, i.e., never use automatic  
solvent-modeling and refinement (!) routines blindly. Make sure to  
remove those solvent molecules that have clearly been placed into  
ligand density before doing the actual refinement.


Another potential problem may have to be taken into consideration as  
well: depending on the resolution, it can happen that protein side  
chains get moved into ligand density if it is not occupied by  
atoms. In such cases, I use a mixed strategy derived from the  
approaches described in my first post.


Best - MM





On 10 Dec 2008, at 16:41, Mischa Machius wrote:

[quoted text snipped]


Re: [ccp4bb] generating omit maps

2008-12-10 Thread Andrew Gulick
Let me also follow up on this point. I also agree that the ligand should be
added very late in the refinement/model-building procedure. I also encourage
people in my group to create a subdirectory BEFORE_LIGANDS into which
they put the current PDB and map (or mtz) files prior to adding the ligand.
Putting it into a separate directory avoids accidentally deleting it if you
tidy up your modeling files at some later stage.

Come publication time, include in your manuscript the map generated at this
stage, prior to inclusion of the ligand. It is sometimes not as pretty, but it
gives the reader an honest view of your ligand density.

Cheers,
Andy

-- 
Andrew M. Gulick, Ph.D.
---
(716) 898-8619
Hauptman-Woodward Institute
700 Ellicott St
Buffalo, NY 14203
---
Hauptman-Woodward Institute
Dept. of Structural Biology, SUNY at Buffalo

http://www.hwi.buffalo.edu/Faculty/Gulick/Gulick.html
http://labs.hwi.buffalo.edu/gulick


On 12/10/08 10:41 AM, Mischa Machius [EMAIL PROTECTED]
wrote:

[quoted text snipped]


Re: [ccp4bb] generating omit maps

2008-12-10 Thread Mischa Machius
Kathleen - The easiest way is to simply remove the ligand from the  
coordinates and refine for a few cycles. Whether that is particularly  
meaningful is another question. Better would be to remove the ligand  
coordinates, shake the remaining coordinates (i.e., randomly  
displace them by a small amount), and then refine. Even better,  
perhaps, would be to calculate a simulated-annealing omit map, but  
AFAIK, you can't use CCP4 for that. IMHO, the best option is to not  
include the ligand in the model-building and refinement processes  
until all of the protein(s), solvent molecules, etc. have been  
properly modeled. I personally tend to include ligands only at the  
very end of the modeling/refinement process, unless there is really no  
ambiguity. This strategy will minimize any model bias from the ligand,  
and it will give you an omit map by default (until you actually  
include the ligand). Best - MM
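"Simply remove the ligand" can of course be scripted. A minimal Python sketch, assuming the ligand has the (hypothetical) three-letter residue code LIG; a real workflow would read and write PDB files rather than an inline string:

```python
# Drop a ligand (hypothetical residue code "LIG") from PDB-format text, so
# that a few refinement cycles afterwards yield an omit map for that ligand.
def strip_ligand(pdb_text, resname="LIG"):
    kept = []
    for line in pdb_text.splitlines(True):
        # columns 18-20 (0-based slice 17:20) hold the residue name
        if line.startswith(("ATOM", "HETATM")) and line[17:20].strip() == resname:
            continue
        kept.append(line)
    return "".join(kept)

sample = (
    "ATOM      1  CA  ALA A   1      11.104   6.134   2.155  1.00 20.00           C\n"
    "HETATM  900  C1  LIG A 201       8.000   9.000   4.000  1.00 30.00           C\n"
    "HETATM  901  O   HOH A 301       1.000   2.000   3.000  1.00 25.00           O\n"
)
print(strip_ligand(sample))
```

The stripped model then goes into refinement as usual; the resulting Fo-Fc map is the omit map for the removed ligand.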



Mischa Machius, PhD
Associate Professor
Department of Biochemistry
UT Southwestern Medical Center at Dallas
5323 Harry Hines Blvd.; ND10.214A
Dallas, TX 75390-8816; U.S.A.
Tel: +1 214 645 6381
Fax: +1 214 645 6353



On Dec 10, 2008, at 9:30 AM, Kathleen Frey wrote:


[quoted text snipped]


Re: [ccp4bb] R pim and Rmeans

2008-12-10 Thread Frank von Delft
Yup, there is that.  It doesn't help make the decision, though, of how 
much to include.  And the other problem is that it requires higher redundancy 
than I usually have when living on the edge of completeness vs. death.  
(It's not multiplicity-weighted, is it?)



(Its other problem is that it is hidden in the currently least 
user-friendly program in crystallography -- as powerful as the algorithm 
undoubtedly is;  I argue that by definition a user-unfriendly program 
cannot be considered powerful.  But I digress.  And provoke...  and don 
my flame-shield.)


phx.




Kay Diederichs wrote:

[quoted text snipped]





[ccp4bb] definition of I Sigma I

2008-12-10 Thread ANDY DODDS
Hi,

does anyone have a definition of "I Sigma I", please?  Any definitions
that I have found are not very informative for novices.


thanks

andy


Re: [ccp4bb] definition of I Sigma I

2008-12-10 Thread Anastassis Perrakis

Hi -

"I Sigma I" means nothing.

<I>/<sigma(I)> is the average intensity of a group of reflections  
divided by the average standard deviation (sigma) of the same group of  
reflections. It's usually reported per resolution shell, i.e. for groups of  
reflections within thin shells of resolution.

<I/sigma(I)> is the intensity of a reflection divided by its standard  
deviation (sigma), averaged over a group of reflections. It's also  
usually reported per resolution shell.

Both report signal over noise, and in many (most?) publications it's  
unclear which one is reported. They are similar, but not the same.
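To make the difference concrete, a tiny Python example with invented numbers; the two conventions give visibly different values for the same group of reflections:

```python
# Two conventions for "I over sigma" of a group of reflections (toy values).
I     = [120.0, 45.0, 8.0, 300.0, 2.0]    # intensities
sigma = [10.0,  9.0,  7.0, 14.0,  6.5]    # their standard deviations

# Convention 1: mean intensity divided by mean sigma, <I>/<sigma(I)>
mean_I_over_mean_sigma = (sum(I) / len(I)) / (sum(sigma) / len(sigma))

# Convention 2: mean of the individual ratios, <I/sigma(I)>
mean_of_ratios = sum(i / s for i, s in zip(I, sigma)) / len(I)

print(round(mean_I_over_mean_sigma, 2))  # 10.22
print(round(mean_of_ratios, 2))          # 7.98
```

Strong reflections dominate convention 1, while convention 2 weights every reflection equally, hence the gap between the two numbers.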


Hope this helps.

A.

On 10 Dec 2008, at 23:14, ANDY DODDS wrote:


[quoted text snipped]


[ccp4bb] Ref for B-factor Underlying Phenomenon

2008-12-10 Thread Jacob Keller
Hello Crystallographers,

does anybody have a good reference dealing with interpretations of what 
B-factors (anisotropic or otherwise) really signify? In other words, a 
systematic addressing of all of the possible underlying 
molecular/crystal/data-collection phenomena which the B-factor mathematically 
models?

Thanks in advance,

Jacob

***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
Dallos Laboratory
F. Searle 1-240
2240 Campus Drive
Evanston IL 60208
lab: 847.491.2438
cel: 773.608.9185
email: [EMAIL PROTECTED]
***

[ccp4bb] Summary: Modeling residues with very poor density

2008-12-10 Thread Andy Millston
 
Thank you Afonine, Hans, Kumar, Deliang, Mark, Jose, Tim, Eleanor and Ed for 
sharing your thoughts.

Here is a summary of the responses I received-

1. One should try to model residues based on 2Fo-Fc maps contoured at the 1.0 sigma 
level. Structures of mobile regions/loops can sometimes be modeled based on 
maps contoured at 0.5-0.6 sigma, but in such cases extreme care must be taken to 
avoid modeling based on noise and not to omit genuine signals visible 
in the region. 
2. 2Fo-Fc maps contoured below 0.5 sigma and Fo-Fc maps contoured below 1.5 
sigma are full of noise and hence need to be carefully analyzed. It is 
advisable to avoid modeling anything based solely on signals seen below the 
above-mentioned sigma cut-offs. Fo-Fc maps sometimes improve significantly 
after refining the initial model a couple of times.
3. There seems to be disagreement over modeling residues that one cannot see 
in maps versus leaving out residues that one can see in maps.

Thanks a lot for your responses.

AM


  

Re: [ccp4bb] Ref for B-factor Underlying Phenomenon

2008-12-10 Thread James Holton
The original reference is Debye, P. (1914) Ann. d. Physik 43, 49, which 
is in German. Waller's paper came later, and the forgotten paper which did 
the math rigorously was Ott, H. (1935) Ann. d. Physik 23, 169. The best 
description I have seen in English is in chapter 1, section 3 of:
James, R. W. (1962) "The Optical Principles of the Diffraction of X-rays: 
The Crystalline State", Vol. II. Bell & Hyman Ltd., London.
James is a big book with a lot of math in it, but it is remarkably easy 
to read. Particularly chapter 1. I highly recommend it.


The long and short of the B-factor phenomenon is that the primary 
effect of corrupting a perfect lattice by moving the atoms away from 
their ideal positions is a drop in the Bragg intensities and a 
corresponding increase in background (the elastically-scattered photons 
that don't go into the spots have to go somewhere). R. W. James shows 
the math to prove that the falloff of the Bragg intensities with 
resolution is the Fourier transform of the histogram of atomic 
displacements. It actually doesn't matter if the displacements in 
adjacent unit cells are correlated or not. It is only the histogram of 
displacements from the ideal lattice that is important.


If the distribution (histogram) of atomic displacements is Gaussian, 
then its Fourier transform is also a Gaussian and therefore has the form 
exp(-B*s^2), where B = 8*pi^2*u^2 and u is the r.m.s. displacement of an 
atom along the scattering vector s (halfway between the incident and 
diffracted beams). It is interesting that movement of atoms 
perpendicular to s has absolutely no effect on the Bragg intensity! In 
this way, anisotropic displacement distributions lead to anisotropic 
diffraction as the crystal rotates.
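This can be checked numerically: for displacements x drawn from a Gaussian with r.m.s. u along s, the average amplitude factor <cos(4*pi*s*x)> should reproduce exp(-B*s^2) with B = 8*pi^2*u^2. A quick Monte-Carlo sketch (the values of u and s below are arbitrary):

```python
import math, random

random.seed(42)
u = 0.5    # r.m.s. displacement along s, in Angstrom (arbitrary)
s = 0.25   # sin(theta)/lambda = 1/(2d), i.e. d = 2 Angstrom resolution
B = 8 * math.pi**2 * u**2

# Phase shift for a displacement x along s is 4*pi*s*x, so the amplitude
# attenuation is <exp(i*4*pi*s*x)> = <cos(4*pi*s*x)> for x ~ N(0, u^2).
n = 200_000
mc = sum(math.cos(4 * math.pi * s * random.gauss(0.0, u)) for _ in range(n)) / n

print(round(mc, 3), round(math.exp(-B * s * s), 3))
```

The Monte-Carlo average and the analytic exp(-B*s^2) agree to within sampling noise, confirming the B = 8*pi^2*u^2 relation for Gaussian displacements.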


Waller showed that thermal vibrations (phonons) in simple crystals do 
indeed produce a Gaussian distribution of atomic displacements, but it 
is also interesting to note that non-Gaussian atomic displacement 
distributions cannot be fully described by a B factor. For example, if 
the atomic displacements have a Lorentzian shape, then the intensity 
fall-off will be exponential: exp(-A*s) (the Fourier transform of a 
Lorentzian is an exponential, and vice versa). I THINK this may be the 
origin of using the letter B, as it is the second term in the Taylor 
expansion of an arbitrary displacement distribution:


exp(ln(K) - A*s - B*s^2 - C*s^3 - ...)

For Gaussian atomic displacements, all terms except B will be zero. But, 
I have to admit that Debye's paper doesn't have an equation that looks 
like this. In fact, even R. W. James doesn't call it "B", he calls it 
"M". So, I could be wrong about the origin of "B", but I think someone 
else must have written down the above equation before I did.


-James Holton
MAD Scientist


Jacob Keller wrote:

[quoted text snipped]