Re: [ccp4bb] Make Ligand error

2022-01-27 Thread Ronald E. Stenkamp
K2HgI4 worked for solving hemerythrin in 1975.  The various species of HgIx in 
the solution found several binding sites.  Some were single Hg atoms between 
cysteines and others were HgI bound elsewhere.  Ron Stenkamp


From: CCP4 bulletin board  on behalf of Eleanor Dodson 
<176a9d5ebad7-dmarc-requ...@jiscmail.ac.uk>
Sent: Thursday, January 27, 2022 6:25 AM
To: CCP4BB@JISCMAIL.AC.UK 
Subject: Re: [ccp4bb] Make Ligand error

YUVARAJ I sent this email re the HgI4 coordinates.
The ligand problem has been solved and the map is beautiful, with very sharp
anomalous difference peaks showing I and Hg.
It reveals a lot of substitution - five clusters for 175 residues and a lot of
alternate conformations. We tried many years ago to use this (pre-SAD phasing)
and again got too much substitution, not too little. Have other people used it
successfully? I would be interested to know.
Eleanor

On Tue, 25 Jan 2022 at 13:41, Eleanor Dodson <eleanor.dod...@york.ac.uk> wrote:
Thank you
I can look at it and maybe be useful..
HgI3c was a heavy atom we tried to use many years back for insulin!

Eleanor

On Tue, 25 Jan 2022 at 12:45, YUVARAJ I <yuvee...@gmail.com> wrote:

Respected Prof. Eleanor Dodson,

Thank you for your reply.

I have added mercury(II) potassium iodide from the Heavy Atom Screen Hg (Hampton
catalog no. HR2-446):

https://hamptonresearch.com/uploads/support_materials/Heavy_Atom_UG.pdf

General observations about this Heavy atom: HgI3 can be formed from K2HgI4 with 
the addition of excess KI.

Using the anomalous signal obtained from these data, I have built the model using
CRANK2.

I will share the data and the pdb file with you.

It would be a great help if you could help me in fitting this ligand.

Many Thanks

Yuvaraj

On Tue, Jan 25, 2022 at 5:18 PM Eleanor Dodson
<176a9d5ebad7-dmarc-requ...@jiscmail.ac.uk> wrote:
Dear Yuvaraj,
  I have just read this..
Your first problem was that you had added the molecule at 3 symmetry related 
positions, and once you corrected that the R factor dropped..
The negative density at the centre of your complex is odd.
Are you sure of its chemical composition?
And have you checked the peaks in the anomalous map?
I can explain how to do that, or if you are allowed to send the data I can show 
you what to expect.

Eleanor Dodson


On Tue, 25 Jan 2022 at 04:41, Paul Emsley <pems...@mrc-lmb.cam.ac.uk> wrote:


On 25/01/2022 04:10, YUVARAJ I wrote:
Respected Prof. Paul
I added this ligand at three places; that is why the log file showed three molecules
and I got multiple densities for the same ligand.  For testing, I added it at only
one place and refined it.
It doesn't help; I am getting the same output.
Kindly let me know how I can fix this ligand.
Thank you

Regards
Yuvaraj

On Tue, Jan 25, 2022 at 12:03 AM Paul Emsley <pems...@mrc-lmb.cam.ac.uk> wrote:


On 24/01/2022 11:11, YUVARAJ I wrote:
Respected Prof. Paul,
Thank you for your kind reply. After refinement with Refmac, when I visualize the
output in Coot, the molecule appears small,
but when I do real-space refinement in Coot, it regains its original size.
I have attached the output and the Refmac log file to this mail. Kindly let me
know how I can fix this.
Many thanks in advance.

On Mon, Jan 24, 2022 at 12:39 PM Paul Emsley <pems...@mrc-lmb.cam.ac.uk> wrote:


On 23/01/2022 09:56, YUVARAJ I wrote:
Respected Prof. Paul,
Thank you so much for your reply. I have followed your instructions, but during
the Refmac run it showed the error "Error: New ligand has been encountered. Stopping now".
When I gave the cif file as input in the Additional geometry dictionary option in
Refmac, it gave the attached output (screenshot1). Kindly give me instructions or
a link describing the steps I need to follow.
As mentioned by Prof. Gerard, I have both tetrahedral ((HgI4)2-) and trigonal
((HgI3)-) electron densities.
Can you please send another cif file for (HgI3)- as well, and a set of
instructions? It would be very much helpful.
Thank you in advance.
Regards
Yuvaraj



On Sun, Jan 23, 2022 at 5:07 AM Paul Emsley <pems...@mrc-lmb.cam.ac.uk> wrote:


On 22/01/2022 22:10, Georg Mlynek wrote:

Dear Paul,

can you please tell me what the Acedrg Tables reference is (I assume a table of
curated stereochemistry values) and where I can find that table?

Where does Coot save these cif files? (Not in CCP4-7\7.1\Lib\data\monomers\, as
I had always thought before I checked just now.)


Many thanks, br Georg.


On 22.01.2022 at 22:51, Paul Emsley wrote:


On 22/01/2022 21:47, Paul Emsley wrote:


On 22/01/2022 17:34, YUVARAJ I wrote:


I have solved a protein structure using the anomalous signal from
mercury(II) potassium iodide (K2HgI4).

I wanted to submit the structure of the protein with mercury(II) potassium
iodide to the PDB.

I am facing problems while making the ligand (HGI4). HGI4

Re: [ccp4bb] Reg: water pentagon at dimer interface

2019-09-27 Thread Ronald E. Stenkamp
I don’t know about the myth thing, but I remember Martha Teeter describing 
pentagons of waters in crambin.

Here’s a reference:

Water Structure of a Hydrophobic Protein at Atomic Resolution: Pentagon Rings
of Water Molecules in Crystals of Crambin.  M. M. Teeter, Proceedings of
the National Academy of Sciences of the United States of America, 1984, 81,
6014-6018.

Ron

From: CCP4 bulletin board  On Behalf Of Vijaykumar 
Pillalamarri
Sent: Friday, September 27, 2019 4:34 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Reg: water pentagon at dimer interface

Dear Community,

I solved the structure of a protein from Vibrio. There are two molecules in the
asymmetric unit. At the dimer interface, the C-termini of the two
chains interact with each other with the help of five water molecules that
form a pentagon. I have attached an image showing both chains, with a stereo
image of the dimer interface in the inset. I was wondering whether there is any
significance to this, or whether there is any relevant literature that explains this
behavior.

Thank you
Vijaykumar Pillalamarri
C/O: Dr. Anthony Addlagatta
Principal Scientist
CSIR-IICT, Tarnaka
Hyderabad, India-57
Mobile: +918886922975





Re: [ccp4bb] Off-topic - Crystallisation in anaerobic glove box

2015-03-18 Thread Ronald E Stenkamp

I also wondered about the statement about oils blocking diffusion of O2.  We 
had lots of trouble keeping things anaerobic in a glove box until we degassed 
the oils and waxes used to mount crystals in capillaries.  We found that 
putting them under vacuum removed much of the dissolved oxygen.  The waxes 
required cycling between heating and vacuum several times.  Ron

On Wed, 18 Mar 2015, Edward A. Berry wrote:

Do you have evidence that the oil blocks diffusion of O2? O2 is a nonpolar 
molecule, generally much more soluble in oils than in water. I'm not sure 
about silicone oils, but I would think they also dissolve O2 readily.

eab

On 03/18/2015 08:02 AM, Patrick Shaw Stewart wrote:


Hi Steve

I have one more comment for this thread.

The microbatch-under-oil method is very handy for anaerobic work:

1.  You can keep the microbatch stock solutions in normal microtitre 
plates (polypropylene is best to reduce evaporation) for months, which 
hugely reduces the amount of degassing that you need to do.  You will only 
use say 0.5 ul of stock per drop.


2.  The oil offers a surprising amount of protection from oxidation, 
which may be helpful eg in harvesting.


3.  Microbatch can be automated - in parallel to vapor diffusion if 
desired



It's amazing how often (aerobic) microbatch produces far superior crystals 
to V.D. for no obvious reason - it's well worth trying for both screening 
and optimization.


Best wishes

Patrick



On 11 March 2015 at 10:17, Stephen Carr <stephen.c...@rc-harwell.ac.uk> wrote:


Dear CCP4BBer's

Apologies for the off-topic post, but the CCP4BB seems to be the best 
place to ask about crystallisation.


I am looking to set up crystallisation in an anaerobic glove box and
wondered how other people have done this, specifically the crystallisation stage.
My initial thought was to place a small crystallisation incubator inside
the box; however, the smallest I have come across so far (~27 L) is still
rather large.  Has anyone come across smaller incubators?  Alternatively,
are incubators even necessary if the glove box is placed in a room with
good air conditioning and stable temperature control?


Any recommendations would be very helpful.

Thanks in advance,

Steve Carr

Dr Stephen Carr
Research Complex at Harwell (RCaH)
Rutherford Appleton Laboratory
Harwell Oxford
Didcot
Oxon OX11 0FA
United Kingdom
Email stephen.c...@rc-harwell.ac.uk

tel 01235 567717





--
patr...@douglas.co.uk    Douglas Instruments Ltd.

  Douglas House, East Garston, Hungerford, Berkshire, RG17 7HD, UK
  Directors: Peter Baldock, Patrick Shaw Stewart

http://www.douglas.co.uk
  Tel: 44 (0) 148-864-9090    US toll-free 1-877-225-2034
  Regd. England 2177994, VAT Reg. GB 480 7371 36




Re: [ccp4bb] Visualizing Stereo view

2015-01-18 Thread Ronald E Stenkamp

Are most stereo images now for cross-eyed viewing?  I thought they were for 
wall-eyed viewing.

Perhaps a warning would be helpful for people starting out at looking at 
published stereoviews.  If you look at a stereoview constructed for wall-eyed 
viewing but look at it with crossed eyes, you'll change the handedness of the 
object.  And if you're showing surfaces, they get turned inside-out (or 
something like it).  I usually get a headache soon after mixing modes like this 
and only know that the surfaces are messed up.

Also, in answer to one of Jeorge's questions, the two images in stereoviews 
differ by a small rotation about a vertical axis.  The two images are what each 
of your eyes would see if looking at a single object.  Because your eyes are 
separated by about 2.25 inches (I'm a stubborn non-metrical American...), the 
left- and right-eye views differ slightly.  The amount also depends on how
close your eyes are supposed to be to the object.  I think long ago things
were worked out so the rotation is 6 degrees and that corresponds to the 
viewer-object distance being about 30 inches.  If you place the left eye view 
on the left, you need to look at the two images in wall-eyed mode.  If you 
place the left eye view on the right, you need to cross your eyes to generate 
the stereo image.

For those of us who can view stereoimages without the assistance of glasses or 
computers, life is good.  I recommend developing the ability to do that.

Ron

On Sun, 18 Jan 2015, Jim Fairman wrote:


You can create stereo images for publications in pymol:
http://www.pymolwiki.org/index.php/Stereo_ray

Adding labels and getting them to float at the correct depth within the
image can be tricky.

As for visualizing the stereo images, you can either practice a lot and get
good at cross-eyed stereo viewing, or you can buy a pair of glasses to assist
you in seeing the 3D effect. If you google "cross eyed stereo glasses", you
will get many options to purchase. Old chemistry texts used to come with a
pair, but I'm not sure that students actually purchase textbooks anymore.


On Sat, Jan 17, 2015 at 23:50 PM, jeorgemarley thomas kirtswab...@gmail.com
wrote:
  Dear all,
First of all, sorry to put this off-topic and silly question on the bb. Can
anybody tell me how to create a stereo image and how it differs
from a normal one? How can I visualize it? If anybody has an answer,
please also explain its significance in analysis. Thank you very much in
advance.

Thanks

Jeorge 



--
Sent from MetroMail




Re: [ccp4bb] Question about enzyme behavior

2014-10-02 Thread Ronald E Stenkamp

Watch out with oils and oxygen.  Oxygen is fairly soluble in oils.  When we 
worked on deoxyhemerythrin, we had to degas our sealing wax to keep things 
anaerobic.  If you heat wax, and then put it in a vacuum, it'll froth as all 
the gases come out of the liquid.  Then of course it hardens and you have to
heat and vacuum again.  It took several cycles of that to degas the sealing wax 
for our capillaries.  (That shows how long ago that was.)  Ron

On Thu, 2 Oct 2014, Boaz Shaanan wrote:


Hi Tatiana,

Your problem is most reminiscent of the one Max Perutz faced when he dealt with deoxy-haemoglobin crystals, but in those days mounting in capillaries was the only way to bring the crystals to the beam, so he used dithionite, just like you, but mounted the crystals in a glove box (specially made for him at the LMB) under a nitrogen atmosphere. I guess you're now freezing your crystals for data collection, right? I'm not sure how to do this in a glove box, nor am I sure whether your crystals are protected against oxidation after freezing. Maybe. But perhaps you could also consider using Paratone oil (or something similar) as a cryo-protectant, so that it coats your crystal in the glove box and also reduces oxidation?


Good luck,

  Boaz 



Boaz Shaanan, Ph.D.
Dept. of Life Sciences
Ben-Gurion University of the Negev
Beer-Sheva 84105
Israel

E-mail: bshaa...@bgu.ac.il
Phone: 972-8-647-2220  Skype: boaz.shaanan
Fax:   972-8-647-2992 or 972-8-646-1710






From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of ISABET Tatiana 
[tatiana.isa...@synchrotron-soleil.fr]
Sent: Thursday, October 02, 2014 11:22 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Question about enzyme behavior

Dear all,

Sorry for a question that is not purely crystallographic.

I am working on an enzyme which binds Fe2+ cations to catalyze an
FeII-dependent hydroxylation reaction.

Because of fast oxidation in the presence of the enzyme, it is very difficult to
soak Fe2+ ions into the crystals. We succeeded only under anaerobic conditions
(glove box). I use a combination of dithionite as a reducing agent and FeSO4
or (NH4)2Fe(SO4)2 as the Fe2+ source. Despite these precautions, the Fe2+ is most
often disordered in the active site.

When I add Fe2+ under aerobic conditions, the Fe2+ oxidizes immediately upon
contact with the protein solution (despite 1 mM dithionite for 5 mM Fe2+ and a
protein concentration of 230 uM). Furthermore, the hydroxyl donor molecule, which
should bind Fe2+ (before the substrate) and one residue of the protein, is not
seen in the electron-density maps in the active site. I have tried several
soaking conditions. When I try a co-crystallization approach, adding Fe2+ and
this hydroxyl donor molecule directly to the protein solution under anaerobic
conditions, the protein precipitates.
Does anybody have an idea or experience with this type of result, or how to
fix the molecule to such a site? What type of phenomenon could occur at the
active site preventing the binding of the product?

Thanks for your help

Best regards

Tatiana


Re: [ccp4bb] Calculating anomalous Fourier maps

2014-08-29 Thread Ronald E Stenkamp
I think the answer is in this paper: 
Strahs, G. & Kraut, J. (1968). Low-resolution electron-density and
anomalous-scattering-density maps of Chromatium high-potential iron protein.
Journal of Molecular Biology, 35(3), 503-.



On Fri, 29 Aug 2014, Alexander Aleshin wrote:


Could anyone remind me how to calculate anomalous difference Fourier maps
using model-calculated phases? I was doing it by
(1) calculating PHcalc from a pdb file using Sfall, then
(2) merging PHcalc with the Dano of the experimental SFs, then
(3) calculating a map with Dano and PHcalc using the FFT program of CCP4.

Now, I've read Z. Dauter et al.'s paper
http://mcl1.ncifcrf.gov/dauter_pubs/175.pdf, and it says that their anomalous
maps were calculated using (delF, PHcalc - 90 degrees). Why did they use -90
degrees?  How does it relate to a (delF, PHcalc) map?
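
To make the -90 degree convention concrete, here is a minimal NumPy sketch (made-up
reflection data, direct summation rather than CCP4's FFT, and none of the bookkeeping a
real map needs). The usual rationale is that the f'' contribution is 90 degrees ahead of
the normal scattering, so giving the anomalous differences the phase PHcalc - 90 puts
positive peaks at the anomalous scatterer sites:

import numpy as np

# Hypothetical reflection list: Miller indices, anomalous differences
# (DANO = |F+| - |F-|) and model phases (the PHcalc from the Sfall step).
hkl  = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])
dano = np.array([12.3, -4.1, 7.8])
phic = np.array([45.0, 120.0, 300.0])            # degrees

# Map coefficients DANO * exp(i*(PHcalc - 90 deg))
coeffs = dano * np.exp(1j * np.radians(phic - 90.0))

def rho(x_frac, hkl, f):
    # Density at a fractional coordinate by direct summation (sketch only).
    return float(np.real(np.sum(f * np.exp(-2j * np.pi * (hkl @ x_frac)))))

print(rho(np.array([0.1, 0.2, 0.3]), hkl, coeffs))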

Thank you for any advice.

Alex Aleshin


Re: [ccp4bb] Confusion about space group nomenclature

2014-05-02 Thread Ronald E Stenkamp

I agree with George.  Sohnke is only six letters and it's been used for a long 
time to label these groups.  Ron

On Fri, 2 May 2014, George Sheldrick wrote:


In my program documentation I usually call these 65 the Sohnke space groups,
as defined by the IUCr:
http://reference.iucr.org/dictionary/Sohnke_groups 

George


On 05/02/2014 02:35 PM, Jim Pflugrath wrote:
  After all this discussion, I think that Bernhard can now lay the
  claim that these 65 space groups should really just be labelled
  the Rupp space groups.  At least it is one word.
Jim

__
From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Bernhard
Rupp [hofkristall...@gmail.com]
Sent: Friday, May 02, 2014 3:04 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Confusion about space group nomenclature
….

 

Enough of this thread.

 

Over and out, BR

 

 

 



--
Prof. George M. Sheldrick FRS
Dept. Structural Chemistry, 
University of Goettingen,

Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-33021 or -33068
Fax. +49-551-39-22582





Re: [ccp4bb] Confusion about space group nomenclature

2014-05-02 Thread Ronald E Stenkamp

Bernhard is giving me too much credit.  I just told him I'd seen someone's name 
associated with the 65 space groups, but that's the only information I 
provided.  Ron

On Fri, 2 May 2014, Bernhard Rupp wrote:


Fellows,

my apologies for having sparked that space war.
I wish to interject that in my earlier postings to this thread
to Howard I did give credit to the '65 sons of Sohnke' (albeit sans c).

If we honor him, we ought to spell him right.
http://reference.iucr.org/dictionary/Sohnke_groups
Sohnke (IUCr)
Sohncke (same page, IUCr)
Sohncke (Wikipedia and German primary sources):
http://www.deutsche-biographie.de/sfz80497.html

Ron posted the Sohncke link to me off-line right away and I admit that I 
realized the same by googling
'chiral space groups' which immediately leads you to Wikipedia's space group 
and Sohncke
entry. It also shows (in addition to an interesting 74-group page...) my own 
web list, which imho
erroneously used the improper (no pun intended) adjective 'chiral' for the 65 
Sohncke groups. No more.

Nonetheless, this does not necessarily discredit my quest for a descriptive 
adjective, and the
absence of such after this lively engagement might indicate that the question 
was not quite as
illegitimate as it might have appeared even to the cognoscenti at first sight. 
Nonetheless,

a toast to Sohncke!

BR


-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Gerard 
Bricogne
Sent: Friday, May 02, 2014 7:17 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Confusion about space group nomenclature

Dear John,

What is wrong with honouring Sohnke by using his name for something that he 
first saw a point in defining, and in investigating the properties resulting 
from that definition? Why insist that we should instead replace his name by an 
adjective or a circumlocution? What would we say if someone outside our field 
asked us not to talk about a Bragg reflection, or the Ewald sphere, or the Laue 
method, but to use instead some clever adjective or a noun-phrase as long as 
the name of a Welsh village to explain what these mean?

Again, I think we should have a bit more respect here. When there is a simple
adjective to describe a mathematical property, the mathematical vocabulary uses it
(like a normal subgroup). However, when someone has seen that a definition by
a conjunction of properties (i.e. something describable by a sentence) turns out to 
characterise objects that have much more interesting properties than just those by which 
they were defined, then they are often called by the name of the mathematician who first 
saw that there is more to them than what defines them. Examples: Coxeter groups, or Lie 
algebras, or the Leech lattice, or the Galois group of a field, the Cayley tree of a 
group ... . It is the name of the first witness to a mathematical phenomenon, just as we 
call chemical reactions by the name of the chemist who saw that mixing certain chemicals 
together led not just to a mixture of those chemicals.

So why don't we give Sohnke what belongs to him, just as we expect other 
scientists to give to Laue, Bragg and Ewald what we think belongs to them? 
Maybe students would not be as refractory to the idea as might first be thought.


With best wishes,

 Gerard.

--
On Fri, May 02, 2014 at 05:42:34PM +0100, Jrh Gmail wrote:

Dear George
My student class would not find that IUCr dictionary definition helpful. What 
they do find helpful is to state that they cannot contain an inversion or a 
mirror.
To honour Sohnke is one thing but is it really necessary as a label? You're 
from Huddersfield, I am from Wakefield, i.e. let's call a spade a spade (not a
'Black and Decker').
Cheers
John

Prof John R Helliwell DSc


On 2 May 2014, at 17:01, George Sheldrick gshe...@shelx.uni-ac.gwdg.de wrote:

In my program documentation I usually call these 65 the Sohnke space groups, as 
defined by the IUCr:
http://reference.iucr.org/dictionary/Sohnke_groups

George



On 05/02/2014 02:35 PM, Jim Pflugrath wrote:
After all this discussion, I think that Bernhard can now lay the claim that these 65 
space groups should really just be labelled the Rupp space groups.  At least 
it is one word.

Jim

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of
Bernhard Rupp [hofkristall...@gmail.com]
Sent: Friday, May 02, 2014 3:04 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Confusion about space group nomenclature  .

Enough of this thread.

Over and out, BR



--
Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-33021 or -33068
Fax. +49-551-39-22582




Re: [ccp4bb] <I/sigmaI> or <I>/<sigmaI>

2014-02-12 Thread Ronald E Stenkamp

How did people get I/sigmaI when using HKL2000?  Ron

On Wed, 12 Feb 2014, Phil Evans wrote:


<I/sigmaI>

On 12 Feb 2014, at 11:43, Cai Qixu caiq...@gmail.com wrote:


Dear all,

Does the I/sigmaI in “Table 1” mean <I/sigmaI> or <I>/<sigmaI>?

Thanks for your answer.

Best wishes,

Qixu Cai




Re: [ccp4bb] Comparison of Water Positions across PDBs

2013-11-06 Thread Ronald E Stenkamp

I've remained silent as this thread evolved into a discussion of how the PDB deals with 
water names and numbers.  But Nat's comment about the PDB not advertising itself as 
anything other than an archival service finally prodded me into saying something.

Something I've slowly come to realize is that the PDB, while it started as an archive, 
has developed into a working database.  That's why they (the PDB 
workers/organizers/managers) have gotten into this mode where they change things from the 
original deposited files.  I learned several years ago that the PDB is willing to change 
the atom names in a ligand from those previously used in the published  literature.  This 
was done in the name of consistency and essentially made the PDB files into 
database entries, rather than archival files since the atom names no longer matched the 
atom names used in the papers.

Given the difficulties I had in discussing this with the annotators, I've come
to realize that as soon as I hit the submit button on a PDB submission, I've 
lost control over what will appear in the distributed file.  In a way, it's 
been liberating to reach that point.  It reduces my sense of responsibility for 
the contents of the file.

Ron

On Wed, 6 Nov 2013, Nat Echols wrote:


On Wed, Nov 6, 2013 at 12:39 AM, Bernhard Rupp hofkristall...@gmail.com wrote:

  Hmmm….does that mean that the journals are now the ultimate authority of 
what stays in
  the PDB?

  I find this slightly irritating and worthy of change.


http://www.wwpdb.org/UAB.html

It is the current wwPDB (Worldwide PDB) policy that entries can be made 
obsolete following a request
from the people responsible for publishing it (be it the principal author or journal 
editors).

I'm not sure I understand why things should be any different; the PDB is not 
advertising itself as
anything other than an archival service, unlike the journals which are supposed 
to be our primary
mechanism of quality control.

-Nat




Re: [ccp4bb] ctruncate bug?

2013-06-22 Thread Ronald E Stenkamp

I agree with Frank.  This thread has been fascinating and educational.  Thanks 
to all.  Ron

On Sat, 22 Jun 2013, Douglas Theobald wrote:


On Jun 22, 2013, at 6:18 PM, Frank von Delft frank.vonde...@sgc.ox.ac.uk 
wrote:


A fascinating discussion (I've learnt a lot!);  a quick sanity check, though:

In what scenarios would these improved estimates make a significant difference?


Who knows?  I always think that improved estimates are always a good thing, ignoring 
computational complexity (by improved I mean making more accurate physical 
assumptions).  This may all be academic --- estimating Itrue with unphysical negative 
values, and then later correcting w/French-Wilson, may give approximately the same 
answers and make no tangible difference in the models.  But that all seems a bit 
convoluted, ad hoc, and unnecessary, esp. now with the available computational power.  It 
might make a difference.


Or rather:  are there any existing programs (as opposed to vapourware) that 
would benefit significantly?

Cheers
phx



On 22/06/2013 18:04, Douglas Theobald wrote:

Ian, I really do think we are almost saying the same thing.  Let me try to 
clarify.

You say that the Gaussian model is not the correct data model, and that the 
Poisson is correct.  I more-or-less agree.  If I were being pedantic (me?) I would say that 
the Poisson is *more* physically realistic than the Gaussian, and more realistic in a very 
important and relevant way --- but in truth the Poisson model does not account for other 
physical sources of error that arise from real crystals and real detectors, such as dark 
noise and read noise (that's why I would prefer a gamma distribution).  I also agree that 
for x10 the Gaussian is a good approximation to the Poisson.  I basically agree with 
every point you make about the Poisson vs the Gaussian, except for the following.

The Iobs=Ispot-Iback equation cannot be derived from a Poisson assumption, except as 
an approximation when Ispot >> Iback.  It *can* be derived from the Gaussian
assumption (and in fact I think that is probably the *only* justification it has).   
It is true that the difference between two Poissons can be negative.  It is also true 
that for moderate # of counts, the Gaussian is a good approximation to the Poisson.  
But we are trying to estimate Itrue, and both of those points are irrelevant to 
estimating Itrue when Ispot < Iback.  Contrary to your assertion, we are not
concerned with differences of Poissonians, only sums.  Here is why:

In the Poisson model you outline, Ispot is the sum of two Poisson variables, 
Iback and Iobs.  That means Ispot is also Poisson and can never be negative.  
Again --- the observed data (Ispot) is a *sum*, so that is what we must deal 
with.  The likelihood function for this model is:

L(a) = (a+b)^k exp(-a-b)

where 'k' is the # of counts in Ispot, 'a' is the mean of the Iobs Poisson (i.e., a =
Itrue), and 'b' is the mean of the Iback Poisson.  Of course k >= 0, and both
parameters a >= 0 and b >= 0.  Our job is to estimate 'a', Itrue.  Given the likelihood
function above, there is no valid estimate of 'a' that will give a negative value.  For
example, the ML estimate of 'a' is always non-negative.  Specifically, if we assume 'b'
is known from background extrapolation, the ML estimate of 'a' is:

a = k - b   if k > b

a = 0       if k <= b

You can verify this visually by plotting the likelihood function (vs 'a' as 
variable) for any combination of k and b you want.  The SD is a bit more 
difficult, but it is approximately (a+b)/sqrt(k), where 'a' is now the ML 
estimate of 'a'.
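
A quick numerical check of that argument (plain NumPy, with made-up counts k and b):
maximizing L(a) = (a+b)^k exp(-a-b) over a >= 0 on a grid reproduces the closed-form
estimate a = max(k - b, 0), which never goes negative:

import numpy as np

def log_likelihood(a, k, b):
    # log L(a) = k*log(a+b) - (a+b), up to an additive constant
    return k * np.log(a + b) - (a + b)

def ml_estimate(k, b):
    # Closed-form maximum-likelihood estimate of the true intensity 'a'
    return max(k - b, 0.0)

k, b = 4, 9.5                          # hypothetical spot counts and background
a_grid = np.linspace(0.0, 20.0, 2001)
a_numeric = a_grid[np.argmax(log_likelihood(a_grid, k, b))]

print(ml_estimate(k, b), a_numeric)    # both ~0: the estimate never goes negative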

Note that the ML estimate of 'a', when k > b (Ispot > Iback), is equivalent to
Ispot-Iback.

Now, to restate:  as an estimate of Itrue, Ispot-Iback cannot be derived from the 
Poisson model.  In contrast, Ispot-Iback *can* be derived from a Gaussian model 
(as the ML and LS estimate of Itrue).  In fact, I'll wager the Gaussian is the 
only reasonable model that gives Ispot-Iback as an estimate of Itrue.  This is why 
I claim that using Ispot-Iback as an estimate of Itrue, even when Ispot < Iback,
implicitly means you are using a (non-physical) Gaussian model.  Feel free to 
prove me wrong --- can you derive Ispot-Iback, as an estimate of Itrue, from 
anything besides a Gaussian?

Cheers,

Douglas




On Sat, Jun 22, 2013 at 12:06 PM, Ian Tickle ianj...@gmail.com wrote:
On 21 June 2013 19:45, Douglas Theobald dtheob...@brandeis.edu wrote:

The current way of doing things is summarized by Ed's equation: 
Ispot-Iback=Iobs.  Here Ispot is the # of counts in the spot (the area 
encompassing the predicted reflection), and Iback is # of counts in the 
background (usu. some area around the spot).  Our job is to estimate the true 
intensity Itrue.  Ed and others argue that Iobs is a reasonable estimate of 
Itrue, but I say it isn't because Itrue can never be negative, whereas Iobs can.

Now where does the Ispot-Iback=Iobs equation come from?  It implicitly assumes 
that both Iobs 

Re: [ccp4bb] vitrification vs freezing

2012-11-16 Thread Ronald E Stenkamp

I'm a little confused.  Petsko and others were doing 
low-temperature/freezing/vitrification crystal experiments in the 1970s, right? 
 (J. Mol. Biol., 96(3) 381, 1975).  Is there a big difference between what they 
were doing and what's done now?

Ron

On Fri, 16 Nov 2012, Gerard Bricogne wrote:


Dear all,

I think we are perhaps being a little bit insular, or blinkered, in
this discussion. The breakthrough we are talking about, and don't know what
to call, first occurred not in crystallography but in electron microscopy,
in the hands of Jacques Dubochet at EMBL Heidelberg in the early 1980s (see
for instance http://www.unil.ch/dee/page53292.html). It made possible the
direct imaging of molecules in vitrified or vitreous ice and to achieve
higher resolution than the previous technique of negative staining. In that
context it is obvious that the vitreous state refers to water, not to the
macromolecular species embedded in it: the risk of a potential oxymoron in
the crystallographic case arises from trying to choose a single adjective to
qualify a two-component sample in which those components behave differently
under sudden cooling.

I have always found that an expression like flash-frozen has a lot
going for it: it means that the sample was cooled very quickly, so it
describes a process rather than a final state. The fact that this final
state preserves the crystalline arrangement of the macromolecule(s), but
causes the solvent to go into a vitreous phase, is just part of what every
competent reviewer of a crystallographic paper should know, and that ought
to avoid the kind of arguments that started this thread.


With best wishes,

 Gerard.

--
On Thu, Nov 15, 2012 at 11:35:46PM -0700, Javier Gonzalez wrote:

Hi Sebastiano,

I think the term vitrified crystal could be understood as a very nice
oxymoron (http://www.oxymoronlist.com/), but it is essentially
self-contradictory and not technically correct.

As Ethan said, vitrify means turn into glass. Now, a glass state is a
disordered solid state by definition, so it can't be a crystal. A
vitrified crystal would be a crystal which has lost all three-dimensional
ordering, pretty much like the material one gets when using the wrong
cryo-protectant.

What one usually does is to soak the crystal in a cryo-protectant and
then flash-freeze the resulting material, hoping that the crystal structure
will be preserved, while the rest remains disordered in a solid state
(vitrified), so that it won't produce a diffraction pattern by itself, and
will hold the crystal in a fixed position (very convenient for data
collection).

Moreover, I would say that clarifying that a material is vitrified when
subjected to liquid N2 temperatures would be required only if you were
working with some liquid solvent which might remain in the liquid phase at
that temperature, instead of the usual solid disordered state, but this is
never the case with protein crystals.

So, I vote for frozen crystal.-

Javier


PS: that comment by James Stroud, "I forgot to mention that if any
dictionary is an authority on the very cold, it would be the Penguin
dictionary.", is hilarious; we need a "Like" button on the CCP4bb list!

--
Javier M. Gonzalez
Protein Crystallography Station
Bioscience Division
Los Alamos National Laboratory
TA-43, Building 1, Room 172-G
Mailstop M888
Phone: (505) 667-9376


On Thu, Nov 15, 2012 at 2:24 PM, Craig Bingman cbing...@biochem.wisc.eduwrote:


 cryopreserved

It says that the crystals were transferred to cryogenic temperatures in an
attempt to increase their lifetime in the beam, and avoids all of the other
problems with all of the other language described.

I was really trying to stay out of this, because I understand what
everyone means with all of their other word choices.

On Nov 15, 2012, at 2:07 PM, James Stroud wrote:


Isn't cryo-cooled redundant?

James

On Nov 15, 2012, at 11:34 AM, Phil Jeffrey wrote:


Perhaps it's an artisan organic locavore fruit cake.

Either way, your *crystal* is not vitrified.  The solvent in your

crystal might be glassy but your protein better still hold crystalline
order (cf. ice) or you've wasted your time.


Ergo, cryo-cooled is the description to use.

Phil Jeffrey
Princeton

On 11/15/12 1:14 PM, Nukri Sanishvili wrote:

s: An alternative way to avoid the argument and discussion altogether
is to use cryo-cooled.
Tim: You go to a restaurant, spend all that time and money and order a
fruitcake?
Cheers,
N.





--

===
* *
* Gerard Bricogne g...@globalphasing.com  *
* *
* Global Phasing Ltd. *
* Sheraton House, Castle Park Tel: +44-(0)1223-353033 *
* Cambridge CB3 0AX, UK   Fax: +44-(0)1223-366889 *
*   

Re: [ccp4bb] vitrification vs freezing

2012-11-16 Thread Ronald E Stenkamp

In the 1975 paper, they describe taking crystals to -100C, but it wasn't done in a 
flash sort of way.  They equilibrated the crystals with various solvent 
combinations as the temperature was reduced.

Trying to recollect what was discussed by my lab mates nearly 40 years ago, I 
think the fact that the crystals were mounted in capillaries caused 
difficulties with plunging the crystals into liquid N2.  We never tried it, but 
the thought of what would happen to the glass, etc was enough to keep us from 
doing the experiment.

I think the reason it took years before people started routine flash cooling of 
crystals was because it was expensive and hard to do.  You needed to buy a 
crystal cooling device, and you had to invest the time and energy in developing 
cryo-solutions. People were more excited about seeing the next interesting 
structure, and room temperature experiments were good enough for that.

Ron


On Fri, 16 Nov 2012, Gerard Bricogne wrote:


Dear Quyen and Ron,

Thank you for bringing up this work. I can remember hearing Greg Petsko
give a seminar at the LMB in Cambridge around 1974, but I never read that
paper. The seminar was about cooling crystals at 4C, and also about work
done with Pierre Douzou to try and retain the high dielectric constant of
water (e.g. with DMSO) when cooling hydrated crystals to temperatures well
below the normal freezing point of water. This had to be done progressively,
with successive increases in the concentration of DMSO, and without ever
giving rise to a transition to a solid phase of the solvent; so it would
seem to have lacked the flash component of today's methods.

It may well be that their work went further into crystal cryo-cooling
and vitrification, and that this extension was described in the JMB paper
you quote but not yet in the version of his seminar that I heard. If so,
thank you for pointing out this reference - but then, why wasn't it taken up
earlier?


With best wishes,

 Gerard.

--
On Fri, Nov 16, 2012 at 01:37:58PM -0500, Quyen Hoang wrote:

I was going to mention that too, but since I was a postdoc of Petsko my words 
could have been viewed as biased.

Quyen



On Nov 16, 2012, at 1:26 PM, Ronald E Stenkamp stenk...@u.washington.edu 
wrote:


I'm a little confused.  Petsko and others were doing 
low-temperature/freezing/vitrification crystal experiments in the 1970s, right? 
 (J. Mol. Biol., 96(3) 381, 1975).  Is there a big difference between what they 
were doing and what's done now.

Ron

On Fri, 16 Nov 2012, Gerard Bricogne wrote:


Dear all,

   I think we are perhaps being a little bit insular, or blinkered, in
this discussion. The breakthrough we are talking about, and don't know how
to call, first occurred not in crystallography but in electron microscopy,
in the hands of Jacques Dubochet at EMBL Heidelberg in the early 1980s (see
for instance http://www.unil.ch/dee/page53292.html). It made possible the
direct imaging of molecules in vitrified or vitreous ice and to achieve
higher resolution than the previous technique of negative staining. In that
context it is obvious that the vitreous state refers to water, not to the
macromolecular species embedded in it: the risk of a potential oxymoron in
the crystallographic case arises from trying to choose a single adjective to
qualify a two-component sample in which those components behave differently
under sudden cooling.

   I have always found that an expression like flash-frozen has a lot
going for it: it means that the sample was cooled very quickly, so it
describes a process rather than a final state. The fact that this final
state preserves the crystalline arrangement of the macromolecule(s), but
causes the solvent to go into a vitreous phase, is just part of what every
competent reviewer of a crystallographic paper should know, and that ought
to avoid the kind of arguments that started this thread.


   With best wishes,

Gerard.

--
On Thu, Nov 15, 2012 at 11:35:46PM -0700, Javier Gonzalez wrote:

Hi Sebastiano,

I think the term vitrified crystal could be understood as a very nice
oxymoron (http://www.oxymoronlist.com/), but it is essentially
self-contradictory and not technically correct.

As Ethan said, vitrify means turn into glass. Now, a glass state is a
disordered solid state by definition, then it can't be a crystal. A
vitrified crystal would be a crystal which has lost all three-dimensional
ordering, pretty much like the material one gets when using the wrong
cryo-protectant.

What one usually does is to soak the crystal in a cryo-protectant and
then flash-freeze the resulting material, hoping that the crystal structure
will be preserved, while the rest remains disordered in a solid state
(vitrified), so that it won't produce a diffraction pattern by itself, and
will hold the crystal in a fixed position (very convenient for data
collection).

Moreover, I would say that clarifying a material is vitrified when
subjected to liquid N2

Re: [ccp4bb] Fun Question - Is multiple isomorphous replacement an obsolete technique?

2012-06-06 Thread Ronald E Stenkamp

There were a number of labs using anomalous dispersion for phasing 40 years 
ago.  The theory for using it dates from the 60s.  And careful experimental 
technique allowed the structure solution of several proteins before 1980 using 
what would be labeled now as SIRAS.  Ron

On Wed, 6 Jun 2012, Dyda wrote:


I suspect that pure MIR (without anomalous) was always a fiction. I doubt that 
anyone has ever used it. Heavy atoms always give
an anomalous signal



Phil


I suspect that there was a time when the anomalous signal in data sets was 
fictional.
Before the invention of flash freezing, systematic errors due to decay and the need
to scale together many derivative data sets collected on multiple crystals
could render
weak anomalous signal useless. Therefore MIR was needed. Also, current 
hardware/software
produces much better reduced data, so weak signals can become useful.

Fred

***
Fred Dyda, Ph.D.   Phone:301-402-4496
Laboratory of Molecular BiologyFax: 301-496-0201
DHHS/NIH/NIDDK e-mail:fred.d...@nih.gov
Bldg. 5. Room 303
Bethesda, MD 20892-0560  URGENT message e-mail: 2022476...@mms.att.net
Google maps coords: 39.000597, -77.102102
http://www2.niddk.nih.gov/NIDDKLabs/IntramuralFaculty/DydaFred
***



Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-06 Thread Ronald E Stenkamp
Dear John,

Your points are well taken and they're consistent with policies and practices 
in the US as well.  

I wonder about the nature of the employer's responsibility though.  I sit on 
some university committees, and the impression I get is that much of the time, 
the employers are interested in reducing their legal liabilities, not 
protecting the integrity of science.  The end result is the same though in that 
the employers get involved and oversee the handling of scientific misconduct.  

What is unclear to me is whether the system for dealing with misconduct is 
broken.  It seems to work pretty well from my viewpoint.  No system is perfect 
for identifying fraud, errors, etc, and I understand the idea that improvements 
might be possible.  However, too many improvements might break the system as 
well.

Ron 

On Fri, 6 Apr 2012, John R Helliwell wrote:

 Dear Ron,
 Re (3):-
 Yes of course the investigator has that responsibility.
 The additional point I would make is that the employer has a share in
 that responsibility. Indeed in such cases the employer university
 convenes a research fraud investigating committee to form the final
 judgement on continued employment.
 A research fraud policy, at least ours, also includes the need for
 avoiding inadvertent loss of raw data, which is also deemed to be
 research malpractice.
 Thus the local data repository, with doi registration for data sets
 that underpin publication, seems to me and many others, ie in other
 research fields, a practical way forward for these data sets.
 It also allows the employer to properly serve the research
 investigations of its employees and be duely diligent to the research
 sponsors whose grants it accepts. That said there is a variation of
 funding that at least our UK agencies will commit to 'Data management
 plans'.
 Greetings,
 John



 2012/4/5 Ronald E Stenkamp stenk...@u.washington.edu:
 This discussion has been interesting, and it's provided an interesting forum 
 for those interested in dealing with fraud in science.  I've not contributed 
 anything to this thread, but the message from Alexander Aleshin prodded me 
 to say some things that I haven't heard expressed before.

 1.  The sky is not falling!  The errors in the birch pollen antigen pointed 
 out by Bernhard are interesting, and the reasons behind them might be 
 troubling.  However, the self-correcting functions of scientific research 
 found the errors, and current publication methods permitted an airing of the 
 problem.  It took some effort, but the scientific method prevailed.

 2.  Depositing raw data frames will make little difference in identifying 
 and correcting structural problems like this one.  Nor will new requirements 
 for deposition of this or that detail.  What's needed for finding the 
 problems is time and interest on the part of someone who's able to look at a 
 structure critically.  Deposition of additional information could be 
 important for that critical look, but deposition alone (at least with 
 today's software) will not be sufficient to find incorrect structures.

 3.  The responsibility for a fraudulent or wrong or poorly-determined 
 structure lies with the investigator, not the society of crystallographers.  
 My political leanings are left-of-central, but I still believe in individual 
 responsibility for behavior and actions.  If someone messes up a structure, 
 they're accountable for the results.

 4.  Adding to the deposition requirements will not make our science more 
 efficient.  Perhaps it's different in other countries, but the 
 administrative burden for doing research in the United States is growing.  
 It would be interesting to know the balance between the waste that comes 
 from a wrong structure and the waste that comes from having each of us deal 
 with additional deposition requirements.

 5.  The real danger that arises from cases of wrong or fraudulent science is 
 that it erodes the trust we have in each other's results.  No one has time or
 resources to check everything, so science is based on trust.  There are 
 efforts underway outside crystallographic circles to address this larger 
 threat to all science, and we should be participating in those discussions 
 as much as possible.

 Ron

 On Thu, 5 Apr 2012, aaleshin wrote:

 Dear John, thank you for a very informative letter about the IUCr activities
 towards archiving the experimental
 data. I feel that I did not explain myself properly. I do not object to
 archiving the raw data, I just believe
 that current methodology of validating data at PDB is insufficiently robust 
 and requires a modification.
 Implementation of the raw image storage and validation will take a 
 considerable time, while the recent
 incidents of a presumable data frauds demonstrate that the issue is urgent. 
 Moreover, presenting the
 calculated structural factors in place of the experimental data is not the 
 only abuse that the current
 validation procedure encourages

Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-05 Thread Ronald E Stenkamp
This discussion has been interesting, and it's provided an interesting forum 
for those interested in dealing with fraud in science.  I've not contributed 
anything to this thread, but the message from Alexander Aleshin prodded me to 
say some things that I haven't heard expressed before.

1.  The sky is not falling!  The errors in the birch pollen antigen pointed out 
by Bernhard are interesting, and the reasons behind them might be troubling.  
However, the self-correcting functions of scientific research found the errors, 
and current publication methods permitted an airing of the problem.  It took 
some effort, but the scientific method prevailed.   

2.  Depositing raw data frames will make little difference in identifying and 
correcting structural problems like this one.  Nor will new requirements for 
deposition of this or that detail.  What's needed for finding the problems is 
time and interest on the part of someone who's able to look at a structure 
critically.  Deposition of additional information could be important for that 
critical look, but deposition alone (at least with today's software) will not 
be sufficient to find incorrect structures.

3.  The responsibility for a fraudulent or wrong or poorly-determined structure 
lies with the investigator, not the society of crystallographers.  My political 
leanings are left-of-central, but I still believe in individual responsibility 
for behavior and actions.  If someone messes up a structure, they're 
accountable for the results.  

4.  Adding to the deposition requirements will not make our science more 
efficient.  Perhaps it's different in other countries, but the administrative 
burden for doing research in the United States is growing.  It would be 
interesting to know the balance between the waste that comes from a wrong 
structure and the waste that comes from having each of us deal with additional 
deposition requirements.  

5.  The real danger that arises from cases of wrong or fraudulent science is 
that it erodes the trust we have in each other's results.  No one has time or
resources to check everything, so science is based on trust.  There are efforts 
underway outside crystallographic circles to address this larger threat to all 
science, and we should be participating in those discussions as much as 
possible.  

Ron

On Thu, 5 Apr 2012, aaleshin wrote:

 Dear John, thank you for a very informative letter about the IUCr activities
 towards archiving the experimental
 data. I feel that I did not explain myself properly. I do not object to
 archiving the raw data, I just believe
 that current methodology of validating data at PDB is insufficiently robust 
 and requires a modification.
 Implementation of the raw image storage and validation will take a 
 considerable time, while the recent
 incidents of a presumable data frauds demonstrate that the issue is urgent. 
 Moreover, presenting the
 calculated structural factors in place of the experimental data is not the 
 only abuse that the current
 validation procedure encourages to do. There might be more numerous 
 occurances of data massaging like
 overestimation of the resolution or data quality, the system does not allow 
 to verify them. IUCr and PDB
 follows the American taxation policy, where the responsibility for a fraud is 
 placed on people, and the agency
 does not take sufficient actions to prevent it. I believe it is inefficient 
 and inhumane. Making a routine
  check of submitted data at a bit lower level would reduce a temptation to 
 overestimate the unclearly defined
 quality statistics and make the model fabrication more difficult to 
 accomplish. Many people do it unknowingly,
 and catching them afterwards makes no good.
 
 I suggested to turn the current incidence, which might be too complex for 
 burning heretics, into something
 productive that is done as soon as possible, something that will prevent 
 fraud from occurring.
 
 Since my persistent trolling at ccp4bb did not take any effect (until now), 
 I wrote a bad-English letter
 to the PDB administration, encouraging them to take urgent actions. Those who 
 are willing to count grammar
 mistakes in it can read the message below.
 
 With best regards,
 Alexander Aleshin, staff scientist
 Sanford-Burnham Medical Research Institute 
 10901 North Torrey Pines Road
 La Jolla, California 92037
 
 Dear PDB administrators;
 
 I am writing to you regarding the recently publicized story about submission
 of calculated structural factors
 to the PDB entry 3k79 
 (http://journals.iucr.org/f/issues/2012/04/00/issconts.html). This presumable 
 fraud (or
 a mistake) occurred just several years after another, more massive 
 fabrication of PDB structures (Acta Cryst.
 (2010). D66, 115) that affected many scientists including myself. The 
 repetitiveness of these events indicates
 that the current mechanism of structure validation by PDB is not sufficiently 
 robust. Moreover, it is
 completely incapable of 

Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-31 Thread Ronald E Stenkamp

James Holton suggested a reason why the forefathers used a 3-sigma cutoff.

I'll give another reason provided to me years ago by one of those guys, Lyle Jensen.  In 
the 70s we were interested in the effects of data-set thresholds on refinement (Acta 
Cryst., B31, 1507-1509 (1975)) so he explained his view of the history of 
less-than cutoffs for me.  It was a very Seattle-centric explanation.

In the 50s and 60s, Lyle collected intensity data using an integrating Weissenberg camera and a 
film densitometer.  Some reflections had intensities below the fog or background level of the film 
and were labeled unobserved.  Sometimes they were used in refinement, but only if the 
calculated Fc values were above the unobserved value.

When diffractometers came along with their scintillation counters, there were measured quantities 
for each reflection (sometimes negative), and Lyle needed some way to compare structures refined 
with diffractometer data with those obtained using film methods.  Through some method he never 
explained, a value of 2-sigma(I) defining less-thans was deemed comparable to the 
unobserved criterion used for the earlier structures.  His justification for the 
2-sigma cutoff was that it allowed him to understand the refinement behavior and R values of these 
data sets collected with newer technology.

I don't know who all contributed to the idea of a 2-sigma cutoff, nor whether 
there were theoretical arguments for it.  I suspect the idea of some type of 
cutoff was discussed at ACA meetings and other places.  And a 2-sigma cutoff 
might have sprung up independently in many labs.

I think the gradual shift to a 3-sigma cutoff was akin to grade inflation.  
If you could improve your R values with a 2-sigma cutoff, 3-sigma would probably be 
better.  So people tried it.  It might be interesting to figure out how that was brought 
under control.  I suspect a few troublesome structures and some persistent editors and 
referees gradually raised our group consciousness to avoid the use of 3-sigma cutoffs.

Ron

On Mon, 30 Jan 2012, James Holton wrote:

Once upon a time, it was customary to apply a 3-sigma cutoff to each and 
every spot observation, and I believe this was the era when the ~35% Rmerge 
in the outermost bin rule was conceived, alongside the 80% completeness 
rule.  Together, these actually do make a  reasonable two-pronged criterion 
for the resolution limit.


Now, by "reasonable" I don't mean "true", just that there is reasoning
behind it.  If you are applying a 3-sigma cutoff to spots, then the expected 
error per spot is not more than ~33%, so if Rmerge is much bigger than that, 
then there is something funny going on.  Perhaps a violation of the chosen 
space group symmetry (which may only show up at high resolution), radiation 
damage, non-isomorphism, bad absorption corrections, crystal slippage or a 
myriad of other scaling problems could do this.  Rmerge became a popular 
statistic because it proved a good way of detecting problems like these in 
data processing.  Fundamentally, if you have done the scaling properly, then 
Rmerge/Rmeas should not be worse than the expected error of a single spot 
measurement.  This is either the error expected from counting statistics (33% 
if you are using a 3-sigma cutoff), or the calibration error of the 
instrument (~5% on a bad day, ~2% on a good one), whichever is bigger.
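
As a quick numerical illustration of that reasoning (plain NumPy, made-up intensities,
counting noise only, no scaling or merging): spots that survive an I >= 3*sigma(I)
cutoff have average fractional errors well below ~33%, so an Rmerge far above that
level signals something other than counting statistics:

import numpy as np

rng = np.random.default_rng(0)
i_true = rng.uniform(1.0, 400.0, size=100_000)     # hypothetical true intensities (photons)
i_obs  = rng.poisson(i_true).astype(float)         # counting noise only
sigma  = np.sqrt(np.maximum(i_obs, 1.0))           # counting-statistics sigma estimate

keep = i_obs >= 3.0 * sigma                        # the old 3-sigma spot cutoff
frac_err = np.abs(i_obs[keep] - i_true[keep]) / i_true[keep]
print(frac_err.mean(), np.percentile(frac_err, 95))  # mean well below 0.33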


As for completeness, 80% overall is about the bare minimum of what you can 
get away with before the map starts to change noticeably.  See my movie here:

http://bl831.als.lbl.gov/~jamesh/movies/index.html#completeness
so I imagine this 80% rule just got extended to the outermost bin.  After 
all, claiming a given resolution when you've only got 50% of the spots at 
that resolution seems unwarranted, but requiring 100% completeness seems a 
little too strict.


Where did these rules come from?  As I recall, I first read about them in the 
manual for the PROCESS program that came with our R-axis IIc x-ray system 
when I was in graduate school (ca 1996).  This program was conveniently 
integrated into the data collection software on the detector control 
computers: one was running VMS, and the new one was an SGI.  I imagine a 
few readers of this BB may have never heard of PROCESS, but it is listed as 
the intensity integration software for at least a thousand PDB entries.  Is 
there a reference for PROCESS?  Yes.  In the literature it is almost always 
cited with: (Molecular Structure Corporation, The Woodlands, TX).  Do I still 
have a copy of the manual?  Uhh.  No.  In fact, the building that once 
contained it has since been torn down.  Good thing I kept my images!


Is this 35% Rmerge with a 3-sigma cutoff method of determining the 
resolution limit statistically valid?  Yes!  There are actually very sound 
statistical reasons for it.  Is the resolution cutoff obtained the best one 
for maximum-likelihood refinement?  Definitely not!  Modern refinement 
programs 

Re: [ccp4bb] question about SIGF

2011-08-20 Thread Ronald E Stenkamp

James, could you please give more information about where and/or how you obtained the 
relationship sigma(I)/I = 2*sigma(F)/F?  A different equation, 
sigma(I)=2*F*sigma(F), can be derived from sigma(I)^2 = (d(I)/dF)^2 * sigma(F)^2.  I 
understand that that equation is based on normal distribution of errors and has numerical 
problems when F is small, so there are other approximations that have been used to 
convert sigma(I) to sigma(F).  However, none that I've seen end up stating that 
sigma(F)=0.5.  Thanks.  Ron

On Sat, 20 Aug 2011, James Holton wrote:

There is a formula for sigma(F) (aka SIGF), but it is actually a common 
misconception that it is simply related to F.  You need to know a few other 
things about the experiment that was done to collect the data.  The 
misconception seems to arise because the first thing textbooks tell you is 
that F = sqrt(I), where I is the intensity of the spot.  Then, later on, 
they tell you that sigma(I) = sqrt(I) because of counting statistics.  Now, 
if you look up a table of error-propagation formulas, you will find that if 
I=F^2, then sigma(I)/I = 2*sigma(F)/F, and by substituting these equations 
together you readily obtain:


sigma(F) = (F/2) * sigma(I)/I
sigma(F) = (F/2) * sigma(I)/F^2
sigma(F) = sigma(I)/(2*F)
sigma(F) = sigma(I)/(2*sqrt(I))
sigma(F) = sqrt(I)/(2*sqrt(I))
sigma(F) = 0.5

Which says that the error in F is always the same, no matter what your 
exposure time?  Hmm.
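
(A tiny numerical check of that naive chain of substitutions -- illustrative 
only: whatever photon count you start from, the algebra above always returns 
sigma(F) = 0.5, which is exactly the absurdity being pointed out.)

import math

for photons in (10.0, 1000.0, 1000000.0):
    I = photons
    F = math.sqrt(I)             # textbook F = sqrt(I)
    sig_I = math.sqrt(I)         # counting statistics: sigma(I) = sqrt(I)
    sig_F = sig_I / (2.0 * F)    # error propagation for I = F^2
    print(I, "photons  ->  sigma(F) =", sig_F)
# prints 0.5 every time, independent of exposure -- the missing piece is the scale factor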


The critical thing missing from the equations above is something we 
crystallographers call a scale factor.  We love scale factors because they 
let us get away with not knowing a great many things, like the volume of the 
crystal, the absolute intensity of the x-ray beam, and the exact gain of 
the detector.  It's not that we can't measure or look up these things, but 
few of us have the time.  And, by and large, as long as you are aware that 
there is always an unknown scale factor, it doesn't really get in your way. 
So, the real equation is:


I_in_photons = scale*F^2

where scale = 
Ibeam*re^2*Vxtal*lambda^3*Lorentz_factor*Polar_factor*Attenuation_factor*exposure_time/deltaphi/Vcell^2


This scale factor comes from Equation 1 in the following paper:
http://dx.doi.org/10.1107/S0907444910007262
where we took pains to describe the exact meaning of each of these variables 
(and their units!) in great detail.  It is open access, so I won't go through 
them here.  I will, however, add that for spots on the detector there are a 
few other factors still missing, like the detector gain, obliquity, parallax, 
and spot partiality, but these are all taken care of by the data processing 
program.  The main thing is to figure out the number of photons that were 
accumulated for a given h,k,l index, and then take the square root of that to 
get the counting error.  Oh, and you also need to know the number of 
background photons that fell into the pixels used to add up photons for the 
h,k,l of interest.  The square root of this count must be combined with the 
counting error of the spot photons, along with a few other sources of 
error.  This is what we discuss around Equation (18) in the linked-to paper 
above.
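
(A rough Python sketch of that bookkeeping; every numerical value below is a 
placeholder chosen for illustration, not a measurement, and the real 
definitions and units of the factors lumped into "scale" are the ones given in 
the cited paper.)

import math

def photons_for_hkl(F, scale, background_photons):
    """Return (spot_photons, counting_sigma) for one hkl.

    'scale' lumps together Ibeam, re^2, Vxtal, lambda^3, the Lorentz and
    polarization factors, attenuation, exposure_time/deltaphi and 1/Vcell^2,
    as in the formula above.
    """
    spot = scale * F * F                           # I_in_photons = scale * F^2
    sigma = math.sqrt(spot + background_photons)   # counting error: spot + background
    return spot, sigma

# Illustrative numbers only:
spot, sigma = photons_for_hkl(F=100.0, scale=1e-3, background_photons=200.0)
print(spot, "photons in the spot, counting sigma =", round(sigma, 1))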


The short answer, however, is that sqrt(I_in_photons) is only one component 
of sigma(I).  The other factors fall into three main categories: readout 
noise, counting noise and what I call fractional noise.  Now, if you have a 
number of different sources of noise, you get the total noise by adding up 
the squares of all the components, and then taking the square root:
sigma(I_in_photons) = sqrt( I_in_photons + background_photons + 
sigma_readout^2 + frac_error*I_in_photons^2 )


For those of you who use SCALA and think the sqrt( sigI^2 + B*I + sdadd*I^2 ) 
form of this equation looks a lot like the SDCORRection line, good job!  That 
is a very perceptive observation.
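
(A hedged sketch of that error model in Python.  Reading the fractional term 
as (frac_error*I)^2 is my assumption, chosen to parallel the sdadd-style term; 
the numbers are illustrative, not from any real detector.)

import math

def sigma_I(spot_photons, background_photons, readout_sigma, frac_error):
    """Combine the three noise categories in quadrature, as described above."""
    return math.sqrt(spot_photons                         # counting noise of the spot
                     + background_photons                 # counting noise of the background
                     + readout_sigma ** 2                 # detector readout noise
                     + (frac_error * spot_photons) ** 2)  # fractional noise

# Longer exposures scale the photon terms but not the fractional error, so
# I/sigma(I) improves roughly as sqrt(time) and then levels off:
for exposure in (1.0, 2.0, 4.0):
    I, bg = 1000.0 * exposure, 400.0 * exposure
    s = sigma_I(I, bg, readout_sigma=5.0, frac_error=0.03)
    print("exposure x%.0f:  I/sigma(I) = %.1f" % (exposure, I / s))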


 What separates the three kinds of noise is how they relate to the exposure 
time.  For example, readout noise is always the same, no matter what the 
exposure time is, but as you increase the exposure time, the number of 
photons in the spots and the background go up proportionally.  This means 
that the contribution of counting noise to sigma(I) increases as the square 
root of the exposure time.  On modern detectors, the read-out noise is 
equivalent to the counting noise of a few (or even zero) photons/pixel, and 
so as soon as you have more than about 10 photons/pixel of background, the 
readout noise is no longer significant.


 So, in general, noise increases with the square root of exposure time, but 
the signal (I_in_photons) increases in direct proportion to exposure time, so 
the signal-to-noise ratio (from counting noise alone) goes up with the square 
root of exposure time.  That is, until you hit the third type of noise: 
fractional noise.  There are many sources of fractional noise: shutter timing 
error, crystal vibration, flicker in the incident beam intensity, inaccurate 
scaling factors (including the 

Re: [ccp4bb] I/sigmaI of 3.0 rule

2011-03-06 Thread Ronald E Stenkamp

Could you please expand on your statement that small-molecule data has "essentially no weak 
spots"?  The small molecule data sets I've worked with have had large numbers of unobserved 
reflections where I used 2 sigma(I) cutoffs (maybe 15-30% of the reflections).  Would you consider those 
weak spots or not?  Ron

On Sun, 6 Mar 2011, James Holton wrote:

I should probably admit that I might be indirectly responsible for the 
resurgence of this I/sigma > 3 idea, but I never intended this in the way 
described by the original poster's reviewer!


What I have been trying to encourage people to do is calculate R factors 
using only hkls for which the signal-to-noise ratio is > 3.  Not refinement! 
Refinement should be done against all data.  I merely propose that weak data 
be excluded from R-factor calculations after the 
refinement/scaling/merging/etc. is done.


This is because R factors are a metric of the FRACTIONAL error in something 
(aka a % difference), but a % error is only meaningful when the thing 
being measured is not zero.  However, in macromolecular crystallography, we 
tend to measure a lot of zeroes.  There is nothing wrong with measuring 
zero!  An excellent example of this is confirming that a systematic absence 
is in fact absent.  The sigma on the intensity assigned to an absent spot 
is still a useful quantity, because it reflects how confident you are in the 
measurement.  I.E.  a sigma of 10 vs 100 means you are more sure that the 
intensity is zero.  However, there is no R factor for systematic absences. 
How could there be!  This is because the definition of % error starts to 
break down as the true spot intensity gets weaker, and it becomes 
completely meaningless when the true intensity reaches zero.


Historically, I believe the widespread use of R factors came about because 
small-molecule data has essentially no weak spots.  With the exception of 
absences (which are not used in refinement), spots from salt crystals are 
strong all the way out to edge of the detector, (even out to the limiting 
sphere, which is defined by the x-ray wavelength).  So, when all the data 
are strong, a % error is an easy-to-calculate quantity that actually 
describes the sigmas of the data very well.  That is, sigma(I) of strong 
spots tends to be dominated by things like beam flicker, spindle stability, 
shutter accuracy, etc.  All these usually add up to ~5% error, and indeed 
even the Braggs could typically get +/-5% for the intensity of the diffracted 
rays they were measuring.  Things like Rsym were therefore created to check 
that nothing funny happened in the measurement.


For similar reasons, the quality of a model refined against all-strong data 
is described very well by a % error, and this is why the refinement R 
factors rapidly became popular.  Most people intuitively know what you mean 
if you say that your model fits the data to within 5%.  In fact, a widely 
used criterion for the correctness of a small molecule structure is that 
the refinement R factor must be LOWER than Rsym.  This is equivalent to 
saying that your curve (model) fit your data to within experimental error. 
Unfortunately, this has never been the case for macromolecular structures!


The problem with protein crystals, of course, is that we have lots of weak 
data.  And by "weak", I don't mean "bad"!  Yes, it is always nicer to have 
more intense spots, but there is nothing shameful about knowing that certain 
intensities are actually very close to zero.  In fact, from the point of view 
of the refinement program, isn't describing some high-angle spot as: "zero, 
plus or minus 10", better than "I have no idea"?   Indeed, several works 
mentioned already as well as the "free lunch" algorithm have demonstrated 
that these "zero" data can actually be useful, even if it is well beyond the 
resolution limit.


So, what do we do?  I see no reason to abandon R factors, since they have 
such a long history and give us continuity of criteria going back almost a 
century.  However, I also see no reason to punish ourselves by including lots 
of zeroes in the denominator.  In fact, using weak data in an R factor 
calculation defeats their best feature.  R factors are a very good estimate 
of the fractional component of the total error, provided they are calculated 
with strong data only.


Of course, with strong and weak data, the best thing to do is compare the 
model-data disagreement with the magnitude of the error.  That is, compare 
|Fobs-Fcalc| to sigma(Fobs), not Fobs itself.  Modern refinement programs do 
this!  And I say the more data the merrier.
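
(A minimal sketch of both suggestions on synthetic numbers -- the arrays and 
the sigma model are invented for illustration, and this is not any program's 
actual implementation: compute the R factor over strong reflections only, but 
judge the whole data set by comparing |Fobs - Fcalc| with sigma(Fobs).)

import numpy as np

def r_factor(fobs, fcalc):
    return np.abs(fobs - fcalc).sum() / fobs.sum()

rng = np.random.default_rng(1)
fobs  = rng.exponential(20.0, size=5000)
sigf  = 2.0 + 0.02 * fobs                  # made-up error estimates
fcalc = fobs + rng.normal(0.0, sigf)       # a "model" that fits to within the errors

strong = (fobs / sigf) > 3.0               # R factor from strong data only
print("R (all data)      :", round(r_factor(fobs, fcalc), 3))
print("R (Fobs/sigF > 3) :", round(r_factor(fobs[strong], fcalc[strong]), 3))

# The sigma-based view uses every reflection, weak or strong:
z = np.abs(fobs - fcalc) / sigf
print("mean |Fo-Fc|/sigF :", round(z.mean(), 2))   # ~0.8 when errors are within sigma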



-James Holton
MAD Scientist


On 3/4/2011 5:15 AM, Marjolein Thunnissen wrote:

hi

Recently on a paper I submitted, it was the editor of the journal who 
wanted exactly the same thing. I never argued with the editor about this 
(should have maybe), but it could be one cause of the epidemic that Bart 
Hazes saw



best regards

Marjolein

On Mar 3, 2011, at 12:29 PM, Roberto 

Re: [ccp4bb] I/sigmaI of 3.0 rule

2011-03-03 Thread Ronald E Stenkamp

Discussions of I/sigma(I) or less-than cutoffs have been going on for at least 
35 years.  For example, see Acta Cryst. (1975) B31, 1507-1509.  I was taught by 
my elders (mainly Lyle Jensen) that less-than cutoffs came into use when 
diffractometers replaced film methods for small molecule work, i.e., 1960s.  To 
compare new and old structures, they needed some criterion for the electronic 
measurements that would correspond to the fog level on their films.  People 
settled on 2 sigma cutoffs (on I, which means 4 sigma on F), but subsequently, 
the cutoffs got higher and higher, as people realized they could get lower and 
lower R values by throwing away the weak reflections.  I'm unaware of any 
statistical justification for any cutoff.  The approach I like the most is to 
refine on Fsquared and use every reflection.  Error estimates and weighting 
schemes should take care of the noise.
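
(For what it's worth, a toy sketch of the "refine on F squared and use every 
reflection" idea -- a one-parameter scale refinement on synthetic data, with 
names, weights and numbers all invented for illustration.  The weighted 
least-squares target is sum w*(Iobs - k*Icalc)^2 with w = 1/sigma(I)^2, and no 
reflection is thrown away, however weak or negative.)

import numpy as np

rng = np.random.default_rng(2)
Icalc = rng.exponential(100.0, size=2000)        # model intensities
sigI  = np.sqrt(Icalc + 25.0)                    # made-up error model
Iobs  = 0.8 * Icalc + rng.normal(0.0, sigI)      # "data": true scale 0.8 plus noise

w = 1.0 / sigI**2                                # weights from the error estimates
# Minimize sum w*(Iobs - k*Icalc)^2 over k: closed-form weighted least squares
k = (w * Iobs * Icalc).sum() / (w * Icalc**2).sum()
print("refined scale k =", round(k, 3), "(true value 0.8; every reflection used)")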

Ron

On Thu, 3 Mar 2011, Ed Pozharski wrote:


On Thu, 2011-03-03 at 09:34 -0600, Jim Pflugrath wrote:

As mentioned there is no I/sigmaI rule.  Also you need to specify (and
correctly calculate) <I/sigmaI> and not <I>/<sigmaI>.

A review of similar articles in the same journal will show what is
typical
for the journal.  I think you will find that the I/sigmaI cutoff
varies.
This information can be used in your response to the reviewer as in, "A
review of actual published articles in the Journal shows that 75% (60 out of
80) used an I/sigmaI cutoff of 2 for the resolution of the diffraction
data used in refinement.  We respectfully believe that our cutoff of 2
should be acceptable."



Jim,

Excellent point.  Such statistics would be somewhat tedious to gather
though, does anyone know if I/sigma stats are available for the whole
PDB somewhere?

On your first point though - why is one better than the other?  My
experimental observation is that while the two differ significantly at low
resolution (what matters, of course, is I/sigma itself and not the
resolution per se), at high resolution where the cutoff is chosen they
are not that different.  And since the cutoff value itself is rather
arbitrarily chosen, then why is <I/sigmaI> better than <I>/<sigmaI>?

Cheers,

Ed.


--
I'd jump in myself, if I weren't so good at whistling.
  Julian, King of Lemurs



Re: [ccp4bb] Merging statistics and systematic absences

2011-01-24 Thread Ronald E Stenkamp

Maybe counting reflections is dull and boring, but doesn't it make you wonder 
sometimes when two programs read the same file and end up with different 
reflection counts?  What sophisticated mathematics or logical conditions make 
it such that progams can't mimic adding machines?  What else are these programs 
doing?  It's sometimes quite bewildering.  Ron

On Mon, 24 Jan 2011, Phil Evans wrote:


My immediate response to this is that anyone worrying about the number of 
reflections should get out more.

More seriously, in assessing the quality of measurements, multiple observed 
measurements of systematically absent reflections should agree within their error 
estimates and (ideally) have a mean = 0.0, thus it seems valid to include them in 
measures of internal consistency such as Rmerge, Rmeas (strictly they should be 
compared to 0 rather than their mean <Ih>, I suppose).

On the other hand, in looking at intensity statistics (as in truncate) the observed 
intensities should be compared with their expectation values which will depend on 
the space group (but still included perhaps). However there are so few of these 
reflections that it won't make much difference (though it is more important to 
separate centric & acentric reflections, as there may be quite a lot of 
centrics)

ie situation normal (SNAFU?) - it is not going to make any difference

Phil


On 24 Jan 2011, at 09:13, Graeme Winter wrote:


Dear ccp4bb,

I had an interesting question from a xia2 user last week for which I
did not have a good answer. Here's the situation:

- spacegroup is P212121, which was specified on the command-line
- xia2 processes this as oP, assigns the spacegroup as P212121 before
running scala - generates merging stats
- truncate removes systematically absent reflections

The end result is that there are fewer reflections in the output MTZ
file and hence used for refinement than are reported in the merging
statistics. The question is - what is correct? Clearly the effect on
the merging statistics will be modest or trivial as there were only ~
70 absent reflections, however removing them before scaling & merging
will also be a little fiddly.

I see three (or maybe four) options -

- have truncate leave the absent reflections in
- remove the absent reflections before scaling
- ignore this as situation normal (which is what xia2 currently does)
- (less helpful *) refine against intensities which includes the
absent reflections but matches the merging statistics

If this were my project I would probably opt for #3 but I can
appreciate that this is a question for the wider audience.

What do others think?

Many thanks in advance,

Graeme

* I mark this as less helpful because this is the wrong *reason* to
refine against intensities. There are clearly good reasons for this.




Re: [ccp4bb] Resolution and distance accuracies

2010-12-23 Thread Ronald E Stenkamp

Something related to the results in the 1984 paper, but never published, is 
that the calculated electron density for an atom with a B of 100 Angstroms**2 
is so flat that you wonder how those atoms can be seen in electron density maps.
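
(To put a rough number on that flatness: in the usual point-atom Gaussian 
approximation with an isotropic B, the peak density scales as (4*pi/B)^(3/2), 
so the comparison below -- B values chosen by me purely for illustration -- 
says an atom with B = 100 Angstroms**2 peaks about eleven times lower than one 
with B = 20.)

def peak_density_ratio(b1, b2):
    """Ratio of point-atom peak heights, rho(0) proportional to (4*pi/B)**1.5."""
    return (b2 / b1) ** 1.5

print(peak_density_ratio(20.0, 100.0))   # ~11.2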

Ron

On Thu, 23 Dec 2010, Bernhard Rupp (Hofkristallrat a.D.) wrote:


can anyone point me to a more exact theory of distance accuracy compared to
optical resolution, preferably one that would apply to microscopy as well.


Stenkamp RE,  Jensen LH (1984) Resolution revisited: limit of detail in
electron density maps. Acta Crystallogr. A40(3), 251-254.

MX, BR



Re: [ccp4bb] Regarding space group P1, P21

2010-10-21 Thread Ronald E Stenkamp

How you choose to make use of (or ignore) crystallographic symmetry comes down to your 
view of what constitutes the best model for the sample you're studying.  How 
similar do you believe the molecules are in your crystal?  If you describe the model in a 
higher symmetry space group, you believe that given the information content of the 
diffraction pattern, the molecules are identical.  If you describe it using fewer 
symmetry operations, you believe the molecules differ in some way.  So, how you describe 
the symmetry of your crystal comes down to determining the simplest model consistent with 
your experimental observations.  Ron

On Thu, 21 Oct 2010, Jacob Keller wrote:


 
I have heard many times that it is a black eye to refine in a lower-symmetry 
spacegroup, but I could never really
understand why. The higher symmetry could be considered merely a helpful 
theoretical lens to improve signal-to-noise,
and therefore imposing higher symmetry on the data could be seen as a sort of 
*leniency* of scientific (or at least
empiric) rigor. I think similarly about using discrete spot intensities rather 
than the whole image--we assume Bragg
conditions and neglect certain things about the image between the spots, which 
is usually valid, but not always. I
wonder why it is considered maladroit to refine in a lower spacegroup, 
then--don't higher spacegroups impose more
assumptions than P1?
 
Jacob Keller
 
  - Original Message -
From: James Holton
To: CCP4BB@JISCMAIL.AC.UK
Sent: Thursday, October 21, 2010 10:55 AM
Subject: Re: [ccp4bb] Regarding space group P1, P21


You pick the Rfree flags in the high-symmetry space group, and then use CAD with 
OUTLIM SPACE P1 to
symmetry-expand them to P1 (or whatever you like).

Things get trickier, however, when your NCS is close to (but not exactly) 
crystallographic (NECS?).  Or if you
are simply not sure.  The best way I can think of to deal with this situation is to 
road test your Rfree:
1) do something that you know is wrong, like delete a helix, or put some side 
chains in the wrong place
2) refine with NCS turned on
3) check that Rfree actually goes up
4) un-do the wrong things
5) refine again
6) check that Rfree actually goes down
7) try again with NCS turned off

Remembering these timeless words of wisdom: Control, Control, you must learn 
CONTROL! -Yoda (Jedi Master)

-James Holton
MAD Scientist

On 10/21/2010 8:46 AM, Christina Bourne wrote:
  Dear all,
  How would one properly select reflections for R-free in these 
situations?  Presumably if the
  selection is done in P1 then it mimics twinning or high NCS, such that 
reflections in both the work
  and free set will be (potentially?) related by symmetry.
  -Christina

__
From: Mohinder Pal m...@soton.ac.uk
To: CCP4BB@JISCMAIL.AC.UK
Sent: Thu, October 21, 2010 7:05:42 AM
Subject: [ccp4bb] Regarding space group P1, P21

Dear CCP4BB members,

I have solved a protein-drug complex structure in P21212 space group.  In this 
structure, the drug
molecule is  falling on the two-fold symmetry axis having averaged electron 
density  with 0.5 occupancy.
We tried a lot to crystallize this protein-drug complex in different space 
group but no success so far.  I
have tried to solve the same data  in space group P1 (statistics are fine as I 
have collected data for 360
degree). The map looks even better with one conformation for a drug. 
Interestingly, then I reprocessed the
same data using imosflm in P21 space group which have penalty 1 compared to 4 
for P21212.  The structure
in P21 is  also refining well (with one conformation of the drug compound 
without symmetry axis at the
ligand position). The question is, is it good practice to solve this 
structure in P1 and P21 even if
the data has higher symmetry?

Secondly, I have been advised that I have to be careful when refining the structure in 
P1, as there will be a problem
regarding the observation/parameter ratio if I add too many water molecules. What 
will be the case if
electron density is present for water molecules? 

I can put restraints on the protein structure, but I am just curious to know how many 
observations one restraint equals.

I look forward to hearing your suggestions.

Kind regards,

Mohinder Pal



 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
Dallos Laboratory
F. Searle 1-240
2240 Campus Drive
Evanston IL 60208
lab: 847.491.2438
cel: 773.608.9185
email: j-kell...@northwestern.edu
***




Re: [ccp4bb] Anisotropic data and an extremely long c axis

2010-06-09 Thread Ronald E Stenkamp
But at some point, getting a clear map might not be the goal.  If you're in refinement mode, the weak reflections also provide information that your model needs to fit.  I find I/sig(I) (or <I/sig(I)>) to be about as useful as 
Rmerge (or its relatives).   Ron


On Wed, 9 Jun 2010, James Holton wrote:


Frank von Delft wrote:



On 09/06/2010 16:49, James Holton wrote:
Operationally, I recommend treating anisotropic data just like isotropic 
data.  There is nothing wrong with measuring a lot of zeros (think about 
systematic absences), other than making irrelevant statistics like Rmerge 
higher.  One need only glance at the formula for any R factor to see that 
it is undefined when the true F is zero.  Unfortunately, there are still 
a lot of reviewers out there who were trained that the Rmerge in the 
outermost resolution bin must be < 20%, and so some very sophisticated 
ellipsoidal cut-off programs have been written to try and meet this 
criterion without throwing away good data.  I am actually not sure where 
this idea came from, but I challenge anyone to come up with a sound 
statistical basis for it.  Better to use I/sigma(I) as a guide, as it 
really does tell you how much information vs noise you have at a given 
resolution.

So, if my outer shell has
10% reflections I/sigI > 10,
90% reflections I/sigI=1,
will Mean(I/sigI) for that shell tend to 10 or 1?

Presumably I'm calculating it wrong in my simulation (very naive: took 
average of all individual I/sigI), because for me it tends to 1.


But if I did get it right, then how does Mean(I/sigI) tell me that 10% of my 
observations have good signal?


It doesn't.  The mean will not tell you anything about the distribution of 
I/sigI values, it will just tell you the average.  If I may simplify your 
example case to: one good observation (I/sigI = 10) and 9 weak observations 
(I/sigI = 1), then Mean(I/sigI) = ~2.  This is better than Mean(I/sigI) = 1, 
but admittedly still not great.  I know it is tempting to say: but wait!  I've 
got one really good reflection at that resolution!  Doesn't that count for 
something?  Well, it does (a little), but one good reflection does not a clear 
map make.
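
(Two lines of Python just to reproduce that arithmetic: one observation at 
I/sigI = 10 plus nine at I/sigI = 1 averages to 1.9, which says almost nothing 
about the single strong one.)

ratios = [10.0] + [1.0] * 9          # one strong observation, nine weak ones
print(sum(ratios) / len(ratios))     # 1.9 -- the mean hides the distribution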


-James Holton
MAD Scientist



Re: [ccp4bb] units of f0, f', f''

2010-03-01 Thread Ronald E Stenkamp
Hi.

I'm a little reluctant to get into this discussion, but I'm greatly confused by
it all, and I think much of my confusion comes from trying to understand one of 
Ian's assumptions.

Why are the scattering factors viewed as dimensionless quantities?  In
the International Tables (for example, Table 6.1.1.1 in the blue books), the
scattering factors are given in electrons.   In the text for that section,
the scattering factors are obtained from an integral (over space) of the
electron density.  So there's some consistency there between scattering factors
in units of electrons and electron density in electrons/(Angstrom**3).  What's
gained at this point by dropping the word electron from all of these
dimensions?

Ron




On Sat, 27 Feb 2010, Ian Tickle wrote:

 I'm not aware that anyone has suggested the notation rho e/Å^3.

 I think you misunderstood my point, I certainly didn't mean to imply that
 anyone had suggested or used that notation, quite the opposite in fact.  My
 point was that you said that you use the term 'electron density' to define
 two different things either at the same time or on different occasions, but
 that to resolve the ambiguity you use labels such as 'e/Å^3' or
 'sigma/Å^3' attached to the values.  My point was that if I needed to use
 these quantities in equations then the rules of algebra require that
 distinguishable symbols (e.g. rho and rho') be assigned, otherwise I would
 be forced into the highly undesirable situation of labelling the symbols
 with their units in the equations in the way you describe in order to
 distinguish them.  Then in my 'Notation' section my definitions of rho &
 rho' would need to be different in some way, again in order to distinguish
 them: I could not simply call both of them 'electron density' as you appear
 to be doing.

 The question of whether your units of electron density are '1/Å^3' or
 'e/Å^3' clearly comes down to definition, nothing more.  If we can't agree
 on the definition then we are surely not going to agree on the units!
 Actually we don't need to agree on the definition: as long as I know what
 precisely your set of definitions is, I can make the appropriate adjustments
 to my units & you can do the same if you know my definitions; it just makes
 life so much easier if we can agree to use the same definitions!  Again it
 comes down to the importance of having a 'Notation' section so everyone
 knows exactly what the definitions in use are.  My definition of electron
 density is number of electrons per unit volume which I happen to find
 convenient and for which the appropriate units are '1/Å^3'.  In order for
 your choice of units 'e/Å^3' to be appropriate then your definition would
 have to be electric charge per unit volume, then you need to include the
 conversion factor 'e' (charge on the electron) in order to convert from my
 number of electrons to your electric charge, otherwise your values will
 all be very small (around 10^-19 in SI units).  I would prefer to call this
 quantity electric charge density since electron density to me implies
 density of electrons not density of charge.  I just happen to think that
 it's easier to avoid conversion factors unless they're essential.

 Exactly the same thing of course happens with the scattering factor: I'm
 using what I believe is the standard definition (i.e. the one given in
 International Tables), namely the ratio of scattered amplitude to that for a
 free electron which clearly must be unitless.  So I would say 'f = 10' or
 whatever.  I take it that you would say 'f = 10e'.  Assuming that to be the
 case, then it means you must be using a different definition consistent with
 the inclusion of the conversion factor 'e', namely that the scattering
 factor is the equivalent point electric charge, i.e. the point charge that
 would scatter the same X-ray amplitude as the atom.  I've not seen the
 scattering factor defined in that way before: it's somewhat more convoluted
 than the standard definition but still usable.  The question remains of
 course - why would you not want to stick to the standard definitions?

 BTW I assume your 'sigma/Å^3' was a slip and you intended to write just
 'sigma' since sigma(rho) must have the same units as rho (being its RMS
 value), i.e. 1/Å^3, so in your second kind of e.d. map rho/sigma(rho) is
 dimensionless (and therefore unitless).  However since rho and sigma(rho)
 have identical units I don't see how their ratio rho/sigma(rho) can have
 units of 'sigma', as you seem to imply if I've understood correctly?

 What I'm more concerned about is when you assign a numerical value to
 a quantity.  Take the equation E=MC^2.  The equation is true
 regardless
 of how you measure your energy, mass, and speed.  It is when you say
 that M = 42 that it becomes important to unambiguously label 42 with
 its units.  It is when you are given a mass equal to 42 newtons, the
 speed of light in furlongs/fortnight, and asked to calculate
 the energy
 in 

Re: [ccp4bb] H32 or R 3 2 :H

2009-12-15 Thread Ronald E Stenkamp

I appreciate learning that the R32/H32 tangle was based on a wwPDB 
recommendation.  For some reason, I find it calming to view this as a PDB issue 
and not a ccp4 one.  Ron


On Tue, 15 Dec 2009, Eleanor Dodson wrote:

Just a correction - ccp4 had NOTHING to do with H32 definitions - just followed 
the wwPDB requirements.. there were bitter arguments over accepting it from 
many!


E


Peter Zwart wrote:

Hi Stephen,


R32
H32
R32 :H


Correct. These are all hexagonal setting. As far as I know, the
hexagonal setting of R32 (R32:H) is the first one that comes up in the
ITvA and is listed as R32. The rhombohedral/primitive setting of R32
(R32:R) comes second in the ITvA, I guess the first setting takes
precedence. H32 is a pdb/ccp4ism.

In my cctbx-skewed view, it looks like this:

R32 ==  R32:H (== H32; not supported by the cctbx)

R32:R is the primitive setting of R32:H

Appending the setting to the space group makes life easier (no
ambiguities) and you can do more funky stuff if one has the
stomach/need to do so [like  P212121 (a+b,a-b,c) ].


HTHP




Re: [ccp4bb] video that explains, very simply, what Structural Molecular Biology is about

2009-11-14 Thread Ronald E Stenkamp

The rumblings here at the Univ. of Washington among the computational modelers are that 
some of their current models might be more representative of protein structures in 
solution than are the crystal structure models.  It may take less than a couple of 
decades for a reduced emphasis on crystallographic studies.

Ron Stenkamp


On Sat, 14 Nov 2009, Van Den Berg, Bert wrote:



I wonder, just as a side note, whether there will still be a (big) need for 
X-ray crystallography in a couple of decades?

What will be the state of the art then in structure prediction?
How much of structure space will have been covered by then, so that homology 
modeling can do most of the tricks?

Bert van den Berg
UMass Medical School


-Original Message-
From: CCP4 bulletin board on behalf of George M. Sheldrick
Sent: Sat 11/14/2009 4:08 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] video that explains, very simply, what Structural 
Molecular Biology is about


Apologies for the typo, I meant to say 'Bernhard'.
George

Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


On Sat, 14 Nov 2009, George M. Sheldrick wrote:



I also think that it is a very nice video and I will certainly be showing
it to my students (and relatives). However the little step between
recording the X-ray reflections and getting a final refined map might be
expanded a little. That is after all what CCP4 etc. is about! Otherwise,
despite the suggestion in the film that an expert should be consulted, we
are reinforcing the view held by many biologists and chemists that crystal
structure determination is a routine analytical method and not suitable
for an academic career. It worries me that in a couple of decades there
will be few people still active who really understand how it works. On
the other hand, Berhard's impressive new book may solve that problem (if
people still read books).

George

Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


On Fri, 13 Nov 2009, claude sauter wrote:


Narayanan Ramasubbu wrote:
 mb1pja wrote:
  Dear Fred
 
  A really nice video that would be great for giving non-crystallographers
  (including colleagues and 1st year students, and perhaps also friends and
  family) an overview of what we do. Thank you for pointing it out - and of
  course very many thanks to Dominique Sauter for making it. I am sure it
  will prove very popular.
 
  best wishes
  Pete
 
  (Pete Artymiuk)
 
 
 
  On 11 Nov 2009, at 09:44, Vellieux Frederic wrote:
 
 
   Dear all,
  
   Thought I'd share this with you:
  
   I located this through Ms Ines Kahlaoui, from the Beja Higher Institute
   of Biotechnology in Tunisia (Ines has to teach and locates videos on the
   internet, which she then downloads and uses for teaching). Ines located
   this jewel:
  
   
http://video.google.com/videoplay?docid=7084929825683486794ei=M3b5SvXqD6em2AK3jY33CQq=Plongee+coeur+vivant#
  
   This is the French version (explains everything about Structural
   Molecular Biology, but for the maths :-( , but also shows what we
   crystallographers have known for a long time, since the first colour E&S
   graphics workstations in fact, that the electrons are blue :-) ).
  
   Both French and English versions can be downloaded from
  
   http://cj.sauter.free.fr/xtal/Film/
  
   No rights associated with the movie, and the Strasbourg group intends to
   release a higher quality version on DVD soon. Please contact them about
   that... I am only sharing what I thought was good for educational
   purposes. 18 minutes of your life, but worth it I think. So feel free to
   share this.
  
   Wish you all a nice day,
  
   Fred.
  
 
 
 Hi:
 Could someone point out the name and where to get these crystallization
 plates used in the video?
 By the way, this is a wonderful video.
 Subbu



Dear Subbu and dear xtal lovers,

the fancy plates used in the video are Nextal EasyXtal plates which are now
sold by Qiagen.

Concerning the video (thank you Fred for your kind advertisement!), the final
version (English/French) will be available on DVD very soon, as well as in divx
and flash formats; we are working hard to get them ready by Christmas. This
material will be released under the Creative Commons licence to make it easily
accessible for all kinds of education / teaching purposes.

I'll keep you informed as soon as the final version is ready.

Claude


--
Dr Claude Sauter
Institut de Biologie Moléculaire et Cellulaire (IBMC-ARN-CNRS)
Cristallogenèse & Biologie Structurale  tel +33 (0)388 417 102
15 rue René Descartes   fax +33 (0)388 602 218
F-67084 Strasbourg - France  http://cj.sauter.free.fr/xtal






Re: [ccp4bb] refinment in lower space group vs. higher

2009-01-04 Thread Ronald E Stenkamp

Hi.

If the most precise and accurate description of your crystal structure is the 
orthorhombic one, you should average the replicated reflections and refine the 
structure in C2221.  If the crystals are orthorhombic and you refine and report 
it in C2, you're producing a model that is larger and more complex than you 
need to explain the experimental data.  You're also reporting to the world that 
the two molecules in the C2 asymmetric unit are different when there's no 
experimental evidence supporting that.

To extract the most information from your diffraction data, you should refine 
your structure in the highest symmetry space group consistent with the 
experimental data.

Ron


On Sat, 3 Jan 2009, Yu Jiang wrote:


Dear all,

I am now refining one structure in space group C2 with 2 molecules, however
we find the data can be processed in C2221 with 1 molecule. The questions
are:

1. Can I still use space group C2 instead of C2221, even if the two
molecules almost have crystallographic symmetry? Should I provide any
excuses?

2. Is there any indication or necessity to refine structures in lower space
group instead of higher?

Many thanks!






Re: [ccp4bb] About system absence in P4222?

2008-12-11 Thread Ronald E Stenkamp

Hi.

Non-crystallographic symmetry (NCS) doesn't apply to the entire crystal, so how 
can it give rise to systematic absences?  I know it can give rise to 
systematically weak classes of reflections, but they aren't entirely absent.

Ron

On Thu, 11 Dec 2008, Winter, G (Graeme) wrote:


One of the many facilities in pointless is to search for absences and provide a 
list of likely spacegroup choices based on the results. It includes adjustments 
for neighbouring spots to address one of Eleanor's concerns. NCS can cause 
reflections to be systematically absent too.

The program can be found on the ccp4 prerelease pages or on the pointless ftp 
site.

Cheers,

Graeme

-Original Message-
From: CCP4 bulletin board [mailto:[EMAIL PROTECTED] On Behalf Of Eleanor Dodson
Sent: 11 December 2008 09:41
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] About system absence in P4222?

劉家欣(NTHU) wrote:

Dear All:

We have a crystal with P4222 sg.
All statistics look fine.
However, there is a systematic absence along the l axis.
Does anybody have experience with that?
Any suggestions would be highly appreciated.

jaishin


can you give more details, eg all reflections along the particular axis..
Things like ice rings or overlapping intensity from a next neighbour getting 
integrated inappropriately can cause anomalies..
Eleanor



Re: [ccp4bb] To bathe or not to bathe.

2007-11-25 Thread Ronald E Stenkamp

Just a few comments on "consider a crystal bathed in a uniform
beam."

I've not fully bought into the idea that it's OK to have the
beam smaller than the crystal.  I learned most of my crystallography
in a lab dedicated to precise structure determinations, and somewhere
along the line, I picked up the idea that it's better to remove 
systematic errors experimentally than to correct for them computationally.

(Maybe that had something to do with the computing power and programs
available at the time?)

Anyway, I thought the reason people went to smaller beams was that
it made it possible to resolve the spots on the film or detector. 
Isn't that the main reason for using small beams?


In practice, I guess the change in crystal volume actually diffracting
hasn't been a big issue and that frame-to-frame scaling deals with
the problem adequately.

I'm less convinced that frame-to-frame scaling can correct for 
absorption very well.  Due to our irregular-shaped protein crystals,

before the area detectors came along, we'd use an empirical correction
(one due to North comes to mind) based on rotation about the phi axis
of a four-circle goniostat.  It was clearly an approximation to the
more detailed calculations of path-lengths available for crystals with
well-defined faces not surrounded by drops of mother liquor and glass
capillaries.  Has anyone checked to see how frame-to-frame scaling
matches up with analytical determinations of absorption corrections?
It'd be interesting to determine the validity of the assumption that
absorption is simply a function of frame number.

Ron


Re: [ccp4bb] Statistics differences

2007-09-07 Thread Ronald E Stenkamp

Hi.

Perhaps people more familiar with the inner workings of the programs can 
comment better on this, but I believe the FOM is simply a measure of how 
unimodal and sharp your phase distribution is.  If you're working with 
experimentally determined phases, you have phase distributions that might be 
very broad and noisy and the FOM will be low.  If you do solvent flattening or 
other density modifications involving calculating structure factors, you'll end 
up with delta functions for your phase distributions (which you might broaden 
with Sim weighting or some other approximation), and the FOM will be very high.
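
(To illustrate the point, here is a short Python sketch using one common 
definition of the FOM -- the magnitude of the centroid of the phase 
probability distribution; the two distributions below are made up.  A broad, 
noisy distribution gives a low FOM, while a near-delta distribution of the 
kind produced after density modification gives an FOM close to 1, regardless 
of whether the phases are actually any good.)

import numpy as np

def fom(phases_deg, prob):
    """FOM as |centroid| of a normalised phase probability distribution."""
    p = prob / prob.sum()
    phi = np.deg2rad(phases_deg)
    return abs((p * np.exp(1j * phi)).sum())

phi = np.arange(0.0, 360.0, 5.0)
broad = np.exp(0.5 * np.cos(np.deg2rad(phi - 60.0)))   # wide, noisy distribution
sharp = np.exp(20.0 * np.cos(np.deg2rad(phi - 60.0)))  # nearly a delta function

print("FOM, broad distribution :", round(fom(phi, broad), 2))
print("FOM, delta-like         :", round(fom(phi, sharp), 2))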

You might think FOM is well defined, but you need to keep in mind what kinds of
phase distributions are being handled by the different programs.

As far as R and Rfree go, I'm unaware of anyone who can explain why different 
programs give different values.  There are a lot of ways those values can be 
affected, in particular in the treatment of the scale factor.  And it's been 
many years (and many programs ago) since the scale factor was under the control 
of the user.

Ron Stenkamp

On Fri, 7 Sep 2007, Jacob Keller wrote:


Dear list,

I have for some time now wondered why different programs output different 
statistics. A low FOM
from program A might be much better than a high FOM from program B, and so on. 
I wonder why, then,
considering that statistical measures are precisely, mathematically defined, 
how is there any
discrepancy? I have also wondered whether people might prefer certain programs 
because they are
statistically flattering. I think in my experience I have seen even statistics 
like Rfree come out
different from different programs, even without any refinement--so 
should one use that
program last, right before composing Table I for publication? That seems 
suspicious...

Jacob Keller



==Original message text===
On Fri, 07 Sep 2007 3:39:27 am CDT Andreas Kohl wrote:

Dear all,

we have currently two open postdoctoral positions in our department. We
would very much appreciate it if you could bring this announcement to
the attention of suitable candidates.
All inquiries and applications should be send to:

Prof. Pär Nordlund ([EMAIL PROTECTED]) or Dr. Said Eshaghi
([EMAIL PROTECTED])

Andreas




Postdoctoral positions available at Karolinska Institutet, Sweden

Two postdoctoral positions are available at the division of Biophysics
in the department of Medical Biochemistry and Biophysics, at Karolinska
Intitutet in Stockholm. The division is headed by Professor Pär Nordlund
and is focused on functional characterization of proteins, primarily
using X-ray crystallography. In addition, new technologies are being
developed and improved as tools to enable high-throughput approaches
within protein production. The group has a strong record in structure
determination of soluble and membrane proteins. The laboratory is
well-equipped with state-of-the-art instruments for protein production
and crystallization.

Currently, two postdoctoral positions are available in the membrane
protein group, working with medically important proteins from different
families, such as solute transporters, ion channels and enzymes:
1. Membrane protein chemistry/structural biology. The applicant should
have strong background in recombinant membrane protein production in E.
coli system, preferably with some experience in protein crystallization.
The main topic is to develop new and improved methods for purification
and crystallization of integral membrane proteins. Knowledge about X-ray
crystallography is ideal but not a requirement. The successful candidate
will be part of a dynamic team that is working on production,
crystallization and structure determination of membrane proteins. The
applicant must therefore have the ability to work as a team-player as
well as independently. The position is initially announced for two years.
2. Membrane protein X-ray crystallography. The applicant should have a
strong background in X-ray crystallography with a good track record. The
main tasks are crystallization screening, crystal optimization, data
collection and structure determination of integral membrane proteins.
Previous experience with membrane proteins is ideal but not a
requirement. The successful candidate will be part of a team that is
working on production and crystallization of membrane proteins. The
applicant must therefore have the ability to work as a team-player as
well as independently. The position is initially announced for two years.

Applicants should send a full CV, including a publication list, together
with the name and contact details of three references to:

Prof. Pär Nordlund ([EMAIL PROTECTED]) or Dr. Said Eshaghi
([EMAIL PROTECTED])

---
Prof. Pär Nordlund

[EMAIL 

Re: [ccp4bb] The importance of USING our validation tools

2007-08-17 Thread Ronald E Stenkamp
While all of the comments on this situation have been entertaining, I've been 
most impressed by comments from Bill Scott, Gerard Bricogne and Kim Hendricks.

I think due process is called for in considering problem structures that may or
may not be fabricated.  Public discussion of technical or craftsmanship issues
is fine, but questions of intent, etc are best discussed in private or in more 
formal settings.  We owe that to all involved.

Gerard's comments concerning publishing in journals/magazines like Nature and
Science are correct.  The pressure to publish there is not consistent with
careful, well-documented science.  For many years, we've been teaching our 
graduate students about some of the problems with short papers in those types 
of journals.  The space limitations and the need for relevance force omission 
of important details, so it's very hard to judge the merit of those papers. 
But, don't assume that other real journals do much better with this.  There's 
a lot of non-reproducible science in the journals.  Much of it comes from not 
recognizing or reporting important experimental or computational details, but 
some of it is probably simply false.

Kim's comments about the technical aspects of archiving data make a lot of
sense to me.  The costs of making safe and secure archives are not
insignificant.  And we need to ask if the added value of such archives is worth
the added costs.  I'm not yet convinced of this.

The comments about Richard Reid, shoes, and air-travel are absolutely true.  We
should be very careful about requiring yet more information for submitted
manuscripts.  Publishing a paper is becoming more and more like trying to get
through a crowded air-terminal.  Every time you turn around, there's another
requirement for some additional detail about your work.  In the vast majority
of cases, those details won't matter at all.  In a few cases, a very careful
and conscious referee might figure out something significant based on that
little detail.  But is the inconvenience for most us worth that little benefit?

Clearly, enough information was available to Read, et al. for making the case 
that the original structure has problems.  What evidence is there that 
additional data, like raw data images, would have made any difference to the 
original referees and reviewers?  Refereeing is a human endeavor of great 
importance, but it is not going to be error-free.  And nothing can make it 
error-free.  You simply need to trust that people will be honest and do the 
best job possible in reviewing things.  And that errors that make it through 
the process and are deemed important enough will be corrected by the next layer 
of reviewers.

I believe this current episode, just like those in the past, is a terrific 
indicator that our science is strong and functioning well.  If other fields 
aren't reporting and correcting problems like these, maybe it's because they 
simply haven't found them yet.  That statement might be a sign of my 
crystallographic arrogance, but it might also be true.

Ron Stenkamp