Re: [ccp4bb] R: [ccp4bb] Heavy atom vs light atoms density

2020-06-10 Thread Jrh Gmail
Dear Vito
I looked at the examples of I3C in 3e3d, 3e3s and 3e3t and certainly the latter 
two show clear 2Fo-Fc for several I3Cs at a range of occupancies. So my mention 
of the different difference maps’ effectiveness does not apply. 
Hermann and Eleanor suggestions hopefully will explain your I3C map visibility 
you showed
Best wishes
John 

Emeritus Professor of Chemistry John R Helliwell DSc_Physics 



> On 10 Jun 2020, at 09:39, John R Helliwell  wrote:
> 
> Dear Vito,
> The I3C isn't like the platins as a challenge to the 2Fo-Fc map.
> It maybe is more akin to the situation we described here:-
> https://onlinelibrary.wiley.com/doi/abs/10.1107/S0907444903004219
> ie what does your Fo-Fc map, with three iodines placed, at the correct 
> occupancy, look like? 
> Best wishes,
> John 
> 
> Emeritus Professor John R Helliwell DSc
> 
> 
> 
>>> On 10 Jun 2020, at 09:34, Vito Calderone  wrote:
>>> 
>> Dear Pierre,
>> I was in particular talking of 2FoFc refinement maps.
>> In the attached picture for example you can see the 2FoFc map contoured at 1
>> sigma level of an I3C molecule bound to a protein where for I3C the only
>> visible density is that of iodines whereas the density of the other atoms of
>> the ligand is insignificant.
>> Best regards
>> 
>> Vito
>> 
>> -Messaggio originale-
>> Da: CCP4 bulletin board  Per conto di LEGRAND Pierre
>> Inviato: mercoledì 10 giugno 2020 09:31
>> A: CCP4BB@JISCMAIL.AC.UK
>> Oggetto: Re: [ccp4bb] Heavy atom vs light atoms density
>> 
>> Dear Vito,
>> Could you precise in what kind of maps are you experiencing these effects:
>> 2FoFc refinement maps or experimental phasing ?
>> I had this kind of effects long ago in experiemental phasing due to Fourier
>> transform ripple effects. This could be due to scaling problems, low
>> resolution truncation or incompletness (maybe intensity overloads).
>> Another source condition could be local X-ray dose degradation due to the
>> high absoption of the metals.
>> Best regards,
>> Pierre Legrand
>> PROXIMA-1, SOLEIL
>> 
>> De : CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] de la part de Vito
>> Calderone [calder...@cerm.unifi.it] Envoyé : mercredi 10 juin 2020 08:42 À :
>> CCP4BB@JISCMAIL.AC.UK Objet : [ccp4bb] Heavy atom vs light atoms density
>> 
>> Dear All,
>>   many of us have probably experienced that, in the diffraction
>> of protein ligands containing heavy atoms (cisPt, I3C, etc), the
>> overwhelming electron density of the metal can totally flatten that of the
>> light atoms around (or rather make it look insignificant).
>> Is anyone aware of an article/review (to use as a reference) in which this
>> is clearly stated/pointed out?
>> Best regards
>> 
>> Vito Calderone
>> 
>> 
>> 
>> 
>> 
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1
>> 
>> 
>> 
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1
>> 
>> This message was issued to members of www.jiscmail.ac.uk/CCP4BB, a mailing
>> list hosted by www.jiscmail.ac.uk, terms & conditions are available at
>> https://www.jiscmail.ac.uk/policyandsecurity/
>> 
>> 
>> 
>> To unsubscribe from the CCP4BB list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1
>> 
>> This message was issued to members of www.jiscmail.ac.uk/CCP4BB, a mailing 
>> list hosted by www.jiscmail.ac.uk, terms & conditions are available at 
>> https://www.jiscmail.ac.uk/policyandsecurity/
>> 



To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1

This message was issued to members of www.jiscmail.ac.uk/CCP4BB, a mailing list 
hosted by www.jiscmail.ac.uk, terms & conditions are available at 
https://www.jiscmail.ac.uk/policyandsecurity/


Re: [ccp4bb] Unusual dataset with high Rmerge and extremely low b-factors?

2019-11-04 Thread Jrh Gmail
Dear Michael
The Rmerge in the strong intensity bin of 0.079 is untypically high it seems to 
me. 
Were the diffraction images underexposed? 
Best wishes
John

Emeritus Professor of Chemistry John R Helliwell DSc_Physics 



> On 3 Nov 2019, at 23:19, Michael Jarva  wrote:
> 
> 
> Hi CCP4BB,
> 
> I have some unusual crystal diffraction data I'd like to get your input on.
> 
> Almost a year ago I shot some small rods sticking out of a loop, so basically 
> no liquid around them - using the microfocus MX2 beamline at the australian 
> synchrotron, collected on an EIGER 16M detector.
> 
> The crystals diffracted weakly and was seemingly not viable at first glance 
> because of high Rmerge/Rpims. See the aimless summary at the bottom of this 
> post. This seemed to stem from a low spot intensity at low resolutions 
> (I/sd(I)=4.6), but since the CC1/2 was fine I went with it anyway.
> 
> Here I also noted an unusually low Mosaicity, 0.05, and Wilson B-factors, 
> 8.02 Å^2.
> 
> Density maps looked great and the build refined easily enough (R/Rfree 
> 0.1939/0.2259) with a mean B-factor of 19.85, which according to phenix is 
> lower than any other structure deposited in that resolution bin. Furthermore, 
> the molprobity score is 0.83, and overall real-space correlation CC is 0.855.
> 
> So my question is, can I feel comfortable depositing this? 
> 
> best regards
> Michael
> 
> Chosen Solution:space group P 1 21 1
> Unit cell:44.93   41.90   45.83  90.00  115.57   90.00
> Number of batches in file:   1659
> The data do not appear to be twinned, from the L-test
> Overall InnerShell OuterShell
> Low resolution limit   41.34 41.34  2.49
> High resolution limit   2.40  8.98  2.40
> 
> Rmerge  (within I+/I-) 0.231 0.084 0.782
> Rmerge  (all I+ and I-)0.266 0.099 0.983
> Rmeas (within I+/I-)   0.323 0.118 1.091
> Rmeas (all I+ & I-)0.317 0.118 1.167
> Rpim (within I+/I-)0.225 0.084 0.759
> Rpim (all I+ & I-) 0.171 0.063 0.623
> Rmerge in top intensity bin0.079- - 
> Total number of observations   19901   362  2067
> Total number unique 6054   126   611
> Mean((I)/sd(I))  2.7   4.6   1.0
> Mn(I) half-set correlation CC(1/2) 0.958 0.991 0.570
> Completeness98.6  98.2  97.3
> Multiplicity 3.3   2.9   3.4
> Mean(Chi^2) 0.48  0.33  0.50
> 
> Anomalous completeness  81.7  92.2  75.1
> Anomalous multiplicity   1.5   1.8   1.9
> DelAnom correlation between half-sets -0.003 0.041 0.045
> Mid-Slope of Anom Normal Probability   0.704   - -  
> 
> The anomalous signal appears to be weak so anomalous flag was left OFF
> 
> Estimates of resolution limits: overall
>from half-dataset correlation CC(1/2) >  0.30: limit =  2.40A  == maximum 
> resolution
>from Mn(I/sd) >  1.50: limit =  2.67A 
>from Mn(I/sd) >  2.00: limit =  2.87A 
> 
> Estimates of resolution limits in reciprocal lattice directions:
>   Along0.96 a* - 0.28 c*
>from half-dataset correlation CC(1/2) >  0.30: limit =  2.40A  == maximum 
> resolution
>from Mn(I/sd) >  1.50: limit =  2.40A  == maximum 
> resolution
>   Along k axis
>from half-dataset correlation CC(1/2) >  0.30: limit =  2.40A  == maximum 
> resolution
>from Mn(I/sd) >  1.50: limit =  2.86A 
>   Along   -0.17 a* + 0.99 c*
>from half-dataset correlation CC(1/2) >  0.30: limit =  2.40A  == maximum 
> resolution
>from Mn(I/sd) >  1.50: limit =  2.98A 
> 
> Anisotropic deltaB (i.e. range of principal components), A^2:  8.62
> 
> Average unit cell:44.93   41.90   45.83   90.00  115.57   90.00
> Space group: P 1 21 1
> Average mosaicity:   0.05
> 
> Minimum and maximum SD correction factors: Fulls   1.27   1.28 Partials   
> 0.00   0.00
> 
> 
> 
> 
> Michael Jarva, PhD
> ACRF Chemical Biology Division
> The Walter and Eliza Hall Institute of Medical Research
> 1G Royal Parade
> Parkville Victoria 3052
> Australia
> Phone: +61 3 9345 2493 
> Email: jarv...@wehi.edu.au | Web: http://www.wehi.edu.au/
> The ACRF Chemical Biology Division is supported by the
> Australian Cancer Research Foundation
> 
> ___ 
> 
> The information in this email is confidential and intended solely for the 
> addressee.
> You must not disclose, forward, print or use it without the permission of the 
> sender.
> 
> The Walter and Eliza Hall Institute 

Re: [ccp4bb] tNCS incompatible with cell dimensions

2019-06-01 Thread Jrh Gmail
Dear Kevin
You could try reindexing into P1, then run Phaser and with its solution as 
input to Zanuda determine the space group. 
Best wishes,
John 

Emeritus Professor of Chemistry John R Helliwell DSc_Physics 




> On 31 May 2019, at 21:09, Kevin Jude  wrote:
> 
> Hello community, I wonder if I could solicit advice about a problematic 
> dataset. I plan to solve the structure by molecular replacement and expect 
> that the protein is relatively compact, ie not elongated. SAXS data supports 
> this expectation.
> 
> The crystals diffract to 2.6 Å resolution and appear to be in P 21 21 2 with 
> a = 49, b = 67, c = 94, which should fit <=2 molecules in the ASU with 40% 
> solvent. The native Patterson shows a large peak (12 sigma) suggesting a tNCS 
> vector of {0.5, 0.5, 0}.
> 
> If you're sharper than me, you may have already spotted the problem - c is 
> the long axis of the unit cell, but tNCS constrains the proteins to a plane 
> parallel to the a,b plane. Indeed, molecular replacement attempts using 
> Phaser will not give a solution in any orthorhombic space group unless I turn 
> off packing, and then I get large overlaps in the a,b plane and huge gaps 
> along c.
> 
> Since I believe that my model is good (or at least the correct shape, based 
> on SAXS), I wonder if I'm misinterpreting my crystallographic data. Any 
> insights into how to approach this problem would be much appreciated.
> 
> --
> Kevin Jude, PhD
> Structural Biology Research Specialist, Garcia Lab
> Howard Hughes Medical Institute
> Stanford University School of Medicine
> Beckman B177, 279 Campus Drive, Stanford CA 94305
> Phone: (650) 723-6431
> 
> 
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1



To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1


[ccp4bb] UK Open Research Data Task Force Final Report

2019-03-17 Thread Jrh Gmail
Dear Colleagues 
The UK Open Research Data Task Force Final Report is here:-
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/775006/Realising-the-potential-ORDTF-July-2018.pdf
 
This will be of general interest. As well as contributing to this report I 
wrote the crystallography case study. 
Best wishes, 
John 

Emeritus Professor of Chemistry John R Helliwell DSc_Physics 
Chairman of IUCr Committee on Data





To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB=1


Re: [ccp4bb] FW: [ccp4bb] old data - headers

2019-01-31 Thread Jrh Gmail
Hello Harry
I used SRS 7.2 and 9.6 at a wide variety of monochromatic wavelengths for 
resonant scattering (AD) studies. But I can imagine high intensity application 
PX measurements were made at those specific wavelengths which you mention.
Greetings from Novosibirsk,
John 

Emeritus Professor of Chemistry John R Helliwell DSc_Physics 




> On 31 Jan 2019, at 18:02, Harry Powell 
> <193323b1e616-dmarc-requ...@jiscmail.ac.uk> wrote:
> 
> Hi
> 
> This looks like it was on beamline 7.2 (which had a fixed wavelength of 
> 1.488Å, according to an article written by Liz Duke - see 
> https://www.ccp4.ac.uk/newsletters/newsletter37/11_beamline14.html); I can't 
> remember if detector 421 was a Q4 or a Q4R, but (again, according to the same 
> article) Daresbury certainly had at least one of each at some time!
> 
> BL 7.2 was actually the only beamline at Daresbury that I collected my own 
> data on (before I worked on Mosflm) - using an Arndt-Wonacott camera and 
> film...
> 
> Harry
> 
>> On 31 Jan 2019, at 10:48, Dean Derbyshire wrote:
>> 
>> huge thanks everyone.. what a response.  All good now
>> :)
>> 
>> 
>> -Original Message-
>> From: Pedro Matias [mailto:mat...@itqb.unl.pt] 
>> Sent: den 31 januari 2019 11:46
>> To: Dean Derbyshire ; CCP4BB@JISCMAIL.AC.UK
>> Subject: Re: [ccp4bb] FW: [ccp4bb] old data - headers
>> 
>> Hi Dean,
>> 
>> I reckon that it is an ADSC Quantum 4 or 4R CCD detector. If you open the 
>> images with mosflm or adxv you should see the characteristic 2x2 tile 
>> pattern.
>> 
>> Note how the header format is similar to the one you posted from the ESRF.
>> 
>> Pedro
>> 
>> Às 10:21 de 31/01/2019, Dean Derbyshire escreveu:
>>> thanks all. I recon I have the ESRF data sorted.. but Daresbury.. am I 
>>> right in assuming MARCCD or are we going back as far as image plate?
>>> here is the image header
>>> Harry what do you think
>>> 
>>> HEADER_BYTES=  512;
>>> DIM=2;
>>> BYTE_ORDER=little_endian;
>>> TYPE=unsigned_short;
>>> PIXEL_SIZE=0.08160;
>>> BIN=none;
>>> ADC=slow;
>>> DETECTOR_SN=421;
>>> DATE=Wed Apr 13 17:59:22 2005;
>>> TIME=20.00;
>>> DISTANCE=125.000;
>>> OSC_RANGE=1.000;
>>> PHI=7.000;
>>> OSC_START=7.000;
>>> AXIS=phi;
>>> WAVELENGTH=1.48800;
>>> BEAM_CENTER_X=94.700;
>>> BEAM_CENTER_Y=96.400;
>>> UNIF_PED=1500;
>>> SIZE1=2304;
>>> SIZE2=2304;
>>> CCD_IMAGE_SATURATION=65535;
>>> }
>>> 
>>> 
>>> -Original Message-
>>> From: Dean Derbyshire
>>> Sent: den 31 januari 2019 10:50
>>> To: 'graeme.win...@diamond.ac.uk' ; 'Luca 
>>> Jovine' 
>>> Subject: RE: [ccp4bb] old data
>>> 
>>> ok may have lied.. not just 1 dataset i note.
>>> 
>>> Daresbury 14-1 on 19th April 2005. (I'm assuming MAR image plate but!)
>>> 
>>> ESRF ID23-1 on 4th September 2007.
>>> ESRF ID23-1 on 8th November 2007.
>>> ESRF ID23-1 on 25th June 2008.
>>> ESRF ID23-1 on 11th February2010.
>>> 
>>> I will post the headers in a sec
>>> 
>>> :)
>>> 
>>> -Original Message-
>>> From: graeme.win...@diamond.ac.uk [mailto:graeme.win...@diamond.ac.uk]
>>> Sent: den 31 januari 2019 10:42
>>> To: Dean Derbyshire 
>>> Cc: ccp4bb@jiscmail.ac.uk
>>> Subject: Re: [ccp4bb] old data
>>> 
>>> Hi Dean
>>> 
>>> Usually there is a serial number buried somewhere in the header - many 
>>> are text headers though some have TIFF format binary headers. Often a 
>>> timestamp as well though this is less common
>>> 
>>> From there biosync may help e.g.
>>> 
>>> http://biosync.sbkb.org/beamlineupdatehistory.jsp?region=european
>>> h_id=esrf_name=BM14=400=600
>>> 
>>> (the list of ESRF MX beamlines is short so should not be too painful)
>>> 
>>> Knowing the format, filename you can probably pin it down or if you 
>>> share a little more info someone on the BB will know
>>> 
>>> All the best Graeme
>>> 
>>> 
>>> 
>>> On 31 Jan 2019, at 09:38, Dean Derbyshire 
>>> mailto:dean.derbysh...@medivir.com>> wrote:
>>> 
>>> maybe a silly question it there a data base or other way to tell what 
>>> detector was used to collect historic data. Image header isn’t hugely 
>>> helpful ESRF is all I know for the source but I’d like to know what 
>>> the detector was at the time… -  I’m talking 2005-2010
>>> 
>>>  Dean Derbyshire
>>>  Principal Scientist Protein Crystallography [X]
>>>  Box 1086
>>>  SE-141 22 Huddinge
>>>  SWEDEN
>>>  Visit: Lunastigen 7
>>>  Direct: +46 8 54683219
>>>  Mobile: +46731251723
>>>  www.medivir.com
>>> --
>>>  This transmission is intended for the person to whom or the 
>>> entity to which it is addressed and may contain information that is 
>>> privileged, confidential and exempt from disclosure under applicable law.
>>> If you are not the intended recipient, please be notified that any 
>>> dissemination, distribution or copying is strictly prohibited.
>>> If you have received this transmission in error, please notify us 
>>> immediately.
>>> Thank you for your cooperation.
>>> 
>>> 

[ccp4bb] An overview of our crystallographic science

2018-03-01 Thread Jrh Gmail

Dear Colleagues,
I prepared this overview of our crystallographic science in Biosciences Reports 
at the invitation of the Biochemical Society, which I imagine you would be 
interested in:-
http://www.bioscirep.org/content/37/4/BSR20170204
It is open access.
All best wishes,
John 

Emeritus Professor John R Helliwell DSc

Re: [ccp4bb] Picking water molecules at 4A structure.

2015-04-18 Thread Jrh
Dear Bert 
That is a limitation, I agree. 
Suffice to say the clarity of details seen, or not seen, will not get better in 
the 'real' situation. 
The resolution 'limit' based on CC 1/2 also now needs to be considered (in 
addition to I/sigI criterion). 
John



On 17 Apr 2015, at 17:59, Bert Van-Den-Berg bert.van-den-b...@newcastle.ac.uk 
wrote:

 John, the lower-resolution datasets in your paper were generated by 
 truncating a high-res dataset, i.e. the lo-res datasets are of great 
 quality. Would the conclusions still be valid if the data are true low-res? 
 (i.e. I/sigI 1.5-2 in last shell)?
 
 Tx Bert
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of John R 
 Helliwell [jrhelliw...@gmail.com]
 Sent: Friday, April 17, 2015 5:47 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Picking water molecules at 4A structure.
 
 Hi,
 This paper:-
 doi:10.1107/S0907444903004219
 I think will be of interest.
 Whilst 4 Angstrom resolution is not covered the article will indicate the 
 tests you could make to evaluate your 'possible water like densities'.
 Best wishes,
 John
 
 On Mon, Apr 13, 2015 at 7:14 PM, Sudipta Bhattacharyya 
 sudiptabhattacharyya.iit...@gmail.com wrote:
 Dear community,
 
 Recently we have been able to solve a crystal structure of a DNA/protein 
 complex at 4A resolution. After almost the final cycles of model building and 
 refinement (with R/Rfree of ~ 22/27) we could see some small water like 
 densities...all throughout the complex. Now my query is, whether one should 
 pick water molecules at this low resolutions or it is totally unscientific to 
 do so? 
 
 Many thanks in advance...!!!
 
 My best regards,
 Sudipta.   
 
 
 
 -- 
 Professor John R Helliwell DSc


Re: [ccp4bb] Picking water molecules at 4A structure.

2015-04-18 Thread Jrh
Good morning Pavel,
That's interesting.
In our study 'ghosts' of waters in our truncated maps did not occur.
Waters and hydrogens behave differently as ghost objects presumably?
Greetings,
John



On 17 Apr 2015, at 20:10, Pavel Afonine pafon...@gmail.com wrote:

 Hello,
 
 John, the lower-resolution datasets in your paper were generated by 
 truncating a high-res dataset, i.e. the lo-res datasets are of great 
 quality. Would the conclusions still be valid if the data are true low-res? 
 (i.e. I/sigI 1.5-2 in last shell)?
 
 genuinely low-res data set is clearly not the same as one obtained by 
 truncation of high-res reflections. Some time ago I did a test where I 
 truncated an ultra-high resolution data set (0.6A resolution) at 2A, and I 
 could still see H atoms in 2A resolution map!
 
 Pavel


[ccp4bb] Accuracy and precision of unit cell parameters

2014-07-23 Thread Jrh
Dear Colleagues,
Stimulated by Bernhard's posting, and whilst certainly beyond his sense of 
bewilderment, I recalled that there were studies on this issue. These are 
neatly summarised in the Topical review :- 
http://dx.doi.org/10.1107/S01087681269X
By Frank Herbstein in Acta Cryst B 
I imagine colleagues will find this of interest.
Best wishes,
John 

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

Re: [ccp4bb] Confusion about space group nomenclature

2014-05-02 Thread Jrh Gmail
Dear George
My student class would not find that IUCr dictionary definition helpful. What 
they do find helpful is to state that they cannot contain an inversion or a 
mirror. 
To honour Sohnke is one thing but is it really necessary as a label? You're 
from Huddersfield I am from Wakefield ie let's call a spade a spade (not a 
'Black and Decker'). 
Cheers
John

Prof John R Helliwell DSc

 On 2 May 2014, at 17:01, George Sheldrick gshe...@shelx.uni-ac.gwdg.de 
 wrote:
 
 In my program documentation I usually call these 65 the Sohnke space groups, 
 as defined by the IUCr: 
 http://reference.iucr.org/dictionary/Sohnke_groups  
 
 George
 
 
 On 05/02/2014 02:35 PM, Jim Pflugrath wrote:
 After all this discussion, I think that Bernhard can now lay the claim that 
 these 65 space groups should really just be labelled the Rupp space 
 groups.  At least it is one word. 
 
 Jim
 
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Bernhard Rupp 
 [hofkristall...@gmail.com]
 Sent: Friday, May 02, 2014 3:04 AM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Confusion about space group nomenclature
 ….
  
 Enough of this thread.
  
 Over and out, BR
 
 
 -- 
 Prof. George M. Sheldrick FRS
 Dept. Structural Chemistry, 
 University of Goettingen,
 Tammannstr. 4,
 D37077 Goettingen, Germany
 Tel. +49-551-39-33021 or -33068
 Fax. +49-551-39-22582
 


Re: [ccp4bb] Confusion about space group nomenclature

2014-05-02 Thread Jrh Gmail
Dear Gerard
I am duly reprimanded .
You are quite correct .
Have a good weekend
John

Prof John R Helliwell DSc

 On 2 May 2014, at 18:16, Gerard Bricogne g...@globalphasing.com wrote:
 
 Dear John,
 
 What is wrong with honouring Sohnke by using his name for something
 that he first saw a point in defining, and in investigating the properties
 resulting from that definition? Why insist that we should instead replace
 his name by an adjective or a circumlocution? What would we say if someone
 outside our field asked us not to talk about a Bragg reflection, or the
 Ewald sphere, or the Laue method, but to use instead some clever adjective
 or a noun-phrase as long as the name of a Welsh village to explain what
 these mean? 
 
 Again, I think we should have a bit more respect here. When there are
 simple adjectives to describe a mathematical properties, the mathematical
 vocabulary uses it (like a normal subgroup). However, when someone has
 seen that a definition by a conjunction of properties (i.e. something
 describable by a sentence) turns out to characterise objects that have much
 more interesting properties than just those by which they were defined, then
 they are often called by the name of the mathematician who first saw that
 there is more to them than what defines them. Examples: Coxeter groups, or
 Lie algebras, or the Leech lattice, or the Galois group of a field, the
 Cayley tree of a group ... . It is the name of the first witness to a
 mathematical phenomenon, just as we call chemical reactions by the name of
 the chemist who saw that mixing certain chemicals together led not just to a
 mixture of those chemicals.
 
 So why don't we give Sohnke what belongs to him, just as we expect
 other scientists to give to Laue, Bragg and Ewald what we think belongs to
 them? Maybe students would not be as refractory to the idea as might first
 be thought.
 
 
 With best wishes,
 
  Gerard.
 
 --
 On Fri, May 02, 2014 at 05:42:34PM +0100, Jrh Gmail wrote:
 Dear George
 My student class would not find that IUCr dictionary definition helpful. 
 What they do find helpful is to state that they cannot contain an inversion 
 or a mirror. 
 To honour Sohnke is one thing but is it really necessary as a label? You're 
 from Huddersfield I am from Wakefield ie let's call a spade a spade (not a 
 'Black and Decker'). 
 Cheers
 John
 
 Prof John R Helliwell DSc
 
 On 2 May 2014, at 17:01, George Sheldrick gshe...@shelx.uni-ac.gwdg.de 
 wrote:
 
 In my program documentation I usually call these 65 the Sohnke space 
 groups, as defined by the IUCr: 
 http://reference.iucr.org/dictionary/Sohnke_groups  
 
 George
 
 
 On 05/02/2014 02:35 PM, Jim Pflugrath wrote:
 After all this discussion, I think that Bernhard can now lay the claim 
 that these 65 space groups should really just be labelled the Rupp space 
 groups.  At least it is one word. 
 
 Jim
 
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Bernhard 
 Rupp [hofkristall...@gmail.com]
 Sent: Friday, May 02, 2014 3:04 AM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Confusion about space group nomenclature
 ….
 
 Enough of this thread.
 
 Over and out, BR
 
 
 -- 
 Prof. George M. Sheldrick FRS
 Dept. Structural Chemistry, 
 University of Goettingen,
 Tammannstr. 4,
 D37077 Goettingen, Germany
 Tel. +49-551-39-33021 or -33068
 Fax. +49-551-39-22582


Re: [ccp4bb] metals disapear

2014-05-01 Thread Jrh
Dear Dean,
I appreciate you might not be able to reveal further details but 'disappearing 
during data collection' sounds interesting as does 'metals' plural (are 
they expected to be close together?). 
Best wishes,
John

Prof John R Helliwell DSc
 
 

On 30 Apr 2014, at 11:33, Dean Derbyshire dean.derbysh...@medivir.com wrote:

 Hi all,
 Has anyone experienced catalytic metal ions disappearing during data 
 collection ?
 If so, is there a way of preventing it?
 D.
  
Dean Derbyshire
Senior Research Scientist
 image001.jpg
Box 1086
SE-141 22 Huddinge
SWEDEN
Visit: Lunastigen 7
Direct: +46 8 54683219
www.medivir.com
  
 --
 This transmission is intended for the person to whom or the entity to which 
 it is addressed and may contain information that is privileged, confidential 
 and exempt from disclosure under applicable law. If you are not the intended 
 recipient, please be notified that any dissemination, distribution or copying 
 is strictly prohibited. If you have received this transmission in error, 
 please notify us immediately.
 Thank you for your cooperation.


Re: [ccp4bb] metals disapear

2014-04-30 Thread Jrh Gmail
Dear Dean
An example, albeit not a metal, can be found here:-
http://journals.iucr.org/s/issues/2007/01/00/xh5011/xh5011.pdf

Such specific damage has a long history:-
http://www.sciencedirect.com/science/article/pii/0022024888903223

An X-ray sensitive metals centre is the Mn5Ca OEC of PS II and details can 
found here :-
http://m.pnas.org/content/102/34/12047.long

But metals in proteins can also be robust to X-rays and eg verified by parallel 
non damaging neutron crystallographic studies such as on MnCa concanabalin A .

Best wishes
John

Prof John R Helliwell DSc

 On 30 Apr 2014, at 11:33, Dean Derbyshire dean.derbysh...@medivir.com wrote:
 
 Hi all,
 Has anyone experienced catalytic metal ions disappearing during data 
 collection ?
 If so, is there a way of preventing it?
 D.
  
Dean Derbyshire
Senior Research Scientist
 image001.jpg
Box 1086
SE-141 22 Huddinge
SWEDEN
Visit: Lunastigen 7
Direct: +46 8 54683219
www.medivir.com
  
 --
 This transmission is intended for the person to whom or the entity to which 
 it is addressed and may contain information that is privileged, confidential 
 and exempt from disclosure under applicable law. If you are not the intended 
 recipient, please be notified that any dissemination, distribution or copying 
 is strictly prohibited. If you have received this transmission in error, 
 please notify us immediately.
 Thank you for your cooperation.


Re: [ccp4bb] metal ion coordination

2014-04-18 Thread Jrh
Dear Faisal,
When scrutinising such distances do be aware of the possibility of false 
precision in the estimates;  see eg http://dx.doi.org/10.1107/S2052252513031485
Best wishes,
John


Prof John R Helliwell DSc
 
 

On 17 Apr 2014, at 21:13, Faisal Tarique faisaltari...@gmail.com wrote:

 Dear all
 
 Can anybody please explain what is the classical metal ion coordination for 
 Mg2+, Ca+ and Na+ with Oxygen atom and the average distance with these metal 
 ions..does the distance vary with the type of metal ion and its coordination 
 with oxygen atom..what is the best way to identify the correct metal ion in 
 the electron density in the vicinity of negatively charged molecule mostly 
 oxygen containing molecule..In one of my paper the reviewer has asked me to 
 check whether the octahedrally coordinated Mg+ is  Ca+ ion..and similarly 
 raised doubt about the identity of the Na+ ion as well..his argument was 
 based on metal ion to oxygen distance..I am attaching the figure with this 
 mail..i request you to please shed some light on this area and help me in 
 clearing some doubts regarding this. 
 
 -- 
 Regards
 
 Faisal
 School of Life Sciences
 JNU
 
 Fig3.tif


[ccp4bb] IUCr Diffraction Data Deposition Working Group

2014-04-05 Thread Jrh Gmail

Dear Colleagues,
We wish to let you know that the Triennial report for 2011 to 2014 on 
diffraction data deposition matters, prepared by the IUCr Diffraction Data 
Deposition Working Group (DDD WG) is now available at:-
http://forums.iucr.org/viewtopic.php?f=21t=343
Best wishes,

John, Brian and Tom 

John Helliwell Chair of the WG;
Brian McMahon Co-Chair of the WG;
Tom Terwilliger Member representing the IUCr Commission on Biological 
Macromolecules


Re: [ccp4bb] twinning problem ?

2014-03-13 Thread Jrh
Dear Jacob,
Measurement of the reciprocal space maps at reflections with triple axis 
diffractometry allows experimental separation of mosaicity and strain 
(variation in unit cell parameter) effects. See eg Boggon et al 2000 Acta Cryst 
D56, 868-880 http://dx.doi.org/10.1107/S090744495837 for such studies on 
protein crystals at NSLS. 

In terms of diffuse scattering the above effects do get mixed in with molecular 
disorders correlated over many unit cells, and thus a 'diffuse scattering 
correction to measured Bragg intensities' is done in the most accurate work.But 
the above effects are separate from molecular disorders over a few unit cells 
ie which cause the diffraction streaks between Bragg peaks. 

Then there are the long range and short range temporal vibrations, optic and 
acoustic modes, in the crystal 

A workshop held at ALS on diffuse scattering recently suggests a systematic 
effort is on hand to analyse diffuse X-ray scattering information in MX data 
sets  for improved descriptions of macromolecular structure and dynamics. 
Archiving of raw diffraction data images would also assist such important 
objectives. 

Best wishes,
John

Prof John R Helliwell DSc 
 
 

On 12 Mar 2014, at 21:15, Keller, Jacob kell...@janelia.hhmi.org wrote:

 For any sample, crystalline or not, a generally valid description of 
 diffraction intensity is it being a Fourier transform of electron density 
 autocorrelation function.
 
 I thought for non-crystalline samples diffraction intensity is simply the 
 Fourier transform of the electron density, not its autocorrelation function. 
 Is that wrong?
 
 
 
 Anyway, regarding spot streaking, perhaps there is a different, simpler 
 formulation for how they arise, based on the two phenomena:
 
 (1) Crystal lattice convoluted with periodic contents, e.g., protein 
 structure in exactly the same orientation
 (2) Crystal lattice convoluted with aperiodic contents, e.g. n different 
 conformations of a protein loop, randomly sprinkled in the lattice.
 
 Option (1) makes normal spots. If there is a lot of scattering material doing 
 (2), then streaks arise due to many super-cells occurring, each with an 
 integral number of unit cells, and following a Poisson distribution with 
 regard to frequency according to the number of distinct conformations. 
 Anyway, I thought of this because it might be related to scattering from 
 aperiodic crystals, in which there is no concept of unit cell as far as I 
 know (just frequent distances), which makes them really interesting for 
 thinking about diffraction.
 
 See the images here of an aperiodic lattice and its Fourier transform, if 
 interested:
 
 http://postimg.org/gallery/1fowdm00/
 
 Mosaicity is a very different phenomenon. It describes a range of angular 
 alignments of microcrystals with the same unit cell within the sample. It 
 broadens diffraction peaks by the same angle irrespective of the data 
 resolution, but it cannot change the length of diffraction vector for each 
 Bragg reflection. For this reason, the elongation of the spot on the 
 detector resulting from mosaicity will be always perpendicular to the 
 diffraction vector. This is distinct from the statistical disorder, where 
 spot elongation will be aligned with the crystal lattice and not the 
 detector plane.
 
 I have been convinced by some elegant, carefully-thought-out papers that this 
 microcrystal conception of the data-processing constant mosaicity is 
 basically wrong, and that the primary factor responsible for observed 
 mosaicity is discrepancies in unit cell constants, and not the microcrystal 
 picture. I think maybe you are referring here to theoretical mosaicity and 
 not the fitting parameter, so I am not contradicting you. I have seen 
 recently an EM study of protein microcrystals which seems to show actual 
 tilted mosaic domains just as you describe, and can find the reference if 
 desired.
 
 Presence of multiple, similar unit cells in the sample is completely 
 different and unrelated condition to statistical disorder.
 
 Agreed!
 
 Jacob


Re: [ccp4bb] twinning problem ?

2014-03-12 Thread Jrh Gmail
Dear Jacob
For a review of this topic see
http://www.tandfonline.com/doi/full/10.1080/08893110310001643551#.UyCVLikgGc0


I also refer you to the more recent OUP IUCr book Chayen, Helliwell and Snell 
ie which includes these topics:-
 
http://global.oup.com/academic/product/macromolecular-crystallization-and-crystal-perfection-9780199213252;jsessionid=5564F908743CCE57BAD506586B47B6CC?cc=gblang=en;

I declare a 'perceived conflict of interest' in making this book suggestion to 
you.

Best wishes
John

Prof John R Helliwell DSc

 On 12 Mar 2014, at 16:59, Keller, Jacob kell...@janelia.hhmi.org wrote:
 
 Not sure I understand why having statistical disorder makes for streaks--does 
 the crystal then have a whole range of unit cell constants, with the spot at 
 the most prevalent value, and the streaks are the tails of the 
 distribution? If so, doesn't having the streak imply a really wide range of 
 constants? And how would this be different from mosaicity? My guess is that 
 this is not the right picture, and this is indeed roughly what mosaicity is.
 
 Alternatively, perhaps the streaks are interpreted as the result of a duality 
 between the unit cell, which yields spots, and a super cell which is so 
 large that it yields extremely close spots which are indistinguishable from 
 lines/streaks. Usually this potential super cell is squelched by destructive 
 interference due to each component unit cell being very nearly identical, but 
 here the destructive interference doesn't happen because each component unit 
 cell differs quite a bit from its fellows.
 
 And I guess in the latter case the supercell would have its cell constant 
 (in the direction of the streaks) equal to (or a function of) the coherence 
 length of the incident radiation?
 
 I know some attempts have been (successfully) made to use diffuse scattering, 
 but has anyone used the streak intensities to determine interesting features 
 of the crystallized protein?
 
 JPK
 
 
 
 -Original Message-
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Andrew 
 Leslie
 Sent: Wednesday, March 12, 2014 12:25 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] twinning problem ?
 
 Dear Stephen,
 
   I have seen a similar effect in the structure of 
 F1-ATPase complexed with the full length inhibitor protein. The inhibitor is 
 a dimer, and it actually couples 2 copies of the ATPase, but it crystallised 
 with only one copy of the ATPase per asymmetric unit. When I solved the 
 structure by MR, I saw additional density that could not be accounted for. 
 The extra density was, in fact, a second ATPase molecule that was related to 
 the first by a 120 degree rotation about the pseudo 3-fold axis of the 
 enzyme. The dimers were packing with statistical disorder in the crystal 
 lattice. This gave rise to clear streaking between Bragg spots in the 
 diffraction images in a direction that was consistent with that expected from 
 the statistical packing of the inhibitor linked dimers.
 
 Two copies of F1 were included in the refinement, each with occupancy 0.5. 
 the final Rfree was 27.7% (2.8A data). Prior to introduction of the second 
 copy of F1, the Rfree was 37%.
 
 More details are in Cabezon et al., NSMB 10, 744-750, 2003
 
 Best wishes,
 
 Andrew
 
 
 
 On 11 Mar 2014, at 14:04, Stephen Cusack cus...@embl.fr wrote:
 
 Dear All,
  I have 2.6 A data and unambiguous molecular replacement solution 
 for two copies/asymmetric unit of a 80 K protein for a crystal integrated in 
 P212121 (R-merge around 9%) with a=101.8, b=132.2, c=138.9.
 Refinement allowed rebuilding/completion of the model in the noraml 
 way but the R-free does not go below 30%. The map in the model regions looks 
 generally fine but  there is a lot of extra positive density in the solvent 
 regions (some of it looking like weak density for helices and strands)  and 
 unexpected positive peaks within the model region.
 Careful inspection allowed manual positioning of a completely different, 
 overlapping solution for the dimer which fits the extra density perfectly.
 The two incompatible solutions are related by a 2-fold axis parallel to a.
 This clearly suggests some kind of twinning. However twinning analysis 
 programmes (e.g. Phenix-Xtriage), while suggesting the potentiality of 
 pseudo-merohedral twinning (-h, l, k) do not reveal any significant 
 twinning fraction and proclaim the data likely to be untwinned. (NB. 
 The programmes do however highlight a non-crystallographic translation and 
 there are systematic intensity differences in the data). Refinement, 
 including this twinning law made no difference since the estimated twinning 
 fraction was 0.02. Yet the extra density is clearly there and I know exactly 
 the real-space transformation between the two packing solutions.
 How can I best take into account this alternative solution (occupancy seems 
 to be around 20-30%) in the refinement ?
 thanks for your suggestions
 

Re: [ccp4bb] Validity of Ion Sites in PDB

2014-03-07 Thread Jrh
Dear Jacob,
An example where special efforts were made to investigate biocatalysis and the 
role of ions, harnessing both softer X-rays and ion substitutions and a WASP 
analysis to be sure as possible of their identity, can be found here:-
http://www.ncbi.nlm.nih.gov/pubmed/20099851

The review article on metal atoms in proteins by M M Harding Crystallography 
Reviews 2010 http://www.tandfonline.com/doi/full/10.1080/0889311X.2010.485616
which I see just now has been downloaded 722 times thus far, should also prove 
instructive.

I commend to you also the wiki:-
http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/Properties_of_proteins

You indicate that oxygen anomalous scattering could be used; whilst this is 
applicable to chirality  determination in small molecule organic 
crystallography the oxygen anomalous signal is very small and to my knowledge 
not used thus far in protein crystallography. 

Best wishes,
John

Prof John R Helliwell DSc 
 
 

On 6 Mar 2014, at 19:45, Keller, Jacob kell...@janelia.hhmi.org wrote:

 Dear Crystallographers,
 
 I was curious whether there has been a rigorous evaluation of ion binding 
 sites in the structures in the pdb, by PDB-REDO or otherwise. I imagine that 
 there is a considerably broad spectrum of habits and rigor in assigning 
 solute blobs to ion X or water, and in fact it would be difficult in many 
 cases to determine which ion a given blob really is, but there should be at 
 least some fraction of ions/waters which can be shown from the x-ray data and 
 known geometry to be X and not Y. This could be by small anomalous signals 
 (Cl and H2O for example), geometric considerations, or something else. Maybe 
 this does not even matter in most cases, but it might be important in 
 others...
 
 All the best,
 
 Jacob Keller
 
 
 ***
 Jacob Pearson Keller, PhD
 Looger Lab/HHMI Janelia Farms Research Campus
 19700 Helix Dr, Ashburn, VA 20147
 email: kell...@janelia.hhmi.org
 ***


Re: [ccp4bb] Validity of Ion Sites in PDB

2014-03-07 Thread Jrh
Dear Jacob,
Ah yes, I see.
Your wording is perfectly clear.
Sorry for my misunderstanding.
Best wishes,
John


Prof John R Helliwell DSc 
 
 

On 7 Mar 2014, at 15:18, Keller, Jacob kell...@janelia.hhmi.org wrote:

 You indicate that oxygen anomalous scattering could be used; whilst this is 
 applicable to chirality  determination in small molecule organic 
 crystallography the oxygen anomalous signal is very small and to my 
 knowledge not used thus far in protein crystallography. 
 
 Perhaps I should have been clearer--I meant that anomalous scattering could 
 be used to distinguish between Cl- and H20, since Cl- does have a small but 
 measurable anomalous signal at the usual wavelengths, whereas water does not, 
 as you point out. Parenthetically, I have found that, in line with Randy 
 Read's suggestion to me, the LLG maps in Phaser are dramatically better than 
 regular adf's for finding such small signals.
 
 JPK
 
 ==
 
 I was curious whether there has been a rigorous evaluation of ion binding 
 sites in the structures in the pdb, by PDB-REDO or otherwise. I imagine that 
 there is a considerably broad spectrum of habits and rigor in assigning 
 solute blobs to ion X or water, and in fact it would be difficult in many 
 cases to determine which ion a given blob really is, but there should be at 
 least some fraction of ions/waters which can be shown from the x-ray data and 
 known geometry to be X and not Y. This could be by small anomalous signals 
 (Cl and H2O for example), geometric considerations, or something else. Maybe 
 this does not even matter in most cases, but it might be important in 
 others...


Re: [ccp4bb] Large Conformational Change Upon Binding Ligand...

2014-03-02 Thread Jrh
Dear Jacob,
6 phosphogluconate dehydrogenase :-
See Chen et al 2010 J Struct Biol 169,25-35
and for which enzyme there has been a longstanding interest in the protein 
dynamics and studies linking with diffuse X-ray scattering: see Biochem Soc 
Trans (1986) 14, 653-655. 
Best wishes,
John


Prof John R Helliwell DSc 
 
 

On 27 Feb 2014, at 19:43, Keller, Jacob kell...@janelia.hhmi.org wrote:

 Dear Crystallographers,
 
 Does anyone know of good examples of large, reversible conformational changes 
 occurring between ligand-free and -bound states? Could also be a non-relevant 
 molecule binding, like sulfate or something inducing dubiously -relevant 
 changes. I already know of the calmodulin and periplasmic binding protein 
 families, but does anyone know of others out there?
 
 All the best,
 
 Jacob Keller
 
 ***
 Jacob Pearson Keller, PhD
 Looger Lab/HHMI Janelia Farms Research Campus
 19700 Helix Dr, Ashburn, VA 20147
 email: kell...@janelia.hhmi.org
 ***


Re: [ccp4bb] twinning fun

2014-01-29 Thread Jrh
Dear Bert,
In my own review:-
http://www.tandfonline.com/doi/abs/10.1080/08893110802360925?journalCode=gcry20#.UulGyGtYCSM
molecular replacement emerged in my mind as the most robust option for 
structure determination in such a case, apart from finding an untwinned crystal 
form of course.
Best wishes,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 28 Jan 2014, at 17:26, Bert Van-Den-Berg bert.van-den-b...@newcastle.ac.uk 
wrote:

 Dear all,
 
 I recently collected several datasets for a protein that needs experimental 
 phasing.
 The crystals are hexagonal plates, and (automatic) data processing suggests 
 with high confidence that the space group is P622. This is where the fun 
 begins.
 For some datasets (processed in P622), the intensity distributions are 
 normal, and the L-test (aimless, xtriage) and Z-scores (xtriage) suggest that 
 there is no twinning (twinning fractions  0.05). However, for other datasets 
 (same cell dimensions), the intensity distributions are not normal (eg 
 Z-scores  10). Given that twinning is not possible in P622, this suggests to 
 me that the real space group could be P6 with (near) perfect twinning.
 
 If I now process the normal L-test P622 datasets in P6, the twin-law based 
 tests (britton and H-test in xtriage) give high twin fractions (0.45- 0.5), 
 suggesting all my data is twinned.
 Does this make sense (ie can one have twinning with normal intensity 
 distributions)? 
 If it does, would the normal L-test datasets have a higher probability of 
 being solvable?
 
 Is there any strategy for experimental phasing of (near) perfect twins? SAD 
 would be more suitable than SIR/MIR? (I also have potential heavy atom 
 derivatives).
 
 Thanks for any insights!
 
 Bert


Re: [ccp4bb] monovalent cation binding sites

2013-11-09 Thread Jrh
Dear Ed,
I imagine this reference:-
http://dx.doi.org/10.1021/ja908703c
Will be of interest.
Best wishes,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 9 Nov 2013, at 02:09, Edward A. Berry ber...@upstate.edu wrote:

 Is there a server or program to predict binding sites for monovalent metal 
 ions?
 Ideally should work with just the protein structure, but a program that sorts 
 through
 the waters in a high resolution structure and tells which are likely to be K+ 
 or Na+
 would also be of interest.
 
 Ed


Re: [ccp4bb] monovalent cation binding sites

2013-11-09 Thread Jrh
Dear Alastair,
This reference:-
http://dx.doi.org/10.1107/S0907444903004219
seems related to your email input below.
I would be grateful though to be guided with a couple of references from you on 
the topics you raise.
Thankyou in anticipation,
Best wishes,
John 

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 9 Nov 2013, at 20:30, Alastair Fyfe af...@ucsc.edu wrote:

 A related question on this topic: calculated density curves  drop off at 
 different rates even for isoelectronic ions/water. Thus the neighborhood of a 
 mismodeled peak in the error map would be expected to show detectable, 
 non-random, spatial dependence. On the other hand the neighborhood of a 
 well-modeled peak  should be indistinguishable from white noise. Though there 
 are statistics (Moran's I, Geary's C ) for testing spatial effects  in 
 variable correlation that could be applied to  DFc/ 2mFo-DFc correlation,  I 
 haven't seen them applied to this problem. Can anyone suggest a relevant 
 reference? This seems a useful adjunct to bond-valence/non-bonding contact 
 methods.
 thanks,
 Alastair  Fyfe
 
 On 11/09/2013 03:24 AM, Robbie Joosten wrote:
 Hi Ed,
 
 WHAT_CHECK checks water that may be ions and also checks the identity of 
 ions already built. The check my metal server is also very good for final 
 validation of ions.
 
 Cheers,
 Robbie
 
 Sent from my Windows Phone
 
 Van: Edward A. Berry
 Verzonden: 9-11-2013 7:29
 Aan: CCP4BB@JISCMAIL.AC.UK
 Onderwerp: Re: [ccp4bb] monovalent cation binding sites
 
 Thanks, all!
 Ed
 
 
 Nat Echols wrote:
 In the latest Phenix:
 
 mmtbx.water_screen model.pdb data.mtz elements=NA,K
 
 The data are required right now, but I could be convinced to make that 
 optional.
 
 -Nat
 Diana Tomchick wrote:
 There's a command in coot that identifies waters that have an unusually 
 high coordination number. You then need to manually inspect the electron 
 density map and bond lengths, atom type, etc.
 
 Diana
 Shekhar Mande wrote:
 Ed, I dont about monovalent metals (they are typically liganded by 
 hydroxyls of Ser/Thr,
 or the main chain carbonyls).  But we did an analysis of divalent metals in 
 proteins, and
 found several instances, where crystallographers might have mistaken a 
 metal to be a
 water.  Thus, in the PDB, what is reported to be a water, might actually 
 turn out to be a
 metal!  Some of our sites have used to predict functions of proteins, where 
 enzyme assays
 required addition of metals, and hence, I am gratified that it is useful!  
 I am enclosing
 a PDF with this.
 
 We also have a server.
 
 Shekhar
 Dunten, Pete W. wrote:
 Ed,   O had a command
 that scrutinized waters
 and helped find metals
 modeled as water.
 
 Victor Lamzin has a program
 whose name I'be momentarily
 forgotten which gives plots of
 e-density at atomic centers
 versus B-factor,  for each atom
 type.  Points off the lines are
 candidates for incorrectly
 modeled metals.
 
 Pete
 Dunten, Pete W. wrote:
 See attached and the reference noted therein.
 
 Best wishes, Pete
 
 
 Parthasarathy Sampathkumar wrote:
 Hi Ed,
 
 WASP analyse water molecules in high-resolution protein structure to check 
 if some of
 those could be metal ions. WASP could be run as a part of STAN server.
 STAN - the STructure ANalysis server from USF (
 http://xray.bmc.uu.se/cgi-bin/gerard/rama_server.pl )
 
 One could also identity potential metal ions within COOT as well.
 
 HTH,
 Best Wishes,
 Partha
 
 
 
 
 On Fri, Nov 8, 2013 at 9:09 PM, Edward A. Berry ber...@upstate.edu
 mailto:ber...@upstate.edu wrote:
 
 Is there a server or program to predict binding sites for monovalent 
 metal ions?
 Ideally should work with just the protein structure, but a program that 
 sorts through
 the waters in a high resolution structure and tells which are likely to 
 be K+ or Na+
 would also be of interest.
 
 Ed
 
 


Re: [ccp4bb] largest protein crystal ever grown?

2013-10-24 Thread Jrh
Dear Tobias,
Take a look at http://dx.doi.org/10.1107/S0108767389012912
The ribonuclease crystal I used to measure the speed of sound, using laser 
generated ultrasound, was of volume 129 mm3 ie 7.7x6.2x2.7 mm . David Moss of 
Birkbeck College provided it. 
Best wishes,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 24 Oct 2013, at 16:33, Tobias Beck tobiasb...@gmail.com wrote:

 Dear all,
 
 I was just wondering if anyone has some information or references about the 
 dimensions of the largest protein crystal ever grown? I am aware that for 
 neutron protein crystallography one usually needs crystals with mm 
 dimensions. I have found some information on crystallization under 
 micro-gravity and how this can enlarge the crystal size. However, I would 
 rather be interested in the dimensions for crystals obtained from a regular 
 lab setup.
 
 Thanks, Tobias. 
 
 -- 
 ___
 
 Dr. Tobias Beck
 ETH Zurich
 Laboratory of Organic Chemistry
 Wolfgang-Pauli-Str. 10, HCI F 322
 8093 Zurich, Switzerland
 phone:   +41 44 632 68 65
 fax:+41 44 632 14 86
 web:  http://www.protein.ethz.ch/people/tobias
 ___


Re: [ccp4bb] largest protein crystal ever grown?

2013-10-24 Thread Jrh
Dear Tobias,
There is also this one :- http://dx.doi.org/10.1107/S0021889801007245
In this study we had to use a smaller crystal than the largest ones available 
of 125mm3. They were a lovely rhombic dodecahedral crystal habit. Nb we only 
published details of the size of the one used. These crystals were grown by 
Joseph Gilboa at The Weizmann Institute, who we miss dearly. See the 
appreciation of Joseph by Felix Frolow and myself  at 
http://dx.doi.org/10.1107/S0021889810006941
Yours sincerely,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 24 Oct 2013, at 16:33, Tobias Beck tobiasb...@gmail.com wrote:

 Dear all,
 
 I was just wondering if anyone has some information or references about the 
 dimensions of the largest protein crystal ever grown? I am aware that for 
 neutron protein crystallography one usually needs crystals with mm 
 dimensions. I have found some information on crystallization under 
 micro-gravity and how this can enlarge the crystal size. However, I would 
 rather be interested in the dimensions for crystals obtained from a regular 
 lab setup.
 
 Thanks, Tobias. 
 
 -- 
 ___
 
 Dr. Tobias Beck
 ETH Zurich
 Laboratory of Organic Chemistry
 Wolfgang-Pauli-Str. 10, HCI F 322
 8093 Zurich, Switzerland
 phone:   +41 44 632 68 65
 fax:+41 44 632 14 86
 web:  http://www.protein.ethz.ch/people/tobias
 ___


Re: [ccp4bb] Why nobody comments about the Nobel committee decision?

2013-10-11 Thread Jrh
Dear Axel,
Quite so, absolutely. 
Theoretical physics and theoretical chemistry sweeped the awards at the Nobels 
this week. 
As I remarked on that other medium yesterday (twitter):- I confess to 
preferring a joint theory and experiment approach, but the Nobel Committee 
didn't ie I think 'that boson' is only real because of the two experiments at 
LHC, but no formal recognition for CERN. So, to paraphrase and expand your 
excellent reprimand of my posting:-
I surely hope that the recent Nobel Prizes will encourage young (and young at 
heart) into the fields of theory, computing and experiment across all our 
sciences.
Greetings,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 11 Oct 2013, at 00:34, Axel Brunger brun...@stanford.edu wrote:

 Dear John,
 
 I surely hope that the recent Nobel Prize will encourage young people
 to get into into the fields of computational biology and chemistry.  
 
 Moreover, X-ray sources are undergoing new exciting developments 
 (e.g., XFELs) that require new computational approaches, as does 
 cryo-EM.
 
 Cheers,
 Axel
 
 On Oct 10, 2013, at 11:05 AM, Jrh jrhelliw...@gmail.com wrote:
 
 Dear Sacha, Dear Colleagues,
 I also offer my congratulations to the Chemistry Nobellists of yesterday. A 
 very exciting and significant event, which I enjoyed. I recall when my PhD 
 student, Gail Bradbrook, spoke about our harnessing these exciting methods 
 in our crystallographic and structural chemistry concanavalin A saccharide 
 studies, to crystallographers, there was a wide spread of reactions. Ie from 
 scepticism to shared excitement. As an example of Gail's work see eg 
 http://pubs.rsc.org/en/content/articlelanding/1998/ft/a800429c/unauth#!divAbstract
 It is sometimes said that a Nobel Prize kills a field. I think we can say 
 instead that it is mature. But, to couple with the discussion on  peer 
 review; there are weaknesses in conventional ie the usual peer review; it 
 does not cope well with 'risk and adventure' results. post publication peer 
 review is an interesting solution, which in my view should  be tried. This 
 bulletin board itself in fact is a great initiative, institution actually, 
 which helps develops community views of results and trends. 
 Just my two pennies worth,
 Greetings,
 John
 
 Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
 Chair School of Chemistry, University of Manchester, Athena Swan Team.
 http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 
 
 On 10 Oct 2013, at 09:26, Alexandre OURJOUMTSEV sa...@igbmc.fr wrote:
 
 Hello to everybody,
 
 Alex, it was a great idea to initiate the conversation sending 
 congratulations to our colleagues !
 Bob, it was another great idea, when congratulating the Winners, to remind 
 us of the framework.
 
 As one of my colleagues pointed out, we shall also give a lot of credits to 
 Shneior Lifson who was in the very origins of these works, ideas and 
 programs (see the paper by M.Levitt The birth of computational structural 
 biology, Nature Structural  Molecuar Biology, 8, 392-393 (2001);  
 http://www.nature.com/nsmb/journal/v8/n5/full/nsb0501_392.html ). 
 
 Older crystallographers may remember a fundamental paper by Levitt  Lifson 
 (1969).
 
 With best wishes,
 
 Sacha Urzhumtsev
 
 
 -Message d'origine-
 De : CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] De la part de 
 Sweet, Robert
 Envoyé : mercredi 9 octobre 2013 23:52
 À : CCP4BB@JISCMAIL.AC.UK
 Objet : Re: [ccp4bb] השב: [ccp4bb] Why nobody comments about the Nobel 
 committee decision?
 
 It deserves comment!!  I've been too busy talking with my friends about it 
 to think of CCP4.
 
 This morning on NPR I heard Karplus's name and started to whoop and holler, 
 and by the time they got to Arieh I realized they had a Hat Trick!!  It's a 
 spectacular thing that this field should get recognition!
 
 An interesting feature to me is that, at least when I was following the 
 field, these three use physics to do their work, modeling with carefully 
 estimated spring constants, etc., and eventually QM results. Those who use 
 phenomenology -- hydrophobic volumes, who likes to lie next to whom, etc. 
 -- are extremely effective (you know who they are), and they deserve 
 credit.  But they (we, some years ago) stand on the shoulders of the 
 achievements of these three.
 
 It's good to remember the late, great, Tony Jack, cut down before reaching 
 his prime. 
 
 Bob
 
 
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Nat Echols 
 [nathaniel.ech...@gmail.com]
 Sent: Wednesday, October 09, 2013 5:31 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] השב: [ccp4bb] Why nobody comments about the Nobel 
 committee decision?
 
 Levitt also contributed to DEN refinement (Schroder et al. 2007, 2010

Re: [ccp4bb] Why nobody comments about the Nobel committee decision?

2013-10-10 Thread Jrh
Dear Sacha, Dear Colleagues,
I also offer my congratulations to the Chemistry Nobellists of yesterday. A 
very exciting and significant event, which I enjoyed. I recall when my PhD 
student, Gail Bradbrook, spoke about our harnessing these exciting methods in 
our crystallographic and structural chemistry concanavalin A saccharide 
studies, to crystallographers, there was a wide spread of reactions. Ie from 
scepticism to shared excitement. As an example of Gail's work see eg 
http://pubs.rsc.org/en/content/articlelanding/1998/ft/a800429c/unauth#!divAbstract
It is sometimes said that a Nobel Prize kills a field. I think we can say 
instead that it is mature. But, to couple with the discussion on  peer review; 
there are weaknesses in conventional ie the usual peer review; it does not cope 
well with 'risk and adventure' results. post publication peer review is an 
interesting solution, which in my view should  be tried. This bulletin board 
itself in fact is a great initiative, institution actually, which helps 
develops community views of results and trends. 
Just my two pennies worth,
Greetings,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 10 Oct 2013, at 09:26, Alexandre OURJOUMTSEV sa...@igbmc.fr wrote:

 Hello to everybody,
 
 Alex, it was a great idea to initiate the conversation sending 
 congratulations to our colleagues !
 Bob, it was another great idea, when congratulating the Winners, to remind us 
 of the framework.
 
 As one of my colleagues pointed out, we shall also give a lot of credits to 
 Shneior Lifson who was in the very origins of these works, ideas and programs 
 (see the paper by M.Levitt The birth of computational structural biology, 
 Nature Structural  Molecuar Biology, 8, 392-393 (2001);  
 http://www.nature.com/nsmb/journal/v8/n5/full/nsb0501_392.html ). 
 
 Older crystallographers may remember a fundamental paper by Levitt  Lifson 
 (1969).
 
 With best wishes,
 
 Sacha Urzhumtsev
 
 
 -Message d'origine-
 De : CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] De la part de Sweet, 
 Robert
 Envoyé : mercredi 9 octobre 2013 23:52
 À : CCP4BB@JISCMAIL.AC.UK
 Objet : Re: [ccp4bb] השב: [ccp4bb] Why nobody comments about the Nobel 
 committee decision?
 
 It deserves comment!!  I've been too busy talking with my friends about it to 
 think of CCP4.
 
 This morning on NPR I heard Karplus's name and started to whoop and holler, 
 and by the time they got to Arieh I realized they had a Hat Trick!!  It's a 
 spectacular thing that this field should get recognition!
 
 An interesting feature to me is that, at least when I was following the 
 field, these three use physics to do their work, modeling with carefully 
 estimated spring constants, etc., and eventually QM results. Those who use 
 phenomenology -- hydrophobic volumes, who likes to lie next to whom, etc. -- 
 are extremely effective (you know who they are), and they deserve credit.  
 But they (we, some years ago) stand on the shoulders of the achievements of 
 these three.
 
 It's good to remember the late, great, Tony Jack, cut down before reaching 
 his prime. 
 
 Bob
 
 
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Nat Echols 
 [nathaniel.ech...@gmail.com]
 Sent: Wednesday, October 09, 2013 5:31 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] השב: [ccp4bb] Why nobody comments about the Nobel 
 committee decision?
 
 Levitt also contributed to DEN refinement (Schroder et al. 2007, 2010).
 
 -Nat
 
 
 On Wed, Oct 9, 2013 at 2:29 PM, Boaz Shaanan 
 bshaa...@bgu.ac.ilmailto:bshaa...@bgu.ac.il wrote:
 Good point. Now since you mentioned contributions of the recent Nobel 
 laureates to crystallography Mike Levitt also had a significant contribution 
 through the by now forgotten Jack-Levitt refinement which to the best of my 
 knowledge was the first time that x-ray term was added to the energy 
 minimization algorithm. I think I'm right about this. This was later adapted 
 by Axel Brunger in Xplor and other progrmas followed.
 Cheers, Boaz
 
 
 
  הודעה מקורית 
 מאת Alexander Aleshin 
 aales...@sanfordburnham.orgmailto:aales...@sanfordburnham.org
 תאריך: 10/10/2013 0:07 (GMT+02:00)
 אל CCP4BB@JISCMAIL.AC.UKmailto:CCP4BB@JISCMAIL.AC.UK
 נושא [ccp4bb] Why nobody comments about the Nobel committee decision?
 
 
 Sorry for a provocative question, but I am surprised why nobody 
 comments/congratulations laureates with regard to recently awarded Nobel 
 prizes? However, one of laureates  in chemistry contributed to a popular 
 method in computational crystallography.
 CHARMM - XPLOR - CNS - PHENIX-…
 
 Alex Aleshin
 Levitt_2001_NatureStrBiol_8_392-393.pdf


Re: [ccp4bb] examples of applied struct bio?

2013-09-17 Thread Jrh
Dear Frank,
I would only add to Randy's excellent suggestions a commendation of Max 
Perutz's book
 Is Science necessary
http://www.amazon.com/Is-Science-Necessary-Essays-Scientists/dp/0192861182
It includes global longevity and food production issues and improvements. Of 
course these are wider contexts than structural biology, but marshalled by Max 
Perutz himself.
Greetings,
John

Prof John R Helliwell DSc 
 
 

On 16 Sep 2013, at 14:23, Frank von Delft frank.vonde...@sgc.ox.ac.uk wrote:

 Hello, I need stuff for a lecture, so I figured I'd best crowd-source it from 
 the best forum on the intertubes:
 
 Anybody know some examples of where structural biology threw up insight(s) 
 that led to very significant practical improvements in some public health 
 approach or industrial process or other non-research application -- 
 ideally in the context of a developing nation/economy/society.  If they made 
 someone rich, even better.
 
 Of particular interest are examples about:
 communicable diseases:  not only the big ones (TB, HIV, malaria), but also 
 immunization, livestock, etc.
 food security:  better diet, food shelf life, crop yields, etc.
 green energy: [preferably excluding so-called biofuels, but I won't be 
 picky]
 water reclamation:  purification, sewage treatment, etc.
 Specifically NOT of interest is structure-guided medicinal chemistry.
 
 (I have some examples, but presumably there are better ones.)
 
 
 Any scraps of info welcome:  journal reference, name of researcher/group, 
 URL, news release, etc.   [Links to actual slides would be an unexpected 
 bonus.]
 
 Thanks!
 phx


Re: [ccp4bb] Fo simulators - summary

2013-09-10 Thread Jrh
I prepared this on Sunday, here it is now:-

Well, I was 'shaken but not stirred' to see a program 'fake_Fobs'. However 
James' posting on the Rfactor gap in MX is a more respectable, Sunday morning, 
topic. I tried to find the previous threads on this via google and couldn't. So 
apologies to all for the danger of a rehash.
James and I did talk about this in Madrid. 
So my two pennies worth:-
We have errors in the Fo part and in the Fc part. Given the current typical 
size of the gap the errors in the Fo part tend not to show up, that would need 
the gap to be say 10%. In general we are not there yet. There are also cases 
of very weak data sets though and where Fo errors are much larger than the 
norm. But in the majority of cases we have to focus on the errors in Fc ie the 
inadequacy of the model. Solvent is a key suspect:- bulk, ordered and semi 
ordered. But now the sense of mystery develops since the Bragg spots are from 
the ordered portion and the ordered and semi ordered solvent is a quite small 
fraction of the ordered portion. (for neutron work though the fraction of 
scattering is larger due to strong deuterium scattering effects). So what else? 
Another key suspect is the general absence of high resolution reflections, 
typically ie vs small molecule work, which means the ordered atoms are simply 
not that well placed as they could be if they were in a hard crystal ( solvent 
free). So given our two key suspects what happens when we have low solvent 
content and we have atomic resolution data? In both cases the Rfactor gap goes 
down. In an occam's razor sense, this is encouraging that we do know what is 
going on. 

Greetings,
John

Prof John R Helliwell DSc



On 7 Sep 2013, at 12:54, James Holton jmhol...@lbl.gov wrote:

 I feel like I should point out that there is about a 20% difference between 
 Fcalc and something I would call a simulated Fobs.  Fcalc is something 
 that refinement programs compute many times every second as they apply 100 
 years worth of brilliant ideas to make your model (Fcalc) match your data 
 (Fobs) as best we know how.  Despite all this, one of the great mysteries of 
 macromolecular structure determination is just how awful the final match 
 is: R/Rfree in the 20%s or high teens at best. Small molecule structures 
 don't have this problem.  In fact, they only recently started depositing 
 Fobs in to the CSD because for the most small molecule structures Fcalc 
 is more accurate than Fobs anyway.
 
 This has been hashed over on this BB a number of times, so I refer the 
 interested reader to the archives.  But there are two major considerations in 
 turning a pdb file into a simulated Fobs:
 1) the solvent
   SFALL (part of the CCP4 suite) is a convenient tool for turning coordinates 
 into maps, or structure factors, but it doesn't do bulk solvent unless you 
 trick it.  I wrote a jiffy for doing this here:
 http://bl831.als.lbl.gov/~jamesh/mlfsom/ano_sfall.com
 download the script, make it executable, and run it with no arguments to see 
 instructions for how to use it.  What is fascinating about this very crude 
 bulk solvent implementation I did is that refinement programs with much more 
 sophisticated bulk solvent implementations have a heck of a time trying to 
 match it.  If you want exactly the bulk solvent you would get from phenix, 
 use phenix.fmodel, but this will not be exactly the same as the bulk solvent 
 you get from REFMAC.  Which one is right? Probably none of them.
 
 2) The R-factor Gap
  One can try to simulate the R-factor gap (between Rmeas and Rfree) by adding 
 random numbers to Fcalc so that it becomes 20% different from Fobs, but 
 this is hardly a physically reasonable source of error.  If you do this 
 enough times for the same PDB file and then average over different crystals 
 you'll still end up with a dataset that will refine to R/Rfree ~ 0/0.
 
 This is the fundamental problem with making simulated Fobs: we actually 
 have no good way of modelling whatever is causing this R-factor Gap, and 
 therefore no good way of simulating it.  If we could simulate it, then some 
 refinement program would quickly implement a way to model the effect, and 
 give you R/Rfree of 0% again.  There are about as many ideas for the cause of 
 the R-factor Gap as there are crystallographers out there, but to this day 
 nobody has come up with a systematic error that, when accounted for in 
 refinement, gives you a small-molecule-style R/Rfree for pretty much anything 
 in the PDB.  Not even lysozyme.
 
 -James Holton
 MAD Scientist
 
 
 On 9/5/2013 9:35 AM, Alastair Fyfe wrote:
 Below are some links to tools for simulating Fobs data:
 
 phenix.fake_f_obs: 
 http://cci.lbl.gov/cctbx_sources/mmtbx/command_line/fake_f_obs.py
 phenix.fmodel: http://cci.lbl.gov/cctbx_sources/mmtbx/command_line/fmodel.py
 sftools (calc keyword):  http://www.ccp4.ac.uk/html/sftools.html
 
 diffraction image simulators from James Holton
 mlfsom: 

Re: [ccp4bb] AW: [ccp4bb] AW: [ccp4bb] Dependency of theta on n/d in Bragg's law

2013-08-23 Thread Jrh
Dear Edward,
Re your em colleagues:-
We are indeed happy to understand their diffraction to 5th order, by which we 
mean the d/5 reflection (1st order) because the two are simply different 
viewpoints.

Just one loose end:-
The remarkable thing is that the diffraction from a crystal is largely empty. 
We focus on the spots, true, but the largely empty diffraction space from a 
crystal in a sense is a most useful aspect about the W L Bragg equation.

Finally, just to mention, when I saw the laser light diffraction from a 
periodic ruled grating for the first time i thought:- it is magnificent. I rank 
it alongside the spectral lines in an atom's emission spectrum, such as the 
sodium D lines ie as i saw in my physics teaching lab. The red shifted hydrogen 
spectra of Hubble himself, available to view in the museum of the astronomical 
observatory in Los Angeles, are of course in a yet different, higher, league of 
where we are in the (expanding) universe. 

Yours sincerely,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 23 Aug 2013, at 16:34, Edward A. Berry ber...@upstate.edu wrote:

 I think we are just discussing different ways of saying the same thing now.
 But that can be interesting, too.  If not, read no farther.
 
 herman.schreu...@sanofi.com wrote:
 Dear Edward,
 
 Now I am getting a little confused: If you look at a higher order 2n 
 reflection, you will also get diffraction from the intermediate 1n layers, 
 so the structure factor you are looking at is in fact the 1n structure 
 factor. I think your original post was correct.
 
 Yes- I think the original poster's question about diffraction from
 the 2n planes, and whether that contributes to diffraction in the
 1n reflection, has been answered- physically they are the same thing.
 
 My question now is whether it is useful to consider Braggs-law n to
 have values other than one, and whether it is useful to tie Braggs law
 to the unit cell, or better to derive it for a set of equally spaced
 planes (as I think it originally was derived) and later put conditions
 on when those planes will diffract.
 
 In addition to Bragg's law one also talks about the Bragg condition,
 as somewhat related to the diffraction condition although maybe that
 is closer to Laue condition.
 But anyway, the motivation for presenting Braggs law is to decide where
 (as a function of lambda and theta) diffraction will be observed.
 And in a continuos crystal (admittedly not what Braggs law was derived
 for, but what the students are interested in) you don't get diffraction
 without periodicity, and the spacing of the planes has to be related to
 the unit cell for braggs law to help (as you say, periodicity of the planes
 must match periodicity of the crystal).
 
 When Bragg's condition is met, points separated by d scatter in phase.
 Diffraction occurs when d matches the periodicity of the material, so that
 crystallographically-equivalent-by-translation points scatter in phase,
 and the resultants from each unit layer (1-D unit cell) scatter in phase.
 
 If we are just considering equal planes separated by d with nothing between,
 then the periodicity is just d, and bragg condition gives diffraction
 condition.
 If we are considering a crystal with continuous density, if d is equal to
 a unit cell dimension and the planes are perpendicular to that axis, then
 then the periodicity is d and brags law gives the (1-dimensional) diffraction
 condition.
 If d is some arbitrary spacing not related to periodicity of the matter,
 brag condition still tells you that points separated by d along S scatter
 in phase but if d has no relation to the periodicity, diffraction conditions
 are not met and the different slabs thickness d will not scatter in phase.
 If d is an integral submultiple of the periodicity, we get diffraction.
 What is the best way to explain this?
 1. if points separated by d scatter in phase (actually out of phase by one 
 wavelength),
 then spots separated by an integral multiple n of d will scatter in phase
 (out of phase by n wavelengths). Now if n*d is the unit cell spacing, spots
 separated by nd will be crystallographically equivalent, and scatter in
 phase (actually out of phase by n wavelengths).
  But this is more elegantly expressed by using braggs law with d' =  the unit
 cell spacing, nd, and n'= n. The right hand side of braggs law is calculating
 the phase difference, and the left hand is saying this must be = n lambda.
 That's what n is there for!
 
 2. the periodicity of the set of planes must match the periodicity of the 
 crystal-
 if d is a submultiple of the unit cell spacing, points separated by d will 
 scatter in
 phase, but there is no relation between what exists at those points, so they 
 will
 not interfere constructively. each slab of thickness d will have resultant 
 phase
 

Re: [ccp4bb] Dependency of theta on n/d in Bragg's law

2013-08-22 Thread Jrh
Dear Pietro,
The n in Bragg's Law is indeed most interesting for teachers and a most 
delicate matter for those enquiring about it. 

The diffraction grating equation, from which W L Bragg got the idea, a 'cheap 
accolade' he said to have it named after him in his Scientific American 
article, has each order at its own specific theta. n=1 at one angle, n=2 the 
next order of diffraction at higher angle, n=3 the next order at higher angle 
still and so on. This is how physicists usually first meet the effect and use 
monochromatic laser light and a periodically ruled, line, grating to see the 
laser diffraction pattern in the modern physics teaching labs. 

In crystal structure analysis the ruled line of the above is now the unit cell 
of the crystal and the contents are of chemical and biological interest, unlike 
the inside of a ruled line! Thus the switch to using lamba = 2d sin theta form 
of the equation and the n subsumed into the interplanar spacing. d=1 is the 
unit cell edge, d/2 half the unit cell and so on. Each has its own reflection 
intensity. The highest resolution molecular detail we get of the insides of the 
unit cell arising from the highest n diffraction reflection with a 'measurable' 
 intensity.

The use of polychromatic light, or white X-rays, we need not consider just now. 
Suffice to say at this point that, eg historically, the W H Bragg X-ray 
spectrometer provided monochromatic X-rays to illuminate a single crystal and 
so, as his son W L Bragg put it, immediately enabled a clear and more powerful 
analysis of crystal structure and thereby allowed the first detailed atomic  
X-ray crystal structure, sodium chloride, to be resolved. Several other Xray 
crystal structures immediately followed from the Braggs, using the Xray 
spectrometer, before 1914 ie when the Great War pretty much put all basic 
research and development 'on hold'. 

When one does come to the question of 'Laue diffraction' the so called 
multiplicity distribution of Bragg reflections in Laue pattern spots has been 
treated in detail by Cruickshank et al 1987 Acta Cryst A, as pointed out by 
Tim.  Prime numbers are pivotal to the analysis, as James pointed out. 

Best wishes,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 20 Aug 2013, at 15:36, Pietro Roversi pietro.rove...@bioch.ox.ac.uk wrote:

 Dear all,
 
 I am shocked by my own ignorance, and you feel free to do the same, but
 do you agree with me that according to Bragg's Law
 a diffraction maximum at an angle theta has contributions
 to its intensity from planes at a spacing d for order 1, 
 planes of spacing 2*d for order n=2, etc. etc.?
 
 In other words as the diffraction angle is a function of n/d:
 
 theta=arcsin(lambda/2 * n/d)
 
 several indices are associated with diffraction at the same angle?
 
 (I guess one could also prove the same result by
 a number of Ewald constructions using Ewald spheres
 of radius (1/n*lambda with n=1,2,3 ...)
 
 All textbooks I know on the argument neglect to mention this
 and in fact only n=1 is ever considered.
 
 Does anybody know a book where this trivial issue is discussed?
 
 Thanks!
 
 Ciao
 
 Pietro
 
 
 
 Sent from my Desktop
 
 Dr. Pietro Roversi
 Oxford University Biochemistry Department - Glycobiology Division
 South Parks Road
 Oxford OX1 3QU England - UK
 Tel. 0044 1865 275339


Re: [ccp4bb] ctruncate bug?

2013-06-24 Thread Jrh
Dear Tom,
I find this suggestion of using the full images an excellent and visionary one.
So, how to implement it? 
We are part way along the path with James Holton's reverse Mosflm.
The computer memory challenge could be ameliorated by simple pixel averaging at 
least initially.
The diffuse scattering would be the ultimate gold at the end of the rainbow. 
Peter Moore's new book, inter alia, carries many splendid insights into the 
diffuse scattering in our diffraction patterns.
Fullprof analyses have become a firm trend in other fields, admittedly with 
simpler computing overheads.
Greetings,
John

Prof John R Helliwell DSc FInstP 
 
 

On 21 Jun 2013, at 23:16, Terwilliger, Thomas C terwilli...@lanl.gov wrote:

 I hope I am not duplicating too much of this fascinating discussion with 
 these comments:  perhaps the main reason there is confusion about what to do 
 is that neither F nor I is really the most suitable thing to use in 
 refinement.  As pointed out several times in different ways, we don't measure 
 F or I, we only measure counts on a detector.  As a convenience, we process 
 our diffraction images to estimate I or F and their uncertainties and model 
 these uncertainties as simple functions (e.g., a Gaussian).  There is no need 
 in principle to do that, and if we were to refine instead against the raw 
 image data these issues about positivity would disappear and our structures 
 might even be a little better.
 
 Our standard procedure is to estimate F or I from counts on the detector, 
 then to use these estimates of F or I in refinement.  This is not so easy to 
 do right because F or I contain many terms coming from many pixels and it is 
 hard to model their statistics in detail.  Further, attempts we make to 
 estimate either F or I as physically plausible values (e.g., using the fact 
 that they are not negative) will generally be biased (the values after 
 correction will generally be systematically low or systematically high, as is 
 true for the French and Wilson correction and as would be true for the 
 truncation of I at zero or above).
 
 Randy's method for intensity refinement is an improvement because the 
 statistics are treated more fully than just using an estimate of F or I and 
 assuming its uncertainty has a simple distribution.  So why not avoid all the 
 problems with modeling the statistics of processed data and instead refine 
 against the raw data.  From the structural model you calculate F, from F and 
 a detailed model of the experiment (the same model that is currently used in 
 data processing) you calculate the counts expected on each pixel. Then you 
 calculate the likelihood of the data given your models of the structure and 
 of the experiment.  This would have lots of benefits because it would allow 
 improved descriptions of the experiment (decay, absorption, detector 
 sensitivity, diffuse scattering and other background on the images,on 
 and on) that could lead to more accurate structures in the end.  Of course 
 there are some minor issues about putting all this in computer memory for 
 refinement
 
 -Tom T
 
 From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Phil 
 [p...@mrc-lmb.cam.ac.uk]
 Sent: Friday, June 21, 2013 2:50 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] ctruncate bug?
 
 However you decide to argue the point, you must consider _all_ the 
 observations of a reflection (replicates and symmetry related) together when 
 you infer Itrue or F etc, otherwise you will bias the result even more. Thus 
 you cannot (easily) do it during integration
 
 Phil
 
 Sent from my iPad
 
 On 21 Jun 2013, at 20:30, Douglas Theobald dtheob...@brandeis.edu wrote:
 
 On Jun 21, 2013, at 2:48 PM, Ed Pozharski epozh...@umaryland.edu wrote:
 
 Douglas,
 Observed intensities are the best estimates that we can come up with in 
 an experiment.
 I also agree with this, and this is the clincher.  You are arguing that 
 Ispot-Iback=Iobs is the best estimate we can come up with.  I claim that 
 is absurd.  How are you quantifying best?  Usually we have some sort of 
 discrepancy measure between true and estimate, like RMSD, mean absolute 
 distance, log distance, or somesuch.  Here is the important point --- by 
 any measure of discrepancy you care to use, the person who estimates Iobs 
 as 0 when IbackIspot will *always*, in *every case*, beat the person who 
 estimates Iobs with a negative value.   This is an indisputable fact.
 
 First off, you may find it useful to avoid such words as absurd and 
 indisputable fact.  I know political correctness may be sometimes 
 overrated, but if you actually plan to have meaningful discussion, let's 
 assume that everyone responding to your posts is just trying to help figure 
 this out.
 
 I apologize for offending and using the strong words --- my intention was 
 not to offend.  This is just how I talk when brainstorming with my 
 colleagues around a 

Re: [ccp4bb] Refinement against frames

2013-06-24 Thread Jrh
Dear Tim,
With a full interpretation of the diffuse scattering as well how about papers 
becoming entitled:-
The structure and dynamics of enzyme X
As you intimate some diffuse scattering is crystal dependent ie phonons 
derived. Other aspects are however not correlated over multiple unit cells but 
thereby largely related to the dynamics of our macromolecules. (largely means 
we need to allow for static disorder and / or chemical variants possibilities). 

Re instrument aspects:-
The days of instrument setting dependent (eg due to varying magnetic fields as 
we changed xtod) detector response are indeed fortunately behind us. That said 
we might learn something on the instrument aspect doing things against the 
detector plane. Another aspect for example is detailed prediction of spot 
shape, although perhaps for the purists amongst us (eg see Greenhough, 
Helliwell and Rule 1983 JAC), but may add insights into I - I bg, ie the spot 
shape prior can be known. This can be done processing 'forwards or backwards'. 

Greetings,
John

Prof John R Helliwell DSc 
 
 

On 24 Jun 2013, at 12:59, Tim Gruene t...@shelx.uni-ac.gwdg.de wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Dear John,
 
 actually I am not a friend of this idea. Processing software make an
 excellent job of removing the instrumental part from our data. If we
 start to integrate against frames, the next structural title might be
 something like Crystal structure of ABC a xA resolution measured at
 beamline xyz with a frame width of f degrees and a total rotation
 range of phi degreees... the point I am trying to make: once
 integrating against frames one may have to take a lot of issues into
 account for interpreting the structure.
 And do you think that refining against frames will actually give
 greater chemical or biological insight into the sample, or will it
 only give a more accurate description of the crystal contents? These
 are two different things and the latter is - in my opinion - not what
 structures are about.
 
 Best, Tim
 
 P.S.: I changed the subject line, because the thread based sorting of
 my emails is soon going to exceed the width of my screem for the
 original one.
 
 On 06/24/2013 08:13 AM, Jrh wrote:
 Dear Tom, I find this suggestion of using the full images an
 excellent and visionary one. So, how to implement it? We are part
 way along the path with James Holton's reverse Mosflm. The computer
 memory challenge could be ameliorated by simple pixel averaging at
 least initially. The diffuse scattering would be the ultimate gold
 at the end of the rainbow. Peter Moore's new book, inter alia,
 carries many splendid insights into the diffuse scattering in our
 diffraction patterns. Fullprof analyses have become a firm trend in
 other fields, admittedly with simpler computing overheads. 
 Greetings, John
 
 Prof John R Helliwell DSc FInstP
 
 
 
 On 21 Jun 2013, at 23:16, Terwilliger, Thomas C
 terwilli...@lanl.gov wrote:
 
 I hope I am not duplicating too much of this fascinating
 discussion with these comments:  perhaps the main reason there is
 confusion about what to do is that neither F nor I is really the
 most suitable thing to use in refinement.  As pointed out several
 times in different ways, we don't measure F or I, we only measure
 counts on a detector.  As a convenience, we process our
 diffraction images to estimate I or F and their uncertainties and
 model these uncertainties as simple functions (e.g., a Gaussian).
 There is no need in principle to do that, and if we were to
 refine instead against the raw image data these issues about
 positivity would disappear and our structures might even be a
 little better.
 
 Our standard procedure is to estimate F or I from counts on the
 detector, then to use these estimates of F or I in refinement.
 This is not so easy to do right because F or I contain many terms
 coming from many pixels and it is hard to model their statistics
 in detail.  Further, attempts we make to estimate either F or I
 as physically plausible values (e.g., using the fact that they
 are not negative) will generally be biased (the values after
 correction will generally be systematically low or systematically
 high, as is true for the French and Wilson correction and as
 would be true for the truncation of I at zero or above).
 
 Randy's method for intensity refinement is an improvement because
 the statistics are treated more fully than just using an estimate
 of F or I and assuming its uncertainty has a simple distribution.
 So why not avoid all the problems with modeling the statistics of
 processed data and instead refine against the raw data.  From the
 structural model you calculate F, from F and a detailed model of
 the experiment (the same model that is currently used in data
 processing) you calculate the counts expected on each pixel. Then
 you calculate the likelihood of the data given your models of the
 structure and of the experiment.  This would have lots of
 benefits

Re: [ccp4bb] ctruncate bug?

2013-06-24 Thread Jrh
Dear Pavel,
Diffuse scattering is probably the most difficult topic I have worked on.
Reading Peter Moore's new book and his insights give me renewed hope we could 
make much more of it, as I mentioned to Tim re 'structure and dynamics'. 
You describe more aspects below obviously.
Greetings,
John
Prof John R Helliwell DSc 
 
 

On 24 Jun 2013, at 17:12, Pavel Afonine pafon...@gmail.com wrote:

 Refinement against images is a nice old idea. 
 From refinement technical point of view it's going to be challenging. 
 Refining just two flat bulk solvent model ksolBsol simultaneously may be 
 tricky, or occupancy + individual B-factor + TLS, or ask multipolar 
 refinement folk about whole slew of magic they use to refine different 
 multipolar parameters at different stages of refinement proces and in 
 different order and applied to different atom types (H vs non-H) 
 ...etc...etc. Now if you convolute all this with the whole diffraction 
 experiment parameters through using images in refinement that will be big 
 fun, I'm sure.
 Pavel
 
 
 
 On Sun, Jun 23, 2013 at 11:13 PM, Jrh jrhelliw...@gmail.com wrote:
 Dear Tom,
 I find this suggestion of using the full images an excellent and visionary 
 one.
 So, how to implement it?
 We are part way along the path with James Holton's reverse Mosflm.
 The computer memory challenge could be ameliorated by simple pixel averaging 
 at least initially.
 The diffuse scattering would be the ultimate gold at the end of the rainbow. 
 Peter Moore's new book, inter alia, carries many splendid insights into the 
 diffuse scattering in our diffraction patterns.
 Fullprof analyses have become a firm trend in other fields, admittedly with 
 simpler computing overheads.
 Greetings,
 John
 
 Prof John R Helliwell DSc FInstP
 
 
 
 On 21 Jun 2013, at 23:16, Terwilliger, Thomas C terwilli...@lanl.gov 
 wrote:
 
  I hope I am not duplicating too much of this fascinating discussion with 
  these comments:  perhaps the main reason there is confusion about what to 
  do is that neither F nor I is really the most suitable thing to use in 
  refinement.  As pointed out several times in different ways, we don't 
  measure F or I, we only measure counts on a detector.  As a convenience, we 
  process our diffraction images to estimate I or F and their uncertainties 
  and model these uncertainties as simple functions (e.g., a Gaussian).  
  There is no need in principle to do that, and if we were to refine instead 
  against the raw image data these issues about positivity would disappear 
  and our structures might even be a little better.
 
  Our standard procedure is to estimate F or I from counts on the detector, 
  then to use these estimates of F or I in refinement.  This is not so easy 
  to do right because F or I contain many terms coming from many pixels and 
  it is hard to model their statistics in detail.  Further, attempts we make 
  to estimate either F or I as physically plausible values (e.g., using the 
  fact that they are not negative) will generally be biased (the values after 
  correction will generally be systematically low or systematically high, as 
  is true for the French and Wilson correction and as would be true for the 
  truncation of I at zero or above).
 
  Randy's method for intensity refinement is an improvement because the 
  statistics are treated more fully than just using an estimate of F or I and 
  assuming its uncertainty has a simple distribution.  So why not avoid all 
  the problems with modeling the statistics of processed data and instead 
  refine against the raw data.  From the structural model you calculate F, 
  from F and a detailed model of the experiment (the same model that is 
  currently used in data processing) you calculate the counts expected on 
  each pixel. Then you calculate the likelihood of the data given your models 
  of the structure and of the experiment.  This would have lots of benefits 
  because it would allow improved descriptions of the experiment (decay, 
  absorption, detector sensitivity, diffuse scattering and other background 
  on the images,on and on) that could lead to more accurate structures in 
  the end.  Of course there are some minor issues about putting all this in 
  computer memory for refinement
 
  -Tom T
  
  From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Phil 
  [p...@mrc-lmb.cam.ac.uk]
  Sent: Friday, June 21, 2013 2:50 PM
  To: CCP4BB@JISCMAIL.AC.UK
  Subject: Re: [ccp4bb] ctruncate bug?
 
  However you decide to argue the point, you must consider _all_ the 
  observations of a reflection (replicates and symmetry related) together 
  when you infer Itrue or F etc, otherwise you will bias the result even 
  more. Thus you cannot (easily) do it during integration
 
  Phil
 
  Sent from my iPad
 
  On 21 Jun 2013, at 20:30, Douglas Theobald dtheob...@brandeis.edu wrote:
 
  On Jun 21, 2013, at 2:48 PM, Ed Pozharski epozh

Re: [ccp4bb] Definition of diffractometer

2013-06-20 Thread Jrh
Dear Colleagues,
If we may combine Ethan's quote from Stout and Jensen with Tim's meter:-
With film the estimating of a spot's blackness by eye is not a meter, it 
originally was a person's eye aided by a reference strip of blackened graduated 
spot exposures.
With measuring devices the subjectivity goes and it is therefore a meter. 
I therefore believe that eg a CCD diffractometer is a valid terminology.
Greetings,
John


Prof John R Helliwell DSc 
 
 

On 19 Jun 2013, at 19:11, Edward A. Berry ber...@upstate.edu wrote:

 Somewhere I got the idea that a diffractometer is an instrument that measures 
 one reflection at a time. Is that the case, and if so what is the term for 
 instruments like rotation camera, weisenberg, area detector? (What is an area 
 detector?).
 
 Logically I guess a diffractometer could be anything that measures 
 diffraction, and that seems to be view of the wikipedia article of that name.
 eab


Re: [ccp4bb] Concerns about statistics

2013-06-15 Thread Jrh
Dear Ed,
Thankyou for this.
Indeed I have not pushed into the domain of I/sigI as low as 0.4 or CC1/2 as 
low as  0.012. 
So, I do not have an answer to your query at these extremes.
But I concede I am duly corrected by your example and indeed my email did not 
tabulate specifically how far one could investigate the plateau of DPI and 
certainly I was not considering such an extreme as you have investigated.
Best wishes,
Yours sincerely,
John

Prof John R Helliwell DSc
 
 

On 15 Jun 2013, at 15:31, Ed Pozharski epozh...@umaryland.edu wrote:

 On 06/14/2013 07:00 AM, John R Helliwell wrote:
 Alternatively, at poorer resolutions than that, you can monitor if the 
 Cruickshank-Blow Diffraction Precision Index (DPI) improves or not as more 
 data are steadily added to your model refinements.
 Dear John,
 
 unfortunately the behavior of DPIfree is less than satisfactory here - in a 
 couple of cases I looked at it just steadily improves with resolution.  
 Example I have in front of me right now takes resolution down from 2.0A to 
 1.55A, and DPIfree goes down from ~0.17A to 0.09A at almost constant pace 
 (slows down from 0.021 A/0.1A to 0.017 A/0.1A around 1.75A).
 
 Notice that in this specific case I/sigI at 1.55A is ~0.4 and CC(1/2)~0.012 
 (even this non-repentant big-endian couldn't argue there is good signal 
 there).
 
 DPIfree is essentially proportional to Rfree * d^(2.5)  (this is assuming 
 that No~1/d^3, Na and completeness do not change).  To keep up with 
 resolution changes, Rfree would have to go up ~1.9 times, and obviously that 
 is not going to happen no matter how much weak data I throw in.
 
 The maximum-likelihood e.s.u. reported by Refmac makes more sense in this 
 particular case as it clearly slows down big time around 1.77A (see 
 https://plus.google.com/photos/113111298819619451614/albums/5889708830403779217).
  Coincidentally, Rfree also starts going up rapidly around the same 
 resolution.  If anyone is curious what's I/sigI is at the breaking point 
 it's ~1.5 and CC(1/2)~0.6.  And to bash Rmerge a little more, it's 112%.
 
 So there are two questions I am very much interested in here.
 
 a) Why is DPIfree so bad at this?  Can we even believe it given it's erratic 
 behavior in this scenario?
 
 b) I would normally set up a simple data mining project to see how common 
 this ML_esu behavior is, but there is no easily accessible source of data 
 processed to beyond I/sigI=2, let alone I/sigI=1 (are structural genomics 
 folks reading this and do they maybe have such data to mine?).  I can look 
 into all of my own datasets, but that would be a biased selection of several 
 crystal forms.  Perhaps others have looked into this too, and what are your 
 observations? Or maybe you have a dataset processed way beyond I/sigI=1 and 
 are willing to either share it with me together with a final model or run 
 refinement at a bunch of different resolutions and report the result (I can 
 provide bash scripts as needed).
 
 Cheers,
 
 Ed.
 
 -- 
 Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
Julian, King of Lemurs
 


Re: [ccp4bb] CCDs + Re: PILATUS data collection

2013-05-20 Thread Jrh
Dear Gerard,
Many thanks for these useful clarifications.I see your points clearly.

Just to mention that one remark in James's posting regarding photon counting 
versus read noise caught my attention. I will follow up on this ASAP , which 
like fine phi slicing gets to the heart of the measurement physics. 

Greetings,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 19 May 2013, at 21:06, Gerard Bricogne g...@globalphasing.com wrote:

 Dear John,
 
 Thank you for your message. I do realise that what I wrote may
 have sounded like a categoric blanket endorsement of XDS. Perhaps I
 should have slept on my draft message a little longer, but James's
 long e-mail made me wake up and it seemed appropriate to say what I
 wanted to say, in the form I had written it, withouth further delay.
 
  My main point was to try and shake people out of habits that
 are, in part at least, linked to the use of integration programs still
 based on a 2D analysis of diffraction images, and to getting misled
 into thinking that fine slicing doesn't help because it doesn't help
 these programs. What I most wanted to put across is that the merits of
 fine slicing and low-exposure, high-multiplicity collection protocols
 will emerge faster if people are encouraged to evaluate the data they
 yield through processing with XDS - hence my strong endorsement of it
 in this context.
 
 I do not consider XDS to be beyond perfectibility by any means,
 but I was reluctant to make my e-mail longer by analysing its possible
 improvements. Once its use has spread enough to give users of Pilatus
 detectors the best data they can hope for in the current state of the
 art, then its limitations will make concrete sense to enough people
 for their discussion to move spontaneously to the top of the agenda.
 It is indeed because I think there is so much scope for improvement on
 the current diffraction image processing software, including XDS, that
 I have been strongly (some would perhaps say: stridently) advocating
 the deposition of raw diffraction image data, as both an incentive and
 a testing ground for such developments in the future.
 
 
 With best wishes,
 
  Gerard.
 
 
 
 On Sun, May 19, 2013 at 08:01:34PM +0100, John R Helliwell wrote:
 Dear Gerard,
 Thank you for sharing these extensive details which I feel sure everyone
 will appreciate.
 Just one aspect I wondered about namely your categorical blanket
 endorsement of XDS. Indeed a very fine program and eg most recently
 evaluated and discussed at CCP4 2011 I think it was, where it emerged 'the
 winner'. You probably guess though that I am thinking of our mutually
 emphasized point that one key reason for raw diffraction data images
 availability is to see such software improve. Is XDS already perfection? Is
 its use by users already guaranteed to yield processed data as good as it
 can get?
 Greetings,
 John
 Prof John R Helliwell DSc
 
 
 On Thu, May 16, 2013 at 6:03 PM, Gerard Bricogne 
 g...@globalphasing.comwrote:
 
 Dear James,
 
 A week ago I wrote what I thought was a perhaps excessively long and
 overly dense message in reply to Theresa's initial query, then I thought I
 should sleep on it before sending it, and got distracted by other things.
 
 I guess you may well have used that whole week composing yours ;-) and
 reading it just now makes the temptation of sending mine irresistible. I am
 largely in agreement with you about the need to change mental habits in
 this
 field, and hope that the emphasis on various matters in my message below is
 sufficiently different from yours to make a distinct contribution to this
 very important discussion. Your analysis of pile-up effects goes well
 beyond
 anything I have ever looked at. However, in line with Theresa's initial
 question, I would say that, while I agree with you that the best strategy
 for collecting native data is no strategy at all, this isn't the case
 when
 collecting data for phasing. In that case one needs to go back and consider
 how to measure accurate differences of intensities, not just accurate
 intensities on their own. That is another subject, on which I was going to
 follow up so as to fully answer Theresa's message - but perhaps that should
 come in another installment!
 
 
 With best wishes,
 
  Gerard.
 
 --
 On Tue, May 07, 2013 at 12:04:33AM +0100, Theresa Hsu wrote:
 Dear crystallographers
 
 Is there a good source/review/software to obtain tips for good data
 collection strategy using PILATUS detectors at synchrotron? Do we need to
 collect sweeps of high and low resolution data separately? For anomalous
 phasing (MAD), does the order of wavelengths used affect structure solution
 or limit radiation damage?
 
 Thank you.
 
 Theresa
 --
 
 Dear Theresa,
 
 You have had several excellent replies 

Re: [ccp4bb] Fwd: Re: [ccp4bb] reference for true multiplicity?

2013-05-15 Thread Jrh
Good morning Colin (from this side of the pond),
I never liked the word redundancy. Multiplicity is a however a good word for 
multiple measurements. So, Ethan, what does someone in the USA say when made 
redundant ie out of a job? Surely not that they are now a useful surplus for 
the US economy of the future? 

Re benefits of multiple measurements I would add:-
Any time dependent related variations such as :-
X-ray beam rapid variations;
Crystal movements;
Variations in cold stream flow;
??
??

More esoterically perhaps extinction for very strong reflections in bigger 
crystal cases with longer Xray wavelengths. This would be data sets where 
multiple crystals are needed. This however I don't think affects more than a 
handful of reflections.

Just my two UK pennies worth,
John


Prof John R Helliwell DSc 
 
 

On 14 May 2013, at 21:58, Colin Nave colin.n...@diamond.ac.uk wrote:

 Yes, a good summary.
 The use of the term redundancy (real or otherwise!) in crystallography is 
 potentially misleading as the normal usages means superfluous/ surplus to 
 requirements.  The closest usage I can find from elsewhere is in information 
 theory where it is applied for purposes of error detection when communicating 
 over a noisy channel. Seems similar to the crystallographic use.
 
 The more relevant point is what sort of errors would be mitigated by having 
 different paths through the crystal. The obvious ones are absorption errors 
 and errors in detector calibration. Inverse beam methods can mitigate these 
 by ensuring the systematic errors are similar for the reflections being 
 compared. However, my interpretation of the Acta D59 paper is that it is 
 accepted that systematic errors are present and, by making multiple 
 measurements under different conditions, the effect of these systematic 
 errors will be minimised.
 
 Can anyone suggest other sources of error which would be mitigated by having 
 different paths through the crystal. I don't think radiation damage 
 (mentioned by several people) is one.
 
 Colin
 
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Frank 
 von Delft
 Sent: 14 May 2013 14:23
 To: ccp4bb
 Subject: [ccp4bb] Fwd: Re: [ccp4bb] reference for true multiplicity?
 
 George points out that the quote I referred to did not make it to the BB -- 
 here we go, read below and learn, it is a most succinct summary.
 phx
 
  Original Message 
 Subject:
 
 Re: [ccp4bb] reference for true multiplicity?
 
 Date:
 
 Tue, 14 May 2013 09:25:22 +0100
 
 From:
 
 Frank von Delft 
 frank.vonde...@sgc.ox.ac.ukmailto:%3cfrank.vonde...@sgc.ox.ac.uk%3e
 
 To:
 
 George Sheldrick 
 gshe...@shelx.uni-ac.gwdg.demailto:gshe...@shelx.uni-ac.gwdg.de
 
 
 Thanks!  It's the Acta D59 p688 I was thinking of - start of discussion:
 The results presented here show that it is possible to solve
 protein structures using the anomalous scattering from native
 S atoms measured on a laboratory instrument in a careful but
 relatively routine manner, provided that a sufficiently high
 real redundancy is obtained (ranging from 16 to 44 in these
 experiments). Real redundancy implies measurement of
 equivalent or identical re¯ections with different paths through
 the crystal, not just repeated measurements; this is expedited
 by high crystal symmetry and by the use of a three-circle (or )
 goniometer.
 Wise words...
 
 phx
 
 
 On 14/05/2013 08:06, George Sheldrick wrote:
 Dear Frank,
 
 We did extensive testing of this approach at the beginning of this millenium 
 - see
 Acta Cryst. D59 (2003) 393 and 688 - but never claimed that it was our idea.
 
 Best wishes,
 George
 
 On 05/14/2013 06:50 AM, Frank von Delft wrote:
 
 Hi, I'm meant to know this but I'm blanking, so I'll crowdsource instead:
 
 Anybody know a (the) reference where it was showed that the best SAD data is 
 obtained by collecting multiple revolutions at different crystal offsets 
 (kappa settings)?  It's axiomatic now (I hope!), but I remember seeing 
 someone actually show this.  I thought Sheldrick early tweens, but PubMed is 
 not being useful.
 
 (Oh dear, this will unleash references from the 60s, won't it.)
 
 phx
 
 
 
 
 
 
 -- 
 
 This e-mail and any attachments may contain confidential, copyright and or 
 privileged material, and are for the use of the intended addressee only. If 
 you are not the intended addressee or an authorised recipient of the 
 addressee please notify us of receipt by returning the e-mail and do not use, 
 copy, retain, distribute or disclose the information in or attached to the 
 e-mail.
 
 Any opinions expressed within this e-mail are those of the individual and not 
 necessarily of Diamond Light Source Ltd. 
 
 Diamond Light Source Ltd. cannot guarantee that this e-mail or any 
 attachments are free from viruses and we cannot accept liability for any 
 damage which you may sustain as a result of software viruses which may be 
 transmitted in or with the message.
 
 Diamond Light 

Re: [ccp4bb] Strange density in solvent channel and high Rfree

2013-03-20 Thread Jrh
Dear Zbyszek,
I am concerned that the unmerged data would be bypassed and not preserved in 
your recommendation. I also find it counter intuitive that the merged data 
would then be unmerged into a lower symmetry and be better than the unmerged 
data; there is I imagine some useful reference or two you can direct me to that 
may well correct my lack of understanding.  Thirdly I think this a very likely 
useful case to preserve the raw diffraction images. 
All best wishes,
John

Prof John R Helliwell DSc 
 
 

On 19 Mar 2013, at 14:37, Zbyszek Otwinowski zbys...@work.swmed.edu wrote:

 It is a clear-cut case of crystal packing disorder. The tell-tale sign is
 that data can be merged in the higher-symmetry lattice, while the number
 of molecules in the asymmetric unit (3 in P21) is not divisible by the
 higher symmetry factor (2, by going from P21 to P21212).
 From my experience, this is more likely a case of order-disorder than
 merohedral twinning. The difference between these two is that structure
 factors are added for the alternative conformations in the case of
 order-disorder, while intensities (structure factors squared) are added in
 the case of merohedral twinning.
 
 Now an important comment on how to proceed in the cases where data can be
 merged in a higher symmetry, but the structure needs to be solved in a
 lower symmetry due to a disorder.
 
 !Such data needs to be merged in the higher symmetry,assigned R-free flag,
 and THEN expanded to the lower symmetry. Reprocessing the data in a lower
 symmetry is an absolutely wrong procedure and it will artificially reduce
 R-free, as the new R-free flags will not follow data symmetry!
 
 Moreover, while this one is likely to be a case of order-disorder, and
 these are infrequent, reprocessing the data in a lower symmetry seems to
 be frequently abused, essentially in order to reduce R-free. Generally,
 when data CAN be merged in a higher symmetry, the only proper procedure in
 going to a lower-symmetry structure is by expanding these higher-symmetry
 data to a lower symmetry, and not by rescaling and merging the data in a
 lower symmetry.
 
 Zbyszek Otwinowski
 
 Dear all,
 We have solved the problem. Data processing in P1 looks better (six
 molecules in ASU), and Zanuda shows a P 1 21 1 symmetry (three molecules
 in
 ASU), Rfactor/Rfree drops to 0.20978/0.25719 in the first round
 of refinement (without put waters, ligands, etc.).
 
 Indeed, there were one more molecule in ASU, but the over-merged data in
 an orthorhombic lattice hid the correct solution.
 
 Thank you very much for all your suggestions, they were very important to
 solve this problem.
 
 Cheers,
 
 Andrey
 
 2013/3/15 Andrey Nascimento andreynascime...@gmail.com
 
 *Dear all,*
 
 *I have collected a good quality dataset of a protein with 64% of
 solvent
 in P 2 21 21 space group at 1.7A resolution with good statistical
 parameters (values for last shell: Rmerge=0.202; I/Isig.=4.4;
 Complet.=93%
 Redun.=2.4, the overall values are better than last shell). The
 structure
 solution with molecular replacement goes well, the map quality at the
 protein chain is very good, but in the final of refinement, after
 addition
 of a lot of waters and other solvent molecules, TLS refinement, etc. ...
 the Rfree is a quite high yet, considering this resolution
 (1.77A).(Rfree=
 0.29966 and Rfactor= 0.25534). Moreover, I reprocess the data in a lower
 symmetry space group (P21), but I got the same problem, and I tried all
 possible space groups for P222, but with other screw axis I can not even
 solve the structure.*
 
 *A strange thing in the structure are the large solvent channels with a
 lot of electron density positive peaks!? I usually did not see too many
 peaks in the solvent channel like this. This peaks are the only reason
 for
 these high R's in refinement that I can find. But, why are there too
 many
 peaks in the solvent channel???*
 
 *I put a .pdf file (ccp4bb_maps.pdf) with some more information and map
 figures in this link: https://dl.dropbox.com/u/16221126/ccp4bb_maps.pdf*
 
 *
 *
 
 *Do someone have an explanation or solution for this?*
 
 * *
 
 *Cheers,*
 
 *Andrey*
 
 
 
 
 Zbyszek Otwinowski
 UT Southwestern Medical Center at Dallas
 5323 Harry Hines Blvd.
 Dallas, TX 75390-8816
 Tel. 214-645-6385
 Fax. 214-645-6353


Re: [ccp4bb] first use of synchrotron radiation in PX

2013-03-17 Thread Jrh
Dear James,
I agree with your chronology of the first full new protein structures by SR 
MAD. 

The 1975 two wavelength Hoppe and Jakubowksi study of erythrocruorin with Ni 
and Co Kalpha Xray tubes is a classic piece of work of in effect MAD phasing . 
See the IUCr Anomalous Scattering Conference book edited by Abrahams and 
Ramaseshan.

The 1971 Nature paper biological diffraction with SR from Hamburg, whose focus 
was on muscle diffraction, which Colin highlighted, does have an entry though 
for a 'protein crystal' in a table. 

There is also of course the Hamburg 1976 paper Harmsen et al J Mol Biol but 
which generally concluded SR for protein crystallography wasn't worth it; to me 
as a doctoral student at the time this was clearly an incorrect conclusion I 
firmly believed based on the 1971 Hamburg Nature paper and especially what I 
could see in and beyond the 1976 SSRL PNAS paper.

Yours sincerely,
John

Prof John R Helliwell DSc 
 
 

On 16 Mar 2013, at 14:46, James Holton jmhol...@lbl.gov wrote:

 The first report of shooting a protein crystal at a synchrotron (I think) was 
 in 1976:
 http://www.pnas.org/content/73/1/128.full.pdf
 that was rubredoxin
 
 The first PDB file that contains a SYNCHROTRON=Y entry is 1tld (trypsin), 
 which was deposited in 1989:
 http://dx.doi.org/10.1016/0022-2836(89)90110-1
 But the structure of trypsin was arguably already solved at that time.

 Anomalous diffraction was first demonstrated by Coster, Knoll and Prins in 
 1930
 http://dx.doi.org/10.1007/BF01339610
 this was 20 years before Bijvoet.  But not with a synchrotron and definitely 
 not with a protein
 
 The first protein to be solved using anomalous was crambin in 1981:
 http://dx.doi.org/10.1038/290107a0
 but this was not using a synchrotron
 
 The first demonstration of MAD on a protein at a synchrotron was a Tb soak of 
 parvalbumin in 1985
 http://dx.doi.org/10.1016/0014-5793(85)80207-6
 but one could argue that several parvalbumins were already known at that time.
 
 The first MAD structure from native metals was cucumber blue copper protein 
 (2cbp) in 1989
 http://dx.doi.org/10.1126%2Fscience.3406739
 
 The first new structure using MAD, as well as the first SeMet was 
 ribonuclease H (1rnh) in 1990
 http://dx.doi.org/10.1126/science.2169648
 
 If anyone knows of earlier cases, I'd like to hear about it!
 
 -James Holton
 MAD Scientist
 
 On 3/13/2013 7:38 AM, Alan Cheung wrote:
 Hi all - i'm sure this many will know this : when and what was the first 
 protein structure solved on a synchrotron?
 
 Thanks in advance
 Alan
 
 


Re: [ccp4bb] Resolution and data/parameter ratio, which one is more important?

2013-03-17 Thread Jrh
Dear Dr Zhu,
I hope the following might make things easier to grasp. The 3.0Angstrom 
diffraction resolution is basically required to resolve a protein polypeptide 
chain whether your protein is in an 80% solvent content unit cell or a 50% 
solvent content unit cell. You will have more observations in the former than 
in the latter to reach that goal. In the days of a single counter four circle 
diffractometer that was a major overhead. As has been alluded to in 
considerable detail in other replies solvent flattening does however give phase 
determination benefits and 80% is better than 50%.
Yours sincerely,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 15 Mar 2013, at 00:27, Guangyu Zhu g...@hwi.buffalo.edu wrote:

 I have this question. For exmaple, a protein could be crystallized in two 
 crystal forms. Two crystal form have same space group, and 1 molecule/asymm. 
 One crystal form diffracts to 3A with 50% solvent; and the other diffracts to 
 3.6A with 80% solvent. The cell volume of 3.6A crystal must be 5/2=2.5 times 
 larger because of higher solvent content. If both data collecte to same 
 completeness (say 100%), 3.6A data actually have higher data/parameter ratio, 
 5/2/(3.6/3)**3= 1.45 times to 3A data. For refinement, better data/parameter 
 should give more accurate structure, ie. 3.6A data is better. But higher 
 resolution should give a better resolved electron density map. So which 
 crystal form really give a better (more reliable and accurate) protein 
 structure?


Re: [ccp4bb] first use of synchrotron radiation in PX

2013-03-13 Thread Jrh
Dear Colleagues,
The paper 
http://dx.doi.org/10.1107/S0108768185002233
in work led by Howard Einspahr undertaken at SRS 7.2 is a protein structural 
specific result from synchrotron radiation. 
The MAD method of course yielded totally specific to SR protein crystal 
structures. The conceptualisation goes further back than Herzenberg and Lau 
namely to Okaya and Pepinsky 1956. 
The seleno met approach of Wayne Hendrickson I believe was however the major 
breakthrough.
Best wishes,
John


Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 13 Mar 2013, at 14:38, Alan Cheung che...@lmb.uni-muenchen.de wrote:

 Hi all - i'm sure this many will know this : when and what was the first 
 protein structure solved on a synchrotron?
 
 Thanks in advance
 Alan
 
 
 -- 
 Alan Cheung
 Gene Center
 Ludwig-Maximilians-University
 Feodor-Lynen-Str. 25
 81377 Munich
 Germany
 Phone:  +49-89-2180-76845
 Fax:  +49-89-2180-76999
 E-mail: che...@lmb.uni-muenchen.de


Re: [ccp4bb] Sighting of Protein Crystals in Vivo?!

2013-02-17 Thread Jrh
Jacob, I recall Dorothy Hodgkin or Guy Dodson showing a slide (mid to late 
1970s) with insulin crystals having grown in the Islets of Langerhans. As you 
say, quite remarkable.  John 

Prof John R Helliwell DSc 
 
 

On 15 Feb 2013, at 19:44, Jacob Keller j-kell...@fsm.northwestern.edu wrote:

 Dear Crystallographers,
 
 I was looking at some live, control HEK cells expressing just eGFP, and to my 
 great surprise, saw littered across the dish what appeared to be small 
 fluorescent needles (see attached--sorry about the size, but it's only ~1MB 
 total.) Can these possibly be fortuitous protein crystals? They were too 
 small to mount I think, and for what it's worth, parallel-transfected HeLa 
 cells did not have these things. But, some needles could be seen in the DIC 
 images as well, and the needles were only fluorescent with GFP filter sets, 
 and not CFP, YFP, or texas red filters. I thought of whale myoglobin 
 crystallizing on the decks of ships, but never thought I would see this
 
 Jacob
 
 -- 
 ***
 Jacob Pearson Keller, PhD
 Postdoctoral Associate
 HHMI Janelia Farms Research Campus
 email: j-kell...@northwestern.edu
 ***
 GFP_crystals_DIC.png
 GFP_crystals_Fluorescence.png


Re: [ccp4bb] Golden Jubilee of Ramachandran Plot

2013-01-20 Thread Jrh
Dear Colleagues,
This paper on the early days of crystallographic computing will be of 
relevance:-
http://garfield.library.upenn.edu/classics1980/A1980JR2291.pdf
Greetings,
John

Prof John R Helliwell DSc 
 
 

On 20 Jan 2013, at 06:38, Edward A. Berry ber...@upstate.edu wrote:

 Edward A. Berry wrote:
 http://www.computerhistory.org/revolution/timeline
 
 And this paper describes their use of a digital computer as if it were 
 rather routine:
 Sternberg, J., Stillo, H.  Schwendeman, R. (1960). Spectrophotometric 
 Analysis of Multicomponent Systems Using the
 Least Squares Method in Matrix Form. Analytical Chemistry 32, 84-90.
 
 Well, not exactly routine by today's standards. I was going by memory- actual 
 description is:
 The matrix inversions were . . . carried out on the MISTIC Computer at 
 Michigan State University.
 The assistance of Susann Brimmer in carrying out the calculations on the 
 MISTIC computer is gratefully acknowledged.


Re: [ccp4bb] occupancy vs. Bfactors

2012-11-20 Thread Jrh
Dear Pavel,
Also worth mentioning the obvious that the mathematical functional form of an 
occupancy and a B factor in its -ve exponential is very different BUT at lower 
resolutions they behave similarly. Thus higher resolution refinement allows an 
'easier' determination of each parameter. 
Greetings,
John

Prof John R Helliwell DSc 
 
 

On 19 Nov 2012, at 23:54, Pavel Afonine pafon...@gmail.com wrote:

 Hi Grant,
 
 sounds like you did the right thing (as far as I can guess given the amount 
 of information you provided).
 
 In a nutshell, both, B-factors and occupancies, model disorder. The 
 difference is that occupancies model larger scale disorder (such as distinct 
 conformations) than B-factors (smearing due to temperature vibrations, etc).
 
 Perhaps in you case the side chain has several conformations among which you 
 can see only one, and therefore it's valid to model it with occupancy less 
 than 1 (I presume you refined one occupancy per all atoms in that side chain).
 
 Pavel
 
 On Mon, Nov 19, 2012 at 3:36 PM, GRANT MILLS 
 gdmi...@students.latrobe.edu.au wrote:
 Hello all,
 
 I'm currently working on a structure which if I stub a certain side chain 
 phenix/coot shows me a large green blob which looks strikingly similar to the 
 side chain, when I put it in and run another refinement the blob turns red.
 
 Basically I was just playing around and I changed the occupancy of the side 
 chain and now there are no complaints. But I was thinking, should I haven 
 changed the Bfactors instead? Should I have left well enough alone? If I 
 lower the occupancy manually and do not include alternate confirmations have 
 I introduced modelling bias?
 
 Could someone recommend some good articles I could read on exactly how to 
 correctly fix this problem.
 
 Thanks,
 GM 
 


Re: [ccp4bb] occupancy vs. Bfactors

2012-11-20 Thread Jrh
Nomenclature hazard warning:-

Ian, Thankyou for drawing attention to the nomenclature school:-
Partial occupancy disorder
Which I prefer to refer to as 
Partial occupancy order.

Outside our MX field static disorder refers to what we call split occupancy 
order. I like the latter and dislike the former. Ie where there is disorder 
surely such a chemical moiety would be invisible,  let alone allowing us to be 
able to determine its occupancy from Bragg intensities. 

I once tried to propose an amendment to the IUCr Nomenclature Committee to 
replace static disorder terminology with split occupancy order terminology. The 
forces to which you refer were too strong. Static disorder remains the term in 
approved use.

Prof John R Helliwell DSc 
 
 

On 20 Nov 2012, at 15:49, Ian Tickle ianj...@gmail.com wrote:

 PS: Partial occupancy is not the same as disorder. You can have well-ordered 
 different occupancies that manifest themselves then in superstructure 
 patterns. Common in small molecule/materials.
 
 
 Hello Bernhard
 
 Agree with everything you said up till this point, but I think the owners of 
 the site occupancy disorder websites below would disagree that partial 
 occupancy is not the same as disorder!
 
 http://www.ucl.ac.uk/~uccargr/sod.htm
 
 http://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/Html/thCastepDisorder.htm
 
 There are also many research papers on partial occupancy disorder of 
 superlattice materials in the solid state, eg:
 
 http://www.researchgate.net/publication/226559734_Order-disorder_behavior_in_KNbO3_and_KNbO3KTaO3_solid_solutions_and_superlattices_by_molecular-dynamics_simulation
 
 Cheers
 
 -- Ian
  
 
 


Re: [ccp4bb] occupancy vs. Bfactors

2012-11-20 Thread Jrh
Ian, yes absolutely but your very description of where the unit cells are not 
identical is NOT the situation where we see fractional occupancy moieties. In 
such cases a large fraction of the unit cells ARE ordered. QED. John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 20 Nov 2012, at 17:33, Ian Tickle ianj...@gmail.com wrote:

 John
 
 Having begun my crystallographic life with small molecules (organic 
 semiconductors) and subsequently moved to PX, and having worked on SOD 
 crystals I stand in both camps (i.e. both meanings: site-occupancy disorder 
 and superoxide dismutase!).  It seems to me that static disorder is the 
 appropriate description of any situation where the time-averaged unit cells 
 are not all identical and the variations are more or less random throughout 
 the lattice.  This would then apply both to SOD and the more common (at least 
 in MX) positional disorder.
 
 But I'm puzzled where you say where there is disorder surely such a chemical 
 moiety would be invisible.  Surely if there is static disorder such that a 
 fraction x of the sites are randomly occupied, with the remaining fraction 
 1-x vacant the moeity in question will be perfectly visible, just with 
 reduced occupancy x.  In fact I had an example of this: a 9-methyl anthracene 
 molecule sitting on an inversion centre with the Me group randomly occupied 
 with half occupancy.  The disordered Me was certainly visible in the map, 
 just with reduced density compared with the other C atoms.
 
 -- Ian
 
 
 On 20 November 2012 17:58, Jrh jrhelliw...@gmail.com wrote:
 Nomenclature hazard warning:-
 
 Ian, Thankyou for drawing attention to the nomenclature school:-
 Partial occupancy disorder
 Which I prefer to refer to as 
 Partial occupancy order.
 
 Outside our MX field static disorder refers to what we call split occupancy 
 order. I like the latter and dislike the former. Ie where there is disorder 
 surely such a chemical moiety would be invisible,  let alone allowing us to 
 be able to determine its occupancy from Bragg intensities. 
 
 I once tried to propose an amendment to the IUCr Nomenclature Committee to 
 replace static disorder terminology with split occupancy order terminology. 
 The forces to which you refer were too strong. Static disorder remains the 
 term in approved use.
 
 
 Prof John R Helliwell DSc 
  
  
 
 On 20 Nov 2012, at 15:49, Ian Tickle ianj...@gmail.com wrote:
 
 PS: Partial occupancy is not the same as disorder. You can have well-ordered 
 different occupancies that manifest themselves then in superstructure 
 patterns. Common in small molecule/materials.
 
 
 Hello Bernhard
 
 Agree with everything you said up till this point, but I think the owners of 
 the site occupancy disorder websites below would disagree that partial 
 occupancy is not the same as disorder!
 
 http://www.ucl.ac.uk/~uccargr/sod.htm
 
 http://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/Html/thCastepDisorder.htm
 
 There are also many research papers on partial occupancy disorder of 
 superlattice materials in the solid state, eg:
 
 http://www.researchgate.net/publication/226559734_Order-disorder_behavior_in_KNbO3_and_KNbO3KTaO3_solid_solutions_and_superlattices_by_molecular-dynamics_simulation
 
 Cheers
 
 -- Ian
  
 
 
 


Re: [ccp4bb] vitrification vs freezing

2012-11-16 Thread Jrh
Dear Andrew,
Re cryocooled.

Cooled?

It reminds me of James Bond where Martinis should be shaken but not stirred. 
Ie Cooling sounds awfully gentle, a sort of enjoying a cool sea breeze in the 
Caribbean heat. (Ian Fleming wrote his Bond novels there.)

Shock frozen is more what we are doing to our crystals, a brutal event, rather 
than a cooling, even if labelled cryo cooling.

Greetings,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 15 Nov 2012, at 18:12, A Leslie and...@mrc-lmb.cam.ac.uk wrote:

 Dear Sebastiano,
 
This is not entirely straight-forward. The 
 Oxford English dictionary gives the first definition of freeze relevant to 
 this discussion as:
 Of (a body of) water: be converted into or become covered with ice through 
 loss of heat
 
 This is certainly not what we want to do to our crystals. 
 
 However, another definition in OED is:
 Cause (a liquid) to solidify by removal of heat, suggesting that this does 
 not necessarily mean the formation of crystals.
 
 The Larousse Dictionary of Science and Technology (1995) has the following 
 definition:
 Freeze-drying (Biol.) A method of fixing tissues sufficiently rapidly as to 
 inhibit the formation of ice-crystals.
 
 The Dictionary of Microbiology and Molecular Biology (3rd Ed) in the entry on 
 Freezing has the sentence:
 Rapid freezing tends to prevent the ice crystal formation by encouraging 
 vitrification.
 
 Both of these erstwhile volumes therefore suggest that freezing does not 
 necessarily imply the formation of crystals. However, the term is ambiguous, 
 while vitrification is not.
 
 Personally I use cryocooled instead.
 
 Best wishes,
 
 Andrew
 
 
 
 On 15 Nov 2012, at 17:13, Sebastiano Pasqualato wrote:
 
 
 Hi folks,
 I have recently received a comment on a paper, in which referee #1 
 (excellent referee, btw!) commented like this:
 
 crystals were vitrified rather than frozen.
 
 These were crystals grew in ca. 2.5 M sodium malonate, directly dip in 
 liquid nitrogen prior to data collection at 100 K.
 We stated in the methods section that crystals were frozen in liquid 
 nitrogen, as I always did.
 
 After a little googling it looks like I've always been wrong, and what we 
 are always doing is doing is actually vitrifying the crystals.
 Should I always use this statement, from now on, or are there 
 english/physics subtleties that I'm not grasping?
 
 Thanks a lot,
 ciao,
 s
 
 
 -- 
 Sebastiano Pasqualato, PhD
 Crystallography Unit
 Department of Experimental Oncology
 European Institute of Oncology
 IFOM-IEO Campus
 via Adamello, 16
 20139 - Milano
 Italy
 
 tel +39 02 9437 5167
 fax +39 02 9437 5990
 
 please note the change in email address!
 sebastiano.pasqual...@ieo.eu
 
 
 
 
 
 
 
 


Re: [ccp4bb] Ca or Zn

2012-10-31 Thread Jrh
Dear Ethan,
Yes indeed. 
An exciting development underway at Diamond Light Source led by Armin Wagner 
will greatly improve the ease of measurement at eg the calcium edge but also 
extending that wavelength range. The  furin paper I quoted  did nevertheless 
successfully show structural detail from those measurements.

The Ga and Zn edges wavelengths are an easier challenge to access, i agree of 
course, rather the interest I tried to stress was the getting of the sigmas on 
the occupancies as well as the occupancies themselves, and how we did that and 
checked them via more than one software is also hopefully of interest.

Greetings from Taipei,
John 

Prof John R Helliwell DSc 
 
 

On 31 Oct 2012, at 10:37, Ethan Merritt merr...@u.washington.edu wrote:

 On Tuesday, 30 October 2012, Jrh wrote:
 This paper describes use of data either side of the calcium edge:-
 
 http://dx.doi.org/10.1107/S0907444905002556
 
 I think that counts as not amenable (which is not quite the same
 as impossible.  From the Methods section of that paper:
 
  Measurements in the vicinity of the K absorption edge of
  calcium (3.07 Å) are close to or beyond the physical limit
  of most beamlines typically used for X-ray crystallography
  [...] It was not possible to observe interpretable
  diffraction patterns at λ = 3 Å with the weakly diffracting
  furin crystals using the MAR CCD detector and exposure
  times up to 20 min per degree of rotation.
 
 They did soldier on and managed to collect extremely weak data
 below the Ca edge and stronger but still very weak data above the
 edge where the Ca f term was appreciable.  But this is far from a
 routine experiment.
 
 Another approach dating back to work in 1972 by Peter Coleman
 and Brian Matthews http://dx.doi.org/10.1016/0006-291X(72)90750-4
 is to replace the Ca with a rare earth having similar chemistry 
 (e.g. La, whose L-1 edge is at 1.98Å).  
 
 
 This next paper describes a case of gallium and zinc mix at 
 one site with occupancy AND sigmas estimated with different software. 
 This example is however much better diffraction resolution than 
 that you may have. But hopefully will still be of interest:-
 http://dx.doi.org/10.1107/S0108768110011237
 
 Ga and Zn, sure.  That's an easy one. 
 The Ga edge is at 1.196Å and the Zn edge is at 1.284Å,
 both edges are nicely in range for data collection and they are
 close enough together that little or no beamline readjustment
 is needed when jumping from one to the other.
 
Ethan
 
 
 
 
 Prof John R Helliwell DSc
 
 
 
 On 31 Oct 2012, at 04:53, Ethan Merritt merr...@u.washington.edu wrote:
 
 On Tuesday, October 30, 2012 01:44:43 pm Adrian Goldman wrote:
 
 The coordination is indicative but not conclusive but, as I responded to 
 the original poster, I think the best approach is to use anomalous 
 scattering.  You can measure just below and above the Ca edge, 
 
 Actually, you can't.  The Ca K-edge is at 3.07Å, which is not a wavelength
 amenable to macromolecular data collection.  
 
   cheers,
 
   Ethan
 
 
 and similarly with the Zn, and those maps will be _highly_ indicative of 
 the relative amounts of metal ion present.  In fact, you can deconvolute 
 so that you know the occupancy of the metals at the various sites.
 
 Adrian
 
 
 On 30 Oct 2012, at 22:37, Chittaranjan Das wrote:
 
 Veerendra,
 
 You can rule out if zinc has replaced calcium ions (although I agree with 
 Nat and others that looking at the coordination sphere should give a big 
 clue) by taking a few crystals, washing them a couple of times and 
 subjecting them to ICP-MS analysis, if you have access to this technique. 
 You can learn how many zinc, if any, have bound per one protein molecule 
 in the dissolved crystal.
 
 Best
 Chitta
 
 
 
 - Original Message -
 From: Veerendra Kumar veerendra.ku...@uconn.edu
 To: CCP4BB@JISCMAIL.AC.UK
 Sent: Tuesday, October 30, 2012 2:55:33 PM
 Subject: [ccp4bb] Ca or Zn
 
 Dear CCP4bb users,
 
 I am working on a Ca2+ binding protein. it has 4-5 ca2+ binding sites.  I 
 purified the protein  in presence of Ca2+ and crystallized the Ca2+ bound 
 protein. I got crystal and solved the structure by SAD phasing at 2.1A 
 resolution. I can see the clear density in the difference map for metals 
 at the expected binding sites. However I had ZnCl2 in the crystallization 
 conditions. Now i am not sure whether the observed density is for Ca or 
 Zn or how many of them are ca or  zn? Since Ca (20 elctron) and Zn (30 
 electron), is this value difference can be used to make a guess about 
 different ions? 
 is there any way we can find the electron density value at different 
 peaks? 
 
 Thank you
 
 Veerendra 
 
 
 
 


[ccp4bb] A case of post publication fraud...Re: [ccp4bb] Fwd: [ccp4bb] PNAS on fraud

2012-10-19 Thread Jrh
Dear Colleagues,
A different type of, post publication, fraud is the case of the discovery of 
streptomycin. See :-
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(12)61202-1/fulltext
I am just returned from the ICSTI  Conference on Science, Law and Ethics in 
Washington DC representing IUCr where I learnt of this very disturbing case. I 
explained to the book's author and speaker Peter Pringle that, on behalf of 
Universities today, procedures are now in place, at least in the University of 
Manchester, for graduate students and supervisors to both sign within 'eprog' 
that they have discussed matters of authorship etiquette and rules as well as 
IP sharing formalities.
Have a good weekend,
John



Prof John R Helliwell DSc 
 
 

On 19 Oct 2012, at 13:08, Carter, Charlie car...@med.unc.edu wrote:

 
 
 Begin forwarded message:
 
 Date: October 19, 2012 4:40:35 AM EDT
 To: Randy Read rj...@cam.ac.uk
 Subject: Re: [ccp4bb] PNAS on fraud
 
 This thread has been quite interesting to me. I've had a long interest in 
 scientific fraud, which I've generally held to be victimless. While that 
 view is unsupportable in a fundamental sense, I feel strongly that we should 
 understand that error correction costs exponentially more, the smaller the 
 tolerance for errors. In protein synthesis, evolution has settled on error 
 rates of ~1 in 4000-1. Ensuring those rates is already costly in terms 
 of NTPs hydrolyzed. NASA peer review provided me another shock:  budgets for 
 microgravity experiments were an order of magnitude higher than those for 
 ground-based experiments, and most of the increase came via NASA's 
 insistence on higher quality control. 
 
 Informally, I've concluded that the rate of scientific fraud in all journals 
 is probably less than the 1 in 10,000 that (mother) nature settled on.
 
 I concur with Randy.
 
 Charlie
 
 On Oct 18, 2012, at 2:43 PM, Randy Read wrote:
 
 In support of Bayesian reasoning, it's good to see that the data could 
 over-rule our prior belief that Nature/Science/Cell structures would be 
 worse!
 
 


[ccp4bb] Jrh further Re: [ccp4bb] A case of post publication fraud...Re: [ccp4bb] Fwd: [ccp4bb] PNAS on fraud

2012-10-19 Thread Jrh
One of the hardest things for an author, and a handling Editor, is making sure 
that the references list of a submitted article is complete, but is an easier 
task now with our e-tools than in the days of the penicillin discovery. Another 
case is that of Einstein's special theory article of 1905 where Lorentz was not 
cited. Abraham Paix explored this in the biography of Einstein noting that 
Einstein did finally acknowledge that they, Lorentz and Einstein, had been in 
correspondence on the topic.


Prof John R Helliwell DSc 
 
 

On 19 Oct 2012, at 19:43, Bryan Lepore bryanlep...@gmail.com wrote:

 On Fri, Oct 19, 2012 at 2:20 PM, Jrh jrhelliw...@gmail.com wrote:
 Dear Colleagues,
 A different type of, post publication, fraud is the case of the discovery of 
 streptomycin. See :-
 http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(12)61202-1/fulltext
 
  I can't resist posting this quite interesting tangential fact that is 
 unrelated to fraud : penicillin was discovered by Ernest Duchesne.
 
 -Bryan


Re: [ccp4bb] MAD data process problem

2012-05-30 Thread Jrh
Dear Qixu Cai,
The following paper should be informative for your query:-
http://dx.doi.org/10.1107/S0909049595013288
Best wishes,
John
Prof John R Helliwell DSc 
 
 

On 29 May 2012, at 10:11, Qixu Cai caiq...@gmail.com wrote:

 Dear all,
 
 Sorry for the question from MAD beginner.
 
 When we process the MAD datasets, including the peak-data, edge-data and 
 remote-data, which datasets need to be process with anomalous?
 
 I know peak-data obviously need data processing with anomalous, but what 
 about edge-data and remote-data when we want to use MAD method?
 
 Thank you very much!
 
 Best wishes,
 
 Qixu Cai


Re: [ccp4bb] saxs on xtals

2012-05-09 Thread Jrh
Dear Anna,
Very interesting diffraction pattern.
Any chance of measuring to higher resolution?
Ie to try and capture the higher order rings, which presumably are there.
Also interesting that these rings seem quite weak ie the ferritin perhaps not 
fully loaded?
Best wishes,
John


Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 8 May 2012, at 16:54, anna anna marmottalb...@gmail.com wrote:

 Dear all,
 first of all I want to thank you for your attention and all your brilliant 
 suggestions that really cleared my head!!!
 Thanks to you (or because of you!!) now I have many ideas and very much to do.
 
 Colin,
  I was just re-considering my diffraction images. Who knows if they are 
 single xtals indeed! 
 Let's see if I understood your point. Assuming that they are single xtals, if 
 they are located at independent positions in the protein-cage it would be 
 like powder diffraction, with rings at diffraction angles corresponding to 
 magnetite lattice. If they are ordered they should give a diffraction 
 pattern. The corresponding lattice can differ from the protein lattice, do 
 you agree? If this is true, what would I see? Two superimposed diffraction 
 patterns? 
 Actually, I am not able to evaluate it... I attached one of the diffraction 
 images. It seems to me that there are two diffused rings at about 2.5 and 2.9 
 A.
 
 Michael, I just read your reply. I think that the eventual periodicity of the 
 partcles can't be completely independent of the protein periodicity (I 
 attached a hypotethical scheme), as you suggest I will try P1.
 Once I tryed a naive version of what you suggest: I put a magnet over the 
 xtallization plate. All my collegues made fun of me... :) !!
 
 I will check the literature that you all quoted (hard work!)
 
 Thank you again, new suggestions will be really appreciated.
 
 Cheers,
 anna
 
 
 2012/5/8 R. M. Garavito rmgarav...@gmail.com
 Dear Anna,
 
 I know that you already have gotten replies from some top experts, but your 
 intriguing problem brought up some issues I have run across in the past.  
 
 First, from you experience with single crystal diffraction, your results are 
 not that much different from those seen in virus structures where the nucleic 
 acid structure is averaged out.  As the nucleic acid doesn't (and mostly 
 can't) adopt the symmetry of the protein shell, the crystallization process 
 alone does the averaging.   Just because that ferritin and magnetite have 
 cubic symmetry elements, if they don't line up, the magnetite structure can 
 be averaged out upon crystallization.  So, working at lower symmetry may 
 not help, unless there is some directional correlation of the magnetite 
 symmetry and position with the crystal axes.  But try P1 and see what happens.
 
 A second comment is why not try neutron scattering (SANS or single crystal 
 neutron diffraction), particularly as you can match out the protein with D2O 
 and see just the magnetite.  While the same concerns apply for single crystal 
 neutron diffraction, you see more clearly regions of higher average density 
 inside the protein shell.  
 
 And lastly, have you tried crystallizing your ferritin/nanoparticle complexes 
 in the presence of a magnetic field?  It would be a neat trick, and people 
 have tried such things in the past, such as for orienting biomolecules.  Some 
 even used old NMR magnets.  Would be wild, if it worked.
 
 Good luck,
 
 Michael
 
 
 R. Michael Garavito, Ph.D.
 Professor of Biochemistry  Molecular Biology
 603 Wilson Rd., Rm. 513   
 Michigan State University  
 East Lansing, MI 48824-1319
 Office:  (517) 355-9724 Lab:  (517) 353-9125
 FAX:  (517) 353-9334Email:  rmgarav...@gmail.com
 
 
 
 
 
 On May 7, 2012, at 12:30 PM, anna anna wrote:
 
 Dear all,
 I'd like some suggestions/opinions about the sense of an experiment proposed 
 by a collaborator expert in saxs.
 In few words, he wants to collect SAXS data on a suspension of protein xtals 
 to investigate low resolution periodicity of the xtal (more details 
 below). 
 The experiment requires a very huge number of xtals to obtain the circles 
 typical of saxs and it is very time-consuming to me (I know nothing about 
 saxs, I have only to prepare the sample). I proposed to measure a single 
 rotating xtal (like in XRD) but he told they don't have a goniometer on saxs 
 beamline.
 Here is my concern: does it make sense to measure many xtals together? Don't 
 we lose information with respect to single xtal? And, most of all, what can 
 I see by saxs that I can't see by waxs??
 Sorry for the almost off-topic question but I think that only someone who 
 knows both the techniques can help me!!
 
 
 Some detail for who is 

[ccp4bb] A topical one...Re: [ccp4bb] Publication ethics Re: [ccp4bb] Off-topic: Supplying PDB file to reviewers

2012-04-28 Thread Jrh
Dear Mark,
And of course this one is topical:-

http://publicationethics.org/case/lost-raw-data

Best wishes,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 27 Apr 2012, at 11:40, Mark J van Raaij mjvanra...@cnb.csic.es wrote:

 have a look at this case, no danger of your coordinates going to anyone but 
 yourself if you do it this way:
 http://publicationethics.org/case/author-creates-bogus-email-accounts-proposed-reviewers
 
 
 On 26 Apr 2012, at 12:02, Jrh wrote:
 
 Dear Colleagues,
 I have followed this thread with great interest. It reminds me of the Open 
 Commission Meeting of the Biological Macromolecules Commission in Geneva in 
 2002 at the IUCr Congress. Ie at which it was concluded that protein 
 coordinates and diffraction data would not be provided to referees. 
 
 The ethics and rights of readers, authors and referees is a balancing act, 
 as Jeremy and others have emphasised these different constituency's views. 
 
 The aim of this email though is to draw your attention to the Committee on 
 Publication Ethics (COPE) work and case studies, which are extensive. Ie 
 see:-
 
 http://publicationethics.org/
 
 The COPE Forum will also provide advice on case submissions that are made of 
 alleged publication malpractice, various of which are quite subtle. The 
 processes as well though following on from obvious malpractice eg how a 
 university research malpractice committee can be convened are also detailed. 
 
 Greetings,
 John
 
 Prof John R Helliwell DSc 
 
 
 
 On 26 Apr 2012, at 06:10, Jeremy Tame jt...@tsurumi.yokohama-cu.ac.jp 
 wrote:
 
 The problem is it is not the PI who is jumping, it may be a postdoc he/she 
 is throwing.
 
 Priority makes careers (look back at the Lavoisier/Priestly, 
 Adams/LeVerrier or
 Cope/Marsh controversies), and the history of scientific reviewing is not 
 all edifying.
 
 Too many checks, not enough balances. Science is probably better served if 
 the
 author can publish without passing on the pdb model to a potentially 
 unscrupulous 
 reviewer, and if there are minor errors in the published paper then a 
 competing
 group also has reason to publish its own view. The errors already have to 
 evade the
 excellent validation tools we now have thanks to so many talented 
 programmers,
 and proper figures and tables (plus validation report) should be enough for 
 a review.
 The picture we have of haemoglobin is now much more accurate than the ones 
 which came out decades ago, but those structures were very useful in the 
 mean 
 time. A requirement of resolution better than 2 Angstroms would probably 
 stop poor 
 models entering PDB, but I don't think it would serve science as a whole. 
 Science
 is generally a self-correcting process, rather than a demand for perfection 
 in every
 paper. Computer software follows a similar pattern - bug reports don't 
 always invalidate the
 program.
 
 I have happily released data and coordinates via PDB before publication, 
 even back in the
 1990s when this was unfashionable, but would not do so if I felt it risked 
 a postdoc
 failing to publish a key paper before competitors. It might be helpful if 
 journals were
 more amenable to new structures of solved proteins as the biology often 
 emerges 
 from several models of different conformations or ligation states. But in a 
 publish or
 perish world, authors need rights too. Reviewers do a necessary job, but 
 there is a
 need for balance.
 
 The attached figure shows a French view of Le Verrier discovering Uranus, 
 while
 Adams uses his telescope for a quite different purpose.
 
 Adams_Leverrier.jpg
 
 
 On Apr 26, 2012, at 2:01 AM, Ethan Merritt wrote:
 
 On Wednesday, April 25, 2012 09:40:01 am James Holton wrote:
 
 If you want to make a big splash, then don't complain about 
 being asked to leap from a great height.
 
 
 This gets my vote as the best science-related quote of the year.
 
  Ethan
 
 
 -- 
 Ethan A Merritt
 Biomolecular Structure Center,  K-428 Health Sciences Bldg
 University of Washington, Seattle 98195-7742
 


[ccp4bb] Publication ethics Re: [ccp4bb] Off-topic: Supplying PDB file to reviewers

2012-04-26 Thread Jrh
Dear Colleagues,
I have followed this thread with great interest. It reminds me of the Open 
Commission Meeting of the Biological Macromolecules Commission in Geneva in 
2002 at the IUCr Congress. Ie at which it was concluded that protein 
coordinates and diffraction data would not be provided to referees. 

The ethics and rights of readers, authors and referees is a balancing act, as 
Jeremy and others have emphasised these different constituency's views. 

The aim of this email though is to draw your attention to the Committee on 
Publication Ethics (COPE) work and case studies, which are extensive. Ie see:-

http://publicationethics.org/

The COPE Forum will also provide advice on case submissions that are made of 
alleged publication malpractice, various of which are quite subtle. The 
processes as well though following on from obvious malpractice eg how a 
university research malpractice committee can be convened are also detailed. 

Greetings,
John

Prof John R Helliwell DSc 
 
 

On 26 Apr 2012, at 06:10, Jeremy Tame jt...@tsurumi.yokohama-cu.ac.jp wrote:

 The problem is it is not the PI who is jumping, it may be a postdoc he/she is 
 throwing.
 
 Priority makes careers (look back at the Lavoisier/Priestly, Adams/LeVerrier 
 or
 Cope/Marsh controversies), and the history of scientific reviewing is not all 
 edifying.
 
 Too many checks, not enough balances. Science is probably better served if the
 author can publish without passing on the pdb model to a potentially 
 unscrupulous 
 reviewer, and if there are minor errors in the published paper then a 
 competing
 group also has reason to publish its own view. The errors already have to 
 evade the
 excellent validation tools we now have thanks to so many talented programmers,
 and proper figures and tables (plus validation report) should be enough for a 
 review.
 The picture we have of haemoglobin is now much more accurate than the ones 
 which came out decades ago, but those structures were very useful in the mean 
 time. A requirement of resolution better than 2 Angstroms would probably stop 
 poor 
 models entering PDB, but I don't think it would serve science as a whole. 
 Science
 is generally a self-correcting process, rather than a demand for perfection 
 in every
 paper. Computer software follows a similar pattern - bug reports don't always 
 invalidate the
 program.
 
 I have happily released data and coordinates via PDB before publication, even 
 back in the
 1990s when this was unfashionable, but would not do so if I felt it risked a 
 postdoc
 failing to publish a key paper before competitors. It might be helpful if 
 journals were
 more amenable to new structures of solved proteins as the biology often 
 emerges 
 from several models of different conformations or ligation states. But in a 
 publish or
 perish world, authors need rights too. Reviewers do a necessary job, but 
 there is a
 need for balance.
 
 The attached figure shows a French view of Le Verrier discovering Uranus, 
 while
 Adams uses his telescope for a quite different purpose.
 
 Adams_Leverrier.jpg
 
 
 On Apr 26, 2012, at 2:01 AM, Ethan Merritt wrote:
 
 On Wednesday, April 25, 2012 09:40:01 am James Holton wrote:
 
 If you want to make a big splash, then don't complain about 
 being asked to leap from a great height.
 
 
 This gets my vote as the best science-related quote of the year.
 
Ethan
 
 
 -- 
 Ethan A Merritt
 Biomolecular Structure Center,  K-428 Health Sciences Bldg
 University of Washington, Seattle 98195-7742
 


Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-07 Thread Jrh
Dear Ron,

Quite so, and who cannot laugh at the Yes Minister perfect hospital ward 
operating theatre sketch ( Thankyou James W).

Anyway:-
Let's not get too hung up on one detail of your point 3. Your various points, 
including point 3, added several missing elements in this CCp4bb thread. 

Overall what I am saying is that to me it is good that my University at least 
is gearing up to provide a local Data Archive service which, since I wish to 
link my raw data sets in future to my publications via doi registrations, this 
will give a longevity to them that I cannot guarantee with a 'my raw data are 
in my desk drawer' approach. These could be useful in future reuse ie:- I see 
various improvements to understanding diffuse scattering and, secondly,  to 
squeezing more diffraction resolution out of the Bragg data as computer 
hardware and software both improve. Once in my career I nearly made a mistake 
on a space group choice ( I wrote it up as an educational story in 1996 in Acta 
D); if I had made that mistake the literature would finally have caught up i 
suppose and said :- where are the raw data, let's check that space group 
choice. This latter type of challenge of course is catchable via depositing 
processed Bragg data as triclinic; it probably doesn't need raw images. Finally 
I have a project that I have worked for some years on now to solve the 
structure; there are two, possibly several , crystal lattices and diffuse 
streaks. If  I have to finally give up I will make them available via doi on my 
a university raw data archive; meanwhile of course we make new protein and 
recrystallise etc, the other approach!

Greetings,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 6 Apr 2012, at 17:23, Ronald E Stenkamp stenk...@u.washington.edu wrote:

 Dear John,
 
 Your points are well taken and they're consistent with policies and practices 
 in the US as well.  
 
 I wonder about the nature of the employer's responsibility though.  I sit on 
 some university committees, and the impression I get is that much of the 
 time, the employers are interested in reducing their legal liabilities, not 
 protecting the integrity of science.  The end result is the same though in 
 that the employers get involved and oversee the handling of scientific 
 misconduct.  
 
 What is unclear to me is whether the system for dealing with misconduct is 
 broken.  It seems to work pretty well from my viewpoint.  No system is 
 perfect for identifying fraud, errors, etc, and I understand the idea that 
 improvements might be possible.  However, too many improvements might break 
 the system as well.
 
 Ron 
 
 On Fri, 6 Apr 2012, John R Helliwell wrote:
 
 Dear Ron,
 Re (3):-
 Yes of course the investigator has that responsibility.
 The additional point I would make is that the employer has a share in
 that responsibility. Indeed in such cases the employer university
 convenes a research fraud investigating committee to form the final
 judgement on continued employment.
 A research fraud policy, at least ours, also includes the need for
 avoding inadvertent loss of raw data, which is also deemed to be
 research malpractice.
 Thus the local data repository, with doi registration for data sets
 that underpin publication, seems to me and many others, ie in other
 research fields, a practical way forward for these data sets.
 It also allows the employer to properly serve the research
 investigations of its employees and be duely diligent to the research
 sponsors whose grants it accepts. That said there is a variation of
 funding that at least our UK agencies will commit to 'Data management
 plans'.
 Greetings,
 John
 
 
 
 2012/4/5 Ronald E Stenkamp stenk...@u.washington.edu:
 This discussion has been interesting, and it's provided an interesting 
 forum for those interested in dealing with fraud in science.  I've not 
 contributed anything to this thread, but the message from Alexander Aleshin 
 prodded me to say some things that I haven't heard expressed before.
 
 1.  The sky is not falling!  The errors in the birch pollen antigen pointed 
 out by Bernhard are interesting, and the reasons behind them might be 
 troubling.  However, the self-correcting functions of scientific research 
 found the errors, and current publication methods permitted an airing of 
 the problem.  It took some effort, but the scientific method prevailed.
 
 2.  Depositing raw data frames will make little difference in identifying 
 and correcting structural problems like this one.  Nor will new 
 requirements for deposition of this or that detail.  What's needed for 
 finding the problems is time and interest on the part of someone who's able 
 to look at a structure critically.  Deposition of additional information 
 could be important for that critical look, but deposition alone (at 

[ccp4bb] Category 4 Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-05 Thread Jrh
Dear Herbert,
Category 4, in Manchester, we find is tricky, for want of a better word. 
Needless to say that we have collaborators on our Crystallography Research 
Service who request data sets from eg ten years ago, that are now urgent for 
publication writing up. So we are keeping everything, although only recent 
years the raw diffraction images, and nb soon to be assisted by the Univ 
Manchester centralised Data Repository for its researchers. (Incidentally I 
have kept all of my film oscillation, and inc later Laue data, back to approx 
1977, which fills a whole wall shelf worth, ~ 10 metres.)
Greetings,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 5 Apr 2012, at 13:50, Herbert J. Bernstein y...@bernstein-plus-sons.com 
wrote:

 Dear Colleagues,
 
  Clearly, no system will be able to perfectly preserve every pixel of
 every dataset collected at a cost that can be afforded.  Resources are
 finite and we must set priorities.  I would suggest that, in order
 of declining priority, we try our best to retain:
 
  1.  raw data that might tend to refute published results
  2.  raw data that might tend to support published results
  3.  raw data that may be of significant use in currently
 ongoing studies either in refutation or support
  4.  raw data that may be of significant use in future
 studies
 
 While no archiving system can be perfect, we should not let the
 search for a perfect solution prevent us from working with
 currently available good solutions, and even in this era of tight
 budgets, there are good solutions.
 
  Regards,
Herbert
 
 On 4/5/12 7:16 AM, John R Helliwell wrote:
 Dear 'aales...@burnham.org',
 
 Re the pixel detector; yes this is an acknowledged raw data archiving
 challenge; possible technical solutions include:- summing to make
 coarser images ie in angular range, lossless compression (nicely
 described on this CCP4bb by James Holton) or preserving a sufficient
 sample of data(but nb this debate is certainly not yet concluded).
 
 Re And all this hassle is for the only real purpose of preventing data 
 fraud?
 
 Well.Why publish data?
 Please let me offer some reasons:
 • To enhance the reproducibility of a scientific experiment
 • To verify or support the validity of deductions from an experiment
 • To safeguard against error
 • To allow other scholars to conduct further research based on
 experiments already conducted
 • To allow reanalysis at a later date, especially to extract 'new'
 science as new techniques are developed
 • To provide example materials for teaching and learning
 • To provide long-term preservation of experimental results and future
 access to them
 • To permit systematic collection for comparative studies
 • And, yes, To better safeguard against fraud than is apparently the
 case at present
 
 Also to (probably) comply with your funding agency's grant conditions:-
 Increasingly, funding agencies are requesting or requiring data
 management policies (including provision for retention and access) to
 be taken into account when awarding grants. See e.g. the Research
 Councils UK Common Principles on Data Policy
 (http://www.rcuk.ac.uk/research/Pages/DataPolicy.aspx) and the Digital
 Curation Centre overview of funding policies in the UK
 (http://www.dcc.ac.uk/resources/policy-and-legal/overview-funders-data-policies).
 See also http://forums.iucr.org/viewtopic.php?f=21t=58 for discussion
 on policies relevant to crystallography in other countries. Nb these
 policies extend over derived, processed and raw data, ie without
 really an adequate clarity of policy from one to the other stages of
 the 'data pyramid' ((see
 http://www.stm-assoc.org/integration-of-data-and-publications).
 
 
 And just to mention IUCr Journals Notes for Authors for biological
 macromolecular structures, where we have our ie macromolecular
 crystallography's version of the 'data pyramid' :-
 
 (1) Derived data
 • Atomic coordinates, anisotropic or isotropic displacement
 parameters, space group information, secondary structure and
 information about biological functionality must be deposited with the
 Protein Data Bank before or in concert with article publication; the
 article will link to the PDB deposition using the PDB reference code.
 • Relevant experimental parameters, unit-cell dimensions are required
 as an integral part of article submission and are published within the
 article.
 
 (2) Processed experimental data
 • Structure factors must be deposited with the Protein Data Bank
 before or in concert with article publication; the article will link
 to the PDB deposition using the PDB reference code.
 
 (3) Primary experimental data (here I give small and macromolecule
 Notes for Authors details):-
 For small-unit-cell crystal/molecular structures and macromolecular
 structures IUCr journals have no 

[ccp4bb] Via Annual Reports...Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-05 Thread Jrh
Dear Roger,
At the recent ICSTI Workshop on Delivering Data in science the NSF presenter, 
when I asked about monitoring, replied that the PIs' annual reports should 
include data management aspects.
See http://www.icsti.org/spip.php?rubrique42
Best wishes,
John

Prof John R Helliwell DSc FInstP CPhys FRSC CChem F Soc Biol.
Chair School of Chemistry, University of Manchester, Athena Swan Team.
http://www.chemistry.manchester.ac.uk/aboutus/athena/index.html
 
 

On 5 Apr 2012, at 14:08, Roger Rowlett rrowl...@colgate.edu wrote:

 FYI, every NSF grant proposal now must have a data management plan that 
 describes how all experimental data will be archived and in what formats. I'm 
 not sure how seriously these plans are monitored, but a plan must be provided 
 nevertheless. Is anyone NOT archiving their original data in some way?
 
 Roger Rowlett
 
 On Apr 5, 2012 7:16 AM, John R Helliwell jrhelliw...@gmail.com wrote:
 Dear 'aales...@burnham.org',
 
 Re the pixel detector; yes this is an acknowledged raw data archiving
 challenge; possible technical solutions include:- summing to make
 coarser images ie in angular range, lossless compression (nicely
 described on this CCP4bb by James Holton) or preserving a sufficient
 sample of data(but nb this debate is certainly not yet concluded).
 
 Re And all this hassle is for the only real purpose of preventing data 
 fraud?
 
 Well.Why publish data?
 Please let me offer some reasons:
 • To enhance the reproducibility of a scientific experiment
 • To verify or support the validity of deductions from an experiment
 • To safeguard against error
 • To allow other scholars to conduct further research based on
 experiments already conducted
 • To allow reanalysis at a later date, especially to extract 'new'
 science as new techniques are developed
 • To provide example materials for teaching and learning
 • To provide long-term preservation of experimental results and future
 access to them
 • To permit systematic collection for comparative studies
 • And, yes, To better safeguard against fraud than is apparently the
 case at present
 
 Also to (probably) comply with your funding agency's grant conditions:-
 Increasingly, funding agencies are requesting or requiring data
 management policies (including provision for retention and access) to
 be taken into account when awarding grants. See e.g. the Research
 Councils UK Common Principles on Data Policy
 (http://www.rcuk.ac.uk/research/Pages/DataPolicy.aspx) and the Digital
 Curation Centre overview of funding policies in the UK
 (http://www.dcc.ac.uk/resources/policy-and-legal/overview-funders-data-policies).
 See also http://forums.iucr.org/viewtopic.php?f=21t=58 for discussion
 on policies relevant to crystallography in other countries. Nb these
 policies extend over derived, processed and raw data, ie without
 really an adequate clarity of policy from one to the other stages of
 the 'data pyramid' ((see
 http://www.stm-assoc.org/integration-of-data-and-publications).
 
 
 And just to mention IUCr Journals Notes for Authors for biological
 macromolecular structures, where we have our ie macromolecular
 crystallography's version of the 'data pyramid' :-
 
 (1) Derived data
 • Atomic coordinates, anisotropic or isotropic displacement
 parameters, space group information, secondary structure and
 information about biological functionality must be deposited with the
 Protein Data Bank before or in concert with article publication; the
 article will link to the PDB deposition using the PDB reference code.
 • Relevant experimental parameters, unit-cell dimensions are required
 as an integral part of article submission and are published within the
 article.
 
 (2) Processed experimental data
 • Structure factors must be deposited with the Protein Data Bank
 before or in concert with article publication; the article will link
 to the PDB deposition using the PDB reference code.
 
 (3) Primary experimental data (here I give small and macromolecule
 Notes for Authors details):-
 For small-unit-cell crystal/molecular structures and macromolecular
 structures IUCr journals have no current binding policy regarding
 publication of diffraction images or similar raw data entities.
 However, the journals welcome efforts made to preserve and provide
 primary experimental data sets. Authors are encouraged to make
 arrangements for the diffraction data images for their structure to be
 archived and available on request.
 For articles that present the results of powder diffraction profile
 fitting or refinement (Rietveld) methods, the primary diffraction
 data, i.e. the numerical intensity of each measured point on the
 profile as a function of scattering angle, should be deposited.
 Fibre data should contain appropriate information such as a photograph
 of the data. As primary diffraction data cannot be satisfactorily
 extracted from such figures, the basic digital diffraction data should
 be deposited.
 
 
 Finally to mention that 

Re: [ccp4bb] MAD

2012-01-21 Thread Jrh
Dear Colleagues,
The real issue is the 'anomalous' word introduced as an X-ray scattering theory 
correction, which was not anomalous but the actual physical situation of 
resonance scattering.  Thus the most recent of the Anomalous Scattering 
conferences was correctly called REXS2011. Ie Resonant Elastic X-ray 
Scattering. 

The 1975 Anomalous Scattering Conference book incidentally has the Hoppe and 
Jakubowski Ni and Co K alpha two wavelength study either side of the Fe K edge 
for phase determination of the erythrocruorin protein, in turn based on the 
Okaya and Pepinsky 1956 formalism. These are MAD but 'simply' not synchrotron.

Francis Crick's autobiography 'What Mad Pursuit' will give you a further link 
to MAD, based on weak ie small intensity changes.

Just to also mention I regularly refer to the 'Hendrickson Se-met MAD' method. 

The history is interesting. Keith Hodgson is a must mention name, as is 
Stanford Synchrotron Radiation Laboratory.

A most recent wrinkle in nomenclature in this area is the use in chemical 
crystallography by some of Resonant scattering for off resonance ie in 
determining the hand of organics. At present I see no way around correcting 
such mentions but with the unfortunate term:- 
Off-resonance resonance scattering Flack parameter determination of the hand.

Greetings,
John

Prof John R Helliwell DSc 
 
 

On 19 Jan 2012, at 17:50, Ian Tickle ianj...@gmail.com wrote:

 Perhaps I could chime in with a bit of history as I understand it.
 
 The term 'dispersion' in optics, as everyone who knows their history
 is aware of, refers to the classic experiment by Sir Isaac Newton at
 Trinity College here in Cambridge where he observed white light being
 split up ('dispersed') into its component colours by a prism.  This is
 of course due to the variation in refractive index of glass with
 wavelength, so then we arrive at the usual definition of optical
 dispersion as dn/dlambda, i.e. the first derivative of the refractive
 index with respect to the wavelength.
 
 Now the refractive index of an average crystal at around 1 Ang
 wavelength differs by about 1 part in a million from 1, however it can
 be determined by very careful and precise interferometric experiments.
 It's safe to say therefore that the dispersion of X-rays (anomalous
 or otherwise) has no measurable effect whatsoever as far as the
 average X-ray diffraction experiment (SAD, MAD or otherwise) is
 concerned.  The question then is how did the term 'anomalous
 dispersion' get to be applied to X-ray diffraction?  The answer is
 that it turns out that the equation ('Kramer-Kronig relationship')
 governing X-ray scattering is completely analogous to that governing
 optical dispersion, so it's legitimate to use the term 'dispersive'
 (meaning 'analogous to dispersion') for the real part of the
 wavelength-dependent component of the X-ray scattering factor, because
 the real part of the refractive index is what describes dispersion
 (the imaginary part in both cases describes absorption).
 
 So then from 'dispersive' to 'dispersion' to describe the wavelength
 dependence of X-ray scattering is only a short step, even though it
 only behaves _like_ dispersion in its dependence on wavelength.
 However having two different meanings for the same word can get
 confusing and clearly should be avoided if at all possible.
 
 So what does this have to do with the MAD acronym?  I think it stemmed
 from a visit by Wayne Hendrickson to Birkbeck in London some time
 around 1990: he was invited by Tom Blundell to give a lecture on his
 MAD experiments.  At that time Wayne called it multi-wavelength
 anomalous dispersion.  Tom pointed out that this was really a misnomer
 for the reasons I've elucidated above.  Wayne liked the MAD acronym
 and wanted to keep it so he needed a replacement term starting with D
 and diffraction was the obvious choice, and if you look at the
 literature from then on Wayne at least consistently called it
 multi-wavelength anomalous diffraction.
 
 Cheers
 
 -- Ian
 
 On 18 January 2012 18:23, Phil Jeffrey pjeff...@princeton.edu wrote:
 Can I be dogmatic about this ?
 
 Multiwavelength anomalous diffraction from Hendrickson (1991) Science Vol.
 254 no. 5028 pp. 51-58
 
 Multiwavelength anomalous diffraction (MAD) from the CCP4 proceedings
 http://www.ccp4.ac.uk/courses/proceedings/1997/j_smith/main.html
 
 Multi-wavelength anomalous-diffraction (MAD) from Terwilliger Acta Cryst.
 (1994). D50, 11-16
 
 etc.
 
 
 I don't see where the problem lies:
 
 a SAD experiment is a single wavelength experiment where you are using the
 anomalous/dispersive signals for phasing
 
 a MAD experiment is a multiple wavelength version of SAD.  Hopefully one
 picks an appropriate range of wavelengths for whatever complex case one has.
 
 One can have SAD and MAD datasets that exploit anomalous/dispersive signals
 from multiple difference sources.  This after all is one of the things that
 SHARP is particularly good at 

[ccp4bb] Jrh correction Re: [ccp4bb] Mg or water?

2011-12-16 Thread Jrh
Jrh correction:- in the last sentence the final statistical test must involve 
the comparison of each of the two different dictionary distance values in turn 
(essentially assumed to be exactly known for simplicity) versus the 
experimentally derived distance and its DPI derived sigma (L).
Greetings,
John


Prof John R Helliwell DSc 
 
 

On 16 Dec 2011, at 16:07, John R Helliwell jrhelliw...@gmail.com wrote:

 Dear Bie Gao,
 You can obtain an estimate of the standard deviation on your putative
 Mg to ligand distance using the diffraction precision index (DPI)
 approach of Cruickshank and Blow (1999 and 2002 Acta Cryst D). ie:-
 D M Blow Acta Cryst. (2002). D58, 792-797
 Synopsis: The formulae for the diffraction-component precision index
 introduced by Cruickshank [(1999), Acta Cryst. D55, 583-601] are
 simplifed using two approximations. A rearranged formula for the
 precision index is presented which can readily be calculated from
 experimental data.
 
 
 Using this you can add a quantitative test of statistical significance
 to the, of course, very senible earlier inputs to your questions. Do
 note though that the DPI is calculated as a number for the position
 error, sigma (x),  on an atom with an average B factor. You have to
 adjust your DPI value for the atoms in question via the square root of
 their B value ratioed to the average B. For example a lower than
 average B gives a more precise sigma (x) than the average. Finally the
 sigma on the bond distance itself is calculated from the sigma (x)
 pair of values for each atom via the quadrature formula. The usual
 statistical test of significance is if the ligand distance (L) is then
 3sigma (L).
 
 Best wishes,
 Prof John R Helliwell DSc.
 
 On Thu, Dec 15, 2011 at 9:29 PM, bie gao gao...@gmail.com wrote:
 Thank you all for the help. These are the key factors I collected so far:
 1. Distance, Mg--O is shorter (2.0 -- 2.4A)
 2. Coordination, Mg is octahedrally coordinated.
 3. B factors, local B factors (i.e. the residues that coordinate with the
 ion) should be similar.
 4. Use Mn++ to replace Mg.
 I will look into these more.
 
 Best,
 Gao
 
 
 On Wed, Dec 14, 2011 at 5:45 PM, bie gao gao...@gmail.com wrote:
 
 Hi every,
 
 I'm working with 2 crystal forms of a protein from 2 different
 crystallization conditions. Condition 1 has 100mM MgCl2. Condition 2
 doesn't. Both are ~2.9 angstrom.  The 2 structures are virtually identical
 except in condition1, there is a clear positive density surrounded by a Glu
 side chain carboxyl and a couple of main carboxyl groups. (Again, condition
 2 doesn't have this density).
 
 My initial thought is that a Mg atom is incorporated and it fits well. But
 the problem is we can not role out the possibility of a water molecule.
 Refining with Mg gives a b-factor of 42 (about average for the whole
 protein). The b-factor is 21 when refining with a water. Both cases there is
 no positive/negative density at contour=2.0.
 
 Based on the current data, is there any other role we can apply to see how
 likely it is a Mg or water. Or  anomalous scattering is the only way? Thanks
 for your suggestions.
 
 Best,
 Gao
 
 
 
 
 
 -- 
 Professor John R Helliwell DSc


Re: [ccp4bb] To archive or not to archive, that's the question!

2011-10-29 Thread Jrh
Dear Gerard K,
Many thanks indeed for this.
Like Gerard Bricogne you also indicate that the location option being the 
decentralised one is 'quite simple and very cheap in terms of centralised 
cost'. The SR Facilities worldwide I hope can surely follow the lead taken by 
Diamond Light Source and PaN, the European Consortium of SR and Neutron 
Facilities, and keep their data archives and also assist authors with the doi 
registration process for those datasets that result in publication. Linking to 
these dois from the PDB for example is as you confirm straightforward. 

Gerard B's pressing of the above approach via the 'Pilot project'  within the 
IUCr DDD WG various discussions, with a nicely detailed plan, brought home to 
me the merit of the above approach for the even greater challenge for raw data 
archiving for chemical crystallography, both in terms of number of datasets and 
also the SR Facilities role being much smaller. IUCr Journals also note the 
challenge of moving large quantities of data around ie if the Journals were to 
try and host everything for chemical crystallography, and them thus becoming 
'the centre' for these datasets.

So:-  Universities are now establishing their own institutional repositories, 
driven largely by Open Access demands of funders. For these to host raw 
datasets that underpin publications is a reasonable role in my view and indeed 
they already have this category in the University of Manchester eScholar 
system, for example.  I am set to explore locally here whether they would 
accommodate all our Lab's raw Xray images datasets per annum that underpin our 
published crystal structures. 

It would be helpful if readers of this CCP4bb could kindly also explore with 
their own universities if they have such an institutional repository and if raw 
data sets could be accommodated. Please do email me off list with this 
information if you prefer but within the CCP4bb is also good. 

Such an approach involving institutional repositories would also work of course 
for the 25% of MX structures that are for non SR datasets.

All the best for a splendid PDB40 Event.

Greetings,
John
Prof John R Helliwell DSc 
 
 

On 28 Oct 2011, at 22:02, Gerard DVD Kleywegt ger...@xray.bmc.uu.se wrote:

 Hi all,
 
 It appears that during my time here at Cold Spring Harbor, I have missed a 
 small debate on CCP4BB (in which my name has been used in vain to boot).
 
 I have not yet had time to read all the contributions, but would like to make 
 a few points that hopefully contribute to the discussion and keep it with two 
 feet on Earth (as opposed to La La Land where the people live who think that 
 image archiving can be done on a shoestring budget... more about this in a 
 bit).
 
 Note: all of this is on personal title, i.e. not official wwPDB gospel. Oh, 
 and sorry for the new subject line, but this way I can track the replies more 
 easily.
 
 It seems to me that there are a number of issues that need to be separated:
 
 (1) the case for/against storing raw data
 (2) implementation and resources
 (3) funding
 (4) location
 
 I will say a few things about each of these issues in turn:
 
 ---
 
 (1) Arguments in favour and against the concept of storing raw image data, as 
 well as possible alternative solutions that could address some of the issues 
 at lower cost or complexity.
 
 I realise that my views carry a weight=1.0 just like everybody else's, and 
 many of the arguments and counter-arguments have already been made, so I will 
 not add to these at this stage.
 
 ---
 
 (2) Implementation details and required resources.
 
 If the community should decide that archiving raw data would be 
 scientifically useful, then it has to decide how best to do it. This will 
 determine the level of resources required to do it. Questions include:
 
 - what should be archived? (See Jim H's list from (a) to (z) or so.) An 
 initial plan would perhaps aim for the images associated with the data used 
 in the final refinement of deposited structures.
 
 - how much data are we talking about per dataset/structure/year?
 
 - should it be stored close to the source (i.e., responsibility and costs for 
 depositors or synchrotrons) or centrally (i.e., costs for some central 
 resource)? If it is going to be stored centrally, the cost will be 
 substantial. For example, at the EBI -the European Bioinformatics Institute- 
 we have 15 PB of storage. We pay about 1500 GBP (~2300 USD) per TB of storage 
 (not the kind you buy at Dixons or Radio Shack, obviously). For stored data, 
 we have a data-duplication factor of ~8, i.e. every file is stored 8 times 
 (at three data centres, plus back-ups, plus a data-duplication centre, plus 
 unreleased versus public versions of the archive). (Note - this is only for 
 the EBI/PDBe! RCSB and PDBj will have to acquire storage as well.) Moreover, 
 disks have to be housed in a building (not free!), with cooling, security 
 measures, security staff, 

Re: [ccp4bb] To archive or not to archive, that's the question!

2011-10-29 Thread Jrh
Dear Herbert,
I imagine it likely that eg The Univ Manchester eScholar system will have in 
place duplicate storage for the reasons you outline below. However for it to be 
geographically distant is, to my reckoning, less likely, but still possible. I 
will add that further query to my first query to my eScholar user support re 
dataset sizes and doi registration.
Greetings,
John
Prof John R Helliwell DSc 
 
 

On 29 Oct 2011, at 15:49, Herbert J. Bernstein y...@bernstein-plus-sons.com 
wrote:

 One important issue to address is how deal with the perceived
 reliability issues of the federated model and how to start to
 approach the higher reliability of the centralized model described bu
 Gerard K, but without incurring what seems to be at present
 unacceptable costs.  One answer comes from the approach followed in
 communications systems.  If the probability of data loss in each
 communication subsystem is, say, 1/1000, then the probability of data
 loss in two independent copies of the same lossy system is only
 1/1,000,000.  We could apply that lessonto the
 federated data image archive model by asking each institution
 to partner with a second independent, and hopefully geographically
 distant, institution, with an agreement for each to host copies
 of the other's images.  If we restrict that duplication protocol, at least at
 first, to those images strongly related to an actual publication/PDB
 deposition, the incremental cost of greatly improved reliability
 would be very low, with no disruption of the basic federated
 approach being suggested.
 
 Please note that I am not suggesting that institutional repositories
 will have 1/1000 data loss rates, but they will certainly have some
 data loss rate, and this modest change in the proposal would help to
 greatly lower the impact of that data loss rate and allow us to go
 forward with greater confidence.
 
 Regards,
  Herbert
 
 
 At 7:53 AM +0100 10/29/11, Jrh wrote:
 Dear Gerard K,
 Many thanks indeed for this.
 Like Gerard Bricogne you also indicate that the location option being the 
 decentralised one is 'quite simple and very cheap in terms of centralised 
 cost'. The SR Facilities worldwide I hope can surely follow the lead taken 
 by Diamond Light Source and PaN, the European Consortium of SR and Neutron 
 Facilities, and keep their data archives and also assist authors with the 
 doi registration process for those datasets that result in publication. 
 Linking to these dois from the PDB for example is as you confirm 
 straightforward.
 
 Gerard B's pressing of the above approach via the 'Pilot project' within the 
 IUCr DDD WG various discussions, with a nicely detailed plan, brought home 
 to me the merit of the above approach for the even greater challenge for raw 
 data archiving for chemical crystallography, both in terms of number of 
 datasets and also the SR Facilities role being much smaller. IUCr Journals 
 also note the challenge of moving large quantities of data around ie if the 
 Journals were to try and host everything for chemical crystallography, and 
 them thus becoming 'the centre' for these datasets.
 
 So:-  Universities are now establishing their own institutional 
 repositories, driven largely by Open Access demands of funders. For these to 
 host raw datasets that underpin publications is a reasonable role in my view 
 and indeed they already have this category in the University of Manchester 
 eScholar system, for example.  I am set to explore locally here whether they 
 would accommodate all our Lab's raw Xray images datasets per annum that 
 underpin our published crystal structures.
 
 It would be helpful if readers of this CCP4bb could kindly also explore with 
 their own universities if they have such an institutional repository and if 
 raw data sets could be accommodated. Please do email me off list with this 
 information if you prefer but within the CCP4bb is also good.
 
 Such an approach involving institutional repositories would also work of 
 course for the 25% of MX structures that are for non SR datasets.
 
 All the best for a splendid PDB40 Event.
 
 Greetings,
 John
 Prof John R Helliwell DSc
 
 
 
 On 28 Oct 2011, at 22:02, Gerard DVD Kleywegt ger...@xray.bmc.uu.se wrote:
 
 Hi all,
 
 It appears that during my time here at Cold Spring Harbor, I have missed a 
 small debate on CCP4BB (in which my name has been used in vain to boot).
 
 I have not yet had time to read all the contributions, but would like to 
 make a few points that hopefully contribute to the discussion and keep it 
 with two feet on Earth (as opposed to La La Land where the people live who 
 think that image archiving can be done on a shoestring budget... more about 
 this in a bit).
 
 Note: all of this is on personal title, i.e. not official wwPDB gospel. Oh, 
 and sorry for the new subject line, but this way I can track the replies 
 more easily.
 
 It seems to me that there are a number of issues that need to be separated

Re: [ccp4bb] IUCr committees, depositing images

2011-10-25 Thread Jrh
Dear James,
This is technically ingenious stuff.

Perhaps it could be applied to help the 'full archive challenge' ie containing 
many data sets that will never lead to publication/ database deposition?

However for the latter,publication/deposition, subset you would surely not 
'tamper' with the raw measurements? 

The 'grey area' between the two clearcut cases  ie where eventually 
publication/deposition MAY result then becomes the challenge as to whether to 
compress or not? (I would still prefer no tampering.)

Greetings,
John

Prof John R Helliwell DSc 




On 24 Oct 2011, at 22:56, James Holton jmhol...@lbl.gov wrote:

 The Pilatus is fast, but or decades now we have had detectors that can read 
 out in ~1s.  This means that you can collect a typical ~100 image dataset in 
 a few minutes (if flux is not limiting).  Since there are ~150 beamlines 
 currently operating around the world and they are open about 200 days/year, 
 we should be collecting ~20,000,000 datasets each year.
 
 We're not.
 
 The PDB only gets about 8000 depositions per year, which means either we 
 throw away 99.96% of our images, or we don't actually collect images anywhere 
 near the ultimate capacity of the equipment we have.  In my estimation, both 
 of these play about equal roles, with ~50-fold attrition between ultimate 
 data collection capacity and actual collected data, and another ~50 fold 
 attrition between collected data sets and published structures.
 
 Personally, I think this means that the time it takes to collect the final 
 dataset is not rate-limiting in a typical structural biology project/paper. 
  This does not mean that the dataset is of little value.  Quite the opposite! 
  About 3000x more time and energy is expended preparing for the final dataset 
 than is spent collecting it, and these efforts require experimental feedback. 
  The trick is figuring out how best to compress the data used to solve a 
 structure for archival storage.  Do the previous data sets count?  Or 
 should the compression be lossy about such historical details?  Does the 
 stuff between the spots matter?  After all, h,k,l,F,sigF is really just a 
 form of data compression.  In fact, there is no such thing as raw data.  
 Even raw diffraction images are a simplification of the signals that came 
 out of the detector electronics.  But we round-off and average over a lot of 
 things to remove noise.  Largely because noise is difficult to compress.  
 The question of how much compression is too much compression depends on which 
 information (aka noise) you think could be important in the future.
 
 When it comes to fine-sliced data, such as that from Pilatus, the main reason 
 why it doesn't compress very well is not because of the spots, but the 
 background.  It occupies thousands of times more pixels than the spots.  Yes, 
 there is diffuse scattering information in the background pixels, but this 
 kind of data is MUCH smoother than the spot data (by definition), and 
 therefore is optimally stored in larger pixels.  Last year, I messed around a 
 bit with applying different compression protocols to the spots and the 
 background, and found that ~30 fold compression can be easily achieved if you 
 apply h264 to the background and store the spots with lossless png 
 compression:
 
 http://bl831.als.lbl.gov/~jamesh/lossy_compression/
 
 I think these results speak to the relative information content of the 
 spots and the pixels between them.  Perhaps at least the online version of 
 archived images could be in some sort of lossy-background format?  With the 
 real images in some sort of slower storage (like a room full of tapes that 
 are available upon request)?  Would 30-fold compression make the storage of 
 image data tractable enough for some entity like the PDB to be able to afford 
 it?
 
 
 I go to a lot of methods meetings, and it pains me to see the most brilliant 
 minds in the field starved for interesting data sets.  The problem is that 
 it is very easy to get people to send you data that is so bad that it can't 
 be solved by any software imaginable (I've got piles of that!).  As a 
 developer, what you really need is a right answer so you can come up with 
 better metrics for how close you are to it.  Ironically, bad, unsolvable data 
 that is connected to a right answer (aka a PDB ID) is very difficult to 
 obtain.  The explanations usually involve protestations about being in the 
 middle of writing up the paper, the student graduated and we don't understand 
 how he/she labeled the tapes, or the RAID crashed and we lost it all, etc. 
 etc.  Then again, just finding someone who has a data set with the kind of 
 problem you are interested in is a lot of work!  So is figuring out which 
 problem affects the most people, and is therefore interesting.
 
 Is this not exactly the kind of thing that publicly-accessible centralized 
 scientific databases are created to address?
 
 -James Holton
 MAD Scientist
 
 On 

[ccp4bb] JRH input Re: [ccp4bb] Neutron data collection

2011-09-23 Thread Jrh
Dear Rex,
These issues of energy overlaps are addressed in theory, for either diffraction 
probe, in Cruickshank, Helliwell and Moffat 1987 Acta Cryst, and also by the 
same authors in 1991 Acta Cryst for spatial overlaps,  and in practice in eg 
Ren et al JSR 1999 and Nieh  et al JSR 1999. Basically the predominance of 
singlet reflection Laue spots is a consequence of the probability of prime 
numbers and, where you do have energy overlapped spots, the effectiveness of 
energy overlaps' deconvolution arises where you have symmetry equivalents 
and/or multiple occurrences of the same hkl, which usually one does.  The low 
resolution reflections in particular have a higher probability of occurring in 
multiples and thus are mainly the ones that require the deconvolution of 
intensities ie of the fundamental and its harmonic(s). The extracted 
intensities so obtained are actually of a very good precision. A high 
completeness through all the resolution range is overall readily achievable 
with Laue.

Re point 2. These issues, and advantages,  are explored in Blakeley et al 2004 
PNAS which features freezing of such large crystals through to protein and 
ordered solvent, which there is more of, model refinement. Losses of 
diffraction quality for freezing attempts with bigger crystals in our 
experience is worse though ie more probable than with small crystals, and so if 
you don't have some sort of supply, would certainly be off putting, but 
obviously it's doable. Also it is my belief that as experience grows so will 
such procedures improve and the scope thereby widen, encompassing for example 
freeze trapping studies with neutrons as probe.

Greetings,
John
Prof John R Helliwell DSc 
 
 

On 21 Sep 2011, at 10:52, REX PALMER rex.pal...@btinternet.com wrote:

 Re Neutron Data Collection:
 1. What are the limits to data set completeness imposed by a Laue experiment 
 versus those of monochromatic data collection?
 2. What problems are caused by flash freezing the larger protein crystals 
 used for neutron data collection which do not occur for X-ray data collection 
 ie because smaller crystals can be used.
 Any help will be greatly appreciated.  
  
 Rex Palmer
 http://www.bbk.ac.uk/biology/our-staff/emeritus-staff
 http://rexpalmer2010.homestead.com


[ccp4bb] Neat and tidy Re: [ccp4bb] more Computer encryption matters

2011-08-19 Thread Jrh
That is neat and tidy!
I don't suppose you know if Windows 7 might have such a facility?
Anyway it's a good tip and I will start looking in that direction,
Thanks,
John 

Prof John R Helliwell DSc


On 19 Aug 2011, at 09:05, Phil Evans p...@mrc-lmb.cam.ac.uk wrote:

 With OSX 10.6 Snow Leopard, you can FileVault your sensitive home directory, 
 but put all your non-sensitive compute || i/o-intensive files outside your 
 home directory (eg in /Users/Stuff)
 
 I don't know whether you can do this in 10.7 Lion
 
 Phil
 
 
 On 18 Aug 2011, at 22:50, William G. Scott wrote:
 
 OS X 10.7 enables you to do whole-drive encryption.
 
 Here is a description from Arse Technica:
 
 http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13
 
 I ain't never tried it myself.  10.7 seems to run slow enough as it is.
 
 -- Bill
 
 
 
 On Aug 18, 2011, at 5:34 AM, Andreas Förster wrote:
 
 Since we're on the subject...  I've been tempted on and off to encrypt my 
 hard drive, but after getting burned once a hundred years ago when 
 encrypted data turned into garbled bytes all of a sudden I've been 
 hesitant.  I've gone so far as to install TrueCrypt (on a MacBook), but I 
 haven't put it into action.  Before I do, the big question:
 
 What software do people on the bb use for encryption?  What can be 
 recommended without hesitation?
 
 Thanks.
 
 
 Andreas
 
 
 On 18/08/2011 1:19, Eric Bennett wrote:
 John,
 
 Since so many people have said it's flawless, I'd like to point out this 
 is not always the case.  The particular version of the particular package 
 that we have installs some system libraries that caused a program I use on 
 a moderately frequent basis to crash every time I tried to open a file on 
 a network drive.  It took me about 9 months to figure out what the cause 
 was, during which time I had to manually copy things to the local drive 
 before I could open them in that particular program.  The vendor of the 
 encryption software has a newer version but our IT department is using an 
 older version.  There is another workaround but it's kind of a hack.
 
 So I'd say problems are very rare, but if you run into strange behavior, 
 don't rule out encryption as a possible cause.
 
 -Eric
 
 
 
 -- 
  Andreas Förster, Research Associate
  Paul Freemont  Xiaodong Zhang Labs
 Department of Biochemistry, Imperial College London
  http://www.msf.bio.ic.ac.uk


[ccp4bb] Thankyou re encryption matters

2011-08-18 Thread Jrh
Dear Colleagues,
Thankyou for your detailed, and prompt, helpful advice on this issue, which I 
much appreciate.

I see that I have overestimated my anxieties but should probably still move a 
little cautiously.

Thus I will first suggest to my IT encryptors' taskforce, for want of a better 
term, that as I  promise to delete any past 'sensitive files' , and in future 
will only consult data of this type if held in a university repository, I 
propose not to have encryption for the time being.

Best regards,
John
Prof John R Helliwell DSc


[ccp4bb] Computer encryption matters

2011-08-17 Thread Jrh
Dear Colleagues,
My institution is introducing concerted measures for improved security via 
encryption of files. A laudable plan in case of loss or theft of a computer 
with official files eg exams or student records type of information stored on 
it.

Files, folders or a whole disk drive can be encrypted. Whilst I can target 
specific files, this could get messy and time consuming to target them and keep 
track of new to-be-encrypted files. It is tempting therefore to agree to 
complete encryption. However, as my laptop is my calculations' workbench, as 
well as office tasks, I am concerned that unexpected runtime errors may occur 
from encryption and there may be difficulties of transferability of data files 
to colleagues and students, and to eg PDB.

Does anyone have experience of encryption? Are my anxieties misplaced? If not, 
will I need to plan to separate office files, which could then all be 
encrypted, from crystallographic data files/calculations, which could be left 
unencrypted. If separate treatment is the best plan does one need two computers 
once more, rather than the one laptop? A different solution would be to try to 
insist on an institutional repository keeping such files.

In anticipation,
Thankyou,
John
Prof John R Helliwell DSc


Re: [ccp4bb] Femtosecond Electron Beam

2011-04-14 Thread Jrh
Dear Jacob,
Ahmed Zewail's papers are worth consulting on this, although not protein/bio.  
See also the book by Zewail and Thomas, recently published, easily findable on 
amazon etc, as a handy overview.
Best wishes,
John 

Prof John R Helliwell DSc


On 14 Apr 2011, at 14:38, Jacob Keller j-kell...@fsm.northwestern.edu wrote:

 Dear Crystallographers,
 
 is there any reason why we are not considering using super-intense
 femtosecond electron bursts, instead of photons? Since the scattering
 of electrons is much more efficient, and because they can be focussed
 to solve the phase problem, it seems that it might be worthwhile to
 explore that route of single-molecule structure solution by using
 electrospray techniques similar to the recently-reported results using
 the FEL. Is there some technical limitation which would hinder this
 possibility?
 
 JPK
 
 -- 
 ***
 Jacob Pearson Keller
 Northwestern University
 Medical Scientist Training Program
 cel: 773.608.9185
 email: j-kell...@northwestern.edu
 ***


[ccp4bb] Jrh input Re: [ccp4bb] what to do with disordered side chains

2011-03-31 Thread Jrh
Dear Ed,
Thankyou for this and apologies for late reply.
If one has chemical evidence for the presence of residues but these residues 
are disordered I find the delete atoms option disagreeable. Such a static 
disorder situation should be described by a high atomic displacement parameter, 
in my view. (nb the use of ADP is better than B factor terminology). 
Yours sincerely,
John
Prof John R Helliwell DSc


On 29 Mar 2011, at 22:43, Ed Pozharski epozh...@umaryland.edu wrote:

 The results of the online survey on what to do with disordered side
 chains (from total of 240 responses):
 
 Delete the atoms 43%
 Let refinement take care of it by inflating B-factors41%
 Set occupancy to zero12%
 Other 4%
 
 Other suggestions were:
 
 - Place atoms in most likely spot based on rotomer and contacts and
 indicate high positional sigmas on ATMSIG records
 - To invent refinement that will spread this residues over many rotamers
 as this is what actually happened
 - Delet the atoms but retain the original amino acid name
 - choose the most common rotamer (B-factors don't inflate, they just
 rise slightly)
 - Depends. if the disordered region is unteresting, delete atoms.
 Otherwise, try to model it in one or more disordered model (and then
 state it clearly in the pdb file)
 - In case that no density is in the map, model several conformations of
 the missing segment and insert it into the PDB file with zero
 occupancies. It is equivalent what the NMR people do. 
 - Model it in and compare the MD simulations with SAXS
 - I would assumne Dale Tronrod suggestion the best. Sigatm labels.
 - Let the refinement inflate B-factors, then set occupancy to zero in
 the last round.
 
 Thanks to all for participation,
 
 Ed.
 
 -- 
 I'd jump in myself, if I weren't so good at whistling.
   Julian, King of Lemurs


[ccp4bb] Philosophy and Re: [ccp4bb] I/sigmaI of 3.0 rule

2011-03-05 Thread Jrh
Dear Colleagues,
Agreed!  There is a wider point though which is that the 3D structure and data 
can form a potential for further analysis and thus the data and the structure 
can ideally be more than the current paper's contents. Obviously artificially 
high I/ sig I  cut offs are both unfortunate for the current article and such 
future analyses. In chemical crystallography this potential for further 
analyses is widely recognised. Eg a crystal structure should have all static 
disorder sorted, methyl rotor groups correctly positioned etc even if not 
directly relevant to an article. Such rigour is the requirement for Acta Cryst 
C , for example, in chemical crystallography. 
Best wishes,
John


Prof John R Helliwell DSc


On 4 Mar 2011, at 20:36, Roberto Battistutta roberto.battistu...@unipd.it 
wrote:

 Dear Phil,
 I completely agree with you, your words seem to me the best
 philosophical outcome of the discussion and indicate the right
 perspective to tackle this topic. In particular you write In the end, the
 important question as ever is does the experimental data support the
 conclusions drawn from it? and that will depend on local information
 about particular atoms and groups, not on global indicators. Exactly, in
 my case, all the discussion of the structures was absolutely independent
 from having 1.9, 2.0 or 2.1 A nominal resolution, or to cut at 1.5 or 2.0
 or 3.0 I/sigma. This makes the unjustified (as this two-day discussion has
 clearly pointed out) technical critics of the reviewer even more
 upsetting.
 Ciao,
 Roberto


Re: [ccp4bb] strange density

2011-02-24 Thread Jrh
Dear Alex 
I take it this density effect is in each of the four molecules?
I wonder if the crescent is a series termination effect, although relatively 
rare in our field. If so what might have caused it? Do you have (radial) gaps 
in your data set eg due to heavy ice rings?
Best wishes,
John

Prof John R Helliwell DSc


On 24 Feb 2011, at 00:34, Alex Singer alexander.sin...@utoronto.ca wrote:

 Hi -- I have a high resolution structure (1.6 A) where I'm ready to deposit 
 except I have
 some very strange density, shown in the two pictures here -- sort of a sphere 
 with a
 split cresent around it, falling between molecule A and B His 138 imidazole 
 rings.  The
 sphere is modeled as a Cl atom, more for kicks because resulting 2Fo-Fc 
 maps still have
 considerable positive difference density throughout the sphere.  There are 4 
 molecules in
 the AU, the imidazole ring of H138 in molecules C and D point into a solvent 
 channel.
 Crystallization conditions are 0.2M Mg Chloride, 0.1M Bis-Tris pH6.5, 25% 
 PEG3350,
 cocrystallized in 2.5mM Glycero-3Phosphocholine and cryoprotected by dipping 
 in
 Paratone_N oil.
 
 Let me know what you're thoughts are and thank you for your help.
 
 Alex Singer
 
 
 -- 
 Dr. Alex Singer
 C.H. Best Institute
 112 College St. Room 70
 University of Toronto
 Toronto, Canada, M5G 1L6
 416-978-4033
 snapshot3.jpg
 snapshot4.jpg


Re: [ccp4bb] Why 0.1% bandwidth?

2011-02-09 Thread Jrh
Dear Andre
For a continuum wavelength band source this unit is needed. A monochromator of 
a given type then extracts it's rocking curves worth of bandpass. Or in Laue 
diffraction a wide band is selected with eg a mirror cut off at short 
wavelengths and a filter or transmission mirror at long wavelengths. The sample 
crystal picks out it's rocking curves worth of bandpass. 
The source type can be xray or neutron. The synchrotron undulator  is a special 
case where periodic magnets of low field create a gathering of wavelengths 
emitted from the SR source ie by constructive inteference into a narrow 
bandpass which can be say 0.02%.
This is technical I realise but hope that sets you on the right track to 
consult a relevant textbook.
Best wishes,
John   

Prof John R Helliwell DSc


On 9 Feb 2011, at 12:13, Andre Luis Berteli Ambrosio 
andre.ambro...@lnbio.org.br wrote:

 Dear ccp4bb,
 
  
 
 I sometimes find the flux of x-ray sources reported in units of 
 “photons/s/0.1% bandwidth” instead of simply “photons/s”.
 
 Where does the “1/0.1% bandwidth” unit come from? I have also seen other 
 percentages like 0.01% bw  or 0.02% bw…
 
 Is it simply defining some degree of acceptance in energy (for example, the 
 flux between 8 KeV +/- 8 eV for a given stored current)? Does it somehow have 
 to do with energy resolution?
 
 Thank you in advance for your answers,
 
  
 
 -Andre Ambrosio
 
  


[ccp4bb] See Jiang and Sweet ......Re: [ccp4bb] Structures determined: breakdown of methods

2011-01-16 Thread Jrh
Dear Rex,
A very informative and careful analysis to help your question be answered can 
be found 
In Jiang and Sweet JSR 2004, 11, 319-327.
Greetings,
John
Prof John R Helliwell DSc


On 15 Jan 2011, at 20:28, REX PALMER rex.pal...@btinternet.com wrote:

 Does anyone know of a statistical breakdown of successful protein structure 
 determinations in terms of the method used?
  
 Rex Palmer
 Birkbeck College


Re: [ccp4bb] Strange spots

2010-10-29 Thread Jrh
Dear Gerard
I will do a scan of fig 8.1b asap, probably Monday.
Greetings,
John


Sent from my iPad

On 29 Oct 2010, at 18:44, Gerard Bricogne g...@globalphasing.com wrote:

 Dear John,
 
 Would it be possible to know more about what you are referring to
 without having to buy (or steal) your book :-)) ?
 
 Thank you in advance!
 
 
 With best wishes,
 
  Gerard.
 
 --
 On Fri, Oct 29, 2010 at 06:41:51PM +0100, John R Helliwell wrote:
 Dear Dave,
 You have a collector's item there!
 The closest I have seen is illustrated in my book 'Macromolecular
 Crystallography with Synchrotron Radiation' page 321, which is a small
 molecule example.
 Best wishes,
 John
 Prof John R Helliwell DSc
 
 
 On Fri, Oct 29, 2010 at 5:08 PM, David Goldstone
 david.goldst...@nimr.mrc.ac.uk wrote:
 Dear All,
 
 Does anyone have any insight into what the circles around the spots might
 be?
 
 cheers
 
 Dave
 --
 David Goldstone, PhD
 National Institute for Medical Research
 Molecular Structure
 The Ridgeway
 Mill Hill
 London NW7 1AA
 
 
 
 
 
 -- 
 Professor John R Helliwell DSc
 
 -- 
 
 ===
 * *
 * Gerard Bricogne g...@globalphasing.com  *
 * *
 * Global Phasing Ltd. *
 * Sheraton House, Castle Park Tel: +44-(0)1223-353033 *
 * Cambridge CB3 0AX, UK   Fax: +44-(0)1223-366889 *
 * *
 ===