Re: [ccp4bb] reversed stereo issue in coot and pymol

2013-06-20 Thread Andreas Förster
Same here.  I noticed this is very much dependent on which version of 
the graphics driver you use.  The very latest one (319.17) inverts the 
stereo (which can sometimes be fixed by repeatedly switching between 
mono and stereo).  An older one (310.32) works fine (with a Quadro 600).



Andreas


On 19/06/2013 6:44, jlliu liu wrote:

I am not sure if others have had a similar experience, but sometimes when I
launch PyMOL and Coot I get a reversed stereo view, which is pretty
annoying.  I am using the ASUS VG278H LCD monitor... Thanks in advance
for your advice.


--
Andreas Förster, Research Associate
Paul Freemont & Xiaodong Zhang Labs
Department of Biochemistry, Imperial College London
http://www.msf.bio.ic.ac.uk


Re: [ccp4bb] Definition of diffractometer

2013-06-20 Thread Jrh
Dear Colleagues,
If we may combine Ethan's quote from Stout and Jensen with Tim's 'meter':
With film, the estimating of a spot's blackness was not done by a meter; it was
originally done by a person's eye, aided by a reference strip of graduated,
blackened spot exposures.
With measuring devices the subjectivity goes, and the instrument is therefore a meter.
I therefore believe that e.g. 'CCD diffractometer' is a valid terminology.
Greetings,
John


Prof John R Helliwell DSc 
 
 

On 19 Jun 2013, at 19:11, Edward A. Berry ber...@upstate.edu wrote:

 Somewhere I got the idea that a diffractometer is an instrument that measures 
 one reflection at a time. Is that the case, and if so, what is the term for 
 instruments like the rotation camera, the Weissenberg camera, or the area 
 detector? (What is an area detector?)
 
 Logically I guess a diffractometer could be anything that measures 
 diffraction, and that seems to be the view of the Wikipedia article of that name.
 eab


Re: [ccp4bb] Definition of diffractometer

2013-06-20 Thread Gerard DVD Kleywegt

So, in SI units it would be a kilometerometer?

--dvd

On Wed, 19 Jun 2013, Edward A. Berry wrote:


an odometer measures hodós:
wikipedia: The word derives from the Greek words hodós (path or gateway) 
and métron (measure).
In countries where Imperial units or US customary units are used, it is 
sometimes called a mileometer or milometer, or, colloquially, a tripometer.


Tim Gruene wrote:


Yes, but you need to know that 'geo' has to do with the earth, so geometers
measure the earth to make maps; 'odo', I believe, has to do with smell;
and kilometer is hyphenated kilo-meter, not kil-ometer, so the origin
of that word has nothing to do with 'ometer'. Remembering stuff from
your school days helps a great deal in understanding the world around you ;-)

Best,
Tim

On 06/20/2013 01:14 AM, Gerard DVD Kleywegt wrote:

Wait, so a geometer measures ges, an odometer measures ods, and a
kilometer measures kils?

--dvd


On Thu, 20 Jun 2013, Tim Gruene wrote:

Dear Ed,

to me, an '-ometer' is a device that measures whatever you put in
front of the 'o', so in case of a diffractometer that's a device
that measures diffraction.

Best, Tim

On 06/19/2013 08:11 PM, Edward A. Berry wrote:

Somewhere I got the idea that a diffractometer is an
instrument that measures one reflection at a time. Is that
the case, and if so, what is the term for instruments like
the rotation camera, the Weissenberg camera, or the area
detector? (What is an area detector?)

Logically I guess a diffractometer could be anything that
measures diffraction, and that seems to be the view of the
Wikipedia article of that name. eab








Best wishes,

--Gerard

**
Gerard J. Kleywegt

http://xray.bmc.uu.se/gerard   mailto:ger...@xray.bmc.uu.se
**
The opinions in this message are fictional.  Any similarity to
actual opinions, living or dead, is purely coincidental.
**
Little known gastromathematical curiosity: let z be the radius
and a the thickness of a pizza. Then the volume of that pizza is
equal to pi*z*z*a !
**



--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A






Best wishes,

--Gerard

**
   Gerard J. Kleywegt

  http://xray.bmc.uu.se/gerard   mailto:ger...@xray.bmc.uu.se
**
   The opinions in this message are fictional.  Any similarity
   to actual opinions, living or dead, is purely coincidental.
**
   Little known gastromathematical curiosity: let z be the
   radius and a the thickness of a pizza. Then the volume
of that pizza is equal to pi*z*z*a !
**


Re: [ccp4bb] announcement: (another) GUI for XDS

2013-06-20 Thread Sebastiano Pasqualato

Hi Kay, hi all,

sorry for the (maybe) naive questions, but I'm struggling to get the GUI for 
XDS going.
I'm on Mac OS X 10.8.4. The dmg installer and the script work nicely.

However, I have a couple of problems:

1.
I have placed the generate_XDS.INP script in the XDS directory 
(/Applications/XDS-OSX_64/), so it should be added to the PATH by the 
same lines that add the XDS commands to the PATH.

Those are the lines:

export XDSPATH=/Applications/XDS-OSX_64/
export PATH=$PATH:$XDSPATH

in my ~/.bashrc

Indeed, the script is accessible from anywhere:

Seba@host053:~ generate_XDS.INP 
generate_XDS.INP version 0.36 (12-June-2013) . Obtain the latest version from
http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/generate_XDS.INP
Seba@host053:~ 

However, when I load a frame in the xds-gui frame tab and then click the 
generate XDS.INP button, I get a message stating:

You have to install generate_XDS.INP in your Path.

Could you tell me what I am missing?


2.
The wiki page states that XDSgui depends on XDS-viewer.
Could you tell me how to install that on a Mac?


Thanks a lot for the excellent work,
ciao,

Sebastiano 


On Jun 15, 2013, at 9:49 AM, Kay Diederichs kay.diederi...@uni-konstanz.de 
wrote:

 Hi everybody,
 
 I developed a GUI for academic users of XDS which is documented at 
 http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/XDSgui . This 
 XDSwiki article also has the links to binaries of xdsGUI (or XDSgui or 
 xds-gui; this has not been decided yet ...), for Linux 32 and 64 bit 
 (compiled on a RHEL 6 clone), and Mac OS X (compiled on 10.7).
 The 'added value' of the GUI is that it produces informative plots which 
 should greatly help to make decisions in data processing.
 The GUI is simple, tries to be self-explanatory and should be straightforward 
 to operate. Noteworthy may be the TOOLS tab which offers a means to run 
 commandline scripts upon a click with the mouse. This tab accepts user 
 modifications which should make it attractive also for expert users - they 
 can run their own scripts.
 This is the first version made publicly available. There are probably bugs and 
 I would like to learn about these. In particular, Mac experts, please tell me 
 how to solve the problems explained at the bottom of the Wiki article ...
 
 thanks,
 
 Kay
 --
 Kay Diederichs
 http://strucbio.biologie.uni-konstanz.de
 email: kay.diederi...@uni-konstanz.de Tel +49 7531 88 4049 Fax 3183
 Fachbereich Biologie, Universität Konstanz, Box 647, D-78457 Konstanz



-- 
Sebastiano Pasqualato, PhD
Crystallography Unit
Department of Experimental Oncology
European Institute of Oncology
IFOM-IEO Campus
via Adamello, 16
20139 - Milano
Italy

tel +39 02 9437 5167
fax +39 02 9437 5990








[ccp4bb] Hydrogens from Shelxl

2013-06-20 Thread Swastik Phulera
Dear All,
I have been working on a high-resolution protein structure using SHELX
for refinement.
In the PDB output by SHELXL the hydrogen atom names are duplicated,
which is causing difficulty in PDB deposition.
I am aware of the use of SHELXPRO's B option to prepare for PDB
deposition, but unfortunately it does not save the H atoms. Is anyone
here aware of a way to save the hydrogen atoms correctly, such that the
prepared PDB file can be successfully submitted to the PDB with the
hydrogens intact?

Swastik
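
One possible workaround is to make the names unique outside SHELXPRO. The
Python sketch below is a hypothetical illustration, not a SHELX tool: it
renames duplicated hydrogen names within each residue of a PDB file (the file
names and the renaming scheme are assumptions; check the result against the
deposition validation).

from collections import defaultdict

def uniquify_hydrogens(lines):
    # names already used per residue: key = (chain, resseq, insertion code)
    seen = defaultdict(set)
    out = []
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            name = line[12:16]
            key = (line[21], line[22:26], line[26])
            # crude hydrogen test: element column if present, else the name
            is_h = (line[76:78].strip() == "H") or name.strip().startswith("H")
            if is_h and name in seen[key]:
                for i in range(1, 100):   # find an unused variant, e.g. HB -> HB1
                    cand = (name.strip()[:2] + str(i)).ljust(4)[:4]
                    if cand not in seen[key]:
                        name = cand
                        break
                line = line[:12] + name + line[16:]
            seen[key].add(name)
        out.append(line)
    return out

with open("shelxl.pdb") as fin:                  # assumed input file name
    fixed = uniquify_hydrogens(fin.readlines())
with open("shelxl_fixed.pdb", "w") as fout:
    fout.writelines(fixed)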


Re: [ccp4bb] Definition of diffractometer

2013-06-20 Thread Frank von Delft

No, a kilometer is what they use in shooter video games.


On 20/06/2013 08:49, Gerard DVD Kleywegt wrote:

So, in SI units it would be a kilometerometer?

--dvd

On Wed, 19 Jun 2013, Edward A. Berry wrote:


an odometer measures hodós:
wikipedia: The word derives from the Greek words hodós (path or 
gateway) and métron (measure).
In countries where Imperial units or US customary units are used, it 
is sometimes called a mileometer or milometer, or, colloquially, a 
tripometer.


Tim Gruene wrote:


Yes, but you need to know that 'geo' has to do with the earth, so geometers
measure the earth to make maps; 'odo', I believe, has to do with smell;
and kilometer is hyphenated kilo-meter, not kil-ometer, so the origin
of that word has nothing to do with 'ometer'. Remembering stuff from
your school days helps a great deal in understanding the world around 
you ;-)


Best,
Tim

On 06/20/2013 01:14 AM, Gerard DVD Kleywegt wrote:

Wait, so a geometer measures ges, an odometer measures ods, and a
kilometer measures kils?

--dvd


On Thu, 20 Jun 2013, Tim Gruene wrote:

Dear Ed,

to me, an '-ometer' is a device that measures whatever you put in
front of the 'o', so in case of a diffractometer that's a device
that measures diffraction.

Best, Tim

On 06/19/2013 08:11 PM, Edward A. Berry wrote:

Somewhere I got the idea that a diffractometer is an
instrument that measures one reflection at a time. Is that
the case, and if so, what is the term for instruments like
the rotation camera, the Weissenberg camera, or the area
detector? (What is an area detector?)

Logically I guess a diffractometer could be anything that
measures diffraction, and that seems to be the view of the
Wikipedia article of that name. eab








Best wishes,

--Gerard

**
Gerard J. Kleywegt

http://xray.bmc.uu.se/gerard   mailto:ger...@xray.bmc.uu.se
**
The opinions in this message are fictional.  Any similarity to
actual opinions, living or dead, is purely coincidental.
**
Little known gastromathematical curiosity: let z be the radius
and a the thickness of a pizza. Then the volume of that pizza is
equal to pi*z*z*a !
**



--
Dr Tim Gruene

Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A






Best wishes,

--Gerard

**
   Gerard J. Kleywegt

  http://xray.bmc.uu.se/gerard   mailto:ger...@xray.bmc.uu.se
**
   The opinions in this message are fictional.  Any similarity
   to actual opinions, living or dead, is purely coincidental.
**
   Little known gastromathematical curiosity: let z be the
   radius and a the thickness of a pizza. Then the volume
of that pizza is equal to pi*z*z*a !
**


Re: [ccp4bb] str solving problem

2013-06-20 Thread Eleanor Dodson
As others say, the R factors look pretty good for MR; mine usually start
above 50% even with a better model, and one hopes they then decrease.
But you say you took the BALBES model into Phaser? I think BALBES
automatically runs cycles of refinement, so any comment on R factors may not
mean much.

Have you found both molecules in the asymmetric unit? You only give the LLG for
one.
Eleanor




On 19 June 2013 17:44, Eugene Valkov eugene.val...@gmail.com wrote:

 Yes, I would agree with Francis that the diffraction shows contributions from
 several lattices, which could lead to misindexing. However, it should be
 feasible to get a model that refines from this sort of data.

 Pramod - could you please post your data processing statistics from your
 scaling program? Better if you have several for different spacegroups.

 Also, I have no idea how HKL2000 does this, but could you please provide an
 indexing solution table from Mosflm that shows penalties associated with
 each type of space group? Was there a sharp penalty drop at some point or
 was it more gradual?

 When you index spots in Mosflm, do your predictions agree with the spots?
 Or is there a substantial portion that is missed?

 I would consider altering thresholds in Mosflm for indexing (see the
 manual).

 Eugene




 On 19 June 2013 17:34, Francis E. Reyes francis.re...@colorado.eduwrote:

 On Jun 17, 2013, at 12:36 PM, Pramod Kumar pramod...@gmail.com wrote:

  I have crystal data diffracted to around 2.9 Å.
  During data reduction, HKL2000 did not convincingly show the space
 group (it indexed in lower symmetry, P1), while Mosflm gave C-centered
 orthorhombic, and with a little playing around HKL2000 also gave
 C-centered orthorhombic.

  No ice rings appear and the diffraction pattern looks OK; misindexing in
 any direction is not conclusive to me (please see the image attachment).

 The diffraction does not look OK... there are hints of multiple lattices...
 which is not a problem if the two lattice orientations do not perfectly
 overlap (i.e. their spots are separable).

 Last I remember, HKL2000 bases its indexing on the 'strongest' spots on
 an image (though you could manually select spots). It could result in a
 misindex if the strongest spots come from separate lattices (and even worse
 if you have twinning/pseudosymmetry issues).

 Try a program that uses all spots for indexing, across all images (XDS
 for example) and you might get the true space group.

 Or if the crystal is big enough, you could try shooting it in different
 areas and 'searching' for a better spot to collect data.

 Or 'grow a better crystal'.

 F



 -
 Francis E. Reyes PhD
 215 UCB
 University of Colorado at Boulder




 --
 Dr Eugene Valkov
 MRC Laboratory of Molecular Biology
 Francis Crick Avenue
 Cambridge Biomedical Campus
 Cambridge CB2 0QH, U.K.

 Email: eval...@mrc-lmb.cam.ac.uk
 Tel: +44 (0) 1223 407840



Re: [ccp4bb] announcement: (another) GUI for XDS

2013-06-20 Thread Sebastiano Pasqualato

hi Kay,
thanks a lot for the prompt reply.

Indeed the problem arises from the fact that the program is launched in a 
non-bash environment, so one would have to add generate_XDS.INP to the PATH 
of the Mac graphical interface, rather than to the bash PATH.
I took the workaround of launching XDSgui from the command line rather than by 
clicking on its icon, and everything works fine.

As for xds-viewer, sorry, I didn't notice the .dmg file.
Installation is painless with that, after adding an alias to the binary in the 
.bashrc file.

The program seems to run nicely now.

Thanks again,
ciao,

Sebastiano



On Jun 20, 2013, at 11:18 AM, Kay Diederichs kay.diederi...@uni-konstanz.de 
wrote:

 On 06/20/2013 11:07 AM, Sebastiano Pasqualato wrote:
 
 Hi Kay, hi all,
 
 sorry for the (maybe) naive questions, but I'm struggling to get the GUI
 for XDS going.
 I'm on Mac OS X 10.8.4. The dmg installer and the script work nicely.
 
 However, I have a couple of problems:
 
 1.
 I have placed the generate_XDS.INP script in the XDS directory
 (/Applications/XDS-OSX_64/), so it should be added to the PATH by
 the same lines that add the XDS commands to the PATH.
 
 Those are the lines:
 
 export XDSPATH=/Applications/XDS-OSX_64/
 export PATH=$PATH:$XDSPATH
 
 in my ~/.bashrc
 
 Indeed, the script is accessible from anywhere:
 
 Seba@host053:~ generate_XDS.INP
 generate_XDS.INP version 0.36 (12-June-2013) . Obtain the latest version
 from
 http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/generate_XDS.INP
 Seba@host053:~
 
 However, when I load a frame in the xds-gui frame tab and then click
 the generate XDS.INP button, I get a message stating:
 
 You have to install generate_XDS.INP in your Path.
 
 Could you tell me what I am missing?
 
 I don't know much about Macs, but this looks more like a bash problem ... it 
 works for me when I use ~/.bash_profile (i.e. not ~/.bashrc).
 
 The bash documentation on my Linux says:
 When  an  interactive  shell  that  is not a login shell is started, bash 
 reads and executes commands from ~/.bashrc, if that file exists.
 
 Bash is invoked from a _program_ here, not from an interactive shell 
 (console or terminal window), which may result in ~/.bashrc not being 
 executed.
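
One way to see the difference, sketched below in Python (generic code; only the
script name is taken from this thread): print the PATH this process inherited
and check whether generate_XDS.INP is visible on it. Run it once from a
terminal and once from the graphical environment and compare.

import os
import shutil

cmd = "generate_XDS.INP"   # the script discussed in this thread

# PATH as inherited by *this* process; a GUI launched from the desktop
# may not have sourced ~/.bashrc, so this can differ from a terminal's.
print("PATH seen by this process:")
for p in os.environ.get("PATH", "").split(os.pathsep):
    print("   ", p)

location = shutil.which(cmd)
if location:
    print(cmd, "found at", location)
else:
    print(cmd, "is NOT visible here -- the graphical environment's PATH",
          "differs from your terminal's (e.g. ~/.bashrc was not sourced);",
          "try ~/.bash_profile instead")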
 
 
 
 2.
 The wiki page states that XDSgui depends on XDS-viewer.
 Could you tell me how to install that on a Mac?
 
 
 XDS-viewer is not my program, so I guess you have to install it like any 
 other program; see also http://xds-viewer.sourceforge.net/
 
 I just looked it up on my Mac:
 turn34:~ dikay$ ll /usr/local/bin/xds-viewer
 lrwxr-xr-x  1 root  wheel  58 Apr  3 11:27 /usr/local/bin/xds-viewer -> 
 /Applications/XDS-Viewer.app/Contents/MacOS/xds-viewer-bin
 
 so it was probably installed in the usual way (from the DMG), but I do not 
 remember how I did it.
 
 Please tell me how you solved these problems.
 
 best,
 
 Kay
 
 
 Thanks a lot for the excellent work,
 ciao,
 
 Sebastiano
 
 
 On Jun 15, 2013, at 9:49 AM, Kay Diederichs
 kay.diederi...@uni-konstanz.de mailto:kay.diederi...@uni-konstanz.de
 wrote:
 
 Hi everybody,
 
 I developed a GUI for academic users of XDS which is documented at
 http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/XDSgui .
 This XDSwiki article also has the links to binaries of xdsGUI (or
 XDSgui or xds-gui; this has not been decided yet ...), for Linux 32
 and 64 bit (compiled on a RHEL 6 clone), and Mac OS X (compiled on 10.7).
 The 'added value' of the GUI is that it produces informative plots
 which should greatly help to make decisions in data processing.
 The GUI is simple, tries to be self-explanatory and should be
 straightforward to operate. Noteworthy may be the TOOLS tab which
 offers a means to run commandline scripts upon a click with the mouse.
 This tab accepts user modifications which should make it attractive
 also for expert users - they can run their own scripts.
 This is the first version made publicly available. There are probably
 bugs and I would like to learn about these. In particular, Mac experts,
 please tell me how to solve the problems explained at the bottom of
 the Wiki article ...
 
 thanks,
 
 Kay
 --
 Kay Diederichs http://strucbio.biologie.uni-konstanz.de
 email: kay.diederi...@uni-konstanz.de Tel +49 7531 88 4049 Fax 3183
 Fachbereich Biologie, Universität Konstanz, Box 647, D-78457 Konstanz
 
 
 --
 *Sebastiano Pasqualato, PhD*
 Crystallography Unit
 Department of Experimental Oncology
 European Institute of Oncology
 IFOM-IEO Campus
 via Adamello, 16
 20139 - Milano
 Italy
 
 tel +39 02 9437 5167
 fax +39 02 9437 5990
 
 
 
 
 
 
 
 
 -- 
 Kay Diederichs  http://strucbio.biologie.uni-konstanz.de
 email: kay.diederi...@uni-konstanz.de  Tel +49 7531 88 4049 Fax 3183
 Fachbereich Biologie, Universität Konstanz, Box M647, D-78457 Konstanz
 

Re: [ccp4bb] Hydrogens from Shelxl

2013-06-20 Thread Tim Gruene

Dear Swastik,

unless this is a structure from neutron data, I recommend not depositing
hydrogen atoms at all, because their positions are calculated
rather than refined.

Best,
Tim

On 06/20/2013 11:40 AM, Swastik Phulera wrote:
 Dear All, I have been working on a high-resolution protein
 structure using SHELX for refinement. In the PDB output by SHELXL
 the hydrogen atom names are duplicated, which is causing
 difficulty in PDB deposition. I am aware of the use of SHELXPRO's B
 option to prepare for PDB deposition, but unfortunately it does not
 save the H atoms. Is anyone here aware of a way to save the hydrogen
 atoms correctly, such that the prepared PDB file can be
 successfully submitted to the PDB with the hydrogens intact?
 
 Swastik
 

--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


[ccp4bb] PostDoctoral position available at Sanofi RD in Paris area

2013-06-20 Thread Alexey Rak
Dear All,
I would like to bring to the CCP4BB's attention that a postdoc position in 
biophysics is available in the structural biology labs at Sanofi R&D in the Paris area.
Please see the announcement below.
Best regards,
Alexey

Alexey RAK, PhD
Structure-Design-Informatics / LGCR / Sanofi R&D
13, Quai Jules Guesde - BP 14    Phone: +33 (0) 1 58 93 86 93
94403 Vitry sur Seine Cedex      Fax: +33 (0) 1 58 93 80 63
FRANCE
alexey@sanofi.com
Please consider the environment before printing this email!


_


A post-doctoral fellowship is available for a period of 18 months in the 
Structure, Design & Informatics department at Sanofi R&D, Vitry-sur-Seine, 
France, in the field of biophysics.

The postdoc project aims to develop novel biophysical applications to 
characterize biomolecular interactions over a wide range of affinities, molecule 
sizes and experimental conditions, to be routinely applied in current small-molecule 
and biologics projects. The methods will focus on protein interaction 
characterization as well as on the evaluation of kinetic and thermodynamic parameters.
Candidates must hold a PhD degree in biophysics, biochemistry or a related 
field. They must have experience, documented by peer-reviewed publications, in 
biomolecular interaction characterization, and must have experience with 
NMR, SPR, ITC and DSF methods; structural biology or recombinant protein 
biochemistry experience is an advantage.
Outstanding commitment to successfully perform research and meet the project 
objectives is required, as well as the capacity for teamwork and interdisciplinary 
communication.  Mastery of scientific English, both written and spoken, is 
essential; knowledge of French is beneficial although not required.
If you meet the requirements and wish to contribute to this project, please 
send, before July 15, 2013, a complete resume with a cover letter to Marie Pouyet 
at Sanofi (marie.pou...@sanofi.com). In the application, please include the 
names and contact details of three scientific referees.

The starting date is September - December 2013.
Location: Vitry-sur-Seine, Paris area.
Sanofi, an integrated global healthcare leader, discovers, develops and 
distributes therapeutic solutions focused on patients' needs. Sanofi has core 
strengths in the field of healthcare with seven growth platforms: diabetes 
solutions, human vaccines, innovative drugs, consumer healthcare, emerging 
markets, animal health and the new Genzyme.


[ccp4bb] Twinning problem

2013-06-20 Thread Herman . Schreuder
Dear Bulletin Board,

Prodded by PDB annotators, who are very hesitant to accept coordinate files 
when their R factor does not correspond with our R factor, I had another look 
at some old data sets, which I suspect are twinned. Below are the results of 
some twinning tests with the Detwin program (top value: all reflections; lower 
value: reflections > Nsig*sigma, whatever that may mean). The space group is P32, 
the resolution is 2.3 - 2.6 Å and the data are reasonably complete: 95 - 100%.

From the Detwin analysis, it seems that the crystals are twinned with twin 
operator k,h,-l, with a twinning fraction of 0.3 for crystal 1, 0.15 for 
crystal 2 and 0.4 for crystal 3. Crystal 2 can be refined while ignoring 
twinning to get acceptable but not stellar R and Rfree values. However, when I 
try to detwin the Fobs of e.g. crystal 1 (twinning fraction 0.3), the R and Rfree 
values stay about the same whatever twinning fraction I try. At the time, I 
used the CNS detwin_perfect protocol to detwin using Fcalcs, which brought the 
R factors into an acceptable range, but I do not feel that was the perfect solution. 
Ignoring twinning on e.g. crystal 1 produces an R factor of 22% and an Rfree of 
29%.

Do you have any idea what could be going on?

Thank you for your help!
Herman



Crystal 1:

operator -h,-k,l
 Suggests Twinning factor (0.5-H):0.113
 Suggests Twinning factor (0.5-H):0.147

operator: k,h,-l
 Suggests Twinning factor (0.5-H):0.277
 Suggests Twinning factor (0.5-H):0.323

operator -k,-h,-l
 Suggests Twinning factor (0.5-H):0.101
 Suggests Twinning factor (0.5-H):0.134


Crystal 2:

operator -h,-k,l
 Suggests Twinning factor (0.5-H):0.077
 Suggests Twinning factor (0.5-H):0.108

operator: k,h,-l
 Suggests Twinning factor (0.5-H):0.126
 Suggests Twinning factor (0.5-H):0.161

operator -k,-h,-l
 Suggests Twinning factor (0.5-H):0.072
 Suggests Twinning factor (0.5-H):0.106


Crystal 3:

operator -h,-k,l
 Suggests Twinning factor (0.5-H):0.123
 Suggests Twinning factor (0.5-H):0.149

operator: k,h,-l
 Suggests Twinning factor (0.5-H):0.393
 Suggests Twinning factor (0.5-H):0.433

operator -k,-h,-l
 Suggests Twinning factor (0.5-H):0.110
 Suggests Twinning factor (0.5-H):0.133
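
For reference, the algebra behind these Detwin estimates is the standard
two-domain detwinning, sketched below in Python (values illustrative, not from
these crystals). Note the 1/(1-2*alpha) factor: as the twin fraction approaches
0.5, as for crystal 3, detwinning becomes numerically unstable and strongly
amplifies measurement noise.

def detwin_pair(iobs_h, iobs_hp, alpha):
    # Iobs(h)  = (1-a)*I(h) + a*I(h'),  Iobs(h') = a*I(h) + (1-a)*I(h'),
    # where h' is the twin mate of h. Inverting this 2x2 system
    # recovers the untwinned intensities.
    if not 0.0 <= alpha < 0.5:
        raise ValueError("detwinning only works for alpha < 0.5")
    d = 1.0 - 2.0 * alpha
    i_h  = ((1.0 - alpha) * iobs_h  - alpha * iobs_hp) / d
    i_hp = ((1.0 - alpha) * iobs_hp - alpha * iobs_h)  / d
    return i_h, i_hp

# Example: true intensities 100 and 20, twinned with alpha = 0.3:
a = 0.3
obs = (0.7 * 100 + 0.3 * 20, 0.3 * 100 + 0.7 * 20)   # (76.0, 44.0)
print(detwin_pair(obs[0], obs[1], a))                # -> (100.0, 20.0)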





[ccp4bb] Delete waters within radius of defined space

2013-06-20 Thread Mo Wong
Hi all,

I'd like to be able to automatically remove modeled/refined water molecules
that overlap regions of space that have been flagged as potential blobs
of interest in an initial difference density map.

My questions are:

1) Can I get Coot to output to a file the coordinates it spits out when
using the Unmodeled blobs validation tool? If not, is there a similar
tool that can do this?
2) Is there a tool out there that can automatically remove water molecules
within a region of space (i.e., remove all water molecules within 5A of
coordinates x,y,z)?

Many thanks for any help/suggestions!
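
For question 2, the operation is easy to script even without a dedicated tool.
The Python sketch below is generic, not a Coot command; the file names, the
centre and the 5 Å cutoff are illustrative. It drops HOH records within a
cutoff of a given point:

import math

def remove_waters_near(pdb_in, pdb_out, centre, cutoff=5.0):
    # Copy a PDB file, skipping water atoms closer than `cutoff` (in
    # Angstrom) to `centre` = (x, y, z), e.g. a blob position.
    with open(pdb_in) as fin, open(pdb_out, "w") as fout:
        for line in fin:
            if line.startswith(("ATOM", "HETATM")) and line[17:20] == "HOH":
                x = float(line[30:38])
                y = float(line[38:46])
                z = float(line[46:54])
                if math.dist((x, y, z), centre) < cutoff:
                    continue   # drop this water atom
            fout.write(line)

remove_waters_near("model.pdb", "model_nowat.pdb", centre=(12.3, 4.5, -7.8))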


[ccp4bb] AW: Twinning problem

2013-06-20 Thread Herman . Schreuder
Dear Mitch (and Philip and Phil),

It is clear that I should give refmac a go with the non-detwinned F's and just 
the TWIN command.

Thank you for your suggestions,
Herman

 

-Original Message-
From: Miller, Mitchell D. [mailto:mmil...@slac.stanford.edu] 
Sent: Thursday, 20 June 2013 16:18
To: Schreuder, Herman RD/DE
Subject: RE: Twinning problem

Hi Herman,
 Have you considered the possibility that your crystals are tetartohedrally 
twinned, i.e. that more than one of the twin laws applies to your crystals?
E.g. in P32 it is possible to have tetartohedral twinning, which would have
4 twin domains - (h,k,l), (k,h,-l), (-h,-k,l) and (-k,-h,-l). Perfect 
tetartohedral twinning of P3 would merge in P622, and each twin domain would 
have a fraction of 0.25.
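
For concreteness, a minimal Python sketch of this four-domain mixing
(intensities are illustrative; the fractions are the 2PRX values quoted in the
next paragraph):

ops = [
    lambda h, k, l: (h, k, l),        # identity
    lambda h, k, l: (k, h, -l),
    lambda h, k, l: (-h, -k, l),
    lambda h, k, l: (-k, -h, -l),
]
fractions = [0.38, 0.28, 0.19, 0.15]  # twin fractions, must sum to 1

def twinned_intensity(hkl, true_I):
    # Observed intensity = fraction-weighted sum over the four
    # twin-related indices; true_I maps (h,k,l) -> untwinned intensity.
    return sum(a * true_I[op(*hkl)] for a, op in zip(fractions, ops))

true_I = {(1, 2, 3): 50.0, (2, 1, -3): 10.0,
          (-1, -2, 3): 80.0, (-2, -1, -3): 5.0}
print(twinned_intensity((1, 2, 3), true_I))   # -> 37.75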

  We have had 2 cases like this (the first, 2PRX, was before there was support 
for this type of twinning anywhere except in SHELXL, and we ended up with refined 
twin fractions of 0.38, 0.28, 0.19, 0.15 for the deposited crystal; a 2nd 
crystal that we did not deposit had twin fractions of 0.25, 0.27, 0.17, 0.31).  
The 2nd case we had was after support for twinning (including tetartohedral 
twinning) was added to Refmac (and I think phenix.refine can also handle this). 
For 2NUZ, it was P32 with refined twin fractions of 0.25, 0.27, 0.17, 0.31.

  Pietro Roversi wrote a review of tetartohedral twinning for the CCP4 
proceedings issue of Acta D: http://dx.doi.org/10.1107/S0907444912006737 

  I would try refinement with Refmac using the original (non-detwinned) F's 
with just the TWIN command, to see if it ends up keeping twin fractions for all 
3 operators (4 domains) -- especially with crystals 1 and 3, which appear to 
have the largest estimates of the other twin fractions.

Regards,
Mitch


==
Mitchell Miller, Ph.D.
Joint Center for Structural Genomics
Stanford Synchrotron Radiation Lightsource
2575 Sand Hill Rd  -- SLAC MS 99
Menlo Park, CA  94025
Phone: 1-650-926-5036
FAX: 1-650-926-3292


-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
herman.schreu...@sanofi.com
Sent: Thursday, June 20, 2013 6:47 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Twinning problem

Dear Bulletin Board,
 
Prodded by PDB annotators, who are very hesitant to accept coordinate files 
when their R factor does not correspond with our R factor, I had another look 
at some old data sets, which I suspect are twinned. Below are the results of 
some twinning tests with the Detwin program (top value: all reflections; lower 
value: reflections > Nsig*sigma, whatever that may mean). The space group is P32, 
the resolution is 2.3 - 2.6 Å and the data are reasonably complete: 95 - 100%.
 
From the Detwin analysis, it seems that the crystals are twinned with twin 
operator k,h,-l, with a twinning fraction of 0.3 for crystal 1, 0.15 for 
crystal 2 and 0.4 for crystal 3. Crystal 2 can be refined while ignoring 
twinning to get acceptable but not stellar R and Rfree values. However, when I 
try to detwin the Fobs of e.g. crystal 1 (twinning fraction 0.3), the R and Rfree 
values stay about the same whatever twinning fraction I try. At the time, I 
used the CNS detwin_perfect protocol to detwin using Fcalcs, which brought the 
R factors into an acceptable range, but I do not feel that was the perfect solution. 
Ignoring twinning on e.g. crystal 1 produces an R factor of 22% and an Rfree of 
29%.
 
Do you have any idea what could be going on? 
 
Thank you for your help!
Herman 
 
 
 
Crystal 1:
 
operator -h,-k,l
Suggests Twinning factor (0.5-H):0.113
Suggests Twinning factor (0.5-H):0.147
 
operator: k,h,-l
Suggests Twinning factor (0.5-H):0.277
Suggests Twinning factor (0.5-H):0.323
 
operator -k,-h,-l
Suggests Twinning factor (0.5-H):0.101
Suggests Twinning factor (0.5-H):0.134
 
 
Crystal 2:
 
operator -h,-k,l
Suggests Twinning factor (0.5-H):0.077
Suggests Twinning factor (0.5-H):0.108
 
operator: k,h,-l
Suggests Twinning factor (0.5-H):0.126
Suggests Twinning factor (0.5-H):0.161
 
operator -k,-h,-l
Suggests Twinning factor (0.5-H):0.072
Suggests Twinning factor (0.5-H):0.106
 
 
Crystal 3:
 
operator -h,-k,l
Suggests Twinning factor (0.5-H):0.123
Suggests Twinning factor (0.5-H):0.149
 
operator: k,h,-l
Suggests Twinning factor (0.5-H):0.393
Suggests Twinning factor (0.5-H):0.433
 
operator -k,-h,-l
Suggests Twinning factor (0.5-H):0.110
Suggests Twinning factor (0.5-H):0.133
 
 
 


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Bernhard Rupp
As a maybe better alternative, we should (once again) consider to refine 
against intensities (and I guess George Sheldrick would agree here).

I have a simple question - what exactly, short of some sort of historic inertia 
(or memory lapse), is the reason NOT to refine against intensities? 

Best, BR


Re: [ccp4bb] Twinning problem

2013-06-20 Thread Miller, Mitchell D.
You are welcome.  For the benefit of others who may search the archives
in the future, let me also correct two errors below (a typo and a
misrecollection).

Specifically, I was thinking that phenix.refine was now able to refine
multiple twin laws, but according to Nat Echols on the PHENIX mailing list
(http://phenix-online.org/pipermail/phenixbb/2013-March/019538.html)
phenix.refine only handles 1 twin law at this time.
(My typo was that our second structure was 3NUZ, with
twin fractions 0.38, 0.32, 0.16 and 0.14 -- not 2NUZ.)

A useful search for deposited structures mentioning 'tetartohedral':
http://www.ebi.ac.uk/pdbe-srv/view/search?search_type=all_texttext=TETARTOHEDRALLY+OR+TETARTOHEDRAL
 

Regards,
Mitch


-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
herman.schreu...@sanofi.com
Sent: Thursday, June 20, 2013 8:04 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] AW: Twinning problem

Dear Mitch (and Philip and Phil),

It is clear that I should give refmac a go with the non-detwinned F's and just 
the TWIN command.

Thank you for your suggestions,
Herman

 

-Original Message-
From: Miller, Mitchell D. [mailto:mmil...@slac.stanford.edu] 
Sent: Thursday, 20 June 2013 16:18
To: Schreuder, Herman RD/DE
Subject: RE: Twinning problem

Hi Herman,
 Have you considered the possibility that your crystals are tetartohedrally 
twinned, i.e. that more than one of the twin laws applies to your crystals?
E.g. in P32 it is possible to have tetartohedral twinning, which would have
4 twin domains - (h,k,l), (k,h,-l), (-h,-k,l) and (-k,-h,-l). Perfect 
tetartohedral twinning of P3 would merge in P622, and each twin domain would 
have a fraction of 0.25.

  We have had 2 cases like this (the first, 2PRX, was before there was support 
for this type of twinning anywhere except in SHELXL, and we ended up with refined 
twin fractions of 0.38, 0.28, 0.19, 0.15 for the deposited crystal; a 2nd 
crystal that we did not deposit had twin fractions of 0.25, 0.27, 0.17, 0.31).  
The 2nd case we had was after support for twinning (including tetartohedral 
twinning) was added to Refmac (and I think phenix.refine can also handle this). 
For 2NUZ, it was P32 with refined twin fractions of 0.25, 0.27, 0.17, 0.31.

  Pietro Roversi wrote a review of tetartohedral twinning for the CCP4 
proceedings issue of Acta D: http://dx.doi.org/10.1107/S0907444912006737 

  I would try refinement with Refmac using the original (non-detwinned) F's 
with just the TWIN command, to see if it ends up keeping twin fractions for all 
3 operators (4 domains) -- especially with crystals 1 and 3, which appear to 
have the largest estimates of the other twin fractions.

Regards,
Mitch


==
Mitchell Miller, Ph.D.
Joint Center for Structural Genomics
Stanford Synchrotron Radiation Lightsource
2575 Sand Hill Rd  -- SLAC MS 99
Menlo Park, CA  94025
Phone: 1-650-926-5036
FAX: 1-650-926-3292


-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
herman.schreu...@sanofi.com
Sent: Thursday, June 20, 2013 6:47 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Twinning problem

Dear Bulletin Board,
 
Prodded by PDB annotators, who are very hesitant to accept coordinate files 
when their R factor does not correspond with our R factor, I had another look 
at some old data sets, which I suspect are twinned. Below are the results of 
some twinning tests with the Detwin program (top value: all reflections; lower 
value: reflections > Nsig*sigma, whatever that may mean). The space group is P32, 
the resolution is 2.3 - 2.6 Å and the data are reasonably complete: 95 - 100%.
 
From the Detwin analysis, it seems that the crystals are twinned with twin 
operator k,h,-l, with a twinning fraction of 0.3 for crystal 1, 0.15 for 
crystal 2 and 0.4 for crystal 3. Crystal 2 can be refined while ignoring 
twinning to get acceptable but not stellar R and Rfree values. However, when I 
try to detwin the Fobs of e.g. crystal 1 (twinning fraction 0.3), the R and Rfree 
values stay about the same whatever twinning fraction I try. At the time, I 
used the CNS detwin_perfect protocol to detwin using Fcalcs, which brought the 
R factors into an acceptable range, but I do not feel that was the perfect solution. 
Ignoring twinning on e.g. crystal 1 produces an R factor of 22% and an Rfree of 
29%.
 
Do you have any idea what could be going on? 
 
Thank you for your help!
Herman 
 
 
 
Crystal 1:
 
operator -h,-k,l
Suggests Twinning factor (0.5-H):0.113
Suggests Twinning factor (0.5-H):0.147
 
operator: k,h,-l
Suggests Twinning factor (0.5-H):0.277
Suggests Twinning factor (0.5-H):0.323
 
operator -k,-h,-l
Suggests Twinning factor (0.5-H):0.101
Suggests Twinning factor (0.5-H):0.134
 
 
Crystal 2:
 
operator -h,-k,l
Suggests Twinning factor (0.5-H):0.077
Suggests Twinning factor (0.5-H):0.108
 
operator: k,h,-l
Suggests 

Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
Just trying to understand the basic issues here.  How could refining directly 
against intensities solve the fundamental problem of negative intensity values?


On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:

 As a maybe better alternative, we should (once again) consider to refine 
 against intensities (and I guess George Sheldrick would agree here).
 
 I have a simple question - what exactly, short of some sort of historic 
 inertia (or memory lapse), is the reason NOT to refine against intensities? 
 
 Best, BR


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Dale Tronrud

   If you are refining against F's you have to find some way to avoid
calculating the square root of a negative number.  That is why people
have historically rejected negative I's and why Truncate and cTruncate
were invented.

   When refining against I, the calculation of (Iobs - Icalc)^2 couldn't
care less if Iobs happens to be negative.

   As for why people still refine against F...  When I was distributing
a refinement package it could refine against I, but no one wanted to do
that.  The R values ended up higher -- but they were looking at R
values calculated from F's.  Of course the F-based R values are lower
when you refine against F's; that means nothing.

   If we could get the PDB to report both the F and I based R values
for all models maybe we could get a start toward moving to intensity
refinement.

Dale Tronrud
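
To make the point concrete, here is a toy Python comparison (all numbers are
made up for illustration):

import math

iobs  = [120.0, 4.0, -3.0]     # a weak reflection measured negative
icalc = [115.0, 6.0, 1.5]

# Intensity-based residual: a negative Iobs is just another number.
res_I = sum((io - ic) ** 2 for io, ic in zip(iobs, icalc))

# Amplitude-based residual: sqrt(-3.0) is undefined, so the observation
# must be rejected or massaged first -- the problem TRUNCATE/cTRUNCATE
# exist to handle.
res_F = 0.0
for io, ic in zip(iobs, icalc):
    if io < 0:
        continue               # historic practice: reject (introduces bias)
    res_F += (math.sqrt(io) - math.sqrt(ic)) ** 2

print(res_I, res_F)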

On 06/20/2013 09:06 AM, Douglas Theobald wrote:

Just trying to understand the basic issues here.  How could refining directly 
against intensities solve the fundamental problem of negative intensity values?


On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:


As a maybe better alternative, we should (once again) consider to refine 
against intensities (and I guess George Sheldrick would agree here).


I have a simple question - what exactly, short of some sort of historic inertia 
(or memory lapse), is the reason NOT to refine against intensities?

Best, BR


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Ian Tickle
Yes, higher R factors are the usual reason people don't like I-based
refinement!

Anyway, refining against Is doesn't solve the problem, it only postpones
it: you still need the Fs for maps! (though errors in Fs may be less
critical then).

-- Ian


On 20 June 2013 17:20, Dale Tronrud det...@uoxray.uoregon.edu wrote:

If you are refining against F's you have to find some way to avoid
 calculating the square root of a negative number.  That is why people
 have historically rejected negative I's and why Truncate and cTruncate
 were invented.

When refining against I, the calculation of (Iobs - Icalc)^2 couldn't
 care less if Iobs happens to be negative.

As for why people still refine against F...  When I was distributing
 a refinement package it could refine against I but no one wanted to do
 that.  The R values ended up higher, but they were looking at R
 values calculated from F's.  Of course the F based R values are lower
 when you refine against F's, that means nothing.

If we could get the PDB to report both the F and I based R values
 for all models maybe we could get a start toward moving to intensity
 refinement.

 Dale Tronrud


 On 06/20/2013 09:06 AM, Douglas Theobald wrote:

 Just trying to understand the basic issues here.  How could refining
 directly against intensities solve the fundamental problem of negative
 intensity values?


 On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com
 wrote:

  As a maybe better alternative, we should (once again) consider to refine
 against intensities (and I guess George Sheldrick would agree here).


 I have a simple question - what exactly, short of some sort of historic
 inertia (or memory lapse), is the reason NOT to refine against intensities?

 Best, BR




Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Dom Bellini
Wouldn't it be possible to take advantage of negative Is to extrapolate/estimate 
the decay of the background scattering (a kind of Wilson plot of background 
scattering), to flatten out the background and push all the Is to positive values?

More of a question rather than a suggestion ...

D



From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Ian Tickle
Sent: 20 June 2013 17:34
To: ccp4bb
Subject: Re: [ccp4bb] ctruncate bug?

Yes higher R factors is the usual reason people don't like I-based refinement!

Anyway, refining against Is doesn't solve the problem, it only postpones it: 
you still need the Fs for maps! (though errors in Fs may be less critical then).
-- Ian

On 20 June 2013 17:20, Dale Tronrud det...@uoxray.uoregon.edu wrote:
   If you are refining against F's you have to find some way to avoid
calculating the square root of a negative number.  That is why people
have historically rejected negative I's and why Truncate and cTruncate
were invented.

   When refining against I, the calculation of (Iobs - Icalc)^2 couldn't
care less if Iobs happens to be negative.

   As for why people still refine against F...  When I was distributing
a refinement package it could refine against I but no one wanted to do
that.  The R values ended up higher, but they were looking at R
values calculated from F's.  Of course the F based R values are lower
when you refine against F's, that means nothing.

   If we could get the PDB to report both the F and I based R values
for all models maybe we could get a start toward moving to intensity
refinement.

Dale Tronrud


On 06/20/2013 09:06 AM, Douglas Theobald wrote:
Just trying to understand the basic issues here.  How could refining directly 
against intensities solve the fundamental problem of negative intensity values?


On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:
As a maybe better alternative, we should (once again) consider to refine 
against intensities (and I guess George Sheldrick would agree here).

I have a simple question - what exactly, short of some sort of historic inertia 
(or memory lapse), is the reason NOT to refine against intensities?

Best, BR





 









Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
Seems to me that the negative Is should be dealt with early on, in the 
integration step.  Why exactly do integration programs report negative Is to 
begin with?


On Jun 20, 2013, at 12:45 PM, Dom Bellini dom.bell...@diamond.ac.uk wrote:

 Wouldn't it be possible to take advantage of negative Is to extrapolate/estimate 
 the decay of the background scattering (a kind of Wilson plot of background 
 scattering), to flatten out the background and push all the Is to positive values?
 
 More of a question rather than a suggestion ...
 
 D
 
 
 
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Ian 
 Tickle
 Sent: 20 June 2013 17:34
 To: ccp4bb
 Subject: Re: [ccp4bb] ctruncate bug?
 
 Yes higher R factors is the usual reason people don't like I-based refinement!
 
 Anyway, refining against Is doesn't solve the problem, it only postpones it: 
 you still need the Fs for maps! (though errors in Fs may be less critical 
 then).
 -- Ian
 
  On 20 June 2013 17:20, Dale Tronrud det...@uoxray.uoregon.edu wrote:
   If you are refining against F's you have to find some way to avoid
 calculating the square root of a negative number.  That is why people
 have historically rejected negative I's and why Truncate and cTruncate
 were invented.
 
   When refining against I, the calculation of (Iobs - Icalc)^2 couldn't
 care less if Iobs happens to be negative.
 
   As for why people still refine against F...  When I was distributing
 a refinement package it could refine against I but no one wanted to do
 that.  The R values ended up higher, but they were looking at R
 values calculated from F's.  Of course the F based R values are lower
 when you refine against F's, that means nothing.
 
   If we could get the PDB to report both the F and I based R values
 for all models maybe we could get a start toward moving to intensity
 refinement.
 
 Dale Tronrud
 
 
 On 06/20/2013 09:06 AM, Douglas Theobald wrote:
 Just trying to understand the basic issues here.  How could refining directly 
 against intensities solve the fundamental problem of negative intensity 
 values?
 
 
  On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:
 As a maybe better alternative, we should (once again) consider to refine 
 against intensities (and I guess George Sheldrick would agree here).
 
 I have a simple question - what exactly, short of some sort of historic 
 inertia (or memory lapse), is the reason NOT to refine against intensities?
 
 Best, BR
 
 
 
 
 
 
 
 
 
 
 
 
 


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Dom Bellini
Sorry, perhaps what I was thinking of was to use the Icalc to proportionally push 
up the Iobs, so as to push the negative Is to positive numbers.

But I guess that would bias the Iobs?

Again just questions rather than suggestions.

D 

-Original Message-
From: Douglas Theobald [mailto:dtheob...@brandeis.edu] 
Sent: 20 June 2013 17:49
To: Bellini, Domenico (DLSLtd,RAL,DIA); ccp4bb
Subject: Re: [ccp4bb] ctruncate bug?

Seems to me that the negative Is should be dealt with early on, in the 
integration step.  Why exactly do integration programs report negative Is to 
begin with?


On Jun 20, 2013, at 12:45 PM, Dom Bellini dom.bell...@diamond.ac.uk wrote:

 Wouldn't it be possible to take advantage of negative Is to extrapolate/estimate 
 the decay of the background scattering (a kind of Wilson plot of background 
 scattering), to flatten out the background and push all the Is to positive values?
 
 More of a question rather than a suggestion ...
 
 D
 
 
 
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
 Ian Tickle
 Sent: 20 June 2013 17:34
 To: ccp4bb
 Subject: Re: [ccp4bb] ctruncate bug?
 
 Yes higher R factors is the usual reason people don't like I-based refinement!
 
 Anyway, refining against Is doesn't solve the problem, it only postpones it: 
 you still need the Fs for maps! (though errors in Fs may be less critical 
 then).
 -- Ian
 
  On 20 June 2013 17:20, Dale Tronrud det...@uoxray.uoregon.edu wrote:
   If you are refining against F's you have to find some way to avoid 
 calculating the square root of a negative number.  That is why people 
 have historically rejected negative I's and why Truncate and cTruncate 
 were invented.
 
   When refining against I, the calculation of (Iobs - Icalc)^2 
 couldn't care less if Iobs happens to be negative.
 
   As for why people still refine against F...  When I was distributing 
 a refinement package it could refine against I but no one wanted to do 
 that.  The R values ended up higher, but they were looking at R 
 values calculated from F's.  Of course the F based R values are lower 
 when you refine against F's, that means nothing.
 
   If we could get the PDB to report both the F and I based R values 
 for all models maybe we could get a start toward moving to intensity 
 refinement.
 
 Dale Tronrud
 
 
 On 06/20/2013 09:06 AM, Douglas Theobald wrote:
 Just trying to understand the basic issues here.  How could refining directly 
 against intensities solve the fundamental problem of negative intensity 
 values?
 
 
  On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:
 As a maybe better alternative, we should (once again) consider to refine 
 against intensities (and I guess George Sheldrick would agree here).
 
 I have a simple question - what exactly, short of some sort of historic 
 inertia (or memory lapse), is the reason NOT to refine against intensities?
 
 Best, BR
 
 
 
 
 
 
 
 
 
 
 
 
 






Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Ian Tickle
The prior knowledge about Is is not merely that they are >= 0; it's more
than that: we know they have an (approximate) Wilson distribution.  AFAICS
incorporating that information at the integration stage would be almost
equivalent to the French & Wilson procedure.  In fact it would probably not
be as good, since the experimental estimates of I do have an (approximate)
Gaussian distribution, being the difference of 2 Poisson distributions with
large means (at least ~10).  The corrected Is, being the best estimates of
the true Is, would, as you point out, not have a Gaussian distribution, and
some of the assumptions made in averaging equivalent reflections would not
be valid.  You could still use the corrected Is instead of the experimental
Is in refinement, but I suspect it would not make any difference to the
results (except you would get lower R factors!).

-- Ian
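
As a numerical illustration of this idea -- essentially the French & Wilson
construction for an acentric reflection, not ctruncate's actual code -- the
Python sketch below combines a Gaussian likelihood with an exponential
(acentric Wilson) prior; the Wilson mean intensity sigma_wilson is an
assumed input:

import numpy as np

def posterior_mean_I(iobs, sigma, sigma_wilson, n=20000):
    # Posterior for the true intensity J >= 0: Gaussian likelihood
    # centred on the measured Iobs times an exponential (acentric
    # Wilson) prior with mean sigma_wilson; return the posterior mean.
    J = np.linspace(0.0, abs(iobs) + 10.0 * sigma, n)
    log_post = -0.5 * ((iobs - J) / sigma) ** 2 - J / sigma_wilson
    w = np.exp(log_post - log_post.max())        # avoid underflow
    return np.trapz(J * w, J) / np.trapz(w, J)

# A reflection measured slightly negative still gets a small positive
# best estimate, instead of being rejected:
print(posterior_mean_I(iobs=-2.0, sigma=3.0, sigma_wilson=50.0))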


On 20 June 2013 17:49, Douglas Theobald dtheob...@brandeis.edu wrote:

 Seems to me that the negative Is should be dealt with early on, in the
 integration step.  Why exactly do integration programs report negative Is
 to begin with?


 On Jun 20, 2013, at 12:45 PM, Dom Bellini dom.bell...@diamond.ac.uk
 wrote:

  Wouldn't it be possible to take advantage of negative Is to
 extrapolate/estimate the decay of the background scattering (a kind of Wilson
 plot of background scattering), to flatten out the background and push all the
 Is to positive values?
 
  More of a question rather than a suggestion ...
 
  D
 
 
 
  From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
 Ian Tickle
  Sent: 20 June 2013 17:34
  To: ccp4bb
  Subject: Re: [ccp4bb] ctruncate bug?
 
  Yes higher R factors is the usual reason people don't like I-based
 refinement!
 
  Anyway, refining against Is doesn't solve the problem, it only postpones
 it: you still need the Fs for maps! (though errors in Fs may be less
 critical then).
  -- Ian
 
  On 20 June 2013 17:20, Dale Tronrud det...@uoxray.uoregon.edu wrote:
If you are refining against F's you have to find some way to avoid
  calculating the square root of a negative number.  That is why people
  have historically rejected negative I's and why Truncate and cTruncate
  were invented.
 
When refining against I, the calculation of (Iobs - Icalc)^2 couldn't
  care less if Iobs happens to be negative.
 
As for why people still refine against F...  When I was distributing
  a refinement package it could refine against I but no one wanted to do
  that.  The R values ended up higher, but they were looking at R
  values calculated from F's.  Of course the F based R values are lower
  when you refine against F's, that means nothing.
 
If we could get the PDB to report both the F and I based R values
  for all models maybe we could get a start toward moving to intensity
  refinement.
 
  Dale Tronrud
 
 
  On 06/20/2013 09:06 AM, Douglas Theobald wrote:
  Just trying to understand the basic issues here.  How could refining
 directly against intensities solve the fundamental problem of negative
 intensity values?
 
 
  On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:
  As a maybe better alternative, we should (once again) consider to refine
 against intensities (and I guess George Sheldrick would agree here).
 
  I have a simple question - what exactly, short of some sort of historic
 inertia (or memory lapse), is the reason NOT to refine against intensities?
 
  Best, BR
 
 
 
 
 
 
 
 
 
 
 
 
 



Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Andrew Leslie
The integration programs report a negative intensity simply because that is the 
observation. 

Because of noise in the X-ray background, in a large sample of intensity 
estimates for reflections whose true intensity is very, very small, one will 
inevitably get some measurements that are negative. These must not be rejected, 
because that would lead to bias (some of the intensities for symmetry 
mates would be estimated too large rather than too small). It is not unusual for 
the intensity to remain negative even after averaging symmetry mates.

Andrew
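
A small Python/NumPy simulation makes this concrete (the counts are chosen
arbitrarily for illustration):

import numpy as np

rng = np.random.default_rng(0)
true_spot  = 0.0       # a reflection with essentially zero true intensity
background = 100.0     # mean background counts under the spot

# Integrated intensity = Poisson counts in the peak box minus an
# estimate of the background from nearby pixels; both are noisy, so
# roughly half the net values for a zero-intensity spot are negative.
peak_counts = rng.poisson(true_spot + background, size=10000)
bg_estimate = rng.poisson(background, size=10000)
net_I = peak_counts - bg_estimate

print("fraction negative:", np.mean(net_I < 0))   # ~0.5 for a zero-I spot
print("mean net I:", net_I.mean())                # ~0, i.e. unbiased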


On 20 Jun 2013, at 11:49, Douglas Theobald dtheob...@brandeis.edu wrote:

 Seems to me that the negative Is should be dealt with early on, in the 
 integration step.  Why exactly do integration programs report negative Is to 
 begin with?
 
 


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
How can there be nothing wrong with something that is unphysical?  
Intensities cannot be negative.  How could you measure a negative number of 
photons?  You can only have a Gaussian distribution around I=0 if you are using 
an incorrect, unphysical statistical model.  As I understand it, the physics 
predicts that intensities from diffraction should be gamma distributed (i.e., 
the square of a Gaussian variate), which makes sense as the gamma distribution 
assigns probability 0 to negative values.  


On Jun 20, 2013, at 1:00 PM, Bernard D Santarsiero b...@uic.edu wrote:

 There's absolutely nothing wrong with negative intensities. They are 
 measurements of intensities that are near zero, and some will be negative, 
 and others positive.  The distribution around I=0 can still be Gaussian, and 
 you have true esd's.  With F's you used a derived esd since they can't be 
 formally generated from the sigma's on I, and are very much undetermined for 
 small intensities and small F's. 
 
 Small molecule crystallographers routinely refine on F^2 and use all of the 
 data, even if the F^2's are negative.
 
 Bernie
 
 
 


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
I still don't see how you get a negative intensity from that.  It seems you are 
saying that in many cases of a low-intensity reflection, the integrated spot 
will be lower than the background.  That is not equivalent to having a negative 
measurement (as the measurement is actually positive, and sometimes things are 
randomly less positive than background).  If you are using a proper 
statistical model, after background correction you will end up with a positive 
(or 0) value for the integrated intensity.  


On Jun 20, 2013, at 1:08 PM, Andrew Leslie and...@mrc-lmb.cam.ac.uk wrote:

 
 The integration programs report a negative intensity simply because that is 
 the observation. 
 
 Because of noise in the X-ray background, in a large sample of intensity 
 estimates for reflections whose true intensity is very very small one will 
 inevitably get some measurements that are negative. These must not be 
 rejected because this will lead to bias (because some of these intensities 
 for symmetry mates will be estimated too large rather than too small). It is 
 not unusual for the intensity to remain negative even after averaging 
 symmetry mates.
 
 Andrew
 
 


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Felix Frolow
Intensity is a subtraction:  Inet = Iobs - Ibackground.  Iobs and Ibackground 
cannot be negative.  Inet CAN be negative if the background is higher than Iobs. 
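
[Ed. note: propagating ideal Poisson counting errors through this subtraction
gives

    \sigma^2(I_net) = \sigma^2(I_obs) + \sigma^2(I_background)
                    ~ I_obs + I_background,

so for a weak spot on a strong background \sigma(I_net) easily exceeds
|I_net|, and when the true intensity is near zero roughly half of all
estimates come out negative.]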
We do not know how to model background scattering modulated by the molecular 
transform and the mechanical motion of the molecule; 
I recall we have called it TDS - thermal diffuse scattering. Many years ago 
Boaz Shaanan and JH were fascinated by it.
If we knew how to deal with TDS, we would get the much nicer structures some 
of us like, and for sure the much lower 
R factors all of us love, excluding maybe referees who would claim 
over-refinement :-\
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology, 
Department of Molecular Microbiology and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Jun 20, 2013, at 20:07 , Douglas Theobald dtheob...@brandeis.edu wrote:

 How can there be nothing wrong with something that is unphysical?  
 Intensities cannot be negative.  How could you measure a negative number of 
 photons?  You can only have a Gaussian distribution around I=0 if you are 
 using an incorrect, unphysical statistical model.  As I understand it, the 
 physics predicts that intensities from diffraction should be gamma 
 distributed (i.e., the square of a Gaussian variate), which makes sense as 
 the gamma distribution assigns probability 0 to negative values.  
 
 

Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
On Jun 20, 2013, at 1:47 PM, Felix Frolow mbfro...@post.tau.ac.il wrote:

 Intensity is subtraction:  Inet=Iobs - Ibackground.  Iobs and Ibackground can 
 not be negative.  Inet CAN be negative if background is higher than Iobs. 

Just to reiterate, we know that the true value of Inet cannot be negative.  
Hence, the equation you quote is invalid and illogical --- it has no physical 
or statistical justification (except as an approximation for large Iobs and low 
Iback, when ironically background correction is unnecessary).  That equation 
does not account for random statistical fluctuations (e.g., simple Poisson 
counting statistics of shot noise).  
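
[Ed. note: to make this concrete for ideal Poisson counts with a known
background rate b, the spot count N is Poisson(I + b), the constrained maximum
likelihood estimate is I_hat = max(N - b, 0), and any Bayesian posterior mean
under a prior supported on I >= 0 is strictly positive. The plain subtraction
N - b is the unconstrained estimate that the integration programs report.]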



[ccp4bb] NAG-NAG

2013-06-20 Thread Monika Pathak
Hi,

Please can I ask how I can put NAG onto an asparagine in Coot (I think it is 
2 NAGs that I can put in the density), and if possible how to refine it in 
Refmac then. Thanks in advance for help.

Regards
Monika

Monika Pathak
University of Nottingham
NG7 2RD



Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Ian Tickle
Douglas, I think you are missing the point that estimation of the
parameters of the proper Bayesian statistical model (i.e. the Wilson prior)
in order to perform the integration in the manner you are suggesting
requires knowledge of the already integrated intensities!  I suppose we
could iterate, i.e. assume an approximate prior, integrate, calculate a
better prior, re-do the integration with the new prior and so on (hoping of
course that the whole process converges), but I think most people would
regard that as overkill.  Also dealing with the issue of averaging
estimates of intensities that no longer have a Gaussian error distribution,
and also crucially outlier rejection, would require some rethinking of the
algorithms. The question is would it make any difference in the end
compared with the 'post-correction' we're doing now?

Cheers

-- Ian
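
[Ed. note: a minimal, self-contained Python sketch of the iteration Ian
describes, assuming an acentric Wilson (exponential) prior and Gaussian
measurement errors. All names and numbers are hypothetical, and the posterior
mean is taken by brute-force quadrature rather than the analytic French-Wilson
formulae.]

import numpy as np

def posterior_mean(i_obs, sigma, big_sigma, grid=4000):
    # E[I | i_obs] with prior p(I) = exp(-I/Sigma)/Sigma for I >= 0
    # and likelihood N(i_obs; I, sigma^2), by numerical quadrature
    upper = max(i_obs + 6.0 * sigma, 6.0 * sigma)
    i_true = np.linspace(0.0, upper, grid)
    log_post = (-(i_obs - i_true) ** 2 / (2.0 * sigma ** 2)
                - i_true / big_sigma)
    w = np.exp(log_post - log_post.max())       # avoid underflow
    return float(np.sum(w * i_true) / np.sum(w))

def iterate_prior(i_obs, sigma, n_iter=20):
    big_sigma = max(float(np.mean(i_obs)), 1e-6)  # crude starting prior
    for _ in range(n_iter):
        est = np.array([posterior_mean(i, s, big_sigma)
                        for i, s in zip(i_obs, sigma)])
        big_sigma = max(float(np.mean(est)), 1e-6)  # update prior parameter
    return est, big_sigma

# e.g. a resolution shell containing some negative net intensities:
i_obs = np.array([-5.0, -1.0, 2.0, 10.0, 50.0])
sigma = np.array([4.0, 3.0, 3.0, 4.0, 8.0])
est, S = iterate_prior(i_obs, sigma)
print(est, S)   # all posterior means come out positive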


On 20 June 2013 18:14, Douglas Theobald dtheob...@brandeis.edu wrote:

 I still don't see how you get a negative intensity from that.  It seems
 you are saying that in many cases of a low intensity reflection, the
 integrated spot will be lower than the background.  That is not equivalent
 to having a negative measurement (as the measurement is actually positive,
  and sometimes things are randomly less positive than background).  If you
 are using a proper statistical model, after background correction you will
 end up with a positive (or 0) value for the integrated intensity.



Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Kay Diederichs
Douglas,

the intensity is negative if the integrated spot has a lower intensity than the 
estimate of the background under the spot. So yes, we are not _measuring_ 
negative intensities, rather we are estimating intensities, and that estimate 
may turn out to be negative. In a later step we try to correct for this, 
because it is non-physical, as you say. At that point, the proper statistical 
model comes into play. Essentially we use this as a prior. In the order of 
increasing information, we can have more or less informative priors for weak 
reflections:
1) I > 0
2) I has a distribution looking like the right half of a Gaussian, and we 
estimate its width from the variance of the intensities in a resolution shell
3) I follows a Wilson distribution, and we estimate its parameters from the 
data in a resolution shell
4) I must be related to Fcalc^2 (i.e. once the structure is solved, we 
re-integrate using the Fcalc as prior)
For a given experiment, the problem is chicken-and-egg in the sense that only 
if you know the characteristics of the data can you choose the correct prior.
I guess that using prior 4) would be heavily frowned upon because there is a 
danger of model bias. You could say: "A Bayesian analysis done properly should 
not suffer from model bias." This is probably true, but the theory to ensure the 
word "properly" is not available at the moment.
Crystallographers usually use prior 3) which, as I tried to point out, also has 
its weak spots, namely if the data do not behave like those of an ideal crystal 
- and today's projects often result in data that would have been discarded ten 
years ago, so they are far from ideal.
Prior 2) is available as an option in XDSCONV
Prior 1) seems to be used, or is available, in ctruncate in certain cases (I 
don't know the details)
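
[Ed. note: in symbols, the cases above all amount to the posterior

    p(I | I_obs) \propto exp( -(I_obs - I)^2 / (2 \sigma^2) ) p(I),

with prior 1) p(I) \propto 1 for I >= 0; prior 2) p(I) \propto
exp(-I^2/(2 \tau^2)) for I >= 0; prior 3) the acentric Wilson prior
p(I) = \Sigma^{-1} exp(-I/\Sigma); and prior 4) a distribution centred on
Fcalc^2. The posterior mean and standard deviation then replace I_obs and
\sigma.]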

Using intensities instead of amplitudes in refinement would avoid having to 
choose a prior, and refinement would therefore not be compromised in case of 
data violating the assumptions underlying the prior. 

By the way, it is not (Iobs-Icalc)^2 that would be optimized in refinement 
against intensities, but rather the corresponding maximum likelihood formula 
(which I seem to remember is more complicated than the amplitude ML formula, or 
is not an analytical formula at all, but maybe somebody knows better).
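
[Ed. note: for comparison, the least-squares (not maximum-likelihood) intensity
target long used in small-molecule refinement, e.g. SHELXL's default weighting
scheme, is

    minimize sum_hkl w (Fo^2 - Fc^2)^2,
    w = 1 / [ \sigma^2(Fo^2) + (aP)^2 + bP ],
    P = ( max(Fo^2, 0) + 2 Fc^2 ) / 3,

which accepts negative Fo^2 directly; a and b are adjustable weighting
parameters.]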

best,

Kay


On Thu, 20 Jun 2013 13:14:28 -0400, Douglas Theobald dtheob...@brandeis.edu 
wrote:

I still don't see how you get a negative intensity from that.  It seems you 
are saying that in many cases of a low intensity reflection, the integrated 
spot will be lower than the background.  That is not equivalent to having a 
negative measurement (as the measurement is actually positive, and sometimes 
things are randomly less positive than backgroiund).  If you are using a 
proper statistical model, after background correction you will end up with a 
positive (or 0) value for the integrated intensity.  



[ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread Zhang, Zhen
Dear all,

I just observed a puzzling phenomenon when purifying a refolded protein with 
size exclusion chromatography. The protein was solubilized in 8 M urea and 
refolded by dialysis against 500 mM arginine in PBS. The protein is 40 kDa and 
is expected to be a trimer. The puzzling part is that the protein after refolding 
always eluted at 18 ml from the Superdex 200 column (10/300), which is 
calculated to be 5 kDa by the standards. However, the fractions appear to be at 
40 kDa on SDS-PAGE and the protein is functional in terms of in vitro binding 
to the protein-specific monoclonal antibody. I could not explain the 
observation and I am wondering if anyone has had a similar experience or has an 
opinion on this. Any comments are welcome.

Thanks.

Zhen




Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
Kay, I understand the French-Wilson way of currently doing things, as you 
outline below.  My point is that it is not optimal --- we could do things 
better --- since even French-Wilson accepts the idea of negative intensity 
measurements.  I am trying to dispel the (very stubborn) view that when the 
background is more than the spot, the only possible estimate of the intensity 
is a negative value.  This is untrue, and unjustified by the physics involved.  
In principle, there is no reason to use French-Wilson, as we should never have 
reported a negative integrated intensity to begin with.  

I also understand that (Iobs-Icalc)^2 is not the actual refinement target, but 
the same point applies, and the actual target is based on a fundamental 
Gaussian assumption for the Is.  


On Jun 20, 2013, at 2:13 PM, Kay Diederichs kay.diederi...@uni-konstanz.de 
wrote:

 Douglas,
 
 the intensity is negative if the integrated spot has a lower intensity than 
 the estimate of the background under the spot. So yes, we are not _measuring_ 
 negative intensities, rather we are estimating intensities, and that estimate 
 may turn out to be negative. In a later step we try to correct for this, 
 because it is non-physical, as you say. At that point, the proper 
 statistical model comes into play. Essentially we use this as a prior. In 
 the order of increasing information, we can have more or less informative 
 priors for weak reflections:
 1) I > 0
 2) I has a distribution looking like the right half of a Gaussian, and we 
 estimate its width from the variance of the intensities in a resolution shell
 3) I follows a Wilson distribution, and we estimate its parameters from the 
 data in a resolution shell
 4) I must be related to Fcalc^2 (i.e. once the structure is solved, we 
 re-integrate using the Fcalc as prior)
 For a given experiment, the problem is chicken-and-egg in the sense that only 
 if you know the characteristics of the data can you choose the correct prior.
 I guess that using prior 4) would be heavily frowned upon because there is a 
 danger of model bias. You could say: A Bayesian analysis done properly should 
 not suffer from model bias. This is probably true, but the theory to ensure 
 the word properly is not available at the moment.
 Crystallographers usually use prior 3) which, as I tried to point out, also 
 has its weak spots, namely if the data do not behave like those of an ideal 
 crystal - and today's projects often result in data that would have been 
 discarded ten years ago, so they are far from ideal.
 Prior 2) is available as an option in XDSCONV
 Prior 1) seems to be used, or is available, in ctruncate in certain cases (I 
 don't know the details)
 
 Using intensities instead of amplitudes in refinement would avoid having to 
 choose a prior, and refinement would therefore not be compromised in case of 
 data violating the assumptions underlying the prior. 
 
 By the way, it is not (Iobs-Icalc)^2 that would be optimized in refinement 
 against intensities, but rather the corresponding maximum likelihood formula 
 (which I seem to remember is more complicated than the amplitude ML formula, 
 or is not an analytical formula at all, but maybe somebody knows better).
 
 best,
 
 Kay
 
 

Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Kay Diederichs

Douglas,

as soon as you come up with an algorithm that gives accurate, unbiased 
intensity estimates together with their standard deviations, everybody 
will be happy. But I'm not aware of progress in this question (Poisson 
signal with background) in the last decades - I'd be glad to be proven 
wrong!


Kay

Am 20.06.13 21:27, schrieb Douglas Theobald:

Kay, I understand the French-Wilson way of currently doing things, as you 
outline below.  My point is that it is not optimal --- we could do things 
better --- since even French-Wilson accepts the idea of negative intensity 
measurements.  I am trying to dispel the (very stubborn) view that when the 
background is more than the spot, the only possible estimate of the intensity 
is a negative value.  This is untrue, and unjustified by the physics involved.  
In principle, there is no reason to use French-Wilson, as we should never have 
reported a negative integrated intensity to begin with.

I also understand that (Iobs-Icalc)^2 is not the actual refinement target, but 
the same point applies, and the actual target is based on a fundamental 
Gaussian assumption for the Is.



Re: [ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread Patrick Loll
If your protein elutes very late, that means it's binding to the column matrix 
(so all estimates of size go into the trash). Check to see that the ionic 
strength of the buffer is reasonable (equivalent to, say, 150 mM NaCl). If so, 
then the only solution is to go to a different matrix type.
Pat
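
[Ed. note: a quick back-of-envelope check in Python; the void and total volumes
are nominal assumptions for a 10/300 column, not measured values.]

v_e, v_0, v_t = 18.0, 8.0, 24.0          # elution, void, total volume (ml)
k_av = (v_e - v_0) / (v_t - v_0)
print(round(k_av, 2))                    # ~0.62: within the fractionation
                                         # range but very late for a 120 kDa
                                         # trimer, consistent with adsorption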



Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Douglas Theobald
Well, I tend to think Ian is probably right, that doing things the proper way 
(vs French-Wilson) will not make much of a difference in the end.  

Nevertheless, I don't think refining against the (possibly negative) 
intensities is a good solution to dealing with negative intensities --- that 
just ignores the problem, and will end up overweighting large negative 
intensities.  Wouldn't it be better to correct the negative intensities with FW 
and then refine against that?


On Jun 20, 2013, at 3:38 PM, Kay Diederichs kay.diederi...@uni-konstanz.de 
wrote:

 Douglas,
 
 as soon as you come up with an algorithm that gives accurate, unbiased 
 intensity estimates together with their standard deviations, everybody will 
 be happy. But I'm not aware of progress in this question (Poisson signal with 
 background) in the last decades - I'd be glad to be proven wrong!
 
 Kay
 

Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Tim Gruene

Dear Douglas,

why don't you try this and publish the results? As Kay already pointed
out, everybody would be delighted if we could get better data - if
including negative intensities leads to better models, I think most
crystallographers could not be bothered that they are non-physical.

Best,
Tim

On 06/20/2013 09:46 PM, Douglas Theobald wrote:
 Well, I tend to think Ian is probably right, that doing things the
 proper way (vs French-Wilson) will not make much of a difference
 in the end.
 
 Nevertheless, I don't think refining against the (possibly
 negative) intensities is a good solution to dealing with negative
 intensities --- that just ignores the problem, and will end up
 overweighting large negative intensities.  Wouldn't it be better to
 correct the negative intensities with FW and then refine against
 that?
 
 
 On Jun 20, 2013, at 3:38 PM, Kay Diederichs
 kay.diederi...@uni-konstanz.de wrote:
 
 Douglas,
 
 as soon as you come up with an algorithm that gives accurate,
 unbiased intensity estimates together with their standard
 deviations, everybody will be happy. But I'm not aware of
 progress in this question (Poisson signal with background) in the
 last decades - I'd be glad to be proven wrong!
 
 Kay
 
 Am 20.06.13 21:27, schrieb Douglas Theobald:
 Kay, I understand the French-Wilson way of currently doing
 things, as you outline below.  My point is that it is not
 optimal --- we could do things better --- since even
 French-Wilson accepts the idea of negative intensity
 measurements.  I am trying to disabuse the (very stubborn) view
 that when the background is more than the spot, the only
 possible estimate of the intensity is a negative value.  This
 is untrue, and unjustified by the physics involved.  In
 principle, there is no reason to use French-Wilson, as we
 should never have reported a negative integrated intensity to
 begin with.
 
 I also understand that (Iobs-Icalc)^2 is not the actual
 refinement target, but the same point applies, and the actual
 target is based on a fundamental Gaussian assumption for the
 Is.
 
 
 On Jun 20, 2013, at 2:13 PM, Kay Diederichs
 kay.diederi...@uni-konstanz.de wrote:
 
 Douglas,
 
 the intensity is negative if the integrated spot has a lower
 intensity than the estimate of the background under the spot.
 So yes, we are not _measuring_ negative intensities, rather
 we are estimating intensities, and that estimate may turn out
 to be negative. In a later step we try to correct for this,
 because it is non-physical, as you say. At that point, the
 proper statistical model comes into play. Essentially we
 use this as a prior. In the order of increasing
 information, we can have more or less informative priors for
 weak reflections: 1) I  0 2) I has a distribution looking
 like the right half of a Gaussian, and we estimate its width
 from the variance of the intensities in a resolution shell 3)
 I follows a Wilson distribution, and we estimate its
 parameters from the data in a resolution shell 4) I must be
 related to Fcalc^2 (i.e. once the structure is solved, we
 re-integrate using the Fcalc as prior) For a given
 experiment, the problem is chicken-and-egg in the sense that
 only if you know the characteristics of the data can you
 choose the correct prior. I guess that using prior 4) would
 be heavily frowned upon because there is a danger of model
 bias. You could say: A Bayesian analysis done properly should
 not suffer from model bias. This is probably true, but the
 theory to ensure the word properly is not available at the
 moment. Crystallographers usually use prior 3) which, as I
 tried to point out, also has its weak spots, namely if the
 data do not behave like those of an ideal crystal - and
 today's projects often result in data that would have been
 discarded ten years ago, so they are far from ideal. Prior 2)
 is available as an option in XDSCONV Prior 1) seems to be
 used, or is available, in ctruncate in certain cases (I don't
 know the details)
 
 Using intensities instead of amplitudes in refinement would
 avoid having to choose a prior, and refinement would
 therefore not be compromised in case of data violating the
 assumptions underlying the prior.
 
 By the way, it is not (Iobs-Icalc)^2 that would be optimized
 in refinement against intensities, but rather the
 corresponding maximum likelihood formula (which I seem to
 remember is more complicated than the amplitude ML formula,
 or is not an analytical formula at all, but maybe somebody
 knows better).
 
 best,
 
 Kay
 
 
 On Thu, 20 Jun 2013 13:14:28 -0400, Douglas Theobald
 dtheob...@brandeis.edu wrote:
 
 I still don't see how you get a negative intensity from
 that.  It seems you are saying that in many cases of a low
 intensity reflection, the integrated spot will be lower
 than the background.  That is not equivalent to having a
 negative measurement (as the measurement is actually
positive, and sometimes things are randomly less positive than background).  If you are using a proper statistical model, after background correction you will end up with a positive (or 0) value for the integrated intensity.

Re: [ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread RHYS GRINTER
Hi Zhen,

I'm not sure that binding to a monoclonal antibody is good evidence that the 
protein is in a natively folded state. I would be suspicious of such a result, 
as the protein could be improperly folded, which may be what causes it to 
interact with the column matrix. It could be useful to use some other 
techniques (activity assay, circular dichroism, DSC, native PAGE, etc.) to 
validate the refolding.

Best,

Rhys


From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] On Behalf Of Patrick Loll 
[pat.l...@drexel.edu]
Sent: 20 June 2013 20:39
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Puzzling observation about size exclusion chromatography

If your protein elutes very late, that means it's binding to the column matrix 
(so all estimates of size go into the trash). Check that the ionic 
strength of the buffer is reasonable (equivalent to, say, 150 mM NaCl). If so, then 
the only solution is to go to a different matrix type.
Pat

On 20 Jun 2013, at 3:09 PM, Zhang, Zhen wrote:

 Dear all,

 I just observed a puzzling phenomenon when purifying a refolded protein by 
 size exclusion chromatography. The protein was solubilized in 8 M urea and 
 refolded by dialysis against 500 mM arginine in PBS. The protein is 40 kDa and 
 is expected to be a trimer. The puzzling part is that after refolding the 
 protein always elutes at 18 ml from the Superdex S200 column (10/300), which 
 corresponds to about 5 kDa by the calibration standards. However, the 
 fractions run at 40 kDa on SDS-PAGE, and the protein is functional in terms of 
 in vitro binding to the protein-specific monoclonal antibody. I cannot explain 
 this observation and wonder if anyone has had a similar experience or has an 
 opinion on it. Any comments are welcome.

 Thanks.

 Zhen



Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Felix Frolow
A measurement is not worth very much if the error of the measurement is not 
considered alongside it.
10 counts of intensity on top of a very large background is close to nothing; 
10 counts on top of a 1-count background is an excellent intensity.
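
[In numbers, an illustration assuming simple Poisson counting and a background region of the same area as the peak:

import math

def net_intensity(peak_counts, bkg_counts):
    # Net counts above background; variances of independent Poisson
    # counts add, so sigma(Inet) = sqrt(peak + background).
    inet = peak_counts - bkg_counts
    sigma = math.sqrt(peak_counts + bkg_counts)
    return inet, sigma

i1, s1 = net_intensity(10010, 10000)  # 10 net counts on a huge background
i2, s2 = net_intensity(11, 1)         # 10 net counts on a tiny background
print(i1 / s1)  # ~0.07 -- close to nothing
print(i2 / s2)  # ~2.9  -- a usable measurement
]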
Simultaneously with diffraction, several types of X-ray scattering event occur, 
such as:
- Scattering from the instrument parts (apertures, beam-stops, pinholes): a 
source of systematic error that should be eliminated or just disregarded
- Scattering from the air: more or less similar to the latter, but it can be 
conveniently subtracted and will not produce much systematic error
- Scattering from the water in the crystal, sometimes complicated by the 
appearance of powder diffraction
- Scattering from phonons of the crystalline lattice (TDS)
Scattering from phonons of the crystalline lattice (TDS)
These last two are legitimate contributors to the net intensity, which can be 
negative (see Andrew Leslie's and Bernard Santarsiero's comments). What the 
true value of Inet is, I do not know. I do know that Inet can be properly 
measured according to accepted protocols and the 101 of counting statistics in 
nuclear physics. And the contribution of negative intensities processed by FW 
or similar methods (maybe OZ and WM and others) to the quality of the 
structure and electron density map is MEASURABLE. It was documented a very 
long time ago:
[10] Hirshfeld, F.L.; Rabinovich, D. Treating Weak Reflexions in Least-Squares 
Calculations. Acta Crystallogr. 1973, A29, 510–513.
[11] Arnberg, L.; Hovmöller, S.; Westman, S. On the Significance of 
'Non-Significant' Reflexions. Acta Crystallogr. 1979, A35, 497–499.
However, these papers relate to small-molecule structures.
I personally do not think that, from the point of view of diffraction physics, 
protein crystals are any different from crystals of small molecules.

Dr Felix Frolow   
Professor of Structural Biology and Biotechnology, Department of Molecular 
Microbiology and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Jun 20, 2013, at 20:59 , Douglas Theobald dtheob...@brandeis.edu wrote:

 On Jun 20, 2013, at 1:47 PM, Felix Frolow mbfro...@post.tau.ac.il wrote:
 
 Intensity is subtraction:  Inet=Iobs - Ibackground.  Iobs and Ibackground 
 can not be negative.  Inet CAN be negative if background is higher than 
 Iobs. 
 
 Just to reiterate, we know that the true value of Inet cannot be negative.  
 Hence, the equation you quote is invalid and illogical --- it has no physical 
 or statistical justification (except as an approximation for large Iobs and 
 low Iback, when ironically background correction is unnecessary).  That 
 equation does not account for random statistical fluctuations (e.g., simple 
 Poisson counting statistics of shot noise).  
 
 
 We do not know how to model background scattering modulated by the molecular 
 transform and the mechanical motion of the molecule;
 I recall we have called it TDS - thermal diffuse scattering. Many years ago 
 Boaz Shaanan and JH were fascinated by it.
 If we knew how to deal with TDS, we would get the much nicer structures 
 some of us like and, for sure, the much lower 
 R factors all of us love - excluding maybe referees who would claim 
 over-refinement :-\
 Dr Felix Frolow   
 Professor of Structural Biology and Biotechnology, 
 Department of Molecular Microbiology and Biotechnology
 Tel Aviv University 69978, Israel
 
 Acta Crystallographica F, co-editor
 
 e-mail: mbfro...@post.tau.ac.il
 Tel:  ++972-3640-8723
 Fax: ++972-3640-9407
 Cellular: 0547 459 608
 
 On Jun 20, 2013, at 20:07 , Douglas Theobald dtheob...@brandeis.edu wrote:
 
 How can there be nothing wrong with something that is unphysical?  
 Intensities cannot be negative.  How could you measure a negative number of 
 photons?  You can only have a Gaussian distribution around I=0 if you are 
 using an incorrect, unphysical statistical model.  As I understand it, the 
 physics predicts that intensities from diffraction should be gamma 
 distributed (i.e., the square of a Gaussian variate), which makes sense as 
 the gamma distribution assigns probability 0 to negative values.  
 
 
 On Jun 20, 2013, at 1:00 PM, Bernard D Santarsiero b...@uic.edu wrote:
 
 There's absolutely nothing wrong with negative intensities. They are 
 measurements of intensities that are near zero, and some will be negative, 
 and others positive.  The distribution around I=0 can still be Gaussian, 
 and you have true esd's.  With F's you use a derived esd, since they can't 
 be formally generated from the sigmas on I, and are very much 
 undetermined for small intensities and small F's. 
 
 Small molecule crystallographers routinely refine on F^2 and use all of 
 the data, even if the F^2's are negative.
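
[A minimal sketch of that kind of target, an illustration of the principle only and not any particular program's weighting scheme: negative observations enter the weighted F^2 residual like any other measurement.

import numpy as np

def wls_f2_residual(f2_obs, sig_f2, f2_calc):
    # Weighted least squares on F^2; negative observed F^2 values are
    # used as-is, never discarded or reset to zero.
    w = 1.0 / sig_f2 ** 2
    return float(np.sum(w * (f2_obs - f2_calc) ** 2))

f2_obs  = np.array([120.0, -3.5, 0.8])  # one weak, negative observation
sig_f2  = np.array([  5.0,  2.0, 1.5])
f2_calc = np.array([115.0,  0.5, 1.0])
print(wls_f2_residual(f2_obs, sig_f2, f2_calc))
]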
 
 Bernie
 
 On Jun 20, 2013, at 11:49 AM, Douglas Theobald wrote:
 
 Seems to me that the negative Is should be dealt with early on, in the 
 integration 

Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Ian Tickle
On 20 June 2013 20:46, Douglas Theobald dtheob...@brandeis.edu wrote:

 Well, I tend to think Ian is probably right, that doing things the
 proper way (vs French-Wilson) will not make much of a difference in the
 end.

 Nevertheless, I don't think refining against the (possibly negative)
 intensities is a good solution to dealing with negative intensities ---
 that just ignores the problem, and will end up overweighting large negative
 intensities.  Wouldn't it be better to correct the negative intensities
 with FW and then refine against that?


Hmmm, I seem to recall suggesting that a while back (but there were no
takers!).

I also think that using corrected Is, as opposed to corrected Fs, (however
you choose to do it) is the right way to do twinning & other statistical
tests.  For example the Padilla/Yeates L test uses the cumulative
distribution of |I1 - I2| / (I1 + I2), where I1 & I2 are intensities of
unrelated reflections (but close in reciprocal space).  The denominator of
this expression is clearly going to have problems if you feed it negative
intensities!  Also I believe (my apologies if I'm wrong!) that the UCLA
twinning server obtains the Is by squaring the Fs (presumably obtained by
F-W).  This is a formally invalid procedure (the expectation of I is not
the square of the expectation of F).  See here for an explanation of the
difference: http://xtal.sourceforge.net/man/bayest-desc.html .
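
[The statistic itself is simple to compute once sensible (positive) intensity estimates are in hand; a schematic sketch, with the pairing of locally close reflections omitted:

import numpy as np

def l_test(i1, i2):
    # Padilla-Yeates L = (I1 - I2) / (I1 + I2) for paired intensities.
    # <|L|> = 1/2 and <L^2> = 1/3 for untwinned data; 3/8 and 1/5 for a
    # perfect twin.  A negative intensity can make I1 + I2 vanish or
    # flip sign, wrecking the statistic.
    l = (i1 - i2) / (i1 + i2)
    return np.mean(np.abs(l)), np.mean(l ** 2)

rng = np.random.default_rng(0)
# Toy untwinned acentric data: Wilson (exponential) intensities.
i1 = rng.exponential(1.0, 100000)
i2 = rng.exponential(1.0, 100000)
print(l_test(i1, i2))  # close to (0.5, 0.333)
]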

Cheers

-- Ian


Re: [ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread Zhang, Zhen
Hi Kushol,

No. The void volume of the column is 8 ml and the total volume of the column 
is 24 ml. You must be thinking of a different column. 

Zhen
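
[For reference, the partition coefficient implied by those volumes is easy to work out; a quick back-of-the-envelope calculation with the numbers above:

# Partition coefficient Kav = (Ve - V0) / (Vt - V0) for the column
# Zhen describes: void V0 = 8 ml, total volume Vt = 24 ml, Ve = 18 ml.
v0, vt, ve = 8.0, 24.0, 18.0
kav = (ve - v0) / (vt - v0)
print(kav)  # 0.625 -- deep in the included volume, which is why the
            # calibration curve returns an implausibly small ~5 kDa.
]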

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Kushol 
Gupta
Sent: Thursday, June 20, 2013 4:09 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Puzzling observation about size exclusion chromatography

Isn't 18 mLs into a Superdex 200 10/300 column run out near where the 670kD
marker is, just after the void at ~15 mLs?  Zhen, did you mean ~500kD rather
than 5kD?

Kushol

Kushol Gupta, Ph.D.
Research Associate - Van Duyne Laboratory 
Perelman School of Medicine
University of Pennsylvania 
kgu...@mail.med.upenn.edu
215-573-7260 / 267-259-0082




Re: [ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread Kushol Gupta
Mea culpa - I'm thinking minutes at 0.5 ml/min, not mLs!

(clearly I'm overdue for my afternoon caffeine...)

Kushol

-Original Message-
From: Zhang, Zhen [mailto:zhen_zh...@dfci.harvard.edu] 
Sent: Thursday, June 20, 2013 4:18 PM
To: 'Kushol Gupta'; CCP4BB@JISCMAIL.AC.UK
Subject: RE: [ccp4bb] Puzzling observation about size exclusion
chromatography

Hi Kushol,

No. The void for the column is 8ml and the whole volume of the column is
24ml. You must be talking about a different column. 

Zhen



Re: [ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread Matthew Franklin

Hi Zhen -

Superdex is known to have some ion-exchange characteristics, so that it 
can weakly interact with some proteins.  This is why the manufacturer 
recommends including a certain amount of salt in the running buffer.  I 
have had the same experience with a few proteins, including one that 
came off the column well after the salt peak!  (The protein was very 
clean after this step; all other proteins had eluted earlier.)


As others have said, you can't rely on molecular weight calibrations in 
this case, but this behavior alone is no reason to think that the 
protein is misfolded or otherwise badly behaved. If you don't like the 
late elution, try increasing the salt concentration of your running 
buffer to 250 or even 500 mM. You'll probably need to exchange the 
eluted protein back into a low-salt buffer for your next steps (e.g. 
crystallization) if you do this.


- Matt







--
Matthew Franklin, Ph. D.
Senior Scientist
New York Structural Biology Center
89 Convent Avenue, New York, NY 10027
(212) 939-0660 ext. 9374


Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Randy Read
Hi,

The intensity-based likelihood refinement target was in our paper in 1996 
(http://www-structmed.cimr.cam.ac.uk/Personal/randy/pubs/li0224r.pdf).  It's 
perfectly happy with negative net intensities.  Basically, the question you're 
asking with a negative net intensity is what is the probability that you could 
observe a particular negative net intensity (with its associated standard 
deviation) given the intensity calculated from the model.  The theory takes 
account of the fact that the true intensity is both correlated to the 
calculated one *and* constrained to be positive.  As a result, reflections with 
more strongly negative net intensities shouldn't be overweighted relative to 
ones with intensities much closer to zero, unlike the intensity-based 
least-squares target.

The expression is indeed more complicated than the one for amplitudes, and is 
computed by integrating the terms of a series expansion.  We contributed code 
for this to CNS, and I'm not aware of anyone having implemented it yet for any 
other program.  In fact, I get the feeling that it's rarely used even in CNS, 
even though our limited tests suggested that it can give better results than 
the amplitude-based target.
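
[Schematically, the target marginalizes the Gaussian measurement error over the physically allowed, non-negative true intensity. A toy numerical version, not the series expansion of the actual target, and with the conditional density of the true intensity crudely approximated by an exponential of mean Icalc:

import numpy as np

def minus_log_like(iobs, sigma, icalc, npts=20001):
    # -log P(Iobs | Icalc): Gaussian measurement error integrated over
    # the true intensity J >= 0.  Happily accepts negative Iobs, and
    # strongly negative values are not overweighted the way they are
    # in an intensity least-squares target.
    j = np.linspace(0.0, 20.0 * icalc + 10.0 * sigma, npts)
    meas = np.exp(-0.5 * ((iobs - j) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    cond = np.exp(-j / icalc) / icalc   # crude stand-in for p(J | Icalc)
    return -np.log(np.trapz(meas * cond, j))

# A negative net intensity is a perfectly legal observation:
print(minus_log_like(iobs=-25.0, sigma=40.0, icalc=100.0))
]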

-
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research    Tel: +44 1223 336500
Wellcome Trust/MRC Building                 Fax: +44 1223 336827
Hills Road                                  E-mail: rj...@cam.ac.uk
Cambridge CB2 0XY, U.K.                     www-structmed.cimr.cam.ac.uk

On 20 Jun 2013, at 19:13, Kay Diederichs kay.diederi...@uni-konstanz.de wrote:

 Douglas,
 
 the intensity is negative if the integrated spot has a lower intensity than 
 the estimate of the background under the spot. So yes, we are not _measuring_ 
 negative intensities, rather we are estimating intensities, and that estimate 
 may turn out to be negative. In a later step we try to correct for this, 
 because it is non-physical, as you say. At that point, the "proper 
 statistical model" comes into play. Essentially we use this as a prior. In 
 the order of increasing information, we can have more or less informative 
 priors for weak reflections:
 1) I > 0
 2) I has a distribution looking like the right half of a Gaussian, and we 
 estimate its width from the variance of the intensities in a resolution shell
 3) I follows a Wilson distribution, and we estimate its parameters from the 
 data in a resolution shell
 4) I must be related to Fcalc^2 (i.e. once the structure is solved, we 
 re-integrate using the Fcalc as prior)
 For a given experiment, the problem is chicken-and-egg in the sense that only 
 if you know the characteristics of the data can you choose the correct prior.
 I guess that using prior 4) would be heavily frowned upon because there is a 
 danger of model bias. You could say: "A Bayesian analysis done properly should 
 not suffer from model bias." This is probably true, but the theory to ensure 
 the word "properly" is not available at the moment.
 Crystallographers usually use prior 3) which, as I tried to point out, also 
 has its weak spots, namely if the data do not behave like those of an ideal 
 crystal - and today's projects often result in data that would have been 
 discarded ten years ago, so they are far from ideal.
 Prior 2) is available as an option in XDSCONV
 Prior 1) seems to be used, or is available, in ctruncate in certain cases (I 
 don't know the details)
 
 Using intensities instead of amplitudes in refinement would avoid having to 
 choose a prior, and refinement would therefore not be compromised in case of 
 data violating the assumptions underlying the prior. 
 
 By the way, it is not (Iobs-Icalc)^2 that would be optimized in refinement 
 against intensities, but rather the corresponding maximum likelihood formula 
 (which I seem to remember is more complicated than the amplitude ML formula, 
 or is not an analytical formula at all, but maybe somebody knows better).
 
 best,
 
 Kay
 
 
 On Thu, 20 Jun 2013 13:14:28 -0400, Douglas Theobald dtheob...@brandeis.edu 
 wrote:
 
 I still don't see how you get a negative intensity from that.  It seems you 
 are saying that in many cases of a low intensity reflection, the integrated 
 spot will be lower than the background.  That is not equivalent to having a 
 negative measurement (as the measurement is actually positive, and sometimes 
 things are randomly less positive than background).  If you are using a 
 proper statistical model, after background correction you will end up with a 
 positive (or 0) value for the integrated intensity.  
 
 
 On Jun 20, 2013, at 1:08 PM, Andrew Leslie and...@mrc-lmb.cam.ac.uk wrote:
 
 
 The integration programs report a negative intensity simply because that is 
 the observation. 
 
 Because of noise in the X-ray background, in a large sample of intensity 
 estimates for reflections whose true intensity is very 

Re: [ccp4bb] Puzzling observation about size exclusion chromatography

2013-06-20 Thread Zhang, Zhen
Hi Matthew,

Thanks a lot. That is a great idea. I will try the high salt and worry about 
the crystallization later. 

Zhen



Re: [ccp4bb] ctruncate bug?

2013-06-20 Thread Randy Read
Yes, the function implemented in CNS includes the sigma term.

Best wishes,

Randy

-
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research    Tel: +44 1223 336500
Wellcome Trust/MRC Building                 Fax: +44 1223 336827
Hills Road                                  E-mail: rj...@cam.ac.uk
Cambridge CB2 0XY, U.K.                     www-structmed.cimr.cam.ac.uk

On 20 Jun 2013, at 22:18, Douglas Theobald dtheob...@brandeis.edu wrote:

 On Jun 20, 2013, at 4:36 PM, Randy Read rj...@cam.ac.uk wrote:
 
 Hi,
 
 The intensity-based likelihood refinement target was in our paper in 1996 
 (http://www-structmed.cimr.cam.ac.uk/Personal/randy/pubs/li0224r.pdf).  It's 
 perfectly happy with negative net intensities.  Basically, the question 
 you're asking with a negative net intensity is what is the probability that 
 you could observe a particular negative net intensity (with its associated 
 standard deviation) given the intensity calculated from the model.  The 
 theory takes account of the fact that the true intensity is both correlated 
 to the calculated one *and* constrained to be positive.  As a result, 
 reflections with more strongly negative net intensities shouldn't be 
 overweighted relative to ones with intensities much closer to zero, unlike 
 the intensity-based least-squares target.
 
 I see, since you integrated out the true intensity over 0 to inf.  That 
 ameliorates most of my concern with I-based ML refinement.  So, it also looks 
 like each measured intensity is weighted by a function of its measured 
 sigma_j.  Is that correct?  And was that implemented in CNS?
 
 The expression is indeed more complicated than the one for amplitudes, and 
 is computed by integrating the terms of a series expansion.  We contributed 
 code for this to CNS, and I'm not aware of anyone having implemented it yet 
 for any other program.  In fact, I get the feeling that it's rarely used 
 even in CNS, even though our limited tests suggested that it can give better 
 results than the amplitude-based target.
 
 -
 Randy J. Read
 Department of Haematology, University of Cambridge
 Cambridge Institute for Medical Research    Tel: +44 1223 336500
 Wellcome Trust/MRC Building                 Fax: +44 1223 336827
 Hills Road                                  E-mail: rj...@cam.ac.uk
 Cambridge CB2 0XY, U.K.                     www-structmed.cimr.cam.ac.uk
 

[ccp4bb] Beamline Scientist opening

2013-06-20 Thread Shekhar Mande
I am posting this job opening for a beamline Scientist position on behalf
of Prof. D D Sarma.  Interested candidates may write to him directly.  The
advertisement reads as below:

*Beam-line Scientist and Engineer Positions*

*at Elettra, Italian Synchrotron source, Trieste, ITALY*



Two beam-lines dedicated to high pressure x-ray diffraction and
Macromolecular Crystallographic studies are being commissioned at the
Elettra synchrotron source in Trieste, Italy, as a part of an Indo-Italian
collaboration funded by the Department of Science and Technology on the
Indian side.

These beam-lines presently have three openings at the level of beam-line
scientists and engineers and we seek applications from suitable Indian
candidates for these positions.

The applicants should have backgrounds in x-ray diffraction and structural
studies. Expertise in macromolecular crystallography and/or high pressure
crystallographic studies including a working knowledge of diamond anvil
cells is desirable.  Prior experience in instrumentation and data analysis
will be considered an advantage. Familiarity with computer programming,
and interest and knowledge in instrumentation and software packages
such as MATLAB, LabVIEW, EPICS, etc., would be added assets.

Selected candidates will be expected to assist in the commissioning of the
beam-lines and the installation of end-stations, to carry out test experiments,
and later to provide support to users at the beam-lines. The compensation
package is highly attractive for suitable candidates, with the possibility of
a yearly visit to India, a higher-than-average salary structure, and the
possibility of synchrotron beamtime to run one's own research
program.

Candidates below the age of 35 years would be preferred. For the post of
beam-line scientist, the candidate should have a doctoral degree in
physics, chemistry, earth sciences, any branches of life sciences or any
other relevant or related field. For the post of beam-line engineer,
candidates with either a BTech or an MSc in physics or chemistry may apply.
For both posts, work experience at synchrotron beam-lines and/or
high-pressure laboratories and an outstanding R & D record are preferred.

Interested candidates should apply within three weeks of the publication of
this advertisement, sending a detailed CV, along with a list of
publications, and the names and addresses of three references to the
following address:

Prof. D. D. Sarma; Solid State and Structural Chemistry Unit, Indian
Institute of Science, Bangalore 560012. Email: sa...@sscu.iisc.ernet.in 
sarma...@gmail.com






Shekhar

-- 
Shekhar C. Mande (शेखर चिं मांडे)
Director, National Centre for Cell Science
Ganeshkhind, Pune 411 007
Email: shek...@nccs.res.in, direc...@nccs.res.in
Phone: +91-20-25708121
Fax:+91-20-25692259