Re: [ccp4bb] Question about TEV cleavage

2011-04-01 Thread Peter Hsu
We've been using the S219V mutant for cleaving out tags. We usually do our 
cleavage reactions in an overnight dialysis after a Ni column, into 50 mM Tris, 
100-200 mM NaCl, 5 mM BME at 4C. I've never had problems with losing 
protein in the reaction, and usually recover 90% of the protein after passing 
it through the column a second time to remove the tag and protease. 

Feel free to contact me if you have any questions.

Best of luck,
Peter


Re: [ccp4bb] Question about GST cleavage

2011-04-01 Thread Alun R. Coker
Agitation can cause denaturation of proteins, resulting in loss of 
activity, precipitation and even cross-beta amyloid fibre growth.  
Partial unfolding will probably make most proteins more protease-sensitive.


Alun.

On 31/03/2011 20:41, gauri misra wrote:

Just an offshoot of the same question:
I would like to ask whether the same applies to GST-tag digestion 
using thrombin.

Does no agitation give better results in the above case too?
Any personal experiences?

On Thu, Mar 31, 2011 at 11:29 AM, Klaus Piontek 
klaus.pion...@ocbc.uni-freiburg.de wrote:


And not at full moon!

Klaus


Am 31.03.2011 um 16:23 schrieb Xiaopeng Hu:


Our experience is: do not shake the tube during TEV cleavage. I
don't know why, but it does help.

xiaopeng


Dr. Klaus Piontek
Albert-Ludwigs-University Freiburg
Institute of Organic Chemistry and Biochemistry, Room 401 H
Albertstrasse 21
D-79104 Freiburg Germany
Phone: ++49-761-203-6036
Fax: ++49-761-203-8714
Email: klaus.pion...@ocbc.uni-freiburg.de
Web: http://www.chemie.uni-freiburg.de/orgbio/w3platt/




--
Alun R. Coker
Centre for Amyloidosis and Acute Phase Proteins
Division of Medicine (Royal Free Campus)
University College London
Rowland Hill Street
London
NW32PF

Tel: +44(0)20 7433 2764
Fax: +44(0)20 7433 2776



Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Robbie Joosten

Hi Ethan,
 
Awesome progress! Really, I was looking for something like this. 2011 
will be a good year for crystallography. I should implement this in PDB_REDO.
 
Cheers,
Robbie
 
 Date: Thu, 31 Mar 2011 23:06:47 -0700
 From: merr...@u.washington.edu
 Subject: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0
 To: CCP4BB@JISCMAIL.AC.UK
 
 Hi to all on ccp4bb:
 
 What better day to announce the availability of a breakthrough technique
 in macromolecular crystallography?
 
 Given recent discussion and in particular James Holton's suggestion that
 the problem of disordered sidechains is a problem akin to the difficulty
 of describing dark matter and dark energy...
 
 I am happy to announce a new crystallographic tool that can improve your
 model by accounting for an often-neglected physical property. A detailed
 explanation, references, and a preliminary implementation of the program
 can be downloaded from
 
 http://skuld.bmsc.washington.edu/DarkMatter
 
 -- 
 Ethan A Merritt
 Karmic Diffraction Project
 Fine crystallography since April 1, 2011
 What goes around, comes around - usually as a symmetry equivalent
  

Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Flip Hoedemaeker

Dear Ethan,

I would really really like to enhance all my PDB files, but I am 
concerned I will create a black hole in my hard drive. I hope you can 
convince me of the safety of your tool, thx


Flip

On 4/1/2011 8:06, Ethan Merritt wrote:

Hi to all on ccp4bb:

What better day to announce the availability of a breakthrough technique
in macromolecular crystallography?

Given recent discussion and in particular James Holton's suggestion that
the problem of disordered sidechains is a problem akin to the difficulty
of describing dark matter and dark energy...

I am happy to announce a new crystallographic tool that can improve your
model by accounting for an often-neglected physical property. A detailed
explanation, references, and a preliminary implementation of the program
can be downloaded from

http://skuld.bmsc.washington.edu/DarkMatter



Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Tim Gruene
Maybe the next version of coot as well as pymol/Raster3D could also display
virtual particles. This would really flashily push the quality of our models,
especially on the title pages of the electronic versions of journals. There
could even be a special July-14th mode
(http://de.wikipedia.org/w/index.php?title=Datei:DESYNebelkammer.jpg&filetimestamp=20090223134909).

Cheers, Tim


On Fri, Apr 01, 2011 at 11:15:36AM +0200, Flip Hoedemaeker wrote:
 Dear Ethan,
 
 I would really really like to enhance all my PDB files, but I am
 concerned I will create a black hole in my hard drive. I hope you
 can convince me of the safety of your tool, thx
 
 Flip
 
 On 4/1/2011 8:06, Ethan Merritt wrote:
 Hi to all on ccp4bb:
 
 What better day to announce the availability of a breakthrough technique
 in macromolecular crystallography?
 
 Given recent discussion and in particular James Holton's suggestion that
 the problem of disordered sidechains is a problem akin to the difficulty
 of describing dark matter and dark energy...
 
 I am happy to announce a new crystallographic tool that can improve your
 model by accounting for an often-neglected physical property. A detailed
 explanation, references, and a preliminary implementation of the program
 can be downloaded from
 
  http://skuld.bmsc.washington.edu/DarkMatter
 

-- 
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

phone: +49 (0)551 39 22149

GPG Key ID = A46BEE1A





Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Frank von Delft
I'm pretty sure Coot has been displaying it all along.  In the early 
days it displayed it much better, I must say, which is why it tended to 
crash.







Re: [ccp4bb] xds question: inverse beam, lots of wedges

2011-04-01 Thread Harry Powell
Hi

I'd just process it in iMosflm, and run the Quickscale task after integration. 
With almost no effort you should get a rapid visual indicator  (in the graphs 
produced by Scala) of the discontinuities between the wedges.

If the discontinuities are too big, then you might encounter some items of 
interest during the integration stage...

On 31 Mar 2011, at 23:08, Patrick Loll wrote:

 We've just collected a number of inverse beam data sets. It turns out the 
 crystals showed little radiation damage, so we have a lot of data: 2 x 360 
 deg for each crystal, broken up into 30 deg wedges. The collection order went 
 like this: 0-30 deg, 180-210, 30-60, 210-240, etc.
 
 Now, assuming no slippage, I could simply integrate the first set of data 
 (non-inverse?) in one run: 0-360 deg. However, since the 12 individual wedges 
 making up this 360 deg sweep were not collected  immediately one after the 
 other, I don't expect the scale factors for individual images to vary 
 smoothly (there should be discontinuities at the boundaries between wedges). 
 If I do integrate the data in one fell swoop, am I in danger of introducing 
 errors? For example, I seem to recall that denzo had built-in restraints to 
 ensure that scale factors for adjacent images didn't vary by too much. Is 
 there a similar restraint in XDS that I might run afoul of?
 
 The alternative is to integrate each wedge separately, but with 24 
 wedges per xtal, this is starting to look a little tedious.
 
 Cheers,
 Pat

Harry
--
Dr Harry Powell, MRC Laboratory of Molecular Biology, MRC Centre, Hills Road, 
Cambridge, CB2 0QH
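
(A footnote on the tedium of 24 wedges: the per-wedge route scripts easily. A
minimal sketch, assuming contiguous 30-frame numbering and a prepared
XDS.INP.template; all file names here are illustrative, not taken from the
posts above, and the DATA_RANGE/SPOT_RANGE arithmetic must be adapted to how
the inverse-beam frames were actually numbered:)

#!/bin/sh
# Integrate each 30-frame wedge in its own directory, then merge with XSCALE.
# XDS.INP.template should contain everything except DATA_RANGE/SPOT_RANGE and
# should give NAME_TEMPLATE_OF_DATA_FRAMES as an absolute path.
echo "OUTPUT_FILE= merged.ahkl" > XSCALE.INP
for i in $(seq 1 24); do
    first=$(( (i - 1) * 30 + 1 ))
    last=$(( i * 30 ))
    mkdir -p wedge_$i
    sed -e "s/^DATA_RANGE=.*/DATA_RANGE= $first $last/" \
        -e "s/^SPOT_RANGE=.*/SPOT_RANGE= $first $last/" \
        XDS.INP.template > wedge_$i/XDS.INP
    ( cd wedge_$i && xds_par > xds.log )              # integrate this wedge
    echo "INPUT_FILE= wedge_$i/XDS_ASCII.HKL" >> XSCALE.INP
done
xscale                                                # scale all wedges together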


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Pedro M. Matias

Have you noticed the date? It's April 1!



Industry and Medicine Applied Crystallography
Macromolecular Crystallography Unit
___
Phones : (351-21) 446-9100 Ext. 1669
  (351-21) 446-9669 (direct)
Fax   : (351-21) 441-1277 or 443-3644

email : mat...@itqb.unl.pt

http://www.itqb.unl.pt/research/biological-chemistry/industry-and-medicine-applied-crystallography
http://www.itqb.unl.pt/labs/macromolecular-crystallography-unit

Mailing address :
Instituto de Tecnologia Quimica e Biologica
Apartado 127
2781-901 OEIRAS
Portugal


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Pedro M. Matias

Well played, Ethan!



Industry and Medicine Applied Crystallography
Macromolecular Crystallography Unit
___
Phones : (351-21) 446-9100 Ext. 1669
  (351-21) 446-9669 (direct)
Fax   : (351-21) 441-1277 or 443-3644

email : mat...@itqb.unl.pt

http://www.itqb.unl.pt/research/biological-chemistry/industry-and-medicine-applied-crystallography
http://www.itqb.unl.pt/labs/macromolecular-crystallography-unit

Mailing address :
Instituto de Tecnologia Quimica e Biologica
Apartado 127
2781-901 OEIRAS
Portugal


Re: [ccp4bb] problem of conventions

2011-04-01 Thread Ian Tickle
On Fri, Apr 1, 2011 at 5:30 AM, Santarsiero, Bernard D. b...@uic.edu wrote:
 Ian,

 I think it's amazing that we can program computers to resolve a < b < c
 but it would be a major undertaking to store the matrix transformations
 for P22121 to P21212 and reindex a cell to a standard setting.

I think you misunderstood the point I was making.  Multiply your one dataset
by the several hundred we sometimes collect for the various
clones and crystallisation conditions needed to optimise the crystal
form for soaking - that's what I mean by 'major undertaking'.  As I
explained all the datasets collected for a given crystal form have to
be indexed the same way (even if only for archival purposes) before we
can store them in the database (otherwise we would end up in an awful
muddle!).  I don't have a batch script to filter all the relevant
datasets from the database, re-index each one (that's the easy part!),
and re-register them all as a new crystal form.  Why should I? -
no-one has given me a cogent reason to re-index them in the first
place which would justify the resulting downtime of the project (OK
call me lazy!).  I hope you see that doing each one manually is a
non-starter: the project would have to be locked during the period of
the operation so no new datasets could be down- or uploaded (which
would further cause the upstream pipeline to back up).  Operations that
appear trivial when you only have to do them once suddenly become big
problems when they have to be performed on an industrial scale!
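
(Scripting the re-indexing step itself is, as noted, the easy part. A minimal
sketch using the CCP4 reindex program - file names are illustrative, and the
operator k,l,h is just the cyclic permutation that puts the old a axis on c;
check that whatever operator you use has determinant +1 so the hand is
preserved:)

#!/bin/sh
# Apply the same re-indexing operator to a list of MTZ files (names illustrative).
for mtz in target1_xtal1.mtz target1_xtal2.mtz; do
    reindex hklin $mtz hklout reindexed_$mtz <<EOF
reindex k,l,h
end
EOF
done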

 I was also
 told that I was lazy to not reindex to the standard setting when I was a
 grad student. Now it takes less than a minute to enter a transformation
 and re-index.

They told you wrong!  The conventional cell is the convention (by
definition!), and the standard setting doesn't always correspond to
the conventional cell (though in most cases it does).  There's a
reason for the distinction between meanings of 'standard' and
'conventional' - the meanings are very precise and
non-interchangeable.

 The orthorhombic rule of a < b < c makes sense in P222 or P212121, but when
 there is a standard setting of the 2-fold along the c-axis, then why not
 adopt that?

As I explained, sometimes we don't know the true space group (in terms
of assigning the screw axes) until further along the pipeline (e.g.
after MR or refinement), or at least it's always safer to be
non-committal beyond P222 - why commit oneself to an irrevocable
decision before it's absolutely necessary?  You don't need to know the
exact space group just to screen crystals for diffracting power!
Adopting the standard setting would, in the particular case of SGs 5,
17 & 18, require later re-indexing, and I hope you see why for us that's a
non-starter.

I'm not a believer in conventions for their own sake - a convention is
merely a default set of rules which you apply when you have no sound
basis on which to make a choice - the convention makes what is
effectively a totally arbitrary choice for you.  Conventions do have
the advantage that if other people follow them then they will make the
same decisions as you.  The moment I have sufficient justification
(e.g. as I said isomorphism overrides convention) to break with
convention then I would have no hesitation in doing so.  The fact that
the standard setting has a 2-fold along c is merely an arbitrary
choice and doesn't seem to me to be a good enough reason to break with
the unit-cell convention.

-- Ian


 On Thu, March 31, 2011 5:48 pm, Ian Tickle wrote:
 On Thu, Mar 31, 2011 at 10:43 PM, James Holton jmhol...@lbl.gov wrote:
 I have the 2002 edition, and indeed it only contains space group
 numbers up to 230.  The page numbers quoted by Ian contain space group
 numbers 17 and 18.

 You need to distinguish the 'IT space group number' which indeed goes
 up to 230 (i.e. the number of unique settings), from the 'CCP4 space
 group number' which, peculiar to CCP4 (which is why I called it
 'CCP4-ese'), adds a multiple of 1000 to get a unique number for the
 alternate settings as used in the API.  The pages I mentioned show the
 diagrams for IT SG #18 P22121 (CCP4 #3018), P21221 (CCP4 #2018) and
 P21212 (CCP4 #18), so they certainly are all there!

 Although I am all for program authors building in support for the
 screwy orthorhombics (as I call them), I should admit that my
 fuddy-duddy strategy for dealing with them remains simply to use space
 groups 17 and 18, and permute the cell edges around with REINDEX to
 put the unique (screw or non-screw) axis on the c position.

 Re-indexing is not an option for us (indeed if there were no
 alternative, it would be a major undertaking), because the integrity
 of our LIMS database requires that all protein-ligand structures from
 the same target & crystal form are indexed with the same (or nearly
 the same) cell and space group (and it makes life so much easier!).
 With space-groups such as P22121 it can happen (indeed it has
 happened) that it was not possible to define the 

Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Mark J van Raaij
The program appears a bit black-box to me; could you provide more details 
(today, of course)?
Mark

Sent from my HTC




[ccp4bb] problem of conventions

2011-04-01 Thread Boaz Shaanan
Excuse my naive (perhaps ignorant) question: when was the
a < b < c rule/convention/standard/whatever introduced? None of the 
textbooks I came across mentions it as far as I could see (not that this is 
a reason for or against this rule, of course).

    Thanks,

   Boaz


Boaz Shaanan, Ph.D.
Dept. of Life Sciences
Ben-Gurion University of the Negev
Beer-Sheva 84105
Israel
Phone: 972-8-647-2220 ; Fax: 646-1710
Skype: boaz.shaanan


Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Robbie Joosten

Hi Frank,

  I described in the previous e-mail the probabilistic interpretation of
  B-factors. In the case of very high uncertainty = poorly ordered side
  chains, I prefer to deposit the conformer representing maximum a
  posteriori, even if it does not represent all possible conformations.
  Maximum a posteriori will have significant contribution from the most
  probable conformation of side chain (prior knowledge) and should not
  conflict with likelihood (electron density map).
  Thus, in practice I model the most probable conformation as long as it
  is in even very weak electron density, does not overlap significantly
  with negative difference electron density and does not clash with other
  residues.
 If it's probability you're after, if there's no density to guide you 
 (very common!) you'd have to place all likely rotamers that don't 
 clash with anything, and set their occupancies to their probability (as 
 encoded in the rotamer library).
Which library? The one for all side chains of a specific type, or the one for a 
specific type with a given backbone conformation? These are quite different and 
change with the content of the PDB.
'Hacking' the occupancies is risky business in general: errors are made quite 
easily. I frequently encounter side chains with partial occupancies but no 
alternatives; how can I relate this to the experimental data? Even worse, I 
also see cases where the occupancies of alternates sum up to values > 1.00. 
What does that mean? Is that a local increase of DarkMatter accidentally 
encoded in the occupancy?
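For reference, a well-formed pair of alternates carries an altLoc flag in 
column 17 of the ATOM record and occupancies (columns 55-60) that sum to 1.00. 
A minimal example (residue, coordinates and B-values are invented purely for 
illustration):

REMARK   Invented example: two alternate conformations of a Lys CG atom
ATOM    123  CG ALYS A  42      11.104  22.315   8.677  0.65 38.20           C
ATOM    124  CG BLYS A  42      10.872  23.991   9.140  0.35 41.75           C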

 This is now veering into data-free protein modeling territory... wasn't 
 the idea to present to the downstream user an atomic representation of 
 what the electron density shows us?
Yes, but what we see can be deceiving.

 Worse, what we're also doing is encoding multiple different things in 
 one place - what database people call poorly normalised, i.e. to 
 understand a data field requires further parsing and if statements. In 
 this case: to know whether there was no density, as end-user I'd have 
 to have to second-guess what exactly those 
 high-B-factor-variable-occupancy atoms mean.
 
 Until the PDB is expanded, the conventions need to be clear, and I 
 thought they were:
 High B-factor == atom is there but density is weak
 Atom missing == no density to support it.
Unfortunately, it is not trivial to decide when there is 'no density'. We must 
have a good metric to do this, but I don't think it exists yet. Removing atoms 
is thus very subjective. This explains why I frequently find positive 
difference density peaks near missing side chains. Leaving side chains in 
sometimes gives negative difference density, but refining them with proper 
B-factor restraints reduces the problem a lot. There is still the problem of 
radiation damage, but that is relatively small. At least refining the B-factor 
is more reproducible and less subjective than making the binary choice to keep 
or remove an atom.
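
(As an aside, recall what that single number nominally encodes: for purely
harmonic, isotropic motion,

    B = 8 * pi^2 * <u^2>  ~  79 * <u^2>

so a B-factor of 79 A^2 already corresponds to an r.m.s. displacement of a
full Angstrom - well beyond 'small vibrations'.)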
 
Cheers,
Robbie

 
 Oh well...
 phx.
  

Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Quyen Hoang
Dear Gerard,

I agree with you based on debates at some conferences.

But, based on what I have seen here so far, it seems to me that everybody knows 
exactly what to do with disordered side chains.
People who want to build structures that best fit the data tend to prefer 
omitting disordered side chains. On the other hand, people who want to build 
structures that best represent reality tend to prefer building them. I don't see 
any disagreement here, nor do I see any problems with either approach. Different 
people collect the same data to study different things, and I feel that they are 
entitled to view and interpret the data the way that they find most meaningful. 

Equations are attempts to describe reality; I don't see why we should constrain 
reality to fit equations. 

Cheers,
Quyen


On Mar 31, 2011, at 12:21 PM, Gerard Bricogne wrote:

 Dear Quyen,
 
 On Thu, Mar 31, 2011 at 11:27:58AM -0400, Quyen Hoang wrote:
 Thank you for your post, Herman.
 Since there is no holy bible to provide guidance, perhaps we should hold 
 off the idea of electing a powerful dictator to enforce this?
 - at least until we all can come to a consensus on how the dictator 
 should dictate...
 
 
 ... but that might well be even harder than to decide what to do with
 disordered side chains ... .
 
 
 With best wishes,
 
  Gerard.
 
 --
 
 ===
 * *
 * Gerard Bricogne g...@globalphasing.com  *
 * *
 * Global Phasing Ltd. *
 * Sheraton House, Castle Park Tel: +44-(0)1223-353033 *
 * Cambridge CB3 0AX, UK   Fax: +44-(0)1223-366889 *
 * *
 ===
 
 
 On Mar 31, 2011, at 10:22 AM, herman.schreu...@sanofi-aventis.com wrote:
 
 Dear Quyen,
 I am afraid you won't get any better answers than you got so far. There is 
 no holy bible telling you what to do with disordered side chains. I fully 
 agree with James that you should try to get the best possible model, which 
 best explains your data and that will be your decision. Here are my 2 
 cents:
 
 -If you see alternative positions, you have to build them.
 -If you do not see alternative positions, I would not replace one fantasy 
 (some call it most likely) orientation with 2 or 3 fantasy orientations.
 -I personally belong to the 'let the B-factors take care of it' camp, but 
 that is my personal opinion. Leaving side chains out could lead to 
 misinterpretations by slightly less savvy users of our data, especially 
 when charge distributions are being studied. Besides, we know (almost) for 
 sure that the side chain is there, it is only disordered and, as we just 
 learned, even slightly less savvy users know what flaming red side chains 
 mean. Even if they may not be mathematically entirely correct, huge 
 B-factors clearly indicate that there is disorder involved.
 -I would not let occupancies take up the slack since even very savvy users 
 have never heard of them and again, the side chain is fully occupied, only 
 disordered. Of course if you build alternate positions, you have to divide 
 the occupancies amongst them.
 
 Best,
 Herman
 
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
 Quyen Hoang
 Sent: Thursday, March 31, 2011 3:55 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] what to do with disordered side chains
 
 We are getting off topic a little bit.
 
 Original topic: is it better to not build disordered sidechains or build 
 them and let B-factors take care of it?
 Ed's poll got almost a 50:50 split.
 Question still unanswered.
 
 Second topic introduced by Pavel: Your B-factors are valid within a 
 harmonic (small) approximation of atomic vibrations. Larger scale motions 
 you are talking about go beyond the harmonic approximation, and using the 
 B-factor to model them is abusing the corresponding mathematical model.
 And that these large scale motions (disorders) are better represented by 
 alternative conformations and associated with them occupancies.
 
 My question is, how many people here do this?
 If you're currently doing what Pavel suggested here, how do you decide 
 where to keep the upper limit of B-factors and what the occupancies are 
 for each atom (data with resolution of 2.0A or worse)? I mean, do you cap 
 the B-factor at a reasonable number to represent natural atomic vibrations 
 (which is very small as Pavel pointed out) and then let the occupancies 
 pick up the slack? More importantly, what is your reason for doing this?
 
 Cheers and thanks for your contribution,
 Quyen
 
 
 On Mar 30, 2011, at 5:20 PM, Pavel Afonine wrote:
 
 Mark,
 alternative conformations and associated with them occupancies 

Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Frank von Delft

Hi Robbie

 If it's probability you're after, if there's no density to guide you
 (very common!) you'd have to place all likely rotamers that don't
 clash with anything, and set their occupancies to their probability (as
 encoded in the rotamer library).
Which library? The one for all side chains of a specific type, or the 
one for a specific type with a given backbone conformation? These are 
quite different and change with the content of the PDB.
'Hacking' the occupancies is risky business in general: errors are 
made quite easily. I frequently encounter side chains with partial 
occupancies but no alternatives; how can I relate this to the 
experimental data? Even worse, I also see cases where the occupancies 
of alternates sum up to values > 1.00. What does that mean? Is that a 
local increase of DarkMatter accidentally encoded in the occupancy?
Actually, I wasn't advocating it - I was taking ZO's suggestion to its 
logical conclusion to point out the problem, namely deciding what is 
most likely.  This you underline with your (very valid) question.




 Until the PDB is expanded, the conventions need to be clear, and I
 thought they were:
 High B-factor == atom is there but density is weak
 Atom missing == no density to support it.
Unfortunately, it is not trivial to decide when there is 'no density'. 
We must have a good metric to do this, but I don't think it exists 
yet. Removing atoms is thus very subjective. This explains why I 
frequently find positive difference density peaks near missing side 
chains. Leaving side chains in sometimes gives negative difference 
density, but refining them with proper B-factor restraints reduces the 
problem a lot. There is still the problem of radiation damage, but 
that is relatively small. At least refining the B-factor is more 
reproducible and less subjective than making the binary choice to keep 
or remove an atom.

(Radiation damage is NOT a relatively small problem.)

The fundamental problem remains:  we're cramming too many meanings into 
one number.  This the PDB could indeed solve, by giving us another 
column.  (He said airily, blithely launching a totally new flame war.)


phx.


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Jürgen Bosch
I assume TLS is supported, or do we have to wait for version 1.1? When will you 
have a 10.7 (Lion) standalone version compiled?

Jürgen 

..
Jürgen Bosch
Johns Hopkins Bloomberg School of Public Health
Department of Biochemistry & Molecular Biology
Johns Hopkins Malaria Research Institute
615 North Wolfe Street, W8708
Baltimore, MD 21205
Phone: +1-410-614-4742
Lab:  +1-410-614-4894
Fax:  +1-410-955-3655
http://web.mac.com/bosch_lab/



Re: [ccp4bb] xds question: inverse beam, lots of wedges

2011-04-01 Thread David Schuller

On 03/31/11 18:08, Patrick Loll wrote:

Now, assuming no slippage, I could simply integrate the first set of data 
(non-inverse?) in one run: 0-360 deg. However, since the 12 individual wedges 
making up this 360 deg sweep were not collected  immediately one after the 
other, I don't expect the scale factors for individual images to vary smoothly 
(there should be discontinuities at the boundaries between wedges).

So? Isn't that the purpose of scale factors?

--
===
All Things Serve the Beam
===
   David J. Schuller
   modern man in a post-modern world
   MacCHESS, Cornell University
   schul...@cornell.edu


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Dirk Kostrewa

Hi Ethan,

many thanks for that - your Dark Matter really (en)lightened my day! I 
wonder how many PDB records in the future will contain the 
corresponding REMARK lines that your incredible Perl script produces :-)


Best regards,

Dirk.




--

***
Dirk Kostrewa
Gene Center Munich, A5.07
Department of Biochemistry
Ludwig-Maximilians-Universität München
Feodor-Lynen-Str. 25
D-81377 Munich
Germany
Phone:  +49-89-2180-76845
Fax:+49-89-2180-76999
E-mail: kostr...@genzentrum.lmu.de
WWW:www.genzentrum.lmu.de
***


Re: [ccp4bb] problem of conventions

2011-04-01 Thread Gerard Bricogne
Dear Boaz,

 I think you are the one who is finally asking the essential question. 
 
 The classification we all know about, which goes back to the 19th
century, is not into 230 space groups, but 230 space-group *types*, i.e.
classes where every form of equivalencing (esp. by choice of setting) has
been applied to the enumeration of the classes and the choice of a unique
representative for each of them. This process of maximum reduction leaves
very little room for introducing conventions like a certain ordering
of the lengths of cell parameters. This seems to me to be a major mess-up in
the field - a sort of second-hand mathematics by (IUCr) committee which
has remained so ill-understood as to generate all these confusions. The work
on the derivation of the classes of 4-dimensional space groups explained the
steps of this classification beautifully (arithmetic classes -> extension by
non-primitive translations -> equivalencing under the action of the
normaliser), the last step being the choice of a privileged setting *in
terms of the group itself* in choosing the representative of each class.
The extra convention a < b < c leads to choosing that representative in a way
that depends on the metric properties of the sample instead of once and for
all (how about that for a brilliant step backward!). Software providers then
have to de-standardise the set of 230 space group *types* (where each
representative is uniquely defined once you give the space group (*type*)
number) to accommodate all alternative choices of settings that might be
randomly thrown at them by the metric properties of e.g. everyone's
orthorhombic crystals. Mathematically, what one then needs to return to is
the step before taking out the action of the normaliser, but this picture
gets drowned in clerical disputes about low-level software issues.

 My own take on this (when I was writing symmetry-reduction routines for
my NCS-averaging programs, along with space-group specific FFT routines in
the dark ages) was: once you have a complete mathematical classification
that is engraved in stone (i.e. in the old International Tables and in 
crystallographic software as we knew it), then stick to it and re-index back
and forth to/from the unique representative listed under the IT number, as
needed - don't try and extend group-theoretic Tables to re-introduce
incidental metrical properties that had been so neatly factored out from the
final symmetry picture. Otherwise you get a dog's dinner.


 So much for my 0.02 Euro.
 
 
 With best wishes,
 
  Gerard.


-- 

 ===
 * *
 * Gerard Bricogne g...@globalphasing.com  *
 * *
 * Global Phasing Ltd. *
 * Sheraton House, Castle Park Tel: +44-(0)1223-353033 *
 * Cambridge CB3 0AX, UK   Fax: +44-(0)1223-366889 *
 * *
 ===


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Edward A. Berry

Do you have a list of dark-matter-aware PDB refinement programs?
Adding dark matter and refining in TNT or Xplor gives me exactly
the same R as without. Furthermore the final refined files have
lost the dark matter as far as I can see. This leads me to believe
these programs are completely ignoring the dark matter.

Ed




Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Ed Pozharski
Oh, that is where those pesky inhibitors I couldn't find were
hiding...  

-- 
I'd jump in myself, if I weren't so good at whistling.
   Julian, King of Lemurs


[ccp4bb] SFCHECK produces incomplete postscript

2011-04-01 Thread Andrew T. Torelli
Dear all,

I have been trying to compare a model that I'm refining against the 
native SFs using SFCHECK.  SFCHECK finishes normally (no errors in the log file, 
seemingly complete list of output .ps files), but produces a postscript file 
with only the first page of output (and it is mostly blank).  It has the 
typical light-grey-panels-on-a-dark-grey-background format that I'm used to for 
SFCHECK postscript files, but there are no figures or data.  Also, my mouse 
icon indicates the viewer is hung trying to load/read the file (i.e. it's a moving 
busy icon under Linux).

I've tried other postscript viewers without luck.  I can successfully 
run SFCHECK on a completely different model/MTZ pair without problems, though.  
So does anyone know of circumstances that would lead to a hung postscript 
file from SFCHECK?

Thanks for your help,
-Andy




Re: [ccp4bb] problem of conventions

2011-04-01 Thread Ian Tickle
Dear Gerard,

The theory's fine as long as the space group can be unambiguously
determined from the diffraction pattern.  However, practice frequently
supplies the ugly fact that destroys the beautiful theory,
which means that a decision on the choice of unit cell may have to be
made on the basis of incomplete or imperfect information (i.e.
mis-identification of the systematic absences).  The 'conservative'
choice (particularly if it's not necessary to make a choice at that
time!) is to choose the space group without screw axes (i.e. P222 for
orthorhombic).  Then if it turns out later that you were wrong it's
easy to throw away the systematic absences and change the space group
symbol.  If you make any other choice and it turns out you were wrong
you might find it hard sometime later to recover the reflections you
threw away!  This of course implies that the unit-cell choice
automatically conforms to the IT convention; this convention is of
course completely arbitrary but you have to make a choice and that one
is as good as any.

So at that point let's say this is the 1970s and you know it might be
several years before your graduate student is able to collect the
high-res data and do the model-building and refinement, so you publish
the unit cell and tentative space group, and everyone starts making
use of your data.  Some years later the structure solution and
refinement is completed and the space group can now be assigned
unambiguously.  The question is: do you then revise your previous
choice of unit cell, risking the possibility of confusing everyone
including yourself, just in order that the space-group setting
complies with a completely arbitrary 'standard' (and the unit cell
non-conventional), and requiring a re-index of your data (and a
permutation of the co-ordinate datasets)?  Or do you stick with the IT
unit-cell convention and leave it as it is?  For me the choice is easy
('if it ain't broke then don't fix it!').

Cheers

-- Ian


[ccp4bb] early (incomplete model) refinement if you had the location of a bound (highly occupied) anomalous scatterer

2011-04-01 Thread Francis E Reyes
Hi all,

Could you use its position in real space as a target for (to make it easy, 
rigid-body) refinement? By real space I mean the electron density around the 
scatterer in an anomalous LLG map.

Some other restraints: Say it's a metal cofactor and you know that it needs to 
be in a specific position with respect to your protein.


F
-
Francis E. Reyes M.Sc.
215 UCB
University of Colorado at Boulder


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Quyen Hoang
Will this affect my reprocessing of the data with D*TREK on my journey  
to XPLORE the planets MERCURY and rPLUTO in my ENDEAVOUR to find and  
BUSTER some CRYSTALS with my on-board TNT into XPOWDER?
I am still trying to GRASP the idea of AUTODOCKing on precise HKL  
locations based on the SHARP but CONVX images produced by CRYSTAL  
STUDIO.


Quyen




Re: [ccp4bb] problem of conventions

2011-04-01 Thread Santarsiero, Bernard D.
Dear Ian,

Well, it *IS* broke. If you are running some type of process, as you
implied in referring to LIMS, then there is a step in which you move from
the crystal system and point group to the actual space group. So, at that
point you identify P22121. The next clear step, automatically by software,
is to convert to P21212, and move on. That doesn't take an enormous amount
of code writing, and you have a clear trail on how you got there.
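
(The permutations themselves are trivial to write down; a sketch, giving the
new cell in terms of the old together with the corresponding reindexing
operator - both are cyclic permutations, so the hand is preserved:)

  P22121 -> P21212 :  (a',b',c') = (b,c,a)   i.e.  (h,k,l) -> (k,l,h)
  P21221 -> P21212 :  (a',b',c') = (c,a,b)   i.e.  (h,k,l) -> (l,h,k)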

To be even more intrusive, what if you had cell parameters of 51.100,
51.101, and 51.102, and it's orthorhombic, P21212? For other co-crystals,
soaks, mutants, etc., you might have both experimental errors and real
differences in the unit cell, so you're telling me that you would process
according to the a < b < c rule in P222 to average and scale, and then it
might turn out to be P22121, P21221, or P21212 later on? When you wish to
compare coordinates, you then have to re-assign one coordinate set to match
the other by using superposition, rather than taking the earlier step of
just using the conventional space group of P21212?

Again, while I see the use of the a < b < c rule when there isn't an
overriding reason to assign it otherwise, as in P222 or P212121, there
*is* a reason to stick to the convention of one standard setting. That's
the rationale for sometimes using P21/n vs. P21/c, or I2 vs. C2: to avoid a
large beta angle, one adopts a non-standard setting.

Finally, if you think it's fine to use P22121, then can I assume that you
also allow the use of space group A2 and B2?

Bernie



[ccp4bb] Disordered sidechains - a statement by the revolutionary non-dictator

2011-04-01 Thread Gerard DVD Kleywegt

People of the disordered sidechains, ave!

-

perhaps the IUCr and/or PDB (Gerard K?) should issue some guidelines along 
these lines? And oblige us all to follow them?


(Mark J van Raaij)

-

this discussion has flared up many times in the past, and maybe it's time for 
a powerful dictator at the PDB to create the law...


(Filip Van Petegem)

-

Also, who should decide on the magic number: the all-knowing gurus at the 
protein data bank? Maybe we should really start using cif files


(Herman Schreuder)

-

In response to recent calls for me to act as a crystallographic dictator to 
decide how disordered sidechains should be treated, I would like to issue the 
following abridged statement:




I salute you -

The youth of victory,

People of the disordered sidechains,

People of challenge,

Youth of challenge,

They are a generation of disorder and challenge.

I salute you. You present the world the true pictures of the crystallographic 
community. You present the truth that the agents and cowards are trying to 
distort, to cover, to give a wrong picture of you before the world. Some 
CCP4BB readers are betraying you and depicting you as a bad people: Look at 
crystallographers, look at crystallographers! Crystallographers don't want 
victory. They don't want revolution. They want sidechains with high B-values. 
Crystallographers want occupancies. But here in Parker's Piece they want. 
Crystallography is leading continents, Africa, Asia and South America. Victory 
to the people of crystallography. And this is being pointed at the 
crystallographers.


They want no identification, identity when they say to people, crystallography 
in PDB. When they say crystallography, Revolution, crystallography, Gerard, 
all X-ray generators consider us, as the mecca, rulers of the world, even the 
superpowers, they want to converge on Cambridge, on Uppsala. They give their 
insults of you in crystallographic bulletin boards, they want to insult you. 
We want to retrieve in the square, everywhere, Gerard K has no rule. He's not 
the President, he's the leader of a revolution, he has nothing to Resign. 
Revolution means always sacrifice until the end of the crystal. This is my 
field, the field of my great-grandfathers, we planted, and we watered it with 
our grandfather's crystallisation soup. We deserve crystallography from those 
rats and crystallisation agents, who are being paid by security persons, damn 
them, damn their phases, if they have phases, they don't have phases, they 
don't have coordinates. All sidechains are with us here, they (American 
crystallographers) can see us all chanting the same slogans. Everyone 
challenging, we challenge America with its mighty synchrotron, we challenge 
even the superpower in the world and we became victory. Here they put their 
heads down, crystallography even kissed the grave of the Leader of all 
Martyrs. It's not victory for the cities of crystallography, but victory for 
the crystallographic community. This is the victory they want to give a bad 
image about. Italy, the Empire of the time, fell apart on crystallographic 
soil. I am bigger than any job, I am a revolutionary, I am from the 
Netherlands, from oasis that brought victory, and enjoy it from generation to 
generation, crystallography will remain at the top and will lead Cambridge and 
South Kensington.


We cannot hinder the process of this revolution from these greasy rats and 
cats. I am paying the price for staying here, and my grandfather who fell a 
martyr in 1911, I will not delete sidechain atoms and I will die as a martyr 
at the end, The remains of my father is the proof, grandfather, and my uncle 
Sheikh Alwyn, in the hills of Wales, I will not leave these righteous remains. 
He, Bricogne says that freedom cannot enjoy the shadow of these trees, we 
planted these trees and we watered it with our precipitants.


I am talking to you from the house which was bombarded by a hundred and 
seventy X-rays, by ESRF and Diamond. They left all houses and were aiming for 
Gerard's houses. Is it because he is president of crystallography? They could 
have treated him like other presidents, but Gerard K is history, resistance, 
freedom, victory, revolution, high B-values. This is an admission from the 
biggest power that Gerard K is not the president, is not a normal person, you 
can't poison him or lead demonstrations against him. When X-rays were falling 
on my crystal, and killing my carboxylates, where were you, you rats? Where 
were you those with big crystals? Where were you? You were in America. You 
were applauding your master, the synchrotrons. One hundred and seventy X-rays, 
left all palaces and leaders and kings and came to the great portakabin of 
Gerard K. This is a victory we should not be relinquished by anybody, any 
country or people, in myself Cambridge or any mission fighting back the 
tyranny of the America, we did not give in, we were resilient, here.


Now I want to tell 

Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Jacob Keller
Well, actually you probably haven't subtracted the "dark" R value from
your final R. This and other values are available in a "dark" folder
on your hard drive, which is impossible to see/read, but which
contains a lot of important information that will make your structure
have a much higher impact factor. You can probably make a
maximum-likelihood approximation of the appropriate values and apply
them to your structure so that your R values correspond to the majority
in the PDB.

HTH,

JPK


On Fri, Apr 1, 2011 at 8:30 AM, Edward A. Berry ber...@upstate.edu wrote:
 Do you have a list of dark-matter-aware PDB refinement programs?
 Adding dark matter and refining in TNT or Xplor gives me exactly
 the same R as without. Furthermore the final refined files have
 lost the dark matter as far as I can see. This leads me to believe
 these programs are completely ignoring the dark matter.

 Ed

 Ethan Merritt wrote:

 Hi to all on ccp4bb:

 What better day to announce the availability of a breakthrough technique
 in macromolecular crystallography?

 Given recent discussion and in particular James Holton's suggestion that
 the problem of disordered sidechains is a problem akin to the difficulty
 of describing dark matter and dark energy...

 I am happy to announce a new crystallographic tool that can improve your
 model by accounting for an often-neglected physical property. A detailed
 explanation, references, and a preliminary implementation of the program
 can be downloaded from

                http://skuld.bmsc.washington.edu/DarkMatter

 --
 Ethan A Merritt
 Karmic Diffraction Project
 Fine crystallography since April 1, 2011
 What goes around, comes around - usually as a symmetry equivalent





-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
cel: 773.608.9185
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] [phenixbb] what to do with disordered side chains

2011-04-01 Thread John Badger
Strangely enough, Nature may be the only journal that fully enables reviewers 
of protein crystallography papers to do the job properly - reviewer access is 
provided to unreleased coordinates AND diffraction data upon request! (No, this 
is not an April 1 joke.) There are reasons to suspect that the request is not 
made very often, but that reflects more poorly on the reviewers than on the 
willingness of the journal at this point in time.

Of course, it is a confusing situation when you only see density for part of 
a ligand, and it comes up all the time.

Thanks
John Badger


[ccp4bb] Quick-and-dirty searches of both PDB and EMDB at PDBe

2011-04-01 Thread Gerard DVD Kleywegt

Hi all,

As part of its recent winter update, the Protein Data Bank in Europe (PDBe; 
http://pdbe.org/) has improved its facility that allows for tandem searches of 
PDB and EMDB. It was designed to allow users to carry out many of their 
day-to-day searches (without the need to fill out a complex form or learn a 
special query syntax). Simply type what you are looking for, click the SEARCH 
button, and we will do our best to dig up relevant information, be it in the 
PDB, in EMDB or on our website.


QUICK ACCESS TO ENTRIES, SERVICES, SEQUENCES


If you go to the PDBe home page (http://pdbe.org/), you will see a Google-like 
search box in the friendly green banner near the top of the page (just below 
our motto, "Bringing Structure to Biology"). You can use this search box in a 
number of ways:


- type a PDB code (e.g., 1cbs), and you will be taken directly to the summary 
page for that entry. You can type any valid code, even if it's not in the 
current release, so you can use this facility to obtain information about the 
status of entries that have not been released yet (e.g., 2yd0) or entries that 
are no longer in the archive (e.g., theoretical models).


- type a valid EMDB code (e.g., 1607) and you will be taken straight to the 
summary page for that entry.


HINT: if, instead of being taken directly to a summary page for a certain PDB 
or EMDB code, you want to actually search PDB and/or EMDB for references to 
that particular code, simply enclose it in double quotes. For instance, 
searching for 1mi6 will take you to the summary page for PDB entry 1mi6, 
whereas searching for "1mi6" (in double quotes) will give you a set of hits 
in both PDB and EMDB that all contain a reference to 1mi6.


- type something resembling a PDBe service or resource name and chances are 
that the name will be recognised and you will be taken straight to that 
service or resource (e.g., autodep, emdep, pdbemotif, pdbepisa, pdbefold, 
pdbechem, quips, portfolio, etc.).


- you can search the protein sequences in the PDB by entering "seq:" (or 
"sequence:") followed by a (partial) amino-acid sequence in one-letter code 
(e.g., seq:GNKKGSEQESVKEFLAKAKEDFLKKWETPSQNTA). The sequence will be 
compared to all protein sequences in the PDB using FastA, and the results will 
be presented to you for further analysis in the PDBe sequence browser (see 
http://pdbe.org/sequence).


TEXT-BASED SEARCHES
---

Of course you can do general text-based searches of the PDB and EMDB as well - 
just type one or more search terms in the box and hit the SEARCH button.


- If you type a single search term and it gives hits in the PDB, you will get 
a results page with a tree structure on the left which shows in which 
categories the term was found. For instance, if you look for Jones, that could 
be an author, but it could also be part of the name of a molecule (e.g., Bence 
Jones protein). By clicking on an appropriate branch in the tree, you select 
only those entries for which the search term occurs in that data category 
(e.g., author or PDB compound).


- If you type more than one search term, only entries that contain all these 
terms will be selected as hits. For instance, if you search for kleywegt po4 
- without the quotes - you will get only one hit, 1CBQ. Note that if you 
enclose your search terms in double quotes, you will only get hits that match 
exactly (i.e., the complete search expression must occur somewhere in the 
entry, not just all of the keywords individually). For instance, searching for 
"HCV NS3 protease" yields 31 hits in the PDB if you enclose the terms in 
double quotes, but 177 hits if you don't.


Note that there are two tabs on the results page - one labelled PDB entries 
and the other EMDB entries. If you do a search for Baumeister, you will get 
14 hits in the PDB. If you click on the EMDB entries tab, you will find that 
there are 10 hits in EMDB.


HINT: if you want the EMDB results tab to become active straightaway, preface 
your search term(s) with "emdb:" (without the quotes), e.g. search for 
emdb:saibil and you will immediately get the list of 56 EMDB hits.


SEARCH RESULTS
--

The search results are sorted by release date by default, with the most 
recently released entries at the top. This ensures that if you read an 
exciting paper about new ClpC structures, a search for clpc will give you the 
latest entries first. You can change the sort order and criterion with a 
drop-down menu.


Each entry that is found as a hit in a search is shown in a panel that 
contains useful summary information and allows you to launch various searches 
and services with a single mouse-click. If you do a search for hiv-1, for 
example, you will get many hits in the PDB and two dozen in EMDB:


- For each PDB hit you will see: the PDB code, a small image of the structure, 
the resolution (for X-ray and EM structures), the title of the entry, a set of 
PDBprints that provide 

[ccp4bb] program to calculate electron density at x,y,z

2011-04-01 Thread Ed Pozharski
I need to calculate the electron density values for a list of spatial
locations (e.g. atom positions in a model) using an mtz-file that
already contains map coefficients.  Writing my own code may be easier
than I think (if one can manipulate mtz columns, isn't the only problem
left how to incorporate symmetry-related reflections?), but I would need
an alternative at least for troubleshooting purposes. So,

Does anyone know of a software tool that can calculate point electron
density for every atom in a structure?

If I had to bring a dependency into this, the best choice for me
would be the clipper libs.

Thanks in advance,

Ed.


-- 
I'd jump in myself, if I weren't so good at whistling.
   Julian, King of Lemurs


Re: [ccp4bb] SFCHECK produces incomplete postscript

2011-04-01 Thread S. Karthikeyan
If you have a break in the protein chain, this problem will occur. Put a TER
card in the PDB file wherever there is a chain break, then run SFCHECK. The .ps
output will then be complete and should display properly in the viewer.

HTH

-Karthik


 Dear all,

   I have been trying to compare a model that I'm refining against the 
 native SFs
 using SFCHECK.  SFCHECK finishes normally (no errors in log file, seemingly
 complete list of output .ps files), but produces a postscript file with only
 the first page of output (and it is mostly blank).  There is the typical
 light-grey panels on a dark-grey background format that I'm used to for 
 SFCHECK
 postscript files, but there are no figures or data.  Also, my mouse icon
 indicates it is hung trying to load/read the file (i.e. it's a moving busy
 icon under Linux).

   I've tried other postscript viewers without luck.  I can successfully 
 run
 SFCHECK on a completely different model/MTZ pair without problem though.  So
 does anyone know of circumstances that would lead to a hung postscript file
 from SFCHECK?

 Thanks for your help,
 -Andy





Re: [ccp4bb] SFCHECK produces incomplete postscript

2011-04-01 Thread Sergei Strelkov

April 1st, 2011


Dear Andy,

We have observed the same problem before,
and just today we could finally find an explanation.

Apparently, a new (still undocumented) functionality
was quietly introduced into a few widely used oscillation data
processing programs, enabling the recording of the scattering
from antimatter atoms traditionally ignored in crystallography
(see a related discussion on this BB earlier today!).

While obviously a welcome improvement, the inclusion
of antimatter SFs in the calculations has resulted in
aberrant behaviour of some other programs. This apparently
includes SFCHECK, which attempts to calculate the R-factor and
further statistics, but since the data for matter and antimatter
cancel out, all you get is a blank output.

I wonder if Alexei already has a new version of SFCHECK
that outputs the matter and antimatter SF statistics separately - ?

HTH,
Sergei




Dear all,

I have been trying to compare a model that I'm refining against the native SFs using 
SFCHECK.  SFCHECK finishes normally (no errors in log file, seemingly complete list of output .ps 
files), but produces a postscript file with only the first page of output (and it is mostly blank). 
 There is the typical light-grey panels on a dark-grey background format that I'm used to for 
SFCHECK postscript files, but there are no figures or data.  Also, my mouse icon indicates it is 
hung trying to load/read the file (i.e. it's a moving busy icon under 
Linux).

I've tried other postscript viewers without luck.  I can successfully run SFCHECK 
on a completely different model/MTZ pair without problem though.  So does anyone know of 
circumstances that would lead to a hung postscript file from SFCHECK?

Thanks for your help,
-Andy




Re: [ccp4bb] program to calculate electron density at x,y,z

2011-04-01 Thread Edward A. Berry

Ed Pozharski wrote:

I need to calculate the electron density values for a list of spatial
locations (e.g. atom positions in a model) using an mtz-file that
already contains map coefficients.  To write my own code may be easier
than I think (if one can manipulate mtz columns, isn't the only problem
left how to incorporate symmetry-related reflections?), but I would need
an alternative at least for troubleshooting purposes. So,

Does anyone know of a software tool that can calculate point electron
density for every atom in a structure?


fft to calculate the map, then mapman with the peek value command.
Give it a pdb file (fek.pdb) with the coordinates of the atoms, and it
returns a pdb file with the electron density at those points in the
B-factor column. Several options for interpolating map values at the
chosen point, iirc.

Something like:

# MAPSIZE sets the map buffer size for the USF/RAVE programs
setenv MAPSIZE 500
/data/trp/berry/usf/rave/lx_mapman -b << eof
re m1 d.map CCP4
! lines starting with ! are commented-out mapman commands
!norm m1
!pick level 5.2
!pick peaks m1 danopeaks.pdb pdb
peek value m1 fek.pdb fepkhght.pdb int
quit
eof






If I would have to bring a dependency into this, the best choice for me
would be clipper libs.

Thanks in advance,

Ed.


--
I'd jump in myself, if I weren't so good at whistling.
Julian, King of Lemurs



Re: [ccp4bb] What happened to this innovative method by MV King?

2011-04-01 Thread Miguel Ortiz Lombardía
On 01/04/11 12:39, REX PALMER wrote:
 Dear Protein Crystallographers
 I would like to share with you something I came across today.
 Unfortunately I was only able to copy the first 4 pages of the article
 by MV King as I need to post the email before 12am and the quality of
 the copy is somewhat lacking. I was wondering if anyone knows if
 anything came of the proposed method of heavy atom substitution as a
 Google Scholar search has failed to bring anything up.
 Best wishes
  
 Rex Palmer
 Birkbeck College, London
 
 

They said:

In Gold we thrust

and so they did.

No other crystal phases survived and hence all this messy problem.


The locusts have no King, yet go they forth all of them by bands


-- 
Miguel

Architecture et Fonction des Macromolécules Biologiques (UMR6098)
CNRS, Universités d'Aix-Marseille I & II
Case 932, 163 Avenue de Luminy, 13288 Marseille cedex 9, France
Tel: +33(0) 491 82 55 93
Fax: +33(0) 491 26 67 20
mailto:miguel.ortiz-lombar...@afmb.univ-mrs.fr
http://www.afmb.univ-mrs.fr/Miguel-Ortiz-Lombardia



Re: [ccp4bb] program to calculate electron density at x,y,z

2011-04-01 Thread Pavel Afonine
Hi Ed,

if you are familiar with CCTBX then

map_value = map_data.eight_point_interpolation(site_fractional)

Also, there is a similar method that will just give you the density value at
the closest grid point.

Let me know if interested, and I can send you a ten-line Python example
script that will do it.
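
Something along these lines - a minimal sketch, assuming the MTZ holds a
single set of complex map coefficients (e.g. FWT/PHWT); the file names are
placeholders:

import iotbx.pdb
import iotbx.mtz

# read the map coefficients and FFT them into a real-space map
miller_arrays = iotbx.mtz.object("map_coeffs.mtz").as_miller_arrays()
map_coeffs = [ma for ma in miller_arrays if ma.is_complex_array()][0]
fft_map = map_coeffs.fft_map(resolution_factor=1/4.)
fft_map.apply_sigma_scaling()            # density in sigma units
map_data = fft_map.real_map_unpadded()

# interpolate the map at each atomic site (fractional coordinates);
# value_at_closest_grid_point(site) is the no-interpolation variant
xrs = iotbx.pdb.input(file_name="model.pdb").xray_structure_simple()
for scatterer in xrs.scatterers():
    rho = map_data.eight_point_interpolation(scatterer.site)
    print("%-16s %6.2f" % (scatterer.label, rho))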

Pavel.


On Fri, Apr 1, 2011 at 8:16 AM, Ed Pozharski epozh...@umaryland.edu wrote:

 I need to calculate the electron density values for a list of spatial
 locations (e.g. atom positions in a model) using an mtz-file that
 already contains map coefficients.  To write my own code may be easier
 than I think (if one can manipulate mtz columns, isn't the only problem
 left how to incorporate symmetry-related reflections?), but I would need
 an alternative at least for troubleshooting purposes. So,

 Does anyone know of a software tool that can calculate point electron
 density for every atom in a structure?

 If I would have to bring a dependency into this, the best choice for me
 would be clipper libs.

 Thanks in advance,

 Ed.


 --
 I'd jump in myself, if I weren't so good at whistling.
   Julian, King of Lemurs



Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Zbyszek Otwinowski
The meaning of B-factor is the (scaled) sum of all positional
uncertainties, and not just its one contributor, the Atomic Displacement
Parameter that describes the relative displacement of an atom in the
crystal lattice by a Gaussian function.
That meaning (the sum of all contributions) comes from the procedure that
calculates the B-factor in all PDB X-ray deposits, and not from an
arbitrary decision by a committee. All programs that refine B-factors
calculate an estimate of positional uncertainty, where contributors can be
both Gaussian and non-Gaussian. For a non-Gaussian contributor, e.g.
multiple occupancy, the exact numerical contribution is rather a complex
function, but conceptually it is still an uncertainty estimate. Given the
resolution of the typical data, we do not have a procedure to decouple
Gaussian and non-Gaussian contributors, so we have to live with the
B-factor being defined by the refinement procedure. However, we should
still improve the estimates of the B-factor, e.g. by changing the
restraints. In my experience, the Refmac's default restraints on B-factors
in side chains are too tight and I adjust them. Still, my preference would
be to have harmonic restraints on U (which scales as the square root of B)
rather than on B itself.
It is not we who cram too many meanings on the B-factor, it is the quite
fundamental limitation of crystallographic refinement.
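
For concreteness: the isotropic relation is B = 8*pi^2*<u^2>, so u scales as
the square root of B. A minimal converter, as a sketch in Python:

import math

def b_to_u(B):
    # rms displacement u (in A) implied by an isotropic B factor (in A^2)
    return math.sqrt(B / (8.0 * math.pi ** 2))

print("B=20 -> u=%.2f A; B=80 -> u=%.2f A" % (b_to_u(20.0), b_to_u(80.0)))

By this relation, B=20 corresponds to u of about 0.50 A and B=80 to about
1.01 A.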

Zbyszek Otwinowski

 The fundamental problem remains:  we're cramming too many meanings into
one number [B factor].  This the PDB could indeed solve, by giving us
another column.  (He said airily, blithely launching a totally new flame
war.)
 phx.



Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Bernhard Rupp (Hofkristallrat a.D.)
 In my experience, the Refmac's default restraints on B-factors in side chains 
 are too tight and I adjust them. 

Concur. See BMC p 640.

BR


Re: [ccp4bb] What happened to this innovative method by MV King?

2011-04-01 Thread James Stroud
This was not so much an advance as a remarkable observation. We have since 
learned that these clathrates are entirely impractical. The problem is not so 
much their dextrorotatory properties, which are more or less a nuisance, but 
that they are too dense and have absolutely no affinity for other compounds.

James




On Apr 1, 2011, at 3:39 AM, REX PALMER wrote:

 Dear Protein Crystallographers
 I would like to share with you something I came across today. Unfortunately I 
 was only able to copy the first 4 pages of the article by MV King as I need 
 to post the email before 12am and the quality of the copy is somewhat 
 lacking. I was wondering if anyone knows if anything came of the proposed 
 method of heavy atom substitution as a Google Scholar search has failed to 
 bring anything up.
 Best wishes
  
 Rex Palmer
 Birkbeck College, London



Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread James Holton
I'm not sure I entirely agree with ZO's assessment that a B factor is
a measure of uncertainty.  Pedantically, all it really is is an
instruction to the refinement program to build some electron density
with a certain width and height at a certain location.  The result is
then compared to the data, parameters are adjusted, etc.  I don't
think the B factor is somehow converted into an error bar on the
calculated electron density, is it?

For example, a B-factor of 500 on a carbon atom just means that the
peak to build is ~0.02 electron/A^3 tall, and ~3 A wide (full width
at half maximum).  By comparison, a carbon with B=20 is 1.6
electrons/A^3 tall and ~0.7 A wide (FWHM).  One of the bugs that
Dale referred to is the fact that most refinement programs do not
plot electron density more than 3 A away from each atomic center, so
a substantial fraction of the 6 electrons represented by a carbon with
B=500 will be sharply cut off, and missing from the FC calculation.
Then again, all 6 electrons will be missing if the atoms are simply
not modeled, or if the occupancy is zero.

The point I am trying to make here is that there is no B factor that
will make an atom go away, because the way B factors are implemented
is to always conserve the total number of electrons in the atom, but
just spread them out over more space.

Now, a peak height of 0.02 electrons/A^3 may sound like it might as
well be zero, especially when sitting next to a B=20 atom, but what if
all the atoms have high B factors?  For example, if the average
(Wilson) B factor is 80 (like it typically is for a ~4A structure),
then the average peak height of a carbon atom is 0.3 electrons/A^3,
and then 0.02 electrons/A^3 starts to become more significant.  If we
consider a ~11 A structure, then the average atomic B factor will be
around 500.  This B vs resolution relationship is something I
derived empirically from the PDB (Holton JSR 2009).  Specifically, the
average B factor for PDB files at a given resolution d is: B =
4*d^2+12.  Admittedly, this is on average, but the trend does make
physical sense: atoms with high B factors don't contribute very much
to high-angle spots.
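
These peak heights are easy to check yourself: take the four-Gaussian
Cromer-Mann fit for carbon (International Tables Vol. C) and add B to each
exponent. A back-of-the-envelope sketch in Python (the constant term is
simply smeared with B alone, a harmless approximation here):

import math

# Cromer-Mann coefficients for carbon (International Tables Vol. C)
a = (2.3100, 1.0200, 1.5886, 0.8650)
b = (20.8439, 10.2075, 0.5687, 51.6512)
c = 0.2156

def peak_height(B):
    # electron density (e-/A^3) at the centre of an isolated carbon atom
    rho = sum(ai * (4 * math.pi / (bi + B)) ** 1.5 for ai, bi in zip(a, b))
    return rho + c * (4 * math.pi / B) ** 1.5

for B in (20.0, 80.0, 500.0):
    print("B = %5.1f  rho(0) = %5.2f e-/A^3" % (B, peak_height(B)))

This prints roughly 1.6, 0.3 and 0.02 e-/A^3, matching the numbers above.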

More formally, the problem with using a high B-factor as a flag is
that it is not resolution-general.  Dale has already pointed this out.

Personally, I prefer to think of B factors as an atom-by-atom
"resolution" rather than an error bar, and this is how I tell
students to interpret them (using the B = 4*d^2+12 formula).  The
problem I have with the "error bar" interpretation is that
heterogeneity and uncertainty are not the same thing.  That is, just
because the atom is jumping around does not mean you don't know
where the centroid of the distribution is.  The u_x in
B=8*pi^2*u_x^2 does reflect the standard error of atomic position in
a GIVEN unit cell, but since we are averaging over trillions of cells,
the error bar on the AVERAGE atomic position is actually a great
deal smaller than u.  I think this distinction is important because
what we are building is a model of the AVERAGE electron density, not a
single molecule.

Just my 0.02 electrons

-James Holton
MAD Scientist



On Fri, Apr 1, 2011 at 10:57 AM, Zbyszek Otwinowski
zbys...@work.swmed.edu wrote:
 The meaning of B-factor is the (scaled) sum of all positional
 uncertainties, and not just its one contributor, the Atomic Displacement
 Parameter that describes the relative displacement of an atom in the
 crystal lattice by a Gaussian function.
 That meaning (the sum of all contributions) comes from the procedure that
 calculates the B-factor in all PDB X-ray deposits, and not from an
 arbitrary decision by a committee. All programs that refine B-factors
 calculate an estimate of positional uncertainty, where contributors can be
 both Gaussian and non-Gaussian. For a non-Gaussian contributor, e.g.
 multiple occupancy, the exact numerical contribution is rather a complex
 function, but conceptually it is still an uncertainty estimate. Given the
 resolution of the typical data, we do not have a procedure to decouple
 Gaussian and non-Gaussian contributors, so we have to live with the
 B-factor being defined by the refinement procedure. However, we should
 still improve the estimates of the B-factor, e.g. by changing the
 restraints. In my experience, the Refmac's default restraints on B-factors
 in side chains are too tight and I adjust them. Still, my preference would
 be to have harmonic restraints on U (square root of B) rather than on Bs
 themselves.
 It is not we who cram too many meanings on the B-factor, it is the quite
 fundamental limitation of crystallographic refinement.

 Zbyszek Otwinowski

 The fundamental problem remains:  we're cramming too many meanings into
 one number [B factor].  This the PDB could indeed solve, by giving us
 another column.  (He said airily, blithely launching a totally new flame
 war.)
 phx.




Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Phoebe Rice
Congratulations on your amazing discovery, which immediately suggests many new 
lines of inquiry:

Does dark matter affect macromolecular stability?  Can it explain the 
difficulty some students have in sample preparation?  Is it found in higher 
concentrations in brains that are thought to be denser (we won't say by whom)?

=
Phoebe A. Rice
Dept. of Biochemistry & Molecular Biology
The University of Chicago
phone 773 834 1723
http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
http://www.rsc.org/shop/books/2008/9780854042722.asp


 Original message 
Date: Thu, 31 Mar 2011 23:06:47 -0700
From: CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK (on behalf of Ethan Merritt 
merr...@u.washington.edu)
Subject: [ccp4bb] Crystallographic Breakthrough  -  DarkMatter Version 1.0  
To: CCP4BB@JISCMAIL.AC.UK

Hi to all on ccp4bb:

What better day to announce the availability of a breakthrough technique
in macromolecular crystallography?

Given recent discussion and in particular James Holton's suggestion that
the problem of disordered sidechains is a problem akin to the difficulty
of describing dark matter and dark energy...

I am happy to announce a new crystallographic tool that can improve your
model by accounting for an often-neglected physical property. A detailed
explanation, references, and a preliminary implementation of the program
can be downloaded from

   http://skuld.bmsc.washington.edu/DarkMatter

-- 
Ethan A Merritt
Karmic Diffraction Project
Fine crystallography since April 1, 2011
What goes around, comes around - usually as a symmetry equivalent


Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Randy J. Read
In this case, I'm more on ZO's side. Let's say that the refinement program 
can't get an atom to the right position (for instance, to pick a reasonably 
realistic example, because you've put a leucine side chain in backwards). 
In that case, the B-factor for the atom nearest to where there should be 
one in the structure will get larger to smear out its density and put some 
in the right place. To a good approximation, the optimal increase in the 
B-factor will be the one you'd expect for a Gaussian probability 
distribution, i.e. 8Pi^2/3 times the positional error squared (a 0.5 A error, 
for instance, adds roughly 6.6 A^2 to B). So a refined 
B-factor does include a measure of the uncertainty or error in the atom's 
position.


Best wishes,

Randy Read

On Apr 1 2011, James Holton wrote:


I'm not sure I entirely agree with ZO's assessment that a B factor is
a measure of uncertainty.  Pedantically, all it really is is an
instruction to the refinement program to build some electron density
with a certain width and height at a certain location.  The result is
then compared to the data, parameters are adjusted, etc.  I don't
think the B factor is somehow converted into an error bar on the
calculated electron density, is it?

For example, a B-factor of 500 on a carbon atom just means that the
peak to build is ~0.02 electron/A^3 tall, and ~3 A wide (full width
at half maximum).  By comparison, a carbon with B=20 is 1.6
electrons/A^3 tall and ~0.7 A wide (FWHM).  One of the bugs that
Dale referred to is the fact that most refinement programs do not
plot electron density more than 3 A away from each atomic center, so
a substantial fraction of the 6 electrons represented by a carbon with
B=500 will be sharply cut off, and missing from the FC calculation.
Then again, all 6 electrons will be missing if the atoms are simply
not modeled, or if the occupancy is zero.

The point I am trying to make here is that there is no B factor that
will make an atom go away, because the way B factors are implemented
is to always conserve the total number of electrons in the atom, but
just spread them out over more space.

Now, a peak height of 0.02 electrons/A^3 may sound like it might as
well be zero, especially when sitting next to a B=20 atom, but what if
all the atoms have high B factors?  For example, if the average
(Wilson) B factor is 80 (like it typically is for a ~4A structure),
then the average peak height of a carbon atom is 0.3 electrons/A^3,
and then 0.02 electrons/A^3 starts to become more significant.  If we
consider a ~11 A structure, then the average atomic B factor will be
around 500.  This B vs resolution relationship is something I
derived empirically from the PDB (Holton JSR 2009).  Specifically, the
average B factor for PDB files at a given resolution d is: B =
4*d^2+12.  Admittedly, this is on average, but the trend does make
physical sense: atoms with high B factors don't contribute very much
to high-angle spots.

More formally, the problem with using a high B-factor as a flag is
that it is not resolution-general.  Dale has already pointed this out.

Personally, I prefer to think of B factors as a atom-by-atom
resolution rather than an error bar, and this is how I tell
students to interpret them (using the B = 4*d^2+12 formula).  The
problem I have with the error bar interpretation is that
heterogeneity and uncertainty are not the same thing.  That is, just
because the atom is jumping around does not mean you don't know
where the centroid of the distribution is.  The u_x in
B=8*pi^2*u_x^2 does reflect the standard error of atomic position in
a GIVEN unit cell, but since we are averaging over trillions of cells,
the error bar on the AVERAGE atomic position is actually a great
deal smaller than u.  I think this distinction is important because
what we are building is a model of the AVERAGE electron density, not a
single molecule.

Just my 0.02 electrons

-James Holton
MAD Scientist



On Fri, Apr 1, 2011 at 10:57 AM, Zbyszek Otwinowski
zbys...@work.swmed.edu wrote:
The meaning of B-factor is the (scaled) sum of all positional 
uncertainties, and not just its one contributor, the Atomic Displacement 
Parameter that describes the relative displacement of an atom in the 
crystal lattice by a Gaussian function. That meaning (the sum of all 
contributions) comes from the procedure that calculates the B-factor in 
all PDB X-ray deposits, and not from an arbitrary decision by a 
committee. All programs that refine B-factors calculate an estimate of 
positional uncertainty, where contributors can be both Gaussian and 
non-Gaussian. For a non-Gaussian contributor, e.g. multiple occupancy, 
the exact numerical contribution is rather a complex function, but 
conceptually it is still an uncertainty estimate. Given the resolution 
of the typical data, we do not have a procedure to decouple Gaussian and 
non-Gaussian contributors, so we have to live with the B-factor being 
defined by the refinement procedure. However, we should still improve 
the estimates of the B-factor, 

[ccp4bb] OT: PCR instrument

2011-04-01 Thread Bernhard Rupp (Hofkristallrat a.D.)
Dear All,

I was polled for a  recommendation for a  good PCR instrument,
but I am not much of a molecular biology person - if someone could
please help and kindly send some recommendations to

Eric W. Reinheimer ewreinhei...@csupomona.edu

Best regards, BR
-
Bernhard Rupp
001 (925) 209-7429
+43 (676) 571-0536
b...@ruppweb.org
hofkristall...@gmail.com
http://www.ruppweb.org/
-
No animals were hurt or killed during the 
production of this email.
-


Re: [ccp4bb] SFCHECK produces incomplete postscript

2011-04-01 Thread Andrew T. Torelli
Dear Karthik and Sergei,

Thank you for the replies (helpful and humorous).  Karthik, I confirmed 
that the chain breaks in my .PDB have TER cards, but I arrive at the same 
result.  Perhaps dark matter is to blame for this singularity after all...  
In that case, maybe I just need to try again tomorrow. I'll post the solution 
if I find it before moving on.

Regards,
-Andy

-Original Message-
From: S. Karthikeyan [mailto:skart...@imtech.res.in] 
Sent: Friday, April 01, 2011 11:21 AM
To: Andrew T. Torelli
Cc: ccp4bb@jiscmail.ac.uk
Subject: Re: [ccp4bb] SFCHECK produces incomplete postscript

If you have break in the protein chain, this problem will occur. Put TER card
in the PDB file where ever the chain break is there, then run SFCHECK. The .ps
output now will be complete and should be able to see in the viewer.

HTH

-Karthik


 Dear all,

   I have been trying to compare a model that I'm refining against the 
 native SFs
 using SFCHECK.  SFCHECK finishes normally (no errors in log file, seemingly
 complete list of output .ps files), but produces a postscript file with only
 the first page of output (and it is mostly blank).  There is the typical
 light-grey panels on a dark-grey background format that I'm used to for 
 SFCHECK
 postscript files, but there are no figures or data.  Also, my mouse icon
 indicates it is hung trying to load/read the file (i.e. it's a moving busy
 icon under Linux).

   I've tried other postscript viewers without luck.  I can successfully 
 run
 SFCHECK on a completely different model/MTZ pair without problem though.  So
 does anyone know of circumstances that would lead to a hung postscript file
 from SFCHECK?

 Thanks for your help,
 -Andy







[ccp4bb] phenix library issues

2011-04-01 Thread Tim Gruene
Dear all,

sorry for being off-topic!

I have been experimenting with the latest phenix build [1].

I am fairly impressed with phenix.canephor [2]. At first sight the projected
density looked a little weak until I realised that I had used
phenix.project.mediterranean. When I switched to phenix.project.occidental, the
result was, as expected, a good average. The rmsd of about 5e12 A also seemed
fairly acceptable to me.

Subsequently I attempted to improve the result using phenix.calzone [3].
Unfortunately this program crashed because of a missing libcapsicum.so.4.01.11
which I could not find anywhere on the web. Could anyone please let me know
where to find this library?


Best wishes, Tim

P.S. How about the resolution limit of 2.56A - in principle, shouldn't one be
able to improve it significantly by switching from CuKa to, e.g., MoKa
radiation?


[1] PHENIX: a comprehensive Python-based system for macromolecular structure
solution. P. D. Adams, P. V. Afonine, G. Bunkóczi, V. B. Chen, I. W. Davis, N.
Echols, J. J. Headd, L.-W. Hung, G. J. Kapral, R. W. Grosse-Kunstleve, A. J.
McCoy, N. W. Moriarty, R. Oeffner, R. J. Read, D. C. Richardson, J. S.
Richardson, T. C. Terwilliger and P. H. Zwart. Acta Cryst. D66, 213-221 (2010).

[2] Crystal cookery - using high-throughput technologies and the grocery store
as a teaching tool. J. R. Luft, N. M. Furlani, R. E. NeMoyer, E. J. Penna, J. R.
Wolfley, M. E. Snell, S. A. Potter and E. H. Snell J. Appl. Cryst. (2010). 43,
1189-1207

[3] Structure and stereochemistry of
(24R)-27-nor-5[alpha]-cholestane-3[beta],4[beta],5,6[alpha],7[beta],8,14,15[alpha],24-nonaol:
a highly hydroxylated marine steroid from the starfish Archaster typicus. C. A.
Mattia, L. Mazzarella, R. Puliti, R. Riccio and L. Minale. Acta Cryst. (1988).
C44, 2170-2173 
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

phone: +49 (0)551 39 22149

GPG Key ID = A46BEE1A





Re: [ccp4bb] phenix library issues

2011-04-01 Thread Ethan Merritt
On Friday, April 01, 2011 01:51:31 pm Tim Gruene wrote:

 Subsequently I attempted to improve the result using phenix.calzone [3].

This program comes in both Chicago and New York localizations.
Do you know which one you have? 

 Unfortunately this program crashed because of a missing libcapsicum.so.4.01.11
 which I could not finde anywhere on the web. Could anyone please let me know
 where to find this library?

I think the upstream source is here:  http://tinyurl.com/2wcbq7q
A more general substitute might be:   http://tinyurl.com/3c5z589

You might also be able to extract it from libsalsa:   http://tinyurl.com/3orwygl

cheers,

Ethan

-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread Jacob Keller
How *did* those physicists get such a convenient hypothesis, when the
rest of us have only light matter to work with! ...Or do we also
really have our dark matter too?

JPK

On Fri, Apr 1, 2011 at 2:12 PM, Phoebe Rice pr...@uchicago.edu wrote:
 Congratulations on your amazing discovery, which immediately suggests many 
 new lines of inquiry:

 Does dark matter affect macromolecular stability?  Can it explain the 
 difficulty some students have in sample preparation?  Is it found in higher 
 concentrations in brains that are thought to be denser (we won't say by whom)?

 =
 Phoebe A. Rice
 Dept. of Biochemistry & Molecular Biology
 The University of Chicago
 phone 773 834 1723
 http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
 http://www.rsc.org/shop/books/2008/9780854042722.asp


  Original message 
Date: Thu, 31 Mar 2011 23:06:47 -0700
From: CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK (on behalf of Ethan Merritt 
merr...@u.washington.edu)
Subject: [ccp4bb] Crystallographic Breakthrough  -  DarkMatter Version 1.0
To: CCP4BB@JISCMAIL.AC.UK

Hi to all on ccp4bb:

What better day to announce the availability of a breakthrough technique
in macromolecular crystallography?

Given recent discussion and in particular James Holton's suggestion that
the problem of disordered sidechains is a problem akin to the difficulty
of describing dark matter and dark energy...

I am happy to announce a new crystallographic tool that can improve your
model by accounting for an often-neglected physical property. A detailed
explanation, references, and a preliminary implementation of the program
can be downloaded from

               http://skuld.bmsc.washington.edu/DarkMatter

--
Ethan A Merritt
Karmic Diffraction Project
Fine crystallography since April 1, 2011
What goes around, comes around - usually as a symmetry equivalent




-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
cel: 773.608.9185
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

2011-04-01 Thread George T. DeTitta
It may simply be the case that all those seleniums scattering anomalously are 
pumping the dark matter. 
Sent via BlackBerry by AT&T

-Original Message-
From: Jacob Keller j-kell...@fsm.northwestern.edu
Sender: CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK
Date: Fri, 1 Apr 2011 17:25:02 
To: CCP4BB@JISCMAIL.AC.UK
Reply-To: Jacob Keller j-kell...@fsm.northwestern.edu
Subject: Re: [ccp4bb] Crystallographic Breakthrough - DarkMatter Version 1.0

How *did* those physicists get such a convenient hypothesis, when the
rest of us have only light matter to work with! ...Or do we also
really have our dark matter too?

JPK

On Fri, Apr 1, 2011 at 2:12 PM, Phoebe Rice pr...@uchicago.edu wrote:
 Congratulations on your amazing discovery, which immediately suggests many 
 new lines of inquiry:

 Does dark matter affect macromolecular stability?  Can it explain the 
 difficulty some students have in sample preparation?  Is it found in higher 
 concentrations in brains that are thought to be denser (we won't say by whom)?

 =
 Phoebe A. Rice
 Dept. of Biochemistry & Molecular Biology
 The University of Chicago
 phone 773 834 1723
 http://bmb.bsd.uchicago.edu/Faculty_and_Research/01_Faculty/01_Faculty_Alphabetically.php?faculty_id=123
 http://www.rsc.org/shop/books/2008/9780854042722.asp


  Original message 
Date: Thu, 31 Mar 2011 23:06:47 -0700
From: CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK (on behalf of Ethan Merritt 
merr...@u.washington.edu)
Subject: [ccp4bb] Crystallographic Breakthrough  -  DarkMatter Version 1.0
To: CCP4BB@JISCMAIL.AC.UK

Hi to all on ccp4bb:

What better day to announce the availability of a breakthrough technique
in macromolecular crystallography?

Given recent discussion and in particular James Holton's suggestion that
the problem of disordered sidechains is a problem akin to the difficulty
of describing dark matter and dark energy...

I am happy to announce a new crystallographic tool that can improve your
model by accounting for an often-neglected physical property. A detailed
explanation, references, and a preliminary implementation of the program
can be downloaded from

               http://skuld.bmsc.washington.edu/DarkMatter

--
Ethan A Merritt
Karmic Diffraction Project
Fine crystallography since April 1, 2011
What goes around, comes around - usually as a symmetry equivalent




-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
cel: 773.608.9185
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Eric Bennett
Personally I think it is a _good_ thing that those missing atoms are a pain, 
because it helps ensure you are aware of the problem.  As somebody who is in 
the business of supplying non-structural people with models, and seeing how 
those models are sometimes (mis)interpreted, I think it's better to inflict 
that pain than it is to present a model that non-structural people are likely 
to over-interpret.  

The PDB provides various manipulated versions of crystal structures, such as 
biological assemblies.  I don't think it would necessarily be a bad idea to 
build missing atoms back into those sorts of processed files, but for the main 
deposited entry, the best way to make sure the model is not abused is to leave 
out atoms that can't be modeled accurately.

Just as an example since you mention surfaces, some of the people I work with 
calculate solvent accessible surface areas of individual residues for purposes 
such as engineering cysteines for chemical conjugation, and if residues are 
modeled into bogus positions just to say all the atoms are there, software that 
calculates per-residue SASA has to have a reliable way of knowing to ignore 
those atoms when calculating the area of neighboring residues.  Ad hoc 
solutions like putting very large values in the B column are not clear cut for 
such a software program to interpret.  Leaving the atom out completely is 
pretty unambiguous.

-Eric


On Mar 31, 2011, at 7:34 PM, Scott Pegan wrote:

 I agree with Zbyszek with the modeling of side chains and stress the 
 following points:
 
 1) It drives me nuts when I find that a PDB entry is missing atoms from side 
 chains. This requires me to rebuild them to get any use out of the PDB, such 
 as relevant surface renderings or electrostatic potential plots. I am an 
 experienced structural biologist, so I can immediately identify that they have 
 been removed and can rebuild them. I feel sorry for my fellow scientists from 
 other biological fields who can't perform this task readily; removing these 
 atoms from a model limits its usefulness to a wider scientific audience.
 
 2) Not sure if anyone has documented the percentage of side chains actually 
 missing from radiation damage versus conformational heterogeneity (e.g. by 
 dissolving a crystal after collection and sending it to mass spec). Although 
 the former likely happens occasionally, my gut tells me that the latter is 
 significantly more predominant. As a result, the absence of atoms from a side 
 chain in the PDB where the main chain is clearly visible in the electron 
 density might make for the best statistics for an experimental model, but it 
 does not reflect reality.
 
 Scott
 


Re: [ccp4bb] program to calculate electron density at x,y,z

2011-04-01 Thread Bart Hazes

Hi Ed,

I wrote a short program named HYDENS that takes a PDB file and an H K L 
amplitude phase file for a full hemisphere of data. You can make the 
latter from an MTZ with sftools. The program is on my website at 
http://129.128.24.248/highlights.html. There is a Linux executable as 
well as the source code, which should compile with any standard Fortran 
compiler.


Bart

On 11-04-01 09:16 AM, Ed Pozharski wrote:

I need to calculate the electron density values for a list of spatial
locations (e.g. atom positions in a model) using an mtz-file that
already contains map coefficients.  To write my own code may be easier
than I think (if one can manipulate mtz columns, isn't the only problem
left how to incorporate symmetry-related reflections?), but I would need
an alternative at least for troubleshooting purposes. So,

Does anyone know of a software tool that can calculate point electron
density for every atom in a structure?

If I would have to bring a dependency into this, the best choice for me
would be clipper libs.

Thanks in advance,

Ed.




--



Bart Hazes (Associate Professor)
Dept. of Medical Microbiology  Immunology
University of Alberta
1-15 Medical Sciences Building
Edmonton, Alberta
Canada, T6G 2H7
phone:  1-780-492-0042
fax:1-780-492-7521