Re: [ccp4bb] CCP4BB Digest - 12 Feb 2009 to 13 Feb 2009 (#2009-45)

2009-02-16 Thread Clemens Vonrhein
Dear Ho,

On Fri, Feb 13, 2009 at 04:45:29PM -0800, Ho-Leung Ng wrote:
  Can you elaborate on the effects of improper inclusion of low
 resolution (bogus?) reflections? Other than rejecting spots from
 obvious artifacts, it bothers me to discard data. But I can also see
 how a few inaccurate, very high intensity spots can throw off scaling.

I completely agree: it also bothers me to discard data. However, the
crucial word here is 'data' - which is different from Miller indices
HKL.

So I am mainly concerned with two types of reflections (HKL) that
aren't really 'data':

  1) overloads

 These are obviously not included in your final reflection file
 (unless you explicitly tell the integration software to do that
 - in which case you know exactly what you are doing anyway). So
 there is no problem ... or is there?

 There are only very few overloaded reflections, all at low
 resolution - and the most important reflections are obviously the
 ones at 1.94A resolution, so that one can have a 'better-than-2A'
 structure in the end ... ;-) ... So still no problem, right?

 And who cares if the completeness of the data isn't 100% but
 rather 99.4%? Exactly ... so where is the problem?

 But: these few missing reflections are systematically the
 strongest ones at low(ish) resolution, and any systematically
 missing data is not a good thing to have.

 Solution: always collect a low-intensity pass to measure those
 strong reflections if there is a substantial amount of overloads.

  2) beamstop

 The integration software will predict all reflections based on
 your parameters (apart from the 000 reflection): it doesn't care
 if such a reflection would be behind the beamstop shadow or
 not. However, a reflection behind the beamstop will obviously not
 actually be there - and the integrated intensity (probably a very
 low value) will be wrong.

 One example of such effects in the context of experimental
 phasing is bogus anomalous differences. Imagine that your
 beamstop is not exactly centred around the direct beam. You will
 have it extending a little bit more to one side (giving you
 maybe 20A low resolution) than to the other side (maybe 30A
 resolution). In one orientation of the crystal you might be able
 to collect a 25A (h,k,l) reflection very well (because it is on
 the side where the beamstop only starts at 30A) - but the
 (-h,-k,-l) reflection is collected in an orientation where it is
 on the 20A-side of the beamstop, i.e. it is predicted within the
 beamstop shadow.

 Effect: you have a valid I+ measurement but a more-or-less zero
 I- measurement, giving you a huge anomalous difference that
 shouldn't really be there.

 Now if you measured your data in different orientations (kappa
 goniostat) with high enough multiplicity, this one bogus
 measurement will probably be thrown out during
 scaling/merging. You can e.g. check the so-called ROGUES file
 produced by SCALA. But if you have the usual multiplicity of only
 3-4 the scaling/merging process might not detect this as an
 outlier correctly and it ends up in your data. Sure, programs
 like autoSHARP will check for these outliers and try to reject
 them - but this is only a hack/fix for the fundamental problem:
 telling the integration program what the good area of the
 detector is.

 Solution: mask your beamstop. All integration programs have tools
 for doing that (some are better than others). I haven't seen
 any program being able to do it automatically in a reliable way
 (if reliable would mean: correctly in at least 50% of cases) -
 but I'm no expert in all of them by a long shot. It usually takes
 me only about a minute or two to mask the beamstop by hand. A
 small investment for a big return (good data) ;-)

There are other potentially problematic reflections, e.g. near ice
rings: these can also have effects visible in the maps. But the above
effects have one thing in common: they happen mainly at low resolution.
And our models can be seen to consist of basically two real-space
components (atoms in the form of a PDB file and bulk solvent in the form
of a mask) - one of which is a low-resolution object (the solvent mask)
and the other a high-resolution object (the atoms). They need to be
combined through some clever scaling: if there are issues with the
low-resolution reflections this scaling can go wrong - sometimes
really badly.
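
For concreteness, a commonly used form of that scaling is the flat
(exponential) bulk-solvent model - the exact parametrisation varies
between programs, so this is a sketch rather than what any particular
package implements:

  F_model(h) = k_overall exp(-B_overall s^2/4) [ F_atoms(h) + k_sol exp(-B_sol s^2/4) F_mask(h) ]

with s = 1/d. The k_sol term decays rapidly with resolution, so the fit
of k_sol and B_sol is dominated by the lowest-resolution reflections -
which is exactly why a handful of bogus low-resolution measurements can
throw it off.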

Hope that helps a bit.

Cheers

Clemens

 
 
 ho
 UC Berkeley
 
 Date: Fri, 13 Feb 2009 17:14:38 +
 From: Clemens Vonrhein vonrh...@globalphasing.com
 Subject: Re: unstable refinement

 * resolution limits: are you suddenly including all those poorly
   measured or non-existent reflections at the low resolution end (10A
   and lower) that are only present because the beamstop masking wasn't
   done properly during data processing

Re: [ccp4bb] CCP4BB Digest - 12 Feb 2009 to 13 Feb 2009 (#2009-45)

2009-02-16 Thread Phil Evans
Just to expand a little on the beam stop problem: the outlier
rejection algorithm in Scala (& I imagine in other programs) relies on
a consensus, that is it essentially assumes that the majority of
observations are correct (actually they are weighted by
1/EstimatedVariance). This means that if you have 3 observations of a
reflection behind the beam stop, & one not, then the program is likely
to throw out the one good one & keep the 3 bad ones. It's hard to
think of a good algorithm which would do the Right Thing (maybe we
should assume that for reflections < say 20Å resolution the strong
ones are right, but I'm not sure this wouldn't cause worse problems)
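
To make the consensus point concrete, here is a schematic sketch in
Python (made-up numbers, and emphatically not Scala's actual code):
each observation is compared against the inverse-variance weighted mean
of the others, so three shadowed near-zero measurements outvote the
single genuine one.

    import numpy as np

    def flag_outlier(I, sigI, zmax=4.0):
        """Flag the observation most deviant from the weighted mean of the rest."""
        w = 1.0 / sigI**2
        z = []
        for k in range(len(I)):
            keep = np.arange(len(I)) != k
            mean_rest = np.average(I[keep], weights=w[keep])
            z.append(abs(I[k] - mean_rest) / sigI[k])
        k = int(np.argmax(z))
        return k if z[k] > zmax else None

    I   = np.array([5.0, 3.0, 4.0, 250.0])  # three 'behind the beamstop', one real
    sig = np.array([2.0, 2.0, 2.0, 10.0])
    print(flag_outlier(I, sig))             # -> 3: the one good measurement loses

Masking the beamstop removes the three bogus observations before they
can ever form such a majority.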


So tell the integration program where the beam stop is!

Phil


On 16 Feb 2009, at 09:07, Clemens Vonrhein wrote:


Dear Ho,

On Fri, Feb 13, 2009 at 04:45:29PM -0800, Ho-Leung Ng wrote:

Can you elaborate on the effects of improper inclusion of low
resolution (bogus?) reflections? Other than rejecting spots from
obvious artifacts, it bothers me to discard data. But I can also see
how a few inaccurate, very high intensity spots can throw off  
scaling.



 2) beamstop

The integration software will predict all reflections based on
your parameters (apart from the 000 reflection): it doesn't care
if such a reflection would be behind the beamstop shadow or
not. However, a reflection behind the beamstop will obviously not
actually be there - and the integrated intensity (probably a very
low value) will be wrong.

One example of such effects in the context of experimental
phasing is bogus anomalous differences. Imagine that your
beamstop is not exactly centred around the direct beam. You will
have it extending a little bit more to one side (giving you
maybe 20A low resolution) than to the other side (maybe 30A
resolution). In one orientation of the crystal you might be able
to collect a 25A (h,k,l) reflection very well (because it is on
the side where the beamstop only starts at 30A) - but the
(-h,-k,-l) reflection is collected in an orientation where it is
on the 20A-side of the beamstop, i.e. it is predicted within the
beamstop shadow.

Effect: you have a valid I+ measurement but a more-or-less zero
I- measurement, giving you a huge anomalous difference that
shouldn't really be there.

Now if you measured your data in different orientations (kappa
goniostat) with high enough multiplicity, this one bogus
measurement will probably be thrown out during
scaling/merging. You can e.g. check the so-called ROGUES file
produced by SCALA. But if you have the usual multiplicity of only
3-4 the scaling/merging process might not detect this as an
outlier correctly and it ends up in your data. Sure, programs
like autoSHARP will check for these outliers and try to reject
them - but this is only a hack/fix for the fundamental problem:
telling the integration program what the good area of the
detector is.

Solution: mask your beamstop. All integration programs have tools
for doing that (some are better than others). I haven't seen
any program being able to do it automatically in a reliable way
(if reliable would mean: correctly in at least 50% of cases) -
but I'm no expert in all of them by a long shot. It usually takes
me only about a minute or two to mask the beamstop by hand. A
small investment for a big return (good data) ;-)



[ccp4bb] Technician position available at IMP - Vienna, AUSTRIA

2009-02-16 Thread Stolt-bergner,Peggy
Dear All,


A LABORATORY TECHNICIAN position is open at the Research Institute of
Molecular Pathology (IMP) in Vienna, Austria, in the Group of Dr. Peggy
Stolt-Bergner (http://www.imp.ac.at/research/peggy-stolt-bergner/)

We are seeking a scientific Technician experienced in molecular biology and
protein analysis. You will be engaged in fundamental research within
different projects involving structural biology and protein biochemistry.
You will be involved in the following areas:

♦ MOLECULAR BIOLOGY: Cloning of DNA, expression and purification of
recombinant proteins in bacteria and/or yeast.

♦ PROTEIN CHEMISTRY: 2D gel analysis, Western Blotting, FPLC and HPLC,
biophysical measurements for protein characterization, protein
crystallization.

♦ DATA MANAGEMENT: Handling different programs for data evaluation. Use of
basic computer programs such as spreadsheets, graphing, and word processing.

The ideal candidate has comprehensive experience in molecular biology and
protein purification, and a fundamental knowledge of protein chemistry.
Enthusiasm and motivation, the ability to work as part of a team, and
excellent organizational skills are also necessary.  Experience with protein
crystallization and/or X-ray crystallography is an advantage, but is not
required.  Further training will be provided on-site.

The IMP is a basic research institute focused on understanding biological
mechanisms at the molecular and cellular level, and offers state-of-the-art
research facilities and a vibrant, international scientific community. The
working language is English.


Closing Date: March 15, 2009


Please forward your CV, including the names of 2-3 references, to:
Dr. Peggy Stolt-Bergner
Research Institute of Molecular Pathology
Dr. Bohr-gasse 7
A-1030 Vienna
Austria
Email: st...@imp.ac.at






[ccp4bb] Off-topic: ligand enrichment

2009-02-16 Thread Yingjie Peng
Dear guys,

Sorry for the off-topic question.

After solving my structure, I found my target ligand bound at the
potential binding site. I also found two more ligand molecules bound
along the path from the solvent to the binding site. I think this could
enrich the ligand at the binding site, enhancing its local concentration
and thus reducing the Km of the ligand.

I am wondering if anybody can give some suggestions on how to address
this question properly. If there is any similar case in the literature,
even better.

Thank you in advance.

Best wishes,

Yingjie

Yingjie PENG, Ph.D. student
Structural Biology Group
Shanghai Institute of Biochemistry and Cell Biology (SIBCB)
Shanghai Institute of Biological Sciences (SIBS)
Chinese Academy of Sciences (CAS)
320 Yue Yang Road, Shanghai 200031
P. R. China
86-21-54921117
Email: yjp...@sibs.ac.cn


Re: [ccp4bb] PEG3350-based cryoprotectant

2009-02-16 Thread Alfred Lammens
Hi Keith,

 

I froze my crystals with 35% PEG4000 as cryo.

They grow in 16% PEG4000, 4% Isopropanol, 0.1 M Na-Acetate. I just put the
hanging drop over 35% PEG4000, 4% Isopropanol, 0.1 M Na-Acetate and let it
equilibrate overnight. They diffracted even better than with other cryos -
might be an additional shrinking effect.

 

All the best

 

Alfred

 

 

  _  

From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Keith
Romano
Sent: Sunday, February 15, 2009 3:15 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] PEG3350-based cryoprotectant

 

Hi all,

I have protein crystals in complex with substrate grown in 20-30% (w/v) PEG
3350, 4% (w/v) ammonium sulfate, and 0.1M sodium MES buffer at pH 6.5. I
purify and concentrate my protein in a high salt buffer (0.5M NaCl, 0.1M
sodium MES at pH 6.5, 10% glycerol, 2mM DTT). 

I grow my crystals with vapor diffusion in 24-well format by hanging a drop
of equal volume protein and precipitant solution over the reservoir of
precipitant solution. Interestingly, when I do the math, the initial
osmolarity of my drop is greater than that of the reservoir (due to the high
NaCl in the protein solution). As far as I know, this runs against the
principles of the vapor diffusion method, as vapor will leave the reservoir
and enter the drop...

Nevertheless, these conditions yield giant, rod-like crystals over 1mm long.
However, they don't react well to direct flash freezing - the spots tend to
smear and indexed refinement leads to high mosaicity. I have tried many
cryoprotectant solutions by making up the given precipitant solution with
15-25% glycerol or ethylene glycol, including a range from 0mM to 350mM
NaCl. In general, dipping the crystals in cryoprotectant improves the
diffraction and lessens the spot smearing. However, the diffraction
usually becomes highly twinned and hard to index. After transfer into the
cryoprotectant, the crystals appear to crack and often break apart, as
observed under the microscope.

It seems like my crystals are very sensitive to the osmotic/ionic change
when transferred to the cryoprotectant. I have been unable to find a
stable cryoprotectant, and I am wondering if anyone has had similar
experience with high-weight PEGs and could suggest some cryoprotectants to
try out. 

 

Any input would be greatly appreciated! 

 

Keith

 


Department of Biochemistry & Molecular Pharmacology

970L Lazare Research Building

University of Massachusetts Medical School

364 Plantation Street

Worcester, MA 01605

 






 



Re: [ccp4bb] unstable refinement

2009-02-16 Thread Ian Tickle
Clemens, I know we've had this discussion several times before, but I'd
like to take you up on the point you made that reducing Rfree-R is
necessarily always a 'good thing'.  Suppose the refinement had started
from a point where Rfree was biased, e.g. the test set in use had
previously been part of the working set, so that Rfree-R was too small.
In that case one would hope and indeed expect that Rfree-R would
increase on further refinement now excluding the test set.  Shouldn't
the criterion be that Rfree-R should attain its expected value <Rfree-R>
(dependent of course on the observation/parameter ratio and the
weighting parameters), so a high value of |(Rfree-R) - <Rfree-R>| is
bad, i.e. any significant deviations of (Rfree-R) from its expectation
are bad?

I would go further than that and say that anyway Rfree is meaningless
unless the refinement has converged, i.e. reached its maximum (local or
global) total likelihood (i.e. data+restraints).  So one simply cannot
compare the Rfree (or Rfree-R) values at the beginning and end of a run.
The purpose of Rfree (or better free likelihood) is surely to compare
the *results* of *different* runs where convergence has been attained
and where the *refinement protocol* (i.e. selection of parameters to
vary and weighting parameters) has been varied, and then to choose as
the optimal protocol (and therefore optimal result) the one that gave
the lowest Rfree (or highest free likelihood).

Rfree-R is then used as a subsidiary test to verify that it has attained
its expected value; if not then something is wrong, i.e. either the
refinement didn't converge (Rfree-R lower than <Rfree-R>) or there are
non-random errors (Rfree-R higher than <Rfree-R>), or a combination of
factors.

Cheers

-- Ian

 -Original Message-
 From: owner-ccp...@jiscmail.ac.uk [mailto:owner-ccp...@jiscmail.ac.uk]
On
 Behalf Of Clemens Vonrhein
 Sent: 13 February 2009 17:15
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] unstable refinement
 
 * you don't mention if the R and Rfree move up identically - or if you
   have a faster increase in R than in Rfree, which would mean that
   your R-factors are increasing (bad I guess) but your Rfree-R gap is
   closing down (good).
 
   So moving from R/Rfree=0.20/0.35 to R/Rfree=0.32/0.37 is different
   than moving from R/Rfree=0.20/0.25 to R/Rfree=0.23/0.28.




[ccp4bb] Refinement

2009-02-16 Thread Rana Refaey
Hi all

I have two datasets, at resolutions of 1.6 and 1.65 Å, both of the same
molecule; the problem I am facing is the refinement.

The R factors are stuck at very high values, 0.3329 and 0.3791, after
restrained refinement, although the model fits the electron density very
well.

Regards,
Rana



[ccp4bb] Detector pitch, roll, yaw

2009-02-16 Thread jonathan elegheert

Dear bb,

can somebody explain to me the exact definitions for detector roll, 
pitch and yaw? Can they be detector dependent or rather dependent on the 
beamline setup? Is there a relationship with the goniometer position?


Many thanks in advance,

Jonathan
--
Jonathan Elegheert
Ph.D. Student

Unit for Structural Biology & Biophysics
http://www.lprobe.ugent.be/xray.html

Lab for Protein Biochemistry and Biomolecular Engineering
Department of Biochemistry & Microbiology
Ghent University, Belgium

e-mail: jonathan.eleghe...@ugent.be


Re: [ccp4bb] unstable refinement

2009-02-16 Thread George M. Sheldrick
Dear Ian,

That was in fact one of my reasons for only calculating the free R
at the end of a SHELXL refinement run (the other reason, now less 
important, was to save some CPU time). I have to add that I am no
longer completely convinced that I made the right decision all 
those years ago. A stable refinement in which R decreases but 
Rfree goes through a minimum and then starts to rise might be a 
useful indication of overfitting?!

Best wishes, George 

Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


On Mon, 16 Feb 2009, Ian Tickle wrote:

 Clemens, I know we've had this discussion several times before, but I'd
 like to take you up on the point you made that reducing Rfree-R is
 necessarily always a 'good thing'.  Suppose the refinement had started
 from a point where Rfree was biased, e.g. the test set in use had
 previously been part of the working set, so that Rfree-R was too small.
 In that case one would hope and indeed expect that Rfree-R would
 increase on further refinement now excluding the test set.  Shouldn't
 the criterion be that Rfree-R should attain its expected value <Rfree-R>
 (dependent of course on the observation/parameter ratio and the
 weighting parameters), so a high value of |(Rfree-R) - <Rfree-R>| is
 bad, i.e. any significant deviations of (Rfree-R) from its expectation
 are bad?
 
 I would go further than that and say that anyway Rfree is meaningless
 unless the refinement has converged, i.e. reached its maximum (local or
 global) total likelihood (i.e. data+restraints).  So one simply cannot
 compare the Rfree (or Rfree-R) values at the beginning and end of a run.
 The purpose of Rfree (or better free likelihood) is surely to compare
 the *results* of *different* runs where convergence has been attained
 and where the *refinement protocol* (i.e. selection of parameters to
 vary and weighting parameters) has been varied, and then to choose as
 the optimal protocol (and therefore optimal result) the one that gave
 the lowest Rfree (or highest free likelihood).
 
 Rfree-R is then used as a subsidiary test to verify that it has attained
 its expected value; if not then something is wrong, i.e. either the
 refinement didn't converge (Rfree-R lower than <Rfree-R>) or there are
 non-random errors (Rfree-R higher than <Rfree-R>), or a combination of
 factors.
 
 Cheers
 
 -- Ian
 
  -Original Message-
  From: owner-ccp...@jiscmail.ac.uk [mailto:owner-ccp...@jiscmail.ac.uk]
 On
  Behalf Of Clemens Vonrhein
  Sent: 13 February 2009 17:15
  To: CCP4BB@JISCMAIL.AC.UK
  Subject: Re: [ccp4bb] unstable refinement
  
  * you don't mention if the R and Rfree move up identically - or if you
have a faster increase in R than in Rfree, which would mean that
your R-factors are increasing (bad I guess) but your Rfree-R gap is
closing down (good).
  
 So moving from R/Rfree=0.20/0.35 to R/Rfree=0.32/0.37 is different
than moving from R/Rfree=0.20/0.25 to R/Rfree=0.23/0.28.
 
 
 


[ccp4bb] moving the cofactor in model

2009-02-16 Thread Elad Binshtein
I have a density map and a model, and I want to place the cofactor.
I use WinCoot and have found the electron density for the cofactor, but...
how can I move the cofactor and play with it in this area?


Re: [ccp4bb] Refinement

2009-02-16 Thread Tim Gruene

Hi Rana,

Your email contains very little information which might help to find the
reason for the high R-values:


- the resolution itself does not mean your data are of good quality. What
is the completeness (overall and in the high-resolution shell), what are
Rint/Rsym and I/sigma? Maybe you are including a lot of noise.

- how did you solve the phase problem? MR or experimental phasing, i.e.
  could it be that your model is wrong? What's the Rfree value?
- is your model complete, or are parts missing because there is no
  density?
- have you tried arp/warp to get to a better model?
- are you confident the space group is correct? Maybe the data are
  twinned?

and probably many more issues one might consider.

Tim



--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


On Mon, 16 Feb 2009, Rana Refaey wrote:


Hi all

I have two datasets, at resolutions of 1.6 and 1.65 Å, both of the same
molecule; the problem I am facing is the refinement.

The R factors are stuck at very high values, 0.3329 and 0.3791, after
restrained refinement, although the model fits the electron density very
well.

Regards,
Rana




Re: [ccp4bb] Detector pitch, roll, yaw

2009-02-16 Thread Tim Gruene

Hi,

this is probably dependent on the integration program you are using - 
there might not be one universal definition... did you compare the 
programs' documentation?


Tim

--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


On Mon, 16 Feb 2009, jonathan elegheert wrote:


Dear bb,

can somebody explain to me the exact definitions for detector roll, pitch and 
yaw? Can they be detector dependent or rather dependent on the beamline 
setup? Is there a relationship with the goniometer position?


Many thanks in advance,

Jonathan
--
Jonathan Elegheert
Ph.D. Student

Unit for Structural Biology & Biophysics
http://www.lprobe.ugent.be/xray.html

Lab for Protein Biochemistry and Biomolecular Engineering
Department of Biochemistry & Microbiology
Ghent University, Belgium

e-mail: jonathan.eleghe...@ugent.be



Re: [ccp4bb] moving the cofactor in model

2009-02-16 Thread Anthony Addlagatta
Elad,

If it is only about moving the coordinates of the co-factor and placing it
in the density: once you load the coordinate file for your co-factor into
WinCoot, it is pretty much a drag-and-rotate process using the option
Rotate/Translate Zone.

But if your question is about playing with the geometry of the co-factor,
you need to generate/download the cif file and read it into Coot. You can
refer to the previous CCP4 thread given below with respect to this.

http://www.mail-archive.com/ccp4bb@jiscmail.ac.uk/msg07769.html

Anthony


On Mon, 16 Feb 2009 11:26:35 +, Elad Binshtein wrote
 I have a density map and a model, and I want to place the cofactor.
 I use WinCoot and have found the electron density for the cofactor, but...
 how can I move the cofactor and play with it in this area?


-
Anthony Addlagatta, Ph.D.
Ramanujan Fellow and Senior Scientist
Center for Chemical Biology
Indian Institute of Chemical Technology [IICT]
Tarnaka, Hyderabad- 57, INDIA
Tel:+91-40-27191583
Url: http://www.iictindia.org/zacb/Dr.%20Anthony.aspx


[ccp4bb] Questions about (possibly) twinned data

2009-02-16 Thread Van Den Berg, Bert
Hello all,
 
we have a dataset collected from multiple (2 or 3) parts of the same crystal
with a microbeam (20 micron). The merged data scales OK (not great) in
monoclinic (1-3% rejections). The resolution is 3.2-3.3 A, so the data is not
fantastic. This is the cell (similar for other datasets):

Cell: 70.012  126.449  107.988   90.000  89.946  90.000   P21

Processing in orthorhombic makes the scaling a lot worse, so I'm assuming
it's monoclinic for now. Running xtriage gives the following summary:

---
Twinning and intensity statistics summary (acentric data):

Statistics independent of twin laws
  - <I^2>/<I>^2 : 1.877
  - <F>^2/<F^2> : 0.834
  - <|E^2-1|>   : 0.663
  - <|L|>, <L^2>: 0.411, 0.235
   Multivariate Z score L-test: 6.737
   The multivariate Z score is a quality measure of the given
   spread in intensities. Good to reasonable data are expected
   to have a Z score lower than 3.5.
   Large values can indicate twinning, but small values do not
   necessarily exclude it.


Statistics depending on twin laws
----------------------------------------------------------------
| Operator | type | R obs. | Britton alpha | H alpha | ML alpha |
----------------------------------------------------------------
| h,-k,-l  |  PM  | 0.167  | 0.367         | 0.339   | 0.152    |
----------------------------------------------------------------

Patterson analyses
  - Largest peak height   : 5.962
   (corresponding p value : 0.72096)


The largest off-origin peak in the Patterson function is 5.96% of the
height of the origin peak. No significant pseudotranslation is detected.

So, I'm assuming that these crystals are monoclinic and that they are 
pseudo-merohedrally twinned. Is this a reasonable assumption? I get a decent 
solution for the P21 data from molecular replacement with a 50% identical model 
(LLG 900, with the rotation Z-scores low (4-5), but the corresponding 
translation Z-scores high (8-20)).

My questions are: what would be the best way to refine? More specifically,
what twin fraction should be used, as the different tests give different
fractions? Is the twin fraction automatically determined in phenix.refine or
does it need to be specified? Finally, can twinning be responsible for the
fact that the data do not scale well (using data collected on different
parts of the same crystal)?

Any hints appreciated!

Cheers, Bert

 
Bert van den Berg
University of Massachusetts Medical School
Program in Molecular Medicine
Biotech II, 373 Plantation Street, Suite 115
Worcester MA 01605
Phone: 508 856 1201 (office); 508 856 1211 (lab)
e-mail: bert.vandenb...@umassmed.edu
http://www.umassmed.edu/pmm/faculty/vandenberg.cfm

 


Re: [ccp4bb] unstable refinement

2009-02-16 Thread Ian Tickle
Dear George

I would still maintain that values of Rfree where the refinement had not
attained convergence are totally uninformative, so I would say you made
the right call!  During a refinement run, Rfree is often observed to
fall initially and then increase towards the end, though usually not
significantly.  One cannot deduce anything from this behaviour, and
indeed it is not at all surprising: since Rfree is not the target
function of the optimisation (or even correlated with it) there's no
reason why it should do anything in particular.  Exactly the same
applies to Rwork: because it's a completely different function from the
target function (it contains no weighting information for one thing),
there's absolutely no reason why Rwork should be a minimum at
convergence (even in the case of unrestrained refinement, and even
though it surely is correlated with the target function).  If that were
true we would be able to use Rwork as the target function!

The test for overfitting can only be done if you have at least 2
refinement runs done with different protocols (e.g. no. of waters added)
to compare: the one with the higher Rfree (or lower free likelihood) at
convergence is overfitted.  Note that this is a relative test: you can
never be sure that a particular model is not overfitted.  It's always
possible for someone to come along in the future using a different
parameter set (or different weighting) and produce a lower Rfree than
you did (using the same data of course), making your model overfitted
after the fact!

Cheers

-- Ian

 -Original Message-
 From: George M. Sheldrick [mailto:gshe...@shelx.uni-ac.gwdg.de]
 Sent: 16 February 2009 11:24
 To: Ian Tickle
 Cc: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] unstable refinement
 
 
 Dear Ian,
 
 That was in fact one of my reasons for only calculating the free R
 at the end of a SHELXL refinement run (the other reason, now less
 important, was to save some CPU time). I have to add that I am no
 longer completely convinced that I made the right decision all
 those years ago. A stable refinement in which R decreases but
 Rfree goes through a minimum and then starts to rise might be a
 useful indication of overfitting?!
 
 Best wishes, George
 
 Prof. George M. Sheldrick FRS
 Dept. Structural Chemistry,
 University of Goettingen,
 Tammannstr. 4,
 D37077 Goettingen, Germany
 Tel. +49-551-39-3021 or -3068
 Fax. +49-551-39-22582
 
 
 On Mon, 16 Feb 2009, Ian Tickle wrote:
 
 Clemens, I know we've had this discussion several times before, but I'd
 like to take you up on the point you made that reducing Rfree-R is
 necessarily always a 'good thing'.  Suppose the refinement had started
 from a point where Rfree was biased, e.g. the test set in use had
 previously been part of the working set, so that Rfree-R was too small.
 In that case one would hope and indeed expect that Rfree-R would
 increase on further refinement now excluding the test set.  Shouldn't
 the criterion be that Rfree-R should attain its expected value <Rfree-R>
 (dependent of course on the observation/parameter ratio and the
 weighting parameters), so a high value of |(Rfree-R) - <Rfree-R>| is
 bad, i.e. any significant deviations of (Rfree-R) from its expectation
 are bad?

 I would go further than that and say that anyway Rfree is meaningless
 unless the refinement has converged, i.e. reached its maximum (local or
 global) total likelihood (i.e. data+restraints).  So one simply cannot
 compare the Rfree (or Rfree-R) values at the beginning and end of a run.
 The purpose of Rfree (or better free likelihood) is surely to compare
 the *results* of *different* runs where convergence has been attained
 and where the *refinement protocol* (i.e. selection of parameters to
 vary and weighting parameters) has been varied, and then to choose as
 the optimal protocol (and therefore optimal result) the one that gave
 the lowest Rfree (or highest free likelihood).

 Rfree-R is then used as a subsidiary test to verify that it has attained
 its expected value; if not then something is wrong, i.e. either the
 refinement didn't converge (Rfree-R lower than <Rfree-R>) or there are
 non-random errors (Rfree-R higher than <Rfree-R>), or a combination of
 factors.

 Cheers

 -- Ian

  -Original Message-
  From: owner-ccp...@jiscmail.ac.uk [mailto:owner-ccp...@jiscmail.ac.uk] On
  Behalf Of Clemens Vonrhein
  Sent: 13 February 2009 17:15
  To: CCP4BB@JISCMAIL.AC.UK
  Subject: Re: [ccp4bb] unstable refinement

  * you don't mention if the R and Rfree move up identically - or if you
    have a faster increase in R than in Rfree, which would mean that
    your R-factors are increasing (bad I guess) but your Rfree-R gap is
    closing down (good).

    So moving from R/Rfree=0.20/0.35 to R/Rfree=0.32/0.37 is different
    than moving from R/Rfree=0.20/0.25 to R/Rfree=0.23/0.28.
 
 

Re: [ccp4bb] Detector pitch, roll, yaw

2009-02-16 Thread Bram Schierbeek




Hi Jonathan,

Pitch, roll and yaw come from flight dynamics, see e.g.
http://www.answers.com/topic/pitch-yaw-roll.
With X parallel to the X-ray beam, Z pointing to the zenith and Y
perpendicular to X & Z, pitch, roll and yaw correspond to the roty, rotx
and rotz of the detector with respect to the primary beam. So they are
dependent on the beamline setup.
In principle, pitch, roll and yaw should be independent of omega, kappa
(or chi for that matter), phi, theta-swing or detector distance.
In theory there could be a difference between the pitch, roll and yaw at
small and at large distance, e.g. if your detector track is bent or curved.
I have not seen this in practice with the equipment I most commonly use.

As far as I know, there is only one integration program (SAINT) that uses
this definition of detector rotations.
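
As a concrete illustration of that convention (a sketch only: the axis
assignment follows the description above, and the composition order is
program-specific), the three detector rotations are just ordinary
rotation matrices:

    import numpy as np

    def rot_x(a):  # roll: rotation about X, the beam axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):  # pitch: rotation about Y, the lateral axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):  # yaw: rotation about Z, pointing to the zenith
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def detector_orientation(roll, pitch, yaw):
        # one possible composition; order matters and differs between programs
        return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)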

Best wishes,

Bram

jonathan elegheert wrote:
Dear bb,
  
  
can somebody explain to me the exact definitions for detector roll,
pitch and yaw? Can they be detector dependent or rather dependent on
the beamline setup? Is there a relationship with the goniometer
position?
  
  
Many thanks in advance,
  
  
Jonathan
  


--
Dr. Bram Schierbeek
Application Scientist Structural Biology

Bruker AXS B.V.
Oostsingel 209, P.O. Box 811
2600 AV Delft, the Netherlands
Tel.: +31 (15) 2152508
Fax: +31 (15) 2152599
bram.schierb...@bruker-axs.nl
www.bruker-axs.com


Re: [ccp4bb] Off-topic: ligand enrichment

2009-02-16 Thread Edward A. Berry

Yingjie Peng wrote:
..
After solving my structure, I found my target ligand bound at the
potential binding site. I also found two more ligand molecules bound
along the path from the solvent to the binding site. I think this could
enrich the ligand at the binding site, enhancing its local concentration
and thus reducing the Km of the ligand.


I've heard this kind of explanation for alternate binding sites before,
but I am skeptical. To the extent that the bound ligands are in equilibrium
with the bulk phase, the local activity of the ligand will be the same as in
the bulk phase- i.e. the bound ligands don't count in figuring the effective
concentration. If anything they lower the activity, competing with the
active site if the concentration of ligand is not >> [enzyme].

There would be a local buffering effect, so if the enzyme is gated by
a nerve impulse or absorption of a quantum of light so that it is
usually inactive and turns on suddenly, the local binding sites could
release their load in response to the local depletion faster than ligand
could diffuse in from the bulk- Then during the next off period all the
local binding sites recharge.  This would be like the function of a
bypass capacitor in a digital electronic circuit. But during steady-state
turnover I don't see how the bound ligand could help any.

It may be easy to get ligand in physiologically irrelevant low-affinity
sites due to the high concentration of protein in crystallization experiments.
The protein is a good fraction of 1 mM, whereas physiologically important
binding sites are often in the nM to uM range. So if you add a 3-fold excess
of your ligand, and one equivalent binds at the specific site, there will
still be 100 uM or so free ligand, which may bind at low affinity non-specific
sites. Whether this loosely bound ligand will be well-ordered enough to
identify in the density is another question.
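
A back-of-the-envelope version of that last point, with purely
illustrative numbers:

    protein = 0.5e-3             # M; protein 'a good fraction of 1 mM'
    ligand  = 3 * protein        # M; 3-fold excess of ligand added
    bound   = protein            # M; one equivalent bound at the specific site
    free    = ligand - bound     # M; ~1 mM free ligand remains in the drop
    print(f"free ligand ~ {free*1e6:.0f} uM")  # far above a nM-uM specific-site Kd

Even with much more conservative numbers the free ligand stays in the
100 uM range, so weak non-specific sites can easily be populated.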

Just my thoughts on the matter,
Ed


Re: [ccp4bb] Off-topic: ligand enrichment

2009-02-16 Thread Herman . Schreuder
Dear Yingjie,
I agree with Ed Berry in that I do not believe that nearby binding sites
influence the Km (~Kd), which depends on bound and unbound concentrations.
However, there could be a strong kinetic effect: e.g. these secondary
binding sites could act as stepping stones when the path to the primary
binding site would otherwise be difficult to pass. The crystal structure
of the potassium channel provides a beautiful example of this.
 
Best regards,
Herman 




From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On
Behalf Of Yingjie Peng
Sent: Monday, February 16, 2009 11:09 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Off-topic: ligand enrichment


Dear guys,

Sorry for the off-topic question.

After solving my structure, I found my target ligand bound at the
potential binding site. I also found two more ligand molecules bound
along the path from the solvent to the binding site. I think this
could enrich the ligand at the binding site, enhancing its local
concentration and thus reducing the Km of the ligand.

I am wondering if anybody can give some suggestions on how to address
this question properly. If there is any similar case in the literature,
even better.

Thank you in advance.

Best wishes,

Yingjie

Yingjie PENG, Ph.D. student
Structural Biology Group
Shanghai Institute of Biochemistry and Cell Biology (SIBCB)
Shanghai Institute of Biological Sciences (SIBS)
Chinese Academy of Sciences (CAS)
320 Yue Yang Road, Shanghai 200031
P. R. China
86-21-54921117
Email: yjp...@sibs.ac.cn




Re: [ccp4bb] Questions about (possibly) twinned data

2009-02-16 Thread Christopher Colbert
Hi Bert,

It seems unlikely you are experiencing merohedral twinning in your crystal
since none of your unit cell dimensions are of equal length or integer
multiples.  For your cell, you would expect to see multiple lattices.  Is
it possible you have a dimer in the asymmetric unit?  Strong NCS parallel
to a principal lattice direction can sometimes give twin-like
statistics, especially at lower resolutions.

Hope this helps,

Chris


On Mon, 16 Feb 2009, Van Den Berg, Bert wrote:

Hello all,
 
we have a dataset collected from multiple (2 or 3) parts of the same
crystal with a microbeam (20 micron). The merged data scales OK (not great)
in monoclinic (1-3% rejections). The resolution is 3.2-3.3 A, so the data is
not fantastic. This is the cell (similar for other datasets):

Cell: 70.012  126.449  107.988   90.000  89.946  90.000   P21

Processing in orthorhombic makes the scaling a lot worse, so I'm assuming
it's monoclinic for now. Running xtriage gives the following summary:

---
Twinning and intensity statistics summary (acentric data):

Statistics independent of twin laws
  - <I^2>/<I>^2 : 1.877
  - <F>^2/<F^2> : 0.834
  - <|E^2-1|>   : 0.663
  - <|L|>, <L^2>: 0.411, 0.235
   Multivariate Z score L-test: 6.737
   The multivariate Z score is a quality measure of the given
   spread in intensities. Good to reasonable data are expected
   to have a Z score lower than 3.5.
   Large values can indicate twinning, but small values do not
   necessarily exclude it.


Statistics depending on twin laws
----------------------------------------------------------------
| Operator | type | R obs. | Britton alpha | H alpha | ML alpha |
----------------------------------------------------------------
| h,-k,-l  |  PM  | 0.167  | 0.367         | 0.339   | 0.152    |
----------------------------------------------------------------

Patterson analyses
  - Largest peak height   : 5.962
   (corresponding p value : 0.72096)


The largest off-origin peak in the Patterson function is 5.96% of the
height of the origin peak. No significant pseudotranslation is detected.

So, I'm assuming that these crystals are monoclinic and that they are 
pseudo-merohedrally twinned. Is this a reasonable assumption? I get a decent 
solution for the P21 data from molecular replacement with a 50% identical 
model (LLG 900, with the rotation Z-scores low (4-5), but the corresponding 
translation Z-scores high (8-20)).

My questions are: what would be the best way to refine? More specifically,
what twin fraction should be used, as the different tests give different
fractions? Is the twin fraction automatically determined in phenix.refine or
does it need to be specified? Finally, can twinning be responsible for the
fact that the data do not scale well (using data collected on different
parts of the same crystal)?

Any hints appreciated!

Cheers, Bert

 
Bert van den Berg
University of Massachusetts Medical School
Program in Molecular Medicine
Biotech II, 373 Plantation Street, Suite 115
Worcester MA 01605
Phone: 508 856 1201 (office); 508 856 1211 (lab)
e-mail: bert.vandenb...@umassmed.edu
http://www.umassmed.edu/pmm/faculty/vandenberg.cfm

 


Christopher L. Colbert, Ph.D.
InstructorPhone: (214) 645 5944
University of Texas Southwestern Medical Center   FAX:   (214) 645 5945
6001 Forest Park Lane
Dallas, TX 75390


Re: [ccp4bb] Questions about (possibly) twinned data

2009-02-16 Thread Borhani, David
Bert,

Your self-Patterson peak may be real, i.e. you have pseudotranslation,
which can then make the statistics *look* like the crystal is twinned.
Try a self-Patterson (perhaps sharpened) at somewhat lower resolution,
e.g. 6 Å. Maybe the peak is real, but is only 6% of the origin due to a
slight mis-orientation of the molecules.

Dave
David Borhani, Ph.D. 
D. E. Shaw Research, LLC 
120 West Forty-Fifth Street, 39th Floor 
New York, NY 10036 
david.borh...@deshawresearch.com 
212-478-0698 
http://www.deshawresearch.com




From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On
Behalf Of Van Den Berg, Bert
Sent: Monday, February 16, 2009 9:12 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Questions about (possibly) twinned data


Hello all,
 
we have a dataset collected from multiple (2 or 3) parts of the same
crystal with a microbeam (20 micron). The merged data scales OK
(not great) in monoclinic (1-3% rejections). The resolution is 3.2-3.3
A, so the data is not fantastic. This is the cell (similar for other
datasets):

Cell: 70.012  126.449  107.988   90.000  89.946  90.000   P21

Processing in orthorhombic makes the scaling a lot worse, so I'm
assuming it's monoclinic for now. Running xtriage gives the following
summary:

---
Twinning and intensity statistics summary (acentric data):

Statistics independent of twin laws
  - <I^2>/<I>^2 : 1.877
  - <F>^2/<F^2> : 0.834
  - <|E^2-1|>   : 0.663
  - <|L|>, <L^2>: 0.411, 0.235
       Multivariate Z score L-test: 6.737
       The multivariate Z score is a quality measure of the given
       spread in intensities. Good to reasonable data are expected
       to have a Z score lower than 3.5.
       Large values can indicate twinning, but small values do not
       necessarily exclude it.

Statistics depending on twin laws
----------------------------------------------------------------
| Operator | type | R obs. | Britton alpha | H alpha | ML alpha |
----------------------------------------------------------------
| h,-k,-l  |  PM  | 0.167  | 0.367         | 0.339   | 0.152    |
----------------------------------------------------------------

Patterson analyses
  - Largest peak height   : 5.962
       (corresponding p value : 0.72096)

The largest off-origin peak in the Patterson function is 5.96% of the
height of the origin peak. No significant pseudotranslation is detected.

So, I'm assuming that these crystals are monoclinic and that they are
pseudo-merohedrally twinned. Is this a reasonable assumption? I get a
decent solution for the P21 data from molecular replacement with a 50%
identical model (LLG 900, with the rotation Z-scores low (4-5), but the
corresponding translation Z-scores high (8-20)).

My questions are: what would be the best way to refine? More
specifically, what twin fraction should be used, as the different tests
give different fractions? Is the twin fraction automatically determined
in phenix.refine or does it need to be specified? Finally, can twinning
be responsible for the fact that the data do not scale well (using data
collected on different parts of the same crystal)?

Any hints appreciated!

Cheers, Bert

 
Bert van den Berg
University of Massachusetts Medical School
Program in Molecular Medicine
Biotech II, 373 Plantation Street, Suite 115
Worcester MA 01605
Phone: 508 856 1201 (office); 508 856 1211 (lab)
e-mail: bert.vandenb...@umassmed.edu
http://www.umassmed.edu/pmm/faculty/vandenberg.cfm

 



[ccp4bb] deadline for beamtime proposals during April/May 2009 for BM14/ESRF 20th Feb. 2009

2009-02-16 Thread Martin A. Walsh
Dear all, the deadline for submission of proposals for beamtime at BM14
(ESRF) for the second run of 2009 [April/May] is this Friday, the 20th
February 2009. Beamline proposals can be submitted on-line from the BM14
webpages at http://www.bm14.eu

PLEASE remember this call is distinct from the ESRF proposal call and allows
you direct access to BM14. We provide funding for travel and subsistence for
UK, EMBL and EU member state countries - special requests can be forwarded
directly to wa...@esrf and we will do our best to accommodate you.

BEAMLINE FEATURES

**MARMOSAIC 225 CCD detector
(http://www.mar-usa.com/products/mx_series.htm)

**MD2 Microdiffractometer
(http://www.embl-grenoble.fr/groups/instr/MD2-17.pdf) 

**MiniKappa goniostat (http://www.embl-grenoble.fr/groups/instr/MK2.pdf
-videos available here -
http://www.embl-grenoble.fr/groups/instr/mk2videos.html )

**Robotic Sample changer
(http://journals.iucr.org/d/issues/2006/10/00/gx5085/index.html)

**Easily tuneable over the 6.5 - 18 keV range, with X-ray flux at the sample
through a 100 micron aperture of ~1.5x10^10 photons/s/200 mA synchrotron
beam current at 12.7 keV

**REMOTE ACCESS for data collection 

** HC1 HUMIDITY CONTROL DEVICE (Upon request and subject to availability) as
a tool for improving diffraction quality. See:

http://www.embl-grenoble.fr/groups/instr/humidifier_page1.html

FULL CALL DETAILS follow:




SYNCHROTRON BEAM TIME FOR MACROMOLECULAR CRYSTALLOGRAPHY AT THE UK/EMBL MAD
BEAMLINE BM14, ESRF 

Scheduling Period: April/May 2009

DEADLINE for proposals: 20th February 2009


-

Call for proposals from EMBL member states, EU states and associated states
for Apr/May 2009 synchrotron run at ESRF. The bending magnet MAD beamline
BM14 at the ESRF is being operated as a UK Collaborative Research Group(CRG)
beamline in collaboration with the EMBL Grenoble Outstation. Proposals for
beam time can be made via a user-friendly web based application form, both
for monochromatic and multi-wavelength experiments.

Beam time is available to groups from the UK and EMBL, EU and EU associated
member states. Deadline (20th Feb. 2009) for beamtime proposals at BM14/ESRF
for April/May 2009.

Reimbursement is available for travel and subsistence for groups based in
the UK and EU member or associate member states (see NeedToKnow Link in Main
menu on Beamline Homepage for details on reimbursement and other useful
information)

Full details of the beamline characteristics, the user program and the
procedure for the submission of electronic proposals can be found at the
beamline homepage http://www.bm14.eu or by following the links from
http://www.embl-grenoble.fr/ or http://www.esrf.eu/


--

Please don't hesitate to contact me if you have any queries

Thanks

Martin

 

 

 

-

Martin Walsh,

MRC Group Leader and BM14 Responsible

Medical Research Council (MRC) France,

CRG BM14,

c/o ESRF,

6, rue Jules Horowitz,

B.P. 220

38043 Grenoble CEDEX

France

Tel:   +33 4 38.88.19.69

Fax:  +33 4 76.88.23.80

email1: wa...@esrf.fr

email2: walshb...@gmail.com

 

 

 



[ccp4bb] PDB protein structures as screen saver

2009-02-16 Thread Jayashankar
Dear Scientists,

It may be too much...

But as a biophysics student I would be happy to have PDB structures as
my computer's screen saver rather than some funny and fancy stuff. It
may also motivate me to solve my own structures in the future.

I want to ask: is there any existing script that fetches structures one
by one, with a one-line description of each structure?




S.Jayashankar
Research Student
Institute for Biophysical Chemistry
Hannover Medical School
Germany.


Re: [ccp4bb] PDB protein structures as screen saver

2009-02-16 Thread Nadir T. Mrabet

Hi,
You may want to have a look at 
http://www.luminorum.com/html/luminorum_ltd___extras.html.

hth
Nadir

--

Pr. Nadir T. Mrabet
   Cellular & Molecular Biochemistry
   INSERM U-724
   Nancy University, School of Medicine
   9, Avenue de la Foret de Haye, BP 184
   54505 Vandoeuvre-les-Nancy Cedex
   France
   Phone: +33 (0)3.83.68.32.73
   Fax:   +33 (0)3.83.68.32.79
   E-mail: nadir.mra...@medecine.uhp-nancy.fr
   



Jayashankar wrote:

Dear Scientists,

It may be too much...

But as a biophysics student I would be happy to have PDB structures as
my computer's screen saver rather than some funny and fancy stuff.

It may also motivate me to solve my own structures in the future.

I want to ask: is there any existing script that fetches structures one
by one, with a one-line description of each structure?





S.Jayashankar
Research Student
Institute for Biophysical Chemistry
Hannover Medical School
Germany.


[ccp4bb] refine metal with phenix

2009-02-16 Thread Lisa Wang
Hi all,
 The resolution of my structure is 3.1 Å. There are three Mg ions bound in
this structure. I tried to refine it with phenix. There was clear extra
density before I added the Mg atoms, but when I put the Mg in the centre of
the density with good coordination and try to refine with phenix, the Mg
ions move away. Does phenix need an additional parameter file to refine the
Mg and keep it at ~2.1 Å from its coordinating atoms?
 Thanks.
Lisa


Re: [ccp4bb] PDB protein structures as screen saver

2009-02-16 Thread Jürgen Bosch
Get a Mac, render some images in PyMOL and run a slideshow with the Ken
Burns effect if you want.


Jürgen
On 16 Feb 2009, at 13:22, Jayashankar wrote:


Dear Scientists,

It may be too much...

But as a biophysics student I would be happy to have PDB structures as
my computer's screen saver rather than some funny and fancy stuff. It
may also motivate me to solve my own structures in the future.

I want to ask: is there any existing script that fetches structures one
by one, with a one-line description of each structure?





S.Jayashankar
Research Student
Institute for Biophysical Chemistry
Hannover Medical School
Germany.


-
Jürgen Bosch
Johns Hopkins Bloomberg School of Public Health
Biochemistry and Molecular Biology, W8708
615 North Wolfe Street
Baltimore, MD 21205
Phone: +1-410-614-4742
Fax:  +1-410-955-3655


Re: [ccp4bb] PDB protein structures as screen saver

2009-02-16 Thread Sabuj Pattanayek
But as a biophysics student I would be happy to have PDB structures as
my computer's screen saver rather than some funny and fancy stuff. It
may also motivate me to solve my own structures in the future.


http://74.125.47.132/search?q=cache:_WRPDpKtbtcJ:ubuntuforums.org/showthread.php%3Ft%3D291503+linux+pdb+screen+saver&hl=en&ct=clnk&cd=1&gl=us

Ubuntu forums was down so there's the google cache page. I'm sure you 
can port the directions to your distro of Linux, if that's what you're 
running.


[ccp4bb] Looking for a free program for calculating powder diffraction pattern

2009-02-16 Thread wob
Hi,

I'm looking for a free program for calculating a powder diffraction pattern,
given a PDB file. I googled for hours and only found a bunch of junk...

Thanks for your help!
Owen
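
For what it's worth, the underlying calculation is simple enough to
sketch via the Debye formula (unit scattering factors, no B-factors or
solvent, O(N^2) in atoms, so only practical for small models - an
illustration, not a substitute for a proper program; 'model.pdb' is a
placeholder):

    import numpy as np

    def read_pdb_coords(path):
        """Cartesian coordinates (Angstrom) from ATOM/HETATM records."""
        xyz = []
        with open(path) as fh:
            for line in fh:
                if line.startswith(("ATOM", "HETATM")):
                    xyz.append([float(line[30:38]), float(line[38:46]),
                                float(line[46:54])])
        return np.array(xyz)

    def debye_pattern(xyz, q):
        """Powder intensity I(q) = N + 2*sum_ij sin(q r_ij)/(q r_ij)."""
        d = xyz[:, None, :] - xyz[None, :, :]
        rij = np.sqrt((d * d).sum(-1))[np.triu_indices(len(xyz), k=1)]
        qr = np.outer(q, rij)               # q = 4*pi*sin(theta)/lambda
        return len(xyz) + 2.0 * (np.sin(qr) / qr).sum(axis=1)

    q = np.linspace(0.05, 2.5, 200)         # 1/Angstrom
    # intensity = debye_pattern(read_pdb_coords("model.pdb"), q)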



  

Re: [ccp4bb] PDB protein structures as screen saver

2009-02-16 Thread William G. Scott

On Feb 16, 2009, at 10:22 AM, Jayashankar wrote:


Dear Scientists,

It may be too much...

But as a biophysics student I would be happy to have PDB structures as
my computer's screen saver rather than some funny and fancy stuff. It
may also motivate me to solve my own structures in the future.

I want to ask: is there any existing script that fetches structures one
by one, with a one-line description of each structure?

S.Jayashankar
Research Student
Institute for Biophysical Chemistry
Hannover Medical School
Germany.



I've been using this on my ppc G5.  It is free.  Unfortunately it  
seems to have a bias toward amino acids:


http://www.sourcecod.com/structure/


[ccp4bb] Occupancy Refinement

2009-02-16 Thread protein.chemist protein.chemist
Dear All,

I have a question about the occupancy refinement of a ligand. I have a
2.3 Å dataset, and the ligand binds in multiple conformations in the
active site.
My question is whether it is possible to tell which orientation(s)
has/have the highest occupancy, based on occupancy refinement.
What is the best way to refine the occupancy?

Thanks,
Mariah

-- 
Mariah Jones
Department of Biochemistry
University of Florida


Re: [ccp4bb] unstable refinement

2009-02-16 Thread Axel Brunger

Dear Ian,

I totally agree with your observations and recommendations. If one is
concerned about instability of the optimizer (minimization and/or
simulated annealing), I suggest also monitoring the value of the total
energy function (X-ray maximum likelihood term plus all restraints).

Another source of slight variations in R values is recalculation of the
bulk-solvent mask and model parameters when the model has moved
significantly between solvent-mask updates.
Axel


On Feb 16, 2009, at 6:21 AM, Ian Tickle wrote:


Dear George

I would still maintain that values of Rfree where the refinement had not
attained convergence are totally uninformative, so I would say you made
the right call!  During a refinement run, Rfree is often observed to
fall initially and then increase towards the end, though usually not
significantly.  One cannot deduce anything from this behaviour, and
indeed it is not at all surprising: since Rfree is not the target
function of the optimisation (or even correlated with it) there's no
reason why it should do anything in particular.  Exactly the same
applies to Rwork: because it's a completely different function from the
target function (it contains no weighting information, for one thing),
there's absolutely no reason why Rwork should be a minimum at
convergence (even in the case of unrestrained refinement, and even
though it surely is correlated with the target function).  If that were
true we would be able to use Rwork as the target function!

The test for overfitting can only be done if you have at least 2
refinement runs done with different protocols (e.g. number of waters
added) to compare: the one with the higher Rfree (or lower free
likelihood) at convergence is overfitted.  Note that this is a relative
test: you can never be sure that a particular model is not overfitted.
It's always possible for someone to come along in the future using a
different parameter set (or different weighting) and produce a lower
Rfree than you did (using the same data of course), making your model
overfitted after the fact!

Cheers

-- Ian


-Original Message-
From: George M. Sheldrick [mailto:gshe...@shelx.uni-ac.gwdg.de]
Sent: 16 February 2009 11:24
To: Ian Tickle
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] unstable refinement


Dear Ian,

That was in fact one of my reasons for only calculating the free R
at the end of a SHELXL refinement run (the other reason, now less
important, was to save some CPU time). I have to add that I am no
longer completely convinced that I made the right decision all
those years ago. A stable refinement in which R decreases but
Rfree goes through a minimum and then starts to rise might be a
useful indication of overfitting?!

Best wishes, George

Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


On Mon, 16 Feb 2009, Ian Tickle wrote:


Clemens, I know we've had this discussion several times before, but I'd
like to take you up on the point you made that reducing Rfree-R is
necessarily always a 'good thing'.  Suppose the refinement had started
from a point where Rfree was biased, e.g. the test set in use had
previously been part of the working set, so that Rfree-R was too small.
In that case one would hope and indeed expect that Rfree-R would
increase on further refinement now excluding the test set.  Shouldn't
the criterion be that Rfree-R should attain its expected value <Rfree-R>
(dependent of course on the observation/parameter ratio and the
weighting parameters), so a high value of |(Rfree-R) - <Rfree-R>| is
bad, i.e. any significant deviations of (Rfree-R) from its expectation
are bad?

I would go further than that and say that anyway Rfree is meaningless
unless the refinement has converged, i.e. reached its maximum (local or
global) total likelihood (i.e. data+restraints).  So one simply cannot
compare the Rfree (or Rfree-R) values at the beginning and end of a run.
The purpose of Rfree (or better, free likelihood) is surely to compare
the *results* of *different* runs where convergence has been attained
and where the *refinement protocol* (i.e. selection of parameters to
vary and weighting parameters) has been varied, and then to choose as
the optimal protocol (and therefore optimal result) the one that gave
the lowest Rfree (or highest free likelihood).

Rfree-R is then used as a subsidiary test to verify that it has attained
its expected value; if not, then something is wrong: either the
refinement didn't converge (Rfree-R lower than <Rfree-R>) or there are
non-random errors (Rfree-R higher than <Rfree-R>), or a combination of
factors.

Cheers

-- Ian


-Original Message-
From: owner-ccp...@jiscmail.ac.uk [mailto:owner-ccp...@jiscmail.ac.uk] On Behalf Of Clemens Vonrhein
Sent: 13 February 2009 17:15
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] unstable 

Re: [ccp4bb] refine metal with phenix

2009-02-16 Thread Joern Krausze
Dear Lisa,

you can specify custom bond and angle restraints by using the option 
refinement.geometry_restraints.edits. It is best to save them to a file 
(e.g. restraints_edits.params) and use this as input for your next 
phenix.refine run. Assuming the ideal distance for your Mg is 2.1 A and the 
amino acid is asp113 of chain A, the corresponding file entry could look 
like this:

refinement.geometry_restraints.edits {
  bond {
    action = *add
    atom_selection_1 = name MG01 and chain G and resname MG and resseq 1
    atom_selection_2 = name OD2 and chain A and resname ASP and resseq 113
    distance_ideal = 2.1000
    sigma = 0.01
  }
}

You can check the phenix homepage for detailed documentation:

http://phenix-online.org/documentation/refinement.htm
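
A quick usage sketch (model and data file names hypothetical): pass the
edits file on the phenix.refine command line together with everything
else, e.g.

  phenix.refine model.pdb data.mtz restraints_edits.params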

Yours,

Joern


**
Address:
Joern Krausze
University of Leipzig
Centre for Biotechnology and Biomedicine
Deutscher Platz 5
04103 Leipzig
Germany

eMail:  krau...@bbz.uni-leipzig.de
Phone:  +49 (0)341 9731312
Fax:+49 (0)341 9731319
**

On Mon, 16 Feb 2009, Lisa Wang wrote:

 Hi all,
  The resolution of my structure is 3.1 A. There are three Mg ions bound
 in this structure. I tried to refine it with phenix. There was clear
 extra density before I added the Mg atoms. But when I put Mg in the
 centre of the density with good coordination and try to refine it with
 phenix, the Mg atoms move away. Does phenix need an additional parameter
 file to refine Mg and keep it 2.1 A from its coordinating atoms?
  Thanks.
 Lisa
 


[ccp4bb] include or exclude overloads

2009-02-16 Thread Ho-Leung Ng
Hi Clemens,

 Thank you for the clarification. I had thought you were
advocating using a general low resolution cutoff, with which I would
disagree. I spend a lot of time troubleshooting data collected and
processed by other people. Those are good reminders to go back and
check beamstop settings and overloaded spots.


Phil,

 When I asked what should be done with poorly measured data, I was
thinking specifically of what to do with overloads when a short-exposure
pass wasn't done. Exclusion gives you zeroes instead of what should be
the highest-intensity spots in your data set, but inclusion could throw
off scaling, which would be worse. SCALA by default rejects intensities
estimated from overloads by mosflm, which I presume is for a very good
reason. Would it be better to not use the estimated intensities for
scale determination but keep them in the data set?


ho
UC Berkeley

 --

 Date: Mon, 16 Feb 2009 09:07:38 +
 From: Clemens Vonrhein vonrh...@globalphasing.com
 Subject: Re: CCP4BB Digest - 12 Feb 2009 to 13 Feb 2009 (#2009-45)

 [quoted message trimmed]

[ccp4bb] Protein crystallization

2009-02-16 Thread Liew Chong Wai
Hi all
 
I am dealing with a protein of about 200 kDa. Because of impurity and 
degradation problems, the protein goes through 3 purification steps 
(affinity column -> ion exchange column -> gel filtration column). The 
buffer for the last purification step was 50mM MOPS pH 7.0, 500mM NaCl, 
20% glycerol, and 1mM DTT. Although a few crystals were observed in 
different buffer conditions, they are too small and fragile and show 
almost no diffraction at all. I have tried to optimize the crystallization 
conditions in 24-well format, using hanging drops with equal volumes of 
protein and buffer, but it is not reproducible. I believe my protein might 
undergo a conformational change, and I have no idea how to solve it. 
Please advise.
Thanks
 
vid


Re: [ccp4bb] PDB protein structures as screen saver

2009-02-16 Thread William G. Scott
I just emailed the guy today and asked him if there was any hope of
getting an Intel version in the future, and he wrote back almost
immediately and said he is working on it and is about 90% done.  The
current one runs only on PPC.

I tried to hint subtly that it would be kind of cool to have nucleic
acids. ...



On Feb 16, 2009, at 7:42 PM, Engin Ozkan wrote:

Does this screensaver run on 10.5 Intel Macs? It looks like it was
developed a while ago, and not updated since then.


Engin

Manish Chandra Pathak wrote:



For Mac, a screen saver (Structure) is already available. Moreover, it's
free and displays some structural information exactly the way
Jayashankar wants.

http://www.sourcecod.com/structure/

I hope there is something like this for Linux/Windows also.



*From:* Jürgen Bosch jubo...@jhsph.edu
*To:* CCP4BB@JISCMAIL.AC.UK
*Sent:* Monday, February 16, 2009 2:44:50 PM
*Subject:* Re: [ccp4bb] PDB protein structures as screen saver

Get a Mac, render some images in PyMOL and run a slideshow with the Ken
Burns effect if you want.
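
A one-liner sketch of the rendering step (PDB code and image size are
just examples; assumes PyMOL can fetch from the internet):

  pymol -cq -d "fetch 1ehz; hide everything; show cartoon; ray 1024,768; png 1ehz.png"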


Jürgen
On 16 Feb 2009, at 13:22, Jayashankar wrote:

 Dear Scientists,

 It may be too much...

 But as a biophysics student I would appreciate and feel happy to have
 PDB structures as my computer's screen savers rather than some funny
 and fancy stuff. And it may help me as a motivator to solve my own
 structures in the future.

 I want to ask is there any existing script that greps structures one
 by one with a one-line definition of that structure.





 S.Jayashankar
 Research Student
 Institute for Biophysical Chemistry
 Hannover Medical School
 Germany.

-
Jürgen Bosch
Johns Hopkins Bloomberg School of Public Health
Biochemistry and Molecular Biology, W8708
615 North Wolfe Street
Baltimore, MD 21205
Phone: +1-410-614-4742
Fax:  +1-410-955-3655



Re: [ccp4bb] Looking for a free program for calculating powder diffraction pattern

2009-02-16 Thread Jon Wright

Dear Owen,

The GSAS package can do that:

http://www.ccp14.ac.uk/solution/gsas/

It is important to put in some values for the solvent contribution if you 
want to get a meaningful pattern. Shout if you need a hand.


Good luck,

Jon

wob wrote:

Hi,

I'm looking for a free program for calculating a powder diffraction 
pattern, given a PDB file. I googled for hours and only found a bunch of 
junk...


Thanks for your help!
Owen
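
For a quick sanity check without installing a full package: the powder
pattern of an isolated molecule can be approximated with the Debye
formula, I(Q) = sum_i sum_j f_i f_j sin(Q r_ij)/(Q r_ij). Below is a
minimal sketch in Python, assuming a crude fixed-column PDB parse and a
single constant scattering factor per atom, with no solvent or lattice
contribution, so it is only indicative (and O(N^2) per Q point, hence
slow for large structures):

import math

def read_coords(pdb_path):
    """Crude fixed-column parse of ATOM/HETATM coordinates."""
    coords = []
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                # PDB fixed columns: x = 31-38, y = 39-46, z = 47-54
                coords.append((float(line[30:38]),
                               float(line[38:46]),
                               float(line[46:54])))
    return coords

def debye_intensity(coords, q):
    """Debye formula with all scattering factors set to 1."""
    total = float(len(coords))            # i == j terms: sin(x)/x -> 1
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            total += 2.0 * math.sin(q * r) / (q * r)
    return total

coords = read_coords("model.pdb")         # file name hypothetical
for k in range(1, 200):
    q = 0.05 * k                          # Q in 1/Angstrom
    print("%8.3f %14.3f" % (q, debye_intensity(coords, q)))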