[ccp4bb] Postdoctoral and Ph.D. opportunities at the Max Planck Institute for Medical Research, Heidelberg

2012-01-30 Thread Anton Meinhart


Postdoctoral Position and PhD positions in Structural Biology
at the Max Planck Institute for Medical Research, Heidelberg.


The Meinhart lab in the Department of Biomolecular Mechanisms seeks to 
recruit outstanding postdoctoral scientists and PhD students with 
experience and/or interest in mechanistic studies of either RNA 
processing machines (maturation of RNA 3’-ends) or macromolecular 
complexes that lead to programmed cell death and virulence in pathogenic 
bacteria (by poisoning bacterial cell wall synthesis). Depending on 
previous experience and scientific interest, successful candidates will 
contribute to the analysis of these macromolecular assemblies using 
structural (X-ray crystallography), biophysical / biochemical methods 
(fluorescence-based assays, isothermal titration calorimetry, analytical 
ultracentrifugation, etc.) and molecular biology (in vivo validation of 
structure-based functional hypotheses). The position provides the 
opportunity for broad training in protein expression, crystallization 
using high-throughput pipelines, kinetics, spectroscopy, genetics, and 
molecular biology. For more information, please check the laboratory 
website (http://www.mpimf-heidelberg.mpg.de/groups/rna_processing). The 
Meinhart lab is embedded in the Department of Biomolecular Mechanisms, 
offering a unique, multidisciplinary and international environment in which 
candidates can develop a multifaceted research experience, providing an 
ideal starting point for a successful scientific career (for further 
information see: 
http://www.mpimf-heidelberg.mpg.de/departments/biomolecular_mechanisms). 
Heidelberg is one of the top centers for biomedical research in Germany, 
and graduate students will have access to several different Ph.D. programs.


Applicants for postdoctoral positions should hold, or expect to obtain 
shortly, a PhD in the natural sciences and have at least one peer-reviewed 
publication as a first author. They should have experience with aspects of a 
structural biology pipeline, for example protein production and 
purification, X-ray crystallography etc. Past experience with 
complementary biophysical techniques (ITC, AUC, CD etc.) and molecular 
biology is desirable. Please send a CV, a brief summary of research 
experience and interests, and the contact addresses of three referees to 
bmm.recruitm...@mpimf-heidelberg.mpg.de, referring to “AM_Recruitment_2012#1”.


Applicants for PhD positions should have completed a master's degree in the 
natural sciences and have a background in biochemistry, structural biology 
or molecular biology. Experience with X-ray crystallography is not 
required but a dedication to biophysics and structural biology 
techniques is desirable. Please send a CV, a brief summary of previous 
research experience, and the contact addresses of two referees to 
bmm.recruitm...@mpimf-heidelberg.mpg.de, referring to “AM_Recruitment_2012#2”.


Applications will be considered until the positions are filled.

Best regards,

Anton Meinhart

_
Dr. Anton Meinhart
Department of Biomolecular Mechanisms
Max-Planck-Institute for Medical Research
Jahnstraße 29
69120 Heidelberg
GERMANY
phone + 49 6221 / 486505
e-mail: anton.meinh...@mpimf-heidelberg.mpg.de


Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-30 Thread Randy Read
Hi,

Here are a couple of links on the idea of judging resolution by a type of 
cross-validation with data not used in refinement:

Ling et al, 1998: http://pubs.acs.org/doi/full/10.1021/bi971806n
Brunger et al, 2008: 
http://journals.iucr.org/d/issues/2009/02/00/ba5131/index.html
  (cites earlier relevant papers from Brunger's group)

Best wishes,

Randy Read

On 30 Jan 2012, at 07:09, arka chakraborty wrote:

 Hi all,
 
 In the context of the ongoing discussion above, can anybody post links to a few 
 relevant articles?
 
 Thanks in advance,
 
 ARKO
 
 On Mon, Jan 30, 2012 at 3:05 AM, Randy Read rj...@cam.ac.uk wrote:
 Just one thing to add to that very detailed response from Ian.
 
 We've tended to use a slightly different approach to determining a sensible 
 resolution cutoff, where we judge whether there's useful information in the 
 highest resolution data by whether it agrees with calculated structure 
 factors computed from a model that hasn't been refined against those data.  
 We first did this with the complex of the Shiga-like toxin B-subunit pentamer 
 with the Gb3 trisaccharide (Ling et al, 1998).  From memory, the point where 
 the average I/sig(I) drops below 2 was around 3.3A.  However, we had a good 
 molecular replacement model to solve this structure and, after just carrying 
 out rigid-body refinement, we computed a SigmaA plot using data to the edge 
 of the detector (somewhere around 2.7A, again from memory).  The SigmaA plot 
 dropped off smoothly to 2.8A resolution, with values well above zero 
 (indicating significantly better than random agreement), then dropped 
 suddenly.  So we chose 2.8A as the cutoff.  Because there were four pentamers 
 in the asymmetric unit, we could then use 20-fold NCS averaging, which gave a 
 fantastic map.  In this case, the averaging certainly helped to pull out 
 something very useful from a very weak signal, because the maps weren't 
 nearly as clear at lower resolution.
 
 Since then, a number of other people have applied similar tests.  Notably, 
 Axel Brunger has done some careful analysis to show that it can indeed be 
 useful to take data beyond the conventional limits.
 
 When you don't have a great MR model, you can do something similar by 
 limiting the resolution for the initial refinement and rebuilding, then 
 assessing whether there's useful information at higher resolution by using 
 the improved model (which hasn't seen the higher resolution data) to compute 
 Fcalcs.  By the way, it's not necessary to use a SigmaA plot -- the 
 correlation between Fo and Fc probably works just as well.  Note that, when 
 the model has been refined against the lower resolution data, you'll expect a 
 drop in correlation at the resolution cutoff you used for refinement, unless 
 you only use the cross-validation data for the resolution range used in 
 refinement.
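
 Randy's test (comparing Fo against Fc from a model that has not seen the
 high-resolution data) amounts to computing an agreement statistic in
 resolution shells. A minimal numerical sketch of the shell-wise Fo/Fc
 correlation; the function name and data here are hypothetical, and real
 Fo/Fc values would come from your data-processing and refinement programs:

```python
import numpy as np

def shell_correlations(d_spacing, fobs, fcalc, n_shells=10):
    """Pearson correlation between Fo and Fc in equal-count resolution shells.

    d_spacing, fobs, fcalc: 1-D arrays with one entry per reflection.
    Returns a list of (d_min_of_shell, correlation), low to high resolution.
    """
    order = np.argsort(d_spacing)[::-1]        # large d (low resolution) first
    d, fo, fc = d_spacing[order], fobs[order], fcalc[order]
    shells = []
    for idx in np.array_split(np.arange(len(d)), n_shells):
        cc = np.corrcoef(fo[idx], fc[idx])[0, 1]
        shells.append((float(d[idx].min()), float(cc)))
    return shells
```

 A sensible cutoff would then be the shell where the correlation falls to
 near-random levels, provided the model was never refined against the shells
 being tested (otherwise the correlation is biased upwards).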
 
 -
 Randy J. Read
 Department of Haematology, University of Cambridge
 Cambridge Institute for Medical ResearchTel: +44 1223 336500
 Wellcome Trust/MRC Building Fax: +44 1223 336827
 Hills RoadE-mail: 
 rj...@cam.ac.uk
 Cambridge CB2 0XY, U.K.   
 www-structmed.cimr.cam.ac.uk
 
 On 29 Jan 2012, at 17:25, Ian Tickle wrote:
 
  Jacob, here's my (personal) take on this:
 
  The data quality metrics that everyone uses clearly fall into 2
  classes: 'consistency' metrics, i.e. Rmerge/meas/pim and CC(1/2) which
  measure how well redundant observations agree, and signal/noise ratio
  metrics, i.e. mean(I/sigma) and completeness, which relate to the
  information content of the data.
 
  IMO the basic problem with all the consistency metrics is that they
  are not measuring the quantity that is relevant to refinement and
  electron density maps, namely the information content of the data, at
  least not in a direct and meaningful way.  This is because there are 2
  contributors to any consistency metric: the systematic errors (e.g.
  differences in illuminated volume and absorption) and the random
  errors (from counting statistics, detector noise etc.).  If the data
  are collected with sufficient redundancy the systematic errors should
  hopefully largely cancel, and therefore only the random errors will
  determine the information content.  Therefore the systematic error
  component of the consistency measure (which I suspect is the biggest
  component, at least for the strong reflections) is not relevant to
  measuring the information content.  If the consistency measure only
  took into account the random error component (which it can't), then it
  would essentially be a measure of information content, if only
  indirectly (but then why not simply use a direct measure such as the
  signal/noise ratio?).
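
 For reference, the consistency metrics named here have standard definitions
 over the redundant observations of each unique reflection. A hedged sketch
 (textbook formulas, not code from this thread) computing them side by side:

```python
from collections import defaultdict
from math import sqrt

def merging_stats(observations):
    """observations: list of (hkl, intensity) redundant measurements.

    Returns (Rmerge, Rmeas, Rpim) using the standard definitions:
      Rmerge = sum_h sum_i |I_hi - <I_h>| / sum_h sum_i I_hi
      Rmeas  weights each reflection's deviations by sqrt(n/(n-1))
             (redundancy-independent)
      Rpim   weights by sqrt(1/(n-1)) (precision of the merged mean)
    """
    groups = defaultdict(list)
    for hkl, i_obs in observations:
        groups[hkl].append(i_obs)
    num_merge = num_meas = num_pim = denom = 0.0
    for intensities in groups.values():
        n = len(intensities)
        if n < 2:
            continue  # singletons contribute nothing to merging R factors
        mean_i = sum(intensities) / n
        dev = sum(abs(i - mean_i) for i in intensities)
        num_merge += dev
        num_meas += sqrt(n / (n - 1)) * dev
        num_pim += sqrt(1 / (n - 1)) * dev
        denom += sum(intensities)
    return num_merge / denom, num_meas / denom, num_pim / denom
```

 For any data set, Rpim <= Rmerge <= Rmeas, since the per-reflection weights
 satisfy sqrt(1/(n-1)) <= 1 <= sqrt(n/(n-1)); Rpim keeps improving with
 multiplicity because it estimates the precision of the merged mean.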
 
  There are clearly at least 2 distinct problems with Rmerge, first it's
  including systematic errors in its measure of consistency, second it's
  not invariant with 

[ccp4bb] to show multiple sequence alignment with sec. str.

2012-01-30 Thread sreetama das
Dear All,
         Is there any module in CCP4, or in other related software or servers, 
which can show a multiple alignment of homologous sequences from a protein 
family, together with their secondary structures?
Thanks in advance,
regards,
sreetama


Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-30 Thread Frank von Delft

Hi Randy - thank you for a very interesting reminder to old literature.

I'm intrigued:  how come this apparently excellent idea has not become 
standard best practice in the 14 years since it was published?


phx


On 30/01/2012 09:40, Randy Read wrote:

[quoted text snipped]

[ccp4bb] Kinase crystallization

2012-01-30 Thread CHAVES SANJUAN, ANTONIO


Dear all, 

I am trying to crystallize a protein kinase without any success. 

I suspect its characteristic catalytic loop may be the problem. I have 
already prepared different constructs, different expression vectors, and 
different mutant proteins (pseudo-phosphorylated, active, inactive?). I have 
also tested some ligands, such as AMP-PNP (a non-hydrolyzable ATP analogue), 
ADP (the product) and manganese (the cofactor). 

I am thinking of trying to co-crystallize it with a general kinase substrate 
(a peptide, a small molecule...). Does anyone have any experience or 
suggestions? 

Thanks in advance. 

Sincerely, Antonio

Re: [ccp4bb] to show multiple sequence alignment with sec. str.

2012-01-30 Thread Florian Brückner
Hi Sreetama,

you can use STRAP for that: http://3d-alignment.eu/. It allows you to do 
multiple sequence alignments, and to use various algorithms to predict 
secondary structure or to display secondary structure assignments of PDB entries.

Cheers

Florian


Am 30.01.2012 um 11:02 schrieb sreetama das:

 Dear All,
  Is there any module in CCP4/ other related software/servers 
 which can show a multiple alignment of homologous sequences from a protein 
 family, together with their secondary structures?
 Thanks in advance,
 regards,
 sreetama

-

Dr. Florian Brückner
Biomolecular Research Laboratory
OFLG/102
Paul Scherrer Institut
CH-5232 Villigen PSI
Switzerland

Tel.:   +41-(0)56-310-2332
Email:  florian.brueck...@psi.ch






Re: [ccp4bb] Ligand chirality error

2012-01-30 Thread Eleanor Dodson
Do remember that you can assign the chirality as 'both'. This can be useful 
because otherwise the refinement programs force the requested chirality, and 
especially at low resolution it can be hard to see any indication of error.

Eleanor


On 01/27/2012 09:06 AM, herman.schreu...@sanofi.com wrote:

Dear Debajyoti,

The way I check the chirality is to compare the fitted compound with the
structural formula. In Coot, I rotate the ligand such that it has the
same orientation as in the formula and check that the out-of-plane
group goes in the right direction. However, it happens quite often that
the chirality of the bound compound is not the chirality the chemist
thinks it has, or the chirality is not known. In that case, I refine
both enantiomers and see which one fits best. Things to look for are
distorted bond angles, poor fitting, and small positive and negative
blobs of difference density (green and red blobs in Coot with default
settings).

To change the chirality, you have to edit the cif dictionary which
contains the description of your compound. In the dictionaries I use, you
have to change _chem_comp_chir.volume_sign from positiv to negativ
or vice versa.
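
Herman's edit can also be scripted. A sketch assuming a Refmac-style monomer
dictionary in which _chem_comp_chir.volume_sign takes the values
positiv/negativ/both (the helper and file names are hypothetical):

```python
def flip_chiral_signs(dict_text):
    """Swap every 'positiv' and 'negativ' chiral volume sign in the raw
    text of a monomer cif dictionary, leaving 'both' untouched.

    Note: this is a blunt whole-file swap; for a compound with several
    chiral centres you would restrict it to the centre you want to invert.
    """
    placeholder = "\x00SIGN\x00"               # temporary token, cannot clash
    return (dict_text.replace("positiv", placeholder)
                     .replace("negativ", "positiv")
                     .replace(placeholder, "negativ"))

# e.g. (hypothetical file names):
# open("LIG_inverted.cif", "w").write(flip_chiral_signs(open("LIG.cif").read()))
```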

Good luck!
Herman






From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On
Behalf Of Debajyoti Dutta
Sent: Friday, January 27, 2012 8:25 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Ligand chirality error



Hi all,

How to check the presence of improper chirality of a fitted
ligand. Is there any way to get rid of such error.

All suggestions are welcome. Thanks in advance.

sincerely
Debajyoti






[ccp4bb] Postdoc opportunity at the Paterson Institute, Manchester

2012-01-30 Thread Ivan Ahel
---
Postdoctoral position in DNA Damage Response Group at the Paterson Institute 
for Cancer Research, Manchester
---

The Paterson Institute is a leading cancer centre of excellence core-funded by 
Cancer Research UK and is an Institute of The University of Manchester.

A 3-year position with the possibility of extension is available in the DNA 
Damage Response group led by Dr Ivan Ahel to study structure and function of 
DNA repair enzymes:

•   Slade, D., Dunstan, M.S., Barkauskaite, E., Weston, R., Lafite, P., 
Dixon, N., Ahel, M., Leys, D., and Ahel, I. (2011). The structure and catalytic 
mechanism of a poly(ADP-ribose) glycohydrolase. Nature 477, 616–620.

•   Ahel, I., Ahel, D., Matsusaka, T., Clark, A.J., Pines, J., Boulton, 
S.J., and West, S.C. (2008). Poly(ADP-ribose)-binding zinc finger motifs in DNA 
repair/checkpoint proteins. Nature 451, 81–85.

The successful candidate will have a PhD in biochemistry or cell biology, and 
at least one peer-reviewed publication as a first author.

For information on the DNA Damage Response group, please visit:
www.paterson.man.ac.uk/dnadamage/

Enquiries should be directed to Dr Ivan Ahel at: ia...@picr.man.ac.uk

Closing date:  24th February 2012


Re: [ccp4bb] Kinase crystallization

2012-01-30 Thread Artem Evdokimov
It is a fairly common issue with kinases. Among other options, you may want
to try a generic kinase inhibitor (there are several good ones; just look at
PDB structures for ideas), and if this does not help then you could attempt
to clamp the motion down via an inter-lobe engineered disulphide bond...

Artem
On Jan 30, 2012 5:17 AM, CHAVES SANJUAN, ANTONIO xanto...@iqfr.csic.es
wrote:

 [quoted text snipped]


Re: [ccp4bb] Kinase crystallization

2012-01-30 Thread Schubert, Carsten [JRDUS]
Staurosporine comes to mind as a general kinase inhibitor. I also second
Artem's suggestion that ligands make a big difference; we had several
cases of kinases which required ligands for crystallization success.
Also make sure you eliminate any floppy ends which may interfere with
packing.

 

Good luck

 

Carsten

 

 

 

From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
CHAVES SANJUAN, ANTONIO
Sent: Monday, January 30, 2012 5:07 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Kinase crystallization

 

[quoted text snipped]



[ccp4bb] pointless eats FP column

2012-01-30 Thread Christian Roth
Hi,

I want to determine the space group with Pointless and have it directly write 
out the mtz in the best space group. When I give it an mtz with structure 
factor amplitudes, Pointless recognizes the file (9 columns) and changes the 
space group, but the new file in the new space group has just 7 columns: FP 
and SigFP have disappeared. Running Pointless with the option 'match to 
reference', it works fine and all the columns are there. 
I tried it from the command line and from the interface, but it does not 
work. I use Pointless 1.6.5.
Has anyone a suggestion as to what goes wrong here? Does it prevent user 
error when merged F's are given, and omit them from the final mtz? However, 
the input mtz was in P1, so that should not be a problem.

Thanks in advance 

Best Regards 

Christian


Re: [ccp4bb] pointless eats FP column

2012-01-30 Thread Phil Evans
Pointless can really only determine the space group from an unmerged file, 
which wouldn't contain a merged amplitude F, so I'm not quite sure what you 
are trying to do.

Can you send me the file and your command off-list and I'll check.

best wishes
Phil


On 30 Jan 2012, at 13:54, Christian Roth wrote:

 [quoted text snipped]


Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-30 Thread Jacob Keller
Somebody sent this to me after a previous post a while back--a sort of
case-study:

Wang, J. (2010). Inclusion of weak high-resolution X-ray data for
improvement of a group II intron structure. Acta crystallographica
Section D, Biological crystallography 66, 988-1000.

JPK




On Mon, Jan 30, 2012 at 4:03 AM, Frank von Delft
frank.vonde...@sgc.ox.ac.uk wrote:
 Hi Randy - thank you for a very interesting reminder to old literature.

 I'm intrigued:  how come this apparently excellent idea has not become
 standard best practice in the 14 years since it was published?

 phx



 On 30/01/2012 09:40, Randy Read wrote:
 [quoted text snipped]

Re: [ccp4bb] pointless eats FP column

2012-01-30 Thread Phil Evans
Dear Christian

When I run this data into Pointless it doesn't find any higher symmetry than 
I422. However, the data appear to be highly twinned so I would be wary of 
believing that.

If you put a merged file into Pointless, it can check for under-merging, but 
you can't really expect to use the output file as the program is now. It 
treats the file as if it were unmerged, by squaring F to make a fake 
intensity, which is what is in the output file.

You can take the raw output from XDS into Pointless, either XDS_ASCII.HKL or 
INTEGRATE.HKL, and that will give you a better idea of the symmetry. You can 
then scale the output file with Scala or Aimless if you wish, though I have 
no reason to suppose that it is better than CORRECT and XSCALE.

I'm copying this to the BB for general information

best wishes
Phil

On 30 Jan 2012, at 13:54, Christian Roth wrote:

 [quoted text snipped]


Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-30 Thread Jacob Keller
 I'm intrigued:  how come this apparently excellent idea has not become
 standard best practice in the 14 years since it was published?

It would seem that too few people know about it, and that it is not
implemented in any software in the usual pipeline. Maybe it could be?

Perhaps the way to do it would be always to integrate to a ridiculously 
high resolution, give that to Refmac, and then, starting from a lower 
resolution, iterate to higher resolution according to the most recent 
SigmaA calculation, cutting off at some reasonable SigmaA value?
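
Such an iteration could be sketched as a simple control loop. Everything
here is hypothetical scaffolding: `refine` stands in for a call to a real
refinement program (e.g. Refmac), and `agreement` for a SigmaA or Fo/Fc
correlation calculation on a shell the model has not been refined against:

```python
def extend_resolution(shells, refine, agreement, threshold=0.2):
    """Iteratively extend the resolution cutoff while each newly added
    shell still shows better-than-random agreement with the model.

    shells:    successive d_min cutoffs from low to high resolution,
               e.g. [4.0, 3.5, 3.0, 2.8, 2.6] (angstroms)
    refine:    callable(d_min) -> model refined against data to d_min
    agreement: callable(model, shell_index) -> agreement statistic for
               data in that shell, which the model has never seen
    Returns the accepted d_min cutoff.
    """
    accepted = 0
    model = refine(shells[0])          # start against low-res data only
    for k in range(1, len(shells)):
        if agreement(model, k) < threshold:
            break                      # next shell looks like noise
        accepted = k
        model = refine(shells[k])      # now safe to refine against it
    return shells[accepted]
```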

JPK




 phx



 On 30/01/2012 09:40, Randy Read wrote:

 Hi,

 Here are a couple of links on the idea of judging resolution by a type of
 cross-validation with data not used in refinement:

 Ling et al, 1998: http://pubs.acs.org/doi/full/10.1021/bi971806n
 Brunger et al,
 2008: http://journals.iucr.org/d/issues/2009/02/00/ba5131/index.html
   (cites earlier relevant papers from Brunger's group)

 Best wishes,

 Randy Read

 On 30 Jan 2012, at 07:09, arka chakraborty wrote:

 Hi all,

 In the context of the above going discussion can anybody post links for a
 few relevant articles?

 Thanks in advance,

 ARKO

 On Mon, Jan 30, 2012 at 3:05 AM, Randy Read rj...@cam.ac.uk wrote:

 Just one thing to add to that very detailed response from Ian.

 We've tended to use a slightly different approach to determining a
 sensible resolution cutoff, where we judge whether there's useful
 information in the highest resolution data by whether it agrees with
 calculated structure factors computed from a model that hasn't been refined
 against those data.  We first did this with the complex of the Shiga-like
 toxin B-subunit pentamer with the Gb3 trisaccharide (Ling et al, 1998).
  From memory, the point where the average I/sig(I) drops below 2 was around
 3.3A.  However, we had a good molecular replacement model to solve this
 structure and, after just carrying out rigid-body refinement, we computed a
 SigmaA plot using data to the edge of the detector (somewhere around 2.7A,
 again from memory).  The SigmaA plot dropped off smoothly to 2.8A
 resolution, with values well above zero (indicating significantly better
 than random agreement), then dropped suddenly.  So we chose 2.8A as the
 cutoff.  Because there were four pentamers in the asymmetric unit, we could
 then use 20-fold NCS averaging, which gave a fantastic map.  In this case,
 the averaging certainly helped to pull out something very useful from a very
 weak signal, because the maps weren't nearly as clear at lower resolution.

 Since then, a number of other people have applied similar tests.  Notably,
 Axel Brunger has done some careful analysis to show that it can indeed be
 useful to take data beyond the conventional limits.

 When you don't have a great MR model, you can do something similar by
 limiting the resolution for the initial refinement and rebuilding, then
 assessing whether there's useful information at higher resolution by using
 the improved model (which hasn't seen the higher resolution data) to compute
 Fcalcs.  By the way, it's not necessary to use a SigmaA plot -- the
 correlation between Fo and Fc probably works just as well.  Note that, when
 the model has been refined against the lower resolution data, you'll expect
 a drop in correlation at the resolution cutoff you used for refinement,
 unless you only use the cross-validation data for the resolution range used
 in refinement.
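[Editorial sketch] The cross-validation test Randy describes can be illustrated in a few lines of NumPy. This is a generic sketch, not code from the thread or from any CCP4 program; the function name and the per-reflection arrays (Fo, Fc, d-spacings) are hypothetical inputs assumed to be available from elsewhere:

```python
import numpy as np

def fo_fc_correlation_by_shell(f_obs, f_calc, d_spacings, n_shells=10):
    """Correlation of Fo vs Fc in resolution shells.

    If the highest-resolution shells still show correlation well above
    zero for a model that has not seen those data, they likely carry
    useful signal."""
    # Sort reflections from low resolution (large d) to high (small d)
    order = np.argsort(d_spacings)[::-1]
    fo, fc, d = f_obs[order], f_calc[order], d_spacings[order]
    # Equal-count shells
    shells = np.array_split(np.arange(len(fo)), n_shells)
    results = []
    for idx in shells:
        cc = np.corrcoef(fo[idx], fc[idx])[0, 1]
        # Record the high-resolution limit of the shell and its CC
        results.append((d[idx].min(), cc))
    return results
```

As Randy notes, a simple Fo/Fc correlation like this probably serves as well as a full SigmaA plot for deciding where the signal runs out.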

 -
 Randy J. Read
 Department of Haematology, University of Cambridge
 Cambridge Institute for Medical Research    Tel: +44 1223 336500
 Wellcome Trust/MRC Building                         Fax: +44 1223 336827
 Hills Road
  E-mail: rj...@cam.ac.uk
 Cambridge CB2 0XY, U.K.
 www-structmed.cimr.cam.ac.uk

 On 29 Jan 2012, at 17:25, Ian Tickle wrote:

  Jacob, here's my (personal) take on this:
 
  The data quality metrics that everyone uses clearly fall into 2
  classes: 'consistency' metrics, i.e. Rmerge/meas/pim and CC(1/2) which
  measure how well redundant observations agree, and signal/noise ratio
  metrics, i.e. mean(I/sigma) and completeness, which relate to the
  information content of the data.
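[Editorial sketch] For reference, the consistency metrics Ian lists differ only in how each reflection's deviations are weighted by its redundancy n. A toy NumPy sketch, not taken from any CCP4 program; the `groups` input format (one array of redundant measurements per unique reflection) is an assumption for illustration:

```python
import numpy as np

def merging_r_factors(groups):
    """Compute Rmerge, Rmeas and Rpim from redundant observations.

    `groups`: list of 1-D arrays, one per unique reflection, holding that
    reflection's redundant intensity measurements."""
    num_merge = num_meas = num_pim = denom = 0.0
    for obs in groups:
        n = len(obs)
        if n < 2:
            continue  # singletons carry no consistency information
        dev = np.abs(obs - obs.mean()).sum()
        num_merge += dev                               # unweighted
        num_meas += np.sqrt(n / (n - 1.0)) * dev       # redundancy-corrected
        num_pim += np.sqrt(1.0 / (n - 1.0)) * dev      # precision of mean
        denom += obs.sum()
    return num_merge / denom, num_meas / denom, num_pim / denom
```

Note that none of these weights distinguishes systematic from random error, which is the heart of Ian's objection.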
 
  IMO the basic problem with all the consistency metrics is that they
  are not measuring the quantity that is relevant to refinement and
  electron density maps, namely the information content of the data, at
  least not in a direct and meaningful way.  This is because there are 2
  contributors to any consistency metric: the systematic errors (e.g.
  differences in illuminated volume and absorption) and the random
  errors (from counting statistics, detector noise etc.).  If the data
  are collected with sufficient redundancy the systematic errors should
  hopefully largely cancel, and therefore only the random errors will
  determine the information content.  Therefore the systematic error
  component of the consistency measure (which I suspect is the biggest
  component, at least for the strong reflections) is not 

Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-30 Thread Florian Schmitzberger
On Jan 30, 2012, at 10:28 AM, Jacob Keller wrote:

 I'm intrigued:  how come this apparently excellent idea has not become
 standard best practice in the 14 years since it was published?
 
 It would seem because too few people know about it, and it is not
 implemented in any software in the usual pipeline. Maybe it could be?

Phenix.model_vs_data calculates a SigmaA vs resolution plot (under comprehensive 
validation in the GUI). Pavel would probably have replied by now, but I don't 
think the discussion has been cross-posted to the phenix bb.

Cheers,

Florian




 
 Perhaps the way to do it would be always to integrate to ridiculously
 high resolution, give that to Refmac, and, starting from lower
 resolution, iterate to higher resolution according to the most recent
 SigmaA calculation, cutting off at some reasonable SigmaA value?
 
 JPK
 
 
 
 

Re: [ccp4bb] B_sol from EDS

2012-01-30 Thread Pavel Afonine
Hi Bernhard,

I just calculated k_sol and B_sol for all PDB entries that
 - have reflection data available,
 - I could re-compute the R-factor within 5%, and
 - R-work < 30%
using a simple cctbx script. Here is what I get:

Distribution of k_sol:
 0.000 - 0.060  : 27
 0.060 - 0.120  : 12
 0.120 - 0.180  : 51
 0.180 - 0.240  : 182
 0.240 - 0.300  : 1770
 0.300 - 0.360  : 13819
 0.360 - 0.420  : 19731
 0.420 - 0.480  : 3039
 0.480 - 0.540  : 471
 0.540 - 0.600  : 256

Distribution of B_sol:
   0.000 -  31.300 : 4349
  31.300 -  62.600 : 29425
  62.600 -  93.900 : 4578
  93.900 - 125.200 : 597
 125.200 - 156.500 : 225
 156.500 - 187.800 : 84
 187.800 - 219.100 : 37
 219.100 - 250.400 : 23
 250.400 - 281.700 : 10
 281.700 - 313.000 : 30

It seems the result of a similar exercise done by Fokine and Urzhumtsev
(Acta Cryst. (2002). D58, 1387-1392) still holds (see Figure 3 on page 1390
there).
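[Editorial sketch] The binned tables above can be reproduced generically with NumPy. This is a sketch only; the per-entry k_sol/B_sol values themselves come from Pavel's cctbx script, which is not shown, so the input array here is synthetic:

```python
import numpy as np

def print_histogram(values, n_bins=10):
    """Print a fixed-width histogram in the style of the tables above."""
    counts, edges = np.histogram(values, bins=n_bins)
    for lo, hi, c in zip(edges[:-1], edges[1:], counts):
        print(f"{lo:8.3f} - {hi:8.3f} : {c}")
    return counts, edges
```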

Pavel


On Mon, Jan 30, 2012 at 11:10 AM, Bernhard Rupp (Hofkristallrat a.D.) 
hofkristall...@gmail.com wrote:

 Dear All,

 when I plot the bulk solvent parameters B_sol and k_sol extracted from EDS,
 an improbable, bimodal distribution appears.
 In the B_sol vs k_sol PDF, a sharp line of values at B_sol = 70 appears
 (B-axis left to right, 0-200).

 http://www.ruppweb.org/images/b_sol_contour.jpg
 http://www.ruppweb.org/images/b_sol_surface.jpg

 According to a quick peek at the EDS instructions,
 it uses the REFMAC flat bulk solvent model throughout for the bulk solvent
 correction.

 The main peak in fact has the expected distribution, but it seems that the
 sharp peak at B_sol = 70 represents some cut-off that was used in a certain
 set of calculations.

 For data mining it would be useful to know where/when these cutoffs were
 used.

 Best regards, BR
 -
 Bernhard Rupp
 http://www.ruppweb.org/
 -



Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

2012-01-30 Thread Dunten, Pete W.
Frank,

Don't you already get a plot of SigmaA versus resolution from refmac,
where the free set of reflections has been used to estimate SigmaA?

Have a look at some of your log files.

Pete

From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] On Behalf Of Frank von Delft 
[frank.vonde...@sgc.ox.ac.uk]
Sent: Monday, January 30, 2012 2:03 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Reasoning for Rmeas or Rpim as Cutoff

Hi Randy - thank you for a very interesting reminder to old literature.

I'm intrigued:  how come this apparently excellent idea has not become standard 
best practice in the 14 years since it was published?

phx



[ccp4bb] Staff Scientist position at MacCHESS

2012-01-30 Thread Marian Szebenyi
The Macromolecular Diffraction Facility of the Cornell High-Energy Synchrotron 
Source (MacCHESS) has an opening for a Staff Scientist (Research Associate) to 
pursue the development of novel techniques in X-ray scattering as applied to 
structural biology, and to support users at MacCHESS.  There will also be 
opportunities to pursue projects in structural biology, using current 
crystallographic and SAXS methods.  Research areas of particular interest 
include structure solution from multiple crystals, use of Laue diffraction, 
BioSAXS, microfluidics, and user interfaces for beamline operation.  A Ph.D. in 
structural biology, biophysics, or a related field, and at least 3 years of 
experience beyond the degree in a relevant field is required.  A solid 
publication record is essential, and experience working at a synchrotron 
facility is highly desirable.  Excellent communication skills are a must, 
including fluency in the English language.  Appointments are nominally for three 
years with the possibility for renewal, subject to mutual satisfaction and the 
availability of funds.


Located on an Ivy League university campus in picturesque upstate New York, the 
Cornell High-Energy Synchrotron Source (CHESS) serves a world-wide user base of 
structural biologists, chemists, physicists, and engineers.  MacCHESS is an 
NIH-supported National Resource providing support for structural biology at 
CHESS.  MacCHESS is a heavily team-oriented environment.


Please provide an application and have at least three letters of reference sent 
to:

Dr. Marian  Szebenyi, Chair
MacCHESS Staff Scientist Search Committee
c/o Peggy Steenrod
Newman Laboratory
Cornell University
Ithaca, NY  14853  USA

Applications should include a cover letter, curriculum vitae, a publication list, 
and a detailed summary of research experience and interests.  Electronic 
submissions and inquiries may be addressed to search-cla...@cornell.edu.  Salary 
and starting date are negotiable.


Cornell is an equal opportunity, affirmative action educator and employer.


Re: [ccp4bb] B_sol from EDS

2012-01-30 Thread Bernhard Rupp (Hofkristallrat a.D.)
Yes, that is about what one would expect. I also checked a few of the
extreme outliers and could almost always come up with a reasonable value.
That does not, however, remove my curiosity regarding the B_sol = 70 cutoff
and its purpose.

Cheers, BR

 


[ccp4bb] odd behaviour of reindex

2012-01-30 Thread Jens Kaiser
Hi all,
  we encountered an odd behaviour of REINDEX.

Snip form logfile:

 Data line--- reindex HKL (h+l)/2, -k, (h-l)/2
 Data line--- end

  Reflections will be reindexed, and unit cell recalculated

 Reindexing transformation:
   (h' k' l') =  ( h  k  l ) (  1.0  0.0  1.0 )
 (  0.0 -1.0  0.0 )
 (  0.5  0.0 -0.5 )

Obviously, the first line of the matrix is not what we intended to
create.

Inputting the transformation as HKL h/2+l/2, -k, h/2-l/2
produces the desired result:

 Data line--- reindex HKL h/2+l/2, -k, h/2-l/2
 Data line--- end

  Reflections will be reindexed, and unit cell recalculated

 Reindexing transformation:
   (h' k' l') =  ( h  k  l ) (  0.5  0.0  0.5 )
 (  0.0 -1.0  0.0 )
 (  0.5  0.0 -0.5 )



Admittedly, the documentation does not use any brackets in its examples,
but I would expect REINDEX either to throw an error or to treat (h+l)/2
the same way it treats (h-l)/2, not handle them in the inconsistent way
encountered.

Cheers,

Jens


-- 
+-+-+
| Jens T. Kaiser  | Office: +1(626)395-2662 |
| California Institute of Technology  | Lab:+1(626)395-8392 |
| m/c 114-96  | Cell:   +1(626)379-1650 |
| 1200 E. California Blvd.| Xray:   +1(626)395-2661 |
| Pasadena, CA 91125  | Email:  kai...@caltech.edu  |
| USA | Skype:  jens.t.kaiser   |
+-+-+


Re: [ccp4bb] Kinase crystallization

2012-01-30 Thread George Kontopidis
If the ligand binding site is exposed to the solvent, a bound ligand may help.

If the protein has flexible domains and a ligand fixes it in one conformation, a 
bound ligand will help even more.

All of the above assumes that the purity and the concentration of the protein 
are high.

 

George 

 

 

From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
Debreczeni, Judit
Sent: Monday, January 30, 2012 3:45 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Kinase crystallization

 

Does your kinase autophosphorylate by any chance? -- That can produce 
differently phosphorylated species and affect crystallisability. You can detect 
it by e.g. mass spec, and tackle it by dephosphorylating the protein prior to 
crystallisation or by coexpression with a phosphatase.

 

 

 


From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of CHAVES 
SANJUAN, ANTONIO
Sent: 30 January 2012 10:07
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Kinase crystallization

 

Dear all,

I am trying to crystallize a protein kinase without any success.

I suspect its characteristic catalytic loop may be the problem. I have already 
prepared different constructs, different expression vectors, and different 
mutant proteins (pseudo-phosphorylated, active, inactive?). I have also tested 
some ligands, such as AMP-PNP (a non-hydrolyzable nucleotide analog), ADP (the 
product) and manganese (a cofactor).

I am thinking of trying to cocrystallize it with a generic kinase substrate (a 
peptide, a small molecule...). Does anyone have any experience or suggestions?

Thanks in advance.

Sincerely,

Antonio