Re: [ccp4bb] shelxl, refinement of occupany

2007-03-22 Thread George M. Sheldrick
Refining a common occupancy (using one free variable) for a loop that is 
in poor density is well worth trying. The loop may well have multiple 
conformations and this may make it easier to find a second conformation in 
the difference map. I also recommend refining the occupancies of any 
selenium atoms present (e.g. by changing '11' to '1'); this allows for 
partial incorporation and radiation damage.
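
A schematic .ins fragment along these lines might look as follows; the coordinates,
SFAC numbers, U values and the 0.7 starting value for the loop occupancy are
placeholders rather than values from any real file:

   FVAR  0.238  0.700       ! overall scale factor, then free variable 2 = common loop occupancy
   ...
   RESI  90  ALA
   N   3  0.1234  0.5678  0.9012  21.00000  0.25   ! 21 = 1.0 x free variable 2
   CA  1  0.2244  0.5811  0.9633  21.00000  0.28
   ...
   SE  5  0.4321  0.8765  0.2109   1.00000  0.19   ! was 11.00000 (fixed at 1.0); a plain 1.0 refines freely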

Hydrogen atoms should be included in all final refinements; they cost 
no extra parameters and can reduce the free R by 0.5 to 1.0%. The 
antibumping restraints involving them (BUMP) are also useful. However, I 
do not include OH hydrogens because the stupid program sometimes puts 
them in the wrong place (e.g. two on the same H-bond) and the combination 
of the riding model and antibumping restraints can tear the structure 
apart. For amide sidechains and histidines I would recommend checking the 
conformations with the molprobity server before adding the hydrogens in 
SHELXL. It is, however, less work to model all alternative conformations 
before adding hydrogens with HFIX; the program will then set up the 
disorder correctly for the hydrogens too. In such cases the disorder 
should be modeled one atom further back than you can see in the maps; if 
CG has two positions then there should be two pairs of alternative H-atoms 
on CB, which the program will set automatically if you have included PART 
1 and PART 2 alternatives for CB. This is appreciably more work to set up 
later by hand if you add the hydrogens before modeling the disorder!
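
As a sketch, a two-conformation CB/CG pair coupled through free variable 2 (the usual
convention; coordinates and U values are again placeholders) would look something like:

   PART 1
   CB  1  0.1111  0.2222  0.3333   21.00000  0.25   ! occupancy = fv(2)
   CG  1  0.1311  0.2422  0.3533   21.00000  0.27
   PART 2
   CB  1  0.1140  0.2190  0.3360  -21.00000  0.25   ! occupancy = 1 - fv(2)
   CG  1  0.1511  0.2022  0.3933  -21.00000  0.27
   PART 0

With CB split like this, HFIX will then generate the two alternative pairs of riding
hydrogens on CB automatically, as described above.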

It is worth trying to make the waters anisotropic (with an ISOR restraint) 
to see if this reduces R free significantly. If the .lst file gives NPD 
warnings, the restraints on the anisotropic atoms are too soft.
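
In the .ins file this only takes a couple of instructions. A hedged sketch (I am
assuming the usual residue-class syntax with the waters in class HOH, and 0.1 as the
ISOR sigma; check both against the SHELXL documentation):

   ANIS_HOH O         ! make the water oxygens anisotropic
   ISOR_HOH 0.1 O     ! restrain their Uij components towards isotropic values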

And I am surprised that after all your emails on the subject you still 
can't spell 'SHELX'!

George


Prof. George M. Sheldrick FRS
Dept. Structural Chemistry, 
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-2582


On Wed, 21 Mar 2007, U Sam wrote:

 I am looking for some advice.
 
 (1) In shelex, what should I specify to refine occupancy?
 I have two molecules in the asymmetric unit.
 In molecule A residues 89-92 are present, but in B these residues are missing.
 So I believe these residues in B should not simply be given zero occupancy, although I
 do not find any prominent density (Fo-Fc). The occupancy could be anywhere between
 0.0 and 1.0. How can I refine this parameter? Or should I neglect these missing
 residues in B, implying an occupancy of 0.0, or leave a gap for these residues with
 no information, including coordinates?
 
 Right now R1=14% and R1(free) =18%, without making water anisotropic.
 
 (2) I am using 1.4 A data. Should I refine the waters anisotropically? If yes,
 when?
 
 (3) Should I add hydrogens at this resolution? If yes, when should I do so?
 
 Thanks
 Sam
 
 


Re: [ccp4bb] Scale factor in ccp4

2007-03-22 Thread George M. Sheldrick
A less convoluted method is to read both .sca files into xprep, scale 
them together and write out the combined .sca file.

George

Prof. George M. Sheldrick FRS
Dept. Structural Chemistry, 
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-2582


On Thu, 22 Mar 2007, Eleanor Dodson wrote:

 When you run scalepack2mtz, the GUI always follows it with TRUNCATE to convert
 Is to Fs.
 
 At that stage there is an attempt to put the data on roughly an absolute
 scale, either using the NRES you gave as input or if that is not set, I think
 assuming 50% of the cell volume is protein.
 Anyway the scales WILL be different after TRUNCATE.
 
 If you want to scale them together more carefully you will need to run cad,
 then SCALEIT (on the GUI under exptl phases),
 then convert each I1 and I2 back to a .sca file.
 
 Seems a lot of trouble! Why do you need this?
 
 Eleanor
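 
 A bare-bones command sketch of that route (keywords quoted from memory, and the
 column labels F1/SIGF1/F2/SIGF2 are placeholders that must match whatever
 scalepack2mtz and TRUNCATE actually wrote; check with mtzdump):
 
   cad HKLIN1 1.mtz HKLIN2 2.mtz HKLOUT both.mtz << eof
   LABIN FILE 1 E1=F1 E2=SIGF1
   LABIN FILE 2 E1=F2 E2=SIGF2
   END
   eof
 
   scaleit HKLIN both.mtz HKLOUT scaled.mtz << eof
   REFINE SCALE
   LABIN FP=F1 SIGFP=SIGF1 FPH1=F2 SIGFPH1=SIGF2
   END
   eof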
 
 yang li wrote:
  Hi:
   I have two sets of data from the same crystal, named 1.sca and 2.sca;
  they have different intensity values due to different scale factors.
  Now I use Scalepack2mtz to convert them to 1.mtz and 2.mtz, then use cad to
  merge them into cad.mtz, then convert that to a cad.sca file. I find that the
  intensity values in this cad.sca are different from those in 1.sca and 2.sca,
  so I wonder if a program has scaled the values itself? If that is true,
  which program did this, Scalepack2mtz or cad?
  Thanks!
  
  Li Yang
  
  
 


[ccp4bb] Workshop: Macromolecular Crystallography at PETRA III

2007-03-22 Thread Michele Cianci
 Beamline Design Workshop

   Macromolecular Crystallography at PETRA III

  EMBL-Hamburg Outstation, April 23 to 25, 2007

Starting in 2007, the PETRA storage ring in Hamburg, Germany,
will be converted into a 3rd generation source for synchrotron
radiation. The new PETRA III storage ring will be operational
in 2009 and provide X-rays of extreme brilliance.

EMBL Hamburg is in charge of building and operating three
undulator beamlines on PETRA III. Two of these beamlines will
be dedicated to macromolecular X-ray crystallography and one
to small-angle X-ray scattering (further information is available at
http://www.embl-hamburg.de/services/petra/).
The beamlines will be integrated with facilities for sample
preparation, high-throughput crystallization, data processing
and data evaluation.

The workshop will bring together structural biologists and beamline
scientists to discuss scientific and experimental challenges in
macromolecular crystallography. The results of this discussion
will directly enter into the design of the two beamlines for
macromolecular crystallography in the [EMAIL PROTECTED] project.

The workshop will start on 23rd April at 15:00 and finish on
25th April at 18:00. There is no registration fee, but participants
are expected to cover their own travel and accommodation.

For further information and registration, please go to:

   http://www.embl-hamburg.de/workshops/2007/mx/

The number of participants will be limited to 50.
The registration deadline is April 4th, 2007.
Applicants will be informed about the acceptance of
their applications by April 9th, 2007.

Organising committee:
Thomas R. Schneider, Michele Cianci, Gleb Bourenkov


Re: [ccp4bb] Highest shell standards

2007-03-22 Thread Ranvir Singh
I will agree with Ulrich. Even at 3.0 A, it is
possible to have a structure with reasonable accuracy
which can explain the biological function or is
consistent with available biochemical data.
Ranvir
--- Ulrich Genick [EMAIL PROTECTED] wrote:

 Here are my 2-3 cents worth on the topic:
 
 The first thing to keep in mind is that the goal of
 a structure  
 determination
 is not to get the best stats or to claim the highest
 possible  
 resolution.
 The goal is to get the best possible structure and
 to be confident that
 observed features in a structure are real and not
 the result of noise.
 
  From that perspective, if any of the conclusions
 one draws from a  
 structure
 change depending on whether one includes data with
 an I/sigI in the  
 highest
 resolution shell of 2 or 1, one probably treads on
 thin ice.
 
 The general guide that one should include only data for which the shell's
 average I/sigI > 2 comes from the following simple consideration.
 
 
 Since F = sqrt(I), sigF is approximately sigI/(2 sqrt(I)), so F/sigF = 2 I/sigI.
 
 So if you include data with an I/sigI of 2 then your F/sigF = 4. In other words
 you will have a roughly 25% experimental uncertainty in your F.
 Now assume that you actually knew the structure of your protein and
 calculated the crystallographic R-factor between the
 Fcalcs from your  
 true structure and the
 observed F.
 In this situation, you would expect to get a
 crystallographic R- 
 factor around 25%,
 simply because of the average error in your
 experimental structure  
 factor.
 Since most macromolecular structures have R-factors
 around 20%, it  
 makes little
 sense to include data, where the experimental
 uncertainty alone will
 guarantee that your R-factor will be worse.
 Of course, these days maximum-likelihood refinement will just down-weight
 such data and all you do is burn CPU cycles.
 
 
 If you actually want to do a semi-rigorous test of
 where you should stop
 including data, simply include increasingly higher
 resolution data in  
 your
 refinement and see if your structure improves.
 If you have really high resolution data (i.e. 
 better than 1.2 Angstrom)
 you can do matrix inversion in SHELX and get
 estimated standard  
 deviations (esd)
 for your refined parameters. As you include more and
 more data the  
 esds should
 initially decrease. Simply keep including higher
 resolution data  
 until your esds
 start to increase again.
 
 Similarly, for lower resolution data you can monitor
 some molecular  
 parameters, which are not
 included in the stereochemical restraints and see if the inclusion
 of higher-resolution data makes the
 agreement between the observed and expected
 parameters better. For  
 example SHELX does not
 restrain torsion angles in aliphatic portions of side chains. If your
 structure improves, those angles should cluster more tightly around +60, -60
 and 180 degrees.
 
 
 
 
 Cheers,
 
 Ulrich
 
 
  Could someone point me to some standards for data
 quality,  
  especially for publishing structures? I'm
 wondering in particular  
  about highest shell completeness, multiplicity,
 sigma and Rmerge.
 
  A co-worker pointed me to a '97 article by
 Kleywegt and Jones:
 
  http://xray.bmc.uu.se/gerard/gmrp/gmrp.html
 
   To decide at which shell to cut off the resolution, we nowadays tend to use
   the following criteria for the highest shell: completeness > 80 %,
   multiplicity > 2, more than 60 % of the reflections with I > 3 sigma(I), and
   Rmerge < 40 %. In our opinion, it is better to have a good 1.8 Å structure
   than a poor 1.637 Å structure.
 
   Are these recommendations still valid with maximum likelihood methods? We
   tend to use more data, especially in terms of the Rmerge and sigma cutoff.
 
  Thanks in advance,
 
  Shane Atwell
 
 



 



Re: [ccp4bb] Highest shell standards

2007-03-22 Thread Santarsiero, Bernard D.
There are journals that have specific specifications for these parameters,
so it matters where you publish. I've seen restrictions that the highest
resolution shell has to have I/sig > 2 and completeness > 90%. Your
mileage may vary.

I typically process my data to a maximum I/sig near 1, and completeness in
the highest resolution shell to 50% or greater. It's reasonable to expect
the multiplicity/redundancy to be greater than 2, though that is difficult
with the lower-symmetry space groups in triclinic and monoclinic systems
(depending upon crystal orientation and detector geometry). The chi^2's
should be relatively uniform over the entire resolution range, near 1 in
the highest resolution bins, and near 1 overall. With this set of
criteria, R(merge)/R(sym) (on I) can be as high as 20% and near 100% for
the highest resolution shell. R is a poor descriptor when you have a
substantial number of weak intensities because it is dominated by the
denominator; chi^2 is a better descriptor since it has, essentially,
the same numerator.

One should also note that the I/sig criteria can be misleading. It is the
*average* of the I/sig in a resolution shell, and as such, will include
intensities that are both weaker and stronger than the average. For the
highest resolution shell, if you discard it because its average falls below
2sig, then you are also discarding individual intensities substantially greater
than 2sig. The natural falloff of the intensities is reflected (no pun
intended) by the average B-factor of the structure, and you need the
higher resolution, weaker data to best define that parameter.

Protein diffraction data is inherently weak, and far weaker than we obtain
for small molecule crystals. Generally, we need all the data we can get,
and the dynamic range of the data that we do get is smaller than that
observed for small molecule crystals. That's why we use restraints in
refinement. An observation of a weak intensity is just as valid as an
observation of a strong one, since you are minimizing a function
related to matching Iobs to Icalc. This is even more valid with refinement
targets like the maximum likelihood function. The ONLY reasons that we
ever used I/sig or F/sig cutoffs in refinements were to make the
calculations faster (since we were substantially limited by computing
power decades ago), the sig's were not well-defined for weak intensities
(especially for F's), and the detectors were not as sensitive. Now, with
high brilliance x-ray sources and modern detectors, you can, in fact,
measure weak intensities well--far better than we could decades ago. And
while the dynamic range of intensities for a protein set is relatively
flat, in comparison to a small molecule dataset, those weak terms near
zero are important in restraining the Fcalc's to be small, and therefore
helping to define the phases properly.

In 2007, I don't see a valid argument for severe cutoffs in I/sig at the
processing stage. I/sig = 1 and a reasonable completeness of 30-50% in the
highest resolution shell should be adequate to include most of the useful
data. Later on, during refinement, you can, indeed, increase the
resolution limit, if you wish. Again, with targets like maximum
likelihood, there is no statistical reason to do that. You do it because
it makes the R(cryst), R(free), and FOM look better. You do it because you
want to have a 2.00A vs. 1.96A resolution structure. What is always true
is that you need to look at the maps, and they need as many terms in the
Fourier summation as you can include. There should never be an argument
that you're saving on computing cycles. It takes far longer to look
carefully at an electron density map and make decisions on what to do than
to carry out refinement. We're rarely talking about twice the computing
time; we're probably talking about 10% more. That's definitely not a reason to
throw out data. We've got lots of computing power and lots of disk
storage, so let's use them to our advantage.

That's my nickel.

Bernie Santarsiero




On Thu, March 22, 2007 7:00 am, Ranvir Singh wrote:
 I will agree with Ulrich. Even at 3.0 A, it is
 possible to have a structure with reasonable accuracy
 which can explain the biological function or is
 consistent with available biochemical data.
 Ranvir
 --- Ulrich Genick [EMAIL PROTECTED] wrote:

 Here are my 2-3 cents worth on the topic:

 The first thing to keep in mind is that the goal of
 a structure
 determination
 is not to get the best stats or to claim the highest
 possible
 resolution.
 The goal is to get the best possible structure and
 to be confident that
 observed features in a structure are real and not
 the result of noise.

  From that perspective, if any of the conclusions
 one draws from a
 structure
 change depending on whether one includes data with
 an I/sigI in the
 highest
 resolution shell of 2 or 1, one probably treads on
 thin ice.

 The general guide that one should include only data,
 for which the
 shell's average
   I/sigI > 2 

Re: [ccp4bb] Highest shell standards

2007-03-22 Thread Sue Roberts
I have a question about how the experimental sigmas are affected when  
one includes resolution shells containing mostly unobserved  
reflections.  Does this vary with the data reduction software being  
used?


One thing I've noticed when scaling data (this with d*trek (Crystal  
Clear) since it's the program I use most) is that I/sigma(I) of  
reflections can change significantly when one changes the high  
resolution cutoff.


If I set the detector so that the edge is about where I stop seeing  
reflections and integrate to the corner of the detector, I'll get a  
dataset where I/sigma(I) is really compressed - there is a lot of  
high resolution data with I/sigma(I) about 1, but for the lowest  
resolution shell, the overall I/sigma(I) will be maybe 8-9.  If the  
data set is cut off at a lower resolution (where I/sigma(I) in the  
shell is about 2) and scaled, I/sigma(I) in the lowest resolution  
shell will be maybe 20 or even higher (OK, there is a different  
resolution cutoff for this shell, but if I look at individual  
reflections, the trend holds).  Since the maximum likelihood  
refinements use sigmas for weighting this must affect the  
refinement.  My experience is that interpretation of the maps is  
easier when the cut-off datasets are used. (Refinement is via refmac5  
or shelx).  Also, I'm mostly talking about datasets from  well- 
diffracting crystals (better than 2 A).


Sue


On Mar 22, 2007, at 2:29 AM, Eleanor Dodson wrote:

I feel that is rather severe for ML refinement - sometimes for  
instance it helps to use all the data from the images, integrating  
right into the corners, thus getting a very incomplete set for the  
highest resolution shell.  But for exptl phasing it does not help  
to have many many weak reflections..


Is there any way of testing this though? Only way I can think of to  
refine against a poorer set with varying protocols, then improve  
crystals/data and see which protocol for the poorer data gave the  
best agreement for the model comparison?


And even that is not decisive - presumably the data would have come  
from different crystals with maybe small diffs between the models..

Eleanor



Shane Atwell wrote:


Could someone point me to some standards for data quality,  
especially for publishing structures? I'm wondering in particular  
about highest shell completeness, multiplicity, sigma and Rmerge.


A co-worker pointed me to a '97 article by Kleywegt and Jones:

_http://xray.bmc.uu.se/gerard/gmrp/gmrp.html_

To decide at which shell to cut off the resolution, we nowadays  
tend to use the following criteria for the highest shell:  
completeness > 80 %, multiplicity > 2, more than 60 % of the  
reflections with I > 3 sigma(I), and Rmerge < 40 %. In our  
opinion, it is better to have a good 1.8 Å structure than a poor  
1.637 Å structure.


Are these recommendations still valid with maximum likelihood  
methods? We tend to use more data, especially in terms of the  
Rmerge and sigma cutoff.


Thanks in advance,

*Shane Atwell*



Sue Roberts
Biochemistry & Biophysics
University of Arizona

[EMAIL PROTECTED]


Re: [ccp4bb] Highest shell standards

2007-03-22 Thread Petrus H Zwart
 I typically process my data to a maximum I/sig near 1, and 
 completeness in
 the highest resolution shell to 50% or greater. It

What about maps computed from very incomplete datasets at high resolution? Don't 
you get a false sense of detail when the missing reflections are filled in 
with DFc when computing a 2mFo-DFc map?

P


[ccp4bb] Postdoc and Ph.D. student positions

2007-03-22 Thread Clemens Steegborn



Postdoctoral Position in Structural Biology / Biochemistry

We are looking for a highly motivated Postdoc for our laboratory at the 
Institute for Physiological Chemistry at Ruhr-University Bochum, Germany. 
Our work involves biochemical and structural studies on cellular signaling 
systems with relevance to cancer, metabolic diseases, or aging. A main 
focus of the group is the mechanism of metabolic sensing and cyclic 
nucleotide signaling.


Our laboratory is located at the Medical School of the Ruhr-University 
Bochum which harbours outstanding research groups and facilities. The group 
is further associated with the Max-Planck-Institute of Molecular Physiology 
in Dortmund giving us access to state-of the art x-ray facilities and 
synchrotron beamlines. Our lab offers excellent research opportunities and 
a stimulating environment for research in structural biology.


The ideal candidate is a highly motivated Ph.D. with an interest in 
medically relevant questions; experience in protein biochemistry and 
protein crystallization is mandatory. Additional experience in molecular 
biology would be an asset.


Applications (including CV, research experience, and two names and contact 
information for references) should be sent to

Dr. Clemens Steegborn Ruhr-University Bochum MA 2/141
Universitaetsstr. 150
44801 Bochum
Germany
Email: [EMAIL PROTECTED]


Jun.-Prof. Dr. Clemens Steegborn
Ruhr-University Bochum
Dept. Physiol. Chemistry, MA 2/141
Universitaetsstr. 150
44801 Bochum, Germany

phone: 0049 234 32 27041
fax: 0049 234 32 14193
email: [EMAIL PROTECTED] 

Re: [ccp4bb] Highest shell standards

2007-03-22 Thread Jose Antonio Cuesta-Seijo
I have observed something similar myself using Saint with a Bruker
Smart6K detector and using denzo with lab and synchrotron detectors.
First, the I over sigma never really drops to zero, no matter how much
beyond your real resolution limit you integrate.
Second, if I integrate to the visual resolution limit of, say, 1.5A,
I get nice dataset statistics. If I now re-integrate (and re-scale)
to 1.2A, thus including mostly empty (background) pixels everywhere,
then cut the dataset after scaling to the same 1.5A limit, the
statistics are much worse, both in I over sigma and Rint. (Sorry, no
numbers here, I tried this some time ago.)
I guess the integration is suffering at the profile-fitting level, while
the scaling suffers from general noise (those weak reflections
between 1.5A and 1.2A will be half of your total data!).

I would be careful about going much beyond the visual resolution limit.
Jose.

**
Jose Antonio Cuesta-Seijo
Cancer Genomics and Proteomics
Ontario Cancer Institute, UHN
MaRs TMDT Room 4-902M
101 College Street
M5G 1L7 Toronto, On, Canada
Phone:  (416)581-7544
Fax: (416)581-7562
email: [EMAIL PROTECTED]
**


On Mar 22, 2007, at 10:59 AM, Sue Roberts wrote:

I have a question about how the experimental sigmas are affected  
when one includes resolution shells containing mostly unobserved  
reflections.  Does this vary with the data reduction software being  
used?


One thing I've noticed when scaling data (this with d*trek (Crystal  
Clear) since it's the program I use most) is that I/sigma(I) of  
reflections can change significantly when one changes the high  
resolution cutoff.


If I set the detector so that the edge is about where I stop seeing  
reflections and integrate to the corner of the detector, I'll get a  
dataset where I/sigma(I) is really compressed - there is a lot of  
high resolution data with I/sigma(I) about 1, but for the lowest  
resolution shell, the overall I/sigma(I) will be maybe 8-9.  If the  
data set is cut off at a lower resolution (where I/sigma(I) in the  
shell is about 2) and scaled, I/sigma(I) in the lowest resolution  
shell will be maybe 20 or even higher (OK, there is a different  
resolution cutoff for this shell, but if I look at individual  
reflections, the trend holds).  Since the maximum likelihood  
refinements use sigmas for weighting this must affect the  
refinement.  My experience is that interpretation of the maps is  
easier when the cut-off datasets are used. (Refinement is via  
refmac5 or shelx).  Also, I'm mostly talking about datasets from   
well-diffracting crystals (better than 2 A).


Sue


On Mar 22, 2007, at 2:29 AM, Eleanor Dodson wrote:

I feel that is rather severe for ML refinement - sometimes for  
instance it helps to use all the data from the images, integrating  
right into the corners, thus getting a very incomplete set for the  
highest resolution shell.  But for exptl phasing it does not help  
to have many many weak reflections..


Is there any way of testing this though? Only way I can think of  
to refine against a poorer set with varying protocols, then  
improve crystals/data and see which protocol for the poorer data  
gave the best agreement for the model comparison?


And even that is not decisive - presumably the data would have  
come from different crystals with maybe small diffs between the  
models..

Eleanor



Shane Atwell wrote:


Could someone point me to some standards for data quality,  
especially for publishing structures? I'm wondering in particular  
about highest shell completeness, multiplicity, sigma and Rmerge.


A co-worker pointed me to a '97 article by Kleywegt and Jones:

_http://xray.bmc.uu.se/gerard/gmrp/gmrp.html_

To decide at which shell to cut off the resolution, we nowadays  
tend to use the following criteria for the highest shell:  
completeness > 80 %, multiplicity > 2, more than 60 % of the  
reflections with I > 3 sigma(I), and Rmerge < 40 %. In our  
opinion, it is better to have a good 1.8 Å structure than a poor  
1.637 Å structure.


Are these recommendations still valid with maximum likelihood  
methods? We tend to use more data, especially in terms of the  
Rmerge and sigma cutoff.


Thanks in advance,

*Shane Atwell*



Sue Roberts
Biochemistry & Biophysics
University of Arizona

[EMAIL PROTECTED]


Re: [ccp4bb] Highest shell standards

2007-03-22 Thread Santarsiero, Bernard D.
My guess is that the integration is roughly the same, unless the profiles
are really poorly defined, but that it is the scaling that suffers from
using a lot of high-resolution weak data. We've integrated data to, say,
I/sig = 0.5, and sometimes see more problems with scaling. I then cut
back to I/sig = 1 and it's fine. The major difficulty arises if the
crystal is dying and the decay/scaling/absorption model isn't good
enough. So that's definitely a consideration when trying to get a more
complete data set and higher resolution (and so more redundancy).

Bernie


On Thu, March 22, 2007 12:21 pm, Jose Antonio Cuesta-Seijo wrote:
 I have observed something similar myself using Saint with a Bruker
 Smart6K detector and using denzo with lab and synchrotron detectors.
 First, the I over sigma never really drops to zero, no matter how much
 beyond your real resolution limit you integrate.
 Second, if I integrate to the visual resolution limit of, say, 1.5A,
 I get nice dataset statistics. If I now re-integrate (and re-scale)
 to 1.2A, thus including mostly empty (background) pixels everywhere,
 then cut the dataset after scaling to the same 1.5A limit, the
 statistics are much worse, both in I over sigma and Rint. (Sorry, no
 numbers here, I tried this some time ago.)
 I guess the integration is suffering at the profile-fitting level, while
 the scaling suffers from general noise (those weak reflections
 between 1.5A and 1.2A will be half of your total data!).
 I would be careful about going much beyond the visual resolution limit.
 Jose.

 **
 Jose Antonio Cuesta-Seijo
 Cancer Genomics and Proteomics
 Ontario Cancer Institute, UHN
 MaRs TMDT Room 4-902M
 101 College Street
 M5G 1L7 Toronto, On, Canada
 Phone:  (416)581-7544
 Fax: (416)581-7562
 email: [EMAIL PROTECTED]
 **


 On Mar 22, 2007, at 10:59 AM, Sue Roberts wrote:

 I have a question about how the experimental sigmas are affected
 when one includes resolution shells containing mostly unobserved
 reflections.  Does this vary with the data reduction software being
 used?

 One thing I've noticed when scaling data (this with d*trek (Crystal
 Clear) since it's the program I use most) is that I/sigma(I) of
 reflections can change significantly when one changes the high
 resolution cutoff.

 If I set the detector so that the edge is about where I stop seeing
 reflections and integrate to the corner of the detector, I'll get a
 dataset where I/sigma(I) is really compressed - there is a lot of
 high resolution data with I/sigma(I) about 1, but for the lowest
 resolution shell, the overall I/sigma(I) will be maybe 8-9.  If the
 data set is cut off at a lower resolution (where I/sigma(I) in the
 shell is about 2) and scaled, I/sigma(I) in the lowest resolution
 shell will be maybe 20 or even higher (OK, there is a different
 resolution cutoff for this shell, but if I look at individual
 reflections, the trend holds).  Since the maximum likelihood
 refinements use sigmas for weighting this must affect the
 refinement.  My experience is that interpretation of the maps is
 easier when the cut-off datasets are used. (Refinement is via
 refmac5 or shelx).  Also, I'm mostly talking about datasets from
 well-diffracting crystals (better than 2 A).

 Sue


 On Mar 22, 2007, at 2:29 AM, Eleanor Dodson wrote:

 I feel that is rather severe for ML refinement - sometimes for
 instance it helps to use all the data from the images, integrating
 right into the corners, thus getting a very incomplete set for the
 highest resolution shell.  But for exptl phasing it does not help
 to have many many weak reflections..

 Is there any way of testing this though? Only way I can think of
 to refine against a poorer set with varying protocols, then
 improve crystals/data and see which protocol for the poorer data
 gave the best agreement for the model comparison?

 And even that is not decisive - presumably the data would have
 come from different crystals with maybe small diffs between the
 models..
 Eleanor



 Shane Atwell wrote:

 Could someone point me to some standards for data quality,
 especially for publishing structures? I'm wondering in particular
 about highest shell completeness, multiplicity, sigma and Rmerge.

 A co-worker pointed me to a '97 article by Kleywegt and Jones:

 _http://xray.bmc.uu.se/gerard/gmrp/gmrp.html_

 To decide at which shell to cut off the resolution, we nowadays
 tend to use the following criteria for the highest shell:
 completeness > 80 %, multiplicity > 2, more than 60 % of the
 reflections with I > 3 sigma(I), and Rmerge < 40 %. In our
 opinion, it is better to have a good 1.8 Å structure than a poor
 1.637 Å structure.

 Are these recommendations still valid with maximum likelihood
 methods? We tend to use more data, especially in terms of the
 Rmerge and sigma cutoff.

 Thanks in advance,

 *Shane Atwell*


 Sue Roberts
 Biochemistry & Biophysics
 University of Arizona

 [EMAIL PROTECTED]



[ccp4bb] X-ray generator uninterruptible power supply

2007-03-22 Thread William Scott

Hi Citizens:

Does anyone use an uninterruptible power supply for their X-ray  
generator?  Here at UCDIY, the electricity supply is pretty sketchy,  
and it is hammering our X-ray generator every time someone forgets to  
feed the hamster or grease his wheel. If so, how much does such a  
thing cost, and how unpleasant is it to maintain (since at UCDIY, you  
get to do all your own maintenance)?


Thanks in advance.

Bill


[ccp4bb] How to subtract one electron density map from another

2007-03-22 Thread Qing Xie

Hi,
I'm trying to get the difference map by subtracting the native electron 
density map from the complex electron density map. MAPMASK has a function 
of ADD/MULT, but I don't know how to use it?

Any other ways to attack this problem in real space?

Thanks in advance,

Qing


Re: [ccp4bb] How to subtract one electron density map from another

2007-03-22 Thread Paul Emsley
On Thu, 2007-03-22 at 16:06 -0500, Qing Xie wrote:
 I'm trying to get the difference map by subtracting the native electron 
 density map from the complex electron density map. MAPMASK has a function 
 of ADD/MULT, but I don't know how to use it?
 Any other ways to attack this problem in real space?

use overlapmap
with 
ADD 1 -1

presuming your maps are on the correct scale (if not you can use mapmask
to scale (one of) them).

Paul.
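
Spelled out a little, a job along those lines might be (the MAPIN1/MAPIN2/MAPOUT
logical names are quoted from memory, so check them against the overlapmap
documentation; the two input maps must already be on the same grid):

   overlapmap MAPIN1 complex.map MAPIN2 native.map MAPOUT diff.map << eof
   ADD 1 -1
   END
   eof

where ADD 1 -1 writes out 1.0 times the first map plus -1.0 times the second,
i.e. complex minus native density.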


Re: [ccp4bb] X-ray generator uninterruptible power supply

2007-03-22 Thread Matthew . Franklin
CCP4 bulletin board CCP4BB@JISCMAIL.AC.UK wrote on 03/22/2007 04:46:35
PM:

 Hi Citizens:

 Does anyone use an uninterruptible power supply for their X-ray
 generator?  Here at UCDIY, the electricity supply is pretty sketchy,
 and it is hammering our X-ray generator every time someone forgets to
 feed the hamster or grease his wheel. If so, how much does such a
 thing cost, and how unpleasant is it to maintain (since at UCDIY, you
 get to do all your own maintenance)?

 Thanks in advance.

 Bill

Hi Bill -

I'm afraid a UPS for that kind of power load may be very expensive.  My
007HF needs a 20 amp, 200 VAC circuit, so let's say the UPS needs to be
able to deliver 4 kW of power (you could probably get away with less).  If
you're talking about an older power hog like an RU-H3R, that wants an 85
amp (!) 200 VAC circuit, so your UPS needs to handle 17 kW load.

Let's assume that you're trying to ride out little glitches (seconds to
minutes), not long outages.  So you don't need huge battery capacity, just
high wattage.

Here's a couple of sites that might help you - they helped me when I was
looking for a UPS:

http://www.powerware.com/UPS/selector/SolutionOverview.asp
http://www.apcc.com/template/size/apc/index.cfm?
http://www.advizia.com/tripplite/

You'll find that a UPS to run a low-power generator like the 007HF will
cost you $5000 - $10,000, while something that can handle an RU-H3R looks
like it'll cost $20k and up.  I'm not even sure that all of these systems
will deliver 3-phase power like the generator needs.

I never considered getting a UPS for my generator, even though our building
has no backup power and Con Ed also forgets to feed their hamsters
occasionally.  What I did get was a UPS for my cryostream - I figured that
the data collection can always be restarted if the generator died in the
middle of the night, but once the crystal is gone, it's gone.  I paid less
than $10k for a UPS that can run my Cryojet for over 15 hours, so even if
the power goes out in the middle of the night, I can come in and rescue the
crystal the next day.

Email me if you want more info on my choice of UPS for my cryo.

- Matt

--
Matthew Franklin , Ph.D.
Senior Scientist, ImClone Systems
180 Varick Street, 6th floor
New York, NY 10014
phone:(917)606-4116   fax:(212)645-2054




Re: [ccp4bb] X-ray generator uninterruptible power supply

2007-03-22 Thread David J. Schuller
On Thu, 2007-03-22 at 18:54 -0400, [EMAIL PROTECTED] wrote:

 I never considered getting a UPS for my generator, even though our building
 has no backup power and Con Ed also forgets to feed their hamsters
 occasionally.  What I did get was a UPS for my cryostream - I figured that
 the data collection can always be restarted if the generator died in the
 middle of the night, but once the crystal is gone, it's gone.  I paid less
 than $10k for a UPS that can run my Cryojet for over 15 hours, so even if
 the power goes out in the middle of the night, I can come in and rescue the
 crystal the next day.

That's a good idea, we do that now on our crystallography beamlines.

-- 
===
With the single exception of Cornell, there is not a college in the
United States where truth has ever been a welcome guest - R.G. Ingersoll
===
  David J. Schuller
  modern man in a post-modern world
  MacCHESS, Cornell University
  [EMAIL PROTECTED]


Re: [ccp4bb] How to subtract one electron density map from another

2007-03-22 Thread Ulrich Genick
Why not simply scale the two data sets, subtract the corresponding Fs from one
another, and then calculate a map from those difference Fs?

If you want an error-weighted map, you should also perform error propagation on
your sigF. Assuming that the two errors are independent of one another, the
formula for doing so would be sigma(Fa-Fb) = sqrt(sigFa^2 + sigFb^2).
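For example, with sigFa = 10 and sigFb = 12 (hypothetical numbers, purely to
illustrate the propagation), sigma(Fa-Fb) = sqrt(100 + 144), or about 15.6.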

My advice would be to use omit phases for this map to avoid biasing your
difference map by model phases. In other words, calculate your phases from a
model in which you have removed the atoms that show significant peaks in a
preliminary difference map.

Provided you use the same phases for both maps (which you should, to avoid
bias), subtracting the two data sets structure factor by structure factor and
subtracting the two maps pixel by pixel are mathematically equivalent.


Cheers,

Ulrich




On Mar 22, 2007, at 5:06 PM, Qing Xie wrote:


Hi,
I'm trying to get the difference map by subtracting the native  
electron density map from the complex electron density map. MAPMASK  
has a function of ADD/MULT, but I don't know how to use it?

Any other ways to attack this problem in real space?

Thanks in advance,

Qing


Re: [ccp4bb] How to subtract one electron density map from another

2007-03-22 Thread Edward A. Berry

Mapman (Uppsala software factory) allows you to subtract maps
if they are on the same grid. You may need to multiply one
map by a factor (again in mapman) to make the background flat
in the regions you believe to be identical.

Linearity of the Fourier Transform implies you could get
the same result by subtracting in reciprocal space.
However if the crystals are not perfectly isomorphous,
it may help to skew one map onto the other before
subtracting, i.e. apply a rotation-translation operator
to superimpose equivalent areas, which could be a reason
for doing it in real space.

You can do this with
mave (also USF), getting the operator by superimposing
models in O or lsqman, or, if the refinement has not
progressed that far, refining the operator from identity
with the improve option of mave.

I expect mapmask and friends of dmmulti can do all this too,
but I am less familiar with the CCP4 tools. Either way,
there is a certain amount of documentation you will need
to read in order to know how to use it. But that is
time well spent in the long run, as it will help you to
interpret the results or modify the procedure as new
situations arise. If you get stuck and have specific
questions, the BB is here to help!

Ed