Dear ccp4 users,
It is with great sadness that I announce that Dr. Richard Kahn, CNRS Life
Sciences Research Director at the Institut de Biologie Structurale
Jean-Pierre Ebel (IBS), passed away on October 1, 2011, in Grenoble.
He was a superb crystallographer. He was always very keen to
All,
So I have two intense ice rings where there appear to be lattice spots in
between them.
I understand that any reflections that lie directly on the ice ring are
useless. However, how do the processing programs (HKL2000, d*Trek, mosflm, XDS)
deal with these intermediate spots?
It would seem
Dear Francis,
The spots will be excluded individually based on the inhomogeneous background,
so you don't need to apply a resolution cutoff.
However, once you have determined and refined your structure it may be worth
predicting the intensities of these spots and putting them back for map
If the ice rings are really sharp, they trigger the bad
background rejection in denzo/HKL2000. To reject more spots,
increase the 'reject fraction 0.7' parameter to something
greater than 0.7. This rejection is on a spot-by-spot basis,
so spots with good background between the rings should not
be
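Whatever the program-specific rejection, the strongest hexagonal-ice rings fall at well-known d-spacings, so a blunt per-reflection check is easy to sketch in Python (the 0.02 A half-width below is an arbitrary illustration, not a recommended value):

```python
# Approximate d-spacings (Angstrom) of the strongest hexagonal-ice powder rings.
ICE_RINGS = [3.897, 3.669, 3.441, 2.671, 2.249, 2.072, 1.948, 1.918, 1.883]

def in_ice_ring(d_spacing, half_width=0.02):
    """True if a reflection's d-spacing lies within half_width of an ice ring."""
    return any(abs(d_spacing - ring) < half_width for ring in ICE_RINGS)

# A spot at 3.45 A sits on the 3.441 A ring; 3.00 A is safely between rings.
print(in_ice_ring(3.45))  # True
print(in_ice_ring(3.00))  # False
```

The per-spot background rejection described above is gentler than such a resolution mask, since it keeps spots with clean background even inside the window.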
Francis,
I would like to bring your attention to our paper in Acta Cryst D Volume
66 (6), 741-744 (2010) where we deal with spots under the ice-rings. We
have been very successful in eliminating the ice-rings and recovering the
data underneath. If you are interested you can request the Python
I've used a technique called annealing, which amounts to holding an index
card between the cryo stream and the crystal for a few seconds then removing
the card quickly.
In my experience, about 70% of the time the diffraction is worse and about 30%
of the time the ice rings will be gone with
On Tue, 2011-10-11 at 15:24, Bruno KLAHOLZ wrote:
However, once you have determined and refined your structure it may be
worth predicting the intensities of these spots and putting them back for
map calculation,
REFMAC does this by default, because
expected value of unknown structure factors
On Tue, Oct 11, 2011 at 10:34 AM, Ed Pozharski epozh...@umaryland.edu wrote:
CNS defaults to excluding them. As for phenix, I am not entirely sure -
it seems that phenix.refine does too (fill_missing_f_obs= False), but if
you use the GUI then the fill in option is turned on.
In practice, it
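In code terms, the per-reflection choice being discussed (drop the term, i.e. Fobs = 0, versus fill with DFc) looks roughly like this. A schematic sketch, not any program's actual implementation, with m the figure of merit and D the Luzzati-type scale factor:

```python
def map_coefficient(f_obs, m, d, f_calc, fill_missing=True):
    """2mFo-DFc map coefficient for one reflection.

    f_obs is None for an unmeasured reflection. Filling with D*Fc uses the
    expected value of Fo given the model; fill_missing=False drops the term,
    which is equivalent to setting Fo = 0.
    """
    if f_obs is None:
        return d * f_calc if fill_missing else 0.0
    return 2.0 * m * f_obs - d * f_calc
```

With a nearly useless model D is close to zero, so the two branches converge, which is the point made further down the thread.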
On Tue, Oct 11, 2011 at 10:34 AM, Ed Pozharski epozh...@umaryland.edu wrote:
expected value of unknown structure factors for missing reflections are
better approximated using DFc than with 0 values.
better, but not always. What about, say, an 80% complete dataset? Filling
in 20% of Fcalc (or
http://hoowstuffworks.blogspot.com/2011/10/adobe-demos-amazing-unblur-feature.html
Though I can't really see the image myself... the gasp of the audience is
telling
With respect to existing density modification programs, I wonder if such
technology (whatever it is) can ever clear up my messy
I could be wrong, but my understanding is that they're removing motion
blur from the image - so I don't think it'll be directly applicable to
density modification.
But I'd be very happy to be wrong on this one.
Pete
Francis E Reyes wrote:
On Tuesday, October 11, 2011 11:21:41 am Francis E Reyes wrote:
http://hoowstuffworks.blogspot.com/2011/10/adobe-demos-amazing-unblur-feature.html
Though I can't really see the image myself... the gasp of the audience is
telling
With respect to existing density modification programs, I
On Tue, 2011-10-11 at 10:47 -0700, Pavel Afonine wrote:
better, but not always. What about, say, an 80% complete dataset?
Filling in 20% of Fcalc (or DFcalc or bin-averaged Fobs or anything else;
it doesn't matter, since the phases will dominate anyway) will strongly
bias the map towards the model.
Hi Ed,
On Tue, Oct 11, 2011 at 11:47 AM, Ed Pozharski epozh...@umaryland.edu wrote:
On Tue, 2011-10-11 at 10:47 -0700, Pavel Afonine wrote:
better, but not always. What about, say, an 80% complete dataset?
Filling in 20% of Fcalc (or DFcalc or bin-averaged Fobs or anything else;
it doesn't
Position-dependent blurring (general motion blur is a special case of it) can
be applied to density improvement, but the problem is extremely ill-posed. While
deblurring you need to limit noise amplification. A proper regularisation needs
to be designed; the problem becomes an NxN linear equation where N is
On Tue, 2011-10-11 at 11:54 -0700, Pavel Afonine wrote:
Yep, that was the point - sometimes it is good to do, and sometimes it
is not, and
Do you have a real-life example of Fobs=0 being better? You make it
sound as if it's a 50/50 situation.
--
Hurry up before we all come back to our senses!
Hi all,
I recently collected diffraction data for a 214-residue protein. When I
processed the data, Pointless suggested the space group P41. However, when I
ran Phaser with 'all choices of alternate space group', it gave me a
pdb file in P43. Additionally, phenix.xtriage suggested P422 with
twin laws
Do you have a real-life example of Fobs=0 being better?
Hopefully, there will be a paper some time soon discussing all this - we
work on this right now.
You make it
sound as if it's a 50/50 situation.
No (sorry if what I wrote sounded that misleading).
Pavel
P4(1) and P4(3) are enantiomorphic space groups. The only difference is the
handedness of the screw-axis helix (one way or the other). There is no
difference in the diffraction pattern, hence a program (or a
crystallographer) cannot distinguish the two based on the diffraction pattern.
Once you start phasing, e.g. by molecular
If the model is really bad and sigmaA is estimated properly, then sigmaA will
be close to zero so that D (sigmaA times a scale factor) will be close to zero.
So in the limit of a completely useless model, the two methods of map
calculation converge.
Regards,
Randy Read
On 11 Oct 2011, at
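Spelled out (my notation, following the 2mFo-DFc convention used in the thread):

```latex
% Measured acentric reflection:
F_{\mathrm{map}} = 2\, m\, F_{\mathrm{obs}} - D\, F_{\mathrm{calc}}
% Missing reflection, filled with its expected value:
F_{\mathrm{map}} = D\, F_{\mathrm{calc}}
% Completely useless model: \sigma_A \to 0, and since D is \sigma_A times
% a scale factor, D \to 0 as well, so both conventions give
% F_{\mathrm{map}} \to 0 and the two map calculations converge.
```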
In the limit, yes. However, the limit is when we do not have a solution, i.e. when
model errors are very large. In the limit the map coefficients will be 0 even for
2mFo-DFc maps. In refinement we have some model. At the moment we have a choice
between 0 and DFc, and 0 is not the best estimate, as Ed rightly
On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
In the limit yes. however limit is when we do not have solution, i.e. when
model errors are very large. In the limit map coefficients will be 0 even
for 2mFo-DFc maps. In refinement we have some model. At the moment we have
On 10/11/11 12:58, Ethan Merritt wrote:
On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
In the limit yes. however limit is when we do not have solution, i.e. when
model errors are very large. In the limit map coefficients will be 0 even
for 2mFo-DFc maps. In refinement we
We are looking for a research associate to join our group. If you have any
questions, please contact me.
-Dirk
Dirksen E. Bussiere, Ph.D., MBA
Director, Structural Chemistry
Novartis Institutes for BioMedical Research
4560 Horton Street, M/S 4.6
Emeryville, CA 94608 USA
The best way would be to generate them from the probability distributions derived
after refinement, but that has the problem that you need to integrate over all
errors. Another, simpler way would be to generate them using the Wilson
distribution multiple times, do the refinement multiple times, and average the
results. I have
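The simpler of the two schemes might be sketched like this; for acentric reflections the Wilson distribution makes intensities exponential with mean Sigma, so amplitudes are square roots of exponential draws (function name and interface are mine, not from any program):

```python
import math
import random

def sample_wilson_amplitudes(sigma_n, n, seed=0):
    """Draw n acentric structure-factor amplitudes from the Wilson
    distribution: intensities are exponential with mean sigma_n, so
    amplitudes are square roots of exponential variates."""
    rng = random.Random(seed)
    return [math.sqrt(rng.expovariate(1.0 / sigma_n)) for _ in range(n)]

# Mean intensity of the sample should be close to sigma_n.
amps = sample_wilson_amplitudes(100.0, 20000)
print(sum(a * a for a in amps) / len(amps))  # ~100
```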
On 10/11/11 12:58, Ethan Merritt wrote:
On Tuesday, October 11, 2011 12:33:09 pm Garib N Murshudov wrote:
In the limit yes. however limit is when we do not have solution, i.e. when
model errors are very large. In the limit map coefficients will be 0 even
for 2mFo-DFc maps. In
Hi,
I have two solutions from the SHELX C/D/E pipeline that I would like to compare
(different datasets, same protein). They seem to have different origins.
Space group is I222, with a choice of 8 origins.
How can I find and apply the correct shift to have the phase sets on a
common origin?
The
There are 4 possible origins in I222. There is a simple but inelegant way to
check. Run the SHELXE job for the second dataset four times, first with no MOVE
instruction, then with one of the following MOVE instructions inserted between
UNIT and the first atom in the *_fa.res file from SHELXD:
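The message breaks off before the list; for I222 the four permitted origins are (0,0,0), (1/2,0,0), (0,1/2,0) and (0,0,1/2), so the three non-trivial MOVE lines would plausibly read as follows (my reconstruction; check the exact syntax against the SHELX documentation):

```
MOVE 0.5 0.0 0.0
MOVE 0.0 0.5 0.0
MOVE 0.0 0.0 0.5
```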
Applications are invited for a postdoctoral position to work on the
development of high throughput techniques for membrane protein structure
determination in collaboration with the Canadian Light Source researchers
Drs. P. Grochulski and M. Fodje. The Canadian Macromolecular Crystallography
I wrote a little jiffy program for doing things like this:
http://bl831.als.lbl.gov/~jamesh/pickup/origins.com
you run it like this:
origins.com rightorigin.pdb wrongorigin.pdb I222 correlate
This will shift wrongorigin.pdb by each of what I think are the
allowed origin shifts, calculate an
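The coordinate part of such a jiffy is just a translation by each candidate shift; a minimal sketch (the four I222 origins match those mentioned earlier in the thread; the correlation scoring that origins.com presumably does is omitted):

```python
# Assumed allowed origin shifts for I222 (fractional coordinates).
ORIGIN_SHIFTS_I222 = [
    (0.0, 0.0, 0.0),
    (0.5, 0.0, 0.0),
    (0.0, 0.5, 0.0),
    (0.0, 0.0, 0.5),
]

def apply_shift(frac_coords, shift):
    """Translate fractional coordinates by an origin shift, wrapping to [0, 1)."""
    return [tuple((x + s) % 1.0 for x, s in zip(atom, shift))
            for atom in frac_coords]

# Try every candidate shift; the real jiffy would score each shifted model
# (e.g. by map or phase correlation) and keep the best.
model = [(0.70, 0.10, 0.90)]
for shift in ORIGIN_SHIFTS_I222:
    print(shift, apply_shift(model, shift))
```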
I think this is called P21, with the additional annoyance that you
need to pick your Rfree set in P212121 and then symmetry-expand it.
Otherwise, your NCS operators will constrain your free reflections
to have the same intensity as their NCS mates. I'm sure you didn't
make that mistake, but a lot
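One way to build such an expanded free set is to derive each reflection's flag from a canonical representative of its symmetry orbit, so all mates automatically share a flag. A toy sketch for point group 222 plus Friedel symmetry (the hashing scheme is my own illustration, not what any package actually does):

```python
import random

def pg222_orbit(hkl):
    """Symmetry mates of (h, k, l) under point group 222, plus Friedel pairs."""
    h, k, l = hkl
    mates = {(h, k, l), (-h, -k, l), (h, -k, -l), (-h, k, -l)}
    return mates | {(-a, -b, -c) for (a, b, c) in mates}

def free_flag(hkl, fraction=0.05, seed=1234):
    """Flag ~fraction of unique reflections as 'free', giving every member
    of an orbit the same flag by seeding from the orbit's smallest member."""
    rep = min(pg222_orbit(hkl))
    return random.Random(f"{seed}:{rep}").random() < fraction

# All 222/Friedel mates of (1, 2, 3) get identical flags.
print(free_flag((1, 2, 3)) == free_flag((-1, -2, -3)))  # True
```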