[ccp4bb] Re: [ccp4bb] suggestions are welcome

2017-07-20 Thread Smith Liu
Please characterize your A and B to check that they really are what you think they are.





On 2017-07-18 at 10:25, 高艺娜 wrote:
Hi all,

Negative-stain EM of a protein A-B complex has been reported, but according to my gel filtration results (I purified A and B separately for incubation), A does not bind B. Of course I tried different buffer conditions with various pH values, even a binding condition with only 50 mM KCl. Do you have any suggestions or methods I could try to obtain the protein A-B complex?

Any suggestion is welcome,

Thank you all ,

Best,

Re: [ccp4bb] Fine Phi Slicing

2017-07-20 Thread James Holton
An important aspect of fine phi slicing that has not been mentioned yet 
(and took me a long time to figure out) is the impact of read-out time.  
Traditionally, read-out time is simply a delay that makes the overall 
data collection take longer, but with so-called "shutterless" data 
collection the read-out time can have a surprising impact on data 
quality.  It's 2 ms on my Pilatus3 S 6M.  This doesn't sound like much, 
and indeed 2 ms is also the timing jitter of my x-ray shutter, which  
had not been a problem with CCD detectors for 15 years.  The difference 
is that with so-called "shutterless" data collection not only can 
appreciable intensity fall into this 2 ms hole, but none of the data 
processing programs have a way to "correct" for it.  What you end up 
with is Rmerge/Rmeas values of 15-30% in the lowest-angle bin, and 
correspondingly low overall I/sigma.  At first,  I couldn't even solve 
lysozyme by S-SAD!  This had been an easy task with the Q315r I had just 
replaced.  The difference turned out to be "noise" coming from this 
read-out gap.


The 2 ms gap between images is only important if it is comparable to the 
time it takes a relp to transit the Ewald sphere.  At 1 deg/s and mosaic 
spread of 0.5 deg this is 0.002 deg of missing data, or about 1% error 
in integrated intensity.  This is fine for most applications.  But if 
you are turning at 25 deg/s with a room-temperature crystal of mosaicity 
0.05 deg, then you could lose the spot entirely in a 2 ms read-out gap
(100% error).  This is one of several arguments for fine phi slicing, 
where you make sure that every spot is not just observed, but split over 
2-3 images.  This also helps the pile-up correction Gerd already 
mentioned.  What is often overlooked, however, is that the error due to 
read-out gap is only relevant to partials.  Fulls don't experience it at 
all, so wide phi slicing is practically immune to it.  But with fine phi 
slicing everything is a partial, and 100% of the spots are going to take 
on read-out-gap error. So, what is the solution?  Slow down.
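The read-out-gap numbers above can be sketched in a few lines. This is my own back-of-the-envelope illustration, not from the post: it assumes the rocking width of a spot is roughly the mosaic spread, so the fraction lost to the gap is (spindle speed × gap time) / mosaicity, capped at 100%.

```python
# Rough estimate of the intensity error a partial reflection can suffer
# from the detector read-out gap.  Assumption (mine, for illustration):
# the spot's rocking width ~ mosaic spread, so the fraction of the spot
# falling into the gap is (spindle_speed * gap_time) / mosaicity.

def readout_gap_error(spindle_deg_per_s, gap_s, mosaicity_deg):
    """Fraction of a partial reflection's intensity lost to the gap."""
    missing_deg = spindle_deg_per_s * gap_s
    return min(missing_deg / mosaicity_deg, 1.0)

# Slow spindle, broad mosaicity: 1 deg/s, 2 ms gap, 0.5 deg mosaic spread
print(readout_gap_error(1.0, 0.002, 0.5))    # small, ~1% or less

# Fast spindle, sharp room-temperature crystal: 25 deg/s, 0.05 deg mosaicity
print(readout_gap_error(25.0, 0.002, 0.05))  # the entire spot can vanish
```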


The problem with slowing down the spindle, of course, is radiation 
damage.  If you've got a flux of 1e12 photons/s into a 100x100 micron 
beam spot and 1 A wavelength you are dosing metal-free protein crystals 
at about 50 kGy/s.  Most room-temperature crystals can endure no more 
than 200 kGy, so they will live for about 4 seconds in this beam.  A 
detector framing at 25 Hz will only get 100 images, no matter what the 
spindle speed. The decision then: is it better to get 100 deg at 1 
deg/image?  or 2.5 deg with fine phi slicing?   That is, if the mosaic 
spread is 0.05 deg, you can do no more than 0.025 deg/image and still 
barely qualify as "fine phi slicing". The 25 Hz framing rate dictates no 
less than 40 ms exposures, and that means turning the spindle at (0.025 
deg/ 0.04 s) = 0.625 deg/s. Thus, we cover 2.5 deg in the 4 seconds 
before the crystal dies.  That's just algebra.  The pragmatic 
consequence is the difference between getting a complete dataset from 
one crystal and needing to merge 40 crystals.
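The dose algebra above can be written out explicitly. A sketch using the numbers from the post; this is not a real dosimetry calculation (a tool like RADDOSE-3D would be, the 50 kGy/s figure is simply taken from the text):

```python
# Dose-limited data collection algebra, numbers from the post above.
dose_rate_kGy_per_s = 50.0   # 1e12 ph/s into a 100x100 um spot at 1 A
dose_limit_kGy      = 200.0  # typical room-temperature tolerance
framing_rate_hz     = 25.0   # detector frame rate
mosaicity_deg       = 0.05

lifetime_s = dose_limit_kGy / dose_rate_kGy_per_s   # crystal survives 4 s
n_images   = framing_rate_hz * lifetime_s           # so at most 100 images

# Fine slicing: at most half the mosaic spread per image
slice_deg     = mosaicity_deg / 2                   # 0.025 deg/image
exposure_s    = 1.0 / framing_rate_hz               # 40 ms minimum exposure
spindle_speed = slice_deg / exposure_s              # 0.625 deg/s
total_deg     = spindle_speed * lifetime_s          # 2.5 deg before it dies

print(lifetime_s, n_images, spindle_speed, total_deg)
```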


Of course, you can attenuate 40x and get 100 fine-sliced degrees, but 
that will take 40x more beam time.  The images will also be 40x weaker.  
In my experience you need at least an average of 1 photon/pixel/image 
before even the best data processing algorithms start to fall over.  You 
can actually calculate photons/pixel/image beforehand if you know your 
flux and how thick your sample is:


photons/pixel = 1.2e-5*flux*exposure*thickness/pixels

where flux is in photons/s, exposure in seconds, thickness in microns 
and 1.2e-5 comes from the NIST elastic scattering cross section of light 
atoms (C, N, O are all ~ 0.2 cm^2/g), the rough density of protein 
crystals (1.2 g/cm^3), and the fact that about half of all scattered 
photons land on a flat detector at typical distance from the sample, or: 
0.2*1.2*(1e-4 cm/um)/2 = 1.2e-5
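The rule of thumb is easy to code up. A sketch using the post's numbers; I take "6M" to mean 6 million pixels, as the derivation above implicitly does:

```python
# Mean photons/pixel/image from flux and sample thickness, using the
# 1.2e-5 constant derived in the post (elastic cross section of light
# atoms x protein density x half the photons reaching the detector).

def photons_per_pixel(flux_ph_per_s, exposure_s, thickness_um, n_pixels=6e6):
    """Estimate of mean photons per pixel per image."""
    return 1.2e-5 * flux_ph_per_s * exposure_s * thickness_um / n_pixels

# 1e12 ph/s through a 100 um sample: exposure needed for ~1 photon/pixel
flux, thickness = 1e12, 100.0
exposure = 1.0 / (1.2e-5 * flux * thickness / 6e6)  # invert the formula
print(exposure)                                     # ~5 ms, i.e. ~200 Hz
print(photons_per_pixel(flux, exposure, thickness))  # ~1 by construction
```

Remember to include air scatter in the thickness: per the post, roughly 1 micron of effective sample per mm of air path.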


So, if your flux is 1e12 photons/s and your sample is 100 um thick you 
will get ~1 photon/pixel on a 6M in about 5 ms.  That corresponds to a 
framing rate of 200 Hz.  If the detector can't go that fast, you need to 
attenuate.  Note this is the total sample thickness, including the stuff 
around the crystal.  Air scatter counts as 1 micron of sample thickness 
for every mm of air between the beamstop and collimator.  So, in a way, 
the beam properties and detector properties can be matched.  What is 
counter-intuitive is that a 25 Hz detector is sub-optimal for a 100 
micron beam with flux 1e12 photons/s.  Certain fluxaholics would already 
call that a "weak" beam, so why does getting a faster detector mean you 
should attenuate it?


It would seem that fine slicing requires throwing away a lot of beam, 
unless you only want a few degrees from every crystal. Maybe there is a 
better way?


I have experimented with 1-deg images with no attenuation and room 
temperature crystals and the results are surprisingly good. 

[ccp4bb] Staraniso - determination of anisotropic resolution limits and statistics

2017-07-20 Thread vincent Chaptal

Dear Gerard and Ian,

thank you for your work in putting together the Staraniso server and your 
effort to tackle the anisotropic diffraction problem, a non-trivial 
behavior that creates many problems in structure solving, and that is 
very prevalent in the world of membrane proteins...


I have been following your threads very closely over the past months; I 
am now in the possession of another one of these anisotropic datasets 
and just submitted a job on your server, and I would like to ask you 
some general questions for which I couldn't find answers on your website 
(I'm emailing the ccp4bb as it might be of interest to other readers as 
well).


* It seems that the (local) I/s(I) is your criterion of choice for deciding 
how far the data diffract. You offer a range of values from 1 to 3 in 
small steps, with a default of 1.20. Could you shed more light on this 
choice, how important it is, and what attention should be devoted to it?


My understanding is that with the new detectors a measured photon is a 
real photon, so users can go to very low values of I/s(I) to get the most 
data out of the detectors. For isotropic diffraction, I would cut my data 
at several of these values and inspect the maps to see whether their 
quality improves. However, with my anisotropic data and the default 
cutoff of 1.20, I get the following table, which raises more questions:


Summary of merging statistics for observed data extracted from the final 
MRFANA log file:

                                   Overall    InnerShell   OuterShell
 --------------------------------------------------------------------
 Low resolution limit               48.618      48.618        4.200
 High resolution limit               3.811      12.254        3.811
 Rmerge  (all I+ & I-)               0.182       0.046        7.878
 Rmerge  (within I+/I-)              0.186       0.043        7.643
 Rmeas   (within I+/I-)              0.223       0.053        9.089
 Rmeas   (all I+ & I-)               0.200       0.052        8.537
 Rpim    (within I+/I-)              0.122       0.031        4.870
 Rpim    (all I+ & I-)               0.080       0.024        3.262
 Total number of observations       148030        5479         8188
 Total number unique                 23914        1192         1196
 Mean(I)/sd(I)                         7.3        21.2          1.3
 Completeness (spherical)             60.4        96.9         12.0
 Completeness (ellipsoidal)           92.5        96.9         67.0
 Multiplicity                          6.2         4.6          6.8
 CC(1/2)                             0.996       0.996        0.061


* In the past, before your server existed, I cut my anisotropic data so 
as to keep 20% completeness in the highest resolution shell; that seemed 
to me the right cutoff to ensure map quality, though I must admit it was 
more a rule-of-thumb guess than a real study. That completeness was of 
course spherical. Given the I/s(I) criterion stated above, is 
completeness a secondary criterion, or should it be taken into account to 
refine which I/s(I) to choose for the scaling performed in Staraniso? 
What minimum completeness should be kept?


* A side question arises from this table: I have a CC(1/2) of 0.061 in 
the highest resolution shell. Should this be worrisome, or is CC(1/2) not 
a good criterion in this case? As above, what minimum CC(1/2) should be 
kept?


Thank you in advance for your help and feedback.
All the Best
Vincent



--

Vincent Chaptal, PhD

Institut de Biologie et Chimie des Protéines

Drug Resistance and Membrane Proteins Laboratory

7 passage du Vercors

69007 LYON

FRANCE

+33 4 37 65 29 01

http://www.ibcp.fr




[ccp4bb] XQuartz problems

2017-07-20 Thread Cygler, Miroslaw
Hello Mac Gurus,
I ran into a problem with XQuartz on my iMac today. No problems until now. 
Now any application that uses X11 fails to start: it invokes XQuartz, but 
that is it. What is more, once running, XQuartz cannot be stopped; after an 
attempt to quit, the application immediately restarts. I cannot restart the 
computer the normal way and have to hold the power button to power it down. 
I deleted XQuartz and reinstalled it from the repository, but to no avail.
This behaviour might be related to my plugging in someone else's hard drive 
today and transferring files to my own hard drive; something else could have 
been transferred to my iMac. I would like to avoid reinstalling the operating 
system from scratch and restoring the files from backup.
Any advice on what to try? Thanks for your help,

Mirek





Re: [ccp4bb] Fine Phi Slicing

2017-07-20 Thread Graeme Winter
James,

On 20 Jul 2017, at 19:06, James Holton wrote:

 In my experience you need at least an average of 1 photon/pixel/image before 
even the best data processing algorithms start to fall over.

I do not agree… both XDS and DIALS will (today) do pretty well with images 
that are a factor of 25 weaker than this (i.e. around 0.04 counts/pixel on 
average)
average)

https://zenodo.org/record/49559

I think this is the right set; it was measured a while back.

Background estimation was the bigger problem with this, since almost all of the 
pixels are 0…

Have a play, data like this are quite entertaining.

Cheers Graeme



[ccp4bb] ample

2017-07-20 Thread Patrick Loll
I’m intrigued by the prospect of using AMPLE to test multiple distant homologs 
in an MR problem. I've used HHPRED to identify about 20 high-probability 
homologs of known structure, each of which has about 20-25% identity with the 
unknown protein. However, it’s not clear to me from the documentation whether 
the program will use the alignments from HHPRED, and, if so, how I should 
provide that information. 

Or does AMPLE perform its own alignment? I.e., do I simply point the program to 
a directory containing 20 different PDB files and stand back?

Thanks for any insights.

Cheers,

Pat 
---
Patrick J. Loll, Ph. D.  
Professor of Biochemistry & Molecular Biology
Drexel University College of Medicine
Room 10-102 New College Building
245 N. 15th St., Mailstop 497
Philadelphia, PA  19102-1192  USA

(215) 762-7706
pjl...@gmail.com
pj...@drexel.edu


[ccp4bb] high Rfree

2017-07-20 Thread 张士军
Hi everyone

   I recently solved the structure of a short fragment that forms an 
anti-parallel coiled coil. I then tried to solve the structure of a longer 
fragment by molecular replacement in Phenix, using the short-fragment 
structure as the search model. The result is not good: both Rwork and Rfree 
are high. I therefore suspect the longer fragment forms a parallel coiled 
coil, different from the shorter one. I am wondering whether there are any 
other methods to handle this situation besides heavy-atom phasing? Thanks a 
lot!

Re: [ccp4bb] Fine Phi Slicing

2017-07-20 Thread Keller, Jacob
Based on this, a vision for the future:

A warehouse filled with sealed-tube, top-hat-profiled sources, super-accurate 
goniostats, and Dectris detectors, a robot running back and forth from a 
central dewar to place the crystals, all images 1-bit, intensities measured as 
probabilities; a day when crystal-frying will finally come to an end, or will 
be used routinely (as RIP) as the sure-fire way to solve crystal structures.

JPK


