[ccp4bb] Postdoc position at Houston TMC area

2017-07-07 Thread Wang, Zhao
Dear All,
A CPRIT-funded postdoctoral position in single-particle cryo-EM is available at 
the Houston Texas Medical Center. The project will be led by Dr. Mien-Chie Hung's 
group at the University of Texas MD Anderson and will involve collaborations 
with Dr. Zhao Wang's group (Baylor College of Medicine). We are looking for a 
creative and energetic colleague to study the structure of antibody-bound 
immune checkpoint ligand/receptor complexes, which play a critical role in 
tumor-associated immune escape in cancer cells. A sharp analytical mind and the 
ability to work in a group are essential. The center at BCM has expertise 
spanning the entire discipline, including experimental and computational 
aspects of high-resolution single-particle analysis of both soluble and 
membrane proteins, as well as cryo-ET of both purified material and cells.
The candidate must hold a PhD (or expect to be awarded one shortly), with 
experience in protein/complex expression and purification.  Applicants with 
backgrounds in the following fields are encouraged to apply: X-ray 
crystallography, physics, biochemistry, and biology. Submit a CV with 3 
references for consideration to 
zh...@bcm.edu and 
mh...@mdanderson.org
Wang, Zhao
National Center for Macromolecular Imaging
Baylor College of Medicine
N420, Alkek Building
Houston, TX 77030
zh...@bcm.edu






Re: [ccp4bb] Rmergicide Through Programming

2017-07-07 Thread Edward A. Berry

I think the confusion here is that the "multiplicity correction" is applied
to each reflection, where the multiplicity is an integer 2 or greater (you can't
estimate variance from only one measurement). You can only correct in an
approximate way using the average multiplicity of the dataset, since the exact
correction depends on the distribution of multiplicity over the reflections.

And the correction is for Rmerge; you don't need to apply a correction
to Rmeas, which is a redundancy-independent best estimate of the variance.
Whatever you would have used Rmerge for (hopefully making allowance
for the multiplicity), you can use Rmeas and not worry about multiplicity.
Again, what information does Rmerge provide that Rmeas does not provide
in a more accurate way?

According to the Denzo manual, one way to artificially reduce
Rmerge is to include reflections with only one measurement (averaging
in a lot of zeros always helps bring an average down), and they say
there were actually some programs that did that. However, I'm
quite sure none of the ones we rely on today do that.
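
The per-reflection vs. average-multiplicity point can be sketched in a few lines of Python (a toy illustration with invented intensities; `merging_r` is my own helper, not a routine from any CCP4 program):

```python
import math

# Toy data: each inner list holds the repeated measurements of one
# unique reflection. Intensities are invented for illustration.
reflections = [
    [100.0, 110.0, 90.0],  # n = 3
    [50.0, 55.0],          # n = 2
    [200.0],               # n = 1: no spread to measure, so it is skipped
]

def merging_r(reflections, redundancy_weighted):
    """Rmerge (weight 1) or Rmeas (per-reflection weight sqrt(n/(n-1)))
    over all reflections measured at least twice."""
    num = den = 0.0
    for obs in reflections:
        n = len(obs)
        if n < 2:  # can't estimate variance from a single measurement
            continue
        mean = sum(obs) / n
        weight = math.sqrt(n / (n - 1)) if redundancy_weighted else 1.0
        num += weight * sum(abs(i - mean) for i in obs)
        den += sum(obs)
    return num / den

rmerge = merging_r(reflections, False)
rmeas = merging_r(reflections, True)
# rmeas >= rmerge always, since sqrt(n/(n-1)) > 1 for every n >= 2.
```

Because the weight differs reflection by reflection, a single dataset-average multiplicity can only approximate the Rmerge-to-Rmeas relationship, which is the point above.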


Re: [ccp4bb] Rmergicide Through Programming

2017-07-07 Thread Keller, Jacob
Not so fast:

First of all, I cannot remember ever having come across a paper reporting a 
multiplicity of around 1, and if there are such cases they are so rare that 
they are not worth accounting for; they should raise eyebrows in the first 
place and cast doubt on all of the statistics, let alone the structure. What can 
one do with a data set like this in terms of stats? Pretty much nothing. This 
could be answered by a multiplicity search in the PDB, I guess. Okay, just did 
this, and there are 3,076 structures with overall multiplicities of 0-2, and 
92,807 with > 2. Okay, so 3.3%, which is more than I would have thought. If you 
cut it down to 1.5, though, there are only 400, so 0.4%.

Second, according to the CCP4 wiki, it seems that both the Rmerge and Rmeas 
sums exclude reflections for which there is only one measurement (the wording 
is a bit ambiguous, though). How, for example, could one calculate the 
|I - <I>| term with only one measurement? Just call it 0? We’d have to confirm 
this with the developers, but it seems that neither R includes n=1 reflections, 
and therefore the infinite-denominator problem is non-existent.

Also, since the calculation for Rmeas corrects for the multiplicity of each 
reflection, n cannot really “approach” 1.

Third, I cannot see how Rmerge is any more impervious to manipulation—consider 
Uriah’s script in my recent foray into fiction. Also, if I am wrong and Rmerge 
does include n=1 reflections, what better way to make R values become 0 than 
setting the multiplicity = 1 for all reflections? Perfect data! (at least when 
measured by Rmerge.) Further, since Rmeas corrects for multiplicity on a 
per-reflection basis, multiplicity can be either 1, in which case the 
reflection is not included, or more, in which case the infinity problem 
disappears.

Fourth, I liked at first the rationale of evaluating data collection equipment, 
but this is a pretty special case, generally not relevant to publishing 
structures, and one would be stymied much more by the variability of the 
sample(s).

I still am not convinced, and suspect perhaps the Rmeas folks can/will answer 
better than I.

All the best,

Jacob



Re: [ccp4bb] Rmergicide Through Programming

2017-07-07 Thread Kay Diederichs
James,

I cannot follow you. "n approaches 1" can only mean n = 2, because n is an 
integer. And for n=2 the sqrt(n/(n-1)) factor is well-defined. For n=1, no 
contribution to Rmeas, Rmerge, or any other precision indicator can be 
calculated anyway, because there's nothing this measurement can be compared 
against.

just my 2 cents,

Kay

On Fri, 7 Jul 2017 10:57:17 -0700, James Holton  
wrote:


Re: [ccp4bb] Rmergicide Through Programming

2017-07-07 Thread Frank von Delft
Okay, that /is/ a strong answer:  Rmeas has too many infinities for 
comfort.  Thanks, very instructive yet again!


phx



Re: [ccp4bb] Rmergicide Through Programming

2017-07-07 Thread James Holton
I happen to be one of those people who think Rmerge is a very useful 
statistic.  Not as a method of evaluating the resolution limit, which is 
mathematically ridiculous, but for a host of other important things, 
like evaluating the performance of data collection equipment, and 
evaluating the isomorphism of different crystals, to name a few.


I like Rmerge because it is a simple statistic that has a simple formula 
and has not undergone any "corrections".  Corrections increase 
complexity, and complexity opens the door to manipulation by the 
desperate and/or misguided.  For example, overzealous outlier rejection 
is a common way to abuse R factors, and it is far too often swept under 
the rug, sometimes without the user even knowing about it.  This is 
especially problematic when working in a regime where the statistic of 
interest is unstable, and for R factors this is low intensity data.  
Rejecting just the right "outliers" can make any R factor look a lot 
better.  Why would Rmeas be any more unstable than Rmerge?  Look at the 
formula. There is an "n-1" in the denominator, where n is the 
multiplicity.  So, what happens when n approaches 1 ?  What happens when 
n=1? This is not to say Rmerge is better than Rmeas. In fact, I believe 
the latter is generally superior to the former, unless you are working 
near n = 1. The sqrt(n/(n-1)) is trying to correct for bias in the R 
statistic, but fighting one infinity with another infinity is a 
dangerous game.


My point is that neither Rmerge nor Rmeas is easily interpreted without 
knowing the multiplicity.  If you see Rmeas = 10% and the multiplicity 
is 10, then you know what that means.  Same for Rmerge, since at n=10 
both stats have nearly the same value.  But if you have Rmeas = 45% and 
multiplicity = 1.05, what does that mean?  Rmeas will be only 33% if the 
multiplicity is rounded up to 1.1. This is what I mean by "numerical 
instability", the value of the R statistic itself becomes sensitive to 
small amounts of noise, and behaves more and more like a random number 
generator. And if you have Rmeas = 33% and no indication of 
multiplicity, it is hard to know what is going on.  I personally am a 
lot more comfortable seeing qualitative agreement between Rmerge and 
Rmeas, because that means the numerical instability of the multiplicity 
correction didn't mess anything up.
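
The instability described above is easy to check numerically (a sketch; `redundancy_factor` is a made-up name for the sqrt(n/(n-1)) term, applied here with the dataset-average multiplicity, which is itself an approximation):

```python
import math

def redundancy_factor(n):
    """The sqrt(n/(n-1)) multiplier that relates Rmeas to Rmerge when
    applied with the dataset-average multiplicity n (an approximation;
    the exact correction is per reflection)."""
    return math.sqrt(n / (n - 1))

# Near n = 1 the factor diverges, so a tiny change in the reported
# average multiplicity swings the corrected R value wildly:
low = redundancy_factor(1.05)   # ~4.58
high = redundancy_factor(1.10)  # ~3.32, a ~38% drop from a ~5% change in n
# At n = 10 the factor is ~1.05, which is why Rmerge and Rmeas nearly
# coincide at high multiplicity.
typical = redundancy_factor(10)
```

This matches the 45% vs. 33% Rmeas example above: the statistic is well-behaved per reflection, but any average-multiplicity reading of it near n = 1 is not.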


Of course, when the intensity is weak R statistics in general are not 
useful.  Both Rmeas and Rmerge have the sum of all intensities in the 
denominator, so when the bin-wide sum approaches zero you have another 
infinity to contend with.  This one starts to rear its ugly head once 
I/sigma drops below about 3, and this is why our ancestors always 
applied a sigma cutoff before computing an R factor.  Our small-molecule 
colleagues still do this!  They call it "R1".  And it is an excellent 
indicator of the overall relative error.  The relative error in the 
outermost bin is not meaningful, and strangely enough nobody ever 
reported the outer-resolution Rmerge before 1995.


For weak signals, Correlation Coefficients are better, but for strong 
signals CC pegs out at >95%, making it harder to see relative errors.  
I/sigma is what we'd like to know, but the value of "sigma" is still 
prone to manipulation by not just outlier rejection, but massaging the 
so-called "error model".  Suffice it to say, crystallographic data 
contain more than one type of error.  Some sources are important for 
weak spots, others are important for strong spots, and still others are 
only apparent in the mid-range.  Some sources of error are only 
important at low multiplicity, and others only manifest at high 
multiplicity. There is no single number that can be used to evaluate all 
aspects of data quality.


So, I remain a champion of reporting Rmerge.  Not in the high-angle bin, 
because that is essentially a random number, but overall Rmerge and 
low-angle-bin Rmerge next to multiplicity, Rmeas, CC1/2 and other 
statistics is the only way you can glean enough information about where 
the errors are coming from in the data.  Rmeas is a useful addition 
because it helps us correct for multiplicity without having to do math 
in our head.  Users generally thank you for that. Rmerge, however, has 
served us well for more than half a century, and I believe Uli Arndt 
knew what he was doing.  I hope we all know enough about history to 
realize that future generations seldom thank their ancestors for 
"protecting" them from information.


-James Holton
MAD Scientist


On 7/5/2017 10:36 AM, Graeme Winter wrote:

Frank,

you are asking me to remove features that I like, so I would feel that the 
challenge is for you to prove that this is harmful however:

  - at the minimum, I find it a useful check sum that the stats are internally 
consistent (though I interpret it for lots of other reasons too)
  - it is faulty I agree, but (with caveats) still useful IMHO

Sorry for being terse, but I remain to be 

Re: [ccp4bb] Any suggestions?

2017-07-07 Thread Dr. Isabel De Moraes
Dear all,

My apologies that you were not able to see the pictures. I have attached them 
as PDFs here, but I am not sure if you will be able to get the files.

Anyway, many thanks to all who have replied via ccp4bb or privately with 
suggestions.  Indeed, the protein is purified from the natural source, so there 
are numerous possibilities.

Kind regards,
Isabel







On 7 Jul 2017, at 12:57, Dr. Isabel De Moraes wrote:

> Any suggestions regarding the positive densities?
> The crystallisation condition only has NaCl and CaCl2




--
This e-mail and any attachments may contain confidential, copyright and or 
privileged material, and are for the use of the intended addressee only. If you 
are not the intended addressee or an authorised recipient of the addressee 
please notify us of receipt by returning the e-mail and do not use, copy, 
retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not 
necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments 
are free from viruses and we cannot accept liability for any damage which you 
may sustain as a result of software viruses which may be transmitted in or with 
the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and 
Wales with its registered office at Diamond House, Harwell Science and 
Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom

-
Dr Isabel Moraes

Membrane Protein Laboratory  Group Leader

Diamond Light Source Ltd,
Harwell Science and Innovation Campus,
Oxfordshire, OX11 0DE, UK
-



Re: [ccp4bb] Any suggestions?

2017-07-07 Thread Evans, Nicola
Yes, no images came through. However, looking at your crystallisation 
conditions for clues: is there anything in the protein prep that could have 
been carried through, even from the early-stage buffers or cell growth media? 
We once had a detergent at the cell-lysis stage that was removed from all 
subsequent buffers (and the protein was put down 2 columns), but it made it 
through and produced density in the crystal structure.


Good luck with your search,


Nicola




Re: [ccp4bb] Any suggestions?

2017-07-07 Thread Ian Clifton
"Dr. Isabel De Moraes" writes:

> Any suggestions regarding to the positive densities?
>
Your pictures didn’t seem to make it as attachments, at least as
received here.

BW,
-- 
Ian ◎


[ccp4bb] Any suggestions?

2017-07-07 Thread Dr. Isabel De Moraes
Any suggestions regarding the positive densities?

The crystallisation condition only has NaCl and CaCl2





Best regards,
Isabel






Re: [ccp4bb] CCP4i2 error message manual coot

2017-07-07 Thread Jon Agirre
Me.

On 7 July 2017 at 10:38, Bernhard Rupp wrote:

> Who may I spam with the logs?


-- 
Dr Jon Agirre
York Structural Biology Laboratory / Department of Chemistry
University of York, Heslington, YO10 5DD, York, England
http://www.york.ac.uk/chemistry/research/ysbl/people/staff/jagirre/
Twitter: @alwaysonthejazz
+44 (0) 1904 32 8270


[ccp4bb] CCP4i2 error message manual coot

2017-07-07 Thread Bernhard Rupp
Dear ccp4i2 developers,

I am trying to run manual Coot for sugar correction after Privateer.
Privateer works fine, but the subsequent manual Coot (button)
starts but exits with a short error message and
plenty of error logs I do not completely comprehend.

Coot itself also works fine.

Who may I spam with the logs?

Thx, BR

--
Bernhard Rupp
CVMO
http://www.hofkristallamt.org/
b...@hofkristallamt.org
+1 925 209 7429
+43 676 571 0536
--
Many plausible ideas vanish
at the presence of thought
--
--