[ccp4bb] A new historical structural achievement

2011-07-21 Thread Justin Hall

Dear CCP4BB,

Forgive me for soapboxing, but yesterday the first structure of a
GPCR/G-protein complex was released (PDB:
http://www.rcsb.org/pdb/explore/explore.do?structureId=3SN6, article:
http://www.nature.com/nature/journal/vnfv/ncurrent/full/nature10361.html).


Recently a student preparing for their dissertation asked this board  
for its opinion about the most significant recent structural  
accomplishment, and among many things the progress on GPCRs was  
mentioned (albeit as being "cute", I think).


Reading this work from the Kobilka and Sunahara groups, I am floored
by what it must have taken to achieve this, particularly if you know
how hard and how long people (both past and present) have tried to get
a GPCR/G-protein complex structure. It is my opinion that this
structure was something of a holy grail for the GPCR community.


So, even if you don't usually follow the developments of membrane  
crystallography, I wanted to invite your attention to this historic
achievement in the GPCR field, and I hope you will join me in
congratulating the scientists involved in this work.


Cheers~

~Justin


Re: [ccp4bb] Nanodrop versus Nanophotometer Pearl versus good old Bradford.

2011-06-16 Thread Justin Hall

Hi Alex,

I read Filip's comment about volume not as a path length argument, but  
about concentration uncertainty in mixing small volumes to dilute a  
sample down before measuring it (?). I have never had to make a  
dilution for my nanodrop (my proteins are usually not that  
concentrated), but I could see his point if I did have to.


As for the variance between samples, I don't know about >25%, but I
have observed variance between repeated readings. I always take 3
readings on my nanodrop and then average them to deal with the
variance I see. I don't mind doing this because the instrument is so
fast, and I don't mind the cost of 6 ul of sample total.


The most variance I have seen usually comes with spin columns, where I
will be doing a buffer exchange from a storage buffer (sometimes at ca.
20% glycerol) into an assay or xstal buffer, and I have wondered to
myself whether the variance I see could be due to incomplete mixing of
a protein sample between a viscous buffer at the bottom and the rest of
the buffer. I don't know how often other people find themselves in a
situation where they may be sampling their 2 ul from a
"micro-environment" that is not homogeneous with the rest of the
sample, but with small volumes I think that could be a problem. Food
for thought.


Filip, I would buy a nanodrop. It is much better than a  
Bradford/cuvette and your students will love you for it. Cheers~


~Justin


Quoting aaleshin :


Filip,
25% accuracy is observed only for very dilute (OD280 < 0.1) or very
concentrated samples, but those samples are rarely used for ITC or CD.
The concentrated samples require dilution, but a regular spec requires
that too. Since the light path is very short in the Nanodrop, it is
accurate with the more concentrated samples that we crystallographers
use, so the Nanodrop is an ideal instrument for our trade.


If the drop is within the recommended volume, like 1-2 ul for our model,
its size has very little influence on the measurement.



Cuvettes will give better accuracy provided you clean them properly.
I hated having to measure a concentration because it meant washing a
cuvette; in a biological lab they are always dirty.
We switched to plastic disposable cuvettes for that reason...


Alex

On Jun 16, 2011, at 1:06 PM, Filip Van Petegem wrote:


25% is not acceptable for ITC or CD experiments though...

I was just sharing our bad experience with a demo nanodrop we had.  
Even if evaporation is not an issue, one has to take pipetting  
errors into account when dealing with small volumes.  The relative  
error on 1-2ul is a lot bigger than on 50ul. Unless you want to  
pre-mix 50ul and use a small quantity of that, which defeats the  
purpose of miniaturization...  It all depends on your applications  
and sample availability, but if you want a very accurate  
measurement, miniaturized volumes just won't get you the same  
accuracy.


Cuvettes will give a better accuracy provided you clean them  
properly. Just some water or EtOH is *not* enough...


Filip Van Petegem



On Thu, Jun 16, 2011 at 12:52 PM, aaleshin  wrote:
I also like our Nanodrop, but I do not recommend using it for  
Bradford measurements.


The 25% accuracy mentioned by Filip is pretty good for biological
samples. Using a 50 ul cuvette in a traditional spectrophotometer
will not give this accuracy because cleanliness of the cuvette will
be a big issue...


Alex

On Jun 16, 2011, at 12:43 PM, Oganesyan, Vaheh wrote:

I completely disagree with Filip's assessment. I've been using the
nanodrop for nearly 5 years and have never had consistency problems. If
you work at a reasonable speed (put the drop down, lower the lever, and
click measure before you do anything else) there will be no issues. At
very high concentrations the accuracy, and therefore the consistency,
may become lower. Concentrations between 5 and 10 mg/ml should be fine.
The instrument is pricey though.


 Vaheh



From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf  
Of Filip Van Petegem

Sent: Thursday, June 16, 2011 3:34 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Nanodrop versus Nanophotometer Pearl versus
good old Bradford.


Dear Arnon,

the Bradford method is not recommended for accurate measurements.
The readings are strongly dependent on the amino acid composition.
A much better method is using the absorbance at 280 nm under
denaturing conditions (6 M guanidine), with calculated extinction
coefficients based on the number of tyrosine and tryptophan residues
(plus disulfide bonds). This method is also old (Edelhoch, 1967), but
very reliable.
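
As a back-of-the-envelope sketch of that calculation (using the
commonly tabulated values of 5500, 1490 and 125 M^-1 cm^-1 for Trp, Tyr
and cystine; the residue counts, molecular weight and reading below are
made-up examples, not from any real protein):

# Sketch: extinction coefficient at 280 nm from composition (values as
# tabulated by Pace et al., 1995) and concentration via Beer-Lambert.
# Residue counts, MW, absorbance and path length are hypothetical.

EPS_TRP, EPS_TYR, EPS_CYSTINE = 5500.0, 1490.0, 125.0   # M^-1 cm^-1

def extinction_280(n_trp, n_tyr, n_cystine):
    """Molar extinction coefficient at 280 nm, in M^-1 cm^-1."""
    return n_trp * EPS_TRP + n_tyr * EPS_TYR + n_cystine * EPS_CYSTINE

def conc_mg_per_ml(a280, eps, mw_da, path_cm=1.0):
    """c = A / (eps * l), converted from molar to mg/ml.
    path_cm = 1.0 is a standard cuvette; adjust for short-path readings."""
    return a280 / (eps * path_cm) * mw_da

eps = extinction_280(n_trp=4, n_tyr=11, n_cystine=2)      # 38640 M^-1 cm^-1
print(conc_mg_per_ml(a280=0.35, eps=eps, mw_da=25000.0))  # ~2.3 mg/ml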


One thing about the nanodrop: smaller volume = more evaporation.
During the demo we had, I was so unimpressed with the precision
(>25% variability between two consecutive measurements) that we
didn't consider this instrument at all. So unless you just want a
'rough' estimate, I wouldn't recommend it at all. But most
respectable

Re: [ccp4bb] protein lost activity after size exclusion chromatography

2011-03-16 Thread Justin Hall

Hi Harvey,

Well, knowing nothing about your protein, allow me to ruminate anyway...

It sounds like you are exploring the possibility that a metal ion or
other cofactor is being lost. This is a reasonable first thing to
check, but your buffer exchange steps should allow small cofactors
(smaller than most proteins, that is) to pass through your membrane and
away from your protein. This suggests that your loss of activity is due
to the loss of something the size of your protein. Four things come to
mind right away.


1) The least exotic possibility I can think of is that maybe your
protein was inactive, according to your assay, all along (the assay
itself could have a problem in it; I would suggest troubleshooting the
assay as a first step). Your relatively dirtier prep could then have
falsely reported activity because of another protein component (i.e. an
impurity) that is active in your assay and was lost later during
purification.


2) This next idea seems unlikely, but you asked, so... Could there be
another protein component, necessary for activity, that you don't know
about? Such a protein would be lost during purification, resulting in
an inactive form of your protein.


3) Probably another red herring here... Maybe your protein is not  
stable without lots of other proteins around. I have personally seen  
proteins that go to pot at low concentrations, but are very stable at  
high concentrations, for which this sort of reasoning is invoked. You  
could try adding Arg or other amino acids to keep it folded.


4) Is your protein active in a cleaved form? I have seen kinases with
competent kinase domains in the absence of their regulatory domains. If
your activity assay included a cleaved form of your protein, and that
cleaved form was later removed during purification, it would appear
that you had lost activity.


The most important advice I can give you is to pay attention to what
your assays are really telling you: not what you think they are
telling you because of the useful assumptions we all make, but what the
data really report. For example, if your activity assay shows no
activity, the problem could be your protein or a component of the
assay; it is a bad idea to assume the protein is the only place
something could be wrong. A factual analysis will hopefully allow you
to trace back what you really know and where things could be going
wrong.


Hope this doesn't give you too many geese to chase; hopefully
somewhere in here is a spark to help you reason your way out of your
problem. Cheers~


~Justin

Quoting Harvey Rodriguez :


Dear all,

Recently, I came across an obstacle in the purification and activity
measurement of my protein. My protein was expressed with a C-terminal His
tag in HEK 293T cells and purified by nickel affinity, anion
exchange and size exclusion chromatography. For every purification step, I
preserved some sample to test the activity. Strikingly, the protein retained
activity after the nickel affinity column, even for three days, but lost almost
all activity immediately after Mono Q and SEC. Therefore, I speculated
that something (a metal ion or co-factor) bound to the protein was stripped
by the Mono Q column. I then skipped this step and used only SEC for
further purification. However, the protein is still not active no matter
what buffer I use, e.g. Tris, Hepes or PBS. The protein I purified on the nickel
column is also in PBS buffer, and no additive was added. Buffer exchange
in the concentrator doesn't affect the activity of the protein. Can anyone
explain why anion exchange or size exclusion chromatography destroys the
activity of the protein? Any comment or proposal is appreciated!

Harvey



[ccp4bb] off topic: GPCR membrane insertion/orientation

2011-03-04 Thread Justin Hall

Dear Community,

In trying to troubleshoot an experiment I have become interested in
the cellular process that regulates the insertion and proper
orientation of membrane proteins. I am looking for references on how
a GPCR is correctly oriented during expression (i.e. the extracellular
domain ends up oriented extracellularly instead of as a 50/50 mix in
and out). My intuition is that there must be an N-terminal sequence
that directs this process, but I am having no luck finding information
on what this sequence is for GPCRs, what players are involved, or how
orientation is thought to be controlled. Any suggestions?


This is all spurred by my wanting to use phage display with a protein
that binds to the intracellular side of a GPCR, but of course that is
the hard side to present to the outside of a cell, so I need to figure
out how to flip these guys around. I have thought about adding a new
TM helix before TM1 (or removing TM1) to flip them, but I was hoping
there might be another way around that doesn't involve such a massive
architectural rearrangement, such as simply clipping the N-terminal
sequence responsible for proper orientation (if such a thing exists).
Cheers~


~Justin


[ccp4bb] A tool for MR that renumbers and replaces amino acids of your search model

2011-02-03 Thread Justin Hall

Dear Community,

I am looking for a tool that can convert a potential MR search model
to match the residue numbering and amino acid types of my actual
protein. Specifically, I have a homolog structure with slightly
different start and stop residues, and several non-identical amino
acids relative to my protein. Can anyone direct me to a tool that can
renumber the residues in my search model and stub non-identical
residues to Ala? I recall there is a program in CCP4 that will
renumber, leaving the user to change the residue types, but this
problem seems like such a common one that I am surprised there is not
a utility that performs both functions. Cheers~


~Justin
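
In case anyone ends up scripting the stop-gap by hand, here is a
minimal sketch of the renumber-and-stub step; the offset and the list
of positions to truncate are hypothetical placeholders for what a real
alignment would give you:

# Sketch: renumber residues in a PDB-format search model and truncate
# selected positions to Ala by dropping side-chain atoms.  OFFSET and
# STUB_TO_ALA are hypothetical; take them from a real alignment of the
# search model against the target sequence.

OFFSET = 7                        # added to every residue number
STUB_TO_ALA = {23, 57, 112}       # target-numbered positions to stub
KEEP_FOR_ALA = {"N", "CA", "C", "O", "CB", "OXT"}

def convert(line):
    if not line.startswith(("ATOM", "HETATM")):
        return line                                   # pass other records through
    resseq = int(line[22:26]) + OFFSET                # columns 23-26: residue number
    line = line[:22] + f"{resseq:4d}" + line[26:]
    if resseq in STUB_TO_ALA:
        if line[12:16].strip() not in KEEP_FOR_ALA:   # columns 13-16: atom name
            return None                               # drop side-chain atom
        line = line[:17] + "ALA" + line[20:]          # columns 18-20: residue name
    return line

with open("search_model.pdb") as src, open("mr_model.pdb", "w") as dst:
    for raw in src:
        out = convert(raw)
        if out is not None:
            dst.write(out)

It ignores insertion codes, multiple chains and ANISOU records, so
treat it as a quick stop-gap rather than a substitute for a proper
model-preparation tool.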


Re: [ccp4bb] If it is a new structure?

2010-12-20 Thread Justin Hall

Hi Liu,

If I understand your question correctly, you're asking "how different
do two structures need to be for one to be 'new'?" If by "new" you
mean a new fold, then the answer is no: your structure and the homolog
have the same fold.


However, if your structure is the first structure of a protein in a
new class, then your structure is a new insight for that reason (e.g.
it is the first structure of an Unobtainium-metalloprotease).


If it is not the first structure of a protein from a new class (let's
say a previous structure of an Unobtainium-metalloprotease has been
solved using the H. sapiens sequence, but your protein is the first D.
melanogaster ortholog solved), then your structure is a new insight
for that reason.


So, in a nutshell, I guess what I am saying is that your protein is
not a new fold, but it is almost certainly "new" by some qualification,
and you will know best what that qualification is. I hope that helps;
cheers and happy holidays~


~Justin





On 20/12/2010 10:49,  wrote:

The structure of my protein is shown as the purple one. Another one,
shown in green, is homologous, but the structure of my protein can't
be solved by molecular replacement using it. The two structures also
differ considerably, especially in the B chain. Is my structure a new
one? Thank you for your help.


Re: [ccp4bb] relationship between B factors and Koff

2010-11-19 Thread Justin Hall

Hi Sebastiano,

I have had some experience with protein:protein complexes with KD ~
1-10 uM, with kinetic characterization, and with trying to purify
complexes of these proteins by SEC. I would say that if you have
reliable evidence from SPR that you have a fast on-rate (high Kon),
then you must have a fast off-rate (high Koff), because by definition
KD = Koff/Kon and your KD is fixed at about 10 E-6 M. However, I have
observed several systems with a KD of ~1-10 uM where the kinetics are
not fast on/fast off. In my experience, I have never seen anything in
the crystal structures of the weak-affinity complexes I have solved
that would correlate B-factors with Kon/Koff, and while it might be
tempting for you to draw this comparison in your structure, I would
warn that this is too large a leap without further (non-anecdotal)
evidence.


As a further note, during SEC purification of complexes, I have
observed that you generally have to start with the complex at at least
a 5- to 10-fold higher concentration than you might otherwise expect
if you want to purify the complex intact, and you are only just
pushing that with your 80-100 uM high-end concentration. A colleague
of mine once told me this is due to a 5- to 10-fold dilution effect
upon loading onto the column, but I have never verified this nor read
any primary source that validates it, so I cannot supply a reference
(others might be able to help here). Good luck and cheers~


~Justin
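
P.S. To put rough numbers on that dilution point, here is a minimal
sketch (plain 1:1 equilibrium algebra with hypothetical concentrations,
not anything specific to your system) of how much complex survives a
dilution when the KD is around 10 uM:

import math

def fraction_bound(a_total_uM, b_total_uM, kd_uM):
    """Equilibrium for a 1:1 complex A + B <-> AB.

    Solves [AB]^2 - (At + Bt + Kd)[AB] + At*Bt = 0 and returns the
    fraction of the limiting partner that is in the complex.
    """
    s = a_total_uM + b_total_uM + kd_uM
    ab = (s - math.sqrt(s * s - 4.0 * a_total_uM * b_total_uM)) / 2.0
    return ab / min(a_total_uM, b_total_uM)

# Hypothetical example: Kd = 10 uM, both partners loaded at 100 uM...
print(fraction_bound(100.0, 100.0, 10.0))   # ~0.73 bound
# ...versus after a 10-fold dilution on the column.
print(fraction_bound(10.0, 10.0, 10.0))     # ~0.38 bound

So even starting well above the KD, a 10-fold dilution roughly halves
the bound fraction, which is one way to rationalize needing the higher
loading concentrations.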



Quoting Sebastiano Pasqualato :


Hi all,
I have a crystallographic/biochemical problem, and maybe some of
you guys can help me out.


We have recently crystallized a protein:protein complex whose Kd
has been measured to be ca. 10 uM (both by fluorescence polarization
and by surface plasmon resonance).
Despite the 'decent' affinity, we couldn't purify a homogeneous
complex by size exclusion chromatography, even when mixing the proteins
at concentrations up to 80-100 uM each.
We explained this behavior by assuming that extremely high Kon/Koff
values combine to give this 10 uM affinity, and that the high Koff
value would account for the dissociation going on during size exclusion
chromatography. We have partial evidence for this from the SPR
curves, although we haven't actually measured the Kon/Koff values.


We eventually managed to solve the crystal structure of the complex
by mixing the two proteins (we had to add an excess of one of them
to get good diffraction data).
Once we had solved the structure (which makes perfect biological sense
and has been validated), we found that the mean B factors for one of
the components (the larger) are much lower than those of the other
component (the smaller one, which we had in excess). We're talking
about 48 Å^2 vs. 75 Å^2.


I was wondering if anybody has had similar cases, or has any hint
about the relationship that might (or might not) exist between a high
Koff value and high B factors (a relationship we are tempted to draw).


Thanks in advance,
best regards,
ciao
s


--
Sebastiano Pasqualato, PhD
IFOM-IEO Campus
Dipartimento di Oncologia Sperimentale
Istituto Europeo di Oncologia
via Adamello, 16
20139 - Milano
Italy

tel +39 02 9437 5094
fax +39 02 9437 5990




Re: [ccp4bb] offtopic: effect of compound impurities on ITC?

2010-08-26 Thread Justin Hall

Hi Francis,

I might save you some time by telling you up front that you should
just go back and purify your compound to remove the impurity; you
don't even need to read the rest of this, just go.


Along the lines of what Savvas was saying, in any equilibrium binding
assay between two direct competitors ("Y" is the impurity and "Z" is
your analyte, both competing for your target "X"), if you are working
at concentrations above the KD then the resultant complexes (XY and
XZ) will partition according to their relative association strengths
(dG) and concentrations. So, if Y and Z have equivalent dG values,
then the concentrations [XY] and [XZ] will be a function of [Y] and
[Z]; if [Y] = [Z] in this circumstance, then [XY] = [XZ].
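
If you want to see how that partitioning plays out numerically, here
is a minimal sketch: a generic two-competitor equilibrium solver with
made-up KDs and concentrations, not a model of your actual compounds:

def competitive_partition(x_tot, y_tot, z_tot, kd_xy, kd_xz, tol=1e-12):
    """Solve X + Y <-> XY and X + Z <-> XZ simultaneously.

    Returns ([XY], [XZ]) at equilibrium, all in the same units.  Free X
    is found by bisection on the X mass balance, which is monotonic in
    free [X].
    """
    def residual(x_free):
        y_free = y_tot / (1.0 + x_free / kd_xy)
        z_free = z_tot / (1.0 + x_free / kd_xz)
        return (x_free
                + x_free * y_free / kd_xy
                + x_free * z_free / kd_xz
                - x_tot)

    lo, hi = 0.0, x_tot
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    x_free = 0.5 * (lo + hi)
    y_free = y_tot / (1.0 + x_free / kd_xy)
    z_free = z_tot / (1.0 + x_free / kd_xz)
    return x_free * y_free / kd_xy, x_free * z_free / kd_xz

# Hypothetical example: equal KDs (1 uM) and equal amounts of Y and Z.
xy, xz = competitive_partition(x_tot=40.0, y_tot=50.0, z_tot=50.0,
                               kd_xy=1.0, kd_xz=1.0)
print(round(xy, 2), round(xz, 2))   # equal within rounding

With equal KDs and equal amounts of Y and Z, the available X splits
roughly evenly between XY and XZ, which is exactly the [XY] = [XZ]
situation described above.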


If dGz >> dGy or [Z] >> [Y], then you are in the clear. This is why
going back and purifying Z away from Y is a good idea.


Now, the great thing about ITC is of course that you can get dG, dH
and -TdS in one experiment, but this is also going to bite you in the
butt here, since you will simultaneously be determining dG, dH and
-TdS for both Y and Z, which leaves you with more unknowns than you
have data to solve for, unless you independently know [X], [Y] and
[Z], and dG, dH and -TdS for XY or XZ.


In fact, the circumstance where you know [X], [Y], [Z] and dG, dH and
-TdS for XY or XZ is what Savvas is describing with "displacement"
assays, and unless I am misunderstanding your situation, it sounds like
you don't know these parameters. For that reason I would not qualify
this as a displacement assay, but instead as a poorly controlled
experiment. Now, you might be able to do an experiment with pure Y
binding to X to determine dG, dH and -TdS, and then perform the
proposed experiment with impure Y and Z as a "displacement" binding,
but this is still going to be a headache: your uncertainty will be
greater, you will not have as accurate a measure of [Y] and [Z] as
when they are pure, and since your direct signal (dH) is going to be
from the formation of both XY and XZ (dHtotal = dHy + dHz), the S/N
will be equal to or less than in an experiment with pure Y or pure Z
(my nanny used to say 'don't do good experiments with bad reagents,
you'll just waste time'; she was very wise).


Hope that helps, cheers~

~Justin







Quoting Savvas Savvides :


Hi Francis
I guess it depends on how much residual high-affinity binder you have
in the mixture and what the difference in affinity is between Y and
deriv-Y. Another issue is of course whether Y and deriv-Y compete for
the same binding site and have the same stoichiometry. A well-designed
displacement ITC experiment, and comparison thereof with ITC data for
your high-affinity binder, should lead to some good answers. Knowing
the ratio of Y vs deriv-Y in your starting compound solution will be
an advantage.


A very useful reference in thinking about and carrying out  
displacement ITC in our group has been the one by Velazquez-Campoy  
and Freire. This article was specifically written to address the  
application of displacement titrations in ITC. We have applied this  
approach to address several types of questions concerning  
interactions in the uM-pM range.


Velazquez-Campoy A, Freire E.
Isothermal titration calorimetry to determine association constants
for high-affinity ligands.
Nat Protoc. 2006;1(1):186-91.

Best regards
Savvas


Savvas Savvides
Unit for Structural Biology @ L-ProBE
Ghent University
K.L. Ledeganckstraat 35, 9000 Ghent, Belgium
Ph. +32  (0)472 928 519 http://www.LProBE.ugent.be/xray.html



On 24 Aug 2010, at 17:11, Francis E Reyes wrote:


Hi All

I'm curious about the effect of small impurities in commercially
synthesized compounds on ITC and its analysis. Say compound Y is the
high-affinity binder, but you make a derivative that differs from Y by
a single functional group (you used Y to make this new compound) and
you are never able to completely get rid of Y. How does this affect
the analysis when determining the derivative's affinity by ITC?


References or personal experience are appreciated!

F

-
Francis E. Reyes M.Sc.
215 UCB
University of Colorado at Boulder

gpg --keyserver pgp.mit.edu --recv-keys 67BA8D5D

8AE2 F2F4 90F7 9640 28BC  686F 78FD 6669 67BA 8D5D





Re: [ccp4bb] Zalman monitor on Linux and Coot

2009-11-04 Thread Justin Hall
Thanks to everyone for the info on Zalman monitors; sorry to have
muddied the waters for you, Ajit. Best wishes~


~Justin

Quoting Justin Hall :


Hi Ajit;

One of our CRT monitors broke recently, and in the context of
bemoaning the loss to a friend I was told that LCD monitors will not
work for stereo viewing. I understood the reason to be related to the
difference in refresh rates (?), with LCDs not being fast enough, so
that the viewer is left seeing ghosts. The effect, which I have not
seen first hand, was described to me as capable of making most hapless
stereo viewers very ill, very fast. I would encourage you to plumb the
depths of knowledge on this subject further, but that is my simple
understanding.


Best wishes~

~Justin

Quoting Ajit Datta :


Hello everyone,
   Sorry for a non-CCP4-related question again. Can anyone let me
know how to make stereo work on Linux with a Zalman monitor and Coot?
Is it as simple as what we do with CRT monitors? Or do we need
something else? We presently use CRT monitors on a Quadro FX 4600
graphics card. I would like to move to LCDs.


Thanks for all inputs

Ajit B.







Re: [ccp4bb] Zalman monitor on Linux and Coot

2009-11-04 Thread Justin Hall

Hi Ajit;

One of our CRT monitors broke recently, and in the context of
bemoaning the loss to a friend I was told that LCD monitors will not
work for stereo viewing. I understood the reason to be related to the
difference in refresh rates (?), with LCDs not being fast enough, so
that the viewer is left seeing ghosts. The effect, which I have not
seen first hand, was described to me as capable of making most hapless
stereo viewers very ill, very fast. I would encourage you to plumb the
depths of knowledge on this subject further, but that is my simple
understanding.


Best wishes~

~Justin

Quoting Ajit Datta :


Hello everyone,
Sorry for a non-CCP4-related question again. Can anyone let me
know how to make stereo work on Linux with a Zalman monitor and Coot?
Is it as simple as what we do with CRT monitors? Or do we need
something else? We presently use CRT monitors on a Quadro FX 4600
graphics card. I would like to move to LCDs.


Thanks for all inputs

Ajit B.




[ccp4bb] Update: Summary for "Anisotropic Diffraction In Refinement" question

2009-10-06 Thread Justin Hall

Dear All;

With regard both to my original question and to the recent question by
Katja Schleider, I have been told by Mike Sawaya that he has just
implemented an option for inputting the B-factors of your choice on
his server (http://www.doe-mbi.ucla.edu/~sawaya/anisoscale/). This
option allows for an anisotropic resolution limit with no change to
the user's Fo values, if desired.


Katja, my personal solution had been to impose an elliptical
resolution limit to minimize the amount of erroneous data, though that
comes with some caveats, as described below:



From my original summary:
Application of an elliptical resolution boundary is justified because
the resolution boundary from common integration programs (Denzo and
Mosflm, for example) is spherical, whereas the diffraction from
anisotropic data is ellipsoidal. A spherical boundary would result in
the inclusion of numerous poorly measured reflections in the higher
resolution shells, which effectively makes these data noisier.
Imposing an ellipsoidal resolution boundary is equivalent to removing
noise from the higher resolution bins and is simply the anisotropic
equivalent of the normal resolution limit truncation.



from Peter Zwart

Hi Justin,

Please be careful in interpreting maps from elliptically truncated
data; there is a potential for introducing some bias. In Refmac (as
well as Phenix), maps are produced that fill in missing amplitudes
with DFcalc. When your mtz file contains only a small fraction of the
Miller indices in the highest (spherical) shell, all the missing
reflections will be assigned DFcalc. Depending on your anisotropy,
this can be a significant number of reflections.


I'm not sure how serious this issue is, but it might be worthwhile  
checking the 'unfilled' maps as well (both phenix.refine and Refmac  
allow you to compute these).


HTH

Peter


I hope this helps; good luck, Katja.


[ccp4bb] Summary for "Anisotropic Diffraction In Refinement" question

2009-09-15 Thread Justin Hall

Dear All;

In response to my "Anisotropic Diffraction In Refinement" question,
which asked for suggestions on how best to proceed with refinement of
an anisotropic data set, I received a large number of responses, which
overwhelmingly suggested using the UCLA Diffraction Anisotropy Server
(http://www.doe-mbi.ucla.edu/~sawaya/anisoscale/).


The Anisotropy Server treats scaled/truncated data sets (I used Scala
and the old Truncate program). Fo and SigFo are analyzed with respect
to resolution in three dimensions, and the data are treated in three steps:

1) An elliptical resolution boundary is determined and applied.
2) A purely anisotropic B-factor is applied to the Fo and SigFo data  
to cause the data in all directions to fall off equally.
3) A negative isotropic B-factor is then applied to the structure  
factors to force the fall-off in the strongest direction to match that  
of the original data, effectively meaning that the data are not scaled  
to the mean but the weaker data are scaled up to match the strongest  
data.


Application of an elliptical resolution boundary is justified because
the resolution boundary from common integration programs (Denzo and
Mosflm, for example) is spherical, whereas the diffraction from
anisotropic data is ellipsoidal. A spherical boundary would result in
the inclusion of numerous poorly measured reflections in the higher
resolution shells, which effectively makes these data noisier.
Imposing an ellipsoidal resolution boundary is equivalent to removing
noise from the higher resolution bins and is simply the anisotropic
equivalent of the normal resolution limit truncation.


However, I was confused by the second and third steps. The second
step, the application of anisotropic scale factors, is appropriate if
the refinement program does not include anisotropic scaling in its
calculation of Fc; however, modern refinement programs do this. Pavel
Afonine touched on this in his CCP4BB posting in response to my
original posting, where he noted that "anisotropic scale factor[s]
that [are] part of the total structure factor take care of this"
().


For the third step, applying a negative isotropic B-factor to modify
the Fo is equivalent to sharpening the peaks in your maps, and this
can be useful. However, applying the correction to Fo will also result
in an inappropriate decrease in the average temperature factor of the
resulting model. Since B-factors are used as a measure of the
coordinate error of an atom, modifying your Fo means these low
B-factors will tend to mislead users of that model into thinking its
quality is better than it really is. If a sharper map makes
identification of model errors easier, the map can be sharpened when
it is calculated, without affecting the parameters in the PDB file.
The latest versions of Coot, for example, allow you to sharpen any map
that they calculate.
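
To make the sharpening point concrete, here is a minimal sketch of
what an isotropic B-factor applied to the amplitudes does as a
function of resolution (the numbers are made up, and this is only an
illustration of the scale factor, not the server's actual procedure):

import math

def apply_isotropic_b(f_obs, d_spacing, b_add):
    """Scale an amplitude by an isotropic B-factor.

    F' = F * exp(-b_add * s^2 / 4) with s = 1/d.  A negative b_add
    sharpens (boosts high-resolution terms); a positive one blurs.
    """
    s2 = (1.0 / d_spacing) ** 2
    return f_obs * math.exp(-b_add * s2 / 4.0)

# Hypothetical example: a -40 A^2 "sharpening" B-factor boosts a 3 A
# amplitude roughly 3-fold while changing a 10 A amplitude by ~10%,
# which is why doing this to Fo drags the refined model B-factors down.
for d in (10.0, 5.0, 3.0):
    print(d, round(apply_isotropic_b(100.0, d, -40.0), 1))

The same exponential applied at map-calculation time gives you the
sharper map without ever touching the deposited Fo.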


I brought these points to the attention of the Anisotropy Server's
director (Michael Sawaya), who is now working to provide an option to
omit steps 2 and 3 for users who do not want their structure factors
modified.


My thanks to everyone who responded to my original question, and to  
Dale Tronrud and Michael Sawaya in particular for valuable discussion.


[ccp4bb] anisotropic diffraction in refinement

2009-09-01 Thread Justin Hall

Dear All;

I am working with a data set that is anisotropic. The resolution
limits are roughly 2.75 by 3.45 A. I have integrated the data (using
Mosflm) out to 2.75 A, so the data include a mix of real (I/sig >> 1)
and "imaginary" (I/sig ~ 1) data past the 3.45 A resolution bin.


I am concerned that the presence of the poor-quality data in the outer
shells will cause my good data to 2.75 A resolution to be down-weighted
in refinement. Since anisotropic resolution limits do not seem to be an
option, are there other tools that would allow proper weighting for
this situation?


Cheers~

~Justin Hall
Oregon State University