Re: [ccp4bb] [3dem] Which resolution?

2020-03-07 Thread Robert Stroud
James's answer seems right, and makes perfect sense - and makes sense by experience 
too.  
Bob
Robert Stroud
str...@msg.ucsf.edu



> On Mar 7, 2020, at 12:01 PM, James Holton  wrote:
> 
> Yes, that's right.  Model B factors are fit to the data.  That Boverall gets 
> added to all atomic B factors in the model before the structure is written 
> out, yes?
> 
> The best estimate we have of the "true" B factor is the model B factors we 
> get at the end of refinement, once everything is converged, after we have 
> done all the building we can.  It is this "true B factor" that is a property 
> of the data, not the model, and it has the relationship to resolution and map 
> appearance that I describe below.  Does that make sense?
> 
> -James Holton
> MAD Scientist
> 
> On 3/7/2020 10:45 AM, dusan turk wrote:
>> [quoted text trimmed; dusan's message and the digest text it quotes appear in full further down this page]

Re: [ccp4bb] [3dem] Which resolution?

2020-03-07 Thread dusan turk
James,

> On 7 Mar 2020, at 21:01, James Holton  wrote:
> 
> Yes, that's right.  Model B factors are fit to the data.  That Boverall gets 
> added to all atomic B factors in the model before the structure is written 
> out, yes?

Almost true. It depends on how the programs are written. In MAIN this is not 
necessary. 
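
For what it's worth, the equivalence in question is just the factoring of the 
Debye-Waller exponentials. Writing q for the resolution-dependent term in the 
exponent (sin^2(theta)/lambda^2, up to convention):

  exp(-Boverall*q) * sum_j f_j*exp(-B_j*q)  =  sum_j f_j*exp(-(B_j + Boverall)*q)

so an overall B obtained during scaling can always be absorbed into the atomic 
Bs; whether a given program actually writes the model out that way is, as noted 
above, an implementation choice.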

> The best estimate we have of the "true" B factor is the model B factors we 
> get at the end of refinement, once everything is converged, after we have 
> done all the building we can.  It is this "true B factor" that is a property 
> of the data, not the model, and it has the relationship to resolution and map 
> appearance that I describe below.  Does that make sense?

This is how it almost always is. Sometimes the best fit is achieved when the 
model Baverage is higher than the Fcalc-to-Fobs fit would suggest. In such 
cases the difference is subtracted out again during Fcalc-to-Fobs scaling. I 
have not investigated this further, but maybe someone else has an idea or an 
already established solution.

best, dusan



> 
> -James Holton





Re: [ccp4bb] [3dem] Which resolution?

2020-03-07 Thread James Holton
Yes, that's right.  Model B factors are fit to the data.  That Boverall 
gets added to all atomic B factors in the model before the structure is 
written out, yes?


The best estimate we have of the "true" B factor is the model B factors 
we get at the end of refinement, once everything is converged, after we 
have done all the building we can.  It is this "true B factor" that is a 
property of the data, not the model, and it has the relationship to 
resolution and map appearance that I describe below.  Does that make sense?


-James Holton
MAD Scientist

On 3/7/2020 10:45 AM, dusan turk wrote:

[quoted text trimmed; it appears in full in dusan turk's message below]

[ccp4bb] arpnavigator: change contour plane orientation

2020-03-07 Thread Tim Gruene
Dear all,

The arpnavigator can display the contour levels of a map ("map style plane"). 
Does anyone know how to change the orientation of the plane, i.e. the normal 
of the cutting plane?

Best regards,
Tim

-- 
--
Tim Gruene
Head of the Centre for X-ray Structure Analysis
Faculty of Chemistry
University of Vienna

Phone: +43-1-4277-70202

GPG Key ID = A46BEE1A





Re: [ccp4bb] [3dem] Which resolution?

2020-03-07 Thread dusan turk
James,

The case you’ve chosen is not a good illustration of the relationship between 
atomic B and resolution.  The problem is that during scaling of Fcalc to Fobs, 
the B-factor difference between the two sets of numbers is also minimized. In 
the simplest form, with two constants Koverall and Boverall, it looks like this:

sum_to_be_minimized = sum( FOBS**2 - Koverall * FCALC**2 * exp(-Boverall/d**2) )

Then one can include bulk-solvent correction, anisotropic scaling, … In PHENIX 
it gets quite complex.  
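
A minimal numeric sketch of this two-parameter fit, reading the sum above as a 
least-squares target on intensities (which is presumably the intent) and using 
synthetic data in place of real FOBS/FCALC:

import numpy as np
from scipy.optimize import minimize

def scale_residual(params, fobs, fcalc, d):
    # sum of squared differences between Iobs and scaled Icalc
    k_overall, b_overall = params
    icalc_scaled = k_overall * fcalc**2 * np.exp(-b_overall / d**2)
    return np.sum((fobs**2 - icalc_scaled)**2)

# synthetic reflections with a known answer: Koverall = 2, Boverall = 15
rng = np.random.default_rng(0)
d = np.linspace(1.5, 20.0, 500)                  # resolution (A) per reflection
fcalc = rng.uniform(10.0, 100.0, size=d.size)    # calculated amplitudes
fobs = np.sqrt(2.0) * fcalc * np.exp(-0.5 * 15.0 / d**2)

fit = minimize(scale_residual, x0=[1.0, 0.0], args=(fobs, fcalc, d),
               method='Nelder-Mead')
print(fit.x)  # recovers approximately [2.0, 15.0]

Real programs add bulk-solvent and anisotropic terms on top of this, as noted, 
but the B part of even this simple fit is what soaks up any offset between the 
average model B and the fall-off of FOBS.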

Hence, almost regardless of the average model B you will always get the same 
map, because the "B" of the map will reflect the B of the FOBS.  When all 
atomic Bs are equal, they are also equal to the average B.

best, dusan


> On 7 Mar 2020, at 01:01, CCP4BB automatic digest system 
>  wrote:
> 
>> On Thu, 5 Mar 2020 01:11:33 +0100, James Holton  wrote:
>> 
>>> The funny thing is, although we generally regard resolution as a primary
>>> indicator of data quality, the appearance of a density map at the classic
>>> "1-sigma" contour has very little to do with resolution, and everything
>>> to do with the B factor.
>>> 
>>> Seriously, try it. Take any structure you like, set all the B factors to
>>> 30 with PDBSET, calculate a map with SFALL or phenix.fmodel and have a
>>> look at the density of tyrosine (Tyr) side chains.  Even if you
>>> calculate structure factors all the way out to 1.0 A the holes in the
>>> Tyr rings look exactly the same: just barely starting to form.  This is
>>> because the structure factors from atoms with B=30 are essentially zero
>>> out at 1.0 A, and adding zeroes does not change the map.  You can adjust
>>> the contour level, of course, and solvent content will have some effect
>>> on where the "1-sigma" contour lies, but generally B=30 is the point
>>> where Tyr side chains start to form their holes.  Traditionally, this is
>>> attributed to 1.8A resolution, but it is really at B=30.  The point
>>> where waters first start to poke out above the 1-sigma contour is at
>>> B=60, despite being generally attributed to d=2.7A. [a script for trying
>>> this follows below the quoted text]
>>> 
>>> Now, of course, if you cut off this B=30 data at 3.5A then the Tyr side
>>> chains become blobs, but that is equivalent to collecting data with the
>>> detector way too far away and losing your high-resolution spots off the
>>> edges.  I have seen a few people do that, but not usually for a
>>> published structure.  Most people fight very hard for those faint,
>>> barely-existing high-angle spots.  But why do we do that if the map is
>>> going to look the same anyway?  The reason is that resolution and B
>>> factors are linked.
>>> 
>>> Resolution is about separation vs width, and the width of the density
>>> peak from any atom is set by its B factor.  Yes, atoms have an intrinsic
>>> width, but it is very quickly washed out by even modest B factors (B >
>>> 10).  This is true for both x-ray and electron form factors. To a very
>>> good approximation, the FWHM of C, N and O atoms is given by:
>>> FWHM= sqrt(B*log(2))/pi+0.15
>>> 
>>> where "B" is the B factor assigned to the atom and the 0.15 fudge factor
>>> accounts for its intrinsic width when B=0.  Now that we know the peak
>>> width, we can start to ask if two peaks are "resolved". [a numeric check
>>> follows below the quoted text]
>>> 
>>> Start with the classical definition of "resolution" (call it after Airy,
>>> Rayleigh, Dawes, or whatever famous person you like), but essentially you
>>> are asking the question: "how close can two peaks be before they merge
>>> into one peak?".  For Gaussian peaks this is 0.849*FWHM. Simple enough.
>>> However, when you look at the density of two atoms this far apart you
>>> will see the peak is highly oblong. Yes, the density has one maximum,
>>> but there are clearly two atoms in there.  It is also pretty obvious the
>>> long axis of the peak is the line between the two atoms, and if you fit
>>> two round atoms into this peak you recover the distance between them
>>> quite accurately.  Are they really not "resolved" if it is so clear
>>> where they are?
>>> 
>>> In such cases you usually want to sharpen, as that will make the oblong
>>> blob turn into two resolved peaks.  Sharpening reduces the B factor and
>>> therefore FWHM of every atom, making the "resolution" (0.849*FWHM) a
>>> shorter distance.  So, we have improved resolution with sharpening!  Why
>>> don't we always do this?  Well, the reason is noise.
>>> Sharpening up-weights the noise of high-order Fourier terms and
>>> therefore degrades the overall signal-to-noise (SNR) of the map.  This
>>> is what I believe Colin would call reduced "contrast".  Of course, since
>>> we view maps with a threshold (aka contour) a map with SNR=5 will look
>>> almost identical to a map with SNR=500. The "noise floor" is generally
>>> well below the 1-sigma threshold, or even the 0-sigma threshold
>>> (https://doi.org/10.1073/pnas.1302823110).  As you turn up the
>>> sharpening you will see blobs split apart and also see new peaks rising
>>> above [...]
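
For anyone who wants to repeat the B=30 experiment described above without 
PDBSET, a rough equivalent using the gemmi Python library (file names 
hypothetical; the map calculation itself is left to SFALL, phenix.fmodel, or 
similar):

import gemmi

st = gemmi.read_structure('input.pdb')
for model in st:
    for chain in model:
        for residue in chain:
            for atom in residue:
                atom.b_iso = 30.0  # flatten every atomic B factor to 30
st.write_pdb('b30.pdb')
# then compute structure factors / a map from b30.pdb out to 1.0 A and
# look at the Tyr rings, as suggested above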
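
A quick numeric check of the quoted FWHM formula and of the 0.849*FWHM 
two-peak criterion; note that log(2) here must be the natural log, which is 
what falls out of the real-space Gaussian smearing variance B/(8*pi**2):

import numpy as np

def fwhm(b_factor):
    # FWHM (A) of a light atom's density peak, per the formula quoted above
    return np.sqrt(b_factor * np.log(2)) / np.pi + 0.15

for b in (10, 30, 60):
    print(f"B={b}: FWHM={fwhm(b):.2f} A, merge distance={0.849*fwhm(b):.2f} A")

# the 0.849 factor: two equal Gaussians of width sigma fuse into a single
# maximum once their separation drops to 2*sigma = (2/2.3548)*FWHM = 0.849*FWHM
sigma = 1.0
x = np.linspace(-4.0, 4.0, 8001)  # grid includes x = 0 exactly
for sep in (2.4 * sigma, 1.6 * sigma):
    rho = (np.exp(-(x - sep/2)**2 / (2*sigma**2)) +
           np.exp(-(x + sep/2)**2 / (2*sigma**2)))
    shape = "two peaks" if rho[len(x)//2] < rho.max() else "single peak"
    print(f"separation = {sep/sigma:.1f} sigma: {shape}")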
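
And a minimal sketch of the sharpening step the last (truncated) paragraph 
describes: multiply the Fourier amplitudes by exp(+Bsharp*s**2/4), which 
subtracts Bsharp from every atom's effective B, at the price of boosting the 
noisiest high-angle terms by the same factor (synthetic amplitudes, numpy only):

import numpy as np

def sharpen(f_amp, d, b_sharp):
    # scale amplitudes by exp(+Bsharp * s^2 / 4), with s = 1/d
    s2 = 1.0 / d**2
    return f_amp * np.exp(b_sharp * s2 / 4.0)

# toy amplitudes falling off like an overall B of 60
d = np.linspace(1.5, 20.0, 200)                # resolution (A)
f = 100.0 * np.exp(-60.0 * (1.0 / d**2) / 4.0)
f_sharp = sharpen(f, d, b_sharp=30.0)          # fall-off now matches B = 30
# any noise present in f is multiplied by the same exponential, so the
# highest-resolution terms gain the most noise -- the "contrast" cost above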

Re: [ccp4bb] Overrefinement considerations and Refmac5.

2020-03-07 Thread Tim Gruene
Dear TO,

At medium to poor resolution, I prefer to set the weighting myself rather than 
use the automatic weighting. At 3-ish A resolution, I would start with 0.01 or 
0.005, with many more than the default 10 cycles, maybe 50 or 100. The numbers 
you list below go in the other direction (>4).

Best,
Tim
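
A minimal Refmac5 command script along these lines, for the command line 
rather than the GUI (file names and MTZ column labels hypothetical -- adjust 
to your data):

refmac5 hklin data.mtz xyzin model.pdb \
        hklout refined.mtz xyzout refined.pdb <<eof
labin FP=FP SIGFP=SIGFP FREE=FreeR_flag
ncyc 50               ! well beyond the default 10 cycles
weight matrix 0.005   ! fixed X-ray weight instead of WEIGHT AUTO
end
eof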

On Friday, March 6, 2020 2:36:01 PM CET M T wrote:
> Dear BBers,
> 
> I am trying to refine a structure using COOT and Refmac5, and I have some
> concerns about overrefinement and the X-ray term weight in Refmac5, based on
> the principle that letting the R factor drift too far from Rfree during
> refinement is not good...
> 
> So... First question: what is too far? I have some values in mind, like a 6%
> difference is OK, 10% is not... But is there a relation between the
> resolution of the structure and this difference? Should it be higher at lower
> resolution, or always around 6-7% independent of resolution?
> 
> Second question: OK, I have too big a difference, let's say 9-10%... What
> could be the reason for that, and what can I adjust to reduce this difference?
> 
> One way I chose is to look at the X-ray term weight (even if I am sure that
> Refmac5 does this better than me), because I saw that the final rms on
> BondLength was too tight (I have in mind that this value should stay between
> 0.01 and 0.02).
> So I looked into the Refmac log to find the starting point, and I found 8.75.
> Then I tried several tests and here are the results:
> 
>   Setting                                R factor  Rfree    BondLength  BondAngle  ChirVolume
>   Auto weighting + experimental sigmas   0.1932    0.2886   0.0072      1.6426     0.1184
>   Weighting term 4 + exper. sigmas       0.1780    0.3159   0.1047      8.1929     0.5937
>   Weighting term 4                       0.1792    0.3143   0.1008      7.8200     0.5667
>   Weighting term 15 + exper. sigmas      0.1783    0.3272   0.2020      1.6569     0.9745
>   Weighting term 15                      0.1801    0.3279   0.2022      12.5748    0.9792
>   Weighting term 8.75                    0.1790    0.3235   0.1545      10.5118    0.7909
>   Auto weighting                         0.1948    0.2880   0.0076      1.6308     0.1176
> 
> 
> 
> *Refinement Parameters*: [screenshot of the Refmac5 parameter settings not reproduced here]
> 
> Since nothing looks satisfying, I decided to ask my questions here...
> 
> What do you recommend to fix my problem, which is too large a difference
> between R and Rfree?
> 
> Thank you for answers.
> 
> 
> 

-- 
--
Tim Gruene
Head of the Centre for X-ray Structure Analysis
Faculty of Chemistry
University of Vienna

Phone: +43-1-4277-70202

GPG Key ID = A46BEE1A


