[ccp4bb] OFF_TOPIC: Should you be worried about BPA from plastics? Yes, if you store alkaline reagents in polycarbonate bottles!

2022-01-30 Thread Edward Berry

After using the same reagents for the Lowry assay and seeing the color yield in
the standard curve gradually decrease year by year, we decided to make new
reagents last year. Sure enough, the color yield was restored, but in the next
assay a few weeks later the blank was unusually high. After a month the blank
read nearly 1 AU.

The problem was, we stored the alkaline reagent (NaOH + Na2CO3) in a
polycarbonate bottle. I like polycarbonate because it is transparent and hard
like glass, but lighter and less breakable. But polycarbonate is a polyester of
bisphenol A with carbonic acid. Apparently the high pH slowly hydrolyzes the
ester linkages, or the plastic retains some monomers that slowly leach out, and
(duh!) bisphenol A gives a positive reaction with the Folin-Ciocalteu phenol
reagent.



To unsubscribe from the CCP4BB list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/WA-JISC.exe?SUBED1=CCP4BB=1

This message was issued to members of www.jiscmail.ac.uk/CCP4BB, a mailing list 
hosted by www.jiscmail.ac.uk, terms & conditions are available at 
https://www.jiscmail.ac.uk/policyandsecurity/


Re: [ccp4bb] AW: [ccp4bb] Antwort: Re: [ccp4bb] chain on 2-fold axis?

2021-08-27 Thread Edward Berry

> But where is the upper limit for the molecular weight of this molecule?

David Cobessi solved a structure with heme on a crystallographic 2-fold.
https://www.ncbi.nlm.nih.gov/pubmed/11752777
Heme is almost, but not quite, 2-fold symmetric.
eab


Peer Mittl wrote on 8/27/2021 9:55 AM:

Dear Vaheh,

I agree with you, at least in your last statement. I guess we all agree that certain molecules can 
occupy special positions on true rotation axes. Strictly, this is only possible if the molecule 
obeys the rotation symmetry. For water molecules on 2-folds you already have to make assumptions 
about the "invisible" protons. I guess many of us have seen even larger, asymmetric 
solvent molecules, such as glycerol, MPD or buffer molecules, on special positions, where they locally 
break the crystal symmetry. The workaround for this issue would be to define two alternative 
conformations for this molecule, because these conformations do not "see" each other, as 
pointed out by Herman. But where is the upper limit for the molecular weight of this molecule?

I (and perhaps most other crystallographers) would not refine such a case as a 
twinned structure in a lower symmetry space group, because the major part of 
the AU obeys the crystal symmetry. Furthermore, there is a fundamental 
difference between crystallographic symmetry and the symmetry of a twin law. 
The crystallographic symmetry covers the entire crystal, whereas the twin law 
just relates twin domains locally. Refining a true P3221 structure as a twinned 
P32 structure is simply the wrong thing to do.

All the best,
Peer



-----"CCP4 bulletin board" wrote: -----
To: CCP4BB@JISCMAIL.AC.UK
From: "Oganesyan, Vaheh"
Sent by: "CCP4 bulletin board"
Date: 27.08.2021 13:56
Subject: Re: [ccp4bb] AW: [ccp4bb] Antwort: Re: [ccp4bb] chain on 2-fold axis?


  How can P3221 be an option if it puts a chain on the axis? I guess I'm missing something, but to my mind only those space groups are possible in which no axis passes through the extra molecule. P1 looks like the only correct option here, in my humble opinion.

  Democracy (voting) depends on science. The reverse, thankfully, does not.
   
  Vaheh
   
  
  
  From: CCP4 bulletin board On Behalf Of Peer Mittl
  Sent: Friday, August 27, 2021 6:32 AM
  To: CCP4BB@JISCMAIL.AC.UK
  Subject: Re: [ccp4bb] AW: [ccp4bb] Antwort: Re: [ccp4bb] chain on 2-fold axis?
 
  Dear Herman,
  
  The answer probably depends on the impact of the "extra" chain on the
  sublattice. If there is no impact, the "true" space group is P3221 with
  one chain on the special position. If the swapping of the extra chain
  influences the sublattice, P32 (or C2 or P1, as pointed out by Kay)
  twinned to P3221 might be the better description.
  
  All the best,

  Peer
  
  On 27.08.2021 10:56, Schreuder, Herman /DE wrote:

  >
  > Dear Peer and Eleanor,
  >
  > This is indeed what I am suspecting: If the “twinning operator” in P32
  > puts 4 out of 5 protein chains on top of symmetry mates, is the “true”
  > space group then P32, with 5 twinned chains, or P3221 with 4 normal
  > chains and 1 chain on a special position? I would vote for the latter.
  >
  > Best,
  >
  > Herman
  >
  > *From:* CCP4 bulletin board *On behalf of* Peer Mittl
  > *Sent:* Friday, 27 August 2021 10:17
  > *To:* CCP4BB@JISCMAIL.AC.UK
  > *Subject:* Re: [ccp4bb] Antwort: Re: [ccp4bb] chain on 2-fold axis?
  >
  > Dear Eleanor,
  >
  > I indeed used r/tefmac for the refinement and it came up with the values
  > HKL (a=0.56), KH-L (a=0.44). It would be interesting to see if a
  > refinement in P3221 would come up with the same occupancies for the
  > alternative conformations for the "extra" chain on the 2-fold axis. It
  > seems as if the "well-ordered" chains (2 in P3221, 4 in P32) form a
  > sublattice with P3221 symmetry and it's just the "extra" chain, which
  > generates the twinning.
  >
  > All the best,
  > Peer
  >
  > On 26.08.2021 18:09, Eleanor Dodson wrote:
  > > Motto =mitti in predictive text!
  > >
  > > On Thu, 26 Aug 2021 at 16:52, Eleanor Dodson
  > > <eleanor.dod...@york.ac.uk> wrote:
  > >
  > > Great, motto. I think you have nailed it! Did you use tefmac for
  > > twinned refinement? And if so what did it suggest the twin
  > >  fraction is?
  > >
  > > On Thu, 26 Aug 2021 at 16:30, Peer Mittl wrote:
  > >
  > > Yes, the data indeed seems to be twinned and the tNCS has
  > > masked the twinning statistics, which is why I haven't
  > > considered it so far.
  > >
  > > I have not tried twinned refinement in C2 and P1 yet, but
  > > refining 4 chains in P32 with twinning yields a difference ED
  > > map that clearly indicates one (and just one!) orientation for
  > > the 5th chain. Thank you all for your suggestions.
  > >
  > > Have a nice evening,
  > > 

[ccp4bb] pictures in emails

2021-07-18 Thread Edward Berry

Two suggestions for people sending pictures of electron density to the BB:

1. Reduce the size of the pictures:
The two pictures in yesterday's email appear nice and small in my email client, but still 
illustrate what is being described. However they are actually 4032x3024 and 2040x1458 
pixels, making for rather large emails. All that extra resolution is wasted unless the 
user opens the image directly ("view image"), and in any case is completely 
unnecessary for the point being made. I think about 600x600 pixels is plenty for almost 
anything you want to show in electron density.
This does not require manipulation in photoshop or such- just reduce the size 
of the graphics window and take a screenshot of that window.
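If you already have a full-size screenshot, a one-liner will shrink it after the fact; a minimal sketch, assuming ImageMagick is installed (the filenames here are hypothetical):

```shell
# Shrink to at most 600x600, preserving aspect ratio;
# the trailing '>' means "only shrink, never enlarge".
convert density_full.png -resize '600x600>' density_small.png
```

The same geometry string works with ImageMagick 7's `magick` command.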

2. Send the picture as an attachment rather than inline. That way it won't be 
included in all the replies. Or if the people replying could find some way to 
exclude pictures or format the reply as plain text, that would help.

(No, I'm not receiving these emails via 1200 baud modem- but I like to save the 
messages for future reference. If the trend continues toward high-resolution 
inline screenshots, that will take a significant amount of disk space. And yes, 
I know there is an archive.)
eab





Re: [ccp4bb] [EXTERNAL] [ccp4bb] CCP4I2: non standard ssh port for remote jobs

2020-06-08 Thread Edward Berry




>>> Michael Weyand  06/08/20 12:40 PM >>>
Dear CCP4I2 experts,

I'm trying to submit remote jobs via SSH within CCP4i2. Unfortunately,
we use a non standard SSH port.
So far, I'm not able to add any option within the CCP4I2 interface. For
job submission, I need something to do like

'ssh -p X -Y . ccp4i-specific-command ...'.

Is there any chance to enter this '-p' SSH option within CCP4i2? I tried
already to define a "ssh command" via "Preferences".
But so far without any success.
==
Not sure if this is applicable, but you can set an alias name
to be equivalent to a real host with a non-standard port.
Something like: 

in ~/.ssh/config or /etc/ssh/ssh_config:


Host oswego
    HostName sanpablo
    HostKeyAlias "[sanpablo]:8123"
    Port 8123


"hostkeyalias" refers to one key in ~/.ssh/known_hosts that looks like:
[sanpablo]:8101,[192.168.2.4]:8123 ecdsa-sha2-nist
I'm not sure if I entered that or if it was created the first time I logged in 
and said "yes".

So I or a program I'm running can issue "ssh oswego" and get connected to port 
8123 on sanpablo without needing to enter a password.

I'm also wondering why there is a sshkey file option. If an user already
uses a ssh-key for a client/server connection, why is necessary to
re-input the key file again?
Any ssh between client and server should work from scratch ...

Any hints are highly appreciated,
Michael










Re: [ccp4bb] [EXTERNAL] Re: [ccp4bb] Completeness question

2020-05-30 Thread Edward Berry
>>> Ian Tickle  05/30/20 7:14 AM >>>
>>(unless of course the completeness calculations were performed on two
different reflection files)?

EDS is in fact using a different dataset compared to the coordinates:
when I submit the output reflections.cif produced by phenix, the input
was I/sigma(I) from a merged scalepack output (.sca) file.

Phenix copies the raw input data (the I's) into the output file, and that is
what EDS uses.
Then phenix does French & Wilson conversion of I's to F's, and rejects
reflections with a low probability.
The resulting dataset is used in refinement, and the output statistics
are based on it.
For an example of the result, see 6myo. From the validation report, Data and
refinement statistics:

                                             Depositor          EDS
Resolution (Å)                               47.64 – 2.20       80.48 – 2.20
% Data completeness (in resolution range)    91.1 (47.64-2.20)  86.2 (80.48-2.20)

There were a few very low resolution reflections (probably behind the
beamstop) in the .sca file, resulting in the low-resolution limit seen by EDS.
Phenix rejects those and reports a low-resolution cutoff of 40 A. But there are
not a lot of reflections between 40 and 80 A, so I think most of the difference
is due to the French & Wilson step, which EDS apparently does not apply.

I think this low-resolution cutoff is also responsible for the absurdly good
RSR-Z score for 6myo compared to the ridiculously bad RSR-Z for the otherwise
very similar 6myp.
The ultra-low-resolution reflections aren't going to affect the shape of the
density around individual residues. But RSR-Z is not a correlation; it depends
on the actual value of the (scaled) map, and that will be affected by strong
low-resolution reflections. So if RSR-Z is comparing a 2Fo-Fc map, perhaps
with fill-in, to an atom map which effectively goes to infinitely low
resolution, data lacking those low-resolution reflections will compare poorly,
whereas data that is artificially extended to 80 A with fill-in will do much
better.
(Still looking at this; I may come back for advice later.)
Ed

>>> Ian Tickle  05/30/20 7:14 AM >>>
Hi Robbie

I don't see that anisotropic truncation has anything to do with the low
spherical completeness as compared with the info in the co-ordinate
file.  Yes the spherical completeness after anisotropic truncation will
be reduced, but why would it cause it to become inconsistent with that
reported (unless of course the completeness calculations were performed
on two different reflection files)?  Besides, the anisotropy is quite
low (Delta-B eigenvalues: 3.42  -1.95 -1.47) so
that couldn't explain it.

I do agree that something has clearly gone wrong with the reflection
deposition for 6RJY.  It could of course go right back to the collection
or processing, but I think it unlikely anyone could solve the structure
with data in this state!  Approximately alternate reflections are
missing, but the pattern of absences does not correspond with any space
group.  For example from MTZDUMP on the reflection file:

   3   1   0    0.00  21.21   0.22
   3   1   2    0.00  23.83   0.19
   3   1   4    0.00  34.71   0.26
   3   1   6    0.00   9.06   0.11
   3   1   8    0.00  31.64   0.24
   3   1  10    0.00  31.22   0.25
   3   1  12    0.00   1.28   0.39
   3   1  14    0.00   6.59   0.12
   3   1  16    0.00  17.58   0.15
   3   1  18    0.00   3.94   0.18
   3   1  20    0.00  11.05   0.12
   3   1  22    0.00  34.24   0.24
   3   1  24    0.00  12.39   0.14
   3   1  26    0.00  12.76   0.15
   3   1  28    0.00  20.80   0.18
   3   1  30    0.00  23.70   0.19
   3   1  32    0.00  23.47   0.20
   3   1  34    0.00  30.50   0.23
   3   1  36    0.00  10.93   0.22
   3   1  38    0.00  28.11   0.22
   3   1  40    0.00  24.41   0.21
   3   1  42    0.00
   3   1  47    0.00  10.54   0.29
   3   1  49    0.00  10.54   0.23
   3   1  51    0.00   2.98   0.70
   3   1  53    0.00   5.84   0.39
   3   1  55    0.00   9.79   0.27
   3   1  57    0.00  11.33   0.26
   3   1  59    0.00   8.99   0.30
   3   1  61    0.00   1.84   0.76
   3   1  63    0.00   2.63   0.78
   3   1  65    0.00   4.91   0.46
   3   1  67    0.00   3.50   0.64
   3   1  69    0.00   1.93   0.76
   3   1  71    0.00   4.57   0.52
   3   1  73    0.00   1.71   0.73

Note how the pattern switches between (3 1 44) and (3 1 47).


So sometimes the k+l = 2n reflections are absent and sometimes the k+l = 2n+1
ones are: this pattern pervades the whole dataset, so the completeness (both
spherical and ellipsoidal) is reduced by a factor of about two. This makes no
sense in terms of known systematic absences, and certainly not for the
reported space group P212121.  This 

Re: [ccp4bb] Average B factors with TLS

2020-04-07 Thread Edward Berry
Apologies for my previous email appearing to put words in Dale's mouth - I'm
using my school's webmail and it apparently doesn't indicate the quoted text.
The following is what I added:



I think it is not just that the distribution is asymmetric and limited to
positive numbers - it is due to the fact that "log" and "average" do not
commute (the average of the logs is not the log of the averages), and the
logarithmic/exponential relation between intensity and B.

The Wilson B is obtained from the slope of ln<I> vs S^2,
from the relation <I> ~ exp(-2BS^2).
We can take that slope as the rise over run in the range from S=0 to S=1/(3A),
even though the real Wilson plot will follow it only around 3A and beyond.

<I> in a shell at resolution S will be down by a factor of
exp(-2BS^2) compared to <I> at S=0.
For S=(1/3A) this is a factor of exp(-0.222*B):
   for B=10, this is a factor of 0.108
   for B=100, this will be a factor of 2.3E-10

Now if for half the atoms B is 10 and for the other half B is 100,
<I> at 3 A will be something like (0.108 + 2.3E-10)/2 = 0.054.
(This is a little over my head, because we are combining the contributions of
the two sets of atoms to each reflection in the shell, and they add
vectorially. Maybe a factor of sqrt(2) is appropriate for random phases.
But for order of magnitude:)

The slope from S^2=0 to S^2=(1/3A)^2:
-2B = { ln(0.054) - ln(1) } / { 0.111 - 0 }
-2B = -26.3
B(Wilson) = 13.2
Bave = (10+100)/2 = 55
The log of the average is larger than the average of the logs.

Another way of looking at it: the contribution of those atoms with B=100 at 3A
is completely negligible compared to the contribution of the atoms with B=10,
so the slope around 3A reflects only the B-factor of the well-ordered atoms,
and the slope measured there should give B=10, not 13.

If atoms were randomly distributed and we could apply the Wilson relation over
the entire resolution range, we still wouldn't get a straight line if there is
a range of atomic B-factors.
It would be like "curve peeling" in analyzing two simultaneous first-order
reactions with different half-times: a semilog plot of dissociation of a
mixture of fast and slow hemoglobin. Near zero time the curve is steep as the
fast molecules dissociate; then it flattens out and becomes linear with a
smaller slope as only the slow molecules are still dissociating. So (in the
absence of a curve-fitting program) you calculate the rate constant for the
slow molecules from the linear region at long time, and extrapolate the line
back to zero time to get the initial percent slow. Then you can calculate the
amount of slow at any time point and subtract that from the total to get the
amount of fast, and re-plot that on semilog to get the fast rate constant.

By analyzing our Wilson plots at 3A (or more generally at the highest
resolution available) we are getting the B-factor for the slowest-decaying
(lowest-B-factor) atoms: B=10, not 13, in this case.
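The arithmetic above is easy to check numerically; a short Python sketch of the same two-population toy model (B = 10 and B = 100, S = 1/d at 3 A):

```python
import math

def mean_intensity(b_factors, S):
    # Wilson falloff <I>(S)/<I>(0) = exp(-2*B*S^2), averaged over the atom population
    return sum(math.exp(-2.0 * b * S * S) for b in b_factors) / len(b_factors)

S = 1.0 / 3.0                              # 3 A resolution, S = 1/d
rel = mean_intensity([10.0, 100.0], S)     # ~ (0.108 + 2.3e-10)/2 = 0.054
# Apparent Wilson B from the slope of ln<I> vs S^2 between S=0 and S=1/3:
b_wilson = -math.log(rel) / (2.0 * S * S)  # ~13.1, dominated by the B=10 atoms
b_avg = (10.0 + 100.0) / 2.0               # arithmetic mean B = 55
```

Running it reproduces the numbers in the email: the apparent Wilson B (~13) sits near the well-ordered population, far below the arithmetic mean of 55.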
 


Re: [ccp4bb] [EXTERNAL] Re: [ccp4bb] Average B factors with TLS

2020-04-07 Thread Edward Berry




>>> Dale Tronrud  04/07/20 12:37 PM >>>
   This topic has been discussed on the BB many times, and a little
searching should give you some long-winded answers (some written by me).

   The short version: if you refine a model with a common B factor for
all atoms, or keep a very narrow distribution of B's, you will end up
with an average B close to the Wilson B.  If you don't, something is
seriously wrong.

   If you allow a distribution of B's in your model, that distribution
is skewed on the high side because B factors cannot go below zero but
there is no physical upper bound.  The average of those B's will always
be larger than the Wilson B.  How much larger depends on your refinement
method more than the properties of the crystal since it is determined by
how large a tail you allow your B distribution to have.


I think it is not just that the distribution is asymmetric and limited to
positive numbers - it is due to the fact that "log" and "average" do not
commute (the average of the logs is not the log of the averages), and the
logarithmic/exponential relation between intensity and B.

The Wilson B is obtained from the slope of ln<I> vs S^2,
from the relation <I> ~ exp(-2BS^2).
We can take that slope as the rise over run in the range from S=0 to S=1/(3A),
even though the real Wilson plot will follow it only around 3A and beyond.

<I> in a shell at resolution S will be down by a factor of
exp(-2BS^2) compared to <I> at S=0.
For S=(1/3A) this is a factor of exp(-0.222*B):
   for B=10, this is a factor of 0.108
   for B=100, this will be a factor of 2.3E-10

Now if for half the atoms B is 10 and for the other half B is 100,
<I> at 3 A will be something like (0.108 + 2.3E-10)/2 = 0.054.
(This is a little over my head, because we are combining the contributions of
the two sets of atoms to each reflection in the shell, and they add
vectorially. Maybe a factor of sqrt(2) is appropriate for random phases.
But for order of magnitude:)

The slope from S^2=0 to S^2=(1/3A)^2:
-2B = { ln(0.054) - ln(1) } / { 0.111 - 0 }
-2B = -26.3
B(Wilson) = 13.2
Bave = (10+100)/2 = 55
The log of the average is larger than the average of the logs.

Another way of looking at it: the contribution of those atoms with B=100 at 3A
is completely negligible compared to the contribution of the atoms with B=10,
so the slope around 3A reflects only the B-factor of the well-ordered atoms,
and the slope measured there should give B=10, not 13.

If atoms were randomly distributed and we could apply the Wilson relation over
the entire resolution range, we still wouldn't get a straight line if there is
a range of atomic B-factors.
It would be like "curve peeling" in analyzing two simultaneous first-order
reactions with different half-times: a semilog plot of dissociation of a
mixture of fast and slow hemoglobin. Near zero time the curve is steep as the
fast molecules dissociate; then it flattens out and becomes linear with a
smaller slope as only the slow molecules are still dissociating. So (in the
absence of a curve-fitting program) you calculate the rate constant for the
slow molecules from the linear region at long time, and extrapolate the line
back to zero time to get the initial percent slow. Then you can calculate the
amount of slow at any time point and subtract that from the total to get the
amount of fast, and re-plot that on semilog to get the fast rate constant.

By analyzing our Wilson plots at 3A (or more generally at the highest
resolution available) we are getting the B-factor for the slowest-decaying
(lowest-B-factor) atoms: B=10, not 13, in this case.
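The curve-peeling procedure described above can be sketched in a few lines of Python (the rate constants and mixture fraction here are made up for illustration): fit the linear tail of the semilog plot to get the slow rate constant, then extrapolate back to t = 0 for the slow fraction.

```python
import math

k_fast, k_slow, frac_slow = 1.0, 0.05, 0.5   # hypothetical two-component mixture
t = [0.5 * i for i in range(201)]            # time points 0..100
y = [(1 - frac_slow) * math.exp(-k_fast * ti) + frac_slow * math.exp(-k_slow * ti)
     for ti in t]

# "Peel": past t ~ 40 the fast component has fully dissociated, so ln(y) vs t
# is linear with slope -k_slow; a least-squares line through the tail recovers it.
tail = [(ti, math.log(yi)) for ti, yi in zip(t, y) if ti > 40.0]
mt = sum(p[0] for p in tail) / len(tail)
ml = sum(p[1] for p in tail) / len(tail)
slope = (sum((ti - mt) * (li - ml) for ti, li in tail)
         / sum((ti - mt) ** 2 for ti, _ in tail))
k_slow_est = -slope                             # recovers ~0.05
frac_slow_est = math.exp(ml + k_slow_est * mt)  # extrapolate back to t=0 -> ~0.5
```

Subtracting the fitted slow component from y and repeating the fit on the early points would then recover k_fast, exactly as in the hemoglobin analogy.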

   You didn't say what your B-factor model was when you achieved an
average value of 31 A^2.  This value seems tiny to me, since it implies
that your intensities are falling off so slowly with resolution that you
surely should have been able to measure data to a higher resolution.  If
you decide to deposit this model you should look into why you have such
a low value.

   On the other hand, the average B of 157 A^2 seems quite reasonable
for a 3 A model (using modern resolution cutoff criteria).  It is higher
than your Wilson B, but that is expected.  In addition, as you note, the
uncertainty of a Wilson B is quite large in the absence of high
resolution data.

   Yes, this is the short version.  ;-)

Dale Tronrud


On 4/7/2020 5:16 AM, Nicholas Keep wrote:
> I am at the point of depositing a low resolution (3.15 A) structure
> refined with REFMAC.  The average B factors were 31 before I added the
> TLS contribution as required for deposition, which raised them to 157 -
> this is flagged as a problem with the deposition, although it did not
> stop submission.  The estimated Wilson B factor is 80.5 (although that
> will be quite uncertain), so somewhere between these two extremes.
> 
> Is it only the relative B factors of the chains that is at all
> informative?  Should I report the rather low values without TLS
> contribution or the rather high 

Re: [ccp4bb] [EXTERNAL] Re: [ccp4bb] New phasing approach

2020-04-01 Thread Edward Berry
Bear in mind that the position of the interferometer mirror, or whatever,
would have to be constant to within a fraction of an Angstrom - breathe
on the frame and it will warm ever so slightly; the expansion will change
the phase of the reference beam by a few thousand wavelengths. A real
engineering challenge!




>>> "Daniel M. Himmel, Ph. D."  04/01/20 11:17
AM >>>
That's fascinating!  Can such an interferometer actually be constructed
for an X-ray beamline, or is this still in the realm of the theoretically
possible?


Daniel
 ___
 Daniel M. Himmel, Ph. D.
 E-mail:  danielmhim...@gmail.com







On Tue, Mar 31, 2020 at 10:01 PM Bernhard Rupp
 wrote:

Hi Fellows,
 
just in time for a little reading during quarantine-induced boredom, here are
preprint pages (embargoed until 04.01) from my recent Phys. Rev. paper with a
different take on phasing:
https://tinyurl.com/Phys-Rev-2020
 
Enjoy, BR
--
Bernhard Rupp
http://www.hofkristallamt.org/
b...@hofkristallamt.org
+1 925 209 7429
--
Department of Genetic Epidemiology
Medical University Innsbruck
Schöpfstr. 41
A 6020 Innsbruck
bernhard.r...@i-med.ac.at
+43 676 571 0536
--
Many plausible ideas vanish 
at the presence of thought
--
 


 
 






Re: [ccp4bb] Fw:[ccp4bb] on Cell & Symmetry in coot

2016-12-09 Thread Edward Berry
CCP4 pdbset is very good for this. Find the symops in $CCP4/lib/data/symop.lib
and use symgen and chain commands to create each with a different chain ID.
The six-fold screw is generated by:
 X,Y,Z
 -Y,X-Y,2/3+Z
 Y-X,-X,1/3+Z
 -X,-Y,1/2+Z
 Y,Y-X,1/6+Z
 X-Y,X,5/6+Z
 X,Y,1+Z

but the two-fold defines a dimer which is probably involved in the helix, so 
you might want to include all 12 symops plus unit cell translation along Z
eab




>>> Smith Liu  12/09/16 8:03 AM >>>
Dear All,

I mean that if the radius set in the Coot "Cell & Symmetry" window is too
small, not enough monomers (fewer than 6) are displayed to show the
"continuous helix with a six-fold screw axis". If the radius is too large -
since the "continuous helix with a six-fold screw axis" can be regarded as a
"rod" - several rods are shown in one window, and in the Coot window I cannot
distinguish which monomer belongs to which rod. Thus I cannot identify the 6
monomers forming a single rod, i.e., a "continuous helix with a six-fold
screw axis".

Can anyone explain in this situation how can I identify the 6 monomers in the 
Coot "Cell and SYmmetry" windows forming a single "continuous helix with a 
six-fold screw axis"?

Smith
 







 Forwarding messages 
From: "Smith Lee" <0459ef8548d5-dmarc-requ...@jiscmail.ac.uk>
Date: 2016-12-09 18:12:20
To:  CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] on Cell & Symmetry in coot
  Dear All,


There is a pdb which, once opened in coot, is a monomer (space group P 65 2 2).
But the corresponding paper says "the subunits form a continuous helix with a
six-fold screw axis".


I have tried to view with Coot the "six-fold screw axis" formed by 6 monomers. 
But in the "Cell & Symmetry" in Coot, if the radius is small, 6 monomers cannot 
be shown. If I increase the radius, more than 6 monomers would occur in the 
window, and it can hardly distinguish the 6 monomers forming the "six-fold 
screw axis".


In this situation, will you please let me know how to use Coot to identify the 
6 monomers forming the "six-fold screw axis"? In addition, suppose  6 monomers 
forming the "six-fold screw axis" have been identified in Coot, in order to 
save the pdb of each monomer, I need to click each monomer in mouse, then by 
"Save symmetry coordinates" to save the pdb of each monomer, right?


I am looking forward to getting your reply.


Smith










Re: [ccp4bb] Superpose program in CCP4

2016-10-30 Thread Edward Berry
WenHe,
I'm not sure if you want to superimpose a number of structures (as rmsd
would imply) or just compare two structures.
If you want a complete list of the distances between corresponding atoms
in two pdb files of identical sequence,
you can use the fortran program
http://www.cytbc1.net/berry/for/pdbdist2b.for 
(linux executable http://www.cytbc1.net/berry/for/pdbdist2b ).
You need to superpose the structures (if necessary) with something else,
and save the two files.
Grep out C-alphas to two new files if you only care about them.

The program asks you for the names of the two files and the residue number at
which to start (the latter is to sync the files in case they have different
start residues; if the sequences are identical, just give the first residue
number).
It also asks for the last residue to compare in the first file - just give a
large number to do all.
Then it asks for a threshold - only distances larger than this will be printed.
Put -1 to print all.
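If you would rather not build the Fortran program, the same idea fits in a few lines of Python. This is a hypothetical minimal sketch, not the pdbdist2b program itself; like pdbdist2b, it assumes the two files are already superposed and contain the same atoms in the same order.

```python
import math

def atoms(path):
    """(name, chain, resseq, x, y, z) for each ATOM/HETATM record, in file order."""
    out = []
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM  ", "HETATM")):
                out.append((line[12:16].strip(), line[21], line[22:26].strip(),
                            float(line[30:38]), float(line[38:46]), float(line[46:54])))
    return out

def distances(pdb1, pdb2, thresh=-1.0):
    """Per-atom distances between corresponding atoms; thresh=-1 prints all."""
    result = []
    for a, b in zip(atoms(pdb1), atoms(pdb2)):
        d = math.dist(a[3:], b[3:])   # Euclidean distance between the two positions
        if d > thresh:
            result.append((a[0], a[1], a[2], d))
    return result
```

Grep out the C-alphas to two temporary files first if, as above, you only care about those.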

You can put all parameters on the command line if you run it with the
shell script pdbd2b:

echo 'Find distances greater than threshold between corresponding atoms
in 2 PDB files'
echo 'Usage: pdbd2b file1 file2 startres# [thresh]'
pdbdist2b <

>>> WENHE ZHONG  10/29/16 11:49 AM >>>
Dear all,

I always use the SUPERPOSE tool in CCP4 to superpose molecules. This
time I want to use the RMSD values of superposed C-alpha atoms to plot a
RMSD graph (instead of using the graph automatically made by the
program). However, there are many atoms missing in the RMSD list. 

In the settings I chose “Superpose specific atoms/residues”, checked
“Output all distances to a file”, fit “C-alpha atoms”. The superposed
structures have exactly the same sequence.

My question is: is there any way to get the completed list of RMSD value
for each C-alpha atom? Or is there any other program for this purpose? 

Thank you!

Kind regards,
Wenhe




Re: [ccp4bb] difference between polar angle and eulerian angle

2014-03-29 Thread Edward Berry
Thanks, Ian!
I agree it may have to do with being used to computer graphics, where
x,y,z are fixed and the coordinates rotate. But it still doesn't make
sense:

If the axes rotate along with the molecule, in the catenated operators
of the polar angles, after the first two operators the z axis would
still be passing through the molecule in the same way it did originally,
so rotation about z in the third step would have the same effect as
rotating about z in the original orientation. 
Or in eulerian angles, if the axes rotate along with the molecule at
each step, the z axis in the third step passes through the molecule in
the same way it did in the first step, so alpha and gamma would have the
same effect and be additive.  In other words if the axes we are rotating
about rotate themselves in lock step with the molecule, we can never
rotate about any molecular axes except those that were originally along
x, y, and z (because they will always be along x,y,z) (I mean using
simple rotations about principal axes: cos sin -sin cos).
Maybe I need to think about the concept of molecular axes as opposed to
lab axes. The lab axes are defined relative to the world and never
change. The molecular axis is defined by how the lab axis passes through
the molecule, and changes as the molecule rotates relative to the lab
axis.  But then the molecular axis seems redundant, since I can
understand the operator fine just in terms of the rotating coordinates
and the fixed lab axes. Except the desired rotation axis of the polar
angles would be a molecular axis, since it is defined by a line through
the atoms that we want to rotate about. So it rotates along with the
coordinates during the first two operations, which align it with the old
lab Z axis (which is the new molecular z axis?) . . .   You see my
confusion.
Or think about the math one step at a time, and suppose we look at the
coordinates after each step with a graphics program keeping the x axis
horizontal, y axis vertical, and z axis coming out of the plane. For
Eulerian angles, the first rotation will be about Z. This will leave the
z coordinate of each atom unchanged and change the x,y coordinates.  If
we give the new coordnates to the graphics program, it will display the
atoms rotated in the plane of the screen (about the z axis perpendicular
to the screen).  The next rotation will be about y, will leave the y
coordinates unchanged, and we see rotation about the vertical axis.
Final rotation about z is in the plane of the screen again, although
this represents rotation about a different axis of the molecule.  My
view would be to say the first and final rotation are rotating about the
perpendicular to the screen which we have kept equal to the z axis, and
it is the same z axis.

Ed

 Ian Tickle  03/29/14 1:39 PM 
Hi Edward


As far as Eulerian rotations go, in the 'Crowther' description the 2nd
rotation can occur either about the new (rotated) Y axis or about the
old (unrotated) Y axis, and similarly for the 3rd rotation about the new
or old Z.  Obviously the same thing applies to polar angles since they
can also be described in terms of a concatenation of rotations (5
instead of 3).  So in the 'new' description the rotation axes do change:
they are rotating with the molecule.

For reasons I find hard to fathom virtually all program documentation
seems to describe it in terms of rotations about already-rotated axes.
 If as you say you find this confusing then you are not alone!  However
it's very easy to change from a description involving 'new' axes to one
involving 'old' axes: you just reverse the order of the angles.  So in
the Eulerian case a rotation of alpha around Z, then beta around new Y,
then gamma around new Z (i.e. 'Crowther' convention) is completely
equivalent to a rotation of gamma around Z, then beta around _old_ Y,
then alpha around _old_ Z.
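Ian's reversal rule is easy to check numerically. Here is a small numpy sketch (my own, not from any CCP4 code): the 'new axes' composition is built literally, conjugating each later rotation by everything that came before it, and compared to the same three angles applied in reversed order about the fixed lab axes.

```python
import numpy as np

def Rz(t):
    """Right-handed rotation by t radians about the lab Z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(t):
    """Right-handed rotation by t radians about the lab Y axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

alpha, beta, gamma = 0.3, 1.1, -0.7

# 'New'-axes (Crowther-style) description, built literally: each later
# rotation is taken about an axis that has been carried along by all the
# rotations before it (rotation about a moved axis = conjugation).
R1 = Rz(alpha)                       # alpha about Z
R2 = (R1 @ Ry(beta) @ R1.T) @ R1     # then beta about the NEW Y
R3 = (R2 @ Rz(gamma) @ R2.T) @ R2    # then gamma about the NEW Z

# 'Old'-axes description: same three angles in REVERSED order, all about
# the fixed lab axes (rightmost matrix acts first on column vectors).
R_old = Rz(alpha) @ Ry(beta) @ Rz(gamma)   # gamma, then beta, then alpha

assert np.allclose(R3, R_old)
print("new-axes composition == reversed-order old-axes composition")
```

The conjugations collapse algebraically, which is why both readings give the same matrix Rz(alpha) Ry(beta) Rz(gamma).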

So if you're used to computer graphics where the molecules rotate around
the fixed screen axes (rotation around the rotating molecular axes would
be very confusing!) then it seems to me that the 'old' description is
much more intuitive.


Cheers


-- Ian



On 27 March 2014 22:18, Edward A. Berry ber...@upstate.edu wrote:
According to the html-side the 'visualisation' includes two
back-rotations in addition to what you copied here, so there is at
least one difference to the visualisation of the Eulerian angles.


Right- it says:
This can also be visualised as
rotation ϕ about Z,
rotation ω about the new Y,


rotation κ about the new Z,

rotation (-ω) about the new Y,
rotation (-ϕ) about the new Z.

The first two and the last two rotations can be seen as a wrapper
which
first transforms the coordinates so the rotation axis lies along z, then
after
the actual kappa rotation is carried out (by rotation about z),
transforms the rotated molecule back to the otherwise original position.
Or which transforms the coordinate system to put Z along the rotation
axis, then after
the rotation by kappa about z transforms back to the original coordinate
system.
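The five-step wrapper can be checked numerically too. A sketch (my own, assuming the convention that the rotation axis has azimuth ϕ and inclination ω measured from z; the wrapper is written as ordinary rotations about fixed axes, using the reversal rule for 'new'-axis sequences):

```python
import numpy as np

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rodrigues(n, kappa):
    """Rotation by kappa about the unit axis n (Rodrigues formula)."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.cos(kappa) * np.eye(3) + np.sin(kappa) * K \
        + (1.0 - np.cos(kappa)) * np.outer(n, n)

phi, omega, kappa = 0.4, 0.9, 1.3

# The five-step wrapper as rotations about the FIXED lab axes
# (rightmost factor acts first: -phi about Z, -omega about Y,
# kappa about Z, omega about Y, phi about Z).
R5 = Rz(phi) @ Ry(omega) @ Rz(kappa) @ Ry(-omega) @ Rz(-phi)

# Axis with azimuth phi and inclination omega from z (assumed convention)
n = np.array([np.sin(omega) * np.cos(phi),
              np.sin(omega) * np.sin(phi),
              np.cos(omega)])

assert np.allclose(R5 @ n, n)                # the wrapper leaves the axis fixed
assert np.allclose(R5, rodrigues(n, kappa))  # and is exactly a kappa turn about it
```

The inner three factors are the "transform axis to z, rotate by kappa, transform back" structure described above, which is why the product fixes n.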

Specifically,
  rotation ϕ about Z 

Re: [ccp4bb] difference between polar angle and eulerian angle

2014-03-29 Thread Edward Berry
 Edward Berry 
 03/29/14 5:22 PM 
Thanks, Ian!
I agree it may have to do with being used to computer graphics, where
x,y,z are fixed and the coordinates rotate. But it still doesn't make
sense:
-My mistake- in computer graphics x, y, z rotate with the atomic
coordinates relative to screen coordinates, or the viewpoint changes


However it's very easy to change from a description involving 'new'
axes to one involving 'old' axes: you just reverse the order of the
angles.  So in the Eulerian case a rotation of alpha around Z, then beta
around new Y, then gamma around new Z (i.e. 'Crowther' convention) is
completely equivalent to a rotation of gamma around Z, then beta around
_old_ Y, then alpha around _old_ Z.

Maybe in my thinking I am going in reverse order- didn't pay attention
to sign of the angles.
If you think of the Eulerian navigator at
http://sb20.lbl.gov/berry/Euler2.gif
it is obvious that the same setting on each of the three angles will
give the same orientation. Now assuming the outside frame is fixed
(bolted to the bench) and you adjust the angles starting with the inside
ring, you will be using lab axes all the way. If you first adjust the
outside ring, then the next two rotations will be about new axes.
Computationally it must be much easier to use old or Lab axes. In
the case of polar coordinates, the whole problem involves rotation by
kappa about an axis at odd angles to x,y,z. If in order to do that, we
introduce 3 more rotations about non-standard axes, and the same for
each of them, we will never get there!


So if you're used to computer graphics where the molecules rotate around
the fixed screen axes (rotation around the rotating molecular axes would
be very confusing!) then it seems to me that the 'old' description is
much more intuitive.


Cheers


-- Ian



On 27 March 2014 22:18, Edward A. Berry ber...@upstate.edu wrote:
According to the html-side the 'visualisation' includes two
back-rotations in addition to what you copied here, so there is at
least one difference to the visualisation of the Eulerian angles.


Right- it says:
This can also be visualised as
rotation ϕ about Z,
rotation ω about the new Y,


rotation κ about the new Z,

rotation (-ω) about the new Y,
rotation (-ϕ) about the new Z.

The first two and the last two rotations can be seen as a wrapper
which
first transforms the coordinates so the rotation axis lies along z, then
after
the actual kappa rotation is carried out (by rotation about z),
transforms the rotated molecule back to the otherwise original position.
Or which transforms the coordinate system to put Z along the rotation
axis, then after
the rotation by kappa about z transforms back to the original coordinate
system.

Specifically,
  rotation ϕ about Z brings the axis into the x-z plane so that

  rotation ω about the Y brings the axis onto the z axis, so that

  rotation κ about Z is doing the desired rotation about a line that
passes through
the  atoms in the same way the desired lmn axis did in the original
orientation;

  Then the 4'th and 5'th operations are the inverse of the 2nd and
first,
   bringing the rotated molecule back to its otherwise original position

I think all the emphasis on new y and new z is confusing. If we are
rotating the molecule (coordinates), then the axes don't change. They
pass through the molecule
in a different way because the molecule is rotated, but the axes are the
same. After the first two rotations the Z axis passes along the desired
rotation axis, but the Z axis has not moved, the coordinates (molecules)
have.
Of course there is the alternate interpretation that we are doing a
change of coordinates and expressing the unmoved molecular coordinates
relative to new principle axes. but if we are rotating the coordinates
about the axes then the axes should remain the same, shouldn't they? Or
maybe there is yet another way of looking at it.


Tim Gruene wrote:
According to the html-side the 'visualisation' includes two
back-rotations in addition to what you copied here, so there is at
least one difference to the visualisation of the Eulerian angles.

Best,
Tim

On 03/27/2014 07:11 AM, Qixu Cai wrote:
Dear all,

 From the definition of CCP4
(http://www.ccp4.ac.uk/html/rotationmatrices.html), the polar angle
(ϕ, ω, κ) can be visualised as rotation ϕ about Z, rotation ω about
the new Y, rotation κ about the new Z. It seems the same as the ZXZ
convention of eulerian angle definition. What's the difference
between the CCP4 polar angle definition and eulerian angle ZXZ
definition?

And what's the definition of polar angle XYK convention in GLRF
program?

Thank you very much!

Best wishes,


--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A


Re: [ccp4bb] off topic: a Python online course and others

2012-10-21 Thread Edward Berry
I took a look at the first four lessons at the first link, and I think there 
must be some mistake- 
this site is actually teaching BASIC. All these commands are valid syntax under 
say microsoft 
GWBasic or QuickBasic. But I think this Zed Shaw has been studying shell 
programming also and got mixed up because he uses this octothorpe (#) instead of
apostrophe or REM to denote comments.

Remember when parents used to send their kids to computer boot camp to learn 
BASIC
for fear they would be computer-illiterate and couldn't function in the modern 
age if 
they couldn't program a computer? 

 Sean Seaver  10/20/12 1:43 PM 
I'd also recommend:

Learn Python The Hard Way By Zed A. Shaw
http://learnpythonthehardway.org/

Online Python Tutor
http://pythontutor.com/

Take Care,

Sean Seaver, PhD

P212121
http://store.p212121.com/



Re: [ccp4bb] Fwd: [ccp4bb] crystallisation and mosaicity

2008-06-05 Thread Edward Berry

I think the important thing here is that liquid nitrogen in the lab
tends to be exactly at its boiling point, since the temperature is
maintained by continuously boiling off some of the N2.

This means the only mechanism for heat absorption is through vaporization,
depending on the latent heat of vaporization rather than the heat
capacity. So as soon as any heat is removed from the object, some gas
is formed, and the gas layer insulates.

Propane is chilled with LN2 to near its freezing point, so it can absorb
quite a lot of heat before any vapor is formed. If you spill some on your
hand you will have a nasty burn immediately, whereas you can usually get
away unscathed with splashing LN2 on your hand briefly.
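To put rough numbers on this (handbook-ish constants quoted from memory, so treat as order-of-magnitude only):

```python
# Rough heat-budget comparison, LN2 vs cold liquid propane.
# All constants are approximate textbook values (assumptions, not measured here).
LN2_LATENT = 199.0        # J/g, latent heat of vaporization of N2 at 77 K
C_PROPANE = 2.2           # J/(g*K), heat capacity of liquid propane (approx.)
T_PROPANE_START = 90.0    # K, propane chilled near its ~85 K freezing point
T_PROPANE_BOIL = 231.0    # K, propane boiling point

# LN2 in the lab sits exactly at its boiling point: the first joule of
# heat removed from the sample makes vapor, and the vapor film insulates.
heat_before_gas_ln2 = 0.0

# Cold liquid propane can absorb heat sensibly, warming all the way to
# its boiling point before any insulating gas layer can form.
heat_before_gas_propane = C_PROPANE * (T_PROPANE_BOIL - T_PROPANE_START)
print(f"propane absorbs ~{heat_before_gas_propane:.0f} J/g before any vapor forms")
print(f"(LN2 latent heat is {LN2_LATENT} J/g, but it is accessed only by boiling)")
```

So per gram, cold propane soaks up a few hundred joules without forming the gas film, which is the whole point of the propane plunge.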

Petr Leiman wrote:

yes you are right, but I assumed if people see a cloud of condensed
fog over their LN2 bath they should remove that by
a) filling up the bowl completely e.g. some LN2 drips out of the bowl
b) blow the fog away before you dip


I think the original poster meant the relatively low heat conduction of 
liquid N2, which causes boiling around the crystal immediately after 
plunging.


The best way to freeze things is to put a small container of liquid 
ethane or propane into a liquid N2 bowl, and plunge into the 
ethane/propane (this methods was suggested earlier).


Petr


Re: [ccp4bb] Merging CCP4i projects from two computers

2008-03-07 Thread Edward Berry

David J. Schuller wrote:
...


even further back, file versioning in VMS might have been relevant and
useful. But I am wandering.



[:)
Problem is both files are version *.*;1.
I keep getting:

-RMS-E-FEX, file already exists, not superseded
%BACKUP-E-OPENOUT, error opening DISK$USER1:[00.ACRIVOS.JUL99]I0TEST.DIR;1 
as output
-RMS-E-FEX, file already exists, not superseded
%BACKUP-E-OPENOUT, error opening DISK$USER1:[00.BERRY]TEMP.TXT;1 as output
-RMS-E-FEX, file already exists, not superseded
%BACKUP-E-OPENOUT, error opening DISK$USER1:[00.SAURON]KK9108051.DIR;1 as 
output
...
  SYSOPER  job terminated at  8-JAN-2008 22:25:07.17

  Accounting information:
  Buffered I/O count:  317527 Peak working set size:1597
  Direct I/O count:192669 Peak page file size:  6950
  Page faults:   4162 Mounted volumes: 0
  Charged CPU time:   0 00:23:26.42   Elapsed time: 0 00:25:07.12
$


Re: [ccp4bb] Does NCS bias a randomly-chosen test set (even if not enforced)? [ccp4bb] an over refined structure

2008-02-21 Thread Edward Berry

Dale Tronrud wrote:


   In summary, this argument depends on two assertions that you can
argue with me about:

   1) When a parameter is being used to fit the signal it was designed
for, the resulting model develops predictive power and can lower
both the working and free R.  When a signal is perturbing the value
of a parameter for which it was not designed, it is unlikely to improve
its predictive power and the working R will tend to drop, but the free
R will not (and may rise).

   2) If the unmodeled signal in the data set is a property in real
space and has the same symmetry as the molecule in the unit cell,
the inappropriate fitting of parameters will be systematic with
respect to that symmetry and the presence of a reflection in the
working set will tend to cause its symmetry mate in the test set
to be better predicted despite the fact that this predictive power
does not extend to reflections that are unrelated by symmetry.
This bias will occur for any kind of error as long as that
error obeys the symmetry of the unit cell in real space.



Well, I've had time now to think about this, and I find myself
agreeing with point 1 and most of point 2 (that the unmodeled
signal has the same symmetry as the model), and I still would
argue that NCS symmetry does not bias the free set if it is
not enforced. Sorry to be so persistent, and I'm sure 95% of
readers will want to stop here, but:

I don't see that the symmetry of unmodeled signal will cause
a free reflection whose symmetry mate is working to be better
predicted than another free reflection which is not sym-related
to a working reflection, or than in the case where there is no
symmetry.  I think this assertion comes from noting that in the
final refined structure the sym-related Fc's will be correlated,
and since the Fo's are also correlated, and since the sign of
|Fo-Fc| is correlated because of the symmetry of the un-modeled
signal, this correlation of sym-related Fc's results in an artificial
reduction of |Fo-Fc| at test reflections.

That might be true if the correlation between sym-related Fc's were
perfect. I want to argue that the correlation between Fc's results
only from the approach of the test Fc to the F of the true
(symmetrical) structure, i.e. the correlation follows from, rather
than contributes to, the decrease of |Fo-Fc| at test reflections.
The decrease in |Fo-Fc| at test reflections results only from
the approach of the electron density of the model to that of the
real structure (and the fact that the Fo's are good estimates of the
diffraction pattern of the real structure), and this is exactly what
the Free-R is supposed to measure.

Let me describe the refinement process in perhaps oversimplified
steps which make my argument clear, and then we may want to argue
about the individual assumptions.

In my view it all depends on what is
driving what. Briefly, the need to minimize |Fo-Fc| at working
reflections drives the structural changes, the resulting approach
of the structure to the true structure drives the decrease in
|Fo-Fc| at free reflections, and it is only this reduction in
|Fo-Fc| at free reflections which brings about the correlation
between free Fc and sym-related working Fc. You cannot then turn
around and say the correlation between sym-related free and
working Fc biases the |Fo-Fc|Free.

To elaborate:
The individual structural changes are driven by the need to minimize
|Fo-Fc| at the working reflections. The refinement program reduces
|Fo-Fc| at working reflections by a combination of (1) appropriate
structural changes which actually make the model closer to the
true structure, (2)Inappropriate structural changes which happen
to reduce |Fo-Fc| by accounting for some of the un-modeled signal,
but not in a way that resembles the real structure, and (3) fitting
the noise in the measurements.

The reduction in |Fo-Fc| at free reflections is driven by the fact
that the changes make the model a better approximation to the
electron density of the real structure, driving the Fc closer to the
theoretical F's of the true structure, of which the Fo are a good
approximation.
This is mainly due to changes of type 1 above, appropriate modeling of
the structure. The inappropriate movement of atoms into density may
also improve free |Fo-Fc| at least at low resolution, so give a
smaller decrease in Rfree than in Rwork. And fitting the noise will
in general move the structure away from the true structure and so
tend to increase |Fo-Fc|free.  The point, which we may need to argue
about, is that the only force driving the reduction of |Fo-Fc| at free
reflections is the approach of the model electron density to that
of the true structure.

Finally, the correlation of free Fc with working Fc is driven
by the approach of both to to their respective Fo, together with the
fact that the Fo are highly correlated. The correlation can never
be better than the correlation of free Fc to Fo, which we said in the
previous step is due to improvement 

Re: [ccp4bb] an over refined structure

2008-02-12 Thread Edward Berry

Dale Tronrud wrote:





   In summary, this argument depends on two assertions that you can
argue with me about:

   1) When a parameter is being used to fit the signal it was designed
for, the resulting model develops predictive power and can lower
both the working and free R.  When a signal is perturbing the value
of a parameter for which it was not designed, it is unlikely to improve
its predictive power and the working R will tend to drop, but the free
R will not (and may rise).

   2) If the unmodeled signal in the data set is a property in real
space and has the same symmetry as the molecule in the unit cell,
the inappropriate fitting of parameters will be systematic with
respect to that symmetry and the presence of a reflection in the
working set will tend to cause its symmetry mate in the test set
to be better predicted despite the fact that this predictive power
does not extend to reflections that are unrelated by symmetry.
This bias will occur for any kind of error as long as that
error obeys the symmetry of the unit cell in real space.



Dear Dale,
Thanks for taking the time to think about my problem and for
composing what is obviously a well-thought-out explanation.
I am a little over my head here, but I think I see your point.

Inappropriate fitting of this residual error has poor predictive
power so does not reduce |Fo-Fc| for general free reflections.
However the error is symmetrical, so attempts to fit it will
result in symmetrical changes which reduce |Fo-Fc| for those
free reflections that are related to working reflections.

I need to read the references that were mentioned in this
discussion, and think about it a little more in order to
resolve some remaining conflicts in my thinking.
But I don't need to bother everyone else with my
struggles, unless I come up with something useful.
Thanks for the guidance!

Ed


Re: [ccp4bb] Does NCS bias a randomly-chosen test set (even if not enforced)?

2008-02-11 Thread Edward Berry

Dirk Kostrewa wrote:

Dear Ed,

although, I don't think that a comparison of refinement in a higher and 
a lower symmetry space group is valid for general NCS cases, I will try 
to answer your question. Here are my thoughts for two different cases:


(1) You have data to atomic resolution with high I/sigma and low Rsym (I 
assume high redundancy). The n copies of the asymmetric unit in the unit 
cell are really identical and obey the higher symmetry (so, not a 
protein crystal). When you process the data in lower symmetry (say, P1), 
the non-averaged higher-symmetry-equivalent Fobs will differ due to 
measurement errors, and thus reflections in the working-set will differ 
to higher-symmetry-related reflections in the test-set due to these 
measurement errors. If you then refine the n copies against the 
working-set in the lower P1 symmetry, you minimize |Fobs(work)-Fcalc|, 
resulting in Fcalcs that become closer to the working-set Fobs. As a 
consequence, the Fcalcs will thus diverge somewhat from the test-set 
Fobs. However, since this atomic model is assumed to be very well 
defined obeying the higher symmetry, and, furthermore, the working-set 
contains well measured higher-symmetry-equivalent Fobs, the resulting 
atomic positions, and thus the Fcalcs, will be very close to their 
equivalent values in the higher-symmetry refinement. Therefore, the 
Fcalcs will also be still very similar to the 
higher-symmetry-equivalent Fobs in the test-set, and I would expect a 
difference between Rwork and Rfree ranging from 0 to the value of 
Rsym. In other words, the Fobs in the test-set are not really 
independent of the reflections in the working-set, and thus Rfree is 
heavily biased towards Rwork.
In this case, I would not expect large differences in the outcome due to 
the additional application of NCS-constraints/restraints.


As I see it, this is clearly a case of |Fo-Fc| for the test reflections
decreasing because the model is getting better, and there is no bias.
Let's say the higher symmetry really does apply, so the correct structure
is perfectly symmetrical and the NCS-related reflections agree to within
the error level.
Let's also say the initial model is perfectly symmetrical (you solved the
molecular replacement with two copies of the same monomer, and rigid-
body refinement positioned them exactly). But let's say it is completely
unrefined- the search model is from a different organism in a different
space group, and modified by homology modeling to your sequence.
So the Fo obey the  NCS within error, The Fc obey the NCS, but the
Fobs don't fit the Fcalc very well. Initially there is no Free-R bias,
because the model has not been refined agaist the data. The free set
can only be biased by refinement, since it is only during refinement
that the the free set is treated differently. Thus it doesn't matter
that the ncs-related Fo are correlated and the ncs-related Fc
are correlated: it is only the CHANGES in Fc that could introduce
model bias, and they are uncorrelated if you do not enforce ncs.

Now as we refine, the model will converge toward the correct symmetrical
model as a result of minimizing the |Fo-Fc| for the work reflections.
At the same time the |Fo-Fc| for the test reflections will also decrease
on the average, but to a lesser extent. I argue that the only mechanism
for refinement to reduce |Fo-Fc| at a test reflection is by improving
the structure, and I think that constitutes an unbiased Free-R value.

If you can think of any mechanism to reduce |Fo-Fc| for a test reflection
because you are refining against a symm-related work reflection, then
the R-free would be biased.  This is not the case if you do not enforce
symmetry. On the average no decrease in |Fo-Fc|(test) will result from
changes that reduce |Fo-Fc| for the work reflection: given an arbitrary
change in the structure, the change in |Fc| at arbitrary reflections
is a pseudo-random variable with expected value zero, and there is no
correlation between the change at ncs-related reflections.
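This claim is easy to test with a toy model (entirely my own sketch: unit point scatterers in P1 with an exact 2-fold along z, one atom nudged without touching its NCS mate, then the change in |Fc| compared at 2-fold-related reflections):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 'structure' with an exact non-crystallographic 2-fold along z:
# every atom at (x, y, z) has a mate at (-x, -y, z).
half = rng.random((50, 3))
atoms = np.vstack([half, half * np.array([-1.0, -1.0, 1.0])])

def F(hkl, xyz):
    """Unit-scatterer structure factors F(h) = sum_j exp(2*pi*i h.x_j)."""
    return np.exp(2j * np.pi * hkl @ xyz.T).sum(axis=1)

# Random reflections h and their 2-fold mates h' = (-h, -k, l)
hkl = rng.integers(-20, 21, size=(4000, 3)).astype(float)
keep = (hkl[:, 0] != 0) | (hkl[:, 1] != 0)   # drop self-mates on the 2-fold axis
hkl = hkl[keep]
mate = hkl * np.array([-1.0, -1.0, 1.0])

# Nudge ONE atom; no NCS enforced, so its mate stays put.
shifted = atoms.copy()
shifted[0] += 0.02 * rng.standard_normal(3)

dF = np.abs(F(hkl, shifted)) - np.abs(F(hkl, atoms))
dF_mate = np.abs(F(mate, shifted)) - np.abs(F(mate, atoms))

r = np.corrcoef(dF, dF_mate)[0, 1]
print(f"correlation of delta|Fc| at 2-fold-related reflections: {r:+.3f}")
assert abs(r) < 0.3   # no appreciable coupling from an asymmetric change
```

In this toy the correlation comes out near zero: the asymmetric move changes |Fc| at a reflection and at its NCS mate in essentially unrelated directions, which is the "expected value zero" point of the paragraph above.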

The value of |Fo-Fc| at a test reflection goes down, not due to
changes which improve the fit at a sym-related working reflection,
but because of changes that improve the fit at all test reflections,
and then only because the structure is improving. The atoms moved into
symmetrical positions not because they were constrained to do so,
but because that fits the data better, in turn because the true structure
is symmetrical. If the symmetry doesn't hold for some atoms, they will
tend to move into asymmetric positions to minimize |Fo-Fc| at work
reflections, now *decreasing* the correlation with sym-related work
reflections. But again this will tend to reduce |Fo-Fc| at free
reflections, simply because the model is better approximating the
true structure.

To make a more obvious parallel, suppose you are refining a low-resolution
dataset from a microcrystal (with no NCS). In another directory on the
same disk you have a high resolution structure 

Re: [ccp4bb] Does NCS bias a randomly-chosen test set (even if not enforced)?

2008-02-09 Thread Edward Berry

Frank von Delft wrote:


(I'm probably wrong, but I want someone to show me,and not with 
hand-waving

arguments or invocation of crystallographic intuition or such)

To convince me, someone needs to show that the expected value of the 
change
in |Fo-Fc| at a test reflection upon a change in the model (a step of 
refinement)

is negative, even in the absence of any real improvement in the model,
simply because the change reduces |Fo-Fc| at a sym-related working
reflection.


The problem is that a) the statistical drift will be very small, and b) 

If you can show me it is negative, I don't care how small.
But if it would amount to a change of 0.001 in R-free I'm
not going to worry too much about it!
(Can you describe this statistical drift a little better?)

that this will be for *almost every* reflection in the test set.  If it 
were just a few, you'd be right, but not when it's all of them:  then 
your Rfree will not be informative.

I don't get that- if the expected value is zero for each reflection,
then the more you average the better it will approximate zero.



That's in the absence of NCS restraints.  In their presence, it's bad 
anyway, because you're forcing Fc to be (almost) equal for both 
reflections.

For now I'm not arguing about that.


phx.




Re: [ccp4bb] an over refined structure

2008-02-07 Thread Edward Berry

Actually the bottom lines below were my argument in the case
that you DO apply strict NCS (although the argument runs into
some questionable points if you follow it out).

In the case that you DO NOT apply NCS, there is a second
decoupling mechanism:
Not only the error in Fo may be opposite for the two reflections,
but also the change in Fc upon applying a non-symmetrical
modification to the structure is likely to be opposite. So there
is no way of predicting whether |Fo-Fc| will move in the same
direction for the two reflections. I completely agree with Dirk
(although I am willing to listen to anyone explain why I am wrong).

Ed


Edward Berry wrote:

Dean Madden wrote:

Hi Dirk,

I disagree with your final sentence. Even if you don't apply NCS 
restraints/constraints during refinement, there is a serious risk of 
NCS contaminating your Rfree. Consider the limiting case in which 
the NCS is produced simply by working in an artificially low 
symmetry space-group (e.g. P1, when the true symmetry is P2): in this 
case, putting one symmetry mate in the Rfree set, and one in the Rwork 
set will guarantee that Rfree tracks Rwork.


I don't think this is right- remember Rfree is not just based on Fc
but Fo-Fc. Working in your lower symmetry space group you will have
separate values for the Fo at the two ncs-related reflections.
Each observation will have its own random error, and like as not
the error will be in the opposite direction for the two reflections.

Hence a structural modification that improves Fo-Fc at one reflection
is equally likely to improve or worsen the fit at the related reflection.
The only way they are coupled is through the basic tenet of R-free:
If it makes the structure better, it is likely to improve the fit
at all reflections.

For sure R-free will go down when you apply NCS- but this is because
you drastically improve your data/parameters ratio.

Best,
Ed


Re: [ccp4bb] an over refined structure

2008-02-07 Thread Edward Berry

Dean Madden wrote:

Hi Dirk,

I disagree with your final sentence. Even if you don't apply NCS 
restraints/constraints during refinement, there is a serious risk of NCS 
contaminating your Rfree. Consider the limiting case in which the 
NCS is produced simply by working in an artificially low symmetry 
space-group (e.g. P1, when the true symmetry is P2): in this case, 
putting one symmetry mate in the Rfree set, and one in the Rwork set 
will guarantee that Rfree tracks Rwork. 



I don't think this is right- remember Rfree is not just based on Fc
but Fo-Fc. Working in your lower symmetry space group you will have
separate values for the Fo at the two ncs-related reflections.
Each observation will have its own random error, and like as not
the error will be in the opposite direction for the two reflections.

Hence a structural modification that improves Fo-Fc at one reflection
is equally likely to improve or worsen the fit at the related reflection.
The only way they are coupled is through the basic tenet of R-free:
If it makes the structure better, it is likely to improve the fit
at all reflections.

For sure R-free will go down when you apply NCS- but this is because
you drastically improve your data/parameters ratio.

Best,
Ed


Re: [ccp4bb] an over refined structure

2008-02-07 Thread Edward Berry

Agreed, and this is even more true if you consider R-merge is calculated
on I's and Rfree on F's, Rmerge of 5% should contribute 2.5% to Rfree;
and furthermore errors add vectorially so it would be
more like 0.025/sqrt(2).
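The factor of two is just |F| = sqrt(I), so dF/F ~ dI/(2I) for small relative errors; a quick numerical check (my own sketch, simulated 5% relative errors on I):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated true intensities and observations with 5% relative error
I_true = rng.uniform(100.0, 1000.0, 100_000)
I_obs = I_true * (1.0 + 0.05 * rng.standard_normal(I_true.size))

# Convert to structure-factor amplitudes: |F| = sqrt(I)
F_true = np.sqrt(I_true)
F_obs = np.sqrt(np.clip(I_obs, 0.0, None))

rel_err_F = np.std(F_obs / F_true - 1.0)
print(f"relative error in F: {rel_err_F:.4f}")   # ~0.025, half the 5% error on I
```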

I guess I have to take all those other errors that have to do with
the inability of a simple atomic model to account for the diffraction
of a crystal, lump them together and assume they have nothing to do
with NCS and are not affected by the simple modification under
consideration.

I am thinking about the CHANGE in |Fo-Fc| at two sym-related reflections
when the refinement program moves a single atom from position 1 to
position 2. If we do not apply NCS, this is the only atom that
will move, and for Fc we can definitely say there is no reason
to expect the two Fc's to move in the same direction, therefore
there is no coupling in the case we do not apply NCS.

If we apply strict NCS then granted the sym related Fc's are equal
before and after the change, so they move in the same direction.
As I said, the argument is weaker now. If there are systematic
errors contributing to the gap between Rfree and 0.5*Rmerge/sqrt(2),
and if these systematic errors follow the NCS, then initial Fo-Fc
is likely to be of the same sign at the related reflections and
larger than the change in Fc, so |Fo-Fc| would go in the same
direction.  But to justify this you would have to explain why
the systematic errors follow ncs. Crystal morphology related
to ncs resulting in similar absorption errors? But how large
are absorption errors, and is there any reason for morphology
to follow NCS?

After reading Dean Madden's latest-
We might need some assumption here that we are reasonably close
to the refined structure. If we start with random atoms then
shoving the atoms around in a way that fits the density better
might be seen as improving the structure from the point of
modeling the density, but not from the point of approximating the
real structure. But in this case the change in sign of Fc is
completely decoupled between sym-related reflections, and if you
enforce symmetry you will be enforcing the wrong symmetry and
worsening both the structure and the fit to the density.

I think Gerard Kleywegt has an example of enforcing NCS on
an erroneous structure, and it was not very effective
at reducing Rfree? And in that case the structure may have had some
resemblance to the density at low resolution, and the NCS may have been
somewhat correct.

I guess there are two questions, depending on whether you are at the
beginning of a refinement and may have a completely wrong structure, or
whether refinement is nearly complete and you want to know whether the
further improvement you get on applying NCS is real.

Jon Wright wrote:

Dear Ed,

I don't see how you decouple symmetry mates in the case of a wrong 
space group. Symmetry mates should agree with each other typically 
within R_sym or R_merge percent, e.g. about 2-5%. Observed and 
calculated reflections agree within R_Factor of each other, so about 
20-30%. The experimental errors are pretty much negligible and 
overfitting is not a question about error bars; it is about how hard to 
push a round peg into a square hole?


Cheers,

Jon

Edward Berry wrote:

Actually the bottom lines below were my argument in the case
that you DO apply strict NCS (although the argument runs into
some questionable points if you follow it out).

In the case that you DO NOT apply NCS, there is a second
decoupling mechanism:
Not only the error in Fo may be opposite for the two reflections,
but also the change in Fc upon applying a non-symmetrical
modification to the structure is likely to be opposite. So there
is no way of predicting whether |Fo-Fc| will move in the same
direction for the two reflections. I completely agree with Dirk
(although I am willing to listen to anyone explain why I am wrong).

Ed


Edward Berry wrote:

Dean Madden wrote:

Hi Dirk,

I disagree with your final sentence. Even if you don't apply NCS 
restraints/constraints during refinement, there is a serious risk of 
NCS contaminating your Rfree. Consider the limiting case in which 
the NCS is produced simply by working in an artificially low 
symmetry space-group (e.g. P1, when the true symmetry is P2): in 
this case, putting one symmetry mate in the Rfree set, and one in 
the Rwork set will guarantee that Rfree tracks Rwork.


I don't think this is right- remember Rfree is not just based on Fc
but Fo-Fc. Working in your lower symmetry space group you will have
separate values for the Fo at the two ncs-related reflections.
Each observation will have its own random error, and like as not
the error will be in the opposite direction for the two reflections.

Hence a structural modification that improves Fo-Fc at one reflection
is equally likely to improve or worsen the fit at the related 
reflection.

The only way they are coupled is through the basic tenet of R-free:
If it makes the structure better, it is likely

Re: [ccp4bb] an over refined structure

2008-02-07 Thread Edward Berry

Dean Madden wrote:

Hi Ed,

This is an intriguing argument, but I know (having caught such a case as 
a reviewer) that even in cases of low NCS symmetry, Rfree can be 
significantly biased. I think the reason is that the discrepancy between 
pairs of NCS-related reflections (i.e. Fo-Fo') is generally 
significantly smaller than |Fo-Fc|. (In general, Rsym (on F) is lower 
than Rfree.) Thus, moving Fc closer to Fo will also move its NCS partner 
Fc' closer to Fo' *on average*, if they are coupled.


OK, I see that now, the systematic errors must be related to NCS
in this case because we know if we reduced the data in the higher
space group, our Rsyms would be OK. I stand educated. But it is
difficult to go from there to real ncs where the large unaccounted
errors may not be related to ncs. Furthermore if you don't enforce
NCS the structural changes are asymmetric and there is no reason to
believe Fc will move in the same direction, even in this artificial
case. So Dirk's assertion still stands, I believe.
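Dean's coupling argument can be put in numbers with a toy simulation (my own sketch, not from this thread; all names and parameters are made up): generate NCS-related observation pairs Fo and Fo' that differ by an Rsym-like error, give the model value Fc a larger Rfree-like error, step Fc halfway toward Fo, and count how often that step also improves the fit to Fo'.

```python
import random

def coupling_fraction(sigma_sym, sigma_model, n=100_000, seed=1):
    """Fraction of 'refinement moves' (Fc stepped halfway toward Fo)
    that also reduce |Fo' - Fc| for the NCS-related reflection.
    sigma_sym:   spread between Fo and Fo' (Rsym-like error)
    sigma_model: spread between Fc and the true F (Rfree-like error)"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        f_true = 100.0
        fo = f_true + rng.gauss(0, sigma_sym)        # working reflection
        fo_ncs = f_true + rng.gauss(0, sigma_sym)    # its NCS mate (free set)
        fc = f_true + rng.gauss(0, sigma_model)      # current model value
        fc_new = fc + 0.5 * (fo - fc)                # improve fit to fo
        if abs(fo_ncs - fc_new) < abs(fo_ncs - fc):
            hits += 1
    return hits / n
```

When the Fo/Fo' discrepancy is much smaller than the model error (Rsym << Rfree), nearly every move drags the free reflection along, which is Dean's point; when the Fo/Fo' discrepancy dominates, the two reflections are effectively decoupled.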



Dean

Edward Berry wrote:

Actually the bottom lines below were my argument in the case
that you DO apply strict NCS (although the argument runs into
some questionable points if you follow it out).

In the case that you DO NOT apply NCS, there is a second
decoupling mechanism:
Not only the error in Fo may be opposite for the two reflections,
but also the change in Fc upon applying a non-symmetrical
modification to the structure is likely to be opposite. So there
is no way of predicting whether |Fo-Fc| will move in the same
direction for the two reflections. I completely agree with Dirk
(although I am willing to listen to anyone explain why I am wrong).

Ed


Edward Berry wrote:

Dean Madden wrote:

Hi Dirk,

I disagree with your final sentence. Even if you don't apply NCS 
restraints/constraints during refinement, there is a serious risk of 
NCS contaminating your Rfree. Consider the limiting case in which 
the NCS is produced simply by working in an artificially low 
symmetry space-group (e.g. P1, when the true symmetry is P2): in 
this case, putting one symmetry mate in the Rfree set, and one in 
the Rwork set will guarantee that Rfree tracks Rwork.


I don't think this is right- remember Rfree is not just based on Fc
but Fo-Fc. Working in your lower symmetry space group you will have
separate values for the Fo at the two ncs-related reflections.
Each observation will have its own random error, and like as not
the error will be in the opposite direction for the two reflections.

Hence a structural modification that improves Fo-Fc at one reflection
is equally likely to improve or worsen the fit at the related 
reflection.

The only way they are coupled is through the basic tenet of R-free:
If it makes the structure better, it is likely to improve the fit
at all reflections.

For sure R-free will go down when you apply NCS- but this is because
you drastically improve your data/parameters ratio.

Best,
Ed






Re: [ccp4bb] Problem is creation of Symmetry molecule PDB

2007-12-26 Thread Edward Berry

Sampath Natarajan wrote:

Dear all,
 
I solved a structure with four molecules in the asymmetric unit which 
form a dodecameric oligomeric structure in the biological process. I 
need to create the symmetry molecules to find out the cavity size of 
the entire molecule. I tried in coot, but I was not able to save the PDB 
file for the entire molecule. I could get only 4 molecules again. Could 
anyone help me to get a dodecameric pdb file? Thanks in advance.
 

PDBSET does this very nicely with symgen and chain keywords:
This is for P3; if you have higher symmetry you might need to
try different symops to see which generate the dimer, and perhaps
add unit translations, to make it come together right. Symops
for each spacegroup are listed in $SYMOP.

#!/bin/csh -f
pdbset xyzin tetra.pdb xyzout dodeca.pdb << eof
symgen x, y, z
symgen -Y,X-Y,Z
symgen Y-X,-X,Z
!select chain A B C D
chain symm 2 A I
chain symm 2 B J
chain symm 2 C K
chain symm 2 D L

chain symm 3 A Q
chain symm 3 B R
chain symm 3 C S
chain symm 3 D T

eof
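As a cross-check of what symgen does, a symmetry operator string can be applied to fractional coordinates directly. A minimal Python sketch of my own (not part of the recipe above; it uses eval for brevity, which a real parser would avoid):

```python
def apply_symop(symop, frac):
    """Apply a fractional symmetry operator written as in $SYMOP,
    e.g. '-Y,X-Y,Z' or '1/2+X,1/2-Y,-Z', to a fractional coordinate
    triple. Each comma-separated expression is evaluated with the
    current x, y, z bound to the names X, Y, Z."""
    x, y, z = frac
    names = {"X": x, "Y": y, "Z": z, "x": x, "y": y, "z": z}
    return tuple(eval(expr, {"__builtins__": {}}, names)
                 for expr in symop.split(","))

# e.g. the P3 operator used above:
# apply_symop("-Y,X-Y,Z", (0.1, 0.2, 0.3)) gives (-0.2, -0.1, 0.3)
```

Applying each symgen operator to every atom and relabeling chains, as pdbset does, is exactly this operation in orthogonalized form.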


Re: [ccp4bb] Fwd: [ccp4bb] Occupancy refinement of substrate using refmac5

2007-12-18 Thread Edward Berry

Zheng Zhou wrote:

Hi, Ed

I am dealing with a similar problem. I checked CNS qindividual.inp. But how 
do I refine one compound with two or more possible conformations (mainly 
due to one bond rotation), each of which has a different occupancy? 
Thanks in advance.



Hi Zheng,
Others can answer better than I, but for what it's worth:
You have basically two methods for refining alternate,
possibly overlapping, models.
One is the alternate conformations formalism which I have
not used but is documented in the CNS FAQ
http://cns.csb.yale.edu/v1.2/faq/text.html#xtal_refine
(search for:
Q. How do I deal with alternate conformations in my refinement? )

The other way is by modeling both structures (with different chain
letter or range of residues or something) and turn off vdw
interactions between the two entities which never occur
simultaneously in the same unit cell using the igroup statement.
Some hints can be gleaned from the answer to this different question
(in the same FAQ):

Q. My structure contains a molecule which lies across a symmetry operation. This means that some parts of the molecule are mapped onto each other by symmetry. It is not a 
special position case, because no one atom lies on the symmetry operation exactly. How can I tell CNS to refine the molecule while allowing the overlap to occur?

A. You can use the igroup statement to turn the pvdw (Packing-Van-der-Waals) 
interactions off by setting the weight to zero:
  ---

Syntax for the statement is something like: interaction (selection1) (selection2)
   means for it to check the interaction between each atom in selection1 and
   each atom in selection2. What I am using (with CNS 1.1) is:

 igroup
   interaction ( atom_select and not (segid M or segid Z or attr store9 > 0))
               ( atom_select and not (segid M or segid Z or attr store9 > 0))

   interaction ( atom_select and not (segid E or attr store9 > 0))
               ( segid M)
   interaction ( atom_select and not (segid R or attr store9 > 0))
               ( segid Z)
{
   interaction ( segid E ) ( segid M ) weights * 1.0 pvdw 0.0 end
   interaction ( segid R ) ( segid Z ) weights * 1.0 pvdw 0.0 end
}

   evaluate ($alt=1)
   while ( $alt <= $nalt ) loop alcs
     interaction ( atom_select and ( attr store9 = $alt or attr store9 = 0 ))
                 ( atom_select and ( attr store9 = $alt ))
     evaluate ($alt=$alt+1)
   end loop alcs
 end

Here segid M and Z are the same subunits as E and R, but in alternate 
conformations.
Store9 has to do with the formally declared alternate conformations (see atom 
selection).

The first interaction statement ignores M and Z and checks the interaction
of everything else with everything else. The second checks the interaction
of everything in M with everything except its alternate self E; the third
checks Z with everything except R.
I see I have {commented out} the section that sets pvdw weights to zero
between alternate conformations of the same thing, but I'm not sure if this
is right. It may be that this is not needed since we avoid checking those
interactions, but I think at one point we were turning off the NBONDS
messages but not the vdw interaction, so there may be two separate things
needed here. I would be glad for any clarification.
Ed


On Dec 17, 2007 2:24 PM, Edward Berry [EMAIL PROTECTED] wrote:


I think the correlation between occupancy and B-factor depends
also on the size of the ligand (relative to resolution).
Bob Stroud, I think, has estimated occupancy by comparing
the integrated electron density of the ligand with that of
a well-defined, isolated water (assumed to be at unit occupancy?).

In principle the integrated electron density is not affected
by applying a B-factor, it is just spread out over a wider
area. In the case of a single atom at 3 A resolution, it
is spread out under the neighboring atoms and effectively
lost, so it is hard to distinguish high B-factor from low
occupancy.
In a large ligand most of the atoms are inside the ligand,
so their spread-out density remains inside the ligand
and gets counted in the integrated density. In that case
high B-factor has a very different effect than low occupancy,
as only the latter reduces the total electron density of
the ligand.

During a previous reincarnation of this thread I did the
simple test of refining occupancy and B-factor for a
stretch of the protein (holding the rest of the protein
at unit occupancy) in CNS 1.1, and I felt the results
were quite satisfactory (don't have the specifics now).

Ed

Anastassis Perrakis wrote:
  I have already changed occupancies as Eleanor mentioned, and got
  approximate values. But my hope is to try to get much precise
ones if
  possible.
 
  I never expected to preach the 'Kleywegt Gospel' in the ccp4bb,
  but in this case what you need is more accurate

Re: [ccp4bb] Fwd: [ccp4bb] Occupancy refinement of substrate using refmac5

2007-12-17 Thread Edward Berry

I think the correlation between occupancy and B-factor depends
also on the size of the ligand (relative to resolution).
Bob Stroud, I think, has estimated occupancy by comparing
the integrated electron density of the ligand with that of
a well-defined, isolated water (assumed to be at unit occupancy?).

In principle the integrated electron density is not affected
by applying a B-factor, it is just spread out over a wider
area. In the case of a single atom at 3 A resolution, it
is spread out under the neighboring atoms and effectively
lost, so it is hard to distinguish high B-factor from low
occupancy.
In a large ligand most of the atoms are inside the ligand,
so their spread-out density remains inside the ligand
and gets counted in the integrated density. In that case
high B-factor has a very different effect than low occupancy,
as only the latter reduces the total electron density of
the ligand.
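The integrated-density argument can be illustrated with a one-dimensional Gaussian "atom" (a schematic of my own, not Stroud's actual procedure; the width-versus-B relation is only qualitative): the B-factor widens the peak but leaves its integral alone, while occupancy scales the integral itself.

```python
import math

def atom_density(x, q=1.0, b=20.0):
    """Schematic 1-D atom: occupancy q scales the curve, B-factor b
    widens it (width ~ sqrt(B)); the integral over x is always q."""
    sigma = math.sqrt(b) / (2.0 * math.pi)
    return q * math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def integrate(f, lo=-30.0, hi=30.0, n=60001):
    """Simple Riemann sum, good enough for this demo."""
    h = (hi - lo) / (n - 1)
    return h * sum(f(lo + i * h) for i in range(n))
```

Raising b from 20 to 80 leaves the integral at 1.0 but halves the peak height; halving q instead halves the integral. The latter is the signal a large ligand's summed density retains, which is why occupancy and B-factor are separable for big ligands but not for single atoms at low resolution.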

During a previous reincarnation of this thread I did the
simple test of refining occupancy and B-factor for a
stretch of the protein (holding the rest of the protein
at unit occupancy) in CNS 1.1, and I felt the results
were quite satisfactory (don't have the specifics now).

Ed

Anastassis Perrakis wrote:

I have already changed occupancies as Eleanor mentioned, and got
approximate values. But my hope is to try to get much more precise ones if
possible.


I never expected to preach the 'Kleywegt Gospel' in the ccp4bb,
but in this case what you need is more accurate answers, not more 
precise ones (or better both, but precision alone can be a problem,
and you can easily get 'precise' but inaccurate data by making the
wrong assumptions in your experiment).

http://en.wikipedia.org/wiki/Accuracy


I have heard from my colleague SHELX can refine occupancies, and
got its license. I'll next try SHELX.


I think that phenix.refine can also do occupancies ?
The problem is not if the program can do it, but if in your specific case
you have enough information to do that in a meaningful way.

For a soaking experiment and 1.5 A data, I would say that Eleanor's
suggestion of tuning Occ based on B is as close as you would get:
accurate enough given the data, although not necessarily very precise.

Tassos


Re: [ccp4bb] Protein-detergent micelle sizes

2007-11-20 Thread Edward Berry

In case you end up compiling your own list, here is one entry:

von Jagow and co-workers (Biochim Biophys Acta. 1977 462(3):549-58.)
used tritiated Triton X-100 to measure the binding to Complex III

 5. In accordance with the high polarity the amount of bound
 detergent is relatively low, it amounts to 0.2 g Triton X-100/g
 protein. The amount of Triton bound to 1 mol of b-c1-dimer corresponds
 to the molecular weight of the Triton micelle.

The bc1 dimer MW is ~480 kDa, so 0.2 g/g corresponds to ~96 kDa of detergent.

Jacob Keller wrote:

Dear Crystallographers,

I am trying to gather literature values on the size of the 
detergent/micelle belts on solublized membrane proteins. Does anybody 
know of a good repository of this info, whether as a database or as a 
review paper, or otherwise? The rough values of 15-25kD to 35-50kD were 
given in one recent BB posting, but I was looking for a more systematic 
tabulation...


Thanks,

Jacob Keller

***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
Dallos Laboratory
F. Searle 1-240
2240 Campus Drive
Evanston IL 60208
lab: 847.467.4049
cel: 773.608.9185
email: [EMAIL PROTECTED]
***


Re: [ccp4bb] carving up maps (was re: pymol help)

2007-10-29 Thread Edward Berry

This is not such a problem when using the old Map-cover command
in O, because the cut-offs are flat planes, you would get cubic
density around each atom which would raise the suspicion of even
the most gullible reader.

But a better solution would be to not contour the carved surface-
leave a gaping hole in the net where something else is connected.
Another way to describe this:
contour the real density, but only display the contours within
a certain radius of selected atoms. Then in Anastassis's example
there would be no density for the atoms because the contour is
outside the cutoff.


Anastassis Perrakis wrote:

Dear Andrew,

Thank you for that posting; I would like to simply agree with the 
Bobscript manual and your suggested practice.


I think the 'carve' commands should not be there; if you wonder why, 
take a ligand, put it wherever you want in space,
set the map sigma to -0.5, display a map with carve=1.2 and think if 
this picture is informative, especially in the context
of your favorite competitor publishing it in Nature. 


A.

PS Yanming, if you really want to do what you ask for, ADOBE 
ILLUSTRATOR IS THE BEST WAY TO GO



*From:* [EMAIL PROTECTED]
*Subject:* Re: [ccp4bb] pymol help
*Date:* October 29, 2007 8:44:46 GMT+01:00
*To:* CCP4BB@JISCMAIL.AC.UK
*Reply-To:* [EMAIL PROTECTED]

Dear all,
Sorry I did not make it clear in my first email. Now my question can 
boil down to:


IS IT POSSIBLE TO ONLY ZOOM ONE OBJECT AND KEEP ALL THE OTHER OBJECTS 
UN-ZOOMED IN PYMOL?


thanks
Yanming



On Oct 29, 2007, at 13:43, Andrew Gulick wrote:

I'd be curious to know if there is any consensus in the community with
using the carve command for showing maps. I have never felt comfortable
showing density within a cutoff radius of a particular residue or--even
worse--a ligand, and felt the figures should display the extraneous bits
as well. The burden was on the crystallographer to find an appropriate
view (slab, sigma, etc...) to display the map.

The Bobscript manual appears to agree with me on this one as it states:

http://www.strubi.ox.ac.uk/bobscript/doc24.html
If your density is good then you will just have density over residue 999,
but if things are not so hot then you may want to cheat and just draw the
bits of density near to the selected atoms.

Just curious,
Andy
--
Andrew M. Gulick, Ph.D.
---
(716) 898-8619
Hauptman-Woodward Institute
700 Ellicott St
Buffalo, NY 14203
---
Senior Research Scientist
Hauptman-Woodward Institute

Assistant Professor
Dept. of Structural Biology, SUNY at Buffalo

http://www.hwi.buffalo.edu/Faculty/Gulick/Gulick.html
http://labs.hwi.buffalo.edu/gulick


On 10/29/07 4:40 AM, Stefan Schmelz [EMAIL PROTECTED] wrote:



Dear Yanming,

To show pretty density around a model you have to import a CCP4 density
map and display it around your ligand. The simplest solution is to use
the CCP4 GUI and tick the box Generate weighted difference maps files in
CCP4 format when running Refmac5 (one or two cycles are enough). Specify
names for the FWT and DelFwt maps and rename the maps afterwards to *.ccp4.
This renamed map (e.g. fwt.ccp4) can be opened in pymol. To show density
around your ligand you can use the following command:

isomesh map, name_of_fwtmap, 1.0, ligand, carve=1.8

(map = creates an object named map, name_of_fwtmap = name of your map,
1.0 = sigma level, carve=1.8 = width the map is displayed around your
ligand)

This will allow you to show a pretty electron density map around your
ligand without any chemical info of the ligand.


Stefan Schmelz


Yanming Zhang wrote:

Hi, all,

I want to make a pymol figure which can show the pretty density of a
ligand, but we don't want to show the detailed chemical info of the
ligand. If I use a large enough sphere_scale for the ligand, the
chemical info will be hidden but the density map will be disrupted. If
I use a smaller sphere_scale, the density looks great but the chemical
info of the ligand will be visible. How should I overcome this dilemma?
Thank you very much for your help.
Yanming




Re: [ccp4bb] Solvent content of membrane protein crystals

2007-09-25 Thread Edward Berry

Das, Debanu wrote:


Hi,
  There are at least 4 methods to try to estimate amount of detergent in a membrane protein crystal 

.


  In summary, someone wanting to estimate amount of detergent in their 
crystals and have sufficiently large and numerous crystals, could try out 
any of the above methods. The above techniques are quite well documented 
in literature. The first 3 can be done in-house and so can FTIR. 


I happen to have the reference for Ron Kaplan's TLC method at my fingertips:

Eriks, L. R., Mayor, J. A.  Kaplan, R. S. (2003). A strategy for
identification and quantification of detergents frequently used
in the purification of membrane proteins. Anal Biochem 323, 234-41.

Sensitivity is limited by the amount of aqueous solution you can spot on a TLC 
plate.
Probably could improve sensitivity by speed-vac-ing an aliquot to near dryness
and redissolving in 50% methanol or something.


Re: [ccp4bb] Solvent content of membrane protein crystals

2007-09-23 Thread Edward Berry

Savvas Savvides wrote:
Indeed, but wouldn't consideration of micelle size affect our  
estimation of the number of molecules in the asu, in some cases  
significantly?

Good point- I think now that is taken into account by just saying
membrane proteins tend to have a high solvent content and taking
that into consideration when you guess the number of molecules.
But it would be nice to account for the detergent explicitly.
Say by analyzing detergent content of the crystals, or in some
ideal cases neutron diffraction with perdeuterated detergent.
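To make "account for the detergent explicitly" concrete, here is a minimal sketch of my own of the Matthews arithmetic with invented numbers: the solvent fraction follows from Vm = V(asu)/MW, so adding the bound detergent's mass to the MW lowers the apparent solvent content. Using one partial specific volume (~0.74 cm3/g) for protein and detergent alike is a simplification; detergents typically run higher.

```python
def solvent_fraction(v_asu_A3, mw_da, vbar_cm3_per_g=0.74):
    """Matthews-style estimate: Vm = v_asu/mw in A^3/Da; the modeled
    mass (protein plus any detergent counted in mw_da) occupies
    vbar/0.602 A^3 per Da, the remainder is 'solvent'."""
    vm = v_asu_A3 / mw_da
    return 1.0 - (vbar_cm3_per_g / 0.602) / vm
```

For example, a hypothetical 100 kDa protein in a 4.0e5 A^3 asymmetric unit gives Vm = 4.0 and about 69% solvent; counting a 40 kDa micelle as part of the model drops the "solvent" to about 57%, much of which is what the plain protein-only number silently lumps into solvent.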

The crystal packing of some membrane proteins shows that they tend to  
pack as potatoes in space with relatively few protein-protein  
contacts and with detergent micelles presumably providing the rest of  
the crystal packing interactions. That also explains the often  
significant diffraction anisotropy observed in such crystals. One  
classic example is the prototypical potassium channel structure (KCSA)  
(PDB entry 1bl8).

I'll have to look at KCSA again. I've been assuming the micelle is too
fluid and solvent-like to make any kind of a crystal contact, but it
occupies space holding the molecules apart and preventing real crystal
contacts. This was the rationale behind Michel's use of small
amphiphiles to replace the bulky micelle, and antibody fragments to
bridge the gap and provide hydrophilic areas for contact.


Savvas


Quoting Edward Berry [EMAIL PROTECTED]:


I would use a very general definition for solvent,
including disordered detergent and lipids.
As you know in many cases ordered detergents and lipids
have been modeled in the coordinates, so they are part of
the model not the solvent. In some cases I think waters
should be included in the model not solvent- say for
structural waters buried in the protein at least.

Ed

Savvas Savvides wrote:


Dear colleagues,

in estimating the solvent content of membrane protein crystals it
would only seem reasonable that micelle size should also be taken
into account. Depending on the aggregation number and MW of a given
detergent, the concentration of detergent used, and the buffer
conditions, one may have micelles on the order of 15-25 kDa or even
35-50 kDa for detergents with alkyl chains of more than 10 carbons.


However, when I took a look in a handful of papers reporting
Matthews numbers for membrane protein crystals, it became apparent
that only the protein MW is used in such estimates. I am beginning
to wonder if one should even bother reporting a Matthews number for
a membrane protein crystal given the uncertainties surrounding the
size and role of micelles in crystal packing.


Any thoughts on this?

best wishes
Savvas





Re: [ccp4bb] Solvent content of membrane protein crystals

2007-09-22 Thread Edward Berry

I would use a very general definition for solvent,
including disordered detergent and lipids.
As you know in many cases ordered detergents and lipids
have been modeled in the coordinates, so they are part of
the model not the solvent. In some cases I think waters
should be included in the model not solvent- say for
structural waters buried in the protein at least.

Ed

Savvas Savvides wrote:


Dear colleagues,

in estimating the solvent content of membrane protein crystals it would
only seem reasonable that micelle size should also be taken into
account. Depending on the aggregation number and MW of a given
detergent, the concentration of detergent used, and the buffer
conditions, one may have micelles on the order of 15-25 kDa or even
35-50 kDa for detergents with alkyl chains of more than 10 carbons.


However, when I took a look in a handful of papers reporting Matthews
numbers for membrane protein crystals, it became apparent that only the
protein MW is used in such estimates. I am beginning to wonder if one
should even bother reporting a Matthews number for a membrane protein
crystal given the uncertainties surrounding the size and role of
micelles in crystal packing.


Any thoughts on this?

best wishes
Savvas


Re: [ccp4bb] CCP4 rotation convention

2007-08-13 Thread Edward Berry

Someone should design a device like a compass gimbal with an extra ring
for teaching Euler angles, patent it (GNU hardware license; world demand
is probably 100 pieces), and persuade Hampton Research or MiTeGen to
manufacture it.

The device (picture at http://sb20.lbl.gov/berry/Euler2.gif,
but I'm no artist) consists of 3 concentric rings:

The outer ring is mounted by external studs in an F-shaped support
so it can rotate about a vertical diameter (alpha, new Z).
One of the bearings has a brass disk with degree marks etched and
an indicator on the ring reads alpha angle.

The central ring is connected to the outer ring by horizontal bearings,
allowing it to rotate about a horizontal diameter. Likewise a brass disk
indicates the beta angle.

The innermost ring is connected to the central ring by vertical bearings,
allowing it to rotate about a vertical axis (when beta and alpha are zero).
An indicator read gamma.

Inside the inner ring is suspended an asymmetric object like a pointing
hand or arrow painted red on one side and green on the other.
Also wire axes indicating x,y,z direction in the original crystal.

The F-shaped mounting bracket would have x,y,z direction in the target
crystal indicated (and they would be the same when alpha, beta, and gamma
are zero, i.e. the rings are coplanar).

Playing with this would take some of the abstractness out of Euler angles.
It would also let the student resolve for herself the apparent
contradiction that all orientations can be reached by the inner object,
despite the fact that two of the rotations are (initially) coaxial.

Ed
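The same resolution of the apparent contradiction can be seen numerically. Below is a small sketch of my own; it uses the common z, new-y, new-z composition written as Rz(alpha)*Ry(beta)*Rz(gamma), which is one of several conventions, so check your program's documentation for its exact one. At beta = 0 only alpha+gamma matters (the two z rotations are coaxial), yet with beta free every orientation is reachable.

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyz(alpha, beta, gamma):
    """Compose the three gimbal rotations: gamma about z, then beta
    about the new y, then alpha about the new z."""
    return matmul(rot_z(alpha), matmul(rot_y(beta), rot_z(gamma)))
```

With beta = 0, euler_zyz(0.3, 0.0, 0.5) and euler_zyz(0.8, 0.0, 0.0) give the same matrix, which is exactly the degeneracy a student would feel with the rings coplanar.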


Re: [ccp4bb] Calculating vertical offset of helices

2007-08-06 Thread Edward Berry
Since Jie Liu is talking about coiled coils and heptad
repeat, I think what may be needed is the displacement
along the bundle axis. So an expression for the best
bundle axis line, and then the projection of different
C-a's onto that line to measure the difference between
them?

Ed

Eleanor Dodson wrote:

 Jie Liu wrote:
 
Dear CCP4ers:

Does anyone know an existing program to calculate the lateral
displacement---the vertical offset, measured as a fraction of the
the heptad repeat---of neighboring helices in coiled coils or helix
bundles,  either parallel or antiparallel?

Your input is greatly appreciated.

Have a nice weekend!

Jie


  
 
 
 Maybe I don't understand the question...
 
 If you know what you want to match to what, can't you just superpose the
 CAs of helix i onto those of helix j and sort it out from that?
 Selecting the match might be tricky, but you must have done that?
 
 You will need to do some geometric fiddling I guess.
 The Superpose molecule option from the GUI would give you the centre of
 mass of each segment and the direction cosines for the transformation,
 which will be those for the helix axes. I think the dot product of the
 COM onto the DCs would give you the relative displacements in Angstrom:
 
 (X_com1*DC1 + Y_com1*DC2 + Z_com1*DC3) - (X_com2*DC1 + Y_com2*DC2 +
 Z_com2*DC3)
 
 Eleanor
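Eleanor's dot-product recipe amounts to projecting the difference of the two centres of mass onto the axis direction. A minimal sketch of my own, with hypothetical coordinates:

```python
import math

def axial_offset(com1, com2, axis):
    """Signed displacement of com1 relative to com2 along the helix
    (or bundle) axis: (com1 - com2) . axis / |axis|, in the same
    units as the coordinates (Angstrom)."""
    d = [a - b for a, b in zip(com1, com2)]
    norm = math.sqrt(sum(a * a for a in axis))
    return sum(di * ai for di, ai in zip(d, axis)) / norm
```

Dividing the result by the rise per residue (~1.5 Angstrom for an alpha helix) converts the offset to residues, and dividing by the heptad repeat length (~10.5 Angstrom) gives the offset as a fraction of a heptad, which is what the original question asked for.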


Re: [ccp4bb] resolution vs ramachandran

2007-08-03 Thread Edward Berry

Procheck puts out such a correlation (% most favorable
vs resolution) in the _04.ps file. For example look at
page 7, first panel of the sample procheck output at:
http://sb20.lbl.gov/SQR/procheck-2H88.pdf
It appears that 83.5% would be well above average
for a 3 A structure according to procheck statistics.
(However I think most structures nowadays are
above average by procheck statistics)

Ed

Xiaofei Jia wrote:


Dear all,
 
I am now preparing my structure for deposition.  The

crystal diffracts to 3.0 A. R: 0.20; Rfree: 0.25.
What I am concerned with is the Ramachandran plot:
83.5% in the core region, 15.8% in allowed, 0.5% in
generously allowed, 0.2% in the disallowed region. The model
fits the density pretty well, and all attempts to
improve the Ramachandran statistics have been unsuccessful after
quite a few trials. I am wondering if 83.5% in the core
region is acceptable with 3.0 A resolution data?
Moreover, is there some numeric correlation between
diffraction resolution and model Ramachandran quality? Thank
you for your help.

Xiaofei



   




Re: [ccp4bb] Help with coordinate file

2007-07-24 Thread Edward Berry

[EMAIL PROTECTED] wrote:


Hello Mona,
I am guessing you have the atom name, number and coordinates in your file.
I did something like that, and Openbabel will convert it to the pdb file
you desire, but as far as I know you will have to assign a residue name to
each atom yourself. I did this by superimposition on the original file and
manually naming the atoms in a text editor.


As has been pointed out on this BB before, the text editor nedit
is very good for this because of its ability to cut and paste columns,
i.e. rectangular selections of text. Hold down the control key and
drag from the upper right to the lower left corner (or v.v.) to select
the column with residue names, copy, then select the same
area in the defective file and paste. (Make sure the two
selections have the same number of residues and atoms
in each residue.)

Get nedit (if you don't have it) from nedit.org
or yum install nedit


Re: [ccp4bb] AW: [ccp4bb] removal of sulfate ion from the active site

2007-05-25 Thread Edward Berry

In those old chemical kinetics courses it was explicitly or
implicitly clear that [I] refers to I(free), not I(total).
The way assays are usually run, [E] << K(I), so to a good
approximation the amount of bound ligand is negligible.

The way we set up crystallization, [E] is often mM
while K(I) is uM or below, and you had better consider
concentration AND total amount. You will never get
decent occupancy if you add 1 uM ligand to 1 mM protein,
even if the K(I) is 10 nM! On the other hand, if you
ignore K(I) and add stoichiometric ligand, dissociation
of a few times K(I) to satisfy the free concentration
will not hurt occupancy significantly.
If you add 2x stoichiometric to
be safe, you may also fill some very low affinity
(100 uM) site. Use stoichiometric plus a few x Kd,
or solve the quadratic equation to see how much is
really required.
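The quadratic in question: for a 1:1 complex with total protein Pt, total ligand Lt and dissociation constant Kd, the bound concentration [PL] is the smaller root of [PL]^2 - (Pt+Lt+Kd)[PL] + Pt*Lt = 0. A short sketch of my own (the only requirement is that all three inputs use the same concentration units):

```python
import math

def occupancy(p_tot, l_tot, kd):
    """Fraction of protein sites occupied at equilibrium for a 1:1
    complex; p_tot, l_tot and kd in the same concentration units.
    Takes the physically meaningful (smaller) root of the quadratic."""
    s = p_tot + l_tot + kd
    pl = (s - math.sqrt(s * s - 4.0 * p_tot * l_tot)) / 2.0
    return pl / p_tot
```

Working in uM: occupancy(1000, 1, 0.01), i.e. 1 uM ligand into 1 mM protein with a 10 nM Kd, gives about 0.001 occupancy, the hopeless case described above, while occupancy(1000, 1050, 0.01), stoichiometric ligand plus a small excess, is essentially complete.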


Marius Schmidt wrote:


think about your old chemical kinetics courses.
what counts is concentration and not amount.

M



Dear Marius and others,
here I would like to comment: the problem with soaking is often not
so much the concentration of the ligand, but the amount of ligand
needed to fill all the binding sites in the crystal. A typical crystal
contains about 20 mM protein, so one has to add the equivalent amount
of ligand. On the other hand, the concentration of UNBOUND inhibitor,
assuming a 1:1 protein-ligand complex, need only be on the order of
10-100 times the Kd (90-99% occupancy). Ways to overcome this are to
soak in a large volume of mother liquor (containing a large amount of
ligand) or to add solid ligand to the mother liquor and let the
ligand slowly dissolve and diffuse into the crystal.

Herman



Re: [ccp4bb] Real Space Correlation coefficients

2007-02-20 Thread Edward Berry

Sorry, I didn't read the question.
sfcheck wouldn't calculate CC between exp and MR map.
Ed


Edward Berry wrote:


Also sfcheck calculates correlation coefficient for
main chain and side chain of each residue, presented
graphically in row 2 of the figure starting page 3
(example:
http://sb20.lbl.gov/cytbc1/sfcheck-1ppj.pdf ).
I suspect there is also a log file with the numeric
values.

Ed

Charles W. Carter Jr. wrote:

Is there a CCP4 program that will calculate residue-by-residue 
correlation coefficients for a molecular replacement solution and an 
experimentally phased map?


Thanks,

Charlie

**UNC Crystallographers NOTE new website url**
[EMAIL PROTECTED]
http://xtal.med.unc.edu/CARTER/Welcome.html
Department of Biochemistry and Biophysics CB 7260
UNC Chapel Hill, Chapel Hill, NC 27599-7260
Tel:  919 966-3263
FAX 919 966-2852