Re: [ccp4bb] Off Topic: Web or e-tools for booking instrument time

2011-02-28 Thread Dmitry Veprintsev
Hi All, 
I have used MRBS 
 http://mrbs.sourceforge.net/
to manage bookings for a large number of instruments for a few years.
It is free, open source and simple; it installs locally and works very well.

regards, Dmitry
--
Dr. Dmitry Veprintsev
Biomolecular Research Laboratory, OFLC/103 Paul Scherrer Institut
5232 Villigen PSI
Switzerland
Tel +41 (0) 56 310 5246; Fax +41 (0)56 310 5288 dmitry.veprint...@psi.ch
http://www.psi.ch/~veprintsev_d


Re: [ccp4bb] crystallizing a complex that's sensitive to ionic strength

2011-02-28 Thread Tim Gruene
Dear Hua,

adding water as suggested by Jan Kern could also be accomplished in a more
sophisticated way by using dialysis buttons. They require large volumes, though;
5 µl is the minimum as far as I know.

At the fancy end of this you could try the TOPAZ system by Fluidigm.

At the ECM in Darmstadt one of the companies presented a cheap, plastic-based
version of this which looked quite appealing to me. Maybe it was the CrystalHarp
from Molecular Dimensions, but I am not sure.

Cheers, Tim

On Sat, Feb 26, 2011 at 09:13:49PM -0500, Hua Yuan wrote:
 Dear CCP4 community members,
 
 I've been trying to crystallize a protein complex that's very sensitive to
 ionic strength: lower salt (~0.3 M) causes precipitation of the complex,
 but higher salt (~0.5 M) breaks the complex apart.  The interaction that
 holds the complex together is probably mainly ionic.
 The crystals I have obtained so far contain only one component of the
 complex, and all of the crystallization conditions they came from have
 high salt in them, such as 2 M ammonium sulfate.  Besides repeatedly
 screening many crystallization conditions, I was wondering whether there
 is any way to work around this problem.
 Your suggestions would be greatly appreciated!
 
 Thanks,
 
 Hua

-- 
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

phone: +49 (0)551 39 22149

GPG Key ID = A46BEE1A





[ccp4bb] coot

2011-02-28 Thread FREITAG-POHL S.
Hello everybody,

Currently I am refining my 6 x 220 amino acid structure, and
I was wondering whether Coot automatically writes a kind of log of
what I change in my PDB file when I fit in new residues or mutate
amino acids. If so, where can I find it?

Thanks a lot,

Stefanie

Dr. Stefanie Freitag-Pohl
Durham University
Chemistry Dept
South Road
Durham.  DH1 3LE
Tel:  0191 3342143
Email: stefanie.freitag-p...@durham.ac.uk




Re: [ccp4bb] Density sharpening with Truncate?

2011-02-28 Thread Randy Read
Hi,

I'm on Garib's side here.  The way the maximum likelihood targets work, the 
variances are defined relative to the average intensity in a resolution shell, 
so if you change the falloff the variances will change in the same way.  In 
fact, one way to implement maximum likelihood refinement is in terms of 
E-values, from which the falloff has been removed.  If your B-factors didn't 
run into hard limits (which, as Garib points out, they will when you make the 
data non-physical) you would end up with the same model if you refined against 
sharpened data, except the B-factors would be lower.  The other thing that will 
change if you sharpen the data is that the R-factors will be higher, because 
the poorly-fit higher-resolution terms will contribute more to the sums.  And 
that's probably not what you want when you might already have a hard time 
getting a low resolution structure past the freeR police!

This is a case where intuition can lead you astray.  Intuition might suggest 
that, if you sharpen the data, the refinement program should pay more attention 
to fitting the high resolution detail, but the likelihood target doesn't look 
at the data the same way you do when you look at a map.  The fact that you can 
define the target in terms of E-values means that, if your model and data are 
both good, the likelihood target can be thought of as sharpening the data 
anyway.
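
To make the E-value idea concrete, here is a minimal sketch (assuming numpy,
and ignoring the expected-intensity factor epsilon and the centric/acentric
distinction) of normalizing amplitudes in resolution shells so that the
overall falloff drops out:

import numpy as np

def normalize_to_e_values(f_obs, d_spacings, n_shells=20):
    # Crude E-values: divide each |F| by the r.m.s. |F| of its resolution shell.
    s = 1.0 / d_spacings**2                     # 1/d^2 as a resolution measure
    edges = np.quantile(s, np.linspace(0.0, 1.0, n_shells + 1))
    shell = np.clip(np.searchsorted(edges, s, side="right") - 1, 0, n_shells - 1)
    rms = np.sqrt(np.array([np.mean(f_obs[shell == i]**2) for i in range(n_shells)]))
    return f_obs / rms[shell]                   # mean E^2 is ~1 in every shell

Because a sharpening B multiplies all amplitudes in a thin shell by (nearly)
the same factor, it cancels in this normalization; that is the sense in which
the likelihood target is insensitive to the overall falloff.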

Best wishes,

Randy Read

On 28 Feb 2011, at 09:02, Dirk Kostrewa wrote:

 Dear CCP4ers,
 
 I really would sharpen the structure factors, not only the electron density
 maps. The simple reason is: if sharpening emphasizes enough information at
 higher resolution to help interpret the electron density maps, refinement
 will also benefit from this information.
 Of course, the mean B-factor of the refined structure will be lower by the
 sharpening B-factor, but since B-factor sharpening is usually done with
 lower-resolution data, the Wilson B-factor is usually very high, and thus
 far I haven't run into problems with B-factors crashing at the lower limit.
 The sharpening B-factor can easily reach values in the -100s A**2, not only 
 -10 to -50 A**2. Axel Brunger has published several papers about how to 
 estimate a good sharpening B-factor (a recent one with references is Brunger 
 et al. Acta Cryst D65, 128-133). He usually describes map sharpening, but 
 B-factor sharpening of structure factors seems to be done routinely for virus 
 structures in Steve Harrison's lab.
 
 One word of caution: the B-factor sharpening should be correctly described 
 not only in the publication but also in the PDB deposition (if refinement was 
 done against sharpened structure factors, the refinement statistics can only 
 be reproduced using these structure factors). The original structure factors 
 can be easily reproduced by applying back the negative sharpening B-factor.
 
 Best regards,
 
 Dirk.
 
 On 26.02.11 01:09, Garib N Murshudov wrote:
 I would not sharpen structure factors before refinement. It may cause
 problems with B value refinement (a lot of B values may get stuck around 2,
 or the minimum B). One must remember that not all atoms in a crystal have
 the same B value; there is a distribution of B values.
 
 However maps can be sharpened after refinement. It can be done directly in 
 coot (I hope this version of coot is now widely available). Or if you are 
 using refmac for refinement you can use:
 
 mapc sharpen   #  regularised map sharpening; B values and regularisation
 parameters are calculated automatically

 or

 mapc sharpen <Bvalue>   # regularised map sharpening with a specified B value

 or

 mapc sharpen <Bvalue>
 mapc sharpen alpha <alphavalue>   #  regularisation parameter, e.g. alpha=0.1;
 alpha=0 is simple sharpening.
 
 
 I am sure other programs have similar options. (I know CNS does, and it has
 been used successfully by many people.)
 
 regards
 Garib
 
 P.S. These options are available from refmac v5.6, available from:
 www.ysbl.york.ac.uk/refmac/data/refmac_experimental/refmac5.6_linux.tar.gz
 
 
 
 On 25 Feb 2011, at 23:57, Dima Klenchin wrote:
 
 At 05:39 PM 2/25/2011, Pete Meyer wrote:
 Or could anyone suggest a program that would be of help?
 CAD scaling with a scale factor of 1.0 and negative B-factor (isotropic or 
 anisotropic) should do the trick.  I haven't had much luck with density 
 sharpening (at least at ~4-5 Angstroms), but others have apparently had 
 some success with it.
 Alternatively, the CCP4i task "Run FFT" does the job:

 1. Take the MTZ from the Refmac output.
 2. Run FFT to create a simple map with sigmaA-weighted phases (i.e., the
 PHWT label).
 3. In "Infrequently used options", "Apply B-factor scaling to F1", specify a
 negative B-factor scaling value, usually within -10 to -50.
 
 - Dima
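
For reference, the exponential factor behind all of these recipes is the
same; a minimal numpy sketch (an illustration, not any program's code) of
applying a sharpening B to amplitudes:

import numpy as np

def sharpen_amplitudes(f, d, b_sharp=-50.0):
    # Scale |F| by exp(-B/(4 d^2)); a negative B boosts the high-resolution
    # (small d) terms. Applying the opposite sign restores the original data,
    # which is why a deposited sharpening B can always be "applied back".
    return f * np.exp(-b_sharp / (4.0 * d**2))

With b_sharp = -50, a 2 A reflection is boosted by exp(50/16), about 23x,
while a 10 A reflection changes by only about 13%.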
 
 -- 
 
 ***
 Dirk Kostrewa
 Gene Center Munich, A5.07
 Department of Biochemistry
 Ludwig-Maximilians-Universität München
 Feodor-Lynen-Str. 25
 D-81377 Munich
 Germany
 Phone:+49-89-2180-76845
 

Re: [ccp4bb] coot

2011-02-28 Thread Zheng Zhou
Hi, Stefanie

Are those 6 molecules related by NCS? If so, you can model one first
and use transform_coords_molecule(imol, rtop) to generate the others.

I used to do this for a pentamer (model copy 1 first, then generate copies
2-5; x1..z3 and a, b, c stand for your NCS rotation and translation):

output_pdb = 'template'
for i in range(2, 6):
    # molecule 1 is transformed in place, so each pass applies the operator
    # again and moves the model on to the next copy
    transform_coords_molecule(1, [[x1, y1, z1, x2, y2, z2, x3, y3, z3],
                                  [a, b, c]])
    filename = output_pdb + str(i) + '.pdb'
    save_coordinates(1, filename)

I think you can write all the transformation matrices out explicitly instead
of using the loop if they differ significantly (see the sketch below). Others
may have more experience.
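
For instance (a sketch of that variant; the toy operators here are
hypothetical 72-degree steps about z, to be replaced with the matrices from
your own NCS analysis; transform_coords_molecule and save_coordinates are
the Coot scripting calls used above):

import math

c, s = math.cos(2 * math.pi / 5), math.sin(2 * math.pi / 5)
# one (rotation, translation) pair per step, each mapping the current
# coordinates of molecule 1 on to the next copy
ncs_ops = [
    ([c, -s, 0, s, c, 0, 0, 0, 1], [0.0, 0.0, 0.0]),   # -> copy 2
    ([c, -s, 0, s, c, 0, 0, 0, 1], [0.0, 0.0, 0.0]),   # -> copy 3
    ([c, -s, 0, s, c, 0, 0, 0, 1], [0.0, 0.0, 0.0]),   # -> copy 4
    ([c, -s, 0, s, c, 0, 0, 0, 1], [0.0, 0.0, 0.0]),   # -> copy 5
]
for i, (rot, trans) in enumerate(ncs_ops, start=2):
    transform_coords_molecule(1, [rot, trans])
    save_coordinates(1, 'template%d.pdb' % i)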

Best,

Joe

On Mon, Feb 28, 2011 at 6:32 PM, FREITAG-POHL S.
stefanie.freitag-p...@durham.ac.uk wrote:
 [...]





Re: [ccp4bb] Density sharpening with Truncate?

2011-02-28 Thread Dirk Kostrewa

Dear Randy,

thanks for your comment - a good point about the likelihood target
expressed in terms of E-values! So, in principle, there shouldn't be any
difference in maximum-likelihood refinement between using sharpened data
or not. However, out of curiosity, in one case at 4.3 A resolution and a
sharpening B-factor of ~100 A**2, I compared ML refinement against the
sharpened data with ML refinement against the original data followed by
map sharpening: the R-factors were almost identical, and so were the
electron density maps in most places. But in some places the maps were
slightly different, with slightly less (!) model bias and slightly clearer
density for the refinement against sharpened data. Those judgements were
very subjective, though, and since a true structure at really high
resolution is not available, I could never quantify this. Either this was
only anecdotal evidence, or there is still room for improvement in
existing ML refinement programs.


Best regards,

Dirk.

On 28.02.11 11:40, Randy Read wrote:

[...]

[ccp4bb] Postdoc position available, University of Bristol, UK

2011-02-28 Thread Paul Race
Dear all,

I have a 3 yr postdoc position available in my lab (Biochemistry, 
Bristol), starting early/mid April. Please see below for further particulars. 
Informal enquiries are welcome, though full applications MUST be made via the 
University of Bristol website 
(http://www.bris.ac.uk/boris/jobs/feeds/ads?ID=93852).

   Cheers,
 Paul


Research Assistant (ref. 16077)
School of Biochemistry
Contract: Fixed Term Contract (36 months)

Salary: £29,972 - £33,734

Grade: Level a in Pathway 2

Closing date for applications: 9:00 am, 22 Mar 2011

Anticipated interview date: 01 Apr 2011

Description
Working in the group of Dr Paul Race, you will undertake a BBSRC funded 
research project investigating the structural basis of natural product 
biosynthesis in terrestrial bacteria. You will have (or will shortly obtain) a PhD in 
chemistry, biochemistry or a related discipline, and a proven track record in 
recombinant protein expression, purification and crystallisation, allied to 
significant experience in the determination of protein structures using 
macromolecular X-ray crystallography. Previous experience in electron and/or 
cryo-electron microscopy and a background in structural enzymology would be 
considered advantageous. 

You should be an enthusiastic and innovative scientist, a good team worker and 
an excellent communicator. The position is full-time, and funding is
currently available for three years.

If successful, you may be appointed either on a fixed term or a permanent 
contract depending on the extent of your previous relevant research experience. 
Further information can be found at www.bristol.ac.uk/personnel/ftc/


Dr Paul Race
Royal Society URF
School of Biochemistry
University of Bristol
BS8 1TD

paul.r...@bristol.ac.uk
Tel. +44 (0)117 331 2150
Fax. +44 (0)117 331 2168

http://www.bris.ac.uk/biochemistry/research/pr.html



[ccp4bb] AW: [ccp4bb] coot

2011-02-28 Thread Stefan Gerhardt
Hi Zheng

I think it's much easier to go this way:
Coot -> Extensions -> NCS -> Copy NCS Chain (or Copy NCS Residue Range)

Cheers
Stefan

-Original Message-
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On behalf of Zheng
Zhou
Sent: Monday, 28 February 2011 12:13
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] coot

[...]





Re: [ccp4bb] Teaching powerpoints

2011-02-28 Thread George M. Sheldrick
I have brought my collection of teaching powerpoints etc. up to date. They
are available free for educational purposes. Please send me an email if
you wish to receive the password for accessing them. If you previously
obtained this password from me it should still be valid.

George



Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582


Re: [ccp4bb] CCP4 for iphones

2011-02-28 Thread Ian Stokes-Rees
In a sentence: primarily due to cost and power constraints, mobile
devices don't (currently) have the horsepower to do any serious
*generic* number crunching, as would be required for anything of
interest to this community.

On the topic of using otherwise-idle compute time, our group has a
publicly available service for doing molecular replacement which
accesses a federation of computing centers across the US (through Open
Science Grid):

https://portal.nebiogrid.org/secure/apps/wsmr/

We regularly secure 50,000-150,000 hours per day of computing time from
OSG.  We're in the process of improving this and adding in additional
services.  Watch this space.  For those with more of an interest on this
topic, you can read on below.

Regards,

Ian



This thread raises some interesting questions, but it indicates a lack of
understanding of the difference between what a mobile device like an
iPhone, iPad, or Android can do and what a rack-mounted server, desktop
computer, or even laptop can do.  The number crunching that mobile devices
are capable of is for specific sorts of data, like audio and video codecs,
and is offloaded to specialized hardware that can't (currently) be reused
for other applications (like protein structure studies).  GPUs are showing
how this can change, but I wouldn't hold your breath.  I think power and
battery life will continue to be challenges for mobile devices for a long
time, so even if generic computing ability catches up with conventional
desktop/server capabilities, few people will want their batteries drained
by their device running continuously doing an MD simulation or a structure
refinement.

On 2/25/11 5:01 PM, Xiaoguang Xue wrote:
 Well, maybe building a distributed computing network (like Folding@home)
 from iPhones would be an improvement on clusters. Let's think about a
 phenomenon: the most common functions of our iPhones are calling,
 playing music, and maybe gaming, so most of the time the phone is
 idle. Why don't we try to use this idle computing time to help us
 do some more important and interesting things, like determining
 protein structures?

US-based non-commercial researchers can access Open Science Grid
(http://www.opensciencegrid.org/), which consists of a federation of
about 80,000 compute cores, by registering for a certificate and joining
(or forming) a Virtual Organization.  We host a Virtual Organization in
OSG called SBGrid which is open to all SBGrid consortium members
(http://sbgrid.org/).  We regularly get 2000-4000 compute cores from OSG
for extended periods (12-96 hours), so it is a very powerful resource.

Another alternative for structural biologists who could benefit from
1000s of compute cores is to get an allocation at a national
supercomputing center.  In the US, NERSC or TeraGrid are good routes for
this, and many options exist.  In Europe EGI and DEISA provide a similar
one stop shop for federated grid computing and supercomputing center
access.

http://www.nersc.gov/
https://www.teragrid.org/
http://www.egi.eu/
http://www.deisa.eu/

Finally, you can benefit from the millions of desktop computers out
there with super-powerful compute cores and GPUs that spend most of the
time (often 90%) completely idle, using screen-saver computing.  Here
there is really only one option, which is BOINC, developed by the group
that created SETI@home.  Rosetta is (sort of) available this way through
Rosetta@home, developed by the Baker lab.

http://boinc.berkeley.edu/
http://boinc.bakerlab.org/

 I also noticed that there is some progress in grid computing on iPhone
 and PS3. So I think it's possible to apply this technique to
 structural biology.
 http://www.sciencedaily.com/releases/2010/04/100413072040.htm

I think adding iPhone to the title of that article was just to attract
readers.  They are only using the standard web-browsing features
available on pretty much any smart phone or mobile device to view
web-portal views of computational infrastructure.  All the actual
computing was done on PS3s (and only 16 of them).  In other words, if
you consider browsing to EBI or RCSB to access some sequence alignment
program or view some protein structures to be grid computing, then you
can say "I've used an iPhone for grid computing".  Most people, however,
would question the accuracy of this association.


Re: [ccp4bb] CCP4 for iphones

2011-02-28 Thread Ian Stokes-Rees

On 2/25/11 5:41 PM, Nat Echols wrote:
 On Fri, Feb 25, 2011 at 2:10 PM, Sean Seaver s...@p212121.com wrote:

  I've been curious whether there has been discussion about moving data
  processing and refinement to a software-as-a-service (SaaS) deployment.
  If programs were web-accessible then it might save researchers time and
  trouble (maintaining and installing software). In turn, one could then
  process data via their iPhone.

  The computational demand would be enormous, and I personally have a
  hard time even doing a back-of-the-envelope calculation. The demand
  could be offset, for example by limiting jobs or the number of users.
  It will be interesting to see how mobile plays a role in crystallography.

 SBGrid has done something like this for massively parallel MR searches:

 https://portal.nebiogrid.org/secure/apps/wsmr/

 But that's a massively parallel and highly distributed calculation, which
 isn't what crystallographers do most of the time. Nor do they need to be
 particularly mobile in an era of remote synchrotron data collection.


Nat, thanks for commenting on this. As the person who developed it, I'm
glad someone has noticed the connection between the web-based application
(well, really just an application wrapper, since it uses CCP4 software
underneath) and what it is actually doing behind the scenes. It seems to
us (within SBGrid) that there are quite a few applications that can
benefit from access to large-scale computational infrastructure. Sometimes
having that resource available will allow people to ask new questions or
pose old questions in a new way. We're always happy to talk to people who
have ideas for new computational workflows or applications that can
benefit from tens of thousands of compute cores or that process TB of
data. And of course the underlying resources are available for others to
access themselves (see another post I made on this same thread about an
hour ago).

  
 I have a lot of other objections to the idea of doing everything as a
 webapp, but that's a separate rant. I do, however, like the idea of using
 multi-touch interfaces for model-building, but you need something at
 least the size of an iPad for that to be more productive than using a
 traditional computer with a mouse.


I agree that not everything should be done as a web app. When
high-functionality UI features are required, developing these with CSS,
jQuery, AJAX, HTML5, Java, etc. is super time-consuming compared with
conventional integrated UI toolkits (Tcl/Tk, Qt, Cocoa, .NET, etc.).
Similarly, when significant "real-time" data processing is required, or if
multiple applications are interacting with the same data, then the UI
(graphical or otherwise) needs to be "close" to the user data, and not
stuck messing around with web browsers (which can't really be scripted)
and web forms.

I got a 21" HP multi-touch screen last year to explore improved
touch-based interfaces for structural biology applications, however
it doesn't work (properly) under OS X, and I'm not inclined to shift
to a Windows based environment to develop for it. Hopefully some
standard USB interfaces/drivers/libraries (events) will appear soon
so the iPad and other tablets aren't the exclusive domain for
touch-based applications.

Ian
  



[ccp4bb] Calculating Difference Maps Between an RCSB data set and an mtz (Different Ligand)

2011-02-28 Thread Scott Pegan
I am trying to calculate a difference map between a dataset downloaded from
the RCSB and one I have.  The following applies:

Objective: find the difference between two bound ligands of the same
structure in the same space group.

My workflow so far has been:

1) Convert mmCIF to mtz (RCSB data set)
2) Use CAD to combine them
3) Use FFT to generate the diff map

If I remember correctly, I think I am missing a scaling step somewhere.
Any thoughts?

Scott



-- 
Scott D. Pegan, Ph.D.
Assistant Professor
Chemistry  Biochemistry
University of Denver
Office: 303 871 2533
Fax: 303 871 2254


Re: [ccp4bb] Calculating Difference Maps Between an RCSB data set and an mtz (Different Ligand)

2011-02-28 Thread vincent Chaptal
I just looked up a previous thread by Dale Tronrud that explains this.
Here it is:


Re: [ccp4bb] Fo-Fo Difference Map
Dale Tronrud
Mon, 03 May 2010 16:19:46 -0700
   I've struggled with getting CCP4 to calculate Fo-Fo maps, since I
usually use other software.  The tricks are that the data sets have to
be scaled to each other in reciprocal space, and the maps calculated
with the same cell constants (which will be a lie for at least one of
them).  The procedure I used the last time I did this was:

1) Create a master mtz file with F-holo, F-apo, Fc-apo, Phic-apo.
   Use Reflection Data Utilities, Merge Mtz Files (Cad).  Don't
   include the H, K, and L columns explicitly, despite the default.

2) Scale F-holo and F-apo.  Use Experimental Phasing, Data
   Preparation, Scale and Analyse Data Sets, Scale refinement using
   Scaleit.  Don't include anomalous differences unless your interest
   is in changing anomalous scatterers.  My notes indicate that Fhscal
   works better but does not have anisotropic scaling.  If your two
   data sets do not differ anisotropically, try Fhscal.

3) Enter Coot.
   a) Load apo coordinates.
   b) Open mtz, calculate map F-holo, phic-apo.  This is done with
      File -> Open Mtz, mmCIF, fcf or phs...
   c) Open mtz, calculate map F-apo, phic-apo.
   d) Calculate difference map.  Extensions -> Maps... -> Make a
      Difference Map...
   e) Find difference map peaks.

The greater the difference in cell constants, the greater the noise in
the map.  I think the high-resolution cutoff for the maps should be
about 2*A*delta/(A + delta), where A is the cell edge with the largest
change and delta is the amount of change (in Angstrom).
Basically, a 1 A change for a 100 A edge would require a 2 A resolution
limit.  A 5 A change would imply a 10 A cutoff and a very boring map.  I
would appreciate feedback on this procedure, if you find it hard to
understand or it doesn't work.  Certainly the Phenix solution looks
simpler.  Dale Tronrud
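
Dale's rule of thumb is easy to wire into a small helper (a sketch of the
formula exactly as stated above, nothing more):

def fofo_resolution_cutoff(cell_edge, delta):
    # Suggested high-resolution cutoff (A) for an Fo-Fo map between cells
    # whose largest-changing edge is cell_edge, changing by delta (both in A).
    return 2.0 * cell_edge * delta / (cell_edge + delta)

print(fofo_resolution_cutoff(100.0, 1.0))   # ~2.0 A, as in Dale's example
print(fofo_resolution_cutoff(100.0, 5.0))   # ~9.5 A, a very boring map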



Replacing the Coot step by FFT works too.
vincent


On 2/28/11 5:26 PM, Scott Pegan wrote:
[...]


--

Vincent Chaptal

Institut de Biologie et Chimie des Protéines
Drug Resistance modulation and mechanism
7 passage du Vercors
69007 LYON
FRANCE
+33 4 37 65 29 16



Re: [ccp4bb] crystallizing a complex that's sensitive to ionic strength

2011-02-28 Thread Kris Tesh
Have you tried adding water to your reservoir and allowing it to vapor diffuse 
into the drop?
Kris
 Kris F. Tesh, Ph. D.
Department of Biology and Biochemistry
University of Houston 



----- Original Message -----
From: Tim Gruene t...@shelx.uni-ac.gwdg.de
To: CCP4BB@JISCMAIL.AC.UK
Sent: Mon, February 28, 2011 3:48:25 AM
Subject: Re: [ccp4bb] crystallizing a complex that's sensitive to ionic strength

[...]


Re: [ccp4bb] Calculating Difference Maps Between an RCSB data set and an mtz (Different Ligand)

2011-02-28 Thread Paul Emsley



Replacing the Coot step by FFT works too.


An additional benefit of the Coot approach is that you can use 
LSQ-matched maps and maps on different grids and space groups. (I was 
under the impression that that was not quite so easy with FFT.)


Also worth noting, Coot does not (yet) do auto-scaling.

Paul.


Re: [ccp4bb] Calculating Difference Maps Between an RCSB data set and an mtz (Different Ligand)

2011-02-28 Thread Eleanor Dodson

Yes - you are.

There are some extra steps.

Download pdbs and mtz files
csymmatch -pdbin-ref 1.pdb -pdbin 2.pdb -origin-hand -pdbout 2-to-1.pdb

That checks they are on same origin and symmetry equivalent.

refmac for 1.pdb
refmac for 2-to-1.pdb

cad to merge the two refmac outputs.
You will have to rename the columns you want.

Then each Fobs will be scaled to the matching FC and that may be good 
enough.


But you may need to worry about different B values.

You can use SCALEIT to match everything to one of the FPs.

Then fft
Eleanor
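
The essence of the SCALEIT step, putting one set of Fs on the scale of the
other while allowing for a relative B factor, can be sketched in a few lines
(a toy log-linear fit, not what SCALEIT actually implements; the arrays are
assumed to be matched on common hkl, with positive amplitudes):

import numpy as np

def relative_scale(f_ref, f_work, d):
    # Fit k and B such that k * exp(-B/(4 d^2)) * f_work ~ f_ref,
    # via a straight-line fit of log(f_ref/f_work) against 1/(4 d^2).
    x = 1.0 / (4.0 * d**2)
    y = np.log(f_ref / f_work)            # = log(k) - B * x
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(intercept), -slope      # scale k, relative B

Multiply f_work by k * exp(-b_rel/(4 d^2)), and the Fo-Fo coefficients are
then just the scaled difference, combined with calculated phases.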




(I merge the data and calculate my own maps.)

On 02/28/2011 04:26 PM, Scott Pegan wrote:

[...]





Re: [ccp4bb] Calculating Difference Maps Between an RCSB data set and an mtz (Different Ligand)

2011-02-28 Thread Ian Tickle
 2) Scale F-holo and F-apo.  Use Experimental Phasing, Data Preparation,
 Scale and Analyse Data Sets, Scale refinement using Scaleit.  Don't
 include anomalous differences unless your interest is in changing
 anomalous scatterers.  My notes indicate that Fhscal works better but
 does not have anisotropic scaling.  If your two data sets do not differ
 anisotropically, try Fhscal.

This is not a problem.  First scale anisotropically with your
favourite program.  Then the Kraut scaling correction using FHSCAL is
a small correction on top of this, so just rescale the
anisotropically-scaled output using FHSCAL.  I didn't include an
anisotropic scaling option in FHSCAL for the simple reason that this
option was already available in other programs.

 The greater the difference in cell constants, the greater the noise in the
 map.  I think the high-resolution cutoff for the maps should be about
 2*A*delta/(A + delta), where A is the cell edge with the largest change,
 and delta is the amount of change (in Angstrom).
 Basically, a 1 A change for a 100 A edge would require a 2 A resolution
 limit.  A 5 A change would imply a 10 A cutoff and a very boring map.  I
 would appreciate feedback on this procedure, if you find it hard to
 understand or it doesn't work.  Certainly the Phenix solution looks
 simpler.  Dale

See this thread:

http://www.mail-archive.com/ccp4bb@jiscmail.ac.uk/msg15533.html

-- Ian


[ccp4bb] stuck with COOT installation in openSUSE 11.3

2011-02-28 Thread Hena Dutta
Hello,

I could not open the COOT GUI after installing either from
'coot-0.6.1-binary-Linux-x86_64-centos-5-gtk2.tar.gz' or from
'coot-0.6.2-pre-1-revision-3205-binary-Linux-x86_64-centos-5-gtk2.tar.gz'

I used the following commands:

1. from /usr/local/src
sudo tar xvzf *.gz as suggested in

http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/COOT#Installing_Coot_on_Linux

2. I created the following link by typing
sudo ln -s /usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/coot
/usr/local/bin/coot

After I ran the command 'coot' I got the following error:

hena@dl403a-2:~ coot
COOT_PREFIX is /usr/local
/usr/local/bin/coot-real
/usr/local/bin/coot: line 247: /usr/local/bin/coot-real: No such file or
directory
/usr/local/bin/coot: line 253: /usr/local/bin/guile: No such file or
directory
hena@dl403a-2:~

Then I linked the whole 'bin' directory as follows:
sudo ln -s /usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/
/usr/local/bin/
 and I got the following error after running 'coot':

hena@dl403a-2:~ coot
COOT_PREFIX is /usr/local/src/coot-Linux-x86_64-centos-5-gtk2
/usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/coot-real
/usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/coot-real: error while
loading shared libraries: libpng12.so.0: cannot open shared object file: No
such file or directory
coot-exe: /usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/coot-real
coot-version:
/usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/coot-real
/usr/local/src/coot-Linux-x86_64-centos-5-gtk2/bin/coot-real: error while
loading shared libraries: libpng12.so.0: cannot open shared object file: No
such file or directory
platform:
/bin/uname
core: #f
No core file found.  No debugging
   This is not helpful.
   Please turn on core dumps before sending a crash report
hena@dl403a-2:~
I checked that I have the 'libpng12.so.0' file in several 'lib' folders.

I successfully installed other programs like CCP4, CNS and PHENIX in my
openSUSE 11.3

Can someone advise what I am doing wrong?
Many thanks...
hena


Re: [ccp4bb] stuck with COOT installation in openSUSE 11.3

2011-02-28 Thread Tim Gruene
Hello Hena,

maybe the libpng12.so.0 libraries on your system happen to all be 32-bit
versions. You installed the 64-bit version of coot (because of the x86_64 in
the name of the tar archive), therefore you also require a 64-bit version of
libpng12.so.0. Do you have such a file in /usr/lib, and if so, is it really a
file, or maybe only a link to a file which once existed on your system?
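
One quick way to check both things at once (a small Python sketch; the path
is a guess, since openSUSE keeps 64-bit libraries in /usr/lib64, so adjust
as needed):

import os

path = "/usr/lib64/libpng12.so.0"     # adjust for your distribution
real = os.path.realpath(path)         # resolves symlinks; a dangling link
print(real, os.path.exists(real))     # ...will show False here
with open(real, "rb") as fh:
    header = fh.read(5)
# ELF magic is b"\x7fELF"; byte 4 (EI_CLASS) is 1 for 32-bit, 2 for 64-bit
print({1: "32-bit", 2: "64-bit"}.get(header[4], "not an ELF file?"))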

Cheers, Tim


On Mon, Feb 28, 2011 at 01:39:37PM -0800, Hena Dutta wrote:
 [...]

-- 
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

phone: +49 (0)551 39 22149

GPG Key ID = A46BEE1A


