[ccp4bb] Refmac version

2012-03-19 Thread Kavyashree M
Dear users,

I was using Refmac 5.5.0102 (CCP4 6.1.2) for refining my structures and was about to do one more round of refinement with the final model, but unfortunately the system crashed and I had to install the new version of CCP4 (6.2.0), which comes with Refmac 5.6.0117. So my doubt here is: can I do the refinement of the final model with this version of Refmac, or do I need to complete it with the older version?

I tried putting the older version of Refmac in place of the new one, but the job failed while running (in the new interface) with the error message:

 ERROR: number of monomers > 3000 /lib. limit/
 Change parameter MAXMLIST in "lib_com.fh"

Do I need to use the old interface, or can I use the new interface to run the old Refmac binary? Kindly provide some suggestions.

Thanking you,
With regards,
Kavya
-- 
This message has been scanned for viruses and
dangerous content by
MailScanner, and is
believed to be clean.


Re: [ccp4bb] Refmac version

2012-03-19 Thread Tim Gruene

Dear Kavya,

I would trust the authors of such programs (ccp4-, phenix-,
shelx-collection et al.) to only release new versions of their programs
if they believe that version is an improvement over the previous version
;-) and generally use the latest version of their programs.

You can get the latest version of refmac5 from Garib Murshudov's web
site (http://www.ysbl.york.ac.uk/~garib/refmac/latest_refmac.html) and
copy the binary into the binary directory of your ccp4-6.2.0 installation.

When you run refmac, the header lists the program version so you can
check that your environment picked up the correct version.

Best wishes,
Tim


--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] Refmac version

2012-03-19 Thread Garib N Murshudov
Dear Kavya


In principle you should be able to use the newer version with an old PDB file, unless
you have PDB v2 naming for DNA/RNA etc.
The new version of CCP4 has more dictionary entries than before. The older Refmac
was compiled for a limit of 3000, which is why the old version does not work with
the new dictionary.


regards
Garib



Garib N Murshudov 
Structural Studies Division
MRC Laboratory of Molecular Biology
Hills Road 
Cambridge 
CB2 0QH UK
Email: ga...@mrc-lmb.cam.ac.uk 
Web http://www.mrc-lmb.cam.ac.uk





[ccp4bb] PhD Studentship at the University of Cambridge: Chemical Probes of Protein-Protein Interactions

2012-03-19 Thread Alessio Ciulli
A 3.5-year PhD studentship is available from October 2012 in the group led
by Dr Alessio Ciulli to design and develop novel small molecule chemical
probes that target protein interfaces that recognise post-translational
modifications of protein amino acids. This multi-disciplinary project will
combine molecular/structural biology and biophysical studies of
protein-protein complexes with small molecule drug design and organic
synthesis. For a recent example of our approach see Buckley et al. *J. Am.
Chem. Soc.*, *2012*, *134* (10), pp 4465–4468 (
http://pubs.acs.org/doi/abs/10.1021/ja209924v).

Applicants should have (or expect to obtain) at least the equivalent of a
UK II.1 honours degree (and preferably a Masters) in chemistry,
biochemistry, chemical biology, structural biology or other relevant
discipline. Applications from students with either a strong chemical or
biological science background are encouraged, where the applicant is
interested in learning the other discipline. The studentship will cover
tuition fees and a maintenance grant for EU nationals who satisfy the
eligibility requirements of the UK Research Councils. Owing to funding
restrictions, the studentship is not available to non-EU nationals.


Informal enquiries about the post can be sent to Dr Alessio Ciulli at

ac...@cam.ac.uk

Closing Date: 31 March 2012

Information on how to apply:
http://www.jobs.cam.ac.uk/job/-14772/
http://www.jobs.cam.ac.uk/job/-14591/


best wishes,

Alessio


[ccp4bb] Trying to cut the resolution of the datasets

2012-03-19 Thread Abd Ghani Abd Aziz
hello everyone,

I am new to this bulletin board. I would like to know how to cut the resolution of 
my datasets, which were processed at Diamond Light Source. In my processed 
directory I found 3 files (free.mtz, scaled.sca and unmerged.sca). May I know 
which one can be used to cut my data, which diffracted to 1.5 Å? Cheers

regards
Abd Ghani
The University of Sheffield


Re: [ccp4bb] Trying to cut the resolution of the datasets

2012-03-19 Thread Eleanor Dodson

Qs

1) Why do you want to limit your data?

Most applications allow you to use only a specified subset - see the GUI tasks 
for resolution limits.


In general you may want to run molecular replacement or experimental phasing at a 
limited resolution, but for refinement or phase extension it is good to 
use the whole range.


Eleanor




--
Professor Eleanor Dodson
YSNL, Dept of Chemistry
University of York
Heslington YO10 5YW
tel: 00 44 1904 328259
Fax: 00 44 1904 328266


Re: [ccp4bb] Trying to cut the resolution of the datasets

2012-03-19 Thread Graeme Winter
... presuming of course the automated software got this resolution limit right.

If for whatever reason you would like to cut the limit, mtzutils will
do this nicely:

mtzutils hklin blah_free.mtz hklout blah_lower.mtz << eof
resolution 1.8
eof

(say) - I am sure there are other ways within the suite to do this.

Best wishes,

Graeme




Re: [ccp4bb] Trying to cut the resolution of the datasets

2012-03-19 Thread Tim Gruene

Dear Abd Ghani,

The method described by Graeme is how the resolution can be delimited
artificially.
If you want to get the best from your data, determine the resolution
limit of your data, e.g. with pointless (I/sigI > 2.0 is a good marker),
and reprocess the data to that limit. If you integrate the whole
detector area and the outer parts contain only noise, the noise has a
negative effect on the real data.

Tim
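Tim's rule of thumb (keep data while the mean I/σ(I) of a resolution shell stays above about 2) can be sketched numerically. This is a toy illustration with made-up data and a hypothetical helper, not what pointless actually computes:

```python
import numpy as np

def suggest_cutoff(d_spacing, i_over_sig, n_bins=10, threshold=2.0):
    """Return the d-spacing (A) of the high-resolution edge of the last
    shell whose mean I/sigma(I) is still above `threshold`."""
    order = np.argsort(d_spacing)[::-1]          # low -> high resolution
    d = np.asarray(d_spacing, float)[order]
    ios = np.asarray(i_over_sig, float)[order]
    cutoff = d[0]
    for shell in np.array_split(np.arange(d.size), n_bins):
        if ios[shell].mean() < threshold:
            break                                # shell is mostly noise
        cutoff = d[shell].min()                  # shell's inner edge
    return cutoff

# Synthetic falloff: I/sigma decays linearly with d and crosses 2 near 1.3 A.
d = np.linspace(3.0, 1.2, 1000)
ios = 10.0 * (d - 1.1)
print(round(suggest_cutoff(d, ios), 2))          # -> 1.38
```

In practice one would read the per-shell statistics straight from the pointless or scaling log rather than recompute them.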




Re: [ccp4bb] Trying to cut the resolution of the datasets

2012-03-19 Thread Graeme Winter
Hi Tim,

That's interesting. When I looked at this (and I would say I looked
reasonably carefully) I found it only made a difference in the scaling
- integrating across the whole area was fine. That said, I would expect
to see a difference, and likely an improvement, from scaling only the
data you want. It would also give you sensible merging statistics,
which you'll probably want when you come to publish or deposit.

Abd Ghani: if you'd like to rerun the processing at Diamond to a
chosen resolution limit I will be happy to send some instructions.
That's probably a good idea.

Best wishes,

Graeme



[ccp4bb] Refining Against Reflections?

2012-03-19 Thread Jacob Keller
Dear Crystallographers,

It occurred to me that most datasets, at least since the advent
of synchrotrons, probably have some degree of radiation damage, if not a
huge degree thereof. Therefore, I was thinking an exposure-dependent
parameter might be introduced into the atomic models, as an
exposure-dependent occupancy of sorts. However, this would require
refinement programs to use individual observations as data rather than
combined reflections, effectively integrating scaling into refinement. Is
there any talk of doing this? I think the hardware could reasonably handle
this now.

And, besides the question of radiation damage, isn't it perhaps reasonable
to integrate scaling into refinement now anyway, since the constraints of
hardware are so much lower?

Jacob

-- 
***
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: j-kell...@northwestern.edu
***


[ccp4bb] unstable refinement error in SHELXL

2012-03-19 Thread Lu Yu
Hi all,

I was using SHELXL for the refinement of a small peptide molecule (6-7
residues), and it was working for the first round, but then it gave me an
error message. I don't know what's going on - has anyone had the same
problem? Can you give me some suggestions?

For more information:
I was using Coot to read in the .fcf and .res files, and after model building
Coot can generate an .ins file. I used this .ins file and the original
.hkl for the next round of SHELXL, except that I added one line, ANIS, to the
.ins file:

DEFS 0.02 0.1 0.01 0.04
CGLS 10 -1
SHEL 10 0.1
FMAP 2
PLAN 200 2.3
LIST 6
WPDB 2
ANIS

I checked the working .ins and not-working (generated from Coot) .ins
files:
1) the working .ins (generated from the .res file at the very beginning) has:

WGHT    0.10
SWAT    1.352762  2.1931
FVAR    2.6206  0.5  0.5  0.5  0.5

2) the not-working .ins (generated from Coot) has:

WGHT    0.1
FVAR    1.0

The refinement output is as follows:

 Read instructions and data
 ** Warning: unusual EXTI or SWAT parameter **
 ** Warning: 8 bad CHIV instructions ignored **
 Data: 6342 unique,  0 suppressed   R(int) = 0.   R(sigma) = 0.0615
 Systematic absence violations: 0   Bad equivalents: 0
 wR2 = 0.4370 before cycle 1 for 6058 data and 1690 / 1690 parameters
 GooF = S =  4.279;  Restrained GooF =  5.862  for 2243 restraints
 Max. shift = 0.259 A for O_1131b   Max. dU = -0.409 for O_1131b   at 06:08:24
 wR2 = 0.7365 before cycle 2 for 6058 data and 1690 / 1690 parameters
 GooF = S = 12.229;  Restrained GooF = 12.221  for 2243 restraints
 Max. shift = 0.175 A for O_2131a   Max. dU = -0.157 for O_5017   at 06:08:25

 ** REFINEMENT UNSTABLE **



The other peptide dataset also has a similar problem:
I was using the Coot - SHELXL model-building / refinement cycle successfully
for the first 3 rounds, but then it gave me an error message:

 Read instructions and data
 ** Warning: unusual EXTI or SWAT parameter **
 ** Warning: no match for 1 atoms in CONN **

 ** CANNOT RESOLVE ISOR .. O LAST **

I checked the working and not-working .ins files (both generated from
Coot in this case):
1) the working .ins:

WGHT    0.10
SWAT    1.288932  3.0398
FVAR    2.6472  0.5  0.5

2) the not-working .ins:

WGHT    0.10
SWAT    1.344708  3.0452
FVAR    2.731  0.54231  0.5409

I was really confused, since I have been using Coot for model building with
other datasets, and the .ins file generated from Coot is usually fine for
SHELXL, but it doesn't work all the time; e.g. here it worked for the first
few rounds and then there was a problem.

Can you give me some suggestions about what I should do to get SHELXL
running again?

Thanks,
Lu


Re: [ccp4bb] unstable refinement error in SHELXL

2012-03-19 Thread Tim Gruene

Dear Lu Yu,

your wR2 in the first log extract seems very high (43%) - it might
simply be that your model is not yet good enough to refine the data
anisotropically.

Does it work if you refine the model isotropically? If so, improve the
model as much as possible before going anisotropic.

You might also want to include more reflections - with SHEL 10 0.1 you
leave out all data with d > 10 Å, which might be quite a few important
reflections.

It is difficult to go into more detail without knowing the exact content
of the output (.lst files), especially in the second case, where the
listing file tells you that something is wrong with the 'ISOR' command.

Regards,
Tim






Re: [ccp4bb] unstable refinement error in SHELXL

2012-03-19 Thread George Sheldrick

Dear Lu Yu,

SHELXL is usually very stable, so there must be an error in your .ins 
file, but it is difficult for us to guess what it is without seeing the 
full file. A common error that can cause such instability is a 
long-standing bug in Coot, which sets some occupancies in the .ins file 
to 1.0 (meaning that they can be refined freely starting at 1.0) rather 
than the usual 11.0 (which means that they should be fixed at 1.0; you 
can add 10 to a parameter to fix it). Another possibility is that Coot 
has not understood a 'free variable' that has been used for e.g. 
occupancy refinement. The small-molecule people use other GUIs 
for SHELXL (shelXle, WinGX, Olex2, System-S, XSEED, Oscail, XSHELL etc.) 
that make far fewer mistakes. The .ins files written by Coot should 
always be checked carefully and, if necessary, edited before running SHELXL.


Best wishes, George
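The Coot occupancy bug described above (a free 1.0 written where a fixed 11.0 is meant) can be flagged with a small script. This is a rough sketch under the assumption that candidate atom lines follow the usual name / scattering-factor number / x / y / z / occupancy layout; real .ins files (free variables, AFIX, PART, RESI cards) still deserve the careful manual check George recommends:

```python
def fix_occupancies(ins_text):
    """Rewrite candidate SHELX atom lines whose occupancy is a free 1.0
    to the conventional fixed 11.0 (adding 10 fixes the parameter)."""
    out = []
    for line in ins_text.splitlines():
        parts = line.split()
        nums = None
        # Heuristic for an atom line: name, integer sfac number, then
        # at least four numbers (x, y, z, occupancy).
        if len(parts) >= 6 and parts[1].isdigit():
            try:
                nums = [float(p) for p in parts[2:6]]
            except ValueError:
                nums = None
        if nums is not None and abs(nums[3] - 1.0) < 1e-6:
            parts[5] = "11.00000"
            line = "  ".join(parts)
        out.append(line)
    return "\n".join(out)

example = "FVAR  1.0\nCA1  1  0.1000  0.2000  0.3000  1.00000  0.04"
print(fix_occupancies(example))
```

Instruction cards such as FVAR or WGHT fail the heuristic and are passed through untouched; only atom-like lines with occupancy exactly 1.0 are rewritten.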




--
Prof. George M. Sheldrick FRS
Dept. Structural Chemistry,
University of Goettingen,
Tammannstr. 4,
D37077 Goettingen, Germany
Tel. +49-551-39-3021 or -3068
Fax. +49-551-39-22582




Re: [ccp4bb] Refining Against Reflections?

2012-03-19 Thread Bernhard Rupp (Hofkristallrat a.D.)
As you observe, radiation damage is local, but the effect is - to a different
extent - on all Fs, i.e. global (all atoms and their damage contribute to
each hkl).

So one would need additional local parameters (reducing N/P) if you want to
address it as such; your use of occupancy is an example. Even given a
reflection-specific decay, a realistic underlying atomic model would somehow
be desirable, and just changing occ might not be ideal. So is the question
then: 'Could a reflection-specific, time-dependent decay factor translate
into any useful atom-specific model parameter?'

BR




Re: [ccp4bb] Refining Against Reflections?

2012-03-19 Thread Jacob Keller
I was actually thinking the dose-dependent occupancy would really be a tau
in an exponential decay function for each atom, and the taus could be fitted
by how well they account for the changes in intensities (these should not
always be decreases, which is the problem with correcting radiation damage
at the scaling stage without iterating with models/refinement). I guess
accurate typical values would be needed to start with, similar to the
routinely-used geometry parameters. Actually, perhaps it would be better to
assume book values initially, then fit the dose rate, since this is probably
not known so accurately, and then refine the individual taus, especially for
heavy atoms.

This of course would also have great implications for the ability to phase
using radiation damage to heavy atoms (RIP) - there would have to be
something like a Patterson map mixed somehow with the exponentials, which
would show the sites with the shortest half-lives.

JPK
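The per-atom decay sketched above - occupancy falling as exp(-D/tau) with accumulated dose D - is easy to write down as a toy model. The tau values below are invented for illustration, not measured constants:

```python
import numpy as np

def occupancies(dose, tau):
    """Occupancy of each atom after dose D: occ_i = exp(-D / tau_i)."""
    return np.exp(-dose / np.asarray(tau, float))

# A damage-prone heavy atom (short tau) fades faster than a light atom
# (long tau) as dose accumulates.
tau = np.array([5.0, 50.0])
for dose in (0.0, 5.0, 10.0):
    print(dose, occupancies(dose, tau).round(3))
```

Fitting such taus against per-image intensities is the hard part Jacob raises, since it couples scaling and refinement.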




[ccp4bb] Announcing a Web Server for the Grade ligand restraints generator.

2012-03-19 Thread Gerard Bricogne
Dear all,

   The generation of reliable restraints for novel small-molecule
ligands in macromolecular complexes is of great importance for both ligand
placement into density maps and subsequent refinement. This has led us to
develop Grade, a ligand restraint generator whose main source of restraint
information is the Cambridge Structural Database (CSD) of small-molecule
crystal structures, queried using the MOGUL program developed by the CCDC.
Where small-molecule information is lacking, Grade uses quantum chemical
procedures to obtain the restraint values.

   Grade was released to academic users as part of the BUSTER package in
July 2011 and has proved popular. However, a problem for numerous academic
users has been that, in order to get the best restraints from Grade, a CSD
system licence is necessary to make use of MOGUL. Although many institutions
already have CSD site licences, and otherwise licences are available at a
reasonable cost, this has prevented the use of Grade by small groups and
occasional users.

   To provide easy access to Grade, the CCDC has kindly agreed that we
can provide a public Web server that includes the use of MOGUL in its
invocation of Grade. The first version of the server is now available, free
of charge, at

   http://grade.globalphasing.org

   We hope this server will prove useful to academic users. We will be
very grateful for any feedback you might be able to provide about this
server, so that we can keep improving it to meet the needs of the community.
Please send us your feedback and comments at 

 buster-deve...@globalphasing.com

rather than write to a specific developer.


   With best wishes,

   The Global Phasing developers: Gerard Bricogne, Claus Flensburg,
   Peter Keller, Wlodek Paciorek, Andrew Sharff, Oliver Smart,
   Clemens Vonrhein and Thomas Womack.


Re: [ccp4bb] unusual bond lengths in PRODRG cif file (Grade Web Server)

2012-03-19 Thread Oliver Smart

On Tue, 10 Jan 2012, Stephen Graham wrote:


On 10 January 2012 09:50, John Liebeschuetz j...@ccdc.cam.ac.uk wrote:
...available to anyone who has access to the Cambridge Structural
Database System

How many academic labs will bother / can afford to buy a CCSD license
just to check the geometry of small molecule ligands, especially when
they need to do so only once in a blue moon?

The ability for the PDB to check a ligand against the CCSD upon
deposition would be great.  The ability to generate the restraint
definition for free via the web before deposition is better: that's
why people use PRODRG!

Stephen



Stephen,

We had anticipated your request and got permission from the CCDC to 
provide a public Web server that would include the use of MOGUL in its 
invocation of Grade. The Grade Web Server has been publicly launched 
today, so for ligand restraint definitions that are (partly) based 
on CSD small-molecule structures, try using:


http://grade.globalphasing.org

Regards,

Oliver


| Dr Oliver Smart |
| Global Phasing Ltd., Cambridge UK   |
| http://www.globalphasing.com/people/osmart/ |


Re: [ccp4bb] unusual bond lengths in PRODRG cif file

2012-03-19 Thread Oliver Smart

On Mon, 9 Jan 2012, Soisson, Stephen M wrote:

I will second Ian's recommendation for GRADE from the Global Phasing 
group.  GRADE overcomes nearly all of the shortcomings we have 
encountered with other approaches for ligand dictionary generation.




Steve,

Thanks to you and to Ian Tickle for the positive comments about grade. We 
have just launched the Grade Web Server, so it should now be much easier 
for academic and occasional users to generate ligand restraints with it.


http://grade.globalphasing.org

Regards,

Oliver


| Dr Oliver Smart |
| Global Phasing Ltd., Cambridge UK   |
| http://www.globalphasing.com/people/osmart/ |


[ccp4bb] microseeding

2012-03-19 Thread Rajesh kumar

Dear All,
I have a few papers in hand which explain microseeding, matrix microseeding,
and cross seeding. I have also read a few earlier threads and some more
literature on Google. Using a Phoenix robot, I did matrix micro-seeding and
matrix cross seeding, and I have a few hits. In 96-well plates I used
100+100+50 nL and 200+200+50 nL (protein+screen+seed) in separate experiments.
I am having a hard time planning how to translate this 96-well sitting-drop
plate to a 24-well plate to refine the conditions and get better crystals;
only 1-2 hits are small crystals, and they are tiny.

I wonder, in the 24-well plate, if I should:
1) use, for example, 500+500+50 nL (I am sure I can't add less than 500 nL
precisely), or
2) do microseeding/streaking with a hair into a drop of 500+500 nL.

I would appreciate it if you could advise and share some practical ways to
further my experiment.

Thanks in advance,
Regards,
Rajesh

Re: [ccp4bb] microseeding

2012-03-19 Thread Ed Pozharski
Scaling up 100 nL drops is problematic.  My understanding is that it is
not only the different equilibration conditions, but primarily that the
amount of protein adsorbed on the surface is relatively higher for small
drops.  There were some empirical formulas for scaling up (i.e. how much
you need to increase the protein concentration), but I am afraid you
would have to re-screen anyway.



On Mon, 2012-03-19 at 15:31 -0500, Rajesh kumar wrote:
 Dear All,
 
 
 I have few papers in hand which  explain me about microseeding, matrix
 microseeding, and cross seeding.
 I have also read few earlier threads and some more literature in
 google.
 Using Phoenix robot, I did a matrix micro-seeding and matrix cross
 seeding. I have few hits with this.
 In 96 well I used 100+100+50 nL and 200+200+50 nl (protein+screen
 +seed) in separate expts.
 I have hard time to plan to translate this 96 sitting drop well plate
 to 24 well plate to refine the conditions to get better crystals. only
 1-2 hits are small crystals and they are tiny. 
 
 
  I wonder in 24 well plate, if I should do-
 1)  for Example 500+500+50nl (I am sure I cant add less that 500
 nL precisely)
 2) to a drop of 500+500 nL do microseeding/streaking with a hair
 
 
 I appreciate if you could advise and share some practical ways to
 further my experiment.
 
 
 Thanks in advance
 Regards,
 Rajesh

-- 
Edwin Pozharski, PhD, Assistant Professor
University of Maryland, Baltimore
--
When the Way is forgotten duty and justice appear;
Then knowledge and wisdom are born along with hypocrisy.
When harmonious relationships dissolve then respect and devotion arise;
When a nation falls to chaos then loyalty and patriotism are born.
--   / Lao Tse /


[ccp4bb] Position available

2012-03-19 Thread Cygler Mirek
Research Associate position

Protein Characterization and Crystallization Facility, University of 
Saskatchewan, Saskatoon

The newly established Protein Characterization and Crystallization Facility at 
the College of Medicine, University of Saskatchewan is seeking a candidate with 
expertise in various biophysical methods of protein characterization to aid 
users in designing their experiments, provide assistance in running the 
experiments, and help in interpreting the results. The instrumentation will 
include surface plasmon resonance, isothermal titration calorimetry, circular 
dichroism, dynamic light scattering, visible, UV and fluorescence spectroscopy, 
and FPLC
instruments for a variety of chromatographic applications. The emphasis of the 
research will be on protein-protein and protein-small molecule interactions. 
Medium-throughput experiments will utilize a liquid handling robot. The 
successful candidate will be expected to participate in collaborative research 
with the faculty of the College of Medicine and other researchers on the U of S 
campus and will have an opportunity to pursue his/her own research program. The 
laboratory will be located in the new wing of the Health Sciences Building on 
the U of S campus. Close interaction with other facilities, such as the Mass 
Spectrometry Facility and the Saskatchewan Structural Sciences Centre, is expected.

The candidate should have a PhD in biochemistry, enzymology or a related 
discipline and several years of experience in protein characterization. Good 
communication skills and a team spirit are essential.

The position is initially for a 3-year period with the possibility of a permanent 
appointment. The salary will be commensurate with experience.

The University of Saskatchewan places a strong emphasis on research and is 
home to 20,000 students. Saskatoon is a rapidly growing city in a province 
with a strong economy. The University campus is one of the most beautiful 
not only in Canada but in all of North America. Student life is vibrant, 
with a variety of activities on and off campus 
(http://explore.usask.ca/campuslife/). 

 

Contact information:

Mirek Cygler

Professor, Department of Biochemistry, University of Saskatchewan and

Adjunct Professor, Department of Biochemistry, McGill University,   


107 Wiggins Road, Saskatoon, SK S7N 5E5 Canada

E-mail : miroslaw.cyg...@usask.ca

Phone  : (306) 996-4361




Re: [ccp4bb] Trying to cut the resolution of the datasets

2012-03-19 Thread Pete Meyer

Hi Graeme,


That's interesting. When I looked at this (and I would say I looked
reasonably carefully) I found it only made a difference in the scaling
- integrating across the whole area was fine. However, I would expect
to see a difference, and likely an improvement, in scaling only the
data you want. It would also give you sensible merging statistics
which you'll probably want when you come to publish or deposit.


I've seen low-resolution datasets where the resolution cutoff had a 
fairly significant impact on integration - usually in cell or 
orientation refinement.  This seemed to make sense, as trying to use 
large numbers of spots that weren't really there (and so had essentially 
undetermined positions) tended to lead to instabilities in refinement.


What resolution ranges were you looking at?

Pete



Abd Ghani: if you'd like to rerun the processing at Diamond to a
chosen resolution limit I will be happy to send some instructions.
That's probably a good idea.

Best wishes,

Graeme

On 19 March 2012 14:31, Tim Gruene t...@shelx.uni-ac.gwdg.de wrote:


Dear Abd Ghani,

The method described by Graeme is how the resolution can be delimited
artificially.
If you want to get the best from your data, determine the resolution
limit of your data, e.g. with pointless (I/sigI > 2.0 is a good marker),
and reprocess the data to that limit. If you integrate the whole
detector area and the outer parts contain only noise, the noise has a
negative effect on the real data.
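A toy sketch of that shell-based rule of thumb (the shell edges and mean I/sigma(I) values below are invented for illustration, and this is not what pointless itself computes):

```python
# Hypothetical per-shell statistics: (high-resolution edge in Angstrom,
# mean I/sigma(I) in that shell). Values are invented for illustration.
shells = [
    (3.0, 25.1), (2.5, 14.2), (2.0, 7.8),
    (1.8, 3.5), (1.6, 1.9), (1.5, 1.1),
]

def suggest_cutoff(shells, threshold=2.0):
    """Return the highest-resolution shell edge whose mean I/sigI
    still meets the threshold (I/sigI > 2.0 is the marker suggested
    in the thread)."""
    limit = shells[0][0]
    for edge, i_over_sig in shells:
        if i_over_sig < threshold:
            break  # signal has dropped below the threshold; stop here
        limit = edge
    return limit

print(suggest_cutoff(shells))  # 1.8 A for the toy numbers above
```

The data would then be reprocessed with that limit rather than simply truncated after the fact.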

Tim

On 03/19/12 15:25, Graeme Winter wrote:

... presuming of course the automated software got this resolution limit right.

If for whatever reason you would like to cut the limit mtzutils will
do this nicely:

mtzutils hklin blah_free.mtz hklout blah_lower.mtz << eof
resolution 1.8
eof

(say) - I am sure there are other ways within the suite to do this.

Best wishes,

Graeme


On 19 March 2012 14:21, Eleanor Dodson eleanor.dod...@york.ac.uk wrote:

Qs

1) Why do you want to limit your data?

Most applications allow you to only use a specified sub-set - see GUI tasks
for resolution limits.

In general you may want to run molecular replacement or experimental phasing
at a limited resolution, but for refinement or phase extension it is good to
use the whole range.

Eleanor


On Mar 19 2012, Abd Ghani Abd Aziz wrote:


hello everyone,

I am new to this bulletin board. I would like to know how to cut the
resolution of my datasets that were processed at Diamond Light Source. In my
processed directory I found three files (free.mtz, scaled.sca and
unmerged.sca). May I know which one can be used to cut my data, which
diffracted to 1.5 A? Cheers

regards
Abd Ghani
The University of Sheffield


--
Professor Eleanor Dodson
YSNL, Dept of Chemistry
University of York
Heslington YO10 5YW
tel: 00 44 1904 328259
Fax: 00 44 1904 328266

--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] microseeding

2012-03-19 Thread Patrick Shaw Stewart
Rajesh

If you set up the volumes you suggest you will probably get precipitation.
This is counterintuitive until you realize that (as Ed says) you will be
losing a lot of protein from those small drops.  When you scale up, the
surface-area-to-volume ratio is lower, so a smaller proportion of the
protein is lost.  Therefore you go *up* on the phase diagram and get
precipitate or very small crystals.

Normally halving the amount of protein for the hits from 200 nl drops works
(suggesting that half the protein is lost from such small drops).  Try say
500+1000+500 (don't reduce the volume of seed stock because the solution
that you suspended the crystals in may be important).  Or dilute the
protein and use 1000+1000+500.

For the hits from the 450 nl drops you could reduce or dilute the protein
by, say, 25%.

Or make plenty of seed-stock and try seeding into a random screen again
with larger drops, say 1.5+1+0.5 ul

Those tiny crystals should be good for seeding, don't worry about that
(provided they are protein of course).

Streak seeding may work but bear in mind that roughly a third of the
precipitant comes from the seed stock in your 250 nl drops.

You can add the seed stock with a syringe and needle if you don't have a
suitable robot ;)

Experience and data-mining suggest that reducing the salt precipitant (in
high-salt drops) or salt additive (in PEG drops) by around 50% may be
helpful too when scaling up - I'm not sure why this works.
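Patrick's surface-to-volume argument can be made concrete with a small sketch. The hemispherical drop geometry and the assumption that protein loss scales with surface area are my own simplifications, not from the thread:

```python
import math

def hemisphere_sa_over_v(volume_nl):
    """Surface-area-to-volume ratio (1/mm) of a hemispherical drop.

    volume_nl: drop volume in nanolitres (1 nL = 1e-3 mm^3).
    Counts the curved surface plus the flat contact face.
    """
    v_mm3 = volume_nl * 1e-3
    r = (3.0 * v_mm3 / (2.0 * math.pi)) ** (1.0 / 3.0)
    area = 2.0 * math.pi * r**2 + math.pi * r**2  # curved + flat face
    return area / v_mm3

small = hemisphere_sa_over_v(250)    # e.g. a 100+100+50 nL screening drop
large = hemisphere_sa_over_v(2500)   # e.g. a 1000+1000+500 nL scale-up
# SA/V scales as V**(-1/3), so a 10x larger drop has ~2.15x less surface
# per unit volume -- proportionally less protein is lost to adsorption.
print(round(small / large, 2))
```

This is why the effective protein concentration rises when you scale up, pushing the drop higher on the phase diagram unless the protein is diluted.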

Good luck

Patrick







On 19 March 2012 20:31, Rajesh kumar ccp4...@hotmail.com wrote:

  Dear All,

 I have few papers in hand which  explain me about microseeding, matrix
 microseeding, and cross seeding.
 I have also read few earlier threads and some more literature in google.
 Using Phoenix robot, I did a matrix micro-seeding and matrix cross
 seeding. I have few hits with this.
 In 96 well I used 100+100+50 nL and 200+200+50 nl (protein+screen+seed) in
 separate expts.
 I have hard time to plan to translate this 96 sitting drop well plate to
 24 well plate to refine the conditions to get better crystals. only 1-2
 hits are small crystals and they are tiny.

  I wonder in 24 well plate, if I should do-
 1)  for Example 500+500+50nl (I am sure I cant add less that 500
 nL precisely)
 2) to a drop of 500+500 nL do microseeding/streaking with a hair

 I appreciate if you could advise and share some practical ways to further
 my experiment.

 Thanks in advance
 Regards,
 Rajesh




-- 
 patr...@douglas.co.ukDouglas Instruments Ltd.
 Douglas House, East Garston, Hungerford, Berkshire, RG17 7HD, UK
 Directors: Peter Baldock, Patrick Shaw Stewart

 http://www.douglas.co.uk
 Tel: +44 (0) 148-864-9090    US toll-free 1-877-225-2034
 Regd. England 2177994, VAT Reg. GB 480 7371 36


[ccp4bb] structure refinement and analysis work

2012-03-19 Thread Kevin Jin
Dear All,

Do you need some help with structure refinement or structure analysis?
I would be very happy to work for you while I am looking for my next
position.

To me, solving structures is a puzzle game, and fun.

Regards,

Kevin Jin