Re: [ccp4bb] project and literature organization software (laboratory information management software)

2014-05-06 Thread Dmitry Rodionov
Hi Tobias,

We use LabTrove:
http://www.labtrove.org/

Pros:
Relatively easy local install
Revision control
Ability to attach any file

Cons:
clumsy blog formatting
does not lay eggs.

Regards,
Dmitry


It is a bit clumsy, being based on a blog engine, and it does not lay eggs… but it
works fine as an electronic lab book.

On 2014-04-29, at 7:21 AM, Tobias Beck tobiasb...@gmail.com wrote:

 Dear all,
 
 I am looking for a software solution to organize many pieces of information
 
 1.) Results from (bio)chemical experiments, such as spectral data, pictures.
 
 2.) Project ideas, milestones, etc.
 
 3.) Literature, including tags, short comments, etc. 
 
 For example, for a certain project I would like to collect information about 
 experiments conducted, then link this to literature/literature experiments 
 and to project outlines. All this should be accessible for multiple users on 
 different OS. 
 
 I have briefly looked into PiMS (too much crystallography oriented), Contor 
 ELN (only on Safari on Mac?), Labguru (nice, but not too flexible and mostly 
 for biosciences) and Confluence (nice wiki, but so far no real literature 
 plugin). 
 
 I know that this sounds maybe a little bit like something called in German a 
 'eierlegende Wollmilchsau' 
 http://en.wiktionary.org/wiki/eierlegende_Wollmilchsau 
 
 But I would be happy to hear about what software people (and labs) have 
 tried, liked/disliked and 
 ideally the reasons. 
 
 (I am aware that there was a similar query 
 https://www.mail-archive.com/ccp4bb@jiscmail.ac.uk/msg24657.html, but this 
 was more than 2 years ago)
 
 Thanks a lot!
 
 Best wishes, Tobias.
 
 -- 
 ___
 
 Dr. Tobias Beck
 ETH Zurich
 Laboratory of Organic Chemistry
 Vladimir-Prelog-Weg 3, HCI F 322
 8093 Zurich, Switzerland
 phone: +41 44 632 68 65
 fax:   +41 44 632 14 86
 web:  http://www.protein.ethz.ch/people/tobias
 ___
 


Re: [ccp4bb] PyMol and Schrodinger

2014-04-23 Thread Dmitry Rodionov
Hi Mirek,

If you happen to be using OS X, Fink makes it very easy to install recent 
open-source pymol.

Best regards,
Dmitry

On 2014-04-23, at 11:43 AM, Cygler, Miroslaw miroslaw.cyg...@usask.ca wrote:

 Hi,
 I have inquired at Schrodinger about the licensing for PyMol. I was surprised 
 by their answer. The access to PyMol is only through a yearly licence. They 
 do not offer the option of purchasing the software and using the obtained 
 version without time limitation. This policy is very different from many 
 other software packages, which one can use without continuing licensing fees; 
 additional fees apply only when an upgrade is needed. At least I believe 
 that Office, EndNote, Photoshop and others are distributed this way.
 I also remember very vividly Warren's reason for developing PyMol, which was 
 free access to the source code. He later implemented fees for downloading 
 binary code specific to one's operating system, but there were no time 
 restrictions on its use. 
 As far as I recollect, Schrodinger took over PyMol distribution and 
 development promising to continue in the same spirit. 
 Please correct me if I am wrong. 
 I find the constant yearly licensing policy disturbing and will be looking 
 for alternatives. I would like to hear if you have had the same experience 
 and what you think about the Schrodinger policy.
 Best wishes,
 
 
 Mirek
 
 
 


Re: [ccp4bb] Protein concentration from chromatograms

2014-01-15 Thread Dmitry Rodionov
If using Unicorn,

1) Open your chromatogram in the Evaluation window.
2) Go to Operations -> Pool.
3) Choose which baseline estimation suits you and define the pools (numbered
rulers that appear under the chromatogram).
4) Type in the path length (2 or 10 mm for the UV-900; 2 mm for the UPC-900).
5) Type in the mass extinction coefficient (molar extinction coefficient / Mw
in daltons).
6) Get your answer from the table at the bottom of the screen.
The reading will make sense for pure protein. Also note that for the most 
accurate result you need to know the exact path length which varies a little 
bit from cell to cell (or so they say).
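The conversion Unicorn performs in steps 4-6 is just the Beer-Lambert law; here is a minimal Python sketch of the same arithmetic (the extinction coefficient and molecular weight in the example are hypothetical):

```python
def mg_per_ml(a280, eps_molar, mw_da, path_mm):
    """Protein concentration (mg/mL) from A280 via the Beer-Lambert law.

    a280      -- absorbance (AU; divide mAU readings from the monitor by 1000)
    eps_molar -- molar extinction coefficient (M^-1 cm^-1)
    mw_da     -- molecular weight (Da)
    path_mm   -- flow-cell path length in mm (e.g. 2 or 10)
    """
    eps_mass = eps_molar / mw_da   # mass extinction coefficient, mL mg^-1 cm^-1
    path_cm = path_mm / 10.0
    return a280 / (eps_mass * path_cm)

# Hypothetical example: eps = 50,000 M^-1 cm^-1, Mw = 25,000 Da,
# A280 = 0.40 measured in a 2 mm cell -> 1.0 mg/mL
print(mg_per_ml(0.40, 50_000, 25_000, 2.0))
```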

Regards,
Dmitry

On 2014-01-15, at 10:09 AM, Karel Chaz wrote:

 Dear all,
 
 A question for the biochemistry-inclined folks in the bb; how do I calculate 
 protein concentration of chromatography fractions, starting from Abs280 from 
 the UV monitor? I know I could figure it out myself if I really tried, but 
 why bother when I have access to so many brilliant minds
 
 
 Thanks to all,
 
 K





Re: [ccp4bb] AW: [ccp4bb] Dealing with O-linked mannose

2013-11-22 Thread Dmitry Rodionov
Thanks, Herman.

Zhijie, what is the difference between LINK and LINKR?

Maybe this will help somebody (though phenix-related):
phenix.link_edits parses the pdb for LINK records and outputs the pdb.edits 
file.
Alternatively, phenix.ligand_linking can guess what should be bonded in the 
absence of LINK records and produce apply_link.def
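For bookkeeping, a LINK record can also be written by hand. The helper below is a sketch: the fixed-column layout follows my reading of the PDB format v3.3 description, and the residue numbers are hypothetical, so compare the output against a PDB file that already contains LINK records before relying on it:

```python
def link_record(name1, res1, chain1, seq1, name2, res2, chain2, seq2, dist):
    """Format a LINK record (PDB format v3.3 columns; altLoc/iCode left blank).

    Atom names must be pre-padded to 4 characters in PDB style,
    e.g. " OG " and " C1 ".
    """
    return (
        "LINK" + " " * 8          # cols  1-12: record name + padding
        + f"{name1:<4}"           # cols 13-16: atom name 1
        + " "                     # col  17   : altLoc 1 (blank)
        + f"{res1:>3}"            # cols 18-20: residue name 1
        + " "                     # col  21
        + chain1                  # col  22   : chain ID 1
        + f"{seq1:>4}"            # cols 23-26: residue number 1
        + " " * 16                # cols 27-42: iCode 1 + padding
        + f"{name2:<4}"           # cols 43-46: atom name 2
        + " "                     # col  47   : altLoc 2 (blank)
        + f"{res2:>3}"            # cols 48-50: residue name 2
        + " "                     # col  51
        + chain2                  # col  52   : chain ID 2
        + f"{seq2:>4}"            # cols 53-56: residue number 2
        + " " * 3                 # cols 57-59: iCode 2 + padding
        + f"{'1555':>6}"          # cols 60-65: symmetry operator 1
        + " "                     # col  66
        + f"{'1555':>6}"          # cols 67-72: symmetry operator 2
        + " "                     # col  73
        + f"{dist:5.2f}"          # cols 74-78: bond length
    )

# Hypothetical residue numbers; 1.42 A is a typical C-O bond length.
print(link_record(" OG ", "SER", "A", 65, " C1 ", "MAN", "A", 401, 1.42))
```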

Best regards,
Dmitry

On 2013-11-22, at 3:18 AM, herman.schreu...@sanofi.com wrote:

 Using your favorite editor, you can copy the LINK record from the pdb file 
 generated by Coot and paste it into the pdb file produced by Phenix. You can 
 also make a script to do this. This is what I did during the time LINK 
 records were not properly handled by coot, refmac and buster. 
 
 Best regards,
 Herman
 
 
 
 -----Original Message-----
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Dmitry 
 Rodionov
 Sent: Thursday, 21 November 2013 22:09
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: Re: [ccp4bb] Dealing with O-linked mannose
 
 Thank you all for helpful suggestions.
 
 My question was how to properly connect a mannose to a serine and real-space 
 refine the result. My apologies for not being clear enough.
 
 Coot can't find MAN-SER or SER-MAN in its library (Coot 0.7.1, mon_lib 5.41).
 It does not automatically make the bond between MAN C1 and OG of SER either.
 
 Here is the way I finally made the connection, followed by refinement, in 
 Coot:
 
 1) Get Monomer... MAN
 2) Real-space refine MAN into a reasonable position
 3) Delete the hydrogens and the reducing hydroxyl (the bond is not detected)
 4) Extensions -> Modelling... -> Make Link (Click 2 Atoms), then click on C1 
 of MAN and OG of SER (a dashed bond appears)
 5) Sphere refine 
 
 A whole different issue touched on in the replies is how this model will be 
 handled by refinement programs.
 For the record, I am using Phenix, and it seems not to respect the described 
 link without massaging by means of either pdb.edits or apply_link.def.
 Also, phenix.refine does not produce LINK records in the output PDB, so step 
 4 might have to be repeated.
 
 Best regards,
   Dmitry





Re: [ccp4bb] Dealing with O-linked mannose

2013-11-21 Thread Dmitry Rodionov
Thank you all for helpful suggestions.

My question was how to properly connect a mannose to a serine and real-space 
refine the result. My apologies for not being clear enough.

Coot can't find MAN-SER or SER-MAN in its library (Coot 0.7.1, mon_lib 5.41).
It does not automatically make the bond between MAN C1 and OG of SER either.

Here is the way I finally made the connection, followed by refinement, in Coot:

1) Get Monomer... MAN
2) Real-space refine MAN into a reasonable position
3) Delete the hydrogens and the reducing hydroxyl (the bond is not detected)
4) Extensions -> Modelling... -> Make Link (Click 2 Atoms), then click on C1 of
MAN and OG of SER (a dashed bond appears)
5) Sphere refine 

A whole different issue touched on in the replies is how this model will be
handled by refinement programs.
For the record, I am using Phenix, and it seems not to respect the described
link without massaging by means of either pdb.edits or apply_link.def.
Also, phenix.refine does not produce LINK records in the output PDB, so step 4
might have to be repeated.

Best regards,
Dmitry






[ccp4bb] Dealing with O-linked mannose

2013-11-20 Thread Dmitry Rodionov
Good day!

I am refining what appears to be an O-mannosylated protein structure.

In my hands Coot (0.7.1) does not form the SER-MAN bond automatically.
I made a SER-MAN.cif with jLigand, which takes care of the glycosidic bond.  
However, now the peptide bonds are not made to this custom residue (same chain, 
consecutive numbering).

Am I doing something wrong? How can I fix this?

Many thanks,

Dmitry





[ccp4bb] Conflicting Qt on OS X 10.6

2013-10-28 Thread Dmitry Rodionov
Good day!

I'm having problems with qtrview (on OS X 10.6): it crashes or hangs after 
spitting out a screenful of messages like

objc[2975]: Class QCocoaMenu is implemented in both 
/Applications/ccp4-6.4.0/bin/../Frameworks/QtGui.framework/Versions/4/QtGui and 
/Library/Frameworks/QtGui.framework/Versions/4/QtGui. One of the two will be 
used. Which one is undefined.

Just like the message says, I have Qt 4 installed in the standard location. It
seems the global Qt is being used instead of the one supplied with CCP4 6.4,
since moving /Library/Frameworks/Qt* elsewhere fixes the problem.
Oddly enough, qtrview of CCP4 6.3 worked fine under the same conditions.

Is there a way to make qtrview use CCP4's Qt and ignore other versions?
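One possible workaround (a sketch, untested here; the framework path is taken from the error message above) is to point the dynamic linker at CCP4's bundled copies first:

```
# DYLD_FRAMEWORK_PATH is searched before the standard framework locations,
# so CCP4's bundled Qt should win over /Library/Frameworks.
export DYLD_FRAMEWORK_PATH=/Applications/ccp4-6.4.0/Frameworks
qtrview
```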

Thanks!

Best regards,
Dmitry





Re: [ccp4bb] Conflicting Qt on OS X 10.6

2013-10-28 Thread Dmitry Rodionov
Thanks for the suggestions!
I just opened a new terminal window, and it appears the problem had
disappeared (by itself?).
Sorry for the false alarm; I'll go see a mental health professional.

Best regards,
Dmitry

On 2013-10-28, at 12:49 PM, William Scott wrote:

 It looks like it should be doing the right thing, i.e.,
 
 
 zsh-% otool -L qtrview 
 qtrview:
   @executable_path/../Frameworks/QtWebKit.framework/Versions/4/QtWebKit 
 (compatibility version 4.9.0, current version 4.9.3)
   
 @executable_path/../Frameworks/QtXmlPatterns.framework/Versions/4/QtXmlPatterns
  (compatibility version 4.8.0, current version 4.8.4)
   @executable_path/../Frameworks/QtGui.framework/Versions/4/QtGui 
 (compatibility version 4.8.0, current version 4.8.4)
   @executable_path/../Frameworks/QtXml.framework/Versions/4/QtXml 
 (compatibility version 4.8.0, current version 4.8.4)
   @executable_path/../Frameworks/QtNetwork.framework/Versions/4/QtNetwork 
 (compatibility version 4.8.0, current version 4.8.4)
   @executable_path/../Frameworks/QtCore.framework/Versions/4/QtCore 
 (compatibility version 4.8.0, current version 4.8.4)
 
 If you have $DYLD_LIBRARY_PATH set, try unsetting it.
 
 
 
 On Oct 28, 2013, at 9:00 AM, Dmitry Rodionov d.rodio...@gmail.com wrote:
 
 Good day!
 
 I'm having problems with qtrview (on OS X 10.6): it crashes or hangs after 
 spitting out a screenful of messages like
 
 objc[2975]: Class QCocoaMenu is implemented in both 
 /Applications/ccp4-6.4.0/bin/../Frameworks/QtGui.framework/Versions/4/QtGui 
 and /Library/Frameworks/QtGui.framework/Versions/4/QtGui. One of the two 
 will be used. Which one is undefined.
 
 Just like the message says, I have Qt 4 installed in the standard location. 
 It seems the global Qt is being used instead of the one supplied with CCP4 
 6.4, since moving /Library/Frameworks/Qt* elsewhere fixes the 
 problem.
 Oddly enough, qtrview of CCP4 6.3 worked fine under the same conditions.
 
 Is there a way to make qtrview use CCP4's Qt and ignore other versions?
 
 Thanks!
 
 Best regards,
  Dmitry
 
 





Re: [ccp4bb] Advice on setting up / maintaining a Ubuntu cluster

2013-07-31 Thread Dmitry Rodionov
We don't have any performance or reliability issues with our cheapskate setup
either.
Make sure the network is wired with Cat5e or Cat6 cables, especially if
distances are 8 m or more.

Dmitry

On 2013-07-31, at 7:36 AM, Kay Diederichs wrote:

 I have a very different experience with NFS: we are using Gigabit Ethernet, 
 and a 64bit RHEL6 clone with ECC memory as a file server; it has RAID1 ext4 
 home directories and RAID6 ext4 for synchrotron data. We have had zero 
 performance or reliability problems with this in a computer lab with ~ 10 
 workstations, and I have seen 115 MB/sec file transfers via NFS, at peak 
 times. 
 Just make sure to export using the async option.
 
 HTH,
 
 Kay
 
 On Wed, 31 Jul 2013 09:21:48 +0900, Francois Berenger beren...@riken.jp 
 wrote:
 
 Be careful that running data intensive jobs over NFS
 is super slow (at least an order of magnitude compared
 to writing things on a local disk).
 Not only the computation is slow, but you may be slowing down
 all other users of the cluster too...
 
 F.





Re: [ccp4bb] Advice on setting up / maintaining a Ubuntu cluster

2013-07-29 Thread Dmitry Rodionov
Dear Sergei,

IMO, the easiest way to achieve your goals is good old NIS and NFS with a
centralized server on a wired gigabit network. You could go with LDAP instead
of NIS, but it is considerably more difficult to set up.
One computer would act as a server, containing the user database, home
directories and programs.
Hardware RAID is not worth it. You are better off getting a Linux-supported
SAS/SATA HBA (e.g. Dell SAS 6/iR) and making a software RAID 5 with mdadm out
of a bunch of inexpensive consumer-grade SATA disks. RAID 5 needs a minimum of
three drives. An external HDD enclosure might be necessary depending on the
server's chassis and the desired number of drives.
We built our server from an old P4 workstation with a couple of gigs of RAM (8
clients). Having two or more cores is a benefit.
If I am not mistaken, software RAID 5 is not bootable, so you would need an
extra drive (it can be very small) for the core part of the OS. 
Export /home and /usr/local with NFS, mount them from the client machines,
hook the clients up to NIS, and you are done. Some programs might not reside
in /usr/local, in which case you would have to export and mount more
directories.
The Ubuntu community has pretty good, easy-to-follow guides for NIS, NFS and
mdadm.
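As a concrete sketch of the server-side pieces (device names, network range and hostname are assumptions; adapt them to your setup):

```
# Software RAID 5 from four consumer SATA disks (wipes them!), then ext4:
#   mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
#   mkfs.ext4 /dev/md0

# Server /etc/exports -- async trades a little crash safety for throughput:
/home       192.168.1.0/24(rw,async,no_subtree_check)
/usr/local  192.168.1.0/24(rw,async,no_subtree_check)
# then re-export with: exportfs -ra

# Client /etc/fstab entries:
#   server:/home       /home       nfs  defaults  0 0
#   server:/usr/local  /usr/local  nfs  defaults  0 0
```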

Best regards,
Dmitry

On 2013-07-29, at 6:22 AM, Sergei Strelkov wrote:

 Dear all,
 
 In old times I, just like about any protein crystallographer,
 used to work on a cluster of SGI/IRIX workstations with complete NFS-based
 cross-mounting of hard disks.
 
 A typical operation included:
 1. A single home directory location for every user:
 if my home directory was on workstation X, I would by default use
 it after logging on any of the workstations in the cluster.
 2. A single location for all software for general use.
 (And, obviously, 3. The ability to log on any node from
 any terminal; today this is done via the 'ssh -X' command).
 
 I wondered if someone could give us advice on a painless
 setup enabling 1. and 2. for a small cluster of Ubuntu computers.
 We (will) have about five similar Dell computers in a local (192.168.*.*)
 network (wired/wireless). Any tips on the hardware (especially the
 LAN and network disks) are also welcome.
 
 Many thanks,
 Sergei
 
 -- 
 Prof. Sergei V. Strelkov
 Laboratory for Biocrystallography
 Dept of Pharmaceutical and Pharmacological Sciences, KU Leuven
 Herestraat 49 bus 822, 3000 Leuven, Belgium
 Work phone: +32 16 330845   Mobile: +32 486 294132
 Lab pages: http://pharm.kuleuven.be/anafar





Re: [ccp4bb] Off-topic: Fungal growth in robot trays

2013-03-13 Thread Dmitry Rodionov
8.   Empty the carboy and fill the supply lines and wash stations with 
20% ethanol at the end of the day. (Requires decontamination in this case)

Best regards,
Dmitry

On 2013-03-13, at 10:04 PM, Viswanathan Chandrasekaran v...@biochem.utah.edu 
wrote:

 Dear All:
 Thank you for your responses. Here is a summary of suggested fixes:
 1.   Cleaning the supply carboy and lines with bleach and flushing 
 thoroughly with DD water afterwards
 2.   Adding 0.02% sodium azide to the protein
 3.   Adding 0.02% azide to commercial screens
 4.   Adding 0.02% azide to the water used for washing
 5.   Using fresh screens and storing them at low temperatures (4 or 12 
 degree C)
 6.   Manually dispensing the reservoir solution using a multi-channel 
 pipette
 7.   Using a Mosquito robot (it uses fresh needles each time)
 Best,
 Vish
  
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of 
 Viswanathan Chandrasekaran
 Sent: Friday, March 08, 2013 4:24 PM
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: [ccp4bb] Off-topic: Fungal growth in robot trays
  
 Dear All:
 I would like some advice on getting rid of persistent fungal growth in 
 96-well sitting drop crystal plates that were set up using a Phoenix robot.
 24-well sitting drop trays prepared by hand don’t have this problem. Washing 
 the robot with 0.5% bleach followed by plenty of water had no effect. Is 
 adding sodium azide directly to commercial screen hotels (or the protein 
 sample) a good idea? If so, how much should I add? Other suggestions are 
 welcome.
 I will post a summary of all replies.
 Thank you.
 Best,
 Vish


Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Dmitry Rodionov
We have 10.5/10.6 servers and briefly tested 10.7 server.
Last time I tried, an Ubuntu 12.04 box would not authenticate users registered
on the OS X Open Directory server.
Before that, 10.04 clients would cause random user lockouts.
The NFS GUI is gone as of 10.7.

Regards,
Dmitry

On 2013-01-23, at 9:05 AM, Bosch, Juergen wrote:

 I assume none of you is running an actual OS X server? I mean the upgrade to 
 a full server version of the commonly distributed normal OS X releases?
 
 I have not done it yet, but I do think many of the issues mentioned regarding 
 NFS/NIS could be addressed there. Regarding the missing Mac Pro upgrades, I 
 expect to see new machines with Thunderbolt connectivity in the next 4 
 months. And I will buy my third Mac Pro then to run it as a true server.
 
 Jürgen 
 
 Sent from my iPad
 
 On Jan 23, 2013, at 5:21, Peter Keller pkel...@globalphasing.com wrote:
 
 On Wed, 2013-01-23 at 01:54 -0700, James Stroud wrote:
 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of
 mainframes. Distributed file systems and user information databases
 are designed for an environment of many workers and few machines, when
 the typical graphics workstation cost $50,000. These days, we argue
 whether to spend an extra $200 on a $500 computer. We have moved to a
 new paradigm: many workers with many more machines, with each machine
 having essentially mainframe levels of storage and computing power.
 
 Technically there is something in what you say as a pattern for
 day-to-day work (for some people, although not all), but I think that
 describing the debate in terms of modern vs. antiquated is missing the
 point completely. The real difference between local vs. centralised
 storage is to do with responsibility for the hardware and the data that
 it contains.
 
 Local workstation storage is OK for the following kinds of cases:
 
 (i) the data that are stored locally have no value, so it doesn't matter
 if they are lost (either through hardware failure, misbehaving software
 or accidental deletion).
 
 (ii) the user has the expertise and the time to set up and maintain a
 strategy for recovering data that are lost from local disks
 
 (iii) the institution that the user works for allows the user to include
 data on local workstation disks in the institution's regular backup
 operations
 
 When none of these apply, there is a real, contemporary case for using
 something like NFS, where the storage is centrally maintained and backed
 up. The cost of storage has fallen of course, but what that means is
 that the real questions now are about the value of the data. In some
 fields, you could store your entire career's data on a few USB memory
 sticks, but I doubt that many people would want to do that without
 having made other copies somewhere else, and the same applies to local
 workstation storage too :-).
 
 There are other considerations in favour of connecting a workstation to
 networked services: if you use more than one machine it can be an
 incredible pain to be constantly moving data around from one to the
 other, and to keep track of what the authoritative versions are. Having
 independent, local user id's and passwords on every workstation can also
 cause difficulties. I could go on
 
 In other words, instead of NFS, you should run git.
 
 This is simply not an option for many crystallographers, who do not have
 a background in software development or data management. Advocating and
 supporting git (or indeed any content/version management system) for
 those kind of users is a losing battle: they see it as an unnecessary
 complication to their daily work, and will avoid using it as far as they
 can.
 
 Regards,
 Peter.
 
 -- 
 Peter Keller Tel.: +44 (0)1223 353033
 Global Phasing Ltd., Fax.: +44 (0)1223 366889
 Sheraton House,
 Castle Park,
 Cambridge CB3 0AX
 United Kingdom


Re: [ccp4bb] Mac mini advice

2013-01-22 Thread Dmitry Rodionov
AFAIK there is no problem mixing and matching RAM with different timings: the
system will run at the speed of the slowest module.
I don't think anybody will notice the CAS-latency difference while Coot'ing
and Refmac'ing.

I don't think there is much sense in having more than 4 GB of RAM per physical
core on a Mac.
The majority of the Mac flock does not really care where their RAM modules
come from.
As for Mac Pros: they use ECC RAM with proprietary heat sensors, so that's a
completely different story. You can still use generic ECC RAM in a Mac Pro at
the cost of the fan being stuck in hurricane mode.

The bottleneck of pretty much any modern system is the HDD. Apple-branded HDDs
were known to have somewhat modified firmware, causing problems at times
(mostly with AppleRAID, if not using an Apple-branded HDD).
An end user most definitely will notice the difference between an SSD and an
HDD, which brings up TRIM support on OS X, which is limited to controllers
sold by Apple.

Upgradeability-wise Apple is not the way to go in any case. 

DISCLAIMER:  The rest may be much more inflammatory.

Personally, I am not convinced OS X and Apple are the way to go long term
(having been surrounded by Macs for the past 4-5 years).
I am not happy with the direction OS X is going: too much emphasis on eye
candy and not enough on the underlying technology.
ZFS (long ago), Xgrid and X11 have been ditched, which I find disturbing. I
don't see Apple investing in computers given the current revenue from that
sector.

Linux in a virtual machine of your choice might be a better bang for the buck. 
Or, Windows in a virtual machine on a Linux box for that matter.

Don't kick me,
DIR



On 2013-01-22, at 7:22 PM, Bryan Lepore bryanlep...@gmail.com wrote:

 On Tue, Jan 22, 2013 at 1:40 PM, Phil Jeffrey pjeff...@princeton.edu wrote:
 I don't think that anybody has shown a significant performance difference on 
 Apple memory vs a reasonable 3rd party supplier.  Apple may potentially have 
 better quality controls but places like Crucial essentially have lifetime 
 warranties on their memory.  I use Crucial at home and at work. [...]
 
 sure, I agree with all this
 
 the only other point I really wanted to make is to be cautious when 
 configuring a computer on the Apple website, where they might say for memory 
 DDR3 ECC SDRAM (checked this for a Mac Pro just now), but that is a 
 non-obvious way of, from what I can tell, selling only high-end memory when 
 e.g. a different CAS latency is available elsewhere. Again, it is not obvious 
 what their CL is (perhaps it is listed somewhere), and maybe other specs 
 apply.



Re: [ccp4bb] Convert cbf to png/tiff?

2013-01-10 Thread Dmitry Rodionov
I think ADXV will do the trick.

Regards,
Dmitry

On 2013-01-10, at 3:36 PM, Frank von Delft wrote:

 Hello all - anybody know an easy way to convert CBF images (Pilatus) into 
 something lossless like tiff or png?
 
 Ideally *easy* as in   r e a l l y   e a s y  and not requiring extensive 
 installation of dependencies and stuff.  Because then I might as well write 
 my own stuff using cbflib and PIL in python.
 
 Thanks!
 phx


Re: [ccp4bb] hkl2000 install

2012-11-13 Thread Dmitry Rodionov
I believe cr_info should go in /usr/local/hklint along with site files.
It does not have to be executable, but it must be readable by all (chmod a+r
cr_info).
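In command form (a sketch; /usr/local/hklint is our convention, and the output file of access_prod is assumed to be called info, as in Rui's report):

```
sudo mkdir -p /usr/local/hklint
sudo cp info /usr/local/hklint/cr_info    # must be named cr_info, no extension
sudo chmod a+r /usr/local/hklint/cr_info  # readable by every user
```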

Regards,
Dmitry Rodionov


On 2012-11-13, at 12:33 AM, 王瑞 wrote:

 Dear everyone:
 
 I have copied the returned cr_info.dat to /usr/local/lib and
 /usr/local/hklint, but when I type HKL2000 it still displays:
 
 dell@ubuntu:~$ HKL2000
 ERROR: Not a valid HKL-2000 license: Licence info file (cr_info) not found
 Error code: -1
 
 So could anyone help me?
 
 2012/11/9 王瑞 wangrui...@gmail.com:
 Dear everyone:
 
 I'm sorry for being a little off-topic! I want to install HKL2000 on
 Ubuntu 11.10 (32-bit), but it produces a file named info, not cr_info,
 after running the access_prod program. And when I put info in the
 /usr/local/lib directory and type HKL2000 in a terminal, it displays:
 root@ubuntu:/usr/local/bin# HKL2000
 ERROR: Not a valid HKL-2000 license: Licence info file (cr_info) not found
 Error code: -1
 
 So could anyone tell me how to do it ?
 
 Rui Wang


Re: [ccp4bb] hkl2000 install

2012-11-13 Thread Dmitry Rodionov
Rui,

I just noticed that in your email you wrote cr_info.dat.
That is the cause of your problem, if you actually added a .dat extension.

We have always kept ours in /usr/local/hklint, which is funny because it is
not one of the folders suggested by the HKL people.
Thierry and Felix are right about other locations.

Dmitry


On 2012-11-13, at 11:26 AM, Felix Frolow wrote:

 cr_info  can be in several places
 ~/ (user home directory)
 working directory 
 /usr/local/lib ( This is the place where I keep it as it is a consensus 
 location for cr_info)
 about /usr/local/hklint I am not sure, but if you say so, you probably know 
 :-)
 FF
 Dr Felix Frolow   
 Professor of Structural Biology and Biotechnology, Department of Molecular 
 Microbiology and Biotechnology
 Tel Aviv University 69978, Israel
 
 Acta Crystallographica F, co-editor
 
 e-mail: mbfro...@post.tau.ac.il
 Tel:  ++972-3640-8723
 Fax: ++972-3640-9407
 Cellular: 0547 459 608
 
 On Nov 13, 2012, at 18:12 , Dmitry Rodionov d.rodio...@gmail.com wrote:
 
 I believe cr_info should go in /usr/local/hklint along with site files.
 It does not have to be executable but must be readable by all. (chmod a+r 
 cr_info)
 
 Regards,
 Dmitry Rodionov
 
 
 On 2012-11-13, at 12:33 AM, 王瑞 wrote:
 
 Dear everyone:
 
 I have copied the returned cr_info.dat to /usr/local/lib and
 /usr/local/hklint, but when I type HKL2000 it still displays:
 
 dell@ubuntu:~$ HKL2000
 ERROR: Not a valid HKL-2000 license: Licence info file (cr_info) not found
 Error code: -1
 
 So could anyone help me?
 
 2012/11/9 王瑞 wangrui...@gmail.com:
 Dear everyone:
 
 I'm sorry for being a little off-topic! I want to install HKL2000 on
 Ubuntu 11.10 (32-bit), but it produces a file named info, not cr_info,
 after running the access_prod program. And when I put info in the
 /usr/local/lib directory and type HKL2000 in a terminal, it displays:
 root@ubuntu:/usr/local/bin# HKL2000
 ERROR: Not a valid HKL-2000 license: Licence info file (cr_info) not found
 Error code: -1
 
 So could anyone tell me how to do it ?
 
 Rui Wang
 



Re: [ccp4bb] Stabilization of crystals and ligand exchange

2012-10-18 Thread Dmitry Rodionov
Hi Sabine,

Glutaraldehyde crosslinking has worked pretty well for various soaks in my
experience.

J. Appl. Cryst. (1999). 32, 106-112[ doi:10.1107/S002188989801053X ]
A gentle vapor-diffusion technique for cross-linking of protein crystals for 
cryocrystallography
C. J. Lusty

Best regards,
Dmitry

On 2012-10-17, at 12:26 PM, Sabine Schneider wrote:

 Hi everyone,
 
 I am trying to get the structure of a protein-ligand complex where I need to 
 exchange the ligand with which it co-crystallises nicely.
 Problem: either the crystals crack, dissolve or turn brown...  OR they still 
 look very nice and well shaped but do not show a single reflection at the 
 synchrotron!!!
 
 
 Here is what I tried so far:
 
 1) initially stabilising with higher precipitant (here PEG 1500) before 
 slowly transferring (*) it to the ligand-removal solution (= artificial 
 mother liquor with higher PEG, ethylene glycol or glucose, but without the 
 initial ligand)
 
 (*) by slow exchange I mean: initially mixing the drop solution with the 
 stabilising/ligand-removal solution and adding it back to the drop stepwise 
 before fully transferring it. Calculation-wise, I have fully exchanged the 
 old solution for the new one.
 
 2) Here I let them sit overnight (if they did not dissolve, crack or whatever)
 3) Slow-exchange transfer to the artificial ML with the new ligand (10 mM), 
 left them overnight and froze them directly
 
 'Best' so far (crystals still looking nice but no reflections...) was slow 
 exchange into higher PEG, then to higher PEG with ethylene glycol (30%, also 
 adding ethylene glycol to the reservoir), letting them sit overnight, before 
 again slowly exchanging into the solution with the new ligand in higher PEG 
 and 30% ethylene glycol.
 
 As I said, here the crystals keep their shape but don't diffract at all 
 anymore. Just freezing them with 30% ethylene glycol, they diffract nicely to 
 2.5 A on a home source. But already after step one they are sometimes not 
 happy anymore.
 
 Co-crystallisation failed: when I add the ligand, which is not that soluble, 
 to the purified protein, everything crashes out of solution. I am thinking 
 about testing adding the ligand to the diluted protein and concentrating them 
 together. But I don't have that much ligand, since the synthesis is quite 
 tedious. The ligand can be dissolved in 30% ethylene glycol to ~50 mM.
 
 Thus I was wondering whether someone has successfully done ligand exchange 
 with glutaraldehyde-stabilised xtals?
 Or any ideas how to stabilise them? I appreciate any ideas or comments!
 
 Sorry for the lengthy email!
 
 Best,
 Sabine


Re: [ccp4bb] CCP4 update 6.3.0 006

2012-10-13 Thread Dmitry Rodionov
Hi Ben,

Applications launched from Finder and Spotlight get their environment
variables from ~/.MacOSX/environment.plist (before 10.8), from their own
Info.plist (10.8) and from launchd (system-wide), not from .tcshrc etc.

It sounds like, to get the updater to work from Finder for all users, you have
to set your $CCP4 in launchd.conf. Not sure if it's worth the effort.
The environment is probably set correctly (somewhere) on the machine that was
used to install CCP4...
 
http://apple.stackexchange.com/questions/57385/where-are-system-environment-variables-set-in-mountain-lion
http://www.dowdandassociates.com/content/howto-set-environment-variable-mac-os-x-etclaunchdconf

Dmitry

On 2012-10-12, at 3:46 PM, Ben Eisenbraun wrote:

 On Mon, Oct 08, 2012 at 06:20:59PM +, Ronan Keegan wrote:
 Dear CCP4 Users,
 
 A CCP4 update has just been released, consisting of the following changes:
 
 Hi Ronan et al,
 
 The update client on OS X doesn't seem to like our installation and dies
 with:
 
 Can't make class cfol of alias
 programs:i386-mac:ccp4:6.3.0:lib_exec:Update.app: into type Unicode text
 
 But I found an odd workaround. If I double-click the Update.app in Finder,
 I get the administrator password prompt, enter the credentials, and then
 the updater tells me that $CCP4 is unset, etc.
 
 I can then run 'open Update.app' from the shell, and it inherits $CCP4 and
 runs correctly.
 
 Any ideas? The workaround works, but since I don't really know why, I don't
 feel particularly good about it.
 
 Also, my installation is on NFS and is not owned by root, so it doesn't
 require administrator privileges to update. It would be nice if the
 application checked for write privileges before assuming it needs to be run
 with escalated privileges.
 
 -ben
 
 --
 | Ben Eisenbraun
 | SBGrid Consortium  | http://sbgrid.org   |
 | Harvard Medical School | http://hms.harvard.edu  |