Re: [ccp4bb] Kabat, insertion codes & refinement

2014-06-16 Thread Eric Bennett

Insertion codes are a commonly used part of the PDB specification.  It's odd 
that they wouldn't be supported correctly.  To take another similar case, what 
would you say of a program that couldn't handle negative residue numbers, as are 
commonly used for N-terminal purification tags?  Must all sequences start at 1?  
(Not all antibodies are isolated from natural sources.  Some come from 
human-designed libraries, for example, so they are every bit as engineered as 
something with a His tag stuck on the end.)
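
As a practical aside for anyone in the same situation: below is a minimal 
sketch, along the lines of Ed's suggestion further down in the thread, of 
stripping insertion codes and renumbering each chain sequentially before 
refinement.  It assumes fixed-column PDB format, the file names are 
placeholders, and you would keep the original Kabat numbering for the final 
model.

# renumber.py -- minimal sketch: strip insertion codes and renumber
# each chain sequentially, so refinement sees plain numbering.
# Assumes fixed-column PDB format (chain ID in column 22, residue
# number in columns 23-26, insertion code in column 27).
import sys

def renumber(lines):
    new_num = {}   # (chain, old resseq, icode) -> new residue number
    counter = {}   # chain -> last number assigned
    out = []
    for line in lines:
        if line[:6] in ("ATOM  ", "HETATM", "ANISOU", "TER   ") and len(line) > 27:
            chain = line[21]
            key = (chain, line[22:26], line[26])
            if key not in new_num:
                counter[chain] = counter.get(chain, 0) + 1
                new_num[key] = counter[chain]
            line = line[:22] + "%4d " % new_num[key] + line[27:]
        out.append(line)
    return out

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        sys.stdout.writelines(renumber(f.readlines()))

Usage would be something like: python renumber.py kabat.pdb > plain.pdb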


Cheers,
Eric



On Jun 16, 2014, at 7:23 AM, Ed Pozharski wrote:

 There is no actual requirement to use Kabat numbering; you can avoid it 
 altogether.  Some argue that L27A is actually the 28th amino acid in the protein 
 sequence, and labeling it as L27A is simply incorrect.  I would suggest doing the 
 refinement with plain numbering (no insertion codes) and changing it only for 
 the final model if needed for comparative analysis. 
 
 Ed
 
 
 Sent on a Sprint Samsung Galaxy S® III
 
 
 -------- Original message --------
 From: Hargreaves, David
 Date:06/16/2014 6:07 AM (GMT-05:00)
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: [ccp4bb] Kabat, insertion codes & refinement
 
 
 Dear CCP4bb,
 
  
 
 I’m refining an antibody structure which requires Kabat residue numbering 
 with insertion codes. My setups of Refmac5 and Buster both break peptide bonds 
 between some (but not all) of the residues with insertion codes. I was wondering 
 whether there is a special way of handling these residues in refinement?
 
  
 
 Thanks,
 
  
 
 David
 
  
 
 David Hargreaves
 
 Associate Principal Scientist
 
 _
 
 AstraZeneca
 
 Discovery Sciences, Structure & Biophysics
 
 Mereside, 50F49, Alderley Park, Cheshire, SK10 4TF
 
 Tel +44 (0)01625 518521  Fax +44 (0) 1625 232693
 
 David.Hargreaves @astrazeneca.com
 
  
 

--
Eric Bennett, er...@pobox.com

Always try to associate yourself with and learn as much as you can from those 
who know more than you do, who do better than you, who see more clearly than 
you.
- Dwight Eisenhower





Re: [ccp4bb] Off topic: Homology modeling

2014-05-22 Thread Eric Bennett
One possible outcome: if (a) the two proteins bind similar ligands, (b) you 
know which part of the X-ray structure of the solved protein binds the 
ligand, and (c) you can use this information to tweak the alignment and 
determine that there is higher identity/similarity in the ligand-binding 
region, then that would increase your confidence in the prediction of the 
ligand-binding site relative to a case where these factors weren't present.  The 
local sequence identity in the binding site can matter more than the overall 
sequence identity.  Some models may be accurate enough for purposes such as 
identifying domain junctions, or identifying residues that are likely involved 
in the function of the protein (i.e. possible ligand-binding residues), but not 
accurate enough for detailed inhibitor design.  So whether you can draw 
reliable conclusions depends on the type of conclusions you are trying to 
draw.
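
(A minimal sketch of what I mean by checking local identity; the aligned 
sequences and the binding-site column range here are made-up placeholders, 
and you would take both from your own template alignment:

# Compare global vs. binding-site percent identity over an aligned pair.
def percent_identity(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    return 100.0 * sum(1 for x, y in pairs if x == y) / len(pairs)

query_aln    = "MKT-LLILAVVAAALA"   # hypothetical aligned query
template_aln = "MRTALLLLAV--AELA"   # hypothetical aligned template
site = slice(4, 12)                 # hypothetical binding-site columns

print("global identity: %.1f%%" % percent_identity(query_aln, template_aln))
print("site identity:   %.1f%%" % percent_identity(query_aln[site],
                                                   template_aln[site]))

If the site identity is well above the global number, that is the sort of 
thing that raises confidence in the binding-site prediction.)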

Another possible outcome is that the model is completely wrong. At low identity 
levels it would be very important to test any model against experimental data 
to make sure it isn't junk.  Also make sure your alignment places gaps in 
sensible locations taking into account the structure of your template.

As a real world example, the lowest identity useful model I've ever made for 
detailed inhibitor design was a kinase where the sequence identities in the 
kinase domain were in the very low 20s.  The active site was more conserved 
though, and since kinase binding sites are very well studied there was 
additional information to help ensure the alignments in the ligand site were 
correct.  Even in that case it wasn't clear when we made the model whether it was 
going to be useful; it was only during the course of inhibitor optimization 
that we gained more confidence in its accuracy.

Cheers,
Eric



On May 22, 2014, at 3:48 PM, Theresa Hsu wrote:

 Dear all
 
 I am working with a membrane protein without known structure. The closest 
 protein in PDB has 10% sequence identity/25% similarity to my protein.
 
 What is the best method and software to do homology modeling while I try to 
 get the crystal? Is the ligand binding site prediction reliable? There is no 
 available experimental data on this protein except to suggest it is some type 
 of ion transporter.
 
 Thank you.
 
 Theresa

--
Eric Bennett, er...@pobox.com

Always try to associate yourself with and learn as much as you can from those 
who know more than you do, who do better than you, who see more clearly than 
you.
- Dwight Eisenhower


Re: [ccp4bb] Stereo monitor

2013-11-21 Thread Eric Bennett

Hi David,

We have had success with the ASUS VG248QE connected with DisplayPort, on an HP 
Z620 running RHEL 6.4.

We like to have dual stereo displays on Linux, and Nvidia is discontinuing many 
of the cheaper multi-DVI-port cards.  We had no success trying to take two Acer 
DVI monitors and connect them to DP on a Quadro 4000 (not K4000) using various 
DP/DVI adapters.  So we decided we had to switch entirely to native DP 
connections and chose the VG248QE.  There is no built-in emitter on this monitor, 
however, so I think if you want to use it on Linux you will need the K4000 
or higher.  We bought a K5000 to test it with, assuming we could fall back to 
the K5000's dual DVI ports and our old DVI monitors if DP didn't work.  But 
since we got DP to work, we will probably go with the K4000 for our next batch 
of Linux workstations: it appears to be the minimum card that works on 
Linux with the 3-pin emitter, and it also provides two DP connectors.

We had problems getting the correct stereo 3-pin bracket adapter for these 
cards; our reseller initially sent us the wrong one that is apparently for 
older Nvidia cards (very nice of them to change the plug periodically!) but we 
were able to get the correct one by talking directly to HP.

The card will drive the monitor at 144 Hz when stereo is disabled in the X 
server config file, but it drops to 120 Hz in stereo mode, which I assume is 
probably the maximum shutter speed on the 3D Vision Pro glasses.

Cheers,
Eric





On Nov 21, 2013, at 9:52 AM, David Schuller wrote:

 On 11/21/13 07:50, mesters wrote:
 ...(both handle the dual link DVI-D standard)...
 
 Are there any monitors on the market yet which can produce stereo 3D from a 
 Displayport 1.2 input? With or without a built-in emitter.
 
 
 -- 
 ===
 All Things Serve the Beam
 ===
David J. Schuller
modern man in a post-modern world
MacCHESS, Cornell University
schul...@cornell.edu

--
Eric Bennett, er...@pobox.com

Always try to associate yourself with and learn as much as you can from those 
who know more than you do, who do better than you, who see more clearly than 
you.
- Dwight Eisenhower





Re: [ccp4bb] delete subject

2013-03-28 Thread Eric Bennett
Scott,

I'm not sure I understand your last paragraph.  Once researchers have had their 
data pass peer review (which I interpret as meaning a journal has accepted it), 
how often do you think it happens that it does not immediately get published?

Just depositing data in the PDB, or posting it on a public web site, is not 
"meet[ing] the veracity of peer review."  There is something to be said for 
giving credit to the first people who have subjected their data to peer review 
and passed that step; otherwise people will be tempted to post data of dubious 
quality just to stake a public claim before the quality of the data has been 
independently checked.  If such an initial public, non-peer-reviewed posting 
were of unacceptable quality, it would dilute the credit granted to someone who 
later obtained good data.

An unfortunate number of problematic structures still sneak through peer 
review.  Relaxing quality review standards that must be passed before a 
scientist gets to claim credit for a discovery is a step backwards IMO.

Cheers,
Eric





On Mar 28, 2013, at 5:06 PM, Scott Pegan wrote:

 Hey everyone,
 
 Both Mark and Fred make some good points.  I totally agree with Nat (he beat 
 me to the send button).  In an ideal world, with all the advancements in 
 crowd sourcing and electronic media, one might think that posting data on a 
 bulletin board would count as marking one's turf and protect the scientist's 
 place on the pathway towards discoveries.  Regrettably, the current reality 
 doesn't support this.  As structural biologists, we are still in the mode where 
 the first to publish gets the bulk of the glory and potentially future funding 
 on the topic.
 
 For instance, when I was in graduate school, the lab I was in had KcsA 
 crystals at the same time as a couple of competing groups.  Several groups, 
 including the one I belonged to, had initial diffraction data.  One group was 
 able to solve KcsA, the first K+ channel transmembrane protein structure, 
 first.  That group was led by Roderick MacKinnon, now a Nobel Laureate partly 
 because of this work.  Now imagine if one of MacKinnon's students had put 
 their initial diffraction data up on the web and another group had used it to 
 assist in the interpretation of their own data and either solved the structure 
 before MacKinnon, or at least published it first.  Even if they acknowledged 
 MacKinnon for the assistance of his data (as they should), MacKinnon and the 
 other scientists in his lab would likely not have received the broad acclaim 
 that they received and justly deserved.  Also, ask Rosalind Franklin how data 
 sharing worked out for her. 
 
 Times haven't changed that much in the last ~10 years.  Actually, as many have 
 mentioned, things have potentially gotten worse, in the sense that the 
 scientific impact of a structure is increasingly tied to the 
 biochemical/biological studies that accompany it; in other words, to the 
 discoveries based on the insights the structure provides.  Understandably, 
 this increasing emphasis on follow-up experiments to get into high-impact 
 journals in many cases increases the time between solving the structure and 
 publishing it.  During this gap, the group that solved the structure first is 
 vulnerable to being scooped.  Once scooped, unless the interpretation of the 
 structure and the conclusions of the follow-up experiments are largely and 
 justifiably divergent from the initial publication, there is usually 
 significant difficulty getting the article published in a top-tier journal.  
 Many might argue that they deposited it first, but I haven't seen anyone win 
 that argument either, because follow-up articles will cite the publication 
 describing the structure, not the PDB entry.
 
 Naturally, many could and should argue that this isn't the way it should be. 
 We could rapidly move science ahead in many cases if research groups were 
 entirely transparent and made their discoveries available as soon as they 
 could meet the veracity of peer review.  However, this is not the current 
 reality or the model we operate in.  So, until this changes, one might be 
 cautious about tipping off the competition, whether they are another 
 structural biology group looking to publish their already-solved structure, or 
 a biology group that could use insights gathered from your structural 
 information for a publication that might limit your own ability to publish.  
 Fortunately for Tom, his structure sounds like it is only important to a 
 fairly specific scientific question that many folks might not be working on 
 exactly.  
 
 Scott



Re: [ccp4bb] Off-topic: Best Scripting Language

2012-09-13 Thread Eric Bennett
On Sep 12, 2012, at 2:28 PM, Ethan Merritt wrote:


 Why are you dis-ing python? Seems everybody loves it...
 
 I'm sure you can google for many "reasons I hate Python" lists.
 
 Mine would start
 1) sensitive to white space == fail
 2) dynamic typing makes it nearly impossible to verify program correctness,
   and very hard to debug problems that arise from unexpected input or
   a mismatch between caller and callee.   
 3) the language developers don't care about backward compatibility;
   it seems version 2.n+1 always breaks code written for version 2.n, 
   and let's not even talk about version 3
 4) slow unless you use it simply as a wrapper for C++,
   in which case why not just use C++ or C to begin with?
 5) not thread-safe
 
you did ask...
   
   Ethan
 


While I agree generally with your points and try to avoid Python if at all 
possible, I'm not sure what you mean by point 5, since it's certainly 
possible to write threaded Python scripts.

Another point, which is purely personal taste, is the language philosophy that 
there is one official way to do something in Python, as contrasted with Perl 
(which is my choice), where the philosophy is that there are many ways 
of doing any given task and the language is not designed to force you into a 
particular way of doing it.




Ed adds:

 While indeed 1/3=0 (but so it will be in C), I think it's a bit of an 
 overstatement that python code execution is nearly impossible to verify.
 Another goal of python is to accelerate implementation, and dynamic/duck 
 typing supposedly helps that.  The argument is simply that weak typing 
 favours strong testing, which should be a good thing.


Actually it's a bit of a hindrance.  In Perl I can call the int function on 
anything and get a sensible answer.  In Python, if you call int() on a string 
that contains a floating-point number, the default behavior is to crash:


[woz:~] bennette% cat pytest.py
example_string = "10.3"
number = int(example_string)

[woz:~] bennette% /Library/Frameworks/Python.framework/Versions/2.6/bin/python 
pytest.py
Traceback (most recent call last):
  File "pytest.py", line 2, in <module>
    number = int(example_string)
ValueError: invalid literal for int() with base 10: '10.3'





That's brain dead.  IMHO of course.
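
(For completeness, the usual Python workarounds, as a quick sketch, are to go 
through float() or to catch the exception; example_string is the same 
placeholder as above:

example_string = "10.3"

# Convert via float() first; int() then truncates toward zero, which is
# closer to what Perl's int does on such a string.
number = int(float(example_string))    # 10

# Or handle bad input explicitly instead of letting the script die.
try:
    number = int(example_string)
except ValueError:
    number = int(float(example_string))

Which is, of course, exactly the kind of boilerplate the Perl version doesn't 
need.)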

Cheers,
Eric


Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-07 Thread Eric Bennett
I doubt many people completely fail to archive data, but maintaining data 
archives can be a pain, so I'm not sure what the useful lifetime of the average 
archive is.  Do people who archived to tape keep their tapes in a format that 
can be read by modern tape drives?  Do people who archived data to a hard drive 
10 years ago have something that can still read an Irix EFS-formatted SCSI hard 
drive today and, if not, did they bother to move the data to some other storage 
medium?

-Eric



On Apr 5, 2012, at 9:08 AM, Roger Rowlett wrote:

 FYI, every NSF grant proposal now must have a data management plan that 
 describes how all experimental data will be archived and in what formats. I'm 
 not sure how seriously these plans are monitored, but a plan must be provided 
 nevertheless. Is anyone NOT archiving their original data in some way?
 
 Roger Rowlett
 



Re: [ccp4bb] very informative - Trends in Data Fabrication

2012-04-04 Thread Eric Bennett

Then everyone's data can be lost at once in the next cloud failure.  Progress!


"The hardware failed in such a way that we could not forensically restore the 
data.  What we were able to recover has been made available via a snapshot, 
although the data is in such a state that it may have little to no utility..."
- Amazon to some of its cloud customers following their major crash last year


http://articles.businessinsider.com/2011-04-28/tech/29958976_1_amazon-customer-customers-data-data-loss


-Eric



On Apr 3, 2012, at 9:22 PM, Zhijie Li wrote:

 Hi,
  
 Regarding the online image file storage issue, I just googled cloud storage 
 and had a look at the current pricing of such services. To my surprise, some 
 companies are offering unlimited storage for as low as $5 a month. So that's 
 $600 for 10 years. I am afraid that these companies will feel really sorry to 
 learn that there are some monsters called crystallographers living on our 
 planet.
  
 In our lab, some pre-21st-century data sets were stored on tapes, newer ones 
 on DVD discs and IDE hard drives. All these media have become or will become 
 obsolete pretty soon, not to mention that the chance of CRC errors grows with 
 the medium's age. Admittedly, it may become quite a job to upload all the image 
 files that the whole crystallographic community generates per year, but for 
 individual labs I think putting data in the cloud might become something worth 
 thinking about.
  
 Zhijie
  
  



Re: [ccp4bb] All-D World

2012-02-15 Thread Eric Bennett
Jacob,

I wish it were that cheery.  Do not forget the darker side of history.

The prefix L- stands for levorotatory.  The levo- comes from the Latin for 
"left side."  Left-handedness is also known as sinistrality, from the Latin 
sinistra, which also meant the left side but over time took on the 
connotations that we currently associate with the word sinister.  The latter 
word, of course, is generally associated with dark and evil.  It is therefore 
erroneous to attribute the L amino acid to the Almighty.  The L amino acid is 
in fact a diabolical corruption of cellular processes that begin with the 
D-nucleotide (D- meaning rotating to the right, but derived from dexter, 
meaning dextrous and skillful).  The instrument which causes this perversion of 
God's perfect righteousness into a sign of evil deserves our strongest moral 
condemnation... I am referring, of course, to that devilish piece of cellular 
machinery known as the ribosome.  

The discovery of the ribosome was a significant blow to the success of what 
Charles Baudelaire famously called the devil's greatest trick.  For years now, 
his acolytes have attempted to hide the truth about the ribosome by referring 
to its work with the neutral, innocent-sounding phrase "translation."  Don't be 
fooled, but instead pray for the development of the next generation of ribosome 
inhibitors, or at least dissolve the current generation in holy water before 
ingesting.

-Eric


On Feb 15, 2012, at 7:24 PM, Jacob Keller wrote:

 G-d is right-handed, so to speak:
 
 Ex 15:6 Thy right hand, O LORD, is become glorious in power: thy
 right hand, O LORD, hath dashed in pieces the enemy.
 
 Since we are made in His image, and our (chiral) molecules are the
 cause of making most of us right-handed, which enantiomer to use was
 not a real choice but rather flowed logically from His (right-handed)
 Essence. Our chirality is dictated by His, whatever that means!
 
 JPK
 
 
 
 On Wed, Feb 15, 2012 at 4:48 PM, William G. Scott wgsc...@ucsc.edu wrote:
 Hi Jacob:
 
 After giving this a great deal of reflection …..
 I realized that you would face the same paradox that
 God had to resolve six thousand years ago at the Dawn of
 Creation, i.e., He needed D-deoxyribose DNA to code for L-amino acid
 proteins, and vice versa.  Likewise, you would probably be faced
 with a situation where you need L-deoxyribose DNA to code for D-amino
 acid proteins, so once again, you need a ribozyme self-replicase to
 escape the Irreducible Complexity(™).  (The Central Dogma at least is 
 achiral.)
 
 At least it can be done in six thousand years, which isn't unreasonable for
 a Ph.D. thesis project (especially when combined with an M.D.), and you,
 unlike Him, have access to a Sigma catalogue.
 
 All the best,
 
 Bill
 
 
 William G. Scott
 Professor
 Department of Chemistry and Biochemistry
 and The Center for the Molecular Biology of RNA
 228 Sinsheimer Laboratories
 University of California at Santa Cruz
 Santa Cruz, California 95064
 USA
 
 
 
 
 
 On Feb 15, 2012, at 10:28 AM, Jacob Keller wrote:
 
 So who out there wants to start an all-D microbial culture by total
 synthesis, a la the bacterium with the synthetic genome a while back?
 Could it work, I wonder? I guess that would be a certain benchmark for
 Man's conquest of nature.
 
 JPK
 
 ps maybe if there is a broadly-acting amino-acid isomerase or set of
 isomerases of appropriate properties, this could be helpful for
 getting the culture started--or even for preying on the L world?
 
 
 
 On Wed, Feb 15, 2012 at 12:17 PM, David Schuller dj...@cornell.edu wrote:
 On 02/15/12 12:41, Jacob Keller wrote:
 
 Are there any all-D proteins out there, of known structure or
 otherwise? If so, do enantiomer-specific catalyses become inverted?
 
 JPK
 
 What do you mean by Out There? If you mean in the PDB, then yes.  As of
 two weeks ago, there are ~ 14 racemic structures deposited; most in space
 group P -1, with one outlier in space group I -4  C 2. This includes RNA,
 DNA, and PNA, but 6 entries are actually protein. The longest is over 80
 residues.
 
 Theoretically, enantiomer-specific catalysis ought to be inverted, but most
 of the structures solved are not enzymes. kaliotoxin, plectasin, antifreeze
 protein, monellin, villin, and a designed peptide.
 
 On the other hand, if by out there you meant in nature outside of
 biochemistry and organic chemistry labs; then no, I am not aware of any
 all-D proteins. There are a few proteins/peptides which include a small 
 number of D-residues, which is chalked up to nonribosomal synthesis.
 
 The first paper I managed to Google:
 http://jb.asm.org/content/185/24/7036.full
 Learning from Nature's Drug Factories: Nonribosomal Synthesis of 
 Macrocyclic
 Peptides
 doi: 10.1128/JB.185.24.7036-7043.2003 J. Bacteriol. December 2003 vol. 185
 no. 24 7036-7043
 
 If racemic crystallization isn't exciting enough for you, look into
 quasi-racemic crystallization.
 
 
 On Wed, Feb 15, 2012 at 8:05 AM, 

Re: [ccp4bb] Computer encryption matters

2011-08-18 Thread Eric Bennett
John,

Since so many people have said it's flawless, I'd like to point out this is not 
always the case.  The particular version of the particular package that we have 
installs some system libraries that caused a program I use on a moderately 
frequent basis to crash every time I tried to open a file on a network drive.  
It took me about 9 months to figure out what the cause was, during which time I 
had to manually copy things to the local drive before I could open them in that 
particular program.  The vendor of the encryption software has a newer version 
but our IT department is using an older version.  There is another workaround 
but it's kind of a hack.

So I'd say problems are very rare, but if you run into strange behavior, don't 
rule out encryption as a possible cause.

-Eric



On Aug 17, 2011, at 3:13 PM, Jrh wrote:

 Dear Colleagues,
 My institution is introducing concerted measures for improved security via 
 encryption of files. A laudable plan in case of loss or theft of a computer 
 with official files eg exams or student records type of information stored on 
 it.
 
 Files, folders or a whole disk drive can be encrypted. Whilst I can target 
 specific files, this could get messy and time consuming to target them and 
 keep track of new to-be-encrypted files. It is tempting therefore to agree to 
 complete encryption. However, as my laptop is my calculations' workbench, as 
 well as office tasks, I am concerned that unexpected runtime errors may occur 
 from encryption and there may be difficulties of transferability of data 
 files to colleagues and students, and to eg PDB.
 
 Does anyone have experience of encryption? Are my anxieties misplaced? If 
 not, will I need to plan to separate office files, which could then all be 
 encrypted, from crystallographic data files/calculations, which could be left 
 unencrypted. If separate treatment is the best plan does one need two 
 computers once more, rather than the one laptop? A different solution would 
 be to try to insist on an institutional repository keeping such files.
 
 In anticipation,
 Thankyou,
 John
 Prof John R Helliwell DSc


Re: [ccp4bb] more Computer encryption matters

2011-08-18 Thread Eric Bennett
For anything other than the most intensive I/O operations, recent processors 
will give you very respectable performance.

I don't encrypt whole drives but I do have some AES-encrypted disk images in 
Snow Leopard, and I get about 38 MB/sec throughput copying (using cp) and 60 
MB/sec reading (while also calculating cksum) for files of several hundred 
megabytes in size.  I only have one drive, so the write test includes the time 
it takes to read the data from the unencrypted hard drive before writing back 
to the disk image on the same volume.  The process that handles the encryption 
only hits about 30% CPU use while writing and 40% while reading so I assume my 
hard drive is the limiting factor.  If I repeat the read test, so presumably 
the encrypted data is already cached, it hits over 90 MB/sec throughput and 
about 60% of a single core is busy.

These numbers are for a 2.93 GHz core i7 iMac with the HDS722020ALA330 Hitachi 
drive.  I believe this is the 870 series chip which does _not_ have the new AES 
instruction set on-chip.
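
(If anyone wants a quick stand-alone check of the same kind of number without 
cp/cksum, here is a minimal sketch that times a sequential read and reports 
MB/sec; the path is a placeholder pointing at a large file on the volume you 
want to test:

# Time a sequential read of one large file and report throughput.
import time

path = "/Volumes/encrypted/bigfile.dat"   # placeholder test file
chunk_size = 1024 * 1024
nbytes = 0
start = time.time()
with open(path, "rb") as f:
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        nbytes += len(chunk)
elapsed = time.time() - start
print("%.1f MB/sec" % (nbytes / (1024.0 * 1024.0) / elapsed))

Use a file larger than RAM, or a freshly written one, if you want cold-cache 
numbers rather than the OS cache.)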

-Eric




On Aug 18, 2011, at 5:50 PM, William G. Scott wrote:

 OS X 10.7 enables you to do whole-drive encryption.
 
 Here is a description from Arse Technica:
 
 http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.ars/13
 
 I ain't never tried it myself.  10.7 seems to run slow enough as it is.
 
 -- Bill
 


Re: [ccp4bb] unusual sighting of a crystal structure

2011-07-17 Thread Eric Bennett

It would be even scarier if they used an NMR structure.

-Eric



On Jul 16, 2011, at 1:20 PM, Robbie Joosten wrote:

 Hi Artem,
 
 Thanks for that nice example of a protein structure used to pimp a movie. 
 Ribbon representations are always the scariest.
 
 Cheers,
 Robbie



Re: [ccp4bb] Stereo solution with Nvidia '3D vision' or '3D vision pro'

2011-05-10 Thread Eric Bennett
Nvidia lists that monitor on their list of supported hardware:
http://www.nvidia.com/object/3d-vision-requirements.html

They even sell some Acer monitors in their online store although they are 
labeled in conflicting ways.

I tried upgrading the driver yesterday to the 270.41.06 version but it didn't 
make any difference, still only 100 Hz.  Are you using Windows or Linux?  We're 
using the 64-bit Linux driver.   

-Eric



On May 9, 2011, at 4:26 AM, Takaaki Fukami wrote:

 not seen a working 120 Hz stereo setup working on the Acer GD235 monitor.
 if you ask the Nvidia driver or the monitor, it reports 100 Hz instead
 
 This is what I encountered on Dell Alienware OptX AW2310 with Quadro FX3800,
 which has been fixed by nVIDIA Linux driver update (in 256.44).
 
 I don't know if the Acer monitor is compatible or not, 
 it seems better to ask NVIDIA directly. see:
 http://twitter.com/#!/NVIDIAQuadro/status/65188179753435137
 
 
 Takaaki Fukami
 
 -
 Discovery Platform Technology Dept. Gr.5
 Chugai Pharmaceutical Co.,Ltd.


Re: [ccp4bb] Stereo solution with Nvidia '3D vision' or '3D vision pro'

2011-05-06 Thread Eric Bennett
We recently had issues setting up a 3D projector and have tried lots of 
combinations of monitors, drivers, cards, glasses, etc.  The answer seems to be 
that interchangeability is very complicated and you won't know unless you try 
it.

 For example, with the last version of the Nvidia driver I tested, the driver 
refused to put out an Nvidia 3D Vision sync signal (stereo 10 in xorg.conf) 
unless there was a 3D capable LCD attached.  I don't know of any technical 
reason the Nvidia 3D Vision couldn't be used with a CRT but Nvidia has 
apparently chosen to disable it (or at least make it hard to enable) in the 
Linux driver.

Going the other direction, using RealD with an LCD system might be 
possible, but you probably have to match your RealD emitter with RealD glasses.  
Older CrystalEyes glasses (CE3 and earlier) generally do not work with LCD 
monitors because of the polarization in the glasses.  We recently got some CE4 
glasses and they don't seem to have that problem although in practice we are 
using them with a projector, not LCD monitors.  But I don't really like the 
CE4's, there is too much of my field of vision under the glasses that they 
don't cover.

We've observed some really weird configurations that appear to mostly work, 
such as plugging in a RealD emitter and glasses when the driver is configured 
to output a signal for Nvidia 3D Vision (stereo 10 option under Linux).  You 
don't say whether you are using Windows or Linux and there may be variations in 
the drivers, variations by card, etc.  Regarding card to card variations, we've 
observed 3D setups in conference rooms with multiple emitters where some Nvidia 
cards happily drive multiple emitters with particular splitters & boosters, but 
other Nvidia cards don't.  

The bottom line is if you mix hardware you might have problems and vendors are 
unlikely to help you.  If you have CE4 glasses already, you can try it with an 
LCD and it may work.  Otherwise, if you have to buy new glasses (ie, you have 
CE3 or older), you might as well get the Nvidia package with the emitter 
included.  3D Vision Pro uses the 2.4 GHz band instead of IR to transmit the 
sync signal so if you were setting up a conference room in theory the Pro 
version might be less likely to leave dead zones in the conference room.  For a 
single user workstation it's very unlikely that you would get any benefit.

Just to muddy the waters a bit, I have not seen a working 120 Hz stereo setup 
on the Acer GD235 monitor.  We have a bunch of them set up, and we put 
a 120 Hz mode line in xorg.conf.  If you ask X11 it says it's running at 120.  
But if you ask the Nvidia driver or the monitor, it reports 100 Hz instead, and 
visually there is enough flickering that the monitor and the driver seem to 
have the correct number.  I'm curious if anyone else here has looked in detail 
to make sure their Acer-based system is running at 120 and found that it is 
actually doing what people claim it can do.  I find the 100 Hz LCD flicker 
annoying over long periods so I am still a neanderthal CRT user.  My coworkers 
were convinced their LCD systems were running at 120, when they were actually 
only running at 100.  I'm not sure if this is a driver problem or a monitor 
problem.

-Eric


On May 6, 2011, at 11:27 AM, zhang yu wrote:

 Dear colleagues, 
 
 Sorry to present the stereo issue to the board again.
 
 Since my old SGI CRT monitor only has 75 HZ refresh rate, the flickering in 
 stereo mode bothered me a lot.  Recently, I want to update my old CRT to 120 
 HZ LCD.  I have a Nvidia Quadro FX3800 in my workstation. I would like to 
 make sure  some issues before I make the upgrade.
 
 1.  Can I apply the previous stereo emitter (Purchased from Real D, Model 
 #E-2) to 120HZ LCD? Although the company told me this emitter is not 
 compatible with LCD, could some one tell me why? Is it true that the Nvidia 
 3D vision is the only solution for the stereo in LCD?
 
 2. Nvidia supply two kinds of 3D emitters. One of them is 3D vision, while 
 the other one is 3D vision pro.  Which one is sufficient for 
 crystallographier user? (3D vision pro is much more expensive than 3D 
 vision) 
 It seems that 3D vision is for home user and powered by the Nvidia GeForce 
  series graphic cards. While 3D vision pro is for professional user and 
 powered by Nvidia Quardro series graphic card .   

 3. It looks that the Nvidia 3D glasses are very compact. Is it comfortable 
 for someone like me already with eyeglasses?
 
 
 Thanks
 
 Yu
 -- 
 Yu Zhang
 HHMI associate 
 Waksman Institute, Rutgers University
 190 Frelinghuysen Rd.
 Piscataway, NJ, 08904
 
 


Re: [ccp4bb] what to do with disordered side chains

2011-04-03 Thread Eric Bennett
Most non-structural users are familiar with the sequence of the proteins they 
are studying, and most software does at least display residue identity if you 
select an atom in a residue, so usually it is not necessary to do any cross 
checking besides selecting an atom in the residue and seeing what its residue 
name is.  The chance of somebody misinterpreting a truncated Lys as Ala is, in 
my experience, much much lower than the chance they will trust the xyz 
coordinates of atoms with zero occupancy or high B factors.

What worries me the most is somebody designing a whole biological experiment 
around an over-interpretation of details that are implied by xyz coordinates of 
atoms, even if those atoms were not resolved in the maps.  When this sort of 
error occurs it is a level of pain and wasted effort that makes the pain 
associated with having to build back in missing side chains look completely 
trivial.

As long as the PDB file format is the way users get structural data, there is 
really no good way to communicate "atom exists with no reliable coordinates" to 
the user, given the diversity of software packages out there for reading PDB 
files and the historical lack of any standard way of dealing with this issue.  
Even if the file format is hacked there is no way to force all the existing 
software out there to understand the hack.  A file format that isn't designed 
with this sort of feature from day one is not going to be fixable as a 
practical matter after so much legacy code has accumulated.

-Eric



On Apr 3, 2011, at 2:20 PM, Jacob Keller wrote:

 To the delete-the-atom-nik's: do you propose deleting the whole
 residue or just the side chain? I can understand deleting the whole
 residue, but deleting only the side chain seems to me to be placing a
 stumbling block also, and even possibly confusing for an experienced
 crystallographer: the .pdb says lys but it looks like an ala? Which
 is it? I could imagine a lot of frustration-hours arising from this
 practice, with people cross-checking sequences, looking in the methods
 sections for mutations...
 
 JPK
 


Re: [ccp4bb] The meaning of B-factor, was Re: [ccp4bb] what to do with disordered side chains

2011-04-01 Thread Eric Bennett
Personally I think it is a _good_ thing that those missing atoms are a pain, 
because it helps ensure you are aware of the problem.  As somebody who is in 
the business of supplying non-structural people with models, and seeing how 
those models are sometimes (mis)interpreted, I think it's better to inflict 
that pain than it is to present a model that non-structural people are likely 
to over-interpret.  

The PDB provides various manipulated versions of crystal structures, such as 
biological assemblies.  I don't think it would necessarily be a bad idea to 
build missing atoms back into those sorts of processed files but for the main 
deposited entry the best way to make sure the model is not abused is to leave 
out atoms that can't be modeled accurately.

Just as an example since you mention surfaces, some of the people I work with 
calculate solvent accessible surface areas of individual residues for purposes 
such as engineering cysteines for chemical conjugation, and if residues are 
modeled into bogus positions just to say all the atoms are there, software that 
calculates per-residue SASA has to have a reliable way of knowing to ignore 
those atoms when calculating the area of neighboring residues.  Ad hoc 
solutions like putting very large values in the B column are not clear cut for 
such a software program to interpret.  Leaving the atom out completely is 
pretty unambiguous.
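
(To make that concrete, this is the kind of ad hoc filter every downstream tool 
ends up needing when unreliable atoms are left in the file; a minimal sketch, 
with the cutoffs and file names as illustrative placeholders only:

# Drop atoms flagged as unreliable (zero occupancy, or a high-B
# convention) before handing a PDB file to an SASA program.
def strip_unreliable_atoms(lines, b_cutoff=99.0):
    for line in lines:
        if line.startswith(("ATOM  ", "HETATM")) and len(line) > 66:
            occupancy = float(line[54:60])
            b_factor  = float(line[60:66])
            if occupancy == 0.0 or b_factor >= b_cutoff:
                continue
        yield line

with open("in.pdb") as f, open("filtered.pdb", "w") as out:
    out.writelines(strip_unreliable_atoms(f))

Leaving the atoms out of the deposited model in the first place makes all of 
this unnecessary.)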

-Eric


On Mar 31, 2011, at 7:34 PM, Scott Pegan wrote:

 I agree with Zbyszek with the modeling of side chains and stress the 
 following points:
 
 1) It drives me nuts when I find that a PDB entry is missing atoms from side 
 chains.  This requires me to rebuild them to get any use out of the PDB, such 
 as relevant surface renderings or electrostatic potential plots.  I am an 
 experienced structural biologist, so I can immediately identify that they have 
 been removed and can rebuild them.  I feel sorry for my fellow scientists from 
 other biological fields who can't perform this task readily; removing these 
 atoms from a model limits its usefulness to a wider scientific audience.
 
 2) I am not sure if anyone has documented the percentage of side chains 
 actually missing due to radiation damage versus heterogeneity in conformation 
 (i.e. dissolving a crystal after collection and sending it to mass spec).  
 Although the former likely happens occasionally, my gut tells me that the 
 latter is significantly more predominant.  As a result, the absence of atoms 
 from a side chain in the PDB where the main chain is clearly visible in the 
 electron density might make for the best statistics for an experimental model, 
 but it does not reflect reality.  
 
 Scott
 


Re: [ccp4bb] Question on calculation of RMSD

2010-11-14 Thread Eric Bennett

Andrew Martin's ProFit program is another option:

http://www.bioinf.org.uk/software/profit/doc/node11.html

Having fitted the structures using the ZONE and ATOMS commands to 
specify which residues and atoms should be included in the fitting, 
the RMS deviation may then be calculated over a different region of 
the structure and/or a different atom set.


This is achieved using the RZONE and RATOMS commands. The syntax of 
these commands is identical to that of the ZONE and ATOMS commands 
described in Sections 8 and 9.
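
If you would rather script it yourself, here is a minimal numpy sketch of the 
same idea (this is not ProFit): do a least-squares fit on the CA atoms of the 
first monomer only, then report the RMSD over the second monomer without 
refitting.  The coordinate arrays below are random placeholders; extracting 
matched CA coordinates from your own PDB files is up to you.

import numpy as np

def kabsch_fit(P, Q):
    """Return rotation R and translation t that map P onto Q (least squares)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against an improper rotation
    R = Vt.T @ D @ U.T
    return R, Qc - R @ Pc

def rmsd(P, Q):
    return np.sqrt(((P - Q) ** 2).sum(axis=1).mean())

# Placeholders: matched CA coordinates, shape (N, 3), same atom order.
A1, B1 = np.random.rand(120, 3), np.random.rand(120, 3)   # complex 1, monomers A, B
A2, B2 = np.random.rand(120, 3), np.random.rand(120, 3)   # complex 2, monomers A, B

R, t = kabsch_fit(A2, A1)                # superpose on monomer A only
print("RMSD over monomer B: %.2f" % rmsd(B1, B2 @ R.T + t))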


-Eric







-Original Message- From: E rajakumar
Sent: Monday, November 15, 2010 6:52 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] Question on calculation of RMSD

Dear All
I have two structures of a homo-dimeric protein complex with different DNA.
I want to calculate the RMS deviation between the second monomers of these 
two complexes after superposing the first monomers.


I need this to find out what effect the DNA has on the relative 
orientation of the two monomers in the dimer.


Previously I was using MOLEMAN2 to do this calculation.

Please can you suggest me any other program to do this calculation.

Thanking you
Raj


E. Rajakumara
Postdoctoral Fellow, Structural Biology Program, Memorial 
Sloan-Kettering Cancer Center, New York 10021, NY
001 212 639 7986 (Lab), 001 917 674 6266 (Mobile)



--
--
Eric Bennett, er...@pobox.com

Drawing on my fine command of the language, I said nothing.
-Robert Benchley


Re: [ccp4bb] Rosetta vs Modeller

2009-12-16 Thread Eric Bennett

Subhendu wrote:


Hi everyone,
I am currently in the process of solving some antigen-antibody 
complex structures.  I was wondering whether people here have used 
Modeller or Rosetta for homology modeling and have any 
recommendations.  I would like to build a model of the 
antigen/antibody onto the known structure of the antibody/antigen (to 
get a model of the complex while my crystallization/structure 
solving process continues).  I have experience with Modeller but I have 
not tried Rosetta or the newer PyRosetta (which looks interesting).
For docking one protein onto another, are there any great 
programs that people here have used or would recommend?


Because Modeller is a general-purpose tool while RosettaAntibody has 
some specialized knowledge of antibodies, you might want to consider 
giving Rosetta a try.  A couple of reasons:


(1) Modeller operates at the local level of structure.  It isn't 
written to specifically optimize relative domain orientations, such 
as between VL and VH.  RosettaAb includes a specific step for 
optimizing domain orientation, which could improve your chances with 
MR if you intend to use the homology model for that purpose.


(2) For identifying templates, if you use the sort of global sequence 
alignment tools in Modeller to choose a template, you may end up with 
a template that has high identity in the framework region but the CDR 
loop lengths are not the same.  Rosetta takes a more sensible 
piecemeal approach where it will identify a template for the 
framework, and other templates for the CDRs based on the loop length. 
Of course, you can do this manually if you want to use Modeller since 
Modeller supports multiple templates, but it's more work.
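
If you do stay with Modeller, the multi-template route is only a few lines; 
here is a minimal sketch assuming a Modeller 9-era installation and a prepared 
PIR alignment, with all file, template and sequence names as placeholders:

# Multi-template comparative modeling with Modeller (minimal sketch).
from modeller import *
from modeller.automodel import *

env = environ()
env.io.atom_files_directory = ['.']           # where the template PDBs live

a = automodel(env,
              alnfile='antibody.ali',          # PIR alignment: target + both templates
              knowns=('framework_tmpl', 'cdr_tmpl'),
              sequence='my_antibody')
a.starting_model = 1
a.ending_model = 5                             # build a handful of models
a.make()

The real work is in preparing the alignment so the framework comes from one 
template and each CDR from a template with the matching loop length, which is 
exactly the step RosettaAntibody automates.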


-Eric

--


Re: [ccp4bb] FW: pdb-l: Retraction of 12 Structures....?

2009-12-12 Thread Eric Bennett

Dear Fred,

People have already done this for all PDB entries:
- http://eds.bmc.uu.se/eds/  : maps and many crystallographic stats
- http://www.cmbi.ru.nl/pdb_redo : maps and re-refinement. And yes, the
stats and maps do improve most of the time, unfortunately also for
structures that are not old (but to a lesser extent).


Except it doesn't work for _all_ PDB entries, even when structure factors are 
available.  On your site, for example, the "Current entries" link on 
the home page seems to be broken and entry 3G6A (which was released 8 
months ago) is not available for some unspecified reason.


For some cases (like twinned structures) EDS won't have a map.

Rather than relying on availability of a third party server to enable 
viewing the map, it would be a much cleaner solution if authors just 
deposited their own maps in the PDB.


--


[ccp4bb] map deposition (was: Retraction of 12 Structures...)

2009-12-12 Thread Eric Bennett

Robbie Joosten wrote:


I think the deposition of maps is a waste of space. Maps may describe what
the depositors <paranoid>want you to think they</paranoid> have looked at.
But that does not mean they looked at the right thing. Who knows what they
did to the maps in terms of (unwarranted) density modification to make
them look cleaner? The advantage of the EDS is that it is impartial and
uniform. The maps are generated in a clear and well-described way.



For those who want to do bulk statistical analyses, the automated 
servers with uniform settings are unquestionably a good tool to have. 
They're also useful for the rare cases where somebody is cheating.  I 
don't think deposited maps should replace the servers.  But for most 
structures, where we do not suspect the crystallographer was 
intentionally cheating, I'd rather have the map as calculated with 
settings determined by the person who is most familiar with each 
specific data set.


Are default settings really smarter than the average 
crystallographer?  My personal opinion is that in most cases, when 
non-default settings have been used, there was probably a valid 
reason.


--


Re: [ccp4bb] FW: pdb-l: Retraction of 12 Structures....

2009-12-11 Thread Eric Bennett

Fred V wrote:

I personally like to visualise the electron density as well; 
however, I do not think that a non-crystallographer will go through the 
trouble of downloading the structure factors, installing ccp4/coot, 
etc.


Fred.


They shouldn't have to go through some of that trouble.  Maps should 
be deposited.


Even if you have good crystallography knowledge can you exactly 
reproduce the map the authors were looking at?  Software algorithms 
change over the years... the software version the authors used might 
not even be compilable on modern systems... some authors may not 
fully specify all software settings required to get the same map 
(perhaps they used NCS but you have to re-determine the NCS 
yourself)... etc.





--
Eric Bennett, er...@pobox.com


Re: [ccp4bb] Rfree in similar data set

2009-09-24 Thread Eric Bennett

Ian Tickle wrote:


For that to
be true it would have to be possible to arrive at a different unbiased
Rfree from another starting point.  But provided your starting point
wasn't a local maximum LL and you haven't gotten into a local maximum
along the way, convergence will be to a unique global maximum of the LL,
so the Rfree must be the same whatever starting point is used (within
the radius of convergence of course).


But if you're using a different set of data the minima and maxima of 
the function aren't necessarily going to be in the same place.  Rfree 
is supposed to inform about overfitting.  In an overfitting situation 
there are multiple possible models which describe the data well and 
which overfit solution you end up with could be sensitive to the data 
set used.  The provisions that you haven't gotten stuck in a local 
maximum and are within radius of convergence don't seem safe 
considering historical situations that led to the introduction of 
Rfree.  What algorithm is going to converge main chain tracing errors 
to the correct maximum?  Thinking about that situation, isn't part of 
the goal of Rfree to give you a hint in situations where you have, in 
fact, gotten stuck in a local maximum due to a significant error in 
the model that places it outside the radius of convergence of the 
refinement algorithm?



-Eric

--


Re: [ccp4bb] images

2009-03-21 Thread Eric Bennett

Kay Diederichs wrote:

In this case the structure factors were deposited, but these do not 
have a column for the anomalous signal. Re-refinement with these 
structure factors was inconclusive.


If I could have downloaded the images, I could have investigated 
this easily, because there's a large difference in the f'' of those 
two metals.


So to me access to images sometimes may help to answer a scientific question.



I would add a plea to those considering an image deposition system: 
accept MAPS too!


At the very least it would be nice to see the initial and final maps 
the crystallographer used.  Even if I have the structure factors I'm 
not necessarily an expert on the ins and outs of what someone had to 
do to refine a twinned or otherwise troublesome structure, and I 
don't want to have to learn the specific refinement program you used 
to be able to reproduce the exact map you saw.  (For very old 
structures, it may no longer be possible to compile the specific 
version of the refinement software on new processors/OSes, or someone 
may have used a commercial refinement package.)  And of course, there 
are non-crystallographers who use structures and it is absurd to 
expect them to learn x-ray refinement to see the relevant map for a 
twinned structure that EDS couldn't process.


I love EDS.  But even though it usually has a map for a given 
structure, seeing the actual map generated by the crystallographer 
who was the expert on the project would be better.  Once a system is 
designed that is large enough to handle images, maps would not 
significantly increase the required storage space.  I've poked a 
couple people to suggest that even now the PDB ought to be accepting 
maps.



--
-Eric


Re: [ccp4bb] fake images

2009-03-20 Thread Eric Bennett

Bernhard Rupp wrote:


I only scratched the surface and I think it would be hard work to fake
the images in a way that later expert forensics would
not readily provide evidence. Also, there are 'watermarks' available from
cryptographic methods that are even 'post-processing' resistant.



A practical question is, even if it could be detected in theory, is 
anyone looking for it in practice?  How many journal reviewers are 
going to audit this information?


Today some structures still don't get deposited with structure 
factors.  Not all journals enforce deposition policies well.  Even if 
some future image repository can do automated image auditing, people 
who want to commit overt fraud will pick a journal that doesn't 
enforce deposition requirements, and decline to provide their images.


A watermark or statistical analysis won't help you if you don't have 
access to the image.  This discussion started regarding a place for 
people who _want_ to deposit images; unfortunately some people aren't 
going to do it voluntarily.


-Eric


--


Re: [ccp4bb] stereo with Nvidia Quadro

2008-05-08 Thread Eric Bennett

Dear David,

thank you for your answer. Do you (or anyone on the list) know if 
any FX-card would work with NuVision 60GX (like FX370, FX570), or 
only the dear ones (FX1400 and above)?


Nvidia has been progressively dropping support from their lower-end 
cards.  For example the older FX1100 has stereo, but the newer FX1700 
does not.  Currently you have to buy the FX 3500 or above to get 
stereo.  Look at this chart, under the row "Display Connectors", for 
cards which have "Stereo" listed:


http://www.nvidia.com/object/IO_11761.html



The 3500 series is moderately expensive.  If you are on a budget 
maybe you could snag an older used card online somewhere.




--
--
Eric Bennett

Hofstadter's Law: It always takes longer than you expect,
even when you take into account Hofstadter's Law.