Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Antony Oliver
Dmitry... everyone is of course entitled to their opinion - but as one of the 
brain-washed masses I feel I need to at least reply (!). Sorry Cara...

 I am not happy with the direction OS X is going. Too much emphasis on eye 
 candy and not enough on underlying technology.

Fair enough, but it does seem to be the way that most of the UIs/OSes are 
going. Erm, Windows 8, Unity...

Also, Apple are *constantly* innovating under the hood - albeit mainly in hardware 
- Lightning connectors, Fusion Drives, etc. ... so they are one of the few 
computer manufacturers actually innovating at all (!)

 ZFS (long ago), Xgrid and X11 have been ditched, which I find disturbing. I 
 don't see Apple investing in computers given current revenue from that 
 sector.

I sort of agree with you here - but on the other hand at least it is still a 
Unix variant underneath and we still have XQuartz.  X11/the X Window System is itself 
becoming quite an old beast and I'm pretty sure there's never going to be an 
X12. Hence the probable rise and prevalence of Qt - so you can run any OS you 
like...

 Linux in a virtual machine of your choice might be a better bang for the 
 buck. Or, Windows in a virtual machine on a Linux box for that matter.

Or (controversially) a Mac running Linux and Windows in virtual boxes!?

Tony.

Sent from my iPhone

On 23 Jan 2013, at 03:03, Dmitry Rodionov 
d.rodio...@gmail.com wrote:

AFAIK there is no problem mixing and matching RAM with different timings: the system will 
run at the speed of the slowest module.
I don't think anybody will notice a CAS-latency difference while Coot'ing and 
Refmac'ing.

I don't think there is much sense in having more than 4 GB of RAM per physical 
core on a Mac.
The majority of the Mac flock does not really care where the RAM modules come 
from.
As for Mac Pros - they use ECC RAM with proprietary heat sensors, so that's a 
completely different story. You can still use generic ECC RAM in a Mac Pro, at 
the cost of the fan being stuck in hurricane mode.

The bottleneck of pretty much any modern system is the HDD. Apple-branded HDDs 
were known to have somewhat modified firmware, causing problems at times 
(mostly with AppleRAID, if not using an Apple-branded HDD).
An end user most definitely will notice SSD vs. HDD, which brings up TRIM 
support on OS X, which is limited to controllers sold by Apple.

Upgradeability-wise Apple is not the way to go in any case.

DISCLAIMER:  The rest may be much more inflammatory.

Personally, I am not convinced OS X and Apple are the way to go long term (having 
been surrounded by Macs for the past 4-5 years).
I am not happy with the direction OS X is going. Too much emphasis on eye candy 
and not enough on underlying technology.
ZFS (long ago), Xgrid and X11 have been ditched, which I find disturbing. I 
don't see Apple investing in computers given current revenue from that sector.

Linux in a virtual machine of your choice might be a better bang for the buck. 
Or, Windows in a virtual machine on a Linux box for that matter.

Don't kick me,
DIR



On 2013-01-22, at 7:22 PM, Bryan Lepore 
bryanlep...@gmail.com wrote:

On Tue, Jan 22, 2013 at 1:40 PM, Phil Jeffrey 
pjeff...@princeton.edu wrote:
I don't think that anybody has shown a significant performance difference on 
Apple memory vs a reasonable 3rd party supplier.  Apple may potentially have 
better quality controls but places like Crucial essentially have lifetime 
warranties on their memory.  I use Crucial at home and at work. [...]

sure, I agree with all this

the only other point I really wanted to make is to be cautious when configuring 
a computer on the Apple website, where they might say for memory DDR3 ECC 
SDRAM (I checked this for a Mac Pro just now), but that is, from what I can tell, 
a non-obvious way of selling only high-end memory, when e.g. different CAS 
latencies are available elsewhere - again, it is not obvious what their CL is (perhaps 
it is listed somewhere), and other specs may differ as well.



Re: [ccp4bb] Off-topic: ITC - what to buy?

2013-01-23 Thread Hernani Silvestre
Hi

Those are the only two brands out there that I know of too.

Depending on what sort of experiments you will be doing (high, medium or
low throughput? Full characterisation?), one model might be better suited
than the other.
I would consider the TTP Labtech ChipCAL too.
Not wanting to introduce any biased review here, what I have seen is that
labs go for the GE Healthcare ITC, the iTC200.
So far I have not met a research group using TA Instruments, or anyone
appraising them.

TA Instruments do make some comparisons between their calorimeter and GE
Healthcare's.
It would be good if someone who is using TA Instruments could join this
thread and share their opinions on it.

Leo

On 22 January 2013 15:41, Wulf Blankenfeldt 
wulf.blankenfe...@uni-bayreuth.de wrote:

 Dear all:

 we are looking into buying an isothermal titration calorimeter, but are
 only aware of two manufacturers: TA Instruments and MicroCal (GE
 Healthcare). Are there any more?

 Do you have any recommendations (brand, model etc.) for us?

 Thanks in advance,


 Wulf

 --
 Prof. Dr. Wulf Blankenfeldt
 NW-I, 2.0.U1.09                                 Universitaet Bayreuth
 fon: +49-(0)921-55-2427                         Lehrstuhl fuer Biochemie
 fax: +49-(0)921-55-2432                         Universitaetsstrasse 30
 e-mail: wulf.blankenfeldt [at] uni-bayreuth.de  95447 Bayreuth
 web: www.biochemie.uni-bayreuth.de              Germany



[ccp4bb] Hi clashscore

2013-01-23 Thread supratim dey
Hi

I am refining my 2 Angstrom structure using the Windows version of Phenix.
After many refinement rounds my clashscore is not decreasing at all and stays at a
value of 12. Can anybody suggest how to reduce the clashscore? Is there any
technique to do it, or do I have to deal with each individual clash
mentioned in the list manually?

Supratim


Re: [ccp4bb] Mac mini advice

2013-01-23 Thread James Stroud
On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.

That's because NFS and NIS are antiquities left over from the days of 
mainframes. Distributed file systems and user information databases are 
designed for an environment of many workers and few machines, when the typical 
graphics workstation cost $50,000. These days, we argue whether to spend an 
extra $200 on a $500 computer. We have moved to a new paradigm: many workers 
with many more machines, with each machine having essentially mainframe levels 
of storage and computing power. In other words, instead of NFS, you should run 
git.
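
To make that concrete, here is a minimal sketch (purely illustrative, assuming git and 
Python 3.5+ are installed and configured with a user name/email; the paths and remote 
URL are hypothetical, and large raw-image datasets would still be better kept outside git):

    # Illustration only: versioning a project directory with git, driven from Python.
    import subprocess

    def git(*args, cwd=None):
        # Run a git command and raise if it fails.
        subprocess.run(["git", *args], cwd=cwd, check=True)

    project = "/home/user/lysozyme_project"          # hypothetical local working copy
    remote_url = "user@server:/repos/lysozyme.git"   # hypothetical central bare repo

    git("init", project)
    git("add", "-A", cwd=project)
    git("commit", "-m", "snapshot of models, logs and scripts", cwd=project)
    git("remote", "add", "origin", remote_url, cwd=project)
    git("push", "-u", "origin", "HEAD", cwd=project)   # pushes whatever branch is current
    # On a second machine: git clone user@server:/repos/lysozyme.git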

James


Re: [ccp4bb] protein solubility predictions

2013-01-23 Thread Alberto Manfrin
Hi Careina,

If your protein is expressed in E. coli, you can get a prediction of its
solubility from this website: http://biotech.ou.edu

Hope it helps!

Alberto Manfrin

From:  Careina Edgooms careinaedgo...@yahoo.com
Reply-To:  Careina Edgooms careinaedgo...@yahoo.com
Date:  Tue, 22 Jan 2013 04:22:44 -0800
To:  CCP4BB@JISCMAIL.AC.UK
Subject:  [ccp4bb] protein solubility predictions

Dear ccp4

Apologies for the off topic question. I was wondering whether anyone could
suggest a good tool or methodology to use to predict protein solubility and
ability to fold from the sequence? I am working with a large protein of
multiple domains. I would like to work with as close to the full length
protein as possible without affecting its solubility and ability to fold
correctly. I know there are web based tools where you can upload a sequence
and see the predicted solubility but I wonder if there is any good strategy
to use to determine how best to construct the truncated protein? ie which
parts of the sequence to keep and which to remove so as to maximise
solubility. Also I have an eye to crystallising this protein in the future
and I wonder if there are any specific things I should look out for with
that in mind? I'm sure minimising flexible loops is one such thing.
All help appreciated
Best
Careina




Re: [ccp4bb] Hi clashscore

2013-01-23 Thread Tim Gruene

Dear Supratim,

I am afraid so, yes, you'll have to do it manually. Model building is
the other half of creating a model, and time-wise surely the more
time-consuming half. On the other hand it is also the more scientific one,
where you feel less reduced to somebody just pushing buttons, so enjoy!

You might try ARP/wARP. At 2 Å it should produce a pretty good result.

Regards,
Tim

On 01/23/2013 09:47 AM, supratim dey wrote:
 Hi
 
 I am refining my 2 angstrom strucutre using phenix windows based
 software. After many refinements my clashscore is not at all
 reducing and showing a value of 12. Can anybody suggest how to
 reduce the clashscore. Is there any technique to do it. Or i have
 to deal with each individual clashes mentioned in the list manually
 ?
 
 Supratim
 

--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen

GPG Key ID = A46BEE1A



Re: [ccp4bb] Hi clashscore

2013-01-23 Thread Robbie Joosten
Hi Supratim,

The clashscore gives the relative number of clashes, not their severity.
This makes it difficult to see what your specific problem is. Severe clashes
(with large overlaps) are usually the result of errors in your model and
need individual attention. Light bumps can usually be solved by optimizing
the refinement parameters. Overrestraining bonds and angles can cause a lot
of clashes.
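
To make the definition concrete, a minimal sketch in Python (an illustration of the 
idea only, not the MolProbity code; it assumes the usual 0.4 Å overlap cutoff):

    # Illustrative definition: clashscore = serious overlaps per 1000 atoms.
    def clashscore(overlaps, n_atoms, cutoff=0.4):
        """overlaps: van der Waals overlaps in Angstrom, one per clashing atom pair."""
        n_clashes = sum(1 for o in overlaps if o >= cutoff)
        return 1000.0 * n_clashes / n_atoms

    # Two models with the same score can have very different worst offenders:
    print(clashscore([0.45] * 12, 1000))  # 12.0 - twelve mild bumps
    print(clashscore([0.90] * 12, 1000))  # 12.0 - twelve severe clashes, same score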

Cheers,
Robbie

 -Original Message-
 From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of
 supratim dey
 Sent: Wednesday, January 23, 2013 09:47
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: [ccp4bb] Hi clashscore
 
 Hi
 
 I am refining my 2 angstrom strucutre using phenix windows based software.
 After many refinements my clashscore is not at all reducing and showing a
 value of 12. Can anybody suggest how to reduce the clashscore. Is there
any
 technique to do it. Or i have to deal with each individual clashes
mentioned in
 the list manually ?
 
 Supratim


Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Anastassis Perrakis
I am of the opinion that the truth lies somewhere in between ...

Here are my two cents based on personal experience ...

For example, I am happy myself using a MacBook Pro, which is sufficient for all 
my activities, and has all the software and data that I need.
Thus, I am myself on the 'new' paradigm side, having a machine with mainframe 
levels of storage and computing power (I do not run git,
but Time Machine on a Mac has the bits I need from the git idea - as far as I 
know git, that is).

In the department, we have about 20-25 scientists. These people need to 
maintain and be proficient in many software suites, many more than
a traditional crystallographer (like me in my PhD time, for example) would need:
vector design software for cloning, databases for keeping track of clones, 
sequence viewing software for their clones,
interfaces for crystallisation and biophysical equipment, analysis suites like 
Graphpad/Prism, Origin, Kintek (etc. etc.) for biophysical experiments ... 
and let's not forget SAXS software ... Our experience is that most of 
these people like to use a Windows workstation for these (the choice is free),
others prefer a Mac; that's not my point here. Much of that software also needs 
to be maintained by these people...
Also, for a variety of reasons which have to do with IT support restrictions, 
the Windows machines
we have for them are mis-configured with ancient versions of the Windows 
operating system - still OK for many things, but not for really 
straightforward use of CCP4/Phenix ...

My point here is that these people are less likely to be keen on the idea of 
also installing and running CCP4/Coot/Phenix/Buster on their machines 
(though they do use PyMOL/YASARA/Chimera locally, since they can then copy/paste into 
their presentations and papers). 
So, we find it useful to keep an old-fashioned setup running in parallel: Linux 
boxes, hooked to Zalmans or really
big or double LCDs, in a specific room ... People like these for data 
processing; all data from many years back are online, incremental backup is 
running, etc.
For historical reasons we even run NFS/NIS there (I agree it's not a great 
choice if one were starting now).

My conclusion and advice for labs or departments that have more than 5-6 
people, and are doing crystallography but not as their
full-time business, is that besides personal PCs/Macs, a common room with a few 
relatively powerful machines with nice, big, double
screens, likely also stereo, is useful for a few reasons:

1. Easier to make sure everybody is using the same software, more or less
2. Same machine for everybody - not the situation where a new student gets a new 
machine at year 0, which is obsolete by graduation time at +3 years (...or 
+5,6,7...)
3. Mixing of people in the room and the ability for people to look over the 
shoulders of others - the point that my colleague Titia Sixma always favours,
which has indeed proved great for teaching others and learning from others.
4. Centralised real backup, and availability of diffraction data on-line with 
fewer mounts...

For these machines, centralised user account information and 'home' sharing is 
in my view essential, as it allows you to blindly choose any of the
machines that is available at the time ... and, being a Mac fan, I think Linux 
is better suited for that purpose, financially and practically...

That said, it reminds me that we need to update the OS, buy a few new 
machines, new LCDs ... argh.

Sorry if this lecture was outside the scope of the original thread.

Tassos



On 23 Jan 2013, at 9:54, James Stroud wrote:

 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of 
 mainframes. Distributed file systems and user information databases are 
 designed for an environment of many workers and few machines, when the 
 typical graphics workstation cost $50,000. These days, we argue whether to 
 spend an extra $200 on a $500 computer. We have moved to a new paradigm: many 
 workers with many more machines, with each machine having 
 essentially mainframe levels of storage and computing power. In other words, 
 instead of NFS, you should run git.
 
 James


Re: [ccp4bb] protein solubility predictions

2013-01-23 Thread Anastassis Perrakis
Indeed there are many web tools predicting solubility.

My personal bias is that your brain is the best tool for ideas on how to design 
expression constructs
(since one question with a multi-domain protein is which bit is interesting ...!).
Many brains are better than one (talk to your colleagues - even to your 
supervisor! - for ideas).

A tool we like (since we developed it ...) that helps to combine information from 
many other tools and helps us make decisions
(and order PCR primers to implement those decisions) is available at:

http://xtal.nki.nl/ccd

Another piece of advice: whatever constructs you try, try them in many vectors - I 
could give another self-plug here,
but I will not. There are many good systems that allow you to put the same PCR 
product into many vectors for expression.

A.



 From: Careina Edgooms careinaedgo...@yahoo.com
 Reply-To: Careina Edgooms careinaedgo...@yahoo.com
 Date: Tue, 22 Jan 2013 04:22:44 -0800
 To: CCP4BB@JISCMAIL.AC.UK
 Subject: [ccp4bb] protein solubility predictions
 
 Dear ccp4
 
 Apologies for the off topic question. I was wondering whether anyone could 
 suggest a good tool or methodology to use to predict protein solubility and 
 ability to fold from the sequence? I am working with a large protein of 
 multiple domains. I would like to work with as close to the full length 
 protein as possible without affecting its solubility and ability to fold 
 correctly. I know there are web based tools where you can upload a sequence 
 and see the predicted solubility but I wonder if there is any good strategy 
 to use to determine how best to construct the truncated protein? ie which 
 parts of the sequence to keep and which to remove so as to maximise 
 solubility. Also I have an eye to crystallising this protein in the future 
 and I wonder if there are any specific things I should look out for with that 
 in mind? I'm sure minimising flexible loops is one such thing.
 All help appreciated
 Best
 Careina



Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Ashley Buckle
Cara you have re-ignited the perennial Mac v PC debate!!! You'll be asking 
about depositing raw diffraction data next ;)

Cheers
Ashley

Sent from my iPhone

On 23/01/2013, at 10:29 PM, Anastassis Perrakis a.perra...@nki.nl wrote:

 I am of the opinion that the truth lies somewhere in between ...
 
 Here are my two cents based on personal experience ...
 
 For example, I am happy myself using a MacBook Pro, which is sufficient for 
 all my activities, and has all software and data that I need.
 Thus, I am myself on the 'new' paradigm side, having a machine with 
 mainframe levels of storage and computing power (I do not run git,
 but time machine in a mac has the bits I need from the git idea - as far as I 
 know git that is).
 
 In the department, we have about 20-25 scientists. These people need to 
 ''maintain and be proficient in many software suites, many more than
 a traditional crystallographer (like me in my PhD time for example) would 
 need:
 vector design software for cloning, databases for keeping track of clones, 
 sequencing viewing software for their clones, 
 interfaces for crystallisation and biophysical equipment, analysis suites 
 like Graphpad/Prism, Origin, Kintek (etc etc)  for biophysical experiments 
 ... 
  and lets not forget SAXS software ... Our experience, is that most of 
 these people like to use a Windows workstation for these (the choice is free),
 others prefer a Mac, thats not my point here. Many of that software also 
 needs to maintained by these people...
 Also, for a variety of reasons which have to do with IT support 
 restrictions, the Windows machines
 we have for them are miss-configured with ancient versions of the windows 
 operating system, but still Ok for many things, but not for really 
 straightforward use of CCP4/Phenix ...
 
 My point here is, that these people are less likely to be keen of the idea to 
 also install and run ccp4/coot/phenix/buster in their machines 
 (they use pymol/yassara/chimera though locally since they can copy/paste to 
 their presentations and papers then). 
 So, we find it useful to keep an old fashioned setup running in parallel. 
 Linux boxes, hooked to Zalmans or really
 big or double LCDs, in a specific room ... People like these for data 
 processing, all data of many years back are online, incremental backup is 
 running etc.
 For historical reasons we even run NFS/NIS there (I agree its not a great 
 choice if one would start now).
 
 My conclusion and advice for labs or departments that have more than 5-6 
 people, and are doing crystallography but not as their
 full-time business is that besides personal PC/Mac, a common room with a 
 few relatively powerful machines with nice, big, double,
 screens, likely also Stereo, is useful for a few reasons:
 
 1. Easier to make sure everybody is using the same software more or less
 2. Same machine to everybody - not the situation that a new student gets a 
 new machine at year 0, which is redundant by graduation time at +3 years 
 (...or +5,6,7...)
 3. Mixing of people in the room and ability for people to look over the 
 shoulder of others, the point that my colleague Titia Sixma always favours,
 which has indeed proved great for teaching others and learning from others.
 4. Centralised real backup, availability of diffraction data on-line with 
 less mounts...
 
 For these machines, centralised user account information and 'home' sharing 
 is in my view essential, as it allows to blindly choose any of the
 machines that is available at a time ... and, being a Mac fun, I think Linmux 
 is better suited for that purpose, financially and practically...
 
 These said, it reminds me that we need to update the OS, buy a few new 
 machines, new LCDs ... argh.
 
 Sorry of this lecture was outside the scope of the original thread.
 
 Tassos
 
 
 
 On 23 Jan 2013, at 9:54, James Stroud wrote:
 
 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of 
 mainframes. Distributed file systems and user information databases are 
 designed for an environment of many workers and few machines, when the 
 typical graphics workstation cost $50,000. These days, we argue whether to 
 spend an extra $200 on a $500 computer. We have moved to a new paradigm: 
 many workers with many more machines, with each machine having 
  essentially mainframe levels of storage and computing power. In other words, 
 instead of NFS, you should run git.
 
 James


Re: [ccp4bb] Mac mini advice (rapidly veering off topic)

2013-01-23 Thread Chris Richardson
In the great internet tradition I'll chip in with my opinion even though it 
doesn't differ substantially from those already stated earlier in the thread.

I'm responsible for a mixed bag of Windows, Linux and Mac boxen used for 
crystallography and structural electron microscopy.

In terms of setting up the infrastructure and getting things working, Linux 
wins by a mile.  It's easy to buy cheap, powerful machines as desktops or 
servers.  There's a wealth of resources to help the sysadmin.  The ubiquity of 
Linux means that if you're trying to do something, there's almost always 
someone who has done it already and posted about it on a forum or blog.

In terms of getting people to use the machines, OS X wins hands down.  Putting 
people in front of Linux machines requires more hand-holding on my part, 
especially for people who are inexperienced or even afraid of computers.  This 
is becoming more of an issue: gone are the days when most crystallographers had 
a favourite shell and were happy editing scripts.  Linux has come on in leaps 
and bounds but it's still not quite user-friendly enough; even something as 
simple as copying and pasting still doesn't have a uniform implementation.  
People familiar with commercial software struggle with the open source 
equivalents.

I won't cover Windows as I have an irrational hatred of it.

Having said all that, I too am worried about the way Apple is going.  They 
haven't released a proper Mac Pro upgrade in ages, and seem to be concentrating 
on making iMacs as wafer-thin as possible at the expense of other practical 
considerations.  Their move to using the App Store for everything has 
concentrated on individual users and hasn't included support for corporate 
IT. Support for NFS and other UNIXy under-the-hood features is changed or 
dropped seemingly on a whim with no real documentation.  Their habit of making 
every change or new release a big surprise makes it difficult to plan for the 
future.

I'm glad I've got that off my chest.  Now I can do something productive.

Chris
--
Dr Chris Richardson :: Sysadmin, structural biology, icr.ac.uk



Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Peter Keller
On Wed, 2013-01-23 at 01:54 -0700, James Stroud wrote:
 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
  The real difficulty is integrating Macs into a
  Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of
 mainframes. Distributed file systems and user information databases
 are designed for an environment of many workers and few machines, when
 the typical graphics workstation cost $50,000. These days, we argue
 whether to spend an extra $200 on a $500 computer. We have moved to a
 new paradigm: many workers with many more machines, with each machine
 having essentially mainframe levels of storage and computing power.

Technically there is something in what you say as a pattern for
day-to-day work (for some people, although not all), but I think that
describing the debate in terms of modern vs. antiquated is missing the
point completely. The real difference between local vs. centralised
storage is to do with responsibility for the hardware and the data that
it contains.

Local workstation storage is OK for the following kinds of cases:

(i) the data that are stored locally have no value, so it doesn't matter
if they are lost (either through hardware failure, misbehaving software
or accidental deletion).

(ii) the user has the expertise and the time to set up and maintain a
strategy for recovering data that are lost from local disks

(iii) the institution that the user works for allows the user to include
data on local workstation disks in the institution's regular backup
operations

When none of these apply, there is a real, contemporary case for using
something like NFS, where the storage is centrally maintained and backed
up. The cost of storage has fallen of course, but what that means is
that the real questions now are about the value of the data. In some
fields, you could store your entire career's data on a few USB memory
sticks, but I doubt that many people would want to do that without
having made other copies somewhere else, and the same applies to local
workstation storage too :-).

There are other considerations in favour of connecting a workstation to
networked services: if you use more than one machine it can be an
incredible pain to be constantly moving data around from one to the
other, and to keep track of what the authoritative versions are. Having
independent, local user id's and passwords on every workstation can also
cause difficulties. I could go on

 In other words, instead of NFS, you should run git.

This is simply not an option for many crystallographers, who do not have
a background in software development or data management. Advocating and
supporting git (or indeed any content/version management system) for
those kind of users is a losing battle: they see it as an unnecessary
complication to their daily work, and will avoid using it as far as they
can.

Regards,
Peter.

-- 
Peter Keller Tel.: +44 (0)1223 353033
Global Phasing Ltd., Fax.: +44 (0)1223 366889
Sheraton House,
Castle Park,
Cambridge CB3 0AX
United Kingdom


[ccp4bb] ATP binding

2013-01-23 Thread Jan Rashid Umar
Dear All ,

Could anybody suggest the best possible way to measure the half-life
of a protein-ATP complex? Our protein binds ATP and we would like to
measure the complex's half-life after it is bound to radiolabelled ATP. Looking
forward to your valuable suggestions.

Best wishes,

Jan


[ccp4bb] Two PhD Positions in Structural Biology, Research Centre Juelich Germany

2013-01-23 Thread Joachim Granzin

At the Structural Biochemistry Institute (ICS-6, Forschungszentrum Juelich, 
Germany), we are looking for two highly motivated PhD students interested in 
biochemical and structural studies on photosignaling proteins and enzymes. The 
positions can be filled starting from March 2013.

The project is part of a strategic research initiative, Next generation of biotechnological 
methods - Biotechnology 2020+, funded by the Federal Ministry of Education and Research, in which 
several research groups from Research Centre Juelich (FZJ) and Heinrich-Heine-Universität 
Düsseldorf will combine their efforts to develop novel opto-sensors and photoregulatory proteins 
for the light-mediated analysis and control of molecular systems. FZJ, one of the largest 
interdisciplinary German research centers in Europe, is located close to Cologne, the Netherlands and 
Belgium and offers excellent opportunities for research in an international environment. Salaries 
are competitive (TVöD + allowance depending on the candidate's profile) and dissertations are 
usually completed within three years. Equal opportunities are a cornerstone of our staff policy, for 
which we have received the TOTAL E-QUALITY award.

Project details: Blue-light photoreceptors containing light-oxygen-voltage 
(LOV) domains regulate a myriad of different physiological responses in both 
eukaryotes and prokaryotes. Due to their intrinsic photochemical/photophysical 
properties, LOV proteins have proven useful as novel fluorescent reporters and 
blue-light sensitive photoswitches. Biotechnological applications include their 
use as real-time, oxygen-independent, flavin-based fluorescent reporter proteins 
(FbFPs), as a biological trap to produce flavin mononucleotide (FMN), as well as 
in the development of LOV-based optogenetic tools. In ICS-6, our group utilizes 
structure-based techniques. Specifically, protein expression, purification, 
crystallization, spectroscopy, X-ray crystallography and comparative modeling 
will be employed to design and characterize the photophysical properties of 
various FbFPs. The selected PhD students will also collaborate with partners 
from the network for the use of other molecular biology and fluorescence 
techniques.

More details about the actual research projects are available from the contact 
persons listed below.

PD Dr. Renu Batra-Safferling (Email: r.batra-safferl...@fz-juelich.de)
PD Dr. Joachim Granzin (Email: j.gran...@fz-juelich.de)

Requirements: Candidates should have a recent MSc or equivalent degree in 
Biochemistry, Biophysics, Chemistry or a related discipline. Skills in either 
protein chemistry & purification, or protein crystallography would be an asset.

Application: Please send your application to above listed contact persons via 
e-mail, including the following documents:
- Cover letter explaining the motivation for the position
- A complete CV
- Copy of exam certificate
- Names of two Referees (with phone no., email, and relation to the applicant)







Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Bosch, Juergen
I assume none of you is running an actual OS X server? I mean the upgrade to 
a full server version of the commonly distributed normal OS X releases?

I have not done it yet, but I do think many of the issues mentioned regarding 
NFS/NIS could be addressed there. Regarding the missing Mac Pro upgrades, I 
expect to see new machines with Thunderbolt connectivity in the next 4 months. 
And I will buy my third Mac Pro then to run it as a true server.

Jürgen 

Sent from my iPad

On Jan 23, 2013, at 5:21, Peter Keller pkel...@globalphasing.com wrote:

 On Wed, 2013-01-23 at 01:54 -0700, James Stroud wrote:
 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of
 mainframes. Distributed file systems and user information databases
 are designed for an environment of many workers and few machines, when
 the typical graphics workstation cost $50,000. These days, we argue
 whether to spend an extra $200 on a $500 computer. We have moved to a
 new paradigm: many workers with many more machines, with each machine
 having essentially mainframe levels of storage and computing power.
 
 Technically there is something in what you say as a pattern for
 day-to-day work (for some people, although not all), but I think that
 describing the debate in terms of modern vs. antiquated is missing the
 point completely. The real difference between local vs. centralised
 storage is to do with responsibility for the hardware and the data that
 it contains.
 
 Local workstation storage is OK for the following kinds of cases:
 
 (i) the data that are stored locally have no value, so it doesn't matter
 if they are lost (either through hardware failure, misbehaving software
 or accidental deletion).
 
 (ii) the user has the expertise and the time to set up and maintain a
 strategy for recovering data that are lost from local disks
 
 (iii) the institution that the user works for allows the user to include
 data on local workstation disks in the institution's regular backup
 operations
 
 When none of these apply, there is a real, contemporary case for using
 something like NFS, where the storage is centrally maintained and backed
 up. The cost of storage has fallen of course, but what that means is
 that the real questions now are about the value of the data. In some
 fields, you could store your entire career's data on a few USB memory
 sticks, but I doubt that many people would want to do that without
 having made other copies somewhere else, and the same applies to local
 workstation storage too :-).
 
 There are other considerations in favour of connecting a workstation to
 networked services: if you use more than one machine it can be an
 incredible pain to be constantly moving data around from one to the
 other, and to keep track of what the authoritative versions are. Having
 independent, local user id's and passwords on every workstation can also
 cause difficulties. I could go on
 
 In other words, instead of NFS, you should run git.
 
 This is simply not an option for many crystallographers, who do not have
 a background in software development or data management. Advocating and
 supporting git (or indeed any content/version management system) for
 those kind of users is a losing battle: they see it as an unnecessary
 complication to their daily work, and will avoid using it as far as they
 can.
 
 Regards,
 Peter.
 
 -- 
 Peter Keller Tel.: +44 (0)1223 353033
 Global Phasing Ltd., Fax.: +44 (0)1223 366889
 Sheraton House,
 Castle Park,
 Cambridge CB3 0AX
 United Kingdom


Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Anastassis Perrakis
We did work with a full-blown OS X Server in 2004 - indeed many issues with NFS 
were OK, but NIS was a problem - or we could not figure it out.
We used it as a server for developers, running Xgrid, SVN and WebObjects servers 
for a couple of EC networks, but never deployed it fully
as a departmental-level server, due to the NIS issues. For a while it also 
hosted the protein-CCD server. 
We wanted to move to Kerberos, if I recall correctly, but since my colleague 
Serge Cohen moved on and our IT are Mac-o-phobic, I opted to continue with 
Linux.

The server was decommissioned a few weeks ago, and will likely get a second 
life in Soleil/Ipanema, 
but I think as an electric heat generator - it is almost as good at that as my old 
8-processor Alpha (2000) ;-)

A.

On 23 Jan 2013, at 15:05, Bosch, Juergen wrote:

 I assume nobody of you is running an actual Osx server ? I mean the upgrade 
 to a full server version of the commonly distributed normal Osx releases ?
 
 I have not done it yet but I do think many of the issues mentioned regarding 
 NFS/NIS could be addressed there. Regarding the missing macpro upgrades I 
 expect to see new machines with thunderbolt connectivity in the next 4 
 months. And I will buy my third macpro then to run it as a true server.
 
 Jürgen 
 
 Sent from my iPad
 
 On Jan 23, 2013, at 5:21, Peter Keller pkel...@globalphasing.com wrote:
 
 On Wed, 2013-01-23 at 01:54 -0700, James Stroud wrote:
 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of
 mainframes. Distributed file systems and user information databases
 are designed for an environment of many workers and few machines, when
 the typical graphics workstation cost $50,000. These days, we argue
 whether to spend an extra $200 on a $500 computer. We have moved to a
 new paradigm: many workers with many more machines, with each machine
 having essentially mainframe levels of storage and computing power.
 
 Technically there is something in what you say as a pattern for
 day-to-day work (for some people, although not all), but I think that
 describing the debate in terms of modern vs. antiquated is missing the
 point completely. The real difference between local vs. centralised
 storage is to do with responsibility for the hardware and the data that
 it contains.
 
 Local workstation storage is OK for the following kinds of cases:
 
 (i) the data that are stored locally have no value, so it doesn't matter
 if they are lost (either through hardware failure, misbehaving software
 or accidental deletion).
 
 (ii) the user has the expertise and the time to set up and maintain a
 strategy for recovering data that are lost from local disks
 
 (iii) the institution that the user works for allows the user to include
 data on local workstation disks in the institution's regular backup
 operations
 
 When none of these apply, there is a real, contemporary case for using
 something like NFS, where the storage is centrally maintained and backed
 up. The cost of storage has fallen of course, but what that means is
 that the real questions now are about the value of the data. In some
 fields, you could store your entire career's data on a few USB memory
 sticks, but I doubt that many people would want to do that without
 having made other copies somewhere else, and the same applies to local
 workstation storage too :-).
 
 There are other considerations in favour of connecting a workstation to
 networked services: if you use more than one machine it can be an
 incredible pain to be constantly moving data around from one to the
 other, and to keep track of what the authoritative versions are. Having
 independent, local user id's and passwords on every workstation can also
 cause difficulties. I could go on
 
 In other words, instead of NFS, you should run git.
 
 This is simply not an option for many crystallographers, who do not have
 a background in software development or data management. Advocating and
 supporting git (or indeed any content/version management system) for
 those kind of users is a losing battle: they see it as an unnecessary
 complication to their daily work, and will avoid using it as far as they
 can.
 
 Regards,
 Peter.
 
 -- 
 Peter Keller Tel.: +44 (0)1223 353033
 Global Phasing Ltd., Fax.: +44 (0)1223 366889
 Sheraton House,
 Castle Park,
 Cambridge CB3 0AX
 United Kingdom


Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Dmitry Rodionov
We have 10.5 & 10.6 servers and briefly tested 10.7 server.
Last time I tried, an Ubuntu 12.04 box would not authenticate users registered on 
the OS X Open Directory server.
Before that, 10.04 clients would cause random user lockouts.
The NFS GUI is gone as of 10.7.

Regards,
Dmitry

On 2013-01-23, at 9:05 AM, Bosch, Juergen wrote:

 I assume nobody of you is running an actual Osx server ? I mean the upgrade 
 to a full server version of the commonly distributed normal Osx releases ?
 
 I have not done it yet but I do think many of the issues mentioned regarding 
 NFS/NIS could be addressed there. Regarding the missing macpro upgrades I 
 expect to see new machines with thunderbolt connectivity in the next 4 
 months. And I will buy my third macpro then to run it as a true server.
 
 Jürgen 
 
 Sent from my iPad
 
 On Jan 23, 2013, at 5:21, Peter Keller pkel...@globalphasing.com wrote:
 
 On Wed, 2013-01-23 at 01:54 -0700, James Stroud wrote:
 On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
 The real difficulty is integrating Macs into a
 Linux-centric environment, for example configuring NFS, NIS, etc.
 
 That's because NFS and NIS are antiquities left over from the days of
 mainframes. Distributed file systems and user information databases
 are designed for an environment of many workers and few machines, when
 the typical graphics workstation cost $50,000. These days, we argue
 whether to spend an extra $200 on a $500 computer. We have moved to a
 new paradigm: many workers with many more machines, with each machine
 having essentially mainframe levels of storage and computing power.
 
 Technically there is something in what you say as a pattern for
 day-to-day work (for some people, although not all), but I think that
 describing the debate in terms of modern vs. antiquated is missing the
 point completely. The real difference between local vs. centralised
 storage is to do with responsibility for the hardware and the data that
 it contains.
 
 Local workstation storage is OK for the following kinds of cases:
 
 (i) the data that are stored locally have no value, so it doesn't matter
 if they are lost (either through hardware failure, misbehaving software
 or accidental deletion).
 
 (ii) the user has the expertise and the time to set up and maintain a
 strategy for recovering data that are lost from local disks
 
 (iii) the institution that the user works for allows the user to include
 data on local workstation disks in the institution's regular backup
 operations
 
 When none of these apply, there is a real, contemporary case for using
 something like NFS, where the storage is centrally maintained and backed
 up. The cost of storage has fallen of course, but what that means is
 that the real questions now are about the value of the data. In some
 fields, you could store your entire career's data on a few USB memory
 sticks, but I doubt that many people would want to do that without
 having made other copies somewhere else, and the same applies to local
 workstation storage too :-).
 
 There are other considerations in favour of connecting a workstation to
 networked services: if you use more than one machine it can be an
 incredible pain to be constantly moving data around from one to the
 other, and to keep track of what the authoritative versions are. Having
 independent, local user id's and passwords on every workstation can also
 cause difficulties. I could go on
 
 In other words, instead of NFS, you should run git.
 
 This is simply not an option for many crystallographers, who do not have
 a background in software development or data management. Advocating and
 supporting git (or indeed any content/version management system) for
 those kind of users is a losing battle: they see it as an unnecessary
 complication to their daily work, and will avoid using it as far as they
 can.
 
 Regards,
 Peter.
 
 -- 
 Peter Keller Tel.: +44 (0)1223 353033
 Global Phasing Ltd., Fax.: +44 (0)1223 366889
 Sheraton House,
 Castle Park,
 Cambridge CB3 0AX
 United Kingdom


Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Sidhu, Khushwant S. (Dr.)
Hi,

I'm running a Mac mini server.
The file sharing seems to work fine - I'm not running NIS.

There is a lag in software starting up - up to 20-30 s - but once the software is 
loaded, it runs fine.

We did some benchmarking with Phaser last week & there was no perceivable 
difference in running it off the server or locally (on a Mac Pro).

Software may require some fiddling to ensure that paths point to where the 
software is mounted.

This may be referred to as 'hacking' … so I wouldn't do it :-)

Sid


Dr K S Sidhu
Department of Biochemistry
1/61 Henry Wellcome Building
Lancaster Road
Leicester
LE1 9HN

Tel: 0116 229 7237




On 23 Jan 2013, at 14:05, Bosch, Juergen 
jubo...@jhsph.edu wrote:

I assume nobody of you is running an actual Osx server ? I mean the upgrade to 
a full server version of the commonly distributed normal Osx releases ?

I have not done it yet but I do think many of the issues mentioned regarding 
NFS/NIS could be addressed there. Regarding the missing macpro upgrades I 
expect to see new machines with thunderbolt connectivity in the next 4 months. 
And I will buy my third macpro then to run it as a true server.

Jürgen

Sent from my iPad

On Jan 23, 2013, at 5:21, Peter Keller 
pkel...@globalphasing.com wrote:

On Wed, 2013-01-23 at 01:54 -0700, James Stroud wrote:
On Jan 22, 2013, at 11:20 PM, Nat Echols wrote:
The real difficulty is integrating Macs into a
Linux-centric environment, for example configuring NFS, NIS, etc.

That's because NFS and NIS are antiquities left over from the days of
mainframes. Distributed file systems and user information databases
are designed for an environment of many workers and few machines, when
the typical graphics workstation cost $50,000. These days, we argue
whether to spend an extra $200 on a $500 computer. We have moved to a
new paradigm: many workers with many more machines, with each machine
having essentially mainframe levels of storage and computing power.

Technically there is something in what you say as a pattern for
day-to-day work (for some people, although not all), but I think that
describing the debate in terms of modern vs. antiquated is missing the
point completely. The real difference between local vs. centralised
storage is to do with responsibility for the hardware and the data that
it contains.

Local workstation storage is OK for the following kinds of cases:

(i) the data that are stored locally have no value, so it doesn't matter
if they are lost (either through hardware failure, misbehaving software
or accidental deletion).

(ii) the user has the expertise and the time to set up and maintain a
strategy for recovering data that are lost from local disks

(iii) the institution that the user works for allows the user to include
data on local workstation disks in the institution's regular backup
operations

When none of these apply, there is a real, contemporary case for using
something like NFS, where the storage is centrally maintained and backed
up. The cost of storage has fallen of course, but what that means is
that the real questions now are about the value of the data. In some
fields, you could store your entire career's data on a few USB memory
sticks, but I doubt that many people would want to do that without
having made other copies somewhere else, and the same applies to local
workstation storage too :-).

There are other considerations in favour of connecting a workstation to
networked services: if you use more than one machine it can be an
incredible pain to be constantly moving data around from one to the
other, and to keep track of what the authoritative versions are. Having
independent, local user id's and passwords on every workstation can also
cause difficulties. I could go on

In other words, instead of NFS, you should run git.

This is simply not an option for many crystallographers, who do not have
a background in software development or data management. Advocating and
supporting git (or indeed any content/version management system) for
those kind of users is a losing battle: they see it as an unnecessary
complication to their daily work, and will avoid using it as far as they
can.

Regards,
Peter.

--
Peter Keller Tel.: +44 (0)1223 353033
Global Phasing Ltd., Fax.: +44 (0)1223 366889
Sheraton House,
Castle Park,
Cambridge CB3 0AX
United Kingdom



[ccp4bb] Crystallography Near Absolute Zero?

2013-01-23 Thread Jacob Keller
Is anyone aware of any datasets taken at near absolute zero? I was
wondering what would happen...

Jacob

-- 
***
Jacob Pearson Keller, PhD
Postdoctoral Associate
HHMI Janelia Farms Research Campus
email: j-kell...@northwestern.edu
***


Re: [ccp4bb] Crystallography Near Absolute Zero?

2013-01-23 Thread David Schuller

On 01/23/13 10:11, Jacob Keller wrote:
Is anyone aware of any datasets taken at near absolute zero? I was 
wondering what would happen...


Hanson BL, Schall CA, Bunick GJ. New techniques in macromolecular 
cryocrystallography: macromolecular crystal annealing and cryogenic helium. 
J Struct Biol. 2003 Apr;142(1):77-87.
http://www.ncbi.nlm.nih.gov/pubmed/12718921



--
===
All Things Serve the Beam
===
   David J. Schuller
   modern man in a post-modern world
   MacCHESS, Cornell University
   schul...@cornell.edu



Re: [ccp4bb] Mac mini advice

2013-01-23 Thread Chris Richardson
On 23 Jan 2013, at 14:05, Bosch, Juergen wrote:

 I assume nobody of you is running an actual Osx server ? I mean the upgrade 
 to a full server version of the commonly distributed normal Osx releases ?

At the moment we have two OS X servers.  One runs Open Directory for user 
authentication.  The other is an AFP server for network home directories.  They 
are elderly Xeon-based Xserves and will be retired as soon as possible.

We're getting rid of the authentication server because of unrelated changes to 
the way user authentication will be handled.

We're getting rid of the file server because it's old and needs replacing.  It 
will, however, be replaced by a Linux server.  In my opinion, Apple's 
abandonment of its rack-mountable server line shows a lack of commitment to 
servers.  The Promise RAID storage they resell is considerably more expensive 
than the equivalent from our Windows/Linux supplier.  Finally, the other Mac 
admins in my organisation haven't been saying nice things about Mountain Lion 
Server.  If you look at forums like AFP548.com it's apparent that many other 
people are moving away from OS X Server.

Moving to OS X servers does simplify some aspects of administration.  In the 
past, there was the added bonus that it was easy to get Linux and Windows 
machines to talk to a Mac server.  Creeping changes made by Apple since OS X 
Server 10.4 mean that this is less true.  In particular, the horrible way Apple 
sets up SMB makes its use a nightmare with other modern operating systems.

Chris
--
Dr Chris Richardson :: Sysadmin, structural biology, icr.ac.uk



[ccp4bb] 2013 ACA meeting - speakers for undergraduate research session

2013-01-23 Thread Roger Rowlett

CCP4BB members,

I am writing to recruit protein crystallographers who work with 
undergraduate students to consider submitting an abstract to present a 
short talk at the 2013 American Crystallographic Association Meeting 
(July 20-24, Honolulu, HI) in our session, (13.08) "Building Protein and 
Small Molecule Research Capacity at an Undergraduate Institution". We 
are particularly interested in talks that address curriculum 
development, best practices for integrating protein X-ray 
crystallography into undergraduate teaching and research, acquisition of 
relevant research equipment, and strategies for faculty research success 
and productivity at an undergraduate institution. In addition to sharing 
your expertise and networking with crystallography folks in the 
undergraduate teaching and research community, you can loll in the 
tropical sun as well. (For those of us in the northern U.S., this is 
quite appealing at this particular moment.)


If you are interested in presenting in our session at the ACA meeting, 
please submit an abstract ( 
http://www.amercrystalassn.org/2013-abstracts , *deadline March 31, 
2013*) or contact me for more information. The session is being 
organized by Kraig Wheeler (Eastern Illinois University) and me. We 
would like to have a similar session every year, and build a strong 
community at the ACA to promote undergraduate involvement in protein and 
small molecule crystallography. Last year's inaugural session was 
well-attended, and we hope to build on that effort in 2013.


Thanks for reading,
___
Roger S. Rowlett
Gordon & Dorothy Kline Professor
Department of Chemistry
Colgate University
13 Oak Drive
Hamilton, NY 13346

tel: (315)-228-7245
ofc: (315)-228-7395
fax: (315)-228-7935
email: rrowl...@colgate.edu



Re: [ccp4bb] Crystallography Near Absolute Zero?

2013-01-23 Thread Bryan Lepore
Coming next :

Negative Absolute Temperature for Motional Degrees of Freedom

http://m.sciencemag.org/content/339/6115/52





[ccp4bb] scala, cad, free flag

2013-01-23 Thread wtempel
Hello all,

please consider the following scenario of converting XDS data to MTZ,
preserving the free-R flag from an older MTZ file.
1. POINTLESS with input XDS_ASCII.HKL
2. CCP4i scaling: SCALA with the 'Copy FreeR from another MTZ' option,
implicitly using CAD; cad log attached.

I would expect my output cell dimensions to match exactly those
in XDS_ASCII.HKL. Should I not? The output cell dimensions, however, match
exactly those of the MTZ file that supplied the free flag. In this case,
the difference is just a couple of hundredths of an Angstrom, but could it happen in
cases of more severe deviations from isomorphism?
Did anyone else come across this behavior? Could it be intended?

Best regards,
Wolfram Tempel


cad.20130123.log
Description: Binary data


[ccp4bb] GPCR DOCK 2013 announcement

2013-01-23 Thread Maggie Gabanyi
Since there may be some on this BB who have modelers as colleagues...

On behalf of the PSI GPCR Network group:

--

Dear GPCR modeling and docking researcher,

We are writing to announce the 3rd round of the GPCR Docking and Modeling 
Assessment, GPCR Dock 2013, and to invite you to submit your predictions of 
GPCR-ligand structures for comparison prior to publication of the results. The 
first two GPCR Dock assessments were conducted in 2008 and 2010 and were based 
on the structures of A2A adenosine receptor (GPCR Dock 2008; NRDD 2009), 
dopamine D3 receptor and chemokine receptor CXCR4 (GPCR Dock 2010; Structure 
2011). As before, the results from GPCR Dock 2013 will be published once an 
analysis is complete.

The present round of the assessment will be focused on four target complexes. 
Participants can choose to predict any one, two, three, or all four targets. 
Predicting all four targets is strongly encouraged. Registered participants 
will receive receptor and ligand information at midnight (Pacific standard 
time) on Feb 1st, and will have until midnight (PST) March 3rd (30 days) to 
deposit models. Further information about the assessment and registration forms 
to participate can be found at http://gpcr.scripps.edu/GPCRDock2013   

If you have any questions, please email us (gpcrd...@scripps.edu); we will try to 
get back to you quickly and will also post any general comments to all 
registered participants.

GPCR Network

Ray Stevens
Ruben Abagyan
Irina Kufareva
Angela Walker
Seva Katritch







Margaret J. Gabanyi, Ph.D.
Asst. Research Professor, Department of Chemistry and Chemical Biology 
Sr. Outreach Coordinator, PSI Structural Biology Knowledgebase 
Rutgers, The State University of New Jersey
174 Frelinghuysen Rd
Piscataway, NJ 08854-8076
phone: 848-445-4932
gaba...@rcsb.rutgers.edu

Discover more at http://sbkb.org
--


[ccp4bb] off topic - legacy hardware help needed

2013-01-23 Thread Dave Roberts

Hi all,

By the way, thanks for all the suggestions on the Linux versions.  I 
went against my better judgement and just stuck with Fedora, mainly 
because I'm familiar with it.  I have to admit, I kind of like it. I was 
able to get it up and running, run NFS to mount local drives, and 
install all the necessary crystallography software with no hitch - 
quick.  It's kind of nice.  And it set up my wireless printer 
automatically - so all is great.


Anyway, we have an old Indigo SGI that runs our NMR.  It's a console-only 
system, and we access it via the network from another old SGI 
(toaster model - blue).  The console does not have a video card (nor 
space for one), so I can't plug in to it and see what's happening.


Anyway, our network was recently updated, and in doing so it has made 
access to our console system unavailable.  We can't get there because 
the IPs that used to be needed no longer exist.


So, I can get the disk out, and I have a variety of Unix/Linux systems 
that I could plug it into.  But, alas, I have no motherboards or 
systems that take SCSI (that I have a sled for or a way to put it in). I 
need to be able to mount the drive on some sort of system, edit a few 
config files to fix the network, then plug it all back in.  All without 
messing up boot tables and such (not a big deal, just thought I'd throw 
that out there).


Is there a cable that simply allows me to plug into the back of a SCSI 
drive and then connect to an IDE port on a newer motherboard (or better yet, 
an external USB port)?  Just curious - that would be worth it to me.


Any thoughts?

Thanks

Dave


Re: [ccp4bb] off topic - legacy hardware help needed

2013-01-23 Thread Johan Hattne
On 23 Jan 2013, at 16:07, Dave Roberts drobe...@depauw.edu wrote:

 Anyway, we have an old Indigo SGI that runs our NMR.  It's a console only 
 system, and we access it via the network from another old SGI (toaster model 
 - blue).  The console does not have a video card (nor space for one), so I 
 can't plug in to it and see what's happening.
 
 Anyway, our network was recently updated, and in doing so it has made access 
 to our console system unavailable.  We can't get there because the IP's that 
 used to be needed are no more.

I'm not familiar with the Indigo, but I presume it has a serial port over which 
you could log in to the machine.  Connect the Indigo to an accessible machine 
with a serial cable (you can even get serial/USB adapters that sort-of work, 
too) and run something like minicom or screen.
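
If you end up scripting the connection, a minimal sketch in Python (this assumes the 
pyserial package is installed; the port name and baud rate below are guesses and will 
depend on your adapter and the Indigo's serial settings):

    # Illustration only: talk to a serial console from Python via pyserial.
    import serial  # pip install pyserial

    # /dev/ttyUSB0 and 9600 baud are placeholders - check your adapter and host.
    with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as console:
        console.write(b"\r")                 # nudge the console for a login prompt
        banner = console.read(1024)          # read whatever the machine sends back
        print(banner.decode(errors="replace"))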

// Cheers; Johan


  Postdoctoral Fellow @ Physical Biosciences Division
___
Lawrence Berkeley National Laboratory * 1 Cyclotron Rd.
Mail Stop 64R0121 * Berkeley, CA 94720-8118 * +1 (510) 495-8055