Dear John,

  Most sound institutional data repositories use some form of
off-site backup.  However, not all of them do, and the
standards of reliability vary.  The advantages of an explicit
partnering system are both practical and psychological.  The
practical part is the major improvement in reliability --
even if we start at 6 nines, 12 nines is better.  The
psychological part is that members of the community can
feel reassured that reliability has been improved to
levels at which they can focus on other, more scientific
issues, instead of the question of reliability.

  Regards,
    Herbert

=====================================================
 Herbert J. Bernstein, Professor of Computer Science
   Dowling College, Kramer Science Center, KSC 121
        Idle Hour Blvd, Oakdale, NY, 11769

                 +1-631-244-3035
                 y...@dowling.edu
=====================================================

On Sat, 29 Oct 2011, Jrh wrote:

Dear Herbert,
I imagine it likely that, e.g., the University of Manchester eScholar system will
have duplicate storage in place for the reasons you outline below. However, for
that storage to be geographically distant is, to my reckoning, less likely, though
still possible. I will add that further query to my first query to eScholar user
support regarding dataset sizes and DOI registration.
Greetings,
John
Prof John R Helliwell DSc



On 29 Oct 2011, at 15:49, "Herbert J. Bernstein" <y...@bernstein-plus-sons.com> 
wrote:

One important issue to address is how to deal with the perceived
reliability issues of the federated model and how to start to
approach the higher reliability of the centralized model described by
Gerard K, but without incurring what seem at present to be
unacceptable costs.  One answer comes from the approach followed in
communications systems.  If the probability of data loss in each
communication subsystem is, say, 1/1000, then the probability of data
loss in two independent copies of the same lossy system is only
1/1,000,000.  We could apply that lesson to the
federated data image archive model by asking each institution
to partner with a second independent, and hopefully geographically
distant, institution, with an agreement for each to host copies
of the other's images.  If we restrict that duplication protocol, at least at
first, to those images strongly related to an actual publication/PDB
deposition, the incremental cost of greatly improved reliability
would be very low, with no disruption of the basic federated
approach being suggested.
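
As a minimal sketch of that arithmetic (the per-site loss probabilities are
purely illustrative, as above):

    # Probability that all independent copies of a dataset are lost,
    # assuming losses at the partner sites are statistically independent.
    def combined_loss_probability(p_site, n_copies=2):
        return p_site ** n_copies

    print(combined_loss_probability(1.0 / 1000))  # 1e-06, i.e. 1/1,000,000
    print(combined_loss_probability(1e-06))       # 1e-12: "6 nines" becomes "12 nines"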

Please note that I am not suggesting that institutional repositories
will have 1/1000 data loss rates, but they will certainly have some
data loss rate, and this modest change in the proposal would help to
greatly lower the impact of that data loss rate and allow us to go
forward with greater confidence.

Regards,
 Herbert


At 7:53 AM +0100 10/29/11, Jrh wrote:
Dear Gerard K,
Many thanks indeed for this.
Like Gerard Bricogne, you also indicate that the decentralised location option is
'quite simple and very cheap in terms of centralised cost'. I hope the SR
facilities worldwide can follow the lead taken by Diamond Light Source and PaN,
the European Consortium of SR and Neutron Facilities, and keep their data
archives and also assist authors with the DOI registration process for those
datasets that result in publication. Linking to these DOIs from the PDB, for
example, is, as you confirm, straightforward.

Gerard B's pressing of the above approach via the 'Pilot project' within the
various IUCr DDD WG discussions, with a nicely detailed plan, brought home to me
the merit of the above approach for the even greater challenge of raw data
archiving for chemical crystallography, both in terms of the number of datasets
and because the SR facilities' role there is much smaller. IUCr Journals also
note the challenge of moving large quantities of data around, i.e. if the
Journals were to try to host everything for chemical crystallography, they would
thus become 'the centre' for these datasets.

So: universities are now establishing their own institutional repositories,
driven largely by the Open Access demands of funders. Hosting the raw datasets
that underpin publications is, in my view, a reasonable role for these
repositories, and indeed this category already exists in the University of
Manchester eScholar system, for example. I am set to explore locally whether
they would accommodate all of our lab's raw X-ray image datasets per annum that
underpin our published crystal structures.

It would be helpful if readers of this CCP4bb could kindly also explore with
their own universities whether they have such an institutional repository and
whether raw datasets could be accommodated. Please do email me off-list with
this information if you prefer, but within the CCP4bb is also good.

Such an approach involving institutional repositories would of course also work
for the ~25% of MX structures that are based on non-SR datasets.

All the best for a splendid PDB40 Event.

Greetings,
John
Prof John R Helliwell DSc



On 28 Oct 2011, at 22:02, Gerard DVD Kleywegt <ger...@xray.bmc.uu.se> wrote:

Hi all,

It appears that during my time here at Cold Spring Harbor, I have missed a 
small debate on CCP4BB (in which my name has been used in vain to boot).

I have not yet had time to read all the contributions, but would like to make a 
few points that hopefully contribute to the discussion and keep it with two 
feet on Earth (as opposed to La La Land where the people live who think that 
image archiving can be done on a shoestring budget... more about this in a bit).

Note: all of this is on personal title, i.e. not official wwPDB gospel. Oh, and 
sorry for the new subject line, but this way I can track the replies more 
easily.

It seems to me that there are a number of issues that need to be separated:

(1) the case for/against storing raw data
(2) implementation and resources
(3) funding
(4) location

I will say a few things about each of these issues in turn:

-----------

(1) Arguments in favour of and against the concept of storing raw image data, as 
well as possible alternative solutions that could address some of the issues at 
lower cost or complexity.

I realise that my views carry a weight=1.0 just like everybody else's, and many 
of the arguments and counter-arguments have already been made, so I will not 
add to these at this stage.

-----------

(2) Implementation details and required resources.

If the community should decide that archiving raw data would be scientifically 
useful, then it has to decide how best to do it. This will determine the level 
of resources required to do it. Questions include:

- what should be archived? (See Jim H's list from (a) to (z) or so.) An initial 
plan would perhaps aim for the images associated with the data used in the 
final refinement of deposited structures.

- how much data are we talking about per dataset/structure/year?

- should it be stored close to the source (i.e., responsibility and costs for
depositors or synchrotrons) or centrally (i.e., costs for some central
resource)? If it is going to be stored centrally, the cost will be substantial.
For example, at the EBI (the European Bioinformatics Institute) we have 15 PB
of storage. We pay about 1500 GBP (~2300 USD) per TB of storage (not the kind
you buy at Dixons or Radio Shack, obviously). For stored data, we have a
data-duplication factor of ~8, i.e. every file is stored 8 times (at three data
centres, plus back-ups, plus a data-duplication centre, plus unreleased versus
public versions of the archive). (Note - this is only for the EBI/PDBe! RCSB
and PDBj will have to acquire storage as well.) Moreover, disks have to be
housed in a building (not free!), with cooling, security measures, security
staff, maintenance staff, electricity (substantial cost!), rental of a 1-10
Gb/s connection, etc. All hardware has a life-cycle of three years (barring
failures) and then needs to be replaced (at lower cost, but still not free).
(A back-of-envelope sketch of these numbers follows this list.)

- if the data is going to be stored centrally, how will it get there? Using ftp 
will probably not be feasible.

- if it is not stored centrally, how will long-term data availability be 
enforced? (Otherwise I could have my data on a public server until my paper 
comes out in print, and then remove it.)

- what level of annotation will be required? There is no point in having 
zillions of files lying around if you don't know which structure/crystal/sample 
they belong to, at what wavelength they were recorded, if they were used in 
refinement or not, etc.

- an issue that has not been raised yet, I think: who is going to validate that the 
images actually correspond to the structure factor amplitudes or intensities that were 
used in the refinement? This means that the data will have to be indexed, integrated, 
scaled, merged, etc. and finally compared to the deposited Fobs or Iobs. This will have 
to be done for *10,000 data sets a year*... And I can already imagine the arguments that 
will follow between depositors and "re-processors" about what software to use, 
what resolution cut-off, what outlier-rejection criteria, etc. How will conflicts and 
discrepancies be resolved? This could well end up taking a day of working time per data 
set, i.e. with 200 working days per year, one would need 50 *new* staff for this task 
alone. For comparison: worldwide, there is currently a *total* of ~25 annotators working 
for the wwPDB partners...
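
For what it is worth, a back-of-envelope restatement of the figures above (no
new data here; the one day per dataset is the working assumption from the
previous point):

    # Figures quoted above: storage cost, duplication factor, deposition rate.
    COST_PER_STORED_TB_GBP = 1500    # EBI cost per TB actually stored
    DUPLICATION_FACTOR = 8           # every file is stored ~8 times
    DATASETS_PER_YEAR = 10000        # depositions that would need checking
    DAYS_PER_DATASET = 1             # assumed reprocessing effort per dataset
    WORKING_DAYS_PER_YEAR = 200

    # Effective cost of keeping one "logical" TB in the archive.
    print(COST_PER_STORED_TB_GBP * DUPLICATION_FACTOR, "GBP per logical TB")  # 12000

    # Staff needed just to reprocess and validate incoming datasets.
    print(DATASETS_PER_YEAR * DAYS_PER_DATASET / WORKING_DAYS_PER_YEAR)       # 50.0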

Not many of you know that (about 10 years ago) I spent probably an entire year of my life 
sorting out the mess that was the PDB structure factor files pre-EDS... We were 
apparently the first people to ever look at the tens of thousands of structure factor 
files and try to use all of them to calculate maps for the EDS server. (If there were 
others who attempted this before us, they had probably run away screaming.) This went 
well for many files, but there were many, many files that had problems. There were dozens 
of different kinds of issues: non-CIF files, CIF files with wrong headers, Is instead of 
Fs, Fcalc instead of Fobs, all "h" equal to 0, non-space-separated columns, 
etc. For a list, see: http://eds.bmc.uu.se/eds/eds_help.html#PROBLEMS

Anyway, my point is that simply having images without annotation and without reprocessing is like 
having a crystallographic kitchen sink (or bit bucket) which will turn out to be 50% useless when 
the day comes that somebody wants to do archive-wide analysis/reprocessing/rerefinement etc. And if 
the point is to "catch cheaters" (which in my opinion is one of the weakest, 
least-fundable arguments for storage), then the whole operation is in fact pointless without 
reprocessing by a "third party" at deposition time.

-----------

(3) Funding.

This is one issue we can't really debate - ultimately, it is the funding 
agencies who have to be convinced that the cost/benefit ratio is low enough. 
The community will somehow have to come up with a stable, long-term funding 
model. The outcome of (2) should enable one to estimate the initial investment 
cost plus the variable cost per year. Funding could be done in different ways:

- centrally - e.g., a big application for funding from NIH or EU

- by charging depositors (just like they are charged Open Access charges, which can often 
be reclaimed from the funding agencies) - would you be willing to pay, say, 5000 USD per 
dataset to secure "perpetual" storage?

- by charging users (i.e., Gerard Bricogne :-) - just kidding!

Of course, if the consensus is to go for decentralised storage and a DOI-like 
identifier system, there will be no need for a central archive, and the 
identifiers could be captured upon deposition in the PDB. (We could also check 
once a week if the files still exist where they are supposed to be.)
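
As a minimal sketch of what such a weekly check might look like (the DOI list
file and the use of the doi.org resolver are illustrative assumptions, not an
existing wwPDB tool):

    # Check that each deposited DOI still resolves to a reachable page.
    import urllib.request

    def doi_resolves(doi, timeout=30):
        req = urllib.request.Request("https://doi.org/" + doi, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except Exception:
            return False

    with open("deposited_dois.txt") as fh:       # hypothetical file of DOIs
        for doi in (line.strip() for line in fh if line.strip()):
            if not doi_resolves(doi):
                print("WARNING:", doi, "no longer resolves")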

-----------

(4) Location.

If the consensus is to have decentralised storage, the solution is quite simple and very 
cheap in terms of "centralised" cost - wwPDB can capture DOI-like identifiers 
upon deposition and make them searchable.

If central storage is needed, then there has to be an institution willing and 
able to take on this task. The current wwPDB partners are looking at future 
funding that is at best flat, with increasing numbers of depositions that also 
get bigger and more complex. There is *no way on earth* that wwPDB can accept 
raw data (be it X-ray, NMR or EM! this is not an exclusive X-ray issue) without 
*at least* double the current level of funding (and not just in the US for 
RCSB, but also in Japan for PDBj and in Europe for PDBe)! I am pretty confident 
that this is simply *not* going to happen.

[Besides, in my own humble opinion, in order to remain relevant (and fundable!) 
in the biomedical world, the PDB will have to restyle itself as a biomedical 
resource instead of a crystallographic archive. We must take the structures to 
the biologists, and we must expand in breadth of coverage to include emerging 
hybrid methods that are relevant for structural cell (as opposed to molecular) 
biology. This mission will be much easier to fund on three continents than 
archiving TBs of raw data that have little or no tangible (i.e., fundable) 
impact on our quest to find a cure for various kinds of cancer (or hairloss) or 
to feed a growing population.]

However, there may be a more realistic solution. The role model could be NMR, 
which has its own global resource for data storage in the BMRB. BMRB is a wwPDB 
partner - if you deposit an NMR model with us, we take your ensemble 
coordinates, metadata, restraints and chemical shifts - any other NMR data 
(including spectra and FIDs) can subsequently be deposited with BMRB. These 
data will get their own BMRB ID which can be linked to the PDB ID.

A model like this has advantages - it could be housed in a single place, run by 
X-ray experts (just as BMRB is co-located with NMRFAM, the national NMR 
facility at Madison), and there would be only one place that would need to 
secure the funding (which would be substantially larger than the estimate of 
$1000 per year suggested by a previous poster from La La Land). This could for 
instance be a synchrotron (linked to INSTRUCT?), or perhaps one of the emerging 
nations could be enticed to take on this challenging task. I would expect that 
such a centre would be closely affiliated with the wwPDB organisation, or 
become a member just like BMRB. A similar model could also be employed for 
archiving raw EM image data.

-----------

I've said enough for today. It's almost time for the booze-up that kicks off 
the PDB40 symposium here at CSHL! Heck, some of you who read this might be here 
as well!

Btw - Colin Nave wrote:

"(in increasing order of influence/power do we have the Pope, US president, the Bond 
Market and finally Gerard K?)"

I'm a tad disappointed to be only in fourth place, Colin! What has the Pope 
ever done for crystallography?

--Gerard

******************************************************************
                          Gerard J. Kleywegt

     http://xray.bmc.uu.se/gerard   mailto:ger...@xray.bmc.uu.se
******************************************************************
  The opinions in this message are fictional.  Any similarity
  to actual opinions, living or dead, is purely coincidental.
******************************************************************
  Little known gastromathematical curiosity: let "z" be the
  radius and "a" the thickness of a pizza. Then the volume
           of that pizza is equal to pi*z*z*a !
******************************************************************


--
=====================================================
Herbert J. Bernstein, Professor of Computer Science
    Dowling College, Brookhaven Campus, B111B
  1300 William Floyd Parkway, Shirley, NY, 11967

                +1-631-244-1328
              Lab: +1-631-244-1935
             Cell: +1-631-428-1397
                y...@dowling.edu
=====================================================
