On the technical feasibility of storing the original data:
Sure, running a Pilatus for an Olympic record, we will go home with several
TB of data after 24 h (will we?).
Yes, we do already. I just checked the number of images from the PILATUS 6M
we have collected so far this year: ca. 1.7 million. By the end of the year
it will be more than 2 million.
I compress everything and store it on RAID systems. Disk space is by now so
cheap, and storage on disk is so easy compared to tape. So why worry about
the large number of images?

In the end, only a few datasets will be necessary for the determination of
the structure. This means maybe 0.5 TB of compressed data to deposit. I
don't think this is too much.
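
As a back-of-envelope check of these numbers (a minimal sketch in Python;
the per-image compressed size is an assumption for illustration, not a
measured value):

    # Back-of-envelope storage estimate for a year of PILATUS 6M images.
    # ASSUMPTION: ~3 MB per compressed image; real sizes vary with
    # compression scheme and diffraction content.
    images_per_year = 2_000_000   # "more than 2 million" from the post
    mb_per_image = 3              # assumed compressed size in MB
    total_tb = images_per_year * mb_per_image / 1_000_000
    print(f"~{total_tb:.0f} TB of compressed images per year")  # ~6 TB

At that scale a few RAID boxes per year indeed suffice, which is the point
about disks being cheap.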
But this is an abuse of the system. The final goal is structure
determination, and there are far fewer good crystals produced anywhere in
one year than one Pilatus could collect from in one week.
But to decide quickly whether the crystal diffraction data from a Pilatus
are good enough for storage, or even worth measuring, whatever the speed of
data collection, good data processing software is needed.
The special properties of the PILATUS force you to think more about your
data collection AND (!) give you the time to think about data collection at
the beamline. Fine phi slicing and high redundancy give much better data
(important if your crystals are not that good). Perhaps the people from
Dectris will add a note here.
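
To put rough numbers on fine phi slicing (a sketch; the slice width, sweep,
and frame rate below are illustrative assumptions, not beamline
specifications):

    # Rough cost of fine phi slicing with a shutterless pixel detector.
    # ASSUMPTIONS: 0.1 deg slices over a 180 deg sweep at 12.5 Hz.
    slice_deg = 0.1
    sweep_deg = 180.0
    frame_rate_hz = 12.5
    n_images = int(sweep_deg / slice_deg)    # 1800 images
    collection_s = n_images / frame_rate_hz  # 144 s
    print(f"{n_images} images in {collection_s / 60:.1f} minutes")

So even a finely sliced, highly redundant sweep takes only minutes, which
is what leaves time to think at the beamline.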

Best,
Guenter
I personally think that there is only one, the one.
Anyhow, I think that if the author wishes to publish his structure, and it
is important, and I am a reviewer, and it is going to a prestigious
journal, I will reprocess his data and check his way to the final crystal
structure solution from the beginning.
It is as in mathematics. If someone claims to have solved a long-standing
problem from the past, he will not escape his envious colleagues, who will
drop everything and sit and check until they find a mistake. What a
pleasure!!!
And if there are no mistakes - chapeau !!!

FF
Dr Felix Frolow Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: mbfro...@post.tau.ac.il
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Oct 16, 2011, at 20:38, Frank von Delft wrote:

On the deposition of raw data:

I recommend to the committee that before it convenes again, every member
should go collect some data on a beamline with a Pilatus detector [feel
free to join us at Diamond]. Because by the time any recommendations
actually emerge, most beamlines will probably have one of those (or
similar), we'll be generating more data than the LHC, and users will be
happy just to have it integrated, never mind worrying about its fate.

That's not an endorsement, btw, just an observation/prediction.

phx.




On 14/10/2011 23:56, Thomas C. Terwilliger wrote:
For those who have strong opinions on what data should be deposited...

The IUCr is just starting a serious discussion of this subject. Two
committees, the "Data Deposition Working Group", led by John Helliwell,
and the Commission on Biological Macromolecules (chaired by Xiao-Dong Su)
are working on this.

Two key issues are (1) feasibility and importance of deposition of raw
images and (2) deposition of sufficient information to fully reproduce the
crystallographic analysis.

I am on both committees and would be happy to hear your ideas (off-list).
I am sure the other members of the committees would welcome your thoughts
as well.

-Tom T

Tom Terwilliger
terwilli...@lanl.gov


This is a follow-up (or a digression) to James's comparison of the test
set to missing reflections. I have also heard this issue mentioned before
but was always too lazy to actually pursue it.

So.

The role of the test set is to prevent overfitting. Let's say I have the
final model, I monitored the Rfree every step of the way, and I can
conclude that there is no overfitting. Should I do the final refinement
against the complete dataset?

IMCO, I absolutely should.  The test set reflections contain
information, and the "final" model is actually biased towards the
working set.  Refining using all the data can only improve the accuracy
of the model, if only slightly.
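
As a toy illustration of the work/free split under discussion (a minimal
Python sketch with simulated amplitudes; the 5% flag fraction and the
R-factor formula follow common practice, but nothing here comes from any
particular refinement program):

    import numpy as np

    # Toy work/free split: flag ~5% of reflections as the test (free)
    # set and compute R = sum|Fobs - Fcalc| / sum(Fobs) over each subset.
    rng = np.random.default_rng(42)
    n = 20_000
    f_obs = rng.gamma(shape=2.0, scale=100.0, size=n)     # fake amplitudes
    f_calc = f_obs * (1 + 0.05 * rng.standard_normal(n))  # fake model

    free = rng.random(n) < 0.05  # test set, excluded from refinement
    def r_factor(sel):
        return np.abs(f_obs[sel] - f_calc[sel]).sum() / f_obs[sel].sum()

    print(f"R-work {r_factor(~free):.3f}   R-free {r_factor(free):.3f}")

Here R-work and R-free agree because f_calc was never fit against the
working set; in real refinement the model is optimized against the working
reflections only, which is exactly the bias towards the working set
described above.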

The second question is practical. Let's say I want to deposit the results
of the refinement against the full dataset as my final model. Should I not
report the Rfree and instead insert a remark explaining the situation? If
I report the Rfree from before the test set was merged back in, it is
certain that every validation tool will report a mismatch. It does not
seem that the PDB has a mechanism to deal with this.

Cheers,

Ed.



--
Oh, suddenly throwing a giraffe into a volcano to make water is crazy?
                                                Julian, King of Lemurs
