Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-22 Thread Kevin Cowtan

Waterman, David (DLSLtd,RAL,DIA) wrote:

Bill's example is nice because the compression is transparent, so no
extra work needs to be done by developers. However, this is one for Macs only.

Actually, ZFS is available on Linux too as a user space filesystem, and 
Sun are considering a kernel port:

http://www.wizy.org/wiki/ZFS_on_FUSE

However, I'm inclined to wait for btrfs (butter-fs). Here's a review of 
btrfs from an ex-ZFS engineer:

http://lwn.net/Articles/342892/

Oracle are working on a new-generation NFS replacement designed
specifically to benefit from some of the btrfs features:

http://oss.oracle.com/projects/crfs/

Here's an article on one of the truly astonishing btrfs features: you can 
upgrade an existing Linux file system to btrfs without destroying the 
existing fs or duplicating the data!

http://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3


Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-21 Thread Waterman, David (DLSLtd,RAL,DIA)
Yes, this is exactly what I meant. If the data are amenable (which was 
addressed in the previous discussion with reference to diffraction images) and 
there is a suitable lossless compression/expansion algorithm, then on most 
modern computers it is faster to read the compressed data from disk and expand 
it in RAM, rather than directly read the uncompressed image from a magnetic 
plate. Of course this depends on all sorts of factors, such as the speed of the 
disk, the compression ratio, the CPU clock speed, whether the decompression can 
be done in parallel, how much calculation the decompression requires, and so on.
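To get a feel for the trade-off described above, here is a toy sketch using Python's standard zlib; the synthetic "image" and compression level are illustrative assumptions, not a benchmark of any real detector format:

```python
import zlib

# Synthetic "image": mostly low background counts, so it compresses well,
# much like a diffraction image with sparse spots.
raw = bytes([0, 1, 0, 2, 1, 0] * 100_000)

compressed = zlib.compress(raw, level=6)
ratio = len(raw) / len(compressed)

# Reading len(compressed) bytes from disk and inflating in RAM beats
# reading len(raw) bytes whenever the disk, not the CPU, is the bottleneck.
restored = zlib.decompress(compressed)
assert restored == raw  # lossless: bit-identical round trip
print(f"{len(raw)} -> {len(compressed)} bytes ({ratio:.1f}x)")
```

Whether this wins in practice depends on exactly the factors listed above.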

Bill's example is nice because the compression is transparent, so no extra work 
needs to be done by developers. However, this is one for Macs only. I'd like to 
know whether integration runs faster using CBF images with the decompression 
overhead of CBFlib compared with reading the same data in uncompressed form on 
standard hardware (whatever that means).

Cheers
David

-Original Message-
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Andrew 
Purkiss-Trew
Sent: 18 September 2009 21:52
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] I compressed my images by ~ a factor of two, and they 
load and process in mosflm faster

The current bottleneck with file systems is the speed of getting data on or off 
the magnetic surface. So filesystem compression helps, as less data needs to be 
physically written or read per image. The CPU time spent compressing the data 
is less than the time saved in writing less data to the surface.

I would be interested to see if the speed up is the same with a solid state 
drive, as there is near 'random access' here, unlike with a magnetic drive 
where the seek time is one of the bottlenecks. For example, mechanical hard 
drives are limited to about 130MB/s, whereas SSDs can already manage 200MB/s 
(faster than a first-generation SATA interface at 150MB/s can cope with, and one 
of the drivers behind the 2nd-generation (300MB/s) and 3rd-generation (600MB/s) SATA 
interfaces). The large size of our image files should make them ideal for use 
with SSDs.
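The read-time trade-off above can be put into a back-of-envelope model; all figures here are illustrative assumptions, not measurements:

```python
# Back-of-envelope model: compressed read (smaller transfer + CPU inflate)
# versus raw read (full transfer). Figures are assumptions for illustration.
image_mb    = 18.0    # e.g. one uncompressed large-detector frame
disk_mbps   = 130.0   # mechanical drive, per the figure quoted above
ratio       = 2.0     # lossless compression factor
decomp_mbps = 500.0   # assumed rate at which the CPU can inflate data

t_raw        = image_mb / disk_mbps
t_compressed = (image_mb / ratio) / disk_mbps + image_mb / decomp_mbps

print(f"raw read:        {t_raw * 1000:.0f} ms")
print(f"compressed read: {t_compressed * 1000:.0f} ms")
# Compression wins whenever the decompression time is smaller than the
# transfer time it saves: image_mb/decomp_mbps < (1 - 1/ratio) * image_mb/disk_mbps.
```

With a faster SSD (larger disk_mbps) the saving shrinks, which is exactly the open question about solid state drives.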



Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-21 Thread Harry Powell

Hi

Not a typical run, but I just got these on my Macbook pro from a 320-image  
1.5Å myoglobin dataset, collected on a Q315 -


[macf3c-4:~/test/cbf] harry% cd cbf
[macf3c-4:~/test/cbf/cbf] harry% time mosflm < integrate > integrate.lp
445.355u 27.951s 8:38.57 91.2%  0+0k 1+192io 41pf+0w
[macf3c-4:~/test/cbf/cbf] harry% cd ../original
[macf3c-4:~/test/cbf/original] harry% time mosflm < integrate > integrate.lp
279.331u 18.691s 8:05.76 61.3%  0+0k 0+240io 16pf+0w

I am somewhat surprised at this. Since I wasn't running anything else,  
I'm also a little surprised that not only are the user times above so  
different, but so are the percentages of the elapsed clock time. Herb  
may be able to comment more knowledgeably.


I don't have my Snow Leopard box here so can't compare the ditto'd  
files just at the moment.



Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-19 Thread Kay Diederichs

James Holton wrote:
...
In any case, I think it is important to remember that there are good 
reasons for leaving image file formats uncompressed.  Probably the most 
important is the activation barrier to new authors writing new programs 
that read them.  fread() is one thing, but finding the third-party 
code for a particular compression algorithm, navigating a CVS repository 
and linking to a library are quite another!  This is actually quite a 
leap for those of us who never had any formal training in computer 
science.  Personally, I still haven't figured out how to read pck 
images, as it is much easier to write jiffy programs for uncompressed 


pck code can be found at http://www.ccp4.ac.uk/dist/lib/src/pack_c.c

CBF code is at http://www.bernstein-plus-sons.com/software/CBF/

Both are under the GPL.

data.  For example, if all you want to do is extract a group of pixels 
(such as a spot), then you have to decompress the whole image!  In 
computer speak: fseek() is rendered useless by compression.  This could 
be why Mar opted not to use the pck compression for their newer 
CCD-based detectors?


Thinking about the many GB written daily at a synchrotron beamline, I 
wish they had!




That said, compressed file systems do appear particularly attractive if 
space is limiting.  Apparently HFS can do it, but what about other 
operating systems?  Does anyone have experience with a Linux file system 
that both supports compression and doesn't get corrupted easily?




One possibility in Linux is ZFS over FUSE; this has a large number of 
advantages over other filesystems (except Btrfs) - see 
http://www.linux-magazine.com/w3/issue/103/ZFS.pdf . The article 
explains installation for Ubuntu. I must admit that I have not tried it myself so far.


The alternative would be Btrfs, see 
http://www.h-online.com/open/The-Btrfs-file-system--/features/113738 . 
This is available for latest Fedora and Ubuntu, is part of the recently 
released 2.6.31 kernel 
(http://www.h-online.com/open/Kernel-Log-2-6-31-Tracking--/features/113671), 
and will therefore in the future be available in all distros.


best,

Kay
--
Kay Diederichs http://strucbio.biologie.uni-konstanz.de
email: kay.diederi...@uni-konstanz.de Tel +49 7531 88 4049 Fax 3183
Fachbereich Biologie, Universitaet Konstanz, Box M647, D-78457 Konstanz




Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread Waterman, David (DLSLtd,RAL,DIA)
Just to comment on this, my friend in the computer game industry insists
that compression begets speed in almost all data handling situations.
This will be worth bearing in mind as we start to have more fine-sliced
Pilatus 6M (or similar) datasets to deal with.

Cheers,
David.

-Original Message-
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of
William G. Scott
Sent: 17 September 2009 22:48
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] I compressed my images by ~ a factor of two, and they
load and process in mosflm faster

If you have OS X 10.6, this will impress your friends and save you some
disk space:

% du -h -d 1 mydata
3.5G    mydata

mv mydata mydata.1

sudo ditto --hfsCompression mydata.1 mydata
rm -rf mydata.1

% du -h -d 1 mydata
1.8G    mydata

This does hfs filesystem compression, so the images are still recognized
by mosflm, et al.  I think they process a bit faster too, because half
the information is packed into the resource fork.
This e-mail and any attachments may contain confidential, copyright and or 
privileged material, and are for the use of the intended addressee only. If you 
are not the intended addressee or an authorised recipient of the addressee 
please notify us of receipt by returning the e-mail and do not use, copy, 
retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not 
necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments 
are free from viruses and we cannot accept liability for any damage which you 
may sustain as a result of software viruses which may be transmitted in or with 
the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and 
Wales with its registered office at Diamond House, Harwell Science and 
Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom



Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread Graeme Winter
Hi David,

If the data compression is carefully chosen you are right: lossless
jpeg2000 compression on diffraction images works very well, but is a
spot slow. The CBF compression using the byte offset method is a
little less good at compression but massively faster... as you point
out, this is the one used in the pilatus images. I recall that the
.pck format used for the MAR image plates had the same property - it
was quicker to read in a compressed image than the raw equivalent.
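The byte-offset idea can be sketched as follows. This is a simplified toy version for illustration only: the real CBF byte_offset scheme escapes in stages to wider deltas, whereas here a single escape byte jumps straight to a 4-byte delta.

```python
import struct

def byte_offset_pack(pixels):
    """Simplified sketch of byte-offset packing: store the delta to the
    previous pixel in one signed byte when it fits, otherwise an escape
    byte (-128) followed by a 4-byte signed delta."""
    out = bytearray()
    prev = 0
    for p in pixels:
        delta = p - prev
        if -127 <= delta <= 127:
            out += struct.pack("b", delta)
        else:
            out += struct.pack("<bi", -128, delta)
        prev = p
    return bytes(out)

def byte_offset_unpack(data):
    pixels, prev, i = [], 0, 0
    while i < len(data):
        (delta,) = struct.unpack_from("b", data, i)
        i += 1
        if delta == -128:  # escape: read a full 4-byte delta
            (delta,) = struct.unpack_from("<i", data, i)
            i += 4
        prev += delta
        pixels.append(prev)
    return pixels

# Background-dominated data has tiny pixel-to-pixel deltas, so most
# pixels cost one byte instead of four; only the rare spot pays more.
pixels = [10, 11, 9, 12, 10, 5000, 5002, 11, 10]
packed = byte_offset_pack(pixels)
assert byte_offset_unpack(packed) == pixels
```

Because encoding and decoding are just integer additions, it is massively faster than entropy coders like jpeg2000, at the cost of a somewhat lower compression ratio.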

So... once everyone is using the CBF standard for their images, with
native lossless compression, it'll save a fair amount in disk space
(=£/$), make life easier for people and - perhaps most importantly -
save a lot of data transfer time.

Now the funny thing with this is that if we compress the images before
we store them, the compression implemented in the file system will be
less effective... oh well, can't win em all...

Cheers,

Graeme







Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread Chavas Leo

Dear all --

I cannot remember exactly, but I thought we had a long discussion on  
the rightness of using compressed images, especially when considering  
the loss of information while doing so. What was the conclusion of  
the debate again? (sorry, too lazy to dig in the archives).


-- Leo --







Chavas Leonard, Ph.D.
Assistant Professor

Structural Biology Research Center
Photon Factory
High Energy Research Organization (KEK)
305-0801 Tsukuba Oho 1-1
Japan

Tel: +81(0)29-864-5642 (4901)
Fax: +81(0)29-864-2801
e-mail: leonard.cha...@kek.jp

Science Advisory Board (BIT Life Sciences)
Editorial Board (JAA)

http://pfweis.kek.jp/~leo


Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread James Holton

http://proteincrystallography.org/ccp4bb/message2284.html

The conclusion was that lossless compression can give us an average of 
2.5-fold compression on diffraction images (more if they have no spots) 
and that lossy compression was something that might anger the caveat gods.


-James Holton
MAD Scientist




Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread Ethan Merritt
On Friday 18 September 2009 12:47:20 Chavas Leo wrote:
 Dear all --
 
 I cannot remember exactly, but I thought we had a long discussion on  
 the rightness of using compressed images, especially when considering  
 the loss of information while doing so. 
  
 -- Leo --
 

Not all compression methods cause loss of information.

cheers,

Ethan







-- 
Ethan A Merritt
Biomolecular Structure Center
University of Washington, Seattle 98195-7742


Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread James Holton
I think it important to point out that despite the subject line, Dr. 
Scott's statement was:

I think they process a bit faster too
Strangely enough, this has not convinced me to re-format my RAID array 
with a new file system nor re-write all my software to support yet 
another new file format.  I guess I am just lazy that way.  Has anyone 
measured the speed increase?  Have macs become I/O-bound again? 

In any case, I think it is important to remember that there are good 
reasons for leaving image file formats uncompressed.  Probably the most 
important is the activation barrier to new authors writing new programs 
that read them.  fread() is one thing, but finding the third-party 
code for a particular compression algorithm, navigating a CVS repository 
and linking to a library are quite another!  This is actually quite a 
leap for those of us who never had any formal training in computer 
science.  Personally, I still haven't figured out how to read pck 
images, as it is much easier to write jiffy programs for uncompressed 
data.  For example, if all you want to do is extract a group of pixels 
(such as a spot), then you have to decompress the whole image!  In 
computer speak: fseek() is rendered useless by compression.  This could 
be why Mar opted not to use the pck compression for their newer 
CCD-based detectors?
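To make the fseek() point concrete, here is a hypothetical sketch in Python: with a headerless, uncompressed raw image (assumed unsigned 16-bit little-endian pixels, a made-up layout for illustration), a spot region can be read by seeking directly to each row, touching only the bytes of interest.

```python
import struct

def read_spot(path, width, x0, y0, w, h, bytes_per_pixel=2):
    """Read a w x h window from an uncompressed raw image.

    Assumes headerless little-endian unsigned 16-bit pixels; with a
    compressed format, the whole image would have to be expanded first.
    """
    rows = []
    with open(path, "rb") as f:
        for y in range(y0, y0 + h):
            # Seek straight to the start of this row of the window...
            f.seek((y * width + x0) * bytes_per_pixel)
            # ...and read only the pixels we actually need.
            data = f.read(w * bytes_per_pixel)
            rows.append(list(struct.unpack("<%dH" % w, data)))
    return rows
```

With any stream compression in the way, the byte offset of row y cannot be computed without decompressing everything before it, which is exactly why fseek() stops being useful.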


That said, compressed file systems do appear particularly attractive if 
space is limiting.  Apparently HFS can do it, but what about other 
operating systems?  Does anyone have experience with a Linux file system 
that both supports compression and doesn't get corrupted easily?


-James Holton
MAD Scientist


Graeme Winter wrote:

Hi David,

If the data compression is carefully chosen you are right: lossless
jpeg2000 compression on diffraction images works very well, but is a
spot slow. The CBF compression using the byte offset method is a
little less good at compression but massively faster... as you point
out, this is the one used in the pilatus images. I recall that the
.pck format used for the MAR image plates had the same property - it
was quicker to read in a compressed image than the raw equivalent.
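As a rough illustration of the byte-offset idea (a simplified sketch, not the exact CBF byte_offset specification): each pixel is stored as a signed delta from the previous one, in a single byte when the delta is small, escaping to a wider integer for the rare large jumps. Since neighbouring pixels are usually close in value, most deltas fit in one byte.

```python
def byte_offset_encode(pixels):
    # Store the delta from the previous pixel: one signed byte when it
    # fits, otherwise an escape byte (0x80) plus 4 little-endian bytes.
    out, prev = bytearray(), 0
    for p in pixels:
        d = p - prev
        if -127 <= d <= 127:
            out.append(d & 0xFF)
        else:
            out.append(0x80)
            out += d.to_bytes(4, "little", signed=True)
        prev = p
    return bytes(out)

def byte_offset_decode(data):
    pixels, prev, i = [], 0, 0
    while i < len(data):
        b = data[i]; i += 1
        if b == 0x80:  # escape: the delta didn't fit in one byte
            d = int.from_bytes(data[i:i + 4], "little", signed=True)
            i += 4
        else:
            d = b - 256 if b > 127 else b
        prev += d
        pixels.append(prev)
    return pixels
```

Encoding and decoding cost a couple of additions and comparisons per pixel, which is why this style of compression is so much cheaper on the CPU than entropy coders like the one in JPEG2000.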

So... once everyone is using the CBF standard for their images, with
native lossless compression, it'll save a fair amount in disk space
(=£/$), make life easier for people and - perhaps most importantly -
save a lot of data transfer time.

Now the funny thing with this is that if we compress the images before
we store them, the compression implemented in the file system will be
less effective... oh well, can't win em all...
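That effect is easy to demonstrate (an illustrative sketch, using zlib as a stand-in for whatever the file system uses): redundant data compresses well once, but running the compressor again over its own output gains almost nothing, because the first pass already removed the redundancy the second pass would look for.

```python
import random
import zlib

random.seed(0)
# Smoothly varying data (a stand-in for a detector background)
# drawn from a narrow range of byte values compresses well...
raw = bytes(128 + int(20 * random.random()) for _ in range(100000))
once = zlib.compress(raw)
# ...but compressing the already-compressed stream again barely
# changes its size: the redundancy is gone after the first pass.
twice = zlib.compress(once)
ratio_first = len(raw) / len(once)
ratio_second = len(once) / len(twice)
```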

Cheers,

Graeme



2009/9/18 Waterman, David (DLSLtd,RAL,DIA) david.water...@diamond.ac.uk:
  

Just to comment on this, my friend in the computer game industry insists
that compression begets speed in almost all data handling situations.
This will be worth bearing in mind as we start to have more fine-sliced
Pilatus 6M (or similar) datasets to deal with.

Cheers,
David.

-Original Message-
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of
William G. Scott
Sent: 17 September 2009 22:48
To: CCP4BB@JISCMAIL.AC.UK
Subject: [ccp4bb] I compressed my images by ~ a factor of two, and they
load and process in mosflm faster

If you have OS X 10.6, this will impress your friends and save you some
disk space:

% du -h -d 1 mydata
3.5G    mydata

mv mydata mydata.1

sudo ditto --hfsCompression mydata.1 mydata
rm -rf mydata.1

% du -h -d 1 mydata
1.8G    mydata

This does hfs filesystem compression, so the images are still recognized
by mosflm, et al.  I think they process a bit faster too, because half
the information is packed into the resource fork.





Re: [ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-18 Thread Andrew Purkiss-Trew
The current bottleneck with file systems is the speed of getting data  
on or off the magnetic surface. So filesystem compression helps, as  
less data needs to be physically written or read per image. The CPU  
time spent compressing the data is less than the time saved in writing  
less data to the surface.


I would be interested to see if the speed up is the same with a solid  
state drive, as there is near 'random access' here, unlike with a  
magnetic drive where the seek time is one of the bottlenecks. For  
example, mechanical hard drives are limited to about 130MB/s, whereas  
SSDs can already manage 200MB/s (faster than a first-generation SATA
interface at 150MB/s can cope with, and one of the drivers behind the
2nd (300MB/s) and 3rd generation (600MB/s) SATA interfaces). The large
size of our image files should make them ideal for use with SSDs.
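The back-of-envelope arithmetic (with assumed, illustrative numbers: a ~25 MB uncompressed frame and a 2:1 compression ratio) shows why compression pays at these transfer rates:

```python
def read_time_ms(size_mb, throughput_mb_s):
    # Time to move size_mb megabytes at a sustained throughput,
    # ignoring seek latency and decompression CPU cost.
    return 1000.0 * size_mb / throughput_mb_s

# Assumed figures for illustration only: ~25 MB raw frame,
# 2:1 compression, 130MB/s mechanical disk, 200MB/s SSD.
raw_hdd = read_time_ms(25, 130)           # ~192 ms from a mechanical disk
compressed_hdd = read_time_ms(12.5, 130)  # ~96 ms: half the bytes, half the time
raw_ssd = read_time_ms(25, 200)           # ~125 ms from an SSD
```

On these numbers, reading the compressed frame from the mechanical disk is faster than reading the raw frame from the SSD, provided the expansion itself is cheap.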




[ccp4bb] I compressed my images by ~ a factor of two, and they load and process in mosflm faster

2009-09-17 Thread William G. Scott
If you have OS X 10.6, this will impress your friends and save you  
some disk space:


% du -h -d 1 mydata
3.5G    mydata

mv mydata mydata.1

sudo ditto --hfsCompression mydata.1  mydata
rm -rf mydata.1

% du -h -d 1 mydata
1.8G    mydata

This does hfs filesystem compression, so the images are still  
recognized by mosflm, et al.  I think they process a bit faster too,  
because half the information is packed into the resource fork.