> If you want dd to run faster, you need to find an efficient block size
> for it to use (and add the parameter bs=65536 if you choose 64k, for
> example). I don't know what would be appropriate for your hardware and
> kernel, someone else might do this more often and therefore have found
> out.

I have done this several times on CDs, and it matters not one bit (on
semi-current kernels, I think 2.4+). The kernel somehow uses an optimal
value regardless of what you specify. Running a quick test with time is
still a good idea when you're dealing with dozens of gigabytes, though,
just to confirm. I would advise using bs=64k rather than bs=65536,
because if you get the number wrong by a few bytes you're begging for
trouble.
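
A quick sanity check might look like this (a sketch only, assuming
/dev/hda1 as the source, as in your example; count limits it to 1 GB so
it finishes fast):

    # read 16384 * 64k = 1 GB and discard it; compare wall time across bs values
    time dd if=/dev/hda1 of=/dev/null bs=64k count=16384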

> bzip2 -9 /data/dd-image-of-hda1
> 
> This will take a very long time, though.
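
It will, yes. For what it's worth, if you go this route you'd normally
pipe dd straight into bzip2 rather than compressing the image file
afterwards, something like this (path taken from your example, .bz2
suffix mine, bs as above):

    # image the partition and compress on the fly, no intermediate file needed
    dd if=/dev/hda1 bs=64k | bzip2 -9 > /data/dd-image-of-hda1.bz2

Two caveats with compressed disk images: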

1) You will lose all your data beyond the point where the compressed
file is damaged. You can't restore single files from a compressed disk
image, full stop.

2) You will only be able to restore this dd image onto a partition of
identical, or at least larger, size. You can't restore single files from
it directly, though you can try and fiddle with the loop device, as
sketched below.
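
The loop trick is roughly this (a sketch, assuming the image has been
decompressed again and that /mnt/image is free to use as a mount point):

    # mount the partition image read-only through the loop device
    mkdir -p /mnt/image
    mount -o loop,ro /data/dd-image-of-hda1 /mnt/image

Since hda1 is a single partition, the image holds one filesystem and can
be loop-mounted directly; an image of a whole disk would need an offset.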

Volker

-- 
Volker Kuhlmann                 is possibly list0570 with the domain in header
http://volker.dnsalias.net/             Please do not CC list postings to me.
