On 8/20/05, Carl Lowenstein <[EMAIL PROTECTED]> wrote:
> # dd if=/dev/hdb1 | gzip -c > /mnt/60gbPartition/hdb1.gz
Cool. So I'm running "dd if=/dev/hdb1 | bzip2 -c | wc -c" right now to see how small bzip2 can get it. With gzip I got it down to 78 GB, 18 GB too many. So far bzip2 is taking over three times as long as gzip to do the compressing, which is expected.

But I found out that I don't *have* to get my 124 GB down to 60 GB: I can use dd's seek and skip options to split the 124 GB into smaller chunks, putting part of it on the 60 GB disk and part on another disk. I've read dd's man page many times, but I never made that connection. Good to know; I may need it. (There's a sketch of what I mean below.)

Another tip I got was that passing bs=1M to dd might make compression a little better. I doubt that will matter much in the end, but if bzip2 doesn't get it down far enough, I'll give it a try. Can anyone comment on this block size thing?
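Here's roughly what I'm picturing for the split, untested, with made-up chunk sizes and mount points, and assuming a GNU dd that takes the M suffix:

  # dd if=/dev/hdb1 bs=1M count=61440 | gzip -c > /mnt/60gbPartition/hdb1.part1.gz
  # dd if=/dev/hdb1 bs=1M skip=61440 | gzip -c > /mnt/otherdisk/hdb1.part2.gz

skip and count are both measured in input blocks of bs, so skip=61440 picks up exactly where the first chunk (61440 x 1 MiB = 60 GiB of raw input) left off. And since the chunk compresses on the way out, I could probably feed the first disk quite a bit more than 60 GB of raw input in practice. Restoring would mean decompressing the chunks back in order, with seek positioning the second one:

  # gzip -dc /mnt/60gbPartition/hdb1.part1.gz | dd of=/dev/hdb1 bs=1M
  # gzip -dc /mnt/otherdisk/hdb1.part2.gz | dd of=/dev/hdb1 bs=1M seek=61440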
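For the block size experiment, my measuring pipeline would just become:

  # dd if=/dev/hdb1 bs=1M | bzip2 -c | wc -c

(bs=1048576 on a dd without the M suffix). From the man page, bs only sets how many bytes dd reads and writes at a time, so I'd expect it to speed things up over the 512-byte default rather than shrink the output, since bzip2 sees the same byte stream either way. But that's exactly the kind of thing I'm asking about.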
-todd

-- 
[EMAIL PROTECTED]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list