Quoting Simon Wong <[EMAIL PROTECTED]>:

> Hi guys!
>
> I'm pulling a compressed (gzip -9) disk image over the LAN and
> decompressing prior to writing to disk with dd.
>
> The command I am using is:
>
>     ssh [EMAIL PROTECTED] "cat client.img.gz" | gunzip | dd of=/dev/hda1
>
> This seems to work reasonably well until it gets somewhere around the 1G
> mark, at which point everything seems to slow to a crawl.
>
> The result is that it takes nearly as long as when I don't compress it
> and just pull the whole 8G partition through ssh.
>
> I'm guessing it's something to do with buffering of the data in gunzip
> prior to it getting to dd that's blowing things up.
>
> Anyone got any clues on how to do this more on the fly, so all data is
> passed through without any buffering?
>
> Why is this sort of question always late on a Friday?
>
> Oh well, have a good weekend!
>
> --
> Simon Wong <[EMAIL PROTECTED]>
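[One thing worth trying before anything else: dd defaults to 512-byte blocks, which can throttle writes badly. A minimal sketch of the same pipeline with an explicit block size; the user, host, and device are placeholders from the original post:]

```shell
# Same pipeline shape, but give dd an explicit, larger block size.
# user@server and /dev/hda1 stand in for the poster's real values.
ssh user@server "cat client.img.gz" | gunzip -c | dd of=/dev/hda1 bs=1M
```

[You can check the pipeline shape locally by pointing dd at an ordinary file instead of a partition before running it against the real disk.]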
What if you use scp to get the compressed file to your end first?

I presume this is a regular backup. How much of that 8GB actually changes
between backups? We have mirrored servers scattered around the country,
and use rsync daily to keep things synchronised.

The "L" in LAN is for local. Can't you access the partition directly with
NFS or smbclient? Or is this a remote LAN on the end of a VPN tunnel half
a world away?

Amanda

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
