The scenario:

Server: Ubuntu Breezy Linux, software RAID5, SATA II HDDs, Gigabit LAN
Client: Knoppix 3.8.2 (Live) (booted with "knoppix 2" for a CLI), Acer
Veriton 3500, P4, 256MB RAM, 20GB HDD, 100Mbit LAN

I have an image of an NTFS partition that I want to be able to write to
the client PC (snapshot or deployment to identical hardware). More
below, thanks to Glen...

On Fri, 2006-02-24 at 13:41 +0100, Glen Turner wrote:
> That first 1GB is the oddity. It's just some buffers filling. The
> results after that are the "real" results where buffering no
> longer gives an advantage and you see the true sustained (lack of)
> throughput.

OK.

> That would indicate that you're filling the bus to the hard disk,
> not the network between the CPUs.

Yeah, the network throughput (iptraf) was only about 20Mbit/s, well
below what I was expecting for a Gigabit-to-100Mbit connection.

> dd is writing a lot of empty blocks down to that disk. You might
> want to consider using tar if the filesystem has a lot of empty
> space.

True, I should have elaborated that this was an NTFS partition image.

> That's a very handwaving question since you haven't told
> us anything about the computer. The bottleneck obviously
> varies with the hardware.

Sorry, that's the sort of question you get from someone when they're
stuck :-)

> Now set the machine for maximum performance. hdparm
> should report 32b UDMA, write caching, look ahead,
> APM disabled, fast AAM goodness. TCP buffers should

BINGO! Thank you, thank you, thank you! I didn't think about DMA :-(
I assumed that Knoppix would turn on such things for me!

(1) Turning on DMA (hdparm -d1 /dev/hda)

On the client:

# time ssh [EMAIL PROTECTED] "cat client.img.gz" | gunzip > /dev/hda1

time reports a real time of 6 mins 1.8s (including ssh password entry)!
There's still a slowdown at the 1GB mark for about 12 seconds, then
full-speed transfer is reached for another 200MB or so; after that the
transfer trickles along until the end of the file.
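For anyone wanting to rehearse the ssh-and-gunzip pipeline above before
pointing it at a real partition, here's a minimal local dry run. It's a
sketch only: scratch files under mktemp stand in for client.img.gz and
/dev/hda1, and plain cat stands in for the ssh leg of the pipe.

```shell
#!/bin/sh
# Local dry run of "stream a gzipped image, unpack on the fly".
# On the real client the final redirect targets the raw partition:
#   ssh user@server "cat client.img.gz" | gunzip > /dev/hda1
set -e
workdir=$(mktemp -d)

# Fake 4MB "partition image" (all zeros, so it compresses well)
dd if=/dev/zero of="$workdir/client.img" bs=1M count=4 2>/dev/null
gzip -c "$workdir/client.img" > "$workdir/client.img.gz"

# cat stands in for the ssh transport here
cat "$workdir/client.img.gz" | gunzip > "$workdir/restored.img"

# Verify the restored image is byte-identical to the original
cmp -s "$workdir/client.img" "$workdir/restored.img" && status=ok
echo "round trip: $status"    # prints "round trip: ok"

rm -rf "$workdir"
```

The same shape works for the netcat variant further down; only the
transport in the middle of the pipe changes.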
So, if I get the partition image size close to or below that 1GB mark,
things are as fast as they're gonna get.

(2) Tuning with hdparm

So this time I tried:

# hdparm -d1 -c1 /dev/hda

time reported basically the same time, within a few seconds.
Performance was the same.

(3) Netcat

I thought I would try netcat.

Server:
$ cat image.img.gz | nc -l -p 5030 -q 10

Client:
# time nc serverip 5030 | gunzip > /dev/hda1

time reports a real time of 5m18s. Wow! Buffers filled at around
1200MB, another burst at around 1400MB, but then crawling (270kbit/s)
to the end.

> Test. Are you CPU-bound, I/O-bound or network bound
> (top, vmstat, etc)? If you are, is that bound reasonable
> (eg, 90% of the sustained disk write rate)?

So, in the end, definitely disk I/O bound.

> You might want to try transferring from and to /dev/null
> to check network+CPU performance. Note that ssh has a special

Alrighty then, netcat to /dev/null. time gives real 2m50s.

> performance issue -- it uses a 128MB window so the bandwidth-
> delay product needs to be under that for ssh to run at
> maximum speed.

Not sure where to tune that. I'll put that aside for now.

> Now you've found the bottleneck, fix it. Repeat test until
> one of the hard bounds is reached. If possible, choose a better
> method to move away from that bound (eg, dd v tar; ssh v ftp
> of a gpg-encrypted file).

Hence netcat instead of ssh. Since this is now happening within a
reasonable length of time, here are some more results:

1. Netcat + dd

# time nc serverip 5030 | gunzip | dd of=/dev/hda1

time gives real 5m52s. So quite a bit slower than the redirect.

> Remember to record your results at each step. Let us know
> the interesting numbers (eg, what is the real throughput
> of your particular drive). There's surprisingly few "real"
> benchmarks out there, so you'll be helping a lot by posting
> your numbers.

Well, thanks so much for your help, Glen.
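A likely reason the "gunzip | dd" variant was slower than the plain
redirect: dd defaults to 512-byte blocks, so the write side becomes a
stream of tiny syscalls. Something like `dd bs=1M of=/dev/hda1` would
probably close the gap. A local sketch (scratch file, sizes made up
here) showing the two block sizes copy identical data, only in far
fewer calls:

```shell
#!/bin/sh
# Copy the same data with dd's default-sized (512-byte) and 1MB blocks
# and confirm the output is identical -- only the syscall count differs.
set -e
src=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=8 2>/dev/null

bytes_small=$(dd if="$src" bs=512 2>/dev/null | wc -c)
bytes_large=$(dd if="$src" bs=1M  2>/dev/null | wc -c)

echo "bs=512: $bytes_small bytes, bs=1M: $bytes_large bytes"
rm -f "$src"
```

On a real run you'd see the difference in elapsed time rather than byte
counts; the byte counts are just to show the outputs are equivalent.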
I really appreciate your "teach a man to fish" support :-) I hope the
above results are of use to anyone else needing to do the same sort of
thing.

I think I'll try to get that partition image size down (bye bye System
Restore). Of course, for Linux clients this is so much easier (no
activation issues), so setting up the preseed file for Ubuntu is my
next step.

Thanks again, have a great day!

--
Simon Wong <[EMAIL PROTECTED]>

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html