On 23 May 2007, at 15:33, Brian Moon wrote:

> My tests have shown that if you can reduce the size of the data, the time saved in network traffic by moving smaller objects is well worth the tiny amount of CPU it takes to compress and uncompress the data. We deal with about 300KB chunks of HTML. I thought that maybe I could speed things up if I skipped compression; I found the opposite to be true. I'm not sure about the gzip libraries in Java, but in PHP they are smoking fast.
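
For anyone who wants to try that kind of timing test on their own data, here's a minimal PHP sketch (it assumes the zlib extension is loaded; the repetitive sample markup is just a stand-in, so substitute real HTML, since compressibility depends heavily on content):

    <?php
    // Time gzcompress()/gzuncompress() on a ~300KB HTML chunk.
    // The generated markup below is a placeholder for real data.
    $html = str_repeat("<div class=\"row\">some repetitive markup</div>\n", 6000);

    $t0 = microtime(true);
    $packed = gzcompress($html, 6);      // level 6 = zlib's default trade-off
    $t1 = microtime(true);
    $unpacked = gzuncompress($packed);
    $t2 = microtime(true);

    printf("original %d bytes, compressed %d bytes (%.0f%%)\n",
        strlen($html), strlen($packed), 100 * strlen($packed) / strlen($html));
    printf("compress %.2f ms, uncompress %.2f ms\n",
        ($t1 - $t0) * 1000, ($t2 - $t1) * 1000);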

Not quite the same thing, but I ran into the opposite scenario recently: a simple scp of a 180MB file between servers (dual dual-core Xeons) on a gigabit LAN. With compression (scp -C) it sustained about 12MB/sec net throughput (15 seconds to copy, i.e. CPU compression speed was limiting the rate), but without compression it managed 70MB/sec (just under 3 seconds to copy; I was impressed! That was probably disk-limited: U320, 10k RPM). In other words, if you have gigabit locally, raw throughput comes much more cheaply than compression.

I suspect YMMV quite a lot, so it's probably worth benchmarking. 100Mbit can usually deliver around 10MB/sec, so had I been on that, compression would have been the faster choice. Similar choices apply to other protocols that support compression, such as MySQL (both client and slave traffic). Generally, gigabit rocks...
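
As a rough rule of thumb before benchmarking: the compressed pipeline is limited by the slower of the compressor and the link (with the link carrying compressed bytes). A back-of-the-envelope sketch in PHP, plugging in the figures from my test plus an assumed 2:1 compression ratio (the ratio is a guess, not a measurement):

    <?php
    // Estimated transfer times with and without compression. The link
    // and compressor speeds are from the scp test above; the 2:1
    // compression ratio is an assumption - measure your own data.
    $fileMB       = 180;   // file size, MB
    $linkMBps     = 70;    // gigabit LAN after overheads, MB/sec
    $compressMBps = 12;    // CPU-bound compression throughput, MB/sec
    $ratio        = 0.5;   // compressed size / original size (assumed)

    // Uncompressed: the link is the bottleneck.
    $plain = $fileMB / $linkMBps;

    // Compressed: limited by the slower of the compressor and the link,
    // where the link moves $linkMBps / $ratio of original data per second.
    $compressed = $fileMB / min($compressMBps, $linkMBps / $ratio);

    printf("uncompressed: %.1f sec, compressed: %.1f sec\n", $plain, $compressed);

That prints roughly 2.6 sec vs 15 sec, matching what I saw. Swap $linkMBps for 10 (100Mbit) and the compressed path wins, which is the break-even point I mentioned.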

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of [EMAIL PROTECTED] CRM solutions
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/

