Some things in this thread:

1) As Rob pointed out, you are slowing down your Linux server on purpose. A swap device on disk can maybe do 200 swaps per second; add three of them and you can do 600. With VDISK, it could be 40,000 per second, so Linux doesn't even notice it is swapping.
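For what it's worth, the "add 3" idea above maps onto how Linux stripes across equal-priority swap areas. A minimal sketch (the DASD device names are hypothetical, not from this thread; adjust to your configuration):

```shell
# Hypothetical DASD swap partitions -- substitute your real devices.
mkswap /dev/dasdb1
mkswap /dev/dasdc1

# Equal priorities: the kernel round-robins page-outs across the
# devices, instead of spilling over from the highest-priority one.
swapon -p 10 /dev/dasdb1
swapon -p 10 /dev/dasdc1

# Equivalent /etc/fstab entries:
# /dev/dasdb1  swap  swap  pri=10  0 0
# /dev/dasdc1  swap  swap  pri=10  0 0
```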
2) You do not have to back VDISK with storage; you have to back it with paging devices. So if you converted those dog-slow swap devices to VM paging, problem solved: nothing lost, no extra storage required, and Linux doesn't get arbitrarily slowed when it happens to need more storage. VM very nicely balances the paging load across all its paging devices. For the time period Linux needs to swap to VDISK, it has all this extra fast memory; then that extra memory gets paged out.

3) Any Linux measurement based on time is skewed. Linux might measure the swap device as 100% busy where, from the VM side, it shows 30%, depending on other servers' CPU requirements. Anything measuring elapsed or busy time inside Linux is wrong; you need the VM perspective, which is different. I'd say your data is misleading and incomplete; too many times does such data lead to misleading conclusions.

>Date: Wed, 30 Aug 2006 11:25:53 -0400
>Reply-To: Linux on 390 Port <[email protected]>
>From: "Hall, Ken (GTI)" <[EMAIL PROTECTED]>
>
>I tracked down a copy of the actual report from the performance
>guy, and it turns out we've been (partially) barking up the
>wrong tree. Sorry for the confusion.
>
>The real story is that the large swap partition DEVICE was 100%
>BUSY during part of the test. The original summary report said
>100% FULL, so we've all been going around trying to explain
>that. What I now suspect was actually happening was that the
>memory load increased to the point that the instance started
>paging heavily and saturated the path to one of the swap
>devices.
>
>This makes a lot more sense, but the problem is still there,
>just a little different.
>
>We definitely have a memory constraint, so we're going to
>increase the memory allocation for the instance, but would it
>also help to use multiple small-ish swap partitions as a
>safety net for peak periods? (Please don't start up the debate
>on whether to use disk-based swap devices at all.
>I've been through that, and the alternatives won't fly here
>right now.)
>
>Right now, the busy (larger) swap device has a priority of -2,
>and the other (smaller) one has a priority of -1 (defaults).
>The load problem appeared on the larger one, implying that it
>was getting hit harder (far harder than the smaller partition
>during peak periods). I found a howto that indicates that if
>you set multiple partitions to the same priority, they get
>used "round robin" instead of in "spillover" mode, but this
>might not help either.
>
>If we set several partitions to equal priority, does the kernel
>do any kind of load balancing between partitions? It wouldn't
>do much good if it keeps trying to use a heavily loaded swap
>device.
>
>Thanks for all of the suggestions, and apologies for starting
>down the wrong road earlier.

"If you can't measure it, I'm Just NOT interested!"(tm)

/************************************************************/
Barton Robinson - CBW      Internet: [EMAIL PROTECTED]
Velocity Software, Inc     Mailing Address:
196-D Castro Street        P.O. Box 390640
Mountain View, CA 94041    Mountain View, CA 94039-0640

VM Performance Hotline: 650-964-8867
Fax: 650-964-9012
Web Page: WWW.VELOCITY-SOFTWARE.COM
/************************************************************/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390
or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
