On Mon, Mar 30, 2009 at 3:18 AM, Fred Schmidt <[email protected]> wrote:

> I have since found out that our SCSI disk will be allocated in 50 GB
> LUNs and so we would have a stripe of exactly 1 disk for most of our
> systems. So I guess that rules out LVM striping.

For many installations it is a huge plus that you can add physical
volumes to an LVM volume group without copying all your data from left
to right. But once you stripe a logical volume, you can't increase its
size anymore.

The ambition to let a single guest saturate all your DASD I/O bandwidth
on its own seems to stem from the discrete world, where unused spare
capacity is wasted. With Linux on z/VM, your objectives should probably
be different: look for the maximum achieved throughput at the lowest
cost while still meeting the response time objectives stated in your
SLA. If you configure the system so that each Linux guest on its own
can monopolize it, your tuning task becomes significantly more
complicated. When that is not justified by the SLA, it may not be the
best thing to do. A lot of this is not intuitive; the mere fact that a
configuration is harder to manage does not mean it will perform
better ;-)

My approach would be to review the application to confirm that disk I/O
is indeed expected to be the bottleneck for that application, and that
a straightforward implementation actually prevents you from meeting the
required response time. Depending on which part of your I/O is
critical, you would then decide what to configure. If your application
is CPU constrained, the longer code paths in LVM (with striping, for
example) will make things worse. And when you don't measure, nobody can
tell whether you did a good job (or not).

Rob

--
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390
or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
