On 10 October 2012 01:13, Mark Post <[email protected]> wrote:
>>>> On 10/9/2012 at 06:50 PM, Brad Hinson <[email protected]> wrote:
> -snip-
>> I have lots of mod-9 ECKD with HyperPAV enabled, so I want to use LVM. So my
>> two choices are standard LVM or LVM striping. If I stripe across the disks,
>> I spread the I/O across the physical volumes, but my gut tells me I shouldn't
>> have to do this, since HyperPAV is moving aliases around dynamically. For
>> example, say I have 2 PVs and 4 HyperPAV aliases. If I send some heavy I/O
>> through the Linux (device-mapper) block device, then I would assume:
>>
>> - #1, with LVM striping enabled, LVM will spread the I/O across both
>> PVs, and HyperPAV will assign 2 aliases to each PV since I'm banging on
>> them both.
>> - #2, without LVM striping, HyperPAV will assign all 4 aliases to the
>> first PV, since that's the only one in use.
>>
>> In either case it seems I'm using all 4 aliases, so I would expect the
>> same performance. Please correct me if I'm wrong. And if so, which of
>> these configs is better?
>
> Keep in mind that performance is affected by how many real spindles the I/O
> is being issued against. A single real disk can only do one seek, read,
> etc. at a time. Whether the storage admin took this into account when
> creating the definitions for the ECKD devices is another matter. But if
> they did, and each volume is in a separate subsystem/rank/whatever within
> the storage array, then I would think a combination of HyperPAV and
> striping could eke out more I/Os than a single volume in one
> subsystem/rank/whatever. I leave it to the more knowledgeable members of
> the list to either confirm or disembowel this notion. :)
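For reference, the two configurations being compared above can be set up
roughly like this. This is only a sketch: the DASD partition names and the
volume group name are hypothetical, and the stripe count/size should match
your actual number of PVs and workload.

```shell
# Two ECKD DASD partitions as physical volumes (names are examples only)
pvcreate /dev/dasdb1 /dev/dasdc1
vgcreate vgdata /dev/dasdb1 /dev/dasdc1

# Case #1: striped LV -- -i 2 stripes across both PVs, -I 64 sets a
# 64 KiB stripe size, so heavy I/O hits both volumes concurrently
lvcreate -i 2 -I 64 -L 20G -n lvstriped vgdata

# Case #2: linear (default) LV -- extents are allocated from the first
# PV until it fills, so initially all I/O lands on one volume
lvcreate -L 20G -n lvlinear vgdata
```

`lvs -o +stripes,stripe_size` will show which layout a given LV actually
has once created.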
I don't think any of this applies to contemporary DASD subsystems anymore.
They already engage as many real devices as they can (if any at all). Even
the discussion about ranks, sites, storage arrays, and groups seems to be
something from the past.

Yes, in general you're better off with more alternative routes to the data.
But you have to realize that there is a CPU cost and latency associated with
every extra layer, so if you don't need one, you're better off without it.
With consolidation schemes there's normally someone else to take advantage
of the resources when you don't need them, so there's little need to
saturate the I/O subsystem when you can meet your SLA without it.

I have demonstrated that I can read about 1 GB/s from a single 3390 with a
single application. That's faster than you can discard the data. I can only
recommend staying away from low-level benchmarks, since it is hard to
correlate them with your application.

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z,
visit http://wiki.linuxvm.org/
