Thank you. The workload is mixed read/write, small and large packets, and I have RAID-1 sets in one of the boxes and a RAID-50 in the big one. I suppose using iodepth=32 will show me more realistic numbers than my original tests did. Is this a valid assumption: if I keep increasing iodepth until the IOPS stop growing, will that get me closer and closer to the maximum IOPS the config can provide?
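In fio terms, the sweep I have in mind would look something like the job file below. This is only a sketch: it assumes the libaio engine on Linux, /dev/sdX is a placeholder for the real RAID volume, and the block size and runtimes are arbitrary.

; Hypothetical iodepth sweep. Each job stonewalls, so they run one
; after another and you can watch where IOPS stop scaling with depth.
; /dev/sdX is a placeholder -- point it at the RAID-1 or RAID-50 volume.
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
filename=/dev/sdX
runtime=60
time_based

[qd1]
iodepth=1
stonewall

[qd4]
iodepth=4
stonewall

[qd16]
iodepth=16
stonewall

[qd32]
iodepth=32
stonewall

[qd64]
iodepth=64
stonewall

Once two successive depths report roughly the same IOPS, the plateau has probably been reached.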
Thank you,
Laszlo

On 24 February 2014 21:16, Carl Zwanzig <[email protected]> wrote:
>> From: Pal, Laszlo [mailto:[email protected]]
>> Sent: Monday, February 24, 2014 11:27 AM
>
>> Thank you. Is stonewalling with the sync engine a good approach? The
>> main goal is to tell our customers what IOPS they have to provide to
>> get the same or similar I/O performance as they have when using the
>> appliances.
>
> I'm not sure what you're asking. If you're trying to model a specific
> workload, well, you have to define what that workload is. If you're
> trying to see how fast the storage system itself can go, then the tests
> you've listed are rational -provided- that you don't run them in
> parallel ('stonewall') and that you don't limit yourself at the client
> (by using sync i/o).
>
> On the systems I test, I see a great improvement in the throughput of
> RAID volumes by increasing the iodepth. Think of it: with sync i/o
> (iodepth=1) you're doing one operation at a time, to only one drive*.
> If you have 8 drives in a RAID-0 set, -each- drive can be doing
> something at the same time, but with iodepth<8 that'll never happen.
> Likewise, assuming modern drives, each drive can queue up some
> operations internally and optimize head movement. This also improves
> performance. (I usually use iodepth=32 or higher, on Linux.)
>
> *I'm ignoring parity RAID configs, where each write to the RAID set
> causes reads and writes to the entire volume. Nothing will help you
> there :).
>
> z!
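To make the iodepth point above concrete, here is a hedged sketch contrasting the sync engine against libaio at iodepth=32 on one mixed workload. The 70/30 read mix and the 4k/64k bssplit are assumptions standing in for the real packet profile, and /dev/sdX is again a placeholder.

; Hypothetical sync-vs-async comparison on the same mixed workload.
; rwmixread and bssplit are guesses -- substitute the measured
; profile of the appliance workload before drawing conclusions.
[global]
direct=1
filename=/dev/sdX
rw=randrw
rwmixread=70
bssplit=4k/60:64k/40
runtime=120
time_based

[sync-qd1]
ioengine=sync
stonewall

[libaio-qd32]
ioengine=libaio
iodepth=32
stonewall

On a striped set the libaio job should pull ahead for the reasons given above; on the parity (RAID-50) set, expect a smaller gap on the write side.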
