On Thu, 21 Aug 2008, Brett Monroe wrote:

> The 6140 and 2500 devices are re-badged LSI products that do fail-over
> only (non-symmetric devices).
>
> It's been my experience that that is how the LSI products behave. We
> have seen non-Sun (IBM) branded LSI storage arrays do the same thing.
> The nice thing about MPxIO is that it handles it very well (for us).
It is true that the controllers in the 2540 act as active-standby at a per-drive level, with each controller responsible for six drives unless a controller or drive interface fails. This means that it (ideally) makes sense for the 2540 to communicate with MPxIO so that data is normally sent to the controller which is active for that drive. However, it is quite unusual to export a LUN per drive, so these smarts may not exist.

As I mentioned before, the load used to be perfectly distributed across the two FC links and the two controllers, resulting in excellent performance. Now MPxIO makes different decisions, so the load is no longer balanced.

It may be that I could write a script to encourage MPxIO to re-adjust the paths after boot so that they are (temporarily) optimal for my arrangement. I am not sure how MPxIO makes its choices, but it is quite possible that its final decisions depend on the ordering/timing of other events, which could change with a new kernel or device drivers.

Note that this scenario only really applies to the case where each drive is exported as a LUN (as I am doing). With a multi-drive LUN this optimization goes away.

With the current situation, I see perfect vlun load distribution via 'zpool iostat -v', but not at the device level as shown by 'iostat -x'.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
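P.S. A first step for the path-adjustment script idea above would be to inventory path health per LUN from 'mpathadm list lu' before deciding anything needs failing over (the actual rebalancing would presumably use 'mpathadm failover'). A minimal parsing sketch follows; the sample text and device names are illustrative only, not captured from a real 2540, and the exact output layout of mpathadm should be verified on the target system:

```python
# Sketch: summarize per-LUN path counts from `mpathadm list lu`-style output.
# SAMPLE below is a hypothetical stand-in for the real command output; on a
# live Solaris host you would feed in the text from
# subprocess.run(["mpathadm", "list", "lu"], ...) instead.

SAMPLE = """\
        /dev/rdsk/c4t600A0B80002FBF7Ad0s2
                Total Path Count: 2
                Operational Path Count: 2
        /dev/rdsk/c4t600A0B80002FBF7Bd0s2
                Total Path Count: 2
                Operational Path Count: 1
"""

def parse_lu_paths(text):
    """Return {lun: (total_paths, operational_paths)} from mpathadm-like text."""
    result = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("/dev/rdsk/"):
            current = line
            result[current] = [0, 0]
        elif line.startswith("Total Path Count:") and current:
            result[current][0] = int(line.split(":")[1])
        elif line.startswith("Operational Path Count:") and current:
            result[current][1] = int(line.split(":")[1])
    return {lun: tuple(counts) for lun, counts in result.items()}

if __name__ == "__main__":
    for lun, (total, oper) in parse_lu_paths(SAMPLE).items():
        flag = "" if oper == total else "  <-- degraded, candidate for attention"
        print(f"{lun}: {oper}/{total} paths operational{flag}")
```

A real rebalancing pass would then compare the active controller per LUN against the desired owner and nudge only the mismatched ones, rather than blindly failing everything over.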
