> I don't see anything in CAM regarding paths or path configuration. All of
> the volumes are indicated to be 'optimal' and the per disk loading is
> indicated to be quite uniform. I was not aware of the iostat -Y flag until
> now.
Yea, that's sorta what I meant: everything being on its preferred controller.

> I have been meaning to ask on this list if MPxIO will do anything useful
> (from a load-share and reliability standpoint) if I install another fiber
> channel card so that I have four links to the array. Will it handle that ok?
> I don't really expect to see much more performance but it would eliminate
> total failure due to FC card failure.

My understanding is that it will, though I have never set it up myself (for
cost reasons). It would (ideally) help performance more than HA, since the
first pair of links should already be on two different HBAs (on two different
PCI buses, if the server supports it) to begin with.

> Here is a 'iostat -Y' dump for 30 second interval while 'zfs scrub' is going
> on. The drives serviced by fp1 seem to have a much larger service time than
> the drives serviced by fp0:

Hmm, after a quick glance (and unless I am missing something), it looks like
at least one Volume is not on its preferred controller (based on your
description of your environment) or is not configured correctly:

fp1 is servicing 7 devices:

sd11.t1.fp1  379.0  0.1  47861.4  0.1  0.0  0.0  0.0  0  0
sd12.t1.fp1  379.4  0.0  47879.2  0.0  0.0  0.0  0.0  0  0
sd14.t1.fp1  380.7  0.6  47863.4  4.4  0.0  0.0  0.0  0  0
sd16.t1.fp1  380.4  0.5  47906.8  4.2  0.0  0.0  0.0  0  0
sd18.t1.fp1  379.3  0.5  47885.1  2.7  0.0  0.0  0.0  0  0
sd19.t1.fp1  378.9  0.0  47940.8  0.0  0.0  0.0  0.0  0  0
sd21.t1.fp1  380.2  0.1  47860.4  0.1  0.0  0.0  0.0  0  0

fp0 is servicing 5 devices:

sd10.t2.fp0  379.9  0.8  47909.5  4.2  0.0  0.0  0.0  0  0
sd13.t2.fp0  380.4  0.6  47909.0  2.7  0.0  0.0  0.0  0  0
sd15.t2.fp0  379.7  0.0  47913.5  0.0  0.0  0.0  0.0  0  0
sd17.t2.fp0  379.1  0.8  47863.4  4.4  0.0  0.0  0.0  0  0
sd20.t2.fp0  381.7  0.0  47964.5  0.0  0.0  0.0  0.0  0  0

--Brett
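
P.S. If you do add the second card, something along these lines should tell
you whether MPxIO has picked up all four paths and which ones each LUN is
actually using. This is only a sketch from memory (as I said, I have not set
MPxIO up here), and the device name below is just a placeholder, not one of
yours:

  # Enable MPxIO on all fp (FC) attached storage; takes effect after a reboot:
  stmsboot -D fp -e

  # After the reboot, list each LUN with its total and operational path counts:
  mpathadm list lu

  # Show the per-path state and target ports for a single LUN
  # (placeholder device name):
  mpathadm show lu /dev/rdsk/c4t600A0B800012345600000000DEADBEEFd0s2

  # Check that all of the HBA ports are online:
  fcinfo hba-port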

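Also, rather than eyeballing the iostat output, a quick nawk pass over a saved
capture can count how many per-path entries each controller is carrying. Treat
this as an untested sketch: it assumes the per-path names look like the
sdNN.tN.fpN entries above, and 'iostat.out' is just whatever file you saved
the iostat run to:

  # Tally unique sdNN.tN.fpN entries per fpN controller from a saved capture.
  nawk '{ for (i = 1; i <= NF; i++)
            if ($i ~ /\.fp[0-9]+$/ && !($i in seen)) {
                seen[$i] = 1
                n = split($i, a, ".")
                count[a[n]]++
            }
        }
        END { for (c in count) print c, count[c], "devices" }' iostat.out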