Hi

If you are serious about performance, I would recommend you pop in a
switch, run multiple links into controller A, and run at least one into
controller B.

Depending on your controller (2- or 4-port), you can get maximum
throughput from active controller A using MPxIO and still have at least
one backup path coming from controller B.

:)
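If it helps, a quick way to confirm which paths MPxIO considers active
versus standby is mpathadm (Solaris 10 and later). This is just a sketch;
the logical-unit device path below is a made-up placeholder, so substitute
one from your own `mpathadm list lu` output:

```shell
# List all multipathed logical units with their operational path counts
mpathadm list lu

# Show detail for one logical unit; each path's "Access State" field
# ("active" vs. "standby") reflects the asymmetric active/standby
# behavior of arrays like the 6140.
# NOTE: the device path below is a placeholder, not a real LU on your box.
mpathadm show lu /dev/rdsk/c4t600A0B80002A394600000000470F0000d0s2
```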


On 10/16/07, Lion, Oren-P64304 <[EMAIL PROTECTED]> wrote:
>
> Ramana,
>
> With both HBAs connected to a single controller on the 6140 (and MPxIO)
> I observe aggregate performance similar to Linux multi-path'ing to the
> EqualLogic array, about 120000 kw/s.
>
> Oren
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 16, 2007 6:35 AM
> To: Lion, Oren-P64304
> Cc: [email protected]
> Subject: Re: [storage-discuss] Only primary MPxIO channel gets IOs
>
> Oren,
> You are seeing expected behavior, which is dependent on the type of the
> array.
> The 6140 is an ASYMMETRIC array with an ACTIVE-STANDBY configuration.
> The STANDBY path will become ACTIVE (and accept IOs) only when the
> current ACTIVE path is down.
>
> I'm not sure about the model of the EqualLogic array, but based on the
> behavior, I guess the array is a SYMMETRIC device.
> The symmetric device uses an ACTIVE-ACTIVE configuration where IOs are
> accepted on both channels.
> If you hook up the EqualLogic array to Solaris (and the 6140 to Linux,
> if Linux supports it) you should see the same array-dependent behavior.
>
> Let us know if you get a chance.
>
> /Ramana
>
> I believe the Equallogic array is
> > Oren Lion wrote:
> >
> >> Hi,
> >>
> >> I was surprised to observe only the primary MPxIO channel getting IOs
> >> (see the iostat output below for fp0); I expected to see IOs
> >> distributed round-robin across both paths, fp0 and fp1, with higher
> >> throughput than IOs on a single channel alone.
> >>
> >> As a sanity check against iostat, I ran QLogic's SANsurfer GUI tool
> >> to observe HBA performance; sure enough, only a single HBA was
> >> servicing IO requests.
> >>
> >> Running a similar test on Linux against an EqualLogic array, I
> >> observed multipath sending IOs down multiple channels.
> >>
> >> Is my expectation with Solaris x86 MPxIO valid?
> >>
> >> Config:
> >> V40z 8-core, 16 GB RAM, latest Recommended Patch (uname -a returns
> >> patch level 127112-01)
> >> 2x Sun StorageTek PCI-X (QLogic) 4Gb Single Port HBAs
> >> Sun 6140, 4GB cache, 16x 15K RPM drives, RAID 10
> >>
> >> snippet /var/adm/messages
> >> Oct 12 11:13:26 z8 genunix: [ID 834635 kern.info] /scsi_vhci/[EMAIL PROTECTED] (sd19) multipath status: degraded, path /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (fp1) to target address: w201400a0b82a3946,0 is standby Load balancing: round-robin
> >> Oct 12 11:13:26 z8 genunix: [ID 834635 kern.info] /scsi_vhci/[EMAIL PROTECTED] (sd19) multipath status: optimal, path /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (fp0) to target address: w201500a0b82a3946,0 is online Load balancing: round-robin
> >>
> >> iostat -X 2
> >>
> >>                   extended device statistics
> >> device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> >> sd3          0.0   77.5    0.0 51193.6  0.0  1.0   12.5   0  76
> >> sd4          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd5          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd6          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd19        58.5    0.0 50681.6    0.0  0.0  0.7   12.0   0  70
> >> sd19.fp1     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd19.fp0    58.5    0.0 50681.6    0.0  0.0  0.0    0.0   0   0
> >> ses16.fp1    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> ses17.fp0    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >>                   extended device statistics
> >> device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> >> sd3          0.0   56.0    0.0 51204.8  0.0  0.7   12.4   0  69
> >> sd4          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd5          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd6          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd19        59.0    0.0 51204.8    0.0  0.0  0.7   11.8   0  69
> >> sd19.fp1     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> sd19.fp0    59.0    0.0 51204.9    0.0  0.0  0.0    0.0   0   0
> >> ses16.fp1    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >> ses17.fp0    0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
> >>
> >>
> >> This message posted from opensolaris.org
> >> _______________________________________________
> >> storage-discuss mailing list
> >> [email protected]
> >> http://mail.opensolaris.org/mailman/listinfo/storage-discuss
> >>
> >>
> >>
>
