Recently got a new V890 and our first ST2530 array.  The V890 has two 
SG-XPCI8SAS-E-Z adapters in it (SAS HBAs).  I installed Solaris 10 08/07 and 
the current recommended patch set on the local disk.  I configured the array 
out of band, setting up two volumes and mapping them to 0 and 1 in the default 
domain.  I then attached one HBA to port 2 on controller A and the other to 
port 2 on controller B, per the instructions.

Following this, I was able to see my volumes in Solaris, in duplicate because 
of the two paths of course.  I went ahead and labelled the "disks", did a newfs 
on each, and made sure I could mount them using the traditional device names.  
On my system the disks were c2t0d0, c2t0d1, c3t1d0, and c3t1d1.  I used the c2 
devices for the label, the newfs, and the test mount of each.  Everything 
worked fine to that point.
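In case it helps, the per-path sanity check looked roughly like this (a sketch; the slice number and mount point are assumptions for illustration, and I repeated it for each LUN):

```shell
# Label the LUN, build a UFS filesystem on it, and test-mount it through
# the first HBA's path (slice 6 and /mnt are placeholder choices)
format -d c2t0d0              # select the disk; run the label command inside
newfs /dev/rdsk/c2t0d0s6      # create a UFS filesystem on slice 6
mount /dev/dsk/c2t0d0s6 /mnt  # verify it mounts via the traditional name
umount /mnt
```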

Then I started configuring multipathing.  The SAS controllers in my system were 
/pci@…,600000/LSILogic,sas@1 and /pci@…,600000/LSILogic,sas@2, and the internal 
controller is /pci@…,600000/SUNW,…@…/…@…,0.  I put 
the following in my mpt.conf file:

mpxio-disable="no";
name="mpt" parent="/pci@…,600000" unit-address="2" mpxio-disable="yes";

So, everything is mpxio-enabled except the controller for the internal disks.  
The reason I mention the device names for the SAS controllers is that I have 
also tried this mpt.conf:

mpxio-disable="yes";
name="mpt" parent="/pci@…,600000" unit-address="1" mpxio-disable="no";
name="mpt" parent="/pci@…,600000" unit-address="2" mpxio-disable="no";

That way everything is mpxio-disabled except my SAS HBAs, but same results 
either way...
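For what it's worth, I understand stmsboot can also toggle the per-driver setting itself rather than my hand-editing mpt.conf; a sketch, assuming the -D option is available in this release:

```shell
# Let stmsboot flip the mpxio-disable setting for mpt HBAs and update
# vfstab itself, instead of editing mpt.conf by hand
stmsboot -D mpt -e   # enable MPxIO on mpt-driven controllers only
stmsboot -L          # after the reboot, list old-name -> new-name mappings
```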

What's happening is that when I run stmsboot -u after making the changes, the 
multipath device aliases are correctly configured in my /dev/dsk directory, and 
my vfstab file is correctly updated.  However, on the next boot, unless I boot 
with the -r option (a reconfiguration reboot), I only see the second volume 
(LUN 1).  By "see" I mean that stmsboot -L shows only that volume.  If I run 
format, I can see the first volume under its traditional device names, c2t0d0 
and c3t1d0, and the second under its multipathed name.  If I do a 
reconfiguration reboot, I see the correct things in both format (both volumes, 
multipath names only) and stmsboot -L (all four old devices mapping to the two 
multipath devices).
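The checks I run after each boot look roughly like this (the mpathadm cross-check is my assumption of a useful extra data point, not something the docs told me to do here):

```shell
# Rebuild the device tree in place and re-check visibility of LUN 0
devfsadm -Cv         # remove stale /dev links, create any missing ones
stmsboot -L          # does it now list both volumes?
mpathadm list lu     # each logical unit should report two operational paths
```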

Oh, and by the way, whether or not these filesystems are set to mount at boot 
in vfstab, I get the same results... except that if they are set to mount at 
boot, the system drops into maintenance mode when the first volume fails to 
mount (the second volume mounts fine).  That's fairly annoying at this point, 
so I've left them as manual mounts.
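With manual mounts, the vfstab lines look roughly like this (the WWN-style target name below is a made-up placeholder, not my actual device):

```
# MPxIO names are WWN-based under the scsi_vhci node; placeholder WWN shown
# device to mount                                device to fsck                                   mount point  FS   pass  boot  options
/dev/dsk/c4t600A0B80002FEA2A0000000000000000d0s6 /dev/rdsk/c4t600A0B80002FEA2A0000000000000000d0s6 /array/vol0 ufs  2     no    -
```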

Why is this happening?  For fun I tried changing the LUN numbers from 0 and 1 
to 1 and 2, and it does exactly the same thing... the first one disappears 
unless I do a reconfiguration reboot.  I checked, and my array is set up for 
the right host type, Solaris with Traffic Manager.  Not sure if it matters, 
but my first volume is a RAID 1+0 of six disks and the second is a RAID 5 of 
five disks, both with 512 KB segment sizes.

My V890 has the latest PROM, I've got all the Solaris/mpt updates, and my array 
firmware is pretty new (6.17.52.10, CAM 6.0.0).  Not sure where else to go from 
here.  I did log a case with Sun but I'm awaiting a response.
 
 
This message posted from opensolaris.org