Brian Wilson wrote:
> On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
> > Darren Dunham wrote:
> >> My previous experience with powerpath was that it rode below the
> >> Solaris device layer.  So you couldn't cause trespass by using the
> >> "wrong" device.  It would just go to powerpath, which would choose
> >> the link to use on its own.
> >>
> >> Is this not true or has it changed over time?
> > I haven't looked at PowerPath for some time, but it used to be the
> > opposite. The powerpath node sat on top of the actual device paths.
> >
> > One of the selling points of mpxio is that it doesn't have that
> > problem. (At least for devices it supports.) Most of the multipath
> > software had that same limitation.
> >
> 
> I agree, it's not true.  I don't know how long it hasn't been true,
> but for the last year and a half I've been implementing PowerPath on
> Solaris 8, 9, and 10, and the way to make it work is to point whatever
> disk tool you're using at the emcpower device.  The other paths are
> there because Leadville finds them and creates them (if you're using
> Leadville), but PowerPath isn't doing anything to make them redundant;
> it gives you the emcpower device, with the emcp, etc. drivers to
> front-end them, so you get a multipathed device (the emcpower device).
> It DOES choose which path to use, for all I/O going through the
> emcpower device.  In a situation where you lose paths while I/O is
> moving, you'll see SCSI errors down one path, then the next, then the
> next, as PowerPath gets fed the SCSI error and tries the next device
> path.  If you use those actual device paths, you're not actually
> getting a device that PowerPath is multipathing for you (i.e., it
> does not dig in beneath the SCSI driver).

I'm afraid I have to disagree with you: I'm using the
/dev/dsk/c2t$WWNdXs2 devices quite happily, with PowerPath handling
failover for my CLARiiON.
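
To illustrate what Brian describes (a sketch only, not output from my
system; the emcpower name just reuses the one from my powermt output
below), the recommended usage is to point your tool at the
pseudo-device:

# fsck /dev/dsk/emcpower58a

Below, though, I fsck the native c2t...d58 paths directly, and
PowerPath still handles failover: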

# powermt version
EMC powermt for PowerPath (c) Version 4.4.0 (build 274)
# powermt display dev=58
Pseudo name=emcpower58a
CLARiiON ID=APM00051704678 [uscicsap1]
Logical device ID=6006016067E51400565259A15331DB11 [saperqdb1:/oracle/Q02/saparch]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]   c2t5006016130202E48d58s0   SP A1     active     alive      0      0
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]   c2t5006016930202E48d58s0   SP B1     active     alive      0      0
# fsck /dev/dsk/c2t5006016130202E48d58s0
** /dev/dsk/c2t5006016130202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups

FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)
# fsck /dev/dsk/c2t5006016930202E48d58s0
** /dev/dsk/c2t5006016930202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups

FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)

### So at this point, I can look down either path and get to my data.
Now I kill one of the two paths via SAN zoning, run cfgadm -c configure
c2 to rescan the fabric, and powermt check reports that the path to
SP A is now dead.  I'm still able to fsck through the dead path:
# cfgadm -c configure c2
# powermt check
Warning: CLARiiON device path c2t5006016130202E48d58s0 is currently dead.
Do you want to remove it (y/n/a/q)? n
# powermt display dev=58
Pseudo name=emcpower58a
CLARiiON ID=APM00051704678 [uscicsap1]
Logical device ID=6006016067E51400565259A15331DB11 [saperqdb1:/oracle/Q02/saparch]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]   c2t5006016130202E48d58s0   SP A1     active     dead       0      1
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]   c2t5006016930202E48d58s0   SP B1     active     alive      0      0
# fsck /dev/dsk/c2t5006016130202E48d58s0
** /dev/dsk/c2t5006016130202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups

FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)
# fsck /dev/dsk/c2t5006016930202E48d58s0
** /dev/dsk/c2t5006016930202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups

FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0% fragmentation)
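
(To bring the dead path back afterwards, a sketch assuming the zoning
change is reverted first: rescan the fabric and have PowerPath retest
its paths.

# cfgadm -c configure c2
# powermt restore

powermt restore probes dead paths and marks them alive again if they
respond.)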