Couple questions:

Are the underlying volumes VxVM volumes? If so, why are you not mounting the /dev/vx/dsk/DGNAME/volname objects?
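(For reference -- DGNAME, volname, and the /u10 mount point below are placeholders for whatever your configuration actually uses -- a vfstab entry against a VxVM volume would look roughly like:

/dev/vx/dsk/DGNAME/volname /dev/vx/rdsk/DGNAME/volname /u10 ufs 3 yes -

)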

If the devices actually /are/ PowerPath devices (i.e., they're metadevices coalescing the underlying EMC LUN paths), why aren't you mounting the PowerPath device node rather than the DMP device node?
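(On Solaris, the PowerPath pseudo-devices show up as emcpowerN nodes, where the trailing letter maps to the slice -- slice 6 would be g. emcpower10 below is a made-up name; powermt display dev=all shows the real LUN-to-pseudo-device mapping:

/dev/dsk/emcpower10g /dev/rdsk/emcpower10g /u10 ufs 3 yes -

)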

Overall, I guess what's not clear here is what role you actually want Storage Foundation to play. Are you attempting to use it strictly for multi-path support? If so, there are likely more cost-effective ways of doing multi-pathing (it sounds like you're already paying for PowerPath anyway, and even if you weren't, MPxIO sh/would be available to you).
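(If you did want to try MPxIO, on Solaris 10 it's enabled with:

stmsboot -e

Note that this renames the device paths and requires a reboot, so vfstab entries need updating afterward.)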

On 3/30/2010 11:09, Collin wrote:
Sorry for any confusion...

I've got several PowerPath devices from a dead system that I'm mounting temporarily on one node in my cluster. I run devfsadm -Cv and vxdctl enable. After that I can see the PowerPath devices listed as...

emc_clariion0_10  auto:none         -          -    online invalid

I modified my /etc/vfstab file to mount the devices...

/dev/vx/dmp/emc_clariion0_10s6 /dev/vx/rdmp/emc_clariion0_10s6 /u10 ufs 3 yes -

The device mounts and I can access the file system with all my data. When activity starts to increase on these temporary mount points, I see a countdown on the console that port H has lost connectivity. After the 16 seconds, the node panics and of course reboots. However, if I mount the PowerPath devices using a single path...

/dev/dsk/c1t5006016100600432d10s6 /dev/rdsk/c1t5006016100600432d10s6 /u10 ufs 3 yes -

I never get the port H losing connectivity.

I want to use the DMP name in case I lose a path to these disks.

Any reason why using the DMP name causes port H to lose connectivity vs. using a single path?

--
"You can be only *so* accurate with a claw-hammer" --me

_______________________________________________
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
