Collin

 

OK - I see you are using enclosure-based naming - and BTW, the
/dev/vx/dmp devices are just the same as disks, but with the added
provision of DMP to keep the device online.

How many paths are there to these devices?

# vxdmpadm getsubpaths 

or

# vxdisk path
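
For reference, the getsubpaths output looks something like this (the
exact format varies by release, and the second path name below is
purely illustrative - substitute your own controllers and targets):

# vxdmpadm getsubpaths dmpnodename=emc_clariion0_10
NAME                      STATE[A]    PATH-TYPE[M]  DMPNODENAME       ENCLR-NAME
c1t5006016100600432d10s2  ENABLED(A)  PRIMARY       emc_clariion0_10  emc_clariion0
c2t5006016900600432d10s2  ENABLED     SECONDARY     emc_clariion0_10  emc_clariion0

Two rows per DMP node means 2 paths, four rows means 4, and so on.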

 

If this is a CLARiiON and there are only 2 paths, set the array
iopolicy to singleactive - this is the most likely case - see the DMP
block-switch discussion later on.

# vxdmpadm listenclosure all

# vxdmpadm setattr enclosure ENC_name iopolicy=singleactive

If this is a CLARiiON and there are more than 2 paths, set the array
iopolicy to balanced - DMP does know how to stop I/O to the secondary
paths. Note: this is against EMC recommendations, but it works.

# vxdmpadm listenclosure all

# vxdmpadm setattr enclosure ENC_name iopolicy=balanced
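
Either way, confirm the setting took effect (the enclosure name here is
just an example - use the name reported by listenclosure):

# vxdmpadm getattr enclosure emc_clariion0 iopolicy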

 

I also notice that the enclosure names are lower case, indicating a
5.x release of VxVM is installed. Are the CLARiiON APMs running?

# vxdmpadm listapm all 

Check that the CLARiiON APMs show as Active.
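
The listapm output has a State column; you want to see the CLARiiON APM
Active, something like this (columns and version numbers vary by
release - this is illustrative only):

Filename        APM Name        APM Version  Array Types  State
================================================================
dmpCLARiiON     dmpCLARiiON     1            CLR-A/P      Active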

 

If this is a fencing cluster, then these are local LUNs as far as the
/dev/vx/dmp device names are concerned. And yes, SCSI-3 keys will be
placed on them in a fencing cluster.

# gabconfig -a

Check if there is Port b membership - if so, then yes, you have a
fencing cluster.
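
For example (the generation numbers here are illustrative):

# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   4a1c01 membership 01
Port b gen   4a1c04 membership 01
Port h gen   4a1c07 membership 01

Port a is GAB membership, port b is I/O fencing (vxfen), and port h is
the VCS engine (had) - the same port h you are seeing lose connectivity
on the console.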

 

Also check the DMP block switch - it may be that, with the iopolicy
incorrect and I/O low, you never reached the switch limit for:

# vxdmpadm gettune all 

dmp_pathswitch_blks_shift                
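
For what it's worth, this tunable is a power-of-two shift: a value of N
means 2^N contiguous 512-byte blocks are sent down one path before DMP
switches to the next (e.g. 11 means 2048 blocks, i.e. 1 MB). To view or
change just this tunable (the value 12 below is only an example, not a
recommendation):

# vxdmpadm gettune dmp_pathswitch_blks_shift

# vxdmpadm settune dmp_pathswitch_blks_shift=12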

Now, when the system is busy and I/O chunks are bigger than the path
switch level (and with the iopolicy incorrect), a path switch will cause
a trespass (check the SAN logs) AND a block, drain, resume on the DMP
path. There will be a failover message logged in /etc/vx/dmpevents.log -
check there as well.
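
A simple way to look, for example:

# tail -100 /etc/vx/dmpevents.log

Look for path disabled/enabled or failover entries around the time of
the panic - the exact message wording varies by release.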

 

Stuart

 

________________________________

From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Collin
Sent: Wednesday, 31 March 2010 2:09 AM
To: William Havey
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] VxVm

 

Sorry for any confusion...

I've got several PowerPath devices from a dead system that I'm mounting
temporarily on one node in my cluster.  I run devfsadm -Cv and vxdctl
enable.  After that I can see the PowerPath devices listed as:

emc_clariion0_10  auto:none         -          -    online invalid

I modified my /etc/vfstab file to mount the devices:

/dev/vx/dmp/emc_clariion0_10s6   /dev/vx/rdmp/emc_clariion0_10s6   /u10
ufs   3 yes -

The device mounts and I can access the file system with all my data.
When the activity starts to increase on these temporary mount points, I
see a countdown on the console that port H has lost connectivity. After
the 16 seconds, the node panics and of course reboots.  However, if I
mount the PowerPath devices using a single path:

/dev/dsk/c1t5006016100600432d10s6  /dev/rdsk/c1t5006016100600432d10s6
/u10  ufs 3 yes -

I never see port H lose connectivity.

I want to use the DMP name in case I lose a path to these disks.

Any reason why using the DMP name causes port H to lose connectivity
vs. using a single path?

Thanks,
Collin

On Tue, Mar 30, 2010 at 10:48 AM, William Havey <bbha...@gmail.com>
wrote:

The original message states "mount these disks as
/dev/vx/dmp/<emc_array>_Xs6". Perhaps this is normal behavior: mounts
are of devices which receive I/O, and a "/dev/vx/dmp/..." device entry
isn't I/O capable.

I think a clearer statement of what Collin intends to do is needed.

Bill

 

On Tue, Mar 30, 2010 at 3:01 AM, Dmitry Glushenok <gl...@jet.msk.su>
wrote:

Hello,

The panic string and the preceding messages usually help in
understanding the cause. The release notes for RP2-RP3 also provide
short descriptions of fixed issues, like "Fixed the cause of a system
panic when mutex_panic() was called from vol_rwsleep_wrlock()."


On 29.03.2010, at 19:02, Collin wrote:

> I've got the following....
>
>     Solaris 10
>     VxVM 5.0MP3RP1HF12
>

> I have a number of mount points that are being migrated from
> /dev/dsk/cXtXdXsX to clustered mount points.  The problem I'm having
> is that if I mount these disks in the /dev/dsk/cXtXdXsX format, I run
> the risk that if something were to cause the direct path to go down,
> I would lose the databases on these mount points.  But when I mount
> these disks as /dev/vx/dmp/<emc_array>_Xs6, my system panics and core
> dumps.

>
> Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
>
> Thanks,
> Collin
> _______________________________________________
> Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx

--
Dmitry Glushenok
Jet Infosystems



 

 

