/sf_dmp_field_guide.doc
It's certainly unclear to me why this is filed under marketing info, but
oh well... at least it exists!
Cheers,
- Mike Myers, mike.myers at nwdc.net
-Original Message-
From: Thomas Cornely [mailto:[EMAIL PROTECTED]
Sent: Friday, March 30, 2007 12:06 PM
To: Myers, Mike
Sent: Wednesday, April 04, 2007 9:01 PM
To: Myers, Mike; veritas-vx@mailman.eng.auburn.edu
Subject: RE: [Veritas-vx] ASL, APM and EMC Clariions (oh my...)
Hi Mike,
Interesting... I had no idea this doc was there. :)
Another good document if you're looking for insights on DMP is the DMP
So if you run /etc/vx/diag.d/vxdmpinq /dev/rdsk/c3t50001FE15005E90Ad6s2 what
do you get? It certainly looks like the disks are still visible...
Cheers,
- Mike Myers, mike.myers at nwdc.net
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jarkko
.
Cheers,
- Mike Myers, mike.myers at nwdc.net
P.S. As Michael Warnock pointed out, to the base packages, one should
add the man pages...
-Original Message-
From: Rich Whiffen [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 11:44 AM
To: Myers, Mike
Cc: [EMAIL PROTECTED]
Subject: Re
people do contribute to
this list and with such in-depth information. It's a wonderful
resource!
-Original Message-
From: Scott Kaiser [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 1:15 PM
To: Rich Whiffen; Myers, Mike
Cc: [EMAIL PROTECTED]
Subject: RE: [Veritas-vx] Minimum 5.0
Folks,
We were checking on patches for Foundation Suite on Solaris and found a
bunch, but we seem to be missing a patch for VxFS on Solaris 10...
I see patch 123200 for VxFS on Solaris 8, 123201 for VxFS on Solaris 9
but nothing for 10. From the progression I'd guess it would be 123202
but no
Searching on 61441 and fsadm yields pretty much only hits on this
Veritas bug on HP-UX:
PHKL_22121: ( SR: 8606135462 CR: JAGad04596 )
VxFS 3.3 write(2) may return incorrect error value 61441 to
applications on error.
As it says, this is not a real
You can't recover the private area directly, but if you rebuild logical
volumes that are on the exact same boundaries, the file systems inside
there will still be intact (presuming nothing wrote to those areas of
the disk during the disaster).
Probably the best instructions on doing something
It's been our experience that fsadm will not reorganize extents of files
that are open by an application. Thus, if you have even one extent of a
file in the area you wish to reclaim and that file is open, you must
shut down your application (or otherwise get it to close the file) to do
the fsadm
You don't really have a choice unless you happen to have a file system
guru on staff who enjoys playing with fsdb :)
Generally speaking, fsck will recover things well, though as in all
complex systems there are spectacular exceptions. Judging by the small
number of errors it's showing in the output
If folks are interested, the actual bug in this case is a pair of
missing quotes in the /usr/lib/vxvm/bin/vxroot script:
if [ $? -eq 0 -a -n $bus_drivers ] ;
---
if [ $? -eq 0 -a -n "$bus_drivers" ] ;
We put a fixed version onto our servers when they jumpstart so that we
can
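For anyone wondering why those missing quotes matter: with an empty variable, the unquoted test collapses to `[ -n ]`, which is always true. A minimal demonstration (runs in any POSIX shell; the variable name is just the one from the script above):

```shell
# With bus_drivers empty, the unquoted form [ -n $bus_drivers ]
# expands to [ -n ], which tests that the string "-n" is non-empty: true.
# The quoted form [ -n "$bus_drivers" ] correctly sees an empty string.
bus_drivers=""
if [ -n $bus_drivers ]; then
    echo "unquoted test: true (the bug)"
fi
if [ -n "$bus_drivers" ]; then
    echo "quoted test: true"
else
    echo "quoted test: false (correct)"
fi
```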
I don't know the Hitachi array, but I'll bet it's active/passive with explicit
failover (or something similarly named from Hitachi).
We have this issue with our Clariions. We connect them thus:
SP = Service Processor
SW = fiber switch (actually director)
HBA = host bus adaptor
(ASCII diagram of the SPA/SPB-to-switch-to-HBA cabling, truncated in the archive)
I appreciate the summary of how HDS works -- it's nice to have a little
background on other vendors' stuff; you never know what you'll be working with
tomorrow...
Just to return the favor a bit, EMC Clariions can act just as you've stated
here (LUN affinity); it's a selectable mode. The model
Something like this would total up the sizes of all disks:
vxprint -g dgname | awk '/^dm/ {T+=$5} END {print T}'
Is that what you're looking for? The result will be in sectors (512 bytes per
sector on Solaris, 1k on some other platforms) though you can adjust the awk
program to fix
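As a sketch of that unit adjustment (the dm records below are made up for illustration; field 5 is the length in sectors, 512 bytes each on Solaris):

```shell
# Hypothetical vxprint dm records piped into the same awk program,
# extended to convert 512-byte sectors into gigabytes.
printf '%s\n' \
  'dm disk01 c1t0d0s2 auto 2097152' \
  'dm disk02 c1t1d0s2 auto 4194304' |
awk '/^dm/ {T+=$5} END {printf "%.2f GB\n", T*512/2^30}'
# -> 3.00 GB
```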
I don't think this is a Veritas error per se. Once you can get format to talk
to the drives, you should be a long way toward solving your problem. Look at the
array's error log, because ASC 0x3a == medium not present.
If you had system power fails, you probably had array ones as well and
This procedure should work without any problems and you should be able to move
back to 3.5 as long as you don't upgrade the volume group version.
And of course you left out step 0 because we know it's required:
0. Take a full backup of stanley. Just in case.
Cheers,
- Mike.Myers at nwdc.net
Is this a T1000/2000 by chance?
If so, you might be running into a bug in the vxroot script. It's easy to
check -- on line 138 is the variable $bus_drivers quoted or not? If it's NOT
quoted, then you might have this bug affecting you. The fix is as simple as
putting quotes around it like
is done (e.g., are the drives partitioned? Is /etc/vfstab
changed? Is /etc/system changed?)
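A sketch of that one-line fix, demonstrated on a copy of the offending line rather than by editing the real script (take a backup of /usr/lib/vxvm/bin/vxroot before touching it on a live system):

```shell
# The offending line from vxroot, with $bus_drivers unquoted;
# the sed expression adds the quotes around the variable.
line='if [ $? -eq 0 -a -n $bus_drivers ] ;'
printf '%s\n' "$line" | sed 's/-n \$bus_drivers/-n "$bus_drivers"/'
# -> if [ $? -eq 0 -a -n "$bus_drivers" ] ;
```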
Cheers,
- Mike.Myers at nwdc.net
-Original Message-
From: Asim Zuberi [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 17, 2008 8:50 AM
To: Myers, Mike; veritas-vx@mailman.eng.auburn.edu
Subject: RE
Folks,
We've started (probably with VxFS 5.0) seeing that newly created Veritas file
systems (e.g., just ran mkfs -F vxfs /dev/vx/rdsk/rootdg/junk2) have a flag
bit set:
# vxassist make junk 1g
# mkfs -F vxfs /dev/vx/rdsk/rootdg/junk
version 7 layout