Re: [Veritas-vx] inode invalid?!

2010-03-02 Thread Stuart Andrews
Just as an FYI, my preferred order for fscks:

# fsck -F vxfs /dev/vx/rdsk/DG_name/VOL_name
Intent log replay ONLY.

# fsck -F vxfs -o full,nolog -n /dev/vx/rdsk/DG_name/VOL_name
FILE SYSTEM AND METADATA STRUCTURES ONLY - NO INTENT LOG replay - a LOOK at what will
happen, remembering there ARE unplayed transactions in the intent log.

# fsck -F vxfs -o full -n /dev/vx/rdsk/DG_name/VOL_name
ALL FILE SYSTEM AND METADATA STRUCTURES (intent log replayed first) - a LOOK at what will happen.


You can mount READ ONLY for a quick look - or to back up crucial data - but remember
the unplayed transactions in the intent log:
# mount  -F vxfs -o ro /dev/vx/dsk/DG_name/VOL_name  /mnt

You can use ff / ncheck to match file names to the inodes removed in the above fsck -o full:
# ncheck -F vxfs /dev/vx/dsk/DG_name/VOL_name
# ff -F vxfs /dev/vx/dsk/DG_name/VOL_name
This is useful to cross-reference the files that will be removed in the full fsck -
the inode numbers are in the fsck output above.
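That cross-reference can be scripted. A minimal sketch, assuming you have saved the flagged inode numbers from the fsck -n run and the ff output to files; the file names and sample data below are invented, and the "path then i-number" column order of ff output is an assumption to verify on your system:

```shell
# Hypothetical saved outputs standing in for the real command runs:
printf '1234\n5678\n' > fsck_inodes.txt                        # inodes fsck -n flagged
printf '/u01/data/a 1234\n/u01/data/b 9999\n' > ff_output.txt  # ff: path inode
# Print "inode path" for every flagged inode that ff can name
awk 'NR==FNR { bad[$1]; next } ($2 in bad) { print $2, $1 }' \
    fsck_inodes.txt ff_output.txt
# prints: 1234 /u01/data/a
```

The first file seeds an awk array of flagged inodes; the second pass emits only ff lines whose inode is in that set.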

Check what is in the intent log:
# echo fmtlog | fsdb -F vxfs /dev/vx/rdsk/DG_name/VOL_name
Remember fset 1 is the VxFS metadata, fset 999 is user data.

Collect a METASAVE - use the binary in /opt/VRTSspt/FS/MetaSave

# fsck -F vxfs -o full,nolog -y /dev/vx/rdsk/DG_name/VOL_name
# fsck -F vxfs /dev/vx/rdsk/DG_name/VOL_name
ALL FILE SYSTEM AND METADATA STRUCTURES, skipping the intent log, then a separate
log replay - REMOVING damaged structures IF THERE ARE ERRORS.

OR 

# fsck -F vxfs -o full -y /dev/vx/rdsk/DG_name/VOL_name
ALL FILE SYSTEM AND METADATA STRUCTURES (intent log replayed first) - REMOVING
damaged structures IF THERE ARE ERRORS.



FMTLOG - Internal Technote  251996




-Original Message-
From: veritas-vx-boun...@mailman.eng.auburn.edu 
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Thomas Graham
Sent: Wednesday, 3 March 2010 4:25 PM
To: William Havey
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] inode invalid?!

I see, so checking the lost+found directory is an important step.

On Tuesday, March 2, 2010, William Havey bbha...@gmail.com wrote:
 The purpose of fsck is to make a file system mountable. If this requires
 putting some unaccounted-for file system block into the lost+found directory,
 leaving the admin to decide what to do with the block, so be it. A file
 system simply cannot guarantee data integrity under every conceivable
 circumstance. That's why redundancy is built in elsewhere: either in the
 hardware (if the cause is hardware failure, the array provides a good copy)
 or in the storage software, through mirroring.
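Inspecting that directory can be sketched as a small helper - a minimal, non-VxFS-specific sketch, assuming the file system is already mounted; the function name and mount point are invented:

```shell
# Minimal sketch: list what fsck placed in lost+found, with inode numbers,
# so entries can be matched back to the fsck output.
list_lost_found() {
  local mnt="${1:?usage: list_lost_found /mount/point}"
  # ls -li prints a "total N" header first; skip it and keep inode + name
  ls -li "$mnt/lost+found" 2>/dev/null | awk 'NR > 1 { print $1, $NF }'
}
```

Usage would be e.g. `list_lost_found /mnt` after the read-only mount described earlier.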


 On Tue, Mar 2, 2010 at 7:18 AM, Thomas Graham lktho...@gmail.com wrote:

 But would it remove some blocks which contain data?

 On Tuesday, March 2, 2010, William Havey bbha...@gmail.com wrote:
 A file system check of VxFS should be very quick. Read the Intent Log 
 records, write them out to disk, done. Do the backup after fsck.

 On Sun, Feb 28, 2010 at 9:31 PM, Thomas Graham lktho...@gmail.com wrote:
 guys, I am using Veritas cluster file system, and currently the CFS is
 marked as dirty, and I have to take the whole FS offline to run fsck.
 Does anyone know if there is any risk around the invalid inodes if I do a
 backup during this inode-invalid situation?

 --
 Thomas G Lau
 Tel: 93239670
 ___
 Veritas-vx maillist  -  veritas...@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx








Re: [Veritas-vx] Migrating volumes on windows servers [SEC=UNCLASSIFIED]

2010-03-16 Thread Stuart Andrews
Greg

I am not sure that is ever going to work - but there are some things to
look at. In CDS format (Unix) the private region is at the beginning of
the disk on the whole-disk partition (65536 blocks in VxVM 5.x, 2048
blocks in VxVM 4.x).

# vxdisk list c3t50060E80004372C0d25s2
public:slice=2 offset=65792 len=855808 disk_offset=0
private:   slice=2 offset=256 len=65536 disk_offset=0

Note - in the above, the 65536-block private region has a 256-block offset -
room for AIX headers if CDS-sharing between these two platforms - but at disk_offset=0.
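Pulling the private-region length out of that output can be scripted, so the Unix size can be compared with whatever SF Windows creates. A sketch, assuming the "private: slice=... offset=... len=..." line format shown above (the sample line is copied from it):

```shell
# Sketch: extract the private-region length from "vxdisk list" output.
vxdisk_sample='private:   slice=2 offset=256 len=65536 disk_offset=0'
echo "$vxdisk_sample" |
  awk -F'[ =]+' '/^private:/ { for (i=1; i<NF; i++) if ($i=="len") print $(i+1) }'
# prints 65536
```

In practice the echo would be replaced by `vxdisk list <device>` piped straight into the awk.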

My LUN geometry is 512 bytes/sector, 768 sectors/track, 50
tracks/cylinder, 38400 sectors/cylinder. So the CDS initialization
does NOT necessarily align to a cylinder boundary - and remember this is
CDS-initialised, not sliced, format. Windows may need this - so this may
be a prerequisite for the attempted migration.

There are some exclusions - reserved regions for other platforms (AIX
headers and others) - but AFAIK these are now offset within the existing
private region as shown above. I would have to check the layout for VxVM
under Windows: what size / location / offset it uses, and how it differs
between basic and dynamic disks.

There do seem to be some problems with what you are proposing here -
namely the partition requirement for dynamic disks - remembering also that
CDS on Linux and Solaris uses a Solaris partition table.

In Unix the objects defined in that diskgroup are just defined types
with offsets / lengths on the actual disk: a volume has a subdisk object
specifying the storage LUN ID plus offset and length in the public
region, a plex object to define the volume redundancy, and a volume
object to define an IO object to the controlling OS (major number
pointing back to the vxio driver and a specific minor for the specific
volume number from the group minimum minor - some caveats here for
shared diskgroups etc).

If the same LUN, or a mirror of it, is then presented to SF Windows
(AFAIK there is no direct migration), and if the initialization on the
Windows server, once the LUNs are presented, can be made to fill the same
size as the Unix private region - then remaking the volumes on Windows
should just remake the offsets and lengths of the objects and the
appropriate layers.

So - keep the initialization area the same size / offset, remake the
objects, and make sure not to use init=zero on either side. Just some
things to keep in mind if attempting this.

Stuart

-Original Message-
From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of
Robinson, Greg
Sent: Tuesday, 16 March 2010 4:24 PM
To: veritas-vx@mailman.eng.auburn.edu
Subject: [Veritas-vx] Migrating volumes on windows servers
[SEC=UNCLASSIFIED]

UNCLASSIFIED

Hi all,

I was migrating a volume from one storage array to another and from one
server to another, under windows vxvm, and it didn't work.  I followed
my normal procedure which I use for my UNIX servers, but windows vxvm
did something different that I was not expecting.

I mirrored the data in the normal way, broke the mirror, removed the
disks from the disk group (and this is where I think windows vxvm got
the better of me), and tried to import the disks and create a new disk
group out of them and mount my volume.

As soon as I removed the disks from the disk group, windows vxvm
converted them into basic disks instead of leaving them as dynamic
disks.

My question is: how can I migrate the volume without deporting the disk
group from one server and importing it from another.  The trouble is, we
don't have access to the other storage array, but we do the server.  And
access control on the other array prevents us from importing all the
disks in question.  Also, I should note that VCS is running on both
servers, but that was not a problem.

Thankx,

Greg.

IMPORTANT: This email remains the property of the Australian Defence
Organisation and is subject to the jurisdiction of section 70 of the
Crimes Act 1914. If you have received this email in error, you are
requested to contact the sender and delete the email.


Re: [Veritas-vx] VxVm

2010-03-30 Thread Stuart Andrews
Colin

 

OK - I see you are using enclosure naming - and BTW, the /dev/vx/dmp
devices are just the same as the disks, BUT with the added provision of
DMP to keep the device online.

How many paths are there to these devices?

# vxdmpadm getsubpaths 

or

# vxdisk path

 

If a CLARiiON, and if only 2 paths - then set the array iopolicy to
singleactive - this is the most likely case - see later for the DMP block switch.

# vxdmpadm listenclosure all

# vxdmpadm setattr enclosure ENC_name iopolicy=singleactive

If a CLARiiON, and if more than 2 paths - then set the array iopolicy
to balanced - DMP does know how to stop IO to the secondary paths.
Note - this is against EMC recommendations, but it works.

# vxdmpadm listenclosure all

# vxdmpadm setattr enclosure ENC_name iopolicy=balanced

 

I also notice that the enclosure names are lower case - indicating a
5.x release of VxVM is installed. Are the CLARiiON APMs running?

# vxdmpadm listapm all

Check that the CLARiiON APMs show Active.

 

If a fencing cluster - then these are local LUNs at the point of the
/dev/vx/dmp names for the devices. And yes, SCSI-3 keys will be placed on
them in a fencing cluster.

# gabconfig -a

Check if there is port b membership - if so then yes, you have a fencing
cluster.

 

Check also the DMP block switch - it may be that, with the iopolicy
incorrect, low IO never reached the path-switch threshold:

# vxdmpadm gettune all

dmp_pathswitch_blks_shift

Now - when busy, with IO chunks bigger than the path-switch level (and with
the iopolicy incorrect), a path switch will cause a trespass (check the SAN
logs) AND a block, drain, resume on the DMP path. There will be a
failover message logged in /etc/vx/dmpevents.log - check there also.
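The tunable is a power-of-two exponent: sequential I/O stays on one path until it crosses 2^dmp_pathswitch_blks_shift 512-byte blocks. A quick sanity check of the arithmetic - the value 11 used here is the commonly cited default, an assumption to confirm with vxdmpadm gettune:

```shell
# With shift=11: 2^11 = 2048 blocks of 512 bytes = 1 MiB per path region.
shift_val=11
echo $(( (1 << shift_val) * 512 ))
# prints 1048576
```

So I/O chunks larger than about 1 MiB would cross the region boundary and trigger the path switch described above.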

 

Stuart

 



From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Collin
Sent: Wednesday, 31 March 2010 2:09 AM
To: William Havey
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] VxVm

 

Sorry for any confusion...

I've got several powerpath devices from a dead system that I'm mounting
temporarily on one node in my cluster.  I run a devfsadm -Cv and vxdctl
enable.  After that I can see the powerpath devices listed as...

emc_clariion0_10  auto:none   -    -   online invalid

I modified my /etc/vfstab file to mount the devices..

/dev/vx/dmp/emc_clariion0_10s6   /dev/vx/rdmp/emc_clariion0_10s6   /u10
ufs   3 yes -

The device mounts and I can access the file system with all my data.
When the activity starts to increase on these temporary mount points, I
see a count down on the console that port H has lost connectivity. After
the 16 seconds, the node panics and of course reboots.  However, if I
mount the power path devices using a single path..

/dev/dsk/c1t5006016100600432d10s6  /dev/rdsk/c1t5006016100600432d10s6
/u10  ufs 3 yes -

I never get the port H losing connectivity. 

I want to use the dmp name in case I lose a path to these disk.

Any reason why using the dmp name causes port H to lose connectivity vs.
using a single path?

Thanks,
Collin

On Tue, Mar 30, 2010 at 10:48 AM, William Havey bbha...@gmail.com
wrote:

The original message states "mount these disks as
/dev/vx/dmp/emc_array_Xs6". Perhaps this is normal behavior. Mounts
are of devices which receive I/O, and a /dev/vx/dmp/... device entry isn't
I/O capable.

I think a clearer statement of what Collin intends to do is needed.

Bill

 

On Tue, Mar 30, 2010 at 3:01 AM, Dmitry Glushenok gl...@jet.msk.su
wrote:

Hello,

The panic string and the preceding messages usually help in understanding
the cause. The release notes for RP2-RP3 also provide short descriptions of
fixed issues, like "Fixed the cause of a system panic when mutex_panic() was
called from vol_rwsleep_wrlock()".


On 29.03.2010, at 19:02, Collin wrote:

 I've got the following

 Solaris 10
 VxVM 5.0MP3RP1HF12


 I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points.  The problem I'm having is
if I mount these disk in the /dev/dsk/cXtXdXsX format I run the risk
that if something were to cause the direct path to go down I would lose
the databases on these mount points.  But when I mount these disks as
/dev/vx/dmp/emc_array_Xs6 my system panics and core dumps.


 Does VxVM have any issues mounting /dev/vx/dmp/emc_array_Xs6??

 Thanks,
 Collin

--
Dmitry Glushenok
Jet Infosystems



 

 



Re: [Veritas-vx] Linux VxVM Setup

2010-05-28 Thread Stuart Andrews
Martin

 

Some things to look at are the read_ahead VxFS tunables - read-ahead is
optimised for sequential reads. Other VxFS tunables are here:
http://support.veritas.com/docs/344352  The usual one for performance is
max_direct_iosz.  Also you could vxtrace the volume / plex / subdisk
operations while the backups are going on, and / or vxdmpadm iostat the
LUNs / paths in particular, and also iostat -Cxn to see at which layer the
slowdown is occurring.
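If the file-system layer does turn out to be the bottleneck, tunables set with vxtunefs can be persisted in /etc/vx/tunefstab so they survive remounts. A hedged sketch - the device name and values are invented, the exact entry syntax should be checked against the tunefstab man page, and the example deliberately stages a scratch copy rather than touching the live file:

```shell
# Sketch only: stage a hypothetical tunefstab entry in a scratch file
# (NOT the live /etc/vx/tunefstab); device name and values are illustrative.
cat >> ./tunefstab.example <<'EOF'
/dev/vx/dsk/backupdg/backupvol max_direct_iosz=1048576,read_nstream=4
EOF
cat ./tunefstab.example
```

Once verified, the entry would be merged into /etc/vx/tunefstab by hand.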

 

Stuart



From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Martin,
Jonathan
Sent: Saturday, 29 May 2010 12:01 AM
To: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Linux VxVM Setup

 

Thanks for the help. Your post, and another emailed to me privately, led
me to find an issue with the multipath driver. I didn't notice this
before, but there should not be a /dev/sdc - it was an alternate path
to the same lun.

 

I've got my volume configured, 1.5 TB of data restored and flashbackups
are running.  However, the speed isn't what I had hoped. The backup runs
for about an hour at 1MB/sec, backing up what looks like metadata.
After that, the speed ups to ~25MB/sec. If I backup the data directly
off the lun with a standard policy I get a steady 30MB/sec. 

 

Are there any configuration settings or logs I can look through? Are
there buffer settings I can toy with? There does not seem to be much in
/etc/vx/log. We've had quite a bit of success in similar flashbackup
scenarios with Windows file servers, and we're hoping to push past
30MB/sec on Linux too.

 

Thanks,

 

-Jonathan

 

From: William Havey [mailto:bbha...@gmail.com] 
Sent: Wednesday, May 26, 2010 3:08 PM
To: Martin, Jonathan
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Linux VxVM Setup

 

Jonathan,

Use fdisk to clear up the error state then initialize the disks.

Bill

On Wed, May 26, 2010 at 10:51 AM, Martin, Jonathan
jmart...@intersil.com wrote:

Greetings all, first time poster here, so please be gentle. 

 

I'm trying to run a POC for NetBackup Linux Flashbackup, but to do that
I need a VxVM volume and VxFS partition.  I've got a 6TB lun presented
to a test RedHat 2.6 server as /dev/sdb.  When I run vxdiskadm, option 1
to initialize the disk I get the following error.

 

This disk device does not appear to be valid.  The disk may not have

  a valid or usable partition table, the special device file for the

  disk may be missing or invalid, or the device may be turned-off or

  detached from the system.  This disk will be ignored.

  Output format: [Device_Name,Disk_Access_Name]

 

  [sdb,sdb]

 

vxdisk list gives me the following output.

 

DEVICE   TYPEDISK GROUPSTATUS

sda  auto:none   --online invalid

sdb  auto--error

sdc  auto--error

 

I also tried running through the VxVM Admin guide and got the following:

 

vxdisk init sdb

VxVM vxdisk ERROR V-5-1-0 read of lvm header blocks for /dev/vx/rdmp/sdb
failed

VxVM vxdisk ERROR V-5-1-5433 Device sdb: init failed:

Disk Ioctl failed

 

My Symantec rep gave me some trial keys and free software, but I'm on my
own for configuration. Can someone here throw me a bone?  I've got to be
doing something wrong.

 

Thanks!

 

-Jonathan



 



Re: [Veritas-vx] vxrepquota question

2011-03-04 Thread Stuart Andrews
It works

 

# umount /mnt
# mount -F vxfs /dev/vx/dsk/testdg/vols /mnt
# vxquot /mnt
/dev/vx/rdsk/testdg/vols:
USERS
 11266  root
  4096  adm
  4096  bin
  4096  cstynes
  4096  gdm
  4096  daemon
  4096  joe
  4096  listen
  4096  lp
  4096  noaccess
  4096  nobody
  4096  nobody4
  4096  nuucp
  4096  oracle
  4096  postgres
  4096  smmsp
  4096  svctag
  4096  sys
  4096  uucp
  4096  webservd
  3072  Admin

 

Just a quick test after remounting - it works

Generated a small amount of junk using the following

 

for (( i=0; i <= 512; i++ )) ; do echo $i; cat /etc/passwd | cut -d: -f1 | while read user; do dd if=/dev/zero of=$user.$i bs=1024k count=1;
chown $user $user.$i; done; done

 

 

From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of John
Cronin
Sent: Saturday, 5 March 2011 10:16 AM
To: Carl E. Ma
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] vxrepquota question

 

Try the command vxquot <file-system>.  According to the man page, it
does not appear that quotas need to be enabled for this command to work.
Unfortunately, I don't have anywhere to try this out right now.

On Fri, Mar 4, 2011 at 4:40 PM, Carl E. Ma zhu_ju...@yahoo.ca wrote:

Hello All,

We are running an NFS server with Veritas VCS 5.1SP1 on Solaris 10 x86. In
order to track disk usage in real time, we enabled disk quotas on the
server so that we can get each user's usage with vxrepquota
<filesystem> and vxquota -v -u <username>. If we didn't set a quota for
a user, his name/usage won't show up in the vxrepquota output.

Since we don't know how many users will keep files on the shared NFS
filesystem, we have to enable quotas for all 3000+ users as a temp
solution. My question is: without enforcing soft/hard quotas, can we
still track user disk usage? My understanding of a quota filesystem is
that all users' disk usage is being tracked within the filesystem, and
there should be another way to read out the statistics. I will summarize
if there is an answer.
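A portable cross-check of per-user usage (independent of quotas, and not VxFS-specific) can also be sketched with find/awk - assuming GNU find for -printf, so this particular sketch is Linux-flavoured rather than Solaris; the function name and path are invented:

```shell
# Sketch: per-user disk usage (in 1K blocks) under a mount point,
# summed per owner. Stays on one file system via -xdev.
du_by_user() {
  find "${1:?usage: du_by_user /mount/point}" -xdev -type f -printf '%u %k\n' 2>/dev/null |
    awk '{ kb[$1] += $2 } END { for (u in kb) print kb[u], u }' | sort -rn
}
```

This is far slower than reading quota records, but it works on any file system and needs no quota setup.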

Thanks & have a good weekend,

zhu





 
