77MB/sec is good. If you're wondering where the bottleneck to going faster is,
start with iostat -dMmxzn 10. Ignore the first set of data; it's historical
(averages since boot). For the 2nd and following intervals, look at how many
reads and writes per second and the number of megabytes read/written. That
gives you an idea of how
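As a sketch of reading that output, you can total the per-interval MB/s columns with awk. The heredoc below is canned sample data with hypothetical device names and numbers; on a live system you would pipe in iostat -dMmxzn 10 and skip the first interval:

```shell
# Total read/write MB/s across disks for one iostat -Mxn style interval.
# NR > 2 skips the two header lines; $3 is Mr/s, $4 is Mw/s.
awk 'NR > 2 { r += $3; w += $4 }
     END { printf "%.1f MB/s read, %.1f MB/s write\n", r, w }' <<'EOF'
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  120.0   40.0   60.0   17.0  0.0  0.9    0.0    7.5   0  62 c0t0d0
   80.0   20.0   40.0    8.5  0.0  0.5    0.0    6.1   0  41 c0t1d0
EOF
```

With the sample lines above this prints 100.0 MB/s read, 25.5 MB/s write; compare the totals against what the disks and the HBA can actually deliver.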
The diskgroup name check is valid but assumes a pure Veritas environment. If
you have both Veritas and ZFS, or indeed SVM, you have to look further. For
ZFS, format will tell you that a disk is part of an imported zpool, and 'zpool
import' will tell you what exported pools are on what disks
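A quick cross-check along those lines, before reusing a disk, might look like this (a sketch; the commands are standard Solaris 10 / VxVM ones but the environment is assumed):

```shell
# Check a disk isn't already claimed before giving it to VxVM.
zpool status              # disks in pools imported on this host
zpool import              # exported pools visible on attached disks
vxdisk -o alldgs list     # VxVM's view, including deported diskgroups
format                    # warns when a selected disk is in an active zpool
```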
1. UFS is deprecated in Solaris 10 and removed in Solaris 11
2. Booting from vxfs is not supported and never was; root disk encapsulation
is a hack that was needed until we got zfs boot. There is no longer a use for
this technique.
3. Use of ZFS for root disk and zone roots gives you clean
vxrecover -s
-Original Message-
From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of upen
Sent: Wednesday, January 12, 2011 5:10 PM
To: veritas-vx@mailman.eng.auburn.edu
Subject: [Veritas-vx] (no subject)
Hello all,
some disks
ZFS is not a cluster filesystem on its own.
I do not know if you can export to multiple hosts at the LUN level from ZFS
with iSCSI. NFS of course provides the multiwriter support.
If you kerberize NFS, you can securely restrict the hosts to which it connects.
Depending on your volume, this
don't do it.
There is no longer, with a Solaris 10 system such as your 5120, a valid reason
to use VERITAS boot disk encapsulation.
Use ZFS. As I recall, the 5120 has hardware mirroring, so you could use that,
or you could use ZFS mirroring. The advantage of hardware mirroring is that it
doesn't
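The ZFS-mirroring route is brief. A sketch, with hypothetical device names (the second disk needs an SMI label with a slice covering the disk):

```shell
# Mirror the ZFS root pool instead of encapsulating under VxVM.
zpool attach rpool c1t0d0s0 c1t1d0s0
# Make the second disk bootable (SPARC):
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c1t1d0s0
zpool status rpool    # wait for the resilver to finish
```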
-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Asiye Yigit
Sent: Wednesday, October 06, 2010 11:29 AM
To: Hudes, Dana; ger...@gotadsl.co.uk; veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] couldn't encapsulate boot disk
Hello,
I am really
Evacuate definitely moves regions of a plex to new places and can indeed end up
splitting subdisks. Mirror should reproduce the subdisk layout within a plex.
Since a mirror plex is on a per-volume basis, not a physical-device basis, your
new subdisks aren't necessarily in the same sectors as the
4.1 is out of support. Go to 5.
From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of John Wong
Sent: Friday, June 11, 2010 2:16 PM
To: veritas-vx@mailman.eng.auburn.edu
Cc: j w
Subject: [Veritas-vx] Fw:
lucreate fails, among other things.
From: William Havey [mailto:bbha...@gmail.com]
Sent: Sunday, April 04, 2010 10:43 AM
To: Hudes, Dana
Cc: milind phanse; VeritasUsers
Subject: Re: [Veritas-vx] Can I increse /opt partition under veritas
Was the issue with /opt being
having /opt as a separate filesystem isn't supported by Solaris. You can have
stuff under /opt (e.g. /opt/coolstack) as a separate filesystem, but putting
/opt on its own will cause problems.
This isn't theoretical. I tried the same thing and it worked for a while, then
it didn't and the system was
Hudes, Dana wrote:
having /opt as a separate filesystem isn't supported by
Solaris.
Doug Hughes wrote:
I had a trick that fixed this a long while ago. I did have
/opt as a vxvm partition, but to avoid the catch-22, I
mirrored all the /opt/VRTS stuff to the root partition
underneath
the configuration reboot and
it's gone.
From: William Havey [mailto:bbha...@gmail.com]
Sent: Friday, March 05, 2010 10:22 AM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Simulator 5.1 and Solaris 10 zones
Dana,
If /opt, or any directory
Actually, ZFS takes the concept of a journaled filesystem further. In effect,
it is a database used as a general-purpose data store. Leaving aside the
underlying volume-management functionality (similar to much of what is in VxVM
5, and leaving aside the question of the algorithm for RAID), the
From: veritas-vx-boun...@mailman.eng.auburn.edu
[mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Michael Weis
Sent: Wednesday, February 03, 2010 5:45 AM
To: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] ZFS to VxFS
Hi,
Hudes, Dana wrote
ZFS filesystems are in a zpool, which has devices. ZFS is equivalent to the
combination of vxfs + vxvm, whereas ufs sits directly on a device; you could
make a ufs filesystem on a vxvm device (or a zfs zvol if you wanted). It isn't
possible, therefore, to directly convert zfs to vxfs. Copy the
of demands from application for different mountpoints.
From: William Havey [mailto:bbha...@gmail.com]
Sent: Thursday, January 07, 2010 12:22 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
The array
From: William Havey [mailto:bbha...@gmail.com]
Sent: Thursday, January 07, 2010 3:05 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
VERITAS Storage Foundation has had the MvFS (multi-volume multi-VxFS
the ISP feature of VM would allow you to drill down to individual spindles and
place subdisks on each spindle.
Individual spindles of the RAID group? Doesn't that defeat the purpose of the
RAID group?
Striping across LUNs gets... interesting; we usually just use them
concatenated. Of course that's
Havey [mailto:bbha...@gmail.com]
Sent: Wednesday, January 06, 2010 12:30 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
Yes, it certainly does. And that is why Symantec put the feature in the VM
product; to use host-based
Why not let Solaris 10 mpxio handle the physical devices, leave vxdmp
out of it altogether? Then you can use VxVM and ZFS on a
device-by-device basis. You don't NEED vxdmp to use VxVM.
Yeah you're paying for it but so what?
=
Dana Hudes
UNIX and Imaging group
NYC-HRA MIS
=
From: Romeo Theriault [mailto:romeotheria...@gmail.com]
Sent: Monday, March 16, 2009 4:22 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] vxconfigd keeps dying
Probably something more akin to Hitachi Shadow Image. It's
Are you using Veritas to clone the disk groups or are you using
something like Hitachi Shadow Image?
From:
Microsoft actually uses Veritas technology here and there. The shadow
copy feature is Veritas. It says so.
-Original Message-
From:
So don't inherit /opt, inherit select directories in /opt.
The rest of the contents of /opt will get inherited.
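For a sparse zone, that selective inheritance can be sketched with zonecfg (zone name and directory are hypothetical):

```shell
# Inherit only selected directories under /opt, not /opt itself.
zonecfg -z myzone <<'EOF'
add inherit-pkg-dir
set dir=/opt/VRTS
end
commit
EOF
```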
-Original Message-
From: [EMAIL PROTECTED]
If you have your data on CDS volumes that will make life simpler for
SPARC-x86.
If you are going from an old version such as 3.5 where you don't have
CDS then I would make new volumes and filesystems. I would send all data
with ncftp (because it will send a whole tree recursively; you could
also
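The recursive send with ncftp might look like this (host, user, and paths are hypothetical):

```shell
# Push a whole directory tree recursively with ncftp's batch client.
ncftpput -R -u datauser newhost /export/data /oldfs/data
```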
One of the advantages of Solaris 10 FC drivers is that MPxIO can present
one target for the two paths. This feature is coming RSN to vxdmp. So
consider disabling vxdmp and letting native Solaris do the multipathing with
stmsboot -e.
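The switchover itself is short; a sketch (stmsboot updates vfstab and the dump configuration and offers to reboot for you):

```shell
# Hand SAN multipathing to native Solaris MPxIO instead of vxdmp.
stmsboot -e     # enable MPxIO on fibre-channel-attached devices
stmsboot -L     # after reboot: map old controller names to new scsi_vhci ones
```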
You really ought to go to version 5 of SF on Solaris 10. Make sure to
install not only the Master Patches but the Rolling Patches as well.
Also make sure to not only run Solaris 10 update 4 but also that you are
using Update Connection to keep your system reasonably up-to-date.
Mirror your root
root mirror disk to patch the Solaris OS?
On Thu, Nov 01, 2007 at 03:47:25PM -0400, Hudes, Dana wrote:
While you could do a root mirror break-off, I'd rather use Live Upgrade
with Solaris. That way you build up the new boot environment and then
boot onto it. If you want to patch, you use LU and the same OS level on
both sides. Then you patch the inactive boot environment from the active
BE, then
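The LU cycle described above can be sketched as follows (the BE name and patch directory are hypothetical):

```shell
# Live Upgrade patch cycle: patch the inactive boot environment.
lucreate -n be_patched                           # clone the active BE
luupgrade -t -n be_patched -s /var/tmp/patches   # apply patches to the clone
luactivate be_patched                            # mark it for the next boot
init 6                                           # boot onto the patched BE
```

If the new BE misbehaves, luactivate prints a fallback procedure for booting the old one, which is the point of doing it this way rather than patching live.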
=
-Original Message-
From: Jarkko Airaksinen [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 10, 2007 1:52 AM
To: Hudes, Dana
Subject: RE: [Veritas-vx] Strange DMPNODENAME
Hello,
Well, you got me convinced :)
My next project is to try how Sol10 + Oracle10g perform. The plan is to
use
Assuming you meet the minimum BIOS levels for an Emulex or Qlogic HBA,
you may well find that on Solaris 10 x64 you are happier disabling vxdmp
on your SAN connections and instead using the native SAN suite which
comes with the OS by issuing stmsboot -e (it'll reboot your system at
the end of the
-Original Message-
From: Mike Root [mailto:[EMAIL PROTECTED]
Sent: Friday, August 31, 2007 4:04 PM
To: Hudes, Dana; vxtrouble; veritas-vx@mailman.eng.auburn.edu
Subject: RE: [Veritas-vx] SF 5.0 and iSCSI
This is interesting. Would this include exporting iSCSI? I believe
that's called an initiator. Solaris 10 has some support for, I believe,
iSCSI targets but not initiators.
I would set up an additional mirror plex. Once it has synchronized you can shut
Exchange down while you break off the mirror, then restart it (2 minutes to type
etc). Once you have detached the plex you can then split the dg, deport the
new dg and remove the disks. No VVR required. VVR is handy for not
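A sketch of that break-off using VxVM's snapshot-mirror support (diskgroup and volume names are hypothetical):

```shell
# Break-off copy via snapshot mirror, then diskgroup split.
vxassist -g exchdg snapstart exchvol       # attach snapshot plex; wait for sync
# ...shut the application down briefly, then:
vxassist -g exchdg snapshot exchvol exchsnap
vxdg split exchdg exchcopy exchsnap        # move snapshot volume to its own dg
vxdg deport exchcopy                       # deport; import on another host
```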
See inline
-Original Message-
From: Rajiv Gunja [mailto:[EMAIL PROTECTED]
Sent: Friday, August 17, 2007 8:57 PM
To: Hudes, Dana
Cc: [EMAIL PROTECTED]; veritas
Actually, depending on what you've got going on, and of course assuming
you have a license for 5, you can go from 3.2 to 5. What you can't
do is an in-place upgrade. You can however take the volumes etc. you've
got under 3.2 and bring them up on a system running 5.0 or 4.1. I've
done this with
If you don't do an unencapsulate of the root mirror after you detach it,
you will have a world of pain if you try to boot from it. See man page
for vxunroot and there's a procedure for doing this manually floating
around on the web
You do not expect LU of Solaris to completely break everything? You must lead a
charmed life. Maybe if at this point LU 8 to 9, but if you go from 8 to 10 or
something you could have trouble. There was just another LU patch for 10.
Anyway, if you were to, say, go from 10u2 with nonglobal zones and
Eek! Hoary ancient Veritas, long off support! If you have not paid support all
along you are not eligible for upgrade. Not even to 3.5, much less 4.1.
If you didn't pay support, use SF 5 Basic to encapsulate root disks (limited to
no more than 4 volumes and 4 filesystems). Then use ZFS to manage the
If these LUNs are from the one SAN, are you trying to mirror data from
elsewhere onto the SAN or to mirror SAN LUNs one to another? The latter,
for a single SAN, is counter-productive as the SAN is supposed to
provide that reliability for you. If the former, then note which device
is which LUN
Solaris 10 11/06 has limited iSCSI support. I believe it supports the initiator
only, so you can't use it to provide iSCSI targets. The 8/07 release will have more
--Original Message--
From: vxtrouble
To: veritas-vx@mailman.eng.auburn.edu
Sent: Aug 3, 2007 11:17 AM
Subject: [Veritas-vx] SF 5.0 and
To break the mirror in place
(assuming encapsulated root):
Assuming a valid bootable mirror exists:
Detach all plexes on the disk.
Mount the root slice of the broken-off disk as e.g. /mnt and edit
/mnt/etc/vfstab to change back to the original form, such as c0t0d0s0 as /;
also comment out the Veritas entries in /mnt/etc/system and disable
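As a sketch of that manual cleanup (the device name is hypothetical):

```shell
# Manual un-encapsulation cleanup on the broken-off root mirror.
mount /dev/dsk/c0t1d0s0 /mnt
vi /mnt/etc/vfstab    # change /dev/vx/dsk/rootvol etc. back to c0t1d0 slices
vi /mnt/etc/system    # comment out rootdev:/pseudo/vxio... and vxvm forceloads
touch /mnt/etc/vx/reconfig.d/state.d/install-db   # keep vxconfigd from starting
```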
Veritas out-of-the-box supports stripe-mirror and mirror-stripe layout.
Stripe-mirror uses sub-volumes. You would need 4 spindles to build a
stripe-mirror.
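A minimal sketch of creating such a layered volume (diskgroup, volume name, and size are hypothetical):

```shell
# Stripe-mirror: data striped over two mirrored sub-volumes, so at
# least four spindles are needed.
vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=2
vxprint -g datadg -ht datavol    # shows the sub-volume hierarchy
```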