The configuration you’re considering – running your cluster interconnects over
two separate VLANs – is actually our preferred and recommended method, even
when deploying a simple 2-node cluster. While using direct connections between
cluster nodes is simple and convenient, it doesn't scale well once you grow
beyond two nodes.
For LLT, there are two configuration files: /etc/llttab and
/etc/llthosts.
Make sure you've made the appropriate changes to both files.
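For a two-node cluster, those files typically look something like the
following; the node names, cluster ID, and interface devices here are
placeholders for illustration, not values from the original post:

```
# /etc/llthosts -- maps LLT node IDs to host names (identical on both nodes)
0 nodeA
1 nodeB

# /etc/llttab on nodeA -- node name, cluster ID, and the two private links
set-node nodeA
set-cluster 101
link link1 /dev/qfe:0 - ether - -
link link2 /dev/qfe:1 - ether - -
```

On nodeB only the set-node line changes. The cluster ID must be unique on the
shared network, and the two heartbeat links should ride separate VLANs or
separate physical networks.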
From: veritas-ha-boun...@mailman.eng.auburn.edu
[mailto:veritas-ha-boun...@mailman.eng.auburn.edu] On Behalf Of amit
Sent: Sunday, March 15, 2009 2:07 PM
Will VCS be managing the non-global zone so that it runs on either host?
If so, the zone needs to be configured on each host. If the non-global
zone will only be running on one of the servers independent of the
cluster, then you're fine.
From: veritas-ha-boun...@mailman.eng.auburn.edu
My suggestion would be to upgrade SF and VCS first, the reason being that the
later versions of VCS support Solaris 8, 9, and 10 within the same cluster.
This way, you can do a rolling upgrade of the OS afterwards: upgrade the idle
node first, then switch the app service group to the upgraded node and repeat
on the remaining node.
A probe is simply an initial run of the monitor entry point (your
monitor script) when the resource first goes online. If your monitor
script otherwise works, it's possible the initial probe is running too soon
after the online entry point, before the resource is fully online.
Eric
Tom's got a point...I just now looked at the monitor script, and you
exit with a 0. The monitor should return 100 if it determines the app
is offline and 110 if it's determined to be up.
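As a sketch, a minimal monitor entry point following that convention might
look like this; the pidfile path and function name are invented for
illustration, not taken from the original script:

```shell
#!/bin/sh
# Hypothetical VCS monitor entry point sketch.
# Convention: exit 100 = resource offline, 110 = resource online.

monitor_app() {
    pidfile="$1"
    # If the pidfile exists and the process it names is alive,
    # consider the application online.
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        return 110
    fi
    return 100
}

# In the real script you would finish with:
#   monitor_app /var/run/myapp.pid
#   exit $?
```

VCS treats 100 as offline and values 101-110 as online with increasing
confidence, which is why 110 (fully confident online) and 100 are the usual
pair to return.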
Eric
From: veritas-ha-boun...@mailman.eng.auburn.edu
Not so...VCS 4.1 was released specifically to support Solaris 10.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of ssloh
Sent: Sunday, November 09, 2008 10:11 PM
To: [EMAIL PROTECTED]; Veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha]
Thanks for clarifying that, Annette.
Eric
-Original Message-
From: Annette Benz
Sent: Tuesday, September 30, 2008 12:32 AM
To: Eric Hennessey; Shashi Kanth Boddula;
veritas-ha@mailman.eng.auburn.edu
Subject: RE: [Veritas-ha] ASM instance VCS group problem
Hi,
Please note
I think you need to open a support case on this...it almost looks as if
your version of the Oracle agent for VCS is having a hard time parsing
the + sign in the instance name.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shashi
Kanth Boddula
The paper doesn't cover attach/detach, but it'll give you a good overview of how we work with
zones.
http://eval.symantec.com/mktginfo/enterprise/white_papers/ent-whitepaper
_implementing_solaris_zones_06-2007.en-us.pdf
Cheers!
Eric
Eric Hennessey
Director, Technical
If reinstalling on that one node doesn't work, you should open up a
support case.
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of i man
Sent: Wednesday, July 23, 2008 5:48 AM
To: Gene Henriksen
Cc: veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Missing
The latest version of our zone agent supports zone attach-detach, which
isn't reflected in that paper. The essential best practices outlined in
that paper, though, remain the same.
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rodolfo
Bonnin
Sent: Thursday, June 26,
According to the docs, if the system identified in AutoStartList isn't
up when all others are up after a full cluster start, the SG remains
offline. So if you really don't care which system hosts a given SG on
cluster start, you can omit the AutoStartList attribute.
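In main.cf terms (group and node names invented for illustration), the
attribute looks like this, and leaving the line out lets VCS pick any system
from SystemList at cluster start:

```
group appsg (
        SystemList = { nodeA = 0, nodeB = 1 }
        AutoStartList = { nodeA, nodeB }    // omit this line if you don't care
        )
```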
Eric
From: [EMAIL
In short, VCS has no real dependency on time of day, so you won't run
into any issues.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joshua
Fielden
Sent: Friday, March 07, 2008 11:05 AM
To: Rodolfo Bonnin; veritas-ha@mailman.eng.auburn.edu
Subject: Re:
This is perfectly do-able with VCS 4.1, since 4.1 supports Solaris 8, 9
and 10.
The general procedure at a high level is:
1. Freeze all service groups
2. Upgrade an idle node in the cluster
3. Once the upgraded node reboots and rejoins the cluster, unfreeze the
service groups running on one
the cluster join attempt will fail.
In addition, when working with CFS and/or SF-RAC clusters the
requirements become more stringent as there's an increased risk of
failure when running disparate OS versions and patch levels.
Cheers!
Eric
-Original Message-
From: Eric Hennessey
Sent: Sunday
member to properly join a VCS cluster. The
cluster join process is strictly governed by the version of VCS, not by
the OS version.
Hope this helps!
Eric
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, December 07, 2007 7:09 PM
To: Eric Hennessey
Cc: Jim
From: upen [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 06, 2007 7:43 PM
To: Eric Hennessey
Cc: veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] best way for patching of cluster servers
Thanks Eric
One question,
Does Veritas/Symantec provide support for patching Sun
from Sun. See the following site:
http://sunsolve.sun.com
But we've supported mixed Solaris versions and patch levels for several
releases of VCS.
Eric
-Original Message-
From: Jim Senicka
Sent: Friday, December 07, 2007 8:26 PM
To: '[EMAIL PROTECTED]'; Eric Hennessey
Cc: 'veritas-ha
VCS with RAC requires SCSI-3 capable disks for I/O fencing, so this
wouldn't work.
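If you need to confirm what fencing is doing on an existing cluster, the
standard checks are sketched below; treat this as illustrative rather than a
full procedure:

```
vxfenadm -d          # show the current I/O fencing mode and membership
vxfentsthdw          # interactive utility to test whether a disk
                     # actually supports SCSI-3 persistent reservations
```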
From: F.A [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 04, 2007 6:23 PM
To: veritas-ha@mailman.eng.auburn.edu; Eric Hennessey
Subject: Can I use Solaris 10 / VCS connected
This question was also posted to the Symantec forums, and I responded
there:
It seems to me that the fact you're migrating to a new server makes this
easy:
Build out the new servers using the target OS version/patch level.
Install appropriate Oracle s/w on the new server.
Install target
As long as at least one plex in the volume is available, VCS will detect
no change in the state of any configured Volume or Mount resources.
That said, it's always best practice to freeze a service group when
mucking about with any resources under that service group's control.
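The freeze itself is quick with the standard VCS CLI; the service group name
here is a placeholder:

```
hagrp -freeze appsg                 # temporary freeze; cleared if HAD restarts
hagrp -unfreeze appsg               # reverse it when you're done

# For a freeze that survives HAD restarts, open the config first:
haconf -makerw
hagrp -freeze appsg -persistent
haconf -dump -makero
```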
Eric
How heavily loaded is this server?
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Anand
Ganesh
Sent: Tuesday, June 12, 2007 1:54 PM
To: Tom Riemer; veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Odd behavior from DiskGroup monitor
VCS used to do something like this with a version called VCS Traffic
Director.
We decided that it was better to let load balancers do what load
balancers do, and put one in front of a VCS cluster running a parallel
service group.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL
And the use of a steward is HIGHLY recommended.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim
Senicka
Sent: Wednesday, March 14, 2007 10:10 AM
To: Cronin, John S; Pavel A Tsvetkov; Veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Veritas
RDCs are supported only with synchronous replication, regardless of the
replication product used. It doesn't matter whether it's VVR or some form of
array-based replication; the replication mode must be synchronous.
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pavel A
Tsvetkov
Sent: Tuesday, March 13, 2007
This looks similar to something I used to see when the EEPROM variable
use_local_mac_address was set to false.
Have you checked that on all nodes?
Alternatively, check and make sure that each NIC from each system is on
its own private network/VLAN.
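On SPARC hardware that variable is usually spelled local-mac-address? when
read through eeprom(1M); treating the exact name as platform-dependent, the
check and fix look like:

```
eeprom local-mac-address?              # want: local-mac-address?=true
eeprom 'local-mac-address?=true'       # set it; takes effect after a reboot
```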
Eric
-Original Message-
From: [EMAIL
While we recommend same OS rev and kernel patch, it's not a
requirement. Differences should be transitional,
though.
For instance, VCS 5.0 supports Solaris 8, 9 and 10.
Therefore, you can mix Solaris 8, 9 and 10 nodes in the same cluster while
you're in the midst of getting all nodes onto the same release.
It appears you have the notifier configured correctly. You should open
a support case on this.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Stoyan
Angelov
Sent: Friday, September 15, 2006 3:39 PM
To: veritas-ha@mailman.eng.auburn.edu
Subject: