Hi Marty,

The problem occurred on a single-node cluster when /var/cluster/rgm/physnode_affinities is present, so it is explainable.
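For clarity, the check and workaround on the single-node cluster look roughly like this. The resource group, node, and zone names (mysql-repl-rg, node1, zone1) are placeholders, not the real names from my configuration:

```shell
# Workaround: move the physical-node-affinities marker file aside so the
# RGM no longer applies physical node affinities. This is not possible
# when another product (e.g. AVS) requires the file.
mv /var/cluster/rgm/physnode_affinities \
   /var/cluster/rgm/physnode_affinities.org

# Alternative: force the target zone explicitly instead of letting the
# RGM pick the first zone in the nodelist. node1:zone1 and mysql-repl-rg
# are placeholder names.
clrg online -n node1:zone1 mysql-repl-rg
```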
When /var/cluster/rgm/physnode_affinities is renamed to /var/cluster/rgm/physnode_affinities.org, everything is fine. I know it is a corner case, but if we combine, for instance, AVS and the MySQL replication on a single-node cluster, we cannot rename /var/cluster/rgm/physnode_affinities to /var/cluster/rgm/physnode_affinities.org because of AVS.

I assume that because of the physical node affinities, the first zone in the nodelist is picked, and not the one where the MySQL database is online; the affinity is then ignored. When I force the node with -n node:zone, everything is fine.

Detlef

Martin Rattner wrote:
> Detlef,
>
> Please clarify a point in your email below.
>
> The replication RG ${REPL_MYSQL_RESOURCEGROUP} declares a +++ affinity upon
> the MySQL app RG ${real_rg}. I am assuming that both RGs are failover-mode
> (single-mastered) RGs. If you execute "clrg online
> ${REPL_MYSQL_RESOURCEGROUP}", the RGM adds ${real_rg} to the argument list
> due to the +++ affinity. If ${real_rg} is already online, then
> ${REPL_MYSQL_RESOURCEGROUP} must be brought online on the same node as
> ${real_rg}, due to the strong positive affinity. If this is not happening,
> it would be a bug in the RGM which should be reported.
>
> If both RGs are initially offline, then "clrg online
> ${REPL_MYSQL_RESOURCEGROUP}" will bring both RGs online. ${real_rg} will be
> brought online first (because it does not declare the affinity), using its
> nodelist for preference ordering; and ${REPL_MYSQL_RESOURCEGROUP} should
> also come online on the same node as ${real_rg}. In this case, it is the
> nodelist of ${real_rg}, not of ${REPL_MYSQL_RESOURCEGROUP}, that determines
> the node preference for bringing both RGs online.
>
> If you want to override the nodelist of ${real_rg}, then it is fine to
> specify the "-n ${onlnode}" argument as you are currently doing.
>
> Regards,
> --Marty
>
> On 01/14/09 06:42, Detlef Ulherr wrote:
>> Hi Tim,
>>
>> Thanks for the review.
>>
>> Tim Read - Staff Engineer Solaris Availability Engineering wrote:
>>> Detlef,
>>>
>>> A couple of minor comments:
>>>
>>> 531: Shouldn't the replication RG just have a strong affinity on the
>>> MySQL app RG when it's added? So here, you don't need to specify which
>>> node it is brought online on. The later addition of the MySQL RG should
>>> sort that out.
>>>
>> It has a strong positive affinity with failover delegation to the app RG,
>> but my testing, at least when they should fail over between zones, showed
>> that the failover worked, but on a "clrg online" the first zone in the
>> list was always picked, regardless of whether or not the app RG is online
>> there. See line 406 and lines 539 - 570.
>> ...
>>
> _______________________________________________
> ha-clusters-discuss mailing list
> ha-clusters-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss

--
*****************************************************************************
Detlef Ulherr                              Tel: (++49 6103) 752-248
Staff Engineer                             Fax: (++49 6103) 752-167
Availability Engineering
Sun Microsystems GmbH
Amperestr. 6                               mailto:detlef.ulherr at sun.com
63225 Langen                               http://www.sun.de/
*****************************************************************************
Sitz der Gesellschaft: Sun Microsystems GmbH, Sonnenallee 1,
D-85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
*****************************************************************************
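For reference, the strong positive affinity with failover delegation (+++) discussed in this thread is declared through the RG_affinities resource-group property. This is a sketch only; mysql-app-rg and mysql-repl-rg are placeholder names standing in for ${real_rg} and ${REPL_MYSQL_RESOURCEGROUP}:

```shell
# Declare a strong positive affinity with failover delegation (+++) from
# the replication RG on the app RG. The RG names are placeholders.
clrg set -p RG_affinities=+++mysql-app-rg mysql-repl-rg

# Bringing the replication RG online should then also bring the app RG
# online and co-locate both on the same node; the app RG's nodelist
# (not the replication RG's) determines the node preference ordering.
clrg online mysql-repl-rg
```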