Send Linux-ha-cvs mailing list submissions to
        [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.community.tummy.com/mailman/listinfo/linux-ha-cvs
or, via email, send a message with subject or body 'help' to
        [EMAIL PROTECTED]

You can reach the person managing the list at
        [EMAIL PROTECTED]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-ha-cvs digest..."


Today's Topics:

   1. Linux-HA CVS: linux-ha by andrew from 
      ([email protected])
   2. Linux-HA CVS: resources by andrew from 
      ([email protected])
   3. Linux-HA CVS: crm by andrew from 
      ([email protected])
   4. Linux-HA CVS: cts by andrew from 
      ([email protected])


----------------------------------------------------------------------

Message: 1
Date: Thu,  1 Jun 2006 02:44:40 -0600 (MDT)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: linux-ha by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Module  : linux-ha

Dir     : linux-ha


Modified Files:
        configure.in 


Log Message:
New RA for populating node attributes

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/configure.in,v
retrieving revision 1.526
retrieving revision 1.527
diff -u -3 -r1.526 -r1.527
--- configure.in        30 May 2006 12:24:36 -0000      1.526
+++ configure.in        1 Jun 2006 08:44:40 -0000       1.527
@@ -10,7 +10,7 @@
 AC_INIT(heartbeat.spec.in)
 
 AC_CONFIG_AUX_DIR(.)
-AC_REVISION($Revision: 1.526 $) dnl cvs revision
+AC_REVISION($Revision: 1.527 $) dnl cvs revision
 AC_CANONICAL_HOST
 
 
@@ -2714,6 +2714,7 @@
        resources/OCF/portblock                                 \
        resources/OCF/Raid1                                     \
        resources/OCF/ServeRAID                                 \
+       resources/OCF/SysInfo                                   \
        resources/OCF/VIPArip                                   \
        resources/OCF/WAS                                       \
        resources/OCF/WinPopup                                  \




------------------------------

Message: 2
Date: Thu,  1 Jun 2006 02:44:41 -0600 (MDT)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: resources by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : resources

Dir     : linux-ha/resources/OCF


Modified Files:
        Makefile.am 
Added Files:
        SysInfo.in 


Log Message:
New RA for populating node attributes

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/resources/OCF/Makefile.am,v
retrieving revision 1.14
retrieving revision 1.15
diff -u -3 -r1.14 -r1.15
--- Makefile.am 26 May 2006 01:32:14 -0000      1.14
+++ Makefile.am 1 Jun 2006 08:44:40 -0000       1.15
@@ -58,6 +58,7 @@
                        portblock       \
                        Raid1           \
                        ServeRAID       \
+                       SysInfo \
                        VIPArip         \
                        WAS             \
                        WinPopup        \




------------------------------

Message: 3
Date: Thu,  1 Jun 2006 02:54:43 -0600 (MDT)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: crm by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : crm

Dir     : linux-ha/crm/pengine


Modified Files:
        utils.c 


Log Message:
An extra case

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/crm/pengine/utils.c,v
retrieving revision 1.140
retrieving revision 1.141
diff -u -3 -r1.140 -r1.141
--- utils.c     30 May 2006 09:24:03 -0000      1.140
+++ utils.c     1 Jun 2006 08:54:42 -0000       1.141
@@ -1,4 +1,4 @@
-/* $Id: utils.c,v 1.140 2006/05/30 09:24:03 andrew Exp $ */
+/* $Id: utils.c,v 1.141 2006/06/01 08:54:42 andrew Exp $ */
 /* 
  * Copyright (C) 2004 Andrew Beekhof <[EMAIL PROTECTED]>
  * 
@@ -1279,6 +1279,8 @@
                return RSC_ROLE_MASTER;
        } else if(safe_str_eq(role, RSC_ROLE_UNKNOWN_S)) {
                return RSC_ROLE_UNKNOWN;
+       } else if(safe_str_eq(role, "default")) {
+               return RSC_ROLE_UNKNOWN;
        }
        crm_err("Unknown role: %s", role);
        return RSC_ROLE_UNKNOWN;
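
The change above extends the role-name parser so that the string "default" is accepted as an alias for the unknown role, falling through to the same return value as an unrecognised name. The string-to-enum fall-through pattern can be sketched in Python as follows (the enum values and helper name are illustrative assumptions, not taken from the Linux-HA sources):

```python
# Sketch of the role-lookup pattern from pengine/utils.c, in Python.
# Enum values and the function name are illustrative assumptions.
RSC_ROLE_STOPPED, RSC_ROLE_STARTED, RSC_ROLE_MASTER, RSC_ROLE_UNKNOWN = range(4)

_ROLE_NAMES = {
    "Stopped": RSC_ROLE_STOPPED,
    "Started": RSC_ROLE_STARTED,
    "Master":  RSC_ROLE_MASTER,
    "Unknown": RSC_ROLE_UNKNOWN,
    # The committed change: "default" now maps to the unknown role too.
    "default": RSC_ROLE_UNKNOWN,
}

def text2role(role):
    """Map a role name to its enum; unrecognised names fall back to unknown."""
    if role in _ROLE_NAMES:
        return _ROLE_NAMES[role]
    print("Unknown role: %s" % role)   # stands in for crm_err()
    return RSC_ROLE_UNKNOWN
```

The point of the alias is that a caller passing "default" no longer triggers the error log, even though the resulting role is the same.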




------------------------------

Message: 4
Date: Thu,  1 Jun 2006 02:57:01 -0600 (MDT)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: cts by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : cts

Dir     : linux-ha/cts


Modified Files:
        CTS.py.in CTStests.py.in 


Log Message:
At some points it's safe to clear the cache files on all nodes; do so if
clear_cache == 1.
TAB->space

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/cts/CTS.py.in,v
retrieving revision 1.60
retrieving revision 1.61
diff -u -3 -r1.60 -r1.61
--- CTS.py.in   21 Apr 2006 07:10:09 -0000      1.60
+++ CTS.py.in   1 Jun 2006 08:57:00 -0000       1.61
@@ -495,6 +495,16 @@
     def install_config(self, node):
         return None
 
+    def clear_all_caches(self):
+       if self.clear_cache:
+           for node in self.Env["nodes"]:
+               if self.ShouldBeStatus[node] == self["down"]:
+                   self.debug("Removing cache file on: "+node)
+                   self.rsh.remote_py(node, "os", "system", 
+                                      "rm -f @HA_VARLIBDIR@/heartbeat/hostcache")
+               else:
+                   self.debug("NOT Removing cache file on: "+node)
+
     def StartaCM(self, node):
 
         '''Start up the cluster manager on a given node'''
===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/cts/CTStests.py.in,v
retrieving revision 1.146
retrieving revision 1.147
diff -u -3 -r1.146 -r1.147
--- CTStests.py.in      26 Apr 2006 13:35:48 -0000      1.146
+++ CTStests.py.in      1 Jun 2006 08:57:00 -0000       1.147
@@ -62,7 +62,7 @@
         self.Scenario = scenario
         self.Tests = []
         self.Audits = []
-       self.ns=CTS.NodeStatus(self.Env)
+        self.ns=CTS.NodeStatus(self.Env)
 
         for test in tests:
             if not issubclass(test.__class__, CTSTest):
@@ -156,7 +156,7 @@
         if not self.Scenario.SetUp(self.CM):
             return None
 
-       self.CM.ns.WaitForAllNodesToComeUp(self.CM.Env["nodes"])
+        self.CM.ns.WaitForAllNodesToComeUp(self.CM.Env["nodes"])
         testcount=1
         time.sleep(30)
 
@@ -187,13 +187,13 @@
                 self.CM.log("Test %s (%s) \t[FAILED]" %(test.name,nodechoice))
                 # Better get the current info from the cluster...
                 self.CM.statall()
-               # Make sure logging is working and we have enough disk space...
-               if not self.CM.Env["DoBSC"]:
-                   self.CM.ns.WaitForAllNodesToComeUp(self.CM.Env["nodes"])
-                   if not self.CM.TestLogging():
-                       sys.exit(1)
-                   if not self.CM.CheckDf():
-                       sys.exit(1)
+                # Make sure logging is working and we have enough disk space...
+                if not self.CM.Env["DoBSC"]:
+                    self.CM.ns.WaitForAllNodesToComeUp(self.CM.Env["nodes"])
+                    if not self.CM.TestLogging():
+                        sys.exit(1)
+                    if not self.CM.CheckDf():
+                        sys.exit(1)
             stoptime=time.time()
             elapsed_time = stoptime - starttime
             test_time = stoptime - test.starttime
@@ -314,10 +314,10 @@
 
         patterns = []
         # Technically we should always be able to notice ourselves stopping
-       patterns.append(self.CM["Pat:We_stopped"] % node)
+        patterns.append(self.CM["Pat:We_stopped"] % node)
 
-       if self.CM.Env["use_logd"]:
-           patterns.append(self.CM["Pat:Logd_stopped"] % node)
+        if self.CM.Env["use_logd"]:
+            patterns.append(self.CM["Pat:Logd_stopped"] % node)
 
         # Any active node needs to notice this one left
         # NOTE: This wont work if we have multiple partitions
@@ -459,12 +459,12 @@
         if self.CM.StataCM(node):
             self.incr("WasStopped")
             if not self.start(node):
-               return self.failure("start (setup) failure: "+node)
+                return self.failure("start (setup) failure: "+node)
 
         self.starttime=time.time()
         if not self.stop(node):
             return self.failure("stop failure: "+node)
-       if not self.start(node):
+        if not self.start(node):
             return self.failure("start failure: "+node)
         return self.success()
 
@@ -765,7 +765,9 @@
         ret = self.stopall(None)
         if not ret:
             return self.failure("Setup failed")
-         
+        
+        self.CM.clear_all_caches()
+ 
         if not self.startall(None):
             return self.failure("Startall failed")
 
@@ -847,6 +849,7 @@
         if len(failed) > 0:
             return self.failure("Some node failed to stop: " + repr(failed))
 
+        self.CM.clear_all_caches()
         return self.success()
 
     def is_applicable(self):
@@ -1099,13 +1102,13 @@
         stopwatch.setwatch()
 
 #
-#      This test is CM-specific - FIXME!!
+#        This test is CM-specific - FIXME!!
 #
         if self.CM.rsh(node, "killall -9 heartbeat")==0:
             Starttime = os.times()[4]
             if stopwatch.look():
                 Stoptime = os.times()[4]
-#      This test is CM-specific - FIXME!!
+#        This test is CM-specific - FIXME!!
                 self.CM.rsh(node, "killall -9 @libdir@/heartbeat/ccm @libdir@/heartbeat/ipfail >/dev/null 2>&1; true")
                 Detectiontime = Stoptime-Starttime
                 detectms = int(Detectiontime*1000+0.5)
@@ -1121,7 +1124,7 @@
                 self.start(node)
                 return self.success()
             else:
-#      This test is CM-specific - FIXME!!
+#        This test is CM-specific - FIXME!!
                 self.CM.rsh(node, "killall -9 @libdir@/heartbeat/ccm @libdir@/heartbeat/ipfail >/dev/null 2>&1; true")
                 self.CM.ShouldBeStatus[node] = self.CM["down"]
                 ret=self.start(node)
@@ -1133,7 +1136,7 @@
         '''This test is applicable when auto_failback != legacy'''
         return self.standby.is_applicable()
 
-#      This test is CM-specific - FIXME!!
+#        This test is CM-specific - FIXME!!
     def errorstoignore(self):
         '''Return list of errors which are 'normal' and should be ignored'''
         return [ "ccm.*ERROR: ccm_control_process:failure to send protoversion request"
@@ -1260,7 +1263,7 @@
 
     def is_applicable(self):
         '''BandwidthTest is always applicable'''
-        return 1
+        return 0
 
 AllTestClasses.append(BandwidthTest)
 
@@ -1526,8 +1529,8 @@
         return 0
 
 #
-#      FIXME!!  This test has hard-coded cluster-manager-specific things in it!!
-#      
+#        FIXME!!  This test has hard-coded cluster-manager-specific things in it!!
+#        
     def errorstoignore(self):
         '''Return list of errors which are 'normal' and should be ignored'''
         return [ "ERROR:.*Both machines own.*resources"
@@ -1549,7 +1552,7 @@
         self.start = StartTest(cm)
         self.startall = SimulStartLite(cm)
         self.max=30
-       self.rid=None
+        self.rid=None
 
     def __call__(self, dummy):
         '''Perform the 'ResourceRecover' test. '''
@@ -1614,13 +1617,13 @@
         return 0
     
 #
-#      FIXME!!  This test has hard-coded cluster-manager-specific things in it!!
-#      
+#        FIXME!!  This test has hard-coded cluster-manager-specific things in it!!
+#        
     def errorstoignore(self):
         '''Return list of errors which should be ignored'''
         return [ """ERROR: .* Action .*%s_monitor_.* on .* failed .*target: 0 vs. rc: 7.*: Error""" % self.rid,
-                """Updating failcount for """
-                ]
+                 """Updating failcount for """
+                 ]
 
 AllTestClasses.append(ResourceRecover)
 
@@ -2109,13 +2112,13 @@
     def __init__(self, cm):
         CTSTest.__init__(self, cm)
         self.name="AddResource"
-       self.resource_offset = 0
-       self.cib_cmd="""@sbindir@/cibadmin -C -o %s -X '%s' """
+        self.resource_offset = 0
+        self.cib_cmd="""@sbindir@/cibadmin -C -o %s -X '%s' """
 
     def __call__(self, node):
-       self.resource_offset =  self.resource_offset  + 1
+        self.resource_offset =         self.resource_offset  + 1
 
-       r_id = "bsc-rsc-%s-%d" % (node, self.resource_offset)
+        r_id = "bsc-rsc-%s-%d" % (node, self.resource_offset)
         start_pat = "crmd.*start_0 on %s complete"
 
         patterns = []
@@ -2125,62 +2128,62 @@
             self.CM["LogFileName"], patterns, self.CM["DeadTime"])
         watch.setwatch()
 
-       fields = string.split(self.CM.Env["IPBase"], '.')
-       fields[3] = str(int(fields[3])+1)
-       ip = string.join(fields, '.')
-       self.CM.Env["IPBase"] = ip
+        fields = string.split(self.CM.Env["IPBase"], '.')
+        fields[3] = str(int(fields[3])+1)
+        ip = string.join(fields, '.')
+        self.CM.Env["IPBase"] = ip
 
-       if not self.make_ip_resource(node, r_id, "ocf", "IPaddr", ip):
-           return self.failure("Make resource %s failed" % r_id)
+        if not self.make_ip_resource(node, r_id, "ocf", "IPaddr", ip):
+            return self.failure("Make resource %s failed" % r_id)
 
-       failed = 0
+        failed = 0
         watch_result = watch.lookforall()
         if watch.unmatched:
             for regex in watch.unmatched:
                 self.CM.log ("Warn: Pattern not found: %s" % (regex))
-               failed = 1
+                failed = 1
 
-       if failed:
-           return self.failure("Resource pattern(s) not found")
+        if failed:
+            return self.failure("Resource pattern(s) not found")
 
-       if not self.CM.cluster_stable(self.CM["DeadTime"]):
-           return self.failure("Unstable cluster")
+        if not self.CM.cluster_stable(self.CM["DeadTime"]):
+            return self.failure("Unstable cluster")
 
         return self.success()
 
     def make_ip_resource(self, node, id, rclass, type, ip):
-       self.CM.log("Creating %s::%s:%s (%s) on %s" % (rclass,type,id,ip,node))
-       rsc_xml="""
+        self.CM.log("Creating %s::%s:%s (%s) on %s" % (rclass,type,id,ip,node))
+        rsc_xml="""
 <primitive id="%s" class="%s" type="%s"  provider="heartbeat">
     <instance_attributes id="%s"><attributes>
         <nvpair id="%s" name="ip" value="%s"/>
     </attributes></instance_attributes>
 </primitive>""" % (id, rclass, type, id, id, ip)
 
-       node_constraint="""
+        node_constraint="""
       <rsc_location id="run_%s" rsc="%s">
         <rule id="pref_run_%s" score="100">
           <expression id="%s_loc_expr" attribute="#uname" operation="eq" value="%s"/>
         </rule>
       </rsc_location>""" % (id, id, id, id, node)
 
-       rc = 0
-       (rc, lines) = self.CM.rsh.remote_py(node, "os", "system", self.cib_cmd % ("constraints", node_constraint))
-       if rc != 0:
-           self.CM.log("Constraint creation failed: %d" % rc)
-           return None
-
-       (rc, lines) = self.CM.rsh.remote_py(node, "os", "system", self.cib_cmd % ("resources", rsc_xml))
-       if rc != 0:
-           self.CM.log("Resource creation failed: %d" % rc)
-           return None
-
-       return 1
-
-    def is_applicable(self):
-       if self.CM["Name"] == "linux-ha-v2" and self.CM.Env["DoBSC"]:
-           return 1
-       return None
+        rc = 0
+        (rc, lines) = self.CM.rsh.remote_py(node, "os", "system", self.cib_cmd % ("constraints", node_constraint))
+        if rc != 0:
+            self.CM.log("Constraint creation failed: %d" % rc)
+            return None
+
+        (rc, lines) = self.CM.rsh.remote_py(node, "os", "system", self.cib_cmd % ("resources", rsc_xml))
+        if rc != 0:
+            self.CM.log("Resource creation failed: %d" % rc)
+            return None
+
+        return 1
+
+    def is_applicable(self):
+        if self.CM["Name"] == "linux-ha-v2" and self.CM.Env["DoBSC"]:
+            return 1
+        return None
 
 def TestList(cm):
     result = []
@@ -2216,10 +2219,11 @@
             if self.CM.ShouldBeStatus[node] == self.CM["up"]:
                 self.incr("WasStarted")
                 watchpats.append(self.CM["Pat:All_stopped"] % node)
-               if self.CM.Env["use_logd"]:
-                   watchpats.append(self.CM["Pat:Logd_stopped"] % node)
+                if self.CM.Env["use_logd"]:
+                    watchpats.append(self.CM["Pat:Logd_stopped"] % node)
 
         if len(watchpats) == 0:
+            self.CM.clear_all_caches()
             return self.skipped()
 
         #     Stop all the nodes - at about the same time...
@@ -2232,6 +2236,7 @@
             if self.CM.ShouldBeStatus[node] == self.CM["up"]:
                 self.CM.StopaCMnoBlock(node)
         if watch.lookforall():
+            self.CM.clear_all_caches()
             return self.success()
 
         did_fail=0
@@ -2246,6 +2251,8 @@
 
         self.CM.log("Warn: All nodes stopped but CTS didnt detect: " 
                     + repr(watch.unmatched))
+
+        self.CM.clear_all_caches()
         return self.success()
 
     def is_applicable(self):
@@ -2259,7 +2266,7 @@
     def __init__(self, cm):
         CTSTest.__init__(self,cm)
         self.name="SimulStartLite"
-       
+        
     def __call__(self, dummy):
         '''Perform the 'SimulStartList' setup work. '''
         self.incr("calls")
@@ -2337,18 +2344,18 @@
         '''Perform the 'Logging' test. '''
         self.incr("calls")
         
-       self.CM.ns.WaitForAllNodesToComeUp(self.CM.Env["nodes"])
+        self.CM.ns.WaitForAllNodesToComeUp(self.CM.Env["nodes"])
 
-       # Make sure logging is working and we have enough disk space...
-       if not self.CM.TestLogging():
-           sys.exit(1)
-       if not self.CM.CheckDf():
-           sys.exit(1)
+        # Make sure logging is working and we have enough disk space...
+        if not self.CM.TestLogging():
+            sys.exit(1)
+        if not self.CM.CheckDf():
+            sys.exit(1)
 
     def is_applicable(self):
         '''ResourceRecover is applicable only when there are resources running
          on our cluster and environment is linux-ha-v2'''
-       return self.CM.Env["DoBSC"]
+        return self.CM.Env["DoBSC"]
 
     def errorstoignore(self):
         '''Return list of errors which should be ignored'''
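
The clear_all_caches() helper added in this commit removes the heartbeat hostcache file, but only on nodes the test harness believes are down; nodes that are up keep their cache. A standalone sketch of that guard follows (the "up"/"down" status strings, the remote-shell callable, and the cache path are illustrative assumptions):

```python
# Sketch of the cache-clearing guard from CTS.py.in.
# Status strings, the rsh callable, and the cache path are assumptions.
def clear_all_caches(nodes, status, rsh, clear_cache=True,
                     cache="/var/lib/heartbeat/hostcache"):
    """Remove the hostcache on every node that is currently down."""
    if not clear_cache:
        return []
    cleared = []
    for node in nodes:
        if status[node] == "down":
            rsh(node, "rm -f " + cache)   # safe: node is not in the cluster
            cleared.append(node)
        # nodes that are up keep their cache file untouched
    return cleared
```

In the test suite the calls are placed just after simultaneous stops (SimulStop and the restart setup), i.e. at points where every node is expected to be down and a stale membership cache would otherwise survive into the next start.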




------------------------------

_______________________________________________
Linux-ha-cvs mailing list
[email protected]
http://lists.community.tummy.com/mailman/listinfo/linux-ha-cvs


End of Linux-ha-cvs Digest, Vol 31, Issue 2
*******************************************
