Send Linux-ha-cvs mailing list submissions to
        [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.community.tummy.com/mailman/listinfo/linux-ha-cvs
or, via email, send a message with subject or body 'help' to
        [EMAIL PROTECTED]

You can reach the person managing the list at
        [EMAIL PROTECTED]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-ha-cvs digest..."


Today's Topics:

   1. Linux-HA CVS: lib by andrew from 
      ([email protected])
   2. Linux-HA CVS: lib by andrew from 
      ([email protected])
   3. Linux-HA CVS: crm by andrew from 
      ([email protected])
   4. Linux-HA CVS: cts by andrew from 
      ([email protected])
   5. Linux-HA CVS: mgmt by zhenh from 
      ([email protected])


----------------------------------------------------------------------

Message: 1
Date: Tue, 28 Mar 2006 23:14:18 -0700 (MST)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: lib by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : lib

Dir     : linux-ha/lib/crm/cib


Modified Files:
        cib_native.c 


Log Message:
Remove a bogus check of the reply_id

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/lib/crm/cib/cib_native.c,v
retrieving revision 1.56
retrieving revision 1.57
diff -u -3 -r1.56 -r1.57
--- cib_native.c        18 Mar 2006 17:23:48 -0000      1.56
+++ cib_native.c        29 Mar 2006 06:14:17 -0000      1.57
@@ -339,10 +339,7 @@
        int  rc = HA_OK;
        HA_Message *op_msg = NULL;
        op_msg = ha_msg_new(8);
-       if (op_msg == NULL) {
-               crm_err("No memory to create HA_Message");
-               return NULL;
-       }
+       CRM_CHECK(op_msg != NULL, return NULL);
        
        if(rc == HA_OK) {
                rc = ha_msg_add(op_msg, F_TYPE, T_CIB);
@@ -415,7 +412,7 @@
 
        cib_native_opaque_t *native = cib->variant_opaque;
 
-       if(cib->state ==  cib_disconnected) {
+       if(cib->state == cib_disconnected) {
                return cib_not_connected;
        }
 
@@ -478,8 +475,6 @@
                                  op_reply, F_CIB_CALLID, &reply_id) == HA_OK,
                          return cib_reply_failed);
 
-               CRM_CHECK(reply_id <= msg_id, return cib_reply_failed);
-                       
                if(reply_id == msg_id) {
                        break;
                        




------------------------------

Message: 2
Date: Tue, 28 Mar 2006 23:14:57 -0700 (MST)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: lib by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : lib

Dir     : linux-ha/lib/crm/cib


Modified Files:
        cib_version.c 


Log Message:
Downgrade development logging

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/lib/crm/cib/cib_version.c,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -3 -r1.4 -r1.5
--- cib_version.c       18 Mar 2006 17:23:48 -0000      1.4
+++ cib_version.c       29 Mar 2006 06:14:57 -0000      1.5
@@ -1,4 +1,4 @@
-/* $Id: cib_version.c,v 1.4 2006/03/18 17:23:48 andrew Exp $ */
+/* $Id: cib_version.c,v 1.5 2006/03/29 06:14:57 andrew Exp $ */
 
 /* 
  * Copyright (C) 2004 Andrew Beekhof <[EMAIL PROTECTED]>
@@ -89,8 +89,8 @@
                        const char *name = feature_tags[lpc].tags[lpc_nested];
                        crm_debug_4("Checking %s vs. %s", tag, name);
                        if(safe_str_eq(tag, name)) {
-                               crm_err("Found feature %s from set %s",
-                                       tag, feature_sets[lpc]);
+                               crm_debug_2("Found feature %s from set %s",
+                                           tag, feature_sets[lpc]);
                                current = lpc;
                                break;
                        }




------------------------------

Message: 3
Date: Wed, 29 Mar 2006 01:01:06 -0700 (MST)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: crm by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : crm

Dir     : linux-ha/crm/pengine


Modified Files:
        regression.sh 


Log Message:
spacer

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/crm/pengine/regression.sh,v
retrieving revision 1.74
retrieving revision 1.75
diff -u -3 -r1.74 -r1.75
--- regression.sh       27 Mar 2006 15:53:10 -0000      1.74
+++ regression.sh       29 Mar 2006 08:01:04 -0000      1.75
@@ -53,6 +53,7 @@
 do_test orphan-0 "Orphan ignore"
 do_test orphan-1 "Orphan stop"
 
+echo ""
 do_test master-0 "Stopped -> Slave"
 do_test master-1 "Stopped -> Promote"
 do_test master-2 "Stopped -> Promote : notify"




------------------------------

Message: 4
Date: Wed, 29 Mar 2006 01:02:44 -0700 (MST)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: cts by andrew from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : andrew
Host    : 
Project : linux-ha
Module  : cts

Dir     : linux-ha/cts


Modified Files:
        CTSaudits.py.in 


Log Message:
Compare the CIBs as CIBs... not just XML blobs.
Get the "not running" check right
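
For context, the corrected audit logic can be sketched as a standalone helper (audit_group, node_of_rsc, and running_nodes are illustrative names mirroring variables in the diff below, not the actual CTSaudits.py API):

```python
def audit_group(group, node_of_rsc, running_nodes):
    """Sketch: every resource in a group must run on the same node set.
    A resource whose own node list is empty is reported as not running."""
    passed = True
    for rsc in group:
        nodes = node_of_rsc.get(rsc)
        if nodes != running_nodes:
            passed = False
            # The fix: test this resource's own node list, not the
            # group-wide running_nodes, to decide "not running".
            if not nodes:
                print("* %s not running" % rsc)
            else:
                print("* %s running on %s" % (rsc, nodes))
    return passed


print(audit_group(["r1", "r2"],
                  {"r1": ["node1"], "r2": []},
                  ["node1"]))  # -> False, after reporting "* r2 not running"
```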

===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/cts/CTSaudits.py.in,v
retrieving revision 1.41
retrieving revision 1.42
diff -u -3 -r1.41 -r1.42
--- CTSaudits.py.in     9 Jan 2006 21:19:09 -0000       1.41
+++ CTSaudits.py.in     29 Mar 2006 08:02:42 -0000      1.42
@@ -293,7 +293,7 @@
             for rsc in group :
                 if RunningNodes != NodeofRsc[rsc.rid] :
                     passed = 0 
-                   if not RunningNodes:
+                   if not NodeofRsc[rsc.rid] or len(NodeofRsc[rsc.rid]) == 0:
                        self.CM.log("Not all resources in group %s running on same node" % repr(group))
                        self.CM.log("* %s not running" % rsc.rid)
                    else:
@@ -440,7 +440,7 @@
                        #self.CM.debug("Retrieved CIB: %s" % first_host_xml) 
                else:
                        a_host_xml = self.store_remote_cib(a_host)
-                       diff_cmd="@sbindir@/crm_diff -VV -f -N \'%s\' -O '%s'" % (a_host_xml, first_host_xml)
+                       diff_cmd="@sbindir@/crm_diff -c -VV -f -N \'%s\' -O '%s'" % (a_host_xml, first_host_xml)
 
                        infile, outfile, errfile = os.popen3(diff_cmd)
                        diff_lines = outfile.readlines()




------------------------------

Message: 5
Date: Wed, 29 Mar 2006 04:28:28 -0700 (MST)
From: [email protected]
Subject: [Linux-ha-cvs] Linux-HA CVS: mgmt by zhenh from 
To: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>

linux-ha CVS committal

Author  : zhenh
Host    : 
Project : linux-ha
Module  : mgmt

Dir     : linux-ha/mgmt/client


Modified Files:
        haclient.py.in 


Log Message:
Use a command/result-level cache to replace the object-level cache.
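
The command/result-level cache this commit introduces can be sketched as follows (ManagerSketch and its fake do_cmd round-trip are illustrative stand-ins, not the real haclient.py classes): results are keyed by the full command string, so every call site shares one cache and one invalidation point.

```python
class ManagerSketch:
    """Minimal sketch of a command/result-level cache."""

    def __init__(self):
        self.cache = {}
        self.round_trips = 0  # counts simulated daemon round-trips

    def do_cmd(self, command):
        # Stand-in for the real (expensive) management-daemon call.
        self.round_trips += 1
        return ["result of " + command]

    def query(self, command):
        # Serve from the cache when possible, as in the new query() method.
        result = self.cache.get(command)
        if result is not None:
            return result
        result = self.do_cmd(command)
        self.cache[command] = result
        return result

    def cache_clear(self):
        # Called on cluster events: one flush invalidates every command.
        self.cache.clear()


mgr = ManagerSketch()
mgr.query("active_nodes")
mgr.query("active_nodes")   # second call is served from the cache
print(mgr.round_trips)      # -> 1
mgr.cache_clear()
mgr.query("active_nodes")
print(mgr.round_trips)      # -> 2
```

Because the key is the raw command text (e.g. "rsc_type\ngroup-1"), per-object lookups like get_rsc_type() become one-line wrappers over query(), which is exactly what the diff below does.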
===================================================================
RCS file: /home/cvs/linux-ha/linux-ha/mgmt/client/haclient.py.in,v
retrieving revision 1.13
retrieving revision 1.14
diff -u -3 -r1.13 -r1.14
--- haclient.py.in      28 Mar 2006 09:20:37 -0000      1.13
+++ haclient.py.in      29 Mar 2006 11:28:28 -0000      1.14
@@ -439,7 +439,7 @@
                        msgbox(_("can not get information from cluster!"))
                        return
 
-               activenodes = manager.do_cmd("active_nodes")
+               activenodes = manager.get_active_nodes()
                manager.activenodes = activenodes
                if activenodes == None :
                        msgbox(_("can not get information from cluster!"))
@@ -488,23 +488,23 @@
                
                
        def add_rsc(self, parent, rsc) :
-               type = manager.do_cmd("rsc_type\n%s"%rsc)[0]
+               type = manager.get_rsc_type(rsc)
                status = ""
                label = ""
                if type == "native" :
                        label = _("native")
-                       status = manager.do_cmd("rsc_status\n%s"%rsc)
-                       node = manager.do_cmd("rsc_running_on\n%s"%rsc)
-                       if node != None and len(node)>0:
-                               self.store.append(parent,[rsc, _(status[0]) + _(" on ")+str(node[0]), label])
+                       status = manager.get_rsc_status(rsc)
+                       nodes = manager.get_rsc_running_on(rsc)
+                       if nodes != None and len(nodes)>0:
+                               self.store.append(parent,[rsc, _(status) + _(" on ")+str(nodes), label])
                        else :
-                               self.store.append(parent,[rsc, _(status[0]), label])
+                               self.store.append(parent,[rsc, _(status), label])
                                
                elif type in ["group","clone","master"] :
                        label = type
                        status = type
                        iter = self.store.append(parent,[rsc, _(status), _(label)])
-                       for subrsc in manager.do_cmd("sub_rsc\n"+rsc) :
+                       for subrsc in manager.get_rsc_sub_rsc(rsc) :
                                self.add_rsc(iter, subrsc)
                                        
        def add_node(self, nodes_root, node, active):
@@ -512,7 +512,7 @@
                if not active :
                        status = _("stopped")
                else :          
-                       dc = manager.do_cmd("dc")
+                       dc = manager.get_dc()
                        node_iter = None
                        if node in dc :
                                status = _("running(dc)")
@@ -521,7 +521,7 @@
                node_iter = self.store.append(nodes_root,[node, status, "node"])
 
                if active :
-                       running_rsc = manager.do_cmd("running_rsc\n%s"%node)
+                       running_rsc = manager.get_running_rsc(node)
                        for rsc in running_rsc :
                                self.add_rsc(node_iter, rsc)
 
@@ -950,7 +950,7 @@
                
        def on_class_changed(self, widget, glade) :
                cur_class = glade.get_widget("class").child.get_text()
-               type_list = manager.do_cmd("rsc_types\n"+cur_class)
+               type_list = manager.get_rsc_types(cur_class)
                store = gtk.ListStore(str)
                for i in type_list:
                        store.append([i])
@@ -965,7 +965,7 @@
                cur_class = glade.get_widget("class").child.get_text()
                cur_type = glade.get_widget("type").child.get_text()
                
-                       provider_list = manager.do_cmd("rsc_providers\n%s\n%s"%(cur_class,cur_type))
+               provider_list = manager.get_rsc_providers(cur_class,cur_type)
                store = gtk.ListStore(str)
                for i in provider_list:
                        store.append([i])
@@ -1030,7 +1030,7 @@
                glade.get_widget("master_node_max").set_property("sensitive", False)
 
                self.glade = glade
-               class_list = manager.do_cmd("rsc_classes")
+               class_list = manager.get_rsc_classes()
                store = gtk.ListStore(str)
                for i in class_list:
                        store.append([i])
@@ -1047,7 +1047,7 @@
                store = gtk.ListStore(str)
                store.append([""])
                for rsc in manager.get_all_rsc_id() :
-                       if manager.do_cmd("rsc_type\n"+rsc)[0] == "group" :
+                       if manager.get_rsc_type(rsc) == "group" :
                                store.append([rsc])
                glade.get_widget("group").set_model(store)
                glade.get_widget("group").set_text_column(0)
@@ -1073,7 +1073,7 @@
                                        passed = check_entry_value(glade, "master_node_max")
                                rsc = {}
                                rsc["id"] = glade.get_widget("id").get_text()
-                               if manager.do_cmd("rsc_attrs\n"+rsc["id"]) != None :
+                               if manager.rsc_exists(rsc["id"]) :
                                        msgbox(_("the ID already exists"))
                                        passed = False
                                
@@ -1246,7 +1246,7 @@
                if confirmbox(_("Delete") + " "+self.cur_type + " " + self.cur_name + "?"):
                        if self.cur_type in [_("native"),_("group"),_("clone"),_("master")] :
                                if string.find(self.cur_status, _("unmanaged")) != -1 :
-                                       hostname = manager.do_cmd("rsc_running_on\n%s"%self.cur_name)
+                                       hostname = manager.get_rsc_running_on(self.cur_name)
                                        manager.do_cmd("cleanup_rsc\n"+hostname[0] + "\n" + self.cur_name)
                                else :
                                        manager.do_cmd("del_rsc\n"+self.cur_name)
@@ -1350,16 +1350,14 @@
                
        # cache functions
 
-       def cache_lookup(self, key1, key2="no") :
-               if self.cache.has_key(key1) :
-                       if self.cache[key1].has_key(key2) :
-                               return self.cache[key1][key2]
+       def cache_lookup(self, key) :
+               if self.cache.has_key(key) :
+                       return self.cache[key]
                return None
                        
-       def cache_update(self, data,key1, key2="no") :
-               if not self.cache.has_key(key1) :
-                       self.cache[key1] = {}
-               self.cache[key1][key2] = data
+       def cache_update(self, key, data) :
+               if not self.cache.has_key(key) :
+                       self.cache[key] = data
                
        def cache_clear(self) :
                self.cache.clear()
@@ -1379,7 +1377,7 @@
                gtk.main()
                if self.connected :
                        mgmt_disconnect()
-               
+       
        # connection functions
        def last_login_info(self) :
                save_path = os.environ["HOME"]+"/.haclient"
@@ -1401,7 +1399,7 @@
                self.connected = True
                self.username = username
                self.password = password
-               self.active_nodes = self.do_cmd("active_nodes")
+               self.active_nodes = self.get_active_nodes
 
                window.statusbar.push(1,"Updating data from server...")
                gobject.timeout_add(100, window.update)
@@ -1412,6 +1410,14 @@
                fd = mgmt_inputfd()
                self.io_tag = gobject.io_add_watch(fd, gobject.IO_IN, self.on_event)
                return True
+       
+       def query(self, query) :
+               result = self.cache_lookup(query)
+               if  result != None :
+                       return  result
+               result = self.do_cmd(query)
+               self.cache_update(query, result)
+               return result
                
        def do_cmd(self, command) :
                self.failed_reason = ""
@@ -1460,8 +1466,6 @@
                
        # cluster functions             
        def get_cluster_config(self) :
-               if self.cache_lookup("cluster_config") != None:
-                       return self.cache_lookup("cluster_config")
                
                hb_attr_names = ["apiauth","auto_failback","baud","debug","debugfile",
                                 "deadping","deadtime","hbversion","hopfudge",
@@ -1471,17 +1475,16 @@
                crm_attr_names = ["transition_timeout","symmetric_cluster","stonith_enabled",
                                  "no_quorum_policy","default_resource_stickiness", "have_quorum"]
                                  
-               values = manager.do_cmd("hb_config")
+               values = manager.query("hb_config")
                if values == None :
                        return None
                config = dict(zip(hb_attr_names,values))
                                
-               values = manager.do_cmd("crm_config")
+               values = manager.query("crm_config")
                if values == None :
                        return None
                config.update(dict(zip(crm_attr_names,values)))
                
-               self.cache_update(config, "cluster_config")
                return config
                
        def update_crm_config(self, new_crm_config) :
@@ -1494,34 +1497,35 @@
 
                                
        # node functions
+       def get_dc(self):
+               return self.query("dc")
+
        def get_all_nodes(self) :
-               if self.cache_lookup("all_nodes") != None:
-                       return self.cache_lookup("all_nodes")
-               all_nodes = manager.do_cmd("all_nodes")
-               self.cache_update(all_nodes, "all_nodes")
-               return all_nodes
+               return self.query("all_nodes")
+
+       def get_active_nodes(self):
+               return self.query("active_nodes")
                                
        def get_node_config(self, node) :
-               if self.cache_lookup("node_config", node) != None:
-                       return self.cache_lookup("node_config",node)
-                       
                node_attr_names = ["uname", "online","standby", "unclean", "shutdown",
                                   "expected_up","is_dc","type"]
                                  
-               values = manager.do_cmd("node_config\n%s"%node)
+               values = manager.query("node_config\n%s"%node)
                if values == None :
                        return None
                config = dict(zip(node_attr_names,values))
                
-               self.cache_update(config, "node_config", node)
                return config
-       
+
+       def get_running_rsc(self, node) :
+               return self.query("running_rsc\n%s"%node)       
+
        # resource functions
        def add_group(self, group) :
                if len(group["id"]) == 0 :
                        msgbox (_("the ID can't be empty"))
                        return
-               if self.do_cmd("rsc_attrs\n"+group["id"]) != None :
+               if self.rsc_exists(group["id"]):
                        msgbox (_("the ID already exists"))
                        return
                self.do_cmd("add_grp\n"+group["id"])
@@ -1529,7 +1533,7 @@
                        msgbox(self.failed_reason)
                
        def add_native(self, rsc) :
-               if self.do_cmd("rsc_attrs\n"+rsc["id"]) != None :
+               if self.rsc_exists(rsc["id"]) :
                        msgbox (_("the ID already exists"))
                        return
                        
@@ -1554,17 +1558,41 @@
                        msgbox (self.failed_reason)
 
        def get_all_rsc_id(self) :
-               if self.cache_lookup("all_rsc") != None:
-                       return self.cache_lookup("all_rsc")
-               all_rsc = manager.do_cmd("all_rsc")
-               self.cache_update(all_rsc, "all_rsc")
-               return all_rsc
-       
+               return self.query("all_rsc")
+
+       def get_rsc_type(self, rsc_id) :
+               return self.query("rsc_type\n"+rsc_id)[0]
+
+       def get_rsc_status(self, rsc_id) :
+               return self.query("rsc_status\n"+rsc_id)[0]
+
+       def get_rsc_running_on(self, rsc_id) :
+               return self.query("rsc_running_on\n"+rsc_id)
+
+       def get_rsc_sub_rsc(self, rsc_id) :
+               return self.query("sub_rsc\n"+rsc_id)
+
+       def get_rsc_info(self, rsc) :
+               rsc_attr_names = ["id", "class", "provider","type"]
+               op_attr_names = ["id", "name", "interval","timeout"]
+               param_attr_names = ["id", "name", "value"]
+               
+               attr_list = self.query("rsc_attrs\n%s"%rsc)
+               attrs = {}
+               if attr_list != None :
+                       attrs = dict(zip(rsc_attr_names, attr_list))
+               running_on = self.query("rsc_running_on\n%s"%rsc)
+
+               raw_params = self.query("rsc_params\n%s"%rsc)
+               params = self.split_attr_list(raw_params, param_attr_names)
+               
+               raw_ops = self.query("rsc_ops\n%s"%rsc)
+               ops = self.split_attr_list(raw_ops, op_attr_names)
+
+               return (attrs, running_on, params, ops)
+
        def get_rsc_meta(self, rsc_class, rsc_type, rsc_provider) :
-               cache_key = rsc_class+rsc_type+rsc_provider
-               if self.cache_lookup("rsc_meta", cache_key) != None:
-                       return self.cache_lookup("rsc_meta", cache_key)
-               lines = self.do_cmd("rsc_metadata\n%s\n%s\n%s"%(rsc_class, rsc_type, rsc_provider))
+               lines = self.query("rsc_metadata\n%s\n%s\n%s"%(rsc_class, rsc_type, rsc_provider))
                if lines == None :
                        return None
                meta_data = ""
@@ -1614,33 +1642,20 @@
                        for key in action_xml.attributes.keys() :
                                action[key] = action_xml.getAttribute(key)
                        meta.actions.append(action)
-               self.cache_update(meta, "rsc_meta", cache_key)
                return meta
        
-       def get_rsc_info(self, rsc) :
-               if self.cache_lookup("rsc_info", rsc) != None:
-                       return self.cache_lookup("rsc_info", rsc)
-               
-               rsc_attr_names = ["id", "class", "provider","type"]
-               op_attr_names = ["id", "name", "interval","timeout"]
-               param_attr_names = ["id", "name", "value"]
-               
-               attr_list = self.do_cmd("rsc_attrs\n%s"%rsc)
-               attrs = {}
-               if attr_list != None :
-                       attrs = dict(zip(rsc_attr_names, attr_list))
-               running_on = self.do_cmd("rsc_running_on\n%s"%rsc)
+       def get_rsc_classes(self) :
+               return self.query("rsc_classes");
 
-               raw_params = self.do_cmd("rsc_params\n%s"%rsc)
-               params = self.split_attr_list(raw_params, param_attr_names)
-               
-               raw_ops = self.do_cmd("rsc_ops\n%s"%rsc)
-               ops = self.split_attr_list(raw_ops, op_attr_names)
-               
+       def get_rsc_types(self, rsc_class) :
+               return self.query("rsc_types\n"+rsc_class)
+       
+       def get_rsc_providers(self, rsc_class, rsc_type) :
+               return self.query("rsc_providers\n%s\n%s"%(rsc_class, rsc_type))
+
+       def rsc_exists(self, rsc_id) :
+               return rsc_id in self.get_all_rsc_id()
 
-               self.cache_update((attrs, running_on, params, ops),"rsc_info", rsc)
-               return (attrs, running_on, params, ops)
-               
        def update_attrs(self, up_cmd, del_cmd, old_attrs, new_attrs, keys):
                oldkeys = []
                if old_attrs != None :
@@ -1669,14 +1684,11 @@
                                
        # clone functions
        def get_clone(self, clone_id) :
-               if self.cache_lookup("clone", clone_id) != None :
-                       return self.cache_lookup("clone", clone_id)
-               attrs = manager.do_cmd("get_clone\n"+clone_id)
+               attrs = manager.query("get_clone\n"+clone_id)
                if attrs == None :
                        return None
                attrs_name = ["id","clone_max","clone_node_max"]
                clone = dict(zip(attrs_name,attrs))
-               self.cache_update(clone, "clone", clone_id)
                return clone
        
        def update_clone(self, clone) :
@@ -1687,14 +1699,11 @@
                
        # master functions
        def get_master(self, master_id) :
-               if self.cache_lookup("master", master_id) != None :
-                       return self.cache_lookup("master", master_id)
-               attrs = manager.do_cmd("get_master\n"+master_id)
+               attrs = manager.query("get_master\n"+master_id)
                if attrs == None :
                        return None
                attrs_name = ["id","clone_max","clone_node_max","master_max","master_node_max"]
                master = dict(zip(attrs_name,attrs))
-               self.cache_update(master, "master", master_id)
                return master
        
        def update_master(self, master) :
@@ -1707,24 +1716,19 @@
                
        # constraint functions
        def get_constraints(self, type) :
-               if self.cache_lookup("all_constraints", type) != None:
-                       return self.cache_lookup("all_constraints", type)
-               id_list = manager.do_cmd("get_cos\n"+type)
+               id_list = manager.query("get_cos\n"+type)
                
                constraints = []
                for id in id_list :
                        constraints.append(self.get_constraint(type, id))
                                                        
-               self.cache_update(constraints,"all_constraints")
                return constraints
                        
        def get_constraint(self, type, id) :
-               if self.cache_lookup("constraint_"+type, id) != None:
-                       return self.cache_lookup("constraint_"+type, id)
                if type == "rsc_location" :
                        place_attr_names = ["id","rsc","score"]
                        expr_attr_names = ["id","attribute","operation","value"]
-                       attrs = manager.do_cmd("get_co\nrsc_location\n"+id)
+                       attrs = manager.query("get_co\nrsc_location\n"+id)
                        if attrs == None :
                                return None
                        place = dict(zip(place_attr_names,attrs[:3]))
@@ -1732,11 +1736,10 @@
                        for i in range((len(attrs)-len(place_attr_names))/len(expr_attr_names)) :
                                expr = dict(zip(expr_attr_names,attrs[3+i*4:7+i*4]))
                                place["exprs"].append(expr)
-                       self.cache_update(place,"constraint_"+type, id)
                        return place
                elif type == "rsc_order" :
                        order_attr_names = ["id","from","type","to"]
-                       attrs = manager.do_cmd("get_co\nrsc_order\n" + id)
+                       attrs = manager.query("get_co\nrsc_order\n" + id)
                        if attrs == None :
                                return None
                        order = dict(zip(order_attr_names,attrs))
@@ -1744,11 +1747,10 @@
                        return order
                elif type == "rsc_colocation" :
                        colocation_attr_names = ["id","from","to","score"]
-                       attrs = manager.do_cmd("get_co\nrsc_colocation\n" + id)
+                       attrs = manager.query("get_co\nrsc_colocation\n" + id)
                        if attrs == None :
                                return None
                        colocation = dict(zip(colocation_attr_names,attrs))
-                       self.cache_update(colocation,"constraint_"+type, id)
                        return colocation
                        
                                




------------------------------

_______________________________________________
Linux-ha-cvs mailing list
[email protected]
http://lists.community.tummy.com/mailman/listinfo/linux-ha-cvs


End of Linux-ha-cvs Digest, Vol 28, Issue 79
********************************************
