Re: [openstack-dev] [OpenStack][Nova][compute] Why prune all compute node stats when sync up compute nodes

2014-01-17 Thread yunhong jiang
On Thu, 2014-01-16 at 00:22 +0800, Jay Lau wrote:
 Greetings,
 
 In compute/manager.py, there is a periodic task named
 update_available_resource() that updates resources for each compute
 node periodically.
 
  @periodic_task.periodic_task
  def update_available_resource(self, context):
      """See driver.get_available_resource()

      Periodic process that keeps that the compute host's
      understanding of resource availability and usage in sync
      with the underlying hypervisor.

      :param context: security context
      """
      new_resource_tracker_dict = {}
      nodenames = set(self.driver.get_available_nodes())
      for nodename in nodenames:
          rt = self._get_resource_tracker(nodename)
          rt.update_available_resource(context)  # <--- Update here
          new_resource_tracker_dict[nodename] = rt
 
 In resource_tracker.py,
 https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384
 
 self._update(context, resources, prune_stats=True)
 
 It always sets prune_stats to True, which causes problems for me:
 I am now putting some metrics into the compute_node_stats table, and
 since those metrics do not change frequently, I do not update them
 frequently. But the periodic task always prunes the new metrics that
 I added.

 
IIUC, it's because the host resources may change dynamically, at least
in the original design?

 What about adding a configuration parameter to nova.conf to make
 prune_stats configurable?

Instead of making prune_stats configurable, would it make more sense
to do a lazy update, i.e. not touch the DB if nothing has changed?
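A minimal sketch of the lazy-update idea (the class and method names
below are hypothetical, not the actual resource tracker code): remember
what was last written, and skip the DB call when nothing differs.

```python
# Hypothetical lazy-update sketch: only touch the DB when the stats
# actually differ from what was written on the previous sync.
class LazyStatsWriter(object):
    def __init__(self):
        self._last_written = None  # stats dict from the previous sync

    def sync(self, db_update, stats):
        """Call db_update(stats) only when stats changed since last sync."""
        if stats == self._last_written:
            return False  # unchanged, skip the DB round trip
        db_update(stats)
        self._last_written = dict(stats)
        return True

writes = []
writer = LazyStatsWriter()
writer.sync(writes.append, {'num_instances': 3})  # first sync -> writes
writer.sync(writes.append, {'num_instances': 3})  # unchanged -> skipped
writer.sync(writes.append, {'num_instances': 4})  # changed -> writes
```

With this approach the periodic task still runs every cycle, but the DB
only sees two writes out of the three syncs above.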
 
 Thanks,
 
 
 Jay
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






[openstack-dev] [OpenStack][Nova][compute] Why prune all compute node stats when sync up compute nodes

2014-01-15 Thread Jay Lau
Greetings,

In compute/manager.py, there is a periodic task named
update_available_resource() that updates resources for each compute
node periodically.

 @periodic_task.periodic_task
 def update_available_resource(self, context):
     """See driver.get_available_resource()

     Periodic process that keeps that the compute host's understanding of
     resource availability and usage in sync with the underlying
     hypervisor.

     :param context: security context
     """
     new_resource_tracker_dict = {}
     nodenames = set(self.driver.get_available_nodes())
     for nodename in nodenames:
         rt = self._get_resource_tracker(nodename)
         rt.update_available_resource(context)  # <--- Update here
         new_resource_tracker_dict[nodename] = rt

In resource_tracker.py,
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384

self._update(context, resources, prune_stats=True)

It always sets prune_stats to True, which causes problems for me: I am
now putting some metrics into the compute_node_stats table, and since
those metrics do not change frequently, I do not update them frequently.
But the periodic task always prunes the new metrics that I added.
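To make the effect concrete, prune_stats=True behaves roughly like this
simplified sketch (my own illustration, not the actual Nova code): any
stat key absent from the freshly collected stats is deleted from the
stored set, which is what wipes out the custom metrics.

```python
# Simplified sketch of what prune_stats=True means for stored stats:
# keys missing from the newly collected stats are deleted.
def update_stats(stored, collected, prune_stats=True):
    stored.update(collected)
    if prune_stats:
        for key in list(stored):
            if key not in collected:
                del stored[key]  # custom metrics vanish here
    return stored

stored = {'num_instances': 2, 'my_custom_metric': 99}
collected = {'num_instances': 2}  # periodic task only reports built-ins
update_stats(stored, collected)
```

After one periodic run, 'my_custom_metric' is gone even though nothing
about the host actually changed.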

What about adding a configuration parameter to nova.conf to make
prune_stats configurable?
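One way the proposal could look, sketched with plain stdlib parsing (the
option name prune_compute_node_stats is hypothetical, not an existing
Nova option):

```python
import configparser

# Hypothetical nova.conf snippet carrying the proposed option.
SAMPLE_CONF = """
[DEFAULT]
prune_compute_node_stats = False
"""

def get_prune_stats(conf_text):
    """Parse the proposed flag; default True to match today's behavior."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    return parser.getboolean('DEFAULT', 'prune_compute_node_stats',
                             fallback=True)

prune = get_prune_stats(SAMPLE_CONF)
```

The tracker could then pass this value as prune_stats instead of the
hard-coded True; defaulting to True keeps existing deployments unchanged.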

Thanks,

Jay