Thank you very much.

When I use "redis://localhost:6379", I run into another problem, as follows:

2015-03-12 18:19:19.513 20863 INFO ceilometer.coordination [-] backend_url:redis://localhost:6379
2015-03-12 18:19:19.513 20863 INFO tooz.coordination [-] backend_url:redis://localhost:6379
2015-03-12 18:19:19.513 20863 INFO tooz.coordination [-] parsed_url:SplitResult(scheme='redis', netloc='localhost:6379', path='', query='', fragment='')***parsed_qs:{}
2015-03-12 18:19:19.515 20863 ERROR ceilometer.openstack.common.threadgroup [-] No module named concurrent
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup Traceback (most recent call last):
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/ceilometer/openstack/common/threadgroup.py", line 143, in wait
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     x.wait()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/ceilometer/openstack/common/threadgroup.py", line 47, in wait
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     return self.thread.wait()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     return self._exit_event.wait()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     return hubs.get_hub().switch()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     return self.greenlet.switch()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     result = function(*args, **kwargs)
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/ceilometer/openstack/common/service.py", line 500, in run_service
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     service.start()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/ceilometer/agent.py", line 195, in start
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     self.partition_coordinator.start()
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 71, in start
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     backend_url, self._my_id)
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 365, in get_coordinator
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     invoke_args=(member_id, parsed_url, parsed_qs)).driver
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 45, in __init__
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     verify_requirements=verify_requirements,
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     verify_requirements)
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 170, in _load_plugins
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     self._on_load_failure_callback(self, ep, err)
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 50, in _default_on_load_failure
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup     raise err
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup ImportError: No module named concurrent
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup
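
My guess from the traceback is that the interpreter cannot import the concurrent.futures module, which on Python 2.7 is provided by the third-party "futures" backport rather than the standard library. A quick check (just a diagnostic sketch, not part of ceilometer):

```python
# Check whether the concurrent.futures module that tooz/stevedore need
# is importable in this interpreter. On Python 2.7 it is supplied by
# the third-party "futures" backport package.
try:
    import concurrent.futures
    print("concurrent.futures is importable")
except ImportError:
    print("concurrent.futures is missing; on Python 2.7 it comes "
          "from the 'futures' package")
```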
____________________________________________________________________

-----Original Message-----
From: Chris Dent [mailto:[email protected]]
Sent: 11 March 2015 21:08
To: Pan, Fengyun
Cc: Vijaya Bhaskar; openstack
Subject: Re: Re: [Openstack] Ceilometer high availability in active-active

On Wed, 11 Mar 2015, Pan, Fengyun wrote:

> We know that:
>
>     backend_url',
>               default=None,
>               help='The backend URL to use for distributed coordination. If '
>                    'left empty, per-deployment central agent and per-host '
>                    'compute agent won\'t do workload '
>                    'partitioning and will only function correctly if a '
>                    'single instance of that service is running.'),
>
> But how to set the 'backend_url'?

This appears to be an oversight in the documentation. The main starting point 
is here:

    http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-cetral-compute-agent-ha.html

but neither that page nor anything it links to actually says what the value of
the setting should be. The value depends entirely on which backend is being
used and how that backend is configured. Each of the tooz drivers documents
some of its options, but again, this is not fully documented yet.

For reference, what I use in my own testing is redis as follows:

    redis://localhost:6379
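
In ceilometer.conf that value would go roughly like this (a sketch; I'm
assuming the [coordination] section that the agents read the option from):

    [coordination]
    backend_url = redis://localhost:6379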

This uses a single redis server, so introduces another single point of failure. 
It's possible to use sentinel to improve upon this situation:

    http://docs.openstack.org/developer/tooz/developers.html#redis
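
A sentinel-backed URL takes roughly this shape (hypothetical hosts and master
name; the exact query parameters are described at the link above):

    redis://localhost:26379?sentinel=mymaster&sentinel_fallback=host2:26379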

The other drivers work in similar ways with their own unique arguments.

I'm sorry I'm not able to point to more complete information but I can say that 
it is in the process of being improved.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
