Re: [Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-30 Thread minmin ren
Hi all, I think this is a bug in qpid as the rpc backend. Other services (nova-compute, cinder-scheduler, etc.) run in eventlet threads, and they are stopped with the thread's kill() method. The last step, rpc.cleanup(), effectively does nothing, because the corresponding consumer connection was running inside the thread that was just killed.
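
To make the ordering concrete, here is a rough, self-contained sketch (the FakeConsumerConnection class and the stop sequence are only my illustration, not the actual service or rpc code) of what I mean: the consume loop runs in a green thread, stopping the service kills that thread, close() is never called on the connection, and a later rpc.cleanup() has nothing graceful left to do.

    import eventlet


    class FakeConsumerConnection(object):
        """Stands in for an rpc connection whose consume() loop runs in a green thread."""

        def __init__(self):
            self.closed = False

        def consume(self):
            while True:
                eventlet.sleep(0.1)   # pretend to wait on the broker

        def close(self):
            self.closed = True        # graceful teardown would happen here


    conn = FakeConsumerConnection()
    thread = eventlet.spawn(conn.consume)
    eventlet.sleep(0)                 # let the consumer loop start

    # Service stop: the green thread is killed, which interrupts consume()
    # with GreenletExit; close() is never called from inside the thread.
    thread.kill()

    # rpc.cleanup() runs afterwards, but this connection is never closed
    # gracefully, so nothing orderly happens on the broker side.
    print("connection closed cleanly: %s" % conn.closed)   # -> False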

Re: [Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-30 Thread Ray Pekowski
I am not familiar with impl_qpid.py, but I am familiar with amqp.py and have had problems around rpc_amqp.cleanup() and the Pool.empty() method it calls. That was a totally different problem, but I decided to take a look at yours. I noticed that in impl_qpid.py the only other place a …
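
To picture the failure mode I hit, here is a minimal sketch (my own toy ConnectionPool and FakeConnection, not the real oslo rpc_amqp code) of why an empty()-style cleanup only reaches connections that were actually returned to the pool; anything still checked out, for example by a killed consumer thread, is never closed.

    class FakeConnection(object):
        def __init__(self):
            self.closed = False

        def close(self):
            self.closed = True


    class ConnectionPool(object):
        """Toy pool: empty() can only close what was put back into it."""

        def __init__(self):
            self._free = []

        def get(self):
            return self._free.pop() if self._free else FakeConnection()

        def put(self, conn):
            self._free.append(conn)

        def empty(self):
            while self._free:
                self._free.pop().close()


    pool = ConnectionPool()
    returned = pool.get()
    checked_out = pool.get()      # e.g. still held by a consumer green thread
    pool.put(returned)

    pool.empty()                  # roughly what cleanup boils down to

    print(returned.closed)        # True  - it was back in the pool
    print(checked_out.closed)     # False - never returned, never closed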

Re: [Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-30 Thread minmin ren
Hi Ray, thanks for your reply. Changing line 386 to a try/except only fixes the exception raised on stop for cinder-scheduler and nova-compute (which have a similar implementation). However, all cinder-volume queues are removed when one of several cinder-volume services stops. That is a different problem. I …
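
To be concrete, the try/except change I tried amounts to a defensive close along these lines (a sketch with a hypothetical safe_close helper, not the exact impl_qpid.py diff), which only stops the service shutdown from raising; it does nothing about the cinder-volume queues being removed.

    import logging

    LOG = logging.getLogger(__name__)


    def safe_close(connection):
        """Close an rpc connection, tolerating a session that is already gone."""
        try:
            connection.close()
        except Exception:
            # The consumer thread may already have been killed, so the underlying
            # session can be unusable; log it and keep shutting down instead of
            # letting the service stop raise.
            LOG.exception("error closing rpc connection during service stop")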

[Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-28 Thread minmin ren
I think I found some problems with qpid as the rpc backend, but I am not sure about them. Could anyone try to reproduce this in your environment? OpenStack Grizzly; the config file needs debug=True.
1. service openstack-cinder-scheduler stop (nova-compute, nova-scheduler, etc.)
2. vi …