Re: [openstack-dev] [ceilometer] about workload partition

2017-12-03 Thread
On 2017-12-01 05:03 AM, 李田清 wrote: >> Hello, >> we tested workload partitioning and found it much slower than not >> using it. >> After some review, we found that, after getting samples from >> notifications.sample, >> ceilometer unpacks them a...

[openstack-dev] [ceilometer] about workload partition

2017-12-01 Thread
Hello, we tested workload partitioning and found it much slower than running without it. After some review, we found that after getting samples from notifications.sample, ceilometer unpacks them and sends them one by one to the ceilometer.pipe.* queues, which makes the consumer slow.
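For reference, the knobs involved here live in ceilometer.conf under [notification]; a minimal sketch with illustrative values (larger batches amortize the one-by-one publish described above):

    [notification]
    workload_partitioning = True
    workers = 4                      # illustrative; the threads below use 1
    batch_size = 100                 # publish samples in batches rather than one by one
    batch_timeout = 5                # seconds to wait before flushing a partial batch
    pipeline_processing_queues = 4   # number of ceilometer-pipe-* queues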

[openstack-dev] [gnocchi] can we put the api on udp?

2017-11-20 Thread
Hello, right now the ceilometer notification agent can send samples through the udp publisher, but gnocchi can only accept measures through its REST API. Is there a way for gnocchi to accept the samples the notification agent sends over UDP? Thanks a...
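For reference, publishers are selected per sink in pipeline.yaml; a minimal sketch contrasting the two transports (host and port are placeholders):

    sinks:
        - name: meter_sink
          publishers:
              - gnocchi://            # posts measures to gnocchi's REST API
              - udp://10.0.0.5:4952   # fire-and-forget UDP publisher

gnocchi itself exposes no UDP listener for its API, so something on the receiving side would still have to translate UDP samples into REST calls.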

[openstack-dev] [ceilometer][oslo_messaging] Random Connection reset by peer

2017-10-27 Thread
Hello, I am testing ceilometer agent notification workload partitioning and found it too fragile. The load is 1k CirrOS VMs. I set the processing queues to 4 and workers to 1, and I am sure the network is OK, but in the ceilometer agent notification log I see this...
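"Connection reset by peer" between the agent and rabbit is often mitigated by enabling oslo.messaging's AMQP heartbeats so the broker does not drop idle connections; a minimal ceilometer.conf sketch, values illustrative:

    [oslo_messaging_rabbit]
    heartbeat_timeout_threshold = 60   # seconds of silence before a connection is declared dead
    heartbeat_rate = 2                 # heartbeat checks per timeout window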

[openstack-dev] oslo listener heartbeat

2017-10-26 Thread
Hello, I see this in impl_rabbit.py:

    # NOTE(sileht): if purpose is PURPOSE_LISTEN
    # the consume code does the heartbeat stuff
    # we don't need a thread
    self._heartbeat_thread = None
    if purpose == rpc_common.PURPOSE_SEND: ...
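For readers new to the driver: oslo.messaging opens a connection for either sending or listening, and only the sending side needs a dedicated heartbeat thread, because a listener's consume loop already services heartbeats while it waits. A paraphrased sketch of that logic (names follow impl_rabbit, but this is not the verbatim code):

    self._heartbeat_thread = None
    if purpose == rpc_common.PURPOSE_SEND:
        # sender connections sit idle between casts, so they spawn
        # a thread to keep AMQP heartbeats flowing
        self._heartbeat_start()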

Re: [openstack-dev] [ceilometer] the workload partition will cause consumers to disappear

2017-10-25 Thread
I use 5.10.2.
> I test Newton 5.10.2, and in ceilometer agent notification the log shows:
> 2017-10-21 03:33:19.779 225636 ERROR root [-] Unexpected exception occurred 60 time(s)... retrying.
> 2017-10-21 03:33:19.779 225636 ERROR root Traceback (most recent call last):
> 2017-10-21...

Re: [openstack-dev] [ceilometer] the workload partition will cause consumers to disappear

2017-10-25 Thread
On 2017-10-23 09:57 PM, 李田清 wrote: > We test ceilometer workload partitioning, and find that even with one > rabbitmq server the ceilometer-pipe queues > will lose their consumers. Does anyone know about this? > I configure batch_size = 1, batch_timeout = 1, > and pipeline_proces...

[openstack-dev] [ceilometer] the workload partition will cause consumers to disappear

2017-10-23 Thread
Hello, we test ceilometer workload partitioning, and find that even with one rabbitmq server the ceilometer-pipe queues will lose their consumers. Does anyone know about this? I configure batch_size = 1, batch_timeout = 1, and pipeline_processing_queues = 1. If anyone knows about this, please point it...
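The consumer count can be checked directly on the broker to confirm the symptom (a sketch; the queue names follow the ceilometer-pipe* pattern used throughout these threads):

    rabbitmqctl list_queues name consumers | grep ceilometer-pipe
    # healthy: every ceilometer-pipe* queue shows consumers > 0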

[openstack-dev] [telemetry] ceilometer-notification restart with no consumers on ceilometer-pipe

2017-10-17 Thread
Hello, I am testing ceilometer workload partitioning, and find that if we restart the notification agent, the ceilometer-pipe* queues no longer have any consumers. Does anyone know about this? The pipeline.yaml is here: http://paste.openstack.org/show/623909/ And I also find the...
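The paste above is not reproduced here; for orientation only, the general shape of a pipeline.yaml (an illustrative sketch, not the poster's actual file):

    sources:
        - name: meter_source
          meters:
              - "*"
          sinks:
              - meter_sink
    sinks:
        - name: meter_sink
          publishers:
              - gnocchi://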

Re: [openstack-dev] [nova] nova boot from image created volume

2017-04-06 Thread
> virt/block_device.py
> 63:  :returns: The availability_zone value to pass to volume_api.create
> 487: vol = volume_api.create(context, self.volume_size, '', '',
> 508: vol = volume_api.create(context, self.volume_size,
> 530: ...

[openstack-dev] [nova] nova boot from image created volume

2017-04-06 Thread
Hello, if we use nova boot from an image with a created volume, I think nova will use volume/cinder.py:create to create the volume. But after inserting pdb, I cannot find the specific line of code that calls create. Can someone help point out that line of code? Thanks a...
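One way to pin down the call site is to grep the tree and then break inside the wrapper itself (a sketch; paths assume a nova source checkout, and the grep mirrors the reply quoted above):

    grep -n "volume_api.create(" nova/virt/block_device.py
    # then place a temporary `import pdb; pdb.set_trace()` inside
    # nova/volume/cinder.py's create() and use pdb's `w` command
    # to print the calling stack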

Re: [openstack-dev] [openstack-dev][trove] redis replication

2015-06-17 Thread
Have you checked the blueprint for this at https://review.openstack.org/#/c/189445/? Hope that helps. Regards, Mariam. On 06/17/2015 02:06:39 AM, 李田清 wrote: Hello, right now we can create one replica at a time, but it is not suitable for redis. What we w...

[openstack-dev] [openstack-dev][trove] redis replication

2015-06-17 Thread
Hello, right now we can create only one replica at a time, but that is not suitable for redis. What should we do about this? And if time permits, can the assignee of redis replication describe the planned process for it? Thanks a...