Re: [openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Mehdi Abaakouk
Hi, On Tue, Jun 13, 2017 at 01:53:02PM +0700, Renat Akhmerov wrote: Can you please clarify for how long you plan to keep ‘blocking executor’ deprecated before complete removal? Like all deprecations. We just did it, so you have two cycles; we will remove it in Rocky. But as I said, this

Re: [openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Renat Akhmerov
Ok, I think I already got my question answered: https://docs.openstack.org/releasenotes/oslo.messaging/unreleased.html#deprecation-notes Thanks Renat Akhmerov @Nokia On 13 Jun 2017, 13:59 +0700, Renat Akhmerov wrote: > Hi Oslo team, > > Can you please clarify for

[openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Renat Akhmerov
Hi Oslo team, Can you please clarify for how long you plan to keep ‘blocking executor’ deprecated before complete removal? We have to use it in Mistral for the time being. We plan to move away from using it but the transition may take significant time, not this cycle for sure. So we got
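
For reference, moving off the blocking executor is mostly a one-argument change on the server side. A minimal sketch, assuming an oslo.messaging release where the 'threading' executor is available; the topic, server name and EngineEndpoint are hypothetical stand-ins for Mistral's real ones:

    import oslo_messaging
    from oslo_config import cfg

    class EngineEndpoint(object):
        def run_action(self, ctxt, **kwargs):   # hypothetical RPC method
            return 'ok'

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='mistral_engine', server='engine-1')

    # 'threading' (or 'eventlet') replaces the deprecated default 'blocking'.
    server = oslo_messaging.get_rpc_server(
        transport, target, [EngineEndpoint()], executor='threading')
    server.start()
    # ... run until shutdown, then: server.stop(); server.wait()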

Re: [openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-08 Thread Ken Giusti
Hi, Keep in mind the rabbit driver creates a single reply queue per *transport* - that is per call to oslo.messaging's get_transport/get_rpc_transport/get_notification_transport. If you have multiple RPCClients sharing the same transport, then all clients issuing RPC calls over that transport
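
A minimal sketch of the sharing Ken describes (topic names are hypothetical): with the rabbit driver, both clients below issue their calls over the same transport and therefore wait on the same reply queue.

    import oslo_messaging
    from oslo_config import cfg

    # One transport == one connection pool and, for RPC calls, one reply queue.
    transport = oslo_messaging.get_transport(cfg.CONF)

    client_a = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='compute'))
    client_b = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='network'))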

Re: [openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-08 Thread Mehdi Abaakouk
Hi, On Thu, Jun 08, 2017 at 10:29:16AM +0800, int32bit wrote: Hi, Currently, I find our RPC client always needs to create a new callback queue for every call request to track which reply belongs to it, at least in Newton. That's pretty inefficient and leads to poor performance. I also find some RPC

Re: [openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-08 Thread lương hữu tuấn
Hi, First of all, the correlation_id is needed to track responses when there is one callback queue per client. What you describe is the inefficiency of a callback queue per request. In any case, a callback queue is needed. About oslo_messaging, you can see the correlation_id in the amqp driver. Br,

[openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-07 Thread int32bit
Hi, Currently, I find our RPC client always needs to create a new callback queue for every call request to track which reply belongs to it, at least in Newton. That's pretty inefficient and leads to poor performance. I also find some RPC implementations have no need to create a new queue; they track the request
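
The alternative alluded to here, a single shared reply queue with replies matched by correlation id, boils down to a table of in-flight requests. A broker-agnostic sketch of that bookkeeping (not oslo.messaging code, just the pattern):

    import uuid
    import queue

    class ReplyWaiter(object):
        """Match replies arriving on one shared reply queue to their callers."""

        def __init__(self):
            self._pending = {}                 # correlation_id -> Queue

        def register(self):
            corr_id = uuid.uuid4().hex
            self._pending[corr_id] = queue.Queue(maxsize=1)
            return corr_id

        def on_reply(self, corr_id, body):
            # Called by the single consumer of the shared reply queue.
            q = self._pending.get(corr_id)
            if q is not None:
                q.put(body)

        def wait(self, corr_id, timeout=None):
            body = self._pending[corr_id].get(timeout=timeout)
            del self._pending[corr_id]
            return body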

Re: [openstack-dev] [oslo.messaging] [smaug] Gate Py34 is failure

2016-05-11 Thread Andreas Jaeger
On 2016-05-11 11:54, Qiming Teng wrote: > I believe the gate is back. You may want to do 'recheck' now. No, it's not. Once everything is fixed, we'll tell. Joshua Hesketh has worked around some problems but not all of them. Once it's working, somebody from the infra team will change the IRC

Re: [openstack-dev] [oslo.messaging] [smaug] Gate Py34 is failure

2016-05-11 Thread Qiming Teng
I believe the gate is back. You may want to do 'recheck' now. Regards, Qiming

Re: [openstack-dev] [oslo.messaging] [smaug] Gate Py34 is failure

2016-05-11 Thread xiangxinyong
Thanks. It really helps me a lot. Best Regards, xiangxinyong On Wed, May 11, 2016 at 04:44 PM +0800, Qiming wrote: > I believe the infra team is busy working on it. Seems it was caused by > pip 8.1.2. Please be patient and avoid doing 'recheck' until the problem is > fixed. > > Regards, >

Re: [openstack-dev] [oslo.messaging] [smaug] Gate Py34 is failure

2016-05-11 Thread Qiming Teng
I believe the infra team is busy working on it. Seems it was caused by pip 8.1.2. Please be patient and avoid doing 'recheck' until the problem is fixed. Regards, Qiming On Wed, May 11, 2016 at 04:42:49PM +0800, xiangxinyong wrote: > hello folks, > > > I find that [gate-smaug-python34] is FAILURE. >

[openstack-dev] [oslo.messaging] [smaug] Gate Py34 is failure

2016-05-11 Thread xiangxinyong
hello folks, I find that [gate-smaug-python34] is FAILURE. The gate messages are as follows: Collecting oslo.messaging>=4.5.0 (from smaug==0.0.1.dev159) Could not find a version that satisfies the requirement oslo.messaging>=4.5.0 (from smaug==0.0.1.dev159) (from versions: ) No matching

Re: [openstack-dev] [oslo.messaging] configurable ack-then-process (at least/most once) behavior

2016-02-09 Thread Bogdan Dobrelya
On 11.12.2015 12:06, Bogdan Dobrelya wrote: > Hello. > > On 02.12.2015 12:01, Bogdan Dobrelya wrote: >>> Bogdan, >>> >>> Which service would use this flag to start with? and how would the >>> code change to provide "app side is fully responsible for duplicates >>> handling"? >> >> (fixed topic

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-12-16 Thread me,apporc
To anyone who cares, I tried to use kombu 2.5 (specifically its reconnecting logic) with oslo.messaging (which uses kombu's reconnecting logic after kilo). No matter whether heartbeat is enabled or not, there are truly many problems, e.g. a NoneType error [1]. I have to manually backport

Re: [openstack-dev] [oslo.messaging] configurable ack-then-process (at least/most once) behavior

2015-12-11 Thread Bogdan Dobrelya
Hello. On 02.12.2015 12:01, Bogdan Dobrelya wrote: >> Bogdan, >> >> Which service would use this flag to start with? and how would the >> code change to provide "app side is fully responsible for duplicates >> handling"? > > (fixed topic tags to match oslo.messaging) > > AFAIK, this mode is

Re: [openstack-dev] [oslo.messaging] configurable ack-then-process (at least/most once) behavior

2015-12-10 Thread Renat Akhmerov
Hi, I also left my comment in the patch which explains what we need from Mistral perspective. Please take a look. Renat Akhmerov @ Mirantis Inc. > On 02 Dec 2015, at 17:01, Bogdan Dobrelya wrote: > >> Bogdan, >> >> Which service would use this flag to start with?

Re: [openstack-dev] [oslo.messaging] configurable ack-then-process (at least/most once) behavior

2015-12-02 Thread Bogdan Dobrelya
> Bogdan, > > Which service would use this flag to start with? and how would the > code change to provide "app side is fully responsible for duplicates > handling"? (fixed topic tags to match oslo.messaging) AFAIK, this mode is required by Mistral HA. Other projects may want the at-least-once
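
While the configurable ack-then-process flag for RPC is still under review, oslo.messaging's notification listener already exposes something close to at-least-once: an endpoint can requeue a message it failed to process, and the application deduplicates on redelivery. A hedged sketch of that swapped-in technique, assuming the listener is created with allow_requeue=True; handle() and TransientError are placeholders:

    import oslo_messaging

    class TransientError(Exception):
        pass

    def handle(event_type, payload):
        pass   # application-specific processing goes here

    class AtLeastOnceEndpoint(object):
        def __init__(self):
            self._seen = set()        # message_ids already processed

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            msg_id = metadata.get('message_id')
            if msg_id in self._seen:
                return oslo_messaging.NotificationResult.HANDLED  # duplicate
            try:
                handle(event_type, payload)
            except TransientError:
                # Leave the message on the queue; it will be redelivered.
                return oslo_messaging.NotificationResult.REQUEUE
            self._seen.add(msg_id)
            return oslo_messaging.NotificationResult.HANDLED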

Re: [openstack-dev] [oslo.messaging] State wrapping in the MessageHandlingServer

2015-11-11 Thread Matthew Booth
On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow wrote: > Matthew Booth wrote: > >> My patch to MessageHandlingServer is currently being reverted because it >> broke Nova tests: >> >> https://review.openstack.org/#/c/235347/ >> >> Specifically it causes a number of tests to

Re: [openstack-dev] [oslo.messaging] State wrapping in the MessageHandlingServer

2015-11-11 Thread Joshua Harlow
Matthew Booth wrote: On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow wrote: Matthew Booth wrote: My patch to MessageHandlingServer is currently being reverted because it broke Nova tests:

[openstack-dev] [oslo.messaging] State wrapping in the MessageHandlingServer

2015-11-10 Thread Matthew Booth
My patch to MessageHandlingServer is currently being reverted because it broke Nova tests: https://review.openstack.org/#/c/235347/ Specifically it causes a number of tests to take a very long time to execute, which ultimately results in the total build time limit being exceeded. This is very

Re: [openstack-dev] [oslo.messaging] State wrapping in the MessageHandlingServer

2015-11-10 Thread Joshua Harlow
Matthew Booth wrote: My patch to MessageHandlingServer is currently being reverted because it broke Nova tests: https://review.openstack.org/#/c/235347/ Specifically it causes a number of tests to take a very long time to execute, which ultimately results in the total build time limit being

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-28 Thread me,apporc
Thank you for pointing this out, dims. I didn't notice this process of openstack. But I wonder how you found the relationship between that bot's commit and the global-requirements commit. And sileht, from this commit

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-28 Thread Davanum Srinivas
apporc, I do a git blame on global-requirements.txt to figure that out. I'll let @sileht answer the other one :) -- dims On Wed, Oct 28, 2015 at 3:15 PM, me,apporc wrote: > Thank you for pointing this out, dims. I didn't notice this process of > openstack. But I

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread Mehdi Abaakouk
On 2015-10-27 04:22, me,apporc wrote: But I found in the changelog history of kombu [1] that heartbeat support was added in version 2.5.0, so what's the point of ">= 3.0.7"? Thanks. The initial heartbeat implementation had some critical issues for oslo.messaging that were fixed since kombu

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread me,apporc
Thank you for the explanation, Mehdi. About the "critical issues" mentioned in the commits, as I understand it: [1] seems to be just a socket timeout issue, and admins can adjust those kernel params themselves. [2] and [3] are truly a problem with the heartbeat implementation, but it says the fix is a part of

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread Mehdi Abaakouk
[1] seems to be just a socket timeout issue, and admins can adjust those kernel params themselves. Yes, but if you trick kernel settings, like putting very low tcp keepalive values, you don't need to have/enable heartbeat. [2] and [3] are truly a problem with the heartbeat implementation, but it

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread me,apporc
Thanks again. This kombu >=3.0.7 requirement was added in commit 5b9fb6980220dbfa18bac4c3231d57efb493ebf0, which is from a bot with no reason given. As I see it, we are directly requiring amqp >=1.4.0 in requirements.txt from commit 0c954cffa2f3710acafa79f01b958a8955823640 on. So maybe there is no need to

Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread Davanum Srinivas
"Bot with No reason" <<< Not really accurate. The process in openstack is to update global requirements first and then bot proposes the update to different projects. So please look at https://github.com/openstack/requirements/commit/c7f69afd6af56e8f7956c6fa0bea8fd776151fe6 for the commit which

[openstack-dev] [oslo.messaging][RPC][HA] describe failure modes in API docs

2015-10-26 Thread Bogdan Dobrelya
This is a continuation of the "next steps" topic [0]. I created a blueprint [1] and a related etherpad [2] to describe failure modes (timeouts, return codes and so on) at least for the Oslo messaging RPC API calls, which seem the most critical point for app and library devs. As the end goal,
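
One failure mode already visible to applications today is the call timeout; a hedged sketch of the kind of behavior the etherpad would document (the topic, method and argument names are hypothetical):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='engine'))

    try:
        result = client.prepare(timeout=10).call({}, 'run_task', task_id='42')
    except oslo_messaging.MessagingTimeout:
        # No reply within 10s: the caller cannot tell whether the server
        # never saw the request or processed it and the reply was lost,
        # so any retry has to be idempotent.
        result = None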

[openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-26 Thread me,apporc
In this commit: https://review.openstack.org/#/c/12/, we set the requirement of kombu to >=3.0.7 to support rabbit heartbeat. But I found in the changelog history of kombu [1] that heartbeat support was added in version 2.5.0, so what's the point of ">= 3.0.7"? Thanks. [1]:
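
For context, the heartbeat feature being discussed looks roughly like this at the kombu level. A minimal sketch with assumed broker credentials; oslo.messaging drives heartbeat_check() from its own background thread rather than a loop like this:

    import time
    import kombu

    # The heartbeat interval is negotiated with the broker at connect time.
    conn = kombu.Connection('amqp://guest:guest@localhost:5672//', heartbeat=60)
    conn.connect()

    while True:
        conn.heartbeat_check()   # raises if the broker stopped answering
        time.sleep(15)           # well below heartbeat / 2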

Re: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Joshua Harlow
Dmitriy Ukhlov wrote: Hello stackers, I'm working on a new oslo.messaging RabbitMQ driver implementation which uses the pika client library instead of kombu. It is related to https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika. In this letter I want to share current results and probably

Re: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Joshua Harlow
Also a side question that someone might know: whatever happened to the folks from RabbitMQ (incorporated? Pivotal?) who were going to get involved in oslo.messaging? Did that ever happen, if anyone knows? They might be a good bunch of people to review such a pika driver (since I think they

Re: [openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Dmitriy Ukhlov
Hello Joshua, thank you for your feedback. This will end up on review.openstack.org, right, so that it can be properly reviewed (it will likely take a while since it looks to be ~1000+ lines of code)? Yes, sure I will send this patch to review.openstack.org, but first of all I need to get

[openstack-dev] [oslo.messaging][devstack] Pika RabbitMQ driver implementation

2015-09-25 Thread Dmitriy Ukhlov
Hello stackers, I'm working on a new oslo.messaging RabbitMQ driver implementation which uses the pika client library instead of kombu. It is related to https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika. In this letter I want to share current results and probably get first feedback from

Re: [openstack-dev] [oslo.messaging][zmq]

2015-09-18 Thread Doug Hellmann
Excerpts from ozamiatin's message of 2015-09-16 18:24:53 +0300: > Hi All, > > I'm excited to report that today we have merged [1] new zmq driver into > oslo.messaging master branch. > The driver is not completely done yet, so we are going to continue > developing it on the master branch now. >

Re: [openstack-dev] [oslo.messaging][zmq]

2015-09-18 Thread Davanum Srinivas
Hear hear! nice work Oleksii and team! -- Dims On Fri, Sep 18, 2015 at 4:51 PM, Doug Hellmann wrote: > Excerpts from ozamiatin's message of 2015-09-16 18:24:53 +0300: > > Hi All, > > > > I'm excited to report that today we have merged [1] new zmq driver into > >

[openstack-dev] [oslo.messaging][zmq]

2015-09-16 Thread ozamiatin
Hi All, I'm excited to report that today we have merged [1] new zmq driver into oslo.messaging master branch. The driver is not completely done yet, so we are going to continue developing it on the master branch now. What we've reached for now is a passing functional tests gate (we are going

Re: [openstack-dev] [oslo.messaging][zmq]

2015-09-16 Thread Clint Byrum
Excerpts from ozamiatin's message of 2015-09-16 08:24:53 -0700: > Hi All, > > I'm excited to report that today we have merged [1] new zmq driver into > oslo.messaging master branch. > The driver is not completely done yet, so we are going to continue > developing it on the master branch now. >

Re: [openstack-dev] [oslo.messaging]

2015-09-02 Thread Georgy Okrokvertskhov
I believe in oslo.messaging the routing_key is a topic. Here is an example for ceilometer: https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/notifications.py#L212 And here is the oslo code for the rabbitMQ driver:

[openstack-dev] [oslo.messaging]

2015-09-01 Thread Nader Lahouti
Hi, I am considering using oslo.messaging to read messages from a rabbit queue. The messages are put into the queue by an external process. In order to do that I need to specify a routing_key in addition to other parameters (i.e. exchange and queue name, ...) for accessing the queue. I was looking
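
A hedged sketch of the usual answer: in oslo.messaging, the exchange and topic of a Target roughly map to the AMQP exchange and routing key, so a notification listener can consume such messages, provided the external producer emits them in the oslo notification envelope. The exchange, topic and URL below are hypothetical:

    import time
    import oslo_messaging
    from oslo_config import cfg

    class Endpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, payload)

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    targets = [oslo_messaging.Target(exchange='my_exchange',
                                     topic='my_routing_key')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()], executor='threading')

    listener.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        listener.stop()
        listener.wait()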

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-07-20 Thread Bogdan Dobrelya
inline On 14.07.2015 18:59, Alec Hothan (ahothan) wrote: inline... On 7/8/15, 8:23 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote: I believe Oleksii is already working on it. On all above I believe it is best to keep oslo messaging simple and predictable, then have apps deal

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-07-20 Thread Alec Hothan (ahothan)
On 7/20/15, 5:24 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote: inline On 14.07.2015 18:59, Alec Hothan (ahothan) wrote: inline... On 7/8/15, 8:23 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote: I believe Oleksii is already working on it. On all above I believe it is

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-07-14 Thread Alec Hothan (ahothan)
inline... On 7/8/15, 8:23 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote: On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote: I think you missed "it is not tested in the gate" as a root cause for some of the ambiguity. Anecdotes and bug reports are super important for knowing

Re: [openstack-dev] [oslo.messaging] [mistral] Acknowledge feature of RabbitMQ in oslo.messaging

2015-07-07 Thread Renat Akhmerov
Just to clarify: what we’re looking for is how to implement the “Work queue” pattern described at [1] with oslo.messaging. As Nikolay said, it requires that a message be acknowledged after it has been processed. [1] http://www.rabbitmq.com/tutorials/tutorial-two-python.html
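
Outside of oslo.messaging's RPC layer, the tutorial's work-queue pattern (ack only after the task is done, one unacked task per worker) can be written directly with kombu; a minimal sketch in which the queue name, broker URL and do_work() are placeholders:

    import kombu
    from kombu.mixins import ConsumerMixin

    def do_work(body):
        pass   # application-specific task processing

    class Worker(ConsumerMixin):
        def __init__(self, connection):
            self.connection = connection
            self.queue = kombu.Queue('task_queue', durable=True)

        def get_consumers(self, Consumer, channel):
            # prefetch_count=1: no new task until the previous one is acked.
            return [Consumer(queues=[self.queue], callbacks=[self.process],
                             accept=['json'], prefetch_count=1)]

        def process(self, body, message):
            do_work(body)
            message.ack()    # acknowledge only after processing succeeded

    Worker(kombu.Connection('amqp://guest:guest@localhost//')).run()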

[openstack-dev] [oslo.messaging] [mistral] Acknowledge feature of RabbitMQ in oslo.messaging

2015-07-07 Thread Nikolay Makhotkin
Hi, I am using RabbitMQ as the backend and searched oslo.messaging for a message acknowledgement feature, but I found only [1], which is the wrong use of acknowledgement since it acknowledges the incoming message before it has been processed (while it should be done only after processing the message,

Re: [openstack-dev] [oslo.messaging] [mistral] Acknowledge feature of RabbitMQ in oslo.messaging

2015-07-07 Thread Mehdi Abaakouk
Hi, The RPC API of oslo.messaging does this for you; you don't have to care about acknowledgement (or anything else done by the driver, because the underlying pattern used depends on it). For the Work Queues pattern, I guess what you need is to ensure that the Target doesn't have the

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-27 Thread ozamiatin
Spec [1] is updated and ready for review. Thanks everyone for taking part in the discussion. Regards, Oleksii [1] - https://review.openstack.org/#/c/187338 6/22/15 13:14, Sean Dague wrote: On 06/19/2015 08:18 PM, Alec Hothan (ahothan) wrote: Do we have a good understanding of what is

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-22 Thread Sean Dague
On 06/19/2015 08:18 PM, Alec Hothan (ahothan) wrote: Do we have a good understanding of what is expected of zmq wrt rabbitMQ? Like in what part of the bell curve or use cases would you see it? Or indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could handle better? I have

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-20 Thread Joshua Harlow
Alec Hothan (ahothan) wrote: Do we have a good understanding of what is expected of zmq wrt rabbitMQ? Like in what part of the bell curve or use cases would you see it? Or indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could handle better? I have tried to find any information

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-19 Thread Alec Hothan (ahothan)
Do we have a good understanding of what is expected of zmq wrt rabbitMQ? Like in what part of the bell curve or use cases would you see it? Or indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could handle better? I have tried to find any information on very large scale deployment

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-19 Thread Clint Byrum
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-19 17:18:41 -0700: Do we have a good understanding of what is expected of zmq wrt rabbitMQ? Like in what part of the bell curve or use cases would you see it? Or indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-19 Thread Thierry Carrez
Flavio Percoco wrote: There's 95% of deployments using rabbit not because rabbit is the best solution for all OpenStack problems but because it was the one that works best now. The lack of support on other drivers caused this, and as long as this lack of support on such drivers persists, it won't

Re: [openstack-dev] [oslo.messaging][zeromq] pub/sub support for the revised zeromq driver

2015-06-18 Thread Alec Hothan (ahothan)
On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote: fyi, the spec for zeromq driver in oslo.messaging is here: https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage.rst,unified The above spec suggests using the zmq pub/sub/xpub/xsub

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-18 Thread Gordon Sim
On 06/16/2015 08:51 PM, Alec Hothan (ahothan) wrote: I saw Sean Dague mention in another email that RabbitMQ is used by 95% of OpenStack users - and therefore does it make sense to invest in ZMQ (legit question). I believe it's used by 95% of users because there is as yet no compelling

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-18 Thread Flavio Percoco
On 16/06/15 19:51 +0000, Alec Hothan (ahothan) wrote: Gordon, These are all great points for RPC messages (also called CALL in oslo messaging). There are similar ambiguous contracts for the other types of messages (CAST and FANOUT). I am worried about the general lack of interest from the

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-16 Thread Gordon Sim
On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote: One long standing issue I can see is the fact that the oslo messaging API documentation is sorely lacking details on critical areas such as API behavior during fault conditions, load conditions and scale conditions. I very much agree,

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-16 Thread Alec Hothan (ahothan)
Gordon, These are all great points for RPC messages (also called CALL in oslo messaging). There are similar ambiguous contracts for the other types of messages (CAST and FANOUT). I am worried about the general lack of interest from the community to fix this as it looks like most people assume

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-15 Thread Alec Hothan (ahothan)
On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote: I think you missed "it is not tested in the gate" as a root cause for some of the ambiguity. Anecdotes and bug reports are super important for knowing where to invest next, but a test suite would at least establish a base line and

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-15 Thread Clint Byrum
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-15 11:45:53 -0700: On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote: I think you missed "it is not tested in the gate" as a root cause for some of the ambiguity. Anecdotes and bug reports are super important for knowing

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-13 Thread ozamiatin
6/13/15 01:55, Clint Byrum wrote: Excerpts from Alec Hothan (ahothan)'s message of 2015-06-12 13:41:17 -0700: On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote: fyi, the spec for zeromq driver in oslo.messaging is here:

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-12 Thread ozamiatin
Hi Alec, thanks for the email thread investigation. I've decided to spend more time digging into old zmq-related threads too. Some notes inline. 6/12/15 23:41, Alec Hothan (ahothan) wrote: On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote: fyi, the spec for zeromq driver in

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-12 Thread Alec Hothan (ahothan)
On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote: fyi, the spec for zeromq driver in oslo.messaging is here: https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage.rst,unified -- dims I was about to provide some email comments on the above review off gerrit,

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-12 Thread Clint Byrum
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-12 13:41:17 -0700: On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote: fyi, the spec for zeromq driver in oslo.messaging is here: https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage.rst,unified --

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-01 Thread Davanum Srinivas
Hi, I'll try to address the question about Proxy process. AFAIK

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-05-28 Thread Alec Hothan (ahothan)
Hi, I'll try to address the question about Proxy process. AFAIK there is no way yet in zmq to bind more than once to a specific port (e.g. tcp://*:9501). Apparently we can: socket1.bind('tcp://node1:9501

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-05-27 Thread ozamiatin
Hi, I'll try to address the question about Proxy process. AFAIK there is no way yet in zmq to bind more than once to a specific port (e.g. tcp://*:9501). Apparently we can: socket1.bind('tcp://node1:9501') socket2.bind('tcp://node2:9501') but we can not: socket1.bind('tcp://*:9501')
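
A quick pyzmq check of the constraint Oleksii describes, which is why a single proxy process has to own the port (port number is arbitrary):

    import zmq

    ctx = zmq.Context()
    s1 = ctx.socket(zmq.PULL)
    s1.bind('tcp://*:9501')

    s2 = ctx.socket(zmq.PULL)
    try:
        s2.bind('tcp://*:9501')          # second bind on the same port
    except zmq.ZMQError as e:
        print(e)                         # "Address already in use"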

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-05-26 Thread Davanum Srinivas
Alec, Here are the slides: http://www.slideshare.net/davanum/oslomessaging-new-0mq-driver-proposal All the 0mq patches to date should be either already merged in trunk or waiting for review on trunk. Oleksii, Li Ma, Can you please address the other questions? thanks, Dims On Tue, May 26, 2015

[openstack-dev] oslo.messaging release 1.11.0 (liberty)

2015-05-26 Thread Davanum Srinivas
We are jubilant to announce the release of: oslo.messaging 1.11.0: Oslo Messaging API With source available at: http://git.openstack.org/cgit/openstack/oslo.messaging For more details, please see the git log history below and: http://launchpad.net/oslo.messaging/+milestone/1.11.0

[openstack-dev] [oslo.messaging][zeromq] Next step

2015-05-26 Thread Alec Hothan (ahothan)
Looking at what the next step is following the design summit meeting on 0MQ, as the etherpad does not provide much information. A few questions: - would it be possible to have the slides presented (showing the proposed changes in the 0MQ driver design) made available somewhere? - is there a

[openstack-dev] Oslo.messaging ZeroMQ driver changes in Liberty

2015-04-14 Thread ozamiatin
Hi, Does anyone use any version of the ZeroMQ driver in a production deployment? If you do, please leave your comments in [1], or reply to this letter. [1] https://review.openstack.org/#/c/171131/ Thanks, Oleksii Zamiatin

[openstack-dev] [oslo.messaging][zeromq] Some backports to stable/kilo

2015-04-09 Thread Li Ma
Hi oslo all, Currently devstack master relies on the 1.8.1 release due to the requirements freeze (>=1.8.0, <1.9.0); however, the ZeroMQ driver is able to run on the 1.9.0 release. The result is that you cannot deploy the ZeroMQ driver using devstack master now due to some incompatibility between oslo.messaging 1.8.1

Re: [openstack-dev] [oslo.messaging][zeromq] Some backports to stable/kilo

2015-04-09 Thread Mehdi Abaakouk
Hi, All of these patches are only bug fixes, so that's good for me. So if others agree, I can release 1.8.2 with these changes once they are landed. We don't have any other changes pending in the kilo branch for now. On 2015-04-09 16:12, Li Ma

Re: [openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability

2015-04-01 Thread Li Ma
Great. I'm just doing some experiments to evaluate the REQ/REP pattern. It seems that your implementation is complete. Looking forward to reviewing your updates. On Mon, Mar 30, 2015 at 4:02 PM, ozamiatin ozamia...@mirantis.com wrote: Hi, Sorry for not replying to the comments on [1] for so long. I'm

Re: [openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability

2015-03-30 Thread ozamiatin
Hi, Sorry for not replying to the comments on [1] for so long. I'm almost ready to return to the spec with updates. The main shortcoming of the current zmq-driver implementation is that it manually implements REQ/REP on top of PUSH/PULL. This results in: 1. PUSH/PULL is a one-way directed socket (a reply needs another

[openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability

2015-03-29 Thread Li Ma
Hi all, I'd like to propose a simple but straightforward method to improve the stability of the current implementation. Here's the current implementation: receiver(PULL(tcp)) -- service(PUSH(tcp)) receiver(PUB(ipc)) -- service(SUB(ipc)) receiver(PUSH(ipc)) -- service(PULL(ipc)) Actually, as
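
For comparison, a minimal single-process pyzmq sketch of the REQ/REP pattern proposed in this thread: the reply travels back over the same socket pair, so no second socket is needed for the return path (address and payloads are arbitrary):

    import zmq

    ctx = zmq.Context()

    rep = ctx.socket(zmq.REP)            # service side
    rep.bind('tcp://*:9501')

    req = ctx.socket(zmq.REQ)            # caller side
    req.connect('tcp://localhost:9501')

    req.send_json({'method': 'ping'})
    print(rep.recv_json())               # service receives the request ...
    rep.send_json({'result': 'pong'})
    print(req.recv_json())               # ... and the caller gets the reply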

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Flavio Percoco
On 23/03/15 09:24 -0400, Doug Hellmann wrote: Excerpts from Li Ma's message of 2015-03-23 18:23:39 +0800: Hi all, During previous threads discussing about zeromq driver, a subgroup may be necessary to exchange knowledge and improve efficiency of communication and development. In this subgroup,

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Doug Hellmann
Excerpts from Li Ma's message of 2015-03-24 23:31:22 +0800: On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote: The goal we set at the Kilo summit was to have a group of people interested in zmq start contributing to the driver, and, I had hoped, to the library

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Ben Nemec
On 03/24/2015 10:31 AM, Li Ma wrote: On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote: The goal we set at the Kilo summit was to have a group of people interested in zmq start contributing to the driver, and, I had hoped, to the library overall. How do we feel that is

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Li Ma
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote: The goal we set at the Kilo summit was to have a group of people interested in zmq start contributing to the driver, and, I had hoped, to the library overall. How do we feel that is going? That sounds great. I hope so.

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Li Ma
By the way, just letting you know that a general session will be available for the zeromq driver. I'll present the general architecture of the current zeromq driver, pros and cons, potential improvements, and use cases for production. Topic: Distributed Messaging System for OpenStack at Scale Link:

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Doug Hellmann
Excerpts from ozamiatin's message of 2015-03-24 18:57:25 +0200: Hi, +1 for the subgroup meeting Does the separate repository mean a separate library (python package) with its own release cycles and so on? Yes, although as an Oslo library it would be subject to our existing policies about versioning,

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Eric Windisch
From my experience, making fast moving changes is far easier when code is split out. Changes occur too slowly when integrated. I'd be +1 on splitting the code out. I expect you will get more done this way. Regards, Eric Windisch

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Flavio Percoco
On 24/03/15 11:03 -0500, Ben Nemec wrote: On 03/24/2015 10:31 AM, Li Ma wrote: On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote: The goal we set at the Kilo summit was to have a group of people interested in zmq start contributing to the driver, and, I had hoped, to the

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread ozamiatin
Hi, +1 for the subgroup meeting Does the separate repository mean a separate library (python package) with its own release cycles and so on? As I see it, the separate library makes it easy: 1) To support optional (for oslo.messaging) requirements specific to the zmq driver, like pyzmq, redis and so on 2)

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Davanum Srinivas
+1 to keep it together. -- dims On Tue, Mar 24, 2015 at 12:17 PM, Flavio Percoco fla...@redhat.com wrote: On 24/03/15 11:03 -0500, Ben Nemec wrote: On 03/24/2015 10:31 AM, Li Ma wrote: On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote: The goal we set at the Kilo

Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-23 Thread Doug Hellmann
Excerpts from Li Ma's message of 2015-03-23 18:23:39 +0800: Hi all, During previous threads discussing the zeromq driver, it appeared that a subgroup may be necessary to exchange knowledge and improve the efficiency of communication and development. In this subgroup, we can schedule a given topic or just

[openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-23 Thread Li Ma
Hi all, During previous threads discussing the zeromq driver, it appeared that a subgroup may be necessary to exchange knowledge and improve the efficiency of communication and development. In this subgroup, we can schedule a given topic or just discuss some re-factoring stuff or bugs in an IRC room at a fixed time.

Re: [openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-03-11 Thread Boden Russell
Regarding bug 1426046 (see below) -- Is this just a matter of making the classes public, or are you thinking the driver interface needs more thought + solidifying before making something extendable? Perhaps I can donate a cycle or 2 to help get this in. On 2/26/15 10:33 AM, Doug Hellmann wrote:

Re: [openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-02-26 Thread Doug Hellmann
On Thu, Feb 26, 2015, at 07:24 AM, Boden Russell wrote: What's the suggested approach for implementing a custom oslo messaging driver given the existing impl [1] is private? e.g. I want to provide my own notification messaging driver which adds functionality atop the existing driver [1].

Re: [openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-02-26 Thread Boden Russell
I don't have a public repo -- have been PoCing using a private gitlab to date... I figured any interest in the driver impl would come out of this email discussion. More than happy to provide my PoC code publicly (after a little clean-up) if there's an interest. On 2/26/15 12:01 PM, Sandy Walsh

Re: [openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-02-26 Thread Sandy Walsh
Thanks for filing the bug report... My driver implementation effectively allows you to filter

Re: [openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-02-26 Thread Boden Russell
Thanks for filing the bug report... My driver implementation effectively allows you to filter on notification events and multicast matches to a given list of topics. I've been calling it a messaging multicast notification driver, and thus the stevedore plugin entry point I've called

Re: [openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-02-26 Thread Doug Hellmann
On Thu, Feb 26, 2015, at 01:41 PM, Boden Russell wrote: Thanks for filing the bug report... My driver implementation effectively allows you to filter on notification events and multicast matches to a given list of topics. I've been calling it a messaging multicast notification driver

[openstack-dev] [oslo.messaging] extending notification MessagingDriver

2015-02-26 Thread Boden Russell
What's the suggested approach for implementing a custom oslo messaging driver given the existing impl [1] is private? e.g. I want to provide my own notification messaging driver which adds functionality atop the existing driver [1]. This can obviously be done by extending the
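
A heavily hedged sketch of what such an extension tends to look like today: the base class and entry-point namespace below come from oslo.messaging's notify machinery, but, as the question notes, they are not a blessed public API (which is what the bug report filed in this thread is about). The package, class and entry-point names are made up:

    # setup.cfg of the hypothetical package:
    #
    #   [entry_points]
    #   oslo.messaging.notify.drivers =
    #       multicast = mypackage.driver:MulticastDriver

    from oslo_messaging.notify import notifier

    class MulticastDriver(notifier.Driver):
        """Fan selected notifications out to several topics."""

        def notify(self, ctxt, message, priority, retry):
            # Filter on message['event_type'] and re-publish to a
            # configured list of topics (left out of this sketch).
            pass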

[openstack-dev] [oslo.messaging] notification listener; same target with multiple executors

2015-02-12 Thread Boden Russell
Is it possible to have multiple oslo messaging notification listeners using different executors on the same target? For example, I want to create multiple notification listeners [1], each using a different executor, for the same set of targets (e.g. glance/notifications). When I try this [2], only
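
A sketch of the setup being described, assuming a release that supports both executors shown; the endpoint classes are placeholders. Note that listeners sharing a topic compete for the same queue unless they are given distinct pool names, which may be part of what is being observed:

    import oslo_messaging
    from oslo_config import cfg

    class EndpointA(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            pass

    class EndpointB(EndpointA):
        pass

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]   # e.g. glance

    listener_a = oslo_messaging.get_notification_listener(
        transport, targets, [EndpointA()], executor='threading', pool='pool-a')
    listener_b = oslo_messaging.get_notification_listener(
        transport, targets, [EndpointB()], executor='eventlet', pool='pool-b')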

[openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Gravel, Julie Chongcharoen
Hello, I want to use oslo.messaging.RPCClient.call() to invoke a method on multiple servers, but not all of them. Can this be done, and how? I read the code documentation (client.py and target.py). I only saw the call used either for one server at a time, or for all of them using
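
The pattern that usually comes back as the answer: there is no single "subset" call, but the client can address each chosen server explicitly via prepare() and collect the results. The topic, server names and method below are hypothetical:

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='my_topic'))

    results = {}
    for host in ('server-1', 'server-3'):            # the chosen subset
        results[host] = client.prepare(server=host).call({}, 'get_status')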

Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Doug Hellmann
On Mon, Feb 9, 2015, at 02:40 PM, Gravel, Julie Chongcharoen wrote: Hello, I want to use oslo.messaging.RPCClient.call() to invoke a method on multiple servers, but not all of them. Can this be done and how? I read the code documentation

Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Russell Bryant
On 02/09/2015 04:04 PM, Doug Hellmann wrote: On Mon, Feb 9, 2015, at 02:40 PM, Gravel, Julie Chongcharoen wrote: Hello, I want to use oslo.messaging.RPCClient.call() to invoke a method on multiple servers, but not all of them. Can this be

Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Denis Makogon
On Monday, February 9, 2015, Gravel, Julie Chongcharoen julie.gra...@hp.com wrote: Hello, I want to use oslo.messaging.RPCClient.call() to invoke a method on multiple servers, but not all of them. Can this be done and how? I read the code documentation (client.py and

Re: [openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-02-04 Thread Li Ma
Any news here? The per-socket solution is a conservative one that makes the zeromq driver work with multiple workers. Neutron-server has api-worker and rpc-worker. I'm not sure per-driver is applicable. I will try to figure it out soon. On Fri, Jan 23, 2015 at 7:53 PM, Oleksii Zamiatin
