Hi,
On Tue, Jun 13, 2017 at 01:53:02PM +0700, Renat Akhmerov wrote:
Can you please clarify for how long you plan to keep ‘blocking executor’
deprecated before complete removal?
Like all deprecations: we just did it, so you have two cycles; we will
remove it in Rocky.
But as I said, this
Ok, I think I already got my question answered:
https://docs.openstack.org/releasenotes/oslo.messaging/unreleased.html#deprecation-notes
Thanks
Renat Akhmerov
@Nokia
On 13 Jun 2017, 13:59 +0700, Renat Akhmerov wrote:
> Hi Oslo team,
>
> Can you please clarify for
Hi Oslo team,
Can you please clarify for how long you plan to keep ‘blocking executor’
deprecated before complete removal?
We have to use it in Mistral for the time being. We plan to move away from
using it but the transition may take significant time, not this cycle for sure.
So we got
Hi,
Keep in mind the rabbit driver creates a single reply queue per *transport*
- that is per call to oslo.messaging's
get_transport/get_rpc_transport/get_notification_transport.
If you have multiple RPCClients sharing the same transport, then all
clients issuing RPC calls over that transport
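To make the per-transport reply queue concrete, here is a broker-free sketch of the demultiplexing idea: many callers share one reply stream, and each response is matched back to its caller by correlation id. All names are illustrative; this is not oslo.messaging's actual internals, just the shape of the pattern.

```python
import queue
import threading
import uuid

# One shared reply stream per "transport" (stand-in for the single rabbit
# reply queue); per-call waiter queues are keyed by correlation id.
reply_stream = queue.Queue()
pending = {}

def demux_replies():
    # A single consumer drains the shared stream and hands each reply
    # to whichever caller issued the matching request.
    while True:
        corr_id, body = reply_stream.get()
        waiter = pending.pop(corr_id, None)
        if waiter is not None:
            waiter.put(body)

def rpc_call(request):
    corr_id = uuid.uuid4().hex
    waiter = queue.Queue()
    pending[corr_id] = waiter          # register before "sending"
    # Pretend the remote server echoes the request back, uppercased,
    # on the shared reply stream.
    reply_stream.put((corr_id, request.upper()))
    return waiter.get(timeout=1)

threading.Thread(target=demux_replies, daemon=True).start()
print(rpc_call("ping"))  # PING
```

The point is that adding more concurrent callers adds entries to `pending`, not more queues on the broker.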
Hi,
On Thu, Jun 08, 2017 at 10:29:16AM +0800, int32bit wrote:
Hi,
Currently, I find our RPC client always needs to create a new callback queue
for every call request, to track which reply belongs to it, at least in Newton.
That's pretty inefficient and leads to poor performance. I also find some
RPC
Hi,
First of all, the correlation_id is needed for tracking the response when
there is one callback queue per client. What you describe is the inefficiency of
a callback queue per request. In either case, a callback queue is needed.
About oslo_messaging, you can see the correlation_id in the driver of amqp.
Br,
Hi,
Currently, I find our RPC client always needs to create a new callback queue
for every call request, to track which reply belongs to it, at least in Newton.
That's pretty inefficient and leads to poor performance. I also find some
RPC implementations that have no need to create a new queue; they track the request
On 2016-05-11 11:54, Qiming Teng wrote:
> I believe the gate is back. You may want to do 'recheck' now.
No, it's not. Once everything is fixed, we'll tell.
Joshua Hesketh has worked around some problems but not all of them. Once
it's working, somebody from the infra team will change the IRC
I believe the gate is back. You may want to do 'recheck' now.
Regards,
Qiming
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
Thanks. It really helps me a lot.
Best Regards,
xiangxinyong
On Wed, May 11, 2016 at 04:44 PM +0800, Qiming wrote:
> I believe the infra team is busy working on it. Seems it was caused by
> pip 8.1.2. Please be patient and avoid doing 'recheck' until problem is
> fixed.
>
> Regards,
>
I believe the infra team is busy working on it. Seems it was caused by
pip 8.1.2. Please be patient and avoid doing 'recheck' until problem is
fixed.
Regards,
Qiming
On Wed, May 11, 2016 at 04:42:49PM +0800, xiangxinyong wrote:
> hello folks,
>
>
> I find [gate-smaug-python34] is failing.
>
hello folks,
I find [gate-smaug-python34] is failing.
The gate messages are as follows:
Collecting oslo.messaging>=4.5.0 (from smaug==0.0.1.dev159) Could not find a
version that satisfies the requirement oslo.messaging>=4.5.0 (from
smaug==0.0.1.dev159) (from versions: ) No matching
On 11.12.2015 12:06, Bogdan Dobrelya wrote:
> Hello.
>
> On 02.12.2015 12:01, Bogdan Dobrelya wrote:
>>> Bogdan,
>>>
>>> Which service would use this flag to start with? and how would the
>>> code change to provide "app side is fully responsible for duplicates
>>> handling"?
>>
>> (fixed topic
To anyone who cares,
I tried using kombu 2.5 (specifically its reconnection logic) with
oslo.messaging (which has used kombu's reconnection logic since Kilo). Whether
heartbeat is enabled or not, there are truly many problems,
e.g. a NoneType error [1].
I have to manually backport
Hello.
On 02.12.2015 12:01, Bogdan Dobrelya wrote:
>> Bogdan,
>>
>> Which service would use this flag to start with? and how would the
>> code change to provide "app side is fully responsible for duplicates
>> handling"?
>
> (fixed topic tags to match oslo.messaging)
>
> AFAIK, this mode is
Hi, I also left my comment on the patch, which explains what we need from
the Mistral perspective. Please take a look.
Renat Akhmerov
@ Mirantis Inc.
> On 02 Dec 2015, at 17:01, Bogdan Dobrelya wrote:
>
>> Bogdan,
>>
>> Which service would use this flag to start with?
> Bogdan,
>
> Which service would use this flag to start with? and how would the
> code change to provide "app side is fully responsible for duplicates
> handling"?
(fixed topic tags to match oslo.messaging)
AFAIK, this mode is required by Mistral HA. Other projects may want
the at-least-once
On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow
wrote:
> Matthew Booth wrote:
>
>> My patch to MessageHandlingServer is currently being reverted because it
>> broke Nova tests:
>>
>> https://review.openstack.org/#/c/235347/
>>
>> Specifically it causes a number of tests to
Matthew Booth wrote:
On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow wrote:
Matthew Booth wrote:
My patch to MessageHandlingServer is currently being reverted
because it
broke Nova tests:
My patch to MessageHandlingServer is currently being reverted because it
broke Nova tests:
https://review.openstack.org/#/c/235347/
Specifically it causes a number of tests to take a very long time to
execute, which ultimately results in the total build time limit being
exceeded. This is very
Matthew Booth wrote:
My patch to MessageHandlingServer is currently being reverted because it
broke Nova tests:
https://review.openstack.org/#/c/235347/
Specifically it causes a number of tests to take a very long time to
execute, which ultimately results in the total build time limit being
Thank you for pointing this out, dims. I didn't know about this OpenStack
process. But I wonder how you found the relationship between that bot's
commit and the global-requirements commit.
And sileht, from this commit
apporc,
I did a git blame on global-requirements.txt to figure that out. I'll let
@sileht answer the other one :)
-- dims
On Wed, Oct 28, 2015 at 3:15 PM, me,apporc
wrote:
> Thank you for pointing this out, dims. I didn't notice this process of
> openstack. But i
On 2015-10-27 04:22, me,apporc wrote:
But I found in the kombu changelog [1] that heartbeat support was
added in version 2.5.0, so what's the point of ">= 3.0.7"? Thanks.
The initial heartbeat implementation had some critical issues for
oslo.messaging that were fixed since kombu
Thank you for the explanation, Mehdi. About the "critical issues" mentioned
in the commits, as I understand:
[1] seems to be just a socket timeout issue, and admins can adjust those kernel
params themselves.
[2] and [3] are truly problems with the heartbeat implementation, but it says
the fix is a part of
[1] seems to be just a socket timeout issue, and admins can adjust those kernel
params themselves.
Yes, but if you tweak kernel settings, like putting very low TCP
keepalive values, you don't need to have/enable heartbeat.
[2] and [3] are truly problems with the heartbeat implementation, but it
Thanks again.
This kombu>=3.0.7 requirement was added in commit
5b9fb6980220dbfa18bac4c3231d57efb493ebf0, which came from a bot with no
stated reason.
As I see it, we have been directly requiring amqp>=1.4.0 in requirements.txt
since commit 0c954cffa2f3710acafa79f01b958a8955823640.
So maybe there is no need to
"Bot with No reason" <<< Not really accurate. The process in openstack is
to update global requirements first and then bot proposes the update to
different projects. So please look at
https://github.com/openstack/requirements/commit/c7f69afd6af56e8f7956c6fa0bea8fd776151fe6
for the commit which
This is a continuation of the "next steps" topic [0].
I created a blueprint [1] and related etherpad [2] to describe failure
modes (timeouts, return codes and so on) at least for the Oslo
messaging RPC API calls, which seem the most critical
point for app and the library devs.
As the end goal,
In this commit: https://review.openstack.org/#/c/12/, we set the
requirement on kombu to >=3.0.7 to support rabbit heartbeat.
But I found in the kombu changelog [1] that heartbeat support was
added in version 2.5.0, so what's the point of ">= 3.0.7"? Thanks.
[1]:
Dmitriy Ukhlov wrote:
Hello stackers,
I'm working on a new oslo.messaging RabbitMQ driver implementation which
uses the pika client library instead of kombu. It relates to
https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
In this letter I want to share current results and probably
Also a side question, that someone might know,
Whatever happened to the folks from rabbitmq (incorporated? pivotal?)
who were going to get involved in oslo.messaging, did that ever happen;
if anyone knows?
They might be a good bunch of people to review such a pika driver (since
I think they
Hello Joshua, thank you for your feedback.
This will end up on review.openstack.org right so that it can be properly
> reviewed (it will likely take a while since it looks to be ~1000+ lines of
> code)?
Yes, sure I will send this patch to review.openstack.org, but first of all
I need to get
Hello stackers,
I'm working on a new oslo.messaging RabbitMQ driver implementation which uses
the pika client library instead of kombu. It relates to
https://blueprints.launchpad.net/oslo.messaging/+spec/rabbit-pika.
In this letter I want to share current results and probably get first
feedback from
Excerpts from ozamiatin's message of 2015-09-16 18:24:53 +0300:
> Hi All,
>
> I'm excited to report that today we have merged [1] new zmq driver into
> oslo.messaging master branch.
> The driver is not completely done yet, so we are going to continue
> developing it on the master branch now.
>
Hear hear! nice work Oleksii and team!
-- Dims
On Fri, Sep 18, 2015 at 4:51 PM, Doug Hellmann
wrote:
> Excerpts from ozamiatin's message of 2015-09-16 18:24:53 +0300:
> > Hi All,
> >
> > I'm excited to report that today we have merged [1] new zmq driver into
> >
Hi All,
I'm excited to report that today we have merged [1] new zmq driver into
oslo.messaging master branch.
The driver is not completely done yet, so we are going to continue
developing it on the master branch now.
What we've reached for now is passing functional tests gate (we are
going
Excerpts from ozamiatin's message of 2015-09-16 08:24:53 -0700:
> Hi All,
>
> I'm excited to report that today we have merged [1] new zmq driver into
> oslo.messaging master branch.
> The driver is not completely done yet, so we are going to continue
> developing it on the master branch now.
>
I believe that in oslo.messaging the routing_key is the topic.
Here is an example for ceilometer:
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/notifications.py#L212
And here is an oslo code for rabbitMQ driver:
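For anyone less familiar with the AMQP terms used here, the direct-exchange semantics of a routing_key can be sketched in a few lines of plain Python; the binding and queue names below are made up for illustration, not taken from oslo.messaging or ceilometer.

```python
# Illustrative model of AMQP direct-exchange routing: a queue receives a
# message only when its binding key equals the message's routing key.
bindings = {}  # routing key -> list of bound queue names

def bind(queue_name, routing_key):
    bindings.setdefault(routing_key, []).append(queue_name)

def publish(routing_key, message, queues):
    # Deliver to every queue bound with exactly this routing key;
    # messages with an unbound key are simply dropped.
    for name in bindings.get(routing_key, []):
        queues.setdefault(name, []).append(message)

queues = {}
bind("metering.sample", "metering.sample")
publish("metering.sample", {"meter": "cpu"}, queues)
publish("other.key", {"meter": "ram"}, queues)   # no binding: dropped
print(queues)  # {'metering.sample': [{'meter': 'cpu'}]}
```

Topic exchanges generalize this with wildcard matching on the key, but the equality case above is the one the question is about.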
Hi,
I am considering using oslo.messaging to read messages from a rabbit
queue. The messages are put into the queue by an external process.
In order to do that I need to specify routing_key in addition to other
parameters (i.e. exchange and queue,... name) for accessing the queue. I
was looking
inline
On 14.07.2015 18:59, Alec Hothan (ahothan) wrote:
inline...
On 7/8/15, 8:23 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote:
I believe Oleksii is already working on it.
On all above I believe it is best to keep oslo messaging simple and
predictable, then have apps deal
On 7/20/15, 5:24 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote:
inline
On 14.07.2015 18:59, Alec Hothan (ahothan) wrote:
inline...
On 7/8/15, 8:23 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote:
I believe Oleksii is already working on it.
On all above I believe it is
inline...
On 7/8/15, 8:23 AM, Bogdan Dobrelya bdobre...@mirantis.com wrote:
On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote:
I think you missed "it is not tested in the gate" as a root cause for
some of the ambiguity. Anecdotes and bug reports are super important
for
knowing
Just to clarify: what we're looking for is how to implement the "work queue"
pattern described at [1] with oslo.messaging. As Nikolay said, it requires that
a message be acknowledged after it has been processed.
[1] http://www.rabbitmq.com/tutorials/tutorial-two-python.html
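For what it's worth, the ack-after-processing behavior of the work-queue tutorial can be sketched without a broker: the "ack" simply doesn't happen until the handler returns, so a crashed handler leaves the message requeued. This is a toy model to show the semantics, not the oslo.messaging or pika API.

```python
from collections import deque

class WorkQueue:
    """Toy at-least-once queue: ack only after successful processing."""

    def __init__(self):
        self.messages = deque()

    def publish(self, msg):
        self.messages.append(msg)

    def consume(self, handler):
        msg = self.messages.popleft()
        try:
            handler(msg)                   # process first...
        except Exception:
            self.messages.append(msg)      # ...requeue on failure (no ack)
            raise
        # Reaching this point is the "ack": the message is gone for good.

q = WorkQueue()
q.publish("task-1")

try:
    q.consume(lambda m: 1 / 0)   # handler crashes: message is requeued
except ZeroDivisionError:
    pass
print(len(q.messages))           # 1: still pending

q.consume(lambda m: None)        # handler succeeds: implicit ack
print(len(q.messages))           # 0
```

The complaint in this thread is that [1] performs the ack at the point marked "process first", which silently downgrades the delivery guarantee to at-most-once.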
Hi,
I am using RabbitMQ as the backend and searched oslo.messaging for a message
acknowledgement feature, but I found only [1], which is an incorrect use of
acknowledgement since it acknowledges the incoming message before it has been
processed (while it should be done only after processing the message,
Hi,
The RPC API of oslo.messaging does this for you; you don't have to care
about acknowledgement (or anything else done by the driver, because the
underlying pattern used depends on it).
For the work queue pattern, I guess what you need is to ensure
that the Target doesn't have the
Spec [1] is updated and ready for review.
Thanks everyone for taking part in the discussion.
Regards,
Oleksii
[1] - https://review.openstack.org/#/c/187338
On 6/22/15 13:14, Sean Dague wrote:
On 06/19/2015 08:18 PM, Alec Hothan (ahothan) wrote:
Do we have a good understanding of what is
On 06/19/2015 08:18 PM, Alec Hothan (ahothan) wrote:
Do we have a good understanding of what is expected of zmq wrt rabbitMQ?
Like in what part of the bell curve or use cases would you see it? Or
indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could
handle better?
I have
Alec Hothan (ahothan) wrote:
Do we have a good understanding of what is expected of zmq wrt rabbitMQ?
Like in what part of the bell curve or use cases would you see it? Or
indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could
handle better?
I have tried to find any information
Do we have a good understanding of what is expected of zmq wrt rabbitMQ?
Like in what part of the bell curve or use cases would you see it? Or
indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could
handle better?
I have tried to find any information on very large scale deployment
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-19 17:18:41 -0700:
Do we have a good understanding of what is expected of zmq wrt rabbitMQ?
Like in what part of the bell curve or use cases would you see it? Or
indirectly, where do we see RabbitMQ lacking today that maybe ZMQ could
Flavio Percoco wrote:
95% of deployments use rabbit not because rabbit is the
best solution for all OpenStack problems but because it is the one
that works best now. The lack of support in other drivers caused this,
and as long as this lack of support in such drivers persists, it won't
On 6/1/15, 5:03 PM, Davanum Srinivas
dava...@gmail.commailto:dava...@gmail.com wrote:
fyi, the spec for zeromq driver in oslo.messaging is here:
https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage
.rst,unified
The above spec suggests using the zmq pub/sub/xpub/xsub
On 06/16/2015 08:51 PM, Alec Hothan (ahothan) wrote:
I saw Sean Dague mention in another email that RabbitMQ is used by 95% of
OpenStack users - and therefore does it make sense to invest in ZMQ (legit
question).
I believe it's used by 95% of users because there is as yet no
compelling
On 16/06/15 19:51 +0000, Alec Hothan (ahothan) wrote:
Gordon,
These are all great points for RPC messages (also called CALL in oslo
messaging). There are similar ambiguous contracts for the other types of
messages (CAST and FANOUT).
I am worried about the general lack of interest from the
On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote:
One long standing issue I can see is the fact that the oslo messaging API
documentation is sorely lacking details on critical areas such as API
behavior during fault conditions, load conditions and scale conditions.
I very much agree,
Gordon,
These are all great points for RPC messages (also called CALL in oslo
messaging). There are similar ambiguous contracts for the other types of
messages (CAST and FANOUT).
I am worried about the general lack of interest from the community to fix
this as it looks like most people assume
On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote:
I think you missed "it is not tested in the gate" as a root cause for
some of the ambiguity. Anecdotes and bug reports are super important for
knowing where to invest next, but a test suite would at least establish a
base line and
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-15 11:45:53 -0700:
On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote:
I think you missed "it is not tested in the gate" as a root cause for
some of the ambiguity. Anecdotes and bug reports are super important for
knowing
On 6/13/15 01:55, Clint Byrum wrote:
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-12 13:41:17 -0700:
On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote:
fyi, the spec for zeromq driver in oslo.messaging is here:
Hi, Alec
Thanks for email threads investigation.
I've decided to spend more time to dig into old zmq-related threads too.
Some notes inline.
On 6/12/15 23:41, Alec Hothan (ahothan) wrote:
On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote:
fyi, the spec for zeromq driver in
On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote:
fyi, the spec for zeromq driver in oslo.messaging is here:
https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage
.rst,unified
-- dims
I was about to provide some email comments on the above review off gerrit,
Excerpts from Alec Hothan (ahothan)'s message of 2015-06-12 13:41:17 -0700:
On 6/1/15, 5:03 PM, Davanum Srinivas dava...@gmail.com wrote:
fyi, the spec for zeromq driver in oslo.messaging is here:
https://review.openstack.org/#/c/187338/1/specs/liberty/zmq-patterns-usage
.rst,unified
Date: Wednesday, May 27, 2015 at 3:52 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [oslo.messaging][zeromq] Next step
Hi,
I'll try to address the question about Proxy process.
AFAIK
Subject: Re: [openstack-dev] [oslo.messaging][zeromq] Next step
Hi,
I'll try to address the question about Proxy process.
AFAIK there is no way yet in zmq to bind more than once to a specific port
(e.g. tcp://*:9501).
Apparently we can:
socket1.bind('tcp://node1:9501')
Hi,
I'll try to address the question about Proxy process.
AFAIK there is no way yet in zmq to bind more than once to a specific
port (e.g. tcp://*:9501).
Apparently we can:
socket1.bind('tcp://node1:9501')
socket2.bind('tcp://node2:9501')
but we can not:
socket1.bind('tcp://*:9501')
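The same constraint can be observed with plain TCP sockets, which may make the limitation easier to reproduce locally: binds to distinct addresses can coexist, but a second wildcard bind of an already-bound port is refused.

```python
import socket

def try_bind(addr, port):
    # Attempt a TCP bind; return the open socket on success, None on failure.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, port))
        return s  # keep it open so the port stays taken
    except OSError:
        s.close()
        return None

first = try_bind("", 0)         # wildcard bind to an ephemeral port
port = first.getsockname()[1]   # the port the kernel handed us
second = try_bind("", port)     # second wildcard bind of the same port
print(second is None)           # True: refused with EADDRINUSE
first.close()
```

zmq's `tcp://*:9501` wildcard bind hits the same EADDRINUSE at the OS level, which is why a single proxy process typically owns the well-known port.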
Alec,
Here are the slides:
http://www.slideshare.net/davanum/oslomessaging-new-0mq-driver-proposal
All the 0mq patches to date should be either already merged in trunk
or waiting for review on trunk.
Oleksii, Li Ma,
Can you please address the other questions?
thanks,
Dims
On Tue, May 26, 2015
We are jubilant to announce the release of:
oslo.messaging 1.11.0: Oslo Messaging API
With source available at:
http://git.openstack.org/cgit/openstack/oslo.messaging
For more details, please see the git log history below and:
http://launchpad.net/oslo.messaging/+milestone/1.11.0
Looking at what the next step is following the design summit meeting on
0MQ, as the etherpad does not provide much information.
A few questions:
- would it be possible to have the slides presented (showing the proposed
changes in the 0MQ driver design) to be available somewhere?
- is there a
Hi,
Does anyone use any version of ZeroMQ driver in production deployment?
If you do, please leave your comments in [1], or reply to this letter.
[1] https://review.openstack.org/#/c/171131/
Thanks,
Oleksii Zamiatin
Hi oslo all,
Currently devstack master relies on the 1.8.1 release due to the requirements
freeze (>=1.8.0, <1.9.0); however, the ZeroMQ driver is able to run on the
1.9.0 release. The result is that you cannot deploy the ZeroMQ driver
using devstack master now due to some incompatibility between
oslo.messaging 1.8.1
Hi,
All of these patches are only bug fixes, so that's good for me.
So if others agreed, I can release 1.8.2 with these changes once they
are landed.
We don't have any other changes pending in kilo branch for now.
On 2015-04-09 16:12, Li Ma
Great. I'm just doing some experiments to evaluate the REQ/REP pattern.
It seems that your implementation is complete.
Looking forward to reviewing your updates.
On Mon, Mar 30, 2015 at 4:02 PM, ozamiatin ozamia...@mirantis.com wrote:
Hi,
Sorry for taking so long to reply to the comments on [1].
I'm
Hi,
Sorry for taking so long to reply to the comments on [1].
I'm almost ready to return to the spec with updates.
The main shortcoming of the current zmq-driver implementation is that
it manually implements REQ/REP on top of PUSH/PULL.
This results in:
1. PUSH/PULL is a one-way socket (a reply needs another
Hi all,
I'd like to propose a simple but straightforward method to improve the
stability of the current implementation.
Here's the current implementation:
receiver(PULL(tcp)) -- service(PUSH(tcp))
receiver(PUB(ipc)) -- service(SUB(ipc))
receiver(PUSH(ipc)) -- service(PULL(ipc))
Actually, as
On 23/03/15 09:24 -0400, Doug Hellmann wrote:
Excerpts from Li Ma's message of 2015-03-23 18:23:39 +0800:
Hi all,
During previous threads discussing the zeromq driver, it emerged that a subgroup may
be necessary to exchange knowledge and improve the efficiency of
communication and development. In this subgroup,
Excerpts from Li Ma's message of 2015-03-24 23:31:22 +0800:
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote:
The goal we set at the Kilo summit was to have a group of people
interested in zmq start contributing to the driver, and I had hoped to
the library
On 03/24/2015 10:31 AM, Li Ma wrote:
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote:
The goal we set at the Kilo summit was to have a group of people
interested in zmq start contributing to the driver, and I had hoped to
the library overall. How do we feel that is
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote:
The goal we set at the Kilo summit was to have a group of people
interested in zmq start contributing to the driver, and I had hoped to
the library overall. How do we feel that is going?
That sounds great. I hope so.
By the way, just a note that a general session will be available
for the zeromq driver. I'll provide the general architecture of the
current zeromq driver, pros and cons, potential improvements, use
cases for production.
Topic: Distributed Messaging System for OpenStack at Scale
Link:
Excerpts from ozamiatin's message of 2015-03-24 18:57:25 +0200:
Hi,
+1 for subgroup meeting
Does the separate repository mean a separate library (python package) with
its own release cycle and so on?
Yes, although as an Oslo library it would be subject to our existing
policies about versioning,
From my experience, making fast moving changes is far easier when code is
split out. Changes occur too slowly when integrated.
I'd be +1 on splitting the code out. I expect you will get more done this
way.
Regards,
Eric Windisch
On 24/03/15 11:03 -0500, Ben Nemec wrote:
On 03/24/2015 10:31 AM, Li Ma wrote:
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote:
The goal we set at the Kilo summit was to have a group of people
interested in zmq start contributing to the driver, and I had hoped to
the
Hi,
+1 for subgroup meeting
Does the separate repository mean a separate library (python package) with
its own release cycle and so on?
As I can see, a separate library makes it easy:
1) To support optional (for oslo.messaging) requirements specific to the
zmq driver, like pyzmq, redis and so on
2)
+1 to keep it together.
-- dims
On Tue, Mar 24, 2015 at 12:17 PM, Flavio Percoco fla...@redhat.com wrote:
On 24/03/15 11:03 -0500, Ben Nemec wrote:
On 03/24/2015 10:31 AM, Li Ma wrote:
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com
wrote:
The goal we set at the Kilo
Excerpts from Li Ma's message of 2015-03-23 18:23:39 +0800:
Hi all,
During previous threads discussing the zeromq driver, it emerged that a subgroup may
be necessary to exchange knowledge and improve the efficiency of
communication and development. In this subgroup, we can schedule a
given topic or just
Hi all,
During previous threads discussing the zeromq driver, it emerged that a subgroup may
be necessary to exchange knowledge and improve the efficiency of
communication and development. In this subgroup, we can schedule a
given topic, or just discuss some refactoring stuff or bugs, in an IRC
room at a fixed time.
Regarding bug 1426046 (see below) -- is this just a matter of making the
classes public, or are you thinking the driver interface needs more
thought and solidifying before making something extendable?
Perhaps I can donate a cycle or two to help get this in.
On 2/26/15 10:33 AM, Doug Hellmann wrote:
On Thu, Feb 26, 2015, at 07:24 AM, Boden Russell wrote:
What's the suggested approach for implementing a custom oslo messaging
driver given the existing impl [1] is private?
e.g. I want to provide my own notification messaging driver which adds
functionality atop the existing driver [1].
I don't have a public repo -- have been PoCing using a private gitlab to
date... I figured any interest in the driver impl would come out of this
email discussion.
More than happy to provide my PoC code publicly (after a little
clean-up) if there's an interest.
On 2/26/15 12:01 PM, Sandy Walsh
Sent: Thursday, February 26, 2015 2:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo.messaging] extending notification
MessagingDriver
Thanks for filing the bug report...
My driver implementation effectively allows you to filter
Thanks for filing the bug report...
My driver implementation effectively allows you to filter on
notification events and multicast matches to a given list of topics.
I've been calling it a messaging multicast notification driver, and thus
the plugin stevedore entry point I've called
On Thu, Feb 26, 2015, at 01:41 PM, Boden Russell wrote:
Thanks for filing the bug report...
My driver implementation effectively allows you to filter on
notification events and multicast matches to a given list of topics.
I've been calling it a messaging multicast notification driver
What's the suggested approach for implementing a custom oslo messaging
driver given the existing impl [1] is private?
e.g. I want to provide my own notification messaging driver which adds
functionality atop the existing driver [1]. This can obviously be done
by extending the
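For context, out-of-tree notification drivers are normally registered as stevedore plugins under the `oslo.messaging.notify.drivers` entry-point namespace; a sketch of the packaging side might look like this (the package, module, and driver names here are hypothetical):

```ini
# setup.cfg of the hypothetical plugin package
[entry_points]
oslo.messaging.notify.drivers =
    multicast = my_plugin.notify:MulticastDriver
```

With that installed, `driver = multicast` in the notification configuration would load the custom class; whether the base driver classes it extends are public is exactly the question this thread (and bug 1426046) is about.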
Is it possible to have multiple oslo messaging notification listeners
using different executors on the same target?
For example, I want to create multiple notification listeners [1], each
using a different executor for the same set of targets (e.g.
glance/notifications).
When I try this [2], only
Hello,
I want to use oslo.messaging.RPCClient.call() to invoke a
method on multiple servers, but not all of them. Can this be done and how? I
read the code documentation (client.py and target.py). I only saw either the
call used for one server at a time, or for all of them using
On Mon, Feb 9, 2015, at 02:40 PM, Gravel, Julie Chongcharoen wrote:
Hello,
I want to use oslo.messaging.RPCClient.call() to invoke a
method on multiple servers, but not all of them. Can this
be done and how? I read the code documentation
On 02/09/2015 04:04 PM, Doug Hellmann wrote:
On Mon, Feb 9, 2015, at 02:40 PM, Gravel, Julie Chongcharoen wrote:
Hello,
I want to use oslo.messaging.RPCClient.call() to invoke a
method on multiple servers, but not all of them. Can this
be
On Monday, February 9, 2015, Gravel, Julie Chongcharoen julie.gra...@hp.com
wrote:
Hello,
I want to use oslo.messaging.RPCClient.call() to invoke a
method on multiple servers, but not all of them. Can this be done and how?
I read the code documentation (client.py and
Any news here? A per-socket solution is a conservative one that
makes the zeromq driver work with multiple workers. Neutron-server has
API workers and RPC workers. I'm not sure a per-driver solution is applicable. I
will try to figure it out soon.
On Fri, Jan 23, 2015 at 7:53 PM, Oleksii Zamiatin