Re: [openstack-dev] [oslo] No complaints about rabbitmq SSL problems: could we have this in the logs?

2018-11-05 Thread Ken Giusti
Hi Mohammed,

What release of openstack are you using?  (ocata, pike, etc)

Also just to confirm my understanding:  you do see the SSL connections come
up, but after some time they 'hang' - what do you mean by 'hang'?  Do the
connections drop?  Or do the connections remain up but you start seeing
messages (RPC calls) time out?

thanks,

On Wed, Oct 31, 2018 at 9:40 AM Mohammed Naser  wrote:

> For what it’s worth: I ran into the same issue.  I think the problem lies
> a bit deeper, likely in kombu itself: when debugging I saw that
> oslo.messaging tried to connect and then hung.
>
> Sent from my iPhone
>
> > On Oct 31, 2018, at 2:29 PM, Thomas Goirand  wrote:
> >
> > Hi,
> >
> > It took me a long, long time to figure out that my SSL setup was wrong
> > when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo
> > (or heat itself) never warned me that something was wrong; I just got
> > nothing working, and no logs at all.
> >
> > I'm sure I wouldn't be the only one happy to have this type of
> > problem yelled out loud in the logs. Right now, it does work if I
> > turn off SSL, though I'm still not sure what's wrong in my setup, and
> > I'm given no clue whether the issue is on the rabbitmq-server or the
> > client side (i.e. heat, in my current case).
> >
> > Just a wishlist... :)
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> >


-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [oslo] No complaints about rabbitmq SSL problems: could we have this in the logs?

2018-11-02 Thread Ken Giusti
Hi,

There does seem to be something currently wonky with SSL &
oslo.messaging.   I'm looking into it now.

And there's this recently reported issue:

https://bugs.launchpad.net/oslo.messaging/+bug/1800957

In the above bug something seems to have broken SSL between ocata and
pike.  The current suspected change is a patch that fixed a threading issue.

Stay tuned...
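
In the meantime, for anyone double-checking their client-side setup, a
minimal rabbit-over-SSL config looks roughly like this (a sketch - option
names are from recent oslo.messaging releases; hosts and paths are made up):

    [DEFAULT]
    transport_url = rabbit://stackrabbit:secret@rabbit.example.com:5671/

    [oslo_messaging_rabbit]
    ssl = true
    ssl_ca_file = /etc/ssl/certs/rabbit-ca.pem

Note the 5671 port in the transport_url - pointing an SSL-enabled client at
the non-SSL port (5672) is one easy way to end up with the sort of silent
hang described in this thread.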


On Thu, Nov 1, 2018 at 3:53 AM Thomas Goirand  wrote:

> On 10/31/18 2:40 PM, Mohammed Naser wrote:
> > For what it’s worth: I ran into the same issue.  I think the problem
> lies a bit deeper, likely in kombu itself: when debugging I saw
> that oslo.messaging tried to connect and then hung.
> >
> > Sent from my iPhone
> >
> >> On Oct 31, 2018, at 2:29 PM, Thomas Goirand  wrote:
> >>
> >> Hi,
> >>
> >> It took me a long, long time to figure out that my SSL setup was wrong
> >> when trying to connect Heat to rabbitmq over SSL. Unfortunately, Oslo
> >> (or heat itself) never warned me that something was wrong; I just got
> >> nothing working, and no logs at all.
> >>
> >> I'm sure I wouldn't be the only one happy to have this type of
> >> problem yelled out loud in the logs. Right now, it does work if I
> >> turn off SSL, though I'm still not sure what's wrong in my setup, and
> >> I'm given no clue whether the issue is on the rabbitmq-server or the
> >> client side (i.e. heat, in my current case).
> >>
> >> Just a wishlist... :)
> >> Cheers,
> >>
> >> Thomas Goirand (zigo)
>
> I've opened a bug here:
>
> https://bugs.launchpad.net/oslo.messaging/+bug/1801011
>
> Cheers,
>
> Thomas Goirand (zigo)
>


-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [Oslo][nova][openstack-ansible] About Rabbitmq warning problems on nova-compute side

2018-11-02 Thread Ken Giusti
Hi Gokhan,

There's been a flurry of folks reporting issues recently related to pike
and SSL.   See:

https://bugs.launchpad.net/oslo.messaging/+bug/1800957
and
https://bugs.launchpad.net/oslo.messaging/+bug/1801011

I'm currently working on this - no status yet.

As a test would it be possible to try disabling SSL in your configuration
to see if the problem persists?
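
That is, something along these lines in nova.conf (a sketch - substitute
your real broker credentials):

    [DEFAULT]
    transport_url = rabbit://nova:secret@controller:5672/

    [oslo_messaging_rabbit]
    ssl = false

If the warnings go away with SSL disabled, that points at the same SSL
regression tracked in the bugs above.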


On Thu, Nov 1, 2018 at 7:53 AM Gökhan IŞIK (BİLGEM BTE) <
gokhan.i...@tubitak.gov.tr> wrote:

> Hi folks,
>
> I have problems with rabbitmq on the nova-compute side. I see lots of
> warnings in the log file like “client unexpectedly closed TCP
> connection”. [1]
>
> I have an HA OpenStack environment on Ubuntu 16.04.5, installed
> with the OpenStack-Ansible project. My OpenStack environment version is Pike.
> My environment consists of 3 controller nodes, 23 compute nodes and 1 log
> node. The cinder-volume service is installed on the compute nodes and I am
> using NetApp storage.
>
> I have tried lots of nova configs on the oslo.messaging and rabbitmq side,
> but they did not resolve this problem. My latest configs are below:
>
> rabbitmq.config is : http://paste.openstack.org/show/733767/
>
> nova.conf is: http://paste.openstack.org/show/733768/
>
> Services versions are : http://paste.openstack.org/show/733769/
>
>
> Can you share your experiences on the rabbitmq side? How can I resolve
> these warnings on the nova-compute side? What would you advise?
>
>
> [1] http://paste.openstack.org/show/733766/


-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [mistral][oslo][messaging] Removing “blocking” executor from oslo.messaging

2018-10-19 Thread Ken Giusti
Hi Renat,
After discussing this a bit with Ben on IRC we're going to push the removal
off to T milestone 1.

I really like Ben's idea re: adding a blocking entry to your project's
setup.cfg file.  We can remove the explicit check for blocking in
oslo.messaging so you won't get an annoying warning if you want to load
blocking on your own.

Let me know what you think, thanks.
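
For reference, the setup.cfg entry Ben sketched (quoted below) would look
something like this in mistral:

    [entry_points]
    oslo.messaging.executors =
        blocking = futurist:SynchronousExecutor

That keeps the 'blocking' name resolvable through the entry point even
after oslo.messaging drops its own registration of it.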

On Fri, Oct 19, 2018 at 12:02 AM Renat Akhmerov 
wrote:

> Hi,
>
>
> @Ken, I understand your considerations. I get that. I’m only asking not to
> remove it *for now*. And yes, if you think its use should be discouraged,
> that’s totally fine. But practically, it’s been the only reliable
> option for Mistral so far. That may be our fault, I have to admit, because
> we weren’t able to make it work well with other executor types, but we’ll
> try to fix that.
>
> By the way, I was playing with different options yesterday and it seems
> that setting the executor to “threading” and the
> “executor_thread_pool_size” property to 1 behaves the same way as
> “blocking”. So maybe that’s an option for us too, even if “blocking” is
> completely removed. But I would still be in favour of having some extra
> time to prove that with thorough testing.
>
> @Ben, including the executor via setup.cfg also looks OK to me. I see no
> issues with this approach.
>
>
> Thanks
>
> Renat Akhmerov
> @Nokia
> On 18 Oct 2018, 23:35 +0700, Ben Nemec , wrote:
>
>
>
> On 10/18/18 9:59 AM, Ken Giusti wrote:
>
> Hi Renat,
>
> The biggest issue with the blocking executor (IMHO) is that it blocks
> the protocol I/O while RPC processing is in progress.  This increases
> the likelihood that protocol processing will not get done in a timely
> manner and things start to fail in weird ways.  These failures are
> timing-related and typically hard to reproduce or root-cause.  This
> isn't something we can fix, as blocking is the nature of the executor.
>
> If we are to leave it in we'd really want to discourage its use.
>
>
> Since it appears the actual executor code lives in futurist, would it be
> possible to remove the entrypoint for blocking from oslo.messaging and
> have mistral just pull it in with their setup.cfg? Seems like they
> should be able to add something like:
>
> oslo.messaging.executors =
> blocking = futurist:SynchronousExecutor
>
> to their setup.cfg to keep it available to them even if we drop it from
> oslo.messaging itself. That seems like a good way to strongly discourage
> use of it while still making it available to projects that are really
> sure they want it.
>
>
> However I'm ok with leaving it available if the policy for using
> blocking is 'use at your own risk', meaning that bug reports may have to
> be marked 'won't fix' if we have reason to believe that blocking is at
> fault.  That implies removing 'blocking' as the default executor value
> in the API and having applications explicitly choose it.  And we keep
> the deprecation warning.
>
> We could perhaps implement time duration checks around the executor
> callout and log a warning if the executor blocked for an extended amount
> of time (extended=TBD).
>
> Other opinions so we can come to a consensus?
>
>
> On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov  <mailto:renat.akhme...@gmail.com>> wrote:
>
> Hi Oslo Team,
>
> Can we retain “blocking” executor for now in Oslo Messaging?
>
>
> Some background..
>
> For a while we had to use Oslo Messaging with the “blocking” executor in
> Mistral because of an incompatibility between the MySQL driver and green
> threads when choosing the “eventlet” executor. Under certain conditions
> we would get deadlocks between green threads. Some time ago we
> switched to the PyMySQL driver, which is eventlet-friendly, and did
> a number of tests that showed that we could safely switch to the
> “eventlet” executor (with that driver), so we introduced a new option
> in Mistral where we could choose an executor in Oslo Messaging. The
> corresponding bug is [1].
>
> The issue is that we recently found that not everything actually
> works as expected when using the combination of PyMySQL + the “eventlet”
> executor. We also tried the “threading” executor and the system *seems*
> to work with it, but surprisingly the performance is much worse.
>
> Given all of that we’d like to ask the Oslo Team not to completely remove
> the “blocking” executor for now, if that’s possible. We have
> a strong motivation to switch to “eventlet” for other reasons
> (parallelism => better performance etc.) but it seems we need some
> time to make the transition smooth.
>
>
> [1] https://bugs.launchpad.net/mistral/+bug/1696469
>
>

Re: [openstack-dev] [mistral][oslo][messaging] Removing “blocking” executor from oslo.messaging

2018-10-18 Thread Ken Giusti
Hi Renat,

The biggest issue with the blocking executor (IMHO) is that it blocks the
protocol I/O while RPC processing is in progress.  This increases the
likelihood that protocol processing will not get done in a timely manner
and things start to fail in weird ways.  These failures are timing-related
and typically hard to reproduce or root-cause.  This isn't something
we can fix, as blocking is the nature of the executor.

If we are to leave it in we'd really want to discourage its use.

However I'm ok with leaving it available if the policy for using blocking
is 'use at your own risk', meaning that bug reports may have to be marked
'won't fix' if we have reason to believe that blocking is at fault.  That
implies removing 'blocking' as the default executor value in the API and
having applications explicitly choose it.  And we keep the deprecation
warning.

We could perhaps implement time duration checks around the executor callout
and log a warning if the executor blocked for an extended amount of time
(extended=TBD).
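
As a rough sketch (not actual oslo.messaging code - the wrapper name and
the threshold are made up), it could be as simple as:

    import logging
    import time

    LOG = logging.getLogger(__name__)

    _BLOCKING_WARN_SEC = 1.0  # the 'extended=TBD' value

    def _timed_dispatch(dispatch, incoming):
        # Wrap the executor callout and complain if it blocked too long.
        start = time.monotonic()
        try:
            return dispatch(incoming)
        finally:
            elapsed = time.monotonic() - start
            if elapsed > _BLOCKING_WARN_SEC:
                LOG.warning('RPC dispatch blocked protocol I/O for '
                            '%.3f seconds', elapsed)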

Other opinions so we can come to a consensus?


On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov 
wrote:

> Hi Oslo Team,
>
> Can we retain “blocking” executor for now in Oslo Messaging?
>
>
> Some background..
>
> For a while we had to use Oslo Messaging with the “blocking” executor in
> Mistral because of an incompatibility between the MySQL driver and green
> threads when choosing the “eventlet” executor. Under certain conditions we
> would get deadlocks between green threads. Some time ago we switched to the
> PyMySQL driver, which is eventlet-friendly, and did a number of tests that
> showed that we could safely switch to the “eventlet” executor (with that
> driver), so we introduced a new option in Mistral where we could choose an
> executor in Oslo Messaging. The corresponding bug is [1].
>
> The issue is that we recently found that not everything actually works as
> expected when using the combination of PyMySQL + the “eventlet” executor.
> We also tried the “threading” executor and the system *seems* to work with
> it, but surprisingly the performance is much worse.
>
> Given all of that we’d like to ask the Oslo Team not to completely remove
> the “blocking” executor for now, if that’s possible. We have a strong
> motivation to switch to “eventlet” for other reasons (parallelism => better
> performance etc.) but it seems we need some time to make the transition
> smooth.
>
>
> [1] https://bugs.launchpad.net/mistral/+bug/1696469
>
>
> Thanks
>
> Renat Akhmerov
> @Nokia


-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0

2018-10-09 Thread Ken Giusti
On Tue, Oct 9, 2018 at 12:30 PM Doug Hellmann  wrote:

> Ken Giusti  writes:
>
> > On Tue, Oct 9, 2018 at 11:56 AM Doug Hellmann 
> wrote:
> >
> >> Matthew Thode  writes:
> >>
> >> > On 18-10-09 11:12:30, Doug Hellmann wrote:
> >> >> Matthew Thode  writes:
> >> >>
> >> >> > several projects have had problems with the new release, some have
> >> ways
> >> >> > of working around it, and some do not.  I'm sending this just to
> raise
> >> >> > the issue and allow a place to discuss solutions.
> >> >> >
> >> >> > Currently there is a review proposed to blacklist 9.0.0, but if
> this
> >> is
> >> >> > going to still be an issue somehow in further releases we may need
> >> >> > another solution.
> >> >> >
> >> >> > https://review.openstack.org/#/c/608835/
> >> >> >
> >> >> > --
> >> >> > Matthew Thode (prometheanfire)
> >> >> >
> >>
> >> >>
> >> >> Do you have links to the failure logs or bug reports or something?
> If I
> >> >> wanted to help I wouldn't even know where to start.
> >> >>
> >> >
> >> >
> >>
> http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz
> >>
> >> These failures look like we should add a proper API to oslo.messaging to
> >> set the notification and rpc backends for testing. The configuration
> >> options are *not* part of the API of the library.
> >>
> >> There is already an oslo_messaging.conffixture module with a fixture
> >> class, but it looks like it defaults to rabbit. Maybe someone wants to
> >> propose a patch to make that a parameter to the constructor?
> >>
> >
> > oslo.messaging's conffixture uses whatever the config default for
> > transport_url is unless the test
> > specifically overrides it by setting the transport_url attribute.
> > The o.m. unit tests' base test class sets conffixture.transport_url to
> > "fake:/" to use the fake in-memory driver.
> > That's the existing practice (I believe it's used like that outside of
> o.m.)
>
> OK, so it sounds like the fixture is relying on the configuration to be
> set up in advance, and that's the thing we need to change. We don't want
> users outside of the library to set up tests by using the configuration
> options, right?
>

That's the intent of ConfFixture it seems - provide a wrapper API so tests
don't have to monkey directly with the config.

How about this:

  https://review.openstack.org/609063
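
The idea being that a test could then do something like this (a rough
sketch of the proposed API - see the review for the actual change):

    self.messaging_conf = self.useFixture(
        conffixture.ConfFixture(cfg.CONF, transport_url='fake:/'))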


>
> Doug
>


-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [oslo][glance][cinder][keystone][requirements] blocking oslo.messaging 9.0.0

2018-10-09 Thread Ken Giusti
On Tue, Oct 9, 2018 at 11:56 AM Doug Hellmann  wrote:

> Matthew Thode  writes:
>
> > On 18-10-09 11:12:30, Doug Hellmann wrote:
> >> Matthew Thode  writes:
> >>
> >> > several projects have had problems with the new release, some have
> ways
> >> > of working around it, and some do not.  I'm sending this just to raise
> >> > the issue and allow a place to discuss solutions.
> >> >
> >> > Currently there is a review proposed to blacklist 9.0.0, but if this
> is
> >> > going to still be an issue somehow in further releases we may need
> >> > another solution.
> >> >
> >> > https://review.openstack.org/#/c/608835/
> >> >
> >> > --
> >> > Matthew Thode (prometheanfire)
> >> >
> >>
> >> Do you have links to the failure logs or bug reports or something? If I
> >> wanted to help I wouldn't even know where to start.
> >>
> >
> >
> http://logs.openstack.org/21/607521/2/check/cross-cinder-py35/e15722e/testr_results.html.gz
>
> These failures look like we should add a proper API to oslo.messaging to
> set the notification and rpc backends for testing. The configuration
> options are *not* part of the API of the library.
>
> There is already an oslo_messaging.conffixture module with a fixture
> class, but it looks like it defaults to rabbit. Maybe someone wants to
> propose a patch to make that a parameter to the constructor?
>

oslo.messaging's conffixture uses whatever the config default for
transport_url is unless the test
specifically overrides it by setting the transport_url attribute.
The o.m. unit tests' base test class sets conffixture.transport_url to
"fake:/" to use the fake in-memory driver.
That's the existing practice (I believe it's used like that outside of o.m.)
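
In code, that existing pattern looks like this (a sketch modeled on the
o.m. unit test base class; the oslotest import is illustrative):

    from oslo_config import cfg
    from oslo_messaging import conffixture
    from oslotest import base

    class TestCase(base.BaseTestCase):
        def setUp(self):
            super(TestCase, self).setUp()
            self.messaging_conf = self.useFixture(
                conffixture.ConfFixture(cfg.CONF))
            # Override whatever the configured default is with the
            # in-memory fake driver.
            self.messaging_conf.transport_url = 'fake:/'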


>
> >
> http://logs.openstack.org/21/607521/2/check/cross-glance-py35/e2161d7/testr_results.html.gz
>
> These failures should be fixed by releasing the patch that Mehdi
> provided that ensures there is a valid default transport configured.
>
> >
> http://logs.openstack.org/21/607521/2/check/cross-keystone-py35/908a1c2/testr_results.html.gz
>
> Lance has already described these as mocking implementation details of
> the library. I expect we'll need someone with keystone experience to
> work out what the best solution is to do there.
>
> >
> > --
> > Matthew Thode (prometheanfire)
> >
>


-- 
Ken Giusti  (kgiu...@gmail.com)


[openstack-dev] [oslo][tacker][daisycloud-core][meteos] Removal of rpc_backend config opt from oslo.messaging

2018-09-18 Thread Ken Giusti
Thanks to work done by Steve Kowalik we're ready to remove the old
rpc_backend transport configuration option that has been deprecated since
mid 2016.  This removal involves changes to the oslo.messaging.ConfFixture
as well.

Steve has provided patches to those projects affected by these changes.
Almost all projects have merged these patches.

There are a few projects - included in the subject line - where the
necessary patches have not yet landed.  If you're a committer on one of
these projects please make an effort to review the patches proposed for
your project:

https://review.openstack.org/#/q/topic:bug/1712399+status:open
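
For deployments still carrying the old option, the migration is a config
change along these lines (a sketch - substitute your real broker URL):

    # before (deprecated since mid 2016)
    [DEFAULT]
    rpc_backend = rabbit

    # after
    [DEFAULT]
    transport_url = rabbit://user:secret@host:5672/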

Our goal is to land the removal next week.

thanks

-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [goals][python3] mixed versions?

2018-09-17 Thread Ken Giusti
On Thu, Sep 13, 2018 at 7:39 PM Doug Hellmann  wrote:

> Excerpts from Jim Rollenhagen's message of 2018-09-13 12:08:08 -0600:
> > On Wed, Sep 12, 2018 at 2:28 PM, Doug Hellmann 
> > wrote:
> >
> > > Excerpts from Doug Hellmann's message of 2018-09-12 12:04:02 -0600:
> > > > Excerpts from Clark Boylan's message of 2018-09-12 10:44:55 -0700:
> > > > > On Wed, Sep 12, 2018, at 10:23 AM, Jim Rollenhagen wrote:
> > > > > > The process of operators upgrading Python versions across their
> > > fleet came
> > > > > > up this morning. It's fairly obvious that operators will want to
> do
> > > this in
> > > > > > a rolling fashion.
> > > > > >
> > > > > > Has anyone considered doing this in CI? For example, running
> > > multinode
> > > > > > grenade with python 2 on one node and python 3 on the other node.
> > > > > >
> > > > > > Should we (openstack) test this situation, or even care?
> > > > > >
> > > > >
> > > > > This came up in a Vancouver summit session (the python3 one I
> think).
> > > General consensus there seemed to be that we should have grenade jobs
> that
> > > run python2 on the old side and python3 on the new side and test the
> update
> > > from one to another through a release that way. Additionally there was
> > > thought that the nova partial job (and similar grenade jobs) could
> hold the
> > > non upgraded node on python2 and that would talk to a python3 control
> plane.
> > > > >
> > > > > I haven't seen or heard of anyone working on this yet though.
> > > > >
> > > > > Clark
> > > > >
> > > >
> > > > IIRC, we also talked about not supporting multiple versions of
> > > > python on a given node, so all of the services on a node would need
> > > > to be upgraded together.
> > > >
> > > > Doug
> > >
> > > I spent a little time talking with the QA team about setting up
> > > this job, and Attila pointed out that we should think about what
> > > exactly we think would break during a 2-to-3 in-place upgrade like
> > > this.
> > >
> > > Keeping in mind that we are still testing initial installation under
> > > both versions and upgrades under python 2, do we have any specific
> > > concerns about the python *version* causing upgrade issues?
> > >
> >
> > A specific example brought up in the ironic room was the way we encode
> > exceptions in oslo.messaging for transmitting over RPC. I know that we've
> > found encoding bugs in that in the past, and one can imagine that RPC
> > between a service running on py2 and a service running on py3 could have
> > similar issues.
>
> Mixing python 2 and 3 components of the same service across nodes
> does seem like an interesting case. I wonder if it's something we
> could build a functional test job in oslo.messaging for, though,
> without having to test every service separately. I'd be happy if
> someone did that.
>
>
Currently that's a hole in the oslo.messaging tests.  I've opened a work
item to address this in launchpad:
https://bugs.launchpad.net/oslo.messaging/+bug/1792977


> > It's definitely edge cases that we'd be catching here (if any), so I'm
> > personally fine with assuming it will just work. But I wanted to pose the
> > question to the list, as we agreed this isn't only an ironic problem.
>
> Yes, definitely. I think it's likely to be a bit of work to set up the
> jobs and run them for all services, which is why I'm trying to
> understand if it's really needed. Thinking through the cases on the list
> is a good way to get folks to poke holes in any assertions, so I
> appreciate that you started the thread and that everyone is
> participating.
>
> Doug
>


-- 
Ken Giusti  (kgiu...@gmail.com)


Re: [openstack-dev] [oslo] proposing Moisés Guimarães for oslo.config core

2018-08-08 Thread Ken Giusti
On Wed, Aug 8, 2018 at 9:19 AM Doug Hellmann  wrote:
>
> Excerpts from Doug Hellmann's message of 2018-08-01 09:27:09 -0400:
> > Moisés Guimarães (moguimar) did quite a bit of work on oslo.config
> > during the Rocky cycle to add driver support. Based on that work,
> > and a discussion we have had since then about general cleanup needed
> > in oslo.config, I think he would make a good addition to the
> > oslo.config review team.
> >
> > Please indicate your approval or concerns with +1/-1.
> >
> > Doug
>
> Normally I would have added moguimar to the oslo-config-core team
> today, after a week's wait. Funny story, though. There is no
> oslo-config-core team.
>
> oslo.config is one of a few of our libraries that we never set up with a
> separate review team. It is managed by oslo-core. We could set up a new
> review team for that library, but after giving it some thought I
> realized that *most* of the libraries are fairly stable, our team is
> pretty small, and Moisés is a good guy so maybe we don't need to worry
> about that.
>
> I spoke with Moisés, and he agreed to be part of the larger core team.
> He pointed out that the next phase of the driver work is going to happen
> in castellan, so it would be useful to have another reviewer there. And
> I'm sure we can trust him to be careful with reviews in other repos
> until he learns his way around.
>
> So, I would like to amend my original proposal and suggest that we add
> Moisés to the oslo-core team.
>
> Please indicate support with +1 or present any concerns you have. I
> apologize for the confusion on my part.
>
> Doug
>

+1

-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread Ken Giusti
+1!

On Fri, Aug 3, 2018 at 12:58 PM, Ben Nemec  wrote:
> Hi,
>
> Zane has been doing some good work in oslo.service recently and I would like
> to add him to the core team.  I know he's got a lot on his plate already,
> but he has taken the time to propose and review patches in oslo.service and
> has demonstrated an understanding of the code.
>
> Please respond with +1 or any concerns you may have.  Thanks.
>
> -Ben
>



-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [heat][ci][infra][aodh][telemetry] telemetry test broken on oslo.messaging stable/queens

2018-06-11 Thread Ken Giusti
Updated subject to include [aodh] and [telemetry]

On Tue, Jun 5, 2018 at 11:41 AM, Doug Hellmann  wrote:
> Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400:
>> Hi,
>>
>> The telemetry integration test for oslo.messaging has started failing
>> on the stable/queens branch [0].
>>
>> A quick review of the logs points to a change in heat-tempest-plugin
>> that is incompatible with the version of gabbi from queens upper
>> constraints (1.40.0) [1][2].
>>
>> The job definition [3] includes required-projects that do not have
>> stable/queens branches - including heat-tempest-plugin.
>>
>> My question - how do I prevent this job from breaking when these
>> unbranched projects introduce changes that are incompatible with
>> upper-constraints for a particular branch?
>
> Aren't those projects co-gating on the oslo.messaging test job?
>
> How are the tests working for heat's stable/queens branch? Or telemetry?
> (whichever project is pulling in that tempest repo)
>

I've run the stable/queens branches of both Aodh [1] and Heat [2] - both failed.

Though the heat failure is different from what we're seeing on
oslo.messaging [3],
the same warning about gabbi versions is there [4].

However the Aodh failure is exactly the same as the oslo.messaging one
[5] - this makes sense since the oslo.messaging test is basically
running the same telemetry-tempest-plugin test.

So this isn't something unique to oslo.messaging - the telemetry
integration test is busted in stable/queens.

I'm going to mark these tests as non-voting on oslo.messaging's queens
branch for now so we can land some pending patches.


[1] https://review.openstack.org/#/c/574306/
[2] https://review.openstack.org/#/c/574311/
[3] 
http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/job-output.txt.gz#_2018-06-11_17_30_51_106223
[4] 
http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/logs/devstacklog.txt.gz#_2018-06-11_17_09_39_691
[5] 
http://logs.openstack.org/06/574306/1/check/telemetry-dsvm-integration/0a9620a/job-output.txt.gz#_2018-06-11_16_53_33_982143



>>
>> I've tried to use override-checkout in the job definition, but that
>> seems a bit hacky in this case since the tagged versions don't appear
>> to work and I've resorted to a hardcoded ref [4].
>>
>> Advice appreciated, thanks!
>>
>> [0] https://review.openstack.org/#/c/567124/
>> [1] 
>> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624
>> [2] 
>> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332
>> [3] 
>> https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250
>> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml
>



-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens

2018-06-05 Thread Ken Giusti
Hi,

The telemetry integration test for oslo.messaging has started failing
on the stable/queens branch [0].

A quick review of the logs points to a change in heat-tempest-plugin
that is incompatible with the version of gabbi from queens upper
constraints (1.40.0) [1][2].

The job definition [3] includes required-projects that do not have
stable/queens branches - including heat-tempest-plugin.

My question - how do I prevent this job from breaking when these
unbranched projects introduce changes that are incompatible with
upper-constraints for a particular branch?

I've tried to use override-checkout in the job definition, but that
seems a bit hacky in this case since the tagged versions don't appear
to work and I've resorted to a hardcoded ref [4].
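
The hack currently looks roughly like this in .zuul.yaml (a sketch - the
actual pinned ref is in [4]):

    - job:
        name: oslo.messaging-telemetry-dsvm-integration-rabbit
        required-projects:
          - name: openstack/heat-tempest-plugin
            # no stable/queens branch exists, so pin a known-good ref
            override-checkout: <known-good-ref>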

Advice appreciated, thanks!

[0] https://review.openstack.org/#/c/567124/
[1] 
http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624
[2] 
http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332
[3] 
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250
[4] https://review.openstack.org/#/c/572193/2/.zuul.yaml
-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] [Oslo][Nova][Sahara][Tempest][Cinder][Magnum] Removing broken tox missing requirements tests

2018-04-30 Thread Ken Giusti
Folks,

Here in Oslo land a number of projects define a tox test for missing
dependencies. These tests are based on a tool - pip-check-reqs - that
no longer functions under the latest release of pip.  The project's
upstream github repo hasn't had any commit activity in a year and
appears to no longer be maintained.

See my previous email about this tool:
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129697.html

In lieu of a suitable replacement, I've started removing the broken
tox tests from the oslo project to prevent anyone else having that
"Hmm - why doesn't this test pass?" moment I hit last week.

I've created a epad that lists the projects that define tox tests
based on this tool:

https://etherpad.openstack.org/p/pip_(missing|check)_reqs

There are other non-Oslo projects - Nova, Cinder, etc - that may want
to also remove that test. See the epad for details.

I've started patches for a couple of projects, but if anyone is
willing to help out please use the epad so we don't step on each
other's toes.

thanks,

-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] Catching missing or stale package requirements in requirements.txt

2018-04-23 Thread Ken Giusti
Hi Folks,

Some of the Oslo libraries have a tox test that does the above [0].
This ensures that our requirements.txt file is kept current with the
code.

This test uses a tool called pip_check_reqs [1].  Unfortunately this
tool is not compatible with pip version 10, and it appears as if the
github project hasn't seen any development activity in the last 2
years.  Seems unlikely that pip 10 support will be added anytime soon.
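
For context, the tox environment in question [0] is roughly of this shape
(paraphrased, not copied verbatim):

    [testenv:pip-missing-reqs]
    deps = pip_check_reqs
    commands = pip-missing-reqs -d --ignore-module=oslo_messaging* oslo_messaging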

Can anyone recommend a suitable alternative to the pip_check_reqs tool?

Thanks in advance,

[0] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/tox.ini#n116
[1] https://github.com/r1chardj0n3s/pip-check-reqs
-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] [all][oslo] Notice to users of the ZeroMQ transport in oslo.messaging

2018-03-26 Thread Ken Giusti
Folks,

It's been over a year since the last commit was made to the ZeroMQ
driver in oslo.messaging.  It is at the point where some of the
related unit tests are beginning to fail due to bit rot.  None of the
current oslo.messaging contributors have a good enough understanding
of the codebase to effectively fix it.

Personally I'm not sure the driver will work in production at all.

Given this, it was decided in Dublin that the ZeroMQ driver no longer
meets the official policy for in-tree driver support [0] and will be
deprecated in Rocky.  However, it would be insincere for the team to
give the impression that the driver is maintained for the normal
2-cycle deprecation process.  Therefore the driver code will be removed
in 'S'.

The ZeroMQ driver is the largest body of code of any driver in the
oslo.messaging repo, weighing in at over 5k lines of code.  For
comparison, the rabbitmq kombu driver consists of only about 2K lines
of code.  If any individuals are willing to commit to ownership of
this codebase and keep the driver compliant with policy (see [0]),
please follow up with bnemec or myself (kgiusti) on #openstack-oslo.

Thanks,


[0] 
https://docs.openstack.org/oslo.messaging/latest/contributor/supported-messaging-drivers.html


-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] [oslo][all] Deprecation Notice: Pika driver for oslo.messaging

2018-03-21 Thread Ken Giusti
Folks,

Last year at the Boston summit the Oslo team decided to deprecate
support for the Pika transport in oslo.messaging with removal planned
for Rocky [0].

This was announced on the operators list last May [1].

No objections have been raised to date. We're not aware of any
deployments using this transport and its
removal is not anticipated to affect anyone.

This is notice that the removal is currently underway [2].

Thanks,

[0] 
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-May/013579.html
[2] https://review.openstack.org/#/c/536960/

-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [oslo] Oslo PTG Summary

2018-03-13 Thread Ken Giusti
Hi Doug,

Andy updated the etherpad [0] with a new link [1].
Holler if it's still broken...


[0] https://etherpad.openstack.org/p/oslo-ptg-rocky
[1] 
https://docs.google.com/presentation/d/1PWJAGQohAvlwod4gMTp6u1jtZT1cuaE-whRmnV8uiMM/edit?usp=sharing

On Mon, Mar 12, 2018 at 11:54 AM, Doug Hellmann  wrote:
> I can’t see
>
> https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p
>
>
>
> On Mar 12, 2018, at 11:39 AM, Ken Giusti  wrote:
>
> Hi Josh - I'm able to view all of them, but I probably have special
> google powers ;)
>
> Which links are broken for you?
>
> thanks,
>
> On Thu, Mar 8, 2018 at 3:53 PM, Joshua Harlow  wrote:
>
>
> Can we get some of those doc links opened.
>
> I am getting 'You need permission to access this published document.' for a
> few of them :(
>
>
> Ben Nemec wrote:
>
>
> Hi,
>
> Here's my summary of the discussions we had in the Oslo room at the PTG.
> Please feel free to reply with any additions if I missed something or
> correct anything I've misrepresented.
>
> oslo.config drivers for secret management
> -----------------------------------------
>
> The oslo.config implementation is in progress, while the Castellan
> driver still needs to be written. We want to land this early in Rocky as
> it is a significant change in architecture for oslo.config and we want
> it to be well-exercised before release.
>
> There are discussions with the TripleO team around adding support for
> this feature to its deployment tooling and there will be a functional
> test job for the Castellan driver with Custodia.
>
> There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600
> UTC for discussion of this feature.
>
> oslo.config driver implementation: https://review.openstack.org/#/c/513844
> spec:
>
> https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html
>
> Custodia key management support for Castellan:
> https://review.openstack.org/#/c/515190/
>
> "stable" libraries
> ------------------
>
> Some of the Oslo libraries are in a mature state where there are very
> few, if any, meaningful changes to them. With the removal of the
> requirements sync process in Rocky, we may need to change the release
> process for these libraries. My understanding was that there were no
> immediate action items for this, but it was something we need to be
> aware of.
>
> dropping support for mox3
> -------------------------
>
> There was some concern that no one from the Oslo team is actually in a
> position to support mox3 if something were to break (such as happened in
> some libraries with Python 3.6). Since there is a community goal to
> remove mox from all OpenStack projects in Rocky this will hopefully not
> be a long-term problem, but there was some discussion that if projects
> needed to keep mox for some reason that they would be asked to provide a
> maintainer for mox3. This topic is kind of on hold pending the outcome
> of the community goal this cycle.
>
> automatic configuration migration on upgrade
> --------------------------------------------
>
> There is a desire for oslo.config to provide a mechanism to
> automatically migrate deprecated options to their new location on
> version upgrades. This is a fairly complex topic that I can't cover
> adequately in a summary email, but there is a spec proposed at
> https://review.openstack.org/#/c/520043/ and POC changes at
> https://review.openstack.org/#/c/526314/ and
> https://review.openstack.org/#/c/526261/
>
> One outcome of the discussion was that in the initial version we would
> not try to handle complex migrations, such as the one that happened when
> we combined all of the separate rabbit connection opts into a single
> connection string. To start with we will just raise a warning to the
> user that they need to handle those manually, but a templated or
> hook-based method of automating those migrations could be added as a
> follow-up if there is sufficient demand.
>
> oslo.messaging plans
> --------------------
>
> There was quite a bit discussed under this topic. I'm going to break it
> down into sub-topics for clarity.
>
> oslo.messaging heartbeats
> =========================
>
> Everyone seemed to be in favor of this feature, so we anticipate
> development moving forward in Rocky. There is an initial patch proposed
> at https://review.openstack.org/546763
>
> We felt that it should be possible to opt in and out of the feature, and
> that the configuration should be done at the app

Re: [openstack-dev] [oslo] Oslo PTG Summary

2018-03-12 Thread Ken Giusti
>> here: https://review.openstack.org/#/c/464028/ Turns out it's
>> really simple!
>>
>> Nova is also using this functionality for more complex options related
>> to upgrades, so that would be a good place to look for more advanced use
>> cases.
>>
>> Full documentation for the mutable config options is at
>> https://docs.openstack.org/oslo.config/latest/reference/mutable.html
>>
>> The goal status is being tracked in
>> https://storyboard.openstack.org/#!/story/2001545
>>
>> Chang Bo was also going to talk to Lance about possibly coming up with a
>> burndown chart like the one he had for the policy in code work.
>>
>> oslo healthcheck middleware
>> ---------------------------
>>
>> As this ended up being the only major topic for the afternoon, the
>> session was unfortunately lightly attended. However, the self-healing
>> SIG was talking about related topics at the same time so we ended up
>> moving to that room and had a good discussion.
>>
>> Overall the feature seemed to be well-received. There is some security
>> concern with exposing service information over an un-authenticated
>> endpoint, but because there is no authentication supported by the health
>> checking functionality in things like Kubernetes or HAProxy this is
>> unavoidable. The feature won't be mandatory, so if this exposure is
>> unacceptable it can be turned off (with a corresponding loss of
>> functionality, of course).
>>
>> There was also some discussion of dropping the asynchronous nature of
>> the checks in the initial version in order to keep the complexity to a
>> minimum. Asynchronous testing can always be added later if it proves
>> necessary.
>>
>> The full spec is at https://review.openstack.org/#/c/531456
>>
>> oslo.config strict validation
>> -----------------------------
>>
>> I actually had discussions with multiple people about this during the
>> week. In both cases, they were just looking for a minimal amount of
>> validation that would catch an error such as "devug=True". Such a
>> validation might be fairly simple to write now that we have the
>> YAML-based sample config with (ideally) information about all the
>> options available to set in a project. It should be possible to compare
>> the options set in the config file with the ones listed in the sample
>> config and raise warnings for any that don't exist.
>>
>> There is also a more complete validation spec at
>>
>> http://specs.openstack.org/openstack/oslo-specs/specs/ocata/oslo-validator.html
>> and a patch proposed at https://review.openstack.org/#/c/384559/
>>
>> Unfortunately there has been little movement on that as of late, so it
>> might be worthwhile to implement something more minimalist initially and
>> then build from there. The existing patch is quite significant and
>> difficult to review.
>>
>> Conclusion
>> ----------
>>
>> I feel like there were a lot of good discussions at the PTG and we have
>> plenty of work to keep the small Oslo team busy for the Rocky cycle. :-)
>>
>> Thanks to everyone who participated and I look forward to seeing how
>> much progress we've made at the next Summit and PTG.
>>
>> -Ben
>>
>
>



-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [oslo] proposing Stephen Finucan for oslo-core

2018-01-08 Thread Ken Giusti
+1 for Stephen!

On Mon, Jan 8, 2018 at 9:55 AM, Doug Hellmann  wrote:
> Stephen (sfinucan) has been working on pbr, oslo.config, and
> oslo.policy and reviewing several of the other Oslo libraries for
> a while now. His reviews are always helpful and I think he would
> make a good addition to the oslo-core team.
>
> As per our usual practice, please reply here with a +1 or -1 and
> any reservations.
>
> Doug
>



-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [oslo] Oslo team updates

2018-01-02 Thread Ken Giusti
+1, and a big thank you for all your contributions

On Tue, Jan 2, 2018 at 9:39 AM, Davanum Srinivas  wrote:
> +1 from me as well. Thanks everyone!
>
> On Tue, Jan 2, 2018 at 9:31 AM, Doug Hellmann  wrote:
>> Excerpts from ChangBo Guo's message of 2018-01-02 11:53:02 +0800:
>>> In the last two cycles some people's situations have changed and they can't
>>> focus on Oslo code review, so I propose some changes to the Oslo team.
>>> Remove the following people, with thanks for their past hard work making
>>> Oslo better, and welcome them back if they want to join the team again.
>>> Please +1/-1 for the change.
>>>
>>> Generalist Code Reviewers:
>>>  Brant Knudson
>>>
>>> Specialist API Maintainers:
>>> oslo-cache-core:  Brant Knudson  David Stanek
>>> oslo-db-core: Viktor Serhieiev
>>> oslo-messaging-core:Dmitriy Ukhlov Oleksii Zamiatin Viktor Serhieiev
>>> oslo-policy-core: Brant Knudson  David Stanek guang-yee
>>> oslo-service-core: Marian Horban
>>>
>>> We welcome anyone to join the team or contribute to Oslo. The Oslo program
>>> brings together generalist code reviewers and specialist API maintainers.
>>> They share a common interest in tackling copy-and-paste technical debt
>>> across the OpenStack project. For more information please refer to wiki
>>> [1].
>>>
>>> [1] https://wiki.openstack.org/wiki/Oslo
>>
>> +1 -- it's sad to see the team shrink a bit, but it's good to keep the
>> list accurate based on when people can contribute.
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>



-- 
Ken Giusti  (kgiu...@gmail.com)



[openstack-dev] [Oslo][oslo.messaging][all] Notice: upcoming change to oslo.messaging RPC server

2017-09-26 Thread Ken Giusti
Hi Folks,

Just a head's up:

In Queens the default access policy for RPC Endpoints will change from
LegacyRPCAccessPolicy to DefaultRPCAccessPolicy.  RPC calls to private
('_' prefix) methods will no longer be possible.  If you want to allow
RPC Clients to invoke private methods, you must explicitly set the
access_policy to LegacyRPCAccessPolicy when you call get_rpc_server()
or instantiate an RPCDispatcher.  This change [0] has been merged to
oslo.messaging master and will appear in the next release of
oslo.messaging.

"Umm What?"

Good question!  Here are the TL;DR details:

Since forever it's been possible for a client to make an RPC call
against _any_ method defined in the RPC Endpoint object.  And by "any"
we mean "all methods, including private ones (method names prefixed by
'_')".

Naturally this ability came as a surprise to many folks [1], including
yours truly and others on the oslo team [2].  It was agreed that
having this be the default behavior was indeed A Bad Thing.

So starting in Ocata oslo.messaging has provided a means for
controlling access to Endpoint methods [3].  Oslo.messaging now
defines three different "access control policies" that can be applied
to an RPC Server:

LegacyRPCAccessPolicy: the original behavior - any method can be
  invoked by an RPC client
DefaultRPCAccessPolicy: prevent RPC access to private ('_' prefixed)
  methods; all others may be invoked
ExplicitRPCAccessPolicy: only allow access to methods that have been
  decorated with the @expose decorator

See [4] for more details.
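
For example, with the explicit policy only decorated methods are reachable
(a quick sketch - assumes transport and target are already set up):

    import oslo_messaging
    from oslo_messaging.rpc import ExplicitRPCAccessPolicy

    class ServerEndpoint(object):
        @oslo_messaging.expose
        def echo(self, ctx, msg):   # callable by RPC clients
            return msg

        def _helper(self, msg):     # not callable over RPC
            return msg.upper()

    server = oslo_messaging.get_rpc_server(
        transport, target, [ServerEndpoint()],
        access_policy=ExplicitRPCAccessPolicy)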

In order not to break anything, the default access policy was initially
set to 'LegacyRPCAccessPolicy'.  This has been the default for
Ocata and Pike.

Starting in Queens this will no longer be the case.
DefaultRPCAccessPolicy will become the default if no access policy is
specified when calling get_rpc_server() or directly instantiating an
RPCDispatcher.  To keep the old behavior you must explicitly set the
access policy to LegacyRPCAccessPolicy:

from oslo_messaging.rpc import LegacyRPCAccessPolicy
...
server = get_rpc_server(transport, target, endpoints,
                        access_policy=LegacyRPCAccessPolicy)



Reply here if you have any questions or hit any issues, thanks!

-K

[0] https://review.openstack.org/#/c/500456/
[1] https://bugs.launchpad.net/oslo.messaging/+bug/1194279
[2] https://bugs.launchpad.net/oslo.messaging/+bug/1555845
[3] https://review.openstack.org/#/c/358359/
[4] https://docs.openstack.org/oslo.messaging/latest/reference/server.html
-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [oslo][barbican][sahara] start RPC service before launcher wait?

2017-09-18 Thread Ken Giusti
On Thu, Sep 14, 2017 at 7:33 PM, Adam Spiers  wrote:
>
> Hi Ken,
>
> Thanks a lot for the analysis, and sorry for the slow reply!
> Comments inline...
>
> Ken Giusti  wrote:
> > Hi Adam,
> >
> > I think there's a couple of problems here.
> >
> > Regardless of worker count, the service.wait() is called before
> > service.start().  And from looking at the oslo.service code, the 'wait()'
> > method is called after start(), then again after stop().  This doesn't match
> > up with the intended use of oslo.messaging.server.wait(), which should only
> > be called after .stop().
>
> Hmm, so are you saying that there might be a bug in oslo.service's
> usage of oslo.messaging, and that this Sahara bugfix was the wrong
> approach too?
>
> https://review.openstack.org/#/c/280741/1/sahara/cli/sahara_engine.py
>

Well, I don't think the explicit call to start() is going to help,
esp. if the number of workers is > 1, since the workers are forked and
need to call start() from their own process space.
In fact, if # of workers > 1 then you not only get an RPC server in
each worker process, you'll end up with an extra RPC
server in the calling thread.

Take a look at a test service I've created for oslo.messaging:

https://pastebin.com/rSA6AD82

If you change the main code to call the new sequence, you'll end up
with 3 rpc servers (2 in the workers, one in the main process).

In that code I've made the wait() call a no-op if the server hasn't
been started first.  And the stop method will call stop and wait on
the rpc server, which is the expected sequence as far as
oslo.messaging is concerned.

To me it seems that the bug is in oslo.service - calling wait() before
start() doesn't make sense to me.
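
For reference, the lifecycle oslo.messaging expects for an RPC server is
simply:

    server = oslo_messaging.get_rpc_server(transport, target, endpoints)
    server.start()   # begin consuming requests
    # ... run until shutdown is requested ...
    server.stop()    # stop accepting new requests
    server.wait()    # block until in-flight requests complete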

> > Perhaps a bigger issue is that in the multi threaded case all threads
> > appear to be calling start, wait, and stop on the same instance of the
> > service (oslo.messaging rpc server).  At least that's what I'm seeing in my
> > muchly reduced test code:

I was wrong about this - I failed to notice that each service had
forked and was dealing with its own copy of the server.

> >
> > https://paste.fedoraproject.org/paste/-73zskccaQvpSVwRJD11cA
> >
> > The log trace shows multiple calls to start, wait, stop via different
> > threads to the same TaskServer instance:
> >
> > https://paste.fedoraproject.org/paste/dyPq~lr26sQZtMzHn5w~Vg
> >
> > Is that expected?
>
> Unfortunately in the interim, your pastes seem to have vanished - any
> chance you could repaste them?
>

Ugh - didn't keep a copy.  If you pull down that test code you can use
it to generate those traces.


> Thanks,
> Adam
>
> > On Mon, Jul 31, 2017 at 9:32 PM, Adam Spiers  wrote:
> > > Ken Giusti  wrote:
> > >> On Mon, Jul 31, 2017 at 10:01 AM, Adam Spiers  wrote:
> > >>> I recently discovered a bug where barbican-worker would hang on
> > >>> shutdown if queue.asynchronous_workers was changed from 1 to 2:
> > >>>
> > >>>https://bugs.launchpad.net/barbican/+bug/1705543
> > >>>
> > >>> resulting in a warning like this:
> > >>>
> > >>>WARNING oslo_messaging.server [-] Possible hang: stop is waiting for
> > >>> start to complete
> > >>>
> > >>> I found a similar bug in Sahara:
> > >>>
> > >>>https://bugs.launchpad.net/sahara/+bug/1546119
> > >>>
> > >>> where the fix was to call start() on the RPC service before making the
> > >>> launcher wait() on it, so I ported the fix to Barbican, and it seems
> > >>> to work fine:
> > >>>
> > >>>https://review.openstack.org/#/c/485755
> > >>>
> > >>> I noticed that both projects use ProcessLauncher; barbican uses
> > >>> oslo_service.service.launch() which has:
> > >>>
> > >>>if workers is None or workers == 1:
> > >>>launcher = ServiceLauncher(conf, restart_method=restart_method)
> > >>>else:
> > >>>launcher = ProcessLauncher(conf, restart_method=restart_method)
> > >>>
> > >>> However, I'm not an expert in oslo.service or oslo.messaging, and one
> > >>> of Barbican's core reviewers (thanks Kaitlin!) noted that not many
> > >>> other projects start the task before calling wait() on the launcher,
> > >>> so I thought I'd check here whether that 

Re: [openstack-dev] [openstack][oslo][all] Clean up oslo deprecated stuff !

2017-08-23 Thread Ken Giusti
On Tue, Aug 22, 2017 at 3:24 AM, ChangBo Guo  wrote:

> Hi ALL,
>
> We discussed a little about how to avoid breaking consuming projects' gate
> in the oslo weekly meeting last week.  The easy improvement is to clean up
> deprecated stuff in oslo at the beginning of Queens.  I collected the
> deprecated stuff in [1].  This needs the Oslo team and other teams to work
> together. I think we can start the work when the Queens cycle opens. I
> have also reserved a room at the PTG for 2 hours of hacking. [2]
>
>
> [1] https://etherpad.openstack.org/p/oslo-queens-tasks
> [2] https://ethercalc.openstack.org/Queens-PTG-Discussion-Rooms
>
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I've gone through oslo.messaging and have updated the etherpad [1] with a
few more entries.  I've opened launchpad bugs where appropriate, adding the
oslo-debt-cleanup tag to each for good measure.

The original list included a few items that as best I can tell were not
tagged as deprecated via debtcollector until Pike. IIUC we can't remove
those in Queens, correct?
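
For reference, flagging something for removal via debtcollector looks
roughly like this (a sketch only - the helper name and version strings
are made up):

    from debtcollector import removals


    @removals.remove(message="use new_helper() instead",
                     version="pike", removal_version="queens")
    def old_helper():
        pass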

-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Proposing Andy Smith as an oslo.messaging core reviewers

2017-08-16 Thread Ken Giusti
+1

On Mon, Aug 14, 2017 at 6:59 AM, ChangBo Guo  wrote:

> I propose that we add Andy Smith to the oslo.messaging team.
>
> Andy Smith has been actively contributing to oslo.messaging for a while
> now, both
> in helping make oslo.messaging better via code contribution(s) and by
> helping with
> the review load when he can. He's been involved on the AMQP 1.0 side for
> a while. He's really interested in taking ownership of the experimental
> Kafka driver, and it would be great to have someone able to drive that.
>
> Please respond with +1/-1
>
> Voting will last 2 weeks and will end at 28th of August.
>
> Cheers,
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-08-09 Thread Ken Giusti
 mostly Nova? Was ceilometer involved?
> > > > I would be curious to know how much AMQP traffic is Control related
> > > > (e.g. spinning up VMs) vs how much is telemetry related in a typical
> > > > openstack deployment.
> > > > Do we know that?
> > > >
> > > > I have also left some comments in the doc.
> > > >
> > > > Paul-Andre
> > > >
> > > >
> > > > -Original Message-
> > > > From: Matthieu Simonin 
> > > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > > questions)" 
> > > > Date: Wednesday, June 21, 2017 at 6:54 PM
> > > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > > 
> > > > Subject: Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > > > proposal for message bus analysis
> > > >
> > > > Hi Ken,
> > > >
> > > > Thanks for starting this!
> > > > I've made a first pass on the epad and left some notes and questions
> > > > there.
> > > >
> > > > Best,
> > > >
> > > > Matthieu
> > > > - Original Message -
> > > > > From: "Ken Giusti" 
> > > > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > > > 
> > > > > Sent: Wednesday, June 21, 2017 15:23:26
> > > > > Subject: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman
> > > > > proposal for message bus analysis
> > > > >
> > > > > Hi All,
> > > > >
> > > > > Andy and I have taken a stab at defining some test scenarios for
> > > > > anal the different message bus technologies:
> > > > >
> > > > > https://etherpad.openstack.org/p/1BGhFHDIoi
> > > > >
> > > > > We've started with tests for just the oslo.messaging layer to
> > > > > analyze throughput and latency as the number of message bus
> > > > > clients - and the bus itself - scale out.
> > > > >
> > > > > The next step will be to define messaging oriented test scenarios
> > > > > for an openstack deployment.  We've started by enumerating a few
> > > > > of the tools, topologies, and fault conditions that need to be
> > > > > covered.
> > > > >
> > > > > Let's use this epad as a starting point for analyzing messaging -
> > > > > please feel free to contribute, question, and criticize :)
> > > > >
> > > > > thanks,
> > > > >
> > > > >
> > > > > --
> > > > > Ken Giusti  (kgiu...@gmail.com)
> > > > >
> > > > > __
> > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe:
> > > > > openstack-dev-requ...@

[openstack-dev] [oslo][barbican][sahara] start RPC service before launcher wait?

2017-08-02 Thread Ken Giusti
Oops - didn't reply all
-- Forwarded message --
From: Ken Giusti 
Date: Tue, Aug 1, 2017 at 12:51 PM
Subject: Re: [openstack-dev] [oslo][barbican][sahara] start RPC service
before launcher wait?
To: Adam Spiers 


Hi Adam,

I think there's a couple of problems here.

Regardless of worker count, the service.wait() is called before
service.start().  And from looking at the oslo.service code, the 'wait()'
method is called after start(), then again after stop().  This doesn't match
up with the intended use of oslo.messaging.server.wait(), which should only
be called after .stop().

Perhaps a bigger issue is that in the multi threaded case all threads
appear to be calling start, wait, and stop on the same instance of the
service (oslo.messaging rpc server).  At least that's what I'm seeing in my
muchly reduced test code:

https://paste.fedoraproject.org/paste/-73zskccaQvpSVwRJD11cA

The log trace shows multiple calls to start, wait, stop via different
threads to the same TaskServer instance:

https://paste.fedoraproject.org/paste/dyPq~lr26sQZtMzHn5w~Vg

Is that expected?

On Mon, Jul 31, 2017 at 9:32 PM, Adam Spiers  wrote:

> Ken Giusti  wrote:
>
>> On Mon, Jul 31, 2017 at 10:01 AM, Adam Spiers  wrote:
>>
>>> I recently discovered a bug where barbican-worker would hang on
>>> shutdown if queue.asynchronous_workers was changed from 1 to 2:
>>>
>>>https://bugs.launchpad.net/barbican/+bug/1705543
>>>
>>> resulting in a warning like this:
>>>
>>>WARNING oslo_messaging.server [-] Possible hang: stop is waiting for
>>> start to complete
>>>
>>> I found a similar bug in Sahara:
>>>
>>>https://bugs.launchpad.net/sahara/+bug/1546119
>>>
>>> where the fix was to call start() on the RPC service before making the
>>> launcher wait() on it, so I ported the fix to Barbican, and it seems
>>> to work fine:
>>>
>>>https://review.openstack.org/#/c/485755
>>>
>>> I noticed that both projects use ProcessLauncher; barbican uses
>>> oslo_service.service.launch() which has:
>>>
>>>if workers is None or workers == 1:
>>>launcher = ServiceLauncher(conf, restart_method=restart_method)
>>>else:
>>>launcher = ProcessLauncher(conf, restart_method=restart_method)
>>>
>>> However, I'm not an expert in oslo.service or oslo.messaging, and one
>>> of Barbican's core reviewers (thanks Kaitlin!) noted that not many
>>> other projects start the task before calling wait() on the launcher,
>>> so I thought I'd check here whether that is the correct fix, or
>>> whether there's something else odd going on.
>>>
>>> Any oslo gurus able to shed light on this?
>>>
>>
>> As far as an oslo.messaging server is concerned, the order of operations
>> is:
>>
>> server.start()
>> # do stuff until ready to stop the server...
>> server.stop()
>> server.wait()
>>
>> The final wait blocks until all requests that are in progress when stop()
>> is called finish and cleanup.
>>
>
> Thanks - that makes sense.  So the question is, why would
> barbican-worker only hang on shutdown when there are multiple workers?
> Maybe the real bug is somewhere in oslo_service.service.ProcessLauncher
> and it's not calling start() correctly?
>



-- 
Ken Giusti  (kgiu...@gmail.com)



-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][sahara] start RPC service before launcher wait?

2017-07-31 Thread Ken Giusti
On Mon, Jul 31, 2017 at 10:01 AM, Adam Spiers  wrote:

> Hi all,
>
> I recently discovered a bug where barbican-worker would hang on
> shutdown if queue.asynchronous_workers was changed from 1 to 2:
>
>https://bugs.launchpad.net/barbican/+bug/1705543
>
> resulting in a warning like this:
>
>WARNING oslo_messaging.server [-] Possible hang: stop is waiting for
> start to complete
>
> I found a similar bug in Sahara:
>
>https://bugs.launchpad.net/sahara/+bug/1546119
>
> where the fix was to call start() on the RPC service before making the
> launcher wait() on it, so I ported the fix to Barbican, and it seems
> to work fine:
>
>https://review.openstack.org/#/c/485755
>
> I noticed that both projects use ProcessLauncher; barbican uses
> oslo_service.service.launch() which has:
>
>if workers is None or workers == 1:
>launcher = ServiceLauncher(conf, restart_method=restart_method)
>else:
>launcher = ProcessLauncher(conf, restart_method=restart_method)
>
> However, I'm not an expert in oslo.service or oslo.messaging, and one
> of Barbican's core reviewers (thanks Kaitlin!) noted that not many
> other projects start the task before calling wait() on the launcher,
> so I thought I'd check here whether that is the correct fix, or
> whether there's something else odd going on.
>
> Any oslo gurus able to shed light on this?
>
> Thanks!
> Adam
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


As far as an oslo.messaging server is concerned, the order of operations is:

server.start()
# do stuff until ready to stop the server...
server.stop()
server.wait()

The final wait blocks until all requests that are in progress when stop()
is called finish and cleanup.
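
A minimal, self-contained sketch of that sequence (the topic, server,
and endpoint names here are made up):

    import oslo_messaging
    from oslo_config import cfg


    class TestEndpoint(object):
        def echo(self, ctxt, arg):
            return arg


    conf = cfg.CONF
    transport = oslo_messaging.get_transport(conf)
    target = oslo_messaging.Target(topic='test-topic', server='server-1')
    server = oslo_messaging.get_rpc_server(transport, target,
                                           [TestEndpoint()])

    server.start()
    # ... serve RPC requests until it is time to shut down ...
    server.stop()   # stop accepting new requests
    server.wait()   # block until in-flight requests finish, then clean up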

-K


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-23 Thread Ken Giusti
On Wed, Jun 21, 2017 at 10:13 AM, Ilya Shakhat  wrote:

> Hi Ken,
>
> Please check scenarios and reports that exist in Performance Docs. In
> particular you may be interested in:
>  * O.M.Simulator - https://github.com/openstack/o
> slo.messaging/blob/master/tools/simulator.py
>  * MQ  performance scenario - https://docs.openstack.org/dev
> eloper/performance-docs/test_plans/mq/plan.html#message-queue-performance
>  * One of RabbitMQ reports - https://docs.openstack.org/dev
> eloper/performance-docs/test_results/mq/rabbitmq/cmsm/index.html
>  * MQ HA scenario - https://docs.openstack.org/dev
> eloper/performance-docs/test_plans/mq_ha/plan.html
>  * One of RabbitMQ HA reports - https://docs.openstack.org/dev
> eloper/performance-docs/test_results/mq_ha/rabbitmq-ha-
> queues/cs1ss2-ks2-ha/omsimulator-ha-call-cs1ss2-ks2-ha/index.html
>
>
Thank you Ilya - these tests you reference are indeed valuable.

But, IIUC, those tests benchmark queue throughput, using a single (highly
threaded) client->single server traffic flow.   If that is the case, I
think the tests we're trying to define might be a bit more specific to the
FEMDC [0] use cases:  multiple servers consuming from different topics
while many clients distributed across the message bus are connecting,
generating traffic, failing over, etc.

The goal of these tests would be to quantify the behavior of the message
bus as a whole under different messaging loads, failure conditions, etc.

[0] https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds




>
> Thanks,
> Ilya
>
> 2017-06-21 15:23 GMT+02:00 Ken Giusti :
>
>> Hi All,
>>
>> Andy and I have taken a stab at defining some test scenarios for anal the
>> different message bus technologies:
>>
>> https://etherpad.openstack.org/p/1BGhFHDIoi
>>
>> We've started with tests for just the oslo.messaging layer to analyze
>> throughput and latency as the number of message bus clients - and the bus
>> itself - scale out.
>>
>> The next step will be to define messaging oriented test scenarios for an
>> openstack deployment.  We've started by enumerating a few of the tools,
>> topologies, and fault conditions that need to be covered.
>>
>> Let's use this epad as a starting point for analyzing messaging - please
>> feel free to contribute, question, and criticize :)
>>
>> thanks,
>>
>>
>>
>> --
>> Ken Giusti  (kgiu...@gmail.com)
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ______
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Ken Giusti
On Wed, Jun 21, 2017 at 11:24 AM, Jay Pipes  wrote:

> On 06/21/2017 09:23 AM, Ken Giusti wrote:
>
>> Andy and I have taken a stab at defining some test scenarios for anal the
>> different message bus...
>>
>
> That was a particularly unfortunate choice of words.
>
>
Ugh. Sorry - most unfortunate fat-finger...or Freudian slip...


>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FEMDC][MassivelyDistributed] Strawman proposal for message bus analysis

2017-06-21 Thread Ken Giusti
Hi All,

Andy and I have taken a stab at defining some test scenarios for anal the
different message bus technologies:

https://etherpad.openstack.org/p/1BGhFHDIoi

We've started with tests for just the oslo.messaging layer to analyze
throughput and latency as the number of message bus clients - and the bus
itself - scale out.

The next step will be to define messaging oriented test scenarios for an
openstack deployment.  We've started by enumerating a few of the tools,
topologies, and fault conditions that need to be covered.

Let's use this epad as a starting point for analyzing messaging - please
feel free to contribute, question, and criticize :)

thanks,



-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-08 Thread Ken Giusti
Hi,

Keep in mind the rabbit driver creates a single reply queue per *transport*
- that is per call to oslo.messaging's
get_transport/get_rpc_transport/get_notification_transport.

If you have multiple RPCClients sharing the same transport, then all
clients issuing RPC calls over that transport will use the same reply queue
(and multiplex incoming replies using a unique id in the reply itself).
See
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/_drivers/amqpdriver.py?h=stable/newton#n452
for all the details.

But it cannot share the reply queue across transports - and certainly not
across processes :)

-K



On Wed, Jun 7, 2017 at 10:29 PM, int32bit  wrote:

> Hi,
>
> Currently, I find our RPC client always needs to create a new callback
> queue for every call request to track which reply belongs to it, at least
> in Newton. That's pretty inefficient and leads to poor performance. I also
> find some RPC implementations that don't need to create a new queue; they
> track the request and response by a correlation id in the message header
> (RabbitMQ supports this well; not sure if it is an AMQP standard?). The
> official RabbitMQ documentation provides a simple demo, see [1].
>
> So I am confused about why our oslo.messaging doesn't use this approach
> to optimize RPC performance. Is there a reason for this, or am I missing
> some potential cases?
>
> Thanks for any reply and discussion!
>
>
> [1] https://www.rabbitmq.com/tutorials/tutorial-six-python.html.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-26 Thread Ken Giusti
So it's been over a week with no objections.

I will start the deprecation process, including an announcement on the
operators' mailing list.

Thanks for the feedback.

On Mon, May 22, 2017 at 8:06 PM, ChangBo Guo  wrote:
> +1 , let's focus on key drivers.
>
> 2017-05-17 2:02 GMT+08:00 Joshua Harlow :
>>
>> Fine with me,
>>
>> I'd personally rather get down to say 2 'great' drivers for RPC,
>>
>> And say 1 (or 2?) for notifications.
>>
>> So ya, wfm.
>>
>> -Josh
>>
>>
>> Mehdi Abaakouk wrote:
>>>
>>> +1 too, I haven't seen its contributors since a while.
>>>
>>> On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:
>>>>
>>>> On 15/05/17 15:29 -0500, Ben Nemec wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 05/15/2017 01:55 PM, Doug Hellmann wrote:
>>>>>>
>>>>>> Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15
>>>>>> 14:27:36 -0400:
>>>>>>>
>>>>>>> On Mon, May 15, 2017 at 2:08 PM, Ken Giusti 
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Folks,
>>>>>>>>
>>>>>>>> It was decided at the oslo.messaging forum at summit that the pika
>>>>>>>> driver will be marked as deprecated [1] for removal.
>>>>>>>
>>>>>>>
>>>>>>> [dims} +1 from me.
>>>>>>
>>>>>>
>>>>>> +1
>>>>>
>>>>>
>>>>> Also +1
>>>>
>>>>
>>>> +1
>>>>
>>>> Flavio
>>>>
>>>> --
>>>> @flaper87
>>>> Flavio Percoco
>>>
>>>
>>>
>>>
>>>>
>>>> __
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> ______
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Ken Giusti
Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.

The pika driver is another rabbitmq-based driver.  It was developed as
a replacement for the current rabbit driver (rabbit://).  The pika
driver is based on the 'pika' rabbitmq client library [2], rather than
the kombu library [3] of the current rabbitmq driver.  The pika
library was recommended by the rabbitmq community a couple of summits
ago as a better client than the kombu client.

However, testing done against this driver did not show "appreciable
difference in performance or reliability" over the existing rabbitmq
driver.

Given this, and the recent departure of some very talented
contributors, the consensus is to deprecate pika and recommend users
stay with the original rabbitmq driver.

The plan is to mark the driver as deprecated in Pike, removal in Rocky.

thanks,


[1] 
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
  (~ line 80)
[2] https://github.com/pika/pika
[3] https://github.com/celery/kombu

-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][oslo.messaging] FYI: forum for "Oslo.Messaging Non-rabbitmq Backend Recommendations" scheduled 5/11 1:30 MR104

2017-05-11 Thread Ken Giusti
Just a heads up: this is an additional session to continue the
discussion from Weds

etherpad: 
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations

thanks,

-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-04-03 Thread Ken Giusti
This is gonna be _huge_.

As a matter of fact, I like this so much I'm abandoning my proposal to
add support for RPC over RFC 2549.

-K

On Sat, Apr 1, 2017 at 10:49 AM, Amrith Kumar  wrote:
> Great idea, happy to try it out for Trove. We love o.m.rpc :) But it needs to
> be secure; another comment has been posted in review. I'm doing a talk about o.m
> use by Trove in Boston anyway; maybe we can get Melissa to join me for that?
>
> -amrith
>
>
> -Original Message-
> From: Deja, Dawid [mailto:dawid.d...@intel.com]
> Sent: Friday, March 31, 2017 10:41 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging
>
> Hi all,
>
> To work around issues with RabbitMQ scalability we'd like to introduce a new
> driver in oslo.messaging that has nearly no scaling limits [1].
> We'd like to have as many eyes on this as possible since we believe that this
> is the technology of the future. Thanks for all reviews.
>
> Dawid Deja
>
> [1] https://review.openstack.org/#/c/452219/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Limiting the number of retries for kafka driver

2017-02-14 Thread Ken Giusti
On Tue, Feb 14, 2017 at 2:52 PM, Elancheran Subramanian
 wrote:
> Hello All,
> This is reg limiting the number of retries for Kafka driver support on Nova
> and Neutron.
>
> While trying out the oslo messaging notifications support for Kafka on Nova
> and Neutron, the Kafka driver doesn’t support limiting the number of retries
> for failed messages. When I checked the code, currently there is no
> configuration which supports that, though the send_notification has retry
> https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_kafka.py#L336
> but it’s not set or passed from component’s (nova or neutron’s)
> configuration. Is there any configuration which I’m missing? Please let me
> know.
>

You haven't missed anything - the kafka driver doesn't provide a means
to set a default retry via its configuration.
The expectation is that the caller (nova/neutron) would provide a
retry value when constructing a Notifier instance.

There was such a config option for the rabbitmq driver
(rabbit_max_retries) but that was removed because it broke
*something* - can't remember exactly the reason though, sorry.
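
Passing the retry limit when building the Notifier looks roughly like
this (an untested sketch - the publisher id and topic are examples, and
IIRC retry=None, the default, means retry forever while retry=0 disables
retries entirely):

    import oslo_messaging
    from oslo_config import cfg

    conf = cfg.CONF
    transport = oslo_messaging.get_notification_transport(conf)

    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='compute.host1',
                                       driver='messaging',
                                       topics=['notifications'],
                                       retry=5)  # give up after 5 attempts
    notifier.info({}, 'compute.instance.create.end', {'instance': 'abc'})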

>
> Thanks in advance,
> Cheran
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Oslo logo files (was 'log files')

2017-02-14 Thread Ken Giusti
These are great!

I've tweaked the subject lest folks think this only applies to logging.

"Watch me pull an RPC call outta my hat... nothing up my sleeve... Presto!"

(google 'bullwinkle' - pop culture reference... clearly showing my age
(and mentality) :)

-K

On Mon, Feb 13, 2017 at 11:35 PM, ChangBo Guo  wrote:
> We got cool logos in different formats, just use them :-)
> I’m excited to finally be able to share final project logo files with you.
> Inside this folder, you’ll find full-color and one-color versions of the
> logo, plus a mascot-only version, in EPS, JPG and PNG formats. You can use
> them on presentations and wherever else you’d like to add some team flair.
>
> https://www.dropbox.com/sh/kj0e3sdu47pqr3e/AABllB31vJZDlw4OkZRK_AZia?dl=0
>
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Not running for Oslo PTL for Pike

2017-01-05 Thread Ken Giusti
On Tue, Jan 3, 2017 at 3:03 PM, Joshua Harlow  wrote:
> Hi Oslo folks (and others),
>
> Happy new year!
>
> After serving for about a year I think it's a good opportunity for myself to
> let another qualified individual run for Oslo PTL (seems common to only go
> for two terms and hand off to another).
>
> So I just wanted to let folks know that I will be doing this, so that we can
> grow others in the community that wish to try out being a PTL.
>
> I don't plan on leaving the Oslo community btw, just want to make sure we
> spread the knowledge (and the fun!) of being a PTL.
>
> Hopefully I've been a decent PTL (with  room to improve) during this
> time :-)
>

Dude - you've been a most excellent PTL!

Thanks for all the help (and laughs :) you've provided in the past year.

> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Splitting notifications from rpc (and questions + work around this)

2016-11-04 Thread Ken Giusti
On Wed, Nov 2, 2016 at 8:11 PM, Joshua Harlow  wrote:
> Hi folks,
>
> There was a bunch of chatter at the summit about how there are really two
> different types of (oslo) messaging usage that exist in openstack and how
> they need not be backed by the same solution type (rabbitmq, qpid,
> kafka...).
>
> For those that were not at the oslo sessions:
>
> https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Oslo
>
> The general gist was though that we need to make sure people really do know
> that there are two very different types of messaging usage in openstack and
> to ensure that operators (and developers) are picking the right backing
> technology for each type.
>
> So some questions naturally arise out of this.
>
> * Where are the best practices with regard to selection of the best backend
> type for rpc (and one for notifications); is this something oslo.messaging
> should work through (or can the docs team and operator group also help in
> making this)?
>
> * What are the tradeoffs in using the same (or different) technology for rpc
> and notifications?
>

I think the oslo.messaging team should take the lead here and educate
users as to what the options are, and  how the two supported messaging
services (RPC and Notifications) differ with respect to backend
requirements.   These topics really should be part of the
oslo.messaging 'Theory of Operations' documentation that was discussed
during the Arch WG summit meeting.

Currently the biggest functional difference between the backends is
the support of store-and-forward (e.g. queueing) versus point-to-point
message transfer.  Oslo could at least explain the pros and cons of
each approach with respect to the RPC and Notification services so
that folks understand what the tradeoffs and advantages are in the
first place.

Furthermore the team should also document the functional differences
between the various choices of backends.  For instance it would be
useful to understand how the two supported point-to-point backends
(zmq and dispatch router) differ in both behavior and recommended
deployment.


> * Is it even possible for all oslo.messaging consuming projects to be able
> to choose 2 different backends, are consuming projects consuming the library
> correctly so that they can use 2 different backends?
>
> * Is devstack able to run with say kafka for notifications and rabbitmq for
> rpc (if not, is there any work item the oslo group can help with to make
> this possible) so that we can ensure and test that all projects can work
> correctly with appropriate (and possibly different) backends?
>
> * Any other messaging, arch-wg work that we (oslo or others) can help out
> with to make sure that projects (and operators) are using the right
> technology for the right use (and not just defaulting to RPC over rabbitmq
> because it exists, when in reality something else might be a better choice)?
>

Ultimately there should be recommendations for which backends are
optimal for a range of different deployment scenarios, but at this
point we really don't have enough data and experience with these
backends to create such recommendations.

> * More(?)
>
> Just wanted to get this conversation started, because afaik it's one that
> has not been widely circulated (and operators have been setting up rabbitmq
> in various HA and clustered and ... modes, when in reality thinking through
> what and how it is used may be more appropriate); this also applies to
> developers since some technical solutions in openstack seem to be created
> due to (in-part) rabbitmq shortcomings (cells v1 afaik was *in* part created
> due to scaling issues).
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding gdavoian to oslo-core

2016-10-03 Thread Ken Giusti
+1 for Gevorg!

On Mon, Oct 3, 2016 at 2:02 PM, Davanum Srinivas  wrote:
> +1 from me!
>
> On Mon, Oct 3, 2016 at 1:40 PM, Joshua Harlow  wrote:
>> Greetings all stackers,
>>
>> I propose that we add Gevorg Davoian[1] to the oslo-core[2] team.
>>
>> Gevorg has been actively contributing to oslo for a while now, both in
>> helping make oslo better via code contribution(s) and by helping with the
>> review load when he can. He has provided quality reviews and is doing an
>> awesome job with the various oslo concepts and helping make oslo the best it
>> can be!
>>
>> Overall I think he would make a great addition to the core review team.
>>
>> Please respond with +1/-1.
>>
>> Thanks much!
>>
>> - Joshua Harlow
>>
>> [1] https://launchpad.net/~gdavoian
>> [2] https://launchpad.net/oslo
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding ozamiatin to oslo-core

2016-10-03 Thread Ken Giusti
+1 for sure.

On Mon, Oct 3, 2016 at 1:42 PM, Joshua Harlow  wrote:
> Greetings all stackers,
>
> I propose that we add Oleksii Zamiatin[1] to the oslo-core[2] team.
>
> Oleksii has been actively contributing to oslo for a while now, both in
> helping make oslo better via code contribution(s) and by helping with the
> review load when he can. He has provided quality reviews and is doing an
> awesome job with the various oslo concepts and helping make oslo the best it
> can be!
>
> Overall I think he would make a great addition to the core review team.
>
> Please respond with +1/-1.
>
> Thanks much!
>
> - Joshua Harlow
>
> [1] https://launchpad.net/~ozamiatin
> [2] https://launchpad.net/oslo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-02 Thread Ken Giusti
On Thu, Sep 1, 2016 at 4:53 PM, Ian Wells  wrote:
> On 1 September 2016 at 06:52, Ken Giusti  wrote:
>>
>> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
>
>> > I have opinions about other patterns we could use, but I don't want to
>> > push
>>
>> > my solutions here, I want to see if this is really as much of a problem
>> > as
>> > it looks and if people concur with my summary above.  However, the right
>> > approach is most definitely to create a new and more fitting set of oslo
>> > interfaces for communication patterns, and then to encourage people to
>> > move
>> > to the new ones from the old.  (Whether RabbitMQ is involved is neither
>> > here
>> > nor there, as this is really a question of Oslo APIs, not their
>> > implementation.)
>> >
>>
>> Hmm... maybe.   Message bus technology is varied, and so is its
>> behavior.  There are brokerless, point-to-point backends supported by
>> oslo.messaging [1],[2] which will exhibit different
>> capabilities/behaviors from the traditional broker-based
>> store-and-forward backend (e.g. message acking end-to-end vs to the
>> intermediary).
>
>
> The important thing is that you shouldn't have to look behind the curtain.
> We can offer APIs that are driven by the implementation (designed for test,
> and trivial to implement correctly given handy open source projects we know
> and trust) and the choice of design will therefore be dependent on the
> backend mechanisms we consider for use to implement the API.  APIs are
> always a point of negotiation between what the caller needs and what can be
> implemented in a practical amount of time.  But *I do not care* whether
> you're using rabbits or carrier pigeons just so long as what you have
> documented that the API promises me is actually true.  I *do not expect* to
> have to read RabbitMQ ior AMQP documentation to work out what behaviour I
> should expect for my messaging.  And its behaviour should be consistent if I
> have a choice of messaging backends.
>

And I agree totally - this is the way it _should_ be.  And to get
there we do have to address the ambiguities in the existing API, as
well as extend it so applications can explicitly state their service
needs.

My point is that the API also has to be _backend_ agnostic.  That
really hasn't been the priority it should be IMHO.  The current API as
it stands leaks too much of the backend behavior past the API.

For example here's where we are with the current API: a majority of
deployments are broker based - applications using oslo.messaging  have
come to rely _indirectly_ on the behavioral side effects of using a
broker backend.  In fact RabbitMQ's operational characteristics have
become the de-facto "correct" behavior.  Any other backend that
doesn't exhibit exactly the same behavior as RabbitMQ is considered
buggy.   Consider qpidd for example - simple differences in default
queue lifecycle and default flow control settings resulted in
messaging behavior different from RabbitMQ.  These were largely
considered bugs in qpidd.  I think this played a large part in the
lack of adoption of qpidd.

And qpidd is the same type of messaging backend as rabbitmq - a
broker.  Imagine what deployers are going to hit when they attempt to
use a completely different technology - non-brokered backends like
Zeromq or message routing.

Improving the API as you describe will go a long way to solving this
situation.  And believe me I agree 100% that this API work needs to be
done.

But the API redesign should be done in a backend-agnostic manner.  We
(the oslo.messaging devs) have to ensure that _required_ API features
cannot be tied to any one backend implementation.  For example things
like notification pools are trivial to support for broker backends,
but hard/impossible for point to point distributed technologies.  It
must be clear to the application devs that using those optional
features that cannot be effectively implemented for a given backend
basically forces the deployer's hand.
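
To make the notification-pool example concrete, the feature looks
roughly like this from the API side (a sketch - the endpoint class and
pool name are made up):

    import oslo_messaging
    from oslo_config import cfg


    class Endpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            pass  # handle the notification


    conf = cfg.CONF
    transport = oslo_messaging.get_notification_transport(conf)
    targets = [oslo_messaging.Target(topic='notifications')]

    # listeners sharing pool='pool-a' split the message stream between
    # them - trivial for a broker (a shared queue), but with no natural
    # equivalent on a point-to-point backend
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()], pool='pool-a')
    listener.start()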

My point is: yes, we need to improve the API, but it should be done in a
backend-agnostic way. There are currently features/behaviors that
essentially require a broker back end.  We should avoid making such
features mandatory elements of the API and ensure that the API users
are well aware of the consequences for deployers when using such
features.


>> All the more reason to have explicit delivery guarantees and well
>> understood failure scenarios defined by the API.
>
> And on this point we totally agree.
>
> I think the point of an API is to subdivide who carries which
> responsibilities - the caller for handling exceptional cases and the
> implementer

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Ken Giusti
On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells  wrote:
> On 31 August 2016 at 10:12, Clint Byrum  wrote:
>>
>> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>> > On 31 August 2016 at 11:57, Bogdan Dobrelya 
>> > wrote:
>> >
>> > > I agree that RPC design pattern, as it is implemented now, is a major
>> > > blocker for OpenStack in general. It requires a major redesign,
>> > > including handling of corner cases, on both sides, *especially* RPC
>> > > call
>> > > clients. Or may be it just have to be abandoned to be replaced by a
>> > > more
>> > > cloud friendly pattern.
>> >
>> >
>> > Is there a writeup anywhere on what these issues are? I've heard this
>> > sentiment expressed multiple times now, but without a writeup of the
>> > issues
>> > and the design goals of the replacement, we're unlikely to make progress
>> > on
>> > a replacement - even if somebody takes the heroic approach and writes a
>> > full replacement themselves, the odds of getting community by-in are
>> > very
>> > low.
>>
>> Right, this is exactly the sort of thing I'd like to gather a group of
>> design-minded folks around in an Architecture WG. Oslo is busy with the
>> implementations we have now, but I'm sure many oslo contributors would
>> like to come up for air and talk about the design issues, and come up
>> with a current design, and some revisions to it, or a whole new one,
>> that can be used to put these summit hallway rumors to rest.
>
>
> I'd say the issue is comparatively easy to describe.  In a call sequence:
>
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
>
> ... you can have a fault at any point in that message flow (consider crashes
> or program restarts).  If you ask for something to happen, you wait for a
> reply, and you don't get one, what does it mean?  The operation may have
> happened, with or without success, or it may not have gotten to the far end.
> If you send the message, does that mean you'd like it to cause an action
> tomorrow?  A year from now?  Or perhaps you'd like it to just not happen?
> Do you understand what Oslo promises you here, and do you think every person
> who ever wrote an RPC call in the whole OpenStack solution also understood
> it?
>

Precisely - IMHO it's a shortcoming of the current o.m. RPC (and
Notification) API that it does not let the API user explicitly set
the desired delivery guarantee when publishing.  Right now it's
implied that the delivery guarantee is "At Most Once", but that's
not precisely defined in any meaningful way.

Any messaging API should be explicit regarding what delivery
guarantee(s) are possible.  In addition, an API should allow the user
to designate the importance of a message on a per-send basis:  Can
this message be dropped?  Can this message be duplicated?  At what
point in time does the message become invalid (already offered for RPC
via a timeout, but not for Notifications IIRC)?  And so on.

And well-understood failure modes... things always fail...
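
Purely to illustrate the kind of per-send contract I mean - this API is
hypothetical, nothing like it exists in oslo.messaging today:

    # hypothetical API - none of these flags exist in oslo.messaging
    client.cast(ctxt, 'update_stats',
                delivery='at-most-once',   # may be dropped, never duplicated
                ttl=30)                    # invalid after 30 seconds

    client.call(ctxt, 'create_volume',
                delivery='at-least-once',  # may be duplicated, never dropped
                timeout=60)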


> I have opinions about other patterns we could use, but I don't want to push
> my solutions here, I want to see if this is really as much of a problem as
> it looks and if people concur with my summary above.  However, the right
> approach is most definitely to create a new and more fitting set of oslo
> interfaces for communication patterns, and then to encourage people to move
> to the new ones from the old.  (Whether RabbitMQ is involved is neither here
> nor there, as this is really a question of Oslo APIs, not their
> implementation.)
>

Hmm... maybe.   Message bus technology is varied, and so is its
behavior.  There are brokerless, point-to-point backends supported by
oslo.messaging [1],[2] which will exhibit different
capabilities/behaviors from the traditional broker-based
store-and-forward backend (e.g. message acking end-to-end vs to the
intermediary).

All the more reason to have explicit delivery guarantees and well
understood failure scenarios defined by the API.

[1] http://docs.openstack.org/developer/oslo.messaging/zmq_driver.html
[2] http://docs.openstack.org/developer/oslo.messaging/AMQP1.0.html


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-09-01 Thread Ken Giusti
On Wed, Aug 31, 2016 at 6:02 PM, Clint Byrum  wrote:
> Excerpts from Ian Wells's message of 2016-08-31 12:30:45 -0700:
>> On 31 August 2016 at 10:12, Clint Byrum  wrote:
>>
>> > Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
>> > > On 31 August 2016 at 11:57, Bogdan Dobrelya 
>> > wrote:
>> > >
>> > > > I agree that RPC design pattern, as it is implemented now, is a major
>> > > > blocker for OpenStack in general. It requires a major redesign,
>> > > > including handling of corner cases, on both sides, *especially* RPC
>> > call
>> > > > clients. Or maybe it just has to be abandoned to be replaced by a
>> > more
>> > > > cloud friendly pattern.
>> > >
>> > >
>> > > Is there a writeup anywhere on what these issues are? I've heard this
>> > > sentiment expressed multiple times now, but without a writeup of the
>> > issues
>> > > and the design goals of the replacement, we're unlikely to make progress
>> > on
>> > > a replacement - even if somebody takes the heroic approach and writes a
>> > > full replacement themselves, the odds of getting community by-in are very
>> > > low.
>> >
>> > Right, this is exactly the sort of thing I'd like to gather a group of
>> > design-minded folks around in an Architecture WG. Oslo is busy with the
>> > implementations we have now, but I'm sure many oslo contributors would
>> > like to come up for air and talk about the design issues, and come up
>> > with a current design, and some revisions to it, or a whole new one,
>> > that can be used to put these summit hallway rumors to rest.
>> >
>>
>> I'd say the issue is comparatively easy to describe.  In a call sequence:
>>
>> 1. A sends a message to B
>> 2. B receives messages
>> 3. B acts upon message
>> 4. B responds to message
>> 5. A receives response
>> 6. A acts upon response
>>
>> ... you can have a fault at any point in that message flow (consider
>> crashes or program restarts).  If you ask for something to happen, you wait
>> for a reply, and you don't get one, what does it mean?  The operation may
>> have happened, with or without success, or it may not have gotten to the
>> far end.  If you send the message, does that mean you'd like it to cause an
>> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
>> happen?  Do you understand what Oslo promises you here, and do you think
>> every person who ever wrote an RPC call in the whole OpenStack solution
>> also understood it?
>>
>> I have opinions about other patterns we could use, but I don't want to push
>> my solutions here, I want to see if this is really as much of a problem as
>> it looks and if people concur with my summary above.  However, the right
>> approach is most definitely to create a new and more fitting set of oslo
>> interfaces for communication patterns, and then to encourage people to move
>> to the new ones from the old.  (Whether RabbitMQ is involved is neither
>> here nor there, as this is really a question of Oslo APIs, not their
>> implementation.)
>
> I think it's about time we get some Architecture WG meetings started,
> and put "Document RPC design" on the agenda.
>

+1 I'm certainly interested in helping out here.



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-14 Thread Ken Giusti
Hi Doug,

The pyngus library - it was originally designed as part of the
oslo.messaging amqp1 driver but proved to be useful as a standalone
messaging API built on Proton.

https://pypi.python.org/pypi/pyngus

On Mon, Jun 13, 2016 at 3:11 PM, Doug Hellmann  wrote:
> I'm trying to pull together some information about contributions
> that OpenStack community members have made *upstream* of OpenStack,
> via code, docs, bug reports, or anything else to dependencies that
> we have.
>
> If you've made a contribution of that sort, I would appreciate a
> quick note.  Please reply off-list, there's no need to spam everyone,
> and I'll post the summary if folks want to see it.
>
> Thanks,
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-06-13 Thread Ken Giusti
isn't
> really the `rpc` kind of model that oslo.messaging has been targeting at
> that point, but is more like a work-queue) be in another library with a
> clear API that explicitly is targeted at this kind of model. Thus
> instead of having a multi-personality codebase with hidden features like
> this (say in oslo.messaging) instead it gets its own codebase and API
> that is 'just right' (or more close to being 'right') for it's concept
> (vs trying to stuff it into oslo.messaging).
>
>
> What happened to the idea of adding new functions at the level of the
> call & cast functions we have now, that work with at-least-once instead
> of at-most-once semantics? Yes this is a different sort of use case, but
> it's still "messaging".
>
>
> The idea I think is dead. Joshua essentially explained the reasons in the
> previous message.
>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common RPC Message Trace Mechanism

2016-03-07 Thread Ken Giusti
Hi,

The 'trace' boolean offered by the AMQP 1.0 driver exposes a debug feature
that is provided by the Proton library.  This is specific to the Proton
library - I'm not sure kombu/zmq/etc offer a similar feature.

As Xuanzhou points out, this debug tool merely prints to stdout a summary
of each AMQP 1.0 protocol frame before it is written/after it is read from
the socket.  It prints the entire protocol exchange (control frames, etc)
and is not limited to just the message traffic.  Given that, I don't think
the transport drivers can implement such a low level debug feature unless
it is offered by the protocol libraries themselves.
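
For reference, turning the Proton frame trace on is just a driver
config knob - something like:

    [oslo_messaging_amqp]
    # dump a summary of every AMQP 1.0 protocol frame to stdout
    trace = true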

-K


On Sun, Mar 6, 2016 at 11:55 PM Xuanzhou Perry Dong 
wrote:

> Hi, Boris,
>
> Thanks for your response.
>
> I refer to the very simple type of "trace": just print out the RPC
> messages to stdout/stderr/syslog. I have checked the osprofiler project and
> think that it is very good and could solve my problem if it is used by the
> Openstack projects to trace their RPC calls.
>
>
> Best Regards,
> Xuanzhou Perry Dong
>
> At 2016-03-07 12:27:12, "Boris Pavlovic"  wrote:
>
> Xuanzhou,
>
> I am not sure what do you mean by "trace". But if you need something that
> allows to do cross service/project tracing then you should take a look at
> osprofiler:
> https://github.com/openstack/osprofiler
>
> Best regards,
> Boris Pavlovic
>
> On Sun, Mar 6, 2016 at 8:15 PM, Xuanzhou Perry Dong 
> wrote:
>
>> Hi,
>>
>> I am looking for a common RPC message trace mechanism in oslo_messaging.
>> This message trace mechanism needs to be common to all drivers. Currently
>> some documentation mentions that oslo_messaging_amqp.trace can activate the
>> message trace (say,
>> http://docs.openstack.org/liberty/config-reference/content/networking-configuring-rpc.html).
>> But it seems that this oslo_messaging_amqp.trace is only available to the
>> Proton driver.
>>
>> Am I missing any existing common RPC message trace mechanism in oslo? If
>> there is no such mechanism, I would propose to create such a mechanism for
>> oslo.
>>
>> Any response is appreciated.
>> Thanks.
>> Best Regards,
>> Xuanzhou Perry Dong
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding Dmitry Ukhlov to Oslo-Messaging-Core

2016-02-01 Thread Ken Giusti
+1

Yay Dmitry!!


On Thu, Jan 28, 2016 at 11:00 AM Oleksii Zamiatin 
wrote:

> My big +1 for Dmitry!
>
> On Thu, Jan 28, 2016 at 5:40 PM, Doug Hellmann 
> wrote:
>
>> Excerpts from Davanum Srinivas (dims)'s message of 2016-01-28 09:25:56
>> -0600:
>> > Team,
>> >
>> > Dmitry has been focused solely on the Pika Driver this cycle:
>> > http://stackalytics.com/?user_id=dukhlov&metric=commits
>> >
>> > Now that we have Pika driver in master, i'd like to invite Dmitry to
>> > continue his work on all of Oslo.Messaging in addition to Pika.
>> > Clearly over time he will expand to other Oslo stuff (hopefully!).
>> > Let's please add him to the Oslo-Messaging-Core in the meantime.
>> >
>> > Thanks,
>> > Dims
>> >
>>
>> +1 -- Thanks for your contributions, Dmitry!
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-10-01 Thread Ken Giusti
+1

On Mon, Sep 28, 2015 at 4:37 AM Victor Stinner  wrote:

> +1 for Brant
>
> Victor
>
> Le 24/09/2015 19:12, Doug Hellmann a écrit :
> > Oslo team,
> >
> > I am nominating Brant Knudson for Oslo core.
> >
> > As liaison from the Keystone team Brant has participated in meetings,
> > summit sessions, and other discussions at a level higher than some
> > of our own core team members.  He is already core on oslo.policy
> > and oslo.cache, and given his track record I am confident that he would
> > make a good addition to the team.
> >
> > Please indicate your opinion by responding with +1/-1 as usual.
> >
> > Doug
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-22 Thread Ken Giusti
On Mon, Jun 22, 2015 at 2:27 PM Adam Young  wrote:

> On 06/20/2015 10:28 AM, Flavio Percoco wrote:
> >>
> >
> > As promissed: https://review.openstack.org/#/c/193804/
> >
> > Cheers,
> You can't deprecate a driver without providing a viable alternative.
>
> Right now, QPID is the only driver that supports  Kerberos.
>
> To support Kerberos, you need support for the GSSAPI library, which is
> usually done via support for SASL.  Why is it so convoluted? Historical reasons...
>
> We've talked with both teams (I work with Ken) and I think Proton is
> likely going to be the first to have support.  The folks working on
> Rabbit have the double hurdle of getting SASL support into Erlang first,
> and then support for SASL into Rabbit. They've indicated a preference
> for getting it in to the AMQP 1.0 driver, and not bothering with the
> exisiting, but, check me on this, the Oso.Messaging  code only support
> the pre 1.0 Rabbit.
>
>
> So... until we have a viable alternative, please leave QPID alone. I've
> not been bothering people about it, as there seems to be work moving
> ahead, but until either Rabbit or Proton supports Kerberos, I need QPID
> as is.
>
>
Re: proton - Kerberos support is in progress upstream [0], [1], but as you
point out it's not yet available.  That blocks Kerberos support for the
amqp1.0 driver.

Once proton does release that, and the amqp1.0 driver adopts it, you'll be
able to migrate to the amqp1.0 driver and continue to work with the QPID
broker (as long as the version of the QPID broker supports AMQP 1.0).

That doesn't help you now, but it is something to plan for.

[0] https://issues.apache.org/jira/browse/PROTON-334
[1] https://issues.apache.org/jira/browse/PROTON-911


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-19 Thread Ken Giusti
On Fri, Jun 19, 2015 at 4:47 PM Clint Byrum  wrote:

> Excerpts from Ken Giusti's message of 2015-06-19 08:01:46 -0700:
> > On Fri, Jun 19, 2015 at 2:15 AM Flavio Percoco 
> wrote:
> >
> > > On 18/06/15 16:37 -0400, Doug Hellmann wrote:
> > > >Excerpts from Clint Byrum's message of 2015-06-18 12:47:21 -0700:
> > > >> Hello! I know there's been a lot of churn and misunderstanding over
> the
> > > >> recent devstack changes, so I wanted to make it clear where we're
> going
> > > >> with messaging drivers now that the policy [1] was approved.
> > > >>
> > > >> According to the policy, drivers need to have at least 60% unit test
> > > >> coverage, and an integration test suite with at least 3 "popular"
> > > >> OpenStack projects, with preference for Nova, Cinder, and Glance,
> and
> > > >> individuals who will support said test suite.
> > > >>
> > > >> So, with that, the following is the status of each driver in tree
> right
> > > >> now:
> > > >>
> > > >> rabbit - 89% Unit test coverage. Being the default in devstack, and
> > > >> the default in nearly every project's config files, this one is
> heavily
> > > >> integration tested. There are multiple individuals who have proven
> to
> > > >> be able to debug failures related to RabbitMQ and are well known in
> > > >> the community.
> > > >
> > > >+1
> > > >
> > > >>
> > > >> qpid - Unit test coverage is at 77%, so it passes that bar. I cannot
> > > >> find any integration testing being done, so it fails that. I also
> have
> > > >> not found anyone who will step up and support it. So this should be
> > > >> deprecated immediately.
> > > >
> > > >+1 - I believe Ken and the other folks interested in this indicated
> that
> > > >the AMQP 1.0 driver should replace this one.
> > >
> > > The qpid driver should be deprecated, I'll be doing so in the next
> > > couple of days. Look forward to the patch.
> > >
> > > +1
> >
> > > >
> > > >Speaking of AMQP 1.0, you don't mention that one (it uses qpid, but is
> > > >separate from the driver named "qpid").
> > >
> > > I'd like to clarify something about the AMQP 1.0 driver. It's not a
> > > direct replacement for the qpidd one because it uses an entirely
> > > different protocol that recently became a standard.
> > >
> > > The reason I mention this is because it doesn't really require qpidd -
> > > note the double d - which is the broker daemon in the qpid family. I
> > > guess the confusion comes because the library it sits on top of is
> > > called qpid-proton.
> > >
> > > The qpid family is a set of tools that provide messaging capabilities.
> > > Among those you find qpidd (broker daemon), qpid-proton (amqp1.0
> > > library), qpid-dispatch (message router). It's confusing indeed.
> > >
> > > The importance of this distinction is that the amqp1.0 driver in
> > > oslo.messaging is intended as a protocol-based driver and not
> > > targeting any technology. That is to say, it could be written
> > > using a library that is not qpid-proton and it can talk to any message
> > > router or message broker that has support for amqp 1.0.
> > >
> > >
> > +1 - yeah, we really shouldn't be considering the amqp1 driver as simply
> > the "replacement qpid driver" - as Flavio points out it has the potential
> > to provide compatibility with other messaging back ends.
> >
> > Clint - can you also include separate metrics for the amqp1 driver?
> >
>
> It's far less clear to me how to measure the unit test coverage of the
> amqp driver. I'll wait for Flavio's answer to my question about where
> the code lives, because it is definitely not organized like the others.
>
>
No, it isn't.  At the time we proposed this structure for the amqp1 driver,
there really wasn't much formal structure within the driver directory.  We
had impl_rabbit, which was copied to impl_qpid in the hopes that all the
rest of the code could be shared (it didn't work out very well).  And the
zmq code wasn't working at the time.  And all of it was crammed into one
flat directory.

Rather than heap yet more stuff into that directory, we proposed to put the
amqp1 driver into its own sub-directory.  We chose a 'protocols'
subdirectory, within which the amqp1 driver lives in an 'amqp'
subdirectory.  The hope was that other protocols would put their
implementation bits into that protocols directory rather than lump them in
directly under _drivers.

It looks like the zmq driver will be structured a bit differently from that -
there will be a top-level impl_zmq file alongside a zmq_driver directory.

I like the way zmq has laid out the sources, and maybe that should be the
'official' structure for oslo.messaging drivers.  If that's the case I'll be
more than happy to arrange the amqp1 driver in a similar manner.

I've gone off into the weeds, sorry.

The amqp1 driver lives in the _drivers/protocols/amqp directory (for now).
And yes, those unit tests should be in the tests/drivers directory like
everything else - when the tests/drivers directory was created that file
wasn't moved.

Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-19 Thread Ken Giusti
I've already +1'd this patch - timing issues aside - I think this is a Good
Thing from a developer's point of view.  Particularly, my own selfish point
of view :)  I envision the ability to support multiple different messaging
services via the amqp1 driver.  Having to keep devstack updated with an
array of supported configurations is gonna be a nightmare for all
involved.  I'd much rather have a small independent plugin to work on
rather than having to get every change included into devstack proper.

And thanks to Sean's excellent example I've started a plugin for the
amqp1.0 driver (totally untested at this point), so we'll have that covered
[0].   Thanks Sean!

That said, the only concern I have with this patch is whether it will
result in a less well-tested oslo.messaging.

O.M. is supposed to be an abstraction of the messaging bus - it's not just
"RPC and Notifications over RabbitMQ", is it?   How do we validate that
abstraction if we don't thoroughly integration test O.M. over multiple
different drivers/backends?  Other folks have already raised the issue of
rabbit-specific behaviors likely leaking through the API, especially with
respect to failure scenarios.   If we make it harder to run integration
tests over anything but the rabbit driver, then we risk breaking that
abstraction in such a way that using anything _other_ than rabbit will be
practically impossible.

[0] https://github.com/kgiusti/amqp1-devstack
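
To be concrete about what that abstraction buys us: the same application
code should behave identically no matter which driver sits underneath -
only the transport URL changes.  A minimal sketch (URLs illustrative):

    from oslo_config import cfg
    import oslo_messaging

    # same application code, different bus - only the URL changes
    url = 'rabbit://guest:guest@localhost:5672/'   # or 'amqp://...'
    transport = oslo_messaging.get_transport(cfg.CONF, url=url)
    target = oslo_messaging.Target(topic='demo')
    client = oslo_messaging.RPCClient(transport, target)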

On Thu, Jun 18, 2015 at 12:28 PM Clint Byrum  wrote:

> Excerpts from Sean Dague's message of 2015-06-18 07:09:56 -0700:
> > On 06/18/2015 09:54 AM, ozamiatin wrote:
> > > Hi Sean,
> > >
> > > Thanks a lot for the plugin!
> > > I was a little bit confused with a commit message and dropping of
> > > drivers support.
> > > It turns out it's really not so hard to test the zeromq driver with the plugin.
> >
> > Yes, that was the design goal with the whole plugin mechanism. To
> > provide an experience that was so minimally different from vanilla
> > devstack, that most people would never even notice. It's honestly often
> > easier to explain to people how to enable a service via a plugin than
> > the config variables in tree.
> >
>
> +1
>
> > > So I have no objections any more and removing my -1.
> >
> > Cool, great. It would be great if you or someone else could post a
> > review to pull this code into gerrit somewhere. You'll need the code in
> > gerrit to use it in devstack-gate jobs, as we don't allow projects
> > outside of gerrit to be cloned during tests.
> >
> > > But I also agree with Doug Hellmann and other speakers that we should
> > > not make such changes
> > > so fast.
> >
> > The reason I didn't think the timetable was unreasonable was how quickly
> > this transition could be made, and how little code is needed to make one
> > of these plugins. And the fact that from there on out you get to be in
> > control of landing fixes or enhancements for your backend on the
> > timetable that works for you.
> >
> > It will make getting the devstack-gate jobs working reliably a lot
> > simpler and quicker for your team.
> >
>
> Agreed on all points. I believe that the mistake was simply that
> there wasn't any need to hold hands for those who we are enabling to
> move faster and more independently. We do, in fact, need to transfer
> ownership gracefully. Thanks so much for writing the plugin for zmq,
> that is a huge win for zmq developers. I can't speak for oslo directly,
> but it seems like that plugin should land under oslo's direct stewardship
> and then we can move forward with this.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [qpid] [zmq] [RabbitMQ] [oslo] Pending deprecation of driver(s).

2015-06-19 Thread Ken Giusti
On Fri, Jun 19, 2015 at 2:15 AM Flavio Percoco  wrote:

> On 18/06/15 16:37 -0400, Doug Hellmann wrote:
> >Excerpts from Clint Byrum's message of 2015-06-18 12:47:21 -0700:
> >> Hello! I know there's been a lot of churn and misunderstanding over the
> >> recent devstack changes, so I wanted to make it clear where we're going
> >> with messaging drivers now that the policy [1] was approved.
> >>
> >> According to the policy, drivers need to have at least 60% unit test
> >> coverage, and an integration test suite with at least 3 "popular"
> >> OpenStack projects, with preference for Nova, Cinder, and Glance, and
> >> individuals who will support said test suite.
> >>
> >> So, with that, the following is the status of each driver in tree right
> >> now:
> >>
> >> rabbit - 89% Unit test coverage. Being the default in devstack, and
> >> the default in nearly every project's config files, this one is heavily
> >> integration tested. There are multiple individuals who have proven to
> >> be able to debug failures related to RabbitMQ and are well known in
> >> the community.
> >
> >+1
> >
> >>
> >> qpid - Unit test coverage is at 77%, so it passes that bar. I cannot
> >> find any integration testing being done, so it fails that. I also have
> >> not found anyone who will step up and support it. So this should be
> >> deprecated immediately.
> >
> >+1 - I believe Ken and the other folks interested in this indicated that
> >the AMQP 1.0 driver should replace this one.
>
> The qpid driver should be deprecated, I'll be doing so in the next
> couple of days. Look forward to the patch.
>
> +1


> >
> >Speaking of AMQP 1.0, you don't mention that one (it uses qpid, but is
> >separate from the driver named "qpid").
>
> I'd like to clarify something about the AMQP 1.0 driver. It's not a
> direct replacement for the qpidd one because it uses an entirely
> different protocol that recently became a standard.
>
> The reason I mention this is because it doesn't really require qpidd -
> note the double d - which is the broker daemon in the qpid family. I
> guess the confusion comes because the library it sits on top of is
> called qpid-proton.
>
> The qpid family is a set of tools that provide messaging capabilities.
> Among those you find qpidd (broker daemon), qpid-proton (amqp1.0
> library), qpid-dispatch (message router). It's confusing indeed.
>
> The importance of this distinction is that the amqp1.0 driver in
> oslo.messaging is intended as a protocol-based driver and not
> targeting any technology. That is to say, it could be written
> using a library that is not qpid-proton and it can talk to any message
> router or message broker that has support for amqp 1.0.
>
>
+1 - yeah, we really shouldn't be considering the amqp1 driver as simply
the "replacement qpid driver" - as Flavio points out it has the potential
to provide compatibility with other messaging back ends.

Clint - can you also include separate metrics for the amqp1 driver?




> The ones we're targeting for the gate are rabbitmq (with the amqp 1.0
> plugin enabled) and qpidd.
>
> Since we're at it, let me share some updates:
>
> The driver unittests now run on every python2 job and the work on
> python3 is almost done. There's also a functional tests gate like we
> have for other drivers.
>
> The missing bit is an integration gate, which we'll start working
> on in the next couple of days.
>
> Hope the above helps clarifying confusions around this driver.
>
> >
> >>
> >> zmq - Unit test coverage is 74%. There are no currently active
> integration
> >> tests in OpenStack's infrastructure. Several individuals have self
> >> identified as being willing to work on creating them. We have not had
> >> the conversations yet about ongoing support. I recommend we continue to
> >> support their effort to become policy compliant. If that does not
> solidify
> >> by M3 of Liberty, or if the "new" zmq driver appears with integration
> >> tests and support manpower, then we can deprecate at that time.
> >
> >+1 - I know interest has been growing in this, so let's keep it going
> >and see where we end up.
> >
> >>
> >> There's also the question of _how_ to deprecate them. I figure we should
> >> deprecate when the driver is loaded. Is there an existing mechanism
> >> that someone can point me to, or should I look at adding that to
> >> oslo.messaging/stevedore?
> >
> >Normally we would recommend using versionutils from oslo.log, but we've
> >been trying to avoid making oslo.log a dependency of the oslo libs
> >because it uses some of them and that introduces cycles. Dims had a
> >patch recently that just used a DeprecationWarning, and relied on
> >oslo.log to redirect the warning to the log file. That seems like a good
> >pattern to repeat.
>
> Can we use debtcollector to decorate the main driver class? A warning
> will be printed every time an instance of such a class is created
> (rather than at import time).
>
> If we don't want to add a dependency on that, we co

Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-16 Thread Ken Giusti
On Wed, Apr 15, 2015 at 8:18 PM, Joshua Harlow  wrote:
> Ken Giusti wrote:
>>
>> On Wed, Apr 15, 2015 at 1:33 PM, Doug Hellmann
>> wrote:
>>>
>>> Excerpts from Ken Giusti's message of 2015-04-15 09:31:18 -0400:
>>>>
>>>> On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow
>>>> wrote:
>>>>>
>>>>> Ken Giusti wrote:
>>>>>>
>>>>>> Just to be clear: you're asking specifically about the 0-10 based
>>>>>> impl_qpid.py driver, correct?   This is the driver that is used for
>>>>>> the "qpid://" transport (aka rpc_backend).
>>>>>>
>>>>>> I ask because I'm maintaining the AMQP 1.0 driver (transport
>>>>>> "amqp://") that can also be used with qpidd.
>>>>>>
>>>>>> However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
>>>>>> dependency on Proton, which has yet to be ported to python 3 - though
>>>>>> that's currently being worked on [1].
>>>>>>
>>>>>> I'm planning on porting the AMQP 1.0 driver once the dependent
>>>>>> libraries are available.
>>>>>>
>>>>>> [1]: https://issues.apache.org/jira/browse/PROTON-490
>>>>>
>>>>>
>>>>> What's the expected date on this as it appears this also blocks python
>>>>> 3
>>>>> work as well... Seems like that hasn't been updated since nov 2014
>>>>> which
>>>>> doesn't inspire that much confidence (especially for what appears to be
>>>>> mostly small patches).
>>>>>
>>>> Good point.  I reached out to the bug owner.  He got it 'mostly
>>>> working' but got hung up on porting the proton unit tests.   I've
>>>> offered to help this along and he's good with that.  I'll make this a
>>>> priority to move this along.
>>>>
>>>> In terms of availability - proton tends to do releases about every 4-6
>>>> months.  They just released 0.9, so the earliest availability would be
>>>> in that 4-6 month window (assuming that should be enough time to
>>>> complete the work).   Then there's the time it will take for the
>>>> various distros to pick it up...
>>>>
>>>> so, definitely not 'real soon now'. :(
>>>
>>> This seems like a case where if we can get the libs we need to a point
>>> where they install via pip, we can let the distros catch up instead of
>>> waiting for them.
>>>
>>
>> Sadly just the python wrappers are available via pip.  Its C extension
>> requires that the native proton shared library (libqpid-proton) is
>> available.   To date we've relied on the distro to provide that
>> library.
>
>
> How does that (c extension) work with eventlet? Does it?
>

I haven't experienced any issues in my testing.

To be clear - the libqpid-proton library is non-blocking and
non-threading.  It's simply a protocol processing engine - the driver
hands it raw network data and messages magically pop out (and vice
versa).

All I/O, blocking, threading, etc. is done in the python driver itself.
I suspect there's nothing eventlet needs to do that requires
overloading functionality provided by the binary proton library, but
my knowledge of eventlet is pretty slim.
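
If it helps, the shape of that interaction is roughly the following - a
generic sketch of the 'protocol engine' pattern with invented names, not
the actual proton API:

    class ProtocolEngine(object):
        """Invented stand-in: the engine never touches the socket or
        spawns threads; it only consumes and produces raw bytes."""
        def feed(self, data):
            self._data = data    # parse frames, queue decoded messages
        def pending_messages(self):
            return []            # decoded messages ready for dispatch
        def output(self):
            return b''           # protocol bytes the caller must write

    # the driver's own event loop does all I/O, blocking and threading:
    engine = ProtocolEngine()
    engine.feed(b'raw bytes read from the broker socket')
    for msg in engine.pending_messages():
        print(msg)
    outgoing = engine.output()   # handed back to the socket layer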


>
>>
>>> Similarly, if we have *an* approach for Python 3 on oslo.messaging, that
>>> means the library isn't blocking us from testing applications with
>>> Python 3. If some of the drivers lag, their test jobs may need to be
>>> removed or disabled if the apps start testing under Python 3.
>>>
>>> Doug
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Ken Giusti
On Wed, Apr 15, 2015 at 1:33 PM, Doug Hellmann  wrote:
> Excerpts from Ken Giusti's message of 2015-04-15 09:31:18 -0400:
>> On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow  wrote:
>> > Ken Giusti wrote:
>> >>
>> >> Just to be clear: you're asking specifically about the 0-10 based
>> >> impl_qpid.py driver, correct?   This is the driver that is used for
>> >> the "qpid://" transport (aka rpc_backend).
>> >>
>> >> I ask because I'm maintaining the AMQP 1.0 driver (transport
>> >> "amqp://") that can also be used with qpidd.
>> >>
>> >> However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
>> >> dependency on Proton, which has yet to be ported to python 3 - though
>> >> that's currently being worked on [1].
>> >>
>> >> I'm planning on porting the AMQP 1.0 driver once the dependent
>> >> libraries are available.
>> >>
>> >> [1]: https://issues.apache.org/jira/browse/PROTON-490
>> >
>> >
>> > What's the expected date on this as it appears this also blocks python 3
>> > work as well... Seems like that hasn't been updated since nov 2014 which
>> > doesn't inspire that much confidence (especially for what appears to be
>> > mostly small patches).
>> >
>>
>> Good point.  I reached out to the bug owner.  He got it 'mostly
>> working' but got hung up on porting the proton unit tests.   I've
>> offered to help this along and he's good with that.  I'll make this a
>> priority to move this along.
>>
>> In terms of availability - proton tends to do releases about every 4-6
>> months.  They just released 0.9, so the earliest availability would be
>> in that 4-6 month window (assuming that should be enough time to
>> complete the work).   Then there's the time it will take for the
>> various distros to pick it up...
>>
>> so, definitely not 'real soon now'. :(
>
> This seems like a case where if we can get the libs we need to a point
> where they install via pip, we can let the distros catch up instead of
> waiting for them.
>

Sadly just the python wrappers are available via pip.  Its C extension
requires that the native proton shared library (libqpid-proton) is
available.   To date we've relied on the distro to provide that
library.

> Similarly, if we have *an* approach for Python 3 on oslo.messaging, that
> means the library isn't blocking us from testing applications with
> Python 3. If some of the drivers lag, their test jobs may need to be
> removed or disabled if the apps start testing under Python 3.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-15 Thread Ken Giusti
On Tue, Apr 14, 2015 at 6:23 PM, Joshua Harlow  wrote:
> Ken Giusti wrote:
>>
>> Just to be clear: you're asking specifically about the 0-10 based
>> impl_qpid.py driver, correct?   This is the driver that is used for
>> the "qpid://" transport (aka rpc_backend).
>>
>> I ask because I'm maintaining the AMQP 1.0 driver (transport
>> "amqp://") that can also be used with qpidd.
>>
>> However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
>> dependency on Proton, which has yet to be ported to python 3 - though
>> that's currently being worked on [1].
>>
>> I'm planning on porting the AMQP 1.0 driver once the dependent
>> libraries are available.
>>
>> [1]: https://issues.apache.org/jira/browse/PROTON-490
>
>
> What's the expected date on this as it appears this also blocks python 3
> work as well... Seems like that hasn't been updated since nov 2014 which
> doesn't inspire that much confidence (especially for what appears to be
> mostly small patches).
>

Good point.  I reached out to the bug owner.  He got it 'mostly
working' but got hung up on porting the proton unit tests.   I've
offered to help this along and he's good with that.  I'll make this a
priority to move this along.

In terms of availability - proton tends to do releases about every 4-6
months.  They just released 0.9, so the earliest availability would be
in that 4-6 month window (assuming that should be enough time to
complete the work).   Then there's the time it will take for the
various distros to pick it up...

so, definitely not 'real soon now'. :(

>
>>
>> On Tue, Apr 14, 2015 at 1:22 PM, Clint Byrum  wrote:
>>>
>>> Hello! There's been some recent progress on python3 compatibility for
>>> core libraries that OpenStack depends on[1], and this is likely to open
>>> the flood gates for even more python3 problems to be found and fixed.
>>>
>>> Recently a proposal was made to make oslo.messaging start to run python3
>>> tests[2], and it was found that qpid-python is not python3 compatible
>>> yet.
>>>
>>> This presents us with questions: Is anyone using QPID, and if so, should
>>> we add gate testing for it? If not, can we deprecate the driver? In the
>>> most recent survey results I could find [3] I don't even see message
>>> broker mentioned, whereas Databases in use do vary somewhat.
>>>
>>> Currently it would appear that only oslo.messaging runs functional tests
>>> against QPID. I was unable to locate integration testing for it, but I
>>> may not know all of the places to dig around to find that.
>>>
>>> So, please let us know if QPID is important to you. Otherwise it may be
>>> time to unburden ourselves of its maintenance.
>>>
>>> [1] https://pypi.python.org/pypi/eventlet/0.17.3
>>> [2] https://review.openstack.org/#/c/172135/
>>> [3]
>>> http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] QPID incompatible with python 3 and untested in gate -- what to do?

2015-04-14 Thread Ken Giusti
Just to be clear: you're asking specifically about the 0-10 based
impl_qpid.py driver, correct?   This is the driver that is used for
the "qpid://" transport (aka rpc_backend).

I ask because I'm maintaining the AMQP 1.0 driver (transport
"amqp://") that can also be used with qpidd.

However, the AMQP 1.0 driver isn't yet Python 3 compatible due to its
dependency on Proton, which has yet to be ported to python 3 - though
that's currently being worked on [1].

I'm planning on porting the AMQP 1.0 driver once the dependent
libraries are available.

[1]: https://issues.apache.org/jira/browse/PROTON-490

On Tue, Apr 14, 2015 at 1:22 PM, Clint Byrum  wrote:
> Hello! There's been some recent progress on python3 compatibility for
> core libraries that OpenStack depends on[1], and this is likely to open
> the flood gates for even more python3 problems to be found and fixed.
>
> Recently a proposal was made to make oslo.messaging start to run python3
> tests[2], and it was found that qpid-python is not python3 compatible yet.
>
> This presents us with questions: Is anyone using QPID, and if so, should
> we add gate testing for it? If not, can we deprecate the driver? In the
> most recent survey results I could find [3] I don't even see message
> broker mentioned, whereas Databases in use do vary somewhat.
>
> Currently it would appear that only oslo.messaging runs functional tests
> against QPID. I was unable to locate integration testing for it, but I
> may not know all of the places to dig around to find that.
>
> So, please let us know if QPID is important to you. Otherwise it may be
> time to unburden ourselves of its maintenance.
>
> [1] https://pypi.python.org/pypi/eventlet/0.17.3
> [2] https://review.openstack.org/#/c/172135/
> [3] 
> http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-26 Thread Ken Giusti
I've come up with a patch to the amqp 1.0 driver that resets the
connection state if the pid of the current process differs from that of
the process that created the connection:

https://review.openstack.org/#/c/134684/

I've managed to get this to work using rpc_workers=4 in neutron.conf,
which failed consistently pre-patch.
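
The gist of the fix, for the curious, is a pid check done when the
connection is used - a simplified sketch with invented names, not the
actual driver code:

    import os

    class Connection(object):
        def __init__(self):
            self._pid = os.getpid()
            self._sock = self._connect()

        def _connect(self):
            # in the real driver this opens the socket and starts the
            # background I/O thread; a dummy object stands in here
            return object()

        def ensure(self):
            # a socket created before os.fork() is shared between
            # parent and child; if our pid changed, discard the
            # inherited state and reconnect in this process
            if os.getpid() != self._pid:
                self._pid = os.getpid()
                self._sock = self._connect()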



On Tue, Nov 25, 2014 at 11:16 AM, Mehdi Abaakouk  wrote:
>
>
>
>> Mmmm... I don't think it's that clear (re: an application issue).  I
>> mean, yes - the application is doing the os.fork() at the 'wrong'
>> time, but where is this made clear in the oslo.messaging API
>> documentation?
>> I think this is the real issue here:  what is the "official" guidance
>> for using os.fork() and its interaction with oslo libraries?
>>
>> In the case of oslo.messaging, I can't find any mention of os.fork()
>> in the API docs (I may have missed it - please correct me if so).
>> That would imply - at least to me - that there are _no_ restrictions on
>> using os.fork() together with oslo.messaging.
>
>
> Yes, I agree we should add a note on that in oslo.messaging (and perhaps in
> oslo.db too).
>
> Also, the os.fork() is done by the service.ProcessLauncher of
> oslo-incubator, and it's not (yet) documented. But once the oslo.service
> library is released, it will be.
>
>> But in the case of qpid, that is definitely _not_ the case.
>>
>> The legacy qpid driver - impl_qpid - imports a 3rd party library, the
>> qpid.messaging API.   This library uses threading.Thread internally,
>> we (consumers of this library) have no control over how that thread is
>> managed.  So for impl_qpid, os.fork()'ing after the driver is loaded
>> can't be guaranteed to work.   In fact, I'd say os.fork() and
>> impl_qpid will not work - full stop.
>
>
> Yes, I have tried it, and I have caught what happens and I can confirm that
> too now, unfortunately :( And this can occur with any driver if the 3rd
> party library doesn't work when we use os.fork()
>
>>> For the amqp1 driver case, I think this is the same thing, it seems to
>>> do lazy creation of the connection too.
>>>
>>
>> We have more flexibility here, since the driver directly controls when
>> the thread is spawned.  But the very fact that the thread is used
>> places a restriction on how oslo.messaging and os.fork() can be used
>> together, which isn't made clear in the documentation for the library.
>>
>> I'm not familiar with the rabbit driver - I've seen some patches for
>> heartbeating in rabbit introduce threading, so there may also be an
>> implication there as well.
>
>
> Yes, we need to check that.
>
>>> Personally, I don't like this API, because the behavior difference
>>> between
>>> '__init__' and 'start' is too implicit.
>>>
>>
>> That's true, but I'd say that the problem of implicitness re:
>> os.fork() needs to be clarified at the library level as well.
>>
>
> I agree.
>
> I will write the documentation patch for oslo.messaging.
>
> Cheers,
>
> ---
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>



-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Ken Giusti
Hi Mehdi

On Tue, Nov 25, 2014 at 5:38 AM, Mehdi Abaakouk  wrote:
>
> Hi,
>
> I think the main issue is the behavior of the API
> of oslo-incubator/openstack/common/service.py, specifically:
>
>  * ProcessLauncher.launch_service(MyService())
>
> And then the MyService have this behavior:
>
> class MyService:
>def __init__(self):
># CODE DONE BEFORE os.fork()
>
>def start(self):
># CODE DONE AFTER os.fork()
>
> So if an application created a FD inside MyService.__init__ or before 
> ProcessLauncher.launch_service, it will be shared between
> processes and we get this kind of issue...
>
> For the rabbitmq/qpid driver, the first connection is created when the rpc 
> server is started or when the first rpc call/cast/... is done.
>
> So if the application doesn't do that inside MyService.__init__ or before 
> ProcessLauncher.launch_service everything works as expected.
>
> But if the issue is raised I think this is an application issue (rpc stuff 
> done before the os.fork())
>

Mmmm... I don't think it's that clear (re: an application issue).  I
mean, yes - the application is doing the os.fork() at the 'wrong'
time, but where is this made clear in the oslo.messaging API
documentation?

I think this is the real issue here:  what is the "official" guidance
for using os.fork() and its interaction with oslo libraries?

In the case of oslo.messaging, I can't find any mention of os.fork()
in the API docs (I may have missed it - please correct me if so).
That would imply - at least to me - that there are _no_ restrictions on
using os.fork() together with oslo.messaging.

But in the case of qpid, that is definitely _not_ the case.

The legacy qpid driver - impl_qpid - imports a 3rd party library, the
qpid.messaging API.   This library uses threading.Thread internally,
we (consumers of this library) have no control over how that thread is
managed.  So for impl_qpid, os.fork()'ing after the driver is loaded
can't be guaranteed to work.   In fact, I'd say os.fork() and
impl_qpid will not work - full stop.

> For the amqp1 driver case, I think this is the same thing, it seems to do
> lazy creation of the connection too.
>

We have more flexibility here, since the driver directly controls when
the thread is spawned.  But the very fact that the thread is used
places a restriction on how oslo.messaging and os.fork() can be used
together, which isn't made clear in the documentation for the library.

I'm not familiar with the rabbit driver - I've seen some patches for
heartbeating in rabbit introduce threading, so there may also be an
implication there as well.


> I will take a look at the neutron code, to see if I find an rpc usage
> before the os.fork().
>

I've done some tracing of neutron-server's behavior in this case - you
may want to take a look at

 https://bugs.launchpad.net/neutron/+bug/1330199/comments/8

>
> Personally, I don't like this API, because the behavior difference between
> '__init__' and 'start' is too implicit.
>

That's true, but I'd say that the problem of implicitness re:
os.fork() needs to be clarified at the library level as well.
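
To make the implicit contract concrete, 'safe' currently amounts to
something like this (an illustrative sketch, not code from any project):

    class MyService(object):
        def __init__(self):
            # runs in the parent, *before* os.fork(): keep this to
            # plain state - no sockets, no threads
            self._server = None

        def start(self):
            # runs in the child, *after* os.fork(): only now is it
            # safe to create the connection (and the driver's
            # background I/O thread)
            self._server = self._create_rpc_server()

        def _create_rpc_server(self):
            # stand-in for the real oslo.messaging setup
            return object()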

thanks,

-K

> Cheers,
>
> ---
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
>
> On 2014-11-24 20:27, Ken Giusti wrote:
>
>> Hi all,
>>
>> As far as oslo.messaging is concerned, should it be possible for the
>> main application to safely os.fork() when there is already an active
>> connection to a messaging broker?
>>
>> I ask because I'm hitting what appears to be fork-related issues with
>> the new AMQP 1.0 driver.  I think the same problems have been seen
>> with the older impl_qpid driver as well [0]
>>
>> Both drivers utilize a background threading.Thread that handles all
>> async socket I/O and protocol timers.
>>
>> In the particular case I'm trying to debug, rpc_workers is set to 4 in
>> neutron.conf.  As far as I can tell, this causes neutron.service to
>> os.fork() four workers, but does so after it has created a listener
>> (and therefore a connection to the broker).
>>
>> This results in multiple processes all select()'ing the same set of
>> network sockets, and stuff breaks :(
>>
>> Even without the background thread, wouldn't this usage still result in
>> sockets being shared across the parent/child processes?  Seems
>> dangerous.
>>
>> Thoughts?
>>
>> [0] https://bugs.launchpad.net/oslo.messaging/+bug/1330199




-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-24 Thread Ken Giusti
Hi all,

As far as oslo.messaging is concerned, should it be possible for the
main application to safely os.fork() when there is already an active
connection to a messaging broker?

I ask because I'm hitting what appears to be fork-related issues with
the new AMQP 1.0 driver.  I think the same problems have been seen
with the older impl_qpid driver as well [0]

Both drivers utilize a background threading.Thread that handles all
async socket I/O and protocol timers.

In the particular case I'm trying to debug, rpc_workers is set to 4 in
neutron.conf.  As far as I can tell, this causes neutron.service to
os.fork() four workers, but does so after it has created a listener
(and therefore a connection to the broker).

This results in multiple processes all select()'ing the same set of
network sockets, and stuff breaks :(

Even without the background thread, wouldn't this usage still result in
sockets being shared across the parent/child processes?  Seems
dangerous.
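
Schematically, the sequence looks like this (a toy illustration, not the
actual neutron code):

    import os

    listener = open('/dev/null', 'rb')   # stands in for the broker
                                         # connection, created BEFORE fork
    for _ in range(4):                   # rpc_workers = 4
        if os.fork() == 0:               # child
            # each child inherits the parent's open file descriptor:
            # five processes now share (and select() on) one socket
            os._exit(0)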

Thoughts?

[0] https://bugs.launchpad.net/oslo.messaging/+bug/1330199

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-19 Thread Ken Giusti
On Mon Sep 8 15:18:35 UTC 2014, Doug Hellmann wrote:
>On Sep 8, 2014, at 10:35 AM, Antonio Messina  
>wrote:
>
>> Hi All,
>>
>> We tested briefly ZeroMQ with Havana last year, but we couldn't find
>> any good documentation on how to implement it, and we were not able to
>> get it working. We also got the impression that the support was not at
>> all mature, so we decided to use RabbitMQ instead.
>>
>> However, I must say that the broker-less design of ZeroMQ is very
>> appealing, and we would like to give it a try, assuming
>> 1) the documentation is improved
>> 2) there is some assurance that support for ZeroMQ is not going to be 
>> dropped.
>>
>> I can help with 1) if there is someone that knows a bit of the
>> internals and can bootstrap me, because I have no first hand
>> experience on how message queues are used in OpenStack, and little
>> experience with ZeroMQ.
>
>Unfortunately, the existing Oslo team doesn’t have a lot of
>experience with ZeroMQ either (hence this thread). It sounds like Li
>Ma’s team has made it work, though, so maybe you could work
>together. We should prioritize documentation and then functional
>testing, I think.
>
> About 2), well this is a decision for the developers, but IMHO there
> *should* be support for ZeroMQ in OpenStack: its broker-less
> architecture would eliminate a SPoF (the message broker), could ease
> the deployment (especially in HA setup) and grant very high
> performance.
>
>I agree, it would be useful to support it. This is purely a resource
>allocation problem for me. I don't have anyone willing to do the work
>needed to ensure the driver is functional and can be deployed sanely
>(although maybe I’ve found a couple of volunteers now :-).
>
>There is another effort going on to support AMQP 1.0, which (as I
>understand it) includes similar broker-less deployment options. Before
>we decide whether to invest in ZeroMQ for that reason alone, it would
>be useful to know if AMQP 1.0 support makes potential ZeroMQ support
>less interesting.
>

While the AMQP 1.0 protocol permits it, the current implementation of
the new driver does not support broker-less point-to-point - yet.

I'm planning on adding that support to the AMQP 1.0 driver in the
future.  I have yet to spend any time ramping up on the existing
brokerless support implemented by the zmq driver, so forgive my
ignorance, but I'm hoping to leverage what is there if it makes sense.

If it doesn't make sense, and the existing code is zmq specific, then
I'd be interested in working with the zmq folks to help develop a more
generic implementation that functions across both drivers.

>Doug


-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-19 Thread Ken Giusti
On Thu, 18 Sep 2014 17:29:27 Eric Windisch wrote:
>>
>>
>> That's great feedback, Eric, thank you. I know some of the other projects

+1 - yes, excellent feedback - having just worked on the AMQP 1.0
driver, I think you've nicely described some of my own experiences.

>> are moving drivers out of the main core tree, and we could certainly
>> consider doing that here as well, if we have teams willing to sign up for
>> the work it means doing.
>>
>> In addition to the zmq driver, we have a fairly stable rabbit driver, a
>> qpid driver whose quality I don't know, and a new experimental AMQP 1.0
>> driver. Are we talking about moving those out, too, or just zmq because we
>> were already considering removing it entirely?
>>
>
>I believe it makes good sense for all drivers, in the long term. However,
>the most immediate benefits would be in offloading any drivers that need
>substantial work or improvements, aka velocity. That would mean the AMQP
>and ZeroMQ drivers.
>

I'm tentatively in favor of this - 'tentative' because, noob that I am,
I'm not sure I understand the trade-offs, if any, that moving these
drivers outside of oslo.messaging would bring.

To be clear: I'm 100% for any change that makes it easier to have
non-core developers that have the proper domain knowledge contribute
to these drivers.  However, there's a few things I need to understand:

Does this move make it harder for users to deploy these
drivers?  How would we ensure that the proper, tested version of a
driver is delivered along with oslo.messaging proper?

>With the Nova drivers, what's useful is that we have tempest and we can use
>that as an integration gate. I suppose that's technically possible with
>oslo.messaging and its drivers as well, although I prefer to see a
>separation of concerns where I presume there are messaging patterns you want
>to validate that aren't exercised by Tempest.

This is critical IMHO - any proposed changes to oslo.messaging
proper, or any particular driver for that matter, needs to be vetted
against the other out-of-tree drivers automagically.  E.g. If a
proposed change to oslo.messaging breaks the out of tree AMQP 1.0
driver, that needs to be flagged by jenkins during the gerrit review
of the proposed oslo.messaging patch.

>
>Another thing I'll note is that before pulling Ironic in, Nova had an API
>contract test. This can be useful for making sure that changes in the
>upstream project doesn't break drivers, or that breakages could at least
>invoke action by the driver team:
>https://github.com/openstack/nova/blob/4ce3f55d169290015063131134f93fca236807ed/nova/tests/virt/test_ironic_api_contracts.py
>
>--
>Regards,
>Eric Windisch

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][feature freeze exception] Can I get an exception for AMQP 1.0?

2014-09-03 Thread Ken Giusti
On Wed Sep 3 19:23:52 UTC 2014, Doug Hellmann wrote:
>On Sep 3, 2014, at 2:03 PM, Ken Giusti  wrote:
>> Hello,
>>
>> I'm proposing a freeze exception for the oslo.messaging AMQP 1.0
>> driver:
>>
>>   https://review.openstack.org/#/c/75815/

>> Thanks,
>>
>> Ken
>
>Ken,
>
>I think we’re generally in favor of including the new driver, but
>before I say so officially can you fill us in on the state of the
>additional external libraries it needs? I see pyngus on pypi, and you
>mention in the “Request to include AMQP 1.0 support in Juno-3” thread
>that proton is being packaged in EPEL and work is ongoing for
>Debian. Is that done (it has only been a few days, I know)?
>

Hi Doug,

Yes, AMQP 1.0 tech is pretty new, so the availability of the packages
is the sticking point.

That said, there are two different dependencies to consider:
oslo.messaging dependencies, and AMQP 1.0 support in the brokers.

For oslo.messaging, the dependency is on the Proton C developer
libraries.  This library is currently available on EPEL for centos6+
and fedora. It is not available in the Ubuntu repos yet - though the
packages have recently been accepted into Debian sid.  For developers, the QPID
project maintains a PPA that can be used to get the packages on
Debian/Ubuntu (though this is not acceptable for openstack CI
support).  The python bindings
that interface with this library are available on pypi (see the
amqp1-requirements.txt in the driver patch).

Qpid upstream has been shipping with 1.0 support for a while now, but
unfortunately the popular distros don't have the latest Qpid brokers
available.  Qpid with AMQP 1.0 support is available via EPEL for
centos7 and fedora. RedHat deprecated Qpid on rhel6, so now centos6 is
stuck with an old version of qpid since we can't override base
packages in EPEL.  Same deal with Debian/Ubuntu, though the QPID PPA
should have the latest packages (I'll have to follow up on that).


>I would like to avoid having that packaging work be a blocker, so if
>we do include the driver, what do you think is the best way to convey
>the instructions for installing those packages? I know you’ve done
>some work recently on documenting the experimental status, did that
>include installation tips?

Not at present, but that's what I'm working on at the moment.  I'll
definitely call out the installation dependencies and configuration
settings necessary for folks to get the driver up and running with a
minimum of pain.

I assume I'll have to update at least:

The wiki: https://wiki.openstack.org/wiki/Oslo/Messaging
The user manuals:
http://docs.openstack.org/icehouse/config-reference/content/configuring-rpc.html

I can also add a README to the protocols/amqp directory, if that makes sense.

>
>Thanks,
>Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][feature freeze exception] Can I get an exception for AMQP 1.0?

2014-09-03 Thread Ken Giusti
Hello,

I'm proposing a freeze exception for the oslo.messaging AMQP 1.0
driver:

   https://review.openstack.org/#/c/75815/

Blueprint:

   
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation

I presented this work at the Juno summit [1]. The associated spec has
been approved and merged [2].

The proposed patch has been in review since before icehouse, with a
couple of non-binding +1's.  A little more time is necessary to get
core reviews.

The patch includes a number of functional tests, and I've proposed a
CI check that will run those tests [3].  This patch is currently
pending support for bare fedora 20 nodes in CI.  I'm planning to add
additional test cases and devstack support in the future.

I'm in the process of adding documentation to the RPC section of the
Openstack manual.

Justification:

I think there's a benefit to having this driver available as an
_experimental_ feature in Juno, and the risk of inclusion is minimal
as the driver is optional, disabled by default, and will have no
impact on any system that does not explicitly enable it.

Unlike previous versions of the protocol, AMQP 1.0 is the official
standard for AMQP messaging (ISO/IEC 19464).  Support for it is
arriving from multiple different messaging system vendors [4].

Having access to AMQP 1.0 functionality in openstack sooner rather
than later gives the developers of AMQP 1.0 messaging systems the
opportunity to validate their AMQP 1.0 support in the openstack
environment.  Likewise, easier access to this driver by the openstack
developer community will help us find and fix any issues in a timely
manner as adoption of the standard grows.

Please consider this feature to be a part of Juno-3 release.

Thanks,

Ken


-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-28 Thread Ken Giusti
On Thu, 28 Aug 2014 13:36:46 +0100, Mark McLoughlin wrote:
> On Thu, 2014-08-28 at 13:24 +0200, Flavio Percoco wrote:
> > On 08/27/2014 03:35 PM, Ken Giusti wrote:
> > > Hi All,
> > >
> > > I believe Juno-3 is our last chance to get this feature [1] included
> > > into olso.messaging.
> > >

> >
> >
> > Hi Ken,
> >
> > Thanks a lot for your hard work here. As I stated in my last comment on
> > the driver's review, I think we should let this driver land and let
> > future patches improve it where/when needed.
> >
> > I agreed on letting the driver land as-is based on the fact that there
> > are patches already submitted ready to enable the gates for this driver.
>
> I feel bad that the driver has been in a pretty complete state for quite
> a while but hasn't received a whole lot of reviews. There's a lot of
> promise to this idea, so it would be ideal if we could unblock it.
>
> One thing I've been meaning to do this cycle is add concrete advice for
> operators on the state of each driver. I think we'd be a lot more
> comfortable merging this in Juno if we could somehow make it clear to
> operators that it's experimental right now. My idea was:
>
>   - Write up some notes which discusses the state of each driver e.g.
>
>   - RabbitMQ - the default, used by the majority of OpenStack
> deployments, perhaps list some of the known bugs, particularly
> around HA.
>
>   - Qpid - suitable for production, but used in a limited number of
> deployments. Again, list known issues. Mention that it will
> probably be removed as the amqp10 driver matures.
>
>   - Proton/AMQP 1.0 - experimental, in active development, will
> support  multiple brokers and topologies, perhaps a pointer to a
> wiki page with the current TODO list
>
>   - ZeroMQ - unmaintained and deprecated, planned for removal in
> Kilo

Sounds like a plan - I'll take on the Qpid and Proton notes.  I've
been (trying) to keep the status of the Proton stuff up to date on the
blueprint page:

https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation

Is there a more appropriate home for these notes?  Etherpad?

>
>   - Propose this addition to the API docs and ask the operators list
> for feedback
>
>   - Propose a patch which adds a load-time deprecation warning to the
> ZeroMQ driver
>
>   - Include a load-time experimental warning in the proton driver

Done!
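
(For the curious, the warning is just a log line emitted when the driver
loads - roughly this shape, illustrative rather than the merged patch:)

    import logging

    LOG = logging.getLogger(__name__)

    # emitted once, when the driver module is first imported
    LOG.warning('The AMQP 1.0 driver is experimental and not yet '
                'recommended for production deployments')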

>
> Thoughts on that?
>
> (I understand the ZeroMQ situation needs further discussion - I don't
> think that's on-topic for the thread, I was just using it as example of
> what kind of advice we'd be giving in these docs)
>
> Mark.
>
> -
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-27 Thread Ken Giusti
Hi All,

I believe Juno-3 is our last chance to get this feature [1] included
into olso.messaging.

I honestly believe this patch is about as low risk as possible for a
change that introduces a whole new transport into oslo.messaging.  The
patch shouldn't affect the existing transports at all, and doesn't
come into play unless the application specifically turns on the new
'amqp' transport, which won't be the case for existing applications.

The patch includes a set of functional tests which exercise all the
messaging patterns, timeouts, and even broker failover. These tests do
not mock out any part of the driver - a simple test broker is included
which allows the full driver codepath to be executed and verified.

AFAIK, the only remaining technical blocker to adding this feature,
aside from core reviews [2], is sufficient infrastructure test coverage.
We discussed this a bit at the last design summit.  The root of the
issue is that this feature is dependent on a platform-specific library
(proton) that isn't in the base repos for most of the CI platforms.
But it is available via EPEL, and the Apache Qpid team is actively
working towards getting the packages into Debian (a PPA is available
in the meantime).

In the interim I've proposed a non-voting CI check job that will
sanity check the new driver on EPEL based systems [3].  I'm also
working towards adding devstack support [4], which won't be done in
time for Juno but nevertheless I'm making it happen.

I fear that this feature's inclusion is stuck in a chicken/egg
deadlock: the driver won't get merged until there is CI support, but
the CI support won't run correctly (and probably won't get merged)
until the driver is available.  The driver really has to be merged
first, before I can continue with CI/devstack development.

[1] 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
[2] https://review.openstack.org/#/c/75815/
[3] https://review.openstack.org/#/c/115752/
[4] https://review.openstack.org/#/c/109118/

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] How to get new package requirements into the CI system using a PPA and EPEL?

2014-08-22 Thread Ken Giusti
On Tue, Jul 15, 2014 at 7:28 PM, Ian Wienand  wrote:
> On 07/15/2014 11:55 PM, Ken Giusti wrote:
>> Good to hear about EPEL's availability.  But on the Ubuntu/Debian
>> side - is it possible to add the Qpid project's PPA to the config
>> project?  From a quick 'grep' of the sources, it appears as if PyPy
>> requires a PPA (it's configured in
>> modules/openstack_project/manifests/slave_common.pp).  Can I use
>> this as an example for adding Qpid's PPA?
>
> This is probably a good example of the puppet classes to use, but...
>
> As discussed, it's questionable what you want here.  Probably for your
> unit tests, you could mock-out calls to the library?  So you may not
> need it installed at all?
>
> If you want to test it "for real"; i.e. in a running environment with
> real RPC happening between components, etc, then that would be in a
> devstack environment.  It sounds like you'd probably be wanting to
> define a new rpc-backend [1] that could be optionally enabled.
>
> Once you had that in devstack, you'd have to start looking at the
> jenkins-job-builder configs [2] and add a specific test that enabled
> the flag for this back-end and add it as probably a non-voting job to
> some component.
>


Thanks!  I've spent some time hacking on this and have the following:

1) a patch to devstack that adds a configuration option to enable AMQP
1.0 as the RPC messaging protocol:

https://review.openstack.org/#/c/109118/

2) a patch to openstack-infra/config that adds a new non-voting job
for oslo.messaging that runs on a devstack node with AMQP 1.0 enabled.
The job runs the AMQP 1.0 functional tests via tox:

https://review.openstack.org/#/c/115752/

#2 is fairly straightforward - I've copy&pasted the code from existing
neutron-functional tests.  Still needs testing, however.

#1 is the change I'm most concerned about, as it adds the Apache Qpid
PPA for Ubuntu systems.  I've tested this on my Trusty and CentOS 6
VMs and it works well for me.

Can anyone on the devstack or infra teams give me some feedback on
these changes?  I'm hoping these infrastructure changes will unblock
the AMQP 1.0 blueprint in time for Juno-3 (fingers, toes, eyes
crossed).

thanks!

> -i
>
> [1] http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/rpc_backend
> [2] 
> https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/jenkins_job_builder/config/



-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-08-01 Thread Ken Giusti
On Wed, 30 Jul 2014 22:14:51 +, Jeremy Stanley wrote:
>On 2014-07-30 14:59:09 -0400 (-0400), Ken Giusti wrote:
>> Thanks Daniel.  It was my understanding - which may be wrong - that
>> having devstack install the 'out of band' packages would only help in
>> the case of the devstack-based integration tests, not in the case of
>> CI running the unit tests.  Is that indeed the case?
>[...]
>> I'm open to any thoughts on how best to solve this, thanks.
>
>Since they're in EPEL and we run Python 2.6 unit tests today on
>CentOS 6 servers, if the proton libraries install successfully there
>perhaps we could opportunistically exercise it only under Python 2.6
>for now? Not ideal, but it does get it enforced upstream with
>minimal fuss. I'd really rather not start accumulating arbitrary PPA
>sources on our job workers... I know we've done it for a handful of
>multi-project efforts where we needed select backports from non-LTS
>releases, but we've so far limited that to only PPAs maintained by
>the same package teams as the mainline distro packages themselves.
>

Yeah, it's becoming pretty clear that adding this PPA to infra is not
The Right Thing To Do.  How does this sound as an alternative:

1) _for_ _now_, make the dependent unit tests optional for
oslo.messaging.  Specifically, by default tox will not run them, but
I'll add a new testenv that adds a requirement for the dependent
packages and runs all the unit tests (default tests + new amqp1.0
tests).  E.g., run 'tox -e amqp1' to pull in the Python packages that
require the libraries and run all unit tests (a sketch of how the tests
could guard themselves follows this list).  This lets developers who
have installed the proton libraries run the tests, and avoids making
life hard for those devs who don't have the libraries installed.

2) Propose a new optional configuration flag in devstack that enables
the AMQP 1.0 messaging protocol (default off).  Something like
$RPC_MESSAGING_PROTOCOL == "AMQP1".  When this is set in the devstack
config, rpc_backend will install the AMQP 1.0 libraries, adding the
Qpid PPA in the case of Ubuntu for now.

3) Create a non-voting oslo.messaging gate test [0] that merely
runs the 'tox -e amqp1' tests.  Of course, additional integration
tests are a Good Thing, but at the very least we should start with
this. This would give us a heads up should new patches break the amqp
1.0 driver.  This test could eventually become gating once the driver
matures and the packages find their way into all the proper repos.

4) Profit (couldn't resist :)
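
Here's the sketch promised above of how the optional tests could guard
themselves (the names here are illustrative; the actual patch may
structure this differently):

import unittest

try:
    import pyngus  # noqa: the client API used by the AMQP 1.0 driver
    PROTON_AVAILABLE = True
except ImportError:
    PROTON_AVAILABLE = False


@unittest.skipUnless(PROTON_AVAILABLE, "proton libraries are not installed")
class TestAmqp1Driver(unittest.TestCase):
    def test_rpc_round_trip(self):
        # Exercise the driver against the in-process test broker here.
        pass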

Opinions?

[0] I honestly have no idea how to do this, or if it's even feasible
btw - I've never written a gating test before.  I'd appreciate any
pointers to get me started, thanks!


>Longer term, I'd suggest getting it sponsored into Debian
>unstable/testing ASAP, interesting the Ubuntu OpenStack team in
>importing it into the development tree for the next Ubuntu release,
>and then incorporating it into the Trusty Ubuntu Cloud Archive.
>We're not using UCA yet, but on Trusty we probably should consider
>adding it sooner rather than later since when we tried to tack on
>the Precise UCA in the last couple cycles we had too many headaches
>from trying to jump ahead substantially on fundamental bits like
>libvirt. Breaking sooner and more often means those incremental
>issues are easier to identify and address, usually.

Ah - I didn't know that, thanks!  I know one of the Qpid devs is
currently engaged in getting these packages into Debian.  I'll reach
out to him and see if he can work on getting it into UCA next.

Thanks again - very valuable info!

>--
>Jeremy Stanley


-- 
Ken Giusti  (kgiu...@gmail.com)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-08-01 Thread Ken Giusti
On Wed, 30 Jul 2014 15:04:41 -0700, Matt Riedemann wrote:
>On 7/30/2014 11:59 AM, Ken Giusti wrote:
>> On Wed, 30 Jul 2014 14:25:46 +0100, Daniel P. Berrange wrote:
>>> On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:
>>>> Greetings,

>> At this point, there are no integration tests that exercise the
>> driver.  However, the new unit tests include a test 'broker', which
>> allows the unit tests to fully exercise the new driver, right down to
>> the network.  That's a bonus of AMQP 1.0 - it can support peer-2-peer
>> messaging.
>>
>> So it's the new unit tests that have the 'hard' requirement on the
>> proton libraries.  And mocking out the proton libraries really
>> doesn't allow us to do any meaningful tests of the driver.
>>

>
>If your unit tests are dependent on a specific dependent library aren't
>they no longer unit tests but integration tests anyway?
>

Good point - yes, they are certainly more than just unit tests.  I'd
consider them more "functional" tests than integration tests, though:
they only test from the new driver API down to the wire (and back up
again via the fake loopback broker).  For integration testing, I'd
want to put a real broker in there, and run real subprojects over
oslo.messaging using the new driver (neutron, etc).

I'd really like to avoid the classic unit test approach of mocking out
the underlying messaging client API if possible.  Even though that
would avoid the dependency, I think it could result in the same issues
we've had with the existing impl_qpid tests passing in mock, but
failing when run against qpidd.

>Just wondering, not trying to put up road-blocks because I'd like to see
>how this code performs but haven't had time yet to play with it.
>

np, a good question, thanks!  When you do get a chance to kick the tires,
feel free to ping me with any questions/issues you have.  Thanks!

>--
>
>Thanks,
>
>Matt Riedemann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-07-30 Thread Ken Giusti
On Wed, 30 Jul 2014 14:25:46 +0100, Daniel P. Berrange wrote:
> On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:
> > Greetings,
> >
> > Apologies for the cross-post: this should be of interest to both infra
> > and oslo.messaging developers.
> >
> > The blueprint [0] that adds support for version 1.0 of the AMQP messaging
> > protocol is blocked due to CI test failures [1]. These failures are due
> > to a new package dependency this blueprint adds to oslo.messaging.
> >
> > The AMQP 1.0 functionality is provided by Apache Qpid's Proton
> > AMQP 1.0 toolkit.  The blueprint uses the Python bindings for this
> > toolkit, which are available on PyPI.  These bindings, however, include
> > a C extension that depends on the Proton toolkit development libraries
> > in order to build and install.  The lack of this toolkit is the cause
> > of the blueprint's current CI failures.
> >
> > This toolkit is written in C, and thus requires platform-specific
> > libraries.
> >
> > Now here's the problem: packages for Proton are not included by
> > default in most distros' base repositories (yet).  The Apache Qpid
> > team has provided packages for EPEL, and has a PPA available for
> > Ubuntu.  Packages for Debian are also being proposed.
> >
> > I'm proposing this patch to openstack-infra/config to address the
> > dependency problem [2].  It adds the proton toolkit packages to the
> > common slave configuration.  Does this make sense?  Are there any
> > better alternatives?
>
> For other cases where we need more native packages, we typically
> use devstack to ensure they are installed. This is preferable
> since it works for ordinary developers as well as the CI system.
>

Thanks Daniel.  It was my understanding - which may be wrong - that
having devstack install the 'out of band' packages would only help in
the case of the devstack-based integration tests, not in the case of
CI running the unit tests.  Is that indeed the case?

At this point, there are no integration tests that exercise the
driver.  However, the new unit tests include a test 'broker', which
allows the unit tests to fully exercise the new driver, right down to
the network.  That's a bonus of AMQP 1.0 - it can support peer-2-peer
messaging.
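
To make that concrete, here's a rough sketch of the loopback idea.  This
is illustrative only - it uses the python-qpid-proton reactor API, which
is newer than the toolkit version discussed here, and the actual test
broker in the review is structured differently.  The handler listens
directly and echoes each request to its reply-to address, so no broker
process sits between the driver and the tests:

from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container


class LoopbackBroker(MessagingHandler):
    """Echo each incoming request back to its reply-to address."""

    def __init__(self, url="localhost:5672"):
        super(LoopbackBroker, self).__init__()
        self.url = url

    def on_start(self, event):
        # AMQP 1.0 is symmetric, so this endpoint can simply listen;
        # no intermediary broker process is required.
        self.acceptor = event.container.listen(self.url)

    def on_message(self, event):
        request = event.message
        if request.reply_to:
            # Simplified: a real implementation would cache senders
            # rather than creating one per message.
            sender = event.container.create_sender(request.reply_to)
            sender.send(Message(body=request.body,
                                correlation_id=request.correlation_id))


Container(LoopbackBroker()).run()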

So it's the new unit tests that have the 'hard' requirement on the
proton libraries.  And mocking out the proton libraries really
doesn't allow us to do any meaningful tests of the driver.

But if devstack is the preferred method for getting 'special case'
packages installed, would it be acceptable to have the new unit tests
run as a separate oslo.messaging integration test, and remove them
from the unit tests?

I'm open to any thoughts on how best to solve this, thanks.

> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-07-30 Thread Ken Giusti
Greetings,

Apologies for the cross-post: this should be of interest to both infra
and oslo.messaging developers.

The blueprint [0] that adds support for version 1.0 of the AMQP messaging
protocol is blocked due to CI test failures [1]. These failures are due
to a new package dependency this blueprint adds to oslo.messaging.

The AMQP 1.0 functionality is provided by Apache Qpid's Proton
AMQP 1.0 toolkit.  The blueprint uses the Python bindings for this
toolkit, which are available on PyPI.  These bindings, however, include
a C extension that depends on the Proton toolkit development libraries
in order to build and install.  The lack of this toolkit is the cause
of the blueprint's current CI failures.

This toolkit is written in C, and thus requires platform-specific
libraries.

Now here's the problem: packages for Proton are not included by
default in most distros' base repositories (yet).  The Apache Qpid
team has provided packages for EPEL, and has a PPA available for
Ubuntu.  Packages for Debian are also being proposed.

I'm proposing this patch to openstack-infra/config to address the
dependency problem [2].  It adds the proton toolkit packages to the
common slave configuration.  Does this make sense?  Are there any
better alternatives?

[0] 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
[1] https://review.openstack.org/#/c/75815/
[2] https://review.openstack.org/#/c/110431/



-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Oslo.messaging] RPC failover handling in rabbitmq driver

2014-07-28 Thread Ken Giusti
On Mon, 28 Jul 2014 10:58:02 +0100, Gordon Sim wrote:
> On 07/28/2014 09:20 AM, Bogdan Dobrelya wrote:
> > Hello.
> > I'd like to bring your attention to a major RPC failover issue in
> > impl_rabbit.py [0]. There are several *related* patches and a number of
> > concerns should be considered as well:
> > - Passive exchanges fix [1] (looks like the problem is much deeper than
> > it seems though).
> > - The first version of the fix [2], which makes the producer declare a
> > queue and bind it to the exchange just as the consumer does.
> > - Making all RPC involved reply_* queues durable in order to preserve
> > them in RabbitMQ after failover (there could be a TTL for such queues
> > as well)
> > - RPC throughput tuning patch [3]
> >
> > I believe the issue [0] should be at least prioritized and assigned to
> > some milestone.
>
> I think the real issue is the lack of clarity around what guarantees are
> made by the API.
>

Wholeheartedly agree!  This lack of explicitness makes it very
difficult to add new messaging backends (drivers) to oslo.messaging
and expect the API to function uniformly from the application's point
of view.  The end result is that oslo.messaging's API behavior is
somewhat implicitly defined by the characteristics of the RPC backend
(broker), rather than by oslo.messaging itself.

In other words: we need to solve this problem in general, not just for
the rabbit driver.

> Is it the case that an RPC call should never fail (i.e. never time out)
> due to failover? Either way, the answer to this should be very clear.
>
> If failures may occur, then the calling code needs to handle that. If
> eliminating failures is part of the 'contract' then the library should
> have a clear strategy for ensuring (and testing) this.
>
> Another possible scenario is that the connection is lost immediately
> after writing the request message to the socket (but before it is
> processed by the rabbit broker). In this case the issue is that the
> request is not confirmed, so it can complete before it is 'safe'. In
> other words requests are unreliable.
>
> My own view is that if you want to avoid time outs on failover, the best
> approach is to have olso.messaging retry the entire request regardless
> of the point it had reached in the previous attempt. I.e. rather than
> trying to make delivery of responses reliable, assume that both requests
> and responses are unreliable and re-issue the request immediately on
> failover.

I like this suggestion. By assuming limited reliability from the
underlying messaging system, we reduce oslo.messaging's reliance on
features provided by any particular messaging implementation
(driver/broker).

> (The retry logic could even be made independent of any driver
> if desired).

Exactly!  Having all QoS-related code outside of the drivers would
guarantee that the behavior of the API is _uniform_ across all
drivers.
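
Something like the sketch below, sitting above the driver layer (the
helper name is mine, and real code would also need to cope with duplicate
delivery, since the first attempt may already have been processed):

from oslo import messaging


def call_with_retry(client, ctxt, method, retries=3, **kwargs):
    """Re-issue the entire RPC request if it times out (e.g. on failover)."""
    for attempt in range(retries):
        try:
            return client.call(ctxt, method, **kwargs)
        except messaging.MessagingTimeout:
            # Treat both the request and its reply as unreliable: rather
            # than waiting on the original reply queue, start over.
            if attempt == retries - 1:
                raise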

>
> This is perhaps a bigger change, but I think it is more easy to get
> right and will also be more scalable and performant since it doesn't
> require replication of every queue and every message.
>
>
> >
> > [0] https://bugs.launchpad.net/oslo.messaging/+bug/1338732
> > [1] https://review.openstack.org/#/c/109373/
> > [2]
> > https://github.com/noelbk/oslo.messaging/commit/960fc26ff050ca3073ad90eccbef1ca95712e82e
> > [3] https://review.openstack.org/#/c/109143/
>


-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][oslo.messaging] Adding a new RPC backend for testing AMQP 1.0

2014-07-23 Thread Ken Giusti
Hi,

I'd like some help with $SUBJECT.  I've got a WIP patch up for review:

https://review.openstack.org/#/c/109118/

My goal is to have an RPC backend that I can use to test the new AMQP
1.0 oslo.messaging driver against.  I suspect this new backend would
initially only be used by tests specifically written against the
driver, but I'm hoping for wider adoption as the driver stabilizes and
AMQP 1.0 adoption increases.

As I said, this is only a WIP and doesn't completely work yet (though
it shouldn't break support for the existing backends).  I'm just
looking for some early feedback on whether or not this is the correct
approach.

thanks!

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-16 Thread Ken Giusti
On 07/15/2014 10:58:50 +0200, Flavio Percoco wrote:
>On 07/15/2014 07:16 PM, Doug Hellmann wrote:
>> On Tue, Jul 15, 2014 at 1:03 PM, Ken Giusti  wrote:
>>>
>>> These packages may be obtained via EPEL for CentOS/RHEL systems
>>> (qpid-proton-c-devel), and via the Qpid project's PPA [3]
>>> (libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
>>> Fedora via the default yum repos.  Otherwise, the source can be pulled
>>> directly from the Qpid project and built/installed manually [4].
>>
>> Do you know the timeline for having those added to the Ubuntu cloud
>> archives? I think we try not to add PPAs in devstack, but I'm not sure
>> if that's a hard policy.
>
>IIUC, the package has been accepted in Debian - Ken, correct me if I'm
>wrong. Here's the link to the Debian's mentor page:
>
>http://mentors.debian.net/package/qpid-proton
>

No, it hasn't been accepted yet - it is still pending approval by the
sponsor.  That's one of the reasons the Qpid project has set up its
own PPA.

>>
>>>
>>> I'd like to get the blueprint accepted, but I'll have to address these
>>> new dependencies first.  What is the best way to get these new
>>> packages into CI, devstack, etc?  And will developers be willing to
>>> install the proton development libraries, or can this be done
>>> automagically?
>>
>> To set up integration tests we'll need an option in devstack to set
>> the messaging driver to this new one. That flag should also trigger
>> setting up the dependencies needed. Before you spend time implementing
>> that, though, we should clarify the policy on PPAs.
>
>Agreed. FWIW, the work on devstack is in the works but it's being held
>off while we clarify the policy on PPAs.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-15 Thread Ken Giusti
Hi,

The AMQP 1.0 blueprint proposed for oslo.messaging Juno [0] introduces
dependencies on a few packages that provide AMQP functionality.

These packages are:

* pyngus - a client API
* python-qpid-proton - the Python bindings for the Proton AMQP library
* qpid-proton - the AMQP 1.0 library

pyngus is a pure-Python module available on PyPI [1].

python-qpid-proton is also available on PyPI [2], but it contains a C
extension.  This extension requires that the qpid-proton development
libraries be installed in order to build when installing
python-qpid-proton.

So this means that oslo.messaging developers, as well as the CI
systems, etc., will need to have the qpid-proton development packages
installed.

These packages may be obtained via EPEL for CentOS/RHEL systems
(qpid-proton-c-devel), and via the Qpid project's PPA [3]
(libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
Fedora via the default yum repos.  Otherwise, the source can be pulled
directly from the Qpid project and built/installed manually [4].

I'd like to get the blueprint accepted, but I'll have to address these
new dependencies first.  What is the best way to get these new
packages into CI, devstack, etc?  And will developers be willing to
install the proton development libraries, or can this be done
automagically?
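
One partial answer (just a sketch, not part of the blueprint): have the
driver fail loudly with a helpful message when the bindings are missing,
so at least the dependency is obvious:

try:
    import pyngus  # requires python-qpid-proton, which needs proton-c to build
except ImportError:
    raise ImportError(
        "The AMQP 1.0 driver requires the pyngus and python-qpid-proton "
        "packages; building the latter requires the qpid-proton "
        "development libraries.")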

thanks for your help,


[0] 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
[1] https://pypi.python.org/pypi/pyngus
[2] https://pypi.python.org/pypi/python-qpid-proton/0.7-0
[3] https://launchpad.net/~qpid
[4] http://qpid.apache.org/download.html

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev