Re: [openstack-dev] [goals][python3][adjutant][barbican][chef][cinder][cloudkitty][i18n][infra][loci][nova][charms][rpm][puppet][qa][telemetry][trove] join the bandwagon!

2018-08-31 Thread Mehdi Abaakouk

Telemetry is ready

On Thu, Aug 30, 2018 at 07:24:23PM -0400, Doug Hellmann wrote:

Below is the list of project teams that have not yet started migrating
their zuul configuration. If you're ready to go, please respond to this
email to let us know so we can start proposing patches.

Doug

| adjutant          |   3 repos |
| barbican          |   5 repos |
| Chef OpenStack    |  19 repos |
| cinder            |   6 repos |
| cloudkitty        |   5 repos |
| I18n              |   2 repos |
| Infrastructure    | 158 repos |
| loci              |   1 repos |
| nova              |   6 repos |
| OpenStack Charms  |  80 repos |
| Packaging-rpm     |   4 repos |
| Puppet OpenStack  |  47 repos |
| Quality Assurance |  22 repos |
| Telemetry         |   8 repos |
| trove             |   5 repos |

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-08 Thread Mehdi Abaakouk

On Wed, Aug 08, 2018 at 08:35:20AM -0400, Corey Bryant wrote:

On Wed, Aug 8, 2018 at 3:43 AM, Thomas Goirand  wrote:


On 08/07/2018 06:10 PM, Corey Bryant wrote:
> I was concerned that there wouldn't be any
> gating until Ubuntu 20.04 (April 2020)
Same over here. I'm concerned that it will take another 2 years, which
we really cannot afford.

> but Py3.7 is available in bionic today.


Yeah but it's the beta3 version.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-08 Thread Mehdi Abaakouk

On Wed, Aug 08, 2018 at 09:35:04AM -0400, Doug Hellmann wrote:

Excerpts from Andrey Kurilin's message of 2018-08-08 15:25:01 +0300:

Thanks Thomas for pointing out the issue. I checked it locally and here is
an update for the openstack/rally project (the Rally framework without in-tree
OpenStack plugins):

- added unittest job with py37 env


It would be really useful if you could help set up a job definition in
openstack-zuul-jobs like we have for openstack-tox-py36 [1], so that other
projects can easily add the job, too. Do you have time to do that?


I have already done this kind of work for the Telemetry project, and our
projects already gate on py37. The only restriction is that we have to
use a fedora-28 instance with the python-3.7 package installed manually
(via bindep.txt). Ubuntu 18.04 LTS only has a beta version of Python 3.7 in
the universe repo.

So I'm guessing we have to wait for the next Ubuntu LTS to add this job
everywhere.

https://github.com/openstack/ceilometer/blob/master/.zuul.yaml#L12
https://github.com/openstack/ceilometer/blob/master/bindep.txt#L7

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Mehdi Abaakouk

Hi,

I have never understood the branchless Tempest thing. Making Tempest
releases is great news for me.

But about plugins... Tempest already provides an API for plugins. If you
are going to break this API, why not use stable branches and a
deprecation process like any other software?

If you do that, plugin maintainers will be informed that Tempest will soon
make a breaking change. They can update their plugin code and raise the
minimal Tempest version required to work.

They can do that when they have time, and not because Tempest wants to
release a version soon.

Also, the stable branch/deprecation process is well known by the
whole community.

And this will also allow them to release a version whenever they want.

So I support making releases of Tempest and the plugins, but I do not
support a coordinated release.

Regards,

On Tue, Jun 26, 2018 at 06:18:52PM +0900, Ghanshyam Mann wrote:

Hello Everyone,

In the Queens cycle, the community goal to split the Tempest plugins was completed
[1], and I think almost all projects now have a separate repo for their Tempest plugin
[2], which means each Tempest plugin is being separated from its project's
release model. A few projects have started the independent release model for
their plugins, like kuryr-tempest-plugin, ironic-tempest-plugin, etc. [3]. I
think neutron-tempest-plugin is also planning this, as amotoki and I discussed.

There might be changes in Tempest which do not work with older versions
of the Tempest plugins. For example, if I am testing a production cloud which
has Nova, Neutron, Cinder, Keystone, Aodh, Congress, etc., I will be using
Tempest and the Aodh and Congress Tempest plugins. With an independent release
model for each Tempest plugin, there is a chance that the Aodh or Congress
Tempest plugin versions are not compatible with the latest/known Tempest versions.
It will become hard to find compatible tags/releases of Tempest and the Tempest
plugins, or in some cases I might need to patch things up.

During the QA feedback session at the Vancouver Summit, there was feedback about
coordinating the release of all Tempest plugins and Tempest [4] (amotoki also
talked to me about this, as neutron-tempest-plugin is planning its first
release). The idea is to release/tag all the Tempest plugins and Tempest together,
so that a specific release/tag can be identified as a compatible set of all the
plugins and Tempest for testing the complete stack. That way users can get to
know what version of the Tempest plugins is compatible with what version of Tempest.

For the above use case, we need some coordinated release model among Tempest and all the Tempest
plugins. There should be a single release of all Tempest plugins, with a well-defined tag, whenever any
Tempest release happens. For example, Tempest version 19.0.0 marks the "support of
the Rocky release". When releasing Tempest 19.0.0, we will also release all the Tempest plugins
to tag the compatibility of the plugins with Tempest for "support of the Rocky release".

One way to make this coordinated release work (just an initial thought):
1. Release each Tempest plugin whenever there is a major version release of
Tempest (such as marking support for an OpenStack release in Tempest, or the EOL
of an OpenStack release in Tempest).
   1.1 Each plugin or Tempest can do intermediate releases of minor
version changes, which are backward compatible.
   1.2 This coordinated release can start from the latest Tempest version for
simple reading. For example, if we start this coordinated release from Tempest
version 19.0.0, then each plugin will be released as 19.0.0, and so on.

Given the above background and use case for this coordinated release:
A) I would like to ask each plugin owner whether you agree on this coordinated
release. If not, please give more feedback on the issues we could face due to this
coordinated release.

If we get agreement from all plugins, then:
B) The release team or TC can help find a better model for this use case or
improvements to the above model.

C) Once we define the release model, find the team owning that release
model (there are more than 40 Tempest plugins currently).

NOTE: Until we decide the best solution for this use case, each plugin can
keep doing its plugin releases under the independent release model.

[1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2] https://docs.openstack.org/tempest/latest/plugin-registry.html
[3] https://github.com/openstack/kuryr-tempest-plugin/releases
  https://github.com/openstack/ironic-tempest-plugin/releases
[4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html


-gmann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@si

Re: [openstack-dev] [heat][ci][infra][aodh][telemetry] telemetry test broken on oslo.messaging stable/queens

2018-06-11 Thread Mehdi Abaakouk


Hi,

The tempest plugin error reminds me of something we hit in the telemetry gate
a while back.

We fixed the telemetry tempest plugin with
https://github.com/openstack/telemetry-tempest-plugin/commit/11277a8bee2b0ee0688ed32cc0e836872c24ee4b

So I propose the same for the heat tempest plugin:
https://review.openstack.org/574550


Hope that helps,
sileht

Le 2018-06-11 21:53, Ken Giusti a écrit :

Updated subject to include [aodh] and [telemetry]

On Tue, Jun 5, 2018 at 11:41 AM, Doug Hellmann  
wrote:

Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400:

Hi,

The telemetry integration test for oslo.messaging has started failing
on the stable/queens branch [0].

A quick review of the logs points to a change in heat-tempest-plugin
that is incompatible with the version of gabbi from queens upper
constraints (1.40.0) [1][2].

The job definition [3] includes required-projects that do not have
stable/queens branches - including heat-tempest-plugin.

My question - how do I prevent this job from breaking when these
unbranched projects introduce changes that are incompatible with
upper-constraints for a particular branch?


Aren't those projects co-gating on the oslo.messaging test job?

How are the tests working for heat's stable/queens branch? Or 
telemetry?

(whichever project is pulling in that tempest repo)



I've run the stable/queens branches of both Aodh[1] and Heat[2] - both 
failed.


Though the heat failure is different from what we're seeing on
oslo.messaging [3],
the same warning about gabbi versions is there [4].

However the Aodh failure is exactly the same as the oslo.messaging one
[5] - this makes sense since the oslo.messaging test is basically
running the same telemetry-tempest-plugin test.

So this isn't something unique to oslo.messaging - the telemetry
integration test is busted in stable/queens.

I'm going to mark these tests as non-voting on oslo.messaging's queens
branch for now so we can land some pending patches.


[1] https://review.openstack.org/#/c/574306/
[2] https://review.openstack.org/#/c/574311/
[3]
http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/job-output.txt.gz#_2018-06-11_17_30_51_106223
[4]
http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/logs/devstacklog.txt.gz#_2018-06-11_17_09_39_691
[5]
http://logs.openstack.org/06/574306/1/check/telemetry-dsvm-integration/0a9620a/job-output.txt.gz#_2018-06-11_16_53_33_982143





I've tried to use override-checkout in the job definition, but that
seems a bit hacky in this case since the tagged versions don't appear
to work and I've resorted to a hardcoded ref [4].

Advice appreciated, thanks!

[0] https://review.openstack.org/#/c/567124/
[1] 
http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624
[2] 
http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332
[3] 
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250

[4] https://review.openstack.org/#/c/572193/2/.zuul.yaml


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mehdi Abaakouk

Looks like the LIBS_FROM_GIT workarounds have landed, but I still see an issue
on the telemetry integration jobs:
on telemetry integration jobs:

 
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz

On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk <sil...@sileht.net>:

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk <sil...@sileht.net>:

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.


We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Mehdi Abaakouk

Hi,

On Tue, Jun 13, 2017 at 01:53:02PM +0700, Renat Akhmerov wrote:

Can you please clarify for how long you plan to keep ‘blocking executor’ 
deprecated before complete removal?


Like all deprecations. We have just done it, so you have two cycles; we will
remove it in Rocky.

But as I said, this executor has never been tested. Even though it is
currently the default, that default was chosen so as not to default to 'eventlet'
or 'threading', because this is an application choice and not a library one.
But this (bad) default and the poorly logged message haven't helped to
ensure applications make that choice. That is why the blocking executor is now
deprecated and the 'executor' parameter in oslo.messaging will become
mandatory.
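
For reference, a minimal sketch of what making that choice explicit looks like
on the application side; it assumes the oslo.messaging API of that era, where
get_rpc_server() accepts an 'executor' keyword, and the topic/endpoint names
are illustrative only:

import time

from oslo_config import cfg
import oslo_messaging

# transport_url is expected to come from the service configuration (assumption).
transport = oslo_messaging.get_transport(cfg.CONF)
target = oslo_messaging.Target(topic='demo_topic', server='server1')


class DemoEndpoint(object):
    def ping(self, ctxt, arg):
        return arg


# 'threading' (or 'eventlet') is the application's explicit choice;
# 'blocking' is the deprecated default this thread is about.
server = oslo_messaging.get_rpc_server(
    transport, target, [DemoEndpoint()], executor='threading')
server.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    server.stop()
    server.wait()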

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding new meters to Gnocchi

2017-06-08 Thread Mehdi Abaakouk

On Thu, Jun 08, 2017 at 08:30:32AM +, Deepthi V V wrote:

Thanks Mehdi for the information. I will soon upload a spec for adding the 
meters.


We don't use specs; just open a bug or directly send patches.


Thanks,
Deepthi

-Original Message-
From: Mehdi Abaakouk [mailto:sil...@sileht.net]
Sent: Thursday, June 08, 2017 1:44 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding 
new meters to Gnocchi

Hi,

On Thu, Jun 08, 2017 at 05:35:43AM +, Deepthi V V wrote:

Hi,

I am trying to add new meters/resource types in Gnocchi. I came across two files:
gnocchi_resources.yaml and the ceilometer_update script, which makes Gnocchi API
calls to add resource types.
I have a few queries. Could you please clarify them?


 1.  Is it sufficient to add the resource types only in the gnocchi_resources.yaml
file?


No, you also need to create the resource type with the Gnocchi API.


 2.  Or is the ceilometer_update script also required to be modified? Is this
script responsible for defining attributes in metadata?


This script is only for Ceilometer-supported resource types. We do not support
upgrades if this script is changed or if you add/remove attributes of Ceilometer
resource types.


 3.  If I have to perform the function done by step 2, as an alternative to
updating the script, is it correct to bring up the system in the following order?
*   Change gnocchi_resources.yaml for the new resource types.
*   Start the Ceilometer and Gnocchi processes.
*   Execute the Gnocchi REST APIs to create the new resource types.


I see two solutions, depending on your use case:

* if your new resource types aim to support an OpenStack resource that is
  not yet handled, you should consider contributing upstream to update
  ceilometer-upgrade and gnocchi_resources.yaml;

* if not, then option 3 is the way to go.

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding new meters to Gnocchi

2017-06-08 Thread Mehdi Abaakouk

Hi,

On Thu, Jun 08, 2017 at 05:35:43AM +, Deepthi V V wrote:

Hi,

I am trying to add new meters/resource types in Gnocchi. I came across two files:
gnocchi_resources.yaml and the ceilometer_update script, which makes Gnocchi API
calls to add resource types.
I have a few queries. Could you please clarify them?


 1.  Is it sufficient to add the resource types only in the gnocchi_resources.yaml
file?


No, you also need to create the resource type with the Gnocchi API.


 2.  Or is the ceilometer_update script also required to be modified? Is this
script responsible for defining attributes in metadata?


This script is only for Ceilometer-supported resource types. We do not
support upgrades if this script is changed or if you add/remove
attributes of Ceilometer resource types.


 3.  If I have to perform the function done by step 2, as an alternative to
updating the script, is it correct to bring up the system in the following order?
*   Change gnocchi_resources.yaml for the new resource types.
*   Start the Ceilometer and Gnocchi processes.
*   Execute the Gnocchi REST APIs to create the new resource types.


I see two solutions, depending on your use case:

* if your new resource types aim to support an OpenStack resource that is
  not yet handled, you should consider contributing upstream to update
  ceilometer-upgrade and gnocchi_resources.yaml;

* if not, then option 3 is the way to go.
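
For illustration, a hedged sketch of what creating the resource type with the
Gnocchi API (option 3 above) can look like against the v1 REST endpoint; the
URL, token and attribute names are assumptions for the example only:

import json

import requests

GNOCCHI = "http://localhost:8041"          # assumed Gnocchi endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",  # assumed Keystone token
           "Content-Type": "application/json"}

# Declare a custom resource type along with its extra attributes.
resource_type = {
    "name": "my_custom_resource",
    "attributes": {
        "display_name": {"type": "string", "required": True},
        "size": {"type": "number", "required": False},
    },
}

resp = requests.post(GNOCCHI + "/v1/resource_type",
                     headers=HEADERS, data=json.dumps(resource_type))
resp.raise_for_status()  # expects 201 Created when the type is accepted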

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-08 Thread Mehdi Abaakouk

Hi,

On Thu, Jun 08, 2017 at 10:29:16AM +0800, int32bit wrote:

Hi,

Currently, I find that our RPC client always needs to create a new callback queue
for every call request in order to track which reply belongs to it, at least in Newton.
That's pretty inefficient and leads to poor performance. I also find that some
RPC implementations have no need to create a new queue; they track the request
and response by a correlation id in the message header (RabbitMQ supports this well,
though I am not sure whether it is an AMQP standard). The RabbitMQ official
documentation provides a simple demo, see [1].

So I am confused about why oslo.messaging doesn't use this approach
to optimize RPC performance. Is there some consideration behind it, or am I
missing some potential cases?


I think it was designed like this from the beginning, unfortunately.

The main issue is not the feature itself. It's easy to implement; I wrote a PoC
some time ago. But some projects allow what we call 'rolling upgrade'.

That means an older (N-1) application should be allowed to talk to a
newer one, and the reverse. So an RPC server has to know whether it should
send the message to the old callback queue or to the new one (even an RPC
server from version N-1 should be able to do that). Also, a new RPC
client should be able to talk to an old RPC server.

So implementing this feature would take many cycles of patches to
implement and babysit, with steps like these:
* for version N+1: allow the RPC server and RPC client to read from/send to the
  future queue topology, but continue to use the old topology by default.
* for version N+2: switch to the new topology by default, but continue to
  support talking to RPC clients/servers from the previous version.
* for version N+3: remove the code for the old topology.

Any issue encountered by an application has a good chance of extending each step
to more than one cycle.

So finally, this is not as easy as the feature alone, and this
issue has been known since at least 2015 [1]; oslo.messaging has basically no
very active contributors, so nobody is going to fix this kind of technical
debt (obviously everybody is welcome to fix it).

[1] https://bugs.launchpad.net/oslo.messaging/+bug/1437951
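
For reference, a minimal sketch of the correlation-id pattern from the RabbitMQ
RPC tutorial mentioned above, where one reply queue is declared once and reused
for every call and replies are matched by correlation_id; it assumes pika >= 1.0
and a server consuming 'rpc_queue', and is not oslo.messaging code:

import uuid

import pika


class RpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='localhost'))
        self.channel = self.connection.channel()
        # One exclusive reply queue, created once and reused for every call.
        result = self.channel.queue_declare(queue='', exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(queue=self.callback_queue,
                                   on_message_callback=self.on_response,
                                   auto_ack=True)
        self.response = None
        self.corr_id = None

    def on_response(self, ch, method, props, body):
        # Match the reply to the pending request by correlation id.
        if props.correlation_id == self.corr_id:
            self.response = body

    def call(self, payload):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange='',
            routing_key='rpc_queue',
            properties=pika.BasicProperties(reply_to=self.callback_queue,
                                            correlation_id=self.corr_id),
            body=payload)
        while self.response is None:
            self.connection.process_data_events()
        return self.response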

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread Mehdi Abaakouk

On Thu, Jun 01, 2017 at 01:46:21PM +0200, Julien Danjou wrote:

On Wed, May 31 2017, gordon chung wrote:


[…]


i'm not entirely sure this is an issue, just thought i'd raise it to
discuss.


It's a really interesting point you raise. I never thought we could do
that but indeed, we could. Maybe we built a great architecture after
all. ;-)

Easy solution: disable refresh. Problem solved.


I have never liked this refresh feature on the API side.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [panko] dropping hbase driver support

2017-05-28 Thread Mehdi Abaakouk



Le 2017-05-28 19:18, Julien Danjou a écrit :

On Fri, May 26 2017, gordon chung wrote:


As all of you know, we moved all storage out of Ceilometer, so it
handles only data generation and normalisation. There seems to be very
little contribution to Panko, which handles metadata indexing and event
storage, so given how little it's being adopted and how few resources
are being put into supporting it, I'd like to propose dropping HBase
support as a first step in making the project more manageable for
whatever resource chooses to support it.


No objection from me.


It's ok for me too.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-21 Thread Mehdi Abaakouk

On Fri, May 19, 2017 at 02:04:05PM -0400, Sean Dague wrote:

You end up replicating the Ceilometer issue where there was a break down
in getting needs expressed / implemented, and the result was a service
doing heavy polling of other APIs (because that's the only way it could
get the data it needed).


Not related to the topic, but Ceilometer doesn't have this issue
anymore. Since Nova writes the UUID of the instance inside the libvirt
instance metadata, we just associate libvirt metrics with the instance
UUID and then correlate them with the full metadata we receive via
notifications. We don't poll Nova at all anymore.
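
As a rough sketch of that approach (not Ceilometer's actual code), libvirt
domains can be matched to Nova instances by UUID and their metrics tagged
accordingly; the connection URI is an assumption:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # assumed local hypervisor URI
for dom in conn.listAllDomains():
    # The libvirt domain UUID matches the Nova instance UUID (Nova also embeds
    # instance details in the domain's <metadata> element).
    instance_uuid = dom.UUIDString()
    cpu_stats = dom.getCPUStats(True)  # aggregated CPU stats for the domain
    # Publish cpu_stats tagged with instance_uuid, then enrich them later
    # with the full instance metadata received from Nova notifications.
    print(instance_uuid, cpu_stats)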

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Mehdi Abaakouk

On Thu, May 18, 2017 at 03:16:20PM -0400, Mike Bayer wrote:



On 05/18/2017 02:37 PM, Julien Danjou wrote:

On Thu, May 18 2017, Mike Bayer wrote:


I'm not understanding this?  do you mean this?


In the long run, yes. Unfortunately, we're not happy with the way Oslo
libraries are managed, and they are too OpenStack centric. I've tried for the last
couple of years to move things on, but it's barely possible to deprecate
anything and contribute, so I feel it's safer to start fresh with a better
alternative. Cotyledon by Mehdi is a good example of what can be
achieved.



here's cotyledon:

https://cotyledon.readthedocs.io/en/latest/


replaces oslo.service with a multiprocessing approach that doesn't use 
eventlet.  great!  any openstack service that rides on oslo.service 
would like to be able to transparently switch from eventlet to 
multiprocessing the same way they can more or less switch to mod_wsgi 
at the moment.   IMO this should be part of oslo.service itself.   


I quickly presented cotyledon a few summits ago; we said we would wait
to see if other projects want to get rid of eventlet before adopting
such a new lib (or merging it with oslo.service).

But for now, the lib is still under the Telemetry umbrella.

Keeping the current API and supporting both is (I think) impossible.
The current API is too eventlet centric, and some applications rely
on implicit internal contracts/behavior/assumptions.

Dealing with concurrency/thread/signal safety in a multithreading app or
an eventlet app is already hard enough, so having one lib that deals with
both is even harder. We already have oslo.messaging, which deals with
three threading models, and that is just an unending story of race conditions.

Since a new API is needed, why not write a new lib? Anyway, when you
get rid of eventlet you have so many things to change to ensure your
performance will not drop that changing from oslo.service to cotyledon is
really easy by comparison.

Docs state: "oslo.service being impossible to fix and bringing an 
heavy dependency on eventlet, "  is there a discussion thread on that?


Not really, I just put some comments on reviews and discussed this on IRC,
since nobody except Telemetry has expressed interest in or tried to get rid of eventlet.

For the story: we first got rid of eventlet in Telemetry and fixed a couple of
performance issues due to using threading/processes instead of
greenlets/greenthreads.

Then we fell into some weird issues due to the oslo.service internal
implementation: processes not exiting properly, signals not received,
deadlocks when signals are received, unkillable processes,
tooz/oslo.messaging heartbeats not scheduled correctly, workers not
restarted when they are dead. Everything we expect from oslo.service
stopped working correctly because we removed the line
'eventlet.monkey_patch()'.

For example, when oslo.service receives a signal, it can arrive on any
thread; this thread is paused and the callback is run in this thread's
context, but if the callback tries to talk to your code in this thread,
the process locks up, because your code is paused. Python
offers a tool to avoid that (signal.set_wakeup_fd), but oslo.service doesn't
use it. I have tried to run callbacks only on the main thread with
set_wakeup_fd, to avoid this kind of issue, but I failed. The whole
oslo.service code is clearly not designed to be threadsafe/signal-safe.
Well, it works for eventlet because you have only one real thread.
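
For reference, a minimal Python 3 sketch of the signal.set_wakeup_fd self-pipe
pattern mentioned above (this is neither oslo.service nor cotyledon code):
signals become bytes on a socketpair and are handled by the main loop, not in
whichever thread happened to be interrupted:

import signal
import socket

# Self-pipe: the C-level signal handler writes the signal number here.
rsock, wsock = socket.socketpair()
wsock.setblocking(False)
signal.set_wakeup_fd(wsock.fileno())

# Handlers must be registered, but can be no-ops: the real work happens
# in the main loop below, on the main thread only.
for sig in (signal.SIGTERM, signal.SIGINT):
    signal.signal(sig, lambda signum, frame: None)


def main_loop():
    while True:
        for signum in rsock.recv(16):  # blocks only the main thread
            print("handling signal %d on the main thread" % signum)
            if signum in (signal.SIGTERM, signal.SIGINT):
                return


main_loop()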

And this is just one example of the complicated things I tried to fix
before starting cotyledon.

I'm finding it hard to believe that only a few years ago, everyone saw 
the wisdom of not re-implementing everything in their own projects and 
using a common layer like oslo, and already that whole situation is 
becoming forgotten - not just for consistency, but also when a bug is 
found, if fixed in oslo it gets fixed for everyone.


Because the internals of cotyledon and oslo.service are so different,
having the code in oslo or not doesn't help with maintenance anymore.
Cotyledon is a lib; code and bugs :) can already be shared between
projects that don't want eventlet.

An increase in the scope of oslo is essential to dealing with the 
issue of "complexity" in openstack. 


Increasing the scope of oslo works only if the libs have maintainers, but
most of them lack people today. Most oslo libs are in maintenance
mode. But that's another subject.

The state of openstack as dozens 
of individual software projects each with their own idiosyncratic 
quirks, CLIs, process and deployment models, and everything else that 
is visible to operators is ground zero for perceived operator 
complexity.


Cotyledon has been written to be OpenStack agnostic. But I have also
written an optional module within the library to glue oslo.config and
cotyledon together, mainly to mimic the oslo.config options/reload behavior of
oslo.service and keep the operator experience unchanged for OpenStack
people.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



Re: [openstack-dev] [Heat] Heat template example repository

2017-05-18 Thread Mehdi Abaakouk

On Thu, May 18, 2017 at 11:26:41AM +0200, Lance Haig wrote:



This is not only an Aodh/Ceilometer alarm issue. I can confirm that
whatever the resource prefix, this works well.

But an alarm description also contains a query to an external API to
retrieve statistics. Aodh alarms are currently able to
query the deprecated Ceilometer API and the Gnocchi API. Creating alarms
that query the deprecated Ceilometer API is obviously deprecated too.

Unfortunately, I have seen that all templates still use the deprecated
Ceilometer API. Since Ocata, this API doesn't even run by default.

I just proposed an update for one template as an example here:

https://review.openstack.org/#/c/465817/

I can't really do the others; I don't have enough knowledge of
Mistral/Senlin/OpenShift.
One of the challenges we have is that we have users who are on 
different versions of heat and so if we change the examples to 
accommodate the new features then we effectively block them from being 
able to use these or learn from them.


I think it's too late to use the term 'new feature' for
Aodh/Telemetry/Gnocchi. It's not new anymore, but current. The current
templates just haven't worked for at least 3 cycles... And the repo
still doesn't have templates that use the currently supported APIs.

How many previous versions do you want to support in this repo? I doubt it's
more than 2-3 cycles, so you could just fix all the autoscaling/autohealing
templates today.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-18 Thread Mehdi Abaakouk

Hi,

On Mon, May 15, 2017 at 01:01:57PM -0400, Zane Bitter wrote:

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples 
we have all use OS::Ceilometer::* resources for alarms. We have a 
global environment thingy that maps those to OS::Aodh::*, so at least 
in theory those templates should continue to work, but there are 
actually no examples that I can find of autoscaling templates doing 
things the way we want everyone to do them.


This is not only an Aodh/Ceilometer alarm issue. I can confirm that
whatever the resource prefix, this works well.

But an alarm description also contains a query to an external API to
retrieve statistics. Aodh alarms are currently able to
query the deprecated Ceilometer API and the Gnocchi API. Creating alarms
that query the deprecated Ceilometer API is obviously deprecated too.

Unfortunately, I have seen that all templates still use the deprecated
Ceilometer API. Since Ocata, this API doesn't even run by default.

I just proposed an update for one template as an example here:

  https://review.openstack.org/#/c/465817/

I can't really do the others; I don't have enough knowledge of
Mistral/Senlin/OpenShift.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-17 Thread Mehdi Abaakouk

Hi,

On Thu, May 18, 2017 at 11:14:06AM +0800, Andres Alvarez wrote:

Hello folks

The gnocchi-api command allows running the API server using a specific port:

usage: gnocchi-api [-h] [--port PORT] -- [passed options]

positional arguments:
  -- [passed options]   '--' is the separator of the arguments used to start
                        the WSGI server and the arguments passed to the WSGI
                        application.

optional arguments:
  -h, --help            show this help message and exit
  --port PORT, -p PORT  TCP port to listen on (default: 8000)

I was wondering if it's also possible to bind to a specific interface? (In
my case, I am working in a cloud dev environment, so I need 0.0.0.0.)


gnocchi-api is for testing purposes. For production or any advanced HTTP
server usage, I would recommend running the WSGI application inside a
real HTTP server; you can find an example with uwsgi here:
http://gnocchi.xyz/running.htm

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-16 Thread Mehdi Abaakouk

+1 too, I haven't seen its contributors in a while.

On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti <kgiu...@gmail.com> wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures][telemetry][gnocchi] Release of openstack/python-gnocchiclient failed

2017-04-12 Thread Mehdi Abaakouk

Thanks for the investigation.

I see two things:

* I had forgotten that pushing a tag will build the tarball
  automatically.
* I thought I should do the tagging manually because of [1].

For the second point, if I was wrong and we should continue to use the
release repo, I have prepared two reviews to regularize the already
tagged releases and retag new versions to fix the jobs.

- gnocchiclient: https://review.openstack.org/#/c/456448/
- gnocchi: https://review.openstack.org/#/c/456449/

If I was right, I will manually retag a new version without uploading the
tarballs myself this time, and wait for the tooling to do it.

[1] https://review.openstack.org/#/c/447438/

On Wed, Apr 12, 2017 at 01:49:54PM -0400, Doug Hellmann wrote:

Gnocchi team,

The client release you tagged had a build failure.  It looks like
the tag was pushed directly, so I will relay the results of the
debugging work but let you decide how to deal with it.

Clark determined that the files on PyPI do not exactly match the files
on our tarballs server. One of the JSON metadata files was regenerated,
and because dictionaries are unordered it came out in a different order.

Clark also checked the MD5 sum on PyPI and found that the signatures
match.

It's not clear why the job failed. Subsequent jobs have passed. One
course of action you could take is to retag the same commit with a new
version number to get new packages, just to be safe.

If you choose to stick with the existing packages, you will need to
manually submit the constraint update, since that job was skipped after
the upload job reported a failure.

Please also file the update in the releases repository, so that
there is a record of the fact that the release was made.

Excerpts from jenkins's message of 2017-04-12 09:29:09 +:

Build failed.

- python-gnocchiclient-tarball 
http://logs.openstack.org/6f/6fd3d262bf88c1b8b8e195aef451cde80a7e7c1d/release/python-gnocchiclient-tarball/61adb8c/
 : SUCCESS in 3m 52s
- python-gnocchiclient-tarball-signing 
http://logs.openstack.org/6f/6fd3d262bf88c1b8b8e195aef451cde80a7e7c1d/release/python-gnocchiclient-tarball-signing/c7dd0a8/
 : SUCCESS in 11s
- python-gnocchiclient-pypi-both-upload 
http://logs.openstack.org/6f/6fd3d262bf88c1b8b8e195aef451cde80a7e7c1d/release/python-gnocchiclient-pypi-both-upload/0ad195d/
 : FAILURE in 48s
- python-gnocchiclient-announce-release python-gnocchiclient-announce-release : 
SKIPPED
- propose-python-gnocchiclient-update-constraints 
propose-python-gnocchiclient-update-constraints : SKIPPED



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Mehdi Abaakouk

Example of failure:

Our test assertion (which does a GET /v2.1/servers/detail HTTP/1.1) returns
[] instead of the list of instances:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_28_59_334619

The 'openstack server list' we issue for debugging has the same
issue:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_29_13_803796

On Thu, Mar 02, 2017 at 02:52:20PM +0100, Mehdi Abaakouk wrote:

Hello,

We have been experiencing a blocking issue with our integration job for some
days.

Basically the job creates a Heat stack and calls the Nova API to list
instances to see whether the stack has scaled up.

The autoscaling itself works well, but our test assertion fails because
listing Nova instances doesn't work anymore. It always returns an empty
list.

I first thought https://review.openstack.org/#/c/427392/ would fix
it, because nova-api was logging some errors about cell initialisation.

But now these errors are gone, and 'openstack server list' still
returns an empty list, while 'openstack server show X' works well.

Any ideas are welcome.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Mehdi Abaakouk

Hello,

We have been experiencing a blocking issue with our integration job for some
days.

Basically the job creates a Heat stack and calls the Nova API to list
instances to see whether the stack has scaled up.

The autoscaling itself works well, but our test assertion fails because
listing Nova instances doesn't work anymore. It always returns an empty
list.

I first thought https://review.openstack.org/#/c/427392/ would fix
it, because nova-api was logging some errors about cell initialisation.

But now these errors are gone, and 'openstack server list' still
returns an empty list, while 'openstack server show X' works well.

Any ideas are welcome.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Triggering alarms on Swift notifications

2017-02-16 Thread Mehdi Abaakouk

On Thu, Feb 16, 2017 at 03:33:43PM +0200, Denis Makogon wrote:

Hello Mehdi. Thanks for response. See comments inline.

2017-02-16 15:25 GMT+02:00 Mehdi Abaakouk <sil...@sileht.net>:


Hi,

On Thu, Feb 16, 2017 at 03:04:52PM +0200, Denis Makogon wrote:


Greetings.

Could someone provide any guidelines for checking why alarm or ok-action
webhooks are not being executed? Is it possible to investigate the issue by
analyzing Ceilometer and/or Aodh logs?

So, my devstack setup is based on the master branch with this local.conf (see
https://gist.github.com/denismakogon/4d88bdbea4bf428e55e88d25d52735f6)

Once devstack is installed, I check whether notifications are being emitted
while uploading files to Swift, and I am able to see events in Ceilometer
(see https://gist.github.com/denismakogon/c6ad75899dcc50ce2a9b9f6a4e0612f7).

After that I try to set up an Aodh event alarm (see
https://gist.github.com/denismakogon/f6449e71ba9bb04cdd0065b52918b5af)

And that's where I'm stuck: while working with Swift I see notifications
coming from ceilometermiddleware to Panko and available in Ceilometer
via event-list, but no alarm is being triggered in Aodh.

So, could someone explain to me what I am doing wrong, or am I missing
something?



I think the devstack plugins currently don't set up the Ceilometer pieces
needed to use events in Aodh.


So, if I dropped both Aodh and Panko from this setup, might the issue
be solved somehow?


I don't get it; I thought your goal was to create an alarm triggered on an
event. And for this you need Aodh and Ceilometer (not Panko).

You have to apply
https://docs.openstack.org/developer/aodh/event-alarm.html#configuration
manually to get the events emitted by Ceilometer received by Aodh.


Note that Aodh doesn't query Panko, but listens for events on the alarm.all
topic by default. I'm guessing the Ceilometer conf/pipeline has to be tweaked
to send events to Aodh somehow.


Yes, I know that; in this setup Ceilometer depends on Panko as the event
source, is that right?


No, Panko is for storage of events in a database (it's an API and a
Ceilometer dispatcher plugin). Events are still built from notifications
by Ceilometer (agent-notification).

To rephrase from Ceilometer's point of view: Ceilometer sends events to Aodh AND
Panko.

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Triggering alarms on Swift notifications

2017-02-16 Thread Mehdi Abaakouk

Hi,

On Thu, Feb 16, 2017 at 03:04:52PM +0200, Denis Makogon wrote:

Greetings.

Could someone provide any guidelines for checking why alarm or ok-action
webhooks are not being executed? Is it possible to investigate the issue by
analyzing Ceilometer and/or Aodh logs?

So, my devstack setup is based on the master branch with this local.conf (see
https://gist.github.com/denismakogon/4d88bdbea4bf428e55e88d25d52735f6)

Once devstack is installed, I check whether notifications are being emitted
while uploading files to Swift, and I am able to see events in Ceilometer
(see https://gist.github.com/denismakogon/c6ad75899dcc50ce2a9b9f6a4e0612f7).

After that I try to set up an Aodh event alarm (see
https://gist.github.com/denismakogon/f6449e71ba9bb04cdd0065b52918b5af)

And that's where I'm stuck: while working with Swift I see notifications
coming from ceilometermiddleware to Panko and available in Ceilometer
via event-list, but no alarm is being triggered in Aodh.

So, could someone explain to me what I am doing wrong, or am I missing
something?


I think the devstack plugins currently don't set up the Ceilometer pieces
needed to use events in Aodh.

Note that Aodh doesn't query Panko, but listens for events on the alarm.all topic by
default. I'm guessing the Ceilometer conf/pipeline has to be tweaked to send
events to Aodh somehow.

Maybe this [1] has to be done manually.

[1] https://docs.openstack.org/developer/aodh/event-alarm.html#configuration
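
For illustration, a generic sketch (not taken from the gists above) of what an
Aodh event alarm for Swift notifications can look like when created through the
Aodh v2 REST API; the endpoint, token, event type and webhook URL are
assumptions for the example only:

import json

import requests

AODH = "http://localhost:8042"             # assumed Aodh endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",  # assumed Keystone token
           "Content-Type": "application/json"}

alarm = {
    "name": "swift-upload-alarm",
    "type": "event",
    # Assumed event type emitted by ceilometermiddleware for Swift requests.
    "event_rule": {
        "event_type": "objectstore.http.request",
        "query": [],
    },
    "alarm_actions": ["http://localhost:8080/alarm-webhook"],  # illustrative webhook
    "repeat_actions": False,
}

resp = requests.post(AODH + "/v2/alarms", headers=HEADERS, data=json.dumps(alarm))
resp.raise_for_status()
print(resp.json()["alarm_id"])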

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-18 Thread Mehdi Abaakouk

Thanks Joe for all these details. I can see that Monasca is still
not able to switch to the new lib, for some very good reasons.

But according to your comment on https://review.openstack.org/#/c/420579/ :


I don't think that anyone currently using Monasca wants to accept either
of those options so we need to find a way to maintain the current data
guarantees while using the async behaviour of the new client library.
That takes time and engineering effort to make that happen.  Is there
anyone in the community willing to put in the effort to help build and
test these new features at scale?


Nobody has plans to fix these issues soon.

And about this, from the same review:


On another topic I'm curious what new features are you looking to get
out of the new library.  Is there anything we can do to help you get the
capabilities you want with the existing client?


I don't think asking other projects to use a deprecated and unsupported
lib version in new code is good; it's just adding fresh technical debt.

So, I agree with gordc: perhaps you should stay with the old and
unsupported lib, and let others use the supported one.


On Tue, Jan 17, 2017 at 11:58:25PM +, Keen, Joe wrote:

Tony, I have some observations on the new client based on a short term
test and a long running test.

For short term use it uses 2x the memory compared to the older client.
The logic that deals with receiving partial messages from Kafka was
completely rewritten in the 1.x series and with logging enabled I see
continual warnings about truncated messages.  I don’t lose any data
because of this but I haven’t been able to verify if it’s doing more reads
than necessary.  I don’t know that either of these problems are really a
sticking point for Monasca but the increase in memory usage is potentially
a problem.



Long term testing showed some additional problems.  On a Kafka server that
has been running for a couple weeks I can write data in but the
kafka-python library is no longer able to read data from Kafka.  Clients
written in other languages are able to read successfully.  Profiling of
the python-kafka client shows that it’s spending all its time in a loop
attempting to connect to Kafka:

     ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      27615    0.086    0.000    0.086    0.000 {method 'acquire' of 'thread.lock' objects}
      43152    0.250    0.000    0.385    0.000 types.py:15(_unpack)
      43153    0.135    0.000    0.135    0.000 {_struct.unpack}
48040/47798    0.164    0.000    0.165    0.000 {len}
      60351    0.201    0.000    0.201    0.000 {method 'read' of '_io.BytesIO' objects}
    7389962   23.985    0.000   23.985    0.000 {method 'keys' of 'dict' objects}
        738  104.931    0.000  395.654    0.000 conn.py:560(recv)
        738   58.342    0.000  100.005    0.000 conn.py:722(_requests_timed_out)
        738   97.787    0.000  167.568    0.000 conn.py:588(_recv)
    7390071   46.596    0.000   46.596    0.000 {method 'recv' of '_socket.socket' objects}
    7390145   23.151    0.000   23.151    0.000 conn.py:458(connected)
    7390266   21.417    0.000   21.417    0.000 {method 'tell' of '_io.BytesIO' objects}
    7395664   41.695    0.000   41.695    0.000 {time.time}



I also see additional problems with the use of the deprecated
SimpleConsumer and SimpleProducer clients.  We really do need to
investigate migrating to the new async only Producer objects while still
maintaining the reliability guarantees that Monasca requires.
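
For illustration, a minimal sketch of how the newer kafka-python (>= 1.x)
KafkaProducer can keep at-least-once semantics despite being asynchronous, by
requiring full acks and blocking on the returned future; the broker address and
topic are assumptions, and this is not Monasca code:

from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         acks='all',   # wait for all in-sync replicas
                         retries=3)

future = producer.send('metrics', b'{"name": "cpu.idle", "value": 99.0}')
try:
    metadata = future.get(timeout=10)  # block until the broker confirms the write
    print(metadata.topic, metadata.partition, metadata.offset)
except KafkaError:
    # The message was not durably written; the caller can retry or fail loudly.
    raise
finally:
    producer.flush()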


On 12/13/16, 10:01 PM, "Tony Breeds" <t...@bakeyournoodle.com> wrote:


On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:


I don’t know, yet, that we can.  Unless we can find an answer to the
questions I had above I’m not sure that this new library will be
performant and durable enough for the use cases Monasca has.  I’m fairly
confident that we can make it work but the performance issues with
previous versions prevented us from even trying to integrate so it will
take us some time.  If you need an answer more quickly than a week or
so,
and if anyone in the community is willing, I can walk them through the
testing I’d expect to happen to validate the new library.


Any updates Joe?  It's been 10 days and we're running close to Christamas
so
at this rate it'll be next year before we know if this is workable.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-10 Thread Mehdi Abaakouk

The library's final release is really soon, and we are still blocked on
this topic. If this is not solved, we will once again release an
unusable driver in oslo.messaging.

I want to point out that people currently use the kafka driver in
production with 'downstream patches' that have been ready for a year to make it
work.

We recently removed the kafka dep from oslo.messaging to be able to merge
some of these patches. But we can't remove the experimental flag from
this driver until the dependency issue is solved.

So what can we do to unblock this situation?

On Fri, Jan 06, 2017 at 02:31:28PM +0100, Mehdi Abaakouk wrote:

Any progress?

On Thu, Dec 08, 2016 at 08:32:54AM +1100, Tony Breeds wrote:

On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:

I wasn’t able to set a test up on Friday and with all the other work I
have for the next few days I doubt I’ll be able to get to it much before
Wednesday.


It's Wednesday so can we have an update?

Yours Tony.


--
Mehdi Abaakouk
mail: sil...@sileht.net

irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-06 Thread Mehdi Abaakouk

Any progress?

On Thu, Dec 08, 2016 at 08:32:54AM +1100, Tony Breeds wrote:

On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:

I wasn’t able to set a test up on Friday and with all the other work I
have for the next few days I doubt I’ll be able to get to it much before
Wednesday.


It's Wednesday so can we have an update?

Yours Tony.


--
Mehdi Abaakouk
mail: sil...@sileht.net

irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telemetry] Asking a question to our users

2017-01-04 Thread Mehdi Abaakouk

I haven't answered before because it's just one question, and I'm not sure
what is really interesting for us. But what about something like:

Which backend/dispatcher/publisher do they use? Did they switch to Gnocchi?

On Wed, Jan 04, 2017 at 03:31:04PM +0100, Julien Danjou wrote:

On Mon, Dec 26 2016, Julien Danjou wrote:

Nothing? :)


Hi folks,

The foundation is offering the opportunity to ask a question to users:

  "I wanted to offer you the opportunity to ask a question on the
  upcoming User Survey, which launches on or before Feb. 1. Each PTL of
  a project with significant adoption can submit one question. You can
  decide which audience to serve the question to - those who are USING,
  TESTING, or INTERESTED in your project (or some combination of these)."

Would anyone have an interesting idea/question to ask our beloved users?

Cheers,


--
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Mehdi Abaakouk

On Mon, Dec 12, 2016 at 11:00:41AM +, Duncan Thomas wrote:

It's a soft dependency, like most of the vendor specific dependencies - you
only need them if you're using a specific backend. We've loads of them in
cinder, under a whole bunch of licenses. There was a summit session
discussing it that didn't come to any firm conclusions.


I have take a look to some other soft dependencies (I perhaps miss some):

pywbem: LGPLv2+
vmemclient: APACHE 2.0
hpe3parclient: APACHE 2.0
purestorage: BSD 2-Clause
rbd/rados:  LGPL 2.1

They are all at least OSI-approved licenses.

Anyway, I'm just sad to see that two open-source projects (Cinder and
DRBD) now need a library without an OSI-approved license to talk to each other.

Regards,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Mehdi Abaakouk

On Mon, Dec 12, 2016 at 11:52:50AM +0100, Thierry Carrez wrote:

That said, it doesn't seem to be listed as a Cinder requirement right
now ? Is it a new dependency being considered, or is it currently flying
under the radar ?


I think this is because this library is not available on PyPI.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-11 Thread Mehdi Abaakouk

Hi,

I have recently seen that the drbdmanage Python library is no longer GPL2 but
needs an end-user license agreement [1].


Is this compatible with the driver policy of Cinder?

[1] 
http://git.drbd.org/drbdmanage.git/commitdiff/441dc6a96b0bc6a08d2469fa5a82d97fc08e8ec1

Regards

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 09:39:41AM +0100, Mehdi Abaakouk wrote:

On Fri, Dec 02, 2016 at 09:29:56AM +0100, Mehdi Abaakouk wrote:

And my bench seems to confirm the perf issue has been solved:


I have updated my requirement review to require >=1.3.1 [1] to solve
the monasca issue.

[1] https://review.openstack.org/404878


And this is the update for all projects:

https://review.openstack.org/#/q/status:open+branch:master+topic:sileht/kafka-update

Nothing should block all of this anymore, except +2/+A :)

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 09:29:56AM +0100, Mehdi Abaakouk wrote:

And my bench seems to confirm the perf issue has been solved:


I have updated my requirement review to require >=1.3.1 [1] to solve
the monasca issue.

[1] https://review.openstack.org/404878

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-02 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 03:29:59PM +1100, Tony Breeds wrote:

On Thu, Dec 01, 2016 at 04:52:52PM +, Keen, Joe wrote:


Unfortunately there’s nothing wrong on the Monasca side so far as we know.
 We test new versions of the kafka-python library outside of Monasca
before we bother to try integrating a new version.  Since 1.0 the
kafka-python library has suffered from crashes and memory leaks severe
enough that we’ve never attempted using it in Monasca itself.  We reported
the bugs we found to the kafka-python project but they were closed once
they released a new version.


So Opening bugs isn't working.  What about writing code?


The bug https://github.com/dpkp/kafka-python/issues/55

Reopening it would be the right solution here.

I can't reproduce the segfault either, and I agree with dpkp that it looks like a
ujson issue.

And my bench seems to confirm the perf issue has been solved
(but not in the version pointed to...):

$ pifpaf run kafka python kafka_test.py
kafka-python version: 0.9.5
...
fetch size 179200 -> 45681.8728864 messages per second
fetch size 204800 -> 47724.3810674 messages per second
fetch size 230400 -> 47209.9841092 messages per second
fetch size 256000 -> 48340.7719787 messages per second
fetch size 281600 -> 49192.9896743 messages per second
fetch size 307200 -> 50915.3291133 messages per second

$ pifpaf run kafka python kafka_test.py
kafka-python version: 1.0.2

fetch size 179200 -> 8546.77931323 messages per second
fetch size 204800 -> 9213.30958314 messages per second
fetch size 230400 -> 10316.668006 messages per second
fetch size 256000 -> 11476.2285269 messages per second
fetch size 281600 -> 12353.7254386 messages per second
fetch size 307200 -> 13131.2367288 messages per second

(1.1.1 and 1.2.5 have also the same issue)

$ pifpaf run kafka python kafka_test.py
kafka-python version: 1.3.1
fetch size 179200 -> 44636.9371873 messages per second
fetch size 204800 -> 44324.7085365 messages per second
fetch size 230400 -> 45235.8283208 messages per second
fetch size 256000 -> 45793.1044121 messages per second
fetch size 281600 -> 44648.6357019 messages per second
fetch size 307200 -> 44877.8445987 messages per second
fetch size 332800 -> 47166.9176281 messages per second
fetch size 358400 -> 47391.0057622 messages per second

Looks like it works well now :)


Just in case, I have updated the bench script a bit to ensure it always works
for me:

--- kafka_test.py.ori   2016-12-02 09:16:10.570677010 +0100
+++ kafka_test.py   2016-12-02 09:06:04.870370438 +0100
@@ -14,7 +14,7 @@
import time
import ujson

-KAFKA_URL = '92.168.10.6:9092'
+KAFKA_URL = '127.0.0.1:9092'
KAFKA_GROUP = 'kafka_python_perf'
KAFKA_TOPIC = 'raw-events'

@@ -24,6 +24,7 @@

def write():
k_client = KafkaClient(KAFKA_URL)
+k_client.ensure_topic_exists(KAFKA_TOPIC)
p = KeyedProducer(k_client,
  async=False,
      req_acks=KeyedProducer.ACK_AFTER_LOCAL_WRITE,


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-01 Thread Mehdi Abaakouk

On Fri, Dec 02, 2016 at 03:19:26PM +1100, Tony Breeds wrote:

On Thu, Dec 01, 2016 at 07:57:37AM +0100, Mehdi Abaakouk wrote:
I think the solution is pretty simple. Just Fix it.  I'm not saying it's easy
but it is *simple*.  We're a group of skilled individuals We have several ways
forward which just haven't been done in the last 8 months.  Waiting doesn't seem
to be a viable course anymore.


I agree the solution is *simple*.


From my POV this really feels more like a social problem then a technical one.
I'd like to remind everyone with a technical stake in this that it's OpenStack
First, Project Team Second[1].  The thing that's best for OpenStack is for the
oslo.mesaging and monasca teams to come together and find a path forward.  It's
really great that this thread has started that process!


No they are no social issue here, I always tried to fix other projects
issues even they are not in my area of expertise when I want to go
forward. But I'm realistic, I can only do this only if the invest and
the time it requires is not too big. And to fix (by code) that one, this
is just too high for me, sorry.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] influxdb driver gate error

2016-12-01 Thread Mehdi Abaakouk



On 2016-12-01 23:48, Sam Morrison wrote:

Using influxdb v1.1 works fine. anything less than 1.0 I would deem
unusable for influxdb. So to get this to work we’d need a newer
version of influxdb installed.


That works for me.


Any idea how to do this? I see they push out a custom ceph repo to
install a newer ceph so I guess we’d need to do something similar
although influx don’t provide a repo, just a deb.


We do this kind of thing in tooz:

https://github.com/openstack/tooz/blob/master/tox.ini#L73
https://github.com/openstack/tooz/blob/master/setup-consul-env.sh

A tarball and setting the PATH is easiest and compatible with more 
platforms.


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Should we drop kafka driver ?

2016-12-01 Thread Mehdi Abaakouk



On 2016-11-30 17:45, Joshua Harlow wrote:

One of the places for gate testing that is still being worked on is
the following: https://github.com/jd/pifpaf/pull/28


I just finished the work on this: https://github.com/jd/pifpaf/pull/35
And I used it in oslo.messaging here: 
https://review.openstack.org/#/c/404802/


We can now run 'tox -epy27-func-kafka' to run the kafka driver tests.

But currently 0 tests pass; the driver just prints a very bad backtrace.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-11-30 Thread Mehdi Abaakouk

On 2016-12-01 01:25, Tony Breeds wrote:
So right now I think the answer is still no, we can't uncap it. To the best of
my knowledge monasca still suffers a substantial performance decrease and the
switch from sync to async producers isn't desired for them.

Joe wrote this all up in:

http://lists.openstack.org/pipermail/openstack-dev/2016-June/098420.html


Unfortunately this comes down to monasca and oslo.messaging getting on the same
page with python-kafka. From a requirements POV we really only care about
co-installability, which uncapping clearly breaks.


I'm aware of all of that; the oslo.messaging patch for the new version has been
ready for 8 months now. And we are still blocked... capping libraries,
whatever the reason, is very annoying and just freezes people's work.


From the API POV python-kafka hasn't broken anything; the API is still
here and documented (and deprecated). What monasca raises is a performance
issue due to how they use the library, and an assumption about how it works
internally. Blocking all projects for that doesn't look fair to me.

As nadya said, we now have users that prefer using an unmerged
patch and the new lib instead of using the upstream-supported
version with the old lib.
This is not an acceptable situation to me, but that's just my thought.


What is the solution to allow the oslo.messaging work, blocked for 8
months, to continue?

So, I'm waiting for the monasca team's input now and hope we can move forward.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we uncap python-kafka ?

2016-11-30 Thread Mehdi Abaakouk



On 2016-11-30 19:11, Nadya Shakhat wrote:

Hi all,

This one [1] is used very intensively actually :) It is not merged because
we cannot just upgrade the python-kafka version because of Monasca. We had some
talks in their channel, but no results still. I suggest the following:
1. On our side we will update this change
2. I will ping the Monasca guys once more and we will proceed with discussions
in the CR

[1] https://review.openstack.org/#/c/332105/


Good news, so I should retitle this thread "Can we uncap python-kafka ?"

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][oslo] (Re: [oslo] Should we drop kafka driver ?)

2016-11-30 Thread Mehdi Abaakouk

On Wed, Nov 30, 2016 at 05:44:25PM +0100, Mehdi Abaakouk wrote:

Also, capping libraries is always a bad idea; this is one more example
of why. The lib was capped due to python-kafka upstream bugs and not
API breakage. Something like != 1.0.0 would be sufficient.


I have proposed to uncap python-kafka https://review.openstack.org/404878

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Should we drop kafka driver ?

2016-11-30 Thread Mehdi Abaakouk



On 2016-11-30 17:45, Joshua Harlow wrote:

IMHO, not just yet, dims and I have been trying to use this driver
recently (for notifications only in my case) and I am more than
willing to try to get the changes needed to get this into a healthy
state (from my understanding dims and friends have been working
through this as well).

One of the places for gate testing that is still being worked on is
the following: https://github.com/jd/pifpaf/pull/28

That will aid with some of the gate testing.


awesome !

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca][oslo] (Re: [oslo] Should we drop kafka driver ?)

2016-11-30 Thread Mehdi Abaakouk



On 2016-11-30 16:56, Davanum Srinivas wrote:
Problem is with the Monasca team having concerns with later python-kafka versions

https://bugs.launchpad.net/oslo.messaging/+bug/1639264


Good point, the bug is 1 month old, but the issue has been known for 7
months.

At this point I think if we want to keep kafka in oslo.messaging we have
to raise this dep and merge this upgrade patch whatever the Monasca status is.

We can't continue to use things that have been deprecated for a year.

Also, capping libraries is always a bad idea; this is one more example
of why. The lib was capped due to python-kafka upstream bugs and not
API breakage. Something like != 1.0.0 would be sufficient.

The deprecated API is still in the last release of python-kafka, so even if it's
slower than with the old version, we should not break anything.

sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Should we drop kafka driver ?

2016-11-30 Thread Mehdi Abaakouk

Hi,

I think my subject is clear :), but I will add some facts that can help
with the decision:


* It uses only deprecated python-kafka API [1] [2]
* It's not python3 compatible [3]
* We still don't have kafka testing in gate 


So, one year after the driver introduction, this one is still in bad shape
and doesn't match the requirements [4].

These reviews looks abandoned/outdated/unmaintained:

[1] https://review.openstack.org/#/c/297994/ 
[2] https://review.openstack.org/#/c/332105/


Other links:

[3] https://review.openstack.org/#/c/404802/
[4] 
http://docs.openstack.org/developer/oslo.messaging/supported-messaging-drivers.html#testing

And of course, we will not drop the code now, but just deprecate it for
removal. 


Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] influxdb driver gate error

2016-11-29 Thread Mehdi Abaakouk



On 2016-11-30 08:06, Sam Morrison wrote:

2016-11-30 06:50:14.969302 | + pifpaf -e GNOCCHI_STORAGE run influxdb
-- pifpaf -e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh
2016-11-30 06:50:17.399380 | ERROR: pifpaf: 'ascii' codec can't decode
byte 0xc2 in position 165: ordinal not in range(128)
2016-11-30 06:50:17.415485 | ERROR: InvocationError:
'/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/run-tests.sh’


You can temporarily pass '--debug' to pifpaf to get the full backtrace.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] New and next-gen libraries (a BCN followup)

2016-11-04 Thread Mehdi Abaakouk

On 2016-11-03 19:27, Joshua Harlow wrote:

*Next-gen oslo.service replacement*

This one may require a little more of a plan on how to make it work,
but the gist is that medhi (and others) has created
https://github.com/sileht/cotyledon which is a nice replacement for
oslo.service that ceilometer is using (and others?) and the idea was
to start to figure out how to move away from (or replace with?)
olso.service with that library.


If people are interested, the doc is here [0]


I'd like to see a spec with some of the details/thoughts on how that
could be possible, what changes would still be needed. I think from
that session that the following questions were raised:

- Can multiprocessing (or subprocess?) be used (instead of os.fork)
- What to do about windows?


I have already solved those two by using multiprocessing, disabling
SIGHUP and writing down the limitations [1].


- Is it possible to create a oslo.service compat layer that preserves 
the oslo.service API but uses cotyledon under the covers to smooth the 
transition/adoption of other projects to cotyledon


Not sure it's easy to do: the cotyledon API encourages users not to create
any Python objects before the process is forked, to ensure we don't have
any rpc/mysql/... connections open and unused, FDs open, or locks acquired
(put anything here that can result in a race when using os.fork()), while
oslo.service forces the user to create objects before the fork occurs. See the
sketch below.
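
For illustration, a minimal sketch of the cotyledon pattern (the worker class
and its content are made up, only the cotyledon calls are real): the worker
object is only instantiated in the child process, after the fork.

    import cotyledon


    class MyWorker(cotyledon.Service):
        """A worker whose body runs only in the child process."""

        def __init__(self, worker_id):
            super(MyWorker, self).__init__(worker_id)
            # Connections, file descriptors, locks, etc. are created here,
            # after the fork, so nothing risky is inherited from the parent.

        def run(self):
            # main loop of the worker goes here
            pass

        def terminate(self):
            # graceful shutdown goes here
            pass


    sm = cotyledon.ServiceManager()
    sm.add(MyWorker, workers=2)
    sm.run()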


- Perhaps in general we should write how an adoption could happen for a 
consuming project (maybe just writing down how ceilometer made the 
switch would be a good start, what issues were encountered, how they 
were resolved...)


This is available here [2].

[0] http://cotyledon.readthedocs.io/en/latest/
[1] http://cotyledon.readthedocs.io/en/latest/non-posix-support.html
[2] http://cotyledon.readthedocs.io/en/latest/oslo-service-migration.html
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Deprecating the Ceilometer API

2016-10-04 Thread Mehdi Abaakouk



On 2016-10-04 18:09, gordon chung wrote:

On 04/10/2016 11:58 AM, Tim Bell wrote:
What would be the impact for Heat users who are using the Ceilometer 
scaling in their templates?


Tim


pretty big. :/


The use-case itself is still supported.

Using Ceilometer alarming or Aodh alarming is transparent from the Heat
template point of view; Heat already makes the API calls to the endpoint
found in Keystone.


For the storage, Heat users can move their storage to Gnocchi and
update their templates to create Gnocchi alarms in Aodh instead of
legacy Ceilometer alarms.


I agree with gordc, this is not a light change. But the old
Ceilometer storage doesn't work at scale: each alarm evaluation takes a lot
of time and CPU to retrieve statistics from the old storage, while it's
very quick when Gnocchi is used.


Keeping the old storage system for the autoscaling use-case doesn't make 
sense to me.


Also, we have had an integration gating job that tests
Heat+Ceilometer+Aodh+Gnocchi for two cycles, while the previous/legacy
Ceilometer scaling system never had functional/integration tests.



--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding gdavoian to oslo-core

2016-10-04 Thread Mehdi Abaakouk

Obviously +1


On 2016-10-03 19:40, Joshua Harlow wrote:

Greetings all stackers,

I propose that we add Gevorg Davoian[1] to the oslo-core[2] team.

Gevorg has been actively contributing to oslo for a while now, both in
helping make oslo better via code contribution(s) and by helping with
the review load when he can. He has provided quality reviews and is
doing an awesome job with the various oslo concepts and helping make
oslo the best it can be!

Overall I think he would make a great addition to the core review team.

Please respond with +1/-1.

Thanks much!

- Joshua Harlow

[1] https://launchpad.net/~gdavoian
[2] https://launchpad.net/oslo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding ozamiatin to oslo-core

2016-10-04 Thread Mehdi Abaakouk

Obviously +1 too

On 2016-10-03 19:42, Joshua Harlow wrote:

Greetings all stackers,

I propose that we add Oleksii Zamiatin[1] to the oslo-core[2] team.

Oleksii has been actively contributing to oslo for a while now, both in
helping make oslo better via code contribution(s) and by helping with
the review load when he can. He has provided quality reviews and is
doing an awesome job with the various oslo concepts and helping make
oslo the best it can be!

Overall I think he would make a great addition to the core review team.

Please respond with +1/-1.

Thanks much!

- Joshua Harlow

[1] https://launchpad.net/~ozamiatin
[2] https://launchpad.net/oslo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [telemetry] [requirements] [FFE] Oslo.db 4.13.3

2016-09-08 Thread Mehdi Abaakouk



On 2016-09-08 16:21, Matthew Thode wrote:

Once it’s in, we’ll trigger another oslo.db release.


The release change is ready: https://review.openstack.org/#/c/367482/

I have tested it against Gnocchi and we don't have any issues anymore.

Thanks all!

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] RPC call not appearing to retry

2016-08-16 Thread Mehdi Abaakouk

Hi,

On 2016-08-15 04:50, Eric K wrote:

Hi all, I'm running into an issue with oslo-messaging PRC call not
appearing to retry. If I do oslo_messaging.RPCClient(transport, target,
timeout=5, retry=10).call(self.context, method, **kwargs) using a topic
with no listeners, I consistently get the MessagingTimeout exception in 
5

seconds, with no apparent retry attempt. Any tips on whether this is a
user error or a bug or a feature? Thanks so much!


About retry, from 
http://docs.openstack.org/developer/oslo.messaging/rpcclient.html:


"By default, cast() and call() will block until the message is 
successfully sent. However, the retry parameter can be used to have 
message sending fail with a MessageDeliveryFailure after the given 
number of retries."


It looks like it retries in the case of MessageDeliveryFailure, not
MessagingTimeout.
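
Something like this shows the difference (topic, method and context are made up):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='some_topic')
    client = oslo_messaging.RPCClient(transport, target, timeout=5, retry=10)

    try:
        client.call({}, 'some_method', arg=1)
    except oslo_messaging.MessagingTimeout:
        # No reply within 'timeout' seconds: the message reached the broker,
        # so 'retry' does not apply here.
        pass
    except oslo_messaging.MessageDeliveryFailure:
        # Sending to the broker failed even after 10 retries.
        pass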


Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral] Library for JWT (JSON Web Token)

2016-06-30 Thread Mehdi Abaakouk



On 2016-06-30 13:07, Renat Akhmerov wrote:

Reason: we need it to provide support for OpenID Connect
authentication in Mistral.


Can't [1] do the job ? (sorry if I'm off-beat)

[1] http://docs.openstack.org/developer/keystone/federation/openidc.html

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Mehdi Abaakouk

On Tue, May 17, 2016 at 09:41:11PM -0700, Joshua Harlow wrote:

Doug Hellmann wrote:

So there are a few options I am seeing so far (there might be more 
that I don't see also), others can hopefully correct me if they are 
wrong (which they might be) ;)


Option #1

Oslo.messaging (and the dispatcher part that does this) stays as is, 
doing at-most-once for RPC (notifications are in a different category 
here so let's not discuss them) and doing at-most-once well and 
battle-hardened (it's current goal) across the various backend drivers 
it supports.


At that point at-least-once will have to done via some other library 
where this kind of semantics can be placed, that might be tooz via 
https://review.openstack.org/#/c/260246/ (which has similar semantics, 
but is not based on a kind of RPC, instead it's more like a 
job-queue).


This is still my favorite option.


Option #2

Oslo.messaging (and the dispatcher part that does this) changes 
(possibly allowing it to be replaced with a different type of 
dispatcher, ie like in https://review.openstack.org/#/c/314732/); the 
default class continues being great at for RPC (notifications are in a 
different category here so let's not discuss them) and doing 
at-most-once well and battle-hardened (it's current goal) across the 
various backend drivers it supports. If people want to provide an 
alternate class with different semantics they are somewhat on there 
own (but at least they can do this).


Issues raised: this though may not be wanted, as some of the 
oslo.messaging folks do not want the dispatcher class to be exposed at 
all (and would prefer to make it totally private, so exposing it would 
be against that goal); though people are already 'hacking' this kind 
of functionality in, so it might be the best we can get at the current 
time?


Exposing the dispatcher will not fix the mistral issue, because since
oslo.msg 5.0 the dispatcher does not talk directly to the driver interface
anymore (that was a design issue). All driver interactions have been
moved into RPCServer and NotificationListener, where things have been
private since the beginning. Thanks to Dmitriy Ukhlov for his amazing
work on this (the server/executor/dispatcher refactoring was a huge
simplification of oslo.messaging internals).


Option #3

Do nothing.

Issues raised: everytime oslo.messaging changes this *mostly* internal 
dispatcher API a project will have to make a new 'hack' to replace it 
and hope that the semantics that it has 'hacked' in will continue to 
be compatible with the various drivers in oslo.messaging. Not IMHO a 
sustainable way to keep on working (and I'd be wary of doing this in a 
project if I was the owner of one, because it's ummm, 'dirty').


Mistral has removed its hack since we broke it with oslo.messaging
5.0.0.


Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Mehdi Abaakouk

On Fri, May 13, 2016 at 02:58:08PM +0200, Julien Danjou wrote:

What's wrong with pymemcache, that we picked for tooz and are using for
2 years now?

 https://github.com/pinterest/pymemcache


Looks like a good alternative.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread Mehdi Abaakouk

- Is anyone interested in using pylibmc in their project instead of
python-memcached?


This is not a real drop-in replacement: pylibmc.Client is not threadsafe
like python-memcached [1]. Also, it's written in C; that shouldn't be a
problem for keystone because you don't use eventlet anymore, but for
projects that still use eventlet no greenlet switch will occur during
memcached IO.

[1] 
http://sendapatch.se/projects/pylibmc/misc.html#differences-from-python-memcached

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-11 Thread Mehdi Abaakouk

On Tue, May 10, 2016 at 03:13:46PM -0400, Zane Bitter wrote:
The alternative proposed was to create some sort of proxy that listens 
for notifications, sanitises them and drops them into the appropriate 
Zaqar queues. So this would be an example that:


* Requires at-least-once delivery semantics
* Is fundamentally a message queue (not a job queue, and not RPC)
* Receives notifications sent from oslo.messaging


You are describing the already existing Notification API (listener part).
It:
* uses 'at-least-once' delivery and allows manually requeuing messages,
* allows reading messages in batch from the queues, and we do not reply to
  messages because this is a queue API.

This is used by Ceilometer to read notifications from all OpenStack
projects. Ceilometer handles duplicates itself. If you want to convert
'undercloud' notifications into 'cloud' notifications, that can be a
solution.
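
For illustration, a rough sketch of such a listener (topic and pool names are
made up; requeuing needs allow_requeue=True and a driver that supports it):

    import oslo_messaging
    from oslo_config import cfg


    class Endpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(event_type, payload)  # real handling code goes here
            # Return NotificationResult.REQUEUE instead to put the message
            # back in the queue.
            return oslo_messaging.NotificationResult.HANDLED


    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()], pool='my-listener-pool',
        allow_requeue=True)
    listener.start()
    listener.wait()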

About sanitizing the notifications, good luck, because nothing is
currently versioned or consistently formatted. Nova has started some work to
produce notification payloads with oslo.versionedobjects, but it is the only project.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-05-04 Thread Mehdi Abaakouk
ed maintenance 
for this lib while currently not many people care about (the whole lib 
not the new API).


I also wonder if other projects have the same needs (that always helps to
design a new API).


Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.messaging dispatching into private/protected methods?

2016-03-08 Thread Mehdi Abaakouk



So during this exploration of this code for the above review it made
me wonder if this is a feature or bug, or if we should at least close
the hole of allowing calling into nearly any endpoint method/attribute
(even non-callable ones to?).
...
Thoughts?


I agree that it doesn't make any sense to call non-callable and
protected/private methods.

So, I'm ok to seal the hole.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL for Newton and beyond

2016-03-04 Thread Mehdi Abaakouk

Hi,

Thanks for all the great work you have done; I have appreciated your
leadership on Oslo,

and special thanks for bringing new people into oslo.messaging ;)

Le 2016-03-03 11:32, Davanum Srinivas a écrit :

Team,

It has been great working with you all as PTL for Oslo. Looks like the
nominations open up next week for elections and am hoping more than
one of you will step up for the next cycle(s). I can show you the
ropes and help smoothen the transition process if you let me know
about your interest in being the next PTL. With the move to more
automated testing in our CI (periodic jobs running against oslo.*
master) and the adoption of the release process (logging reviews in
/releases repo) the load should be considerably less on you.
especially proud of all the new people joining as both oslo cores and
project cores and hitting the ground running. Big shout out to Doug
Hellmann for his help and guidance when i transitioned into the PTL
role.

Main challenges will be to get back confidence of all the projects
that use the oslo libraries, NOT be the first thing they look for when
things break (Better backward compat, better test matrix) and
evangelizing that Oslo is still the common play ground for *all*
projects and not just the headache of some nut jobs who are willing to
take up the impossible task of defining and nurturing these libraries.
There's a lot of great work ahead of us and i am looking forward to
continue to work with you all.

Thanks,
Dims


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][telemetry] gate-ceilometer-dsvm-integration broken

2016-01-04 Thread Mehdi Abaakouk


Hi,

Which had a Depends-On to the devstack change, anyone know why that didn't
fail with the CeilometerAlarmTest.test_alarm before the devstack change
merged?


It seems the test was skipped [1], as it was disabled for another bug [2].


[1]
http://logs.openstack.org/15/256315/2/check/gate-heat-dsvm-functional-orig-mysql/bffccd5/console.html.gz#_2015-12-14_23_33_13_394
[2] https://bugs.launchpad.net/heat/+bug/1523337


This is unrelated; this is an old issue. This bug has already been
fixed in Aodh [1], and Heat has re-enabled the ceilometer tests [2] just
after the fix was merged.


I think we just forgot to set the status of #1523337 (heat side) when
[2] was merged. (I have just set it.)


[1] https://review.openstack.org/#/c/254078/
[2] 
http://git.openstack.org/cgit/openstack/heat/commit/?id=53e16655ab899f56bd0fd5d4997bb27a76be53df


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [oslo.messaging] [fuel] [ha] Is Swift going to support oslo.messaging?

2015-12-01 Thread Mehdi Abaakouk

Hi,


Current scheme supports only one RabbitMQ node with url parameter.


That's not true, you can pass many hosts via the url like that: 
rabbit://user:pass@host1:port1,user:pass@host2:port2/vhost


http://docs.openstack.org/developer/oslo.messaging/transport.html#oslo_messaging.TransportURL
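
For example (hosts and credentials are made up):

    import oslo_messaging
    from oslo_config import cfg

    # Both brokers are listed in a single transport URL; the driver fails
    # over between them.
    url = 'rabbit://user:pass@host1:5672,user:pass@host2:5672/vhost'
    transport = oslo_messaging.get_transport(cfg.CONF, url)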

But this is perhaps not enough for your use-case.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread Mehdi Abaakouk


On 2015-10-27 04:22, me,apporc wrote:
But i found in the changelog history of kombu [1] that heartbeat support was
added in version 2.5.0, so what's the point of ">= 3.0.7"? Thanks.


The initial heartbeat implementation had some critical issues for
oslo.messaging that were fixed as of kombu 3.0.7 and py-amqp 1.4.0.


---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Why the needed version of kombu to support heartbeat is >=3.0.7

2015-10-27 Thread Mehdi Abaakouk



[1] seems just socket timeout issue, and admins can adjust those kernel
params themselves.


Yes, but if you tweak kernel settings, like setting very low TCP
keepalive values, you don't need to have/enable heartbeat.


[2] and [3] are truly a problem with the heartbeat implementation, but it says
the fix is part of py-amqp 1.4.0, and the dependency on kombu was not
specified.
[4] is a bug in kombu's autoretry method which is said to be fixed in kombu
3.0.0; it is not directly related to heartbeat.


As far as I can remember, this is because oslo.messaging doesn't really
require py-amqp but only kombu, so to ensure kombu depends on py-amqp
1.4.0 we have to depend on kombu 3.0.7 (which has amqp>=1.4.0 in its
requirements, I guess).


---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 回复: [Ceilometer][Gnocchi] Gnocchi cannot deal with combined resource-id ?

2015-09-14 Thread Mehdi Abaakouk

Hi,

On 2015-09-12 16:54, Julien Danjou wrote:

On Sat, Sep 12 2015, Luo Gangyi wrote:

 I checked it again, no "ignored" is marked, seems the bug of devstack 
;(


I was talking about that:


https://git.openstack.org/cgit/openstack/ceilometer/tree/etc/ceilometer/gnocchi_resources.yaml#n67

 And it's OK that gnocchi is not perfect now, but I still have some 
worries about how gnocchi deal with or going to deal with 
instance--tapxxx condition.

 I see 'network.incoming.bytes' belongs to resouce type 'instance'.
 But no attributes of instance can save the infomation of tap name.
 Although I can search
 all metric ids from resouce id(instance uuid), how do I distinguish 
them from different taps of an instance?


Where do you see network.incoming.bytes as being linked to an instance?
Reading gnocchi_resources.yaml I don't see that.


It was the case in the past: some metrics were associated with the
instance by error. This is now fixed; they have their own resource type.
But currently these metrics are marked to be ignored by the Ceilometer
dispatcher.


The next step is to re-enable these metrics on the Ceilometer
dispatcher side, but some code needs to be written to extract
the instance name and the tap name from the resource id in a declarative
manner.



Regards,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-20 Thread Mehdi Abaakouk



It looks like the additional features added, in particular the
'oslo_config_project' property, needs to be documented.


I have added some documentation into the keystonemiddleware too: 
https://review.openstack.org/#/c/208965/


---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-20 Thread Mehdi Abaakouk

Hi,


Also to support some of the newer services that don't use paste i
think we should absolutely make it so that the CONF object is passed
to middleware rather than sourced globally. I think gnochhi and zaqar
both fit into this case.


For example, Gnocchi doesn't use paste; the deployer adds middlewares like
this:


   [api]
   middlewares = 
oslo_middleware.request_id.RequestId,oslo_middleware.cors.CORS,keystonemiddleware.auth_token.AuthProtocol


Of course, because of the current issue, we browse the pipeline to pass
the correct options to
keystonemiddleware.auth_token.AuthProtocol to make it work. My change
allows removing this workaround.



The problem i see with what you are saying is that it is mixing
deployment methodologies in a way that is unintended. Paste is
designed to allow deployers to add and remove middleware independent
of configuring the service. This means that at paste time there is no
CONF object unless it's globally registered and this is why most
middlewares allow taking options from paste.ini because if you don't
have global CONF then it's the only way to actually get options into
the middleware.


My change adds a new way that doesn't use the global CONF object but still
reads options
from the application configuration file.


Fixing this problem is always where i loose enthusiasm for removing
global CONF files.


With my solution, if all applications update their paste.ini configuration, we
can remove the global CONF from keystonemiddleware and rely only on options loaded
via paste or via the local oslo.config object.
keystonemiddleware becomes like most middlewares and no longer depends
on the application.


(If you want, I can write a patch to deprecate usage of the global CONF object
once my change is merged, and update the paste.ini of other projects.)


From a developer perspective I feel the solution is for us to
reconsider how we deploy and configure middleware. If you are using
paste those pieces of middleware should each be able to be configured
without any integration with the service underneath it. Otherwise if
your service needs a piece of middleware like auth_token middleware to
operate or relies on oslo.config options like cors then that is not
something that should be controlled by paste.

From a deployer perspective there is no great answer i just want
everything to be in oslo.config.


Yeah, that's the goal: whether the app uses the global object or not, uses paste
or not, etc.,

for the deployer the configuration of middlewares is in oslo.config.


Equally from a deployer perspective this wasn't an issue until aodh
(et al) decided to remove the global CONF object which developers from
all the projects hate but live with. I don't mean to imply we
shouldn't look at better ways to do things, but if you're main concern
is deployers the easiest thing we can do for consistency is add back
the global CONF object or remove paste from aodh. :)


I will be sad to re-add the global CONF object (removing paste is not
really an option for us, I think).


Cheers,

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-06 Thread Mehdi Abaakouk
On Thu, Aug 06, 2015 at 04:25:58PM +, Michael Krotscheck wrote:
 Hi there!
 
 The most recent version of the CORS middleware (~2.4) no longer requires
 the use of Oslo.config, and supports pastedeploy. While using oslo.config
 provides far better features - such as multiple origins - it doesn't
 prevent you from using it in the paste pipeline. The documentation has been
 updated to reflect this.

Yes, but you can't use oslo.config without hardcoding the loading of the
middleware to pass the oslo.config object into the application.

 I fall on the operators side, and as a result feel that we should be using
 oslo.config for everything. One single configuration method across
 services, consistent naming conventions, autogenerated with sane options,
 with tooling and testing that makes it reliable. Special Snowflakes really
 just add cognitive friction, documentation overhead, and noise.

I'm clearly on the operator side too, and I'm just trying to find a solution to
be able to use all middlewares without having to write code for each one
in each application, and to use oslo.config. Zaqar, Gnocchi and Aodh are
the first projects that do not use cfg.CONF and can't load many middlewares
without writing code for each, when middleware should be just something that
a deployer enables and configures. (Our middlewares look more like a lib
than a middleware.)

(Zaqar, Gnocchi and Aodh have written a hack for keystonemiddleware because
it is an essential piece, but other middlewares are broken for them.)

Cheers,
-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-06 Thread Mehdi Abaakouk
 but still can use
oslo.config.


WDYT ?


Cheers,

[1] https://github.com/openstack/gnocchi/blob/master/gnocchi/rest/app.py#L140
[2] https://github.com/openstack/gnocchi/blob/master/gnocchi/service.py#L64-L73
[3] 
https://github.com/openstack/zaqar/blob/87fd1aa93dafb64097f731dbd416c2eeb697d403/zaqar/transport/auth.py#L63
[4] 
https://github.com/openstack/zaqar/blob/87fd1aa93dafb64097f731dbd416c2eeb697d403/zaqar/transport/auth.py#L70
[5] https://review.openstack.org/#/c/208965/
[6] https://review.openstack.org/#/c/209817/


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] oslo.messaging version and RabbitMQ heartbeat support

2015-07-07 Thread Mehdi Abaakouk



Hi,

On 2015-07-01 15:21, Mike Dorman wrote:

As a follow up to the discussion during the IRC meeting yesterday,
please vote for one of these approaches:

1)  Make the default for the rabbit_heartbeat_timeout_threshold
parameter 60, which matches the default in Kilo oslo.messaging.  This
will by default enable the RMQ heartbeat feature, which also matches
the default in Kilo oslo.messaging.  Operators will need to set this
parameter to 0 in order to disable the feature, which will be
documented in the comments within the manifest.  We will reevaluate
the default value for the Liberty release, since the oslo.messaging
default most likely will change to 0 for that release.


Just a small correction:

For the kilo release, the default is 0, not 60, since oslo.messaging
1.8.3, because some operators have reported critical issues
with heartbeat enabled and some versions of py-amqp and kombu, and
because we can't raise the requirements for kilo anymore we have disabled it
by default.

For liberty, we have raised the requirements and we will perhaps re-enable
heartbeat, if nobody reports new issues.


Cheers,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] [mistral] Acknowledge feature of RabbitMQ in oslo.messaging

2015-07-07 Thread Mehdi Abaakouk

Hi,

The RPC API of oslo.messaging does it for you; you don't have to care
about acknowledgement (or anything else done by the driver, because the
underlying pattern used depends on it).


For the work-queue pattern, I guess what you need is to ensure
that the Target doesn't have the server attribute set, and use call
or cast depending on your needs.


It works like this for rabbitmq:
  * the message is acknowledged from the rabbitmq PoV when the worker
starts processing the message
  * when it finishes, it sends a message back to the caller with the result
of the processing, or the raised exception if that doesn't work


On the client side, when you use call it waits for the return.

If you don't need the result or the exception raised during the
message processing, just use cast: it doesn't wait for the return and the
worker doesn't send it.
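
A small sketch of that work-queue usage (topic, server and method names are
made up); every worker consumes the same 'jobs' topic queue and a cast to a
Target without 'server' goes to exactly one of them:

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)

    # Worker side (one of these per worker process): they all consume the
    # same 'jobs' topic queue.
    class JobEndpoint(object):
        def process(self, ctxt, job_id):
            print('processing job', job_id)

    server = oslo_messaging.get_rpc_server(
        transport, oslo_messaging.Target(topic='jobs', server='worker-1'),
        [JobEndpoint()], executor='blocking')
    server.start()  # a real worker then calls server.wait()

    # Producer side (usually another process): no 'server' in the Target,
    # so the cast is delivered to exactly one worker.
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='jobs'))
    client.cast({}, 'process', job_id=42)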


When RPC is needed, acknowledgement after the message has been
processed is not reliable enough to ensure the message has been
processed correctly, and it can lead to stuck messages on the queue.



Otherwise, the Notification API of oslo.messaging allows controlling
acknowledgement or requeueing of messages, but it does not provide method and
endpoint versioning (which allows rolling upgrades, for example), and the
remotely executed methods are hardcoded to match the notification mechanism
of OpenStack.


Cheers,

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


On 2015-07-07 12:13, Renat Akhmerov wrote:

Just to clarify: what we’re looking for is how to implement “Work
queue” pattern described at [1] with oslo messaging. As Nikolay said,
it requires that a message to be acknowledged after it has been
processed.

[1] http://www.rabbitmq.com/tutorials/tutorial-two-python.html

Renat Akhmerov
@ Mirantis Inc.



On 07 Jul 2015, at 15:58, Nikolay Makhotkin nmakhot...@mirantis.com 
wrote:


Hi,

I am using RabbitMQ as the backend and searched oslo.messaging for 
message acknowledgement feature but I found only [1] what is wrong 
using of acknowledgement since it acknowledges incoming message before 
it has been processed (while it should be done only after processing 
the message, otherwise we can lost given message if, say, the server 
suddenly goes down).


So, my questions: does oslo.messaging indeed not support correct 
acknowledgement feature? Or it is impossible to do for oslo.messaging 
paradighm? Or is it somehow connected with that oslo.messaging has to 
support a lot of transport types?


Can't it be implemented somehow in oslo.messaging?

[1]
https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/dispatcher.py#L135



Best Regards,
Nikolay
@Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Expanding the oslo.messaging core team

2015-06-19 Thread Mehdi Abaakouk

+1

On 2015-06-19 14:54, Doug Hellmann wrote:
Excerpts from Davanum Srinivas (dims)'s message of 2015-06-19 07:46:47 
-0400:

Hi Team,

We have had active and continuing contributions [1] from the following
in Blueprints, Reviews, Commits and Summit/Email discussions:
* Ken Giusti
* Oleksii Zamiatin
* Victor Sergeyev

Ken and Oleksii are spearheading important drivers that will help
expand choices of messaging infrastructure in OpenStack to boot and
Victor is helping harden the existing code.

So can we please invite them to join the oslo.messaging core team

Thanks,
Dims

[1] http://stackalytics.com/?module=oslo.messaging



+1 for all three!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Inviting Robert Collins to Oslo core

2015-05-21 Thread Mehdi Abaakouk





I'd like to invite Robert Collins (aka lifeless) as an Oslo core.
Robert has been a long time contributor to a whole bunch of OpenStack
projects and a member of our TC. Robert indicated that he can help
across Oslo projects this cycle, so let's please welcome him. Here's
my +1.


+1

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Exception in rpc_dispatcher

2015-05-07 Thread Mehdi Abaakouk



Hi,

This is a well-known issue when eventlet monkey patching is not done
correctly.
The application must do the monkey patching before anything else, even
before loading any module other than eventlet.


You can find more information here: 
https://bugs.launchpad.net/oslo.messaging/+bug/1288878


Or some examples of how nova and ceilometer ensure that:

 https://github.com/openstack/nova/blob/master/nova/cmd/__init__.py
 
https://github.com/openstack/ceilometer/blob/master/ceilometer/cmd/__init__.py
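
The pattern in those files boils down to something like this (the module path
is illustrative):

    # yourproject/cmd/__init__.py (or the very top of the entry point)
    import eventlet

    # Must run before importing oslo.messaging or anything that pulls in
    # threading/socket, otherwise the patching is only partial.
    eventlet.monkey_patch()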



More recent versions of oslo.messaging already output a better error
message in this case.


Cheers,

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


On 2015-05-07 08:11, Vikash Kumar wrote:

Hi,

   I am getting this error in my agent side. I am getting same message
twice, one after other.

2015-05-07 11:39:28.189 11363 ERROR oslo.messaging.rpc.dispatcher
[req-43875dc3-99a9-4803-aba2-5cff22943c2c ] Exception during message
handling: _oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher 
Traceback

(most recent call last):
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   
File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line

134, in _dispatch_and_reply
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
incoming.message))
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   
File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line

179, in _dispatch
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
localcontext.clear_local_context()
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher   
File
/usr/lib/python2.7/dist-packages/oslo/messaging/localcontext.py, line 
55,

in clear_local_context
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
delattr(_STORE, _KEY)
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher
AttributeError:
_oslo_messaging_localcontext_7a1e392197394132b7bfc1bda8239a82
2015-05-07 11:39:28.189 11363 TRACE oslo.messaging.rpc.dispatcher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJVSxmkCRAYkrQvzqrryAAAcA4P/iTf59F9HbqQF6uuKDM6
HMPWg0PovpzUg0opOMMBE8ZwiQ6B+w5MS3rkwDzbcXfqijDxM8A0BAOwG5/v
iFGfENKPIVm2Y/7iHmg84+MXSKSYDNmsuZc0AOP0i9Ar9D6E8SZbC5hMSfAO
KOZBbVmBP14/KhXesxJPx5nbDknhRPLravV9o/iyMnLSWBGQa80X92G1tkz6
6PI/UCCp1SGyky0eg0ZoZ5+IX3r9UsyjGDRS+le+lQEu4T0e05G1jBnvw9H7
qIo7ecWDSUwwxl7sz2HlgaF0st4bjCtRtSPbbcW2nShKBVbIdAxfncj2O8Ux
PVwk4ZaEdyQ+O2RJp/vq6v9jcNsoh/3jCLojEwUv4BlLS7qEW4Ime0coJoxD
zgC1vdgSojS8pxRto8kh7NJ91MpILRDfm3bJ/bpTGb04Wh4LYGHmoeQMrCex
rPmYgDkWTXUpsqAgwHpP8DZpRXY40hx6yCiWp/1lNLI/CYx4B6fDOOS7Xf8k
kjmRrzV8QriQ+02M9cCWIgLyskAUIWRzEn/ZhtAiQTvaCJEuMvQTHSYMf30J
m1hW2bL5UcuwA+4Or6nxzfF14EDaWZv2dD2hPNvfJ3qMXgCHQoQUVp061sto
p7RPmzWVWuNrwmVIp0JcJSLDGRSifGadDw/3Uiygpo9M9RfUJEpC2RJ85WjU
DSE8
=nSuZ
-END PGP SIGNATURE-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Adding Joshua Harlow to oslo-core

2015-05-07 Thread Mehdi Abaakouk



I always felt that was the case, so +1 of course

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


Le 2015-05-05 16:47, Julien Danjou a écrit :

Hi fellows,

I'd like to propose that we add Joshua Harlow to oslo-core. He is
already maintaining some of the Oslo libraries (taskflow, tooz…) and
he's helping on a lot of other ones for a while now. Let's bring him in
for real!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] release critical oslo.messaging changes

2015-04-23 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


The reviews on the master branch are ready:

oslo.messaging with heartbeat off: 
https://review.openstack.org/#/c/174929/

requirement changes: https://review.openstack.org/#/c/174930/

Once they are merged, I will backport them to stable/kilo and sync the 
requirements to release oslo.messaging 1.8.2 with these new 
requirements.


- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


Le 2015-04-17 15:54, Sean Dague a écrit :

It turns out a number of people are hitting -
https://bugs.launchpad.net/oslo.messaging/+bug/1436769 (I tripped over
it this morning as well).

Under a currently unknown set of conditions you can get into a heartbeat
loop with oslo.messaging 1.8.1 which basically shuts down the RPC bus as
every service is heartbeat looping 100% of the time.

I had py-amqp < 1.4.0, and 1.4.0 seems to have a bug fix for one of the
issues here.

However, after chatting with sileht in IRC this morning it sounded like
the safer option might be to disable the rabbit heartbeat by default,
because this sort of heartbeat storm can kill the entire OpenStack
environment, and is not really clear how you recover from it.

All of which is recorded in the bug.

Proposed actions are to do both of:

- oslo.messaging release with heartbeats off by default (simulates 1.8.0
behavior before the heartbeat code landed)
- oslo.messaging requiring py-amqp >= 1.4.0, so that if you enable the
heartbeating, at least you are protected from the known bug

This would still let operators use the feature, we'd consider it
experimental, until we're sure there aren't any other dragons hidden in
there. I think the goal would be to make it default on again for 
Marmoset.


-Sean

-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJVOQV9CRAYkrQvzqrryAAAUVkP/jV4VGtzM2PMk15FxYxM
WS1b46mu2G/J0U8hOuOcrV5G36KL3nzk3em4VEnSpPfihRLrk1Ufhi9P3Obc
DQyjNuImXUE/z4nx81pCarjk0nGeuoejHexvQP3lLn5lvd/r9nbjHkaUST9Y
yYG7GHq+j24FnQzjP84GS0tRp6DnSMqKs8OwPTg7oyGFUK9tbnkp+LqDRHnx
GqQTnb+yCs5b55VQJLOFf9IN/oPmsfSVYimtgq9MEmCXCLvWIF7AYQJMmy9Q
QG2fj1o/TPEUgOijT/15jDgEePek5EDC6RaNX0YCthUsE70DE/PFF8j1IIez
gojOO7rtkrvEi8f4P1qBbXDE2vOe9f9mYlZHxfAl8tDrT0VIoVTWKAXU/L2H
3MRMhTrnTRRZuyyLtKbIP76U+uCbHWaJaQPW/BJMYRUDAH6eNb2mSvHw3H7k
3BdVtkGRwmDCdoCxbm+T3rud2BiNpwcwmlFlLVqEphLhg/A+KMP+Kufw2DbG
SItejHMdfgAEgb/7xlJ6iKXU6Zy3fqX9ik2beQavlEqhiLtZWDXko4cHgQtd
sCLfg6a3Z394ZIUvGDXjMVX1l+CkoRg1+IBtqTCieqapuTILnsH0YHlmAfOi
dy5t1Cay4Ltgl38u8A9L4GBaVnoDmWaMiCFyHbL5Qit2pTxZkgcxa1SPdlKm
A2qk
=impF
-END PGP SIGNATURE-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] Some backports to stable/kilo

2015-04-09 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


Hi,

All of these patches are only bug fixes, so that's good for me.

So if the others agree, I can release 1.8.2 with these changes once they 
have landed.

We don't have any other changes pending in the kilo branch for now.

Le 2015-04-09 16:12, Li Ma a écrit :

Hi oslo all,

Currently devstack master relies on 1.8.1 release due to requirements
frozen (>=1.8.0, <1.9.0), however, ZeroMQ driver is able to run on
1.9.0 release. The result is that you cannot deploy ZeroMQ driver
using devstack master now due to some incompatibility between
oslo.messaging 1.8.1 and devstack master source.

So I try to backport 4 recent reviews [1-4] to stable/kilo to make
sure it is working. I'll appreciate allowing these backports and make
them into 1.8.2.

[1] https://review.openstack.org/172038
[2] https://review.openstack.org/172061
[3] https://review.openstack.org/172062
[4] https://review.openstack.org/172063

Best regards,


- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJVJpTgCRAYkrQvzqrryAAAzbMP/2RSHctvsFRn2qUD/+OU
kO/YIEN7ft5Zm3HM9zWRc3M+oc4ICV4vsiF3Ylyy5NmtbK51pu1ZbKBT3Dxn
8jLsylUbHWBY1oaik4NH46/e3rXcKrK0V0zkbrN+RhzPqQ/fuNtVT1KUlimH
/evZxosRlYByz9ss4d8Lo1mYsDeUuhjnkI6Hmc919vZAlkSPey12INT61/hs
/9xNipWP5eUuzPSovM1nutK56DRl7HDT2PDP1RQ4kU+qUXCg4+gaArVayKE3
OOn9Snrz7PoX4psaiYlhhqIkfT+ULOI6r0Q2wlgS7laaYXfiV95x1gYXdYRW
1Hm6H5Nnvpb+TTpJsl5einyPT/DC5R+fUIHGWI0mEfBBAYhPBnZFlZdEiaP9
Y/QI1m6Qtq7wU0FEBPjzGEzrk2er2NlSvl0Q5vf5YTUMsdaEpIDiqpp8AnL5
5HvtslyPJuVizN2A/gBFajo34j/0jFqc0xiDTk3bdXSPIG96TgCDYFK55iz+
YQzi4XuCW67FNB/wr9nA/XssmiC+BthWB5giS62h4WRlgj75cx3+0Vop7sSX
fSLiKQyID676I/vS7I245JtSSCsk1/yq+snZvQRnmBrZWUAeLwbDpiEvSBXZ
l9OrVIbrSYCpnk3fNHGD6gWTWH99Q5sQglqBDiBr29tUsnVYDftdDHX9rr5g
sSNl
=YFhB
-END PGP SIGNATURE-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] not running for PTL for liberty cycle

2015-04-07 Thread Mehdi Abaakouk


Thanks a lot for what you have done!

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


Le 2015-04-03 14:50, Doug Hellmann a écrit :

Team,

I have decided not to run for PTL for Oslo for the next cycle.

Serving as PTL for the last three releases has been a rewarding
experience, and I think we’ve made some great strides together as a
team. Now it’s time for me to step down and let someone else take the
lead position. I am still committed to the mission, and I will  be
contributing to Oslo, but I do also want to work on some other
projects that I won’t have time for if I stay in the PTL role. We have
several good candidates for PTL on the team, and I expect us to have
no trouble finding someone willing to step up and run.

Thanks,
Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-06 Thread Mehdi Abaakouk


Hi, just some oslo.messaging thoughts about having multiple 
nova-scheduler processes (this can also apply to any other daemon acting 
as an rpc server).


nova-scheduler uses service.Service.create() to create an rpc server, which 
is identified by a 'topic' and a 'server' (the 
oslo.messaging.Target).
Creating multiple workers like [1] does will result in all workers sharing 
the same identity, because the 'server' is usually set to the 'hostname' 
to make our life easier.
With rabbitmq for example, the 'server' attribute of the 
oslo.messaging.Target is used for a queue name, so you usually end up with 
the following queues created:


scheduler
scheduler.scheduler-node-1
scheduler.scheduler-node-2
scheduler.scheduler-node-3
...

Keeping things as-is means that messages sent to 
scheduler.scheduler-node-1 will be processed by whichever worker is ready 
first, and you will not be able to tell the workers apart from the amqp 
point of view.
The side effect is that if a worker gets stuck, hits a bug, or otherwise 
stops consuming messages, we will not be able to see it: one of the other 
workers will keep making scheduler-node-1 look alive and keep consuming 
new messages, even if all of the workers except one are dead or stuck.


So I think each rpc server (each worker) should have a different 
'server', to get amqp queues like this (see the sketch after the list below):


scheduler
scheduler.scheduler-node-1-worker-1
scheduler.scheduler-node-1-worker-2
scheduler.scheduler-node-1-worker-3
scheduler.scheduler-node-2-worker-1
scheduler.scheduler-node-2-worker-2
scheduler.scheduler-node-3-worker-1
scheduler.scheduler-node-3-worker-2
...
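
As a rough illustration only (this is not the actual nova patch; the topic
and the worker_id parameter are made up), each worker could build its own
Target like this:

    import socket

    import oslo_messaging

    def build_worker_rpc_server(transport, worker_id, endpoints):
        # give every worker its own 'server' so it gets a dedicated queue
        server = '%s-worker-%d' % (socket.gethostname(), worker_id)
        target = oslo_messaging.Target(topic='scheduler', server=server)
        return oslo_messaging.get_rpc_server(transport, target, endpoints,
                                             executor='eventlet')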

Cheers,


[1] https://review.openstack.org/#/c/159382/
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.messaging 1.7.0 released

2015-02-25 Thread Mehdi Abaakouk

The Oslo team is thrilled to announce the release of:

oslo.messaging 1.7.0: Oslo Messaging API

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

Changes in oslo.messaging 1.6.0..1.7.0
--

097fb23 Add FAQ entry for notifier configuration
68cd8cf rabbit: Fix behavior of rabbit_use_ssl
b2f505e amqp1: fix functional tests deps
aef3a61 Skip functional tests that fail due to a qpidd bug
56fda65 Remove unnecessary log messages from amqp1 unit tests
a9d5dd1 Include missing parameter in call to listen_for_notifications
3d366c9 Fix the import of the driver by the unit test
d8e68c3 Add a new aioeventlet executor
4e182b2 Add missing unit test for a recent commit
1475246 Add the threading executor setup.cfg entrypoint
824313a Move each drivers options into its own group
16ee9a8 Refactor the replies waiter code
03a46fb Imported Translations from Transifex
8380ac6 Fix notifications broken with ZMQ driver
b6a1ea0 Gate functionnal testing improvements
bf4ab5a Treat sphinx warnings as errors
0bf90d1 Move gate hooks to the oslo.messaging tree
dc75773 Set the password used in gate
7a7ca5f Update README.rst format to match expectations
434b5c8 Declare DirectPublisher exchanges with passive=True
3c40cee kombu: fix driver loading with kombu+qpid scheme
e7e5506 Ensure kombu channels are closed
3d232a0 Implements notification-dispatcher-filter
8eed6bb Make sure zmq can work with redis

Diffstat (except docs and test files)
-

README.rst |   5 +-
amqp1-requirements.txt |   2 +-
.../locale/fr/LC_MESSAGES/oslo.messaging.po|  18 +-
oslo_messaging/_drivers/amqp.py|   6 +-
oslo_messaging/_drivers/amqpdriver.py  | 160 
++---

oslo_messaging/_drivers/common.py  |   5 +-
oslo_messaging/_drivers/impl_qpid.py   |  45 +++--
oslo_messaging/_drivers/impl_rabbit.py | 196 
++---

oslo_messaging/_drivers/impl_zmq.py|  24 +--
oslo_messaging/_drivers/matchmaker_redis.py|   2 +-
oslo_messaging/_executors/impl_aioeventlet.py  |  75 
oslo_messaging/_executors/impl_eventlet.py |   6 +-
oslo_messaging/conffixture.py  |  16 +-
oslo_messaging/notify/__init__.py  |   2 +
oslo_messaging/notify/dispatcher.py|   9 +-
oslo_messaging/notify/filter.py|  77 
oslo_messaging/notify/listener.py  |   8 +-
oslo_messaging/opts.py |   7 +-
requirements.txt   |   4 +
setup.cfg  |   5 +
tox.ini|  15 +-
42 files changed, 937 insertions(+), 327 deletions(-)


Requirements updates


diff --git a/amqp1-requirements.txt b/amqp1-requirements.txt
index bf8a37e..6303dc7 100644
--- a/amqp1-requirements.txt
+++ b/amqp1-requirements.txt
@@ -8 +8 @@ pyngus>=1.0.0,<2.0.0  # Apache-2.0
-python-qpid-proton>=0.7,<0.8  # Apache-2.0
+python-qpid-proton>=0.7,<0.9  # Apache-2.0
diff --git a/requirements.txt b/requirements.txt
index 352b14a..e6747b0 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -31,0 +32,4 @@ futures>=2.1.6
+
+# needed by the aioeventlet executor
+aioeventlet>=0.4
+trollius>=1.0


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo_log/oslo_config initialization

2015-01-21 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi,

Le 2015-01-21 11:16, Qiming Teng a écrit :

Any hint or sample code to setup logging if I'm abandoning the log
module from oslo.incubator?


You need to do:

  cfg.CONF(name='prog', project='project')
  log.setup(cfg.CONF, 'project')

Example of project that already use both: 
https://github.com/stackforge/gnocchi/blob/master/gnocchi/service.py#L25
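
For completeness, a slightly fuller standalone sketch (the 'myproject'
name is a placeholder, and this assumes a recent oslo.log where
log.register_options() is available):

    import sys

    from oslo_config import cfg
    from oslo_log import log

    CONF = cfg.CONF

    def prepare_logging(argv):
        # register the logging options before parsing CLI/config files
        log.register_options(CONF)
        CONF(argv[1:], project='myproject')
        log.setup(CONF, 'myproject')
        return log.getLogger(__name__)

    LOG = prepare_logging(sys.argv)
    LOG.info('logging configured')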


Cheers,
- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wkYEAREIABAFAlS/qU4JEJZbdE7sD8foAAAQQwCfYN9jFNWp4OsxJts7Elmy
8taVKfYAn1uDtfn0aEJVDzXXbLdACzVxXEsB
=lHLc
-END PGP SIGNATURE-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.messaging 1.5.1 released

2014-12-04 Thread Mehdi Abaakouk


The Oslo team is pleased to announce the release of
oslo.messaging 1.5.1: Oslo Messaging API

This release reintroduces the 'fake_rabbit' config option.

For more details, please see the git log history below and
 https://launchpad.net/oslo.messaging//+milestone/1.5.1

Please report issues through launchpad:
 https://bugs.launchpad.net/oslo.messaging/



Changes in openstack/oslo.messaging  1.5.0..1.5.1

712f6e3 Reintroduces fake_rabbit config option
554ad9d Imported Translations from Transifex

  diffstat (except docs and test files):

 .../locale/de/LC_MESSAGES/oslo.messaging.po   |  8 ++--
 .../locale/en_GB/LC_MESSAGES/oslo.messaging.po|  8 ++--
 .../locale/fr/LC_MESSAGES/oslo.messaging.po   |  8 ++--
 oslo.messaging/locale/oslo.messaging.pot  |  8 ++--
 oslo/messaging/_drivers/impl_rabbit.py| 13 
-
 tests/drivers/test_impl_rabbit.py | 19 
+++

 6 files changed, 55 insertions(+), 9 deletions(-)

  Requirements updates: N/A


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.messaging 1.5.0 released

2014-12-02 Thread Mehdi Abaakouk


The Oslo team is pleased to announce the release of oslo.messaging 
1.5.0.


This release includes a number of fixes for rabbit driver timeouts that
were not always respected, and starts using the kombu API instead of custom
code where possible. It also introduces the first ZMQ unit tests,
and the ZMQ and AMQP 1.0 drivers got some bug fixes and improvements.

For more details, please see the git log history below and
https://launchpad.net/oslo.messaging/+milestone/1.5.0

 Please report issues through launchpad:
https://launchpad.net/oslo.messaging



Changes in openstack/oslo.messaging  1.4.1..1.5.0

cb78f2e Rabbit: Fixes debug message format
2dd7de9 Rabbit: iterconsume must honor timeout
bcb3b23 Don't use oslo.cfg to set kombu in-memory driver
f3370da Don't share connection pool between driver object
c7d99bf Show what the threshold is being increased to
30a5b12 Wait for expected messages in listener pool test
c8e02e9 Dispath messages in all listeners in a pool
cb9 Reduces the unit tests run times
b369826 Set correctly the messaging driver to use in tests
7bce31a Always use a poll timeout in the executor
f1c7e78 Have the timeout decrement inside the wait() method
e15cd36 Renamed PublishErrorsHandler
80e62ae Create a new connection when a process fork has been detected
42f55a1 Remove the use of PROTOCOL_SSLv3
a8d3da2 Add qpid and amqp 1.0 tox targets
a5ffc62 Updated from global requirements
ee6a729 Imported Translations from Transifex
973301a rabbit: uses kombu instead of builtin stuffs
d9d04fb Allows to overriding oslotest environ var
0d49793 Create ZeroMQ Context per socket
7306680 Remove unuseful param of the ConnectionContext
442d8b9 Updated from global requirements
5aadc56 Add basic tests for 0mq matchmakers
30e0aea Notification listener pools
7ea4147 Updated from global requirements
37e5e2a Fix tiny typo in server.py
10eb120 Switch to oslo.middleware
a3ca0e5 Updated from global requirements
6f76039 Activate pep8 check that _ is imported
f43fe66 Enable user authentication in the AMQP 1.0 driver
f74014a Documentation anomaly in TransportURL parse classmethod
f61f7c5 Don't put the message payload into warning log
70910e0 Updated from global requirements
6857db1 Add pbr to installation requirements
0088ac9 Updated from global requirements
f1afac4 Add driver independent functional tests
a476b2e Imported Translations from Transifex
db2709e zmq: Remove dead code
a87aa3e Updated from global requirements
3dd6a23 Finish transition to oslo.i18n
969847d Imported Translations from Transifex
63a5f1c Imported Translations from Transifex
1640cc1 qpid: Always auto-delete queue of DirectConsumer
6b405b9 Updated from global requirements
d4e64d8 Imported Translations from Transifex
487bbf5 Enable oslo.i18n for oslo.messaging
8d242bd Switch to oslo.serialization
f378009 Cleanup listener after stopping rpc server
5fd9845 Updated from global requirements
ed88623 Track the attempted method when raising UnsupportedVersion
93283f2 fix memory leak for function _safe_log
2478675 Stop using importutils from oslo-incubator
3fa6b8f Add missing deprecated group amqp1
f44b612 Updated from global requirements
f57a4ab Stop using intersphinx
bc0033a Add documentation explaining how to use the AMQP 1.0 driver
d2b34c0 Imported Translations from Transifex
4b57eee Construct ZmqListener with correct arguments
3e6c0b3 Message was send to wrong node with use zmq as rpc_backend
e0adc7d Work toward Python 3.4 support and testing
d753b03 Ensure the amqp options are present in config file
214fa5e Add contributing page to docs
f8ea1a0 Import notifier middleware from oslo-incubator
41fbe41 Let oslotest manage the six.move setting for mox
ff6c5e9 Add square brackets for ipv6 based hosts
b9a917c warn against sorting requirements
7c2853a Improve help strings

  diffstat (except docs and test files):

 .testr.conf|   2 +-
 openstack-common.conf  |   4 -
 .../locale/de/LC_MESSAGES/oslo.messaging.po|  34 +-
 .../LC_MESSAGES/oslo.messaging-log-critical.po |  21 -
 .../en_GB/LC_MESSAGES/oslo.messaging-log-error.po  |  14 +-
 .../en_GB/LC_MESSAGES/oslo.messaging-log-info.po   |  21 -
 .../LC_MESSAGES/oslo.messaging-log-warning.po  |  21 -
 .../locale/en_GB/LC_MESSAGES/oslo.messaging.po |  33 +-
 .../fr/LC_MESSAGES/oslo.messaging-log-error.po |  27 ++
 .../locale/fr/LC_MESSAGES/oslo.messaging.po|  38 ++
 oslo.messaging/locale/oslo.messaging-log-error.pot |   9 +-
 oslo.messaging/locale/oslo.messaging.pot   |  31 +-
 oslo/messaging/_drivers/amqp.py|  34 +-
 oslo/messaging/_drivers/amqpdriver.py  |  62 ++-
 oslo/messaging/_drivers/base.py|   9 +
 oslo/messaging/_drivers/common.py  |  32 +-
 oslo/messaging/_drivers/impl_fake.py   |  37 +-
 oslo/messaging/_drivers/impl_qpid.py   |  10 +-
 oslo/messaging/_drivers/impl_rabbit.py   

Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Mehdi Abaakouk

Hi,

I think the main issue is the behavior of the API
of oslo-incubator/openstack/common/service.py, especially:

 * ProcessLauncher.launch_service(MyService())

And then MyService has this behavior:

class MyService:
    def __init__(self):
        # CODE DONE BEFORE os.fork()
        pass

    def start(self):
        # CODE DONE AFTER os.fork()
        pass

So if an application creates a FD inside MyService.__init__ or 
before ProcessLauncher.launch_service, it will be shared between the
processes and we get this kind of issue...

For the rabbitmq/qpid driver, the first connection is created when the 
rpc server is started or when the first rpc call/cast/... is done.


So if the application doesn't do that inside MyService.__init__ or 
before ProcessLauncher.launch_service, everything works as expected.


But if the issue is raised, I think this is an application issue (rpc 
stuff done before the os.fork()).
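
To illustrate, here is a minimal sketch of the fork-safe pattern (the
topic/server names are made up, and this is not actual nova or neutron
code):

    from oslo_config import cfg
    import oslo_messaging

    class MyForkSafeService(object):
        def __init__(self):
            # runs in the parent, before os.fork(): no FD or connection here
            self.server = None

        def start(self):
            # runs in each child, after os.fork(): safe to connect now
            transport = oslo_messaging.get_transport(cfg.CONF)
            target = oslo_messaging.Target(topic='my-topic', server='my-host')
            self.server = oslo_messaging.get_rpc_server(
                transport, target, endpoints=[], executor='eventlet')
            self.server.start()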


For the amqp1 driver case, I think this is the same thing; it seems 
to do lazy creation of the connection too.


I will take a look at the neutron code to see if I find any rpc usage
before the os.fork().


Personally, I don't like this API, because the behavior difference between
'__init__' and 'start' is too implicit.

Cheers,

---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


Le 2014-11-24 20:27, Ken Giusti a écrit :

Hi all,

As far as oslo.messaging is concerned, should it be possible for the
main application to safely os.fork() when there is already an active
connection to a messaging broker?

I ask because I'm hitting what appears to be fork-related issues with
the new AMQP 1.0 driver.  I think the same problems have been seen
with the older impl_qpid driver as well [0]

Both drivers utilize a background threading.Thread that handles all
async socket I/O and protocol timers.

In the particular case I'm trying to debug, rpc_workers is set to 4 in
neutron.conf.  As far as I can tell, this causes neutron.service to
os.fork() four workers, but does so after it has created a listener
(and therefore a connection to the broker).

This results in multiple processes all select()'ing the same set of
networks sockets, and stuff breaks :(

Even without the background process, wouldn't this use still result in
sockets being shared across the parent/child processes?   Seems
dangerous.

Thoughts?

[0] https://bugs.launchpad.net/oslo.messaging/+bug/1330199


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi,


I will take a look at the neutron code to see if I find any rpc usage
before the os.fork().


With a basic devstack that uses the ml2 plugin, I found at least two rpc 
usages before the os.fork() in neutron:


https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/l3_router_plugin.py#L63
https://github.com/openstack/neutron/blob/master/neutron/services/loadbalancer/drivers/common/agent_driver_base.py#L341


It looks like all the service plugins run in the parent process, and only 
the core plugin (ml2) runs in the children... rpc_workers applies only to 
the core plugin rpc code, not to the other plugins.


If I understand correctly, the neutron server does:
- load and start the wsgi server (in an eventlet thread)
- load the neutron manager, which:
  * loads the core plugin
  * loads the service plugins
  * starts the service plugins (some rpc connections are created)
- fork neutron-server (by using the ProcessLauncher)
- drop the DB connections in the children to ensure oslo.db creates new 
  ones (it looks like this releases connections already opened by the 
  parent process)
- start the core plugin rpc connections in the children

It seems the wsgi code and the service plugins live in each child but do 
nothing...


I think ProcessLauncher is really designed to bootstrap many identical 
processes with a parent process that does nothing (except starting and 
stopping the children).


For oslo.messaging, perhaps we need to document when an application can 
fork and when it is too late to fork a process that uses oslo.messaging.



Cheers,

- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht
-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJUdKaqCRAYkrQvzqrryAAAi0AP/1d4SefHSQV/iX3tJtIs
uIxlalJqkicRKYmvoNP29S+uqpVrSvKbno92f4lygeuruJ5h/jt5KMgKD1WP
rCROoZlsOWYgnQ6nx+C2YsXire6cPu+hv8rqNSX9qZJGKkwBfRS/gNtHXfeL
Jm6CNgft18Nj1PB+RykhZf+gB1bjJT0lSYzi9z3se1d9R6AFEi9tcEQq4BsA
gXA5Qm6lBRuHflFL1h9XbTPrKxPRpxEvDfHJeu2rv8HlEL1zyXjJ/JIFO87x
P6i+H7FVIntvumdGthMJnfnqp+O96l2OW1KZwb0SFH34DgMbY3COY0mHXBV6
+ZuTcQvflDa7EZfHGhuTUn2YsXRdUuY+Fopds2wUYrgi5BK+5aTdIiPXsTk0
1Ju68PB4PHXngP8pu+mcqh+54XDQRlMAPBfT6kOAy1RtQ1K7U9zqI7qR6Znu
PyYvRhNo6Z9Hg0qzFPbYWL0GpESGN0A6bQ8s0iPrlOGzrZxvOoo2ynxx2aHu
ArOrzuJBiPgBXY+5QyeHDBePfMSumbU/wtwlAY8H5ecRDxbqG+X9bEPzWClF
NAOaDTyn0mao7myZh81+oUaMhIY7W0eYVcO9Gum07RoKNTUWd++eQjP0soeh
2z+zS9JlmYUaE5VZmi2EC0uwfrm/KkvJVA2rFE+F9mBjk1XxuJRLGK/MShPu
m+jk
=lfg+
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Neutron] Fork() safety and oslo.messaging

2014-11-25 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256




Mmmm... I don't think it's that clear (re: an application issue).  I
mean, yes - the application is doing the os.fork() at the 'wrong'
time, but where is this made clear in the oslo.messaging API
documentation?
I think this is the real issue here:  what is the official guidance
for using os.fork() and its interaction with oslo libraries?

In the case of oslo.messaging, I can't find any mention of os.fork()
in the API docs (I may have missed it - please correct me if so).
That would imply - at least to me - that there is _no_ restrictions of
using os.fork() together with oslo.messaging.


Yes, I agree we should add a note on that in oslo.messaging (and perhaps 
in oslo.db too).


Also, the os.fork() is done by the service.ProcessLauncher of 
oslo-incubator, and that is not (yet) documented. But once the 
oslo.service library is released, it will be.



But in the case of qpid, that is definitely _not_ the case.

The legacy qpid driver - impl_qpid - imports a 3rd party library, the
qpid.messaging API.   This library uses threading.Thread internally,
we (consumers of this library) have no control over how that thread is
managed.  So for impl_qpid, os.fork()'ing after the driver is loaded
can't be guaranteed to work.   In fact, I'd say os.fork() and
impl_qpid will not work - full stop.


Yes, I have tried it; I have seen what happens and I can now confirm 
that too, unfortunately :( And this can occur with any driver if 
the 3rd party library
doesn't work across an os.fork().

For the amqp1 driver case, I think this is the same thing; it seems 
to do lazy creation of the connection too.




We have more flexibility here, since the driver directly controls when
the thread is spawned.  But the very fact that the thread is used
places a restriction on how oslo.messaging and os.fork() can be used
together, which isn't made clear in the documentation for the library.

I'm not familiar with the rabbit driver - I've seen some patch for
heatbeating in rabbit introduce threading, so there may also be an
implication there as well.


Yes, we need to check that.

Personally, I don't like this API, because the behavior difference 
between

'__init__' and 'start' is too implicit.



That's true, but I'd say that the problem of implicitness re:
os.fork() needs to be clarified at the library level as well.



I agree.

I will write the documentation patch for oslo.messaging.

Cheers,

- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht
-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJUdKtPCRAYkrQvzqrryAAAqCIP/RvONngQ1MOKXEcktcku
Ok1Lr9O12O3j/zESkXaKInJbqak0EjM0FXnoXeah4qPbSoJYZIfztmCOtX4u
CSdhkAWhFqXdpHUtngXImHeB3P/eRH0Vo7R3PAmUbv5VWkn+lwcO+Ym1g79Z
vCJbcwVpldGiTDRJRDAPPb14UakJbZJGnkRDgYscNlBG+alzLw0MsaqnJ7LS
8Yj4YkXSgthpHLF2N8Yem9r7Lh7CbYLKzlhOylgAJTd8gpGGtncwWMwYJvKc
lsMJNY34PMiNkPk1a+VSlrWcPJpafBl3aOBbrIpmMSpMe9pXC/yHW2nrtGez
VXxliFpqQ7kA5827AuhPAM8EzeMUDetLhZvLxzqY7f/SlaoQ/s/9VhfemmHv
d4wT8uiayrWSMdXVUJZcMUdM2XlJDdObokMI0ZQKQYX8OhKQL8LdaHR2xr6B
OjS4Mp4+/W4Y9wMUFqlRyGnW1LLwCFYWHpyKlhXKmYSSdKTn5L7Pcvmmfw8d
JzDcMxfKCBnM4mNRzlBqYV4/ysb6FNMUZwu+D1YxCVHmAH2O1/oODujNJFkZ
gSWAmh9kYawJKbbS0Lh7nkOJs1iOnYxG0IQmz61sffg8T2FrpbH4FNWh1/+a
mQhmYWH2L5noJIwncVQSloSRuoSWLj9rfeiTIHjq2ZnTUD5tbXK6S5dTvv4m
4bij
=G9oX
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Le 2014-11-17 15:26, James Page a écrit :

This was discussed in the oslo.messaging summit session, and
re-enabling zeromq support in devstack is definately on my todo list,
but I don't think the should block landing of the currently proposed
unit tests on oslo.messaging.


I would like to see these tests landed too, even if we need to install redis 
or whatever and start them manually. This will help a lot in reviewing zmq 
stuff and ensuring fixed things are not broken again.


- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJUajTzCRAYkrQvzqrryAAAN3AP+QEdd4/kc0y+6WB4d3Tu
g19EfSLR/1ekSBK0AeBc7z7hlDh5wVnQF1t0cm4Kv/fg2+59+Kjc0FhoBeDR
DbOe75vlJTkkUIK+RgPiFLm2prjV7oHQVA7x5E75IhewG+jlLtPm47Wj2b12
wRpeIJC3ofR8OETZ6yxr8NVUvdEWrQk+E2XfDrs3SC55RMYl+so9/FxVlR4y
qwg2EKyhBvjCF8B7j0f3kZqrOCUTi00ivLEN2t+gqCA1WDm7o0cqSVLGvqDW
+HvgJTnVeCu9F+OgsSjpfrVcAiWsF4K5sxZtLv76fLL75simDVG04gOTi5ZL
UtZ2HSQGHrdamTz/qu9FckdhMWoGeUq9XeJf1ulCqJ/9Q4GWlh3KwM/h0hws
A3lKBRxwdiG4afkddhXH3CXa2WyN/genTEaitbk0rk0Q6Q0dumiLPC+P5txB
Fpn1DgwXYMdKVOVWGhUuKVtVWHN35+bJIaGXA/j9MuzEVyTkxhQsOl2aC992
SrQzLvOE9Ao9o4zQCChDnKPfVg8NcxFsljnf55uLBCWQT6zrKNLL18EY1JvL
kacwKipFWyW4TGYQc33ibV66353W8WY6L07ihDFWYo5Ww0NTWtgNM2FUpM2L
QgiP9DcGsOMJ+Ez41uXVLzPueal0KCkgXFbl4Vrrk5PflTvZx8kaXf8TTbei
Kcmc
=hgqJ
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] status of rabbitmq heartbeat support

2014-11-17 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


Hi,

Many people want heartbeat support in the rabbit driver of 
oslo.messaging (https://launchpad.net/bugs/856764).


We have several possible approaches for adding this feature:

- Putting all the heartbeat logic into the rabbitmq driver 
  (https://review.openstack.org/#/c/126330/).
  The patch uses a python thread to do the work and doesn't care which 
  oslo.messaging executor is used.
  But this patch is also the only already-written patch that adds heartbeat 
  for all kinds of connections and that works.


- As we quickly discussed at the summit, we can make the oslo.messaging 
  executor responsible for triggering the heartbeat method of the driver 
  (https://review.openstack.org/#/c/132979/, 
  https://review.openstack.org/#/c/134542/).
  At first glance this sounds good, but the executor is only used for the 
  server side of oslo.messaging, so for the client side this doesn't solve 
  the issue.

Or just another thought:

- Moving the executor setup/start/stop from the MessageHandlingServer 
  object to the Transport object (note: 1 transport instance == 1 driver 
  instance).
  The 'transport' becomes responsible for registering 'tasks' into the 
  'executor', and the tasks would be 'poll and dispatch' (one for each 
  rpc/notification server created, like we have now) plus a background 
  task for the driver (the heartbeat in this case).
  So when we set up a new transport, it would automatically schedule the 
  work to do on the underlying driver, without needing to know whether 
  this is the client or the server side.
  This would also help a driver do background tasks within oslo.messaging 
  (I know the AMQP 1.0 driver has the same kind of need, currently handled 
  with a python thread inside the driver too).
  This looks like a bigger piece of work.


So I think if we want a quick resolution of the heartbeat issue,
we need to land the first solution when it's ready 
(https://review.openstack.org/#/c/126330/)


Otherwise any thoughts/comments are welcome.

Regards,

- --
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJUajZICRAYkrQvzqrryAAAdoAQAKGrsCUGIKqmGc2VDpQ4
r5iJO1U+6Tq/BVTch70kSAZ9X0FToor8Zwf6/QIv/1f95r9KapOEtmIP2i+f
9qIcuO0U6yFABiOcp+2XPPTo4zWUPUlZf0+KH28MvGcIaulS3t+k+Z2BObIO
ZZ+chjadg2CVxFL0WeeSk0U7FdWDUl3/Jm+gA04+cUv/yUDBqo1UCcdLqKz6
/VmvPjnEUtYvityHNuoytPo9Na6RS7fa9UPyAOJJhp577QQGZzfpMwV/AY6c
7OfOABHINvmzB7YMiEhE/nOcu3sxrIbp7lMAvdPHxtpHd90BBLquxoPpbBvo
ajKDAw6dPLLL6QYTRUIk4xbN0tQXbkQ/l1/9gV38c6x1HfxkIB8XSVNSNnq2
CAsTq7jWfka08R3dtcLlq9zR7Kv0ouqMvvR0SXcMjASJd/WonBD38zMCOc1Z
6puM1keEaXtmiKyj+WzDkLu0DTEvHdHiTSzazJIqqGbbGIBuhMiTwreRjL4P
LkQFnZciL38n98lBbG5JIo8YKrQhAI1c1/vj3+2q2olot13ExmJYIzaf1ZFF
4QyxcUAeR3dVbxkSHU89xv17uImuNw/klUsLCV7hsfZw1lm+HZyW+OHTOzMu
PymYsaJewIOmO3YZMu7F1bm8hDq7O0Gax1Yh0aaVUl/NsXsoCR5dYTE4KYQl
AZ3k
=xSP6
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



Le 2014-11-17 22:53, Doug Hellmann a écrit :

That’s a good goal, but that’s not what I had in mind for in-tree 
functional tests.




An interesting idea that might be useful that taskflow implemented/has 
done...


The examples @ 
https://github.com/openstack/taskflow/tree/master/taskflow/examples 
all get tested during unit test runs to ensure they work as expected. 
This seems close to your 'simple app' (where app is replaced with 
example), it might be nice to have a similar approach for 
oslo.messaging that has 'examples' that are these apps that get ran to 
probe the functionality of oslo.messaging (as well as useful for 
documentation to show people how to use it, which is the other usage 
taskflow has for these examples)


The hacky example tester could likely be shared (or refactored, since 
it probably needs it), 
https://github.com/openstack/taskflow/blob/master/taskflow/tests/test_examples.py#L91


Sure, that would be a good way to do it, too.


We already have some work done in that direction. Gordon Sim has written 
some tests that use only the public API to test a driver: 
https://github.com/openstack/oslo.messaging/blob/master/tests/functional/test_functional.py


You just have to set the TRANSPORT_URL environment variable to start 
them.


I'm working on running them on a devstack vm for the rabbit, qpid and 
amqp 1.0 drivers; the infra patch that adds the experimental jobs has just 
landed: 
https://review.openstack.org/#/c/130370/


I have two other patches waiting to make it work:
* https://review.openstack.org/#/c/130370/
* https://review.openstack.org/#/c/130437/

So if the zmq driver support in devstack is fixed, we can easily add a new 
job to run them in the same way.



- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht
-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wkYEAREIABAFAlRq6p4JEJZbdE7sD8foAAAWnACdHPwDAbga4mfP/tIL1Z9q
A0w2zvAAnA/tvfXnAJO4a2n4TKiZYiVGbUdT
=BVDs
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Meeting time

2014-10-08 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



- Mondays at 1600 UTC [1]: #openstack-meeting-alt, #openstack-meeting-3
- Thursdays at 1600 UTC [2]: #openstack-meeting-3
- Thursdays at 1700 UTC [3]: #openstack-meeting-3
- Fridays at 1600 UTC (current time, may be ok when DST ends) [4]:
#openstack-meeting-alt, #openstack-meeting-3



I vote for Mondays at 1600 UTC

- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJUNPnLCRAYkrQvzqrryAAAYrsP/3YV2hsDPye3LN8msmKh
XuQrVXrWsJtwFezfd0nxZXjEcoWzOS4eRWUh3Khw6FM0AISTDL1yQH9CReEW
ZPY60NPBunhdCYJIsIwVfOyQH6ucc7Z1nqwt30Nrb0eEMYvyrM2EYnFbkmkS
QzyshpCn7wGVjzHm6DCE/orMNkggSNehUySgFoUkdfmEcwkkXyCtCBXRLrOB
x9Of8tSPnwWRUELLQaI8V+xmkpf0cbcwJatz5X4OKa4Enjw/sdlGlnSv12xo
1HkmAXr30gpkI4DjqIbNdVASaKt+C0Y6YQlIALI/R6qzADxCo6IJ84ITAPlE
iNM6uQ837I5yV65sYqYIwSIIlG/AurFOjqmLDfXwHCdE4DrRCF193yBVRlHY
wWKhHSKoQ3KUBCZAQUdR13oxOOIKEgkUTYRhuq51dy4lZsAz69faLVUGWLSh
aoMmGBJCDOWkgwYDTpcTj+bH8aj4SaN6GhMfx8l4ddZmpLPepbzOgAg/HjnY
6+HAx3ZnNUdyDET9HFvgfF7jNf1bCeHzukd6eJ6uId+M6ctkuDPkb2ADAGu3
73CBJphvO1AzeJSykupgXWRln4qoUf4u4PExaqCL6sYWqYLuS7KC+K2fAQSV
/80rvlywoKRPJHVsTYKc96ageKwRtQOML1uZxuuVZUUV33fUPfH3+FBqMgUb
1kyS
=9Ly5
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-19 Thread Mehdi Abaakouk


Hi,

For me, the lack of tests is why reviews are not getting done, even though 
some of them are critical.


Another, less problematic, issue with this driver is that it relies on 
eventlet directly, whereas all the eventlet code should be localized in 
oslo.messaging._executor.impl_eventlet.


Otherwise, I have no strong opinion on whether to push the zmq driver to 
another repo or not; if we really have people to maintain this driver,
I'm sure the code will be merged.

Having the testing issue fixed for K-1 sounds like a good milestone, to see 
whether we are able to maintain it, whether or not we split the repo.


Regards,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread Mehdi Abaakouk
+1

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [zmq] [oslo.messaging] Running devstack with zeromq

2014-06-19 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


Hi,

Le 2014-06-19 00:30, Ben Nemec a écrit :

On 06/18/2014 05:45 AM, Elena Ezhova wrote:

So I wonder whether it is something the community is interested in 
and, if

yes, are there any recommendations concerning possible implementation?


I can't speak to the specific implementation, but if we're going to 
keep

the zmq driver in oslo.messaging then IMHO it should be usable with
devstack, so +1 to making that work.


Currently the zmq driver has really bad test coverage, and the driver has, 
I think, been broken for a while.


Bugs like [1] or [2] make me think that nobody can use it currently.

[1] https://bugs.launchpad.net/oslo.messaging/+bug/1301723
[2] https://bugs.launchpad.net/oslo.messaging/+bug/1330460

Also, an oslo.messaging rule is that a driver must not force the use of a 
particular eventloop library, but this one heavily uses eventlet. So only 
the eventlet executor can work with it, not the
blocking one or any future executor.

I guess if someone is interested, the first step is to fix the zmq 
driver, remove the eventlet stuff and write unit tests for it
before trying integration, and to raise bugs that should be caught by 
unit/functional testing.


If nobody is interested in zmq, perhaps we should just 
drop it, deprecate it, or mark it as broken.


Just my thought

Cheers,
- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht
-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJTotejCRAYkrQvzqrryAAAcrgP/0uvNSE6BkNx3RfEyx9j
9VFJTEvgdFf8XiQvnoL5Yr0Qdh/A+YtU1/ID4cMHTHVtsoNRN4QNIVM31SbV
YtbFQqUfZdF7vXrH/d3GFvgJfwyll5lCZ0OZAD+igGBEKgPv1z1pw82Ld3Em
pdfAzsih5dvyycGiyYS2fiuOBawpW7unb7eEJTMKgaBmrHX+mzWTVlGnOpOx
I3czlFls/S9gd85notkwNytHI1cnGByrZhFXWDE20G2LYwnZ/Nhhx6oTeuTB
s5vgUuuG1HozXf7ntpzchuabjnozQ4RIVbqpCvuDoqPujX3+1H7ZdkJVHJ5J
M+anZWjDZ1SqSRZp9GICv5B7NOsbDcmQNiUZ4s+X+9Mks5IVM0iRC5KJWFE8
59gMCxd78dEasbMee+aps1UbNYwUBRjDm7NlsU1yYUyVZNFODGa1gbA3erFs
2JInw6vEsDoFzixoX6AZJZCFIEnCV1ku8ioEITv6O4mI3y4BtieU4uMEfuZq
z81OuNxAbNCyg0XujAj3GTWB4gGzrBBczMsoHd4jLTepIeyugpDZy0d7Ltsb
TAGjvbU93q12/Uii3C4G6irEUal5cu0dZxhIYFRcsxAKIQI97MVt/SfB5aLK
8c/bDzsnaXLH1Ri6TbGMMb3QjXyp/gx2O7KXYImH0IjeTZpFkU1/Tq5SmxYA
yOLg
=Oeed
-END PGP SIGNATURE-
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging] Question of oslo.messaging notificaiton listener

2014-06-11 Thread Mehdi Abaakouk


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi,

Le 2014-06-11 06:04, Lu, Lianhao a écrit :

When we're debugging  a ceilometer bug #1320420, we find that for the
oslo messaging
notification listener, if we have multiple endpoints registered through
oslo.messaging.get_notification_listener(), and one of the endpoints
raise an exception,
that would stop all other endpoints processing that notification. Is
it designed so purposely
or is it a bug?


All endpoints should be processed.
The next endpoints are skipped only when an endpoint asks for the message 
to be requeued, and that can only occur when no exception is raised and 
the endpoint method returns
oslomsg.NotificationResult.REQUEUE.

So it looks like a bug...
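
For illustration, a minimal endpoint sketch (the event type is a
placeholder, and the import name matches recent releases where the
package is importable as oslo_messaging):

    import oslo_messaging as oslomsg

    class MyEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'compute.instance.delete.end':
                # explicitly ask for a requeue: the remaining endpoints
                # are then skipped for this message
                return oslomsg.NotificationResult.REQUEUE
            # otherwise the message is handled and the next endpoints run
            return oslomsg.NotificationResult.HANDLED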

- ---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

-BEGIN PGP SIGNATURE-
Version: OpenPGP.js v.1.20131017
Comment: http://openpgpjs.org

wsFcBAEBCAAQBQJTmEciCRAYkrQvzqrryAAAcx0P/i4hlY+0Ls1lWXgNIbK5
Nforwf+cuVAewPXIBnkJQ5de5RkOsGOl3jeQAj8xXaShuKSeEwpz/lKAWDuC
1F7ZBTCJsFu7JNaTqGvKR3Lt3PefSilgPMdAk7742BFKkbyVklN4sNxk6oX2
ij2IMslEiDNqStRBGOvNcZHAdP11k4pbqPl+Py74wNZwHZR6F2DMtaVpOAj9
AE741gA2f6N+O8mcW5MBfzbXwnJ9bsX3qbNwmnMLCY/W/UZJzxU1iQfUdRU9
GMBpfksO+P9mEHVKksXS0C+Swg/9xb03SIROTb41w6s7Wim9DdmgPcFzI6e1
jlAcs6OcAuK91rndVz7jkmUa0w6o2YVTrcstoM8yScx1XY4WCcT3jdZUpyLU
7U8tMyP4Owd7Z5GOSHDT1BrQK/EhEUSHMzYmiqvFEV17GBwrbLvDzg2Lg3Yp
Ki6uL+sh7jpeKmOLbyq0MrYfcXaJXBoZ0itpk8IXtmuMtGln2XtqQ7btGsqR
9SbhZ88eUPqtAz5DPvYxlscE5+Lrn61MbgloYuvYro/uNgoopqlFx26JKq7f
YjJ4nBuuG2cnNbPlN8OcGga5Tj29yqXcnYXAv2fdQ4UWYpYfj4iUWheNkjnv
8k1D5mVImt9Y4U/7yWILYSnTIz9anNNQ9KrH0Yr4+YjZLQqMWet0u3yLNR90
esEf
=WF4M
-END PGP SIGNATURE-
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] using transport_url disables connection pooling?

2014-04-11 Thread Mehdi Abaakouk


Hi,

Le 2014-04-10 22:02, Gordon Sim a écrit :

If you use the transport_url config option (or pass a url directly to
get_transport()), then the amqp connection pooling appears to be
disabled[1]. That means a new connection is created for every request
send as well as for every response.

So I guess rpc_backend is the recommended approach to selecting the
transport at present(?).


To keep the same behavior as before, yes.

Also, there is a pending review to fix that (multiple amqp hosts and 
connection pooling when using a transport url):


https://review.openstack.org/#/c/78948/
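
For reference, a minimal sketch of both ways to select the transport (the
url is a placeholder and the import names match recent releases):

    from oslo_config import cfg
    import oslo_messaging

    # driven by the config (rpc_backend and the driver options): keeps
    # the existing connection pooling behavior
    transport = oslo_messaging.get_transport(cfg.CONF)

    # explicit url: the case discussed in this thread
    transport_from_url = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')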


Cheers,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues when running unit tests in OpenStack

2014-04-08 Thread Mehdi Abaakouk

Hi,

Le 2014-04-08 17:11, Ben Nemec a écrit :

It should be possible to switch back to testr for the py33 tests if we
use the regex syntax for filtering out the tests that don't work in
py3.  The rpc example is
https://github.com/openstack/oslo-incubator/blob/master/tox.ini#L21
and if you look at what tests2skip.py does in the tripleo tempest
element it should be an example of how to filter out failing tests:
https://github.com/openstack/tripleo-image-elements/blob/master/elements/tempest/bin/run-tempest#L110


Unfortunately, it won't work because testr/subunit needs to load all the 
python files to compute the test list and then filter it with the 
regexes.
But in python3, not all the files can be loaded yet. This is why nosetests 
is currently used.


Regards,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-11 Thread Mehdi Abaakouk
On Mon, Mar 10, 2014 at 05:15:08AM -0400, Eoghan Glynn wrote:
 
 Folks,
 
 Time for some new blood on the ceilometer core team.
 
  * Ildikó co-authored the complex query API extension with Balazs Gibizer
and showed a lot of tenacity in pushing this extensive blueprint
through gerrit over multiple milestones.

+1 

  * Nadya has shown much needed love to the previously neglected HBase
driver bringing it much closer to feature parity with the other
supported DBs, and has also driven the introduction of ceilometer
coverage in Tempest.

+1

Cheers,

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][wsme][pecan] Need help for ceilometer havana alarm issue

2014-03-05 Thread Mehdi Abaakouk
Hi,

On Thu, Mar 06, 2014 at 10:44:25AM +0800, ZhiQiang Fan wrote:
 I already check the stable/havana and master branch via devstack, the
 problem is still in havana, but master branch is not affected
 
 I think it is important to fix it for havana too, since some high level
 application may depend on the returned faultstring. Currently, I'm not
 sure whether the master branch fixes it in pecan or wsme, or in ceilometer itself
 
 Is there anyone can help with this problem?

This is a duplicate bug of https://bugs.launchpad.net/ceilometer/+bug/1260398

This one has already been fixed; I have marked havana as affected so we
think about it if we cut a new havana version.

Feel free to prepare the backport.

Regards,

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][keystone] status of quota class

2014-02-25 Thread Mehdi Abaakouk
On Wed, Feb 19, 2014 at 10:27:38AM -0600, Kevin L. Mitchell wrote:
 On Wed, 2014-02-19 at 13:47 +0100, Mehdi Abaakouk wrote:
 
 Of course; anyone can propose a blueprint.  Who will you have work on
 the feature?
 
  ie: add a new API endpoint to set a quota_class to a project, store that
  into the db and change the quota engine to read the quota_class from the
  db instead of the RequestContext.
 
 Reading the quota class from the db sounds like a bad fit to me; this
 really feels like something that should be stored in Keystone, since
 it's authentication-related data.  Additionally, if the attribute is in
 Keystone, other services may take advantage of it.  The original goal of
 quota classes was to make it easier to update the quotas of a given
 tenant based on some criteria, such as the service level they've paid
 for; if a customer upgrades (or downgrades) their service level, their
 quotas should change to match.  This could be done by manually updating
 each quota that affects them, but a single change to a single attribute
 makes better sense.

Thanks for your comments,

This is exactly what I have understood and what I need, and I agree:
the keystone approach looks better to me too, but it is perhaps
a bit more complicated to get accepted. This information is a kind of
metadata associated with a project and/or a domain that should be
returned with the token validation, like the service catalog.

I have found a not-yet-accepted blueprint on this subject:
https://blueprints.launchpad.net/keystone/+spec/service-metadata

The funny part is that the API example uses quota-class as the metadata key :)

But whatever the approach, if a solution is accepted,
I'm sure I have people to work on this.


Regards, 
-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] status of quota class

2014-02-19 Thread Mehdi Abaakouk
Hi, 

I have recently dug into the quota class code in nova and some related
subjects on the ML, and discovered that the quota class code exists but is
not usable.

An API V2 extension exists to manipulate quota classes; these are
stored in the database.

The quota driver engine handles quota classes too, and expects the
'quota_class' attribute of the nova RequestContext to be set.

But 'quota_class' is never set when a nova RequestContext is created.

The quota class API V3 has recently been removed due to the unfinished work:
https://github.com/openstack/nova/commit/1b15b23b0a629e00913a40c5def42e5ca887071c


So my question is: what is the plan to finish the 'quota class' feature? 

Can I propose a blueprint for the next cycle to store the mapping between
a project and a quota_class in nova itself, to finish this feature? 

ie: add a new API endpoint to set a quota_class for a project, store that
in the db, and change the quota engine to read the quota_class from the
db instead of from the RequestContext.



Best Regards, 

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-04 Thread Mehdi Abaakouk
On Tue, Feb 04, 2014 at 01:11:10PM -0800, Dan Smith wrote:
  Whats the underlying problem here? nova notifications aren't 
  versioned?  Nova should try to support ceilometer's use case so
  it sounds like there is may be a nova issue in here as well.
  
  Oh you're far from it.
  
  Long story short, the problem is that when an instance is detroyed,
  we need to poll one last time for its CPU, IO, etc statistics to
  send them to Ceilometer. The only way we found to do that in Nova
  is to plug a special notification driver that intercepts the
  deletion notification in Nova, run the pollsters, and then returns
  to Nova execution.
 
 Doesn't this just mean that Nova needs to do an extra poll and send an
 extra notification? Using a special notification driver, catching the
 delete notification, and polling one last time seems extremely fragile
 to me. It makes assumptions about the order things happen internally
 to nova, right?
 
 What can be done to make Ceilometer less of a bolt-on? That seems like
 the thing worth spending time discussing...

We don't have to add a new notification, but we do have to add some new
data to the nova notifications,
at least to the delete instance notification, so that the ceilometer
nova notifier can be removed.

A while ago, I registered a blueprint that explains which data is
missing from the current nova notifications:

https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification
https://wiki.openstack.org/wiki/Ceilometer/blueprints/remove-ceilometer-nova-notifier

Regards, 
-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

