[openstack-dev] [requirements] cross project testing

2016-07-22 Thread Matthew Thode
One of the things that seems to happen now and then is that an update to
requirements breaks the gate for other projects in some way.  One thing that
helps is cross-project testing; though we do that on a one-off basis, I'd
like to see this testing become more codified.  I have a review out that
does a (very) hackish proof of concept; I've tested both functional and
unit testing, and both pass.

https://review.openstack.org/#/c/345011/

(I saved the functional test log; I don't have the unit test link...)
http://logs.openstack.org/11/345011/5/check/gate-requirements-pep8/5afcdf9/console.html

So, my question to put to other projects is: which of your tests do we
hit when requirements break things for you?  This list will hopefully be
used to figure out what we can test our changes against so as not to cause
the breakage.
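The idea above can be sketched in a few lines: given a consumer project's requirement specifiers, check whether a proposed pin in requirements would violate any of them. This is a hedged, stdlib-only illustration, not the actual project-config jobs; the function names and the tiny version parser are assumptions for the sketch.

```python
# Hedged sketch: would a proposed requirements pin break a consumer
# project's specifiers?  Names and the minimal version parser are
# illustrative; real tooling would use full PEP 440 parsing.

def parse_version(v):
    """Turn '1.10.2' into (1, 10, 2) for tuple comparison."""
    return tuple(int(p) for p in v.split("."))

def satisfies(pinned, specifier):
    """specifier is a single clause like '>=1.2', '==1.2.3' or '<2.0'."""
    for op in (">=", "<=", "==", "<", ">"):
        if specifier.startswith(op):
            want = parse_version(specifier[len(op):])
            have = parse_version(pinned)
            return {">=": have >= want, "<=": have <= want,
                    "==": have == want, "<": have < want,
                    ">": have > want}[op]
    raise ValueError("unsupported specifier: %s" % specifier)

def breaks_consumers(pinned, consumer_specs):
    """Return the consumer specifiers a proposed pin would violate."""
    return [s for s in consumer_specs if not satisfies(pinned, s)]

# e.g. bumping a library to 4.0.0 against one consumer's specifiers:
print(breaks_consumers("4.0.0", [">=3.7.0", "<4.0.0"]))  # -> ['<4.0.0']
```

A real cross-project job would then run the consumer's unit/functional tests with the proposed requirements installed, which is what the proof-of-concept review does.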

Any notes on this would be appreciated :D

-- 
Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] ssl re-encryption in octavia

2016-07-22 Thread Brandon Logan
I do not believe it is in, and I don't know if anyone is working on
it.  I believe it has been pushed down the priority stack, but someone might
correct me if I'm wrong.

Thanks,
Brandon
On Fri, 2016-07-22 at 16:00 -0700, Akshay Kumar Sanghai wrote:
> Hi,
> I saw in specs of kilo that ssl re-encryption will be introduced in
> later phase. Is the ssl re-encryption feature available in the mitaka
> release? I understand ssl offload is available, but I want to try the
> ssl re-encryption on octavia lbaas. 
> This link refers to v1
> probably https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL. Please
> redirect me to any documentation if ssl re-encryption is there.
> 
> 
> Thanks
> Akshay Sanghai


Re: [openstack-dev] [Congress] Congress horizon plugin - congressclient/congress API auth issue - help

2016-07-22 Thread Anusha Ramineni
Hi Aimee,

Thanks for the investigation.

I remember testing the congress client with v3 password-based authentication,
which worked fine, but I never tested with token-based.

Please go ahead and fix it if you think there is any issue.
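For reference, the v2 vs v3 difference that keeps tripping clients up is visible in the shape of the token request each Identity API expects. Below is a hedged, stdlib-only sketch of the two request bodies (these follow the Keystone v2.0/v3 API shapes; the endpoint comments and credential values are made up for illustration):

```python
import json

# Hedged sketch: v2 vs v3 Keystone token-request payloads.  A client
# that only builds the v2 shape cannot authenticate against v3, which
# is roughly the failure mode being debugged in this thread.

def v2_password_auth(tenant, user, password):
    # POSTed to e.g. http://<keystone>:5000/v2.0/tokens
    return {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": user,
                                             "password": password}}}

def v3_password_auth(project, user, password, domain="default"):
    # POSTed to e.g. http://<keystone>:5000/v3/auth/tokens
    return {"auth": {
        "identity": {"methods": ["password"],
                     "password": {"user": {"name": user,
                                           "domain": {"id": domain},
                                           "password": password}}},
        "scope": {"project": {"name": project,
                              "domain": {"id": domain}}}}}

print(json.dumps(v3_password_auth("demo", "admin", "secret"), indent=2))
```

In practice one would use keystoneauth1 session/auth plugins rather than hand-built payloads, but the structural difference above is the core of the v2/v3 split.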

On 22-Jul-2016 9:38 PM, "Aimee Ukasick"  wrote:

> All - I made the change to the auth_url that Anusha suggested.
> Same problem as before: "Cannot authorize API client"
> 2016-07-22 14:13:50.835861 calling policies_list = client.list_policy()
> 2016-07-22 14:13:50.836062 Unable to get policies list: Cannot
> authorize API client.
>
> I used the token from the log output to query the Congress API with
> the keystone v3 token - no issues.
> curl -X GET -H "X-Auth-Token: 18ec54ac811b49aa8265c3d535ba0095" -H
> "Cache-Control: no-cache" "http://192.168.56.103:1789/v1/policies"
>
> So I really think the problem is that the python-congressclient
> doesn't support identity v3.
> I thought it did, but then I came across this:
> "support keystone v3 api and session based authentication "
> https://bugs.launchpad.net/python-congressclient/+bug/1564361
> This is currently assigned to Anusha.
> I'd like to start work on it since I am becoming familiar with keystone v3.
>
> Thoughts?
>
> aimee
>
>
>
>
> On Fri, Jul 22, 2016 at 8:07 AM, Aimee Ukasick
>  wrote:
> > Thanks Anusha! I will retest this today. I guess I need to learn more
> > about Horizon as well - thanks for pointing me in the right direction.
> >
> > aimee
> >
> >
> >
> > On Fri, Jul 22, 2016 at 6:30 AM, Anusha Ramineni  wrote:
> >> Hi Aimee,
> >>
> >> I think devstack by default configures horizon to use v3.
> >> For v2 authentication, from the logs, auth_url doesn't seem to be set
> >> explicitly to the v2 auth_url.
> >>
> >> I have always set an explicit v2 auth_url, which worked fine.
> >> E.g. auth_url = 'http://:5000/v2.0' for v2 authentication.
> >>
> >> I have raised a patch to take the auth_url from horizon settings
> >> instead of from the request.
> >> https://review.openstack.org/#/c/345828/1
> >>
> >> Please set an explicit v2 auth_url as mentioned above in
> >> OPENSTACK_KEYSTONE_URL in /openstack_dashboard/local/local_settings.py
> >> and restart the apache2 server.  Then v2 authentication should go
> >> through fine.
> >>
> >> For v3, the relevant code for v3 authentication needs to be added in
> >> contrib/horizon, as presently it is hardcoded to use only v2.  But yes,
> >> the code from the plugin model patch is still a WIP, so it doesn't work
> >> for v3 authentication, I guess.  I'll have a look at it and let you
> >> know.
> >>
> >>
> >> Best Regards,
> >> Anusha
> >>
> >> On 21 July 2016 at 21:56, Tim Hinrichs  wrote:
> >>>
> >>> So clearly an authentication problem then.
> >>>
> >>> Anusha, do you have any ideas?  (Aimee, I think Anusha has worked with
> >>> Keystone authentication most recently, so she's your best bet.)
> >>>
> >>> Tim
> >>>
> >>> On Thu, Jul 21, 2016 at 8:59 AM Aimee Ukasick
> >>>  wrote:
> 
>  The Policy/Data Sources web page throws the same errors. I am
>  planning to recheck direct API calls using v3 auth today or tomorrow.
> 
>  aimee
> 
>  On Thu, Jul 21, 2016 at 10:49 AM, Tim Hinrichs  wrote:
>  > Hi Aimee,
>  >
>  > Do the other APIs work?  That is, is it a general problem
>  > authenticating, or
>  > is the problem limited to list_policies?
>  >
>  > Tim
>  >
>  > On Wed, Jul 20, 2016 at 3:54 PM Aimee Ukasick
>  > 
>  > wrote:
>  >>
>  >> Hi all,
>  >>
>  >> I've been working on Policy UI (Horizon): Unable to get policies
>  >> list (devstack) (https://bugs.launchpad.net/congress/+bug/1602837)
>  >> for the past 3 days. Anusha is correct - it's an authentication
>  >> problem, but I have not been able to fix it.
>  >>
>  >> I grabbed the relevant code in congress.py from Anusha's horizon
>  >> plugin model patchset (https://review.openstack.org/#/c/305063/3)
> and
>  >> added try/catch blocks and logging statements (at the error level,
>  >> because I haven't figured out how to set the horizon log level).
>  >>
>  >>
>  >> I am testing the code on devstack, which I cloned on 19 July 2016.
>  >>
>  >> With both v2 and v3 auth, congressclient.v1.client is created.
>  >> The failure happens trying to call
>  >> congressclient.v1.client.Client.list_policies().
>  >> When using v2 auth, the error message is "Unable to get policies
> list:
>  >> The resource could not be found"
>  >> When using v3 auth, the error message is "Cannot authorize API
> client"
>  >>
>  >> I am assuming that congressclient.v1.client.Client is
>  >>
>  >>
>  >>
> https://github.com/openstack/python-congressclient/blob/master/congressclient/v1/client.py
>  >> and that 

Re: [openstack-dev] [Neutron][networking-ovn][networking-odl] Syncing neutron DB and OVN DB

2016-07-22 Thread Zhou, Han
Thanks Numan & Amitabha, this may be the right direction to solve the bug [1].

It basically implements the Neutron API as an async call, queuing the request
within a DB transaction.  Ordering is preserved by the journal-thread "lock",
which is implemented with the PROCESSING state plus a DB transaction using
with_for_update(), with the help of validation functions for dependency
checking (e.g. the same object cannot be updated by two journal threads at the
same time, etc.).
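The pick-and-lock step described above can be sketched with nothing but the stdlib. This is a hedged illustration only: real networking-odl uses SQLAlchemy with with_for_update(), while here the atomic claim is emulated in sqlite3 (which has no SELECT ... FOR UPDATE) via an UPDATE whose rowcount tells us whether this thread won; the table and column names are invented, not the actual schema.

```python
import sqlite3
import threading

# Hedged sketch of a journal thread claiming the oldest PENDING row and
# atomically marking it PROCESSING, with a dependency check that skips
# rows whose object is already being processed by another thread.

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE journal (id INTEGER PRIMARY KEY, obj TEXT,"
           " op TEXT, state TEXT DEFAULT 'PENDING')")
db.execute("INSERT INTO journal (obj, op) VALUES ('port-1', 'create')")
db.execute("INSERT INTO journal (obj, op) VALUES ('port-1', 'update')")
db.commit()
lock = threading.Lock()  # sqlite3 needs external serialization; a real DB would not

def claim_next():
    """Claim the oldest PENDING row whose object is not already PROCESSING."""
    with lock:
        row = db.execute(
            "SELECT id, obj, op FROM journal WHERE state = 'PENDING' "
            "AND obj NOT IN (SELECT obj FROM journal WHERE "
            "state = 'PROCESSING') ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        cur = db.execute("UPDATE journal SET state = 'PROCESSING' "
                         "WHERE id = ? AND state = 'PENDING'", (row[0],))
        db.commit()
        return row if cur.rowcount == 1 else None

first = claim_next()   # -> (1, 'port-1', 'create')
second = claim_next()  # -> None: the update to port-1 must wait for the create
```

The second call returning None is the dependency check in action: the update to port-1 stays queued until the create finishes and its row leaves the PROCESSING state.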

However, I didn't figure out how errors are handled with this approach.  For
example, a port is created in Neutron but the ODL controller fails to create
it, although the journal thread successfully sent the request to ODL.  And I
didn't see how the port states (UP & DOWN) are handled (I didn't see any call
to ProvisioningBlock, so does it mean a port will just be UP from the
beginning?).  It would be great if anyone can help answer these questions.

[1] https://bugs.launchpad.net/networking-ovn/+bug/1605089

Thanks,
Han Zhou

From: Numan Siddique 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, July 22, 2016 at 4:51 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [Neutron][networking-ovn][networking-odl] Syncing 
neutron DB and OVN DB

Thanks for the comments Amitabha.
Please see comments inline

On Fri, Jul 22, 2016 at 5:50 AM, Amitabha Biswas 
> wrote:
Hi Numan,

Thanks for the proposal. We have also been thinking about this use-case.

If I'm reading this accurately (and I may not be), it seems that the proposal
is to have all OVN NB (CUD) operations (read operations are outside the scope)
done not by the api_worker threads but rather by a new journal thread.


Correct.

If this is indeed the case, I'd like to consider the scenario where there are N
neutron nodes, each node with M worker threads.  The journal thread at each
node contains a list of pending operations.  Could there be a (sequence)
dependency in the pending operations amongst the journal threads in the nodes
that prevents them from being applied (e.g. a Logical_Router_Port and
Logical_Switch_Port inter-dependency), because we are returning success on
neutron operations that have still not been committed to the NB DB?


It's a valid scenario and should be designed properly to handle such
scenarios in case we take this approach.

Couple of clarifications and thoughts below.

Thanks
Amitabha

On Jul 13, 2016, at 1:20 AM, Numan Siddique 
> wrote:

Adding the proper tags in subject

On Wed, Jul 13, 2016 at 1:22 PM, Numan Siddique 
> wrote:
Hi Neutrinos,

Presently, in the OVN ML2 driver we have 2 ways to sync the neutron DB and OVN DB:
 - At neutron-server startup, the OVN ML2 driver syncs the neutron DB and OVN DB
if sync mode is set to repair.
 - An admin can run the "neutron-ovn-db-sync-util" to sync the DBs.
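At its core, the repair-mode sync above is a set difference: diff the resource IDs present in the Neutron DB against those in the OVN NB DB and derive the operations needed to reconcile them. A hedged sketch (the data shapes and names are illustrative; the real driver walks networks, subnets, ports, routers, etc. through the ovsdb transaction API):

```python
# Hedged sketch of repair-mode reconciliation: what must be created in
# OVN and what stale entries must be deleted, given the two ID sets.

def plan_repair(neutron_ids, ovn_ids):
    return {
        "create_in_ovn": sorted(neutron_ids - ovn_ids),
        "delete_from_ovn": sorted(ovn_ids - neutron_ids),
    }

neutron_ports = {"port-a", "port-b", "port-c"}
ovn_ports = {"port-b", "port-c", "port-stale"}
print(plan_repair(neutron_ports, ovn_ports))
# {'create_in_ovn': ['port-a'], 'delete_from_ovn': ['port-stale']}
```

The journal-based approach discussed below replaces this bulk diff with an incremental queue, but falls back to essentially the same full reconciliation after a controller cold reboot.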

Recently, in v2 of the networking-odl ML2 driver (please see (1) below, which
has more details; ODL folks, please correct me if I am wrong here):

  - a journal thread is created which does the CRUD operations on neutron
resources asynchronously (i.e. it sends the REST APIs to the ODL controller).

Would this be the equivalent of making OVSDB transactions to the OVN NB DB?

Correct.



  - a maintenance thread is created which does some cleanup periodically and, at
startup, does a full sync if it detects an ODL controller cold reboot.


A few questions I have:
 - Can the OVN ML2 driver take the same or a similar approach? Are there any
advantages in taking this approach? One advantage is that neutron resources can
be created/updated/deleted even if the OVN ML2 driver has lost its connection
to the ovsdb-server; the journal thread would eventually sync these resources
into the OVN DB. I would like to know the community's thoughts on this.

If we can make it work, it would indeed be a huge plus for system-wide upgrades
and some corner cases in the code (ACLs specifically), where post_commit relies
on all transactions being successful and doesn't revert the neutron DB if
something fails.






 - Are there other ML2 drivers which might have to handle DB syncs (cases where
the other controllers also maintain their own DBs), and how are they handling
it?

 - Can a common approach be taken to sync the neutron DB and the controller DBs?


---

(1)
Sync threads created by networking-odl ML2 driver
--
ODL ML2 driver creates 2 threads (threading.Thread module) at init
 - Journal thread
 - Maintenance thread

Journal thread

The journal module creates a new journal table 

Re: [openstack-dev] [kolla] Monitoring tooling

2016-07-22 Thread Dave Walker
Yes, this is my thought.

The scope of the Sensu work is: "Is this thing working?" (with the
reference being up/down).
But the scope of Grafana and friends is: "How hard is this working?"
(but no alerting).

They are certainly complementary. However, Sensu can throw data at a
Grafana stack (AIUI), but I fear that is too much to achieve this cycle.

--
Kind Regards,
Dave Walker

On 23 July 2016 at 00:11, Fox, Kevin M  wrote:

> I think those are two different, complementary things.
>
> One's metrics and the other is monitoring. You probably want both at the
> same time.
>
> Thanks,
> Kevin

Re: [openstack-dev] [kolla] Monitoring tooling

2016-07-22 Thread Fox, Kevin M
I think those are two different, complementary things.

One's metrics and the other is monitoring. You probably want both at the same 
time.

Thanks,
Kevin



[openstack-dev] [octavia] ssl re-encryption in octavia

2016-07-22 Thread Akshay Kumar Sanghai
Hi,
I saw in the kilo specs that ssl re-encryption would be introduced in a later
phase. Is the ssl re-encryption feature available in the mitaka release? I
understand ssl offload is available, but I want to try ssl re-encryption on
octavia lbaas.
This link probably refers to v1:
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL. Please redirect me to any
documentation if ssl re-encryption is there.

Thanks
Akshay Sanghai


Re: [openstack-dev] [kolla] Monitoring tooling

2016-07-22 Thread Steven Dake (stdake)
Thanks for pointing that out.  Brain out to lunch today it appears :(

I think choices are a good thing even though they increase our
implementation footprint.  Anyone opposed to implementing both with
something in globals.yml like
monitoring: grafana or
monitoring: sensu
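If both end up implemented, the deploy tooling would presumably need to validate whichever value the operator sets. A minimal stdlib sketch of such a check; the option name and allowed values mirror the proposal above and are not an actual Kolla interface:

```python
# Hedged sketch: validate a hypothetical globals.yml monitoring switch
# before deploy.  Option name and values follow the proposal in this
# thread, not any real Kolla configuration.

SUPPORTED_MONITORING = {"sensu", "grafana", "none"}

def validate_monitoring(globals_yml):
    """Return the chosen monitoring backend, defaulting to 'none'."""
    choice = globals_yml.get("monitoring", "none")
    if choice not in SUPPORTED_MONITORING:
        raise ValueError("monitoring must be one of %s, got %r"
                         % (sorted(SUPPORTED_MONITORING), choice))
    return choice

print(validate_monitoring({"monitoring": "sensu"}))  # -> sensu
```

Failing fast on an unknown value keeps a typo from silently deploying no monitoring at all.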

Comments questions or concerns welcome.

Regards
-steve



Re: [openstack-dev] [kolla] Monitoring tooling

2016-07-22 Thread Stephen Hindle
Don't forget mewald's implementation as well - we now have 2 monitoring
options for kolla :-)




-- 
Stephen Hindle - Senior Systems Engineer
480.807.8189
www.limelight.com - Delivering Faster Better

Join the conversation at Limelight Connect

-- 
The information in this message may be confidential.  It is intended solely
for the addressee(s).  If you are not the intended recipient, any disclosure,
copying or distribution of the message, or any action or omission taken by
you in reliance on it, is prohibited and may be unlawful.  Please immediately
contact the sender if you have received this message in error.




Re: [openstack-dev] [Magnum] Remove Davanum Srinivas from Magnum core team

2016-07-22 Thread Steven Dake (stdake)
You're a class act, Dims.  It was good working with you on Magnum :)

Regards
-steve

On 7/22/16, 2:15 PM, "Davanum Srinivas"  wrote:

>Thanks Hongbin!
>
>On Fri, Jul 22, 2016 at 5:13 PM, Hongbin Lu  wrote:
>> Hi all,
>>
>>
>>
>> Based on Dims's request, I removed him from the Magnum core reviewer
>> team.  Dims's contribution started from the first commit of the Magnum
>> tree, and he served as a Magnum core reviewer for a long time.  I am
>> sorry to hear that Dims wants to leave the team, but thanks for his
>> contribution and guidance to the project.
>>
>> Note: this removal doesn't require a vote because Dims requested to be
>> removed.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>
>
>
>-- 
>Davanum Srinivas :: https://twitter.com/dims
>




Re: [openstack-dev] [kolla] Please start getting in the habit of breaking up containers from ansible changes

2016-07-22 Thread Steven Dake (stdake)
Precisely!

From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, July 22, 2016 at 3:03 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla] Please start getting in the habit of 
breaking up containers from ansible changes

I think its an interesting idea. If nothing else, it will show what it would be 
like to have a split set of repo's before it actually is a thing and can't be 
undone.

Thanks,
Kevin

From: Dave Walker [em...@daviey.com]
Sent: Friday, July 22, 2016 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Please start getting in the habit of 
breaking up containers from ansible changes


On 22 July 2016 at 21:35, Steven Dake (stdake) 
> wrote:
>
> Hey folks,
>
> I know it doesn't make a lot of sense to break up containers from ansible
> changes to people outside the core review team, but for anything with
> backport potential, please do so.  We are considering in Ocata splitting the
> kolla repo into two (kolla = containers & build, kolla-ansible = playbooks).
> I think the timing is right after we branch Kolla Newton, but I don't want to
> crater our backport process along the way.  By keeping the changes separate
> we can still have a tidy backport experience.
>
> Even for small changes - 2-3 liner, please break them up using Partial-Bug.
>
> Core reviewers please start enforcing this.
>
> TIA!
> -steve
>

Hi Steve,

Why would this cause a problem in current Master?  As I understand it, you want
to make sure that changes that touch both Dockerfiles and Playbooks are in
isolated commits so they can be backported.  However, this surely won't be
relevant until Newton is cut and Ocata is opened, as Newton is remaining a
single tree.  So the splitting of commits is only relevant in Ocata+1, where
splitting will already be enforced by the splitting of the trees in Ocata.
I say O+1, as split trees will only start in O, so for Newton, O commits will
still backport cleanly, as they will be separated by the nature of the tree
split.

Or... have I horribly misunderstood your push?

With Ocata, will kolla and kolla-ansible have common ancestry? As in, will
they both be based on current Master with irrelevant files removed from each
tree?

--
Kind Regards,
Dave Walker


[openstack-dev] [kolla] Monitoring tooling

2016-07-22 Thread Steven Dake (stdake)
Hi folks,

At the midcycle we decided to push off implementing Monitoring until post 
Newton.  The rationale for this decision was that the core review team has 
enough on their plates and nobody was super keen to implement any monitoring 
solution given our other priorities.

Like all good things, communities produce new folks that want to do new things,
and Sensu was proposed as Kolla's monitoring solution (at least the first one).
A developer that has done some good work has shown up to do the job as well :)
I have heard good things about Sensu, minus the fact that it is implemented in
Ruby, and I fear it may end up causing our gate a lot of hassle.


https://review.openstack.org/#/c/341861/


Anyway I think we can work through the gate problem.

Does anyone have any better suggestion?  I'd like to unblock Dave's work which 
is blocked on a -2 pending a complete discussion of our monitoring solution.  
Note we may end up implementing more than one down the road - Sensu is just 
where the original interest was.

Please provide feedback, even if you don't have a preference, whether you're a 
core reviewer or not.

My take is we can merge this work in non-priority order; if it makes the 
end of the cycle, fantastic - if not, we can release it in Ocata.

Regards
-steve



Re: [openstack-dev] [kolla] Please start getting in the habit of breaking up containers from ansible changes

2016-07-22 Thread Fox, Kevin M
I think it's an interesting idea. If nothing else, it will show what it would be 
like to have a split set of repos before it actually is a thing and can't be 
undone.

Thanks,
Kevin

From: Dave Walker [em...@daviey.com]
Sent: Friday, July 22, 2016 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Please start getting in the habit of 
breaking up containers from ansible changes


On 22 July 2016 at 21:35, Steven Dake (stdake) 
> wrote:
>
> Hey folks,
>
> I know it doesn't make a lot of sense to break up containers from ansible 
> changes to people outside the core review team, but for anything with 
> backport potential, please do so.  We are considering in Ocata splitting the 
> kolla repo into two (kolla = containers & build, kolla-ansible = playbooks).  
> I think the timing is right after we branch Kolla Newton, but I don't want to 
> crater our backport process in the process.  By keeping the changes separate 
> we can still have a tidy backport experience.
>
> Even for small changes - 2–3 liner, please break them up using Partial-Bug.
>
> Core reviewers please start enforcing this.
>
> TIA!
> -steve
>

Hi Steve,

Why would this cause a problem in current Master?  As I understand it, you want 
to make sure that changes that touch both Dockerfiles and Playbooks are in 
isolated commits so they can be backported.  However, this surely won't be 
relevant until Newton is cut and Ocata is opened, as Newton is remaining a 
single tree.  So the splitting of commits is only relevant in Ocata+1, where 
splitting will already be enforced - by the splitting of the trees in Ocata?  
I say O+1, as split trees will only start in O. So for Newton, O commits will 
still backport cleanly... as they will be separated by the nature of the tree 
split.

Or... Have I horribly misunderstood your push?

With Ocata, will kolla and kolla-ansible have common ancestry? As in, will 
they both be based on current Master with irrelevant files removed from each 
tree?

--
Kind Regards,
Dave Walker


Re: [openstack-dev] [kolla] Please start getting in the habit of breaking up containers from ansible changes

2016-07-22 Thread Steven Dake (stdake)


From: Dave Walker >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, July 22, 2016 at 2:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla] Please start getting in the habit of 
breaking up containers from ansible changes


On 22 July 2016 at 21:35, Steven Dake (stdake) 
> wrote:
>
> Hey folks,
>
> I know it doesn't make a lot of sense to break up containers from ansible 
> changes to people outside the core review team, but for anything with 
> backport potential, please do so.  We are considering in Occata splitting the 
> kolla repo into two (kolla = containers & build, kolla-ansible = playbooks).  
> I think the timing is right after we branch Kolla Newton, but I don't want to 
> crater our backport process in the process.  By keeping the changes separate 
> we can still have a tidy backport experience.
>
> Even for small changes - 2-3 liner, please break them up using Partial-Bug.
>
> Core reviewers please start enforcing this.
>
> TIA!
> -steve
>

Hi Steve,

Why would this cause a problem in current Master?  As I understand it, you want 
to make sure that changes that touch both Dockerfiles and Playbooks are in 
isolated commits so they can be backported.  However, this surely won't be 
relevant until Newton is cut and Ocata is opened, as Newton is remaining a 
single tree.  So the splitting of commits is only relevant in Ocata+1, where 
splitting will already be enforced - by the splitting of the trees in Ocata?  
I say O+1, as split trees will only start in O. So for Newton, O commits will 
still backport cleanly... as they will be separated by the nature of the tree 
split.

Or... Have I horribly misunderstood your push?

With Ocata, will kolla and kolla-ansible have common ancestry? As in, will 
they both be based on current Master with irrelevant files removed from each 
tree?

Dave,

This causes no problem in current master or in Newton.

Newton, Mitaka, and Liberty will be one repository.  I want to run a 
semi-experiment to see if people complain about splitting up the changes to the 
point that it causes harm to the project, and to get the core reviewer team 
in the habit of looking for those sorts of issues.  It's a slight change in our 
workflow.

With Ocata and the common ancestry question: if I were doing the work, I'd 
copy one repo to another and remove irrelevant files so as not to lose history.  
What we end up doing will likely be driven by community consensus, not my best 
guess at how it should be done, so YMMV.
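For readers wondering what "copy one repo to another and remove irrelevant
files" looks like in practice, here is a minimal sketch in a throwaway
repository; the layout and file names are illustrative, not the real kolla
tree:

```shell
set -e
# Build a tiny stand-in for the combined repo.
work=$(mktemp -d); cd "$work"
git init -q kolla && cd kolla
git config user.email dev@example.com
git config user.name Dev
mkdir docker ansible
echo 'FROM centos' > docker/base.Dockerfile
echo '- hosts: all' > ansible/site.yml
git add . && git commit -q -m 'Initial tree with containers and playbooks'

# "Split" by copying the whole repo (full history comes along for free),
# then pruning the files that do not belong to each side.
cd "$work"
git clone -q kolla kolla-ansible
git -C kolla-ansible config user.email dev@example.com
git -C kolla-ansible config user.name Dev
git -C kolla-ansible rm -r -q docker
git -C kolla-ansible commit -q -m 'Drop container bits; keep playbooks'

git -C kolla rm -r -q ansible
git -C kolla commit -q -m 'Drop playbooks; keep container bits'

# Both repos still share the original commit as a common ancestor.
git -C kolla log --oneline
git -C kolla-ansible log --oneline
```

On the common-ancestry question: in this sketch, `git rev-parse HEAD~1` in
either repo resolves to the same initial commit, so history is preserved on
both sides.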

Regards
-steve



--
Kind Regards,
Dave Walker


Re: [openstack-dev] [tc][all] Plugins for all

2016-07-22 Thread Steve Martinelli


On Fri, Jul 22, 2016 at 2:08 PM, Hayes, Graham  wrote:

>   * OpenStack Client
>
> OpenStack CLI privileged projects have access to more commands, as
> plugins cannot hook in to them (e.g. quotas)
>

It's been OSC's intention to allow for command hooking; we just don't
really know how to do that just yet :\




Re: [openstack-dev] [tripleo] CI job to test undercloud only

2016-07-22 Thread James Slagle
On Fri, Jul 22, 2016 at 4:53 PM, Emilien Macchi  wrote:
> Hi,
>
> I started some work to have a CI job that will only deploy an undercloud.
> We'll save time and resources.
>
> I used storyboard: https://storyboard.openstack.org/#!/story/2000682
> and I invite our contributors to use it too when working in TripleO
> CI, it helps us to track our current work.

Just to add some context around tracking CI work in StoryBoard:

I raised the issue a little while ago that it would help if we had a
central place to track CI tasks that we need to work on. Given that
we're not using launchpad currently for tripleo-ci (other than for
filing bugs), and launchpad is more release oriented, I proposed to
give StoryBoard a try. Previously we had used a trello board for this,
but I think we should use StoryBoard instead.

Especially given that OpenStack in general seems to be moving towards
more usage of StoryBoard in the future:
https://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html

The tripleo-ci project in StoryBoard is:
https://storyboard.openstack.org/#!/project/749
And I've also created a board:
https://storyboard.openstack.org/#!/board/35

Keep in mind that we want to avoid duplication between Launchpad and
StoryBoard. Launchpad is still the source of truth. We're using
StoryBoard just to track things that we weren't otherwise tracking. I
think it would be fine if you wanted to create tasks for bugs you
might be working on, but please keep all the pertinent details on the
bug itself in Launchpad.

Further, the StoryBoard team in #storyboard seemed really interested
in getting our feedback on how StoryBoard works for us. I created an
etherpad where we can record some of this feedback to collect and pass
onto them:
https://etherpad.openstack.org/p/tripleo-ci-storyboard-feedback

-- 
-- James Slagle
--



Re: [openstack-dev] [kolla] Please start getting in the habit of breaking up containers from ansible changes

2016-07-22 Thread Dave Walker
On 22 July 2016 at 21:35, Steven Dake (stdake)  wrote:
>
> Hey folks,
>
> I know it doesn't make a lot of sense to break up containers from ansible
changes to people outside the core review team, but for anything with
backport potential, please do so.  We are considering in Ocata splitting
the kolla repo into two (kolla = containers & build, kolla-ansible =
playbooks).  I think the timing is right after we branch Kolla Newton, but
I don't want to crater our backport process in the process.  By keeping the
changes separate we can still have a tidy backport experience.
>
> Even for small changes - 2–3 liner, please break them up using
Partial-Bug.
>
> Core reviewers please start enforcing this.
>
> TIA!
> -steve
>

Hi Steve,

Why would this cause a problem in current Master?  As I understand it, you
want to make sure that changes that touch both Dockerfiles and Playbooks
are in isolated commits so they can be backported.  However, this surely
won't be relevant until Newton is cut and Ocata is opened, as Newton is
remaining a single tree.  So the splitting of commits is only relevant
in Ocata+1, where splitting will already be enforced - by the splitting of
the trees in Ocata?  I say O+1, as split trees will only start in O. So
for Newton, O commits will still backport cleanly... as they will be
separated by the nature of the tree split.

Or... Have I horribly misunderstood your push?

With Ocata, will kolla and kolla-ansible have common ancestry? As in, will
they both be based on current Master with irrelevant files removed from
each tree?

--
Kind Regards,
Dave Walker


Re: [openstack-dev] [Magnum] Remove Davanum Srinivas from Magnum core team

2016-07-22 Thread Davanum Srinivas
Thanks Hongbin!

On Fri, Jul 22, 2016 at 5:13 PM, Hongbin Lu  wrote:
> Hi all,
>
>
>
> Based on Dims’s request, I removed him from the Magnum core reviewer team.
> Dims’s contribution started from the first commit of the Magnum tree, and he
> served as a Magnum core reviewer for a long time. I am sorry to hear
> that Dims wants to leave the team, but thanks for his contribution and
> guidance to the project.
>
>
>
> Note: this removal doesn’t require a vote because Dims requested to be
> removed.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [Magnum] Remove Davanum Srinivas from Magnum core team

2016-07-22 Thread Hongbin Lu
Hi all,

Based on Dims's request, I removed him from the Magnum core reviewer team. 
Dims's contribution started from the first commit of the Magnum tree, and he 
served as a Magnum core reviewer for a long time. I am sorry to hear that 
Dims wants to leave the team, but thanks for his contribution and guidance to 
the project.

Note: this removal doesn't require a vote because Dims requested to be removed.

Best regards,
Hongbin




Re: [openstack-dev] [tc][all] Plugins for all

2016-07-22 Thread Hayes, Graham
On 21/07/2016 16:49, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-07-19 16:59:20 +:
>> On 19/07/2016 16:39, Doug Hellmann wrote:
>>> Excerpts from Hayes, Graham's message of 2016-07-18 17:13:09 +:
 On 18/07/2016 17:57, Thierry Carrez wrote:
> Hayes, Graham wrote:
>> [...]
>> The point is that we were supposed to be a level field as a community
>> but if we have examples like this, there is not a level playing field.
>
> While I generally agree on your goals here (avoid special-casing some
> projects in generic support projects like Tempest), I want to clarify
> what we meant by "level playing field" in a recent resolution.


 Yes - it has been pointed out the title is probably overloading a term
 just used for a different purpose - I am more than willing to change it.

 I wasn't sure where I got the name, and I realised that was probably in
 my head from that resolution.

> This was meant as a level playing field for contributors within a
> project, not a level playing field between projects. The idea is that
> any contributor joining any OpenStack project should not be technically
> limited compared to other contributors on the same project. So, no
> "secret sauce" that only a subset of developers on a project have access 
> to.

 There is a correlation here - "special sauce" (not secret obviously)
 that only a subset of projects have access to.

> I think I understand where you're gong when you say that all projects
> should have equal chances, but keep in mind that (1) projects should not
> really "compete" against each other (but rather all projects should
> contribute to the success of OpenStack as a whole) and (2) some
> OpenStack projects will always be more equal than others (for example we
> require that every project integrates with Keystone, and I don't see
> that changing).

 Yes, I agree we should not be competing. But we should not be asking
 the smaller projects to re-implement functionality, just because they
 did not get integrated in time.

 We require all projects to integrate with keystone for auth, as we
 require all projects to integrate with neutron for network operations
 and designate for DNS, I just see it as a requirement for using the
 other components of OpenStack for their defined purpose.

>>>
>>> It would be useful to have some specific information from the QA/Tempest
>>> team (and any others with a similar situation) about whether the current
>>> situation about how differences between in-tree tests and plugin tests
>>> are allowed to use various APIs. For example, are there APIs only
>>> available to in-tree tests that are going to stay that way? Or is this
>>> just a matter of not having had time to "harden" or "certify" or
>>> otherwise prepare those APIs for plugins to use them?
>>
>> "Staying that way" is certainly the impression given to users from
>> other projects.
>
> OK, but is that an "impression" or is it a stated "policy"?
>
>> In any case tempest is just an example. From my viewpoint, we need to
>> make this a community default, to avoid even the short (which really
>> ends up a long) term discrepancy between projects.
>
> Before we start making lots of specific rules about how teams
> coordinate, I would like to understand the problem those rules are meant
> to solve, so thank you for providing that example. I still haven't heard
> from the QA team, though. Ken'ichi?
>
>> If the standard in the community is equal access, this means when the
>> next testing tool, CLI, SDK, $cross_project_tool comes along, it is
>> available to all projects equally.
>>
>> If everyone uses the interfaces, they get better for all users of them,
>> "big tent projects" and "tc-approved-release" alike. Having two
>> ways of doing the same thing means that there will always be
>> discrepancies between people who are in the club, and those who are not.
>
> I think I understand your motivation. It's not clear yet whether
> there needs to be a new policy to change the existing intent,
> or if a discussion just hasn't happened, or if someone simply needs
> to edit some code.
>
> Are there other examples we can talk about in the mean time?

Sure.

  * Horizon

Horizon privileged projects have access to many more panels than
plugins (service status, quotas, overviews etc).
Plugins have to rely on tarballs of horizon

  * OpenStack Client

OpenStack CLI privileged projects have access to more commands, as
plugins cannot hook in to them (e.g. quotas)

  * Grenade

Plugins may or may not have tempest tests run (I think that patch
merged), they have to use parts of tempest I was told explicitly
plugins should not use to get the tests to run at that point.

  * Docs

We can now add install guides and hook into the API Reference, and API
guides. This is great - and I am really happy about 
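As background for the OpenStack Client bullet above: out-of-tree projects can
register new commands with OSC through setuptools entry points, roughly as
sketched below (the plugin name and module paths are hypothetical). What
plugins cannot do is hook into an existing in-repo command such as `quota`,
which is the gap being described.

```ini
# setup.cfg of a hypothetical OSC plugin
[entry_points]
openstack.cli.extension =
    dns = exampleclient.osc.plugin

openstack.dns.v2 =
    zone_list = exampleclient.osc.v2.zone:ListZones
```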

[openstack-dev] [tripleo] CI job to test undercloud only

2016-07-22 Thread Emilien Macchi
Hi,

I started some work to have a CI job that will only deploy an undercloud.
We'll save time and resources.

I used storyboard: https://storyboard.openstack.org/#!/story/2000682
and I invite our contributors to use it too when working in TripleO
CI, it helps us to track our current work.

So far I did 2 patches and had successful results when testing locally:
https://review.openstack.org/#/c/346230
https://review.openstack.org/#/c/346147
https://review.openstack.org/#/c/346220

Once we get this working, I plan to work on the same job with -upgrade
suffix, similar with our overcloud-upgrade job.
Any feedback is welcome.
-- 
Emilien Macchi



[openstack-dev] [kolla] Please start getting in the habit of breaking up containers from ansible changes

2016-07-22 Thread Steven Dake (stdake)
Hey folks,

I know it doesn't make a lot of sense to break up containers from ansible 
changes to people outside the core review team, but for anything with backport 
potential, please do so.  We are considering in Ocata splitting the kolla repo 
into two (kolla = containers & build, kolla-ansible = playbooks).  I think the 
timing is right after we branch Kolla Newton, but I don't want to crater our 
backport process in the process.  By keeping the changes separate we can still 
have a tidy backport experience.

Even for small changes - 2-3 liner, please break them up using Partial-Bug.

Core reviewers please start enforcing this.

TIA!
-steve
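To illustrate the requested workflow, a change touching both sides would land
as two commits tagged with the same bug, sketched here in a scratch
repository (the file paths and bug number are invented for the example):

```shell
set -e
# Throwaway repo standing in for the kolla tree.
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name Dev
mkdir -p docker/horizon ansible/roles/horizon/tasks
echo 'RUN install horizon' > docker/horizon/Dockerfile.j2
echo '- name: deploy horizon' > ansible/roles/horizon/tasks/deploy.yml

# Container half of the change, as its own commit.
git add docker/horizon/Dockerfile.j2
git commit -q -m 'Fix horizon container package list

Partial-Bug: #1600000'

# Playbook half, separately, referencing the same bug.
git add ansible/roles/horizon/tasks/deploy.yml
git commit -q -m 'Adjust horizon deploy task for new package list

Partial-Bug: #1600000'

git log --oneline
```

Either commit can then be cherry-picked to a stable branch on its own, which
is the whole point of keeping the halves separate.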



[openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-22 Thread Hongbin Lu
Hi all,

Spyros has consistently contributed to Magnum for a while. In my opinion, what 
differentiates him from others is the significance of his contribution, which 
adds concrete value to the project. For example, the operator-oriented install 
guide he delivered attracts a significant number of users to install Magnum, 
which facilitates the adoption of the project. I would like to emphasize that 
the Magnum team has been working hard but struggling to increase adoption, 
and Spyros's contribution means a lot in this regard. He also completed 
several essential and challenging tasks, such as adding support for OverlayFS, 
adding a Rally job for Magnum, etc. Overall, I am impressed by the amount of 
high-quality patches he has submitted. He is also helpful in code reviews, and his 
comments often help us identify pitfalls that are not easy to spot. He is 
also very active on IRC and the ML. Based on his contribution and expertise, I 
think he is qualified to be a Magnum core reviewer.

I am happy to propose Spyros to be a core reviewer of Magnum team. According to 
the OpenStack Governance process [1], we require a minimum of 4 +1 votes from 
Magnum core reviewers within a 1 week voting window (consider this proposal as 
a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes or 
there is a veto vote prior to the end of the voting window, Spyros is not able 
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday, July 29th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testingcore

2016-07-22 Thread Bhatia, Manjeet S
Of course, insightful reviewers will make development faster.

So ++

From: Darek Śmigiel [mailto:smigiel.dari...@gmail.com]
Sent: Friday, July 22, 2016 7:20 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testingcore

I’m not a core, so treat this as +0 but I think Jakub will be good addition to 
core team.

So +1

On Jul 22, 2016, at 3:20 AM, Martin Hickey 
> wrote:

+1


From: Oleg Bondarev >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 22/07/2016 09:13
Subject: Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testing core




+1

On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley 
> wrote:
+1
On Jul 21, 2016, at 5:13 PM, Kevin Benton 
> wrote:

+1

On Thu, Jul 21, 2016 at 2:41 PM, Carl Baldwin 
> wrote:
+1 from me

On Thu, Jul 21, 2016 at 1:35 PM, Assaf Muller 
> wrote:
As Neutron's so called testing lieutenant I would like to propose
Jakub Libosvar to be a core in the testing area.

Jakub has demonstrated his inherent interest in the testing area over
the last few years, his reviews are consistently insightful and his
numbers [1] are in line with others and I know will improve if given
the responsibilities of a core reviewer. Jakub is deeply involved with
the project's testing infrastructures and CI systems.

As a reminder the expectation from cores is found here [2], and
specifically for cores interested in helping out shaping Neutron's
testing story:

* Guide community members to craft a testing strategy for features [3]
* Ensure Neutron's testing infrastructures are sufficiently
sophisticated to achieve the above.
* Provide leadership when determining testing Do's & Don'ts [4]. What
makes for an effective test?
* Ensure the gate stays consistently green

And more tactically we're looking at finishing the Tempest/Neutron
tests dedup [5] and to provide visual graphing for historical control
and data plane performance results similar to [6].

[1] http://stackalytics.com/report/contribution/neutron/90
[2] http://docs.openstack.org/developer/neutron/policies/neutron-teams.html
[3] 
http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
[4] https://assafmuller.com/2015/05/17/testing-lightning-talk/
[5] https://etherpad.openstack.org/p/neutron-tempest-defork
[6] https://www.youtube.com/watch?v=a0qlsH1hoKs=youtu.be=24m22s

Re: [openstack-dev] [devstack][neutron] - neutron gate blocked by devstack change

2016-07-22 Thread Sean M. Collins
Also I was the one who approved the original patch, so the fault rests
on my shoulders. My apologies.

-- 
Sean M. Collins



Re: [openstack-dev] [devstack][neutron] - neutron gate blocked by devstack change

2016-07-22 Thread Sean M. Collins
I just approved the revert.

I think we need to step back and re-evaluate the work that we are doing
in neutron-legacy. It's very fragile - and really any change to that
piece of logic ends up breaking networking-generic-switch,
ironic-multitenant-network, midonet, or the gate.

Which is why I'm really hoping that with the new lib/neutron we can just
move away from this mess that we've got and start fresh.
-- 
Sean M. Collins



Re: [openstack-dev] [devstack] How to enable SSL in devStack?

2016-07-22 Thread Clark Boylan


On Wed, Jul 20, 2016, at 07:01 AM, Rob Crittenden wrote:
> Andrey Pavlov wrote:
> > Hi,
> >
> > When I ran devstack with SSL I found a bug and tried to fix it -
> > https://review.openstack.org/#/c/242812/
> > But no one agree with me.
> > Try to apply this patch - it may help.
> > Also there is a chance that new bugs present in devstack that
> > prevented to install it with SSL.
> 
> Seeing how some other things in your local.conf might help but when I 
> tried to reproduce it I got the same error and it failed because Apache 
> didn't have an SSL listener on 443.
> 
> I'm not sure I'd recommend direct SSL in any case. I'd recommend the 
> tls-proxy service instead. Note that I'm pretty sure it has the same 
> problem: it hasn't been updated to handle port 443 for Keystone.

I pushed a change up (https://review.openstack.org/#/c/296771/) to
enable tls-proxy in devstack-gate to see how it does and it wasn't too
happy. Is it worth trying to make a push on this and just enabling it by
default in devstack?

Clark
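For anyone wanting to try the tls-proxy route locally, it is switched on from
`local.conf` roughly like this (a sketch; exact behaviour depends on the
devstack branch in use):

```ini
[[local|localrc]]
# Front the API services with a TLS-terminating proxy
# instead of configuring each service for direct SSL.
enable_service tls-proxy
```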



Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testingcore

2016-07-22 Thread Brandon Logan
+1

On Fri, 2016-07-22 at 09:19 -0500, Darek Śmigiel wrote:
> I’m not a core, so treat this as +0 but I think Jakub will be good
> addition to core team.
> 
> 
> So +1
> 
> > On Jul 22, 2016, at 3:20 AM, Martin Hickey
> >  wrote:
> > 
> > +1
> > 
> > Oleg Bondarev ---22/07/2016 09:13:16---+1 On Fri, Jul
> > 22, 2016 at 2:36 AM, Doug Wiegley 
> > 
> > From: Oleg Bondarev 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 22/07/2016 09:13
> > Subject: Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for
> > testing core
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > +1
> > 
> > On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley
> >  wrote:
> > +1
> > On Jul 21, 2016, at 5:13 PM, Kevin Benton
> >  wrote:
> > 
> > +1
> > 
> > On Thu, Jul 21, 2016 at 2:41 PM, Carl
> > Baldwin  wrote:
> > +1 from me
> > 
> > On Thu, Jul 21, 2016 at 1:35 PM, Assaf
> > Muller  wrote:
> > As Neutron's so called testing
> > lieutenant I would like to propose
> > Jakub Libosvar to be a core in the
> > testing area.
> > 
> > Jakub has demonstrated his inherent
> > interest in the testing area over
> > the last few years, his reviews are
> > consistently insightful and his
> > numbers [1] are in line with others
> > and I know will improve if given
> > the responsibilities of a core
> > reviewer. Jakub is deeply involved
> > with
> > the project's testing
> > infrastructures and CI systems.
> > 
> > As a reminder the expectation from
> > cores is found here [2], and
> > specifically for cores interesting
> > in helping out shaping Neutron's
> > testing story:
> > 
> > * Guide community members to craft a
> > testing strategy for features [3]
> > * Ensure Neutron's testing
> > infrastructures are sufficiently
> > sophisticated to achieve the above.
> > * Provide leadership when
> > determining testing Do's & Don'ts
> > [4]. What
> > makes for an effective test?
> > * Ensure the gate stays consistently
> > green
> > 
> > And more tactically we're looking at
> > finishing the Tempest/Neutron
> > tests dedup [5] and to provide
> > visual graphing for historical
> > control
> > and data plane performance results
> > similar to [6].
> > 
> > [1]
> > 
> > http://stackalytics.com/report/contribution/neutron/90
> > [2]
> > 
> > http://docs.openstack.org/developer/neutron/policies/neutron-teams.html
> > [3]
> > 
> > http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
> > [4]
> > 
> > https://assafmuller.com/2015/05/17/testing-lightning-talk/
> > [5]
> > 
> > https://etherpad.openstack.org/p/neutron-tempest-defork
> > [6]
> > 
> > https://www.youtube.com/watch?v=a0qlsH1hoKs=youtu.be=24m22s
> >  

Re: [openstack-dev] [cinder] Volume Drivers unit tests

2016-07-22 Thread Ivan Kolodyazhny
Eric, you're right.

I've disabled all such tests using the '@unittest.skip("Skip until bug #1578986
is fixed")' decorator in my patch [1]:
$ grep -r '1578986' cinder/tests/unit/  | grep -v 'pyc' | wc -l
37

Next step is to fix them.


[1] https://review.openstack.org/#/c/320148/
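The skip pattern itself looks like the following; the test case here is a
made-up stand-in for the real driver tests under cinder/tests/unit/:

```python
import unittest

class FakeDriverTestCase(unittest.TestCase):
    # The decorator records the test as skipped instead of running it,
    # so the suite no longer hangs on the affected driver code paths.
    @unittest.skip("Skip until bug #1578986 is fixed")
    def test_create_volume(self):
        self.fail("never runs while the skip is in place")

suite = unittest.TestLoader().loadTestsFromTestCase(FakeDriverTestCase)
result = unittest.TestResult()
suite.run(result)
print(len(result.skipped))  # -> 1
```

A quick `grep -r '1578986'` over the tree, as above, then finds every test
carrying that marker when it is time to unskip them.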

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Jul 22, 2016 at 4:13 PM, Eric Harney  wrote:

> On 07/21/2016 05:26 PM, Knight, Clinton wrote:
> > Nate, you have to press Ctrl-C to see the in-progress test, that’s why
> you don’t
> > see it in the logs.  The bug report shows this and points to the patch
> where it
> > appeared to begin. https://bugs.launchpad.net/cinder/+bug/1578986
> >
> > Clinton
> >
>
> I think this only gives a backtrace of the test runner and not the test.
>
> I attached gdb when this hang occurred and saw this.  Looks like we still
> have a thread running the oslo.messaging fake driver.
>
> http://paste.openstack.org/raw/539769/
>
> (Linked in the bug report as well.)
>
> > *From: *"Potter, Nathaniel" 
> > *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)"
> > 
> > *Date: *Thursday, July 21, 2016 at 7:17 PM
> > *To: *"OpenStack Development Mailing List (not for usage questions)"
> > 
> > *Subject: *Re: [openstack-dev] [cinder] Volume Drivers unit tests
> >
> > Hi all,
> >
> > I’m not totally sure that this is the same issue, but lately I’ve seen
> the gate
> > tests fail while hanging at this point [1], but they say ‘ok’ rather than
> > ‘inprogress’. Has anyone else come across this? It only happens
> sometimes, and a
> > recheck can get past it. The full log is here [2].
> >
> > [1] http://paste.openstack.org/show/539314/
> >
> > [2]
> >
> http://logs.openstack.org/90/341090/6/check/gate-cinder-python34-db/ea65de5/console.html
> >
> > Thanks,
> >
> > Nate
> >
> > *From:*yang, xing [mailto:xing.y...@emc.com]
> > *Sent:* Thursday, July 21, 2016 3:17 PM
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > 
> > *Subject:* Re: [openstack-dev] [cinder] Volume Drivers unit tests
> >
> > Hi Ivan,
> >
> > Do you have any logs for the VMAX driver?  We'll take a look.
> >
> > Thanks,
> >
> > Xing
> >
> >
> 
> >
> > *From:*Ivan Kolodyazhny [e...@e0ne.info]
> > *Sent:* Thursday, July 21, 2016 4:44 PM
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [cinder] Volume Drivers unit tests
> >
> > Thank you Xing,
> >
> > The issue is related both to VNX and VMAX EMC drivers
> >
> >
> > Regards,
> > Ivan Kolodyazhny,
> > http://blog.e0ne.info/
> >
> > On Thu, Jul 21, 2016 at 11:00 PM, yang, xing  > > wrote:
> >
> > Hi Ivan,
> >
> > Thanks for sending this out.  Regarding the issue in the EMC VNX
> driver unit
> > tests, it is tracked by this bug
> > https://bugs.launchpad.net/cinder/+bug/1578986. The driver was
> recently
> > refactored so this is probably a new issue introduced by the
> refactor.
> > We are investigating this issue.
> >
> > Thanks,
> >
> > Xing
> >
> >
>  
> 
> >
> > *From:*Ivan Kolodyazhny [e...@e0ne.info ]
> > *Sent:* Thursday, July 21, 2016 1:02 PM
> > *To:* OpenStack Development Mailing List
> > *Subject:* [openstack-dev] [cinder] Volume Drivers unit tests
> >
> > Hi team,
> >
> > First of all, I would like to apologize if my mail is too
> > emotional. I
> > spent too much time trying to fix it and failed.
> >
> > TL;DR;
> >
> > What I want to say is: "Let's spend some time to make our tests
> better and
> > fix all issues". Patch [1] is still unstable. Unit tests can pass or
> fail in
> > a random order. Also, I've disabled some tests to pass CI.
> >
> > Long version:
> >
> > While I was working on patch "Move drivers unit tests to
> unit.volume.drivers
> > directory" [1] I've found a lot of issues with our unit tests :(.
> Not all of
> > them are fixed yet, so that patch is still in progress
> >
> > What did I find, and what do we have to fix:
> >
> > 1) Execution time [2]. I don't want to argue about what counts as a unit
> > test, but 2-4
> > seconds per test is unacceptable, IMO.
> >
> > 2) Execution order. Seriously, do you know that our tests will fail
> or hang
> > if the execution order changes? Even if one test for driver A fails,
> some
> > tests for driver B will fail too.
> >
> > 3) Lack of mocks. It's the root cause of #2. We didn't mock sleeps and
> > event
> > loops right. We don't mock RPC calls well either [3]. We don't
> > have 'cinder.openstack.common.rpc.impl_fake' module in Cinder 
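A stdlib-only sketch (not Cinder code) of the kind of fix point 3 above calls for: patching sleeps out of a retry loop so the test runs instantly and does not depend on wall-clock timing. The helper and its names are illustrative:

```python
import time
from unittest import mock


def wait_for_ready(check, retries=3, interval=2.0):
    """Poll check() until it returns True, sleeping between attempts."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(interval)
    return False


def test_wait_retries_without_real_sleeping():
    # check() fails twice, then succeeds; sleep is patched out entirely,
    # so the test takes no real time and cannot hang.
    check = mock.Mock(side_effect=[False, False, True])
    with mock.patch('time.sleep') as fake_sleep:
        assert wait_for_ready(check) is True
    assert fake_sleep.call_count == 2
```

Without the mock, this single test would burn four seconds of real sleep, which is exactly the per-test cost being complained about.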

Re: [openstack-dev] [nova] gate "gate-nova-python27-db" is broken due to oslo.context 2.6.0

2016-07-22 Thread Doug Hellmann
Excerpts from Jamie Lennox's message of 2016-07-20 10:28:29 +1000:
> On 20 July 2016 at 00:06, Joshua Harlow  wrote:
> 
> > Hayes, Graham wrote:
> >
> >> On 18/07/16 22:27, Ronald Bradford wrote:
> >>
> >>> Hi All,
> >>>
> >>> For Oslo libraries we ensure that APIs are backward compatible for 1+
> >>> releases.
> >>> When an Oslo API adds a new class attribute (as in this example of
> >>> is_admin_project and 4 other attributes) added to Oslo Context in
> >>> 2.6.0,  these are added to ensure this API is also forward compatible
> >>> with existing project code for any contract with the base class
> >>> instantiation or manipulation.
> >>>
> >>
> >> Which projects is this run against?
> >>
> >> The issue seen is presently Nova specific (as other projects can
> >>> utilize 2.6.0) and it is related to projects that sub-class
> >>> oslo.context, and how unit tests are written for using class
> >>> parameters.  Ideally, to implement using oslo.context correctly
> >>> OpenStack projects should:
> >>>
> >>
> >> Designate also had to make a quick change to support 2.6.0.
> >>
> >> We were lucky as it was noticed by the RDO builds, which had pulled in
> >> 2.6.0 before the requirements update was proposed, so it did not break
> >> our gate.
> >>
> >> I just did a quick search and there is a few projects that hardcoded
> >> this, like we did.
> >>
> >
> > Ya, that's bad, nothing in the docs of the to_dict API says what to even
> > compare against (or the keys produced), so I'm pretty sure anyone doing
> > this is setting themselves up for future failure and fragile software.
> >
> 
> Can you post that list?
> 
> >
> >
> >> * Not perform direct dictionary to dictionary comparisons with the
> >>> to_dict() method as this does not support when new attributes at
> >>> added. Two patches (one to nova) address this in offending projects
> >>> [5][6]
> >>> * Unit tests should focus on attributes specific to the sub-classed
> >>> signature, e.g. [7].  Oslo context provides an extensive set of unit
> >>> tests for base class functionality. This is a wish list item for
> >>> projects to implement.
> >>>
> >>> The to_dict() method exists as a convenience method only and is not an
> >>> API contract. The resulting set of keys should be used accordingly.
> >>> This is why there is no major release version.
> >>>
> >>
> >> How are developers supposed to know that?
> >>
> >
> > So we (in oslo) can (and ideally will) make this better but when the API
> > doesn't itself tell you what keys are produced or what the values of those
> > keys are then it should be pretty obvious to u (the library user) that u
> > can not reliably do dictionary comparisons (because how do u know what to
> > compare against when the docs don't state that?). I suppose people are
> > 'reverse engineering the dict' by just looking at the code but that's also
> > not so great...
> >
> 
> I think the obvious and only thing you should expect from the to_dict
> method is that it can be reversed by the from_dict method. Subclasses can
> then make small modifications to those methods to add additional
> information as appropriate. There is a bit of a problem in this with the
> way subclasses are done that is fixed in [1] but it does not affect any
> existing code.
> 
> We realize that the to_dict method is subclassed by a lot of services and
> affects RPC and so contexts must be serializable between different versions
> of the library so we will not modify existing to_dict values but as
> mentioned writing your tests to assume this will never be added to sets us
> up for these problems.

Exactly. It breaks the layering between the subclass and the base class.
Unit tests in nova should not be testing functionality defined in a
library, no matter if that library is from Oslo or anywhere else. In
this case, the contract is as Jamie describes above (the return value of
to_dict can be passed to from_dict to get a new context instance). The
contents of the dict are not meant to be used as-is outside of the
context class. The subclass should not be asserting anything about the
contents provided by the base class.
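A minimal stdlib sketch of that contract (class and attribute names are illustrative, not the real oslo.context API): to_dict is only guaranteed to round-trip through from_dict, so a subclass's tests should assert on the keys it owns rather than comparing whole dicts against the base class:

```python
class BaseContext(object):
    """Stands in for the library base class, which may grow new keys."""

    def __init__(self, user=None, is_admin_project=True):
        self.user = user
        self.is_admin_project = is_admin_project

    def to_dict(self):
        return {'user': self.user, 'is_admin_project': self.is_admin_project}

    @classmethod
    def from_dict(cls, values):
        return cls(**values)


class ServiceContext(BaseContext):
    """Stands in for a service subclass adding its own attribute."""

    def __init__(self, read_deleted='no', **kwargs):
        super(ServiceContext, self).__init__(**kwargs)
        self.read_deleted = read_deleted

    def to_dict(self):
        values = super(ServiceContext, self).to_dict()
        values['read_deleted'] = self.read_deleted
        return values

    @classmethod
    def from_dict(cls, values):
        return cls(**values)


# Fragile: asserting the exact dict contents breaks as soon as the base
# class adds a key. Robust: check only subclass keys plus the round trip.
ctxt = ServiceContext(user='alice', read_deleted='yes')
assert ctxt.to_dict()['read_deleted'] == 'yes'
assert ServiceContext.from_dict(ctxt.to_dict()).user == 'alice'
```

The fragile pattern is an equality assertion against a hand-written dict; it passes today and fails the day the base library adds an attribute, which is the breakage described in this thread.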

> In this case oslo.context was largely extracted from nova and so the
> fragile tests make sense and should therefore be fixed - but the oslo
> change does not constitute a breaking API change.

Right. Ronald's change should address this pretty cleanly by only
looking at the expected values defined by the subclass.

Doug

> 
> 
> [1] https://review.openstack.org/#/c/341250/
> 
> >
> >
> >> This kind of feels like semantics. This was an external API that changed
> >> and as a result should have been a major version.
> >>
> >
> > I think this is where it gets a little bit into as u said, semantics, but
> > the semantics IMHO are important here because it affects the ability of
> > oslo.context to move forward & change.
> >
> > I suppose we should/could just put a warning on this method like I did in
> > taskflow (for something similar) @
> 

[openstack-dev] [Cinder] Midcycle Action item Summary

2016-07-22 Thread Kendall Nelson
Hello All,

   We came out of the Midcycle with a lot of things on a lot of people’s
plates. Here is a summary of what everyone signed up for and what things
need owners.

scottda:

   - Pick a time to meet weekly to discuss and push ahead with Active-Active
     HA (day1)
   - Find out the process for how swift uses feature branches and teach us
     (day1)
   - Write a devstack change to fix config for tempest multibackend (just
     pass $CINDER_ENABLED_BACKENDS through to tempest.conf; make sure the
     string formats match up between tempest and cinder) (day2)

geguileo:

   - Make a list of minimal-risk Active-Active HA patches to focus on first
     (day1)

jgriffith:

   - Work with scottda on getting the logic back into Nova (?) that was
     ripped out for setting up multiple back ends; it used to be there but
     was removed because there were errors coming from LVM (day2)
   - Work with xyang and e0ne on consolidating a single stable fake driver
     (day3)

smcginnis:

   - Rebase nested quota support patches (day1)
      - https://review.openstack.org/#/c/298453/ Functional tests for nested
        quotas
      - https://review.openstack.org/#/c/285640/ Cinder test cases for nested
        quotas

eharney:

   - Figure out what needs to be changed in the client for min vol sizes
     (day1)
   - Create a wiki or document and schedule for stable branch release plans
     (day2)

diablo_rojo:

   - Write template for email to send to failing CIs (day1)

bswartz:

   - Push code for your stochastic scheduler spec (day3)

e0ne:

   - Convert functional job in gate to be a stripped-down devstack that can
     have the driver be configured

~~~Needs an owner~~~

   - QA Liaison
   - Request Infra to make driver branches for old releases (day2)
   - Set up project config for driver branches for old releases (day2)
   - Spec for the desired result for seamless failback after replication
     failover (day3)
   - Set provisioning to thick=true in all drivers and then change the
     default provisioning to thin (day3)
   - Update documentation for tests to explain differences between unit,
     functional, in-tree tempest, and tempest tests, i.e. what services run
     in the functional environment versus in-tree tempest tests, what each
     one's purpose is, maybe examples of things that would be tested in each
     area? (day3)
   - Set up jobs for the fake driver and real drivers for functional tests
     (day3)



Links to etherpads where notes were taken and todo’s were set up:

Day1: https://etherpad.openstack.org/p/newton-cinder-midcycle-day1

Day2: https://etherpad.openstack.org/p/newton-cinder-midcycle-day2

Day3: https://etherpad.openstack.org/p/newton-cinder-midcycle-day3

Thanks!

Kendall Nelson

(diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress horizon plugin - congressclient/congress API auth issue - help

2016-07-22 Thread Aimee Ukasick
All - I made the change to the auth_url that  Anusha suggested.
Same problem as before " Cannot authorize API client"
2016-07-22 14:13:50.835861 * calling policies_list =
client.list_policy()*
2016-07-22 14:13:50.836062 Unable to get policies list: Cannot
authorize API client.

I used the token from the log output to query the Congress API with
the keystone v3 token - no issues.
curl -X GET -H "X-Auth-Token: 18ec54ac811b49aa8265c3d535ba0095" -H
"Cache-Control: no-cache" "http://192.168.56.103:1789/v1/policies"

So I really think the problem is that the python-congressclient
doesn't support identity v3.
I thought it did, but then I came across this:
"support keystone v3 api and session based authentication "
https://bugs.launchpad.net/python-congressclient/+bug/1564361
This is currently assigned to Anusha.
I'd like to start work on it since I am becoming familiar with keystone v3.
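For reference, this is roughly the Identity v3 password-auth request body such a client has to POST to /v3/auth/tokens. The builder below is a hand-rolled stdlib sketch for illustration; the default-domain scoping is an assumption:

```python
import json


def build_v3_auth_request(username, password, project_name,
                          user_domain='default', project_domain='default'):
    """Build the JSON body for POST /v3/auth/tokens (password method)."""
    return {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': username,
                        'domain': {'id': user_domain},
                        'password': password,
                    },
                },
            },
            'scope': {
                'project': {
                    'name': project_name,
                    'domain': {'id': project_domain},
                },
            },
        },
    }


# Serialized form of what would go on the wire.
body = json.dumps(build_v3_auth_request('admin', 'secret', 'demo'))
```

The issued token comes back in the X-Subject-Token response header, which is what the curl check above passes as X-Auth-Token on subsequent API calls.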

Thoughts?

aimee




On Fri, Jul 22, 2016 at 8:07 AM, Aimee Ukasick
 wrote:
> Thanks Anusha! I will retest this today. I guess I need to learn more
> about Horizon as well - thanks for pointing me in the right direction.
>
> aimee
>
>
>
> On Fri, Jul 22, 2016 at 6:30 AM, Anusha Ramineni  
> wrote:
>> Hi Aimee,
>>
>> I think devstack by default configured horizon to use v3 .
>> For V2 authentication, from the logs , auth_url doesn't seem to be set
>> explicitly to v2 auth_url .
>>
>> I have always set explicit v2 auth which worked fine.
>> For eg:- auth_url = 'http://:5000/v2.0' , for V2 authentication
>>
>> I have raised a patch, to take the auth_url from horizon settings instead of
>> from request.
>> https://review.openstack.org/#/c/345828/1
>>
>> Please set an explicit v2 auth_url as mentioned above in OPENSTACK_KEYSTONE_URL
>> in /openstack_dashboard/local/local_settings.py and restart the apache2
>> server. Then v2 authentication should go through fine.
>>
>> For v3, we need to add relevant code for v3 authentication in contrib/horizon,
>> as presently it is hardcoded to use only v2. But yes, the code from the plugin
>> model patch is still a WIP, so it doesn't work for v3 authentication I guess.
>> I'll have a look at it and let you know.
>>
>>
>> Best Regards,
>> Anusha
>>
>> On 21 July 2016 at 21:56, Tim Hinrichs  wrote:
>>>
>>> So clearly an authentication problem then.
>>>
>>> Anusha, do you have any ideas?  (Aimee, I think Anusha has worked with
>>> Keystone authentication most recently, so she's your best bet.)
>>>
>>> Tim
>>>
>>> On Thu, Jul 21, 2016 at 8:59 AM Aimee Ukasick
>>>  wrote:

 The  Policy/Data Sources web page throws the same errors. I am
 planning to recheck direct API calls using v3 auth today or tomorrow.

 aimee

 On Thu, Jul 21, 2016 at 10:49 AM, Tim Hinrichs  wrote:
 > Hi Aimee,
 >
 > Do the other APIs work?  That is, is it a general problem
 > authenticating, or
 > is the problem limited to list_policies?
 >
 > Tim
 >
 > On Wed, Jul 20, 2016 at 3:54 PM Aimee Ukasick
 > 
 > wrote:
 >>
 >> Hi all,
 >>
 >> I've been working on Policy UI (Horizon): Unable to get policies
 >> list (devstack) (https://bugs.launchpad.net/congress/+bug/1602837)
 >> for the past 3 days. Anusha is correct - it's an authentication
 >> problem, but I have not been able to fix it.
 >>
 >> I grabbed the relevant code in congress.py from Anusha's horizon
 >> plugin model patchset (https://review.openstack.org/#/c/305063/3) and
 >> added try/catch blocks, logging statements (with error because I
 >> haven't figured out how to set the horizon log level).
 >>
 >>
 >> I am testing the code on devstack, which I cloned on 19 July 2016.
 >>
 >> With both v2 and v3 auth, congressclient.v1.client is created.
 >> The failure happens trying to call
 >> congressclient.v1.client.Client.list_policies().
 >> When using v2 auth, the error message is "Unable to get policies list:
 >> The resource could not be found"
 >> When using v3 auth, the error message is "Cannot authorize API client"
 >>
 >> I am assuming that congressclient.v1.client.Client is
 >>
 >>
 >> https://github.com/openstack/python-congressclient/blob/master/congressclient/v1/client.py
 >> and that client.list_policy() calls list_policy()in the
 >> python-congressclient
 >> which in turn calls the Congress API. Is this correct?
 >>
 >> Any ideas why with v3 auth, the python-congressclient cannot authorize
 >> the
 >> call to the API?
 >>
 >> I looked at other horizon plugin models (ceilometer, neutron, nova,
 >> cerberus, cloudkitty, trove, designate, manila) to see how they
 >> created
 >> the client. While the code to create a client is not identical,
 >> it is vastly different from the code to create a client
 >> 

Re: [openstack-dev] [devstack] How to enable SSL in devStack?

2016-07-22 Thread Rob Crittenden

Brant Knudson wrote:



On Wed, Jul 20, 2016 at 12:29 PM, Rob Crittenden > wrote:
Fixing Keystone is easy. An Apache VirtualHost for 443 needs to be
added.

But I found another, deeper problem: cinder won't listen on SSL.
When they switched to using oslo_service for WSGI they completely
removed the ability to use SSL. See bug
https://bugs.launchpad.net/cinder/+bug/1590901


rob


Problems like this should make us wonder why we're reimplementing basic
functionality like TLS termination. Existing wsgi containers (uwsgi,
gunicorn, and apache) all handle TLS termination just fine.


I'm not exactly sure what you mean. If you mean that doing native TLS in 
eventlet is not a great idea then we are in agreement. But removing it 
should require a plan, not be an unexpected side-effect of another change.


rob
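The 443 VirtualHost mentioned above might look roughly like the following sketch; the server name, certificate paths, and WSGI entry point are all assumptions, not the actual devstack change:

```apache
# Hypothetical SSL VirtualHost for Keystone; adjust names and paths.
Listen 443
<VirtualHost *:443>
    ServerName keystone.example.com
    SSLEngine on
    # Certificate and key locations are assumptions.
    SSLCertificateFile /etc/ssl/certs/keystone.pem
    SSLCertificateKeyFile /etc/ssl/private/keystone.key

    # Serve the Keystone public WSGI application on the TLS port.
    WSGIScriptAlias / /usr/local/bin/keystone-wsgi-public
    WSGIDaemonProcess keystone-ssl processes=2 threads=10
    WSGIProcessGroup keystone-ssl
</VirtualHost>
```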



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-22 Thread Daniel P. Berrange
On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> > Hi Nova Devs,
> > 
> > 
> > 
> > Many times, there are a number of data sets that we have to run the same
> > tests on.
> > 
> > And, to create a different test for each data set's values is
> > time-consuming and inefficient.
> > 
> > 
> > 
> > Data Driven Testing [1] overcomes this issue. Data-driven testing (DDT)
> > is taking a test,
> > 
> > parameterizing it and then running that test with varying data. This
> > allows you to run the
> > 
> > same test case with many varying inputs, therefore increasing coverage
> > from a single test,
> > 
> > reduces code duplication and can ease up error tracing as well.
> > 
> > 
> > 
> > DDT is a third party library needs to be installed separately and invoke the
> > 
> > module when writing the tests. At present DDT is used in cinder and rally.
> 
> There are several projects using it:
> 
> http://codesearch.openstack.org/?q=ddt%3E%3D1.0.1=nope==
> 
> I first came across it when working a little in manila.
> 
> > 
> > 
> > 
> > To start with, I have reported this as a bug [2] and added initial patch
> > [3] for the same,
> > 
> > but couple of reviewers has suggested to discuss about this on ML as
> > this is not a real bug.
> > 
> > IMO this is not a feature implementation and it’s just an effort to
> > simplify our tests,
> > 
> > so a blueprint will be sufficient to track its progress.
> > 
> > 
> > 
> > So please let me know whether I can file a new blueprint or nova-specs
> > to proceed with this.
> > 
> > 
> > 
> > [1] http://ddt.readthedocs.io/en/latest/index.html
> > 
> > [2] https://bugs.launchpad.net/nova/+bug/1604798
> > 
> > [3] https://review.openstack.org/#/c/344820/
> > 
> > 
> > 
> > Thank you,
> > 
> > Dinesh Bhor
> > 
> > 
> > __
> > 
> > 
> > __
> > 
> 
> I agree that it's not a bug. I also agree that it helps in some specific
> types of tests which are doing some kind of input validation (like the patch
> you've proposed) or are simply iterating over some list of values (status
> values on a server instance for example).
> 
> Using DDT in Nova has come up before and one of the concerns was hiding
> details in how the tests are run with a library, and if there would be a
> learning curve. Depending on the usage, I personally don't have a problem
> with it. When I used it in manila it took a little getting used to but I was
> basically just looking at existing tests and figuring out what they were
> doing when adding new ones.

I don't think there's significant learning curve there - the way it
lets you annotate the test methods is pretty easy to understand and
the ddt docs spell it out clearly for newbies. We've far worse things
in our code that create a hard learning curve which people will hit
first :-)

People have essentially been re-inventing ddt in nova tests already
by defining one helper method and them having multiple tests methods
all calling the same helper with a different dataset. So ddt is just
formalizing what we're already doing in many places, with less code
and greater clarity.
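The "helper method called by many test methods" pattern, and the ddt-style decorators that replace it, can be sketched in plain stdlib Python. This is a toy re-implementation to show the mechanics, not the real ddt internals:

```python
import unittest


def data(*values):
    """Toy version of ddt's @data: mark a test method for expansion."""
    def wrapper(func):
        func._data = values
        return func
    return wrapper


def ddt(cls):
    """Toy version of ddt's @ddt: emit one test method per datum."""
    for name, func in list(cls.__dict__.items()):
        if hasattr(func, '_data'):
            for i, value in enumerate(func._data):
                def case(self, func=func, value=value):
                    return func(self, value)
                setattr(cls, '%s_%d' % (name, i), case)
            delattr(cls, name)
    return cls


@ddt
class TestNameValidation(unittest.TestCase):
    # One logical test, three generated test cases: empty, blank, too long.
    @data('', '   ', 'a' * 256)
    def test_invalid_name_rejected(self, name):
        self.assertFalse(0 < len(name.strip()) <= 255)
```

Each datum becomes its own test case in the runner's output, so a failure pinpoints the offending input instead of aborting one big loop.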

> I definitely think DDT is easier to use/understand than something like
> testscenarios, which we're already using in Nova.

Yeah, testscenarios feels little over-engineered for what we want most
of the time.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [devstack] How to enable SSL in devStack?

2016-07-22 Thread Brant Knudson
On Wed, Jul 20, 2016 at 12:29 PM, Rob Crittenden 
wrote:

> Rob Crittenden wrote:
>
>> Andrey Pavlov wrote:
>>
>>> Hi,
>>>
>>> When I ran devstack with SSL I found a bug and tried to fix it -
>>> https://review.openstack.org/#/c/242812/
>>> But no one agree with me.
>>> Try to apply this patch - it may help.
>>> Also there is a chance that new bugs present in devstack that
>>> prevented to install it with SSL.
>>>
>>
>> Seeing how some other things in your local.conf might help but when I
>> tried to reproduce it I got the same error and it failed because Apache
>> didn't have an SSL listener on 443.
>>
>> I'm not sure I'd recommend direct SSL in any case. I'd recommend the
>> tls-proxy service instead. Note that I'm pretty sure it has the same
>> problem: it hasn't been updated to handle port 443 for Keystone.
>>
>> I'm working on switching from stud to mod_proxy if you want to take a
>> look and this problem is fixed there, https://review.openstack.org/301172
>>
>> I'll see about adding a SSL listener to Keystone for the USE_SSL case in
>> the next few days.
>>
>> And yeah, it's a moving target. I have an experimental gate test for
>> tlsproxy but it has to be requested explicitly. My plan is to enable it
>> as non-voting once the mod_proxy changes land so it will at least be
>> more obvious when things break (or maybe we can making it voting).
>>
>
> Fixing Keystone is easy. An Apache VirtualHost for 443 needs to be added.
>
> But I found another, deeper problem: cinder won't listen on SSL. When they
> switched to using oslo_service for WSGI they completely removed the ability
> to use SSL. See bug https://bugs.launchpad.net/cinder/+bug/1590901
>
>
> rob
>
>

Problems like this should make us wonder why we're reimplementing basic
functionality like TLS termination. Existing wsgi containers (uwsgi,
gunicorn, and apache) all handle TLS termination just fine.

-- 
- Brant


Re: [openstack-dev] [devstack] some compress error when deploy OS

2016-07-22 Thread Shake Chen
Hope this can help you.

http://www.chenshake.com/openstack-project-series-3-devstack/

On Fri, Jul 22, 2016 at 9:35 AM,  wrote:

>
> Hi all,
>
> When i use devstack to deploy a OS env, it raise the error.
> The log is as follows.
> Does anybody know how to resolve this problem? Thank you!~
>
> 12 static files copied to '/opt/stack/horizon/static', 1708 unmodified.
> +lib/horizon:init_horizon:152
>  DJANGO_SETTINGS_MODULE=openstack_dashboard.settings
> +lib/horizon:init_horizon:152  django-admin compress --force
> Found 'compress' tags in:
>
> /opt/stack/horizon/openstack_dashboard/templates/horizon/_scripts.html
> /opt/stack/horizon/openstack_dashboard/templates/horizon/_conf.html
> /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html
> Compressing... CommandError: An error occurred during rendering
> /opt/stack/horizon/openstack_dashboard/templates/horizon/_scripts.html:
> '\"../build/dagre-d3.js\"' isn't accessible via COMPRESS_URL
> ('/dashboard/static/') and can't be compressed
> +lib/horizon:init_horizon:1exit_trap
> +./stack.sh:exit_trap:480  local r=1
>
>
> BR,
> dwj
>
>
>
> 
>
>
>
>
>
>


-- 
Shake Chen


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-07-22 Thread Matt Riedemann

On 7/22/2016 8:20 AM, Angus Lees wrote:

On Thu, 21 Jul 2016 at 09:27 Sean Dague > wrote:

On 07/12/2016 06:25 AM, Matt Riedemann wrote:

> We probably aren't doing anything while Sean Dague is on vacation.
He's
> back next week and we have the nova/cinder meetups, so I'm planning on
> talking about the grenade issue in person and hopefully we'll have a
> plan by the end of next week to move forward.

After some discussions at the Nova midcycle we threw together an
approach where we just always allow privsep-helper from oslo.rootwrap.

https://review.openstack.org/344450


Were these discussions captured anywhere?  I thought we'd discussed
alternatives on os-dev, reached a conclusion, implemented the
changes(*), and verified the results all a month ago - and that we were
just awaiting nova approval.  So I'm surprised to see this sudden change
in direction...

(*) Changes:
https://review.openstack.org/#/c/329769/
https://review.openstack.org/#/c/332610/
mriedem's verification: https://review.openstack.org/#/c/331885/

 - Gus

We did a sniff test of this, and it worked to roll over the upgrade
boundary, without an etc change, and work with osbrick 1.4.0 (currently
blacklisted because of the upgrade issue). While I realize it wasn't the
favorite approach by many it works. It's 3 lines of functional change.
If we land this, release, and bump the minimum, we've got the upgrade
issue solved in this cycle.

Please take a look and see if we can agree to this path forward.

-Sean

--
Sean Dague
http://dague.net







We talked about it at the nova midcycle, the etherpad is here but the 
notes on privsep/grenade are pretty sparse:


https://etherpad.openstack.org/p/nova-newton-midcycle

Long-term we want this in code, which is what privsep is for, but today 
it's config because it's deployed into /etc, so we treat it as config 
with the same rules for upgrades that are applied in grenade for actual 
config options, i.e. new code should be able to run on old config.


I mentioned that we still break this for other new filters which we 
don't test, but the feeling was we shouldn't change how we do this for 
things we do test since operators rely on it and upgrade is their top 
pain point.


We also decided that simply hard-coding the privsep-helper in 
oslo.rootwrap itself was better than needing to script/hack the same 
thing for every project that is going to adopt privsep - and we can 
isolate it in the rootwrap library so there are no exceptional upgrade 
scripts for newton (for nova, or anyone).


So this is not great, but it's the least bad to get us over this issue 
for newton and unblock os-brick and os-vif and allow new projects to 
start adopting privsep and not hit the same upgrade issues.
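For context, the per-project alternative would have meant every adopting project shipping a rootwrap filter for the daemon in its etc/, along the lines of this hypothetical filter file; hard-coding the allowance in oslo.rootwrap itself avoids that:

```ini
# Hypothetical etc/nova/rootwrap.d/privsep.filters
[Filters]
# Let the service spawn the privsep daemon as root.
privsep-helper: CommandFilter, privsep-helper, root
```

The upgrade problem was that such files live in /etc, so old configs deployed before the release would not contain the new filter when new code needed it.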


mikal suggested that gus and sdague talk over a hangout or some higher 
bandwidth medium if we still need to hash things out here.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron][oslo.db] Inspecting sqlite db during unit tests

2016-07-22 Thread Mike Bayer



On 07/22/2016 04:02 AM, Kevin Benton wrote:

Now that we have switched to oslo.db for test provisioning the
responsibility of choosing a location lands
here: 
https://github.com/openstack/oslo.db/blob/a79479088029e4fa51def91cb36bc652356462b6/oslo_db/sqlalchemy/provision.py#L505

The problem is that when you specify OS_TEST_DBAPI_ADMIN_CONNECTION it
does end up creating the file, but then the logic above chooses a URL
based on the random ident. So you can find an sqlite file in your tmp
dir, it just won't be the one you asked for.

It seems like a bug in the oslo.db logic, but the commit that added it
was part of a much larger refactor so I'm not sure if it was intentional
to ensure that no two tests used the same db.


it is, the testr system runs tests in multiple subprocesses and I think 
neutron has it set to four.  if they all shared the same sqlite database 
file you'd have failed tests.
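The connection-private nature of in-memory SQLite, which is why a separate sqlite3 CLI can never inspect it, versus the shared visibility of a file-backed database, can be demonstrated with just the stdlib:

```python
import os
import sqlite3
import tempfile

# In-memory: each connection gets its own private, empty database, so a
# second connection (or the sqlite3 command line) sees nothing.
mem_a = sqlite3.connect(':memory:')
mem_a.execute('CREATE TABLE ports (id INTEGER)')
mem_b = sqlite3.connect(':memory:')
assert mem_b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall() == []

# File-backed: any other connection to the same path sees the same schema,
# which is what makes breakpoint-time inspection from another shell possible.
path = os.path.join(tempfile.mkdtemp(), 'unit-test.db')
file_a = sqlite3.connect(path)
file_a.execute('CREATE TABLE ports (id INTEGER)')
file_a.commit()
file_b = sqlite3.connect(path)
tables = file_b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
assert tables == [('ports',)]
```

It also shows why per-worker random idents are needed for file URLs: unlike ':memory:', every test subprocess pointed at one file path would share (and corrupt) the same state.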





On Thu, Jul 21, 2016 at 1:45 PM, Carl Baldwin > wrote:

Hi,

In Neutron, we run unit tests with an in-memory sqlite instance. It
is impossible, as far as I know, to inspect this database using the
sqlite3 command line while the unit tests are running. So, we have
to resort to python / sqlalchemy to do it. This is inconvenient.

Months ago, I was able to get the unit tests to write the sqlite db
to a file so that I could inspect it while I was sitting at a
breakpoint in the code. That was very nice. Yesterday, I tried to
repeat that while traveling and was unable to figure it out. I had
to time box my effort to move on to other things.

As far as I remember, the mechanism that I used was to adjust the
neutron.conf for the tests [1]. I'm not totally sure about this
because I didn't take sufficient notes, I think because it was
pretty easy to figure it out at the time. This mechanism doesn't
seem to have any effect these days. I changed it to
'sqlite:////tmp/unit-test.db' and never saw a file created there.

I did a little bit of digging and I tried one more thing. That was
to set OS_TEST_DBAPI_ADMIN_CONNECTION='sqlite:////tmp/unit-test.db'
in the environment before running tests. I was encouraged because
this caused a file to be created at that location but the file
remained empty for the duration of the run.

Does anyone know off the top of their head how to get unit tests in
Neutron to use a file based sqlite db?

Carl

[1] 
https://github.com/openstack/neutron/blob/97c491294cf9eca0921336719d62d74ec4e1fa96/neutron/tests/etc/neutron.conf#L26



Re: [openstack-dev] [Neutron][oslo.db] Inspecting sqlite db during unit tests

2016-07-22 Thread Mike Bayer



On 07/22/2016 04:02 AM, Kevin Benton wrote:

Now that we have switched to oslo.db for test provisioning the
responsibility of choosing a location lands
here: 
https://github.com/openstack/oslo.db/blob/a79479088029e4fa51def91cb36bc652356462b6/oslo_db/sqlalchemy/provision.py#L505

The problem is that when you specify OS_TEST_DBAPI_ADMIN_CONNECTION it
does end up creating the file, but then the logic above chooses a URL
based on the random ident. So you can find an sqlite file in your tmp
dir, it just won't be the one you asked for.

It seems like a bug in the oslo.db logic, but the commit that added it
was part of a much larger refactor so I'm not sure if it was intentional
to ensure that no two tests used the same db.


There is also a very recent commit to Neutron at 
https://review.openstack.org/#/c/332476/ , which I think changes the 
system to actually use the provisioning for the SQLite database as well, 
whereas before it might have been not taking effect.  But in any case, 
the OS_TEST_DBAPI_ADMIN_CONNECTION thing still works in that if you give 
it a file-based URL, provisioning should be putting the database files 
in /tmp.  If your approach is "pdb.set_trace(); then look at the file", 
just do this:


$ OS_TEST_DBAPI_ADMIN_CONNECTION=sqlite:///myfile.db 
.tox/functional/bin/python -m unittest 
neutron.tests.unit.db.test_db_base_plugin_v2.TestBasicGet.test_single_get_admin


> 
/home/classic/dev/redhat/openstack/neutron/neutron/tests/unit/db/test_db_base_plugin_v2.py(790)test_single_get_admin()

-> plugin = neutron.db.db_base_plugin_v2.NeutronDbPluginV2()
(Pdb)
(Pdb) self.engine.url
sqlite:////tmp/hjbckefatl.db

then you can "sqlite3 /tmp/hjbckefatl.db" while the test is pending.






On Thu, Jul 21, 2016 at 1:45 PM, Carl Baldwin wrote:

Hi,

In Neutron, we run unit tests with an in-memory sqlite instance. It
is impossible, as far as I know, to inspect this database using the
sqlite3 command line while the unit tests are running. So, we have
to resort to python / sqlalchemy to do it. This is inconvenient.

Months ago, I was able to get the unit tests to write the sqlite db
to a file so that I could inspect it while I was sitting at a
breakpoint in the code. That was very nice. Yesterday, I tried to
repeat that while traveling and was unable to figure it out. I had
to time box my effort to move on to other things.

As far as I remember, the mechanism that I used was to adjust the
neutron.conf for the tests [1]. I'm not totally sure about this
because I didn't take sufficient notes, I think because it was
pretty easy to figure it out at the time. This mechanism doesn't
seem to have any effect these days. I changed it to
'sqlite:////tmp/unit-test.db' and never saw a file created there.

I did a little bit of digging and I tried one more thing. That was
to set OS_TEST_DBAPI_ADMIN_CONNECTION='sqlite:////tmp/unit-test.db'
in the environment before running tests. I was encouraged because
this caused a file to be created at that location but the file
remained empty for the duration of the run.

Does anyone know off the top of their head how to get unit tests in
Neutron to use a file based sqlite db?

Carl

[1] 
https://github.com/openstack/neutron/blob/97c491294cf9eca0921336719d62d74ec4e1fa96/neutron/tests/etc/neutron.conf#L26



Re: [openstack-dev] [nova][qa] When do we need tests for microversions in Tempest?

2016-07-22 Thread Matt Riedemann

On 7/14/2016 4:40 PM, Matt Riedemann wrote:

On 7/14/2016 3:11 AM, GHANSHYAM MANN wrote:

1. Always add a schema change to Tempest if a microversion changes a
response.


The problem with this is we shouldn't land a schema change by itself
in tempest.
Until we have something using the schema we have no verification that
they
actually work. We can and will land incorrect schemas if we did this.
That's why
there is a pretty strong policy of only landing code that is run in
CI somewhere
for Tempest.


+1, yes we should not add those without testing.



OK, good point on not landing changes that aren't tested. That's pretty
obvious.

For something like this though:

https://review.openstack.org/#/c/339559/

The gate-tempest-dsvm-neutron-full-ssh is testing it indirectly, so I'm
assuming that's OK even though we don't have an explicit test for the
2.3 microversion?

I know the patch needs to be updated for the other extended server
attributes in that microversion, but it's the immediate thing I want to
get fixed so we can get on with making the
gate-tempest-dsvm-neutron-full-ssh job voting.



We talked about this topic at the nova midcycle and these are the 
notes/decisions I took:


* We can't have schema changes in Tempest that aren't tested - this is 
already the Tempest policy and makes sense.
* When adding tests for a new microversion, if there is a gap in Tempest 
response schema validation it should be filled in that patch.
* After feature freeze we should fill any gaps between nova's latest 
microversion and the schema coverage in Tempest. Right now we have a 
backlog so we're playing catch up, but that shouldn't happen once we get 
caught up.
* It's fine to have microversion tests in Tempest even if they only hit 
the nova API/DB because we want the response schema validation (and to 
avoid these gaps in coverage).
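
As a rough illustration of what response schema validation buys us, a
hand-rolled stand-in for the jsonschema checks Tempest performs might look
like this (the attribute name is illustrative only, not the real 2.3 schema):

```python
# Minimal stand-in for Tempest's JSON-schema response validation: a
# microversion that adds a field fails validation when the field is absent.
SCHEMA = {
    "server": {
        "id": str,
        "OS-EXT-SRV-ATTR:reservation_id": str,
    }
}

def validate(response, schema=SCHEMA):
    """Return a list of problems: missing sections/fields or wrong types."""
    problems = []
    for section, fields in schema.items():
        body = response.get(section)
        if body is None:
            problems.append("missing section: %s" % section)
            continue
        for name, expected_type in fields.items():
            if name not in body:
                problems.append("missing field: %s" % name)
            elif not isinstance(body[name], expected_type):
                problems.append("wrong type for: %s" % name)
    return problems

ok = validate({"server": {"id": "abc",
                          "OS-EXT-SRV-ATTR:reservation_id": "r-1"}})
gap = validate({"server": {"id": "abc"}})  # microversion attribute missing
```

This is why a test exercising the schema, even one that only hits the nova
API/DB, still catches a server that silently stops returning the new field.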


So I think we can move forward on filling the gap in Tempest (there are 
several open changes for review).


I'll also push a docs change to nova [1] to mention that a Tempest test 
needs to be added for any microversion which changes the response schema.


[1] 
http://docs.openstack.org/developer/nova/code-review.html#microversion-api


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [telemetry] Mascot

2016-07-22 Thread Julien Danjou
On Thu, Jul 21 2016, gordon chung wrote:

> meerkat is a good option too. i think we have something to vote on :)

I've started a poll (sent to core reviewers, it's the easiest), go ahead
and vote. I'll stop the vote Tuesday (or before if everyone voted by
then!).

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




[openstack-dev] [nova] Overview of the libvirt instance storage series

2016-07-22 Thread Matthew Booth
This series is part of the priority feature libvirt-instance-storage[1]. At
first glance it may not be immediately apparent why, so I'll work backwards
for context.

The purpose of the feature is to create an unambiguous, canonical source of
metadata about 'local' storage. 'Local' here is defined to be storage which
is directly managed by Nova, so non-volume root, ephemeral, swap, and config
disks. This storage may not actually be local (eg Rbd), but it's managed
locally, so that's how we treat it. The justification for wanting to do this
is in the spec, so I won't repeat it here.

In order to solve this problem, we first had to fix the interface between
libvirt.driver and libvirt.imagebackend. As it stands, the code currently
calls Image.cache(), passing a callback function which will write data to a
target. The problem is, we need this context in order to know in advance how
the disk will be persisted. The lack of this context also results in some
tortured layering violations, such as the SUPPORTS_CLONE interface, where we
essentially push Rbd-specific logic into driver. There are many examples of
this. The solution is the Image.create_from_image and Image.create_from_func
interfaces, which provide the backend with all the relevant context required
to do backend-specific special handling in the backend.

We initially created a series of patches which first added the relevant new
interfaces for each of the 5[2] storage backends, and followed this up with
a patch which updated driver to call the new interfaces. 2 things came back
from early review feedback which made us re-examine this:

* The 'big bang' approach of switching all backends on in a single commit
  was tolerable, but not popular. We were asked to look into making the
  switch-over incremental.

* Feodor Tersin (but not tempest or the unit tests) noticed that we broke
  resize.

The resize problem, it turns out, was hard. The problem is that as well as
providing no context to the backend, the Image.cache() interface also does
'all the things', which incidentally doesn't always involve the cache.
Additionally, it was called via _create_image(), which also does 'all the
things'. So there were 2 levels of 'all the things'. We expressly did not
want to turn our create functions into new 'all the things' methods. Because
cache() and _create_image() both do so many things that nobody can remember
why they do them all, and which ones interact badly in which contexts, they
had become magnets for hacks, workarounds, and bugs[3]. Rather than add
another layer to _create_image(), we decided to finally pull it apart.

The resulting series achieves both of the above goals. We add a shim layer
which implements the new Image interfaces in terms of the existing cache()
interface. Specific backend implementations override the shim layer with
a backend-specific implementation. Once all backends have been updated we
can start to remove additional assumptions from driver, but the switch-over
will happen incrementally.
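
The shim-layer idea can be sketched like this (method names are from this
mail, but the signatures and bodies are guesses for illustration, not the
actual nova code):

```python
class Image(object):
    """Base backend: new interface implemented via the old cache() call."""

    def cache(self, fetch_func, filename, size=None):
        # Old-style entry point: the backend only sees an opaque callback,
        # so it cannot know in advance how the disk will be persisted.
        return fetch_func("/var/lib/nova/_base/%s" % filename)

    def create_from_image(self, image_id, size):
        # Shim: express the new, context-rich interface in terms of
        # cache(), so backends can be converted one at a time.
        return self.cache(lambda target: ("fetched", image_id, target),
                          filename=image_id, size=size)


class Rbd(Image):
    def create_from_image(self, image_id, size):
        # Backend-specific override: clone inside ceph instead of
        # fetching, without leaking SUPPORTS_CLONE into driver.
        return ("cloned", image_id)

generic = Image().create_from_image("img1", 1)
rbd = Rbd().create_from_image("img1", 1)
```

Driver only ever calls create_from_image(); whether that fetches through the
cache or clones is now entirely the backend's business.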

We also deconstruct both _create_image and _create_images_and_backing, and
provide helper methods for callers in driver to easily implement only the
functionality they require. This both makes it obvious to the reader what is
happening for a particular caller, and removes interactions between
unexpected behaviours (by eliminating them).

The change to _create_images_and_backing also makes it use common code with
regular backend disk creation. Although it wasn't an explicit goal of this
series, I believe this also fixes a problem which would have prevented live
migration between Lvm-backed instances.

In creating this series we have made good test coverage a high priority.
Specifically, we want to ensure that tests:

* Validate the behaviour we are interested in

* Provide assurance that the refactoring is not introducing regressions

For the latter reason, we have pulled any test changes we can to the front
of the series, before making any actual changes. This allows us to see that
the tests ran successfully both before and after the interface change. After
the interface change, we update the tests to test the new interfaces. As
they have more context, these are also easier to test more thoroughly.

The series can be viewed here:

  https://review.openstack.org/#/q/topic:libvirt-imagebackend

Note that the series is a single, dependent chain. Yes, you really do have
to click next to see it all, because there are 35 patches. Yes, this is a
total pita to manage.

The last patch in the series is currently:

  https://review.openstack.org/#/c/320610/

This is the patch which adds the Qcow2 backend, which is currently the only
backend patch we've rebased on to the new series. We picked this because
it's the default for most tests, so a green here gives us a moderately
confident feeling. The others will follow.

To the greatest extent practical the patches are all single purpose, which
is why there are so many. I'll highlight the 'crux' 

Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testingcore

2016-07-22 Thread Darek Śmigiel
I’m not a core, so treat this as +0, but I think Jakub will be a good
addition to the core team.

So +1

> On Jul 22, 2016, at 3:20 AM, Martin Hickey  wrote:
> 
> +1
> 
> Oleg Bondarev ---22/07/2016 09:13:16---+1 On Fri, Jul 22, 2016 
> at 2:36 AM, Doug Wiegley 
> 
> From: Oleg Bondarev 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 22/07/2016 09:13
> Subject: Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testing 
> core
> 
> 
> 
> 
> +1
> 
> On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley wrote:
> +1
> On Jul 21, 2016, at 5:13 PM, Kevin Benton wrote:
> 
> +1
> 
> On Thu, Jul 21, 2016 at 2:41 PM, Carl Baldwin wrote:
> +1 from me
> 
> On Thu, Jul 21, 2016 at 1:35 PM, Assaf Muller wrote:
> As Neutron's so called testing lieutenant I would like to propose
> Jakub Libosvar to be a core in the testing area.
> 
> Jakub has demonstrated his inherent interest in the testing area over
> the last few years, his reviews are consistently insightful and his
> numbers [1] are in line with others and I know will improve if given
> the responsibilities of a core reviewer. Jakub is deeply involved with
> the project's testing infrastructures and CI systems.
> 
> As a reminder the expectation from cores is found here [2], and
> specifically for cores interesting in helping out shaping Neutron's
> testing story:
> 
> * Guide community members to craft a testing strategy for features [3]
> * Ensure Neutron's testing infrastructures are sufficiently
> sophisticated to achieve the above.
> * Provide leadership when determining testing Do's & Don'ts [4]. What
> makes for an effective test?
> * Ensure the gate stays consistently green
> 
> And more tactically we're looking at finishing the Tempest/Neutron
> tests dedup [5] and to provide visual graphing for historical control
> and data plane performance results similar to [6].
> 
> [1] http://stackalytics.com/report/contribution/neutron/90 
> 
> [2] http://docs.openstack.org/developer/neutron/policies/neutron-teams.html 
> 
> [3] 
> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>  
> 
> [4] https://assafmuller.com/2015/05/17/testing-lightning-talk/ 
> 
> [5] https://etherpad.openstack.org/p/neutron-tempest-defork 
> 
> [6] https://www.youtube.com/watch?v=a0qlsH1hoKs=youtu.be=24m22s 
> 
> 

[openstack-dev] [Fuel] New version of fuel-devops (2.9.22)

2016-07-22 Thread Alexey Stepanov
Hi All!

Today we are going to update the 'fuel-devops' framework on our product CI
to version 2.9.22.
It's the FINAL version in the 2.9 branch; new active development will
happen in the 3.x branch only, and 3.0.1 has been released as the first of
those.

Changes since 2.9.21:
* For devops:

  - paramiko 2.0.1 is banned because connection failures were reproduced
with it.  [1]
  - keystoneauth1 debug info is no longer logged -- logs will be more
readable.  [2]
  - fixed a bug where an unexpected 'k e y s' header was printed on a
'dos.py list' request when no environments were registered in the
database.  [3]

* For QA automation:

  - SSHClient().check_call() now accepts an optional error_info parameter
used as a header for the error log.  [4]
  - An ExecResult class is implemented as universal storage for execution
results: R/W access to 'stdout', 'stderr' and 'exit_code'; R/O access to
the Unicode 'stdout_str', 'stderr_str' and others; on-demand decoding of
stdout as JSON or YAML.  [5]
  - Support for chrony NTP has been ported from fuel-devops 3.  [6]
  - Implemented a Subprocess helper class with an SSHClient-like API for
calling subprocesses lock-free (buffers polled every 100 ms). [7]
  - On an SSHClient request from the cache, all exceptions are caught as a
reason to reconnect. [8]
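
A rough sketch of the ExecResult container described in [5] -- attribute
names are inferred from this changelog, not taken from the real fuel-devops
class:

```python
import json

class ExecResult(object):
    def __init__(self, stdout=b"", stderr=b"", exit_code=0):
        # R/W raw fields, as described in the changelog entry.
        self.stdout = stdout
        self.stderr = stderr
        self.exit_code = exit_code

    @property
    def stdout_str(self):
        # R/O decoded view of stdout.
        return self.stdout.decode("utf-8")

    @property
    def stderr_str(self):
        return self.stderr.decode("utf-8")

    @property
    def stdout_json(self):
        # On-demand decoding of stdout as JSON.
        return json.loads(self.stdout_str)

res = ExecResult(stdout=b'{"status": "ok"}', exit_code=0)
```

Test code can then assert on the decoded views instead of re-parsing raw
byte streams in every test.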


List of all changes is available on github [9].

[1] https://review.openstack.org/#/c/344746/
[2] https://review.openstack.org/#/c/345377/
[3] https://review.openstack.org/#/c/345991/
[4] https://review.openstack.org/#/c/340996/
[5] https://review.openstack.org/#/c/340997/
[6] https://review.openstack.org/#/c/342101/
[7] https://review.openstack.org/#/c/342712/
[8] https://review.openstack.org/#/c/344667/
[9] https://github.com/openstack/fuel-devops/compare/2.9.21...release/2.9


-- 
Best regards,
Alexey Stepanov


Re: [openstack-dev] [oslo.db] [CC neutron] CIDR overlap functionality and constraints

2016-07-22 Thread Mike Bayer



On 07/21/2016 02:43 PM, Carl Baldwin wrote:


None of these operations are expected to be very contentious and
performance hasn't really been a concern yet. If it were a big concern,
I'd be very interested in the GiST index solution because, as I
understand it, detecting overlap without that capability requires a
linear search through the existing records. But, GiST index capability
isn't ubiquitous which makes it difficult to get excited about for
practical purposes. I do have an academic interest in it. Computational
geometry used to be a hobby of mine when I worked on tools for physical
design of microchips. I've been telling people for years that I thought
it'd be cool if databases had some facility for indexing potentially
overlapping ranges in one or more dimensions. This looks like some
pretty cool stuff.

Can you think of any other operations in Neutron -- or elsewhere in
OpenStack -- which will benefit from these new functions? I'll be
honest. Without some compelling benefit, it may be very difficult to
swallow the pill of dealing with special case code in each kind of DB
for this capability. But, if it is abstracted sufficiently by oslo db,
it might be worth looking at down the road. The changes to adopt such
functionality shouldn't be too difficult.


Well let me reiterate the idea, which is that:

1. we add features to oslo.db so that the use of a custom stored 
function is not a big deal


2. we add features to oslo.db that are based on using triggers, special 
constraints, or Gist indexes, so that the use of a database constraint 
that needs this kind of thing is not a big deal


3. the first proof of concept for this, is a CIDR function / trigger for 
this one reported issue in Neutron.


Now the question is, "can I think of any operation in openstack, besides 
this one, that would benefit from a custom stored function or a 
specialized constraint".   The answer for me is "not specifically, but I
bet if I started looking, I would".  Anytime there's an application
loading some rows of data out of a table, doing some calculations on it, 
then dealing with a subset of those rows as a result, is a candidate for 
#1 (in fact I have some vague recollection of seeing some patch in 
Neutron that had this issue, it was the reason that compare-and-swap 
could not be used).   Anytime an application is trying to insert rows 
into a table which should be rejected based on some criteria beyond 
"unique key", that's a candidate for #2 - perhaps the plethora of 
UUID-based recipes throughout openstack in some cases could be better 
stated by more data-driven constraints.


If we were to decide that the Neutron issue right here doesn't need any 
changes, then I would be fine abandoning this initiative for now.  But 
as it stands, there seems to be a need to either do this change, *or* 
add a new UUID column to the subnets table, and basically I'm hoping to 
start steering the boat away from the island of 
add-a-new-column-everytime-theres-a-concurrency-problem.
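
For reference, the linear scan an application is stuck doing when the
database can't enforce non-overlap itself is easy to state with the stdlib
(a sketch of the general problem, not Neutron's actual IPAM code):

```python
import ipaddress

def overlapping(existing_cidrs, candidate):
    # Without a database-side constraint (trigger, stored function, or
    # GiST exclusion), the application must load and check every existing
    # row -- and two concurrent requests can both pass this check.
    new = ipaddress.ip_network(candidate)
    return [c for c in existing_cidrs
            if ipaddress.ip_network(c).overlaps(new)]

existing = ["10.0.0.0/24", "10.0.1.0/24"]
hits = overlapping(existing, "10.0.0.128/25")
misses = overlapping(existing, "10.0.2.0/24")
```

Pushing this check into the database closes exactly the race window that
otherwise motivates yet another column.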








[openstack-dev] [nova][neutron] neutron port duplication

2016-07-22 Thread Andrey Volkov
Hi, nova and neutron teams,

While booting a new instance, nova requests a port for that instance from
neutron.
It's possible for neutron not to respond, due to a timeout or a broken
connection, in which case nova retries the port creation. This results in
duplicate ports for the instance [1].

To solve this issue, different methods can be applied:
- Transactional port creation in neutron (rolling back if the client
doesn't acknowledge the answer).
- Idempotent port creation (the client provides some id and the server
does a get_or_create on that id).
- Getting the port on the client before the next retry attempt (idempotent
port creation on the client side).
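
The second option -- idempotent creation keyed on a client-supplied id --
can be sketched like this (a toy model for illustration, not the neutron
API):

```python
import uuid

class ToyNeutron(object):
    """Toy server that keys port creation on a client-supplied request id."""

    def __init__(self):
        self._by_request_id = {}

    def create_port(self, request_id, body):
        # get_or_create semantics: retrying with the same request_id
        # returns the port created earlier instead of duplicating it.
        if request_id not in self._by_request_id:
            self._by_request_id[request_id] = dict(body, id=str(uuid.uuid4()))
        return self._by_request_id[request_id]

neutron = ToyNeutron()
req = str(uuid.uuid4())
first = neutron.create_port(req, {"device_id": "instance-1"})
# The first response was lost to a timeout, so nova retries with the same id:
retry = neutron.create_port(req, {"device_id": "instance-1"})
```

With this scheme, a lost response costs nothing: the retry simply observes
the port that already exists.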

Questions to the community:
- Am I right in my thinking? Does the problem exist? Is there perhaps
already a tool that can solve it?
- Which method is better to apply to solve the problem if it exists?

[1] https://bugs.launchpad.net/nova/+bug/1603909


-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.


[openstack-dev] [Cinder] Logo survey

2016-07-22 Thread Sean McGinnis
Hey all,

During the midcycle we discussed a few options for a new logo mascot. I've
taken a few of the top choices and created a survey to get more input:

https://www.surveymonkey.com/r/G7JRNQB

Please vote as soon as possible. These are first come, first served, so we
should decide soon and get our selections submitted. I will probably wait
until Monday to collect responses unless we get a lot of clear responses
right away.

Thanks!
Sean




Re: [openstack-dev] [daisycloud-core] IRC Meeting Log 20160722

2016-07-22 Thread Yujun Zhang
Why not try the meeting bot to record the IRC log and archive it
automatically? See http://eavesdrop.openstack.org/

--
Yujun

On Fri, Jul 22, 2016 at 5:40 PM <hu.zhiji...@zte.com.cn> wrote:

> Hi Team,
>
> About the daisy4nfv things we did not have time to discuss, I want to add
> that in the future, when we move to work with the OPNFV world, I think we
> will still use this channel to discuss not only daisycloud but also
> daisy4nfv. I personally do not want to maintain two meetings by myself. But
> your opinions are more than welcome.
>
>
>
>
> 20160722 Agenda
> ===
> 1) roll call
> 2) Agenda bashing
> 3) Approved Wei (kong.w...@zte.com.cn) as daisycloud core reviewer
> 4) daisycloud status update
> 5) daisy4nfv status update and disscussion in daisycloud channel
>
>
> Log
> ===
>
> ? daisycloud-core project weekly meeting
> huzhj
> Hello
> zhouya
> hi
> → lu has joined
> ? lu is now known as Guest25493
> ← Guest25493 has left
> huzhj
> Yao is on the sick leave, so may be we can not reach her
> zhouya
> ok
> huzhj
> Let's wait for more people come online
> → luyao has joined
> huzhj
> OK
> Let's start
> → King has joined
> ← King has quit (Client Quit)
> huzhj
> Today's agenda
> 1) roll call
> 2) Agenda bashing
> 3) Approved Wei (kong.w...@zte.com.cn) as daisycloud core reviewer
> 4) daisycloud status update
> 5) daisy4nfv status update
> 6) daisy4nfv disscussion in daisycloud channel
> 1) roll call
> zhouya
> o/
> → Sun has joined
> luyao
> o/
> huzhj
> roll call is a convention that everyone write them name with a "+" as the
> preffix
> Sun
> O/
> huzhj
> Next topic, Agenda bashing
> zhouya
> +zhouya
> Sun
> +sunjing
> luyao
> +luyao
> huzhj
> Agenda bashing is a way for modifing the agenda right before starting
> meeting
> so if you have anything to disscuss please feel free to ask me to add to
> the agenda list
> If nothing to modify, we will go to next topic
> Sun
> Ok with me
> huzhj
> Ok, next topic Approved Wei (kong.w...@zte.com.cn) as daisycloud core
> reviewer
> luyao
> ok
> huzhj
> As the result of the voting on the mailing list, Wei has successfully been
> approved as our core reviewer
> luyao
> congratulation
> huzhj
> Hope he can do more great jobs for the project in future :)
> congratulations!
> But he is not online ~~
> Sun
> congratulations
> zhouya
> he is busy with tempest
> ← luyao has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC
> client)
> zhouya
> with my joy respectfully
> huzhj
> Yes, our tempest is one the way, Wei will make it work for us soon.
> → luyao has joined
> → kongwei has joined
> kongwei
> hi
> huzhj
> We are still open for discuss about who will be and how he/she can be the
> next core reviewer
> Hi Wei
> We are talking about you. thanks for the great work on the tempest things
> Next topic
> 4) daisycloud status update
> Same old way, do it in alphabet order
> huzhj
> I wil be the first, keepalived/haproxy problem during kolla deploying has
> been solved
> the problem is that there are more than one clusters on our shared network
> using the same virtual_router_id in /etc/keepalived/keepalived.conf
> the virtual_router_id is choosed by kolla script which is a number
> beetween 0 and 255
> luyao
> good
> huzhj
> it is easy to conflicted , so the resolution is for us to not to use
> shared network to deploy many clusters at the same time.
> zhouya
> OK
> huzhj
> Next thing is that i have been cut off the size of kolla images tarball by
> simply build what we really needs
> Sun
> the globals.yml give a default value 51
> huzhj
> @Sun , good catch!
> luyao
> we can set diff value by daisy
> zhouya
> so modify globals.yml?
> huzhj
> Yes, that will be a greate help for our development atleast
> Sun
> maybe wo can set diff value according ip of vm
> huzhj
> Due to the time limit let's talk about it in more details offline
> The current list of images to build is as follows:
> luyao
> ok
> huzhj
> 192.168.0.48:4000/kollaglue/centos-binary-neutron-openvswitch-agent
> 192.168.0.48:4000/kollaglue/centos-binary-neutron-server
> 192.168.0.48:4000/kollaglue/centos-binary-neutron-metadata-agent
> 192.168.0.48:4000/kollaglue/centos-binary-neutron-dhcp-agent
> 192.168.0.48:4000/kollaglue/centos-binary-neutron-l3-agent
> 192.168.0.48:4000/kollaglue/centos-binary-nova-libvirt
> 192.168.0.48:4000/kollaglue/centos-binary-nova-compute
> 192.168.0.48:4000/kollaglue/centos-binary-nova-scheduler
> 192.168.0.48:4000/kollaglue/centos-binary-nova-conductor
> 192.168.0.48:4000/kollaglue

Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-07-22 Thread Angus Lees
On Thu, 21 Jul 2016 at 09:27 Sean Dague  wrote:

> On 07/12/2016 06:25 AM, Matt Riedemann wrote:
> 
> > We probably aren't doing anything while Sean Dague is on vacation. He's
> > back next week and we have the nova/cinder meetups, so I'm planning on
> > talking about the grenade issue in person and hopefully we'll have a
> > plan by the end of next week to move forward.
>
> After some discussions at the Nova midcycle we threw together an
> approach where we just always allow privsep-helper from oslo.rootwrap.
>
> https://review.openstack.org/344450


Were these discussions captured anywhere?  I thought we'd discussed
alternatives on os-dev, reached a conclusion, implemented the changes(*),
and verified the results all a month ago - and that we were just awaiting
nova approval.  So I'm surprised to see this sudden change in direction...

(*) Changes:
https://review.openstack.org/#/c/329769/
https://review.openstack.org/#/c/332610/
mriedem's verification: https://review.openstack.org/#/c/331885/

 - Gus

> We did a sniff test of this, and it worked to roll over the upgrade
> boundary, without an etc change, and work with osbrick 1.4.0 (currently
> blacklisted because of the upgrade issue). While I realize it wasn't the
> favorite approach by many it works. It's 3 lines of functional change.
> If we land this, release, and bump the minimum, we've got the upgrade
> issue solved in this cycle.
>
> Please take a look and see if we can agree to this path forward.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [cinder] Volume Drivers unit tests

2016-07-22 Thread Eric Harney
On 07/21/2016 05:26 PM, Knight, Clinton wrote:
> Nate, you have to press Ctrl-C to see the in-progress test, that’s why you 
> don’t 
> see it in the logs.  The bug report shows this and points to the patch where 
> it 
> appeared to begin. https://bugs.launchpad.net/cinder/+bug/1578986
> 
> Clinton
> 

I think this only gives a backtrace of the test runner and not the test.

I attached gdb when this hang occurred and saw this.  Looks like we still
have a thread running the oslo.messaging fake driver.

http://paste.openstack.org/raw/539769/

(Linked in the bug report as well.)
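Attaching gdb shows the interpreter's C-level frames; when the hang is in Python code, an equivalent all-threads dump can be taken from inside the process with `sys._current_frames()`. A minimal, self-contained sketch — `stuck_worker` is a hypothetical stand-in for the lingering fake-driver thread, not Cinder's actual code:

```python
import sys
import threading
import time
import traceback

# Stand-in for a lingering background thread (e.g. a polling loop that
# was never stopped); purely illustrative.
def stuck_worker():
    time.sleep(60)

worker = threading.Thread(target=stuck_worker, daemon=True)
worker.start()
time.sleep(0.2)  # let the worker reach its sleep()

# Dump a Python-level stack for every live thread, similar to gdb's
# "thread apply all bt" but with Python frames instead of C frames.
lines = []
for thread_id, frame in sys._current_frames().items():
    lines.append("Thread %#x:\n" % thread_id)
    lines.extend(traceback.format_stack(frame))
report = "".join(lines)

print("stuck_worker" in report)  # the hung thread is visible in the dump
```

In a test run this could be hooked to a signal handler instead of pressing Ctrl-C, so the dump can be taken without killing the runner.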

> *From: *"Potter, Nathaniel" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage questions)" 
> 
> *Date: *Thursday, July 21, 2016 at 7:17 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" 
> 
> *Subject: *Re: [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Hi all,
> 
> I’m not totally sure that this is the same issue, but lately I’ve seen the 
> gate 
> tests fail while hanging at this point [1], but they say ‘ok’ rather than 
> ‘inprogress’. Has anyone else come across this? It only happens sometimes, 
> and a 
> recheck can get past it. The full log is here [2].
> 
> [1] http://paste.openstack.org/show/539314/
> 
> [2] 
> http://logs.openstack.org/90/341090/6/check/gate-cinder-python34-db/ea65de5/console.html
> 
> Thanks,
> 
> Nate
> 
> *From:*yang, xing [mailto:xing.y...@emc.com]
> *Sent:* Thursday, July 21, 2016 3:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions) 
> 
> *Subject:* Re: [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Hi Ivan,
> 
> Do you have any logs for the VMAX driver?  We'll take a look.
> 
> Thanks,
> 
> Xing
> 
> 
> 
> *From:*Ivan Kolodyazhny [e...@e0ne.info]
> *Sent:* Thursday, July 21, 2016 4:44 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Thank you Xing,
> 
> The issue is related both to VNX and VMAX EMC drivers
> 
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> 
> On Thu, Jul 21, 2016 at 11:00 PM, yang, xing  > wrote:
> 
> Hi Ivan,
> 
> Thanks for sending this out.  Regarding the issue in the EMC VNX driver 
> unit
> tests, it is tracked by this bug
> https://bugs.launchpad.net/cinder/+bug/1578986. The driver was recently
> refactored so this is probably a new issue introduced by the refactor. 
> We are investigating this issue.
> 
> Thanks,
> 
> Xing
> 
> 
> 
> 
> *From:*Ivan Kolodyazhny [e...@e0ne.info ]
> *Sent:* Thursday, July 21, 2016 1:02 PM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [cinder] Volume Drivers unit tests
> 
> Hi team,
> 
> First of all, I would like to apologize if my mail is too emotional. I
> spent too much time trying to fix this and failed.
> 
> TL;DR;
> 
> What I want to say is: "Let's spend some time to make our tests better and
> fix all issues". Patch [1] is still unstable. Unit tests can pass or fail
> in a random order. Also, I've disabled some tests to pass CI.
> 
> Long version:
> 
> While I was working on the patch "Move drivers unit tests to
> unit.volume.drivers directory" [1], I found a lot of issues with our unit
> tests :(. Not all of them are fixed yet, so that patch is still in progress.
> 
> What did I find, and what do we have to fix:
> 
> 1) Execution time [2]. I don't want to argue over what counts as a unit
> test, but 2-4 seconds per test should be unacceptable, IMO.
> 
> 2) Execution order. Seriously, did you know that our tests will fail or
> hang if the execution order changes? Even if one test for driver A fails,
> some tests for driver B will fail too.
> 
> 3) Lack of mocking. It's the root cause of #2. We don't mock sleeps and
> event loops right, and we don't mock RPC calls well either [3]. We don't
> have a 'cinder.openstack.common.rpc.impl_fake' module in the Cinder tree.
> 
> In some drivers, we use oslo_service.loopingcall.FixedIntervalLoopingCall
> [4]. We've got a ZeroIntervalLoopingCall [5] class in Cinder. Do we use it
> everywhere, or mock FixedIntervalLoopingCall right? I don't think so: I
> hacked oslo_service in my env to raise an exception if interval > 0, and
> 297 tests failed. That means our tests really sleep. We have to get rid of
> this. TBH, not only volume driver unit tests failed; e.g. some API unit
> tests failed too.
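The mocking gap described above is straightforward to close in most cases. A minimal sketch — `wait_until_ready` is a hypothetical helper, not a real Cinder driver — of patching out the sleep so a poll-and-retry loop runs instantly:

```python
import time
from unittest import mock

# Hypothetical driver helper that polls a backend until it is ready,
# sleeping between attempts -- the pattern that makes unit tests slow
# when the sleep is not mocked out.
def wait_until_ready(check, interval=2, retries=3):
    for _ in range(retries):
        if check():
            return True
        time.sleep(interval)
    return False

# Patch time.sleep so the loop completes instantly in a test.
with mock.patch("time.sleep") as mock_sleep:
    results = iter([False, False, True])
    ok = wait_until_ready(lambda: next(results))

print(ok, mock_sleep.call_count)  # -> True 2
```

Mocking FixedIntervalLoopingCall itself (or substituting ZeroIntervalLoopingCall) follows the same shape: patch the symbol at the point where the driver imports it.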
> 
> 4) Due to #3, sometimes unit tests hang even on the master branch with a
> minor
> 

Re: [openstack-dev] [Congress] Congress horizon plugin - congressclient/congress API auth issue - help

2016-07-22 Thread Aimee Ukasick
Thanks Anusha! I will retest this today. I guess I need to learn more
about Horizon as well - thanks for pointing me in the right direction.

aimee



On Fri, Jul 22, 2016 at 6:30 AM, Anusha Ramineni  wrote:
> Hi Aimee,
>
> I think devstack by default configured horizon to use v3 .
> For V2 authentication, from the logs , auth_url doesn't seem to be set
> explicitly to v2 auth_url .
>
> I have always set explicit v2 auth which worked fine.
> For eg:- auth_url = 'http://:5000/v2.0' , for V2 authentication
>
> I have raised a patch, to take the auth_url from horizon settings instead of
> from request.
> https://review.openstack.org/#/c/345828/1
>
> Please set an explicit v2 auth_url as mentioned above in OPENSTACK_KEYSTONE_URL
> in /openstack_dashboard/local/local_settings.py and restart apache2
> server . Then v2 authentication should go through fine.
>
> For v3 , need to add relevant code for v3 authentication in contrib/horizon
> as presently it is hardcoded to use only v2. but yes, the code from plugin
> model patch is still a WIP , so doesn't work for v3 authentication I guess
> I'll have a look at it and let you know .
>
>
> Best Regards,
> Anusha
>
> On 21 July 2016 at 21:56, Tim Hinrichs  wrote:
>>
>> So clearly an authentication problem then.
>>
>> Anusha, do you have any ideas?  (Aimee, I think Anusha has worked with
>> Keystone authentication most recently, so she's your best bet.)
>>
>> Tim
>>
>> On Thu, Jul 21, 2016 at 8:59 AM Aimee Ukasick
>>  wrote:
>>>
>>> The  Policy/Data Sources web page throws the same errors. I am
>>> planning to recheck direct API calls using v3 auth today or tomorrow.
>>>
>>> aimee
>>>
>>> On Thu, Jul 21, 2016 at 10:49 AM, Tim Hinrichs  wrote:
>>> > Hi Aimee,
>>> >
>>> > Do the other APIs work?  That is, is it a general problem
>>> > authenticating, or
>>> > is the problem limited to list_policies?
>>> >
>>> > Tim
>>> >
>>> > On Wed, Jul 20, 2016 at 3:54 PM Aimee Ukasick
>>> > 
>>> > wrote:
>>> >>
>>> >> Hi all,
>>> >>
>>> >> I've been working on Policy UI (Horizon): Unable to get policies
>>> >> list (devstack) (https://bugs.launchpad.net/congress/+bug/1602837)
>>> >> for the past 3 days. Anusha is correct - it's an authentication
>>> >> problem, but I have not been able to fix it.
>>> >>
>>> >> I grabbed the relevant code in congress.py from Anusha's horizon
>>> >> plugin model patchset (https://review.openstack.org/#/c/305063/3) and
>>> >> added try/catch blocks, logging statements (with error because I
>>> >> haven't figured out how to set the horizon log level).
>>> >>
>>> >>
>>> >> I am testing the code on devstack, which I cloned on 19 July 2016.
>>> >>
>>> >> With both v2 and v3 auth, congressclient.v1.client is created.
>>> >> The failure happens trying to call
>>> >> congressclient.v1.client.Client.list_policies().
>>> >> When using v2 auth, the error message is "Unable to get policies list:
>>> >> The resource could not be found"
>>> >> When using v3 auth, the error message is "Cannot authorize API client"
>>> >>
>>> >> I am assuming that congressclient.v1.client.Client is
>>> >>
>>> >>
>>> >> https://github.com/openstack/python-congressclient/blob/master/congressclient/v1/client.py
>>> >> and that client.list_policy() calls list_policy()in the
>>> >> python-congressclient
>>> >> which in turn calls the Congress API. Is this correct?
>>> >>
>>> >> Any ideas why with v3 auth, the python-congressclient cannot authorize
>>> >> the
>>> >> call to the API?
>>> >>
>>> >> I looked at other horizon plugin models (ceilometer, neutron, nova,
>>> >> cerberus, cloudkitty, trove, designate, manila) to see how they
>>> >> created
>>> >> the client. While the code to create a client is not identical,
>>> >> it is vastly different from the code to create a client
>>> >> in contrib/horizon/congress.py.
>>> >>
>>> >> Thanks in advance for any pointers.
>>> >>
>>> >> aimee
>>> >>
>>> >> Aimee Ukasick (aimeeu)
>>> >>
>>> >> v2 log:
>>> >> 2016-07-20 22:13:56.501455
>>> >> 2016-07-20 22:14:30.238233 * view.get_data calling policies =
>>> >> congress.policies_list(self.request) *
>>> >> 2016-07-20 22:14:30.238318 * self.request.path=
>>> >> /dashboard/admin/policies/
>>> >> 2016-07-20 22:14:30.238352 * congress.policies_list(request)
>>> >> BEGIN*
>>> >> 2016-07-20 22:14:30.238376 * calling client =
>>> >> congressclient(request)*
>>> >> 2016-07-20 22:14:30.238399 * congress.congressclient BEGIN*
>>> >> 2016-07-20 22:14:30.238454 * auth_url=
>>> >> http://192.168.56.103/identity
>>> >> 2016-07-20 22:14:30.238479 * calling get_keystone_session *
>>> >> 2016-07-20 22:14:30.238505 * congress.get_keystone_session BEGIN
>>> >> auth_url *http://192.168.56.103/identity
>>> >> 2016-07-20 22:14:30.238554 * path= /identity
>>> >> 2016-07-20 22:14:30.238578 * using V2 plugin to authenticate*
>>> >> 2016-07-20 

Re: [openstack-dev] [Neutron][networking-ovn][networking-odl] Syncing neutron DB and OVN DB

2016-07-22 Thread Numan Siddique
Thanks for the comments Amitabha.
Please see comments inline

On Fri, Jul 22, 2016 at 5:50 AM, Amitabha Biswas  wrote:

> Hi Numan,
>
> Thanks for the proposal. We have also been thinking about this use-case.
>
> If I’m reading this accurately (and I may not be), it seems that the
> proposal is to not have any OVN NB (CUD) operations (R operations outside
> the scope) done by the api_worker threads but rather by a new journal
> thread.
>
>
Correct.
​


> If this is indeed the case, I’d like to consider the scenario when there
> any N neutron nodes, each node with M worker threads. The journal thread at
> the each node contain list of pending operations. Could there be (sequence)
> dependency in the pending operations amongst each the journal threads in
> the nodes that prevents them from getting applied (for e.g.
> Logical_Router_Port and Logical_Switch_Port inter-dependency), because we
> are returning success on neutron operations that have still not been
> committed to the NB DB.
>
>
It's a valid scenario, and we should design for it properly in case we take
this approach.

​

> Couple of clarifications and thoughts below.
>
> Thanks
> Amitabha 
>
> On Jul 13, 2016, at 1:20 AM, Numan Siddique  wrote:
>
> Adding the proper tags in subject
>
> On Wed, Jul 13, 2016 at 1:22 PM, Numan Siddique 
> wrote:
>
>> Hi Neutrinos,
>>
>> Presently, In the OVN ML2 driver we have 2 ways to sync neutron DB and
>> OVN DB
>>  - At neutron-server startup, OVN ML2 driver syncs the neutron DB and OVN
>> DB if sync mode is set to repair.
>>  - Admin can run the "neutron-ovn-db-sync-util" to sync the DBs.
>>
>> Recently, in the v2 of networking-odl ML2 driver (Please see (1) below
>> which has more details). (ODL folks please correct me if I am wrong here)
>>
>>   - a journal thread is created which does the CRUD operations of neutron
>> resources asynchronously (i.e it sends the REST APIs to the ODL controller).
>>
>
> Would this be the equivalent of making OVSDB transactions to the OVN NB DB?
>

​Correct.
​


>
>   - a maintenance thread is created which does some cleanup periodically
>> and at startup does full sync if it detects ODL controller cold reboot.
>>
>>
>> Few question I have
>>  - can OVN ML2 driver take same or similar approach. Are there any
>> advantages in taking this approach ? One advantage is neutron resources can
>> be created/updated/deleted even if the OVN ML2 driver has lost connection
>> to the ovsdb-server. The journal thread would eventually sync these
>> resources in the OVN DB. I would like to know the communities thoughts on
>> this.
>>
>
> If we can make it work, it would indeed be a huge plus for system wide
> upgrades and some corner cases in the code (ACL specifically), where the
> post_commit relies on all transactions to be successful and doesn’t revert
> the neutron db if something fails.
>




>
>
>>  - Are there are other ML2 drivers which might have to handle the DB
>> sync's (cases where the other controllers also maintain their own DBs) and
>> how they are handling it ?
>>
>>  - Can a common approach be taken to sync the neutron DB and controller
>> DBs ?
>>
>>
>>
>> ---
>>
>> (1)
>> Sync threads created by networking-odl ML2 driver
>> --
>> ODL ML2 driver creates 2 threads (threading.Thread module) at init
>>  - Journal thread
>>  - Maintenance thread
>>
>> Journal thread
>> 
>> The journal module creates a new journal table by name
>> “opendaylightjournal”  -
>> https://github.com/openstack/networking-odl/blob/master/networking_odl/db/models.py#L23
>>
>> Journal thread will be in loop waiting for the sync event from the ODL
>> ML2 driver.
>>
>>  - ODL ML2 driver resource (network, subnet, port) precommit functions
>> when called by the ML2 plugin adds an entry in the “opendaylightjournal”
>> table with the resource data and sets the journal operation state for this
>> entry to “PENDING”.
>>  - The corresponding resource postcommit function of the ODL ML2 plugin
>> when called, sets the sync event flag.
>>  - A timer is also created which sets the sync event flag when it expires
>> (the default value is 10 seconds).
>>  - Journal thread wakes up, looks into the “opendaylightjournal” table
>> with the entries with state “pending” and runs the CRUD operation on those
>> resources in the ODL DB. Once done, it sets the state to “completed”.
>>
>> Maintenance thread
>> --
>> Maintenance thread does 3 operations
>>  - JournalCleanup - Delete completed rows from journal table
>> “opendaylightjournal”.
>>  - CleanupProcessing - Mark orphaned processing rows to pending.
>>  - Full sync - Re-sync when detecting an ODL "cold reboot”.
>>
>>
>>
>> Thanks
>> Numan
>>
>>
> 
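The journal pattern described in the quoted proposal can be sketched in a few lines. This is illustrative only — the names follow the description above, not networking-odl's actual code:

```python
import threading

journal = []                    # stands in for the "opendaylightjournal" table
journal_lock = threading.Lock()
sync_event = threading.Event()  # set by postcommit, or by a periodic timer

def record_pending(operation, resource):
    """Called from a resource precommit: persist the intended operation."""
    with journal_lock:
        journal.append({"op": operation, "res": resource, "state": "PENDING"})

def drain_once(apply_to_backend):
    """Apply every PENDING row to the backend, then mark it COMPLETED."""
    with journal_lock:
        pending = [row for row in journal if row["state"] == "PENDING"]
    for row in pending:
        apply_to_backend(row)   # e.g. a REST call to ODL, or an OVSDB txn
        row["state"] = "COMPLETED"

def journal_thread(apply_to_backend, stop):
    """Background loop: wake on the sync event or a 10-second timer."""
    while not stop.is_set():
        sync_event.wait(timeout=10)
        sync_event.clear()
        drain_once(apply_to_backend)
```

The ordering concern raised earlier in the thread shows up here directly: `drain_once` applies rows in insertion order on one node, but nothing orders rows across N nodes.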

[openstack-dev] [murano] PTL on vacation

2016-07-22 Thread Kirill Zaitsev
Hi team, I’d like to inform you that I will be on vacation for the next week
and a half, and would like to appoint some of my deputies =)


During this cycle I’m acting as a release liaison, so in my absence I would 
like Victor Ryzhenkin (freerunner) to act as one and be responsible for 
supervising our releases, if any.

As for IRC community meetings — I’m leaving Nikolay Starodubtsev (Nikolay_St)
and Valerii Kovalchuk (vakovalchuk) to chair the next two meetings. I also
suggest skipping the next meeting unless agenda items are added. Guys, please
don’t forget to send a message next week if you decide to follow this
suggestion.


As for myself — I’ll be obviously less active, but I should have some internet 
access and would still check email and answer questions on reviews, just at a 
slower rate. =)

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc



Re: [openstack-dev] [Congress] Congress horizon plugin - congressclient/congress API auth issue - help

2016-07-22 Thread Anusha Ramineni
Hi Aimee,

I think devstack by default configures horizon to use v3.
For v2 authentication, from the logs, auth_url doesn't seem to be set
explicitly to a v2 auth_url.

I have always set an explicit v2 auth_url, which worked fine.
For example: auth_url = 'http://:5000/v2.0', for v2 authentication.

I have raised a patch to take the auth_url from the horizon settings instead
of from the request.
https://review.openstack.org/#/c/345828/1

Please set an explicit v2 auth_url as mentioned above in
OPENSTACK_KEYSTONE_URL in /openstack_dashboard/local/local_settings.py and
restart the apache2 server. Then v2 authentication should go through fine.
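As a concrete sketch of that local_settings.py change — the address is the controller IP from the logs earlier in the thread and is only a placeholder; substitute your own:

```python
# local_settings.py fragment -- pin Horizon (and therefore the congress
# client it builds) to the Keystone v2.0 endpoint. "192.168.56.103" is a
# placeholder controller address.
OPENSTACK_API_VERSIONS = {"identity": 2.0}
OPENSTACK_KEYSTONE_URL = "http://192.168.56.103:5000/v2.0"
```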

For v3, we need to add the relevant code for v3 authentication in
contrib/horizon, as presently it is hardcoded to use only v2. But yes, the
code from the plugin model patch is still a WIP, so it doesn't work for v3
authentication, I guess. I'll have a look at it and let you know.


Best Regards,
Anusha

On 21 July 2016 at 21:56, Tim Hinrichs  wrote:

> So clearly an authentication problem then.
>
> Anusha, do you have any ideas?  (Aimee, I think Anusha has worked with
> Keystone authentication most recently, so she's your best bet.)
>
> Tim
>
> On Thu, Jul 21, 2016 at 8:59 AM Aimee Ukasick 
> wrote:
>
>> The  Policy/Data Sources web page throws the same errors. I am
>> planning to recheck direct API calls using v3 auth today or tomorrow.
>>
>> aimee
>>
>> On Thu, Jul 21, 2016 at 10:49 AM, Tim Hinrichs  wrote:
>> > Hi Aimee,
>> >
>> > Do the other APIs work?  That is, is it a general problem
>> authenticating, or
>> > is the problem limited to list_policies?
>> >
>> > Tim
>> >
>> > On Wed, Jul 20, 2016 at 3:54 PM Aimee Ukasick <
>> aimeeu.opensou...@gmail.com>
>> > wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I've been working on Policy UI (Horizon): Unable to get policies
>> >> list (devstack) (https://bugs.launchpad.net/congress/+bug/1602837)
>> >> for the past 3 days. Anusha is correct - it's an authentication
>> >> problem, but I have not been able to fix it.
>> >>
>> >> I grabbed the relevant code in congress.py from Anusha's horizon
>> >> plugin model patchset (https://review.openstack.org/#/c/305063/3) and
>> >> added try/catch blocks, logging statements (with error because I
>> >> haven't figured out how to set the horizon log level).
>> >>
>> >>
>> >> I am testing the code on devstack, which I cloned on 19 July 2016.
>> >>
>> >> With both v2 and v3 auth, congressclient.v1.client is created.
>> >> The failure happens trying to call
>> >> congressclient.v1.client.Client.list_policies().
>> >> When using v2 auth, the error message is "Unable to get policies list:
>> >> The resource could not be found"
>> >> When using v3 auth, the error message is "Cannot authorize API client"
>> >>
>> >> I am assuming that congressclient.v1.client.Client is
>> >>
>> >>
>> https://github.com/openstack/python-congressclient/blob/master/congressclient/v1/client.py
>> >> and that client.list_policy() calls list_policy()in the
>> >> python-congressclient
>> >> which in turn calls the Congress API. Is this correct?
>> >>
>> >> Any ideas why with v3 auth, the python-congressclient cannot authorize
>> the
>> >> call to the API?
>> >>
>> >> I looked at other horizon plugin models (ceilometer, neutron, nova,
>> >> cerberus, cloudkitty, trove, designate, manila) to see how they created
>> >> the client. While the code to create a client is not identical,
>> >> it is vastly different from the code to create a client
>> >> in contrib/horizon/congress.py.
>> >>
>> >> Thanks in advance for any pointers.
>> >>
>> >> aimee
>> >>
>> >> Aimee Ukasick (aimeeu)
>> >>
>> >> v2 log:
>> >> 2016-07-20 22:13:56.501455
>> >> 2016-07-20 22:14:30.238233 * view.get_data calling policies =
>> >> congress.policies_list(self.request) *
>> >> 2016-07-20 22:14:30.238318 * self.request.path=
>> >> /dashboard/admin/policies/
>> >> 2016-07-20 22:14:30.238352 * congress.policies_list(request)
>> >> BEGIN*
>> >> 2016-07-20 22:14:30.238376 * calling client =
>> >> congressclient(request)*
>> >> 2016-07-20 22:14:30.238399 * congress.congressclient BEGIN*
>> >> 2016-07-20 22:14:30.238454 * auth_url=
>> http://192.168.56.103/identity
>> >> 2016-07-20 22:14:30.238479 * calling get_keystone_session *
>> >> 2016-07-20 22:14:30.238505 * congress.get_keystone_session BEGIN
>> >> auth_url *http://192.168.56.103/identity
>> >> 2016-07-20 22:14:30.238554 * path= /identity
>> >> 2016-07-20 22:14:30.238578 * using V2 plugin to authenticate*
>> >> 2016-07-20 22:14:30.238630 * v2 auth.get_auth_state=
>> >> 2016-07-20 22:14:30.238656 None
>> >> 2016-07-20 22:14:30.238677 * finished using V2 plugin to
>> >> authenticate*
>> >> 2016-07-20 22:14:30.238698 * creating session with auth *
>> >> 2016-07-20 22:14:30.244407 * congress.get_keystone_session END*
>> >> 2016-07-20 22:14:30.244462 * regtion_name= RegionOne
>> >> 

Re: [openstack-dev] [ironic][neutron][nova] Sync port state changes.

2016-07-22 Thread Vasyl Saienko
Kevin, thanks for reply,

On Fri, Jul 22, 2016 at 11:50 AM, Kevin Benton  wrote:

> Hi,
>
> Once you solve the issue of getting the baremetal ports to transition to
> the ACTIVE state, a notification will automatically be emitted to Nova of
> 'network-vif-plugged' with the port ID. Will ironic not have access to that
> event via Nova?
>
> To solve issues of getting the baremetal ports to transition to the ACTIVE
state we should do the following:

   1. Use FLAT network instead of VXLAN for Ironic gate jobs [3].
   2. On Nova side set vnic_type to baremetal for Ironic hypervisor [0].
   3. On Neutron side, perform fake 'baremetal' port binding [2] in case of
   FLAT network.

We need to receive direct notifications from Neutron in Ironic, because
Ironic creates ports on the provisioning network on its own. Nova doesn't
know anything about provisioning ports.

If not, Ironic could develop a service plugin that just listens for port
> update events and relays them to Ironic.
>
>
I have already prepared a PoC [4] for Neutron that allows sending
notifications to Ironic on the port_update event.
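The service-plugin idea Kevin describes reduces to a small publish/subscribe relay. A toy sketch — the registry and names here are illustrative, not Neutron's actual callback API:

```python
# Minimal event relay: a listener subscribes to port after-update events
# and forwards ACTIVE transitions to Ironic. The registry is a toy
# stand-in for Neutron's callback machinery.
_subscribers = {}

def subscribe(event, callback):
    _subscribers.setdefault(event, []).append(callback)

def notify(event, payload):
    for callback in _subscribers.get(event, []):
        callback(payload)

forwarded = []

def relay_to_ironic(port):
    # Stand-in for the REST notification to Ironic's API.
    if port["status"] == "ACTIVE":
        forwarded.append(port["id"])

subscribe("port.after_update", relay_to_ironic)
notify("port.after_update", {"id": "port-1", "status": "ACTIVE"})
notify("port.after_update", {"id": "port-2", "status": "DOWN"})
print(forwarded)  # -> ['port-1']
```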

Reference:
[0] https://review.openstack.org/339143
[1] https://review.openstack.org/339129
[3] https://review.openstack.org/340695
[4] https://review.openstack.org/345211


> On Tue, Jul 12, 2016 at 4:07 AM, Vasyl Saienko 
> wrote:
>
>> Hello Community,
>>
>> I'm working to make Ironic be aware about  Neutron port state changes [0].
>> The issue consists of two parts:
>>
>>- Neutron ports for baremetal instances remain in DOWN state [1]. The
>>issue occurs because there is no mechanism driver that binds ports. To
>>solve it we need to create port with  vnic_type='baremetal' in Nova [2],
>>and bind in Neutron. New mechanism driver that supports baremetal 
>> vnic_type
>>is needed [3].
>>
>>- Sync Neutron events with Ironic. According to Neutron architecture
>>[4] mechanism drivers work synchronously. When the port is bound by ml2
>>mechanism driver it becomes ACTIVE. While updating dhcp information 
>> Neutron
>>uses dhcp agent, which is asynchronous call. I'm confused here, since
>>ACTIVE port status doesn't mean that it operates (dhcp agent may fail to
>>setup port). The issue was solved by [5]. So starting from [5] when ML2
>>uses new port status update flow, port update is always asynchronous
>>operation. And the most efficient way is to implement callback mechanism
>>between Neutron and Ironic is like it's done for Neutron/Nova.
>>
>>
>> Neutron/Nova/Ironic teams let me know your thoughts on this.
>>
>> Reference:
>> [0] https://bugs.launchpad.net/ironic/+bug/1304673
>> [1] https://bugs.launchpad.net/neutron/+bug/1599836
>> [2] https://review.openstack.org/339143
>> [3] https://review.openstack.org/#/c/339129/
>> [4]
>> https://www.packtpub.com/sites/default/files/Article-Images/B04751_01.png
>> [5]
>> https://github.com/openstack/neutron/commit/b672c26cb42ad3d9a17ed049b506b5622601e891
>>
>>
>>
>
>
>


Re: [openstack-dev] [kolla][vote] Applying for stable-follows tag

2016-07-22 Thread Kwasniewska, Alicja
+1 too

From: Mauricio Lima [mailto:mauricioli...@gmail.com]
Sent: Tuesday, July 19, 2016 5:29 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kolla][vote] Applying for stable-follows tag

+1

2016-07-19 12:23 GMT-03:00 Vikram Hosakote (vhosakot) 
>:
+1 sure.

Regards,
Vikram Hosakote
IRC:  vhosakot

From: Michał Jastrzębski >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, July 19, 2016 at 9:20 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla][vote] Applying for stable-follows tag

+1 ofc

On 19 July 2016 at 06:02, Ryan Hallisey 
> wrote:
+1

-Ryan

- Original Message -
From: "Jeffrey Zhang" >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Sent: Monday, July 18, 2016 9:16:09 PM
Subject: Re: [openstack-dev] [kolla][vote] Applying for stable-follows tag

+1 to apply
I'd like to be the volunteer.

On Mon, Jul 18, 2016 at 9:04 PM, Swapnil Kulkarni (coolsvap)
> wrote:
On Mon, Jul 18, 2016 at 6:23 PM, Paul Bourke 
> wrote:
Hi Steve,

+1 to applying. I'll volunteer for the backport team also.

-Paul


On 18/07/16 13:07, Steven Dake (stdake) wrote:

Hey Koalians,

I'd like us to consider applying  for the stable follows policy tag.
   Full details are here:


http://github.com/openstack/governance/blob/master/reference/tags/stable_follows-policy.rst

Because of the magic work we did to make liberty functional, it is
possible that we may not be able to apply for this tag until Liberty
falls into EOL.  Still I personally believe intent matters most, and our
intent has always been for these to be stable low-rate-of-change
no-feature-backport branches.  There are some exceptions I think we
would fit under for the Liberty case, so I think it is worth a shot.

I'd need 2-4 people to commit to joining the stable backport team for
Kolla reviews specifically.  These folks would be the only folks that
could ACK patches in the stable branch maintenance queue.  Anyone could
continue to submit backport patches as they desire.

I'll leave voting open for 1 week or until there I a majority (6 core
reviewers) or until there is a unanimous vote.  If there is not, then we
won't apply.  The deadline for this vote is July 25th.

Thanks!
-steve







+1 to apply for stable follows policy.
I would like to volunteer for the backport team.




--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me




Re: [openstack-dev] [horizon] Midcycle Summary

2016-07-22 Thread Matthias Runge
On 22/07/16 09:38, Rob Cresswell wrote:
> We didn't discuss it explicitly, but I don't believe the decision has
> changed. We can remove it, but last I checked the patches in line to
> handle testing after deprecation still needed work. If they've been
> updated etc. I can look again.
> 
> Rob
> 
Patches I had up are now quite outdated. I'd highly appreciate someone
else taking over this.

That being said, I'm not sure we should just remove it right now.

That would leave open all the questions about testing disabled features,
etc., which have been delaying this patch.

-- 
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham,
Michael O'Neill, Eric Shander



[openstack-dev] [daisycloud-core] IRC Meeting Log 20160722

2016-07-22 Thread hu . zhijiang
Hi Team,

About the daisy4nfv things we did not have time to discuss, I want to add
that in the future, when we move to work with the OPNFV world, we will still
use this channel to discuss not only daisycloud but also daisy4nfv. I
personally do not want to maintain two meetings by myself. But your opinions
are more than welcome.




20160722 Agenda
===
1) roll call
2) Agenda bashing
3) Approved Wei (kong.w...@zte.com.cn) as daisycloud core reviewer
4) daisycloud status update
5) daisy4nfv status update and disscussion in daisycloud channel


Log
===

? daisycloud-core project weekly meeting 
huzhj
Hello 
zhouya
hi 
→ lu has joined 
? lu is now known as Guest25493 
← Guest25493 has left 
huzhj
Yao is on the sick leave, so may be we can not reach her 
zhouya
ok 
huzhj
Let's wait for more people come online 
→ luyao has joined 
huzhj
OK 
Let's start 
→ King has joined 
← King has quit (Client Quit) 
huzhj
Today's agenda 
1) roll call 
2) Agenda bashing 
3) Approved Wei (kong.w...@zte.com.cn) as daisycloud core reviewer 
4) daisycloud status update 
5) daisy4nfv status update 
6) daisy4nfv disscussion in daisycloud channel 
1) roll call 
zhouya
o/ 
→ Sun has joined 
luyao
o/ 
huzhj
roll call is a convention that everyone write them name with a "+" as the 
preffix 
Sun
O/ 
huzhj
Next topic, Agenda bashing 
zhouya
+zhouya 
Sun
+sunjing 
luyao
+luyao 
huzhj
Agenda bashing is a way for modifing the agenda right before starting 
meeting 
so if you have anything to disscuss please feel free to ask me to add to 
the agenda list 
If nothing to modify, we will go to next topic 
Sun
Ok with me 
huzhj
Ok, next topic Approved Wei (kong.w...@zte.com.cn) as daisycloud core 
reviewer 
luyao
ok 
huzhj
As the result of the voting on the mailing list, Wei has successfully been 
approved as our core reviewer 
luyao
congratulation 
huzhj
Hope he can do more great jobs for the project in future :) 
congratulations! 
But he is not online ~~ 
Sun
congratulations 
zhouya
he is busy with tempest 
← luyao has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC 
client) 
zhouya
with my joy respectfully 
huzhj
Yes, our tempest is one the way, Wei will make it work for us soon. 
→ luyao has joined 
→ kongwei has joined 
kongwei
hi 
huzhj
We are still open for discuss about who will be and how he/she can be the 
next core reviewer 
Hi Wei 
We are talking about you. thanks for the great work on the tempest things 
Next topic 
4) daisycloud status update 
Same old way, do it in alphabet order 
huzhj
I wil be the first, keepalived/haproxy problem during kolla deploying has 
been solved 
the problem is that there are more than one clusters on our shared network 
using the same virtual_router_id in /etc/keepalived/keepalived.conf 
the virtual_router_id is choosed by kolla script which is a number 
beetween 0 and 255 
luyao
good 
huzhj
it is easy to conflicted , so the resolution is for us to not to use 
shared network to deploy many clusters at the same time. 
zhouya
OK 
huzhj
Next thing is that i have been cut off the size of kolla images tarball by 
simply build what we really needs 
Sun
the globals.yml give a default value 51 
huzhj
@Sun , good catch! 
luyao
we can set diff value by daisy 
zhouya
so modify globals.yml? 
huzhj
Yes, that will be a greate help for our development atleast 
Sun
maybe wo can set diff value according ip of vm 
huzhj
Due to the time limit let's talk about it in more details offline 
The current list of images to build is as follows: 
luyao
ok 
huzhj
192.168.0.48:4000/kollaglue/centos-binary-neutron-openvswitch-agent 
192.168.0.48:4000/kollaglue/centos-binary-neutron-server 
192.168.0.48:4000/kollaglue/centos-binary-neutron-metadata-agent 
192.168.0.48:4000/kollaglue/centos-binary-neutron-dhcp-agent 
192.168.0.48:4000/kollaglue/centos-binary-neutron-l3-agent 
192.168.0.48:4000/kollaglue/centos-binary-nova-libvirt 
192.168.0.48:4000/kollaglue/centos-binary-nova-compute 
192.168.0.48:4000/kollaglue/centos-binary-nova-scheduler 
192.168.0.48:4000/kollaglue/centos-binary-nova-conductor 
192.168.0.48:4000/kollaglue/centos-binary-nova-novncproxy 
192.168.0.48:4000/kollaglue/centos-binary-nova-consoleauth 
192.168.0.48:4000/kollaglue/centos-binary-nova-ssh 
192.168.0.48:4000/kollaglue/centos-binary-nova-api 
192.168.0.48:4000/kollaglue/centos-binary-heat-api 
192.168.0.48:4000/kollaglue/centos-binary-heat-api-cfn 
192.168.0.48:4000/kollaglue/centos-binary-heat-engine 
192.168.0.48:4000/kollaglue/centos-binary-glance-api 
192.168.0.48:4000/kollaglue/centos-binary-glance-registry 
192.168.0.48:4000/kollaglue/centos-binary-horizon 
192.168.0.48:4000/kollaglue/centos-binary-keystone 
192.168.0.48:4000/kollaglue/centos-binary-openvswitch-db-server 
192.168.0.48:4000/kollaglue/centos-binary-openvswitch-vswitchd 
192.168.0.48:4000/kollaglue/centos-binary-heka 
192.168.0.48:4000/kollaglue/centos-binary-kolla-toolbox 
192.168.0.48:4000/kollaglue/centos-binary-rabbitmq 
192.168.0.48:4000/kollag

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-07-22 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Hongbin,

This is really a good idea, because it will mitigate much of the work of 
implementing loops and conditional branches in Heat ResourceGroup. But as Kevin 
pointed out in the mail below, it needs a careful upgrade/migration path.

Meanwhile, as for the blueprint of supporting multiple flavor 
(https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor), we 
have implemented a Proof of Concept/prototype based on the current 
ResourceGroup method. (see the design spec 
https://review.openstack.org/#/c/345745/ for details.)

I am wondering whether we can continue with the implementation of supporting 
multiple flavors based on the current ResourceGroup for now? Or do you have any 
plan for when to implement the "manually managing the bay nodes" approach?

Regards,
Gary

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Tuesday, May 17, 2016 3:01 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Sounds ok, but there needs to be a careful upgrade/migration path, where both 
are supported until after all pods are migrated out of nodes that are in the 
resourcegroup.

Thanks,
Kevin


From: Hongbin Lu
Sent: Sunday, May 15, 2016 3:49:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discuss the idea of manually managing the bay 
nodes
Hi all,

This is a continued discussion from the design summit. For recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works but it 
is infeasible to manage the heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different set of flavors [2]. For the request features above, 
ResourceGroup won't work very well.

The proposal is to remove the usage of ResourceGroup and manually create a Heat 
stack for each bay node. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 ...
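The nodegroup concept behind that hypothetical CLI could be modeled roughly like this — a sketch only, with made-up names (`NodeGroup`, the stack naming scheme) that are not part of any merged Magnum API:

```python
# Hedged sketch: one Heat stack per node, grouped by flavor/AZ, instead
# of a single ResourceGroup. Names and naming scheme are hypothetical.
from dataclasses import dataclass

@dataclass
class NodeGroup:
    bay: str
    flavor: str
    count: int
    availability_zone: str

    def stack_names(self):
        # each node in the group gets its own Heat stack
        return [f"{self.bay}-{self.flavor}-{i}" for i in range(self.count)]

groups = [
    NodeGroup("XXX", "m1.small", 2, "us-east-1"),
    NodeGroup("XXX", "m1.medium", 3, "us-east-2"),
]
for g in groups:
    print(g.stack_names())
```

Heterogeneity (different flavors, different AZs) then falls out naturally, since each group carries its own parameters.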

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [tooz] DLM benchmark results

2016-07-22 Thread John Schwarz
Yes, the backends were deployed in cluster configuration (the
configurations are available in the appendix).
I'll make a change to the doc to make sure this is reflected properly.

On Fri, Jul 22, 2016 at 11:29 AM, Kevin Benton  wrote:
> Were the backends (zookeeper, etcd) deployed in a cluster configuration? I
> can't quite tell from the doc.
>
> On Fri, Jul 22, 2016 at 12:58 AM, John Schwarz  wrote:
>>
>> You're right Joshua.
>>
>> Tooz HEAD points to 0f4e1198fdcbd6a29d77c67d105d201ed0fbd9e0.
>>
>> With regards to etcd and zookeeper's versions, they are:
>> zookeeper-3.4.5+28-1.cdh4.7.1.p0.13.el6.x86_64,
>> etcd-2.2.5-2.el7.0.1.x86_64.
>>
>> John.
>>
>> On Thu, Jul 21, 2016 at 8:14 PM, Joshua Harlow 
>> wrote:
>> > Hi John,
>> >
>> > Thanks for gathering this info,
>> >
>> > Do you have the versions of the backend that were used here
>> > (particularly
>> > relevant for etcd which has a new release pretty frequently).
>> >
>> > It'd be useful to capture that info also :)
>> >
>> > John Schwarz wrote:
>> >>
>> >> Hi everyone,
>> >>
>> >> Following [1], a few of us sat down during the last day of the Austin
>> >> Summit and discussed the possibility of adding formal support for
>> >> Tooz, specifically for the locking mechanism it provides. The
>> >> conclusion we reached was that benchmarks should be done to show if
>> >> and how Tooz affects the normal operation of Neutron (i.e. if locking
>> >> a resource using Zookeeper takes 3 seconds, it's not worthwhile at
>> >> all).
>> >>
>> >> We've finally finished the benchmarks and they are available at [2].
>> >> They test a specific case: when creating an HA router a lock-free
>> >> algorithm is used to assign a vrid to a router (this is later used for
>> >> keepalived), and the benchmark specifically checks the effects of
>> >> locking that function with either Zookeeper or Etcd, using the no-Tooz
>> >> case as a baseline. The locking was checked in 2 different ways - one
>> >> which presents no contention (acquire() always succeeds immediately)
>> >> and one which presents contentions (acquire() may block until a
>> >> similar process for the invoking tenant is complete).
>> >>
>> >> The benchmarks show that while using Tooz does raise the cost of an
>> >> operation, the effects are not as bad as we initially feared. In the
>> >> simple, single simultaneous request, using Zookeeper raised the
>> >> average time it took to create a router by 1.5% (from 11.811 to 11.988
>> >> seconds). On the more-realistic case of 6 simultaneous requests,
>> >> Zookeeper raised the cost by 3.74% (from 16.533 to 17.152 seconds).
>> >>
>> >> It is important to note that the setup itself was overloaded - it was
>> >> built on a single baremetal hosting 5 VMs (4 of which were
>> >> controllers) and thus we were unable to go further - for example, 10
>> >> concurrent requests overloaded the server and caused some race
>> >> conditions to appear in the L3 scheduler (bugs will be opened soon),
>> >> so for this reason we haven't tested heavier samples and limited
>> >> ourselves to 6 simultaneous requests.
>> >>
>> >> Also important to note that some kind of race condition was noticed in
>> >> tooz's etcd driver. We've discussed this with the tooz devs and
>> >> provided a patch that is supposed to fix them [3].
>> >> Lastly, races in the L3 HA Scheduler were found and we are yet to dig
>> >> into them and find out their cause - bugs will be opened for these as
>> >> well.
>> >>
>> >> I've opened the summary [2] for comments so you're welcome to open a
>> >> discussion about the results both in the ML and on the doc itself.
>> >>
>> >> (CC to all those who attended the Austin Summit meeting and other
>> >> interested parties)
>> >> Happy locking,
>> >>
>> >> [1]:
>> >>
>> >> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093199.html
>> >> [2]:
>> >>
>> >> https://docs.google.com/document/d/1jdI8gkQKBE0G9koR0nLiW02d5rwyWv_-gAp7yavt4w8
>> >> [3]: https://review.openstack.org/#/c/342096/
>> >>
>> >> --
>> >> John Schwarz,
>> >> Senior Software Engineer,
>> >> Red Hat.
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> John Schwarz,
>> Senior Software Engineer,
>> Red Hat.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [neutron] [tooz] DLM benchmark results

2016-07-22 Thread Kevin Benton
Were the backends (zookeeper, etcd) deployed in a cluster configuration? I
can't quite tell from the doc.

On Fri, Jul 22, 2016 at 12:58 AM, John Schwarz  wrote:

> You're right Joshua.
>
> Tooz HEAD points to 0f4e1198fdcbd6a29d77c67d105d201ed0fbd9e0.
>
> With regards to etcd and zookeeper's versions, they are:
> zookeeper-3.4.5+28-1.cdh4.7.1.p0.13.el6.x86_64,
> etcd-2.2.5-2.el7.0.1.x86_64.
>
> John.
>
> On Thu, Jul 21, 2016 at 8:14 PM, Joshua Harlow 
> wrote:
> > Hi John,
> >
> > Thanks for gathering this info,
> >
> > Do you have the versions of the backend that were used here (particularly
> > relevant for etcd which has a new release pretty frequently).
> >
> > It'd be useful to capture that info also :)
> >
> > John Schwarz wrote:
> >>
> >> Hi everyone,
> >>
> >> Following [1], a few of us sat down during the last day of the Austin
> >> Summit and discussed the possibility of adding formal support for
> >> Tooz, specifically for the locking mechanism it provides. The
> >> conclusion we reached was that benchmarks should be done to show if
> >> and how Tooz affects the normal operation of Neutron (i.e. if locking
> >> a resource using Zookeeper takes 3 seconds, it's not worthwhile at
> >> all).
> >>
> >> We've finally finished the benchmarks and they are available at [2].
> >> They test a specific case: when creating an HA router a lock-free
> >> algorithm is used to assign a vrid to a router (this is later used for
> >> keepalived), and the benchmark specifically checks the effects of
> >> locking that function with either Zookeeper or Etcd, using the no-Tooz
> >> case as a baseline. The locking was checked in 2 different ways - one
> >> which presents no contention (acquire() always succeeds immediately)
> >> and one which presents contentions (acquire() may block until a
> >> similar process for the invoking tenant is complete).
> >>
> >> The benchmarks show that while using Tooz does raise the cost of an
> >> operation, the effects are not as bad as we initially feared. In the
> >> simple, single simultaneous request, using Zookeeper raised the
> >> average time it took to create a router by 1.5% (from 11.811 to 11.988
> >> seconds). On the more-realistic case of 6 simultaneous requests,
> >> Zookeeper raised the cost by 3.74% (from 16.533 to 17.152 seconds).
> >>
> >> It is important to note that the setup itself was overloaded - it was
> >> built on a single baremetal hosting 5 VMs (4 of which were
> >> controllers) and thus we were unable to go further - for example, 10
> >> concurrent requests overloaded the server and caused some race
> >> conditions to appear in the L3 scheduler (bugs will be opened soon),
> >> so for this reason we haven't tested heavier samples and limited
> >> ourselves to 6 simultaneous requests.
> >>
> >> Also important to note that some kind of race condition was noticed in
> >> tooz's etcd driver. We've discussed this with the tooz devs and
> >> provided a patch that is supposed to fix them [3].
> >> Lastly, races in the L3 HA Scheduler were found and we are yet to dig
> >> into them and find out their cause - bugs will be opened for these as
> >> well.
> >>
> >> I've opened the summary [2] for comments so you're welcome to open a
> >> discussion about the results both in the ML and on the doc itself.
> >>
> >> (CC to all those who attended the Austin Summit meeting and other
> >> interested parties)
> >> Happy locking,
> >>
> >> [1]:
> >>
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093199.html
> >> [2]:
> >>
> https://docs.google.com/document/d/1jdI8gkQKBE0G9koR0nLiW02d5rwyWv_-gAp7yavt4w8
> >> [3]: https://review.openstack.org/#/c/342096/
> >>
> >> --
> >> John Schwarz,
> >> Senior Software Engineer,
> >> Red Hat.
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> John Schwarz,
> Senior Software Engineer,
> Red Hat.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [oslo.db] [CC neutron] CIDR overlap functionality and constraints

2016-07-22 Thread Kevin Benton
I think the one use case you missed is the bug this was being developed to
fix: https://review.openstack.org/#/c/314054/

Currently we check for overlapping subnets on the same network in a lookup
before creating the subnet, so two requests can race and get overlapping
subnets committed to the database.

However, if that's the only use case and people aren't happy with the
complexity, we can solve this particular bug by doing a compare and swap
operation on some network scoped value (at the cost of all subnet creates
on the same network becoming serialized via conflicts and retries).
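The racy application-side check described above can be illustrated with the stdlib `ipaddress` module — two concurrent requests can both pass a check like this before either commits, which is exactly what an in-database constraint would prevent (the subnet values are made up):

```python
# Hedged sketch of the select-then-insert overlap check: both halves of
# a race can see 'conflict == False' and then insert overlapping CIDRs.
import ipaddress

existing = [ipaddress.ip_network("10.0.0.0/24")]     # subnets already committed
candidate = ipaddress.ip_network("10.0.0.128/25")    # incoming request

conflict = any(candidate.overlaps(net) for net in existing)
print(conflict)  # → True
```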

On Thu, Jul 21, 2016 at 11:43 AM, Carl Baldwin  wrote:

> On Tue, Jul 19, 2016 at 7:40 AM, Mike Bayer  wrote:
>
>> Oslo.db devs :
>>
>> We've developed a system by which CIDR math, such as that of detecting
>> region overlaps, can be performed on a MySQL database within queries [1]
>> [2].   This feature makes use of a custom stored function I helped to
>> produce which provides functionality similar to that which Postgresql
>> provides built in [3].   SQLite also supports a simple way to add CIDR math
>> functions as well which I've demonstrated at [4].
>>
>> Note that I use the term "function" and not "procedure" to stress that
>> this is not a "stored procedure" in the traditional sense of performing
>> complex business logic and persistence operations - this CIDR function
>> performs a calculation that is not at all specific to Openstack, and is
>> provided already by other databases as a built-in, and nothing else.
>>
>> The rationale for network-math logic being performed in the relational
>> database is so that SQL like SELECT, UPDATE, and INSERT can make use of
>> CIDR overlaps and other network math, such as to locate records that
>> correspond to network ranges in some way and of course to provide guards
>> and constraints, like that of concurrent UPDATE statements against
>> conflicting ranges as well as being able to produce INSERT constraints for
>> similar reasons.   Both MySQL and Postgresql have support for network
>> number functions, Postgresql just has a lot more.
>>
>> The INSERT constraint problem is also addressed by our patch and makes
>> use of an INSERT trigger on MySQL [5], but on Postgresql we use a GIST
>> index which has been shown to be more reliable under concurrent use than a
>> trigger on this backend [6].
>>
>> Not surprisingly, there's a lot of verbosity to both the production of
>> the MySQL CIDR overlap function and the corresponding trigger and
>> constraint, as well as the fact that to support the addition of these
>> functions / constraints at both the Alembic migration level as well as that
>> of the model level (because we would like metadata.create_all() to work),
>> they are currently stated twice within this patch within their full
>> verbosity.This is sub-optimal, and while the patch here makes use of an
>> Alembic recipe [7] to aid in the maintenance of special DDL constructs,
>> it's adding lots of burden to the Neutron codebase that could be better
>> stated elsewhere.
>>
>> The general verbosity and unfamiliarity of these well known SQL features
>> is understandably being met with trepidation.  I've identified that this
>> trepidation is likely rooted in the fact that unlike the many other
>> elaborate SQL features we use like ALTER TABLE, savepoints, subqueries,
>> SELECT FOR UPDATE, isolation levels, etc. etc., there is no warm and fuzzy
>> abstraction layer here that is both greatly reducing the amount of explicit
>> code needed to produce and upgrade the feature, as well as indicating that
>> "someone else" will fix this system when it has problems.
>>
>> Rather than hobbling the entire Openstack ecosystem to using a small
>> subset of what our relational databases are capable of, I'd like to propose
>> that preferably somewhere in oslo.db, or elsewhere, we begin providing the
>> foundation for the use of SQL features that are rooted in mechanisms such
>> as triggers and small use of stored functions, and more specifically begin
>> to produce network-math SQL features as the public API, starting with this
>> one.
>>
>
> Mike,
>
> This is pretty cool, I'll admit. I enjoyed looking through and learning
> about some modern capabilities in Postgres. The thing is, I can only think
> of one area in Neutron's API which would benefit from this. That is subnet
> pools. Specifically, these operations could benefit:
>
> - Create a subnet from a subnet pool.
> - It would be helpful for the database to check overlap with other
> subnets already allocated from the same pool. Now, we have to do a select
> to check for overlap and then an insert later. Obviously, we've had to work
> out a way to avoid races during the time between select and update.
>
> - Adding a subnet pool to an address scope or updating a subnet pool
> already under an address scope. These operations require that all of the
> various subnet pools not have any 

Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testingcore

2016-07-22 Thread Martin Hickey

+1



From:   Oleg Bondarev 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   22/07/2016 09:13
Subject:Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for
testing core



+1

On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley  wrote:
  +1

On Jul 21, 2016, at 5:13 PM, Kevin Benton  wrote:

+1

On Thu, Jul 21, 2016 at 2:41 PM, Carl Baldwin 
wrote:
 +1 from me

 On Thu, Jul 21, 2016 at 1:35 PM, Assaf Muller 
 wrote:
   As Neutron's so called testing lieutenant I would like to
   propose
   Jakub Libosvar to be a core in the testing area.

   Jakub has demonstrated his inherent interest in the testing area
   over
   the last few years, his reviews are consistently insightful and
   his
   numbers [1] are in line with others and I know will improve if
   given
   the responsibilities of a core reviewer. Jakub is deeply
   involved with
   the project's testing infrastructures and CI systems.

   As a reminder the expectation from cores is found here [2], and
   specifically for cores interesting in helping out shaping
   Neutron's
   testing story:

   * Guide community members to craft a testing strategy for
   features [3]
   * Ensure Neutron's testing infrastructures are sufficiently
   sophisticated to achieve the above.
   * Provide leadership when determining testing Do's & Don'ts [4].
   What
   makes for an effective test?
   * Ensure the gate stays consistently green

   And more tactically we're looking at finishing the
   Tempest/Neutron
   tests dedup [5] and to provide visual graphing for historical
   control
   and data plane performance results similar to [6].

   [1] http://stackalytics.com/report/contribution/neutron/90
   [2]
   
http://docs.openstack.org/developer/neutron/policies/neutron-teams.html

   [3]
   
http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron

   [4] https://assafmuller.com/2015/05/17/testing-lightning-talk/
   [5] https://etherpad.openstack.org/p/neutron-tempest-defork
   [6]
   https://www.youtube.com/watch?v=a0qlsH1hoKs=youtu.be=24m22s


   
__

   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
__

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org
?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testing core

2016-07-22 Thread Oleg Bondarev
+1

On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley 
wrote:

> +1
>
> On Jul 21, 2016, at 5:13 PM, Kevin Benton  wrote:
>
> +1
>
> On Thu, Jul 21, 2016 at 2:41 PM, Carl Baldwin  wrote:
>
>> +1 from me
>>
>> On Thu, Jul 21, 2016 at 1:35 PM, Assaf Muller  wrote:
>>
>>> As Neutron's so called testing lieutenant I would like to propose
>>> Jakub Libosvar to be a core in the testing area.
>>>
>>> Jakub has demonstrated his inherent interest in the testing area over
>>> the last few years, his reviews are consistently insightful and his
>>> numbers [1] are in line with others and I know will improve if given
>>> the responsibilities of a core reviewer. Jakub is deeply involved with
>>> the project's testing infrastructures and CI systems.
>>>
>>> As a reminder the expectation from cores is found here [2], and
>>> specifically for cores interesting in helping out shaping Neutron's
>>> testing story:
>>>
>>> * Guide community members to craft a testing strategy for features [3]
>>> * Ensure Neutron's testing infrastructures are sufficiently
>>> sophisticated to achieve the above.
>>> * Provide leadership when determining testing Do's & Don'ts [4]. What
>>> makes for an effective test?
>>> * Ensure the gate stays consistently green
>>>
>>> And more tactically we're looking at finishing the Tempest/Neutron
>>> tests dedup [5] and to provide visual graphing for historical control
>>> and data plane performance results similar to [6].
>>>
>>> [1] http://stackalytics.com/report/contribution/neutron/90
>>> [2]
>>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html
>>> [3]
>>> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>>> [4] https://assafmuller.com/2015/05/17/testing-lightning-talk/
>>> [5] https://etherpad.openstack.org/p/neutron-tempest-defork
>>> [6]
>>> https://www.youtube.com/watch?v=a0qlsH1hoKs=youtu.be=24m22s
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-07-22 Thread Li, Xiaoyan
Hi,

What is the result of the discussion on the privsep issue?
When can we release the next os-brick?

Best wishes
Lisa

From: Ivan Kolodyazhny [mailto:e...@e0ne.info]
Sent: Wednesday, July 13, 2016 9:55 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an 
upgrade strategy?

Thanks for the update, Matt.

I will join our meeting next week.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 12, 2016 at 4:25 PM, Matt Riedemann 
> wrote:
On 7/12/2016 6:29 AM, Ivan Kolodyazhny wrote:
Hi team,

Do we have any decision on this issue? I've found few patches but both
of them are -1'ed.

From Cinder perspective, it blocks us to release new os-brick with
features, which are needed for other projects like Cinder and
python-brick-cinderclient-ext.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Jun 22, 2016 at 5:47 PM, Matt Riedemann
 
>> wrote:

On 6/21/2016 10:12 PM, Angus Lees wrote:

On Wed, 22 Jun 2016 at 05:59 Matt Riedemann
 
>



1840
WARNING oslo.privsep.daemon [-] privsep log:
/usr/local/bin/nova-rootwrap: Unauthorized command: privsep-helper
--config-file /etc/nova/nova.conf --privsep_context
os_brick.privileged.default --privsep_sock_path
/tmp/tmpV5w2VC/privsep.sock (no filter matched)

 .. so nova-rootwrap is rejecting the privsep-helper command line
because no filter matched.  This indicates the nova
compute.filters file
has not been updated, or is incorrect.


As was later pointed out by mtreinish, grenade is attempting to
run the
newton code against mitaka configs, and this includes using mitaka
rootwrap filters.   Unfortunately, the change to add privsep to
nova's
rootwrap filters wasn't approved until the newton cycle (so that
all the
os-brick privsep-related changes could be approved together), and so
this doesn't Just Work.

Digging in further, it appears that there *is* a mechanism in
grenade to
upgrade rootwrap filters between major releases, but this needs
to be
explicitly updated for each project+release and hasn't been for
nova+mitaka->newton.  I'm not sure how this is *meant* to work,
since
the grenade "theory of upgrade" doesn't mention when configs
should be
updated - the only mechanism provided is an "exception ... used
sparingly."


As noted in the review, my understanding of the config changes is
deprecation of options across release boundaries so that you can't
drop a config option that would break someone from release to
release without it being deprecated first. So deprecate option foo
in mitaka, people upgrading from liberty to mitaka aren't broken,
but they get warnings in mitaka so that when you drop the option in
newton it's not a surprise and consumers should have adjusted during
mitaka.

For rootwrap filters I agree this is more complicated.


Anyway, I added an upgrade step for nova mitaka->newton that updates
rootwrap filters appropriately(*).  Again, I'm not sure what this
communicates to deployers compared to cinder (which *did* have the
updated rootwrap filter merged in mitaka, but of course that update
still needs to be installed at some point).
(*) https://review.openstack.org/#/c/332610

 - Gus



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Alternatively Walter had a potential workaround to fallback to
rootwrap for 

Re: [openstack-dev] [Neutron][oslo.db] Inspecting sqlite db during unit tests

2016-07-22 Thread Kevin Benton
Now that we have switched to oslo.db for test provisioning the
responsibility of choosing a location lands here:
https://github.com/openstack/oslo.db/blob/a79479088029e4fa51def91cb36bc652356462b6/oslo_db/sqlalchemy/provision.py#L505

The problem is that when you specify OS_TEST_DBAPI_ADMIN_CONNECTION it does
end up creating the file, but then the logic above chooses a URL based on
the random ident. So you can find an sqlite file in your tmp dir, it just
won't be the one you asked for.

It seems like a bug in the oslo.db logic, but the commit that added it was
part of a much larger refactor so I'm not sure if it was intentional to
ensure that no two tests used the same db.
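Independent of where oslo.db ends up putting the file, once a file-based sqlite db does exist it can be inspected from a breakpoint with the stdlib `sqlite3` module (the path below is hypothetical — use whatever file the provisioning logic actually created in your tmp dir):

```python
# Hedged sketch: list the tables in a file-based sqlite db while sitting
# at a pdb breakpoint. sqlite3.connect() creates an empty file if the
# path does not exist yet, in which case this prints an empty list.
import sqlite3

conn = sqlite3.connect("/tmp/unit-test.db")  # hypothetical path
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
conn.close()
```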

On Thu, Jul 21, 2016 at 1:45 PM, Carl Baldwin  wrote:

> Hi,
>
> In Neutron, we run unit tests with an in-memory sqlite instance. It is
> impossible, as far as I know, to inspect this database using the sqlite3
> command line while the unit tests are running. So, we have to resort to
> python / sqlalchemy to do it. This is inconvenient.
>
> Months ago, I was able to get the unit tests to write the sqlite db to a
> file so that I could inspect it while I was sitting at a breakpoint in the
> code. That was very nice. Yesterday, I tried to repeat that while traveling
> and was unable to figure it out. I had to time box my effort to move on to
> other things.
>
> As far as I remember, the mechanism that I used was to adjust the
> neutron.conf for the tests [1]. I'm not totally sure about this because I
> didn't take sufficient notes, I think because it was pretty easy to figure
> it out at the time. This mechanism doesn't seem to have any effect these
> days. I changed it to 'sqlite:tmp/unit-test.db' and never saw a file
> created there.
>
> I did a little bit of digging and I tried one more thing. That was to
> set OS_TEST_DBAPI_ADMIN_CONNECTION='sqlite:tmp/unit-test.db' in the
> environment before running tests. I was encouraged because this caused a
> file to be created at that location but the file remained empty for the
> duration of the run.
>
> Does anyone know off the top of their head how to get unit tests in
> Neutron to use a file based sqlite db?
>
> Carl
>
> [1]
> https://github.com/openstack/neutron/blob/97c491294cf9eca0921336719d62d74ec4e1fa96/neutron/tests/etc/neutron.conf#L26
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [tooz] DLM benchmark results

2016-07-22 Thread John Schwarz
You're right, Joshua.

Tooz HEAD points to 0f4e1198fdcbd6a29d77c67d105d201ed0fbd9e0.

With regards to etcd and zookeeper's versions, they are:
zookeeper-3.4.5+28-1.cdh4.7.1.p0.13.el6.x86_64,
etcd-2.2.5-2.el7.0.1.x86_64.

John.
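
For readers skimming the thread: the pattern whose locking the benchmark
measures -- serializing vrid assignment for HA routers -- can be sketched
locally with a threading.Lock standing in for the tooz distributed lock.
All names and the pool size below are hypothetical; the point is only that
concurrent acquire() calls serialize the critical section, which is the
"with contention" case we timed:

```python
import threading

# Hypothetical per-tenant pool of VRRP vrids and the assignment table.
vrid_pool = set(range(1, 255))
assigned = {}
pool_lock = threading.Lock()  # stand-in for a tooz Zookeeper/etcd lock

def assign_vrid(router_id):
    # Under contention, acquiring the lock blocks until the previous
    # assignment for the same tenant completes, so no two routers can
    # pick the same vrid.
    with pool_lock:
        vrid = min(vrid_pool)
        vrid_pool.discard(vrid)
        assigned[router_id] = vrid

# Six simultaneous requests, matching the heaviest case in the benchmark.
threads = [threading.Thread(target=assign_vrid, args=("router-%d" % i,))
           for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(assigned.values()))  # [1, 2, 3, 4, 5, 6] -- all distinct
```

With a distributed lock the shape is the same; only the acquire cost (the
1.5-3.74% overhead reported below) changes.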

On Thu, Jul 21, 2016 at 8:14 PM, Joshua Harlow  wrote:
> Hi John,
>
> Thanks for gathering this info,
>
> Do you have the versions of the backend that were used here (particularly
> relevant for etcd which has a new release pretty frequently).
>
> It'd be useful to capture that info also :)
>
> John Schwarz wrote:
>>
>> Hi everyone,
>>
>> Following [1], a few of us sat down during the last day of the Austin
>> Summit and discussed the possibility of adding formal support for
>> Tooz, specifically for the locking mechanism it provides. The
>> conclusion we reached was that benchmarks should be done to show if
>> and how Tooz affects the normal operation of Neutron (i.e. if locking
>> a resource using Zookeeper takes 3 seconds, it's not worthwhile at
>> all).
>>
>> We've finally finished the benchmarks and they are available at [2].
>> They test a specific case: when creating an HA router a lock-free
>> algorithm is used to assign a vrid to a router (this is later used for
>> keepalived), and the benchmark specifically checks the effects of
>> locking that function with either Zookeeper or Etcd, using the no-Tooz
>> case as a baseline. The locking was checked in 2 different ways - one
>> which presents no contention (acquire() always succeeds immediately)
>> and one which presents contentions (acquire() may block until a
>> similar process for the invoking tenant is complete).
>>
>> The benchmarks show that while using Tooz does raise the cost of an
>> operation, the effects are not as bad as we initially feared. In the
>> simple, single simultaneous request, using Zookeeper raised the
>> average time it took to create a router by 1.5% (from 11.811 to 11.988
>> seconds). On the more-realistic case of 6 simultaneous requests,
>> Zookeeper raised the cost by 3.74% (from 16.533 to 17.152 seconds).
>>
>> It is important to note that the setup itself was overloaded - it was
>> built on a single baremetal hosting 5 VMs (4 of which were
>> controllers) and thus we were unable to go further - for example, 10
>> concurrent requests overloaded the server and caused some race
>> conditions to appear in the L3 scheduler (bugs will be opened soon),
>> so for this reason we haven't tested heavier samples and limited
>> ourselves to 6 simultaneous requests.
>>
>> Also important to note that some kind of race condition was noticed in
>> tooz's etcd driver. We've discussed this with the tooz devs and
>> provided a patch that is supposed to fix them [3].
>> Lastly, races in the L3 HA Scheduler were found and we are yet to dig
>> into them and find out their cause - bugs will be opened for these as
>> well.
>>
>> I've opened the summary [2] for comments so you're welcome to open a
>> discussion about the results both in the ML and on the doc itself.
>>
>> (CC to all those who attended the Austin Summit meeting and other
>> interested parties)
>> Happy locking,
>>
>> [1]:
>> http://lists.openstack.org/pipermail/openstack-dev/2016-April/093199.html
>> [2]:
>> https://docs.google.com/document/d/1jdI8gkQKBE0G9koR0nLiW02d5rwyWv_-gAp7yavt4w8
>> [3]: https://review.openstack.org/#/c/342096/
>>
>> --
>> John Schwarz,
>> Senior Software Engineer,
>> Red Hat.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
John Schwarz,
Senior Software Engineer,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] On testing...

2016-07-22 Thread Sergei Chipiga
Hi all,

> 2. We don't have enough higher-level (integration-level) test coverage for
our newer angular interfaces.

I previously wrote about the new Horizon integration autotest architecture
and its parallel mode: https://github.com/sergeychipiga/horizon_autotests.
That suite already contains many tests for the new Angular interface,
covering instances and containers. With this architecture it's easy to write
autotests for the new Angular interface, just as for the ordinary HTML
interface.
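
A rough sketch of why that architecture makes the two UIs equally easy to
cover: tests drive page objects rather than raw selectors, so targeting the
Angular UI versus the legacy HTML UI only means swapping selectors, not
rewriting tests. All class and selector names here are hypothetical, and a
fake browser stands in for a real Selenium WebDriver:

```python
class FakeBrowser:
    """Records clicks; a real suite would wrap a Selenium WebDriver."""
    def __init__(self):
        self.clicks = []

    def click(self, selector):
        self.clicks.append(selector)


class InstancesPage:
    # Angular UI selector (hypothetical); a legacy-HTML page object
    # would differ only in this constant, not in the test code.
    LAUNCH_BUTTON = "[ng-click='launchInstance()']"

    def __init__(self, browser):
        self.browser = browser

    def launch_instance(self):
        self.browser.click(self.LAUNCH_BUTTON)


browser = FakeBrowser()
page = InstancesPage(browser)
page.launch_instance()
print(browser.clicks)  # ["[ng-click='launchInstance()']"]
```

The test body ("launch an instance, expect it listed") stays identical
across both interfaces; only the page-object layer knows which UI it talks to.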
--
Regards, Sergei Chipiga
QA Engineer, Mirantis Inc.

Tel.: +7 (960) 057-29-32
Skype: chipiga86
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Midcycle Summary

2016-07-22 Thread Rob Cresswell
We didn't discuss it explicitly, but I don't believe the decision has changed.
We can remove it, but last I checked the patches in line to handle testing
after deprecation still needed work. If they've been updated, I can take
another look.

Rob

On 22 July 2016 at 07:52, Matthias Runge wrote:
On 21/07/16 19:40, Rob Cresswell wrote:
> Hi everyone,
>
> We had the Horizon mid cycle meetup last week, and I wanted to highlight
> some of the discussion and decisions made.
>
> Agenda: https://etherpad.openstack.org/p/horizon-newton-midcycle
> Notes: https://etherpad.openstack.org/p/horizon-newton-midcycle-notes
>
Thanks for sharing this Rob!

Did you also talk about deprecating the Ceilometer-based metering dashboard?

IIRC, we all agreed at the last summit that this doesn't really work for
large deployments and should be removed or replaced?

Or does anyone still use it?

Matthias

--
Matthias Runge

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham,
Michael O'Neill, Eric Shander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Midcycle Summary

2016-07-22 Thread Matthias Runge
On 21/07/16 19:40, Rob Cresswell wrote:
> Hi everyone,
> 
> We had the Horizon mid cycle meetup last week, and I wanted to highlight
> some of the discussion and decisions made.
> 
> Agenda: https://etherpad.openstack.org/p/horizon-newton-midcycle 
> Notes: https://etherpad.openstack.org/p/horizon-newton-midcycle-notes
> 
Thanks for sharing this Rob!

Did you also talk about deprecating the Ceilometer-based metering dashboard?

IIRC, we all agreed at the last summit that this doesn't really work for
large deployments and should be removed or replaced?

Or does anyone still use it?

Matthias

-- 
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham,
Michael O'Neill, Eric Shander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Today(20160722)'s weekly meeting agenda

2016-07-22 Thread hu . zhijiang
20160722 Agenda
1) roll call
2) Agenda bashing
3) Approved Wei (kong.w...@zte.com.cn) as daisycloud core reviewer
4) daisycloud status update
5) daisy4nfv status update
6) daisy4nfv discussion in daisycloud channel





B.R.,
Zhijiang



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][neutron] - neutron gate blocked by devstack change

2016-07-22 Thread Kevin Benton
Hi all,

The merge of https://review.openstack.org/#/c/343072/ unfortunately broke
the Linux Bridge jobs, so Neutron patches are currently blocked.

I have a fix for the linux bridge settings (
https://review.openstack.org/345707) that I verified with a test patch in
Neutron (https://review.openstack.org/#/c/345449/).

However, in the interest of unblocking the gate to give reviewers time to
go over the changes in 345707, I just proposed a revert of 343072 here:
https://review.openstack.org/#/c/345820/

If some devstack reviewers can fast track that revert to unblock the
Neutron gate, that would be much appreciated.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev