Re: [openstack-dev] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-25 Thread Nikhil Komawar
Thanks Sam. We purposely chose that time to accommodate some of our
community members from the Pacific. I'm assuming it's just that the time
doesn't work out in your case? So, hopefully other Australian/NZ
friends can join.


On 5/26/16 12:59 AM, Sam Morrison wrote:
> I’m hoping some people from the Large Deployment Team can come along. It’s 
> not a good time for me in Australia but hoping someone else can join in.
>
> Sam
>
>
>> On 26 May 2016, at 2:16 AM, Nikhil Komawar  wrote:
>>
>> Hello,
>>
>>
>> Firstly, I would like to thank Fei Long for bringing up a few operator
>> centric issues to the Glance team. After chatting with him on IRC, we
>> realized that there may be more operators who would want to contribute
>> to the discussions to help us take some informed decisions.
>>
>>
>> So, I would like to call for a 2-hour sync for the Glance team along
>> with interested operators on Thursday, June 9th, 2016 at 2000 UTC.
>>
>>
>> If you are interested in participating please RSVP here [1], and
>> participate in the poll for the tool you'd prefer. I've also added a
>> section for Topics and provided a template to document the issues clearly.
>>
>>
>> Please be mindful of everyone's time and if you are proposing issue(s)
>> to be discussed, come prepared with well documented & referenced topic(s).
>>
>>
>> If you have feedback that you're not sure is appropriate for the
>> etherpad, you can reach me on IRC (nick: nikhil).
>>
>>
>> [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
>>
>> -- 
>>
>> Thanks,
>> Nikhil Komawar
>> Newton PTL for OpenStack Glance
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 

Thanks,
Nikhil




Re: [openstack-dev] [vitrage] input erroneous `vitrage_id` in `vitrage rca show`API or `vitrage topology show` API

2016-05-25 Thread Afek, Ifat (Nokia - IL)
Hi,

You are right. The API should return 404, and the CLI should issue a friendly
error message.

Best Regards,
Ifat.

From: Zhang Yujun [mailto:zhangyujun+...@gmail.com]
Sent: Wednesday, May 25, 2016 10:10 AM
To: OpenStack Development Mailing List (not for usage questions); 
dong.wenj...@zte.com.cn
Cc: Shamir, Ohad (Nokia - IL)
Subject: Re: [openstack-dev] [vitrage] input erroneous `vitrage_id` in `vitrage 
rca show`API or `vitrage topology show` API

Hi, all

I'm not sure whether vitrage follows RESTful API conventions.

In a RESTful API, it is reasonable to return a 404 error when the requested
resource does not exist. If we want to display a friendly message, it could be
implemented in the front end.

The backend API should remain simple and consistent, since we may have
different consumers, e.g. a CLI that does not understand error messages at all.

--
Yujun ZHANG

On Wed, May 25, 2016 at 2:57 PM Afek, Ifat (Nokia - IL) 
> wrote:
Hi dwj,

I’m passing the question (and answer) to openstack mailing list.

You are right. In case of an invalid vitrage id, we should not return an HTTP
404 error. I think the correct behavior would be to return a friendly error
message like “Alarm XYZ does not exist”.
You can open a bug about it.

Thanks,
Ifat.

From: dong.wenj...@zte.com.cn 
[mailto:dong.wenj...@zte.com.cn]
Sent: Wednesday, May 25, 2016 9:24 AM
To: Weyl, Alexey (Nokia - IL); Afek, Ifat (Nokia - IL); Rosensweig, Elisha 
(Nokia - IL); Shamir, Ohad (Nokia - IL)
Subject: input erroneous `vitrage_id` in `vitrage rca show`API or `vitrage 
topology show` API


Hi folks,

`vitrage rca show` has the positional argument: alarm_id

`vitrage topology show` has the optional argument: --root

Both refer to the `vitrage_id` of a vertex, and both APIs call the function
`graph_query_vertices(self, query_dict=None, root_id=None, depth=None,
direction=Direction.BOTH)`.

But if the user inputs an erroneous `vitrage_id` that does not exist in the
nodes, should the APIs all return an HTTP 404 error?

See the code:
https://github.com/openstack/vitrage/blob/master/vitrage/graph/algo_driver/networkx_algorithm.py#L43

Do we need to unify the return value to an empty graph in
`graph_query_vertices` when the root is not found or the root_data doesn't
match the query? Or leave it as-is and return the HTTP 404?
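As an illustration of the two options (a sketch only: `Graph` here is a stand-in for the networkx-backed driver, and `api_get_rca` is a hypothetical wrapper, not actual vitrage code):

```python
class VertexNotFound(Exception):
    """Signals that a query referenced a vertex that does not exist."""

class Graph:
    """Stand-in for the networkx-backed vitrage graph (illustrative only)."""
    def __init__(self, nodes=None):
        self._nodes = dict(nodes or {})

    def graph_query_vertices(self, query_dict=None, root_id=None, depth=None):
        # Guard instead of letting a bare KeyError escape to oslo.messaging.
        if root_id not in self._nodes:
            raise VertexNotFound('vertex %r does not exist' % root_id)
        return {root_id: self._nodes[root_id]}

def api_get_rca(graph, alarm_id):
    """Hypothetical API layer: translate the domain error into an HTTP status."""
    try:
        return 200, graph.graph_query_vertices(root_id=alarm_id)
    except VertexNotFound as exc:
        return 404, {'error': str(exc)}
```

Returning an empty graph instead would just mean replacing the `raise` with `return {}`; either way the KeyError no longer leaks through the RPC layer as an opaque "Unknown Error (HTTP 404)".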

Thanks.

The error log:
stack@cloud:~$ vitrage rca show 'ALARM:nagios:cloud:test'
Unknown Error (HTTP 404)

2016-05-25 12:26:08.734 10742 DEBUG 
vitrage.entity_graph.api_handler.entity_graph_api [-] EntityGraphApis get_rca 
root:ALARM:nagios:cloud:test get_rca 
/opt/stack/vitrage/vitrage/entity_graph/api_handler/entity_graph_api.py:114
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: u'ALARM:nagios:cloud:test'
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/vitrage/vitrage/entity_graph/api_handler/entity_graph_api.py", line 
120, in get_rca
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher 
direction=Direction.IN)
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/vitrage/vitrage/graph/algo_driver/networkx_algorithm.py", line 43, 
in graph_query_vertices
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher root_data 
= self.graph._g.node[root_id]
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher KeyError: 
u'ALARM:nagios:cloud:test'
2016-05-25 12:26:08.735 10742 ERROR oslo_messaging.rpc.dispatcher
2016-05-25 12:26:08.759 10742 ERROR oslo_messaging._drivers.common [-] 
Returning exception u'ALARM:nagios:cloud:test' to caller

BR,
dwj






Re: [openstack-dev] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-25 Thread Sam Morrison
I’m hoping some people from the Large Deployment Team can come along. It’s not 
a good time for me in Australia but hoping someone else can join in.

Sam


> On 26 May 2016, at 2:16 AM, Nikhil Komawar  wrote:
> 
> Hello,
> 
> 
> Firstly, I would like to thank Fei Long for bringing up a few operator
> centric issues to the Glance team. After chatting with him on IRC, we
> realized that there may be more operators who would want to contribute
> to the discussions to help us take some informed decisions.
> 
> 
> So, I would like to call for a 2-hour sync for the Glance team along
> with interested operators on Thursday, June 9th, 2016 at 2000 UTC.
> 
> 
> If you are interested in participating please RSVP here [1], and
> participate in the poll for the tool you'd prefer. I've also added a
> section for Topics and provided a template to document the issues clearly.
> 
> 
> Please be mindful of everyone's time and if you are proposing issue(s)
> to be discussed, come prepared with well documented & referenced topic(s).
> 
> 
> If you have feedback that you're not sure is appropriate for the
> etherpad, you can reach me on IRC (nick: nikhil).
> 
> 
> [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
> 
> -- 
> 
> Thanks,
> Nikhil Komawar
> Newton PTL for OpenStack Glance
> 
> 




Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-05-25 Thread Yuiko Takada
Jim, thank you so much for discussing this with johnthetubaguy and adding
this topic to the agenda.
I will also attend the Nova IRC meeting.


Best Regards,
Yuiko Takada Mori

2016-05-25 20:27 GMT+09:00 Jim Rollenhagen :

> On Wed, May 25, 2016 at 01:58:18PM +0900, Yuiko Takada wrote:
> > Hi!
> >
> > Hironori, Lucas, thank you for bringing this topic up!
> >
> > Yes, as Lucas says,  our latest spec is
> > https://review.openstack.org/#/c/319505
> >
> > Tien, Hironori, Akira and I discussed and merged our ideas.
> >
> > And new Nova spec is here:
> > https://review.openstack.org/#/c/319507
> >
> > As you guys know, the Nova non-priority spec approval freeze is 5/30-6/3,
> > so I guess the Ironic spec needs to be approved before then.
>
> Just a note here, I talked with johnthetubaguy this morning, and we
> think the Nova blueprint doesn't need a spec. I updated the whiteboard
> on the BP with some details, added it to the agenda
> for the next Nova meeting, and will be there to discuss it.
>
> // jim
>
> >
> >
> > Best Regards,
> > Yuiko Takada Mori
> >
> > 2016-05-25 1:15 GMT+09:00 Lucas Alvares Gomes :
> >
> > > Hi,
> > >
> > > > I'm working with Tien who is a submitter of one[1] of console specs.
> > > > I joined the console session in Austin.
> > > >
> > > > In the session, we got the following consensus.
> > > > - focus on serial console in Newton
> > > > - use nova-serial proxy as is
> > > >
> > > > We also got some requirements[2] for this feature in the session.
> > > > We have started cooperating with Akira and Yuiko who submitted
> another
> > > similar spec[3].
> > > > We're going to unite our specs and add solutions for the requirements
> > > ASAP.
> > > >
> > >
> > > Great stuff! So do we have an update on this?
> > >
> > > I see [3] is now abandoned and a new spec was proposed recently [4].
> > > Is [4] the result of the union of both specs?
> > >
> > > > [1] ironic-ipmiproxy: https://review.openstack.org/#/c/296869/
> > > > [2] https://etherpad.openstack.org/p/ironic-newton-summit-console
> > > > [3] ironic-console-server: https://review.openstack.org/#/c/306755/
> > >
> > > [4] https://review.openstack.org/#/c/319505
> > >
> > > Cheers,
> > > Lucas
> > >
> > >
> > >
>
> >
>
>
>


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-25 Thread Ryan Moats


Ben Pfaff  wrote on 05/25/2016 07:44:43 PM:

> From: Ben Pfaff 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: John McDowall ,
> "disc...@openvswitch.org" , OpenStack
> Development Mailing List , Justin
> Pettit , Russell Bryant 
> Date: 05/25/2016 07:44 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> On Wed, May 25, 2016 at 09:27:31AM -0500, Ryan Moats wrote:
> > As I understand it, Table 0 identifies the logical port and logical
> > flow. I'm worried that this means we'll end up with separate bucket
> > rules for each ingress port of the port pairs that make up a port
> > group, leading to a cardinality product in the number of rules.
> > I'm trying to think of a way where Table 0 could identify the packet
> > as being part of a particular port group, and then I'd only need one
> > set of bucket rules to figure out the egress side.  However, the
> > amount of free metadata space is limited and so before we go down
> > this path, I'm going to pull Justin, Ben and Russell in to see if
> > they buy into this idea or if they can think of an alternative.
>
> I've barely been following the discussion, so a recap of the question
> here would help a lot.
>

Sure (and John gets to correct me where I'm wrong) - the SFC proposal
is to carry a chain as an ordered set of port groups, where each group
consists of multiple port pairs. Each port pair consists of an ingress
port and an egress port, so that traffic is load balanced between
the ingress ports of a group. Traffic from the egress port of a group
is sent to the ingress port of the next group (ingress and egress here
are from the point of view of the thing getting the traffic).

I was suggesting to John that from the view of the switch, this would
be reversed in the openvswitch rules - the proposed CHAINING stage
in the ingress pipeline would apply the classifier for traffic entering
a chain and identify traffic coming from an egress SFC port in the
midst of a chain. The egress pipeline would identify the next ingress SFC
port that gets the traffic or the final destination for traffic exiting
the chain.

Further, I pointed him at the select group for how traffic could be
load balanced between the different ports that are contained in a port
group, but that I was worried that I'd need a Cartesian product of rules
in the egress chain stage.  Having thought about this some more, I'm
realizing that I'm confused and the number of rules should not be that
bad:

- Table 0 will identify the logical port the traffic comes from
- The CHAINING stage of the ingress pipeline can map that logical
  port information to the port group the port is part of.
- The CHAINING stage of the egress pipeline would use that port
  group information to select the next logical port via a select group.

I believe this requires a total number of rules in the CHAINING stages
of the order of the number of ports in the service chain.
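A toy model of that counting argument (plain Python, purely illustrative; the dicts below stand in for CHAINING-stage rules and select-group buckets, not real OVN logical flows):

```python
import random

# Ingress CHAINING stage: one rule per logical port, mapping the port
# traffic arrived from to the port group it belongs to.
port_to_group = {'vnf1-eg': 'g1', 'vnf2-eg': 'g1', 'vnf3-eg': 'g2'}

# Egress CHAINING stage: one select-group per port group, whose buckets
# are the ingress ports of the next hop in the chain.
group_buckets = {'g1': ['vnf3-in'], 'g2': ['dst-port']}

def next_hop(in_port, rng=random):
    group = port_to_group[in_port]           # carried as pipeline metadata
    return rng.choice(group_buckets[group])  # load-balance across buckets

# Rule count grows linearly in ports plus groups, not as a cross product:
n_rules = len(port_to_group) + len(group_buckets)  # 5 here
```

The point of the sketch is only the cardinality: the ingress stage never needs to know the egress buckets, so there is no per-(ingress-port, egress-port) rule explosion as long as the group id survives the hop between pipelines.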

The above is predicated on carrying the port group information from
the ingress pipeline to the egress pipeline in metadata, so I would
be looking to you for ideas on where this data could be carried, since
I know that we don't have infinite space for said metadata...

Ryan


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-25 Thread Ryan Moats


John McDowall  wrote on 05/25/2016 07:27:46
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" , Ben
> Pfaff , Justin Pettit , Russell Bryant
> 
> Date: 05/25/2016 07:28 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Ok – I will let the experts weigh in on load balancing.
>
> In the meantime I have attached a couple of files to show where I am
> going. The first is sfc_dict.py and is a representation of the dict
> I am passing from SFC to OVN. This will then translate to the
> attached ovn-nb schema file.
>
> One of my concerns is that SFC almost doubles the size of the ovn-nb
> schema but I could not think of any other way of doing it.
>
> Thoughts?
>
> John

The dictionary looks fine for a starting point, and the more I look
at the classifier, the more I wonder if we can't do something with
the current ACL table to avoid duplication in the NB database
definition...

Ryan

> From: Ryan Moats 
> Date: Wednesday, May 25, 2016 at 7:27 AM
> To: John McDowall 
> Cc: "disc...@openvswitch.org" , OpenStack
> Development Mailing List , Ben Pfaff <
> b...@ovn.org>, Justin Pettit , Russell Bryant
 >
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall  wrote on 05/24/2016
> 06:33:05 PM:
>
> > From: John McDowall 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org" , "OpenStack
> > Development Mailing List" 
> > Date: 05/24/2016 06:33 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > Thanks for getting back to me and pointing me in a more OVS like
> > direction. What you say makes sense, let me hack something together.
> > I have been a little distracted getting some use cases together. The
> > other area is how to better map the flow-classifier I have been
> > thinking about it a little, but I will leave it till after we get
> > the chains done.
> >
> > Your load-balancing comment was very interesting – I saw some
> > patches for load-balancing a few months ago but nothing since. It
> > would be great if we could align with load-balancing as that would
> > make a really powerful solution.
> >
> > Regards
> >
> > John
>
> John-
>
> For the load balancing, I believe that you'll want to look at
> openvswitch's select group, as that should let you set up multiple
> buckets for each egress port in the port pairs that make up a port
> group.
>
> As I understand it, Table 0 identifies the logical port and logical
> flow. I'm worried that this means we'll end up with separate bucket
> rules for each ingress port of the port pairs that make up a port
> group, leading to a cardinality product in the number of rules.
> I'm trying to think of a way where Table 0 could identify the packet
> as being part of a particular port group, and then I'd only need one
> set of bucket rules to figure out the egress side.  However, the
> amount of free metadata space is limited and so before we go down
> this path, I'm going to pull Justin, Ben and Russell in to see if
> they buy into this idea or if they can think of an alternative.
>
> Ryan
>
> >
> > From: Ryan Moats 
> > Date: Monday, May 23, 2016 at 9:06 PM
> > To: John McDowall 
> > Cc: "disc...@openvswitch.org" , OpenStack
> > Development Mailing List 
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > John McDowall  wrote on 05/18/2016
> > 03:55:14 PM:
> >
> > > From: John McDowall 
> > > To: Ryan Moats/Omaha/IBM@IBMUS
> > > Cc: "disc...@openvswitch.org" , "OpenStack
> > > Development Mailing List" 
> > > Date: 05/18/2016 03:55 PM
> > > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> > >
> > > Ryan,
> > >
> > > OK all three repos and now aligned with their masters. I have done
> > > some simple level system tests and I can steer traffic to a single
> > > VNF.  Note: some additional changes to networking-sfc to catch-up
> > > with their changes.
> > >
> > > https://github.com/doonhammer/networking-sfc
> > > https://github.com/doonhammer/networking-ovn
> > > https://github.com/doonhammer/ovs
> > >
> > > The next tasks I see are:
> > >
> > > 1. Decouple networking-sfc and networking-ovn. I am thinking that I
> > > will pass a nested port-chain dictionary holding 

Re: [openstack-dev] [kuryr][magnum]Installing kuryr for mutlinode openstack setup

2016-05-25 Thread Vikas Choudhary
Hi Akshay,

Sorry about that. You need to run "tox -e genconfig". After this, a
"kuryr.conf_sample" file will be generated inside kuryr/etc. Copy this file to
/etc/kuryr/ after renaming it to kuryr.conf.

Documentation will be updated soon.

-Vikas

On Wed, May 25, 2016 at 8:44 PM, Akshay Kumar Sanghai <
akshaykumarsang...@gmail.com> wrote:

> Hi,
> Thanks Jaume and Antoni.
> I tried the installation by git cloning the kuryr repo. I did pip install
> -r requirements.txt. After that I did pip install . . But it doesn't end
> successfully. There are no config files in /etc/kuryr directory.
> root@compute1:~/kuryr# pip install .
> Unpacking /root/kuryr
>   Running setup.py (path:/tmp/pip-4kbPa8-build/setup.py) egg_info for
> package from file:///root/kuryr
> [pbr] Processing SOURCES.txt
> warning: LocalManifestMaker: standard file '-c' not found
>
> [pbr] In git context, generating filelist from git
> warning: no previously-included files matching '*.pyc' found anywhere
> in distribution
>   Requirement already satisfied (use --upgrade to upgrade):
> kuryr==0.1.0.dev422 from file:///root/kuryr in
> /usr/local/lib/python2.7/dist-packages
> Requirement already satisfied (use --upgrade to upgrade): pbr>=1.6 in
> /usr/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): Babel>=2.3.4 in
> /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): Flask<1.0,>=0.10
> in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> jsonschema!=2.5.0,<3.0.0,>=2.0.0 in /usr/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> netaddr!=0.7.16,>=0.7.12 in /usr/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.concurrency>=3.5.0 in /usr/local/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): oslo.log>=1.14.0
> in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.serialization>=1.10.0 in /usr/local/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> oslo.utils>=3.5.0 in /usr/local/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> python-neutronclient>=4.2.0 in /usr/local/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): pyroute2>=0.3.10
> in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> os-client-config>=1.13.1 in /usr/local/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> neutron-lib>=0.1.0 in /usr/local/lib/python2.7/dist-packages (from
> kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in
> /usr/local/lib/python2.7/dist-packages (from
> Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in
> /usr/lib/python2.7/dist-packages (from
> Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade):
> itsdangerous>=0.21 in /usr/local/lib/python2.7/dist-packages (from
> Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
> Requirement already satisfied (use --upgrade to upgrade): markupsafe in
> /usr/lib/python2.7/dist-packages (from
> Jinja2>=2.4->Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
> Cleaning up...
> root@compute1:~/kuryr#
>
>
> Thanks
> Akshay
>
>
>
>
> On Wed, May 25, 2016 at 4:24 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Wed, May 25, 2016 at 11:20 AM, Jaume Devesa  wrote:
>>
>>> Hello Akshay,
>>>
>>> responses inline:
>>>
>>> On Wed, 25 May 2016 10:48, Akshay Kumar Sanghai wrote:
>>> > Hi,
>>> > I have a 4 node openstack setup (1 controller, 1 network, 2 compute
>>> nodes).
>>> > I want to install kuryr in liberty version. I cannot find a package in
>>> > ubuntu repo.
>>>
>>> There is not yet official version of Kuryr. You'll need to install using
>>> the
>>> current master branch of the repo[1] (by cloning it, install
>>> dependencies and
>>> `python setup.py install`
>>>
>>
>>  Or you could run it dockerized. Read the "repo info" in [2]
>>
>> We are working on having the packaging ready, but we are splitting the
>> repos first,
>> so it will take a while for plain distro packages.
>>
>>
>>> > -How do i install kuryr?
>>> If the README.rst file of the repository is not enough for you in terms
>>> of
>>> installation and configuration, please let us know what's not clear.
>>>
>>> > - what are 

Re: [openstack-dev] [kolla] Cross-project spec liaison

2016-05-25 Thread Swapnil Kulkarni (coolsvap)
On Thu, May 26, 2016 at 8:28 AM, Steven Dake (stdake)  wrote:
> Fellow core reviewers,
>
> I have a lot of liaising to do as a PTL and would like to offload some of
> it so I can get some actual sleep :)  Are there any takers on Mike's
> requirement for a cross-project liaison for Kolla?  The job involves
> reviewing all specs in the cross project spec repository and understanding
> their technical impact on Kolla.  It is important in this role to take
> care of ensuring Kolla's objectives are met by the specs and raise any red
> flags to the PTL and provide an update to the core team during our weekly
> team meeting.  This responsibility comes with +2/-2 voting rights on all
> cross project specifications.
>
> The individual must be a core reviewer.
>
> Regards,
> -steve
>
> On 5/17/16, 6:41 AM, "Mike Perez"  wrote:
>
>>Hi PTL's,
>>
>>Please make sure your cross-project spec liaisons are up-to-date [1].
>>This role
>>defaults to the PTL if no liaison is selected. See list of
>>responsibilities [2].
>>
>>As agreed by the TC, the cross-project spec liaison team will have voting
>>rights on the openstack/openstack-spec repo [3]. Next week I will be
>>adding
>>people from the cross-project spec liaison list to the gerrit group with
>>the
>>appropriate ACLs.
>>
>>
>>[1] -
>>https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Li
>>aisons
>>[2] -
>>http://docs.openstack.org/project-team-guide/cross-project.html#cross-proj
>>ect-specification-liaisons
>>[3] -
>>http://governance.openstack.org/resolutions/20160414-grant-cross-project-s
>>pec-team-voting.html
>>
>>--
>>Mike Perez
>>
>
>


Steve,

I would like to take the responsibility to be cross-project liaison for Kolla.

Swapnil



[openstack-dev] [kolla] Cross-project spec liaison

2016-05-25 Thread Steven Dake (stdake)
Fellow core reviewers,

I have a lot of liaising to do as a PTL and would like to offload some of
it so I can get some actual sleep :)  Are there any takers on Mike's
requirement for a cross-project liaison for Kolla?  The job involves
reviewing all specs in the cross project spec repository and understanding
their technical impact on Kolla.  It is important in this role to take
care of ensuring Kolla's objectives are met by the specs and raise any red
flags to the PTL and provide an update to the core team during our weekly
team meeting.  This responsibility comes with +2/-2 voting rights on all
cross project specifications.

The individual must be a core reviewer.

Regards,
-steve

On 5/17/16, 6:41 AM, "Mike Perez"  wrote:

>Hi PTL's,
>
>Please make sure your cross-project spec liaisons are up-to-date [1].
>This role
>defaults to the PTL if no liaison is selected. See list of
>responsibilities [2].
>
>As agreed by the TC, the cross-project spec liaison team will have voting
>rights on the openstack/openstack-spec repo [3]. Next week I will be
>adding
>people from the cross-project spec liaison list to the gerrit group with
>the
>appropriate ACLs.
>
>
>[1] - 
>https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Li
>aisons
>[2] - 
>http://docs.openstack.org/project-team-guide/cross-project.html#cross-proj
>ect-specification-liaisons
>[3] - 
>http://governance.openstack.org/resolutions/20160414-grant-cross-project-s
>pec-team-voting.html
>
>-- 
>Mike Perez
>




Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-25 Thread Steve Martinelli
++ very well deserved!

On Wed, May 25, 2016 at 6:58 PM, Rodrigo Duarte 
wrote:

> Thank you all, it's a privilege to be part of a team from where I've
> learned so much. =)
>
> On Wed, May 25, 2016 at 1:05 PM, Brad Topol  wrote:
>
>> CONGRATULATIONS Rodrigo!!! Very well deserved!!!
>>
>> --Brad
>>
>>
>> Brad Topol, Ph.D.
>> IBM Distinguished Engineer
>> OpenStack
>> (919) 543-0646
>> Internet: bto...@us.ibm.com
>> Assistant: Kendra Witherspoon (919) 254-0680
>>
>>
>> From: Lance Bragstad 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: 05/25/2016 09:09 AM
>> Subject: Re: [openstack-dev] [keystone] New Core Reviewer (sent on
>> behalf of Steve Martinelli)
>> --
>>
>>
>>
>> Congratulations Rodrigo!
>>
>> Thank you for all the continued and consistent reviews.
>>
>> On Tue, May 24, 2016 at 1:28 PM, Morgan Fainberg <
>> *morgan.fainb...@gmail.com* > wrote:
>>
>>I want to welcome Rodrigo Duarte (rodrigods) to the keystone core
>>team. Rodrigo has been a consistent contributor to keystone and has been
>>instrumental in the federation implementations. Over the last cycle he has
>>shown an understanding of the code base and contributed quality reviews.
>>
>>I am super happy (as proxy for Steve) to welcome Rodrigo to the
>>Keystone Core team.
>>
>>Cheers,
>>--Morgan
>>
>>
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-25 Thread Zhenyu Zheng
Thanks for the information, really hope these two can get merged for Newton:
 https://review.openstack.org/#/c/240401/
 https://review.openstack.org/#/c/239869/

On Sat, May 21, 2016 at 5:55 AM, Jay Pipes  wrote:

> +1 on all your suggestions below, Sean.
>
> -jay
>
>
> On 05/20/2016 08:05 AM, Sean Dague wrote:
>
>> There are a number of changes up for spec reviews that add parameters to
>> LIST interfaces in Newton:
>>
>> * keypairs-pagination (MERGED) -
>>
>> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
>> * os-instances-actions - https://review.openstack.org/#/c/240401/
>> * hypervisors - https://review.openstack.org/#/c/240401/
>> * os-migrations - https://review.openstack.org/#/c/239869/
>>
>> I think that limit / marker is always a legit thing to add, and I almost
>> wish we just had a single spec which is "add limit / marker to the
>> following APIs in Newton"
>>
>> Most of these came in with sort_keys as well. We currently don't have
>> schema enforcement on sort_keys, so I don't think we should add any more
>> instances of it until we scrub it. Right now sort_keys is mostly a way
>> to generate a lot of database load because users can sort by things not
>> indexed in your DB. We really should close that issue in the future, but
>> I don't think we should make it any worse. I have -1s on
>> os-instance-actions and hypervisors for that reason.
>>
>> os-instances-actions and os-migrations are time based, so they are
>> proposing a changes-since. That seems logical and fine. Date seems like
>> the natural sort order for those anyway, so it's "almost" limit/marker,
>> except from end not the beginning. I think that in general changes-since
>> on any resource which is time based should be fine, as long as that
>> resource is going to natural sort by the time field in question.
>>
>> So... I almost feel like this should just be soft policy at this point:
>>
>> limit / marker - always ok
>> sort_* - no more until we have a way to scrub sort (and we fix weird
>> sort key issues we have)
>> changes-since - ok on any resource that will natural sort with the
>> updated time
>>
>>
>> That should make proposing these kinds of additions easier for folks,
>>
>> -Sean
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
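The limit/marker and changes-since patterns Sean describes can be sketched as a small client/server loop. This is a hand-written illustration, not an actual Nova endpoint; `fetch_page` and its parameter names are hypothetical stand-ins for a paginated LIST API.

```python
# Sketch of limit/marker pagination plus a changes-since filter.
# All names here are illustrative, not a real Nova API.

def fetch_page(resources, limit, marker=None, changes_since=None):
    """Simulate a server handling GET /resources?limit=&marker=&changes-since=."""
    items = sorted(resources, key=lambda r: r["updated_at"])  # natural time sort
    if changes_since is not None:
        items = [r for r in items if r["updated_at"] >= changes_since]
    if marker is not None:
        ids = [r["id"] for r in items]
        items = items[ids.index(marker) + 1:]  # resume just after the marker
    return items[:limit]

def list_all(resources, limit=2, changes_since=None):
    """Client loop: keep requesting pages until one comes back short."""
    out, marker = [], None
    while True:
        page = fetch_page(resources, limit, marker, changes_since)
        out.extend(page)
        if len(page) < limit:
            return out
        marker = page[-1]["id"]

resources = [{"id": i, "updated_at": i} for i in range(5)]
print([r["id"] for r in list_all(resources)])                   # [0, 1, 2, 3, 4]
print([r["id"] for r in list_all(resources, changes_since=3)])  # [3, 4]
```

Note how changes-since composes cleanly with limit/marker only because the resource naturally sorts by the time field, which is exactly the soft-policy condition proposed above.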


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-25 Thread Ben Pfaff
On Wed, May 25, 2016 at 09:27:31AM -0500, Ryan Moats wrote:
> As I understand it, Table 0 identifies the logical port and logical
> flow. I'm worried that this means we'll end up with separate bucket
> rules for each ingress port of the port pairs that make up a port
> group, leading to a cardinality product in the number of rules.
> I'm trying to think of a way where Table 0 could identify the packet
> as being part of a particular port group, and then I'd only need one
> set of bucket rules to figure out the egress side.  However, the
> amount of free metadata space is limited and so before we go down
> this path, I'm going to pull Justin, Ben and Russell in to see if
> they buy into this idea or if they can think of an alternative.

I've barely been following the discussion, so a recap of the question
here would help a lot.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-25 Thread John McDowall
Ryan,

Ok – I will let the experts weigh in on load balancing.

In the meantime I have attached a couple of files to show where I am going. The 
first is sfc_dict.py and is a representation of the dict I am passing from SFC 
to OVN. This will then translate to the attached ovn-nb schema file.

One of my concerns is that SFC almost doubles the size of the ovn-nb schema but 
I could not think of any other way of doing it.

Thoughts?

John

From: Ryan Moats
Date: Wednesday, May 25, 2016 at 7:27 AM
To: John McDowall
Cc: "disc...@openvswitch.org", OpenStack Development Mailing List, Ben Pfaff, Justin Pettit, Russell Bryant
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall wrote on 05/24/2016 06:33:05 PM:

> From: John McDowall
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List"
> Date: 05/24/2016 06:33 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Thanks for getting back to me and pointing me in a more OVS like
> direction. What you say makes sense, let me hack something together.
> I have been a little distracted getting some use cases together. The
> other area is how to better map the flow-classifier I have been
> thinking about it a little, but I will leave it till after we get
> the chains done.
>
> Your load-balancing comment was very interesting – I saw some
> patches for load-balancing a few months ago but nothing since. It
> would be great if we could align with load-balancing as that would
> make a really powerful solution.
>
> Regards
>
> John

John-

For the load balancing, I believe that you'll want to look at
openvswitch's select group, as that should let you set up multiple
buckets for each egress port in the port pairs that make up a port
group.

As I understand it, Table 0 identifies the logical port and logical
flow. I'm worried that this means we'll end up with separate bucket
rules for each ingress port of the port pairs that make up a port
group, leading to a cardinality product in the number of rules.
I'm trying to think of a way where Table 0 could identify the packet
as being part of a particular port group, and then I'd only need one
set of bucket rules to figure out the egress side.  However, the
amount of free metadata space is limited and so before we go down
this path, I'm going to pull Justin, Ben and Russell in to see if
they buy into this idea or if they can think of an alternative.

Ryan
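
For concreteness, the openvswitch select group Ryan refers to can be configured along these lines. This is a hand-rolled sketch, not taken from the patches under discussion; the bridge name, group id, table, and port numbers are all placeholders.

```shell
# Create a select group that hash-balances across two egress ports
# (one bucket per port-pair egress); groups need OpenFlow 1.1 or later.
ovs-ofctl -O OpenFlow13 add-group br-int \
    "group_id=1,type=select,bucket=output:2,bucket=output:3"

# Steer classified traffic to the group instead of to a single port,
# so the load-balancing decision lives in the group, not in per-port flows.
ovs-ofctl -O OpenFlow13 add-flow br-int \
    "table=0,priority=100,in_port=1,actions=group:1"
```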

>
> From: Ryan Moats
> Date: Monday, May 23, 2016 at 9:06 PM
> To: John McDowall
> Cc: "disc...@openvswitch.org", OpenStack Development Mailing List
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall wrote on 05/18/2016 03:55:14 PM:
>
> > From: John McDowall
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org", "OpenStack Development Mailing List"
> > Date: 05/18/2016 03:55 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > OK all three repos and now aligned with their masters. I have done
> > some simple level system tests and I can steer traffic to a single
> > VNF.  Note: some additional changes to networking-sfc to catch-up
> > with their changes.
> >
> > https://github.com/doonhammer/networking-sfc
> > 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Adrian Otto

> On May 25, 2016, at 12:43 PM, Ben Swartzlander  wrote:
> 
> On 05/25/2016 06:48 AM, Sean Dague wrote:
>> I've been watching the threads, trying to digest, and find the ways
>> this is getting sliced don't quite slice the way I've been thinking
>> about it (which might just mean I've been thinking about it wrong).
>> However, here is my current set of thoughts on things.
>> 
>> 1. Should OpenStack be open to more languages?
>> 
>> I've long thought the answer should be yes. Especially if it means we
>> end up with keystonemiddleware, keystoneauth, oslo.config in other
>> languages that let us share elements of infrastructure pretty
>> seamlessly. The OpenStack model of building services that register in a
>> service catalog and use common tokens for permissions through a bunch of
>> services is quite valuable. There are definitely people that have Java
>> applications that fit into the OpenStack model, but have no place to
>> collaborate on them.
>> 
>> (Note: nothing about the current proposal goes anywhere near this)
>> 
>> 2. Is Go a "good" language to add to the community?
>> 
>> Here I am far more mixed. In programming language time, Go is super new.
>> It is roughly the same age as the OpenStack project. The idea that Go and
>> Python programmers overlap seems to be because some shops that used
>> to do a lot in Python, now do some things in Go.
>> 
>> But when compared to other languages in our bag, Javascript, Bash. These
>> are things that go back 2 decades. Unless you have avoided Linux or the
>> Web successfully for 2 decades, you've done these in some form. Maybe
>> not being an expert, but there is vestigial bits of knowledge there. So
>> they *are* different. In the same way that C or Java are different, for
>> having age. The likelihood of finding community members than know Python
>> + one of these is actually *way* higher than Python + Go, just based on
>> duration of existence. In a decade that probably won't be true.
> 
> Thank you for bringing up this point. My major concern boils down to the 
> likelihood that Go will never be well understood by more than a small subset 
> of the community. (When I say "well understood" I mean years of experiences 
> with thousands of lines of code -- not "I can write hello world").
> 
> You expect this problem to get better in the future -- I expect this problem 
> to get worse. Not all programming languages survive. Google for "dead 
> programming languages" some time and you'll find many examples. The problem 
> is that it's never obvious when the languages are young that something more 
> popular will come along and kill a language.
> 
> I don't want to imply that Golang is especially likely to die any time soon. 
> But every time you add a new language to a community, you increase the *risk* 
> that one of the programming languages used by the community will eventually 
> fall out of popularity, and it will become hard or impossible to find people 
> to maintain parts of the code.
> 
> I tend to take a long view of software lifecycles, having witnessed the death 
> of projects due to bad decisions before. Does anyone expect OpenStack to 
> still be around in 10 years? 20 years? What is the likelihood that both 
> Python and Golang are both still popular languages then? I guarantee [1] that 
> it's lower than the likelihood that only Python is still a popular language.
> 
> Adding a new language adds risk that new contributors won't understand some 
> parts of the code. Period. It doesn't matter what the language is.
> 
> My proposed solution is to draw the community line at the language barrier 
> line. People in this community are expected to understand Python. Anyone can 
> start other communities, and they can overlap with ours, but let's make it 
> clear that they're not the same.

Take all the names of the programming languages out for a moment here. The 
point is not that one is any more appropriate than another. In order to evolve, 
OpenStack must allow alternatives. It sets us up for long term success. 
Evolution is gradual change. Will we ever need to refactor things from one 
language to another, or have the same API implemented in two languages? Sure. 
That’s fine. Optimize for a long term outcome, not short term efficiencies. 
Twenty years from now if OpenStack still has a  “Python only” attitude, I’m 
sure it will be totally and utterly irrelevant. We will have all moved on by 
then. Let’s get this right, and offer individual projects freedom to do what 
they feel is best. Have a selection of designated languages, and rationale for 
why to stick with whichever one is preferred at a point in time.

Adrian

> 
> -Ben Swartzlander
> 
> [1] For all X, Y in (0, 1): X * Y < X
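The inequality in Ben's footnote is easy to spot-check numerically with a throwaway sketch:

```python
import random

# For all x, y in the open interval (0, 1): x * y < x,
# i.e. multiplying survival probabilities only lowers them.
random.seed(0)
for _ in range(10_000):
    x = random.uniform(1e-12, 1 - 1e-12)
    y = random.uniform(1e-12, 1 - 1e-12)
    assert x * y < x  # holds because x > 0 and y < 1

print("x * y < x held for 10,000 samples of x, y in (0, 1)")
```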
> 
>> 3. Are there performance problems where python really can't get there?
>> 
>> This seems like a pretty clear "yes". It shouldn't be surprising. Python
>> has no jit (yes there is pypy, but its compat story isn't here). There
>> is 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread John Dickinson
My responses are inline and to question 5, which, like you, I think is the key.

On 25 May 2016, at 3:48, Sean Dague wrote:

> I've been watching the threads, trying to digest, and find the ways
> this is getting sliced don't quite slice the way I've been thinking
> about it (which might just mean I've been thinking about it wrong).
> However, here is my current set of thoughts on things.
>
> 1. Should OpenStack be open to more languages?
>
> I've long thought the answer should be yes. Especially if it means we
> end up with keystonemiddleware, keystoneauth, oslo.config in other
> languages that let us share elements of infrastructure pretty
> seamlessly. The OpenStack model of building services that register in a
> service catalog and use common tokens for permissions through a bunch of
> services is quite valuable. There are definitely people that have Java
> applications that fit into the OpenStack model, but have no place to
> collaborate on them.
>
> (Note: nothing about the current proposal goes anywhere near this)
>
> 2. Is Go a "good" language to add to the community?
>
> Here I am far more mixed. In programming language time, Go is super new.
> It is roughly the same age as the OpenStack project. The idea that Go and
> Python programmers overlap seems to be because some shops that used
> to do a lot in Python, now do some things in Go.
>
> But when compared to other languages in our bag, Javascript, Bash. These
> are things that go back 2 decades. Unless you have avoided Linux or the
> Web successfully for 2 decades, you've done these in some form. Maybe
> not being an expert, but there are vestigial bits of knowledge there. So
> they *are* different. In the same way that C or Java are different, for
> having age. The likelihood of finding community members that know Python
> + one of these is actually *way* higher than Python + Go, just based on
> duration of existence. In a decade that probably won't be true.
>
> 3. Are there performance problems where python really can't get there?
>
> This seems like a pretty clear "yes". It shouldn't be surprising. Python
> has no jit (yes there is pypy, but its compat story isn't here). There
> is a reason a bunch of python libs have native components for speed -
> numpy, lxml, cryptography, even yaml throws a warning that you should
> really compile the native version for performance when there is full
> python fallback.
>
> The Swift team did a very good job demonstrating where these issues are
> with trying to get raw disk IO. It was a great analysis, and kudos to
> that team for looking at so many angles here.
>
> 4. Do we want to be in the business of building data plane services that
> will all run into python limitations, and will all need to be rewritten
> in another language?
>
> This is a slightly different spin on the question Thierry is asking.
>
> Control Plane services are very unlikely to ever hit a scaling concern
> where rewriting the service in another language is needed for
> performance issues. These are orchestrators, and the time spent in them
> is vastly less than the operations they trigger (start a vm, configure a
> switch, boot a database server). There was a whole lot of talk in the
> threads of "well that's not innovative, no one will want to do just
> that", which seems weird, because that's most of OpenStack. And it's
> pretty much where all the effort in the containers space is right now,
> with a new container fleet manager every couple of weeks. So thinking
> that this is a boring problem no one wants to solve, doesn't hold water
> with me.
>
> Data Plane services seem like they will all end up in the boat of
> "python is not fast enough". Be it serving data from disk, mass DNS
> transfers, time series database, message queues. They will all
> eventually hit the python wall. Swift hit it first because of the
> maturity of the project and they are now focused on this kind of
> optimization, as that's what their user base demands. However I think
> all other data plane services will hit this as well.
>
> Glance (which is partially a data plane service) did hit this limit, and
> the way it is largely mitigated by folks is by using Ceph and exposing that
> directly to Nova so now Glance is only in the location game and metadata
> game, and Ceph is in the data plane game.
>
> When it comes to doing data plan services in OpenStack, I'm quite mixed.
> The technology concerns for data plane
> services are quite different. All the control plane services kind of
> look and feel the same. An API + worker model, a DB for state, message
> passing / rpc to put work to the workers. This is a common pattern and
> is something which even for all the project differences, does end up
> kind of common between parts. Projects that follow this model are
> debuggable as a group not too badly.
>
> 5. Where does Swift fit?
>
> This I think has always been a tension point in the community (at least
> since I joined in 2012). Swift is an original 

Re: [openstack-dev] [craton]

2016-05-25 Thread Ian Cordasco
On May 25, 2016 6:26 PM, "Gerald Bothello"  wrote:
>

Hi Gerald!

Did you have some questions about craton that I can help by answering?

-
Ian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Unable to set metadata_encryption_key

2016-05-25 Thread Djimeli Konrad
Hello Nikhil,

My proposed solution (https://review.openstack.org/319659) was inefficient.
Instead of using a dummy string to identify encrypted data, I have been
thinking about handling the exceptions that are generated when you try to
decrypt unencrypted data, since doing so raises either a "TypeError" or a
"ValueError", as seen here:

https://github.com/openstack/glance/blob/24fae90c179d306c3f6763e9b4412a3e7ebd67e9/glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py#L125

But I am still waiting for a review of the proposed solution.

Thanks
Konrad
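
The approach Konrad describes (treating a TypeError/ValueError from decryption as "this value was never encrypted") can be sketched like this. The `decrypt` function below is a stand-in that mimics the failure modes, not Glance's actual crypt module:

```python
import base64

def decrypt(key, ciphertext):
    # Stand-in for a real decrypt routine: like AES decryption of a
    # urlsafe-base64 payload, it raises ValueError (bad encoding or
    # block length) or TypeError when given data that was never encrypted.
    raw = base64.urlsafe_b64decode(ciphertext)
    if len(raw) % 16 != 0:
        raise ValueError("length is not a multiple of the block size")
    return raw.decode()

def maybe_decrypt(key, value):
    """Return the decrypted value, or the value itself if it is plaintext."""
    try:
        return decrypt(key, value)
    except (TypeError, ValueError):
        # Decryption failed, so assume the value was stored unencrypted.
        return value

secret = base64.urlsafe_b64encode(b"top-secret-value").decode()
print(maybe_decrypt("key", secret))  # top-secret-value
print(maybe_decrypt("key", "swift://acct:user@host/container/obj"))
```

The upside over a dummy-string sentinel is that no marker ever needs to be stored alongside the data; the downside is that a plaintext value which happens to decode cleanly would be misinterpreted, which is why the exception types matter.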
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [craton]

2016-05-25 Thread Gerald Bothello

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-25 Thread Rodrigo Duarte
Thank you all, it's a privilege to be part of a team from where I've
learned so much. =)

On Wed, May 25, 2016 at 1:05 PM, Brad Topol  wrote:

> CONGRATULATIONS Rodrigo!!! Very well deserved!!!
>
> --Brad
>
>
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
>
>
>
> From: Lance Bragstad 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 05/25/2016 09:09 AM
> Subject: Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf
> of Steve Martinelli)
> --
>
>
>
> Congratulations Rodrigo!
>
> Thank you for all the continued and consistent reviews.
>
> On Tue, May 24, 2016 at 1:28 PM, Morgan Fainberg
> <morgan.fainb...@gmail.com> wrote:
>
>I want to welcome Rodrigo Duarte (rodrigods) to the keystone core
>team. Rodrigo has been a consistent contributor to keystone and has been
>instrumental in the federation implementations. Over the last cycle he has
>shown an understanding of the code base and contributed quality reviews.
>
>I am super happy (as proxy for Steve) to welcome Rodrigo to the
>Keystone Core team.
>
>Cheers,
>--Morgan
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>*openstack-dev-requ...@lists.openstack.org?subject:unsubscribe*
>
> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-25 Thread Nikhil Komawar
The agenda is up
https://etherpad.openstack.org/p/newton-glance-import-refactor-midcycle-sync-1


Please note
https://wiki.openstack.org/wiki/VirtualSprints#Image_Import_Refactor_Sync_.231_--_Newton


If you are having issues connecting the hangout, you can reach out to me
on IRC at around 1505UTC tomorrow (Thursday May 26).


On 5/20/16 6:00 PM, Nikhil Komawar wrote:
> Hello all,
>
>
> I want to propose having a dedicated virtual sync next week Thursday May
> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
> Glance. We are making a few updates to the spec; so it would be good to
> have everyone on the same page and soon start merging those spec changes.
>
>
> Also, I would like for this sync to be cross project one so that all the
> different stakeholders are aware of the updates to this work even if you
> just want to listen in.
>
>
> Please vote with +1, 0, -1. Also, if the time doesn't work please
> propose 2-3 additional time slots.
>
>
> We can decide later on the tool and I will setup agenda if we have
> enough interest.
>
>
> [1]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html
>
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] process for making decisions?

2016-05-25 Thread Loo, Ruby
Hi Jim,

Thanks for responding.

>If we do think we need a formal process for making decisions as you
>define above, I think it should be something like:
>
>* bring it up on the mailing list
>* someone /must/ propose a solution along the way, in gerrit, perhaps
>  the person that started the thread if nobody else steps up
>* (if we think this is a really big decision, we can declare that X% of
>  cores should vote on it before landing it)

I think this is reasonable. This thread seems to have petered out, so I've got 
a proposal [1] with your suggestion. Let's see how it goes ;)

--ruby

[1] https://review.openstack.org/321246


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Kyle Mestery
On Wed, May 25, 2016 at 3:55 PM, Armando M.  wrote:
>
>
> On 25 May 2016 at 13:31, Elzur, Uri  wrote:
>>
>> Kyle
>>
>> Thx for your comment. I think these are orthogonal discussions. The heart
>> of this one, for me, and in the Neutron context, is plotting a road forward
>> on new technologies INDEPENDENT of external (even if related) open source
>> projects. I like Armando's direction.
>>
>> The best of my understanding (granted, limited) is that the OvS official
>> position is not supportive of gpe and NSH as long as the Linux Kernel
>> doesn't have them. So we are in a nice little spiral for >2 years, which is
>> really long time if we want to have a reasonable pace of new technology
>> adoption
>>
>
> It would be nice to understand what the concerns are and how to resolve them
> in order to try and find a path where things can be reconciled later on.
> Technology adoption will always be hindered by the potential risk of dealing
> with fork down the road.
>
Fundamentally, I think we just need to draw a hard line in the sand
that we won't be testing things in the gate which carry patches for
downstream components. Once these things land in the kernel and OVS,
they can easily be supported upstream. We've done this for OVS
features for years. Uri is bringing an argument to the wrong place,
IMHO.

>>
>> The IETF is already last call and open source support ???
>>
>> Thx
>>
>> Uri (“Oo-Ree”)
>> C: 949-378-7568
>>
>>
>> -Original Message-
>> From: Kyle Mestery [mailto:mest...@mestery.com]
>> Sent: Wednesday, May 25, 2016 1:00 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>>
>> On Wed, May 25, 2016 at 2:29 PM, Elzur, Uri  wrote:
>> > Armando
>> >
>> >
>> >
>> > I’m asking for a clear answer “I think the position here is as
>> > follows: if a technology is not mainstream, i.e. readily available via
>> > distros and the various channels, it can only be integrated via an
>> > experimental path”
>> >
>> >
>> >
>> > If we can allow for the EXPERIMENTAL path for NSH, then we can stand
>> > up the whole stack in EXPERIMENTAL mode and quickly move to mainstream
>> > when other pieces outside of Neutron fall in place.
>> >
>> >
>> >
>> > As to OVN – it has to be EXPERIMENTAL too. I guess, if I interpret
>> > your response correctly, that unlike their future intention for OVN,
>> > OvS is not willing to signal interest in integrating NSH
>> >
>> Would this be a better thing to discuss on the ovs-dev list [1] rather
>> than the openstack-dev list? I'm sure the OVS devs would be happy to
>> continue a discussion about the possibility of using VXLAN+NSH over GENEVE
>> there.
>>
>> [1] http://mail.openvswitch.org/mailman/listinfo/dev
>>
>> >
>> >
>> > Thx
>> >
>> >
>> >
>> > Uri (“Oo-Ree”)
>> >
>> > C: 949-378-7568
>> >
>> >
>> >
>> > From: Armando M. [mailto:arma...@gmail.com]
>> > Sent: Wednesday, May 25, 2016 9:33 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > 
>> >
>> > Subject: Re: [openstack-dev] [Neutron] support of NSH in
>> > networking-SFC
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On 24 May 2016 at 21:53, Elzur, Uri  wrote:
>> >
>> > Hi Tim
>> >
>> > Sorry for the delay due to travel...
>> >
>> > This note is very helpful!
>> >
>> > We are in agreement that the team including the individuals cited
>> > below are supportive. We also agree that SFC belongs in the
>> > networking-SFC project (with proper API adjustment)
>> >
>> > It seems networking-sfc still holds the position that without OvS
>> > accepting VXLAN-gpe and NSH patches they can't support NSH. I'm trying
>> > to get a clear read on where is this stated as requirement
>> >
>> >
>> >
>> > I think the position here is as follows: if a technology is not
>> > mainstream, i.e. readily available via distros and the various
>> > channels, it can only be integrated via an experimental path. No-one
>> > is preventing anyone from posting patches and instructions to compile
>> > kernels and kernel modules, but ultimately as an OpenStack project
>> > that is supposed to produce commercial and production-grade software,
>> > we should be very sensitive in investing time and energy in supporting
>> > a technology that may or may not have a viable path towards inclusion
>> > into mainstream (Linux and OVS in this instance).
>> >
>> >
>> >
>> > Another clear example we had in the past was DPDK (that enabled
>> > fast path processing in Neutron with OVS) and connection tracking
>> > (that enabled security groups natively build on top of OVS). We, as a
>> > project have consistently avoided endorsing efforts until they mature
>> > and show a clear path forward.
>> >
>> >
>> >
>> >
>> > Like you, we are closely following the progress of the patches and
>> > honestly I have hard 

[openstack-dev] [glance] [stable] Proposal to add Ian Cordasco to glance-stable-maint

2016-05-25 Thread Nikhil Komawar
Hi all,


I would like to propose adding Ian to glance-stable-maint team. The
interest is coming from him and I've already asked for feedback from the
current glance-stable-maint folks, which has been in Ian's favor. Also,
as Ian mentions the current global stable team isn't going to subsume
the per-project teams anytime soon.


Ian is willing to shoulder the responsibility of stable liaison for
Glance [1] which is great news. If no objections are raised by Friday
May 27th 2359UTC, we will go ahead and do the respective changes.


[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Live Migration of rescued instances

2016-05-25 Thread Claudiu Belu
Hello Paul,

So, Hyper-V supports nova-rescue at the moment, the patch actually got in last 
week, thanks to Dan Smith and Jay Pipes. \o/

I've tested live-migration of rescued Hyper-V instances, and it works for both 
Generation 1 and Generation 2 VMs.

I'm thinking that a tempest test for this scenario can be added, once you 
finish with your blueprint. :)

Best regards,

Claudiu Belu


From: Paul Carlton [paul.carlt...@hpe.com]
Sent: Wednesday, May 25, 2016 2:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Nova Live Migration of rescued instances

On 25/05/16 11:59, Gary Kotton wrote:
> Hi,
> The VMware driver supports rescue. Live migration should be pretty simple 
> here as the rescue is only for the disk. So you can migrate the instance to 
> whatever host you want. The only concern with the VMware driver is that the 
> live migration patches are in review and I think that they require a spec or 
> blueprint (https://review.openstack.org/#/c/270116/)
> Thanks
> Gary
>
> On 5/25/16, 10:49 AM, "Paul Carlton"  wrote:
>
>> I'm working on a spec https://review.openstack.org/#/c/307131/ to permit
>> the live migration of rescued instances. I have an implementation that
>> works for libvirt and have addressed lack of support for this feature
>> in other drivers using driver capabilities.
>>
>> I've achieved this for the libvirt driver by simply changing how rescue and
>> unrescue are implemented.  In the libvirt driver, rescue saves the current
>> domain xml in a local file and unrescue uses this to revert the instance to
>> its previous setup, i.e. booting from the instance's primary disk again rather
>> than the rescue image.  However, saving the previous state in the domain
>> xml file is unnecessary since during unrescue the domain is destroyed
>> and restarted. This is effectively a hard reboot, so I just call hard reboot
>> during the unrescue operation.  Hard reboot rebuilds the domain xml
>> from the nova database, so the domain xml file is not needed.
>> However, I was wondering which other drivers support rescue; vmware
>> and xen, I think?  Would it be possible to implement support for live
>> migration of rescued instances for these drivers too?  I'm happy to do
>> the work to implement this, given some guidance from those with more
>> familiarity with these drivers than I have.
>>
>> Thanks
>>
>> --
>> Paul Carlton
>> Software Engineer
>> Cloud Services
>> Hewlett Packard
>> BUK03:T242
>> Longdown Avenue
>> Stoke Gifford
>> Bristol BS34 8QZ
>>
>> Mobile:+44 (0)7768 994283
>> Email:mailto:paul.carlt...@hpe.com
>> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 
>> 1HN Registered No: 690597 England.
>> The contents of this message and any attachments to it are confidential and 
>> may be legally privileged. If you have received this message in error, you 
>> should delete it from your system immediately and advise the sender. To any 
>> recipient of this message within HP, unless otherwise stated you should 
>> consider this message and attachments as "HP CONFIDENTIAL".
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
So vmware supports rescue, but until this patch goes in it does not
support live migration between compute nodes?  So if my change
lands before yours, you would need to set the
"supports_live_migrate_rescued" capability flag to True in your
driver to permit live migration of instances in a rescued state?
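As an aside, the "unrescue is just a hard reboot" simplification described in the quoted message can be illustrated with a toy model. This is deliberately not nova code; every name here is invented, and the point is only that the database record, not a saved XML file, is the source of truth:

```python
# Toy model (NOT nova code; all names are invented) of the idea that
# unrescue can be a hard reboot: the domain definition is always
# rebuildable from the instance record, so nothing needs to be saved.
class FakeDriver:
    def __init__(self, db_record):
        self.db = db_record              # authoritative instance data
        self.domain = self._build_xml()  # current "domain xml"

    def _build_xml(self):
        # Regenerate the domain definition from the database record.
        return {"boot": self.db["root_disk"]}

    def rescue(self, rescue_image):
        # Boot from the rescue image instead of the instance's root disk.
        self.domain = {"boot": rescue_image}

    def unrescue(self):
        # "Hard reboot": destroy and rebuild from the database record;
        # no saved copy of the pre-rescue XML is required.
        self.domain = self._build_xml()


drv = FakeDriver({"root_disk": "vda"})
drv.rescue("rescue.img")
assert drv.domain == {"boot": "rescue.img"}
drv.unrescue()
assert drv.domain == {"boot": "vda"}  # back to the original boot disk
```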

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error, you should 
delete it from your system immediately and advise the sender. To any recipient 
of this message within HP, unless otherwise stated you should consider this 
message and attachments as "HP CONFIDENTIAL".



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Armando M.
On 25 May 2016 at 13:31, Elzur, Uri  wrote:

> Kyle
>
> Thx for your comment. I think these are orthogonal discussions. The heart
> of this one, for me, and in the Neutron context, is plotting a road forward
> on new technologies INDEPENDENT of external (even if related) open source
> projects. I like Armando's direction.
>
> To the best of my understanding (granted, limited), the OvS official
> position is not supportive of gpe and NSH as long as the Linux Kernel
> doesn't have them. So we have been in a nice little spiral for >2 years,
> which is a really long time if we want to have a reasonable pace of new
> technology adoption
>
>
It would be nice to understand what the concerns are and how to resolve
them, in order to try and find a path where things can be reconciled later
on. Technology adoption will always be hindered by the potential risk of
dealing with a fork down the road.


> The IETF is already at last call, and open source support ???
>
> Thx
>
> Uri (“Oo-Ree”)
> C: 949-378-7568
>
>
> -Original Message-
> From: Kyle Mestery [mailto:mest...@mestery.com]
> Sent: Wednesday, May 25, 2016 1:00 PM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
> On Wed, May 25, 2016 at 2:29 PM, Elzur, Uri  wrote:
> > Armando
> >
> >
> >
> > I’m asking for a clear answer “I think the position here is as
> > follows: if a technology is not mainstream, i.e. readily available via
> > distros and the various channels, it can only be integrated via an
> experimental path”
> >
> >
> >
> > If we can allow for the EXPERIMENTAL path for NSH, then we can stand
> > up the whole stack in EXPERIMENTAL mode and quickly move to mainstream
> > when other pieces outside of Neutron fall in place.
> >
> >
> >
> > As to OVN – it has to be EXPERIMENTAL too. I guess, if I interpret
> > your response correctly, that unlike their future intention for OVN,
> > OvS is not willing to signal interest in integrating NSH
> >
> Would this be a better thing to discuss on the ovs-dev list [1] rather
> than the openstack-dev list? I'm sure the OVS devs would be happy to
> continue a discussion about the possibility of using VXLAN+NSH over GENEVE
> there.
>
> [1] http://mail.openvswitch.org/mailman/listinfo/dev
>
> >
> >
> > Thx
> >
> >
> >
> > Uri (“Oo-Ree”)
> >
> > C: 949-378-7568
> >
> >
> >
> > From: Armando M. [mailto:arma...@gmail.com]
> > Sent: Wednesday, May 25, 2016 9:33 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> >
> > Subject: Re: [openstack-dev] [Neutron] support of NSH in
> > networking-SFC
> >
> >
> >
> >
> >
> >
> >
> > On 24 May 2016 at 21:53, Elzur, Uri  wrote:
> >
> > Hi Tim
> >
> > Sorry for the delay due to travel...
> >
> > This note is very helpful!
> >
> > We are in agreement that the team including the individuals cited
> > below are supportive. We also agree that SFC belongs in the
> > networking-SFC project (with proper API adjustment)
> >
> > It seems networking-sfc still holds the position that without OvS
> > accepting VXLAN-gpe and NSH patches they can't support NSH. I'm trying
> > to get a clear read on where this is stated as a requirement
> >
> >
> >
> > I think the position here is as follows: if a technology is not
> > mainstream, i.e. readily available via distros and the various
> > channels, it can only be integrated via an experimental path. No-one
> > is preventing anyone from posting patches and instructions to compile
> > kernels and kernel modules, but ultimately as an OpenStack project
> > that is supposed to produce commercial and production grade software,
> > we should be very sensitive in investing time and energy in supporting
> > a technology that may or may not have a viable path towards inclusion
> into mainstream (Linux and OVS in this instance).
> >
> >
> >
> > Other clear examples we had in the past were DPDK (which enabled
> > fast path processing in Neutron with OVS) and connection tracking
> > (which enabled security groups natively built on top of OVS). We, as a
> > project, have consistently avoided endorsing efforts until they mature
> > and show a clear path forward.
> >
> >
> >
> >
> > Like you, we are closely following the progress of the patches and
> > honestly I have a hard time seeing OpenStack supporting NSH in
> > production even by the end of 2017. I think this amounts to slowing down
> the market...
> >
> > I think we need to break the logjam.
> >
> >
> >
> > We are not the ones (Neutron) you're supposed to break the logjam
> > with. I think the stakeholders here go well beyond the Neutron team
> alone.
> >
> >
> >
> >
> > I've reviewed
> > (https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadi
> > um.rst,unified) and found nowhere a guideline suggesting that before a
> > backend has fully implemented and 

Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Armando M.
On 25 May 2016 at 12:29, Elzur, Uri  wrote:

> Armando
>
>
>
> I’m asking for a clear answer “I think the position here is as follows:
> if a technology is not mainstream, i.e. readily available via distros and
> the various channels, it can only be integrated via an experimental path”
>
>
>
> If we can allow for the EXPERIMENTAL path for NSH, then we can stand up
> the whole stack in EXPERIMENTAL mode and quickly move to mainstream when
> other pieces outside of Neutron fall in place.
>

As I said, you're free to experiment. The general directive is to allow
these experimentations to take place and to use them as a feedback tool to
iterate on the abstractions. However, the abstraction would only be
considered community-accepted if there's enough evidence that
there is established support from a broad variety of plugins (open source
and not).


>
>
> As to OVN – it has to be EXPERIMENTAL too. I guess, if I interpret your
> response correctly, that unlike their future intention for OVN,  OvS is not
> willing to signal interest in integrating NSH
>

We're mixing two things here: OVN is not experimenting with (Neutron) APIs
(as it's adopting those as is), but it's experimenting with
implementations. So I would not conflate OVN and NSH in the same
discussion. I simply brought it up as another example (alongside DPDK) of
how innovation can be fostered in open source communities.


>
> Thx
>
>
>
> Uri (“Oo-Ree”)
>
> C: 949-378-7568
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Wednesday, May 25, 2016 9:33 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
>
>
>
>
>
>
> On 24 May 2016 at 21:53, Elzur, Uri  wrote:
>
> Hi Tim
>
> Sorry for the delay due to travel...
>
> This note is very helpful!
>
> We are in agreement that the team including the individuals cited below
> are supportive. We also agree that SFC belongs in the networking-SFC
> project (with proper API adjustment)
>
> It seems networking-sfc still holds the position that without OvS
> accepting VXLAN-gpe and NSH patches they can't support NSH. I'm trying to
> get a clear read on where this is stated as a requirement
>
>
>
> I think the position here is as follows: if a technology is not
> mainstream, i.e. readily available via distros and the various channels, it
> can only be integrated via an experimental path. No-one is preventing
> anyone from posting patches and instructions to compile kernels and kernel
> modules, but ultimately as an OpenStack project that is supposed to produce
> commercial and production grade software, we should be very sensitive in
> investing time and energy in supporting a technology that may or may not
> have a viable path towards inclusion into mainstream (Linux and OVS in this
> instance).
>
>
>
> Other clear examples we had in the past were DPDK (which enabled fast
> path processing in Neutron with OVS) and connection tracking (which enabled
> security groups natively built on top of OVS). We, as a project, have
> consistently avoided endorsing efforts until they mature and show a clear
> path forward.
>
>
>
>
> Like you, we are closely following the progress of the patches and
> honestly I have a hard time seeing OpenStack supporting NSH in production
> even by the end of 2017. I think this amounts to slowing down the market...
>
> I think we need to break the logjam.
>
>
>
> We are not the ones (Neutron) you're supposed to break the logjam with. I
> think the stakeholders here go well beyond the Neutron team alone.
>
>
>
>
> I've reviewed (
> https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
> and found nowhere a guideline suggesting that before a backend has fully
> implemented and merged upstream a technology (i.e. another project outside
> of OpenStack!), OpenStack Neutron can't make any move. ODL is working >2
> years to support NSH using patches, yet to be accepted into Linux Kernel
> (almost done) and OvS (preliminary) - as you stated. Otherwise we create a
> serialization, that gets worse and worse over time and with additional
> layers.
>
> No one suggests that such code needs to be PRODUCTION, but we need a way to
> roll out EXPERIMENTAL functions and later merge them quickly when all
> layers are ready, this creates a nice parallelism and keeps a decent pace
> of rolling out new features broadly supported elsewhere.
>
>
>
> I agree with this last statement; this is for instance what is happening
> with OVN which, in order to work with Neutron, needs patching and stay
> close to trunk etc. The technology is still maturing and the whole Neutron
> integration is in progress, but at least there's a clear signal that it
> will eventually become mainstream. If it did not, I would bet that
> priorities would be focused elsewhere.
>
>
>
> You asked in a previous email whether 

Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Armando M.
On 25 May 2016 at 10:24, Tim Rozet  wrote:

> In my opinion, it is a better approach to break this down into plugin vs
> driver support.  There should be no problem adding support into
> networking-sfc plugin for NSH today.  The OVS driver however, depends on
> OVS as the dataplane - which I can see a solid argument for only supporting
> an official version with a non-NSH solution.  The plugin side should have
> no dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver
> in networking-odl, and use that as our networking-sfc driver, the argument
> about OVS goes away (since neutron/networking-sfc is totally unaware of the
> dataplane at this point).


I am afraid the argument does not go away if the crux of the matter is
exposing implementation aspects over the SFC API where such aspects can
only be realized/understood by a single plugin.


> We would just need to ensure that API calls to networking-sfc specifying
> NSH port pairs returned error if the enabled driver was OVS (until official
> OVS with NSH support is released).
>
>
I am not 100% sure what you mean by specifying NSH port pairs over the API,
but this to me seems to be in violation of the above-mentioned abstraction
principle we're trying to abide by. To date a plugin is allowed to bring its
own extensions; however, that doesn't mean that those extensions can be
universally implemented, and as such they must be considered plugin specific.

Thoughts?


> Tim Rozet
> Red Hat SDN Team
>
> - Original Message -
> From: "Armando M." 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Cc: "Tim Rozet" 
> Sent: Wednesday, May 25, 2016 12:33:16 PM
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
> On 24 May 2016 at 21:53, Elzur, Uri  wrote:
>
> > Hi Tim
> >
> > Sorry for the delay due to travel...
> >
> > This note is very helpful!
> >
> > We are in agreement that the team including the individuals cited below
> > are supportive. We also agree that SFC belongs in the networking-SFC
> > project (with proper API adjustment)
> >
> > It seems networking-sfc still holds the position that without OvS
> > accepting VXLAN-gpe and NSH patches they can't support NSH. I'm trying to
> > get a clear read on where this is stated as a requirement
> >
>
> I think the position here is as follows: if a technology is not mainstream,
> i.e. readily available via distros and the various channels, it can only be
> integrated via an experimental path. No-one is preventing anyone from
> posting patches and instructions to compile kernels and kernel modules, but
> ultimately as an OpenStack project that is supposed to produce commercial
> and production grade software, we should be very sensitive in investing
> time and energy in supporting a technology that may or may not have a
> viable path towards inclusion into mainstream (Linux and OVS in this
> instance).
>
> Other clear examples we had in the past were DPDK (which enabled fast
> path processing in Neutron with OVS) and connection tracking (which enabled
> security groups natively built on top of OVS). We, as a project, have
> consistently avoided endorsing efforts until they mature and show a clear
> path forward.
>
>
> > Like you, we are closely following the progress of the patches and
> > honestly I have a hard time seeing OpenStack supporting NSH in production
> > even by the end of 2017. I think this amounts to slowing down the
> market...
> >
> > I think we need to break the logjam.
> >
>
> We are not the ones (Neutron) you're supposed to break the logjam with. I
> think the stakeholders here go well beyond the Neutron team alone.
>
>
> >
> > I've reviewed (
> >
> https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified
> )
> > and found nowhere a guideline suggesting that before a backend has fully
> > implemented and merged upstream a technology (i.e. another project
> outside
> > of OpenStack!), OpenStack Neutron can't make any move. ODL is working >2
> > years to support NSH using patches, yet to be accepted into Linux Kernel
> > (almost done) and OvS (preliminary) - as you stated. Otherwise we create
> a
> > serialization, that gets worse and worse over time and with additional
> > layers.
> >
> > No one suggests that such code needs to be PRODUCTION, but we need a way
> to
> > roll out EXPERIMENTAL functions and later merge them quickly when all
> > layers are ready, this creates a nice parallelism and keeps a decent pace
> > of rolling out new features broadly supported elsewhere.
> >
>
> I agree with this last statement; this is for instance what is happening
> with OVN which, in order to work with Neutron, needs patching and stay
> close to trunk etc. The technology is still maturing and the whole Neutron
> integration is in progress, but at least there's a clear signal that it
> will eventually become 

Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Chris Dent

On Wed, 25 May 2016, Sean Dague wrote:


I still would rather not put gabbi into the compute API testing this
cycle. Instead learn from the placement side, let people see good
patterns there, and not confuse contributors with multiple ways to test
things in the compute API. Because that requires a lot of digging out
from later (example: mox & mock).


To be clear, I wasn't saying "let's do this immediately" or even
"let's do this this cycle". What I'm trying to do is two things. One
is to lay, slowly, some groundwork on which we can build up an
understanding of two things:

* what gabbi can do
* which of those things might be useful for nova

That's a conversation that can carry on pretty slowly and doesn't
have to take away from anything else. But as I've been noticing a
lot lately, if we try to go into changes without having some
agreement on the words we're using, we're not going to get anywhere,
so you know, let's have a chilled chat about this stuff and see
where it takes us. That's an important part of the process and the
medium of email is a reasonable place for that process (inclusive,
asynchronous, addressable).

The other is to give people who do have the wherewithal to improve
their stuff with gabbi (be that stuff nova or something else) some greater
visibility into gabbi's existence and prowess. Knowing is half the
battle, etc.
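For anyone who hasn't seen gabbi before: a test file is just YAML that gabbi turns into an ordered sequence of HTTP requests plus assertions against the responses. A minimal sketch (the URLs and JSON paths below are illustrative, not taken from any real nova or placement suite) looks roughly like:

```yaml
# Illustrative gabbi file; the endpoints are made up for this example.
tests:
  - name: list resource providers when none exist
    GET: /resource_providers
    status: 200
    response_json_paths:
      $.resource_providers: []

  - name: create a resource provider
    POST: /resource_providers
    request_headers:
      content-type: application/json
    data:
      name: fake-host
    status: 201
```

Each entry runs in order against the service under test, which is part of what makes the format readable both as a test and as documentation of API behaviour.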


And we still have this whole api-ref site which is only 50% verified
(and we still need to address a number of microversion issues) -
http://burndown.dague.org/. We said at the beginning of the cycle
api-ref and policy in code were our 2 API priorities. Until those are
well in the bag I don't want to take the energy and care to make sure we
do a pivot on test strategy to something completely new in a way that is
easy for everyone to contribute to and review.


a) I promise to be a good boy and get involved there. I keep meaning
   to and a variety of other things keep coming up (including simply
   the need to be in a different zone to clear out the crazy) and I
   feel lame about it.

b) I'd like to disabuse you of this notion that there is a pivot
   involved here or being suggested here. I prefer to think of it as
   an augmentation.

   However, even if it is a pivot: So what? Sometimes we need to
   make changes. Sometimes because it is necessary and we need new
   functionality. Sometimes simply because changing things up a bit
   provides a _much_ needed shift in perspective.

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Kyle Mestery
On Wed, May 25, 2016 at 2:29 PM, Elzur, Uri  wrote:
> Armando
>
>
>
> I’m asking for a clear answer “I think the position here is as follows: if a
> technology is not mainstream, i.e. readily available via distros and the
> various channels, it can only be integrated via an experimental path”
>
>
>
> If we can allow for the EXPERIMENTAL path for NSH, then we can stand up the
> whole stack in EXPERIMENTAL mode and quickly move to mainstream when other
> pieces outside of Neutron fall in place.
>
>
>
> As to OVN – it has to be EXPERIMENTAL too. I guess, if I interpret your
> response correctly, that unlike their future intention for OVN,  OvS is not
> willing to signal interest in integrating NSH
>
Would this be a better thing to discuss on the ovs-dev list [1] rather
than the openstack-dev list? I'm sure the OVS devs would be happy to
continue a discussion about the possibility of using VXLAN+NSH over
GENEVE there.

[1] http://mail.openvswitch.org/mailman/listinfo/dev

>
>
> Thx
>
>
>
> Uri (“Oo-Ree”)
>
> C: 949-378-7568
>
>
>
> From: Armando M. [mailto:arma...@gmail.com]
> Sent: Wednesday, May 25, 2016 9:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
>
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
>
>
>
>
>
>
> On 24 May 2016 at 21:53, Elzur, Uri  wrote:
>
> Hi Tim
>
> Sorry for the delay due to travel...
>
> This note is very helpful!
>
> We are in agreement that the team including the individuals cited below are
> supportive. We also agree that SFC belongs in the networking-SFC project
> (with proper API adjustment)
>
> It seems networking-sfc still holds the position that without OvS accepting
> VXLAN-gpe and NSH patches they can't support NSH. I'm trying to get a clear
> read on where this is stated as a requirement
>
>
>
> I think the position here is as follows: if a technology is not mainstream,
> i.e. readily available via distros and the various channels, it can only be
> integrated via an experimental path. No-one is preventing anyone from
> posting patches and instructions to compile kernels and kernel modules, but
> ultimately as an OpenStack project that is supposed to produce commercial and
> production grade software, we should be very sensitive in investing time and
> energy in supporting a technology that may or may not have a viable path
> towards inclusion into mainstream (Linux and OVS in this instance).
>
>
>
> Other clear examples we had in the past were DPDK (which enabled fast
> path processing in Neutron with OVS) and connection tracking (which enabled
> security groups natively built on top of OVS). We, as a project, have
> consistently avoided endorsing efforts until they mature and show a clear
> path forward.
>
>
>
>
> Like you, we are closely following the progress of the patches and honestly
> I have a hard time seeing OpenStack supporting NSH in production even by the
> end of 2017. I think this amounts to slowing down the market...
>
> I think we need to break the logjam.
>
>
>
> We are not the ones (Neutron) you're supposed to break the logjam with. I
> think the stakeholders here go well beyond the Neutron team alone.
>
>
>
>
> I've reviewed
> (https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
> and found nowhere a guideline suggesting that before a backend has fully
> implemented and merged upstream a technology (i.e. another project outside
> of OpenStack!), OpenStack Neutron can't make any move. ODL is working >2
> years to support NSH using patches, yet to be accepted into Linux Kernel
> (almost done) and OvS (preliminary) - as you stated. Otherwise we create a
> serialization, that gets worse and worse over time and with additional
> layers.
>
> No one suggests that such code needs to be PRODUCTION, but we need a way to
> roll out EXPERIMENTAL functions and later merge them quickly when all layers
> are ready, this creates a nice parallelism and keeps a decent pace of
> rolling out new features broadly supported elsewhere.
>
>
>
> I agree with this last statement; this is for instance what is happening
> with OVN which, in order to work with Neutron, needs patching and stay close
> to trunk etc. The technology is still maturing and the whole Neutron
> integration is in progress, but at least there's a clear signal that it
> will eventually become mainstream. If it did not, I would bet that
> priorities would be focused elsewhere.
>
>
>
> You asked in a previous email whether Neutron wanted to kept itself hostage
> of OVS. My answer to you is NO: we have many technology stack options we can
> rely on in order to realize abstractions so long as they are open, and have
> a viable future.
>
>
>
>
> Thx
>
> Uri (“Oo-Ree”)
> C: 949-378-7568
>
> -Original Message-
> From: Tim Rozet [mailto:tro...@redhat.com]
> Sent: Friday, May 20, 2016 7:01 PM
> To: OpenStack Development Mailing List (not for 

[openstack-dev] [horizon] Horizon in devstack is broken, rechecks are futile

2016-05-25 Thread Timur Sufiev
Dear Horizon contributors,

The dsvm-integration test job has been legitimately failing for the last
~24 hours; please do not recheck your patches if you see that almost all
integration tests fail (and only those tests) - it won't help. The fix for
the django_openstack_auth issue which was uncovered by the recent devstack
change (see https://bugs.launchpad.net/horizon/+bug/1585682) is being
worked on. Stay tuned; there will be another notification when rechecks
become meaningful again.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Emily K Hugenbruch

Hi Sean,
Thanks for volunteering!  Please fill out the signup form here:
https://openstackfoundation.formstack.com/forms/mentoring

~Emily Hugenbruch
IRC: ekhugen

Date: Wed, 25 May 2016 18:22:21 +
From: "Sean M. Collins" 
To: "OpenStack Development Mailing List (not for usage questions)"
 
Subject: Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in
 specific technical areas
Message-ID:

<01000154e927339b-3f84b310-ed09-4dc1-8e30-13f4a64735e4-000...@email.amazonses.com>


Content-Type: text/plain; charset=utf-8

I can be one of the mentors for those interested in the Neutron project

--
Sean M. Collins
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo_config] Improving Config Option Help Texts

2016-05-25 Thread Morgan Fainberg
On Wed, May 25, 2016 at 2:48 AM, Erno Kuvaja  wrote:

> On Tue, May 24, 2016 at 8:58 PM, John Garbutt 
> wrote:
>
>> On 24 May 2016 at 19:03, Ian Cordasco  wrote:
>> > -Original Message-
>> > From: Erno Kuvaja 
>> > Reply: OpenStack Development Mailing List (not for usage questions)
>> > 
>> > Date: May 24, 2016 at 06:06:14
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > 
>> > Subject:  [openstack-dev] [all][oslo_config] Improving Config Option
>> Help Texts
>> >
>> >> Hi all,
>> >>
>> >> Based on the not-yet-merged spec of categorized config options [0], some
>> >> projects seem to have started improving the config option help texts. This
>> >> is great, but I noticed a scary trend of clutter being added in these
>> >> sections. Looking at individual changes it does not look that bad at all
>> >> in the code: ~20 lines of well-structured templating. Until you start
>> >> comparing it to the example config files. Lots of this data is redundant
>> >> to what is generated into the example configs already, and then the maths
>> >> struck me.
>> >>
>> >> In Glance alone we have ~120 config options (this does not include
>> >> glance_store nor any other dependencies we pull in for our configs, like
>> >> Keystone auth). Those +20 lines of templating just became over 2000
>> >> lines of clutter in the example configs, and if all projects do that the
>> >> issue multiplies. I think no-one with good intentions can say that it's
>> >> beneficial for our deployers and admins, who are already struggling with
>> >> the configs.
>> >>
>> >> So I beg you, when you make these changes to the config option help
>> >> fields, keep them short and compact. We have the Configuration Docs for
>> >> extended descriptions and cutely formatted repetitive fields, but let's
>> >> keep those out of the generated (example) config files. At least I would
>> >> like to be able to fit more than 3 options on the screen at a time when
>> >> reading configs.
>> >>
>> >> [0] https://review.openstack.org/#/c/295543/
>> >
>> > Hey Erno,
>> >
>> > So here's where I have to very strongly disagree with you. That spec
>> > was caused by operator feedback, specifically for projects that
>> > provide multiple services that may or may not have separate config
>> > files, and which already have "short and compact" descriptions
>> > that are not very helpful to operators.
>>
>> +1
>>
>> The feedback at operator sessions in Manchester and Austin seemed to
>> back up the need for better descriptions.
>>
>>
> I'm all for _better_ descriptions.
>
>
>> More precisely, Operators should not need to read the code to
>> understand how to use the configuration option.
>>
>> Now often that means they are longer. But they shouldn't be too long.
>>
>>
> Let me give an example of what I see as clutter with the newly proposed
> help texts:
>
> Glance config files are split per service. So we have files
> glance-api.conf, glance-registry.conf, glance-scrubber.conf etc.
> We should not need to add 300 lines (once for each option) to
> glance-api.conf containing repetitive:
> """
>
> Services which consume this:
> * ``glance-api``
> """
> As it's glance-api.conf, this _should_ be self-explanatory. It gets
> worse for certain options we have in multiple config files, which will have:
> """
>
> Services which consume this:
> * ``glance-api`` (mandatory for v1; optional for v2)
> * ``image scrubber`` (a periodic task)
> * ``cache prefetcher`` (a periodic task)
> """
> Which is kind of correct, but as all three of these services have their own
> configs, changing it in one does not necessarily affect the rest
> (glance-api.conf is the exception here if it is available and -scrubber and/or
> -cache configs are not). So now adding these lines to glance-scrubber.conf
> gives the impression that glance-api consumes it from there, which is false.
>
> Will all options in [keystone_authtoken] have a list of every single
> OpenStack service consuming it? I certainly hope not.
>
> The next part of the clutter is adding another ~300 redundant lines:
> """
>
> Possible values:
> * A valid port number
> """
> This is a specific example for PortOpt. Currently the config generator
> produces the following from a single-line help text:
> """
> # Port the registry server is listening on. (port value)
> # Minimum value: 0
> # Maximum value: 65535
> #registry_port = 9191
> """
> Does "Possible values:\n A valid port number" add any value to
> that help? I've seen the same with IntOpt, where the config generator
> already adds "(integer value)" and we add "Possible values:\n Valid Integer".
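To make the redundancy concrete, here is a small pure-Python sketch — the option class and generator function are simplified stand-ins, NOT the real oslo.config API — showing that a typed option's metadata (here, the port bounds) already lets a sample-config generator say everything "Possible values: a valid port number" would, starting from a one-line help string:

```python
# Illustrative stand-ins only; oslo.config's real PortOpt and the
# oslo-config-generator differ in detail.
from dataclasses import dataclass


@dataclass
class PortOpt:
    name: str
    default: int
    help: str
    min: int = 0
    max: int = 65535


def sample_entry(opt: PortOpt) -> str:
    # Mimics the generator output quoted above for the registry port:
    # the type annotation and bounds come from the option itself.
    return "\n".join([
        "# %s (port value)" % opt.help,
        "# Minimum value: %d" % opt.min,
        "# Maximum value: %d" % opt.max,
        "#%s = %d" % (opt.name, opt.default),
    ])


opt = PortOpt("registry_port", 9191,
              "Port the registry server is listening on.")
print(sample_entry(opt))
```

Running this prints the same four-line sample quoted above, with no "Possible values" boilerplate needed in the help string itself.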
>
> > The *example* config files will have a lot more detail in them. Last I
>> > saw (I've stopped driving that specification) there was going to be a
>> > way to generate config files 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Ben Swartzlander

On 05/25/2016 06:48 AM, Sean Dague wrote:

I've been watching the threads, trying to digest, and find that the way
this is getting sliced doesn't quite match the way I've been thinking
about it (which might just mean I've been thinking about it wrong).
However, here is my current set of thoughts on things.

1. Should OpenStack be open to more languages?

I've long thought the answer should be yes. Especially if it means we
end up with keystonemiddleware, keystoneauth, oslo.config in other
languages that let us share elements of infrastructure pretty
seamlessly. The OpenStack model of building services that register in a
service catalog and use common tokens for permissions through a bunch of
services is quite valuable. There are definitely people that have Java
applications that fit into the OpenStack model, but have no place to
collaborate on them.

(Note: nothing about the current proposal goes anywhere near this)

2. Is Go a "good" language to add to the community?

Here I am far more mixed. In programming language time, Go is super new.
It is roughly the same age as the OpenStack project. The idea that Go and
Python programmers overlap seems to come from some shops that used
to do a lot in Python now doing some things in Go.

But compare that to the other languages in our bag: Javascript and Bash.
Those go back two decades. Unless you have successfully avoided Linux or
the Web for two decades, you've done these in some form. Maybe not as an
expert, but there are vestigial bits of knowledge there. So they *are*
different, in the same way that C or Java are different, for having age.
The likelihood of finding community members that know Python
+ one of these is actually *way* higher than Python + Go, just based on
duration of existence. In a decade that probably won't be true.


Thank you for bringing up this point. My major concern boils down to the 
likelihood that Go will never be well understood by more than a small 
subset of the community. (When I say "well understood" I mean years of 
experience with thousands of lines of code -- not "I can write hello 
world".)


You expect this problem to get better in the future -- I expect this 
problem to get worse. Not all programming languages survive. Google for 
"dead programming languages" some time and you'll find many examples. 
The problem is that it's never obvious, when a language is young, that 
something more popular will come along and kill it.


I don't want to imply that Golang is especially likely to die any time 
soon. But every time you add a new language to a community, you increase 
the *risk* that one of the programming languages used by the community 
will eventually fall out of popularity, and it will become hard or 
impossible to find people to maintain parts of the code.


I tend to take a long view of software lifecycles, having witnessed the 
death of projects due to bad decisions before. Does anyone expect 
OpenStack to still be around in 10 years? 20 years? What is the 
likelihood that both Python and Golang are both still popular languages 
then? I guarantee [1] that it's lower than the likelihood that only 
Python is still a popular language.


Adding a new language adds risk that new contributors won't understand 
some parts of the code. Period. It doesn't matter what the language is.


My proposed solution is to draw the community line at the language 
barrier line. People in this community are expected to understand 
Python. Anyone can start other communities, and they can overlap with 
ours, but let's make it clear that they're not the same.


-Ben Swartzlander

[1] For all X, Y in (0, 1): X * Y < X


3. Are there performance problems where python really can't get there?

This seems like a pretty clear "yes". It shouldn't be surprising. Python
has no JIT (yes, there is PyPy, but its compat story isn't there). There
is a reason a bunch of Python libs have native components for speed -
numpy, lxml, cryptography; even yaml throws a warning that you should
really compile the native version for performance, even though there is a
full Python fallback.

The Swift team did a very good job demonstrating where these issues are
with trying to get raw disk IO. It was a great analysis, and kudos to
that team for looking at so many angles here.

4. Do we want to be in the business of building data plane services that
will all run into python limitations, and will all need to be rewritten
in another language?

This is a slightly different spin on the question Thierry is asking.

Control Plane services are very unlikely to ever hit a scaling concern
where rewriting the service in another language is needed for
performance issues. These are orchestrators, and the time spent in them
is vastly less than the operations they trigger (start a vm, configure a
switch, boot a database server). There was a whole lot of talk in the
threads of "well that's not innovative, no one will want to do just
that", which seems weird, because that's 

Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Elzur, Uri
Hi Armando

I hear (hopefully right ☺) that we have an agreement that the SFC abstraction 
we want to follow (and that, in my mind, includes networking-sfc and OVN – pls 
feel free to correct me if wrong!) is the NSH approach. This includes the 
internal representation of the chain, support of metadata, etc. It is not clear 
to me who is interested in supporting the wire protocol too; however, given its 
IETF status, I'm not sure why it would be considered "pollution".

Igor Duarte has a proposal I believe he was working with the networking-sfc 
folks on

Thx

Uri (“Oo-Ree”)
C: 949-378-7568

From: Armando M. [mailto:arma...@gmail.com]
Sent: Wednesday, May 25, 2016 11:06 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC



On 24 May 2016 at 22:07, Elzur, Uri 
> wrote:
Hi Armando

Pls see below [UE]

Thx

Uri (“Oo-Ree”)
C: 949-378-7568

From: Armando M. [mailto:arma...@gmail.com]
Sent: Friday, May 20, 2016 9:08 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC



On 20 May 2016 at 17:37, Elzur, Uri 
> wrote:
Hi Armando, Cathy, All

First, I apologize for the delay, returning from a week-long international trip 
(yes, I know, a lousy excuse on many accounts…)

If I attempt to summarize all the responses, it seems like:

• A given abstraction in Neutron is allowed (e.g. in support of SFC), 
preferably not specific to a given technology e.g. NSH for SFC

• A stadium project is not held to the same tests (but we do not have a 
“formal” model here, today) and therefore can support even a specific 
technology e.g. NSH (definitely better with abstractions to meet Neutron 
standards for future integration)

A given abstraction is allowed so long as there is enough agreement that it is 
indeed technology agnostic. If the abstraction maps neatly to a given 
technology, the implementation may exist within the context of Neutron or 
elsewhere.
[UE] I think we have agreement SFC is a needed abstraction

Having said that I'd like to clarify a point: you seem to refer to the stadium 
as a golden standard. The stadium is nothing more than a list of software 
repositories that the Neutron team develops and maintains. Depending on the 
maturity of a specific repo, it may or may not implement an abstraction with 
integration code to non-open technologies. This is left at the discretion of 
the group of folks who are directly in control of the specific repo, though it 
has been the 
general direction to strongly encourage and promote openness throughout the 
entire stack that falls under the responsibility of the Neutron team and thus 
the stadium.

[UE] I carefully read 
(https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
 and hope I understand the Stadium. All NSH patches that we'd like to support 
are OPEN. I'm still looking for the place where a restriction prevents 
networking-sfc from moving forward on NSH before all other projects external 
to OpenStack have completed their work. Pls see also my reply to Tim Rozet

However,

• There still is a chicken and egg phenomenon… how can a technology 
become mainstream with OPEN SOURCE support if we can't get OpenStack to 
support the required abstractions before the technology is adopted elsewhere??

o   Especially as a Stadium project, can we let Neutron lead the industry, 
given broad enough community interest?

• BTW, in this particular case, there originally was direct ODL access as an 
NSH solution (i.e. NO OpenStack option); then we got Tacker (now a Neutron 
Stadium project, if I get it right) to support SFC and NSH, but we are still 
told that networking-sfc (another Neutron Stadium project) can't do the 
same….
I cannot comment for the experience and the conversations you've had so far as 
I have no context. All I know is that if you want to experiment with 
OpenDaylight and its NSH provider and want to use that as a Neutron backend you 
can. However, if that requires new abstractions, these new abstractions must be 
agreed to by all interested parties, be technology agnostic, and allow for 
multiple implementations, an open one included. That's the nature of OpenStack.
[UE] thanks for this clarification! I think it means that, now that we all 
agree the SFC abstraction is needed, NSH is an emerging standard, and the 
networking-sfc team agrees to support NSH – there should be no reason to wait. 
As Tim Rozet mentioned, an ODL driver with explicit SFC support is WIP, so it 
sounds like NSH support in it should be a go!

So long as the required support is not specific to NSH and the API is not 
polluted by implementation 

Re: [openstack-dev] [puppet] proposal about puppet versions testing coverage

2016-05-25 Thread Matt Fischer
On Wed, May 25, 2016 at 1:09 PM, Emilien Macchi  wrote:

> Greetings folks,
>
> In a recent poll [1], we asked our community which version
> of Puppet they are running.
> The motivation is to make sure our Puppet OpenStack CI tests the right
> things, the ones that are really useful.
>
> Right now, we run unit test jobs on puppet on 3.3, 3.4, 3.6, 3.8, 4.0
> and latest (current is 4.5).
> We also have functional jobs (non-voting, in periodic pipeline), that
> run puppet 4.5. Those ones break very often because nobody (except
> me?) regularly checks the puppet4 periodic jobs.
>
> So here's my proposal, feel free to comment:
>
> * Reduce puppet versions testing to 3.6, 3.8, 4.5 and latest (keep the
> last one non-voting). It seems that 3.6 and 3.8 are widely used by our
> consumers (default in centos7 & ubuntu LTS), and 4.5 is the latest
> release in the 4.x series.
>


+1



> * Move functional puppet4 jobs from experimental to check pipeline
> (non-voting). They'll bring very useful feedback. It will add 6 more
> jobs in the check queue, but since we will drop 2 unit test jobs (in
> both check & gate pipelines), it will add 2 jobs in total (in terms of
> time, unit test jobs take 15 min and functional jobs take ~30 min), so
> the impact on node consumption is IMHO not relevant here.
>


What's the plan for making Puppet4 jobs voting? This is a good start, but I
think we should move towards voting jobs.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Sean Dague
On 05/25/2016 02:54 PM, Andrew Laski wrote:
> 
> 
> On Wed, May 25, 2016, at 11:13 AM, Chris Dent wrote:
>>
>> Earlier this year I worked with jaypipes to compose a spec[1] for using
>> gabbi[2] with nova. Summit rolled around and there were some legitimate
>> concerns about the focus of the spec being geared towards replacing the
>> api sample tests. I wasn't at summit ☹ but my understanding of the
>> outcome of the discussion was (please correct me if I'm wrong):
>>
>> * gabbi is not a straight replacement for the api-samples (notably
>>it doesn't address the documentation functionality provided by
>>api-samples)
>>
>> * there are concerns, because of the style of response validation
>>that gabbi does, that there could be a coverage gap[3] when a
>>representation changes (in, for example, a microversion bump)
>>
>> * we'll see how things go with the placement API work[4], which uses
>>gabbi for TDD, and allow people to learn more about gabbi from
>>that
>>
>> Since that all seems to make sense, I've gone ahead and abandoned
>> the review associated with the spec as overreaching for the time
>> being.
>>
>> I'd like, however, to replace it with a spec that is somewhat less
>> reaching in its plans. Rather than replace api-samples with gabbi,
>> augment existing tests of the API with gabbi-based tests. I think
>> this is a useful endeavor that will find and fix inconsistencies but
>> I'd like to get some feedback from people so I can formulate a spec
>> that will actually be useful.
>>
>> For reference, I started working on some integration of tempest and
>> gabbi[5] (based on some work that Mehdi did), and in the first few
>> minutes of writing tests found and reported bugs against nova and
>> glance, some of which have even been fixed since then. Win! We like
>> win.
>>
>> The difficulty here, and the reason I'm writing this message, is
>> simply this: The biggest benefit of gabbi is the actual writing and
>> initial (not the repeated) running of the tests. You write tests, you
>> find bugs and inconsistencies. The second biggest benefit is going
>> back and being a human and reading the tests and being able to see
>> what the API is doing, request and response in the same place. That's
>> harder to write a spec about than "I want to add or change feature X".
>> There's no feature here.
> 
> After reading this my first thought is that gabbi would handle what I'm
> testing in
> https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
> or any of the other tests in that directory. Does that seem accurate?
> And what would the advantage of gabbi be versus what I have currently
> written?

It would.

I still would rather not put gabbi into the compute API testing this
cycle. Instead learn from the placement side, let people see good
patterns there, and not confuse contributors with multiple ways to test
things in the compute API. Because that requires a lot of digging out
from later (example: mox & mock).

And we still have this whole api-ref site which is only 50% verified
(and we still need to address a number of microversion issues) -
http://burndown.dague.org/. We said at the beginning of the cycle
api-ref and policy in code were our 2 API priorities. Until those are
well in the bag I don't want to take the energy and care to make sure we
do a pivot on test strategy to something completely new in a way that is
easy for everyone to contribute to and review.

I feel like we have a good sandbox for this in the placement API, and we
can evaluate at end of cycle for next steps.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Elzur, Uri
Tim

+1 for me (guess not surprising...)

Thx

Uri (“Oo-Ree”)
C: 949-378-7568


-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: Wednesday, May 25, 2016 10:24 AM
To: Armando M. ; Elzur, Uri 
Cc: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

In my opinion, it is a better approach to break this down into plugin vs. 
driver support.  There should be no problem adding support for NSH into the 
networking-sfc plugin today.  The OVS driver, however, depends on OVS as the 
dataplane - for which I can see a solid argument for only supporting an 
official version with a non-NSH solution.  The plugin side should have no 
dependency on OVS.  Therefore, if we add NSH SFC support to an ODL driver in 
networking-odl, and use that as our networking-sfc driver, the argument about 
OVS goes away (since neutron/networking-sfc is totally unaware of the 
dataplane at this point).  We would just need to ensure that API calls to 
networking-sfc specifying NSH port pairs return an error if the enabled driver 
is OVS (until an official OVS with NSH support is released).
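A rough sketch of the capability check described above might look like the
following. All names here are hypothetical, not the actual networking-sfc
plugin or driver interfaces; the point is only that the plugin consults the
active driver's advertised capabilities and rejects NSH port pairs when the
backend cannot realize them:

```python
# Hypothetical sketch -- illustrative class and attribute names only.
class DriverCapabilityError(Exception):
    pass


class SfcPlugin:
    def __init__(self, driver_name, correlation_types):
        self.driver_name = driver_name
        # Correlation (encapsulation) types the backend can realize.
        self.correlation_types = correlation_types

    def create_port_pair(self, port_pair):
        correlation = port_pair.get("correlation", "mpls")
        if correlation not in self.correlation_types:
            raise DriverCapabilityError(
                "driver %s does not support correlation type %r"
                % (self.driver_name, correlation))
        return port_pair


# OVS backend today: no NSH until official OVS support lands.
ovs = SfcPlugin("ovs", {"mpls"})
# An ODL backend with an NSH-capable driver.
odl = SfcPlugin("odl", {"mpls", "nsh"})

odl.create_port_pair({"correlation": "nsh"})      # accepted
try:
    ovs.create_port_pair({"correlation": "nsh"})  # rejected
except DriverCapabilityError as exc:
    print(exc)
```

Once an NSH-capable OVS is released, only the driver's advertised capability
set changes; the plugin-level API and validation stay the same.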

Thoughts?

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Armando M." 
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Tim Rozet" 
Sent: Wednesday, May 25, 2016 12:33:16 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 24 May 2016 at 21:53, Elzur, Uri  wrote:

> Hi Tim
>
> Sorry for the delay due to travel...
>
> This note is very helpful!
>
> We are in agreement that the team including the individuals cited 
> below are supportive. We also agree that SFC belongs in the 
> networking-SFC project (with proper API adjustment)
>
> It seems networking-sfc still holds the position that without OvS 
> accepting VXLAN-gpe and NSH patches they can't support NSH. I'm trying 
> to get a clear read on where this is stated as a requirement
>

I think the position here is as follows: if a technology is not mainstream, 
i.e. readily available via distros and the various channels, it can only be 
integrated via an experimental path. No-one is preventing anyone from posting 
patches and instructions to compile kernels and kernel modules, but ultimately, 
as an OpenStack project that is supposed to produce commercial and production 
grade software, we should be very sensitive about investing time and energy in 
supporting a technology that may or may not have a viable path towards 
inclusion into mainstream (Linux and OVS in this instance).

Other clear examples we had in the past were DPDK (which enabled fast path 
processing in Neutron with OVS) and connection tracking (which enabled security 
groups natively built on top of OVS). We, as a project, have consistently 
avoided endorsing efforts until they mature and show a clear path forward.


> Like you, we are closely following the progress of the patches and 
> honestly I have a hard time seeing OpenStack supporting NSH in 
> production even by the end of 2017. I think this amounts to slowing down 
> the market...
>
> I think we need to break the logjam.
>

We are not the ones (Neutron) you're supposed to break the logjam with. I think 
the stakeholders here go well beyond the Neutron team alone.


>
> I've reviewed
> (https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
> and found nowhere a guideline suggesting that, before a backend has 
> fully implemented and merged a technology upstream (i.e. in another 
> project outside of OpenStack!), OpenStack Neutron can't make any move. 
> ODL has been working >2 years to support NSH using patches yet to be 
> accepted into the Linux Kernel (almost done) and OvS (preliminary) - as 
> you stated. Otherwise we create a serialization that gets worse and 
> worse over time and with additional layers.
>
> No one suggests that such code needs to be PRODUCTION, but we need a 
> way to roll out EXPERIMENTAL functions and later merge them quickly 
> when all layers are ready; this creates a nice parallelism and keeps a 
> decent pace of rolling out new features broadly supported elsewhere.
>

I agree with this last statement; this is for instance what is happening with 
OVN which, in order to work with Neutron, needs patching, staying close to 
trunk, etc. The technology is still maturing and the whole Neutron integration 
is in progress, but at least there's a clear signal that it will eventually 
become mainstream. If it did not, I would bet that priorities would be focused 
elsewhere.

You asked in a previous email whether Neutron wanted to keep itself hostage to 
OVS. My answer to you is NO: we have many technology stack options we can rely 
on in order to realize abstractions, so long as they are open and have a 
viable future.


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Elzur, Uri
Armando

I’m asking for a clear answer “I think the position here is as follows: if a 
technology is not mainstream, i.e. readily available via distros and the 
various channels, it can only be integrated via an experimental path”

If we can allow for the EXPERIMENTAL path for NSH, then we can stand up the 
whole stack in EXPERIMENTAL mode and quickly move to mainstream when other 
pieces outside of Neutron fall in place.

As to OVN – it has to be EXPERIMENTAL too. If I interpret your response 
correctly, I guess that, unlike their future intention for OVN, OvS is not 
willing to signal interest in integrating NSH.

Thx

Uri (“Oo-Ree”)
C: 949-378-7568

From: Armando M. [mailto:arma...@gmail.com]
Sent: Wednesday, May 25, 2016 9:33 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC



On 24 May 2016 at 21:53, Elzur, Uri 
> wrote:
Hi Tim

Sorry for the delay due to travel...

This note is very helpful!

We are in agreement that the team including the individuals cited below are 
supportive. We also agree that SFC belongs in the networking-SFC project (with 
proper API adjustment)

It seems networking-sfc still holds the position that without OvS accepting 
VXLAN-gpe and NSH patches they can't support NSH. I'm trying to get a clear 
read on where this is stated as a requirement

I think the position here is as follows: if a technology is not mainstream, 
i.e. readily available via distros and the various channels, it can only be 
integrated via an experimental path. No-one is preventing anyone from posting 
patches and instructions to compile kernels and kernel modules, but ultimately, 
as an OpenStack project that is supposed to produce commercial and production 
grade software, we should be very sensitive about investing time and energy in 
supporting a technology that may or may not have a viable path towards 
inclusion into mainstream (Linux and OVS in this instance).

Other clear examples we had in the past were DPDK (which enabled fast path 
processing in Neutron with OVS) and connection tracking (which enabled security 
groups natively built on top of OVS). We, as a project, have consistently 
avoided endorsing efforts until they mature and show a clear path forward.


Like you, we are closely following the progress of the patches and honestly I 
have a hard time seeing OpenStack supporting NSH in production even by the end 
of 2017. I think this amounts to slowing down the market...

I think we need to break the logjam.

We are not the ones (Neutron) you're supposed to break the logjam with. I think 
the stakeholders here go well beyond the Neutron team alone.


I've reviewed 
(https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
 and found nowhere a guideline suggesting that, before a backend has fully 
implemented and merged a technology upstream (i.e. in another project outside 
of OpenStack!), OpenStack Neutron can't make any move. ODL has been working 
>2 years to support NSH using patches yet to be accepted into the Linux Kernel 
(almost done) and OvS (preliminary) - as you stated. Otherwise we create a 
serialization that gets worse and worse over time and with additional layers.

No one suggests that such code needs to be PRODUCTION, but we need a way to 
roll out EXPERIMENTAL functions and later merge them quickly when all layers 
are ready; this creates a nice parallelism and keeps a decent pace of rolling 
out new features broadly supported elsewhere.

I agree with this last statement; this is for instance what is happening with 
OVN which, in order to work with Neutron, needs patching, staying close to 
trunk, etc. The technology is still maturing and the whole Neutron integration 
is in progress, but at least there's a clear signal that it will eventually 
become mainstream. If it did not, I would bet that priorities would be focused 
elsewhere.

You asked in a previous email whether Neutron wanted to keep itself hostage to 
OVS. My answer to you is NO: we have many technology stack options we can rely 
on in order to realize abstractions, so long as they are open and have a 
viable future.


Thx

Uri (“Oo-Ree”)
C: 949-378-7568
-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com]
Sent: Friday, May 20, 2016 7:01 PM
To: OpenStack Development Mailing List (not for usage questions) 
>; 
Elzur, Uri >
Cc: Cathy Zhang >
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hi Uri,
I originally wrote the Tacker->ODL SFC NSH piece and have been working with 
Tacker and networking-sfc team to bring it upstream into OpenStack.  Cathy, 
Stephen, Louis and the rest of the 

Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Chris Dent

On Wed, 25 May 2016, Andrew Laski wrote:


After reading this my first thought is that gabbi would handle what I'm
testing in
https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
or any of the other tests in that directory. Does that seem accurate?
And what would the advantage of gabbi be versus what I have currently
written?


Yes, things like that seem like they could be a pretty good candidate.
Assuming you had a GabbiFixture subclass that did what you're doing in
your setUp()[1] and test loader[2] then the gabbi file would look
something like this (untested, but if you want to try this together
tomorrow I reckon we could make it go pretty quickly):

```yaml
fixtures:
    - LaskiFixture

tests:
    - name: create a server
      POST: /servers
      request_headers:
          content-type: application/json
      data:
          server:
              name: foo
              # the fixture injects this value
              imageRef: $ENVIRON['image_ref']
              flavorRef: 1
      status: 201
      response_headers:
          # check headers however you like here

    - name: get the server
      # this assumes the post above had a location response
      # header
      GET: $LOCATION
      response_json_paths:
          $.server.name: foo
          $.server.image.id: $ENVIRON['image_ref']
          $.server.flavor.id: 1

    - name: delete the server
      DELETE: $LAST_URL
      status: 204

    - name: make sure it really is gone
      GET: $LAST_URL
      status: 404
```
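And for completeness, a fixture like the LaskiFixture named in that YAML
would be a subclass of gabbi's fixture base class. The following is a rough,
hypothetical sketch: the class body and the image ref value are stand-ins,
and a minimal substitute base class is defined inline so the snippet stands
alone (the real hooks live on gabbi.fixture.GabbiFixture, which provides the
same start_fixture/stop_fixture pair):

```python
import os


# Stand-in for gabbi.fixture.GabbiFixture so this sketch runs standalone;
# the real base class provides the same two hooks.
class GabbiFixture:
    def start_fixture(self):
        pass

    def stop_fixture(self):
        pass


class LaskiFixture(GabbiFixture):
    """Hypothetical fixture mirroring what the test's setUp() would do:
    prepare the environment and export values the YAML reads via
    $ENVIRON, e.g. the image_ref used in the create-server request."""

    IMAGE_REF = "155d900f-4e14-4e4c-a73d-069cbf4541e6"  # made-up UUID

    def start_fixture(self):
        # A real fixture would also stand up the nova API under test
        # (or rely on the loader's wsgi intercept); here we only export
        # the value the YAML substitutes.
        os.environ["image_ref"] = self.IMAGE_REF

    def stop_fixture(self):
        os.environ.pop("image_ref", None)
```

gabbi calls start_fixture before the YAML file's tests run and stop_fixture
after, so the $ENVIRON['image_ref'] substitutions above resolve against what
the fixture exported.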

To me the primary advantages are:

* cleaner representation of the request response cycle of a sequence
  of requests without random other stuff
* under the covers it's direct interaction with the wsgi application
  with regular plain ol http clients 
* response validation that can be as simple or complex as you like
  with json paths
  * or even more complex if you want to write your own response
    handlers[5]
* It's pretty easy to write (and correct if you get it wrong) these things.

That's a start at least.

Thanks for the good leading question.

[1] The placement api review[3] has a fairly straightforward
fixture[4] that has some but not all of the ideas that your fixture
would need. As Sergey correctly points out it needs to be cleaned up
now that it has a subclass.

[2] The test loader associates the gabbi yaml files with the wsgi
application that is being tested and produces standard python
unittest tests. There's an example in the placement api again:
https://review.openstack.org/#/c/293104/47/nova/tests/functional/gabbi/test_placement_api.py

[3] https://review.openstack.org/#/c/293104/

[4] 
https://review.openstack.org/#/c/293104/47/nova/tests/functional/gabbi/fixtures.py

[5] https://gabbi.readthedocs.io/en/latest/handlers.html
--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] proposal about puppet versions testing coverage

2016-05-25 Thread Emilien Macchi
Greetings folks,

In a recent poll [1], we asked our community which version
of Puppet they are running.
The motivation is to make sure our Puppet OpenStack CI tests the right
things, the ones that are really useful.

Right now, we run unit test jobs on puppet on 3.3, 3.4, 3.6, 3.8, 4.0
and latest (current is 4.5).
We also have functional jobs (non-voting, in periodic pipeline), that
run puppet 4.5. Those ones break very often because nobody (except
me?) regularly checks the puppet4 periodic jobs.

So here's my proposal, feel free to comment:

* Reduce puppet versions testing to 3.6, 3.8, 4.5 and latest (keep the
last one non-voting). It seems that 3.6 and 3.8 are widely used by our
consumers (default in centos7 & ubuntu LTS), and 4.5 is the latest
release in the 4.x series.
* Move functional puppet4 jobs from experimental to check pipeline
(non-voting). They'll bring very useful feedback. It will add 6 more
jobs in the check queue, but since we will drop 2 unit test jobs (in
both check & gate pipelines), it will add 2 jobs in total (in terms of
time, unit test jobs take 15 min and functional jobs take ~30 min), so
the impact on node consumption is IMHO not relevant here.

[1] 
https://docs.google.com/forms/d/1rJZxP52LyrFhFTy8w4J_5tnA7-A5g32YVhHSaCd7-F8/edit#responses

Thanks for your feedback,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Andrew Laski


On Wed, May 25, 2016, at 11:13 AM, Chris Dent wrote:
> 
> Earlier this year I worked with jaypipes to compose a spec[1] for using
> gabbi[2] with nova. Summit rolled around and there were some legitimate
> concerns about the focus of the spec being geared towards replacing the
> api sample tests. I wasn't at summit ☹ but my understanding of the
> outcome of the discussion was (please correct me if I'm wrong):
> 
> * gabbi is not a straight replacement for the api-samples (notably
>it doesn't address the documentation functionality provided by
>api-samples)
> 
> * there are concerns, because of the style of response validation
>that gabbi does, that there could be a coverage gap[3] when a
>representation changes (in, for example, a microversion bump)
> 
> * we'll see how things go with the placement API work[4], which uses
>gabbi for TDD, and allow people to learn more about gabbi from
>that
> 
> Since that all seems to make sense, I've gone ahead and abandoned
> the review associated with the spec as overreaching for the time
> being.
> 
> I'd like, however, to replace it with a spec that is somewhat less
> far-reaching in its plans. Rather than replacing api-samples with gabbi,
> it would augment existing tests of the API with gabbi-based tests. I think
> this is a useful endeavor that will find and fix inconsistencies, but
> I'd like to get some feedback from people so I can formulate a spec
> that will actually be useful.
> 
> For reference, I started working on some integration of tempest and
> gabbi[5] (based on some work that Mehdi did), and in the first few
> minutes of writing tests found and reported bugs against nova and
> glance, some of which have even been fixed since then. Win! We like
> win.
> 
> The difficulty here, and the reason I'm writing this message, is
> simply this: The biggest benefit of gabbi is the actual writing and
> initial (not the repeated) running of the tests. You write tests, you
> find bugs and inconsistencies. The second biggest benefit is going
> back and being a human and reading the tests and being able to see
> what the API is doing, request and response in the same place. That's
> harder to write a spec about than "I want to add or change feature X".
> There's no feature here.

After reading this my first thought is that gabbi would handle what I'm
testing in
https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
or any of the other tests in that directory. Does that seem accurate?
And what would the advantage of gabbi be versus what I have currently
written?
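For readers unfamiliar with gabbi, the declarative style at stake here can be sketched in plain Python: each entry pairs a request with its expected response in one place, which is what makes the tests readable as API documentation. This is only a sketch of the idea, not gabbi's actual API; gabbi itself declares tests in YAML files and runs them against a real WSGI app, and the names below (`TESTS`, `fake_api`, `run_declarative_tests`) are hypothetical stand-ins.

```python
# Illustration only: gabbi declares tests in YAML and runs them against a
# real WSGI app; the names here (TESTS, fake_api, run_declarative_tests)
# are hypothetical stand-ins used to show the declarative idea.

# Each entry pairs a request with its expected response, in one place.
TESTS = [
    {"name": "list servers", "method": "GET",
     "url": "/servers", "status": 200},
    {"name": "missing server", "method": "GET",
     "url": "/servers/nope", "status": 404},
]


def fake_api(method, url):
    """Stand-in for a real API endpoint; returns an HTTP status code."""
    if method == "GET" and url == "/servers":
        return 200
    return 404


def run_declarative_tests(tests, handler):
    """Run each declared request; collect (name, expected, got) misses."""
    failures = []
    for t in tests:
        got = handler(t["method"], t["url"])
        if got != t["status"]:
            failures.append((t["name"], t["status"], got))
    return failures


print(run_declarative_tests(TESTS, fake_api))  # [] when everything matches
```

The point of the style is that writing such entries against a live API surfaces inconsistencies immediately, and reading them later shows request and response side by side.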


> 
> I'm also aware that there is concern about adding yet another thing to
> understand in the codebase.
> 
> So what's a reasonable course of action here?
> 
> Thanks.
> 
> P.S: If any other project is curious about using gabbi, it is easier
> to use and set up than this discussion is probably making it sound
> and extremely capable. If you want to try it and need some help,
> just ask me: cdent on IRC.
> 
> [1] https://review.openstack.org/#/c/291352/
> 
> [2] https://gabbi.readthedocs.io/
> 
> [3] This would be expected: Gabbi considers its job to be testing
> the API layer, not the serializers and object that the API might be
> using (although it certainly can validate those things).
> 
> [4] https://review.openstack.org/#/c/293104/
> 
> [5] http://markmail.org/message/z6z6ego4wqdaelhq
> 
> -- 
> Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Trove Meeting notes

2016-05-25 Thread Amrith Kumar
Here are the notes from the (just concluded) Trove meeting.

http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-05-25-18.00.html

Two actions:

ACTION: pmalik and stewie925 to look into the rebase issue (amrith, 18:41:32)
ACTION: amrith pmackinn cp16net to see what this mysql issue is (amrith, 
18:41:50)

Need to pick up the pace on reviews, especially of specs.

-amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Sean M. Collins
I can be one of the mentors for those interested in the Neutron project

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zaqar messages standardization

2016-05-25 Thread Thomas Herve
On Fri, May 20, 2016 at 5:52 PM, Jiri Tomasek  wrote:
> Hey all,
>
> I've been recently working on getting the TripleO UI integrated with
> Zaqar, so it can receive messages from Mistral workflows and act upon
> them without having to do various polling hacks.
>
> Since there is currently quite a large number of new TripleO workflows
> coming to tripleo-common, we need to standardize this communication so
> clients can consume the messages consistently.
>
> I'll try to outline the requirements as I see them, to start the discussion.
>
> Zaqar queues:
> To listen to Zaqar messages, the client has to connect to the Zaqar
> WebSocket, send an authentication message, and subscribe to the queue(s)
> it wants to listen to. The currently pending workflow patches which send
> Zaqar messages [1, 2] expect the queue to be created by the client, with
> its name passed as an input to the workflow [3].
>
> From the client's perspective, it would IMHO be better if all workflows
> sent messages to the same queue and provided means to identify themselves
> by carrying the workflow name and execution id. The reason is that if a
> client creates a queue, triggers the workflow, and then disconnects from
> the socket (the user refreshes the browser), it does not know what queues
> it previously created and which it should listen to. If there is a single
> 'tripleo' queue, then all clients always know that that is where they
> will get all the messages from.
>
> Message identification and content:
> The client should be able to identify a message by its name so it can act
> upon it. The name should probably be relevant to the action or workflow it
> reports on.
>
> {
>   body: {
>     name: 'tripleo.validations.v1.run_validation',
>     execution_id: '123123123',
>     data: {}
>   }
> }
>
> Other parts of the message are optional, but it would be good to provide
> information relevant to the message's purpose, so the client can update
> the relevant state and does not have to make any additional API calls. So
> e.g. in the case of running a validation, the message includes the
> validation id.
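A quick sketch of the client-side flow described in the quoted proposal (connect, authenticate, subscribe), shown as the JSON frames a client would send over the Zaqar v2 WebSocket. The action and field names follow my reading of the Zaqar websocket API and should be treated as assumptions to double-check against the Zaqar docs:

```python
import json

# Assumed frame shapes for the Zaqar v2 WebSocket; verify the exact
# action and field names against the Zaqar documentation.


def authenticate_frame(token):
    """Frame a client sends first, carrying its Keystone token."""
    return {"action": "authenticate",
            "headers": {"X-Auth-Token": token}}


def subscribe_frame(queue, ttl=3600):
    """Frame subscribing the connection to a named queue."""
    return {"action": "subscription_create",
            "body": {"queue": queue, "ttl": ttl}}


# With a single shared 'tripleo' queue, a reconnecting client (e.g. a
# refreshed browser) always knows which queue to subscribe to:
frames = [json.dumps(authenticate_frame("")),
          json.dumps(subscribe_frame("tripleo"))]
```

This is where the single-queue argument bites: the subscribe frame needs a queue name, and a client that just reconnected has no record of per-workflow queues it created earlier.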

Hi,

Sorry for not responding earlier, but I have some input. In Heat we
publish events on a Zaqar queue, and we defined this format:

{
    'timestamp': $timestamp,
    'version': '0.1',
    'type': 'os.heat.event',
    'id': $uuid,
    'payload': {
        'XXX
    }
}

I don't think we have strong requirements on that, and we can
certainly make some tweaks. If we can converge towards something
similar, that'd be great.
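For concreteness, the two envelope styles under discussion can be sketched side by side. The field names come from this thread; the helper names and the use of stdlib `time`/`uuid` are illustrative, not an agreed implementation:

```python
import json
import time
import uuid

# Field names come from this thread; helper names are illustrative only.


def tripleo_message(name, execution_id, data=None):
    """Proposed TripleO envelope: identified by workflow name + execution id."""
    return {"body": {"name": name,
                     "execution_id": execution_id,
                     "data": data or {}}}


def heat_event(payload):
    """Envelope in the format Heat publishes on a Zaqar queue."""
    return {"timestamp": time.time(),
            "version": "0.1",
            "type": "os.heat.event",
            "id": str(uuid.uuid4()),
            "payload": payload}


# What would go over the Zaqar WebSocket for a validation run:
wire = json.dumps(
    tripleo_message("tripleo.validations.v1.run_validation", "123123123"))
```

Converging would mostly mean agreeing on which of these top-level keys (type/name, id/execution_id, payload/data) are mandatory.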

Thanks,

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Armando M.
On 24 May 2016 at 22:07, Elzur, Uri  wrote:

> Hi Armando
>
>
>
> Pls see below [UE]
>
>
>
> Thx
>
>
>
> Uri (“Oo-Ree”)
>
> C: 949-378-7568
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Friday, May 20, 2016 9:08 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
>
>
>
>
>
>
> On 20 May 2016 at 17:37, Elzur, Uri  wrote:
>
> Hi Armando, Cathy, All
>
>
>
> First I apologize for the delay, returning from a week long international
> trip. (yes, I know,  a lousy excuse on many accounts…)
>
>
>
> If I’m attempting to summarize all the responses, it seems like
>
> · A given abstraction in Neutron is allowed (e.g. in support of
> SFC), preferably not specific to a given technology, e.g. NSH for SFC
>
> · A stadium project is not held to the same tests (though we do not
> have a "formal" model here today) and can therefore support even a
> specific technology, e.g. NSH (definitely better with abstractions to meet
> Neutron standards for future integration)
>
>
>
> A given abstraction is allowed so long as there is enough agreement that
> it is indeed technology agnostic. If the abstraction maps neatly to a given
> technology, the implementation may exist within the context of Neutron or
> elsewhere.
>
> [UE] I think we have agreement SFC is a needed abstraction
>
>
>
> Having said that, I'd like to clarify a point: you seem to refer to the
> stadium as a gold standard. The stadium is nothing but a list of
> software repositories that the Neutron team develops and maintains. Given
> the maturity of a specific repo, it may or may not implement an abstraction
> with integration code to non-open technologies. This is left at the
> discretion of the group of folks who are directly in control of the
> specific repo, though the general direction has been to strongly encourage
> and promote openness throughout the entire stack that falls under the
> responsibility of the Neutron team and thus the stadium.
>
>
>
> [UE] I carefully read (
> https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
> and hope I understand the Stadium. All NSH patches that we'd like to
> support are OPEN. I'm still looking for the place where a restriction
> prevents networking-sfc from moving forward on NSH before all other
> projects external to OpenStack have completed their work. Pls see also my
> reply to Tim Rozet
>
>
>
> However,
>
> · There still is a chicken-and-egg phenomenon… how can a
> technology become mainstream with OPEN SOURCE support if we can't get
> OpenStack to support the required abstractions *before* the technology
> was adopted elsewhere??
>
> o   Especially as Stadium, can we let Neutron lead the industry, given
> broad enough community interest?
>
> · BTW, in this particular case, there originally was a
> *direct* ODL access as an NSH solution (i.e. NO OpenStack option), then we
> got Tacker (now a Neutron Stadium project, if I get it right) to support
> SFC and NSH, but we are still told that networking-sfc (another Neutron
> Stadium project) can't do the same….
>
> I cannot comment for the experience and the conversations you've had so
> far as I have no context. All I know is that if you want to experiment with
> OpenDaylight and its NSH provider and want to use that as a Neutron backend
> you can. However, if that requires new abstractions, these new abstractions
> must be agreed by all interested parties, be technology agnostic, and allow
> for multiple implementation, an open one included. That's the nature of
> OpenStack.
>
> [UE] thanks for this clarification! I think it means that now that we all
> agree an SFC abstraction is needed, that NSH is an emerging standard, and
> that the networking-sfc team agrees to support NSH, there should be no
> reason to wait. As Tim Rozet mentioned, an ODL driver with explicit SFC
> support is WIP, so it sounds like NSH support in it should be a go!
>

So long as the required support is not specific to NSH and the API is not
polluted by implementation details specific to NSH.

> · Also, regarding the following comment made on another message
> in this thread, "As to OvS features, I guess the OvS ml is a better place,
> but wonder if the Neutron community wants to hold itself hostage to the
> pace of other projects who are reluctant to adopt a feature", what I mean
> is again the chicken-and-egg situation above. Personally, I think
> OpenStack Neutron should allow mechanisms that are of interest / value to
> the networking community at large to "experiment with the abstraction", as
> you stated, *independent of other organizations/projects*…
>
> I personally see no catch-22 if you operate under the premises I stated
> above. If Neutron allowed experimenting with *any* mechanism without
> taking into consideration the 

[openstack-dev] [app-catalog] [heat] [murano] App Catalog IRC meeting Thursday May 26th

2016-05-25 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for May 26th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

This week we will see some updates from the folks working on the Glare
backend in addition to discussing the application developer community
building thread Igor Marnat kicked off
(http://lists.openstack.org/pipermail/user-committee/2016-May/000854.html).

Looking forward to seeing you all there tomorrow!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Tim Rozet
In my opinion, a better approach is to break this down into plugin vs. driver 
support.  There should be no problem adding support for NSH into the 
networking-sfc plugin today.  The OVS driver, however, depends on OVS as the 
dataplane, for which I can see a solid argument for supporting only an official 
(non-NSH) version.  The plugin side should have no dependency on OVS.  Therefore, 
if we add NSH SFC support to an ODL driver in networking-odl, and use that as 
our networking-sfc driver, the argument about OVS goes away (since 
neutron/networking-sfc is totally unaware of the dataplane at this point).  We 
would just need to ensure that API calls to networking-sfc specifying NSH port 
pairs return an error if the enabled driver is OVS (until an official OVS with 
NSH support is released).

Thoughts?

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Armando M." 
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Tim Rozet" 
Sent: Wednesday, May 25, 2016 12:33:16 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 24 May 2016 at 21:53, Elzur, Uri  wrote:

> Hi Tim
>
> Sorry for the delay due to travel...
>
> This note is very helpful!
>
> We are in agreement that the team including the individuals cited below
> are supportive. We also agree that SFC belongs in the networking-SFC
> project (with proper API adjustment)
>
> It seems networking-sfc still holds the position that without OvS
> accepting the VXLAN-gpe and NSH patches they can't support NSH. I'm trying
> to get a clear read on where this is stated as a requirement
>

I think the position here is as follows: if a technology is not mainstream,
i.e. readily available via distros and the various channels, it can only be
integrated via an experimental path. No one is preventing anyone from
posting patches and instructions to compile kernels and kernel modules, but
ultimately, as an OpenStack project that is supposed to produce commercial-
and production-grade software, we should be very sensitive about investing
time and energy in supporting a technology that may or may not have a
viable path towards inclusion into the mainstream (Linux and OVS in this
instance).

Another clear example we had in the past was DPDK (which enabled fast-path
processing in Neutron with OVS) and connection tracking (which enabled
security groups natively built on top of OVS). We, as a project, have
consistently avoided endorsing efforts until they mature and show a clear
path forward.


> Like you, we are closely following the progress of the patches, and
> honestly I have a hard time seeing OpenStack supporting NSH in production
> even by the end of 2017. I think this amounts to slowing down the market...
>
> I think we need to break the logjam.

We are not the ones (Neutron) you're supposed to break the logjam with. I
think the stakeholders here go well beyond the Neutron team alone.


>
> I've reviewed (
> https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
> and found nowhere a guideline suggesting that before a backend has fully
> implemented and merged a technology upstream (i.e. in another project
> outside of OpenStack!), OpenStack Neutron can't make any move. ODL has
> been working >2 years to support NSH using patches yet to be accepted into
> the Linux kernel (almost done) and OvS (preliminary) - as you stated.
> Otherwise we create a serialization that gets worse and worse over time
> and with additional layers.
>
> No one suggests that such code needs to be PRODUCTION, but we need a way
> to roll out EXPERIMENTAL functions and later merge them quickly when all
> layers are ready; this creates a nice parallelism and keeps a decent pace
> of rolling out new features broadly supported elsewhere.
>

I agree with this last statement; this is for instance what is happening
with OVN which, in order to work with Neutron, needs patching and staying
close to trunk, etc. The technology is still maturing and the whole Neutron
integration is in progress, but at least there's a clear signal that it
will eventually become mainstream. If it did not, I would bet that
priorities would be focused elsewhere.

You asked in a previous email whether Neutron wanted to keep itself hostage
to OVS. My answer to you is NO: we have many technology stack options we
can rely on in order to realize abstractions, so long as they are open and
have a viable future.


>
> Thx
>
> Uri (“Oo-Ree”)
> C: 949-378-7568
>
> -Original Message-
> From: Tim Rozet [mailto:tro...@redhat.com]
> Sent: Friday, May 20, 2016 7:01 PM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>; Elzur, Uri 
> Cc: Cathy Zhang 
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
> Hi Uri,
> I originally wrote the Tacker->ODL SFC 

[openstack-dev] [ironic] third party CI systems - vendor requirement milestones status

2016-05-25 Thread Kurt Taylor
We are in the final stretch for requiring CI testing for ironic drivers. I
have organized the CI teams that I know about and their current status into
the following wiki page:
https://wiki.openstack.org/wiki/Ironic/Drivers#3rd_Party_CI_required_implementation_status

I have already heard from a few folks with edits, but please review this
info and let me know if you have any changes. You can make needed changes
yourself, but let me know so I can keep track.

Thanks!
Kurt Taylor (krtaylor)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Emily K Hugenbruch


Amrith,

This is a separate program from Outreachy.  Typically Outreachy is more of
an internship.  This program is supposed to be more of a "we meet for an
hour once a week to go over some questions" or "I'm stuck on what to work
on next, can you suggest a few things" arrangement: something more
lightweight for the mentors and mentees.

Thanks for noticing the links.  One should've pointed to
https://drive.google.com/file/d/0BxtM4AiszlEyVkEtdktmWjBPN3c/view.  We do
have a wiki page for mentoring where we're listing the different types of
mentoring, so if projects have links for newcomers, you can add them here:
https://wiki.openstack.org/wiki/Mentors  We're debating how to best lay out
the page right now, but eventually we'll move it to a more permanent place
than a wiki.
Thanks!
Emily Hugenbruch
IRC: ekhugen
~



Subject: Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in


Emily,

Is this mentoring program in any way related to Outreachy[1] or is that a
different program altogether?

Your email says, "people to the guidelines (here<
https://docs.google.com/document/d/10ZJz0oX_V944o84xe-l67zvy2kf1RyCzpudU2fTlDOA
> and here<
https://docs.google.com/document/d/10ZJz0oX_V944o84xe-l67zvy2kf1RyCzpudU2fTlDOA/edit
>) and". Both of those appear to be links to the same document.

Thanks,

-amrith

[1] https://www.gnome.org/outreachy/

From: Emily K Hugenbruch [mailto:ekhugenbr...@us.ibm.com]
Sent: Monday, May 23, 2016 10:25 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific
technical areas


Hi,
The lightweight mentoring program sponsored by the Women of OpenStack has
really taken off, and we have about 35 mentees looking for technical help
that we don't have mentors for. We're asking for help from the PTLs to
announce the mentoring program in team meetings then direct people to the
guidelines (here<
https://docs.google.com/document/d/10ZJz0oX_V944o84xe-l67zvy2kf1RyCzpudU2fTlDOA
> and here<
https://docs.google.com/document/d/10ZJz0oX_V944o84xe-l67zvy2kf1RyCzpudU2fTlDOA/edit
>) and signup form<
https://openstackfoundation.formstack.com/forms/mentoring> if they're
interested.

Mentors should be regular contributors to a project, with an interest in
helping new people and about 4 hours a month for mentoring. They do not
have to be women; the program is just sponsored by WoO, we welcome all
mentees and mentors.

These are the projects/areas where we especially need mentors:

 *   Cinder
 *   Containers
 *   Documentation
 *   Glance
 *   Keystone
 *   Murano
 *   Neutron
 *   Nova
 *   Ops
 *   Searchlight
 *   Telemetry
 *   TripleO
 *   Trove
If you have any questions you can contact me, or ask on openstack-women
where the mentoring committee hangs out.
Thanks!
Emily Hugenbruch
IRC: ekhugen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Mike Perez
On 19:12 May 24, Augustina Ragwitz wrote:
>  
> >> On 12:54 May 24, Augustina Ragwitz wrote:
> >> Hi Emily,
> >>
> >> I'm the Nova Mentoring Czar and we have a Wiki page with a list of
> >> projects that would be good for new contributors:
> >> https://wiki.openstack.org/wiki/Nova/Mentoring
> >>
> >> For Nova, I'd encourage potential contributors to get involved with a
> >> specific project so that mentoring can happen organically. Interested
> >> folks are more than welcome to reach out to me, preferably by email.
> >
> > There's an assumption here that all projects have things in place
> > to begin
> > mentoring people. With the people we've spoken to, sometimes just
> > reaching on
> > IRC gave no answers. This is actually matching people to
> > someone who has
> > knowledge and is interested/has time to mentor. Even if a match
> > can't be made
> > right away, communication is made. First impressions with on
> > boarding is key.
> >
> > --
> > Mike Perez
>  
> I'm a little confused by your response. I wasn't making any assumptions
> or intending to criticize this mentorship program. I understood that
> Emily had highlighted gaps in certain technical areas, of which Nova is
> one. In recognition of the challenges faced by new contributors, the
> Nova team had a session at the Newton Design Summit where we discussed
> ideas on how to address these challenges within our own team. One
> outcome of this session is that I volunteered for the role of Mentoring
> Czar. When I saw Emily's original post, I thought this information might
> be relevant. My intention is to share our resources for new contributors
> and present myself as a contact point so this information could be
> provided to participants in the mentorship program that don't have
> mentors assigned. In fact, if other projects do have things in place for
> new contributors, it would probably be helpful if they also provided
> this information to the mentorship program.
>  
> Again, my intention was not to criticize and I think any effort to
> encourage new contributors is a good thing. I apologize if my original
> response suggested otherwise.

You're right Augustina. I took your response of "potential contributors to get
involved with a specific project so that mentoring can happen organically" out
of context from just the scope of Nova. Apologies for the confusion.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo] Mitaka neutron-*aas are broken when --config-dir is passed

2016-05-25 Thread Armando M.
On 25 May 2016 at 09:02, Brandon Logan  wrote:

> +1
>
> This sounds like a sane plan.  That magical config load caused me some
> problems in the past when I didn't know about it; I would be glad to see
> it go.  I thought its deprecation and removal were planned anyway,
> and honestly didn't think it was still in the code base because I hadn't
> run into any issues recently.
>

If my memory doesn't fail me, that code is still around for a couple of
reasons, one being documented in [1] (the other, I think, was devstack's
way of configuring *-aas). My suggestion would be to think thoroughly
about what the implications are before we let it go up in flames.
Things might have changed since the last time this code was touched, but
one never knows.

[1] https://bugs.launchpad.net/neutron/+bug/1492069


>
> Thanks,
> Brandon
>
> On Wed, 2016-05-25 at 10:56 -0400, Doug Hellmann wrote:
> > Excerpts from Ihar Hrachyshka's message of 2016-05-25 14:03:24 +0200:
> > > Hi all,
> > >
> > > Our internal Mitaka testing revealed that neutron-server fails to
> start when:
> > > - any neutron-*aas service plugin is enabled (in our particular case,
> it was lbaas);
> > > - --config-dir option is passed to the process via CLI.
> > >
> > > Since RDO/OSP neutron-server systemd unit files use --config-dir
> options extensively, it renders all neutron-*aas broken as of Mitaka for us.
> > >
> > > The failure is reported as: https://launchpad.net/bugs/1585102 and
> the traceback can be found in: http://paste.openstack.org/show/498502/
> > >
> > > As you can see, it crashes in provider_configuration neutron module,
> where we have a custom parser for service_providers configuration:
> > >
> > >
> https://github.com/openstack/neutron/blob/master/neutron/services/provider_configuration.py#L83
> > >
> > > This code was introduced in Kilo when neutron-*aas were split of the
> tree. The intent of the code at the time was to allow service plugins to
> load neutron_*aas.conf files located in /etc/neutron/ that are not passed
> explicitly to neutron-server via --config-file options. [A decision that
> was, in my opinion, wrong in the first place: we should not have introduced
> ‘magic’ in neutron that allowed the controller to load configuration files
> implicitly, and we would be better off just relying on oslo.config
> facilities, like using --config-dir to load an ‘unknown’ set of
> configuration files.]
> >
> > +1
> >
> > >
> > > The failure was triggered by oslo.config 3.8.0 release that is part of
> Mitaka series, particularly by the following patch:
> https://review.openstack.org/#q,Ibd0566f11df62da031afb128c9687c5e8c7b27ae,n,z
> This patch, among other things, changed the type of ‘config_dir’ option
> from string to list [of strings]. Since configuration options are not
> considered part of public API, we can’t claim that oslo.config broke their
> API guarantees and revert the patch. [Even if that would be the case, we
> could not do it because we already released several Mitaka and Newton
> releases of the library with the patch included, so it’s probably late to
> switch back.]
> > >
> > > I have proposed a fix for provider_configuration module that would
> adopt the new list type for the option:
> https://review.openstack.org/#/c/320304/ Actually, it does not even rely
> on the option anymore, instead it pulls values using config_dirs property
> defined on ConfigOpts objects, which I assume is part of public API.
> > >
> > > Since Mitaka supports anything oslo.config >= 3.7.0, we would also
> need to support the older type in some graceful way, if we backport the fix
> there.
> > >
> > > Doug Hellmann has concerns about the approach taken. In his own words,
> "This approach may solve the problem in the short term, but it's going to
> leave you with some headaches later in this cycle when we expand
> oslo.config.” Specifically, "There are plans under way to expand
> configuration sources from just files and directories to use URLs. I expect
> some existing options to be renamed or otherwise deprecated as part of that
> work, and using the option value here will break neutron when that
> happens.” (more details in the patch)
> >
> > >
> > > First, it’s a surprise to me that config_dirs property (not an option)
> is not part of public API of the library. I thought that if something is
> private, we name it with a leading underscore. (?)
> >
> > I was conflating "config_dirs" with the --config-dir option value,
> > which as you point out is incorrect.  Sorry for the confusion.
> >
> > >
> > > If we don’t have public access to the symbol, a question arises on how
> we tackle that in neutron/mitaka (!). Note that we are not talking about a
> next release, it’s current neutron/mitaka that is broken and should be
> fixed to work with oslo.config 3.8.0, so any follow up work in oslo.config
> itself won’t make it to stable/mitaka for the library. So we need some
> short term solution here.
> > >
> 
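
To illustrate the kind of graceful handling Ihar describes (supporting both the pre-3.8.0 string form and the >= 3.8.0 list form of the option), here is a minimal, purely illustrative sketch; `normalize_config_dirs` is a hypothetical helper, not the actual neutron patch (that is https://review.openstack.org/#/c/320304/):

```python
# Hypothetical helper: accept both the pre-3.8.0 (string) and >= 3.8.0
# (list of strings) forms of oslo.config's config_dir value.


def normalize_config_dirs(value):
    """Return config_dir as a list, whichever type oslo.config handed us."""
    if value is None:
        return []
    if isinstance(value, str):
        return [value]
    return list(value)


print(normalize_config_dirs("/etc/neutron"))  # ['/etc/neutron']
print(normalize_config_dirs(["/etc/neutron", "/etc/neutron/conf.d"]))
```

Code that consumes the normalized list is then insulated from the option's type changing again, which is the short-term robustness the stable/mitaka fix needs.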

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Fox, Kevin M
+1. very good discussion.

From: Sean Dague [s...@dague.net]
Sent: Wednesday, May 25, 2016 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

I've been watching the threads, trying to digest, and find the way
this is getting sliced doesn't quite match the way I've been thinking
about it (which might just mean I've been thinking about it wrong).
However, here is my current set of thoughts on things.

1. Should OpenStack be open to more languages?

I've long thought the answer should be yes. Especially if it means we
end up with keystonemiddleware, keystoneauth, oslo.config in other
languages that let us share elements of infrastructure pretty
seamlessly. The OpenStack model of building services that register in a
service catalog and use common tokens for permissions through a bunch of
services is quite valuable. There are definitely people that have Java
applications that fit into the OpenStack model, but have no place to
collaborate on them.

(Note: nothing about the current proposal goes anywhere near this)

2. Is Go a "good" language to add to the community?

Here I am far more mixed. In programming-language time, Go is super new.
It is roughly the same age as the OpenStack project. The idea that Go and
Python programmers overlap seems to come from the fact that some shops
that used to do a lot in Python now do some things in Go.

But compare that to other languages in our bag, JavaScript and Bash. These
are things that go back two decades. Unless you have successfully avoided
Linux or the Web for two decades, you've done these in some form. Maybe
not as an expert, but there are vestigial bits of knowledge there. So
they *are* different, in the same way that C or Java are different for
having age. The likelihood of finding community members that know Python
+ one of these is actually *way* higher than Python + Go, just based on
duration of existence. In a decade that probably won't be true.

3. Are there performance problems where python really can't get there?

This seems like a pretty clear "yes", and it shouldn't be surprising.
Python has no JIT (yes, there is PyPy, but its compat story isn't there).
There is a reason a bunch of Python libs have native components for
speed - numpy, lxml, cryptography; even yaml throws a warning that you
should really compile the native version for performance, despite there
being a full Python fallback.
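The yaml case is easy to see in PyYAML itself. A minimal sketch of the usual try-the-native-loader-first pattern (assuming PyYAML is installed; when it wasn't built against libyaml, this silently falls back to the slower pure-Python loader):

```python
import yaml

# PyYAML ships a pure-Python SafeLoader and, when built against the
# libyaml C library, a much faster CSafeLoader. Libraries typically try
# the native one and fall back transparently - this is exactly the
# "compile the native version for performance" situation.
try:
    from yaml import CSafeLoader as Loader
except ImportError:
    from yaml import SafeLoader as Loader  # pure-Python fallback

def load_yaml(text):
    """Parse a YAML document with the fastest available safe loader."""
    return yaml.load(text, Loader=Loader)
```

Either branch produces the same parsed result; only the speed differs.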

The Swift team did a very good job demonstrating where these issues are
with trying to get raw disk IO. It was a great analysis, and kudos to
that team for looking at so many angles here.

4. Do we want to be in the business of building data plane services that
will all run into python limitations, and will all need to be rewritten
in another language?

This is a slightly different spin on the question Thierry is asking.

Control Plane services are very unlikely to ever hit a scaling concern
where rewriting the service in another language is needed for
performance issues. These are orchestrators, and the time spent in them
is vastly less than the operations they trigger (start a vm, configure a
switch, boot a database server). There was a whole lot of talk in the
threads of "well that's not innovative, no one will want to do just
that", which seems weird, because that's most of OpenStack. And it's
pretty much where all the effort in the containers space is right now,
with a new container fleet manager every couple of weeks. So the idea
that this is a boring problem no one wants to solve doesn't hold water
with me.

Data Plane services seem like they will all end up in the boat of
"python is not fast enough". Be it serving data from disk, mass DNS
transfers, time series database, message queues. They will all
eventually hit the python wall. Swift hit it first because of the
maturity of the project and they are now focused on this kind of
optimization, as that's what their user base demands. However I think
all other data plane services will hit this as well.

Glance (which is partially a data plane service) did hit this limit, and
the way folks largely mitigate it is by using Ceph and exposing that
directly to Nova, so Glance is now only in the location and metadata
game, and Ceph is in the data plane game.

When it comes to doing data plane services in OpenStack, I'm quite mixed.
The technology concerns for data plane
services are quite different. All the control plane services kind of
look and feel the same. An API + worker model, a DB for state, message
passing / rpc to put work to the workers. This is a common pattern and
is something which even for all the project differences, does end up
kind of common between parts. Projects that follow this model are
debuggable as a group not too badly.

5. Where does Swift fit?

This I think has always been a tension point in the community (at least
since I joined in 2012). Swift is an original service of OpenStack, as
it started as Swift and Nova.

Re: [openstack-dev] [nova] Intel NFV CI failing all shelve/unshelve tests

2016-05-25 Thread Chris Friesen

On 05/22/2016 05:41 PM, Jay Pipes wrote:

Hello Novaites,

I've noticed that the Intel NFV CI has been failing all test runs for quite some
time (at least a few days), always failing the same tests around shelve/unshelve
operations.





I looked through the conductor and compute logs to see if I could find any
possible reasons for the errors and found a number of the following errors in
the compute logs:

2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52] Traceback (most recent call last):
2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52]   File
"/opt/stack/new/nova/nova/compute/manager.py", line 4230, in _unshelve_instance
2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52] with rt.instance_claim(context,
instance, limits):





2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52] newcell.unpin_cpus(pinned_cpus)
2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52]   File
"/opt/stack/new/nova/nova/objects/numa.py", line 94, in unpin_cpus
2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52] pinned=list(self.pinned_cpus))
2016-05-22 22:18:59.403 8145 ERROR nova.compute.manager [instance:
cae6fd47-0968-4922-a03e-3f2872e4eb52] CPUPinningInvalid: Cannot pin/unpin cpus
[6] from the following pinned set [0, 2, 4]

on or around the time of the failures in Tempest.

Perhaps tomorrow morning we can look into handling the above exception properly
from the compute manager, since clearly we shouldn't be allowing
CPUPinningInvalid to be raised in the resource tracker's _update_usage() 
call


First, it seems wrong to me that an _unshelve_instance() call would result in 
unpinning any CPUs.  If the instance was using pinned CPUs then I would expect 
the CPUs to be unpinned when doing the "shelve" operation.  When we do an 
instance claim as part of the "unshelve" operation we should be pinning CPUs, 
not unpinning them.


Second, the reason why CPUPinningInvalid gets raised in _update_usage() is that 
it has discovered an inconsistency in its view of resources.  In this case, it's 
trying to unpin CPU 6 from a set of pinned cpus that doesn't include CPU 6.  I 
think this is a valid concern and should result in an error log.  Whether it 
should cause the unshelve operation to fail is a separate question, but it's 
definitely a symptom that something is wrong with resource tracking on this 
compute node.
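The invariant Chris describes can be sketched with a toy version of the pinned-CPU bookkeeping. This is an illustrative simplification, not Nova's actual object model; the class and method names here are hypothetical:

```python
class CPUPinningInvalid(Exception):
    pass


class NUMACellSketch:
    """Toy model of per-cell CPU pinning state, loosely inspired by
    nova.objects.numa (hypothetical simplification, not the real code)."""

    def __init__(self, cpuset):
        self.cpuset = set(cpuset)      # CPUs this cell owns
        self.pinned_cpus = set()       # CPUs currently pinned to instances

    def pin_cpus(self, cpus):
        cpus = set(cpus)
        if not cpus.issubset(self.cpuset) or cpus & self.pinned_cpus:
            raise CPUPinningInvalid(
                'Cannot pin/unpin cpus %s from the following pinned set %s'
                % (sorted(cpus), sorted(self.pinned_cpus)))
        self.pinned_cpus |= cpus

    def unpin_cpus(self, cpus):
        cpus = set(cpus)
        # This is the check behind the gate failure: asking to unpin CPU 6
        # when only [0, 2, 4] are pinned means the resource tracker's view
        # has diverged from reality, so it refuses rather than corrupt state.
        if not cpus.issubset(self.pinned_cpus):
            raise CPUPinningInvalid(
                'Cannot pin/unpin cpus %s from the following pinned set %s'
                % (sorted(cpus), sorted(self.pinned_cpus)))
        self.pinned_cpus -= cpus
```

Whether hitting that branch should abort the operation or merely log an error is exactly the open question; the check itself only detects that the tracked state is inconsistent.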


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-25 Thread Armando M.
On 24 May 2016 at 21:53, Elzur, Uri  wrote:

> Hi Tim
>
> Sorry for the delay due to travel...
>
> This note is very helpful!
>
> We are in agreement that the team including the individuals cited below
> are supportive. We also agree that SFC belongs in the networking-SFC
> project (with proper API adjustment)
>
> It seems networking-sfc still holds the position that without OvS
> accepting the VXLAN-gpe and NSH patches they can't support NSH. I'm trying
> to get a clear read on where this is stated as a requirement.
>

I think the position here is as follows: if a technology is not mainstream,
i.e. readily available via distros and the various channels, it can only be
integrated via an experimental path. No-one is preventing anyone from
posting patches and instructions to compile kernels and kernel modules, but
ultimately, as an OpenStack project that is supposed to produce commercial
and production grade software, we should be very sensitive about investing
time and energy in supporting a technology that may or may not have a
viable path towards inclusion into the mainstream (Linux and OVS in this
instance).

Other clear examples we had in the past were DPDK (which enabled fast
path processing in Neutron with OVS) and connection tracking (which enabled
security groups natively built on top of OVS). We, as a project, have
consistently avoided endorsing efforts until they mature and show a clear
path forward.


> Like you, we are closely following the progress of the patches and
> honestly I have hard time seeing OpenStack supporting NSH in production
> even by the end of 2017. I think this amounts to slowing down the market...
>
> I think we need to break the logjam.
>

We are not the ones (Neutron) you're supposed to break the logjam with. I
think the stakeholders here go well beyond the Neutron team alone.


>
> I've reviewed (
> https://review.openstack.org/#/c/312199/12/specs/newton/neutron-stadium.rst,unified)
> and found no guideline suggesting that before a backend has fully
> implemented and merged a technology upstream (i.e. in another project
> outside of OpenStack!), OpenStack Neutron can't make any move. ODL has been
> working >2 years to support NSH using patches yet to be accepted into the
> Linux Kernel (almost done) and OvS (preliminary) - as you stated. Otherwise
> we create a serialization that gets worse and worse over time and with
> additional layers.
>
> No one suggests that such code needs to be PRODUCTION, but we need a way
> to roll out EXPERIMENTAL functions and later merge them quickly when all
> layers are ready; this creates a nice parallelism and keeps a decent pace
> of rolling out new features broadly supported elsewhere.
>

I agree with this last statement; this is for instance what is happening
with OVN which, in order to work with Neutron, needs patching, staying
close to trunk, etc. The technology is still maturing and the whole Neutron
integration is in progress, but at least there's a clear signal that it
will eventually become mainstream. If it did not, I would bet that
priorities would be focused elsewhere.

You asked in a previous email whether Neutron wanted to keep itself hostage
to OVS. My answer to you is NO: we have many technology stack options we
can rely on in order to realize abstractions, so long as they are open and
have a viable future.


>
> Thx
>
> Uri (“Oo-Ree”)
> C: 949-378-7568
>
> -Original Message-
> From: Tim Rozet [mailto:tro...@redhat.com]
> Sent: Friday, May 20, 2016 7:01 PM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>; Elzur, Uri 
> Cc: Cathy Zhang 
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
> Hi Uri,
> I originally wrote the Tacker->ODL SFC NSH piece and have been working
> with Tacker and networking-sfc team to bring it upstream into OpenStack.
> Cathy, Stephen, Louis and the rest of the networking-sfc team have been
> very receptive to changes specific to NSH around their current API and DB
> model.  The proper place for SFC to live in OpenStack is networking-sfc,
> while Tacker can do its orchestration job by rendering ETSI MANO TOSCA
> input like VNF Descriptors and VNF Forwarding Graph Descriptors.
>
> We currently have a spec in networking-odl to migrate my original driver
> for ODL to do IETF NSH.  That driver will be supported in networking-sfc,
> along with some changes to networking-sfc to account for NSH awareness and
> encap type (like VXLAN+GPE or Ethernet).  The OVS work to support NSH is
> coming along and patches are under review.  Yi Yang has built a private OVS
> version with these changes and we can use that for now to test with.
>
> I think it is all coming together and will take a couple more months
> before all of the pieces (Tacker, networking-sfc, networking-odl, ovs) are
> in place.  I don't think networking-sfc is holding up any progress.
>
> Thanks,
>
> Tim Rozet

Re: [openstack-dev] How to single sign on with windows authentication with Keystone

2016-05-25 Thread Adam Young

On 05/25/2016 07:26 AM, OpenStack Mailing List Archive wrote:

Link: https://openstack.nimeyo.com/85057/?show=85707#c85707
From: imocha 

I am trying to follow the steps. I am able to install ADFS and would 
like to proceed further.


However, I am having issues with setting up SSL endpoints for Keystone
V3. I am using Mitaka. Are there any steps I can follow?


I am using packstack to install Mitaka and wanted to enable SSL
for the identity endpoints to work with ADFS for the SAML2 flow.




We went through a proof of concept for this last summer (FreeIPA and 
Ipsilon, not ADFS)



https://github.com/admiyo/rippowam

Right now I'm working on updating for Keycloak instead of Ipsilon.

The SSL stuff I would recommend using Certmonger to manage, but
I don't know how to tie that in with the ADFS CA. We do it using IPA's
CA. You can set up a trust between IPA and AD, which might be your
easiest path forward.


With a trust, the Keystone server would be registered as a host on the
FreeIPA server, but would accept Kerberos tickets from ADFS. If you
want to completely federate the two, you can do so as well; then you
do not need the trust, you just let ADFS issue SAML.


[openstack-dev] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-25 Thread Nikhil Komawar
Hello,


Firstly, I would like to thank Fei Long for bringing up a few operator
centric issues to the Glance team. After chatting with him on IRC, we
realized that there may be more operators who would want to contribute
to the discussions to help us take some informed decisions.


So, I would like to call for a 2 hour sync for the Glance team along
with interested operators on Thursday June 9th, 2016 at 2000UTC. 


If you are interested in participating please RSVP here [1], and
participate in the poll for the tool you'd prefer. I've also added a
section for Topics and provided a template to document the issues clearly.


Please be mindful of everyone's time and if you are proposing issue(s)
to be discussed, come prepared with well documented & referenced topic(s).


If you have feedback that you are not sure is appropriate for the
etherpad, you can reach me on IRC (nick: nikhil).


[1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync

-- 

Thanks,
Nikhil Komawar
Newton PTL for OpenStack Glance




Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-25 Thread Brad Topol

CONGRATULATIONS Rodrigo!!! Very well deserved!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Lance Bragstad 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   05/25/2016 09:09 AM
Subject:Re: [openstack-dev] [keystone] New Core Reviewer (sent on
behalf of Steve Martinelli)



Congratulations Rodrigo!

Thank you for all the continued and consistent reviews.

On Tue, May 24, 2016 at 1:28 PM, Morgan Fainberg  wrote:
  I want to welcome Rodrigo Duarte (rodrigods) to the keystone core team.
  Rodrigo has been a consistent contributor to keystone and has been
  instrumental in the federation implementations. Over the last cycle he
  has shown an understanding of the code base and contributed quality
  reviews.

  I am super happy (as proxy for Steve) to welcome Rodrigo to the Keystone
  Core team.

  Cheers,
  --Morgan




Re: [openstack-dev] [oslo] Log spool in the context

2016-05-25 Thread Doug Hellmann
Excerpts from Alexis Lee's message of 2016-05-25 16:24:59 +0100:
> Doug Hellmann said on Wed, May 25, 2016 at 11:06:35AM -0400:
> > Excerpts from Alexis Lee's message of 2016-05-25 13:46:05 +0100:
> > >   def some_method(ctx):
> > >   log = tools.get_api_logger(ctx) or LOG
> > 
> > That "or" statement in some_method() seems to imply though that
> > when spool logging is on, messages would *only* go through the
> > spooling logger. Is that what we want? Even for info messages?
> 
> The global logger (LOG) is still accessible so if you definitely want a
> message in the main log, you can use that instead. I'll use "reqlog"
> instead of "log" in future to make the two more quickly distinguishable.
> EG a "warning, disk running out of space" may be discovered during
> request processing but isn't tied to that request, so it makes more
> sense to send that to the main log.
> 
> If we want a message to go to both loggers, without going to the global
> logger twice, we can do:
> 
> if reqlog != LOG:
> reqlog.info("...")
> LOG.info("...")

But that leaves it up to the application or library author to have to
make that call for every log message, which makes logging more
complicated.

The point of the spooling logger is to dump everything about a request
when the request fails, right? And we want the "normal" behavior the
rest of the time? It seems like we should look at a way to make that
happen without a lot of impact on application logic.

It's OK to have the same message go to both loggers, right? Could we
have a wrapper class that takes a normal logger and a spooling logger
and sends its messages to both?

So you might do something like:

  reqlog = tools.get_request_logger('api', context, LOG)

And if the spool_api option is off, you get back the LOG object and if
it's on you get back a SpoolLoggerWrapper or something that duplicates
the message to the actual spool logger and to LOG.

  reqlog.info('this message goes both places')
  reqlog.debug('this is only emitted by the spool logger')

Even better would be to find a way to configure the logger hierarchy
so that is completely transparent, so that messages going to the
SpoolLogger are propagated to the normal logger based on their log
level. That would mean rearranging the planned hierarchy, I think, to
avoid having to connect two separate parts of the tree like the
SpoolLoggerWrapper proposed above does.
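Something like that wrapper could be sketched as follows. This is purely illustrative: `SpoolLoggerWrapper` is a hypothetical name from the discussion above, not an existing oslo.log API, and the real design would need to handle exc_info, extra kwargs, etc.

```python
import logging


class SpoolLoggerWrapper:
    """Illustrative sketch: send every record to the spooling logger,
    and duplicate to the normal logger any record it would emit anyway."""

    def __init__(self, spool_logger, normal_logger):
        self._spool = spool_logger
        self._normal = normal_logger

    def _log(self, level, msg, *args, **kwargs):
        # The spool logger buffers everything (debug included) so the
        # full request history can be dumped on failure.
        self._spool.log(level, msg, *args, **kwargs)
        # The normal logger filters by its configured level, so info()
        # shows up in both places while debug() only reaches the spool.
        if self._normal.isEnabledFor(level):
            self._normal.log(level, msg, *args, **kwargs)

    def debug(self, msg, *args, **kwargs):
        self._log(logging.DEBUG, msg, *args, **kwargs)

    def info(self, msg, *args, **kwargs):
        self._log(logging.INFO, msg, *args, **kwargs)

    def warning(self, msg, *args, **kwargs):
        self._log(logging.WARNING, msg, *args, **kwargs)

    def error(self, msg, *args, **kwargs):
        self._log(logging.ERROR, msg, *args, **kwargs)
```

With this shape, application code calls one logger object and never has to make the "both places?" decision per message.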

Doug

> 
> 
> Alexis (lxsli)



Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2016-05-25 13:21:49 +0200:
> On 25/05/16 06:48 -0400, Sean Dague wrote:
> 
> [snip]
> 
> >4. Do we want to be in the business of building data plane services that
> >will all run into python limitations, and will all need to be rewritten
> >in another language?
> >
> >This is a slightly different spin on the question Thierry is asking.
> >
> >Control Plane services are very unlikely to ever hit a scaling concern
> >where rewriting the service in another language is needed for
> >performance issues. These are orchestrators, and the time spent in them
> >is vastly less than the operations they trigger (start a vm, configure a
> >switch, boot a database server). There was a whole lot of talk in the
> >threads of "well that's not innovative, no one will want to do just
> >that", which seems weird, because that's most of OpenStack. And it's
> >pretty much where all the effort in the containers space is right now,
> >with a new container fleet manager every couple of weeks. So thinking
> >that this is a boring problem no one wants to solve, doesn't hold water
> >with me.
> >
> >Data Plane services seem like they will all end up in the boat of
> >"python is not fast enough". Be it serving data from disk, mass DNS
> >transfers, time series database, message queues. They will all
> >eventually hit the python wall. Swift hit it first because of the
> >maturity of the project and they are now focused on this kind of
> >optimization, as that's what their user base demands. However I think
> >all other data plane services will hit this as well.
> >
> >Glance (which is partially a data plane service) did hit this limit, and
> >the way it is largely mitigated by folks is by using Ceph and exposing that
> >directly to Nova so now Glance is only in the location game and metadata
> >game, and Ceph is in the data plane game.
> 
> Sorry for nitpicking here, but Glance's API remains a data API. Sure, it
> stores locations and sure, you can do fancy things with those locations
> but, as far as end users go, it's still a data API. It is not used as
> intensively as Swift's, though. Ceph's driver allows for fancier things
> to be done, but there are deployments which don't use Ceph.

FWIW, that's part of why my original suggestion for the new import
API only had the equivalent of the copy-from feature. Monty made a
good point that doing it that way required users to have some other
service from which they could import the data, so we still have a data
API in glance. Maybe we should revisit that decision after this issue is
resolved.

> I believe it'd be better to separate data services that *own* the data
> from those that integrate other backends. Swift owns the data. You upload
> it to swift, it stores the data using its own strategies and it serves it.
> Glance gets the data, puts it in some other store, and then you can either
> access it (not always) directly from the store or have Glance serve it
> back.

> 
> >When it comes to doing data plan services in OpenStack, I'm quite mixed.
> >The technology concerns for data plane
> >services are quite different. All the control plane services kind of
> >look and feel the same. An API + worker model, a DB for state, message
> >passing / rpc to put work to the workers. This is a common pattern and
> >is something which even for all the project differences, does end up
> >kind of common between parts. Projects that follow this model are
> >debuggable as a group not too badly.
> >
> >5. Where does Swift fit?
> >
> >This I think has always been a tension point in the community (at least
> >since I joined in 2012). Swift is an original service of OpenStack, as
> >it started as Swift and Nova. But they were very different things. Swift
> >is a data service, Nova was a control plane. Much of what is now
> >OpenStack is Nova derivative in some way (some times direct extractions
> >(Glance, Cinder, Ironic), some times convergent paths (Neutron). And
> >then with that many examples, lots of other things built in similar ways.
> >
> >Swift doesn't use common oslo components. That actually makes debugging
> >it quite different compared to the rest of OpenStack. The lack of
> >oslo.log means structured JSON log messages to Elasticsearch are not
> >a thing. Swift has a very different model in its service split.
> >Swift doesn't use global requirements. Swift ensures it can run without
> >Keystone, because their goal is Swift everywhere, whether or not it's
> >part of the rest of OpenStack.
> >
> >These are all fine goals, but they definitely have led to tensions on
> >all sides.
> >
> >And I think part of the question is "are these tensions that need to be
> >solved" or "is this data that this thing is different". Which isn't to
> >say that Swift is bad, it's just definitively different than much of the
> >ecosystem. Maybe Swift should be graduated beyond OpenStack, because
> >its scope cross-cuts much differently. Ceph isn't part of 

Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-25 Thread Kashyap Chamarthy
On Tue, May 24, 2016 at 01:59:17PM -0400, Sean Dague wrote:

Thanks for the summary, Sean.

[...]

> It turns out it works fine because libvirt *actually* seems to take the
> data from cpu_map.xml and do a translation to what it believes qemu will
> understand. On these systems apparently this turns into "-cpu
> Opteron_G1,-pse36"
> (http://logs.openstack.org/29/42529/24/check/gate-tempest-dsvm-multinode-full/5f504c5/logs/libvirt/qemu/instance-000b.txt.gz)
> 
> At some point between libvirt 1.2.2 and 1.3.1, this changed. Now libvirt
> seems to be passing our cpu_model directly to qemu, and assumes that as
> a user you will be responsible for writing all the <feature> stanzas to
> add/remove yourself. When libvirt sends 'gate64' to qemu, this explodes,
> as qemu has no idea what we are talking about.
> http://logs.openstack.org/34/319934/2/experimental/gate-tempest-dsvm-multinode-live-migration/b87d689/logs/screen-n-cpu.txt.gz#_2016-05-24_15_59_12_531

[...]

So, in short, the central issue seems to be this: the custom 'gate64'
model is not being translated by libvirt into a model that QEMU can
recognize.

I could reproduce it with upstream libvirt
(libvirt-1.3.4-2.fc25.x86_64), and filed this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1339680 -- libvirt CPU
driver fails to translate a custom CPU model into something that
QEMU recognizes

Some discussion from libvirt migration developers (comment #3):

"So it looks like the whole code which computes the right CPU model
is skipped. The reason is . Our code avoids
comparing guest CPU definition to host CPU for TCG mode (since the
host CPU is irrelevant in this case). And as a side effect the code
that would translate the gate64 CPU model into something that is
supported by QEMU is skipped too."
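For readers not steeped in libvirt, the translation being discussed lives in the guest's domain XML. With the newer behavior, the operator has to spell out the feature delta explicitly; an illustrative fragment (attribute values are examples, and exact semantics vary by libvirt version) of what "-cpu Opteron_G1,-pse36" looks like written out by hand:

```xml
<!-- Illustrative guest CPU definition: the model plus explicit
     <feature> stanzas, instead of relying on libvirt to derive the
     delta from cpu_map.xml. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Opteron_G1</model>
  <feature policy='disable' name='pse36'/>
</cpu>
```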

> So, the existing cpu_map.xml workaround for our testing situation will
> no longer work.

[...]

-- 
/kashyap



Re: [openstack-dev] [oslo] Log spool in the context

2016-05-25 Thread Alexis Lee
Doug Hellmann said on Wed, May 25, 2016 at 11:06:35AM -0400:
> Excerpts from Alexis Lee's message of 2016-05-25 13:46:05 +0100:
> >   def some_method(ctx):
> >   log = tools.get_api_logger(ctx) or LOG
> 
> That "or" statement in some_method() seems to imply though that
> when spool logging is on, messages would *only* go through the
> spooling logger. Is that what we want? Even for info messages?

The global logger (LOG) is still accessible so if you definitely want a
message in the main log, you can use that instead. I'll use "reqlog"
instead of "log" in future to make the two more quickly distinguishable.
EG a "warning, disk running out of space" may be discovered during
request processing but isn't tied to that request, so it makes more
sense to send that to the main log.

If we want a message to go to both loggers, without going to the global
logger twice, we can do:

if reqlog != LOG:
reqlog.info("...")
LOG.info("...")


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79



Re: [openstack-dev] [kuryr][magnum]Installing kuryr for mutlinode openstack setup

2016-05-25 Thread Hongbin Lu


From: Antoni Segura Puimedon [mailto:toni+openstac...@midokura.com]
Sent: May-25-16 6:55 AM
To: OpenStack Development Mailing List (not for usage questions); Gal Sagie; 
openstack-operators
Subject: Re: [openstack-dev] [kuryr][magnum]Installing kuryr for mutlinode 
openstack setup



On Wed, May 25, 2016 at 11:20 AM, Jaume Devesa 
> wrote:
Hello Akshay,

responses inline:

On Wed, 25 May 2016 10:48, Akshay Kumar Sanghai wrote:
> Hi,
> I have a 4 node openstack setup (1 controller, 1 network, 2 compute nodes).
> I want to install kuryr in liberty version. I cannot find a package in
> ubuntu repo.

There is not yet official version of Kuryr. You'll need to install using the
current master branch of the repo[1] (by cloning it, install dependencies and
`python setup.py install`

 Or you could run it dockerized. Read the "repo info" in [2]
We are working on having the packaging ready, but we are splitting the repos 
first,
so it will take a while for plain distro packages.

> -How do i install kuryr?
If the README.rst file of the repository is not enough for you in terms of
installation and configuration, please let us know what's not clear.

> - what are the components that need to be installed on the respective
> nodes?

You need to run the kuryr-libnetwork service on all the nodes that you use
as Docker 'workers', plus your chosen vendor's Neutron agents. For example,
for MidoNet that is midolman; for OVS it would be the Neutron OVS agent.


> - Do i need to install magnum for docker swarm?

Not familiar with Magnum.. Can not help you here.


If you want to run docker swarm in bare metal, you do not need Magnum. Only
keystone and Neutron.
You'd put docker swarm, neutron and keystone running in one node, and then
have N nodes with docker engine, kuryr/libnetwork and the neutron agents of
the vendor of your choice.
[Hongbin Lu] Yes, Magnum is optional if you prefer to install swarm/k8s/mesos 
manually or by other tools. What Magnum offers is basically automation of 
deployment plus a few management operations (i.e. scaling the cluster at 
runtime). From my point of view, if you prefer to skip Magnum, the main 
disadvantage is losing the ability to get a tenant-scoped swarm/k8s/mesos 
cluster on demand. In that case, you might have a static k8s/swarm/mesos 
cluster that is shared across multiple tenants.

> - Can i use docker swarm, kubernetes, mesos in openstack without using
> kuryr?

You can use swarm and kubernetes in OpenStack with Kuryr using Magnum. It
will use Neutron networking to provide networks to the VMs that will run
the swarm/kubernetes cluster. Inside the VMs, another overlay done by
flannel will be used (in k8s; in swarm I have not tried it).

[Hongbin Lu] Yes, I think Flannel is an alternative of Kuryr. If using Magnum, 
Flannel is supported in k8s and swarm. Magnum supports 3 Flannel backends: udp, 
vxlan, and host-gw. If you want an overlay solution, you can choose udp or 
vxlan. If you want a high-performance solution, the host-gw backend should work 
well.


What will be the disadvantages?

The disadvantages are that you do not get explicit Neutron networking for your 
containers,
you get less networking isolation for your VMs/containers and if you want the 
highest
performance, you have to change the default flannel mode.
[Hongbin Lu] That is true. If using Magnum, the default flannel backend is 
“udp”. Users need to turn on the “host-gw” backend to get the highest 
performance. We are discussing whether it makes sense to change the default 
to “host-gw”, so that users get non-overlay performance by default. In 
addition, the Magnum team is working with the Kuryr team to bring Kuryr in 
as a second network driver. Comparing Flannel and Kuryr, I think the main 
disadvantage of Flannel is not performance (the flannel host-gw backend 
should provide similar performance to Kuryr), but the lack of tight 
integration of containers with Neutron.



Only docker swarm right now. The kubernetes one will be addressed soon.

>
> Thanks
> Akshay
Thanks to you for giving it a try!



There are a bunch of people much more experienced than me in Kuryr. I hope I
haven't said anything stupid.

Best regards,

[1]: http://github.com/openstack/kuryr
 [2] https://hub.docker.com/r/kuryr/libnetwork/


--
Jaume Devesa
Software Engineer at Midokura
PGP key: 35C2D6B2 @ keyserver.ubuntu.com


[openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Chris Dent


Earlier this year I worked with jaypipes to compose a spec[1] for using
gabbi[2] with nova. Summit rolled around and there were some legitimate
concerns about the focus of the spec being geared towards replacing the
api sample tests. I wasn't at summit ☹ but my understanding of the
outcome of the discussion was (please correct me if I'm wrong):

* gabbi is not a straight replacement for the api-samples (notably
  it doesn't address the documentation functionality provided by
  api-samples)

* there are concerns, because of the style of response validation
  that gabbi does, that there could be a coverage gap[3] when a
  representation changes (in, for example, a microversion bump)

* we'll see how things go with the placement API work[4], which uses
  gabbi for TDD, and allow people to learn more about gabbi from
  that

Since that all seems to make sense, I've gone ahead and abandoned
the review associated with the spec as overreaching for the time
being.

I'd like, however, to replace it with a spec that is somewhat less
far-reaching in its plans: rather than replacing api-samples with gabbi,
it would augment existing tests of the API with gabbi-based tests. I think
this is a useful endeavor that will find and fix inconsistencies, but
I'd like to get some feedback from people so I can formulate a spec
that will actually be useful.

For reference, I started working on some integration of tempest and
gabbi[5] (based on some work that Mehdi did), and in the first few
minutes of writing tests found and reported bugs against nova and
glance, some of which have even been fixed since then. Win! We like
win.

The difficulty here, and the reason I'm writing this message, is
simply this: The biggest benefit of gabbi is the actual writing and
initial (not the repeated) running of the tests. You write tests, you
find bugs and inconsistencies. The second biggest benefit is going
back and being a human and reading the tests and being able to see
what the API is doing, request and response in the same place. That's
harder to write a spec about than "I want to add or change feature X".
There's no feature here.

I'm also aware that there is concern about adding yet another thing to
understand in the codebase.

So what's a reasonable course of action here?

Thanks.

P.S: If any other project is curious about using gabbi, it is easier
to use and set up than this discussion is probably making it sound
and extremely capable. If you want to try it and need some help,
just ask me: cdent on IRC.
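For anyone curious what gabbi looks like, a test file is declarative YAML
along these lines (the URL and response content here are invented for
illustration, not taken from any real nova or placement test):

```yaml
defaults:
    request_headers:
        accept: application/json

tests:
    - name: list widgets
      url: /widgets
      method: GET
      status: 200
      response_json_paths:
          $.widgets[0].name: alpha
```

Each entry makes one HTTP request and asserts on status, headers, or JSON
paths; the request and expected response live side by side, which is the
readability benefit described above.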

[1] https://review.openstack.org/#/c/291352/

[2] https://gabbi.readthedocs.io/

[3] This would be expected: Gabbi considers its job to be testing
the API layer, not the serializers and object that the API might be
using (although it certainly can validate those things).

[4] https://review.openstack.org/#/c/293104/

[5] http://markmail.org/message/z6z6ego4wqdaelhq

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2016-05-24 19:38:06 +:
> Frankly, this is one of the major negatives we've felt from the Big Tent 
> idea...
> 
> OpenStack used to be more of a product than it is now. When there were common 
> problems to be solved, there was pressure applied to solve them in a way 
> everyone (OpenStack Project, OpenStack Users, and OpenStack Operators) would 
> benefit from.
> 
> For a recent example, there has been a lot of talk about reimplementing 
> features from Barbican in Magnum, Keystone, etc., and not wanting to depend on 
> Barbican. In the pre-big-tent days, we'd just fix Barbican to do the things we 
> all need it to, and then start depending on it. Then everyone could start 
> depending on a solid secret store being there, since everyone would deploy it 
> because they would want at least one thing that depends on it (say, LBaaS, or 
> COE orchestration), and adding more projects that depend on it would be 
> easier for the operator. Instead I see a lot of trying to implement a hack 
> in each project to avoid depending on it, solving the problem for one 
> project but for no one else.

That's unfortunate. I remember talking about encouraging projects
to build on the work of others when we were discussing the big tent
changes.

  "Where it makes sense, the project cooperates with existing projects
  rather than gratuitously competing or reinventing the wheel." [1]

[1] http://governance.openstack.org/reference/new-projects-requirements.html

> It's a vicious chicken-and-egg cycle we have created. Projects don't want 
> to depend on things that are not commonly deployed. Operators don't want to 
> deploy something unless there's a direct reason to, or something they care 
> about depends on it. So our projects are encouraged to do bad things now, and I 
> think we're all suffering for it.

Project teams seem to have misinterpreted or misunderstood, but that's
not the guidance we've given them.

> Cross-project work became much harder after the big tent because there was 
> less reason to play nice with each other. OpenStack projects are already 
> fairly insular and this has made it worse. Opening up additional languages 
> makes it yet harder to work on the common stuff. I'm not against picking one 
> additional language for performance-critical stuff, but it should be 
> carefully considered, and the need for it should be carefully reasoned 
> about.
> 
> Thanks,
> Kevin
> 
> 
> From: Ian Cordasco [sigmaviru...@gmail.com]
> Sent: Tuesday, May 24, 2016 12:12 PM
> To: Jay Pipes; OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"
> 
> -Original Message-
> From: Jay Pipes 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: May 24, 2016 at 11:35:42
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"
> 
> > On 05/24/2016 06:19 AM, Thierry Carrez wrote:
> > > Chris Dent wrote:
> > >> [...]
> > >> I don't really know. I'm firmly in the camp that OpenStack needs to
> > >> be smaller and more tightly focused if a unitary thing called OpenStack
> > >> expects to be any good. So I'm curious about and interested in
> > >> strategies for figuring out where the boundaries are.
> > >>
> > >> So that, of course, leads back to the original question: Is OpenStack
> > >> supposed to be a unitary thing?
> > >
> > > As a data point, since I heard that question rhetorically asked quite a
> > > few times over the past year... There is an old answer to that, since a
> > > vote of the PPB (the ancestor of our TC) from June, 2011 which was never
> > > overruled or changed afterwards:
> > >
> > > "OpenStack is a single product made of a lot of independent, but
> > > cooperating, components."
> > >
> > > The log is an interesting read:
> > > http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-28-20.06.log.html
> >
> > Hmm, blast from the past. I'm sad I didn't make it to that meeting.
> >
> > I would (now at least) have voted for #2: OpenStack is "a collection of
> > independent projects that work together for some level of integration
> > and releases".
> >
> > This is how I believe OpenStack should be seen, as I wrote on Twitter
> > relatively recently:
> >
> > https://twitter.com/jaypipes/status/705794815338741761
> > https://twitter.com/jaypipes/status/705795095262441472
> 
> I'm honestly in the same boat as Chris. And I've constantly heard both. I 
> also frankly am not sure I agree with the idea that OpenStack is one product. 
> I think more along the lines of the way DefCore specifies OpenStack Compute 
> as a Product, etc. I feel like if every project contributed to the OpenStack 
> product, we might have a better adoption rate and a 

Re: [openstack-dev] [kuryr][magnum]Installing kuryr for mutlinode openstack setup

2016-05-25 Thread Akshay Kumar Sanghai
Hi,
Thanks Jaume and Antoni.
I tried the installation by git cloning the kuryr repo. I did `pip install
-r requirements.txt`. After that I did `pip install .`, but it doesn't complete
successfully. There are no config files in the /etc/kuryr directory.
root@compute1:~/kuryr# pip install .
Unpacking /root/kuryr
  Running setup.py (path:/tmp/pip-4kbPa8-build/setup.py) egg_info for
package from file:///root/kuryr
[pbr] Processing SOURCES.txt
warning: LocalManifestMaker: standard file '-c' not found

[pbr] In git context, generating filelist from git
warning: no previously-included files matching '*.pyc' found anywhere
in distribution
  Requirement already satisfied (use --upgrade to upgrade):
kuryr==0.1.0.dev422 from file:///root/kuryr in
/usr/local/lib/python2.7/dist-packages
Requirement already satisfied (use --upgrade to upgrade): pbr>=1.6 in
/usr/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): Babel>=2.3.4 in
/usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): Flask<1.0,>=0.10
in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
jsonschema!=2.5.0,<3.0.0,>=2.0.0 in /usr/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
netaddr!=0.7.16,>=0.7.12 in /usr/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
oslo.concurrency>=3.5.0 in /usr/local/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): oslo.log>=1.14.0
in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
oslo.serialization>=1.10.0 in /usr/local/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): oslo.utils>=3.5.0
in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
python-neutronclient>=4.2.0 in /usr/local/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): pyroute2>=0.3.10
in /usr/local/lib/python2.7/dist-packages (from kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
os-client-config>=1.13.1 in /usr/local/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
neutron-lib>=0.1.0 in /usr/local/lib/python2.7/dist-packages (from
kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in
/usr/local/lib/python2.7/dist-packages (from
Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in
/usr/lib/python2.7/dist-packages (from
Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade):
itsdangerous>=0.21 in /usr/local/lib/python2.7/dist-packages (from
Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
Requirement already satisfied (use --upgrade to upgrade): markupsafe in
/usr/lib/python2.7/dist-packages (from
Jinja2>=2.4->Flask<1.0,>=0.10->kuryr==0.1.0.dev422)
Cleaning up...
root@compute1:~/kuryr#


Thanks
Akshay




On Wed, May 25, 2016 at 4:24 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Wed, May 25, 2016 at 11:20 AM, Jaume Devesa  wrote:
>
>> Hello Akshay,
>>
>> responses inline:
>>
>> On Wed, 25 May 2016 10:48, Akshay Kumar Sanghai wrote:
>> > Hi,
>> > I have a 4 node openstack setup (1 controller, 1 network, 2 compute
>> nodes).
>> > I want to install kuryr in liberty version. I cannot find a package in
>> > ubuntu repo.
>>
>> There is no official release of Kuryr yet. You'll need to install from the
>> current master branch of the repo[1] (clone it, install the dependencies,
>> and run `python setup.py install`).
>>
>
>  Or you could run it dockerized. Read the "repo info" in [2]
>
> We are working on having the packaging ready, but we are splitting the
> repos first,
> so it will take a while for plain distro packages.
>
>
>> > -How do i install kuryr?
>> If the README.rst file of the repository is not enough for you in terms of
>> installation and configuration, please let us know what's not clear.
>>
>> > - what are the components that need to be installed on the respective
>> > nodes?
>>
>> You need to run the kuryr-libnetwork service on all the nodes that you
>> use as Docker 'workers'.
>>
>
> and your chosen vendor's neutron agents. For example, for MidoNet it's
> midolman, for ovs it would be the neutron ovs agent.
>
>
>>
>> > - Do i need to install magnum for docker swarm?
>>
>> Not familiar with Magnum, so I can't help you here.
>>
>
>
> If you want to run docker swarm in bare metal, you do not need Magnum. Only
> keystone and 

Re: [openstack-dev] [oslo] Log spool in the context

2016-05-25 Thread Doug Hellmann
Excerpts from Alexis Lee's message of 2016-05-25 13:46:05 +0100:
> Doug Hellmann said on Tue, May 24, 2016 at 02:53:51PM -0400:
> > Rather than forcing SpoolManager to be a singleton, maybe the thing
> > to do is build some functions for managing a singleton instance (or
> > one per type or whatever), and making that API convenient enough
> > that using the spool logger doesn't require adding a bunch of logic
> > and import_opt() calls all over the place.  Since it looks like the
> > convenience function would require looking at a config option owned
> > by the application, it probably shouldn't live in oslo.log, but
> > putting it in a utility module in nova might make sense.
> 
> OK, so if I understand you correctly, we'll have e.g. nova/tools.py
> containing something like:
> 
>   CONF.import_opt("spool_api")
>   SPOOL_MANAGERS = {}
> 
>   def get_api_logger(context):
>   if not CONF.spool_api:
>   return None
>   mgr = SPOOL_MANAGERS.setdefault('api', SpoolManager('api'))
>   return mgr.get_spool(context.request_id)
> 
> then in normal code:
> 
>   LOG = logging.getLogger(__name__)
> 
>   def some_method(ctx):
>   log = tools.get_api_logger(ctx) or LOG
> 
> That seems OK to me, I'll work on it, thank you both.
> 
> 
> Alexis (lxsli)

Yeah, something like that looks like what I was thinking.

That "or" statement in some_method() seems to imply though that
when spool logging is on, messages would *only* go through the
spooling logger. Is that what we want? Even for info messages?

Doug
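To make the shape of that concrete, here is a minimal stdlib-only sketch of
the singleton pattern under discussion; `SpoolManager`, `SPOOL_ENABLED`, and
the function names are illustrative stand-ins, not the actual oslo.log or
nova API:

```python
import logging

SPOOL_ENABLED = True   # stand-in for the CONF.spool_api option
_MANAGERS = {}         # module-level singletons, keyed by manager type


class SpoolManager(object):
    """Hands out one logger per request id, so that a request's
    messages can later be flushed or dropped as a unit."""

    def __init__(self, name):
        self.name = name
        self._spools = {}

    def get_spool(self, request_id):
        key = '%s.%s' % (self.name, request_id)
        if key not in self._spools:
            self._spools[key] = logging.getLogger(key)
        return self._spools[key]


def get_api_logger(request_id):
    """Return the spool logger for this request, or None when
    spooling is disabled (callers fall back to their module LOG)."""
    if not SPOOL_ENABLED:
        return None
    manager = _MANAGERS.setdefault('api', SpoolManager('api'))
    return manager.get_spool(request_id)


# Caller pattern from the thread: fall back to the module logger.
LOG = logging.getLogger(__name__)
log = get_api_logger('req-42') or LOG
log.info('handled request')
```

Note the behavior Doug questions above: with this `or` fallback, an enabled
spool captures everything, including info messages; sending messages to both
loggers would need an explicit handler arrangement instead.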


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo] Mitaka neutron-*aas are broken when --config-dir is passed

2016-05-25 Thread Doug Hellmann
Excerpts from Ihar Hrachyshka's message of 2016-05-25 14:03:24 +0200:
> Hi all,
> 
> Our internal Mitaka testing revealed that neutron-server fails to start when:
> - any neutron-*aas service plugin is enabled (in our particular case, it was 
> lbaas);
> - --config-dir option is passed to the process via CLI.
> 
> Since RDO/OSP neutron-server systemd unit files use --config-dir options 
> extensively, it renders all neutron-*aas broken as of Mitaka for us.
> 
> The failure is reported as: https://launchpad.net/bugs/1585102 and the 
> traceback can be found in: http://paste.openstack.org/show/498502/
> 
> As you can see, it crashes in provider_configuration neutron module, where we 
> have a custom parser for service_providers configuration:
> 
> https://github.com/openstack/neutron/blob/master/neutron/services/provider_configuration.py#L83
> 
> This code was introduced in Kilo, when neutron-*aas were split out of the tree. 
> The intent of the code at the time was to allow service plugins to load 
> neutron_*aas.conf files located in /etc/neutron/ that are not passed 
> explicitly to neutron-server via --config-file options. [A decision that was, 
> in my opinion, wrong in the first place: we should not have introduced 
> ‘magic’ in neutron that allowed the controller to load configuration files 
> implicitly, and we would be better off just relying on oslo.config 
> facilities, like using --config-dir to load an ‘unknown’ set of configuration 
> files.]

+1

> 
> The failure was triggered by oslo.config 3.8.0 release that is part of Mitaka 
> series, particularly by the following patch: 
> https://review.openstack.org/#q,Ibd0566f11df62da031afb128c9687c5e8c7b27ae,n,z 
> This patch, among other things, changed the type of ‘config_dir’ option from 
> string to list [of strings]. Since configuration options are not considered 
> part of public API, we can’t claim that oslo.config broke their API 
> guarantees and revert the patch. [Even if that would be the case, we could 
> not do it because we already released several Mitaka and Newton releases of 
> the library with the patch included, so it’s probably late to switch back.]
> 
> I have proposed a fix for provider_configuration module that would adopt the 
> new list type for the option: https://review.openstack.org/#/c/320304/ 
> Actually, it does not even rely on the option anymore, instead it pulls 
> values using config_dirs property defined on ConfigOpts objects, which I 
> assume is part of public API.
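As a stdlib-only illustration of that short-term direction (the function name
and file layout are hypothetical, and the `config_dirs` argument stands in
for the ConfigOpts property of the same name):

```python
import glob
import os
import tempfile


def provider_config_files(config_dirs):
    """Collect *.conf files from the explicitly supplied --config-dir
    directories, in load order, instead of implicitly globbing a
    hard-coded /etc/neutron path."""
    found = []
    for directory in config_dirs:
        found.extend(sorted(glob.glob(os.path.join(directory, '*.conf'))))
    return found


# Tiny demonstration with a throwaway directory.
demo_dir = tempfile.mkdtemp()
for name in ('b.conf', 'a.conf', 'readme.txt'):
    open(os.path.join(demo_dir, name), 'w').close()
demo = [os.path.basename(p) for p in provider_config_files([demo_dir])]
# demo is now ['a.conf', 'b.conf']: sorted, and the .txt file ignored
```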
> 
> Since Mitaka supports anything oslo.config >= 3.7.0, we would also need to 
> support the older type in some graceful way, if we backport the fix there.
> 
> Doug Hellmann has concerns about the approach taken. In his own words, "This 
> approach may solve the problem in the short term, but it's going to leave you 
> with some headaches later in this cycle when we expand oslo.config.” 
> Specifically, "There are plans under way to expand configuration sources from 
> just files and directories to use URLs. I expect some existing options to be 
> renamed or otherwise deprecated as part of that work, and using the option 
> value here will break neutron when that happens.” (more details in the patch)

> 
> First, it’s a surprise to me that config_dirs property (not an option) is not 
> part of public API of the library. I thought that if something is private, we 
> name it with a leading underscore. (?)

I was conflating "config_dirs" with the --config-dir option value,
which as you point out is incorrect.  Sorry for the confusion.

> 
> If we don’t have public access to the symbol, a question arises on how we 
> tackle that in neutron/mitaka (!). Note that we are not talking about a next 
> release, it’s current neutron/mitaka that is broken and should be fixed to 
> work with oslo.config 3.8.0, so any follow up work in oslo.config itself 
> won’t make it to stable/mitaka for the library. So we need some short term 
> solution here.
> 
> Doug suggested that neutron team would work with oslo folks to expose missing 
> bits from oslo.config to consumers: "There are several ways we could address 
> the need here. For example, we could provide a method that returns the source 
> info (file names, directories, etc.). We could add a class method that has 
> the effect of making a new ConfigOpts instance with the same source 
> information as an existing object passed to it. Or we could split the config 
> locating logic out of ConfigOpts and make it a separate object that can be 
> shared. We should discuss those options on the ML, so please start a thread.”
> 
> It may be a good idea, but honestly, I don’t want to see neutron following 
> the path we took back in kilo. I would prefer seeing neutron getting rid of 
> implicit loading of specifically named configuration files for service 
> plugins (and just for a single option!)
> 
> My plan to get out of those woods would be:
> - short term, we proceed on the direction I took with the patch, adopting 
> list 

Re: [openstack-dev] How to run a task when multiple tasks are completed

2016-05-25 Thread Alexey Shtokolov
Hi!

When all four tasks are on the same node, you should use the 'requires' and
'required_for' fields in the task definition to create dependencies between
them [0].
E.g.:

- id: Task4
  version: [a version of the tasks graph execution engine]
  type: [one of: stage, group, skipped, puppet, shell]
  role: [matches roles for which this task should be executed]
  requires: [Task1, Task2, Task3]


For cross-node dependencies, you can use the 'cross-depends' and
'cross-depended-by' fields [0].

[0] -
https://docs.mirantis.com/openstack/fuel/fuel-8.0/reference-architecture.html#task-based-deployment
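For the cross-node case, a task declaration would look roughly like this
(the task ids are illustrative; check the reference [0] for the exact
schema):

```yaml
- id: Task4
  type: puppet
  version: 2.0.0
  cross-depends:
      - name: Task1
      - name: Task2
      - name: Task3
```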


2016-05-25 14:16 GMT+03:00 Jonnalagadda, Venkata <
venkata.jonnalaga...@intl.att.com>:

> Fuel Team,
>
>
>
> I have couple of tasks in Fuel (deployment_tasks.yaml) as below –
>
>
>
> Task1
>
> Task2
>
> Task3
>
> Task4
>
>
>
> Now, I want to run Task4 only when Tasks-1,2,3 are completed. How I can
> configure this in deployment_tasks yaml ? Please suggest.
>
>
>
> *Thanks & Regards,*
>
>
>
> *J. Venkata Mahesh*
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-25 Thread Nikhil Komawar
Yes, a downside indeed.

We will rely on the community being civil about this, and on me removing
people who did not RSVP if we need to free up slots.

On 5/25/16 10:24 AM, Flavio Percoco wrote:
> On 25/05/16 09:53 -0400, Nikhil Komawar wrote:
>> Thanks Flavio, Erno.
>>
>> Right now we've 10 participants who have RSVP yes. I was waiting for
>> last min additions but I think we can close the RSVP.
>>
>> We should go for a google hangout on air that can be streamed live on
>> youtube for those merely interested in listening. For all the 10 spots,
>> we can accommodate on the hangout for participation.
>>
>> Here's the link for the event:
>> https://plus.google.com/events/cb4acoebucn25vu8f7enprp85j4
>>
>> More info:
>> https://wiki.openstack.org/wiki/VirtualSprints#Image_Import_Refactor_Sync_.231_--_Newton
>>
>
> Works for me!
>
> The downside of this is that any late addition might be left out, as the
> limit of participants is 10. Also, if a late addition comes in before any
> of the people who had +1'd, the latter would be left out.
>
> I'll be there on time :D
> Flavio
>
>> On 5/25/16 8:51 AM, Erno Kuvaja wrote:
>>>
>>>
>>> On Wed, May 25, 2016 at 12:54 PM, Flavio Percoco >> > wrote:
>>>
>>> On 20/05/16 18:00 -0400, Nikhil Komawar wrote:
>>>
>>> Hello all,
>>>
>>>
>>> I want to propose having a dedicated virtual sync next week
>>> Thursday May
>>> 26th at 1500UTC for one hour on the Import Refactor work [1]
>>> ongoing in
>>> Glance. We are making a few updates to the spec; so it would
>>> be good to
>>> have everyone on the same page and soon start merging those
>>> spec changes.
>>>
>>>
>>> This is tomorrow! Have we decided on the tool?
>>>
>>> An invite would be lovely :D
>>>
>>> ++
>>>
>>> Either one of us can provide bluejeans meeting if needed/wanted.
>>>
>>> - Erno
>>>
>>>
>>> Flavio
>>>
>>> --
>>> @flaper87
>>> Flavio Percoco
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> -- 
>>
>> Thanks,
>> Nikhil
>>
>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] let's give a different warning message for different OS_PROJECT_NAME ?

2016-05-25 Thread Amrith Kumar
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Wednesday, May 25, 2016 6:06 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] let's give a different warning
> message for different OS_PROJECT_NAME ?
> 
> On 05/25/2016 04:05 AM, li.yuanz...@zte.com.cn wrote:
> > Hi All,
> > A warning message left me with a doubt.
> >
> > After having installed OpenStack with devstack, when I use the cmd
> > "source openrc", a warning message is printed in the terminal:
> > "WARNING: setting legacy OS_TENANT_NAME to support cli tools."
> > Then when I execute the cmd "source openrc admin admin", it gives
> > the same info.
> >
> > At first, I thought I had made a mistake in "source openrc" or in
> > installing OpenStack. After reading the openrc script, I realized
> > this message is printed no matter what arguments are passed to
> > openrc.
> >
> > So I think this message is ambiguous for the user. Could we either
> > drop this 'echo "WARNING: setting legacy OS_TENANT_NAME to support
> > cli tools."' or print different info for different arguments, such as:
> > echo "WARNING: setting legacy
> > OS_TENANT_NAME=$OS_PROJECT_NAME to support cli tools."
> >
> > What do you think?
> 
> I think that's a good suggestion, please propose a patch and we should
> be able to get it in pretty quickly.

Would people be OK with listing all of the legacy parameters that are being set 
in the same format as a standard deprecation warning?

-amrith

> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Amrith Kumar

From: Augustina Ragwitz [mailto:aragwitz.li...@pobox.com]
Sent: Tuesday, May 24, 2016 10:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific 
technical areas


On 12:54 May 24, Augustina Ragwitz wrote:
Hi Emily,

I'm the Nova Mentoring Czar and we have a Wiki page with a list of
projects that would be good for new contributors:
https://wiki.openstack.org/wiki/Nova/Mentoring

For Nova, I'd encourage potential contributors to get involved with a
specific project so that mentoring can happen organically. Interested
folks are more than welcome to reach out to me, preferably by email.

There's an assumption here that all projects have things in place to begin
mentoring people. With the people we've spoken to, sometimes just reaching out on

[amrith] Maybe Mike is saying that the assumption in Emily's email is that all 
projects have something in place to begin mentoring people?


IRC gave no answers. This is actually matching people to someone who has
knowledge and is interested/has time to mentor. Even if a match can't be made
right away, communication is made. First impressions with on boarding is key.

[amrith] Yes, first impressions are key, and my experience with mentorship in 
the recent past has been very unfortunate. I think the website that auggy has 
put together makes a great first impression, and I'd like to make something 
like that for Trove.
--
Mike Perez

I'm a little confused by your response. I wasn't making any assumptions or 
intending to criticize this mentorship program. I understood that Emily had 
highlighted gaps in certain technical areas, of which Nova is one. In 
recognition of the challenges faced by new contributors, the Nova team had a 
session at the Newton Design Summit where we discussed ideas on how to address 
these challenges within our own team. One outcome of this session is that I 
volunteered for the role of Mentoring Czar. When I saw Emily's original post, I 
thought this information might be relevant. My intention is to share our 
resources for new contributors and present myself as a contact point so this 
information could be provided to participants in the mentorship program that 
don't have mentors assigned. In fact, if other projects do have things in place 
for new contributors, it would probably be helpful if they also provided this 
information to the mentorship program.

Again, my intention was not to criticize and I think any effort to encourage 
new contributors is a good thing. I apologize if my original response suggested 
otherwise.

--
Augustina Ragwitz
Sr Systems Software Engineer, HPE Cloud
Hewlett Packard Enterprise
irc: auggy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-25 Thread Ryan Moats
John McDowall  wrote on 05/24/2016 06:33:05
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" 
> Date: 05/24/2016 06:33 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Thanks for getting back to me and pointing me in a more OVS-like
> direction. What you say makes sense; let me hack something together.
> I have been a little distracted getting some use cases together. The
> other area is how best to map the flow-classifier; I have been
> thinking about it a little, but I will leave it until after we get
> the chains done.
>
> Your load-balancing comment was very interesting – I saw some
> patches for load-balancing a few months ago but nothing since. It
> would be great if we could align with load-balancing as that would
> make a really powerful solution.
>
> Regards
>
> John

John-

For the load balancing, I believe that you'll want to look at
openvswitch's select group, as that should let you set up multiple
buckets for each egress port in the port pairs that make up a port
group.

As I understand it, Table 0 identifies the logical port and logical
flow. I'm worried that this means we'll end up with separate bucket
rules for each ingress port of the port pairs that make up a port
group, leading to a cardinality product in the number of rules.
I'm trying to think of a way where Table 0 could identify the packet
as being part of a particular port group, and then I'd only need one
set of bucket rules to figure out the egress side.  However, the
amount of free metadata space is limited and so before we go down
this path, I'm going to pull Justin, Ben and Russell in to see if
they buy into this idea or if they can think of an alternative.
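As a rough, hypothetical sketch of what a select group looks like at the
ovs-ofctl level (the bridge, port, and group numbers are invented, and this
is not taken from the OVN patches):

```shell
# One select group hashing traffic across the egress ports of two
# port pairs; a single classifier flow then points at the group.
ovs-ofctl -O OpenFlow13 add-group br-int \
    'group_id=1,type=select,bucket=output:10,bucket=output:11'
ovs-ofctl -O OpenFlow13 add-flow br-int \
    'table=0,priority=100,in_port=5,actions=group:1'
```

The attraction is that the switch, not the controller, picks a bucket per
flow, which is what makes per-port-group load balancing possible without a
rule per ingress/egress pair.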

Ryan

>
> From: Ryan Moats 
> Date: Monday, May 23, 2016 at 9:06 PM
> To: John McDowall 
> Cc: "disc...@openvswitch.org" , OpenStack
> Development Mailing List 
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall  wrote on 05/18/2016
> 03:55:14 PM:
>
> > From: John McDowall 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org" , "OpenStack
> > Development Mailing List" 
> > Date: 05/18/2016 03:55 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > OK all three repos and now aligned with their masters. I have done
> > some simple level system tests and I can steer traffic to a single
> > VNF.  Note: some additional changes to networking-sfc to catch-up
> > with their changes.
> >
> > https://github.com/doonhammer/networking-sfc
> > https://github.com/doonhammer/networking-ovn
> > https://github.com/doonhammer/ovs
> >
> > The next tasks I see are:
> >
> > 1. Decouple networking-sfc and networking-ovn. I am thinking that I
> > will pass a nested port-chain dictionary holding port-pairs/port-
> > pair-groups/flow-classifiers from networking-sfc to networking-ovn.
> > 2. Align the interface between networking-ovn and ovs/ovn to match
> > the nested dictionary in 1.
> > 3. Modify the ovn-nb schema and ovn-northd.c to match the port-chain
model.
> > 4. Add ability to support chain of port-pairs
> > 5. Think about flow-classifiers and how best to map them, today I
> > just map the logical-port and ignore everything else.
> >
> > Any other suggestions/feedback?
> >
> > Regards
> >
> > John
>
> John-
>
> (Sorry for sending this twice, but I forgot that text/html is not liked
> by the mailing lists ...)
>
> My apologies for not answering this sooner - I was giving a two day
> training on Tues/Wed last week and came back to my son graduating
> from HS the next day, so things have been a bit of a whirlwind here.
>
> Looking at the github repos, I like the idea of passing a dictionary
> from networking-sfc to networking-ovn. The flow classifiers should
> be relatively straightforward to map to ovs match rules (famous last
> words)...
>
> I've probably missed an orbit here, but in the ovn-northd implementation,
> I was expecting to find service chains in the egress and router pipelines
> in addition to the ingress pipeline (see below for why I think a service
> chain stage in the egress pipeline makes sense ...)
>
> Also, in the ovn-northd implementation, I'm a little disturbed to see the
> ingress side of the service chain sending packets to output ports - I
> think that a more scalable (and more "ovs-like") approach would be to
> match the egress side of a port pair in the chaining stage of the
> ingress pipeline, with an action that sets the input port register.
> Then the egress pipeline would have a chaining stage where the 

Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-25 Thread Flavio Percoco

On 25/05/16 09:53 -0400, Nikhil Komawar wrote:

Thanks Flavio, Erno.

Right now we have 10 participants who have RSVP'd yes. I was waiting for
last-minute additions, but I think we can close the RSVP.

We should go for a Google Hangout on Air that can be streamed live on
YouTube for those merely interested in listening. All 10 participants
can be accommodated on the hangout.

Here's the link for the event:
https://plus.google.com/events/cb4acoebucn25vu8f7enprp85j4

More info:
https://wiki.openstack.org/wiki/VirtualSprints#Image_Import_Refactor_Sync_.231_--_Newton


Works for me!

The downside of this is that any late addition might be left out, as the limit is
10 participants. Also, if a late addition comes in before any of the people
who had +1'd, the latter would be left out.

I'll be there on time :D
Flavio


On 5/25/16 8:51 AM, Erno Kuvaja wrote:



On Wed, May 25, 2016 at 12:54 PM, Flavio Percoco wrote:

On 20/05/16 18:00 -0400, Nikhil Komawar wrote:

Hello all,


I want to propose having a dedicated virtual sync next week
Thursday May
26th at 1500UTC for one hour on the Import Refactor work [1]
ongoing in
Glance. We are making a few updates to the spec; so it would
be good to
have everyone on the same page and soon start merging those
spec changes.


This is tomorrow! Have we decided the tool?

An invite would be lovely :D

++

Either one of us can provide a BlueJeans meeting if needed/wanted.

- Erno


Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






--

Thanks,
Nikhil







--
@flaper87
Flavio Percoco




Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Amrith Kumar
Auggy,

This is a great wiki page, and I would love to see a high-level "OpenStack
Mentoring" page with links to projects that have this kind of "beginners guide".

I'll try and do something similar for Trove but you've set the bar very high.

-amrith

> -Original Message-
> From: Augustina Ragwitz [mailto:aragwitz.li...@pobox.com]
> Sent: Tuesday, May 24, 2016 3:55 PM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in
> specific technical areas
> 
> Hi Emily,
> 
> I'm the Nova Mentoring Czar and we have a Wiki page with a list of
> projects that would be good for new contributors:
> https://wiki.openstack.org/wiki/Nova/Mentoring
> 
> For Nova, I'd encourage potential contributors to get involved with a
> specific project so that mentoring can happen organically. Interested
> folks are more than welcome to reach out to me, preferably by email.
> 
> --
> Augustina Ragwitz
> Sr Systems Software Engineer, HPE Cloud
> Hewlett Packard Enterprise
> ---
> irc: auggy
> 



[openstack-dev] [all] expire old bug reports

2016-05-25 Thread Markus Zoeller
FYI, script [1] expires old bug reports. It looks like some projects
have done something like this before too. Please leave comments in the
review if you think the bug report comment can be improved. Using the
same comment (and final status of the bug report) across the projects
could be useful. Feel free to use the script in your favorite project.
I'll do that in week R-13 for Nova [2].

References:
[1] https://review.openstack.org/#/c/321008/
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095654.html
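
The gist of the selection rule can be sketched in a few lines (the names
here are made up for illustration; [1] is the authoritative
implementation, which also supports skipping specific reports):

```python
from datetime import datetime, timedelta, timezone

# "older than 18 months", approximated in days.
EXPIRY_AGE = timedelta(days=18 * 30)


def is_expirable(last_updated, tags=(), skip_tags=("never-expire",), now=None):
    """Decide whether an open bug report should be auto-expired.

    Reports carrying a skip tag are always kept; anything else that has
    not been touched for EXPIRY_AGE gets expired.
    """
    now = now or datetime.now(timezone.utc)
    if any(tag in skip_tags for tag in tags):
        return False
    return (now - last_updated) > EXPIRY_AGE
```

Tagging a report before the run is then enough to keep it open.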


-- 
Regards, Markus Zoeller (markus_z)




Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-25 Thread Markus Zoeller
On 24.05.2016 19:05, Doug Hellmann wrote:
> Excerpts from Markus Zoeller's message of 2016-05-24 11:00:35 +0200:
>> On 24.05.2016 09:34, Duncan Thomas wrote:
>>> Cinder bugs list was far more manageable once this had been done.
>>>
>>> It is worth sharing the tool for this? I realise it's fairly trivial to
>>> write one, but some standardisation on the comment format etc seems
>>> valuable, particularly for Q/A folks who work between different projects.
>>
>> A first draft (without the actual expiring) is at [1]. I'm going to
>> finish it this week. If there is a place in an OpenStack repo, just give
>> me a pointer and I'll push a change.
>>
>>> On 23 May 2016 at 14:02, Markus Zoeller  wrote:
>>>
 TL;DR: Automatic closing of 185 bug reports which are older than 18
 months in the week R-13. Skipping specific bug reports is possible. A
 bug report comment explains the reasons.
 [...]
>>
>> References:
>> [1]
>> https://github.com/markuszoeller/openstack/blob/master/scripts/launchpad/expire_old_bug_reports.py
>>
> 
> Feel free to submit that to the openstack-infra/release-tools repo. We
> have some other tools in that repo for managing launchpad bugs.
> 
> Doug

Thanks! I pushed https://review.openstack.org/#/c/321008/


-- 
Regards, Markus Zoeller (markus_z)




Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-25 Thread Nikhil Komawar
Thanks Flavio, Erno.

Right now we have 10 participants who have RSVP'd yes. I was waiting for
last-minute additions, but I think we can close the RSVP.

We should go for a Google Hangout on Air that can be streamed live on
YouTube for those merely interested in listening. All 10 participants
can be accommodated on the hangout.

Here's the link for the event:
https://plus.google.com/events/cb4acoebucn25vu8f7enprp85j4

More info:
https://wiki.openstack.org/wiki/VirtualSprints#Image_Import_Refactor_Sync_.231_--_Newton

On 5/25/16 8:51 AM, Erno Kuvaja wrote:
>
>
> On Wed, May 25, 2016 at 12:54 PM, Flavio Percoco wrote:
>
> On 20/05/16 18:00 -0400, Nikhil Komawar wrote:
>
> Hello all,
>
>
> I want to propose having a dedicated virtual sync next week
> Thursday May
> 26th at 1500UTC for one hour on the Import Refactor work [1]
> ongoing in
> Glance. We are making a few updates to the spec; so it would
> be good to
> have everyone on the same page and soon start merging those
> spec changes.
>
>
> This is tomorrow! Have we decided the tool?
>
> An invite would be lovely :D
>
> ++
>
> Either one of us can provide a BlueJeans meeting if needed/wanted.
>
> - Erno
>  
>
> Flavio
>
> -- 
> @flaper87
> Flavio Percoco
>
>
>
>
>

-- 

Thanks,
Nikhil



Re: [openstack-dev] [Magnum][Octavia] Need help for LBaaS v2

2016-05-25 Thread Kobi Samoray
Hi Wanghua,
I think that you have a networking issue within your setup: the Octavia service
is unable to connect to the running Amphora.
If that is the case, you will have to debug your setup, as this is a
connectivity issue and not a software problem.

On May 24, 2016, at 05:45, 王华 wrote:

Hi Michael,

Thank you for your help. Do you need any other logs to debug this problem?

Regards,
Wanghua

On Tue, May 24, 2016 at 12:31 AM, Michael Johnson wrote:
Hi Wanghua,

From the o-cw log, it looks like the amphora service VM did not boot
properly or the network is not configured correctly.  We can see that
Nova said the VM went active, but the amphora-agent inside the image
never became reachable.  I would check the nova instance console log
to make sure the instance finished booting and check that the octavia
management network is properly setup such that the amphora-agent can
be reached.

Michael


On Sun, May 22, 2016 at 8:31 PM, 王华 wrote:
> Hi all,
>
> Previously Magnum used LBaaS v1. Now LBaaS v1 is deprecated, so we want to
> replace it by LBaaS v2. But I met a problem in my patch
> https://review.openstack.org/#/c/314060/. I could not figure out why it
> didn't work. It seems there are some errors in
> http://logs.openstack.org/60/314060/5/check/gate-functional-dsvm-magnum-k8s/6e3795e/logs/screen-o-cw.txt.gz.
> Can anyone in Octavia team help me to figure out why my patch doesn't work
> in the gate?  Thank you.
>
> Regards,
> Wanghua
>
>





Re: [openstack-dev] [magnum] Need helps to implement the full baremetals support

2016-05-25 Thread Yuanying OTSUKA
Hi, Spyros

I fixed the conflicts and uploaded the following patch:
* https://review.openstack.org/#/c/320968/

It isn’t tested yet, though, so it may not work.
If you have a question, please feel free to ask.


Thanks
-yuanying



On Wed, 25 May 2016 at 17:56, Spyros Trigazis wrote:

> Hi Yuanying,
>
> please upload your workaround. I can test it and try to fix the conflicts.
> Even if it conflicts we can have some iterations on it.
>
> I'll upload later what worked for me on devstack.
>
> Thanks,
> Spyros
>
> On 25 May 2016 at 05:13, Yuanying OTSUKA  wrote:
>
>> Hi, Hongbin, Spyros.
>>
>> I’m also interested in this work.
>> I have a workaround patch to support ironic
>> (but it currently conflicts with master).
>> Would it be helpful to upload it as an initial step of the implementation?
>>
>> Thanks
>> -yuanying
>>
>> On Wed, 25 May 2016 at 6:52, Hongbin Lu wrote:
>>
>>> Hi all,
>>>
>>>
>>>
>>> One of the most important feature that Magnum team wants to deliver in
>>> Newton is the full baremetal support. There is a blueprint [1] created for
>>> that and the blueprint was marked as “essential” (that is the highest
>>> priority). Spyros is the owner of the blueprint and he is looking for helps
>>> from other contributors. For now, we immediately need help to fix the
>>> existing Ironic templates [2][3][4] that are used to provision a Kubernetes
>>> cluster on top of baremetal instances. These templates used to work,
>>> but they have become outdated. We need help to fix those Heat templates
>>> as an initial step of the implementation. Contributors are expected to
>>> follow the Ironic devstack guide to setup the environment. Then, exercise
>>> those templates in Heat.
>>>
>>>
>>>
>>> If you are interested in taking on the work, please contact Spyros or me and we
>>> will coordinate the efforts.
>>>
>>>
>>>
>>> [1]
>>> https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
>>>
>>> [2]
>>> https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster-fedora-ironic.yaml
>>>
>>> [3]
>>> https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubemaster-fedora-ironic.yaml
>>>
>>> [4]
>>> https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubeminion-fedora-ironic.yaml
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Hongbin
>>>
>>>
>>
>>
>>
>


Re: [openstack-dev] [keystone] New Core Reviewer (sent on behalf of Steve Martinelli)

2016-05-25 Thread Lance Bragstad
Congratulations Rodrigo!

Thank you for all the continued and consistent reviews.

On Tue, May 24, 2016 at 1:28 PM, Morgan Fainberg 
wrote:

> I want to welcome Rodrigo Duarte (rodrigods) to the keystone core team.
> Rodrigo has been a consistent contributor to keystone and has been
> instrumental in the federation implementations. Over the last cycle he has
> shown an understanding of the code base and contributed quality reviews.
>
> I am super happy (as proxy for Steve) to welcome Rodrigo to the Keystone
> Core team.
>
> Cheers,
> --Morgan
>
>
>


Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-25 Thread Erno Kuvaja
On Wed, May 25, 2016 at 12:54 PM, Flavio Percoco wrote:

> On 20/05/16 18:00 -0400, Nikhil Komawar wrote:
>
>> Hello all,
>>
>>
>> I want to propose having a dedicated virtual sync next week Thursday May
>> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
>> Glance. We are making a few updates to the spec; so it would be good to
>> have everyone on the same page and soon start merging those spec changes.
>>
>
> This is tomorrow! Have we decided the tool?
>
> An invite would be lovely :D
>
> ++

Either one of us can provide a BlueJeans meeting if needed/wanted.

- Erno


> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
>
>


Re: [openstack-dev] [oslo] Log spool in the context

2016-05-25 Thread Alexis Lee
Doug Hellmann said on Tue, May 24, 2016 at 02:53:51PM -0400:
> Rather than forcing SpoolManager to be a singleton, maybe the thing
> to do is build some functions for managing a singleton instance (or
> one per type or whatever), and making that API convenient enough
> that using the spool logger doesn't require adding a bunch of logic
> and import_opt() calls all over the place.  Since it looks like the
> convenience function would require looking at a config option owned
> by the application, it probably shouldn't live in oslo.log, but
> putting it in a utility module in nova might make sense.

OK, so if I understand you correctly, we'll have e.g. nova/tools.py
containing something like:

  CONF.import_opt("spool_api")
  SPOOL_MANAGERS = {}

  def get_api_logger(context):
  if not CONF.spool_api:
  return None
  mgr = SPOOL_MANAGERS.setdefault('api', SpoolManager('api'))
  return mgr.get_spool(context.request_id)

then in normal code:

  LOG = logging.getLogger(__name__)

  def some_method(ctx):
  log = tools.get_api_logger(ctx) or LOG

That seems OK to me, I'll work on it, thank you both.
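
To make that concrete for myself, here is a self-contained sketch of the
singleton-registry pattern (the Spool buffering semantics below are
assumed for illustration only, not taken from the actual oslo.log
proposal):

```python
import logging


class Spool(object):
    """Buffers log lines for a single request and flushes them as one block."""

    def __init__(self, logger, request_id):
        self._logger = logger
        self._request_id = request_id
        self._buffer = []

    def info(self, msg):
        self._buffer.append(msg)

    def flush(self):
        # Emit everything gathered for this request in one contiguous run.
        for msg in self._buffer:
            self._logger.info('[%s] %s', self._request_id, msg)
        self._buffer = []


class SpoolManager(object):
    """Hands out one Spool per request id."""

    def __init__(self, name):
        self._logger = logging.getLogger(name)
        self._spools = {}

    def get_spool(self, request_id):
        return self._spools.setdefault(
            request_id, Spool(self._logger, request_id))


# Module-level registry playing the "one manager per type" singleton role.
_MANAGERS = {}


def get_manager(kind):
    return _MANAGERS.setdefault(kind, SpoolManager(kind))
```

The module-level dict keeps the convenience function cheap to call from
anywhere without repeated import_opt() boilerplate.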


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79



Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Denis Makogon
Hello to All.

This message is not about arguing whether OpenStack needs Go or any
other language.

This is a good discussion. The main question here is "Go along with
Python for OpenStack", and the problem is supporting Go code, from the
need for skilled Go developers to CI/CD infrastructure, etc.

Correct me if I'm wrong, but none of the messages above mentioned
supporting Go extensions for Python (C extensions were mentioned a couple
of times). Starting with Go 1.5 it is possible to develop extensions for
Python [1] (a lib that helps to develop such extensions is at [2]).
The idea is:
  - "If you think that your project is an exceptional one (swift, designate,
etc.) and you really think that Golang is what you need,
 then why can't you develop your own Go extensions and write Python
libs that utilize that code,
 and then add that new Python dependency to your project?"
  - distribute your Go extensions (*.so files) as DEB/RPM for further
consumption, in DevStack for example (like we do for multiple components:
Kafka, Cassandra, MySQL, RMQ, etc.)
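
To illustrate the Python side of [1]: loading a Go c-shared library could
look roughly like this. Note that libsum.so and its exported Sum function
are purely hypothetical names for this sketch; the .so itself would be
built with `go build -buildmode=c-shared` from Go code carrying an
`//export Sum` directive.

```python
import ctypes


def load_go_extension(path):
    """Load a hypothetical Go c-shared library and declare its signatures."""
    lib = ctypes.CDLL(path)
    # Declare the C ABI of the exported function so ctypes converts
    # arguments and the return value correctly.
    lib.Sum.argtypes = [ctypes.c_int, ctypes.c_int]
    lib.Sum.restype = ctypes.c_int
    return lib
```

A thin Python module wrapping load_go_extension() would then be the only
dependency the rest of the project sees.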

As I see it, such an approach would keep Python as the main language for
OpenStack development, avoid the overhead of building new infrastructure
for Go, and still allow projects to use Go for developing their
extensions outside of the OpenStack Big Tent.

[1] https://blog.filippo.io/building-python-modules-with-go-1-5/
[2] https://github.com/sbinet/go-python

Kind regards,
Denys Makogon


2016-05-25 14:21 GMT+03:00 Flavio Percoco :

> On 25/05/16 06:48 -0400, Sean Dague wrote:
>
> [snip]
>
>
> 4. Do we want to be in the business of building data plane services that
>> will all run into python limitations, and will all need to be rewritten
>> in another language?
>>
>> This is a slightly different spin on the question Thierry is asking.
>>
>> Control Plane services are very unlikely to ever hit a scaling concern
>> where rewriting the service in another language is needed for
>> performance issues. These are orchestrators, and the time spent in them
>> is vastly less than the operations they trigger (start a vm, configure a
>> switch, boot a database server). There was a whole lot of talk in the
>> threads of "well that's not innovative, no one will want to do just
>> that", which seems weird, because that's most of OpenStack. And it's
>> pretty much where all the effort in the containers space is right now,
>> with a new container fleet manager every couple of weeks. So thinking
>> that this is a boring problem no one wants to solve, doesn't hold water
>> with me.
>>
>> Data Plane services seem like they will all end up in the boat of
>> "python is not fast enough". Be it serving data from disk, mass DNS
>> transfers, time series database, message queues. They will all
>> eventually hit the python wall. Swift hit it first because of the
>> maturity of the project and they are now focused on this kind of
>> optimization, as that's what their user base demands. However I think
>> all other data plane services will hit this as well.
>>
>> Glance (which is partially a data plane service) did hit this limit, and
>> the way it is largely mitigated by folks is by using Ceph and exposing
>> that
>> directly to Nova so now Glance is only in the location game and metadata
>> game, and Ceph is in the data plane game.
>>
>
> Sorry for nitpicking here, but Glance's API remains a data API. Sure it
> stores locations, and sure you can do fancy things with those locations,
> but, as
> far as end users go, it's still a data API. It is not used as
> intensively as
> Swift's, though. Ceph's driver allows for fancier things to be done but
> there
> are deployments which don't use Ceph.
>
> I believe it'd be better to separate data services that *own* the data from
> those that integrate other backends. Swift owns the data. You upload it to
> swift, it stores the data using its own strategies and it serves it.
> Glance gets
> the data, puts it in some other store and then you can either access it
> (not
> always) directly from the store or have Glance serving it back.
>
>
> When it comes to doing data plan services in OpenStack, I'm quite mixed.
>> The technology concerns for data plane
>> services are quite different. All the control plane services kind of
>> look and feel the same. An API + worker model, a DB for state, message
>> passing / rpc to put work to the workers. This is a common pattern and
>> is something which even for all the project differences, does end up
>> kind of common between parts. Projects that follow this model are
>> debuggable as a group not too badly.
>>
>> 5. Where does Swift fit?
>>
>> This I think has always been a tension point in the community (at least
>> since I joined in 2012). Swift is an original service of OpenStack, as
>> it started as Swift and Nova. But they were very different things. Swift
>> is a data service, Nova was a control plane. Much of what is now
>> OpenStack is Nova derivative in some way (some times 

[openstack-dev] [Fuel] Storing deployment configuration before or after a successful deployment

2016-05-25 Thread Roman Prykhodchenko
Folks,

Recently we were investigating an issue [1] in which a user configured a cluster
in a way that caused the deployment to fail, and then expected the Discard button
to allow resetting the changes made after that failure. As Julia mentioned in her
comment on the bug, what we’ve got is that users perceive the cluster.deployed
attribute as a snapshot of the latest deployment configuration, while it was
designed to keep the latest configuration of a successful deployment. Should we
reconsider the meaning of that attribute, and therefore the features and the
behaviour of the Discard button?


References:

1. https://bugs.launchpad.net/fuel/+bug/1584681





Re: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific technical areas

2016-05-25 Thread Amrith Kumar
Emily,

Is this mentoring program in any way related to Outreachy[1] or is that a 
different program altogether?

Your email says, "people to the guidelines (here and here) and". Both of
those appear to be links to the same document.

Thanks,

-amrith

[1] https://www.gnome.org/outreachy/

From: Emily K Hugenbruch [mailto:ekhugenbr...@us.ibm.com]
Sent: Monday, May 23, 2016 10:25 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific 
technical areas


Hi,
The lightweight mentoring program sponsored by the Women of OpenStack has 
really taken off, and we have about 35 mentees looking for technical help that 
we don't have mentors for. We're asking for help from the PTLs to announce the 
mentoring program in team meetings, then direct people to the guidelines
(here and here) and the signup form if they're interested.

Mentors should be regular contributors to a project, with an interest in 
helping new people and about 4 hours a month for mentoring. They do not have to 
be women; the program is just sponsored by WoO, we welcome all mentees and 
mentors.

These are the projects/areas where we especially need mentors:

 *   Cinder
 *   Containers
 *   Documentation
 *   Glance
 *   Keystone
 *   Murano
 *   Neutron
 *   Nova
 *   Ops
 *   Searchlight
 *   Telemetry
 *   TripleO
 *   Trove
If you have any questions you can contact me, or ask on openstack-women where 
the mentoring committee hangs out.
Thanks!
Emily Hugenbruch
IRC: ekhugen


[openstack-dev] [neutron][oslo] Mitaka neutron-*aas are broken when --config-dir is passed

2016-05-25 Thread Ihar Hrachyshka
Hi all,

Our internal Mitaka testing revealed that neutron-server fails to start when:
- any neutron-*aas service plugin is enabled (in our particular case, it was 
lbaas);
- --config-dir option is passed to the process via CLI.

Since RDO/OSP neutron-server systemd unit files use --config-dir options 
extensively, it renders all neutron-*aas broken as of Mitaka for us.

The failure is reported as: https://launchpad.net/bugs/1585102 and the 
traceback can be found in: http://paste.openstack.org/show/498502/

As you can see, it crashes in the neutron provider_configuration module, where we 
have a custom parser for service_providers configuration:

https://github.com/openstack/neutron/blob/master/neutron/services/provider_configuration.py#L83

This code was introduced in Kilo when neutron-*aas were split out of the tree. The 
intent of the code at the time was to allow service plugins to load 
neutron_*aas.conf files located in /etc/neutron/ that are not passed explicitly 
to neutron-server via --config-file options. [A decision that was, in my 
opinion, wrong in the first place: we should not have introduced ‘magic’ in 
neutron that allowed the controller to load configuration files implicitly, and 
we would be better off just relying on oslo.config facilities, like using 
--config-dir to load an ‘unknown’ set of configuration files.]

The failure was triggered by oslo.config 3.8.0 release that is part of Mitaka 
series, particularly by the following patch: 
https://review.openstack.org/#q,Ibd0566f11df62da031afb128c9687c5e8c7b27ae,n,z 
This patch, among other things, changed the type of ‘config_dir’ option from 
string to list [of strings]. Since configuration options are not considered 
part of public API, we can’t claim that oslo.config broke their API guarantees 
and revert the patch. [Even if that would be the case, we could not do it 
because we already released several Mitaka and Newton releases of the library 
with the patch included, so it’s probably late to switch back.]

I have proposed a fix for provider_configuration module that would adopt the 
new list type for the option: https://review.openstack.org/#/c/320304/ 
Actually, it does not even rely on the option anymore, instead it pulls values 
using config_dirs property defined on ConfigOpts objects, which I assume is 
part of public API.

Since Mitaka supports anything oslo.config >= 3.7.0, we would also need to 
support the older type in some graceful way, if we backport the fix there.
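
For the graceful-handling part on stable/mitaka, a small compatibility
shim along these lines might do (a hypothetical helper, not an existing
neutron function):

```python
def normalize_config_dirs(value):
    """Return the config_dir value as a list, whatever oslo.config set it to.

    oslo.config < 3.8.0 stored the option as a single string;
    >= 3.8.0 stores it as a list of strings.
    """
    if value is None:
        return []
    if isinstance(value, str):  # six.string_types on py2-supporting branches
        return [value]
    return list(value)
```

Callers then iterate over the normalized list regardless of which
library version populated the option.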

Doug Hellmann has concerns about the approach taken. In his own words, "This 
approach may solve the problem in the short term, but it's going to leave you 
with some headaches later in this cycle when we expand oslo.config.” 
Specifically, "There are plans under way to expand configuration sources from 
just files and directories to use URLs. I expect some existing options to be 
renamed or otherwise deprecated as part of that work, and using the option 
value here will break neutron when that happens.” (more details in the patch)

First, it’s a surprise to me that the config_dirs property (not an option) is 
not part of the library's public API. I thought that if something is private, we 
name it with a leading underscore. (?)

If we don’t have public access to the symbol, the question arises of how we 
tackle that in neutron/mitaka (!). Note that we are not talking about a future 
release; it’s the current neutron/mitaka that is broken and should be fixed to 
work with oslo.config 3.8.0, so any follow-up work in oslo.config itself won’t 
make it into stable/mitaka for the library. We need some short-term solution here.

Doug suggested that neutron team would work with oslo folks to expose missing 
bits from oslo.config to consumers: "There are several ways we could address 
the need here. For example, we could provide a method that returns the source 
info (file names, directories, etc.). We could add a class method that has the 
effect of making a new ConfigOpts instance with the same source information as 
an existing object passed to it. Or we could split the config locating logic 
out of ConfigOpts and make it a separate object that can be shared. We should 
discuss those options on the ML, so please start a thread.”
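To make those options concrete, here is a purely illustrative sketch of two of the suggested additions (a source-info accessor and a copying class method). None of these methods or names exist in oslo.config today; they are assumptions for discussion only.

```python
# Illustrative only: neither source_info() nor from_sources_of()
# exists in oslo.config; this merely sketches two of the suggestions.
class ConfigOpts:
    def __init__(self, config_files=None, config_dirs=None):
        self._config_files = list(config_files or [])
        self._config_dirs = list(config_dirs or [])

    def source_info(self):
        """Suggestion 1: a method returning the configuration sources."""
        return {'files': list(self._config_files),
                'dirs': list(self._config_dirs)}

    @classmethod
    def from_sources_of(cls, other):
        """Suggestion 2: a new instance sharing another's source info."""
        return cls(other._config_files, other._config_dirs)
```

Either shape would let consumers like neutron find out where configuration came from without touching private option values.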

It may be a good idea, but honestly, I don’t want to see neutron following the 
path we took back in kilo. I would prefer to see neutron get rid of the 
implicit loading of specifically named configuration files for service plugins 
(and all that just for a single option!)

My plan to get out of those woods would be:
- short term, we proceed on the direction I took with the patch, adopting list 
type in newton, and gracefully handling both in mitaka;
- long term, deprecate (Newton) and remove (Ocata) the whole special casing 
code for service providers from neutron. Any configuration files to load for 
service plugins or any other plugin would need to be specified on CLI with 
either --config-file or --config-dir. No more magic.
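For the short-term step, the graceful handling of both option types could look roughly like this (a minimal sketch, not the actual neutron patch; the helper name is made up):

```python
def config_dirs(conf):
    """Return the configured config dirs as a list, whatever the version.

    oslo.config < 3.8.0 exposes 'config_dir' as a single string,
    while >= 3.8.0 exposes it as a list of strings.
    """
    value = getattr(conf, 'config_dir', None)
    if value is None:
        return []
    if isinstance(value, str):
        # old string type: wrap it into a one-element list
        return [value]
    # new list type: copy it as-is
    return list(value)
```

Normalizing at one point like this keeps the rest of the code oblivious to which oslo.config release is installed.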

Thoughts?

Ihar

Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-25 Thread Flavio Percoco

On 20/05/16 18:00 -0400, Nikhil Komawar wrote:

Hello all,


I want to propose having a dedicated virtual sync next week Thursday May
26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
Glance. We are making a few updates to the spec; so it would be good to
have everyone on the same page and soon start merging those spec changes.


This is tomorrow! Have we decided the tool?

An invite would be lovely :D

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread Igor Kalnitsky
> On May 25, 2016, at 13:53, jason  wrote:
> 
> Thanks, and yes you got my point. My "automatically" means: after a new node 
> has been discovered, the deployment process starts automatically. Cron may 
> help, but what if I need more info to check whether that newly discovered node 
> deserves to be a compute node or not? Can the cron script get more 
> characteristic info about the node? For example, "if the new node has the right 
> number of nic interfaces, the right NUMA setting etc., then make it a compute 
> node with the same configuration as others with the same characteristics".

Yep. Cron is a way to run some script periodically. Alternatively, as Alex 
mentioned, you can write a daemon.

In order to make such checks, you can use either the CLI or the RESTful API. 
It's really easy to interact with the API from Python using the requests [1] 
library.


[1]: http://docs.python-requests.org/en/master/
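As a sketch of such a check, the filtering logic might look like this (standard library only here; with requests the fetch would be a one-liner). The endpoint path and the node field names ('status', 'meta', 'interfaces') are assumptions about Fuel's API, so verify them against your deployment:

```python
import json
import urllib.request

def fetch_nodes(api_url):
    # With the requests library this would simply be:
    #   requests.get(api_url + '/nodes').json()
    with urllib.request.urlopen(api_url + '/nodes') as resp:
        return json.load(resp)

def pick_compute_candidates(nodes, min_nics=2):
    """Return ids of discovered nodes with enough NICs to be computes."""
    return [n['id'] for n in nodes
            if n.get('status') == 'discover'
            and len(n.get('meta', {}).get('interfaces', [])) >= min_nics]
```

A cron job or daemon could then assign the compute role to each returned id via the CLI and trigger a deployment.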
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [heat] [murano] [app-catalog] OpenStack Apps Community, several suggestions how to improve collaboration

2016-05-25 Thread Igor Marnat
Colleagues,
having attended many sessions and talked to many customers, partners
and contributors in Austin I’d like to suggest several improvements to how
we develop OpenStack apps and work with the Community App Catalog (
https://apps.openstack.org/).

Key goals to achieve are:
- Provide contributors with an ability to collaborate on OpenStack
apps development
- Provide contributors and consumers with transparent workflow to
manage their apps
- Provide consumers with information about apps: how they were developed
and tested
- To summarize: introduce a way to build a community working on
OpenStack apps

*What is OpenStack application*
OpenStack is about 6 years young, and all these years discussions about
what it is have been in progress. The variety of applications is huge, from
LAMP stacks and legacy Java apps to telco workloads and VNF apps. There is a
working group working on a definition of "What is an OpenStack application";
hopefully the community will agree on a definition soon.

For the sake of the discussion below, let us agree on a simple approach: an
OpenStack application is any software asset which 1. can be executed on an
OpenStack cloud, and 2. lives in apps.openstack.org. So far these are Murano
applications, Heat templates, Glance images and TOSCA templates.

There are many good OpenStack applications in the world which don't live in
OpenStack App Catalog. However, let us for now concentrate on those which
do, just for the sake of this discussion.

*Introduction to OpenStack development ecosystem*
OpenStack was introduced about 6 years ago. Over these years the community
has grown significantly: there were 8 companies contributing to OpenStack in
Austin (1st release), and in Mitaka (13th release) there were 64 companies
contributing.

One of the reasons for this growth is the set of sophisticated tools
the OpenStack contributor ecosystem has chosen to use or build to
enable collaboration:
- software repositories: http://git.openstack.org/cgit/openstack/nova,
http://git.openstack.org/cgit/openstack/neutron, ..
- bug trackers: https://launchpad.net/nova, https://launchpad.net/neutron
, ...
- same instance of gerrit for all the projects for code review:
https://review.openstack.org/
- gating test infrastructure http://zuul.openstack.org/
- common approach to release management, repositories management,
naming, tons of other things managed by review in
https://review.openstack.org/#/q/status:open+project:openstack/governance
- IRC channels, etherpads, meetings and mailing lists
- governance to manage all of the above

All of the above is what we can call OpenStack collaboration ecosystem
and it is a key factor for OpenStack community success.

*Introduction to OpenStack apps development ecosystem*
Now that OpenStack is mature and having it up and running is not a big deal,
the focus of the community and customers shifts from "how do I get a running
cloud" to "what do I do with a running cloud".

The use cases of different cloud users are very different; however, one can
identify and develop standard building blocks which can be reused by cloud
users (service providers, DevTest teams, ...). Many cloud users want to
contribute their homegrown apps upstream because:
- it allows other people to use and improve them
- the community can implement missing parts
- the community can help with testing and maintaining an app

A year ago we introduced the Community App Catalog for OpenStack,
http://apps.openstack.org, as an integration/distribution point for customer
experience/apps. This initiative has been successful: there are about 100
software assets of various kinds which can be run on OpenStack. For further
growth we need to make several changes in the way we approach collaboration
around OpenStack apps. Namely, we need to provide app developers with the
ability to collaborate on application development.

*OpenStack Community App Catalog is there, what else?*
The Community App Catalog http://apps.openstack.org allows apps to be
published to and consumed from it.

"The OpenStack Community App Catalog is designed to use the same tools
for submission and review as other OpenStack projects. As such we follow
the OpenStack development workflow" [0].

To follow OpenStack development workflow, apps developers need to have:
- dedicated repositories & code review system to collaborate on code
- mailing lists, IRC channels, core reviewers teams
- common approach to QA
- governance model to manage all of the above

Most of the above is missing for app developers today. The App Catalog
provides a central place to store final artifacts (ready apps, like .exe
files in the Windows world), but there is no centralized infrastructure for
collaborating on the development of the apps' source code.

App developers partially reuse the infrastructure of the OpenStack core
projects (Heat & Murano), namely repositories and bug trackers. Other than
that they are on their own: there are no teams, no mailing lists, no IRC
channels for app developers; most of the items from the list above are
missing.

[0] 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Flavio Percoco

On 25/05/16 06:48 -0400, Sean Dague wrote:

[snip]


4. Do we want to be in the business of building data plane services that
will all run into python limitations, and will all need to be rewritten
in another language?

This is a slightly different spin on the question Thierry is asking.

Control Plane services are very unlikely to ever hit a scaling concern
where rewriting the service in another language is needed for
performance issues. These are orchestrators, and the time spent in them
is vastly less than the operations they trigger (start a vm, configure a
switch, boot a database server). There was a whole lot of talk in the
threads of "well that's not innovative, no one will want to do just
that", which seems weird, because that's most of OpenStack. And it's
pretty much where all the effort in the containers space is right now,
with a new container fleet manager every couple of weeks. So the notion
that this is a boring problem no one wants to solve doesn't hold water
with me.

Data Plane services seem like they will all end up in the boat of
"python is not fast enough". Be it serving data from disk, mass DNS
transfers, time series database, message queues. They will all
eventually hit the python wall. Swift hit it first because of the
maturity of the project and they are now focused on this kind of
optimization, as that's what their user base demands. However I think
all other data plane services will hit this as well.

Glance (which is partially a data plane service) did hit this limit, and
the way folks largely mitigate it is by using Ceph and exposing that
directly to Nova, so now Glance is only in the location and metadata
game, and Ceph is in the data plane game.


Sorry for nitpicking here, but Glance's API remains a data API. Sure, it
stores locations, and sure, you can do fancy things with those locations but, as
far as end users go, it's still a data API. It is not used as intensively as
Swift's, though. Ceph's driver allows fancier things to be done, but there
are deployments which don't use Ceph.

I believe it'd be better to separate data services that *own* the data from
those that integrate other backends. Swift owns the data: you upload it to
Swift, it stores the data using its own strategies, and it serves it. Glance
gets the data, puts it in some other store, and then you can either access it
(not always) directly from the store or have Glance serve it back.


When it comes to doing data plane services in OpenStack, I'm quite mixed.
The technology concerns for data plane services are quite different. All the
control plane services kind of look and feel the same: an API + worker model,
a DB for state, message passing / RPC to hand work to the workers. This is a
common pattern, and even with all the project differences, parts do end up
kind of common. Projects that follow this model are debuggable as a group
not too badly.

5. Where does Swift fit?

This I think has always been a tension point in the community (at least
since I joined in 2012). Swift is an original service of OpenStack, as
it started as Swift and Nova. But they were very different things. Swift
is a data service, Nova was a control plane. Much of what is now
OpenStack is a Nova derivative in some way (sometimes direct extractions
(Glance, Cinder, Ironic), sometimes convergent paths (Neutron)). And then,
with that many examples, lots of other things were built in similar ways.

Swift doesn't use common oslo components. That actually makes debugging
it quite different compared to the rest of OpenStack. The lack of
oslo.log means structured JSON log messages to Elastic Search are not
a thing. Swift has a very different model in its service split.
Swift doesn't use global requirements. Swift ensures it can run without
Keystone, because their goal is Swift everywhere, whether or not it's
part of the rest of OpenStack.

These are all fine goals, but they definitely have led to tensions on
all sides.

And I think part of the question is "are these tensions that need to be
solved" or "is this data that this thing is different". Which isn't to
say that Swift is bad, it's just definitively different than much of the
ecosystem. Maybe Swift should be graduated beyond OpenStack, because
its scope cross-cuts much differently. Ceph isn't part of OpenStack,
but it's in 50% of installs. libvirt isn't part of OpenStack, but it's
in 90% of installs. And in both of those cases OpenStack is one of the
biggest drivers of their use.

Which, gets contentious because people feel like this is kicking
something out. And that I can understand. There is a lot of emotion
wrapped up in labels and who gets to be on the OpenStack home page.
I wish there wasn't. Good software should get deployed because it is
good and solves a need, not because of labels. I'm not sure Swift users
really care that Swift is OpenStack. They care that Swift is Swift. And
Swift being Swift, but not being OpenStack would open up 

Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-05-25 Thread Jim Rollenhagen
On Wed, May 25, 2016 at 01:58:18PM +0900, Yuiko Takada wrote:
> Hi!
> 
> Hironori, Lucas, thank you for bringing this topic up!
> 
> Yes, as Lucas says,  our latest spec is
> https://review.openstack.org/#/c/319505
> 
> I and Tien, Hironori, Akira discussed and merged our idea.
> 
> And new Nova spec is here:
> https://review.openstack.org/#/c/319507
> 
> As you guys know, Nova non-priority spec approval freeze is 5/30-6/3,
> so that I guess Ironic spec need to be approved until it.

Just a note here, I talked with johnthetubaguy this morning, and we
think the Nova blueprint doesn't need a spec. I updated the whiteboard
on the BP with some details, added it to the agenda
for the next Nova meeting, and will be there to discuss it.

// jim

> 
> 
> Best Regards,
> Yuiko Takada Mori
> 
> 2016-05-25 1:15 GMT+09:00 Lucas Alvares Gomes :
> 
> > Hi,
> >
> > > I'm working with Tien who is a submitter of one[1] of console specs.
> > > I joined the console session in Austin.
> > >
> > > In the session, we got the following consensus.
> > > - focus on serial console in Newton
> > > - use nova-serial proxy as is
> > >
> > > We also got some requirements[2] for this feature in the session.
> > > We have started cooperating with Akira and Yuiko who submitted another
> > similar spec[3].
> > > We're going to unite our specs and add solutions for the requirements
> > ASAP.
> > >
> >
> > Great stuff! So do we have an update on this?
> >
> > I see [3] is now abandoned and a new spec was proposed recently [4].
> > Is [4] the result of the union of both specs?
> >
> > > [1] ironic-ipmiproxy: https://review.openstack.org/#/c/296869/
> > > [2] https://etherpad.openstack.org/p/ironic-newton-summit-console
> > > [3] ironic-console-server: https://review.openstack.org/#/c/306755/
> >
> > [4] https://review.openstack.org/#/c/319505
> >
> > Cheers,
> > Lucas
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to single sign on with windows authentication with Keystone

2016-05-25 Thread OpenStack Mailing List Archive

Link: https://openstack.nimeyo.com/85057/?show=85707#c85707
From: imocha 

I am trying to follow the steps. I was able to install ADFS and would like to proceed further.

However, I am having issues with setting up SSL endpoints for Keystone V3. I am using Mitaka. Are there any steps that I can use?

I used packstack to install Mitaka and want to enable SSL for the identity endpoints to work with ADFS for the SAML2 flow.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [tripleo] [fuel] Puppet pacemaker module

2016-05-25 Thread Sofer Athlan-Guyot
Hi,

The merge of the Fuel and Tripleo puppet-pacemaker is ready for review,
as the green CI shows: https://review.openstack.org/#/c/296440/

The great news is that all the legacy code has been kept untouched.

Basically, reviewers should only ensure that this is indeed the case.  No need
to review the ~18K loc[1].  All new features are under a new namespace:
 - new manifests are under the manifests/new namespace;
 - new providers are named pacemaker_* (instead of pcmk_*)

So moving to the new pacemaker means that consumers have to
explicitly change their own manifests, giving them an upgrade path[2][3]
The new module is very well documented, so give the README.md[4] a go.  It
has a whole lot of new features: a service provider for pacemaker; much
better test coverage, both unit and acceptance; a whole new array
of providers; ...

The above patch only includes the xml providers and some pcs ones.  The
resource, colocation and order pcs providers are currently being ported in
this patch[5].  Those are the ones used by the tripleo consumer.  It may
also gain a port of the service provider, depending on the tests I'm doing.

I would like to thank Dmitry Ilyin for his outstanding work on this
one.

Enjoy,

[1] those are mainly under lib/pacemaker, which is the xml library
used by fuel as an interface to the pacemaker cluster (equivalent to the
pcs command)

[2] this is what is done in https://review.openstack.org/302409 and
https://review.openstack.org/309069 for tripleo

[3] fuel integration will come later, so don't worry about red CI for
fuel.

[4] 
https://review.openstack.org/gitweb?p=openstack/puppet-pacemaker.git;a=blob;f=README.md;h=446fc3f6b3c91924e98fbcf0e78923b29e0fb3f5;hb=4d2e554f687f525ad54c0b0c41a610aa68f50e4d

[5] https://review.openstack.org/310713

--
Sofer Athlan-Guyot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova-bugs team meeting (Europe/Asia-friendly) is now earlier

2016-05-25 Thread Markus Zoeller
Please be aware that the Europe/Asia-friendly time slot of the
nova-bugs-team meeting [1] has moved from 10:00 UTC to 08:00 UTC. I'm
doing this because the old time slot wasn't well attended over the last
weeks, and I think the new time slot will make it easier for folks in (East)
Asia to attend.

This takes effect starting from 2016-06-07 [2].

References:
[1] https://wiki.openstack.org/wiki/Meetings/Nova/BugsTeam
[2] https://review.openstack.org/#/c/320337/

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to run a task when multiple tasks are completed

2016-05-25 Thread Jonnalagadda, Venkata
Fuel Team,

I have couple of tasks in Fuel (deployment_tasks.yaml) as below –

Task1
Task2
Task3
Task4

Now, I want to run Task4 only when Tasks 1, 2 and 3 have completed. How can I 
configure this in the deployment_tasks.yaml? Please suggest.
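A dependency like that is normally expressed with the task's 'requires' field. A hedged sketch (the task ids, type, groups and parameters are placeholders; check the exact schema for your Fuel version):

```yaml
- id: task4
  type: puppet
  groups: [primary-controller]
  # task4 will only start once task1, task2 and task3 have finished
  requires: [task1, task2, task3]
  parameters:
    puppet_manifest: task4.pp
    puppet_modules: /etc/puppet/modules
    timeout: 3600
```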

Thanks & Regards,

J. Venkata Mahesh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Live Migration of rescued instances

2016-05-25 Thread Paul Carlton



On 25/05/16 11:59, Gary Kotton wrote:

Hi,
The VMware driver supports rescue. Live migration should be pretty simple here 
as the rescue is only for the disk. So you can migrate the instance to whatever 
host you want. The only concern with the VMware driver is that the live 
migration patches are in review and I think that they require a spec or 
blueprint (https://review.openstack.org/#/c/270116/)
Thanks
Gary

On 5/25/16, 10:49 AM, "Paul Carlton"  wrote:


I'm working on a spec https://review.openstack.org/#/c/307131/ to permit
the live migration of rescued instances. I have an implementation that
works for libvirt and have addressed lack of support for this feature
in other drivers using driver capabilities.

I've achieved this for libvirt driver by simply changing how rescue and
unrescue are implemented.  In the libvirt driver rescue saves the current
domain xml in a local file and unrescue uses this to revert the instance to
its previous setup, i.e. booting from instance primary disk again rather
than rescue image.  However saving the previous state in the domain
xml file is unnecessary since during unrescue the domain is destroyed
and restarted. This is effectively a hard reboot so I just call hard reboot
during the unrescue operation.  Hard reboot rebuilds the domain xml
from the nova database so the domain xml file is not needed.
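The simplification described above can be sketched as follows; the class and method names only loosely mirror nova's libvirt driver and are not the real signatures.

```python
# Hypothetical sketch of "unrescue == hard reboot", not nova code.
class LibvirtDriverSketch:
    def _hard_reboot(self, context, instance, network_info):
        # In real nova this destroys the domain and regenerates its
        # XML from the database before restarting it; stubbed here.
        self.last_action = ('hard_reboot', instance)

    def unrescue(self, context, instance, network_info):
        # No saved pre-rescue XML file is needed: a hard reboot
        # already rebuilds the domain XML from the nova database.
        self._hard_reboot(context, instance, network_info)
```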

However I was wondering which other drivers support rescue, vmware
and xen I think?  Would it be possible to implement support for live
migration of rescued instances for these drivers too?  I'm happy to do
the work to implement this, given some guidance from those with more
familiarity with these drivers than I.

Thanks

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

So vmware supports rescue, but until this patch goes in it does not
support live migration between compute nodes?  So if my change
lands before yours, you would need to amend the
"supports_live_migrate_rescued" capability flag to True in your
driver to permit live migration of instances in a rescued state?

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Live Migration of rescued instances

2016-05-25 Thread Gary Kotton
Hi,
The VMware driver supports rescue. Live migration should be pretty simple here 
as the rescue is only for the disk. So you can migrate the instance to whatever 
host you want. The only concern with the VMware driver is that the live 
migration patches are in review and I think that they require a spec or 
blueprint (https://review.openstack.org/#/c/270116/)
Thanks
Gary

On 5/25/16, 10:49 AM, "Paul Carlton"  wrote:

>I'm working on a spec https://review.openstack.org/#/c/307131/ to permit
>the live migration of rescued instances. I have an implementation that
>works for libvirt and have addressed lack of support for this feature
>in other drivers using driver capabilities.
>
>I've achieved this for libvirt driver by simply changing how rescue and
>unrescue are implemented.  In the libvirt driver rescue saves the current
>domain xml in a local file and unrescue uses this to revert the instance to
>its previous setup, i.e. booting from instance primary disk again rather
>than rescue image.  However saving the previous state in the domain
>xml file is unnecessary since during unrescue the domain is destroyed
>and restarted. This is effectively a hard reboot so I just call hard reboot
>during the unrescue operation.  Hard reboot rebuilds the domain xml
>from the nova database so the domain xml file is not needed.
>
>However I was wondering which other drivers support rescue, vmware
>and xen I think?  Would it be possible to implement support for live
>migration of rescued instances for these drivers too?  I'm happy to do
>the work to implement this, given some guidance from those with more
>familiarity with these drivers than I.
>
>Thanks
>
>-- 
>Paul Carlton
>Software Engineer
>Cloud Services
>Hewlett Packard
>BUK03:T242
>Longdown Avenue
>Stoke Gifford
>Bristol BS34 8QZ
>
>Mobile:+44 (0)7768 994283
>Email:mailto:paul.carlt...@hpe.com
>Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 
>1HN Registered No: 690597 England.
>The contents of this message and any attachments to it are confidential and 
>may be legally privileged. If you have received this message in error, you 
>should delete it from your system immediately and advise the sender. To any 
>recipient of this message within HP, unless otherwise stated you should 
>consider this message and attachments as "HP CONFIDENTIAL".
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread jason
Hi Aleksandr,
Thanks for the examples! That will really help me a lot.
On May 25, 2016 6:26 PM, "Aleksandr Didenko"  wrote:

> Hi,
>
> +1 to Igor. It should be easily doable via some sort of "watcher" script
> (run it as a daemon or under cron), that script should:
>
> - watch for new nodes in 'discover' state. CLI example:
>   fuel nodes
> - assign new nodes to env with compute role. CLI example:
>   fuel --env $ENV_ID node set --node $NEW_NODE_ID --role compute
> - update networks assignment for new node. CLI example:
>   fuel node --node $NEW_NODE_ID --network --download
>   # edit /root/node_$NEW_NODE_ID/interfaces.yaml
>   fuel node --node $NEW_NODE_ID --network --upload
> - deploy changes. CLI example:
>   fuel deploy-changes --env $ENV_ID
>
> Regards,
> Alex
>
> On Wed, May 25, 2016 at 12:03 PM, Igor Kalnitsky 
> wrote:
>
>> Hey Jason,
>>
>> What do you mean by "automatically"?
>>
>> You need to assign "compute" role on that discovered node, and hit
>> "Deploy Changes" button. If you really want to deploy any new discovered
>> node automatically, I think you can create some automation script and put
>> it under cron.
>>
>> Hope it helps,
>> Igor
>>
>> > On May 25, 2016, at 12:33, jason  wrote:
>> >
>> > Hi All,
>> >
>> > Is there any way for fuel to deploy a newly discovered node as a
>> compute node automatically? I followed the openstack doc for fuel but did
>> not get any answer.
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][magnum]Installing kuryr for mutlinode openstack setup

2016-05-25 Thread Antoni Segura Puimedon
On Wed, May 25, 2016 at 11:20 AM, Jaume Devesa  wrote:

> Hello Akshay,
>
> responses inline:
>
> On Wed, 25 May 2016 10:48, Akshay Kumar Sanghai wrote:
> > Hi,
> > I have a 4 node openstack setup (1 controller, 1 network, 2 compute
> nodes).
> > I want to install kuryr in liberty version. I cannot find a package in
> > ubuntu repo.
>
> There is no official release of Kuryr yet. You'll need to install it from
> the current master branch of the repo[1] (by cloning it, installing the
> dependencies and running `python setup.py install`).
>

Or you could run it dockerized; read the "repo info" in [2].

We are working on having the packaging ready, but we are splitting the
repos first, so it will take a while for plain distro packages.


> > -How do i install kuryr?
> If the README.rst file of the repository is not enough for you in terms of
> installation and configuration, please let us know what's not clear.
>
> > - what are the components that need to be installed on the respective
> > nodes?
>
> You need to run the kuryr libnetwork service on all the nodes that you use
> as docker 'workers'.
>

and your chosen vendor's neutron agents. For example, for MidoNet it's
midolman, for ovs it would be the neutron ovs agent.


>
> > - Do i need to install magnum for docker swarm?
>
> Not familiar with Magnum.. Can not help you here.
>


If you want to run Docker Swarm on bare metal, you do not need Magnum; only
Keystone and Neutron.

You'd run Docker Swarm, Neutron, and Keystone on one node, and then have N
nodes with the Docker engine, kuryr-libnetwork, and the Neutron agents of
the vendor of your choice.


> > - Can i use docker swarm, kubernetes, mesos in openstack without using
> > kuryr?


You can use Swarm and Kubernetes in OpenStack without Kuryr by using Magnum.
Magnum will use Neutron networking to provide networks to the VMs that run
the Swarm/Kubernetes cluster. Inside the VMs, a separate overlay built with
flannel will be used (in Kubernetes at least; with Swarm I have not tried it).


> What will be the disadvantages?
>

The disadvantages are that you do not get explicit Neutron networking for
your containers, you get less network isolation between your VMs/containers,
and if you want the highest performance you have to change the default
flannel mode.
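For context, changing the flannel mode means changing the backend type in
flannel's network config (conventionally stored in etcd under
`/coreos.com/network/config`). A minimal sketch of such a config, switching
to the `host-gw` backend to avoid encapsulation overhead (the subnet value
here is just an illustrative assumption):

```json
{
  "Network": "10.1.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
```

Note that `host-gw` requires the hosts to be reachable on the same L2
segment; `vxlan` is the usual middle ground when they are not.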


>
> Only docker swarm right now. The kubernetes one will be addressed soon.
>
> >
> > Thanks
> > Akshay
>
> Thanks to you for giving it a try!



> There are a bunch of people much more experienced in Kuryr than me. I hope
> I
> haven't said anything stupid.
>
> Best regards,
>
> [1]: http://github.com/openstack/kuryr

 [2]: https://hub.docker.com/r/kuryr/libnetwork/

>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
> PGP key: 35C2D6B2 @ keyserver.ubuntu.com
>


Re: [openstack-dev] [fuel] discovery and deploy a compute node automatically

2016-05-25 Thread jason
Hi Igor,

Thanks, and yes, you got my point: by "automatically" I mean that after a new
node has been discovered, the deployment process starts automatically. Cron
may help, but what if I need more information to check whether that newly
discovered node deserves to be a compute node or not? Can the cron script get
more details about the node's characteristics? For example: "if the new node
has the right number of NIC interfaces, the right NUMA setting, etc., then
make it a compute node with the same configuration as other nodes with the
same characteristics".
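To make that concrete, the decision part of such a cron script could look
like the sketch below. The thresholds (at least 4 NICs, exactly 2 NUMA
nodes) are illustrative assumptions, and so is the idea of feeding in values
taken from the node's hardware inventory; this is not a real Fuel interface.

```shell
#!/bin/sh
# Sketch: does a newly discovered node qualify as a compute node?
# The NIC and NUMA thresholds below are assumptions for illustration;
# real values would come from the node's hardware inventory (e.g. via
# the Fuel CLI/API).
qualifies_as_compute() {
    nics=$1        # number of NIC interfaces on the node
    numa_nodes=$2  # number of NUMA nodes reported for the node
    [ "$nics" -ge 4 ] && [ "$numa_nodes" -eq 2 ]
}

# Example decision for a node with 4 NICs and 2 NUMA nodes:
if qualifies_as_compute 4 2; then
    echo "assign compute role"
else
    echo "leave node unassigned"
fi
```

The cron job would then loop over newly discovered nodes, apply this check,
and only assign the "compute" role (and trigger deployment) for nodes that
pass it.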
On May 25, 2016 6:03 PM, "Igor Kalnitsky"  wrote:

> Hey Jason,
>
> What do you mean by "automatically"?
>
> You need to assign the "compute" role to that discovered node and hit the
> "Deploy Changes" button. If you really want to deploy every newly
> discovered node automatically, I think you can create an automation script
> and run it under cron.
>
> Hope it helps,
> Igor
>
> > On May 25, 2016, at 12:33, jason  wrote:
> >
> > Hi All,
> >
> > Is there any way for fuel to deploy a newly discovered node as a compute
> node automatically? I followed the openstack doc for fuel but did not get
> any answer.

