Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-07 Thread Antoni Segura Puimedon
On Thu, Jun 7, 2018 at 3:52 AM, Peng Liu  wrote:
> Cool.
> I'll start to prepare a BP for this, so we can have more detailed
> discussion.

Great!

>
> On Wed, Jun 6, 2018 at 11:08 PM, Antoni Segura Puimedon 
> wrote:
>>
>> On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky 
>> wrote:
>> > Sounds like a great initiative.
>> >
>> > Lets follow up on the proposal by the kuryr-kubernetes blueprint.
>>
>> I fully subscribe to what Irena said. Let's get on this quickly!
>>
>> >
>> > BR,
>> > Irena
>> >
>> > On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu  wrote:
>> >>
>> >> Hi Kuryr-kubernetes team,
>> >>
>> >> I'm thinking of proposing a new BP to support the Kubernetes Network
>> >> Custom Resource Definition De-facto Standard Version 1 [1], which was
>> >> drafted by the Network Plumbing Working Group of kubernetes-sig-network.
>> >> I'll call it the NPWG spec below.
>> >>
>> >> The purpose of the NPWG spec is to standardize the multi-network effort
>> >> around K8s by defining a CRD object 'network' which can be consumed by
>> >> various CNI plugins. I know there has already been a BP, VIF-Handler And
>> >> Vif Drivers Design, which designed a set of mechanisms to implement the
>> >> multi-network functionality. However, I think it is still worthwhile to
>> >> support this widely accepted NPWG spec.
>> >>
>> >> My proposal is to implement a new vif_driver, which can interpret the
>> >> pod annotation and the CRD defined by the NPWG spec, and attach the pod
>> >> to an additional Neutron subnet and port accordingly. This new driver
>> >> should be mutually exclusive with the sriov and additional_subnets
>> >> drivers, so the end users can choose either way of using multi-network
>> >> with kuryr-kubernetes.
>> >>
>> >> Please let me know your thoughts; any comments are welcome.
>> >>
>> >>
>> >>
>> >> [1]
>> >>
>> >> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
>> >>
>> >>
>> >> Regards,
>> >>
>> >> --
>> >> Peng Liu
>> >>
>> >>
>
>
>
>
> --
> Peng Liu



Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-06 Thread Antoni Segura Puimedon
On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky  wrote:
> Sounds like a great initiative.
>
> Lets follow up on the proposal by the kuryr-kubernetes blueprint.

I fully subscribe to what Irena said. Let's get on this quickly!

>
> BR,
> Irena
>
> On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu  wrote:
>>
>> Hi Kuryr-kubernetes team,
>>
>> I'm thinking of proposing a new BP to support the Kubernetes Network
>> Custom Resource Definition De-facto Standard Version 1 [1], which was
>> drafted by the Network Plumbing Working Group of kubernetes-sig-network.
>> I'll call it the NPWG spec below.
>>
>> The purpose of the NPWG spec is to standardize the multi-network effort
>> around K8s by defining a CRD object 'network' which can be consumed by
>> various CNI plugins. I know there has already been a BP, VIF-Handler And
>> Vif Drivers Design, which designed a set of mechanisms to implement the
>> multi-network functionality. However, I think it is still worthwhile to
>> support this widely accepted NPWG spec.
>>
>> My proposal is to implement a new vif_driver, which can interpret the
>> pod annotation and the CRD defined by the NPWG spec, and attach the pod
>> to an additional Neutron subnet and port accordingly. This new driver
>> should be mutually exclusive with the sriov and additional_subnets
>> drivers, so the end users can choose either way of using multi-network
>> with kuryr-kubernetes.
>>
>> Please let me know your thoughts; any comments are welcome.
>>
>>
>>
>> [1]
>> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
>>
>>
>> Regards,
>>
>> --
>> Peng Liu
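
To make the proposal concrete, here is a minimal sketch of how such a
vif_driver could read the NPWG pod annotation. The annotation key and the
two accepted formats follow the NPWG v1 draft; the function name and its
surroundings are illustrative assumptions, not the eventual
kuryr-kubernetes code:

    import json

    # Annotation key defined by the NPWG v1 de-facto standard.
    NPWG_NETWORKS_ANNOTATION = 'k8s.v1.cni.cncf.io/networks'


    def parse_npwg_networks(pod):
        """Return the names of the additional networks a pod requests.

        The NPWG spec allows the annotation to be either a comma-separated
        list of network names ("net-a,net-b") or a JSON list of objects
        ('[{"name": "net-a"}, {"name": "net-b"}]').
        """
        annotations = pod['metadata'].get('annotations', {})
        raw = annotations.get(NPWG_NETWORKS_ANNOTATION)
        if not raw:
            return []
        raw = raw.strip()
        if raw.startswith('['):
            return [entry['name'] for entry in json.loads(raw)]
        return [name.strip() for name in raw.split(',')]

Each returned name would then be resolved against the 'network' CRD
objects and mapped by the new driver to a Neutron subnet and port.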



Re: [openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens

2018-01-25 Thread Antoni Segura Puimedon
On Mon, Jan 22, 2018 at 3:46 PM, Daniel Mellado
 wrote:
> +1
>
>
> El 21/1/18 a las 8:13, Irena Berezovsky escribió:
>
> +1
>
> On Fri, Jan 19, 2018 at 9:42 PM, Hongbin Lu  wrote:
>>
>> Hi Kuryr team,
>>
>> I think Kuryr-libnetwork is ready to move out of beta status. I propose to
>> make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable
>> branch on it. What do you think about this proposal?

Agreed. Thanks a lot for bringing it up, Hongbin!

>>
>> Best regards,
>> Hongbin
>>



[openstack-dev] [kuryr] VTG sessions today postponed

2017-10-04 Thread Antoni Segura Puimedon
Hi fellow kuryrs,

Due to the illness of one of the key contributors in today's sessions,
we'll postpone them. As soon as he recovers, we'll publish the new
times.

Regards,

Toni



Re: [openstack-dev] [kuryr] vPTG schedule

2017-10-02 Thread Antoni Segura Puimedon
On Mon, Oct 2, 2017 at 12:52 PM, Daniel Mellado
<daniel.mellado...@ieee.org> wrote:
> Hi Hongbin,
>
> It seems we messed up the etherpad times; please follow the invite
> schedule instead.
>
> Today's session will be held at 12:00 CET and 13:00 UTC (we have just
> corrected the etherpad).
>
> Sorry for the noise!
>
> Daniel
>
> On 10/02/2017 03:30 AM, Hongbin Lu wrote:
>> Hi Toni,
>>
>> The times of a few proposed sessions look inconsistent with the etherpad.
>> Could you double check?
>>
>> On Thu, Sep 28, 2017 at 5:48 AM, Antoni Segura Puimedon
>> <celeb...@gmail.com> wrote:
>>
>> Hi fellow Kuryrs!
>>
>> It's that time of the cycle again where we hold our virtual project team
>> gathering[0]. The dates this time are:
>>
>> October 2nd, 3rd and 4th
>>
>> The proposed sessions are:
>>
>> October 2nd 13:00utc: Scale discussion
>> In this session we'll talk about the recent scale testing we have
>> performed in a 112-node cluster. From this starting point, we'll work
>> on identifying and prioritizing several initiatives to improve the
>> performance of the pod-in-VM and the baremetal scenarios.
>>
>> October 2nd 14:00utc: Scenario testing
>> The September 27th release of zuulv3 opens the gates for better
>> scenario testing, especially regarding multinode scenarios. We'll
>> discuss the tasks and outstanding challenges to achieve good scenario
>> testing coverage and document well how to write these tests in our
>> tempest plugin.
>>
>> October 3rd 13:00utc: Multi networks
>> As the Kubernetes community Network SIG draws near to having a
>> consensus on multi-network implementations, we must elaborate a plan
>> for a PoC that takes the upstream Kubernetes consensus and implements
>> it with Kuryr-Kubernetes in a way that we can serve normal overlay and
>> accelerated networking.
>>
>> October 4th 14:00utc: Network Policy
>> Each cycle we aim to narrow the gap between Kubernetes networking
>> entities and our translations. In this cycle, apart from the
>> Loadbalancer service type support, we'll be tackling how we map
>> Network Policy to Neutron networking. This session will first lay out
>> Network Policy and its use and then discuss one or more mappings.

Due to the general strike tomorrow in Barcelona, the multi networks and
Network Policy discussions will be moved to October 9th:

Multi networks: October 9th 12:00 UTC
Network Policy: October 9th 13:00 UTC

>>
>> October 5th 13:00utc: Kuryr-libnetwork
>>
>> This session is Oct 4th in the etherpad.
>>
>> We'll do the cycle planning for Kuryr-libnetwork. Blueprints, bugs,
>> and general discussion.
>>
>> October 6th 14:00utc: Fuxi
>>
>> This session is Oct 4th in the etherpad.
>>
>> In this session we'll discuss everything related to storage,
>> both in the
>> Docker and in the Kubernetes worlds.
>>
>>
>> I'll put the links to the bluejeans sessions in the etherpad[0].
>>
>>
>> [0] https://etherpad.openstack.org/p/kuryr-queens-vPTG
>>



Re: [openstack-dev] [kuryr] vPTG schedule

2017-10-02 Thread Antoni Segura Puimedon
You are right, Hongbin. Sorry about that; somehow I counted with CET
instead of CEST.

I just corrected the etherpad: https://etherpad.openstack.org/p/kuryr-queens-vPTG

On Mon, Oct 2, 2017 at 3:30 AM, Hongbin Lu <hongbin...@gmail.com> wrote:
> Hi Toni,
>
> The times of a few proposed sessions look inconsistent with the etherpad.
> Could you double check?
>
> On Thu, Sep 28, 2017 at 5:48 AM, Antoni Segura Puimedon <celeb...@gmail.com>
> wrote:
>>
>> Hi fellow Kuryrs!
>>
>> It's that time of the cycle again where we hold our virtual project team
>> gathering[0]. The dates this time are:
>>
>> October 2nd, 3rd and 4th
>>
>> The proposed sessions are:
>>
>> October 2nd 13:00utc: Scale discussion
>> In this session we'll talk about the recent scale testing we have
>> performed in a 112-node cluster. From this starting point, we'll work
>> on identifying and prioritizing several initiatives to improve the
>> performance of the pod-in-VM and the baremetal scenarios.
>>
>> October 2nd 14:00utc: Scenario testing
>> The September 27th release of zuulv3 opens the gates for better
>> scenario testing, especially regarding multinode scenarios. We'll
>> discuss the tasks and outstanding challenges to achieve good scenario
>> testing coverage and document well how to write these tests in our
>> tempest plugin.
>>
>> October 3rd 13:00utc: Multi networks
>> As the Kubernetes community Network SIG draws near to having a
>> consensus on multi-network implementations, we must elaborate a plan
>> for a PoC that takes the upstream Kubernetes consensus and implements
>> it with Kuryr-Kubernetes in a way that we can serve normal overlay and
>> accelerated networking.
>>
>> October 4th 14:00utc: Network Policy
>> Each cycle we aim to narrow the gap between Kubernetes networking
>> entities and our translations. In this cycle, apart from the
>> Loadbalancer service type support, we'll be tackling how we map
>> Network Policy to Neutron networking. This session will first lay out
>> Network Policy and its use and then discuss one or more mappings.
>>
>> October 5th 13:00utc: Kuryr-libnetwork
>
> This session is Oct 4th in the etherpad.
>>
>> We'll do the cycle planning for Kuryr-libnetwork. Blueprints, bugs,
>> and general discussion.
>>
>> October 6th 14:00utc: Fuxi
>
> This session is Oct 4th in the etherpad.
>>
>> In this session we'll discuss everything related to storage, both in
>> the
>> Docker and in the Kubernetes worlds.
>>
>>
>> I'll put the links to the bluejeans sessions in the etherpad[0].
>>
>>
>> [0] https://etherpad.openstack.org/p/kuryr-queens-vPTG



[openstack-dev] [kuryr] vPTG schedule

2017-09-28 Thread Antoni Segura Puimedon
Hi fellow Kuryrs!

It's that time of the cycle again where we hold our virtual project team
gathering[0]. The dates this time are:

October 2nd, 3rd and 4th

The proposed sessions are:

October 2nd 13:00utc: Scale discussion
In this session we'll talk about the recent scale testing we have performed
in a 112-node cluster. From this starting point, we'll work on identifying
and prioritizing several initiatives to improve the performance of the
pod-in-VM and the baremetal scenarios.

October 2nd 14:00utc: Scenario testing
The September 27th release of zuulv3 opens the gates for better scenario
testing, especially regarding multinode scenarios. We'll discuss the tasks
and outstanding challenges to achieve good scenario testing coverage and
document well how to write these tests in our tempest plugin.

October 3rd 13:00utc: Multi networks
As the Kubernetes community Network SIG draws near to having a consensus on
multi-network implementations, we must elaborate a plan for a PoC that takes
the upstream Kubernetes consensus and implements it with Kuryr-Kubernetes
in a way that we can serve normal overlay and accelerated networking.

October 4th 14:00utc: Network Policy
Each cycle we aim to narrow the gap between Kubernetes networking entities
and our translations. In this cycle, apart from the Loadbalancer service
type support, we'll be tackling how we map Network Policy to Neutron
networking. This session will first lay out Network Policy and its use and
then discuss one or more mappings.

October 5th 13:00utc: Kuryr-libnetwork
We'll do the cycle planning for Kuryr-libnetwork. Blueprints, bugs, and
general discussion.

October 6th 14:00utc: Fuxi
In this session we'll discuss everything related to storage, both in the
Docker and in the Kubernetes worlds.


I'll put the links to the bluejeans sessions in the etherpad[0].


[0] https://etherpad.openstack.org/p/kuryr-queens-vPTG



[openstack-dev] [kuryr] No weekly IRC meeting today

2017-08-07 Thread Antoni Segura Puimedon
Hi Kuryrs!

Since most people are on PTO today, we'll skip the weekly IRC meeting
and hold it next week. I don't have PTO, so if there's anything you
want to discuss, you are very welcome to come to #openstack-kuryr.

Regards,

Toni



[openstack-dev] [Kuryr] Queens PTL candidacy

2017-08-03 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

I present my PTL candidacy for the Queens Cycle.

We've done well in this past cycle, getting close to our first release and
adding things like:
- Kubernetes service support
- Token authentication support
- A lot of devstack improvements, like multinode
- Pod-in-VM Neutron trunk and macvlan support
- Resource pooling
- Octavia support

Also some things that are in-flight and that may still make it in this
cycle or early next:
- CNI split
- Containerized deployment with daemonsets
- SR-IOV support

In this new upcoming cycle I would like to bring the following things:
- Full multi-network support consistent with upstream design
- High Availability (active - active) controllers
- Controller Health and Readiness checks
- SR-IOV support
- Network Policy
- Ingress controllers

I am very thankful for all the people and organizations that have
contributed not only in code, but also in reviews, documentation,
blueprints, bugs and discussion. Serving as PTL this last cycle has
been a great pleasure and I would love to serve again.

Regards,

Toni



Re: [openstack-dev] [kuryr] Nominate Kirill Zaitsev as kuryr-tempest-core reviewer

2017-07-04 Thread Antoni Segura Puimedon
On Tue, Jul 4, 2017 at 12:23 PM, Gal Sagie  wrote:
> +1
+1
>
> On Tue, Jul 4, 2017 at 12:28 PM, Daniel Mellado 
> wrote:
>>
>> Hi Team,
>>
>> I wanted to nominate Kirill for kuryr-tempest-core reviewer. He's been
>> a great help from the start, both contributing and reviewing.
>>
>> Please voice your support or concerns, if any.
>>
>> Best!
>>
>> Daniel
>>
>
>
>
>
> --
> Best Regards ,
>
> The G.
>



[openstack-dev] [kuryr] weekly IRC meeting cancelled today

2017-06-05 Thread Antoni Segura Puimedon
Hi Kuryrs,

Today Irena and I are attending the OpenStack Israel day and won't be
able to chair the meeting. We can catch up tomorrow on IRC during the
day.

Toni



[openstack-dev] [Kuryr] Weekly IRC meeting on May 1st is cancelled

2017-04-29 Thread Antoni Segura Puimedon
Hi Kuryrs!

Most of you have a holiday on May 1st, so we'll cancel the weekly IRC
meeting. The following one falls in the summit week. I'll check whether I
have a talk at the same time as the meeting; if I do, I'll have to cancel
that one as well.

Regards,

Toni



[openstack-dev] [kuryr] Videoconf meeting for Actor refactor proposal

2017-04-18 Thread Antoni Segura Puimedon
Hi Kuryrs,

Tomorrow there's going to be a meeting to discuss Ilya's proposal for
a refactor of Kuryr controller entities based on the actor model. The
meeting will be held at:

https://bluejeans.com/826701641

12:00 UTC

I'll post the recording probably the day after.

See you there,

Toni



[openstack-dev] [kuryr] Weekly IRC meeting today April 10th is cancelled

2017-04-10 Thread Antoni Segura Puimedon
Hi Kuryrs,

I'm on PTO in the mountains with limited connectivity, hence I won't
be able to make it to the weekly IRC meeting.

Regards and sorry for the short notice,

Toni



Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-24 Thread Antoni Segura Puimedon


On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>
> On 24/03/17 10:27 -0400, Davanum Srinivas wrote: 
> >Folks, 
> > 
> >As discussed in the etherpad: 
> >https://etherpad.openstack.org/p/go-and-containers 
> > 
> >Here's a request for a repo in OpenStack: 
> >https://review.openstack.org/#/c/449641/ 
> > 
> >This request pulls in the existing code from kubernetes/kubernetes 
> >repo and preserves the git history too 
> >https://github.com/dims/k8s-cloud-provider 
> > 
> >Anyone interested? please ping me on Slack or IRC and we can continue 
> this work. 
>
> Yeah - I would love to continue the provider work on gerrit :) 
>
> Is there a way for us to make sure changes in the k8 master don't 
> break our plugin? Or do we need to periodic jobs on the provider repo 
> to catch breakages in the plugin interface? 
>

I suppose the options are either:

- Ask k8s to add select external cloud providers to their CI, or
- Have a webhook in the k8s repo that triggers CI on the OpenStack infra.

>
> Thanks, Graham 


[openstack-dev] [kuryr] VTG Action item sorting

2017-03-06 Thread Antoni Segura Puimedon
Hi Kuryrs!

Thanks a lot for the hard work in last week's VTG sessions. We created
quite a few action items, and tomorrow will be the time to sort them:

https://trello.com/b/1Ij919E8

I'll try to eventually move it to storyboard, but we need to get some
stuff created.

The session will be held as videoconference (will post recording as
usual) at 13:30 UTC. To join:

https://bluejeans.com/5508709975

You can also join by phone by finding a local number in [1] and entering
the meeting id "5508709975" followed by a '#'.

[1] https://www.intercallonline.com/listNumbersByCode.action?confCode=5508709975



[openstack-dev] [kuryr] final day recordings for the Virtual Team Gathering

2017-03-03 Thread Antoni Segura Puimedon
Hi Kuryrs,

Thank you all for participating in the VTG. Here you can find the last
two recordings:

https://youtu.be/ti4oOK6p_Dw

https://youtu.be/iEdOTngEw4I

Let's discuss the priority of all the action items next week!

Toni



[openstack-dev] [kuryr] 2nd day recording for the Virtual Team Gathering

2017-03-01 Thread Antoni Segura Puimedon
Hi Kuryrs!

Thanks again for joining the Virtual Team Gathering sessions. Here you
have the links to the recordings:

https://youtu.be/o1RKNOAhqho

https://youtu.be/ovbK5kk5AZ0

See you today on the last day of this first VTG!

Toni



[openstack-dev] [kuryr] 1st day recording for the Virtual Team Gathering

2017-03-01 Thread Antoni Segura Puimedon
Hi Kuryrs!

Thank you all for joining yesterday. For those unable to make it, here
are the recordings for the sessions:

https://youtu.be/Hdn9LOnCrSc

https://youtu.be/6D5iGEkKtGc

See you at today's sessions!

Toni



[openstack-dev] [kuryr] Weekly IRC meeting today cancelled

2017-02-27 Thread Antoni Segura Puimedon
Hello Kuryrs,

Due to preparations for this week's VTG, today's meeting is cancelled.


Looking forward to seeing you all tomorrow!

Toni


[openstack-dev] [kuryr][dragonflow][octavia] layer 7 load balancing discussion

2017-02-21 Thread Antoni Segura Puimedon
Hi all!

An impromptu conversation started between Omer and me about L4 and L7 load
balancers for serving Kubernetes workloads (both services and ingress
controllers), and we thought it would be nice to have more feedback on the
discussion, particularly from Octavia folks.

Omer kindly offered to have the discussion at 11:00 tomorrow in the
Dragonflow room, Savannah 2 (level 2) [0].

Some of the topics to cover will be:

- Generic API for load balancing based on headers versus protocol drivers
- Octavia kubernetes compute driver
- Dragonflow Octavia implementation

We'll keep working on the etherpad leading up to the meeting for topics and
structure of the discussion.

Regards,

Toni

[0] https://etherpad.openstack.org/p/dragonflow-l7lbaas


[openstack-dev] [Kuryr] VTG schedule and details

2017-02-20 Thread Antoni Segura Puimedon
Hi Kuryrs!

The VTG sessions[0] will be held in bluejeans:

https://bluejeans.com/5508709975

You can also join by phone by finding a local number in [1] and entering
the meeting id "5508709975" followed by a '#'.

The sessions will be:


┌───────────────┬─────────────────────────┬──────────────────────────────────┐
│               │ 12:30-13:30 utc         │ 13:45-14:45 utc                  │
├───────────────┼─────────────────────────┼──────────────────────────────────┤
│ Tue Feb 28th  │ Kuryr-K8s HA            │ Kuryr-K8s tenancy and net policy │
├───────────────┼─────────────────────────┼──────────────────────────────────┤
│ Wed March 1st │ Kuryr-K8s resource Mgmt │ Fuxi: K8s and Docker             │
├───────────────┼─────────────────────────┼──────────────────────────────────┤
│ Thu March 2nd │ Kuryr-K8s multi device  │ Kuryr-K8s client and testing     │
└───────────────┴─────────────────────────┴──────────────────────────────────┘




[0] https://etherpad.openstack.org/p/kuryr_virtual_gathering_2017h1
[1] https://www.intercallonline.com/listNumbersByCode.action?confCode=5508709975


Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leveraging kuryr

2017-02-10 Thread Antoni Segura Puimedon
On Fri, Feb 10, 2017 at 3:24 PM, Flavio Percoco  wrote:

> On 09/02/17 09:57 +0100, Flavio Percoco wrote:
>
>> Greetings,
>>
>> I was talking with Tony and he mentioned that he's recording a new demo
>> for kuryr and, well, it'd be great to also use the containerized version
>> of TripleO for the demo.
>>
>> His plan is to have this demo out by next week and that may be too
>> tight for the containerized version of TripleO (it may not be, let's
>> try). That said, I think it's still a good opportunity for us to sit
>> down at the PTG and play with this a bit further.
>>
>> So, before we set a date and time for this, I wanted to extend the
>> invite to other folks and see if there's some interest. It'd be great
>> to also have folks from Kolla and openstack-helm joining.
>>
>> Looking forward to hearing ideas and hacking with y'all,
>> Flavio
>>
>
> So, given the interest and my hope to group as many folks from other
> teams as possible, what about we just schedule this for Wednesday at
> 09:00 am?
>
> I'm not sure what room we can crash yet but I'll figure it out soon and let
> y'all know.
>
> Any objections/observations?


Sounds good to me!


>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>


Re: [openstack-dev] [tripleo][kolla][openstack-helm][kuryr] OpenStack on containers leveraging kuryr

2017-02-09 Thread Antoni Segura Puimedon
On Thu, Feb 9, 2017 at 10:00 PM, Dan Sneddon  wrote:

> Pete, thanks for mentioning network isolation and segmentation. That's
> my area of interest, since I'm focused on underlay networking for
> TripleO and bare-metal networking in Ironic.
>
> Network isolation is going to be important for several reasons:
>
> 1) Separation of control and data plane in deployments
> 2) Tenant isolation in multi-tenant Ironic BMaaS
> 3) Network Function Virtualization (NFV) use cases
>
> The intention of the isolated networking model for TripleO was to
> separate control and data plane, as well as tenant from administrative
> traffic. A secondary goal was to make this highly configurable and
> customizable. This has been well received by many operators who have
> rigid security isolation requirements (such as PCI-DSS for financial
> transactions), or those who customize their underlay network to
> integrate into an existing networking topology. I'm thinking about how
> to do something similar in Kubernetes, perhaps with Kuryr.
>
> The Harbor project looks very interesting. Do you have any more
> information about how Harbor uses Raven to achieve isolation? Also, are
> you saying that Harbor uses an older (prototype) version of Raven, or
> are you referring to Raven itself as a prototype?
>

I can answer some of that :-)

Raven was the Python 3 asyncio-based prototype my team built back
when I was at Midokura for integrating Kubernetes and Neutron, as
something to then upstream to Kuryr with the help of the rest of the
community (taking the lessons learned from the PoC and improving
on them). So yes, Raven itself was a prototype (quite a functional one)
and led to what we know today in Kuryr as the kuryr-kubernetes
controller, which is now almost at the same level of features, missing
just two patches for the service support.

I have to note here that Pete made some interesting modifications to
Raven, like adding OVN support and leveraging the watcher model
to make, IIRC, the cluster services use the native OVN load balancer
rather than neutron-lbaas.

The kuryr-kubernetes controller is built with pluggability in mind and it
has a system of drivers (using stevedore) for acquiring resources. This
makes things like what Pete did easier to achieve with the new codebase,
and it also lets you pick the level of isolation that you want. Let's say
that you want the different OSt components to pick different networks or
even projects: you would just need to write a very small driver like [0]
or [1] that could, for example, make an HTTP request to some service that
holds a mapping, read some specific annotation, etc.

In terms of isolation for deployments, we are starting discussion about
leveraging the new CNI support for reporting multiple interfaces (still
not implemented in k8s, but playing is fun) so that we can put the pods
that need it both in the control and in the data plane. We'll probably
need to tweak the interface of the drivers so that they can return an
iterable.


[0] https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/controller/drivers/default_project.py#L39
[1] https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/controller/drivers/default_subnet.py#L56
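
As a concrete illustration of how small such a driver can be, here is a
hedged sketch of a pod project driver in the spirit of [0] that reads the
project from a pod annotation instead of from configuration. The base
class matches the kuryr-kubernetes driver interface linked above; the
annotation key is an invented example:

    from kuryr_kubernetes.controller.drivers import base

    # Illustrative annotation key; not one kuryr-kubernetes defines.
    PROJECT_ANNOTATION = 'openstack.org/kuryr-project'


    class AnnotationPodProjectDriver(base.PodProjectDriver):
        """Pick the Neutron project for a pod from one of its annotations."""

        def get_project(self, pod):
            annotations = pod['metadata'].get('annotations', {})
            try:
                return annotations[PROJECT_ANNOTATION]
            except KeyError:
                raise ValueError('Pod %s has no %s annotation' %
                                 (pod['metadata']['name'],
                                  PROJECT_ANNOTATION))

Registering such a class under the corresponding stevedore entry point
would be all it takes to switch the controller over to it.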

>
> I'll be at the PTG Tuesday through Friday morning. I'm looking forward
> to having some conversations about this topic.
>
> --
> Dan Sneddon |  Senior Principal OpenStack Engineer
> dsned...@redhat.com |  redhat.com/openstack
> dsneddon:irc|  @dxs:twitter
>
> On 02/09/2017 09:56 AM, Pete Birley wrote:
> > Hi Flavio,
> >
> > I've been doing some work on packaging Kuryr for use with K8s as an
> > underlay for OpenStack on Kubernetes. When we met up in Brno the Harbor
> > project I showed you used Tony's old Raven Prototype to provide the
> > network isolation and segmentation in K8s. I've since begun to lay the
> > groundwork for OpenStack-Helm to support similar modes of operation,
> > allowing both service isolation and also combined networking between
> > OpenStack and K8s, where pods and VMs can co-exist on the same Neutron
> > Networks.
> >
> > I'm not sure I will have things fully functional within OpenStack-Helm
> > by the PTG, but it would be great to sit down and work out how we can
> > ensure that not only do we not end up replicating work needlessly, but
> > also find further opportunities to collaborate. I'll be in Atlanta all
> > week, though I think some of the OS-Helm and Kolla-K8s developers will
> > be leaving on Wed, would a particular day/time work best for you?
> >
> >
> > Cheers
> >
> > Pete (portdirect)
> >
> >
> > On Thu, Feb 9, 2017 at 8:57 AM, Flavio Percoco wrote:
> >
> > Greetings,
> >
> > I was talking with Tony and he mentioned that he's recording a new
> > demo for
> > kuryr and, well, it'd be great to also use the containerized version
> > of TripleO
> > for the demo.
> >
> > 

Re: [openstack-dev] [kuryr] Resource management videoconf meeting

2017-02-09 Thread Antoni Segura Puimedon
On Thu, Feb 9, 2017 at 7:48 AM, Antoni Segura Puimedon <celeb...@gmail.com>
wrote:

>
>
> On Tue, Feb 7, 2017 at 5:30 PM, Antoni Segura Puimedon
> <celeb...@gmail.com> wrote:
>
>> Hi all,
>>
>> In the last weekly IRC meeting it was agreed to hold a videoconf meeting
>> about resource management. For those unaware, the topic revolves around the
>> reutilization of Neutron resources like ports and subports, as well as
>> virtual devices like veths. This allows for things like batching Neutron
>> calls and achieving much shorter times when creating pods.
>>
>> The meeting time will be 13:30UTC Thurdsay February 9th on bluejeans:
>>
>
> The meeting has been moved to 12 UTC
>
>>
>> In order to join by phone:
>>
>> You can find a local number to call at:
>>
>> https://www.intercallonline.com/listNumbersByCode.action?confCode=5508709975
>>
>> Meeting id: 844311720
>>
>> In order to join with the computer:
>>
>> https://bluejeans.com/844311720
>>
>> The meeting recording link will be made available.
>>
>
https://youtu.be/3698DBV-Ng4


>
>> Regards,
>>
>> Toni
>>
>
>


Re: [openstack-dev] [kuryr] Resource management videoconf meeting

2017-02-08 Thread Antoni Segura Puimedon
On Tue, Feb 7, 2017 at 5:30 PM, Antoni Segura Puimedon <celeb...@gmail.com>
wrote:

> Hi all,
>
> In the last weekly IRC meeting it was agreed to hold a videoconf meeting
> about resource management. For those unaware, the topic revolves around the
> reutilization of Neutron resources like ports and subports, as well as
> virtual devices like veths. This allows for things like batching Neutron
> calls and achieving much shorter times when creating pods.
>
> The meeting time will be 13:30UTC Thurdsay February 9th on bluejeans:
>

The meeting has been moved to 12 UTC

>
> In order to join by phone:
>
> You can find a local number to call at:
>
> https://www.intercallonline.com/listNumbersByCode.action?confCode=5508709975
>
> Meeting id: 844311720
>
> In order to join with the computer:
>
> https://bluejeans.com/844311720
>
> The meeting recording link will be made available.
>
> Regards,
>
> Toni
>


[openstack-dev] [kuryr] Resource management videoconf meeting

2017-02-07 Thread Antoni Segura Puimedon
Hi all,

In the last weekly IRC meeting it was agreed to hold a videoconf meeting
about resource management. For those unaware, the topic revolves around the
reutilization of Neutron resources like ports and subports, as well as
virtual devices like veths. This allows for things like batching Neutron
calls and achieving much shorter times when creating pods.
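
For those unfamiliar with the topic, here is a minimal sketch of the
reutilization idea: a pool that bulk-creates Neutron ports ahead of time
and hands them out as pods are created. It assumes a client that supports
Neutron's bulk port creation; the class and method names are illustrative,
not the eventual kuryr design:

    import collections


    class PortPool(object):
        """Pre-created Neutron ports that can be reused across pods."""

        def __init__(self, neutron, network_id, batch_size=10):
            self._neutron = neutron
            self._network_id = network_id
            self._batch_size = batch_size
            self._available = collections.deque()

        def acquire(self):
            if not self._available:
                self._repopulate()
            return self._available.popleft()

        def release(self, port):
            # Keep the port around for the next pod instead of deleting it.
            self._available.append(port)

        def _repopulate(self):
            # One bulk request instead of batch_size separate calls.
            body = {'ports': [{'network_id': self._network_id,
                               'admin_state_up': True}
                              for _ in range(self._batch_size)]}
            self._available.extend(self._neutron.create_port(body)['ports'])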

The meeting time will be 13:30 UTC, Thursday February 9th, on bluejeans:

In order to join by phone:

You can find a local number to call at:


https://www.intercallonline.com/listNumbersByCode.action?confCode=5508709975

Meeting id: 844311720

In order to join with the computer:

https://bluejeans.com/844311720

The meeting recording link will be made available.

Regards,

Toni


Re: [openstack-dev] [magnum][kuryr] python-k8sclient vs client-python (was Fwd: client-python Beta Release)

2017-01-30 Thread Antoni Segura Puimedon
On Thu, Jan 26, 2017 at 12:41 PM, Davanum Srinivas 
wrote:

> Team,
>
> A bit of history: we had a client generated from the swagger definition
> for a while in Magnum; we plucked it out into python-k8sclient, which then
> got used by fuel-ccp, kuryr, etc. Recently the Kubernetes team started an
> effort called client-python. Please see the 1.0.0b1 announcement.
>
> * It's on pypi [1] and readthedocs [2]
> * I've ported the e2e tests in python-k8sclient that run against an
> actual k8s setup and got that working
> * I've looked at various tests in kuryr, fuel-ccp, magnum, etc. to see
> what could be ported as well. Most of it is merged already; I have a
> couple of things in progress
>
> So, when client-python hits 1.0.0, can we please mothball our
> python-k8sclient and switch over to the k8s community supported option?
> Can you please evaluate what's missing so we can make sure those things
> get into 1.0.0 final?
>

I am all for this. Thanks for the good work, Davanum! I think this is a
perfect case where the OpenStack community can give back to other upstream
communities, and we should improve client-python where we need.
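
For reference, consuming the community client is roughly as below (a
minimal sketch against the beta API announced in [1][2]; the exact call
names may differ between releases):

    from kubernetes import client, config

    # Load credentials from the default kubeconfig (~/.kube/config).
    config.load_kube_config()

    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name)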


>
> Thanks,
> Dims
>
> [1] https://pypi.python.org/pypi/kubernetes
> [2] http://kubernetes.readthedocs.io/en/latest/kubernetes.html
>
> -- Forwarded message --
> From: 'Mehdy Bohlool' via Kubernetes developer/contributor discussion <
> kubernetes-...@googlegroups.com>
> Date: Wed, Jan 25, 2017 at 8:34 PM
> Subject: client-python Beta Release
> To: Kubernetes developer/contributor discussion <
> kubernetes-...@googlegroups.com>, kubernetes-us...@googlegroups.com
>
>
> Python client is now in beta. Please find more information here:
> https://github.com/kubernetes-incubator/client-python/releases/tag/v1.0.0b1
>
> You can reach the maintainers of this project at SIG API Machinery.
> If you have any problem with the client or any suggestions, please file
> an issue.
>
>
> Mehdy Bohlool |  Software Engineer |  me...@google.com |  mbohlool@github
> 
>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>


[openstack-dev] [kuryr][ptl] PTL candidacy

2017-01-24 Thread Antoni Segura Puimedon
Hi Kuryrs!

I am putting forward my candidacy[1] for a second term as the PTL. It is
great to work with all of you every day to make kuryr deliver on its
mission.

Over this current cycle, we've managed to rebuild the team after we had
some core departures and I think we are in a very good place, with new very
involved core contributors and others showing promise to become cores
during this upcoming cycle. Kuryr is built by all of us and I'd like to
thank you all for the hard work.

Now that we've reached container-in-VM support, some of the outstanding
goals are:

- Increasing the robustness
- Making kuryr-kubernetes HA
- Resource management in kuryr-kubernetes
- CNI split
- CNI multi device support
- Continuing to work with the Fuxi subteam and getting its core reviewer
  structure stronger
- Docker 1.13 support for kuryr-libnetwork
- Magnum collaboration for container-in-VM

I look forward to working with all of you on getting Kuryr to the next level!

[1] https://review.openstack.org/#/c/424560/


[openstack-dev] [kuryr] Project Virtual Gathering (PVG)

2017-01-18 Thread Antoni Segura Puimedon
Hi Kuryrs!

Due to travel restrictions, we opted not to take part in the Atlanta
PTG. However, the design work sessions won't disappear :-)

Please go through its etherpad [0] and propose, vote on, and comment about
sessions, format and scheduling.

Regards,

Toni

[0] https://etherpad.openstack.org/p/kuryr_virtual_gathering_2017h1


Re: [openstack-dev] [kuryr] Ocata cycle ending and proposing new people as Kuryr cores

2017-01-16 Thread Antoni Segura Puimedon
That's a majority of the cores having cast positive votes.

Congratulations to Liping Mao and Ilya Chukhnakov! You're now cores and on
the hook!


On Mon, Jan 16, 2017 at 3:10 AM, Vikas Choudhary
<choudharyvika...@gmail.com> wrote:

> +1 for both.
>
> On Sun, Jan 15, 2017 at 12:42 PM, Gal Sagie <gal.sa...@gmail.com> wrote:
>
>> +1 for both.
>>
>> On Sun, Jan 15, 2017 at 9:05 AM, Irena Berezovsky <irenab@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Fri, Jan 13, 2017 at 6:49 PM, Antoni Segura Puimedon
>>> <celeb...@gmail.com> wrote:
>>>
>>>> Hi fellow kuryrs!
>>>>
>>>> We are getting close to the end of the Ocata and it is time to look
>>>> back and appreciate the good work all the contributors did. I would like to
>>>> thank you all for the continued dedication and participation in gerrit, the
>>>> weekly meetings, answering queries on IRC, etc.
>>>>
>>>> I also want to propose two people that I think will help us a lot as
>>>> core contributors in the next cycles.
>>>>
>>>> For Kuryr-lib and kuryr-libnetwork I want to propose Liping Mao. Liping
>>>> has been contributing a lot since Mitaka, not just in code but also in
>>>> reviews, catching important details and fixing bugs. It is overdue that
>>>> he gets to help us even more!
>>>>
>>>> +1
>>>
>>>> For Kuryr-kubernetes I want to propose Ilya Chukhnakov. Ilya got into
>>>> Kuryr at the end of the Newton cycle and has done a wonderful job in the
>>>> Kubernetes integration contributing heaps of code and being an important
>>>> part of the design discussions and patches. It is also time for him to
>>>> start approving patches :-)
>>>>
>>>> +1
>>>
>>>>
>>>> Let's have the votes until next Friday (unless enough votes are cast
>>>> earlier).
>>>>
>>>> Regards,
>>>>
>>>> Toni
>>>>
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>


[openstack-dev] [kuryr] Ocata cycle ending and proposing new people as Kuryr cores

2017-01-13 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

We are getting close to the end of the Ocata cycle and it is time to look
back and appreciate the good work all the contributors did. I would like
to thank you all for the continued dedication and participation in gerrit,
the weekly meetings, answering queries on IRC, etc.

I also want to propose two people that I think will help us a lot as core
contributors in the next cycles.

For Kuryr-lib and kuryr-libnetwork I want to propose Liping Mao. Liping
has been contributing a lot since Mitaka, not just in code but also in
reviews, catching important details and fixing bugs. It is overdue that he
gets to help us even more!

For Kuryr-kubernetes I want to propose Ilya Chukhnakov. Ilya got into Kuryr
at the end of the Newton cycle and has done a wonderful job in the
Kubernetes integration contributing heaps of code and being an important
part of the design discussions and patches. It is also time for him to
start approving patches :-)


Let's keep the votes open until next Friday (unless enough votes are cast
earlier).

Regards,

Toni


Re: [openstack-dev] [kuryr] Dec 14th kuryr-kubernetes syncup

2016-12-13 Thread Antoni Segura Puimedon
On Mon, Dec 12, 2016 at 6:49 PM, Antoni Segura Puimedon <celeb...@gmail.com>
wrote:

> Hi fellow kuryrs!
>
> December 14th at 11:00 UTC we'll be having a video meeting [1] to sync
> about the current Kubernetes integration.
>

We're moving the meeting to bluejeans [3] so that we can have a recording.


> There is an etherpad for the topics that we'll be covering [2]. Feel free
> to add topics and +1 the topics that you want to have discussion on.
>
> The objective of the meeting is to come to decision about short term
> design and implementation. If there is time, we'll also kick off design
> talk on some longer term items.
>
> Regards,
>
> Toni
>
>
> [1] https://plus.google.com/hangouts/_/calendar/aXJlbmFiLmRldkBnbWFpbC5jb20.2kstghq4tavlmnhkfpnrjqd3j4
> [2] https://etherpad.openstack.org/p/kuryr-kubernetes-dec14-syncup
>

[3] https://bluejeans.com/5508709975/


[openstack-dev] [kuryr] Dec 14th kuryr-kubernetes syncup

2016-12-12 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

December 14th at 11:00 UTC we'll be having a video meeting [1] to sync
about the current Kubernetes integration. There is an etherpad for the
topics that we'll be covering [2]. Feel free to add topics and +1 the
topics that you want to have discussion on.

The objective of the meeting is to come to decision about short term design
and implementation. If there is time, we'll also kick off design talk on
some longer term items.

Regards,

Toni


[1] https://plus.google.com/hangouts/_/calendar/aXJlbmFiLmRldkBnbWFpbC5jb20.2kstghq4tavlmnhkfpnrjqd3j4
[2] https://etherpad.openstack.org/p/kuryr-kubernetes-dec14-syncup


Re: [openstack-dev] [Kuryr][Magnum] How stable is kuryr-kubernetes?

2016-12-06 Thread Antoni Segura Puimedon
On Tue, Dec 6, 2016 at 9:10 AM, Mikhail Fedosin  wrote:

> Hi folks!
>

Hi Mikhail!


>
> We at Samsung are trying to integrate containers in OpenStack and at this
> moment we are looking at Kubernetes deployed by Magnum, which works good
> enough for now.
>
> One challenge we have faced recently is making containers able to
> communicate with Nova VM instances (in other words we want to integrate
> Neutron in Kubernetes) and Kuryr seems to be a right solution (based on its
> description). Unfortunately there is a lack of documentation, but from
> various presentations on youtube I got that kuryr has been split in two
> projects (kuryr-libnetwork for Docker Swarm and kuryr-kubernetes for
> Kubernetes respectively, and they both share a common library called
> "kuryr").
>

That's exactly right!


> kuryr-libnetwork continues the previous work, which the community has
> been implementing for over a year. It looks stable; nevertheless it
> doesn't work with the latest Docker 1.12.
>

It works with 1.12, but not with 1.12's Swarm mode, since that is hardcoded
to use Docker's overlay driver; that is expected to change.


> kuryr-kubernetes is rather new, and I wonder if it can be already used (at
> least on devstack), or maybe some further efforts are required.
>

We have a previous python3 (and lbaasv1) only prototype that can be used to
test how it all works:

 https://github.com/midonet/kuryr/tree/k8s

With kuryr-kubernetes we are now reaching the stage of having services
supported again (they were supported in the above prototype). There is
devstack support for

https://github.com/openstack/kuryr-kubernetes

The current state is that the CNI patch [1] is about to be merged and the
service watchers should come in soon.



> Then please enlighten me about current status of Magnum-Kuryr integration.
> I saw that this was discussed in Barcelona and Austin, but in Magnum's
> master it's still unavailable. Also it will be great if you point at the
> responsible person with whom I can discuss it more detailed and see how I
> can be involved in the development.
>

For Magnum integration we have to move kuryr-libnetwork's container-in-vm
support [2][3] (which is being merged this week) to kuryr-kubernetes (which
only supports bare-metal binding right now). Once that is done, work can
begin on Magnum using it in macvlan, ipvlan or vlan mode (there are two
vlan modes here: one container per vlan, and one subnet per vlan).

You can reach out to apuimedo (me), ivc_, irenab or vikasc about
kuryr-kubernetes and the same plus ltomasbo, lmdaly and mchiappero about
the container-in-vm.

Regards,

Toni


[1] https://review.openstack.org/#/c/404038/
[2] https://review.openstack.org/#/c/400365/
[3] https://review.openstack.org/#/c/402462/

>
> Thanks,
> Mike
>


Re: [openstack-dev] [kuryr][magnum] Notes from Summit fishbowl session

2016-11-03 Thread Antoni Segura Puimedon
On Thu, Nov 3, 2016 at 4:29 AM, Vikas Choudhary
<choudharyvika...@gmail.com> wrote:
>
>
> On Thu, Nov 3, 2016 at 12:33 AM, Antoni Segura Puimedon <celeb...@gmail.com>
> wrote:
>>
>> Hi magna and kuryrs!
>>
>> Thank you all for joining last week's meetings. I am now writing a few
>> emails to have persistent notes of what was talked about and discussed
>> in the Kuryr work sessions. In the Magnum joint session the points
>> were:
>>
>> Kuryr - Magnum joint work session
>> =
>>
>> Authentication
>> ==
>>
>> * Consensus on using Keystone trust tokens.
>> - We should closely follow the Keystone effort to scope the allowed
>>   actions per token, limiting those to the minimal required set of
>>   verbs that the COE and Kuryr need.
>>
>> * It was deemed unnecessary to pursue a proxying approach to access
>>   Neutron. This means VM applications should be able to reach Neutron
>> and
>>   Keystone but the only source of credentials they should have is the
>>   Keystone tokens.
>>
>>
>> Tenancy and network topology
>> 
>>
>> Two approaches should be made available to users:
>>
>> Full Neutron networking
>> ~~~
>>
>> Under this configuration, containers running inside the Nova instances
>> would get networking via the Neutron vlan-aware-VMs feature. This means
>> the COE driver (either kuryr-libnetwork or kuryr-kubernetes) would
>> request a Neutron subport for the container. In this way, there can be
>> multiple isolated networks running on worker nodes.
>>
>> The concerns about this solution are the performance when starting
>> large numbers of containers and the latency introduced when starting
>> them, due to going all the way to Neutron to request the subport.
>>
>> Minimal Neutron networking
>> ~~
>>
>
> Is this the ipvlan/macvlan approach?

Yes. This will use ipvlan/macvlan.

>
>>
>> In order to address the concerns with the 'Full Neutron networking'
>> approach, and as a trade-off between features and minimalism, under
>> this way of networking the containers would all be in the same Neutron
>> network as the ports of their VMs.
>>
>> The problem with this solution is that allowing multiple isolated
>> networks, as CNM and Kubernetes with network policy have, is quite
>> complicated.
>>
>>
>> Regards,
>>
>> Toni
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr][magnum] Notes from Summit fishbowl session

2016-11-02 Thread Antoni Segura Puimedon
Hi magna and kuryrs!

Thank you all for joining last week meetings. I am now writing a few
emails to have persistent notes of what was talked about and discussed
in the Kuryr work sessions. In the Magnum joint session the points
were:

Kuryr - Magnum joint work session
=================================

Authentication
==============

* Consensus on using Keystone trust tokens.
- We should follow closely the Keystone effort into scoping the allowed
  actions per token to limit those to the minimal required set of verbs
  that the COE and Kuryr need.

* It was deemed unnecessary to pursue a proxying approach to access
  Neutron. This means VM applications should be able to reach Neutron and
  Keystone but the only source of credentials they should have is the
  Keystone tokens.


Tenancy and network topology
============================

Two approaches should be made available to users:

Full Neutron networking
~~~~~~~~~~~~~~~~~~~~~~~

Under this configuration, containers running inside the nova instances
would get networking via Neutron vlan-aware-VMs feature. This means the COE
driver (either kuryr-libnetwork or kuryr-kubernetes) would request a
Neutron subport for the container. In this way, there can be multiple
isolated networks running on worker nodes.

The concerns about this solution are the performance when starting large
numbers of containers and the latency introduced when starting them, due to
going all the way to Neutron to request the subport.
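
For reference, requesting such a subport could look roughly like the
following python-neutronclient sketch (it assumes the vlan-aware-VMs/trunk
extension is available and a client version that supports it; the ids in
capitals and the vlan tag are placeholders)::

    from neutronclient.v2_0 import client as n_client

    neutron = n_client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://127.0.0.1:5000/v2.0')

    # Create the child port for the container on the container network.
    child = neutron.create_port({'port': {
        'network_id': 'CONTAINER_NET_ID',
        'name': 'container-port'}})['port']

    # Attach it to the VM's trunk; traffic for the container is then
    # tagged inside the VM with the segmentation id.
    neutron.trunk_add_subports('TRUNK_ID', {'sub_ports': [{
        'port_id': child['id'],
        'segmentation_type': 'vlan',
        'segmentation_id': 101}]})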

Minimal Neutron networking
~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to address the concerns with the 'Full Neutron networking'
approach, and as a trade-off between features and minimalism, under this way
of networking the containers would all be in the same Neutron network as the
ports of their VMs.

The problem with this solution is that allowing multiple isolated networks,
like those that CNM and Kubernetes with policy have, is quite complicated.


Regards,

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Kuryr IPVlan Code PoC

2016-09-24 Thread Antoni Segura Puimedon
On Wed, Sep 21, 2016 at 11:53 AM, Daly, Louise M
 wrote:
> Hi everyone,
>
>
>
> As promised here is a link to the code PoC for the Kuryr-IPVlan proposal.
>
> https://github.com/lmdaly/kuryr-libnetwork
>
>
>
> Link to specific commit
>
> https://github.com/lmdaly/kuryr-libnetwork/commit/1dc895a6d8bfaa03c0dd5cfb2d3e23e2e948a67c
>
>
>
> From here you can clone the repo and install Kuryr as you normally would
> with a few additional steps:
>
>
>
> 1. The IPVlan driver must be installed on the VM/Machine that the PoC will
> be run on. Fedora-Server(not the cloud image) includes the driver by default
> but the likes of the cloud image must be modified to include the driver.
>
> 2. You must install Docker experimental.
>
> 3. You must use the Kuryr IPAM driver for address management.
>
> 4. In order to enable the IPVlan mode you must change the ipvlan option in
> the kuryr.conf file from false to true.
>
> 5. You must also change the ifname option to match the interface of the
> private network you wish to run the containers on. (Default is ens3)
>
> 6. As listed in the limitations on the README.rst on kuryr “To create Docker
> networks with subnets having same/overlapping cidr, it is expected to pass
> unique pool name for each such network creation Docker command.” You will
> need to do this if you are creating a docker network with the same private
> network on another VM.
>
>
>
> The IPVlan proposal was sent out to the mailing list – link for those who
> missed it.
>
> http://osdir.com/ml/openstack-dev/2016-09/msg00816.html
>
>
>
> Please send any feedback, issues, comments, bugs.

Thanks a lot Louise,

I've taken the core of the binding code and tried to port it to
kuryr-lib [1]. It is still a work in progress.

With this patch, it should be possible to have kuryr-libnetwork only do the part
of requesting the extra IP (allocated by creating a new port?) and then updating
the allowed address pairs.
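
For illustration, that flow could look roughly as follows with
python-neutronclient (just a sketch; the ids in capitals are placeholders)::

    from neutronclient.v2_0 import client as n_client

    neutron = n_client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://127.0.0.1:5000/v2.0')

    # Allocate the extra IP by creating a (never bound) port on the
    # VM's network.
    extra = neutron.create_port({'port': {'network_id': 'VM_NET_ID'}})['port']
    container_ip = extra['fixed_ips'][0]['ip_address']

    # Let the VM port carry traffic for the container's address.
    vm_port = neutron.show_port('VM_PORT_ID')['port']
    pairs = vm_port.get('allowed_address_pairs', [])
    pairs.append({'ip_address': container_ip})
    neutron.update_port('VM_PORT_ID',
                        {'port': {'allowed_address_pairs': pairs}})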

Regards,

Toni

[1] https://review.openstack.org/#/c/375864/1


>
>
>
> Thanks,
>
> Louise
>
>
>
>
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][FFE][kuryr] kuryr-lib

2016-09-19 Thread Antoni Segura Puimedon
Hi,

We recently made the first release (0.1.0) of kuryr-lib on which no
project depends except openstack/kuryr-libnetwork. Now, kuryr-lib by
itself is no more than a base library and it is released only to
serve kuryr-libnetwork and kuryr-kubernetes.

Now we are in the process of releasing kuryr-libnetwork and we need
to include kuryr-lib in global requirements[1] (since kuryr-libnetwork's
requirements are managed by the OpenStack proposal bot). Thus, I
request an exception to set the minimal kuryr-lib version.

Regards,

Toni

[1] https://review.openstack.org/#/c/369755/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] September 19th meeting agenda

2016-09-19 Thread Antoni Segura Puimedon
Hi Kuryrs!

Yet again I'm late in posting the agenda, sorry. Here goes today's meeting
agenda[1]. As always, feel free to add items or bring them up during the
open discussion, I'll be checking the agenda for updates just before
the meeting starts at 14:00 UTC.

See you in the meeting,

Toni

[1] 
https://wiki.openstack.org/wiki/Meetings/Kuryr#Meeting_September_19th.2C_2016

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-14 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 7:04 PM, Steven Dake (stdake)  wrote:
> To the OpenStack Community,
>
>
>
> Consider this email my self non-nomination for PTL of Kolla for
>
> the coming Ocata release.  I let the team know in our IRC team meeting
>
> several months ago I was passing on the baton at the conclusion of Newton,
>
> but I thought the broader OpenStack community would appreciate the
> information.
>
>
>
> I am super proud of what our tiny struggling community produced starting
>
> 3 years ago with only 3 people to the strongly emergent system that is Kolla
>
> with over 467 total contributors [1] since inception and closing in on 5,000
>
> commits today.
>
>
>
> In my opinion, the Kolla community is well on its way to conquering the last
>
> great challenge OpenStack faces: Making operational deployment management
> (ODM)
>
> of OpenStack cloud platforms straight-forward, easy, and most importantly
>
> cost effective for the long term management of OpenStack.
>
>
>
> The original objective the Kolla community set out to accomplish, deploying
>
> OpenStack in containers at 100 node scale has been achieved as proven by
> this
>
> review [2].  In these 12 scenarios, we were able to deploy with 3
>
> controllers, 100 compute nodes, and 20 storage nodes using Ceph for all
>
> storage and run rally as well as tempest against the deployment.
>
>
>
> Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.
>
>
>
> I have developed a strong leadership pipeline and expect several candidates
>
> to self-nominate.  I wish all of them the best in the future PTL elections.
>
>
>
> Finally, I would like to thank all of the folks that have supported Kolla’s
>
> objectives.  If I listed the folks individually this email would be far too
>
> long, but you know who you are :) Thank you for placing trust in my
> judgement.

Thank you Steven! You and the Kolla people have always been around when
Kuryr needed help and/or guidance.

We appreciate it a lot.
>
>
>
> It has been a pleasure to serve as your leader.
>
>
>
> Regards
>
> -steak
>
>
>
> [1] http://stackalytics.com/report/contribution/kolla-group/2000
>
> [2] https://review.openstack.org/#/c/352101/
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Antoni Segura Puimedon
On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
>
>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
>  wrote:
>>
>>
>>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>>>
>>> Ivan,
>>>
>>> Thanks for the proposal. From Magnum's point of view, this proposal
>>> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
>>> which is more desirable. I am looking forward to the PoC.
>>
>>
>> Hongbin, can you please elaborate on how this will not require storing
>> neutron credentials?
>> For example in libnetwork case, neutron's commands like "show_port" and
>> "update_port" will still need to be invoked from inside VM.
>
>
> In a typical COE cluster, there are master nodes and work (minion/slave)
> nodes. Regarding to credentials, the following is optimal:
> * Avoid storing credentials in work nodes. If credentials have to be stored,
> move them to master nodes if we can (containers are running in work nodes so
> credentials stored there have a higher risk). A question for you, neutron's
> commands like "show_port" and "update_port" need to be invoked from work
> nodes or master nodes?
> * If credentials have to be stored, scope them with least privilege (Magnum
> uses Keystone trust for this purpose).

I think that with the ipvlan proposal you probably can do without having to call
those two. IIUC the proposal, the binding on the VM, taking libnetwork
as an example, would be:

1. docker sends a request to kuryr-libnetwork running in container-in-vm mode.
2. kuryr-libnetwork forwards the request to a kuryr daemon that has the
necessary credentials to talk to neutron (it could run either in the master
node or in the compute node, just like the dhcp agent does, i.e., with one
foot on the VM network and one on the underlay).
3. The kuryr daemon makes the allowed-address-pairs requests to Neutron and
returns the result to the kuryr-libnetwork in the VM, at which point the VM
port can already send and receive data for the container.
4. kuryr-libnetwork in the VM creates an ipvlan virtual device and gives it
the IP returned by the kuryr daemon.
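
For step 4, the in-VM binding could look roughly like this with pyroute2 (a
sketch; the interface names and the address are placeholders, and the slave
device would then be moved into the container's netns)::

    from pyroute2 import IPRoute

    ipr = IPRoute()
    parent = ipr.link_lookup(ifname='eth0')[0]  # the VM's Neutron-facing NIC

    # Create the ipvlan slave on top of the VM's interface (0 == L2 mode).
    ipr.link('add', ifname='ipvl0', kind='ipvlan', link=parent,
             ipvlan_mode=0)
    slave = ipr.link_lookup(ifname='ipvl0')[0]

    # Put on it the IP the kuryr daemon returned in step 3 and bring it up.
    ipr.addr('add', index=slave, address='10.10.0.5', prefixlen=24)
    ipr.link('set', index=slave, state='up')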

>
>>
>>
>> Overall I liked this approach given its simplicity over vlan-aware-vms.
>>
>> -VikasC
>>>
>>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan 
>>> wrote:



 Overview

 Kuryr proposes to address the issues of double encapsulation and
 exposure of containers as neutron entities when containers are running
 within VMs.

 As an alternative to the vlan-aware-vms and use of ovs within the VM, we
 propose to:

 -  Use allowed-address-pairs configuration for the VM neutron
 port

 -  Use IPVLAN for wiring the Containers within VM



 In this way:

 -  Achieve efficient data path to container within VM

 -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
 features to accelerate the data path (more details below)

 -  Mitigate the risk of vlan-aware-vms not making neutron in
 time

 -  Provide a solution that works on existing and previous
 openstack releases



 This work should be done in a way permitting the user to optionally
 select this feature.





 Required Changes

 The four main changes we have identified in the current kuryr codebase
 are as follows:

 · Introduce an option of enabling “IPVLAN in VM” use case. This
 can be achieved by using a config file option or possibly passing a command
 line argument. The IPVLAN master interface must also be identified.

 · If using “IPVLAN in VM” use case, Kuryr should no longer
 create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
 will create a new IPVLAN slave interface on top of the VM’s master 
 interface
 and pass this slave interface to the Container netns.

 · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
 identified so we can associate the additional IPVLAN addresses with the
 port. This can be achieved by querying Neutron’s show-port function and
 passing the VMs IP address.

 · If using “IPVLAN in VM” use case, Kuryr should associate the
 additional IPVLAN addresses with the VMs port. This can be achieved using
 Neutron’s allowed-address-pairs flag in the port-update function. We intend
 to make use of Kuryr’s existing IPAM functionality to request these IPs 
 from
 Neutron.



 Asks

 We wish to discuss the pros and cons.

 For example, containers exposure as proper neutron entities and the
 utility of neutron’s allowed-address-pairs is not yet well understood.



 We also wish to understand if this approach is acceptable for kuryr?

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 1:42 PM, Antoni Segura Puimedon
<celeb...@gmail.com> wrote:
> On Mon, Sep 12, 2016 at 1:29 PM, Coughlan, Ivan <ivan.cough...@intel.com> 
> wrote:
>>
>>
>> Overview
>>
>> Kuryr proposes to address the issues of double encapsulation and exposure of
>> containers as neutron entities when containers are running within VMs.
>>
>> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
>> propose to:
>>
>> -  Use allowed-address-pairs configuration for the VM neutron port
>>
>> -  Use IPVLAN for wiring the Containers within VM
>>
>>
>>
>> In this way:
>>
>> -  Achieve efficient data path to container within VM
>>
>> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>> features to accelerate the data path (more details below)
>>
>> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>>
>> -  Provide a solution that works on existing and previous openstack
>> releases
>>
>>
>>
>> This work should be done in a way permitting the user to optionally select
>> this feature.
>>
>>
>>
>>
>>
>> Required Changes
>>
>> The four main changes we have identified in the current kuryr codebase are
>> as follows:
>>
>> · Introduce an option of enabling “IPVLAN in VM” use case. This can
>> be achieved by using a config file option or possibly passing a command line
>> argument. The IPVLAN master interface must also be identified.
>>
>> · If using “IPVLAN in VM” use case, Kuryr should no longer create a
>> new port in Neutron or the associated VEth pairs. Instead, Kuryr will create
>> a new IPVLAN slave interface on top of the VM’s master interface and pass
>> this slave interface to the Container netns.
>>
>> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
>> identified so we can associate the additional IPVLAN addresses with the
>> port. This can be achieved by querying Neutron’s show-port function and
>> passing the VMs IP address.
>>
>> · If using “IPVLAN in VM” use case, Kuryr should associate the
>> additional IPVLAN addresses with the VMs port. This can be achieved using
>> Neutron’s allowed-address-pairs flag in the port-update function. We intend
>> to make use of Kuryr’s existing IPAM functionality to request these IPs from
>> Neutron.
>>
>>
>>
>> Asks
>>
>> We wish to discuss the pros and cons.
>>
>> For example, containers exposure as proper neutron entities and the utility
>> of neutron’s allowed-address-pairs is not yet well understood.
>>
>>
>>
>> We also wish to understand if this approach is acceptable for kuryr?

My vote is that it is acceptable to work on introducing such a mode to
kuryr-libnetwork (and later to kuryr-kubernetes).

Could we get a link to the current PoC and set a meeting for an
upstreaming plan?


>
> Thanks Ivan, adding discussion about this to the weekly IRC meeting. Maybe 
> it's
> a bit tight for all the participants to get comfortable enough with
> the specifics
> to take a decision today, but let's bring the topic to the table and give an
> answer during this week.
>
>>
>>
>>
>>
>>
>> EPA
>>
>> The Enhanced Platform Awareness initiative is a continuous program to enable
>> fine-tuning of the platform for virtualized network functions.
>>
>> This is done by exposing the processor and platform capabilities through the
>> management and orchestration layers.
>>
>> When a virtual network function is instantiated by an Enhanced Platform
>> Awareness enabled orchestrator, the application requirements can be more
>> efficiently matched with the platform capabilities.
>>
>> http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>>
>> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>>
>> https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo
>>
>>
>>
>>
>>
>> Regards,
>>
>> Ivan….
>>
>> --
>> Intel Research and Development Ireland Limited
>> Registered in Ireland
>> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>> Registered Number: 308263
>>
>> This e-mail and any attachments may contain confidential material for the
>> sole use of the intended recipient(s). Any review or distribution by others
>> is strictly prohibited. If you are not the intended recipient, please
>> contact the sender and delete all copies.

[openstack-dev] [kuryr] PTL Candidacy

2016-09-12 Thread Antoni Segura Puimedon
Hi Kuryrs!

First of all, I want to thank Gal for the hard work done in the past
two cycles as PTL of the project, and all of you who have
contributed in any and many ways, be it in documentation, code,
bugs, design, discussion or usage.

I announce my candidacy for the Kuryr PTL position [1] with the goal of
having Kuryr fulfill its mission of bringing OpenStack networking
and storage to the container world. The way in which I believe we
should do it is mainly by continuing to increase the community
participation in the day-to-day of the project's evolution.

In order to take Kuryr to the next stage, the goals that I'd like us
to focus on are:

- Make it easier for people to develop Kuryr with kuryr-kubernetes devstack.
- Increase testing for kuryr-libnetwork and start to have end-to-end
testing for kuryr-kubernetes.
- Now that we implemented a core contribution specialization in four groups:
+ fuxi,
+ kuryr,
+ kuryr-libnetwork,
+ kuryr-kubernetes.
We should get more people familiar enough with each project to be able
to promote them to core contributors.
- Increase the attention on bug tracking and release management.
- Reach Container-in-VM support.
- Participate more in upstream container orchestration engine communities.

I look forward to working with all of you to make Kuryr into what
all of us want and need from it.

[1] https://review.openstack.org/#/c/368825/

Antoni Segura Puimedon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 1:29 PM, Coughlan, Ivan  wrote:
>
>
> Overview
>
> Kuryr proposes to address the issues of double encapsulation and exposure of
> containers as neutron entities when containers are running within VMs.
>
> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
> propose to:
>
> -  Use allowed-address-pairs configuration for the VM neutron port
>
> -  Use IPVLAN for wiring the Containers within VM
>
>
>
> In this way:
>
> -  Achieve efficient data path to container within VM
>
> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
> features to accelerate the data path (more details below)
>
> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>
> -  Provide a solution that works on existing and previous openstack
> releases
>
>
>
> This work should be done in a way permitting the user to optionally select
> this feature.
>
>
>
>
>
> Required Changes
>
> The four main changes we have identified in the current kuryr codebase are
> as follows:
>
> · Introduce an option of enabling “IPVLAN in VM” use case. This can
> be achieved by using a config file option or possibly passing a command line
> argument. The IPVLAN master interface must also be identified.
>
> · If using “IPVLAN in VM” use case, Kuryr should no longer create a
> new port in Neutron or the associated VEth pairs. Instead, Kuryr will create
> a new IPVLAN slave interface on top of the VM’s master interface and pass
> this slave interface to the Container netns.
>
> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
> identified so we can associate the additional IPVLAN addresses with the
> port. This can be achieved by querying Neutron’s show-port function and
> passing the VMs IP address.
>
> · If using “IPVLAN in VM” use case, Kuryr should associate the
> additional IPVLAN addresses with the VMs port. This can be achieved using
> Neutron’s allowed-address-pairs flag in the port-update function. We intend
> to make use of Kuryr’s existing IPAM functionality to request these IPs from
> Neutron.
>
>
>
> Asks
>
> We wish to discuss the pros and cons.
>
> For example, containers exposure as proper neutron entities and the utility
> of neutron’s allowed-address-pairs is not yet well understood.
>
>
>
> We also wish to understand if this approach is acceptable for kuryr?

Thanks Ivan, adding discussion about this to the weekly IRC meeting. Maybe
it's a bit tight for all the participants to get comfortable enough with the
specifics to take a decision today, but let's bring the topic to the table
and give an answer during this week.

>
>
>
>
>
> EPA
>
> The Enhanced Platform Awareness initiative is a continuous program to enable
> fine-tuning of the platform for virtualized network functions.
>
> This is done by exposing the processor and platform capabilities through the
> management and orchestration layers.
>
> When a virtual network function is instantiated by an Enhanced Platform
> Awareness enabled orchestrator, the application requirements can be more
> efficiently matched with the platform capabilities.
>
> http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>
> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>
> https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo
>
>
>
>
>
> Regards,
>
> Ivan….
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Spec and devref placement

2016-09-12 Thread Antoni Segura Puimedon
Hi Kuryrs!

On September 5th's weekly IRC meeting Irena Berezovsky suggested that
we should take a decision regarding the location of specs and devrefs.

Currently we default to putting all the specs and devrefs for:
- Kuryr
- Kuryr-libnetwork
- Kuryr-kubernetes

to openstack/kuryr. Fuxi is still being integrated and keeps its own doc.

The three proposals that came up where:
a) All specs and devrefs to openstack/kuryr
b) Specs in openstack/kuryr but devrefs in each specific project,
i.e., the one that will end up with the implementation code.
c) Both specs and devrefs in each separate Kuryr project.

I would like to advocate for option (b). It makes things easy for when
specs involve multiple kuryr pieces and, at the same time, it keeps
development information in the place where you'd expect, close to the
code.

Please, weigh on this issue here in the ML or in the weekly IRC
meeting today. The idea is to reach a decision by next week's weekly
IRC meeting and then write it in each subproject's "how to contribute"

See you later in the weekly IRC,

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] September 12th meeting agenda

2016-09-12 Thread Antoni Segura Puimedon
Hi Kuryrs!

Sorry I didn't get to post it earlier, here goes today's meeting
agenda. As always, feel free to add items or bring them up during the
open discussion, I'll be checking the agenda for updates just before
the meeting starts at 14:00 UTC.

See you in the meeting,

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr][Dragonflow] - PTL Non Candacy

2016-09-12 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 10:31 AM, Gal Sagie  wrote:
> Hello all,
>
> I would like to announce that i will not be running for projects Kuryr or
> Dragonflow
> PTL.
>
> I believe that both projects show great value and progress for the
> time they have existed,
> mostly thanks to the great communities actively working on both of them.
>
> I also strongly believe that the PTL position is one that should be
> alternating given there is
> a good candidate, and I believe there are some great candidates for both
> projects.
>
> I will of course still stay involved in both projects and excited to see
> what the next
> release is going to bring.
>
> Thanks to everyone that closely helped and contributed, and let's keep on
> making
> OpenStack great together.

Thanks to you for all the hard work and dedication to making Kuryr a welcoming
community!

>
> Gal.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Kuryr-Kubernetes devstack proposal for testing

2016-09-06 Thread Antoni Segura Puimedon
Hi Kuryrs!

Yesterday in the meeting we discussed the need to design a way for the
functional testing of the Kuryr-Kubernetes integration to happen. I studied
the possibilities today and came up with the following proposal (after the
proposal I'll put more detailed explanations).

Prerequisites
=============
Usual services:
* Neutron and its agents (LBaaSv2 included)
* Keystone

Devstack plugin
===============

* Installs Docker just like kuryr-libnetwork's plugin
* Installs Docker compose
* Pulls gcr.io/google_containers/hyperkube-amd64:v1.3.6
* Pulls quay.io/coreos/etcd:v3.0.7
* Runs in --net=host:
  * coreos/etcd
  * google_containers/hyperkube /setup-files.sh
  * google_containers/hyperkube /hyperkube apiserver
  * google_containers/hyperkube /hyperkube controller-manager
  * google_containers/hyperkube /hyperkube scheduler
  * google_containers/hyperkube /hyperkube kubelet with the Kuryr CNI driver
mounted as a volume
* Starts the Kuryr Watcher pointing to the apiserver as a devstack service

After the steps above, we can use a python kubernetes client to run the tests.
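
For illustration, a smoke test could drive the insecure apiserver endpoint
directly; a minimal sketch with requests (the port matches the apiserver
flags below and the pod definition is a placeholder)::

    import requests

    API = 'http://127.0.0.1:8080'

    pod = {
        'apiVersion': 'v1',
        'kind': 'Pod',
        'metadata': {'name': 'kuryr-smoke'},
        'spec': {'containers': [{'name': 'busybox',
                                 'image': 'busybox',
                                 'command': ['sleep', '3600']}]},
    }

    resp = requests.post(API + '/api/v1/namespaces/default/pods', json=pod)
    resp.raise_for_status()

    # Once kubelet and the Kuryr CNI driver have plumbed the pod, the
    # reported podIP can be cross-checked against the Neutron port.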

apiserver
---------
This is what the Kuryr watcher connects to. Example parameters for running the
hyperkube container::

--service-cluster-ip-range=10.0.0.1/24 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--etcd-servers=http://${LOCAL_IP}:2379 \

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
\
--client-ca-file=/srv/kubernetes/ca.crt \
--basic-auth-file=/srv/kubernetes/basic_auth.csv \
--min-request-timeout=300 \
--tls-cert-file=/srv/kubernetes/server.cert \
--tls-private-key-file=/srv/kubernetes/server.key \
--token-auth-file=/srv/kubernetes/known_tokens.csv \
--allow-privileged=true \
--v=2 \
--logtostderr=true

controller-manager
------------------
It will be running the plugin for LoadBalancer service type in the future.
Example parameters for running the hyperkube container::

--master=127.0.0.1:8080 \
--service-account-private-key-file=/srv/kubernetes/server.key \
--root-ca-file=/srv/kubernetes/ca.crt \
--min-resync-period=3m \
--v=2 \
--logtostderr=true

scheduler
---------
To do the hard job of scheduling to the only kubelet. Example parameters for
running the hyperkube container::

  --master=127.0.0.1:8080 \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --min-resync-period=3m \
  --v=2 \
  --logtostderr=true

Kubelet
-------
It needs to run in privileged mode and with the following volumes::

--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--volume=/var/log/kuryr:/var/log/kuryr \
--net=host \
--privileged=true \
--pid=host

It also needs the CNI driver, which should be mounted as a volume from the
current /opt/stack/kuryr-kubernetes source. However, the container is not
likely to have Python, so I propose to build a CNI binary (e.g. with
PyInstaller) and mount just the binary.

The example parameters to run it would be::
--allow-privileged=true \  # this we can probably omit for tests
--api-servers="http://127.0.0.1:8080; \
--v=2 \
--address='0.0.0.0' \
--enable-server \
--containerized \
--network-plugin=cni

Why Hyperkube and compose instead of minikube?
==

Hyperkube provides us with more flexibility to run the building blocks that we
need for the integration, like not running kube-proxy. It also can run easily
without modification in the OpenStack CI jenkins worker with little resources.

Minikube spawns a Virtual Machine using Docker Machine. This means that it
would need more resources and make its Keystone/Neutron usage more complicated.
It could possibly be hacked to use the Docker Machine generic SSH driver and
point it to the same machine, but I find that to be too much trouble compared
to the simplicity of the solution above.


Why Hyperkube and compose instead of just running kubernetes from src
=====================================================================
Building Kubernetes would take a large amount of resources (~8GiB) and more
time than pulling the hyperkube containers. However, this is a decision we may
want to revisit once we start contributing the kuryr cloud provider to
Kubernetes (for the loadbalancer service type).

---
---
---

Please, all Kuryrs, feel welcome to dispute the proposal and claims above and
to propose alternatives. After a bit of discussion we can propose a blueprint
and start implementing.


Regards,

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [kuryr] Sept. 1st Kuryr-Kubernetes videoconf discussion meeting points

2016-09-05 Thread Antoni Segura Puimedon
Hi Kuryrs,

Last Thursday, we had a very productive Kuryr-Kubernetes integration
meeting. I want to give a big thanks to all the people that joined the
meeting for how well they prepared it and for the good points that were raised.

We used some slides to guide the meeting points[1]. The key takeaways for
me, in no particular order, were:

# General
* The need for a multitenancy spec
* Using device owner, device id for resource tracking. This can come in
very handy for cleaning up resources, as well as when deciding which
component takes care of what.
* Ilya Chukhnakov proposed an active-active HA configuration that I look
forward to seeing a blueprint/spec for :-)

# CNI
* Moving Neutron port creation to the CNI driver. This will allow us to
react faster in Kubernetes usage scenarios with high Pod churn.
* Proposal to simplify the configuration of the worker nodes by storing the
configuration preferably in K8s or, if that is not possible, directly in
etcd.
* We noted the possibility of future optimizations by having worker nodes
pull pre-allocated Neutron ports. However, this is something we'll only get
into considering once we have performance numbers, as it makes the
deployment less flexible.

# Container-in-VM
* We should check with Magnum about the communication possibilities going
forward for the worker nodes to be able to talk to Neutron directly (see
first point in #CNI).
* Check reference implementation of Neutron trunk/sub ports at the Host
side to spot possible slowdowns (like Linux Bridge) that could negate part
of the advantage of using kuryr.
* It was discussed to make Container-in-VM configurable to support
different deployment scenarios.
* Ivan Coughlan to send Address pairs proposal for one of those scenarios.

# Service support
* The need to add LBaaSv2 support now that Neutron dropped the
long-deprecated LBaaSv1.
* The need for studying the options for UDP load balancing with LBaaSv2 and
octavia: how far the API is from supporting it, and which vendors could
easily support it (see the sketch below). The new distributed OVN load
balancer was mentioned.
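
As a reference for that study, creating the LBaaSv2 resources for a service
could look roughly like this with python-neutronclient (a sketch; the subnet
id is a placeholder, and the listener protocol choices are precisely where
the UDP gap shows up)::

    from neutronclient.v2_0 import client as n_client

    neutron = n_client.Client(username='admin', password='secret',
                              tenant_name='admin',
                              auth_url='http://127.0.0.1:5000/v2.0')

    lb = neutron.create_loadbalancer({'loadbalancer': {
        'name': 'svc-frontend',
        'vip_subnet_id': 'SERVICE_SUBNET_ID'}})['loadbalancer']

    # LBaaSv2 listeners speak TCP/HTTP(S); there is no UDP protocol yet.
    listener = neutron.create_listener({'listener': {
        'loadbalancer_id': lb['id'],
        'protocol': 'TCP',
        'protocol_port': 80}})['listener']

    pool = neutron.create_lbaas_pool({'pool': {
        'listener_id': listener['id'],
        'protocol': 'TCP',
        'lb_algorithm': 'ROUND_ROBIN'}})['pool']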

# Python2/Python3
* The Python3 asyncio PoC will continue its upstreaming process.
* Ilya Chukhnakov will make an eventlet Py2/Py3 PoC that offers the same in
the next two weeks. I will be reviewing.
* When we reach feature parity, we'll evaluate the implementations; there is
agreement on moving to the Py2/Py3 eventlet solution to lower the barrier of
adoption and due to the maturity of the associated libraries.

That is all. If any of the people present see that I forgot something, I'll
be very thankful if you add to the above points.

Regards,

Toni

[1]
https://docs.google.com/presentation/d/1A9MG2EvZBtf2sJFcuBzuv0GxYpCyfAyYzNkkEgJuPVA/edit?usp=sharing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Meeting agenda for Sept 5th IRC meeting

2016-09-05 Thread Antoni Segura Puimedon
Hi Kuryrs,

Sorry I didn't get to post this on Friday due to traveling. Here you can
find the weekly IRC meeting agenda for today:

https://wiki.openstack.org/wiki/Meetings/Kuryr#Meeting_September_5th.2C_2016

Remember! Today we meet at the regular 14:00 UTC!

Antoni Segura Puimedon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Python 3 usage and kuryr-kubernetes

2016-08-31 Thread Antoni Segura Puimedon
On Wed, Aug 31, 2016 at 9:23 AM, Jaume Devesa <devv...@gmail.com> wrote:

> Hi Toni,
>
>
> I am +1 on a). Kuryr-kubernetes is an asynchronous service. We are not
> talking about some binary/unicode mismatch or list/generators return type
> that can be solved with 'six' library or some 'utils' module.
>


> If we go to python2 that will force us to re-do all the core part of the
> project when we'll move to python3.
>

No question about that. I feel like going with Python2 now potentially
means that OSt projects will not be using py3-only syntax for a long while.


>
> Is there a policy that prevents to run some services on python2 and some
> on python3 in distros? What's the reason behind?
>

I think right now nobody has a policy yet. But there's probably three
options for distros:

x) Do not use py3 until they can support just py3 OSt
y) Allow different stack per OSt service with the libraries and clients
supporting both py2 and py3 stacks. Services are supported for either py2
or py3. In this case I assume that once a non-library/client project has
mature py3 support, it can deprecate py2 after a cycle (which I think is
a good incentive for developers to see a good path forward to only have to
support one stack, since now supporting py3 means you may end up having to
support two stacks ad infinitum).
z) Like (y), but for projects that have py3 mature, support both py2 and
py3.

I support (y), but it would be nice if the community had a
recommendation for OSt upstream and its downstreams.



>
>
> Regards,
>
> Jaume Devesa
> Software Engineer @ Midokura
>
> On Aug 30 2016, at 4:44 pm, Antoni Segura Puimedon <celeb...@gmail.com>
> wrote:
>
>> Hi fellow kuryrs!
>>
>> This email is gonna be a bit long, so I'll put it in parts.
>>
>> Kubernetes integration components
>> =================================
>>
>> As you know, we are now in the process of upstreaming the Kuryr
>> Kubernetes PoC that the Kuryr team at Midokura did. This PoC upstreaming
>> effort has two main parts:
>>
>> Kuryr watcher: Based on Python3 asyncio, it connects to the ?watch=true
>> Kubernetes resource endpoints, then passes the seen events to translators
>> that end up calling Neutron. With the Neutron resource information returned
>> by the translators, the watching coroutines update the resource that
>> originated the event.
>>
>> Kuryr CNI: Py2 and Py3 compatible. It is called by Kubernetes' Kubelet
>> with the noop container so that the CNI driver does the network plumbing
>> for it. Basically we use openstack/kuryr binding code to bind Pod veths to
>> Neutron ports.
>>
>> Upstream Deployment design
>> ==========================
>>
>> In the Kuryr-Kubernetes integration vision, Kuryr CNI is installed
>> wherever Kubelet is and the Kuryr watcher (or watchers once we support HA)
>> runs in a container somewhere that can access the Kubernetes, Neutron and
>> Keystone APIs (which does not need to be able to access the watcher host on
>> anything else that established connections). The idea behind allowing it to
>> be in a non-privileged container somewhere is that in this way you do not
>> need to make Neutron/Keystone accessible from the Kubernetes worker nodes,
>> just like for a lot of Nova compute deployments (obviously, depending on
>> you networking vendor, you have rabbitmq agent access to Neutron).
>>
>> If one does not need the extra isolation for the Kuryr Watcher, the
>> Watcher containers could even be started by Kubernetes and the CNI driver
>> would just let the watcher container in the Host networking instead of on
>> the Overlay, so Kubernetes would manage the integration deployment.
>>
>> OpenStack distributions: when the rubber meets the road
>> =======================================================
>>
>> If the OpenStack distros, however, would prefer not to run Kuryr Watcher
>> containerized or they want to, as they probably should, build their own
>> container (rather than the upstream kuryr/kubernetes one in dockerhub that
>> is based on alpine linux), they would need to have Python3.5 support. I
>> understand that at the moment from the most popular OpenStack distros, only
>> one has Python 3.5 supported.
>>
>> You can imagine where this is heading... These are the options that I can
>> see:
>>
>> a) Work with the OpenStack distributions to ensure python3.5 support is
>> reached soon for Kuryr and its dependencies (some listed below):
>>
>>- babel
>>- oslo.concurrency
>>- oslo.config
>>- oslo.log
>>- oslo.utils
>>- pbr
>>- pyroute2
>> 

[openstack-dev] OpenStack Kuryr IRC meetings change

2016-08-31 Thread Antoni Segura Puimedon
Hi Kuryrs!

Infra merged the change that was agreed in this week's IRC meeting [1]

Starting immediately, all subsequent OpenStack Kuryr IRC meetings will be
held Mondays at 14:00 UTC at #openstack-meeting-4

We'll also try to publish an agenda by Friday at the latest on the wiki
[2]. Everybody is welcome to add items to it (though if there are too many,
some may get pushed a week).

Have fun coding and reviewing!

Antoni Segura Puimedon

PS: There is a nice .ics that you can use to add the meetings to your
calendar[3]

[1] https://review.openstack.org/362852
[2] https://wiki.openstack.org/wiki/Meetings/Kuryr
[3] http://eavesdrop.openstack.org/#Kuryr_Project_Meeting
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Python 3 usage and kuryr-kubernetes

2016-08-30 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

This email is gonna be a bit long, so I'll put it in parts.

Kubernetes integration components
=================================

As you know, we are now in the process of upstreaming the Kuryr Kubernetes
PoC that the Kuryr team at Midokura did. This PoC upstreaming effort has
two main parts:

Kuryr watcher: Based on Python3 asyncio, it connects to the ?watch=true
Kubernetes resource endpoints, then passes the seen events to translators
that end up calling Neutron. With the Neutron resource information returned
by the translators, the watching coroutines update the resource that
originated the event.
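
For a flavour of that pattern, here is a minimal watcher sketch (shown with
aiohttp for brevity; the PoC uses its own small asyncio HTTP helper, and the
endpoint is a placeholder)::

    import asyncio
    import json

    import aiohttp


    async def watch(endpoint):
        async with aiohttp.ClientSession() as session:
            async with session.get(endpoint) as resp:
                async for line in resp.content:  # one JSON event per line
                    if not line.strip():
                        continue
                    event = json.loads(line.decode())
                    # A translator would turn this into Neutron calls and
                    # annotate the originating resource with the result.
                    print(event['type'],
                          event['object']['metadata'].get('name'))


    loop = asyncio.get_event_loop()
    loop.run_until_complete(
        watch('http://127.0.0.1:8080/api/v1/pods?watch=true'))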

Kuryr CNI: Py2 and Py3 compatible. It is called by Kubernetes' Kubelet with
the noop container so that the CNI driver does the network plumbing for it.
Basically we use openstack/kuryr binding code to bind Pod veths to Neutron
ports.

Upstream Deployment design
==========================

In the Kuryr-Kubernetes integration vision, Kuryr CNI is installed wherever
Kubelet is and the Kuryr watcher (or watchers once we support HA) runs in a
container somewhere that can access the Kubernetes, Neutron and Keystone
APIs (which does not need to be able to access the watcher host on anything
other than established connections). The idea behind allowing it to be in a
non-privileged container somewhere is that in this way you do not need to
make Neutron/Keystone accessible from the Kubernetes worker nodes, just
like for a lot of Nova compute deployments (obviously, depending on your
networking vendor, you have rabbitmq agent access to Neutron).

If one does not need the extra isolation for the Kuryr Watcher, the Watcher
containers could even be started by Kubernetes and the CNI driver would
just leave the watcher container in the Host networking instead of on the
Overlay, so Kubernetes would manage the integration deployment.

OpenStack distributions: when the rubber meets the road
=======================================================

If the OpenStack distros, however, would prefer not to run Kuryr Watcher
containerized or they want to, as they probably should, build their own
container (rather than the upstream kuryr/kubernetes one in dockerhub that
is based on alpine linux), they would need to have Python3.5 support. I
understand that, at the moment, only one of the most popular OpenStack
distros supports Python 3.5.

You can imagine where this is heading... These are the options that I can
see:

a) Work with the OpenStack distributions to ensure python3.5 support is
reached soon for Kuryr and its dependencies (some listed below):

   - babel
   - oslo.concurrency
   - oslo.config
   - oslo.log
   - oslo.utils
   - pbr
   - pyroute2
   - python-neutronclient
   - python-keystoneclient

This also implies that distros should adopt a policy about having OpenStack
services running in Python2, some in Python3, as I think the best is to
have each project move at its own speed (within reason).

b) As Ilya Chukhnakov from Mirantis proposed, drop Python3 for now and
reimplement it with python-requests and eventlet. He'll work on a PoC to
see its feasibility and how it compares to the asyncio based one.
Personal position
=================

I see this as a good opportunity for the OpenStack community at large to
start having Python3-first (and even Python3-only) services and allow
OpenStack projects to take full advantage of all the good things Python3
has to offer and move forward with the rest of the Python community.

There have been some efforts in the past in some projects [1][2], but it
seems implementation was deferred indefinitely, probably due to the same
distribution issue that we face now.

In light of the recent discussions in this mailing list and the decision
taken by the Technical Committee [3] about alternative languages, I think
it would be very good for the community to set an official plan and
incentivize the projects to move to Python3 in future releases
(unfortunately, library projects like clients and oslo will most likely
have to keep python2 for longer, but it is probably for the best).

While such position is not taken, I would like to hear what the rest of the
Kuryr (and the rest of OpenStack) has to say about the matter and we should
at least evaluate the possibility of having to do the option (b) above.

Sorry for the long wall of text; I'm looking forward to discussing options
(a) and (b) both in this thread and in the next Kuryr weekly meeting this
coming Monday,

Toni

[1]
https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio#What.27s_wrong_with_eventlet.3F
[2] https://blueprints.launchpad.net/oslo.messaging/+spec/asyncio-executor
[3]
http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-02-20.01.log.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] openstack/kuryr-libnetwork dropping 'common' subpackage

2016-08-30 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

There is a proposal for dropping the 'common' openstack/kuryr-libnetwork
package and moving its pieces into the parent kuryr_libnetwork package [1].

The idea behind 'common' was that it would serve the purpose currently
fulfilled by the openstack/kuryr repository, i.e., be the place for the
utilities/config used by all the Kuryr integrations (be it libnetwork,
k8s or fuxi). Since we were advised and decided to split
openstack/kuryr into:

- openstack/kuryr: Kuryr library for common functionality and configurations
- openstack/kuryr-libnetwork: For Docker libnetwork specific code

The meaningful options for what to do about the 'common' package are two:

- Repurpose it to contain common code for the different kuryr-libnetwork
drivers and refactor kuryr into plugins (IPAM, remote driver).
- Move the common modules to the parent package.

Personally, I could see a nice clean separation with the former, but I have
to say that at the current stage, after all the friction we got from the
repository split, the best option in my mind is to go with the latter.

[1] https://review.openstack.org/#/c/361567/

Antoni Segura Puimedon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Proposing vikasc for kuryr-libnetwork core

2016-08-16 Thread Antoni Segura Puimedon
Hi Kuryrs,

I would like to propose Vikas Choudhary for the core team for the
kuryr-libnetwork subproject. Vikas has kept submitting patches and reviews
at a very good pace over the past cycle and I believe he will help a lot to
move kuryr forward.

I would also like to propose him for the core team for the kuryr-kubernetes
subproject, since he has experience in the day-to-day work with kubernetes
and can help with the review and refactoring of the prototype upstreaming.

Regards,

Antoni Segura Puimedon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][kolla] keystone v3 support status and plans?

2016-08-16 Thread Antoni Segura Puimedon
On Tue, Aug 16, 2016 at 2:56 PM, Steven Dake (stdake) 
wrote:

> Hey kuryrians,
>
> Kolla has a kuryr review in our queue and it's looking really solid from
> Hui Kang.  The last key problem blocking the merge (and support of Kuryr in
> Kolla) is that Kolla only supports keystone v3 (and later when that comes
> from upstream).  As a result we are unable to merge kuryr because we can't
> validate it.  The work on the kolla side is about 98% done (need a few
> keystone v3 config options).  Wondering if keystone v3 will magically land
> in this cycle?
>

We are now trying to make the first release of kuryr-lib (openstack/kuryr)
and of kuryr-libnetwork (openstack/kuryr-libnetwork). Part of the release
is the keystone v3 support. It should be merged by next week.

Thanks a lot for reaching out!


> It's not all that challenging, but I realize the kuryr team may have other
> things that are higher priority on their plates.
>
> FWIW lack of keystone v3 support will be an adoption barrier for kuryr
> beyond kolla as well.
>
> Comments welcome.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-16 Thread Antoni Segura Puimedon
On Tue, Aug 16, 2016 at 6:29 AM, Liping Mao (limao)  wrote:

> Hi Kuryr team,
>
> I just noticed that this can be fixed in the kuryr bind code.
> I submitted a bug to track this:
> https://bugs.launchpad.net/kuryr-libnetwork/+bug/1613528
>
> And patch sets are here:
> https://review.openstack.org/#/c/355712/
> https://review.openstack.org/#/c/355714/


Thanks a lot Liping Mao! That's a nice way to solve it.
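
In the same spirit as those patches (a sketch, not their literal content;
the names and values are placeholders), the binding code can set the MTU of
the container-side interface with pyroute2, taking the value from the
Neutron network's 'mtu' field::

    from pyroute2 import IPRoute

    def set_interface_mtu(ifname, network_mtu):
        # Called once the veth pair exists; network_mtu would come from
        # the Neutron network the container is attached to.
        ipr = IPRoute()
        idx = ipr.link_lookup(ifname=ifname)[0]
        ipr.link('set', index=idx, mtu=network_mtu)

    set_interface_mtu('tap-container', 1450)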


>
>
> Thanks.
>
> Regards,
> Liping Mao
>
> On 16/8/15 下午11:20, "Liping Mao (limao)"  wrote:
>
> >Hi Kuryr team,
> >
> >I opened an issue in docker-libnetwork:
> >https://github.com/docker/libnetwork/issues/1390
> >
> >Appreciate for any idea or comments. Thanks.
> >
> >Regards,
> >Liping Mao
> >
> >
> >On 16/8/12 下午4:08, "Liping Mao (limao)"  wrote:
> >
> >>Hi Kuryr team,
> >>
> >>When a network in neutron uses an overlay for VMs,
> >>neutron uses a DHCP option to control the VM interface MTU,
> >>but for docker, the IP address is not obtained via DHCP,
> >>so the proper MTU will not be set up in the container.
> >>
> >>Two work-arounds come to mind for now:
> >>1. Set the default MTU in docker to 1450 or less.
> >>2. Manually configure the MTU after the container starts up.
> >>
> >>But neither of these is good. The ideal way, in my mind,
> >>is: when libnetwork calls the remote driver to create a network,
> >>kuryr creates the neutron network and returns the proper MTU to
> >>libnetwork, and docker uses this MTU for this network. But the docker
> >>remote driver does not support this.
> >>
> >>Or maybe let the user configure the MTU in the remote driver,
> >>a little similar to the overlay driver:
> >>https://github.com/docker/libnetwork/pull/1349
> >>
> >>But for now, it seems the remote driver will not do similar things.
> >>
> >>Any idea to solve this problem? Thanks.
> >>
> >>
> >>Regards,
> >>Liping Mao
> >>
> >>
> >>__
> ___
> >>_
> >>OpenStack Development Mailing List (not for usage questions)
> >>Unsubscribe:
> >>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >___
> ___
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-04 Thread Antoni Segura Puimedon
On Wed, Aug 3, 2016 at 11:00 PM, Heidi Joy Tretheway  wrote:

> *Steve Hardy wrote:*
> *"I have one question regarding the review process for the logos when
> they are drafted? In the cases where projects have their existing community
> generated logos I can imagine there will be a preference to stick with
> something that’s somehow derived from their existing logo…" *
>
> HJ: You’re right, Steve. That’s why every project that had an existing
> community-illustrated logo had first option to keep that mascot in the
> revised logo, and in most cases they chose to do this. So Oslo’s moose,
> Senlin’s forest, Tacker’s squid, and Cloudkitty’s cat (among others) will
> still be the prominent feature in their logo.
>
> *Steve Hardy wrote:*
> *“In cases where a new logo is produced I'm sure community enthusiasm and
> acceptance will be greater if team members have played a part in the logo
> design process or at least provided some feedback prior to the designs
> being declared final?”*
>
> HJ: Absolutely. That’s why we encouraged project teams to work together to
> select their mascot. I received dozens of team etherpads and Condorcet
> polls from PTLs to show how the team decided their mascot candidates. The
> PTLs confirmed their winners for the list you see on
> http://www.openstack.org/project-mascots. You can also see an example of
> an illustration style there, and we expect to have the first five logos
> (with the final illustration style) in hand shortly.
>
> It’s going to be a major effort to complete 50 illustrations x 3 logo
> variations prior to Barcelona, but I think we can make it. That said, it’s
> not possible to do several rounds of revisions with each project team and
> the illustrators. What I’ve been doing instead is listening carefully to
> project team requests and pulling photos to share with the illustrators
> that best show what the teams intend. I’m happy to share that with anyone
> who asks.
>

It would be really awesome if, in true OSt and OSS spirit, this work
happened in an OpenStack repository with an open, text-based format like
SVG. This way people could contribute and review.


> Heidi Joy
> __
> Heidi Joy Tretheway
> Senior Marketing Manager, OpenStack Foundation
> 503 816 9769  |  skype: heidi.tretheway
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Kuryr IRC Meeting time

2016-07-26 Thread Antoni Segura Puimedon
On Tue, Jul 26, 2016 at 12:53 PM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

> +1
> On 26 Jul 2016 15:45, "Liping Mao (limao)"  wrote:
>
>> Hi Team,
>>
>> Currently, kuryr team meeting time is as following:
>>
>> Every two weeks (on even weeks) on Tuesday at 0300 UTC in
>> #openstack-meeting-4
>>
>>
>> Every two weeks (on odd weeks) on Monday at 1400 UTC in
>> #openstack-meeting-4
>>
>>
>>
>> But it seems like fewer people join the meeting on Tuesday at 0300
>> UTC.
>> As you can see in [1], we did not manage to start the meeting at 0300 UTC
>> the last two times.
>> IMO, if the time is not suitable for the attendees, what about changing
>> the meeting time (0300 UTC)?
>>
>
Sounds good to me. Could the people who used to attend it (and those who
would like to) propose times that suit them?


>
>> Thanks.
>> [1]http://eavesdrop.openstack.org/meetings/kuryr/2016/
>>
>> Regards,
>> Liping Mao
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Antoni Segura Puimedon
On Mon, Jul 4, 2016 at 11:59 AM, Julien Danjou <jul...@danjou.info> wrote:

> On Mon, Jul 04 2016, Antoni Segura Puimedon wrote:
>
> > for the neutron clients now we use a thread executor from the asyncio
> loop
> > any time
> > we do neutron client request.
>
> It's a good trade-off, but it won't be as good as going full on async
> I/O. :)
>

Sure, if neutronclient doesn't grow async support we'll most likely add
the Neutron calls we need in our API watcher using that aio lib I linked.
Using the thread executor is more of a workaround than a definitive
solution.
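To make the workaround concrete, here is a minimal sketch (Python 3.5
syntax; not the actual kuryr code, and the credentials and port filter are
placeholders) of driving a blocking python-neutronclient call from the
asyncio loop's default thread executor:

    import asyncio
    import functools

    from neutronclient.v2_0 import client as neutron_client


    async def list_ports(loop, neutron, **filters):
        # run_in_executor(None, ...) uses the loop's default thread pool,
        # so the blocking HTTP round-trip does not stall the event loop.
        return await loop.run_in_executor(
            None, functools.partial(neutron.list_ports, **filters))


    def main():
        # Placeholder credentials, for illustration only.
        neutron = neutron_client.Client(
            username='admin', password='secret', tenant_name='admin',
            auth_url='http://127.0.0.1:5000/v2.0')
        loop = asyncio.get_event_loop()
        print(loop.run_until_complete(
            list_ports(loop, neutron, device_owner='compute:kuryr')))


    if __name__ == '__main__':
        main()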


>
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Antoni Segura Puimedon
On Mon, Jul 4, 2016 at 11:16 AM, Julien Danjou  wrote:

> On Sun, Jun 26 2016, Denis Makogon wrote:
>
> > I know that some work in progress to bring Python 3.4 compatibility to
> > backend services and it is kinda hard question to answer, but i'd like to
> > know if there are any plans to support asynchronous HTTP API client in
> the
> > nearest future using aiohttp [1] (PEP-3156)?
>

We were not sure if aiohttp would be taken in as a requirement, so in our
kuryr-kubernetes prototype we wrote our own asyncio HTTP request library
(it only does GET for now) [2].
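For the curious, a GET over plain asyncio streams needs surprisingly
little code. Here is a hedged sketch in the spirit of that library (see
[2] below), not the actual implementation, and it ignores chunked
transfer encoding:

    import asyncio


    async def get(host, path='/', port=80):
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(('GET %s HTTP/1.1\r\n'
                      'Host: %s\r\n'
                      'Connection: close\r\n\r\n' % (path, host)).encode())
        await writer.drain()
        status = await reader.readline()   # e.g. b'HTTP/1.1 200 OK\r\n'
        while (await reader.readline()) != b'\r\n':
            pass                           # skip the response headers
        body = await reader.read()         # rest of the stream is the body
        writer.close()
        return status, body


    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        print(loop.run_until_complete(get('example.org'))[0])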




>
> I don't think there is unfortunately. Most clients now relies on
> `requests', and unfortunately it's not async not it seems ready to be
> last time I checked.
>

for the neutron clients now we use a thread executor from the asyncio loop
any time
we do neutron client request.

[2] https://github.com/midonet/kuryr/blob/k8s/kuryr/raven/aio/methods.py

>
> --
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-04 Thread Antoni Segura Puimedon
On Fri, Jul 1, 2016 at 7:58 PM, Doug Hellmann  wrote:

> Excerpts from Jeremy Stanley's message of 2016-07-01 15:05:30 +:
> > On 2016-07-01 08:26:13 -0500 (-0500), Monty Taylor wrote:
> > [...]
> > > Check with Doug Hellman about namespaces. We used to use them in some
> > > oslo things and had to step away from them because of some pretty weird
> > > and horrible breakage issues.
> > [...]
> >
> > Or read the associated Oslo spec from when that was done:
> >
> >  https://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html
> >
> >
>
> Yes, please don't use python namespaces. It's a cool feature, as you
> say, but the setuptools implementation available for Python 2 has some
> buggy edge cases that we hit on a regular basis before moving back to
> regular packages. It might be something we could look into again when
> we're running only on Python 3, since at that point the feature is built
> into the language.
>

For kuryr-kubernetes we target only Python 3. I wonder if we could move
kuryr-libnetwork to be Python 3 only and, if that were the case, how that
would alter the situation for namespace packages.
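On Python 3 the problematic setuptools machinery is not needed at all:
PEP 420 implicit namespace packages merge any 'kuryr' directories found on
sys.path, as long as none of them ships an __init__.py. A self-contained
sketch of the behaviour (the file and module names are made up):

    import os
    import sys
    import tempfile

    # Fake two separately installed distributions contributing to 'kuryr'.
    root = tempfile.mkdtemp()
    for path in ('dist_a/kuryr/lib', 'dist_b/kuryr/libnetwork'):
        os.makedirs(os.path.join(root, path))
    with open(os.path.join(root, 'dist_a/kuryr/lib/binding.py'), 'w') as f:
        f.write('NAME = "binding"\n')
    with open(os.path.join(root, 'dist_b/kuryr/libnetwork/driver.py'), 'w') as f:
        f.write('NAME = "driver"\n')

    # No __init__.py anywhere, which is what makes 'kuryr' an implicit
    # namespace package spanning both directories.
    sys.path[:0] = [os.path.join(root, 'dist_a'), os.path.join(root, 'dist_b')]

    from kuryr.lib import binding
    from kuryr.libnetwork import driver
    print(binding.NAME, driver.NAME)   # -> binding driver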


>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Antoni Segura Puimedon
On Fri, Jul 1, 2016 at 3:26 PM, Monty Taylor <mord...@inaugust.com> wrote:

> On 07/01/2016 08:10 AM, Antoni Segura Puimedon wrote:
> > Hi fellow kuryrs!
> >
> > In order to proceed with the split of kuryr into a main lib and its
> kuryr
> > libnetwork component, we've cloned the contents of openstack/kuryr over
> to
> > openstack/kuryr-libnetwork.
> >
> > The idea is that after this, the patches that will go to openstack/kuryr
> > will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> > release of the common parts so that openstack/kuryr-libnetwork can start
> > using it.
> >
> > I propose that we use python namespaces and the current common code in
> > kuryr is moved to:
> > kuryr/lib/
>
> Check with Doug Hellman about namespaces. We used to use them in some
> oslo things and had to step away from them because of some pretty weird
> and horrible breakage issues.
>

Thanks for the warning. It's a very cool-looking feature that is
underused, so there is probably a nasty reason for that. I'll ask.



>
> >
> > which openstack/kuryr-libnetwork would import like so:
> >
> > from kuryr.lib import binding
> >
> > So, right now, patches in review that are for the Docker IPAM or remote
> > driver should be moved to openstack/kuryr-libnetwork, and soon we should
> > make openstack/kuryr-libnetwork add kuryr-lib to its requirements.
> >
> > Regards,
> >
> > Toni
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Antoni Segura Puimedon
On Fri, Jul 1, 2016 at 3:10 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

> Hi fellow kuryrs!
>
> In order to proceed with the split of kuryr into a main lib and its kuryr
> libnetwork component, we've cloned the contents of openstack/kuryr over to
> openstack/kuryr-libnetwork.
>
> The idea is that after this, the patches that will go to openstack/kuryr
> will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
> release of the common parts so that openstack/kuryr-libnetwork can start
> using it.
>
> I propose that we use python namespaces and the current common code in
> kuryr is moved to:
> kuryr/lib/
>
>
> which openstack/kuryr-libnetwork would import like so:
>
> from kuryr.lib import binding
>
> So, right now, patches in review that are for the Docker IPAM or remote
> driver should be moved to openstack/kuryr-libnetwork, and soon we should
> make openstack/kuryr-libnetwork add kuryr-lib to its requirements.
>

We should be moving the gates too.


>
> Regards,
>
> Toni
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] kuryr-libnetwork split

2016-07-01 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

In order to proceed with the split of kuryr into a main lib and its kuryr
libnetwork component, we've cloned the contents of openstack/kuryr over to
openstack/kuryr-libnetwork.

The idea is that after this, the patches that will go to openstack/kuryr
will be to trim out the kuryr/kuryr-libnetwork specific parts and make a
release of the common parts so that openstack/kuryr-libnetwork can start
using it.

I propose that we use python namespaces and the current common code in
kuryr is moved to:
kuryr/lib/


which openstack/kuryr-libnetwork would import like so:

from kuryr.lib import binding

So, right now, patches in review that are for the Docker IPAM or remote
driver should be moved to openstack/kuryr-libnetwork, and soon we should
make openstack/kuryr-libnetwork add kuryr-lib to its requirements.

Regards,

Toni
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Refactoring into common library and kuryr-libnetwork + Nested_VMs

2016-06-28 Thread Antoni Segura Puimedon
On Tue, Jun 28, 2016 at 11:54 AM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

>
>
> On Tue, Jun 28, 2016 at 1:53 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Mon, Jun 27, 2016 at 11:10 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Jun 27, 2016 at 2:22 PM, Fawad Khaliq <fa...@plumgrid.com>
>>> wrote:
>>>
>>>> Vikas,
>>>>
>>>> Thanks for starting this. Where would you classify the Segmentation
>>>> (VLANID etc) allocation engine. Currently, the libnetwork plugin is tied to
>>>> the api and talks to Neutron, how would libnetwork and api part interact?
>>>>
>>>> As per my current understanding, it should be part of the
>>> Kuryr-controller (server) running on the cluster master node. My proposal is to
>>> move all the Neutron API calling parts to kuryr-controller and let the libnetwork
>>> plugin make requests to kuryr-controller.
>>>
>>
>> Right now we have three repositories
>>
>> - kuryr
>> - kuryr-libnetwork
>> - kuryr-kubernetes
>>
>> My proposal is that the common code (as described below in Vikas' email,
>> this includes the binding code) lives in `kuryr`.
>> The kuryr server for the nested swarm case would also live there, as it
>> would be a generic rest API.
>>
>> The local libnetwork code, the REST server that we have that serves the
>> libnetwork ipam and remote driver APIs would live in kuryr-libnetwork.
>> For the nested case, I'd put a configuration option to the libnetwork
>> driver to prefer the vlan tagging binding script.
>>
>
> The vlan tagging part looks common to both libnetwork and k8s (CNI). Will
> it be present in both repos, kuryr-libnetwork and kuryr-k8s, or can we put
> it in the common 'kuryr' as well?
>

It would be in common kuryr. The configuration option to use it instead of
the port type would be defined in both kuryr-libnetwork and kuryr-k8s.
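As a hedged sketch of what that could look like (the option and group
names here are made up, not an agreed design), each consumer would
register something like:

    from oslo_config import cfg

    binding_opts = [
        cfg.StrOpt('driver',
                   default='veth',
                   choices=['veth', 'vlan'],
                   help='Binding to use; "vlan" selects the vlan tagging '
                        'binding script for the nested container-in-vm '
                        'case.'),
    ]

    cfg.CONF.register_opts(binding_opts, group='binding')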


>
>
>>
>> Both CNI and the API watcher I would put in the kuryr-kubernetes
>> repositories under /kuryr/{cni,watcher}
>>
>
>>
>>>
>>>
>>>> Fawad Khaliq
>>>>
>>>>
>>>> On Fri, Jun 24, 2016 at 9:45 AM, Vikas Choudhary <
>>>> choudharyvika...@gmail.com> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> As already discussed with some teammates over IRC and internally, I
>>>>> thought of bringing the discussion to the ML for more opinions:
>>>>>
>>>>> My idea on repo structure is something similar to this:
>>>>>
>>>>> kuryr
>>>>> └── controller
>>>>> │   ├── api (running on controller node(cluster master or openstack
>>>>> controller node), talking to other services(neutron))
>>>>> │   │
>>>>> │   ├── kubernetes-controller
>>>>> │   │   │
>>>>> │   │   └── watcher (for network related services making api
>>>>> calls)
>>>>> │   │
>>>>> │   │___any_other_coe_controller_capable_of_watching_events
>>>>> │
>>>>> │
>>>>> │
>>>>> │___driver
>>>>>  │common (traffic tagging utilities and binding)
>>>>>  │
>>>>>  │kubernetes(cni)
>>>>>  │
>>>>>  │libnetwork(network and ipam driver)(for network related
>>>>> services making api calls)
>>>>>  │
>>>>>  │ any_other_driver(calling api for nw related services if
>>>>> watcher not supported)
>>>>>
>>>>>
>>>>> Thoughts?
>>>>>
>>>>>
>>>>> -Vikas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> -- Forwarded message --
>>>>> From: Vikas Choudhary <choudharyvika...@gmail.com>
>>>>> Date: Thu, Jun 23, 2016 at 2:54 PM
>>>>> Subject: Re: Kuryr Refactoring into common library and
>>>>> kuryr-libnetwork + Nested_VMs
>>>>> To: Antoni Segura Puimedon <t...@midokura.com>
>>>>>
>>>>>
>>>>> @Toni, Can you please explain a bit on how the roles regarding
>>>>>  vlan/segmentation id allocation, tagging ang untagging containers' 
>>>>> traffic
>>>>> are di

Re: [openstack-dev] [Kuryr] Refactoring into common library and kuryr-libnetwork + Nested_VMs

2016-06-28 Thread Antoni Segura Puimedon
On Mon, Jun 27, 2016 at 11:10 AM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

>
>
> On Mon, Jun 27, 2016 at 2:22 PM, Fawad Khaliq <fa...@plumgrid.com> wrote:
>
>> Vikas,
>>
>> Thanks for starting this. Where would you classify the Segmentation
>> (VLANID etc) allocation engine. Currently, the libnetwork plugin is tied to
>> the api and talks to Neutron, how would libnetwork and api part interact?
>>
>> As per my current understanding, it should be part of the
> Kuryr-controller (server) running on the cluster master node. My proposal is to
> move all the Neutron API calling parts to kuryr-controller and let the libnetwork
> plugin make requests to kuryr-controller.
>

Right now we have three repositories

- kuryr
- kuryr-libnetwork
- kuryr-kubernetes

My proposal is that the common code (as described below in Vikas' email,
this includes the binding code) lives in `kuryr`.
The kuryr server for the nested swarm case would also live there, as it
would be a generic REST API.

The local libnetwork code, i.e. the REST server we have that serves the
libnetwork IPAM and remote driver APIs, would live in kuryr-libnetwork.
For the nested case, I'd add a configuration option to the libnetwork
driver to prefer the vlan tagging binding script.

Both CNI and the API watcher I would put in the kuryr-kubernetes
repositories under /kuryr/{cni,watcher}


>
>
>> Fawad Khaliq
>>
>>
>> On Fri, Jun 24, 2016 at 9:45 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> As already discussed with some teammates over IRC and internally, I
>>> thought of bringing the discussion to the ML for more opinions:
>>>
>>> My idea on repo structure is something similar to this:
>>>
>>> kuryr
>>> └── controller
>>> │   ├── api (running on controller node(cluster master or openstack
>>> controller node), talking to other services(neutron))
>>> │   │
>>> │   ├── kubernetes-controller
>>> │   │   │
>>> │   │   └── watcher (for network related services making api calls)
>>> │   │
>>> │   │___any_other_coe_controller_capable_of_watching_events
>>> │
>>> │
>>> │
>>> │___driver
>>>  │common (traffic tagging utilities and binding)
>>>  │
>>>  │kubernetes(cni)
>>>  │
>>>  │libnetwork(network and ipam driver)(for network related
>>> services making api calls)
>>>  │
>>>  │____ any_other_driver(calling api for nw related services if
>>> watcher not supported)
>>>
>>>
>>> Thoughts?
>>>
>>>
>>> -Vikas
>>>
>>>
>>>
>>>
>>> -- Forwarded message --
>>> From: Vikas Choudhary <choudharyvika...@gmail.com>
>>> Date: Thu, Jun 23, 2016 at 2:54 PM
>>> Subject: Re: Kuryr Refactoring into common library and kuryr-libnetwork
>>> + Nested_VMs
>>> To: Antoni Segura Puimedon <t...@midokura.com>
>>>
>>>
>>> @Toni, Can you please explain a bit how the roles regarding
>>> vlan/segmentation id allocation, and tagging and untagging containers' traffic,
>>> are divided among the entities you mentioned.
>>>
>>> In my understanding, in k8s case, API_watcher has resource translators
>>> and these will be talking to neutron for port creation and ip allocation.
>>> Then why for k8s case, neutron talking utilities are present in common lib.
>>> Or in other words, which neutron apis will be used from common lib?
>>>
>>> -Vikas
>>>
>>> On Thu, Jun 23, 2016 at 2:22 PM, Antoni Segura Puimedon <
>>> t...@midokura.com> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Jun 23, 2016 at 7:28 AM, Irena Berezovsky <ir...@midokura.com>
>>>> wrote:
>>>>
>>>>> Hi guys,
>>>>> Just minor suggestion from my side. Please link all the refactoring
>>>>> patches to the same launchpad bp/topic so it will be easy to trace the
>>>>> relevant work.
>>>>>
>>>>> Vikas, Gal,let me know if you need so help.
>>>>>
>>>>> BR,
>>>>> Irena
>>>>>
>>>>> On Thu, Jun 23, 2016 at 7:58 AM, Vikas Choudhary <
>>>>> choudharyvika...@gmail.com> wrote:
>>>>>
>>>>>> Hi Gal,
>>>>>>
>>>>>> Greeting of the day!!!
>>>>>>
>>>>>>

Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Antoni Segura Puimedon
On Mon, Jun 13, 2016 at 12:10 AM, Hongbin Lu  wrote:

> Hi team,
>
>
>
> During the team meetings these weeks, we collaborated the initial project
> roadmap. I summarized it as below. Please review.
>
>
>
> * Implement a common container abstraction for different container
> runtimes. The initial implementation will focus on supporting basic
> container operations (i.e. CRUD).
>
> * Focus on non-nested containers use cases (running containers on physical
> hosts), and revisit nested containers use cases (running containers on VMs)
> later.
>
> * Provide two set of APIs to access containers: The Nova APIs and the
> Zun-native APIs. In particular, the Zun-native APIs will expose full
> container capabilities, and Nova APIs will expose capabilities that are
> shared between containers and VMs.
>
> * Leverage Neutron (via Kuryr) for container networking.
>

Great! Let us know anytime we can help


> * Leverage Cinder for container data volume.
>
Have you considered fuxi?

https://github.com/openstack/fuxi


> * Leverage Glance for storing container images. If necessary, contribute
> to Glance for missing features (i.e. support layer of container images).
>
> * Support enforcing multi-tenancy by doing the following:
>
> ** Add configurable options for scheduler to enforce neighboring
> containers belonging to the same tenant.
>

What about making the scheduler pluggable instead of having a lot of
configuration options?


> ** Support hypervisor-based container runtimes.
>

Is that hyper.sh?


>
>
> The following topics have been discussed, but the team cannot reach
> consensus on including them into the short-term project scope. We skipped
> them for now and might revisit them later.
>
> * Support proxying API calls to COEs.
>
> * Advanced container operations (i.e. keep container alive, load balancer
> setup, rolling upgrade).
>
> * Nested containers use cases (i.e. provision container hosts).
>
> * Container composition (i.e. support docker-compose like DSL).
>

Will it have ordering primitives, i.e., this container won't start until
that one is up and running?

I also wonder whether the Higgins container abstraction will have rich
status reporting that can be used in ordering, for example, whether it can
differentiate started containers from those that are already listening on
their exposed ports.



>
>
> NOTE: I might have forgotten or misunderstood something. Please feel free
> to point it out if anything is wrong or missing.
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] [Neutron] Controlling security groups through Kuryr's libnetwork plugin [was: Waiting until Neutron Port is Active]

2016-06-13 Thread Antoni Segura Puimedon
On Sun, Jun 12, 2016 at 9:39 PM, Mike Spreitzer <mspre...@us.ibm.com> wrote:

> Antoni Segura Puimedon <toni+openstac...@midokura.com> wrote on
> 06/11/2016 07:39:41 PM:
>
> > Well, with a label you can make the Neutron Port have an SG that
> > forbids pinging.
>
> Wait, what?  Labels on what can do what?


There are two options here. Neither is implemented yet. You could:

* Have the docker network create pass an option to say: all the ports in
this docker net shall have this Neutron SG assigned.
* Wait a bit for the proposal that is being pushed to Docker upstream
asking for --net-opts on `docker run`.
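As a hedged sketch of the first option (the 'neutron.sg' option name and
the helper below are made up for illustration), the kuryr driver could
apply the requested SG to every port it creates on that network:

    def create_port(neutron, network_id, docker_net_options):
        # 'neutron' is a neutronclient.v2_0.client.Client instance.
        port_body = {'port': {'network_id': network_id,
                              'admin_state_up': True}}
        sg_id = docker_net_options.get('neutron.sg')  # hypothetical option
        if sg_id is not None:
            # Every port on this docker network gets the requested SG,
            # e.g. one whose rules drop ICMP echo requests.
            port_body['port']['security_groups'] = [sg_id]
        return neutron.create_port(port_body)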

>
>
> Thanks,
> Mike
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Re-licensing OpenStack charms under Apache 2.0

2016-06-11 Thread Antoni Segura Puimedon
On Fri, Jun 10, 2016 at 6:45 PM, Bilal Baqar  wrote:

> Hey James
>
> No problem on my side for this change.
>
> Regards
>
> On Fri, Jun 10, 2016 at 3:25 PM, Neil Jerram  wrote:
>
>> Me too.
>>
>> On Fri, Jun 10, 2016 at 10:32 AM Cory Benfield  wrote:
>>
>>>
>>> > On 8 Jun 2016, at 11:20, James Page  wrote:
>>> >
>>> > The majority of contributors are from Canonical (from whom I have
>>> permission to make this switch) with a further 18 contributors from outside
>>> of Canonical who I will be directly contacting for approval in gerrit as
>>> reviews are raised for each repository.
>>>
>>
No problem on my side either.


>
>>> Hey James,
>>>
>>> I’m happy for you to relicense my contributions as Apache 2.0.
>>>
>>> Cory
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Bilal Baqar
> MTS - PLUMgrid Inc.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-11 Thread Antoni Segura Puimedon
On Sat, Jun 11, 2016 at 3:03 PM, Mike Spreitzer  wrote:

> What about pinging?  BTW, from where would the pings come?
>
> In the Docker/Swarm API today there is no way to disable ping.  However,
> once Kuryr's libnetwork plugin is updated so that `docker network connect
> --ip=W.X.Y.Z ...` will latch onto a matching pre-existing Neutron Port, if
> it exists, there will be a way for a user to disable pings (right?).


Well, with a label you can make the Neutron Port have an SG that forbids
pinging.


>
>
> In the Kubernetes API there is now a way to do something like security
> groups, it is called NetworkPolicy; it is not yet well defined enough to
> say whether it gives the user a way to disable pings.


This is the reason I'd lean against using pinging. I think using get_port
should do it for now.
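A minimal sketch of that get_port polling (the timeout and interval are
illustrative, in the spirit of nova's vif_plugging_timeout; this is not
the code under review):

    import time


    def wait_for_port_active(neutron, port_id, timeout=60, interval=2):
        # 'neutron' is a neutronclient.v2_0.client.Client instance.
        deadline = time.time() + timeout
        while time.time() < deadline:
            # show_port returns {'port': {...}}; 'status' flips to ACTIVE
            # once the backend has finished plugging the vif.
            port = neutron.show_port(port_id)['port']
            if port['status'] == 'ACTIVE':
                return port
            time.sleep(interval)
        raise RuntimeError('Port %s not ACTIVE after %ss' % (port_id, timeout))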


>
>
> Thanks,
> Mike
>
>
>
> From: Mohammad Banikazemi/Watson/IBM@IBMUS
> To: "OpenStack Development Mailing List (not for usage questions)"
> Date: 06/10/2016 10:50 AM
> Subject: Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron
> Port is Active
> --
>
>
>
> Hi Neil,
>
> Currently, when a docker libnetwork "join" operation in Kuryr is returned,
> it is not guaranteed that the network connectivity has been established.
> There are containers that check for network connectivity as the first thing
> they do when they come up and under heavy load some notice there is no
> connectivity and simply bail out. I am trying to deal with such a use case,
>
> Thanks for pointing out that option 2 won't work for you. I think
> Salvatore also alluded to that in his response. What you are suggesting
> with pinging the container from the appropriate namespace may be worth a
> try but then there may be containers that do not allow ingress traffic
> while they are up and happy. So short of what Salvatore suggested in his
> earlier email (and I am not sure if that can be done without additions to
> Neutron), we are left with option 1.
>
> Keep in mind that users can choose not to enable the blocking option and
> things will be as they are right now. Would that be reasonable?
>
> Best,
>
> Mohammad
>
>
> From: Neil Jerram 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 06/10/2016 09:25 AM
> Subject: Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port
> is Active
> --
>
>
>
> Hi Mohammad,
>
> Why is the blocking needed?  Is it to report some kind of status back to
> Docker/Kubernetes, or to allow some follow-on action to happen?
>
> When using networking-calico as the driver, I think that only option (1)
> would work, out of the options you've suggested below.  (3) doesn't work,
> as you say, because Calico doesn't involve an L2 agent.  Also Calico
> doesn't use the RPC message queue for reporting port status, because we've
> found that that message queue is in itself a scalability bottleneck.
>
> I guess another option would be for the using system to determine for
> itself when the port appears to be working, e.g. by the host pinging the
> container/pod's IP address.
>
> Regards,
> Neil
>
>
> On Wed, Jun 8, 2016 at 4:23 PM Mohammad Banikazemi <m...@us.ibm.com> wrote:
> For the Kuryr project, in order to support blocking until vifs are plugged
> in (that is adding config options similar to the following options define
> in Nova: vif_plugging_is_fatal and vif_plugging_timeout), we need to detect
> that the Neutron plugin being used is done with plugging a given vif.
>
> Here are a few options:
>
> 1- The simplest approach seems to be polling for the status of the Neutron
> port to become Active. (This may lead to scalability issues but short of
> having a specific goal for scalability, it is not clear that will be the
> case.)
> 2- Alternatively, We could subscribe to the message queue and wait for
> such a port update event.
> 3- It was also suggested that we could use l2 agent extension to detect
> such an event but that seems to limit us to certain Neutron plugins and
> therefore not acceptable.
>
> I was wondering if there are other and better options.
>
> Best,
>
> Mohammad
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-10 Thread Antoni Segura Puimedon
On Fri, Jun 10, 2016 at 5:18 PM, Neil Jerram  wrote:

> Yes, that all makes sense - thanks for explaining.
>
> Neil
>
>
> On Fri, Jun 10, 2016 at 3:50 PM Mohammad Banikazemi  wrote:
>
>> Hi Neil,
>>
>> Currently, when a docker libnetwork "join" operation in Kuryr is
>> returned, it is not guaranteed that the network connectivity has been
>> established. There are containers that check for network connectivity as
>> the first thing they do when they come up and under heavy load some notice
>> there is no connectivity and simply bail out. I am trying to deal with such
>> a use case,
>>
>> Thanks for pointing out that option 2 won't work for you. I think
>> Salvatore also alluded to that in his response. What you are suggesting
>> with pinging the container from the appropriate namespace may be worth a
>> try but then there may be containers that do not allow ingress traffic
>> while they are up and happy. So short of what Salvatore suggested in his
>> earlier email (and I am not sure if that can be done without additions to
>> Neutron), we are left with option 1.
>>
>> Keep in mind that users can choose not to enable the blocking option and
>> things will be as they are right now. Would that be reasonable?
>>
>
That means going for the current version in the patch, right?


>
>> Best,
>>
>> Mohammad
>>
>>
>> From: Neil Jerram 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: 06/10/2016 09:25 AM
>>
>>
>> Subject: Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron
>> Port is Active
>> --
>>
>>
>>
>> Hi Mohammad,
>>
>> Why is the blocking needed?  Is it to report some kind of status back to
>> Docker/Kubernetes, or to allow some follow-on action to happen?
>>
>> When using networking-calico as the driver, I think that only option (1)
>> would work, out of the options you've suggested below.  (3) doesn't work,
>> as you say, because Calico doesn't involve an L2 agent.  Also Calico
>> doesn't use the RPC message queue for reporting port status, because we've
>> found that that message queue is in itself a scalability bottleneck.
>>
>> I guess another option would be for the using system to determine for
>> itself when the port appears to be working, e.g. by the host pinging the
>> container/pod's IP address.
>>
>> Regards,
>> Neil
>>
>>
>> On Wed, Jun 8, 2016 at 4:23 PM Mohammad Banikazemi <m...@us.ibm.com> wrote:
>>
>>
>>For the Kuryr project, in order to support blocking until vifs are
>>plugged in (that is adding config options similar to the following options
>>define in Nova: vif_plugging_is_fatal and vif_plugging_timeout), we need 
>> to
>>detect that the Neutron plugin being used is done with plugging a given 
>> vif.
>>
>>Here are a few options:
>>
>>1- The simplest approach seems to be polling for the status of the
>>Neutron port to become Active. (This may lead to scalability issues but
>>short of having a specific goal for scalability, it is not clear that will
>>be the case.)
>>2- Alternatively, We could subscribe to the message queue and wait
>>for such a port update event.
>>3- It was also suggested that we could use l2 agent extension to
>>detect such an event but that seems to limit us to certain Neutron plugins
>>and therefore not acceptable.
>>
>>I was wondering if there are other and better options.
>>
>>Best,
>>
>>Mohammad
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __

Re: [openstack-dev] [Kuryr] Kuryr did not detect neutron tag plugin in devstack

2016-06-06 Thread Antoni Segura Puimedon
On Sat, Jun 4, 2016 at 5:17 AM, Liping Mao (limao)  wrote:

> Hi Kuryr team,
>
> I noticed that kuryr does not detect the neutron tag plugin in devstack [1].
> This is because when the kuryr process starts up in devstack,
> neutron-server has not finished loading the tag plugin yet.
> Kuryr uses an API call to detect neutron tags, so kuryr will not detect it.
> After I manually restart the kuryr process, everything works well.
>
> I'm not familiar with devstack, and not sure if there is any way to
> make sure neutron-server has finished starting before kuryr starts up.
> I submitted a patch [2] that just restarts kuryr in the extra stage; at
> that stage, neutron-server has already finished starting.
> Any comments or good ideas to solve this?
>

I proposed in the weekly meeting that the Neutron capability detection be
postponed until the first action that needs it. This way the check will be
driven by the user and will happen at a time when Neutron will surely be
up and running.
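A hedged sketch of that lazy detection (not the actual patch): memoize the
answer on first use instead of probing at service start-up:

    _supports_tags = None  # tri-state: None means "not checked yet"


    def neutron_supports_tags(neutron):
        # 'neutron' is a neutronclient Client; the tag extension
        # advertises itself under the alias 'tag'.
        global _supports_tags
        if _supports_tags is None:
            exts = neutron.list_extensions()['extensions']
            _supports_tags = any(e['alias'] == 'tag' for e in exts)
        return _supports_tags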

> Please just help by adding your comments in the patch or here. Thanks.
>
> [1] https://bugs.launchpad.net/kuryr/+bug/1587522
> [2] https://review.openstack.org/#/c/323453/
>
> Regards,
> Liping Mao
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPAM issue with multiple docker networks having same cidr subnets

2016-05-27 Thread Antoni Segura Puimedon
On Thu, May 26, 2016 at 9:48 PM, Vikas Choudhary  wrote:

> Hi All,
>
> Recently, Banix observed and brought into notice this issue [1].
>
> To solve this, I could think of two approaches:
> 1. Modifying the libnetwork apis to get PoolID also at network creation.
>  OR
> 2. Enhancing the /network docker api to get PoolID details also
>
> The problem with the first approach is that it changes the libnetwork
> interface, which is common to all remote drivers, and thus the chances of
> breakage are high. So I preferred the second one.
>
> Here is the patch I pushed to docker [2].
>
> Once this is merged, we can easily fix this issue by tagging poolID to
> neutron networks and filtering subnets at address request time based on
> this information.
>
> Any thoughts/suggestions?
>

I think following the address scope proposal at [2] is the best course of
action. Thanks for taking
it up with Docker upstream!


>
>
> Thanks
> Vikas
>
> [1] https://bugs.launchpad.net/kuryr/+bug/1585572
> [2] https://github.com/docker/docker/issues/23025
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][magnum]Installing kuryr for mutlinode openstack setup

2016-05-25 Thread Antoni Segura Puimedon
On Wed, May 25, 2016 at 11:20 AM, Jaume Devesa  wrote:

> Hello Akshay,
>
> responses inline:
>
> On Wed, 25 May 2016 10:48, Akshay Kumar Sanghai wrote:
> > Hi,
> > I have a 4 node openstack setup (1 controller, 1 network, 2 compute
> nodes).
> > I want to install kuryr in liberty version. I cannot find a package in
> > ubuntu repo.
>
> There is no official release of Kuryr yet. You'll need to install it
> using the current master branch of the repo [1] (clone it, install the
> dependencies and run `python setup.py install`).
>

Or you could run it dockerized; read the "repo info" in [2].

We are working on having the packaging ready, but we are splitting the
repos first,
so it will take a while for plain distro packages.


> > -How do i install kuryr?
> If the README.rst file of the repository is not enough for you in terms of
> installation and configuration, please let us know what's not clear.
>
> > - what are the components that need to be installed on the respective
> > nodes?
>
> You need to run the kuryr-libnetwork service on all the nodes that you
> use as docker 'workers'.
>

and your chosen vendor's neutron agents. For example, for MidoNet it's
midolman, for ovs it would be the neutron ovs agent.


>
> > - Do i need to install magnum for docker swarm?
>
> Not familiar with Magnum.. Can not help you here.
>


If you want to run docker swarm on bare metal, you do not need Magnum;
only Keystone and Neutron.

You'd have docker swarm, neutron and keystone running on one node, and
then N nodes with docker engine, kuryr-libnetwork and the neutron agents
of the vendor of your choice.


> > - Can i use docker swarm, kubernetes, mesos in openstack without using
> > kuryr?


You can use swarm and kubernetes in OpenStack with Kuryr using Magnum. It
will use Neutron networking to provide nets to the VMs that run the
swarm/kubernetes cluster. Inside the VMs, another overlay done by flannel
will be used (in k8s; I have not tried it in swarm).


> What will be the disadvantages?
>

The disadvantages are that you do not get explicit Neutron networking for
your containers, you get less networking isolation for your
VMs/containers, and if you want the highest performance, you have to
change the default flannel mode.


>
> Only docker swarm right now. The kubernetes one will be addressed soon.
>
> >
> > Thanks
> > Akshay
>
> Thanks to you for giving it a try!



> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> There are a bunch of people much more experienced than me in Kuryr. I hope
> I
> haven't said anything stupid.
>
> Best regards,
>
> [1]: http://github.com/openstack/kuryr

 [2] https://hub.docker.com/r/kuryr/libnetwork/

>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
> PGP key: 35C2D6B2 @ keyserver.ubuntu.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-15 Thread Antoni Segura Puimedon
On Sat, May 14, 2016 at 7:13 PM, Clint Byrum  wrote:

> Excerpts from Dieterly, Deklan's message of 2016-05-14 01:18:20 +:
> > Python 2.x will not be supported for much longer, and let's face it,
> > Python is easy, but it just does not scale. Nor does Python have the
> > performance characteristics that large, distributed systems require.
> Maybe
> > Java could replace Python in OpenStack as the workhorse language.
>
> Which is why we've been pushing toward python 3 for years now. It's
> default for python apps in distros now, gates are holding the line at the
> unit test level now, so we just need a push toward integration testing
> and I truly believe we'll be seeing people use python3 and pypy to run
> OpenStack in the next year.
>

Kuryr's Kubernetes integration is Python 3 only, as it is asyncio-based.
I would be surprised if new projects and subprojects didn't go to Python 3
directly.


>
> And regarding not scaling: That's precisely what's being discussed,
> and it seems like there are plenty of options for pushing python further
> that aren't even half explored yet. Meanwhile, if enough people agree,
> perhaps go is a good option for those areas where we just can't push
> Python further without it already looking like another language anyway.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Port binding query

2016-05-12 Thread Antoni Segura Puimedon
On Thu, May 12, 2016 at 4:50 PM, Neil Jerram  wrote:

> I'm trying Kuryr with networking-calico and think I've hit an unhelpful
> inconsistency. A Neutron port has 'id' and 'device_id' fields that are
> usually different. When Nova does VIF binding for a Neutron port, it
> generates the Linux device name from 'tap' + port['id']. But when Kuryr
> does VIF binding for a Neutron port, I think it generates the Linux device
> name from 'tap' + port['device_id'].
>
> Thoughts? Does that sound right, or have I misread the code and my logs?
> If it's correct, it marginally impacts the ability to use identical agent
> and Neutron driver/plugin code for the two cases (Nova and Kuryr).
>

I think we are supposed to behave like Nova, binding wise.

@Banix: Can you confirm that it is a bug and not a feature?

From a quick grepping I see that nova sets the name to be:

nova/network/neutronv2/api.py: devname = "tap" + current_neutron_port['id']

Whereas in Kuryr we use the first 8 characters of the Docker endpoint id.
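To make the mismatch concrete, a sketch of the two derivations (nova
truncates to its NIC_NAME_LEN of 14 characters, if I read the code right;
the IDs below are illustrative):

    NIC_NAME_LEN = 14  # nova/network/model.py


    def nova_style_devname(port_id):
        # Nova: 'tap' + the Neutron port id, truncated to fit.
        return ('tap' + port_id)[:NIC_NAME_LEN]


    def kuryr_style_devname(endpoint_id):
        # Kuryr today: 'tap' + first 8 chars of the Docker endpoint id.
        return 'tap' + endpoint_id[:8]


    print(nova_style_devname('6f14fe70-1710-d6f3-472d-1626c33f0036'))
    print(kuryr_style_devname('68f14fe701710d6f3472d1626c33f0036a14'))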


>
> Thanks,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Question for Antoni Puimedon about load balancers and overlay networks

2016-05-03 Thread Antoni Segura Puimedon
On Mon, May 2, 2016 at 8:11 PM, Mike Spreitzer  wrote:

> I am looking at
> https://www.openstack.org/videos/video/project-kuryr-docker-delivered-kubernetes-next,
> around 28:00.  You have said that overlay networks are involved, and are
> now talking about load balancers.  Is this Neutron LBaaS?  As far as I
> know, a Neutron LBaaS instance is "one-armed" --- both the VIP and the back
> endpoints have to be on the same Neutron network.  But you seem to have all
> the k8s services on a single subnet.  So I am having some trouble following
> exactly what is going on.  Can you please elaborate?


Hi Mike,

Thanks for reaching out and thanks for going over the video!

For those following at home, we are talking about the explanation I gave
about slide 18 in [1].

This topic was also discussed in the work sessions about the Kubernetes
integration (maybe also in the architecture one, but I don't remember for
sure). The further explanation I gave in the work session is that the
setup we use has a Neutron network with a subnet for all the load balancer
pools, and each Kubernetes namespace then gets a different Neutron net and
subnets. To make it work, we have all the subnets (including the load
balancer one) connected to a single `raven` router. This is not blocked by
the API, and I think according to the LB spec it should work, although I
have not tried it with OVS.


[1]
http://www.slideshare.net/celebdor/project-kuryr-returns-docker-delivered-kubernetes-next
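A hedged sketch of the wiring described above (the router name is made up
and the subnet IDs are placeholders; 'neutron' is a neutronclient Client):

    def wire_raven_router(neutron, lb_subnet_id, namespace_subnet_ids):
        # One router fronts the LB subnet and every namespace subnet, so
        # a VIP on the LB subnet can reach members on any namespace net.
        router = neutron.create_router(
            {'router': {'name': 'raven-default-router'}})['router']
        for subnet_id in [lb_subnet_id] + list(namespace_subnet_ids):
            neutron.add_interface_router(router['id'],
                                         {'subnet_id': subnet_id})
        return router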


>
>
> BTW, there is also some discussion of k8s multi-tenancy in the Kubernetes
> Networking SIG and the Kubernetes OpenStack SIG.
>
> Thanks,
> Mike
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-22 Thread Antoni Segura Puimedon
On Thu, Apr 21, 2016 at 9:30 PM, Fox, Kevin M  wrote:
> +1.
> 
> From: Hongbin Lu [hongbin...@huawei.com]
> Sent: Thursday, April 21, 2016 7:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
> abstraction for all COEs
>
>> -Original Message-
>> From: Steve Gordon [mailto:sgor...@redhat.com]
>> Sent: April-21-16 9:39 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>>
>> - Original Message -
>> > From: "Hongbin Lu" 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > > -Original Message-
>> > > From: Keith Bray [mailto:keith.b...@rackspace.com]
>> > > Sent: April-20-16 6:13 PM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
>> > > unified abstraction for all COEs
>> > >
>> > > Magnum doesn¹t have to preclude tight integration for single COEs
>> > > you speak of.  The heavy lifting of tight integration of the COE in
>> > > to OpenStack (so that it performs optimally with the infra) can be
>> > > modular (where the work is performed by plug-in models to Magnum,
>> > > not performed by Magnum itself. The tight integration can be done
>> by
>> > > leveraging existing technologies (Heat and/or choose your DevOps
>> tool of choice:
>> > > Chef/Ansible/etc). This allows interested community members to
>> focus
>> > > on tight integration of whatever COE they want, focusing
>> > > specifically on
>> >
>> > I agree that tight integration can be achieved by a plugin, but I
>> > think the key question is who will do the work. If tight integration
>> > needs to be done, I wonder why it is not part of the Magnum efforts.
>>
>> Why does the integration belong in Magnum though? To me it belongs in
>> the COEs themselves (e.g. their in-tree network/storage plugins) such
>> that someone can leverage them regardless of their choices regarding
>> COE deployment tooling (and yes that means Magnum should be able to
>> leverage them too)? I guess the issue is that in the above conversation
>> we are overloading the term "integration" which can be taken to mean
>> different things...
>
> I can clarify. I mean to introduce abstractions to allow tight integration 
> between COEs and OpenStack. For example,
>
> $ magnum container-create --volume= --net= ...
>
> I agree with you that such integration should be supported by the COEs 
> themselves. If it does, Magnum will leverage it (anyone can leverage it as 
> well regardless of they are using Magnum or not). If it doesn't (the 
> reality), Magnum could add support for that via its abstraction layer. For 
> your question about why such integration belongs in Magnum, my answer is that 
> the work needs to be done in one place so that everyone can leverage it 
> instead of re-inventing their own solutions. Magnum is the OpenStack 
> container service so it is natural for Magnum to take it IMHO.

The integration is being done in the COEs themselves.

In Docker with Swarm you can just do:

docker network create -d kuryr \
    --ipam-driver=kuryr \
    --subnet=10.10.0.0/24 \
    --gateway=10.10.0.1 \
    -o neutron.net.name=mynet mynet_d

You can also refer to them by UUID. People are starting to join in to be
able to do the same with storage volumes (we'll talk about it in [1]).
For Kubernetes we still do not have it upstream, but we create and use
Neutron resources as well. All this is on bare metal but, in the Newton
cycle, provided that the vlan-aware-vms spec gets released, we'll support
container-in-vm (we'll discuss it at the summit in work sessions and in a
presentation [2]) and Magnum will be able to use it.

So, the way I look at it, Magnum should probably not be too
opinionated, giving choice to operators, but it should provide as much
access to the core OpenStack resources as possible, as long as those
are available in the COE (and that's where we are trying to help).

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/6861
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/7633


>
>>
>> -Steve
>>
>> > From my point of view,
>> > pushing the work out doesn't seem to address the original pain, which
>> > is some users don't want to explore the complexities of individual
>> COEs.
>>
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Re: [openstack-dev] [kuryr] Did kuryr need to know about Docker's cluster store?

2016-03-03 Thread Antoni Segura Puimedon
On Fri, Mar 4, 2016 at 5:01 AM, Vikas Choudhary
 wrote:
> Since libnetwork talks to cluster store independent of plugin, I think no
> changes are required on Kuryr side.

That's right. Docker handles all the KV storage interaction from the data
it receives from Kuryr. In fact, libnetwork expressly forbids plugins from
accessing the KV store, even for resources that are in flight.

>
>
> Regards
> Vikas
>
> On Thu, Mar 3, 2016 at 9:54 PM, Mike Spreitzer  wrote:
>>
>> On Feb 5 I was given a tarchive of kuryr with an install script that
>> configures the docker daemon to use consul as its cluster store.  If I
>> modify the config of docker to use etcd instead then do I need to change
>> anything in Kuryr?
>>
>> Thanks,
>> Mike
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread Antoni Segura Puimedon
On Fri, Feb 26, 2016 at 10:44 AM, Miguel Angel Ajo Pelayo
 wrote:
>
>> On 26 Feb 2016, at 02:38, Sean McGinnis  wrote:
>>
>> On Thu, Feb 25, 2016 at 04:13:56PM +0800, Qiming Teng wrote:
>>> Hi, All,
>>>
>>> After reading through all the +1's and -1's, we realized how difficult
>>> it is to come up with a proposal that makes everyone happy. When we are
>>> discussing this proposal with some other contributors, we came up with a
>>> proposal which is a little bit different. This idea could be very
>>> impractical, very naive, given that we don't know much about the huge
>>> efforts behind the scheduling, planning, coordination ... etc etc. So,
>>> please treat this as a random thought.
>>>
>>> Maybe we can still have the Summit and the Design Summit colocated, but
>>> we can avoid the overlap that has been the source of many troubles. The
>>> idea is to have both events scheduled by the end of a release cycle. For
>>> example:
>>>
>>> Week 1:
>>>  Wednesday-Friday: 3 days Summit.
>>>* Primarily an event for marketing, sales, CTOs, architects,
>>>  operators, journalists, ...
>>>* Contributors can decide whether they want to attend this.
>>>  Saturday-Sunday:
>>>* Social activities: contributors meet-up, hang outs ...
>>>
>>> Week 2:
>>>  Monday-Wednesday: 3 days Design Summit
>>>* Primarily an event for developers.
>>>* Operators can hold meetups during these days, or join project
>>>  design summits.
>>>
>
>
> A proposal like this one seems much more rational to me,
>
>   * no need for two trips
>   * no overlap of the summit/design (I end up running back and forth 
> otherwise)

These. A thousand times these. Yes, I know the timing in the cycle is
still not the best, but I suspect that with the other option
project-specific mid-cycles will still occur.

>
> Otherwise, separating both parts of the summit increases the gap
> between engineering and the final OpenStack users/ops. I couldn’t go
> to summit-related-events 4 times a year for family reasons. But I like
> to have the opportunity to spend some time close to the user/op side
> of things to understand how people is using OpenStack, what are they
> missing, what are we doing good.
>
>
>>> If you need to attend both events, you don't need two trips. Scheduling
>>> both events by the end of a release cycle can help gather more
>>> meaningful feedbacks, experiences or lessons from previous releases and
>>> ensure a better plan for the coming release.
>>>
>>> If you want to attend just the main Summit or only the Design Summit,
>>> you can plan your trip accordingly.
>>>
>>> Thoughts?
>
> I really like it. Not sure how well does it work for others, or from
> the organisational point of view.

Probably a big issue is that organizers would end up losing 2-3 weekends
in a row, which may be a bit too much to ask.

>
>>>
>>> - Qiming
>>>
>>
>> This would eliminate the need for a second flight, and it would
>> net be total less time away than attending two separate events. I could
>> see this working.
>>
>> Sean
>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Now part of OpenStack big-tent

2016-02-24 Thread Antoni Segura Puimedon
On Wed, Feb 24, 2016 at 4:51 PM, Gal Sagie  wrote:
> Hello Everyone,
>
> Just wanted to update you that Kuryr [1] was officially accepted yesterday
> as a
> big tent project.

We should probably get Kuryr to be selectable as a tag to subscribe to in

http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev

>
> We are currently facing some interesting challenges and times and if you
> are running containers in OpenStack in mixed environments you most certainly
> want to look and examine Kuryr.
>
> We are holding a weekly IRC meeting [2] which is alternating between time
> zones
> so you have no excuse :) everyone are welcome!
>
> We want to help and solve more challenges in the realm of containers
> networking
> deployments in OpenStack and if you are deploying this either in development
> or
> in production we would love to hear your experience and the problems you are
> facing
> and try to help you manage this better, feel free to share!
>
> [1] https://wiki.openstack.org/wiki/Kuryr
> [2] https://wiki.openstack.org/wiki/Meetings/Kuryr

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Failed to create network with kuryr driver type

2016-02-02 Thread Antoni Segura Puimedon
On Tue, Feb 2, 2016 at 7:26 AM, Mars Ma  wrote:

> hi Vikas,
>
> ubuntu@kuryr1:~$ docker network create --driver=kuryr --ipam-driver=kuryr
> --subnet 10.10.0.0/16 --gateway 10.10.0.1 --ip-range 10.10.0.0/24 foo
> 68f14fe701710d6f3472d1626c33f0036a145aaa8d81265429e509cf759adfe1
> ubuntu@kuryr1:~$ docker network ls
> NETWORK ID          NAME                DRIVER
> 8cbbcae5d143        bridge              bridge
> d64b70ca9b64        none                null
> 65510a4e71de        host                host
> ubuntu@kuryr1:~$ neutron net-list
>
> +--------------------------------------+------------------------------------------------------------------+----------------------------------------------------------+
> | id                                   | name                                                             | subnets                                                  |
> +--------------------------------------+------------------------------------------------------------------+----------------------------------------------------------+
> | 29f8fd92-26f5-42b6-86db-ae09fc77cd91 | public                                                           | 18dcdefd-741f-4ec2-ba22-60610f071ed7 2001:db8::/64       |
> |                                      |                                                                  | 430a7e0b-ca74-41a1-a6b0-d8958072eee2 172.24.4.0/24       |
> | db489255-be4e-439d-babd-20a52a412e25 | 68f14fe701710d6f3472d1626c33f0036a145aaa8d81265429e509cf759adfe1 | 18e035b2-d26d-488a-b780-9c91eb36b294 10.10.0.0/24        |
> | 37475896-5f2a-4308-a4cd-8095cefa3b7c | private                                                          | 1e69eb3c-08a9-4312-882d-179c1e546aed fd54:99bd:a3b8::/64 |
> |                                      |                                                                  | 08a1b408-3915-4d11-932d-f8c3e9ee4c0f 10.0.0.0/24         |
> +--------------------------------------+------------------------------------------------------------------+----------------------------------------------------------+
> +--+--+--+
> ubuntu@kuryr1:~$ docker run --net=foo -itd --name=container1 busybox
> Unable to find image 'busybox:latest' locally
> latest: Pulling from library/busybox
> 583635769552: Pull complete
> b175bcb79023: Pull complete
> Digest:
> sha256:c1bc9b4bffe665bf014a305cc6cf3bca0e6effeb69d681d7a208ce741dad58e0
> Status: Downloaded newer image for busybox:latest
> 4a7ad6fe18174e3e427e73283325446929daf9af87fb11534f9febc91acca272
> Error response from daemon: Cannot start container
> 4a7ad6fe18174e3e427e73283325446929daf9af87fb11534f9febc91acca272: network
> foo not found
> ubuntu@kuryr1:~$
>
> Why doesn't docker list the created network, while neutron can list it?
> Any comments?
>

Do you have etcd installed and Docker talking to it?
Which capability scope are you using in kuryr.conf?
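
For reference, this is roughly the setup those two questions are probing
(a sketch; the etcd endpoint, interface name and flags are examples to
adapt to your environment):

    # Remote drivers that advertise "global" scope need the daemon to be
    # wired to an external KV store, or their networks will not show up
    # in `docker network ls`:
    docker daemon --cluster-store=etcd://127.0.0.1:2379 \
                  --cluster-advertise=eth0:2376

    # and in kuryr.conf, the scope that kuryr advertises to libnetwork:
    #   [DEFAULT]
    #   capability_scope = global
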

>
>
> Thanks & Best regards !
> Mars Ma
> 
>
> On Wed, Jan 20, 2016 at 6:18 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Cheers :) !!
>>
>> On Wed, Jan 20, 2016 at 3:41 PM, Mars Ma  wrote:
>>
>>> Much thanks to @Vikas
>>>
>>> Thanks & Best regards !
>>> Mars Ma
>>> 
>>>
>>> On Wed, Jan 20, 2016 at 5:55 PM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi Mars,

 Your problem will be solved by this patch,
 https://review.openstack.org/#/c/265744/ . It has not been merged yet
 though.


 Thanks
 Vikas


 On Wed, Jan 20, 2016 at 2:39 PM, Mars Ma  wrote:

> Hi Vikas,
>
> I added your fix, and I still have a problem, but a different one:
>
> $ neutron subnetpool-list
>
> +--------------------------------------+-------+-------------------+-------------------+------------------+
> | id                                   | name  | prefixes          | default_prefixlen | address_scope_id |
> +--------------------------------------+-------+-------------------+-------------------+------------------+
> | 360765af-fd5d-432c-990f-f787600c30ab | kuryr | [u'10.10.1.0/24'] | 24                |                  |
> +--------------------------------------+-------+-------------------+-------------------+------------------+
> ubuntu@kuryr1:~$ sudo docker network create -d kuryr
> --ipam-driver=kuryr kuryr
> Error response from daemon: Plugin Error: NetworkDriver.CreateNetwork,
> {
>   "Err": "u'Gateway' is a required property\n\nFailed validating
> u'required' in schema[u'properties'][u'IPv4Data'][u'items']:\n
>  {u'description': u'IPv4 data',\n u'example': {u'AddressSpace':
> u'foo',\n  u'AuxAddresses': {u'db': u'192.168.42.3',\n
>u'web': u'192.168.42.2'},\n
>  u'Gateway': u'192.168.42.1/24',\n  u'Pool': u'
> 192.168.42.0/24'},\n u'properties': {u'AddressSpace':
> {u'description': u'The name of the address space.',\n
> u'example': u'foo',\n
> u'type': u'string'},\n  

Re: [openstack-dev] [Kuryr] Starting Kuryr service requires root privilege

2016-01-26 Thread Antoni Segura Puimedon
On Tue, Jan 26, 2016 at 10:19 AM, Baohua Yang <yangbao...@gmail.com> wrote:
> Thanks toni.
> Could you help add those instructions into the doc?
> And we might need to provide some tool to enable the CAP_NET_ADMIN cap in the
> startup scripts.

I'll send a wip patch today or tomorrow.
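
In the meantime, this is the gist of the idea (a sketch; the file and
binary names are hypothetical until the patch lands, and the rootwrap route
mentioned in the quoted thread would instead ship a filters file and call
the privileged commands through sudo):

    # Build the small C launcher and give the binary the file capability,
    # so it can start kuryr with CAP_NET_ADMIN without being root:
    gcc -o kuryr-launcher contrib/kuryr-launcher.c
    sudo setcap cap_net_admin+ep kuryr-launcher

    # Verify the capability and run the service as a regular user:
    getcap kuryr-launcher
    ./kuryr-launcher /usr/local/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf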

>
> On Tue, Jan 26, 2016 at 4:29 PM, Antoni Segura Puimedon
> <toni+openstac...@midokura.com> wrote:
>>
>> On Tue, Jan 26, 2016 at 8:13 AM, Baohua Yang <yangbao...@gmail.com> wrote:
>> > Hi hua
>> > Thanks for the suggestion!
>> > Yes, root wrap is also a good candidate.
>> > We will compare to choose the proper solution.
>> > Thanks!
>> >
>> > On Tue, Jan 26, 2016 at 1:59 PM, 王华 <wanghua.hum...@gmail.com> wrote:
>> >>
>> >> Hi Baohua,
>> >>
>> >> I think https://wiki.openstack.org/wiki/Rootwrap can solve this
>> >> problem.
>> >> It is used in other OpenStack projects like Nova, Neutron.
>> >>
>> >> Regards,
>> >> Wanghua
>> >>
>> >> On Tue, Jan 26, 2016 at 1:07 PM, Baohua Yang <yangbao...@gmail.com>
>> >> wrote:
>> >>>
>> >>> Hi toni
>> >>>
>> >>> Recently we found some issue when starting kuryr service without root
>> >>> privilege [1].
>> >>>
>> >>> Tfukushima mentioned that you have some suggestion on using capacity
>> >>> to
>> >>> solve this?
>>
>> I do. I have a C launcher that allows Kuryr to run with CAP_NET_ADMIN so
>> that any user can run it. My idea was to put it in contrib and then let the
>> distros decide if they want to run kuryr as root or use the launcher in
>> their packaged systemd service files.
>>
>> >>>
>> >>> We currently make a temp workaround by suggesting using sudo to start
>> >>> the
>> >>> service [2].
>> >>>
>> >>> Any advice?
>> >>>
>> >>> Thanks!
>> >>>
>> >>> [1] https://bugs.launchpad.net/kuryr/+bug/1516539.
>> >>> [2] https://review.openstack.org/#/c/272370
>> >>>
>> >>> --
>> >>> Best wishes!
>> >>> Baohua
>> >>>
>> >>>
>> >>>
>> >>> __
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> > --
>> > Best wishes!
>> > Baohua
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-14 Thread Antoni Segura Puimedon
On Mon, Dec 14, 2015 at 6:07 PM, Jaume Devesa  wrote:

> +1
>
> I think it is good compromise. Thanks Ryu!
>
> I understand the CLI will belong to the external part. I much prefer to
> have
> it in a separate project rather than into the plugin. Even if the code is
> tiny.
>

Let me summarize it:

python-midonetclient:    Low level API that lives and breathes in
                         midonet/midonet. Has the current cli.
python-os-midonetclient: High level API that is in
                         openstack/python-midonetclient
                         (can be packaged with a different name).

Are you asking for python-os-midonetclient not to include the cli tool?

I would prefer to stick with the OpenStack practice [1] of having them
together. I don't think developing a python cli client for the new
python-os-midonetclient that is on par with the neutron cli tool would be
that big of a task, and I think it would make operations nicer. It could
even find the midonet-api from the zookeeper registry like the other tools
do.

[1] https://github.com/openstack/python-neutronclient/blob/master/setup.cfg

>
> If you just want to do midonet calls for debugging or to check the MidoNet
> virtual infrastructure, it will be cleaner to install it without
> dependencies than to drag in the whole neutron project (networking-midonet
> depends on neutron).
>
> Regards,
>
> On 14 December 2015 at 17:32, Ryu Ishimoto  wrote:
>
>> On Tue, Dec 15, 2015 at 1:00 AM, Sandro Mathys 
>> wrote:
>> > On Tue, Dec 15, 2015 at 12:02 AM, Ryu Ishimoto 
>> wrote:
>> >
>> > So if I understand you correctly, you suggest:
>> > 1) the (midonet/internal) low level API stays where it is and will
>> > still be called python-midonetclient.
>> > 2) the (neutron/external) high level API is moved into it's own
>> > project and will be called something like python-os-midonetclient.
>> >
>> > Sounds like a good compromise which addresses the most important
>> > points, thanks Ryu! I wasn't aware that these parts of the
>> > python-midonetclient are so clearly distinguishable/separable but if
>> > so, this makes perfect sense. Not perfectly happy with the naming, but
>> > I figure it's the way to go.
>>
>> Thanks for the endorsement.  Yes, it is trivial to separate them (less
>> than a day of work) because they are pretty much already separated.
>>
>> As for the naming, I think it's better to take a non-disruptive
>> approach so that it's transparent to those currently developing the
>> low level midonet client.  To your question, however, I have another
>> suggestion, which is that for the high level client code, it may also
>> make sense to just include that as part of the plugin.  It's such
>> small code that it might not make sense to separate, and also likely
>> to be used only by the plugin in the future.  Which basically means
>> that the plugin need not depend on any python client library at all.
>> I think this will simplify even further.  It should also be ok to be
>> tied to the plugin release cycles as well assuming that's the only
>> place the client is needed.
>>
>> Cheers,
>> Ryu
>>
>>
>>
>> >
>> > -- Sandro
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Antoni Segura Puimedon
On Tue, Dec 8, 2015 at 1:58 PM, Galo Navarro  wrote:

> Hi Sandro,
>
> >> 1) (Downstream) packaging: midonet and python-midonetclient are two
> >> distinct packages, and therefore should have distinct upstream
> >> tarballs - which are compiled on a per repo basis.
>
> This is actually not accurate: there is no such thing as a midonet
> package. The midonet repo produces 4 or 5 separate packages: agent,
> cluster, tools, py client.
>
> I'd like to understand a bit better what exactly you're trying to
> achieve. Is it to produce tarballs? All of them? Just
> python-midonetclient?
>
> Let's examine the concrete requirements before rushing headlong into
> highly disruptive changes like splitting repos. For example, a
> py-midonetclient tarball can be built already without having a
> separate repo.
>
> > 3) In order to put python-midonetclient on PyPI, it's probably
> > required to be in its own repo as well, isn't it? Because that's
> > another requirement [3]
>
> Ditto. We already have a mirror repo of pyc for this purpose
> https://github.com/midonet/python-midonetclient, synced daily.
>

One of the problems with that is that it does not have any git log history,
nor does it feel like a coding project at all.

Allow me to put forward a solution that will allow you to keep the development
in the midonet tree while, at the same time, having a proper repository
with identifiable patches in github.com/midonet/python-midonetclient

Look at the repo I created [1] and especially at its commit history [2]

As you can see, all the commit history relevant to
midonet/python-midonetclient (and only that) is present.

This is generated in the following way. There should be a job that runs once
in the midonet repo:

git format-patch -o "${HOME}/patches_current" \
    --relative=python-midonetclient \
    7aef7ea7845a2125696303a277d40bd45c9240e2..master

Then, each day it should do:

cd ${JOB_HOME}
git clone https://github.com/celebdor/python-midonetclient.git
git clone https://github.com/midonet/midonet.git

pushd midonet
git format-patch -o "${JOB_HOME}/patches_new"
--relative=python-midonetclient \
7aef7ea7845a2125696303a277d40bd45c9240e2..master
popd
pushd python-midonetclient

for file in `diff <(ls -1a "${HOME}/patches_current") \
                  <(ls -1a "${JOB_HOME}/patches_new") | grep '^>' | cut -f2 -d' '`
do
    git am < "${JOB_HOME}/patches_new/$file"
done

popd
mv patches_new "${HOME}/patches_current"

It should be quite straightforward to change whichever job you currently
use to
this.

The last remaining issue will be that of tags. github.com/midonet/midonet is
not tagging all the releases. However, python-midonetclient should, so I
would just ask that when you make a release you push the tag to
python-midonetclient as well.
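
Concretely, something like this (the version and commit id are just
examples):

    # after cutting a midonet release, tag the matching synced commit in
    # the python-midonetclient repository and push it:
    cd python-midonetclient
    git tag -a v5.1.0 -m "midonet v5.1.0" <commit-of-the-synced-patches>
    git push origin v5.1.0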

[1] https://github.com/celebdor/python-midonetclient/
[2] https://github.com/celebdor/python-midonetclient/commits/master


Regards,

Toni


> g
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Antoni Segura Puimedon
On Wed, Dec 9, 2015 at 2:41 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Tue, Dec 8, 2015 at 1:58 PM, Galo Navarro <g...@midokura.com> wrote:
>
>> Hi Sandro,
>>
>> >> 1) (Downstream) packaging: midonet and python-midonetclient are two
>> >> distinct packages, and therefore should have distinct upstream
>> >> tarballs - which are compiled on a per repo basis.
>>
>> This is actually not accurate: there is no such thing as a midonet
>> package. The midonet repo produces 4 or 5 separate packages: agent,
>> cluster, tools, py client.
>>
>> I'd like to understand a bit better what exactly you're trying to
>> achieve. Is it to produce tarballs? All of them? Just
>> python-midonetclient?
>>
>> Let's examine the concrete requirements before rushing headlong into
>> highly disruptive changes like splitting repos. For example, a
>> py-midonetclient tarball can be built already without having a
>> separate repo.
>>
>> > 3) In order to put python-midonetclient on PyPI, it's probably
>> > required to be in its own repo as well, isn't it? Because that's
>> > another requirement [3]
>>
>> Ditto. We already have a mirror repo of pyc for this purpose
>> https://github.com/midonet/python-midonetclient, synced daily.
>>
>
> One of the problems with that is that it does not have any git log history,
> nor does it feel like a coding project at all.
>
> Allow me to put forward a solution that will allow you to keep the development
> in the midonet tree while, at the same time, having a proper repository
> with identifiable patches in github.com/midonet/python-midonetclient
>
> Look at the repo I created [1] and especially at its commit history [2]
>
> As you can see, all the commit history relevant to
> midonet/python-midonetclient (and only that) is present.
>
> This is generated in the following way. There should be a job that runs once
> in the midonet repo:
>
> git format-patch -o "${HOME}/patches_current" \
>     --relative=python-midonetclient \
>     7aef7ea7845a2125696303a277d40bd45c9240e2..master
>
> Then, each day it should do:
>
> cd ${JOB_HOME}
> git clone https://github.com/celebdor/python-midonetclient.git
> git clone https://github.com/midonet/midonet.git
>
> pushd midonet
> git format-patch -o "${JOB_HOME}/patches_new"
> --relative=python-midonetclient \
> 7aef7ea7845a2125696303a277d40bd45c9240e2..master
> popd
> pushd python-midonetclient
>
> for file in `diff <(ls -1a "${HOME}/patches_current") \
>                   <(ls -1a "${JOB_HOME}/patches_new") | grep '^>' | cut -f2 -d' '`
> do
>     git am < "${JOB_HOME}/patches_new/$file"
> done
>

Obviously at this point it should do a "git push" :P


> popd
> mv patches_new "${HOME}/patches_current"
>
> It should be quite straightforward to change whichever job you currently
> use to
> this.
>
> The last remaining issue will be that of tags. github.com/midonet/midonet
> is not
> tagging all the releases. However, python-midonetclient should, so I would
> just
> ask that when you make a release you push the tag to python-midonetclient
> as well.
>
> [1] https://github.com/celebdor/python-midonetclient/
> [2] https://github.com/celebdor/python-midonetclient/commits/master
>
>
> Regards,
>
> Toni
>
>
>> g
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] IRC: ditch #midonet-dev?

2015-12-01 Thread Antoni Segura Puimedon
On Tue, Dec 1, 2015 at 10:59 AM, Ivan Kelly  wrote:

> +1 for #2
>

PS: Beware of the top-posting! It makes vote counting harder ;-)


>
> On Tue, Dec 1, 2015 at 10:57 AM, Sandro Mathys 
> wrote:
> > Hi,
> >
> > Our IRC channels have been neglected for a long time, and as a result
> > we lost ownership of #midonet-dev, which is now owned by
> > freenode-staff. In theory, it should be very easy to get ownership
> > back, particularly since we still own #midonet. But in reality, it
> > seems like none of the freenode staff feel responsible for these
> > requests, so we still aren't owners after requesting it for 3 weeks
> > already.
> >
> > Therefore, Toni Segura suggested we just ditch it and move to
> > #openstack-midonet instead.
> >
> > However, several people have also said we don't need two channels,
> > i.e. we should merge #midonet and #midonet-dev.
> >
> > So, here's three proposals:
> >
> > Proposal #1:
> > * keep #midonet
> > * replace #midonet-dev with #openstack-midonet
> >
> > Proposal #2:
> > * keep #midonet
> > * merge #midonet-dev into #midonet
>

+1


> >
> > Proposal #3:
> > * replace both #midonet and #midonet-dev with #openstack-midonet
> >
> > I don't have any strong feelings for any of the proposals, but suggest
> > we go with proposal #2. Traffic in both #midonet and #midonet-dev is
> > rather low, so one channel should do - there are way busier OpenStack
> > channels out there. Furthermore, #midonet is shorter than
> > #openstack-midonet and already established. I also think people will
> > rather look in #midonet than #openstack-midonet if they're looking for
> > us.
> >
> > Thoughts?
> >
> > Cheers,
> > Sandro
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Current status of docker python lib supports for libnetwork

2015-11-10 Thread Antoni Segura Puimedon
On Mon, Nov 9, 2015 at 3:07 AM, Baohua Yang  wrote:

> Hi, kuryr guys
> I may not attend tomorrow's IRC meeting, hence a quick sync here.
> One task is to investigate the docker python lib's support for libnetwork.
> I checked the code here at https://github.com/docker/docker-py.
> They do have the network api support in code (in api/network.py), as
>  1. create_network
>  2. remove_network
>  3. inspect_network
>  4. connect_container_to_network
>  5. disconnect_container_from_network
>  Those apis are implemented by calling the rest api directly.
>  But those apis are not mentioned in the docs yet.
>

Great. That means that we could use it in the Rally or tempest work for the
functional testing.

Thanks Baohua for the investigation ;-)
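
Since they are thin wrappers over the REST API, the same endpoints can also
be exercised directly against the daemon socket while the docs catch up
(a sketch; it needs a curl built with unix socket support):

    # create a network through the kuryr driver:
    curl -s --unix-socket /var/run/docker.sock \
         -H 'Content-Type: application/json' \
         -d '{"Name": "foo", "Driver": "kuryr", "IPAM": {"Driver": "kuryr"}}' \
         http://localhost/networks/create

    # list the networks the daemon knows about:
    curl -s --unix-socket /var/run/docker.sock http://localhost/networks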


>  Thanks!
>
> --
> Best wishes!
> Baohua
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-06 Thread Antoni Segura Puimedon
On Fri, Nov 6, 2015 at 1:20 PM, Baohua Yang  wrote:

> It does cause confusion to call container-inside-vm a nested
> container.
>
> The "nested" term in the container area usually means
> container-inside-container.
>

I try to always put it as VM-nested container. But I probably slipped in
some mentions.


> We may refer to this (container-inside-vm) explicitly as a vm-holding container.
>

container-in-vm?


>
> On Fri, Nov 6, 2015 at 12:13 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> @Gal, I was asking about the "container in nova vm" case.
>> I am not sure if you were referring to this case as the nested containers
>> case. I guess the nested containers case would be "containers inside
>> containers", and this could be hosted on a nova vm or a nova bm node. Is my
>> understanding correct?
>>
>> Thanks Gal and Toni; for now I got the answer to my query related to the
>> "container in vm" case.
>>
>> -Vikas
>>
>> On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:
>>
>>> The current OVS binding proposals are not for nested containers.
>>> I am not sure if you are asking about that case or about the nested
>>> containers inside a VM case.
>>>
>>> For the nested containers, we will use Neutron solutions that support
>>> this kind of configuration, for example
>>> if you look at OVN you can define "parent" and "sub" ports, so OVN knows
>>> to perform the logical pipeline in the compute host
>>> and only perform VLAN tagging inside the VM (as Toni mentioned)
>>>
>>> If you need more clarification you can catch me on IRC as well and we
>>> can talk.
>>>
>>> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi All,

 I would appreciate inputs on the following queries:
 1. Are we assuming nova bm nodes to be the docker hosts for now?

 If Not:
  - Assuming nova vm as docker host and ovs as networking plugin:
 This line is from the etherpad[1], "Each driver would have
 an executable that receives the name of the veth pair that has to be bound
 to the overlay" .
 Query 1:  As per the current ovs binding proposals by Feisky[2]
 and Diga[3], the vif seems to be bound to br-int on the vm. I am unable to
 understand how the overlay will work. AFAICT, neutron will configure the
 br-tun of the compute machine's ovs only. How will the overlay (br-tun)
 configuration happen inside the vm?

  Query 2: Are we having double encapsulation (both at vm and
 compute)? Isn't it possible to bind the vif to the compute host's br-int?

  Query 3: I did not see subnet tags for the network plugin being
 passed in any of the binding patches[2][3][4]. Don't we need that?


 [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
 [2]  https://review.openstack.org/#/c/241558/
 [3]  https://review.openstack.org/#/c/232948/1
 [4]  https://review.openstack.org/#/c/227972/


 -Vikas Choudhary


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Best Regards ,
>>>
>>> The G.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Antoni Segura Puimedon
On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary  wrote:

> ++ [Neutron] tag
>
>
> On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi all,
>>
>> By network control plane I specifically mean here sharing network state
>> across docker daemons sitting on different hosts/nova_vms in multi-host
>> networking.
>>
>> libnetwork provides flexibility: vendors have a choice between having the
>> network control plane handled by libnetwork (libkv) or by the remote driver
>> itself, OOB. A vendor can choose to "mute" libnetwork/libkv by advertising
>> the remote driver capability as "local".
>>
>> "local" is our current default "capability" configuration in kuryr.
>>
>> I have following queries:
>> 1. Does it mean Kuryr is taking responsibility for sharing network state
>> across docker daemons? If yes, a network created on one docker host should
>> be visible in "docker network ls" on other hosts. To achieve this, I guess
>> the kuryr driver will need the help of some distributed data-store like
>> consul, etc., so that the kuryr driver on other hosts could create the
>> network in docker on those hosts. Is this correct?
>>
>> 2. Why can we not set the default scope to "Global" and let libkv do the
>> network state sync work?
>>
>> Thoughts?
>>
>
Hi Vikas,

Thanks for raising this. As part of the current work on enabling multi-node
we should be moving the default to 'global'.


>
>> Regards
>> -Vikas Choudhary
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] mutihost networking with nova vm as docker host

2015-11-05 Thread Antoni Segura Puimedon
On Thu, Nov 5, 2015 at 10:38 AM, Vikas Choudhary  wrote:

> ++[Neutron] tag
>
>
> On Thu, Nov 5, 2015 at 11:33 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi All,
>>
>> I would appreciate inputs on the following queries:
>> 1. Are we assuming nova bm nodes to be the docker hosts for now?
>>
>
Yes. That's the assumption for deployments as of now before we tackle
containers running on
more complicated deployment topologies (like containers running inside
tenant VMs).


>
>> If Not:
>>
>
When we go for other kinds of deployments,


>  - Assuming nova vm as docker host and ovs as networking plugin:
>> This line is from the etherpad[1], "Each driver would have an
>> executable that receives the name of the veth pair that has to be bound to
>> the overlay" .
>>
>
The binding will obviously have to change for such deployments


> Query 1:  As per the current ovs binding proposals by Feisky[2]
>> and Diga[3], the vif seems to be bound to br-int on the vm. I am unable to
>> understand how the overlay will work. AFAICT, neutron will configure the
>> br-tun of the compute machine's ovs only. How will the overlay (br-tun)
>> configuration happen inside the vm?
>>
>>  Query 2: Are we having double encapsulation (both at vm and
>> compute)? Isn't it possible to bind the vif to the compute host's br-int?
>>
>>  Query 3: I did not see subnet tags for the network plugin being
>> passed in any of the binding patches[2][3][4]. Don't we need that?
>>
>
The spec for containers on VMs has not yet been drafted and we are open to
proposals and discussion. I would like to have more than one spec proposal
for it and to try to achieve community consensus before the new year on
what's the best way to go.

Currently it seems that the approaches that will be proposed are:
- ovn-like solution with vlan tag per port [5]
- routed solution with port per VM as explained by Brenden Blanco [6]

I'm hoping that we will arrive to something in between or perhaps more
complete than either of those options.

[5] http://docs.openstack.org/developer/networking-ovn/containers.html
[6] https://gist.github.com/drzaeus77/89aa3db154c688a15ee6
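
To make the first approach a bit more concrete: in the ovn-like model [5]
the container port is created as a child of the VM's port, and only a VLAN
tag is significant inside the VM. In neutron CLI terms it looks roughly
like this (the ids and tag are examples, not a settled design):

    # parent port, the one the nova VM is plugged into:
    neutron port-create private

    # child port for the container, bound to the parent via a VLAN tag:
    neutron port-create private --binding:profile type=dict \
        parent_name=<parent-port-uuid>,tag=42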

Regards,

Toni

>
>>
>> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
>> [2]  https://review.openstack.org/#/c/241558/
>> [3]  https://review.openstack.org/#/c/232948/1
>> [4]  https://review.openstack.org/#/c/227972/
>>
>>
>> -Vikas Choudhary
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Antoni Segura Puimedon
On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang <yangbao...@gmail.com> wrote:

> +1, Antoni!
> btw, is our weekly meeting still on meeting-4 channel?
> I did not find it there yesterday.
>

Yes, it is still on openstack-meeting-4, but this week we skipped it, since
some of us were
traveling and we already held the meeting on Friday. Next Monday it will be
held as usual
and the following week we start alternating (we have yet to get a room for
that one).

>
> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>> Hi Kuryrs,
>>
>> Last Friday, as part of the contributors meetup, we also discussed code
>> contribution etiquette. Like other OpenStack projects (Magnum comes to
>> mind), the etiquette for what to do when there is disagreement on the way
>> to code a blueprint or fix a bug is as follows:
>>
>> 1.- Try to reach out so that the original implementation gets closer to a
>> compromise by having the discussion in gerrit (and on the mailing list if it
>> requires a wider range of arguments).
>> 2.- If a compromise can't be reached, feel free to make a separate
>> implementation, arguing well its differences, virtues and comparative
>> disadvantages. We trust the whole community of reviewers to be able to
>> judge which is the best implementation, and I expect that often the
>> reviewers will steer both submissions closer than they originally were.
>> 3.- If both competing implementations get the necessary support, the core
>> reviewers will take a specific decision on which to take based on technical
>> merit. Important factors are:
>> * conciseness,
>> * simplicity,
>> * loose coupling,
>> * logging and error reporting,
>> * test coverage,
>> * extensibility (when an immediate pending and blueprinted feature
>> can better be built on top of it).
>> * documentation,
>> * performance.
>>
>> It is important to remember that technical disagreement is a healthy
>> thing and should be tackled with civility. If we follow the rules above, it
>> will lead to a healthier project and a more friendly community in which
>> everybody can propose their vision with equal standing. Of course,
>> sometimes there may be a feeling of duplication, but even in the case where
>> one's solution is not selected (and I can assure you I've been there and
>> know how it can feel awkward) it usually still enriches the discussion and
>> constitutes a contribution that improves the project.
>>
>> Regards,
>>
>> Toni
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Antoni Segura Puimedon
Hi Kuryrs,

Last Friday, as part of the contributors meetup, we also discussed code
contribution etiquette. Like other OpenStack projects (Magnum comes to
mind), the etiquette for what to do when there is disagreement on the way
to code a blueprint or fix a bug is as follows:

1.- Try to reach out so that the original implementation gets closer to a
compromise by having the discussion in gerrit (and on the mailing list if it
requires a wider range of arguments).
2.- If a compromise can't be reached, feel free to make a separate
implementation, arguing well its differences, virtues and comparative
disadvantages. We trust the whole community of reviewers to be able to
judge which is the best implementation, and I expect that often the
reviewers will steer both submissions closer than they originally were.
3.- If both competing implementations get the necessary support, the core
reviewers will take a specific decision on which to take based on technical
merit. Important factors are:
* conciseness,
* simplicity,
* loose coupling,
* logging and error reporting,
* test coverage,
* extensibility (when an immediate pending and blueprinted feature can
better be built on top of it).
* documentation,
* performance.

It is important to remember that technical disagreement is a healthy thing
and should be tackled with civility. If we follow the rules above, it will
lead to a healthier project and a more friendly community in which
everybody can propose their vision with equal standing. Of course,
sometimes there may be a feeling of duplication, but even in the case where
one's solution is not selected (and I can assure you I've been there and
know how it can feel awkward) it usually still enriches the discussion and
constitutes a contribution that improves the project.

Regards,

Toni
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] meetup

2015-10-28 Thread Antoni Segura Puimedon
Hi fellow Kuryrs,

I want to remind you that we'll be meeting up at some corner of [1] to
discuss all things kuryr. The agenda will be:

- Current challenges with multinode
- The path to Mitaka:
  + Functional testing
  + COE Integrations
  + VM nested containers.


[1] http://sched.co/4Qan

Regards,

Toni
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Proposing Taku Fukushima as Kuryr core

2015-10-13 Thread Antoni Segura Puimedon
Hi fellow Kurýrs,

I would like to propose Taku Fukushima for the core Kuryr team due to his
unparalleled dedication to the project. He has written most of the code and
battled through the continuous libnetwork API changes. He will be a great
addition to the reviewing tasks.

Current core members, please, cast your vote by tomorrow night.

Regards,

Toni
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

2015-10-07 Thread Antoni Segura Puimedon
On Wed, Oct 7, 2015 at 9:40 PM, Egor Guz <e...@walmartlabs.com> wrote:

> Gal, thanks a lot. I have created the poll
> http://doodle.com/poll/udpdw77evdpnsaq6 where everyone can vote for a time
> slot.
>

Thanks Egor


>
> —
> Egor
>
>
> From: Gal Sagie <gal.sa...@gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Tuesday, October 6, 2015 at 12:08
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>, Eran Gampel <eran.gam...@huawei.com>,
> Antoni Segura Puimedon <t...@midokura.com>, Irena Berezovsky
> <ir...@midokura.com>, Mohammad Banikazemi <m...@us.ibm.com>, Taku Fukushima
> <tfukush...@midokura.com>, Salvatore Orlando <salv.orla...@gmail.com>,
> sky fei <feisk...@gmail.com>, "digambarpati...@yahoo.co.in"
> <digambarpati...@yahoo.co.in>, Digambar Patil <digambar...@gmail.com>
> Subject: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks
>
> Hello All,
>
> I have opened a Trello board to track all the assigned Kuryr tasks and their
> assignees, in addition to all the unassigned tasks we have defined.
>
> You can visit and look at the board here [1].
> Please email back if I missed you or any task that you are working on, or
> a task that you think needs to be on that list.
>
> This is only a temporary solution until we get everything organised; we
> plan to track everything with launchpad bugs (and the assigned blueprints).
>
> If you see any task on this list which doesn't have an assignee and you
> feel you have the time and the desire to contribute, please contact me and
> I will provide guidance.
>
> Thanks
> Gal
>
> [1] https://trello.com/b/cbIAXrQ2/project-kuryr
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Kuryr - Spec

2015-09-18 Thread Antoni Segura Puimedon
On Fri, Sep 18, 2015 at 8:30 PM, Gal Sagie  wrote:

> Hello everyone,
>
> We have a spec for project Kuryr in the Neutron repository [1]; we have been
> iterating on it internally and with the great help and feedback from the
> Magnum team.
>
> I am glad to say that we reached a pretty good point where most of the
> Magnum team have +1'd the spec. I personally think all of the items for the
> first milestone (which is for the Mitaka release) are well defined and
> already in progress (including the low level design process).
>
> I would like to thank the Magnum team for working closely with us on this
> and for
> the valuable feedback.
>
> The reason why we put this in the Neutron repository is the fact that we
> feel Kuryr is not another Neutron implementation; it is an infrastructure
> project that can be used by any Neutron plugin and needs (in my opinion) to
> go hand in hand with Neutron.
> We would like to make it visible to the Neutron team and I hope that we can
> get this spec merged for the Mitaka release to define our goals in Kuryr.
>
> We also have detailed designs and blueprints process in Kuryr repository
> for
> all the items described in the spec.
> I hope to see more comments/review from Neutron members on this spec.
>
> On a side note, we had a virtual sprint for Kuryr last week; apuimedo and
> taku will have a video of a demo thanks to the progress made on the sprint,
> so stay tuned for that to see what's available.
>

And Gal and I will be demoing the development version live at the Summit
[1], so come see our talk ;-)

[1]
https://openstacksummitoctober2015tokyo.sched.org/event/b90847a5496c0a2454929d95a0afc46e


>
> Thanks
> Gal.
>
> [1] https://review.openstack.org/#/c/213490/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] [neutron] libnetwork endpoint to Neutron abstractions

2015-08-21 Thread Antoni Segura Puimedon
Hi list,

I was reviewing the CreateEndpoint patch [1] from Taku, which had received
positive reviews. I put a comment there about an alternative way to map
endpoints to nets and subnets, and I would appreciate some discussion here
on the mailing list about the original proposal and the alternative I
mentioned.
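
For those who have not looked at the patch yet, the discussion boils down
to which Neutron resources a libnetwork endpoint should map to. In CLI
terms, the shape of it is roughly this (the names and addresses are just
examples, not the patch's actual logic):

    # a neutron network/subnet backing the libnetwork network ...
    neutron net-create foo
    neutron subnet-create --name foo-v4 foo 10.10.0.0/24

    # ... and a port per endpoint, carrying the endpoint's address:
    neutron port-create foo \
        --fixed-ip subnet_id=<subnet-uuid>,ip_address=10.10.0.5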

Regards,

Toni


=
[1] https://review.openstack.org/#/c/210052/9//COMMIT_MSG
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >