Re: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1

2018-06-06 Thread Irena Berezovsky
Sounds like a great initiative.

Let's follow up on the proposal with a kuryr-kubernetes blueprint.

BR,
Irena

On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu  wrote:

> Hi Kuryr-kubernetes team,
>
> I'd like to propose a new BP to support the Kubernetes Network Custom
> Resource Definition De-facto Standard Version 1 [1], which was drafted by
> the Network Plumbing Working Group of kubernetes-sig-network. I'll call it
> the NPWG spec below.
>
> The purpose of the NPWG spec is to standardize the multi-network effort
> around K8S by defining a CRD object 'network' which can be consumed by
> various CNI plugins. I know there has already been a BP, VIF-Handler And Vif
> Drivers Design, which designed a set of mechanisms to implement the
> multi-network functionality. However, I think it is still worthwhile to
> support this widely accepted NPWG spec.
>
> My proposal is to implement a new vif_driver, which can interpret the Pod
> annotation and CRD defined by the NPWG spec, and attach the pod to
> additional neutron subnets and ports accordingly. This new driver should be
> mutually exclusive with the sriov and additional_subnets drivers, so end
> users can choose either way of using multi-network with kuryr-kubernetes.
>
> Please let me know your thoughts; any comments are welcome.
>
>
>
> [1] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd
>
>
> Regards,
>
> --
> Peng Liu
>
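For context, the NPWG spec expresses the selection of additional networks as a pod annotation (`k8s.v1.cni.cncf.io/networks`), either as a JSON list or a comma-separated shorthand. Below is a minimal Python sketch of the parsing a vif_driver like the one proposed would need; the helper name and shorthand handling are illustrative, not taken from the kuryr code.

```python
import json

# Annotation key defined by the NPWG de-facto standard
ANNOTATION_KEY = "k8s.v1.cni.cncf.io/networks"


def parse_networks_annotation(value):
    """Parse the NPWG 'networks' annotation into a list of selections.

    The spec allows either a JSON list of objects or a comma-separated
    list of names, optionally qualified as "namespace/name@interface".
    """
    value = value.strip()
    if not value:
        return []
    if value.startswith("["):
        # JSON form: already a list of selection objects
        return json.loads(value)
    selections = []
    for item in value.split(","):
        item = item.strip()
        entry = {}
        if "@" in item:
            item, entry["interface"] = item.split("@", 1)
        if "/" in item:
            entry["namespace"], entry["name"] = item.split("/", 1)
        else:
            entry["name"] = item
        selections.append(entry)
    return selections
```

A driver could then look up the matching 'network' CRD object for each selection and wire an extra neutron port per entry.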
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens

2018-01-20 Thread Irena Berezovsky
+1

On Fri, Jan 19, 2018 at 9:42 PM, Hongbin Lu  wrote:

> Hi Kuryr team,
>
> I think Kuryr-libnetwork is ready to move out of beta status. I propose to
> make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable
> branch on it. What do you think about this proposal?
>
> Best regards,
> Hongbin
>


Re: [openstack-dev] [kuryr] Nominate Kirill Zaitsev as kuryr-tempest-core reviewer

2017-07-04 Thread Irena Berezovsky
+1

On Wed, Jul 5, 2017 at 4:23 AM, Vikas Choudhary 
wrote:

> +1
>
>> On Tue, Jul 4, 2017 at 7:59 PM, Antoni Segura Puimedon  wrote:
>
>> On Tue, Jul 4, 2017 at 12:23 PM, Gal Sagie  wrote:
>> > +1
>> +1
>> >
>> > On Tue, Jul 4, 2017 at 12:28 PM, Daniel Mellado <daniel.mellado...@ieee.org> wrote:
>> >>
>> >> Hi Team,
>> >>
>> >> I wanted to nominate Kirill for kuryr-tempest-core reviewer. He's been a
>> >> great help from the start, both contributing and reviewing.
>> >>
>> >> Please voice your support or concerns, if any.
>> >>
>> >> Best!
>> >>
>> >> Daniel
>> >>
>> >
>> >
>> >
>> >
>> > --
>> > Best Regards ,
>> >
>> > The G.
>> >
>> >
>>
>
>


Re: [openstack-dev] [kuryr] Ocata cycle ending and proposing new people as Kuryr cores

2017-01-14 Thread Irena Berezovsky
On Fri, Jan 13, 2017 at 6:49 PM, Antoni Segura Puimedon 
wrote:

> Hi fellow kuryrs!
>
> We are getting close to the end of the Ocata cycle and it is time to look
> back and appreciate the good work all the contributors did. I would like to
> thank you all for the continued dedication and participation in gerrit, the
> weekly meetings, answering queries on IRC, etc.
>
> I also want to propose two people that I think will help us a lot as core
> contributors in the next cycles.
>
> For Kuryr-lib and kuryr-libnetwork I want to propose Liping Mao. Liping
> has been contributing a lot since Mitaka, not just in code but in
> reviews, catching important details and fixing bugs. It is overdue that he
> gets to help us even more!
>
> +1

> For Kuryr-kubernetes I want to propose Ilya Chukhnakov. Ilya got into
> Kuryr at the end of the Newton cycle and has done a wonderful job in the
> Kubernetes integration contributing heaps of code and being an important
> part of the design discussions and patches. It is also time for him to
> start approving patches :-)
>
> +1

>
> Let's keep the vote open until next Friday (unless enough votes are cast
> earlier).
>
> Regards,
>
> Toni
>


Re: [openstack-dev] [neutron][dvr][fip] router support two external network

2016-11-13 Thread Irena Berezovsky
Hi,
The case you are describing may be related to the previously discussed RFE
[1].
Having additional networks with a FIP range attached via a router interface
should be allowed from the API point of view, but it may need some
adaptations to work properly. Please see the details in the discussion log [1].

[1] https://bugs.launchpad.net/neutron/+bug/1566191

BR,
Irena



On Sun, Nov 13, 2016 at 12:35 PM, Gary Kotton  wrote:

> Hi,
>
> Today the mapping is 1:1. So if you want additional mappings to internal
> networks then you can define more than one interface on your instance. Then
> map each interface to the relevant network.
>
> Thanks
>
> Gary
>
>
>
> *From: *huangdenghui 
> *Reply-To: *OpenStack List 
> *Date: *Saturday, November 12, 2016 at 10:26 AM
> *To: *OpenStack List 
> *Subject: *[openstack-dev] [neutron][dvr][fip] router support two
> external network
>
>
>
> Hi all
> Currently, the neutron model supports one router with one external
> network, which is used to connect the router to the outside world. A FIP
> can be allocated from the external network which is the gateway of a
> router. One private fixed IP of a port (usually a VM port) can only be
> associated with one floating IP. In some deployment scenarios, all ports
> are served by one router; all of them need an IP address reachable from
> the intranet, and some also need an IP address reachable from the
> internet. I was wondering how neutron could resolve this kind of use case?
> One idea is for one router to support two external networks (one for
> intranet, the other for internet, but only one has the gateway); the other
> idea is for one router to still have only one external network, but with
> two different types of subnet on it (one for internet, the other for
> intranet). Any comment is welcome. Thanks.
>


Re: [openstack-dev] [kuryr] Spec and devref placement

2016-09-12 Thread Irena Berezovsky
I am fine with option (b) as well.
We can add option (d):
Specs in openstack/*kuryr-specs* but devrefs in each specific project,
i.e., the one that will end up with the implementation code.


On Mon, Sep 12, 2016 at 2:38 PM, Antoni Segura Puimedon <celeb...@gmail.com>
wrote:

> Hi Kuryrs!
>
> On September 5th's weekly IRC meeting Irena Berezovsky suggested that
> we should take a decision regarding the location of specs and devrefs.
>
> Currently we default to putting all the specs and devrefs for:
> - Kuryr
> - Kuryr-libnetwork
> - Kuryr-kubernetes
>
> to openstack/kuryr. Fuxi is still being integrated and keeps its own doc.
>
> The three proposals that came up were:
> a) All specs and devrefs to openstack/kuryr
> b) Specs in openstack/kuryr but devrefs in each specific project,
> i.e., the one that will end up with the implementation code.
> c) Both specs and devrefs in each separate Kuryr project.
>
> I would like to advocate for option (b). It makes things easier when
> specs involve multiple kuryr pieces and, at the same time, it keeps
> development information in the place where you'd expect it: close to the
> code.
>
> Please weigh in on this issue here on the ML or in the weekly IRC
> meeting today. The idea is to reach a decision by next week's weekly
> IRC meeting and then record it in each subproject's "how to contribute".
>
> See you later in the weekly IRC,
>
> Toni
>


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Irena Berezovsky
Hi Ivan,
The approach looks very interesting and seems to be a reasonable effort to
make it work with kuryr as an alternative to the 'VLAN aware VM' approach.
Having the container presented as a neutron entity has its value, especially
for visibility/monitoring (e.g. mirroring) and security (e.g. applying
security groups).
But I do think that for the short term, this approach is a good way to
provide Container-in-VM support.
I think it's worth submitting a devref to kuryr to move forward.
BR,
Irena

On Mon, Sep 12, 2016 at 2:29 PM, Coughlan, Ivan 
wrote:

>
>
> *Overview*
>
> Kuryr proposes to address the issues of double encapsulation and exposure
> of containers as neutron entities when containers are running within VMs.
>
> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
> propose to:
>
> -  Use allowed-address-pairs configuration for the VM neutron port
>
> -  Use IPVLAN for wiring the Containers within VM
>
>
>
> In this way:
>
> -  Achieve efficient data path to container within VM
>
> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
> features to accelerate the data path (more details below)
>
> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>
> -  Provide a solution that works on existing and previous
> openstack releases
>
>
>
> This work should be done in a way permitting the user to optionally select
> this feature.
>
>
>
>
> *Required Changes*
>
> The four main changes we have identified in the current kuryr codebase are
> as follows:
>
> · Introduce an option of enabling “IPVLAN in VM” use case. This
> can be achieved by using a config file option or possibly passing a command
> line argument. The IPVLAN master interface must also be identified.
>
> · If using “IPVLAN in VM” use case, Kuryr should no longer create
> a new port in Neutron or the associated VEth pairs. Instead, Kuryr will
> create a new IPVLAN slave interface on top of the VM’s master interface and
> pass this slave interface to the Container netns.
>
> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
> identified so we can associate the additional IPVLAN addresses with the
> port. This can be achieved by querying Neutron’s show-port function and
> passing the VMs IP address.
>
> · If using “IPVLAN in VM” use case, Kuryr should associate the
> additional IPVLAN addresses with the VMs port. This can be achieved using
> Neutron’s allowed-address-pairs flag in the port-update function. We
> intend to make use of Kuryr’s existing IPAM functionality to request these
> IPs from Neutron.
>
>
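As a rough illustration of the last two changes, the port-update body Kuryr would send could be assembled as below (plain dicts stand in for Neutron client objects; the helper name is illustrative). Since IPVLAN slave interfaces share the master's MAC address, only ip_address entries are needed in each allowed-address pair.

```python
def allowed_address_pairs_update(vm_port, container_ips):
    """Build a Neutron port-update body that adds the containers'
    IPVLAN-served IPs as allowed address pairs on the VM's port.

    vm_port is the dict returned by Neutron's show-port; the MAC is
    omitted from each pair, so it defaults to the port's own MAC.
    """
    pairs = list(vm_port.get("allowed_address_pairs", []))
    existing = {p["ip_address"] for p in pairs}
    for ip in container_ips:
        if ip not in existing:
            pairs.append({"ip_address": ip})
            existing.add(ip)
    return {"port": {"allowed_address_pairs": pairs}}
```

The resulting body would then be passed to Neutron's port-update call for the VM port identified in the previous step.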
>
> *Asks*
>
> We wish to discuss the pros and cons.
>
> For example, the exposure of containers as proper neutron entities and the
> utility of neutron’s allowed-address-pairs are not yet well understood.
>
>
>
> We also wish to understand whether this approach is acceptable for kuryr.
>
>
>
>
>
> *EPA*
>
> The Enhanced Platform Awareness initiative is a continuous program to
> enable fine-tuning of the platform for virtualized network functions.
>
> This is done by exposing the processor and platform capabilities through
> the management and orchestration layers.
>
> When a virtual network function is instantiated by an Enhanced Platform
> Awareness enabled orchestrator, the application requirements can be more
> efficiently matched with the platform capabilities.
>
> http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>
> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>
> https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo
>
>
>
>
>
> Regards,
>
> Ivan….
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>


Re: [openstack-dev] [Neutron][qos] Question about behavior expectations if network policy is set

2016-09-02 Thread Irena Berezovsky
Mike,
As per QoS spec  [1], the behavior is:
``QoS policies could be applied:

   - Per network: All the ports plugged on the network where the QoS policy
   is applied get the policy applied to them.
   - Per port: The specific port gets the policy applied, when the port had
   any network policy that one is overridden.``

As David mentioned, an empty policy is probably the way to deal with the case
where there is a QoS network policy defined, but for some reason a specific
port should be kept without a policy. There is also an option to have certain
port types left without a policy, and you can define such behavior in the
plugin.
But anyway, this is not something that is reflected at the DB or Plugin
layer; it should be handled by the QoS backend driver.
[1]
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html

BR,
Irena
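To make the override rule concrete, here is a toy sketch of the resolution order described above. Plain dicts stand in for Neutron policy objects, and the function name is illustrative, not from the neutron code.

```python
def effective_qos_rules(port_policy, network_policy):
    """Resolve which QoS rules apply to a port.

    A port-level policy, even one with no rules, overrides the network
    policy; otherwise the network policy (if any) applies.
    """
    if port_policy is not None:
        return port_policy.get("rules", [])
    if network_policy is not None:
        return network_policy.get("rules", [])
    return []


net = {"name": "net-policy",
       "rules": [{"type": "bandwidth_limit", "max_kbps": 1000}]}
empty = {"name": "no-qos", "rules": []}

# Network policy applies when the port has none
assert effective_qos_rules(None, net) == net["rules"]
# An empty port policy overrides the network policy with "no rules"
assert effective_qos_rules(empty, net) == []
```

This is exactly the "empty policy" trick for exempting a single port from a network-wide policy.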

On Fri, Sep 2, 2016 at 11:58 AM, Shaughnessy, David <
david.shaughne...@intel.com> wrote:

> Hi Michael.
> When you update a network with a qos_policy_id, it updates all ports on
> that network too [1].
> If you want to have a port on that network without any QoS features on it,
> you can make a policy with no rules attached.
> This empty policy would override the network's QoS policy when it is
> attached to the port.
> Hope that answers your question.
> Regards.
> David.
>
> [1] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L427-L437
>
> -Original Message-
> From: Michael Micucci [mailto:micu...@midokura.com]
> Sent: Friday, September 2, 2016 8:28 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][qos] Question about behavior
> expectations if network policy is set
>
>
> Hey all,
>
> Quick question: if I have a network and a port, and neither has any QoS
> policy set, and then I change the network to set the qos_policy_id, should
> this new QoS policy affect traffic on the already-created port?
> In other words, is this a network default for future ports (like
> port_security_enabled), which would only affect ports created on this
> network from here on out (but leave already-created ports untouched)?
> Or is this a network setting which takes effect for all ports, regardless
> of whether they were already created or not?
>
> If the latter, the question has come up that this seems to mean that if a
> net policy is set, then there is no way a port can be set to have no
> policy, because if you unset a port's specific policy, it will fall back
> to the network policy rather than skip policy calculations altogether.
> So, what does one do if they want all ports on a net to follow Policy X,
> EXCEPT for a single port?
>
> I would say the first question is the most important for me to understand
> the behavior, with the second question as a further clarification.
>
> Thanks a bunch!
>
> Sincerely,
>
> Michael Micucci
>
>
>
>


Re: [openstack-dev] [Kuryr] Proposing vikasc for kuryr-libnetwork core

2016-08-17 Thread Irena Berezovsky
+1

On Wed, Aug 17, 2016 at 12:54 AM, Antoni Segura Puimedon  wrote:

> Hi Kuryrs,
>
> I would like to propose Vikas Choudhary for the core team for the
> kuryr-libnetwork subproject. Vikas has kept submitting patches and reviews
> at a very good rhythm in the past cycle and I believe he will help a lot to
> move kuryr forward.
>
> I would also like to propose him for the core team for the
> kuryr-kubernetes subproject since he has experience in the day to day work
> with kubernetes and can help with the review and refactoring of the
> prototype upstreaming.
>
> Regards,
>
> Antoni Segura Puimedon
>


Re: [openstack-dev] [Kuryr] Docker failed to add an existed network with GW interface already created

2016-05-26 Thread Irena Berezovsky
Hi Liping Mao,


On Thu, May 26, 2016 at 12:31 PM, Liping Mao (limao) 
wrote:

> Hi Vikas, Antoni and Kuryr team,
>
> When I use kuryr, I notice kuryr fails to add an existing
> network with a gateway interface already created by neutron [1][2].
>
> The bug happens because kuryr creates a neutron port for the gw
> port in ipam_request_address.
>
> I think kuryr should not actually create the neutron gw port at all,
> because the CNM model has no concept that maps to a Neutron vRouter.
> Until now, users have had to use the neutron API to attach the GW port
> of a private network to a vRouter. So I think Kuryr should not
> actually create the GW port.
>
I think it is possible to define via the kuryr configuration file whether
kuryr should create the gw port or not. Kuryr already does this for the DHCP
port.

> What do you think? Thanks for any comments.
>
>
> [1] https://bugs.launchpad.net/kuryr/+bug/1584286
> [2] https://review.openstack.org/#/c/319524/4
>
>
>
> Regards,
> Liping Mao
>


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-20 Thread Irena Berezovsky
On Wed, Apr 20, 2016 at 4:25 PM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

> Inline update.
>
> On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
>  wrote:
> > On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes  wrote:
> >> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
> [...]
> >> Yes, Nova's conductor gathers information about the requested networks
> >> *before* asking the scheduler where to place hosts:
> >>
> >>
> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
> >>
> >>>  That would require identifying that the port has a "qos_policy_id"
> >>> attached to it, and then, asking neutron for the specific QoS policy
> >>>   [3], then look out for a minimum bandwidth rule (still to be
> defined),
> >>> and extract the required bandwidth from it.
> >>
> >>
> >> Yep, exactly correct.
> >>
> >>> That moves, again some of the responsibility to examine and
> >>> understand external resources to nova.
> >>
> >>
> >> Yep, it does. The alternative is more retries for placement decisions
> >> because accurate decisions cannot be made until the compute node is
> already
> >> selected and the claim happens on the compute node.
> >>
> >>>  Could it make sense to make that part pluggable via stevedore?, so
> >>> we would provide something that takes the "resource id" (for a port in
> >>> this case) and returns the requirements translated to resource classes
> >>> (NIC_BW_KB in this case).
> >>
> >>
> >> Not sure Stevedore makes sense in this context. Really, we want *less*
> >> extensibility and *more* consistency. So, I would envision rather a
> system
> >> where Nova would call to Neutron before scheduling when it has received
> a
> >> port or network ID in the boot request and ask Neutron whether the port
> or
> >> network has any resource constraints on it. Neutron would return a
> >> standardized response containing each resource class and the amount
> >> requested in a dictionary (or better yet, an os_vif.objects.* object,
> >> serialized). Something like:
> >>
> >> {
> >>   'resources': {
> >> '': {
> >>   'NIC_BW_KB': 2048,
> >>   'IPV4_ADDRESS': 1
> >> }
> >>   }
> >> }
> >>
> >
> > Oh, true, that's a great idea, having some API that translates a
> > neutron resource to scheduling constraints. The external call will
> > still be required, but the coupling issue is removed.
> >
> >
>
>
> I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
> this, and we believe the synthesis of resource usage / scheduling
> constraints from neutron makes sense.
>
> We should probably look into providing those details in a read only
> dictionary during port creation/update/show in general, in that way,
> we would not be adding an extra API call to neutron from the nova
> scheduler to figure out any of those details. That extra optimization
> is something we may need to discuss with the neutron community.
>
What about the caller context?
I believe these details should be visible to admin users only.
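For what it's worth, the standardized response shape Jay sketches earlier in the thread could be consumed by the scheduler side with something as simple as the toy sketch below (the function name and port IDs are illustrative).

```python
def total_requested(resources_response, resource_class):
    """Sum one resource class across all ports in a response shaped like
    {'resources': {<port_id>: {<resource_class>: amount, ...}, ...}}."""
    return sum(per_port.get(resource_class, 0)
               for per_port in resources_response.get("resources", {}).values())


resp = {"resources": {"port-1": {"NIC_BW_KB": 2048, "IPV4_ADDRESS": 1},
                      "port-2": {"NIC_BW_KB": 1024}}}

# Aggregate NIC bandwidth requested across both ports of the boot request
assert total_requested(resp, "NIC_BW_KB") == 3072
```

The scheduler would then match these totals against the available inventory per compute node or resource pool.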

>
>

> >> In the case of the NIC_BW_KB resource class, Nova's scheduler would
> look for
> >> compute nodes that had a NIC with that amount of bandwidth still
> available.
> >> In the case of the IPV4_ADDRESS resource class, Nova's scheduler would
> use
> >> the generic-resource-pools interface to find a resource pool of
> IPV4_ADDRESS
> >> resources (i.e. a Neutron routed network or subnet allocation pool)
> that has
> >> available IP space for the request.
> >>
> >
> > Not sure about the IPV4_ADDRESS part because I still didn't look on
> > how they resolve routed networks with this new framework, but for
> > other constraints makes perfect sense to me.
> >
> >> Best,
> >> -jay
> >>
> >>
> >>> Best regards,
> >>> Miguel Ángel Ajo
> >>>
> >>>
> >>> [1]
> >>>
> >>>
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
> >>> [2] https://bugs.launchpad.net/neutron/+bug/1560963
> >>> [3]
> >>>
> http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy
>


Re: [openstack-dev] [Neutron] BGP support

2016-03-28 Thread Irena Berezovsky
Hi Gary,
The new L2GW spec [1] comes to enable inter-cloud connection, stretching
the network between the local and the remote clouds using tunnels between
border VTEP devices.
A VTEP can be populated manually with remote MAC and (optionally) IP entries.
BGP support is a bit orthogonal, or may I say complementary.

[1] https://review.openstack.org/#/c/270786/

On Mon, Mar 28, 2016 at 9:36 AM, Gary Kotton  wrote:

> Hi,
> In the M cycle BGP support was added in tree. I have seen specs in the L2
> GW project for this support too. Are we planning to consolidate the
> efforts? Will the BGP code be moved from the Neutron git to the L2-GW
> project? Will a new project be created?
> Sorry, a little in the dark here and it would be nice if someone could
> please provide some clarity here. It would be a pity that there were
> competing efforts and my take would be that the Neutron code would be the
> single source of truth (until we decide otherwise).
> I think that the L2-GW project would be a very good place for that service
> code to reside. It can also have MPLS etc. support. So it may be a natural
> fit.
> Thanks
> Gary
>


Re: [openstack-dev] [neutron] rate-limit

2016-03-14 Thread Irena Berezovsky
Hi Andy,
(Adding neutron tag)
Please open an RFE bug under neutron and add the qos tag. This will
facilitate the discussion of the use case's feasibility.
Please also join the QoS IRC meetings:
https://wiki.openstack.org/wiki/Meetings/QoS.

BR,
Irena

On Mon, Mar 14, 2016 at 2:05 PM, Andy Wang  wrote:

> Hi All,
>
> I want to develop a rate-limit feature in a contrail-openstack context.
> My target is to control the aggregate rate of all the VMs in one project
> or one tenant.
> I want to do it using Linux TC. Is it feasible?
> Is there a better idea for this? Any suggestion is very appreciated.
>
> Thanks
> Andy Wang
>
>
>
>
>
>
>


Re: [openstack-dev] [neutron][TaaS] Possible points to be considered for TaaS Spec

2016-03-14 Thread Irena Berezovsky
Hi Reedip,
Please see my comments inline

On Tue, Mar 8, 2016 at 9:19 AM, reedip banerjee  wrote:

> While reading up the specs in [1] and [2], there are certain things which
> we may need to discuss before proceeding forward
>
> a) Reference point for Ingress/Egress traffic:
> There may be some confusion related to how we are labelling
> Ingress and Egress (is it with respect to a VM, to a switch,
> or to any other entity?).
> As we are looking from "inside the VM" and not from "inside the network",
> that needs to be made clear.
>
I think it is worth being consistent with other neutron features, for
example Security Groups.

>
> b) How to perceive TaaS:
> In the section "Proposed Changes" Taas has been compared with a Core
> Neutron
> Plugin ( L3-Router) and a plugin which has emerged out of Neutron (
> Neutron LBaaS).
> This might cause confusion to the reviewers. It would be better that we
> decide
> how we would like to demonstrate TaaS:
> - Is it a plugin which can be integrated with the Neutron Core
> - Or is it an extension of the Core Neutron Services which can be used by
> selected users
>
> Based on the decision, we can modify the explanation to make the spec a
> bit more streamed.
>
I think it's an advanced service adding value to core neutron.

>
> c) Device Owner for TaaS:
> - If the Tap Service creates a "destination" port, the port would have a
> device owner
> of the format "network:tap"
> - If the "destination" port is then connected to a VM and the VM is booted
> up, nova
> changes the owner to "compute:nova"
>
Probably the difference will be whether TaaS is allowed to remove this port
or not.

>
> # Is there any impact of the change of the device_owner
> # If there is an impact, should there be a change in nova so that the
> device_owner is not modified
> # When, in the future, TaaS supports a user providing an "already created
> port", should the device owner
> be checked and modified?
>
> d) Outcome of Deleting the VM where TaaS operates
> Following might be added to the Spec:
>
> 1. Deletion of the VM (and port attched to it) from which we were
> mirroring (source of the mirror):
> In this case we would do a cascade delete of the 'Tap_Flow' instances that
> were associated with the port that was deleted.
>
Are you sure you want to delete the Flow? Maybe it should reflect the
status of being no longer operational. I personally do not think a
user-created resource should be deleted without an explicit user operation.

>
> 2. Deletion of the VM (and port attched to it) to which we were mirroring
> (Destination of the mirror):
> In this case we would do a cascade delete of the 'Tap_Service' instance
> that was associated with the port that was deleted.
>
Same as previous comment.

>
> e) Making the API independent of OpenVSwitch
> As per our last discussion [3], it is better that we split our
> implementation for TaaS,
> so that
>  # the focus is not limited to OpenVSwitch, which may be a point of
> concern during review
>  # allow other vendors to create their own pluggable implementation
>
+1

>
> f) Choice of Tapping before/after Sec Groups
>
> Security Groups can filter a lot, and implementing TaaS before or after
> the SG
> can impact the overall monitoring.
> As referenced in [1], we can provide this option as a future course of
> work, and
> in the meanwhile specify in the spec which option we are working on (Before
> or After),
> to make it clear.
>
I think it can be a TapFlow attribute, let's say 'position', that can be
either 'port' or 'vnic':

'vnic' to capture ingress traffic after it passes inbound SG filters and
egress traffic before it passes outbound SG filters

'port' to capture ingress traffic before it passes inbound SG filters and
egress traffic after it passes outbound SG filters
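To make the discussion concrete, a minimal sketch of how such a 'position'
attribute could be validated. The attribute name, allowed values, and default
here are assumptions taken from this thread, not the actual TaaS API
definition:

```python
# Hypothetical validation for the proposed TapFlow 'position' attribute.
# 'port'/'vnic' and the 'port' default are illustrative assumptions.
VALID_POSITIONS = ('port', 'vnic')


def validate_position(value, default='port'):
    """Return a validated tap position, falling back to the default."""
    if value is None:
        return default
    if value not in VALID_POSITIONS:
        raise ValueError(
            "position must be one of %s, got %r" % (VALID_POSITIONS, value))
    return value
```

With this shape, an unspecified position keeps today's behavior, while an
explicit 'vnic' opts in to tapping after the inbound SG filters.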


>
> [1]:
> https://review.openstack.org/#/c/96149/8/specs/juno/tap-as-a-service.rst
> [2]:
> https://review.openstack.org/#/c/256210/5/specs/mitaka/tap-as-a-service.rst
> [3]:
> http://eavesdrop.openstack.org/meetings/taas/2016/taas.2016-03-02-06.33.log.txt
>
> --
> Thanks and Regards,
> Reedip Banerjee
> IRC: reedip
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [L2-Gateway] Install and Configuration issue

2016-03-08 Thread Irena Berezovsky
Hi Jason,
According to the L2GW config, it should be set as in this line:
https://github.com/openstack/networking-l2gw/blob/master/etc/l2gw_plugin.ini#L25

I think it should work as the default setting, but maybe you can try setting
it explicitly.
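For reference, the provider entry in l2gw_plugin.ini typically looks like the
following; this is an illustrative example only, and the exact driver path
should be verified against the file linked above for your networking-l2gw
version:

```ini
[service_providers]
# Illustrative example -- verify the driver path against the linked
# l2gw_plugin.ini before relying on it.
service_provider = L2GW:l2gw:networking_l2gw.services.l2gateway.service_drivers.L2gwDriver:default
```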

Hope it helps,
Irena

On Tue, Mar 8, 2016 at 12:33 PM, Jason Guy  wrote:

> Hi, I am trying to get the L2GW plugin working with my openstack setup. I
> did the installation from the git repo using setup.py install.
>
> I am having trouble getting the Neutron-server to take the plugin though.
> Here is the config line from the neutron.conf:
> # Plugins
> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
> service_plugins =
> neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin
>
> After restarting neutron-server, I am getting this error in the
> neutron-server.log:
>
> INFO neutron.manager [req-b537d3d8-5ad5-419c-a7a0-133991af38fc - - - - -]
> Loading Plugin: networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin
> ERROR neutron.services.service_base
> [req-b537d3d8-5ad5-419c-a7a0-133991af38fc - - - - -] No providers specified
> for 'L2GW' service, exiting
>
> I do not see anything in the install instructions regarding the
> neutron.conf configuration, so I am guessing at this point, hence the
> email. :) I have searched and tried to figure this out by looking at the
> code. What should the service_provider be set to?
>
> Thanks,
> Jason
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

2015-11-18 Thread Irena Berezovsky
On Wed, Nov 18, 2015 at 8:31 AM, Takashi Yamamoto 
wrote:

> hi,
>
> On Thu, Nov 12, 2015 at 2:11 AM, Vikram Hosakote (vhosakot)
>  wrote:
> > Hi,
> >
> > TAAS looks great for traffic monitoring.
> >
> > Some questions about TAAS.
> >
> > 1) Can TAAS be used for provider networks as well, or just for tenant
> > networks ?
>
> currently only for VM ports on tenant networks.
>
> >
> > 2) Will there be any performance impact is every neutron port and every
> > packet is mirrored/duplicated ?
>
> i guess per-port impact is negligible.
> there's definitely per-packet impacts.
> i don't have any numbers though.
>
> >
> > 3) How is TAAS better than a non-mirroring approaches like
> packet-sniffing
> > (wireshark/tcpdump) and tracking interface counters/metrics ?
>
> i think taas is richer but probably slower than them.
>
> >
> > 4) Is TAAS a legal/lawful way to intercept/duplicate customer traffic in
> a
> > production cloud ? Or, TAAS is used just for debugging/troubleshooting ?
>
> although i'm not sure about legal/lawful requirements,
> i guess taas can be used for such purposes.
>

You can check this presentation for potential usage scenarios:


https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tap-as-a-service-taas-port-monitoring-for-neutron-networks

>
> > I was not able to find answers for these questions in
> > https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track.
> >
> > Thanks!
> >
> >
> > Regards,
> > Vikram Hosakote
> > vhosa...@cisco.com
> > Software Engineer
> > Cloud and Virtualization Group (CVG)
> > Cisco Systems
> > Boxborough MA USA
> >
> > From: Takashi Yamamoto 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Tuesday, November 10, 2015 at 10:08 PM
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [neutron][tap-as-a-service] weekly meeting
> >
> > hi,
> >
> > tap-as-a-service meeting will be held weekly, starting today.
> > http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting
> > anyone interested in the project is welcome.
> > sorry for immediate notice.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Proposing Taku Fukushima as Kuryr core

2015-10-14 Thread Irena Berezovsky
+1


On Tue, Oct 13, 2015 at 5:07 PM, Gal Sagie  wrote:

> +1
>
> Taku is a great addition to the team and hoping to see him continue
> deliver high quality
> contribution in all aspects of the project.
>
> On Tue, Oct 13, 2015 at 4:52 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>> Hi fellow Kurýrs,
>>
>> I would like to propose Taku Fukushima for the core Kuryr team due to his
>> unparalleled dedication to the project. He has written most of the code
>> and
>> battled through the continuous libnetwork API changes. He will be a great
>> addition to the reviewing tasks.
>>
>> Current core members, please, cast your vote by tomorrow night.
>>
>> Regards,
>>
>> Toni
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-26 Thread Irena Berezovsky
I would like to second Kevin. This can be done in a similar way to how the
ML2 Plugin passes plugin_context to ML2 Extension Drivers:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L910
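To illustrate, a rough sketch of how such an API object could be handed to
extensions, mirroring the plugin_context pattern above. All class and method
names here are illustrative assumptions for discussion, not existing neutron
code:

```python
# Sketch of option (4) from the thread: a small facade exposing only the
# agent internals that extensions may touch, so agent refactorings do not
# break extensions. Names are illustrative.
class OVSAgentExtensionAPI(object):
    """Facade over the agent, passed to extensions instead of the agent."""

    def __init__(self, int_br, local_vlan_map):
        self._int_br = int_br
        self._local_vlan_map = local_vlan_map

    def request_int_br(self):
        # Hand out the integration bridge object for port manipulation.
        return self._int_br

    def get_local_vlan(self, network_id):
        # Read-only access to the agent's local VLAN mapping.
        return self._local_vlan_map.get(network_id)


class AgentCoreResourceExtension(object):
    """Extension base receiving the facade at initialization time."""

    def initialize(self, connection, driver_type, agent_api=None):
        # The extension stores the facade, not the whole agent object.
        self.agent_api = agent_api
```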

BR,
Irena

On Fri, Sep 25, 2015 at 11:57 AM, Kevin Benton  wrote:

> I think the 4th of the options you proposed would be the best. We don't
> want to give extensions direct access to the agent object or else we will run
> the risk of breaking extensions all of the time during any kind of
> reorganization or refactoring. Having a well defined API in between will
> give us flexibility to move things around.
>
> On Fri, Sep 25, 2015 at 1:32 AM,  wrote:
>
>> Hi everyone,
>>
>> (TL;DR: we would like an L2 agent extension to be able to call methods on
>> the agent class, e.g. OVSAgent)
>>
>> In the networking-bgpvpn project, we need the reference driver to
>> interact with the ML2 openvswitch agent with new RPCs to allow exchanging
>> information with the BGP VPN implementation running on the compute nodes.
>> We also need the OVS agent to setup specific things on the OVS bridges for
>> MPLS traffic.
>>
>> To extend the agent behavior, we currently create a new agent by
>> mimicking the main() in ovs_neutron_agent.py, but instead of instantiating
>> OVSAgent we instantiate a class that overloads the OVSAgent
>> class with the additional behavior we need [1].
>>
>> This is really not the ideal way of extending the agent, and we would
>> prefer using the L2 agent extension framework [2].
>>
>> Using the L2 agent extension framework would work, but only partially: it
>> would easily allow us to register our RPC consumers, but would not let us
>> access some data structures/methods of the agent that we need to use:
>> setup_entry_for_arp_reply and local_vlan_map, and access to the OVSBridge
>> objects to manipulate OVS ports.
>>
>> I've filed an RFE bug to track this issue [5].
>>
>> We would like something like one of the following:
>> 1) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of the initialize method
>> [4]
>> 2) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access to the agent object (and thus let the extension call methods
>> of the agent) by giving the agent as a parameter of a new setAgent method
>> 3) augment the L2 agent extension interface (AgentCoreResourceExtension)
>> to give access only to specific/chosen methods on the agent object, for
>> instance by giving a dict as a parameter of the initialize method [4],
>> whose keys would be method names, and values would be pointer to these
>> methods on the agent object
>> 4) define a new interface with methods to access things inside the agent,
>> this interface would be implemented by an object instantiated by the agent,
>> and that the agent would pass to the extension manager, thus allowing the
>> extension manager to pass the object to an extension through the
>> initialize method of AgentCoreResourceExtension [4]
>>
>> Any feedback on these ideas...?
>> Of course any other idea is welcome...
>>
>> For the sake of triggering reaction, the question could be rephrased as:
>> if we submit a change doing (1) above, would it have a reasonable chance of
>> merging ?
>>
>> -Thomas
>>
>> [1]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
>> [2] https://review.openstack.org/#/c/195439/
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
>> [5] https://bugs.launchpad.net/neutron/+bug/1499637
>>
>> _
>>
>> Ce message et ses pieces jointes peuvent contenir des informations 
>> confidentielles ou privilegiees et ne doivent donc
>> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu 
>> ce message par erreur, veuillez le signaler
>> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
>> electroniques etant susceptibles d'alteration,
>> Orange decline toute responsabilite si ce message a ete altere, deforme ou 
>> falsifie. Merci.
>>
>> This message and its attachments may contain confidential or privileged 
>> information that may be protected by law;
>> they should not be distributed, used or copied without authorisation.
>> If you have received this email in error, please notify the sender and 
>> delete this message and its attachments.
>> As emails may be altered, Orange is not liable for messages that have been 
>> modified, changed or 

[openstack-dev] [neutron][nova][qos] network QoS support driven by VM flavor/image requirements

2015-09-24 Thread Irena Berezovsky
I would like to start discussion regarding user experience when certain
level of network QoS is expected to be applied on VM ports. As you may know
basic networking QoS support was introduced during Liberty Release
following spec, Ref [1]
As it was discussed during the last networking-QoS meeting, Ref [2], the nova
team is driving toward the approach where a neutron port is created with all
required settings and then the VM is created with the pre-created port and
not with a requested network. While this approach serves the decoupling and
separation of compute and networking concerns, it will require smarter client
orchestration and we may lose some functionality we have today. One of the
usage scenarios currently supported is that a Cloud Provider may associate
certain requirements with nova flavors. Once a Tenant requests a VM with this
flavor, nova (nova-scheduler) will make sure to fulfill the requirements. A
possible way to make this work for networking-qos is to set:
 nova-manage flavor set_key --name m1.small --key quota:vif_qos_policy
--value 

With the current VM creation workflow this will require nova to request
neutron to create the port and apply the qos policy with the specified
policy_id. This will require changes on the nova side.
I am not sure how to support the above user scenario with the pre-created
port approach.
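A hypothetical sketch of the flavor-driven flow described above: nova reads a
qos policy id from the flavor's extra specs and includes it in the port it
asks neutron to create. The extra-spec key and helper function are
illustrative assumptions from this thread, not the actual nova code path:

```python
# Illustrative only: build neutron port-create arguments from a flavor's
# extra specs. The 'quota:vif_qos_policy' key is the example key used in
# this thread, not an established nova extra spec.
QOS_KEY = 'quota:vif_qos_policy'


def port_args_from_flavor(extra_specs, network_id):
    """Return port-create kwargs, carrying the flavor's qos policy if set."""
    args = {'network_id': network_id}
    policy_id = extra_specs.get(QOS_KEY)
    if policy_id:
        args['qos_policy_id'] = policy_id
    return args
```

The open question in this mail is exactly who runs this logic once ports are
pre-created by the client rather than by nova.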

I would like to ask your opinion regarding the direction for QoS in
particular, but the question is general for nova-neutron integration:
should an explicitly decoupled networking/compute approach replace the
current way that nova delegates networking requirements to neutron?

BR,
Irena


[1]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html
[2]
http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-16-14.02.log.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL Non-Candidacy

2015-09-12 Thread Irena Berezovsky
Kyle,
Thank you for the hard work you did making the neutron project and neutron
community better!
You have been open and very supportive as a neutron community lead.
Hope you will stay involved.


On Fri, Sep 11, 2015 at 11:12 PM, Kyle Mestery  wrote:

> I'm writing to let everyone know that I do not plan to run for Neutron PTL
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan
> recently put it in his non-candidacy email [1]. But it goes further than
> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a
> full time job. In the case of Neutron, it's more than a full time job, it's
> literally an always on job.
>
> I've tried really hard over my three cycles as PTL to build a stronger web
> of trust so the project can grow, and I feel that's been accomplished. We
> have a strong bench of future PTLs and leaders ready to go, I'm excited to
> watch them lead and help them in anyway I can.
>
> As was said by Zane in a recent email [3], while Heat may have pioneered
> the concept of rotating PTL duties with each cycle, I'd like to highly
> encourage Neutron and other projects to do the same. Having a deep bench of
> leaders supporting each other is important for the future of all projects.
>
> See you all in Tokyo!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Kuryr - virtual sprint

2015-08-20 Thread Irena Berezovsky
The second or last week of September works for me.

On Thu, Aug 20, 2015 at 3:22 PM, Antoni Segura Puimedon 
toni+openstac...@midokura.com wrote:



 On Wed, Aug 19, 2015 at 11:50 PM, Salvatore Orlando 
 salv.orla...@gmail.com wrote:

 Hi Gal,

 even if I've been a lurker so far, I'm interested in attending for
 learning and contributing to it with my massive bug-injecting skills!

 You said virtual sprint and somewhere in september - I think
 somewhere refers to dates?
 Anyway I am pretty much open to any date from September 7th onwards.

 Salvatore


 On 19 August 2015 at 19:57, Gal Sagie gal.sa...@gmail.com wrote:

 Hello everyone,

 During our last meeting an idea was brought up that we try to do a
 virtual sprint
 for Kuryr somewhere in September.

 Basically the plan is very similar to the mid cycle sprints or feature
 sprints where
 we iterate on couple of tasks online and finish gaps we might have in
 Kuryr.
 (I think we are talking about 2-3 days)


 Great Idea. I propose September 14th, 15th and 16th.



 The agenda for the sprint is dependent on the amount of work we finish
 by then,
 but it will probably consist of containerising some of the common
 plugins and connecting
 things end to end. (for host networking)

 I started this email in order to find the best dates for it, so if you
 plan on participating
 please share your preferred dates (anyone that has a Neutron plugin
 might want to offer a containerised version of it with Kuryr to integrate
 with Docker and lib network and the sprint
 is probably a good place to start doing it)

 Thanks
 Gal.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service driver

2015-08-19 Thread Irena Berezovsky
Current VPNaaS Service Plugin inherits from VpnPluginRpcDbMixin, which is
not required for some vendor solutions, since L3 is implemented without
leveraging L3 Agents to manage router namespaces (ODL, MidoNet, etc).
I guess if the Mixin usage is changed to conditional RPC support based on
driver requirements, following what Salvatore suggested makes perfect sense.

On Wed, Aug 19, 2015 at 11:06 AM, Salvatore Orlando salv.orla...@gmail.com
wrote:

 my 0.02€ on the matter inline.

 Regards,
 Salvatore

 On 18 August 2015 at 23:45, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 hi brandon,

 thanks for your answer.

 my answers inline,



 On Tue, Aug 18, 2015 at 8:53 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:

 ​So let me make sure I understand this. You want to do a separate
 service plugin for what would normally be separate drivers under one
 service plugin.  The reasons for this are:


 1. You don't want to give users the ability to choose the type; you want it
 to always be the same one

 While in theory it is possible to have multiple BGPVPN providers in
 the same deployment, there are control and data plane aspects that the
 service type framework at the moment cannot deal with. Mathieu brought
 some examples in the bug report. The bottom line appears to be that the
 choice of the l3 service plugin (or whatever serves l3 in your deployment)
 also dictates the choice of the BGPVPN service provider to employ.

 2. Some types do want to be the source of truth of the data stored,
 instead of it being the service plugin database.

 This point has little to do with service types. It's about the fact that
 plugins are not required to implement the various db mixins in neutron.db
 and therefore not required to use the neutron DB.


 First, let me address the possibility of a solution using one service
 plugin and multiple drivers per type:


 I think that you can overcome #1 by having the instantiation of the service
 plugin check if there is more than 1 provider active; if so, you can
 just throw an exception saying you can only have 1.  I'd have to look at it
 more to see if there are any caveats to this, but I think that would work.


 For #2, assuming #1 works, then the drivers that are defined can have
 some boolean that they set that will tell the plugin whether they are the
 source of truth or not, and depending on that you can store the data in the
 service plugin's db or just pass the data along, also pass GET requests to
 the drivers as well.


 I agree that those workarounds will surely work, but I wonder what is the
 meaning of a service plugin/type that can only support one service
 provider? Can't the service plugin be the service provider directly?


 I believe there is some value, but I am not able to quantify it at the
 moment.
 - A single service plugin also implies (more or less) a common user-facing
 API. I really don't want to end up in a condition where the user API looks
 different (or the workflow is different) according to what's backing the
 neutron BGPVPN implementation
 - A single service plugin provides a common place for all the boilerplate
 management logic. This works for most drivers, but not for those who don't
 rely on neutron DB as a data source (unless you manage to build a
 sqlalchemy dialect for things such as opencontrail APIs, but I seriously
 doubt that it would be feasible)
 - Distinct service plugins might lead to different workflows. This is not
 necessarily a bad thing, because integration for some backends might need
 it. However this means that during review phase particular attention should
 be paid to ensure the behaviour of each service plugin respects the API
 specification.



 The reasons why I'm considering this change are :

 1. I'm not sure we would have some use cases where we would be able to
 choose one bgpvpn backend independently from the provider of the core
 plugin (or a mech driver in the ML2 case) and/or the router plugin.
 If one uses ODL to manage their core resources, they won't be able to use
 nuage or contrail to manage their bgpvpn connection.
 The bgpvpn project is more about having a common API than having the
 capacity to mix backends. At least for the moment.


 I agree with this; but this problem exists regardless of whether you have
 a single service plugin with drivers or multiple service plugins. You are
 unlikely to be able to use the contrail BGPVPN service plugin if core and
 l3 are managed by ODL, I think.



 2. I'm also considering that each plugin, which would be backend
 dependent, could declare what features it supports through the use of
 extensions.


 Unfortunately extensions are the only way to declare supported
 capabilities at the moment. But please - don't end up allowing each service
 plugin to expose a different API.


 Each plugin would be a bgpvpn service type, and would implement the
 bgpvpn extension, but some of them could extend the bgpvpn_connection
 resource with other extensions also hosted in the bgpvpn 

Re: [openstack-dev] [neutron] Adding results to extension callbacks (ml2 port_update ext handling).

2015-07-15 Thread Irena Berezovsky
Hi Bob, Miguel

On Tue, Jul 14, 2015 at 5:19 PM, Robert Kukura kuk...@noironetworks.com
wrote:

 I haven't had a chance to review this patch in detail yet, but am
 wondering if this is being integrated with ML2 as an extension driver? If
 so, that should clearly address how dictionaries are extended and input
 values are validated. If not, why?

This was my initial idea as well. But with this approach we had to split the
QoS extension into two parts: one for QoS policy and rule management and the
other for QoS policy association with a network/port.
The first would be realized by the QoS service plugin and the latter could be
realized as an ML2 extension.
With the current approach, the QoS Plugin takes care of both parts. As Miguel
mentioned, the port/network QoS policy association information can be added
by enhancing the current callback mechanism. It works fine when we need to
propagate details from ML2 to the QoS Plugin for configuration, but lacks
support for adding QoS policy details to port attributes when get_port is
invoked.
With the approach we took, all QoS functionality is managed by a single
plugin and requires fewer integration pieces in several places.
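As a toy illustration of the callback enhancement being discussed (callbacks
returning dicts that the caller merges into the resource on read): the tiny
registry below is a stand-in for the real neutron callbacks module, and all
names are illustrative, not neutron's API:

```python
# Toy callback registry where AFTER_READ-style callbacks return a dict of
# fields to add, instead of mutating the resource directly.
_callbacks = {}


def subscribe(event, func):
    _callbacks.setdefault(event, []).append(func)


def notify_with_results(event, resource):
    """Collect each callback's returned dict and merge it into resource."""
    for func in _callbacks.get(event, []):
        extension = func(resource)
        if extension:
            resource.update(extension)
    return resource


def qos_after_read(port):
    # A QoS-style callback contributing its field to port dicts on read;
    # the policy id here is a made-up example value.
    return {'qos_policy_id': 'policy-1'}
```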


 -Bob


 On 7/13/15 10:50 PM, Miguel Angel Ajo wrote:

 Inline reply (I've added to CC relevant people for ml2/plugin.py
 port_update extension
 handing -via git blame-) as they probably have an opinion here
 (specially the last
 two options).

 Kevin Benton wrote:

 This sounds like a lot of overlap with the dict extend functions. Why
 weren't they adequate for the QoS use case?


 Let me explain, I believe Mike exceeded the proposal with AFTER_READ,
 that's not the plan,
 even if we could do as he proposed,

 AFTER_READ dict extension is just a temporary workaround until we have
 a separate explicit
 api @armax is working on. Making explicit that your service is going
 to extend resources, and handling that in an ordered way is a good thing.
 and handled that in an ordered way is a good thing.

 Afterwards, the source of this review has came from ML2 implementation of
 AFTER_CREATE/AFTER_UPDATE notification for ports/nets.

 Let me explain:

  As a decoupled, mixinless service extending core resources, we
 need to do two things:

 1) Extending the core resources as other extensions would do, adding
 stuff to the port/network
 dicts, here's where it comes the current AFTER_READ dict extension,
 and future API making
 that more explicit and more controlled.

 2) Verifying the extended values we receive on core resources, by
 registering to BEFORE_*
 callbacks. For example, if a tenant is trying to use a qos_profile_id
 he doesn't have access to,
 or that doesn't exist, we can cancel the operation by throwing an
 exception.

   We need to extend the notifications for ports and networks, as
 that's not notified currently.
 Mike will work on that too in a separate patch.


 3) Taking the finally extended values and storing associations in the
 database (AFTER_UPDATE/AFTER_CREATE) so any later reads of the
 port/network will get the right
 qos_profile_id in AFTER_READ.


 We have found that AFTER_CREATE/AFTER_UPDATE happens at plugin level
 (neutron/plugins/ml2/plugin.py / update_port) and that information
 passed down is
 very brief in our case (basically a None port if no ml2-known
 attribute is changed), and
 ml2 has to be aware of every single extension.

 Here there are two options:
a) we make ml2 also aware of qos_profile_id changes, complicating
 the business logic down
 there, or...

b) we send the AFTER_CREATE/UPDATE events, and we listen to the
 callback
 listeners to such notification, and they will tell us if there's any
 relevant field which must
 be propagated down to agents. Then it's up to the agents to use such
 field or not.


Mike's patch proposal is in the direction of (b), he's a long term
 thinker, I was proposing
 that he just go with (a) for now, but let's discuss and find what's right.


  On Mon, Jul 13, 2015 at 7:58 AM, Mike Kolesnikmkole...@redhat.com
 wrote:

  Hi,

 I sent a simple patch to check the possibility to add results to
 callbacks:
 https://review.openstack.org/#/c/201127/

 This will allow us to decouple the callback logic from the ML2
 plugin in
 the QoS scenario where we need to update the agents in case the
 profile_id
 on a port/network changes.
 It will also allow for a cleaner way to extend resource attributes as
 AFTER_READ callbacks can return a dict of fields to add to the original
 resource instead of mutating it directly.

 Please let me know what you think of this idea.

 Regards,
 Mike



 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 

Re: [openstack-dev] [neutron] enriching port binding extension API dictionaries with key-values

2015-06-28 Thread Irena Berezovsky
Hi Andreas,

On Fri, Jun 26, 2015 at 4:04 PM, Andreas Scheuring 
scheu...@linux.vnet.ibm.com wrote:

 Hi together,
 for a new ml2 plugin I would like to pass over some data from neutron to
 nova on port creation and update (exploiting port binding extension
 [1]). For my prototype I thought of using one of the following response
 dictionaries to add my information:

 - binding:vif_details
 - binding:profile

 The API ref describes these attributes (port create / port update - both
 response) as dictionaries, but without restricting the key-value pairs
 or naming a defined number [1].

binding:profile is an input dictionary that should tell neutron how to
bind the port.
binding:vif_details is an output dictionary that should provide nova (or
other consumers) with enough details to plug the port properly.
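To illustrate the direction of the two dictionaries, a sketch with made-up
values; the exact keys vary by mechanism driver, so this is an example of
shape, not a normative schema:

```python
# Illustrative port binding dictionaries. All key/value choices below are
# example data, not a fixed contract.
port_create_request = {
    'port': {
        'network_id': 'net-uuid',
        'binding:host_id': 'compute-1',
        # Input: hints telling neutron/ML2 how to bind the port.
        'binding:profile': {'pci_slot': '0000:0a:00.1'},
    }
}

port_create_response_extract = {
    'binding:vif_type': 'ovs',
    # Output: details the consumer (e.g. nova) uses to plug the VIF.
    'binding:vif_details': {'port_filter': True, 'ovs_hybrid_plug': True},
}
```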


 I've also seen some other ml2 plugins enriching those fields with unique
 data. So I assume this is not considered an API change, is it?

 Important: It's only about the response. The input comes from a
 configuration file.


 Thanks


 [1] http://developer.openstack.org/api-ref-networking-v2-ext.html


 --
 Andreas
 (IRC: scheuran)



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Modular L2 Agent

2015-06-23 Thread Irena Berezovsky
On Mon, Jun 22, 2015 at 7:48 PM, Sean M. Collins s...@coreitpro.com wrote:

 On Mon, Jun 22, 2015 at 10:47:39AM EDT, Salvatore Orlando wrote:
  I would probably start with something for enabling the L2 agent to
 process
  features such as QoS and security groups, working on the OVS agent, and
  then in a second step abstract a driver interface for communicating with
  the data plane. But I honestly do not know if this will keep the work too
  OVS-centric and therefore won't play well with the current efforts to
 put
  linux bridge on par with OVS in Neutron. For those question we should
 seek
  an answer from our glorious reference control plane lieutenant, and
 perhaps
  also from Sean Collins, who's coordinating efforts around linux bridge
  parity.

 I think that what Salvatore has suggested is good. If we start creating
 good API contracts, and well defined interfaces in the reference control
 plane agents - this is a good first step. Even if we start off by doing
 this just for the OVS agent, that'll be a good template for what we
 would need to do for any agent-driven L2 implementation - and it could
 easily be re-used by others.

 To be honest, if you squint hard enough there really are very few
 differences between what the OVS agent and the Linux Bridge agent do -
 the parts that handle control plane communication, processing
 data updates, and so forth should all be very similar.

 They only become different at the lower
 levels where it's brctl/ip vs. ovs-vsctl/ovs-ofctl CLI calls - so why
 maintain two separate agent implementations when quite a bit of what
 they do is functionally identical?


As Miguel mentioned, the patch [1] adds support for a QoS driver in L2
agents. Since QoS support is planned to be leveraged by OVS and SR-IOV, and
maybe later by Linux Bridge, the idea is to build a common L2 agent layer that
enables generic support for features (extensions), with QoS as the first
feature to be supported. This is not the Modular L2 Agent, but it is definitely
a step in the right direction.
This work should have minimal impact on the server side; it is mostly about
code reuse by the L2 agents.

[1] https://review.openstack.org/#/c/189723/
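For illustration, the common extension layer described above could look roughly like the following. All class and method names here are invented for the sketch -- see the review for the actual interface:

```python
# Hypothetical sketch of a common L2-agent extension layer: the agent
# dispatches port events to registered feature extensions, QoS first.
# Names are illustrative, not the real neutron interface.
import abc

class AgentExtension(abc.ABC):
    @abc.abstractmethod
    def handle_port(self, port):
        """React to a port create/update event."""

class QosAgentExtension(AgentExtension):
    def __init__(self):
        self.applied = {}          # port id -> policy, stand-in for driver calls

    def handle_port(self, port):
        policy = port.get("qos_policy")
        if policy:
            self.applied[port["id"]] = policy

class ExtensionManager:
    """The agent-side dispatcher that is shared across L2 agents."""
    def __init__(self, extensions):
        self.extensions = list(extensions)

    def handle_port(self, port):
        for ext in self.extensions:
            ext.handle_port(port)

qos = QosAgentExtension()
mgr = ExtensionManager([qos])
mgr.handle_port({"id": "port-1", "qos_policy": "bw-limit-10mbit"})
print(qos.applied)  # → {'port-1': 'bw-limit-10mbit'}
```

The value of this shape is that OVS, SR-IOV, and later Linux Bridge agents can share the dispatcher and only differ in the backend-specific driver each extension calls into.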

BR,
Irena

 --
 Sean M. Collins

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Regarding Flow Classifier proposals for Liberty!

2015-06-07 Thread Irena Berezovsky
Hi Vikram,
Agree with what you stated. An additional use case can be Tap-as-a-Service, to
allow filtering of the mirrored packets.

BR,
Irena

On Fri, Jun 5, 2015 at 11:47 AM, Vikram Choudhary 
vikram.choudh...@huawei.com wrote:

  Dear All,



 There are multiple proposals floating around flow classifier rules for
 Liberty [1], [2], and [3].

 I feel we should all work together and try to address all our use cases
 with a unified framework rather than working separately toward the
 same goal.



 Moreover, I find the flow classifier defined by the existing SFC
 proposal [2] generic enough to address all the use cases with minor
 extensions.



 In this regard, I would like everyone to come forward, exchange their thoughts,
 work together, and get it right the first time rather than making the same
 effort separately and ending up duplicating code and effort :(.

 I always feel less code will make our life happy in the long run ;)



 Please let me know about your views.



 [1] Add Neutron API extensions for packet forwarding

   https://review.openstack.org/#/c/186663/



 [2] Neutron API for Service Chaining [Flow Filter resource]


 https://review.openstack.org/#/c/177946/6/specs/liberty/neutron-api-for-service-chaining.rst



 [3] QoS API Extension [Defines classifier rule in QoSRule. Classifier
 rule can really grow big in the long run]:


 https://review.openstack.org/#/c/88599/10/specs/liberty/qos-api-extension.rst



 Thanks

 Vikram



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Progressing/tracking work on libvirt / vif drivers

2015-06-02 Thread Irena Berezovsky
Hi Ian,
I like your proposal. It sounds very reasonable and makes the separation of
concerns between Neutron and Nova very clear. I think vif plug script
support [1] will help decouple Neutron from the Nova dependency.
Thank you for sharing this,
Irena
[1] https://review.openstack.org/#/c/162468/

On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to
 a hopefully interested audience.

 At the summit, we wrote up a spec we were thinking of doing at [1].  It
 actually proposes two things, which is a little naughty really, but hey.

 Firstly we propose that we turn binding into a negotiation, so that Nova
 can offer binding options it supports to Neutron and Neutron can pick the
 one it likes most.  This is necessary if you happen to use vhostuser with
 qemu, as it doesn't work for some circumstances, and desirable all around,
 since it means you no longer have to configure Neutron to choose a binding
 type that Nova likes and Neutron can choose different binding types
 depending on circumstances.  As a bonus, it should make inter-version
 compatibility work better.

 Secondly we suggest that some of the information that Nova and Neutron
 currently calculate independently should instead be passed from Neutron to
 Nova, simplifying the Nova code since it no longer has to take an educated
 guess at things like TAP device names.  That one is more contentious, since
 in theory Neutron could pass an evil value, but if we can find some pattern
 that works (and 'pattern' might be literally true, in that you could get
 Nova to confirm that the TAP name begins with a magic string and is not
 going to be a physical device or other interface on the box) I think that
 would simplify the code there.
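The "magic string" check mentioned above could be as simple as the following sketch. The prefix, length, and character rules here are assumptions for illustration, not an agreed design:

```python
import re

# Hypothetical validation of a Neutron-supplied TAP device name before
# Nova trusts it. The prefix and sanity rules are illustrative only.
TAP_PREFIX = "tap"
IFNAMSIZ = 15  # Linux interface names are at most 15 characters

def is_acceptable_tap_name(name):
    """Reject names that could alias a physical NIC or other interface."""
    return (
        name.startswith(TAP_PREFIX)
        and len(name) <= IFNAMSIZ
        and re.fullmatch(r"[A-Za-z0-9_-]+", name) is not None
    )

print(is_acceptable_tap_name("tapdeadbeef-12"))  # → True
print(is_acceptable_tap_name("eth0"))            # → False
```

A check like this would let Nova accept a Neutron-chosen device name without having to compute it independently, while still refusing obviously evil values.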

 Read, digest, see what you think.  I haven't put it forward yet (actually
 I've lost track of which projects take specs at this point) but I would
 very much like to get it implemented and it's not a drastic change (in
 fact, it's a no-op until we change Neutron to respect what Nova passes).

 [1] https://etherpad.openstack.org/p/YVR-nova-neutron-binding-spec

 On 1 June 2015 at 10:37, Neil Jerram neil.jer...@metaswitch.com wrote:

 On 01/06/15 17:45, Neil Jerram wrote:

  Many thanks, John  Dan.  I'll start by drafting a summary of the work
 that I'm aware of in this area, at
 https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work.


 OK, my first draft of this is now there at [1].  Please could folk with
 VIF-related work pending check that I haven't missed or misrepresented
 them?  Especially, please could owners of the 'Infiniband SR-IOV' and
 'mlnx_direct removal' changes confirm that those are really ready for core
 review?  It would be bad to ask for core review that wasn't in fact wanted.

 Thanks,
 Neil


 [1] https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [QoS] weekly meeting - update

2015-04-20 Thread Irena Berezovsky
Hi,

This week neutron QoS meeting will take place on Tuesday, April 21 at 14:00
UTC on #openstack-meeting-3.

Next week, the meeting is back to its original slot: Wed at 14:00 UTC on
#openstack-meeting-3.

Please join if you are interested.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-07 Thread Irena Berezovsky
Hi Miguel,
Thank you for leading this.


On Tue, Apr 7, 2015 at 8:45 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:

 On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:

 On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando sorla...@nicira.com
 wrote:



 On 7 April 2015 at 00:33, Armando M. arma...@gmail.com wrote:


 On 6 April 2015 at 08:56, Miguel Ángel Ajo majop...@redhat.com wrote:

  I’d like to co-organize a QoS weekly meeting with Sean M. Collins,

 In the last few years, the interest for QoS support has increased,
 Sean has been leading
 this effort [1] and we believe we should get into a consensus about how to
 model an extension
 to let vendor plugins implement QoS capabilities on network ports and
 tenant networks, and
 how to extend agents, and the reference implementation  others [2]


 As you surely know, so far every attempt to achieve a consensus has failed
 in a pretty miserable way.
 This mostly because QoS can be interpreted in a lot of different ways,
 both from the conceptual and practical perspective.

 Yes, I’m fully aware of it, it was also a new feature, so it was out of
 scope for Kilo.

 It is important in my opinion to clearly define the goals first. For
 instance a simple extensions for bandwidth limiting could be a reasonable
 target for the Liberty release.

 I quite agree here, but IMHO, as you said it’s a quite open field
 (limiting, guaranteeing,
 marking, traffic shaping..), we should do our best in trying to define a
 model allowing us
 to build that up in the future without huge changes, on the API side I
 guess micro versioning
 is going to help in the API evolution.

 Also, at some point, we should/could need to involve the nova folks, for
 example, to define
 port flavors that can be associated to nova
 instance flavors, providing them
 1) different types of network port speeds/guarantees/priorities,
 2) being able to schedule instance/ports in coordination to be able to met
 specified guarantees.

 yes, complexity can skyrocket fast,

 Moving things such as ECN into future works is the right thing to do in
 my opinion. Attempting to define a flexible framework that can deal with
 advanced QoS policies specification is a laudable effort, but I am a bit
 skeptical about its feasibility.

 ++, I think focusing on perhaps bandwidth limiting may make a lot of sense

 Yes, I believe we should look into the future , but at the same pick our
 very first feature (or a
 very simple set of them) for L, stick to it, and try to make a design
 that can be extended.

+1






 As per discussion we’ve had during the last few months [3], I believe
 we should start simple, but
 prepare a model allowing future extensibility, to allow for example
 specific traffic rules (per port,
 per IP, etc..), congestion notification support [4], …


 Simple in my mind is even more extreme then what you're proposing
 here... I'd start with bare APIs for specifying bandwidth limiting, and
 then phase them out once this framework is in place.
 Also note that this kind of design bears some overlap with the flavor
 framework which is probably going to be another goal for Liberty.

 Indeed, and the flavor framework is something I'm hoping we can land by
 Liberty-1 (yes, I just said Liberty-1).

 Yes it’s something I looked at, I must admit I wasn’t able to see it work
 together (It doesn’t
 mean it doesn’t play well, but most probably I was silly enough not to
 see it :) ),

 I didn’t want to distract attention from the Kilo cycle focus making
 questions, so it should
 be a good thing to talk about during the first meetings.

 Who are the flavor fathers/mothers? ;)




 Morever, consider using common tools such as the specs repo to share and
 discuss design documents.


 Also a good idea.

 Yes, that was the plan now, we didn’t use it before to avoid creating
 unnecessary noise during this cycle.




 It’s the first time I’m trying to organize an openstack/neutron
 meeting, so, I don’t know what’s the
 best way to do it, or find the best timeslot. I guess interested people
 may have a saying, so I’ve
 looped anybody I know is interested in the CC of this mail.


 I think that's a good idea. Incidentally I was speaking with Sean
 regarding Summit session [1], and we were hoping we could get some folks
 together either prior or during the summit, to try and get some momentum
 going behind this initiative, once again.

 Very interesting [1]!, nice to see we start to have a bunch of people with
 an interest in QoS.


 I think is a good idea as well.  I was thinking that perhaps it might be a
 good idea to grab a design summit session as well (surely not a fishbowl
 one as they're totally unfit for this purpose).
 However, it might be good to achieve some sort of consensus before the
 summit, as as we know fairly well now the summit is probably the worst
 place where consensus can be achieved!


 And finally, agreed here as well.


 Yes, a bit of preliminary discussion, and a 

Re: [openstack-dev] [nova][neutron] Passthrough of PF's from SR-IOV capable devices.

2015-02-22 Thread Irena Berezovsky
Please see inline

On Thu, Feb 19, 2015 at 4:43 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Irena Berezovsky irenab@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 
  On Thu, Feb 5, 2015 at 9:01 PM, Steve Gordon sgor...@redhat.com wrote:
 
   - Original Message -
From: Przemyslaw Czesnowicz przemyslaw.czesnow...@intel.com
To: OpenStack Development Mailing List (not for usage questions) 
   openstack-dev@lists.openstack.org
   
Hi
   
 1) If the device is a normal PCI device, but is a network card,
 am I
 still able to
 take advantage of the advanced syntax added circa Juno to define
 the
 relationship between that card and a given physical network so
 that the
 scheduler can place accordingly (and does this still use the ML2
 mech
 drvier for
 SR-IOV even though it's a normal device.
   
Actually libvirt won't allow using normal PCI devices for network
interfaces into VM.
Following error is thrown by libvirt 1.2.9.1:
libvirtError: unsupported configuration: Interface type hostdev is
   currently
supported on SR-IOV Virtual Functions only
   
I don't know why libvirt prohibits that. But we should prohibit that
 on
Openstack side as well.
  
   This is true for hostdev style configuration, normal PCI devices
 are
   still valid in Libvirt for passthrough using hostdev though. The
 former
   having been specifically created for handling passthrough of VFs, the
   latter being the more generic passthrough functionality and what was
 used
   with the original PCI passthrough functionality introduced circa
 Havana.
  
   I guess what I'm really asking in this particular question is what is
 the
   intersection of these two implementations - if any, as on face value it
   seems that to passthrough a physical PCI device I must use the older
 syntax
   and thus can't have the scheduler be aware of its external network
   connectivity.
  
  Support for normal PCI device passthrough for networking in SR-IOV like
  way will require new VIF Driver support for hostdev style device guest
 XML
  being created and some call invocation to set MAC address and VLAN tag.
 
  
 2) There is no functional reason from a Libvirt/Qemu perspective
 that I
 couldn't
 pass through a PF to a guest, and some users have expressed
 surprise
   to me
 when they have run into this check in the Nova driver. I assume in
 the
 initial
 implementation this was prevented to avoid a whole heap of fun
   additional
 logic
 that is required if this is allowed (e.g. check that no VFs from
 the PF
 being
 requested are already in use, remove all the associated VFs from
 the
   pool
 when
 assigning the PF, who gets allowed to use PFs versus VFs etc.). Am
 I
 correct here
 or is there another reason that this would be undesirable to allow
 in
 future -
 assuming such checks can also be designed - that I am missing?

I think that is correct. But even if the additional logic was
   implemented  it
wouldn't work because of how libvirt behaves currently.
  
   Again though, in the code we have a distinction between a physical
 device
   (as I was asking about in Q1) and a physical function (as I am asking
 about
   in Q2) and similarly whether libvirt allows or not depends on how you
   configure in the guest XML. Though I wouldn't be surprised on the PF
 case
   if it is in fact not allowed in Libvirt (even with hostdev) it is
 again
   important to consider this distinctly separate from passing through the
   physical device case which we DO allow currently in the code I'm asking
   about.
  
  I think what you suggest is not difficult to support, but current (since
  Juno) PCI device passthrough  for networking is all about SR-IOV PCI
 device
  passthrough. As I mentioned, to support  normal PCI device will require
  libvirt VIF Driver adjustment. I think its possible to make this work
 with
  existing neutron ML2 SRIOV Mechanism Driver.

 Understood, was just trying to understand if there was an explicit reason
 *not* to do this. How should we track this, keep adding to
 https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough ?


I think a new etherpad should probably be created for Liberty in order
to track SR-IOV and PCI features. Most of the features proposed for Kilo
were rejected because the Nova and Neutron priorities focused on other areas.
All the listed and rejected features, along with new feature priorities, should
be evaluated and picked up by people willing to drive them. For Kilo we
started this work during the pci_passthrough weekly meetings and finalized
it at the summit. I think it worked well. I would suggest doing the same for
Liberty.

BR,
Irena


 Thanks,

 Steve

__
OpenStack Development Mailing List

Re: [openstack-dev] [nova][neutron] Passthrough of PF's from SR-IOV capable devices.

2015-02-08 Thread Irena Berezovsky
On Thu, Feb 5, 2015 at 9:01 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Przemyslaw Czesnowicz przemyslaw.czesnow...@intel.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
  Hi
 
   1) If the device is a normal PCI device, but is a network card, am I
   still able to
   take advantage of the advanced syntax added circa Juno to define the
   relationship between that card and a given physical network so that the
   scheduler can place accordingly (and does this still use the ML2 mech
   drvier for
   SR-IOV even though it's a normal device.
 
  Actually libvirt won't allow using normal PCI devices for network
  interfaces into VM.
  Following error is thrown by libvirt 1.2.9.1:
  libvirtError: unsupported configuration: Interface type hostdev is
 currently
  supported on SR-IOV Virtual Functions only
 
  I don't know why libvirt prohibits that. But we should prohibit that on
  Openstack side as well.

 This is true for hostdev style configuration, normal PCI devices are
 still valid in Libvirt for passthrough using hostdev though. The former
 having been specifically created for handling passthrough of VFs, the
 latter being the more generic passthrough functionality and what was used
 with the original PCI passthrough functionality introduced circa Havana.

 I guess what I'm really asking in this particular question is what is the
 intersection of these two implementations - if any, as on face value it
 seems that to passthrough a physical PCI device I must use the older syntax
 and thus can't have the scheduler be aware of its external network
 connectivity.

Support for normal PCI device passthrough for networking in SR-IOV like
way will require new VIF Driver support for hostdev style device guest XML
being created and some call invocation to set MAC address and VLAN tag.
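For reference, the two guest XML styles being contrasted look roughly like the following. The PCI addresses, MAC, and VLAN tag are illustrative; libvirt accepts the first form only for SR-IOV VFs, while the second is the generic passthrough path:

```xml
<!-- interface type='hostdev': network-aware passthrough, VFs only -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>
  </source>
  <mac address='52:54:00:12:34:56'/>
  <vlan>
    <tag id='100'/>
  </vlan>
</interface>

<!-- plain <hostdev>: generic PCI passthrough, works for normal devices too -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

The VIF driver adjustment discussed above would essentially mean emitting the second form for normal PCI devices, plus separate calls to set the MAC address and VLAN tag, since plain hostdev has no mac/vlan elements.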


   2) There is no functional reason from a Libvirt/Qemu perspective that I
   couldn't
   pass through a PF to a guest, and some users have expressed surprise
 to me
   when they have run into this check in the Nova driver. I assume in the
   initial
   implementation this was prevented to avoid a whole heap of fun
 additional
   logic
   that is required if this is allowed (e.g. check that no VFs from the PF
   being
   requested are already in use, remove all the associated VFs from the
 pool
   when
   assigning the PF, who gets allowed to use PFs versus VFs etc.). Am I
   correct here
   or is there another reason that this would be undesirable to allow in
   future -
   assuming such checks can also be designed - that I am missing?
  
  I think that is correct. But even if the additional logic was
 implemented  it
  wouldn't work because of how libvirt behaves currently.

 Again though, in the code we have a distinction between a physical device
 (as I was asking about in Q1) and a physical function (as I am asking about
 in Q2) and similarly whether libvirt allows or not depends on how you
 configure in the guest XML. Though I wouldn't be surprised on the PF case
 if it is in fact not allowed in Libvirt (even with hostdev) it is again
 important to consider this distinctly separate from passing through the
 physical device case which we DO allow currently in the code I'm asking
 about.

I think what you suggest is not difficult to support, but current (since
Juno) PCI device passthrough  for networking is all about SR-IOV PCI device
passthrough. As I mentioned, to support  normal PCI device will require
libvirt VIF Driver adjustment. I think its possible to make this work with
existing neutron ML2 SRIOV Mechanism Driver.


 -Steve

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SRIOV-error

2014-12-15 Thread Irena Berezovsky
Hi David,
Your error is not related to the agent.
I would suggest checking:

1.   nova.conf at your compute node for the PCI whitelist configuration

2.   Neutron server configuration for correct physical_network label 
matching the label in pci whitelist

3.   Nova DB tables containing PCI devices entries:

#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | 
mysql -u root
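For item 1, a whitelist entry in nova.conf might look like the following. The vendor/product IDs and the physical_network label are illustrative and must match your actual hardware and the label configured on the Neutron server:

```ini
# /etc/nova/nova.conf on the compute node (Juno-era syntax; illustrative values)
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet1"}
```

If the label here and the one in the Neutron ML2 SR-IOV configuration disagree, scheduling will fail even though the devices show up in the Nova DB.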
You should not run the SR-IOV agent in your setup. The SR-IOV agent is optional and 
currently does not add value if you use an Intel NIC.


Regards,
Irena
From: david jhon [mailto:djhon9...@gmail.com]
Sent: Tuesday, December 16, 2014 5:54 AM
To: Murali B
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] SRIOV-error

Just to be more clear, the command lspci | grep -i Ethernet gives the following output:
01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
How can I make the SR-IOV agent run and fix this bug?


On Tue, Dec 16, 2014 at 8:36 AM, david jhon 
djhon9...@gmail.commailto:djhon9...@gmail.com wrote:
Hi Murali,
Thanks for your response. I did the same; it has apparently resolved the errors,
but 1) neutron agent-list shows no agent for SR-IOV, and 2) the neutron port is
created successfully but creating a VM fails in scheduling as follows:

result from neutron agent-list:
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | xxx   | True           | neutron-openvswitch-agent |
| 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent     | blade08 | :-)   | True           | neutron-metadata-agent    |
| a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent         | blade08 | :-)   | True           | neutron-dhcp-agent        |
| d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent           | blade08 | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+

2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher 
[req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message handling: 
PCI $
2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 13$
2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-12-15 

Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread Irena Berezovsky


-Original Message-
From: henry hly [mailto:] 
Sent: Tuesday, December 16, 2014 3:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron 
decomposition change

On Tue, Dec 16, 2014 at 1:53 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
 Hi all,

 Following the approval for Neutron vendor code decomposition 
 (https://review.openstack.org/#/c/134680/), I just wanted to comment 
 that it appears to work fine to have an ML2 mechanism driver 
 _entirely_ out of tree, so long as the vendor repository that provides 
 the ML2 mechanism driver does something like this to register their 
 driver as a neutron.ml2.mechanism_drivers entry point:

   setuptools.setup(
   ...,
   entry_points = {
   ...,
   'neutron.ml2.mechanism_drivers': [
   'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
   ],
   },
   )

 (Please see
 https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f0378900
 1c2139b16de85c for the complete change and detail, for the example 
 that works for me.)

 Then Neutron and the vendor package can be separately installed, and 
 the vendor's driver name configured in ml2_conf.ini, and everything works.
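For what it's worth, the lookup Neutron effectively performs here (via stevedore) can be approximated with the standard library alone. This is a simplified sketch of entry-point resolution, not the actual ML2 loading code:

```python
# Simplified sketch of entry-point driver lookup; real ML2 uses stevedore.
from importlib import metadata

def find_mech_driver(name, group="neutron.ml2.mechanism_drivers"):
    """Return the loaded driver class registered as `name`, or None."""
    eps = metadata.entry_points()
    # Python 3.10+ exposes .select(); older versions return a dict of groups.
    entries = (eps.select(group=group) if hasattr(eps, "select")
               else eps.get(group, []))
    for ep in entries:
        if ep.name == name:
            return ep.load()  # imports xyz.openstack.mech_xyz:XyzMechanismDriver
    return None

# On a machine without the vendor package installed, nothing is found:
print(find_mech_driver("no_such_driver"))  # → None
```

This is why separate installation just works: the name in ml2_conf.ini is resolved against whatever packages have advertised that entry-point group, in-tree or not.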

 Given that, I wonder:

 - is that what the architects of the decomposition are expecting?

 - other than for the reference OVS driver, are there any reasons in
   principle for keeping _any_ ML2 mechanism driver code in tree?


Good questions. I'm also looking for the linux bridge MD, SRIOV MD...
Who will be responsible for these drivers?

Excellent question. In my opinion, a technology-specific but not vendor-specific 
MD (like SR-IOV) should not be maintained by a specific vendor. It should 
be open to contribution from all interested parties.

The OVS driver is maintained by the Neutron community, vendor-specific hardware 
drivers by their vendors, and SDN controller drivers by their own communities or 
vendors. But there are also other drivers, like SR-IOV, which are generic across 
many vendor-agnostic backends and can't be maintained by a single vendor/community.

So, it would be better to keep some general backend MDs in tree besides SR-IOV. 
There are also vif-type-tap, vif-type-vhostuser, 
hierarchical-binding-external-VTEP ... We can implement a very thin in-tree base 
MD that only handles vif binding and is backend agnostic; each backend provider 
is then free to implement its own service logic, either via a backend agent or via 
a driver derived from the base MD for agentless scenarios.
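A thin, backend-agnostic "bind-only" base MD of the kind suggested might be sketched as below. The PortContext here is a self-contained stand-in for ML2's real context object, and the class names are invented; the real thing would derive from neutron.plugins.ml2.driver_api.MechanismDriver:

```python
# Self-contained sketch of a thin "bind-only" base mechanism driver.
# PortContext is a stand-in for ML2's real port context.

class PortContext:
    def __init__(self, segments):
        self.segments_to_bind = segments
        self.bound = None

    def set_binding(self, segment_id, vif_type, vif_details):
        self.bound = (segment_id, vif_type, vif_details)

class SimpleBindingDriver:
    """Backend-agnostic binding; subclasses override vif_type/details."""
    vif_type = "tap"                     # e.g. the generic vif-type-tap case
    supported_types = {"vlan", "flat"}

    def get_vif_details(self, segment):
        return {"port_filter": False}

    def bind_port(self, context):
        # Bind the first segment whose network type the backend supports.
        for segment in context.segments_to_bind:
            if segment["network_type"] in self.supported_types:
                context.set_binding(segment["id"], self.vif_type,
                                    self.get_vif_details(segment))
                return

ctx = PortContext([{"id": "seg-1", "network_type": "vlan"}])
SimpleBindingDriver().bind_port(ctx)
print(ctx.bound)  # → ('seg-1', 'tap', {'port_filter': False})
```

A derived driver (agent-based or agentless) would only override vif_type, supported_types, and get_vif_details, leaving the binding loop shared.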

Keeping general backend MDs in tree sounds reasonable. 
Regards

 Many thanks,
  Neil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SRIOV failures error-

2014-12-14 Thread Irena Berezovsky
Hi David,
One configuration option is missing that you should be aware of:
In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:
In [ml2_sriov] section set PCI Device vendor and product IDs you use, in format 
vendor_id:product_id

supported_pci_vendor_devs =

Example:
supported_pci_vendor_devs = 8086:10ca

Then restart neutron-server.
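To find the vendor:product pair for your NIC, note that lspci -nn prints it in square brackets at the end of each line. A sketch of extracting it (the sample line below is illustrative; on a real host you would pipe lspci -nn | grep -i ethernet in instead):

```shell
# lspci -nn prints the vendor:product pair in brackets, e.g. [8086:10ed].
# Sample line stands in for real lspci output:
sample='03:10.1 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)'
# Pull out the 4-hex:4-hex ID:
echo "$sample" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
# → 8086:10ed
```

Whatever pair this prints for your VFs is what goes into supported_pci_vendor_devs.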

May I ask you to post this question at ask.openstack.org?

BR,
Irena

From: david jhon [mailto:djhon9...@gmail.com]
Sent: Monday, December 15, 2014 9:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] SRIOV failures error-

Hi, I am doing the same, i.e., configuring SR-IOV with OpenStack Juno. I have 
done this manually before, but now I am doing it within OpenStack. Can you please 
list the steps you used for configuring SR-IOV? I am currently following this 
link but getting many errors: 
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

Please share any tutorial you found for this configuration. I'll be really 
grateful for your help. Thanking you in anticipation;
looking forward to your response.
Thanks and Regards,

On Tue, Dec 2, 2014 at 12:18 PM, Itzik Brown 
itz...@redhat.commailto:itz...@redhat.com wrote:
Hi,
Seems like you don't have available devices for allocation.

What's the output of:
#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;' | 
mysql -u root

Itzik

On 12/02/2014 08:21 AM, Murali B wrote:
Hi

we are trying to bring up SR-IOV on our setup.

We are facing the below error when we try to create a VM; the instance goes 
into ERROR state during creation:
PciDeviceRequestFailed: PCI device request ({'requests': 
[InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=58584ee1-8a41-4979-9905-4d18a3df3425,spec=[{physical_network='physnet1'}])],
 'code': 500}) failed

followed the steps from the 
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

Please help us get rid of this error. Let us know if any configuration is 
required on the hardware side for this to work properly.

Thanks
-Murali


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-09 Thread Irena Berezovsky
Hi Daniel,
Please see inline

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Tuesday, December 09, 2014 4:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech 
driver/L2 and vif_driver

On Tue, Dec 09, 2014 at 10:53:19AM +0100, Maxime Leroy wrote:
 I have also proposed a blueprint to have a new plugin mechanism in 
 nova to load external vif driver. (nova-specs:
 https://review.openstack.org/#/c/136827/ and nova (rfc patch):
 https://review.openstack.org/#/c/136857/)
 
 From my point-of-view of a developer having a plugin framework for 
 internal/external vif driver seems to be a good thing.
 It makes the code more modular and introduce a clear api for vif driver 
 classes.
 
 So far, it raises legitimate questions concerning API stability and 
 public API that request a wider discussion on the ML (as asking by 
 John Garbut).
 
 I think having a plugin mechanism and a clear api for vif driver is 
 not going against this policy:
 http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support.
 
 There is no needs to have a stable API. It is up to the owner of the 
 external VIF driver to ensure that its driver is supported by the 
 latest API. And not the nova community to manage a stable API for this 
 external VIF driver. Does it make senses ?

Experience has shown that even if it is documented as unsupported, once the 
extension point exists, vendors & users will ignore the small print about its 
support status. There will be complaints raised every time it gets broken, until 
we end up being forced to maintain it as a stable API whether we want to or not. 
That's not a route we want to go down.

[IB] It should be up to the vendor to maintain it and make sure it's not 
broken. Having proper 3rd party CI in place should catch API contract changes.

 Considering the network V2 API, L2/ML2 mechanism driver and VIF driver 
 need to exchange information such as: binding:vif_type and 
 binding:vif_details.
 
 From my understanding, 'binding:vif_type' and 'binding:vif_details' as 
 a field part of the public network api. There is no validation 
 constraints for these fields (see 
 http://docs.openstack.org/api/openstack-network/2.0/content/binding_ex
 t_ports.html), it means that any value is accepted by the API. So, the 
 values set in 'binding:vif_type' and 'binding:vif_details' are not 
 part of the public API. Is my understanding correct ?

The VIF parameters are mapped into the nova.network.model.VIF class, which is 
doing some crude validation. I would anticipate that this validation will 
increase over time, because this is functional data flowing over the API and so 
needs to be carefully managed for upgrade reasons.
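For reference, the binding attributes under discussion are free-form fields on the Neutron port. A port might carry something like the following (the values and the helper function are illustrative only, not actual Nova/Neutron code; the lack of constraints on these fields is exactly the concern raised in this thread):

```python
# Illustrative sketch of the Neutron port binding fields discussed above.
# vif_type and vif_details are examples, not normative values.
port = {
    "id": "a1b2c3d4",                      # hypothetical port UUID
    "binding:vif_type": "ovs",             # consumed by Nova's VIF driver
    "binding:vif_details": {
        "port_filter": True,               # e.g. security groups supported
        "ovs_hybrid_plug": False,
    },
}

def vif_plugging_info(port):
    """Extract the fields a (hypothetical) Nova VIF driver would look at."""
    return port["binding:vif_type"], port.get("binding:vif_details", {})

vif_type, details = vif_plugging_info(port)
```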

Even if the Neutron impl is out of tree, I would still expect both Nova and 
Neutron core to sign off on any new VIF type name and its associated details 
(if any).

[IB] This may be the reasonable integration point, but it requires nova team 
review and approval. From my experience the nova team is extremely overloaded, 
therefore getting this code reviewed becomes a very difficult mission.
 
 What other reasons am I missing to not have VIF driver classes as a 
 public extension point ?

Having to find & install VIF driver classes from countless different vendors, 
each hiding their code away on their own obscure website, will lead to an awful 
end user experience when deploying Nova. Users are better served by having it 
all provided when they deploy Nova, IMHO.

If every vendor goes off & works in their own isolated world, we also lose the 
scope to align the implementations, so that common concepts work the same way 
in all cases and allow us to minimize the number of new VIF types required. The 
proposed vhostuser VIF type is a good example of this - it allows a single Nova 
VIF driver to be capable of potentially supporting multiple different impls on 
the Neutron side.
If every vendor worked in their own world, we would have ended up with multiple 
VIF drivers doing the same thing in Nova, each with its own set of bugs & 
quirks.

[IB] I think that most of the vendors that maintain a vif_driver out of nova do 
not do it on purpose and would prefer to see it upstream. Sometimes host side 
binding is not fully integrated with libvirt and requires some temporary 
additional code until libvirt provides complete support. Sometimes it is just a 
lack of nova team attention to get the proposed spec/code reviewed and accepted 
on time, which ends up with a fully supported neutron part and a missing small 
but critical vif_driver piece.
 
I expect the quality of the code the operator receives will be lower if it is 
never reviewed by anyone except the vendor who writes it in the first place.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|

Re: [openstack-dev] SRIOV failures error-

2014-12-02 Thread Irena Berezovsky
Hi Murali,
Seems there is a mismatch between pci_whitelist configuration and requested 
network.
In the table below:
physical_network: physnet2

In the error you sent, there is:
spec=[{physical_network='physnet1'}]

Please check the neutron and nova configuration for physical_network.
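The mismatch can be illustrated with a small sketch of the scheduler-side check (simplified; the real nova PCI stats matching is more involved than this):

```python
# Simplified sketch of matching a PCI request spec against a host's
# pci_stats pools -- enough to show why a request for physnet1 cannot be
# satisfied by a host whose only pool is tagged physnet2.
def host_satisfies(pci_stats, spec):
    """True if any pool matches every key in the request spec and has
    at least one device available."""
    return any(
        pool.get("count", 0) >= 1
        and all(pool.get(k) == v for k, v in spec.items())
        for pool in pci_stats
    )

# The one host with devices in the table below advertises physnet2:
pools = [{"count": 1, "vendor_id": "8086",
          "physical_network": "physnet2", "product_id": "10ed"}]

# The failing request from the error message asks for physnet1:
physnet1_ok = host_satisfies(pools, {"physical_network": "physnet1"})
physnet2_ok = host_satisfies(pools, {"physical_network": "physnet2"})
```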

Cheers,
Irena

From: Murali B [mailto:mbi...@gmail.com]
Sent: Wednesday, December 03, 2014 5:19 AM
To: openstack-dev@lists.openstack.org; itz...@redhat.com
Subject: [openstack-dev] SRIOV failures error-

Hi Itzik,

Thank you for your reply

Please find the below output for #echo 'use nova;select 
hypervisor_hostname,pci_stats from
compute_nodes;' | mysql -u root

MariaDB [nova]> select hypervisor_hostname,pci_stats from compute_nodes;
+---------------------+-------------------------------------------------------------------------------------------+
| hypervisor_hostname | pci_stats                                                                                 |
+---------------------+-------------------------------------------------------------------------------------------+
| compute2            | []                                                                                        |
| xilinx-r720         | [{"count": 1, "vendor_id": "8086", "physical_network": "physnet2", "product_id": "10ed"}] |
| compute1            | []                                                                                        |
| compute4            | []                                                                                        |
+---------------------+-------------------------------------------------------------------------------------------+

We have enabled the SR-IOV agent on compute4 as well as on xilinx-r720.

Thanks
-Murali
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-16 Thread Irena Berezovsky
Hi Steve,
Regarding SR-IOV testing, at Mellanox we have a CI job running on a bare metal 
node with a Mellanox SR-IOV NIC. This job is reporting on neutron patches. 
Currently API tests are executed.
The contact person for SRIOV CI job is listed at driverlog:
https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L1439

The following items are in progress:
 - SR-IOV functional testing 
 - Reporting CI job on nova patches
 - Multi-node setup
It is worth mentioning that we want to start collaborating on the SR-IOV testing 
effort as part of the PCI pass-through subteam activity.
Please join the weekly meeting if you want to collaborate or have some inputs: 
https://wiki.openstack.org/wiki/Meetings/Passthrough

BR,
Irena

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, November 12, 2014 9:11 PM
To: itai mendelsohn; Adrian Hoban; Russell Bryant; Ian Wells (iawells); Irena 
Berezovsky; ba...@cisco.com
Cc: Nikola Đipanov; Russell Bryant; OpenStack Development Mailing List (not for 
usage questions)
Subject: [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other 
features that can't be tested on current infra.

Hi all,

We had some discussions last week - particularly in the Nova NFV design session 
[1] - on the subject of ensuring that telecommunications and NFV-related 
functionality has adequate continuous integration testing. In particular the 
focus here is on functionality that can't easily be tested on the public clouds 
that back the gate, including:

- NUMA (vCPU pinning, vCPU layout, vRAM layout, huge pages, I/O device locality)
- SR-IOV with Intel, Cisco, and Mellanox devices (possibly others)
  
In each case we need to confirm where we are at, and the plan going forward, 
with regards to having:

1) Hardware to run the CI on.
2) Tests that actively exercise the functionality (if not already in existence).
3) Point person for each setup to maintain it and report into the third-party 
meeting [2].
4) Getting the jobs operational and reporting [3][4][5][6].

In the Nova session we discussed a goal of having the hardware by K-1 (Dec 18) 
and having it reporting at least periodically by K-2 (Feb 5). I'm not sure if 
similar discussions occurred on the Neutron side of the design summit.

SR-IOV
==

Adrian and Irena mentioned they were already in the process of getting up to 
speed with third party CI for their respective SR-IOV configurations. Robert 
are you attempting similar with regards to Cisco devices? What is the status of 
each of these efforts versus the four items I lifted above and what do you need 
assistance with?

NUMA


We still need to identify some hardware to run third party CI for the 
NUMA-related work, and no doubt other things that will come up. It's expected 
that this will be an interim solution until OPNFV resources can be used (note 
cdub jokingly replied 1-2 years when asked for a rough estimate - I mention 
this because based on a later discussion some people took this as a serious 
estimate).

Ian, did you have any luck kicking this off? Russell and I are also endeavouring 
to see what we can do on our side w.r.t. this short-term approach - in 
particular, if you find hardware we still need to find an owner to actually 
set it up and manage it, as discussed.

In theory to get started we need a physical multi-socket box and a virtual 
machine somewhere on the same network to handle job control etc. I believe the 
tests themselves can be run in VMs (just not those exposed by existing public 
clouds) assuming a recent Libvirt and an appropriately crafted Libvirt XML that 
ensures the VM gets a multi-socket topology etc. (we can assist with this).
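For illustration, the relevant part of such a libvirt domain XML would be a <cpu> topology element along these lines (the socket/core counts and memory sizes below are assumptions, not values from this thread):

```xml
<!-- Hypothetical fragment of the test VM's domain XML: gives the guest
     two sockets and two NUMA cells so NUMA-aware code paths can be
     exercised inside the VM. Memory is in KiB (4 GiB per cell here). -->
<cpu>
  <topology sockets='2' cores='4' threads='1'/>
  <numa>
    <cell cpus='0-3' memory='4194304'/>
    <cell cpus='4-7' memory='4194304'/>
  </numa>
</cpu>
```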

Thanks,

Steve

[1] https://etherpad.openstack.org/p/kilo-nova-nfv
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty
[3] http://ci.openstack.org/third_party.html
[4] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/
[5] 
http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/
[6] 
http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Translation technical debt

2014-11-12 Thread Irena Berezovsky
Count me in

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, November 11, 2014 3:47 PM
To: OpenStack List
Subject: [openstack-dev] [Neutron] Translation technical debt

Hi,
In order to enforce our translation policies - 
http://docs.openstack.org/developer/oslo.i18n/guidelines.html - we have added a 
hacking rule in Neutron. If you wish to jump aboard this effort then please let 
me know. There are patches for all directories except the plugins and services.
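The guideline referenced above boils down to wrapping every user-facing message in a translation marker. A minimal stdlib-only illustration of the pattern (Neutron itself uses oslo.i18n's `_()` marker rather than plain gettext; this is just a stand-in):

```python
# Minimal illustration of the i18n pattern the hacking rule enforces,
# using stdlib gettext as a stand-in for oslo.i18n's _() marker.
import gettext

# With no message catalogs installed, NullTranslations returns the msgid
# unchanged, which is also the runtime behaviour for untranslated locales.
_ = gettext.NullTranslations().gettext

def port_not_found_message(port_id):
    # User-facing message: wrapped in _() so it is extractable/translatable.
    return _("Port %s could not be found.") % port_id

msg = port_not_found_message("a1b2")
```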
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PCI/SR-IOV meet-up at the summit

2014-11-04 Thread Irena Berezovsky
Hi,
We thought it would be a good idea to have a chat regarding further SR-IOV 
enhancements that we want to achieve during Kilo.
If you are interested in discussing this, please join us Wednesday the 5th, at 
13:15 at the developers lounge.
The list of topics raised till now can be found here:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

See you there,
Irena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Pod time at Paris Summit

2014-11-01 Thread Irena Berezovsky
Hi Sean,
Is there any chance to change this time slot?
Unfortunately, I won't be there on Friday.


BR,
Irena

-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com] 
Sent: Thursday, October 30, 2014 5:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][QoS] Pod time at Paris Summit

I have reserved the 2:35 slot for Friday. 

https://etherpad.openstack.org/p/neutron-kilo-meetup-slots

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][QoS] Pod time at Paris Summit

2014-10-29 Thread Irena Berezovsky
Hi Sean,
Will be great to meet in person and discuss QoS adoption path.
Count me in,
Irena

-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com] 
Sent: Tuesday, October 28, 2014 8:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][QoS] Pod time at Paris Summit

Hi,

Like Atlanta, I will be at the summit. If there is interest, I can schedule a 
time to talk about the QoS API extension.

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-14 Thread Irena Berezovsky
Hi,
While keeping focused on defining the proper approach to deal with Neutron 
third-party vendors' plugins and drivers, we also need to provide a solution for 
the complementary critical piece of code maintained in the Nova code base.
Introducing a new vif_type by a neutron L2 Plugin/Driver requires adding vif 
plugging support on the Nova side.
I think it is very important to enable virt driver extensibility to support 
out-of-tree/future vif_types.
If the direction is to keep vendor plugins/drivers external to Neutron core, it 
seems reasonable to impose the same policy on the Nova side.

BR,
Irena

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Friday, September 12, 2014 12:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: tanny...@huawei.com
Subject: Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third 
vendors' plugin and driver

 So my suggestion is remove all vendors' plugins and drivers except opensource 
 as built-in.

Yes, I think this is currently the view held by the PTL (Kyle) and some of the 
other cores so what you're suggesting will definitely come up at the summit.


 Why do we need a different repo to store vendors' codes? That's not the 
 community business.
 I think only a proper architecture and normal NBSB API can bring a clear 
 separation between plugins(or drivers) and core code, not a different repo.

The problem is that that architecture won't stay stable if there is no shared 
community plugin depending on its stability. Let me ask you the inverse 
question. Why do you think the reference driver should stay in the core repo?

A separate repo won't have an impact on what is packaged and released so it 
should have no impact on user experience, complete versions, providing 
code examples,  or developing new features. In fact, it will likely help 
with the last two because it will provide a clear delineation between what a 
plugin is responsible for vs. what the core API is responsible for. And, 
because new cores can be added faster to the open source plugins repo due to a 
smaller code base to learn, it will help with developing new features by 
reducing reviewer load.

On Fri, Sep 12, 2014 at 1:50 AM, Germy Lure 
germy.l...@gmail.com wrote:


On Fri, Sep 12, 2014 at 11:11 AM, Kevin Benton 
blak...@gmail.com wrote:

 Maybe I missed something, but what's the solution?

There isn't one yet. That's why it's going to be discussed at the summit.
So my suggestion is remove all vendors' plugins and drivers except opensource 
as built-in.
By leaving open source plugins and drivers in the tree, we can resolve such 
problems:
  1)release a workable and COMPLETE version
  2)user experience(especially for beginners)
  3)provide code example to learn for new contributors and vendors
  4)develop and verify new features


 I think we should release a workable version.

Definitely. But that doesn't have anything to do with it living in the same 
repository. By putting it in a different repo, it provides smaller code bases 
to learn for new contributors wanting to become a core developer in addition to 
a clear separation between plugins and core code.
Why do we need a different repo to store vendors' codes? That's not the 
community business.
I think only a proper architecture and normal NBSB API can bring a clear 
separation between plugins(or drivers) and core code, not a different repo.
Of course, if the community provides a wiki page for vendors to add hyperlink 
of their codes, I think it's perfect.

 Besides of user experience, the open source drivers are also used for 
 developing and verifying new features, even small-scale case.

Sure, but this also isn't affected by the code being in a separate repo.
See comments above.

 The community should and just need focus on the Neutron core and provide 
 framework for vendors' devices.

I agree, but without the open source drivers being separated as well, it's very 
difficult for the framework for external drivers to be stable enough to be 
useful.
Architecture and API. The community should ensure the core and API are stable 
enough and of high quality; vendors are responsible for external drivers.
Whoever provides them also maintains them (including development, storage, 
distribution, quality, etc.).

On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure 
germy.l...@gmail.com wrote:
Some comments inline.

BR,
Germy

On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton 
blak...@gmail.com wrote:
This has been brought up several times already and I believe is going to be 
discussed at the Kilo summit.
Maybe I missed something, but what's the solution?

I agree that reviewing third party patches eats community time. However, 
claiming that the community pays 46% of it's energy to maintain vendor-specific 
code doesn't make any sense. LOC in the repo has very little to do with ongoing 
required maintenance. Assuming the APIs for the plugins stay consistent, there 
should be few 

[openstack-dev] [neutron][nova] PCI pass-through feature/topic proposals for Kilo Release

2014-09-02 Thread Irena Berezovsky
Following the last PCI pass-through meeting, we want to start thinking about 
features/add-ons that need to be addressed in the Kilo Release.

I created an etherpad (reused Doug's template) for topics related to PCI 
pass-through, mostly focused on SR-IOV networking:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

Please see some instructions at the top of the page.
Based on the topics interest, we may need to work out the overall details and 
propose summit session to present and get community feedback.

BR,
Irena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters

2014-08-25 Thread Irena Berezovsky
Hi Andreas,
We can definitely set some time to discuss this.
I am usually available from 5:00 to 14:00 UTC.
Let's follow up on IRC (irenab).

BR,
Irena

-Original Message-
From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com] 
Sent: Monday, August 25, 2014 11:00 AM
To: Irena Berezovsky
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
promic mode adapters

Hi Irena,
thanks for your reply. Yes, sure, collaboration would be great.
Do you already have a blueprint out there? Maybe we can sync up this week to 
discuss more details, because I would like to understand what exactly you're 
looking for. Normally I'm available from 7:00 UTC to 16:00 UTC (today only until 
13:00 UTC). My irc name is scheuran. Maybe we can get in contact this week!

You also were talking about SR-IOV. I saw some blueprint mentioning SR-IOV & 
macvtap. Do you have any insights into this one, too? What we would also like 
to do is to introduce macvtap as a network virtualization option. Macvtap also 
registers mac addresses to network adapters...


Thanks,
Andreas


On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
 Hi Andreas,
 Thank you for this initiative.
 We were looking at a similar problem of mixing OVS and SR-IOV on the same 
 network adapter, which also requires MAC address registration for OVS ports. 
 Please let me know if you would like to collaborate on this effort.
 
 BR,
 Irena
 
 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Friday, August 22, 2014 11:16 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support 
 for non promic mode adapters
 
 Thanks for your feedback. 
 
 No, I do not yet have code for it. Just wanted to get a feeling if such a 
 feature would get acceptance in the community. 
 But if that helps I can sit down and start some prototyping while I'm 
 preparing a blueprint spec in parallel. 
 
 The main part of the implementation I wanted to do on my own to get more 
 familiar with the code base and to get more in touch with the community.
 But of course advice and feedback of experienced neutron developers is 
 essential!
 
 So I will proceed like this
 - Create a blueprint
 - Commit first pieces of code to get early feedback (e.g. ask via the 
 mailing list or irc)
 - Upload a spec (as soon as the repo is available for K)
 
 Does that make sense for you?
 
 Thanks,
 Andreas
 
 
 
 On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
  I think this sounds reasonable. Do you have code for this already, 
  or are you looking for a developer to help implement it?
  
  
  On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring 
  scheu...@linux.vnet.ibm.com wrote:
  Hi,
  last week I started discussing an extension to the existing
  neutron
  openvswitch agent to support network adapters that are not in
  promiscuous mode. Now I would like to enhance the round to get
  feedback
  from a broader audience via the mailing list.
  
  
  The Problem
  When driving vlan or flat networking, openvswitch requires a network
  adapter in promiscuous mode.
  
  
  Why not having promiscuous mode in your adapter?
  - Admins like to have full control over their environment and
  which
  network packets enter the system.
  - The network adapter just does not have support for it.
  
  
  What to do?
  Linux net-dev drivers offer an interface to manually register
  additional
  mac addresses (also called secondary unicast addresses).
  Exploiting this
  one can register additional mac addresses to the network
  adapter. This
  also works via a well known ip user space tool.
  
  `bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`
  
  
  What to do in openstack?
  As neutron is aware of all the mac addresses that are in use
  it's the
  perfect candidate for doing the mac registrations. The idea is
  to modify
  the neutron openvswitch agent that it does the registration on
  port
  add and port remove via the bridge command.
  There would be a new optional configuration parameter,
  something like
  'non-promisc-mode' that is by default set to false. Only when
  set to
  true, macs get manually registered. Otherwise the agent
  behaves like it
  does today. So I guess only very little changes to the agent
  code are
  required. From my current point of view we do not need any
  changes to
  the ml2 plug-in.
  
  
  Blueprint or a bug?
  I guess it's a blueprint
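The port-add/port-remove registration flow proposed in the quoted text could be sketched roughly as follows (all class, function, and option names here are hypothetical illustrations, not actual Neutron agent code; the command built is the `bridge fdb` invocation shown above):

```python
# Rough sketch of the proposed agent behaviour: build the `bridge fdb`
# commands that would register/unregister a port's MAC on a
# non-promiscuous adapter. Names are illustrative only.
def fdb_command(action, mac, device):
    """Return the `bridge fdb` command for registering (add) or
    unregistering (del) a secondary unicast MAC on a device."""
    if action not in ("add", "del"):
        raise ValueError("action must be 'add' or 'del'")
    return ["bridge", "fdb", action, mac, "dev", device]

class NonPromiscPortHandler:
    def __init__(self, device, enabled):
        self.device = device
        self.enabled = enabled      # the proposed 'non-promisc-mode' flag
        self.issued = []            # commands that would be executed

    def port_added(self, mac):
        if self.enabled:            # default False: behave as today
            self.issued.append(fdb_command("add", mac, self.device))

    def port_removed(self, mac):
        if self.enabled:
            self.issued.append(fdb_command("del", mac, self.device))

handler = NonPromiscPortHandler("eth0", enabled=True)
handler.port_added("aa:aa:aa:aa:aa:aa")
```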

Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters

2014-08-24 Thread Irena Berezovsky
Hi Andreas,
Thank you for this initiative.
We were looking on similar problem for mixing OVS and SR-IOV on same network 
adapter, which also requires mac addresses registration of OVS ports. 
Please let me know if you would like to collaborate on this effort.

BR,
Irena

-Original Message-
From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com] 
Sent: Friday, August 22, 2014 11:16 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
promic mode adapters

Thanks for your feedback. 

No, I do not yet have code for it. Just wanted to get a feeling if such a 
feature would get acceptance in the community. 
But if that helps I can sit down and start some prototyping while I'm preparing 
a blueprint spec in parallel. 

The main part of the implementation I wanted to do on my own to get more 
familiar with the code base and to get more in touch with the community.
But of course advice and feedback of experienced neutron developers is 
essential!

So I will proceed like this
- Create a blueprint
- Commit first pieces of code to get early feedback (e.g. ask via the mailing 
list or irc)
- Upload a spec (as soon as the repo is available for K)

Does that make sense for you?

Thanks,
Andreas



On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
 I think this sounds reasonable. Do you have code for this already, or 
 are you looking for a developer to help implement it?
 
 
 On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring 
 scheu...@linux.vnet.ibm.com wrote:
 Hi,
 last week I started discussing an extension to the existing
 neutron
 openvswitch agent to support network adapters that are not in
 promiscuous mode. Now I would like to enhance the round to get
 feedback
 from a broader audience via the mailing list.
 
 
 The Problem
 When driving vlan or flat networking, openvswitch requires a network
 adapter in promiscuous mode.
 
 
 Why not having promiscuous mode in your adapter?
 - Admins like to have full control over their environment and
 which
 network packets enter the system.
 - The network adapter just does not have support for it.
 
 
 What to do?
 Linux net-dev drivers offer an interface to manually register
 additional
 mac addresses (also called secondary unicast addresses).
 Exploiting this
 one can register additional mac addresses to the network
 adapter. This
 also works via a well known ip user space tool.
 
 `bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`
 
 
 What to do in openstack?
 As neutron is aware of all the mac addresses that are in use
 it's the
 perfect candidate for doing the mac registrations. The idea is
 to modify
 the neutron openvswitch agent that it does the registration on
 port
 add and port remove via the bridge command.
 There would be a new optional configuration parameter,
 something like
 'non-promisc-mode' that is by default set to false. Only when
 set to
 true, macs get manually registered. Otherwise the agent
 behaves like it
 does today. So I guess only very little changes to the agent
 code are
 required. From my current point of view we do not need any
 changes to
 the ml2 plug-in.
 
 
 Blueprint or a bug?
 I guess it's a blueprint.
 
 What's the timeframe?
 K would be great.
 
 
 
 I would be thankful for any feedback on this! Feel free to
 contact me
 anytime. Thanks in advance!
 
 Regards,
 Andreas
 
 (irc: scheuran)
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Kevin Benton


[openstack-dev] [Neutron] Mellanox plugin deprecation

2014-08-20 Thread Irena Berezovsky
Hi,

As announced in the last neutron meeting [1], the Mellanox plugin is being 
deprecated. Juno is the last release to support the Mellanox plugin.

The Mellanox ML2 Mechanism Driver replaces the plugin and has been available 
since the Icehouse release.

[1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-18-21.02.log.html

BR,
Irena

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Irena Berezovsky
Hi Gary,
I understand your concern. I think CI is mandatory to ensure that code is not 
broken. While unit tests provide great value, it may end up with the code that 
does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the 
jobs of the Mellanox External Testing CI (Check-MLNX-Neutron-ML2-Sriov-driver).
Meanwhile it is not voting, but it will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding the PCI support? I understand that this is something that 
requires specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-08-13 Thread Irena Berezovsky
Hi,
Mellanox CI was also failing due to the same issue, 
https://bugs.launchpad.net/neutron/+bug/1355780 (apparently duplicated bug for 
https://bugs.launchpad.net/neutron/+bug/1353309)
We currently fixed the issue locally, by patching the server side RPC version 
support to 1.3.

BR,
Irena


From: Hemanth Ravi [mailto:hemanthrav...@gmail.com]
Sent: Wednesday, August 13, 2014 12:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] Update on third party CI 
in Neutron

Kyle,

One Convergence third-party CI is failing due to 
https://bugs.launchpad.net/neutron/+bug/1353309.

Let me know if we should turn off the CI logs until this is fixed or if we need 
to fix anything on the CI end. I think one other third-party CI (Mellanox) is 
failing due to the same issue.

Regards,
-hemanth

On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com wrote:
On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com wrote:
 Kyle,

 One Convergence CI has been fixed (setup issue) and is running without the
 failures for ~10 days now. Updated the etherpad.

Thanks for the update Hemanth, much appreciated!

Kyle

 Thanks,
 -hemanth


 On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com wrote:


 On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery mest...@noironetworks.com wrote:

 PLUMgrid

 Not saving enough logs

 All Jenkins slaves were just updated to upload all required logs. PLUMgrid
 CI should be good now.


 Thanks,
 Fawad Khaliq



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Irena Berezovsky
Generally, I agree with you, but it's a little tricky.
There are different types of SR-IOV NICs, and what works for one vendor may be 
broken for another.
I think that both current SR-IOV networking flavors, embedded switching (Intel, 
Mellanox) and Cisco VM-FEX, should be verified for relevant nova patches.
What tests do you think should run on the nova side?

Thanks,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Wednesday, August 13, 2014 10:10 AM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Hi,
If I understand correctly, the only way that this works is with nova and 
neutron running. My suggestion would be to have the CI run with this 
configuration. I just think that this should be a prerequisite, similar to the 
validation of virtualization drivers.
Does that make sense?
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Wednesday, August 13, 2014 at 9:01 AM
To: Gary Kotton gkot...@vmware.com, OpenStack List openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
I understand your concern. I think CI is mandatory to ensure that code is not 
broken. While unit tests provide great value, it may end up with the code that 
does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the 
jobs of Mellanox External Testing CI 
(Check-MLNX-Neutron-ML2-Sriov-driver: http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
Meanwhile not voting, but will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding the PCI support? I understand that this is something that 
requires specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Irena Berezovsky
+1

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Wednesday, August 13, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

Per this week's Neutron meeting [1], it was decided that offering a rotating 
meeting slot for the weekly Neutron meeting would be a good thing. This will 
allow for a much easier time for people in Asia/Pacific timezones, as well as 
for people in Europe.

So, I'd like to propose we rotate the weekly as follows:

Monday 2100UTC
Tuesday 1400UTC

If people are ok with these time slots, I'll set this up and we'll likely start 
with this new schedule in September, after the FPF.

Thanks!
Kyle

[1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-12 Thread Irena Berezovsky
Hi Gary,
Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the 
jobs of Mellanox External Testing CI 
(Check-MLNX-Neutron-ML2-Sriov-driverhttp://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
Meanwhile not voting, but will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding the PCI support? I understand that this is something that 
requires specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] make mac address updatable: which plugins?

2014-08-12 Thread Irena Berezovsky
Hi Chuck,
I'll comment regarding Mellanox Plug-in and Ml2 Mech driver  in the review.
BR,
Irena 

-Original Message-
From: Carlino, Chuck (OpenStack TripleO, Neutron) [mailto:chuck.carl...@hp.com] 
Sent: Wednesday, August 06, 2014 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] make mac address updatable: which 
plugins?

Yamamoto has reviewed the changes for this, and has raised the following issue 
(among others).


  *   iirc mellanox uses mac address as port identifier. what happens on 
address change?

Can someone who knows mellanox please comment, either here or in the review?

Thanks,
Chuck


On Aug 5, 2014, at 1:22 PM, Carlino, Chuck (OpenStack TripleO, Neutron) 
chuck.carl...@hp.com wrote:

Thanks for the quick responses.

Here's the WIP review:

https://review.openstack.org/112129.

The base plugin doesn't contribute to the notification decision right now, so 
I've modified the actual plugin code.

Chuck


On Aug 5, 2014, at 12:51 PM, Amir Sadoughi amir.sadou...@rackspace.com
wrote:

I agree with Kevin here. Just a note, don't bother with openvswitch and 
linuxbridge plugins as they are marked for deletion this cycle, imminently 
(already deprecated)[0].

Amir

[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-04-21.02.html
 Announcements 2e.

From: Kevin Benton [blak...@gmail.com]
Sent: Tuesday, August 05, 2014 2:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] make mac address updatable: which 
plugins?

How are you implementing the change? It would be good to get to see some code 
in a review to get an idea of what needs to be updated.

If it's just a change in the DB base plugin, just let those changes propagate 
to the plugins that haven't overridden the inherited behavior.


On Tue, Aug 5, 2014 at 1:28 PM, Charles Carlino chuckjcarl...@gmail.com wrote:
Hi all,

I need some help regarding a bug [1] I'm working on.

The bug is basically a request to make the mac address of a port updatable.  
The use case is a baremetal (Ironic) node that has a bad NIC which must be 
replaced, resulting in a new mac address.  The bad NIC has an associated 
neutron port which of course holds the NIC's IP address.  The reason to make 
mac_address updatable (as opposed to having the user create a new port and 
delete the old one) is that during the recovery process the IP address must be 
retained and assigned to the new NIC/port, which is not guaranteed in the above 
work-around.

I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge 
plugins, but I'm not sure how to handle the other plugins since I don't know 
whether the associated backends are prepared to handle such updates. My first 
thought is to disallow the update in the other plugins, but I would really 
appreciate your advice.
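The "disallow by default" idea above could look roughly like the following sketch. This is not actual Neutron code: the exception class, function name, and the `backend_supports_mac_update` flag are all invented here for illustration.

```python
class MacUpdateNotSupported(Exception):
    """Raised when a plugin cannot propagate a MAC change to its backend."""


def validate_mac_update(port_update, current_mac, backend_supports_mac_update):
    """Reject a port update that changes mac_address unless the backend
    is known to handle it; a no-op update (same MAC) is always allowed."""
    new_mac = port_update.get('mac_address')
    if new_mac and new_mac != current_mac and not backend_supports_mac_update:
        raise MacUpdateNotSupported(
            'mac_address is not updatable with this plugin')
    return port_update


# Re-sending the same MAC passes even when the backend cannot handle changes.
validate_mac_update({'mac_address': 'fa:16:3e:00:00:01'},
                    'fa:16:3e:00:00:01',
                    backend_supports_mac_update=False)
```

Plugins whose backends can handle the change would simply pass `True` and let the update through.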

Kind regards,
Chuck Carlino

[1] https://bugs.launchpad.net/neutron/+bug/1341268





--
Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][SR-IOV]: RE: ML2 mechanism driver for SR-IOV capable NIC based switching, ...

2014-07-27 Thread Irena Berezovsky
Hi Robert,
Please see inline

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Friday, July 25, 2014 12:44 AM
To: mest...@mestery.com; Irena Berezovsky
Cc: Akihiro Motoki; Sandhya Dasu (sadasu); OpenStack Development Mailing List 
(not for usage questions)
Subject: [openstack-dev][neutron][SR-IOV]: RE: ML2 mechanism driver for SR-IOV 
capable NIC based switching,...

Hi Kyle, 

Sorry I missed your queries on the IRC channel today. I was thinking about this 
whole BP. After chatting with Irena this morning, I think that I understand 
what this BP is trying to achieve overall. I also had a chat with Sandhya 
afterwards. I'd like to discuss a few things here:
  
   Sandhya's MD is going to support Cisco's VM-FEX. Overall, her code's 
structure would look very similar to Irena's patch in part 1.
However, she cannot simply inherit from SriovNicSwitchMechanismDriver. The 
differences for her code are: 1) get_vif_details() would populate profileid 
(rather than vlanid), 2) she'd need to do VM-FEX-specific processing in 
try_to_bind(). We're thinking that with a little generalization, 
SriovNicSwitchMechanismDriver() (with a changed name such as 
SriovMechanismDriver()) can be used both for nic switch and vmfex. In terms of 
class hierarchy, it would look like:
 SriovMechanismDriver
SriovNicSwitchMechanismDriver
SriovQBRMechanismDriver
 SriovCiscoVmfexMechanismDriver
[IrenaB] This seems like the correct hierarchy if we want to have separate MDs 
for the NIC embedded switch and Cisco VM-FEX cases. But what if the same 
deployment should support both cases? Should two MDs be chosen?

Code duplication would be reduced significantly. The change would be:
make get_vif_details an abstract method in SriovMechanismDriver
make an abstract method to perform specific bind action required by a 
particular adaptor indicated in the PCI vendor info
vif type and agent type should be set based on the PCI vendor info 

A little change of patch part 1 would achieve the above
[IrenaB] Do you propose that vif_type and agent type be determined at MD 
initialization time or during the port binding attempt?
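The generalized hierarchy being discussed could be sketched as below. The class names follow the email, but the method signatures, the `pci_vendor_info` values, and the `vif_details` keys are invented for illustration; this is a sketch of the idea, not the actual patch.

```python
import abc


class SriovMechanismDriver(abc.ABC):
    """Common base: binds a port only if its PCI vendor info matches."""

    supported_pci_vendor_info = ()  # ('vendor_id:product_id', ...)

    @abc.abstractmethod
    def get_vif_details(self, segment):
        """Vendor-specific details: vlanid for NIC switch, profileid for VM-FEX."""

    def try_to_bind(self, port, segment):
        # With two MDs loaded in one deployment, this check is how the
        # right driver picks up each port.
        vendor_info = port.get('binding:profile', {}).get('pci_vendor_info')
        if vendor_info not in self.supported_pci_vendor_info:
            return None
        return self.get_vif_details(segment)


class SriovNicSwitchMechanismDriver(SriovMechanismDriver):
    supported_pci_vendor_info = ('15b3:1004',)  # hypothetical VF id

    def get_vif_details(self, segment):
        return {'vlan': segment['segmentation_id']}


class SriovCiscoVmfexMechanismDriver(SriovMechanismDriver):
    supported_pci_vendor_info = ('1137:0071',)  # hypothetical VF id

    def get_vif_details(self, segment):
        return {'profileid': 'vmfex-port-profile'}
```

Under this sketch, running both subclasses side by side answers the "both cases in one deployment" question: each port's pci_vendor_info selects exactly one driver.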

   Originally I thought that an SR-IOV port's status would depend on the 
SR-IOV agent (patch part 2). After chatting with Irena, this is not the case. 
So all the SR-IOV ports will be active once created or bound, according to the 
try_to_bind() method. In addition, the current SR-IOV agent (patch part 2) only 
supports port admin status changes for the mlnx adaptor. I think these caveats 
need to be spelled out explicitly to avoid any confusion or misunderstanding, 
at least in the documentation.

[IrenaB] Agree, this should be documented, as well as the whole SR-IOV vNIC 
creation workflow.
If an agent is required, the port won't be bound if the agent is not running on 
the required host.
It may be the case that dynamic admin state changes for SR-IOV VFs are 
supported by other vendors as well. The existing agent also allows expanding 
the dynamic-change support to future functionality, such as QoS, ACLs, etc.

   Sandhya has planned to support both intel and vmfex in her MD. This 
requires a hybrid sriov mech driver that populates vif details based on the PCI 
vendor info in the port. Another way to do this is to run two MDs at the same 
time, one supporting intel, the other vmfex. This would work well with the 
above classes, but it requires changing the two config options (in Irena's 
patch part one) so that per-MD config options can be specified. I'm not sure if 
this is practical in real deployments (meaning use of SR-IOV adaptors from 
different vendors in the same deployment), but I think it's doable within the 
existing ml2 framework.
[IrenaB] I think we should see how to make the deployment of the SR-IOV 
required pieces as simple and intuitive as possible.

We'll go over the above in the next SR-IOV IRC meeting as well.
[IrenaB] Looking forward to this discussion.

Thanks,
Robert









On 7/24/14, 1:55 PM, Kyle Mestery (Code Review) rev...@openstack.org
wrote:

Kyle Mestery has posted comments on this change.

Change subject: ML2 mechanism driver for SR-IOV capable NIC based 
switching, Part 2 
..


Patch Set 3: Code-Review+2 Workflow+1

I believe Irena has answered all of Robert's questions. Any subsequent 
issues can be handled as a followup.

--
To view, visit https://review.openstack.org/107651
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I533ccee067935326d5837f90ba321a962e8dc2a6
Gerrit-PatchSet: 3
Gerrit-Project: openstack/neutron
Gerrit-Branch: master
Gerrit-Owner: Berezovsky Irena ire...@mellanox.com
Gerrit-Reviewer: Akihiro Motoki mot...@da.jp.nec.com
Gerrit-Reviewer: Arista Testing 
arista-openstack-t...@aristanetworks.com
Gerrit-Reviewer: Baodong (Robert) Li ba...@cisco.com

Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-13 Thread Irena Berezovsky
Hi,
For the agent to notify the server about node-specific info, you can leverage 
the periodic state report that the neutron agent sends to the neutron server.
As an option, the ML2 mechanism driver can check that agent report and, 
depending on the datapath_type, update vif_details.
This can be done similarly to bridge_mappings:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_openvswitch.py#43
BR,
Irena
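The server-side half of the suggestion above could be sketched as follows. This mirrors how bridge_mappings is consumed today, but the `use_dpdk` key is the hypothetical flag from this thread, not an existing Neutron constant, and the function name is invented for illustration.

```python
def build_vif_details(agent_configurations):
    """Derive binding:vif_details from an agent's periodic state report.

    agent_configurations is the 'configurations' dict the agent includes
    in its state report; if it advertises a userspace (netdev) datapath,
    flag that in the details handed back to Nova's VIF driver.
    """
    details = {'port_filter': False}
    if agent_configurations.get('datapath_type') == 'netdev':
        details['use_dpdk'] = True  # hypothetical flag proposed in-thread
    return details
```

A mechanism driver's port-binding path would call something like this per host, so hosts running plain kernel OVS and hosts running DPDK OVS get different vif_details from the same driver.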


From: Czesnowicz, Przemyslaw [mailto:przemyslaw.czesnow...@intel.com]
Sent: Thursday, July 10, 2014 6:20 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,

Thanks for Your answers.

Yep using binding:vif_details makes more sense. We would like to reuse 
VIF_TYPE_OVS and modify the nova to use the userspace vhost when ‘use_dpdk’ 
flag is present.
What we are missing is how to inform the ml2 plugin/mechanism drivers when to 
put that ‘use_dpdk’ flag into vif_details.

On the node ovs_neutron_agent could look up datapath_type in ovsdb, but how can 
we provide that info to the plugin?
Currently there is no mechanism to get node specific info into the ml2 plugin 
(or at least we don’t see any).

Any ideas on how this could be implemented?

Regards
Przemek
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Thursday, July 10, 2014 8:08 AM
To: OpenStack Development Mailing List (not for usage questions); Czesnowicz, 
Przemyslaw
Cc: Mooney, Sean K
Subject: RE: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

Hi,
For passing  information from neutron to nova VIF Driver, you should use 
binding:vif_details dictionary.  You may not require new VIF_TYPE, but can 
leverage the existing VIF_TYPE_OVS, and add ‘use_dpdk’   in vif_details 
dictionary. This will require some rework of the existing libvirt vif_driver 
VIF_TYPE_OVS.

Binding:profile is considered as input dictionary that is used to pass 
information required for port binding on Server side. You  may use 
binding:profile to pass in  a dpdk ovs request, so it will be taken into port 
binding consideration by ML2 plugin.

I am not sure regarding new vnic_type, since it will require  port owner to 
pass in the requested type. Is it your intention? Should the port owner be 
aware of dpdk ovs usage?
There is also VM scheduling consideration that if certain vnic_type is 
requested, VM should be scheduled on the node that can satisfy the request.

Regards,
Irena


From: loy wolfe [mailto:loywo...@gmail.com]
Sent: Thursday, July 10, 2014 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Mooney, Sean K
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

i think both a new vnic_type and a new vif_type should be added. now vnic has 
three types: normal, direct, macvtap, then we need a new type of uservhost.

as for vif_type, now we have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG, VIF_HW_VEB, so we 
need a new VIF_TYPE_USEROVS

I don't think it's a good idea to directly reuse ovs agent, for we have to 
consider use cases that ovs and userovs co-exists. Now it's a little painful to 
fork and write a new agent, but it will be easier when ML2 agent BP is merged 
in the future. (https://etherpad.openstack.org/p/modular-l2-agent-outline)

On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw 
przemyslaw.czesnow...@intel.com wrote:
Hi

We (Intel Openstack team) would like to add support for dpdk based userspace 
openvswitch using mech_openvswitch and mech_odl from ML2 plugin.
The dpdk-enabled ovs comes in two flavours: one is netdev incorporated into 
vanilla ovs, the other is a fork of ovs with a dpdk datapath 
(https://github.com/01org/dpdk-ovs).
Both flavours use the userspace vhost mechanism to connect the VMs to the switch.

Our initial approach was to extend ovs vif bindings in nova and add a config 
parameter to specify when userspace vhost should be used.
Spec : https://review.openstack.org/95805
Code: https://review.openstack.org/100256

Nova devs rejected this approach saying that Neutron should pass all necessary 
information to nova to select vif bindings.

Currently we are looking for a way to pass information from Neutron to Nova 
that dpdk enabled ovs is being used while still being able to use 
mech_openvswitch and ovs_neutron_agent or mech_odl.

We thought of two possible solutions:

1.  Use binding_profile to provide node specific info to nova.

Agent rpc api would be extended to allow agents to send node profile to neutron 
plugin.

That info would be stored in db and passed to nova when binding on this 
specific host is requested.

This could be used to support our use case or pass other info to nova (i.e name 
of integration bridge)



2.  Let mech_openvswitch and mech_odl detect what binding type to use.

When asked for port binding mech_openvswitch and mech_odl would call the agent

Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-10 Thread Irena Berezovsky
Hi,
For passing  information from neutron to nova VIF Driver, you should use 
binding:vif_details dictionary.  You may not require new VIF_TYPE, but can 
leverage the existing VIF_TYPE_OVS, and add ‘use_dpdk’   in vif_details 
dictionary. This will require some rework of the existing libvirt vif_driver 
VIF_TYPE_OVS.

binding:profile is considered an input dictionary used to pass information 
required for port binding on the server side. You may use binding:profile to 
pass in a dpdk ovs request, so it will be taken into account during port 
binding by the ML2 plugin.

I am not sure about a new vnic_type, since it would require the port owner to 
pass in the requested type. Is that your intention? Should the port owner be 
aware of dpdk ovs usage?
There is also a VM scheduling consideration: if a certain vnic_type is 
requested, the VM should be scheduled on a node that can satisfy the request.

Regards,
Irena


From: loy wolfe [mailto:loywo...@gmail.com]
Sent: Thursday, July 10, 2014 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Mooney, Sean K
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

i think both a new vnic_type and a new vif_type should be added. now vnic has 
three types: normal, direct, macvtap, then we need a new type of uservhost.

as for vif_type, now we have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG, VIF_HW_VEB, so we 
need a new VIF_TYPE_USEROVS

I don't think it's a good idea to directly reuse ovs agent, for we have to 
consider use cases that ovs and userovs co-exists. Now it's a little painful to 
fork and write a new agent, but it will be easier when ML2 agent BP is merged 
in the future. (https://etherpad.openstack.org/p/modular-l2-agent-outline)

On Wed, Jul 9, 2014 at 11:08 PM, Czesnowicz, Przemyslaw 
przemyslaw.czesnow...@intel.com wrote:
Hi

We (Intel Openstack team) would like to add support for dpdk based userspace 
openvswitch using mech_openvswitch and mech_odl from ML2 plugin.
The dpdk-enabled ovs comes in two flavours: one is netdev incorporated into 
vanilla ovs, the other is a fork of ovs with a dpdk datapath 
(https://github.com/01org/dpdk-ovs).
Both flavours use the userspace vhost mechanism to connect the VMs to the switch.

Our initial approach was to extend ovs vif bindings in nova and add a config 
parameter to specify when userspace vhost should be used.
Spec : https://review.openstack.org/95805
Code: https://review.openstack.org/100256

Nova devs rejected this approach saying that Neutron should pass all necessary 
information to nova to select vif bindings.

Currently we are looking for a way to pass information from Neutron to Nova 
that dpdk enabled ovs is being used while still being able to use 
mech_openvswitch and ovs_neutron_agent or mech_odl.

We thought of two possible solutions:

1.  Use binding_profile to provide node specific info to nova.

Agent rpc api would be extended to allow agents to send node profile to neutron 
plugin.

That info would be stored in db and passed to nova when binding on this 
specific host is requested.

This could be used to support our use case or to pass other info to nova (e.g. 
the name of the integration bridge).
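The agent-side piece of option 1 above could be sketched like this. The top-level keys mirror the shape of the state reports Neutron agents send today; the two `configurations` entries shown are the node-specific items proposed in this thread, and the function name is invented here.

```python
def build_agent_state(host, datapath_type, integration_bridge):
    """Assemble the periodic state report carrying node-specific info
    that the server side could store and later hand to nova at bind time."""
    return {
        'binary': 'neutron-openvswitch-agent',
        'host': host,
        'agent_type': 'Open vSwitch agent',
        'configurations': {
            'datapath_type': datapath_type,          # e.g. 'netdev' for dpdk
            'integration_bridge': integration_bridge,
        },
    }
```

The appeal of this option is that no new RPC is strictly required: the existing report-state channel already carries a free-form configurations dict.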



2.  Let mech_openvswitch and mech_odl detect what binding type to use.

When asked for port binding mech_openvswitch and mech_odl would call the agent 
or odl  to check what bindings to use (VIF_TYPE_OVS or VIF_TYPE_DPDKVHOST)


So, what would be the best way to support our use case? Is it one of the above?

Best regards
Przemek

--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] weekly meeting for july 1st and july 8th

2014-07-01 Thread Irena Berezovsky
I'll chair this week's PCI SR-IOV pass-through meeting for those who would like 
to attend.

BR,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Tuesday, July 01, 2014 5:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][sriov] weekly meeting for july 1st and july 8th

Hi,

I will be on PTO from Tuesday, and come back to office on July 9th Wednesday. 
Therefore, I won't be present in the next two SR-IOV weekly meetings. Regarding 
the sr-iov development status, I finally fixed all the failures in the existing 
unit tests. Rob and I are still working on adding new unit test cases in the 
PCI and libvirt driver area. Once that's done, we should be able to push 
another two patches up.

Thanks,
Robert

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2 related specs

2014-06-14 Thread Irena Berezovsky
Hi Mohammad,
Thank you for sharing the links.
Can you please elaborate on the columns of the table in [1]? Is [R] supposed to 
be for spec review and [C] for code review?
If this is correct, would it be possible to add [C] columns for already-merged 
specs that still have code under review?

Thanks a lot,
Irena

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Friday, June 13, 2014 8:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2 related 
specs


In order to make the review process a bit easier (without duplicating too much 
data and without creating too much overhead), we have created a wiki to keep 
track of the ML2 related specs for the Juno cycle [1]. The idea is to organize 
the people who participate in the ML2 subgroup activities and get the related 
specs reviewed as much as possible in the subgroup before asking the broader 
community to review. (There is of course nothing that prevents others from 
reviewing these specs as soon as they are available for review.) If you have 
any ML2 related spec under review or being planned, you may want to update the 
wiki [1] accordingly.

We will see if this will be useful or not. If you have any comments or 
suggestions please post here or bring them to the IRC weekly meetings [2].

Best,

Mohammad

[1] https://wiki.openstack.org/wiki/Tracking_ML2_Subgroup_Reviews
[2] https://wiki.openstack.org/wiki/Meetings/ML2
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints

2014-06-14 Thread Irena Berezovsky
+1
Would love to join the gang :)

-Original Message-
From: Assaf Muller [mailto:amul...@redhat.com] 
Sent: Friday, June 13, 2014 4:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints



- Original Message -
 Hi,
 There is the mid cycle sprint in July for Nova and Neutron. Anyone 
 interested in maybe getting one together in Europe/Middle East around the 
 same dates?
 If people are willing to come to this part of the world I am sure that 
 we can organize a venue for a few days. Anyone interested? If we can 
 get a quorum then I will be happy to try and arrange things.

+1 on an Israel sprint :)

 Thanks
 Gary
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy mechanism drivers in ML2

2014-06-10 Thread Irena Berezovsky
Hi Luke,
Please see my comments inline.

BR,
Irena
From: Luke Gorrie [mailto:l...@tail-f.com]
Sent: Monday, June 09, 2014 12:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy 
mechanism drivers in ML2

On 6 June 2014 10:17, henry hly henry4...@gmail.com wrote:
ML2 mechanism drivers are becoming another kind of plugin. Although they can 
be loaded together, they cannot work with each other.
[...]
Could we remove all device-related adaptation (rest/ssh/netconf/of... proxy) from 
these mechanism drivers to the agent side, leaving only the necessary code in the 
plugin?

In the Snabb NFV mech driver [*] we are trying a design that you might find 
interesting.

We stripped the mech driver down to bare bones and declared that the agent has 
to access the Neutron configuration independently.

In practice this means that our out-of-tree agent is connecting to Neutron's 
MySQL database directly instead of being fed config changes by custom sync code 
in ML2. This means there is very little work for the mech driver to do (in our 
case, checking configuration and performing special port binding).
[IrenaB] The DB access approach was previously used by OVS and LinuxBridge 
Agents and at some point (~Grizzly Release) was changed to use RPC 
communication. You can see the reasons and design details covered by [1] and 
the patch itself in [2]

[1]: 
https://docs.google.com/document/d/1MbcBA2Os4b98ybdgAw2qe_68R1NG6KMh8zdZKgOlpvg/edit?pli=1
[2] https://review.openstack.org/#/c/9591/

We are also trying to avoid running an OVS/LinuxBridge-style agent on the 
compute hosts in order to keep the code footprint small. I hope we will succeed 
-- I'd love to hear if somebody else is running agent-less? Currently we 
depend on a really ugly workaround to make VIF binding succeed and we are 
looking for a clean alternative: 
https://github.com/lukego/neutron/commit/31d6d0657aeae9fd97a63e4d53da34fb86be92f7
[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there will be 
a need for some sort of agent to handle port update events, even though it might 
not be required in order to bind the port.

[*] Snabb NFV mech driver code: https://review.openstack.org/95711

Cheers,
-Luke


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy mechanism drivers in ML2

2014-06-10 Thread Irena Berezovsky
Hi Luke,
Very impressive solution!

I do not think there is a problem with keeping the agent out of the tree in the 
short term, but I would highly recommend putting it upstream in the longer term.
You will benefit from quite valuable community review. Most importantly, it will 
allow you to keep your code aligned with the neutron code base as much as 
possible. Once general changes are made by other people, your code will be 
taken into account and won’t be broken accidentally.
I would like to mention that there is Modular L2 Agent initiative driven by ML2 
team, you may be interested to follow: 
https://etherpad.openstack.org/p/modular-l2-agent-outline

Best Regards,
Irena

From: luk...@gmail.com [mailto:luk...@gmail.com] On Behalf Of Luke Gorrie
Sent: Tuesday, June 10, 2014 12:48 PM
To: Irena Berezovsky
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy 
mechanism drivers in ML2

Hi Irena,

Thanks for the very interesting perspective!

On 10 June 2014 10:57, Irena Berezovsky ire...@mellanox.com wrote:
[IrenaB] The DB access approach was previously used by OVS and LinuxBridge 
Agents and at some point (~Grizzly Release) was changed to use RPC 
communication.

That is very interesting. I've been involved in OpenStack since the Havana 
cycle and was not familiar with the old design.

I'm optimistic about the scalability of our implementation. We have 
sanity-tested with 300 compute nodes and a 300ms sync interval. I am sure we 
will find some parts that we need to spend optimization energy on, however.

The other scalability aspect we are being careful of is the cost of individual 
update operations. (In LinuxBridge that would be the iptables, ebtables, etc 
commands.) In our implementation the compute nodes preprocess the Neutron 
config into a small config file for the local traffic plane and then load that 
in one atomic operation (SIGHUP style). Again, I am sure we will find cases 
that we need to spend optimization effort on, but the design seems scalable to 
me thanks to the atomicity.
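
For illustration, the atomic-swap pattern described above can be sketched roughly as follows. This is a minimal illustration only, not the actual Snabb code; the file path, function names, and the SIGHUP reload convention are assumptions:

```python
import os
import signal
import tempfile

def atomic_write(path, data):
    """Write 'data' to 'path' atomically: readers only ever see the old
    file or the complete new file, never a partially written one."""
    dir_name = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # rename(2) is atomic within a filesystem
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise

def reload_traffic_plane(pid, path, data):
    """Swap in the new config file, then signal the data plane to reload
    it in one step (hypothetical SIGHUP convention)."""
    atomic_write(path, data)
    os.kill(pid, signal.SIGHUP)
```

The point of the design is that the data plane only ever loads a complete configuration file, so a crash or preemption mid-update cannot leave it with half a config.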

For concreteness, here is the agent we are running on the DB node to make the 
Neutron config available:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-master

and here is the agent that pulls it onto the compute node:
https://github.com/SnabbCo/snabbswitch/blob/master/src/designs/neutron/neutron-sync-agent

TL;DR we snapshot the config with mysqldump and distribute it with git.

Here's the sanity test I referred to: 
https://groups.google.com/d/msg/snabb-devel/blmDuCgoknc/PP_oMgopiB4J

I will be glad to report on our experience and what we change based on our 
deployment experience during the Juno cycle.

[IrenaB] I think that for “Non SDN Controller” Mechanism Drivers there will be 
a need for some sort of agent to handle port update events, even though it might 
not be required in order to bind the port.

True. Indeed, we do have an agent running on the compute host, and we are 
synchronizing it with port updates based on the mechanism described above.

Really what I mean is: Can we keep our agent out-of-tree and apart from ML2 and 
decide for ourselves how to keep it synchronized (instead of using the MQ)? Is 
there a precedent for doing things this way in an ML2 mech driver (e.g. one of 
the SDNs)?

Cheers!
-Luke


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-22 Thread Irena Berezovsky
+1 to attend,

Regards,
Irena

-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com] 
Sent: Wednesday, May 21, 2014 5:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

Hi,

The session that we had on the Quality of Service API extension was well 
attended - I would like to keep the momentum going by proposing a weekly IRC 
meeting.

How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-12 Thread Irena Berezovsky
Hi all,
Where are we going to meet for this meeting at 1:00 pm today?

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Friday, May 09, 2014 10:52 PM
To: Sandhya Dasu (sadasu); Brent Eagles; Steve Gordon
Cc: Dan Smith; OpenStack Development Mailing List (not for usage questions); 
John Garbutt; Russell Bryant; yunhong-jiang; Itzik Brown; Yongli He; Jay Pipes; 
Irena Berezovsky
Subject: Re: Informal meeting before SR-IOV summit presentation

the program pods area should be open.

On 5/9/14, 3:33 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

I have no idea, how to pick a location.
Should we meet at the Cisco booth at 1pm and then take it from there?

Any other ideas?

Thanks,
sandhya

On 5/9/14 3:17 PM, Brent Eagles beag...@redhat.com wrote:

On 09/05/14 04:21 PM, Sandhya Dasu (sadasu) wrote:
 Thanks for all your replies.

 Thanks for the great inputs on how to frame the discussion in the 
etherpad  so it becomes easier for people to get on board. We will 
add author indent  to track the source of the changes. Will work on 
cleaning that up.

 Regarding the session itself, as you probably know, there was an 
attempt  in Icehouse to get the sr-iov work going. We found that the 
time allotted  for the session was not sufficient to get to all the 
use cases and discuss  alternate views.

 This time around we want to be better prepared and so would like to 
keep  only a couple of open times for the actual session. Hence, the 
request for  the early meeting.

 How does Monday 1pm sound?

 Thanks,
 Sandhya

That time is good with me.

Cheers,

Brent




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Irena Berezovsky
Works for me as well.
What would be the meeting place? 

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Friday, May 09, 2014 10:13 PM
To: Sandhya Dasu (sadasu); Steve Gordon
Cc: Dan Smith; OpenStack Development Mailing List (not for usage questions); 
John Garbutt; Russell Bryant; yunhong-jiang; Itzik Brown; Brent Eagles; Yongli 
He; Jay Pipes; Irena Berezovsky
Subject: Re: Informal meeting before SR-IOV summit presentation

It sounds good to me.

Thanks Sandhya for organizing it.

Robert

On 5/9/14, 2:51 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Thanks for all your replies.

Thanks for the great inputs on how to frame the discussion in the 
etherpad so it becomes easier for people to get on board. We will add 
author indent to track the source of the changes. Will work on cleaning that 
up.

Regarding the session itself, as you probably know, there was an 
attempt in Icehouse to get the sr-iov work going. We found that the 
time allotted for the session was not sufficient to get to all the use 
cases and discuss alternate views.

This time around we want to be better prepared and so would like to 
keep only a couple of open times for the actual session. Hence, the 
request for the early meeting.

How does Monday 1pm sound?

Thanks,
Sandhya

On 5/9/14 11:44 AM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -
 From: Robert Li (baoli) ba...@cisco.com
 Subject: Re: Informal meeting before SR-IOV summit presentation
 
 This is the one that Irena created:
 https://etherpad.openstack.org/p/pci_passthrough_cross_project

Thanks, I missed this as it wasn't linked from the design summit Wiki 
page.

-Steve

 On 5/8/14, 4:33 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
   It would be nice to have an informal discussion / unconference
session
   before the actual summit session on SR-IOV. During the previous
IRC
   meeting, we were really close to identifying the different use
cases.
   There was a dangling discussion on introducing another level of 
   indirection between the vnic_types exposed via the nova boot 
   API
and
 how
   it would be represented internally. It would be ideal to have
these 2
   discussions converged before the summit session.
  
  What would be the purpose of doing that before the session? IMHO, 
  a large part of being able to solve this problem is getting 
  everyone
up to
  speed on what this means, what the caveats are, and what we're
trying to
  solve. If we do some of that outside the scope of the larger
audience, I
  expect we'll get less interaction (or end up covering it again) 
  in
the
  session.
  
  That said, if there's something I'm missing that needs to be
resolved
  ahead of time, then that's fine, but I expect the best plan is to
just
  keep the discussion to the session. Afterwards, additional things
can be
  discussed in a one-off manner, but getting everyone on the same 
  page
is
  largely the point of having a session in the first place IMHO.
 
 Right, in spite of my previous response...looking at the etherpad
there
 is nothing there to frame the discussion at the moment:
 
 https://etherpad.openstack.org/p/juno-nova-sriov-support
 
 I think populating this should be a priority rather than organizing 
 another session/meeting?
 
 Steve
 
 

--
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform Red Hat 
Canada (Toronto, Ontario)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Interest in a meeting at the Networking pod at the design summit?

2014-05-06 Thread Irena Berezovsky
I would like to join this discussion.

Thanks,
Irena

-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com] 
Sent: Tuesday, May 06, 2014 7:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][QoS] Interest in a meeting at the Networking 
pod at the design summit?

For those attending, to discuss the QoS API current status?

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Problem plugging I/F into Neutron...

2014-03-30 Thread Irena Berezovsky
Hi Paul,
Please be aware that there was also a change in Nova to support ovs_hybrid_plug:
https://review.openstack.org/#/c/83190/
I am not sure, but it may be worth checking that the Nova code and the nova.conf 
you are using are aligned with the Neutron code.

Hope it helps,
Irena


From: Paul Michali (pcm) [mailto:p...@cisco.com]
Sent: Saturday, March 29, 2014 1:17 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Problem plugging I/F into Neutron...

Hi,

I have a VM that I start up outside of OpenStack (as a short term solution, 
until we get it working inside a Nova VM), using KVM. It has scripts associated 
with the three interfaces that are created, to hook this VM into Neutron. One 
I/F is on br-ex (connected to the public network for DevStack), another to 
br-int (connected to a management network that is created), and a third is 
connected to br-int (connected to the private network for DevStack). It's 
understood these are hacks to get things going and can be brittle.  With 
DevStack, I have a vanilla localrc, so using ML2, without any ML2 settings 
specified.

Now, the first two scripts use internal Neutron client calls to create the 
port, and then plug the VIF. The third uses Neutron to create the port, and 
then Nova to plug the VIF. I don't know why - I inherited the scripts.

On one system, where Nova is based on commit b3e2e05 (10 days ago), this all 
works just peachy. Interfaces are hooked in and I can ping to my heart's 
content. On another system, that I just reimaged today, using the latest and 
greatest OpenStack projects, the third script fails.

I talked to Nova folks, and the VIF is now an object, instead of a plain dict, 
and therefore calls on the object fail (as the script just provides a dict). I 
started trying to convert the vif to an object, but in discussing with a 
co-worker, we thought that we could instead use Neutron calls for all of the setup 
of this third interface.

Well, I tried, and the port is created, but unlike the other system, the port 
is DOWN, and I cannot ping to or from it (the other ports still work fine, with 
this newer OpenStack repo). One difference is that the port is showing 
{"port_filter": true, "ovs_hybrid_plug": true} for binding:vif_details in the 
neutron port-show output. On the older system this is empty (so this must be due 
to new changes in Neutron?)


Here is the Neutron based code (trimmed) to do the create and plugging:

import neutron.agent.linux.interface as vif_driver
from neutronclient.neutron import client as qclient

qc = qclient.Client('2.0', auth_url=KEYSTONE_URL, username=user,
                    tenant_name=tenant, password=pw)

prefix, net_name = interface.split('__')
port_name = net_name + '_p'
try:
    nw_id = qc.list_networks(name=net_name)['networks'][0]['id']
except qcexp.NeutronClientException as e:
    ...

p_spec = {'port': {'admin_state_up': True,
   'name': port_name,
   'network_id': nw_id,
   'mac_address': mac_addr,
   'binding:host_id': hostname,
   'device_id': vm_uuid,
   'device_owner': 'compute:None'}}

try:
    port = qc.create_port(p_spec)
except qcexp.NeutronClientException as e:
    ...

port_id = port['port']['id']
br_name = 'br-int'

conf = cfg.CONF
config.register_root_helper(conf)
conf.register_opts(vif_driver.OPTS)

driver = vif_driver.OVSInterfaceDriver(cfg.CONF)
driver.plug(nw_id, port_id, interface, mac_addr, br_name)

Finally, here are the questions (hope you stuck with the long message)...

Any idea why the neutron version is not working? I know there were a bunch of 
recent changes.
Is there a way for me to turn off the ovs_hybrid_plug and port_filter flags? 
Should I?
Should I go back to using Nova and build a VIF object?
If so, any reason why the Neutron version would not work?
Is there a way to do a similar thing, but via using Northbound APIs (so it 
isn't as brittle)?

Thanks in advance!

PCM (Paul Michali)

MAIL . p...@cisco.com
IRC ... pcm_ (irc.freenode.com)
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to extend port capability using binding:profile

2014-03-18 Thread Irena Berezovsky
Hi Li Ma,
ML2 binding:profile is accessible for admin user only.
Currently it can be set via port-create/port-update CLI following this syntax:
'neutron port-create netX --binding:profile type=dict keyX=valX'
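
For completeness, a sketch of how the same request body would be built when going through the Python API instead of the CLI. This is a minimal illustration; 'keyX'/'valX' are just the placeholders from the CLI example above, and the commented client call is an assumption about how it would be wired up:

```python
def build_profile_update(profile):
    """Build the body for a port-update request that sets binding:profile
    (admin-only, as noted above)."""
    if not isinstance(profile, dict):
        raise TypeError("binding:profile must be a dict")
    return {'port': {'binding:profile': profile}}

# With python-neutronclient, the body would then be sent as something like:
#   neutron_client.update_port(port_id, build_profile_update({'keyX': 'valX'}))
```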

BR,
Irena

-Original Message-
From: Li Ma [mailto:m...@awcloud.com] 
Sent: Sunday, March 16, 2014 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] how to extend port capability using binding:profile

Hi all,

I'd like to extend port capability using ML2 binding:profile. I checked the 
official docs and it seems that there's no guide for it.

Is there any CLI support for port binding:profile?
Or is there any development guide on how to set up profile?

--
---
cheers,
Li Ma




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Irena Berezovsky
Hi Robert, Sandhya,
I have pushed the reference implementation SriovAgentMechanismDriverBase as 
part of the following WIP:
https://review.openstack.org/#/c/74464/

The code is in mech_agent.py, and very simple code for mech_sriov_nic_switch.py.

Please take a look and review.

BR,
Irena

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com] 
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development Mailing 
List (not for usage questions); Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Robert,
Seems to me that many code lines are duplicated following your proposal.
For agent-based MDs, I would prefer to inherit from 
SimpleAgentMechanismDriverBase and add a verify method there for 
supported_pci_vendor_info. A specific MD will pass the list of supported 
pci_vendor_info values. The 'try_to_bind_segment_for_agent' method will call 
'supported_pci_vendor_info' and, if supported, continue with the binding flow. 
Maybe instead of a decorator method, it should just be a utility method?
I think that the check for supported vnic_type and pci_vendor_info 
should be done in order to see if the MD should bind the port. If the answer is 
yes, no more checks are required.
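
For illustration, the utility-method variant suggested here might look roughly like this. This is a sketch only; the class name, attribute names, port-dict keys, and the example vendor ID are assumptions, not code from the patch under review:

```python
# Hypothetical port-dict keys, mirroring the portbindings constants.
VNIC_TYPE = 'binding:vnic_type'
PROFILE = 'binding:profile'
VNIC_NORMAL = 'normal'

class SriovCheckMixin(object):
    """Utility-method variant of the vnic_type / pci_vendor_info check."""

    supported_vnic_types = ('direct', 'macvtap')
    supported_pci_vendor_info = ('8086:10ed',)  # example vendor:product

    def is_port_supported(self, port):
        """Return True if this mechanism driver should try to bind the port."""
        vnic_type = port.get(VNIC_TYPE, VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            return False
        vendor_info = port.get(PROFILE, {}).get('pci_vendor_info')
        return vendor_info in self.supported_pci_vendor_info
```

A plain boolean helper like this keeps the bind-or-skip decision in one place without the indirection of a decorator.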

Coming back to the question I asked earlier, for non-agent MD, how would you 
deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage 
questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from 
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]

def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name, 'info':
                           pci_vendor_info})
                return
        f(self, context)
    return wrapper

@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running on the port's host.

    MechanismDrivers that use this base class and require an agent must
    pass the agent type to __init__(), and must implement
    try_to_bind_segment_for_agent() and check_segment_for_agent().

    MechanismDrivers that use this base class may provide supported vendor
    information, and must provide the supported vnic types

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Irena Berezovsky
Hi Robert,
I think what you mentioned can be achieved by calling into a specific MD method 
from SriovAgentMechanismDriverBase.try_to_bind_segment_for_agent, maybe 
something like 'get_vif_details', before it calls context.set_binding.
Would you mind continuing the discussion on the gerrit review 
https://review.openstack.org/#/c/74464/ ?
I think it will be easier to follow up on the comments and decisions there.
I think it will be easier to follow up the comments and decisions.

Thanks,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Wednesday, March 05, 2014 6:10 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions); Sandhya Dasu (sadasu); Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Irena,

The main reason for me to do it that way is how vif_details should be set up in 
our case. Do you need the vlan in vif_details? The behavior in the existing base 
classes is that vif_details is set at driver init time. In our 
case, it needs to be set up during bind_port().

thanks,
Robert


On 3/5/14 7:37 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, Sandhya,
I have pushed the reference implementation 
SriovAgentMechanismDriverBase as part of the following WIP:
https://review.openstack.org/#/c/74464/

The code is in mech_agent.py, and very simple code for 
mech_sriov_nic_switch.py.

Please take a look and review.

BR,
Irena

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development 
Mailing List (not for usage questions); Robert Kukura; Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
binding of ports

Hi Robert,
Seems to me that many code lines are duplicated following your proposal.
For agent-based MDs, I would prefer to inherit from 
SimpleAgentMechanismDriverBase and add a verify method there for 
supported_pci_vendor_info. A specific MD will pass the list of supported 
pci_vendor_info values. The 'try_to_bind_segment_for_agent' method will 
call 'supported_pci_vendor_info' and, if supported, continue with the 
binding flow.
Maybe instead of a decorator method, it should just be a utility method?
I think that the check for supported vnic_type and pci_vendor_info 
should be done in order to see if the MD should bind the port. If 
the answer is yes, no more checks are required.

Coming back to the question I asked earlier, for non-agent MD, how 
would you deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for 
usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen 
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
binding of ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from 
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]

def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-04 Thread Irena Berezovsky
Hi Robert,
Seems to me that many code lines are duplicated following your proposal.
For agent-based MDs, I would prefer to inherit from 
SimpleAgentMechanismDriverBase and add a verify method there for 
supported_pci_vendor_info. A specific MD will pass the list of supported 
pci_vendor_info values. The 'try_to_bind_segment_for_agent' method will call 
'supported_pci_vendor_info' and, if supported, continue with the binding flow. 
Maybe instead of a decorator method, it should just be a utility method?
I think that the check for supported vnic_type and pci_vendor_info 
should be done in order to see if the MD should bind the port. If the answer is 
yes, no more checks are required.

Coming back to the question I asked earlier, for non-agent MD, how would you 
deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage 
questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from 
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]

def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name, 'info':
                           pci_vendor_info})
                return
        f(self, context)
    return wrapper

@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running on the port's host.

    MechanismDrivers that use this base class and require an agent must
    pass the agent type to __init__(), and must implement
    try_to_bind_segment_for_agent() and check_segment_for_agent().

    MechanismDrivers that use this base class may provide supported vendor
    information, and must provide the supported vnic types.
    """

    def __init__(self, agent_type=None, supported_pci_vendor_info=None,
                 supported_vnic_types=DEFAULT_VNIC_TYPES_SUPPORTED):
        """Initialize base class for SR-IOV capable mechanism drivers.

        :param agent_type: Constant identifying agent type in agents_db
        :param supported_pci_vendor_info: a list of "vendor_id:product_id"
        :param supported_vnic_types: The binding:vnic_type values we can bind
        """
        self.supported_pci_vendor_info = supported_pci_vendor_info or []
        self.agent_type = agent_type
        self.supported_vnic_types = supported_vnic_types

def initialize(self
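The snippet above is cut off by the archive at initialize(). To show the effect of the decorator, here is a self-contained sketch of how a concrete driver might consume it; the portbindings constants are stubbed in and FakeSriovDriver and the vendor id are hypothetical (the real code imports the constants from neutron.extensions.portbindings):

```python
import functools

# Stand-ins for the portbindings constants used above, so the sketch
# runs on its own; the real driver imports them from neutron.
VNIC_TYPE = 'binding:vnic_type'
VNIC_NORMAL = 'normal'
VNIC_DIRECT = 'direct'
PROFILE = 'binding:profile'


def check_vnic_type_and_vendor_info(f):
    """Simplified version of the decorator from the snippet above."""
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(VNIC_TYPE, VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            return  # skip binding: unsupported vnic_type
        info = context.current.get(PROFILE, {}).get('pci_vendor_info')
        if self.supported_pci_vendor_info and \
                info not in self.supported_pci_vendor_info:
            return  # skip binding: unsupported PCI vendor/product
        return f(self, context)
    return wrapper


class FakeContext(object):
    def __init__(self, port):
        self.current = port


class FakeSriovDriver(object):
    supported_vnic_types = [VNIC_DIRECT]
    supported_pci_vendor_info = ['8086:10ed']  # hypothetical VF id
    bound = False

    @check_vnic_type_and_vendor_info
    def bind_port(self, context):
        self.bound = True


driver = FakeSriovDriver()
# A 'direct' port with supported vendor info gets bound.
driver.bind_port(FakeContext({VNIC_TYPE: 'direct',
                              PROFILE: {'pci_vendor_info': '8086:10ed'}}))
print(driver.bound)  # True

driver.bound = False
# A 'normal' (virtio) port is silently skipped by the decorator.
driver.bind_port(FakeContext({VNIC_TYPE: 'normal'}))
print(driver.bound)  # False
```

The decorator lets each bind_port/update_port implementation assume it only ever sees ports the driver is actually responsible for.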

Re: [openstack-dev] PCI SRIOV meeting suspend?

2014-03-03 Thread Irena Berezovsky
Hi Yongli He,
The PCI SRIOV meeting has switched back to weekly occurrences.
Next meeting will be today at usual time slot:
https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting

In coming meetings we would like to work on content to be proposed for Juno.
BR,
Irena

-Original Message-
From: yongli he [mailto:yongli...@intel.com] 
Sent: Tuesday, March 04, 2014 3:28 AM
To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing List
Subject: PCI SRIOV meeting suspend?

HI, Robert

has the meeting stopped for a while?

Also, if it is convenient for you, please review this patch set and check 
whether the interface is OK.


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z

Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron: Need help with tox failure in VPN code

2014-03-03 Thread Irena Berezovsky
Hi Paul,
I think the tests fail because of the SystemExit exception raised by 
service_base.py when the plugin fails to load drivers. It terminates the tests.
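SystemExit inherits from BaseException rather than Exception, which is why it escapes the test framework's usual error handling and tears down the whole run. A minimal, self-contained illustration (load_drivers here is a hypothetical stand-in for the service_base.py behaviour, not the real function):

```python
def load_drivers(available):
    # Hypothetical simplification of service_base.py: bail out of the
    # process entirely when no service driver can be loaded.
    if not available:
        raise SystemExit("No providers specified for service")
    return available


# A broad 'except Exception' does NOT catch it...
caught_by_exception = False
try:
    load_drivers([])
except Exception:
    caught_by_exception = True
except SystemExit:
    pass

print(caught_by_exception)  # False

# ...so a test has to guard against SystemExit (or BaseException)
# explicitly, e.g. with assertRaises(SystemExit, ...).
try:
    load_drivers([])
except SystemExit as exc:
    print(str(exc))  # No providers specified for service
```

This is why a single plugin that fails driver loading can take an entire unit test subset down with return code oddities instead of a normal failure report.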

BR,
Irena
From: Paul Michali [mailto:p...@cisco.com]
Sent: Tuesday, March 04, 2014 7:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Neutron: Need help with tox failure in VPN code

Hi,

I'm stuck and can use some guidance here...please!

I have a change set out for review that used the VPN Service Type Framework ( 
https://review.openstack.org/74144). Everything worked fine, passed Jenkins, 
etc.

Found out that the STF won't make it to I-3, so I removed the dependency from 
my change set and tried to modify the plugin.py file to use some STF logic 
(like LBaaS uses) to load the desired service driver that is specified as the 
default. Adjusted the code to no longer use provider info.

Well, in doing so, tox fails, and unfortunately there is little info on the 
failure. This can be seen by running a subset of the tests, where 2 fail:

tox -e py27 -v -- neutron.tests.unit.services.vpn

The output gives only the name of a failing test case for one, and a mention of 
return code 10 for the other, with no other info on the failure reason. I didn't 
see this on a full tox run in my repo, but Jenkins failed and Akihiro noticed it 
too when running the above subset of the suite (thanks!).


I've narrowed it down a bit, but have no idea why it fails...

One, it seems to be some interaction between test_vpnaas_driver_plugin.py and 
the two service driver tests (cisco_ipsec.py and ipsec.py). I can remove either 
one of the service driver tests cases, and it will still fail with the other 
one (so even the reference code fails).

Two, if I change plugin.py to set self.driver to the reference device driver 
(as is done in the latest patch set) it works fine with all test cases.

Three, it seems to be a test-only issue, because I can run devstack with the 
logic I have in plugin.py, currently commented out in __init__(), and 
successfully load either the reference or Cisco service driver by changing 
neutron.conf.

It seems like I'm doing something wrong in loading the service driver, or this 
implementation is somehow interacting with the tests.

If anyone has ideas on what is wrong, or a better way to load the service 
driver, please let me know. I was thinking I could read and parse neutron.conf 
manually and then load the service driver, but there must be a better way!
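On loading the service driver from configuration: the usual pattern (roughly what the LBaaS plugin does through neutron's importutils) is to put the driver's dotted path in config and import it dynamically, rather than parsing neutron.conf by hand. A hedged sketch using only the stdlib, with importlib standing in for importutils; the real plugin would read the path from cfg.CONF:

```python
import importlib


def load_driver(dotted_path):
    """Import 'package.module.ClassName' and return the class.

    Mirrors the behaviour of importutils.import_class; a real plugin
    would take dotted_path from its config option rather than as an
    argument.
    """
    module_name, class_name = dotted_path.rsplit('.', 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Demonstration: load a stdlib class the same way a configured
# service driver class would be loaded.
cls = load_driver('collections.OrderedDict')
print(cls.__name__)  # OrderedDict
```

With this pattern, switching between the reference and Cisco service drivers is just a neutron.conf change, and the tests can inject a fake dotted path instead of touching real config files.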

Thanks!

PCM (Paul Michali)

MAIL: p...@cisco.com
IRC: pcm_ (irc.freenode.net)
Twitter: @pmichali
GPG key: 4525ECC253E31A83
Fingerprint: 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-02-25 Thread Irena Berezovsky
Hi Sandhya,
I mentioned the port state with regard to the operations that can be applied to 
a neutron port after it is already bound to a certain virtual interface. 
Since in my case there will be a neutron L2 agent on the host, it will manage 
the port admin state locally. I am not sure how it should work in your case, or 
whether you need an L2 agent for this.

BR,
Irena

-Original Message-
From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com] 
Sent: Tuesday, February 25, 2014 4:19 PM
To: OpenStack Development Mailing List (not for usage questions); Irena 
Berezovsky; Robert Kukura; Robert Li (baoli); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi,
As a follow up from today's IRC, Irena, are you looking to write the below 
mentioned Base/Mixin class that inherits from AgentMechanismDriverBase class? 
When you mentioned port state, were you referring to the 
validate_port_binding() method?

Pls clarify.

Thanks,
Sandhya

On 2/6/14 7:57 AM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Hi Bob and Irena,
   Thanks for the clarification. Irena, I am not opposed to a 
SriovMechanismDriverBase/Mixin approach, but I want to first figure out 
how much common functionality there is. Have you already looked at this?

Thanks,
Sandhya

On 2/5/14 1:58 AM, Irena Berezovsky ire...@mellanox.com wrote:

Please see inline my understanding

-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com]
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for 
usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
binding of ports

On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
 Hi,
  I have a couple of questions for ML2 experts regarding support 
 of SR-IOV ports.

I'll try, but I think these questions might be more about how the 
various SR-IOV implementations will work than about ML2 itself...

 1. The SR-IOV ports would not be managed by the OVS or linuxbridge L2 
 agents. So, how does a MD for SR-IOV ports bind/unbind its ports to 
 the host? Will it just be a db update?

I think whether or not to use an L2 agent depends on the specific 
SR-IOV implementation. Some (Mellanox?) might use an L2 agent, while 
others
(Cisco?) might put information in binding:vif_details that lets the 
nova VIF driver take care of setting up the port without an L2 agent.
[IrenaB] Based on the VIF_Type that the MD defines, and going forward with 
other binding:vif_details attributes, the VIFDriver should do the VIF plugging 
part. As for the required networking configuration, it is usually done either 
by an L2 agent or an external controller, depending on the MD.

 
 2. Also, how do we handle the functionality in mech_agent.py, within 
 the SR-IOV context?

My guess is that those SR-IOV MechanismDrivers that use an L2 agent 
would inherit the AgentMechanismDriverBase class if it provides useful 
functionality, but any MechanismDriver implementation is free to not 
use this base class if its not applicable. I'm not sure if an 
SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being 
planned, and how that would relate to AgentMechanismDriverBase.

[IrenaB] Agree with Bob, and as I stated before I think there is a 
need for SriovMechanismDriverBase/Mixin that provides all the generic 
functionality and helper methods that are common to SRIOV ports.
-Bob

 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com mailto:sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage 
questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org, Irena Berezovsky  
ire...@mellanox.com mailto:ire...@mellanox.com, Robert Li (baoli)
 ba...@cisco.com mailto:ba...@cisco.com, Robert Kukura  
rkuk...@redhat.com mailto:rkuk...@redhat.com, Brian Bowen  
(brbowen) brbo...@cisco.com mailto:brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV  
extra hr of discussion today
 
 Hi,
 Since, openstack-meeting-alt seems to be in use, baoli and 
 myself are moving to openstack-meeting. Hopefully, Bob Kukura  
 Irena can join soon.
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com mailto:sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage 
questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 1:26 PM
 To: Irena Berezovsky ire...@mellanox.com  
mailto:ire...@mellanox.com, Robert Li (baoli) ba...@cisco.com  
mailto:ba...@cisco.com, Robert Kukura rkuk...@redhat.com  
mailto:rkuk...@redhat.com, OpenStack Development Mailing List 
(not for usage questions)
 openstack-dev

Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?

2014-02-10 Thread Irena Berezovsky
Hi,
As stated below, this work is already happening in both nova and neutron.
Please take a look at the following discussions:
https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting

For neutron part there are two different flavors that are coming as part of 
this effort:
1. Cisco SRIOV supporting 802.1QBH - no L2 agent
2. Mellanox Flavor - SRIOV embedded switch (HW_VEB) - with L2 agent.
My guess is that the second flavor, the SRIOV embedded switch, should work for 
Intel NICs as well.

Please join the PCI pass-through meeting discussions to make sure you do not do 
any redundant work, or just follow up on the mailing list.

BR,
Irena


-Original Message-
From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com] 
Sent: Monday, February 10, 2014 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 
framework?

Hi,

SRIOV is under implementation in nova and neutron. Did you have a look to :
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov


On Mon, Feb 10, 2014 at 7:27 AM, Isaku Yamahata isaku.yamah...@gmail.com 
wrote:
 On Sat, Feb 08, 2014 at 03:49:46AM +, Yang, Yi Y 
 yi.y.y...@intel.com wrote:

 Hi, All

 Hi.


 I want to write a new neutron L2 plugin using the ML2 framework. I noticed 
 openvswitch and linuxbridge have been ported into the ML2 framework, but it 
 seems much code was removed compared to the standalone L2 plugins; I guess 
 some code has been moved into a common library. Now I want to write an L2 
 plugin to enable switching for an SR-IOV 10G NIC. I think I need to write as follows:


having such a feature would be awesome : did you fill a BP for that?


 1. a new mechanism driver neutron/plugins/ml2/drivers/mech_XXX.py, but from 
 source code, it seems nothing to do.

You mean you want to use AgentMechanismDriverBase directly? This is an 
abstract class, due to the check_segment_for_agent method.


 This requires to define how your plugin utilize network.
 If multi tenant network is wanted, what/how technology will be used.
 The common one is VLAN or tunneling(GRE, VXLAN).
 This depends on what feature your NIC supports.


 2. a new agent neutron/plugins/XXX/ XXX_neutron_plugin.py

I don't know if this would be mandatory. Maybe you can just add the necessary 
information with extend_port_dict while your MD binds the port, as proposed by 
this patch:
https://review.openstack.org/#/c/69783/

Nova will then configure the port correctly. The only need for an agent would 
be to populate the agent DB with supported segment types, so that during 
bind_port, the MD finds an appropriate segment (with check_segment_for_agent).
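To make the agent-based binding path concrete, here is a self-contained sketch of what a check_segment_for_agent implementation typically looks like. The constant names and the 'interface_mappings' key are illustrative stand-ins for what a real driver would import from driver_api and read from the agent's 'configurations' column in the agent DB:

```python
# Stubs for the ML2 driver_api constants the real code would import.
NETWORK_TYPE = 'network_type'
TYPE_VLAN = 'vlan'
TYPE_FLAT = 'flat'
PHYSICAL_NETWORK = 'physical_network'


def check_segment_for_agent(segment, agent):
    """Return True if the agent on a host can wire up this segment.

    The agent advertises its physical network mappings in its
    'configurations' dict; the driver checks the segment's network
    type and physical network against them.
    """
    mappings = agent.get('configurations', {}).get('interface_mappings', {})
    if segment[NETWORK_TYPE] not in (TYPE_VLAN, TYPE_FLAT):
        return False
    return segment.get(PHYSICAL_NETWORK) in mappings


# An agent that has mapped physnet1 to a host interface.
agent = {'configurations': {'interface_mappings': {'physnet1': 'eth2'}}}

print(check_segment_for_agent(
    {NETWORK_TYPE: 'vlan', PHYSICAL_NETWORK: 'physnet1'}, agent))  # True
print(check_segment_for_agent(
    {NETWORK_TYPE: 'vxlan', PHYSICAL_NETWORK: 'physnet1'}, agent))  # False
```

During bind_port, ML2 iterates the network's segments and binds the first one for which a live agent on the port's host passes this check.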


 After this, an issue is how to let neutron know about the plugin and load it 
 by default or by configuration. Debugging is also an issue; nobody writes 
 code correctly on the first try :-). Does neutron have any good debugging 
 approach for a newbie?

 LOG.debug and the debug middleware.
 If there is any better way, I'd also like to know.

 thanks,

 I'm very eager to be able to get your help and sincerely thank you in 
 advance.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Isaku Yamahata isaku.yamah...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-02-04 Thread Irena Berezovsky
Please see inline my understanding

-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com] 
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage 
questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
 Hi,
  I have a couple of questions for ML2 experts regarding support of 
 SR-IOV ports.

I'll try, but I think these questions might be more about how the various 
SR-IOV implementations will work than about ML2 itself...

 1. The SR-IOV ports would not be managed by the OVS or linuxbridge L2 
 agents. So, how does a MD for SR-IOV ports bind/unbind its ports to 
 the host? Will it just be a db update?

I think whether or not to use an L2 agent depends on the specific SR-IOV 
implementation. Some (Mellanox?) might use an L2 agent, while others
(Cisco?) might put information in binding:vif_details that lets the nova VIF 
driver take care of setting up the port without an L2 agent.
[IrenaB] Based on the VIF_Type that the MD defines, and going forward with 
other binding:vif_details attributes, the VIFDriver should do the VIF plugging 
part. As for the required networking configuration, it is usually done either 
by an L2 agent or an external controller, depending on the MD.
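As an illustration of the no-agent path described here, a nova-side VIF driver could branch on the bound vif_type and pull the SR-IOV specifics out of binding:vif_details. This is purely hypothetical: the 'hw_veb' vif_type and the vif_details keys shown were still being settled in this discussion:

```python
def plug(port):
    """Dispatch VIF plugging based on the bound vif_type.

    'hw_veb' and the vif_details keys below are illustrative only;
    the actual names were under discussion in the referenced BPs.
    """
    vif_type = port.get('binding:vif_type')
    details = port.get('binding:vif_details', {})
    if vif_type == 'hw_veb':
        # No L2 agent: nova itself configures the VF using the details
        # the mechanism driver put into the binding.
        return ('configure_vf', details.get('vlan'), details.get('profileid'))
    elif vif_type == 'ovs':
        # Agent-managed path: just plug into the integration bridge.
        return ('plug_ovs', None, None)
    raise ValueError('unsupported vif_type: %s' % vif_type)


port = {'binding:vif_type': 'hw_veb',
        'binding:vif_details': {'vlan': 100, 'profileid': 'pp-1'}}
print(plug(port))  # ('configure_vf', 100, 'pp-1')
```

The point of the split is that the mechanism driver decides *what* the port needs (vif_type plus vif_details), while the consumer of the binding (nova's VIF driver or an L2 agent) decides *how* to realize it.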

 
 2. Also, how do we handle the functionality in mech_agent.py, within 
 the SR-IOV context?

My guess is that those SR-IOV MechanismDrivers that use an L2 agent would 
inherit the AgentMechanismDriverBase class if it provides useful functionality, 
but any MechanismDriver implementation is free to not use this base class if 
its not applicable. I'm not sure if an SriovMechanismDriverBase (or 
SriovMechanismDriverMixin) class is being planned, and how that would relate to 
AgentMechanismDriverBase.

[IrenaB] Agree with Bob, and as I stated before I think there is a need for 
SriovMechanismDriverBase/Mixin that provides all the generic functionality and 
helper methods that are common to SRIOV ports.  
-Bob

 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com mailto:sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org, Irena Berezovsky 
 ire...@mellanox.com mailto:ire...@mellanox.com, Robert Li (baoli)
 ba...@cisco.com mailto:ba...@cisco.com, Robert Kukura 
 rkuk...@redhat.com mailto:rkuk...@redhat.com, Brian Bowen 
 (brbowen) brbo...@cisco.com mailto:brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
 extra hr of discussion today
 
 Hi,
 Since, openstack-meeting-alt seems to be in use, baoli and myself 
 are moving to openstack-meeting. Hopefully, Bob Kukura  Irena can 
 join soon.
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com mailto:sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 1:26 PM
 To: Irena Berezovsky ire...@mellanox.com 
 mailto:ire...@mellanox.com, Robert Li (baoli) ba...@cisco.com 
 mailto:ba...@cisco.com, Robert Kukura rkuk...@redhat.com 
 mailto:rkuk...@redhat.com, OpenStack Development Mailing List (not for 
 usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org, Brian Bowen (brbowen)
 brbo...@cisco.com mailto:brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
 extra hr of discussion today
 
 Hi all,
 Both openstack-meeting and openstack-meeting-alt are available 
 today. Lets meet at UTC 2000 @ openstack-meeting-alt.
 
 Thanks,
 Sandhya
 
 From: Irena Berezovsky ire...@mellanox.com 
 mailto:ire...@mellanox.com
 Date: Monday, February 3, 2014 12:52 AM
 To: Sandhya Dasu sad...@cisco.com mailto:sad...@cisco.com, Robert 
 Li (baoli) ba...@cisco.com mailto:ba...@cisco.com, Robert Kukura 
 rkuk...@redhat.com mailto:rkuk...@redhat.com, OpenStack 
 Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org, Brian Bowen (brbowen)
 brbo...@cisco.com mailto:brbo...@cisco.com
 Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on 
 Jan. 30th
 
 Hi Sandhya,
 
 Can you please elaborate how do you suggest to extend the below bp for 
 SRIOV Ports managed by different Mechanism Driver?
 
 I am not biased to any specific direction here, just think we need 
 common layer for managing SRIOV port at neutron, since there is a 
 common pass between nova and neutron.
 
  
 
 BR,
 
 Irena
 
  
 
  
 
 *From:*Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
 *Sent

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of discussion today

2014-02-03 Thread Irena Berezovsky
Seems the openstack-meeting-alt is busy, let's use openstack-meeting

From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Monday, February 03, 2014 8:28 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of 
discussion today

Hi all,
Both openstack-meeting and openstack-meeting-alt are available today. Lets 
meet at UTC 2000 @ openstack-meeting-alt.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Date: Monday, February 3, 2014 12:52 AM
To: Sandhya Dasu sad...@cisco.com, Robert Li (baoli) ba...@cisco.com, Robert 
Kukura rkuk...@redhat.com, OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org, Brian Bowen (brbowen) 
brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Sandhya,
Can you please elaborate on how you suggest extending the below BP for SRIOV 
ports managed by a different mechanism driver?
I am not biased to any specific direction here; I just think we need a common 
layer for managing SRIOV ports in neutron, since there is a common path between 
nova and neutron.

BR,
Irena


From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing SRIOVPortProfileMixin would create yet another way to take care of 
extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.com, Robert Kukura rkuk...@redhat.com, 
Sandhya Dasu sad...@cisco.com, OpenStack Development Mailing List (not for 
usage questions) openstack-dev@lists.openstack.org, Brian Bowen (brbowen) 
brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.
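Putting the agreed attributes together, a bound SR-IOV port might carry something like the following; all values are invented for illustration, and parse_pci_slot sketches the pci_slot helper function suggested below:

```python
# Hypothetical shape of a bound SR-IOV port, per the agreement above:
# vnic_type is the user-facing input, binding:profile carries the
# SR-IOV input info, binding:vif_details the output.
port = {
    'binding:vnic_type': 'direct',       # input: virtio | direct | macvtap
    'binding:profile': {                 # SR-IOV input info from nova
        'pci_vendor_info': '15b3:1004',  # illustrative vendor:product id
        'pci_slot': '0000:03:10.1',      # PCI address of the VF, as a string
    },
    'binding:vif_type': 'hw_veb',        # chosen by the mechanism driver
    'binding:vif_details': {             # SR-IOV output info
        'vlan': 100,
        'profileid': 'pp-1',
    },
}


def parse_pci_slot(slot):
    """Split a 'domain:bus:device.function' pci_slot string into ints."""
    domain, bus, rest = slot.split(':')
    device, function = rest.split('.')
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)


print(parse_pci_slot(port['binding:profile']['pci_slot']))  # (0, 3, 16, 1)
```

A common SriovMechanismDriverBase would be the natural home for helpers like parse_pci_slot, so every SR-IOV mechanism driver interprets the profile the same way.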

Please correct any misstatement in above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else in your mind?
[IrenaB] I was thinking of some SRIOVPortProfileMixin to handle and persist 
SRIOV port related attributes

  -- what should mechanism drivers put in binding:vif_details and how nova 
would use this information? as far as I see it from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-02-02 Thread Irena Berezovsky
Hi Sandhya,
Can you please elaborate on how you suggest extending the below BP for SRIOV 
ports managed by a different mechanism driver?
I am not biased to any specific direction here; I just think we need a common 
layer for managing SRIOV ports in neutron, since there is a common path between 
nova and neutron.

BR,
Irena


From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing SRIOVPortProfileMixin would create yet another way to take care of 
extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.com, Robert Kukura rkuk...@redhat.com, 
Sandhya Dasu sad...@cisco.com, OpenStack Development Mailing List (not for 
usage questions) openstack-dev@lists.openstack.org, Brian Bowen (brbowen) 
brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatement in above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else in your mind?
[IrenaB] I was thinking of some SRIOVPortProfileMixin to handle and persist 
SRIOV port related attributes

  -- what should mechanism drivers put in binding:vif_details and how nova 
would use this information? as far as I see it from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
[IrenaB] vnic_type will be added as an additional attribute to binding 
extension. For persistency it should be added in PortBindingMixin for non ML2. 
I didn't think to cover it as part of ML2 vnic_type bp.
For the rest attributes, need to see what Bob plans.

 -- is a neutron agent making decision based on the binding:vif_type?  In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.
[IrenaB] vnic_type is input parameter that will eventually cause certain 
vif_type to be sent to GenericVIFDriver and create network interface. Neutron 
agents periodically scan for attached interfaces. For example, OVS agent will 
look only for OVS interfaces, so if SRIOV interface is created, it won't be 
discovered by OVS agent.

Thanks,
Robert

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Irena Berezovsky
Hi Robert,
I think that I can go with Bob's suggestion, but I think it makes sense to 
cover vnic_type and PCI-passthru via two separate patches. Adding vnic_type 
will probably impose changes on existing mechanism drivers, while PCI-passthru 
is about introducing new pieces for SRIOV-supporting mechanism drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack 
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you say "implementing binding:profile in ML2", 
I'm kind of confused. Please let me know what I'm missing here. My 
understanding is that a non-ML2 plugin can use it as well.
[IrenaB] Basically you are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store the relevant attributes. Making it work for ML2 means not 
adding this support to PortBindingMixin.

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
[IrenaB] As long as the existing binding-capable mechanism drivers take vnic_type into consideration, I guess doing it via binding:profile will introduce fewer changes overall (CLI, API). But I am not sure this reason is strong enough to choose this direction.
We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on the direction we take with vnic_type.

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regards to PCI SRIOV related attributes, right?
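For concreteness, the binding:profile dictionary for an SR-IOV port might carry attributes along these lines. This is a hypothetical Python sketch; apart from vnic_type, which is discussed above, none of the key names were settled in this thread:

```python
# Hypothetical binding:profile contents for an SR-IOV port.
# Only vnic_type is under discussion in this thread; the PCI keys are illustrative.
profile = {
    "vnic_type": "direct",           # or "macvtap"
    "physical_network": "phy1",      # provider net the virtual function attaches to
    "pci_vendor_info": "15b3:1004",  # vendor:product of the assigned device
    "pci_slot": "0000:03:00.2",      # PCI address of the virtual function
}

# A binding-capable mechanism driver would read the requested type off the port:
vnic_type = profile.get("vnic_type", "normal")
```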

Thanks,
Robert



On 1/29/14 4:02 AM, Irena Berezovsky ire...@mellanox.com wrote:

Will attend

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Irena Berezovsky
Hi Robert,
Please see inline, I'll try to post my understanding.


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 6:03 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack 
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Irena,

I'm now even more confused. I must have missed something. See inline

Thanks,
Robert

On 1/29/14 10:19 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert,
I think that I can go with Bob's suggestion, but think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack 
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys are saying that implementing 
binding:profile in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.
[IrenaB] Basically you are right. Currently ML2 does not inherit the DB mixin for port binding, so it supports the port binding extension but uses its own DB table to store the relevant attributes. Making it work for ML2 therefore means this support is not added to PortBindingMixin.

[ROBERT] does that mean binding:profile for PCI can't be used by non-ml2 plugin?
[IrenaB] binding:profile can be used by any plugin that supports the binding extension. To persist binding:profile in the DB, a plugin should add a DB table for it. The PortBindingMixin does not persist binding:profile for now.

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
[IrenaB] As long as the existing binding-capable mechanism drivers take vnic_type into consideration, I guess doing it via binding:profile will introduce fewer changes overall (CLI, API). But I am not sure this reason is strong enough to choose this direction.
We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on the direction we take with vnic_type.

[ROBERT] Can you let me know where in the code binding:profile is supported? in 
portbindings_db.py, the PortBindingPort model doesn't have a column for 
binding:profile. So I guess that I must have missed it.
[IrenaB] For existing examples of plugins supporting binding:profile, you can look at two places:
https://github.com/openstack/neutron/blob/master/neutron/plugins/mlnx/mlnx_plugin.py (line 266)
https://github.com/openstack/neutron/blob/master/neutron/plugins/nec/nec_plugin.py (line 424)

Regarding BPs for the CLI/API, we are planning to add vnic-type and profileid 
in the CLI, also the new keys in binding:profile. Are you saying no changes are 
needed (say display them, interpret the added cli arguments, etc), therefore no 
new BPs are needed for them?
[IrenaB] I think so. It should work by setting it on port create: neutron port-create --binding:profile type=dict vnic_type=direct

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regards to PCI SRIOV related attributes, right?

[ROBERT

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Irena Berezovsky
Please see inline

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Thursday, January 30, 2014 1:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

On 29 January 2014 23:50, Robert Kukura rkuk...@redhat.com wrote:
On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
 Hi Bob,

 that's a good find. profileid as part of IEEE 802.1br needs to be in
 binding:profile, and can be specified by a normal user, and later possibly
 the pci_flavor. Would it be wrong to say something as in below in the
 policy.json?
  create_port:binding:vnic_type: rule:admin_or_network_owner
  create_port:binding:profile:profileid: rule:admin_or_network_owner
Maybe, but a normal user that owns a network has no visibility into the underlying details (such as the providernet extension attributes).
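For concreteness, the rules quoted above would appear in policy.json roughly as follows. This is a hypothetical fragment; whether vnic_type ends up at the top level of binding or under binding:profile, as debated later in this thread, would change the key names:

```json
{
    "create_port:binding:vnic_type": "rule:admin_or_network_owner",
    "create_port:binding:profile:profileid": "rule:admin_or_network_owner"
}
```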

I'm with Bob on this, I think - I would expect that vnic_type is passed in by 
the user (user readable, and writeable, at least if the port is not attached) 
and then may need to be reflected back, if present, in the 'binding' attribute 
via the port binding extension (unless Nova can just go look for it - I'm not 
clear on what's possible here).
[IrenaB] I would prefer not to add a new extension for vnic_type. I think it fits 
well into the port binding extension, and it may be reasonable to follow the policy 
rules as Robert suggested. The way the user specifies the vnic_type via the nova API 
is currently left out for the short term. Based on previous PCI meeting discussions, 
it was raised by John that a regular user may be required to set a vNIC flavor, but 
is definitely not expected to manage 'driver'-level details of the way the vNIC is 
connected.
For me it looks like a neutron port can handle vnic_type via port binding, and the question is whether it is a standalone attribute on the port binding or a key/value pair in binding:profile.


Also, would a normal cloud user really know what pci_flavor to use?
Isn't all this kind of detail hidden from a normal user within the nova
VM flavor (or host aggregate or whatever) pre-configured by the admin?

Flavors are user-visible and analogous to Nova's machine flavors; they're just not user-editable. I'm not sure where port profiles come from.
--
Ian.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking a proper name, let's call them compute nodes with hybrid NICs support, or simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud, but it may be useful in the lab to benchmark the performance differences between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual function and the other via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] With the ML2 plugin as the neutron backend, binding the port will fail if no agent is running on the host.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) ba...@cisco.com wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless it's PCI flavor or host aggregate or something else, how 
to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking a proper name, let's call them compute nodes with hybrid NICs support, or simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud, but it may be useful in the lab to benchmark the performance differences between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual function and the other via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] With the ML2 plugin as the neutron backend, binding the port will fail if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think that by adding vnic_type as we plan, the mechanism driver will bind the port only if it supports the vnic_type and there is a live agent on this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) ba...@cisco.com wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Robert,
Thank you for raising this issue.
Neutron-side support for hybrid compute nodes is part of the mission I want to achieve by implementing:
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
I think it should be allowed to run more than one agent on a given node, and a mechanism driver will bind the port if:

1.   It supports the requested vnic_type

2.   It is capable of managing a segment for the requested port (taking into account physical network, network type, alive agent, ...)

I think at least for now, new agents will be added rather than mixed into the existing one. But it may be a good idea to come up with a Modular Agent.
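The two binding conditions above can be sketched as follows. This is a hypothetical Python sketch; the function and field names are illustrative, not the real ML2 mechanism-driver API:

```python
# Hypothetical sketch of the binding rule described above: bind only when the
# driver supports the requested vnic_type AND a live agent on the host can
# manage one of the port's candidate segments.
SUPPORTED_VNIC_TYPES = {"direct", "macvtap"}

def try_bind(vnic_type, segments, agents):
    if vnic_type not in SUPPORTED_VNIC_TYPES:
        return None                      # condition 1: vnic_type support
    for agent in agents:
        if not agent["alive"]:
            continue                     # only live agents count
        for seg in segments:
            if seg["physical_network"] in agent["physnets"]:
                return seg               # condition 2: manageable segment
    return None
```

With this shape, a host running both an OVS agent and an SR-IOV agent simply offers two drivers, and whichever one satisfies both conditions wins the binding.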

BR,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 11:16 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Ok, this is something that's going to be added in ml2. I was looking at the bind_port() routine in mech_agent.py. The routine check_segment_for_agent() seems to be performing a static check. So we are going to add something like check_vnic_type_for_agent(), I guess? Is the pairing of an agent with the mech driver predetermined? The routine bind_port() just throws warnings, though.

In any case, this happens after the scheduler has already decided to place the VM onto the host.

Maybe not for now, but we need to consider how to support the hybrid compute 
nodes. Would an agent be able to support multiple vnic types? Or is it possible 
to reuse ovs agent, in the same time running another agent to support sriov? 
Any thoughts?

--Robert

On 1/27/14 4:01 PM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking a proper name, let's call them compute nodes with hybrid NICs support, or simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud, but it may be useful in the lab to benchmark the performance differences between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual function and the other via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] With the ML2 plugin as the neutron backend, binding the port will fail if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think that by adding vnic_type as we plan, the mechanism driver will bind the port only if it supports the vnic_type and there is a live agent on this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Nrupal,
We definitely consider both these cases.
Agree with you that we should aim to support both.

BR,
Irena


From: Jani, Nrupal [mailto:nrupal.j...@intel.com]
Sent: Monday, January 27, 2014 11:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

There are two possibilities for the hybrid compute nodes:

-  In the first case, a compute node has two NICs: one SRIOV NIC and the other NIC for VirtIO.

-  In the 2nd case, the compute node has only one SRIOV NIC, where VFs are used for the VMs (either macvtap or direct assignment) and the PF is used as the uplink to the Linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not.

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking a proper name, let's call them compute nodes with hybrid NICs support, or simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud, but it may be useful in the lab to benchmark the performance differences between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual function and the other via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] With the ML2 plugin as the neutron backend, binding the port will fail if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think that by adding vnic_type as we plan, the mechanism driver will bind the port only if it supports the vnic_type and there is a live agent on this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) ba...@cisco.com

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-21 Thread Irena Berezovsky
Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless it's PCI flavor or host aggregate or something else, how 
to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-20 Thread Irena Berezovsky
Hi,
Following a post-PCI-meeting discussion with Ian based on his proposal 
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#,
I am not sure that the case that is quite usable for SR-IOV based networking is covered well by this proposal. The understanding I got is that a VM can land on a host that lacks a suitable PCI resource.
Can you please provide an example for the required cloud admin PCI related 
configurations on nova-compute and controller node with regards to the 
following simplified scenario:
 -- There are 2 provider networks (phy1, phy2), each with an associated range of vlan-ids
 -- Each compute node has 2 vendor adapters with the SR-IOV enabled feature, exposing xx Virtual Functions.
 -- Every VM vNIC on a virtual network over provider network phy1 or phy2 should be a PCI pass-through vNIC.
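A sketch of the admin-side configuration being asked about might look like the following. This is a hypothetical fragment: pci_passthrough_whitelist and network_vlan_ranges are existing nova/ML2 options, but the exact way to associate a whitelisted device with a provider network is precisely the open question here.

```ini
# /etc/nova/nova.conf on each compute node (hypothetical values)
[DEFAULT]
# expose the SR-IOV virtual functions of the vendor adapters to nova
pci_passthrough_whitelist = {"vendor_id": "15b3", "product_id": "1004"}

# /etc/neutron/plugins/ml2/ml2_conf.ini on the controller
[ml2_type_vlan]
# each provider network carries its own vlan-id range
network_vlan_ranges = phy1:100:199,phy2:200:299
```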

Thanks a lot,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Saturday, January 18, 2014 12:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Yunhong,

I'm hoping that these comments can be directly addressed:
  a practical deployment scenario that requires arbitrary attributes.
  detailed design on the following (that also takes into account the 
introduction of predefined attributes):
* PCI stats report since the scheduler is stats based
* the scheduler in support of PCI flavors with arbitrary attributes and 
potential overlapping.
  networking requirements to support multiple provider nets/physical nets
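The first design point above (a stats-based scheduler) can be illustrated with a short sketch. This is hypothetical Python, not the nova scheduler code: it assumes hosts report aggregate pools of identical devices rather than individual PCI devices, so a flavor expressed as arbitrary attributes must be matchable against pool keys:

```python
# Hypothetical sketch: a host reports PCI devices as aggregate pools, and the
# scheduler checks a requested "flavor" (a set of attribute constraints)
# against those pools without ever seeing individual devices.
pools = [
    {"vendor_id": "15b3", "product_id": "1004", "physical_network": "phy1", "count": 7},
    {"vendor_id": "8086", "product_id": "10ed", "physical_network": "phy2", "count": 3},
]

def can_satisfy(pools, flavor, count=1):
    # a pool matches if every attribute in the flavor agrees with the pool
    free = sum(p["count"] for p in pools
               if all(p.get(k) == v for k, v in flavor.items()))
    return free >= count
```

Overlapping flavors are the hard part hinted at above: two flavors whose constraints match the same pool compete for the same count, which a per-pool check like this does not resolve.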

I guess that the above will become clear as the discussion goes on. And we also 
need to define the deliverables.
 
Thanks,
Robert

On 1/17/14 2:02 PM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Robert, thanks for your long reply. Personally I'd prefer option 2/3 as 
it keep Nova the only entity for PCI management.

Glad you are ok with Ian's proposal and we have solution to resolve the 
libvirt network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through 
 network support
 
 Yunhong,
 
 Thank you for bringing that up on the live migration support. In 
addition  to the two solutions you mentioned, Irena has a different 
solution. Let me put them all here again:
 1. network xml/group based solution.
In this solution, each host that supports a provider 
net/physical  net can define a SRIOV group (it's hard to avoid the 
term as you can see  from the suggestion you made based on the PCI 
flavor proposal). For each  SRIOV group supported on a compute node, A 
network XML will be  created the  first time the nova compute service 
is running on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as 
the  way to support live migration with SRIOV. In addition, a network 
xml is  nicely mapped into a provider net.
 2. network xml per PCI device based solution
This is the solution you brought up in this email, and Ian  
mentioned this to me as well. In this solution, a network xml is 
created  when A VM is created. the network xml needs to be removed 
once the  VM is  removed. This hasn't been tried out as far as I  
know.
 3. interface xml/interface rename based solution
Irena brought this up. In this solution, the ethernet 
interface  name corresponding to the PCI device attached to the VM 
needs to be  renamed. One way to do so without requiring system reboot 
is to change  the  udev rule's file for interface renaming, followed 
by a udev reload.
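
 For reference, option 1's per-group network definition could look roughly 
like the sketch below. The group name and device names are hypothetical, not 
from this thread; only the passthrough forward-mode pool is the mechanism 
libvirt documents for this.

```shell
# Sketch of option 1 (assumed names): one libvirt network per SRIOV group,
# listing the equivalent devices on this compute node so libvirt can pick a
# free one at attach time.
cat > /tmp/net-sriov-group1.xml <<'EOF'
<network>
  <name>sriov-group1</name>
  <forward mode='passthrough'>
    <interface dev='eth8'/>
    <interface dev='eth9'/>
  </forward>
</network>
EOF

# Real flow (needs libvirt, so left commented here):
# virsh net-define /tmp/net-sriov-group1.xml
# virsh net-start sriov-group1
```

 The guest XML would then use an interface of type 'network' pointing at 
source network 'sriov-group1', which is what makes the definition portable 
across hosts for migration.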
 
 Now, with the first solution, Nova doesn't seem to have control over 
or  visibility of the PCI device allocated for the VM before the VM is  
launched. This needs to be confirmed with the libvirt support and see 
if  such capability can be provided. This may be a potential drawback 
if a  neutron plugin requires detailed PCI device information for operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't 
need  this information because the device configuration can be done by 
libvirt  invoking the PCI device driver.
 
 The other two solutions are similar. For example, you can view the 
second  solution as one way to rename an interface, or camouflage an 
interface  under a network name. They all require additional works 
before the VM is  created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution 
with  some predefined group attribute, I think it definitely can be 
done. As I  have pointed it out earlier, the PCI flavor proposal is 
actually a  

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-18 Thread Irena Berezovsky
Hi Robert, Yonhong,
Although network XML solution (option 1) is very elegant, it has one major 
disadvantage. As Robert mentioned, the disadvantage of the network XML is the 
inability to know what SR-IOV PCI device was actually allocated. When neutron 
is responsible to set networking configuration, manage admin status, set 
security groups, it should be able to identify the SR-IOV PCI device to apply 
configuration. Within current libvirt Network XML implementation, it does not 
seem possible.
Between option (2) and (3), I do not have any preference, it should be as 
simple as possible.
Option (3) that I raised can be achieved by renaming the network interface of 
the Virtual Function via 'ip link set <dev> name <new_name>'. The interface's logical 
name can be based on the neutron port UUID. This will allow neutron to discover devices, if the backend 
plugin requires it. Once VM is migrating, suitable Virtual Function on the 
target node should be allocated, and then its corresponding network interface 
should be renamed to same logical name. This can be done without system 
rebooting. Still need to check how the Virtual Function corresponding network 
interface can be returned to its original name once is not used anymore as VM 
vNIC.
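
A minimal sketch of that rename flow; the 'eth' prefix and the truncation to 
11 UUID characters (to stay under the kernel's 15-character interface-name 
limit) are my assumptions, not something agreed in this thread:

```shell
# Hypothetical helper for option 3: derive a logical name for a VF network
# interface from a neutron port UUID, then emit the rename commands.
vf_ifname_for_port() {
  # "eth" + 11 UUID chars = 14 chars, under the IFNAMSIZ limit of 15.
  printf 'eth%.11s' "$1"
}

rename_vf() {
  port_uuid="$1"; dev="$2"
  new="$(vf_ifname_for_port "$port_uuid")"
  # Echo instead of executing: the real commands need root and a real VF,
  # and the link must be down while it is renamed.
  echo "ip link set $dev down"
  echo "ip link set $dev name $new"
  echo "ip link set $new up"
}

rename_vf 02bc4aec-b4f4-436f-b651-024 eth8
```

The reverse rename on VF release would walk the same three steps back to the 
original device name.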

Regards,
Irena 

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] 
Sent: Friday, January 17, 2014 9:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Robert, thanks for your long reply. Personally I'd prefer option 2/3 as it keep 
Nova the only entity for PCI management.

Glad you are ok with Ian's proposal and we have solution to resolve the libvirt 
network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network 
 support
 
 Yunhong,
 
 Thank you for bringing that up on the live migration support. In 
 addition to the two solutions you mentioned, Irena has a different 
 solution. Let me put them all here again:
 1. network xml/group based solution.
In this solution, each host that supports a provider 
 net/physical net can define a SRIOV group (it's hard to avoid the term 
 as you can see from the suggestion you made based on the PCI flavor 
 proposal). For each SRIOV group supported on a compute node, A network 
 XML will be created the first time the nova compute service is running 
 on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as 
 the way to support live migration with SRIOV. In addition, a network 
 xml is nicely mapped into a provider net.
 2. network xml per PCI device based solution
This is the solution you brought up in this email, and Ian 
 mentioned this to me as well. In this solution, a network xml is 
 created when A VM is created. the network xml needs to be removed once 
 the VM is removed. This hasn't been tried out as far as I  know.
 3. interface xml/interface rename based solution
Irena brought this up. In this solution, the ethernet interface 
 name corresponding to the PCI device attached to the VM needs to be 
 renamed. One way to do so without requiring system reboot is to change 
 the udev rule's file for interface renaming, followed by a udev 
 reload.
 
 Now, with the first solution, Nova doesn't seem to have control over 
 or visibility of the PCI device allocated for the VM before the VM is 
 launched. This needs to be confirmed with the libvirt support and see 
 if such capability can be provided. This may be a potential drawback 
 if a neutron plugin requires detailed PCI device information for operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't 
 need this information because the device configuration can be done by 
 libvirt invoking the PCI device driver.
 
 The other two solutions are similar. For example, you can view the 
 second solution as one way to rename an interface, or camouflage an 
 interface under a network name. They all require additional works 
 before the VM is created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution 
 with some predefined group attribute, I think it definitely can be 
 done. As I have pointed it out earlier, the PCI flavor proposal is 
 actually a generalized version of the PCI group. In other words, in 
 the PCI group proposal, we have one predefined attribute called PCI 
 group, and everything else works on top of that. In the PCI flavor 
 proposal, attribute is arbitrary. So certainly we can define a 
 particular attribute for networking, which let's temporarily 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-16 Thread Irena Berezovsky
Ian,
Thank you for putting in writing the ongoing discussed specification.
I have added few comments on the Google doc [1].

As for live migration support, this can be done also without libvirt network 
usage.
Not very elegant, but working:  rename the interface of the PCI device to some 
logical name, let's say based on neutron port UUID and put it into the 
interface XML, i.e.:
If the PCI device's network interface name is eth8 and the neutron port UUID is 
02bc4aec-b4f4-436f-b651-024, then rename it to something like 'eth02bc4aec-b4'. 
The interface XML will look like this:

  ...
  <interface type='direct'>
    <mac address='fa:16:3e:46:d3:e8'/>
    <source dev='eth02bc4aec-b4' mode='passthrough'/>
    <target dev='macvtap0'/>
    <model type='virtio'/>
    <alias name='net0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>


...

[1] 
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#heading=h.308b0wqn1zde

BR,
Irena
From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Thursday, January 16, 2014 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

To clarify a couple of Robert's points, since we had a conversation earlier:
On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
  ---  do we agree that BDF address (or device id, whatever you call it), and 
node id shouldn't be used as attributes in defining a PCI flavor?

Note that the current spec doesn't actually exclude it as an option.  It's just 
an unwise thing to do.  In theory, you could elect to define your flavors using 
the BDF attribute but determining 'the card in this slot is equivalent to all 
the other cards in the same slot in other machines' is probably not the best 
idea...  We could lock it out as an option or we could just assume that 
administrators wouldn't be daft enough to try.
* the compute node needs to know the PCI flavor. [...]
  - to support live migration, we need to use it to create 
network xml

I didn't understand this at first and it took me a while to get what Robert 
meant here.

This is based on Robert's current code for macvtap based live migration.  The 
issue is that if you wish to migrate a VM and it's tied to a physical 
interface, you can't guarantee that the same physical interface is going to be 
used on the target machine, but at the same time you can't change the 
libvirt.xml as it comes over with the migrating machine.  The answer is to 
define a network and refer out to it from libvirt.xml.  In Robert's current 
code he's using the group name of the PCI devices to create a network 
containing the list of equivalent devices (those in the group) that can be 
macvtapped.  Thus when the host migrates it will find another, equivalent, 
interface.  This falls over in the use case under consideration where a device 
can be mapped using more than one flavor, so we have to discard the use case or 
rethink the implementation.

There's a more complex solution - I think - where we create a temporary network 
for each macvtap interface a machine's going to use, with a name based on the 
instance UUID and port number, and containing the device to map.  Before 
starting the migration we would create a replacement network containing only 
the new device on the target host; migration would find the network from the 
name in the libvirt.xml, and the content of that network would behave 
identically.  We'd be creating libvirt networks on the fly and a lot more of 
them, and we'd need decent cleanup code too ('when freeing a PCI device, delete 
any network it's a member of'), so it all becomes a lot more hairy.
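A rough sketch of that per-port network shape (every name and UUID fragment 
here is hypothetical):

```shell
# Hypothetical: one single-device libvirt network per (instance, port), named
# so the migrating domain's XML resolves to the right VF on the target host.
make_port_network() {
  net_name="$1"; vf_dev="$2"; xml="/tmp/${net_name}.xml"
  cat > "$xml" <<EOF
<network>
  <name>${net_name}</name>
  <forward mode='passthrough'>
    <interface dev='${vf_dev}'/>
  </forward>
</network>
EOF
  echo "$xml"
}

# The source host had, say, eth12 in this network; before migration the
# target host defines the same-named network around its own free VF:
make_port_network inst-02bc4aec-port0 eth14
# Real flow: virsh net-define "$xml" && virsh net-start inst-02bc4aec-port0,
# plus the cleanup on VF release (virsh net-destroy / net-undefine).
```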
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Irena Berezovsky
Hi,
After having a lot of discussions both on IRC and the mailing list, I would like 
to suggest defining basic use cases for PCI pass-through network support, with an 
agreed list of limitations and assumptions, and implementing them. By doing this 
Proof of Concept we will be able to deliver basic PCI pass-through network 
support in the Icehouse timeframe and better understand how to provide a complete 
solution, starting from tenant/admin API enhancement, enhancing nova-neutron 
communication, and eventually providing a neutron plugin supporting PCI 
pass-through networking.
We can try to split tasks between currently involved participants and bring up 
the basic case. Then we can enhance the implementation.
Having more knowledge and experience with neutron parts, I would like  to start 
working on neutron mechanism driver support.  I have already started to arrange 
the following blueprint doc based on everyone's ideas:
https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit

For the basic PCI pass-through networking case we can assume the following:

1.   Single provider network (PN1)

2.   White list of available SRIOV PCI devices for allocation as NIC for 
neutron networks on provider network  (PN1) is defined on each compute node

3.   Support directly assigned SRIOV PCI pass-through device as vNIC. (This 
will limit the number of tests)

4.   More 


If my suggestion seems reasonable to you, let's try to reach an agreement and 
split the work during our Monday IRC meeting.

BR,
Irena

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Saturday, January 11, 2014 8:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Comments with prefix [yjiang5_2] , including the double confirm.

I think we (you and me) are mostly on the same page; would you please give a 
summary, and then we can have the community, including Irena/Robert, check it. 
We need cores to sponsor it. We should check with John to see if this is 
different from his mental picture, and we may need a neutron core (I assume 
Cisco has a bunch of Neutron cores :)) to sponsor it?

Also, can anyone from Cisco help with the implementation? After this long 
discussion, we are in the second half of the I release, and I'm not sure 
Yongli and I alone can finish everything in the I release.

Thanks
--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, January 10, 2014 6:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support



 OK - so if this is good then I think the question is how we could change the 
 'pci_whitelist' parameter we have - which, as you say, should either *only* 
 do whitelisting or be renamed - to allow us to add information.  Yongli has 
 something along those lines but it's not flexible and it distinguishes poorly 
 between which bits are extra information and which bits are matching 
 expressions (and it's still called pci_whitelist) - but even with those 
 criticisms it's very close to what we're talking about.  When we have that I 
 think a lot of the rest of the arguments should simply resolve themselves.



 [yjiang5_1] The reason it is not easy to find a flexible/distinguishable 
 change to pci_whitelist is because it combines two things. So a stupid/naive 
 solution in my head is: change it to a VERY generic name, 
 'pci_devices_information',

 and change schema as an array of {'devices_property'=regex exp, 'group_name' 
 = 'g1'} dictionary, and the device_property expression can be 'address ==xxx, 
 vendor_id == xxx' (i.e. similar with current white list),  and we can squeeze 
 more into the pci_devices_information in future, like 'network_information' 
 = xxx or Neutron specific information you required in previous mail.


We're getting to the stage that an expression parser would be useful, 
annoyingly, but if we are going to try and squeeze it into JSON can I suggest:

{ match = { class = "Acme inc. discombobulator" }, info = { group = "we like 
teh groups", volume = 11 } }

[yjiang5_2] Double confirm that 'match' is whitelist, and info is 'extra info', 
right?  Can the key be more meaningful, for example, 
s/match/pci_device_property,  s/info/pci_device_info, or s/match/pci_devices/  
etc.
Also assume the class should be the class code in the configuration space, and 
be digital, am I right? Otherwise, it's not easy to get the 'Acme inc. 
discombobulator' information.



 All keys other than 'device_property' become extra information, i.e. 
 software defined property. These extra information will be carried with the 
 PCI devices,. Some implementation details, A)we can limit the acceptable 
 keys, like we only support 'group_name', 'network_id', or we can accept any 
 keys other than reserved (vendor_id, 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Irena Berezovsky
Ian,
It's great news.
Thank you for bringing Bob's attention to this effort. I'll look for Bob on IRC 
to get the details.
And of course, core support raises our chances to make PCI pass-through 
networking into icehouse.

BR,
Irena

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, January 13, 2014 2:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Irena, have a word with Bob (rkukura on IRC, East coast), he was talking about 
what would be needed already and should be able to help you.  Conveniently he's 
also core. ;)
--
Ian.

On 12 January 2014 22:12, Irena Berezovsky ire...@mellanox.com wrote:
Hi John,
Thank you for taking an initiative and summing up the work that need to be done 
to provide PCI pass-through network support.
The only item I think is missing is the neutron support for PCI pass-through. 
Currently we have Mellanox Plugin that supports PCI pass-through assuming 
Mellanox Adapter card embedded switch technology. But in order to have fully 
integrated  PCI pass-through networking support for the use cases Robert listed 
on previous mail, the generic neutron PCI pass-through support is required. 
This can be enhanced with vendor-specific tasks that may differ (Mellanox 
Embedded Switch vs Cisco 802.1BR), but there is still a common part of being a 
PCI-aware mechanism driver.
I have already started with definition for this part:
https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit
I also plan to start coding soon.

Depends on how it goes, I can take also nova parts that integrate with neutron 
APIs from item 3.

Regards,
Irena

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Friday, January 10, 2014 4:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support
Apologies for this top post, I just want to move this discussion towards action.

I am traveling next week so it is unlikely that I can make the meetings. Sorry.

Can we please agree on some concrete actions, and who will do the coding?
This also means raising new blueprints for each item of work.
I am happy to review and eventually approve those blueprints, if you email me 
directly.

Ideas are taken from what we started to agree on, mostly written up here:
https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions


What doesn't need doing...


We have PCI whitelist and PCI alias at the moment, let keep those names the 
same for now.
I personally prefer PCI-flavor, rather than PCI-alias, but lets discuss any 
rename separately.

We seemed happy with the current system (roughly) around GPU passthrough:
nova flavor-key three_GPU_attached_30GB set pci_passthrough:alias= 
large_GPU:1,small_GPU:2
nova boot --image some_image --flavor three_GPU_attached_30GB some_name

Again, we seemed happy with the current PCI whitelist.

Sure, we could optimise the scheduling, but again, please keep that a separate 
discussion.
Something in the scheduler needs to know how many of each PCI alias are 
available on each host.
How that information gets there can be change at a later date.

PCI alias is in config, but its probably better defined using host aggregates, 
or some custom API.
But lets leave that for now, and discuss it separately.
If the need arrises, we can migrate away from the config.


What does need doing...
==

1) API & CLI changes for nic-type, and associated tempest tests

* Add a user-visible nic-type so users can express one of several network 
types.
* We need a default nic-type, for when the user doesn't specify one (might 
default to SRIOV in some cases)
* We can easily test the case where the default is virtual and the user 
expresses a preference for virtual
* Above is much better than not testing it at all.

nova boot --flavor m1.large --image image_id
  --nic net-id=net-id-1
  --nic net-id=net-id-2,nic-type=fast
  --nic net-id=net-id-3,nic-type=fast vm-name

or

neutron port-create
  --fixed-ip subnet_id=subnet-id,ip_address=192.168.57.101
  --nic-type=slow | fast | foobar
  net-id
nova boot --flavor m1.large --image image_id --nic port-id=port-id

Where nic-type is just an extra bit of metadata, a string that is passed to nova and 
the VIF driver.


2) Expand PCI alias information

We need extensions to PCI alias so we can group SRIOV devices better.

I still think we are yet to agree on a format, but I would suggest this as a 
starting point:

{
 "name": "GPU_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
  {"vendor_id": "1137", "product_id": "0072", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {}
}

{
 "name": "NIC_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071"

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-12 Thread Irena Berezovsky
Hi John,
Thank you for taking an initiative and summing up the work that need to be done 
to provide PCI pass-through network support.
The only item I think is missing is the neutron support for PCI pass-through. 
Currently we have Mellanox Plugin that supports PCI pass-through assuming 
Mellanox Adapter card embedded switch technology. But in order to have fully 
integrated  PCI pass-through networking support for the use cases Robert listed 
on previous mail, the generic neutron PCI pass-through support is required. 
This can be enhanced with vendor-specific tasks that may differ (Mellanox 
Embedded Switch vs Cisco 802.1BR), but there is still a common part of being a 
PCI-aware mechanism driver. 
I have already started with definition for this part:
https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit#
I also plan to start coding soon.

Depends on how it goes, I can take also nova parts that integrate with neutron 
APIs from item 3.
 
Regards,
Irena

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Friday, January 10, 2014 4:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Apologies for this top post, I just want to move this discussion towards action.

I am traveling next week so it is unlikely that I can make the meetings. Sorry.

Can we please agree on some concrete actions, and who will do the coding?
This also means raising new blueprints for each item of work.
I am happy to review and eventually approve those blueprints, if you email me 
directly.

Ideas are taken from what we started to agree on, mostly written up here:
https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions


What doesn't need doing...


We have PCI whitelist and PCI alias at the moment, let keep those names the 
same for now.
I personally prefer PCI-flavor, rather than PCI-alias, but lets discuss any 
rename separately.

We seemed happy with the current system (roughly) around GPU passthrough:
nova flavor-key three_GPU_attached_30GB set pci_passthrough:alias= 
large_GPU:1,small_GPU:2
nova boot --image some_image --flavor three_GPU_attached_30GB some_name

Again, we seemed happy with the current PCI whitelist.

Sure, we could optimise the scheduling, but again, please keep that a separate 
discussion.
Something in the scheduler needs to know how many of each PCI alias are 
available on each host.
How that information gets there can be change at a later date.

PCI alias is in config, but its probably better defined using host aggregates, 
or some custom API.
But lets leave that for now, and discuss it separately.
If the need arrises, we can migrate away from the config.


What does need doing...
==

1) API & CLI changes for nic-type, and associated tempest tests

* Add a user-visible nic-type so users can express one of several network 
types.
* We need a default nic-type, for when the user doesn't specify one (might 
default to SRIOV in some cases)
* We can easily test the case where the default is virtual and the user 
expresses a preference for virtual
* Above is much better than not testing it at all.

nova boot --flavor m1.large --image image_id
  --nic net-id=net-id-1
  --nic net-id=net-id-2,nic-type=fast
  --nic net-id=net-id-3,nic-type=fast vm-name

or

neutron port-create
  --fixed-ip subnet_id=subnet-id,ip_address=192.168.57.101
  --nic-type=slow | fast | foobar
  net-id
nova boot --flavor m1.large --image image_id --nic port-id=port-id

Where nic-type is just an extra bit of metadata, a string that is passed to nova and 
the VIF driver.


2) Expand PCI alias information

We need extensions to PCI alias so we can group SRIOV devices better.

I still think we are yet to agree on a format, but I would suggest this as a 
starting point:

{
 "name": "GPU_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
  {"vendor_id": "1137", "product_id": "0072", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {}
}

{
 "name": "NIC_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "0:[1-50]:2:*", "attach-type": "macvtap"},
  {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {
  "nic_type": "fast",
  "network_ids": ["net-id-1", "net-id-2"]
 }
}

{
 "name": "NIC_slower",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
  {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {
  "nic_type": "fast",
  "network_ids": ["*"]   # "*" means it could attach to any network
 }
}

The idea being the VIF driver gets passed this info, when network_info includes 
a nic that matches.
Any other details, like VLAN id, would come from neutron, and passed to the VIF 
driver as normal.


3) Reading nic_type and doing the PCI passthrough of NIC user requests

Not sure we are agreed on this, but basically:
* network_info contains nic-type from neutron
* need to select the correct VIF 

Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI pass-through network support

2013-12-23 Thread Irena Berezovsky
Please, see inline

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, December 24, 2013 1:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI 
pass-through network support

On autodiscovery and configuration, we agree that each compute node finds out 
what it has based on some sort of list of match expressions; we just disagree 
on where they should live.

I know we've talked APIs for setting that matching expression, but I would 
prefer that compute nodes are responsible for their own physical configuration 
- generally this seems wiser on the grounds that configuring new hardware 
correctly is a devops problem and this pushes the problem into the installer, 
clear devops territory.  It also makes the (I think likely) assumption that the 
config may differ per compute node without having to add more complexity to the 
API with host aggregates and so on.  And it means that a compute node can start 
working without consulting the central database or reporting its entire device 
list back to the central controller.
[IrenaB] Totally agree on this. For both auto-discovery and configuration, we 
need to close on the format and content that will be available to nova.
My concern here if there is a way to provide auto-discovery based on network 
connectivity (something like what neutron has i.e. 
'physical_interface_mappings')
For configuration, maybe worth to provide some reference flow for managing it 
by installer.
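For concreteness, neutron's mapping Irena refers to is a flat physnet-to-interface 
config option (physical_interface_mappings is a real linuxbridge-agent option); 
here is a tiny illustrative lookup over that format, with hypothetical values:

```shell
# Illustrative mapping in the style of neutron's physical_interface_mappings
# option (physnet:interface pairs); the values are hypothetical.
mappings="phy1:eth8,phy2:eth10"

# Which local interface backs a given provider network on this node?
iface_for_physnet() {
  echo "$mappings" | tr ',' '\n' | awk -F: -v p="$1" '$1 == p { print $2 }'
}

iface_for_physnet phy1   # -> eth8
```

A nova-side auto-discovery flow could read the same style of mapping to tag 
discovered VFs with the provider network they connect to, which is part of the 
format-and-content question above.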
On PCI groups, I think it is a good idea to have them declared centrally (their 
name, not their content).  Now, I would use config to define them and maybe an 
API for the tenant to list their names, personally; that's simpler and easier 
to implement and doesn't preclude adding an (admin) API in the future.  But I 
don't imagine the list of groups will change frequently so any update API would 
be very infrequently used, and if someone really feels they want to implement 
it I'm not going to stop them.

[IrenaB] The issue we need to resolve is nova scheduler taking its decision 
that satisfies network connectivity

On nova boot, I completely agree that we need a new argument to --nic to 
specify the PCI group of the NIC.  The rest of the arguments - I'm wondering if 
we could perhaps do this in two stages:
1. Neutron will read those arguments (attachment type, additional stuff like 
port group where relevant) from the port during an attach and pass relevant 
information to the plugging driver in Nova
[IrenaB] Do you mean via 'neutron port-create before 'nova boot'? Hopefully we 
can close the details during the discussion today.
2. We add a feature to nova so that you can specify other properties in the 
--nic section line and they're passed straight to the port-create called from 
within nova.
[IrenaB] I like this option. This should also allow to request virtio versus 
SR-IOV nic. It should be possible to have  both options available on the same 
Host.
This is not specific to passthrough at all, just a useful general purpose 
feature.  However, it would simplify both the problem and design here, because 
these parameters, whatever they are, are now entirely the responsibility of 
Neutron and Nova's simply transporting them into it.  A PCI aware Neutron will 
presumably understand the attachment type, the port group and so on, or will 
reject them if they're meaningless to it, and we've even got room for future 
expansion without changing Nova or Neutron, just the plugin.  We can propose it 
now and independently, put in a patch and have it ready before we need it.  I 
think anything that helps to clarify and divide the responsibilities of Nova 
and Neutron will be helpful, because then we don't end up with too many 
cross-project-interrelated patches.
[IrenaB] +2
I'm going to ignore the allocation problem for now.  If a single user can 
allocate all the NICs in the cluster to himself, we still have a more useful 
solution than the one now where he can't use them, so it's not the top of our 
list.
[IrenaB] Agree
Time seems to be running out for Icehouse, so we need to come to agreement ASAP. I 
will be out from Wednesday until after the new year. To move things forward after 
the new year, we may need to hold the IRC meeting on a daily basis until we reach 
agreement. Perhaps that should be one of our new year's resolutions?

Whatever gets it done.
[IrenaB] Fine with me. If we reach the required decisions today regarding neutron, 
I can start to dive into the details of an SR-IOV mechanism driver, assuming the 
ML2 plugin.

BR,
Irena
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-22 Thread Irena Berezovsky
Hi Ian,
My comments are inline
I would like to suggest focusing the next PCI pass-through IRC meeting on:

1. Closing the administration and tenant-that-powers-the-VM use cases.

2. Decoupling the nova and neutron parts, so we can start focusing on the 
neutron-related details.

BR,
Irena

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, December 20, 2013 2:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

On 19 December 2013 15:15, John Garbutt 
j...@johngarbutt.commailto:j...@johngarbutt.com wrote:
 Note, I don't see the person who boots the server ever seeing the pci-flavor, 
 only understanding the server flavor.
 [IrenaB] I am not sure that elaborating the PCI device request into the server 
 flavor is the right approach for the PCI pass-through network case. A vNIC by 
 its nature is dynamic and can be plugged or unplugged after VM boot, while a 
 server flavor is quite static.
I was really just meaning that the server flavor specifies the type of NIC to attach.

The existing port specs, etc., define how many nics there are, and you can hot-plug
as normal; the VIF plugger code is just told by the server flavor
whether it is able to do PCI passthrough, and which devices it can pick from.
Combined with the neutron network-id, you then know what to plug.

The more I talk about this approach the more I hate it :(

The thinking we had here is that nova would provide a VIF or a physical NIC for 
each attachment.  Precisely what goes on here is a bit up for grabs, but I 
would think:
Nova specifies the type at port-update, making it obvious to Neutron whether it's 
getting a virtual interface or a passthrough NIC (and probably the type of that NIC, 
and likely also the path, so that Neutron can distinguish between NICs 
if it needs to know the specific attachment port)
Neutron does its magic on the network if it has any to do, like faffing(*) with 
switches
Neutron selects the VIF/NIC plugging type that Nova should use, and in the case 
that the NIC is a VF and it wants to set an encap, returns that encap back to 
Nova
Nova plugs it in and sets it up (in libvirt, this is generally in the XML; 
XenAPI and others are up for grabs).
[IrenaB] I agree with the described flow. We still need to close how to elaborate 
the request for a pass-through vNIC into 'nova boot'.
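The four-step flow above can be sketched as a round trip between the two services. The function names, the "hw_veb" vif type, and the VLAN encap detail are illustrative assumptions, not an agreed API.

```python
# Rough sketch of the attach negotiation described above. Names and
# values are illustrative only.

def neutron_bind_port(vnic_type):
    """Neutron side: given the attachment type Nova reported at
    port-update, pick a plugging mechanism and, for an SR-IOV VF,
    return the encapsulation Nova must program."""
    if vnic_type == "direct":  # passthrough VF
        return {"vif_type": "hw_veb", "encap": {"vlan": 42}}
    return {"vif_type": "ovs", "encap": None}  # ordinary virtual port

def nova_plug(binding):
    """Nova side: plug according to what Neutron selected (for libvirt
    this would generally end up in the domain XML)."""
    if binding["vif_type"] == "hw_veb":
        return "assign VF, set vlan %d" % binding["encap"]["vlan"]
    return "plug tap device into OVS"

print(nova_plug(neutron_bind_port("direct")))  # assign VF, set vlan 42
```

The point of the sketch is the division of responsibility: Neutron decides *how* to plug, Nova merely executes the decision.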
 We might also want a nic-flavor that tells neutron information it requires, 
 but lets get to that later...
 [IrenaB] A nic-flavor is definitely something we need in order to choose 
 whether a high-performance (PCI pass-through) or virtio (i.e. OVS) nic will be 
 created.
Well, I think it's the right way to go, rather than overloading the server
flavor with hints about which PCI devices you could use.

The issue here is the subsequent attach.  Since passthrough that isn't 
NICs (like crypto cards) would almost certainly be specified in the flavor, 
if you did the same for NICs then you would have a preallocated pool of NICs 
from which to draw.  The flavor is also all you need to know for billing, and 
the flavor lets you schedule.  If you have it on the list of NICs, you have to 
work out how many physical NICs you need before you schedule (admittedly not 
hard, but not in keeping), and if you then did a subsequent attach it could fail 
because you have no more NICs on the machine you scheduled to - and at that 
point you're kind of stuck.

Also with the former, if you've run out of NICs, the already-extant resize call 
would allow you to pick a flavor with more NICs and you can then reschedule the 
subsequent VM to wherever resources are available to fulfil the new request.
[IrenaB] I still think that putting the PCI NIC request into the server flavor is 
not the right approach. You would need to create a different server flavor for 
every possible combination of tenant network attachment options, or maybe assume 
the VM connects to all of them. As for billing, you can use the type of vNIC in 
addition to packets in/out for billing per vNIC. This way, the tenant is charged 
only for used vNICs.
One question here is whether Neutron should become a provider of billed 
resources (specifically passthrough NICs) in the same way as Cinder is of 
volumes - something we'd not discussed to date; we've largely worked on the 
assumption that NICs are like any other passthrough resource, just one where, 
once it's allocated out, Neutron can work magic with it.
[IrenaB] I am not so familiar with Ceilometer, but it seems that if we are talking 
about network resources, neutron should be in charge.

--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread Irena Berezovsky
Hi John,
I totally agree that we should define the use cases, both for administration and 
for the tenant that powers the VM.
Since we are trying to support PCI pass-through network, let's focus on the 
related use cases.
Please see my comments inline.

Regards,
Irena
-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Thursday, December 19, 2013 1:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Apologies for being late onto this thread, and not making the meeting the other 
day.
Also apologies this is almost totally a top post.

On 17 December 2013 15:09, Ian Wells ijw.ubu...@cack.org.uk wrote:
 Firstly, I disagree that
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support is an 
 accurate reflection of the current state.  It's a very unilateral 
 view, largely because the rest of us had been focussing on the google 
 document that we've been using for weeks.

I haven't seen the google doc. I got involved through the blueprint review of 
this:
https://blueprints.launchpad.net/nova/+spec/pci-extra-info

I assume its this one?
https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs

On a quick read, my main concern is separating out the user more:
* administration (defines pci-flavor, defines which hosts can provide it, 
defines server flavor...)
* person who boots server (picks server flavor, defines neutron ports)

Note, I don't see the person who boots the server ever seeing the pci-flavor, 
only understanding the server flavor.
[IrenaB] I am not sure that elaborating the PCI device request into the server 
flavor is the right approach for the PCI pass-through network case. A vNIC by its 
nature is dynamic and can be plugged or unplugged after VM boot, while a server 
flavor is quite static.

We might also want a nic-flavor that tells neutron information it requires, 
but lets get to that later...
[IrenaB] A nic-flavor is definitely something we need in order to choose whether a 
high-performance (PCI pass-through) or virtio (i.e. OVS) nic will be created.

 Secondly, I totally disagree with this approach.  This assumes that 
 description of the (cloud-internal, hardware) details of each compute 
 node is best done with data stored centrally and driven by an API.  I 
 don't agree with either of these points.

Possibly, but I would like to first agree on the use cases and data model we 
want.

Nova has generally gone for APIs over config in recent times.
Mostly so you can do run-time configuration of the system.
But lets just see what makes sense when we have the use cases agreed.

 On 16 December 2013 22:27, Robert Li (baoli) wrote:
 I'd like to give you guy a summary of current state, let's discuss it 
 then.
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


 1)  fade out alias (I think this is OK for all)
 2)  white list becomes pci-flavor (I think this is OK for all)
 3)  address simple regular expression support: only * and a number 
 range [hex-hex] are supported. (I think this is OK?)
 4)  aggregate: now it's clear enough, and won't impact SRIOV. (I 
 think this is irrelevant to SRIOV now)

So... this means we have:

PCI-flavor:
* i.e. standardGPU, standardGPUnew, fastGPU, hdFlash1TB etc

Host mapping:
* decide which hosts you allow a particular flavor to be used
* note, the scheduler still needs to find out if any devices are free

flavor (of the server):
* usual RAM, CPU, Storage
* use extra specs to add PCI devices
* example:
** add one PCI device, choice of standardGPU or standardGPUnew
** also add: one hdFlash1TB
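The server-flavor shape above can be sketched as data. The "pci:N" extra-spec key syntax and the "|" alternative separator are invented for illustration; no such encoding is agreed.

```python
# Sketch of how the server-flavor extra specs described above might encode
# PCI requests. The "pci:" key syntax is a made-up illustration.

flavor = {
    "name": "gpu.large",
    "ram_mb": 8192, "vcpus": 4, "disk_gb": 80,
    "extra_specs": {
        # one PCI device, chosen from either GPU pci-flavor
        "pci:0": "standardGPU|standardGPUnew",
        # plus one flash card
        "pci:1": "hdFlash1TB",
    },
}

def pci_requests(flavor):
    """Expand extra specs into a list of acceptable pci-flavors per
    requested device slot."""
    return [spec.split("|")
            for key, spec in sorted(flavor["extra_specs"].items())
            if key.startswith("pci:")]

print(pci_requests(flavor))
# [['standardGPU', 'standardGPUnew'], ['hdFlash1TB']]
```

The scheduler would then only need to find a host where each slot's alternatives include at least one free device.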

Now, the other bit is SRIOV... At a high level:

Neutron:
* user wants to connect to a particular neutron network
* user wants a super-fast SRIOV connection

Administration:
* needs to map PCI device to what neutron network the connect to

The big question is:
* is this a specific SRIOV only (provider) network
* OR... are other non-SRIOV connections also made to that same network

I feel we have to go for the latter. Imagine a network on VLAN 42: you might 
want some SRIOV into that network, and some OVS connecting into the same 
network. The user might have VMs connected using both methods, so wants the 
same IP address ranges and same network id spanning both.
[IrenaB] Agree. An SRIOV connection is the choice for a certain VM on a certain 
network. The same VM can be connected to other networks via virtio nics, just as 
other VMs can be connected to the same network via virtio nics.

If we go for the latter, we either need:
* some kind of nic-flavor
** boot ... -nic nic-id:public-id:,nic-flavor:10GBpassthrough
** but neutron could store nic-flavor, and pass it through to VIF driver, and 
user says port-id
* OR add NIC config into the server flavor
** an extra spec to tell the VIF driver it could use one of this list of PCI 
devices: (list of pci-flavors)
* OR do both
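A minimal parser for the hypothetical --nic syntax shown above ("nic-id:public-id:,nic-flavor:10GBpassthrough") might look as follows. The syntax itself is only a proposal in this thread, so the whole sketch is illustrative.

```python
# Parse a comma-separated list of key:value pairs from a hypothetical
# --nic option string. Only the first colon in each chunk separates key
# from value, so "nic-id:public-id:" keeps its trailing colon.

def parse_nic_spec(spec):
    out = {}
    for chunk in spec.split(","):
        if not chunk:
            continue  # tolerate stray commas
        key, _, value = chunk.partition(":")
        out[key] = value
    return out

print(parse_nic_spec("nic-id:public-id:,nic-flavor:10GBpassthrough"))
# {'nic-id': 'public-id:', 'nic-flavor': '10GBpassthrough'}
```

Note the parallel the thread draws with Cinder: nic-flavor would play the same role for ports that volume-type plays for volumes.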

I vote for nic-flavor only, because it matches the volume-type we have with 

Re: [openstack-dev] [neutron] Third party Neutron plugin testingmeeting

2013-12-10 Thread Irena Berezovsky
+1
Will join the IRC too

-Original Message-
From: Kyle Mestery [mailto:mest...@siliconloons.com] 
Sent: Tuesday, December 10, 2013 10:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin testingmeeting

OK, looks like we've reached consensus. I've set the time and channel to 
12-12-2013 (Thursday), 1700UTC, on #openstack-meeting-alt.

Thanks!
Kyle

On Dec 10, 2013, at 12:59 PM, Gary Duan gd...@varmour.com wrote:

 I will be joining IRC too.
 
 Thanks,
 Gary
 
 
 On Tue, Dec 10, 2013 at 10:33 AM, Edgar Magana emag...@plumgrid.com wrote:
 Also joining!
 Looking forward to hearing your ideas folks!
 
 Edgar
 
 On 12/10/13 10:16 AM, Nachi Ueno na...@ntti3.com wrote:
 
 +1 ! I'll join.
 I'm also working on investigating how to use openstack gating system.
 (This document is still draft version) 
 https://docs.google.com/presentation/d/1WJInaSt_H2kVkjnhtPmiATP1F-0BV
 buk1e
 efQalL5Q0/edit#slide=id.p
 
 2013/12/10 Ivar Lazzaro i...@embrane.com:
  +1 for 1700UTC Thursday on IRC
 
  -Original Message-
  From: Kyle Mestery [mailto:mest...@siliconloons.com]
  Sent: Tuesday, December 10, 2013 9:21 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin 
 testing meeting
 
  On Dec 10, 2013, at 10:45 AM, Veiga, Anthony
 anthony_ve...@cable.comcast.com wrote:
  -Original Message-
  From: Kyle Mestery mest...@siliconloons.com
  Reply-To: OpenStack Development Mailing List (not for usage 
 questions)
  openstack-dev@lists.openstack.org
  Date: Tuesday, December 10, 2013 10:48
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [neutron] Third party Neutron plugin 
 testing  meeting
 
  Last week I took an action item to organize a meeting for 
  everyone who is doing third-party testing in Neutron for plugins, 
  whether this is vendor or Open Source based. The idea is to share 
  ideas around setups and any issues people hit. I'd like to set 
  this meeting up for this week, Thursday at 1700UTC. I would also 
  like to propose we make this a dial in meeting using the 
  Infrastructure Conferencing bridges [1]. If this time works, I'll 
  set something up and reply to this thread with the dial in information.
 
  +1 for the meeting time.  Any particular reason for voice over IRC?
 
  We kind of decided that doing this over voice initially would be 
 expedient, but I am fine with moving to IRC. If I don't hear 
 objections, lets assume we will meet at 1700UTC Thursday on 
 #openstack-meeting-alt.
 
 
 
  Also, I've started a etherpad page [2] with information. It would 
  be good for people to add information to this etherpad as well. 
  I've coupled this pad with information around multi-node gate 
  testing for Neutron as well, as I suspect most of the third-party 
  testing will require multiple nodes as well.
 
  I'll start filling out our setup.  I have some questions around 
  Third-Party Testing in particular, and look forward to this discussion.
 
  Awesome, thanks Anthony!
 
 
  Thanks!
  Kyle
 
  [1] https://wiki.openstack.org/wiki/Infrastructure/Conferencing
  [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Third party Neutron plugin testing meeting

2013-12-10 Thread Irena Berezovsky
Please take guys and girls from Israel into account ☺.


From: Yongsheng Gong [mailto:gong...@unitedstack.com]
Sent: Wednesday, December 11, 2013 5:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin testing 
meeting

UTC 22:00+ is 6 am Beijing time, but if there are people from Israel joining, 
I can get up one hour earlier, just as I do for the neutron meeting.

On Wed, Dec 11, 2013 at 11:08 AM, Kyle Mestery 
mest...@siliconloons.commailto:mest...@siliconloons.com wrote:
I suspect we'll need another meeting next week, I propose we have it
at a time friendly to those in Asian timezones. Yong and Akihiro, can
you guys propose a timeslot which works for you guys and I'll see
about setting the followup meeting up.

Thanks,
Kyle

On Dec 10, 2013, at 8:14 PM, Yongsheng Gong 
gong...@unitedstack.commailto:gong...@unitedstack.com wrote:

 It is 1am beijing time, so I am afraid I will not join.


 On Wed, Dec 11, 2013 at 10:10 AM, Akihiro Motoki 
 amot...@gmail.commailto:amot...@gmail.com wrote:
 Thanks Kyle for coordinating the meeting.

 The time is midnight to me, but it fits everyone except me. I'll try the time 
 but not sure. Anyway I will follow the log.

 On Wednesday, 11 December 2013, Shiv Haris sha...@brocade.commailto:sha...@brocade.com wrote:

 +1



 Will join via IRC or voice call







 From: Gary Duan [mailto:gd...@varmour.commailto:gd...@varmour.com]
 Sent: Tuesday, December 10, 2013 10:59 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin 
 testingmeeting



 I will be joining IRC too.



 Thanks,

 Gary



 On Tue, Dec 10, 2013 at 10:33 AM, Edgar Magana 
 emag...@plumgrid.commailto:emag...@plumgrid.com wrote:

 Also joining!
 Looking forward to hearing your ideas folks!

 Edgar


 On 12/10/13 10:16 AM, Nachi Ueno na...@ntti3.commailto:na...@ntti3.com 
 wrote:

 +1 ! I'll join.
 I'm also working on investigating how to use openstack gating system.
 (This document is still draft version)
 https://docs.google.com/presentation/d/1WJInaSt_H2kVkjnhtPmiATP1F-0BVbuk1e
 efQalL5Q0/edit#slide=id.p
 
 2013/12/10 Ivar Lazzaro i...@embrane.commailto:i...@embrane.com:
  +1 for 1700UTC Thursday on IRC
 
  -Original Message-
  From: Kyle Mestery 
  [mailto:mest...@siliconloons.commailto:mest...@siliconloons.com]
  Sent: Tuesday, December 10, 2013 9:21 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin
 testing meeting
 
  On Dec 10, 2013, at 10:45 AM, Veiga, Anthony
 anthony_ve...@cable.comcast.commailto:anthony_ve...@cable.comcast.com 
 wrote:
  -Original Message-
  From: Kyle Mestery 
  mest...@siliconloons.commailto:mest...@siliconloons.com
  Reply-To: OpenStack Development Mailing List (not for usage
 questions)
  openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
  Date: Tuesday, December 10, 2013 10:48
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [neutron] Third party Neutron plugin testing
  meeting
 
  Last week I took an action item to organize a meeting for everyone
  who is doing third-party testing in Neutron for plugins, whether this
  is vendor or Open Source based. The idea is to share ideas around
  setups and any issues people hit. I'd like to set this meeting up for
  this week, Thursday at 1700UTC. I would also like to propose we make
  this a dial in meeting using the Infrastructure Conferencing bridges
  [1]. If this time works, I'll set something up and reply to this
  thread with the dial in information.
 
  +1 for the meeting time.  Any particular reason for voice over IRC?
 
  We kind of decided that doing this over voice initially would be
 expedient, but I am fine with moving to IRC. If I don't hear objections,
 lets assume we will meet at 1700UTC Thursday on #openstack-meeting-alt.
 
 
 
  Also, I've started a etherpad page [2] with information. It would be
  good for people to add information to this etherpad as well. I've
  coupled this pad with information around multi-node gate testing for
  Neutron as well, as I suspect most of the third-party testing will
  require multiple nodes as well.
 
  I'll start filling out our setup.  I have some questions around
  Third-Party Testing in particular, and look forward to this discussion.
 
  Awesome, thanks Anthony!
 
 
  Thanks!
  Kyle
 
  [1]


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Openstack-dev][Neutron] Handling of ovs command errors

2013-11-25 Thread Irena Berezovsky
Salvatore,
Very good questions.
You raised your concerns for the OVS agent, but I think they are applicable to 
any other neutron agent that requires an additional service to perform actions. 
At least, I was dealing with similar issues for the Mellanox L2 agent. It makes 
sense to me that if the agent fails to bind the port, this should be indicated by 
the neutron port status.
Another issue I had, and try to solve with the following patch: 
https://review.openstack.org/#/c/48842/, is the situation where the agent fails to 
communicate with the external daemon responsible for the actual programming. After 
a number of retries, with an increasing back-off interval between them, the agent 
is terminated if it still fails to communicate. Does that make sense?
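The retry-with-back-off behaviour described above can be sketched like this. It is a simplified stand-in for the referenced patch, not its actual code; the choice of IOError as the retryable error is an assumption.

```python
# Sketch of retry with increasing back-off: keep retrying the external
# daemon, doubling the wait each time, and give up (terminating the
# agent) after max_retries failed attempts.
import time

def call_with_backoff(func, max_retries=5, initial_delay=1, sleep=time.sleep):
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return func()
        except IOError:  # assumed retryable "daemon unreachable" error
            if attempt == max_retries - 1:
                raise SystemExit("daemon unreachable, terminating agent")
            sleep(delay)
            delay *= 2  # increasing back-off interval
```

A real agent would distinguish transient communication failures from fatal errors; here every IOError simply counts as one failed attempt.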

Regards,
Irena 

-Original Message-
From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com] 
Sent: Monday, November 25, 2013 11:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Handling of ovs command 
errors

On Nov 25, 2013, at 12:36 PM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Thanks Kyle,
 
 More comments inline.
 
 Salvatore
 
 
 On 25 November 2013 16:03, Kyle Mestery (kmestery) kmest...@cisco.com wrote:
 On Nov 25, 2013, at 8:28 AM, Salvatore Orlando sorla...@nicira.com wrote:
 
  Hi,
 
  I've been recently debugging some issues I've had with the OVS agent, and I 
  found out that in many  cases (possibly every case) the code just logs 
  errors from ovs-vsctl and ovs-ofctl without taking any action in the 
  control flow.
 
  For instance, the routine which should do the wiring for a port, port_bound 
  [1], does not react in any way if it fails to configure the local vlan, 
  which I guess means the port would not be able to send/receive any data.
 
  I'm pretty sure there's a good reason for this which I'm missing at the 
  moment. I am asking because I see a pretty large number of ALARM_CLOCK 
  errors returned by OVS commands in gate logs (see bug [2]), and I'm not 
  sure whether it's ok to handle them as the OVS agent is doing nowadays.
 
 Thanks for bringing this up Salvatore. It looks like the underlying run_vstcl 
 [1] provides an ability to raise exceptions on errors, but this is not used 
 by most of the callers of run_vsctl. Do you think we should be returning the 
 exceptions back up the stack to callers to handle? I think that may be a good 
 first step.
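The pattern Kyle describes — run_vsctl already accepts a flag to raise on error, so callers would opt in and let the exception propagate — can be sketched as below. This is a simplified stand-in for the real ovs_lib code, not a copy of it.

```python
# Sketch: a run_vsctl-style helper that logs failures and, when asked,
# re-raises so the caller can react instead of silently continuing.
import logging
import subprocess

LOG = logging.getLogger(__name__)

def run_vsctl(args, check_error=False):
    cmd = ["ovs-vsctl", "--timeout=2"] + args
    try:
        return subprocess.check_output(cmd)
    except (OSError, subprocess.CalledProcessError):
        LOG.exception("Unable to execute %s", cmd)
        if check_error:
            raise  # let the caller decide how to react

def port_bound(port_name):
    """Caller now reacts to the failure instead of ignoring it."""
    try:
        run_vsctl(["set", "Port", port_name, "tag=1"], check_error=True)
    except Exception:
        return "ERROR"  # e.g. put the neutron port in ERROR state
    return "ACTIVE"
```

With check_error=False the old log-and-continue behaviour is preserved, so callers can be migrated one at a time.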
 
 I think it makes sense to start to handle errors; as they often happen in the 
 agent's rpc loop simply raising will probably just cause the agent to crash.
 I looked again at the code and it really seems it's silently ignoring errors 
 from ovs command.
 This actually makes sense in some cases. For instance the l3 agent might 
 remove a qr-xxx or qg-xxx port while the l2 agent is in the middle of its 
 iteration.
 
 There are however cases in which the exception must be handled.
 In cases like the ALARM_CLOCK error, either a retry mechanism or marking the 
 port for re-syncing at the next iteration might make sense.
 Other error cases might be unrecoverable; for instance when a port 
 disappears. In that case it seems reasonable to put the relevant neutron port 
 in ERROR state, so that the user is aware that the port is not usable anymore.
 
I think it makes sense to address these things. Want me to file a bug?

 Thanks,
 Kyle
 
 [1] 
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ovs_lib.py#L52
 
  Regards,
  Salvatore
 
  [1] 
  https://github.com/openstack/neutron/blob/master/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py#L599
  [2] https://bugs.launchpad.net/neutron/+bug/1254520
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Irena Berezovsky
Hi Jiang, Robert,
IRC meeting option works for me.
If I understand your question below, you are looking for a way to tie up the 
requested virtual network(s) with the requested PCI device(s). The way we did it 
in our solution is to map a provider:physical_network to an interface that 
represents the Physical Function. Every virtual network is bound to a 
provider:physical_network, so the PCI device should be allocated based on this 
mapping. We can map a PCI alias to the provider:physical_network.
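The mapping just described can be illustrated with a toy allocator: each physical network maps to a Physical Function, and a Virtual Function for the port is drawn from the PF behind that physnet. All names and addresses are invented for illustration.

```python
# Toy illustration: provider physical_network -> PF interface -> free VFs.
PHYSNET_TO_PF = {"physnet1": "eth4", "physnet2": "eth5"}
FREE_VFS = {
    "eth4": ["0000:06:00.1", "0000:06:00.2"],
    "eth5": ["0000:07:00.1"],
}

def allocate_vf(physical_network):
    """Pick a free VF on the PF wired to the requested physical network."""
    pf = PHYSNET_TO_PF[physical_network]
    if not FREE_VFS[pf]:
        raise RuntimeError("no free VF on %s" % pf)
    return FREE_VFS[pf].pop(0)

print(allocate_vf("physnet1"))  # 0000:06:00.1
```

Because every virtual network carries a provider:physical_network, the allocation never needs any per-network PCI knowledge beyond this one table.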

Another topic to discuss is where the mapping between neutron port and PCI 
device should be managed. One way to solve it, is to propagate the allocated 
PCI device details to neutron on port creation.
In case  there is no qbg/qbh support, VF networking configuration should be 
applied locally on the Host.
The question is when and how to apply networking configuration on the PCI 
device?
We see the following options:

* It can be done on port creation.

* It can be done when the nova VIF driver is called for vNIC plugging. This 
will require having all networking configuration available to the VIF driver, 
or sending a request to the neutron server to obtain it.

* It can be done by having a dedicated L2 neutron agent on each host that 
scans for allocated PCI devices and then retrieves the networking 
configuration from the server and configures the device. The agent will also be 
responsible for managing update requests coming from the neutron server.


For macvtap vNIC type assignment, the networking configuration can be applied 
by a dedicated L2 neutron agent.

BR,
Irena

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Tuesday, October 29, 2013 9:04 AM

To: Robert Li (baoli); Irena Berezovsky; prashant.upadhy...@aricent.com; 
chris.frie...@windriver.com; He, Yongli; Itzik Brown
Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle Mestery 
(kmestery); Sandhya Dasu (sadasu)
Subject: RE: [openstack-dev] [nova] [neutron] PCI pass-through network support

Robert, is it possible to have an IRC meeting? I'd prefer an IRC meeting because 
it's more OpenStack style and also keeps the minutes clear.

To your flow, can you give a more detailed example? For example, consider a user 
specifying the instance with a -nic option giving a network id; how does nova then 
derive the requirement on the PCI device? I assume the network id defines the 
switches that the device can connect to, but how is that information translated 
into a PCI property requirement? Will this translation happen before the nova 
scheduler makes the host decision?

Thanks
--jyh

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, October 28, 2013 12:22 PM
To: Irena Berezovsky; 
prashant.upadhy...@aricent.commailto:prashant.upadhy...@aricent.com; Jiang, 
Yunhong; chris.frie...@windriver.commailto:chris.frie...@windriver.com; He, 
Yongli; Itzik Brown
Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle Mestery 
(kmestery); Sandhya Dasu (sadasu)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hi Irena,

Thank you very much for your comments. See inline.

--Robert

On 10/27/13 3:48 AM, Irena Berezovsky 
ire...@mellanox.commailto:ire...@mellanox.com wrote:

Hi Robert,
Thank you very much for sharing the information regarding your efforts. Can you 
please share your idea of the end-to-end flow? How do you suggest binding Nova 
and Neutron?

The end to end flow is actually encompassed in the blueprints in a nutshell. I 
will reiterate it in below. The binding between Nova and Neutron occurs with 
the neutron v2 API that nova invokes in order to provision the neutron 
services. The vif driver is responsible for plugging in an instance onto the 
networking setup that neutron has created on the host.

Normally, one will invoke nova boot api with the -nic options to specify the 
nic with which the instance will be connected to the network. It currently 
allows net-id, fixed ip and/or port-id to be specified for the option. However, 
it doesn't allow one to specify special networking requirements for the 
instance. Thanks to the nova pci-passthrough work, one can specify PCI 
passthrough device(s) in the nova flavor. But it doesn't provide means to tie 
up these PCI devices in the case of ethernet adpators with networking services. 
Therefore the idea is actually simple as indicated by the blueprint titles, to 
provide means to tie up SRIOV devices with neutron services. A work flow would 
roughly look like this for 'nova boot':

  -- Specifies networking requirements in the -nic option. Specifically for 
SRIOV, allow the following to be specified in addition to the existing required 
information:
   . PCI alias
   . direct pci-passthrough/macvtap
   . port profileid that is compliant with 802.1Qbh

The above information is optional. In the absence of them, the existing 
behavior remains
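The extended --nic request Robert describes can be sketched as a request structure with optional SR-IOV fields. The field names (pci_alias, vnic_type, profileid) are illustrative only; the thread has not settled on a syntax.

```python
# Sketch of the extended --nic request: SR-IOV fields are optional, and
# in their absence the request degenerates to the existing behaviour.

def make_nic_request(net_id, pci_alias=None, vnic_type=None, profileid=None):
    req = {"net-id": net_id}
    if pci_alias:
        req["pci_alias"] = pci_alias   # ties the vNIC to a PCI device pool
    if vnic_type:
        req["vnic_type"] = vnic_type   # "direct" or "macvtap"
    if profileid:
        req["profileid"] = profileid   # 802.1Qbh-compliant port profile
    return req

print(make_nic_request("net-1"))                       # legacy request
print(make_nic_request("net-1", "niantic", "direct"))  # SR-IOV request
```

The degenerate case matters: existing 'nova boot' invocations keep working unchanged, as the email stresses.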

[openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-16 Thread Irena Berezovsky
Hi,
As one of the next steps for PCI pass-through, I would like to discuss support 
for PCI pass-through vNICs.
While nova takes care of PCI pass-through device resource management and VIF 
settings, neutron should manage their networking configuration.
I would like to register a summit proposal to discuss the support for PCI 
pass-through networking.
I am not sure what would be the right topic under which to discuss PCI 
pass-through networking, since it involves both nova and neutron.
There is already a session registered by Yongli on nova topic to discuss the 
PCI pass-through next steps.
I think PCI pass-through networking is quite a big topic and it is worth having a 
separate discussion.
Are there any other people who are interested in discussing it and sharing their 
thoughts and experience?

Regards,
Irena

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-10 Thread Irena Berezovsky
Hi Chris, Jiang,
We are also looking into an enhancement of basic PCI pass-through to provide 
SR-IOV based networking.
In order to support automatic provisioning, it requires awareness of which 
virtual network to connect the requested SR-IOV device to. 
This should be considered by the scheduler in order to run the VM on a host that 
is connected to the physical network. 
It requires Neutron to be aware of the PCI pass-through allocated device and to 
allocate a port on the virtual network. 
It will require some sort of VIF driver to manage the libvirt device settings. 
It may also require a neutron agent to apply port policy on the device. I think 
it makes sense to support this as part of the ML2 neutron plugin (via a mechanism 
driver).
In case you plan to attend the design summit, maybe it is worth collaborating 
there and discussing what can be done in the coming Icehouse release?

Regards,
Irena

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] 
Sent: Thursday, October 10, 2013 2:26 AM
To: Chris Friesen; openst...@lists.openstack.org
Subject: Re: [Openstack] Neutron support for passthrough of networking devices?

Several things come to mind:
a) The NIC needs more information, like the switch it is attached to, and this 
information needs to be managed by nova as well. We have basic support, but it is 
not fully implemented.
b) How to set up the device, including the MAC address or 802.1Qbh etc. Libvirt 
has several options to support this; more work is needed to support them, and we 
also need to consider other virt drivers like xenapi.
c) How to achieve isolation of tenants, and how to set up things like the router 
in Neutron. I'm not well versed in Neutron, but I think others may have more ideas 
on it.
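For point (b), the device setup a VIF driver would perform largely amounts to generating the libvirt `<interface type='hostdev'>` domain XML that attaches a VF with a fixed MAC and VLAN tag. A hedged sketch follows; the element names follow the libvirt domain XML schema, but the PCI address, MAC, and VLAN values are made up for illustration, and real hardware/driver support for the `<vlan>` element varies.

```python
# Sketch of the libvirt domain XML a VIF driver could emit to attach an
# SR-IOV VF directly to a guest. All concrete values below are invented.

def hostdev_interface_xml(domain, bus, slot, function, mac, vlan_id):
    """Render a libvirt <interface type='hostdev'> element for one VF."""
    return (
        "<interface type='hostdev' managed='yes'>\n"
        "  <source>\n"
        f"    <address type='pci' domain='{domain}' bus='{bus}' "
        f"slot='{slot}' function='{function}'/>\n"
        "  </source>\n"
        f"  <mac address='{mac}'/>\n"
        "  <vlan>\n"
        f"    <tag id='{vlan_id}'/>\n"
        "  </vlan>\n"
        "</interface>"
    )


print(hostdev_interface_xml("0x0000", "0x03", "0x10", "0x1",
                            "52:54:00:12:34:56", 100))
```

With `managed='yes'`, libvirt itself detaches the VF from the host driver before the guest starts and reattaches it afterwards, which is one reason a thin VIF driver can stay simple.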

Thanks
--jyh

 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: Wednesday, October 09, 2013 11:53 AM
 To: openst...@lists.openstack.org
 Subject: Re: [Openstack] Neutron support for passthrough of networking 
 devices?
 
 On 10/09/2013 12:31 PM, Jiang, Yunhong wrote:
  When we worked on the H release, we targeted basic PCI support like 
  accelerator cards or encryption cards etc. I think SR-IOV network 
  support is more complex and requires more effort, on both the Nova side 
  and the Neutron side. We are working on some enhancements on the Nova side 
  now. But the whole picture may need more time/discussion.
 
 Can you elaborate on the complexities?  Assuming you enable SR-IOV on 
 the host, and pass it through to the guest using the normal PCI 
 passthrough mechanisms, what's the extra complexity?
 
 Do we run into the same complexity if we have spare physical NICs on 
 the host that get passed in to the guest?
 
 Thanks,
 Chris
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




[openstack-dev] [Nova] Testing support for PCI passthrough

2013-09-02 Thread Irena Berezovsky
Hi guys,
Great work on adding PCI pass-through support to nova.
We would like to test the support for PCI pass-through.
We plan to use devstack (nova) on a physical machine with SR-IOV devices and 
apply the patches that are still under review.
Can you please advise what should be defined in nova.conf and what should be 
requested on the nova boot command in order to create a VM with a PCI device?
Is it as documented at 
https://wiki.openstack.org/wiki/Enhanced-platform-awareness-pcie?
Can you please help me identify the use cases that the merged patches support?
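For anyone following along, a rough sketch of what such a setup looked like in the Havana-era patch sets is below. The option names (`pci_alias`, `pci_passthrough_whitelist`) and the flavor extra spec are taken from the patches of that period; the vendor/product IDs, alias name, flavor, and image are placeholders, and the exact syntax should be checked against whatever revision of the patches is actually applied.

```
# nova.conf on the compute node (values are placeholders):
pci_alias={"name": "a1", "vendor_id": "8086", "product_id": "1520"}
pci_passthrough_whitelist=[{"vendor_id": "8086", "product_id": "1520"}]

# Request one matching device through a flavor extra spec, then boot:
nova flavor-key m1.small set "pci_passthrough:alias"="a1:1"
nova boot --flavor m1.small --image cirros test-vm
```

The whitelist controls which host devices nova may hand out, while the alias gives flavors a stable name to request them by; the scheduler then places the VM only on hosts with a free matching device.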

Thank you very much,
Irena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
