Re: [openstack-dev] [os-vif] [passthrough] [VifHostDevice]

2017-09-22 Thread Moshe Levi
The ODL part is still not ready. We need to make code changes in ODL to get it
working; see [1] [2].

[1] https://git.opendaylight.org/gerrit/#/c/62481/ 
[2] https://git.opendaylight.org/gerrit/#/c/60259/ 

We tried to make the design as generic as possible, so if you have a SR-IOV NIC
that supports switchdev
and allows offloading rules via Linux tc, it should work.
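For reference, putting a switchdev-capable NIC into the mode this design assumes typically looks like the following. This is only a sketch: the PCI address 0000:02:00.0 and interface name enp2s0f0 are placeholders, and the exact steps vary by NIC and driver.

```shell
# Sketch of enabling switchdev mode on a SR-IOV PF so OVS can offload
# rules via TC. Requires root; names/addresses below are placeholders.

# 1. Create the VFs on the PF first
echo 2 > /sys/class/net/enp2s0f0/device/sriov_numvfs

# 2. Move the eswitch from the default "legacy" mode to "switchdev"
devlink dev eswitch set pci/0000:02:00.0 mode switchdev

# 3. Verify the mode; VF representor netdevs should now appear
devlink dev eswitch show pci/0000:02:00.0

# 4. Enable TC hardware offload so tc flower rules can be pushed to the NIC
ethtool -K enp2s0f0 hw-tc-offload on
```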

> -Original Message-
> From: pranab boruah [mailto:pranabjyotibor...@gmail.com]
> Sent: Friday, September 22, 2017 4:50 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [os-vif] [passthrough] [VifHostDevice]
> 
> Thanks Sean for such a descriptive answer. It definitely helped.
> 
> > the hardware offload support for Mellanox NICs is only supported with
> > the openvswitch or odl ml2 drivers
> [Pranab] Okay. We have the option of using ODL as the mechanism driver
> too. I am building the cluster with ODL as the mechanism driver right now and
> will experiment with it.
> Do you think there is a design limitation in the VIFHostDevice model that
> limits us to using only Mellanox or Netronome NICs? AFAIK, if I have a
> SR-IOV NIC that supports OVS offload and switchdev and we use
> openvswitch/odl as the mechanism driver, things should work as expected for
> this NIC too, right?
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-vif] [passthrough] [VifHostDevice]

2017-09-22 Thread pranab boruah
Thanks Sean for such a descriptive answer. It definitely helped.

>the hardware offload support for Mellanox NICs is only supported with the
>openvswitch or odl ml2 drivers
[Pranab] Okay. We have the option of using ODL as the mechanism driver
too. I am building the cluster with ODL as the mechanism driver right
now and will experiment with it.
Do you think there is a design limitation in the VIFHostDevice model that
limits us to using only Mellanox or Netronome NICs? AFAIK, if I have
a SR-IOV NIC that supports OVS offload and switchdev and we use
openvswitch/odl as the mechanism driver, things should work as
expected for this NIC too, right?



Re: [openstack-dev] [os-vif] [passthrough] [VifHostDevice]

2017-09-21 Thread Mooney, Sean K


> -Original Message-
> From: pranab boruah [mailto:pranabjyotibor...@gmail.com]
> Sent: Thursday, September 21, 2017 5:12 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [os-vif] [passthrough] [VifHostDevice]
> 
> Hi,
> We have a SR-IOV-capable NIC that supports OVS offload and switchdev.
> We are trying to test the VifHostDevice model on a Pike cluster. We are
> running into issues. Here are the config options that we have
> used:
> 
> 1. In Neutron conf file: mechanism driver = ovn 
[Mooney, Sean K] Looking at the networking-ovn ml2 driver, vnic_type direct is
not supported:
https://github.com/openstack/networking-ovn/blob/f5fe5e3c623a2a65ee78ec28b053d8e72060c13d/networking_ovn/ml2/mech_driver.py#L112
Hardware offload support for Mellanox NICs is only available with the
openvswitch or odl ml2 drivers.
Netronome SmartNICs require the use of the agilio ovs ml2 driver, which supports
direct and virtio-forwarder modes:
https://github.com/Netronome/agilio-ovs-openstack-plugin/blob/master/networking_netronome/plugins/ml2/drivers/agilio_ovs/mech_driver/mech_agilio_ovs.py#L46-L47

If you wish to use ovn, you will need to modify the ovn ml2 driver to add
vnic_type direct to the supported vnic_types.
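The kind of change described above can be illustrated as follows. This is a standalone sketch, not the actual networking-ovn source: the class and constants here are simplified stand-ins for the real mech driver.

```python
# Illustrative sketch (not the real networking-ovn code) of extending a
# mechanism driver's supported VNIC types so vnic_type=direct can bind.
VNIC_NORMAL = "normal"
VNIC_DIRECT = "direct"

class OVNMechanismDriver:
    def __init__(self):
        # Upstream effectively lists only VNIC_NORMAL; adding VNIC_DIRECT
        # here is the hypothetical modification needed for SR-IOV ports.
        self.supported_vnic_types = [VNIC_NORMAL, VNIC_DIRECT]

    def bind_port(self, vnic_type):
        # The real driver skips binding (after logging) for unsupported types.
        if vnic_type not in self.supported_vnic_types:
            return False  # the port would end up with vif_type binding_failed
        return True

driver = OVNMechanismDriver()
print(driver.bind_port(VNIC_DIRECT))  # → True once "direct" is listed
```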

> 2. In Nova conf file:
> passthrough_whitelist = {"address":":02:00.*"}
> 3. Created a port as vnic_type=direct and launched instances.
> It gives the following Nova error: "No net device was found for VF"
> Am I missing some other config options?
[Mooney, Sean K] No, but as I mentioned above, ovn is not currently supported.
I believe you should also have a log message in the neutron server log: when
neutron calls the networking-ovn ml2 driver here
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L782
we simply return on line 502 here, after logging:
https://github.com/openstack/networking-ovn/blob/f5fe5e3c623a2a65ee78ec28b053d8e72060c13d/networking_ovn/ml2/mech_driver.py#L502
If you only have the ovn ml2 driver enabled, the port should be set with
vif_type binding_failed; however, if you have the sriovnicagent also enabled,
it may be masking the issue, as
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L776
will continue to try the other drivers.

Assuming ovn is the only enabled mech driver, I believe this should result in
the vif_type being set to VIF_TYPE_BINDING_FAILED, as
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L748
will not return anything, so we should execute
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L750-L757
You should be able to confirm this by doing a port show and/or checking the
neutron server log.
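The binding flow walked through above can be sketched roughly like this. It is a deliberate simplification of neutron's ml2 manager loop, with made-up driver objects, to show why an unbound port ends up as binding_failed and why a second enabled driver can mask the failure:

```python
VIF_TYPE_BINDING_FAILED = "binding_failed"

def bind_port(port, drivers):
    """Simplified sketch of neutron's ml2 port-binding loop."""
    for driver in drivers:
        vif_type = driver.bind(port)  # returns None when the driver declines
        if vif_type:                  # first successful driver wins
            return vif_type
    # No driver bound the port: neutron records a binding failure.
    return VIF_TYPE_BINDING_FAILED

class OVNDriver:
    def bind(self, port):
        # ovn declines vnic_type=direct: it just logs and returns nothing.
        return None

class SriovNicAgentDriver:
    def bind(self, port):
        # With the SR-IOV agent also enabled, it may bind the port instead,
        # masking the ovn failure.
        return "hw_veb"

print(bind_port({"vnic_type": "direct"}, [OVNDriver()]))
# → "binding_failed" (visible via port show / neutron server log)
print(bind_port({"vnic_type": "direct"}, [OVNDriver(), SriovNicAgentDriver()]))
# → "hw_veb"
```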

> 
> Also, how can I check the logs that are related to the os-vif library?
[Mooney, Sean K] The logs are present in the n-cpu log, as os-vif executes
within the nova-compute agent.
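In practice that means filtering the nova-compute log for the `os_vif` logger name. A sketch, using a fabricated sample log line since the real path and contents depend on the deployment:

```shell
# Hypothetical sample of two n-cpu (nova-compute) log lines; real log
# paths vary by deployment (e.g. /var/log/nova/nova-compute.log).
cat > /tmp/sample-n-cpu.log <<'EOF'
2017-09-21 17:12:03.123 DEBUG os_vif [-] Plugging vif VIFHostDevice(...)
2017-09-21 17:12:03.456 INFO nova.compute.manager [-] Instance spawned
EOF

# os-vif runs inside the nova-compute agent, so filter its logger name:
grep os_vif /tmp/sample-n-cpu.log
```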
> 
> Let me know if further details are required.
> 
> TIA,
> Pranab
> 


[openstack-dev] [os-vif] [passthrough] [VifHostDevice]

2017-09-21 Thread pranab boruah
Hi,
We have a SR-IOV-capable NIC that supports OVS offload and switchdev.
We are trying to test the VifHostDevice model on a Pike cluster. We
are running into issues. Here are the config options that we have
used:

1. In Neutron conf file: mechanism driver = ovn
2. In Nova conf file: passthrough_whitelist = {"address":":02:00.*"}
3. Created a port as vnic_type=direct and launched instances.
It gives the following Nova error: "No net device was found for VF"
Am I missing some other config options?
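As a debugging aid for the "No net device was found for VF" error, the check Nova is effectively making can be approximated by looking for a netdev under the VF's sysfs tree. A sketch under stated assumptions: `find_vf_netdev` is a made-up helper, and the lookup relies only on the standard sysfs layout.

```python
import os

def find_vf_netdev(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Return the netdev name registered for a VF PCI address, or None.

    Approximates the lookup behind "No net device was found for VF":
    a VF bound to a kernel netdev driver exposes its interface under
    /sys/bus/pci/devices/<pci_addr>/net/.
    """
    net_dir = os.path.join(sysfs_root, pci_addr, "net")
    if not os.path.isdir(net_dir):
        return None  # VF has no netdev (e.g. bound to vfio-pci, or no driver)
    names = os.listdir(net_dir)
    return names[0] if names else None
```

Running this against each VF address matched by the whitelist shows which VFs lack a netdev and would trigger the error.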

Also, how can I check the logs that are related to the os-vif library?

Let me know if further details are required.

TIA,
Pranab
