Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-06-13 Thread Robert Li (baoli)
A quick update on this. As suggested by members of the community, I created a 
nova blueprint 
https://blueprints.launchpad.net/nova/+spec/expose-vlan-trunking, and posted a 
spec for Queens here: https://review.openstack.org/471815. Sean Mooney 
suggested in his review that automatic vlan subinterface configuration in the 
guest should be enabled/disabled on a per-trunk basis. I think that's a good 
idea, but doing so requires API and database schema changes. If it's 
something that the community would like to go with, then I'd think it requires 
an RFE from the neutron side. We need reviews and feedback to move this forward.
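
For reviewers who want a concrete picture, here is one hypothetical shape for
the trunk details the blueprint would expose through the metadata API/config
drive, written as a Python literal; every field name below is an assumption
for illustration, not the spec's settled format:

# Hypothetical trunk metadata; field names are illustrative only.
trunk_metadata = {
    "trunk_id": "TRUNK-UUID",             # placeholder
    "parent_port_id": "PARENT-PORT-UUID",
    "subports": [
        {"segmentation_type": "vlan", "segmentation_id": 101,
         "mac_address": "fa:16:3e:00:00:01"},
        {"segmentation_type": "vlan", "segmentation_id": 102,
         "mac_address": "fa:16:3e:00:00:02"},
    ],
}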

Thanks,
Robert

On 6/7/17, 12:36 PM, "Robert Li (baoli)" <ba...@cisco.com> wrote:

Hi Bence,

Thanks for the pointers. I was aware of this 
https://bugs.launchpad.net/neutron/+bug/1631371, but not the blueprint you 
wrote. 

As suggested by Matt in https://bugs.launchpad.net/nova/+bug/1693535, I 
wrote a blueprint 
https://blueprints.launchpad.net/nova/+spec/expose-vlan-trunking, trying to 
tackle it in a simple manner.

--Robert


On 6/6/17, 7:35 AM, "Bence Romsics" <bence.roms...@gmail.com> wrote:

Hi Robert,

I'm late to this thread, but let me add a bit. There was an attempt
for trunk support in nova metadata on the Pike PTG:

https://review.openstack.org/399076

But that was abandoned right after the PTG, because the agreement
seemed to be in favor of putting the trunk details into the upcoming
os-vif object. The os-vif object was supposed to be described in a new
patch set to this change:

https://review.openstack.org/390513

Unfortunately not much has happened there since. Looking back now
it seems to me that turning the os-vif object into a prerequisite made
this work too big to ever happen. I definitely didn't have the time to
take that on.

But anyway I hope the abandoned spec may provide relevant input to you.

Cheers,
Bence


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-06-07 Thread Robert Li (baoli)
Hi Bence,

Thanks for the pointers. I was aware of this 
https://bugs.launchpad.net/neutron/+bug/1631371, but not the blueprint you 
wrote. 

As suggested by Matt in https://bugs.launchpad.net/nova/+bug/1693535, I wrote a 
blueprint https://blueprints.launchpad.net/nova/+spec/expose-vlan-trunking, 
trying to tackle it in a simple manner.

--Robert


On 6/6/17, 7:35 AM, "Bence Romsics" wrote:

Hi Robert,

I'm late to this thread, but let me add a bit. There was an attempt
for trunk support in nova metadata on the Pike PTG:

https://review.openstack.org/399076

But that was abandoned right after the PTG, because the agreement
seemed to be in favor of putting the trunk details into the upcoming
os-vif object. The os-vif object was supposed to be described in a new
patch set to this change:

https://review.openstack.org/390513

Unfortunately not much has happened there since. Looking back now
it seems to me that turning the os-vif object into a prerequisite made
this work too big to ever happen. I definitely didn't have the time to
take that on.

But anyway I hope the abandoned spec may provide relevant input to you.

Cheers,
Bence

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-25 Thread Robert Li (baoli)
I created a nova bug for this: https://bugs.launchpad.net/nova/+bug/1693535. I 
am currently working on a code patch for it.

--Robert

On 5/24/17, 3:52 PM, "Robert Li (baoli)" 
<ba...@cisco.com> wrote:

Thanks for the pointer. I think your suggestion in comment #14 
https://bugs.launchpad.net/neutron/+bug/1631371/comments/14 makes sense. I 
actually meant to address the nova side so that trunk details can be exposed in 
the metadata and configdrive.

-Robert



On 5/24/17, 1:52 PM, "Armando M." <arma...@gmail.com> 
wrote:



On 24 May 2017 at 08:53, Robert Li (baoli) 
<ba...@cisco.com> wrote:
Hi Kevin,

In that case, I will start working on it. Should this be considered an RFE or a 
regular bug?

There have been discussions in the past about this [1]. The conclusion of the 
discussion was: Nova should have everything it needs to expose trunk details to 
the guest via the metadata API/config drive, and at this stage nothing is required 
from the neutron end (hence there's no point in filing a Neutron RFE).

While notifying trunk changes to nova requires a simple, minor enhancement in 
neutron, it seems premature to go down that path when there's no nova 
scaffolding yet. Someone should then figure out how the guest itself gets 
notified of trunk changes so that it can rearrange its networking stack. That 
might as well be left to some special sauce added to the guest image, though no 
meaningful discussion has taken place on how to crack this particular nut.

HTH
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1631371



Thanks,
Robert

On 5/23/17, 12:12 AM, "Kevin Benton" 
<ke...@benton.pub> wrote:

I think we just need someone to volunteer to do the work to expose it as 
metadata to the VM in Nova.

On May 22, 2017 1:27 PM, "Robert Li (baoli)" 
<ba...@cisco.com> wrote:
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
<mosh...@mellanox.com> wrote:

Hi Robert,
The closest thing I know about is tagging of the SR-IOV physical function's 
VLAN to guests; see [1].
Maybe you can leverage the same mechanism to configure VLAN trunking in the guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I'd like to understand why support for configuring vlan interfaces in the 
guest has not been added. And should it be?
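
If the trunk details were exposed, a guest-side agent could consume them and
automate the ip command above. A minimal sketch, assuming the pyroute2
library and a hypothetical list of subport VLAN IDs read from the metadata:

from pyroute2 import IPRoute

parent = "eth0"
vlan_ids = [101, 102]  # hypothetical: would come from the exposed trunk details

with IPRoute() as ipr:
    parent_idx = ipr.link_lookup(ifname=parent)[0]
    for vid in vlan_ids:
        ifname = "%s.%d" % (parent, vid)
        # equivalent to: ip link add link eth0 name eth0.101 type vlan id 101
        ipr.link("add", ifname=ifname, kind="vlan", link=parent_idx, vlan_id=vid)
        ipr.link("set", index=ipr.link_lookup(ifname=ifname)[0], state="up")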

Thanks,
Robert

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-24 Thread Robert Li (baoli)
Thanks for the pointer. I think your suggestion in comment #14 
https://bugs.launchpad.net/neutron/+bug/1631371/comments/14 makes sense. I 
actually meant to address the nova side so that trunk details can be exposed in 
the metadata and configdrive.

-Robert



On 5/24/17, 1:52 PM, "Armando M." <arma...@gmail.com> 
wrote:



On 24 May 2017 at 08:53, Robert Li (baoli) 
<ba...@cisco.com> wrote:
Hi Kevin,

In that case, I will start working on it. Should this be considered an RFE or a 
regular bug?

There have been discussions in the past about this [1]. The conclusion of the 
discussion was: Nova should have everything it needs to expose trunk details to 
the guest via the metadata API/config drive, and at this stage nothing is required 
from the neutron end (hence there's no point in filing a Neutron RFE).

While notifying trunk changes to nova requires a simple, minor enhancement in 
neutron, it seems premature to go down that path when there's no nova 
scaffolding yet. Someone should then figure out how the guest itself gets 
notified of trunk changes so that it can rearrange its networking stack. That 
might as well be left to some special sauce added to the guest image, though no 
meaningful discussion has taken place on how to crack this particular nut.

HTH
Armando

[1] https://bugs.launchpad.net/neutron/+bug/1631371



Thanks,
Robert

On 5/23/17, 12:12 AM, "Kevin Benton" 
<ke...@benton.pub> wrote:

I think we just need someone to volunteer to do the work to expose it as 
metadata to the VM in Nova.

On May 22, 2017 1:27 PM, "Robert Li (baoli)" 
<ba...@cisco.com> wrote:
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
<mosh...@mellanox.com> wrote:

Hi Robert,
The closest thing I know about is tagging of the SR-IOV physical function's 
VLAN to guests; see [1].
Maybe you can leverage the same mechanism to configure VLAN trunking in the guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I'd like to understand why support for configuring vlan interfaces in the 
guest has not been added. And should it be?

Thanks,
Robert

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-24 Thread Robert Li (baoli)
Hi Kevin,

In that case, I will start working on it. Should this be considered an RFE or a 
regular bug?

Thanks,
Robert

On 5/23/17, 12:12 AM, "Kevin Benton" 
<ke...@benton.pub> wrote:

I think we just need someone to volunteer to do the work to expose it as 
metadata to the VM in Nova.

On May 22, 2017 1:27 PM, "Robert Li (baoli)" 
<ba...@cisco.com> wrote:
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
<mosh...@mellanox.com> wrote:

Hi Robert,
The closest thing I know about is tagging of the SR-IOV physical function's 
VLAN to guests; see [1].
Maybe you can leverage the same mechanism to configure VLAN trunking in the guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I'd like to understand why support for configuring vlan interfaces in the 
guest has not been added. And should it be?

Thanks,
Robert

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-22 Thread Robert Li (baoli)
Hi Levi,

Thanks for the info. I noticed that support in the nova code, but was wondering 
why something similar is not available for vlan trunking.

--Robert


On 5/22/17, 3:34 PM, "Moshe Levi" 
<mosh...@mellanox.com> wrote:

Hi Robert,
The closest thing I know about is tagging of the SR-IOV physical function's 
VLAN to guests; see [1].
Maybe you can leverage the same mechanism to configure VLAN trunking in the guest.

[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/sriov-pf-passthrough-neutron-port-vlan.html


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, May 22, 2017 8:49 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][vlan trunking] Guest networking configuration 
for vlan trunk

Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I'd like to understand why support for configuring vlan interfaces in the 
guest has not been added. And should it be?

Thanks,
Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-05-22 Thread Robert Li (baoli)
Hi,

I’m trying to find out if there is support in nova (in terms of metadata and 
cfgdrive) to configure vlan trunking in the guest. In the ‘CLI usage example’ 
provided in this wiki https://wiki.openstack.org/wiki/Neutron/TrunkPort, it 
indicates:

# The typical cloud image will auto-configure the first NIC (eg. eth0) only and 
not the vlan interfaces (eg. eth0.VLAN-ID).
ssh VM0-ADDRESS sudo ip link add link eth0 name eth0.101 type vlan id 101

I'd like to understand why support for configuring vlan interfaces in the 
guest has not been added. And should it be?

Thanks,
Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci-passtrough and neutron multi segment networks

2015-09-08 Thread Robert Li (baoli)
As far as I know, it has been discussed but is not supported yet. It requires 
changes in nova and support in the neutron plugins.

—Robert

On 9/8/15, 9:39 AM, "Vladyslav Gridin" wrote:

Hi All,

Is there a way to successfully deploy a VM with an SR-IOV NIC
on both a single-segment vlan network and a multi-provider network
containing a vlan segment?
When nova builds the PCI request for the NIC it looks for 'physical_network'
at the network level, but for multi-provider networks this is set within a segment.

e.g.
RESP BODY: {"network": {"status": "ACTIVE", "subnets": 
["3862051f-de55-4bb9-8c88-acd675bb3702"], "name": "sriov", "admin_state_up": 
true, "router:external": false, "segments": [{"provider:segmentation_id": 77, 
"provider:physical_network": "physnet1", "provider:network_type": "vlan"}, 
{"provider:segmentation_id": 35, "provider:physical_network": null, 
"provider:network_type": "vxlan"}], "mtu": 0, "tenant_id": 
"bd3afb5fac0745faa34713e6cada5a8d", "shared": false, "id": 
"53c0e71e-4c9a-4a33-b1a0-69529583e05f"}}


So if the pci_passthrough_whitelist on my compute node contains physical_network,
deployment will fail on the multi-segment network, and vice versa.
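
To make the mismatch concrete, here is a small Python sketch (not nova's
actual code) of the lookup that would be needed, based on the RESP BODY above:

def physical_networks(network):
    # multi-provider networks carry provider attributes per segment;
    # single-provider networks carry them at the network level
    segments = network.get("segments")
    if segments is None:
        return {network.get("provider:physical_network")}
    return {s.get("provider:physical_network") for s in segments}

net = {"segments": [
    {"provider:physical_network": "physnet1", "provider:network_type": "vlan"},
    {"provider:physical_network": None, "provider:network_type": "vxlan"},
]}
assert physical_networks(net) == {"physnet1", None}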

Thanks,
Vlad.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] HELP CONFIRM OR DISCUSS:create a port when network contain ipv4 subnets and ipv6 subnets, allocate ipv6 address to the port.

2015-07-20 Thread Robert Li (baoli)
First of all, your network contains an IPv4 and an IPv6 SLAAC subnet.
Given the definition of SLAAC, your port will receive an IPv6 address
from the IPv6 subnet. On the other hand, if you want your network to have
both IPv6 and IPv4, and want to selectively assign either IPv4 or IPv6 to
your ports, you shouldn't use SLAAC but DHCPv6 stateful. Hope this
helps.
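
For instance, a minimal sketch of creating such a subnet, assuming the
openstacksdk library and a clouds.yaml entry named "mycloud" (both
assumptions; the neutron CLI works equally well):

import openstack

conn = openstack.connect(cloud="mycloud")
# addresses handed out by stateful DHCPv6 rather than derived via SLAAC
conn.network.create_subnet(
    network_id="NETWORK-UUID",   # placeholder
    ip_version=6,
    cidr="2001:db8::/64",
    ipv6_ra_mode="dhcpv6-stateful",
    ipv6_address_mode="dhcpv6-stateful",
)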

—Robert 

On 7/20/15, 8:01 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

I am not completely sure, but I believe that the primary associations
are that:

- a port is attached to a particular network

- a network may have an IPv4 subnet and/or an IPv6 subnet

- therefore, when a port is created, it gets an IPv4 address, and/or an
IPv6 address, according to the subnets that are associated with the
network.

Then the question is: what difference does it make if you add
--fixed-ip subnet_id=$[ipv4_subnet_id] to your port creation command?

Unfortunately I don't know the answer to that.  However I do know that
you can use neutron port-update --fixed-ip ... to _add_ an additional
IP address to an existing port.  Therefore, and also considering your
observations, I would guess that the effect of --fixed_ip ... is
always strictly additional to the basic IP address allocation behaviour
that you get based on the network and subnet associations.

Hope that may help...

 Neil


On 20/07/15 04:26, zhaobo wrote:
 Hi,
 Could anyone please check the bug below?
 https://bugs.launchpad.net/neutron/+bug/1467791

 This bug description:
 When the created network contains one IPv4 subnet and an IPv6 subnet
 which has SLAAC or stateless mode turned on,
 and I create a port using a cmd like:
 neutron port-create --fixed-ip subnet_id=$[ipv4_subnet_id]
 $[network_id/name]
 the specified fixed-ip is the IPv4 subnet, but the returned port also
 contains an address from the IPv6 subnet.


 If the user just wants a port with IPv4, why does the returned port get
 allocated an IPv6 address?
 And I know this is a designed behavior, per
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/multiple-ipv6-prefixes.html#proposed-change
 But we are still confused about this operation.

 Thanks to anyone who can help confirm this issue; we'd appreciate a
 reply as soon as possible.

 ZhaoBo




 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack local.conf file fro sriov pci nic passthrough

2015-05-15 Thread Robert Li (baoli)
Hi,

On your controller node,  you can add in local.conf:

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, 
ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, 
ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter

On your compute node:
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist = your whitelist definition 1
…
pci_passthrough_whitelist = your whitelist definition 2

You can get examples from the SR-IOV wikis, as people have pointed out in other 
threads.

good luck.

—Robert




On 5/15/15, 9:13 AM, Kamsali, RaghavendraChari (Artesyn) 
raghavendrachari.kams...@artesyn.com
 wrote:

Hi,
I am bringing up a single-node openstack cloud with a PCI SR-IOV capable NIC 
controller (Intel XL710), and have created 4 virtual functions on top of the NIC. 
My goal is to bring up the cloud setup with neutron and nova network services 
using devstack.
How do I configure the local.conf file so that when any VM spawns, it uses a 
virtual function of the SR-IOV NIC and is able to communicate?

Could anyone help me?


Thanks and Regards,
Raghavendrachari kamsali | Software Engineer II  | Embedded Computing
Artesyn Embedded Technologies|5th Floor, Capella Block, The V, Madhapur| 
Hyderabad, AP 500081 India
T +91-40-66747059 | M +919705762153

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] IPv4 transition/interoperation with IPv6

2015-05-05 Thread Robert Li (baoli)
Hi Mike,

Currently dual stack is supported. Can you be specific on what 
interoperation/transition techniques you are interested in? We’ve been thinking 
about NAT64 (stateless or stateful).

thanks,
Robert

On 5/4/15, 9:56 PM, Mike Spreitzer 
mspre...@us.ibm.com wrote:

Does Neutron support any of the 4/6 interoperation/transition techniques?  I 
wear an operator's hat nowadays, and want to make IPv6 as useful and easy to 
use as possible for my tenants.  I think the interoperation/transition 
techniques will play a big role in this.

Thanks,
Mike
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [SRIOV] IRC meeting for tommorow

2015-03-16 Thread Robert Li (baoli)
Hi,

I will not be available tomorrow morning for the meeting. Please feel free to 
go ahead without me.

Cheers,
Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] pci stats format and functional tests

2015-03-05 Thread Robert Li (baoli)
"extra_info" is no longer a key in the stats pool, nor is "physical_function". 
If you check pci/stats.py, the keys are pool_keys = ['product_id', 
'vendor_id', 'numa_node'] plus whatever tags are used in the whitelist. So I 
believe it's something like this:
"os-pci:pci_stats": [
    {
        "count": 5,
        "key1": "value1",
        ...
        "keyn": "valuen",
        "product_id": "1520",
        "vendor_id": "8086",
        "numa_node": ...
    },
],

And each stats entry may have different keys.
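
To restate the grouping rule in runnable form, a simplified Python sketch of
the idea (not nova's actual stats code):

from collections import Counter

POOL_KEYS = ["product_id", "vendor_id", "numa_node"]

def pool_key(dev, tags):
    # pool identity = the fixed keys plus whatever tags the whitelist used
    return tuple((k, dev.get(k)) for k in POOL_KEYS + list(tags))

devices = [
    {"product_id": "1520", "vendor_id": "8086", "numa_node": 0,
     "physical_network": "physnet1"},
    {"product_id": "1520", "vendor_id": "8086", "numa_node": 0,
     "physical_network": "physnet1"},
]
pools = Counter(pool_key(d, ["physical_network"]) for d in devices)
# -> a single pool with count == 2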

thanks,
—Robert

On 3/5/15, 5:16 PM, Jiang, Yunhong 
yunhong.ji...@intel.com wrote:

Paul, you are right that the ‘extra_info’ should not be in the 
os-pci:pci_stats, since it’s not part of ‘pool-keys’ anymore, but I’m not sure 
if both ‘key1’ and ‘phys_function’ will be part of the pci_stats.

Thanks
--jyh

From: Murray, Paul (HP Cloud) [mailto:pmur...@hp.com]
Sent: Thursday, March 5, 2015 11:39 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] pci stats format and functional tests

Hi All,

I know Yunhong Jiang and Daniel Berrange have been involved in the following, 
but I thought it worth sending to the list for visibility.

While writing code to convert the resource tracker to use the ComputeNode 
object, I realized that the api samples used in the functional tests are not in 
the same format as the PciDevicePool object. For example: 
hypervisor-pci-detail-resp.json has something like this:

"os-pci:pci_stats": [
    {
        "count": 5,
        "extra_info": {
            "key1": "value1",
            "phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]"
        },
        "keya": "valuea",
        "product_id": "1520",
        "vendor_id": "8086"
    }
],

My understanding from interactions with yjiang5 in the past leads me to think 
that something like this is what is actually expected:

"os-pci:pci_stats": [
    {
        "count": 5,
        "key1": "value1",
        "phys_function": "[[\"0x\", \"0x04\", \"0x00\", \"0x1\"]]",
        "keya": "valuea",
        "product_id": "1520",
        "vendor_id": "8086"
    }
],

This is the way the PciDevicePool object expects the data structure to be and 
is also the way the libvirt virt driver creates pci device information (i.e. 
without the “extra_info” key). Other than that (which is actually pretty clear) 
I couldn’t find anything to tell me definitively if my interpretation is 
correct and I don’t want to change the functional tests without being sure they 
are wrong. So if anyone can give some guidance here I would appreciate it.

I separated this stuff out into a patch with a couple of other minor cleanups 
in preparation for the ComputeNode change, see: 
https://review.openstack.org/#/c/161843

Let me know if I am on the right track,

Cheers,
Paul


Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ipv6] dhcp stateful

2015-02-19 Thread Robert Li (baoli)
My guess is that the dhcp client running inside the VM set the subnet
mask to /128. DHCPv6 doesn't provide a prefix length, but the client system
sometimes adds the netmask based on the link type. Some of the dhcp
clients use a script to configure the interface, and I think you can use
/64 if your link is ethernet.
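
As a concrete illustration, a sketch of what such a guest-side DHCPv6 hook
script could do, assuming the pyroute2 library; the address, interface, and
/64 prefix length are placeholders:

from pyroute2 import IPRoute

address, ifname = "2001:db8::10", "eth0"
with IPRoute() as ipr:
    idx = ipr.link_lookup(ifname=ifname)[0]
    # replace the /128 the client installed with the link's real /64
    ipr.addr("del", index=idx, address=address, prefixlen=128)
    ipr.addr("add", index=idx, address=address, prefixlen=64)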

—Robert

On 2/19/15, 4:27 AM, Andreas Scheuring scheu...@linux.vnet.ibm.com
wrote:

Hi, 
I was playing around with the various dhcp/radvd options of neutron.
Very helpful was the matrix [1] that describes the combinations of ra
and address mode that can be configured.

For dhcpv6-stateful (ra & address mode) it says: VM obtains IPv6 address
from dnsmasq using DHCPv6 stateful and optional info from dnsmasq using
DHCPv6 stateful [1] -- My assumption was that IP addresses and prefix
are assigned via dnsmasq.

But going this way, my instances got the right IP address (great) but
always the subnet mask /128, although I configured /64. Dumping the
traffic and looking at the dnsmasq logs, the dhcp process from solicit
to reply worked fine.

I was using rhel7 for guest and host and dnsmasq 2.68.

I googled around and found some hints that dhcpv6 does not support
prefix delegation. It seems like that is the job of the radvd daemon
[2][3][4].


Is that true? And if so, what's the use case of configuring
dhcpv6-stateful for ra and address mode?






[1] http://specs.openstack.org/openstack/neutron-specs/specs/juno/ipv6-radvd-ra.html#rest-api-impact

[2] https://lists.isc.org/pipermail/dhcp-users/2012-May/015446.html
[3] http://serverfault.com/questions/528387/sending-netmask-and-gateway-route-with-dhcp-for-ipv6
[4] https://supportforums.cisco.com/document/116221/part-1-implementing-dhcpv6-stateful-dhcpv6

-- 
Andreas 
(irc: scheuran)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-19 Thread Robert Li (baoli)
Hi Kyle, Ihar,

It looks promising to have our patch upstreamed. Please take a look at this 
pull request 
https://github.com/tomaszmrugalski/dibbler/pull/26#issuecomment-75144912. Most 
importantly, Tomek asked if it’s sufficient to have the code up in his master 
branch. I guess you guys may be able to help answer that question since I’m not 
familiar with the openstack release process.

Cheers,
Robert

On 2/13/15, 12:16 PM, Kyle Mestery 
mest...@mestery.com wrote:

On Fri, Feb 13, 2015 at 10:57 AM, John Davidge (jodavidg) 
jodav...@cisco.com wrote:
Hi Ihar,

To answer your questions in order:

1. Yes, you are understanding the intention correctly. Dibbler doesn't
currently support client restart, as doing so causes all existing
delegated prefixes to be released back to the PD server. All subnets
belonging to the router would potentially receive a new cidr every time a
subnet is added/removed.

2. Option 2 cannot be implemented using the current version of dibbler,
but it can be done using the version we have modified. Option 3 could
possibly be done with the current version of dibbler, but with some major
limitations - only one single router namespace would be supported.

Once the dibbler changes linked below are reviewed and finalised we will
only need to merge a single patch into the upstream dibbler repo. No
further patches are anticipated.

Yes, you are correct that dibbler is not needed unless prefix delegation
is enabled by the deployer. It is intended as an optional feature that can
be easily disabled (and probably will be by default). A test to check for
the correct dibbler version would certainly be necessary.

Testing in the gate will be an issue until the new version of dibbler is
merged and packaged in the various distros. I'm not sure if there is a way
to avoid this problem, unless we have devstack install from our updated
repo while we wait.

To me, this seems like a pretty huge problem. We can't expect distributions to 
package side-changes to upstream projects. The correct way to solve this 
problem is to work to get the changes required in the dependent packages 
upstream into those projects first (dibbler, in this case), and then propose 
the changes into Neutron to make use of those changes. I don't see how we can 
proceed with this work until the issues around dibbler has been resolved.

John Davidge
OpenStack@Cisco




On 13/02/2015 16:01, Ihar Hrachyshka 
ihrac...@redhat.com wrote:


Thanks for the write-up! See inline.

On 02/13/2015 04:34 PM, Robert Li (baoli) wrote:
 Hi,

 while trying to integrate dibbler client with neutron to support
 PD, we countered a few issues with the dibbler client (and server).
 With a neutron router, we have the qg-xxx interface that is
 connected to the public network, on which a dhcp server is running
 on the delegating router. For each subnet with PD enabled, a router
 port will be created in the neutron router. As a result, a new PD
 request will be sent that asks for a prefix from the delegating
 router. Keep in mind that the subnet is added into the router
 dynamically.

 We thought about the following options:

 1. use a single dibbler client to support the above requirement.
 This means, the client should be able to accept new requests on the
 fly either through configuration reload or other interfaces.
 Unfortunately, dibbler client doesn't support it.

Sorry for my ignorance on PD implementation (I will definitely look at
it the next week), but what does this entry above mean? Do you want a
single dibbler instance running per router serving all subnets plugged
into it? And you want to get configuration updates when a new subnet
is plugged in, or removed from the router?

If that's the case, why not just restarting the client?

 2. start a dibbler client per subnet. All of the dibbler clients
 will be using the same outgoing interface (which is the qg-xxx
 interface). Unfortunately, dibbler client uses /etc/dibbler and
 /var/lib/dibbler for its state (in which it saves duid file, pid
 file, and other internal states). This means it can only support
 one client per network node. 3. run a single dibbler client that
 requests a smaller prefix (say /56) and splits it among the subnets
 with PD enabled (neutron subnet requires /64). Depending on the
 neutron router setup, this may result in significant waste of
 prefixes.

Just to understand all options at the table: can we implement ^^
option with stock dibbler?


 Given the significant drawback with 3, we are left with 1 and 2.
 After looking at the dibbler source code, we found that 2 is easier
 to achieve for now by making some small changes in the dibbler
 code. In the long run, we think option 1 is better.

 The changes we made to the linux dibbler client code, and the
 dibbler server code can be found in here:
 https://github.com/johndavidge/dibbler/tree/cloud-dibbler

Re: [openstack-dev] pci_alias config

2015-02-18 Thread Robert Li (baoli)
If you just use SR-IOV for networking, then pci_alias is not needed.

—Robert

On 2/16/15, 3:11 PM, Harish Patil 
harish.pa...@qlogic.com wrote:

Hello,

Do we still need "pci_alias" config under /etc/nova/nova.conf for SR-IOV PCI 
passthru?

I have Juno release of 1:2014.2.1-0ubuntu1.

Thanks,

Harish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Prefix delegation using dibbler client

2015-02-13 Thread Robert Li (baoli)
Hi,

while trying to integrate dibbler client with neutron to support PD, we 
encountered a few issues with the dibbler client (and server). With a neutron 
router, we have the qg-xxx interface that is connected to the public network, 
on which a dhcp server is running on the delegating router. For each subnet 
with PD enabled, a router port will be created in the neutron router. As a 
result, a new PD request will be sent that asks for a prefix from the 
delegating router. Keep in mind that the subnet is added into the router 
dynamically.

We thought about the following options:

  1.  use a single dibbler client to support the above requirement. This means, 
the client should be able to accept new requests on the fly either through 
configuration reload or other interfaces. Unfortunately, dibbler client doesn’t 
support it.
  2.  start a dibbler client per subnet. All of the dibbler clients will be 
using the same outgoing interface (which is the qg-xxx interface). 
Unfortunately, dibbler client uses /etc/dibbler and /var/lib/dibbler for its 
state (in which it saves duid file, pid file, and other internal states). This 
means it can only support one client per network node.
  3.  run a single dibbler client that requests a smaller prefix (say /56) and 
splits it among the subnets with PD enabled (neutron subnet requires /64). 
Depending on the neutron router setup, this may result in significant waste of 
prefixes.

Given the significant drawback with 3, we are left with 1 and 2. After looking 
at the dibbler source code, we found that 2 is easier to achieve for now by 
making some small changes in the dibbler code. In the long run, we think option 
1 is better.

The changes we made to the linux dibbler client code, and the dibbler server 
code can be found here: 
https://github.com/johndavidge/dibbler/tree/cloud-dibbler. Basically it does a 
few things:
  — create a unique working area per dibbler client
  — since all the clients use the same outgoing interface, we’d like each 
dibbler client to use a unique LLA as its source address when sending messages. 
This would keep clients from receiving server messages that are not intended for 
them.
  — we found that dibbler server uses transaction ID alone to identify a match 
between a request and an answer. This would require that unique transaction IDs 
be used among all the existing clients. We found that clients could use the 
same transaction IDs in our environment. Therefore, a little change is made in 
the server code so that it will take the request sender into consideration 
while looking up a match.
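
To illustrate option 2, a small Python sketch of launching one client per
neutron router, each with its own working area; the dibbler-client
"-w <dir>" working-directory flag is an assumption about the patched build
discussed here, not stock dibbler:

import os
import subprocess

def start_pd_client(router_id, base="/var/lib/neutron/pd"):
    workdir = os.path.join(base, router_id)  # unique state dir per client
    os.makedirs(workdir, exist_ok=True)
    # client.conf inside workdir would name the qg-xxx interface and
    # request a /64 for each PD-enabled subnet
    return subprocess.Popen(["dibbler-client", "run", "-w", workdir])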


Option 1 requires better understanding of the dibbler code, and we think that 
it may not be possible to make it happen in the kilo timeframe. But we think it 
has significant advantages over option 2. Regardless, the changes made for 2 are 
also needed since we need to run one dibbler client per neutron router.

Now the issue is how to get those changes (possibly with further revision) 
into an official dibbler release ASAP so that we can use them for kilo release. 
John Davidge has contacted the dibbler authors, and hasn’t received response so 
far.

Comments and thoughts are welcome.

Cheers,
—Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] SR-IOV IRC meeting for 2/10

2015-02-09 Thread Robert Li (baoli)
Hi,

I won’t be able to make it for tomorrow’s meeting. But you guys are welcome to 
have the meeting without me.

—Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][nova] Question on rollback live migration at the destination

2015-01-26 Thread Robert Li (baoli)
Hi,

I’m looking at rollback_live_migration_at_destination() in compute/manager.py. 
If it's shared storage (such as NFS, where is_shared_instance_path is True), it's not 
going to be called since _live_migration_cleanup_flags() will return False. Can 
anyone let me know the reason behind it? So nothing needs to be cleaned 
up at the destination in such case? Should VIFs be unplugged, to say the least?
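
For reference, my reading of that flag logic, paraphrased in Python (not
nova's actual code):

def live_migration_cleanup_flags(is_shared_instance_path):
    # with a shared instance path the destination cleanup is skipped
    # entirely, so VIF unplug / PCI device release never runs there
    do_cleanup = not is_shared_instance_path
    destroy_disks = not is_shared_instance_path
    return do_cleanup, destroy_disks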

I’m working on the live migration support with SR-IOV macvtap interfaces. The 
devices allocated at the destination need to be freed.

thanks,
Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SR-IOV IRC meeting on Jan, 13th Canceled

2015-01-12 Thread Robert Li (baoli)
Hi,

I’m canceling the meeting since I’m traveling this week.

Regards,
Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Request Spec Freeze Exception

2015-01-08 Thread Robert Li (baoli)
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Nova: Add spec for VIF Driver for SR-IOV InfiniBand
https://review.openstack.org/#/c/131729/

Thanks for your kind consideration.

—Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2015-01-07 Thread Robert Li (baoli)
Hi Joe and others,

One of the topics for tomorrow's NOVA IRC is the k-2 spec exception process. 
I'd like to bring the following specs up again for consideration:

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads


Thanks for your kind consideration.

—Robert

On 12/22/14, 1:20 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Fri, Dec 19, 2014 at 6:53 AM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi Joe,

See this thread on the SR-IOV CI from Irena and Sandhya:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html

I believe that Intel is building a CI system to test SR-IOV as well.

Thanks for the clarification.


Thanks,
Robert


On 12/18/14, 9:13 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Neutron-Network settings support for vnic-type
https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Now that the specs deadline is approaching, I'd like to bring them up here 
for exception consideration. A lot of work has been put into them, and we'd 
like to see them get through for Kilo.

We haven't started the spec exception process yet.


Regarding CI for PCI passthrough and SR-IOV, see the attached thread.

Can you share this via a link to something on 
http://lists.openstack.org/pipermail/openstack-dev/


thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] host aggregate's availability zone

2014-12-23 Thread Robert Li (baoli)
Hi Danny,

check this link out.
https://wiki.openstack.org/wiki/Scheduler_Filters

Add the following into your /etc/nova/nova.conf before starting the nova 
service.

scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, 
ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, 
ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter

Or, you can do so in your local.conf:
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_alias={"name":"cisco","vendor_id":"8086","product_id":"10ed"}
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, 
ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, 
ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter


—Robert

On 12/22/14, 9:53 AM, Danny Choi (dannchoi) 
dannc...@cisco.com wrote:

Hi Joe,

No, I did not.  I’m not aware of this.

Can you tell me exactly what needs to be done?

Thanks,
Danny

--

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper cropper@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: b36d2234-bee0-4c7b-a2b2-a09cc9098...@gmail.com
Content-Type: text/plain; charset=utf-8

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? 
 And enable the FilterScheduler?  These are two common issues related to this.

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) 
dannc...@cisco.com wrote:
Hi,
I have a multi-node setup with 2 compute hosts, qa5 and qa6.
I created 2 host-aggregate, each with its own availability zone, and assigned 
one compute host:
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+
My intent is to control at which compute host to launch a VM via the 
host-aggregate's availability-zone parameter.
To test, for vm-1 I specify --availability-zone az-1, and 
--availability-zone az-2 for vm-2:
localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1
+--------------------------------------+----------------+
| Property                             | Value          |
+--------------------------------------+----------------+
| OS-DCF:diskConfig                    | MANUAL         |
| OS-EXT-AZ:availability_zone          | nova           |
| OS-EXT-SRV-ATTR:host                 | -              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0066  |
| OS-EXT-STS:power_state               | 0              |
| OS-EXT-STS:task_state                | -              |
| OS-EXT-STS:vm_state                  | building       |
| OS-SRV-USG:launched_at               | -              |
| OS-SRV-USG:terminated_at             | -              |
| accessIPv4                           |                |
| accessIPv6                           |                |
| adminPass                            | kxot3ZBZcBH6   |
| config_drive

Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2014-12-19 Thread Robert Li (baoli)
Hi Joe,

See this thread on the SR-IOV CI from Irena and Sandhya:

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html

http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html

I believe that Intel is building a CI system to test SR-IOV as well.

Thanks,
Robert


On 12/18/14, 9:13 PM, Joe Gordon 
joe.gord...@gmail.com wrote:



On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) 
ba...@cisco.com wrote:
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Neutron-Network settings support for vnic-type
https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Now that the specs deadline is approaching, I'd like to bring them up here 
for exception consideration. A lot of work has been put into them, and we'd 
like to see them get through for Kilo.

We haven't started the spec exception process yet.


Regarding CI for PCI passthrough and SR-IOV, see the attached thread.

Can you share this via a link to something on 
http://lists.openstack.org/pipermail/openstack-dev/


thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2014-12-18 Thread Robert Li (baoli)
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Neutron-Network settings support for vnic-type
https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Now that the specs deadline is approaching, I'd like to bring them up here 
for exception consideration. A lot of work has been put into them, and we'd 
like to see them get through for Kilo.

Regarding CI for PCI passthrough and SR-IOV, see the attached thread.

thanks,
Robert

---BeginMessage---
Hi Steve,
Regarding SR-IOV testing, at Mellanox we have a CI job running on a bare metal
node with a Mellanox SR-IOV NIC. This job is reporting on neutron patches.
Currently API tests are executed.
The contact person for the SR-IOV CI job is listed at driverlog:
https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L1439

The following items are in progress:
 - SR-IOV functional testing
 - Reporting CI job on nova patches
 - Multi-node setup
It is worth mentioning that we want to start collaborating on the SR-IOV
testing effort as part of the pci pass-through subteam activity.
Please join the weekly meeting if you want to collaborate or have some
inputs: https://wiki.openstack.org/wiki/Meetings/Passthrough

BR,
Irena

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: Wednesday, November 12, 2014 9:11 PM
To: itai mendelsohn; Adrian Hoban; Russell Bryant; Ian Wells (iawells);
Irena Berezovsky; ba...@cisco.com
Cc: Nikola Đipanov; Russell Bryant; OpenStack Development Mailing List (not
for usage questions)
Subject: [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other
features that can't be tested on current infra.

Hi all,

We had some discussions last week - particularly in the Nova NFV design
session [1] - on the subject of ensuring that telecommunications and
NFV-related functionality has adequate continuous integration testing. In
particular the focus here is on functionality that can't easily be tested on
the public clouds that back the gate, including:

- NUMA (vCPU pinning, vCPU layout, vRAM layout, huge pages, I/O device
locality)
- SR-IOV with Intel, Cisco, and Mellanox devices (possibly others)
  
In each case we need to confirm where we are at, and the plan going
forward, with regards to having:

1) Hardware to run the CI on.
2) Tests that actively exercise the functionality (if not already in
existence).
3) Point person for each setup to maintain it and report into the
third-party meeting [2].
4) Getting the jobs operational and reporting [3][4][5][6].

In the Nova session we discussed a goal of having the hardware by K-1 (Dec
18) and having it reporting at least periodically by K-2 (Feb 5). I'm not
sure if similar discussions occurred on the Neutron side of the design
summit.

SR-IOV
==

Adrian and Irena mentioned they were already in the process of getting up
to speed with third party CI for their respective SR-IOV configurations.
Robert, are you attempting something similar with regards to Cisco devices? What is the
status of each of these efforts versus the four items I listed above, and
what do you need assistance with?

NUMA


We still need to identify some hardware to run third party CI for the
NUMA-related work, and no doubt other things that will come up. It's
expected that this will be an interim solution until OPNFV resources can be
used (note cdub jokingly replied 1-2 years when asked for a rough estimate
- I mention this because based on a later discussion some people took this
as a serious estimate).

Ian, did you have any luck kicking this off? Russell and I are also
endeavouring to see what we can do on our side w.r.t. this short-term
approach - in particular, if you find hardware, we still need to find an owner
to actually set it up and manage it as discussed.

In theory to get started we need a physical multi-socket box and a virtual
machine somewhere on the same network to handle job control etc. I believe
the tests themselves can be run in VMs (just not those exposed by existing
public clouds) assuming a recent Libvirt and an appropriately crafted
Libvirt XML that ensures the VM gets a multi-socket topology etc. (we can
assist with this).
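
For reference, a minimal sketch of the libvirt XML involved (values are
illustrative only, not a prescribed test configuration):

<cpu>
  <!-- gives the guest a 2-socket x 4-core topology -->
  <topology sockets='2' cores='4' threads='1'/>
</cpu>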

Thanks,

Steve

[1] https://etherpad.openstack.org/p/kilo-nova-nfv
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty
[3] 

Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Robert Li (baoli)
Nice catch. Since it’s already merged, a new bug may be in order.

—Robert

On 11/13/14, 10:25 AM, Miguel Ángel Ajo majop...@redhat.com wrote:

I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a 
subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled
2) IPv4 subnet, with isolated metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py

I haven’t been able to test yet, but wanted to share it before I forget.




Miguel Ángel
ajo @ freenode.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPv6 team summit meetup

2014-11-07 Thread Robert Li (baoli)
will be there too

On 11/7/14, 4:53 AM, Brian Haley brian.ha...@hp.com wrote:

On 11/06/2014 04:18 PM, Xuhan Peng wrote:
 Hey,

 Since we don't have any slot for ipv6 in summit to meet up, can we have
 a lunch meetup together tomorrow (11/7 Friday)?

 We can meet at 12:30 at the meet up place Neuilly lobby of Le Meridien
 and go to lunch together after that.

I'll be there.

-Brian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-11-04 Thread Robert Li (baoli)


On 11/3/14, 6:32 PM, Doug Hellmann d...@doughellmann.com wrote:


On Oct 31, 2014, at 9:27 PM, Robert Li (baoli) ba...@cisco.com wrote:

 
 
 On 10/28/14, 11:01 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Tue, Oct 28, 2014 at 10:18:37AM -0400, Jay Pipes wrote:
 On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
  One option would be a more CSV-like syntax, e.g.
 
   pci_passthrough_whitelist =
 address=*0a:00.*,physical_network=physnet1
   pci_passthrough_whitelist = vendor_id=1137,product_id=0071
 
  But this gets confusing if we want to specify multiple sets of data,
  so we might need to use semicolons as the first separator and commas
  for list element separators
 
   pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2,
 vendor_id=1137;product_id=0071
 
 What about this instead (with each being a MultiStrOpt, but no comma
or
 semicolon delimiters needed…)?

This is easy for a developer to access, but not easy for a deployer to
make sure they have configured correctly because they have to keep up
with the order of the options instead of making sure there is a new group
header for each set of options.

 
 [pci_passthrough_whitelist]
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1
 
 I think this is reasonable, though do we actually support setting
  the same key twice?

Yes, if it is registered in different groups.

 
 As an alternative we could just append an index for each element
 in the list, eg like this:
 
 [pci_passthrough_whitelist]
 rule_count=2
 
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id.0=8086

Be careful about constructing the names. You can’t have “.” in them
because then you can’t access them in python, for example:
cfg.CONF.pci_passthrough_whitelist.vendor_id.0

 product_id.0=1001
 address.0=*
 physical_network.0=*
 
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id.1=1137
 product_id.1=0071
 address.1=*:0a:00.*
 physical_network.1=physnet1
 [pci_passthrough_whitelist]
 rule_count=2
 
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id.0=8086
 product_id.0=1001
 address.0=*
 physical_network.0=*
 
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id.1=1137
 product_id.1=0071
 address.1=*:0a:00.*
 physical_network.1=physnet1
 
 Or like this:
 
 [pci_passthrough]
 whitelist_count=2
 
 [pci_passthrough_rule.0]
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*
 
 [pci_passthrough_rule.1]
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1
 
 Yeah, the last format (copied below) is a good idea (without the
 section for the count) for handling a list of dictionaries. I've seen
 similar config examples in neutron code.
 [pci_passthrough_rule.0]
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*
 
 [pci_passthrough_rule.1]
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1
 
 Without direct oslo support, implementing it requires a small method
 that uses oslo cfg's MultiConfigParser().

I’m not sure what you mean needs new support? I think this would work,
except for the “.” in the group name.

The group header is not fixed in this case. Let’s replace “.” with “:”,
then the user may have configured multiple groups such as
[pci_passthrough_rule:x]. With oslo, how would you register the group and
the options under it and access them as a list of dictionaries?
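
Roughly, something along these lines might work (a minimal sketch, not
tested; the section prefix and helper name are illustrative):

from oslo.config import cfg

def load_pci_passthrough_rules(config_files):
    # Collect every [pci_passthrough_rule:<n>] section across the given
    # config files and return their options as a list of dicts.
    multi_parser = cfg.MultiConfigParser()
    read_ok = multi_parser.read(config_files)
    if len(read_ok) != len(config_files):
        raise RuntimeError('could not read all config files')
    rules = []
    for parsed_file in multi_parser.parsed:
        for section, options in parsed_file.items():
            if section.lower().startswith('pci_passthrough_rule'):
                # each raw value is parsed as a list of strings; take
                # the last occurrence of each option
                rules.append(dict((k, v[-1]) for k, v in options.items()))
    return rules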


 
 Now a few questions if we want to do it in Kilo:
  - Do we still need to be backward compatible in configuring the
 whitelist? If we do, then we still need to be able to handle the json
 docstring.

If there is code released using that format, you need to support it. You
can define options as being deprecated so the new options replace the old
but the old are available if found in the config file.

Doug
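
For illustration, registering such a deprecated option might look like this
sketch (the new option and group names are hypothetical, not from any patch):

from oslo.config import cfg

# A new-style option that still honours the old option name if it is
# found under [DEFAULT] in an existing config file.
whitelist_opt = cfg.MultiStrOpt('whitelist',
                                deprecated_name='pci_passthrough_whitelist',
                                deprecated_group='DEFAULT')
cfg.CONF.register_opts([whitelist_opt], group='pci_passthrough')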

  - To support the new format in devstack, we can use a meta-section in
 local.conf. How would we support the old format, which is still a json
 docstring? Is something like this
 https://review.openstack.org/#/c/123599/ acceptable?
  - Do we allow old and new formats to coexist in the config file? Probably not.
 
 
 
 Either that, or the YAML file that Sean suggested, would be my
 preference...
 
 I think it is nice to have it all in the same file, not least because it
 will be easier for people supporting openstack in the field, i.e. in bug
 reports we can just ask for nova.conf and know we'll have all the user
 config we care about in that one place.
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-
 http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org

Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-31 Thread Robert Li (baoli)


On 10/28/14, 11:01 AM, Daniel P. Berrange berra...@redhat.com wrote:

On Tue, Oct 28, 2014 at 10:18:37AM -0400, Jay Pipes wrote:
 On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
 One option would be a more CSV-like syntax, e.g.
 
 pci_passthrough_whitelist =
address=*0a:00.*,physical_network=physnet1
 pci_passthrough_whitelist = vendor_id=1137,product_id=0071
 
 But this gets confusing if we want to specify multiple sets of data,
 so we might need to use semicolons as the first separator and commas for
 list element separators
 
 pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2,
vendor_id=1137;product_id=0071
 
 What about this instead (with each being a MultiStrOpt, but no comma or
 semicolon delimiters needed...)?
 
 [pci_passthrough_whitelist]
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1

I think this is reasonable, though do we actually support setting
the same key twice?

As an alternative we could just append an index for each element
in the list, eg like this:

 [pci_passthrough_whitelist]
 rule_count=2

 # Any Intel PRO/1000 F Sever Adapter
 vendor_id.0=8086
 product_id.0=1001
 address.0=*
 physical_network.0=*

 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id.1=1137
 product_id.1=0071
 address.1=*:0a:00.*
 physical_network.1=physnet1
 [pci_passthrough_whitelist]
 rule_count=2

 # Any Intel PRO/1000 F Sever Adapter
 vendor_id.0=8086
 product_id.0=1001
 address.0=*
 physical_network.0=*

 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id.1=1137
 product_id.1=0071
 address.1=*:0a:00.*
 physical_network.1=physnet1

Or like this:

 [pci_passthrough]
 whitelist_count=2

 [pci_passthrough_rule.0]
 # Any Intel PRO/1000 F Sever Adapter
 vendor_id=8086
 product_id=1001
 address=*
 physical_network=*

 [pci_passthrough_rule.1]
 # Cisco VIC SR-IOV VF only on specified address and physical network
 vendor_id=1137
 product_id=0071
 address=*:0a:00.*
 physical_network=physnet1

Yeah, the last format (copied below) is a good idea (without the
section for the count) for handling a list of dictionaries. I've seen
similar config examples in neutron code.
[pci_passthrough_rule.0]
# Any Intel PRO/1000 F Sever Adapter
vendor_id=8086
product_id=1001
address=*
physical_network=*

[pci_passthrough_rule.1]
# Cisco VIC SR-IOV VF only on specified address and physical network
vendor_id=1137
product_id=0071
address=*:0a:00.*
physical_network=physnet1

Without direct oslo support, implementing it requires a small method that
uses oslo cfg's MultiConfigParser().

Now a few questions if we want to do it in Kilo:
  - Do we still need to be backward compatible in configuring the
whitelist? If we do, then we still need to be able to handle the json
docstring.
  - To support the new format in devstack, we can use a meta-section in
local.conf. How would we support the old format, which is still a json
docstring? Is something like this
https://review.openstack.org/#/c/123599/ acceptable?
  - Do we allow old and new formats to coexist in the config file? Probably not.



 Either that, or the YAML file that Sean suggested, would be my
preference...

I think it is nice to have it all in the same file, not least because it
will be easier for people supporting openstack in the field, i.e. in bug
reports we can just ask for nova.conf and know we'll have all the user
config we care about in that one place.

Regards,
Daniel
-- 
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-
http://virt-manager.org :|
|: http://autobuild.org   -o-
http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-
http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ipv6 and ipv4 dual stack for floating IP

2014-10-30 Thread Robert Li (baoli)
IPv6 floating IP is currently not supported.

Check out this review and the associated bug: 
https://review.openstack.org/#/c/131145/

—Robert

On 10/30/14, 6:47 AM, Jerry Xinyu Zhao xyzje...@gmail.com wrote:

Unfortunately, it seems to be the case. I just saw there is a summit talk about 
it called IPv6 Feature in OpenStack Juno. Dual-stack floating IP support is 
planned for K. However, I couldn't even get a single IPv6 floating IP to work. 
Even though I configured only an IPv6 subnet for the external network, I got 
these errors from neutron-l3-agent when associating the floating IP with the 
instance.

Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent: Command: 
['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-b243c786-4648-4d69-b749-ee5fad02069b', 
'iptables-restore', '-c']
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent: Exit code: 
2
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent: Stdout: ''
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent: Stderr: 
iptables-restore v1.4.21: host/network `2001:470:1f0f:cb4::7' not found\nError 
occurred at line: 39\nTry `iptables-restore -h' or 'iptables-restore --help' 
for more information.\n
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent: 2014-10-29 
10:55:32.407 30286 ERROR neutron.agent.linux.iptables_manager [-] 
IPTablesManager.apply failed to apply the following set of iptables rules:
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   1. # 
Generated by iptables-save v1.4.21 on Wed Oct 29 10:55:32 2014
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   2. 
*raw
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   3. 
:PREROUTING ACCEPT [148546:23091816]
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   4. 
:OUTPUT ACCEPT [219:20352]
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   5. 
:neutron-l3-agent-OUTPUT - [0:0]
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   6. 
:neutron-l3-agent-PREROUTING - [0:0]
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   7. 
[148546:23091816] -A PREROUTING -j neutron-l3-agent-PREROUTING
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   8. 
[219:20352] -A OUTPUT -j neutron-l3-agent-OUTPUT
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:   9. 
COMMIT
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:  10. # 
Completed on Wed Oct 29 10:55:32 2014
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:  11. # 
Generated by iptables-save v1.4.21 on Wed Oct 29 10:55:32 2014
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:  12. 
*mangle
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:  13. 
:PREROUTING ACCEPT [148546:23091816]
Oct 29 10:55:32 overcloud-controller0-ghqtsmsgjgck neutron-l3-agent:  14. 
:INPUT ACCEPT [55837:18978656]

On Thu, Oct 30, 2014 at 6:32 PM, Harm Weites h...@weites.com wrote:
I'm seeing the same error when trying to set up a whole new network through 
Horizon with an external gateway and an IPv4 and IPv6 subnet. E.g., without 
floating IP.

l3_agent.py is trying this: prefixlen = 
netaddr.IPNetwork(port['subnet']['cidr']).prefixlen

Looking inside port[] lists the following items:

2014-10-30 10:26:05.834 21765 ERROR neutron.agent.l3_agent [-] Ignoring 
multiple IPs on router port b4d94d2a-0ba2-43f0-be5f-bb53e89abe32
2014-10-30 10:26:05.836 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[status] = DOWN
2014-10-30 10:26:05.837 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[binding:host_id] = myhostname
2014-10-30 10:26:05.839 21765 INFO neutron.agent.l3_agent [-] CHECK: port[name] 
=
2014-10-30 10:26:05.840 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[allowed_address_pairs] = []
2014-10-30 10:26:05.841 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[admin_state_up] = True
2014-10-30 10:26:05.843 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[network_id] = 00539791-0b2f-4628-9599-622fa00993b5
2014-10-30 10:26:05.844 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[tenant_id] =
2014-10-30 10:26:05.846 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[extra_dhcp_opts] = []
2014-10-30 10:26:05.847 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[binding:vif_details] = {}
2014-10-30 10:26:05.848 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[binding:vif_type] = binding_failed
2014-10-30 10:26:05.849 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[device_owner] = network:router_gateway
2014-10-30 10:26:05.851 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[mac_address] = fa:16:3e:53:89:8d
2014-10-30 10:26:05.853 21765 INFO neutron.agent.l3_agent [-] CHECK: 
port[binding:profile] = {}
2014-10-30 

Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-10-28 Thread Robert Li (baoli)
Sean,

Are you talking about this one: https://review.openstack.org/#/c/128805/?
is it still breaking something after fixing the incompatible awk syntax?

Originally https://review.openstack.org/#/c/123599/ proposed a simple
patch to support that config. But it was abandoned in favor of the
local.conf meta-section.

Thanks,
Robert

On 10/28/14, 8:31 AM, Daniel P. Berrange berra...@redhat.com wrote:

On Tue, Oct 28, 2014 at 08:07:14AM -0400, Sean Dague wrote:
 On 10/28/2014 07:44 AM, Daniel P. Berrange wrote:
  On Tue, Oct 28, 2014 at 07:34:11AM -0400, Sean Dague wrote:
  We're dealing with some issues on devstack pass through with really
  complicated config option types, the fixes are breaking other things.
 
  The issue at hand is the fact that the pci passthrough device listing
  is an oslo MultiStrOpt in which each option value is a fully valid json
  document, which must parse as such. That leads to things like:
 
  pci_passthrough_whitelist = {address:*:0a:00.*,
  physical_network:physnet1}
  pci_passthrough_whitelist = {vendor_id:1137,product_id:0071}
 
  Which, honestly, seems a little weird for configs.
 
  We're talking about a small number of fixed fields here, so the use
of a
  full json doc seems weird. I'd like to reopen why this was the value
  format, and if we could have a more simple one.
  
  Do you have any suggestion for an alternative config syntax for
  specifying a list of dicts which would be suitable?
  
  One option would be a more CSV-like syntax, e.g.
  
 pci_passthrough_whitelist =
address=*0a:00.*,physical_network=physnet1
 pci_passthrough_whitelist = vendor_id=1137,product_id=0071
  
  But this gets confusing if we want to specify multiple sets of data,
  so we might need to use semicolons as the first separator and commas
  for list element separators
  
 pci_passthrough_whitelist = vendor_id=8085;product_id=4fc2,
vendor_id=1137;product_id=0071
  
  Overall it isn't clear that inventing a special case language for
this PCI
  config value is a good idea.
  
  I think it illustrates a gap in oslo.config, which ought to be able to
  support a config option type which was a list of dicts of strings
  so anywhere which needs such a beast will use the same syntax.
 
 Mostly, why do we need name= at all? This seems like it would be fine as
 an fstab-like format (with 'x' as an ignore value).
 
 # vendor_id product_id address
 pci_passthrough_whitelist = 8085
 pci_passthrough_whitelist = 1137 4fc2
 pci_passthrough_whitelist = x 0071
 pci_passthrough_whitelist = x x *0a:00.*

 Basically going to a full name = value seems incredibly overkill for
 something with < 6 fields.

I don't think that is really very extensible for the future to drop the
key name. We've already extended the info we record here at least once,
and I expect we'd want to add more fields later. It also makes it
less clear to the user - it is very easy to get confused about vendor
vs product IDs if we leave out the name.

Regards,
Daniel
-- 
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-
http://virt-manager.org :|
|: http://autobuild.org   -o-
http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-
http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing to disallow updates of IPv6 attributes on subnets

2014-10-09 Thread Robert Li (baoli)
Hi Thiago,

A couple of things to consider:
  - As it is now, it doesn’t seem to be fully functional if you change your 
subnet to use SLAAC. The addresses that were assigned to your existing ports in 
neutron wouldn’t be updated/changed. So basically, you cannot simply make an 
API call to change the subnet and expect that everything will be set up 
correctly. Not to mention that currently the subnet API to update the modes 
can’t even be invoked.

  - If you want to use SLAAC, there might be a couple of ways to do that (see 
the command sketched below):
    . Once multiple prefixes are supported, you can create a new SLAAC subnet 
in the same network. Obviously you need to use a different prefix. Later, you 
can remove the previous subnet.
    . For now, you can create a new network with a SLAAC subnet and attach 
your VMs to this new network. Once that’s done, you can remove the previous 
network.
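
For illustration, creating the new SLAAC subnet might look something like 
this (a sketch; the network name and prefix are hypothetical):

neutron subnet-create --ip-version 6 --ipv6-ra-mode slaac \
  --ipv6-address-mode slaac physnet1-vlan200 2001:db8:2::/64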

Hope that helps,
Robert

On 10/8/14, 10:25 PM, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

Hi!

Currently, I have IceHouse up and running (Ubuntu 14.04.1) with VLAN Provider 
Network + static IPv6.

I created the subnet(s) like this (one for each tenant):

--
neutron net-create --tenant-id $ID --provider:physical_network=physnet1 
--provider:network_type=vlan --provider:segmentation_id=200 physnet1-vlan200

neutron subnet-create --ip-version 6 --disable-dhcp --tenant-id $ID 
physnet1-vlan200 2001:db8:1::/64 --allocation-pool 
start=2001:db8:1::8000,end=2001:db8:1:0::::fffe
--

These new bugs mean that, after upgrading to Juno, I won't be able to 
update/convert this static network to SLAAC?!

If yes, how can I force the update without breaking the production environment? 
Is there a procedure to follow?

I'm not using Neutron L3 Router and I have no plans to use GRE/VXLAN, neither a 
radvd controlled by Neutron. My upstream router already have radvd ready.

Thanks!
Thiago

On 7 October 2014 13:21, Henry Gessau ges...@cisco.com wrote:
A number of bugs[1][2][3] have been filed which are related to updating the
IPv6 attributes after a subnet has been created.

In the reviews[4][5] for the fixes for [1] and [2] some shortcomings and
questions have been raised, which were discussed in today's IPv6 IRC meeting[6].

Summary:
In Juno we are not ready for allowing the IPv6 attributes on a subnet to be
updated after the subnet is created, because:
- The implementation for supporting updates is incomplete.
- Perceived lack of usefulness, no good use cases known yet.
- Allowing updates causes more complexity in the code.
- Have not tested that radvd, dhcp, etc. behave OK after update.

Therefore we are proposing to change 'allow_put' to False for the two IPv6
attributes, ipv6_ra_mode and ipv6_address_mode. This will prevent the modes
from being updated via the PUT:subnets API.

We would be interested to hear of any disagreements or questions.


[1] https://launchpad.net/bugs/1362966
[2] https://launchpad.net/bugs/1363064
[3] https://launchpad.net/bugs/1373417
[4] https://review.openstack.org/125328
[5] https://review.openstack.org/117799
[6]
http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-10-07-15.01.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] local.conf ini file setting issues

2014-10-07 Thread Robert Li (baoli)
Hi Ian,

I agree with your plan. I've +1'ed the first two patches.

I responded to your comments on [3]. Basically I think that iniadd_literal
is necessary. In addition, [3] will also keep the original order of items.

With all three patches, I think the end result would be that it's an
almost exact copy from local.conf to the destination config file.
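
For example, a local.conf meta-section like this sketch (the values are
illustrative) should land verbatim in nova.conf:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071"}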


—Robert

On 10/6/14, 9:38 PM, Ian Wienand iwien...@redhat.com wrote:

Hi,

Rather than adding more MAGIC_VARIABLE=foo variables to devstack
that really only add lines to config files, I've been asking people to
add them to local.conf and provide additional corresponding
documentation if required.

However increased use has exposed some issues, now covered by several
reviews.  There seem to be three issues:

  1. preservation of quotes
  2. splitting of arguments with an '=' in them
  3. multi-line support

We have several reviews not dependent on each other but which will all
conflict.  If we agree, I think we should

  1. merge [1] to handle quotes
  2. merge [2] to handle '='s
  3. extract just multi-line support from [3]

All include test-cases, which should increase confidence

--

I did consider re-implementing a python tool to handle this; there
were some things that made me think twice.  Firstly ini settings in
local.conf are expected to expand shell-vars (path to neutron plugin
configs, etc) so you end up shelling out anyway.  Secondly
ConfigParser doesn't like [[foo]] as a section name (drops the
trailing ], maybe a bug) so you have to start playing games there.
Using a non-standard library (oslo.config) would be a big change to
devstack's current usage of dependencies.

-i

[1] https://review.openstack.org/124227
[2] https://review.openstack.org/124467/
[3] https://review.openstack.org/124502


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-30 Thread Robert Li (baoli)
Xu Han,

That looks good to me. To keep it consistent with existing CLI, we should use 
ip-version instead of ‘version’. It seems to be identical to prefixing the 
option_name with v4 or v6, though.

Just to clarify, are the available opt-names coming from dnsmasq definitions?

With regard to the default, your suggestion ("version is optional; no version 
means version=4") seems to be different from Mark’s:
I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.
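
For illustration, a port under this proposal might carry something like the 
following (a sketch; the values are taken from the examples in this thread):

"extra_dhcp_opts": [
    {"opt_name": "tftp-server", "opt_value": "123.123.123.123", "version": 4},
    {"opt_name": "dns-server", "opt_value": "[2001:0200:feed:7ac0::1]", "version": 6},
    {"opt_name": "bootfile-name", "opt_value": "testfile.1"}
]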

Thanks,
Robert

On 9/30/14, 1:46 AM, Xu Han Peng pengxu...@gmail.com wrote:

Robert,

I think the CLI will look something like this, based on Mark's suggestion:

neutron port-create extra_dhcp_opts 
opt_name=dhcp_option_name,opt_value=value,version=4(or 6) network

This extra_dhcp_opts can be repeated and version is optional (no version means 
version=4).

Xu Han

On 09/29/2014 08:51 PM, Robert Li (baoli) wrote:
Hi Xu Han,

My question is what the CLI user interface would look like to distinguish 
between v4 and v6 DHCP options.

Thanks,
Robert

On 9/28/14, 10:29 PM, Xu Han Peng pengxu...@gmail.com wrote:

Mark's suggestion works for me as well. If no one objects, I am going to start 
the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:

On Sep 26, 2014, at 2:39 AM, Xu Han Peng pengxu...@gmail.com wrote:

Currently the extra_dhcp_opts has the following API interface on a port:

{
    "port":
    {
        "extra_dhcp_opts": [
            {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
            {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
            {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
        ],
        ...
    }
}

During the development of the DHCPv6 function for IPv6 subnets, we found this 
format doesn't work anymore because a port can have both IPv4 and IPv6 
addresses. So we need to find a new way to specify extra_dhcp_opts for DHCPv4 
and DHCPv6, respectively. (https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6) so 
we can distinguish opts for v4 or v6 by parsing the opt_name. For backward 
compatibility, no prefix means IPv4 dhcp opt.

"extra_dhcp_opts": [
    {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
    {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
    {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward 
compatibility, both old format and new format are acceptable, but old format 
means IPv4 dhcp opts.

"extra_dhcp_opts": {
    "ipv4": [
        {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
        {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
    ],
    "ipv6": [
        {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
    ]
}

The pro of Option 1 is that there is no need to change the API structure; we 
only need to add validation and parsing for opt_name. The con of Option 1 is 
that the user needs to input a prefix for every opt_name, which can be error 
prone. The pro of Option 2 is that it's clearer than Option 1. The con is that 
we need to check two formats for backward compatibility.

We discussed this in the IPv6 sub-team meeting and we think Option 2 is 
preferred. Can I also get the community's feedback on which one is preferred, 
or any other comments?


I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.

mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-29 Thread Robert Li (baoli)
Hi Xu Han,

My question is what the CLI user interface would look like to distinguish 
between v4 and v6 DHCP options.

Thanks,
Robert

On 9/28/14, 10:29 PM, Xu Han Peng pengxu...@gmail.com wrote:

Mark's suggestion works for me as well. If no one objects, I am going to start 
the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:

On Sep 26, 2014, at 2:39 AM, Xu Han Peng pengxu...@gmail.com wrote:

Currently the extra_dhcp_opts has the following API interface on a port:

{
    "port":
    {
        "extra_dhcp_opts": [
            {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
            {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
            {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
        ],
        ...
    }
}

During the development of the DHCPv6 function for IPv6 subnets, we found this 
format doesn't work anymore because a port can have both IPv4 and IPv6 
addresses. So we need to find a new way to specify extra_dhcp_opts for DHCPv4 
and DHCPv6, respectively. (https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6) so 
we can distinguish opts for v4 or v6 by parsing the opt_name. For backward 
compatibility, no prefix means IPv4 dhcp opt.

"extra_dhcp_opts": [
    {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
    {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
    {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward 
compatibility, both old format and new format are acceptable, but old format 
means IPv4 dhcp opts.

"extra_dhcp_opts": {
    "ipv4": [
        {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
        {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
    ],
    "ipv6": [
        {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
    ]
}

The pro of Option 1 is that there is no need to change the API structure; we 
only need to add validation and parsing for opt_name. The con of Option 1 is 
that the user needs to input a prefix for every opt_name, which can be error 
prone. The pro of Option 2 is that it's clearer than Option 1. The con is that 
we need to check two formats for backward compatibility.

We discussed this in the IPv6 sub-team meeting and we think Option 2 is 
preferred. Can I also get the community's feedback on which one is preferred, 
or any other comments?


I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.

mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] requesting an FFE for SRIOV

2014-09-04 Thread Robert Li (baoli)
Hi,

The main sr-iov patches have gone through lots of code reviews, manual 
rebasing, etc. Now we have some critical refactoring work on the existing infra 
to get it ready. All the code for refactoring and sr-iov is up for review.

https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][SR-IOV] Please review this patch series: replace pci_request storage with proper object usage

2014-09-03 Thread Robert Li (baoli)
Hi,

the patch series:
 https://review.openstack.org/#/c/117781/5
https://review.openstack.org/#/c/117895/
https://review.openstack.org/#/c/117839/
 https://review.openstack.org/#/c/118391/

is ready for review. This needs to get in before Juno feature freeze so that 
the sr-iov patches  can land in Juno. Refer to 
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov for all the 
patches related to SR-IOV.

thanks,
Robert



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-08-27 Thread Robert Li (baoli)
Hi Xuhan,

What I saw is that GARP is sent to the gateway port and also to the router 
ports, from a neutron router. I’m not sure why it’s sent to the router ports 
(internal network). My understanding for arping to the gateway port is that it 
is needed for proper NAT operation. Since we are not planning to support ipv6 
NAT, so this is not required/needed for ipv6 any more?

There is an abandoned patch that disabled the arping for ipv6 gateway port:  
https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py

thanks,
Robert

On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com wrote:

As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to 
start a discussion about how to support l3 agent HA when IP version is IPv6.

This problem is triggered by bug [1] where sending gratuitous arp packet for HA 
doesn't work for IPv6 subnet gateways. This is because neighbor discovery 
instead of ARP should be used for IPv6.

After reading the comments on code review [2], my thinking on how to solve 
this problem turns into how to send out a neighbor advertisement for IPv6 
routers, just like sending an ARP reply for IPv4 routers.

I searched for utilities which can do this and only found a utility called 
ndsend [3], part of vzctl on Ubuntu. I could not find similar tools on other 
Linux distributions.
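
If ndsend were used, it would presumably be invoked inside the router 
namespace, along these lines (a sketch; the router ID, address and interface 
name are hypothetical, and the argument order follows the ndsend man page):

ip netns exec qrouter-<router-id> ndsend 2001:db8::1 qg-4f9e30eb-f6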

There are comments in yesterday's meeting that it's the new router's job to 
send out RA and there is no need for neighbor discovery. But we didn't get 
enough time to finish the discussion.

Can you comment your thoughts about how to solve this problem in this thread, 
please?

[1] https://bugs.launchpad.net/neutron/+bug/1357068

[2] https://review.openstack.org/#/c/114437/

[3] http://manpages.ubuntu.com/manpages/oneiric/man8/ndsend.8.html

Thanks,
Xu Han
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-11 Thread Robert Li (baoli)
Gary,

Cisco is adding it to our CI testbed. I guess that mlnx is doing the same for 
their MD as well.

—Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there 
any plans regarding PCI support? I understand that this is something that requires 
specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sr-iov] Tomorrow's IRC meeting

2014-08-11 Thread Robert Li (baoli)
Hi,

I won’t be able to make it tomorrow. Please feel free having the meeting 
without me.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][SR-IOV]: RE: ML2 mechanism driver for SR-IOV capable NIC based switching, ...

2014-07-24 Thread Robert Li (baoli)
Hi Kyle, 

Sorry I missed your queries on the IRC channel today. I was thinking about
this whole BP. After chatting with Irena this morning, I think that I
understand what this BP is trying to achieve overall. I also had a chat
with Sandhya afterwards. I'd like to discuss a few things here:
  
  - Sandhya's MD is going to support Cisco's VMFEX. Overall, her code's
structure would look very similar to Irena's patch in part 1.
However, she cannot simply inherit from SriovNicSwitchMechanismDriver. The
differences for her code are: 1) get_vif_details() would populate
profileid (rather than vlanid), 2) she'd need to do VMFEX-specific
processing in try_to_bind(). We're thinking that with a little
generalization, SriovNicSwitchMechanismDriver() (with a changed name such
as SriovMechanismDriver()) can be used both for NIC switch and VMFEX. In
terms of class hierarchy, it would look like:
SriovMechanismDriver
    SriovNicSwitchMechanismDriver
    SriovQBRMechanismDriver
    SriovCiscoVmfexMechanismDriver

Code duplication would be reduced significantly. The change would be:
   - make get_vif_details an abstract method in SriovMechanismDriver
   - make an abstract method to perform the specific bind action required
by a particular adaptor indicated in the PCI vendor info
   - vif type and agent type should be set based on the PCI vendor info

A little change to patch part 1 would achieve the above; a rough sketch follows.
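
A minimal sketch of that hierarchy (method names are taken loosely from this
discussion; the bodies are illustrative and not the actual patches):

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class SriovMechanismDriver(object):
    # Common SR-IOV binding flow; adaptor-specific parts are abstract.

    @abc.abstractmethod
    def get_vif_details(self, segment):
        # e.g. a vlan id for NIC switch, a profileid for VMFEX
        pass

    @abc.abstractmethod
    def bind_for_adaptor(self, context, segment):
        # adaptor-specific part of try_to_bind(), selected by the
        # PCI vendor info carried in the port
        pass


class SriovNicSwitchMechanismDriver(SriovMechanismDriver):
    def get_vif_details(self, segment):
        return {'vlan': segment.get('segmentation_id')}

    def bind_for_adaptor(self, context, segment):
        pass  # NIC-switch specific binding


class SriovCiscoVmfexMechanismDriver(SriovMechanismDriver):
    def get_vif_details(self, segment):
        return {'profileid': 'illustrative-port-profile'}

    def bind_for_adaptor(self, context, segment):
        pass  # VMFEX specific binding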

  - Originally I thought that an SR-IOV port's status would depend on
the Sriov Agent (patch part 2). After chatting with Irena, this is not the
case. So all the SR-IOV ports will be active once created or bound
according to the try_to_bind() method. In addition, the current Sriov
Agent (patch part 2) only supports port admin status change for the mlnx
adaptor. I think these caveats need to be spelled out explicitly to avoid
any confusion or misunderstanding, at least in the documentation.

  - Sandhya has planned to support both Intel and VMFEX in her MD. This
requires a hybrid SR-IOV mech driver that populates vif details based on
the PCI vendor info in the port. Another way to do this is to run two MDs
at the same time, one supporting Intel, the other VMFEX. This would work
well with the above classes. But it requires changing the two config
options (in Irena's patch part one) so that per-MD config options can be
specified. I'm not sure if this is practical in a real deployment (meaning
use of SR-IOV adaptors from different vendors in the same deployment), but
I think it's doable within the existing ml2 framework.

We'll go over the above in the next SR-IOV IRC meeting as well.

Thanks,
Robert









On 7/24/14, 1:55 PM, Kyle Mestery (Code Review) rev...@openstack.org
wrote:

Kyle Mestery has posted comments on this change.

Change subject: ML2 mechanism driver for SR-IOV capable NIC based
switching, Part 2
..


Patch Set 3: Code-Review+2 Workflow+1

I believe Irena has answered all of Robert's questions. Any subsequent
issues can be handled as a followup.

-- 
To view, visit https://review.openstack.org/107651
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I533ccee067935326d5837f90ba321a962e8dc2a6
Gerrit-PatchSet: 3
Gerrit-Project: openstack/neutron
Gerrit-Branch: master
Gerrit-Owner: Berezovsky Irena ire...@mellanox.com
Gerrit-Reviewer: Akihiro Motoki mot...@da.jp.nec.com
Gerrit-Reviewer: Arista Testing arista-openstack-t...@aristanetworks.com
Gerrit-Reviewer: Baodong (Robert) Li ba...@cisco.com
Gerrit-Reviewer: Berezovsky Irena ire...@mellanox.com
Gerrit-Reviewer: Big Switch CI openstack...@bigswitch.com
Gerrit-Reviewer: Brocade CI openstack_ger...@brocade.com
Gerrit-Reviewer: Brocade OSS CI dl-grp-vyatta-...@brocade.com
Gerrit-Reviewer: Cisco Neutron CI cisco-openstack-neutron...@cisco.com
Gerrit-Reviewer: Freescale CI fslo...@freescale.com
Gerrit-Reviewer: Hyper-V CI hyper-v...@microsoft.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Kyle Mestery mest...@mestery.com
Gerrit-Reviewer: Mellanox External Testing
mlnx-openstack...@dev.mellanox.co.il
Gerrit-Reviewer: Metaplugin CI Test metaplugint...@gmail.com
Gerrit-Reviewer: Midokura CI Bot lu...@midokura.com
Gerrit-Reviewer: NEC OpenStack CI nec-openstack...@iaas.jp.nec.com
Gerrit-Reviewer: Neutron Ryu ryu-openstack-rev...@lists.sourceforge.net
Gerrit-Reviewer: One Convergence CI oc-neutron-t...@oneconvergence.com
Gerrit-Reviewer: PLUMgrid CI plumgrid-ci...@plumgrid.com
Gerrit-Reviewer: Tail-f NCS Jenkins to...@tail-f.com
Gerrit-Reviewer: vArmour CI Test openstack-ci-t...@varmour.com
Gerrit-HasComments: No


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sriov] today's IRC meeting

2014-07-15 Thread Robert Li (baoli)
Hi,

I need to pick up my son at 9:00. It’s a short trip, so I will be about 15 
minutes late.

Status-wise, if everything goes well, the patches should be up in a couple of 
days. One of the challenges is that, due to dividing them up, some unit tests 
fail because of missing modules, and that took time to fix. Another challenge 
is that the code repos for upstreaming and functional testing are separate, 
with the latter combining all the changes plus fake test code (due to the 
missing neutron parts). Keeping them in sync is not fun, and rebasing is 
another issue too.

I checked the logs for the past two weeks. Yongli indicated that VFs on an 
Intel card can be brought up/down individually from the host. I’d like to hear 
more details about it.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] patch that depends on multiple existing patches under review

2014-07-15 Thread Robert Li (baoli)
Hi,

I was working on the last patch that I’d planned to submit for SR-IOV. It 
turned out this patch would depend on multiple existing patches. “git review 
-d” seems to support one dependency only. Do let me know how we can 
create a patch that depends on multiple existing patches under review. 
Otherwise, I would have to collapse all of them and submit a single patch 
instead.


thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] patch that depends on multiple existing patches under review

2014-07-15 Thread Robert Li (baoli)
Thanks, Russell, for the quick response. I'll give rearranging the
dependencies a try.
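
For the record, a sketch of that workflow with git-review (the change numbers
are hypothetical):

git review -d 222222   # download change B, which already has A as its parent
git review -x 333333   # cherry-pick my change C on top of B
git review             # re-upload C with B as its new parent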

—Robert

On 7/15/14, 3:26 PM, Russell Bryant rbry...@redhat.com wrote:

On 07/15/2014 03:12 PM, Robert Li (baoli) wrote:
 Hi,
 
 I was working on the last patch that I'd planned to submit for SR-IOV.
 It turned out this patch would depend on multiple existing patches. "git
 review -d" seems to support one dependency only. Do let me know
 how we can create a patch that depends on multiple existing patches
 under review. Otherwise, I would have to collapse all of them and submit
 a single patch instead.

Ideally this whole set of patches would be coordinated into a single
dependency chain.  If A and B are currently independent, but your new
patch (C) depends on both of them, I would rebase so that your patch
depends on B, and B depends on A.  Each patch can only have one parent.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Neutron IPv6 in Icehouse and further

2014-06-30 Thread Robert Li (baoli)
Hi,

There is a patch for radvd https://review.openstack.org/#/c/102648/2 that you 
can use in addition to the devstack patch. You want to make sure that IPv6 is 
enabled and RAs are accepted in your VM’s image. Both patches are under 
development.

To use DHCPv6, the current DHCP agent should work. However, it might be broken 
due to a recent commit. If you find a traceback in your DHCP agent log 
complaining about an uninitialized reference to the variable ‘mode’, you may 
have hit that issue; you can just initialize it to ‘static’. In addition, as of 
recently, DHCP messages may be dropped before entering the VM. Therefore, you 
might need to disable the IPv6 rules manually with “ip6tables -F” after a VM is 
launched. Also make sure you are using the latest dnsmasq (2.68).

Thanks,
Robert

On 6/27/14, 3:47 AM, Jaume Devesa devv...@gmail.com wrote:

Hello Maksym,

last week I had more or less the same questions as you, and I investigated a 
little bit... Currently we have the ipv6_ra_mode and ipv6_address_mode in the 
subnet entity. The way you combine these two values determines how, and by 
whom, your VM's IPv6 addresses will be configured. Not all combinations are 
possible. This document [1] and the upstream-slaac-support spec [2] provide the 
possible combinations. Not sure which one is more up to date...

If you want to try the current IPv6 support, you can use Baodong Li's devstack 
patch [3], although it is still in development. Follow the commit message 
instructions to provide a radvd daemon. That means that there is no RA 
advertiser in Neutron currently. There is a spec in review [4] to fill this gap.

The changes to allow DHCPv6 in dnsmasq are in review in this patch [5].

This is what I found... I hope some IPv6 folks can correct me if this 
information is not accurate enough (or wrong)


[1]: https://www.dropbox.com/s/9bojvv9vywsz8sd/IPv6%20Two%20Modes%20v3.0.pdf
[2]: 
http://docs-draft.openstack.org/43/88043/9/gate/gate-neutron-specs-docs/82c251a/doc/build/html/specs/juno/ipv6-provider-nets-slaac.html
[3]: https://review.openstack.org/#/c/87987
[4]: https://review.openstack.org/#/c/101306/
[5]: https://review.openstack.org/#/c/70649/



On 27 June 2014 00:51, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
Hi! I'm waiting for that too...

Currently, I'm running IceHouse with static IPv6 address, with the topology 
VLAN Provider Networks and, to make it easier, I'm counting on the following 
blueprint:

https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac

...but, I'm not sure if it will be enough to enable basic IPv6 support (without 
using Neutron as Instance's default gateway)...

Cheers!
Thiago


On 26 June 2014 19:35, Maksym Lobur mlo...@mirantis.com wrote:
Hi Folks,

Could you please tell me the current state of IPv6 in Neutron? Does it 
have DHCPv6 working?

What is the best point to start hacking from? Devstack stable/icehouse or maybe 
some tag? Are there any docs / raw deployment guides?
I see some patches not landed yet [1] ... I assume it won't work without them, 
right?

Somehow I can't open any of the code reviews from the [2] (Not Found)

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:%255E.*%255Cipv6.*,n,z
[2] https://wiki.openstack.org/wiki/Neutron/IPv6

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Jaume Devesa
Software Engineer at Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sriov] weekly meeting for july 1st and july 8th

2014-06-30 Thread Robert Li (baoli)
Hi,

I will be on PTO from Tuesday, and will come back to the office on Wednesday, July 9th. 
Therefore, I won’t be present in the next two SR-IOV weekly meetings. Regarding 
the sr-iov development status, I finally fixed all the failures in the existing 
unit tests. Rob and I are still working on adding new unit test cases in the 
PCI and libvirt driver area. Once that’s done, we should be able to push 
another two patches up.

Thanks,
Robert

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] tomorrow's PCI passthrough IRC meeting

2014-06-16 Thread Robert Li (baoli)
Hi,

I’m taking tomorrow off, and therefore I won’t be present in the IRC meeting.

We made a lot of progress last week. We’ve got the first +2 for our spec, and 
therefore it’s a big step forward toward getting approval. A lot of progress on 
the coding front as well, and more patches will be coming for review soon.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] need core reviewers for https://review.openstack.org/#/c/81954/

2014-06-12 Thread Robert Li (baoli)
Hi,

The SR-IOV work depends on this fix. It has got +1’s for quite some time, and 
need core reviewers to review and approve.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ipv6] ipv6 dual stack

2014-06-11 Thread Robert Li (baoli)
Hi,

I added IPv6 support in devstack: https://review.openstack.org/#/c/87987/. This 
is a WIP patch, given that neutron IPv6 is not fully implemented yet. With this 
script, a dual-stack data network can be created with neutron as well. The only 
thing that needs to be done manually is starting the RA service. If you want to 
start a dual stack, just set IP_VERSION=4+6 in your localrc. The script uses 
existing neutron commands, and invokes Linux IP utilities to properly set up 
the router namespace. With the right version of dnsmasq (I’m using 2.68) in 
use, it will be successfully launched and will hand out both IPv6 and IPv4 
addresses. An example dnsmasq instance is shown below:


dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces 
--interface=tap4f9e30eb-f6 --except-interface=lo 
--pid-file=/opt/stack/data/neutron/dhcp/c5eb1f36-0c70-4658-8201-8407752212b1/pid
 
--dhcp-hostsfile=/opt/stack/data/neutron/dhcp/c5eb1f36-0c70-4658-8201-8407752212b1/host
 
--addn-hosts=/opt/stack/data/neutron/dhcp/c5eb1f36-0c70-4658-8201-8407752212b1/addn_hosts
 
--dhcp-optsfile=/opt/stack/data/neutron/dhcp/c5eb1f36-0c70-4658-8201-8407752212b1/opts
 --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,86400s 
--dhcp-range=set:tag1,2001:420:2c50:200b::,static,86400s 
--dhcp-lease-max=16777216 --conf-file= --domain=openstacklocal

This is achieved without making any changes in the neutron dhcp service.

Make sure that your VM image has the DHCPv6 client enabled on the port. This 
can easily be achieved with an Ubuntu image; for example, add “iface eth0 inet6 
dhcp” in the /etc/network/interfaces file.
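
A minimal dual-stack stanza might look like this (a sketch, assuming the 
guest's interface is eth0):

auto eth0
iface eth0 inet dhcp
iface eth0 inet6 dhcp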

You can check the commit message in https://review.openstack.org/#/c/87987/ for 
more details.

Note that there seems to be a bug in the neutron ip6tables rules that prevents 
DHCPv6 packets from coming in to the VM. The bug seems to have been introduced 
recently. If you see IPv4 but not IPv6 addresses in your VM, you can flush the 
ip6tables rules and change the status of the port in the VM.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][pci] A couple of questions

2014-06-10 Thread Robert Li (baoli)
Hi Yunhong  Yongli,

In the routine _prepare_pci_devices_for_use(), it refers to 
dev[‘hypervisor_name’]. I didn’t see code that sets it up, nor does the libvirt 
nodedev XML include hypervisor_name. Is this specific to Xen?

Another question is about the issue that was raised in this review: 
https://review.openstack.org/#/c/82206/. It’s about the use of node id or host 
name in the PCI device table. I’d like to know you guys’ thoughts on that.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] SR-IOV nova-specs

2014-05-30 Thread Robert Li (baoli)
John, thanks for the review. I'm going to clarify the things you mentioned
in your comments, and upload a new version soon.

thanks,
Robert

On 5/30/14, 12:35 PM, John Garbutt j...@johngarbutt.com wrote:

Hey,

-2 has been removed, feel free to ping me in IRC if you need quicker
turn around, been traveling last few days.

Thanks,
John

On 27 May 2014 19:21, Robert Li (baoli) ba...@cisco.com wrote:
 Hi John,

 Now that we have agreement during the summit on how to proceed in order
 to get it into Juno, please take a look at this:

 https://review.openstack.org/#/c/86606/16

 Please let us know your comments or what is still missing. I'm also not
 sure if your -2 needs to be removed before the other cores will take a
 look at it.

 thanks,
 Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] SR-IOV nova-specs

2014-05-27 Thread Robert Li (baoli)
Hi John,

Now that we have agreement during the summit on how to proceed in order to get 
it into Juno, please take a look at this:

https://review.openstack.org/#/c/86606/16

Please let us know your comments or what is still missing. I’m also not sure if 
your -2 needs to be removed before the other cores will take a look at it.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Privacy extension

2014-05-16 Thread Robert Li (baoli)
Dane put some notes on the session’s etherpad about supporting multiple 
prefixes. Seems like this is really something that everyone wants to support in 
OpenStack.

—Robert

On 5/16/14, 2:23 PM, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

Precisely Anthony! We talked about this topic (Non-NAT Floating IPv6) here, 
on the following thread:

--
[openstack-dev] [Neutron][IPv6] Idea: Floating IPv6 - Without any kind of NAT:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026871.html
--

:-D

About IPv6 Privacy Extensions, well, if it is too hard to implement, I think 
that it can be postponed... And only the IPv6 addresses self-generated by SLAAC 
and previously calculated by Neutron itself (based on the instance's MAC 
address) should be allowed to pass/work for now...

-
 Thiago


On 16 May 2014 12:12, Veiga, Anthony anthony_ve...@cable.comcast.com wrote:
I’ll take this one a step further.  I think one of the methods for getting 
(non-NAT) floating IPs in IPv6 would be to push a new, extra address to the 
same port.  Either by crafting an extra, unicast RA to the specific VM or 
providing multiple IA_NA fields in the DHCPv6 transaction.  This would require 
multiple addresses to be allowed on a single MAC.
-Anthony

From: Martinx - ジェームズ thiagocmarti...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, May 15, 2014 at 14:18
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][IPv6] Privacy extension

Hello!

I agree that there is no need for Privacy Extensions in a cloud environment, 
since the MAC addresses are fake... No big deal...

Nevertheless, I think it would be nice to allow one instance to have more than 
one IPv6 address, since IPv6 is (almost) virtually unlimited... This way, a VM 
with, for example, a range of IPv6 addresses can provide a shared host 
environment where each website has its own IPv6 address (I prefer to use 
IP-based virtual hosts on Apache, instead of name-based)...

Cheers!
Thiago


On 15 May 2014 14:22, Ian Wells ijw.ubu...@cack.org.uk wrote:
I was just about to respond to that in the session when we ran out of time.  I 
would vote for simply insisting that VMs run without the privacy extension 
enabled, and only permitting the expected IPv6 address based on the MAC.  Its 
primary purpose is to conceal your MAC address so that your IP address can't be 
used to track you, as I understand it, and I don't think that's as relevant in 
a cloud environment where the MAC addresses are basically fake.  Someone 
interested in desktop virtualisation with OpenStack may wish to contradict me...
--
Ian.


On 15 May 2014 09:30, Shixiong Shang sparkofwisdom.cl...@gmail.com wrote:
Hi, guys:

Nice to meet all of you in the technical session and design session. I 
mentioned the challenge of the privacy extension in the meeting, but would like 
to hear your opinions on how to address the problem. If you have any comments 
or suggestions, please let me know. I will create a BP for this problem.

Thanks!

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!












Re: [openstack-dev] Informal meeting before SR-IOV summit presentation

2014-05-09 Thread Robert Li (baoli)
It sounds good to me.

Thanks Sandhya for organizing it.

--Robert

On 5/9/14, 2:51 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Thanks for all your replies.

Thanks for the great inputs on how to frame the discussion in the etherpad
so it becomes easier for people to get on board. We will add author idents
to track the source of the changes. Will work on cleaning that up.

Regarding the session itself, as you probably know, there was an attempt
in Icehouse to get the sr-iov work going. We found that the time allotted
for the session was not sufficient to get to all the use cases and discuss
alternate views. 

This time around we want to be better prepared and so would like to keep
only a couple of open times for the actual session. Hence, the request for
the early meeting.

How does Monday 1pm sound?

Thanks,
Sandhya

On 5/9/14 11:44 AM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -
 From: Robert Li (baoli) ba...@cisco.com
 Subject: Re: Informal meeting before SR-IOV summit presentation
 
 This is the one that Irena created:
 https://etherpad.openstack.org/p/pci_passthrough_cross_project

Thanks, I missed this as it wasn't linked from the design summit Wiki
page.

-Steve

 On 5/8/14, 4:33 PM, Steve Gordon sgor...@redhat.com wrote:
 
 - Original Message -
   It would be nice to have an informal discussion / unconference
session
   before the actual summit session on SR-IOV. During the previous
IRC
   meeting, we were really close to identifying the different use
cases.
   There was a dangling discussion on introducing another level of
   indirection between the vnic_types exposed via the nova boot API
and
 how
   it would be represented internally. It would be ideal to have
these 2
   discussions converged before the summit session.
  
  What would be the purpose of doing that before the session? IMHO, a
  large part of being able to solve this problem is getting everyone
up to
  speed on what this means, what the caveats are, and what we're
trying to
  solve. If we do some of that outside the scope of the larger
audience, I
  expect we'll get less interaction (or end up covering it again) in
the
  session.
  
  That said, if there's something I'm missing that needs to be
resolved
  ahead of time, then that's fine, but I expect the best plan is to
just
  keep the discussion to the session. Afterwards, additional things
can be
  discussed in a one-off manner, but getting everyone on the same page
is
  largely the point of having a session in the first place IMHO.
 
 Right, in spite of my previous response...looking at the etherpad
there
 is nothing there to frame the discussion at the moment:
 
 https://etherpad.openstack.org/p/juno-nova-sriov-support
 
 I think populating this should be a priority rather than organizing
 another session/meeting?
 
 Steve
 
 

-- 
Steve Gordon, RHCE
Product Manager, Red Hat Enterprise Linux OpenStack Platform
Red Hat Canada (Toronto, Ontario)





Re: [openstack-dev] [Neutron][IPv6] Neutron Routers and LLAs

2014-05-08 Thread Robert Li (baoli)
Hi Xuhan,

I agree that such a subnet shouldn't be allowed to be added to a neutron router.

However, I have some reservations about creating a subnet with an external LLA 
gateway address. First of all, it seems that the sole purpose of providing the 
gateway IP is to install an RA rule to permit RAs from that gateway. Secondly, 
what if the gateway IP needs to be changed? Would that incur updates in the 
neutron subnets that refer to it? I think that we need a better strategy 
against RA spoofing. Currently, rogue RAs are dropped at the receiving end. 
Would it be better to stop them at the source and to allow RAs to be SENT from 
legitimate sources only?

thanks,
Robert



On 4/25/14, 5:46 AM, Xuhan Peng pengxu...@gmail.com wrote:

Sean and Robert,

Sorry for replying this late, but after giving this a second thought, I think 
it makes sense not to allow a subnet with an LLA gateway IP address to be 
attached to a neutron router, for the following reasons:

1. A subnet with an LLA gateway address specified is only used to receive RAs 
from the provider router. I cannot think of any other use case in which a user 
would want to specify an LLA gateway address for a Neutron subnet.

2. Attaching a subnet with an LLA gateway (or any address outside the subnet's 
CIDR) will cause the subnet gateway port (qr-) to end up with two LLAs (or 
an extra address outside the subnet's CIDR). This will confuse dnsmasq about 
which address to bind to.

3. To allow RAs to be sent from dnsmasq on the gateway port, we can use the ip 
command to get the LLA. Currently I use a calculation method to get the source 
address (see the sketch below), but I will improve it to use the ip command to 
make sure the source IP is right.
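
For reference, the EUI-64 style calculation looks roughly like this (a sketch
of the derivation only; the actual patch may compute it differently or use a
library):

def lla_from_mac(mac):
    # Derive the fe80:: link-local address from a 48-bit MAC via EUI-64.
    # (Illustrative; output is not compressed to canonical IPv6 form.)
    octets = [int(b, 16) for b in mac.split(':')]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + [0xff, 0xfe] + octets[3:]
    groups = ['%02x%02x' % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return 'fe80::' + ':'.join(groups)

print(lla_from_mac('fa:16:3e:12:34:56'))  # fe80::f816:3eff:fe12:3456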

Thoughts? If we all agree, I will open a bug to disallow a subnet whose gateway 
is outside its CIDR from being attached to a router.

Xuhan


On Wed, Mar 26, 2014 at 9:52 PM, Robert Li (baoli) ba...@cisco.com wrote:
Hi Sean,

Unless I have missed something, this is my thinking:
  -- I understand that the goal is to allow RAs from designated sources
only.
  -- initially, xuhanp posted a diff for
https://review.openstack.org/#/c/72252. And my comment was that a subnet
created with a gateway IP not on the same subnet can't be added to the
neutron router.
  -- as a result, https://review.openstack.org/#/c/76125/ was posted to
address that issue. With that diff, LLA would be allowed. But a
consequence of that is a gateway port would end up having two LLAs: one
that is automatically generated, the other from the subnet gateway IP.
  -- with xuhanp's new diff for https://review.openstack.org/#/c/72252, if
openstack native RA is enabled, then the automatically generated LLA will
be used; and if it's not enabled, it will use the external gateway's LLA.
And the diff seems to indicate this LLA comes from the subnet's gateway
IP.
  -- Therefore, the change of https://review.openstack.org/#/c/76125/
seems to be able to add the gateway IP as an external gateway.
  -- Thus, my question is: should such a subnet be allowed to be added to a
router? And if it should, what would the semantics be? If not, a proper
error should be provided to the user. I'm also trying to figure out the
reason that such a subnet needs to be created in neutron (other than
creating L2 ports for VMs).

-- Another thought is that if the RA is coming from the provider net, then
the provider net should have installed mechanisms to prevent rogue RAs
from entering the network. There are a few RFCs that address the rogue RA
issue.

see inline as well.

I hope that I didn't confuse you guys.

Thanks,
Robert


On 3/25/14 2:18 PM, Collins, Sean 
sean_colli...@cable.comcast.commailto:sean_colli...@cable.comcast.com
wrote:

During the review[0] of the patch that only allows RAs from known
addresses, Robert Li brought up a bug in Neutron, where a
IPv6 Subnet could be created, with a link local address for the gateway,
that would fail to create the Neutron router because the IP address that
the router's port would be assigned, was a link local
address that was not on the subnet.

This may or may not have been run before the force_gateway_on_subnet flag was
introduced. Robert - if you can give us what version of Neutron you were
running that would be helpful.

[Robert] I'm using the latest



Here's the full text of what Robert posted in the review, which shows
the bug, which was later filed[1].

 This is what I've tried, creating a subnet with a LLA gateway address:

 neutron subnet-create --ip-version 6 --name myipv6sub --gateway
fe80::2001:1 mynet :::/64

 Created a new subnet:

 +------------------+------------------------------------------+
 | Field            | Value                                    |
 +------------------+------------------------------------------+
 | allocation_pools | {"start": ":::1", "end": "::::::fffe"}  |
 | cidr             | :::/64                                   |
 | dns_nameservers  |                                          |
 | enable_dhcp      | True                                     |
 | gateway_ip       | fe80::2001:1                             |
 | host_routes      |                                          |
 | id               | a1513aa7-fb19-4b87-9ce6-25fd238ce2fb     |
 +------------------+------------------------------------------+

[openstack-dev] SR-IOV summit session

2014-04-30 Thread Robert Li (baoli)
Hi John,

With the summit around the corner, please advise how we should run this 
session: http://summit.openstack.org/cfp/details/248

We are currently working on this nova spec, 
https://review.openstack.org/#/c/86606/.  I guess its content will be a 
candidate to be presented in the session.

Thanks,
Robert



[openstack-dev] [Devstack] [IPv6]

2014-04-16 Thread Robert Li (baoli)
Hi folks,

If you want to use IPv6 with devstack, check this out:
https://review.openstack.org/87987. The commit message has all the details
on how to use it.

thanks,
Robert




Re: [openstack-dev] [nova][pci]PCI SR-IOV use cases initial doc

2014-04-14 Thread Robert Li (baoli)
Hi John,

Sorry for the late response. I was completely tied up with something.

I agree with your comments on the use cases.

Once there are the use cases, given all the Config vs API debates, I
would look at the pure data flow, in a Config/API agnostic way.
Agreeing the info needed from the user, then in the VIF driver, then
in between, etc. We should be able to agree on that, before returning
to the host aggregates API vs something new API vs more config debate.

I have seen your comments with Irenab’s nova-spec. I will try to reply as
well. And let’s go over the use cases outlined in that spec in tomorrow’s
IRC meeting.

Thanks,
Robert



On 4/10/14, 4:40 AM, John Garbutt j...@johngarbutt.com wrote:

Apologies, that came out all wrong...

On 10 April 2014 09:28, John Garbutt j...@johngarbutt.com wrote:
 I think writing this up as a nova-spec is going to make this process
 much easier:
 https://wiki.openstack.org/wiki/Blueprints#Nova

 It will save you having to re-write your document once you want to
 submit a blueprint, and we can all see each others comments in gerrit,
 and more clearly see how things change and evolve. The way the
 template in nova-spec works, it should also help you with structuring
 your argument.

That's just what I would find easier, it's just a suggestion.

 Please don't design assuming a single vendor solution, that is sure to
 get rejected (at least my me) at the blueprint review stage. You might
 want a different vendor in each AZ to isolate you from failures due to
 vendor bugs, if you are digging for a use case.

I guess that's a tenant use case; I got confused reading through those.

 I still can't see a clear description of the tenant use cases, I
 still think that's the key to getting agreement here, and getting
 useful feedback at the summit. Not sure I understand the tables, they
 seem a bit confusing/distracting.

Sorry, forgot to mention, you are making good progress here. But,
given the loop we are going around here, I think agreeing on the ideal
use cases, then looking at the detail, and looping back to see if
everything works is probably the right approach. Other ideas
welcome!

Once there are the use cases, given all the Config vs API debates, I
would look at the pure data flow, in a Config/API agnostic way.
Agreeing the info needed from the user, then in the VIF driver, then
in between, etc. We should be able to agree on that, before returning
to the host aggregates API vs something new API vs more config debate.
Right, it doesn't seem to be clear what is required, so it's hard to
know what the best approach is, compared to other features we already
have in Nova.

At the moment I am struggling to see the whole picture, getting the
general idea clear before the summit would be awesome, so we can
discuss how to stage the implementation, deal with backwards
compatibility, etc.

Thanks,
John

 On 10 April 2014 09:14, yongli he yongli...@intel.com wrote:
 On 2014-04-10 15:59, Irena Berezovsky wrote:

 Hi Robert,

 Thanks a lot the inputs you posted in the doc.

 I have raised there few questions and added use case for High
Availability.

 Another concern I have is regarding the assumption that there is no
case to
 mix different vendor cards in the setup. I think that mixing Cisco and
Intel
 or Mellanox cards does not make sense, but Intel and Mellanox cards can
 coexist. At least for my understanding, but I may be wrong, both Intel
and
 Mellanox take HW VEB (HW embedded switch) approach.

 1. Opening this to the mailing list.
 2. Even if an admin/user won't mix Intel/Cisco/Mellanox cards, that does
 not mean we should disable it, or not give it a chance.
 3. I raised a couple of questions, and I question the aggregate solution;
 see inline comments.

 
https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit

 Yongli He



 Thanks,

 Irena



 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Wednesday, April 09, 2014 11:11 PM
 To: Irena Berezovsky; Sandhya Dasu (sadasu); Robert Kukura; He, Yongli
 (yongli...@intel.com); Itzik Brown; beag...@redhat.com
 Subject: Re: PCI SR-IOV use cases initial doc



 Hi,



 I updated the doc with some of my thoughts.



 Thanks,

 Robert



 On 3/24/14, 8:41 AM, Irena Berezovsky ire...@mellanox.com wrote:



 Hi,

 I have created the initial doc to capture PCI SR-IOV networking use
cases:

 
https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit



 I have updated the agenda for tomorrow meeting to discuss the use
cases.



 Please comment and update



 BR,

 Irena






Re: [openstack-dev] [Neutron][IPv6] Neutron Routers and LLAs

2014-03-26 Thread Robert Li (baoli)
Hi Sean,

Unless I have missed something, this is my thinking:
  -- I understand that the goal is to allow RAs from designated sources
only.
  -- initially, xuhanp posted a diff for
https://review.openstack.org/#/c/72252. And my comment was that a subnet
created with a gateway IP not on the same subnet can't be added to the
neutron router.
  -- as a result, https://review.openstack.org/#/c/76125/ was posted to
address that issue. With that diff, LLA would be allowed. But a
consequence of that is a gateway port would end up having two LLAs: one
that is automatically generated, the other from the subnet gateway IP.
  -- with xuhanp's new diff for https://review.openstack.org/#/c/72252, if
openstack native RA is enabled, then the automatically generated LLA will
be used; and if it's not enabled, it will use the external gateway's LLA.
And the diff seems to indicate this LLA comes from the subnet's gateway
IP.
  -- Therefore, the change of https://review.openstack.org/#/c/76125/
seems to be able to add the gateway IP as an external gateway.
  -- Thus, my question is: should such a subnet be allowed to be added to a
router? And if it should, what would the semantics be? If not, a proper
error should be provided to the user. I'm also trying to figure out the
reason that such a subnet needs to be created in neutron (other than
creating L2 ports for VMs).

-- Another thought is that if the RA is coming from the provider net, then
the provider net should have installed mechanisms to prevent rogue RAs
from entering the network. There are a few RFCs that address the rogue RA
issue. 

see inline as well.

I hope that I didn't confuse you guys.

Thanks,
Robert


On 3/25/14 2:18 PM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

During the review[0] of the patch that only allows RAs from known
addresses, Robert Li brought up a bug in Neutron, where a
IPv6 Subnet could be created, with a link local address for the gateway,
that would fail to create the Neutron router because the IP address that
the router's port would be assigned, was a link local
address that was not on the subnet.

This may or may not have been run before the force_gateway_on_subnet flag was
introduced. Robert - if you can give us what version of Neutron you were
running that would be helpful.

[Robert] I'm using the latest



Here's the full text of what Robert posted in the review, which shows
the bug, which was later filed[1].

 This is what I've tried, creating a subnet with a LLA gateway address:
 
 neutron subnet-create --ip-version 6 --name myipv6sub --gateway
fe80::2001:1 mynet :::/64

 Created a new subnet:
 
 +------------------+------------------------------------------+
 | Field            | Value                                    |
 +------------------+------------------------------------------+
 | allocation_pools | {"start": ":::1", "end": "::::::fffe"}  |
 | cidr             | :::/64                                   |
 | dns_nameservers  |                                          |
 | enable_dhcp      | True                                     |
 | gateway_ip       | fe80::2001:1                             |
 | host_routes      |                                          |
 | id               | a1513aa7-fb19-4b87-9ce6-25fd238ce2fb     |
 | ip_version       | 6                                        |
 | name             | myipv6sub                                |
 | network_id       | 9c25c905-da45-4f97-b394-7299ec586cff     |
 | tenant_id        | fa96d90f267b4a93a5198c46fc13abd9         |
 +------------------+------------------------------------------+
 
 openstack@devstack-16:~/devstack$ neutron router-list

 
 +--------------------------------------+---------+------------------------------------------------------------------------------+
 | id                                   | name    | external_gateway_info                                                        |
 +--------------------------------------+---------+------------------------------------------------------------------------------+
 | 7cf084b4-fafd-4da2-9b15-0d25a3e27e67 | router1 | {"network_id": "02673c3c-35c3-40a9-a5c2-9e5c093aca48", "enable_snat": true} |
 +--------------------------------------+---------+------------------------------------------------------------------------------+

 openstack@devstack-16:~/devstack$ neutron router-interface-add
7cf084b4-fafd-4da2-9b15-0d25a3e27e67 myipv6sub

 400-{u'NeutronError': {u'message': u'Invalid input for operation: IP
address fe80::2001:1 is not a valid IP for the defined subnet.',
u'type': u'InvalidInput', u'detail': u''}}


During last week's meeting, we had a bit of confusion near the end of the
meeting[2] about the following bug, and the fix[3].

If I am not mistaken - the fix is so that when you create a v6 Subnet
with a link local address, then create a Neutron router to serve as the
gateway for that subnet - the operation will successfully complete and a
router will be created.

We may need to take a look at the code that create a router - to ensure
that only one gateway port is created, and that the link local address
from the subnet's 'gateway' attribute is used as the address.

[Robert] We are discussing what's going to happen when such a subnet is
added to a router. The neutron router may already exist.



This is at least my understanding of the problem as it 

Re: [openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-20 Thread Robert Li (baoli)
Hi Yongli,

I'm very glad that you brought this up and revived our discussion on PCI
passthrough and its application to networking. The use case you brought up
is:

   a user wants a FASTER NIC from INTEL to join a virtual
network.

By FASTER, I guess that you mean that the user is allowed to select a
particular vNIC card. Therefore, the above statement can be translated
into the following requests for a PCI device:
. Intel vNIC
. 1G or 10G or ?
. network to join

First of all, I'm not sure that in a cloud environment a user would care about
the vendor or card type. 1G or 10G doesn't have anything to do with the
bandwidth a user would get. But I guess a cloud provider may have an
incentive to do so for other reasons, and want to provide its users with
such a choice. In any case, let's assume it's a valid use case.

With the initial PCI group proposal, we have one tag and you can tag the
Intel device with its group name, for example, Intel_1G_phy1,
Intel_10G_phy1. When requesting a particular device, the user can say:
pci_group=Intel_1G_phy1, or pci_group=Intel_10G_phy1, or, if the user
doesn't care about 1G or 10G, pci_group=Intel_1G_phy1 OR Intel_10G_phy1.

I would also think that it's possible to have two tags on a networking
device with the above use case in mind: a group tag, and a network tag.
For example, a device can be tagged with pci_group=Intel_1G,
network=phy1. When requesting a networking device, the network tag can
be derived from the nic that's being requested.

As you can see, an admin defines the devices once on the compute nodes,
and doesn't need to do anything on top of that. It's simple and easy to
use.
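
To make the matching concrete, here is a toy sketch of how such tags could be
matched against a request (the device dicts and the find_devices helper are
illustrative only, not nova's actual whitelist schema):

def find_devices(devices, requested_groups, network):
    # Return devices tagged with any requested group on the given network.
    return [d for d in devices
            if d.get('pci_group') in requested_groups
            and d.get('network') == network]

devices = [
    {'address': '0000:0a:00.1', 'pci_group': 'Intel_1G', 'network': 'phy1'},
    {'address': '0000:0a:00.2', 'pci_group': 'Intel_10G', 'network': 'phy1'},
]
# The "Intel_1G_phy1 OR Intel_10G_phy1" style request:
print(find_devices(devices, {'Intel_1G', 'Intel_10G'}, 'phy1'))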

My initial comments on the flavor/extra-info based solution were about the
PCI stats management and scheduling. Your latest patch seems to have
answered some of my original questions. However, your implementation seems
to deviate from (or, I should say, to have clarified) the original proposal
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit,
which doesn't provide a detailed explanation of those points.

Here, let me extract the comment I provided to this patch
https://review.openstack.org/#/c/63267/:

'''
I'd like to take an analogy with a database table. Assume a device table
with columns for device properties (such as product_id, etc.), designated as
P, and extra attributes, designated as E. So it would look something like
T-columns = (P1, P2, ..., E1, E2, ...).
A pci_flavor_attrs is a subset of T-columns. With that, the entire device
table will be REDUCED to a smaller stats pool table. For example, if
pci_flavor_attrs is (P1, P2, E1, E2), then the stats pool table will look
like: S-columns = (P1, P2, E1, E2, COUNT). In the worst case, S-columns =
T-columns, although a well-educated admin wouldn't do that.
Therefore, requesting a PCI device is like doing a DB search based on the
stats pool table. And the search criteria are a combination of the
S-columns (for example, by way of a nova flavor).
The admin can decide to define any extra attributes, and devices may be
tagged with different extra attributes. It's possible that many extra
attributes are defined, but some devices may be tagged with one. However,
all the extra attributes have to have corresponding columns in the stats
pool table.
I can see there are many ways to use such an interface. It also means it
could easily lead to misuse. An admin may define a lot of attributes,
later he may find it's not enough based on how he used it, and adding new
attributes or deleting attributes may not be a fun thing at all (due to
the fixed pci_flavor_attrs configuration), let alone how to do that in a
working cloud.
'''
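
To illustrate that reduction concretely, here is a toy sketch (the attribute
names and device dicts below are made up for illustration, not nova's internal
representation):

from collections import Counter

pci_flavor_attrs = ('vendor_id', 'product_id', 'E1')

devices = [
    {'vendor_id': '8086', 'product_id': '10fb', 'E1': 'phy1', 'E2': 'a'},
    {'vendor_id': '8086', 'product_id': '10fb', 'E1': 'phy1', 'E2': 'b'},
    {'vendor_id': '8086', 'product_id': '10fb', 'E1': 'phy2', 'E2': 'a'},
]

# Each stats pool row is (P1, ..., E1, ..., COUNT): the device table
# REDUCED to the columns named in pci_flavor_attrs.
pools = Counter(tuple(d.get(attr) for attr in pci_flavor_attrs)
                for d in devices)
for key, count in pools.items():
    print(dict(zip(pci_flavor_attrs, key)), 'COUNT =', count)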

Imagine a cloud that supports PCI passthrough for various classes of PCI
cards (by class, I mean the Linux PCI device class). Examples are video,
crypto, networking, storage, etc. pci_flavor_attrs needs to be defined
on EVERY node, and has to accommodate attributes from ALL of these classes
of cards, even though an attribute for one class of cards may not be
applicable to other classes. Meanwhile, the stats groups are keyed
on pci_flavor_attrs, and PCI flavors can be defined with any attributes
from pci_flavor_attrs. Thus, it really lacks the level of abstraction that
clearly defines the usage and semantics. It's up to a well-educated admin
to use it properly, and it's not easy to manage. Therefore, I believe it
requires further work.

I think that practical use cases would really help us find the right
solution, and provide the optimal interface to the admin/user. So let's
keep the discussion going.

thanks,
Robert

On 3/20/14 4:22 AM, yongli he yongli...@intel.com wrote:

Hi, all

With Juno, the PCI discussion is open again: group-based vs.
flavor/extra-information based solutions. There is a use case which the
group-based solution cannot support well.

Please consider this, and choose the flavor/extra-information based
solution.


The group's problems:

I: exposes many details

Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-10 Thread Robert Li (baoli)
Hi Akihiro,

See inline for a question…

Thanks,
Robert

On 3/7/14 2:02 PM, Akihiro Motoki amot...@gmail.com wrote:

Hi Robert,

Thanks for the clarification. I understand the motivation.

I think the problem can be split into two categories:
(a) user configurable rules vs infra enforced rule, and
(b) DHCP/RA service exists inside or outside of Neutron

Regarding (a), I believe DHCP or RA related rules is better to be handled
by the infra side because it is required to ensure DHCP/RA works well.
I don't think it is a good idea to delegate users to configure rule to
allow them.
It works as long as DHCP/RA service works inside OpenStack.
This is the main motivation of my previous question.

On the other hand, there is no way to cooperate with DHCP/RA
services outside of OpenStack at now. This blocks the usecase in your
mind.
It is true that the current Neutron cannot works with dhcp server
outside of neutron.

I'd appreciate it if you could explain the above in more detail. I'd like to
understand what has caused the limitation.
thanks.


I agree that adding a security group rule to allow RA is reasonable as
a workaround.
However, for a long-term solution, it is better to explore a way to
configure infra-required rules.

Thanks,
Akihiro


On Sat, Mar 8, 2014 at 12:50 AM, Robert Li (baoli) ba...@cisco.com
wrote:
 Hi Akihiro,

 In the case of IPv6 RA, its source IP is a Link Local Address from the
 router's RA advertising interface. This LLA address is automatically
 generated and not saved in the neutron port DB. We are exploring the
idea
 of retrieving this LLA if a native openstack RA service is running on
the
 subnet.

 Would SG be needed with a provider net in which the RA service is
running
 external to openstack?

 In the case of IPv4 DHCP, the dhcp port is created by the dhcp service,
 and the dhcp server ip address is retrieved from this dhcp port. If the
 dhcp server is running outside of openstack, and if we'd only allow dhcp
 packets from this server, how is it done now?

 thanks,
 Robert

 On 3/7/14 12:00 AM, Akihiro Motoki amot...@gmail.com wrote:

I wonder why RA needs to be exposed by the security group API.
Does a user need to configure a security group to allow IPv6 RA, or
should it be allowed on the infra side?

In the current implementation DHCP packets are allowed by a provider
rule (which is hardcoded in neutron code now).
I think the role of IPv6 RA is similar to that of DHCP in IPv4. If so, we
don't need to expose RA in the security group API.
Am I missing something?

Thanks,
Akihiro

On Mon, Mar 3, 2014 at 10:39 PM, Xuhan Peng pengxu...@gmail.com wrote:
 I created a new blueprint [1], triggered by the requirement to allow an
 IPv6 Router Advertisement security group rule on the compute node in my
 ongoing code review [2].

 Currently, only the security group rule direction, protocol, ethertype and
 port range are supported by the neutron security group rule data structure.
 To allow Router Advertisements coming from the network node or a provider
 network to a VM on a compute node, we need to specify the ICMP type to only
 allow RAs from known hosts (the network node's dnsmasq-bound IP or a known
 provider gateway).

 To implement this and make the implementation extensible, maybe we can add
 an additional table named SecurityGroupRuleData with Key, Value and ID in
 it. For the ICMP type RA filter, we can add key=icmp-type, value=134, and
 the security group rule to the table. When other ICMP type filters are
 needed, similar records can be stored. This table can also be used for
 other firewall rule key values.
 An API change is also needed.
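
 For concreteness, the proposed key/value table might look roughly like this
 (a hypothetical sketch based on the description above, not merged neutron
 code):

 import sqlalchemy as sa

 from neutron.db import model_base, models_v2

 class SecurityGroupRuleData(model_base.BASEV2, models_v2.HasId):
     # Extensible key/value data attached to a security group rule
     # (hypothetical model; table and column names are illustrative).
     rule_id = sa.Column(sa.String(36),
                         sa.ForeignKey('securitygrouprules.id',
                                       ondelete='CASCADE'))
     key = sa.Column(sa.String(255))    # e.g. 'icmp-type'
     value = sa.Column(sa.String(255))  # e.g. '134'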

 Please let me know your comments about this blueprint.

 [1]

https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-filter
 [2] https://review.openstack.org/#/c/72252/

 Thank you!
 Xuhan Peng










Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-07 Thread Robert Li (baoli)
Hi Akihiro,

In the case of IPv6 RA, its source IP is a Link Local Address from the
router's RA advertising interface. This LLA address is automatically
generated and not saved in the neutron port DB. We are exploring the idea
of retrieving this LLA if a native openstack RA service is running on the
subnet.

Would SG be needed with a provider net in which the RA service is running
external to openstack?

In the case of IPv4 DHCP, the dhcp port is created by the dhcp service,
and the dhcp server ip address is retrieved from this dhcp port. If the
dhcp server is running outside of openstack, and if we'd only allow dhcp
packets from this server, how is it done now?

thanks,
Robert

On 3/7/14 12:00 AM, Akihiro Motoki amot...@gmail.com wrote:

I wonder why RA needs to be exposed by the security group API.
Does a user need to configure a security group to allow IPv6 RA, or
should it be allowed on the infra side?

In the current implementation DHCP packets are allowed by a provider
rule (which is hardcoded in neutron code now).
I think the role of IPv6 RA is similar to that of DHCP in IPv4. If so, we
don't need to expose RA in the security group API.
Am I missing something?

Thanks,
Akihiro

On Mon, Mar 3, 2014 at 10:39 PM, Xuhan Peng pengxu...@gmail.com wrote:
 I created a new blueprint [1], triggered by the requirement to allow an
 IPv6 Router Advertisement security group rule on the compute node in my
 ongoing code review [2].

 Currently, only the security group rule direction, protocol, ethertype and
 port range are supported by the neutron security group rule data structure.
 To allow Router Advertisements coming from the network node or a provider
 network to a VM on a compute node, we need to specify the ICMP type to only
 allow RAs from known hosts (the network node's dnsmasq-bound IP or a known
 provider gateway).

 To implement this and make the implementation extensible, maybe we can add
 an additional table named SecurityGroupRuleData with Key, Value and ID in
 it. For the ICMP type RA filter, we can add key=icmp-type, value=134, and
 the security group rule to the table. When other ICMP type filters are
 needed, similar records can be stored. This table can also be used for
 other firewall rule key values.
 An API change is also needed.

 Please let me know your comments about this blueprint.

 [1]
 
https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-filter
 [2] https://review.openstack.org/#/c/72252/

 Thank you!
 Xuhan Peng







Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Robert Li (baoli)
Hi Sean,

See embedded comments…

Thanks,
Robert

On 3/4/14 3:25 PM, Collins, Sean sean_colli...@cable.comcast.com wrote:

On Tue, Mar 04, 2014 at 02:08:03PM EST, Robert Li (baoli) wrote:
 Hi Xu Han & Sean,
 
 Is this code going to be committed as it is? Based on this morning's
 discussion, I thought that the IP address used to install the RA rule
 comes from the qr-xxx interface's LLA address. I think that I'm
confused.

Xu Han has a better grasp on the query than I do, but I'm going to try
and take a crack at explaining the code as I read through it. Here's
some sample data from the Neutron database - built using
vagrant_devstack. 

https://gist.github.com/sc68cal/568d6119eecad753d696

I don't have V6 addresses working in vagrant_devstack just yet, but for
the sake of discourse I'm going to use it as an example.

If you look at the queries he's building in 72252 - he's querying all
the ports on the network that are q_const.DEVICE_OWNER_ROUTER_INTF
(network:router_interface). The IPs of those ports are added to the list
of IPs.

Then a second query is done to find the port connected from the router
to the gateway, q_const.DEVICE_OWNER_ROUTER_GW
('network:router_gateway'). Those IPs are then appended to the list of
IPs.

Finally, the last query adds the IPs of the gateway for each subnet
in the network.

So, ICMPv6 traffic from ports that are either:

A) A gateway device
B) A router
C) The subnet's gateway

My understanding is that the RA (if enabled) will be sent to the router
interface (the qr interface). Therefore, the RA's source IP will be an LLA
from the qr interface.

 

Will be passed through to an instance.
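
In Python terms, the collection described above amounts to roughly the
following (an approximation for illustration; see review 72252 for the
authoritative queries):

from neutron.common import constants as q_const

def allowed_ra_sources(ports, subnets):
    # Ports owned by a router interface or a router gateway...
    ips = [ip['ip_address']
           for port in ports
           if port['device_owner'] in (q_const.DEVICE_OWNER_ROUTER_INTF,
                                       q_const.DEVICE_OWNER_ROUTER_GW)
           for ip in port['fixed_ips']]
    # ...plus each subnet's gateway IP.
    ips += [s['gateway_ip'] for s in subnets if s['gateway_ip']]
    return ips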

Now, please take note that I have *not* discussed what *kind* of IP
address will be picked up. We intend for it to be a Link Local address,
but that will be/is addressed in other patch sets.

 Also this bug: Allow LLA as router interface of IPv6 subnet
 https://review.openstack.org/76125 was created due to comments to 72252.
 If we don't need to create a new LLA for the gateway IP, is the fix still
 needed?

Yes - we still need this patch - because that code path is how we are
able to create ports on routers that are a link local address.

As a result of this change, it will end up having two LLA addresses on the
router's qr interface. It would have made more sense if the LLA replaced
the qr interface's automatically generated LLA address.



This is at least my understanding of our progress so far, but I'm not
perfect - Xu Han will probably have the last word.

-- 
Sean M. Collins




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Robert Li (baoli)
Hi Irena,

The main reason for me to do it that way is how vif_details should be
setup in our case. Do you need vlan in vif_details? The behavior in the
existing base classes is that the vif_details is set during the driver
init time. In our case, it needs to be setup during bind_port().

thanks,
Robert


On 3/5/14 7:37 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, Sandhya,
I have pushed the reference implementation SriovAgentMechanismDriverBase
as part the following WIP:
https://review.openstack.org/#/c/74464/

The code is in mech_agent.py, and very simple code for
mech_sriov_nic_switch.py.

Please take a look and review.

BR,
Irena

-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development
Mailing List (not for usage questions); Robert Kukura; Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

Hi Robert,
Seems to me that many code lines are duplicated following your proposal.
For agent based MDs, I would prefer to inherit from
SimpleAgentMechanismDriverBase and add there verify method for
supported_pci_vendor_info. Specific MD will pass the list of supported
pci_vendor_info list. The  'try_to_bind_segment_for_agent' method will
call 'supported_pci_vendor_info', and if supported continue with binding
flow. 
Maybe instead of a decorator method, it should be just a utility method?
I think that the check for supported vnic_type and pci_vendor info
support, should be done in order to see if MD should bind the port. If
the answer is Yes, no more checks are required.

Coming back to the question I asked earlier, for non-agent MD, how would
you deal with updates after port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        # Skip ports whose vnic_type this driver doesn't support.
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        # If a supported PCI vendor list is given, require a matching
        # pci_vendor_info entry in the port's binding profile.
        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name,
                           'info': pci_vendor_info})
                return
        f(self, context)
    return wrapper


@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running.
    """
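
For illustration, a concrete driver might build on this roughly as follows (a
sketch using the names above; not merged code, and the vendor string is an
example value):

class ExampleSriovMechanismDriver(SriovMechanismDriverBase):
    # vendor_id:product_id strings this driver accepts (example value)
    supported_pci_vendor_info = ['1137:0071']
    supported_vnic_types = DEFAULT_VNIC_TYPES_SUPPORTED

    def initialize(self):
        pass

    @check_vnic_type_and_vendor_info
    def bind_port(self, context):
        # Only reached for a supported vnic_type and PCI vendor.
        pass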

Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Robert Li (baoli)
Hi Sean,

Sorry for your frustration. I actually provided the comments about the two
LLAs in the review (see patch set 1). If the intent for these changes is
to allow RAs from legitimate sources only, I'm afraid that that goal won't
be reached with them. I may be completely wrong, but so far I haven't been
convinced yet. 
 

thanks,
Robert



On 3/5/14 10:21 AM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

Hi Robert,

I'm reaching out to you off-list for this:

On Wed, Mar 05, 2014 at 09:48:46AM EST, Robert Li (baoli) wrote:
 As a result of this change, it will end up having two LLA addresses on the
 router's qr interface. It would have made more sense if the LLA replaced
 the qr interface's automatically generated LLA address.

Was this not what you intended, when you -1'd the security group patch
because you were not able to create gateways for Neutron subnets with an
LLA address? I am a little frustrated because we scrambled to create a
patch so you would remove your -1, and now you're suggesting we abandon
it?




Re: [openstack-dev] PCI SRIOV meeting suspend?

2014-03-04 Thread Robert Li (baoli)
Hi Yongli,

I have been looking at your patch set. Let me look at it again if you have a
new update.

The meeting has changed back to UTC 1300 on Tuesdays.

thanks,
Robert

On 3/4/14 12:39 AM, yongli he yongli...@intel.com wrote:

On 2014-03-04 13:33, Irena Berezovsky wrote:
 Hi Yongli He,
 The PCI SRIOV meeting switched back to weekly occurrences.
 The next meeting will be today at the usual time slot:
 https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting

 In coming meetings we would like to work on content to be proposed for
Juno.
 BR,
thanks, Irena.

Yongli he
 Irena

 -Original Message-
 From: yongli he [mailto:yongli...@intel.com]
 Sent: Tuesday, March 04, 2014 3:28 AM
 To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing
List
 Subject: PCI SRIOV meeting suspend?

 Hi Robert,

 Has the meeting stopped for a while?

 And, if convenient for you, please review this patch set and check if the
 interface is OK.


 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z

 Yongli He





Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Robert Li (baoli)
Hi Sean,

I just added the ipv6-prefix-delegation BP that can be found using the
search link on the ipv6 wiki. More details about it will be added once I
find time.

thanks,
--Robert

On 3/4/14 10:05 AM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

Hi All,

We've got a lot of work in progress, so if you
have a blueprint or bug that you are working on (or know about),
let's make sure that we keep track of them. Ideally, for bugs, add the
ipv6 tag

https://bugs.launchpad.net/neutron/+bugs?field.tag=ipv6

For blueprints and code reviews, please add them to the Wiki

https://wiki.openstack.org/wiki/Neutron/IPv6

-- 
Sean M. Collins




Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Robert Li (baoli)
Yeah, that's a good idea. I will try to find time to work on the spec.

--Robert

On 3/4/14 11:17 AM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

On Tue, Mar 04, 2014 at 04:06:02PM +, Robert Li (baoli) wrote:
 Hi Sean,
 
 I just added the ipv6-prefix-delegation BP that can be found using the
 search link on the ipv6 wiki. More details about it will be added once I
 find time.

Perfect - we'll probably want to do a session at the summit on it.

-- 
Sean M. Collins




Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-04 Thread Robert Li (baoli)
Hi Xu Han & Sean,

Is this code going to be committed as it is? Based on this morning's
discussion, I thought that the IP address used to install the RA rule
comes from the qr-xxx interface's LLA address. I think that I'm confused.

Also this bug: Allow LLA as router interface of IPv6 subnet
https://review.openstack.org/76125 was created due to comments to 72252.
If we don't need to create a new LLA for the gateway IP, is the fix still
needed?

Just trying to sync up with you guys on them.

Thanks,
Robert



On 3/4/14 3:02 AM, Sean M. Collins (Code Review) rev...@openstack.org
wrote:

Sean M. Collins has posted comments on this change.

Change subject: Permit ICMPv6 RAs only from known routers
..


Patch Set 4: Looks good to me, but someone else must approve

Automatically re-added by Gerrit trivial rebase detection script.

--
To view, visit https://review.openstack.org/72252




Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-04 Thread Robert Li (baoli)
 mentioned Base/Mixin class that inherits from
AgentMechanismDriverBase class? When you mentioned port state, were you
referring to the validate_port_binding() method?

Pls clarify.

Thanks,
Sandhya

On 2/6/14 7:57 AM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:

Hi Bob and Irena,
   Thanks for the clarification. Irena, I am not opposed to a
SriovMechanismDriverBase/Mixin approach, but I want to first figure out
how much common functionality there is. Have you already looked at this?

Thanks,
Sandhya

On 2/5/14 1:58 AM, Irena Berezovsky ire...@mellanox.com wrote:

Please see inline my understanding

-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com]
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
(brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
binding of ports

On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
 Hi,
  I have a couple of questions for ML2 experts regarding support
of
 SR-IOV ports.

I'll try, but I think these questions might be more about how the
various
SR-IOV implementations will work than about ML2 itself...

 1. The SR-IOV ports would not be managed by ova or linuxbridge L2
 agents. So, how does a MD for SR-IOV ports bind/unbind its ports to
 the host? Will it just be a db update?

I think whether or not to use an L2 agent depends on the specific
SR-IOV
implementation. Some (Mellanox?) might use an L2 agent, while others
(Cisco?) might put information in binding:vif_details that lets the
nova
VIF driver take care of setting up the port without an L2 agent.
[IrenaB] Based on the VIF_Type that the MD defines, and going forward with
other binding:vif_details attributes, the VIFDriver should do the VIF
plugging part. As for the required networking configuration, it is usually
done either by an L2 Agent or an external controller, depending on the MD.

 
 2. Also, how do we handle the functionality in mech_agent.py, within
 the SR-IOV context?

My guess is that those SR-IOV MechanismDrivers that use an L2 agent
would
inherit the AgentMechanismDriverBase class if it provides useful
functionality, but any MechanismDriver implementation is free to not
use
this base class if its not applicable. I'm not sure if an
SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being
planned, and how that would relate to AgentMechanismDriverBase.

[IrenaB] Agree with Bob, and as I stated before I think there is a need
for SriovMechanismDriverBase/Mixin that provides all the generic
functionality and helper methods that are common to SRIOV ports.
-Bob

 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 3:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, Irena Berezovsky ire...@mellanox.com,
 Robert Li (baoli) ba...@cisco.com, Robert Kukura rkuk...@redhat.com,
 Brian Bowen (brbowen) brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi,
 Since openstack-meeting-alt seems to be in use, baoli and myself
 are moving to openstack-meeting. Hopefully, Bob Kukura & Irena can
 join soon.
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu sad...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 1:26 PM
 To: Irena Berezovsky ire...@mellanox.com, Robert Li (baoli)
 ba...@cisco.com, Robert Kukura rkuk...@redhat.com, OpenStack Development
 Mailing List (not for usage questions) openstack-dev@lists.openstack.org,
 Brian Bowen (brbowen) brbo...@cisco.com
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi all,
 Both openstack-meeting and openstack-meeting-alt are available
 today. Lets meet at UTC 2000 @ openstack-meeting-alt.
 
 Thanks,
 Sandhya
 
 From: Irena Berezovsky ire...@mellanox.com
 Date: Monday, February 3, 2014 12:52 AM
 To: Sandhya Dasu sad...@cisco.com, Robert Li (baoli) ba...@cisco.com,
 Robert Kukura rkuk...@redhat.com, OpenStack Development Mailing List (not
 for usage questions) openstack-dev@lists.openstack.org, Brian Bowen
 (brbowen) brbo...@cisco.com

Re: [openstack-dev] [Neutron][IPv6] BP:Store both IPv6 LLA and GUA address on router interface port

2014-02-27 Thread Robert Li (baoli)
Hi Xuhan,

Thank you for your summary. see comments inline.

--Robert

On 2/27/14 12:49 AM, Xuhan Peng pengxu...@gmail.com wrote:

As the follow up action of IPv6 sub-team meeting [1], I created a new blueprint 
[2] to store both IPv6 LLA and GUA address on router interface port.

Here is what it's about:

Based on the two-modes (ipv6-ra-mode and ipv6-address-mode) design [3], RAs can 
be sent either from an OpenStack-controlled dnsmasq or from existing devices.

RA from dnsmasq: the gateway IP that dnsmasq binds to should be a link-local 
address (LLA) according to [4]. This means we need to pass the LLA of the 
created router internal port (i.e. qr-) to the dnsmasq spawned by the 
OpenStack DHCP agent. Meanwhile, we need to assign a GUA to the created router 
port so that traffic from the external network can be routed back using the GUA 
of the router port as the next hop into the internal subnet. Therefore, we will 
need some changes to the current logic to leverage both the LLA and the GUA on 
the router port.

[Robert]: in this case, an LLA is automatically created based on the gateway 
port's MAC address (EUI-64 format). If it's determined that the gateway port is 
enabled with IPv6 (due to the two modes), then an RA rule can be installed 
based on the gateway port's automatic LLA.
If a service VM is running on the same subnet that supports IPv6 (either by RA 
or DHCPv6), then the service VM is attached to a neutron port on the same 
subnet (the gateway port). In this case, the automatic LLA on that port can be 
used to install the RA rule. This is actually the same as in the dnsmasq case: 
use the gateway port's automatic LLA.


RA from an existing device on the same link which is not controlled by 
openstack: dnsmasq will not send RAs in this case. The RA is sent from the 
subnet's gateway address, which should also be an LLA according to [4]. 
Allowing the subnet's gateway IP to be an LLA is enough in this case. The 
current code works when force_gateway_on_subnet = False.

[Robert]
If it's a provider network, the gateway already exists. I believe that the 
behavior of the --gateway option in the subnet API is to indicate the gateway's 
true IP address and install the default route. In the IPv6 case, however, due 
to the existence of RA, the gateway doesn't have to be provided. In this case, 
a neutron gateway port doesn't have to be created, either. Installing an RA 
rule to prevent RAs from malicious sources should be done explicitly. A couple 
of methods may be considered. For example, an option such as --allow-ra LLA 
could be introduced in the subnet API, or the security group rule could be 
enhanced to allow specification of the message type so that an RA rule can be 
incorporated.
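
As a concrete illustration, permitting RAs (ICMPv6 type 134) only from a given
LLA maps to an ip6tables rule along these lines (a sketch; the helper name is
made up, and the rule layout is illustrative rather than neutron's actual
firewall driver code):

def ra_allow_rule(gateway_lla):
    # ip6tables arguments permitting RAs only from gateway_lla.
    return ['-p', 'ipv6-icmp', '--icmpv6-type', '134',
            '-s', '%s/128' % gateway_lla, '-j', 'RETURN']

print(' '.join(ra_allow_rule('fe80::f816:3eff:fe12:3456')))
# -p ipv6-icmp --icmpv6-type 134 -s fe80::f816:3eff:fe12:3456/128 -j RETURN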

In any case, I don't believe that the gateway behavior should be modified. In 
addition, I don't think that this functionality (IPv6 RA rule) has to be 
provided right now, but can be introduced when it's completely sorted out.

The above is just my two cents.

thanks.





RA from the router gateway port (i.e. qg-): the LLA of the gateway port 
(qg-) should be set as the gateway of the tenant subnet to get the RA from 
it. This could potentially be calculated by [5] or by other methods in the 
future, considering the privacy extension. However, this will make the tenant 
network gateway port qr- useless. Therefore, we also need a change to the 
current router interface attach logic.

If you have any comments on this, please let me know.

[1] 
http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-02-25-14.02.html
[2] https://blueprints.launchpad.net/neutron/+spec/ipv6-lla-gua-router-interface
[3] https://blueprints.launchpad.net/neutron/+spec/ipv6-two-attributes
[4] http://tools.ietf.org/html/rfc4861
[5] https://review.openstack.org/#/c/56184/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] SR-IOV networking patches available

2014-02-26 Thread Robert Li (baoli)
Hi,

The following two Work In Progress patches are available for end-to-end SR-IOV 
networking:
nova client: https://review.openstack.org/#/c/67503/
nova: https://review.openstack.org/#/c/67500/

Please check the commit messages for how to use them.

Neutron changes required to support SR-IOV have already been merged. Many 
thanks to the developers working on them and having them merged in a very short 
time! They are:

https://blueprints.launchpad.net/neutron/+spec/vif-details
https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type

The above patches combined can be used to develop a neutron plugin that 
supports SR-IOV. Please note that although the nova patches are WIP patches, 
they can be used for your integration testing if you are developing an SR-IOV 
capable neutron plugin.

If you use devstack, you may need the following patch for devstack to define 
the PCI whitelist entries:

diff --git a/lib/nova b/lib/nova
index fefeda1..995873a 100644
--- a/lib/nova
+++ b/lib/nova
@@ -475,6 +475,10 @@ function create_nova_conf() {
 iniset $NOVA_CONF DEFAULT ${I/=/ }
 done

+if [ -n "${PCI_LIST:-}" ]; then
+    iniset_multiline $NOVA_CONF DEFAULT pci_passthrough_whitelist "${PCI_LIST[@]}"
+fi
+
 # All nova-compute workers need to know the vnc configuration options
 # These settings don't hurt anything if n-xvnc and n-novnc are disabled
 if is_service_enabled n-cpu; then

And define something like the following in your localrc file:
PCI_LIST=('{"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}'
   '{"vendor_id":"1137","product_id":"0071"}')
Basically it's a bash array of strings with each string being a json dict. 
Check out https://review.openstack.org/#/c/67500 for the syntax.
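
To illustrate how such an entry is meant to select devices, here is a rough
Python sketch of the matching semantics (illustrative only, not the code in
the patches above): exact match on the plain keys, shell-style wildcard match
on the address.

import fnmatch
import json

def whitelist_matches(entry_json, dev):
    # dev: e.g. {'vendor_id': '1137', 'product_id': '0071',
    #            'address': '0000:0a:00.1'}
    entry = json.loads(entry_json)
    for key, want in entry.items():
        if key == 'physical_network':
            continue  # a tag consumed by networking, not used for matching
        if key == 'address':
            if not fnmatch.fnmatch(dev.get('address', ''), want):
                return False
        elif dev.get(key) != want:
            return False
    return True

print(whitelist_matches(
    '{"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*"}',
    {'vendor_id': '1137', 'product_id': '0071', 'address': '0000:0a:00.1'}))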

Thanks,
Robert

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] SRIOV Meeting on Wednesday Feb.19th

2014-02-18 Thread Robert Li (baoli)
Hi Folks,

Irena suggested to have another sync-up meeting on Wednesday. So let's meet at 
8:00am at #openstack-meeting-alt.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] SRIOV: Recap of Feb 12th and agenda on Feb 13th

2014-02-12 Thread Robert Li (baoli)
Hi Folks,

I put the recap here: 
https://wiki.openstack.org/wiki/Meetings/Passthrough#Feb._12th.2C_2014_Recap. 
Please take a look, see if everything is fine, and correct any 
misunderstandings.

I also put together an agenda for tomorrow here: 
https://wiki.openstack.org/wiki/Meetings/Passthrough#Agenda_on_Feb._13th.2C_2014. 
Hopefully we can get the nova side of things cleared up.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The simplified blueprint for PCI extra attributes and SR-IOV NIC blueprint

2014-02-05 Thread Robert Li (baoli)
Hi John and all,

Yunhong's email mentioned about the SR-IOV NIC support BP:
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov

I'd appreciate your consideration of the approval of both BPs so that we
can have SR-IOV NIC support in Icehouse.

Thanks,
Robert


On 2/4/14 1:36 AM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Hi, John and all,
   I updated the blueprint
https://blueprints.launchpad.net/nova/+spec/pci-extra-info-icehouse
according to your feedback, to add the backward compatibility/upgrade
issue/examples.

   I am trying to separate this BP from the SR-IOV NIC support as a standalone
enhancement, because this requirement is more of a generic PCI passthrough
feature, and will benefit other usage scenarios as well.

   And the reasons that I want to finish this BP in the I release are:

   a) it's a generic requirement, and pushing it into the I release is helpful
to other scenarios.
   b) I don't see an upgrade issue, and the only thing that will be discarded
in the future is the PCI alias, if we all agree to use PCI flavor. But that
effort will be small, and there is no conclusion on PCI flavor yet.
   c) SR-IOV NIC support is complex; it will be really helpful if we can
keep the ball rolling and push the all-agreed items forward.

   Considering the big patch list for the I-3 release, I'm not optimistic
about merging this in the I release, but as said, we should keep the ball
rolling and move forward.

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The simplified blueprint for PCI extra attributes

2014-02-04 Thread Robert Li (baoli)
Hi Yunhong,

A couple of questions:
   -- about the pci_information config item in your spec. What is a
device_id? 

   -- in the libvirt driver, we need to retrieve the PCI devices allocated for
the requested networks. These PCI devices won't be treated as hostdev
devices in the domain XML, but rather as interfaces. Shall it be specified in
your spec how this is going to be supported?

thanks,
Robert

On 2/4/14 1:36 AM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Hi, John and all,
   I updated the blueprint
https://blueprints.launchpad.net/nova/+spec/pci-extra-info-icehouse
according to your feedback, to add the backward compatibility/upgrade
issue/examples.

   I am trying to separate this BP from the SR-IOV NIC support as a standalone
enhancement, because this requirement is more of a generic PCI passthrough
feature, and will benefit other usage scenarios as well.

   And the reasons that I want to finish this BP in the I release are:

   a) it's a generic requirement, and pushing it into the I release is helpful
to other scenarios.
   b) I don't see an upgrade issue, and the only thing that will be discarded
in the future is the PCI alias, if we all agree to use PCI flavor. But that
effort will be small, and there is no conclusion on PCI flavor yet.
   c) SR-IOV NIC support is complex; it will be really helpful if we can
keep the ball rolling and push the all-agreed items forward.

   Considering the big patch list for the I-3 release, I'm not optimistic
about merging this in the I release, but as said, we should keep the ball
rolling and move forward.

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-31 Thread Robert Li (baoli)
Hi Irena,

Thanks for the Reply. See inline…

If possible, can we put details on what exactly would be covered by each BP?

--Robert

On 1/30/14 4:13 PM, Irena Berezovsky 
ire...@mellanox.com wrote:

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatements in the above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else on your mind?
[IrenaB] I was thinking of some SRIOVPortProfileMixin to handle and persist 
SRIOV port-related attributes.

[Robert] This makes sense to me. Would this live in the extension area, or 
would it be in the ML2 area? I thought one of the above listed BPs would cover 
the persistence of SRIOV attributes. But it sounds like we need this BP.

  -- what should mechanism drivers put in binding:vif_details, and how would 
nova use this information? As far as I can see from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port).

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
[IrenaB] vnic_type will be added as an additional attribute to the binding 
extension. For persistency, it should be added in PortBindingMixin for non-ML2. 
I didn’t plan to cover it as part of the ML2 vnic_type BP.
For the rest attributes, need to see what Bob plans.

[Robert] Sounds good to me. But again, which BP would cover this?

 -- is a neutron agent making decisions based on the binding:vif_type?  In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.
[IrenaB] vnic_type is an input parameter that will eventually cause a certain 
vif_type to be sent to the GenericVIFDriver to create the network interface. 
Neutron agents periodically scan for attached interfaces. For example, the OVS 
agent will look only for OVS interfaces, so if an SRIOV interface is created, 
it won’t be discovered by the OVS agent.
[Robert] I get the idea. It relies on what is plugged onto the integration 
bridge by nova to determine whether it needs to take action.


Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-30 Thread Robert Li (baoli)
Ian,

I hope that you guys are in agreement on this. But take a look at the wiki: 
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support and see if it has 
any difference from your proposals.  IMO, it's the critical piece of the 
proposal, and hasn't been specified in exact terms yet. I'm not sure about 
vif_attributes or vif_stats, which I just heard from you. In any case, I'm not 
convinced by the flexibility and/or complexity, and so far I haven't seen a 
use case that really demands it. But I'd be happy to see one.

thanks,
Robert

On 1/29/14 4:43 PM, Ian Wells 
ijw.ubu...@cack.org.uk wrote:

My proposals:

On 29 January 2014 16:43, Robert Li (baoli) 
ba...@cisco.com wrote:
1. pci-flavor-attrs is configured through configuration files and will be
available on both the controller node and the compute nodes. Can the cloud
admin decide to add a new attribute in a running cloud? If that's
possible, how is that done?

When nova-compute starts up, it requests the VIF attributes that the schedulers 
need.  (You could have multiple schedulers; they could be in disagreement; it 
picks the last answer.)  It returns pci_stats by the selected combination of 
VIF attributes.

When nova-scheduler starts up, it sends an unsolicited cast of the attributes.  
nova-compute updates the attributes, clears its pci_stats and recreates them.

If nova-scheduler receives pci_stats with incorrect attributes it discards them.

(There is a row from nova-compute summarising devices for each unique 
combination of vif_stats, including 'None' where no attribute is set.)

I'm assuming here that the pci_flavor_attrs are read on startup of 
nova-scheduler and could be re-read and different when nova-scheduler is reset. 
 There's a relatively straightforward move from here to an API for setting it 
if this turns out to be useful, but firstly I think it would be an uncommon 
occurrence and secondly it's not something we should implement now.

2. PCI flavor will be defined using the attributes in pci-flavor-attrs. A
flavor is defined with a matching expression in the form of attr1 = val11
[| val12 ...], [attr2 = val21 [| val22 ...]], ... And this expression is used
to match one or more PCI stats groups until a free PCI device is located.
In this case, both attr1 and attr2 can have multiple values, and both
attributes need to be satisfied. Please confirm this understanding is
correct

This looks right to me as we've discussed it, but I think we'll be wanting 
something that allows a top level AND.  In the above example, I can't say an 
Intel NIC and a Mellanox NIC are equally OK, because I can't say (intel + 
product ID 1) AND (Mellanox + product ID 2).  I'll leave Yunhong to decide how 
the details should look, though.
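
For concreteness, a minimal Python sketch (illustrative names, not from any
patch) of the matching semantics in question 2, where values within an
attribute are OR'd and attributes are AND'd, plus the kind of top-level
grouping of AND'd attribute pairs that the Intel/Mellanox example calls for:

def matches(expr, stats_group):
    # expr: {'vendor_id': {'V1'}, 'device_id': {'0xa', '0xb'}}
    # stats_group: {'vendor_id': 'V1', 'device_id': '0xa', ...}
    return all(stats_group.get(attr) in values
               for attr, values in expr.items())

def matches_any(exprs, stats_group):
    # alternation over AND groups, e.g.
    # [(intel AND product 1), (mellanox AND product 2)]
    return any(matches(expr, stats_group) for expr in exprs)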

3. I'd like to see an example that involves multiple attributes. let's say
pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
know how PCI stats groups are formed on compute nodes based on that, and
how many PCI stats groups there are? What are the reasonable guidelines
for defining the PCI flavors?

I need to write up the document for this, and it's overdue.  Leave it with me.
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-30 Thread Robert Li (baoli)
Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatements in the above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else on your mind?

  -- what should mechanism drivers put in binding:vif_details, and how would 
nova use this information? As far as I can see from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port).

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?

 -- is a neutron agent making decisions based on the binding:vif_type?  In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys say implementing binding:profile 
in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.

We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.

Another thing is that we need to define the binding:profile dictionary.

Thanks,
Robert



On 1/29/14 4:02 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Will attend

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-29 Thread Robert Li (baoli)
Hi Yongli,

Thank you for addressing my comments, and for adding the encryption card
use case. One thing that I want to point out is that in this use case, you
may not use the pci-flavor in the --nic option because it's not a neutron
feature.

I have a few more questions:
1. pci-flavor-attrs is configured through configuration files and will be
available on both the controller node and the compute nodes. Can the cloud
admin decide to add a new attribute in a running cloud? If that's
possible, how is that done?
2. PCI flavor will be defined using the attributes in pci-flavor-attrs. A
flavor is defined with a matching expression in the form of attr1 = val11
[| val12 ...], [attr2 = val21 [| val22 ...]], ... And this expression is used
to match one or more PCI stats groups until a free PCI device is located.
In this case, both attr1 and attr2 can have multiple values, and both
attributes need to be satisfied. Please confirm this understanding is
correct
3. I'd like to see an example that involves multiple attributes. let's say
pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
know how PCI stats groups are formed on compute nodes based on that, and
how many PCI stats groups there are? What are the reasonable guidelines
for defining the PCI flavors?


thanks,
Robert



On 1/28/14 10:16 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi,

I added a few comments in this wiki that Yongli came up with:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

Please check it out and look for Robert in the wiki.

Thanks,
Robert

On 1/21/14 9:55 AM, Robert Li (baoli) ba...@cisco.com wrote:

Yunhong, 

Just trying to understand your use case:
-- a VM can only work with cards from vendor V1
-- a VM can work with cards from both vendor V1 and V2

  So stats in the two flavors will overlap in the PCI flavor
solution.
I'm just trying to say that this is something that needs to be properly
addressed.


Just for the sake of discussion, another solution to meeting the above
requirement is to be able to say in the nova flavor's extra-spec

   encrypt_card = card from vendor V1 OR encrypt_card = card from
vendor V2


In other words, this can be solved in the nova flavor, rather than
introducing a new flavor.

Thanks,
Robert
   

On 1/17/14 7:03 PM, yunhong jiang yunhong.ji...@linux.intel.com
wrote:

On Fri, 2014-01-17 at 22:30 +, Robert Li (baoli) wrote:
 Yunhong,
 
 I'm hoping that these comments can be directly addressed:
   a practical deployment scenario that requires arbitrary
 attributes.

I'm just strongly against supporting only one attribute (your PCI
group) for scheduling and management; that's really TOO limited.

A simple scenario is, I have 3 encryption cards:
 Card 1 (vendor_id is V1, device_id = 0xa)
 Card 2 (vendor_id is V1, device_id = 0xb)
 Card 3 (vendor_id is V2, device_id = 0xb)

 I have two images. One image only supports Card 1, and another image
supports Card 1/3 (or any other combination of the 3 card types). I don't
think only one attribute will meet such a requirement.

As to arbitrary attributes versus a limited list of attributes, my opinion
is: since there are so many types of PCI devices and so many potential PCI
device usages, supporting arbitrary attributes will make our effort more
flexible, if we can push the implementation into the tree.

   detailed design on the following (that also take into account
 the
 introduction of predefined attributes):
 * PCI stats report since the scheduler is stats based

I don't think there is much difference from the current implementation.

 * the scheduler in support of PCI flavors with arbitrary
 attributes and potential overlapping.

As Ian said, we need to make sure the pci_stats and the PCI flavor have the
same set of attributes, so I don't think there is much difference from the
current implementation.

   networking requirements to support multiple provider
 nets/physical
 nets

Can't the extra info resolve this issue? Can you elaborate on the issue?

Thanks
--jyh
 
 I guess that the above will become clear as the discussion goes on.
 And we
 also need to define the deliverables.
  
 Thanks,
 Robert 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Irena,

I'm now even more confused. I must have missed something. See inline….

Thanks,
Robert

On 1/29/14 10:19 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert,
I think that I can go with Bob’s suggestion, but I think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys say implementing binding:profile 
in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.
[IrenaB] Basically you  are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store relevant attributes. Making it work for ML2 means not adding 
this support to PortBindingMixin.

[ROBERT] does that mean binding:profile for PCI can't be used by non-ml2 plugin?

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
[IrenaB] As long as existing binding-capable Mech Drivers take vnic_type 
into consideration, I guess doing it via binding:profile will introduce 
fewer changes all over (CLI, API). But I am not sure this reason is strong 
enough to choose this direction.
We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on 
direction with vnic_type

[ROBERT] Can you let me know where in the code binding:profile is supported? In 
portbindings_db.py, the PortBindingPort model doesn't have a column for 
binding:profile. So I guess that I must have missed it.
Regarding BPs for the CLI/API, we are planning to add vnic-type and profileid 
in the CLI, also the new keys in binding:profile. Are you saying no changes are 
needed (say display them, interpret the added cli arguments, etc), therefore no 
new BPs are needed for them?

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regards to PCI SRIOV related attributes, right?

[ROBERT] yes.


Thanks,
Robert



On 1/29/14 4:02 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Will attend

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Irena,

With your reply, and after taking a close look at the code, I think that I 
understand it now.

Regarding the cli change:

  neutron port-create --binding:profile type=dict vnic_type=direct

following the neutron net-create --provider:physical_network as an example, 
--binding:* can be treated as unknown arguments, and they are opaquely 
transmitted to the neutron plugin for processing. I have always wondered why 
the net-create help doesn't display the --provider:* arguments, and sometimes I 
have to google the syntax. After taking a look at the code, I think I kind of 
know what's going on in there. I'd like to know why it's done that way. But I 
think that it will work for --binding:* in the neutron port-create commands.

now regarding binding:profile for SR-IOV, from your google doc, it will have 
the following properties:
   pci_slot in the format of vendor_id:product_id:domain:bus:slot.fn.
   pci_flavor: will be a PCI flavor name when the API is available and 
it's desirable for neutron to use it. For now, it will be a physical network 
name.
   profileid: for 802.1qbh/802.1br
   vnic-type: it's still debatable whether or not this property belongs 
here. I kind of second you on making it binding:vnic-type.

They all seem to be neither plugin- nor MD-specific. Of course, an MD that 
supports 802.1br would enforce profileid. But in terms of persisting them, I 
don't feel like that should be done in the plugin. On the other hand, the 
examples you gave me do show that these plugins are responsible for storing 
plugin-specific binding:profile in the DB. And in the case of --provider:* for 
a neutron network, it's the individual plugins that persist it, and duplicate 
the code. Therefore, we may not have options other than following the existing 
examples.
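
As an aside, here is a tiny Python sketch of the kind of helper function
discussed earlier for interpreting the pci_slot string (the field layout
follows the format listed above; the helper name is illustrative):

import re

PCI_SLOT_RE = re.compile(
    r'^(?P<vendor_id>[0-9a-fA-F]{4}):(?P<product_id>[0-9a-fA-F]{4}):'
    r'(?P<domain>[0-9a-fA-F]{4}):(?P<bus>[0-9a-fA-F]{2}):'
    r'(?P<slot>[0-9a-fA-F]{2})\.(?P<function>[0-7])$')

def parse_pci_slot(pci_slot):
    m = PCI_SLOT_RE.match(pci_slot)
    if m is None:
        raise ValueError('malformed pci_slot: %r' % pci_slot)
    return m.groupdict()

# e.g. parse_pci_slot('1137:0071:0000:0a:00.1')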


thanks,
Robert



On 1/29/14 12:17 PM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert,
Please see inline, I’ll try to post my understanding.


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 6:03 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Irena,

I'm now even more confused. I must have missed something. See inline….

Thanks,
Robert

On 1/29/14 10:19 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert,
I think that I can go with Bob’s suggestion, but I think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys say implementing binding:profile 
in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.
[IrenaB] Basically you  are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store relevant attributes. Making it work for ML2 means not adding 
this support to PortBindingMixin.

[ROBERT] does that mean binding:profile for PCI can't be used by non-ml2 plugin?
[IrenaB] binding:profile can be used by any plugin that supports the binding 
extension. To persist the binding:profile (in the DB), the plugin should add a 
DB table for this. The PortBindingMixin does not persist the binding:profile.

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Bob,

that's a good find. profileid as part of IEEE 802.1br needs to be in
binding:profile, and can be specified by a normal user, and later possibly
the pci_flavor. Would it be wrong to say something like the below in the
policy.json?
 "create_port:binding:vnic_type": "rule:admin_or_network_owner",
 "create_port:binding:profile:profileid": "rule:admin_or_network_owner",

If it's not appropriate, then I agree with you we may need another
extension. 


--Robert

On 1/29/14 4:57 PM, Robert Kukura rkuk...@redhat.com wrote:

On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:

 Another issue that came up during the meeting is about whether or not
 vnic-type should be part of the top level binding or part of
 binding:profile. In other words, should it be defined as
 binding:vnic-type or binding:profile:vnic-type.

I'd phrase that choice as top-level attribute vs. key/value pair
within the binding:profile attribute. If we go with a new top-level
attribute, it may or may not end up being part of the portbindings
extension.

Although I've been advocating making vnic_type a key within
binding:profile (minimizing effort), it just occurred to me that
policy.json contains:

"create_port:binding:profile": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port:binding:profile": "rule:admin_only",

This means that only administrative users (including nova's integration
with neutron) can read or write the binding:profile attribute by default.

But my (limited) understanding of the PCI-passthru use cases is that
normal users need to specify vnic_type because this is what determines
the NIC type that their VMs see for the port. If that is correct, then I
think this tips the balance towards vnic_type being a new top-level
attribute to which normal users have read/write access. Comments?

If I'm mistaken on the above, please ignore the rest of this email...

If vnic_type is a new top-level attribute accessible to normal users,
then I'm not sure it belongs in the portbindings extension. First,
everything else in that extension is only visible to administrative
users. Second, from the normal user's point of view, vnic_type has to do
with the type of NIC they want within their VM, not with how the port is
bound outside their VM to some underlying network segment and networking
mechanism they aren't even aware of. So we need a new extension for
vnic_type, which has the advantage of not requiring any change to
existing plugins that don't support that extension.

If vnic_type is a new top-level attribute in a new API extension, it
deserves its own neutron BP covering defining the extension and
implementing it in ML2. This is probably an update of Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
Implementations for other plugins could follow via separate BPs as they
choose to implement the extension.

If anything else we've been planning to put in binding:profile needs
normal user access, it could be defined in this new extension instead.
For now, I'm assuming other input data for PCI-passthru (such as the
slot info from nova) is only accessible to administrators and will go in
binding:profile. I'll submit a separate BP for generically implementing
the binding:profile attribute in ML2, as we've discussed.

This leaves us with potentially 3 separate generic neutron/ML2 BPs
providing the infrastructure for PCI-passthru:

1) Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
2) My BP to implement binding:profile in ML2
3) Definition/implementation of binding:vif_details based on Nachi's
binding:vif_security patch, for which I could submit a BP.

-Bob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Bob,

Those are all good questions. But as with nova VM flavor, could profileid
be created by the admin, and then used by normal users? For the Icehouse
release, we won't have time to develop the profileid management API. I
think that next release we should have it available. Personally, I don't
like PCI flavor (which will be created by the admin as well), and I think
neutron may not need to use it at all unless a special use case warrants
it.  

For provider nets, I see that a normal user can create a neutron net that
uses the provider net, but they don't have the privilege to select a specific
VLAN, which the admin does.

Those are just my thoughts, which may be wrong. And we can continue our
discussion tomorrow.

thanks,
Robert

On 1/29/14 5:50 PM, Robert Kukura rkuk...@redhat.com wrote:

On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
 Hi Bob,
 
 that's a good find. profileid as part of IEEE 802.1br needs to be in
 binding:profile, and can be specified by a normal user, and later
possibly
 the pci_flavor. Would it be wrong to say something like the below in the
 policy.json?
  "create_port:binding:vnic_type": "rule:admin_or_network_owner",
  "create_port:binding:profile:profileid": "rule:admin_or_network_owner",

Maybe, but a normal user that owns a network has no visibility into the
underlying details (such as the providernet extension attributes).

It seems to me that profileid is something that only make sense to an
administrator of the underlying cloud environment. Where would a normal
cloud user get a value to use for this?

Also, would a normal cloud user really know what pci_flavor to use?
Isn't all this kind of detail hidden from a normal user within the nova
VM flavor (or host aggregate or whatever) pre-configured by the admin?

-Bob

 
 If it's not appropriate, then I agree with you we may need another
 extension. 
 
 
 --Robert
 
 On 1/29/14 4:57 PM, Robert Kukura rkuk...@redhat.com wrote:
 
 On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:

 Another issue that came up during the meeting is about whether or not
 vnic-type should be part of the top level binding or part of
 binding:profile. In other words, should it be defined as
 binding:vnic-type or binding:profile:vnic-type.

 I'd phrase that choice as top-level attribute vs. key/value pair
 within the binding:profile attribute. If we go with a new top-level
 attribute, it may or may not end up being part of the portbindings
 extension.

 Although I've been advocating making vnic_type a key within
 binding:profile (minimizing effort), it just occurred to me that
 policy.json contains:

"create_port:binding:profile": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port:binding:profile": "rule:admin_only",

 This means that only administrative users (including nova's integration
 with neutron) can read or write the binding:profile attribute by
default.

 But my (limited) understanding of the PCI-passthru use cases is that
 normal users need to specify vnic_type because this is what determines
 the NIC type that their VMs see for the port. If that is correct, then
I
 think this tips the balance towards vnic_type being a new top-level
 attribute to which normal users have read/write access. Comments?

 If I'm mistaken on the above, please ignore the rest of this email...

 If vnic_type is a new top-level attribute accessible to normal users,
 then I'm not sure it belongs in the portbindings extension. First,
 everything else in that extension is only visible to administrative
 users. Second, from the normal user's point of view, vnic_type has to
do
 with the type of NIC they want within their VM, not with how the port
is
 bound outside their VM to some underlying network segment and
networking
 mechanism they aren't even aware of. So we need a new extension for
 vnic_type, which has the advantage of not requiring any change to
 existing plugins that don't support that extension.

 If vnic_type is a new top-level attribute in a new API extension, it
 deserves its own neutron BP covering defining the extension and
 implementing it in ML2. This is probably an update of Irena's
 https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
 Implementations for other plugins could follow via separate BPs as they
 choose to implement the extension.

 If anything else we've been planning to put in binding:profile needs
 normal user access, it could be defined in this new extension instead.
 For now, I'm assuming other input data for PCI-passthru (such as the
 slot info from nova) is only accessible to administrators and will go
in
 binding:profile. I'll submit a separate BP for generically implementing
 the binding:profile attribute in ML2, as we've discussed.

 This leaves us with potentially 3 separate generic neutron/ML2 BPs
 providing the infrastructure for PCI-passthru:

 1) Irena's
 https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
 2) My BP to implement binding:profile in ML2
 3) Definition/implementation

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-28 Thread Robert Li (baoli)
Hi,

For the second case, supposing that the PF is properly configured on the host, 
is it a matter of configuring it as you normally do with a regular ethernet 
interface to add it to the linux bridge or OVS?

--Robert

On 1/28/14 1:03 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Nrupal,
We definitely consider both these cases.
Agree with you that we should aim to support both.

BR,
Irena


From: Jani, Nrupal [mailto:nrupal.j...@intel.com]
Sent: Monday, January 27, 2014 11:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

There are two possibilities for the hybrid compute nodes

-  In the first case, a compute node has two NICs: one SRIOV NIC and the 
other NIC for VirtIO.

-  In the 2nd case, the compute node has only one SRIOV NIC, where VFs are 
used for the VMs, either macvtap or direct assignment, and the PF is used for 
the uplink to the linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not?

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. For lack 
of a proper name, let's call them compute nodes with hybrid NIC support, or 
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud. 
But it may be useful in the lab to benchmark the performance differences 
between SRIOV, non-SRIOV, and the coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, the hybrid compute nodes actually can be preferred in a real 
cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual 
function while the other is attached via some vSwitch.
But it definitely makes sense to land a VM with ‘virtio’ vNICs only on the 
non-SRIOV compute nodes.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require an SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports SRIOV ports only. Since the neutron 
plugin runs on the controller, port-create would succeed unless neutron knows 
that the host doesn't support non-SRIOV ports. But connectivity on the node 
would not be established, since no agent is running on that host to establish 
such connectivity.
[IrenaB]
Having the ML2 plugin as the neutron backend, binding the port will fail if no 
agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_type as we plan, the Mechanism Driver will 
bind the port only if it supports the vnic_type and there is a live agent on 
this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregates. This requires creating a 
non-SRIOV host aggregate and using that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be to have a better solution in a later release. And for 
now, people can either use host aggregates or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) 
ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday

[openstack-dev] PCI Request, associating a device to its original request

2014-01-28 Thread Robert Li (baoli)
Hi Yongli,

In today's IRC meeting, we discussed this a little bit. I think the answer 
probably lies in the definition of the PCI request. In the current 
implementation of _translate_alias_to_requests(), a new property (assume it's 
called requestor_id) may be added to the PCI request. And this is how it works:
   -- add a requestor_id to the request spec, and return it to the 
caller
   -- when a device is allocated, a mapping from requestor_id to the PCI 
device can be established
   -- the requestor later can retrieve the device by calling something 
like get_pci_device(requestor_id)

The requestor id could be a UUID that is generated by the python uuid module.

In the neutron SRIOV case, we need to create a PCI request per NIC (or per 
requested_network). Therefore, the generic PCI module may provide an API so 
that we can do so. Such an API may look like:
  create_pci_request(count, request_spec). The request_spec is a set 
of key/value pairs. It returns a requestor_id.

Also, if the PCI flavor API is available later, I guess we can have an API like:
  create_pci_request_from_pci_flavor(count, pci_flavor). It returns 
a requestor_id as well.
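
A minimal Python sketch of that flow (create_pci_request and get_pci_device
are the names proposed above; the in-memory bookkeeping is purely
illustrative):

import uuid

_requests = {}     # requestor_id -> (count, request_spec)
_allocations = {}  # requestor_id -> [pci_device, ...]

def create_pci_request(count, request_spec):
    # request_spec: key/value pairs, e.g. {'physical_network': 'physnet1'}
    requestor_id = str(uuid.uuid4())
    _requests[requestor_id] = (count, request_spec)
    return requestor_id

def record_allocation(requestor_id, pci_device):
    # called when a device is allocated for this request
    _allocations.setdefault(requestor_id, []).append(pci_device)

def get_pci_device(requestor_id):
    # the requestor retrieves the device(s) allocated to its request
    return _allocations.get(requestor_id, [])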

Let me know what you think.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-28 Thread Robert Li (baoli)
Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-28 Thread Robert Li (baoli)
Hi,

I added a few comments in this wiki that Yongli came up with:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

Please check it out and look for Robert in the wiki.

Thanks,
Robert

On 1/21/14 9:55 AM, Robert Li (baoli) ba...@cisco.com wrote:

Yunhong, 

Just trying to understand your use case:
-- a VM can only work with cards from vendor V1
-- a VM can work with cards from both vendor V1 and V2

  So stats in the two flavors will overlap in the PCI flavor solution.
I'm just trying to say that this is something that needs to be properly
addressed.


Just for the sake of discussion, another solution to meeting the above
requirement is to be able to say in the nova flavor's extra-spec

   encrypt_card = card from vendor V1 OR encrypt_card = card from
vendor V2


In other words, this can be solved in the nova flavor, rather than
introducing a new flavor.

Thanks,
Robert
   

On 1/17/14 7:03 PM, yunhong jiang yunhong.ji...@linux.intel.com wrote:

On Fri, 2014-01-17 at 22:30 +, Robert Li (baoli) wrote:
 Yunhong,
 
 I'm hoping that these comments can be directly addressed:
   a practical deployment scenario that requires arbitrary
 attributes.

I'm just strongly against supporting only one attribute (your PCI
group) for scheduling and management; that's really TOO limited.

A simple scenario is, I have 3 encryption cards:
  Card 1 (vendor_id is V1, device_id = 0xa)
  Card 2 (vendor_id is V1, device_id = 0xb)
  Card 3 (vendor_id is V2, device_id = 0xb)

  I have two images. One image only supports Card 1, and another image
supports Card 1/3 (or any other combination of the 3 card types). I don't
think only one attribute will meet such a requirement.

As to arbitrary attributes versus a limited list of attributes, my opinion
is: since there are so many types of PCI devices and so many potential PCI
device usages, supporting arbitrary attributes will make our effort more
flexible, if we can push the implementation into the tree.

   detailed design on the following (that also take into account
 the
 introduction of predefined attributes):
 * PCI stats report since the scheduler is stats based

I don't think there is much difference from the current implementation.

 * the scheduler in support of PCI flavors with arbitrary
 attributes and potential overlapping.

As Ian said, we need to make sure the pci_stats and the PCI flavor have the
same set of attributes, so I don't think there is much difference from the
current implementation.

   networking requirements to support multiple provider
 nets/physical
 nets

Can't the extra info resolve this issue? Can you elaborate on the issue?

Thanks
--jyh
 
 I guess that the above will become clear as the discussion goes on.
 And we
 also need to define the deliverables.
  
 Thanks,
 Robert 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Robert Li (baoli)
Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. For lack 
of a proper name, let's call them compute nodes with hybrid NIC support, or 
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud. 
But it may be useful in the lab to benchmark the performance differences 
between SRIOV, non-SRIOV, and the coexistence of both.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require an SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports SRIOV ports only. Since the neutron 
plugin runs on the controller, port-create would succeed unless neutron knows 
that the host doesn't support non-SRIOV ports. But connectivity on the node 
would not be established, since no agent is running on that host to establish 
such connectivity.

Irena brought up the idea of using host aggregates. This requires creating a 
non-SRIOV host aggregate and using that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.
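
For illustration, a rough sketch of the kind of scheduler-filter constraint
described here (the class shape mimics a nova scheduler filter, but the
supports_non_sriov attribute and the overall logic are invented for this
example, not taken from the actual patch):

class PciPassthroughFilter(object):
    # toy stand-in for nova's PCI passthrough filter

    def host_passes(self, host_state, filter_properties):
        pci_requests = filter_properties.get('pci_requests')
        if not pci_requests:
            # No PCI/SRIOV devices requested: keep the VM off
            # SRIOV-only nodes so its virtio ports can be wired up.
            return getattr(host_state, 'supports_non_sriov', True)
        return host_state.pci_stats.support_requests(pci_requests)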

The consensus seems to be to have a better solution in a later release. And for 
now, people can either use host aggregates or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) 
ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) 
ba...@cisco.com wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let’s try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things at the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless of whether it's PCI flavor or host aggregate or something else, 
how to use it to specify an SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Robert Li (baoli)
Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking
a proper name, let's call them compute nodes with hybrid NIC support, or
simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud.
But it may be useful in the lab to benchmark the performance differences
between SRIOV, non-SRIOV, and the coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real cloud,
since one can define a VM with one vNIC attached via an SR-IOV virtual
function while the other goes via some vSwitch.
But it definitely makes sense to land a VM with ‘virtio’ vNICs only on a
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to
place it on a compute node that supports SRIOV ports only. Since the neutron
plugin runs on the controller, port-create would succeed unless neutron knows
that the host doesn't support non-SRIOV ports. But connectivity on the node
would never be established, since no agent is running on that host to set it up.
[IrenaB]
With the ML2 plugin as the neutron backend, it will fail to bind the port if
no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the
host to support SRIOV, would binding succeed in the ML2 plugin for the above
'nova boot' request?

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregates. This requires creating a
non-SRIOV host aggregate and using it in the above 'nova boot' command. It
should work.

The patch I had posted introduced a new constraint into the existing PCI
passthrough filter.

The consensus seems to be to aim for a better solution in a later release. For
now, people can either use host aggregates or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) <ba...@cisco.com> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) <ba...@cisco.com> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky <ire...@mellanox.com> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion until next week.
Let’s try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move
forward with the SRIOV side of things at the same time. I know that tomorrow's
IRC will be focusing on the BP review, and it may well continue into Thursday.
Therefore, let's start discussing the SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless of whether it's PCI flavor, host aggregate, or something else,
how to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Robert Li (baoli)
OK, this is something that's going to be added in ML2. I was looking at the
bind_port() routine in mech_agent.py. The routine check_segment_for_agent()
seems to perform a static check, so we are going to add something like
check_vnic_type_for_agent(), I guess? Is the pairing of an agent with the mech
driver predetermined? The routine bind_port() just throws warnings, though.
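
To make that concrete, here is a rough sketch of how the check could slot into
the agent-based bind_port() loop. check_vnic_type_for_agent() and the
'supported_vnic_types' configuration key are assumptions of mine, by analogy
with check_segment_for_agent(); the binding call mirrors what mech_agent.py
does today.

    # Sketch only: additions to the agent-based mechanism driver base class.
    from neutron.plugins.ml2 import driver_api as api

    class SketchedAgentMechanismDriver(object):

        def check_vnic_type_for_agent(self, vnic_type, agent):
            """Return True if this agent can wire up the given vnic type."""
            # Hypothetical key the agent would report in its configurations.
            supported = agent.get('configurations', {}).get(
                'supported_vnic_types', ['normal'])
            return vnic_type in supported

        def bind_port(self, context):
            vnic_type = context.current.get('binding:vnic_type', 'normal')
            for agent in context.host_agents(self.agent_type):
                if not agent['alive']:
                    continue
                if not self.check_vnic_type_for_agent(vnic_type, agent):
                    continue
                for segment in context.network.network_segments:
                    if self.check_segment_for_agent(segment, agent):
                        # Same binding call mech_agent.py makes today.
                        context.set_binding(segment[api.ID], self.vif_type,
                                            self.cap_port_filter)
                        return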

In any case, this happens after the scheduler has already decided to place the
VM onto the host.

Maybe not for now, but we need to consider how to support the hybrid compute
nodes. Would an agent be able to support multiple vnic types? Or is it possible
to reuse the OVS agent while at the same time running another agent to support
SRIOV? Any thoughts?

--Robert

On 1/27/14 4:01 PM, Irena Berezovsky <ire...@mellanox.com> wrote:

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky <ire...@mellanox.com> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking
a proper name, let's call them compute nodes with hybrid NIC support, or
simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud.
But it may be useful in the lab to benchmark the performance differences
between SRIOV, non-SRIOV, and the coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real cloud,
since one can define a VM with one vNIC attached via an SR-IOV virtual
function while the other goes via some vSwitch.
But it definitely makes sense to land a VM with ‘virtio’ vNICs only on a
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to
place it on a compute node that supports SRIOV ports only. Since the neutron
plugin runs on the controller, port-create would succeed unless neutron knows
that the host doesn't support non-SRIOV ports. But connectivity on the node
would never be established, since no agent is running on that host to set it up.
[IrenaB]
With the ML2 plugin as the neutron backend, it will fail to bind the port if
no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the
host to support SRIOV, would binding succeed in the ML2 plugin for the above
'nova boot' request?
[IrenaB] I think by adding the vnic_type, as we plan, the mechanism driver will
bind the port only if it supports the vnic_type and there is a live agent on
this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregates. This requires creating a
non-SRIOV host aggregate and using it in the above 'nova boot' command. It
should work.

The patch I had posted introduced a new constraint into the existing PCI
passthrough filter.

The consensus seems to be to aim for a better solution in a later release. For
now, people can either use host aggregates or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) <ba...@cisco.com> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) <ba...@cisco.com> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky <ire...@mellanox.com> wrote:

Hi Robert, all,
I would suggest

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-24 Thread Robert Li (baoli)
Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, Robert Li (baoli) <ba...@cisco.com> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky <ire...@mellanox.com> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion until next week.
Let’s try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move
forward with the SRIOV side of things at the same time. I know that tomorrow's
IRC will be focusing on the BP review, and it may well continue into Thursday.
Therefore, let's start discussing the SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless of whether it's PCI flavor, host aggregate, or something else,
how to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-22 Thread Robert Li (baoli)
Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, Irena Berezovsky <ire...@mellanox.com> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion until next week.
Let’s try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move
forward with the SRIOV side of things at the same time. I know that tomorrow's
IRC will be focusing on the BP review, and it may well continue into Thursday.
Therefore, let's start discussing the SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless of whether it's PCI flavor, host aggregate, or something else,
how to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-21 Thread Robert Li (baoli)
Just one comment:
  The devices allocated for an instance are immediately known after
the domain is created. Therefore it's possible to do a port update and
have the device configured while the instance is booting.
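
As a rough illustration, the allocated addresses can be read back through the
libvirt python bindings as soon as the domain exists; only the XML parsing
here is concrete, while the instance name and what exactly would be sent to
neutron in the port update are placeholders:

    # Sketch: read the allocated PCI devices back from a new domain.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    root = ElementTree.fromstring(dom.XMLDesc(0))

    for hostdev in root.findall("./devices/hostdev[@type='pci']"):
        addr = hostdev.find("./source/address")
        bdf = '%04x:%02x:%02x.%x' % (
            int(addr.get('domain'), 16), int(addr.get('bus'), 16),
            int(addr.get('slot'), 16), int(addr.get('function'), 16))
        # A neutron port-update carrying this address would go here.
        print('allocated device: %s' % bdf)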

--Robert

On 1/19/14 2:15 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, Yunhong,
Although the network XML solution (option 1) is very elegant, it has one major
disadvantage. As Robert mentioned, with the network XML there is no way to know
which SR-IOV PCI device was actually allocated. When neutron is responsible for
setting the networking configuration, managing admin status, and setting
security groups, it must be able to identify the SR-IOV PCI device in order to
apply the configuration. Within the current libvirt network XML implementation,
that does not seem possible.
Between options (2) and (3) I do not have any preference; it should be as
simple as possible.
Option (3), which I raised, can be achieved by renaming the Virtual Function's
network interface via 'ip link set <dev> name <new-name>'. The interface's
logical name can be based on the neutron port UUID. This will allow neutron to
discover devices, if the backend plugin requires it. When a VM migrates, a
suitable Virtual Function on the target node should be allocated, and then its
corresponding network interface should be renamed to the same logical name.
This can be done without rebooting the system. We still need to check how the
Virtual Function's network interface can be returned to its original name once
it is no longer used as a VM vNIC.

Regards,
Irena 

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Friday, January 17, 2014 9:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
support

Robert, thanks for your long reply. Personally I'd prefer option 2/3 as
it keeps Nova the only entity for PCI management.

Glad you are OK with Ian's proposal and that we have a solution to resolve the
libvirt network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support
 
 Yunhong,
 
 Thank you for bringing that up on the live migration support. In
 addition to the two solutions you mentioned, Irena has a different
 solution. Let me put them all here again:
 1. network xml/group based solution.
In this solution, each host that supports a provider
 net/physical net can define a SRIOV group (it's hard to avoid the term
 as you can see from the suggestion you made based on the PCI flavor
 proposal). For each SRIOV group supported on a compute node, a network
 XML will be created the first time the nova compute service is running
 on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as
 the way to support live migration with SRIOV. In addition, a network
 xml is nicely mapped into a provider net.
 2. network xml per PCI device based solution
This is the solution you brought up in this email, and Ian
 mentioned this to me as well. In this solution, a network xml is
 created when a VM is created. The network XML needs to be removed once
 the VM is removed. This hasn't been tried out as far as I know.
 3. interface xml/interface rename based solution
Irena brought this up. In this solution, the ethernet interface
 name corresponding to the PCI device attached to the VM needs to be
 renamed. One way to do so without requiring system reboot is to change
 the udev rules file for interface renaming, followed by a udev
 reload.
 
 Now, with the first solution, Nova doesn't seem to have control over
 or visibility of the PCI device allocated for the VM before the VM is
 launched. This needs to be confirmed with the libvirt support and see
 if such capability can be provided. This may be a potential drawback
 if a neutron plugin requires detailed PCI device information for
operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't
 need this information because the device configuration can be done by
 libvirt invoking the PCI device driver.
 
 The other two solutions are similar. For example, you can view the
 second solution as one way to rename an interface, or camouflage an
 interface under a network name. They all require additional work
 before the VM is created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution
 with some predefined group attribute, I think it definitely can be
 done. As I have pointed it out earlier, the PCI flavor proposal is
 actually a generalized version of the PCI group. In other words, in
 the PCI group

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-21 Thread Robert Li (baoli)
Yunhong, 

Just trying to understand your use case:
-- a VM can only work with cards from vendor V1
-- a VM can work with cards from both vendor V1 and V2

  So stats in the two flavors will overlap in the PCI flavor solution.
I'm just trying to say that this is something that needs to be properly
addressed.


Just for the sake of discussion, another solution to meeting the above
requirement is to be able to say in the nova flavor's extra specs:

   encrypt_card = card from vendor V1 OR encrypt_card = card from
vendor V2


In other words, this can be solved in the nova flavor, rather than
introducing a new flavor.
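
For illustration only: nova's extra-spec matching already understands an <or>
operator, so with a hypothetical capability key (encrypt_card is not a real
capability, and a matching filter would have to expose it) this could be
written as a single extra spec:

    nova flavor-key m1.encrypt set "capabilities:encrypt_card=<or> V1 <or> V2"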

Thanks,
Robert
   

On 1/17/14 7:03 PM, yunhong jiang yunhong.ji...@linux.intel.com wrote:

On Fri, 2014-01-17 at 22:30 +0000, Robert Li (baoli) wrote:
 Yunhong,
 
 I'm hoping that these comments can be directly addressed:
   a practical deployment scenario that requires arbitrary
 attributes.

I'm just strongly against supporting only one attribute (your PCI
group) for scheduling and management; that's really TOO limited.

A simple scenario is, I have 3 encryption cards:
   Card 1 (vendor_id is V1, device_id=0xa)
   Card 2 (vendor_id is V1, device_id=0xb)
   Card 3 (vendor_id is V2, device_id=0xb)

   I have two images. One image supports only Card 1 and another image
supports Cards 1/3 (or any other combination of the 3 card types). I don't
think only one attribute will meet such a requirement.

As to arbitrary attributes versus a limited list of attributes, my opinion is
that, as there are so many types of PCI devices and so many potential PCI
device usages, supporting arbitrary attributes will make our effort more
flexible, if we can push the implementation into the tree.

   detailed design on the following (that also takes into account the
 introduction of predefined attributes):
 * PCI stats report since the scheduler is stats based

I don't think there is much difference from the current implementation.

 * the scheduler in support of PCI flavors with arbitrary
 attributes and potential overlapping.

As Ian said, we need to make sure the pci_stats and the PCI flavor have the
same set of attributes, so I don't think there is much difference from the
current implementation.

   networking requirements to support multiple provider
 nets/physical
 nets

Can't the extra info resolve this issue? Can you elaborate on the issue?

Thanks
--jyh
 
 I guess that the above will become clear as the discussion goes on. And we
 also need to define the deliverables.
  
 Thanks,
 Robert 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] PCI passthrough SRIOV

2014-01-21 Thread Robert Li (baoli)
Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move
forward with the SRIOV side of things at the same time. I know that tomorrow's
IRC will be focusing on the BP review, and it may well continue into Thursday.
Therefore, let's start discussing the SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless of whether it's PCI flavor, host aggregate, or something else,
how to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work
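
To seed the discussion, here is one possible shape of the user-facing side.
This is a strawman only; neither the vnic_type port attribute nor any new
--nic parameter exists today:

    # Strawman: mark the port as SRIOV at creation time...
    neutron port-create private-net --binding:vnic_type direct
    # ...and boot against the pre-created port.
    nova boot --flavor m1.large --image image-uuid --nic port-id=port-uuid vm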

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-17 Thread Robert Li (baoli)
Yunhong,

Thank you for bringing that up on the live migration support. In addition
to the two solutions you mentioned, Irena has a different solution. Let me
put them all here again:
1. network xml/group based solution.
   In this solution, each host that supports a provider net/physical
net can define a SRIOV group (it's hard to avoid the term as you can see
from the suggestion you made based on the PCI flavor proposal). For each
SRIOV group supported on a compute node, a network XML will be created the
first time the nova compute service runs on that node.
* nova will conduct scheduling, but not PCI device allocation
* it's a simple and clean solution, documented in libvirt as the
way to support live migration with SRIOV. In addition, a network xml is
nicely mapped into a provider net.
2. network xml per PCI device based solution
   This is the solution you brought up in this email, and Ian
mentioned this to me as well. In this solution, a network xml is created
when a VM is created. The network XML needs to be removed once the VM is
removed. This hasn't been tried out as far as I know.
3. interface xml/interface rename based solution
   Irena brought this up. In this solution, the ethernet interface
name corresponding to the PCI device attached to the VM needs to be
renamed. One way to do so without requiring a system reboot is to change the
udev rules file for interface renaming, followed by a udev reload.
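
To make solution 1 above a bit more concrete, the per-group network XML could
be defined through the libvirt python bindings roughly as below; 'physnet1'
and 'eth3' are placeholders, and the XML follows libvirt's documented hostdev
forward mode for VF pools:

    # Sketch of solution 1: one libvirt network per SRIOV group, backed
    # by the virtual functions of a physical function.
    import libvirt

    NET_XML = """
    <network>
      <name>sriov-physnet1</name>
      <forward mode='hostdev' managed='yes'>
        <pf dev='eth3'/>
      </forward>
    </network>
    """

    conn = libvirt.open('qemu:///system')
    net = conn.networkDefineXML(NET_XML)
    net.setAutostart(True)
    net.create()
    # A guest interface then just references the network, and libvirt
    # picks a free VF at boot (or migration) time:
    #   <interface type='network'>
    #     <source network='sriov-physnet1'/>
    #   </interface>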

Now, with the first solution, Nova doesn't seem to have control over or
visibility of the PCI device allocated for the VM before the VM is
launched. This needs to be confirmed with the libvirt support and see if
such capability can be provided. This may be a potential drawback if a
neutron plugin requires detailed PCI device information for operation.
Irena may provide more insight into this. Ideally, neutron shouldn't need
this information because the device configuration can be done by libvirt
invoking the PCI device driver.

The other two solutions are similar. For example, you can view the second
solution as one way to rename an interface, or camouflage an interface
under a network name. They all require additional work before the VM is
created and after the VM is removed.
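
For solution 3, the rename itself is a small operation. A sketch, assuming we
shell out to ip and derive the name from the neutron port UUID (the naming
scheme is illustrative):

    # Sketch of solution 3: rename a VF's netdev after the neutron port.
    import subprocess

    def rename_vf(old_name, port_id):
        # 'port-' plus 10 hex chars stays within the kernel's 15-char
        # limit on interface names.
        new_name = 'port-' + port_id.replace('-', '')[:10]
        # The link must be down while it is renamed.
        subprocess.check_call(['ip', 'link', 'set', 'dev', old_name, 'down'])
        subprocess.check_call(['ip', 'link', 'set', 'dev', old_name,
                               'name', new_name])
        subprocess.check_call(['ip', 'link', 'set', 'dev', new_name, 'up'])
        return new_name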

I also agree with you that we should take a look at XenAPI on this.


With regard to your suggestion on how to implement the first solution with
some predefined group attribute, I think it definitely can be done. As I
have pointed out earlier, the PCI flavor proposal is actually a
generalized version of the PCI group. In other words, in the PCI group
proposal, we have one predefined attribute called PCI group, and
everything else works on top of that. In the PCI flavor proposal,
attribute is arbitrary. So certainly we can define a particular attribute
for networking, which let's temporarily call sriov_group. But I can see
with this idea of predefined attributes, more of them will be required by
different types of devices in the future. I'm sure it will keep us busy,
although I'm not sure in a good way.

I was expecting that you or someone else could provide a practical deployment
scenario that would justify the flexibilities and the complexities.
Although I'd prefer to keep it simple and generalize it later once a
particular requirement is clearly identified, I'm fine with going with it if
that's what most of the folks want to do.

--Robert



On 1/16/14 8:36 PM, yunhong jiang yunhong.ji...@linux.intel.com wrote:

On Thu, 2014-01-16 at 01:28 +0100, Ian Wells wrote:
 To clarify a couple of Robert's points, since we had a conversation
 earlier:
 On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
   ---  do we agree that BDF address (or device id, whatever
 you call it), and node id shouldn't be used as attributes in
 defining a PCI flavor?
 
 
 Note that the current spec doesn't actually exclude it as an option.
 It's just an unwise thing to do.  In theory, you could elect to define
 your flavors using the BDF attribute but determining 'the card in this
 slot is equivalent to all the other cards in the same slot in other
 machines' is probably not the best idea...  We could lock it out as an
 option or we could just assume that administrators wouldn't be daft
 enough to try.
 
 
 * the compute node needs to know the PCI flavor.
 [...] 
   - to support live migration, we need to use
 it to create network xml
 
 
 I didn't understand this at first and it took me a while to get what
 Robert meant here.
 
 This is based on Robert's current code for macvtap based live
 migration.  The issue is that if you wish to migrate a VM and it's
 tied to a physical interface, you can't guarantee that the same
 physical interface is going to be used on the target machine, but at
 the same time you can't change the libvirt.xml as it comes over with
 the migrating machine.  The answer is to define a network

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-17 Thread Robert Li (baoli)
Yunhong,

I'm hoping that these comments can be directly addressed:
  a practical deployment scenario that requires arbitrary attributes.
  detailed design on the following (that also takes into account the
introduction of predefined attributes):
* PCI stats report since the scheduler is stats based
* the scheduler in support of PCI flavors with arbitrary
attributes and potential overlapping.
  networking requirements to support multiple provider nets/physical
nets

I guess that the above will become clear as the discussion goes on. And we
also need to define the deliverables.
 
Thanks,
Robert

On 1/17/14 2:02 PM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Robert, thanks for your long reply. Personally I'd prefer option 2/3 as
it keeps Nova the only entity for PCI management.

Glad you are OK with Ian's proposal and that we have a solution to resolve the
libvirt network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support
 
 Yunhong,
 
 Thank you for bringing that up on the live migration support. In
addition
 to the two solutions you mentioned, Irena has a different solution. Let
me
 put them all here again:
 1. network xml/group based solution.
In this solution, each host that supports a provider net/physical
 net can define a SRIOV group (it's hard to avoid the term as you can see
 from the suggestion you made based on the PCI flavor proposal). For each
 SRIOV group supported on a compute node, a network XML will be
 created the
 first time the nova compute service is running on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as the
 way to support live migration with SRIOV. In addition, a network xml is
 nicely mapped into a provider net.
 2. network xml per PCI device based solution
This is the solution you brought up in this email, and Ian
 mentioned this to me as well. In this solution, a network xml is created
 when a VM is created. The network XML needs to be removed once the
 VM is
 removed. This hasn't been tried out as far as I know.
 3. interface xml/interface rename based solution
Irena brought this up. In this solution, the ethernet interface
 name corresponding to the PCI device attached to the VM needs to be
 renamed. One way to do so without requiring system reboot is to change
 the
 udev rules file for interface renaming, followed by a udev reload.
 
 Now, with the first solution, Nova doesn't seem to have control over or
 visibility of the PCI device allocated for the VM before the VM is
 launched. This needs to be confirmed with the libvirt support and see if
 such capability can be provided. This may be a potential drawback if a
 neutron plugin requires detailed PCI device information for operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't
need
 this information because the device configuration can be done by libvirt
 invoking the PCI device driver.
 
 The other two solutions are similar. For example, you can view the
second
 solution as one way to rename an interface, or camouflage an interface
 under a network name. They all require additional work before the VM is
 created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution
with
 some predefined group attribute, I think it definitely can be done. As I
 have pointed out earlier, the PCI flavor proposal is actually a
 generalized version of the PCI group. In other words, in the PCI group
 proposal, we have one predefined attribute called PCI group, and
 everything else works on top of that. In the PCI flavor proposal,
 attribute is arbitrary. So certainly we can define a particular
attribute
 for networking, which let's temporarily call sriov_group. But I can see
 with this idea of predefined attributes, more of them will be required
by
 different types of devices in the future. I'm sure it will keep us busy
 although I'm not sure in a good way.
 
 I was expecting that you or someone else could provide a practical deployment
 scenario that would justify the flexibilities and the complexities.
 Although I'd prefer to keep it simple and generalize it later once a
 particular requirement is clearly identified, I'm fine with going with it if
 that's what most of the folks want to do.
 
 --Robert
 
 
 
 On 1/16/14 8:36 PM, yunhong jiang yunhong.ji...@linux.intel.com
 wrote:
 
 On Thu, 2014-01-16 at 01:28 +0100, Ian Wells wrote:
  To clarify a couple of Robert's points, since we had a conversation
  earlier:
  On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote

[openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-15 Thread Robert Li (baoli)
Hi Folks,

In light of today's IRC meeting, and for the purpose of moving this forward,
I'm fine with going with the following if that's what everyone wants:

 
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit

But with some concerns and reservations.

  ---  I don't expect everyone to agree on this. But I think the proposal is 
much more complicated in terms of implementation and administration.
  ---  I'd like to see a practical deployment scenario that only the PCI flavor
can support but the PCI group can't, which I guess would justify the complexities.
  ---  do we agree that BDF address (or device id, whatever you call it), and 
node id shouldn't be used as attributes in defining a PCI flavor?
  ---  I'd like to see a detailed (not vague) design on the following:
* PCI stats report since the scheduler is stats based
* the scheduler in support of PCI flavors with arbitrary attributes.
  ---  I'd like to see how this can be mapped into SRIOV support:
* the compute node needs to know the PCI flavor. A couple of reasons 
for this:
  - the neutron plugin may need this to associate with a 
particular subsystem (or physical network)
  - to support live migration, we need to use it to create 
network xml
* We also need to be able to do auto discovery so that we can support 
live migration with SRIOV
* use the PCI flavor in the --nic option and neutron commands
  --- Just want to point out that this PCI flavor doesn't seem to be the same 
PCI flavor that John was talking about in one of his emails.

I'd like to also point out that if you consider a PCI group as an attribute (in 
terms of the proposal), then the PCI group design is a special (or degenerate)
case of the proposed design. The significant difference here is that with PCI 
group, its semantics is clear and well defined, and everything else works on 
top of it. An attribute is arbitrary and open for interpretation. In terms of 
getting things done ASAP, the PCI group is actually the way to go.

I guess that we will take a phased approach to implementing it so that we can
get something done in Icehouse. However, I'd like to see the neutron
requirements satisfied one way or the other in the first phase.

Maybe we can continue the IRC tomorrow and talk about the above. Again, let's 
move on if that's really where we want to go.

thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

