Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-29 Thread Sean M. Collins
On Mon, Jul 20, 2015 at 10:19:02AM EDT, Jim Rollenhagen wrote:
 On Mon, Jul 20, 2015 at 12:56:10PM +, Sean M. Collins wrote:
  On Sun, Jul 19, 2015 at 02:26:32PM EDT, Jim Rollenhagen wrote:
   For a little background, this patch came from code that is running in
   production today, where we're trunking two VLANs down to the host -- it
   isn't a theoretical use case.
  
  Have you taken a look at the vlan transparent API extension, or are you
  using it currently?
  
  http://specs.openstack.org/openstack/neutron-specs/specs/liberty/vlan-aware-vms.html
 
 Yes, we're looking at it for the future, but our current code has been
 in production for about a year, before this work was even being looked
 at. I wonder if the best route is to defer this Nova work until the VLAN
 extension is completed in Neutron, or to support both eventually.
 

Just wanted to close the loop on this and mention, for anyone in the
Ironic project who wasn't present at the Neutron meeting, that I
suggested this approach. I don't know if it will be the way to achieve
the feature, but it at least looked like the API sort of matched up with
what Ironic is looking for. We just have to see what progress has been
made on the vlan transparent API extension.

http://eavesdrop.openstack.org/meetings/networking/2015/networking.2015-07-28-14.01.html

-- 
Sean M. Collins



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-23 Thread Jim Rollenhagen
On Thu, Jul 16, 2015 at 04:23:29PM -0400, Mathieu Gagné wrote:
 Hi,
 
 I stumbled on this review [1], which proposes adding info about provider
 networks in network_data.json.
 
 snip
 
 [1] https://review.openstack.org/#/c/152703/5
 

Just to loop back on this - we talked about it a bit at the Nova
midcycle. We agreed that the provider networks extension should indicate
whether this should be passed to the instance or not. As far as how it
 decides to do that, we're going to discuss that at the next Neutron
meeting.[0]

// jim

[0] https://wiki.openstack.org/wiki/Network/Meetings
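
To make the data under discussion concrete, a network entry in
network_data.json with provider details exposed might look roughly like
the sketch below (written as the Python dict Nova would serialize). The
keys beyond id/link/type are an assumption for illustration, not the
schema from the patch under review:

    # Rough sketch only: hypothetical network_data.json "networks" entry.
    # "vlan_id" and "should_create_vlan" are illustrative names, not the
    # fields actually proposed in https://review.openstack.org/#/c/152703/.
    example_network_entry = {
        "id": "network1",
        "link": "interface0",
        "type": "ipv4_dhcp",
        # provider details the thread is debating whether to expose:
        "vlan_id": 401,               # segmentation ID of the provider network
        "should_create_vlan": True,   # hint that the guest must tag its own traffic
    }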



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-20 Thread Sean M. Collins
On Sun, Jul 19, 2015 at 02:26:32PM EDT, Jim Rollenhagen wrote:
 For a little background, this patch came from code that is running in
 production today, where we're trunking two VLANs down to the host -- it
 isn't a theoretical use case.

Have you taken a look at the vlan transparent API extension, or are you
using it currently?

http://specs.openstack.org/openstack/neutron-specs/specs/liberty/vlan-aware-vms.html

-- 
Sean M. Collins



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-20 Thread Jim Rollenhagen
On Mon, Jul 20, 2015 at 12:56:10PM +, Sean M. Collins wrote:
 On Sun, Jul 19, 2015 at 02:26:32PM EDT, Jim Rollenhagen wrote:
  For a little background, this patch came from code that is running in
  production today, where we're trunking two VLANs down to the host -- it
  isn't a theoretical use case.
 
 Have you taken a look at the vlan transparent API extension, or are you
 using it currently?
 
 http://specs.openstack.org/openstack/neutron-specs/specs/liberty/vlan-aware-vms.html

Yes, we're looking at it for the future, but our current code has been
in production for about a year, before this work was even being looked
at. I wonder if the best route is to defer this Nova work until the VLAN
extension is completed in Neutron, or to support both eventually.

// jim



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-20 Thread Clint Byrum
Excerpts from Sam Stoelinga's message of 2015-07-18 05:39:23 -0700:
 +1 on Kevin Benton's comments.
 Ironic should have integration with switches where the switches are SDN
 compatible. The individual bare metal node should not care which vlan,
 vxlan or other translation is programmed at the switch. The individual bare
 metal nodes just knows I have 2 NICs and these are on Neutron network
 x. The SDN controller is responsible for making sure the baremetal node
 only has access to Neutron Network x through changing the switch
 configuration dynamically.
 
 Making an individual baremetal have access to several vlans and let the
 baremetal node configure a vlan tag at the baremetal node itself is a big
 security risk and should not be supported. Unless an operator specifically
 configures a baremetal node to be vlan trunk.
 

Here's a baremetal use case we have currently in infra-cloud:

* network0 is the untagged VLAN, and is unroutable internal networking.
* network1 is tagged and is routable to the internet.

All of the baremetal machines are wired for both vlans, and will want to
communicate with machines on both vlans as well as the internet. Without
vlan info coming from neutron/nova, we have to manually inject
information about network1. This requires the node creator to have
explicit knowledge of the network, which is a bit frustrating since
neutron already knows that network1 is on a particular tagged vlan
and could easily share this with the node if that is what we need.

I'd love to say we could change how this works and just multi-home on
a single vlan. However, I'm pretty sure we won't be the last people to
want to boot machines with Ironic in an environment that does not have
the luxury of rewiring or reconfiguring all of the switch ports.
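
To make that concrete, if neutron/nova did pass the tag along, the
node-side consumer could derive the interface layout mechanically. A
minimal sketch follows (not infra-cloud's actual tooling; the metadata
keys are assumed, matching the hypothetical example above):

    import json

    # Sketch: plan one untagged interface (network0) and one VLAN
    # sub-interface (network1) from hypothetical metadata carrying tags.
    def planned_interfaces(metadata_path="network_data.json"):
        with open(metadata_path) as f:
            data = json.load(f)
        plan = []
        for net in data.get("networks", []):
            vlan = net.get("vlan_id")
            if vlan is None:
                plan.append("eth0")            # untagged: use the NIC directly
            else:
                plan.append("eth0.%d" % vlan)  # tagged: e.g. eth0.401 for network1
        return plan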



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-20 Thread Devananda van der Veen
On Sat, Jul 18, 2015 at 5:42 AM Sam Stoelinga sammiest...@gmail.com wrote:

 +1 on Kevin Benton's comments.
 Ironic should have integration with switches where the switches are SDN
 compatible. The individual bare metal node should not care which vlan,
 vxlan or other translation is programmed at the switch. The individual bare
  metal nodes just knows I have 2 NICs and these are on Neutron network
 x. The SDN controller is responsible for making sure the baremetal node
 only has access to Neutron Network x through changing the switch
 configuration dynamically.

 Making an individual baremetal have access to several vlans and let the
 baremetal node configure a vlan tag at the baremetal node itself is a big
 security risk and should not be supported.


I was previously of this opinion, and have changed my mind.

While I agree that this is true in a multi-tenant case and requires an
SDN-capable TOR, there are users (namely, openstack-infra) asking us to
support a single tenant with a statically-configured TOR, where cloud-init
is used to pass in the (external, unchangeable) VLAN configuration to the
bare metal instance.

-Deva



 Unless an operator specifically configures a baremetal node to be vlan
 trunk.

 Sam Stoelinga

 On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton blak...@gmail.com wrote:

  which requires VLAN info to be pushed to the host. I keep hearing bare
 metal will never need to know about VLANs so I want to quash that ASAP.

 That's leaking implementation details though if the bare metal host only
 needs to be on one network. It also creates a security risk if the bare
 metal node is untrusted.

 If the tagging is to make it so it can access multiple networks, then
 that makes sense for now but it should ultimately be replaced by the vlan
 trunk ports extension being worked on this cycle that decouples the
 underlying network transport from what gets tagged to the VM/bare metal.
 On Jul 17, 2015 11:47 AM, Jim Rollenhagen j...@jimrollenhagen.com
 wrote:

 On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
  Check out my comments on the review. Only Neutron knows whether or not
 an
  instance needs to do manual tagging based on the plugin/driver loaded.
 
  For example, Ironic/bare metal ports can be bound by neutron with a
 correct
  driver so they shouldn't get the VLAN information at the instance
 level in
  those cases. Nova has no way to know whether Neutron is configured
 this way
  so Neutron should have an explicit response in the port binding
 information
  indicating that an instance needs to tag.

 Agree. However, I just want to point out that there are neutron drivers
 that exist today[0] that support bonded NICs with trunked VLANs, which
 requires VLAN info to be pushed to the host. I keep hearing bare metal
 will never need to know about VLANs so I want to quash that ASAP.

 As far as Neutron sending the flag to decide whether the instance should
 tag packets, +1, I think that should work.

 // jim
 
  On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen 
 j...@jimrollenhagen.com
  wrote:
 
   On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
 On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There
   should
 be a way to hide this information if the instance does not
 require
   to
 configure vlan interfaces to make network functional.

 I just commented on the review, but the provider network API
 extension
 is admin only, most likely for the reasons that I think someone
 has
  already mentioned, that it exposes details of the physical
 network
 layout that should not be exposed to tenants.

 So, clearly, under some circumstances the network operator wants
 to
 expose this information, because there was the request for that
   feature.
 The question in my mind is what circumstances are those, and what
 additional information needs to be provided here.

 There is always a balance between the private cloud case which
 wants to
 enable more self service from users (and where the users are
 often also
 the operators), and the public cloud case where the users are
 outsiders
 and we want to hide as much as possible from them.

 For instance, would an additional attribute on a provider
 network that
 says this is cool to tell people about be an acceptable
 approach? Is
 there some other creative way to tell our infrastructure that
 these
 artifacts are meant to be exposed in this installation?

 Just kicking around ideas, because I know a pile of gate
 hardware for
 everyone to use is at the other side of answers to these
 questions. And
 given that we've been running full capacity for days now,
 keeping this
 ball moving forward would be great.
   
Maybe we just need to add policy around who gets to see that extra
detail, and maybe hide it by default?

Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-20 Thread Kevin Benton
Having Nova make assumptions isn't the right way to do this though. To
support this I would rather see an ML2 driver that informs Nova to pass
tagging info to the instance via the port binding info. Only neutron knows
which one is appropriate based on the network configuration.
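
A rough sketch of that idea, assuming the ML2 MechanismDriver interface
as it existed around Kilo/Liberty; the "vlan_tagging_required" key in
vif_details is invented for illustration and is not an existing Neutron
field:

    from neutron.plugins.ml2 import driver_api as api

    # Hypothetical mechanism driver that binds VLAN segments and tells Nova,
    # via binding:vif_details, that the instance must tag its own traffic.
    class TrunkedBaremetalMechanismDriver(api.MechanismDriver):
        def initialize(self):
            pass

        def bind_port(self, context):
            for segment in context.segments_to_bind:
                if segment[api.NETWORK_TYPE] == 'vlan':
                    vif_details = {
                        'vlan_tagging_required': True,            # invented flag
                        'vlan_id': segment[api.SEGMENTATION_ID],  # tag the guest should use
                    }
                    context.set_binding(segment[api.ID], 'other', vif_details)
                    return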
On Jul 20, 2015 1:19 PM, Devananda van der Veen devananda@gmail.com
wrote:



 On Sat, Jul 18, 2015 at 5:42 AM Sam Stoelinga sammiest...@gmail.com
 wrote:

 +1 on Kevin Benton's comments.
 Ironic should have integration with switches where the switches are SDN
 compatible. The individual bare metal node should not care which vlan,
 vxlan or other translation is programmed at the switch. The individual bare
  metal nodes just knows I have 2 NICs and these are on Neutron network
 x. The SDN controller is responsible for making sure the baremetal node
 only has access to Neutron Network x through changing the switch
 configuration dynamically.

 Making an individual baremetal have access to several vlans and let the
 baremetal node configure a vlan tag at the baremetal node itself is a big
 security risk and should not be supported.


 I was previously of this opinion, and have changed my mind.

 While I agree that this is true in a multi-tenant case and requires an
 SDN-capable TOR, there are users (namely, openstack-infra) asking us to
 support a single tenant with a statically-configured TOR, where cloud-init
 is used to pass in the (external, unchangeable) VLAN configuration to the
 bare metal instance.

 -Deva



 Unless an operator specifically configures a baremetal node to be vlan
 trunk.

 Sam Stoelinga

 On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton blak...@gmail.com wrote:

  which requires VLAN info to be pushed to the host. I keep hearing
 bare metal will never need to know about VLANs so I want to quash that
 ASAP.

 That's leaking implementation details though if the bare metal host only
 needs to be on one network. It also creates a security risk if the bare
 metal node is untrusted.

 If the tagging is to make it so it can access multiple networks, then
 that makes sense for now but it should ultimately be replaced by the vlan
 trunk ports extension being worked on this cycle that decouples the
 underlying network transport from what gets tagged to the VM/bare metal.
 On Jul 17, 2015 11:47 AM, Jim Rollenhagen j...@jimrollenhagen.com
 wrote:

 On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
  Check out my comments on the review. Only Neutron knows whether or
 not an
  instance needs to do manual tagging based on the plugin/driver loaded.
 
  For example, Ironic/bare metal ports can be bound by neutron with a
 correct
  driver so they shouldn't get the VLAN information at the instance
 level in
  those cases. Nova has no way to know whether Neutron is configured
 this way
  so Neutron should have an explicit response in the port binding
 information
  indicating that an instance needs to tag.

 Agree. However, I just want to point out that there are neutron drivers
 that exist today[0] that support bonded NICs with trunked VLANs, which
 requires VLAN info to be pushed to the host. I keep hearing bare metal
 will never need to know about VLANs so I want to quash that ASAP.

 As far as Neutron sending the flag to decide whether the instance should
 tag packets, +1, I think that should work.

 // jim
 
  On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen 
 j...@jimrollenhagen.com
  wrote:
 
   On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
 On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature.
 There
   should
 be a way to hide this information if the instance does not
 require
   to
 configure vlan interfaces to make network functional.

 I just commented on the review, but the provider network API
 extension
 is admin only, most likely for the reasons that I think
 someone has
  already mentioned, that it exposes details of the physical
 network
 layout that should not be exposed to tenants.

 So, clearly, under some circumstances the network operator
 wants to
 expose this information, because there was the request for that
   feature.
 The question in my mind is what circumstances are those, and
 what
 additional information needs to be provided here.

 There is always a balance between the private cloud case which
 wants to
 enable more self service from users (and where the users are
 often also
 the operators), and the public cloud case where the users are
 outsiders
 and we want to hide as much as possible from them.

 For instance, would an additional attribute on a provider
 network that
 says this is cool to tell people about be an acceptable
 approach? Is
 there some other creative way to tell our infrastructure that these
 artifacts are meant to be exposed in this installation?

Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-19 Thread Jim Rollenhagen
On Sat, Jul 18, 2015 at 08:39:23PM +0800, Sam Stoelinga wrote:
 +1 on Kevin Benton's comments.
 Ironic should have integration with switches where the switches are SDN
 compatible. The individual bare metal node should not care which vlan,
 vxlan or other translation is programmed at the switch. The individual bare
  metal nodes just knows I have 2 NICs and these are on Neutron network
 x. The SDN controller is responsible for making sure the baremetal node
 only has access to Neutron Network x through changing the switch
 configuration dynamically.
 
 Making an individual baremetal have access to several vlans and let the
 baremetal node configure a vlan tag at the baremetal node itself is a big
 security risk and should not be supported. Unless an operator specifically
 configures a baremetal node to be vlan trunk.

Right, trunking is the main use case here, and I think we should support
it. :)

To be clear, I'm not advocating that we always send the VLAN to the
instance. I agree that this patch isn't the right way to do it. But I
do think we need to consider that there are cases where we do need to
expose the VLAN, and we should support this.

For a little background, this patch came from code that is running in
production today, where we're trunking two VLANs down to the host -- it
isn't a theoretical use case.
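
For anyone trying to picture that deployment: with bonded NICs carrying
two trunked VLANs, the metadata would need to describe a bond plus two
VLAN sub-interfaces on top of it. A hypothetical sketch of the
corresponding network_data.json "links" section, written as a Python
literal (field names are in the spirit of the format, not the exact
schema from the patch):

    # Hypothetical link layout: two NICs bonded, two VLANs trunked over the bond.
    example_links = [
        {"id": "bond0", "type": "bond",
         "bond_links": ["eth0", "eth1"], "bond_mode": "802.3ad"},
        {"id": "bond0.100", "type": "vlan", "vlan_link": "bond0", "vlan_id": 100},
        {"id": "bond0.200", "type": "vlan", "vlan_link": "bond0", "vlan_id": 200},
    ]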

// jim

 
 Sam Stoelinga
 
 On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton blak...@gmail.com wrote:
 
   which requires VLAN info to be pushed to the host. I keep hearing bare
  metal will never need to know about VLANs so I want to quash that ASAP.
 
  That's leaking implementation details though if the bare metal host only
  needs to be on one network. It also creates a security risk if the bare
  metal node is untrusted.
 
  If the tagging is to make it so it can access multiple networks, then that
  makes sense for now but it should ultimately be replaced by the vlan trunk
  ports extension being worked on this cycle that decouples the underlying
  network transport from what gets tagged to the VM/bare metal.
  On Jul 17, 2015 11:47 AM, Jim Rollenhagen j...@jimrollenhagen.com
  wrote:
 
  On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
   Check out my comments on the review. Only Neutron knows whether or not
  an
   instance needs to do manual tagging based on the plugin/driver loaded.
  
   For example, Ironic/bare metal ports can be bound by neutron with a
  correct
   driver so they shouldn't get the VLAN information at the instance level
  in
   those cases. Nova has no way to know whether Neutron is configured this
  way
   so Neutron should have an explicit response in the port binding
  information
   indicating that an instance needs to tag.
 
  Agree. However, I just want to point out that there are neutron drivers
  that exist today[0] that support bonded NICs with trunked VLANs, which
  requires VLAN info to be pushed to the host. I keep hearing bare metal
  will never need to know about VLANs so I want to quash that ASAP.
 
  As far as Neutron sending the flag to decide whether the instance should
  tag packets, +1, I think that should work.
 
  // jim
  
   On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen 
  j...@jimrollenhagen.com
   wrote:
  
On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
 On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
  On 07/16/2015 06:06 PM, Sean M. Collins wrote:
  On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
  So it looks like there is a missing part in this feature. There
should
  be a way to hide this information if the instance does not
  require
to
  configure vlan interfaces to make network functional.
 
  I just commented on the review, but the provider network API
  extension
  is admin only, most likely for the reasons that I think someone
  has
   already mentioned, that it exposes details of the physical
  network
  layout that should not be exposed to tenants.
 
  So, clearly, under some circumstances the network operator wants
  to
  expose this information, because there was the request for that
feature.
  The question in my mind is what circumstances are those, and what
  additional information needs to be provided here.
 
  There is always a balance between the private cloud case which
  wants to
  enable more self service from users (and where the users are
  often also
  the operators), and the public cloud case where the users are
  outsiders
  and we want to hide as much as possible from them.
 
  For instance, would an additional attribute on a provider network
  that
  says this is cool to tell people about be an acceptable
  approach? Is
  there some other creative way to tell our infrastructure that
  these
  artifacts are meant to be exposed in this installation?
 
  Just kicking around ideas, because I know a pile of gate hardware
   for everyone to use is at the other side of answers to these questions.

Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-18 Thread Sam Stoelinga
+1 on Kevin Benton's comments.
Ironic should have integration with switches where the switches are SDN
compatible. The individual bare metal node should not care which vlan,
vxlan or other translation is programmed at the switch. The individual bare
 metal nodes just knows I have 2 NICs and these are on Neutron network
x. The SDN controller is responsible for making sure the baremetal node
only has access to Neutron Network x through changing the switch
configuration dynamically.

Making an individual baremetal have access to several vlans and let the
baremetal node configure a vlan tag at the baremetal node itself is a big
security risk and should not be supported. Unless an operator specifically
configures a baremetal node to be vlan trunk.

Sam Stoelinga

On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton blak...@gmail.com wrote:

  which requires VLAN info to be pushed to the host. I keep hearing bare
 metal will never need to know about VLANs so I want to quash that ASAP.

 That's leaking implementation details though if the bare metal host only
 needs to be on one network. It also creates a security risk if the bare
 metal node is untrusted.

 If the tagging is to make it so it can access multiple networks, then that
 makes sense for now but it should ultimately be replaced by the vlan trunk
 ports extension being worked on this cycle that decouples the underlying
 network transport from what gets tagged to the VM/bare metal.
 On Jul 17, 2015 11:47 AM, Jim Rollenhagen j...@jimrollenhagen.com
 wrote:

 On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
  Check out my comments on the review. Only Neutron knows whether or not
 an
  instance needs to do manual tagging based on the plugin/driver loaded.
 
  For example, Ironic/bare metal ports can be bound by neutron with a
 correct
  driver so they shouldn't get the VLAN information at the instance level
 in
  those cases. Nova has no way to know whether Neutron is configured this
 way
  so Neutron should have an explicit response in the port binding
 information
  indicating that an instance needs to tag.

 Agree. However, I just want to point out that there are neutron drivers
 that exist today[0] that support bonded NICs with trunked VLANs, which
 requires VLAN info to be pushed to the host. I keep hearing bare metal
 will never need to know about VLANs so I want to quash that ASAP.

 As far as Neutron sending the flag to decide whether the instance should
 tag packets, +1, I think that should work.

 // jim
 
  On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen 
 j...@jimrollenhagen.com
  wrote:
 
   On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
 On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There
   should
 be a way to hide this information if the instance does not
 require
   to
 configure vlan interfaces to make network functional.

 I just commented on the review, but the provider network API
 extension
 is admin only, most likely for the reasons that I think someone
 has
  already mentioned, that it exposes details of the physical
 network
 layout that should not be exposed to tenants.

 So, clearly, under some circumstances the network operator wants
 to
 expose this information, because there was the request for that
   feature.
 The question in my mind is what circumstances are those, and what
 additional information needs to be provided here.

 There is always a balance between the private cloud case which
 wants to
 enable more self service from users (and where the users are
 often also
 the operators), and the public cloud case where the users are
 outsiders
 and we want to hide as much as possible from them.

 For instance, would an additional attribute on a provider network
 that
 says this is cool to tell people about be an acceptable
 approach? Is
 there some other creative way to tell our infrastructure that
 these
 artifacts are meant to be exposed in this installation?

 Just kicking around ideas, because I know a pile of gate hardware
 for
 everyone to use is at the other side of answers to these
 questions. And
 given that we've been running full capacity for days now, keeping
 this
 ball moving forward would be great.
   
Maybe we just need to add policy around who gets to see that extra
detail, and maybe hide it by default?
   
Would that deal with the concerns here?
  
   I'm not so sure. There are certain Neutron plugins that work with
   certain virt drivers (Ironic) that require this information to be
 passed
   to all instances built by that virt driver. However, it doesn't (and
   probably shouldn't, as to not confuse cloud-init/etc) need to be
 passed
   to other instances. I think the conditional for passing this as metadata
   is going to need to be some combination of operator config, Neutron
   config/driver, and virt driver.

Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread John Garbutt
On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
 On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There should
 be a way to hide this information if the instance does not require to
 configure vlan interfaces to make network functional.

 I just commented on the review, but the provider network API extension
 is admin only, most likely for the reasons that I think someone has
  already mentioned, that it exposes details of the physical network
 layout that should not be exposed to tenants.

 So, clearly, under some circumstances the network operator wants to
 expose this information, because there was the request for that feature.
 The question in my mind is what circumstances are those, and what
 additional information needs to be provided here.

 There is always a balance between the private cloud case which wants to
 enable more self service from users (and where the users are often also
 the operators), and the public cloud case where the users are outsiders
 and we want to hide as much as possible from them.

 For instance, would an additional attribute on a provider network that
 says this is cool to tell people about be an acceptable approach? Is
 there some other creative way to tell our infrastructure that these
 artifacts are meant to be exposed in this installation?

 Just kicking around ideas, because I know a pile of gate hardware for
 everyone to use is at the other side of answers to these questions. And
 given that we've been running full capacity for days now, keeping this
 ball moving forward would be great.

Maybe we just need to add policy around who gets to see that extra
detail, and maybe hide it by default?

Would that deal with the concerns here?
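
For reference, Neutron already gates the provider attributes themselves
behind policy; if memory serves, its default policy.json contains rules
along these lines (shown here as a Python dict for brevity), and the
suggestion above would amount to adding a similar knob for the metadata
exposure:

    # Approximate existing Neutron defaults (rule names from memory).
    provider_attribute_policy = {
        "get_network:provider:network_type": "rule:admin_only",
        "get_network:provider:physical_network": "rule:admin_only",
        "get_network:provider:segmentation_id": "rule:admin_only",
    }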

Thanks,
John



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread Jim Rollenhagen
On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
 On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
  On 07/16/2015 06:06 PM, Sean M. Collins wrote:
  On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
  So it looks like there is a missing part in this feature. There should
  be a way to hide this information if the instance does not require to
  configure vlan interfaces to make network functional.
 
  I just commented on the review, but the provider network API extension
  is admin only, most likely for the reasons that I think someone has
   already mentioned, that it exposes details of the physical network
  layout that should not be exposed to tenants.
 
  So, clearly, under some circumstances the network operator wants to
  expose this information, because there was the request for that feature.
  The question in my mind is what circumstances are those, and what
  additional information needs to be provided here.
 
  There is always a balance between the private cloud case which wants to
  enable more self service from users (and where the users are often also
  the operators), and the public cloud case where the users are outsiders
  and we want to hide as much as possible from them.
 
  For instance, would an additional attribute on a provider network that
  says this is cool to tell people about be an acceptable approach? Is
  there some other creative way to tell our infrastructure that these
  artifacts are meant to be exposed in this installation?
 
  Just kicking around ideas, because I know a pile of gate hardware for
  everyone to use is at the other side of answers to these questions. And
  given that we've been running full capacity for days now, keeping this
  ball moving forward would be great.
 
 Maybe we just need to add policy around who gets to see that extra
 detail, and maybe hide it by default?
 
 Would that deal with the concerns here?

I'm not so sure. There are certain Neutron plugins that work with
certain virt drivers (Ironic) that require this information to be passed
to all instances built by that virt driver. However, it doesn't (and
probably shouldn't, as to not confuse cloud-init/etc) need to be passed
to other instances. I think the conditional for passing this as metadata
is going to need to be some combination of operator config, Neutron
config/driver, and virt driver.

I know we don't like networking things to be conditional on the virt
driver, but Ironic is working on feature parity with virt for
networking, and baremetal networking is vastly different than virt
networking. I think we're going to have to accept that.
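
A sketch of how that combination could look on the Nova side when
deciding whether to render VLAN details into network_data.json, assuming
Neutron advertised a hint in binding:vif_details as discussed elsewhere
in the thread (the flag name is hypothetical):

    # Sketch: combine operator config with the (hypothetical) Neutron hint.
    def include_vlan_in_metadata(port, operator_allows_exposure=True):
        vif_details = port.get("binding:vif_details") or {}
        return (operator_allows_exposure                      # operator config
                and vif_details.get("vlan_tagging_required",  # Neutron driver hint
                                    False))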

// jim

 
 Thanks,
 John
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread Sean Dague
On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There should
 be a way to hide this information if the instance does not require to
 configure vlan interfaces to make network functional.
 
 I just commented on the review, but the provider network API extension
 is admin only, most likely for the reasons that I think someone has
  already mentioned, that it exposes details of the physical network
 layout that should not be exposed to tenants.

So, clearly, under some circumstances the network operator wants to
expose this information, because there was the request for that feature.
The question in my mind is what circumstances are those, and what
additional information needs to be provided here.

There is always a balance between the private cloud case which wants to
enable more self service from users (and where the users are often also
the operators), and the public cloud case where the users are outsiders
and we want to hide as much as possible from them.

For instance, would an additional attribute on a provider network that
says "this is cool to tell people about" be an acceptable approach? Is
there some other creative way to tell our infrastructure that these
artifacts are meant to be exposed in this installation?

Just kicking around ideas, because I know a pile of gate hardware for
everyone to use is at the other side of answers to these questions. And
given that we've been running full capacity for days now, keeping this
ball moving forward would be great.
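
To make the idea concrete, the attribute being floated might look
something like this on the network resource; the field name is invented
for discussion and does not exist in Neutron today:

    # Hypothetical network with an operator-controlled opt-in flag.
    example_network = {
        "name": "gate-hardware-net",
        "provider:network_type": "vlan",
        "provider:segmentation_id": 401,
        "expose_to_instances": True,   # operator: OK to surface in instance metadata
    }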

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread Kevin Benton
Check out my comments on the review. Only Neutron knows whether or not an
instance needs to do manual tagging based on the plugin/driver loaded.

For example, Ironic/bare metal ports can be bound by neutron with a correct
driver so they shouldn't get the VLAN information at the instance level in
those cases. Nova has no way to know whether Neutron is configured this way
so Neutron should have an explicit response in the port binding information
indicating that an instance needs to tag.
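
In other words, a bound Ironic port could carry an explicit hint for Nova
in its binding information, roughly like the excerpt below; only the
"vlan_tagging_required" key is invented, the binding:* attributes are
standard port fields:

    # Hypothetical excerpt of a bound port: the switch handles tagging here,
    # so Neutron tells Nova the instance does not need VLAN details.
    bound_port = {
        "binding:vif_type": "other",
        "binding:vif_details": {"vlan_tagging_required": False},
    }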

On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen j...@jimrollenhagen.com
wrote:

 On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
  On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
   On 07/16/2015 06:06 PM, Sean M. Collins wrote:
   On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
   So it looks like there is a missing part in this feature. There
 should
   be a way to hide this information if the instance does not require
 to
   configure vlan interfaces to make network functional.
  
   I just commented on the review, but the provider network API extension
   is admin only, most likely for the reasons that I think someone has
    already mentioned, that it exposes details of the physical network
   layout that should not be exposed to tenants.
  
   So, clearly, under some circumstances the network operator wants to
   expose this information, because there was the request for that
 feature.
   The question in my mind is what circumstances are those, and what
   additional information needs to be provided here.
  
   There is always a balance between the private cloud case which wants to
   enable more self service from users (and where the users are often also
   the operators), and the public cloud case where the users are outsiders
   and we want to hide as much as possible from them.
  
   For instance, would an additional attribute on a provider network that
   says this is cool to tell people about be an acceptable approach? Is
   there some other creative way to tell our infrastructure that these
   artifacts are meant to be exposed in this installation?
  
   Just kicking around ideas, because I know a pile of gate hardware for
   everyone to use is at the other side of answers to these questions. And
   given that we've been running full capacity for days now, keeping this
   ball moving forward would be great.
 
  Maybe we just need to add policy around who gets to see that extra
  detail, and maybe hide it by default?
 
  Would that deal with the concerns here?

 I'm not so sure. There are certain Neutron plugins that work with
 certain virt drivers (Ironic) that require this information to be passed
 to all instances built by that virt driver. However, it doesn't (and
 probably shouldn't, as to not confuse cloud-init/etc) need to be passed
 to other instances. I think the conditional for passing this as metadata
 is going to need to be some combination of operator config, Neutron
 config/driver, and virt driver.

 I know we don't like networking things to be conditional on the virt
 driver, but Ironic is working on feature parity with virt for
 networking, and baremetal networking is vastly different than virt
 networking. I think we're going to have to accept that.

 // jim

 
  Thanks,
  John
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton


Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread Jim Rollenhagen
On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
 Check out my comments on the review. Only Neutron knows whether or not an
 instance needs to do manual tagging based on the plugin/driver loaded.
 
 For example, Ironic/bare metal ports can be bound by neutron with a correct
 driver so they shouldn't get the VLAN information at the instance level in
 those cases. Nova has no way to know whether Neutron is configured this way
 so Neutron should have an explicit response in the port binding information
 indicating that an instance needs to tag.

Agree. However, I just want to point out that there are neutron drivers
that exist today[0] that support bonded NICs with trunked VLANs, which
requires VLAN info to be pushed to the host. I keep hearing bare metal
will never need to know about VLANs so I want to quash that ASAP.

As far as Neutron sending the flag to decide whether the instance should
tag packets, +1, I think that should work.

// jim
 
 On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen j...@jimrollenhagen.com
 wrote:
 
  On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
   On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
On 07/16/2015 06:06 PM, Sean M. Collins wrote:
On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
So it looks like there is a missing part in this feature. There
  should
be a way to hide this information if the instance does not require
  to
configure vlan interfaces to make network functional.
   
I just commented on the review, but the provider network API extension
is admin only, most likely for the reasons that I think someone has
 already mentioned, that it exposes details of the physical network
layout that should not be exposed to tenants.
   
So, clearly, under some circumstances the network operator wants to
expose this information, because there was the request for that
  feature.
The question in my mind is what circumstances are those, and what
additional information needs to be provided here.
   
There is always a balance between the private cloud case which wants to
enable more self service from users (and where the users are often also
the operators), and the public cloud case where the users are outsiders
and we want to hide as much as possible from them.
   
For instance, would an additional attribute on a provider network that
says this is cool to tell people about be an acceptable approach? Is
there some other creative way to tell our infrastructure that these
artifacts are meant to be exposed in this installation?
   
Just kicking around ideas, because I know a pile of gate hardware for
everyone to use is at the other side of answers to these questions. And
given that we've been running full capacity for days now, keeping this
ball moving forward would be great.
  
   Maybe we just need to add policy around who gets to see that extra
   detail, and maybe hide it by default?
  
   Would that deal with the concerns here?
 
  I'm not so sure. There are certain Neutron plugins that work with
  certain virt drivers (Ironic) that require this information to be passed
  to all instances built by that virt driver. However, it doesn't (and
  probably shouldn't, as to not confuse cloud-init/etc) need to be passed
  to other instances. I think the conditional for passing this as metadata
  is going to need to be some combination of operator config, Neutron
  config/driver, and virt driver.
 
  I know we don't like networking things to be conditional on the virt
  driver, but Ironic is working on feature parity with virt for
  networking, and baremetal networking is vastly different than virt
  networking. I think we're going to have to accept that.
 
  // jim
 
  
   Thanks,
   John
  
  
  __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread Jim Rollenhagen
On Fri, Jul 17, 2015 at 10:43:37AM -0700, Jim Rollenhagen wrote:
 On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
  Check out my comments on the review. Only Neutron knows whether or not an
  instance needs to do manual tagging based on the plugin/driver loaded.
  
  For example, Ironic/bare metal ports can be bound by neutron with a correct
  driver so they shouldn't get the VLAN information at the instance level in
  those cases. Nova has no way to know whether Neutron is configured this way
  so Neutron should have an explicit response in the port binding information
  indicating that an instance needs to tag.
 
 Agree. However, I just want to point out that there are neutron drivers
 that exist today[0] that support bonded NICs with trunked VLANs, which
 requires VLAN info to be pushed to the host. I keep hearing bare metal
 will never need to know about VLANs so I want to quash that ASAP.
 
 As far as Neutron sending the flag to decide whether the instance should
 tag packets, +1, I think that should work.

[0] https://github.com/rackerlabs/ironic-neutron-plugin

// jim

 
 // jim
  
  On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen j...@jimrollenhagen.com
  wrote:
  
   On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
 On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There
   should
 be a way to hide this information if the instance does not require
   to
 configure vlan interfaces to make network functional.

 I just commented on the review, but the provider network API 
 extension
 is admin only, most likely for the reasons that I think someone has
  already mentioned, that it exposes details of the physical network
 layout that should not be exposed to tenants.

 So, clearly, under some circumstances the network operator wants to
 expose this information, because there was the request for that
   feature.
 The question in my mind is what circumstances are those, and what
 additional information needs to be provided here.

 There is always a balance between the private cloud case which wants 
 to
 enable more self service from users (and where the users are often 
 also
 the operators), and the public cloud case where the users are 
 outsiders
 and we want to hide as much as possible from them.

 For instance, would an additional attribute on a provider network that
 says this is cool to tell people about be an acceptable approach? Is
 there some other creative way to tell our infrastructure that these
 artifacts are meant to be exposed in this installation?

 Just kicking around ideas, because I know a pile of gate hardware for
 everyone to use is at the other side of answers to these questions. 
 And
 given that we've been running full capacity for days now, keeping this
 ball moving forward would be great.
   
Maybe we just need to add policy around who gets to see that extra
detail, and maybe hide it by default?
   
Would that deal with the concerns here?
  
   I'm not so sure. There are certain Neutron plugins that work with
   certain virt drivers (Ironic) that require this information to be passed
   to all instances built by that virt driver. However, it doesn't (and
   probably shouldn't, as to not confuse cloud-init/etc) need to be passed
   to other instances. I think the conditional for passing this as metadata
   is going to need to be some combination of operator config, Neutron
   config/driver, and virt driver.
  
   I know we don't like networking things to be conditional on the virt
   driver, but Ironic is working on feature parity with virt for
   networking, and baremetal networking is vastly different than virt
   networking. I think we're going to have to accept that.
  
   // jim
  
   
Thanks,
John
   
   
   __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
  -- 
  Kevin Benton
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 

Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-17 Thread Kevin Benton
 which requires VLAN info to be pushed to the host. I keep hearing bare
metal will never need to know about VLANs so I want to quash that ASAP.

That's leaking implementation details though if the bare metal host only
needs to be on one network. It also creates a security risk if the bare
metal node is untrusted.

If the tagging is to make it so it can access multiple networks, then that
makes sense for now but it should ultimately be replaced by the vlan trunk
ports extension being worked on this cycle that decouples the underlying
network transport from what gets tagged to the VM/bare metal.
On Jul 17, 2015 11:47 AM, Jim Rollenhagen j...@jimrollenhagen.com wrote:

 On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
  Check out my comments on the review. Only Neutron knows whether or not an
  instance needs to do manual tagging based on the plugin/driver loaded.
 
  For example, Ironic/bare metal ports can be bound by neutron with a
 correct
  driver so they shouldn't get the VLAN information at the instance level
 in
  those cases. Nova has no way to know whether Neutron is configured this
 way
  so Neutron should have an explicit response in the port binding
 information
  indicating that an instance needs to tag.

 Agree. However, I just want to point out that there are neutron drivers
 that exist today[0] that support bonded NICs with trunked VLANs, which
 requires VLAN info to be pushed to the host. I keep hearing bare metal
 will never need to know about VLANs so I want to quash that ASAP.

 As far as Neutron sending the flag to decide whether the instance should
 tag packets, +1, I think that should work.

 // jim
 
  On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen j...@jimrollenhagen.com
 
  wrote:
 
   On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
On 17 July 2015 at 11:23, Sean Dague s...@dague.net wrote:
 On 07/16/2015 06:06 PM, Sean M. Collins wrote:
 On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There
   should
 be a way to hide this information if the instance does not
 require
   to
 configure vlan interfaces to make network functional.

 I just commented on the review, but the provider network API
 extension
 is admin only, most likely for the reasons that I think someone
 has
  already mentioned, that it exposes details of the physical network
 layout that should not be exposed to tenants.

 So, clearly, under some circumstances the network operator wants to
 expose this information, because there was the request for that
   feature.
 The question in my mind is what circumstances are those, and what
 additional information needs to be provided here.

 There is always a balance between the private cloud case which
 wants to
 enable more self service from users (and where the users are often
 also
 the operators), and the public cloud case where the users are
 outsiders
 and we want to hide as much as possible from them.

 For instance, would an additional attribute on a provider network
 that
 says this is cool to tell people about be an acceptable
 approach? Is
 there some other creative way to tell our infrastructure that these
 artifacts are meant to be exposed in this installation?

 Just kicking around ideas, because I know a pile of gate hardware
 for
 everyone to use is at the other side of answers to these
 questions. And
 given that we've been running full capacity for days now, keeping
 this
 ball moving forward would be great.
   
Maybe we just need to add policy around who gets to see that extra
detail, and maybe hide it by default?
   
Would that deal with the concerns here?
  
   I'm not so sure. There are certain Neutron plugins that work with
   certain virt drivers (Ironic) that require this information to be
 passed
   to all instances built by that virt driver. However, it doesn't (and
   probably shouldn't, as to not confuse cloud-init/etc) need to be passed
   to other instances. I think the conditional for passing this as
 metadata
   is going to need to be some combination of operator config, Neutron
   config/driver, and virt driver.
  
   I know we don't like networking things to be conditional on the virt
   driver, but Ironic is working on feature parity with virt for
   networking, and baremetal networking is vastly different than virt
   networking. I think we're going to have to accept that.
  
   // jim
  
   
Thanks,
John
   
   
  
 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  

Re: [openstack-dev] [nova] Exposing provider networks in network_data.json

2015-07-16 Thread Sean M. Collins
On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
 So it looks like there is a missing part in this feature. There should
 be a way to hide this information if the instance does not require to
 configure vlan interfaces to make network functional.

I just commented on the review, but the provider network API extension
is admin only, most likely for the reasons that I think someone has
already mentioned, that it exposes details of the physical
layout that should not be exposed to tenants.

-- 
Sean M. Collins
