[openstack-dev] [Neutron] [IPv6] Hide ipv6 subnet API attributes

2014-07-29 Thread Nir Yechiel
Now that the Juno efforts to provide IPv6 support are underway, with some features (provider 
networks, SLAAC, radvd) already merged, is there any plan/patch to revert this 
Icehouse change [1] and make the 'ipv6_ra_mode' and 'ipv6_address_mode' attributes consumable?

Thanks,
Nir

[1] https://review.openstack.org/#/c/85869/
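For context, if the revert lands, creating a SLAAC subnet with these attributes exposed would presumably look something like the sketch below (flag spellings follow the Juno-era python-neutronclient and may differ; the network name and prefix are illustrative):

```shell
# Sketch: create an IPv6 subnet with the RA mode and address-assignment
# mode set explicitly (assumes Juno-era python-neutronclient flags).
neutron subnet-create --ip-version 6 \
    --ipv6-ra-mode slaac \
    --ipv6-address-mode slaac \
    --name ipv6-slaac-subnet \
    mynet 2001:db8:1234::/64
```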

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-08 Thread Nir Yechiel


- Original Message -
 A bit more...
 
 
 I have OpenStack Icehouse with Trusty up and running, *almost* in an
 IPv6-only environment. *There is only one place* where I'm still using IPv4,
 which is:
 
 
 1- For Metadata Network.
 
 
 This means that, as soon as OpenStack enables metadata over IPv6, I'll kiss
 IPv4 goodbye. For real, I cannot handle IPv4 networks anymore... So many
 NAT tables and overlay networks that it creeps me out!!  lol
 
 
 NOTE: I'm using VLAN provider networks with static IPv6 addresses for my
 tenants (no support for SLAAC from upstream routers in OpenStack yet), so I'm
 not using GRE/VXLAN tunnels; that is another place where OpenStack still
 depends on IPv4, for its tunnels...
 

Just to make sure I got you right here - would you expect to run the overlay 
tunnels over an IPv6 transport network?

 
 As I said, everything else is running over IPv6, like RabbitMQ, MySQL,
 Keystone, Nova, Glance, Cinder, Neutron (API endpoint), SPICE Consoles and
 Servers, the entire Management Network (private IPv6 address space -
 fd::/64), etc... So, why do we need IPv4? I don't remember... :-P
 

Good to hear. Would you mind sharing your config/more details about the 
deployment?

 Well, Amazon doesn't support IPv6... Who will be left behind with smelly
 IPv4 and ugly VPC topologies?! Not us.  ;-)
 
 Best!
 Thiago
 
 
 On 7 July 2014 15:50, Ian Wells ijw.ubu...@cack.org.uk wrote:
 
  On 7 July 2014 11:37, Sean Dague s...@dague.net wrote:
 
   When it's on a router, it's simpler: use the nexthop, get that metadata
   server.
 
  Right, but that assumes router control.
 
 
  It does, but then that's the current status quo - these things go on
  Neutron routers (and, by extension, are generally not available via
  provider networks).
 
In general, anyone doing singlestack v6 at the moment relies on
   config-drive to make it work.  This works fine, but it depends on what
   cloud-init support your application has.
 
  I think it's also important to realize that the metadata service isn't
  OpenStack invented, it's an AWS API. Which means I don't think we really
  have the liberty to go changing how it works, especially with something
  like IPv6 support.
 
 
  Well, as Amazon doesn't support IPv6, we are the trailblazers here and we
  can do what we please.  If you have a singlestack v6 instance there's no
  compatibility to be maintained with Amazon, because it simply won't work on
  Amazon.  (Also, the format of the metadata server maintains compatibility
  with AWS but I don't think it's strictly AWS any more; the config drive
  certainly isn't.)
 
 
  I'm not sure I understand why requiring config-drive isn't ok. In our
  upstream testing it's a ton more reliable than the metadata service due
  to all the crazy networking things it's doing.
 
  I'd honestly love to see us just deprecate the metadata server.
 
 
  The metadata server could potentially have more uses in the future - it's
  possible to get messages out of it, rather than just one-time config - but
  yes, the config drive is so much more sensible.  As for the metadata
  server, once you're into Neutron you end up with many problems - which
  interface to use, how to get your network config when important details
  are probably on the metadata server itself...
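To illustrate the config-drive alternative discussed above: a guest can read its metadata with no networking at all, v4 or v6. A minimal sketch, assuming the standard OpenStack config-drive label and layout:

```shell
# Mount the config drive (OpenStack labels it "config-2") and read the
# metadata JSON -- no metadata server, and no IP connectivity, required.
mkdir -p /mnt/config
mount -o ro /dev/disk/by-label/config-2 /mnt/config
cat /mnt/config/openstack/latest/meta_data.json
```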
 
 
 
 
 



[openstack-dev] [Neutron] cloud-init IPv6 support

2014-07-07 Thread Nir Yechiel
AFAIK, the cloud-init metadata service can currently be accessed only by 
sending a request to http://169.254.169.254, and no IPv6 equivalent is 
currently implemented. Is anyone working on this, or has anyone tried to 
address it before?

Thanks,
Nir
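For reference, this is the (EC2-compatible) access pattern in question, which only works over the IPv4 link-local address:

```shell
# From inside a guest: fetch instance metadata over the well-known
# IPv4 link-local address (EC2-compatible path).
curl http://169.254.169.254/latest/meta-data/instance-id
```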



Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-09 Thread Nir Yechiel
+1

I see this as one of the main current gaps, and I believe it is something 
that can help position Neutron as stable and production-ready. 
Based on Édouard's comment below, having this enabled in Icehouse as 
experimental makes a lot of sense to me.  

- Original Message -
 +1
 
 - Original Message -
  +1
  
  On Fri, Mar 7, 2014 at 2:42 AM, Édouard Thuleau thul...@gmail.com wrote:
   +1
    I think it should merge as experimental for Icehouse, to let the community
    try it and stabilize it during the Juno cycle. Then, for the Juno
    release, we will be able to announce it as stable.
  
    Furthermore, the next piece of work will be to distribute the L3
    functionality at the edge (compute nodes), called DVR, but this VRRP work
    will still be needed for that [1]. So if we merge L3 HA VRRP as
    experimental in Icehouse to be stable in Juno, we could also propose an
    experimental DVR solution for Juno and a stable one for K.
  
   [1]
   https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit
  
   Regards,
   Édouard.
  
  
   On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain
   sylvain.afch...@enovance.com wrote:
  
   Hi all,
  
   I would like to request a FFE for the following patches of the L3 HA
   VRRP
   BP :
  
   https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
  
   https://review.openstack.org/#/c/64553/
   https://review.openstack.org/#/c/66347/
   https://review.openstack.org/#/c/68142/
   https://review.openstack.org/#/c/70700/
  
    These should be low risk since HA is not enabled by default.
    The server-side code has been developed as an extension, which minimizes
    risk.
    The agent-side code introduces a few more changes, but only to determine
    whether to apply the new HA behavior.
  
    I think it's a good idea to have this feature in Icehouse, perhaps even
    marked as experimental, especially considering the demand for HA in
    real-world deployments.
  
    Here is a doc to test it:
  
  
   https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug
  
   -Sylvain
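Assuming the patches above merge, exercising the feature would presumably be a one-liner like the following (the --ha flag is taken from the review series and may change before merge):

```shell
# Sketch: create a VRRP-backed HA router; HA stays off unless requested.
neutron router-create --ha True ha-router1
```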
  
  
  
  
  
  
  
  
 
 



Re: [openstack-dev] [neutron]Can somebody describe the all the rolls about networks' admin_state_up

2014-02-26 Thread Nir Yechiel


- Original Message -
 A thread [1] was also initiated on the ML by Syvlain but no answers/comment
 for the moment.
 
 [1] http://openstack.markmail.org/thread/qy6ikldtq2o4imzl
 
 Édouard.
 

IMHO admin_state_up = false should bring the network down and stop its 
traffic. This is of course a risky operation, so there should be a clear 
warning describing the action and asking the user to confirm it.

With regard to the implementation, I think there is a difference between the 
network admin_state and the individual ports' admin_state; setting the network 
admin_state to false should not change the ports' state to false. 
Instead, I am in favor of the second solution [1] described by Sylvain in the 
ML. I also included this in the bug.

/Nir

[1] Do not change the admin_state_up value of ports; instead, introduce a new 
field in the get_device_details RPC call to indicate that the network's 
admin_state_up is down, and then set the port as dead.
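For reference, the operation under discussion is the network-level flag, e.g. (network name illustrative; flag spelling may vary by client version):

```shell
# Administratively disable a network; per the proposal above, this should
# stop its traffic without flipping each port's own admin_state_up.
neutron net-update mynet --admin-state-up False
```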


 
 On Mon, Feb 24, 2014 at 9:35 AM, 黎林果 lilinguo8...@gmail.com wrote:
 
   Thank you very much.

   IMHO when admin_state_up is false that entity should be down, meaning the
   network should be down; otherwise, what is the use of admin_state_up? The
   same is true for port admin_state_up.

   Is it like a switch's power button?
 
  2014-02-24 16:03 GMT+08:00 Assaf Muller amul...@redhat.com:
  
  
   - Original Message -
   Hi,
  
    I want to understand the admin_state_up attribute on networks, but I have
    not found any documentation describing it.

    Can you help me understand it? Thank you very much.
  
  
   There's a discussion about this in this bug [1].
   From what I gather, nobody knows what admin_state_up is actually supposed
   to do with respect to networks.
  
   [1] https://bugs.launchpad.net/neutron/+bug/1237807
  
  
    Regards,
  
   Lee Li
  
  
  
 
 
 
 



Re: [openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-09 Thread Nir Yechiel


- Original Message -

From: Dong Liu willowd...@gmail.com 
To: Nir Yechiel nyech...@redhat.com 
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Wednesday, January 8, 2014 5:36:14 PM 
Subject: Re: [neutron] Implement NAPT in neutron 
(https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api) 


On Jan 8, 2014, at 20:24, Nir Yechiel nyech...@redhat.com wrote: 




Hi Dong, 

Can you please clarify this blueprint? Currently in Neutron, if an instance 
has a floating IP, that IP is used for both inbound and outbound traffic. If 
an instance does not have a floating IP, it can make outbound connections 
using the gateway IP (SNAT using PAT/NAT overload). Is the idea in this 
blueprint to implement PAT in both directions using only the gateway IP? 
Also, did you see this one [1]? 

Thanks, 
Nir 

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 





I think my idea duplicates this one: 
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping 

Sorry for missing it. 

[Nir] Thanks, I wasn't familiar with that one. So is there a difference 
between those three? 

https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping 
https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api 

Looks like all of them are trying to solve the same challenge using the public 
gateway IP and PAT. 
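Mechanically, all three proposals come down to the same kind of NAT rules the L3 agent already programs with iptables; a rough sketch with illustrative addresses:

```shell
# Outbound SNAT/PAT: many tenant addresses share one gateway IP.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 \
    -j SNAT --to-source 203.0.113.10
# Inbound port mapping (what the port-forwarding blueprints would add):
# gateway-IP:2222 -> instance 10.0.0.5:22.
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 2222 \
    -j DNAT --to-destination 10.0.0.5:22
```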




Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port for router

2014-01-09 Thread Nir Yechiel
Hi Randy, 

I don't have a specific use case. I just wanted to understand the scope here, 
as the name of this blueprint ("allow multiple subnets on gateway port for 
router") could be a bit misleading. 
Two questions I have though: 

1. Is this talking specifically about the gateway port to the provider's 
next-hop router or relevant for all ports in virtual routers as well? 
2. There is a fundamental difference between v4 and v6 address assignment. With 
IPv4 I agree that one IP address per port is usually enough (there is the 
concept of secondary IPs, but I am not sure they are really common). With 
IPv6, however, you can certainly have more than one (global) IPv6 address on 
an interface. Shouldn't we support this? 


Thanks, 
Nir 
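For what it's worth, Neutron's port API already accepts multiple fixed IPs, which is presumably how a dual-stack (or multi-subnet) port would be expressed; a sketch with illustrative names and placeholder IDs:

```shell
# Sketch: a port with one IPv4 and one IPv6 fixed address.
neutron port-create mynet \
    --fixed-ip subnet_id=<v4-subnet-id> \
    --fixed-ip subnet_id=<v6-subnet-id>
```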

- Original Message -

From: Randy Tuttle randy.m.tut...@gmail.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Cc: rantu...@cisco.com 
Sent: Tuesday, December 31, 2013 6:43:50 PM 
Subject: Re: [openstack-dev] [Neutron] Allow multiple subnets on gateway port 
for router 

Hi Nir 

Good question. There's absolutely no reason not to allow more than two 
subnets, or even two of the same IP version, on the gateway port. In fact, in 
our POC we allowed this (or, more specifically, we did not disallow it). 
However, for the gateway port to the provider's next-hop router, we did not 
have a specific use case beyond one IPv4 and one IPv6 subnet. Moreover, in 
Neutron today, only a single subnet is allowed per interface (either v4 or 
v6). So all we are doing is opening up the gateway port to support what it 
does today (i.e., v4 or v6) plus allow IPv4 and IPv6 subnets to co-exist on 
the gateway port (and the same network/VLAN). Our principal use case is to 
enable IPv6 in an existing IPv4 environment. 

Do you have a specific use case requiring two or more subnets of the same IP 
version on a gateway port? 

Thanks 
Randy 


On Tue, Dec 31, 2013 at 4:59 AM, Nir Yechiel  nyech...@redhat.com  wrote: 



Hi, 

With regard to 
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port,
 can you please clarify this statement: "We will disallow more than two 
subnets, and exclude allowing 2 IPv4 or 2 IPv6 subnets." 
The use case for dual-stack with one IPv4 and one IPv6 address associated with 
the same port is clear, but what is the reason to disallow more than two 
IPv4/IPv6 subnets on a port? 

Thanks and happy holidays! 
Nir 












[openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-08 Thread Nir Yechiel
Hi Dong, 

Can you please clarify this blueprint? Currently in Neutron, if an instance 
has a floating IP, that IP is used for both inbound and outbound traffic. If 
an instance does not have a floating IP, it can make outbound connections 
using the gateway IP (SNAT using PAT/NAT overload). Is the idea in this 
blueprint to implement PAT in both directions using only the gateway IP? 
Also, did you see this one [1]? 

Thanks, 
Nir 

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 


[openstack-dev] [Neutron] Allow multiple subnets on gateway port for router

2013-12-31 Thread Nir Yechiel
Hi, 

With regard to 
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port,
 can you please clarify this statement: "We will disallow more than two 
subnets, and exclude allowing 2 IPv4 or 2 IPv6 subnets." 
The use case for dual-stack with one IPv4 and one IPv6 address associated with 
the same port is clear, but what is the reason to disallow more than two 
IPv4/IPv6 subnets on a port? 

Thanks and happy holidays! 
Nir 

