Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-06 Thread McCann, Jack
+1 on avoiding changes that break rolling upgrade.

Rolling upgrade has been working so far (at least from my perspective), and
as openstack adoption spreads, it will be important for more and more users.

How do we make rolling upgrade a supported part of Neutron?

- Jack

 -Original Message-
 From: Assaf Muller [mailto:amul...@redhat.com]
 Sent: Thursday, March 05, 2015 11:59 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch
 
 
 
 - Original Message -
  To turn this stuff off, you don't need to revert.  I'd suggest just
  setting the namespace constants to None, and that will result in the same
  thing.
 
 
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/constants.py#n152
 
  It's definitely a non-backwards compatible change.  That was a conscious
  choice as the interfaces are a bit of a tangled mess, IMO.  The
  non-backwards compatible changes were simpler so I went that route,
  because as far as I could tell, rolling upgrades were not supported.  If
   they do work, it's due to luck.  There are multiple things, from the
   lack of testing for this scenario to the lack of data versioning, that
   make it a pretty shaky area.
 
  However, if it worked for some people, I totally get the argument
  against breaking it intentionally.  As mentioned before, a quick fix if
  needed is to just set the namespace constants to None.  If someone wants
  to do something to make it backwards compatible, that's even better.
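  
   As a concrete illustration, here is a minimal sketch of that quick fix
   (the constant name and value are taken as assumptions from the
   constants.py link above, not verified against the actual file):
  
       # neutron/common/constants.py -- sketch only
       # Setting the namespace constant to None makes the server consume
       # report_state from the default, un-namespaced RPC target again.
       RPC_NAMESPACE_STATE_REPORT = None   # was 'state_report'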
 
 
 I sent out an email to the operators list to get some feedback:
 http://lists.openstack.org/pipermail/openstack-operators/2015-March/006429.html
 
 And at least one operator reported that he performed a rolling Neutron upgrade
 from I to J successfully. So, I'm agreeing with you agreeing with me that we
 probably don't want to mess this up knowingly, even though there is no testing
 to make sure that it keeps working.
 
 I'll follow up on IRC with you to figure out who's doing what.
 
  --
  Russell Bryant
 
  On 03/04/2015 11:50 AM, Salvatore Orlando wrote:
   To put in another way I think we might say that change 154670 broke
   backward compatibility on the RPC interface.
   To be fair this probably happened because RPC interfaces were organised
   in a way such that this kind of breakage was unavoidable.
  
   I think the strategy proposed by Assaf is a viable one. The point about
   being able to do rolling upgrades only from version N to N+1 is a
   sensible one, but it has more to do with general backward compatibility
   rules for RPC interfaces.
  
   In the meanwhile this is breaking a typical upgrade scenario. If a fix
   allowing agent state updates both namespaced and not is available today
   or tomorrow, that's fine. Otherwise I'd revert just to be safe.
  
   By the way, we were supposed to have already removed all server rpc
   callbacks in the appropriate package... did we forget about this one or is
   there a reason for which it's still in neutron.db?
  
   Salvatore
  
   On 4 March 2015 at 17:23, Miguel Ángel Ajo majop...@redhat.com wrote:
  
   I agree with Assaf, this is an issue across updates, and
   we may want (if that’s technically possible) to provide
   access to those functions with/without namespace.
  
   Or otherwise think about reverting for now until we find a
   migration strategy
  
  
  https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z
  
  
   Best regards,
   Miguel Ángel Ajo
  
    On Wednesday, 4 March 2015 at 17:00, Assaf Muller wrote:
  
   Hello everyone,
  
   I'd like to highlight an issue with:
   https://review.openstack.org/#/c/154670/
  
    According to my understanding, most deployments upgrade the controllers
    first and compute/network nodes later. During that time period, all
    agents will fail to report state as they're sending the report_state
    message outside of any namespace while the server is expecting that
    message in a namespace. This is a show stopper as the Neutron server
    will think all of its agents are dead.
  
    I think the obvious solution is to modify the Neutron server code so that
    it accepts the report_state method both in and outside of the report_state
    RPC namespace, and chuck that code away in L (assuming we support rolling
    upgrades only from version N to N+1, which, while unfortunate, is the
    behavior I've seen in multiple places in the code).
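  
    A rough sketch of what that server-side compatibility shim could look
    like, built on oslo.messaging's Target (the class names, namespace
    string, and method signature are assumptions, not the actual Neutron
    code):
  
        import oslo_messaging
  
        class PluginReportStateAPI(object):
            # Handles report_state sent inside the new RPC namespace (Kilo agents).
            target = oslo_messaging.Target(namespace='state_report', version='1.0')
  
            def report_state(self, context, agent_state, time):
                # ... update the agent heartbeat in the DB ...
                pass
  
        class LegacyPluginReportStateAPI(PluginReportStateAPI):
            # Same handler exposed without a namespace, so Juno agents that
            # send un-namespaced report_state calls keep working mid-upgrade.
            target = oslo_messaging.Target(namespace=None, version='1.0')
  
    The server would register both endpoints on its report_state RPC server,
    and the un-namespaced one could be dropped in L.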
  
    Finally, are there additional similar issues for other RPC methods
    placed in a namespace this cycle?
  
  
   Assaf Muller, Cloud Networking Engineer
   Red Hat
  


Re: [openstack-dev] [Neutron][L3] Stop agent scheduling without stopping services

2015-01-08 Thread McCann, Jack
+1 on need for this feature

The way I've thought about this is we need a mode that stops the *automatic*
scheduling of routers/dhcp-servers to specific hosts/agents, while allowing
manual assignment of routers/dhcp-servers to those hosts/agents, and where
any existing routers/dhcp-servers on those hosts continue to operate as normal.

The maintenance use case was mentioned: I want to evacuate routers/dhcp-servers
from a host before taking it down, and having the scheduler add new routers/dhcp
while I'm evacuating the node is a) an annoyance, and b) causes a service blip
when I have to right away move that new router/dhcp to another host.
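
As a rough sketch, the evacuation half of that workflow can already be done
with existing python-neutronclient calls (the agent UUIDs and credentials
below are made up), assuming the only missing piece is the "stop automatic
scheduling" mode:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    old_agent = 'l3-agent-uuid-being-drained'
    new_agent = 'l3-agent-uuid-taking-over'

    # Move every router off the agent we want to take down.  Nothing here
    # stops the scheduler from placing *new* routers back on old_agent
    # while this loop runs, which is exactly the gap described above.
    for router in neutron.list_routers_on_l3_agent(old_agent)['routers']:
        neutron.remove_router_from_l3_agent(old_agent, router['id'])
        neutron.add_router_to_l3_agent(new_agent, {'router_id': router['id']})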

The other use case is adding a new host/agent into an existing environment.
I want to be able to bring the new host/agent up and into the neutron config,
but I don't want any of my customers' routers/dhcp-servers scheduled there
until I've had a chance to assign some test routers/dhcp-servers and make
sure the new server is properly configured and fully operational.

- Jack



Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread McCann, Jack
  If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?

It is technically possible to implement default SNAT at the compute node.

One approach would be to use a single IP address per compute node as a
default SNAT address shared by all VMs on that compute node.  While this
optimizes for number of external IPs consumed per compute node, the downside
is having VMs from different tenants sharing the same default SNAT IP address
and conntrack table.  That downside may be acceptable for some deployments,
but it is not acceptable in others.

Another approach would be to use a single IP address per router per compute
node.  This avoids the multi-tenant issue mentioned above, at the cost of
consuming more IP addresses, potentially one default SNAT IP address for each
VM on the compute server (which is the case when every VM on the compute node
is from a different tenant and/or using a different router).  At that point
you might as well give each VM a floating IP.
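
To make the address-consumption trade-off concrete, a small back-of-the-envelope
calculation (the numbers are illustrative only):

    # Illustrative only: external IPs consumed by the two distributed
    # default-SNAT approaches described above.
    compute_nodes = 100
    routers_per_node = 20          # distinct tenant routers with VMs on each node

    shared_per_node = compute_nodes                          # approach 1: 100 IPs
    per_router_per_node = compute_nodes * routers_per_node   # approach 2: up to 2000 IPs

    print(shared_per_node, per_router_per_node)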

Hence the approach taken with the initial DVR implementation is to keep
default SNAT as a centralized service.

- Jack

 -Original Message-
 From: Zang MingJie [mailto:zealot0...@gmail.com]
 Sent: Wednesday, June 25, 2014 6:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut
 
  On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com wrote:
  Hi,
  for each compute node to have SNAT to Internet, I think we have the
  drawbacks:
  1. SNAT is done in router, so each router will have to consume one public IP
  on each compute node, which is money.
 
 SNAT can save more ips than wasted on floating ips
 
  2. for each compute node to go out to Internet, the compute node will have
  one more NIC, which connect to physical switch, which is money too
 
 
 Floating ip also need a public NIC on br-ex. Also we can use a
 separate vlan to handle the network, so this is not a problem
 
  So personally, I like the design:
   floating IPs and 1:N SNAT still use current network nodes, which will have
  HA solution enabled and we can have many l3 agents to host routers. but
  normal east/west traffic across compute nodes can use DVR.
 
  BTW, is the HA implementation still active? I haven't seen it touched
  for a while.
 
 
  yong sheng gong
 
 
  On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  In current DVR design, SNAT is north/south direction, but packets have
  to go west/east through the network node. If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?
 
   SNAT versus floating IPs can save tons of public IPs, in trade for
   introducing a single point of failure and limiting the bandwidth of the
   network node. If the SNAT performance problem can be solved, I'll
   encourage people to use SNAT over floating IPs, unless the VM is
   serving a public service.
 
  --
  Zang MingJie
 


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-23 Thread McCann, Jack
From the original ask:

 I know that there is possibility to create port with IP
 and later connect VM to this port. This solution is almost ok
 for me but problem is when user delete this instance - then
 port is also deleted and it is not reserved still for the same
 user and tenant.

This sounds like the problem of nova deleting a port that it did not
create.  We could look at a change (likely involving nova and neutron)
such that if I create a port and pass it in to nova boot, nova would
not delete that port when the VM is deleted.
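
For reference, a minimal sketch of the create-port-then-boot flow being
discussed; the client calls exist in python-neutronclient/python-novaclient,
but the IDs and credentials are placeholders:

    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone:5000/v2.0')
    nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                              'http://keystone:5000/v2.0')

    # Pre-create a port with the fixed IP the user wants to keep.
    port = neutron.create_port({'port': {
        'network_id': 'NET_UUID',
        'fixed_ips': [{'ip_address': '10.0.0.50'}]}})['port']

    # Boot the VM against that port.  The issue above is that when this
    # server is later deleted, nova currently deletes the port too, so the
    # address is not retained for the tenant.
    nova.servers.create(name='vm1', image='IMAGE_UUID', flavor='FLAVOR_ID',
                        nics=[{'port-id': port['id']}])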

- Jack

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]
Sent: Thursday, May 22, 2014 10:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] reservation of fixed ip


Well, for a use case we had in mind we were trying to figure out how to simply 
get an IP address on a subnet. We essentially want to use such an address 
internally by the controller and make sure it is not used for a port that gets 
created on a network with that subnet. In this use case, an interface to IPAM 
for removing an address from the pool of available addresses (and the interface 
to possibly return the address to the pool) would be sufficient.

Mohammad


From: Carl Baldwin c...@ecbaldwin.net
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 05/22/2014 06:19 PM
Subject: Re: [openstack-dev] [Neutron] reservation of fixed ip





If an IP is reserved for a tenant, should the tenant need to
explicitly ask for that specific IP to be allocated when creating a
floating ip or port?  And it would pull from the regular pool if a
specific IP is not requested.  Or, does the allocator just pull from
the tenant's reserved pool whenever it needs an IP on a subnet?  If
the latter, then I think Salvatore's concern is still a valid one.

I think if a tenant wants an IP address reserved then he probably has
a specific purpose for that IP address in mind.  That leads me to
think that he should be required to pass the specific address when
creating the associated object in order to make use of it.  We can't
do that yet with all types of allocations but there are reviews in
progress [1][2].

Carl

[1] https://review.openstack.org/#/c/70286/
[2] https://review.openstack.org/#/c/83664/

On Thu, May 22, 2014 at 12:04 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 Hello


 Dnia Wed, 21 May 2014 23:51:48 +0100
 Salvatore Orlando sorla...@nicira.com wrote:

 In principle there is nothing that should prevent us from
 implementing an IP reservation mechanism.

 As with anything, the first thing to check is literature or related
 work! If any other IaaS system is implementing such a mechanism, is
 it exposed through the API somehow?
 Also this feature is likely to be provided by IPAM systems. If yes,
 what constructs do they use?
 I do not have the answers to these questions, but I'll try to document
 myself; if you have them - please post them here.

 This new feature would probably be baked into neutron's IPAM logic.
 When allocating an IP, first check from within the IP reservation
 pool, and then if it's not found check from standard allocation pools
 (this has a non-negligible impact on availability ranges management, but
 these are implementation details).
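
 In pseudo-Python, the allocation order being described would be roughly as
 follows (the function names are invented for illustration, not actual
 Neutron IPAM code):

     def allocate_ip(context, subnet, tenant_id, requested_ip=None):
         if requested_ip:
             return claim_specific_ip(context, subnet, requested_ip)
         # 1) try the tenant's reservation pool first
         ip = claim_from_reservations(context, subnet, tenant_id)
         if ip is None:
             # 2) fall back to the subnet's standard allocation pools
             ip = claim_from_allocation_pools(context, subnet)
         return ip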
 Aspects to consider, requirement-wise, are:
 1) Should reservations also be classified by qualification of the
 port? For instance, is it important to specify that an IP should be
 used for the gateway port rather than for a floating IP port?

 IMHO it is not required when the IP is reserved. The user should have the
 possibility to reserve such an IP for his tenant and later use it as he
 wants (floating IP, instance or whatever)

 2) Are reservations something that an admin could specify on a
 tenant-basis (hence an admin API extension), or an implicit mechanism
 that can be tuned using configuration variables (for instance create
 an IP reservation for a gateway port for a given tenant when a router
 gateway is set).

 I apologise if these questions are dumb. I'm just trying to frame this
 discussion into something which could then possibly lead to
 submitting a specification.

 Salvatore


 On 21 May 2014 21:37, Collins, Sean sean_colli...@cable.comcast.com
 wrote:

  (Edited the subject since a lot of people filter based on the
  subject line)
 
  I would also be interested in reserved IPs - since we do not deploy
  the layer 3 agent and use the 

Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-23 Thread McCann, Jack
Hi Salvatore.

Nice.  That ought to do it.  Thanks for the pointer.

- Jack

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Friday, May 23, 2014 10:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] reservation of fixed ip

Hi Jack,

Do you mean this change by any chance?
https://review.openstack.org/#/c/77043/

Salvatore

On 23 May 2014 15:10, McCann, Jack jack.mcc...@hp.com wrote:
From the original ask:

 I know that there is possibility to create port with IP
 and later connect VM to this port. This solution is almost ok
 for me but problem is when user delete this instance - then
 port is also deleted and it is not reserved still for the same
 user and tenant.

This sounds like the problem of nova deleting a port that it did not
create.  We could look at a change (likely involving nova and neutron)
such that if I create a port and pass it in to nova boot, nova would
not delete that port when the VM is deleted.

- Jack


Re: [openstack-dev] [Neutron][Security Groups] Pings to router ip from VM with default security groups

2014-05-20 Thread McCann, Jack
I think this is a combination of two things...


1. When a VM initiates outbound communications, the egress rules
allow associated return traffic.  So if you allow outbound echo
request, the return echo reply will also be allowed.

2. The router interface will respond to ping.

- Jack

From: Narasimhan, Vivekanandan
Sent: Tuesday, May 20, 2014 8:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][Security Groups] Pings to router ip from VM 
with default security groups

Hi ,

We have been trying to understand behavior of security group rules in icehouse 
stable.

The default security group contains 4 rules, two ingress and two egress.

The two ingress rules are one for IPv4 and the other for IPv6.
We see both ingress rules use cyclic security groups, wherein the rule's
remote_security_group_id is the same as the security_group_id itself.
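
For illustration, the two "cyclic" ingress rules look roughly like this
(field names are as in the Neutron security group API, where this field is
called remote_group_id; the UUID is made up):

    default_group = 'a7b1c2d3-default-sg-uuid'

    default_ingress_rules = [
        {'direction': 'ingress', 'ethertype': 'IPv4',
         'security_group_id': default_group,
         'remote_group_id': default_group},   # allow traffic from group members
        {'direction': 'ingress', 'ethertype': 'IPv6',
         'security_group_id': default_group,
         'remote_group_id': default_group},
    ]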

Vm1 ---  R1 -- Vm2

Vm1 20.0.0.2
R1 interface 1 - 20.0.0.1
R1 interface 2 - 30.0.0.1
Vm2 30.0.0.2

We saw that with default security groups, Vm1 can ping its DHCP Server IP 
because of provider_rule in security group rules.

Vm1 is also able to ping Vm2 via router R1, as Vm1 port and Vm2 port share the 
same security group.

However, we noticed that Vm1 is also able to ping the router interface (R1
interface 1 IP - 20.0.0.1) and the other router interface (R1 interface 2
IP - 30.0.0.1) successfully.

Router interfaces do not have security groups associated with them, so the
router interface IPs won't get added to the iptables rules on the compute
node (CN) where Vm1 resides.

We are not able to figure out how the ping from Vm1 to the router interfaces
works when no explicit rules are added to allow them.

Could you please throw some light on this?

--
Thanks,

Vivek



Re: [openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in Icehouse

2014-04-24 Thread McCann, Jack
Thanks Mark.

What steps are necessary to promote these APIs beyond experimental?

- Jack

 -Original Message-
 From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
 Sent: Thursday, April 24, 2014 11:07 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in
 Icehouse
 
 
 On Apr 23, 2014, at 6:20 PM, McCann, Jack jack.mcc...@hp.com wrote:
 
  Are VPNaaS and FWaaS APIs still considered experimental in Icehouse?
 
  For VPNaaS, [1] says "This extension is experimental for the Havana release."
  For FWaaS, [2] says "The Firewall-as-a-Service (FWaaS) API is an experimental API..."
 
 
  Thanks for asking.  Both should still be considered experimental because the
  multivendor work was completed in Icehouse.
 
 
 
   [1] http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext.html
  [2] http://docs.openstack.org/admin-guide-cloud/content/fwaas.html
 
 
 
 


[openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in Icehouse

2014-04-23 Thread McCann, Jack
Are VPNaaS and FWaaS APIs still considered experimental in Icehouse?

For VPNaaS, [1] says "This extension is experimental for the Havana release."
For FWaaS, [2] says "The Firewall-as-a-Service (FWaaS) API is an experimental API..."

Thanks,

- Jack

[1] http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext.html
[2] http://docs.openstack.org/admin-guide-cloud/content/fwaas.html








Re: [openstack-dev] About multihost patch review

2013-08-28 Thread McCann, Jack
 That said, if there is potential value in offering both, it seems like it
 should be under the control of the deployer, not the user. In other words the
 deployer should be able to set the default network type and enforce whether
 setting the type is exposed to the user at all.

+1

From my perspective, multi-host is an option that should be in the hands
of the deployer/operator, not exposed to the end user.  My users should
not have to know or care how DHCP, routing and floating IPs are implemented
under the covers.

Also, last I looked, I think this patch required admin role to create a
multi-host network.  That may be OK for a single flat network or small-scale
multi-network environment, but it is not viable in a large-scale multi-tenant
environment.

- Jack

 -Original Message-
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
 Sent: Wednesday, August 28, 2013 1:29 PM
 To: OpenStack Development Mailing List
 Cc: Robert Kukura; Armando Migliaccio; Nachi Ueno; Sumit Naiksatam
 Subject: Re: [openstack-dev] About multihost patch review
 
 
 On Aug 26, 2013, at 6:14 PM, Maru Newby ma...@redhat.com wrote:
 
 
  On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:
 
  Hi Developers,
 
   Let me explain my point of view on this topic and please share your thoughts
   in order to merge this new feature ASAP.
 
   My understanding is that multi-host is nova-network HA and we are
   implementing this bp https://blueprints.launchpad.net/neutron/+spec/quantum-multihost
   for the same reason.
  So, If in neutron configuration admin enables multi-host:
  etc/dhcp_agent.ini
 
  # Support multi host networks
  # enable_multihost = False
 
   Why do tenants need to be aware of this? They should just create networks in
   the way they normally do and not by adding the multihost extension.
 
   I was pretty confused until I looked at the nova-network HA doc [1].  The
  proposed design would seem to emulate nova-network's multi-host HA option,
  where it was necessary to both run nova-network on every compute node and
  create a network explicitly as multi-host.  I'm not sure why nova-network
  was implemented in this way, since it would appear that multi-host is
  basically all-or-nothing.  Once nova-network services are running on every
  compute node, what does it mean to create a network that is not multi-host?
 
  Just to add a little background to the nova-network multi-host: the fact that
  the multi_host flag is stored per-network as opposed to a configuration was an
  implementation detail. While in theory this would support configurations where
  some networks are multi_host and other ones are not, I am not aware of any
  deployments where both are used together.
 
  That said, if there is potential value in offering both, it seems like it
  should be under the control of the deployer, not the user. In other words the
  deployer should be able to set the default network type and enforce whether
  setting the type is exposed to the user at all.
 
  Also, one final point. In my mind, multi-host is strictly better than single
  host; if I were to redesign nova-network today, I would get rid of the single
 host mode completely.
 
 Vish
 
 
   So, to Edgar's question - is there a reason other than 'be like nova-network'
   for requiring neutron multi-host to be configured per-network?
 
 
  m.
 
   1: http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html
 
 
  I could be totally wrong and crazy, so please provide some feedback.
 
  Thanks,
 
  Edgar
 
 
  From: Yongsheng Gong gong...@unitedstack.com
  Date: Monday, August 26, 2013 2:58 PM
  To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro 
 MOTOKI
 amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru Newby
 ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore Orlando
 sorla...@nicira.com, Sumit Naiksatam sumit.naiksa...@bigswitch.com, Mark
 McClain mark.mccl...@dreamhost.com, Gary Kotton gkot...@vmware.com, Robert
 Kukura rkuk...@redhat.com
  Cc: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: About multihost patch review
 
  Hi,
  Edgar Magana has commented to say:
   'This is the part that for me is confusing and I will need some clarification
  from the community. Do we expect to have the multi-host feature as an extension
  or something that will naturally work as long as the deployment includes more
  than one Network Node. In my opinion, Neutron deployments with more than one
  Network Node by default should call DHCP agents in all those nodes without the
  need to use an extension. If the community has decided to do this by extensions,
  then I am fine' at
 
 https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostnetwork.py
 
  I have commented back, what is your opinion about it?
 
  Regards,
  Yong Sheng Gong
 
 
  On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery