We want to try hierarchical port binding for some ToR VTEPs. However, it
seems that the patch [1] cannot be applied directly onto the Juno
release. So are there any other patches needed when merging this patch?
Thanks a lot!
[1]
Extra subnets is suitable as an attribute of individual IKE connections,
but not of the whole VPN service, because customers usually want
fine-grained control over which local subnets can communicate with which
remote subnets.
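To make the per-connection idea concrete, here is a minimal sketch; the `local_subnets`/`peer_subnets` field names are invented for illustration and are not the actual neutron-vpnaas API. It expands each IKE connection's own subnet lists into the allowed local/remote pairs:

```python
def allowed_pairs(connections):
    """Expand each connection's local/peer subnet lists into the
    (local, remote) pairs that connection permits."""
    pairs = []
    for conn in connections:
        for local in conn["local_subnets"]:
            for peer in conn["peer_subnets"]:
                pairs.append((local, peer))
    return pairs

# Two connections with different, non-overlapping subnet scopes.
connections = [
    {"name": "site-a", "local_subnets": ["10.0.1.0/24"],
     "peer_subnets": ["192.168.1.0/24", "192.168.2.0/24"]},
    {"name": "site-b", "local_subnets": ["10.0.2.0/24"],
     "peer_subnets": ["192.168.3.0/24"]},
]
print(allowed_pairs(connections))
```

With a single service-wide subnet list, site-b's local subnet would also be offered to site-a's peers; scoping the lists per connection avoids that.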
On Wed, May 20, 2015 at 11:21 PM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
However after
Hi,
Nova cells have existed for several release cycles, with the mission of
scalability. Now there are calls for Neutron cells. Maybe similar
Cinder/Ceilometer partitioning demands will appear in the future.
So I wonder whether cross-project OpenStack partitioning will come into
our sight in the near future.
On Thu, Apr 30, 2015 at 2:44 AM, Russell Bryant rbry...@redhat.com wrote:
On 04/29/2015 01:25 PM, Doug Wiegley wrote:
My take on the “where does it fit” yardstick:
Does it stand on its own and add value? Then consider it a standalone
project, *or* part of neutron if you and neutron agree that
From: loy wolfe
Sent: Tuesday, April 28, 2015 6:16:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack
project maintained by core team keep only API/DB in the future?
On Wed, Apr 29
On Wed, Apr 29, 2015 at 2:59 AM, Kevin Benton blak...@gmail.com wrote:
The concern is that having broken drivers out there that claim to work with
an OpenStack project ends up making the project look bad. It's similar to a
first-time Linux user experiencing frequent kernel panics because they
On Fri, Apr 24, 2015 at 9:46 PM, Salvatore Orlando sorla...@nicira.com wrote:
On 24 April 2015 at 15:13, Kyle Mestery mest...@mestery.com wrote:
On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe loywo...@gmail.com wrote:
It's already away from the original thread, so I start this new one,
also
On Fri, Apr 24, 2015 at 9:13 PM, Kyle Mestery mest...@mestery.com wrote:
On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe loywo...@gmail.com wrote:
It's already away from the original thread, so I start this new one,
also with some extra tags because I think it touches some cross-project
area
It's already away from the original thread, so I start this new one,
also with some extra tags because I think it touches some cross-project
area.
Original discussion and reference:
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html
On Thu, Apr 23, 2015 at 3:30 AM, Kyle Mestery mest...@mestery.com wrote:
On Wed, Apr 22, 2015 at 1:19 PM, Russell Bryant rbry...@redhat.com wrote:
Hello!
A couple of things I've been working on lately are project governance
issues as a TC member and also implementation of a new virtual
and whatever other plumbing they want to do.
On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:
On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:
The fact that a system doesn't use a neutron agent is not a good
justification for monolithic vs driver. The VLAN drivers
(auditing,
more access control, etc).
On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:
+1 to separate monolithic OVN plugin
The ML2 has been designed for the co-existence of multiple heterogeneous
backends; it works well for all agent solutions: OVS, Linux Bridge, and
even ofagent
will push every piece of work to the backend through HTTP calls, which
partly blocks horizontal interoperation with other backends.
Then I'm thinking about this pattern: ML2 with a thin MD - agent - HTTP
call to the backend? That should be much easier for horizontal
interoperation.
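A rough sketch of that pattern, with hypothetical class and method names loosely modeled on the ML2 driver API (this is not actual Neutron code): the thin MD only records the port and hands it to the agent, and only the agent-side client talks HTTP to the backend.

```python
import json
import urllib.request

class BackendClient:
    """Minimal HTTP client the *agent* would use to reach the backend;
    the URL scheme here is an assumption, not a real backend API."""
    def __init__(self, base_url):
        self.base_url = base_url

    def update_port(self, port):
        req = urllib.request.Request(
            self.base_url + "/ports/" + port["id"],
            data=json.dumps(port).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT")
        return urllib.request.urlopen(req)

class ThinMechanismDriver:
    """Thin MD: no HTTP calls from the Neutron server process; it just
    queues work for the agent (stands in for an agent RPC notification)."""
    def __init__(self):
        self.pending = []

    def update_port_postcommit(self, context):
        # Hand the port off to the agent instead of calling the backend
        # directly, so other MDs can still cooperate on the same port.
        self.pending.append(context["port"])
```

The point of the split is that the Neutron server stays backend-agnostic: only the agent process carries the HTTP dependency.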
On Feb 25, 2015 6:15 PM, loy wolfe loywo
+1 to separate monolithic OVN plugin
The ML2 has been designed for the co-existence of multiple heterogeneous
backends; it works well for all agent solutions: OVS, Linux Bridge, and
even ofagent.
However, when things come to all kinds of agentless solutions, especially
all kinds of SDN controllers
Removing everything out of the tree, leaving only the Neutron API framework
as an integration platform, would lower the attractiveness of the whole
OpenStack project. Without a good-enough default reference backend
from the community, customers have to depend on packagers to fully test
all backends for them. Can
I have a little confusion about this etherpad topic: Neutron is not a
controller! Can OpenDaylight become The Controller?
Is this the common consensus of the whole Neutron community, and what
does the word controller mean here?
If Neutron controller nodes are not doing anything related to
controlling,
maybe two reasons: performance caused by flow misses; feature parity.
L3+ flow tables destroy megaflow aggregation, so if your app has many
concurrent sessions, like a web server, flow-miss upcalls would overwhelm
vswitchd.
iptables is already there; migrating it to OVS flow tables needs a lot
of
I doubt whether cells themselves can ease large-scale deployment across
multiple DCs. Although OpenStack is modularized into separate projects,
a top-level total solution across projects is needed when considering
issues such as scaling, especially coordination between Nova and
networking/metering.
Hi Joe and Cellers,
I've tried to understand the relationship between Cells and Cascading. If
Cells were designed as below, would they be the same as Cascading?
1) Besides Nova, Neutron/Ceilometer... are also hierarchically
structured for scalability.
2) Child-parent interaction is based on REST
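As a toy illustration of point 2, a parent could route resource calls to child regions over REST; the endpoint layout below is an assumption for illustration, not an actual Cells or Cascading API:

```python
class ParentProxy:
    """Parent-level proxy that maps a child region name to the REST URL
    a hierarchical call would hit. Purely illustrative."""
    def __init__(self, children):
        # children: mapping of child-region name -> child API root URL
        self.children = children

    def route(self, region, resource):
        # Neutron-style v2.0 path is assumed for the child endpoint.
        return "{}/v2.0/{}".format(self.children[region], resource)

proxy = ParentProxy({"child-1": "http://child1.example:9696"})
print(proxy.route("child-1", "networks"))
# -> http://child1.example:9696/v2.0/networks
```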
Is ToR-based VXLAN really a common interest to most people, and should a
BP be taken care of now, with its designed use case of third-party ToR
backend integration?
On Fri, Sep 5, 2014 at 10:54 PM, Robert Kukura kuk...@noironetworks.com
wrote:
Kyle,
Please consider an FFE for
If the Neutron-side MD is just for Snabb Switch, then I think there is no
chance for it to be merged into the tree. Maybe we can learn from the
SR-IOV NIC: although the backend is vendor-specific, the MD is generic and
can support Snabb, DPDK OVS, and other userspace vswitches, etc.
As the reference implementation
/plugins/ml2/drivers/mech_agent.py#L53
2.
https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326
On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:
On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton
be restricted to API changes, it should be used for any major
new features that are possible to develop outside of the Neutron core. If
we are going to have this new incubator tool, we should use it to the
fullest extent possible.
On Tue, Aug 26, 2014 at 6:19 PM, loy wolfe loywo...@gmail.com wrote
Incubator doesn't mean being kicked out of the tree; it just means that the
API and resource model need to be baked for fast iteration and can't be put
in the tree for the time being. As Kyle has said, the incubator is not
about moving 3rd-party drivers out of the tree, which is in another thread.
For DVR, as it has
On Sun, Aug 24, 2014 at 5:09 PM, Luke Gorrie l...@tail-f.com wrote:
On 21 August 2014 12:12, Ihar Hrachyshka ihrac...@redhat.com wrote:
Let the ones that are primarily interested in
good quality of that code (vendors) drive development. And if some
plugins become garbage, it's bad news
Forwarded from another thread discussing the incubator:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html
Completely agree with this sentiment. Is there a crisp distinction between
a vendor plugin and an open-source plugin, though?
I think that open source is not the
/mechanism_odl.py
On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:
Forwarded from another thread discussing the incubator:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html
Completely agree with this sentiment. Is there a crisp distinction
On Thu, Aug 21, 2014 at 12:28 AM, Salvatore Orlando sorla...@nicira.com
wrote:
Some comments inline.
Salvatore
On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com wrote:
Hi all,
I've read the proposal for incubator as described
On Wed, Aug 20, 2014 at 7:03 PM, Salvatore Orlando sorla...@nicira.com
wrote:
As the original thread had a completely different subject, I'm starting a
new one here.
More specifically the aim of this thread is about:
1) Define when a service is best implemented with a service plugin or with
On Thu, Aug 14, 2014 at 4:22 PM, Mathieu Rohon mathieu.ro...@gmail.com
wrote:
Hi,
I would like to add that it would be harder for the community to help
maintain drivers.
Such work [1] wouldn't have occurred with an out-of-tree ODL driver.
+1.
It's better to move all MD for none
- preserve the ability of doing CI checks on gerrit as we do today
- raise the CI bar (maybe finally set the smoketest as a minimum
requirement?)
Regards,
Salvatore
On 14 August 2014 11:47, loy wolfe loywo...@gmail.com wrote:
On Thu, Aug 14, 2014 at 4:22 PM, Mathieu
/L2P/L3P model while not just
renaming them again and again. APPLY the policy template to existing
Neutron core resources, but do not reinvent similar concepts in GBP and
then do the mapping.
On Mon, Aug 11, 2014 at 9:12 PM, CARVER, PAUL pc2...@att.com wrote:
loy wolfe [mailto:loywo...@gmail.com
Hi Sumit,
First I want to say I'm not opposed to GBP itself, but I have many
confusions about its core resource model and how it will integrate with
the Neutron core.
Do you mean that for whatever GBP backend is configured in any future
Neutron deployment, as long as they are in tree, then the ML2 core plugin
+1 mark
On Tue, Aug 5, 2014 at 4:27 AM, Mark McClain mmccl...@yahoo-inc.com wrote:
All-
tl;dr
* The Group Based Policy API is the kind of experimentation we should be
attempting.
* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper
OpenStack is designed to be decoupled between all modules, both Nova and
all Neutron plugins: ML2, L3, advanced... However, their agents may need
some sort of interaction. Here are two examples:
1) DVR. L2 population already pushes all port contexts for a subnet to the
CN, but the L3 agent on it has to get once
On Fri, Jul 25, 2014 at 5:43 AM, Robert Li (baoli) ba...@cisco.com wrote:
Hi Kyle,
Sorry I missed your queries on the IRC channel today. I was thinking about
this whole BP. After chatting with Irena this morning, I think that I
understand what this BP is trying to achieve overall. I also had
it is another BP about NFV:
https://review.openstack.org/#/c/97715
On Tue, Jul 22, 2014 at 9:37 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:
On Mon, Jul 21, 2014 at 02:52:04PM -0500,
Kyle Mestery mest...@mestery.com wrote:
Following up with post SAD status:
*
any relation with this BP?
https://review.openstack.org/#/c/97715/6/specs/juno/nfv-unaddressed-interfaces.rst
On Tue, Jul 22, 2014 at 11:17 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:
I'd like to request Juno spec freeze exception for ML2 OVS portsecurity
extension.
-
to distinguish these two cases: for a VNF
VM we need to totally bypass qbr with the 'no-port-filter' setting and
ovs-plug, while for some other VMs we just need something like a
default empty filter, still with ovs-hybrid-plug.
On Mon, Jul 14, 2014 at 11:19:05AM +0800,
loy wolfe loywo...@gmail.com wrote
port with flexible IP address settings is necessary. I collected several
use cases:
1) when creating a port, we need to indicate:
[A] binding to none of the subnets (no IP address);
[B] binding to all subnets;
[C] binding to any subnet;
[D] binding to an explicit list of subnets,
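One way to picture the four cases, using a hypothetical `mode` value invented here for illustration (not an actual Neutron API field):

```python
def select_subnets(mode, all_subnets, explicit=None):
    """Return the subnets a new port would draw addresses from,
    for each of the four proposed create-port cases."""
    if mode == "none":        # [A] no IP address at all
        return []
    if mode == "all":         # [B] one address from every subnet
        return list(all_subnets)
    if mode == "any":         # [C] let the scheduler pick one subnet
        return all_subnets[:1]
    if mode == "list":        # [D] only the explicitly listed subnets
        return [s for s in all_subnets if s in (explicit or [])]
    raise ValueError("unknown mode: %s" % mode)

subnets = ["subnet-1", "subnet-2", "subnet-3"]
print(select_subnets("list", subnets, explicit=["subnet-2"]))
# -> ['subnet-2']
```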
the request.
Regards,
Irena
*From:* loy wolfe [mailto:loywo...@gmail.com loywo...@gmail.com]
*Sent:* Thursday, July 10, 2014 6:00 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Cc:* Mooney, Sean K
*Subject:* Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs
I think both a new vnic_type and a new vif_type should be added. Now vnic
has three types: normal, direct, macvtap; we need a new type, uservhost.
As for vif_type, now we have VIF_TYPE_OVS, VIF_TYPE_QBH/QBG, VIF_HW_VEB,
so we need a new VIF_TYPE_USEROVS.
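A sketch of what the proposed constants might look like; `uservhost` and `VIF_TYPE_USEROVS` are the names proposed in the mail, not merged code, and the mapping function is purely illustrative:

```python
# Existing vnic types plus the proposed userspace-vhost one.
VNIC_TYPES = ("normal", "direct", "macvtap", "uservhost")

VIF_TYPE_OVS = "ovs"
VIF_TYPE_USEROVS = "userovs"   # proposed, not an actual Neutron constant

def vif_type_for(vnic_type):
    """Pick the VIF type an MD would report in the port binding."""
    if vnic_type not in VNIC_TYPES:
        raise ValueError("unknown vnic_type: %s" % vnic_type)
    if vnic_type == "uservhost":
        # vhost-user socket plug for a userspace vswitch, no kernel OVS.
        return VIF_TYPE_USEROVS
    return VIF_TYPE_OVS

print(vif_type_for("uservhost"))
# -> userovs
```

Nova would then dispatch on the reported vif_type to choose the plug mechanism, which is why both a new vnic_type (requested by the user) and a new vif_type (reported by the MD) are needed.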
I don't think it's a good idea to
I read the ML2 tracking reviews and found two similar specs for an L2
gateway:
1) GW API: L2 bridging API - Piece 1: Basic use cases
https://review.openstack.org/#/c/93613/
2) API Extension for l2-gateway
https://review.openstack.org/#/c/100278/
Also, the Neutron external port spec has some relationship
On Wed, Jun 25, 2014 at 10:29 PM, McCann, Jack jack.mcc...@hp.com wrote:
If every compute node is
assigned a public IP, is it technically able to improve SNAT packets
without going through the network node?
It is technically possible to implement default SNAT at the compute node.
One
GBP should support applying policy on existing OpenStack deployments, so
neither implicit mapping nor intercepting works well.
Maybe the explicit association model is best: associate an EPG with an
existing Neutron network object (policy automatically applied to all ports
on it), or with a single port object