Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-23 Thread henry hly
On Thu, Apr 23, 2015 at 10:44 AM, Armando M. arma...@gmail.com wrote:

 Could you please also pay some attention to the cons of this ultimate
 split, Kyle? I'm afraid it would hurt the user experience.

 From the developers' point of view, a bare Neutron without an official
 built-in reference implementation probably has a cleaner architecture.
 On the other hand, users would be forced to choose from a long list of
 backend implementations, which is very difficult for non-specialists.

 Most of the time, users need an off-the-shelf solution without paying
 much extra integration effort, and they have little interest in studying
 which SDN controller is more powerful than the others. Can we imagine
 Nova without the KVM/QEMU virt driver, or Cinder without the Ceph/LVM
 volume drivers [see the Deployment Profiles section in 1a]? Shall we
 really decide to make Neutron the only OpenStack project without any
 official in-tree implementation?


 I think the analogy here is between the agent reference implementation vs
 KVM or Ceph, rather than the plumbing that taps into the underlying
 technology. Nova doesn't build/package KVM, just as Cinder doesn't
 build/package Ceph. Neutron could rely on other open source solutions
 (ODL, OpenContrail, OVN, etc.), and still be similar to the other projects.

 I think there's still room for clarifying what the split needs to be, but I
 have always seen Neutron as the exception rather than the norm, where, for
 historic reasons, we had to build everything from the ground up for lack of
 viable open source solutions at the time the project was conceived.


Thanks for bringing up this interesting topic. Maybe it should not be
scoped only to Neutron; I also found a similar discussion from John
Griffith on Cinder vs. SDS controllers :-)

https://griffithscorner.wordpress.com/2014/05/16/the-problem-with-sds-under-cinder/

It's clear that a typical cloud deployment is composed of two distinct
parts: the workload engine and the supervisor. The engine part obviously
does not belong to the OpenStack project; it includes open source pieces
like KVM, Ceph, and OVS/Linux stack/haproxy/openswan, as well as vendor
products like vCenter/ESXi, SAN disk arrays, and all kinds of networking
hardware gear or virtualized service VMs.

For the supervisor part, however, the debate is blurrier: should
OpenStack provide a complete in-house implementation of the controlling
functions that can directly drive the backend workload engines (via
backend drivers), or just a thin API/DB layer that needs to integrate
third-party external controller projects to do the work of scheduling,
pooling and service-logic abstraction? For networking, how should we
regard the functions of the plugin/agent versus an SDN controller: are
they in the same layer as the real backend working engines like
switches/routers/firewalls?

For Nova & Cinder, it seems the former is adopted: a single unified
central framework including API, scheduling, abstract service logic,
RPC & message queue, and a common agent-side framework (the
compute/volume manager), with a bunch of virt/volume drivers plugged in
to abstract all kinds of backends. There are standalone backends like
KVM and LVM, and aggregated clustering backends like vCenter and Ceph.

Neutron, by contrast, has been a game of continuous refactoring: plugin,
meta plugin, ML2, and now the platform. Next, the ML2 plugin suddenly
becomes just a reference for proof of concept, and no plugin/agent would
be officially maintained in-tree anymore, while the reason, confusingly,
is not to compete with third-party SDN controllers :-P




 [1a]
 http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014

 Here is my personal suggestion: the decomposition decision needs some
 trade-off, keeping 2-3 mainstream open source backends in tree [ML2
 with OVS & LB, based on the survey results of 1a above]. While we are
 progressing radically with the architecture refactoring, a smooth
 experience and ease of adoption should also be taken care of.

 
  One thing worth bringing up in this context is the potential
  overlap between these implementations. I think having them all under the
  Neutron project would allow me as PTL and the rest of the team to
  combine things when it makes sense.
 
  Kyle
 
  [1] http://www.faqs.org/rfcs/rfc1149.html
 
 
  b) Let each interested group define a new project team for their
  backend
  code.
 
  To be honest, I don't think this is a scalable option. I'm involved with 2 of
  these networking-foo projects, and there is not enough participation so
  far
  to warrant an entirely new project, PTL and infra around it. This is
  just my
  opinion, but it's an opinion I've taken after having contributed to
  networking-odl and networking-ovn for the past 5 months.
 
 
  So, as an example, the group of people working on Neutron integration
  with OpenDaylight could propose a new project team that would be a
  projects.yaml entry that looks something like:
 
  Neutron-OpenDaylight:
ptl: Some Person 

Re: [openstack-dev] [neutron] [doc] what's happened to api documents?

2015-04-13 Thread henry hly
Thanks a lot, henry :)

On Mon, Apr 13, 2015 at 6:57 PM, Henry Gessau ges...@cisco.com wrote:
 On Mon, Apr 13, 2015, henry hly henry4...@gmail.com wrote:
 http://developer.openstack.org/api-ref-networking-v2.html

 The above API document seems to have lost most of its content, leaving only
 port, network, and subnet?

 In the navigation bar on the left there is a link to the rest of the Neutron
 API, which is implemented as extensions:
 http://developer.openstack.org/api-ref-networking-v2-ext.html




[openstack-dev] [neutron] [doc] what's happened to api documents?

2015-04-13 Thread henry hly
http://developer.openstack.org/api-ref-networking-v2.html

The above API document seems to have lost most of its content, leaving only
port, network, and subnet?



Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread henry hly
On Thu, Apr 2, 2015 at 3:51 PM, Kevin Benton blak...@gmail.com wrote:
 Whoops, wrong link in last email.

 https://etherpad.openstack.org/p/liberty-neutron-summit-topics

 On Thu, Apr 2, 2015 at 12:50 AM, Kevin Benton blak...@gmail.com wrote:

 Coordinating communication between various backends for encapsulation
 termination is something that would be really nice to address in Liberty.
 I've added it to the etherpad to bring it up at the summit.[1]


Thanks a lot, Kevin.
I think it's really important, as more and more customers are asking
about coordination between various backends.


 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

 On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
 wrote:

 Hello,

 I think the easiest way could be to have your own mech_driver (AFAIK such
 drivers are meant for exactly this usage) to talk with external devices and
 tell them what tunnels they should establish.

Sure, I agree.

 With the change to tun_ip that Henry proposes, the l2pop agent will be able
 to establish tunnels with external devices.

Maybe that's not necessary here; the key point is that interaction between
l2pop and the external device MD is needed. Below are just some very basic
ideas (a rough sketch follows the list):

1) MD as the plugin-side agent?
* each MD registers a hook in l2pop, and l2pop then calls the hook list in
addition to notifying the agents;
* the MD simulates an update_device_up/down, but with binding:tun_ip
because it has no agent_ip;
* how the MD gets the port status remains unsolved.

2) Things may be much easier in the case of hierarchical port binding
(merged in Kilo):
* an ovs/linuxbridge agent still exists to produce the
update_device_up/down messages;
* the external device MD gets the port status update, adds tun_ip to the
port context, and then triggers the l2pop MD?
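
To make idea 2) a bit more concrete, here is a rough, purely illustrative
sketch of an external-device MD along those lines. The binding:tun_ip
attribute and the l2pop hook it calls are assumptions taken from this
thread, not existing Neutron interfaces:

    # Illustrative sketch only: 'binding:tun_ip' and the l2pop hook below
    # do not exist in Neutron today; they stand in for the interaction
    # proposed in this thread.
    from neutron.plugins.ml2 import driver_api as api

    class ExternalVtepMechanismDriver(api.MechanismDriver):

        def initialize(self):
            self._l2pop_hook = None   # would be registered with l2pop (idea 1)

        def update_port_postcommit(self, context):
            port = context.current
            tun_ip = port.get('binding:tun_ip')   # proposed per-port endpoint
            if not tun_ip or self._l2pop_hook is None:
                return
            # Hand the port's own tunnel endpoint to l2pop instead of an
            # agent_ip, so agentless backends can join the tunnel mesh.
            self._l2pop_hook(port['network_id'], tun_ip,
                             port['mac_address'],
                             port['fixed_ips'][0]['ip_address'])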


 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think
  about
  how external gateway could leverage the l2pop framework.
 
  Currently l2pop sends its fdb messages once the status of the port is
  modified. AFAIK, this status is only modified by agents which send
  update_device_up/down().
  This issue also has to be addressed if we want agentless equipment to be
  announced through l2pop.
 
  Another way to do it is to introduce some bgp speakers with e-vpn
  capabilities at the control plane of ML2 (as a MD for instance).
  Bagpipe

Hi Mathieu,

Thanks for your idea; the interaction between l2pop and other MDs is
really the key point, and removing agent_ip is just the first step.
BGP speakers are interesting, but I think the goal is not quite the
same, because I want to keep compatibility with existing deployed l2pop
solutions, and to extend and enhance l2pop rather than replace it
entirely.

  [1] is an opensource bgp speaker which is able to do that.
  BGP is standardized so equipments might already have it embedded.
 
  last summit, we talked about this kind of idea [2]. We were going
  further
  by introducing the bgp speaker on each compute node, in use case B of
  [2].
 
  [1]https://github.com/Orange-OpenSource/bagpipe-bgp
 
  [2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in l2pop to store endpoints for ports on a
   tunnel-type network, such as vxlan or gre. However this has some
   drawbacks:
  
   1) It can only work with backends that have agents;
   2) Only one fixed ip is supported per agent;
   3) It is difficult to interact with other backends and the world
   outside of OpenStack.
  
   L2pop is already widely accepted and deployed in host-based overlays;
   however, because it uses agent_ip to populate the tunnel endpoint, it's
   very hard to co-exist and inter-operate with other vxlan backends,
   especially agentless MDs.
  
   A small change is suggested: the tunnel endpoint should not be an
   attribute of the *agent*, but an attribute of the *port*; if we store
   it in something like *binding:tun_ip*, it is much easier for different
   backends to co-exist. The existing ovs and linuxbridge agents need a
   small patch to put the local agent_ip into the port context binding
   fields when doing the port_up rpc.
  
   Several extra benefits may also be obtained this way:
  
   1) we can easily and naturally create an *external vxlan/gre port*
   which is not attached to a Nova-booted VM, with binding:tun_ip set at
   creation time;
   2) we can develop a *proxy agent* which manages a bunch of remote
   external backends, without being restricted by its agent_ip.
  
   Best Regards,
   Henry
  
  

[openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-26 Thread henry hly
Hi ML2er,

Today we use agent_ip in l2pop to store endpoints for ports on a
tunnel-type network, such as vxlan or gre. However this has some
drawbacks:

1) It can only work with backends that have agents;
2) Only one fixed ip is supported per agent;
3) It is difficult to interact with other backends and the world outside of OpenStack.

L2pop is already widely accepted and deployed in host-based overlays;
however, because it uses agent_ip to populate the tunnel endpoint, it's
very hard to co-exist and inter-operate with other vxlan backends,
especially agentless MDs.

A small change is suggested: the tunnel endpoint should not be an
attribute of the *agent*, but an attribute of the *port*; if we store
it in something like *binding:tun_ip*, it is much easier for different
backends to co-exist. The existing ovs and linuxbridge agents need a small
patch to put the local agent_ip into the port context binding fields
when doing the port_up rpc.

Several extra benefits may also be obtained this way:

1) we can easily and naturally create an *external vxlan/gre port* which
is not attached to a Nova-booted VM, with binding:tun_ip set at creation
time (see the sketch below);
2) we can develop a *proxy agent* which manages a bunch of remote
external backends, without being restricted by its agent_ip.
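
As a purely illustrative example of benefit 1) (binding:tun_ip is the
attribute proposed here, not an existing field, and all values are made
up), a port created for an external VTEP might carry its own tunnel
endpoint like this, while the ovs agent would simply fill in its local_ip
for normal ports:

    # Sketch only: 'binding:tun_ip' is the new attribute proposed in this
    # mail; the values are placeholders.
    external_vtep_port = {
        'port': {
            'network_id': 'NET_UUID',        # a vxlan tenant network
            'name': 'external-vtep-port',
            'binding:host_id': '',           # no nova host / agent behind it
            'binding:tun_ip': '192.0.2.10',  # tunnel endpoint of the backend
        }
    }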

Best Regards,
Henry



Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2015-02-24 Thread henry hly
So are we talking about using scripts to eliminate unnecessary new vif types?

Then I'm a little confused about why BP [1] is postponed to L, while
BP [2] was merged in K.

[1]  https://review.openstack.org/#/c/146914/
[2]  https://review.openstack.org/#/c/148805/

In fact [2] could be replaced by [1] with a customized vrouter script, with
no need for a totally new vif type introduced in the K cycle.

On Thu, Feb 19, 2015 at 3:42 AM, Brent Eagles beag...@redhat.com wrote:
 Hi,

 On 18/02/2015 1:53 PM, Maxime Leroy wrote:
 Hi Brent,

 snip/

 Thanks for your help on this feature. I have just created a channel
 irc: #vif-plug-script-support to speak about it.
 I think it will help to synchronize effort on vif_plug_script
 development. Anyone is welcome on this channel!

 Cheers,
 Maxime

 Thanks Maxime. I've made some updates to the etherpad.
 (https://etherpad.openstack.org/p/nova_vif_plug_script_spec)
 I'm going to start some proof of concept work this evening. If I get
 anything worth reading, I'll put it up as a WIP/Draft review. Whatever
 state it is in I will be pushing up bits and pieces to github.

 https://github.com/beagles/neutron_hacking vif-plug-script
 https://github.com/beagles/nova vif-plug-script

 Cheers,

 Brent





Re: [openstack-dev] ECMP on Neutron virtual router

2015-02-24 Thread henry hly
On Wed, Feb 25, 2015 at 3:11 AM, Kevin Benton blak...@gmail.com wrote:
 I wonder if there is a way we can easily abuse the extra routes extension to
 do this? Maybe two routes to the same network would imply ECMP.


It's a good idea; we deploy a system with a similar concept (via extra
routes) using a tiny patch on the existing neutron L3 plugin and agent code.
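
For illustration, the extra-routes payload is just a list of
destination/nexthop pairs, so the "abuse" Kevin mentions could look like
the update below; whether the API and the L3 agent accept two entries to
the same destination and program them as ECMP is exactly what a patch
like ours has to add, it is not stock behaviour:

    # Sketch: two extra routes to the same destination with different
    # next-hops, which a patched L3 agent could translate into an ECMP
    # (multipath) route.  Addresses are examples only.
    router_update_body = {
        'router': {
            'routes': [
                {'destination': '10.20.0.0/24', 'nexthop': '10.0.0.11'},
                {'destination': '10.20.0.0/24', 'nexthop': '10.0.0.12'},
            ]
        }
    }
    # e.g. PUT /v2.0/routers/<router_id> with the body above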

 If not, maybe this can fit into a larger refactoring for route management
 (dynamic routing, etc).

 On Feb 24, 2015 11:02 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 It doesn't support this at this time.  There are no current plans to
 make it work.  I'm curious to know how you would like for this to work
 in your deployment.

 Carl

 On Tue, Feb 24, 2015 at 11:32 AM, NAPIERALA, MARIA H mn1...@att.com
 wrote:
  Does Neutron router support ECMP across multiple static routes to the
  same
  destination network but with different next-hops?
 
  Maria
 
 
 


[openstack-dev] [Neutron] [ML2] [arp] [l2pop] arp responding for vlan network

2015-02-03 Thread henry hly
Hi ML2'ers,

We have encountered a use case with a large number of vlan networks
deployed, and we want to reduce ARP storms by responding locally.

Luckily, a local ARP responder has been implemented since Icehouse;
however, vlan is missing from l2pop. Then came BP [1], which implements
plugin support for l2pop on configurable network types, plus ofagent
vlan l2pop.

Now I have found a proposal for ovs vlan support in l2pop [2]; it's very
small and was submitted as a bugfix, so I want to know whether it could
be merged in the K cycle?

Best regards
Henry

[1] https://review.openstack.org/#/c/112947/
[2] https://bugs.launchpad.net/neutron/+bug/1413056



Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread henry hly
On Tue, Dec 16, 2014 at 1:53 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
 Hi all,

 Following the approval for Neutron vendor code decomposition
 (https://review.openstack.org/#/c/134680/), I just wanted to comment
 that it appears to work fine to have an ML2 mechanism driver _entirely_
 out of tree, so long as the vendor repository that provides the ML2
 mechanism driver does something like this to register their driver as a
 neutron.ml2.mechanism_drivers entry point:

   setuptools.setup(
   ...,
   entry_points = {
   ...,
   'neutron.ml2.mechanism_drivers': [
   'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
   ],
   },
   )

 (Please see
 https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c
 for the complete change and detail, for the example that works for me.)

 Then Neutron and the vendor package can be separately installed, and the
 vendor's driver name configured in ml2_conf.ini, and everything works.

 Given that, I wonder:

 - is that what the architects of the decomposition are expecting?

 - other than for the reference OVS driver, are there any reasons in
   principle for keeping _any_ ML2 mechanism driver code in tree?


Good questions. I'm also wondering about the linuxbridge MD, the SR-IOV MD...
Who will be responsible for these drivers?

The OVS driver is maintained by the Neutron community, vendor-specific
hardware drivers by the vendors, and SDN controller drivers by their own
communities or vendors. But there are also drivers like SR-IOV, which
are generic across many vendors' backends and can't be maintained by any
single vendor/community.

So it would be better to keep some generic backend MDs in tree besides
SR-IOV. There are also vif-type-tap, vif-type-vhostuser,
hierarchical-binding-external-VTEP ... We could implement a very thin
in-tree base MD that only handles the vif binding, which is backend
agnostic; a backend provider is then free to implement its own service
logic, either with a backend agent, or in a driver derived from the base
MD for the agentless scenario. A minimal sketch follows.
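
A very rough sketch of such a thin base MD (the class name and the
generic 'tap' vif type are assumptions from this thread, and any check
that the backend can actually serve the segment is omitted):

    # Minimal sketch of a backend-agnostic, binding-only MD; a real driver
    # would at least validate the segment before binding.
    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api

    class ThinBindingMechanismDriver(api.MechanismDriver):
        """Only binds the port; service logic lives in the backend."""

        def initialize(self):
            self.vif_type = 'tap'   # proposed generic vif type, not upstream
            self.vif_details = {portbindings.CAP_PORT_FILTER: False}

        def bind_port(self, context):
            for segment in context.segments_to_bind:
                context.set_binding(segment[api.ID],
                                    self.vif_type,
                                    self.vif_details)
                return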

Regards

 Many thanks,
  Neil



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread henry hly
On Fri, Dec 12, 2014 at 4:10 PM, Steve Gordon sgor...@redhat.com wrote:
 - Original Message -
 From: henry hly henry4...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith d...@danplanet.com
 wrote:
  [joehuang] Could you pls. make it more clear for the deployment
  mode
  of cells when used for globally distributed DCs with single API.
  Do
  you mean cinder/neutron/glance/ceilometer will be shared by all
  cells, and use RPC for inter-dc communication, and only support
  one
  vendor's OpenStack distribution? How to do the cross data center
  integration and troubleshooting with RPC if the
   driver/agent/backend (storage/network/server) comes from a different vendor.
 
  Correct, cells only applies to single-vendor distributed
  deployments. In
  both its current and future forms, it uses private APIs for
  communication between the components, and thus isn't suited for a
  multi-vendor environment.
 
  Just MHO, but building functionality into existing or new
  components to
  allow deployments from multiple vendors to appear as a single API
  endpoint isn't something I have much interest in.
 
  --Dan
 

 Even with the same distribution, Cells still faces many challenges
 across multiple DCs connected over a WAN. Considering OAM, it's easier
 to manage autonomous systems connected through an external northbound
 interface across remote sites than a single monolithic system connected
 through internal RPC messages.

 The key question here is this primarily the role of OpenStack or an external 
 cloud management platform, and I don't profess to know the answer. What do 
 people use (workaround or otherwise) for these use cases *today*? Another 
 question I have is, one of the stated use cases is for managing OpenStack 
 clouds from multiple vendors - is the implication here that some of these 
 have additional divergent API extensions or is the concern solely the 
 incompatibilities inherent in communicating using the RPC mechanisms? If 
 there are divergent API extensions, how is that handled from a proxying point 
 of view if not all underlying OpenStack clouds necessarily support it (I 
 guess same applies when using distributions without additional extensions but 
 of different versions - e.g. Icehouse vs Juno which I believe was also a 
 targeted use case?)?

It's not about divergent northbound API extensions. Services between
OpenStack projects are SOA based; that is a vertical split. So when
building a large, distributed system (whatever it is) with horizontal
splitting, shouldn't we prefer clear and stable RESTful interfaces
between these building blocks?


 Although Cells did some separation and modularization (not to mention
 it's still internal RPC across the WAN), it leaves out cinder, neutron
 and ceilometer. Shall we wait for all these projects to be refactored
 into a Cells-like hierarchical structure, or adopt a more loosely
 coupled way and distribute them as autonomous units at the granularity
 of a whole OpenStack (except Keystone, which can handle multiple
 regions naturally)?

 Similarly though, is the intent with Cascading that each new project would 
 have to also implement and provide a proxy for use in these deployments? One 
 of the challenges with maintaining/supporting the existing Cells 
 implementation has been that it's effectively it's own thing and as a result 
 it is often not considered when adding new functionality.

Yes, we need new proxies, but the nova proxy is just a new type of virt
driver, the neutron proxy a new type of agent, the cinder proxy a new
type of volume store... They just use the existing standard driver/agent
mechanisms, with no influence on other in-tree code.


 As we can see, compared with Cells, much less work is needed to build a
 Cascading solution. No patches are needed except in Neutron (waiting on
 some upcoming features that did not land in Juno); nearly all the work
 lies in the proxies, which are in fact just another kind of driver/agent.

 Right, but the proxies still appear to be a not insignificant amount of code 
 - is the intent not that the proxies would eventually reside within the 
 relevant projects? I've been assuming yes but I am wondering if this was an 
 incorrect assumption on my part based on your comment.

 Thanks,

 Steve



Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread henry hly
+100!

So, for vif-type-vhostuser, a generic script path would replace the
vif-detail vhost_user_ovs_plug, because it's not Nova's responsibility
to understand it. A rough sketch follows.
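
Just to illustrate the idea (the vif_plug_script key is hypothetical,
taken from this thread), the binding:vif_details handed to Nova could
carry a generic plug script instead of a backend-specific flag:

    # Sketch only: today Neutron sets 'vhostuser_ovs_plug': True and Nova
    # has to know what that means; a generic script path would hide the
    # backend entirely.
    vif_details = {
        'vhostuser_socket': '/var/run/vhostuser/port1.sock',
        'vif_plug_script': '/usr/bin/neutron-myswitch-vif-plug',  # hypothetical
    }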

On Thu, Dec 11, 2014 at 11:24 PM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Thu, Dec 11, 2014 at 04:15:00PM +0100, Maxime Leroy wrote:
 On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange
 berra...@redhat.com wrote:
  On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
  On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
   On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
   wrote:
  
  
 [..]
  The question is, do we really need such flexibility for so many nova vif 
  types?
 
  I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are good examples:
  nova shouldn't know too many details about the switch backend, it should
  only care about the VIF itself; how the VIF is plugged into the switch
  belongs to the Neutron half.
 
  However, I'm not saying we should move the existing vif drivers out; those
  open backends have been widely used. But from now on the tap and vhostuser
  modes should be encouraged: one common vif driver for many long-tail
  backends.
 
  Yes, I really think this is a key point. When we introduced the VIF type
  mechanism we never intended for there to be soo many different VIF types
  created. There is a very small, finite number of possible ways to configure
  the libvirt guest XML and it was intended that the VIF types pretty much
  mirror that. This would have given us about 8 distinct VIF type maximum.
 
  I think the reason for the larger than expected number of VIF types, is
  that the drivers are being written to require some arbitrary tools to
  be invoked in the plug  unplug methods. It would really be better if
  those could be accomplished in the Neutron code than the Nova code, via
  a host agent run  provided by the Neutron mechanism.  This would let
  us have a very small number of VIF types and so avoid the entire problem
  that this thread is bringing up.
 
  Failing that though, I could see a way to accomplish a similar thing
  without a Neutron launched agent. If one of the VIF type binding
  parameters were the name of a script, we could run that script on
  plug & unplug. So we'd have a finite number of VIF types, and each
  new Neutron mechanism would merely have to provide a script to invoke
 
  eg consider the existing midonet & iovisor VIF types as an example.
  Both of them use the libvirt ethernet config, but have different
  things running in their plug methods. If we had a mechanism for
  associating a plug script with a vif type, we could use a single
  VIF type for both.
 
  eg iovisor port binding info would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
 
  while midonet would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-midonet-vif-plug
 

 Having fewer VIF types and then using scripts to plug/unplug the vif in
 nova is a good idea. So, +1 for the idea.

 If you want, I can propose a new spec for this. Do you think we have
 enough time to approve this new spec before the 18th December?

 Anyway I think we still need to have a vif_driver plugin mechanism:
 For example, if your external l2/ml2 plugin needs a specific type of
 nic (i.e. a new method get_config to provide specific parameters to
 libvirt for the nic) that is not supported in the nova tree.

 As I said above, there's a really small finite set of libvirt configs
 we need to care about. We don't need to have a plugin system for that.
 It is no real burden to support them in tree


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread henry hly
On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith d...@danplanet.com wrote:
 [joehuang] Could you pls. make it more clear for the deployment mode
 of cells when used for globally distributed DCs with single API. Do
 you mean cinder/neutron/glance/ceilometer will be shared by all
 cells, and use RPC for inter-dc communication, and only support one
 vendor's OpenStack distribution? How to do the cross data center
 integration and troubleshooting with RPC if the
 driver/agent/backend (storage/network/server) comes from a different vendor.

 Correct, cells only applies to single-vendor distributed deployments. In
 both its current and future forms, it uses private APIs for
 communication between the components, and thus isn't suited for a
 multi-vendor environment.

 Just MHO, but building functionality into existing or new components to
 allow deployments from multiple vendors to appear as a single API
 endpoint isn't something I have much interest in.

 --Dan


Even with the same distribution, Cells still faces many challenges
across multiple DCs connected over a WAN. Considering OAM, it's easier to
manage autonomous systems connected through an external northbound
interface across remote sites than a single monolithic system connected
through internal RPC messages.

Although Cells did some separation and modularization (not to mention
it's still internal RPC across the WAN), it leaves out cinder, neutron
and ceilometer. Shall we wait for all these projects to be refactored
into a Cells-like hierarchical structure, or adopt a more loosely
coupled way and distribute them as autonomous units at the granularity
of a whole OpenStack (except Keystone, which can handle multiple regions
naturally)?

As we can see, compared with Cells, much less work is needed to build a
Cascading solution. No patches are needed except in Neutron (waiting on
some upcoming features that did not land in Juno); nearly all the work
lies in the proxies, which are in fact just another kind of driver/agent.

Best Regards
Henry





Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-10 Thread henry hly
On Thu, Dec 11, 2014 at 12:36 AM, Kevin Benton blak...@gmail.com wrote:
 What would the port binding operation do in this case? Just mark the port as
 bound and nothing else?


It would also set the vif type to tap, but not care what the real backend
switch is.



Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-10 Thread henry hly
On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
 wrote:


 So the problem of Nova review bandwidth is a constant problem across all
 areas of the code. We need to solve this problem for the team as a whole
 in a much broader fashion than just for people writing VIF drivers. The
 VIF drivers are really small pieces of code that should be straightforward
 to review & get merged in any release cycle in which they are proposed.
 I think we need to make sure that we focus our energy on doing this and
 not ignoring the problem by breaking stuff off out of tree.


 The problem is that we effectively prevent running an out of tree Neutron
 driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism
 that isn't in Nova, as we can't use out of tree code and we won't accept in
 code ones for out of tree drivers.

The question is, do we really need such flexibility for so many nova vif types?

I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are good examples:
nova shouldn't know too many details about the switch backend, it should
only care about the VIF itself; how the VIF is plugged into the switch
belongs to the Neutron half.

However, I'm not saying we should move the existing vif drivers out; those
open backends have been widely used. But from now on the tap and vhostuser
modes should be encouraged: one common vif driver for many long-tail
backends.
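
As an illustration only (this is not Nova's actual driver code), with a
generic tap VIF type the Nova side could shrink to little more than
creating the device and leaving the switch attachment to the Neutron
backend:

    # Illustrative sketch: create a tap device and bring it up; plugging
    # it into whatever vswitch the backend uses would be Neutron's job.
    import subprocess

    def plug_tap(dev_name):
        subprocess.check_call(
            ['ip', 'tuntap', 'add', 'dev', dev_name, 'mode', 'tap'])
        subprocess.check_call(['ip', 'link', 'set', dev_name, 'up'])

    # plug_tap('tap-demo0')   # needs root privileges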

Best Regards,
Henry

 This will get more confusing as *all* of
 the Neutron drivers and plugins move out of the tree, as that constraint
 becomes essentially arbitrary.

 Your issue is one of testing.  Is there any way we could set up a better
 testing framework for VIF drivers where Nova interacts with something to
 test the plugging mechanism actually passes traffic?  I don't believe
 there's any specific limitation on it being *Neutron* that uses the plugging
 interaction.
 --
 Ian.



Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-09 Thread henry hly
Hi Kevin,

Does it make sense to introduce a GeneralvSwitch MD, working with
VIF_TYPE_TAP? It would just do a very simple port binding, like the OVS
and linuxbridge MDs do. Then anyone could implement their own backend
and agent without patching the Neutron drivers.

Best Regards
Henry

On Fri, Dec 5, 2014 at 4:23 PM, Kevin Benton blak...@gmail.com wrote:
 I see the difference now.
 The main concern I see with the NOOP type is that creating the virtual
 interface could require different logic for certain hypervisors. In that
 case Neutron would now have to know things about nova and to me it seems
 like that's slightly too far the other direction.

 On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram neil.jer...@metaswitch.com
 wrote:

 Kevin Benton blak...@gmail.com writes:

  What you are proposing sounds very reasonable. If I understand
  correctly, the idea is to make Nova just create the TAP device and get
  it attached to the VM and leave it 'unplugged'. This would work well
  and might eliminate the need for some drivers. I see no reason to
  block adding a VIF type that does this.

 I was actually floating a slightly more radical option than that: the
 idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does
 absolutely _nothing_, not even create the TAP device.

 (My pending Nova spec at https://review.openstack.org/#/c/130732/
 proposes VIF_TYPE_TAP, for which Nova _does_ creates the TAP device, but
 then does nothing else - i.e. exactly what you've described just above.
 But in this email thread I was musing about going even further, towards
 providing a platform for future networking experimentation where Nova
 isn't involved at all in the networking setup logic.)

  However, there is a good reason that the VIF type for some OVS-based
  deployments require this type of setup. The vSwitches are connected to
  a central controller using openflow (or ovsdb) which configures
  forwarding rules/etc. Therefore they don't have any agents running on
  the compute nodes from the Neutron side to perform the step of getting
  the interface plugged into the vSwitch in the first place. For this
  reason, we will still need both types of VIFs.

 Thanks.  I'm not advocating that existing VIF types should be removed,
 though - rather wondering if similar function could in principle be
 implemented without Nova VIF plugging - or what that would take.

 For example, suppose someone came along and wanted to implement a new
 OVS-like networking infrastructure?  In principle could they do that
 without having to enhance the Nova VIF driver code?  I think at the
 moment they couldn't, but that they would be able to if VIF_TYPE_NOOP
 (or possibly VIF_TYPE_TAP) was already in place.  In principle I think
 it would then be possible for the new implementation to specify
 VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind
 of configuration and vSwitch plugging that you've described above.

 Does that sound correct, or am I missing something else?

  1 .When the port is created in the Neutron DB, and handled (bound
  etc.)
  by the plugin and/or mechanism driver, the TAP device name is already
  present at that time.
 
  This is backwards. The tap device name is derived from the port ID, so
  the port has already been created in Neutron at that point. It is just
  unbound. The steps are roughly as follows: Nova calls neutron for a
  port, Nova creates/plugs VIF based on port, Nova updates port on
  Neutron, Neutron binds the port and notifies agent/plugin/whatever to
  finish the plumbing, Neutron notifies Nova that port is active, Nova
  unfreezes the VM.
 
  None of that should be affected by what you are proposing. The only
  difference is that your Neutron agent would also perform the
  'plugging' operation.

 Agreed - but thanks for clarifying the exact sequence of events.

 I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP)
 might fit as part of the Nova-network/Neutron Migration priority
 that's just been announced for Kilo.  I'm aware that a part of that
 priority is concerned with live migration, but perhaps it could also
 include the goal of future networking work not having to touch Nova
 code?

 Regards,
 Neil




 --
 Kevin Benton



Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-30 Thread henry hly
FWaaS is typically classified as L4-L7. But if it is developed
standalone, it will be very difficult to implement in a distributed
manner. For example, with east-west traffic control in DVR mode, we
can't rely on an external python-client REST API call; the policy
execution module must be loaded as an L3 agent extension, or as another
service-policy agent on the compute node.

My suggestion is to start with LB and VPN as a trial, since they can
never be distributed. FW is very tightly coupled with L3, so leaving it
for discussion some time later may be smoother.

On Wed, Nov 19, 2014 at 6:31 AM, Mark McClain m...@mcclain.xyz wrote:
 All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through 7.
 In the last few cycles, we’ve also discovered that these concentrations have
 different velocities and a single core team forces one to match the other to
 the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories lead by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2 and
 layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advance service library provides controllers that can be configured to
 manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs repository
 would continue to be shared during the Kilo cycle.  The PTL and the drivers
 team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the Infra
 and Networking teams). The timing is designed to enable the proposed REST
 changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by Kyle
 Mestery for approval by the current core team.
 - The Neutron Server and the Neutron Adv Services Library would be co-gated
 to ensure that incompatibilities are not introduced.
 - The Advance Service Library would be an optional dependency of Neutron, so
 integrated cross-project checks would not be required to enable it during
 testing.
 - The split should not adversely impact operators and the Networking program
 should maintain standard OpenStack compatibility and deprecation cycles.

 This proposal to divide into two repositories achieved a strong consensus at
 the recent Paris Design Summit and it does not conflict with the current
 governance model or any proposals circulating as part of the ‘Big Tent’
 discussion.

 Kyle and mark

 [1]
 https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml



Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-25 Thread henry hly
On Wed, Nov 26, 2014 at 12:14 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote:
 On Tue, Nov 25, 2014 at 9:59 AM, henry hly henry4...@gmail.com wrote:
 Hi Armando,

 Indeed, an agentless solution like an external controller is very
 interesting, and in certain cases it has advantages over an agent
 solution, e.g. when software installation is prohibited on the compute node.
 
 However, the Neutron agent has its own irreplaceable benefits: support
 for multiple backends like SR-IOV, macvtap, and vhost-user snabbswitch,
 and hybrid vswitch solutions like NIC offloading or VDP-based TOR
 offloading... All these backends cannot easily be controlled by a remote
 OF controller.

 Moreover, this solution is tested by the gate (at least ovs), and is
 simpler for small deployments

Not only for small deployments, but also for large-scale production
deployments :)

We have deployed more than 500 hosts in a customer's production cluster.
Now we are doing some tuning on l2pop / security groups / the DHCP agent;
after that, a 1000-node cluster is expected to be supported. Also, for
vxlan data-plane performance, we upgraded the host kernel to 3.14 (with
UDP tunnel GRO/GSO), and the performance is quite satisfying.

The customers have given very positive feedback; they had never thought
that the OpenStack built-in ovs backend could work so well, without any
help from external controller platforms or any special hardware
offloading.



 Also, considering DVR (maybe with upcoming FW support for east-west
 traffic) and security groups, an east-west traffic control capability gap
 still exists between the Linux stack and the OF flow table, whether in
 features like advanced netfilter or in performance for webserver apps
 that incur huge numbers of concurrent sessions (because of the basic OF
 upcall model, the more complex the flow rules, the less effective
 megaflow aggregation becomes).
 
 Thanks to l2pop and DVR, many customers now give the feedback that
 Neutron has made great progress and already meets nearly all their L2/L3
 connectivity east-west control needs (the next big expectation is
 north-south traffic directing, like a dynamic routing agent), without
 forcing them to learn and integrate another huge platform like an
 external SDN controller.

 +100. Note that Dynamic routing is in progress.

 I have no intention to argue agent vs. agentless or built-in reference
 vs. external controller; OpenStack is an open community. But I just want
 to say that modularized agent refactoring does make a lot of sense, while
 forcing customers to piggyback an extra SDN controller onto their cloud
 solution is not the only future direction for Neutron.

 Best Regard
 Henry

 On Wed, Nov 19, 2014 at 5:45 AM, Armando M. arma...@gmail.com wrote:
 Hi Carl,

 Thanks for kicking this off. I am also willing to help as a core reviewer of
 blueprints and code
 submissions only.

 As for the ML2 agent, we all know that for historic reasons Neutron has
 grown to be not only a networking orchestration project but also a reference
 implementation that is resembling what some might call an SDN controller.

 I think that most of the Neutron folks realize that we need to move away
 from this model and rely on a strong open source SDN alternative; for these
 reasons, I don't think that pursuing an ML2 agent would be a path we should
 go down to anymore. It's time and energy that could be more effectively
 spent elsewhere, especially on the refactoring. Now if the refactoring
 effort ends up being labelled ML2 Agent, I would be okay with it, but my gut
 feeling tells me that any attempt at consolidating code to embrace more than
 one agent logic at once is gonna derail the major goal of paying down the so
 called agent debt.

 My 2c
 Armando



Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread henry hly
Hi Flavio,

Thanks for the information about the Cinder store. Yet I have a small
concern about the Cinder backend: suppose cinder and glance both use
Ceph as their store; if cinder can then do an instant copy to glance via
a Ceph clone (maybe not now, but some time later), what information
would be stored in glance? Obviously the volume UUID is not a good
choice, because after the volume is deleted the image can no longer be
referenced. The best choice is for the cloned Ceph object URI to also be
stored as the glance location, letting both glance and cinder see the
backend store details.

However, although this really makes sense for a Ceph-like all-in-one
store, I'm not sure the iscsi backend can be used in the same way.
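
To make the Ceph case above concrete (a sketch only: the names are made
up, and whether glance lets you register such a location depends on its
location settings), the cloned Ceph object URI could follow the same
rbd://<fsid>/<pool>/<image>/<snapshot> form the glance rbd store already
uses:

    # Sketch: an RBD image cloned by cinder into the images pool,
    # expressed as a glance location URI.  All values are placeholders.
    ceph_fsid = '00000000-1111-2222-3333-444444444444'
    rbd_url = 'rbd://%s/images/volume-clone-0001/snap' % ceph_fsid
    # Registering rbd_url as an image location (e.g. via the v2 image
    # locations API) would make the 'instant copy' visible to glance
    # without uploading any bits.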

On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco fla...@redhat.com wrote:
 On 19/11/14 15:21 +0800, henry hly wrote:

 In the previous BP [1], support for an iscsi backend was introduced into
 glance. However, it was abandoned in favour of a Cinder backend
 replacement.
 
 The reasoning was that all storage backend details should be hidden by
 cinder, not exposed to other projects. However, with more and more
 interest in converged storage like Ceph, it becomes necessary to expose
 the storage backend to glance as well as to cinder.
 
 An example is that when transferring bits between a volume and an image,
 we can utilize advanced storage offload capabilities like linked clones
 to do very fast instant copies. Maybe we need more general glance
 backend location support, not only for iscsi.



 [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store


 Hey Henry,

 This blueprint has been superseded by one proposing a Cinder store
 for Glance. The Cinder store is, unfortunately, in a sorry state.
 Short story, it's not fully implemented.

 I truly think Glance is not the place where you'd have an iscsi store,
 that's Cinder's field and the best way to achieve what you want is by
 having a fully implemented Cinder store that doesn't rely on Cinder's
 API but has access to the volumes.

 Unfortunately, this is not possible now and I don't think it'll be
 possible until L (or even M?).

 FWIW, I think the use case you've mentioned is useful and it's
 something we have in our TODO list.

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco



Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-18 Thread henry hly
Is FWaaS L2/3 or L4/7?

On Wed, Nov 19, 2014 at 11:10 AM, Sumit Naiksatam
sumitnaiksa...@gmail.com wrote:
 On Tue, Nov 18, 2014 at 4:44 PM, Mohammad Hanif mha...@brocade.com wrote:
 I agree with Paul as advanced services go beyond just L4-L7.  Today, VPNaaS
 deals with L3 connectivity but belongs in advanced services.  Where does
 Edge-VPN work belong?  We need a broader definition for advanced services
 area.


 So the following definition is being proposed to capture the broader
 context and complement Neutron's current mission statement:

 To implement services and associated libraries that provide
 abstractions for advanced network functions beyond basic L2/L3
 connectivity and forwarding.

 What do people think?

 Thanks,
 —Hanif.

 From: Paul Michali (pcm) p...@cisco.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Tuesday, November 18, 2014 at 4:08 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into
 separate repositories

 On Nov 18, 2014, at 6:36 PM, Armando M. arma...@gmail.com wrote:

 Mark, Kyle,

 What is the strategy for tracking the progress and all the details about
 this initiative? Blueprint spec, wiki page, or something else?

 One thing I personally found useful about the spec approach adopted in [1],
 was that we could quickly and effectively incorporate community feedback;
 having said that I am not sure that the same approach makes sense here,
 hence the question.

 Also, what happens for experimental efforts that are neither L2-3 nor L4-7
 (e.g. TaaS or NFV related ones?), but they may still benefit from this
 decomposition (as it promotes better separation of responsibilities)? Where
 would they live? I am not sure we made any particular progress of the
 incubator project idea that was floated a while back.


 Would it make sense to define the advanced services repo as being for
 services that are beyond basic connectivity and routing? For example, VPN
 can be L2 and L3. Seems like restricting to L4-L7 may cause some confusion
 as to what’s in and what’s out.


 Regards,

 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 Cheers,
 Armando

 [1] https://review.openstack.org/#/c/134680/

 On 18 November 2014 15:32, Doug Wiegley do...@a10networks.com wrote:

 Hi,

  so the specs repository would continue to be shared during the Kilo
  cycle.

 One of the reasons to split is that these two teams have different
 priorities and velocities.  Wouldn’t that be easier to track/manage as
 separate launchpad projects and specs repos, irrespective of who is
 approving them?

 Thanks,
 doug



 On Nov 18, 2014, at 10:31 PM, Mark McClain m...@mcclain.xyz wrote:

 All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through 7.
 In the last few cycles, we’ve also discovered that these concentrations have
 different velocities and a single core team forces one to match the other to
 the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories lead by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2
 and layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advance service library provides controllers that can be configured
 to manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs
 repository would continue to be shared during the Kilo cycle.  The PTL and
 the drivers team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the
 Infra and Networking teams). The timing is designed to enable the proposed
 REST changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by
 Kyle Mestery for approval by 

[openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-18 Thread henry hly
In the previous BP [1], support for an iscsi backend was introduced into
glance. However, it was abandoned in favour of a Cinder backend
replacement.

The reasoning was that all storage backend details should be hidden by
cinder, not exposed to other projects. However, with more and more
interest in converged storage like Ceph, it becomes necessary to expose
the storage backend to glance as well as to cinder.

An example is that when transferring bits between a volume and an image,
we can utilize advanced storage offload capabilities like linked clones
to do very fast instant copies. Maybe we need more general glance
backend location support, not only for iscsi.



[1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread Hly
hi,

Network reachability is not an issue for live migration; it is the same as
for cold migration. The challenge is the near-realtime ordering control of
the interaction between the parent proxies, child virt drivers, agents, and
the libvirt library.

Wu


Sent from my iPad

On 2014-10-30, at 下午7:28, joehuang joehu...@huawei.com wrote:

 Hello, Keshava
 
 Live migration is allowed inside one pod (one cascaded OpenStack
 instance); cross-pod live migration is not supported yet.
 
 But cold migration can be done between pods, even across data centers.
 
 Live migration across pods will be studied in the future.
 
 Best Regards
 
 Chaoyi Huang ( joehuang )
 
 
 From: A, Keshava [keshav...@hp.com]
 Sent: 30 October 2014 17:45
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
 cascading
 
 Hi,
 Can VM migration happen across PODs (zones)?
 If so, how is the reachability of the VM addressed dynamically without any
 packet loss?
 
 Thanks & Regards,
 keshava
 
 -Original Message-
 From: Wuhongning [mailto:wuhongn...@huawei.com]
 Sent: Thursday, October 30, 2014 7:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
 cascading
 
 Hi keshava,
 
 Thanks for your interest in Cascading. Here is a very brief explanation:
 
 Basically, the Datacenter is not a level in the 2-level tree of cascading. We use the term 
 POD to represent a cascaded child OpenStack (the same meaning as your term 
 Zone?). There may be a single POD or multiple PODs in one Datacenter, like 
 below:
 
 (A, B, C)  ...  (D, E)  ...  (F)  ...   (G)
 Each character represents a POD (child OpenStack), while each pair of parentheses 
 represents a Datacenter.
 
 Each POD has a corresponding virtual host node in the parent OpenStack, so 
 when the scheduler of any project (nova/neutron/cinder...) picks a host node, 
 the target POD is determined, and its geo-located Datacenter follows as a side 
 effect. Cascading doesn't schedule by Datacenter directly; the DC is just an 
 attribute of the POD (for example, we can configure a host aggregate to identify a 
 DC with multiple PODs). The upper scale of a POD is bounded, maybe several 
 hundred nodes, so a super large DC with tens of thousands of servers can be built 
 from modularized PODs, avoiding the difficulty of tuning and maintaining such a 
 huge monolithic OpenStack.
 
 Next, do you mean networking reachability? Within the limits of a mail 
 post I can only give a very rough idea: in the parent OpenStack, L2pop and 
 DVR are used, so the L2/L3 agent-proxy in each virtual host node can get all the 
 VM reachability information of the other PODs, which is then pushed to the local POD via 
 the Neutron REST API. However, cascading depends on some features that do not yet exist 
 in current Neutron, like L2GW, pluggable external networks, W-E (west-east) FWaaS in DVR, 
 centralized FIP in DVR... so we have to carry some small patches up front. In 
 the future, once these features are merged, that patch code can be removed.
 
 Indeed, Neutron is the most challenging part of cascading; leaving aside 
 those proxies in the parent OpenStack virtual host nodes, Neutron patches 
 account for 85% or more of the LOC in the whole project.
 
 Regards,
 Wu
 
 From: keshava [keshav...@hp.com]
 Sent: Wednesday, October 29, 2014 2:22 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading
 
 This is a very interesting problem to solve.
 I am curious to know how reachability is provided across different 
 Datacenters.
 How do we know which VM is part of which Datacenter?
 A VM may be in a different Zone under the same DC, or in a different DC altogether.
 
 How is this problem solved?
 
 
 thanks & regards,
 keshava
 
 
 
 --
 View this message in context: 
 http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-tp54115p56323.html
 Sent from the Developer mailing list archive at Nabble.com.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread Hly


Sent from my iPad

On 2014-10-30, at 8:05 PM, Hly henry4...@gmail.com wrote:

 hi,
 
 Network reachability is not an issue for live migration; it is the same as for
 cold migration. The challenge is the near-realtime ordering control of the interaction
 between parent proxies, child virt drivers, agents, and the libvirt library.
 
 Wu
 

Also, it breaks the principle of REST-only interaction between PODs, so we may
study it only in some special PoC cases


 
 Sent from my iPad
 
 On 2014-10-30, at 7:28 PM, joehuang joehu...@huawei.com wrote:
 
 Hello, Keshava
 
 Live migration is allowed inside one pod (one cascaded OpenStack instance);
 cross-pod live migration is not supported yet.
 
 But cold migration can be done between pods, even across data centers.
 
 Cross-pod live migration will be studied in the future.
 
 Best Regards
 
 Chaoyi Huang ( joehuang )
 
 
 From: A, Keshava [keshav...@hp.com]
 Sent: 30 October 2014 17:45
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading
 
 Hi,
 Can VM migration happen across PODs (Zones)?
 If so, how is the reachability of a VM handled dynamically without any
 packet loss?
 
 Thanks & Regards,
 keshava
 
 -Original Message-
 From: Wuhongning [mailto:wuhongn...@huawei.com]
 Sent: Thursday, October 30, 2014 7:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading
 
 Hi keshava,
 
 Thanks for your interest in Cascading. Here is a very brief explanation:
 
 Basically, the Datacenter is not a level in the 2-level tree of cascading. We use the term 
 POD to represent a cascaded child OpenStack (the same meaning as your term 
 Zone?). There may be a single POD or multiple PODs in one Datacenter, like 
 below:
 
 (A, B, C)  ...  (D, E)  ...  (F)  ...   (G)
 Each character represents a POD (child OpenStack), while each pair of parentheses 
 represents a Datacenter.
 
 Each POD has a corresponding virtual host node in the parent OpenStack, so 
 when the scheduler of any project (nova/neutron/cinder...) picks a host node, 
 the target POD is determined, and its geo-located Datacenter follows as a side 
 effect. Cascading doesn't schedule by Datacenter directly; the DC is just an 
 attribute of the POD (for example, we can configure a host aggregate to identify a 
 DC with multiple PODs). The upper scale of a POD is bounded, maybe several 
 hundred nodes, so a super large DC with tens of thousands of servers can be built 
 from modularized PODs, avoiding the difficulty of tuning and maintaining such a 
 huge monolithic OpenStack.
 
 Next, do you mean networking reachability? Within the limits of a mail 
 post I can only give a very rough idea: in the parent OpenStack, L2pop and 
 DVR are used, so the L2/L3 agent-proxy in each virtual host node can get all the 
 VM reachability information of the other PODs, which is then pushed to the local POD via 
 the Neutron REST API. However, cascading depends on some features that do not yet exist 
 in current Neutron, like L2GW, pluggable external networks, W-E (west-east) FWaaS in DVR, 
 centralized FIP in DVR... so we have to carry some small patches up front. In 
 the future, once these features are merged, that patch code can be removed.
 
 Indeed, Neutron is the most challenging part of cascading; leaving aside 
 those proxies in the parent OpenStack virtual host nodes, Neutron patches 
 account for 85% or more of the LOC in the whole project.
 
 Regards,
 Wu
 
 From: keshava [keshav...@hp.com]
 Sent: Wednesday, October 29, 2014 2:22 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading
 
 This is a very interesting problem to solve.
 I am curious to know how reachability is provided across different 
 Datacenters.
 How do we know which VM is part of which Datacenter?
 A VM may be in a different Zone under the same DC, or in a different DC altogether.
 
 How is this problem solved?
 
 
 thanks & regards,
 keshava
 
 
 
 --
 View this message in context: 
 http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-tp54115p56323.html
 Sent from the Developer mailing list archive at Nabble.com.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman

Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-29 Thread Hly


Sent from my iPad

On 2014-10-29, at 8:01 PM, Robert van Leeuwen robert.vanleeu...@spilgames.com 
wrote:

 I find our current design is to remove all flows and then add flows entry by entry; this
 will cause every network node to break off all tunnels to the other
 network nodes and all compute nodes.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows. This could be used for
 the upgrade case.
 
 I hit the same issue last week and filed a bug here:
 https://bugs.launchpad.net/neutron/+bug/1383674
 
 From an operator's perspective this is VERY annoying, since you also cannot 
 push any config changes that require/trigger a restart of the agent;
 e.g. something simple like changing a log setting becomes a hassle.
 I would prefer the default behaviour to be to not clear the flows, or at the 
 least a config option to disable it.
 

+1, we have also suffered from this, even when only a very small patch was applied

 
 Cheers,
 Robert van Leeuwen
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]issue about dvr flows and port timestamp

2014-10-29 Thread Hly


Sent from my iPad

On 2014-10-29, at 6:33 PM, Maru Newby ma...@redhat.com wrote:

 
 On Oct 29, 2014, at 8:12 AM, Yangxurong yangxur...@huawei.com wrote:
 
 Hi,
 
 I'm not sure whether the following issues are problematic; in both cases our team has done 
 some work, so I have submitted two blueprints:
 [1.] Optimize DVR flows:
 Currently, exact-match OVS flows on full MAC addresses are used for communication 
 among distributed routers, but this brings problems: (1) the more distributed 
 router nodes, the more flows; (2) distributed routers in different DCs cannot 
 communicate through a tunnel without additional operations, even under the same MAC 
 prefix configuration. So it would be useful to shift from complete matching of the 
 MAC to fuzzy matching, like prefix matching, reducing the number of flows 
 and allowing communication among different DCs through a tunnel by configuring the same MAC 
 prefix.
 Link: https://blueprints.launchpad.net/neutron/+spec/optimize-dvr-flows
 
 I think we need to focus on paying down technical debt (both in the code and 
 on the testing side) related to dvr before we seriously consider the kind of 
 optimization that you are proposing.  I’m also unclear as to why we would 
 want to pursue a solution to a problem whose severity doesn’t appear to be 
 clear (I’m not sure whether the following issue is problematic…).
 

DVR stability is first class for sure, but if the code and logic were smaller and 
simpler, there would be a better chance of stability. As I understand it, since the 
DVR MAC range is already configured as a prefix, prefix-based matching instead of 
one-by-one flow setup triggered by mesh-like message notification would simplify the 
code logic and thus indirectly contribute to overall stability. It would also remove 
hundreds of flows from OVS in a middle-scale cluster, which is very helpful for 
troubleshooting.
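To make the idea concrete, here is a rough sketch of the difference (the bridge,
table and output numbers are invented for illustration; only the masked dl_src
match is the point, and fa:16:3f is the default dvr_base_mac prefix):

  # today: one exact-match flow per remote DVR host MAC
  ovs-ofctl add-flow br-tun "table=9,priority=1,dl_src=fa:16:3f:12:34:56,actions=output:2"
  ovs-ofctl add-flow br-tun "table=9,priority=1,dl_src=fa:16:3f:ab:cd:ef,actions=output:2"
  # proposed: a single masked (prefix) match covering the whole DVR MAC range
  ovs-ofctl add-flow br-tun "table=9,priority=1,dl_src=fa:16:3f:00:00:00/ff:ff:ff:00:00:00,actions=output:2"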

Wu

 
 Maru
 
 
 [2.]add port timestamp:
 It would be worth adding timestamp fields, including create_at, update_at and 
 delete_at, to the port table in Neutron, so users can monitor port changes 
 conveniently; for example, a portal or management center may want to query the 
 ports that have changed or been refreshed during the last 5 seconds in a large-scale 
 application. Without this, such queries are time-consuming and ineffective.
 Link: https://blueprints.launchpad.net/neutron/+spec/add-port-timestamp
 
 Any response I will appreciate.
 
 Thanks,
 Xurong Yang
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread henry hly
Hi Phil,

Thanks for your feedback, and for your patience in reading this long history :)
See comments inline.

On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil philip@hp.com wrote:
 -Original Message-
 From: henry hly [mailto:henry4...@gmail.com]
 Sent: 08 October 2014 09:16
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
 cascading

 Hi,

 Good questions: why not just keeping multiple endpoints, and leaving
 orchestration effort in the client side?

 From the feedback of some large data center operators, they want the cloud
 exposed to tenants as a single region with multiple AZs, where each AZ may be
 located in different or the same locations, very similar to the AZ concept of AWS.
 And the OpenStack API is indispensable for the cloud to be ecosystem
 friendly.

 The cascading is mainly doing one thing: map each standalone child
 OpenStack to an AZ in the parent OpenStack, hide the separate child endpoints,
 and thus converge them into a single standard OS-API endpoint.

 One of the obvious benefits of doing so is the networking: we can create a single
 Router/LB with subnet/port members from different children, just like in a
 single OpenStack instance. Without the parent OpenStack working as the
 aggregation layer, it is not so easy to do. Explicit VPN endpoints might be
 required in each child.

 I've read through the thread and the various links, and to me this still 
 sounds an awful lot like having multiple regions in Keystone.

 First of all I think we're in danger of getting badly mixed up in terminology 
 here around AZs which is an awfully overloaded term - esp when we make 
 comparisons to AWS AZs.  Whether we like the current OpenStack usage of 
 these terms or not, let's at least stick to how they are currently defined and 
 used in OpenStack:

 AZs - A scheduling concept in Nova and Cinder.  Simply provides some 
 isolation semantics about a compute host or storage server.  Nothing to do 
 with explicit physical or geographical location, although some degree of that 
 (separate racks, power, etc) is usually implied.

 Regions - A keystone concept for a collection of Openstack Endpoints.   They 
 may be distinct (a completely isolated set of Openstack service) or overlap 
 (some shared services).  Openstack clients support explicit user selection of 
 a region.

 Cells - A scalability / fault-isolation concept within Nova.  Because Cells 
 aspires to provide all Nova features transparently across cells, this kind of 
 acts like multiple regions where only the Nova service is distinct 
 (Networking has to be common, Glance has to be common or at least federated 
 in a transparent way, etc).   The difference from regions is that the user 
 doesn’t have to make an explicit region choice - they get a single Nova URL 
 for all cells.   From what I remember Cells originally started out also using 
 the existing APIs as the way to connect the Cells together, but had to move 
 away from that because of the performance overhead of going through multiple 
 layers.



Agreed, it's very clear now. However, isolation is not only about hardware
and facility faults; a REST API is preferred for system-level isolation
despite the theoretical protocol serialization overhead.


 Now with Cascading it seems that we're pretty much building on the Regions 
 concept, wrapping it behind a single set of endpoints for user convenience, 
 overloading the term AZ

Sorry, I'm not very certain about the meaning of overloading here. It's just a
configuration choice made by the admin in the wrapper OpenStack. As you
mentioned, there is no explicit definition of what an AZ should be, so
Cascading chooses to map it to a child OpenStack. Surely we could use
another concept, or invent a new one, instead of AZ, but AZ is the
most appropriate because it shares the same isolation semantics
as those children.
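As a purely illustrative sketch of that mapping on the parent side (all names
here are invented), each child POD ends up behind an AZ via a host aggregate
whose only member is the virtual host node that proxies that POD:

  # hypothetical names; the aggregate/AZ mechanism itself is plain Nova
  nova aggregate-create pod-dc1-a az-pod-dc1-a
  nova aggregate-add-host pod-dc1-a virtual-host-node-pod-dc1-a

Tenants then just pick an AZ as usual, and the scheduler landing on that
virtual host node is what selects the child POD.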

 to re-expose those sets of services to allow the user to choose between them 
 (doesn't this kind of negate the advantage of not having to specify the 
 region in the client - is that really such a big deal for users?), and doing 
 something to provide a sort of federated Neutron service - because as we all 
 know the hard part in all of this is how you handle the Networking.

 It kind of feels to me that if we just concentrated on the part of this that 
 is working out how to distribute/federate Neutron then we'd have a solution 
 that could be mapped as easily onto cells and/or regions - and I wonder 
 why we really need yet another aggregation concept?


I agree that the gap between a cascading AZ and standalone endpoints is not
so huge for Nova and Cinder. However, customer feedback shows that wrapping
is strongly needed for Neutron, especially for those who operate multiple
internally connected DCs. They don't want to force tenants to create multiple
routing domains connected with explicit VPNaaS. Instead they prefer a simple
L3 router connecting subnets and
ports from

Re: [openstack-dev] [neutron] what happened to ModularL2Agent?

2014-10-10 Thread Hly


On 2014-10-10, at 7:16 PM, Salvatore Orlando sorla...@nicira.com wrote:

 Comments inline.
 
 Salvatore
 
 On 10 October 2014 11:02, Wuhongning wuhongn...@huawei.com wrote:
 Hi,
 
 In the Juno cycle there was a proposal for a ModularL2Agent [1,2], which would be very 
 useful for developing agents for new backends with much less redundant code. 
 Without it, we have to either fork a new agent by copying a large amount of 
 existing L2 agent code, or patch an existing L2 agent. However, in the Kilo pad [3] it 
 seems that this feature has disappeared?
 
 The fact that the topic is not present in the Kilo summit discussion does not 
 mean that the related work item has been tabled.
 It just means that it's not something we'll probably discuss at the summit, 
 mostly because the discussion can happen on the mailing list - as you're 
 doing now.
 I think a summit session should be granted if the community still needs to 
 achieve consensus on the technical direction or if the topic is of such a 
 paramount importance that awareness needs to be raised.
 
 The blueprint and spec for this topic are available at [1] and [2], and 
 afaict are still active.
  
 
 Now there is some interest in hybrid backends (e.g. Hierarchical Binding), and 
 some BPs have been proposed to patch the OVS agent. But this has two drawbacks: 1) it is 
 tightly coupled with OVS; 2) the OVS agent becomes unnecessarily heavy. With a 
 modular L2 agent we would only need to add separate driver modules to the common L2 agent 
 framework, rather than patching the monolithic OVS agent.
 
 The point of a modular L2 agent is to have a very efficient RPC interface 
 with the neutron server and a framework for detecting data plane transitions, 
 such as new ports, and apply the corresponding configurations. And then have 
 driver which will apply such configurations to different backends.
 
 I reckon the blueprints you are referring to are probably assuming the OVS 
 agent becomes modular - because otherwise it will be hardly able to target 
 backends which are not ovs.
  
 
 Also, if it is convenient to write only modules rather than the whole agent, 
 backend providers may prefer to move most of the logic out of the mechanism driver into an 
 agent-side driver, for better stability of the Neutron controller node and easier 
 co-existence with other backends. ofagent shows good l2pop compatibility, building vxlan/gre 
 tunnel networks between itself and the other ovs/linuxbridge agents.
 
 Or is there any general consideration about Neutron agent-side 
 consolidation, whether L2/L3/L4-7?
 
 As far as I know there is no plan for a consolidated neutron agent which does 
 L2-L7 operations. Frankly, I do not even think there is any need for it.
  

Not necessarily consolidation across L2-7, but L2 agent consolidation, L3 agent 
consolidation, and advanced service agent consolidation each on their own, all 
sharing a framework with modular drivers.
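Purely as a sketch of what I mean by a framework with modular drivers (the
class and method names below are invented, not an existing Neutron API): the
common agent loop would own RPC and port-change detection, and a backend
driver would only implement the apply step.

  # hypothetical driver interface for a modular L2 agent -- names are illustrative
  import abc

  class L2AgentDriver(abc.ABC):
      """Backend-specific part of a modular L2 agent."""

      @abc.abstractmethod
      def setup(self):
          """One-time initialization (connect to OVS, a switch, a controller...)."""

      @abc.abstractmethod
      def port_bound(self, port_id, segment, mac, ip):
          """Apply backend config when the common loop detects a new local port."""

      @abc.abstractmethod
      def port_unbound(self, port_id):
          """Remove backend config when a local port disappears."""

      @abc.abstractmethod
      def fdb_update(self, entries):
          """Apply l2pop forwarding entries for remote ports (tunnels, FDB...)."""

A backend (OVS, an ofagent-style OpenFlow engine, a ToR switch) would then ship
just such a driver plus whatever device channel it needs, instead of a whole
forked agent.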

 
 1. https://wiki.openstack.org/wiki/Neutron/ModularL2Agent#Possible_Directions
 2. https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent
 3. https://etherpad.openstack.org/p/kilo-neutron-summit-topics
 
 Best Regards
 Wu
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 [1] https://blueprints.launchpad.net/neutron/+spec/modular-l2-agent
 [2] 
 https://review.openstack.org/#/q/project:openstack/neutron-specs+branch:master+topic:bp/modular-L2-agent,n,z
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-09 Thread henry hly
Hi Joshua,

Absolutely, internal improvement of a single OpenStack is the first
important thing, and there is already much effort on it in the community.
For example, without security group optimization, a 200-node cluster
would have severe performance problems; now, with several patches in Juno,
it's very easy to scale the number up to 500 or more.

Cascading has no conflict with that; in fact, hierarchical scale grows with
the square of the single-child scale. If a single child can deal with
hundreds to thousands of nodes, cascading on top of it can deal with
hundreds of thousands. And besides ultra-high scalability, Cascading also
cares about geo-location distribution, zone fault isolation, modularized
plug and play, and software maintenance isolation.

Conceptually, Cascading would not introduce extra consistency problems
because of its tree-like (rather than peer-mesh) topology. The parent
OpenStack is the central processing point for all user-facing,
request-driven events, just as in a single OpenStack. From the top-level
view, each child OpenStack is just an agent running on a big host, and
since agent-side state is already naturally asynchronous with the
controller-side DB today, there would be no extra consistency problem for
cascading compared with a single-layer OpenStack.

Best Regards
Wu Hongning


On Thu, Oct 9, 2014 at 12:27 PM, Joshua Harlow harlo...@outlook.com wrote:
 On Oct 7, 2014, at 6:24 AM, joehuang joehu...@huawei.com wrote:

 Hello, Joshua,

 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a 
 little about cascading, since you mentioned "with its own set of consistency 
 warts I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack 
 instance (including cells) without cascading feature can work just like it 
 work today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, 
 L2/L3 proxy... like other vendor specific agent/driver( for example, vcenter 
 driver, hyper-v driver, linux-agent.. ovs-agent ), and does not change the 
 current architecture for Nova/Cinder/Neutron..., and does not affect the 
 already developed features and deployment capability. The cloud operators can 
 skip the existence of OpenStack cascading if they don't want to use it, just 
 like they don't want to use some kinds of hypervisor / sdn controller 

 Sure, I understand the niceness that u can just connect clouds into other 
 clouds and so-on (the prettyness of the fractal that results from this). 
 That's a neat approach and its cool that openstack can do this (so +1 for 
 that). The bigger question I have though is around 'should we' do this. This 
 introduces a bunch of proxies that from what I can tell are just making it so 
 that nova, cinder, neutron can scale by plugging more little cascading 
 components together. This kind of connecting them together is very much what 
 I guess could be called an 'external' scaling mechanism, one that plugs into 
 the external API's of one service from the internal of another (and repeat). 
 The question I have is why an 'external' solution in the first place, why not 
 just work on scaling the projects internally first and when that ends up not 
 being good enough switch to an 'external' scaling solution. Lets take an 
 analogy, your queries to mysql are acting slow, do you first, add in X more 
 mysql servers or do you instead try to tune your existing mysql server and 
 queries before scaling out? I just want to make sure we are not prematurely 
 adding in X more layers when we can gain scalability in a more solvable and 
 manageable manner first...


 2. Could you provide concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency checks in 
 the PoC source code, because logical 
 VM/Volume/Port/Network... objects are stored in the cascading OpenStack, and 
 the physical objects are stored in the cascaded OpenStack, uuid mapping 
 between logical object and physical object had been built,  it's possible 
 and easy to solve the inconsistency issues. Even for flavor, host aggregate, 
 we have method to solve the inconsistency issue.

 When you add more levels/layers, by the very nature of adding in those levels 
 the number of potential failure points has now increased (there is probably a 
 theorem or proof somewhere in literature about this). If you want to see 
 inconsistencies that already exists just watch the gate issues and bugs and 
 so-on for a while, you will eventually see why it may not be the right time 
 to add in more potential failure points instead of fixing the existing 
 failure points we already have. I (and I think others) would rather see 
 effort focused on those existing failure points vs. adding a set of new ones 
 in (make what exists reliable and scalable *first* then move on to scaling 
 things out via something like cascading, cells, other...). Overall 

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-08 Thread henry hly
Hi,

Good questions: why not just keeping multiple endpoints, and leaving
orchestration effort in the client side?

From the feedback of some large data center operators, they want the cloud
exposed to tenants as a single region with multiple AZs, where each AZ
may be located in different or the same locations, very similar to the AZ
concept of AWS. And the OpenStack API is indispensable for the cloud
to be ecosystem friendly.

The cascading is mainly doing one thing: map each standalone child
OpenStack to an AZ in the parent OpenStack, hide the separate child
endpoints, and thus converge them into a single standard OS-API endpoint.

One of the obvious benefits of doing so is the networking: we can create a
single Router/LB with subnet/port members from different children, just
like in a single OpenStack instance. Without the parent OpenStack
working as the aggregation layer, it is not so easy to do. Explicit
VPN endpoints might be required in each child.
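As an illustrative picture of what the tenant would run against the single
wrapped endpoint (the names are invented, and it is assumed the two subnets
actually land in different child OpenStacks):

  # plain Neutron CLI against the parent endpoint
  neutron router-create tenant-router
  neutron router-interface-add tenant-router subnet-pod-a
  neutron router-interface-add tenant-router subnet-pod-b

The cascading layer is what turns those two interface-add calls into the
cross-POD plumbing; without it, the tenant would be wiring up VPN endpoints
per child.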

Best Regards
Wu Hongning

On Tue, Oct 7, 2014 at 11:30 PM, Monty Taylor mord...@inaugust.com wrote:
 On 10/07/2014 06:41 AM, Duncan Thomas wrote:
 My data consistency concerns would be around:

 1) Defining global state. You can of course hand wave away a lot of
 your issues by saying they are all local to the sub-unit, but then
 what benefit are you providing .v. just providing a list of endpoints
 and teaching the clients to talk to multiple endpoints, which is far
 easier to make reliable than a new service generally is. State that
 'ought' to be global: quota, usage, floating ips, cinder backups, and
 probably a bunch more

 BTW - since infra regularly talks to multiple clouds, I've been working
 on splitting supporting code for that into a couple of libraries. Next
 pass is to go add support for it to the clients, and it's not really a
 lot of work ... so let's assume that the vs. here is going to be
 accomplished soonish for the purposes of assessing the above question.

 Second BTW - you're certainly right about the first two in the global
 list - we keep track of quota and usage ourselves inside of nodepool.
 Actually - since nodepool already does a bunch of these things - maybe
 we should just slap a REST api on it...

 2) Data locality expectations. You have to be careful about what
 expectations .v. realty you're providing here. If the user experience
 is substantially different using your proxy .v. direct API, then I
 don't think you are providing a useful service - again, just teach the
 clients to be multi-cloud aware. This includes what can be connected
 to what (cinder volumes, snaps, backups, networks, etc), replication
 behaviours and speeds (swift) and probably a bunch more that I haven't
 thought of yet.



 On 7 October 2014 14:24, joehuang joehu...@huawei.com wrote:
 Hello, Joshua,

 Thank you for your concerns about OpenStack cascading. I am afraid that I am 
 not the proper person to comment on cells, but I would like to say a 
 little about cascading, since you mentioned "with its own set of consistency 
 warts I'm sure".

 1. For small scale, or a cloud within one data center, one OpenStack 
 instance (including cells) without cascading feature can work just like it 
 work today. OpenStack cascading just introduces Nova-proxy, Cinder-proxy, 
 L2/L3 proxy... like other vendor specific agent/driver( for example, 
 vcenter driver, hyper-v driver, linux-agent.. ovs-agent ), and does not 
 change the current architecture for Nova/Cinder/Neutron..., and does not 
 affect the already developed features and deployment capability. The cloud 
 operators can skip the existence of OpenStack cascading if they don't want 
 to use it, just like they don't want to use some kinds of hypervisor / sdn 
 controller 

 2. Could you provide concrete inconsistency issues you are worried about in 
 OpenStack cascading? Although we did not completely implement inconsistency checks in 
 the PoC source code, because logical 
 VM/Volume/Port/Network... objects are stored in the cascading OpenStack, 
 and the physical objects are stored in the cascaded OpenStack, uuid mapping 
 between logical object and physical object had been built,  it's possible 
 and easy to solve the inconsistency issues. Even for flavor, host 
 aggregate, we have method to solve the inconsistency issue.

 Best Regards

 Chaoyi Huang ( joehuang )
 
 From: Joshua Harlow [harlo...@outlook.com]
 Sent: 07 October 2014 12:21
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack cascading

 On Oct 3, 2014, at 2:44 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 30 September 2014 15:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
 OpenStack
 cascading

 On 30 September 

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-04 Thread henry hly
Hi Monty and Cellers,

I understand that there is an installed base for Cells; these clouds
are still running, and some issues need to be addressed for daily
operation. For sure, improving Cells deserves first-class attention,
given the community's commitment.

The introduction of OpenStack cascading is not meant to divide the
community; it is meant to address other interests that Cells is not
designed for: heterogeneous cluster integration based on the established
REST API, and fully distributed scalability (not only Nova, but also
Cinder/Neutron/Ceilometer...). Full distribution is essential for
some large cloud operators who have many data centers distributed
geographically, and heterogeneous cluster integration is a basic
business requirement (different versions, different vendors, and even
non-OpenStack such as vCenter).

So Cascading is not an alternative to Cells; both solutions can
co-exist and complement each other. Also, I don't think Cells developers
need to shift their work to OpenStack cascading; they can still focus on
Cells, and there would not be any conflict between the code of Cells and
Cascading.

Best Regards,
Wu Hongning


On Sat, Oct 4, 2014 at 5:44 AM, Monty Taylor mord...@inaugust.com wrote:

 On 09/30/2014 12:07 PM, Tim Bell wrote:
  -Original Message-
  From: John Garbutt [mailto:j...@johngarbutt.com]
  Sent: 30 September 2014 15:35
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
  OpenStack
  cascading
 
  On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
  Hello, Dear TC and all,
 
  Large cloud operators prefer to deploy multiple OpenStack instances(as
  different zones), rather than a single monolithic OpenStack instance 
  because of
  these reasons:
 
  1) Multiple data centers distributed geographically;
  2) Multi-vendor business policy;
  3) Server nodes scale up modularized from 00's up to million;
  4) Fault and maintenance isolation between zones (only REST
  interface);
 
  At the same time, they also want to integrate these OpenStack instances 
  into
  one cloud. Instead of proprietary orchestration layer, they want to use 
  standard
  OpenStack framework for Northbound API compatibility with HEAT/Horizon or
  other 3rd ecosystem apps.
 
  We call this pattern as OpenStack Cascading, with proposal described by
  [1][2]. PoC live demo video can be found[3][4].
 
  Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in 
  the
  OpenStack cascading.
 
  Kindly ask for cross program design summit session to discuss OpenStack
  cascading and the contribution to Kilo.
 
  Kindly invite those who are interested in the OpenStack cascading to work
  together and contribute it to OpenStack.
 
  (I applied for “other projects” track [5], but it would be better to
  have a discussion as a formal cross program session, because many core
  programs are involved )
 
 
  [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
  [2] PoC source code: https://github.com/stackforge/tricircle
  [3] Live demo video at YouTube:
  https://www.youtube.com/watch?v=OSU6PYRz5qY
  [4] Live demo video at Youku (low quality, for those who can't access
  YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
  [5]
  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
  .html
 
  There are etherpads for suggesting cross project sessions here:
  https://wiki.openstack.org/wiki/Summit/Planning
  https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
 
  I am interested at comparing this to Nova's cells concept:
  http://docs.openstack.org/trunk/config-reference/content/section_compute-
  cells.html
 
  Cells basically scales out a single datacenter region by aggregating 
  multiple child
  Nova installations with an API cell.
 
  Each child cell can be tested in isolation, via its own API, before 
  joining it up to
  an API cell, that adds it into the region. Each cell logically has its own 
  database
  and message queue, which helps get more independent failure domains. You 
  can
  use cell level scheduling to restrict people or types of instances to 
  particular
  subsets of the cloud, if required.
 
  It doesn't attempt to aggregate between regions, they are kept independent.
  Except, the usual assumption that you have a common identity between all
  regions.
 
  It also keeps a single Cinder, Glance, Neutron deployment per region.
 
  It would be great to get some help hardening, testing, and building out 
  more of
  the cells vision. I suspect we may form a new Nova subteam to trying and 
  drive
  this work forward in kilo, if we can build up enough people wanting to 
  work on
  improving cells.
 
 
  At CERN, we've deployed cells at scale but are finding a number of 
  architectural issues that need resolution in the short term to attain 
  feature parity. A vision of we all run cells but some of us have only one 
  

Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-12 Thread Hly
+1

Sent from my iPad

On 2014-7-12, at 11:45 PM, Miguel Angel Ajo Pelayo majop...@redhat.com wrote:

 +1 
 
 Sent from my Android phone using TouchDown (www.nitrodesk.com) 
 
 
 -Original Message- 
 From: Carl Baldwin [c...@ecbaldwin.net] 
 Received: Saturday, 12 Jul 2014, 17:04 
 To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org] 
 Subject: Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a 
 note on Spec Approval Deadline 
 
 
 +1  This spec had already been proposed quite some time ago.  I'd like to see 
 this work get in to juno.
 
 Carl
 
 On Jul 12, 2014 9:53 AM, Yuriy Taraday yorik@gmail.com wrote:
 Hello, Kyle.
 
 On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 Just a note that yesterday we passed SPD for Neutron. We have a
 healthy backlog of specs, and I'm working to go through this list and
 make some final approvals for Juno-3 over the next week. If you've
 submitted a spec which is in review, please hang tight while myself
 and the rest of the neutron cores review these. It's likely a good
 portion of the proposed specs may end up as deferred until K
 release, given where we're at in the Juno cycle now.
 
 Thanks!
 Kyle
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Please don't skip my spec on rootwrap daemon support: 
 https://review.openstack.org/#/c/93889/
 It got -2'd by Mark McClain when my spec in oslo wasn't approved; that's 
 fixed now, but it's not easy to get hold of Mark.
 The code for that spec (also -2'd by Mark) is close to being finished and requires 
 some discussion to get merged by Juno-3.
 
 -- 
 
 Kind regards, Yuriy.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-19 Thread henry hly
We have done some tests but got a different result: performance is
nearly the same with empty versus 5k rules in iptables, but there is a huge
gap between enabling and disabling the iptables hook on the Linux bridge.

On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang ayshihanzh...@126.com wrote:

 I have not got accurate test data yet, but I can confirm the following
 points:
 1. On a compute node, a VM's iptables chain is linear and iptables filters
 it rule by rule, so if a VM is in the default security group and this default security
 group has many members, the chain gets long; with an ipset set in the chain, the time
 ipset needs to match one member or many members hardly differs.
 2. When the iptables rule set is very large, the probability that
 iptables-save
 fails to save the rules is very high.





 At 2014-06-19 10:55:56, Kevin Benton blak...@gmail.com wrote:

 This sounds like a good idea to handle some of the performance issues
 until the ovs firewall can be implemented down the line.
 Do you have any performance comparisons?
 On Jun 18, 2014 7:46 PM, shihanzhang ayshihanzh...@126.com wrote:

 Hello all,

 Now in neutron, iptables is used to implement security groups, but the
 performance of this implementation is very poor; there is a bug,
 https://bugs.launchpad.net/neutron/+bug/1302272, reflecting this problem.
 In that test, with default security groups (which have a remote security
 group), beyond 250-300 VMs there were around 6k iptables rules on every
 compute node; although the patch there can reduce the processing time, it
 doesn't solve this problem fundamentally. I have committed a BP to solve this
 problem:
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 Is anyone else interested in this?



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-06-19 Thread henry hly
The OVS agent manipulates not only the OVS flow table but also the Linux stack,
which is not so easily replaced by a pure OpenFlow controller today.
Fastpath/slowpath separation sounds good, but it is a real nightmare for
applications with highly concurrent connections if we push L4 flows into OVS (in
our testing, the vswitchd daemon always stopped working in this case).

Someday, when OVS can handle all the L2-L4 rules in the kernel without bothering
the userspace classifier, a pure OF controller will be able to replace the
agent-based solution. OVS hooking into netfilter conntrack may come this year,
but it is not enough yet.
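For context, what that conntrack hook would eventually allow is stateful L4
matching directly in OVS flows, roughly of this shape (the ct()/ct_state syntax
is an assumption based on what is being discussed upstream, table numbers are
illustrative, and none of this is available today):

  # send untracked traffic through conntrack, then decide in table 1
  ovs-ofctl add-flow br-int "table=0,priority=10,ip,ct_state=-trk,actions=ct(table=1)"
  # established connections pass
  ovs-ofctl add-flow br-int "table=1,priority=10,ip,ct_state=+trk+est,actions=NORMAL"
  # new connections only for allowed ports, committed to the connection table
  ovs-ofctl add-flow br-int "table=1,priority=10,tcp,tp_dst=22,ct_state=+trk+new,actions=ct(commit),NORMAL"

Until something like that lands in the kernel datapath, the agent plus the
iptables hybrid path remains the practical way to do stateful filtering.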


On Wed, Jun 18, 2014 at 12:56 AM, Armando M. arma...@gmail.com wrote:

 just a provocative thought: If we used the ovsdb connection instead, do we
 really need an L2 agent :P?


 On 17 June 2014 18:38, Kyle Mestery mest...@noironetworks.com wrote:

 Another area of improvement for the agent would be to move away from
 executing CLIs for port commands and instead use OVSDB. Terry Wilson
 and I talked about this, and re-writing ovs_lib to use an OVSDB
 connection instead of the CLI methods would be a huge improvement
 here. I'm not sure if Terry was going to move forward with this, but
 I'd be in favor of this for Juno if he or someone else wants to move
 in this direction.

 Thanks,
 Kyle

 On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  We've started doing this in a slightly more reasonable way for icehouse.
  What we've done is:
  - remove unnecessary notification from the server
  - process all port-related events, either trigger via RPC or via
 monitor in
  one place
 
  Obviously there is always a lot of room for improvement, and I agree
  something along the lines of what Zang suggests would be more
 maintainable
  and ensure faster event processing as well as making it easier to have
 some
  form of reliability on event processing.
 
  I was considering doing something for the ovs-agent again in Juno, but
 since
  we've moving towards a unified agent, I think any new big ticket
 should
  address this effort.
 
  Salvatore
 
 
  On 17 June 2014 13:31, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  Awesome! Currently we are suffering lots of bugs in ovs-agent, also
  intent to rebuild a more stable flexible agent.
 
  Taking the experience of ovs-agent bugs, I think the concurrency
  problem is also a very important problem, the agent gets lots of event
  from different greenlets, the rpc, the ovs monitor or the main loop.
  I'd suggest to serialize all event to a queue, then process events in
  a dedicated thread. The thread check the events one by one ordered,
  and resolve what has been changed, then apply the corresponding
  changes. If there is any error occurred in the thread, discard the
  current processing event, do a fresh start event, which reset
  everything, then apply the correct settings.
 
  The threading model is so important and may prevent tons of bugs in
  the future development, we should describe it clearly in the
  architecture
 
 
  On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi m...@us.ibm.com
  wrote:
   Following the discussions in the ML2 subgroup weekly meetings, I have
   added
   more information on the etherpad [1] describing the proposed
   architecture
   for modular L2 agents. I have also posted some code fragments at [2]
   sketching the implementation of the proposed architecture. Please
 have a
   look when you get a chance and let us know if you have any comments.
  
   [1] https://etherpad.openstack.org/p/modular-l2-agent-outline
   [2] https://review.openstack.org/#/c/99187/
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

2014-06-13 Thread henry hly
Hi Carl,

In the link
https://docs.google.com/document/d/1jCmraZGirmXq5V1MtRqhjdZCbUfiwBhRkUjDXGt5QUQ/edit,
there are some words like: "When the node is being scheduled to host the
SNAT, a new namespace and internal IP address will be assigned to host the
SNAT service. Any nova instance VM that is connected to the router will
have this new SNAT IP as its external gateway address."

Can a nova VM see this secondary IP? I think that even on the node hosting
the SNAT, the IR still exists, so a VM on that node will also see the IP of
the IR interface and send packets to the IR first; the IR will then redirect
the traffic to the SNAT on the same node (but in a different namespace). Is
that right?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Too much shim rest proxy mechanism drivers in ML2

2014-06-09 Thread henry hly
hi mathieu,


 I totally agree. By using l2population with tunnel networks (vxlan,
 gre), you will not be able to plug an external device which could
 possibly terminate your tunnel. The ML2 plugin has to be aware of a new
 port in the vxlan segment. I think this is the scope of this bp:

https://blueprints.launchpad.net/neutron/+spec/neutron-switch-port-extension

 Mixing several SDN controllers (when used with the ovs/of/lb agent, neutron
 could be considered as an SDN controller) could be achieved the same
 way, with the SDN controller sending notifications to neutron for the
 ports that it manages.

I agree with the basic idea of this BP, especially being controller agnostic with no
vendor-specific code to handle the segment id. Since Neutron already has all the
information about ports and a standard way to populate it (l2pop), why not
just reuse it?

  And with the help of the coming ML2 agent framework, hardware
  device or middleware controller adaptation agents could be simplified further.

 I don't understand the reason why you want to move middleware
 controller to the agent.

This BP suggests a driver-side hook for the plug; my idea is that the existing agent-side
router VIF plug processing should be enough. Suppose we have a hardware
router with VTEP termination: just keep the L3 plugin unchanged, and for the L2
part have perhaps a very thin device-specific mechanism driver (just like the
OVS mech driver, doing the necessary validation in tens of lines of code). Most
of the work is on the agent side: when a router interface is created, a device-specific L3
agent interacts with the router (either configuring it directly via
netconf/CLI, or indirectly via some controller middleware), and then hooks
into a device-specific L2 agent co-located with it, performing a virtual VIF plug.
Exactly like the OVS agent, this L2 agent scans for the newly plugged VIF,
then calls back to the ML2 plugin over RPC with a port update and standard l2pop.

While an OVS/Linux bridge agent identifies a VIF plug by the port name in br-int,
these appliance-specific L3 and L2 agents may need a new virtual plug hook.
Any producer/consumer pattern is OK: a shared file in tmpfs, a named pipe, etc.
Anyway, this work shouldn't happen on the plugin side; just leave it on the agent
side, keeping the same framework as the existing OVS/bridge agents.
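A toy sketch of the kind of producer/consumer hook I mean (the spool path,
record layout and function names are all made up; a named pipe would do just
as well): the device-specific L3 agent drops a record, and the co-located L2
agent picks it up where its loop would otherwise scan br-int ports.

  # hypothetical virtual VIF plug hook via a tmpfs spool directory -- illustrative only
  import json, os, uuid

  SPOOL = "/run/neutron-devplug"   # assumed tmpfs location

  def plug_vif(port_id, mac, segmentation_id):
      """Producer: called by the device-specific L3 agent once the router
      interface has been configured on the hardware/middleware."""
      os.makedirs(SPOOL, exist_ok=True)
      rec = {"port_id": port_id, "mac": mac, "segmentation_id": segmentation_id}
      tmp = os.path.join(SPOOL, ".%s" % uuid.uuid4())
      with open(tmp, "w") as f:
          json.dump(rec, f)
      os.rename(tmp, os.path.join(SPOOL, "%s.json" % port_id))  # atomic publish

  def scan_plugged_vifs():
      """Consumer: polled by the device-specific L2 agent, which then reports
      the port to the ML2 plugin (port update + standard l2pop), exactly as the
      OVS agent does for ports it finds on br-int."""
      if not os.path.isdir(SPOOL):
          return
      for name in sorted(os.listdir(SPOOL)):
          if not name.endswith(".json"):
              continue
          path = os.path.join(SPOOL, name)
          with open(path) as f:
              yield json.load(f)
          os.remove(path)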

Today DEV specific L2 agent can fork from OVS agent, just like what ofagent
does. In the future, modularized ML2 agent can reduce work to write code
for a new switch engine.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Too much shim rest proxy mechanism drivers in ML2

2014-06-06 Thread henry hly
ML2 mechanism drivers are becoming another kind of plugin. Although they
can be loaded together, they cannot work with each other.

Today there are more and more drivers, supporting all kinds of networking
hardware and middleware (SDN controllers). Unfortunately, they are designed
as mutually exclusive, chimney-style REST proxies.

A very general heterogeneous networking use case: we have OVS controlled
by the OVS agent, plus switches from different vendors, some of them
controlled directly by their own driver/agent, others controlled by an SDN
controller middleware. Can we create a vxlan network across all these
software and hardware switches?

It's not so easy: Neutron OVS uses the l2population mech driver, SDN
controllers have their own population mechanisms, and today most dedicated
switch drivers only support VLAN. SDN controller people may say: it's OK, just
put everything under the control of my controller, leaving the ML2 plugin as a
shim REST proxy layer. But shouldn't OpenStack Neutron itself be the first-class
citizen, even when no controller is involved?

Could we move all device-related adaptation (REST/ssh/netconf/OpenFlow... proxies)
from these mechanism drivers to the agent side, leaving only the necessary code
in the plugin? Heterogeneous networking might become easier; ofagent gives a good
example, as it can co-exist with the native Neutron OVS agent in vxlan
l2 population. And with the help of the coming ML2 agent framework, hardware
device or middleware controller adaptation agents could be simplified further.
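For reference, the co-existence that ofagent demonstrates needs nothing more
exotic than l2population being enabled on both sides; a minimal, illustrative
ML2 configuration along these lines (values are examples only):

  # server side (e.g. ml2_conf.ini)
  [ml2]
  type_drivers = vxlan,vlan
  tenant_network_types = vxlan
  mechanism_drivers = openvswitch,l2population

  # OVS (or ofagent-style) agent side
  [agent]
  tunnel_types = vxlan
  l2_population = True

Any agent that consumes the same fdb add/remove notifications can then join the
same vxlan segments, whatever device sits behind it.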
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev