Re: [openstack-dev] Telco Working Group meeting for Wednesday April 6th CANCELLED
Thanks Steve - I agree with moving to the PWG.

On that topic, do you know what has happened to some of the user stories we proposed, specifically https://review.openstack.org/#/c/290060/ and https://review.openstack.org/#/c/290347/? Neither shows up in https://review.openstack.org/#/q/status:open+project:openstack/openstack-user-stories, but there is a https://review.openstack.org/#/c/290991/ which appears to be a copy of https://review.openstack.org/#/c/290060/ with the template help text added back in and no mention of the original.

Calum

Calum Loudon
Director, Architecture
+44 (0)208 366 1177
METASWITCH NETWORKS
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com

-----Original Message-----
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: 04 April 2016 23:21
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>; openstack-operators <openstack-operat...@lists.openstack.org>
Subject: [openstack-dev] [NFV][Telco] Telco Working Group meeting for Wednesday April 6th CANCELLED

Hi all,

I will be on a flight during the meeting slot for Wednesday April 6th and have been unable to find someone else to run the meeting; as a result it is canceled.

I would also like to highlight that at this point I believe all of the active Telco Working Group use cases have been submitted/moved to the Product Working Group repository. The Product Working Group also now has an alternative [1][2] timeslot available for those who were unable to make the PDT/EDT-friendly time used for the Monday session.
Product Working Group meetings are now at:

- Weekly on Monday at 2100 UTC in #openstack-meeting-alt (IRC webclient)
- Weekly on Tuesday at 0200 UTC in #openstack-meeting-alt (IRC webclient)

Going forward I would like to suggest suspending the weekly Telco Working Group meetings (or having them on a less frequent basis) with a view to concentrating on attending the Product Working Group calls to assist with prioritization discussions, gap analysis, and next steps.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/product-wg/2016-April/001056.html
[2] http://eavesdrop.openstack.org/#OpenStack_Product_WG

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [NFV][Telco] Resigning TelcoWG core team
I'm sorry to hear that, Marc. Thanks for your efforts and best wishes for the future.

Calum

Calum Loudon
Director, Architecture
Metaswitch Networks

-----Original Message-----
From: Marc Koderer [mailto:m...@koderer.com]
Sent: 03 November 2015 08:18
To: OpenStack Development Mailing List (not for usage questions); openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [NFV][Telco] Resigning TelcoWG core team

Hello TelcoWG,

Due to personal reasons I have to resign my TelcoWG core team membership. I will remove myself from the core reviewer group.

Thanks for all the support!

Regards
Marc
[openstack-dev] [NFV][Telco] New use case - efficient deployment of Cassandra and other NoSQL DBs
Hi all,

I've uploaded a new use case for comment, available at https://review.openstack.org/#/c/238167/. It's to do with the efficient deployment of Cassandra and other NoSQL DBs.

The link to NFV and Telco may seem tenuous, but this is a real issue we are hitting in the field. Telcos are very interested in running geo-redundant services, meaning user data must be geo-redundant. Using scalable open-source N+k NoSQL DBs like Cassandra is a very common solution. They typically provide redundancy at the application level by replicating data between application nodes, actively preferring the underlying storage not to be redundant. They also require the storage attached to different application nodes to have no common points of failure, and (for performance reasons) to place different types of data on different storage pools.

Those innocent-sounding requirements are trivial to meet in the bare metal world - give your nodes an SSD and an HDD and you're done - but when trying to map them to OpenStack it's surprisingly difficult.

Enjoy.

regards

Calum

Calum Loudon
Director, Architecture
Metaswitch Networks
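To make those two requirements concrete, here is a minimal, purely illustrative Python sketch (not part of the use case submission or of any OpenStack project) of the placement check a Cassandra deployment implicitly needs: replicas of the same data must sit on storage backends with no common point of failure, and each node's performance-sensitive data must live on a different pool from its bulk data. All field names (`replica_group`, `data_backend`, `commitlog_backend`) are hypothetical.

```python
def storage_placement_ok(nodes):
    """Check the two storage constraints the Cassandra use case describes.

    nodes: list of dicts with hypothetical keys 'replica_group' (nodes
    holding replicas of the same data), 'data_backend' and
    'commitlog_backend' (the storage pools backing each volume).
    """
    by_group = {}
    for n in nodes:
        by_group.setdefault(n["replica_group"], []).append(n)

    # Replicas of the same data must not share a storage backend,
    # otherwise one backend failure loses every copy.
    for members in by_group.values():
        backends = [m["data_backend"] for m in members]
        if len(set(backends)) != len(backends):
            return False

    # Per node: latency-sensitive data (e.g. the commitlog) on a
    # different pool from bulk data, mirroring the SSD + HDD split
    # that is trivial on bare metal.
    return all(n["data_backend"] != n["commitlog_backend"] for n in nodes)
```

The point of the sketch is that neither constraint maps cleanly onto a single existing OpenStack primitive: the first is an anti-affinity rule over volumes rather than VMs, and the second requires exposing multiple distinct storage pools per node.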
Re: [openstack-dev] July 15th meeting cancelled
Hi Steve,

I missed the linked mail as it was sent to the openstack-operators list, not openstack-dev - was that intentional?

On the substance of the mail, +1 to adding Daniel and Yuriy to the core reviewers list.

cheers

Calum

-----Original Message-----
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: 15 July 2015 12:36
To: openstack-operators; OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [NFV][Telco] July 15th meeting cancelled

Hi all,

I'm unable to make the meeting today and was unable to get an alternative facilitator to run the meeting; as such it is canceled.

Please note that I am still seeking comment on:

[Openstack-operators] [nfv][telco] On-going management of telcowg-usecases repository
http://lists.openstack.org/pipermail/openstack-operators/2015-July/007611.html

As always, outstanding reviews are here:
https://review.openstack.org/#/q/status:open+project:stackforge/telcowg-usecases,n,z

Thanks,
Steve
[openstack-dev] [NFV][Telco] Use case discussion
Hello all,

Unfortunately, my complete inability to process daylight savings time changes meant I was an hour late for today's TelcoWG meeting, so I couldn't participate in the discussion of use cases, including the one I submitted on Session Border Control. Thanks to eavesdrop.openstack.org I've been able to catch up.

Just to chip in to the discussion: I agree entirely that we should try to keep the wiki vendor-neutral, but at the same time I think use cases based on real-world implementations (which could of course be open source rather than from a vendor) are more powerful illustrations for the dev community of why particular bps are needed than the more abstract presentation of use cases in the ETSI docs, which are aimed at a different purpose. It isn't particularly convincing or compelling as a developer to hear that some theoretical implementation of a network function you may never have heard of might need feature X, whereas understanding that there are real products out there that genuinely depend on it brings the requirement home. One of the goals of this group is to help the rest of the OpenStack community understand NFV, and I think concrete beats abstract for that. However, as Steve noted, I did try to draw out general characteristics and relate them to specific gaps and requirements, and I think that's important: we should extract as much as possible that is general and vendor-neutral from the specific cases.

Someone asked about the ability to test. I imagine most people in this group will know of the OPNFV initiative; they are working to put together test frameworks and cases which may include real VNFs, including this specific Perimeta example.

regards

Calum

Calum Loudon
Director, Architecture
Metaswitch Networks
[openstack-dev] [NFV] VLAN trunking to VM - justification/use cases
Hello all,

I took the action on last week's call to explain why the "VLAN trunking to VM" bp is a relevant use case for NFV - here's my take.

The big picture is that this is about how service providers can use virtualisation to provide differentiated network services to their customers (specifically enterprise customers rather than end users); it's not about VMs wanting to set up networking between themselves.

A typical service provider may be providing network services to thousands or more enterprise customers. The details of and configuration required for individual services will differ from customer to customer. For example, consider a Session Border Control service (basically, policing VoIP interconnect): different customers will have different sets of SIP trunks that they can connect to, different traffic shaping requirements, different transcoding rules, etc.

Those customers will normally connect in to the service provider in one of two ways: a dedicated physical link, or through a VPN over the public Internet. Once that traffic reaches the edge of the SP's network, it makes sense for the SP to put all of it onto the same core network while keeping some form of separation to allow the network services to identify the source of the traffic and treat it independently. There are various overlay techniques that can be used (e.g. VXLAN, GRE tunnelling), but one common and simple one is VLANs. Carrying VLAN trunking into the VM allows this scheme to continue to be used in a virtual world.

In this set-up, any VMs implementing those services have to be able to differentiate between customers. About the only way of doing that today in OpenStack is to configure one provider network per customer and then have one vNIC per provider network, but that approach clearly doesn't scale (in both performance and configuration effort) if a VM has to see traffic from hundreds or thousands of customers.
Instead, carrying VLAN trunking into the VM allows them to do this scalably.

The net is that a VM providing a service that needs access to a customer's non-NATed source addresses needs an overlay technology to allow this, and VLAN trunking into the VM is sufficiently scalable for this use case and leverages a common approach.

Calum

Calum Loudon
Director, Architecture
Metaswitch Networks
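To make the demultiplexing concrete, here is a minimal, purely illustrative Python sketch (not part of any OpenStack component) of what a VM on the receiving end of a VLAN trunk does: it recovers the customer identity from the 12-bit VLAN ID in each frame's 802.1Q tag, which is what replaces "one vNIC per provider network".

```python
import struct

ETH_P_8021Q = 0x8100  # EtherType indicating an 802.1Q VLAN tag

def customer_vlan(frame: bytes):
    """Return the VLAN ID of an 802.1Q-tagged Ethernet frame, or None if untagged.

    Bytes 0-11 are the destination and source MACs; bytes 12-13 are the
    EtherType, which for a tagged frame is 0x8100 followed by the 2-byte
    Tag Control Information (TCI).
    """
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != ETH_P_8021Q:
        return None
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF  # the low 12 bits of the TCI carry the VLAN ID
```

One vNIC carrying a trunk can therefore distinguish up to 4094 customers, whereas the one-provider-network-per-customer approach needs one vNIC each.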
[openstack-dev] [NFV] Specific example NFV use case - ETSI #5, virtual IMS
Hello all,

Following on from my contribution last week of a specific NFV use case (a Session Border Controller), here's another one, this time for an IMS core (part of ETSI NFV use case #5).

As we touched on at last week's meeting, this is not making claims for what every example of a virtual IMS core would need, just as last week's wasn't describing what every SBC would need. In particular, my IMS core example is an application that was designed to be cloud-native from day one, so the apparent lack of OpenStack gaps is not surprising: other IMS cores may need more. However, I think overall these two examples are reasonably representative of the classes of data plane vs. control plane apps.

Use case example
----------------
Project Clearwater, http://www.projectclearwater.org/. An open source implementation of an IMS core designed to run in the cloud and be massively scalable. It provides SIP-based call control for voice and video as well as SIP-based messaging apps. As an IMS core it provides P/I/S-CSCF function together with a BGCF and an HSS cache, and includes a WebRTC gateway providing interworking between WebRTC and SIP clients.

Characteristics relevant to NFV/OpenStack
-----------------------------------------
Mainly a compute application: modest demands on storage and networking.

Fully HA, with no SPOFs and service continuity over software and hardware failures; must be able to offer SLAs.

Elastically scalable by adding/removing instances under the control of the NFV orchestrator.

Requirements and mapping to blueprints
--------------------------------------
Compute application:
- OpenStack already provides everything needed; in particular, there are no requirements for an accelerated data plane, nor for core pinning nor NUMA.

HA:
- implemented as a series of N+k compute pools; meeting a given SLA requires being able to limit the impact of a single host failure
- we believe there is a scheduler gap here: affinity/anti-affinity can be expressed pair-wise between VMs, but this needs a concept equivalent to group anti-affinity, i.e.
allowing the NFV orchestrator to assign each VM in a pool to one of X buckets, and requesting OpenStack to ensure no single host failure can affect more than one bucket (there are other approaches which achieve the same end, e.g. defining a group where the scheduler ensures every pair of VMs within that group are not instantiated on the same host)
- if anyone is aware of any blueprints that would address this, please insert them here.

Elastic scaling:
- similarly readily achievable using existing features - no gap.

regards

Calum

Calum Loudon
Director, Architecture
Metaswitch Networks
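The bucket scheme above can be sketched as a simple placement validator. This is illustrative only - it is the constraint the orchestrator would want the scheduler to enforce, not an existing scheduler filter - and all the names are hypothetical.

```python
from collections import defaultdict

def bucket_anti_affinity_ok(placement):
    """Validate the proposed group anti-affinity constraint.

    placement: dict mapping VM name -> (bucket, host), where the NFV
    orchestrator has assigned each VM in an N+k pool to one of X buckets.

    Returns True if every host carries VMs from at most one bucket, so
    that a single host failure can affect at most one bucket of the pool.
    """
    buckets_on_host = defaultdict(set)
    for bucket, host in placement.values():
        buckets_on_host[host].add(bucket)
    return all(len(buckets) == 1 for buckets in buckets_on_host.values())
```

Note this is strictly weaker than pair-wise anti-affinity across the whole pool: VMs in the same bucket may share a host, which is what makes it schedulable when the pool is larger than the host count.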
[openstack-dev] [NFV] Specific example NFV use case for a data plane app
Hello all,

At Wednesday's meeting I promised to supply specific examples to help illustrate the NFV use cases and also show how they map to some of the blueprints. Here's my first example - info on our session border controller, which is a data plane app. Please let me know if this is the sort of example and detail the group are looking for, then I can add it to the wiki and send out info on the second, a vIMS core.

Use case example
----------------
Perimeta Session Border Controller, Metaswitch Networks. Sits on the edge of a service provider's network and polices SIP and RTP (i.e. VoIP) control and media traffic passing over the access network between end-users and the core network, or the trunk network between the core and another SP.

Characteristics relevant to NFV/OpenStack
-----------------------------------------
Fast guaranteed performance:
- fast = performance of the order of several million VoIP packets (~64-220 bytes depending on codec) per second per core (achievable on COTS hardware)
- guaranteed via SLAs.

Fully HA, with no SPOFs and service continuity over software and hardware failures.

Elastically scalable by adding/removing instances under the control of the NFV orchestrator.

Ideally, the ability to separate traffic from different customers via VLANs.
Requirements and mapping to blueprints
--------------------------------------
Fast guaranteed performance - implications for network:
- the packets-per-second target needs either SR-IOV or an accelerated DPDK-like data plane; maps to the SR-IOV and accelerated vSwitch blueprints:
  - SR-IOV Networking Support (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov)
  - Open vSwitch to use patch ports (https://blueprints.launchpad.net/neutron/+spec/openvswitch-patch-port-use)
  - userspace vhost in ovs vif bindings (https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost)
  - Snabb NFV driver (https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver)
  - VIF_SNABB (https://blueprints.launchpad.net/nova/+spec/vif-snabb)

Fast guaranteed performance - implications for compute:
- to optimize data rate we need to keep all working data in L3 cache, so we need to be able to pin cores:
  - Virt driver pinning guest vCPUs to host pCPUs (https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning)
- similarly, to optimize data rate we need to bind to a NIC on the host CPU's bus:
  - I/O (PCIe) Based NUMA Scheduling (https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling)
- to offer guaranteed performance as opposed to 'best efforts' we need to control placement of cores, minimise TLB misses and get accurate info about core topology (threads vs. hyperthreads etc.); maps to the remaining blueprints on NUMA and vCPU topology:
  - Virt driver guest vCPU topology configuration (https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology)
  - Virt driver guest NUMA node placement topology (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement)
  - Virt driver large page allocation for guest RAM (https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages)
- may need support to prevent 'noisy neighbours' stealing L3 cache - unproven, and no blueprint we're aware of.
HA:
- requires anti-affinity rules to prevent active/passive being instantiated on the same host - already supported, so no gap.

Elastic scaling:
- similarly readily achievable using existing features - no gap.

VLAN trunking:
- maps straightforwardly to VLAN trunking networks for NFV (https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks et al).

Other:
- being able to offer apparent traffic separation (e.g. service traffic vs. application management) over a single network is also useful in some cases:
  - Support two interfaces from one VM attached to the same network (https://blueprints.launchpad.net/nova/+spec/2-if-1-net)

regards

Calum

Calum Loudon
Director, Architecture
Metaswitch Networks
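To show how the compute-side requirements above might surface to an operator, here is an illustrative sketch of a flavor definition combining the pinning, NUMA and huge-page capabilities those blueprints propose. The extra-spec keys shown are assumptions drawn from the blueprint discussions, not a settled API, and the flavor name and sizes are made up for the example.

```python
# Hypothetical flavor for a data plane VNF like Perimeta, expressed as the
# kind of extra specs the pinning/NUMA/large-page blueprints above propose.
dataplane_flavor = {
    "name": "dataplane.example",   # made-up name for illustration
    "vcpus": 8,
    "ram_mb": 16384,
    "extra_specs": {
        # pin guest vCPUs to dedicated host pCPUs (virt-driver-cpu-pinning)
        "hw:cpu_policy": "dedicated",
        # keep vCPUs, RAM and (ideally) the NIC on one NUMA node
        # (virt-driver-numa-placement, input-output-based-numa-scheduling)
        "hw:numa_nodes": "1",
        # back guest RAM with huge pages to cut TLB misses
        # (virt-driver-large-pages)
        "hw:mem_page_size": "large",
    },
}

def requires_dedicated_pcpus(flavor):
    """Would a scheduler honouring these specs need to reserve host cores?"""
    return flavor["extra_specs"].get("hw:cpu_policy") == "dedicated"
```

The point is that all three compute requirements reduce to per-flavor placement hints that the scheduler and virt driver must honour together, which is why the blueprints span both Nova's scheduler and its libvirt driver.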
Re: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron
Hi Luke,

That sounds fantastic. As an NFV application developer I'm very pleased to see this contribution, which looks to eliminate the key bottleneck hitting the performance of very high packet throughput apps on OpenStack.

A couple of questions on features and implementation:

1. If I create a VM with, say, Neutron and Open vSwitch, then I get a TAP device + Linux bridge + veth device between the VM and the vSwitch, with the Linux bridge needed for implementing anti-spoofing iptables rules/security group support. What will the stack look like with your NFV driver in place? Will your stack implement equivalent security functions, or will those functions not be available?

2. Are you planning to support live migration?

cheers

Calum

Calum Loudon
Director of Architecture
Metaswitch Networks

-----Original Message-----
From: Luke Gorrie [mailto:l...@snabb.co]
Sent: 10 January 2014 15:12
To: OpenStack Development Mailing List
Cc: snabb-de...@googlegroups.com
Subject: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron

Howdy Stackers!

We are developing a new open source Network Functions Virtualization driver for Neutron. I am writing to you now to ask for early advice that could help us to smoothly bring this work upstream into OpenStack Juno.

The background is that we are open source developers working to satisfy the NFV requirements of large service provider networks including Deutsche Telekom's TeraStream project [1] [2]. We are developing a complete NFV stack for this purpose: from the DPDK-like traffic plane all the way up to the Neutron ML2 driver. We are developing against Havana, we attended the Icehouse summit and had a lot of great discussions in Hong Kong, and our ambition is to start bringing running code upstream into Juno. Our work is 100% open source and we want to work in the open with the wider OpenStack community.
Currently we are in heads-down hacking mode on the core functionality, but it would be wonderful to connect with the upstream communities who we hope to be working with more in the future (that's you guys).

More details on GitHub: https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv

Thanks for reading!

Cheers,
-Luke

[1] Ivan Pepelnjak on TeraStream: http://blog.ipspace.net/2013/11/deutsche-telekom-terastream-designed.html
[2] Peter Löthberg's presentation on TeraStream at RIPE 67: https://ripe67.ripe.net/archives/video/3/
Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime
Hi all,

More volunteers for you - myself (Calum Loudon) and Colin Tregenza Dancer from Metaswitch (http://metaswitch.com).

We're new to OpenStack development, so a bit of context: we develop software for the telecoms space, ranging from low-level network stacks to voice applications. We see enormous interest in the idea of the software Telco, with telecoms providers now really understanding cloud and wanting to move to it; you may have heard of Network Functions Virtualisation (NFV), a big push by the telecoms industry to define how this will work, and one which implicitly assumes OpenStack as the underlying cloud platform.

NFV needs a few things OpenStack doesn't currently provide, mainly due to the extremely high reliability and bandwidth/latency requirements of Telco-grade apps compared to typical Enterprise-grade data apps, and we want to contribute code to help close those gaps. From what I learnt in Hong Kong, I think that initially means richer placement policies (e.g. more advanced (anti-)affinity rules, locating VMs close to storage or networks, globally-optimal placement), and if I'm following this list correctly then this activity is the first step towards that goal, enabling in future phases Yathi's vision of instance groups with smart resource placement [1], which closely resembles our own.

So we'd love to help in whatever way is needed - please count us in.
cheers

Calum

[1] https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit

Calum Loudon
Director of Architecture
Metaswitch Networks
P +44 (0)208 366 1177
E calum.lou...@metaswitch.com

-----Original Message-----
From: Robert Collins [mailto:robe...@robertcollins.net]
Sent: Friday, November 22, 2013 4:59 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

https://etherpad.openstack.org/p/icehouse-external-scheduler

I'm looking for 4-5 folk who have:
- modest Nova skills
- time to follow a fairly mechanical (but careful and detailed) plan to break the status quo around scheduler extraction

And of course, discussion galore about the idea :)

Cheers,
Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud